@Neoon said:
Reminder: SG (Nexusbytes) has been out of stock for months, but we still have a bunch of containers running.
This location could go offline this week, according to sources.
Ooooh, I have one. Do I need to do anything, or just wait and see what happens to it? I can happily terminate my container if that'd be useful as I only use it occasionally for checking connectivity to elsewhere.
You can watch it burn if you want.
Remember when OVH was on fire and there was a screenshot of one of the servers' thermal monitoring? I'll be like that.
I dislike the current state of microlxc.
The ongoing issues with LXD and the ongoing network issues in some locations don't really reach the quality level I wanted to provide.
Plus, I can only code for a few hours per day and I have a lot of projects to work on.
Hence I made the decision to close registration for new accounts, for now.
No, I don't intend to kill the project; rather, I'm purging LXD and replacing it internally with Proxmox, which is more reliable in the long run.
Yea, I know Bookworm is going to ship LXD without snap, but I don't know whether that would make LXD feasible again.
Likely in 4-6 weeks I'll have 1-2 nodes available to run a test of Debian 12 / Bookworm with LXD.
This test will be public as before, but time-limited: approximately 3 months, up to 6 months, and if it's a success, indefinite.
At that point I would switch the rest of the systems to either Debian 12 or Proxmox.
@Neoon said:
No, I don't intend to kill the project; rather, I'm purging LXD and replacing it internally with Proxmox, which is more reliable in the long run.
Is the code still open? Nothing related appears on GitHub anymore.
It was never open source.
@yoursunny said:
Mentally strong people use plain LXC.
lxc-create
lxc-unpriv-start
lxc-unpriv-attach
LXC has fewer layers.
It allows more precise control and customization.
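For anyone curious, a minimal unprivileged-container session with those plain LXC tools might look like this; the container name and image are arbitrary choices, and exact flags can vary by LXC version:

```shell
# Create an unprivileged container from the public image server
lxc-create -n demo -t download -- -d debian -r bookworm -a amd64

# Start it and attach a shell, without needing root on the host
lxc-unpriv-start -n demo
lxc-unpriv-attach -n demo -- /bin/sh
```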
I don't think LXD itself is to blame; rather, I think the additional shit shipped with snap packages is causing the issues.
The shit I have seen with Ubuntu 20.04 and snap: applications that never had issues before are all of a sudden slower, randomly crashing, or becoming unresponsive.
Chromium on Ubuntu is a crap experience.
You're losing like 20%-30% on performance alone.
It was never open source.
Perhaps it is NanoKVM I was thinking of? I remember a NanoKVM-Tools repo.
Yea, I removed the NanoKVM-Tools repo since it was outdated and kinda janky.
If there is interest, I might upload the better code and network setup.
@Neoon I saw on your signup page that you ask for a "public key" instead of a "password" for the VPS setup. This is a very good thing to do and I hope more and more providers start doing this. Using a password for a server, especially if you enable root SSH access, is VERY insecure.
I decided to spin up a microlxc instance in Auckland last night to run a few mtrs to my other servers, but it looks like something went slightly wrong with the IPv6 setup. It's the Debian template, if that's relevant.
In the control panel the address is ... ::2, but the container actually has a randomised address in the same /64.
There doesn't seem to be an /etc/network/interfaces, so I'm not sure this gets set up in LXC. Is this configured host-side, or can I just create the interfaces file like I would on a KVM instance?
@ralf said:
In the control panel the address is ... ::2, but the container actually has a randomised address in the same /64.
If your VM or container gets a /64 IPv6, the initial IPv6 is assigned by dnsmasq based on the MAC address.
It doesn't update in the panel yet.
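If dnsmasq is handing out the standard SLAAC-style address here (an assumption on my part; the "randomly by MAC" scheme could differ), the MAC-derived EUI-64 address inside a /64 can be computed like this:

```python
import ipaddress

def eui64_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
    """Derive the SLAAC (EUI-64) IPv6 address a container would
    autoconfigure from its MAC address inside the given /64 prefix."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                               # flip the universal/local bit
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]    # insert ff:fe in the middle
    suffix = int.from_bytes(bytes(eui), "big")
    return ipaddress.IPv6Network(prefix)[suffix]    # prefix + interface identifier

print(eui64_address("2001:db8:1234::/64", "00:16:3e:aa:bb:cc"))
# → 2001:db8:1234:0:216:3eff:feaa:bbcc
```

The prefix and MAC above are placeholders (documentation prefix, LXC's default MAC vendor bytes), not the actual Auckland values.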
I pushed a patch; the current IPv6 should now be reflected in the panel.
However, only for new deployments or if you reinstall your VM, since the database still has the old ::2 on record.
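For reference, if you would rather pin the panel's ::2 address statically inside the container (as ralf suggested), a minimal /etc/network/interfaces stanza could look like this; the prefix and gateway are placeholders, not the real MicroLXC values:

```
auto eth0
iface eth0 inet6 static
    address 2001:db8:1234::2/64
    gateway fe80::1
```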
@Abdullah dropped a fat SG machine, which gives me the opportunity to use a different storage backend.
Hence I'll likely open registration again; since the other stuff is delayed, this can be tested in the meantime.
SG is not ready yet; IPv6 is not working as expected. Hopefully it will be fixed this week.
Comments
Purged.
Free NAT KVM | Free NAT LXC | Bobr
ITS WEDNESDAY MY DUDES
Thank you 🇯🇵
Hi @Neoon, the Auckland LXC is having a problem, I guess.
I can't SSH into it: connection refused.
I'll try to terminate and relaunch.
https://microlxc.net/
The machine was rebooted by the ISP; there was no maintenance announcement or anything.
Besides that, I don't see anything wrong.
The monitoring didn't pick up the reboot, so the iptables forwarding rules were not applied.
Should be fixed.
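For context, the kind of host-side NAT forwarding rule that has to be re-applied after a reboot might look like this; the container address and ports are illustrative placeholders, not the actual MicroLXC setup:

```shell
# Forward TCP port 2222 on the host to SSH inside the container
# (10.0.3.2 is a placeholder from the default LXC bridge range)
iptables -t nat -A PREROUTING -p tcp --dport 2222 -j DNAT --to-destination 10.0.3.2:22
iptables -A FORWARD -d 10.0.3.2 -p tcp --dport 22 -j ACCEPT
```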
Everything is working now!
8e93-6011-cf98-0168
Thank you.
MicroLXC is lovable. Uptime of C1V
653b-fb70-08ed-3fce
Thanks!
The Ultimate Speedtest Script | Get Instant Alerts on new LES/LET deals | Cheap VPS Deals
FREE KVM VPS - FreeVPS.org | FREE LXC VPS - MicroLXC
IPv6 died in NL, working on it.
Fixed.
make the interface file
"How miserable life is in the abuses of power..."
F. Battiato ---
New ETA for SG is Friday.
NL was down; it's partially back up, probably a router issue.
Someone is looking into it right now.
No update yet; still waiting for the subnet to get routed.