Comments
@Neoon said:
Testing the NixOS image currently; officially it's not supported by LXD, but it seems to work well. Should be added soon, hopefully.
Nice, if you need a guinea pig for the NixOS image, give me a ping. It has a few quirks due to the closure builder sandboxing when running in containers.
OS / Package availability updates
OS availability updates
- Added Alpine 3.19 (LXC/KVM)
- Added NixOS (LXC)
As before, Alpine is available from 64MB and NixOS from 128MB.
NixOS seems to be failing due to the nix-daemon's inability to remount /nix/store. Don't know what LXC container backend microLXC is running (e.g., LXD, Proxmox, systemd-nspawn), but you may need to set lxc.apparmor.profile = unconfined or this.
Well, yeah, the image, as said before, is not officially supported by LXD.
When I tested it with Incus, and partially with LXD, it was working fine.
My best guess is they added AppArmor profiles for NixOS to Incus, which are missing from LXD; hence it was said to be unavailable for LXD.
Incus LTS has been available for a few days now, so technically I can start upgrading the nodes.
However, I'd rather wait a bit for other people to pentest it.
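For anyone who wants to poke at the AppArmor angle on their own LXD host, the workaround would look roughly like this; "mynixos" is a placeholder container name, and unconfined obviously trades away isolation:

    # Assumes an LXD backend; "mynixos" is a placeholder container name.
    # Option 1: enable nesting, which is what made the image behave here
    lxc config set mynixos security.nesting true
    # Option 2 (heavier hammer): drop the AppArmor confinement entirely
    lxc config set mynixos raw.lxc 'lxc.apparmor.profile=unconfined'
    lxc restart mynixos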
26D7-D40A-F519-5E25
Well, the new LXD images include NixOS, and it seems to work fine with nesting enabled.
I will replace it on the nodes so you can give it a try again.
NixOS has been enabled again; I replaced the current image with a new one for LXD.
Testing so far has been fine with nesting enabled; don't forget to enable it under Settings.
Noice, thanks. I'll give it a spin.
Now works great 🤌
To NixOS friends minmaxing, a few tricks: nix-collect-garbage -d and nix-channel --remove nixos will free up some disk, bringing base usage down to about 300M. You can then just push closures remotely (I doubt you'll be able to get the builder to run and switch on 128M anyway); a sketch follows after the next post.
Had to reboot Pakistan and Valdivia, due to the same issue that happened on JP.
The issues only appeared recently; no clue yet, still troubleshooting why this happens.
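Regarding the closure-push tip above, a minimal sketch of what that can look like; root@container and the flake attribute are placeholders, and it assumes SSH access to the instance plus Nix on your local machine:

    # Build the system closure locally, then push and activate it remotely.
    nixos-rebuild switch --flake .#mybox --target-host root@container
    # Or copy a pre-built closure by hand and switch to it:
    nix-copy-closure --to root@container "$(readlink -f ./result)"
    ssh root@container "$(readlink -f ./result)/bin/switch-to-configuration switch"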
OS availability updates
- Added Ubuntu Noble Numbat (LXC/KVM)
Reinstalling seems to silently undo Nesting (panel still shows Disable Nesting). Just need to toggle afterwards, no biggie.
It's recreating the container, and by default nesting is not enabled.
Hence it's disabled; however, it should show as disabled and not enabled.
Fixed that.
This week I had a few cases of CPU abuse.
So I wrote some code to add a simple CPU abuse detection system.
This will notify users via email if the CPU usage is higher than 50% for the last 30 minutes.
The system doesn't stop or suspend anything yet.
However, the idea would be a strike-like system.
If you have been notified a bunch of times, your container / virtual machine will be stopped.
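Since the detection code isn't public, this is only a guess at the mechanics, but a host-side check could be as simple as sampling the container's cumulative CPU counter from its cgroup. The path assumes cgroup v2 with LXD's lxc.payload.* naming, and the container name is a placeholder:

    #!/bin/sh
    # Hypothetical sketch, not microLXC's actual code.
    CT="mycontainer"
    STAT="/sys/fs/cgroup/lxc.payload.$CT/cpu.stat"
    CORES=1   # adjust to the instance's core count

    before=$(awk '/^usage_usec/ {print $2}' "$STAT")
    sleep 1800   # 30-minute window, matching the announced threshold
    after=$(awk '/^usage_usec/ {print $2}' "$STAT")

    # usage_usec is cumulative CPU time in microseconds; 100% of one core
    # over 1800 s is 1,800,000,000 usec, so percent = delta / 18,000,000.
    pct=$(( (after - before) / (18000000 * CORES) ))
    [ "$pct" -gt 50 ] && echo "$CT exceeded 50% CPU over the last 30 minutes"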
Smol update.
You will be sent 3 notifications via email before the system takes action.
Roughly 2 hours with more than 50% CPU load.
The 4th time you exceed the threshold, your virtual machine / container will be stopped and you will be notified via email.
I will post here again once the automatic suspension is enabled, until then, it will just send notifications.
If you notice any bugs, feel free to let me know.
Got this email, what does this mean?
Hey,
The following containers on microLXC have been renewed:
List of ips .... will expire in 60 day(s)
Any questions? You can mail us at: [email protected]
microLXC
You have to log in every 60 days to confirm your activity.
This is just the confirmation email that the listed containers have been extended because you logged in.
@Neoon got it.
Maintenance Announcement
I have to carry out some changes on the backend; this will make the backend unavailable for roughly 1 hour or less.
Running machines are not affected; however, no tasks or deployments can be done.
Will be done this week, Saturday, 11th of May, at around 20:00 GMT.
Done.
hey @Neoon
time we donate to you
give us a paypal email?
yes,,, now .... use money to buy new pants
hey @Neoon - there's a Tokyo Equinix 'microJP' instance that can't be terminated.
(trying to delete the 128MB and recreate it with 256MB, or else Debian craps its bed with some operations)
If you want to donate, I added [email protected] to Paypal, so you can do so.
Thanks, probably gonna buy me a pack of new pants though, for real.
I rebooted it, fixed for now.
As long as we are using LXD, the error will probably come up from time to time.
I have to migrate all nodes to Incus; however, I don't wanna rush it.
Probably gonna do a test migration first, then start migrating a bunch.
Maintenance Announcement
microLXC still has 3 nodes running on an older version of Ubuntu.
I will upgrade these 3 nodes next week; the kernel version will be bumped to 5.15 afterwards.
Affected Locations
- Sandefjord
- Melbourne
- Tokyo
The system will only be rebooted once, so downtime should be minimal; however, expect 30-60 minutes.
Will be done next week, Saturday, 18th of May, at around 20:00 GMT.
e41b-a63b-50e6-f44f
edit: wow, what an easy process. Decided to jump on and play around, and it was easy as to get a system deployed.
Maintenance Announcement
I mentioned a while ago that once Incus becomes LTS, microLXC will slowly migrate away from LXD.
Incus is basically a fork of LXD, created a few months after Canonical took over LXD and changed the license.
Technically LXD still has support until 2029, and so does Incus; however, due to the consistent issues with LXD and snap, I chose to migrate to Incus.
So far my testing has gone without any issues, hence I want to start migrating the first nodes.
The first batch of nodes to be migrated this weekend are Johannesburg and Valdivia.
This will be done Sunday, 19th of May, at around 20:00 GMT.
Downtime should be minimal, expected to be 10 minutes or less, since no reboot is needed.
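For the curious, the standard path for this is the lxd-to-incus migration tool that ships with Incus, which moves the database, storage and instances across in place; a rough sketch, with package names varying by distro, so treat it as illustrative:

    # Install Incus alongside the existing LXD, then migrate in place.
    apt install incus
    lxd-to-incus      # converts the LXD database, storage and instances
    incus list        # verify everything came across before removing LXD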
Done.
Could you check if the IPv6 setup in Melbourne got messed up somehow? Outgoing IPv6 works fine but incoming connections don't get through:
IPv6 forwarding got disabled for some reason.
Does it work now?
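For reference, the host-side knob that was presumably flipped back looks like this (a sketch; per-interface variants exist, and the sysctl.d file name is illustrative):

    # Re-enable IPv6 forwarding now and persist it across reboots.
    sysctl -w net.ipv6.conf.all.forwarding=1
    echo 'net.ipv6.conf.all.forwarding = 1' > /etc/sysctl.d/99-ipv6-forward.conf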