microLXC Public Test


Comments

  • @Neoon said:
    Testing the NixOS image currently; officially it's not supported by LXD, but it seems to work well.
    Should be added soon, hopefully.

    Nice, if you need a guinea pig for the NixOS image, give me a ping. It has a few quirks due to the closure builder sandboxing when running in containers.

  • Neoon OG
    edited April 8

    OS / Package availability updates

    OS

    • Ubuntu can no longer be installed on the 128MB Package due to OOM issues
    • Arch Linux and Alpine Linux are now available for KVM, including the 256MB KVM Package
    • BYOOS has been removed from the 256MB KVM Package due to OOM issues
    • Rocky Linux, CentOS, AlmaLinux and Debian can now be installed on the 384MB KVM Package

    Packages

    • New 192MB LXC Package, mainly for Ubuntu but available for other distros as well
    • The 384MB KVM Package is now also available in Norway
    Thanked by (1)Shot²
  • Neoon OG
    edited April 8

    OS availability updates
    - Added Alpine 3.19 (LXC/KVM)
    - Added NixOS (LXC)

    As before, Alpine is available from 64MB and NixOS from 128MB.

    Thanked by (1)terrorgen
  • edited April 8

    NixOS seems to be failing due to the nix-daemon's inability to remount /nix/store.

    # nix-channel --add https://nixos.org/channels/nixos-23.11 nixos
    # nix-channel --update
    error: cannot open connection to remote store 'daemon': error: writing to file: Broken pipe
    
    Apr 08 22:54:01 nixos nix-daemon[769]: accepted connection from pid 767, user root (trusted)
    Apr 08 22:54:01 nixos nix-daemon[771]: unexpected Nix daemon error: error: remounting /nix/store writable: Permission denied
    

    Don't know which LXC container backend microLXC is running (e.g., LXD, Proxmox, systemd-nspawn), but you may need to set lxc.apparmor.profile unconfined or this.
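
    In case it helps, that would look something like this (untested sketch; <name> is a placeholder):

    # plain LXC: in the container's config file
    lxc.apparmor.profile = unconfined
    # LXD equivalent, set from the host:
    # lxc config set <name> raw.lxc "lxc.apparmor.profile=unconfined"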

  • Neoon OG
    edited April 8

    @jmgcaguicla said:
    NixOS seems to be failing due to the nix-daemon's inability to remount /nix/store.

    # nix-channel --add https://nixos.org/channels/nixos-23.11 nixos
    # nix-channel --update
    error: cannot open connection to remote store 'daemon': error: writing to file: Broken pipe
    
    Apr 08 22:54:01 nixos nix-daemon[769]: accepted connection from pid 767, user root (trusted)
    Apr 08 22:54:01 nixos nix-daemon[771]: unexpected Nix daemon error: error: remounting /nix/store writable: Permission denied
    

    Don't know which LXC container backend microLXC is running (e.g., LXD, Proxmox, systemd-nspawn), but you may need to set lxc.apparmor.profile unconfined or this.

    Well, yeah, as said before the image is not officially supported by LXD.
    When I tested it with Incus, and partially with LXD, it was working fine.

    My best guess is they added AppArmor profiles for NixOS to Incus,
    which are missing in LXD, hence it's not listed as available for LXD.

    Incus LTS has been available for a few days now, so technically I could start upgrading the nodes.
    However, I'd rather wait a bit for other people to pentest it.

    Thanked by (1)jmgcaguicla
  • 26D7-D40A-F519-5E25

  • @jmgcaguicla said:
    NixOS seems to be failing due to the nix-daemon's inability to remount /nix/store.

    # nix-channel --add https://nixos.org/channels/nixos-23.11 nixos
    # nix-channel --update
    error: cannot open connection to remote store 'daemon': error: writing to file: Broken pipe
    
    Apr 08 22:54:01 nixos nix-daemon[769]: accepted connection from pid 767, user root (trusted)
    Apr 08 22:54:01 nixos nix-daemon[771]: unexpected Nix daemon error: error: remounting /nix/store writable: Permission denied
    

    Don't know which LXC container backend microLXC is running (e.g., LXD, Proxmox, systemd-nspawn), but you may need to set lxc.apparmor.profile unconfined or this.

    Well, the new LXD Images include NixOS and it seems to work fine with Nesting enabled.
    I will replace it on the Nodes so you can give it a try again.
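
    For reference, enabling nesting on the LXD side is presumably just the security.nesting flag (sketch; "c1" is a placeholder container name):

    # on the host
    lxc config set c1 security.nesting true
    lxc restart c1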

    Thanked by (2)jmgcaguicla terrorgen
  • NixOS has been enabled again; I replaced the current image with a new one for LXD.
    Testing so far was fine with nesting enabled, so don't forget to enable it under Settings.

  • @Neoon said:

    @jmgcaguicla said:
    NixOS seems to be failing due to the nix-daemon's inability to remount /nix/store.

    # nix-channel --add https://nixos.org/channels/nixos-23.11 nixos
    # nix-channel --update
    error: cannot open connection to remote store 'daemon': error: writing to file: Broken pipe
    
    Apr 08 22:54:01 nixos nix-daemon[769]: accepted connection from pid 767, user root (trusted)
    Apr 08 22:54:01 nixos nix-daemon[771]: unexpected Nix daemon error: error: remounting /nix/store writable: Permission denied
    

    Don't know which LXC container backend microLXC is running (e.g., LXD, Proxmox, systemd-nspawn), but you may need to set lxc.apparmor.profile unconfined or this.

    Well, the new LXD Images include NixOS and it seems to work fine with Nesting enabled.
    I will replace it on the Nodes so you can give it a try again.

    Noice, thanks. I'll give it a spin.

  • edited April 24

    @jmgcaguicla said:

    @Neoon said:

    Well, the new LXD Images include NixOS and it seems to work fine with Nesting enabled.
    I will replace it on the Nodes so you can give it a try again.

    Noice, thanks. I'll give it a spin.

    Now works great 🤌

    To NixOS friends minmaxing, a few tricks: nix-collect-garbage -d and nix-channel --remove nixos will free up some disk, bringing base usage down to 300M. You can then just push closures remotely, as sketched below (I doubt you'll get the builder to run and switch on 128M anyway).
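
    Roughly (the flake attribute and host are placeholders):

    $ nix-collect-garbage -d
    $ nix-channel --remove nixos
    # build on a bigger box, then push the closure and switch remotely:
    $ nixos-rebuild switch --flake .#mybox --target-host root@mybox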

    Thanked by (2)Neoon terrorgen
  • Had to reboot Pakistan and Valdivia due to the same issue that happened on JP.
    The issue only appeared recently; no clue yet, still troubleshooting why this happens.

  • Neoon OG
    edited April 25

    OS availability updates
    - Added Ubuntu Noble Numbat (LXC/KVM)

    Thanked by (3)bliss ElonBezos Erisa
  • edited April 26

    Reinstalling seems to silently undo Nesting (panel still shows Disable Nesting). Just need to toggle afterwards, no biggie.

  • @jmgcaguicla said:
    Reinstalling seems to silently undo Nesting (panel still shows Disable Nesting). Just need to toggle afterwards, no biggie.

    It recreates the container, and by default nesting is not enabled.
    Hence it's disabled; however, the panel should show it as disabled and not as enabled.

    Fixed that.

    Thanked by (1)jmgcaguicla
  • Neoon OG
    edited April 28

    This week I had a few cases of CPU abuse.

    So I wrote some code to add a simple CPU abuse detection system.
    It will notify users via email if CPU usage has been above 50% for the last 30 minutes.

    The system doesn't stop or suspend anything yet.
    However, the idea would be a strike-like system.

    If you have been notified a bunch of times, your container / virtual machine will be stopped.
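
    For the curious, the gist of the check is something like this (a sketch, not the actual code; cgroup v2 is assumed and the cgroup path is a placeholder):

    # sample the container's CPU time, wait 30 minutes, sample again
    CG=/sys/fs/cgroup/lxc.payload.c1
    t0=$(awk '/^usage_usec/ {print $2}' "$CG/cpu.stat")
    sleep 1800
    t1=$(awk '/^usage_usec/ {print $2}' "$CG/cpu.stat")
    # 100% of one core for 1800s = 1,800,000,000 usec
    pct=$(( (t1 - t0) / 18000000 ))
    [ "$pct" -gt 50 ] && echo "CPU above 50% for 30 minutes" \
        | mail -s "microLXC CPU notice" user@example.com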

  • @Neoon said:
    This week I had a few cases of CPU abuse.

    So I wrote some code to add a simple CPU abuse detection system.
    It will notify users via email if CPU usage has been above 50% for the last 30 minutes.

    The system doesn't stop or suspend anything yet.
    However, the idea would be a strike-like system.

    If you have been notified a bunch of times, your container / virtual machine will be stopped.

    Smol update.

    You will be sent 3 notifications via email before the system takes action.
    That is roughly 2 hours with more than 50% CPU load.

    The 4th time you exceed the threshold, your virtual machine / container will be stopped and you will be notified via email.

    I will post here again once automatic suspension is enabled; until then, it will just send notifications.
    If you notice any bugs, feel free to let me know.
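
    The escalation boils down to something like this (again just a sketch, not the real code; paths and helper names are made up):

    # hypothetical per-container strike bookkeeping
    f=/var/lib/microlxc/strikes/c1
    strikes=$(( $(cat "$f" 2>/dev/null || echo 0) + 1 ))
    echo "$strikes" > "$f"
    if [ "$strikes" -le 3 ]; then
        send_notification_mail          # placeholder helper
    else
        lxc stop c1 && send_stop_mail   # placeholder helper
    fi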

  • edited May 1

    Got this email, what does this mean?

    Hey,
    
    The following containers on microLXC have been renewed:
    
    List of ips ....  will expire in 60 day(s)
    
    Any questions? You can mail us at: [email protected]
    
    microLXC
    
  • @Fritz said:
    Got this email, what does this mean?

    Hey,
    
    The following containers on microLXC have been renewed:
    
    List of ips ....  will expire in 60 day(s)
    
    Any questions? You can mail us at: [email protected]
    
    microLXC
    

    You have to log in every 60 days to confirm your activity.
    This is just the confirmation email saying that the listed containers have been extended because you logged in.

    Thanked by (1)Fritz
  • Maintenance Announcement
    I have to carry out some changes on the backend; this will make the backend unavailable for roughly 1 hour or less.
    Running machines are not affected; however, no tasks or deployments can be done.
    This will be done this week, on Saturday the 11th of May, at around 20:00 GMT.

    Thanked by (2)carlin0 Wonder_Woman
  • @Neoon said:
    Maintenance Announcement
    I have to carry out some changes on the backend; this will make the backend unavailable for roughly 1 hour or less.
    Running machines are not affected; however, no tasks or deployments can be done.
    This will be done this week, on Saturday the 11th of May, at around 20:00 GMT.

    Done.

    Thanked by (1)carlin0
  • hey @Neoon

    time we donate to you

    give us a paypal email?

    yes,,, now .... use money to buy new pants :)

  • hey @Neoon - there's a Tokyo Equinix 'microJP' instance that can't be terminated.

    (trying to delete the 128MB and recreate it with 256MB, or else Debian craps its bed with some operations)

  • @ehab said:
    hey @Neoon

    time we donate to you

    give us a paypal email?

    yes,,, now .... use money to buy new pants :)

    If you want to donate, I added [email protected] to PayPal, so you can do so.
    Thanks, probably gonna buy me a pack of new pants though, for real.

    @Shot² said:
    hey @Neoon - there's a Tokyo Equinix 'microJP' instance that can't be terminated.

    (trying to delete the 128MB and recreate it with 256MB, or else Debian craps its bed with some operations)

    I rebooted it, fixed for now.
    As long as we are using LXD, the error will probably come up from time to time.

    I have to migrate all nodes to Incus; however, I don't wanna rush it.
    I'll probably do a test migration first and then start migrating a bunch.

    Thanked by (3)ehab Shot² Wonder_Woman
  • Neoon OG
    edited May 12

    Maintenance Announcement

    microLXC still has 3 nodes running on an older version of Ubuntu.
    I will upgrade these 3 nodes next week; the kernel version will be bumped to 5.15 afterwards.

    Affected Locations
    - Sandefjord
    - Melbourne
    - Tokyo

    The system will only be rebooted once, so downtime should be minimal; however, expect 30-60 minutes.
    This will be done next week, on Saturday the 18th of May, at around 20:00 GMT.

    Thanked by (1)carlin0
  • edited May 12

    e41b-a63b-50e6-f44f

    edit: wow, what an easy process; decided to jump on and play around, and it was easy as to get a system deployed :)

  • Neoon OG
    edited May 13

    Maintenance Announcement

    I mentioned a while ago that once Incus becomes LTS, microLXC will slowly migrate away from LXD.
    Incus is basically a fork of LXD, created a few months after Canonical took over LXD and changed the license.

    Technically LXD still has support until 2029, as does Incus; however, due to the constant issues with LXD and snap, I chose to migrate to Incus.
    So far my testing has gone without any issues, hence I want to start migrating the first nodes.

    The first batch of nodes to be migrated this weekend are Johannesburg and Valdivia.
    This will be done Sunday, 19th of May, at around 20:00 GMT.

    Downtime should be minimal, expected to be 10 minutes or less, since no reboot is needed.
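
    For context, Incus ships a lxd-to-incus tool that migrates the daemon and all instances in place, which is presumably why no reboot is needed (sketch):

    # on each node, as root, with Incus already installed:
    lxd-to-incus
    # afterwards the instances show up under the incus CLI:
    incus list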

  • @Neoon said:
    Maintenance Announcement

    microLXC still has 3 nodes running on an older version of Ubuntu.
    I will upgrade these 3 nodes next week; the kernel version will be bumped to 5.15 afterwards.

    Affected Locations
    - Sandefjord
    - Melbourne
    - Tokyo

    The system will only be rebooted once, so downtime should be minimal; however, expect 30-60 minutes.
    This will be done next week, on Saturday the 18th of May, at around 20:00 GMT.

    Done.

  • @Neoon said:

    @Neoon said:
    Maintenance Announcement

    microLXC still has 3 nodes running on an older version of Ubuntu.
    I will upgrade these 3 nodes next week; the kernel version will be bumped to 5.15 afterwards.

    Affected Locations
    - Sandefjord
    - Melbourne
    - Tokyo

    The system will only be rebooted once, so downtime should be minimal; however, expect 30-60 minutes.
    This will be done next week, on Saturday the 18th of May, at around 20:00 GMT.

    Done.

    Could you check if the IPv6 setup in Melbourne got messed up somehow? Outgoing IPv6 works fine but incoming connections don't get through:

    19. 2406:d501:f:99::     0.0%    17   387.4  337.5  319.3  402.8   25.1
    20. 2406:d501:f:106::3   0.0%    17   358.7  335.1  319.5  375.8   18.8
    21. 2402:7340:3::1       5.9%    17   377.5  332.8  319.1  389.6   22.8
    22. (no route to host)
    


  • @Brueggus said:

    @Neoon said:

    @Neoon said:
    Maintenance Announcement

    microLXC still has 3 nodes running on an older version of Ubuntu.
    I will upgrade these 3 nodes next week; the kernel version will be bumped to 5.15 afterwards.

    Affected Locations
    - Sandefjord
    - Melbourne
    - Tokyo

    The system will only be rebooted once, so downtime should be minimal; however, expect 30-60 minutes.
    This will be done next week, on Saturday the 18th of May, at around 20:00 GMT.

    Done.

    Could you check if the IPv6 setup in Melbourne got messed up somehow? Outgoing IPv6 works fine but incoming connections don't get through:

    19. 2406:d501:f:99::     0.0%    17   387.4  337.5  319.3  402.8   25.1
    20. 2406:d501:f:106::3   0.0%    17   358.7  335.1  319.5  375.8   18.8
    21. 2402:7340:3::1       5.9%    17   377.5  332.8  319.1  389.6   22.8
    22. (no route to host)
    

    IPv6 forwarding got disabled for some reason.
    Does it work now?
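
    (For reference, re-enabling is just the usual sysctl on the host, persisted so it survives reboots; the file name below is arbitrary:)

    sysctl -w net.ipv6.conf.all.forwarding=1
    echo 'net.ipv6.conf.all.forwarding = 1' > /etc/sysctl.d/99-ipv6-forward.conf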

    Thanked by (1)Brueggus