microLXC Public Test


Comments

  • @Ganonk said:

    @Neoon said:

    The 512MB Package remains the same.

    Where are the locations, Sir?

    Norway

  • Hello,

    got a problem with micro server in belgium: 10.0.11.66

    I try to destroy it but it always says "The network is currently in use". Can you please terminate it? :)

  • @Multi_ said:
    Hello,

    got a problem with micro server in belgium: 10.0.11.66

    I try to destroy it but it always says "The network is currently in use". Can you please terminate it? :)

    Done, I will look into the issue.

    Thanked by (1)Multi_
  • bdl OG
    edited May 2021

    To change location, can we do a "reinstall" via portal or have to request via a new token? :)

  • @bdl said:
    To change location, can we do a "reinstall" via portal or have to request via a new token? :)

    Just destroy your instance and order a new one.

    Thanked by (2)bdl Neoon
  • d0a7-6110-9007-e855

  • bdl OG
    edited May 2021

    Just wanted to thank @Neoon (again) for a great service. I just moved my service from .jp to nz and it's running smooooth af. B)

  • Neoon OG
    edited May 2021

    @bdl said:
    Just wanted to thank @Neoon (again) for a great service. I just moved my service from .jp to nz and it's running smooooth af. B)

    Thank you, thank @Zappie too

    Thanked by (2)bdl Zappie
  • IPv6 issue in JP has been fixed.

  • May I be allowed to have LXC, Sir..? ;)

    ed83-79b9-add0-8bf5

  • Not_Oles Hosting Provider, Content Writer

    Hello @Neoon!

    from: https://microlxc.net/index.php?p=dash :
    If your quota has been increased, you can deploy a second container here.

    Requesting quota increase, please. Use case: serve a very small, almost no traffic website. Thank you! :)

    I hope everyone gets the servers they want!

  • @Not_Oles said:
    Hello @Neoon!

    from: https://microlxc.net/index.php?p=dash :
    If your quota has been increased, you can deploy a second container here.

    Requesting quota increase, please. Use case: serve a very small, almost no traffic website. Thank you! :)

    Yes, if you want to increase it to 2, you just add microlxc to your sig and lemme know when it's done.

  • @Neoon said:

    @Not_Oles said:
    Hello @Neoon!

    from: https://microlxc.net/index.php?p=dash :
    If your quota has been increased, you can deploy a second container here.

    Requesting quota increase, please. Use case: serve a very small, almost no traffic website. Thank you! :)

    Yes, if you want to increase it to 2, you just add microlxc to your sig and lemme know when it's done.

    I don't need a second VPS, but I added MicroLXC to my signature as a "Thank you" for the awesome service.

    Thanked by (3)Not_Oles Ympker Neoon


  • Not_Oles Hosting Provider, Content Writer

    @Neoon said: Yes, if you want to increase it to 2, you just add microlxc to your sig and lemme know when it's done.

    @Neoon Hey! Hope you like my updated sig. Thanks for the wonderful, painless, fast, free hosting with a beautiful web interface!

    @vyas Always lovely to see your comments! Might MicroLXC be even faster if MicroLXC ran at MetalVPS? :)

    Best wishes from Mexico! 🏜️

    Thanked by (1)vyas

    I hope everyone gets the servers they want!

  • Neoon OG
    edited September 2021

    Norway has been rebooted to address the mounting issues.

    If your container failed to start due to the recent LXD update, you need to start it manually via the control panel.
    A patch will be applied once it has been started.

    Thanked by (1)Brueggus
  • 497b-7a94-6810-e085

  • Patch Notes:

    • removed cooldown on deployments
    • removed Fedora, since the package manager needs 256MB+ memory to run
    • changed on successful deploy you will now be redirected to the dashboard instead
    • added Debian 11, Rocky Linux 8
    • fixed LET account issues
  • Neoon OG
    edited October 2021

    Regarding the mount issues, I got a plausible workaround.
    Most of the servers are already using the LTS branch for stability reasons.

    Which means it's bug-fix only, so it rarely gets any updates; however, these updates are applied automatically.
    There seems to be an issue when they are applied: there is a small chance that, under specific circumstances, they can cause those mounting issues.

    It does not affect any running containers nor does it seem to affect any data.
    However, if this bug appears, you likely see it when you want to delete the container or reinstall it.

    So you can't do it, because LXD is unable to unmount the container.
    According to the developers, it's possible to unmount the container by hand; however, this method does not work reliably.

    The only known working fix would be rebooting the system.
    I don't expect a fix soon, since the developers can't even reproduce the bug; they can only suggest where the issue may be.

    That means we will need to reboot the systems now and then to keep LXC/LXD up to date.
    I will announce these reboots a few days ahead; containers will be started automatically, so as long as your application is set to auto-start, this should not be a problem.

    I expect a reboot every few months; the downtime will not be more than a few minutes.
    The kernel will still be live-patched as usual.

    Currently I know the following nodes are affected and will be rebooted Tuesday 05.10.21 23:00 CET:
    Dronten
    Antwerp

    Other nodes are not affected, as of now.
    The bug should disappear, once we do the updates manually.

    However, due to the recent breaking LXD update in the LTS branch, you need to boot your container by hand after this maintenance. Auto-start of some containers is not possible, but by starting them manually, a fix will be applied; you only need to do this once.

    If you created the container recently, you are not affected by this.

    Thanked by (3)ehab simonindia bdl
  • Patch Notes:

    • removed Debian 9
    • removed Post4VPS Forum (new accounts)
    • added Almalinux 8.4

    • added Support for static IPv6 configuration (CentOS/Almalinux/Rockylinux)
      If static IPv6 configuration is needed, it will be configured automatically

    • changed HAProxy entries will now be checked to verify that they resolve and point to the Node

    • changed 6 months account requirement to 3 months, posts and thanks will remain the same
    • fixed Mailserver issues

    If abuse remains at the same level, we will keep the 3 months; we will see.
    Also, starting next week, the inactivity system will begin stopping containers that exceed the 60 days.

    There is 1 week of additional grace period before the system stops these containers.
    Afterwards, we will patch the system to delete containers that have been stopped for 1 week after exceeding the 60 days of inactivity.

    You can add your email at any time to get notifications; 30, 14, 7 and 1 day(s) alerts will be sent before the system stops your container.

    SSH Login is enough, to mark the container as active.
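    The HAProxy resolution check mentioned above could be sketched like this (an assumed implementation, not the actual MicroLXC code; `points_to_node` and all IPs are made up for illustration):

```shell
#!/bin/sh
# Sketch: accept a HAProxy entry only if one of the domain's A records
# equals the node's public IP. Function name and IPs are illustrative.
points_to_node() {
    # $1: newline-separated A records of the domain, $2: node IP
    printf '%s\n' "$1" | grep -qx "$2"
}

# In practice the records would come from the resolver, e.g.:
#   records=$(getent ahostsv4 "$domain" | awk '{print $1}' | sort -u)
records='198.51.100.5
203.0.113.10'

if points_to_node "$records" "203.0.113.10"; then
    echo "accepted"    # one of the A records matches the node
else
    echo "rejected"
fi
```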

    Thanked by (3)Not_Oles nyamenk Ganonk
  • @Neoon said:
    SSH Login is enough, to mark the container as active.

    Can you detect SSH login over IPv6, or does it have to go through the NAT port?
    What if the IPv6 SSH port is changed?

    Is it against the rules if someone runs SSH login in a crontab?

    Thanked by (1)Not_Oles
  • @yoursunny said:

    @Neoon said:
    SSH Login is enough, to mark the container as active.

    Can you detect SSH login over IPv6, or does it have to go through the NAT port?
    What if the IPv6 SSH port is changed?

    Is it against the rules if someone runs SSH login in a crontab?

    It's not network-based; it doesn't matter which user you use to log in, or whether it's via NAT or IPv6.
    The cronjob runs daily; it runs "last" in each container, parses the output, and converts the last login to a unix timestamp.

    I am aware that this can be manipulated or automated.
    But the idea was to keep it easy.

    If this feature gets abused, it would be an option to disable it.
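    For illustration, parsing "last" output into a unix timestamp could look roughly like this (a sketch under assumptions, not the actual MicroLXC cronjob; the function name and sample line are made up; util-linux `last --time-format iso` output and GNU `date` are assumed):

```shell
#!/bin/sh
# Sketch: convert the newest `last` login entry of a container into a
# unix timestamp. Assumes util-linux `last --time-format iso` output
# and GNU date; not the actual MicroLXC script.
last_login_epoch() {
    # $1: first line of `last -1 --time-format iso` run inside the container
    ts=$(printf '%s\n' "$1" | awk '{print $4}')   # ISO login time is field 4
    [ -n "$ts" ] || { echo 0; return; }           # no logins recorded
    date -u -d "$ts" +%s
}

# Example line; in production it would come from something like:
#   lxc exec "$container" -- last -1 --time-format iso | head -n 1
sample='root  pts/0  203.0.113.7  2021-10-01T12:00:00+00:00 - 2021-10-01T12:30:00+00:00  (00:30)'
last_login_epoch "$sample"   # prints 1633089600
```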

    Thanked by (1)Not_Oles
  • @Neoon said:

    @yoursunny said:

    @Neoon said:
    SSH Login is enough, to mark the container as active.

    Can you detect SSH login over IPv6, or does it have to go through the NAT port?
    What if the IPv6 SSH port is changed?

    It's not network-based; it doesn't matter which user you use to log in, or whether it's via NAT or IPv6.
    The cronjob runs daily; it runs "last" in each container, parses the output, and converts the last login to a unix timestamp.

    What if the last command is deleted or overwritten for some reason?

    Thanked by (1)Not_Oles
  • @yoursunny said:

    @Neoon said:

    @yoursunny said:

    @Neoon said:
    SSH Login is enough, to mark the container as active.

    Can you detect SSH login over IPv6, or does it have to go through the NAT port?
    What if the IPv6 SSH port is changed?

    It's not network-based; it doesn't matter which user you use to log in, or whether it's via NAT or IPv6.
    The cronjob runs daily; it runs "last" in each container, parses the output, and converts the last login to a unix timestamp.

    What if the last command is deleted or overwritten for some reason?

    Well, I could checksum that file, but even that ain't 100% bulletproof.
    There is practically no way to determine this with 100% certainty in an untrusted environment.

    Even if it were network-based, you could cronjob it away.

    Thanked by (1)Not_Oles
  • Not_Oles Hosting Provider, Content Writer

    @Neoon said:
    Patch Notes:

    • removed Debian 9
    • removed Post4VPS Forum (new accounts)
    • added Almalinux 8.4

    • added Support for static IPv6 configuration (CentOS/Almalinux/Rockylinux)
      If static IPv6 configuration is needed, it will be configured automatically

    • changed HAProxy entries will now be checked to verify that they resolve and point to the Node

    • changed 6 months account requirement to 3 months, posts and thanks will remain the same
    • fixed Mailserver issues

    If abuse remains at the same level, we will keep the 3 months; we will see.
    Also, starting next week, the inactivity system will begin stopping containers that exceed the 60 days.

    There is 1 week of additional grace period before the system stops these containers.
    Afterwards, we will patch the system to delete containers that have been stopped for 1 week after exceeding the 60 days of inactivity.

    You can add your email at any time to get notifications; 30, 14, 7 and 1 day(s) alerts will be sent before the system stops your container.

    SSH Login is enough, to mark the container as active.

    Thanks for all this fine work! And the free plans! My MicroLXC works great! Best wishes from Mexico! 🗽🇺🇸🇲🇽🏜️

    Thanked by (1)Neoon

    I hope everyone gets the servers they want!

  • As mentioned, the activity check has been enabled, with a delay of 7 days for every inactive container.
    The unit test gave it a go, it purrs like a cat now.

  • Patch Notes:

    • added You will get an email once your container has been stopped due to inactivity.
    • added You will get an email once your container has been terminated due to inactivity.
    • added Termination after 67 days of inactivity
      Your container will get stopped after 60 days of inactivity, plus a 7-day grace period during which you can log in and start the container again if you wish to continue using it. After those 7 days (67 days total), your container will be terminated by the system.
      So even if you don't subscribe to the notifications and forget about it, you should still take notice if you have working monitoring.

    • removed SSH activity check
      Please log in to the Portal instead; once logged in, your account will be marked as active.
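    The timeline above can be sketched with simple date arithmetic (the date is a made-up example; GNU `date` assumed):

```shell
#!/bin/sh
# Sketch of the inactivity timeline: stopped after 60 days of inactivity,
# terminated 7 days later (67 days total). Example date, GNU date assumed.
last_active="2021-08-01"
stop_date=$(date -u -d "$last_active + 60 days" +%F)       # container stopped
terminate_date=$(date -u -d "$last_active + 67 days" +%F)  # container deleted
echo "stopped: $stop_date, terminated: $terminate_date"
# prints: stopped: 2021-09-30, terminated: 2021-10-07
```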

  • Neoon OG
    edited October 2021

    Maintenance Announcement
    Melbourne and Los Angeles will be rebooted Sunday Evening 23:00 CEST

    Thanked by (3)bdl Not_Oles Ganonk
  • 2c06-3e3b-04b9-5442

  • Not_Oles Hosting Provider, Content Writer

    Hello @Neoon! Thanks for the lutefisk! Just landed, but she seems to work great! Best wishes from Mexico! 🎃

    I hope everyone gets the servers they want!
