Comments
Free NAT KVM | Free NAT LXC | Bobr
ITS WEDNESDAY MY DUDES
Hello,
I've got a problem with a micro server in Belgium: 10.0.11.66
I tried to destroy it, but it always says "The network is currently in use". Can you please terminate it?
Done, I will look into the issue.
To change location, can we do a "reinstall" via the portal, or do we have to request a new token?
Just destroy your instance and order a new one.
d0a7-6110-9007-e855
Just wanted to thank @Neoon (again) for a great service. I just moved my service from .jp to nz and it's running smooooth af.
Thank you, thanks @Zappie too
IPv6 issue in JP has been fixed.
May I be allowed to have an LXC, sir?
ed83-79b9-add0-8bf5
Hello @Neoon!
Requesting quota increase, please. Use case: serve a very small, almost no traffic website. Thank you!
I hope everyone gets the servers they want!
Yes, if you want to increase it to 2, you just add microlxc to your sig and let me know when it's done.
I don't need a second VPS, but I added MicroLXC to my signature as a "Thank you" for the awesome service.
———-
blog | exploring visually |
@Neoon Hey! Hope you like my updated sig. Thanks for the wonderful, painless, fast, free hosting with a beautiful web interface!
@vyas Always lovely to see your comments! Might MicroLXC be even faster if it ran at MetalVPS?
Best wishes from Mexico! 🏜️
Norway has been rebooted to address the mounting issues.
If your container failed to start due to the recent LXD update, you need to start it manually via the control panel.
A patch will be applied once the container has been started.
497b-7a94-6810-e085
Patch Notes:
Regarding the mount issues, I have a plausible workaround.
Most of the servers are already using the LTS branch for stability reasons.
That means it is bug-fix only, so it rarely gets any updates; however, these updates are applied automatically.
There seems to be an issue when they are applied: under specific circumstances, there is a small chance they cause these mounting issues.
It does not affect any running containers, nor does it seem to affect any data.
However, if this bug appears, you will likely notice it when you want to delete or reinstall the container.
You can't do it, because LXD is unable to unmount the container.
According to the developers, it's possible to unmount the container by hand, but this method does not work reliably.
The only known working fix is rebooting the system.
I don't expect a fix soon, since the developers can't even reproduce the bug, though they have suggested where the issue may be.
That means we will need to reboot the systems now and then to keep LXC/LXD up to date.
I will announce these reboots a few days ahead. Containers will be started automatically, so as long as your application is set to auto-start, it should not be a problem.
I expect a reboot every few months; the downtime will be no more than a few minutes.
The kernel will still be live-patched as usual.
Currently, I know the following nodes are affected; they will be rebooted Tuesday, 05.10.21, at 23:00 CET:
Dronten
Antwerp
Other nodes are not affected as of now.
The bug should disappear once we apply the updates manually.
However, due to the recent breaking LXD update in the LTS branch, you need to boot your container by hand after this maintenance. Auto-start is not possible for some containers, but starting them manually applies a fix; you only need to do this once.
If you created your container recently, you are not affected by this.
Patch Notes:
added Almalinux 8.4
added Support for static IPv6 configuration (CentOS/Almalinux/Rockylinux)
If static IPv6 configuration is needed, it will be configured automatically
changed HAProxy entries will now be checked to make sure they resolve and point to the node
changed 6 months account requirement to 3 months, posts and thanks requirements will remain the same
fixed Mailserver issues
If the abuse stays at the same level, we will keep the 3 months; we'll see.
Also, the inactivity system will start stopping containers that exceed the 60 days of inactivity in the next week.
There is 1 week of additional grace period before the system stops these containers.
Afterwards, we will patch the system to delete containers that have been stopped for 1 week after exceeding the 60 days of inactivity.
You can add your email at any time to get notifications; alerts will be sent 30, 14, 7 and 1 day(s) before the system stops your container.
An SSH login is enough to mark the container as active.
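One of the patch notes above says HAProxy entries are now checked to make sure they resolve and point to the node. A minimal sketch of such a check, assuming it is done with a plain DNS lookup (the function name and the node-IP set are my assumptions, not the actual implementation):

```python
import socket

def resolves_to_node(hostname, node_ips):
    """Return True if `hostname` resolves and at least one of its
    A/AAAA records points at one of the node's addresses."""
    try:
        infos = socket.getaddrinfo(hostname, None)
    except socket.gaierror:
        return False  # the name does not resolve at all
    addrs = {info[4][0] for info in infos}
    return bool(addrs & set(node_ips))
```

Entries that fail such a check could then be rejected before they are written into the HAProxy configuration.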
Can you detect SSH login over IPv6, or does it have to go through the NAT port?
What if the IPv6 SSH port is changed?
Is it against the rules if someone runs SSH login in a crontab?
HostBrr aff best VPS; VirmAche aff worst VPS.
Unable to push-up due to shoulder injury 😣
It's not network-based; it doesn't matter which user you use to log in, or whether it's via NAT or IPv6.
The cronjob runs daily: it runs "last" in each container, parses the output, and converts the last login to a Unix timestamp.
I am aware that this can be manipulated or automated.
But the idea was to keep it easy.
If this feature gets abused, disabling it would be an option.
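A rough sketch of how such a daily check could look. The `lxc exec` invocation, the `last -F` flag, and all names here are assumptions for illustration, not the actual implementation:

```python
import subprocess
import time
from datetime import datetime
from typing import Optional

def parse_last_login(last_output):
    # type: (str) -> Optional[float]
    """Parse `last -F` output and return the most recent login
    as a Unix timestamp, or None if no login is recorded."""
    for line in last_output.splitlines():
        fields = line.split()
        # Skip pseudo-entries ("reboot") and the trailing "wtmp begins ..." line.
        if len(fields) >= 8 and fields[0] not in ("reboot", "wtmp"):
            stamp = " ".join(fields[3:8])  # e.g. "Tue Oct  5 23:00:00 2021"
            return datetime.strptime(stamp, "%a %b %d %H:%M:%S %Y").timestamp()
    return None

def is_inactive(container, days=60):
    """Run `last -F` inside the container and compare the newest
    login against the inactivity threshold."""
    out = subprocess.run(
        ["lxc", "exec", container, "--", "last", "-F"],
        capture_output=True, text=True, check=True,
    ).stdout
    ts = parse_last_login(out)
    return ts is None or (time.time() - ts) > days * 86400
```

Since `last` only reads the container's own wtmp records, this works for any user and any network path, which matches the "not network-based" point above.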
What if the "last" command is deleted or overwritten for some reason?
Well, I could checksum that file, but even that isn't 100% bulletproof.
There is practically no way to determine this with 100% certainty in an untrusted environment.
Even if it were network-based, you could cron it away.
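Such a checksum could be as simple as hashing the login records file; the wtmp path and the hash choice here are my assumptions. As noted above, a changed digest only shows the file changed, not why:

```python
import hashlib

def file_sha256(path):
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Comparing file_sha256("/var/log/wtmp") against the previous run's
# digest would detect tampering with the login records, but a container
# owner can still write plausible fake entries, so it isn't bulletproof.
```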
Thanks for all this fine work! And the free plans! My MicroLXC works great! Best wishes from Mexico! 🗽🇺🇸🇲🇽🏜️
As mentioned, the activity check has been enabled, with a delay of 7 days for every inactive container.
The unit tests gave it a go; it purrs like a cat now.
Patch Notes:
added You will get an email once your container has been stopped due to inactivity
added You will get an email once your container has been terminated due to inactivity
added Termination after 67 days of inactivity
Your container will be stopped after 60 days of inactivity, plus a 7-day grace period during which you can log in and start the container again if you wish to continue using it. After those 7 days (67 days in total), your container will be terminated by the system.
So even if you don't subscribe to the notifications and forgot about it, you should take notice as long as you have working monitoring.
removed SSH activity check
Please log in to the Portal instead; once logged in, your account will be marked as active.
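The 60 + 7 day schedule above works out as follows (the date used here is only an illustration):

```python
from datetime import date, timedelta

def inactivity_schedule(last_active):
    """Day 60 after the last activity: the container is stopped.
    Day 67 (after the 7-day grace period): it is terminated."""
    stop = last_active + timedelta(days=60)
    terminate = stop + timedelta(days=7)
    return stop, terminate
```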
Maintenance Announcement
Melbourne and Los Angeles will be rebooted on Sunday evening at 23:00 CEST.
nice
2c06-3e3b-04b9-5442
Hello @Neoon! Thanks for the lutefisk! Just landed, but she seems to work great! Best wishes from Mexico! 🎃