I changed nothing.
Just launched the new VM in Zudafrika; tried to connect with the SSH key over the IPv4 address and high port shown in the dashboard, and it failed. Tried with the IPv6 address shown, and it worked.
root@lxcae31db10:~# netstat -puant
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 607/sshd: /usr/sbin
tcp 0 0 0.0.0.0:5355 0.0.0.0:* LISTEN 137/systemd-resolve
tcp 0 0 127.0.0.54:53 0.0.0.0:* LISTEN 137/systemd-resolve
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 137/systemd-resolve
tcp6 0 0 :::22 :::* LISTEN 607/sshd: /usr/sbin
tcp6 0 0 :::5355 :::* LISTEN 137/systemd-resolve
Comments
I changed nothing.
Just launched the new VM in Zudafrika; tried to connect with the SSH key over the IPv4 address and high port shown in the dashboard, and it failed. Tried with the IPv6 address shown, and it worked.
Okay, gonna take a look when I have time.
Free NAT KVM | Free NAT LXC | Bobr
We are glad to see the Enable IPv9 button added to the microLXC dashboard.
We are looking forward to seeing this button become functional.
HostBrr aff best VPS; VirmAche aff worst VPS.
Unable to push-up due to shoulder injury 😣
We also have an IPv5 button and a BGP button.
However, they are not functional yet.
The closest one to becoming functional would be BGP, since LXD has integrated BGP support and BakkerIT has offered BGP for microLXC.
https://documentation.ubuntu.com/lxd/en/latest/howto/network_bgp/
However, for me it is a skill issue: I don't speak BGP, and I have never run or owned my own ASN.
I could, though; before someone told me BGP is just a TCP stream, I thought it would be a connection to the heavens.
SG & NL are back in stock.
Plus, the dashboard now shows you how much memory you have spent and how much you can spend.
Smol useful feature.
@Shot² I will check on Africa next.
Okay, so for some reason your container pulled .75; it was supposed to get .66 to have proper forwarding.
It's in the correct network segment and everything looks fine, since we do a /28 per container.
But I can't yet explain why you got .75 within your allocated segment.
What OS did you use?
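For reference, the subnet math is consistent with that: assuming a per-container segment like 10.0.x.64/28 (the 10.0.0.x numbers below are placeholders), .75 sits inside the block even though forwarding expects .66, so nothing in the container itself would flag it:

```python
import ipaddress

# Hypothetical per-container segment: a /28 starting at .64 spans .64-.79
segment = ipaddress.ip_network("10.0.0.64/28")

# .75 is inside the allocated block, so it passes any subnet check...
print(ipaddress.ip_address("10.0.0.75") in segment)  # → True
# ...even though forwarding is wired up for .66
print(ipaddress.ip_address("10.0.0.66") in segment)  # → True
```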
Debian 12 Bookworm. Same specs and OS as in other locations; no such issue with CL, NZ, SG.
Maybe 75 is the new 66.
Nah, it allocated you correctly; I double-checked.
I can't reproduce it with Bookworm either, nor can I find another container on any of the nodes with that issue.
You can try a reinstall, set .66 statically, or just add .66.
Redeploying from scratch right now in ZA with the same settings. Let's see.
edit: now it works as expected. oh well.
Nice, you could just run ip address add 10.0.x.66/32 dev eth0 though.
No idea why the DHCP gave you .75
It was to test whether the wrong assignment was exactly repeatable, or the result of a transient glitch. Damn you, cosmic rays.
bcfd-9be7-eb95-0159
don't ask why
youtube.com/watch?v=k1BneeJTDcU
As per request I added Alpine 3.18, and also Devuan Daedalus (Debian 12).
It was a bit more work, since microLXC never saved which OS was installed; that wasn't necessary after deployment.
However, it is now needed to spawn the correct shell when you use _Console, since Alpine uses ash instead of bash.
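The shell selection itself is a one-liner once the OS is recorded; a sketch of what the console backend might do (the function name and OS strings are mine, not microLXC's actual code):

```python
def console_shell(os_name: str) -> str:
    """Pick the login shell for a container image.

    Alpine ships ash via BusyBox rather than bash, so spawning
    /bin/bash there would fail; everything else is assumed to have bash.
    """
    if os_name.lower().startswith("alpine"):
        return "/bin/ash"
    return "/bin/bash"

print(console_shell("Alpine 3.18"))  # → /bin/ash
print(console_shell("Debian 12"))    # → /bin/bash
```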
Stock update / Maintenance
We have a bunch of nodes that still have spare capacity but are limited by their current configuration.
I plan to reboot the following nodes to increase capacity:
Melbourne
Johannesburg
Auckland
Valdivia
At around 19:00 UTC on Wednesday next week.
Expect a few minutes of downtime while the nodes are rebooted.
Regarding Japan, I am still waiting for IPv6.
However, I will patch microLXC for NAT only; once the IPv6 prefix becomes available, it should be possible to enable it without a reboot and to add a button to the panel so you can let microLXC assign you a /64 prefix.
Done, restock will happen later though.
I've maybe been F5'ing a bit too anxiously the last half hour, to try and not miss ZA/Jhb again
restock will happen later though, no ETA.
I have to rest my left hand for 2 weeks.
I also have to check how much I can restock and take a look at the current usage, maybe apply some rules.
No problem, and thanks for letting me/us know, appreciate it - I won't F5 anymore. :-)
Health should always come first, even though we sometimes forget the importance.
What happened to your left hand, will everything be ok?
( Hand injuries can really suck and can take very long to recover )
ps: Would you suggest I apply in the meantime and select an EU location to 'get onto the system' (hopefully) and then change later, or should I rather just wait for the restocks before I apply at all?
Is the left hand doing what @ehab thinks it's doing?
No idea what @ehab is thinking.
No idea, should be fine though.
Does not really make a difference.
NAT only is done and tested; however, I am still going to write some additional tests for the buildserver.
I haven't had time yet to add the requested OS images or smaller packages.
JP2 is available now; no IPv6 yet, only NAT. IPv6 will be available once I get the prefix.
Thanks to @Abdullah / https://webhorizon.net/
Currently 3 packages are available.
If you have any suggestions, lemme know.
64 MB package please.
Why?
Yeah, I have to add a whitelist feature for operating systems that work on low-memory systems before I can just add a 64MB package.
32 MB package please.
It's enough for a heavy user load.
Linux will run happily with only 8 MB of RAM, including all of the bells and whistles such as the X Window System, Emacs, and so on. However, having more memory is almost as important as having a faster processor. Sixteen megabytes is just enough for personal use; 32 MB or more may be needed if you are expecting a heavy user load on the system.
That's even better.
Why?
As per request, I added a 64MB package.
Currently Alpine is the only whitelisted OS; others will be added.
Restock is next on the list; I also have to add the new node from @host_c.
A few things, before I restock.
Resource allocation now works a bit differently: if you deploy more than one container on the same node, the cost will increase, scaled by the number of containers you have or want to deploy.
For example, say you already have one container on a node and you want to deploy another one.
The cost (memory allocation) is doubled; for the 3rd it's tripled, and so on.
I prefer this to a hard limit per node.
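The rule reads like a simple multiplier; a sketch of how I understand it (the function and names are mine, not microLXC's actual code):

```python
def memory_cost(package_mb: int, containers_on_node: int) -> int:
    """Memory cost of deploying one more container on a node where you
    already have `containers_on_node` containers: the nth container
    costs n times its package size."""
    return package_mb * (containers_on_node + 1)

print(memory_cost(128, 0))  # first container on the node: 128
print(memory_cost(128, 1))  # second container: 256 (doubled)
print(memory_cost(128, 2))  # third container: 384 (tripled)
```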
Second thing, regarding bigger packages.
Currently it works like this: you can grab any package as long as the node has the memory available.
However, if the node is getting low on memory, that doesn't make much sense, especially for the bigger packages.
Let's say demand on one node is high and it has roughly 1GB of memory left to allocate; it would still offer you the 512MB packages.
This changes now: in that case, the highest package available would be the 256MB package.
If the available memory gets lower, it offers the next smaller package respectively, down to 64MB.
In case it's possible to get the node upgraded, I will do so.
Feedback?
Suppose I consider downgrading, but I need to test whether the application would actually work in the smaller package before deleting the larger one:
Start with container A at 256MB.
Create container B at 128MB, counted as 256MB.
I do my test in container B and find that it works in the smaller package.
Delete container A.
Does container B revert to a 128MB cost at this point?