@Neoon said:
The resource allocation now works a bit differently: if you deploy more than 1 container on the same node, the cost will increase, determined by the number of containers you have/want to deploy.
For example, if you already have 1 container on the node and you want to deploy another one, the cost (memory allocation) is doubled; for the 3rd it is tripled, and so on.
Suppose I am considering a downgrade, but I need to test whether the application would actually work in the smaller package before deleting the larger package:
Start with container A with 256MB.
Create container B with 128MB, counted as 256MB.
I do my test in container B and find that it works in the smaller package.
Delete container A.
Does container B revert to 128MB cost at this point?
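To make the arithmetic in this scenario concrete, here is a minimal sketch assuming the n-th container on a node is charged n times its size, as in the quote above; whether the multiplier is recomputed after deleting container A is exactly the open question, so that part is only an assumption, not something confirmed in the thread.

```python
# Hypothetical walk-through of the quoted rule for the A/B scenario above.
# Assumption: the n-th container on a node is charged n * its size, and the
# multipliers are recomputed when a container is deleted -- that last part is
# the question being asked and is not confirmed anywhere in this thread.

def charged(containers):
    # containers: list of (name, size_mb) in deployment order on one node
    return {name: (i + 1) * size for i, (name, size) in enumerate(containers)}

containers = [("A", 256), ("B", 128)]
print(charged(containers))            # {'A': 256, 'B': 256} -> B counted as 256MB
containers = [c for c in containers if c[0] != "A"]   # delete container A
print(charged(containers))            # {'B': 128} only if multipliers are recomputed
```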
Comments
Yeah, what Node are you talking about?
I might consider applying it only on Nodes with less than 8GB of memory.
I want to downgrade in Singapore, as I have 256MB but the application might work on 128MB.
However, this is not about a specific node, but about the general application logic.
I think the fairest way would be:
Largest container costs 1x.
Second largest container costs 2x.
Third largest container costs 3x.
…
… regardless of creation and deletion order.
The costs are reevaluated upon creating or deleting a container.
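A minimal sketch of this proposal, assuming the charged amount is simply size times rank when the containers on one node are sorted from largest to smallest; the function name and structure are illustrative, not microLXC's actual code.

```python
def fair_cost(sizes_mb):
    """Proposed rule: largest container 1x, second largest 2x, third 3x, ...
    independent of creation/deletion order; recomputed on every create/delete."""
    ranked = sorted(sizes_mb, reverse=True)               # largest first
    return sum((rank + 1) * size for rank, size in enumerate(ranked))

# The downgrade-test scenario from earlier in the thread:
print(fair_cost([256, 128]))   # 256*1 + 128*2 = 512 while both containers exist
print(fair_cost([128]))        # 128*1 = 128 after the 256MB container is deleted
```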
Singapore isn't a small node, hence the rule would not apply.
If I exclude nodes bigger than 8GB, Singapore, Japan... would not be on that list.
However, all other nodes in APAC and Africa (South Africa) would be.
Nodes in Europe would not be on that list either.
The point is, small nodes don't have as many resources.
So you want people not to deploy most of their allocation there, right?
Hence the cost increase if you go for a second container.
… or just disallow two containers in the same place, but add an upgrade/downgrade button.
I am still against hard limits, though they could easily be added.
Upgrade/downgrade is surely a nice feature, however it has its limitations too.
For example, it does not work with / is risky on KVM, at least for downgrades.
At some point I will add it, but not now.
Change of plans, since I got some feedback.
The cost increase will depend on the node size.
If the node has more than 8GB of memory, 2 containers will be calculated as before.
The 3rd is going to cost double.
If the node has less than 8GB of memory, 1 container will be calculated as before.
The second container is going to cost double, and so on.
This does not take the size of the package into account.
If the node is already low on memory, even a 64MB package would count.
Since I will probably even add a 32MB package and I don't want to cut the traffic in half again, I won't exclude anything smaller than 128MB.
On bigger nodes, this rule might have an exception in the future, if traffic isn't a problem.
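A minimal sketch of this announcement as I read it: the first 2 containers on nodes above 8GB (the first 1 on smaller nodes) count normally, and every further container gets an increasing multiplier. The exact progression beyond the first doubled container is my assumption from "and so on", and none of this is microLXC's actual code.

```python
def memory_cost(node_memory_gb, package_sizes_mb):
    # Containers charged at 1x: the first 2 on nodes with more than 8GB of
    # memory, otherwise only the first one. Every further container gets an
    # increasing multiplier (2x, 3x, ...), regardless of package size.
    normal = 2 if node_memory_gb > 8 else 1
    total = 0
    for i, size in enumerate(package_sizes_mb, start=1):
        multiplier = 1 if i <= normal else i - normal + 1
        total += multiplier * size
    return total

print(memory_cost(16, [256, 256, 256]))  # 256 + 256 + 512 = 1024 (3rd container doubled)
print(memory_cost(4,  [256, 64]))        # 256 + 128 = 384 (even a 64MB package counts)
```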
OK, now my head hurts.
@Neoon
So for the future, there is no longer a limit on the quantity of containers we can create?
Second, all locations seem to be out of stock.
Third, what does the bar showing 768MB | 1280MB at the top of the dashboard menu mean?
https://microlxc.net/
It means you can deploy several more VPS, in terms of memory.
Nobody said that, you still have a limit.
The limit works differently though.
Yes, because we had an increase of roughly 100%, and I am currently reworking the memory calculation.
The bar shows the memory you have already used and what you have available.
In exchange for a backlink I increased the quota to 2, so you could deploy a second container.
Currently this shows up for some people as 2048MB.
However, given that you can deploy a second container anyway, this is likely to be reset to 1024MB.
Oradea is now available. Native IPv6 is included, however only a /70, and depending on version and distro, manual configuration might be required.
In the future we should get a bigger IPv6 prefix, so it will work out of the box.
Thanks to @host_c / https://www.host-c.com
Currently 4 packages are available.
The new memory calculation is also live, feel free to check it.
I just found out my stats clear me for access!
Woohoo :-)
The concept has intrigued me, but I have no experience with it yet. If I apply for a spot in Oradea, can I still apply for a new one or a replacement when another location becomes available? I already have a NAT VPS with, well, eh, NATVPS, in Orastie.
At 200km it's not exactly around the corner from Oradea, but it is still in Romania as well. Might stock become available in any non-Eurasia datacenter?
You can delete and recreate them as you want.
I added some stock and will restock more later where I can.
fd59-9f49-45ac-a0e0
very appreciative, many thanks! :-)
5cdf-3b14-37e8-4d95
Looking forward!
1be0-ab4c-b5b9-d13e
Any chance for resource modification?
https://microlxc.net/
No, the system is fully automated; there is no support of any kind to upgrade or modify packages.
312e-5479-26bd-dbc4
Understood!
https://microlxc.net/
f4d7-6a4e-bc11-7391
Good news everyone, Romania has been upgraded to a /64 prefix per container.
The old prefix will continue to work; however, there is currently no button to apply the new network configuration, so you are essentially stuck on the old prefix until you terminate and deploy again. Reinstalling won't change or modify your current allocation.
Or you wait until the feature is available, possibly this week.
Big thanks to @host_c
I restocked some locations; even if a node shows Available, it can suddenly be out of stock due to running out of disk.
Because of the increased density, I modified the system to track storage usage too, but it doesn't reflect that yet.
Groningen needs maintenance before it can get any restocks, possibly in the coming weeks.
Storage oversubscription will end very badly.
A few users decide to fill their partition => ENOSPC: no space left on device for everyone.
Nah, microLXC currently only allocates disk space that is physically available.
Before the patch Johannesburg was slightly overallocated, but the stock system takes care of that after the patch.
Hence Melbourne and Johannesburg are out of stock or go in and out of stock really quickly; they have enough memory available, but storage is the issue.
I already asked for Melbourne to get a bigger disk allocation, we will see.
Memory is slightly overallocated (< 10%), but only on bigger nodes, so no issue there.
Bandwidth-wise, it's 200% - 300% on some nodes, still within limits, and if we should hit the bandwidth limit one day, I will just ask for an upgrade.
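For illustration, a minimal sketch of the stock idea described here: a node only shows as available if both unallocated memory and physically present, unallocated disk can cover the package. Field names and numbers are made up, and the real system also tolerates slight memory overallocation on bigger nodes, as noted above.

```python
def in_stock(node, package):
    # No storage oversubscription: only disk that is physically present and
    # not yet allocated may be handed out; the same simple check for memory.
    mem_free = node["mem_total_mb"] - node["mem_allocated_mb"]
    disk_free = node["disk_total_gb"] - node["disk_allocated_gb"]
    return package["mem_mb"] <= mem_free and package["disk_gb"] <= disk_free

melbourne = {"mem_total_mb": 8192, "mem_allocated_mb": 4096,
             "disk_total_gb": 100, "disk_allocated_gb": 95}
print(in_stock(melbourne, {"mem_mb": 256, "disk_gb": 10}))  # False: memory fits, disk does not
```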
The traffic allocations have been upgraded in SG and JP, kudos to @Abdullah / https://webhorizon.net
All packages in JP have been upgraded by +50GB.
SG has only one regional package, mediumSG, which has also been upgraded by +50GB.
If someone really needs 250GB in SG, I can create a regional SG package with a 250GB allocation, for 256MB and smaller.
Depending on traffic usage, it might be bumped to 300GB, but we will see.
Viva La MicroLXC
I still can't SSH to my TYO & NZ containers even now; is it supposed to be like that? But I can confirm I am still able to connect to them via _shell.
You have to let me know if you have any issues, but I don't see anything wrong there, besides that your SSH doesn't even seem to be listening on NZ. Looks more like an issue related to SSH for some reason.
Tokyo, no idea; we have 2 nodes, you have to be more precise.
I'll look further into this.
It said TYO only; the other TYO (Equinix) seems fine for me.
Anyway, being unable to SSH is not a big deal for me since I can access them from the portal.