I paid for a VPS about 2 days ago and I see the billing is in Paid status. But my order is still in Pending status. Also, I opened a ticket but got no response!
@eastwood said:
I paid for a VPS about 2 days ago and I see the billing is in Paid status. But my order is still in Pending status. Also, I opened a ticket but got no response!
Service: NVMe2G VPS: BossyValid-VM
@AlwaysSkint I expect you may want to respond to this one.
@yoursunny said: I've been stressed over the "80 IOPS" limit for a long time because I have no idea how to measure it.
Here you said "average size of 4KB".
Does this mean I can safely read/write large files at 240KB/s speed?
I have Seafile installed for syncing photos from my phones, with daily rclone backup to elsewhere.
So far I haven't set any speed limits, and it hasn't triggered the abuse script.
However, I'm happy to limit it further to be on the safe side.
Okay, for you, @yoursunny, I'll go over it more in depth. I'm completely exhausted, and for some reason this feels like taking a break while I babysit NYCB009 on the monitoring screen and watch for the beginning of a spike. Edit/note: I started writing this reply right after your comment, then got too busy to finish it, so I've been working through it for a couple of days.
Long ago, probably around 2014, I didn't really want to write a ToS with an IOPS limitation, but I figured it was better to have one than be vague. I don't remember this too well, but it probably went something like this: wait, isn't this way too complicated to put into one number? Why is everyone else doing it? Oh well, I guess that's how people want it. Let's look around and see what the general consensus is. 80 IOPS. Well, that's kind of low, but I'm sure in some very weird, specific situations it could technically be too much; we'll just be more lenient for now and at least have some number.
Later on: we should elaborate. We'll also throw in burst high write speeds, write operations, and total average utilization over a longer period, and make sure we cover other specific scenarios that could cause problems.
So what does it technically mean? Some examples:
80 IOPS within (2) hour period
<300MB/s write over (10) minutes
300 write operations per second over (1) hour period
20% utilization within (6) hours
When have we shut down or suspended a VM for 81 IOPS in 2 hours?
Never.
80 IOPS within (2) hour period
This essentially means a total of 576,000 operations. We'll get into it deeper further down, but installing Windows on our HDD servers and running CrystalDiskMark with 9 passes at 64GiB and 9 passes at 16MB (the standard SEQ1M Q8T1/Q1T1 and RND4K Q32T1/Q1T1 tests) results in 286,151,400 operations in about an hour (let's pretend this weird way of measuring it makes total sense; it doesn't. Nothing does.) So, like, 1,000 times less than that? What the heck? Oh right, that's on HDD. It's 42,310,200 operations on NVMe, but that's still like 73x more. Except we did it in 30 minutes (partly due to me being slow), so over 2 hours it'd technically only be like 18x more. Still not good, but not ridiculous. Scroll down to "Leniency" to see how we solve it. So in what situation could more than 80 over 2 hours be problematic? Well, this is from our old setup: 4.2 IOPS at a 268MB block size. That's the entire disk's capability, as in if someone somehow replicated this or got close to a similar situation, they could end up with very low IOPS while constantly doing 1.1GB/s.
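To put that arithmetic in one place, here's a back-of-envelope sketch. The numbers are the ones quoted above; the variable names and the script itself are just illustration, not anything our system actually runs:

```python
# "80 IOPS within a (2) hour period" expressed as a total operation budget.
WINDOW_SECONDS = 2 * 60 * 60   # the (2) hour averaging window
IOPS_LIMIT = 80                # the ToS figure

allowed_ops = IOPS_LIMIT * WINDOW_SECONDS
print(allowed_ops)             # 576000 operations, as stated above

# The Windows install + CrystalDiskMark run from this post, on NVMe:
measured_ops = 42_310_200
print(measured_ops / allowed_ops)   # ~73x the window's allowance
```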
<300MB/s write over (10) minutes
This was meant both to make things clearer, as mentioned above, and to curb potential abuse at the upper limit, as mentioned in the last paragraph. A lot of people might not know what IOPS is, or think 80 is really low; well, up to around 300MB/s could be fine most of the time. This was our way of saying that big sequential writes don't necessarily break the 80 IOPS limit. We wouldn't just add a random number that would never be achievable. Going back to our old setup, this is about 4x less than the disks' capability at a 4MB block size, which would be 268 IOPS or 1.1GB/s. That also means at 1/4th of that throughput you'd be under 80 IOPS. So for 10 minutes you can basically go above 25% utilization, which for most people is way over their fair share even if they stay right at it, so it's fine long-term, but this is where we have to be careful and add the last part, which we'll get to further down.
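If you want the block-size arithmetic spelled out, here's a tiny sketch using the old-setup numbers above; the helper is invented purely for illustration:

```python
def implied_iops(throughput_mb_s: float, block_mb: float) -> float:
    """Operations per second implied by a given throughput at a given block size."""
    return throughput_mb_s / block_mb

# Old-setup capability at a 4MB block size: ~268 IOPS, ~1.1GB/s.
print(implied_iops(1100, 4))   # ~275 IOPS, the full-disk ballpark
print(implied_iops(300, 4))    # 75 IOPS -- under 80 even at the 300MB/s write cap
```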
300 write operations per second over (1) hour period
Writes are more intensive than reads, at least in terms of real-world usage and how it ends up being used. This also represents how operations can end up being aggregated, and touches on write amplification and potential bottleneck scenarios. Here we're basically saying (a) we want to be able to react more quickly when it comes to writes alone, as in half the usual time, and (b) we're also allowing more: it's technically possible to burst to 300 write operations per second and still end up at 80 or fewer total IOPS over 2 hours. Honestly, this one probably ended up being more confusing. In the end, this rule should not be read as a strict average like the 80 IOPS one, but as "literally go wild, as long as you're not just being malicious, for up to 10 minutes."
20% utilization within (6) hours
This is both very lenient and also the "last resort" if all else fails. If someone is using 20% of the disk's time/bandwidth, that's already a lot, especially sustained; a few of these at once could kill the node. It doesn't mean you can bypass everything else, but again, it represents that it's technically possible to use up to 20% for several hours. I will say, though, it's probably rarer than the others to be able to do this without breaking some other part of the AUP. But just to show it's possible, we're basically going back to the 300MB/s bit without redoing all the math: there's a sweet spot (and several others) where you could technically hang around and be fine. This rule brings that down a little over sustained periods, but it also, again, shows that it's obviously a possibility and why we had to write it here (which again means 80 IOPS could literally be anywhere from 20% to full saturation.) Oh, and to show what I mean about it being more difficult to do: at its peak, the Windows+benchmark combo only hit 17% utilization.
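Here are all four thresholds taken at face value in one toy checker. The function, its argument names, and the idea of hard-checking raw numbers like this are invented for illustration; as the rest of this post explains, the real system weights and scales these instead:

```python
def over_any_limit(avg_iops_2h: float, write_ops_s_1h: float,
                   write_mb_s_10m: float, util_6h: float) -> bool:
    return (
        avg_iops_2h > 80             # 80 IOPS within a (2) hour period
        or write_mb_s_10m >= 300     # <300MB/s write over (10) minutes
        or write_ops_s_1h > 300      # 300 write operations/s over a (1) hour period
        or util_6h > 0.20            # 20% utilization within (6) hours
    )

# The Windows+benchmark combo peaked at 17% utilization, so on the
# utilization rule alone it would pass; its ~73x IOPS overage would not:
print(over_any_limit(avg_iops_2h=0, write_ops_s_1h=0, write_mb_s_10m=0, util_6h=0.17))     # False
print(over_any_limit(avg_iops_2h=5880, write_ops_s_1h=0, write_mb_s_10m=0, util_6h=0.17))  # True
```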
Why not combine multiple of the above and allow more or less of one thing with another?
If we had some type of combo system to better represent the real world, it'd make more sense, but it'd also get super confusing fast. Instead, this is where we decided to just be more lenient in the background. For example, smaller requests naturally end up being more stressful per MB on the disk, but that doesn't mean they're more stressful for the same IOPS. In fact, as we kind of covered above, the disk naturally does fewer IOPS (with a few caveats) on big requests, but way more total MB/s.
Leniency:
So let's go back to the Windows installation and CrystalDiskMark combo. First, some leniencies already built into the above limits, and what it really means to install Windows and run that test. We actually ended up reading a total of 100GB and writing ~150GB, which also includes the VM creation and operating-system install. It's definitely possible to achieve the same thing in a more mindful manner, taking advantage of all the limits, but we don't expect people to go to that level, which is why we built in other leniencies. But let's prove it's possible and see what it would take to do it that way.
For the most part, or at least to some level, we subtract out the VM creation portion. For this VM to be generated AND to install Windows, it took 4,179,600 operations, with 2.46GB read and 16.5GB written.
Since the AUP technically doesn't go too in-depth regarding installations, we can assume it's not the "customer's service" yet at that point, or not "usage by Customer" as I'm sure any lawyer would interpret it, so those operations can be assumed not to count. Our system does count them, but it also takes them out of the totals, which has the same effect. So let's focus on the portion that is actually using the service: running CrystalDiskMark. I ran it right as the install finished, but on delivery customers are told to wait 10 minutes before using any functions, and any normal person would take a couple of minutes to get situated, so it's not weird to assume it would happen in the next 10-minute period, and thus not be combined with the installation's first 10 minutes. This means we're allowed to burst above 300MB/s without the two bursts landing in the same 10-minute window.
The first issue is that, no matter what, we'd have to average 30,000-60,000 IOPS for 10-20 minutes to run 97GB of reads + 134GB of writes. So instead of doing 9 passes at the maximum size, let's start by being rational (this will come in handy): just run the defaults, 5 passes at 1GB (or rather 1.07GB, since it's 1 GiB). That's 8 tests, 5 times each, for a total of 40 x 1.07GB = 42.8GB instead of 200GB+. Let's see how that goes. I'm actually waiting on the results for this one, so I'll make a guess and see if I get anywhere close: around 70MB/s in this 10-minute period, and 10,000 IOPS, which will be around or slightly above 80 IOPS over a 2-hour period, which is within plausibility. Realistically this puts us into the 30-minute period by now, which means as long as we don't do anything too crazy, in 90 minutes this will drop off and no longer be part of the same two hours. Okay, the results are in: 7,837,200 total operations instead of the initial ~40 million. I tried to time it to show something else, and it looks like it worked: 766 IOPS in a 10-minute period where it did almost all the reads and a small fraction of the writes, about 15GB read and 1.6GB written. The next 10-minute period was 12,296 IOPS, with 0.2GB read and 29.5GB written.
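For reference, the data-volume arithmetic for the toned-down run, with CrystalDiskMark's defaults as described above (the script is just illustration):

```python
tests = 8         # 4 test profiles x (read + write)
passes = 5        # default pass count
size_gb = 1.07    # 1 GiB expressed in GB, as above

print(round(tests * passes * size_gb, 1))   # 42.8GB total, vs 200GB+ for 9x 64GiB
```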
Let's next talk about all the actual leniencies and why even the first case would technically be considered fine.
Initial setup: as mentioned, this is not counted at all, but it goes further than that. There are some parts I'm intentionally keeping vague so as not to give abusers ideas, and this is one of them, but under no circumstances would this even get close to being flagged during the initial setup. So this alone covers us, but let's keep going.
Actual IOPS limit: our system takes other factors into account at once. If it sees, for example, that you're doing non-intensive operations, as in smaller sizes the disk can naturally handle more of, it adjusts. Let's call this "weighted IOPS," whether or not that's an appropriate term that exists. This weighted IOPS is closer to assuming best case, but at the very least we can say it lands somewhere in the "middle": going back to our old 4.2 IOPS number for the largest tested size, and around 40,000 IOPS for the smallest (512 bytes), the "middle" would be around 4,000 IOPS for the whole disk. So, super oversimplified, we'd assume 50 people can do this (80 IOPS) at once, which means 80 IOPS is technically only around 2% utilization. Which is funny, because in this case (the 12,296 IOPS) the write portion was exactly 2% utilization. So it'd be completely fine for a long time, just not forever (since 2% constant utilization is technically above our fair share), unless it's...
Larger plan scale: the 80 IOPS figure is meant to be all-encompassing, so we only actually use it for our smallest-share service. Any plan above that scales up based on the disk portion you receive. A good rule of thumb (not official; the terms are still the terms, we're talking about leniency here) is the 10GB plan's figure times 24 for our 240GB service, for example: around 1,920 weighted IOPS (rough arithmetic in the sketch after this list). Keep in mind I said weighted IOPS; this doesn't mean we can or will let you use that if it's all a very large average transfer size (we check those other relevant figures too).
Time scale: the actual durations get scaled way up as long as the usage isn't causing active stress. So another leniency you get is that if no one is using IOPS, you don't really get flagged at all. Keep in mind you're included in that: if you alone are doing something so intense that it causes high utilization affecting everyone, this leniency doesn't kick in.
Throwaway: as long as you only hit the limits once in X period of time, it gets thrown out. We also assume our system could be wrong at least once or twice, so if you mostly qualify under the leniencies above, you might never even get a warning, even if you go over.
Warns/Poweroffs: this is where you're above the no-action threshold, as in it happened several times, and/or you didn't qualify for some leniencies such as the time-scale one. Worst case, the system does an immediate powerdown only if you don't qualify for the time scale (I made all these terms up, by the way; it's not what we actually call them). Otherwise you might get multiple warnings first, then multiple powerdowns; it doesn't go straight to suspension.
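Since a couple of the items above lean on arithmetic, here's the rough math for the "actual IOPS limit" and "larger plan scale" bits in one sketch, using only numbers already quoted. "Weighted IOPS" remains a made-up term, and this is the super-oversimplified version, not the real adjustment logic:

```python
MIDDLE_DISK_IOPS = 4_000   # "middle" of 4.2 IOPS (268MB blocks) and ~40,000 IOPS (512B)
BASE_IOPS = 80             # the ToS figure, applied to the smallest disk share
BASE_DISK_GB = 10          # the smallest share

print(MIDDLE_DISK_IOPS / BASE_IOPS)   # 50 people could do 80 IOPS at once
print(BASE_IOPS / MIDDLE_DISK_IOPS)   # 0.02 -> the ~2% utilization figure

def weighted_iops_for(plan_disk_gb: float) -> float:
    """Unofficial rule of thumb: scale the base figure by disk portion."""
    return BASE_IOPS * plan_disk_gb / BASE_DISK_GB

print(weighted_iops_for(240))   # 1920.0 for the 240GB service
```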
This isn't even really scratching the surface, but it's already reaching the point where no one can realistically read it all. To leave you with what it all means: assuming I didn't miss one, and the reporting is correct, we've reached a point where practically no one even has to get suspended by the system right now. It gets handled; the shutdowns are precise enough, and in other cases our systems are good enough that a human can evaluate it and take action. Originally I was going to tell you "the last suspension by the system occurred on..." but I'm all the way back to November 2020 and I don't see any. Okay, here we go: the most recent suspension by the auto system, as far as I can see, was March 14th, 2020. We actually recently tuned it to maybe try to change that; we'll see.
I'll just go ahead and publish a page that gives a less insane version of the above explanation and have it be our official "current additional leniencies offered," covering the scenarios and maybe some other stuff too. Like an AUP explanation page. It'll still say that in the end everything goes by the ToS/AUP, but maybe we'll also record any times we didn't follow the leniencies on that page, to be fully transparent.
Disclaimer: it's very possible the above text has a missing portion or two with placeholders where I was looking for data to back something up, or nonsensical bits I didn't have time to elaborate on. It definitely won't flow how I wanted; I had originally planned on touching on everything, but that's impossible. I think at some point I realized I shouldn't be writing any of this, got lazy, and just tried to tie up the loose ends/plot holes like a C-rated comedy movie. It definitely stopped feeling like "taking a break" about halfway through. But yeah, I'm not reading all that, and honestly no one else should either.
@Jab said:
VirMach on fire with answering questions so I will ask too!
GIFF BACK TPAZ002 aka TPAZ005 aka WHERE IS MY VPS THAT I NEVER USED?!
I decided to go the LAXA014 route with this one. Everything ballooned, the package got delayed, I ran out of time to work on it, it got worse, and then it was like "well, it's already been over a week," "well, it's already been over two weeks." So it continued having problems, taking longer, and at the same time it turned into the trolley problem. Do I save X amount of VMs that have already had a terrible experience (Z hours of outage) and sink Y more hours into it, or do I save 2-3X VMs that have only experienced 0.1Z outage, as in still salvageable, and spend only 0.5Y doing it? Is it more fair to make everyone wait maybe 0.5Z outage so this one can face less? What about all the 20X VMs on nodes that still need more patches/work? Do I prevent those from going into a domino effect? What about all the tickets, probably 5X worth of people waiting various times? Wait, while I was typing that, 3 additional days passed by? Do I send a useless email update pissing everyone off more? Should I wait until I diagnose it further? The replacement server this was supposed to go on is having problems; do I build another server, or move them even further away? Do I ship off one of the pre-built servers that may have lower specifications and not enough disk, and send two? Wait, if I'm sending two of those, should I just work longer to fix this and send it back? Wait, do I even send it back to Tampa, when that previously resulted in 2 CPUs, a motherboard, and possibly more being broken as a result of a thermal paste re-application request? Oh god, another Tampa server just ran into the same problem; do I work on this one before the first one? Should I stop spending so much time on the evaluation? Loop back 200 times and add in 10x more details while simultaneously juggling 20x other things as 10x more things come up.
Fingers crossed, if half a dozen servers don't decide to have problems today, something should be processed today. That could mean credits and email, and/or re-provisioning, and/or transfers. The paragraph above was from a couple days ago. Or has it already been a week?
@taizi said:
Day #2 survived.
Action: Binance bot: on; rclone+juicefs: on, but no workload.
Maybe just don't go over 10MB/s?
Your answer's somewhere in my post above, good luck.
@kun3go said:
TPAZ002 (or TPAZ005, as the site says?) has been unreachable for months, and the OPEN NETWORK ISSUES are all even marked as resolved now. I am sad.
Closed this maybe 12 hours ago, I plan on something something something because something. I'm out of words.
@VirMach May I trouble you with an urgent renewal billing issue? Invoice #1500688 has already been paid with my credit balance, but somehow it still says Unpaid and displays an unpaid $0.00 invoice on the home page. I opened a ticket for this 7 days ago and still no response. I'd wait patiently for longer if it weren't about a server renewal, but the server is already suspended because of this $0.00 overdue invoice and expires SOON.
Please take a look ASAP.
Thank you.
Priority ticket or regular?
Which one is correct?
I'm not going to answer that question. What I will do is paste the criteria for a Priority Ticket and let you decide for yourself.
By checking this box you agree to be billed $15 unless you have priority support for your product, there is an immediate outage not described on the network status page, there is a time-sensitive issue that may otherwise result in suspension or termination of your service, or you have been incorrectly suspended.
Anyone notice their additional IPs not showing up in WHMCS/SolusVM?
I was doing checks/updates and just spotted that my additional two are missing. They appear to ping from outside, so I'm going to try a couple of websites on them.
[I'll also need to try to find the means to request them back (again) - I think it was mentioned somewhere in this thread.]
Edit: IPs are active/bound and websites can be accessed using them.
@deafness said:
Hello, since the IP change, my service ("PlushThreadbare-VM") has been offline for 3 months. Reinstalling the operating system has no effect. When can you help me fix it? Also, I can't submit a ticket; the button is disabled.
Node name: CHIZ002
Hostname: PlushThreadbare-VM
@VirMach Can you help me solve this problem? I can't perform any operations, and I can't even use functions such as opening a ticket.
@user127 said:
somehow it still says Unpaid and displays an unpaid $0.00 invoice on the home page.
In October 2022 a user received a bill for his idle VPS stating that he owed $0.00.
He ignored it and threw it away.
In November he received another and threw that one away too.
The following month the hosting company sent him a very nasty note stating they were going to cancel his VPS if he didn't send them $0.00 by return of post.
He ticketed them, talked to them, they said it was a computer error and told him they'd take care of it.
The following month our hero decided that it was about time he tried out the troublesome VPS, figuring that if there were traffic on his account it would put an end to his ridiculous predicament.
However, in the first SSH session in which he used his VPS to transmit his spam, he found that it had been suspended.
He ticketed the hosting company who apologized for the computer error once again and said that they would take care of it.
The next day he got a bill for $0.00 stating that payment was now overdue.
Assuming that, having spoken to the hosting company only the previous day, the latest bill was yet another mistake, he ignored it, trusting that the company would be as good as their word and sort the problem out.
The next month he got a bill for $0.00 stating that he had 10 days to pay his account or the company would have to take steps to recover the debt.
Finally giving in he thought he would play the company at their own game and mailed them a check for $0.00.
The computer duly processed his account and returned a statement to the effect that he now owed the hosting company nothing at all.
A week later, the man's bank called him asking him what he was doing writing a check for $0.00.
After a lengthy explanation the bank replied that the $0.00 check had caused their check processing software to fail.
The bank could not now process ANY checks from ANY of their customers that day because the check for $0.00 was causing the computer to crash.
The following month the man received a letter from the hosting company claiming that his check had bounced, that he now owed them $0.00, and that unless he sent a check by return of post they would be taking steps to recover the debt.
The man, who had been considering buying his wife a dedicated server for her birthday, bought her an IndirectAdmin account instead.
Adapted from Zero Dollar Charge.
@kun3go said: TPAZ002 (or TPAZ005, as the site says?) has been unreachable for months, and the OPEN NETWORK ISSUES are all even marked as resolved now. I am sad.
And IPv6.
And XPG drive.
And the Tokyo Storage relaunch...
and the Ferrari SF90...
Did you mean SSD32G?
No, SSD2G, which VirMach was supposed to give to winners.
But don't remind me about that SSD32G from that BF, man. I think that was in 2018, or maybe 2019? Tried very hard to find that code.
@eastwood said: I paid for a VPS about 2 days ago and I see the billing is in Paid status. But my order is still in Pending status. Also, I opened a ticket but got no response!
@AlwaysSkint I expect you may want to respond to this one.
Nope. Nothing to see, move along. (Apart from: don't start a sentence with 'But'; it's as bad as 'So'.)
(Surprised that it wasn't auto-provisioned once payment went through.)
That'll be you in the final image, then. Living up down to your name. Perhaps we'll get some peace!
Waiting on a fraud check perhaps?
I feel like I've met my quota for replies a few times over, or maybe even my typing quota for the rest of the year.
Pressing both buttons repeatedly at this point and running away.
So in a nutshell ;')
Hmm, has JSG merged with Virmach?
Patiently awaiting rDNS delegation (not)...
WHMCS strikes again!
Priority ticket or regular?
Which one is correct?
You really need to ask?
The mind boggles.
If it's your fault = normal ticket.
If it's VirMach's fault[1] = priority ticket.
[1] and the problem is a priority problem, not "it will be broken in 3 months, help me now!!111!onerorenoeneoenoeneoenone"