@fan said: Possible disk error/corruption on TYOC040 like 026? Just found the node was unlocked and boot disk is gone.
Not necessarily the whole disk, but your LVM is obviously knackered. I'm not 100% sure it applies to your case, but I've had this happen before. Normally VirMach will automatically fix it within a couple of weeks. One sign that it has been fixed is that the OS field in SolusVM goes blank; you will then need to reinstall.
@fan said:
Possible disk error/corruption on TYOC040 like 026? Just found the node was unlocked and boot disk is gone.
Update: I/O error when accessing the virtual disk, so reinstallation won't work.
It just keeps getting knocked offline, as in the PCIe link drops. All Tokyo servers are already patched pretty much to the max to resolve all the previous problems, but at some point there was possibly a kernel update, firmware update, or BIOS update, and now it's no longer in proper equilibrium.
I remember @FrankZ was able to emulate a situation that took down the drive on AMSD030X, so it's not necessarily indicative of a "bad" drive. It could be in perfect health. It could also be a reputable-brand SSD. These new problems popping up are NOT related to the XPG fiasco.
(edit) Oh, I forgot why I mentioned Frank: that node has basically been stable ever since he stopped stressing the server. So if he can do that, it also means other people can possibly trigger a dropoff, whether intentionally or not, and it's not an easy case of identifying abuse. This can unfortunately happen in a fraction of a second, not hours of thrashing. I'd basically need to be a kernel engineer with a full-time job of diagnosing this to go any further with it. And don't worry, this isn't a case of me being incapable; I also phoned a lot of intelligent friends and they all basically couldn't take it that far. One of them did assist us in fixing maybe 1 out of 10 things that could cause a dropoff, and instead it just "overloads" in those scenarios. The overloads happen if, for example, people start mass re-installing after they see a disk message like yours; it balloons out of control before it can recover. If we could code up a better/faster detection system that isn't intensive, what we could do is force the server to basically lock itself out from SolusVM. We've gotten that done to some degree; I just need to push out an update.
It's definitely frustrating, but this is something that's had 6 years of Linux kernel bug reports. It seems like every kernel update may introduce a new specific scenario: perhaps someone's VM ends up using swap space, or something super specific happens, or multiple VMs perform certain extremely spiky behavior, and it occurs. It would explain why we keep seeing it in Tokyo, since that entire region is very spiky in usage. I'm open to any suggestions that aren't "go back in time and buy U.2 drives."
Basically, for NVMe SSDs to function properly, the motherboard, CPU, kernel, firmware, everything has to perform spectacularly or else the drive will go away. We've since coded up a "rescuer" that runs on a cron and does everything it possibly can to automatically bring the drive back up, but once it drops off it creates a domino effect that has a low success rate without a cold reboot on Linux. On Windows, in my testing, when I stressed the NVMe and it dropped off it would basically fix itself within seconds. On Linux, not so much.
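The "rescuer" itself isn't public, but the usual manual recovery for a dropped NVMe on Linux is to remove the stale PCI function and trigger a bus rescan. A minimal sketch of that decision logic, assuming an illustrative device node and PCI address (not VirMach's actual setup):

```python
import os

def rescue_actions(dev="/dev/nvme0n1", pci="0000:01:00.0"):
    """Decide what a root cron job could do when an NVMe drive drops
    off the PCIe bus. Hypothetical sketch of the 'rescuer' idea above;
    the device node and PCI address are made-up placeholders."""
    if os.path.exists(dev):
        return []  # drive still present: nothing to do
    actions = []
    stale = "/sys/bus/pci/devices/%s" % pci
    if os.path.exists(stale):
        # drop the stale PCI function first so a rescan re-enumerates it
        actions.append("echo 1 > %s/remove" % stale)
    # ask the kernel to rescan the PCI bus and (hopefully) relink the drive
    actions.append("echo 1 > /sys/bus/pci/rescan")
    return actions

print(rescue_actions(dev="/"))  # "/" always exists, so no action: []
```

As the post notes, this only has a decent success rate if the controller actually re-enumerates; once the link is wedged, a cold reboot is often the only fix.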
Some of these, if it ends up being related to a specific motherboard being sub-par or not on the perfect combination of everything, will drop off and only come back after hours of attempts.
My TYOC040 node has been stopped for 72 hours. I heard that this node is about to go offline?
he has offered a pro-rated refund for that node. feel free to take it like i did.
He still has the nerve to run promotions? It's Valentine's Day, and I wouldn't buy from him for $0.10 even on Black Friday. He may need buyers like me, but he has lost my trust, so I should withdraw my credit as soon as possible!
i doubt it if you're getting flash deals with low margins. by now it's simply your choice whether you want to be a customer.
I am obviously not who you say. I haven't opened a single support ticket in the past year. During my usage the server has been down for more than 2 months, and I kept waiting for his fix. It's terrible that I will end up taking the refund anyway. My server holds a large amount of uploaded data, and migrating that data is a big job. I have given up complaining and can only avoid becoming his customer again in the future.
Reposting because, mysteriously, @VirMach has been replying to every post in this thread except mine.
On January 21st 2023, you updated my Ticket (#634655) on the Virmach website, saying this:
"Due date has been extended. I'm going to do one last check over the weekend and if nothing viable is found, I'll let you know so we can close out this ticket. If it is found then we'll proceed with restoring it how you prefer: did you want it to override your service or provide credentials for it to be compressed and dumped inside the VM?"
I replied on the same day with:
"Thanks. Do you mean all folders from the old VM would be zipped with a password and dumped on the current VM's C: drive? If so, then that would probably be easiest."
I haven't heard anything back from you since. Did you check for the backup? When can I expect that?
Also, needless to say, the "extension" was rather pointless as there's been no conclusion and your system has already billed me again.
TYOC007
VM has had no bootable device again for a few days now.
Please fix. Thank you.
Main IP pings: false
Node Online: true
Service online: online
Operating System: linux-centos-7-x86_64-minimal-latest
Service Status: Active
Tried: Reboot, Shutdown & Boot. No luck.
Yep, it's down for me as well with the same "no bootable device" issue. The status page shows "TYOC007 - Broken reinstalls pending" so I expect we are waiting on DC hands.
I ran a database query and did the math. This is what the actual rating is for the reviews on our site.
Our TrustPilot average rating is 53% 5-star, 11% 4-star, 6% 3-star, and 26% 1-star, so the true average is 0.53×5 + 0.11×4 + 0.06×3 + 0.26×1 = 3.53 out of 5. But the way TrustPilot skews it to sell their product that makes them money (something like $1,000/mo to keep driving traffic to their website by being able to auto-invite all new orders) makes it 2/5, since there was recently negative review spam as well as some genuine negative reviews. It weighs things differently.
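For anyone checking the arithmetic, the 3.53 figure is just the star distribution weighted by star value. (The quoted shares only cover four of the five star levels; the unquoted 2-star share is omitted here as well.)

```python
# Weighted average of the star distribution quoted above:
# 53% 5-star, 11% 4-star, 6% 3-star, 26% 1-star.
distribution = {5: 0.53, 4: 0.11, 3: 0.06, 1: 0.26}
average = sum(stars * share for stars, share in distribution.items())
print(round(average, 2))  # 3.53
```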
Of course, if a customer was happy previously and is still happy and with us through the years, their positive review doesn't count, because we didn't spam the customer to write more recent reviews.
I think they also penalize us for not feeding into their system by replying to every negative review quickly, they want to turn it into another LowEnd helpdesk site.
Anyone who actually uses TrustPilot to make a purchase decision is ill-informed. Amazon has a 1.7 rating and everyone uses them; plus, they basically instant-refund anyone for 30 days and have 24x7 live support. If they can only get a 1.7 out of 5, that's a clear indicator the system is terrible, and anyone who has 4+ stars on TrustPilot is either a small company (TrustPilot skews those to be positive initially to reel people into the system), paying for fake reviews, or paying TrustPilot (which we did at some point when we were 4+ stars, something like $1,000 a month) to properly use their system and collect genuine reviews in the correct ratios.
Some further unasked for analysis.
Take a look at RN's reviews. They have a 4.7 rating. There's first of all a warning that they don't pay TrustPilot but do send out review invites, which TrustPilot doesn't like, but look at the review dates. The first page of reviews is March through May 2020, and half of the 2nd page is also May 2020. Then almost no reviews for the rest of the year until November, where they happen to get multiple reviews on exactly November 6th, and then half of the next page is November 16/17. Then another cluster on the next page. The next three pages are all spaced out again, then the 2nd-to-last page is a ton of reviews all within about a week of each other. Next page, same thing.
So 2.5 pages of reviews over 8 months (March to November 2020).
Then 3 pages of reviews over the next ~18 months.
Then 3 pages of reviews over the next ~7 months after that.
You can draw your own conclusions but it seems like at the very least they get some targeted bulk positive reviews.
Compare this with ours:
1 page - July 2016 to August 2017 (notice how it starts off slow, like real reviews.)
1 page - August 2017 to January 2018.
1 page - February to April 2018 (2 months)
9 pages - next 3 months (this is when we paid TrustPilot to send bulk reviews.)
5 pages - next 3 months (still paying TrustPilot)
4 pages - next 3 months (still paying TrustPilot)
4 pages - next 3 months (still paying TrustPilot)
It's now May 2019. We stop paying TrustPilot in April, so the last invites trickle over into May. So that's 22 pages of reviews, mostly positive, in 1 year. We had a 1 year subscription.
1 page - May 2019 to November 2019 (5 months.)
1 page - December 2019 to June 2020 (7 months)
1 page - July 2020 to December 2020 (5 months)
1 page - December to April 2021 (5 months)
1 page - May to October 2021 (6 months)
1 page - October to March 2022 (6 months)
1 page - March to May 2022 (3 months)
1 page - May to July 2022 (3 months)
1 page - July to August 2022 (2 months)
1 page - August to October 2022 (3 months)
1 page - October to Now (5 months)
(edit) I think this adds up to 36 out of 38, so I made some miscalculation above; most likely 2 more pages get added to the 22 pages of reviews from when we paid TrustPilot, since I skimmed over those.
Review frequency went up from March to October, mostly negative due to MJJ and also genuine negative experiences from the Ryzen migration fallout. Most of what's counted are these.
66% of our reviews came in within one year because we paid them. They were mostly positive, since the sample was balanced, not just angry people searching VirMach and leaving a negative review on the first site they found. The other 33% came over 2 years at the beginning and 3 years at the end.
So clearly you are not getting the full picture anymore, and it's just turned into a cesspit, since new orders aren't being asked to review and most reviews are just angry people who bought specials and abused them, or angry people from the migrations, which are over.
Now people are gonna be pissed that the boss wasted an hour on understanding TrustPilot and then another hour typing this message that like 2 people are gonna read (the rest are gonna go TL;DR) rather than [flying to Tokyo to replace the disks, sending those XPGs, or answering work orders!]
Welcome to the Ignored Club,
I have a VM in TPAZ003.VIRM.AC node that won't boot since December, I've asked here and via support ticket and they never respond.
At this rate, December will come again and they will not have fixed the problem.
@jam said: Welcome to the Ignored Club, I have a VM in TPAZ003.VIRM.AC node that won't boot since December, I've asked here and via support ticket and they never respond. At this rate, December will come again and they will not have fixed the problem.
Sadly true. I have a VM in TPAZ003; out of three (3) months, it has been online two (2) weeks. It is currently offline, and by this point I've lost interest in it; I can't use it. @VirMach could offer a pro-rated refund to users on that node; I'd take it right away.
There's something inherently scummy about review sites that get paid by the hosts being reviewed.
For one, high customer-acquisition costs mean paying more for the service.
Secondly, the review site is incentivized to drive more customers to the vendors who pay more.
I guess there's something to be said for looking for rantings on here; it helps inform your risk tolerance, since you know what is likely to go wrong.
@erk said:
There's something inherently scummy about review sites that get paid by the hosts being reviewed.
For one, high customer-acquisition costs mean paying more for the service.
Secondly, the review site is incentivized to drive more customers to the vendors who pay more.
I guess there's something to be said for looking for rantings on here; it helps inform your risk tolerance, since you know what is likely to go wrong.
The worst part is the minuscule value they add; they just focus all that money on ways to build up their scheme rather than implementing interesting ways of collecting genuine reviews. Also, they literally scammed us. We signed a one-year contract with no renewal clause; they billed us anyway, fought the chargeback, and somehow won even though I presented the literal contract to my card company as evidence of cancellation. To top it off, they admitted it at the end (aka the sales guy was like "oh, you're right" and then proceeded to not provide a refund).
Anyway, I'll actually just share my idea here in case anyone wants to develop it and maybe create a better product, though sadly it won't be used as much as these extortion sites when they pour so much money into nothing but a marketing strategy.
Main idea: an actual API to validate that a purchase was made, then presenting the customer with a review box on the business's own website instead of spamming emails.
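That purchase-validation idea could be as simple as the merchant signing each order ID with a secret shared with the review platform, so only holders of a real order get a usable review token. A sketch under those assumptions; the function names, token scheme, and order ID are all invented for illustration, not an existing API:

```python
import hashlib
import hmac

def review_token(order_id: str, secret: bytes) -> str:
    # The merchant's API signs the order ID at checkout time; the
    # signature doubles as the customer's one-time review token.
    return hmac.new(secret, order_id.encode(), hashlib.sha256).hexdigest()

def is_valid_review(order_id: str, token: str, secret: bytes) -> bool:
    # Constant-time comparison avoids leaking the token via timing.
    return hmac.compare_digest(review_token(order_id, secret), token)

secret = b"shared-with-review-site"        # placeholder shared secret
token = review_token("ORDER-1001", secret)  # issued with the order
print(is_valid_review("ORDER-1001", token, secret))  # True
```

The point of the design is that reviews can only be submitted against verified purchases, without the platform ever emailing the customer.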
We're most likely changing our 2FA policy. This is a ballpark idea, and I'm looking for any feedback. Basically, we're going to give a 30-day notice to everyone with it enabled that we're going to begin unlocking accounts on request even if you lost your backup code. It's getting bad, and there have been two times where the system actually glitched out to some degree, so should it fail, we need to be able to unlock people who didn't keep their backup codes. It seems like 2FA activations are going up, which is good, but it also means tons of frustrated customers. We'll go over official lockout periods. We'll also most likely send emails to people with email 2FA, just to clarify the policy for that as well.
Note: "lockout period" means we'll remove it X days after the request, unless it looks super obviously shady, in which case we may push back a little. This means we're relinquishing our responsibility as a guard unless you activate the old method.
For those who still want us to keep it the "old way," AKA if you lose the 2FA and backup codes you're permanently locked out, we'll add a button to activate that, and we'll keep track of accounts that have it enabled so we won't remove it for those, ever.
This is still being worked on, both the server (in small chunks, trying to get the data off) and the recreations. I'm not the one handling recreations, and it looks like there have been some bumps; something was supposed to be coded up to make things faster, but I think that's gone, rather ironically.
Actually, I don't even know anymore; that might be a completely different Tampa.
There have clearly been some delays on everything; I don't know if you saw all the other posts around yours, but we're not having a terrific week. Not sure what response you desire.
If I had to keep everyone updated on delays I feel like we'd create an infinite loop.
Started receiving credit for the locked Tokyo node.
Yo, funny old fart, I dare ya lock the node until next bill cycle
smartass shitposting satirist
One hundred pages in a month and a half.
More than two pages a day on average.
See you on /p200
Yo, "work order", "have face" & "I have lost his credit", smelled MJJ hiding behind Google Translate
smartass shitposting satirist
🌟100 pages🌟
Seems like you were heavily hurt by MJJ.
Rated 4.25/5 Overall by 1000+ Customers
False advertising! Yo @VirMach, I dare ya put the Trustpilot link.
smartass shitposting satirist
I ran database query and did the math. This is what the actual rating is for the reviews on our site.
Our TrustPilot average rating is 53% 5 stars, 11% 4 stars, 6% 3 stars, and 26% 1 stars so the true average is 0.535 + 0.114 + 0.063 + 0.261 = 3.53 out of 5 but the way TrustPilot skews it to sell their product that makes them money ($1,000/MO or something like that to actually keep driving traffic to their website by being able to auto invite all new orders) makes it 2/5 since there was recently negative review spam as well as some genuine negative reviews. It weighs it differently.
Of course if a customer was happy previously and they're still happy and with us through the years it doesn't count their positive review because we didn't spam the customer to write more recent reviews.
I think they also penalize us for not feeding into their system by replying to every negative review quickly, they want to turn it into another LowEnd helpdesk site.
Anyone who actually uses TrustPilot to make a purchase decision is ill-informed. Amazon has a 1.7 rating and everyone uses them, plus they basically instant refund anyone for 30 days and have 24x7 live support. If they can only get a 1.7 out of 5 then it's a clear indicator of the system being terrible and anyone who has 4+ stars on TrustPilot is either a small company since they skew those to be positive initially to reel people into their system, they're paying for fake reviews, or they're paying TrustPilot (which we did at some point when we were 4+ stars something like $1,000 a month) to properly use their system and collect genuine reviews in the correct ratios.
Some further unasked-for analysis.
Take a look at RN's reviews. They have a 4.7 rating. First of all, there's a warning that they don't pay TrustPilot but still send out review invites, which TrustPilot doesn't like. But look at the review dates: the first page of reviews is March through May 2020, and half of the 2nd page is also May 2020. Then almost no reviews for the rest of the year until November, where they happen to get multiple reviews on exactly November 6th, and then half of the next page is November 16/17. Then another cluster on the next page. The next three pages are all spaced out again, then the 2nd-to-last page is a ton of reviews all within about a week of each other. Next page, same thing.
So 2.5 pages of reviews over 8 months (March to November 2020).
Then 3 pages of reviews over the next ~18 months.
Then 3 pages of reviews over the next ~7 months after that.
You can draw your own conclusions but it seems like at the very least they get some targeted bulk positive reviews.
Compare this with ours:
1 page - July 2016 to August 2017 (notice how it starts off slow, like real reviews.)
1 page - August 2017 to January 2018.
1 page - February to April 2018 (2 months)
9 pages - next 3 months (this is when we paid TrustPilot to send bulk reviews.)
5 pages - next 3 months (still paying TrustPilot)
4 pages - next 3 months (still paying TrustPilot)
4 pages - next 3 months (still paying TrustPilot)
It's now May 2019. We stopped paying TrustPilot in April, so the last invites trickled over into May. That's 22 pages of reviews, mostly positive, in one year. We had a one-year subscription.
1 page - May 2019 to November 2019 (5 months.)
1 page - December 2019 to June 2020 (7 months)
1 page - July 2020 to December 2020 (5 months)
1 page - December to April 2021 (5 months)
1 page - May to October 2021 (6 months)
1 page - October to March 2022 (6 months)
1 page - March to May 2022 (3 months)
1 page - May to July 2022 (3 months)
1 page - July to August 2022 (2 months)
1 page - August to October 2022 (3 months)
1 page - October to Now (5 months)
(edit) I think this adds up to 36 out of 38 pages, so I miscalculated somewhere above; most likely 2 more pages belong with the 22 pages from when we paid TrustPilot, since I skimmed over those.
The frequency of reviews went up from March to October, mostly negative due to MJJ and also genuine negative experiences from the Ryzen migration fallout. Most of what's counted now are these.
66% of our reviews came in within one year because we paid them. Those were mostly positive, since the sample was balanced rather than just angry people searching for VirMach and leaving a negative review on the first site they could find to complain. The other 34% came in over 2 years at the beginning and 3 years at the end.
So clearly you're not getting the full picture anymore, and it's just turned into a cesspit: new orders aren't being asked to review, and most reviews are just angry people who bought specials and abused them, or angry people from the migrations, which are over.
Now people are gonna be pissed that the boss wasted an hour on understanding TrustPilot and then another hour typing a message that like 2 people are gonna read (the rest are gonna go TL;DR) rather than [flying to Tokyo to replace the disks, sending those XPGs, or answering work orders!]
Haven't bought a single service in VirMach Great Ryzen 2022 - 2023 Flash Sale.
https://lowendspirit.com/uploads/editor/gi/ippw0lcmqowk.png
Swimming to Tokyo I suppose, cuz it has been days
Anyway, sorry for triggering your boss though
Welcome to the Ignored Club,
I have a VM on the TPAZ003.VIRM.AC node that hasn't booted since December. I've asked here and via support ticket, and they never respond.
At this rate, December will come again and they will not have fixed the problem.
Ruby, JS Programmer and Linux user
Sadly true. I have a VM in TPAZ003; out of three (3) months, it has been online two (2) weeks. It is currently offline, and by this point I've lost interest in it since I can't use it. @VirMach could offer a pro-rated refund to users on that node; I'd take it right away.
There's something inherently scummy about review sites that get paid by the hosts being reviewed:
For one, high customer acquisition costs mean paying more for the service.
Secondly, the review site is incentivized to drive more customers to vendors who pay more.
I guess there's something to be said for looking for rantings on here -- it helps inform your risk tolerance, since you know what is likely to go wrong.
@VirMach, will you put a Migrate button on the troubled nodes (Phoenix, etc.)? That way fewer people will complain, since they can just move.
The worst part is the minuscule value they add; they pour all that money into ways of building up their scheme rather than into implementing interesting ways of collecting genuine reviews. Also, they literally scammed us. We signed a one-year contract with no renewal clause, they billed us anyway, fought the chargeback, and somehow won even though I presented the literal contract to my card company as evidence of cancellation. To top it off, they admitted it at the end (aka the sales guy was like "oh, you're right" and then proceeded to not provide a refund).
Anyway, I'll actually just share my idea here in case anyone wants to develop it and maybe create a better product but sadly it won't actually be used as much as these extortion sites when they pour so much money into only a marketing strategy.
Main idea: an actual API to validate that a purchase was made, then presenting the customer with a review box on the business's own website instead of spamming emails.
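To make the idea concrete, here's a hypothetical sketch of how such verified-purchase review tokens could work. Everything here (the function names, the shared-secret scheme, the 30-day window) is my own illustration, not anything VirMach or any review platform actually implements: the merchant mints a signed token after a real order, the on-site review box submits it with the review, and the platform only accepts reviews carrying a valid, recent token.

```python
import hashlib
import hmac
import json
import time

# Hypothetical shared secret between the review platform and the merchant.
SECRET = b"review-platform-shared-secret"

def issue_review_token(order_id: str, customer_email: str) -> str:
    """Merchant side: after a verified purchase, mint a signed token the
    on-site review box can present to the review platform's API."""
    payload = json.dumps(
        {"order": order_id, "email": customer_email, "ts": int(time.time())},
        sort_keys=True,
    )
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def verify_review_token(token: str, max_age_s: int = 30 * 24 * 3600) -> bool:
    """Review-platform side: accept a review only if the token is
    authentically signed and recent (i.e. tied to a real purchase)."""
    payload, _, sig = token.rpartition("|")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # forged or tampered token
    return int(time.time()) - json.loads(payload)["ts"] <= max_age_s
```

Because only purchase-backed tokens verify, the platform never needs to email-spam customers, and review counts naturally track real order volume instead of whoever pays for invites.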
We're most likely changing our 2FA policy.
This is a ballpark idea and I'm looking for any feedback. Basically, we're going to give 30 days' notice to everyone with 2FA enabled that we're going to begin unlocking accounts on request, even if you lost your backup code. It's getting bad, and there have been two times where the system actually glitched out to some degree, so should it fail, we need to be able to unlock people who didn't keep their backup codes. 2FA activations seem to be going up, which is good, but it also means tons of frustrated customers. We'll go over official lockout periods. We'll most likely also send emails to people with email 2FA, just to clarify the policy for that as well.
Note: the lockout period means we'll remove 2FA X days after it's requested, unless the request looks super, super obviously shady, in which case we may push back a little. This means we're relinquishing our responsibility as a guard unless you activate the old method.
For those who still want us to keep it the "old way" (AKA if you lose the 2FA and backup code, you're permanently locked out), we'll add a button to activate that, and we'll keep track of accounts that have it enabled so we never remove it for those.
Sound good?
Migrate button was causing more problems/tickets.
This is still being worked on, both the server (in small chunks, trying to get the data off) and the recreations. I'm not the one handling recreations, and it looks like there have been some bumps: something was supposed to be coded up to make things faster, but I think that's gone, rather ironically.
Actually I don't even know anymore that might be a completely different Tampa.
There's clearly been some delays on everything, I don't know if you saw all the other posts around yours, we're not having a terrific week. Not sure what response you desire.
If I had to keep everyone updated on delays I feel like we'd create an infinite loop.
Is the TYOC040 node offline? I cannot restart it.
TYOC040 appears to have been down for about four days as of now. It's on the VirMach status page as having recurring issues.
thanks for your reply
TYOC026 finally going back up soon, maybe, who knows, it's done a pretty good job of being insanely annoying so far.
I found that no matter what happens VirNerd always has an excuse and it's always someone else's fault.