@taoqi said:
I want to know, is there any way to solve the problem of multiple accounts in the future?
Change households, get a new computer, maybe get a legal name change for good measure and then pay with Coinbase.
Because I wanted to buy more VPS, I registered two accounts on January 4; one has been refunded and the other has been flagged. I admit to this, but I also want to change the current state. Is there any way to deal with the flagged one? I can pay the management fees for multiple accounts. I just want to make better use of the VPS.
I think most providers want customers who break their TOS to leave... It looks like VirMach doesn't want to serve you, so what makes you stick with VirMach?
@tulipyun said:
@taizi said:
Sorry, your account is not eligible to create any orders.
imagine your account can't even purchase the Account Support Level
That's a feature, not a bug. To be able to purchase the support level you can mention it in your account appeal ticket. Not you specifically though, just in general that's how it would happen. For you I don't recommend making another ticket, it'd just get merged. Your ticket's the only one I think still in the queue and not placed on hold so about 500 more tickets to go and I'll get to it.
Actually, I have a dedi reinstall ticket that got closed, maybe because it was merged. And because my account is flagged, I can't open a new ticket or purchase a support level.
The dedi status is halted, and it shows it can't find the PDU device or something.
Should I mention this in the pusher's ticket, so that I can purchase support for both my main account and the pusher's account and speed up both tickets?
I'm just afraid that replying will send my ticket back to the end of the queue...
Yo serious Simba dude, do ya mean the locked Tokyo chicken? Not a big deal, that's just a proxy & jump node.
My biz has been migrated from Contabo S to Hetzner CPX21.
Whaaaaaaaaaaaaaaaagh Tokyo storage CPU steal burst and congested network
Sincerely hope @VirMach could ban those motherfuckers
My theory is that VirMach builds with Ryzen 5XXXs (or specifically the 5900X) aren't exactly stable due to "insert answer".
Feel free to comment, y'all.
People running multiple simultaneous yabs disk tests.
I have 4 VPS on the same node in SEA (3900X). If VirMach allows, I can try running YABS on all of them in one go and see how that goes. Heck, he can even verify whether they are all provisioned on the same disk.
I was just funning with you about the YABS. What node are you on in Seattle? I am on most nodes in Seattle, so before you break my stuff ....
I informed him of the method I used when asked to try and break the testing node (AMSD030X), it was not yabs, so he does know how to get the disk to drop off if he wants to. I think the issue may be that trying to completely resolve the problem with the motherboards/kernels/etc available has been elusive.
I didn't intend to violate the TOS. I don't know what other people's VPS are like; at present, the VPS I have running are really easy to use for their class. Also, I really didn't read the TOS when I bought them — I saw the VPS and directly placed an order. At other server vendors I never ran into a problem registering more accounts, so I didn't deliberately pay attention to this issue.
I like the "I don't read, I don't care... Fix Asap" :-)
This is coincidental timing but I just came here to let you know that the feature is now there, sadly I think you may have to wait until the node is functional to use it.
I gave it a try and my cancellation ticket got merged with my ~12 merged IPv6 request tickets.
@reb0rn said:
Looks like PHXZ001 went online, but it's down again after a while. It's a network issue; the server itself has been up 59 days.
I already added this one to the network status page almost immediately. PHXZ001 had a partial issue that I turned into a full overload. I mean, all I did was rotate logs and restart the logging service, and I was going to restart PHP-FPM so SolusVM controls would be restored, but clearly there's some other larger issue if that caused it to go crazy.
Going to be juggling this one today with all the others and hopefully it won't turn into a week-long ordeal.
Then after I'm done coughing (I prefer liquid Vicodin for bronchitis) we'll ship out the rest of the Ryzens, figure out how to actually fill them, fix a dozen broken things related to WHMCS, probably do some actual PR so people stop making fun of us, actually hire some people before I end up in an insane asylum, then set up 2x1Gbit for all the locations, and I still have to fly out everywhere to clean up and organize everything, improve our disaster recovery backup setups, and take inventory of all the scattered replacement parts. I'm assuming while doing all this juggling I'll also be involved in a game of dodgeball for all the other curveballs to come. Anyway, I think I can get it done in about 3 days or, at the very least, soon.
Forget the Vicodin, you need four shots of Pfizer, three shots of Moderna, and a Novavax chaser, right in the throat, for that COVID cough before you infect us all.
Still end up giving it to someone, don't ask me how I know.
For those of you that received IPv4 change notifications for node NYCB036, the new IPs may be routing now.
Mine is working so I have changed over to the new gateway IP and removed the old IP from my network config.
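For anyone doing the same cutover, the sequence is: add the new address, repoint the default route at the new gateway, then drop the old address once it works. Here's a sketch using iproute2, with placeholder documentation-range addresses rather than the real NYCB036 ones; it prints the commands instead of executing them so the plan can be reviewed first.

```shell
#!/bin/sh
# Hypothetical cutover plan for an IPv4 change. Addresses are
# placeholders (RFC 5737 documentation ranges), not VirMach's.
# Prints the iproute2 commands rather than running them.
plan_cutover() {
    new_ip="$1"; new_gw="$2"; old_ip="$3"; dev="$4"
    echo "ip addr add $new_ip dev $dev"           # bring up new IP alongside the old one
    echo "ip route replace default via $new_gw"   # switch the default gateway
    echo "ip addr del $old_ip dev $dev"           # drop the old IP once the new one routes
}

plan_cutover 203.0.113.10/24 203.0.113.1 198.51.100.10/24 eth0
```

Pipe the output through `sh` (as root) only after checking it, and remember to make the change persistent in your distro's network config afterwards, like FrankZ did.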
@taoqi said:
Is there any way to deal with the marked ones? I can pay the management fees for multiple accounts. I just want to use vps better.
You just want to break the terms of service.
His username literally means naughty or mischievous.
https://lowendspirit.com/discussion/comment/129592/#Comment_129592
@fan said: Possible disk error/corruption on TYOC040 like 026? Just found the node was unlocked and boot disk is gone.
Not necessarily the whole disk, but your LVM is obviously knackered. I'm not 100% sure it applies to your case, but I've had this happen before. Normally VirMach will automatically fix it within a couple of weeks. One sign that it has been fixed is that the OS field in SolusVM is blank. You will then need to reinstall.
@fan said:
Possible disk error/corruption on TYOC040 like 026? Just found the node was unlocked and boot disk is gone.
Update: I/O error when accessing the virtual disk, so reinstallation won't work.
It just keeps getting knocked offline, as in the PCIe link drops. All Tokyo servers are already patched pretty much to the max to resolve all the previous problems, but at some point there was possibly a kernel update, firmware update, or BIOS update, and now it's no longer in proper equilibrium.
I remember @FrankZ was able to emulate a situation that took down the drive on AMSD030X, so it's not necessarily indicative of a "bad" drive. It could be in perfect health. It could also be a reputable-brand SSD. These new problems popping up are NOT related to the XPG fiasco.
(edit) Oh I forgot why I mentioned Frank, that node has basically been stable ever since he stopped stressing the server. So if he can do that, it also means other people can possibly trigger a dropoff, whether intentionally or not. And it's not an easy case of identifying abuse. This can unfortunately happen in a fraction of a second, not hours of thrashing. I'd basically need to be a kernel engineer with a full-time job of diagnosing this to go any further with it. And don't worry this isn't a case of me being incapable, I also phoned in a lot of intelligent friends and they all basically couldn't take it that far. One of them did assist us in fixing maybe 1 out of 10 things that could cause a dropoff and instead it just "overloads" in those scenarios. The overloads happen if for example people start mass re-installing after they see a disk message like yours, it balloons out of control before it can recover. If we could code up a better/faster detection system that isn't intensive what we could do is force the server to basically lock itself out from SolusVM. We've gotten that done to some degree, I just need to push out an update.
It's definitely frustrating but this is something that's had 6 years of Linux kernel bug reports. Seems like every kernel update it may introduce a new specific scenario where perhaps if someone's VM ends up using swap space or something super specific happens, or multiple VMs perform certain extremely spikey behavior it occurs. It would explain why we keep seeing it in Tokyo since that entire region is very spiky in usage. I'm open to any suggestions that aren't "go back in time and buy U.2 drives."
Basically, for NVMe SSDs to function properly the motherboard, CPU, kernel, firmware, everything has to perform spectacularly or else the drive will go away. We've since coded up a "rescuer" that runs on a cron and does everything it possibly can to automatically bring it back up, but once it drops off it creates a domino effect that has a low success rate without a cold reboot on Linux. On Windows, in my testing, when I stressed the NVMe and it dropped off it would basically fix itself within seconds. On Linux, not so much.
Some of these, if it ends up being related to a specific motherboard being sub-par or not on the perfect combo of everything, will drop off and only come back after hours of attempts.
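The "rescuer" described above boils down to: notice the NVMe namespace has dropped off, detach the wedged PCI function, and trigger a bus rescan so the kernel re-enumerates the drive. A minimal sketch of that general technique (not VirMach's actual script, which isn't public) — the device and sysfs paths are parameters so the logic can be dry-run against a scratch directory instead of a live host:

```shell
#!/bin/sh
# Hypothetical NVMe "rescuer" cron job sketch.
#   dev     - block device that should exist (e.g. /dev/nvme0n1)
#   pci_dir - sysfs dir of the NVMe controller (e.g. /sys/bus/pci/devices/0000:01:00.0)
#   rescan  - PCI rescan trigger (e.g. /sys/bus/pci/rescan)
rescue_nvme() {
    dev="$1"; pci_dir="$2"; rescan="$3"
    if [ -e "$dev" ]; then
        # Drive is still enumerated; nothing to do.
        echo "present"
        return 0
    fi
    # Drive dropped off the bus: detach the stale PCI function
    # (if the kernel still has it registered)...
    if [ -e "$pci_dir/remove" ]; then
        echo 1 > "$pci_dir/remove"
    fi
    # ...then ask the kernel to re-walk the bus. On a good day the
    # controller comes back and the namespace reappears.
    echo 1 > "$rescan"
    echo "rescanned"
}
```

On a real host this would run from cron as root with the real `/dev` and `/sys` paths. As the post notes, once the link drops the rescan often fails anyway without a cold reboot, so this is best-effort recovery, not a fix.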
I think most providers want customers who break their TOS to leave... It looks like VirMach doesn't want to serve you, so what makes you stick with VirMach?
Brand loyalty?
Free Hosting at YetiNode | Cryptid Security | URL Shortener | LaunchVPS | ExtraVM | Host-C | In the Node, or Out of the Loop?
SEAZ007
I bench YABS 24/7/365 unless it's a leap year.
You're good I am not on SEAZ006 or SEAZ007.
For staff assistance or support issues please use the helpdesk ticket system at https://support.lowendspirit.com/index.php?a=add
just to be clear, if it crashes "it wasn't me".
"Deeeecent", as Bubbles would say.
dnscry.pt - Public DNSCrypt resolvers hosted by LowEnd providers • Need a free NAT LXC? -> https://microlxc.net/
ignorantia juris non excusat!
Contribute your idling VPS/dedi (link), Android (link) or iOS (link) devices to medical research
[Off Topic] Who said electric bikes? (One for @bikegremlin)
It wisnae me! A big boy done it and ran away.
NVMe2G for life! until death (the end is nigh)
Is this before, during and/or after the Epyc deals!
(rDNS, cough.)
It's all one big juggle, don't remind me.
IPv6 -cough Amsterdam storage cough cough- ticket backlog COUGH last couple doz-- COUGH -en flash deals cough- shipping out XPG cough drives cough cough
^ Needs Bronchial Balsam
They're comin' to take me away, ho ho, hee hee, ha ha.
(Google it)
If you need a hand hit me up man.
Don't need to. On my regular playlist.
My TYOC040 node has been stopped for 72 hours. I heard that this node is about to go offline?
Why can't I get into the control panel? Have you ever been in this situation?
He has offered a pro-rated refund for that node. Feel free to take it like I did.