VirMach - Complain - Moan - Praise - Chit Chat


Comments

  • @taoqi said:

    @VirMach said:

    @taoqi said:
    I want to know, is there any way to solve the problem of multiple accounts in the future?

    Change households, get a new computer, maybe get a legal name change for good measure and then pay with Coinbase.

    Because I want to buy more VPS, I did register two accounts on January 4; one has been refunded and the other has been flagged. I admit this, but I also want to change the current state. Is there any way to deal with the flagged account? I can pay the management fees for multiple accounts; I just want to use the VPS better.

    I think most providers want customers who break their TOS to leave... It looks like VirMach doesn't want to serve you, so what makes you stick with VirMach?

  • @tototo said:

    @taoqi said:

    @VirMach said:

    @taoqi said:
    I want to know, is there any way to solve the problem of multiple accounts in the future?

    Change households, get a new computer, maybe get a legal name change for good measure and then pay with Coinbase.

    Because I want to buy more VPS, I did register two accounts on January 4; one has been refunded and the other has been flagged. I admit this, but I also want to change the current state. Is there any way to deal with the flagged account? I can pay the management fees for multiple accounts; I just want to use the VPS better.

    I think most providers want customers who break their TOS to leave... It looks like VirMach doesn't want to serve you, so what makes you stick with VirMach?

    Brand loyalty?

    Thanked by (1)tototo

    Free Hosting at YetiNode | Cryptid Security | URL Shortener | LaunchVPS | ExtraVM | Host-C | In the Node, or Out of the Loop?

  • @VirMach said:

    @tulipyun said:
    @taizi said:
    Sorry, your account is not eligible to create any orders.
    imagine your account can't even purchase the Account Support Level

    That's a feature, not a bug. To be able to purchase the support level, you can mention it in your account appeal ticket. Not you specifically, though; just in general, that's how it would happen. For you I don't recommend making another ticket; it'd just get merged. Your ticket is, I think, the only one still in the queue and not placed on hold, so about 500 more tickets to go and I'll get to it.

    Actually, I have a dedi reinstall ticket that got closed, maybe because it was merged. And because my account is flagged, I can't open a new ticket or purchase a support level.
    The dedi's status is halted, and it shows "can't find PDU device" or something.
    Should I mention this in the pusher's ticket, so that I can purchase support for both my main account and the pusher's account and both tickets can speed up?
    I'm just afraid replying will make my ticket go to the back of the queue again...

  • edited February 2023

    @AuroraZero said:
    @Flying_Chinaman frustrating isn't it?

    Yo serious Simba dude, do ya mean the locked Tokyo chicken? Not a big deal, that's just a proxy & jump node.
    My biz has been migrated from Contabo S to Hetzner CPX21


    Whaaaaaaaaaaaaaaaagh Tokyo storage CPU steal burst and congested network

    Sincerely hope @VirMach could ban those motherfuckers

    smartass shitposting satirist

  • cybertech OG Benchmark King

    @FrankZ said:

    @cybertech said:

    @FrankZ said:

    @cybertech said:

    @FrankZ said:

    @cybertech said:
    what CPU's on AMSD030X?

    RYZEN 5900X

    my theory is that VirMach builds with Ryzen 5000-series chips (specifically the 5900X) aren't exactly stable due to "insert answer"

    feel free to comment yall

    People running multiple simultaneous yabs disk tests.

    I have 4 VPS on the same node in SEA (3900X). If VirMach allows, I can try running YABS on all of them in one go and see how that goes. Heck, he can even verify whether they are all provisioned on the same disk.

    I was just funning with you about the yabs. What node are you on in Seattle? I am on most nodes in Seattle, so before you break my stuff ....

    I informed him of the method I used when asked to try to break the testing node (AMSD030X); it was not yabs, so he does know how to get the disk to drop off if he wants to. I think the issue may be that completely resolving the problem with the motherboards/kernels/etc. available has been elusive.

    SEAZ007

    Thanked by (1)FrankZ

    I bench YABS 24/7/365 unless it's a leap year.

  • FrankZ Moderator
    edited February 2023

    @cybertech said: SEAZ007

    You're good; I am not on SEAZ006 or SEAZ007. :wink:

    I am currently traveling in mostly remote areas until sometime in April 2024. Consequently DM's sent to me will go unanswered during this time.
    For staff assistance or support issues please use the helpdesk ticket system at https://support.lowendspirit.com/index.php?a=add

  • edited February 2023

    @tototo said:

    @taoqi said:

    @VirMach said:

    @taoqi said:
    I want to know, is there any way to solve the problem of multiple accounts in the future?

    Change households, get a new computer, maybe get a legal name change for good measure and then pay with Coinbase.

    Because I want to buy more VPS, I did register two accounts on January 4; one has been refunded and the other has been flagged. I admit this, but I also want to change the current state. Is there any way to deal with the flagged account? I can pay the management fees for multiple accounts; I just want to use the VPS better.

    I think most providers want customers who break their TOS to leave... It looks like VirMach doesn't want to serve you, so what makes you stick with VirMach?

    I didn't intend to violate the TOS. I don't know what other people's VPS are like; at present, the VPS I have are genuinely easy to use for their class. Also, I really didn't read the TOS when I bought; I saw the VPS and placed an order directly. With other server vendors I never ran into a problem registering more accounts, so I didn't pay particular attention to this.

  • cybertech OG Benchmark King
    edited February 2023

    @FrankZ said:

    @cybertech said: SEAZ007

    You're good; I am not on SEAZ006 or SEAZ007. :wink:

    just to be clear, if it crashes "it wasn't me".

    Thanked by (1)FrankZ


  • @taoqi said:
    I didn't intend to violate the TOS. I don't know what other people's VPS are like; at present, the VPS I have are genuinely easy to use for their class. Also, I really didn't read the TOS when I bought; I saw the VPS and placed an order directly. With other server vendors I never ran into a problem registering more accounts, so I didn't pay particular attention to this.

    I like the "I don't read, I don't care... Fix Asap" :-)

  • edited February 2023

    looks like PHXZ001 came online but went down after a while; it's a network issue, the server itself is up 59 days

  • @VirMach said:

    @yoursunny said: @tarasis suggested me to cancel Tokyo, and I should have listened.

    @Flying_Chinaman said: TOOC036 LOCKED

    @yoursunny said: TYOC026: The node is currently locked.

    This is coincidental timing but I just came here to let you know that the feature is now there, sadly I think you may have to wait until the node is functional to use it.

    I gave it a try and my cancellation ticket got merged with my ~12 merged IPv6 request tickets.

    "Deeeecent", as Bubbles would say.

    dnscry.pt - Public DNSCrypt resolvers hosted by LowEnd providers • Need a free NAT LXC? -> https://microlxc.net/

  • @Brueggus said:

    @VirMach said:

    @yoursunny said: @tarasis suggested me to cancel Tokyo, and I should have listened.

    @Flying_Chinaman said: TOOC036 LOCKED

    @yoursunny said: TYOC026: The node is currently locked.

    This is coincidental timing but I just came here to let you know that the feature is now there, sadly I think you may have to wait until the node is functional to use it.

    I gave it a try and my cancellation ticket got merged with my ~12 merged IPv6 request tickets.

    "Deeeecent", as Bubbles would say.

    Thanked by (1)Brueggus


  • @risturiz said:
    I like the "I don't read, I don't care... Fix Asap" :-)

    ignorantia juris non excusat!




    Thanked by (1)AlwaysSkint

    Contribute your idling VPS/dedi (link), Android (link) or iOS (link) devices to medical research

  • edited February 2023

    @chimichurri said: ignorantia..

    [Off Topic] Who said electric bikes? ;) (One for @bikegremlin)

    It wisnae me! A big boy done it and ran away.
    NVMe2G for life! until death (the end is nigh)

  • VirMach Hosting Provider
    edited February 2023

    @reb0rn said:
    looks like PHXZ001 came online but went down after a while; it's a network issue, the server itself is up 59 days

    I already added this one to the network status page almost immediately. PHXZ001 had a partial issue that I turned into a full overload. I mean, all I did was rotate logs and restart the logging service, and I was going to restart PHP-FPM so SolusVM controls would be restored, but clearly there's some other, larger issue if that caused it to go crazy.

    Going to be juggling this one today with all the others and hopefully it won't turn into a week-long ordeal.
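The rotate-then-restart sequence described above can be sketched roughly as follows. This is a minimal sketch, not VirMach's actual procedure; the log path and service unit names are assumptions.

```shell
# Copy-truncate rotation: keep a dated copy, then empty the live file in
# place, so the writing process keeps a valid file handle and nothing has
# to be stopped mid-write.
rotate_log() {
    log="$1"
    cp "$log" "$log.$(date +%Y%m%d)" && : > "$log"
}

# Afterwards, bounce services gracefully (unit names are assumptions):
#   systemctl restart rsyslog
#   systemctl reload php-fpm   # reload, not restart: in-flight requests finish
```

Using `reload` rather than `restart` on PHP-FPM lets in-flight requests drain instead of being dropped; a hard restart under load is one way a partial issue can snowball into a full overload.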


  • edited February 2023

    @VirMach said: Going to be juggling this one today with all the others..

    Is this before, during and/or after the Epyc deals! :p :confounded:
    (rDNS, cough.)


  • VirMach Hosting Provider
    edited February 2023

    @AlwaysSkint said:

    @VirMach said: Going to be juggling this one today with all the others..

    Is this before, during and/or after the Epyc deals! :p :confounded:

    It's all one big juggle, don't remind me.

    (rDNS, cough.)

    IPv6 -cough Amsterdam storage cough cough- ticket backlog COUGH last couple doz-- COUGH -en flash deals cough- shipping out XPG cough drives cough cough

  • ^ Needs Bronchial Balsam

    Thanked by (1)sh97


  • VirMach Hosting Provider
    edited February 2023

    @AlwaysSkint said:
    ^ Needs Bronchial Balsam

    Then after I'm done coughing (I prefer liquid Vicodin for Bronchitis) we'll ship out the rest of the Ryzens, figure out how to actually fill them, fix a dozen broken things related to WHMCS, have to probably actually do some PR so people stop making fun of us, actually hire some people before I end up in an insane asylum, then set up 2x1Gbit for all the locations, and I still have to fly out everywhere to clean up and organize everything, improve our disaster recovery backup setups and take inventory of all the scattered replacement parts. I'm assuming while doing all this juggling I'll also be involved in a game of dodgeball for all the other curveballs to come. Anyway, I think I can get it done in about 3 days or at the very least, soon.

  • edited February 2023

    @VirMach said: ..before I end up in an insane asylum..

    They're comin' to take me you away, ho ho, hee hee, ha ha.
    (Google it)

    Thanked by (1)skorous


  • @VirMach said:

    @AlwaysSkint said:
    ^ Needs Bronchial Balsam

    Then after I'm done coughing (I prefer liquid Vicodin for Bronchitis) we'll ship out the rest of the Ryzens, figure out how to actually fill them, fix a dozen broken things related to WHMCS, have to probably actually do some PR so people stop making fun of us, actually hire some people before I end up in an insane asylum, then set up 2x1Gbit for all the locations, and I still have to fly out everywhere to clean up and organize everything, improve our disaster recovery backup setups and take inventory of all the scattered replacement parts. I'm assuming while doing all this juggling I'll also be involved in a game of dodgeball for all the other curveballs to come. Anyway, I think I can get it done in about 3 days or at the very least, soon.

    If you need a hand hit me up man. :)


  • edited February 2023

    @VirMach said:
    (rDNS, cough.)

    IPv6 -cough Amsterdam storage cough cough- ticket backlog COUGH last couple doz-- COUGH -en flash deals cough- shipping out XPG cough drives cough cough

    Forget the Vicodin, you need four shots of Pfizer, three shots of Moderna and a Novavax chaser, right in the throat, for that covid cough before you infect us all.

  • @AlwaysSkint said:

    @VirMach said: ..before I end up in an insane asylum..

    They're comin' to take me you away, ho ho, hee hee, ha ha.
    (Google it)

    Don't need to. On my regular playlist.

    Thanked by (2)AlwaysSkint FrankZ
  • @nutjob said:

    @VirMach said:
    (rDNS, cough.)

    IPv6 -cough Amsterdam storage cough cough- ticket backlog COUGH last couple doz-- COUGH -en flash deals cough- shipping out XPG cough drives cough cough

    Forget the Vicodin, you need four shots of Pfizer, three shots of Moderna and a Novavax chaser, right in the throat, for that covid cough before you infect us all.


    Still end up giving it to someone, don't ask me how I know.


  • FrankZ Moderator

    For those of you who received IPv4 change notifications for node NYCB036, the new IPs may be routing now.
    Mine is working, so I have changed over to the new gateway IP and removed the old IP from my network config.
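The changeover Frank describes can be sketched with iproute2 as below. It runs dry by default (set `APPLY=1` to execute), and the addresses are placeholders, not the real NYCB036 assignments.

```shell
# Sketch of moving a VPS to a new IP + gateway after a renumbering.
NEW_IP="203.0.113.10/24"; NEW_GW="203.0.113.1"
OLD_IP="198.51.100.10/24"; IFACE="eth0"

run() {
    # print the command unless APPLY=1; keeps the sketch safe to execute
    if [ "${APPLY:-0}" = "1" ]; then "$@"; else echo "would run: $*"; fi
}

run ip addr add "$NEW_IP" dev "$IFACE"       # bring up the new address first
run ip route replace default via "$NEW_GW"   # then move the default route
run ip addr del "$OLD_IP" dev "$IFACE"       # finally drop the old address
# Persist the change in /etc/network/interfaces, netplan, etc. as appropriate.
```

The ordering matters: add the new address before replacing the default route, and remove the old address last, so you never saw off the branch you're connected through.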

    Thanked by (3)ehab AlwaysSkint titus


  • @cybertech said:

    @taoqi said:

    @VirMach said:

    @taoqi said:
    I want to know, is there any way to solve the problem of multiple accounts in the future?

    Change households, get a new computer, maybe get a legal name change for good measure and then pay with Coinbase.

    Because I want to buy more VPS, I did register two accounts on January 4; one has been refunded and the other has been flagged. I admit this, but I also want to change the current state. Is there any way to deal with the flagged account? I can pay the management fees for multiple accounts; I just want to use the VPS better.

    you just want to break the terms of service.

    His username literally means naughty or mischievous.

    Thanked by (1)cybertech

  • @VirMach said:

    @FrankZ said:

    @fan said: Possible disk error/corruption on TYOC040 like 026? Just found the node was unlocked and boot disk is gone.

    Not necessarily the whole disk, but your lvm is obviously knackered. Not 100% sure that it applies to your case, but I've had this happen before. Normally VirMach will automatically fix it within a ~couple of weeks. One sign that it has been fixed is that the O/S in SolusVM is blank. You will then need to reinstall.

    @fan said:
    Possible disk error/corruption on TYOC040 like 026? Just found the node was unlocked and boot disk is gone.

    Update: I/O error when accessing the virtual disk, so reinstallation won't work.

    It just keeps getting knocked offline, as in the PCIe link drops. All Tokyo servers are already patched pretty much to the max to resolve all the previous problems, but at some point there was possibly a kernel update, firmware update, or BIOS update, and now it's no longer in proper equilibrium.

    I remember @FrankZ was able to emulate a situation that took down the drive on AMSD030X, so it's not necessarily indicative of a "bad" drive. It could be in perfect health. It could also be a reputable-brand SSD. These new problems popping up are NOT related to the XPG fiasco.

    (edit) Oh, I forgot why I mentioned Frank: that node has basically been stable ever since he stopped stressing the server. So if he can do that, it also means other people can possibly trigger a dropoff, whether intentionally or not. And it's not an easy case of identifying abuse. This can unfortunately happen in a fraction of a second, not hours of thrashing. I'd basically need to be a kernel engineer with a full-time job of diagnosing this to go any further with it. And don't worry, this isn't a case of me being incapable; I also phoned a lot of intelligent friends and they all basically couldn't take it that far. One of them did assist us in fixing maybe 1 out of 10 things that could cause a dropoff, and instead it just "overloads" in those scenarios. The overloads happen if, for example, people start mass re-installing after they see a disk message like yours; it balloons out of control before it can recover. If we could code up a better/faster detection system that isn't intensive, what we could do is force the server to basically lock itself out from SolusVM. We've gotten that done to some degree, I just need to push out an update.

    It's definitely frustrating, but this is something that's had 6 years of Linux kernel bug reports. It seems like every kernel update may introduce a new specific scenario where it occurs: perhaps someone's VM ends up using swap space, or something super specific happens, or multiple VMs perform certain extremely spiky behavior. That would explain why we keep seeing it in Tokyo, since that entire region is very spiky in usage. I'm open to any suggestions that aren't "go back in time and buy U.2 drives."

    Basically, for NVMe SSDs to function properly, the motherboard, CPU, kernel, firmware, everything has to perform spectacularly or else the drive will go away. We've since coded up a "rescuer" that checks and runs on a cron and does everything it possibly can to automatically bring the drive back up, but once it drops off it creates a domino effect that has a low success rate without a cold reboot on Linux. On Windows, in my testing, when I stressed the NVMe and it dropped off it would basically fix itself within seconds. On Linux, not so much.

    Some of these, if it ends up being related to a specific motherboard being sub-par or not on the perfect combo of everything, will drop off and only come back after hours of attempts.

    My VM on node TYOC040 has been stopped for 72 hours. I heard that this node is about to go offline?
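For what it's worth, a cron-driven "rescuer" of the kind described in the quoted post might look roughly like this. The device node, PCI address, and the sysfs remove/rescan approach are all assumptions; VirMach's actual script is not public and almost certainly does more.

```shell
# Rough sketch of an NVMe "rescuer": if the block device has dropped off
# the PCIe bus, detach the dead function and ask the kernel to re-enumerate.
# Needs root; DEV and PCI are placeholders.
DEV="${DEV:-/dev/nvme0n1}"
PCI="${PCI:-0000:01:00.0}"

device_missing() {
    [ ! -b "$1" ]    # true when no block device node exists at this path
}

rescue() {
    # standard sysfs remove/rescan dance to re-train a dropped PCIe link
    echo 1 > "/sys/bus/pci/devices/$PCI/remove" 2>/dev/null
    sleep 2
    echo 1 > /sys/bus/pci/rescan
}

# hypothetical crontab entry: * * * * * /usr/local/sbin/nvme-rescuer.sh --run
case "${1:-}" in
    --run) device_missing "$DEV" && rescue ;;
esac
```

As the post notes, once the link drops the success rate without a cold reboot is low; a rescan only helps in the subset of cases where the drive re-enumerates cleanly.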

  • Why can't I get into the control panel? Have you ever been in this situation?

  • cybertech OG Benchmark King

    @Mainly said:

    @VirMach said:

    @FrankZ said:

    @fan said: Possible disk error/corruption on TYOC040 like 026? Just found the node was unlocked and boot disk is gone.

    Not necessarily the whole disk, but your lvm is obviously knackered. Not 100% sure that it applies to your case, but I've had this happen before. Normally VirMach will automatically fix it within a ~couple of weeks. One sign that it has been fixed is that the O/S in SolusVM is blank. You will then need to reinstall.

    @fan said:
    Possible disk error/corruption on TYOC040 like 026? Just found the node was unlocked and boot disk is gone.

    Update: I/O error when accessing the virtual disk, so reinstallation won't work.

    It just keeps getting knocked offline, as in the PCIe link drops. All Tokyo servers are already patched pretty much to the max to resolve all the previous problems, but at some point there was possibly a kernel update, firmware update, or BIOS update, and now it's no longer in proper equilibrium.

    I remember @FrankZ was able to emulate a situation that took down the drive on AMSD030X, so it's not necessarily indicative of a "bad" drive. It could be in perfect health. It could also be a reputable-brand SSD. These new problems popping up are NOT related to the XPG fiasco.

    (edit) Oh, I forgot why I mentioned Frank: that node has basically been stable ever since he stopped stressing the server. So if he can do that, it also means other people can possibly trigger a dropoff, whether intentionally or not. And it's not an easy case of identifying abuse. This can unfortunately happen in a fraction of a second, not hours of thrashing. I'd basically need to be a kernel engineer with a full-time job of diagnosing this to go any further with it. And don't worry, this isn't a case of me being incapable; I also phoned a lot of intelligent friends and they all basically couldn't take it that far. One of them did assist us in fixing maybe 1 out of 10 things that could cause a dropoff, and instead it just "overloads" in those scenarios. The overloads happen if, for example, people start mass re-installing after they see a disk message like yours; it balloons out of control before it can recover. If we could code up a better/faster detection system that isn't intensive, what we could do is force the server to basically lock itself out from SolusVM. We've gotten that done to some degree, I just need to push out an update.

    It's definitely frustrating, but this is something that's had 6 years of Linux kernel bug reports. It seems like every kernel update may introduce a new specific scenario where it occurs: perhaps someone's VM ends up using swap space, or something super specific happens, or multiple VMs perform certain extremely spiky behavior. That would explain why we keep seeing it in Tokyo, since that entire region is very spiky in usage. I'm open to any suggestions that aren't "go back in time and buy U.2 drives."

    Basically, for NVMe SSDs to function properly, the motherboard, CPU, kernel, firmware, everything has to perform spectacularly or else the drive will go away. We've since coded up a "rescuer" that checks and runs on a cron and does everything it possibly can to automatically bring the drive back up, but once it drops off it creates a domino effect that has a low success rate without a cold reboot on Linux. On Windows, in my testing, when I stressed the NVMe and it dropped off it would basically fix itself within seconds. On Linux, not so much.

    Some of these, if it ends up being related to a specific motherboard being sub-par or not on the perfect combo of everything, will drop off and only come back after hours of attempts.

    My VM on node TYOC040 has been stopped for 72 hours. I heard that this node is about to go offline?

    He has offered a pro-rated refund for that node. Feel free to take it like I did.

