VirMach - Complain - Moan - Praise - Chit Chat


Comments

  • VirMach Hosting Provider
    edited 8:08AM

    @nahaoba said:

    @VirMach said:

    @nahaoba said: It looks like I'm not lucky. TYOC027 has been down for 22 days, since 19 Feb. How long will it take to repair?

    Sorry, I guess I should just go for it. I'm really dreading this situation because if I take action in this particular case, there is a small chance it could go terribly wrong. But you're right, it's been too long and I haven't been able to find a better solution. The main issue is that if it does go bad, it will require a lot more time to be spent on it. Let's see if we're lucky.

    If the failure takes a long time to process, why not migrate to another node first?

    Tokyo-exclusive issue. We can't move people to a nearby location as we normally would (we'd get 95% complaints), and we can't move everyone to another Tokyo node either because it'll likely cause problems on the other node after a mass migration. Also, this would only be possible for the ones that are online.

    What I can offer is to move anyone who contacts us to get recreated on another node if they want to abandon their data.

    As for what I said I'd do the other day, I did. It didn't result in it going terribly wrong but it also didn't go terribly right. Closer to wrong than right. There's a lot more that could be said but it's not helpful.

  • Jab Senpai
    edited 10:48AM


    Just waiting for the moment the VirMach AI decides this idler is using too much CPU and bans me (-:

    --- 1.1.1.1 ping statistics ---
    26 packets transmitted, 25 received, 3.84615% packet loss, time 26498ms
    rtt min/avg/max/mdev = 1.813/147.843/396.488/93.441 ms
    

    min 1.8 ms, avg 150 :-D
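
    (A minimal sketch if anyone wants to reproduce this check; assumes a Linux box with iputils ping and awk. The 1.1.1.1 target and packet count are just the values from the run above.)

    # Re-run the same latency test and keep only the summary lines:
    ping -c 26 1.1.1.1 | tail -n 3

    # Or pull out just the average RTT; the rtt line is min/avg/max/mdev,
    # so avg is the 5th '/'-separated field:
    ping -c 26 1.1.1.1 | awk -F'/' '/^rtt/ {print "avg:", $5, "ms"}'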


  • eliphas

    AMSD014 coincidentally started to sweat up :(

    https://imgur.com/a/HSiaKeb

  • nahaoba

    @VirMach said: What I can offer is to move anyone who contacts us to get recreated on another node if they want to abandon their data.

    I can abandon my data. Can you help me move to another Tokyo node?

  • OFFER #3212 is good ~ 2 IPv4s :), but I am waiting for the $8 one with 8 IPv4s :lol:

  • @Jab said:

    Just waiting for the moment the VirMach AI decides this idler is using too much CPU and bans me (-:

    This is by far my "idlest" server. Hope they won't judge it by that chart alone...

  • VirMach Hosting Provider

    @eliphas said:
    AMSD014 coincidentally started to sweat up :(

    https://imgur.com/a/HSiaKeb

    That one started being weird a day or two ago. None of this is a coincidence, though; I just haven't had time to play detective while dealing with LAXA004S. Such a PITA: I had to basically piece the host operating system back together because it decided to die mid-patch.

  • FrankZ Moderator, OG

    Shutting down my VM on AMSD014 due to high [70%+] steal.
    I've got extras so no worries.

    @VirMach said: What I can offer is to move anyone who contacts us to get recreated on another node if they want to abandon their data.

    If this is your preferred solution I'll contact you, otherwise I'll just wait it out.
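
    (For anyone else wanting to confirm steal from inside the guest before shutting down, a minimal sketch; assumes a Linux VM with procps installed. The commands below only show where a steal percentage like 70%+ comes from.)

    # Sample CPU stats once per second, five times; the "st" column is steal %:
    vmstat 1 5

    # Or read the cumulative steal counter directly: on the aggregate "cpu"
    # line of /proc/stat, the 8th value after the label is steal time (jiffies):
    awk '/^cpu /{print "steal jiffies:", $9}' /proc/stat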

  • VirMach Hosting Provider

    @FrankZ said: Shutting down my VM on AMSD014 due to high [70%+] steal.

    Should be OK now. Monitoring further.

    Thanked by (1): FrankZ