@nahaoba said: It looks like I'm not lucky. TYOC027 has been down for 22 days, since 19 Feb. How long will it take to repair?
Sorry, I guess I should just go for it. I'm really dreading this situation because if I take action in this particular case, there is a small chance it could go terribly wrong. But you're right, it's been too long and I haven't been able to find a better solution. The main issue is that if it does go bad, it will require a lot more time spent on it. Let's see if we're lucky.
If the repair is going to take a long time, why not migrate to another node first?
This is a Tokyo-exclusive issue. We can't move people to a nearby location as we normally would (we'd get complaints from 95% of them), and we can't move everyone to another Tokyo node either, because a mass migration would likely cause problems on that node too. Also, this would only be possible for the VMs that are online.
What I can offer: anyone who contacts us can be recreated on another node if they're willing to abandon their data.
As for what I said I'd do the other day, I did it. It didn't go terribly wrong, but it also didn't go terribly right. Closer to wrong than right. There's a lot more that could be said, but it wouldn't be helpful.
I can abandon my data. Can you help me move to another Tokyo node?
That one started being weird a day or two ago. None of this is a coincidence, though; I just haven't had time to play detective while dealing with LAXA004S. Such a PITA: I had to basically piece the host operating system back together because it decided to die mid-patch.
Just waiting for the moment the VirMach AI deems this idler to be using too much CPU and bans me (-:
min 1.8 ms, avg 150 ms :-D
Haven't bought a single service in the VirMach Great Ryzen 2022 - 2023 Flash Sale.
https://lowendspirit.com/uploads/editor/gi/ippw0lcmqowk.png
AMSD014 coincidentally started acting up as well.
https://imgur.com/a/HSiaKeb
OFFER #3212 is good ~ 2 IPv4s, but I am waiting for the $8 8 IPv4s deal.
This is by far my "idlest" server. Hope they won't judge it only by that chart...
Shutting down my VM on AMSD014 due to high [70%+] steal.
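For anyone wanting to double-check the steal figure themselves before pulling the plug, here's a minimal sketch (not VirMach tooling, just a generic Linux check assuming a readable /proc/stat and an arbitrary one-second sample):

```python
import time

def cpu_times():
    # First line of /proc/stat: "cpu user nice system idle iowait irq softirq steal guest guest_nice"
    with open("/proc/stat") as f:
        return [int(x) for x in f.readline().split()[1:]]

def steal_percent(interval=1.0):
    before = cpu_times()
    time.sleep(interval)
    after = cpu_times()
    deltas = [b - a for a, b in zip(before, after)]
    total = sum(deltas)
    steal = deltas[7]  # 8th field is time stolen by the hypervisor
    return 100.0 * steal / total if total else 0.0

if __name__ == "__main__":
    print(f"steal: {steal_percent():.1f}%")
```

Anything consistently in the 70%+ range, as reported above, means the host is massively oversold or busy, so shutting the VM down until it settles is reasonable.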
I've got extras, so no worries.
If this is your preferred solution I'll contact you; otherwise I'll just wait it out.
Should be OK now. Monitoring further.