[2022] ★ VirMach ★ RYZEN ★ NVMe ★★ The Epic Sales Offer Thread ★★

Comments

  • cybertech (OG, Benchmark King)
    edited August 2022

    @VirMach said:

    @cybertech said:
    TYOC040 is down

    This isn't down. If you have issues with your service please make a priority ticket, but at this point it might be best to private message me here. Private message from only @cybertech please, just in case more people are affected.

    My VM was found to be down a few hours back, along with the "node timeout" error on WHMCS.

    Just checked and it's back on WHMCS, powered on but with all data lost: "boot failed: could not read the boot disk".

    If this is not a known error from the past 24 hours and you would like to investigate, I could send a PM with the VM details. Otherwise I'll just reinstall, toy with the Ryzen migration, or procrastinate.

    It's a 2.5GB plan.

    I bench YABS 24/7/365 unless it's a leap year.

  • @VirMach said: Update: all tickets with appropriately titled dedicated server migration tickets have been completed. Sorry it took so long. I did get help with these so if yours isn't completed, let me know. If your ticket is merged or titled anything else other than "Dedicated Server Migration" it's up to you but it might work to your benefit to close it and create a new one in the priority department titled as such. Sorry for falling behind on replies here; I'll still try to reply to everyone but there's more comments than I can handle right now without losing focus.

    Please check ticket #492096 for the dedicated server migration.
    Regards

  • @VirMach said: Migrations to Tokyo end up having a lot of issues and they're vastly more popular than other requests. I don't know if yours is a Tokyo request but if it is then please understand there are hundreds of requests we're trying to catch up with in Tokyo.

    My request is to move to Tokyo, and I'm still waiting in line. Please don't forget it. I hope you can get some rest too. Thanks.

  • edited August 2022

    Hi, on July 25th I paid $3 on a ticket to migrate to Tokyo, but it doesn't seem to have gone through, so I'd like a refund (ticket #261712, invoice #1462469). Thanks. @VirMach

  • ben
    edited August 2022

    SJCZ004 has been offline for two months, and the paid migration still can't be completed. @VirMach

  • edited August 2022

    @cybertech said:
    TYOC040 is down

    @VirMach said:

    @cybertech said:
    TYOC040 is down

    This isn't down. If you have issues with your service please make a priority ticket, but at this point it might be best to private message me here. Private message from only @cybertech please, just in case more people are affected.

    Same with me. I just checked from my monitoring and it has been down; I also can't access the panel from the billing panel, it says timeout:

    Operation Timed Out After 90001 Milliseconds With 0 Bytes Received

  • edited August 2022

    @VirMach said:

    @cybertech said:
    TYOC040 is down

    This isn't down. If you have issues with your service please make a priority ticket, but at this point it might be best to private message me here. Private message from only @cybertech please, just in case more people are affected.

    My TYOC040 is also offline; basic controls time out and SolusVM refuses to respond. Monitoring tells me this is the second time in two weeks; the last time it happened was on August 2 and it lasted a day and 5 hours. Oops, I also seem to have sent a ticket after a similar situation in early July, and it is still waiting for staff review.

    By the way, the login button on your new home page points to an invalid link.

  • edited August 2022

    @VirMach check ticket 354513.

    I had 2 dedicated servers go offline, and I created a priority ticket on 8/3 for dedicated server migration.

    You merged that ticket and changed the heading; I have not received anything to date other than billing reminders for termination.

    I created another ticket, 169735, for dedicated server migration, but it won't let me make it priority.

  • @VirMach said:
    Update: all tickets with appropriately titled dedicated server migration tickets have been completed. Sorry it took so long. I did get help with these so if yours isn't completed, let me know. If your ticket is merged or titled anything else other than "Dedicated Server Migration" it's up to you but it might work to your benefit to close it and create a new one in the priority department titled as such. Sorry for falling behind on replies here; I'll still try to reply to everyone but there's more comments than I can handle right now without losing focus.

    Just got my replacement dedi overnight, looks like VirMach wasn't exaggerating that they'd racked several almost immediately:
    10:41:33 up 11 days, 1:17, 1 user, load average: 0.00, 0.01, 0.05

    Looks like my replacement is in San Jose. I was hoping to hold out for LA, but that works well enough I suppose. I got a decent spec bump and a small SSD instead of a 1-2TB HDD, exactly what I was hoping for. An invoice has already been generated for the new server and it makes sense to me, but the ticket mentioned account credit, and I'm sure you're going to get a few confused people who don't realize the credit balance will be applied to the invoice automatically.

    With no ETA on the IPMI / control panel stuff, I think I'm going to go ahead and put in a ticket to have the OS changed as suggested. I hate to make more work for your team, and I'd normally just handle it myself, but I really don't want to have to set things up yet again once I'm able to access IPMI.

    Thanks VirMach. I know this has been chaotic for you; zero complaints with how you've handled the situation.

    Thanked by (1) skorous
  • @VirMach said:
    Update: all tickets with appropriately titled dedicated server migration tickets have been completed. Sorry it took so long. I did get help with these so if yours isn't completed, let me know. If your ticket is merged or titled anything else other than "Dedicated Server Migration" it's up to you but it might work to your benefit to close it and create a new one in the priority department titled as such. Sorry for falling behind on replies here; I'll still try to reply to everyone but there's more comments than I can handle right now without losing focus.

    Some other updates/information:

    • We are aware of the few servers offline. I requested someone update the status page this morning. These will unfortunately take some time to resolve given the current situation we're in.
    • Japan and Amsterdam storage nodes are going to be sent out early next week or possibly by the end of this week. I know these have been heavily delayed and again, I apologize.
    • I've requested someone complete re-creations for broken VMs so if yours is still inaccessible at this point, please make sure you have a priority ticket in for it.
    • Server controls for dedicated servers are still unfortunately broken. If you require some action such as a reboot or reinstall then please put in a priority ticket for it. We're still trying to get controls completed but waiting

    @Daevien said:
    SJCZ008 & NYCB004S just had posts on https://billing.virmach.com/serverstatus.php

    Virmach is alive and hopefully even managed some sleep this weekend?

    Yes, I've been catching up on sleep and delegating a lot of tasks to others at the company. I haven't been around because I want to make sure I spend the time I have fruitfully so at this time it's better I get important tasks done instead of hanging around the forums.

    I suppose a negative way of looking at things would be to say I finally "burned out" after the last year of constant work but I definitely need some more time to recuperate at this point (and it'll ultimately be beneficial.)

    Careful about burnout, bud. As someone watching from the sidelines (I don't have a horse in this race), I have a ton of respect for what you're doing and wish you the best. We need more people like you here, and part of that also means getting your rest when the time is right, even if there's a mountain of other "shit" to do.

    I know you probably already get this a ton (and probably don't need it), but help and support are available if needed.

  • edited August 2022

    @VirMach said:
    Update: all tickets with appropriately titled dedicated server migration tickets have been completed. Sorry it took so long. I did get help with these so if yours isn't completed, let me know. If your ticket is merged or titled anything else other than "Dedicated Server Migration" it's up to you but it might work to your benefit to close it and create a new one in the priority department titled as such. Sorry for falling behind on replies here; I'll still try to reply to everyone but there's more comments than I can handle right now without losing focus.

    I was contacted by the support team last night offering me a server at the SJ DC, but the specs were quite a bit lower than what I had at the NY DC, especially the storage: I had 2x 1TB HDD previously and was offered 1x 240GB SSD. I'd prefer something equivalent to what I had and was told I'd have to wait another week or two for more options at the west coast DCs. Let's hope for the best. Three months remain until renewal, so I'll have to make some decisions once I see how this is handled.

  • @bula said:

    @VirMach said:
    Update: all tickets with appropriately titled dedicated server migration tickets have been completed. Sorry it took so long. I did get help with these so if yours isn't completed, let me know. If your ticket is merged or titled anything else other than "Dedicated Server Migration" it's up to you but it might work to your benefit to close it and create a new one in the priority department titled as such. Sorry for falling behind on replies here; I'll still try to reply to everyone but there's more comments than I can handle right now without losing focus.

    I was contacted by the support team last night offering me a server at the SJ DC, but the specs were quite a bit lower than what I had at the NY DC, especially the storage: I had 2x 1TB HDD previously and was offered 1x 240GB SSD. I'd prefer something equivalent to what I had and was told I'd have to wait another week or two for more options at the west coast DCs. Let's hope for the best. Three months remain until renewal, so I'll have to make some decisions once I see how this is handled.

    Currently still waiting for my ticket to get taken care of, but I feel like these new boxes (when properly integrated and configured) will end up being upgrades overall. Excited for the better network blend and (hopefully) more reliable hardware! On my CC dedis I had ~4 drive failures across 3 servers within a year. Hoping things will be different, as the pricing is still quite nice.

  • @fluttershy said:
    Currently still waiting for my ticket to get taken care of, but I feel like these new boxes (when properly integrated and configured) will end up being upgrades overall. Excited for the better network blend and (hopefully) more reliable hardware! On my CC dedis I had ~4 drive failures across 3 servers within a year. Hoping things will be different, as the pricing is still quite nice.

    At least they are making great efforts to get replacements, so let's hope for the best. I'd had the NY server since 2018 and had no issues at all.

  • @bula said:

    @fluttershy said:
    Currently still waiting for my ticket to get taken care of, but I feel like these new boxes (when properly integrated and configured) will end up being upgrades overall. Excited for the better network blend and (hopefully) more reliable hardware! On my CC dedis I had ~4 drive failures across 3 servers within a year. Hoping things will be different, as the pricing is still quite nice.

    At least they are making great efforts to get replacements, so let's hope for the best. I'd had the NY server since 2018 and had no issues at all.

    Agreed, they have quite a bit of goodwill from me. A week of downtime does suck but this isn't something directly in their control (CC pulling servers) and they're doing their best to get replacements and bring customers back online.

  • vyas (OG, Senpai)

    @fluttershy said:

    @bula said:

    @fluttershy said:
    Currently still waiting for my ticket to get taken care of, but I feel like these new boxes (when properly integrated and configured) will end up being upgrades overall. Excited for the better network blend and (hopefully) more reliable hardware! On my CC dedis I had ~4 drive failures across 3 servers within a year. Hoping things will be different, as the pricing is still quite nice.

    At least they are making great efforts to get replacements, so let's hope for the best. I'd had the NY server since 2018 and had no issues at all.

    Agreed, they have quite a bit of goodwill from me. A week of downtime does suck but this isn't something directly in their control (CC pulling servers) and they're doing their best to get replacements and bring customers back online.

    You are supposed to rant in a fit of PMS.

    Wait..

    Wrong forum

  • Been using Virmach for over a year on a dedicated server for hobby usage, very much appreciate the great deal, and consistent service received so far!

    My only criticism at this point is more from an IT perspective (I'm a sysadmin, career-wise).

    I've had a priority ticket in since CC went down, but there was no email communication or web page notification telling users to create such a specifically named "dedicated server migration" ticket unless they happened to read this forum.

    So my previous ticket with the subject "Easy migration, Don't need data, just new server, San Jose would be great!" was ignored, and the server has remained down and unmigrated.

    Regardless of ticket subjects, from an IT standpoint I would think you would have a list of assets for the locations that went down; then you would just go through the list and make sure everything is migrated.

    With the deal we get, I don't want to complain; it's more that it bugs me from an IT perspective... work smart, not hard... xD

    Either way, closed my other ticket and re-created:
    Ticket #969232

    I know it’s likely been very hard work with little help to get everything going again so quickly, so I do want to say your efforts are appreciated regardless of hiccups!

  • skorous (OG, Senpai)

    @kRyTiCaL said:

    Regardless of ticket subjects, from an IT standpoint I would think you would have a list of assets for the locations that went down; then you would just go through the list and make sure everything is migrated.

    And you're thinking that won't happen? I personally would imagine that people ticketing got seen as higher priority and thus handled first with everybody else falling into a second/third tier.

  • edited August 2022

    @skorous said:
    And you're thinking that won't happen? I personally would imagine that people ticketing got seen as higher priority and thus handled first with everybody else falling into a second/third tier.

    It makes sense that it would happen, I guess it was just the wording of:

    so if yours isn't completed, let me know. If your ticket is merged or titled anything else other than "Dedicated Server Migration" it's up to you but it might work to your benefit to close it and create a new one in the priority department titled as such.

    Makes it sound as though they are only handling this through tickets. “Correctly” named tickets first, and then all other tickets.

    I guess I also have no idea how many users/servers this could include...
    Considering the lack of communication (like emails or a web page notification on the subject) and that you had to be in this thread to find out the correct ticket name to use, I wouldn't have thought it was a very large number.

    If it is a larger number, you'd think there would be two minutes of free time to send out an email or update a banner indicating this was necessary;
    otherwise I guess there is still a large number of very confused people with absolutely no idea what's going on.
    They could prevent themselves a lot of headache by spending literally 5-10 more minutes on communication.

    This is all speculation on my part; I hope I didn't come off too poorly or too critical of Virmach/the situation... just posting the only conclusions I can come to with the communications received thus far.

    Thanked by (2) storm, adly
  • Reading through some of this thread I see what you're enduring behind the scenes, especially in Dallas where my downed VPS is.

    @VirMach said: I've requested someone complete re-creations for broken VMs so if yours is still inaccessible at this point, please make sure you have a priority ticket in for it.

    I've had a "Ryzen Issues" ticket open for over a month now, should I open a separate priority ticket? I created two (I believe different) tickets in the meantime and both were merged into the "Ryzen Issues" ticket.

    The good news is, my VPS is actually running now, the bad news is

    Booting from Hard Disk
    Boot failed: not a bootable disk
    
    No bootable device.
    

    I tried rescue for giggles, and I found it actually looks like (some?) of my data is there, it's just in the wrong place on the virtual disk:

    00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
    *
    8c0000000  eb 63 90 10 8e d0 bc 00  b0 b8 00 00 8e d8 8e c0  |.c..............|
    8c0000010  fb be 00 7c bf 00 06 b9  00 02 f3 a4 ea 21 06 00  |...|.........!..|
    8c0000020  00 be be 07 38 04 75 0b  83 c6 10 81 fe fe 07 75  |....8.u........u|
    8c0000030  f3 eb 16 b4 02 b0 01 bb  00 7c b2 80 8a 74 03 02  |.........|...t..|
    8c0000040  80 00 00 80 18 85 16 00  00 08 fa 90 90 f6 c2 80  |................|
    8c0000050  75 02 b2 80 ea 59 7c 00  00 31 00 80 01 00 00 00  |u....Y|..1......|
    8c0000060  00 00 00 00 ff fa 90 90  f6 c2 80 74 05 f6 c2 70  |...........t...p|
    8c0000070  74 02 b2 80 ea 79 7c 00  00 31 c0 8e d8 8e d0 bc  |t....y|..1......|
    8c0000080  00 20 fb a0 64 7c 3c ff  74 02 88 c2 52 bb 17 04  |. ..d|<.t...R...|
    8c0000090  f6 07 03 74 06 be 88 7d  e8 17 01 be 05 7c b4 41  |...t...}.....|.A|
    8c00000a0  bb aa 55 cd 13 5a 52 72  3d 81 fb 55 aa 75 37 83  |..U..ZRr=..U.u7.|
    8c00000b0  e1 01 74 32 31 c0 89 44  04 40 88 44 ff 89 44 02  |[email protected].|
    8c00000c0  c7 04 10 00 66 8b 1e 5c  7c 66 89 5c 08 66 8b 1e  |....f..\|f.\.f..|
    8c00000d0  60 7c 66 89 5c 0c c7 44  06 00 70 b4 42 cd 13 72  |`|f.\..D..p.B..r|
    8c00000e0  05 bb 00 70 eb 76 b4 08  cd 13 73 0d 5a 84 d2 0f  |...p.v....s.Z...|
    8c00000f0  83 d0 00 be 93 7d e9 82  00 66 0f b6 c6 88 64 ff  |.....}...f....d.|
    8c0000100  40 66 89 44 04 0f b6 d1  c1 e2 02 88 e8 88 f4 40  |@f.D...........@|
    8c0000110  89 44 08 0f b6 c2 c0 e8  02 66 89 04 66 a1 60 7c  |.D.......f..f.`||
    8c0000120  66 09 c0 75 4e 66 a1 5c  7c 66 31 d2 66 f7 34 88  |f..uNf.\|f1.f.4.|
    8c0000130  d1 31 d2 66 f7 74 04 3b  44 08 7d 37 fe c1 88 c5  |.1.f.t.;D.}7....|
    8c0000140  30 c0 c1 e8 02 08 c1 88  d0 5a 88 c6 bb 00 70 8e  |0........Z....p.|
    8c0000150  c3 31 db b8 01 02 cd 13  72 1e 8c c3 60 1e b9 00  |.1......r...`...|
    8c0000160  01 8e db 31 f6 bf 00 80  8e c6 fc f3 a5 1f 61 ff  |...1..........a.|
    8c0000170  26 5a 7c be 8e 7d eb 03  be 9d 7d e8 34 00 be a2  |&Z|..}....}.4...|
    8c0000180  7d e8 2e 00 cd 18 eb fe  47 52 55 42 20 00 47 65  |}.......GRUB .Ge|
    8c0000190  6f 6d 00 48 61 72 64 20  44 69 73 6b 00 52 65 61  |om.Hard Disk.Rea|
    8c00001a0  64 00 20 45 72 72 6f 72  0d 0a 00 bb 01 00 b4 0e  |d. Error........|
    8c00001b0  cd 10 ac 3c 00 75 f4 c3  9a 0c d4 4c 00 00 80 20  |...<.u.....L... |
    8c00001c0  21 00 83 fe ff ff 00 08  00 00 80 ee d7 01 00 fe  |!...............|
    8c00001d0  ff ff 82 fe ff ff 80 f6  d7 01 00 00 08 00 00 00  |................|
    8c00001e0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
    8c00001f0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 55 aa  |..............U.|
    8c0000200  52 e8 28 01 74 08 56 be  33 81 e8 4c 01 5e bf f4  |R.(.t.V.3..L.^..|
    

    (Basically the output of hexdump -C /dev/vda, although rescue doesn't have hexdump, so I had to download an image of /dev/vda first.)

    I'm far from an expert on the on-disk format of partition tables, but I'm pretty sure they're supposed to be at the start of the disk, not 37GB in, lol.
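
    In case it helps anyone else on the same node: a quick way to confirm where LBA 0 actually ended up, without eyeballing hexdump output, is to scan the downloaded image for 512-byte sectors that end with the 55 AA boot signature and also contain the "GRUB" marker. Below is a minimal sketch of that, assuming you've already copied the volume out in rescue mode (e.g. dd if=/dev/vda of=disk.img); the disk.img name and 1 MiB read size are just illustrative, nothing VirMach-specific.

    #!/usr/bin/env python3
    # Minimal sketch: find 512-byte-aligned sectors in a raw disk image that end
    # with the 0x55AA boot signature and contain the "GRUB" marker, i.e. likely
    # copies of the original LBA 0 (MBR + partition table) on a shifted disk.
    # Assumes the image was pulled out in rescue mode first, e.g.:
    #   dd if=/dev/vda of=disk.img bs=4M
    # ("disk.img" and the 1 MiB read size are illustrative, not from this thread.)

    import sys

    SECTOR = 512
    CHUNK = 1024 * 1024  # always a multiple of SECTOR, so alignment is preserved

    def find_boot_sectors(path):
        hits = []
        offset = 0
        with open(path, "rb") as f:
            while True:
                chunk = f.read(CHUNK)
                if not chunk:
                    break
                # Walk the chunk sector by sector.
                for i in range(0, len(chunk) - SECTOR + 1, SECTOR):
                    sector = chunk[i:i + SECTOR]
                    if sector[510:512] == b"\x55\xaa" and b"GRUB" in sector:
                        hits.append(offset + i)
                offset += len(chunk)
        return hits

    if __name__ == "__main__":
        image = sys.argv[1] if len(sys.argv) > 1 else "disk.img"
        for off in find_boot_sectors(image):
            # 0x8c0000000 from the dump above is 37,580,963,840 bytes, i.e.
            # ~35 GiB / ~37.6 GB, which matches the "37GB in" observation.
            print(f"possible MBR at byte {off} (0x{off:x}, ~{off / 2**30:.1f} GiB in)")

    If the only hit lands around 0x8c0000000 like the dump above, that at least suggests the data survived and is merely shifted rather than wiped.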

  • @kRyTiCaL said:
    I've had a priority ticket in since CC went down, but there was no email communication or web page notification telling users to create such a specifically named "dedicated server migration" ticket unless they happened to read this forum.

    So my previous ticket with the subject "Easy migration, Don't need data, just new server, San Jose would be great!" was ignored, and the server has remained down and unmigrated.

    Regardless of ticket subjects, from an IT standpoint I would think you would have a list of assets for the locations that went down; then you would just go through the list and make sure everything is migrated.

    You didn't get the (admittedly delayed) email on 8/3 about "[Emergency] Potential Dedicated Server Service Disruption"? It didn't go out until several hours after the servers went offline, but there was actually a line in there that mentioned it:

    If you would like to more immediately switch your dedicated server to a server located with our new datacenter partners, please create a ticket in the "priority" department called "Dedicated Server Migration" and provide [...]

    If it's any consolation, I'm dumb and skimmed that part too fast, and named my ticket "Dedicated Server down." Looks like that was close enough that mine was noticed, it's unfortunate yours wasn't.

    Thanked by (1) skorous
  • @bakageta said:
    If it's any consolation, I'm dumb and skimmed that part too fast, and named my ticket "Dedicated Server down." Looks like that was close enough that mine was noticed, it's unfortunate yours wasn't.

    I am dumb, I’ve been going back and forth between LES, LET, and the actual Virmach portal so much looking for updates that I mixed up where I read what…

    My apologies!

  • skorous (OG, Senpai)

    @bakageta said:

    If it's any consolation, I'm dumb and skimmed that part too fast, and named my ticket "Dedicated Server down." Looks like that was close enough that mine was noticed, it's unfortunate yours wasn't.

    For what it's worth, I'm dumb too. I reported two different "Dedicated Server Down" tickets, and in each one I talked about how I could be last in the queue because I have redundancy, etc. They got merged into a single ticket, one server was deployed immediately, and the ticket was closed (heh heh). Ten points to @skorous for intentions, 0 points for accuracy and style. :-/

    @kRyTiCaL said: It makes sense that it would happen, I guess it was just the wording of:

    so if yours isn't completed, let me know. If your ticket is merged or titled anything else other than "Dedicated Server Migration" it's up to you but it might work to your benefit to close it and create a new one in the priority department titled as such.

    Makes it sound as though they are only handling this through tickets. “Correctly” named tickets first, and then all other tickets.

    Ahhhhh, I interpreted that as meaning you'd get your new server much faster if you did that. Maybe you're right.

  • Looks like I'm getting my replacement, just got an empty invoice and a new item in my client area. Seems to have a 2TB HDD as well, an upgrade from the 1TB one I had before. One down, 2 to go!

    Thanked by (1) skorous
  • I decided to give alma's elevate util a try instead of bothering anyone with a ticket, seems to have gone fine. One interesting thing I've noticed, my old dedi had a pretty typical supermicro micro-atx board and was presumably a 1u, while this new one is a supermicro blade, 12 nodes in a 3u if the model number I see is accurate. That feels like a solid attempt to keep these viable for a while, hopefully I get a nice long run out of this one.
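
    For anyone else stuck on CentOS and tempted to try the same route rather than waiting on a reinstall ticket, a rough sketch of the flow is below. This isn't anything VirMach-specific; the repo URL, package names, and report path come from the AlmaLinux ELevate documentation, so treat it as a starting point and read the preupgrade report before committing.

    #!/usr/bin/env python3
    # Rough driver for the AlmaLinux ELevate (leapp) CentOS 7 -> AlmaLinux 8 path.
    # Command, package, and file names follow the AlmaLinux ELevate docs; run as
    # root on a box you can afford to break, ideally after taking a backup.
    import subprocess

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # 1. Add the ELevate repo, then install leapp plus the AlmaLinux migration data.
    run(["yum", "install", "-y",
         "https://repo.almalinux.org/elevate/elevate-release-latest-el7.noarch.rpm"])
    run(["yum", "install", "-y", "leapp-upgrade", "leapp-data-almalinux"])

    # 2. Dry run: writes blockers and warnings to /var/log/leapp/leapp-report.txt.
    #    This is the part that tells you what it won't handle well, so read it
    #    and fix anything flagged as an inhibitor before going further.
    run(["leapp", "preupgrade"])

    # 3. The actual upgrade, followed by a reboot into the upgrade environment.
    run(["leapp", "upgrade"])
    run(["reboot"])

    With IPMI access still unavailable on these boxes, the reboot at the end is the risky step, so it may be safer to hold off if you can't tolerate the machine not coming straight back up.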

  • FAT32 (OG, Senpai)

    I just want to log in, but I am stuck on the Cloudflare captcha page forever, solving infinite captchas...

    食之无味 弃之可惜 - Too arduous to relish, too wasteful to discard.

  • skorous (OG, Senpai)

    @bakageta said:
    I decided to give alma's elevate util a try instead of bothering anyone with a ticket, seems to have gone fine. One interesting thing I've noticed, my old dedi had a pretty typical supermicro micro-atx board and was presumably a 1u, while this new one is a supermicro blade, 12 nodes in a 3u if the model number I see is accurate. That feels like a solid attempt to keep these viable for a while, hopefully I get a nice long run out of this one.

    I've done it several times (including on this one I just got) and it's always been fine. It does a pretty good job of telling you what it's not going to handle well, so you can prepare.

    Thanked by (1) bakageta
  • edited August 2022

    Got all my boxes, stuck on CentOS though. Reinstalls can take a week, so I guess I'll attempt to live with CentOS until then.

  • @FAT32 said:
    I just want to log in, but I am stuck on the Cloudflare captcha page forever, solving infinite captchas...

    Proof that FAT32 is a robot, can't solve captcha correctly!

    LAXA032 has constantly returned "Operation Timed Out After 90001 Milliseconds With 0 Bytes Received" since 08/14.

  • cybertech (OG, Benchmark King)
    edited August 2022

    @add_iT said:

    @cybertech said:
    TYOC040 is down

    @VirMach said:

    @cybertech said:
    TYOC040 is down

    This isn't down. If you have issues with your service please make a priority ticket, but at this point it might be best to private message me here. Private message from only @cybertech please, just in case more people are affected.

    Same with me. I just checked from my monitoring and it has been down; I also can't access the panel from the billing panel, it says timeout:

    Operation Timed Out After 90001 Milliseconds With 0 Bytes Received

    @Anoneko said:

    @VirMach said:

    @cybertech said:
    TYOC040 is down

    This isn't down. If you have issues with your service please make a priority ticket, but at this point it might be best to private message me here. Private message from only @cybertech please, just in case more people are affected.

    My TYOC040 is also offline; basic controls time out and SolusVM refuses to respond. Monitoring tells me this is the second time in two weeks; the last time it happened was on August 2 and it lasted a day and 5 hours. Oops, I also seem to have sent a ticket after a similar situation in early July, and it is still waiting for staff review.

    By the way, the login button on your new home page points to an invalid link.

    Mine shows up on WHMCS and SolusVM, but reinstalling it doesn't work; both the VM and VNC are still offline after clicking reinstall in SolusVM.

    I bench YABS 24/7/365 unless it's a leap year.

This discussion has been closed.