[2022] ★ VirMach ★ RYZEN ★ NVMe ★★ The Epic Sales Offer Thread ★★

Comments

  • VirMach's ticket list/queue (screenshot)

  • @rockinmusicgv said: there simply weren't enough Ryzen chips/mobos/servers on the market to get setup somewhere else? And now the SHTF...

    i remembered virmach said he can't get server chassis...

  • VirMach Hosting Provider

    @yoursunny said:

    @VirMach said:
    2x1TB SSD and Ryzen processor

    These are rookie specs.
    It gotta be:

    • EPYC 5th generation, 96 cores
    • 3072 GB RAM
    • 4x 8TB NVMe

    delivery in 2028

    That'd be interesting if we allowed people to request whatever they want; the only catch is we'd only do it when it's financially feasible. I'm sure inflation and energy costs will catch up to us eventually though, so running a 180W TDP in 2032 might end up costing $80 a month.

    @fluttershy said:

    @adly said:
    It’s great to see a few people wanting to help - I’m also available if @VirMach is interested. However, there is probably a better way to approach them about it, such as @MikePT having a personal/Skype contact.

    Also open to handling a few tickets if needed.

    Yes, but are you able to handle opening a few tickets?

    @taizi said:

    @rockinmusicgv said: there simply weren't enough Ryzen chips/mobos/servers on the market to get setup somewhere else? And now the SHTF...

    i remembered virmach said he can't get server chassis...

    @rockinmusicgv said:
    I haven't been paying super close attention to the thread so I hope I'm not asking something that's already been discussed. Is some of the problem caused by VirMach being unable to get new equipment due to shortages? I.e., VirMach wanted to leave a certain WNY datacenter but there simply weren't enough Ryzen chips/mobos/servers on the market to get setup somewhere else? And now the SHTF...

    I'll try to tell the whole story again and add some more details.

    What ended up happening is we had something like ~200 chassis with E3 processors, and enough CPUs, SSDs, and motherboards for them all. These were for dedicated server customers.

    ASRock, the only brand that makes Ryzen motherboards for datacenter use, and therefore Ryzen motherboard-and-chassis combos (the motherboards have a built-in BMC chip, so IPMI), ran into huge supply chain issues. They initially quoted us, let's say, $X per chassis-and-motherboard combo and then ghosted us for a few months, no longer replying to our sales requests. By the time we grabbed their attention and went through the entire process of contacting the same sales manager, he didn't seem very interested in making the deal (but was essentially forced to reply after we bothered a dozen people about it). So he just punted it to one of their resellers instead, who quoted us a price higher than MSRP. This ended up being about double the original cost and was not within our budget.

    It no longer made sense to buy the motherboard-and-chassis combo, since it was something like triple the cost of just buying the motherboard. The combo came with a power supply, heatsink, fans, and the appropriate brackets/cables.

    At the same time, this was when gasoline prices were skyrocketing and there was a general shortage of workers in the US and globally. This meant we now had to individually source all these missing parts ourselves. There were only so many power supplies we could purchase, only so many fans, and we also had to 3D print brackets because they weren't really for sale for a custom chassis being converted to fit what we were trying to achieve.

    Most of it was OK; getting a bulk quantity of the same chassis was the more difficult part. We got lucky with the ~200 we had purchased earlier, and got them at a good price, but with shipping costs alone it would have ended up costing around $50 per chassis just to get them shipped. Something weird happened around the same time where shipping via freight actually ended up being more expensive than just sending them box by box, so we couldn't save any money by purchasing in bulk either. The final issue was getting rails that fit the specific chassis we would have to purchase, which made it essentially impossible to get 100+ chassis that were more or less the "same", to the point where we could CAD and 3D print parts for them. Most of the "bulk" chassis available came with built-in I/O panels for specific boards, so we would have had to cut out a square on every single one if we went that route.

    Long story short, by the time we absolutely had to finish these builds, which at this point consisted of all of them being "done" except for placing them in the chassis, the only viable option was to use the chassis we had already built for the customer E3 dedicated servers. We also already had rails for them, which were difficult to find on short notice in bulk quantities. This meant our plan for having replacements for E3 servers had to be delayed indefinitely. To add insult to injury, it meant we had to undo all the work we had already done for the E3 servers (putting in and testing the motherboards, labeling everything, putting in RAID controllers, SSD brackets, etc.). We probably did end up saving $50,000 in the process, but that was $50,000 we hadn't planned on having to spend on top of everything in the first place, so, again, it was the only option for various reasons. And even if we had paid the extra money to get the proper chassis-and-motherboard combos, they were still on a two-month backorder.

    So at least 100 chassis meant for customer E3 servers ended up being repurposed for our Ryzen builds, which were going to be used for customer VPS services instead. The other 100 or so left were a different type of server where the rail kits were impossible to find for months and months, and those also happened to be the ones we hadn't finished building out yet as a result of having no rails (they were de-prioritized). This meant that, on top of everything, we were now short on time. Even if we did figure out the rail kit situation, maybe by using universal rails, we still had to redo the builds into the second half of the chassis.

    The backup plan was to just rent these servers instead, and we found a partner for those bulk-quantity E3 rentals; we just ended up getting screwed over at the finish line by what abruptly happened last week.

    As for customer Ryzen builds, we did have a smaller portion of the E3 purchases being upgraded by customers to Ryzen builds, I believe 3600X. We just couldn't do those because all the motherboards ended up being used for the VPS builds. The prices on those also went up, and the only alternative required a KVM switch setup for IPMI, which meant we had to plan out more specifically exactly which cabinets they went into, and it ended up being a logistical nightmare with everything else going on. Again, this was just a small quantity of them, so it's basically just a footnote to the whole scenario.

    @adly said:

    @imok said:

    @FAT32 said:

    @VirMach said:
    Also aware of this. I think the mobile version just uses the full-size image, right? I just had the one old large logo and, again, lazily used it for now.

    Correct. Since you already have the template, I can help bring the site back to a professional state in 24-48 hours (fixing all the small layout bugs, etc.).

    @MikePT and I are just trying to help our favourite provider.

    I can reply to basic tickets while I'm watching "How I Met Your Mother" if you want.

    It’s great to see a few people wanting to help - I’m also available if @VirMach is interested. However, there is probably a better way to approach them about it, such as @MikePT having a personal/Skype contact.

    I saw your kind private message, as well as other offers by everyone. I'm still trying to think if there's a viable method of accepting everyone's offer of assistance. The closest idea I had was launching an official message board where people such as yourself could essentially have the role of official moderators and be equipped to deal with the majority of basic questions and requests which do take up the majority of the current ticket queue.

    I'll try to finalize a plan, if any, over the weekend.

  • @VirMach said: The closest idea I had was launching an official message board where people such as yourself could essentially have the role of official moderators and be equipped to deal with the majority of basic questions and requests which do take up the majority of the current ticket queue.

    An official Discord would be neat.

    Thanked by (1) imok
  • Honestly, this kind of openness makes me respect you more as a provider. Maybe when the dust has settled a bit you should do a blog post, or maybe even offer raindog a proper article about these logistical issues rather than his speculations.

  • @soulchief said:

    @VirMach said: The closest idea I had was launching an official message board where people such as yourself could essentially have the role of official moderators and be equipped to deal with the majority of basic questions and requests which do take up the majority of the current ticket queue.

    An official Discord would be neat.

    Yes.

    Can I have my own public channel? I would like to share flan and jelly photos. Everybody should know about Pollo a la brasa too.

    Thanked by (1) dedicados
  • @soulchief said:

    @VirMach said: The closest idea I had was launching an official message board where people such as yourself could essentially have the role of official moderators and be equipped to deal with the majority of basic questions and requests which do take up the majority of the current ticket queue.

    An official Discord would be neat.

    Agreed! I'm sort of surprised there isn't one already.

    @VirMach said:

    Yes, but are you able to handle opening a few tickets?

    I'll try my best to open as many tickets as possible, just for you <3 /s
    I'm happy to dust off my old helpdesk hat if you do need assistance; I have a feeling you aren't getting much sleep lately. Try not to burn out! I've made that mistake a few times before.

    I didn't mean to make you type all that out. I just remember seeing a post about a server that might have been dropped. I know from my own experience that getting chips out of China/Taiwan/HK has been nearly impossible over the past few years. The last time I had a quote, it was something on the order of $60/kg using a courier like FedEx. I can't imagine how much it must've cost to get server chassis out.

    If you need any logistics help, let me know; I might be able to help out...

  • vyas OG Senpai
    edited August 2022

    @VirMach said:

    Too much to read and retain at the same time, even though it was quite a read!
    Here's the audio version. I threw in some sound effects for good measure.

    Audio: The Virmach Saga

    https://pod.co/gatha-podcasts/the-virmach-migration-sage

    Thanked by (3) FrankZ, imok, DanSummer
  • @vyas Sage=saga?

    It wisnae me! A big boy done it and ran away.
    NVMe2G for life! until death (the end is nigh)

  • vyas OG Senpai
    edited August 2022

    @AlwaysSkint said:
    @vyas Sage=saga?

    50-50 Typo and Intentional :astonished:

    • Sage (or sage-ish)

    (GIF: Jon Hamm, Mad Men)

    Thanked by (1) AlwaysSkint
  • @VirMach said: I'm still trying to think if there's a viable method of accepting everyone's offer of assistance. The closest idea I had was launching an official message board where people such as yourself could essentially have the role of official moderators and be equipped to deal with the majority of basic questions and requests which do take up the majority of the current ticket queue.

    My personal advice would be to accept and delegate some work (maybe aging or basic tickets?) so you can feed your focus 👌

    Thanked by (1) adly
  • VirMach Hosting Provider

    Update.

    For virtual servers, we've located and organized disaster recovery backups for any remaining VMs. This is about 700 or 800 of them, and it'd be for any VMs that got marked as "locked" around Wednesday and not yet migrated. A small quantity of VMs had corrupt backups, I think maybe 20 of them total, so for those we'll recreate them at the end. This number doesn't include any other nodes or transfers that previously had issues; it's only for the most recent issue. These were delegated to someone else earlier on Wednesday, but he unfortunately ran into issues and I needed to help, so it was delayed.

    For dedicated servers, we now have about 30% of them available for immediate provisioning. I fell behind on handing them out the last two days as several other tasks came up, such as fixing ATLZ007 (which went down again), configuring several new nodes, troubleshooting the last dozen builds I need to send out, and probably a dozen other smaller tasks, but we're on track to finish all the deliveries today.

  • Jab Senpai

    VirMach said: A small quantity of VMs had corrupt backups, I think maybe 20 of them total, so for those we'll recreate them at the end.

    Place your bets:
    Is Jab's VPS one of those or not?!

    Thanked by (1) ralf

    Haven't bought a single service in VirMach Great Ryzen 2022 - 2023 Flash Sale.
    https://lowendspirit.com/uploads/editor/gi/ippw0lcmqowk.png

  • @VirMach said:
    Update.

    For virtual servers, we've located and organized disaster recovery backups for any remaining VMs. This is about 700 or 800 of them, and it'd be for any VMs that got marked as "locked" around Wednesday and not yet migrated.

    I'm guessing that it would in fact be more work for you if some of us didn't care about the restore and just wanted the VPS active again (i.e., we'd rebuild from scratch once they were configured on a host node)?

    I know the two I have in this state have nothing critical on them, and I can rebuild them pretty quickly, but I assume that would involve more manual work to separate them from the restore routines and such?

  • @Jab said:
    Place your bets:
    Is Jab's VPS one of those or not?!

    Depends on how much Jab has bugged Virmach lately? :grin:

  • Hxxx OG
    edited August 2022

    All this work and tragedy just because you are inclined to use Ryzen, a desktop CPU. I do understand they are a nice, cheap alternative to EPYC, people love the brand, etc. But what about other CPUs and types of builds? Maybe this process would have been less of an issue, especially the mobo/chassis part?

    From what I gather... you are a big provider; the numbers you are mentioning aren't small.

    WebNX / GorillaServers (sister company) has something like an overstock of Ryzen servers. What about making a deal there? It's probably better than buying and colo'ing at this stage.

    Thanked by (2) yoursunny, vyas
  • FrankZ Moderator
    edited August 2022

    @VirMach said: Here's my philosophy on negative press: I like it because it helps ignorant customers stay away. It's mutually beneficial.

    This is good to know, and I expect you are correct.

    EDIT: Then it is also a good thing that LEB mods seem to be removing any positive comments from the previously mentioned blog post and just showing repeated negative comments by the same users.

    Thanked by (1) skorous

    For staff assistance or support issues please use the helpdesk ticket system at https://support.lowendspirit.com/index.php?a=add

  • Ryujin
    edited August 2022

    @FrankZ said:

    @VirMach said: Here's my philosophy on negative press: I like it because it helps ignorant customers stay away. It's mutually beneficial.

    This is good to know, and I expect you are correct.

    EDIT: Then it is also a good thing that LEB mods seem to be removing any positive comments from the previously mentioned blog post and just showing repeated negative comments by the same users.

    Yeah, my comment got deleted, and a couple of the comments regarding Justin got deleted.

    Thanked by (2) FrankZ, DanSummer
  • @VirMach said: I'll try to finalize a plan, if any, over the weekend.

    Good plan.

  • @FrankZ said:

    @VirMach said: Here's my philosophy on negative press: I like it because it helps ignorant customers stay away. It's mutually beneficial.

    This is good to know, and I expect you are correct.

    EDIT: Then it is also a good thing that LEB mods seem to be removing any positive comments from the previously mentioned blog post and just showing repeated negative comments by the same users.

    LEB has been a joke for a while, with LET close behind, particularly following the influx of the new ‘moderators’. Despite this, Jon seems to think it’s some extremely valuable resource that people are failing to utilise, rather than the actuality of it having the reverse Midas touch (everything it touches turns to shit).

  • @Ryujin said:

    @FrankZ said:

    @VirMach said: Here's my philosophy on negative press: I like it because it helps ignorant customers stay away. It's mutually beneficial.

    This is good to know, and I expect you are correct.

    EDIT: Then it is also a good thing that LEB mods seem to be removing any positive comments from the previously mentioned blog post and just showing repeated negative comments by the same users.

    Yeah, my comment got deleted, and a couple of the comments regarding Justin got deleted.

    Funny how that works. If you're a big user of ColoCrossing then you're a hero. The greatestest provider in the universe.

    But once you decide that CC is not for you anymore, you get ousted. Almost literally.

    Thanked by (2) Ryujin, FrankZ
  • Jab Senpai
    edited August 2022

    VirMach-FFME001-gateway is now DOWN
    Target: 149.57.160.1[ping]
    Noticed at: 2022-08-08 10:50:36 (UTC +01:00)
    Encountered errors:
    Amsterdam: Timeout (5 sec)
    Frankfurt: Timeout (5 sec)
    Warsaw: Timeout (5 sec)

    Losing millions again :bleep_bloop:

    VirMach-FFME001-gateway is now UP
    Downtime: 3 min
    Target: 149.57.160.1[ping]
    Noticed at: 2022-08-08 10:53:36 (UTC +01:00)

    Millions have been restored; also, SolusVM seems to be working now :P

    aaaaaaand it's down again.

    VirMach-FFME001-gateway is now DOWN
    Target: 149.57.160.1[ping]
    Noticed at: 2022-08-08 10:54:39 (UTC +01:00)
    Encountered errors:
    Amsterdam: Timeout (5 sec)
    London: Timeout (5 sec)
    Frankfurt: Timeout (5 sec)
    Warsaw: Timeout (5 sec)

    Millions lost again, VirMach doubling the rate of losing millions.

    and it's back after another 3 minutes - SolusVM back to NOT working too. Operation Timed Out After 90001 Milliseconds With 0 Bytes Received. VirMach must be having a fun day there.

    Thanked by (1) markbidz

    Haven't bought a single service in VirMach Great Ryzen 2022 - 2023 Flash Sale.
    https://lowendspirit.com/uploads/editor/gi/ippw0lcmqowk.png

  • I need to stop checking the VirMach posts on OGF early in the morning; my head hurts from the stupidity there :(

  • Hello,

    I like those super special and low-cost offers, and I understand that since it's a super cheap offer it's normal not to get fast support, as well as the fact that there are a ton of tickets related to the migration, etc.

    I'm on a node named NYCB011X. I wonder what the "X" means? My instance has been offline for more than a week, and clicking the "boot" button doesn't change anything. It doesn't start at all: no BIOS, no KVM, nothing. I've opened a "priority support" ticket but yeah, they are all busy ^^. So what does this X mean? Is it something regular, or something about an ongoing issue with the node? There is nothing on the Network Status page.

    This thread goes fast, so maybe I've missed the part about it. Thanks for answering ^^.

    Site Reliability Engineer using DevOps mindset. High interest in so many hosting companies (VPN, Drive, Web, VPS, etc.) and believe in privacy.
    Opinions are my own.

  • lesuser OG
    edited August 2022

    My VPS, ever since it was migrated to Ryzen, has been working fine and dandy. I think it was 12 days ago, but I waited 3-4 days before migrating my site, and it has been running perfectly ever since. In fact, its speed is noticeably faster than before thanks to the new Ryzens.

    There is another VPS where I am facing a small issue: I'm not able to access SolusVM. I opened a ticket about 14 days ago, but I understand there are other critical issues, so no big deal; I can wait.

    EDIT: Wow, this is my first comment even though I have been a member since Nov 2019.

    Thanked by (1) VirMach
  • AlwaysSkint OG Senpai
    edited August 2022

    @o_be_one said: So what does this X mean? Is it something regular, or something about an ongoing issue with the node?

    The designation was mentioned on OGF (IIRC something to do with a particular server build) and it was subsequently dropped - got sod all to do with the status of the node. VirMach will rightly be concentrating on getting dedicated servers sorted out before returning to VPS nodes. Adding a ticket to the long queue will serve no purpose whatsoever. Patience.
    One assumes you have a backup of your exceedingly important & expensive VPS; therefore (in the interim) an attempt to install Ryzen AlmaLinux from the Solus control panel would be a viable option.

    Thanked by (1) o_be_one

    It wisnae me! A big boy done it and ran away.
    NVMe2G for life! until death (the end is nigh)

  • skorous OG Senpai

    @o_be_one said: I wonder what the "X" means?

    Yeah, if you go back through the thread you'll see it designates the node as being 10G-capable. The absence of an X doesn't necessarily mean a lack of 10G, because he started adding it to hostnames part-way through.

  • skorous OG Senpai

    @AlwaysSkint said:

    @o_be_one said: So what does this X mean? Is it something regular, or something about an ongoing issue with the node?

    The designation was mentioned on OGF (IIRC something to do with a particular server build) and it was subsequently dropped - got sod all to do with the status of the node. VirMach will rightly be concentrating on getting dedicated servers sorted out before returning to VPS nodes. Adding a ticket to the long queue will serve no purpose whatsoever. Patience.

    Wait, it was dropped? I thought he started adding it partway through... crap, maybe I'm remembering it wrong. I felt pretty good that I had that one right.

This discussion has been closed.