Hardware Poll - Server purchases in 2024

NathanCore4 Services Provider

Hi!
I posted a poll a few months back on which server brands LES members used, and thought the results were interesting. https://lowendspirit.com/discussion/7044/hardware-poll-what-brand-servers-do-you-prefer#latest

It seems like there is a good mix of LES members across different-sized operations with different preferences, so I was curious to understand it more. Hopefully we all get some interesting insights from this poll.

Feel free to comment!

Server Purchases in 2024 Poll
  1. How many servers do you expect to buy this year? (27 votes)
    1. None
      11.11%
    2. 1 - 5x
      51.85%
    3. 6 - 20x
      14.81%
    4. 21 - 50x
        3.70%
    5. 51 - 100x
        0.00%
    6. Over 100x
      18.52%
  2. Do you buy servers fully configured or build them yourself? (27 votes)
    1. I buy fully configured servers
      25.93%
    2. I build servers myself from components
      29.63%
    3. I do both
      44.44%
  3. Do you plan to buy any GPU servers this year? (27 votes)
    1. No
      74.07%
    2. Yes
      25.93%
  4. CPU preference? (27 votes)
    1. Intel
      29.63%
    2. AMD Epyc
      22.22%
    3. AMD Ryzen
      44.44%
    4. Other
        3.70%
  5. Server Type Preference? (27 votes)
    1. Tower
        7.41%
    2. Single Node 1U
      55.56%
    3. Single Node 2U
        7.41%
    4. Single Node 3U+
        0.00%
    5. Multi-Node / Blade
        3.70%
    6. A mix of these options
      25.93%

Comments

  • host_c Hosting Provider
    edited March 5

    Dell4Life, or at least until we get a divorce. =)

    HP was the first, she cheated on me with paid BIOS updates, nasty girl..... I hope they will not mess up Junos, highly doubt it....

    Thanked by (1) NathanCore4

    Host-C - VPS Services Provider - AS211462

    "If there is no struggle there is no progress"

  • The stallion coder builds high-performance network routers out of servers.
    We want many PCIe x16 slots wired straight to the CPU, without going through a PCIe switch.

    Server brand?
    I don't care.

    IPMI?
    I never use.

    CPU?
    The real stuff with >16 cores per NUMA socket.
    Ryzen / 1230 are for amateurs.

    RAM?
    Must be high speed.
    4-8 GB per core (rough sizing sketch below).
    So sad the Optane Persistent Memory is discontinued.

    Budget?
    $1000 per PCIe slot.
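
    A minimal sketch (not from the original post) of checking the cores-per-NUMA-node and 4-8 GB-per-core targets on Linux, assuming the standard sysfs layout under /sys/devices/system/node:

      import glob

      def cpus_in(cpulist: str) -> int:
          """Count CPUs in a sysfs cpulist string like '0-15,32-47'."""
          total = 0
          for part in cpulist.split(","):
              if "-" in part:
                  lo, hi = part.split("-")
                  total += int(hi) - int(lo) + 1
              elif part:
                  total += 1
          return total

      # Walk the NUMA nodes exposed by the kernel and size RAM at 4-8 GB
      # per hardware thread (a loose proxy for the per-core target above).
      for node in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
          with open(node + "/cpulist") as f:
              nthreads = cpus_in(f.read().strip())
          name = node.rsplit("/", 1)[-1]
          print(f"{name}: {nthreads} hardware threads, "
                f"RAM target {4 * nthreads}-{8 * nthreads} GB")

    lscpu reports the same topology if you prefer a one-liner.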

    HostBrr aff best VPS; VirmAche aff worst VPS.
    Unable to push-up due to shoulder injury 😣

  • crunchbits Hosting Provider

    @host_c said:
    HP was the first, she cheated on me with paid BIOS updates, nasty girl..... I hope they will not mess up Junos, highly doubt it....

    We're all holding our breath about that now :/

    P.S. Quanta Queens > Dell Diddlers

    Thanked by (1) host_c
  • skhron Hosting Provider

    CPU preference?

    • AMD Epyc
      26.32%

    Why is it so underrated? EPYC looks promising to me.

    Thanked by (1) NathanCore4

    Check our KVM VPS (flags are clickable): 🇵🇱 🇸🇪 | Looking glass: 🇵🇱 🇸🇪

  • NathanCore4 Services Provider

    I'm also surprised that no one so far prefers multi-node servers, since they offer the highest density and often the lowest price per node for comparable specs. I'm curious if there is a reason, or if most just have not tried them. No right or wrong answer, as there are pros/cons and use cases for all the form factors.

  • servarica_hani Hosting Provider OG

    We usually prefer multi-node servers because we typically have 10 kW per rack, and it is sometimes hard to fill the rack with 1U servers to fully utilize that 10 kW (rough math in the sketch below).

    On the other hand, they are harder to sell when you decommission them.
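
    Rough math behind that, as a sketch with assumed per-chassis wattages (not servarica's actual numbers):

      # Illustrative only: why a 10 kW rack feed is hard to fill with 1U
      # servers. Per-chassis wattages below are assumptions.
      RACK_POWER_W = 10_000
      RACK_UNITS = 42

      configs = {
          "1U single node (~200 W each)": (200, 1),   # (watts, rack units)
          "2U 4-node (~1000 W each)": (1000, 2),
      }

      for label, (watts, units) in configs.items():
          # Whichever constraint runs out first caps the rack.
          fit = min(RACK_POWER_W // watts, RACK_UNITS // units)
          print(f"{label}: {fit} chassis, {fit * watts / 1000:.1f} kW drawn, "
                f"{fit * units}U occupied")

    With these assumptions a rack full of 1U boxes strands about 1.6 kW, while the multi-node gear hits the full 10 kW in under half the space.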

    Thanked by (2) NathanCore4 crunchbits
  • @NathanCore4 said:
    I'm also surprised that no one so far prefers multi-node servers, since they offer the highest density and often the lowest price per node for comparable specs. I'm curious if there is a reason, or if most just have not tried them. No right or wrong answer, as there are pros/cons and use cases for all the form factors.

    We bought a 4-node server from your buddy Luke.
    It's for a specialized use case where the workload must be isolated onto different physical machines, to eliminate performance interference.

    Other than that, I would always pick single-node servers to reduce the management headache.
    One more node means one more weekly apt-get update.
    I can then run Docker containers or KVMs with CPU isolation to support the workloads.
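
    A minimal sketch of that kind of isolation using Docker's cpuset pinning (container names, image, and core ranges are placeholders, not from the original comment):

      # Hypothetical sketch: pin each workload's container to its own cores
      # with Docker's --cpuset-cpus so workloads do not interfere.
      import subprocess

      WORKLOADS = {
          "workload-a": "0-7",     # cores 0-7 reserved for workload A
          "workload-b": "8-15",    # cores 8-15 reserved for workload B
      }

      for name, cpus in WORKLOADS.items():
          subprocess.run(
              [
                  "docker", "run", "-d",
                  "--name", name,
                  "--cpuset-cpus", cpus,   # hard CPU affinity for this container
                  "--memory", "32g",       # cap RAM per workload (example value)
                  "debian:stable", "sleep", "infinity",   # placeholder command
              ],
              check=True,
          )

    The KVM route is the same idea via CPU pinning in the guest definition (e.g. libvirt's vcpupin).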

    Thanked by (1) NathanCore4

    HostBrr aff best VPS; VirmAche aff worst VPS.
    Unable to push-up due to shoulder injury 😣

  • Advin Hosting Provider
    edited March 6

    @NathanCore4 said:
    I'm also surprised that no one so far prefers multi-node servers, since they offer the highest density and often the lowest price per node for comparable specs. I'm curious if there is a reason, or if most just have not tried them. No right or wrong answer, as there are pros/cons and use cases for all the form factors.

    Some datacenters have power/density restrictions, so multi-node configurations don't work out for everyone, especially the high-power stuff. Also, if there are any problems with backplanes, PSUs, or any of the shared functions, you now have 4 nodes offline instead of 1. In the VPS industry, a lot of hosts also prioritize RAM density over cores, since RAM is typically the primary bottleneck, and these multi-node servers typically have limited DIMM slots or require dual CPUs per node.

    Getting a specific replacement part is also virtually impossible, especially with systems that have a small market or come from relatively unknown vendors. In our standard configurations, we use a standard E-ATX board and a SuperMicro chassis with PSUs that have a big used market. If a PSU were to fail, it is remarkably easy to find a replacement part, because SuperMicro has used the same standardized chassis/PSU for an extremely long time. If an E-ATX board were to fail, we could easily swap in a replacement, available from multiple vendors like Gigabyte, ASUS, SuperMicro, Tyan, etc. Now let's say a motherboard or PSU failed in one of those high-density configurations: good luck finding replacement parts, because they are usually proprietary. Gigabyte support takes weeks to reply and may not have the part in stock (especially for older configurations).

    Maybe this is not the case for everyone, but these are my reasons for not buying them.
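
    To put rough numbers on the RAM-vs-cores point, a back-of-the-envelope sketch (all values are illustrative assumptions, not Advin's):

      # Back-of-the-envelope: on a typical VPS node, RAM runs out before cores.
      NODE_CORES = 32        # physical cores in the node (assumed)
      NODE_RAM_GB = 128      # limited DIMM slots keep RAM modest on multi-node gear
      CPU_OVERSELL = 4       # vCPUs sold per physical core (assumed ratio)
      PLAN_VCPU, PLAN_RAM_GB = 2, 4   # example plan: 2 vCPU / 4 GB, RAM not oversold

      by_cpu = NODE_CORES * CPU_OVERSELL // PLAN_VCPU   # 64 plans fit by CPU
      by_ram = NODE_RAM_GB // PLAN_RAM_GB               # 32 plans fit by RAM
      print(f"fits by CPU: {by_cpu}, fits by RAM: {by_ram}, "
            f"sellable: {min(by_cpu, by_ram)}")

    With these made-up numbers, RAM caps the node at half of what the oversold cores could carry, which is the density argument above.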

    Thanked by (2) NathanCore4 Abdullah

    I am a representative of Advin Servers

  • crunchbits Hosting Provider

    @Advin said:

    @NathanCore4 said:
    I'm also surprised that no one so far prefers multi-node servers, since they offer the highest density and often the lowest price per node for comparable specs. I'm curious if there is a reason, or if most just have not tried them. No right or wrong answer, as there are pros/cons and use cases for all the form factors.

    Some datacenters have power/density restrictions, so multi-node configurations don't work out for everyone, especially the high-power stuff. Also, if there are any problems with backplanes, PSUs, or any of the shared functions, you now have 4 nodes offline instead of 1. In the VPS industry, a lot of hosts also prioritize RAM density over cores, since RAM is typically the primary bottleneck, and these multi-node servers typically have limited DIMM slots or require dual CPUs per node.

    Getting a specific replacement part is also virtually impossible, especially with systems that have a small market or come from relatively unknown vendors. In our standard configurations, we use a standard E-ATX board and a SuperMicro chassis with PSUs that have a big used market. If a PSU were to fail, it is remarkably easy to find a replacement part, because SuperMicro has used the same standardized chassis/PSU for an extremely long time. If an E-ATX board were to fail, we could easily swap in a replacement, available from multiple vendors like Gigabyte, ASUS, SuperMicro, Tyan, etc. Now let's say a motherboard or PSU failed in one of those high-density configurations: good luck finding replacement parts, because they are usually proprietary. Gigabyte support takes weeks to reply and may not have the part in stock (especially for older configurations).

    Maybe this is not the case for everyone, but these are my reasons for not buying them.

    While I love multi-node (and I voted "mix" with that in mind, because I also have strong use cases without it), these are valid points. Take the beloved Quanta T41s: their onboard SATA data cables use standard SFF connectors but a custom pin-out. You have to contact an obscure manufacturer in HKG/CN that has this layout to get your custom-pinned cables made, in a minimum quantity of hundreds/thousands, for it to remotely make sense. Even then, you're generally better off just buying more nodes and having spares for 20% more cost, or using an add-on SAS card. Parts interchangeability is very often overlooked. Amazing what the same SuperMicro chassis from 10 years ago still runs today =)

    They have their place, but it does require a special set of circumstances. Most datacenters/colo customers simply aren't looking for 10 kW-17.3 kW+ per rack, so they can get to their committed power using 1U/2U chassis.

    Thanked by (1) NathanCore4
  • Advin Hosting Provider
    edited March 6

    @crunchbits said:
    While I love multi-node (and I voted "mix" with that in mind, because I also have strong use cases without it), these are valid points. Take the beloved Quanta T41s: their onboard SATA data cables use standard SFF connectors but a custom pin-out. You have to contact an obscure manufacturer in HKG/CN that has this layout to get your custom-pinned cables made, in a minimum quantity of hundreds/thousands, for it to remotely make sense. Even then, you're generally better off just buying more nodes and having spares for 20% more cost, or using an add-on SAS card. Parts interchangeability is very often overlooked. Amazing what the same SuperMicro chassis from 10 years ago still runs today =)

    They have their place, but it does require a special set of circumstances. Most datacenters/colo customers simply aren't looking for 10 kW-17.3 kW+ per rack, so they can get to their committed power using 1U/2U chassis.

    As soon as a Quanta product goes into retirement, they delete literally every mention of it from their website and all of the public documentation; you have to contact support to receive any information. I was thinking of buying a really cheap dual-node EPYC chassis from them a while ago on eBay, but because of the lack of any reference or documentation from Quanta, I opted not to :p

    Craft Computing ended up buying the dual-node EPYC chassis and discovered that there was no documentation; they could not get it to boot either.

    I am a representative of Advin Servers

  • @Advin said:
    Some datacenters have power/density restrictions, so multi-node configurations don't work out for everyone, especially the high power stuff.

    When I noticed the machines starting to shut down randomly, I simply moved the power plug to the circuit in the adjacent rack.

    In the VPS industry, a lot of hosts also typically prioritize RAM density over cores, since RAM is typically the primary bottleneck

    Sell as VDS and you'll love the cores again.

    Getting a specific replacement part is also virtually impossible, especially with systems that have a small market or from relatively unknown vendors.

    I think all the Intel compute modules are interchangeable, across multiple generations.
    If a node fails, pull it out and insert another.

    HostBrr aff best VPS; VirmAche aff worst VPS.
    Unable to push-up due to shoulder injury 😣

  • crunchbits Hosting Provider

    @Advin said:

    @crunchbits said:
    While I love multi-node (and I voted "mix" with that in mind, because I also have strong use cases without it), these are valid points. Take the beloved Quanta T41s: their onboard SATA data cables use standard SFF connectors but a custom pin-out. You have to contact an obscure manufacturer in HKG/CN that has this layout to get your custom-pinned cables made, in a minimum quantity of hundreds/thousands, for it to remotely make sense. Even then, you're generally better off just buying more nodes and having spares for 20% more cost, or using an add-on SAS card. Parts interchangeability is very often overlooked. Amazing what the same SuperMicro chassis from 10 years ago still runs today =)

    They have their place, but it does require a special set of circumstances. Most datacenters/colo customers simply aren't looking for 10 kW-17.3 kW+ per rack, so they can get to their committed power using 1U/2U chassis.

    As soon as a Quanta product goes into retirement, they delete literally every mention of it from their website and all of the public documentation; you have to contact support to receive any information. I was thinking of buying a really cheap dual-node EPYC chassis from them a while ago on eBay, but because of the lack of any reference or documentation from Quanta, I opted not to :p

    Craft Computing ended up buying the dual-node EPYC chassis and discovered that there was no documentation; they could not get it to boot either.

    I saw that video, and had seen those units before. Given my experience with Quanta stuff gradually getting worse with each newer generation of CPU/chipset, I definitely opted to stay far away. Feels like they peaked around E5 v4, siiiiip.

    Anything newer I've had from them has been one-offs only, because it's been somewhere between a huge PITA and unstable/random bugs. Documentation-wise, very true, too.
