Comments
Yes
@Mr_Tom how about a €1/year deal from the beautiful VM specialists?
Team push-ups!
err... no
epik deal
I bench YABS 24/7/365 unless it's a leap year.
TBH, Epik would never offer such a deal
It was only @cociu, and his love for @Nekki that made this possible. As long as it lasts.
Hosthatch storage after migrated to EPYC
Hosthatch BF2021 Flash — Storage 2TB
Sultan Muda - Amazon Store
an interesting specimen from datapacket.net. All of their VPS have “16 cores”, even the 1gb one. I guess you really have to hope your neighbors are good people
IP is from AS209, CenturyLink
Hosthatch storage BF flash sale (Stockholm) @ Ubuntu 20.04
Contribute your idling VPS/dedi (link), Android (link) or iOS (link) devices to medical research
Adding swap will probably make GB5 run fine.
after @lentro mentioned the impact of ZFS caching on benchmark numbers, here is an experiment to show this again...
the VM is built on a Hetzner dedi with only HDDs, so keep in mind that IOPS are technically limited to roughly ~180 per second, depending on the average access time of that drive model (i.e. 1s / 5.5ms). I put three disks in RAID-Z1 and limited ARC to 4GB on the host node; the disk image is deployed directly into the ZFS pool...
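that ~180 figure is just the single-disk seek math (assuming a ~5.5ms average access time for this drive model):

```shell
# max random IOPS of one HDD ≈ 1 second / average access time
awk 'BEGIN { printf "%d\n", 1000 / 5.5 }'   # → 181
```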
standard unmodified yabs:
it always bugged me that the 512k and 1m IOPS seemed too low, and watching the load and I/O wait on the node, it looked very much like the filled ARC that needed to be written to disk at that point was blocking the whole process. like what was gained in 4k/64k now gets in the way...
so, how about adding a sleep 5 into the fio loop as a 'cooldown period' for the ARC?
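roughly like this, sketched from memory (the fio flags are illustrative, not yabs' exact ones; `fio_cmd` and `COOLDOWN` are names I made up — here the command is echoed so the loop is inspectable, on a real box you would run it instead):

```shell
# print the fio invocation for one block size (illustrative flags only)
fio_cmd() {
    echo "fio --name=yabs-test --ioengine=libaio --direct=1" \
         "--bs=$1 --iodepth=64 --rw=randrw --rwmixread=50 --size=2G"
}

COOLDOWN="${COOLDOWN:-5}"   # seconds of idle time between block sizes
for bs in 4k 64k 512k 1m; do
    fio_cmd "$bs"
    sleep "$COOLDOWN"       # cooldown so ARC can flush dirty data to disk
done
```

the 20-second variant is then just a matter of setting COOLDOWN=20.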
oh, see the difference in 512k and 64k even... what about 20sec delay then?
BAM! finally everything seems to be hitting the ARC properly. because yabs prepopulates the test file before the actual fio runs, and I added the delay/cooldown in between, even the 4k numbers made a bigger jump...
now, what to take away from this?
1) yabs is awesome. the way it is built makes it easy to dig into specific things by modifying some test parameters, and more importantly, it is very clear from the beginning what it is actually doing and how, so one can put the results into proper context.
thx @Mason ;-)
2) a single quick glance at artificial benchmark numbers can be misleading if you don't know what's 'under the hood'. the ZFS/ARC example is comparable to a hardware RAID controller with some memory cache, and there are lots of things one can adjust around that (cache sizes, writeback, writethrough) that will have an influence... however, for these tests none of that was changed, just the timing of the workload inside the benchmark!
3) it's debatable how to relate the numbers to real-world workloads. for common, not-too-big hosting workloads I'd expect most data to sit in the FS cache or ARC most of the time, and the time ZFS needs to actually write to disk should average out nicely in most cases.
so, will ZFS/ARC be beneficial despite the first benchmark run looking rather slow on 512k/1m? yeah, I would say so; 4GB should still be plenty and allows a lot of frequently accessed data to be kept hot and available.
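for anyone wanting to reproduce the setup or watch the cache live: capping ARC and reading its counters is straightforward on OpenZFS/Linux (4294967296 is just 4 GiB in bytes; run as root on the host node):

```shell
# live: cap ARC at 4 GiB (persist via /etc/modprobe.d/zfs.conf with
# "options zfs zfs_arc_max=4294967296")
echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max

# watch ARC size and hit/miss counters while the benchmark runs
awk '$1 ~ /^(size|hits|misses)$/ { print $1, $3 }' /proc/spl/kstat/zfs/arcstats
```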
Thanks! ^^
Since I can't edit https://talk.lowendspirit.com/discussion/comment/79267/#Comment_79267 anymore:
Hosthatch storage BF flash sale (Stockholm) @ Ubuntu 20.04
https://browser.geekbench.com/v5/cpu/12072017 TL;DR single: 545, multi: 558
just post a new one anytime you like mate
HostHatch 500 GB Stockholm BF Flash Sales
⭕ A simple uptime dashboard using UptimeRobot API https://upy.duo.ovh
⭕ Currently using VPS from BuyVM, GreenCloudVPS, Gullo's, Hetzner, HostHatch, InceptionHosting, LetBox, MaxKVM, MrVM, VirMach.
that's a good one mate...should not have hesitated
Yes, it is worth the wait... only $13.30/yr for 500 GB.
15€/y 512MB NVMe Ryzen W24
Free NAT KVM | Free NAT LXC | Bobr
ITS WEDNESDAY MY DUDES
Is this the 10x10x10? What location is this?
NY just got migrated
Alphavps giveaway
epyc 2GB
Micronode X1S Dedicated Server - These should be provisioned sometime next week and will be priced at £5 per month, with an 8GB eMMC OS disk and a 64GB data disk:
Hm. I expected some more fireworks given the fine specs, but maybe more RAM and disk might change the results.
———-
blog | exploring visually |
Nice new laptop you have @vyas
Can someone post an lsblk on HostHatch and Servarica? Looking for storage servers, but I want to see fio results on the "storage disk", not the "system SSD disk".
(stockholm)
sullivanshosting LEB ovz
$2
cheapwindowsvps unmetered bw 4.50
Spaceberg.cc - Your favorite Seedbox provider!
Noice test result... Bravoo!