Good and bad disks

Well, I have this disk as a spare for downloading torrents. It was previously in another machine, an old server, for god knows how long.

Today I noticed that it's been flying for 13 years with zero bad sectors.

Do you have good or bad experiences with particular disks/models/brands?

Show some kick-ass Power_On_Hours statistics from your drives

smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.15.0-87-generic] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Western Digital Caviar Blue (SATA)
Device Model:     WDC WD6400AAKS-22A7B0
Serial Number:    WD-WCASY1685427
LU WWN Device Id: 5 0014ee 2ac6a0395
Firmware Version: 01.03B01
User Capacity:    640,135,028,736 bytes [640 GB]
Sector Size:      512 bytes logical/physical
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ATA8-ACS (minor revision not indicated)
SATA Version is:  SATA 2.5, 3.0 Gb/s
Local Time is:    Sun Jan  7 23:57:24 2024 EET
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
...
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       0
  3 Spin_Up_Time            0x0027   163   150   021    Pre-fail  Always       -       4816
  4 Start_Stop_Count        0x0032   083   083   000    Old_age   Always       -       17724
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x002e   200   200   051    Old_age   Always       -       0
  9 Power_On_Hours          0x0032   001   001   000    Old_age   Always       -       118780
 10 Spin_Retry_Count        0x0032   100   100   051    Old_age   Always       -       0
 11 Calibration_Retry_Count 0x0032   100   100   051    Old_age   Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       633
192 Power-Off_Retract_Count 0x0032   199   199   000    Old_age   Always       -       1132
193 Load_Cycle_Count        0x0032   195   195   000    Old_age   Always       -       17706
194 Temperature_Celsius     0x0022   115   089   000    Old_age   Always       -       32
196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0030   200   200   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x0008   200   200   051    Old_age   Offline      -       0
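
For reference, the Power_On_Hours raw value above converts to years like this (a one-liner, nothing drive-specific about it):

# Convert the Power_On_Hours raw value (attribute 9) to years.
power_on_hours = 118780
print(f"{power_on_hours / 24 / 365.25:.1f} years spinning")   # ~13.5 years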

Comments

  • host_c Hosting Provider

    @itsdeadjim

    Nice. Well, by today's production standards, you might not see that reliability anymore in consumer SATA drives.

    Actually, we had a nasty experience with SATA over the 6TB mark; we switched to SAS many years ago.

    Thanked by (2)itsdeadjim bikegremlin

    Host-C - VPS Services Provider - AS211462

    "If there is no struggle there is no progress"

  • @itsdeadjim said:
    Well, I have this disk as a spare for downloading torrents. It was previously in another machine, an old server, for god knows how long.

    Today I noticed that it's been flying for 13 years with zero bad sectors.

    From what I understand, only these values matter (a quick way to pull just them is sketched after the table). Since they are all showing 0 as the raw value, your disk still seems to be OK (which is weird to say the least). Normally they fail after 5+ years of operation...

     ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
       1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       0
       5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
       7 Seek_Error_Rate         0x002e   200   200   051    Old_age   Always       -       0
     196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
     197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       0
     198 Offline_Uncorrectable   0x0030   200   200   000    Old_age   Offline      -       0
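
    A minimal sketch of pulling just those raw values off a drive (assuming smartctl is installed; /dev/sda and the attribute list are placeholders you would adapt):

    import subprocess

    # Attribute IDs treated above as the ones that matter.
    CRITICAL_IDS = {1, 5, 7, 196, 197, 198}

    # Parse the vendor attribute table printed by `smartctl -A` (column layout as shown above).
    out = subprocess.run(["smartctl", "-A", "/dev/sda"],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        fields = line.split()
        if fields and fields[0].isdigit() and int(fields[0]) in CRITICAL_IDS:
            print(f"{fields[0]:>3} {fields[1]:<24} raw={fields[-1]}")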
    


    Unlike you, I did not have a good experience with my disk... However, reading up online about my brand of disk, apparently the seek error rate and raw read error rate tend to give odd values? Apparently I should only look at "Reallocated sector count", "Reported uncorrected errors" and such, which to me seems very wrong...

    smartctl 7.3 2022-02-28 r5338 [x86_64-linux-6.2.16-10-pve] (local build)
    Copyright (C) 2002-22, Bruce Allen, Christian Franke, www.smartmontools.org
    
    === START OF INFORMATION SECTION ===
    Model Family:     Seagate BarraCuda 3.5 (SMR)
    Device Model:     ST4000DM005-2DP166
    Serial Number:    WDH2HY3P
    LU WWN Device Id: 5 000c50 0a94294ee
    Firmware Version: 0001
    User Capacity:    4,000,787,030,016 bytes [4.00 TB]
    Sector Sizes:     512 bytes logical, 4096 bytes physical
    Rotation Rate:    5980 rpm
    Form Factor:      3.5 inches
    Device is:        In smartctl database 7.3/5319
    ATA Version is:   ACS-3 T13/2161-D revision 5
    SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
    Local Time is:    Mon Jan  8 07:26:03 2024 +08
    SMART support is: Available - device has SMART capability.
    SMART support is: Enabled
    
    === START OF READ SMART DATA SECTION ===
    ...
    
    SMART Attributes Data Structure revision number: 10
    Vendor Specific SMART Attributes with Thresholds:
    ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
      1 Raw_Read_Error_Rate     0x000f   081   064   006    Pre-fail  Always       -       138489104
      3 Spin_Up_Time            0x0003   094   094   000    Pre-fail  Always       -       0
      4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       396
      5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
      7 Seek_Error_Rate         0x000f   086   060   045    Pre-fail  Always       -       384802057
      9 Power_On_Hours          0x0032   050   050   000    Old_age   Always       -       44053h+52m+35.707s
     10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
     12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       249
    183 Runtime_Bad_Block       0x0032   100   100   000    Old_age   Always       -       0
    184 End-to-End_Error        0x0032   100   100   099    Old_age   Always       -       0
    187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
    188 Command_Timeout         0x0032   100   100   000    Old_age   Always       -       0 0 0
    189 High_Fly_Writes         0x003a   100   100   000    Old_age   Always       -       0
    190 Airflow_Temperature_Cel 0x0022   069   043   040    Old_age   Always       -       31 (Min/Max 31/35)
    191 G-Sense_Error_Rate      0x0032   100   100   000    Old_age   Always       -       0
    192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       1903
    193 Load_Cycle_Count        0x0032   094   094   000    Old_age   Always       -       12761
    194 Temperature_Celsius     0x0022   031   057   000    Old_age   Always       -       31 (0 18 0 0 0)
    197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
    198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
    199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0
    240 Head_Flying_Hours       0x0000   100   253   000    Old_age   Offline      -       43255h+23m+19.699s
    241 Total_LBAs_Written      0x0000   100   253   000    Old_age   Offline      -       13839805021
    242 Total_LBAs_Read         0x0000   100   253   000    Old_age   Offline      -       16780634238
    

    Websites have ads, I have ad-blocker.

  • edited January 7

    @host_c said:
    @itsdeadjim

    Nice. Well, by today's production standards, you might not see that reliability anymore in consumer SATA drives.

    Actually, we had a nasty experience with SATA over the 6TB mark; we switched to SAS many years ago.

    I was thinking of buying a couple of 8TB (or larger) SATA disks for home, but the drop in quality is what stops me. What do you suggest?

    @somik said: Unlike you, I did not have a good experience with my disk... However, reading up online about my brand of disk, apparently the seek error rate and raw read error rate tend to give odd values? Apparently I should only look at "Reallocated sector count", "Reported uncorrected errors" and such, which to me seems very wrong...

    Yeah, on Seagates Raw_Read_Error_Rate is a 48-bit number, where the upper 16 bits are the error count and the lower 32 bits are the number of operations. FFS :)

    So in your case it's 138489104 = 0x0000_08412D10, i.e. 0 errors and 138489104 operations (the same goes for Seek_Error_Rate).
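
    A minimal sketch of that split (illustrative only; split_seagate_raw is a made-up helper name, and it assumes the upper-16/lower-32 layout described above):

    # Split a Seagate 48-bit SMART raw value into (errors, operations),
    # assuming the upper 16 bits are the error count and the lower 32 bits
    # are the operation count.
    def split_seagate_raw(raw: int) -> tuple[int, int]:
        return (raw >> 32) & 0xFFFF, raw & 0xFFFFFFFF

    print(split_seagate_raw(138489104))  # Raw_Read_Error_Rate -> (0, 138489104)
    print(split_seagate_raw(384802057))  # Seek_Error_Rate     -> (0, 384802057)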

    Thanked by (1)somik
  • @itsdeadjim said:

    @somik said: Unlike you, I did not have a good experience with my disk... However, reading up online about my brand of disk, apparently the seek error rate and raw read error rate tend to give odd values? Apparently I should only look at "Reallocated sector count", "Reported uncorrected errors" and such, which to me seems very wrong...

    Yeah, on Seagates Raw_Read_Error_Rate is a 48-bit number, where the upper 16 bits are the error count and the lower 32 bits are the number of operations. FFS :)

    So in your case it's 138489104 = 0x0000_08412D10, i.e. 0 errors and 138489104 operations (the same goes for Seek_Error_Rate).

    Ooo! Did not know that! Thanks! Time to find a hex converter online for the rest! :lol:

    Websites have ads, I have ad-blocker.

  • @somik said: Ooo! Did not know that! Thanks! Time to find a hex converter online for the rest! :lol:

    Yeah, I don't know who at Seagate thought it was a good idea to pack two different values into the error attributes, as if there were no other attributes left. Had the same issue with a Seagate and found this info in some tech manual.

    Thanked by (1)somik
  • @itsdeadjim said:

    @somik said: Ooo! Did not know that! Thanks! Time to find a hex converter online for the rest! :lol:

    Yeah, I don't know who at Seagate thought it was a good idea to pack two different values into the error attributes, as if there were no other attributes left. Had the same issue with a Seagate and found this info in some tech manual.

    Yo! I heard you like error attributes, so I put 2 error attributes in an error attribute! So you can get confused and throw away your disk after reading the SMART values...

    FYI, I did take the drive out of my home server because I misdiagnosed it as faulty. Turns out it was another drive, one I was using as a second backup, that gave up the ghost, not this one... It only took me 4 months to realize it and put the drive back into the home server...

    Websites have ads, I have ad-blocker.

  • @itsdeadjim said: Do you have good or bad experiences with particular disks/models/brands?

    I have both good and bad experiences with all of them (for HDDs): Seagate/WD/Hitachi with all their unicorn-fart color variants.
    So it doesn't make sense to say brand A is good, brand B is bad.

    If you want to compare them by brand, use a decent data comparison like this: https://www.backblaze.com/blog/backblaze-drive-stats-for-q3-2023/

    Thanked by (1)itsdeadjim

    Fuck this 24/7 internet spew of trivia and celebrity bullshit.

  • @Encoders said:

    @itsdeadjim said: Do you have good or bad experiences with particular disks/models/brands?

    I have both good and bad experiences with all of them (for HDDs): Seagate/WD/Hitachi with all their unicorn-fart color variants.
    So it doesn't make sense to say brand A is good, brand B is bad.

    If you want to compare them by brand, use a decent data comparison like this: https://www.backblaze.com/blog/backblaze-drive-stats-for-q3-2023/

    Looks like Seagate is generally more prone to failures... WDC and HGST (previously IBM) are the good options...

    Websites have ads, I have ad-blocker.

  • host_c Hosting Provider
    edited January 8

    Honestly guys, with SATA I would not give a dime for what brand it is. Remember the WD RED and GOLD (NASware) scandal?

    They tweaked the FW a little and BANG!, +30% price tag, even double compared to some cheaper models from Seagate. I mean, I just hate it when the manufacturer's marketing team treats all their clients as idiots.

    I moved away from SATA spinning rust a long time ago.

    If you want some decent home storage, RAID 1 is definitely the way to go; RAID 10 would make perfect sense to me. I lost all my photos up to the age of 18 because I was rock solid sure my WD Blue was immortal; it died within a few months. :o

    Do not use those cheap 2-bay or 4-bay NAS things, unless they are 900 USD without the drives. The CPU in them that does the SW RAID is about as powerful as an iPhone 3GS (really crappy). And in case of a firmware upgrade, your data might go to the Dimensional Cloud (lost at boot, and the drives appear uninitialized).

    What I suggest will break the bank a little, but go with me on this.

    Option A:

    HP Microserver G8 with Xeon CPU, 16GB DDR3 ECC unbuffered
    1 x LSI SAS 2008 Card, Low Profile - so it fits the HP Micro
    4 x SAS drives, any brand/model/age from 2015 upwards
    1 x SSD (new) as boot drive (you will have to get a power adapter + SATA cable to fit this in; if it comes with a CD-ROM, then a 12 mm CD-ROM adapter for the SATA SSD)

    • Yes, you guessed it, TrueNAS CORE (formerly FreeNAS)

    Average power consumption 60-90W (5W for the controller, 10W for each drive, plus the CPU and MB)

    Option B is for if you do not want ZFS, although for home use I recommend it; I never lost a bit with it, but it is only usable on the Option A hardware.

    Option B, same setup except the RAID card:

    HP Microserver G8 with Xeon CPU, 16GB DDR3 ECC unbuffered
    1 x HP Smart Array P222, 512MB FBWC, Low Profile
    4 x SAS drives, any brand/model/age from 2015 upwards
    1 x SSD (new) as boot drive (you will have to get a power adapter + SATA cable to fit this in; if it comes with a CD-ROM, then a 12 mm CD-ROM adapter for the SATA SSD)
    Any OS that you like.

    A lower CPU is fine also, as long as you do not want more than two 1080p transcoded streams off it (yes, you will :) )

    All from your local eBay provider. =)

    The Microserver G10 has a small problem: it does not fit a damn RAID card, maybe an HBA for ZFS, but not quite, and the base model lacks iLO.

    You can go SATA, but I would not trust any SATA drive without RAID 1 minimum, or RAID 10 preferably.

    @somik said: Looks like Seagate is generally more prone to failures... WDC and HGST (previously IBM) are the good options...

    It really depends on sales numbers, and Seagate is number 1 here in DC stuff; for consumer grade I have no clue, and that is very much subject to how many times the Amazon guy hit the package against the wall of the van before delivering it =)

    Or the sorting hub carefully reading the FRAGILE sticker and acting accordingly.

    Thanked by (2)itsdeadjim bikegremlin

    Host-C - VPS Services Provider - AS211462

    "If there is no struggle there is no progress"

  • @Encoders said:

    @itsdeadjim said: Do you have good or bad experiences with particular disks/models/brands?

    I have both good and bad experiences with all of them (for HDDs): Seagate/WD/Hitachi with all their unicorn-fart color variants.
    So it doesn't make sense to say brand A is good, brand B is bad.

    If you want to compare them by brand, use a decent data comparison like this: https://www.backblaze.com/blog/backblaze-drive-stats-for-q3-2023/

    Thank you, I didn't know they posted that.

    @somik said: Looks like Seagate is generally more prone to failures... WDC and HGST (previously IBM) are the good options...

    Yeah, I thought that too, but reading an analysis of this, they say those Seagates are much older than the rest of the disks, so more or less it's not safe to draw conclusions.

    Thanked by (1)bikegremlin
  • @host_c said:
    Honestly guys, with SATA I would not give a dime for what brand it is. Remember the WD RED and GOLD (NASware) scandal?

    They tweaked the FW a little and BANG!, +30% price tag, even double compared to some cheaper models from Seagate. I mean, I just hate it when the manufacturer's marketing team treats all their clients as idiots.

    I moved away from SATA spinning rust a long time ago.

    If you want some decent home storage, RAID 1 is definitely the way to go; RAID 10 would make perfect sense to me. I lost all my photos up to the age of 18 because I was rock solid sure my WD Blue was immortal; it died within a few months. :o

    Do not use those cheap 2-bay or 4-bay NAS things, unless they are 900 USD without the drives. The CPU in them that does the SW RAID is about as powerful as an iPhone 3GS (really crappy). And in case of a firmware upgrade, your data might go to the Dimensional Cloud (lost at boot, and the drives appear uninitialized).

    What I suggest will break the bank a little, but go with me on this.

    Option A:

    HP Microserver G8 with Xeon CPU, 16GB DDR3 ECC unbuffered
    1 x LSI SAS 2008 Card, Low Profile - so it fits the HP Micro
    4 x SAS drives, any brand/model/age from 2015 upwards
    1 x SSD (new) as boot drive (you will have to get a power adapter + SATA cable to fit this in; if it comes with a CD-ROM, then a 12 mm CD-ROM adapter for the SATA SSD)

    • Yes, you guessed it, TrueNAS CORE (formerly FreeNAS)

    Average power consumption 60-90W (5W for the controller, 10W for each drive, plus the CPU and MB)

    Option B is for if you do not want ZFS, although for home use I recommend it; I never lost a bit with it, but it is only usable on the Option A hardware.

    Option B, same setup except the RAID card:

    HP Microserver G8 with Xeon CPU, 16GB DDR3 ECC unbuffered
    1 x HP Smart Array P222, 512MB FBWC, Low Profile
    4 x SAS drives, any brand/model/age from 2015 upwards
    1 x SSD (new) as boot drive (you will have to get a power adapter + SATA cable to fit this in; if it comes with a CD-ROM, then a 12 mm CD-ROM adapter for the SATA SSD)
    Any OS that you like.

    A lower CPU is fine also, as long as you do not want more than two 1080p transcoded streams off it (yes, you will :) )

    All from your local eBay provider. =)

    The Microserver G10 has a small problem: it does not fit a damn RAID card, maybe an HBA for ZFS, but not quite, and the base model lacks iLO.

    You can go SATA, but I would not trust any SATA drive without RAID 1 minimum, or RAID 10 preferably.

    @somik said: Looks like Seagate is generally more prone to failures... WDC and HGST (previously IBM) are the good options...

    It really depends on sales numbers, and Seagate is number 1 here in DC stuff; for consumer grade I have no clue, and that is very much subject to how many times the Amazon guy hit the package against the wall of the van before delivering it =)

    Or the sorting hub carefully reading the FRAGILE sticker and acting accordingly.

    Thank you for your analysis.

    At home I use an older i3 with a desktop motherboard and a couple of 4TB SATA disks on a ZFS mirror + SSD boot + the above immortal disk for buffering torrents + an external backup as a NAS.

    I am thinking of a generous upgrade in storage, probably 2x 10-15TB or something like that, so you're actually saying go with anything SAS. Are you saying that because most of them are enterprise grade? Can I go with SAS for that kind of storage without spending far too much?

  • bikegremlin Moderator, OG, Content Writer

    @host_c said:
    Honestly guys, with SATA I would not give a dime for what brand it is. Remember the WD RED and GOLD (NASware) scandal?

    They tweaked the FW a little and BANG!, +30% price tag, even double compared to some cheaper models from Seagate. I mean, I just hate it when the manufacturer's marketing team treats all their clients as idiots.

    I moved away from SATA spinning rust a long time ago.

    If you want some decent home storage, RAID 1 is definitely the way to go; RAID 10 would make perfect sense to me. I lost all my photos up to the age of 18 because I was rock solid sure my WD Blue was immortal; it died within a few months. :o

    Do not use those cheap 2-bay or 4-bay NAS things, unless they are 900 USD without the drives. The CPU in them that does the SW RAID is about as powerful as an iPhone 3GS (really crappy). And in case of a firmware upgrade, your data might go to the Dimensional Cloud (lost at boot, and the drives appear uninitialized).

    What I suggest will break the bank a little, but go with me on this.

    Option A:

    HP Microserver G8 with Xeon CPU, 16GB DDR3 ECC unbuffered
    1 x LSI SAS 2008 Card, Low Profile - so it fits the HP Micro
    4 x SAS drives, any brand/model/age from 2015 upwards
    1 x SSD (new) as boot drive (you will have to get a power adapter + SATA cable to fit this in; if it comes with a CD-ROM, then a 12 mm CD-ROM adapter for the SATA SSD)

    • Yes, you guessed it, TrueNAS CORE (formerly FreeNAS)

    Average power consumption 60-90W (5W for the controller, 10W for each drive, plus the CPU and MB)

    Option B is for if you do not want ZFS, although for home use I recommend it; I never lost a bit with it, but it is only usable on the Option A hardware.

    Option B, same setup except the RAID card:

    HP Microserver G8 with Xeon CPU, 16GB DDR3 ECC unbuffered
    1 x HP Smart Array P222, 512MB FBWC, Low Profile
    4 x SAS drives, any brand/model/age from 2015 upwards
    1 x SSD (new) as boot drive (you will have to get a power adapter + SATA cable to fit this in; if it comes with a CD-ROM, then a 12 mm CD-ROM adapter for the SATA SSD)
    Any OS that you like.

    A lower CPU is fine also, as long as you do not want more than two 1080p transcoded streams off it (yes, you will :) )

    All from your local eBay provider. =)

    The Microserver G10 has a small problem: it does not fit a damn RAID card, maybe an HBA for ZFS, but not quite, and the base model lacks iLO.

    You can go SATA, but I would not trust any SATA drive without RAID 1 minimum, or RAID 10 preferably.

    @somik said: Looks like Seagate is generally more prone to failures... WDC and HGST (previously IBM) are the good options...

    It really depends on sales numbers, and Seagate is number 1 here in DC stuff; for consumer grade I have no clue, and that is very much subject to how many times the Amazon guy hit the package against the wall of the van before delivering it =)

    Or the sorting hub carefully reading the FRAGILE sticker and acting accordingly.

    A very good post.

    My only comment would be regarding the RAID 10. I don't think it makes sense if data security is important. RAID 5 or 6 would probably be a better idea (if RAID 1 is too wasteful for the budget).

    Thanked by (1)host_c

    Relja of House Novović, the First of His Name, King of the Plains, the Breaker of Chains, WirMach Wolves pack member
    BikeGremlin's web-hosting reviews

  • edited January 8

    @bikegremlin said: My only comment would be regarding the RAID 10. I don't think it makes sense if data security is important. RAID 5 or 6 would probably be a better idea (if RAID 1 is too wasteful for the budget).

    Correct me if I am wrong, but RAID is about uptime (or speed), not security, right? RAID is not a backup.

    Thanked by (1)bikegremlin
  • @host_c said:
    Do not use those cheap 2-bay or 4-bay NAS things, unless they are 900 USD without the drives. The CPU in them that does the SW RAID is about as powerful as an iPhone 3GS (really crappy). And in case of a firmware upgrade, your data might go to the Dimensional Cloud (lost at boot, and the drives appear uninitialized).

    Ya, no. I would not trust a dedicated NAS with even my temp folders. They are designed to be cheap and feature-rich, not reliable. If they fail before your disks, you are locked into either getting the same brand of device or hoping they used a decent hardware RAID so you can recover your data.

    In fact, I stopped using RAID a long time ago. I prefer to stick to running full backups instead. I like to choose what I want to back up. I do not need a copy of the OS standing by; I would rather back up my home folder only and reinstall the OS or any app I used.

    What I suggest will break the bank a little, but go with me on this.

    Or get a cheapo Intel/AMD CPU with RAM/mobo and set up a desktop tower with a few drives. Unraid or TrueNAS if you are not familiar, or plain old Debian/Ubuntu/CentOS + software of your choice if you are experienced.

    In my case, my Proxmox VM server has a dedicated SATA HDD used for backups. The entire container image gets backed up automatically using Proxmox's automated backup, configured through the GUI.

    Second backup is a standalone mini PC with a 2TB SATA HDD where my website backups get uploaded through my custom PHP script. Nothing special, just archive the /home/domain/public_html folder (and any other folders outside it) along with the MariaDB backup and upload it to the mini PC over HTTP (all running locally; a rough sketch of the idea is below). The mini PC is behind a firewall and not accessible over the internet.

    Third backup is my offline backup, which is a 4TB SATA USB HDD that I connect every month to copy data from the mini PC.

    Final backup is my offsite backup in the cloud, which I also update manually when I back up data to my offline backup.

    I know this is too much. I would say only 1 or 2 backups are enough for most people, just spread them over 2 or more devices.
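
    A minimal sketch of that kind of site backup (in Python rather than the poster's actual PHP; the paths, database name and upload URL are placeholders, and it assumes mysqldump and a receiving endpoint on the mini PC exist):

    import subprocess, tarfile, time
    from urllib import request

    SITE_DIR = "/home/domain/public_html"          # placeholder web root
    DB_NAME = "domain_db"                          # placeholder database
    TARGET = "http://backup-minipc.local/upload"   # placeholder LAN-only endpoint

    stamp = time.strftime("%Y%m%d-%H%M%S")
    dump = f"/tmp/{DB_NAME}-{stamp}.sql"
    archive = f"/tmp/site-backup-{stamp}.tar.gz"

    # Dump the database, then pack it together with the web root.
    with open(dump, "wb") as fh:
        subprocess.run(["mysqldump", DB_NAME], stdout=fh, check=True)
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(SITE_DIR, arcname="public_html")
        tar.add(dump, arcname="db.sql")

    # Ship the archive to the mini PC over plain HTTP (local network only).
    with open(archive, "rb") as fh:
        req = request.Request(TARGET, data=fh.read(),
                              headers={"Content-Type": "application/gzip"})
        request.urlopen(req)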

    @somik said: Looks like Seagate is generally more prone to failures... WDC and HGST (previously IBM) are the good options...

    It really depends on sales numbers, and Seagate is number 1 here in DC stuff; for consumer grade I have no clue, and that is very much subject to how many times the Amazon guy hit the package against the wall of the van before delivering it =)

    Or the sorting hub carefully reading the FRAGILE sticker and acting accordingly.

    Ya, I received a "fragile" box last week which was banged up MORE than my other, non-fragile-stickered boxes... I think my delivery guy hates me :(

    @itsdeadjim said:

    @bikegremlin said: My only comment would be regarding the RAID 10. I don't think it makes sense if data security is important. RAID 5 or 6 would probably be a better idea (if RAID 1 is too wasteful for the budget).

    Correct me if I am wrong, but RAID is about uptime (or speed), not security, right? RAID is not a backup.

    If your server gets compromised, RAID dies with it. So it is not a backup.

    @itsdeadjim said:

    @somik said: Looks like Seagate is generally more prone to failures... WDC and HGST (previously IBM) are the good options...

    Yeah, I thought that too, but reading an analysis of this, they say those Seagates are much older than the rest of the disks, so more or less it's not safe to draw conclusions.

    Seagates are newer and use SMR for their hard drives. CMR is the older type but cannot hold as much data. SMR can hold a lot more data at the price of reliability... In fact, almost all new HDDs are now SMR. I mostly use these to hold data that I do not really need, for example as a local Debian/Raspbian mirror to speed up my servers updating their OS.

    They are also great for larger files (like large ISO files or video files) as they perform at almost the same speed for large files, and in some cases even faster... My PNY SSD is slower than my Seagate 4TB HDDs for large file transfers, and it bogs down when transferring over 30 GB at once, which is not something you'll ever face with an HDD.

    Websites have ads, I have ad-blocker.

  • bikegremlin Moderator, OG, Content Writer

    @itsdeadjim said:

    @bikegremlin said: My only comment would be regarding the RAID 10. I don't think it makes sense if data security is important. RAID 5 or 6 would probably be a better idea (if RAID 1 is too wasteful for the budget).

    Correct me if I am wrong, but RAID is about uptime (or speed), not security, right? RAID is not a backup.

    Yes, RAID is not a backup, but it can either let you read your data even if one or two drives fail, or it can make the data on all drives unreadable when one drive fails - depending on which RAID level you use.

    RAID types - wiki

    @somik said:

    Seagates are newer and use SMR for their hard drives. CMR is the older type but cannot hold as much data. SMR can hold a lot more data at the price of reliability... In fact, almost all new HDDs are now SMR. I mostly use these to hold data that I do not really need, for example as a local Debian/Raspbian mirror to speed up my servers updating their OS.

    SATA HDDs sold as "NAS" drives often use CMR.

    My HDD buying recommendation.

    Relja of House Novović, the First of His Name, King of the Plains, the Breaker of Chains, WirMach Wolves pack member
    BikeGremlin's web-hosting reviews

  • My humour is so bad that every time your pfp pops up on the forum I start wheezing

    Thanked by (1)bikegremlin

    youtube.com/watch?v=k1BneeJTDcU

  • bikegremlin Moderator, OG, Content Writer

    @Otus9051 said:
    My humour is so bad that every time your pfp pops up on the forum I start wheezing

    Keep your arm steady at least, you've missed the topic.

    :)

    Relja of House Novović, the First of His Name, King of the Plains, the Breaker of Chains, WirMach Wolves pack member
    BikeGremlin's web-hosting reviews

  • host_c Hosting Provider

    @itsdeadjim said: I am thinking of a generous upgrade in storage, probably 2x 10-15TB or something like that, so you're actually saying go with anything SAS. Are you saying that because most of them are enterprise grade? Can I go with SAS for that kind of storage without spending far too much?

    Well, you can only go SAS with a SAS HBA/RAID card, as the protocol differs from SATA, and so does the connector: between the power and data sections there is another set of contacts.

    SATA interface and SAS

    Definitely go with SAS, a little more noisy (as in a server, who cares) but definitely more reliable.

    SAS cards and enclosures (bays) can read SATA; SATA controllers and enclosures cannot read SAS.

    @bikegremlin

    In a 4-bay setup, RAID 10 is the only logical option.

    RAID 6 - uses 2 parity disks (not 100% accurate - the technique is called block-level striping with double distributed parity - but you get the idea), so you are left with the space of 2 drives. The PRO is that any 2 drives can fail at the same time.

    RAID 10 of 4 drives - is a mirror of 2 drives per group, joined into a single storage pool. So it is basically Group A mirror + Group B mirror = usable space.

    Group A - Drive 1 + Drive 2 in mirror
    Group B - Drive 3 + Drive 4 in mirror

    You can lose one member drive from each mirror group, but not both. You have the same redundancy of 2 drives (1 per group).

    The speed and simplicity of this setup is what I love; the overhead on the drives during a resync after changing a drive is minimal, as data is only copied from the group's surviving drive - no complicated calculations, no stress on all the HDD members of the array.

    RAID 5 - it should be banned; well, it kind of has been for 2 decades, as with large drives (above 4TB) you have about a 50% chance of another drive failing during the re-sync, and you can only lose 1 drive in RAID 5.
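
    The usual source of numbers like that is the drive's unrecoverable read error (URE) spec; a rough back-of-the-envelope check, assuming a consumer-class rate of 1 URE per 10^14 bits and a rebuild that has to read three surviving 4TB drives:

    # Chance of hitting at least one URE while reading the surviving drives
    # during a 4x4TB RAID 5 rebuild, assuming 1 URE per 1e14 bits read.
    ure_per_bit = 1e-14
    bits_to_read = 3 * 4e12 * 8                     # three surviving 4TB drives
    p_clean = (1 - ure_per_bit) ** bits_to_read
    print(f"chance the rebuild trips over a URE: {1 - p_clean:.0%}")   # roughly 60%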

    Parity RAID setups (5, 6, 7, or ZFS Z1, Z2, Z3) stress all the disks during the rebuild/resync when a drive is replaced. In RAID 10, data is simply copied from the healthy member of the group, with almost no impact on overall speed; in RAID 5/6, oh boy, do you feel it.

    Speed is a factor here, but for home it is not important, as any of these setups can easily saturate a 1G line today.

    RAID 5/6 - write speed is usually the speed of all drives minus the parity calculation of the controller; in a large-disk setup this is a problem.

    RAID 5 - does not have a performance penalty for read operations, but cannot be compared with RAID 0 or RAID 10 at the same number of disks.
    RAID 6 - does not have a performance penalty for read operations, but suffers greatly at write, as there are 2 parity calculations necessary for each write operation. Today, modern controllers have 4/6/8/12 GB of DDR4 or DDR5 cache and special ASICs to solve the issue, hence the multi-1000$ price tag.

    RAID 10 - write speed is that of all drives divided by 2 (as data has to be duplicated to each 2-drive group).
    RAID 10 - read speed - oh boy, the speed of all drives, and even better: depending on the controller/SW RAID implementation, data is read partially from all drives and the controller does the reassembly, so basically a 1 MB file will be read as 0.25 MB from each of the 4 drives.

    To give you an idea, a RAID 10 setup of 4 drives will saturate the 6 Gbps SAS link to the controller (sequential, not random, reads).

    For maximum safety you can go RAID 6 / ZFS Z2 (a quick comparison sketch follows below the link).

    In a 2-drive setup, a mirror is the only way to fly =)

    You can read more on this:
    https://en.wikipedia.org/wiki/Standard_RAID_levels
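
    A minimal sketch comparing the 4-drive layouts discussed above (idealised numbers only: usable capacity, how many failed drives each layout is guaranteed to survive, and naive streaming read/write multipliers):

    # Idealised 4-drive comparison of the layouts discussed above.
    # "survives" = failed drives the array is guaranteed to tolerate.
    def compare(n_drives=4, drive_tb=4):
        layouts = {
            "RAID 5":  (n_drives - 1,  1, n_drives, n_drives - 1),
            "RAID 6":  (n_drives - 2,  2, n_drives, n_drives - 2),
            "RAID 10": (n_drives // 2, 1, n_drives, n_drives // 2),
        }
        for name, (usable, survives, rd, wr) in layouts.items():
            print(f"{name:8} usable={usable * drive_tb:>3} TB  "
                  f"survives={survives}  read~{rd}x  write~{wr}x")

    compare()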

    @bikegremlin

    Security-wise, well, that is at the OS level; this is hardware protection against failed drives. As stated by others, RAID is not a backup, but merely a piece of safety because we do not fully trust the product and wish to be covered in case a drive fails at the hardware level.

    A Microserver G8 will be ~200 to 450 depending on the config. For iLO 4 you will find an Advanced license serial by the 3rd post of a Google search. These little boxes from HP are ideal for home and even small office use: inexpensive, quiet, low power drain. The whole PSU is 150W if I remember correctly, with 4 GE ports and dedicated iLO 4.

    Combined with a good OS - I still recommend TrueNAS CORE - you can have an 8/10/20 TB storage box that does it all, from torrent box to Plex server, file server, snapshots, Mac backup, and a ton of plugins via Docker, even KVM support.

    The G10 is actually a cheaper, cut-down version, and pretty crowded inside.

    Thanked by (2)bikegremlin beagle

    Host-C - VPS Services Provider - AS211462

    "If there is no struggle there is no progress"

  • bikegremlin Moderator, OG, Content Writer

    @host_c said:

    @itsdeadjim said: I am thinking of a generous upgrade in storage, probably 2x 10-15TB or something like that, so you're actually saying go with anything SAS. Are you saying that because most of them are enterprise grade? Can I go with SAS for that kind of storage without spending far too much?

    Well, you can only go SAS with a SAS HBA/RAID card, as the protocol differs from SATA, and so does the connector: between the power and data sections there is another set of contacts.

    SATA interface and SAS

    Definitely go with SAS, a little more noisy (as in a server, who cares) but definitely more reliable.

    SAS cards and enclosures (bays) can read SATA; SATA controllers and enclosures cannot read SAS.

    @bikegremlin

    In a 4-bay setup, RAID 10 is the only logical option.

    RAID 6 - uses 2 parity disks (not 100% accurate - the technique is called block-level striping with double distributed parity - but you get the idea), so you are left with the space of 2 drives. The PRO is that any 2 drives can fail at the same time.

    RAID 10 of 4 drives - is a mirror of 2 drives per group, joined into a single storage pool. So it is basically Group A mirror + Group B mirror = usable space.

    Group A - Drive 1 + Drive 2 in mirror
    Group B - Drive 3 + Drive 4 in mirror

    You can lose one member drive from each mirror group, but not both. You have the same redundancy of 2 drives (1 per group).

    The speed and simplicity of this setup is what I love; the overhead on the drives during a resync after changing a drive is minimal, as data is only copied from the group's surviving drive - no complicated calculations, no stress on all the HDD members of the array.

    RAID 5 - it should be banned; well, it kind of has been for 2 decades, as with large drives (above 4TB) you have about a 50% chance of another drive failing during the re-sync, and you can only lose 1 drive in RAID 5.

    Parity RAID setups (5, 6, 7, or ZFS Z1, Z2, Z3) stress all the disks during the rebuild/resync when a drive is replaced. In RAID 10, data is simply copied from the healthy member of the group, with almost no impact on overall speed; in RAID 5/6, oh boy, do you feel it.

    Speed is a factor here, but for home it is not important, as any of these setups can easily saturate a 1G line today.

    RAID 5/6 - write speed is usually the speed of all drives minus the parity calculation of the controller; in a large-disk setup this is a problem.

    RAID 5 - does not have a performance penalty for read operations, but cannot be compared with RAID 0 or RAID 10 at the same number of disks.
    RAID 6 - does not have a performance penalty for read operations, but suffers greatly at write, as there are 2 parity calculations necessary for each write operation. Today, modern controllers have 4/6/8/12 GB of DDR4 or DDR5 cache and special ASICs to solve the issue, hence the multi-1000$ price tag.

    RAID 10 - write speed is that of all drives divided by 2 (as data has to be duplicated to each 2-drive group).
    RAID 10 - read speed - oh boy, the speed of all drives, and even better: depending on the controller/SW RAID implementation, data is read partially from all drives and the controller does the reassembly, so basically a 1 MB file will be read as 0.25 MB from each of the 4 drives.

    To give you an idea, a RAID 10 setup of 4 drives will saturate the 6 Gbps SAS link to the controller (sequential, not random, reads).

    For maximum safety you can go RAID 6 / ZFS Z2.

    In a 2-drive setup, a mirror is the only way to fly =)

    You can read more on this:
    https://en.wikipedia.org/wiki/Standard_RAID_levels

    @bikegremlin

    Security-wise, well, that is at the OS level; this is hardware protection against failed drives. As stated by others, RAID is not a backup, but merely a piece of safety because we do not fully trust the product and wish to be covered in case a drive fails at the hardware level.

    A Microserver G8 will be ~200 to 450 depending on the config. For iLO 4 you will find an Advanced license serial by the 3rd post of a Google search. These little boxes from HP are ideal for home and even small office use: inexpensive, quiet, low power drain. The whole PSU is 150W if I remember correctly, with 4 GE ports and dedicated iLO 4.

    Combined with a good OS - I still recommend TrueNAS CORE - you can have an 8/10/20 TB storage box that does it all, from torrent box to Plex server, file server, snapshots, Mac backup, and a ton of plugins via Docker, even KVM support.

    The G10 is actually a cheaper, cut-down version, and pretty crowded inside.

    Well-written, very well explained, and I agree with everything, with one note:
    RAID 10 is good if performance is your priority.
    RAID 6 is a better choice if your priority is to read data in case of a drive failing.

    The MXroute outage shows how RAID 10 can go badly, very badly:
    https://lowendtalk.com/discussion/191200/mxroute-failed-and-im-sorry/p1

    Relja ElectricSix Novović

    Thanked by (1)host_c

    Relja of House Novović, the First of His Name, King of the Plains, the Breaker of Chains, WirMach Wolves pack member
    BikeGremlin's web-hosting reviews

  • @bikegremlin said:

    Well-written, very well explained, and I agree with everything, with one note:
    RAID 10 is good if performance is your priority.
    RAID 6 is a better choice if your priority is to read data in case of a drive failing.

    The MXroute outage shows how RAID 10 can go badly, very badly:
    https://lowendtalk.com/discussion/191200/mxroute-failed-and-im-sorry/p1

    Relja ElectricSix Novović

    Relja ElectricSEX Novovic

    I believe in good luck. The harder I work, the luckier I get.

  • host_c Hosting Provider
    edited January 9

    @bikegremlin

    And I agree with you on all of it, especially on RAID 6; that is why we use it for all the storage services we provide.

    In our case, as a provider (providers, really, as we are not the only ones doing this), we have to put the safety of data, and access to it, above all other factors for our customers; speed comes after. We can live with a poor-ish YABS disk test on 12 drives in HW RAID 6, but losing the data of tens of clients because of using RAID 5, or no RAID at all, or some lame ZFS implementation, is a no-go.

    Now we cannot guarantee that the MB will not fail, or the controller will not burn out.

    I just want to add one more thing.

    I personally used and implemented ZFS for over a decade, but in hosting I will not touch it again. ZFS is great - we never lost a single bit with it, and importing the data to another server is easy (not as easy on HW RAID, as it happened to MXroute). It has excellent test values (until the cache depletes; after that it is rubbish and barely does any IOPS), but it consumes a ton of resources, and that puts too much load on the node/server and messes with the memory allocations for the VMs/KVM.

    Also, as there is no control over the LEDs at the bays with an HBA for ZFS, and your servers are 500 km away from you, changing a failed drive is difficult, as identifying it for the staff on site will be a long day. With HW RAID, you send the drive to the DC, open a ticket, they take out the red blinking drive (hopefully - oh man, do I have stories on this =) and finito, a 7-minute job.

    I do not see its place. The CPUs in a node/server then also have to do storage calculations, caching and RAM management (ah yes, if 256 GB of DDR4 RAM was enough for a node, add another 128 or 256 for ZFS, and an enterprise-grade NVMe).
    Secondly, ZFS implementations differ among Linux distros, and performance as well.

    But the thing that made us go back to HW RAID was the latency under high IO load. In storage, 1 ms is a lot; 2-3 ms is bad, really bad.

    The OpenZFS used today has nothing to do with what Oracle developed initially and what was bought by the Cisco/NetApp division a decade ago. They do it right; EMC, for example, meh.

    But for what the OP initially asked, I would do the setup from above, on ZFS! and TrueNAS CORE.

    It will last for a long time, and even if the Micro burns out, he can just put the controller with the drives into any other system, reinstall TrueNAS on an SSD, import the config and BANG! good to go in ~1-2 hours at most, no data loss.

    Thanked by (1)bikegremlin

    Host-C - VPS Services Provider - AS211462

    "If there is no struggle there is no progress"

  • bikegremlin Moderator, OG, Content Writer

    @host_c said:
    @bikegremlin

    And I agree with you on all of it, especially on RAID 6; that is why we use it for all the storage services we provide.

    In our case, as a provider (providers, really, as we are not the only ones doing this), we have to put the safety of data, and access to it, above all other factors for our customers; speed comes after. We can live with a poor-ish YABS disk test on 12 drives in HW RAID 6, but losing the data of tens of clients because of using RAID 5, or no RAID at all, or some lame ZFS implementation, is a no-go.

    Now we cannot guarantee that the MB will not fail, or the controller will not burn out.

    I just want to add one more thing.

    I personally used and implemented ZFS for over a decade, but in hosting I will not touch it again. ZFS is great - we never lost a single bit with it, and importing the data to another server is easy (not as easy on HW RAID, as it happened to MXroute). It has excellent test values (until the cache depletes; after that it is rubbish and barely does any IOPS), but it consumes a ton of resources, and that puts too much load on the node/server and messes with the memory allocations for the VMs/KVM.

    Also, as there is no control over the LEDs at the bays with an HBA for ZFS, and your servers are 500 km away from you, changing a failed drive is difficult, as identifying it for the staff on site will be a long day. With HW RAID, you send the drive to the DC, open a ticket, they take out the red blinking drive (hopefully - oh man, do I have stories on this =) and finito, a 7-minute job.

    I do not see its place. The CPUs in a node/server then also have to do storage calculations, caching and RAM management (ah yes, if 256 GB of DDR4 RAM was enough for a node, add another 128 or 256 for ZFS, and an enterprise-grade NVMe).
    Secondly, ZFS implementations differ among Linux distros, and performance as well.

    But the thing that made us go back to HW RAID was the latency under high IO load. In storage, 1 ms is a lot; 2-3 ms is bad, really bad.

    The OpenZFS used today has nothing to do with what Oracle developed initially and what was bought by the Cisco/NetApp division a decade ago. They do it right; EMC, for example, meh.

    But for what the OP initially asked, I would do the setup from above, on ZFS! and TrueNAS CORE.

    It will last for a long time, and even if the Micro burns out, he can just put the controller with the drives into any other system, reinstall TrueNAS on an SSD, import the config and BANG! good to go in ~1-2 hours at most, no data loss.

    Thank you very much for taking the time to write this. I really appreciate it (hope the same goes for other LESbians).

    In your opinion, for the OP's use case, would you stick to the RAID 10 recommendation, or do you think RAID 6 might be a good idea?

    Thanked by (1)host_c

    Relja of House Novović, the First of His Name, King of the Plains, the Breaker of Chains, WirMach Wolves pack member
    BikeGremlin's web-hosting reviews

  • host_chost_c Hosting Provider

    RAID 6, ZFS Z2. I also advise him to install the server without the drives, and add the drives one by one after the install and network config, so he can do the following:

    Adding the first drive, it will show up in TrueNAS as SDA/DA/ADA0, and he can edit the notes on the drive and mark it bay 0, then the next, bay 1, bay 2, bay 3. So when the SMART monitoring pops up an error saying disk ADA2 is off/bad/dead, he can see that the drive is actually in bay 0, or 3. Linux distros have a way of changing drive names from time to time during kernel updates; this is also valid for network card names, which can fly from eno1 to enp5s0 during a driver upgrade.
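
    A minimal sketch of the same idea from the OS side (assuming a Linux box; the /dev/disk/by-id names are stable, being built from model and serial, no matter how sda/sdb get reshuffled):

    import os

    # Map stable /dev/disk/by-id names (model + serial) to the current kernel
    # device names, so a dead "sdb" can be matched to a physical drive/bay.
    by_id = "/dev/disk/by-id"
    for name in sorted(os.listdir(by_id)):
        if name.startswith(("ata-", "scsi-", "wwn-")) and "part" not in name:
            print(f"{name:60} -> {os.path.realpath(os.path.join(by_id, name))}")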

    Thanked by (1)bikegremlin

    Host-C - VPS Services Provider - AS211462

    "If there is no struggle there is no progress"

  • @bikegremlin said:

    @somik said:

    Seagates are newer and use SMR for their hard drives. CMR is the older type but cannot hold as much data. SMR can hold a lot more data at the price of reliability... In fact, almost all new HDDs are now SMR. I mostly use these to hold data that I do not really need, for example as a local Debian/Raspbian mirror to speed up my servers updating their OS.

    SATA HDDs sold as "NAS" drives often use CMR.

    My HDD buying recommendation.

    Ya, the HDD I am using in my server for Proxmox backups is a 4TB Seagate IronWolf. Other than the confusion with the SMART values, this hard drive seems fast and reliable.

    smartctl 7.3 2022-02-28 r5338 [x86_64-linux-6.5.11-7-pve] (local build)
    Copyright (C) 2002-22, Bruce Allen, Christian Franke, www.smartmontools.org
    
    === START OF INFORMATION SECTION ===
    Model Family:     Seagate IronWolf
    Device Model:     ST4000VN008-2DR166
    Serial Number:    ZDHBCSNY
    LU WWN Device Id: 5 000c50 0e47ba481
    Firmware Version: SC60
    User Capacity:    4,000,787,030,016 bytes [4.00 TB]
    Sector Sizes:     512 bytes logical, 4096 bytes physical
    Rotation Rate:    5980 rpm
    Form Factor:      3.5 inches
    Device is:        In smartctl database 7.3/5319
    ATA Version is:   ACS-3 T13/2161-D revision 5
    SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
    Local Time is:    Tue Jan  9 21:30:28 2024 +08
    SMART support is: Available - device has SMART capability.
    SMART support is: Enabled
    
    === START OF READ SMART DATA SECTION ===
    SMART Attributes Data Structure revision number: 10
    Vendor Specific SMART Attributes with Thresholds:
    ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
      1 Raw_Read_Error_Rate     0x000f   083   064   044    Pre-fail  Always       -       189343488
      3 Spin_Up_Time            0x0003   094   093   000    Pre-fail  Always       -       0
      4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       111
      5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
      7 Seek_Error_Rate         0x000f   073   060   045    Pre-fail  Always       -       18678885
      9 Power_On_Hours          0x0032   087   087   000    Old_age   Always       -       11516 (221 5 0)
     10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
     12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       106
    184 End-to-End_Error        0x0032   100   100   099    Old_age   Always       -       0
    187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
    188 Command_Timeout         0x0032   100   100   000    Old_age   Always       -       0
    189 High_Fly_Writes         0x003a   100   100   000    Old_age   Always       -       0
    190 Airflow_Temperature_Cel 0x0022   062   057   040    Old_age   Always       -       38 (Min/Max 31/41)
    191 G-Sense_Error_Rate      0x0032   100   100   000    Old_age   Always       -       0
    192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       106
    193 Load_Cycle_Count        0x0032   100   100   000    Old_age   Always       -       575
    194 Temperature_Celsius     0x0022   038   043   000    Old_age   Always       -       38 (0 20 0 0 0)
    197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
    198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
    199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0
    240 Head_Flying_Hours       0x0000   100   253   000    Old_age   Offline      -       11502h+29m+09.286s
    241 Total_LBAs_Written      0x0000   100   253   000    Old_age   Offline      -       2755965598
    242 Total_LBAs_Read         0x0000   100   253   000    Old_age   Offline      -       6873678817
    
    Thanked by (1)bikegremlin

    Websites have ads, I have ad-blocker.

  • Falzo Senpai

    People seem to lack understanding of common (spinning) hard drive technology more and more these days. Lots of assumptions and claims without further clarification.

    Sorry, but where do things like

    Normally they fail after 5+ years of operation...

    or

    Definitely go with SAS, a little more noisy (as in a server, who cares) but definitely more reliable.

    come from?

    Hard disks usually have a very long life. Also, apart from electronic/controller issues, they tend to die rather slowly and not instantly, as opposed to pure chip-based storage ;-) I would not put any generic number of years on any of this, but maybe that's just me?

    As for the difference between SATA and SAS, what backs the claim of SAS being more reliable? As pointed out, the interface is different, but that does not mean the mechanical parts have to be.
    Bad sectors are usually a rather physical issue and do not depend on the interface itself, so SAS alone does not say much in terms of reliability...
    If, however, you meant to say that SAS drives are usually of enterprise quality, which are built for a different durability, then that might be more right. But keep in mind that there are also enterprise-grade SATA drives, so again the claim that SAS per se is better than SATA does not fit well here, I'd say.

    As for the 13-year-old drive, of course that is quite an age. If the sectors are good, as shown, I don't see any reason to mistrust that device at all.
    The only thing that comes to mind regarding very old, long-running disks is some wear on the motor, which can sometimes lead to these disks not wanting to spin up anymore.

    whether a 3.5" 640GB drive is useful for anything at all is a different topic though, I guess...

    Thanked by (3)Lee beagle chimichurri
  • bikegremlin Moderator OG Content Writer

    @Falzo said:
    if you however meant to say that SAS drives are usually of enterprise quality, built for a different durability class, then that might be more right. but keep in mind that there are also enterprise-grade SATA drives, so the claim that SAS per se is better than SATA does not fit well here, I'd say.

    This is exactly how I understood that recommendation.

    There has been a case of WD RED "NAS" drives using SMR, for example.

    Relja of House Novović, the First of His Name, King of the Plains, the Breaker of Chains, WirMach Wolves pack member
    BikeGremlin's web-hosting reviews

  • host_c Hosting Provider
    edited January 9

    You have to understand that retail is a market where the product should not live much past the manufacturer warranty plus a little margin, otherwise where is the next sale?

    Also, marketing BS dominates the market, so the question is what to believe.

    On an IT forum, when we talk about interfaces, SAS / SATA, we refer to it as the product itself, as I have not seen consumer-grade hardware with SAS.

    Consumer-grade disks are not designed for 24/7/365 use, maybe 8 hours/day. Drives "optimized" for DVR, NAS, performance, these are all BS marketing. Enterprise grade in SATA, lol, that would be the first, it is BS marketing. There is a slight difference maybe, but the margin is so thin that paying the extra 30% or more makes no sense.

    Drives are labeled on the manufacturing line depending on the quality test (well, I doubt even that is happening nowadays). If 50% of the drive is bad, it will be rebranded to the Seagate SV/NVR series, cut down to 5400/5900 RPM and sold with the BS marketing of "NVR optimized", for example. To understand the logic: no one will throw away something that can be used and sold for any $.

    Oh wait, there is more: if the electronics are messed up, it will become a drive for external use via USB, as over USB you can read almost no info from SMART.

    A few years ago, external WD and Seagate (LaCie) drives were 20% cheaper than their internal brothers, so we went to the shops, got a few 4 TB ones (20-something if I remember), took the cases apart, and bingo, a fresh new SATA WD/Seagate drive inside. As soon as we hooked them up to an LSI2008 controller and installed, for example, FreeNAS, wow, at the first SMART short test all of them had some failure, and I mean all. So that was a nice waste of money.

    The days of a brand that makes better and better products are long gone, in any field, and this is actually sad. :'(

    It is not just the WD NAS scandal, other brands have done it too. Toshiba, for example, made some "Performance" desktop drives about as reliable as the old Quantum Fireball EX series, or Maxtor if anyone remembers. Even better, remember the Hitachi "Death Star"?

    As more and more enthusiasts entered the consumer market, all companies started the propaganda that this drive is for pros. Well, a pro will get used enterprise gear rather than new BS marketing, if he is a pro and has some experience. I have seen mountains of failed drives marketed as the best of the best of the best during my 9 years at a computer assembly company.

    You can argue, give examples of one or two exceptions, but at the end of the day, so can I show you consumer-grade drives that failed in under 24 months in business use.

    When we talk about a backup solution, the whole point of the backup is to have it when shit hits the fan; otherwise we should call it a "maybe backup".

    Look at the SSD market: I still have some Corsair GT SATA 90GB drives, not a cell in them is dead, lifespan remaining 99%. Those consumer SATA SSDs are ~15 years old, and in the next corner is the Kingston SV300 series, oh man, what a waste of money. Even the NVMe KC3000 line is a piece of dirt, and the list can go on and on. (Samsung is not better at this point.)

    Again, all of the above in business use, 24/7. And why not use second-hand enterprise components in a RAID solution? Or any solution?

    I mean, those devices cost $$$ not $ when new, and they enter the used market in 2-3 years, depending on the big companies' equipment renewal policies. You can find used 2020, 2021 drives on eBay, for example, and not from a few sellers, from a bunch of them.

    I understand that my point might seem too enterprise-focused and that you only use this for home use. It is not about the Gbps or the IOPS of the drive, it is about whether I can trust it with my data, even if it is my "www.girls4you.com" stuff. =)

    <3 the spinning rust

    Edit:

    LOL, here is one of our customers that got some WD Blacks because they were cheap; just got this mail from his server:

    Device: /dev/bus/0 [megaraid_disk_11] [SAT], ATA error count increased from 13 to 14

    Device info:
    HGST HTS721010A9E630, S/N:JR1000BNG1511E, WWN:5-000cca-8e6c08744, FW:JB0OA3W0, 1.00 TB
    For details see host's SYSLOG.

    And I told him, get some 1 TB SAS 2.5-inch drives from the goddamn eBay supplier. But now he knows better.

    I have ~30 146 GB SAS 15K RPM drives manufactured around 2000. I hate to throw them away because they still work, but what the heck to do with them? A nice uncompressed MKV file is ~25 GB. So off to the junkyard they will go this weekend.

    Thanked by (1)bikegremlin

    Host-C - VPS Services Provider - AS211462

    "If there is no struggle there is no progress"

  • edited January 9

    I don't think it's easy to measure the reliability of a hard drive.
    Because power on hours isn't enough.
    You also need the amount of data read / written.
    But even that's not enough, because it depends on the type of data stored: large data sets with essentially sequential reads / writes don't have the same impact as reading / writing a lot of small files.
    And it depends on how fragmented the data is.

    You need to know the number of movements made by the hard disk drive's read head. :p
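
    Most of those counters are at least exposed by SMART. A minimal sketch (assuming smartctl 7+ for its JSON output, a classic ATA/SATA drive, and a hypothetical device path /dev/sda) that pulls the handful that matter into one place:

    #!/usr/bin/env python3
    """Collect a few wear-related SMART counters for one drive.

    Sketch only: assumes smartctl 7.x (`-j` JSON output) and an ATA/SATA drive;
    NVMe devices report their health under a different JSON structure.
    """
    import json
    import subprocess

    DEVICE = "/dev/sda"  # hypothetical device path - adjust to your drive

    # attribute IDs of interest: hours, load cycles, data written/read
    WANTED = {9, 193, 241, 242}

    # check=False because smartctl uses non-zero exit bits even for warnings
    out = subprocess.run(["smartctl", "-j", "-A", DEVICE],
                         capture_output=True, text=True, check=False)
    data = json.loads(out.stdout)

    for attr in data.get("ata_smart_attributes", {}).get("table", []):
        if attr["id"] in WANTED:
            print(f'{attr["name"]:>22}: {attr["raw"]["value"]}')

    Head movements themselves aren't exposed as a standard attribute; Load_Cycle_Count and Head_Flying_Hours (attribute 240, where present) are about as close as SMART usually gets.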

    Otherwise I only have one drive that is powered 24 hours a day: Travelstar Z5K1000
    It's been working for a few years on my personal server, but it's not really used that much, so...

    In fact, I'd like to replace it with a 4TB drive, so it'll probably be replaced before it dies.
    But since the data on it is so uncritical, I'm waiting to find a good price, or even to find a second-hand one =)

    Thanked by (1)host_c
  • edited January 9

    If you have some badass HDD/SSD/NVMe then you can submit your SMART data, and you can also fight over Power_On_Hours - https://diskcheck.monster/battle-power_on_hours :)

    https://diskcheck.monster/
    https://lowendspirit.com/discussion/4229/tool-nvme-ssd-hdd-s-m-a-r-t-monitoring-testing

  • Falzo Senpai

    @host_c said: On an IT forum, when we talk about interfaces, SAS / SATA, we refer to it as the product itself.

    ah, thanks for clarifying that we do so. I am new to IT forums it seems...

    @host_c said: Enterprise grade in SATA, lol, that would be the first, it is BS marketing.

    so, how come vendors offer the exact same drive with the same technical specifications under the same product name and number in a SAS and SATA version?
    have an example: https://documents.westerndigital.com/content/dam/doc-library/en_us/assets/public/western-digital/product/data-center-drives/ultrastar-sata-series/data-sheet-ultrastar-7k4000.pdf

    is this consumer grade SAS now or enterprise SATA? you have to decide now. sorry if that's a first for you (you as in we, the IT forum guys)

    could this be due to the ease of just slapping different PCBs on top of an otherwise identical block of alloy with a motor and some platters in it?
    or are you still claiming that these won't be the exact same mechanical parts in this drive? and that running it on SATA will cause worse reliability in terms of endurance? uh huh...

    don't get me wrong, I am not denying that there is in fact a big difference in quality between certain types of drives. for sure there is, and that's why there are technical specifications and warranties and stuff. also, no offense meant, but please try to stop talking this down to an interface as the source of distinguishing truth. it simply isn't. we don't do that on IT forums. we rather stick to facts. ;-)
