My friend and I were discussing this the other day, and he said RAID is no longer needed. His reasoning was that SSDs have gotten so big, and that apparently you can replace sectors within them if a problem occurs, which is why having an array is not needed.

I replied that arrays provide redundancy, which means much less downtime when something goes wrong and a drive needs to be replaced. Depending on what you are doing, that can be more valuable than just trusting a single big drive. And depending on the configuration, RAID's redundancy can also rebuild lost data if a drive does fail.

What do you all think?

  • Dekkia@this.doesnotcut.it · 10 points · edited · 7 months ago

    I don’t think the internal wear-leveling and overprovisioning of SSDs can or should replace RAID. Remapping a dead sector without losing capacity is great, but it won’t help you when (for example) the controller dies.

    Depending on the amount of data you’re storing, SSDs also might be too expensive.

    The only exception is maybe RAID 0 in a normal PC. There it’s probably better to just get one disk for each logical drive.
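
    The RAID 0 part is easy to put numbers on: striping across several disks means losing any one of them loses everything, so the array fails roughly N times as often as a single disk (the failure rate below is an assumed figure):

    ```python
    # Why RAID 0 across several disks is riskier than keeping them separate:
    # losing ANY member loses the whole array. The AFR is an assumed figure.
    afr = 0.015                        # assumed per-disk annualized failure rate
    for n in (1, 2, 4):
        p_array = 1 - (1 - afr) ** n   # chance at least one of n striped disks dies in a year
        print(f"{n} disk(s) striped: {p_array:.2%} chance of losing everything")
    ```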

  • lemmyreader@lemmy.ml · 10 points · 7 months ago

    Reminds me of the days when CD-ROMs were brand new and advertised as indestructible, with photos of elephants walking over them. Having said that, I assume SSDs can break just like other hard disks can, and in that case RAID can save a lot of time getting a computer back up, especially when a lot of data is involved.

  • winnie@lemmy.ml · 9 points · 7 months ago

    > you can replace sectors within them if a problem occurs

    That won’t help you if the sector where your data is located dies!

  • xkforce@lemmy.world · 4 points · 7 months ago

    Higher-end Samsung SSDs were dying a lot faster than they should. I don’t know what drugs your friend is on to think they can’t fail, but they’d better have enough for the rest of the class.

  • tobogganablaze@lemmus.org · 2 points · 7 months ago

    > He said it was due to how big SSDs have gotten and that apparently you can replace sectors within them if a problem occurs which is why having an array is not needed.

    Buying SSDs with the same capacity as my NAS, 70TB (after RAID 6), would cost almost triple what my whole setup (including the NAS) cost.

    So unless you shit money, SSDs are not an option for anything with a decent capacity.
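
    Rough math, drive costs only, with per-TB prices and array width that are nothing more than placeholder assumptions:

    ```python
    # Back-of-the-envelope HDD vs SSD cost for ~70 TB usable in RAID 6.
    # The prices per TB and the number of drives are assumptions, not quotes.
    HDD_PRICE_PER_TB = 20   # assumed price for large NAS-class HDDs
    SSD_PRICE_PER_TB = 55   # assumed price for consumer SATA SSDs

    usable_tb = 70
    drives = 8                    # assumed array width
    data_drives = drives - 2      # RAID 6 spends two drives' worth on parity
    raw_tb = usable_tb * drives / data_drives

    print(f"raw capacity needed: {raw_tb:.0f} TB")
    print(f"HDD drives: ~${raw_tb * HDD_PRICE_PER_TB:,.0f}")
    print(f"SSD drives: ~${raw_tb * SSD_PRICE_PER_TB:,.0f}")   # roughly 2.75x the HDD figure
    ```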

  • lemmylommy@lemmy.world · 2 points · 7 months ago

    This has nothing to do with SSDs or their size. Hard disks also have a small spare area (though not as big) and can mark and remap failing sectors.

    RAID (1) is still (possibly) good for the only thing it ever was (possibly) good for: keeping the system running long enough for you to put in a new hard disk if one fails.

    Think of industrial systems where every minute of downtime can cost thousands of dollars. And even there the usefulness of RAID can be questioned: should you not in that case have a whole spare system, easy to swap in, because more than just storage can fail?

    And what about the RAID controller itself? Does it not add complexity and another point of failure to the whole system?

    And most importantly: will anyone actually get notified of a failing disk and replace it quickly? Or will the whole thing just prolong the inevitable?

    Would you even trust a system that had one disk fail already to keep going in a critical place? Or would it not be safer to just replace the whole thing anyway after one failure?
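
    For a rough sense of those odds: here is a quick estimate of how likely the surviving mirror member is to die before the failed one is replaced, assuming a flat annualized failure rate (real failures are correlated, so treat this as optimistic):

    ```python
    # Chance the second RAID 1 member also fails before the first is replaced.
    # Both numbers are assumptions; correlated failures (same batch, same
    # workload) make the real risk higher than this naive estimate.
    afr = 0.015          # assumed 1.5% annualized failure rate per drive
    replace_days = 3     # assumed time until the replacement is in and rebuilt

    p_second_failure = 1 - (1 - afr) ** (replace_days / 365)
    print(f"~{p_second_failure:.4%} chance of losing the mirror in that window")
    ```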

  • neidu2@feddit.nl · 2 points · edited · 7 months ago

    I wholeheartedly agree with you. It is worth noting that a lot of the use cases for RAID can now be handled in software, but there are some places where hardware RAID still shines, such as redundancy. Yes, software can also provide redundancy, but I still haven’t seen a software solution that is equivalent to a proper RAID controller with a dedicated battery to keep the I/O buffer alive in case of hardware failure. That one has saved me a few times.

    Source: I’m in charge of 6 storage clusters at work. BeeGFS is what takes care of the actual clustering, resulting in each cluster clocking in at 1.2PB of storage. Each cluster consists of four machines with three storage volumes each.
    Each storage volume consists of 12 drives in a RAID6 configuration.

    I can yank faulty drives, toss them out, and have them replaced with no downtime. I know some like to set up hot spares, but I for one don’t. I’ve even had entire servers die on me, and thanks to the additional redundancy provided by BeeGFS, I’ve swapped motherboards with no cluster downtime either. Just move the drives over to an identical machine (yes, each cluster has a dedicated spare machine), import the RAID, and you’re good to go.
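
    The capacity math works out if you assume roughly 10 TB drives (my assumption; the rest of the numbers are from the layout above):

    ```python
    # Sanity check of the cluster layout described above.
    machines_per_cluster = 4
    volumes_per_machine = 3
    drives_per_volume = 12            # RAID6: two drives' worth of parity per volume
    drive_tb = 10                     # assumed drive size

    usable_per_volume = (drives_per_volume - 2) * drive_tb
    usable_per_cluster = machines_per_cluster * volumes_per_machine * usable_per_volume
    print(f"{usable_per_cluster} TB usable per cluster")   # 1200 TB ≈ 1.2 PB
    ```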

  • DontTakeMySky@lemmy.world · 1 point · 7 months ago

    Maybe, maybe, MAYBE for a prosumer desktop situation it’s less necessary than it used to be. But it’s absolutely still needed; your friend is dumb and reckless with their data.

    Drives fail all the time, not just sectors.

  • MsPenguinette@lemmy.world · 1 point · 7 months ago

    Depends on the type of RAID. It’s probably not needed just for expanding storage, but having a configuration that allows a drive to fail is absolutely still prudent.

  • LemmyHead@lemmy.ml · 2 points, 1 downvote · 7 months ago

    I’d say “old” RAID could be dead if you have proper backups and the ability to replace a defective drive quickly in cases where uptime is crucial. But there’s also modern RAID like btrfs and ZFS, which can additionally repair corrupted files, caused by bitrot for example. Old RAID can’t do that, and hardware RAID couldn’t either back when I last used it years ago. Maybe that has changed, but I don’t see the point of hardware-based RAID in most cases anymore.
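
    A toy illustration of that idea (not how btrfs or ZFS actually implement it): keep a checksum per block, and when one mirror copy fails verification, rewrite it from a copy that still checks out.

    ```python
    import hashlib

    # Toy model of checksum-based self-healing, the idea behind btrfs/ZFS scrubs.
    # Real filesystems checksum extents with their own formats; this only shows
    # why a checksum plus a second copy lets you repair silent bitrot.
    def scrub(mirrors: list[bytearray], checksums: list[bytes]) -> None:
        good = next((i for i, m in enumerate(mirrors)
                     if hashlib.sha256(m).digest() == checksums[i]), None)
        if good is None:
            raise RuntimeError("all copies corrupt: unrecoverable")
        for i, m in enumerate(mirrors):
            if hashlib.sha256(m).digest() != checksums[i]:
                mirrors[i][:] = mirrors[good]     # repair the bad copy in place

    block = bytearray(b"important data")
    csum = hashlib.sha256(block).digest()
    copies = [bytearray(block), bytearray(block)]
    copies[1][0] ^= 0xFF                          # simulate bitrot on one copy
    scrub(copies, [csum, csum])
    assert copies[0] == copies[1] == block        # both copies intact again
    ```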

    • winnie@lemmy.ml · 1 point · 7 months ago

      AFAIK the only officially supported RAID modes in btrfs are RAID0 and RAID1.

      RAID56 is officially considered unstable.