There’s been some Friday night kernel drama on the Linux kernel mailing list… Linus Torvalds has expressed regrets for merging the Bcachefs file-system, leading to a back-and-forth with the file-system's maintainer.

  • NeoNachtwaechter@lemmy.world · 26 days ago

    two files in the same folder, one of them stored compressed on an array of HDDs in RAID10 and the other one stored on a different array […]

    Now that’s what I call serious over-engineering.

    Who in the world wants to use that?

    And does that developer maybe have some spare time? /s

    • apt_install_coffee@lemmy.ml · 26 days ago

      This is actually a feature that enterprise SAN solutions have had for a while: being able to choose your level of redundancy & performance at the file level is extremely useful for minimising downtime and for not replicating ephemeral data.

      Most filesystem features are not for the average user who has their data replicated in a cloud service; they’re for businesses where this flexibility saves a lot of money.

      • apt_install_coffee@lemmy.ml · 25 days ago

        I’ll also tack on that when you use cloud storage, what do you think your stuff is stored on at the end of the day? Sure as shit not Bcachefs yet, but more likely than not it’s on some NetApp appliance offering the same features that Bcachefs is developing.

    • Semperverus@lemmy.world · 26 days ago

      This probably meets some extreme corporate use case where they are serving millions of customers.

      • DaPorkchop_@lemmy.ml · edited · 26 days ago

        It’s not that obscure - I had a use case a while back where I had multiple rocksdb instances running on the same machine and wanted each of them to store its WAL only on SSD storage with compression, with the main tables stored uncompressed on an HDD array behind a write-through SSD cache (ideally using the same set of SSDs for cost). I eventually did it, but it required partitioning the SSDs in half, using one half as a bcache (not bcachefs) cache in front of the HDDs, and using the other half of the SSDs to create a compressed filesystem, on which I created subdirectories and bind-mounted each one into the corresponding rocksdb database.
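        The workaround described above can be sketched roughly like this. Device names, mount points, and the choice of btrfs as the compressed filesystem are illustrative assumptions, not the commenter's exact setup:

        ```shell
        # Split each SSD in half: one half for bcache, one for the compressed WAL fs
        # (/dev/sda = an SSD, /dev/sdb = an HDD; names are hypothetical)
        parted /dev/sda mklabel gpt
        parted /dev/sda mkpart cache 0% 50%
        parted /dev/sda mkpart wal 50% 100%

        # Half 1: write-through SSD cache in front of the HDDs via bcache (not bcachefs)
        make-bcache -C /dev/sda1 -B /dev/sdb
        echo writethrough > /sys/block/bcache0/bcache/cache_mode
        mkfs.ext4 /dev/bcache0
        mount /dev/bcache0 /srv/tables        # uncompressed main tables

        # Half 2: a compressed filesystem for the WALs (btrfs chosen for illustration)
        mkfs.btrfs /dev/sda2
        mount -o compress=zstd /dev/sda2 /srv/wal

        # One bind mount per rocksdb instance, pointing its WAL dir at the SSD fs
        mkdir -p /srv/wal/db0 /var/lib/rocksdb/db0/wal
        mount --bind /srv/wal/db0 /var/lib/rocksdb/db0/wal
        ```

        Note the fixed split: the 50/50 partitioning is exactly the rigid cache/WAL allocation complained about below.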

        Yes, it works, but it’s also ugly as sin and the SSD allocation between the cache and the WAL storage is also fixed (I’d like to use as much space as possible for caching). This would be just a few simple commands using bcachefs, and would also be completely transparent once configured (no messing around with dozens of fstab entries or bind mounts).