Edit 2: OK, per feedback I am going to have a dedicated external NAS and a separate home server. The NAS will probably run TrueNAS; the home server will use an immutable OS like Fedora Silverblue. I am doing a dedicated NAS because it can be good at doing one thing: serving files and making backups. Then my home server can be good at doing whatever I want it to do without accidentally torching my data.

I haven’t found any good information on which distro to use for the NAS I am building. Sure, there are a few out there, but as far as I can tell none of them are immutable, and that seems to be the new thing for long-term durability.

Edit: One requirement is that it will run a media server with hardware transcoding. I’m not quite sure whether I can containerize Jellyfin and still easily hardware-transcode without a more expensive processor that supports Hyper-V.
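For what it’s worth, containers share the host kernel, so a containerized Jellyfin can usually do Intel/AMD hardware transcoding just by passing through the GPU’s render node; no CPU virtualization extensions are involved. A hedged sketch of this (the device path, media paths, and port are assumptions for your system):

```shell
# Sketch: Jellyfin in Docker with VAAPI/Quick Sync hardware transcoding.
# The render node path and media/config paths are assumptions; adjust them.
docker run -d \
  --name jellyfin \
  --device /dev/dri/renderD128:/dev/dri/renderD128 \
  --group-add "$(getent group render | cut -d: -f3)" \
  -v /srv/media:/media:ro \
  -v /srv/jellyfin/config:/config \
  -p 8096:8096 \
  jellyfin/jellyfin
```

After that, hardware acceleration still has to be enabled in Jellyfin’s playback settings (VAAPI or QSV, pointing at the same render node).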

  • ShortN0te@lemmy.ml · 9 months ago

    As I understand it, immutable systems are useful for devices that are more prone to change, like a desktop you actually use to install programs, try out things, and so on.

    I do not see much benefit here for a stable server system. If you are worried about stability and uptime, a dedicated testing system does a better job, IMHO.

    • fuckwit_mcbumcrumble@lemmy.world · 9 months ago

      Yeah, that part confuses me too. It’s a NAS: install something simple plus whatever services you need, and I can’t imagine it breaking any time soon. Shit, as long as someone else has tested the software I’d be more than happy to install something complex… which I have, and it has been running for almost 10 years now with no issues. FreeNAS has been rock solid for me, and it sure as hell ain’t minimal.

      • Dran@lemmy.world · 9 months ago

        Virtual machines also exist. I once got bitten by a Proxmox upgrade, so I built a Proxmox VM on that Proxmox host, mirroring my physical setup, that ran a Debian VM inside the paravirtualized Proxmox instance. They were set to canary-upgrade a day before my bare-metal host. If the canary Debian VM didn’t ping back to my update script, the script would exit and email me, letting me know that something was about to break in the real upgrade process.

        Since then, even though I’m no longer using Proxmox, basically all my infrastructure mirrors the same philosophy. All of my containers/pods/workflows canary-build and test themselves before upgrading the real ones I use in my homelab “production”. You don’t always need a second physical copy of the hardware to have an appropriate testing/canary system.
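        A canary gate like this can be sketched as a small shell script. This is a reconstruction of the idea, not the author’s actual script; the canary host name, mail address, and upgrade command are all assumptions.

```shell
#!/usr/bin/env bash
# Sketch of a canary-gated upgrade. CANARY_HOST, the mail address, and the
# upgrade command are assumptions, not taken from the comment above.
CANARY_HOST="${CANARY_HOST:-canary-vm.lan}"

# Pure decision helper: maps the canary check's exit status to an action.
gate() {
  if [ "$1" -eq 0 ]; then
    echo "proceed"
  else
    echo "abort"
  fi
}

run_upgrade() {
  # The canary VM upgraded a day earlier; if it no longer answers, stop here.
  ping -c 1 -W 2 "$CANARY_HOST" >/dev/null 2>&1
  if [ "$(gate "$?")" = "abort" ]; then
    echo "canary $CANARY_HOST unreachable, aborting" \
      | mail -s "upgrade halted" admin@example.com
    exit 1
  fi
  apt-get update && apt-get -y upgrade   # the real, bare-metal upgrade
}

# Only act when explicitly asked, so sourcing this file has no side effects.
if [ "${1:-}" = "go" ]; then
  run_upgrade
fi
```

Run from cron or a systemd timer a day after the canary’s own upgrade window.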

    • FutileRecipe@lemmy.world · 9 months ago

      As of my understanding, immutable systems are useful for Devices that are more bound to change, like a Desktop…I do not see much benefit here for a stable server system.

      This logic is kind of backwards, or rather incomplete. Immutable typically means that the core system doesn’t change outside of upgrades. I would prioritize putting an immutable OS on a server over a desktop if I were forced to pick one or the other (nothing wrong with immutable on both), simply because I don’t want the server OS to change outside of very controlled and specific circumstances. An immutable server OS helps ensure that stability you speak of, not to mention it can thwart some malware. The consequences of losing a server are typically higher than losing a desktop, hence me prioritizing the server.

      In a perfect world, you’re right: the server remains stable and doesn’t need immutability… but then, neither does the desktop.

  • mholiv@lemmy.world · 9 months ago

    I would think that any immutable Linux distribution would be suitable. Just configure it with the services that you want. Is there anything specific you need it to do?

    • WetBeardHairs@lemmy.ml (OP) · 9 months ago

      Honestly, I had never built a NAS and installed an OS on it before. I’ve only ever used the junk that ASUSTOR puts out, and I want to have control over things. So a good part of the reason I asked here was to see what other people had done and why.

  • anamethatisnt@lemmy.world · 9 months ago

    What functionality do you want from your NAS? If it’s simple NFS and Samba then I imagine you can choose whatever you want really.

      • anamethatisnt@lemmy.world · 9 months ago

        If the software you want to run has a Flatpak, then I imagine you can try out Fedora Silverblue; Jellyfin does have a Flatpak.

        Personally, I run my Jellyfin on a virtual Debian Bookworm server with transcoding off; my Jellyfin clients don’t need the help.
        I always clone my Jellyfin server before apt update && apt upgrade so I can roll back.
        Oh, and my NAS (network-attached storage) isn’t on the same machine; my Jellyfin server uses Samba and /mnt/media/libraryfolders, so cloning it is quick and easy.
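        For anyone replicating this split, the storage side is just a CIFS mount on the Jellyfin host. A sketch; the NAS host name, share name, and credentials path are assumptions:

```shell
# Sketch: mounting a NAS media share on the Jellyfin host over Samba/CIFS.
# Host, share, mount point, and credentials file are assumptions.
sudo apt-get install -y cifs-utils
sudo mkdir -p /mnt/media
sudo mount -t cifs //nas.lan/media /mnt/media \
  -o ro,credentials=/etc/samba/creds,uid=jellyfin
# Equivalent /etc/fstab line so it mounts at boot:
# //nas.lan/media  /mnt/media  cifs  ro,credentials=/etc/samba/creds,uid=jellyfin  0  0
```

Because the media lives on the NAS, the Jellyfin VM itself stays small, which is what makes cloning it before upgrades quick.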

        • WetBeardHairs@lemmy.ml (OP) · 9 months ago

          Is there a performance impact on the Jellyfin server from having the NAS on a separate machine? How long does it take to serve a 20 GB Blu-ray rip?

          • anamethatisnt@lemmy.world · 9 months ago

            The network isn’t a bottleneck in my system.
            I don’t have any 20 GB Blu-ray rips, as I’m satisfied with the quality of a 5–8 GB 1080p encode.
            I don’t notice any delay when starting playback; without transcoding it’s just a data stream.

  • Falcon@lemmy.world · 9 months ago

    Just use anything and set up a good workflow with snapshots.

    Have a “current” snapshot, roll back to it before use, and then re-snapshot over it.

    Now your system is immutable in practice, but you can still edit /etc to debug.
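    One way to sketch that workflow, assuming a Btrfs root with snapshots kept under /.snapshots (the subvolume layout is an assumption; ZFS has equivalent commands):

```shell
# Sketch of the snapshot-based "immutable in practice" workflow.
# Paths and subvolume layout are assumptions for your system.
sudo btrfs subvolume snapshot -r / /.snapshots/current   # save known-good state
# ...use the system, debug in /etc, try things out...
# If it went badly, restore by making a writable copy of the snapshot:
sudo btrfs subvolume snapshot /.snapshots/current /rollback-root
# (then point the bootloader/fstab at /rollback-root and reboot)
# If it went well, re-snapshot over "current":
sudo btrfs subvolume delete /.snapshots/current
sudo btrfs subvolume snapshot -r / /.snapshots/current
```

Tools like snapper automate this rotation if you’d rather not script it by hand.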

  • lemmyvore@feddit.nl · 9 months ago (edited)

    Typically on a home server you would virtualize services anyway so it really doesn’t matter what distro is running on the metal.

    Also, if you’re fully virtualized you can switch out the host distro any time you want, so you can adopt an immutable one later if you like.

    Why do you want an immutable distro anyway?

    • WetBeardHairs@lemmy.ml (OP) · 9 months ago

      I want immutability because I come from the Debian world, where everything just works. But I also want the benefits of using modern versions of packages.

      • drndramrndra@lemmygrad.ml · 9 months ago

        If you’re running unstable system packages, immutability won’t really save your stability.

        So don’t complicate it; just use Debian with Nix and home-manager. That way you have a stable base, and you can keep a list of bleeding-edge packages that should be installed. In any case it should essentially be just Docker plus whatever can’t be Dockerized.
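        That split can look roughly like this in practice. The package names are examples, not a recommendation, and the installer command is the multi-user one upstream documents:

```shell
# Sketch: stable Debian base with newer packages layered on via Nix.
# Package names here are examples; pick whatever you actually need.
sh <(curl -L https://nixos.org/nix/install) --daemon   # multi-user Nix install
# Pull newer tools from nixpkgs without touching apt-managed packages:
nix-env -iA nixpkgs.jellyfin nixpkgs.docker-compose
# Docker itself stays on the stable Debian side:
sudo apt-get install -y docker.io
```

home-manager then lets you declare that Nix package list in a single config file instead of installing imperatively.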

  • kevincox@lemmy.ml · 9 months ago

    I use NixOS for this. It works wonderfully.

    Immutable means different things to different people, but to me:

    1. Different programs don’t conflict with each other.
    2. My entire server config is stored in a versioned Git repo.
    3. I can rollback OS updates trivially and pick which base OS version I want to use.
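    Points 2 and 3 look roughly like this day to day (the git workflow around the standard NixOS tooling is an assumption about this particular setup):

```shell
# Sketch: versioned config plus trivial rollbacks on NixOS.
cd /etc/nixos && git commit -am "enable jellyfin"   # config history lives in git
sudo nixos-rebuild switch                           # build and activate it
# Every activated build is kept as a numbered "generation":
sudo nix-env --list-generations -p /nix/var/nix/profiles/system
sudo nixos-rebuild switch --rollback                # revert to the previous one
```

Old generations also appear in the boot menu, so a bad update can be backed out from the bootloader without a rescue disk.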