Over the years I have accumulated a sizable music library (mostly FLACs, adding up to a bit less than 1 TB) that I now want to reorganize (i.e. gradually process with MusicBrainz Picard).

Since the music lives on my NAS, FLACs are relatively big, and my network speed is 1 Gb/s, I installed in my computer an HDD I had lying around and replicated the whole library there; the idea being to work on the local files and then sync them to the NAS.

I set up Syncthing for replication and… everything works, in theory.

In practice, Syncthing loves to rescan the whole library (given how long it takes, it must be reading all the data and computing checksums rather than just checking the filesystem metadata - why on earth?), which means my underpowered NAS (Celeron N3150) does nothing but rescan the same files over and over.

Syncthing by default rescans directories every hour (again, why on earth?), and it still seems to rescan a whole lot even after I set rescanIntervalS to 90 days (maybe it rescans once regardless when restarted?).
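
For reference, the knobs in question live as attributes on the folder entry in Syncthing's config.xml (also reachable via Folder → Edit → Advanced in the web UI). A sketch, where the folder id and path are placeholders for your own setup; 7776000 s is the 90 days mentioned above:

```xml
<!-- Hypothetical folder entry; id and path are placeholders. -->
<folder id="music" path="/mnt/music"
        rescanIntervalS="7776000"
        fsWatcherEnabled="true">
</folder>
```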

Anyway, I am looking into alternatives.
Are there any you would recommend? (FOSS please)

Notes:

  • I know I could just schedule a periodic rsync from my PC to the NAS, but I would prefer a bidirectional solution if possible (rsync is gonna be the last resort)
  • I read about Unison, but I also read that it’s not great with big jobs and that it too scans a lot
  • The disks on my NAS go to sleep after 10 minutes idle time and if possible I would prefer not waking them up all the time (which would most probably happen if I scheduled a periodic rsync job - the NAS has RAM to spare, but there’s no guarantee it’ll keep in cache all the data rsync needs)
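
For what it's worth, the last-resort rsync job from the first note only takes a few lines. A sketch, where the real destination would be something like "nas:/volume1/music/" (a placeholder); the demo below runs against two temporary local directories so the flags can be seen in action:

```shell
#!/bin/sh
# Sketch of the one-way rsync fallback. In real use, SRC would be the
# local library and DEST the NAS path (e.g. "nas:/volume1/music/").
set -eu
command -v rsync >/dev/null 2>&1 || { echo "rsync not installed; skipping demo"; exit 0; }

SRC=$(mktemp -d)
DEST=$(mktemp -d)
printf 'dummy flac data' > "$SRC/track.flac"

# -a preserves metadata, --delete mirrors deletions, --partial resumes
# interrupted transfers; the trailing slash on SRC syncs its *contents*.
rsync -a --delete --partial "$SRC/" "$DEST/"
ls "$DEST"
```

Note this does nothing about the disk-spin-up concern: every run walks the destination tree, so the NAS disks will wake on each scheduled sync.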
  • Alex@lemmy.ml · 14 points · 19 days ago

    Syncthing should have inotify support which allows it to watch for changes rather than polling. Does that help?

    • gomp@lemmy.ml (OP) · 3 points · 19 days ago

      Yes, Syncthing does watch for file changes… that’s why I am so puzzled that it also does full rescans :)

      Maybe they do that to catch changes that may have been made while Syncthing was not running… it may make sense on mobile, where the OS likes to kill processes willy-nilly, but IMHO not on a “real” computer

      • Alex@lemmy.ml · 2 points · 19 days ago

        Is it worth raising an issue with the project? Also, enabling logging might give some clues as to why a rescan is being done.

      • petersr@lemmy.world · 1 point · 18 days ago

        Or to catch the case where you boot into a different OS and change files; those changes would otherwise go untracked.

  • Lung@lemmy.world · 13 points · 19 days ago (edited)

    Nothing wrong with rsync, it’s still kinda the shit. A short script will do everything

    https://git-annex.branchable.com/ - this thing extends git to handle lots of big files. Probably a solid choice; I haven’t tried it, but it claims to do exactly what you need, and it even has a UI and partial sync

    • ouch@lemmy.world · 4 points · 19 days ago

      The use case sounds exactly like git-annex.

      As a bonus you get a system that tracks how many copies of files and where you have them.

    • IanTwenty@lemmy.world · 2 points · 18 days ago

      So git-annex should let you pull down just the files you want to work on, make your changes, then push them back upstream - no need to continuously sync the entire collection. It requires some git knowledge and wading through the git-annex docs, but the walkthrough is a good place for an overview: https://git-annex.branchable.com/walkthrough/
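
A minimal sketch of that workflow, assuming git-annex is installed; the "nas" remote name and the album path in the comments are placeholders:

```shell
#!/bin/sh
# Sketch of the git-annex partial-checkout workflow described above.
set -eu
command -v git-annex >/dev/null 2>&1 || { echo "git-annex not installed; skipping demo"; exit 0; }

repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
git annex init "laptop" >/dev/null

# Annexed files are replaced by symlinks into .git/annex/objects, so the
# full content only needs to exist where you have explicitly fetched it.
printf 'dummy flac data' > track.flac
git annex add track.flac >/dev/null
git commit -q -m 'add track'
ls -l track.flac

# Real-world usage (commented out; needs a configured "nas" remote):
#   git annex get "Some Album/"     # fetch only the files you want locally
#   git annex sync --content nas    # push changes (and content) back
```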

  • JTskulk@lemmy.world · 2 points · 18 days ago

    FreeFileSync detects moves and changes quickly without rereading whole files. The first time you sync, it will read every file to hash it; this takes a long time, but subsequent syncs will be fast.

  • gi1242@lemmy.world · 1 point · 18 days ago

    I used Seafile in the past, but I abandoned it for syncing. It might help your use case…

    • gomp@lemmy.ml (OP) · 2 points · 19 days ago

      Never heard of it… OMG that must be the worst name for a backup solution! :D

      It reeks of abandoned software (the last release is 0.50, from 2018), but there is recent activity in its git repo, so… IDK

      • Dr Jekell@lemmy.world · 2 points · 19 days ago (edited)

        I wouldn’t consider it a backup solution, I use Timeshift for that.

        It’s more of a file syncing software like Syncthing.

        I have it set up to one way sync certain folders on my computer to an external USB HDD that I can disconnect and take with me if I have to evacuate.

  • fl42v@lemmy.ml · 1 point · 19 days ago

    I suspect it’s somewhat inevitable: in order to sync, you need to know the difference between the files here and there (unless you use something like zfs send, which should be able to send only the changes, I guess?). I’d probably tag everything at once and then sync

      • fl42v@lemmy.ml · 1 point · 19 days ago

        That’s if you don’t keep track of whether a file was modified. That comes more or less for free if you’re the filesystem, but it may be more complicated for external programs. Although, maybe inotifywait can track changes to a whole directory? I’m not sure here
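
For what it's worth, inotifywait (from inotify-tools) can watch a directory tree recursively. A runnable sketch, using a temporary directory as a stand-in for the library:

```shell
#!/bin/sh
# Sketch: watch a directory tree for changes with inotifywait.
# A temporary directory stands in for the actual music library.
set -eu
command -v inotifywait >/dev/null 2>&1 || { echo "inotify-tools not installed; skipping demo"; exit 0; }

lib=$(mktemp -d)
log=$(mktemp)

# -m: keep monitoring, -r: recurse into subdirectories, one line per event
inotifywait -m -r -q -e close_write -e create -e delete --format '%e %w%f' "$lib" > "$log" &
watcher=$!
sleep 1                         # give the watcher time to set up
printf 'data' > "$lib/new.flac"
sleep 1
kill "$watcher"
cat "$log"
```

The caveat from the thread still applies: inotify only reports events while the watcher is running, so changes made while it was stopped go unnoticed - which is presumably exactly why Syncthing keeps doing periodic full rescans.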