A client has asked me to replace their video editor for a video podcast. It’s the standard quick-cut style: zooms, loud transitions, and big bubble-letter subtitles throughout.

They recommended using Descript, which looks to be an AI platform that does all the work for you. Throw your video/audio into the site, and it transcribes the video, allowing you to edit based on the transcription. It then makes AI recommendations and inserts zooms and transitions.

There’s no getting around using AI for some of this, like subtitle generation, but I’d rather not buy a subscription to an AI platform, nor use one at all if I can help it, so I’m looking for alternatives. The pay certainly isn’t worth the time it would take without cutting corners, unfortunately.

Unfortunately, DaVinci Resolve isn’t playing well with my system: the NVIDIA driver I use (580; 550 worked, but it’s no longer an option in Additional Drivers for some reason) results in a black screen for the video timeline (not ideal for a video editor haha). I’ve been playing around with Kdenlive and Blender’s video editor instead.

I found an add-on for both programs that transcribes speech-to-text, which I finally got mostly working in Kdenlive (using whisper) but not in Blender. I also found a FOSS app called audapolis that does well pulling a transcription into an exportable file.

Anyone have any experience making these mass-produced-style videos without going full AI? My client mentioned the old VE spent 1-2 hours in Descript for a 15ish-minute video and 2 shorts. I’m ok doubling that timeframe at first if it means not using Descript.

    • Jack_Burton@lemmy.caOP · 1 day ago

      I was hoping to avoid going full AI. Unfortunately, these are the YT-type videos that AI is completely taking over, so unless I want to spend a week on this I think I’ll have to. I was just trying to exhaust every avenue before going that route.

      • Randomgal@lemmy.ca · 9 hours ago

        If I pay you to do something I’d expect you to do what I’m paying you to do. If you can’t, don’t take the job.

      • Evotech@lemmy.world · 1 day ago

        Yeah, I get that. But it seems pretty well suited for the task.

        You can probably create a similar workflow using ComfyUI, though it will require time and effort.

        • Jack_Burton@lemmy.caOP · 1 day ago

          Honestly, it’s looking that way. I just needed to try everything else first, for my own principles. Appreciate the kindness.

  • utopiah@lemmy.ml · 1 day ago

    There’s no getting around using AI for some of this, like subtitle generation

    Eh… yes there is: you can pay actual humans to do that. In fact, if you do “subtitle generation” (whatever that might mean) without any editing, you are taking a huge risk. Sure, it might get 99% of the words right, but if it fucks up on the main topic… well, good luck.

    Anyway, if you still want to go down that road, you could try:

    • ffmpeg with whisper.cpp (though honestly I’m not convinced hardcoding subtitles is good practice; why not package them as soft subs in e.g. an .mkv? Depends on context, obviously)
    • Kdenlive with vosk
    • Kdenlive with whatever else, via the *.srt, *.ass, *.vtt, or *.sbv formats
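
    For the *.srt route, here’s a minimal stdlib-only sketch of writing a subtitle file Kdenlive can import. The `(start, end, text)` segment shape is an assumption on my part — whisper and vosk both give you per-segment timings in some form you can massage into this:

```python
def to_srt_timestamp(seconds: float) -> str:
    # SRT timestamps look like HH:MM:SS,mmm (comma before milliseconds)
    total_ms = round(seconds * 1000)
    h, rem = divmod(total_ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1_000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def segments_to_srt(segments) -> str:
    # segments: iterable of (start_seconds, end_seconds, text) tuples
    blocks = []
    for i, (start, end, text) in enumerate(segments, start=1):
        blocks.append(
            f"{i}\n{to_srt_timestamp(start)} --> {to_srt_timestamp(end)}\n{text}\n"
        )
    return "\n".join(blocks)

# Two-cue example; write the result to e.g. subtitles.srt and
# import it into Kdenlive's subtitle track
srt = segments_to_srt([
    (0.0, 2.5, "Welcome back to the show."),
    (2.5, 5.0, "Today: editing without Descript."),
])
```

    The nice part of going through .srt is that the transcriber and the editor stay decoupled: you can proofread the text file before it ever touches the timeline.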
    • Jack_Burton@lemmy.caOP · 1 day ago

      I’d love to pay someone, or I’d just transcribe it myself if it wouldn’t take so long. I’m new to VE, so I’m learning as I go; I do audio, and the editing process seems fairly transferable. It’s the barrage of movement and transitions in these videos that I’m struggling to do without spending a week on it. I’m doing this as a favour, so outsourcing isn’t an option. I’ll be checking over the subtitles anyway; generating them just saves a bunch of time before a full pass.

      I’d rather not have hardcoded subs at all, but these are the “no attention span” style videos for YT (constant zooms, transitions, big subtitles, etc.) that I have to mimic. Honestly, I hate the style haha, but it is what it is. The style “gets traction” on social media.

      I’m quickly realizing why these videos use AI: it’s a tonne of work without it, for very little pay. I was just hoping to use as little of it as possible, and to avoid going with Descript.

      Anyway, appreciate you taking the time. I got some sub generation working with Kdenlive, but it’s looking like I’ll either have to bite the bullet with Descript or just transcribe it myself. Editing the generated subs looks to be as much work as transcribing a handful of frames at a time.

      • utopiah@lemmy.ml · 7 hours ago

        I’ll be checking over the subtitles anyway, generating just saves a bunch of time before a full pass over it. […] The editing for the subs generation looks to be as much work as just transcribing a handful of frames at a time.

        Sorry I’m confused, which is it?

        doing this as a favour […] Honestly I hate the style haha

        I’m probably out of line for saying this but I recommend you reconsider.

    • Jack_Burton@lemmy.caOP · 1 day ago

      Finally got that working; I had to run the AppImage instead of the Flatpak for it to work. Now I just gotta see if I can mimic the font haha. Thanks

  • Feyter@programming.dev · 1 day ago

    I know that’s not a ready-to-use solution, but Blender has a very powerful Python API which should allow you to automate everything, including calls to an AI backend of your choice if needed.
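
    As a rough sketch of that idea — this only runs inside Blender’s bundled Python, and the file path, frame numbers, and strip names below are made-up placeholders — here is a cut plus a big text overlay driven through the sequencer API:

```python
import bpy  # only available inside Blender's bundled Python

scene = bpy.context.scene
scene.sequence_editor_create()
seq = scene.sequence_editor

# Load a clip into channel 1 (placeholder path)
clip = seq.sequences.new_movie(name="episode",
                               filepath="//footage/episode.mp4",
                               channel=1, frame_start=1)

# A "quick cut": trim the strip to frames 48-240
clip.frame_final_start = 48
clip.frame_final_end = 240

# Bubble-letter-style subtitle as a text effect strip on channel 2
sub = seq.sequences.new_effect(name="subtitle", type='TEXT',
                               channel=2, frame_start=48, frame_end=240)
sub.text = "WELCOME BACK"
sub.font_size = 96
```

    From there you could loop over transcript cues (e.g. a parsed .srt) and generate one text strip per cue, which is most of what Descript’s auto-subtitling amounts to. Exact attribute names vary a bit between Blender versions, so check the API docs for your release.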

    • Jack_Burton@lemmy.caOP · 1 day ago

      Interesting. I’m struggling to get transcription add-ons working in Blender. I’ve never installed Python scripts before, so I don’t know if I screwed something up. Every time I try transcription, it either just stops around 95% or crashes with

      Unable to load any of {libcudnn_ops.so.9.1.0, libcudnn_ops.so.9.1, libcudnn_ops.so.9, libcudnn_ops.so}
      Invalid handle. Cannot load symbol cudnnCreateTensorDescriptor
      Aborted (core dumped)
      

      Do you have a suggestion of where I can get started learning about what you’re talking about?

      • Feyter@programming.dev · 1 day ago

        I think this libcudnn is an NVIDIA CUDA thing. I guess you have checked that the correct CUDA libs are installed, and that Blender has permission and knows where to look for them?
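
        One stdlib-only way to sanity-check that — the library names here are lifted from the error above; the rest is just Python’s standard loader lookup — is to ask ctypes whether the dynamic linker can see cuDNN at all:

```python
import ctypes.util

# find_library consults the usual ld.so search paths, i.e. roughly
# what Blender's bundled Python sees when an add-on loads libcudnn
found = None
for name in ("cudnn_ops", "cudnn"):
    path = ctypes.util.find_library(name)
    if path:
        found = path
        break

print(found if found else "no libcudnn visible to the loader")
```

        If nothing shows up, the fix is usually installing the cuDNN package that matches your CUDA version, or pointing LD_LIBRARY_PATH at wherever the libs actually live.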

        The first stop for learning the Blender Python API would be its documentation: https://docs.blender.org/api/current/index.html

        In general you can script anything that you can do through the user interface. But video editing is just a very small part of this, and if you don’t have any programming experience yet, this could be overkill for what you’re looking for.

        Perhaps someone has had the same problems as you before and implemented something. Maybe searching explicitly for Blender video editing automation or the Python API will give you some results.

        • Jack_Burton@lemmy.caOP · 1 day ago

          Honestly, I’m new to Linux as of about 3 months ago, so it’s been a bit of a learning curve on top of learning VE haha. I didn’t realize CUDA had versions, let alone was anything other than an acronym for using the GPU (NVIDIA for me), and I now figure CUDA is probably why DaVinci Resolve isn’t working right. Kdenlive’s search for GPU over CPU listed CUDA versions (mine’s 12.0; it was searching for 12.3, 12.4, 12.5, etc.), which made me realize CUDA versions and NVIDIA drivers differ.

          So long story short, no, I haven’t checked that beyond looking for how to update CUDA haha. I really appreciate you taking the time; I’ll look into implementing Python next. One thing I love about Linux: I’m constantly learning.

          • FauxLiving@lemmy.world · 9 hours ago

            It’s a lot to learn, but the knowledge is more durable than learning where Microsoft has moved the menu option to in this version (or learning the new arcane method of summoning the old menu from the nether realm.)