• ByteJunk@lemmy.world
    link
    fedilink
    arrow-up
    8
    ·
    19 days ago

    Thank you for testing that out.

    My experience with AI is that it’s at a point where it can comprehend something like this very easily, and won’t be tricked.

    I suspect that this can, however, pollute a model if it’s included as training data, especially if done regularly, as OP is suggesting.

    • saigot@lemmy.ca
      link
      fedilink
      arrow-up
      3
      ·
      19 days ago

      If it was done with enough regularity to eb a problem, one could just put an LLM model like this in-between to preprocess the data.

    • bountygiver [any]@lemmy.ml
      link
      fedilink
      English
      arrow-up
      3
      ·
      19 days ago

      In which microwavegang already did the job better. Due the full subreddit of mmmmmmmmm, it causes training data that touches it to devolve into all mmmmmmm whenever there’s enough m’s in a sentence