• fossilesque@mander.xyz (OP) · edited · 6 months ago

      My master’s was fueled by Starbucks, and my Ph.D. is fueled by spite. Don’t get me wrong; I’m not against using LLMs for help, especially for ESL speakers. They’re a fantastic tool for developing your thoughts when used ethically. I’ve used GPT to drop placeholder framing into my drafts, and it ends up completely different in the final product. This is an issue with peer review and publishing monopolies, aka late-stage capitalism. This draft was clearly not peer-reviewed and is a likely consequence of publish-or-perish.

  • DashboTreeFrog@discuss.online · 6 months ago

    Took me a second.

    But man, I don’t write academic papers anymore, but I do have to write a lot of reports and such for my work, and I’ve tried using different LLMs to help. Almost always the biggest help is just making me go “Man, this sucks, it should be more like this,” and then I proceed to write the whole thing myself, with the slight advantage of knowing what a badly written version looks like.

    • Serinus@lemmy.world · 6 months ago

      My favorite was when it kept summarizing too much. I then told it to include all of my points, which it mostly just ignored. I finally figured out it was staying under its own cap of 5,000 words per response.

      • DashboTreeFrog@discuss.online · 6 months ago

        I’ve had the reverse issue, where I wanted to feed ChatGPT a large amount of text to work with. I tried a workaround where part of my prompt said I was going to give it more information in parts, but no matter how I phrased things it would always start working on whatever was in the first message, so I just gave up and did it myself.
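        Roughly, the workaround I was attempting looked like the sketch below. The `chat()` helper, the chunk size, and the exact instructions are all made-up placeholders, not something that actually worked for me.

        ```python
        # Hypothetical "paste it in parts" pattern. `chat` stands in for whatever
        # chat-completion call you use; everything here is illustrative only.

        def chunk_text(text: str, max_chars: int = 8000) -> list[str]:
            """Split a long document into roughly max_chars-sized pieces."""
            return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

        def feed_in_parts(chat, document: str, task: str) -> str:
            messages = [{
                "role": "user",
                "content": "I will paste a document in several parts. Reply only "
                           "with 'OK' after each part and don't start working "
                           "until I say 'BEGIN'.",
            }]
            for part in chunk_text(document):
                messages.append({"role": "user", "content": part})
                messages.append({"role": "assistant", "content": "OK"})
            messages.append({"role": "user", "content": f"BEGIN. Now: {task}"})
            return chat(messages)
        ```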

    • Final Remix@lemmy.world · 6 months ago

      That’s basically classifier-free guidance for LLMs! It takes an additional prompt that says “not this. Don’t do this. In fact, never come near this shit in general. Ew.” and pushes the output closer to the original prompt by using the “not this” as a reference to avoid.
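      In logit terms it works out to roughly the sketch below. The scores, the guidance scale, and the function name are made up for illustration; the idea is just “move away from the negative prompt’s scores, toward the real prompt’s.”

      ```python
      import numpy as np

      def cfg_logits(cond_logits: np.ndarray,
                     neg_logits: np.ndarray,
                     guidance_scale: float = 1.5) -> np.ndarray:
          """Classifier-free-guidance-style mixing of two logit streams.

          `neg_logits` come from the "not this" prompt, `cond_logits` from the
          real prompt; scores get pushed away from the negative reference.
          """
          return neg_logits + guidance_scale * (cond_logits - neg_logits)

      # Toy three-token vocabulary with made-up scores.
      cond = np.array([2.0, 0.5, -1.0])   # logits given the real prompt
      neg = np.array([1.0, 1.5, 0.0])     # logits given the "never do this" prompt
      print(cfg_logits(cond, neg, guidance_scale=2.0))  # [ 3.  -0.5 -2. ]
      ```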