• Boozilla@lemmy.world · 4 months ago

    It’s been a tremendous help to me as I relearn how to code on some personal projects. I have written 5 little apps that are very useful to me for my hobbies.

    It’s also been helpful at work with some random database type stuff.

    But it definitely gets stuff wrong. A lot of stuff.

    The funny thing is, if you point out its mistakes, it often does better on subsequent attempts. It’s more an iterative process of refinement than a single prompt giving you the final answer.

    • Downcount@lemmy.world · 4 months ago

      The funny thing is, if you point out its mistakes, it often does better on subsequent attempts.

      Or it gets stuck in an endless loop, bouncing between two different but wrong solutions.

      Me: This is my system, version x. I want to achieve this.

      ChatGPT: Here’s the solution.

      Me: But this only works with version y of the given system, not x.

      ChatGPT: <Apology> Try this.

      Me: This is using a method that never existed in the framework.

      ChatGPT: <Apology> <Gives first solution again>

      • UberMentch@lemmy.world · 4 months ago

        I used to have this issue more often as well. I’ve had good results recently by **not** pointing out mistakes in replies, but by going back to the message before GPT’s response and editing it to say “do not include y.”

      • BrianTheeBiscuiteer@lemmy.world · 4 months ago

        While explaining BTRFS, I’ve seen ChatGPT contradict itself in the middle of a paragraph. When I call it out, it apologizes and then contradicts itself again with slightly different verbiage.

    • WalnutLum@lemmy.ml · 4 months ago

      This is because all LLMs function primarily based on the token context you feed them.

      The best way to use any LLM is to completely fill up its history with relevant context, then ask your question.
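      In practice, that just means building the whole message list up front and putting the actual question last. Here’s a minimal sketch of that pattern, assuming the official OpenAI Python client; the model name and context snippets are made-up placeholders:

      ```python
      # Minimal sketch: front-load relevant context, then ask the question last.
      # Assumes the official OpenAI Python client; model name and snippets are placeholders.
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      # Everything the model should know goes in first...
      context_snippets = [
          "We run PostgreSQL 14 on Debian 12.",
          "The nightly reporting job is a Python 3.11 script using psycopg 3.",
      ]

      messages = [{"role": "system", "content": "You are a helpful database assistant."}]
      messages += [{"role": "user", "content": s} for s in context_snippets]

      # ...and the actual question goes last.
      messages.append({"role": "user", "content": "Why might the nightly job deadlock?"})

      reply = client.chat.completions.create(model="gpt-4o", messages=messages)
      print(reply.choices[0].message.content)
      ```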

      • Boozilla@lemmy.world · 4 months ago

        I worked on a creative writing project with it, and the more context I added, the better its responses got. And GPT-4 is a noticeable improvement over 3.5.