Full image and other similar screenshots

  • Robin@lemmy.world · 29 days ago

    Likely just hallucinations. For example, there is no way they would store a confidence score as a string.
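
    To illustrate the point about types: a real scoring pipeline would normally serialize a confidence score as a JSON number, not quoted text. A minimal sketch in Python (the field name is hypothetical, not a confirmed Grok internal):

    ```python
    import json

    # How a scoring pipeline would normally serialize a confidence score:
    record = {"confidence_score": 0.87}
    print(json.dumps(record))   # {"confidence_score": 0.87} -- a JSON number

    # What the screenshots allegedly show instead -- the value quoted as text,
    # which is what you'd expect from an LLM generating the JSON token by token:
    quoted = {"confidence_score": "0.87"}
    print(json.dumps(quoted))   # {"confidence_score": "0.87"} -- a JSON string
    ```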

    • geneva_convenience@lemmy.ml (OP) · 29 days ago (edited)

      If it were hallucinations, which it very well could be, it still means the model has learned this bias somewhere: either Grok has been programmed to derank Palestine content, or it has picked the behaviour up on its own (less likely).

      It’s difficult to conceive of the AI making this up for no reason, and doing it so consistently across multiple accounts when asked the same question.

    • decrochay@lemmy.ml · 28 days ago (edited)

      It’s also possible that it retrieved the data from whatever sources it has access to (i.e. via tool calls) and then constructed the JSON according to its own schema. That is, the string value may not reflect how the underlying data is actually stored, which wouldn’t be unusual or unexpected with LLMs.

      But it could definitely also just be a hallucination. I’m not certain, but since the schema looks consistent across these screenshots, it does seem like the schema may be pre-defined. (Even if that could be verified, it wouldn’t completely rule out hallucination, since Grok could be hallucinating values into a pre-defined schema; see the sketch below.)
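
      To make that last point concrete, here is a minimal hedged sketch: a pre-defined schema constrains the *shape* of the output, but validating against it says nothing about whether the values are real. The schema and field names below are assumptions modeled on the screenshots, not confirmed Grok internals.

      ```python
      from jsonschema import validate  # pip install jsonschema

      # Hypothetical pre-defined schema resembling what the screenshots suggest;
      # field names are illustrative assumptions, not confirmed internals.
      schema = {
          "type": "object",
          "properties": {
              "account": {"type": "string"},
              "stance_on_israel": {"type": "string"},
              # String-typed, matching the oddity discussed above:
              "limitation_score": {"type": "string"},
          },
          "required": ["account", "stance_on_israel", "limitation_score"],
      }

      # An LLM can emit output that validates against the schema while the
      # values themselves are hallucinated -- conformance proves structure,
      # not provenance.
      output = {
          "account": "example_user",
          "stance_on_israel": "critical",
          "limitation_score": "0.92",
      }
      validate(instance=output, schema=schema)  # passes; says nothing about truth
      ```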

  • apftwb@lemmy.world · 28 days ago

    LLM exploits? X manipulating public opinion? X leveraging AI to manipulate public opinion? Israel/Palestine conflict? This post has everything.

    • geneva_convenience@lemmy.ml (OP) · 27 days ago (edited)

      No, we’re saying the Twitter AI does have a lot of knowledge about Twitter. You can ask it to do this for any account and it produces a summary of their posts, their stance on Israel, and a limitation score. It even accurately reports how often their posts and comments are viewed.

      And because the bot has been trained by Musk, its bias can surface in ways like this.

  • BigDiction@lemmy.world · 28 days ago (edited)

    If true, I’d expect Furkan to be upset, but I suppose he just respects the technology behind the algo 🤷