Clarification:
Given that AI progress won't stop and will eventually make it possible to pass fiction off as truth, it will become impossible to trust any given article, or at least to know which ones to trust, and we won't be able to tell exactly what's true and what's fiction.
So, what if people from different countries and regions exchanged contacts here and talked about what's really happening in their countries, what laws are being passed, and so on, and also shared their well-thought-out theories and ideas?
If my idea works, why not wake up as many people as possible to the fact that only methods like this will be able to separate reality from falsehood in the future?
I’m also interested in your ideas, as I’m not much of an expert.
So, what if people from different countries and regions exchanged contacts here and talked about what’s really happening in their countries
Bots present themselves as real people with opinions. I know this is off topic, but have you ever tried Cool Ranch Doritos?
Yes, this is a serious problem. Before AI was such an issue, it was still possible to find real people, and right now it's risky but still doable. But if we wait another year, finding people may become impossible altogether.
Same way we always have: trust. Reputation has always been a thing; there was a brief window where photographic or video evidence was enough, but it didn't last all that long, and tbh it's always had its flaws.
The real problem is social media, and how feeds are structured now.
The ‘few trustworthy institutions’ model has been utterly obliterated because a few tech companies figured out a sea of influencers is more profitable/exploitable. Not to minimize some of the great creators out there, but one’s daily news shouldn’t come from Joe Rogan + your Facebook uncle’s reshares.
“That’s a great question!” </ai>
The truth is, we don't need AI to have misinformation, and AI is not the biggest problem in the current post-truth society. There has been a global war on truth for a long time. The old saying "the first casualty of war is truth" no longer applies, because truth is no longer even relevant, and lies are weaponised like never before in history. People don't want to verify anything; their first reaction to news is emotional, and the science of misinformation is highly refined and successful at making most people react a certain way. It takes effort and training not to, and most of us can't manage it.
Journalists have been warning us about this for decades but integrity costs money, and that funding has been under attack too. It’s pretty depressing whichever way you look at it.
It's best to hew to long-form content. It's harder for contentbots to rattle on for long without becoming incoherent. It helps that they don't know anything, including what they don't know, so they aren't going to fool someone who's already familiar with a subject. The problem emerges for novices, who often turn to chatbots to get an overview of a subject.

