This reminds me of some stuff in Charles Stross’ Accelerando. The book mentions how AI was rapidly filing patents and lawsuits and all this stuff by itself constantly. It was terrifying as a fictional idea, but here we are, it’s real.
As a person who is intrigued by linguistics, I wonder how AI/LLMs will affect real languages. I wonder if there are any research papers on this.

I’m not aware of any papers about this; especially with how recent LLMs are, it’s kind of hard to detect tendencies.
That said, if I had to take a guess, the impact of LLMs on language will be rather subtle:
- Some words will become more common because bots use them a lot, and people become more aware of those words. “Delve” comes to my mind. (Urgh. I hate this word.)
- Swearing will become more common too. I wouldn’t be surprised if we saw an uptick of “fuck” and “shit” after ChatGPT was released. That’s because those bots don’t swear, so swearing is a good way to show “I’m human”.
- Idiosyncratic language might also increase, as a mix of the above and to avoid sounding “bland and bot-like”. That includes letting some small typos go through on purpose.
Text-to-speech, mentioned by @Shelbyeileen@lemmy.world, is another can of worms; it might reinforce uncommon pronunciations until they become common. This shouldn’t be a big issue in, e.g., Italian (which uses a mostly regular spelling), but it might be noticeable in English.
Paging @lvxferre@mander.xyz :)
Not quite the same, but I’m waiting for the day when people will pronounce street names like the GPS does, instead of how they are actually pronounced. The street Schoenherr, in my neck of the woods, is pronounced “Shane urr” (yes, like the planet Omicron Persei 8, because Detroit (“Day twah”) is weird), but the GPS says “Shown her”. I’m really curious to see how long it takes for the computer voice to be considered the correct one.
I just want to point out that there were text generators before ChatGPT, and they were ruining the internet for years.
Just like there are bots on social media, pushing a narrative, humans are being alienated from every aspect of modern society.
What is a society for, when you can’t be a part of it?
I feel like the term “touch grass” applies to this comment more than anything.
Touch Grass
Well, OP said that bots posting shit on social media alienate people from being part of modern society
If that’s not a touch grass moment then I don’t know what is
I’m the type to be in favor of new tech but this really is a downgrade after seeing it available for a few years. Midterms hit my classes this week and I’ll be grading them next week. I’m already seeing people try to pass off GPT as their own, but the quality of answers has really dropped in the past year.
Just this last week, I was grading a quiz on persuasion and for fun, I have students pick an advertisement to analyze. You know, to personalize the experience, this was after the super bowl so we’re swimming in examples. Can even be audio, like a podcast ad, or a fucking bus bench or literally anything else.
60% of them used the Nike Just Do It campaign, not even a specific commercial. I knew something was amiss, so I asked GPT what example it would probably use if asked. Sure enough, Nike Just Do It.
Why even cheat on that? The universe has a billion ad examples. You could even feed GPT one and have it analyze it for you. It’d be wrong, because you have to reference the book, but at least it’d not be as blatant.
I didn’t unilaterally give them 0s, but they usually got it wrong anyway, so I didn’t really have to. I did warn them that using it on the midterm in this way will likely get them in trouble, though, as it is against the rules. I don’t even care that much because, again, it’s usually worse quality anyway, but I have to grade this stuff. I don’t want to suffer like a sci-fi magazine getting thousands of LLM submissions trying to win prizes.
As someone who has been a teenager: cheating is easy, and class wasn’t as fun as video games. Plus, what teenager understands the importance of an assignment? Or of the skill it is supposed to make them practice?
That said, I unlearned copying summaries when I heard I had to talk about the books I “read” as part of the final exams in high school. The examiner would ask very specific plot questions often not included in online summaries people posted… unless those summaries were too long to read. We had no other option but to take it seriously.
As long as there isn’t something that GPT can’t do the work for, they won’t learn how to write/do the assignment.
Perhaps use GPT to fail assignments? If GPT comes up with the same subject and writing style/quality, subtract points/give 0s.
Last November, I gave some volunteer drawing classes at a school. Since I had limited space, I had to pick and choose a small number of 9-10yo kids, and asked the students interested to do a drawing and answer “Why would you like to participate in the drawing classes?”
One of the kids used chatgpt or some other AI. One of the parts that gave it away was that, while everyone else wrote something like “I want because”, he went on with “By participating, you can learn new things and make friends”. I called him out in private and he tried to bullshit me, but it wasn’t hard to make him contradict himself or admit to “using help”. I then told him that it was blatantly obvious that he used AI to answer for him and what really annoyed me wasn’t so much the fact he used it, but that he managed to write all of that without reading, and thought that I would be too dumb or lazy to bother reading or to notice any problems.
I have a similar background and, no surprise, it’s mostly a problem in my asynchronous class. The ones who have my in-person lectures are much more engaged, since it is a fun topic and I don’t enjoy teaching unless I’m also making them laugh. No dice with asynchronous.
And yeah, I’m also kind of doing that with my essay questions, requiring stuff you sort of can’t just summarize. Critical thinking is important, even if you’re not just trying to detect GPT.
I remember reading that GPT isn’t really foolproof at verifying bad usage, and I am not willing to fail anyone over it unless I have to. False positives and all that. Hell, I just used GPT as a sounding board for a few new questions I’m writing, and its advice wasn’t bad. There are good ways to use it, just… you know, not so stupidly.
Students and cheating is always going to be a thing, only the technology evolves. It’s always been an interesting cat and mouse game imo, as long as you’re not too personally affected (sorry).
I was a student when the internet started to spread and some students had internet at home, while most teachers were still oblivious. There was a French book report due, and 4 kids had picked the same book because they had found a good summary online. 3 of the kids hand-wrote a summary of the summary; 1 kid printed out the original summary and handed that in. The 3 kids received a 0, and the 4th got a warning not to let others copy his work :D
Lol, well, it sounds like a bad assignment if you can get away with just a summary, although since I guess it was a language class(?), it’s more reasonable. I’m not really shaken up over this type of thing, though. I’m not pro-cheating, but it’s not about justice or morality; it’s because education is for the students’ benefit, and they’re missing out on growth. We really need more critical thinkers in this world. Like, desperately need them. Lol
Yep, French language class in a too-large high school class. If the class had been smaller, the teacher would definitely have gone for more presentations by the students.
Keep up the good fight, I’m certain that many of your students appreciate what they learn from you.
I recall Ethan Mollick discussing a professor who required students to use LLMs for their assignments. However, the trade-off was that accuracy and grammar had to be flawless, or their grades would suffer. This approach makes me think we need to reshape our academic standards to align with the capabilities of LLMs, ensuring that we’re assessing skills that truly matter in an AI-enhanced world.
That’s actually something that was discussed like, two years ago within the institutions I’m connected to. I don’t think it was ever fully resolved, but I get the sense that the inaccurate results made it too troublesome.
My mentality coming out of an education degree: if your assessment can be done by AI, you’re relying too much on memorization and not enough on critical thinking. I complain in my reply, but the honest truth is these students mostly lost points because they didn’t apply theory to the example (because the example wasn’t fully understood, since it wasn’t their own). K-12 generally fails at this, which is why freshmen have the hardest time with these things, GPT or otherwise.
How did they estimate whether an LLM was used to write the text or not? Did they do it by hand, or using a detector?
Detectors are notorious for flagging writing by ESL writers, or professionally written text, as AI-generated.
They developed their own detector, described in another paper. Basically, it reverse-engineers texts based on their vocabulary to estimate how much of them was written by ChatGPT.
This sounds plausible to me, as specific models (or even specific families) do tend to have the same vocabulary/phrase biases and “quirks.” There are even some community “slop filters” used for sampling specific models, filled with phrases they’re known to overuse through experience, with “shivers down her spine” being a meme for Anthropic IIRC.
It’s defeatable. But the “good” thing is most LLM writing is incredibly lazy, not meticulously crafted to avoid detection.
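For a feel of how a vocabulary-based estimate can work, here’s a minimal sketch: treat the corpus as a mixture of human and LLM text, and use how often a marker word (say, “delve”) appears to solve for the LLM fraction. All the words and rates below are invented toy numbers for illustration; this is not the paper’s actual method or data.

```python
# Toy mixture estimate: what fraction of a corpus is LLM-written,
# judging by how often a "marker" word appears?
# All rates below are invented for illustration.

def llm_fraction(observed_rate, human_rate, llm_rate):
    """Solve observed = (1-a)*human + a*llm for a, clipped to [0, 1]."""
    if llm_rate == human_rate:
        raise ValueError("marker word is uninformative")
    a = (observed_rate - human_rate) / (llm_rate - human_rate)
    return max(0.0, min(1.0, a))

# Hypothetical occurrences of "delve" per 10k words:
human_rate = 0.5   # pre-2022 baseline (invented)
llm_rate = 8.0     # typical ChatGPT output (invented)
observed = 2.0     # what the corpus shows now (invented)

print(llm_fraction(observed, human_rate, llm_rate))  # → 0.2
```

The real detector presumably combines many such words and corrects for natural drift, but the intuition is the same: excess marker-word frequency implies a larger LLM share.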
This is yet another advantage of self-hosted LLMs, as they:

- Tend to have different quirks than closed models.
- Have finetunes with the explicit purpose of removing their slop.
- Can use exotic sampling that “bans” whatever list of phrases you specify (i.e. the LLM backtracks and redoes the text when it runs into them, which is not normally a feature you get over APIs), or penalizes repeated phrases (i.e. DRY sampling, again not a standard feature).
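The phrase-banning idea can be sketched as a wrapper around any next-token sampler: generate normally, and when the tail of the output completes a banned phrase, rewind to where the phrase began and forbid its first word at that position. This is a toy with a stand-in random sampler, not how any particular inference engine actually implements it.

```python
import random

# Toy backtracking sampler: regenerate when a banned phrase appears.
# `sample_word` stands in for a real LLM's next-token sampler.

BANNED = [("shivers", "down", "her", "spine")]
VOCAB = ["shivers", "down", "her", "spine", "the", "cat", "sat", "quietly"]

def sample_word(context, forbidden):
    choices = [w for w in VOCAB if w not in forbidden]
    return random.choice(choices)

def generate(n_words, seed=0):
    random.seed(seed)
    out = []
    forbidden = {}  # position -> words we may not emit there (learned by backtracking)
    while len(out) < n_words:
        pos = len(out)
        out.append(sample_word(out, forbidden.get(pos, set())))
        for phrase in BANNED:
            if tuple(out[-len(phrase):]) == phrase:
                start = len(out) - len(phrase)
                # rewind to the start of the phrase and ban its first word there
                forbidden.setdefault(start, set()).add(phrase[0])
                del out[start:]
    return " ".join(out)

print(generate(12))  # 12 words, guaranteed free of the banned phrase
```

A real implementation would work on token IDs and cache KV state so the rewind is cheap, but the control flow is the same.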
They just asked a few people if they thought it was written by an LLM. /s
I mean, you can tell when something is written from ChatGPT, especially if the person isn’t using it for editing, but is just asking it to write a complaint or request. It is likely they are only counting the most obvious, so the actual count is higher.
I don’t know of any reason that the proportion of ESL writers would have started trending up in 2022.
If it’s due to an LLM, is it still “human written communication”?
Even if it was fully AI-generated, it’s still human communication in a written format, at least until the AIs start writing to each other without a human intermediary.
Written communication of humans, not by humans
This is the top result on duck duck go for how tall does a soursop tree get:
https://livetoplant.com/soursop-plant-size-get-the-right-size-for-you/
Gee thanks, I’m cured.
Btw does any one know if Soursops have an aggressive root system?
Well, if your books start talking back, you should get help. The computer just started getting good (I remember Dr. Sbaitso).
That’s scary shit. Hopefully this can slow down some.
BREAKING NEWS: Since the invention of the calculator, fewer people are using the abacus!
Not a good analogy, except there is one interesting parallel. My students who overuse a calculator in stats tend to do fine on basic arithmetic, but it does them a disservice when trying to do anything more elaborate. Granted, it should be able to follow PEMDAS, but for whatever weird reason, it sometimes doesn’t. And when there’s a function that requires a sum and maybe multiple steps? Forget about it.
Similarly, GPT can produce cliché copywriting, but good luck getting it to spit out anything complex. Trust me, I’m grading that drivel. So in that case, the analogy works.
You think it won’t ever spit out anything complex?
LLMs by their very nature drive towards cliche and most common answers, since they’re synthesizing data. Prompts can attempt to sway it away from that, but it’s ultimately a regurgitation machine.
Actual AI might be able to eventually, but it would require a lot more human-like experience (and, honestly, the chaos that gives us creativity). At that point it’ll probably be sentient, and we’d have bigger things to worry about, lol
Then enjoy your AI slop! I’m not stopping you.
This. It’s a tool, embrace it and learn the limitations…or get left behind and become obsolete. You won’t be able to keep up with people that do use it.