The use of ChatGPT for writing is so widespread in higher ed that it will cause serious problems for those students when they enter the workforce.
Lots of fancy stuff is written about how we just have to change the way we teach, how we can use ChatGPT in lessons, blah blah, but it all ignores the fact that some things need to be learned by doing them. Students can't see how they're hurting their own learning, because they don't know what they don't know.
There are a lot of entry-level jobs that basically assume new employees know nothing anyway. Seems like this will just further devalue degrees and emphasize work experience in hiring.
I bet AI detection is going to get a lot better over time.
I wonder if there’s going to be retrospective testing of theses as time goes on.
Could really damage some careers down the line.
Edit: guys, retrospective testing means testing done later (i.e. with a more up-to-date AI detector).
Once a detector gets good, you can train a model to adjust its outputs so they produce false negatives from that detector. Then the cycle repeats. It's basically a cat-and-mouse game.
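To make that loop concrete, here's a deliberately toy sketch in Python. Everything in it is a stand-in I made up (real detectors and generators are large neural models, not phrase lists); it only illustrates the alternating adapt/retrain dynamic:

```python
# Toy arms race: a phrase-list "detector" vs. a paraphrasing "generator".
# Both are stand-ins for real ML models; only the dynamic is the point.

# What the detector has learned to flag as machine-like.
detector_signatures = {"delve into", "in conclusion", "tapestry of"}

# The generator's evasion moves: paraphrases for flagged phrases.
paraphrases = {
    "delve into": "dig into",
    "in conclusion": "to sum up",
    "tapestry of": "mix of",
    "dig into": "explore",
    "to sum up": "all told",
}

def detect(text: str) -> bool:
    """Flag text containing any phrase the detector currently knows."""
    return any(sig in text for sig in detector_signatures)

def evade(text: str) -> str:
    """Rewrite every currently flagged phrase the generator has a move for."""
    for sig in list(detector_signatures):
        if sig in text and sig in paraphrases:
            text = text.replace(sig, paraphrases[sig])
    return text

text = "we delve into the data and, in conclusion, find a tapestry of effects"

for round_no in range(4):
    flagged = detect(text)
    print(f"round {round_no}: flagged={flagged} text={text!r}")
    if flagged:
        text = evade(text)  # mouse: the generator adjusts its outputs
    else:
        # cat: the detector "retrains" on the generator's new phrasings
        detector_signatures.update(p for p in paraphrases.values() if p in text)
```

Running it prints flagged / not flagged / flagged / flagged: the detector wins, the generator adapts, the detector re-wins once it retrains, and so on.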
The only proper way I see is a system based on cryptographic signatures. That's easier said than done, of course.
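For what that could look like mechanically, here's a minimal sketch using Ed25519 via the third-party `cryptography` package. The scenario (some trusted party signs the exact bytes of a document at submission time) is my assumption, not an established system:

```python
# Minimal provenance sketch: sign a document's bytes at submission time,
# verify them years later. Assumes a trusted signer holds the private key
# and publishes the public key.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()  # done once by the signer
public_key = private_key.public_key()       # published for verifiers

thesis = b"full text of the thesis, byte for byte, at submission time"
signature = private_key.sign(thesis)        # stored alongside the document

# Later: anyone with the public key can check the text is unchanged.
try:
    public_key.verify(signature, thesis)
    print("valid: this exact text existed when it was signed")
except InvalidSignature:
    print("invalid: the text was altered after signing")
```

The catch, and part of why it's easier said than done: a signature only proves the bytes haven't changed since signing, not who (or what) actually wrote them.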
Yeah, but if you wrote your thesis in 2024 and the detector is run on it in 2026…
You’re probably busted.
It’s not like you’ll re-write your thesis with every major ChatGPT release.
Are you expecting that the for-profit college will go back and retroactively rescind degrees? What’s the end-game for re-running the thesis?
It could be a new level added to the peer review of work. Nothing to do with the university. Just “other professionals”.
A thesis isn’t just an exam, it’s a real scientific paper.
And it usually presents its contents as fact, which others can then reference as fact.
And absolutely should be open to scrutiny so long as it is relevant.
Great points. To be clear, I'm not arguing against it as a concept. I'm just skeptical that it'll happen, and even if it did, there likely wouldn't be terrible consequences for the accused, especially since that's what science is… new facts change the outcome, rather than choosing an outcome and matching facts to it.