Once a detector is good, you can train a model to adjust its outputs to produce false negatives from the detector. Then the cycle repeats. It’s a cat-and-mouse game, basically.
The only proper way I see is a system based on cryptographic signatures. That’s easier said than done, of course.
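To make the idea concrete, here’s a minimal sketch of what a signature-based provenance scheme could look like, using HMAC from Python’s standard library as a stand-in for a real public-key signature (e.g. Ed25519). The names `SECRET_KEY`, `sign_text`, and `verify_text` are hypothetical, not from any actual system:

```python
import hashlib
import hmac

# Hypothetical scheme: the model provider holds a secret key and attaches
# a tag to every generated text. In a real deployment this would be a true
# digital signature, so anyone could verify with a public key.
SECRET_KEY = b"provider-secret-key"  # placeholder, not a real key

def sign_text(text: str, key: bytes = SECRET_KEY) -> str:
    """Return a hex tag cryptographically binding the text to the key."""
    return hmac.new(key, text.encode("utf-8"), hashlib.sha256).hexdigest()

def verify_text(text: str, tag: str, key: bytes = SECRET_KEY) -> bool:
    """Check the tag; any edit to the text invalidates it."""
    expected = sign_text(text, key)
    return hmac.compare_digest(expected, tag)

tag = sign_text("Generated paragraph.")
print(verify_text("Generated paragraph.", tag))  # True
print(verify_text("Edited paragraph.", tag))     # False
```

Note the obvious limitation, which is part of why it’s easier said than done: the tag only proves provenance of the exact text, so even trivial edits break verification, and nothing forces anyone to publish signed output in the first place.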
Yeah, but if you wrote your thesis in 2024, and the detector is run on it in 2026…
You’re probably busted.
It’s not like you’ll re-write your thesis with every major ChatGPT release.
Are you expecting that the for-profit college will go back and retroactively rescind degrees? What’s the end-game for re-running the thesis?
It could be a new layer added to the peer review of work. Nothing to do with the university, just “other professionals”.
A thesis isn’t just an exam, it’s a real scientific paper.
And it usually claims its contents as fact, which can be referenced by others as fact.
And absolutely should be open to scrutiny so long as it is relevant.
Great points. Note: I’m not arguing against it as a concept. I’m just skeptical that it’ll happen, and even if it did, there likely wouldn’t be terrible consequences for the accused. Especially since that’s what science is: new facts change the outcome, rather than choosing an outcome and matching facts to it.