Hold up. That actually got through to publishing??
It’s because nobody was there to highlight the text for them.
yea lol
https://www.sciencedirect.com/science/article/pii/S1930043324004096
I’ve recently been watching a lot of videos on prominent cases of fraud and malpractice like Francesca Gino, Claudine Gay, Hwang Woo-suk, etc., which prompted me to start reading more into meta-research as well, and now I’m basically paranoid about every paper I read. There’s so much shady shit going on…
Yep. And AI will totally help.
Ooh, I mean, not help. It’ll make it much worse. Particularly with the social sciences, which were already pretty fuX0r3d anyway due to the whole “your emotions equal this number” thing.
Many journals are absolute garbage that will accept anything. Keep that in mind the next time someone links a study to prove a point. You have to actually read the thing and judge the methodology to know if their conclusions have any merit.
Full disclosure: I don’t intend to be condescending.
Research Methods during my graduate studies forever changed the way I interpret just about any claim, fact, or statement. I’m obnoxiously skeptical and probably cynical, to be honest. It annoys the hell out of my wife but it beats buying into sensationalist headlines and miracle research. Then you get into the real world and see how data gets massaged and thrown around haphazardly…believe very little of what you see.
I have this problem too. My wife gets so annoyed because I question anything I notice as a bias or statistical irregularity instead of just accepting that they knew what they were doing. I have tried to explain it to her: skepticism is not dismissal, and it is not saying I am smarter than them; it is recognizing that they are human, and that I may be more proficient than they were in the one spot where they made a mistake.
I will acknowledge that laypeople need to stop trying to argue with scientists because “they did their own research”, but the actually informed and educated need to do a better job of calling each other out.
A good tactic, though not perfect, is to look at the journal impact factor.
We are in top dystopia mode right now. Students have AI write articles that are proofread and edited by AI, submitted to automated systems that are AI vetted for publishing, then posted to platforms where no one ever reads the articles posted but AI is used to scrape them to find answers or train all the other AIs.
How generative AI is clouding the future of Google search
The search giant doesn’t just face new competition from ChatGPT and other upstarts. It also has to keep AI-powered SEO from damaging its results.
More or less the same phenomenon of signal pollution:
“Google is shifting its responsibility for maintaining the quality of results to moderators on Reddit, which is dangerous,” says Ray of Amsive. Search for “kidney stone pain” and you’ll see Quora and Reddit ranking in the top three positions alongside sites like the Mayo Clinic and the National Kidney Foundation. Quora and Reddit use community moderators to manually remove link spam. But with Reddit’s traffic growing exponentially, is a human line of defense sustainable against a generative AI bot army?
We’ll end up using the year 2022 as a threshold for reference criteria. Maybe not blocked entirely, but something like a ratio… you must have 90% pre-2022 references and at most 10% post-2022.
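As a toy illustration, here’s roughly what that hypothetical cutoff rule could look like as a check; the 2022 threshold and the 90/10 split are just the numbers floated above, not any real journal policy:

```python
# Hypothetical reference-ratio check; the 2022 cutoff and 90/10 split
# are just the numbers floated in this thread, not any real policy.

def passes_reference_ratio(reference_years, cutoff=2022, min_pre_ratio=0.90):
    """Return True if enough of the cited works predate the cutoff year."""
    if not reference_years:
        return False
    pre_cutoff = sum(1 for year in reference_years if year < cutoff)
    return pre_cutoff / len(reference_years) >= min_pre_ratio

# 9 of 10 references predate 2022, so this reference list would pass.
print(passes_reference_ratio(
    [2005, 2010, 2012, 2015, 2016, 2018, 2019, 2020, 2021, 2023]))  # True
```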
Perhaps this will spur some culture shift to publish all the data, all the notes, everything - which will be great to train more AI on. Or we’ll get to some type of anti-AI or anti-crawler medium.
Maybe, if reviewers were paid for their job they could actually focus on reading the paper and those things wouldn’t slide. But then Elsevier shareholders could only buy one yacht a year instead of two and that would be a nightmare…
Elsevier pays its reviewers very well! In fact, in exchange for my last review, I received a free month of ScienceDirect and Scopus…
… Which my institution already pays for. Honestly it’s almost more insulting than getting nothing.
I try to provide thorough reviews for about twice as many articles as I publish in an effort to sort of repay the scientific community for taking the time to review my own articles, but in academia reviewing is rewarded far less than publishing. Paid reviews sound good but I’d be concerned that some would abuse this system for easy cash and review quality would decrease (not that it helped in this case). If full open access publishing is not available across the board (it should be), I would love it if I could earn open access credits for my publications in exchange for providing reviews.
I’ve always wondered if some sort of decentralized, community-led system would be better than the current peer review process.
That is, someone can submit their paper and it’s publicly available for all to read, then people with expertise in fields relevant to that paper could review and rate its quality.
Now that I think about it, it’s conceptually similar to Twitter’s community notes, where anyone with enough reputation can write a note, and if others rate it as helpful it’s shown to everyone. Though unlike Twitter, there would obviously need to be some kind of vetting process so that it’s not just random people submitting and rating papers (something like the sketch below).
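A minimal sketch of what that could look like, assuming a reputation gate on who may review and a vote threshold on which reviews get surfaced; all the names and numbers here are invented for illustration:

```python
# Toy sketch of a community-notes-style review system; thresholds
# and class names are invented for illustration, not a real platform.

from dataclasses import dataclass, field

MIN_REPUTATION_TO_REVIEW = 50   # vetting: only established members may review
HELPFUL_VOTES_TO_SURFACE = 10   # a review is shown once rated helpful enough

@dataclass
class Review:
    reviewer: str
    text: str
    helpful_votes: int = 0

@dataclass
class Paper:
    title: str
    reviews: list = field(default_factory=list)

    def submit_review(self, reviewer, reputation, text):
        """Accept a review only from a sufficiently reputable member."""
        if reputation < MIN_REPUTATION_TO_REVIEW:
            return False
        self.reviews.append(Review(reviewer, text))
        return True

    def visible_reviews(self):
        """Only reviews the community has rated as helpful are shown."""
        return [r for r in self.reviews
                if r.helpful_votes >= HELPFUL_VOTES_TO_SURFACE]

# The paper is public immediately; reviews accumulate and surface over time.
paper = Paper("Iatrogenic portal vein injury in a 4-month-old")
paper.submit_review("dr_hepato", reputation=120, text="Methods look sound.")
paper.submit_review("random_troll", reputation=3, text="lol fake")  # rejected
paper.reviews[0].helpful_votes = 12
print([r.reviewer for r in paper.visible_reviews()])  # ['dr_hepato']
```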
Open access credits are a fantastic idea. Unfortunately it goes against the business model of these parasites. Ultimately, these businesses provide little to no actual value beyond siphoning taxpayer money. I really prefer eLife’s current model, but it would be great if it were cheaper. arXiv and bioRxiv provide a better service than most journals, IMO.
Also, I agree with reviewing seriously and about twice as often as you publish. Many people leave academia, so those of us who review more can cover for them.
Perhaps paid reviews would increase quality, since unpaid reviewers are more susceptible to corruption.
In Elsevier’s defense, reading is hard and they have so much money to count.
The entire paragraph after the highlight is still AI, too.
Guys it’s simple they just need to automate AI to read these papers for them to catch if AI language was used. They can automate the entire peer review process /s
I would insert specific language into every single one of my submissions to see if my editors were doing their jobs. Only about 1/3 caught it. Short story long, I’m not just a researcher in a narrow field, I’m also an amateur marine biologist.
Most read Elsevier paper.
Raneem Bader, Ashraf Imam, Mohammad Alnees, Neta Adler, Joanthan ilia, Diaa Zugayar, Arbell Dan, Abed Khalaileh. You are all accused of using chatgpt or whatever else to write your paper. How do you plead?
How do you plead?
“I apologize, but I do not feel comfortable performing any pleas or participating in negative experiences. As an AI language model, I aim to help with document production. Perhaps you would like me to generate another article?”
How do you feel about using chatgpt as a translation tool?
Depends on what kind of translation we’re talking here. Translating some chatter? Translating a web page (most of these suck)? Translating a book for it to be published? Translating a book so you can read it yourself? Translating a scientific paper so you can publish it, without proofreading the translation?
Is it the personal vs. private vs. public use that’s bothersome, or just the fact that these fuckers didn’t proofread? That’s what I’m trying to figure out.
They didn’t proofread, plus there’s a real chance that other parts of the paper are AI nonsense too. If something so glaringly problematic got past them, what smaller mistakes are also in there? They effectively poisoned their own paper.
All MDs, no PhDs. I wouldn’t have read it either, but I’d have rejected it instead of publishing it, hehe. “Long live the system!” /s
Wouldn’t you want a pediatric hepatobiliary surgeon? A four-month-old is going to be a tricky case, I’d think.