Springer Nature Book on Machine Learning is Full of Made-Up Citations (retractionwatch.com)
- Reference: 0178301696
- News link: https://science.slashdot.org/story/25/07/07/1354223/springer-nature-book-on-machine-learning-is-full-of-made-up-citations
- Source link: https://retractionwatch.com/2025/06/30/springer-nature-book-on-machine-learning-is-full-of-made-up-citations/
Three researchers contacted by Retraction Watch confirmed that works attributed to them were fake or incorrectly cited. Yehuda Dar of Ben-Gurion University said a paper cited as appearing in IEEE Signal Processing Magazine was actually an unpublished arXiv preprint. Aaron Courville of Université de Montréal confirmed he was cited for content from his "Deep Learning" book that, in his words, "doesn't seem to exist."
The pattern of nonexistent citations matches known hallmarks of text generated by large language models. The book's author, Madhavan, did not answer whether he used AI to generate its content. The book contains no AI disclosure, despite Springer Nature policies requiring authors to declare any AI use beyond basic copy editing.
First rule of Liars' Club is: It's Honesty Club. (Score:3)
I don't understand why fans of LLMs don't simply use a second LLM to fact-check the output of the first one. Though I suppose for that to work the second LLM would need to formally recognize that some sources of truth are better than others, which would strike a killing blow to the heart of the LLM ethos. And then the first LLM would need to rewrite its original draft based on the editorial input of the second one, which would undercut its unmerited bloviating confidence, which would strike a second killing blow to the heart of the LLM ethos.
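For what it's worth, the draft-then-critique loop described above is easy to sketch, and the sketch also shows why it falls short: the second model still has no external source of truth to check against. Below is a minimal sketch, assuming the OpenAI Python client; the model name, prompts, and single revision pass are illustrative placeholders, not anyone's actual pipeline.

# Minimal sketch of the "second LLM as fact-checker" loop described above.
# Assumes the OpenAI Python client; model name, prompts, and the single
# revision pass are placeholders, not a production pipeline.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str, model: str = "gpt-4o-mini") -> str:
    """Send a single-turn prompt and return the text of the reply."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def draft_and_verify(task: str) -> str:
    # First model drafts the text.
    draft = ask(f"Write a short literature summary for: {task}")
    # Second pass: a skeptical reviewer lists every claim it cannot verify.
    critique = ask(
        "Act as a skeptical fact-checker. List every citation or factual "
        "claim in the text below that you cannot verify, and say why:\n\n"
        + draft
    )
    # The drafting model then revises against the critique.
    revised = ask(
        "Revise the following text, removing or flagging anything the "
        f"reviewer could not verify.\n\nTEXT:\n{draft}\n\nREVIEW:\n{critique}"
    )
    return revised

if __name__ == "__main__":
    print(draft_and_verify("regularization methods in deep learning"))

Note that the critic here is still only another LLM: without retrieval against a bibliographic database, it can flag suspicious citations but cannot actually confirm or refute them.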
Re: (Score:3)
People absolutely are doing stuff like that. They are even putting LLMs in front of other LLMs to act as WAF-like firewall solutions and such.
The problem is that it is all very compute- and memory-intensive. I do see some people getting good results, in terms of outputs, by tying multiple models together via MCP and other interop solutions. The problem, of course, is that you get an application that is painfully slow to use and too expensive to run.
It's funny, the big tech people used to talk about doing things more efficiently.
Re: (Score:2)
> I don't understand why fans of LLMs don't simply use a second LLM to fact-check the output of the first one. Though I suppose for that to work the second LLM would need to formally recognize that some sources of truth are better than others, which would strike a killing blow to the heart of the LLM ethos. And then the first LLM would need to rewrite its original draft based on the editorial input of the second one, which would undercut its unmerited bloviating confidence, which would strike a second killing blow to the heart of the LLM ethos.
It's far simpler than that. Some of these references just don't exist. Just ask a summer intern or high school student to write a simple script to check for the existence of these references. Of course, since this appears to be so challenging to do, maybe someone can form a startup to address this problem and earn billions of dollars.
In real-life editing, there are human editors who just check for grammar, formatting, etc. Then there are editors who check for consistency, legal issues, being on message, and so on.
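The "simple script" suggested above is not far-fetched. Here is a rough sketch, assuming the public Crossref REST API and the requests library; the title-matching heuristic is my own crude assumption, and a real checker would also compare authors, year, and venue.

# Rough sketch of a reference-existence check against the public Crossref API.
# The matching heuristic (compare the cited title to the top search hit) is an
# assumption; a serious checker would also look at authors, year, and venue.
import requests

def reference_seems_real(title: str, timeout: float = 10.0) -> bool:
    """Return True if Crossref's best match closely resembles the cited title."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": 1},
        timeout=timeout,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    if not items:
        return False
    found = (items[0].get("title") or [""])[0].lower()
    cited = title.lower()
    # Crude check: the cited title should appear in (or equal) the best hit.
    return cited in found or found in cited

if __name__ == "__main__":
    for ref in [
        "Deep Learning",  # real book title (Goodfellow, Bengio, Courville)
        "A totally fabricated survey of hallucinated citations",
    ]:
        print(ref, "->", reference_seems_real(ref))

A check like this would miss arXiv-only preprints and books without DOIs, but it would still flag a good share of fabricated references long before a reader does.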
His LinkedIn bio includes: AI Ethics (Score:4, Informative)
His bio is fascinating.
BTech in Chemical Engineering.
Master level practitioner of Neuro Linguistic Programming.
Never having heard of Neuro Linguistic Programming before, I looked it up. Wikipedia calls it a pseudoscience.
But the AI Ethics expertise takes the cake.
$169.00 for this book of lies (Score:2)
Fuck academic publishing.
Par for the course (Score:4, Insightful)
The whole of AI is shit; why shouldn't the "books" on it be shit too?
Meanwhile over in Washington... (Score:2)
... the Trump administration has just stopped all subscriptions to Springer Nature publications, citing junk science in their journals (hopefully not Nature itself), which seems to have got a lot of people hot under the collar.
I'm no Trump fan, but Trump et al. have long been compared to a broken clock - occasionally correct if you wait long enough. Maybe this is one of those times.
"Better Crap", what do you expect? (Score:5, Insightful)
The fanbois are blind to it, the rest are shaking their heads in disgust. And the people like this "author" are essentially scamming their readers.
Re: (Score:1)
The publisher’s lack of review and due diligence at that price is the real news here. Yet another person using ChatGPT to write a book is nothing new.
Re: (Score:2)
And Springer too of course. They'll turn a blind eye any chance they get.