Wikipedia Bans Use of Generative AI
- Reference: 0181108860
- News link: https://news.slashdot.org/story/26/03/26/1818215/wikipedia-bans-use-of-generative-ai
> Editors can use large language models (LLMs) to refine their own writing, but only if the copy is checked for accuracy. The policy states that this is because LLMs "can go beyond what you ask of them and change the meaning of the text such that it is not supported by the sources cited." Editors can also use LLMs to assist with language translation. However, they must be fluent enough in both languages to catch errors. Once again, the information must be checked for inaccuracies.
>
> "My genuine hope is that this can spark a broader change. Empower communities on other platforms, and see this become a grassroots movement of users deciding whether AI should be welcome in their communities, and to what extent," Wikipedia administrator Chaotic Enby wrote [3]. The administrator also called the policy a "pushback against enshittification and the forceful push of AI by so many companies in these last few years."
[1] https://www.engadget.com/ai/wikipedia-has-banned-ai-generated-articles-173641377.html
[2] https://en.wikipedia.org/wiki/Wikipedia:Core_content_policies
[3] https://en.wikipedia.org/wiki/Wikipedia:Writing_articles_with_large_language_models
Andrew Jackson of the Mind (Score:4, Insightful)
1. Obviously. So very obvious, in fact, that I am surprised to hear that LLMs weren't already banned several years ago.
2. How are they going to enforce it? There's a large contingent of alleged humans who get a tingle in their nethers presenting LLM output as their own original thought.
Re: (Score:2)
> There's a large contingent of alleged humans who get a tingle in their nethers presenting LLM output as their own original thought.
LLM or no LLM, there is a long-standing policy against that: [1]Wikipedia is not a publisher of original thought [wikipedia.org].
[1] https://en.wikipedia.org/wiki/Wikipedia:What_Wikipedia_is_not#Wikipedia_is_not_a_publisher_of_original_thought
Re: (Score:2)
> 2. How are they going to enforce it?
If only there was a Wikipedia page abo-oh, wait.
Enforcement: [1]https://en.wikipedia.org/wiki/... [wikipedia.org]
Detection: [2]https://en.wikipedia.org/wiki/... [wikipedia.org]
Do you know how I found this out? Hint: it wasn't by asking AI.
[1] https://en.wikipedia.org/wiki/Wikipedia:WikiProject_AI_Cleanup/Guide
[2] https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing
Old News (Score:2)
We already had this news like 2 or 3 weeks ago.
Old news (Score:2)
It wasn't banned before? Stackoverflow banned it 3 years ago. Get with the times.
Re:Bye bye Wikipedia (Score:5, Insightful)
If you can't take a list of bullets and turn it into a paragraph of text, what are you even doing trying to edit anything?
Re:Bye bye Wikipedia (Score:4, Insightful)
> Even for authors of encyclopedia articles, there is nothing wrong with telling ChatGPT to, "take this list of bullets and write it up as a paragraph."
Until it hallucinates and adds something that wasn't there or changes the meaning significantly. In my experience, AI is really good at screwing things up in ways that nobody expects. And if the people making the changes aren't subject-matter experts, but are just doing drive-by edits to try to make things more digestible, they might not notice the errors if they are subtle enough. Allowing any random person to do stuff like that could potentially cause a lot of damage really quickly.
> Nor is there anything wrong with asking it to make a diagram of some process etc.
Until it steals the chart blatantly from somebody's published book, and Wikipedia gets sued for copyright infringement. Wikipedia isn't just trying to protect itself from erroneous data. It's trying to protect itself from liability. With user-uploaded content, the user can self-certify that they have the right to upload it, and apart from user incompetence, that's usually going to be good enough. With AI-generated images, it is impossible for a user to know for certain whether what they are uploading is infringing, and it would be hard to later prove which AI generated the diagram in order to transfer the liability to the AI company.
But the biggest risk, IMO, would be asking it to make a chart with numbers from some table. It could manipulate the numbers, and if someone isn't checking closely, they might not see the error, but the incorrect chart could easily mislead people. AI-based chart generation seems way more likely to introduce errors than a human copying and pasting the table into a spreadsheet and generating the chart with traditional non-AI-based tools.
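To illustrate the non-AI route described above: with traditional tooling, the table values pass straight through to the chart, so nothing can be silently altered along the way. A minimal sketch using only the Python standard library (the table contents and column names here are invented for illustration):

```python
# Deterministic chart generation: parse a small CSV table and emit a bar
# chart as SVG. Every bar height is computed directly from the table value,
# so the chart cannot "hallucinate" numbers. Table data is made up.
import csv
import io

TABLE = """year,articles
2021,120
2022,180
2023,240
"""

def parse_table(text):
    # Read the CSV into (label, value) pairs, preserving the source numbers.
    rows = csv.DictReader(io.StringIO(text))
    return [(r["year"], int(r["articles"])) for r in rows]

def bar_chart_svg(data, width=300, height=120):
    # Scale each bar against the maximum value; no data is modified.
    max_val = max(v for _, v in data)
    bar_w = width // len(data)
    bars = []
    for i, (label, value) in enumerate(data):
        h = round(value / max_val * (height - 20))
        bars.append(
            f'<rect x="{i * bar_w}" y="{height - h}" '
            f'width="{bar_w - 4}" height="{h}">'
            f'<title>{label}: {value}</title></rect>'
        )
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{width}" height="{height}">' + "".join(bars) + "</svg>")

data = parse_table(TABLE)
svg = bar_chart_svg(data)
```

Any discrepancy between the table and the chart is then a plain bug you can diff and fix, not a plausible-looking fabrication buried in the output.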
> Someone else is going to clone wikipedia and the authorship will no doubt migrate to where they are allowed to use contemporary tooling.
And after a few months, people will complain that the content is constantly wrong, the editors over there will give up trying to keep the error rate under control, and anyone with a clue will come running back to Wikipedia.
Re:Bye bye Wikipedia (Score:4, Insightful)
> Wikipedia is choosing to die. There is a lot wrong with a lot of what people are doing with GenAI but it is also super useful.
Unfortunately, even the best LLMs sometimes make up information ("hallucinate"), and the fabricated material reads exactly like genuine information. This is simply unacceptable for an encyclopedia.
If Wikipedia were written by paid professionals, you could plausibly put protocols in place to check and verify, and fire the ones who fail to check properly, but even paid professionals have been seen to let hallucinations through. For an encyclopedia put together by volunteers, forbidding AI is pretty much a forced choice.
[1]https://www.evidentlyai.com/bl... [evidentlyai.com]
[2]https://arize.com/llm-hallucin... [arize.com]
[3]https://thisweekinsciencenews.... [thisweekin...cenews.com]
[1] https://www.evidentlyai.com/blog/llm-hallucination-examples
[2] https://arize.com/llm-hallucination-examples/
[3] https://thisweekinsciencenews.com/blog/2025/08/21/the-complete-guide-to-llm-hallucinations-types-examples-and-how-to-spot-them/
Good for them. (Score:2)
And stop calling it "Generative" -- it doesn't generate anything -- at best, it's "reflective" in that it reflects back whatever was put into it. It's still GIGO.
Well, the hard job is done, then. (Score:3)
What remains is the trivial matter of enforcement. I guess they can use LLMs to evaluate submissions for the presence of a human factor.
Re: (Score:2)
The AI vs AI arms race, begun it has.
Re: (Score:1)
Like every solution --- go after the source. Find those able to code LLMs ... cut them down like bad weeds. Just like if you bought some bad soap ... that gave you an itch ... you go after the soap-makers who brewed up that itch. If they are Randists who claim their freedom requires them to create itching --- then put your iron boot-heel thru their teeth.