Wikipedia editors will quickly delete AI-generated articles

📟 News

Date: 12/08/2025

Wikipedia has adopted new rules to combat the flood of AI-generated articles inundating the online encyclopedia. Under the new policy, administrators are authorized to quickly delete AI-generated articles if they meet certain criteria.

Because Wikipedia is maintained by an international community of volunteers and editors, its participants spend a lot of time on discussions and debates—both about specific edits to articles and about the policies that govern such edits in general. Deleting entire articles is standard practice on Wikipedia; however, the deletion process includes a week-long discussion phase during which the community tries to reach a consensus on whether a particular piece of content should be removed.

For common violations that clearly contradict the encyclopedia’s rules, there is a process of speedy deletion: one person tags the article for deletion, and an administrator checks whether it meets the criteria and, if so, deletes it, bypassing the discussion stage.

This kind of speedy deletion, for example, can be applied to articles composed entirely of nonsensical text, as well as to promotional content that lacks encyclopedic value. However, if someone simply believes that an article is “probably not of interest,” that is a subjective assessment, and deletion requires a full discussion.

The problem is that, currently, most of the articles that editors mark as AI-generated fall into the second category, because the editors cannot say with confidence that they were written by AI.

As Ilyas Lebleu, one of the founders of WikiProject AI Cleanup and the author of key wording in the new policy, explains, this is precisely why previous attempts to regulate AI content on Wikipedia have run into difficulties.

“While it’s easy to spot signs that a text may have been generated by AI (choice of phrasing, dashes, lists with bold headings, and so on), that’s rarely unambiguous. We don’t want to mistakenly delete material just because it looks like it was written by AI,” Lebleu told 404 Media. “Overall, the growth of readily available AI content is already being called an ‘existential threat’ to Wikipedia. Our processes are built around lengthy discussions and consensus-building, and if someone can quickly generate mountains of junk content, we don’t have a deletion procedure that’s equally fast. And that’s a serious problem. Of course, AI content isn’t always bad, and people are also quite good at creating bad content, just not at that scale. Our tools were built for a very different scale.”

The solution reached by the Wikipedia community is to allow speedy deletion of articles that are clearly AI-generated. However, for this to apply, the articles must meet certain criteria.

First criterion: the article contains phrases “addressed to the user.” These are formulations that betray AI-typical output, for example: “Here’s your article for Wikipedia” or “As a large language model…”. Such turns of phrase are a reliable marker that the text was generated by an LLM, and they have long been used to detect AI content in social media and academic papers.
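The phrase-marker idea described above can be sketched as a simple case-insensitive pattern scan. This is an illustrative toy, not Wikipedia's actual tooling; the marker list and the `find_ai_markers` helper are assumptions made for the example.

```python
import re

# Hypothetical list of "addressed to the user" marker phrases; the actual
# patterns Wikipedia editors look for are not enumerated in the policy.
AI_MARKER_PATTERNS = [
    r"as a large language model",
    r"as an ai language model",
    r"here('s| is) your (wikipedia )?article",
    r"i hope this helps",
]

def find_ai_markers(text: str) -> list[str]:
    """Return every marker pattern that matches the text (case-insensitive)."""
    lowered = text.lower()
    return [p for p in AI_MARKER_PATTERNS if re.search(p, lowered)]

sample = "Here is your article for Wikipedia: The beetle is an insect..."
print(find_ai_markers(sample))
```

A real deployment would need far more robust signals, since such phrases are trivially stripped out; as the article notes, their presence mainly indicates that the submitter never read the generated text at all.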

Lebleu notes that encyclopedia editors encounter this often. Such slips show not only that the content was created by AI, but also that the user never even read the generated text.

“If someone misses such basic things, you can confidently assume they didn’t check anything at all and just copied ‘white noise’,” says Lebleu.

The second criterion: the article contains obviously false citations. This is another common failure of LLMs: the model may cite non-existent books, articles, or scientific papers, or include links that lead to irrelevant content. Wikipedia’s new policy gives an example: if an article on computer science cites a work about beetle species, that is grounds for deletion.
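The policy's beetle example boils down to a topical mismatch between an article and its sources. The toy heuristic below, with an assumed helper `citation_seems_off_topic`, only illustrates that idea; real editors judge citation relevance (and existence) manually.

```python
def citation_seems_off_topic(article_keywords: set[str], source_title: str) -> bool:
    """Toy heuristic: flag a cited title that shares no keyword with the article.

    Illustrates the policy's example of a computer-science article citing
    beetle research; it is not an actual Wikipedia mechanism.
    """
    source_words = {w.strip(".,").lower() for w in source_title.split()}
    return article_keywords.isdisjoint(source_words)

cs_keywords = {"computer", "science", "algorithm", "software"}
print(citation_seems_off_topic(cs_keywords, "A Survey of Beetle Species in Madagascar"))
print(citation_seems_off_topic(cs_keywords, "Algorithm Design in Modern Software"))
```

Keyword overlap is of course a crude proxy; it catches only the most blatant mismatches, which is roughly the bar the speedy-deletion criterion sets as well.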

According to Lebleu, expedited deletion is more of a temporary “patch” than a full-fledged solution. It will help eliminate the most obvious issues, but the problem of AI-generated content as a whole will persist. Such content still regularly appears in the encyclopedia and does not always fall under the criteria described above.

The expert also added that AI can be a useful tool and could even benefit Wikipedia in the future.

“The current situation is completely different, and speculation about how technology will develop only distracts from the real problems we’re already facing,” he says. “One of Wikipedia’s core principles is that there are no firm rules. Therefore, any decisions we make today can be revisited in a few years, as the technology advances further.”

Lebleu concludes that the new policy improves the situation, even if it doesn’t solve the problem entirely.

“The good news (in addition to the expedited deletion process itself) is that we now have a clear statement regarding LLM-generated articles. This used to be contentious. And while most of us oppose AI-written content, we couldn’t agree on exactly how to address it, and early attempts to develop a comprehensive policy failed. Now, building on prior success in dealing with AI-generated images, drafts, and talk page comments, we’ve set out more concrete guidelines that make it clear: unverified, AI-generated content runs counter to the spirit of Wikipedia.”
