
Wikipedia has adopted new rules to combat the flood of AI-generated articles inundating the online encyclopedia. Under the new policy, administrators are authorized to quickly delete AI-generated articles if they meet certain criteria.
Because Wikipedia is maintained by an international community of volunteers and editors, its participants spend a great deal of time on discussion and debate, both about specific edits to articles and about the policies that govern such edits in general. Deleting entire articles is standard practice on Wikipedia; however, the deletion process includes a week-long discussion phase during which the community tries to reach consensus on whether a particular piece of content should be removed.
For common violations that clearly contradict the encyclopedia’s rules, there is a process of speedy deletion: one person tags the article for deletion, and an administrator checks whether it meets the criteria and deletes it, bypassing the discussion stage.
This kind of speedy deletion can be applied, for example, to articles composed entirely of nonsensical text, as well as to promotional content that lacks encyclopedic value. However, if someone simply believes that an article is “probably not of interest,” that is a subjective assessment, and deletion requires a full discussion.
The problem is that, currently, most of the articles that editors mark as AI-generated fall into the second category, because the editors cannot say with confidence that they were written by AI.
As Ilyas Lebleu, one of the founders of the WikiProject AI Cleanup and the author of key wording in the new policy, explains, this is precisely why previous attempts to regulate AI content on Wikipedia have run into difficulties.
“While it’s easy to spot signs that a text may have been generated by AI (choice of phrasing, dashes, lists with bold headings, and so on), that’s rarely unambiguous. We don’t want to mistakenly delete material just because it looks like it was written by AI,” Lebleu told 404 Media. “Overall, the growth of readily available AI content is already being called an ‘existential threat’ to Wikipedia. Our processes are built around lengthy discussion and consensus-building, and if someone can quickly generate mountains of junk content, we don’t have a deletion procedure that’s equally fast. And that’s a serious problem. Of course, AI content isn’t always bad, and people are also quite good at creating bad content, just not at that scale. Our tools were built for a very different scale.”
The solution the Wikipedia community has reached is to allow speedy deletion of articles that are clearly AI-generated, provided they meet certain criteria.
The first criterion: the article contains phrases “addressed to the user.” These are formulations typical of LLM output, for example: “Here’s your article for Wikipedia” or “As a large language model…”. Such turns of phrase are a reliable marker that the text was generated by an LLM, and they have long been used to detect AI content in social media and academic papers.
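Giveaway phrases like these are simple enough to flag automatically with pattern matching. Below is a minimal sketch in Python; the phrase list is illustrative only, assembled for this example, and does not reflect Wikipedia’s actual detection rules:

```python
import re

# Hypothetical marker phrases that "address the user" and betray raw LLM
# output. This list is an illustrative assumption, not Wikipedia's criteria.
LLM_MARKER_PHRASES = [
    r"as a large language model",
    r"as an ai language model",
    r"here(?: is|['\u2019]s) your article",
    r"i hope this helps",
]

# One case-insensitive pattern; non-capturing groups keep findall() returning
# the full matched text rather than group contents.
MARKER_RE = re.compile("|".join(LLM_MARKER_PHRASES), re.IGNORECASE)

def find_llm_markers(text: str) -> list[str]:
    """Return every marker phrase found in the text, as it appears there."""
    return MARKER_RE.findall(text)

print(find_llm_markers("Here's your article for Wikipedia: Beetles are..."))
# A human reviewer still makes the final call; this only surfaces candidates.
```

A list of fixed phrases like this is how such markers are matched in practice: the phrases are distinctive enough that false positives are rare, which is exactly the property a speedy-deletion criterion needs.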
Lebleu notes that encyclopedia editors encounter this often. Such phrases show not only that the content was created by AI, but also that the user didn’t even read the generated text before posting it.
“If someone misses such basic things, you can confidently assume they didn’t check anything at all and just copied ‘white noise’,” says Lebleu.
The second criterion: the article contains obviously false citations. This is another common LLM failure: the model may cite non-existent books, articles, or scientific papers, or include links that lead to irrelevant content. Wikipedia’s new policy gives an example: if an article on computer science cites a work about beetle species, that is grounds for deletion.
According to Lebleu, expedited deletion is more of a temporary “patch” than a full-fledged solution. It will help eliminate the most obvious issues, but the problem of AI-generated content as a whole will persist. Such content still regularly appears in the encyclopedia and does not always fall under the criteria described above.
The expert also added that AI can be a useful tool and could even benefit Wikipedia in the future.
“The current situation is completely different, and speculation about how the technology will develop only distracts from the real problems we’re already facing,” he says. “One of Wikipedia’s core principles is that there are no firm rules. Therefore, any decision we make today can be revisited in a few years, as the technology advances further.”
Lebleu concludes that the new policy improves the situation, even if it doesn’t solve the problem entirely.
“The good news (in addition to the expedited deletion process itself) is that we now have a clear statement regarding LLM-generated articles. This used to be contentious. And while most of us oppose AI-written content, we couldn’t agree on exactly how to address it, and early attempts to develop a comprehensive policy failed. Now, building on prior success in dealing with AI-generated images, drafts, and talk page comments, we’ve set out more concrete guidelines that make it clear: unverified, AI-generated content runs counter to the spirit of Wikipedia.”
