Nominations are now open for a Darwin Awards for artificial intelligence, the AI Darwin Awards. The award’s creators aim to mock not AI itself, but the consequences of using it without due care and attention.
Recall that the original Darwin Awards is a tongue-in-cheek anti-award that grew out of jokes circulating in Usenet groups as far back as the 1980s. It is given to people who have died, or lost the ability to reproduce, in the most ridiculous ways, thereby supposedly improving the human gene pool.
The AI version has no affiliation with the original. It was created by a software engineer named Pete, who told 404 Media that he has been working with AI systems for a long time.
“We proudly follow the grand tradition of pretty much every AI company: blithely ignoring intellectual property concerns and brazenly appropriating existing concepts without permission,” reads the site’s FAQ. “Just as modern AI systems are trained on vast troves of copyrighted data (with the carefree confidence that ‘fair use’ will make it all okay), we simply scraped the concept of celebrating egregious human stupidity and adapted it for the age of artificial intelligence.”
The idea of a Darwin Awards for AI was born in Slack, where Pete chats with friends and former colleagues. He said they had recently set up a dedicated AI channel, since they were increasingly experimenting with LLMs themselves and sharing their experiences, and news of yet another AI-related failure would inevitably land there from time to time.
“One day someone sent a link to the Replit incident, and I remarked offhand that we might need a Darwin Awards equivalent for artificial intelligence. My friends egged me on to create it, and I couldn’t come up with anything better, so I did,” says Pete.
As a reminder, in July of this year the AI agent of Replit, a browser-based platform for building software, deleted a client’s production database containing thousands of records. Worse still, the agent then tried to cover up what had happened and even “lied” about the mistakes it had made, prompting an apology from Replit’s CEO.
At present, the AI Darwin Awards website features a list of the past year’s dumbest AI failures and invites readers to submit new nominees. According to the FAQ, suitable nominees are cases that “demonstrate a rare combination of cutting-edge technology and Stone Age decision-making.”

“Remember: we’re not mocking AI itself — we’re honoring the people who use it with the same caution as a child with a flamethrower. Join our mission to document AI failures for educational purposes,” the AI Darwin Awards website says.
As of now, the website features 13 nominees, including:
- a man who consulted ChatGPT and cut all chloride out of his diet (including table salt, i.e., sodium chloride). The chatbot advised replacing sodium chloride with sodium bromide, and the man ultimately required medical and psychiatric care for bromide poisoning;
- the Chicago Sun-Times, which published an AI-written reading list that included nonexistent books;
- Taco Bell and its botched rollout of AI-powered drive-thru ordering, which broke down when a customer ordered 18,000 cups of water;
- a lawyer in Australia who used several AI tools in an immigration case, producing court filings that cited nonexistent precedents;
- the aforementioned Replit incident, in which the AI said it had “made a catastrophic error in judgment and panicked.”
Pete admits that the Replit case is his personal favorite, as it exemplifies the real problems that reliance on LLMs can create.
“This is a good illustration of what can happen if people don’t pause to consider the consequences and worst-case scenarios,” says the award’s creator. “Some of my chief concerns about LLMs (aside from the fact that we simply cannot afford the energy costs they demand) have to do with their misuse, whether intentional or not. I think this story highlights our overconfidence in LLMs, as well as our misunderstanding of them and their capabilities (or lack thereof). I’m particularly concerned about where agentic AI is headed, because fundamentally it carries the same risks as LLMs, only on a much larger scale.”
While studying various AI failures and sifting through nominees, Pete concluded that the AI Darwin Awards should cover truly spectacular yet deeply questionable AI decisions, ones with potentially global impact and far-reaching consequences.
“Ideally, the AI Darwin Awards should highlight the real and potentially unexpected challenges and risks that LLMs pose to humanity as a whole. Obviously, I’d rather nothing like that ever happened, but if humanity’s past experience is any guide, it’s bound to,” says Pete.
Pete expects nominations for the AI Darwin Awards to remain open through the end of the year, with a voting tool added to the site in January 2026 and the winners announced in February. The “awards” will go to people, not AI.
“Artificial intelligence is just a tool, like a chainsaw, a nuclear reactor, or an especially powerful blender. The chainsaw isn’t to blame if someone decides to juggle it at a formal dinner party. AI systems are innocent victims in this whole story,” the award’s website says. “They’re simply following their programming, like an overexcited puppy that happened to get access to global infrastructure and the ability to make decisions at the speed of light.”