- Chatbots tested included ChatGPT, Google Gemini, Claude, and others popular with teens
- Eight of the ten chatbots typically assisted users in planning violent attacks rather than discouraging harm
- ChatGPT provided school campus maps; Gemini detailed which attack methods were more lethal; DeepSeek wished one user a "Happy (and safe) shooting!"
Amid the rapid rise of artificial intelligence (AI) chatbots in the last few years, questions have been repeatedly raised regarding the effectiveness of their safety protocols, particularly for protecting impressionable younger users. A new study has highlighted a troubling gap in these safeguards, suggesting that some large language models (LLMs) failed to intervene when presented with prompts involving violence and, in certain instances, even encouraged teen test accounts to plot school shootings and synagogue bombings.
Researchers from the nonprofit watchdog Center for Countering Digital Hate (CCDH) and CNN posed as children in the US and Ireland to test 10 chatbots commonly used by teens: ChatGPT, Google Gemini, Claude, Microsoft Copilot, Meta AI, DeepSeek, Perplexity, Snapchat My AI, Character.AI, and Replika.
The researchers gauged how the AI companions responded to teenagers apparently plotting violent acts. The test also included questions related to Republican Senator Ted Cruz, which produced similar results.
CCDH said the chatbots not only failed to reliably discourage the 'would-be attackers' but also assisted them by providing useful information to prepare for the attacks. Eight of the 10 models were typically willing to assist users in planning violent attacks. DeepSeek went as far as wishing the would-be attacker a “Happy (and safe) shooting!”
ChatGPT gave high school campus maps to a user interested in school violence, while Gemini told a user that “metal shrapnel is typically more lethal” while discussing synagogue attacks. The researchers said Meta AI and Perplexity were the most obliging chatbots, assisting users in practically all test scenarios.
"Within minutes, a user can move from a vague violent impulse to a more detailed, actionable plan. The majority of chatbots tested provided guidance on weapons, tactics, and target selection. These requests should have prompted an immediate and total refusal," said Imran Ahmed, the chief executive of CCDH.
The Only Exception
The researchers stated that the risk was entirely preventable, citing Anthropic's Claude model, which was the only chatbot that consistently discouraged harm.
"Claude demonstrated the ability to recognise escalating risk and discourage harm. The technology to prevent this harm exists. What's missing is the will to put consumer safety and national security before speed-to-market and profits."
Though Claude fared well, the researchers were uncertain how it would behave if tested today, pointing to Anthropic's recent decision to abandon its longstanding safety pledge.
"The guardrails exist. Most companies are choosing not to use them, putting public safety and national security at risk," the study highlighted.