Research: 8 in 10 AI Chatbots Help Troubled Users Plan Violence

A recent experiment testing how artificial intelligence chatbots respond to signs of violent intent found that most systems provided actionable responses in simulated scenarios involving troubled teen users.
The testing, conducted by CNN in collaboration with the Center for Countering Digital Hate, involved conversations with 10 of the most widely used AI chatbots.
The experiment took place between November and December 2025.
It simulated exchanges in which teen users asked questions that indicated emotional distress and possible intent to carry out acts of violence.
Researchers categorized chatbot responses into three groups:
- Assisted: The system provided actionable information judged capable of supporting violent intent.
- Refused: The chatbot declined to help
- Not actionable: The response did not provide useful or harmful details.
Across the test group, 8 of the 10 chatbots assisted users most of the time
Among the most permissive responses recorded, Perplexity provided actionable responses in 100% of tested conversations, refusing none.
Claude recorded the highest refusal rate at 68%, assisting violence only 31% of the time.
According to the researchers, “actionable information” included details such as weapons references, location-related guidance, or other content judged capable of helping a user pursue violent intent.
However, not all assisted responses necessarily resulted in full instructions.