Aug 2, 2025
Anthropic Scores Best in AI Safety Report

What We’re Showing
A comparative grading of major AI companies on their safety practices and risk mitigation efforts, assessed by the Future of Life Institute.
A panel of AI researchers and governance specialists graded the companies across six dimensions, ranging from current harms and information sharing to existential safety.
Key Takeaways
- Anthropic (creators of Claude) received the highest overall grade (C+), standing out for conducting the only human-involved bio-risk trials, not training on user data, leading in alignment research, and structuring itself as a Public Benefit Corporation committed to safety.
- Chinese companies Zhipu.AI and DeepSeek both received failing overall grades, though the report notes that China already imposes binding national AI regulations, which may explain why these firms score poorly on the index’s Western-oriented self-governance and transparency criteria.
- Only three companies—Anthropic, OpenAI, and Google DeepMind—report any testing for high-risk capabilities such as bio- or cyber-terrorism, and even these evaluations often lack explicit reasoning or rigorous standards.