Loading...
Loading...
Browse all stories on DeepNewz
VisitWill AgentHarm be a standard benchmark in AI safety research by end of 2025?
Yes • 50%
No • 50%
Inclusion and citation in major AI safety research papers or conferences
AI Safety Institute and Gray Swan AI Release AgentHarm to Measure LLM Agent Harmfulness, Address Jailbreaking
Oct 14, 2024, 12:05 PM
The AI Safety Institute (AISI) and Gray Swan AI have announced the release of AgentHarm, a benchmark designed to measure the harmfulness of large language model (LLM) agents. This dataset aims to evaluate the unique harms posed by AI agents with access to external tools. The collaboration emphasizes the importance of moving beyond simple chatbot evaluations to assess the safety of more complex agent tasks. AgentHarm is described as easy to run, comprehensive, and reliable, and it is partly public, allowing broader accessibility for safety evaluations. The dataset also addresses concerns about jailbreaking and robustness in LLM agents.
View original story
Yes • 50%
No • 50%
OpenAI • 25%
Google DeepMind • 25%
Anthropic • 25%
Other • 25%
Fairness metric • 25%
Robustness metric • 25%
Transparency metric • 25%
Other metric • 25%
Yes • 50%
No • 50%
Yes • 50%
No • 50%
Yes • 50%
No • 50%
Yes • 50%
No • 50%
Yes • 50%
No • 50%
Alignment Techniques • 25%
Deception Detection • 25%
Reinforcement Learning Safety • 25%
Other • 25%
No • 50%
Yes • 50%
Chatbot Safety Evaluation • 25%
Other • 25%
Jailbreaking Resistance Testing • 25%
Tool-using Agent Safety • 25%
ICML 2025 • 25%
Other • 25%
NeurIPS 2024 • 25%
AAAI 2025 • 25%