Loading...
Loading...
Browse all stories on DeepNewz
VisitFirst sector to report significant impact from AgentHarm by May 31, 2025?
Healthcare • 25%
Finance • 25%
Technology • 25%
Other • 25%
Industry reports or news articles detailing impacts
AI Safety Institute Releases AgentHarm to Measure LLM Agent Harmfulness on October 14, 2024
Oct 15, 2024, 02:22 PM
The AI Safety Institute, in collaboration with GraySwanAI, has announced the release of AgentHarm, a novel dataset designed to measure the harmfulness of large language model (LLM) agents. This benchmark focuses on unique harms from AI agents with access to external tools, addressing a critical gap in current safety evaluations. Announced on October 14, 2024, AgentHarm is comprehensive, reliable, and easy to run, allowing for widespread use. The initiative highlights the need for robust safety mechanisms as LLM agents become more integrated with external systems. Jailbreaking transfers to LLM agents without degrading capabilities, and the dataset is partly public.
View original story
Airlines • 25%
Banks • 25%
Media • 25%
Emergency Services • 25%
Aviation • 25%
Banking • 25%
Healthcare • 25%
Other • 25%
Air travel • 25%
Rail travel • 25%
Hospitals • 25%
Banks • 25%
Chatbot Safety Evaluation • 25%
Tool-using Agent Safety • 25%
Jailbreaking Resistance Testing • 25%
Other • 25%
Airlines • 25%
Banks • 25%
Media • 25%
Emergency Services • 25%
Airlines • 25%
Banks • 25%
Hospitals • 25%
Media outlets • 25%
Aviation • 25%
Healthcare • 25%
Finance • 25%
Other • 25%
Aviation • 25%
Banking • 25%
Healthcare • 25%
Other • 25%
Banking • 25%
Airlines • 25%
Media • 25%
Government • 25%
Financial Services • 25%
Healthcare • 25%
Transportation • 25%
Retail • 25%
Supermarkets • 25%
Banks • 25%
Broadcasters • 25%
Airlines • 25%
Yes • 50%
No • 50%
No • 50%
Yes • 50%
Other • 25%
Transparency and Explainability • 25%
Bias and Fairness • 25%
Robustness and Reliability • 25%