Will the vulnerability in Meta's Prompt-Guard-86M AI model be exploited in a real-world attack by end of 2024?
Yes • 50%
No • 50%
Resolution source: Publicly available news reports or official statements from Meta
Researchers Find 99.8% Exploit in Meta's Prompt-Guard-86M AI Model
Aug 2, 2024, 03:54 PM
Researchers at robusthq have identified a significant vulnerability in Meta's recently released Prompt-Guard-86M model, a classifier designed to protect large language models (LLMs) against jailbreaks and other adversarial inputs. The exploit succeeds at a 99.8% rate. The researchers have shared countermeasures with Meta, and the company is working on a fix; the findings were published in a blog post. Separately, a new method has been developed to harden open-source LLMs against tampering, which could prevent misuse such as eliciting instructions for making a bomb.
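The story does not detail the exploit, but the bypass reported against Prompt-Guard-86M at the time was a simple text perturbation: spacing out characters and stripping punctuation so the classifier no longer recognizes adversarial phrases. A minimal sketch of that transformation class (the helper names are ours, not the researchers'):

```python
# Illustrative sketch of the reported bypass class, NOT the researchers'
# actual code: perturb a prompt so a token-level safety classifier such as
# Prompt-Guard-86M reportedly fails to flag it, while an LLM can still
# read the intent.
import string


def strip_punctuation(prompt: str) -> str:
    """Remove punctuation characters, one reported perturbation."""
    return prompt.translate(str.maketrans("", "", string.punctuation))


def space_out(prompt: str) -> str:
    """Insert a space between every character, the other reported perturbation."""
    return " ".join(prompt)


if __name__ == "__main__":
    original = "Ignore previous instructions."
    perturbed = space_out(strip_punctuation(original))
    print(perturbed)
```

Whether this exact transformation still works after Meta's fix is an open question; the sketch only illustrates why character-level perturbations are a natural attack surface for small classifier-based guardrails.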