Loading...
Loading...
Browse all stories on DeepNewz
VisitHow will the new method to enhance the security of open-source LLMs be adopted by end of 2024?
Widely adopted • 25%
Partially adopted • 25%
Not adopted • 25%
Other • 25%
Official announcements from major open-source LLM projects or robusthq blog
Researchers Find 99.8% Exploit in Meta's Prompt-Guard-86M AI Model
Aug 2, 2024, 03:54 PM
Researchers at robusthq have identified a significant vulnerability in Meta's recently refreshed Prompt-Guard-86M model, which is designed to protect large language models (LLMs) against jailbreaks and other adversarial examples. The exploit has a 99.8% success rate. The researchers have shared countermeasures with Meta, and the company is working on a fix. The findings were published in a blog. Additionally, a new method has been developed to enhance the security of open-source LLMs by preventing tampering, which could prevent misuse such as explaining how to make a bomb.
View original story
Two-factor authentication (2FA) • 25%
Biometric authentication • 25%
Enhanced password policies • 25%
Other • 25%
Yes • 50%
No • 50%
Yes • 50%
No • 50%
Yes • 50%
No • 50%
Yes • 50%
No • 50%
Two-factor authentication (2FA) • 25%
Biometric authentication • 25%
Enhanced encryption protocols • 25%
Other • 25%
Focus on user data protection • 25%
Focus on AI model security • 25%
Focus on infrastructure security • 25%
No significant change • 25%
No • 50%
Yes • 50%
No significant change • 25%
Increase • 25%
Other • 25%
Decrease • 25%