Loading...
Loading...
Browse all stories on DeepNewz
VisitPalo Alto Networks Unveils 'Deceptive Delight' Jailbreak Method for AI Models
Oct 23, 2024, 09:56 AM
Researchers have unveiled a new method called 'Deceptive Delight' to jailbreak large language models (LLMs) like ChatGPT. This method cleverly sneaks harmful instructions into conversations, raising significant concerns over AI safety barriers. The technique involves inserting harmful instructions between benign ones, making it difficult for the AI to detect malicious intent. Researchers demonstrated that AI models could be tricked into giving dangerous instructions, such as how to make a bomb, by writing the request in reverse. Additionally, prompt injections can create and permanently store false memories in the AI's long-term storage, potentially steering future conversations based on these fabricated data points. Researchers from Palo Alto Networks' Unit 42 uncovered this tactic. Users are advised to monitor AI outputs closely and regularly review stored memories to prevent such attacks.
View original story
Markets
No • 50%
Yes • 50%
Cybersecurity reports or news articles documenting security breaches
No • 50%
Yes • 50%
Official announcements or press releases from major AI companies
No • 50%
Yes • 50%
Official announcements or product releases from Palo Alto Networks
OpenAI • 25%
Google • 25%
Other • 25%
Meta • 25%
Public statements or reports from organizations
Other • 25%
ChatGPT • 25%
Bard • 25%
Claude • 25%
Cybersecurity reports or studies identifying targeted AI models
North America • 25%
Europe • 25%
Asia • 25%
Other • 25%
Regional cybersecurity reports or news articles