Researchers and OpenAI Develop Algorithm to Detect AI Hallucinations with CriticGPT
Jun 27, 2024, 01:15 PM
Researchers have developed methods to detect when large language models (LLMs) such as ChatGPT and Gemini are hallucinating. One approach, advanced by researchers at the University of Oxford, computes semantic entropy: the model is sampled repeatedly on the same prompt, the answers are clustered by meaning, and the entropy over those clusters measures the model's uncertainty about what it is saying. A related method, Semantic Entropy Probes (SEPs), approximates semantic entropy directly from the model's hidden states, avoiding the cost of repeated generation; the researchers emphasize the robustness and cost-effectiveness of this detection approach. Researchers also suggest that chatbots checking each other's outputs can correct some hallucinations. Separately, OpenAI has introduced CriticGPT, a model based on GPT-4, to catch errors in ChatGPT's code output, improving the accuracy of AI-generated content.
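The semantic-entropy idea can be sketched in a few lines: sample several answers, group them by meaning, and take the entropy over the resulting clusters. The sketch below is illustrative only — the published method clusters answers using bidirectional entailment with an NLI model, which is replaced here by a simple string-normalization stand-in (`semantically_equivalent` is a hypothetical helper, not part of any library).

```python
import math

def semantically_equivalent(a: str, b: str) -> bool:
    # Stand-in for the bidirectional-entailment check used in the
    # semantic-entropy work; here we just normalize and compare strings.
    def norm(s: str) -> str:
        return s.lower().strip().rstrip(".")
    return norm(a) == norm(b)

def semantic_entropy(samples: list[str]) -> float:
    """Cluster sampled answers by meaning, then compute the entropy
    over cluster probabilities (estimated from cluster sizes)."""
    clusters: list[list[str]] = []
    for s in samples:
        for cluster in clusters:
            if semantically_equivalent(s, cluster[0]):
                cluster.append(s)
                break
        else:
            clusters.append([s])
    n = len(samples)
    probs = [len(c) / n for c in clusters]
    return -sum(p * math.log(p) for p in probs)

# Consistent answers cluster together: low entropy, model likely confident.
low = semantic_entropy(["Paris.", "paris", "Paris."])
# Divergent answers form many clusters: high entropy, possible hallucination.
high = semantic_entropy(["Paris.", "Lyon", "Marseille"])
```

A SEP replaces this sampling loop entirely, training a lightweight probe on the model's hidden states to predict the entropy value directly.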