Will DataGemma be the most used model for reducing LLM hallucinations by March 31, 2025?
Yes • 50%
No • 50%
Usage statistics published by Google and third-party analytics firms
Google Releases DataGemma with Fine-Tuned Gemma 2 Models to Reduce LLM Hallucinations
Sep 12, 2024, 01:20 PM
Google has announced the release of DataGemma, a series of open models designed to reduce hallucinations in large language models (LLMs) by grounding them in real-world data. DataGemma uses two techniques, Retrieval Interleaved Generation (RIG) and Retrieval Augmented Generation (RAG), to incorporate factual data from Google's Data Commons into LLM responses. By integrating numerical and statistical data into generation, the models aim to improve the factual accuracy of LLM output and support more reliable, responsible AI development. The release follows other recent industry efforts to improve the accuracy of AI-generated content, such as the Reflection 70B model, and includes fine-tuned Gemma 2 models.