Google Releases DataGemma with Fine-Tuned Gemma 2 Models to Reduce LLM Hallucinations
Sep 12, 2024, 01:20 PM
Google has announced DataGemma, a series of open models designed to reduce hallucinations in large language models (LLMs) by grounding them in real-world data. Built as fine-tuned variants of Gemma 2, the DataGemma models use two techniques, Retrieval Interleaved Generation (RIG) and Retrieval Augmented Generation (RAG), to incorporate factual statistics from Google's Data Commons into LLM responses. By integrating numerical and statistical data, the models aim to improve factual accuracy and support more reliable and responsible AI development. The release follows other recent industry efforts to improve the reliability of AI-generated content, such as HyperWrite's Reflection 70B model.
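The announcement itself contains no code, but the difference between the two techniques can be sketched roughly as follows. This is a minimal illustration only: the function names, the `[DC(...)]` marker syntax, and the placeholder lookups are assumptions for this sketch, not Google's DataGemma implementation or the actual Data Commons API.

```python
# Sketch contrasting RIG and RAG for grounding LLM output in Data Commons.
# All functions below are hypothetical placeholders for illustration.

def query_data_commons(question: str) -> str:
    """Placeholder: look up a statistic for a natural-language query.
    A real system would call the Data Commons API here."""
    return "39.4% (Data Commons, 2022)"

def llm_generate(prompt: str) -> str:
    """Placeholder: call any LLM, e.g. a fine-tuned Gemma 2 model.
    For RIG, the model is assumed to emit inline retrieval markers."""
    return "About [DC(adult obesity rate in Mexico)] of adults in Mexico are obese."

def answer_with_rig(user_question: str) -> str:
    """RIG: the model interleaves retrieval into generation by emitting
    markers; each marker is resolved against Data Commons and the
    retrieved statistic replaces the model's own guess."""
    draft = llm_generate(user_question)
    while "[DC(" in draft:
        start = draft.index("[DC(")
        end = draft.index(")]", start)
        dc_query = draft[start + 4:end]
        fact = query_data_commons(dc_query)
        draft = draft[:start] + fact + draft[end + 2:]
    return draft

def answer_with_rag(user_question: str) -> str:
    """RAG: relevant statistics are retrieved *before* generation and
    placed in the prompt, so the model answers with facts in context."""
    facts = query_data_commons(user_question)
    prompt = f"Using only these statistics:\n{facts}\n\nAnswer: {user_question}"
    return llm_generate(prompt)

if __name__ == "__main__":
    q = "What share of adults in Mexico are obese?"
    print("RIG:", answer_with_rig(q))
    print("RAG:", answer_with_rag(q))
```

The practical distinction: RIG corrects numbers inside an answer the model has already drafted, while RAG front-loads the retrieved statistics so the model never has to guess in the first place.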
Markets
- Yes • 50%, No • 50% (resolution source: usage statistics published by Google and third-party analytics firms)
- Yes • 50%, No • 50% (resolution source: announcements from major AI industry awards such as NeurIPS, AAAI, or similar)
- Yes • 50%, No • 50% (resolution source: independent AI benchmarks and studies published by credible AI research institutions)
- Retrieval Interleaved Generation (RIG) • 25%, Retrieval Augmented Generation (RAG) • 25%, Both RIG and RAG • 25%, Other • 25% (resolution source: technical documentation and research papers published by Google)
- Reduction in hallucinations • 25%, Improvement in factual accuracy • 25%, Enhanced numerical and statistical data integration • 25%, Other • 25% (resolution source: independent AI benchmarks and studies published by credible AI research institutions)
- Healthcare • 25%, Finance • 25%, Education • 25%, Other • 25% (resolution source: usage statistics and case studies published by Google and third-party analytics firms)