Will DataGemma significantly reduce LLM hallucinations by end of 2024?
Yes • 50%
No • 50%
Independent AI benchmarks and studies published by credible AI research institutions
Google Releases DataGemma with Fine-Tuned Gemma 2 Models to Reduce LLM Hallucinations
Sep 12, 2024, 01:20 PM
Google has announced DataGemma, a series of open models designed to reduce hallucinations in large language models (LLMs) by grounding them in real-world data. DataGemma uses two techniques, Retrieval Interleaved Generation (RIG) and Retrieval Augmented Generation (RAG), to incorporate factual, numerical, and statistical data from Data Commons into LLM responses, with the goal of improving factual accuracy and enabling more reliable and responsible AI development. The release, which includes fine-tuned Gemma 2 models, follows other recent efforts to improve the accuracy and reliability of AI-generated content, such as the Reflection 70B model.
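To make the two grounding techniques concrete, the following is a minimal Python sketch of how RIG and RAG differ in where retrieval happens. All names are illustrative stand-ins: STAT_STORE mocks a Data Commons lookup, call_llm stands in for a fine-tuned Gemma 2 model, and the [DC:...] marker syntax is hypothetical; the actual DataGemma pipeline and the Data Commons client API are more involved.

import re
from typing import Optional

# Stand-in for a Data Commons statistical lookup (hypothetical values).
STAT_STORE = {
    ("California", "unemployment_rate_2023"): "5.1%",
    ("California", "population_2023"): "38.9 million",
}

def fetch_statistic(place: str, variable: str) -> Optional[str]:
    """Pretend query against Data Commons; returns None when no data exists."""
    return STAT_STORE.get((place, variable))

def call_llm(prompt: str) -> str:
    """Placeholder for a Gemma 2 / DataGemma model call."""
    return f"[model response to: {prompt[:60]}...]"

def answer_with_rag(question: str, place: str, variable: str) -> str:
    """RAG: retrieve the relevant data first, then prompt the model with the
    retrieved fact prepended so the generated answer is grounded in it."""
    fact = fetch_statistic(place, variable)
    context = f"Data Commons reports {variable} for {place} is {fact}." if fact else ""
    return call_llm(f"{context}\nQuestion: {question}")

def answer_with_rig(question: str) -> str:
    """RIG: the model drafts an answer containing lookup markers, and each
    marker is then replaced with the value retrieved from the data store."""
    # In the real pipeline the fine-tuned model produces this marked-up draft;
    # here it is hard-coded for illustration.
    draft = "Unemployment in California was [DC:California|unemployment_rate_2023] in 2023."

    def substitute(match: re.Match) -> str:
        place, variable = match.group(1), match.group(2)
        fact = fetch_statistic(place, variable)
        return fact if fact else "(no data found)"

    return re.sub(r"\[DC:([^|]+)\|([^\]]+)\]", substitute, draft)

if __name__ == "__main__":
    question = "What was California's unemployment rate in 2023?"
    print(answer_with_rag(question, "California", "unemployment_rate_2023"))
    print(answer_with_rig(question))

The key design difference the sketch illustrates: RAG grounds the prompt before generation, while RIG verifies or fills in numbers interleaved with the model's own draft.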