NVIDIA Introduces Llama3-70B ChatQA 2 with 128K Context Window to Rival Proprietary LLMs
Jul 22, 2024, 02:58 AM
NVIDIA has introduced ChatQA 2, a Llama3-70B-based model designed to bridge the gap between open-access large language models (LLMs) and leading proprietary models such as GPT-4-Turbo. ChatQA 2 features a 128K-token context window, strengthening its long-context understanding and Retrieval-Augmented Generation (RAG) capabilities, and aims to match the performance of proprietary models across a range of tasks. NVIDIA also published a training recipe for effectively extending the context window, a notable advancement for open-access LLMs that could challenge the dominance of proprietary models.
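The summary pairs a 128K-token context window with RAG. One practical consequence is that a RAG pipeline must budget how many retrieved passages fit into the window alongside the prompt and the reserved generation space. A minimal sketch of that budgeting logic (the window size comes from the article; the reserved-token figure and chunk sizes are illustrative assumptions, and real pipelines would count tokens with the model's tokenizer):

```python
def chunks_that_fit(chunk_token_counts, context_window=128_000, reserved_for_output=4_000):
    """Greedily count how many retrieved chunks fit in the context window.

    chunk_token_counts: token lengths of retrieved passages, in retrieval order.
    context_window: total context size (128K per the ChatQA 2 announcement).
    reserved_for_output: tokens held back for the model's answer (assumed value).
    """
    budget = context_window - reserved_for_output
    used = 0
    kept = 0
    for n in chunk_token_counts:
        if used + n > budget:
            break  # next chunk would overflow the window; stop packing
        used += n
        kept += 1
    return kept

# Example: three large passages against a 128K window with 4K reserved.
print(chunks_that_fit([50_000, 50_000, 30_000]))  # only the first two fit
```

A long-context model like this shifts the trade-off: with a 4K or 8K window, the same retrieval step would force aggressive truncation or re-ranking, whereas 128K tokens can hold dozens of ordinary-sized documents at once.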
Markets
No • 50% | Yes • 50%
Resolution source: Official announcements from major tech companies or NVIDIA

No • 50% | Yes • 50%
Resolution source: Official announcements from consumer application developers or NVIDIA

Yes • 50% | No • 50%
Resolution source: Performance benchmarks published by reputable AI research organizations or NVIDIA's own announcements

CVPR • 25% | NeurIPS • 25% | ICML • 25% | Other or None • 25%
Resolution source: Official conference agendas and keynotes

0 • 25% | 1 • 25% | 2 • 25% | 3 or more • 25%
Resolution source: Official announcements from tech companies or NVIDIA

Top 1 • 25% | Top 5 • 25% | Top 10 • 25% | Below Top 10 • 25%
Resolution source: Results from the next major AI performance benchmark (e.g., MLPerf, GLUE, SuperGLUE)