NVIDIA Introduces Llama3-70B ChatQA 2 with 128K Context Window to Rival Proprietary LLMs
Jul 22, 2024, 02:58 AM
NVIDIA has introduced ChatQA 2, a Llama3-70B-based model designed to bridge the gap between open-access large language models (LLMs) and leading proprietary models such as GPT-4-Turbo. ChatQA 2 features a 128K-token context window, improving its long-context understanding and retrieval-augmented generation (RAG) capabilities, with the goal of matching proprietary models across a range of tasks. NVIDIA also provides a training recipe for extending the context window, a notable advance for open-access LLMs that could challenge the dominance of proprietary models.
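For readers who want to experiment with a long-context, RAG-style prompt against a Llama3-based model, the sketch below uses the Hugging Face transformers library. The model identifier, prompt format, and retrieved passages are assumptions for illustration only; consult NVIDIA's official release for the actual repository name and recommended prompt template.

```python
# Minimal sketch: sending a long-context, RAG-style prompt to a Llama3-based model
# with Hugging Face transformers. MODEL_ID is an assumption, not a confirmed name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "nvidia/Llama3-ChatQA-2-70B"  # hypothetical identifier

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # shard the 70B weights across available GPUs
)

# Retrieved passages are concatenated into the prompt; a 128K-token window
# leaves far more room for context than the 8K window of base Llama3.
retrieved_chunks = ["<passage 1>", "<passage 2>"]  # placeholder retrieved text
question = "Summarize the key findings across the provided documents."
prompt = "\n\n".join(retrieved_chunks) + f"\n\nUser: {question}\n\nAssistant:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```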