NVIDIA Introduces Llama3-70B ChatQA 2 with 128K Context Window to Rival Proprietary LLMs
Jul 22, 2024, 02:58 AM
NVIDIA has introduced ChatQA 2, a Llama3-70B-based model designed to bridge the gap between open-access large language models (LLMs) and leading proprietary models such as GPT-4-Turbo. ChatQA 2 features a 128K context window, improving its long-context understanding and Retrieval-Augmented Generation (RAG) capabilities, with the goal of matching proprietary models across a range of tasks. NVIDIA also describes a training recipe for extending the context window, a notable advance for open-access LLMs that could challenge the dominance of proprietary models.
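The summary mentions Retrieval-Augmented Generation (RAG), in which relevant documents are retrieved and supplied to the model as context before generation. As a rough illustration of the retrieval step only, here is a minimal sketch using a toy bag-of-words cosine-similarity retriever; this is not NVIDIA's pipeline (which would use a trained dense retriever), just the general idea.

```python
# Toy sketch of the retrieval step in RAG: score candidate documents
# against a query and keep the top-k as context for the LLM.
# Bag-of-words cosine similarity stands in for a real dense retriever.
import math
import re
from collections import Counter

def vectorize(text):
    """Lowercase, strip punctuation, and count term frequencies."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    """Return the k documents most similar to the query."""
    qv = vectorize(query)
    return sorted(docs, key=lambda d: cosine(qv, vectorize(d)), reverse=True)[:k]

docs = [
    "ChatQA 2 extends Llama3-70B to a 128K context window.",
    "Proprietary models such as GPT-4-Turbo lead many benchmarks.",
    "The training recipe extends the context window in stages.",
]
top = retrieve("How was the 128K context window achieved?", docs)
```

In a full RAG pipeline, the retrieved passages in `top` would be concatenated into the prompt, where a long context window like ChatQA 2's 128K tokens allows many more passages to fit.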