Groq Inc. and NVIDIA Turbocharge Llama 3.1 405B Model for Record-Breaking Speeds and Cost Efficiency
Jul 23, 2024, 03:18 PM
Groq Inc. has turbocharged the Llama 3.1 models, achieving record-breaking speeds and cost efficiency. The Llama 3.1 405B model, hosted by Groq, runs at speeds of up to 330 tokens per second, reportedly 100 times faster than previous models, and some estimates suggest it could be 10 times cheaper. The model is also available for download on Hugging Face. Groq has additionally partnered with Together AI, whose Inference and Fine-tuning services bring these models to a broader audience at speeds of up to 400 tokens per second for the Llama 3.1 8B model.

NVIDIA, for its part, has announced its AI Foundry service, which will allow enterprises and nations to build custom generative AI models using Llama 3.1 405B and NVIDIA Nemotron models, with comprehensive features including synthetic data generation and fine-tuning. The offering also includes the Llama 3.1 70B model with 128k context, along with NVIDIA NeMo Retriever microservices for accurate responses.
Markets
Market 1
- Yes • 50%
- No • 50%
Resolution source: Official announcements from Groq Inc. or validated third-party reports

Market 2
- No • 50%
- Yes • 50%
Resolution source: Official press releases from NVIDIA or announcements from Fortune 500 companies

Market 3
- Yes • 50%
- No • 50%
Resolution source: Download statistics from Hugging Face

Market 4
- Groq Inc. • 25%
- Other • 25%
- NVIDIA • 25%
- Together Inference • 25%
Resolution source: Official benchmarks published by Groq Inc., Together Inference, or third-party validators

Market 5
- NeMo Retriever microservices • 25%
- Custom generative AI models • 25%
- Synthetic data generation • 25%
- Fine-tuning • 25%
Resolution source: Usage statistics or surveys from NVIDIA or third-party market research firms

Market 6
- Llama 3.1 405B • 25%
- Llama 3.1 8B • 25%
- Llama 3.1 70B • 25%
- Other • 25%
Resolution source: Adoption statistics from Groq Inc., NVIDIA, or third-party market research firms