Will Groq Inc.'s Llama 3.1 405B model achieve a 10x cost reduction by end of 2024?
Yes • 50%
No • 50%
Resolution source: official announcements from Groq Inc. or validated third-party reports.
Groq Inc. and NVIDIA Turbocharge Llama 3.1 405B Model for Record-Breaking Speeds and Cost Efficiency
Jul 23, 2024, 03:18 PM
Groq Inc. has turbocharged the Llama 3.1 model, achieving record-breaking speeds and cost efficiency. The Llama 3.1 405B model, hosted by Groq Inc., runs at speeds of up to 330 tokens per second, making it 100 times faster than previous models. This advancement is expected to significantly reduce costs, with some estimates suggesting it could be 10 times cheaper. The model is also available for download on Hugging Face.

Additionally, Groq Inc. has partnered with Together Inference and Fine-tuning to bring these models to a broader audience, with speeds of up to 400 tokens per second for the Llama 3.1 8B model.

NVIDIA has also announced its AI Foundry service, which will allow enterprises and nations to build custom generative AI models using Llama 3.1 405B and NVIDIA Nemotron models, with comprehensive features including synthetic data generation and fine-tuning. The Llama 3.1 70B model with 128k context is also part of this offering, and NVIDIA NeMo Retriever microservices are included for accurate responses.