Which entity will achieve the highest token generation speed for Llama 3.1 models by end of 2024?
Groq Inc. • 25%
Together Inference • 25%
NVIDIA • 25%
Other • 25%
Resolution source: official benchmarks published by Groq Inc., Together Inference, or third-party validators
Groq Inc. and NVIDIA Turbocharge Llama 3.1 405B Model for Record-Breaking Speeds and Cost Efficiency
Jul 23, 2024, 03:18 PM
Groq Inc. has turbocharged the Llama 3.1 models, achieving record-breaking speeds and cost efficiency. The Llama 3.1 405B model, hosted by Groq Inc., runs at up to 330 tokens per second, which Groq describes as 100 times faster than previous models. This advancement is expected to significantly reduce costs, with some estimates suggesting it could be 10 times cheaper. The model is also available for download on Hugging Face. Additionally, Groq Inc. has partnered with Together Inference and Fine-tuning to bring these models to a broader audience, with speeds of up to 400 tokens per second for the Llama 3.1 8B model. NVIDIA has also announced its AI Foundry service, which will allow enterprises and nations to build custom generative AI models using Llama 3.1 405B and NVIDIA Nemotron models, with features including synthetic data generation and fine-tuning. The offering also covers the Llama 3.1 70B model with 128k context and includes NVIDIA NeMo Retriever microservices for more accurate responses.
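The headline figures above (330 tokens per second for 405B, 400 for 8B) are decode-throughput measurements: tokens generated divided by wall-clock time. As a rough illustration of how such a number is obtained, here is a minimal Python sketch; the `fake_generate` stub is hypothetical and simply stands in for a real inference call, so the measured rate is simulated, not a real benchmark.

```python
import time


def measure_tokens_per_second(generate, prompt, n_runs=3):
    """Average decode throughput (tokens/sec) over several runs.

    `generate` is any callable that produces a completion for `prompt`
    and returns the number of tokens it emitted.
    """
    rates = []
    for _ in range(n_runs):
        start = time.perf_counter()
        n_tokens = generate(prompt)
        elapsed = time.perf_counter() - start
        rates.append(n_tokens / elapsed)
    return sum(rates) / len(rates)


# Hypothetical stub standing in for a real inference endpoint: it
# "emits" 33 tokens in ~0.1 s, i.e. roughly the 330 tok/s reported
# for the 405B model above.
def fake_generate(prompt):
    time.sleep(0.1)
    return 33
```

Real benchmarks additionally separate time-to-first-token (prefill) from per-token decode latency; the single averaged rate here is the simplest possible version of the metric.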