Will Llama 3.3 surpass Llama 3.1 405B in benchmark performance by mid-2025?
Yes • 50%
No • 50%
Resolution source: benchmark performance results published by Meta or independent AI benchmarking organizations
Meta's Llama 3.3: 70B Model with 128K Context Window Matches 405B Performance
Dec 6, 2024, 05:22 PM
Meta Platforms Inc. has released Llama 3.3, a new 70 billion parameter language model that matches the performance of its previous 405 billion parameter model, Llama 3.1 405B, at a significantly lower cost. The text-only model supports eight languages, has a 128,000-token context window, and is available under the Llama 3.3 Community License. The release reflects Meta's continued push toward making high-performance AI models more accessible and cost-effective for developers and users. Llama 3.3 outperforms Amazon's Nova Pro and matches the capabilities of Llama 3.1 405B, with improved efficiency in areas such as math and reasoning. It was trained on 15 trillion tokens, has a knowledge cutoff of December 2023, and can be downloaded from Meta and Hugging Face.