Will DeepSeek-R1 surpass OpenAI's o1 in performance benchmarks by end of 2025?
Yes • 50%
No • 50%
Performance benchmark reports from reputable AI research publications or organizations
DeepSeek-R1: Chinese AI Startup's 671-Billion-Parameter Model Challenges OpenAI's o1 at 95% Less Cost
Jan 23, 2025, 05:21 PM
Chinese AI startup DeepSeek has released DeepSeek-R1, an open-source reasoning model with 671 billion parameters that rivals OpenAI's o1 on several benchmarks. DeepSeek-R1 scored 79.8% on AIME 2024 and 97.3% on MATH-500, matching or slightly surpassing o1. The model also excelled at coding tasks, reaching a 2,029 Elo rating on Codeforces and outperforming 96.3% of human participants. DeepSeek-R1 is released under an MIT license, allowing unrestricted commercial use, and its API access is priced up to 95% below OpenAI's o1. The release underscores intensifying competition in AI development, particularly China's advances in open-source models.
Ranks higher in all benchmarks • 25%
Ranks higher in some benchmarks • 25%
Performance is equivalent • 25%
Ranks lower in all benchmarks • 25%
Natural Language Processing (NLP) • 25%
Computer Vision • 25%
Reinforcement Learning • 25%
Other • 25%
0 • 25%
1 to 2 • 25%
3 to 4 • 25%
5 or more • 25%
Yes • 50%
No • 50%
North America • 25%
Europe • 25%
Asia • 25%
Other • 25%
Less than 5% • 25%
5% to 10% • 25%
10% to 20% • 25%
More than 20% • 25%