Which benchmark will DeepSeek-R1 outperform OpenAI on by end of 2025?
Natural Language Processing (NLP) • 25%
Computer Vision • 25%
Reinforcement Learning • 25%
Other • 25%
Results published in AI research papers or announcements by AI benchmarking organizations
China's DeepSeek Unveils 671B-Parameter AI Model Rivaling OpenAI, Trained in Two Months
Jan 26, 2025, 03:28 AM
Chinese AI startup DeepSeek has caused a stir in Silicon Valley after releasing open-source AI models that rival those of leading U.S. companies like OpenAI and Google. DeepSeek's latest model, DeepSeek-R1, contains 671 billion parameters and matches or outperforms OpenAI's o1 model on certain benchmarks, despite being developed at a fraction of the cost and without access to cutting-edge chips restricted by U.S. export controls. The company reportedly used 2,048 Nvidia H800 GPUs to train DeepSeek-R1 in just two months, spending only $5.5 million to $6 million—significantly less than the hundreds of millions reportedly spent by U.S. firms. DeepSeek's success highlights the potential for innovation through efficient engineering, including the use of reinforcement learning, and challenges the notion that massive computing resources are essential for advanced AI development. Co-founded by Liang Wenfeng, the Hangzhou-based company's rapid progress has ignited concerns in the U.S. tech industry, with reports of companies like Meta analyzing DeepSeek's model to glean insights. DeepSeek's achievements underscore China's growing competitiveness in artificial intelligence despite U.S. export controls on advanced chips.
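The reported budget and hardware figures imply a rough cost per GPU-hour. A back-of-the-envelope check, assuming "two months" means roughly 60 days of continuous training (an approximation not stated in the article):

```python
# Implied cost per GPU-hour from the article's reported figures.
# Assumption: "two months" ~ 60 days of continuous training.
gpus = 2048                      # Nvidia H800s reportedly used
days = 60                        # approximation of "two months"
gpu_hours = gpus * days * 24     # total GPU-hours consumed

for budget in (5_500_000, 6_000_000):   # reported $5.5M-$6M range
    rate = budget / gpu_hours
    print(f"${budget:,} / {gpu_hours:,} GPU-hours = ${rate:.2f}/GPU-hour")
# Roughly $1.86 to $2.03 per GPU-hour, in line with bulk cloud pricing,
# which is why the figure is considered plausible rather than extraordinary.
```

The point of the exercise is that the headline number is consistent with ordinary rental rates for this class of hardware; the savings come from total compute used, not from unusually cheap GPUs.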
Ranks higher in all benchmarks • 25%
Ranks higher in some benchmarks • 25%
Ranks lower in all benchmarks • 25%
Performance is equivalent • 25%

0 • 25%
1 to 2 • 25%
3 to 4 • 25%
5 or more • 25%

Top 1 • 25%
Top 2-5 • 25%
Top 6-10 • 25%
Below 10 • 25%

Natural Language Processing • 25%
Computer Vision • 25%
Healthcare Applications • 25%
Other • 25%