Will DeepSeek-R1 be banned from use in the U.S. by end of 2025?
Yes • 50%
No • 50%
U.S. government announcements or legal documents
DeepSeek's $5.5 Million Open-Source AI Model Rivals OpenAI's o1
Jan 25, 2025, 12:28 AM
Chinese AI startup DeepSeek has released DeepSeek-R1, an open-source large language model with 671 billion parameters that rivals OpenAI's o1 on reasoning tasks. The model excels at mathematics and coding, matching or outperforming o1 on several benchmarks, and currently ranks third on LM Arena.

DeepSeek claims that DeepSeek-R1 was developed in just two months at a cost of $5.5 million, using only 2,048 Nvidia H800 GPUs — a fraction of the resources deployed by U.S. tech giants. The lab, founded by Liang Wenfeng, who previously managed over 100 billion yuan at the investment firm High-Flyer, trained a precursor variant dubbed "R1-Zero" with large-scale reinforcement learning, an approach that challenges the notion that massive computing resources are necessary for advanced AI models.

The release has sparked discussion of China's rapid advance in AI despite U.S. export controls intended to limit access to high-end chips. DeepSeek plans to charge roughly 10% of what leading-edge models from OpenAI, Google, and Anthropic cost, making it about 20 times cheaper than OpenAI's o1. Industry figures, including Scale AI CEO Alexandr Wang, have noted how quickly China has caught up with the U.S. in AI capabilities, and DeepSeek's open-source, cost-efficient approach is now challenging Silicon Valley's dominance in the sector.
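The story's cost claim can be sanity-checked from its own numbers: 2,048 GPUs running for roughly two months implies a total GPU-hour budget, and dividing the claimed $5.5 million by that budget yields an implied rental rate per GPU-hour. The sketch below assumes near-full utilization and ~720 hours per month; both are illustrative assumptions, not figures from the story.

```python
# Back-of-the-envelope check of DeepSeek's claimed training cost.
# Figures marked "from the story" come from the article above;
# utilization and hours-per-month are hedged assumptions.
gpus = 2048                  # Nvidia H800 GPUs (from the story)
months = 2                   # claimed development time (from the story)
hours_per_month = 30 * 24    # ~720 hours/month (assumption)
gpu_hours = gpus * months * hours_per_month  # assumes near-full utilization

claimed_cost = 5.5e6         # USD (from the story)
implied_rate = claimed_cost / gpu_hours      # implied USD per GPU-hour

print(f"{gpu_hours:,} GPU-hours -> ${implied_rate:.2f}/GPU-hour")
# → 2,949,120 GPU-hours -> $1.86/GPU-hour
```

An implied rate of roughly $2 per H800 GPU-hour is in the ballpark of cloud rental pricing, which is why the $5.5 million figure, while striking, is at least internally consistent with the hardware and timeline the story reports.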