Will DeepSeek-R1 be integrated into a major tech product by end of 2025?
Yes • 50%
No • 50%
Press releases or official announcements from major tech companies
China's DeepSeek Unveils 671B-Parameter AI Model Rivaling OpenAI, Trained in Two Months
Jan 26, 2025, 03:28 AM
Chinese AI startup DeepSeek has caused a stir in Silicon Valley after releasing open-source AI models that rival those of leading U.S. companies like OpenAI and Google. DeepSeek's latest model, DeepSeek-R1, contains 671 billion parameters and matches or outperforms OpenAI's o1 model on certain benchmarks, despite being developed at a fraction of the cost and without access to cutting-edge chips restricted by U.S. export controls.

The company reportedly used 2,048 Nvidia H800 GPUs to train DeepSeek-R1 in just two months, spending only $5.5 million to $6 million, significantly less than the hundreds of millions reportedly spent by U.S. firms. DeepSeek's success highlights the potential for innovation through efficient engineering, including the use of reinforcement learning, and challenges the notion that massive computing resources are essential for advanced AI development.

Co-founded by Liang Wenfeng, the Hangzhou-based company's rapid progress has ignited concerns in the U.S. tech industry, with reports of companies like Meta analyzing DeepSeek's model to glean insights. DeepSeek's achievements underscore China's growing competitiveness in artificial intelligence despite U.S. export controls on advanced chips.
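The reported budget can be sanity-checked with simple arithmetic. This is a rough sketch, not the article's own calculation: the 60-day, full-utilization assumption is ours, and "two months" may not mean continuous use of all 2,048 GPUs.

```python
# Back-of-envelope check of DeepSeek-R1's reported training spend.
# Assumption (ours, not the article's): "two months" = ~60 days of
# continuous use of all 2,048 H800 GPUs.
NUM_GPUS = 2048        # Nvidia H800s, per the article
DAYS = 60              # assumed fully utilized
HOURS_PER_DAY = 24

gpu_hours = NUM_GPUS * DAYS * HOURS_PER_DAY
print(f"Total GPU-hours: {gpu_hours:,}")  # 2,949,120

for cost in (5.5e6, 6.0e6):  # reported $5.5M-$6M spend
    print(f"${cost / 1e6:.1f}M -> ${cost / gpu_hours:.2f} per GPU-hour")
```

Under these assumptions the reported spend works out to roughly $1.86-$2.03 per GPU-hour, on the order of commodity cloud rental rates for data-center GPUs, which makes the headline figure internally plausible even if it excludes research, staffing, and earlier experimental runs.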