What will be the primary application of DeepSeek-V3 in 2025?
Customer Service • 25%
Content Creation • 25%
Research and Development • 25%
Other • 25%
Resolution source: industry reports and product announcements
DeepSeek Launches DeepSeek-V3, Surpassing GPT-4o with 671B Parameters, 37B Activated, Trained on 14.8 Trillion Tokens for $5.5 Million
Dec 27, 2024, 02:31 AM
Chinese AI company DeepSeek has launched DeepSeek-V3, a new open-source language model that reportedly surpasses OpenAI's GPT-4o and Anthropic's Claude 3.5 Sonnet on several benchmarks. The model uses a Mixture-of-Experts architecture with 671 billion total parameters, of which 37 billion are activated for each token, and was trained on 14.8 trillion tokens at a reported cost of approximately $5.5 million. That figure is notable because it is a fraction of the budget typically required for models of this class: DeepSeek says training ran on a cluster of 2,048 GPUs, and it credits the model's efficiency and cost-effectiveness to that lean setup. The company has not sought outside funding and operates entirely on capital from the hedge fund that backs it. The release of DeepSeek-V3 is seen as a pivotal moment in the AI landscape, particularly amid ongoing geopolitical tensions and debates over AI regulation in the United States.
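The headline numbers describe a sparse Mixture-of-Experts design: the model stores 671 billion parameters, but a learned router sends each token through only a small subset of "expert" sub-networks, so roughly 37 billion parameters (about 5.5%) do work per token. The sketch below is a minimal, illustrative top-k MoE router in Python; it is not DeepSeek's implementation, and all sizes and names are made up for the example.

```python
# Minimal sketch of Mixture-of-Experts (MoE) top-k routing: many expert
# parameters are stored, but only top_k experts run for any given token.
# Illustrative only -- not DeepSeek's code; all sizes here are toy values.
import numpy as np

rng = np.random.default_rng(0)

d_model = 8      # hidden size (toy value)
n_experts = 16   # total experts -> bulk of the stored parameter count
top_k = 2        # experts actually run per token -> "activated" parameters

# Each expert is a small feed-forward weight matrix; collectively they
# dominate total parameters, but only top_k touch any single token.
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts))  # learned gating weights

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route one token vector through its top_k experts and mix the outputs."""
    logits = x @ router                      # score every expert for this token
    chosen = np.argsort(logits)[-top_k:]     # indices of the top_k experts
    weights = np.exp(logits[chosen])
    weights /= weights.sum()                 # softmax over the winners only
    # Only the chosen expert matrices are multiplied; the rest stay idle.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

token = rng.standard_normal(d_model)
out = moe_forward(token)

total_params = n_experts * d_model * d_model
active_params = top_k * d_model * d_model
print(f"stored expert params: {total_params}, activated per token: {active_params}")
```

Scaled up, this routing pattern is why compute cost per token tracks the activated 37B parameters rather than the full 671B, which is consistent with the unusually low training budget the company reports.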
Related questions (option sets only):
Coding Assistance • Natural Language Processing • Data Analysis • Other (25% each)
Coding • Data Analysis • Natural Language Processing • Other (25% each)
Healthcare • Finance • Technology • Other (25% each)
Technology • Healthcare • Finance • Education (25% each)
Amazon • Google • Microsoft • Other (25% each)
Cost efficiency • Competitive pressure • Regulatory challenges • Performance superiority (25% each)
MMLU-Pro • MMLU • MATH-500 • Other (25% each)
North America • Europe • Asia • Other (25% each)
Yes • No (50% each)
Alibaba • Tencent • Baidu • Other (25% each)