What will be the SWE-Bench Verified score of 'o3' by end of 2025?
Below 70% • 25%
70% to 75% • 25%
75% to 80% • 25%
Above 80% • 25%
Resolution source: Published results on OpenAI's official channels or peer-reviewed publications
OpenAI's 'o3' Models with Breakthrough AI Reasoning Surpass Human Performance on ARC-AGI
Dec 20, 2024, 06:10 PM
OpenAI has announced its latest AI reasoning models, 'o3' and 'o3-mini', marking a significant advance in artificial intelligence capabilities. The 'o3' model, successor to 'o1', skips the name 'o2' to avoid a potential trademark conflict with the telecommunications company O2.

Designed to produce more thoughtful, contextual responses by 'thinking' before answering via a 'private chain of thought', 'o3' represents a breakthrough in AI reasoning. OpenAI collaborated with ARC to test 'o3' on ARC-AGI, and testers believe the results mark a qualitative shift in AI capabilities compared to the prior limitations of large language models. On the ARC-AGI semi-private evaluation, 'o3' scored 87.5% in high-compute mode, surpassing estimated human performance of 85%, and 75.7% in low-compute mode.

'o3' also achieved state-of-the-art results on several other benchmarks. On the FrontierMath benchmark, it solved 25.2% of the hardest math problems, a substantial increase from the previous best of 2%. It scored 71.7% on SWE-Bench Verified, more than 20 percentage points higher than 'o1', and achieved a Codeforces rating of 2727, equivalent to the 175th-best human competitive coder.

The models are currently available to a limited group of outside researchers for safety testing. 'o3-mini' is expected to launch publicly by the end of January 2025, followed shortly thereafter by 'o3'.