OpenAI Unveils GPT-4o, New AI Model with Text, Vision, and Audio Capabilities, 2x Faster and 50% Cheaper
May 13, 2024, 07:36 PM
OpenAI has announced the launch of GPT-4o, a new flagship AI model that integrates text, vision, and audio capabilities in real time. The model is 2x faster and 50% cheaper than GPT-4 Turbo and offers 5x higher rate limits. GPT-4o is available to all users, including those on free plans, and supports multiple languages. The model can reason across voice, text, and vision, can detect emotions in voice inputs, and can respond to audio inputs in as little as 232 milliseconds. Additionally, OpenAI has introduced a desktop version of ChatGPT, enhancing accessibility and user experience. The new model and app are expected to set a new standard for generative and conversational AI, according to CTO Mira Murati.
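For readers who want to try the model programmatically, below is a minimal sketch of a text-only request using the OpenAI Python SDK and the published `gpt-4o` model identifier. The prompt text and the assumption that an `OPENAI_API_KEY` environment variable is set are illustrative only; consult OpenAI's documentation for current pricing, rate limits, and multimodal input formats.

```python
from openai import OpenAI

# The client reads the OPENAI_API_KEY environment variable by default.
client = OpenAI()

# A simple text request against GPT-4o; the same chat-completions endpoint
# also accepts image content as part of the user message.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize what 'multimodal' means in one sentence."},
    ],
)

print(response.choices[0].message.content)
```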
Markets
Yes • 50%
No • 50%
Resolution source: Press releases or usage statistics from OpenAI

No • 50%
Yes • 50%
Resolution source: Results from independent tech review platforms or academic papers

No • 50%
Yes • 50%
Resolution source: Public announcements from major software companies or app developers

Poorly adopted • 33%
Moderately adopted • 33%
Widely adopted • 33%
Resolution source: Surveys or reports from educational institutions

Leading the market • 33%
Underperforming against competitors • 33%
Competitive but not leading • 33%
Resolution source: Market analysis reports and AI industry comparisons

No new languages added • 25%
Added 5+ new languages • 25%
Added 3-4 new languages • 25%
Added 1-2 new languages • 25%
Resolution source: Official updates from OpenAI