Will Molmo surpass GPT-4V in API integrations by March 31, 2025?
Yes • 50%
No • 50%
Resolution source: publicly available data from API integration reports and announcements from the Allen Institute for AI and GPT-4V providers
Allen Institute for AI Releases State-of-the-Art Molmo Model with 72B Parameters, Surpassing GPT-4V
Sep 25, 2024, 01:50 PM
The Allen Institute for AI has released the Multimodal Open Language Model (Molmo), a state-of-the-art vision-language model. Molmo is available in multiple sizes, including 1B, 7B, and 72B parameters, and is designed to surpass existing models such as GPT-4V and Claude 3.5 Sonnet. The release comprises four checkpoints, including MolmoE-1B, a mixture-of-experts model with 1B active parameters and 7B total parameters, and Molmo-7B-O, the most open 7B model. Molmo benchmarks above GPT-4V and Flash, and it achieves human preference scores on par with the top API models. The models are trained with the PixMo dataset of high-quality image captions. Molmo is supported by platforms such as hyperbolic_labs and MistralAI.
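For readers tracking how the released checkpoints are actually integrated, below is a minimal sketch of loading one Molmo checkpoint through Hugging Face transformers and generating a caption. The repository ID "allenai/Molmo-7B-O-0924" and the processor/generation helper names follow the general Molmo model-card pattern and are assumptions here, not details taken from the story above; verify them against the published model card before use.

```python
# Minimal sketch (assumptions): loading a Molmo checkpoint from Hugging Face.
# Repo ID and helper names follow the Molmo model-card pattern; verify first.
import requests
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor, GenerationConfig

MODEL_ID = "allenai/Molmo-7B-O-0924"  # assumed checkpoint name

# Molmo ships custom modeling/processing code, hence trust_remote_code=True.
processor = AutoProcessor.from_pretrained(
    MODEL_ID, trust_remote_code=True, torch_dtype="auto", device_map="auto"
)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, trust_remote_code=True, torch_dtype="auto", device_map="auto"
)

# Prepare one image and one text prompt for captioning.
image = Image.open(
    requests.get("https://picsum.photos/id/237/536/354", stream=True).raw
)
inputs = processor.process(images=[image], text="Describe this image.")
inputs = {k: v.to(model.device).unsqueeze(0) for k, v in inputs.items()}

# Generate a caption; generate_from_batch is provided by Molmo's remote code.
output = model.generate_from_batch(
    inputs,
    GenerationConfig(max_new_tokens=200, stop_strings="<|endoftext|>"),
    tokenizer=processor.tokenizer,
)
caption = processor.tokenizer.decode(
    output[0, inputs["input_ids"].size(1):], skip_special_tokens=True
)
print(caption)
```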