How will the number of academic citations for Molmo compare to GPT-4V by June 30, 2025?
Molmo has fewer citations than GPT-4V • 25%
Molmo has 0-10% more citations than GPT-4V • 25%
Molmo has 10-20% more citations than GPT-4V • 25%
Molmo has more than 20% more citations than GPT-4V • 25%
Resolution source: academic citation databases such as Google Scholar or Semantic Scholar (see the query sketch below).
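A minimal sketch of how such a comparison could be computed, assuming the Semantic Scholar Graph API as the citation source and assuming the paper titles queried below are the ones the question intends (both are assumptions, not specified by the question itself):

```python
import requests

SEARCH_URL = "https://api.semanticscholar.org/graph/v1/paper/search"

def citation_count(query: str) -> int:
    """Return the citation count of the top Semantic Scholar search hit for `query`."""
    resp = requests.get(
        SEARCH_URL,
        params={"query": query, "fields": "title,citationCount", "limit": 1},
        timeout=30,
    )
    resp.raise_for_status()
    hits = resp.json().get("data", [])
    if not hits:
        raise ValueError(f"No Semantic Scholar result for {query!r}")
    return hits[0]["citationCount"]

def outcome_bucket(molmo: int, gpt4v: int) -> str:
    """Map the two raw counts onto the question's four outcome buckets."""
    if molmo < gpt4v:
        return "Molmo has fewer citations than GPT-4V"
    excess_pct = (molmo - gpt4v) / gpt4v * 100  # how many percent more citations Molmo has
    if excess_pct <= 10:
        return "Molmo has 0-10% more citations than GPT-4V"
    if excess_pct <= 20:
        return "Molmo has 10-20% more citations than GPT-4V"
    return "Molmo has more than 20% more citations than GPT-4V"

if __name__ == "__main__":
    # Hypothetical queries; a resolver could equally count citations on Google Scholar.
    molmo = citation_count("Molmo and PixMo: Open Weights and Open Data for State-of-the-Art Multimodal Models")
    gpt4v = citation_count("GPT-4V(ision) System Card")
    print(molmo, gpt4v, outcome_bucket(molmo, gpt4v))
```

The bucket boundaries are treated as inclusive on the upper end here; the question itself does not say how exact boundary values would resolve.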
Allen Institute for AI Releases State-of-the-Art Molmo Model with 72B Parameters, Surpassing GPT-4V
Sep 25, 2024, 01:50 PM
The Allen Institute for AI has released the Multimodal Open Language Model (Molmo), a state-of-the-art multimodal vision-language model. Molmo is available in multiple sizes, including 1B, 7B, and 72B parameters, and is designed to surpass existing models such as GPT-4V and Claude 3.5 Sonnet. The release includes four checkpoints, among them MolmoE-1B, a mixture-of-experts model with 1B active parameters and 7B total parameters, and Molmo-7B-O, the most open 7B model. Molmo benchmarks above GPT-4V and Gemini 1.5 Flash, and it achieves human-preference scores on par with top API models. Additionally, Molmo is trained on the PixMo dataset of high-quality image captions. The model is supported by platforms like hyperbolic_labs and MistralAI.
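For readers who want to try the released checkpoints, a minimal loading sketch with Hugging Face Transformers follows. The repository ID is an assumption (the checkpoints are assumed to be published under the allenai organization on Hugging Face), and prompt/image preprocessing is handled by custom code shipped with each checkpoint, so inference details should follow the model card rather than this sketch:

```python
# Minimal sketch: loading a Molmo checkpoint with Hugging Face Transformers.
from transformers import AutoModelForCausalLM, AutoProcessor

REPO_ID = "allenai/MolmoE-1B-0924"  # assumed ID for the 1B-active / 7B-total MoE checkpoint

# Molmo ships custom modeling/processing code, so trust_remote_code is required.
processor = AutoProcessor.from_pretrained(REPO_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    REPO_ID,
    trust_remote_code=True,
    torch_dtype="auto",   # use the dtype stored in the checkpoint
    device_map="auto",    # place weights on available GPU(s)/CPU
)
# Image + text inference uses the checkpoint's own processor methods; see its model card.
```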
Yes • 50%
No • 50%
Yes • 50%
No • 50%
Meta (Llama 3) • 25%
OpenAI (GPT-4o) • 25%
Anthropic (Claude 3.5 Sonnet) • 25%
Other • 25%
Better than GPT-4o • 25%
Equal to GPT-4o • 25%
Worse than GPT-4o • 25%
Inconclusive • 25%
Yes • 50%
No • 50%
Less than 50 • 25%
50-100 • 25%
101-200 • 25%
More than 200 • 25%
Yes • 50%
No • 50%
Top 1 • 25%
Top 5 • 25%
Top 10 • 25%
Below Top 10 • 25%
GPT-4V • 25%
Claude 3.5 Sonnet • 25%
Flash • 25%
Other • 25%
Less than 10% • 25%
10% to 20% • 25%
20% to 30% • 25%
More than 30% • 25%