Will Molmo outperform GPT-4V in a major AI benchmark by December 31, 2024?
Yes • 50%
No • 50%
Resolution source: results from recognized AI benchmarks such as GLUE, SuperGLUE, or similar.
Allen Institute for AI Releases State-of-the-Art Molmo Model with 72B Parameters, Surpassing GPT-4V
Sep 25, 2024, 01:50 PM
The Allen Institute for AI has released Molmo (Multimodal Open Language Model), a family of state-of-the-art open vision-language models. Molmo is available in multiple sizes, including 1B, 7B, and 72B parameters, and is designed to surpass existing models such as GPT-4V and Claude 3.5 Sonnet. The release comprises four checkpoints, among them MolmoE-1B, a mixture-of-experts model with 1B active parameters and 7B total parameters, and Molmo-7B-O, the most open of the 7B models. Molmo benchmarks above GPT-4V and Gemini 1.5 Flash, and it achieves human-preference scores on par with top API models. The models are trained on PixMo, a dataset of high-quality image captions. The model is supported by platforms such as Hyperbolic Labs and Mistral AI.
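For readers who want to try one of the released checkpoints, here is a minimal sketch of querying Molmo through Hugging Face transformers. The repository name "allenai/Molmo-7B-O-0924" and the Molmo-specific helpers (processor.process, model.generate_from_batch) are assumptions drawn from the model's published remote code, not details confirmed in this story.

```python
# Minimal sketch: captioning one image with a Molmo checkpoint via
# Hugging Face transformers. The repo name and the Molmo-specific calls
# (processor.process, generate_from_batch) are assumptions based on the
# model's remote code, loaded via trust_remote_code=True.
import requests
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor, GenerationConfig

repo = "allenai/Molmo-7B-O-0924"  # assumed checkpoint name
processor = AutoProcessor.from_pretrained(
    repo, trust_remote_code=True, torch_dtype="auto", device_map="auto"
)
model = AutoModelForCausalLM.from_pretrained(
    repo, trust_remote_code=True, torch_dtype="auto", device_map="auto"
)

# Bundle one image-text pair into a batch dict and move it to the model device.
image = Image.open(requests.get("https://picsum.photos/536/354", stream=True).raw)
inputs = processor.process(images=[image], text="Describe this image.")
inputs = {k: v.to(model.device).unsqueeze(0) for k, v in inputs.items()}

# Generate a caption and decode only the newly produced tokens.
output = model.generate_from_batch(
    inputs,
    GenerationConfig(max_new_tokens=200, stop_strings="<|endoftext|>"),
    tokenizer=processor.tokenizer,
)
generated = output[0, inputs["input_ids"].size(1):]
print(processor.tokenizer.decode(generated, skip_special_tokens=True))
```

The same pattern should apply to the other checkpoints by swapping the repository name; the 72B model will need multi-GPU or quantized inference, which device_map="auto" only partially addresses.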