VILA 1.5 Sets New Benchmark Record by Mid-2024?
Yes • 50%
No • 50%
Resolution source: official announcements or academic papers detailing benchmark results
NVIDIA and MIT Launch VILA 1.5: Top OSS Vision Model with State-of-the-Art Accuracy
May 4, 2024, 07:16 PM
NVIDIA, in collaboration with MIT, has introduced a new vision language model, VILA 1.5, which can reason across multiple images, learn in context, and understand videos. Described as the best open-source vision language model currently available, it has been fully open-sourced, including training code and data. VILA 1.5 achieves state-of-the-art accuracy on the MMMU benchmark and supports multi-image inputs. It is optimized for NVIDIA GPUs, scales across multiple GPUs, and ships AWQ-quantized variants, which make it the fastest such model on NVIDIA's Jetson Orin Nano. The advances behind VILA 1.5 are detailed in the team's CVPR'24 paper.
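For readers unfamiliar with the AWQ quantization mentioned above: the general idea is to store weights as low-bit integers with per-group scale factors, which is what lets a large model fit and run fast on a device like the Jetson Orin Nano. The sketch below is a toy NumPy illustration of 4-bit group quantization only; it is not NVIDIA's implementation, it omits AWQ's activation-aware channel scaling (the part that protects salient weights), and all function names are invented for illustration.

```python
import numpy as np

def quantize_4bit_groups(w: np.ndarray, group_size: int = 128):
    """Toy 4-bit group quantization (illustrative, not NVIDIA's AWQ code).

    Splits each row of `w` into groups, computes a per-group scale,
    and rounds weights to signed 4-bit integers in [-8, 7].
    """
    out_feats, in_feats = w.shape
    assert in_feats % group_size == 0
    groups = w.reshape(out_feats, -1, group_size)
    # Per-group scale so the largest magnitude maps onto the int4 range.
    scales = np.abs(groups).max(axis=-1, keepdims=True) / 7.0
    scales = np.where(scales == 0, 1.0, scales)  # avoid divide-by-zero
    q = np.clip(np.round(groups / scales), -8, 7).astype(np.int8)
    return q, scales

def dequantize(q: np.ndarray, scales: np.ndarray) -> np.ndarray:
    """Reconstruct approximate float weights from int4 codes and scales."""
    groups = q.astype(np.float32) * scales
    return groups.reshape(groups.shape[0], -1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(4, 256)).astype(np.float32)
    q, s = quantize_4bit_groups(w)
    w_hat = dequantize(q, s)
    print("mean abs reconstruction error:", np.abs(w - w_hat).mean())
```

The per-group scales keep the rounding error proportional to each group's magnitude; AWQ's refinement is to rescale channels by activation statistics before this step so the weights that matter most to the output lose the least precision.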