NVIDIA and MIT Launch VILA 1.5: Top OSS Vision Model with State-of-the-Art Accuracy
May 4, 2024, 07:16 PM
NVIDIA, in collaboration with MIT, has introduced VILA 1.5, a vision language model that can reason across multiple images, learn in context, and understand videos. Described as the best open-source vision language model currently available, it has been fully open-sourced, including its training code and data. VILA 1.5 achieves state-of-the-art accuracy on the MMMU benchmark and supports multi-image processing. It is optimized for NVIDIA GPUs, scales across multiple GPUs, and ships with AWQ-quantized variants touted as the fastest option on the Jetson Orin Nano. The advances behind VILA 1.5 are detailed in the team's CVPR'24 paper.
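Because the inference code is public, a VILA-style model can be queried much like other LLaVA-lineage vision language models. The sketch below is a rough illustration only, using the Hugging Face transformers LLaVA interface as a stand-in: the checkpoint ID, prompt template, and placeholder image URL are assumptions, not VILA 1.5's canonical inference path, which lives in the project's own repository.

# Hypothetical sketch: querying a LLaVA-lineage vision language model
# with a single image. The model ID below is a stand-in, NOT an official
# VILA 1.5 checkpoint; VILA's own repo provides its canonical scripts.
import requests
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"  # assumed stand-in checkpoint
model = LlavaForConditionalGeneration.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)

# Placeholder image URL; replace with a real image of interest.
image = Image.open(requests.get(
    "https://example.com/image.jpg", stream=True).raw)
prompt = "USER: <image>\nWhat is shown in this image? ASSISTANT:"

inputs = processor(text=prompt, images=image, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(output[0], skip_special_tokens=True))

Multi-image reasoning, one of VILA 1.5's headline capabilities, would follow the same pattern with several images interleaved in the prompt, though the exact prompt format is model-specific.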
Markets
• No • 50% / Yes • 50% (resolution source: press releases or announcements from major companies)
• Yes • 50% / No • 50% (resolution source: surveys or adoption reports from major academic institutions)
• No • 50% / Yes • 50% (resolution source: official announcements or academic papers detailing benchmark results)
• Healthcare • 25% / Security • 25% / Automotive • 25% / Retail • 25% (resolution source: industry reports or announcements from relevant sectors)
• Asia • 25% / Europe • 25% / North America • 25% / Rest of the World • 25% (resolution source: technology adoption surveys or regional tech news)
• GeForce RTX 3080 • 34% / Jetson Orin Nano • 33% / Tesla V100 • 33% (resolution source: performance evaluation reports from NVIDIA or independent tech reviews)