Will OpenVLA be adopted by top robotics labs by end of 2024?
Yes • 50%
No • 50%
Resolution source: Press releases or announcements from top robotics research labs (e.g., MIT, Stanford, Carnegie Mellon) or academic publications
OpenVLA Released: New 7B Parameter Open-Source VLA Model Outperforms RT-2-X and Octo in Robotics
Jun 14, 2024, 04:03 PM
OpenVLA, a new open-source vision-language-action (VLA) model, has been released. Built on a Llama 2 language backbone with visual features from DINOv2 and SigLIP, the 7-billion-parameter model is trained on 970,000 robot episodes from the Open X-Embodiment dataset. It outperforms existing models such as RT-2-X and Octo in zero-shot evaluations while being roughly seven times smaller than RT-2-X (7B vs. 55B parameters). The model supports efficient inference and fine-tuning on a single GPU via quantization and LoRA. OpenVLA's code, data, and weights are fully available online, including a PyTorch codebase and checkpoints on HuggingFace, making it a significant step toward accessible large-scale robot learning. The project is expected to drive advances in both academic and industry settings.
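Because the weights ship on HuggingFace, adoption by a lab can be as simple as loading the checkpoint with the standard transformers API. Below is a minimal inference sketch, assuming the model ID `openvla/openvla-7b` and the `predict_action()` helper described in the project's release materials; exact identifiers may differ from the shipped codebase.

```python
# Minimal sketch of single-GPU OpenVLA inference, assuming the HuggingFace
# model ID "openvla/openvla-7b" and the predict_action() helper from the
# project's release materials; exact names are assumptions, not confirmed here.
import torch
from PIL import Image
from transformers import AutoModelForVision2Seq, AutoProcessor

MODEL_ID = "openvla/openvla-7b"  # assumed HuggingFace model ID

# The release advertises single-GPU inference; bfloat16 keeps the 7B model
# within a single modern GPU's memory (quantization shrinks it further).
processor = AutoProcessor.from_pretrained(MODEL_ID, trust_remote_code=True)
vla = AutoModelForVision2Seq.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True,
).to("cuda:0")

# One camera observation plus a language instruction in, one robot action out.
image = Image.open("observation.png")  # placeholder path for the current frame
prompt = "In: What action should the robot take to pick up the cup?\nOut:"

inputs = processor(prompt, image).to("cuda:0", dtype=torch.bfloat16)
# unnorm_key selects the dataset statistics used to un-normalize the action.
action = vla.predict_action(**inputs, unnorm_key="bridge_orig", do_sample=False)
print(action)  # e.g., a 7-element vector: delta xyz, delta roll/pitch/yaw, gripper
```

For adaptation to a new robot, the same checkpoint can plausibly be fine-tuned on one GPU by loading it in 4-bit (bitsandbytes) and attaching LoRA adapters (peft), which is the quantization-plus-LoRA recipe the announcement highlights.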
Related markets:
Less than 10 contributors • 33% / 10-50 contributors • 33% / More than 50 contributors • 33%
Top 10 • 33% / Top 50 • 33% / Outside Top 50 • 33%