Which major robotics conference will cite OpenVLA first?
ICRA • 33%
IROS • 33%
NeurIPS • 33%
Resolution: conference proceedings or publications from major conferences (e.g., ICRA, IROS, NeurIPS)
OpenVLA Released: New 7B Parameter Open-Source VLA Model Outperforms RT-2-X and Octo in Robotics
Jun 14, 2024, 04:03 PM
OpenVLA, a new open-source vision-language-action (VLA) model, has been released. Built on Llama-2 and incorporating DINOv2 visual features, OpenVLA has 7 billion parameters and is trained on 970,000 robot episodes from the Open X-Embodiment dataset. It outperforms existing models like RT-2-X and Octo in zero-shot evaluations while being roughly 7x smaller than the 55B-parameter RT-2-X. The model supports efficient inference and fine-tuning on a single GPU via quantization and LoRA (low-rank adaptation). OpenVLA's code, data, and weights are fully available online, including a PyTorch codebase and models on HuggingFace, making it a significant step forward in accessible large-scale robot learning. The project is expected to drive advances in both academic and industry settings.
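Single-GPU quantized inference is the headline engineering claim, so a short sketch helps make it concrete. The snippet below shows what 4-bit inference might look like through the HuggingFace transformers API; the checkpoint ID openvla/openvla-7b, the predict_action() helper, and the unnorm_key value are assumptions drawn from the public release rather than details stated in this story.

# Hedged sketch: 4-bit quantized OpenVLA inference on one GPU.
# Assumes the HuggingFace checkpoint "openvla/openvla-7b" and its
# custom predict_action() helper (loaded via trust_remote_code).
import torch
from PIL import Image
from transformers import AutoModelForVision2Seq, AutoProcessor, BitsAndBytesConfig

MODEL_ID = "openvla/openvla-7b"  # assumed model ID on HuggingFace

processor = AutoProcessor.from_pretrained(MODEL_ID, trust_remote_code=True)
vla = AutoModelForVision2Seq.from_pretrained(
    MODEL_ID,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),  # fits a single GPU
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)

# One camera frame plus a language instruction in the model's prompt format.
image = Image.open("frame.png")
prompt = "In: What action should the robot take to pick up the red block?\nOut:"

inputs = processor(prompt, image).to(vla.device, dtype=torch.bfloat16)

# predict_action() decodes the generated action tokens into a continuous
# 7-DoF end-effector action; unnorm_key selects the dataset statistics used
# to un-normalize it (here the BridgeData subset of Open X-Embodiment).
action = vla.predict_action(**inputs, unnorm_key="bridge_orig", do_sample=False)
print(action)  # e.g. [dx, dy, dz, droll, dpitch, dyaw, gripper]

Swapping the quantization config for a LoRA adapter via the peft library is the analogous path for the single-GPU fine-tuning claim.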
Options for related questions on this story:
- NeurIPS • 25%, ICML • 25%, CVPR • 25%, None • 25%
- NeurIPS • 25%, ICML • 25%, AAAI • 25%, Other • 25%
- 1-2 manufacturers • 33%, 3-5 manufacturers • 33%, More than 5 manufacturers • 34%
- Less than 10 contributors • 33%, 10-50 contributors • 33%, More than 50 contributors • 33%