Will OpenVLA surpass 1000 stars on GitHub by end of 2024?
Yes • 50%
No • 50%
GitHub repository star count
OpenVLA Released: New 7B Parameter Open-Source VLA Model Outperforms RT-2-X and Octo in Robotics
Jun 14, 2024, 04:03 PM
OpenVLA, a new open-source vision-language-action (VLA) model, has been released. Built on Llama 2 and incorporating DINOv2 visual features, OpenVLA has 7 billion parameters and is trained on 970,000 robot episodes from the Open X-Embodiment dataset. It outperforms existing models such as RT-2-X and Octo in zero-shot evaluations while being nearly 10 times smaller than RT-2-X. The model is designed for efficient inference and fine-tuning on a single GPU via quantization and LoRA. OpenVLA's code, data, and weights are fully available online, including a PyTorch codebase and models on HuggingFace, making it a significant step toward accessible large-scale robotic learning. The project is expected to drive advancements in both academic and industry settings.
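The single-GPU fine-tuning path mentioned above (quantized loading plus LoRA adapters) can be sketched roughly as follows. This is a hedged illustration, not the project's official training script: the HuggingFace model id `openvla/openvla-7b` and the specific LoRA hyperparameters are assumptions, and the heavy model download is kept inside a function so the module itself stays lightweight.

```python
# Illustrative LoRA hyperparameters; values here are assumptions,
# not the settings used by the OpenVLA authors.
LORA_KWARGS = {
    "r": 32,                # adapter rank
    "lora_alpha": 16,       # scaling factor
    "lora_dropout": 0.05,
    "target_modules": "all-linear",
}


def load_openvla_for_lora_finetuning(model_id: str = "openvla/openvla-7b"):
    """Load OpenVLA 4-bit quantized and attach LoRA adapters.

    Requires a CUDA GPU plus the `transformers`, `peft`, and
    `bitsandbytes` packages; the model id is an assumption based on
    the HuggingFace release mentioned in the story.
    """
    import torch
    from transformers import (AutoModelForVision2Seq, AutoProcessor,
                              BitsAndBytesConfig)
    from peft import LoraConfig, get_peft_model

    # 4-bit quantization keeps the 7B backbone within a single GPU's memory.
    quant_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
    )

    processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
    model = AutoModelForVision2Seq.from_pretrained(
        model_id,
        quantization_config=quant_config,
        trust_remote_code=True,
    )

    # Wrap the quantized model with trainable low-rank adapters.
    model = get_peft_model(model, LoraConfig(**LORA_KWARGS))
    return model, processor
```

Only the small LoRA adapter weights are updated during fine-tuning, which is what makes training the 7B model feasible on one GPU.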