Sources
fly51fly [RO] — OpenVLA: An Open-Source Vision-Language-Action Model https://t.co/eCfjgsTeqB - OpenVLA is a 7B-parameter open-source vision-language-action model (VLA) trained on 970k robot episodes from the Open X-Embodiment dataset. - It sets a new state of the art for generalist robot… https://t.co/79S0R0eJx4
Rafael Rafailov @ NeurIPS — The OpenVLA project is finally out! Robotics has also been revolutionized by foundation models, but until now the field did not have open access to any high-quality ones to build on top of. I believe this project will open the door for academic and industry advances in robotics. https://t.co/RM68Ck8Svg
Chelsea Finn — Really excited to share OpenVLA! - state-of-the-art robotic foundation model - outperforms RT-2-X in our evals, despite being nearly 10x smaller - code + data + weights open-source Webpage: https://t.co/Y0XU6kX3hl https://t.co/wqQbgG5z8I