Figure 01, Figure's autonomous humanoid robot, is now completing real-world tasks end to end, combining autonomous navigation, force-based manipulation, a learned vision model for bin detection and prioritization, and reactive bin manipulation that is robust to pose variation. The team reports that the system generalizes to other pick/place tasks, marking a step toward broadly applicable, end-to-end autonomous robotic systems.
New video from Figure 01. Everything is autonomous:
- Autonomous navigation & force-based manipulation
- Learned vision model for bin detection & prioritization
- Reactive bin manipulation (robust to pose variation)
- Generalizable to other pick/place tasks
https://t.co/Vql34Sv6iU
Excited to share: Figure 01 completing real-world tasks. This is end-to-end autonomous. We have made advances in our autonomous navigation, learned perception models, manipulation robust to pose variation, & generalizable systems for future applications. https://t.co/Rm8472WXJv
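The claim repeated across these posts is reactive manipulation that stays "robust to pose variation." In practice that phrase usually means closing the loop: re-detecting and re-prioritizing the target on every control cycle instead of executing a single open-loop plan, so small shifts in the bin's pose get corrected as they happen. The sketch below illustrates only that general idea; `Bin`, `detect_bins`, the noise magnitudes, and the step controller are all illustrative assumptions, not Figure's actual stack.

```python
# A minimal, hypothetical sketch of reactive pick behavior: re-sense the
# target every cycle so pose drift between cycles does not break the grasp.
# All names and numbers here are illustrative, not Figure's implementation.

import math
import random
from dataclasses import dataclass

@dataclass
class Bin:
    score: float  # detection confidence / task priority
    x: float      # position estimate in the robot frame
    y: float

def detect_bins(true_bins):
    """Stand-in for a learned vision model: noisy pose estimates + scores."""
    return [Bin(b.score, b.x + random.gauss(0, 0.002),
                         b.y + random.gauss(0, 0.002)) for b in true_bins]

def reactive_pick(true_bins, start=(0.0, 0.0), step=0.02, tol=0.005):
    gx, gy = start
    for cycle in range(1000):
        # The world may move between cycles (pose variation).
        for b in true_bins:
            b.x += random.gauss(0, 0.001)
            b.y += random.gauss(0, 0.001)
        # Re-sense and re-prioritize every cycle: this is the "reactive" part.
        target = max(detect_bins(true_bins), key=lambda b: b.score)
        dx, dy = target.x - gx, target.y - gy
        dist = math.hypot(dx, dy)
        if dist < tol:
            return cycle, (gx, gy)   # close the gripper here
        move = min(step, dist)       # small corrective motion, then re-sense
        gx += move * dx / dist
        gy += move * dy / dist
    raise RuntimeError("did not converge")

cycles, grasp = reactive_pick([Bin(0.9, 0.30, 0.10), Bin(0.4, -0.20, 0.25)])
print(f"grasped after {cycles} cycles at {grasp}")
```

The design point the toy makes: because the controller moves a small step toward a freshly re-estimated pose each cycle, perception noise and target drift are absorbed continuously, whereas an open-loop plan computed once from the initial detection would miss once the bin shifts.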