Reconciling Reality through Simulation: A Real-to-Sim-to-Real Approach for Robust Manipulation. https://t.co/rQ6ENBTQH5
Very nice work on sim-to-real learning for robotic manipulation. Using simulation plus a modular learned robotics stack with decomposed skills, you can perform very complex tasks substantially better than prior work https://t.co/T3Lj8huoIx
Robots Pre-train Robots: Manipulation-Centric Robotic Representation from Large-Scale Robot Dataset. https://t.co/NT1TFtJZvB
Recent advancements in reinforcement learning (RL) have focused on enhancing robotic manipulation capabilities. Research highlights include bootstrapping RL with imitation learning for vision-based agile flight, hierarchical RL for swarm confrontation, and on-robot RL with goal-contrastive rewards. Other studies have explored discovering robotic interaction modes with discrete representation learning and enhancing safety in RL from human feedback via rectified policy optimization. A notable advance is precise and dexterous robotic manipulation via human-in-the-loop RL. Another line of work introduces manipulation-centric visual representations trained on large-scale robot datasets, bridging visual representation quality and downstream manipulation performance. Local policies have also been shown to enable zero-shot long-horizon manipulation, transferring skills from simulation to real-world tasks. ManipGen, a generalist agent for manipulation, lets robots perform long-horizon tasks entirely zero-shot from text input. Researchers have also addressed overcoming the sim-to-real gap and presented a "robots pre-train robots" approach that learns representations from large-scale robot datasets.
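The summary above mentions goal-contrastive rewards without detailing how such a reward is computed; the exact objectives used in the cited papers are not given here. As a rough illustration only, the sketch below shows a generic InfoNCE-style reward: the agent is rewarded when its current state embedding is closer (by cosine similarity) to the goal embedding than to a set of negative embeddings. The function name `goal_contrastive_reward` and the use of plain NumPy vectors are assumptions for the sketch, not any paper's actual implementation, which would typically operate on learned encoder outputs.

```python
import numpy as np

def goal_contrastive_reward(state_emb, goal_emb, negative_embs, temperature=0.1):
    """Generic InfoNCE-style reward sketch (illustrative, not a paper's method).

    Returns log p(goal | state) under a softmax over cosine similarities,
    so the reward increases as the state embedding approaches the goal
    embedding relative to the negatives.
    """
    def cos_sim(a, b):
        # Cosine similarity between two embedding vectors.
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

    # Positive term: similarity to the goal, sharpened by the temperature.
    pos = np.exp(cos_sim(state_emb, goal_emb) / temperature)
    # Negative terms: similarities to non-goal states (e.g. other rollouts).
    neg = sum(np.exp(cos_sim(state_emb, n) / temperature) for n in negative_embs)
    return float(np.log(pos / (pos + neg)))
```

A state embedded near the goal should receive a higher (less negative) reward than one far from it, which is the property an RL agent would exploit:

```python
goal = np.array([1.0, 0.0, 0.0])
negatives = [np.array([0.0, 1.0, 0.0]), np.array([0.0, 0.0, 1.0])]
r_near = goal_contrastive_reward(np.array([0.9, 0.1, 0.0]), goal, negatives)
r_far = goal_contrastive_reward(np.array([0.0, 0.9, 0.1]), goal, negatives)
assert r_near > r_far
```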