Researchers from the University of Washington and MIT have introduced several approaches to enhancing human-robot interaction. One notable project, GRACE, generates socially appropriate robot actions by combining large language models (LLMs) with human explanations. Another, Follow Instructions with Social and Embodied Reasoning (FISER), addresses the inherent ambiguity of human instructions by incorporating social and embodied reasoning into collaborative tasks. RoboGPT by Orangewood lets robot arms be programmed via natural language, using custom-trained vision models and fine-tuned LLMs to understand and respond to human intentions. The Text2Interaction project introduces a long-horizon, skill-based planner that integrates human preferences at the task, motion, and control levels using code-writing LLMs. Other advances include Semantically-Driven Disambiguation for Human-Robot Interaction, Robotic Environmental State Recognition with Pre-Trained Vision-Language Models, and SECURE: Semantics-aware Embodied Conversation under Unawareness for Lifelong Robot Learning.
SECURE: Semantics-aware Embodied Conversation under Unawareness for Lifelong Robot Learning. https://t.co/5Qq8pzOeTG
How can robots incorporate human preferences into their plans? Introducing Text2Interaction: a long-horizon, skill-based planner that meets human preferences at the task, motion, and control levels zero-shot using code-writing LLMs. Project site: https://t.co/ZpTa7sP7Zn https://t.co/JCdoZAvJ4q
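The core idea of preference-aware, skill-based planning can be sketched generically. This is an illustrative sketch only, not Text2Interaction's actual API: in the real system an LLM would write the preference-scoring code from a natural-language request, whereas here that generated code is hard-coded, and the skill names and candidate plans are made up.

```python
# Hypothetical sketch: rank candidate skill sequences with an
# LLM-written preference function. All names are illustrative.

# Stand-in for LLM output given the preference "hand objects over gently".
preference_code = """
def preference_score(plan):
    # Reward plans that end with a slow, compliant handover skill.
    score = 0.0
    if plan and plan[-1] == "handover_slow":
        score += 1.0
    score -= 0.1 * len(plan)  # mildly prefer shorter plans
    return score
"""

namespace = {}
exec(preference_code, namespace)  # load the generated scoring function
score = namespace["preference_score"]

# Candidate skill sequences from a task planner (illustrative skills).
candidates = [
    ["pick", "move", "handover_fast"],
    ["pick", "move", "handover_slow"],
    ["pick", "move", "reorient", "handover_slow"],
]

best = max(candidates, key=score)
print(best)  # the candidate plan best matching the stated preference
```

The key design point this sketch mirrors is that the preference lives in generated code rather than in a fixed reward model, so new preferences can be incorporated without retraining.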
RoboGPT by Orangewood allows robot arms to be programmed via natural language. It leverages custom-trained vision models and fine-tuned LLMs to interact with the bot. Any robot can be made human-centric if it understands and responds to human intention. https://t.co/FXDu5drr5i
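A natural-language-to-robot-command pipeline of this kind can be sketched as follows. This is not the actual RoboGPT/Orangewood API: in the real system a fine-tuned LLM would produce the structured command and a vision model would supply object labels; here a toy keyword parser and a fixed label list stand in for both, and all names are assumptions.

```python
# Hypothetical sketch: map a natural-language instruction plus detected
# object labels to a structured arm command. Illustrative only.
from dataclasses import dataclass

@dataclass
class ArmCommand:
    action: str  # e.g. "pick" or "place"
    target: str  # object label, assumed to come from a vision model

def parse_instruction(text: str, detected_objects: list[str]) -> ArmCommand:
    """Toy stand-in for the LLM step: extract an action and a target."""
    lowered = text.lower()
    action = "pick" if "pick" in lowered else "place"
    # Choose the first detected object that the instruction mentions.
    target = next((o for o in detected_objects if o in lowered), "unknown")
    return ArmCommand(action=action, target=target)

cmd = parse_instruction("Pick up the red cup", ["red cup", "bottle"])
print(cmd)  # ArmCommand(action='pick', target='red cup')
```

The structured-command layer is what makes the robot "human-centric" in practice: downstream motion planning only ever sees validated actions and grounded object labels, not raw text.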