Human-Robot Interaction

Due to the inherent stochasticity and diversity of human behavior, robots need learning and adaptation capabilities to interact successfully with humans across a wide range of scenarios. Predicting human intent and modeling human behavior are essential for seamless human-robot interaction. Fundamental research in human-robot interaction has potential applications wherever humans need assistance: product assembly in factories, care for the elderly at home, control of actuated prosthetics, shared control in repetitive teleoperation, and interaction with home robots and humanoid robot assistants. We have developed machine learning algorithms that enable robots to learn interactions from demonstrations. Moreover, our group has investigated how robots can support humans in practicing and executing movements. Applications of our methods have been demonstrated in tasks where humans and robots work in partnership, in teleoperation scenarios with shared autonomy between the human and the robot, and in interactions between humans and humanoid robot assistants, among other settings.

Interactive Learning

Interactively learning behavior trees from imperfect human demonstrations

In Interactive Task Learning (ITL), an agent learns a new task through natural interaction with a human instructor. Behavior Trees (BTs) offer a reactive, modular, and interpretable way of encoding task descriptions but have not yet been widely applied in robotic ITL settings. Most existing approaches that learn a BT from human demonstrations require the user to specify each action step by step or do not allow adapting a learned BT. We propose a new framework to learn a BT directly from only a few human task demonstrations. We automatically extract continuous action pre- and post-conditions from visual features and use a Backchaining approach to build a reactive BT. In a user study on how non-experts provide and vary demonstrations, we identify three common failure cases of a BT learned from potentially imperfect human demonstrations. We offer a way to resolve these failure cases interactively by refining the existing BT through user input: failure cases or unknown states are detected automatically during robot execution, and the initial BT is adjusted or extended accordingly. We evaluate our approach on a robotic trash disposal task with 20 human participants and demonstrate that our method learns reactive BTs from only a few human demonstrations and interactively resolves possible failure cases at runtime.

    •     Bib
      Scherf, L.; Schmidt, A.; Pal, S.; Koert, D. (2023). Interactively learning behavior trees from imperfect human demonstrations, Frontiers in Robotics and AI, 10.
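
The core of the Backchaining construction can be summarized as follows: starting from a goal condition, an action whose post-conditions achieve it is placed behind a fallback that first checks whether the condition already holds, and the action's pre-conditions are expanded recursively in the same way. The Python sketch below illustrates this idea under simplified assumptions; the node classes, action definitions, and the trash-disposal conditions are illustrative stand-ins, not the implementation from the paper.

# Minimal sketch of backchaining a Behavior Tree from action pre-/post-conditions.
# Node types, actions, and condition names below are illustrative, not the paper's API.

from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    preconditions: list   # condition names that must hold before execution
    postconditions: list  # condition names the action is expected to achieve

@dataclass
class Node:
    kind: str                   # "fallback", "sequence", "condition", or "action"
    name: str = ""
    children: list = field(default_factory=list)

def backchain(goal, actions):
    """Build a reactive subtree that achieves `goal`.

    Fallback: if the goal already holds, succeed; otherwise run an action
    that achieves it, after recursively ensuring its preconditions."""
    check = Node("condition", goal)
    for act in actions:
        if goal in act.postconditions:
            seq = Node("sequence", f"achieve_{goal}")
            seq.children = [backchain(pre, actions) for pre in act.preconditions]
            seq.children.append(Node("action", act.name))
            return Node("fallback", goal, [check, seq])
    return check  # no action achieves the goal; it can only be checked

# Hypothetical trash-disposal actions, loosely inspired by the task in the paper
actions = [
    Action("grasp_trash", ["trash_visible"], ["trash_in_hand"]),
    Action("move_over_bin", [], ["over_bin"]),
    Action("drop_in_bin", ["trash_in_hand", "over_bin"], ["trash_in_bin"]),
]
tree = backchain("trash_in_bin", actions)

Because each goal is guarded by a fallback that re-checks its condition on every tick, the resulting tree stays reactive to unexpected state changes rather than replaying a fixed action sequence.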

Teleoperation

Visual Tactile Sensor Based Force Estimation for Position-Force Teleoperation

Vision-based tactile sensors have gained extensive attention in the robotics community. These sensors are expected to extract contact information, i.e., haptic information, during in-hand manipulation, which makes them a natural fit for haptic feedback applications. In this paper, we propose a contact force estimation method using the vision-based tactile sensor DIGIT [1] and apply it to a position-force teleoperation architecture for force feedback. The force estimation is carried out by (1) building a depth map that measures the deformation of the DIGIT gel's surface and (2) applying a regression algorithm to the estimated depth data and ground-truth force data to obtain the depth-force relationship. The experiment is performed by constructing a grasping force feedback system with a haptic device as the leader robot and a parallel gripper as the follower robot, where the DIGIT sensor is attached to the gripper's fingertip to estimate the contact force. The preliminary results show the capability of this low-cost vision-based sensor for force feedback applications.

    •     Bib
      Zhu, Y.; Nazirjonov, S.; Jiang, B.; Colan, J.; Aoyama, T.; Hasegawa, Y.; Belousov, B.; Hansel, K.; Peters, J. (2023). Visual Tactile Sensor Based Force Estimation for Position-Force Teleoperation, IEEE International Conference on Cyborg and Bionic Systems (CBS), pp.49-52.
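
A central step in this pipeline is the regression from estimated gel deformation to contact force. The sketch below illustrates that step under simplified assumptions: depth maps are taken as already reconstructed and time-aligned with ground-truth readings from a force/torque sensor, and both the scalar features and the linear regressor are illustrative choices rather than the paper's exact method.

# Minimal sketch of the depth-to-force regression step, assuming aligned pairs of
# (estimated gel depth map, ground-truth force) are available.
# Feature choice and model are illustrative, not the paper's exact pipeline.

import numpy as np
from sklearn.linear_model import LinearRegression

def depth_features(depth_map):
    """Reduce an HxW gel-deformation depth map to simple scalar features."""
    d = np.asarray(depth_map, dtype=float)
    # total deformation, peak deformation, rough contact-area fraction
    return np.array([d.sum(), d.max(), (d > 0.05).mean()])

def fit_depth_force_model(depth_maps, forces):
    """Regress ground-truth forces (e.g., from an F/T sensor) onto depth features."""
    X = np.stack([depth_features(d) for d in depth_maps])
    y = np.asarray(forces, dtype=float)
    return LinearRegression().fit(X, y)

def estimate_force(model, depth_map):
    """Predict the contact force for a new depth map at teleoperation runtime."""
    return float(model.predict(depth_features(depth_map)[None, :])[0])

At runtime, the fitted model maps the current depth map to a force estimate that can be rendered on the leader-side haptic device.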

Learning Trajectory Distributions for Assisted Teleoperation and Path Planning

Several approaches have been proposed to assist humans in co-manipulation and teleoperation tasks given demonstrated trajectories. However, these approaches are not applicable when the demonstrations are suboptimal or when the generalization capabilities of the learned models cannot cope with changes in the environment. Nevertheless, in real co-manipulation and teleoperation tasks, the original demonstrations will often be suboptimal, and a learning system must be able to cope with new situations. This paper presents a reinforcement learning algorithm that can be applied to such problems. The proposed algorithm is initialized with a probability distribution of demonstrated trajectories and is based on the concept of relevance functions. We show how the relevance of trajectory parameters to the optimization objectives is connected to the Pearson correlation coefficient. First, we demonstrate the efficacy of our algorithm on the assisted teleoperation of an object in a static virtual environment. Afterward, we extend the algorithm to dynamic environments using Gaussian process regression. The full framework is applied to make a point particle and a 7-DoF robot arm autonomously adapt their movements to changes in the environment, as well as to assist the teleoperation of a 7-DoF robot arm in a dynamic environment.

    •     Bib
      Ewerton, M.; Arenz, O.; Maeda, G.; Koert, D.; Kolev, Z.; Takahashi, M.; Peters, J. (2019). Learning Trajectory Distributions for Assisted Teleoperation and Path Planning, Frontiers in Robotics and AI.
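
The relevance functions mentioned above can be read as follows: trajectory parameters whose variation correlates strongly with the optimization objective are the ones worth adapting. Below is a minimal sketch of such a relevance estimate based on the Pearson correlation; the sampling scheme, parameter dimensionality, and toy cost function are illustrative assumptions, not the exact formulation from the paper.

# Minimal sketch of a relevance function based on Pearson correlation:
# parameters whose variation correlates strongly with the cost are treated
# as relevant to the optimization objective. Names and the toy cost are
# illustrative, not the paper's exact formulation.

import numpy as np

def relevance(mean, cov, cost_fn, n_samples=200, seed=None):
    """Estimate per-parameter relevance as |Pearson correlation with the cost|."""
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(mean, cov, size=n_samples)  # trajectory parameters
    costs = np.array([cost_fn(w) for w in samples])
    rel = np.empty(mean.shape[0])
    for i in range(mean.shape[0]):
        rel[i] = abs(np.corrcoef(samples[:, i], costs)[0, 1])
    return rel

# Toy example: only the first parameter enters the cost,
# so it should receive the highest relevance score.
mean, cov = np.zeros(5), np.eye(5)
cost = lambda w: (w[0] - 1.0) ** 2
print(relevance(mean, cov, cost, seed=0))

Concentrating the search on the most relevant parameters keeps the optimization sample-efficient when the demonstrated distribution has to be adapted to a changed environment.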