Workshop on Human-Robot Collaboration: Towards Co-Adaptive Learning Through Semi-Autonomy and Shared Control

IROS 2016, 10 October, Daejeon, Korea

IROS 2016 website: http://www.iros2016.org/

Organisers

Luka Peternel (HRI2, ADVR, IIT, Italy), Guilherme Maeda (IAS, TU Darmstadt, Germany), Leonel Rozo (IIT, Italy), Serena Ivaldi (INRIA Nancy Grand-Est, France), Claudia Pérez D'Arpino (MIT, USA), Julie A. Shah (MIT, USA), Jan Babič (JSI, Slovenia), Tamim Asfour (KIT, Germany) and Erhan Oztop (Ozyegin University, Turkey)

Objectives

One of the major goals in robotics is to make robots operate in unstructured environments under unpredictable conditions. Of particular interest is the ability of robots to collaborate with human counterparts on complex tasks and to share tools in both industrial and daily-life settings. Such robots can help address the unprecedented growth of the elderly population by acting as caregivers or as body-part enhancement devices (e.g., prostheses and exoskeletons). Successful human-robot collaboration usually involves unforeseen interactions and requires a high degree of safety, mutual adaptation and shared control between the acting agents.

The concept of mutual adaptation requires new forms of high-level reasoning and control to enable optimal interaction and collaboration in conditions where human-robot joint actions are initially inadequate, e.g., a skill gap between the collaborating agents, gradual mutual improvement, or different speeds of individual improvement. While the human partner in human-robot collaboration is usually considered an expert, her/his behaviour may change over time as a result of self-improvement, gradual improvement of the robot, or mutual improvement due to the novelty of the task. Therefore, human adaptation and improvement during the collaboration cannot be neglected, and the robot should adapt to the varying human behaviour. In the opposite scenario, the human is inexperienced and the robot proficient in the given task. Here, the robot should adapt its behaviour to allow slower or easier execution in the initial stages, helping the human to improve.

To enable the above-mentioned behaviour, methods to assess the competence of each member are essential for deciding when to switch roles in collaborative tasks with semi-autonomous robots. Methods are needed to adapt joint human-robot skills to individual preferences and to transfer such skills to different scenarios and users. In addition, new methods are required to dynamically incorporate human decisions and instructions while interactively enriching the semi-autonomous robot's repertoire of collaborative skills.

Another challenge concerns the use of multiple sources of feedback and multiple motor commands. When the robot learns from the human and/or adapts to his/her behaviour online during the execution of a collaborative task, differences in behaviour between the two agents can produce undesirable or even unsafe conditions. This aspect is especially relevant when the controlled task and/or tools are shared between the human and the robot. To resolve this issue, a shared control framework is necessary to arbitrate and blend their roles, actions/intentions and cues from different modalities.
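As a minimal illustration of what such arbitration can look like, the sketch below implements the classic linear blending scheme from the shared control literature: a human command and an autonomous robot command are mixed through a single arbitration factor. This is a generic, hypothetical example (the function and variable names are our own, not a method presented at the workshop):

    import numpy as np

    def blend_commands(u_human, u_robot, confidence):
        """Linearly blend the human command with the robot's autonomous command.

        confidence in [0, 1] is the arbitration factor, e.g. derived from how
        certain the robot is about the inferred human intention: 0 leaves the
        human in full control, 1 gives the robot full autonomy.
        """
        alpha = float(np.clip(confidence, 0.0, 1.0))
        return (1.0 - alpha) * np.asarray(u_human) + alpha * np.asarray(u_robot)

    # Example: end-effector velocity commands (m/s) for a planar task
    u_h = [0.10, 0.00]  # human teleoperation input
    u_r = [0.06, 0.04]  # output of the robot's autonomous policy
    print(blend_commands(u_h, u_r, confidence=0.3))  # [0.088 0.012]

In practice, the arbitration factor itself is what co-adaptation would tune online, for instance as a function of the estimated skill gap between the two agents.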

The goal of the workshop is to (1) bring together top experts in the fields of human-robot collaboration and robot learning, (2) discuss the state of the art in shared control for human-in-the-loop robot learning, and (3) lay out promising future directions for co-adaptation in human-robot collaboration. In particular, the workshop intends to tackle the following questions:

  • Can a human and a robot learn each other's capabilities interactively? How can we guarantee that this co-adaptation converges to effective semi-autonomy and shared control?
  • How can a robot evaluate human proficiency and adapt its behaviour to accommodate different skill levels? How do humans do this in human-human collaboration scenarios?
  • High-level representations of human-robot team behaviour are useful for long-term planning but do not address uncertainty encoded in low-level collaborative skills. Is it possible to combine both forms of representation to generate robust collaborative plans?
  • How can the highly stochastic behaviour of the human be incorporated into the robot's movement generation? How can the robot detect/predict human intentions?
  • Imitation learning has proven extremely useful for single-agent skill learning. Is imitation learning still useful in human-robot interaction, and what are the alternatives?
  • Can the robot plan interventions by inference from long-term sequences of human actions while leveraging prior knowledge of the human's aptitude for executing these actions?
  • How can a robot learn to assess its own capability to execute a certain skill in order to ask for intervention/assistance?

Topics of Interest

  • Co-adaptation between human and robot
  • Learning and modelling human-robot interaction, human instructions and collaborative behaviour
  • Methods for self-improvement, interactive and incremental learning of collaborative tasks
  • Shared control between human and robot in online learning of coupled tasks
  • Intention recognition, skill level/gap evaluation and role allocation
  • Transferring and reusing skills from human-human tasks and from existing human-robot tasks to new scenarios
  • Discussion on current and future applications of semi-autonomy and shared autonomy in human-robot collaboration: industrial, rescue, surgical, rehabilitation, home-care, companion, etc.

List of speakers:

  • Yukie Nagai (Osaka University, Japan)
  • Eiichi Yoshida (AIST, Japan)
  • Anca Dragan (University of California, Berkeley, USA)
  • Sylvain Calinon (Idiap Research Institute/EPFL, Switzerland)
  • Arash Ajoudani (Italian Institute of Technology, Italy)
  • Heni Ben Amor (Arizona State University, USA)
  • Dongheui Lee (TU Munich, Germany)
  • Fumihide Tanaka (Tsukuba University, Japan)
  • Ross Knepper (Cornell University, USA)

Call for Contributions

We invite prospective participants to submit extended abstracts (up to 4 pages) to be presented as posters. Manuscripts should use the IEEE IROS two-column format. Please submit a PDF copy of your manuscript through our EasyChair platform before August 8. Each paper will receive a minimum of two reviews. Papers will be selected based on their originality, relevance to the workshop topics, contributions, technical clarity, and presentation. For accepted papers, at least one of the authors must register for the workshop. This workshop is an excellent opportunity to present and discuss your ongoing work and to get early feedback from the participants.

To submit a paper, please follow: https://easychair.org/conferences/?conf=iroshrc2016

Important dates:

  • Submission deadline for papers: 8 August, 2016
  • Notification of acceptance: 29 August, 2016

See the Contributions section below.

Special Issue: Autonomous Robots

Our workshop is associated with a special issue of Autonomous Robots. We invite participants to submit significant research results to the Autonomous Robots Special Issue on Learning in Human-Robot Cooperation.

Guest Editors: Heni Ben Amor, Leonel Rozo, Sylvain Calinon, Dongheui Lee and Anca Dragan.

Important dates:

  • Paper submission deadline: November 30, 2016
  • Notification to authors: January 15, 2017
  • Final manuscript due: February 1, 2017
  • Final decision: February 15, 2017

Please follow this link for details:

http://static.springer.com/sgw/documents/1576377/application/pdf/AURO+CFP+-+Human-Robot+Collaboration.pdf

Program

  • Location: Room 111
  • Date: 10 October, 2016
  • Each talk is allocated 30 minutes, followed by 10 minutes for questions and discussion.
Time | Speaker | Talk
08:30 - 08:40 | Luka Peternel, Guilherme Maeda | Welcome and introduction from the organisers
FIRST TALK SESSION
08:40 - 09:20 | Yukie Nagai | Learning with motionese: Human-robot interaction inspired by caregiver-infant interaction
09:20 - 10:00 | Eiichi Yoshida | Toward Unified Framework for Anthropomorphic Motion Simulation and Synthesis
10:00 - 10:50 | Coffee Break
POSTER SESSION
SECOND TALK SESSION
10:50 - 11:30 | Anca Dragan | Adapting to Human Objectives
11:30 - 12:10 | Sylvain Calinon | Challenges in extending learning from demonstration to collaboration and shared autonomy
12:10 - 12:50 | Arash Ajoudani | Human-robot impedance interfaces for adaptive co-manipulation
12:50 - 14:00 | Lunch Break
THIRD TALK SESSION
14:00 - 14:40 | Heni Ben Amor | Learning and Adaptation for Symbiotic Human-Robot Collaboration
14:40 - 15:20 | Dongheui Lee | Human Robot Cooperation based on Learning and Prediction of Human Motions
15:20 - 15:50 | Coffee Break
FOURTH TALK SESSION
15:50 - 16:30 | Fumihide Tanaka | Co-Adaptive Learning in the Context of Educational Robotics
16:30 - 17:10 | Ross Knepper | Towards High-Functioning Human-Robot Joint Action
17:10 - 18:00 | Round-table discussion

Invited Talks

Yukie Nagai (Osaka University, Japan): Learning with motionese: Human-robot interaction inspired by caregiver-infant interaction

Human caregivers significantly modify their actions (e.g., exaggerating movements and inserting more pauses between movements) when interacting with infants compared to when interacting with adults. It has been suggested that these modifications, called motionese, facilitate action learning in infants, while caregivers adapt the patterns and degree of motionese to infants' capabilities. This talk will present our human-robot interaction experiments investigating the role of motionese in robot action learning. Our robot was equipped with the abilities first to learn affordances of objects and then to imitate actions presented by human partners. Experimental results demonstrated that the robot could successfully reproduce the goal of actions by detecting action segments corresponding to the affordances. Motionese observed in the partners' actions assisted the robot in extracting the action segments. Importantly, the robot played a role in eliciting motionese: by exhibiting its acquired but still immature abilities, the robot encouraged the partners to adapt their action demonstrations to it. I will discuss the importance of mutual adaptation in human-robot interaction as well as in caregiver-infant interaction.

Eiichi Yoshida (AIST, Japan): Toward Unified Framework for Anthropomorphic Motion Simulation and Synthesis

In this research, we seek to build a unified framework for describing the dynamic behavior of anthropomorphic figures, including humans and humanoid robots, which can be used to evaluate the effects and quality of products that closely interact with humans. Human motions are measured, analysed and simulated using a digital human that models human shape, dynamics and musculo-skeletal structure in order to reproduce human behaviors. We integrate interactions with products and environments to perform ergonomic evaluation. Humanoid robots play a complementary role in this framework: their structure is close to that of humans, they can reproduce human motions, and, unlike humans, they can measure their internal states such as posture and force with their sensors. They not only serve as subjects to test products in place of humans for qualitative evaluation, but also validate simulation results by comparing them with experiments.

Anca Dragan (University of California, Berkeley, USA): Adapting to Human Objectives

The ability to give robots the right objective function is key to them being useful, and to us trusting them. Objective functions are unfortunately notoriously difficult to specify. Inverse Reinforcement Learning enables robots to learn objective functions by observing expert behavior, but it implicitly assumes that the robot is a passive observer and that the person is an uninterested expert. In this talk, I will share our recent work on making this learning process interactive, where humans teach rather than provide expert demonstrations, and robots act to actively gather information and to showcase what they've learned.

Sylvain Calinon (Idiap Research Institute/EPFL, Switzerland): Challenges in extending learning from demonstration to collaboration and shared autonomy

Human-centric robot applications require a tight integration of learning and control. This connection can be facilitated by shared probabilistic representations of the tasks and objectives to achieve. In human-robot collaboration, such a representation can take various forms: movements must be enriched with perception, force and impedance information to anticipate the users' behaviors and generate safe and natural gestures. I will present two applications in which learning from demonstration techniques need to be extended to assistive tasks and semi-autonomous teleoperation. In the DexROV project, a bimanual underwater robot is remotely controlled by a user wearing an exoskeleton with force feedback. The transmission delays are handled by treating the problem as classification and synthesis, with a shared encoding of a set of motion, synergy and impedance primitives employed as task-adaptive building blocks assembled in sequence and in parallel. In the STIFF-FLOP project, a continuum robot with variable stiffness is used in minimally invasive surgery to pass through narrow openings and manipulate soft organs. Since the surgeon cannot control all the degrees of freedom simultaneously, teleoperation is treated as a shared task in which a learning interface assists the surgeon with semi-autonomous behaviors.

Arash Ajoudani (Italian Institute of Technology, Italy): Human-robot impedance interfaces for adaptive co-manipulation

A key requirement for exploiting robots in collaborative applications is their ability to work alongside humans and adapt to their behavior and needs. The two major requirements towards this objective are the establishment of multimodal interfaces capable of online tracking of human states and intentions, and the adaptive control of well-performing robotic platforms that respond appropriately and accordingly. This talk introduces novel thinking and techniques for the control and coordination of manipulation in collaborative settings, through the development of multi-modal impedance interfaces that permit effective, dynamic and intuitive execution of complex tasks.

Heni Ben Amor (Arizona State University, USA): Learning and Adaptation for Symbiotic Human-Robot Collaboration

Co-robots that work alongside human partners are an important vision of artificial intelligence. For this vision to become reality, a theoretical foundation is needed that allows for the specification of collaborative interactions between humans and robots. In this talk, I will discuss recent advances in learning collaborative behavior from observed human-human interactions. First, I will introduce a probabilistic representation that allows a robot to anticipate, recognize, and respond to human behavior. In addition, I will introduce and discuss a geometric representation of spatial relationships between two interaction partners. Both approaches will be applied to collaborative assembly tasks involving proximal, physical interactions between humans and robots.

Dongheui Lee (TU Munich, Germany): Human Robot Cooperation based on Learning and Prediction of Human Motions

Human-robot collaboration poses many challenges, such as dynamic environments, intuitive interfaces, and adaptation to humans. To address adaptive human-robot cooperation, the presenter and her group have been working on a variety of fundamental sub-problems, including motion primitive representation, intuitive teaching methodologies, motor skill learning, human motion prediction, and assistive robot control based on prediction. The talk will discuss the learning and execution phases from the human perspective, followed by examples of prediction-based real-time bipedal walking imitation and a human-robot cooperative carrying task. Empirical evaluation illustrates the effectiveness and applicability of learning approaches for human-robot collaboration.

Fumihide Tanaka (Tsukuba University, Japan): Co-Adaptive Learning in the Context of Educational Robotics

Social robots are being used to support childhood education. An important issue is how robots and humans learn from and adapt to each other. In this talk, I will first present a robot that is designed to be taught by humans, and then discuss the co-adaptive learning aspect.

Ross Knepper (Cornell University, USA): Towards High-Functioning Human-Robot Joint Action

New human-safe robot hardware has enabled close-proximity collaboration between humans and robots. However, the traditional mode of interaction still persists, in which humans program robots, which then simply repeat a set pattern. A loftier goal would have humans and robots working side by side as peers, reacting to and anticipating one another's needs. To achieve this higher level of generality and versatility, robots must understand humans and anticipate their actions. This talk describes several robotic models of human behavior and shows how they can be inverted to increase the productivity of a team engaged in joint action, such as furniture assembly and pedestrian navigation.

Contributions


Acknowledgement

This workshop receives support from the European Community's Seventh Framework Programme (FP7-ICT-2013-10) under grant agreement 610878 (3rd Hand Robot Project) and grant agreement 600716 (CoDyCo Project), from an FP7 Marie Curie Career Integration Grant under grant agreement 321700 (Converge Project: Convergent Human Learning for Robot Skill Generation), and from the European Community's Horizon 2020 programme under grant agreement 687662 (SPEXOR Project).

This workshop is supported by the IEEE Robotics and Automation Society's Technical Committee on Cognitive Robotics (CORO), the Technical Committee on Human-Robot Interaction & Coordination, the Technical Committee on Humanoid Robotics, and the Technical Committee on Telerobotics.

Dissemination

LINK TO PHOTOS