Quick Facts

Organizers: Abdeslam Boularias, Brian Ziebart, Jan Peters
Conference: ICML 2011
Date: Saturday, July 2, 2011
Room: Grand-B
Location: Bellevue, Washington, USA
Website: http://www.robot-learning.de/Research/ICML2011

Abstract

From a very early age, humans learn many new skills by observing others. This paradigm, known as imitation learning or learning from demonstration, is an important topic in many research fields, including psychology, neuroscience, artificial intelligence, and robotics. From a machine learning point of view, imitation learning is a supervised learning approach to solving control and sequential decision-making problems. It is often preferred to fully autonomous learning because it avoids unnecessary and hazardous exploration. Consequently, most successful applications of machine learning in robotics incorporate some form of imitation learning. Within the imitation learning community, the relevant lines of research can be classified into the following sub-fields:

  1. Direct imitation, where the problem of generalizing from provided examples is typically reduced to a supervised learning problem, without making assumptions about the teacher's intent (a minimal code sketch follows this list).
  2. Inverse optimal control, where the teacher is assumed to be maximizing a certain reward function, and the goal of the learner is to find the simplest reward function that explains the teacher's behavior.
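
To make the supervised-learning view concrete, below is a minimal behavioral-cloning sketch in Python, illustrating sub-field 1. It is only a sketch under toy assumptions: the demonstration data are synthetic, and the linear least-squares policy is a stand-in for whatever regressor a real system would use.

    import numpy as np

    # Hypothetical demonstrations: each row pairs an observed state with
    # the action the teacher took in that state (synthetic toy data).
    rng = np.random.default_rng(0)
    states = rng.normal(size=(200, 4))                 # 200 states, 4 features
    teacher = np.array([[0.5], [-1.0], [0.2], [0.0]])  # teacher's (unknown) mapping
    actions = states @ teacher + 0.05 * rng.normal(size=(200, 1))

    # Direct imitation (behavioral cloning): treat the demonstrations as a
    # supervised regression problem and fit a policy from states to actions.
    weights, *_ = np.linalg.lstsq(states, actions, rcond=None)

    def policy(state):
        """Cloned policy: predict the teacher's action in a new state."""
        return state @ weights

    print("recovered policy weights:", weights.ravel())

Inverse optimal control (sub-field 2) would instead treat the same demonstrations as evidence about the teacher's reward function, and recover the simplest reward under which the demonstrated actions are near-optimal.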

Imitation learning lies at the intersection of many fields of machine learning. It corresponds to a complex, large-scale optimization problem that can be formalized in different ways, for example as a structured output prediction problem or as a semi-supervised learning problem. Imitation learning is also intimately related to reinforcement learning, online and active learning, multi-agent learning, and feature construction.

The workshop is supported by the PASCAL2 Thematic Programme on Machine Learning for Autonomous Skill Acquisition in Robotics and the IEEE RAS Technical Committee on Robot Learning.

Format

Our goal is to provide an overview of state-of-the-art techniques and applications of imitation learning, and to bring together researchers who have worked on imitation learning with researchers from other areas of statistical machine learning who aim to bring new statistical learning techniques to bear on the problem. The workshop will consist of presentations, posters, and panel discussions. Topics to be addressed include, but are not limited to:

  • Which modern statistical learning techniques can be used for scaling imitation learning to more complex tasks?
  • How can we use recent advances in graphical models to separate the model-learning step from the policy-learning step?
  • How can we use recent advances in deep learning for hierarchical imitation learning?
  • How do different approaches to imitation learning compare with one another? In particular, which advantages of inverse optimal control over behavioral cloning remain for small data sets?
  • Can the essential correspondence problem in imitation learning be phrased in terms of statistical similarity?
  • How can the intention of the expert be recognized and reasoned about by the learner?
  • Which probabilistic models can be used for representing a theory of mind?
  • What are the advantages of imitation learning over reinforcement learning?
  • How can we improve the policies acquired through imitation learning by trial and error?
  • How can we learn by imitation in a partially observable environment?
  • How can we learn by imitation given very few examples? Can semi-supervised imitation learning give us the edge?
  • Which transfer learning techniques can be used for solving the correspondence problem?
  • Can we learn good policies from a bad expert?
  • Are there general methods for automatically extracting useful features for imitation learning?
  • How can we combine demonstrations provided by different experts?
  • Which probabilistic models provide a compact and representative view of a given task?
  • What are the biological foundations of imitation learning?

Call for Posters

The field of imitation learning has grown dramatically over the past decade in many different ways: in terms of newly developed algorithms, successful new applications, and new scientific challenges for understanding both the computational and the neuronal aspects of imitation. Moreover, imitation is a complex learning problem related to many fields of machine learning, including supervised and semi-supervised learning, learning with structured data, transfer learning, reinforcement learning, multi-agent learning, and online learning. Imitation learning is also an essential technology for robotics, and one of the main goals of this workshop is to bring together roboticists and machine learning researchers.
The workshop will feature an outstanding set of invited speakers, including Drew Bagnell, Pieter Abbeel, Aude Billard, Emo Todorov, Sethu Vijayakumar, Umar Syed, Rajesh Rao, Shie Mannor, and Manuel Lopes.
For this workshop, we seek researchers wishing to present high-quality recent or ongoing work on any aspect of imitation learning. Both theoretical and applied contributions are solicited. An extended abstract suffices for a poster submission. We also welcome position papers, as well as papers discussing potential future research directions.

Confirmed Invited Speakers

Drew Bagnell (Carnegie Mellon University)
Pieter Abbeel (University of California, Berkeley)
Aude Billard (EPFL Lausanne)
Umar Syed (University of Pennsylvania)
Sethu Vijayakumar (University of Edinburgh)
Emo Todorov (University of Washington)
Rajesh Rao (University of Washington)
Manuel Lopes (INRIA)
Shie Mannor (Technion)

Submissions and Publication

Both extended abstracts and position/future-research papers will be reviewed by program committee members on the basis of relevance, significance, and clarity. Accepted contributions will be presented as posters, but particularly exciting work may be considered for talks. Submissions should be formatted according to the conference templates and submitted via email to imitation.learning.icml2011@googlemail.com.

Important Dates

April 29 - Submission deadline
May 20 - Notification of acceptance
July 2 - Workshop

Organizers

Abdeslam Boularias, Max Planck Institute for Biological Cybernetics (Primary contact)
Brian Ziebart, Robotics Institute of Carnegie Mellon University
Jan Peters, Max Planck Institute for Biological Cybernetics

Program

The workshop will take place in section Grand-B of the Grand Ballroom.
09.00 - 09.30am Invited talk: Shie Mannor (Technion). Topic: Imitating a fighter jet pilot. [pdf]
09.30 - 10.00am Invited talk: Drew Bagnell (Carnegie Mellon University).
10.00 - 10.30am Invited talk: Pieter Abbeel (University of California, Berkeley). Topic: Apprenticeship Learning for Robotic Control. [pdf]
10.30 - 11.00am Coffee break (and putting up posters)
11.00 - 11.30am Invited talk: Aude Billard (EPFL Lausanne). Topic: Density estimation for robust control of reaching, grasping and grasp adaptation.
11.30 - 12.00pm Poster session
12.00 - 01.30pm Lunch Break (and poster session)
01.30 - 02.00pm Invited talk: Umar Syed (University of Pennsylvania). Topic: Reductions from imitation learning to classification.
02.00 - 02.30pm Invited talk: Rajesh Rao (University of Washington). Topic: Probabilistic Goal-Based Imitation Learning.
02.30 - 03.00pm Invited talk: Emo Todorov (University of Washington). Topic: Model-predictive control of fast discontinuous dynamics. [pdf]
03.00 - 04.00pm Poster session (and coffee break)
04.00 - 04.30pm Invited talk: Sethu Vijayakumar (University of Edinburgh). Topic: Designing Variable Impedance Policies: Imitate or Optimize?
04.30 - 05.00pm Invited talk: Manuel Lopes (INRIA). Topic: Social Learning and Imitation for Teamwork. [pdf]
05.00 - 05.30pm Discussion

Accepted Papers

  • Brian Ziebart. Strategy Transfer Learning via Parametric Deviation Sets. [pdf]
  • Bradley Knox. Augmenting Reinforcement Learning with Human Feedback. [pdf]
  • Eduardo Morales. Teaching a Robot New Tasks through Imitation and Feedback. [pdf]
  • Heni Ben Amor. Towards Responsive Humanoids: Learning Interaction Models for Humanoid Robots. [pdf]
  • Daniel Grollman. Imitation and Reinforcement Learning from Failed Demonstrations. [pdf]
  • Maya Cakmak. Active Learning with Mixed Query Types in Learning from Demonstration. [pdf]
  • George Konidaris. CST: Constructing Skill Trees by Demonstration. [pdf]
  • Baris Akgun. Augmenting Kinesthetic Teaching with Keyframes. [pdf]
  • Benjamin Balaguer. A Hybrid Approach for Robots Learning Folding Tasks. [pdf]
  • Seyed Mohammad Khansari Zadeh. Autonomous Dynamical System Approach to Generate Human-Like Robot Motions with Non-Zero Final Velocity. [pdf]
  • Navid Aghasadeghi. Maximum Entropy Inverse Reinforcement Learning in Continuous State Spaces with Path Integrals. [pdf]
  • Kiyoshi Hoshino. Recognition of Human Hand Motions for Robot Learning by Observation. [pdf]
  • Alexander Grubb. Imitation Learning for Natural Language Direction Following. [pdf]
  • Amin Mousavi. Hierarchical Functional Concepts for Knowledge Transfer in Reinforcement Learning.
  • Farzad Amirjavid. Learning Spatiotemporal Activity Prediction Patterns in Smart Homes.

Participants

This workshop will bring together researchers from different areas of machine learning to explore how to approach new topics in imitation learning. Attendees are encouraged to participate actively by responding with questions and comments about the talks.

Location and More Information

The most up-to-date information about ICML 2011 workshops can be found on the ICML website.