Quick Facts

Presenter: Jan Peters
Conference: 33rd Annual German Conference on Artificial Intelligence (KI 2010)
Date: TBA, between September 21 and 24, 2010
Location: Karlsruhe Institute of Technology (KIT)


Autonomous robots that can assist humans in situations of daily life have been a long-standing vision of artificial intelligence, robotics, and the cognitive sciences. A first step towards this goal is to create robots that can learn tasks triggered by environmental context or higher-level instruction. Such a learning approach would have drastic consequences for European manufacturing, as the cost of robot programming has become the main bottleneck for keeping production in Europe.

However, machine learning has yet to live up to this promise. To date, few generic off-the-shelf learning approaches have helped to solve skill learning problems on high-dimensional manipulators or humanoid robots. Instead, robotics problems often pose hard but also very specific constraints. For example, in supervised learning problems such as model acquisition, data often arrives in a continuous, endless stream at 500 Hz and requires the learning method to update in less than 10 ms, a challenge that most of the best-performing regression methods cannot meet and that has instead required specially tailored approaches [3]. Similarly, reinforcement learning problems in robotics are high-dimensional and even single trials are expensive; as a result, few of the traditional value-function methods have found application in robot skill learning, whereas many novel policy search approaches have been developed [4].
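To make the streaming-update constraint concrete, the following sketch shows a recursive least squares learner whose per-sample update is constant time and thus, in principle, fast enough for a 500 Hz stream. It is only an illustration of the required update structure: the class name and the plain linear model are our own assumptions, not the local Gaussian process approach of [3].

```python
import numpy as np

class RecursiveLeastSquares:
    """Incremental linear model y ~ w^T x, updated one sample at a time.

    Illustrative sketch only: real robot model learning typically needs
    nonlinear, e.g. local or kernel-based, regression. The point here is
    the constant-time update that a 500 Hz data stream demands.
    """

    def __init__(self, dim, forgetting=1.0):
        self.w = np.zeros(dim)          # current weight estimate
        self.P = np.eye(dim) * 1e3      # inverse-covariance-like matrix
        self.lam = forgetting           # < 1.0 discounts old samples

    def update(self, x, y):
        # Standard RLS update: O(dim^2) per sample, independent of stream length.
        Px = self.P @ x
        gain = Px / (self.lam + x @ Px)
        self.w += gain * (y - self.w @ x)
        self.P = (self.P - np.outer(gain, Px)) / self.lam

    def predict(self, x):
        return self.w @ x
```

Because each `update` touches only fixed-size matrices, the cost per sample does not grow with the amount of data seen, which is exactly the property batch regression methods lack in this setting.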

The proposed tutorial has two goals: Firstly, we attempt to bring the important robot learning problems with all their difficult constraints to the attention of the artificial intelligence community. Secondly, we point out which machine learning approaches have been successful in robotics in the past and where there is a need for more novel, advanced learning methods. The intended audience includes graduate students and post-docs who are interested in research at the intersection of robotics and machine learning, as well as robotics practitioners who intend to apply machine learning approaches.

Tutorial Content

This tutorial will discuss both challenges and opportunities for machine learning researchers who are willing to enter the area of robot learning. First, we will discuss the generic problems of the domain of robotics, including the core technical challenges, tools that lead to efficient development processes in robot learning, key insights from classical robotics, and core points of view of the robotics community. Subsequently, we will focus on three core learning problems: (i) model learning, (ii) policy acquisition, and (iii) robot self-improvement. In the lecture on model learning, we will give an overview of supervised learning problems in robot control, covering both solved and unsolved problems. The lecture on policy acquisition will start with a review of imitation learning, with a strong focus on dynamic systems motor primitives. The lecture on robot self-improvement will highlight the successes in robot reinforcement learning using optimal control with learned models, value function approximation, or policy search approaches.
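As a taste of the policy acquisition lecture, here is a minimal sketch of a one-dimensional discrete dynamic systems motor primitive: a damped spring system pulls the state toward a goal while a learned forcing term, active only during the transient, shapes the movement. The function name and all parameter values are illustrative assumptions, not material from the tutorial itself.

```python
import numpy as np

def dmp_rollout(goal, y0, weights, tau=1.0, dt=0.001,
                alpha=25.0, beta=6.25, alpha_x=8.0):
    """Roll out a 1-D discrete dynamic movement primitive (sketch).

    A critically damped spring (alpha, beta) attracts y to `goal`; a
    forcing term built from Gaussian basis functions weighted by
    `weights` shapes the transient. The phase variable x decays to 0,
    so the forcing vanishes and convergence to the goal is guaranteed.
    """
    n_basis = len(weights)
    centers = np.exp(-alpha_x * np.linspace(0.0, 1.0, n_basis))
    widths = n_basis ** 1.5 / centers
    y, yd, x = float(y0), 0.0, 1.0
    traj = [y]
    for _ in range(int(tau / dt)):
        psi = np.exp(-widths * (x - centers) ** 2)
        # Forcing term: normalized basis mix, gated by phase and amplitude.
        f = (psi @ weights) * x * (goal - y0) / (psi.sum() + 1e-10)
        ydd = alpha * (beta * (goal - y) - yd) + f
        yd += ydd * dt / tau
        y += yd * dt / tau
        x += -alpha_x * x * dt / tau   # phase decays exponentially
        traj.append(y)
    return np.array(traj)
```

With all weights at zero the primitive reduces to the pure spring system and simply converges to the goal; imitation learning fits the weights so that the rollout reproduces a demonstrated trajectory.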

We discuss learning on three different levels of abstraction: learning for accurate control is needed to execute movements, learning of motor primitives is needed to acquire simple movements, and learning of the task-dependent "hyperparameters" of these motor primitives allows learning of complex tasks. Empirical evaluations on several robot systems illustrate the effectiveness and applicability of learning control on anthropomorphic robots.
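The hyperparameter level can be illustrated by simple episodic policy search: perturb the primitive's parameters, execute trials, and regress the observed reward differences onto the perturbations to obtain a gradient estimate, in the spirit of finite-difference policy gradient methods [4]. In the sketch below a cheap toy reward stands in for a real robot trial; the function name and all constants are our own assumptions.

```python
import numpy as np

def finite_difference_gradient(reward, theta, n_perturb=30, sigma=0.05, seed=0):
    """Estimate a policy gradient from perturbed episodic rollouts.

    `reward` is treated as a black-box episodic return R(theta); on a
    real robot each call would be one expensive trial. The gradient is
    recovered by least squares on the reward differences, as in
    finite-difference policy gradient methods.
    """
    rng = np.random.default_rng(seed)
    r0 = reward(theta)
    dtheta = sigma * rng.standard_normal((n_perturb, len(theta)))
    dr = np.array([reward(theta + d) - r0 for d in dtheta])
    # Solve dtheta @ grad ~ dr in the least-squares sense.
    grad, *_ = np.linalg.lstsq(dtheta, dr, rcond=None)
    return grad
```

For example, gradient ascent with this estimator on the toy reward R(theta) = -||theta - target||^2 drives theta toward the target after a few dozen "trials" per update, which is exactly the regime where trial-expensive robot learning lives.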

Background Reading

The following references will help as background reading for the participants of the tutorial:

  1. Peters, J. (2008). Machine Learning for Robotics. VDM-Verlag. ISBN 978-3-639-02110-3.
  2. Sigaud, O.; Peters, J. (2010). From Motor Learning to Interaction Learning in Robots. Studies in Computational Intelligence, vol. 264, Springer Verlag.
  3. Nguyen-Tuong, D.; Seeger, M.; Peters, J. (2009). Model Learning with Local Gaussian Process Regression. Advanced Robotics, 23(15), pp. 2015-2034.
  4. Peters, J.; Schaal, S. (2008). Reinforcement learning of motor skills with policy gradients. Neural Networks, 21(4), pp. 682-697.

More information is available on request.

Slides & Handouts


Biography of the Presenter

Jan Peters is a robot learning researcher who has published over 100 papers on topics at the intersection of machine learning and robotics in the last decade. He is currently a senior research scientist and heads the Robot Learning Lab (RoLL) of the Department of Empirical Inference (Schölkopf) at the Max Planck Institute for Biological Cybernetics (MPI) in Tübingen, Germany. He graduated from the University of Southern California (USC) with a Ph.D. in Computer Science. He holds two German M.S. degrees, in Informatics and in Electrical Engineering (from Hagen University and Munich University of Technology), and two M.S. degrees, in Computer Science and Mechanical Engineering, from USC. Jan Peters has been a visiting researcher at the Department of Robotics and Mechatronics at the German Aerospace Research Center (DLR) in Oberpfaffenhofen, Germany, at Siemens Advanced Engineering (SAE) in Singapore, at the National University of Singapore (NUS), and at the Department of Humanoid Robotics and Computational Neuroscience at the Advanced Telecommunication Research (ATR) Center in Kyoto, Japan. His research interests include robotics, nonlinear control, machine learning, reinforcement learning, and motor skill learning.

