Workshop: Novel Methods for Learning and Optimization of Control Policies and Trajectories for Robotics

Quick Facts

Organizers: Katja Mombaur, Gerhard Neumann, Martin Felis, Jan Peters
Conference: ICRA 2013
Location: Karlsruhe, Germany
Room: Mombert
Date and Time: Friday, May 10, 2013, 8:30 - 18:30
Website: http://www.robot-learning.de/Research/ICRA2013

Abstract

The current challenges defined for robots require them to automatically generate and control a wide range of motions in order to be more flexible and adaptive in uncertain and changing environments. However, anthropomorphic robots with many degrees of freedom are complex dynamical systems, and the generation and control of motions for such systems are very demanding tasks. Cost functions appear to be the most succinct way of describing desired behavior without over-specification, and they appear to underlie human movement generation in pointing/reaching movements as well as locomotion. Common cost functions in robotics include goal achievement, minimization of energy consumption, minimization of time, etc. A myriad of approaches have been suggested to obtain control policies and trajectories that are optimal with respect to such cost functions. However, to date, it remains an open question which algorithm is best suited for designing or learning optimal control policies and trajectories in robotics. The goal of this workshop is to gather researchers working in robot learning with researchers working in optimal control, in order to give an overview of the state of the art and to discuss how both fields could learn from each other and potentially join forces to work on improved motion generation and control methods for the robotics community. Some of the core topics are:

  1. State-of-the-art methods in model-based optimal control and model predictive control for robotics, as well as inverse optimal control
  2. State-of-the-art methods in robot learning, model learning, imitation learning, reinforcement learning, inverse reinforcement learning, etc.
  3. Shared open questions in both reinforcement learning and optimal control approaches
  4. How could methods from optimal control and machine learning be combined?

Format

The workshop will consist of presentations, posters, and panel discussions. Topics to be addressed include, but are not limited to:

  • How far can optimal control approaches based on analytical models go?
  • When using learned models, will the optimization biases be increased or reduced?
  • Can a mix of analytical and learned models help?
  • Can a full Bayesian treatment of model errors ensure high performance in general?
  • What are the advantages and disadvantages of model-free and model-based approaches?
  • How does real-time optimization / model predictive control relate to learning?
  • Is it easier to optimize a trajectory or a control policy?
  • Which can be represented with fewer parameters?
  • Is it easier to optimize a trajectory/control policy directly in parameter space or to first obtain a value function for subsequent backwards steps?
  • Is less data needed for learning a model (to be used in optimal control, or model-based reinforcement learning) or for directly learning an optimal control policy from data?
  • What applications in robotics are better suited for model-based, model-learning and model-free approaches?

All of these questions are of crucial importance for furthering the state of the art both in optimal control and in robot reinforcement learning, and the workshop aims to address them jointly with researchers from both communities.

Invited Speakers

Pieter Abbeel, UC Berkeley, USA
Tim Bretl, Univ. of Illinois, USA
Abderrahmane Kheddar, CNRS, France and AIST, Tsukuba, Japan
Petar Kormushev, IIT Genova, Italy
Richard Longman, Columbia University, New York, USA
Freek Stulp, ENSTA-ParisTech, Paris, France
Konrad Rawlik, University of Edinburgh, Scotland
Oskar von Stryk, TU Darmstadt, Germany

Program

Time | Speaker | Talk | Affiliation
8:30 | Gerhard Neumann, Katja Mombaur | Introduction by the organizers |
9:00 | Pieter Abbeel | A Geometry-Based Approach for Learning from Demonstrations for Manipulation | UC Berkeley, USA
9:30 | Oskar von Stryk | Optimal control of bio-inspired compliant robots: A trajectory optimization, control or design problem? | TU Darmstadt, Germany
10:00 - 10:30 | Coffee Break
10:30 | Abderrahmane Kheddar | Guaranteed constraint fulfillment in semi-infinite optimization planning in humanoid robots | AIST Tsukuba, Japan / CNRS, France
11:00 | Petar Kormushev | Reinforcement and imitation learning of robot motor skills | IIT, Italy
11:30 | Richard Longman | Difficulties Specifying Optimization Criteria and Then Making Hardware Perform Model Based Optimized Trajectories | Columbia University, USA
12:00 | Freek Stulp | Policy Improvement Methods: Between Black-Box Optimization and Episodic Reinforcement Learning | ENSTA-ParisTech, Paris, France
12:30 - 14:00 | Lunch Break
14:00 | Gerhard Neumann | Information-Theoretic Motor Skill Learning | TU Darmstadt
14:30 | Martin Felis | Optimal Control: Two Applications in Robotics and Biomechanics | Univ. Heidelberg
15:00 | Tim Bretl | When is inverse optimal control easy? | Univ. of Illinois, USA
15:30 - 16:00 | Coffee Break
16:00 | Konrad Rawlik | Probabilistic Inference and Stochastic Optimal Control | Univ. of Edinburgh, UK
16:30 - 18:20 | Poster Teasers and Poster Session

Call for Posters

The following submissions have been accepted for poster presentation.

Important Dates

March 15 - Deadline for poster submissions
March 23 - Notification of poster acceptance

Submissions

Extended abstracts (1 page) will be reviewed by the program committee members on the basis of relevance, significance, and clarity. Accepted contributions will be presented as posters, but particularly exciting work may be considered for talks. Submissions should be formatted according to the conference templates and submitted via email to neumann@ias.tu-darmstadt.de.

Organizers

Katja Mombaur, Universitaet Heidelberg
Gerhard Neumann, Technische Universitaet Darmstadt
Martin Felis, Universitaet Heidelberg
Jan Peters, Technische Universitaet Darmstadt and Max Planck Institute for Intelligent Systems

Location and More Information

The most up-to-date information about the workshop can be found on the ICRA 2013 webpage.