Workshop: Learning for Locomotion

Quick Facts

Organizers: Jan Peters, Russ Tedrake, Stefan Schaal
Conference: Robotics: Science and Systems 2005
Date: June 11, 2005
Room: To be announced
Location: MIT, Cambridge, MA, USA
Website: http://www.jan-peters.net/Research/LearningForLocomotion

Abstract
Over the last few decades, there has been an impressive amount of published work on legged locomotion, including bipedal walking, running, hopping, stand-ups, somersaults, and much more. However, despite all this progress, legged locomotion research has largely been driven by researchers using human insight and creativity to generate locomotion control algorithms. In order to improve the robustness, energy efficiency, and natural appearance of legged locomotion, there may be a significant advantage in using machine learning methods to synthesize new controllers and to avoid tedious parameter tuning. For instance, it could be advantageous to learn dynamics models, kinematic models, and impact models for model-based control techniques. Imitation learning could be employed to teach gait patterns, and reinforcement learning could help tune the parameters of control policies in order to improve performance with respect to given cost functions. In this context, we would like to bring together researchers from both the legged locomotion and machine learning communities in order to discuss which locomotion problems require learning, and to identify the machine learning methods that can be used to solve them.
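To make the last point concrete: one simple way to apply reinforcement learning to gait tuning is to treat a small set of controller parameters (e.g., amplitudes and frequencies of a pattern generator) as a vector to be optimized against a scalar cost measured from rollouts. The sketch below is purely illustrative and assumes a hypothetical rollout_cost function standing in for a trial on the robot or in simulation; it is not any particular participant's method.

    import numpy as np

    def rollout_cost(params):
        """Hypothetical stand-in: run one walking trial with the given
        controller parameters and return a scalar cost, e.g. energy per
        meter travelled plus a penalty for falling."""
        raise NotImplementedError

    def finite_difference_gradient_descent(params, step=0.05, lr=0.1, n_iters=100):
        """Tune gait parameters by estimating the cost gradient with
        finite differences and taking small downhill steps."""
        params = np.asarray(params, dtype=float)
        for _ in range(n_iters):
            base = rollout_cost(params)
            grad = np.zeros_like(params)
            for i in range(len(params)):
                perturbed = params.copy()
                perturbed[i] += step
                grad[i] = (rollout_cost(perturbed) - base) / step
            params = params - lr * grad  # descend the estimated cost gradient
        return params

In practice, noisy rollouts and the risk of falls push toward more sample-efficient and safer estimators, which is exactly the kind of trade-off the workshop is meant to discuss.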

Goal
In order to better understand the application of machine learning techniques to locomotion, our goal is to bring together researchers who represent many different approaches to biped locomotion control with their peers in machine learning for control. We hope to discuss future research directions for principled machine learning approaches to biped locomotion. The workshop will address topics such as:

  • Which unsolved biped locomotion problems can be solved using learning?
  • Can walking be broken down into components to which machine learning methods can be applied?
  • What models (e.g., forward, inverse, impact) would be desirable for controlling locomotion?
  • Can machine learning methods help solve the gait generation and foot-placement problems?
  • Can human learning of locomotion yield insights for both robotics and machine learning?
  • Which machine learning algorithms are suitable for online implementation on the robot, and which problems can be solved in simulation?
  • What cost functions should be used to describe "optimal" walking, and what experiments should be done to test our controllers?
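On the last question, one widely used quantitative yardstick for "optimal" walking (offered here only as an illustration, not as a prescription of the workshop) is the dimensionless cost of transport: the energy spent per unit weight per unit distance travelled. A minimal sketch with made-up numbers:

    def cost_of_transport(energy_joules, mass_kg, distance_m, g=9.81):
        """Dimensionless cost of transport: energy spent per unit weight per
        unit distance travelled; lower means more efficient locomotion."""
        return energy_joules / (mass_kg * g * distance_m)

    # Illustrative, made-up numbers: a 30 kg biped spending 600 J to walk 10 m.
    print(cost_of_transport(600.0, 30.0, 10.0))  # ~0.20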

Furthermore, we intend to kick off the Legged Robot Control Competition.

Format
The workshop runs for 3.5 hours in the morning and 2.5 hours in the afternoon. The invited participants will give short (20-25 minute) talks on particular aspects of learning for locomotion. After each talk there will be ample time for discussion, as well as the possibility for other participants to give short talks (less than 10 minutes) that complement the topic of the main talks. Participants may be allowed to give multiple short talks, and impromptu talks from members of the audience may also be considered.

Tentative Program

8:25-8:30am  Welcome
 Jan Peters (USC), Russ Tedrake (MIT), Stefan Schaal (USC)

Session 1: Control for Legged Robots

8:30-8:55am  Adaptive control of locomotion in articulated robots using central pattern generators
 Auke Ijspeert, EPFL
8:55-9:20am  Control of Dynamic Legged Robots
 Marc Raibert and Martin Buehler, Boston Dynamics
9:20-9:45am  The Design and Control of Bio-inspired Legged Robots
 Chandana Paul, Cornell University
9:45-10:10am  Biomechanics Teaches Us About Biped Control Design
 Hugh Herr, Massachusetts Institute of Technology
10:10-10:35am  Nonlinear stability tools for locomotion and learning
 Jean-Jacques Slotine, Massachusetts Institute of Technology

Coffee Break (10:35-10:45am)

Session 2: Funding Opportunities and Competitions

10:45-11:10am  DARPA's Learning Locomotion Program
 Eric Krotkov on behalf of Larry Jackel, DARPA Information Processing Technology Office
11:10-11:15am  Legged Robot Control Competition: Kick-Off
 Russ Tedrake, Massachusetts Institute of Technology

Short Break (11:15-11:20am)

Session 3: Systematic and Automatic Gait Generation

11:20-11:45am  Systematic creation of stable walking and running gaits in planar bipeds
 Eric Westervelt, Ohio State University
11:45am-12:10pm  Locomotion on the Vertical: Early Efforts in Gait Development for Robot Climbing
 Alfred Rizzi, Carnegie Mellon University
12:10-12:35pm  Learning and Adaptation of Biped Locomotion with Dynamical Movement Primitives
 Jun Nakanishi, Advanced Telecommunication Research (ATR) Institute

Lunch Break (12:35-2:05pm)

Session 4: Planning and Optimization for Locomotion

2:05-2:25pm  Optimizing in Low-Dimensional, Behavior-Specific Spaces
 Jessica Hodgins, Carnegie Mellon University
2:25-2:45pm  Co-evolutionary learning for robotic locomotion
 Hod Lipson, Cornell University

Session 5: Locomotion from Reinforcement Learning

2:45-3:10pm  Quadruped locomotion via reinforcement learning
 Andrew Ng, Stanford University

Coffee Break (3:10-3:20pm)

Session 5 continued: Locomotion from Reinforcement Learning

3:20-3:45pm  Robust reasoning for robot planning
 Geoff Gordon, Carnegie Mellon University
3:45-4:10pm  Model-based Reinforcement Learning
 Christopher G. Atkeson, Carnegie Mellon University
4:10-4:35pm  Model-based and Model-free reinforcement learning methods for biped walking
 Jun Morimoto, Advanced Telecommunication Research (ATR) Institute
4:35-5:00pm  Exploiting passive dynamics to achieve fast online policy learning
 Russ Tedrake, Massachusetts Institute of Technology

---

Tentative Program (Alternative Schedule)

8:45-8:50am  Welcome
 Jan Peters (USC), Russ Tedrake (MIT), Stefan Schaal (USC)

Session 1: Systematic and Automatic Gait Generation

8:50-9:10am  Systematic creation of stable walking and running gaits in planar bipeds
 Eric Westervelt, Ohio State University
9:10-9:30am  Locomotion on the Vertical: Early Efforts in Gait Development for Robot Climbing
 Alfred Rizzi, Carnegie Mellon University
9:30-9:50am  Learning and Adaptation of Biped Locomotion with Dynamical Movement Primitives
 Jun Nakanishi, Advanced Telecommunication Research (ATR) Institute

Short Break (9:50-9:55am)

Session 2: Planning and Optimization for Locomotion

9:55-10:15am  Optimizing in Low-Dimensional, Behavior-Specific Spaces
 Jessica Hodgins, Carnegie Mellon University
10:15-10:35am  Co-evolutionary learning for robotic locomotion
 Hod Lipson, Cornell University

Coffee Break (10:35-10:45am)

Session 3: Funding Opportunities and Competitions

10:45-11:05am  DARPA's Learning Locomotion Program
 Eric Krotkov on behalf of Larry Jackel, DARPA Information Processing Technology Office
11:05-11:10am  Legged Robot Control Competition: Kick-Off
 Russ Tedrake, Massachusetts Institute of Technology

Short Break (11:10-11:15am)

Session 4a: Control for Legged Robots - Part I

11:15-11:40am  Control of Dynamic Legged Robots
 Marc Raibert and Martin Buehler, Boston Dynamics
11:40am-12:00pm  Nonlinear stability tools for locomotion and learning
 Jean-Jacques Slotine, Massachusetts Institute of Technology

Lunch Break (12:00-1:30pm)

Session 4b: Control for Legged Robots - Part II

1:30-1:50pm  Biomechanics Teaches Us About Biped Control Design
 Hugh Herr, Massachusetts Institute of Technology
1:50-2:10pm  The Design and Control of Bio-inspired Legged Robots
 Chandana Paul, Cornell University
2:10-2:30pm  Adaptive control of locomotion in articulated robots using central pattern generators
 Auke Ijspeert, EPFL

Short Break (2:30-2:35pm)

Session 5a: Locomotion from Reinforcement Learning

2:35-2:55pm  Quadruped locomotion via reinforcement learning
 Andrew Ng, Stanford University
2:55-3:15pm  Model-based and Model-free reinforcement learning methods for biped walking
 Jun Morimoto, Advanced Telecommunication Research (ATR) Institute

Coffee Break (3:15-3:25pm)

Session 5b: Locomotion from Reinforcement Learning

3:25-3:45pm  Robust reasoning for robot planning
 Geoff Gordon, Carnegie Mellon University
3:45-4:05pm  Model-based Reinforcement Learning
 Christopher G. Atkeson, Carnegie Mellon University
4:05-4:25pm  Exploiting passive dynamics to achieve fast online policy learning
 Russ Tedrake, Massachusetts Institute of Technology

Participants
This workshop will bring together researchers from both the robotics and machine learning communities in order to explore how to approach learning legged locomotion in a principled way. Workshop participants (including the audience) are encouraged to take an active part by asking questions, commenting on the talks, and giving stand-up talks. Please contact the organizers if you would like to reserve some time in advance for expressing your view on a particular topic (a short short-talk!).

Organizers
The workshop is organized by Jan Peters, Russ Tedrake, and Stefan Schaal, from the Departments of Computer Science and Neuroscience at the University of Southern California, Los Angeles, CA, USA, and from the Department of Brain and Cognitive Sciences at the Massachusetts Institute of Technology, Cambridge, MA, USA.

Location and More Information
The most up-to-date information about Robotics: Science and Systems 2005 can be found on the Robotics 2005 website.