|Organizers:||Gerhard Neumann, George Konidaris, Freek Stulp, Jan Peters|
|Date:||June 28th, 2013, 9:00|
Learning robot control policies in complex real-world environments is a major challenge for machine learning due to the inherent high dimensionality, partial observability, and the high cost of data generation. Treating robot learning as a monolithic machine learning problem and employing off-the-shelf approaches is unrealistic at best. However, the physical world can yield important insights into the inherent structure of control policies, state or action spaces, and reward functions. For example, many robot motor tasks are also hierarchically structured decision tasks. A tennis-playing robot has to combine different striking movements sequentially. During locomotion, at least three behaviors are simultaneously active, as a robot has to combine gait generation with foot placement and balance control. Initial domain-driven skill learning approaches have already yielded impressive successes by incorporating such structural insights into the learning process. Hence, a promising route to more scalable policy learning includes the automatic exploitation of the environment's structure, resulting in new structured learning approaches for robot control.
Structured and hierarchical learning has been an important trend in machine learning in recent years. In robotics, researchers have often naturally arrived at well-structured hierarchical policies based on discrete-continuous partitions (e.g., local movement generators combined via prioritized operational space control), with nested control loops running at several different speeds (i.e., fast control loops for smooth and accurate movement execution, slower loops for model-predictive planning). Furthermore, evidence from the cognitive sciences indicates that humans also heavily exploit such structures and hierarchies. Although such structures have been found in human motor control, are favored in robot control, and exist in machine learning, the connections between these fields have not been well explored. Transferring insights from structured prediction methods, which exploit the inherent correlations in the data, to hierarchical robot skill learning may be a crucial step. General approaches for bringing structured policies, states, actions, and rewards into robot reinforcement learning may well be the key to tackling many challenges of real-world robot environments, and an important step towards the vision of intelligent autonomous robots that can learn rich and versatile sets of motor skills. This workshop aims to reveal how complex motor skills typically exhibit structures that can be exploited for learning reward functions and for finding structure in the state or action space.
In order to make progress towards the goal of structured learning for robot control, this workshop is aimed at researchers from different machine learning areas (such as reinforcement learning and structured prediction), robotics, and related disciplines (e.g., control engineering and the cognitive sciences).
We particularly want to focus on the following important topics for structured robot learning, which overlap substantially across several of these fields:
These challenges are important steps to building intelligent autonomous robots and may potentially motivate new research topics in the related research fields.
|9:00||Gerhard Neumann / Freek Stulp||Introduction by the organizers|
|9:30||Stefan Schaal||From Motor Primitives to Associative Skill Memories||USC / MPI for Intelligent Systems|
|10:30 - 11:00||Coffee-break and Poster Session 1|
|11:00||Yiannis Demiris||Hierarchies in Action Recognition and Assistance Generation||Imperial College London|
|11:40||Marc Toussaint||Relational Reinforcement Learning -- and the "subproblems" arising from it||University of Stuttgart|
|12:20 - 13:40||Lunch|
|13:40||Gerhard Neumann||Learning Modular Control Policies||TU Darmstadt|
|14:10||Louis Sentis||Data Driven Locomotion Behaviors||University of Texas|
|14:50||Dmitry Berenson||Learning and Motion Planning for Practical Manipulation Tasks||Worcester Polytechnic Institute|
|15:30 - 16:00||Coffee break and Poster Session 2|
|16:00||Matthew Botvinick||Hierarchical Structure in Human Behavior||Princeton University|
|16:40||Andrea d'Avella||Modularity for Human Motor Control and Learning||IRCCS Fondazione Santa Lucia|
|17:20 - 18:00||Wrap-up and Poster Session 3|
The aim of this workshop is to bring together researchers who are interested in structured representations, reinforcement learning, hierarchical learning methods, and control architectures. Among these general topics, we will focus on the following questions:
Extended abstracts (1 page) will be reviewed by the program committee members on the basis of relevance, significance, and clarity. Accepted contributions will be presented as posters, but particularly exciting work may be considered for talks. Submissions should be formatted according to the conference templates and submitted via email to firstname.lastname@example.org.
The most up-to-date information about the workshop can be found on the RSS 2013 webpage.