Workshop: Towards Structured Learning for Control, Decision Making and Robotics

Quick Facts

Organizers: Gerhard Neumann, ?????, Jan Peters
Conference: NIPS 2012
Location: Lake Tahoe, Nevada, United States


Learning control or decision policies for complex real-world problems, as encountered by robots, web agents or in artificial games, is a challenge for machine learning. Despite considerable effort, general reinforcement learning approaches have not yet scaled to real-world domains due to their inherent high dimensionality, partial observability and the high cost of data generation. However, domain-driven reinforcement learning approaches have yielded many recent successes, e.g., in robotics, indicating that domain structure enables scaling to real-world problems -- especially as most real-world domains are themselves highly structured environments. Hence, the most promising route to more scalable policy learning must include the automatic exploitation of the environment's structure, resulting in new structured learning approaches for control, decision making and robotics.

In robot movement generation, the physical world dictates the inherent structure of control policies, state and action spaces, as well as reward functions. As a result, researchers have often arrived naturally at well-structured hierarchical policies based on discrete-continuous partitions (e.g., learning both elementary/primitive actions and their supervisory layer, or prioritized operational space control), with nested control loops running at several different speeds (i.e., fast control loops for smooth and accurate movement execution, slower loops for long-term task achievement). While such structures are favored in robot reinforcement learning, they are frequently neglected by the general reinforcement learning community. Instead, hierarchical reinforcement learning has often been confined to discrete toy domains due to the lack of a principled approach for leveraging structure and prior knowledge. Transferring insights from structured prediction methods, which exploit the inherent correlations in the data, may be a crucial step. General approaches for bringing structured policies, states, actions and rewards into reinforcement learning may well be the key to tackling many challenges of real-world environments and an important step towards the vision of intelligent autonomous agents that can learn a rich and versatile set of behaviors. The workshop should reveal how complex behavior typically exhibits correlations that can be exploited for decision making, for learning reward functions, or for finding structure in the state or action space.
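
As an illustration of such a discrete-continuous partition, the following minimal Python sketch shows a slow supervisory layer selecting among continuous elementary controllers while a fast inner loop executes the chosen primitive. All class names, dynamics and gains are illustrative assumptions, not a method from the literature:

```python
import numpy as np

class PDPrimitive:
    """Elementary action: a PD controller driving the state to a fixed goal."""
    def __init__(self, goal, kp=4.0, kd=1.0):
        self.goal = np.asarray(goal, dtype=float)
        self.kp, self.kd = kp, kd

    def action(self, x, x_dot):
        return self.kp * (self.goal - x) - self.kd * x_dot

class SupervisoryPolicy:
    """Discrete layer: picks the primitive whose goal is nearest the target."""
    def __init__(self, primitives):
        self.primitives = primitives

    def select(self, x, target):
        dists = [np.linalg.norm(p.goal - target) for p in self.primitives]
        return self.primitives[int(np.argmin(dists))]

def run(x0, target, supervisor, slow_dt=0.1, fast_dt=0.01, steps=50):
    """Nested loops: supervisory decisions every slow_dt, control every fast_dt."""
    x = np.asarray(x0, dtype=float).copy()
    x_dot = np.zeros_like(x)
    ratio = int(round(slow_dt / fast_dt))
    for k in range(steps):
        if k % ratio == 0:                      # slow supervisory loop
            primitive = supervisor.select(x, target)
        u = primitive.action(x, x_dot)          # fast control loop
        x_dot += fast_dt * u                    # unit-mass point dynamics
        x += fast_dt * x_dot
    return x
```

The separation of time scales mirrors the structure described above: the discrete selection problem is small, while the continuous controllers handle smooth execution.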

In order to make progress towards the goal of structured learning for control, decision making and robotics, this workshop aims to bring together researchers from different machine learning areas (such as reinforcement learning and structured prediction), from text/speech processing (where supervised learning methods make extensive use of the inherent structure of the data), and from related disciplines (e.g., control engineering, robotics, and the cognitive sciences).

We particularly want to focus on the following important topics for structured decision making, which overlap substantially across several of these fields:

  • Efficient representations and learning methods for hierarchical policies
  • Learning on several layers of hierarchy
  • Structured representations for motor control and planning
  • Skill extraction and skill transfer
  • Sequencing and composition of behaviors
  • Hierarchical Bayesian Models for decision making and efficient transfer learning
  • Low-dimensional Manifolds as structured representation for decision making
  • Exploiting correlations in the decision making process
  • Prioritized control policies in a multi-task reinforcement learning setup

These challenges are important steps towards building intelligent autonomous systems and may motivate new research topics in the related research fields.


The aim of this workshop is to bring together researchers who are interested in structured representations, reinforcement learning, hierarchical learning methods and control architectures. Within these general topics we will focus on the following questions:

Structured representations:

  • How to efficiently use graphical models such as Markov random fields to exploit correlations in the decision making process?
  • How to extract the relevant structure (e.g., low-dimensional manifolds, factorizations, ...) from the state and action space?
  • Can we efficiently model structure in the reward function or the system dynamics?
  • How to learn good features for the policy or the value function?
  • What can we learn from structured prediction?

Representations of behavior:

  • What are good representations for behaviors in continuous domains, e.g. motor skills?
  • How can we efficiently reuse skills in new situations?
  • How can we extract movement skills and elemental movements from demonstrations?
  • How can we compose skills to solve a combination of tasks?
  • How to represent versatile motor skills?
  • How can we represent and exploit the correlations over time in the decision process?
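
One common representation for continuous motor skills in this community is the dynamic movement primitive (DMP) of Ijspeert and colleagues. The following one-dimensional sketch is a simplified rendering of one standard formulation, with hypothetical parameter values: a spring-damper system pulls the state to the goal, a learnable forcing term shapes the transient, and a decaying canonical variable guarantees the forcing vanishes so convergence to the goal is preserved:

```python
import numpy as np

def rollout_dmp(y0, g, weights, centers, widths,
                tau=1.0, K=100.0, D=20.0, alpha=4.0, dt=0.001, T=1.0):
    """Integrate a 1-D DMP with Gaussian basis forcing; returns the trajectory."""
    y, v, s = float(y0), 0.0, 1.0
    trajectory = [y]
    for _ in range(int(T / dt)):
        psi = np.exp(-widths * (s - centers) ** 2)      # Gaussian basis functions
        f = s * (psi @ weights) / (psi.sum() + 1e-10)   # forcing term, fades with s
        v += dt / tau * (K * (g - y) - D * v + (g - y0) * f)
        y += dt / tau * v
        s += dt / tau * (-alpha * s)                    # canonical system: s -> 0
        trajectory.append(y)
    return np.array(trajectory)
```

With zero weights the system reduces to a critically damped attractor towards the goal; learning the weights (e.g., by regression from a demonstration) shapes the movement without sacrificing goal convergence, which is one reason such representations reuse and generalize well.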

Structured Control:

  • How to efficiently use structured representations for planning and control?
  • Can we learn task-priorities and use similar policies as in task-prioritized control?
  • How to decompose optimal control laws into elemental movements such as muscle synergies?
  • How to use low-dimensional manifolds to control high-dimensional, redundant systems?
  • Can we use chain or tree-like structures as policy representation to mimic the kinematic structure of the robot?
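
As a concrete instance of the task-priority question above, classical prioritized (operational space) control executes a secondary task only in the nullspace of the primary one. The sketch below uses the standard pseudoinverse/nullspace-projection construction; the Jacobians in the usage example are random stand-ins for a real redundant robot:

```python
import numpy as np

def prioritized_velocities(J1, dx1, J2, dx2):
    """Joint velocities tracking task 1 exactly; task 2 only in J1's nullspace."""
    J1_pinv = np.linalg.pinv(J1)
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1            # nullspace projector of J1
    dq = J1_pinv @ dx1                                  # primary task velocities
    dq += np.linalg.pinv(J2 @ N1) @ (dx2 - J2 @ dq)     # secondary task, projected
    return dq
```

Because the secondary correction lies in the range of the nullspace projector, it cannot disturb the primary task; for a sufficiently redundant system both task velocities are met exactly.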

Hierarchical Learning Methods:

  • How can we efficiently apply abstractions to control problems?
  • How to efficiently learn on several layers of hierarchy?
  • Which policy search algorithms are appropriate for which hierarchical representation?
  • Can we, in a hierarchical inverse reinforcement learning setup, extract both the cost functions of individual skills and the cost function for selecting among those skills?
  • How to decide when to create new skills or re-use known ones?
  • How to discover and generalize important sub-goals in our movement plan?

Skill Transfer:

  • How can we efficiently transfer skills to new situations?
  • Can we use hierarchical Bayesian models to learn on several layers of abstraction in decision making as well?
  • How to transfer learned models or even value functions to new tasks?
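
A toy illustration of the hierarchical Bayesian idea behind these questions: if skill parameters across tasks share a Gaussian prior, a new task's parameters can be estimated from very few samples by shrinking the estimate towards the prior learned from previous tasks. All numbers and the scalar-Gaussian setting are hypothetical simplifications:

```python
import numpy as np

def posterior_mean(prior_mu, prior_var, samples, noise_var):
    """Conjugate Gaussian update: blend shared prior with new-task evidence."""
    n = len(samples)
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    return post_var * (prior_mu / prior_var + np.sum(samples) / noise_var)
```

With one noisy observation and equal prior and noise variances, the estimate lands halfway between the shared prior and the observation; as more task-specific data arrives, the prior's influence fades, which is exactly the transfer behavior one wants.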


To be announced soon.

Location and More Information

