Organizers: Gerhard Neumann, ?????, Jan Peters
Location: Lake Tahoe, Nevada, United States
Learning control or decision policies for complex real-world problems, as encountered by robots, web agents, or in artificial games, is a challenge for machine learning. Despite considerable efforts, general reinforcement learning approaches have not yet scaled to real-world domains due to their inherent high dimensionality, partial observability, and the high cost of data generation. However, domain-driven reinforcement learning approaches have recently yielded many successes, e.g., in robotics, indicating that domain structure enables scaling to real-world problems -- especially as most real-world domains are highly structured environments. Hence, the most promising route to more scalable policy learning must include the automatic exploitation of the environment's structure, resulting in new structured learning approaches for control, decision making, and robotics.
In robot movement generation, the physical world has dictated the inherent structure of control policies, state or action spaces, as well as reward functions. As a result, researchers have often naturally ended up with well-structured hierarchical policies based on discrete-continuous partitions (e.g., learning both elementary/primitive actions and their supervisory layer, or prioritized operational space control), with nested control loops running at several different speeds (i.e., fast control loops for smooth and accurate movement execution, slower loops for long-term task achievement). While such structures are favored in robot reinforcement learning, they are frequently neglected by the general reinforcement learning community. Instead, hierarchical reinforcement learning has often been confined to discrete toy domains due to the lack of a principled approach for leveraging structure and prior knowledge. Transferring insights from structured prediction methods, which exploit the inherent correlations in the data, may be a crucial step. General approaches for bringing structured policies, states, actions, and rewards into reinforcement learning may well be the key to tackling many challenges of real-world environments and an important step toward the vision of intelligent autonomous agents that can learn a rich and versatile set of behaviors. The workshop should reveal how complex behavior typically exhibits correlations that can be exploited for decision making, learning reward functions, or finding structure in the state or action space.
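To make the hierarchical structure concrete, the following minimal sketch (all names and dynamics are illustrative assumptions, not part of any workshop submission) shows a discrete supervisory layer selecting among continuous primitive controllers, with a slow outer decision loop wrapping a fast inner control loop:

```python
import numpy as np

# Hypothetical sketch of a discrete-continuous hierarchical policy:
# a discrete supervisor picks a continuous primitive at a slow rate,
# and the chosen primitive runs a fast inner control loop.

def primitive_reach(state):
    """Continuous primitive: proportional controller toward a fixed goal."""
    goal = np.array([1.0, 1.0])
    return 0.5 * (goal - state)

def primitive_hold(state):
    """Continuous primitive: damp the state back toward the origin."""
    return -0.5 * state

PRIMITIVES = [primitive_reach, primitive_hold]

def supervisor(state):
    """Discrete supervisory layer: choose a primitive from a coarse feature."""
    return 0 if np.linalg.norm(state) < 2.0 else 1

def rollout(state, outer_steps=10, inner_steps=5):
    """Nested control loops: slow supervisory decisions, fast execution."""
    for _ in range(outer_steps):
        k = supervisor(state)            # slow loop: pick a primitive
        for _ in range(inner_steps):     # fast loop: execute it
            state = state + PRIMITIVES[k](state)
    return state

final = rollout(np.zeros(2))  # converges toward the reach goal [1, 1]
```

In a learned version of such an architecture, both the supervisor (a discrete policy) and the primitives (continuous controllers) would be trained, rather than hand-coded as here.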
In order to make progress toward the goal of structured learning for control, decision making, and robotics, this workshop aims to bring together researchers from different machine learning areas (such as reinforcement learning and structured prediction), from text/speech processing (where supervised learning methods make heavy use of the inherent structure of the data), and from related disciplines (e.g., control engineering, robotics, and the cognitive sciences).
We particularly want to focus on the following important topics for structured decision making, which overlap substantially across several of these fields:
These challenges are important steps to building intelligent autonomous systems and may potentially motivate new research topics in the related research fields.
The aim of this workshop is to bring together researchers who are interested in structured representations, reinforcement learning, hierarchical learning methods, and control architectures. Among these general topics, we will focus on the following questions:
To be announced soon.