Reinforcement learning is, from our perspective, the automatic design of approximately optimal controllers from measurements. In traditional (optimal) control, a smart human in the loop decides how to measure and model the system. In RL, by contrast, the optimal controller is constructed by the learning system directly from measurements; however, the path to the optimal controller can require extensive prestructuring through structured policies, value functions, or models. On this page, I want to list some of the projects I am working on or have worked on, though this list will always remain fairly incomplete.
Our general goal in reinforcement learning is the development of methods that scale to the dimensionality of humanoid robots and can generate actions for seven or more degrees of freedom, e.g., for a human arm. Such problems are a tremendous challenge for reinforcement learning, as they require a state space of 21 or more dimensions (one dimension each for every joint position, velocity, and acceleration) and an action space of seven dimensions.
While supervised statistical learning techniques have significant applications for model and imitation learning, they do not suffice for all motor learning problems, particularly when no expert teacher or idealized desired behavior is available. Thus, both robotics and the understanding of human motor control require reward (or cost) related self-improvement. The development of efficient reinforcement learning methods is therefore essential for the success of learning in motor control.
However, reinforcement learning in high-dimensional spaces such as manipulator and humanoid robotics is extremely difficult, as a complete exploration of the underlying state-action spaces is impossible and few existing techniques scale to this domain.
Nevertheless, humans never need such extensive exploration in order to learn new motor skills; instead, they rely on a combination of watching a teacher and subsequent self-improvement. In more technical terms: first, a control policy is obtained by imitation and then improved using reinforcement learning. It is essential that only local policy search techniques, e.g., policy gradient methods, are applied, as a rapid change to the policy would result in complete unlearning of the policy and might also produce unstable control policies that can damage the robot.
In order to bring reinforcement learning to robotics and computational motor control, we have both improved existing reinforcement learning methods and developed a variety of novel algorithms. At this point, we can only give a short overview of these methods:
- Policy Gradient Methods: One class of methods that is particularly interesting is the class of policy gradient methods, due to their stronger convergence guarantees. A nice tutorial to get started can be found in the Policy Gradient Toolbox, which I created for an upcoming survey.
- Natural Actor-Critic: The natural actor-critic makes use of the fact that a natural gradient usually outperforms a vanilla gradient. We have developed several versions and have realized that algorithms such as Sutton's Actor-Critic and Bradtke & Barto's Q-Learning for the traditional problem of linear quadratic regulation can be derived from this setting.
- EM-like Reinforcement Learning: If we had a teacher labeling all actions as good or bad in a binary fashion, we would have an imitation learning problem. However, if we instead treat these labels as hidden variables and use the returns/action values as improper distributions over them, we obtain an inference problem. This insight has led to reward-weighted regression and the PoWER algorithm.
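As a concrete illustration of the policy gradient idea from the first item, the following sketch runs a REINFORCE-style likelihood-ratio gradient on a toy one-dimensional problem. The quadratic reward, the Gaussian policy, and all constants are illustrative assumptions, not part of any of the algorithms above:

```python
import random

random.seed(0)

# Hypothetical toy task: maximize r(a) = -(a - 2)^2 with a Gaussian
# policy a ~ N(theta, sigma^2).  The likelihood-ratio trick gives
# grad log pi(a) = (a - theta) / sigma^2, so the policy gradient can
# be estimated from sampled actions and their rewards alone.
theta, sigma, alpha = 0.0, 1.0, 0.05  # illustrative constants

for _ in range(2000):
    actions = [random.gauss(theta, sigma) for _ in range(32)]
    rewards = [-(a - 2.0) ** 2 for a in actions]
    baseline = sum(rewards) / len(rewards)       # variance reduction
    grad = sum((a - theta) / sigma ** 2 * (r - baseline)
               for a, r in zip(actions, rewards)) / len(actions)
    theta += alpha * grad                        # gradient ascent

# theta should now lie close to the optimal action a* = 2
```

Subtracting the mean reward as a baseline leaves the gradient estimate unbiased while greatly reducing its variance, which is what makes such small sample batches workable.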
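One key ingredient of natural-gradient methods can be sketched in the same toy setting: preconditioning the gradient with the inverse Fisher information. For a Gaussian policy with fixed standard deviation, the Fisher information of the mean is 1/sigma^2, so the natural gradient is simply the vanilla estimate rescaled by sigma^2. The task and constants are again illustrative assumptions:

```python
import random

random.seed(1)

# Hypothetical sketch of a natural policy-gradient step for a Gaussian
# policy a ~ N(theta, sigma^2) with fixed sigma: estimate the vanilla
# gradient from samples, then precondition with the inverse Fisher
# information, which here is simply sigma^2.
theta, sigma, alpha = 0.0, 0.5, 0.2  # illustrative constants

for _ in range(300):
    actions = [random.gauss(theta, sigma) for _ in range(64)]
    rewards = [-(a + 1.0) ** 2 for a in actions]   # optimum at a* = -1
    baseline = sum(rewards) / len(rewards)
    grad = sum((a - theta) / sigma ** 2 * (r - baseline)
               for a, r in zip(actions, rewards)) / len(actions)
    theta += alpha * sigma ** 2 * grad             # F^-1 * grad: natural step

# theta should now lie close to the optimal action a* = -1
```

The rescaling makes the step size in action space independent of the (arbitrary) exploration magnitude, one reason natural gradients tend to be better conditioned than vanilla ones.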
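The EM-like view can likewise be sketched in miniature: exponentiated rewards act as improper "this action was good" weights, and the policy is re-fit by a weighted regression. This is reduced here to its simplest special case, a reward-weighted average; the task and constants are illustrative assumptions, not the full reward-weighted regression algorithm:

```python
import math
import random

random.seed(2)

# Hypothetical miniature of the EM-like view: treat exp(beta * r) as
# an (improper) probability that an action was "good", then update
# the policy mean by a weighted M-step.
mu, sigma, beta = 0.0, 1.0, 1.0  # illustrative constants

for _ in range(50):
    actions = [random.gauss(mu, sigma) for _ in range(200)]
    rewards = [-(a - 1.5) ** 2 for a in actions]   # optimum at a* = 1.5
    r_max = max(rewards)                           # shift for numerical stability
    weights = [math.exp(beta * (r - r_max)) for r in rewards]
    mu = sum(w * a for w, a in zip(weights, actions)) / sum(weights)

# mu should now lie close to the optimal action a* = 1.5
```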