Reference Type | Conference Proceedings |
Author(s) | Deisenroth, M.P.; Fox, D.; Rasmussen, C.E. |
Year | 2011 |
Title | Learning to Control a Low-Cost Robotic Manipulator Using Data-Efficient Reinforcement Learning |
Journal/Conference/Book Title | Robotics: Science and Systems (RSS 2011) |
Keywords | Gaussian process, reinforcement learning, robot learning, policy search |
Abstract | In recent years, there has been substantial progress in robust manipulation in unstructured environments. The long-term goal of our work is to move away from precise but very expensive robotic systems and to develop affordable, potentially imprecise, self-adaptive manipulator systems that can interactively perform tasks such as playing with children. In this paper, we demonstrate how a low-cost, off-the-shelf robotic system can learn closed-loop policies for a stacking task from scratch in only a handful of trials. Our manipulator is inaccurate and provides no pose feedback. To learn a controller in the work space of a Kinect-style depth camera, we use a model-based reinforcement learning technique. Our learning method is data efficient, reduces model bias, and deals with several noise sources in a principled way during long-term planning. We present a way of incorporating state-space constraints into the learning process and analyze the learning gain obtained by exploiting the sequential structure of the stacking task. |
Link to PDF | http://www.ias.informatik.tu-darmstadt.de/uploads/Publications/Deisenroth_RSS_2011.pdf |