Publication Details

Reference Type: Conference Proceedings
Author(s): Englert, P.; Paraschos, A.; Peters, J.; Deisenroth, M. P.
Title: Model-based Imitation Learning by Probabilistic Trajectory Matching
Journal/Conference/Book Title: Proceedings of 2013 IEEE International Conference on Robotics and Automation (ICRA)
Abstract: One of the most elegant ways of teaching new skills to robots is to provide demonstrations of a task and let the robot imitate this behavior. Such imitation learning is a non-trivial task: different anatomies of robot and teacher, and reduced robustness to changes in the control task, are two major difficulties in imitation learning. We present an imitation-learning approach to efficiently learn a task from expert demonstrations. Instead of finding policies indirectly, either via state-action mappings (behavioral cloning) or cost function learning (inverse reinforcement learning), our goal is to find policies directly such that predicted trajectories match observed ones. To achieve this aim, we model the trajectory of the teacher and the predicted robot trajectory by means of probability distributions. We match these distributions by minimizing their Kullback-Leibler divergence. In this paper, we propose to learn probabilistic forward models to compute a probability distribution over trajectories. We compare our approach to model-based reinforcement learning methods with hand-crafted cost functions. Finally, we evaluate our method with experiments on a real compliant robot.
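The abstract's core idea is to match the distribution over demonstrated trajectories against the distribution over predicted robot trajectories by minimizing their Kullback-Leibler divergence. As a minimal sketch of that matching objective, the snippet below evaluates the closed-form KL divergence between two multivariate Gaussians, which is the standard form this term takes when both trajectory distributions are approximated as Gaussian. The function name `gaussian_kl` and the numerical values are illustrative assumptions, not taken from the paper, and the full method additionally involves learning probabilistic forward models, which is not shown here.

```python
import numpy as np

def gaussian_kl(mu_p, cov_p, mu_q, cov_q):
    """Closed-form KL(p || q) for p = N(mu_p, cov_p), q = N(mu_q, cov_q).

    KL(p||q) = 0.5 * [ tr(Sq^-1 Sp) + (mq-mp)^T Sq^-1 (mq-mp)
                       - d + ln(det Sq / det Sp) ]
    """
    d = mu_p.shape[0]
    cov_q_inv = np.linalg.inv(cov_q)
    diff = mu_q - mu_p
    return 0.5 * (
        np.trace(cov_q_inv @ cov_p)      # trace term
        + diff @ cov_q_inv @ diff        # Mahalanobis distance of the means
        - d                              # dimensionality offset
        + np.log(np.linalg.det(cov_q) / np.linalg.det(cov_p))  # log-det ratio
    )

# Hypothetical per-time-step Gaussian summaries: p is the demonstrated
# trajectory distribution, q is the predicted robot trajectory distribution.
mu_demo, cov_demo = np.array([0.0, 1.0]), 0.10 * np.eye(2)
mu_pred, cov_pred = np.array([0.2, 0.9]), 0.15 * np.eye(2)

kl = gaussian_kl(mu_demo, cov_demo, mu_pred, cov_pred)
```

Driving `kl` to zero by adjusting the policy that generates the predicted trajectory is the intuition behind the trajectory-matching objective; the KL divergence is zero exactly when the two Gaussians coincide and positive otherwise.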

