Learning Motor Primitives

Humans demonstrate a large variety of complicated motor skills in their day-to-day lives. Their agility and their adaptability to new control tasks remain unmatched by the millions of robots laboring on factory floors and roaming research labs. The ability to learn and improve new motor skills is therefore an essential step toward human-like motor performance. If future robots could acquire basic tasks by imitating human demonstrations and subsequently self-improve by trial and error, such robot learning would enable more interesting robot applications as well as large productivity gains in industry.

(:youtube cNyoMVZQdYM :)

Recent progress in machine learning has yielded several important tools for getting closer to this vision. Dynamical system-based motor primitives have enabled robots to learn complex tasks ranging from tennis swings to legged locomotion. However, most interesting motor learning problems are high-dimensional reinforcement learning problems, often beyond the reach of current methods. We therefore initialize the learning process by imitation learning and subsequently improve the policy by reinforcement learning, as suggested in [1]. We obtained human demonstrations of tasks using a VICON motion capture system and kinesthetic teach-in; this data is used for imitation learning. We developed a novel EM-inspired reinforcement learning algorithm particularly well-suited for dynamic motor primitives, compared it to several well-known parametrized policy search methods, and showed that it outperforms them. We applied it to motor learning and showed that it can learn a complex Ball-in-a-Cup task on our Barrett WAM robot arm [2].
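The flavor of such an EM-inspired update can be sketched in a few lines: sample Gaussian perturbations of the policy parameters, roll them out, and move the parameters toward the reward-weighted average of the perturbations. This is only a toy illustration of reward-weighted policy search, not the published algorithm; the two-dimensional quadratic-exponential reward and its target are hypothetical stand-ins for a task reward such as "the ball lands in the cup".

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical episodic reward: peaks when the parameters hit a toy
# target; stands in for a real task reward (e.g. Ball-in-a-Cup success).
TARGET = np.array([1.0, -0.5])

def episode_return(theta):
    return np.exp(-np.sum((theta - TARGET) ** 2))

theta = np.zeros(2)   # policy parameters, e.g. motor-primitive weights
sigma = 0.3           # std of the Gaussian parameter exploration
for _ in range(60):
    # Sample perturbed rollouts and evaluate their returns.
    eps = rng.normal(0.0, sigma, size=(50, 2))
    returns = np.array([episode_return(theta + e) for e in eps])
    # EM-style update: reward-weighted average of the exploration noise,
    # i.e. move toward perturbations that earned high return.
    theta = theta + (returns / returns.sum()) @ eps

print(np.round(theta, 2))
```

Because the weights are nonnegative and sum to one, each update is a convex combination of the sampled perturbations, which keeps the step size bounded without a hand-tuned learning rate.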

The policy can be sensitive to perturbations of the initial conditions or of the trajectory, so perceptual coupling is sometimes needed to cancel these disturbances. To date, however, there have been only a few extensions that incorporate perceptual coupling to variables of external focus, and these modifications have relied upon handcrafted solutions. Humans, in contrast, learn how to couple their movement primitives with external variables. We proposed an augmented version of the dynamical system motor primitives that incorporates perceptual coupling to an external variable. The resulting perceptually driven motor primitives include the previous primitives as a special case and inherit many of their useful properties. We showed that these motor primitives can perform complex tasks such as the Ball-in-a-Cup task even with large variances in the initial conditions, where even a skilled human player would be challenged. This novel type of motor primitive can be learned with the algorithms described above [3].
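The basic idea of such coupling can be illustrated on the spring-damper transformation system underlying dynamical system motor primitives: an extra term pulls the trajectory toward an external variable. The gains and the simple proportional coupling below are illustrative assumptions, not the learned coupling functions of the augmented primitives.

```python
import numpy as np

def run_primitive(g, y0, coupling=0.0, y_ext=0.0, tau=1.0, dt=0.001):
    """Integrate a single spring-damper transformation system for one
    second. With coupling == 0 this is the uncoupled primitive; a
    positive coupling gain pulls the motion toward the external
    variable y_ext (an illustrative stand-in for a learned coupling)."""
    alpha_z, beta_z = 25.0, 6.25   # critically damped gains (beta = alpha/4)
    y, z = y0, 0.0                 # position and scaled velocity
    for _ in range(int(1.0 / dt)):
        zdot = alpha_z * (beta_z * (g - y) - z) + coupling * (y_ext - y)
        y += dt * z / tau
        z += dt * zdot / tau
    return y

goal_only = run_primitive(g=1.0, y0=0.0)
coupled = run_primitive(g=1.0, y0=0.0, coupling=200.0, y_ext=0.5)
print(round(goal_only, 3), round(coupled, 3))
```

With zero coupling the system converges to the goal `g`; with the coupling term active, the final position settles between the goal and the external variable, showing how perception can continuously reshape the executed movement.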

Contact persons: Jens Kober, Jan Peters

References

  1. Peters, J.; Schaal, S. (2008). Reinforcement learning of motor skills with policy gradients, Neural Networks, 21, 4, pp. 682-697.
  2. Kober, J.; Peters, J. (2008). Policy Search for Motor Primitives in Robotics, Advances in Neural Information Processing Systems (NIPS).
  3. Kober, J.; Mohler, B.; Peters, J. (2008). Learning Perceptual Coupling for Motor Primitives, International Conference on Intelligent Robots and Systems (IROS).