Human motor control appears to rely on motor primitives that encode basic movements as the elementary units of motor actions. Many resulting models in computational motor control assume that humans perform difficult tasks by hierarchically composing such elementary templates. As this concept also makes sense in the context of robotics, an increasing number of motor learning approaches in robotics research rely on it. Formalizations in terms of dynamical systems have led to biomimetic robot systems that can learn a large variety of basic movements, including both rhythmic movements (e.g., drumming, paddling a ball on a string, basic gaits of biped and quadruped robots) and single-stroke discrete movements (e.g., hitting a baseball, ball-in-a-cup movements).
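To make the dynamical-systems formalization concrete, the following is a minimal sketch of a discrete dynamic movement primitive in the spirit of such systems: a spring-damper attractor pulled toward a goal, modulated by a learnable forcing term. All gains, the basis-function layout, and the integration scheme are illustrative assumptions, not values from any of the cited robot systems.

```python
import numpy as np

def dmp_rollout(y0, goal, weights, tau=1.0, dt=0.001, alpha_y=25.0, alpha_x=8.0):
    """Integrate one discrete movement primitive for one unit of scaled time.

    A critically damped spring-damper pulls the state toward `goal`; a
    forcing term parameterized by `weights` (learnable, e.g. by imitation)
    shapes the transient and vanishes as the phase variable decays.
    """
    beta_y = alpha_y / 4.0                         # critical damping
    n_basis = len(weights)
    centers = np.exp(-alpha_x * np.linspace(0, 1, n_basis))
    widths = n_basis ** 1.5 / centers              # heuristic basis widths
    y, yd, x = float(y0), 0.0, 1.0                 # position, velocity, phase
    trajectory = []
    for _ in range(int(1.0 / dt)):
        psi = np.exp(-widths * (x - centers) ** 2)  # Gaussian basis functions
        forcing = (psi @ weights) / (psi.sum() + 1e-10) * x * (goal - y0)
        ydd = (alpha_y * (beta_y * (goal - y) - tau * yd) + forcing) / tau ** 2
        yd += ydd * dt
        y += yd * dt
        x += -alpha_x * x / tau * dt               # canonical system decays to 0
        trajectory.append(y)
    return np.array(trajectory)

# With zero weights the forcing term vanishes and the primitive simply
# converges to the goal; learned weights would shape the stroke.
traj = dmp_rollout(y0=0.0, goal=1.0, weights=np.zeros(10))
```

Because the attractor dynamics guarantee convergence regardless of the forcing weights, learning can safely adapt the weights (and goal parameters such as the hitting point) without destabilizing the movement.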
Nevertheless, while the general idea of motor primitives has inspired successful applications in biomimetic robotics, these applications have not yet caught up with computational motor control models. Instead of learning complex tasks from these basic elements, motor primitives are currently employed only to acquire and refine simple tasks that do not require a central, supervisory process for decision making. For complex tasks in robotics, such as sports or manipulation in cluttered environments, such a supervisory process becomes essential.
In this project, we will study how robots can learn complex tasks using motor primitives inspired by results from human motor control, employed by a central, supervisory process for higher-level decision making. As an example, we will learn to play table tennis in the way humans acquire the motor skill for such hitting sports. Table tennis provides a unique scenario that exhibits all the complexity common to motor skills learned by humans. In a feasibility study, we have created a mechanical model based on biological hypotheses (e.g., Durrey's movement phases, Ramanantsoa's virtual hitting point hypothesis, the operational timing hypothesis) and, as a result, can ensure that our learning approach will be applicable in this context.
Our suggested approach can be decomposed into two parts. First, the robot will acquire a few motor primitives through imitation and subsequently refine them by reinforcement learning, using a ball gun for training. Our previous work on discrete motor primitives provides these basic components and can be applied in this first step. Second, we focus on learning the central process for using motor primitives that biological systems appear to have but biomimetic robots lack. This supervisor will make use of context information (e.g., predicted ball trajectories, the opponent's behavior) to select motor primitives, generalize between primitives, and determine goal parameters (e.g., virtual hitting points and timing).
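The supervisory layer can be sketched as follows. This is a hypothetical illustration, not the project's actual implementation: the primitive names, the context features, the nearest-neighbor selection rule, and the virtual-hitting-point heuristic are all assumptions introduced for this example.

```python
import numpy as np

class Supervisor:
    """Toy supervisory process: selects a motor primitive from context and
    derives goal parameters (virtual hitting point, hitting time index)."""

    def __init__(self):
        # each stored primitive remembers the ball context it was learned for
        self.primitives = {}  # name -> context feature vector

    def add_primitive(self, name, context):
        self.primitives[name] = np.asarray(context, dtype=float)

    def select(self, ball_context):
        """Pick the primitive whose stored context is closest to the
        predicted ball features (nearest-neighbor rule, an assumption)."""
        ball_context = np.asarray(ball_context, dtype=float)
        return min(self.primitives,
                   key=lambda n: np.linalg.norm(self.primitives[n] - ball_context))

    def goal_parameters(self, ball_trajectory, hitting_height=0.8):
        """Return a virtual hitting point and its time index: the first
        predicted ball sample at or below `hitting_height` while falling."""
        for t, (x, y, z, vz) in enumerate(ball_trajectory):
            if z <= hitting_height and vz < 0.0:
                return (x, y, z), t
        return None

# context features here: (incoming ball speed, lateral position) -- illustrative
sup = Supervisor()
sup.add_primitive("forehand", context=[3.0, -0.3])
sup.add_primitive("backhand", context=[3.0, 0.3])
chosen = sup.select([2.8, 0.25])   # prints "backhand"
print(chosen)
```

Generalizing between primitives (e.g., by interpolating their parameters as a function of context) would replace the hard nearest-neighbor choice in a fuller treatment; the sketch only shows the selection and goal-parameter roles of the supervisor.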