High Speed Motor Skills 

By opening up the solution space of possible control strategies from popular static approaches to more dynamic alternatives, we gain potential benefits: overcoming torque constraints and under-actuation, improving energy efficiency and speed, and allowing simpler movements and simpler hardware. We investigate tasks like juggling and air hockey, which stand out among dynamic manipulation tasks by requiring high speed and intricate dexterity at the same time.
Robot Juggling 
High acceleration reinforcement learning for real-world juggling with binary rewards. Robots that can learn in the physical world will be essential for escaping stiff, pre-programmed movements. For dynamic high-acceleration tasks such as juggling, learning in the real world is particularly challenging, as one must push the limits of the robot and its actuation without harming the system, which amplifies the need for sample efficiency and safety in robot learning algorithms. In contrast to prior work, which mainly focuses on the learning algorithm, we propose a learning system that directly incorporates these requirements into the design of the policy representation, initialization, and optimization. We demonstrate that this system enables the high-speed Barrett WAM manipulator to learn to juggle two balls from 56 minutes of experience with a binary reward signal. The final policy juggles continuously for up to 33 minutes, or about 4500 repeated catches.
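The core difficulty above is learning from a sparse success/failure signal. A minimal sketch of episodic, reward-weighted policy search with a binary reward is shown below; this is an illustration of the general idea, not the authors' exact algorithm, and the parameter space, success region, and `binary_reward` stand-in are invented for the example (a real rollout would report whether the balls were caught).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a real juggling rollout: the binary reward
# is 1 if the sampled stroke parameters land inside an (unknown)
# success region, 0 otherwise.
TARGET = np.array([0.3, -0.2, 0.4])

def binary_reward(theta):
    return float(np.linalg.norm(theta - TARGET) < 0.5)

# Gaussian search distribution over the policy parameters.
mean, cov = np.zeros(3), np.eye(3)

for _ in range(50):
    thetas = rng.multivariate_normal(mean, cov, size=32)
    rewards = np.array([binary_reward(t) for t in thetas])
    if rewards.sum() == 0:
        continue  # no successful rollouts: keep the current distribution
    w = rewards / rewards.sum()
    mean = w @ thetas                              # reward-weighted mean
    diff = thetas - mean
    cov = (w[:, None] * diff).T @ diff + 1e-4 * np.eye(3)
```

Because the update averages only the successful rollouts, the search distribution contracts onto the success region without ever needing a shaped, graded reward.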
- Gomez Andreu, M.A.; Ploeger, K.; Peters, J. (2024). Beyond the Cascade: Juggling Vanilla Siteswap Patterns, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
- Ploeger, K.; Peters, J. (2022). Controlling the Cascade: Kinematic Planning for N-ball Toss Juggling, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
- Ploeger, K.; Lutter, M.; Peters, J. (2020). High Acceleration Reinforcement Learning for Real-World Juggling with Binary Rewards, Conference on Robot Learning (CoRL).
Robot Air Hockey 
Efficient and Reactive Planning for High Speed Robot Air Hockey. Highly dynamic robotic tasks require high-speed, reactive robots. These tasks are particularly challenging due to physical constraints, hardware limitations, and high uncertainty in the dynamics and sensor measurements. To face these issues, it is crucial to design robotic agents that generate precise, fast trajectories and react immediately to environmental changes. Air hockey is an example of this kind of task. Due to the environment's characteristics, it is possible to formalize the problem and derive clean mathematical solutions. For these reasons, this environment is perfect for pushing the performance of currently available general-purpose robotic manipulators to the limit. Using two KUKA iiwa 14 arms, we show how to design a policy for general-purpose robotic manipulators for the air hockey game. We demonstrate that a real robot arm can perform fast hitting movements and that the two robots can play against each other on a medium-size air hockey table in simulation.
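Reacting immediately to the puck presupposes predicting where it will be when the hitting movement completes. A minimal sketch of such a prediction, assuming a frictionless puck and perfectly elastic side-wall bounces (both simplifying assumptions; the function name and table dimensions are invented for the example):

```python
import numpy as np

def predict_puck(pos, vel, t, table_width=1.0):
    """Predict the puck position at time t on a frictionless table.

    The puck moves ballistically along x and reflects elastically off
    the side walls at y = 0 and y = table_width. Reflections are
    handled by "unfolding" the motion into a doubled period and then
    folding the result back (a triangle wave in y).
    pos and vel are 2D arrays (x, y); dimensions are illustrative.
    """
    x = pos[0] + vel[0] * t
    y_raw = pos[1] + vel[1] * t
    period = 2.0 * table_width
    y_mod = y_raw % period
    y = y_mod if y_mod <= table_width else period - y_mod
    return np.array([x, y])
```

The unfolding trick avoids stepping through each bounce, so an arbitrary number of wall reflections costs a single modulo operation, which matters when the prediction runs inside a high-rate reactive control loop.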
- Kicki, P.; Liu, P.; Tateo, D.; Bou Ammar, H.; Walas, K.; Skrzypczynski, P.; Peters, J. (2024). Fast Kinodynamic Planning on the Constraint Manifold with Deep Neural Networks, IEEE Transactions on Robotics (T-RO), and Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 40, pp.277-297.
- Liu, P.; Tateo, D.; Bou-Ammar, H.; Peters, J. (2021). Efficient and Reactive Planning for High Speed Robot Air Hockey, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
Robot Table Tennis 
Learning to Select and Generalize Striking Movements in Robot Table Tennis. Learning new motor tasks from physical interactions is an important goal for both robotics and machine learning. However, when moving beyond basic skills, most monolithic machine learning approaches fail to scale. For more complex skills, methods that are tailored to the domain of skill learning are needed. In this paper, we take the task of learning table tennis as an example and present a new framework that allows a robot to learn cooperative table tennis from physical interaction with a human. The robot first learns a set of elementary table tennis hitting movements from a human table tennis teacher by kinesthetic teach-in; these movements are compiled into a set of motor primitives represented by dynamical systems. The robot subsequently generalizes these movements to a wider range of situations using our mixture of motor primitives approach. The resulting policy enables the robot to select appropriate motor primitives as well as to generalize between them. Finally, the robot plays with a human table tennis partner and learns online to improve its behavior. We show that the resulting setup is capable of playing table tennis using an anthropomorphic robot arm.
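The mixture idea above can be sketched as a gating function that weights each demonstrated primitive by how close the current ball state is to the situation in which that primitive was taught, then blends their parameters. This is a minimal illustration of the gating-and-blending principle, not the paper's exact formulation; the centers, parameters, and `sigma` are invented for the example.

```python
import numpy as np

# Each primitive has a gating center (the ball state it was taught in)
# and a parameter vector (e.g. hitting-movement parameters).
centers = np.array([[0.2, 1.0], [0.6, 1.2], [0.9, 0.8]])   # ball (x, z)
params  = np.array([[0.1, 0.5], [0.4, 0.7], [0.8, 0.3]])   # per primitive

def mixture_of_primitives(ball_state, sigma=0.2):
    """Blend primitive parameters by gating on the incoming ball state."""
    d2 = np.sum((centers - ball_state) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / sigma ** 2)
    w /= w.sum()                    # normalized gating responsibilities
    return w @ params               # convex combination of parameters

blended = mixture_of_primitives(np.array([0.4, 1.1]))
```

Because the result is a convex combination, a ball state near one teaching situation reproduces that primitive almost exactly, while states between teaching situations interpolate smoothly, which is what allows generalization beyond the demonstrated strokes.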
- Muelling, K.; Kober, J.; Kroemer, O.; Peters, J. (2013). Learning to Select and Generalize Striking Movements in Robot Table Tennis, International Journal of Robotics Research (IJRR), 32, 3, pp.263-279.