On-Robot Learning

While agents can acquire complex skills by learning in a simulated environment, the sim-to-real gap is often too large to overcome with domain randomization techniques alone. This is particularly the case in contact-rich environments, or when learning fast and dynamic motion skills. In these settings, we need to deploy our learning algorithm directly on the real robot. Doing so raises many crucial issues, such as learning efficiency, compliance with safety and task constraints, and the transfer of learned skills to different tasks. In the following, we present some of our research tackling these topics. Our long-term objective is to deploy learning algorithms in the real world, allowing robots to constantly improve their performance while performing the target task.

Safe Robot Learning

Fast Kinodynamic Planning on the Constraint Manifold with Deep Neural Networks

In recent years, learning-based solutions have become alternatives to classical motion-planning approaches, but they still lack comprehensive handling of complex constraints, such as planning on a lower-dimensional manifold of the task space while accounting for the robot's dynamics. To address these issues, we introduce a novel learning-to-plan framework that exploits the concept of the constraint manifold, allowing the handling of kinodynamic constraints. Our approach generates plans satisfying an arbitrary set of constraints and computes them in short, constant time, so the robot can plan and replan reactively, making our approach suitable for dynamic environments. We validate our approach on two simulated tasks and in a demanding real-world scenario, where we use a KUKA LBR iiwa 14 robot arm to perform the hitting movement in robotic air hockey.

    •     Bib
      Kicki, P.; Liu, P.; Tateo, D.; Bou Ammar, H.; Walas, K.; Skrzypczynski, P.; Peters, J. (2024). Fast Kinodynamic Planning on the Constraint Manifold with Deep Neural Networks, IEEE Transactions on Robotics (T-RO), 40, pp. 277-297; also presented at the IEEE International Conference on Robotics and Automation (ICRA).
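The constraint-manifold idea above can be sketched in a few lines. The toy example below (not the paper's implementation; the plane constraint and the 3-DoF configurations are hypothetical) shows the core notion: a set of kinodynamic constraints defines a manifold {q : c(q) = 0}, and a planner can be trained by penalizing the constraint violation accumulated along a candidate trajectory.

```python
import numpy as np

def constraint(q):
    """Hypothetical task constraint c(q) = 0: the last configuration
    coordinate (think: end-effector height) must stay on the plane z = 0.3."""
    return q[-1] - 0.3

def manifold_violation_loss(trajectory):
    """Mean squared constraint violation along a planned trajectory.
    Driving this loss to zero pushes every configuration in the plan
    onto the constraint manifold {q : c(q) = 0}."""
    return float(np.mean([constraint(q) ** 2 for q in trajectory]))

# A straight-line plan between two configurations of a 3-DoF toy robot;
# both endpoints lie on the manifold, so the interpolation stays on it.
start = np.array([0.0, 0.0, 0.3])
goal = np.array([1.0, 0.5, 0.3])
plan = np.linspace(start, goal, num=20)

print(manifold_violation_loss(plan))  # → 0.0
```

In the actual framework, a loss of this kind would be backpropagated through a neural network that outputs the trajectory parameters, so that plans satisfying the constraints can be produced in constant time at inference.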

Robot Reinforcement Learning on the Constraint Manifold

Reinforcement learning in robotics is extremely challenging due to many practical issues, including safety, mechanical constraints, and wear and tear, which are typically not considered in the machine learning literature. One crucial problem in applying reinforcement learning in the real world is safe exploration, which requires the satisfaction of physical and safety constraints throughout the learning process. To explore in such safety-critical environments, it is beneficial to leverage known information, such as robot models and constraints, to provide more robust safety guarantees. Exploiting this knowledge, we propose a novel method to learn robotics tasks in simulation efficiently while satisfying the constraints during the learning process.

    •     Bib
      Liu, P.; Tateo, D.; Bou-Ammar, H.; Peters, J. (2021). Robot Reinforcement Learning on the Constraint Manifold, Proceedings of the Conference on Robot Learning (CoRL).
    •     Bib
      Liu, P.; Zhang, K.; Tateo, D.; Jauhri, S.; Hu, Z.; Peters, J.; Chalvatzaki, G. (2023). Safe Reinforcement Learning of Dynamic High-Dimensional Robotic Tasks: Navigation, Manipulation, Interaction, Proceedings of the IEEE International Conference on Robotics and Automation (ICRA).
    •     Bib
      Liu, P.; Zhang, K.; Tateo, D.; Jauhri, S.; Peters, J.; Chalvatzaki, G. (2022). Regularized Deep Signed Distance Fields for Reactive Motion Generation, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
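One way to make exploration safe by construction, in the spirit of acting on the constraint manifold, is to restrict the agent's actions to the manifold's tangent space. The sketch below is a simplified illustration (the single constraint, its Jacobian, and the 3-DoF configuration are hypothetical, not the paper's setup): any exploratory velocity is projected through the null space of the constraint Jacobian, so the constraint value cannot change.

```python
import numpy as np

def constraint_jacobian(q):
    """Jacobian of a hypothetical constraint c(q) = q[2] - 0.3,
    which keeps the third joint coordinate fixed."""
    return np.array([[0.0, 0.0, 1.0]])

def safe_velocity(q, desired_qdot):
    """Project an arbitrary exploratory velocity onto the tangent space
    of the constraint manifold: N = I - J^+ J is the null-space projector,
    so J @ (N @ qdot) = 0 and c(q) stays constant while moving."""
    J = constraint_jacobian(q)
    N = np.eye(q.size) - np.linalg.pinv(J) @ J
    return N @ desired_qdot

q = np.array([0.2, -0.1, 0.3])
raw_action = np.array([1.0, 0.5, -2.0])   # this velocity would leave the manifold
qdot = safe_velocity(q, raw_action)
print(qdot)  # → [1.  0.5 0. ]  (the constraint-violating component is removed)
```

The appeal of this construction is that the learning algorithm itself needs no modification: the agent explores freely in the projected action space, and every executed command satisfies the constraints by design.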