Dexterous Manipulation

Our Robot Learning Laboratory advances dexterous manipulation, a critical capability that lets robots perform complex tasks with agility and precision akin to human hands. We focus in particular on skill-learning approaches that combine the strengths of imitation learning and reinforcement learning: robots first mimic complex human actions and then refine their performance through trial and error, yielding more nuanced and adaptable manipulation skills. At a higher level, we study the combinatorial optimization problems that arise in sequential assembly with multiple parts. This research aims to equip robots with the ability to plan and execute sequences of actions, manipulating and assembling components efficiently and intelligently, thereby pushing the boundaries of what automated systems can achieve in dynamic and unpredictable environments.

Robotic Assembly

Graph-based Reinforcement Learning meets Mixed Integer Programs: An application to 3D robot assembly discovery. Robot assembly discovery (RAD) is a challenging problem that lives at the intersection of resource allocation and motion planning. The goal is to combine a predefined set of objects to form something new while considering task execution with the robot in the loop. In this work, we tackle the problem of building arbitrary, predefined target structures entirely from scratch using a set of Tetris-like building blocks and a robotic manipulator. Our novel hierarchical approach efficiently decomposes the overall task into three feasible levels that mutually benefit each other. On the high level, we run a classical mixed-integer program for global optimization of block-type selection and the blocks’ final poses to recreate the desired shape. Its output is then exploited to efficiently guide the exploration of an underlying reinforcement learning (RL) policy. This RL policy draws its generalization properties from a flexible graph-based representation that is learned through Q-learning and can be refined with search. Moreover, it accounts for the necessary conditions of structural stability and robotic feasibility that cannot be effectively reflected in the previous layer. Lastly, a grasp and motion planner transforms the desired assembly commands into robot joint movements. We demonstrate our proposed method’s performance on a set of competitive simulated RAD environments, showcase real-world transfer, and report performance and robustness gains compared to an unstructured end-to-end approach.

  •     Funk, N.; Menzenbach, S.; Chalvatzaki, G.; Peters, J. (2022). Graph-based Reinforcement Learning meets Mixed Integer Programs: An application to 3D robot assembly discovery, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
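The combinatorial structure of the high-level layer can be illustrated with a toy sketch: given Tetris-like block types and a small 2D target shape, search over block-type selection and placements until the shape is exactly tiled. The actual work uses a mixed-integer program for this step; the brute-force depth-first search below, with assumed block shapes, only illustrates the kind of problem that layer solves, not the method itself.

```python
# Toy stand-in for the high-level block-selection layer of the RAD pipeline.
# Block types are sets of (row, col) cell offsets -- assumed example shapes.
BLOCKS = {
    "I2": {(0, 0), (0, 1)},          # 1x2 bar
    "L3": {(0, 0), (1, 0), (1, 1)},  # small L-piece
}

def placements(target):
    """Enumerate (block_type, anchor, covered_cells) that fit inside target."""
    for name, cells in BLOCKS.items():
        for r, c in target:
            placed = frozenset((r + dr, c + dc) for dr, dc in cells)
            if placed <= target:
                yield name, (r, c), placed

def cover(target, chosen=()):
    """Depth-first search for placements that exactly tile `target`."""
    if not target:
        return list(chosen)
    for name, anchor, placed in placements(target):
        result = cover(target - placed, chosen + ((name, anchor),))
        if result is not None:
            return result                # found a complete tiling
    return None                          # dead end -> backtrack

target = frozenset({(0, 0), (0, 1), (1, 0), (1, 1)})  # 2x2 square
plan = cover(target)                     # e.g. two "I2" bars
```

In the paper, the chosen placements are then handed to the RL policy, which checks structural stability and robotic feasibility that a purely geometric search like this one ignores.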

Reinforcement Learning and Tactile Sensing for Robotic Assembly. Construction is an industry that could benefit significantly from automation, yet still relies heavily on manual human labor. We therefore investigate how a robotic arm can autonomously assemble a structure from predefined discrete building blocks. Since assembling structures is a challenging task that involves complex contact dynamics, we propose to use a combination of reinforcement learning and planning. In this work, we take a first step towards autonomous construction by training a controller to place a single building block in simulation. Our evaluations show that trial-and-error algorithms with minimal prior knowledge about the problem to be solved, so-called model-free deep reinforcement learning algorithms, can be successfully employed. We conclude that the achieved results, albeit imperfect, serve as a proof of concept and indicate directions for further research to enable more complex assemblies involving multiple building elements.

  •     Belousov, B.; Wibranek, B.; Schneider, J.; Schneider, T.; Chalvatzaki, G.; Peters, J.; Tessmann, O. (2022). Robotic Architectural Assembly with Tactile Skills: Simulation and Optimization, Automation in Construction, 133, pp.104006.
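The core idea of model-free, trial-and-error learning can be sketched with a minimal tabular example: an agent nudges a block left or right along a discrete line until it rests on a goal cell, learning purely from reward. The paper uses model-free deep RL with realistic contact dynamics in simulation; this toy, with an assumed grid size and reward scheme, only illustrates learning without a prior model of the task.

```python
# Minimal tabular Q-learning sketch of "place one block" as trial and error.
# Grid size, goal cell, and rewards are illustrative assumptions.
import random

N, GOAL = 7, 4                      # line of N cells, target resting cell
ACTIONS = (-1, +1)                  # nudge block left / right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}

def step(s, a):
    """Move the block, clamped to the line; reaching GOAL ends the episode."""
    s2 = min(max(s + a, 0), N - 1)
    done = s2 == GOAL
    return s2, (1.0 if done else -0.1), done

random.seed(0)
alpha, gamma, eps = 0.5, 0.95, 0.2
for _ in range(500):                # training episodes
    s = random.randrange(N)
    for _ in range(20):
        a = (random.choice(ACTIONS) if random.random() < eps
             else max(ACTIONS, key=lambda a: Q[(s, a)]))
        s2, r, done = step(s, a)
        best = max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best - Q[(s, a)])
        s = s2
        if done:
            break

# Greedy policy after training: push toward the goal from either side.
greedy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N)}
```

Replacing the table with a neural network and the toy transition with a contact-rich physics simulator recovers the deep-RL setting studied in the paper.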

Robot Grasping

Combining active learning and reactive control for robot grasping. Grasping an object is a task that inherently needs to be treated in a hybrid fashion. The system must decide both where and how to grasp the object. While selecting where to grasp requires learning about the object as a whole, the execution only needs to reactively adapt to the context close to the grasp’s location. We propose a hierarchical controller that reflects the structure of these two sub-problems and attempts to learn solutions that work for both. The controller employs a hybrid architecture that draws on several machine learning methods capable of coping with the large amount of uncertainty inherent to the task. The controller’s upper level selects where to grasp the object using a reinforcement learner, while the lower level comprises an imitation learner and a vision-based reactive controller to determine appropriate grasping motions. The resulting system is able to quickly learn good grasps of a novel object in an unstructured environment, executing smooth reaching motions and preshaping the hand depending on the object’s geometry. The system was evaluated both in simulation and on a real robot.

  •     Kroemer, O.; Detry, R.; Piater, J.; Peters, J. (2010). Combining Active Learning and Reactive Control for Robot Grasping, Robotics and Autonomous Systems, 58, 9, pp.1105-1116.
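The upper level's "where to grasp" decision can be viewed as an exploration-exploitation problem over candidate grasp locations. The sketch below frames it as a UCB bandit: pick a candidate, observe success or failure, and update. The lower level (imitation learning plus reactive control) is abstracted into fixed per-candidate success probabilities, which, like the candidate set itself, are assumptions for this demo rather than the paper's actual learner.

```python
# Toy UCB bandit over discrete grasp candidates, standing in for the
# upper-level reinforcement learner. Success probabilities are assumed;
# in the real system they come from executing the lower-level controller.
import math
import random

random.seed(1)
p_success = [0.2, 0.5, 0.9]         # hidden quality of each grasp candidate
counts = [0] * len(p_success)       # attempts per candidate
values = [0.0] * len(p_success)     # running mean success rate

def pick(t):
    """UCB1: try each candidate once, then trade value against uncertainty."""
    for i, n in enumerate(counts):
        if n == 0:
            return i
    return max(range(len(counts)),
               key=lambda i: values[i] + math.sqrt(2 * math.log(t) / counts[i]))

for t in range(1, 301):             # 300 simulated grasp attempts
    i = pick(t)
    reward = 1.0 if random.random() < p_success[i] else 0.0
    counts[i] += 1
    values[i] += (reward - values[i]) / counts[i]  # incremental mean update
```

After a few hundred attempts, most pulls concentrate on the high-quality candidate while the uncertainty bonus keeps occasionally re-checking the others, mirroring the active-learning behavior the paper exploits to quickly find good grasps on novel objects.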