Robot Learning: Integrated Project 1+2

The IP Project continues in the Summer Semester 2024. The introduction presentation will be held both in person and via Zoom on 18.04.2024 at 15:20 (see the Meetings section below for details). If you require more information, please contact us as soon as possible.

Outstanding students who have completed 20-00-0753-pj Robot Learning: Integrated Project - Part 1 and 20-00-0754-pj Robot Learning: Integrated Project - Part 2 will be considered for the 20-00-1108-pp Expert Lab on Robot Learning.

In the Robot Learning: Integrated Project, we offer highly motivated students the possibility to deepen their knowledge in robotics and machine learning. The project allows participants to understand these fields in greater depth and provides hands-on experience. For this purpose, new methods from the Robot Learning lecture are implemented and shown to be useful for solving learning problems in robotics applications. Participants work in groups of 2-3 people and are supervised by one or more members of the Intelligent Autonomous Systems Lab. Students may focus on more theoretical or more practical work, depending on their preference and on the available projects. We are always delighted to answer questions, so please contact us, for example by email!

Part 1: Literature Review, Toy Evaluations and Simulation Studies. In the Robot Learning: Integrated Project - Part 1, students tackle a current problem in robot learning with the aid of their supervisor. The students conduct a literature review corresponding to their research interests. Based on this preliminary work, a project plan is developed, the necessary algorithms are tested, and a prototypical realization in simulation is created.

Part 2: Evaluation and Submission to a Conference. In the Robot Learning: Integrated Project - Part 2, the solutions from Part 1 are completed and applied to a real robot. A scientific article covering the problem, methods, and results is written and submitted to a high-quality scientific conference or journal. For Part 2, numerous exciting robot systems are available to students.

Requirements and Background Knowledge. Mathematics from your undergraduate studies (calculus, statistics), programming experience (project dependent, but usually C/C++ or Python), and computer science fundamentals (algorithms). Simultaneous or previous attendance of the Statistical Machine Learning and Robot Learning lectures is recommended!

Meetings & Deadlines

  • Introduction & Topics presentations: 18.04.2024, 15:20-16:45, Room S2|02 / C110 and Zoom
  • Project Application: 25.04.2024, 23:59 CEST
  • Topic assignment: 02.05.2024, 23:59 CEST
  • Report Submission: 05.09.2024, 23:59 CEST
  • Review Submission: 12.09.2024, 23:59 CEST
  • Final Report Submission: 19.09.2024, 23:59 CEST
  • Final Presentation: 26.09.2024, time TBD, Room TBD and Zoom TBD

Project Application

Please apply for a project from the list of offered projects by writing a short motivation letter. In the letter, please address the following questions:

  1. Which project would you like to try and why?
  2. Why do you think this project is important?
  3. What helpful background do you have for the project and what makes you special for that project?
  4. Your academic aspirations: 1 semester? 2 semesters? Future thesis?

See the Introduction and Topics slides for the rules and the list of projects! Participants may apply for at most two projects; please indicate your priority for the two. If you already have a group, please send a joint email.

To apply for a project, please send your project wishes by email to the project supervisor and cc Kay Hansel by Thursday, 25.04.2024, 23:59 CEST. After an internal discussion among the potential supervisors, it will be decided which topics are assigned to which students. Unfortunately, some students may not get a topic! Please get in contact with the supervisor beforehand to learn more about the project.

Report, Review and Presentation

At the end of the IP project (either Part 1 or 2), your group is expected to produce four outputs:

  1. Submit a final report in the ICML or IROS style as a PDF file. Papers are limited to eight pages, including figures and tables; additional pages containing only cited references and an appendix (no page limit) are allowed.
  2. Send your report to Kay Hansel and your supervisors.
  3. Review and evaluate the report of another group. This process emulates the standard review procedure at scientific conferences.
  4. Give a final presentation on your work (10 minutes presentation + 5 minutes questions).

Your project will be marked based on the following Grading Criteria:

  • Creativity and effort on the project
  • Technical quality and robustness of the solution
  • Grasp and presentation of the big picture
  • Quality of the report
  • Quality of the presentation

Offered IP Projects

TBA at the Introduction Lecture 

Ongoing IP Projects

  • Visuotactile Shear Force Estimation | Erik Helmut, Luca Dziarski | Niklas Funk, Boris Belousov
  • AI Olympics with RealAIGym | Tim Faust, Erfan Aghadavoodi, Habib Maraqten | Boris Belousov
  • Solving Insertion Tasks with RL in the Real World | Nick Striebel, Adriaan Mulder | Joao Carvalho
  • Hands-on Control | Erik Gattung, Noah Becker | Tim Schneider, Kay Hansel
  • Student-Teacher Learning for simulated Quadrupeds | Keagan Holmes, Oliver Griess, Oliver Grein | Nico Bohlinger
  • Reinforcement Learning for Contact Rich Manipulation | Dustin Gorecki | Aiswarya Menon, Arjun Datta, Suman Pal
  • Autonomous Gearbox Assembly by Assembly by Disassembly | Jonas Chaselon, Julius Thorwarth | Aiswarya Menon, Felix Kaiser, Arjun Datta
  • Learning to Assemble from Instruction Manual | Erik Rothenbacker, Leon De Andrade, Simon Schmalfuss | Aiswarya Menon, Vignesh Prasad, Felix Kaiser, Arjun Datta
  • Black-box System Identification of the Air Hockey Table | Shihao Li, Yu Wang | Theo Gruner, Puze Liu
  • Unveiling the Unseen: Tactile Perception and Reinforcement Learning in the Real World | Alina Böhm, Inga Pfenning, Janis Lenz | Daniel Palenicek, Theo Gruner, Tim Schneider
  • XXX: eXploring X-Embodiment with RT-X | Daniel Dennert, Christian Scherer, Faran Ahmad | Daniel Palenicek, Theo Gruner, Tim Schneider, Maximilian Tölle
  • Kinodynamic Neural Planner for Robot Air Hockey | Niclas Merten | Puze Liu
  • XXX: eXploring X-Embodiment with RT-X | Tristan Jacobs | Daniel Palenicek, Theo Gruner, Tim Schneider, Maximilian Tölle
  • Reactive Human-to-Robot Handovers | Fabian Hahne | Vignesh Prasad, Alap Kshirsagar
  • Analysis of multimodal goal prompts for robot control | Max Siebenborn | Aditya Bhatt, Maximilian Tölle
  • Q-Ensembles as a Multi-Armed Bandit | Henrik Metternich | Ahmed Hendawy, Carlo D'Eramo
  • Control Barrier Functions for Assistive Teleoperation | Yihui Huang, Yuanzheng Sun | Berk Gueler, Kay Hansel
  • Hands-on Control: Tactile Feedback for Remote Robot Assembly | Noah Becker, Chunyao Zhu, Kyrylo Sovailo | Tim Schneider, Kay Hansel
  • Learning Torque Control for Quadrupeds | Daniel Schmidt, Lina Gaumann | Nico Bohlinger
  • Robot Learning for Dynamic Motor Skills | Marcus Kornamann, Qimeng He | Kai Ploeger, Alap Kshirsagar

Completed IP Projects

  • Pendulum Acrobatics | Florian Wolf | Kai Ploeger, Pascal Klink
  • Learn to play Tangram | Max Zimmermann, Marius Zöller, Andranik Aristakesyan | Kay Hansel, Niklas Funk
  • Characterizing Fear-induced Adaptation of Balance by Inverse Reinforcement Learning | Zeyuan Sun | Alap Kshirsagar, Firas Al-Hafez
  • Tactile Environment Interaction | Changqi Chen, Simon Muchau, Jonas Ringsdorf | Niklas Funk
  • Latent Generative Replay in Continual Learning | Marcel Mittenbühler | Ahmed Hendawy, Carlo D'Eramo
  • Memory-Free Continual Learning | Dhruvin Vadgama | Ahmed Hendawy, Carlo D'Eramo
  • Simulation of Vision-based Tactile Sensors | Duc Huy Nguyen | Boris Belousov, Tim Schneider
  • Learning Bimanual Robotic Grasping | Hanjo Schnellbächer, Christoph Dickmanns | Julen Urain De Jesus, Alap Kshirsagar
  • Learning Deep probability fields for planning and control | Felix Herrmann, Sebastian Zach | Davide Tateo, Georgia Chalvatzaki, Jacopo Banfi
  • On Improving the Reliability of the Baseline Agent for Robotic Air Hockey | Haozhe Zhu | Puze Liu
  • Self-Play Reinforcement Learning for High-Level Tactics in Robot Air Hockey | Yuheng Ouyang | Puze Liu, Davide Tateo
  • Kinodynamic Neural Planner for Robot Air Hockey | Niclas Merten | Puze Liu
  • Robot Drawing With a Sense of Touch | Noah Becker, Zhijingshui Yang, Jiaxian Peng | Boris Belousov, Mehrzad Esmaeili
  • Black-Box System Identification of the Air Hockey Table | Anna Klyushina, Marcel Rath | Theo Gruner, Puze Liu
  • Autonomous Basil Harvesting | Jannik Endres, Erik Gattung, Jonathan Lippert | Aiswarya Menon, Felix Kaiser, Arjun Vir Datta, Suman Pal
  • Latent Tactile Representations for Model-Based RL | Eric Krämer | Daniel Palenicek, Theo Gruner, Tim Schneider
  • Model Based Multi-Object 6D Pose Estimation | Helge Meier | Felix Kaiser, Arjun Vir Datta, Suman Pal
  • Reinforcement Learning for Contact Rich Manipulation | Noah Farr, Dustin Gorecki | Aiswarya Menon, Arjun Vir Datta, Suman Pal
  • Measuring Task Similarity using Learned Features | Henrik Metternich | Ahmed Hendawy, Pascal Klink, Carlo D'Eramo
  • Interactive Semi-Supervised Action Segmentation | Martina Gassen, Erik Prescher, Frederic Metzler | Lisa Scherf, Felix Kaiser, Vignesh Prasad
  • Kinematically Constrained Humanlike Bimanual Robot Motion | Yasemin Göksu, Antonio De Almeida Correia | Vignesh Prasad, Alap Kshirsagar
  • Control and System identification for Unitree A1 | Lu Liu | Junning Huang, Davide Tateo
  • System identification and control for Telemax manipulator | Kilian Feess | Davide Tateo, Junning Huang
  • Tactile Active Exploration of Object Shapes | Irina Rath, Dominik Horstkötter | Tim Schneider, Boris Belousov, Alap Kshirsagar
  • Object Hardness Estimation with Tactile Sensors | Mario Gomez, Frederik Heller | Alap Kshirsagar, Boris Belousov, Tim Schneider
  • Task and Motion Planning for Sequential Assembly | Paul Hallmann, Nicolas Nonnengießer | Boris Belousov, Tim Schneider, Yuxi Liu
  • A Digital Framework for Interlocking SL-Blocks Assembly with Robots | Bingqun Liu | Mehrzad Esmaeili, Boris Belousov
  • Learn to play Tangram | Max Zimmermann, Dominik Marino, Maximilian Langer | Kay Hansel, Niklas Funk
  • Learning the Residual Dynamics using Extended Kalman Filter for Puck Tracking | Haoran Ding | Puze Liu, Davide Tateo
  • ROS Integration of Mitsubishi PA 10 robot | Jonas Günster | Puze Liu, Davide Tateo
  • 6D Pose Estimation and Tracking for Ubongo 3D | Marek Daniv | Joao Carvalho, Suman Pal
  • Task Taxonomy for robots in household | Amin Ali, Xiaolin Lin | Snehal Jauhri, Ali Younes
  • Task and Motion Planning for Sequential Assembly | Paul Hallmann, Patrick Siebke, Nicolas Nonnengießer | Boris Belousov, Tim Schneider, Yuxi Liu
  • Learning the Gait for Legged Robot via Safe Reinforcement Learning | Joshua Johannson, Andreas Seidl Fernandez | Puze Liu, Davide Tateo
  • Active Perception for Mobile Manipulation | Sophie Lueth, Syrine Ben Abid, Amine Chouchane | Snehal Jauhri
  • Combining RL/IL with CPGs for Humanoid Locomotion | Henri Geiss | Firas Al-Hafez, Davide Tateo
  • Multimodal Attention for Natural Human-Robot Interaction | Aleksandar Tatalovic, Melanie Jehn, Dhruvin Vadgama, Tobias Gockel | Oleg Arenz, Lisa Scherf
  • Hybrid Motion-Force Planning on Manifolds | Chao Jin, Peng Yan, Liyuan Xiang | An Thai Le, Junning Huang
  • Stability analysis for control algorithms of Furuta Pendulum | Lu Liu, Jiahui Shi, Yuheng Ouyang | Junning Huang, An Thai Le
  • Multi-sensorial reinforcement learning for robotic tasks | Rickmer Krohn | Georgia Chalvatzaki, Snehal Jauhri
  • Learning Behavior Trees from Video | Nick Dannenberg, Aljoscha Schmidt | Lisa Scherf, Suman Pal
  • Subgoal-Oriented Shared Control | Zhiyuan Gao, Fengyun Shao | Kay Hansel
  • Learn to play Tangram | Maximilian Langer | Kay Hansel, Niklas Funk
  • Theory of Mind Models for HRI under partial Observability | Franziska Herbert, Tobias Niehues, Fabian Kalter | Dorothea Koert, Joni Pajarinen, David Rother
  • Learning Safe Human-Robot Interaction | Zhang Zhang | Puze Liu, Snehal Jauhri
  • Active-sampling for deep Multi-Task RL | Fabian Wahren | Carlo D'Eramo, Georgia Chalvatzaki
  • Interpretable Reinforcement Learning | Patrick Vimr | Davide Tateo, Riad Akrour
  • Optimistic Actor Critic | Niklas Kappes, Pascal Herrmann | Joao Carvalho
  • Active Visual Search with POMDPs | Jascha Hellwig, Mark Baierl | Joao Carvalho, Julen Urain De Jesus
  • Utilizing 6D Pose-Estimation over ROS | Johannes Weyel | Julen Urain De Jesus
  • Learning Deep Heuristics for Robot Planning | Dominik Marino | Tianyu Ren
  • Learning Behavior Trees from Videos | Johannes Heeg, Aljoscha Schmidt, Adrian Worring | Suman Pal, Lisa Scherf
  • Learning Decisions by Imitating Human Control Commands | Jonas Günster, Manuel Senge | Junning Huang
  • Combining Self-Paced Learning and Intrinsic Motivation | Felix Kaiser, Moritz Meser, Louis Sterker | Pascal Klink
  • Self-Paced Reinforcement Learning for Sim-to-Real | Fabian Damken, Heiko Carrasco | Fabio Muratore
  • Policy Distillation for Sim-to-Real | Benedikt Hahner, Julien Brosseit | Fabio Muratore
  • Neural Posterior System Identification | Theo Gruner, Florian Wiese | Fabio Muratore, Boris Belousov
  • Synthetic Dataset Generation for Articulation Prediction | Johannes Weyel, Niklas Babendererde | Julen Urain, Puze Liu
  • Guided Dimensionality Reduction for Black-Box Optimization | Marius Memmel | Puze Liu, Davide Tateo
  • Learning Laplacian Representations for continuous MCTS | Daniel Mansfeld, Alex Ruffini | Tuan Dam, Georgia Chalvatzaki, Carlo D'Eramo
  • Object Tracking using Depth Camera | Leon Magnus, Svenja Menzenbach, Max Siebenborn | Niklas Funk, Boris Belousov, Georgia Chalvatzaki
  • GNNs for Robotic Manipulation | Fabio d'Aquino Hilt, Jan Kolf, Christian Weiland | Joao Carvalho
  • Benchmarking advances in MCTS in Go and Chess | Lukas Schneider | Tuan Dam, Carlo D'Eramo
  • Architectural Assembly: Simulation and Optimization | Jan Schneider | Boris Belousov, Georgia Chalvatzaki
  • Probabilistic Object Tracking using Depth Camera | Jan Emrich, Simon Kiefhaber | Niklas Funk, Boris Belousov, Georgia Chalvatzaki
  • Bayesian Optimization for System Identification in Robot Air Hockey | Chen Xue, Verena Sieburger | Puze Liu, Davide Tateo
  • Benchmarking MPC Solvers in the Era of Deep Reinforcement Learning | Darya Nikitina, Tristan Schulz | Joe Watson
  • Enhancing Attention Aware Movement Primitives | Artur Kruk | Dorothea Koert
  • Towards Semantic Imitation Learning | Pengfei Zhao | Julen Urain, Georgia Chalvatzaki
  • Can we use Structured Inference Networks for Human Motion Prediction? | Hanyu Sun, Liu Lanmiao | Julen Urain, Georgia Chalvatzaki
  • Reinforcement Learning for Architectural Combinatorial Optimization | Jianpeng Chen, Yuxi Liu, Martin Knoll, Leon Wietschorke | Boris Belousov, Georgia Chalvatzaki, Bastian Wibranek
  • Architectural Assembly With Tactile Skills: Simulation and Optimization | Tim Schneider, Jan Schneider | Boris Belousov, Georgia Chalvatzaki, Bastian Wibranek
  • Bayesian Last Layer Networks | Jihao Andreas Lin | Joe Watson, Pascal Klink
  • BATBOT: BATter roBOT for Baseball | Yannick Lavan, Marcel Wessely | Carlo D'Eramo
  • Benchmarking Deep Reinforcement Learning | Benedikt Volker | Davide Tateo, Carlo D'Eramo, Tianyu Ren
  • Model Predictive Actor-Critic Reinforcement Learning of Robotic Tasks | Daljeet Nandha | Georgia Chalvatzaki
  • Dimensionality Reduction for Reinforcement Learning | Jonas Jäger | Michael Lutter
  • From exploration to control: learning object manipulation skills through novelty search and local adaptation | Leon Keller | Svenja Stark, Daniel Tanneberg
  • Robot Air-Hockey | Patrick Lutz | Puze Liu, Davide Tateo
  • Learning Robotic Grasp of Deformable Object | Mingye Zhu, Yanhua Zhang | Tianyu Ren
  • Teach a Robot to solve puzzles with intrinsic motivation | Ali Karpuzoglu | Georgia Chalvatzaki, Svenja Stark
  • Inductive Biases for Robot Learning | Rustam Galljamov | Boris Belousov, Michael Lutter
  • Accelerated Mirror Descent Policy Search | Maximilian Hensel | Boris Belousov, Tuan Dam
  • Foundations of Adversarial and Robust Learning | Janosch Moos, Kay Hansel | Svenja Stark, Hany Abdulsamad
  • Likelihood-free Inference for Reinforcement Learning | Maximilian Hensel, Kai Cui | Boris Belousov
  • Risk-Aware Reinforcement Learning | Maximilian Kircher, Angelo Campomaggiore, Simon Kohaut, Dario Perrone | Samuele Tosatto, Dorothea Koert
  • Jonas Eschmann, Robin Menzenbach, Christian Eilers | Boris Belousov, Fabio Muratore
  • Learning Symbolic Representations for Abstract High-Level Planning | Zhiyuan Hu, Claudia Lölkes, Haoyi Yang | Svenja Stark, Daniel Tanneberg
  • Learning Perceptual ProMPs for Catching Balls | Axel Patzwahl | Dorothea Koert, Michael Lutter
  • Bayesian Inference for Switching Linear Dynamical Systems | Markus Semmler, Stefan Fabian | Hany Abdulsamad
  • Characterization of WAM Dynamics | Kai Ploeger | Dorothea Koert, Michael Lutter
  • Deep Reinforcement Learning for playing Starcraft II | Daniel Palenicek, Marcel Hussing, Simon Meister | Filipe Veiga
  • Enhancing Exploration in High-Dimensional Environments | Lu Wan, Shuo Zhang | Simone Parisi
  • Building a Grasping Testbed | Devron Williams | Oleg Arenz
  • Online Dynamic Model Learning | Pascal Klink | Hany Abdulsamad, Alexandros Paraschos
  • Spatio-spectral Transfer Learning for Motor Performance Estimation | Karl-Heinz Fiebig | Daniel Tanneberg
  • From Robots to Cobots | Michael Burkhardt, Moritz Knaust, Susanne Trick | Dorothea Koert, Marco Ewerton
  • Learning Hand-Kinematics | Sumanth Venugopal, Deepak Singh Mohan | Gregor Gebhardt
  • Goal-directed reward generation | Alymbek Sadybakasov | Boris Belousov
  • Learning Grammars for Sequencing Movement Primitives | Kim Berninger, Sebastian Szelag | Rudolf Lioutikov
  • Learning Deep Feature Spaces for Nonparametric Inference | Philipp Becker | Gregor Gebhardt
  • Lazy skill learning for cleaning up a table | Lejla Nukovic, Moritz Fuchs | Svenja Stark
  • Reinforcement Learning for Gait Learning in Quadrupeds | Kai Ploeger, Zinan Liu | Svenja Stark
  • Optimal Control for Biped Locomotion | Martin Seiler, Max Kreischer | Hany Abdulsamad
  • Semi-Autonomous Tele-Operation | Nick Heppert, Marius, Jeremy Tschirner | Oleg Arenz
  • Teaching People how to Write Japanese Characters | David Rother, Jakob Weimar, Lars Lotter | Marco Ewerton
  • Local Bayesian Optimization | Dmitry Sorokin | Riad Akrour
  • Bayesian Deep Reinforcement Learning - Tools and Methods | Simon Ramstedt | Simone Parisi
  • Controlled Slip for Object Release | Steffen Kuchelmeister, Albert Schotschneider | Filipe Veiga
  • Learn intuitive physics from videos | Yunlong Song, Rong Zhi | Boris Belousov
  • Learn an Assembling Task with Swarm Robots | Kevin Daun, Marius Schnaubelt | Gregor Gebhardt
  • Learning To Sequence Movement Primitives | Christoph Mayer | Christian Daniel
  • Learning versatile solutions for Table Tennis | Felix End | Gerhard Neumann, Riad Akrour
  • Learning to Control Kilo-Bots with a Flashlight | Alexander Hendrich, Daniel Kauth | Gregor Gebhardt
  • Playing Badminton with Robots | J. Tang, T. Staschewski, H. Gou | Boris Belousov
  • Juggling with Robots | Elvir Sabic, Alexander Wölker | Dorothea Koert
  • Learning and control for the bipedal walker FaBi | Manuel Bied, Felix Treede, Felix Pels | Roberto Calandra
  • Finding visual kernels | Fide Marten, Dominik Dienlin | Herke van Hoof
  • Feature Selection for Tetherball Robot Games | Xuelei Li, Jan Christoph Klie | Simone Parisi
  • Inverse Reinforcement Learning of Flocking Behaviour | Maximilian Maag, Robert Pinsler | Oleg Arenz
  • Control and Learning for a Bipedal Robot | Felix Treede, Phillip Konow, Manuel Bied | Roberto Calandra
  • Perceptual coupling with ProMPs | Johannes Geisler, Emmanuel Stapf | Alexandros Paraschos
  • Learning to balance with the iCub | Moritz Nakatenus, Jan Geukes | Roberto Calandra
  • Generalizing Models for a Compliant Robot | Mike Smyk | Herke van Hoof
  • Learning Minigolf with the BioRob | Florian Brandherm | Marco Ewerton
  • iCub Telecontrol | Lars Fritsche, Felix Unverzagt | Roberto Calandra
  • REPS for maneuvering in Robocup | Jannick Abbenseth, Nicolai Ommer | Christian Daniel
  • Learning Ball on a Beam on the KUKA lightweight arms | Bianca Loew, Daniel Wilberts | Christian Daniel
  • Sequencing of DMPs for Task- and Motion Planning | Markus Sigg, Fabian Faller | Rudolf Lioutikov
  • Tactile Exploration and Mapping | Thomas Arnreich, Janine Hoelscher | Tucker Hermans
  • Multiobjective Reinforcement Learning on Tetherball BioRob | Alexander Blank, Tobias Viernickel | Simone Parisi
  • Semi-supervised Active Grasp Learning | Simon Leischnig, Stefan Lüttgen | Oliver Kroemer

Other past project ideas were:

  • Trajectory vs Step-based Learning with Herke van Hoof (Slides)
  • Relational Skill Sequencing with Rudolf Lioutikov (Slides)
  • Learning to walk in complex environments with Roberto Calandra (Slides)
  • Controlled slip with Herke van Hoof and Filipe Veiga (Slides)
  • Object rolling with Herke van Hoof (Slides)
  • Learning Transient Movements with Alexandros Paraschos (Slides)


Jan Peters heads the Intelligent Autonomous Systems Lab at the Department of Computer Science of TU Darmstadt. Jan has studied computer science, electrical, control, mechanical, and aerospace engineering. You can find him in the Robert-Piloty building (S2|02), room E314.

Exercises

Exercise 1: Download and install SL. Follow the instructions in the readme file (attached to the source code). If there are any problems, please get in touch with one of the advisers. (This exercise can be completed before the first lecture. Alternatively, if you bring a laptop with a Unix-based system, the advisers will help you with the installation during the first lecture.)
Exercise 2: Solve the ball following task (sl -> barrett -> src -> ball_following). You are required to implement a movement pattern in which the robot follows a ball with its end effector. The task is partially written (see ball_following_task.c); you only have to implement the controller.
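The actual task is written in C inside SL, but the control law itself is small. Below is a minimal Python sketch of the kind of Cartesian PD controller the exercise asks for; the gains `KP`, `KD`, the function name `pd_control`, and the example states are illustrative assumptions, not values from the SL source.

```python
import numpy as np

# Illustrative gains -- the real values must be tuned in SL.
KP, KD = 20.0, 2.0

def pd_control(x, xd, x_des, xd_des=None):
    """PD law: u = Kp * (x_des - x) + Kd * (xd_des - xd).

    x, xd: current end-effector position and velocity.
    x_des, xd_des: ball position and (optionally) ball velocity.
    """
    x, xd, x_des = map(np.asarray, (x, xd, x_des))
    if xd_des is None:
        xd_des = np.zeros_like(xd)  # treat the ball as stationary
    return KP * (x_des - x) + KD * (xd_des - xd)

# End effector at rest at the origin, ball at (0.5, 0.2, 0.0):
u = pd_control(np.zeros(3), np.zeros(3), [0.5, 0.2, 0.0])
print(u)  # command pointing toward the ball
```

Calling this at every control tick with the current ball position as `x_des` yields the following behavior; in the C task the same expression would fill the commanded accelerations.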
Exercise 3: Solve the ball on a beam task (sl -> barrett -> ball_on_beam). You have to implement the finite-difference gradient method in Matlab (BallOnBeam_example.m) to find the PD parameters that allow the robot to balance a ball on a beam.
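The exercise asks for a Matlab implementation, but the finite-difference idea is language-independent. The Python sketch below estimates the gradient of a rollout cost with central differences and descends it; the quadratic `rollout_cost` is a hypothetical stand-in for running the ball-on-beam simulation with given PD gains, and the names, learning rate, and iteration count are assumptions for illustration.

```python
import numpy as np

def rollout_cost(gains):
    """Hypothetical stand-in for one simulated rollout: in the real
    exercise this would run the ball-on-beam task with the given PD
    gains and return the accumulated cost."""
    kp, kd = gains
    return (kp - 5.0) ** 2 + (kd - 1.2) ** 2

def fd_gradient(f, params, eps=1e-4):
    """Central finite-difference estimate of the gradient of f."""
    grad = np.zeros_like(params)
    for i in range(len(params)):
        step = np.zeros_like(params)
        step[i] = eps
        grad[i] = (f(params + step) - f(params - step)) / (2.0 * eps)
    return grad

def optimize(f, params, lr=0.1, iters=200):
    """Plain gradient descent using the finite-difference gradient."""
    params = np.asarray(params, dtype=float)
    for _ in range(iters):
        params = params - lr * fd_gradient(f, params)
    return params

gains = optimize(rollout_cost, [0.0, 0.0])
```

Because each gradient estimate only needs two extra cost evaluations per parameter, the same loop works unchanged when `rollout_cost` is replaced by an actual simulation rollout.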