(:requiretuid:)
Vorlesung: Robot Learning
Note
- All important announcements will be made via Moodle! The Moodle course is currently being prepared.
- The first lecture will be on Monday, 16. Oct. 2023, at 9:50am, in S202 | C205 (Bosch Hörsaal)
Quick Facts
Lecturer: | Jan Peters |
Teaching Assistants: | Firas Al-Hafez, Berk Güler, Paul Jansonnie, Michael Drolet, Nico Bohlinger, Maximilian Tölle |
Classes: | Lecture Hall / KI Campus Platform (provides recordings of the 2021 version) Click Here |
Language: | English |
Questions: | rol-team@ias.informatik.tu-darmstadt.de |
Office Hours: | TBA |
TU-CAN: | 20-00-0629-vl Lernende Roboter, 20-00-1113-vl Maschinelles Lernen für Robotik & Mechatronik |
Moodle: | Click Here |
Credits: | 6.0 |
Exam: | TBA |
Description
In the 1980s, classical robotics had already reached a high level of maturity and enabled large automated factories; car factories, for example, were completely automated. Despite these impressive achievements, and unlike personal computers, modern service robots have still not left the factories to take their place at our side as robot companions. The reason is that it is still much harder for us to program robots than computers. Instead, modern companion robots learn their duties through a mixture of imitation and trial-and-error. This new way of programming robots has a crucial consequence for industry: programming costs increase, making mass production impossible.
In research, however, this approach has had a great influence, and over the last ten years virtually all top universities in the world have conducted research in this area. The success of these new methods has been demonstrated in a variety of sample scenarios: autonomous helicopters learning complex maneuvers from human teachers, walking robots learning impressive balancing skills, self-driving cars hurtling at high speed around racetracks, humanoid robots balancing a bar in their hand, and anthropomorphic arms cooking pancakes.

We pay particular attention to interaction with the participants of the lecture, asking many questions and appreciating enthusiastic students.
We also offer a parallel project, the Robot Learning: Integrated Project. It is designed to enable participants to understand robot learning in its full depth by directly applying the methods presented in this class to real or simulated robots. We encourage motivated students to attend it as well, either during or after the Robot Learning class!
Contents
The course gives an introduction to robotics and machine learning methods. The following topics are expected to be covered throughout the semester.
- Robotics Basics
- Machine Learning Basics
- Model Learning
- Imitation Learning
- Optimal Decision Making
- Optimal Control
- Reinforcement Learning
- Policy Search
- Inverse Reinforcement Learning
Requirements
Mathematics from the first semesters, basic programming skills, and computer science basics.
Literature
The most important books for this class are:
- B. Siciliano, L. Sciavicco, L. Villani, G. Oriolo. Robotics: Modelling, Planning and Control, Springer
- C.M. Bishop. Pattern Recognition and Machine Learning, Springer free online copy
- R. Sutton, A. Barto. Reinforcement Learning - An Introduction, MIT Press free online copy
Additionally, the following papers are useful for specific topics:
- Deisenroth, M. P.; Neumann, G.; Peters, J. (2013). A Survey on Policy Search for Robotics, Foundations and Trends in Robotics, 21, pp.388-403.
- Kober, J.; Bagnell, D.; Peters, J. (2013). Reinforcement Learning in Robotics: A Survey, International Journal of Robotics Research (IJRR), 32, 11, pp.1238-1274.
- Nguyen-Tuong, D.; Peters, J. (2011). Model Learning in Robotics: a Survey, Cognitive Processing, 12, 4.
Teaching Staff
The lecturer will be Jan Peters, supported by the teaching assistants Firas Al-Hafez, Berk Güler, Paul Jansonnie, Michael Drolet, Nico Bohlinger, and Maximilian Tölle.

Jan Peters heads the Intelligent Autonomous Systems Lab at the Department of Computer Science at the TU Darmstadt. Jan has studied computer science, electrical, control, mechanical and aerospace engineering.

Firas Al-Hafez is a second-year Ph.D. researcher in the Intelligent Autonomous Systems Group at TU Darmstadt. Firas' main research interests include reinforcement learning and imitation learning to advance humanoid locomotion.

Maximilian Tölle is a first-year Ph.D. researcher in the Intelligent Autonomous Systems Group at TU Darmstadt. Additionally, he is part of the research department SAIROL at DFKI. His research focuses on language-conditioned manipulation, especially learning generalizable task representations for robotic control.

Nico Bohlinger is a first-year Ph.D. researcher in the Intelligent Autonomous Systems Group at TU Darmstadt. He works on machine learning for locomotion, with a current focus on reinforcement learning methods and quadrupedal robots.

Paul Jansonnie is a first-year Ph.D. researcher in the Intelligent Autonomous Systems Group at TU Darmstadt, in collaboration with Naver Labs Europe. He is mainly interested in robotic manipulation and is currently working on soft-body manipulation.

Michael Drolet is a first-year Ph.D. researcher in the Intelligent Autonomous Systems Group at TU Darmstadt. He is interested in adversarial imitation learning, with a focus on bipedal locomotion.

Berk Güler is a first-year Ph.D. researcher in the Intelligent Autonomous Systems Group at TU Darmstadt. He is interested in autonomy-infused teleoperation for dexterous object manipulation.
Tutors
TBA
For further inquiries, do not hesitate to contact us!