(:requiretuid:)

Vorlesung: Robot Learning

Note

  • All important announcements will be made via Moodle! Please check it regularly!
  • The first lecture will be on Wednesday, 4 Nov. 2020, at 11:30, via Zoom.
  • You need to register for the Zoom lecture with the following link: https://tinyurl.com/y5eq7srt. After registering you will receive an email with the link to the Zoom room. Please use your @stud.tu-darmstadt.de email!

Quick Facts

Lecturer: Jan Peters
Teaching Assistants: Joe Watson, Joao Carvalho, Julen Urain De Jesus, and Tuan Dam
Classes: Over Zoom / AI Campus Platform
Language: English
Questions: rol-team@ias.informatik.tu-darmstadt.de
Office Hours: See Moodle

TU-CAN: 20-00-0629-vl Lernende Roboter TuCaN link
 20-00-1113-vl Maschinelles Lernen für Robotik & Mechatronik TuCaN link
Moodle: https://moodle.tu-darmstadt.de/course/view.php?id=24167
Credits: 6.0
Exam: Tue, 30 Mar. 2021, 15:30-18:30; check Moodle for relevant information

Description

(:youtube qtqubguikMk :) By the 1980s, classical robotics had already reached a high level of maturity and made it possible to automate large factories; car factories, for example, were fully automated. Despite these impressive achievements, and unlike personal computers, modern service robots have still not left the factories to take a seat at our side as robot companions. The reason is that it is still much harder for us to program robots than computers. Modern companion robots instead typically learn their duties through a mixture of imitation and trial-and-error. This new way of programming robots has a crucial consequence for industry: programming costs increase, making mass production impossible.

In research, however, this approach has been highly influential, and over the last ten years top universities all over the world have taken up research in this area. The success of these new methods has been demonstrated in a variety of scenarios: autonomous helicopters learning complex maneuvers from human teachers, walking robots learning impressive balancing skills, self-driving cars hurtling around racetracks at high speed, humanoid robots balancing a bar in their hand, and anthropomorphic arms cooking pancakes.

Accordingly, this class serves as an introduction to autonomous robot learning. The class focuses on approaches from the fields of robotics, machine learning, model learning, imitation learning, reinforcement learning and motor primitives. Application scenarios and major challenges in modern robotics will be presented as well.

We pay particular attention to interaction with the participants of the lecture: we ask many questions and appreciate enthusiastic students.

We also offer a parallel project, the Robot Learning: Integrated Project. It is designed to enable participants to understand robot learning in its full depth by directly applying the methods presented in this class to real or simulated robots. We encourage motivated students to attend it as well, either during or after the Robot Learning class!



Contents

The course gives an introduction to robotics and machine learning methods. The following topics are expected to be covered throughout the semester.

  • Robotics Basics
  • Machine Learning Basics
  • Model Learning
  • Imitation Learning
  • Optimal Decision Making
  • Optimal Control
  • Reinforcement Learning
  • Policy Search
  • Inverse Reinforcement Learning
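To give a small flavor of the trial-and-error learning covered in the Reinforcement Learning topic above, here is a minimal sketch of tabular Q-learning (as introduced in the Sutton & Barto book from the literature list) on a toy chain world. The environment, hyperparameters, and all names are illustrative choices, not material from the course itself.

```python
import random

# Toy deterministic chain MDP: states 0..4, actions 0 (left) / 1 (right).
# Reaching state 4 yields reward 1 and ends the episode.
N_STATES, N_ACTIONS, GOAL = 5, 2, 4

def step(state, action):
    next_state = min(state + 1, GOAL) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

def q_learning(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]  # tabular Q-values
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # epsilon-greedy exploration: the "trial-and-error" part
            if rng.random() < epsilon:
                action = rng.randrange(N_ACTIONS)
            else:
                action = max(range(N_ACTIONS), key=lambda a: q[state][a])
            next_state, reward, done = step(state, action)
            # Q-learning update: bootstrap from the greedy next-state value
            target = reward + (0.0 if done else gamma * max(q[next_state]))
            q[state][action] += alpha * (target - q[state][action])
            state = next_state
    return q

q = q_learning()
policy = [max(range(N_ACTIONS), key=lambda a: q[s][a]) for s in range(N_STATES)]
print(policy)  # greedy policy: move right toward the goal in states 0..3
```

The same update rule underlies many robot learning methods discussed in the lecture, although real robots require function approximation instead of a table.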

Requirements

Mathematics from the first semesters, basic programming skills, and computer science fundamentals.

Literature

The most important books for this class are:

  1. B. Siciliano, L. Sciavicco. Robotics: Modelling, Planning and Control, Springer
  2. C.M. Bishop. Pattern Recognition and Machine Learning, Springer free online copy
  3. R. Sutton, A. Barto. Reinforcement Learning - An Introduction, MIT Press free online copy

Additionally, the following papers are useful for specific topics:

    • Deisenroth, M. P.; Neumann, G.; Peters, J. (2013). A Survey on Policy Search for Robotics, Foundations and Trends in Robotics, 21, pp. 388-403.
    • Kober, J.; Bagnell, D.; Peters, J. (2013). Reinforcement Learning in Robotics: A Survey, International Journal of Robotics Research (IJRR), 32, 11, pp. 1238-1274.
    • Nguyen-Tuong, D.; Peters, J. (2011). Model Learning in Robotics: a Survey, Cognitive Processing, 12, 4.


Teaching Staff

The lecturer will be Jan Peters, supported by the teaching assistants Joao Carvalho, Joe Watson, Julen Urain De Jesus, and Tuan Dam.

Jan Peters heads the Intelligent Autonomous Systems Lab at the Department of Computer Science at the TU Darmstadt. Jan has studied computer science, electrical, control, mechanical and aerospace engineering.

Joao Carvalho is a Ph.D. student at the Intelligent Autonomous Systems (IAS) Group at the Technical University of Darmstadt. His current research focus is on Monte Carlo Gradient Estimators for Policy Gradient algorithms.


Joe Watson is a second year Ph.D. student at IAS, where he works towards Bayesian inference methods for model learning and control.


Julen Urain De Jesus is a second year Ph.D. researcher at the Intelligent Autonomous Systems (IAS) Group at the Technical University of Darmstadt. His main research line is in Probabilistic Modelling and Inductive Biases for Imitation Learning.


Tuan Dam is a second-year Ph.D. researcher at the Intelligent Autonomous Systems Group at TU Darmstadt. During his Ph.D., Tuan is developing principled methods that allow robots to operate in unstructured, partially observable real-world environments.


Tutors

Our Tutors for WS 2020 are Zhiyuan Hu and Chen Xue.

For further inquiries, do not hesitate to contact us!