Autonomous Systems Labs: Machine Learning for Intelligent Systems and Robotics

Upcoming Talks

Date: 22.02.2018, Time: 14:00-14:20, Location: S02|02 E202
Felix Treede, M.Sc., Intermediate Presentation: Reinforcement Learning from Perturbed Physics Simulations

Date: 01.03.2018, Time: 14:00-15:00, Location: S02|02 E202
Research Talk: Robotics at ABB Corporate Research

Date: 01.03.2018, Time: 15:00-15:15, Location: S02|02 E202
Invited Talk: Tobias Hammerschmidt - Recursion and A* for the Hamster Problem
Welcome to the Computational Learning for Autonomous Systems Group and the Intelligent Autonomous Systems Group of the Computer Science Department of the Technische Universitaet Darmstadt.

Our research centers on the goal of bringing advanced motor skills to robotics using techniques from machine learning and control. Please check out our research or contact any of our lab members. As we originated from the Robot Learning Lab (RoLL) in the Department for Empirical Inference and Machine Learning at the Max Planck Institute for Intelligent Systems, we also have a few members in Tuebingen. We also collaborate with several of the other excellent autonomous systems groups at TU Darmstadt, such as the Simulation, Systems Optimization and Robotics Group and the Locomotion Laboratory.

Creating autonomous robots that can learn to assist humans in situations of daily life is a fascinating challenge for machine learning. While this aim has been a long-standing vision of artificial intelligence and the cognitive sciences, we have yet to achieve the first step: creating robots that can learn to accomplish many different tasks triggered by environmental context or higher-level instruction. The goal of our robot learning laboratory is a general approach to motor skill learning that brings robots closer to human-like performance. We focus on solving fundamental problems in robotics while developing machine learning methods.

Computational Learning for Autonomous Systems (CLAS)

The new Computational Learning for Autonomous Systems (CLAS) group is headed by Gerhard Neumann, who has been an Assistant Professor at TU Darmstadt since September 2014. The main focus of the CLAS group is to investigate computational learning algorithms that allow artificial agents to autonomously learn new skills from interaction with the environment, humans or other agents. We believe that such autonomously learning agents will have a great impact in many areas of everyday life, for example, autonomous robots that help in the household, care for the elderly or dispose of dangerous goods.

An autonomously learning agent has to acquire a rich set of different behaviours to achieve a variety of goals. It has to learn autonomously how to explore its environment and which features are important for making a decision. It has to identify relevant behaviours and determine when to learn new ones. Furthermore, it needs to learn which goals are relevant and how to re-use behaviours in order to achieve new goals. To meet these objectives, our research concentrates on hierarchical and structured learning of robot control policies, information-theoretic methods for policy search, imitation learning and autonomous exploration, learning forward models for long-term predictions, autonomous cooperative systems and multi-agent systems, and the biological aspects of autonomous learning systems.
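
Much of this work can be viewed as policy search over the parameters of a behaviour. As a rough illustration of the idea only (not of the group's actual information-theoretic algorithms), the toy Python sketch below runs an episodic, reward-weighted search with a Gaussian distribution over parameters; the quadratic toy reward, the temperature eta and all other settings are assumptions made purely for this example.

    # Minimal toy sketch of episodic, reward-weighted policy search.
    # Illustration only: the quadratic toy reward, the temperature eta and the
    # Gaussian search distribution are assumptions for this example, not the
    # group's actual algorithms.
    import numpy as np

    def reward(theta):
        # Toy objective: parameters should move towards an (unknown) target.
        target = np.array([1.0, -0.5])
        return -np.sum((theta - target) ** 2)

    rng = np.random.default_rng(0)
    mean, cov = np.zeros(2), np.eye(2)   # Gaussian search distribution over parameters
    eta = 2.0                            # temperature: how greedily to exploit good samples

    for iteration in range(30):
        samples = rng.multivariate_normal(mean, cov, size=50)        # explore parameter space
        rewards = np.array([reward(s) for s in samples])
        weights = np.exp((rewards - rewards.max()) / eta)            # softmax-style weighting
        weights /= weights.sum()
        mean = weights @ samples                                     # weighted update of the mean
        diff = samples - mean
        cov = (weights[:, None] * diff).T @ diff + 1e-6 * np.eye(2)  # keep some exploration noise

    print("learned mean parameters:", mean)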

Intelligent Autonomous Systems (IAS)

In the Intelligent Autonomous Systems Group, headed by Jan Peters since July 2011 at TU Darmstadt and since May 2007 at the Max Planck Institute, we develop methods for learning models and control policies in real time, see, e.g., learning models for control and learning operational space control. We are particularly interested in reinforcement learning, where we try to push the state of the art further and have received tremendous support from the RL community. Much of our research relies on learning motor primitives that can be used both for elementary tasks and for complex applications such as grasping or sports.
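
To make the notion of a motor primitive more concrete, the toy sketch below rolls out a one-dimensional dynamic movement primitive, a common parameterization of elementary movements as a spring-damper system modulated by a learnable forcing term. The specific gains, basis functions and hand-set weights are assumptions for illustration only and are not code from our lab; in practice the forcing weights are learned, e.g., by imitation or reinforcement learning.

    # Minimal toy sketch of a one-dimensional dynamic movement primitive (DMP).
    # Illustration only: the gains, basis functions and hand-set weights below are
    # assumptions for this example.
    import numpy as np

    def dmp_rollout(y0, goal, weights, tau=1.0, dt=0.01, alpha=25.0, beta=6.25, alpha_x=3.0):
        n_basis = len(weights)
        centers = np.exp(-alpha_x * np.linspace(0.0, 1.0, n_basis))  # basis centers in phase space
        widths = n_basis ** 1.5 / centers
        y, v, x = y0, 0.0, 1.0                                       # position, velocity, phase
        trajectory = []
        for _ in range(int(tau / dt)):
            psi = np.exp(-widths * (x - centers) ** 2)               # radial basis activations
            forcing = (psi @ weights) / (psi.sum() + 1e-10) * x * (goal - y0)
            v += dt / tau * (alpha * (beta * (goal - y) - v) + forcing)  # spring-damper plus forcing term
            y += dt / tau * v
            x += dt / tau * (-alpha_x * x)                           # canonical system drives the phase to zero
            trajectory.append(y)
        return np.array(trajectory)

    # Example: a reaching movement from 0 to 1, shaped by a few hand-set weights.
    traj = dmp_rollout(y0=0.0, goal=1.0, weights=np.array([50.0, -20.0, 10.0, 0.0, 0.0]))
    print("final position:", traj[-1])  # ends close to the goal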

Some more information on us for the general public can be found in a long article in the Max Planck Research magazine, in short pieces in New Scientist, WIRED and Der Spiegel, as well as on the IEEE Blog on Robotics and Engadget.

Directions and Open Positions

If you are searching for our address or for directions to our lab, have a look at our contact information. We always have thesis opportunities for enthusiastic and driven Master's/Bachelor's students (please contact Jan Peters or Gerhard Neumann). Check out the currently offered theses (Abschlussarbeiten) or suggest one yourself, drop us a line by email or simply drop by! We also occasionally have open Ph.D. or Post-Doc positions, see OpenPositions.

News

  • New ICRA papers:
  1. Gebhardt, G.H.W.; Daun, K.; Schnaubelt, M.; Neumann, G. (2018). Robust Learning of Object Assembly Tasks with an Invariant Representation of Robot Swarms, Proceedings of the IEEE International Conference on Robotics and Automation (ICRA). [Details] [BibTeX]
  2. Pinsler, R.; Akrour, R.; Osa, T.; Peters, J.; Neumann, G. (2018). Sample and Feedback Efficient Hierarchical Reinforcement Learning from Human Preferences, Proceedings of the IEEE International Conference on Robotics and Automation (ICRA). [Details] [PDF] [BibTeX]
  3. Lioutikov, R.; Maeda, G.; Veiga, F.F.; Kersting, K.; Peters, J. (2018). Inducing Probabilistic Context-Free Grammars for the Sequencing of Robot Movement Primitives, Proceedings of the IEEE International Conference on Robotics and Automation (ICRA). [Details] [PDF] [BibTeX]
  4. Koert, D.; Maeda, G.; Neumann, G.; Peters, J. (2018). Learning Coupled Forward-Inverse Models with Combined Prediction Errors, Proceedings of the IEEE International Conference on Robotics and Automation (ICRA). [Details] [PDF] [BibTeX]
  • Julia Vinogradska will receive the Best Junior Scientist Award of the Stiftung Werner-von-Siemens-Ring
  • Daniel Tanneberg received the Hanns-Voith-Stiftungspreis 2017 for his master's thesis.
  • New HUMANOIDS papers:
  1. Tanneberg, D.; Peters, J.; Rueckert, E. (2017). Efficient Online Adaptation with Stochastic Recurrent Neural Networks, Proceedings of the International Conference on Humanoid Robots (HUMANOIDS). [Details] [PDF] [BibTeX]
  2. Stark, S.; Peters, J.; Rueckert, E. (2017). A Comparison of Distance Measures for Learning Nonparametric Motor Skill Libraries, Proceedings of the International Conference on Humanoid Robots (HUMANOIDS). [Details] [PDF] [BibTeX]
  • New CoRL papers:
  1. Maeda, G.; Ewerton, M.; Osa, T.; Busch, B.; Peters, J. (2017). Active Incremental Learning of Robot Movement Primitives, Proceedings of the Conference on Robot Learning (CoRL). [Details] [PDF] [BibTeX]
  2. Tanneberg, D.; Peters, J.; Rueckert, E. (2017). Online Learning with Stochastic Recurrent Neural Networks using Intrinsic Motivation Signals, Proceedings of the Conference on Robot Learning (CoRL). [Details] [PDF] [BibTeX]
  • New journal papers:
  1. Kroemer, O.; Leischnig, S.; Luettgen, S.; Peters, J. (accepted). A Kernel-based Approach to Learning Contact Distributions for Robot Manipulation Tasks, Autonomous Robots (AURO). [Details] [PDF] [BibTeX]
  2. Paraschos, A.; Daniel, C.; Peters, J.; Neumann, G. (accepted). Using Probabilistic Movement Primitives in Robotics, Autonomous Robots (AURO). [Details] [PDF] [BibTeX]
  3. Lioutikov, R.; Neumann, G.; Maeda, G.; Peters, J. (2017). Learning Movement Primitive Libraries through Probabilistic Segmentation, International Journal of Robotics Research (IJRR), 36, 8, pp.879-894. [Details] [PDF] [BibTeX]
  4. Maeda, G.; Ewerton, M.; Neumann, G.; Lioutikov, R.; Peters, J. (2017). Phase Estimation for Fast Action Recognition and Trajectory Generation in Human-Robot Collaboration, International Journal of Robotics Research (IJRR), 36, 13-14, pp.1579-1594. [Details] [PDF] [BibTeX]
  5. van Hoof, H.; Neumann, G.; Peters, J. (2017). Non-parametric Policy Search with Limited Information Loss, Journal of Machine Learning Research (JMLR), 18, 73, pp.1-46. [Details] [PDF] [BibTeX]
  6. van Hoof, H.; Tanneberg, D.; Peters, J. (2017). Generalized Exploration in Policy Search, Machine Learning (MLJ), 106, 9-10, pp.1705-1724. [Details] [PDF] [BibTeX]
  • New IROS papers accepted:
  1. Paraschos, A.; Lioutikov, R.; Peters, J.; Neumann, G. (2017). Probabilistic Prioritization of Movement Primitives, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), and IEEE Robotics and Automation Letters (RA-L). [Details] [PDF] [BibTeX]
  2. Parisi, S.; Ramstedt, S.; Peters, J. (2017). Goal-Driven Dimensionality Reduction for Reinforcement Learning, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). [Details] [PDF] [BibTeX]
  3. Busch, B.; Maeda, G.; Mollard, Y.; Demangeat, M.; Lopes, M. (2017). Postural Optimization for an Ergonomic Human-Robot Interaction, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). [Details] [BibTeX]
  4. Pajarinen, J.; Kyrki, V.; Koval, M.; Srinivasa, S.; Peters, J.; Neumann, G. (2017). Hybrid Control Trajectory Optimization under Uncertainty, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). [Details] [PDF] [BibTeX]
  • New ICRA papers accepted:
  1. Farraj, F. B.; Osa, T.; Pedemonte, N.; Peters, J.; Neumann, G.; Giordano, P.R. (2017). A Learning-based Shared Control Architecture for Interactive Task Execution, Proceedings of the IEEE International Conference on Robotics and Automation (ICRA). [Details] [PDF] [BibTeX]
  2. Wilbers, D.; Lioutikov, R.; Peters, J. (2017). Context-Driven Movement Primitive Adaptation, Proceedings of the IEEE International Conference on Robotics and Automation (ICRA). [Details] [PDF] [BibTeX]
  3. End, F.; Akrour, R.; Peters, J.; Neumann, G. (2017). Layered Direct Policy Search for Learning Hierarchical Skills, Proceedings of the IEEE International Conference on Robotics and Automation (ICRA). [Details] [PDF] [BibTeX]
  4. Gabriel, A.; Akrour, R.; Peters, J.; Neumann, G. (2017). Empowered Skills, Proceedings of the IEEE International Conference on Robotics and Automation (ICRA). [Details] [PDF] [BibTeX]

  • New AAAI paper:
  1. Gebhardt, G.H.W.; Kupcsik, A.G.; Neumann, G. (2017). The Kernel Kalman Rule - Efficient Nonparametric Inference with Recursive Least Squares, Proceedings of the National Conference on Artificial Intelligence (AAAI). [Details] [PDF] [BibTeX]

  • Our undergraduate student Karl-Heinz Fiebig won the IEEE Brain Initiative Best Paper Award for the paper Fiebig, K.-H. (2017). Multi-Task Logistic Regression in Brain-Computer Interfaces, Bachelor Thesis. [Details] [BibTeX]
    Fiebig, K.-H.; Jayaram, V.; Peters, J.; Grosse-Wentrup, M. (2016). Multi-Task Logistic Regression in Brain-Computer Interfaces, IEEE SMC 2016 - 6th Workshop on Brain-Machine Interface Systems. [Details] [BibTeX]
    Congratulations!
  • Jan Peters was invited to give a talk at the 3rd Multidisciplinary Conference on Reinforcement Learning and Decision Making (RLDM2017).

Past News

  
