Upcoming Talks

Date        Time         Location    Talk
23.04.2014  10:45-11:15  S202 E302   Rudolf Lioutikov, Research Talk: Towards a 3rd Hand!
23.04.2014  11:15-12:00  S202 E302   Adrià Colomé Figueras, Research Talk: Lower dimensionality representation of movement primitives for learning
30.04.2014  10:15-10:45  S202 E302   Viktor Kisner and Dimitar Ho, ADP Final Presentation: Trajectory Tracking Controller for a 4-DoF Flexible Joint Robot Arm

Intelligent Autonomous Systems Lab

Welcome to the Intelligent Autonomous Systems Group of the Computer Science Department of the Technische Universitaet Darmstadt. Our research centers on the goal of bringing advanced motor skills to robotics using techniques from machine learning and control. Please check out our research or contact any of our lab members. As we originated from the Robot Learning Lab in the Department for Empirical Inference and Machine Learning at the Max Planck Institute for Intelligent Systems, we also have a few members in Tuebingen.

Our agenda: Creating autonomous robots that can learn to assist humans in situations of daily life is a fascinating challenge for machine learning. While this aim has been a long-standing vision of artificial intelligence and the cognitive sciences, we have yet to achieve the first step of creating robots that can learn to accomplish many different tasks triggered by environmental context or higher-level instruction. The goal of our robot learning laboratory is to investigate the ingredients of such a general approach to motor skill learning, in order to move closer to human-like performance in robotics. We thus focus on solving basic problems in robotics while developing domain-appropriate machine-learning methods.

To this end, we develop methods for learning models and control policies in real time; see, e.g., learning models for control and learning operational space control. We are particularly interested in reinforcement learning, where we try to push the state of the art further and have received tremendous support from the RL community. Much of our research relies on learning motor primitives, which can be used to learn both elementary tasks and complex applications such as grasping or sports.

If you are looking for our address or for directions to our lab, see our contact information.

We always have thesis opportunities for enthusiastic and driven Master's/Bachelor's students (please contact Jan Peters or Gerhard Neumann). Check out the currently offered theses (Abschlussarbeiten) or suggest one yourself: drop us a line by email or simply drop by! We also occasionally have open Ph.D. or Post-Doc positions, see OpenPositions.

More information on us for the general public can be found in a long article in the Max Planck Research magazine, short pieces in New Scientist and Der Spiegel, as well as on the IEEE Blog on Robotics and Engadget.

News

  • Jan Peters will be Area Chair at Advances in Neural Information Processing Systems (NIPS 2014).
  • Several cool journal papers have been accepted:
  1. Kupcsik, A.G.; Deisenroth, M.P.; Peters, J.; Ai Poh, L.; Vadakkepat, V.; Neumann, G. (conditionally accepted). Model-based Contextual Policy Search for Data-Efficient Generalization of Robot Skills, Artificial Intelligence.   [Details] [BibTeX]
  2. Dann, C.; Neumann, G.; Peters, J. (2014). Policy Evaluation with Temporal Differences: A Survey and Comparison, Journal of Machine Learning Research, 15, March, pp. 809-883.   [Details] [PDF] [BibTeX]
  3. Meyer, T.; Peters, J.; Zander, T.O.; Schoelkopf, B.; Grosse-Wentrup, M. (accepted). Predicting Motor Learning Performance from Electroencephalographic Data, Journal of Neuroengineering and Rehabilitation.   [Details] [PDF] [BibTeX]
  4. Muelling, K.; Boularias, A.; Mohler, B.; Schoelkopf, B.; Peters, J. (accepted). Learning Strategies in Table Tennis using Inverse Reinforcement Learning, Biological Cybernetics.   [Details] [BibTeX]
  • Six ICRA 2014 papers accepted (100% acceptance rate for our team):
  1. Kroemer, O.; van Hoof, H.; Neumann, G.; Peters, J. (2014). Learning to Predict Phases of Manipulation Tasks as Hidden States, Proceedings of 2014 IEEE International Conference on Robotics and Automation (ICRA).   [Details] [PDF] [BibTeX]
  2. Ben Amor, H.; Neumann, G.; Kamthe, S.; Kroemer, O.; Peters, J. (2014). Interaction Primitives for Human-Robot Cooperation Tasks, Proceedings of 2014 IEEE International Conference on Robotics and Automation (ICRA).   [Details] [PDF] [BibTeX]
  3. Calandra, R.; Seyfarth, A.; Peters, J.; Deisenroth, M.P. (2014). An Experimental Comparison of Bayesian Optimization for Bipedal Locomotion, Proceedings of 2014 IEEE International Conference on Robotics and Automation (ICRA).   [Details] [PDF] [BibTeX]
  4. Deisenroth, M.P.; Englert, P.; Peters, J.; Fox, D. (2014). Multi-Task Policy Search for Robotics, Proceedings of 2014 IEEE International Conference on Robotics and Automation (ICRA).   [Details] [PDF] [BibTeX]
  5. Lioutikov, R.; Paraschos, A.; Neumann, G.; Peters, J. (2014). Sample-Based Information-Theoretic Stochastic Optimal Control, Proceedings of 2014 IEEE International Conference on Robotics and Automation (ICRA).   [Details] [PDF] [BibTeX]
  6. Bischoff, B.; Nguyen-Tuong, D.; van Hoof, H.; McHutchon, A.; Rasmussen, C.E.; Knoll, A.; Peters, J.; Deisenroth, M.P. (2014). Policy Search For Learning Robot Control Using Sparse Data, Proceedings of 2014 IEEE International Conference on Robotics and Automation (ICRA).   [Details] [PDF] [BibTeX]

Past News

  
