Alexander Klein, M.Sc. Intermediate Presentation: Learning Robot Grasping of Industrial Work Pieces using Dense Object Descriptors
Hamish Flynn (BCAI), Invited Talk: Towards Hybrid Bayesian Methods for Task-Efficient Meta-Reinforcement Learning
Jean Kaddour (ICL), Invited Talk: Active Meta-Learning
Through our RObot Learning Lab (RoLL) in the Department for Empirical Inference and Machine Learning at the Max Planck Institute for Intelligent Systems, we also have a few members in Tuebingen. We also collaborate with some of the other excellent autonomous systems groups at TU Darmstadt, such as the Simulation, Systems Optimization and Robotics Group and the Locomotion Laboratory. We are part of TU Darmstadt's artificial intelligence initiative AI•DA and the Centre for Cognitive Science (CCS).
Creating autonomous robots that can learn to assist humans in situations of daily life is a fascinating challenge for machine learning. While this aim has been a long-standing vision of artificial intelligence and the cognitive sciences, we have yet to achieve the first step: creating robots that can learn to accomplish many different tasks triggered by environmental context or higher-level instruction. The goal of our robot learning laboratory is to realize a general approach to motor skill learning, in order to get closer to human-like performance in robotics. We focus on solving fundamental problems in robotics while developing machine-learning methods. Artificial agents that autonomously learn new skills from interaction with the environment, humans or other agents will have a great impact on many areas of everyday life, for example, autonomous robots that help in the household, care for the elderly or dispose of dangerous goods.
An autonomously learning agent has to acquire a rich set of different behaviours to achieve a variety of goals. It has to learn on its own how to explore its environment and which features are important when making a decision. It has to identify relevant behaviours and determine when to learn new ones. Furthermore, it needs to learn which goals are relevant and how to re-use behaviours to achieve new goals. To reach these objectives, our research concentrates on hierarchical and structured learning of robot control policies, information-theoretic methods for policy search, imitation learning and autonomous exploration, learning forward models for long-term predictions, autonomous cooperative systems, and biological aspects of autonomous learning systems.
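As a rough illustration of the policy-search flavour of this work, the sketch below runs a simple episodic, reward-weighted search with a Gaussian distribution over controller parameters on a toy reaching task. The reward function, parameter dimensionality, sample size and temperature are assumptions chosen for illustration; the information-theoretic methods actually developed in the lab (e.g., relative entropy policy search) are considerably more principled than this minimal loop.

```python
import numpy as np

# Minimal sketch of episodic policy search: a Gaussian "upper-level" policy over
# controller parameters is improved by reward-weighted averaging of sampled rollouts.
# The toy reward and all hyper-parameters below are illustrative assumptions.

def episodic_reward(theta):
    # Hypothetical episodic task: reach a 2-D target; higher reward for smaller error.
    target = np.array([0.5, -0.3])
    return -np.sum((theta - target) ** 2)

mean, std = np.zeros(2), np.ones(2)      # Gaussian search distribution over parameters
for iteration in range(50):
    samples = mean + std * np.random.randn(20, 2)        # sample candidate parameters
    rewards = np.array([episodic_reward(s) for s in samples])
    weights = np.exp((rewards - rewards.max()) / 0.5)    # soft-max weighting (temperature 0.5)
    weights /= weights.sum()
    mean = weights @ samples                              # reward-weighted update of the mean
    std = np.sqrt(weights @ (samples - mean) ** 2) + 1e-3 # shrink exploration around good samples
print("learned parameters:", mean)
```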
The Intelligent Autonomous Systems institute at TU Darmstadt is headed by Jan Peters and has an additional research group at the Max Planck Institute. We develop methods for learning models and control policies in real time, see, e.g., learning models for control and learning operational space control. We are particularly interested in reinforcement learning, where we try to push the state of the art further and have received tremendous support from the RL community. Much of our research relies upon learning motor primitives that can be used to learn both elementary tasks and complex applications such as grasping or sports. In addition, the research groups of Gerhard Neumann, Elmar Rueckert and Joni Pajarinen at our institute also focus on these aspects.
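To make the notion of a motor primitive concrete, here is a minimal single-degree-of-freedom dynamic movement primitive (DMP), one common formalisation of such primitives. The gains, the number of basis functions, and the zero-initialised shape weights are illustrative assumptions rather than a reference implementation; in practice the shape weights would be fit from demonstrations or improved by reinforcement learning.

```python
import numpy as np

# Minimal sketch of a single-DoF dynamic movement primitive (DMP).
# A spring-damper "transformation system" pulls the state toward the goal,
# while a learnable forcing term (weighted basis functions over a decaying
# phase variable) shapes the trajectory. All constants are assumptions.

class SimpleDMP:
    def __init__(self, n_basis=10, alpha_z=25.0, beta_z=6.25, alpha_x=8.0, tau=1.0):
        self.alpha_z, self.beta_z, self.alpha_x, self.tau = alpha_z, beta_z, alpha_x, tau
        self.centers = np.exp(-alpha_x * np.linspace(0, 1, n_basis))   # basis centres in phase space
        self.widths = 1.0 / (np.diff(self.centers, append=self.centers[-1] / 2) ** 2 + 1e-8)
        self.weights = np.zeros(n_basis)                               # shape parameters to be learned

    def _forcing(self, x):
        psi = np.exp(-self.widths * (x - self.centers) ** 2)
        return x * (psi @ self.weights) / (psi.sum() + 1e-10)

    def rollout(self, y0, goal, dt=0.01, T=1.0):
        y, z, x = y0, 0.0, 1.0
        traj = []
        for _ in range(int(T / dt)):
            f = self._forcing(x)
            z += dt / self.tau * (self.alpha_z * (self.beta_z * (goal - y) - z) + f * (goal - y0))
            y += dt / self.tau * z
            x += dt / self.tau * (-self.alpha_x * x)                   # canonical system: phase decays to 0
            traj.append(y)
        return np.array(traj)

# Example: with zero shape weights the primitive produces a smooth point-to-point reach.
dmp = SimpleDMP()
trajectory = dmp.rollout(y0=0.0, goal=1.0)
```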
Some more information about us for the general public can be found in a long article in the Max Planck Research magazine, short pieces in New Scientist, WIRED and Der Spiegel, as well as on the IEEE Blog on Robotics and Engadget.
If you are looking for our address or for directions to our lab, please see our contact information. We always have thesis opportunities for enthusiastic and driven Master's and Bachelor's students (please contact Jan Peters). Check out the currently offered thesis topics (Abschlussarbeiten) or suggest one yourself; drop us a line by email or simply drop by! We also occasionally have open Ph.D. or Post-Doc positions, see OpenPositions.