Research Interests. Machine Learning, Robotics, Inverse Reinforcement Learning, Imitation Learning, Grasping and Manipulation, Reinforcement Learning, Variational Inference
Oleg Arenz
TU Darmstadt, FG IAS,
Hochschulstr. 10, 64289 Darmstadt
Office.
Room E226, Building S2|02
+49-6151-16-20073
arenz(at)ias(dot)tu-darmstadt(dot)de
oleg(at)robot-learning(dot)de
Before his PhD, Oleg completed both his Bachelor's degree in Computer Science and his Master's degree in Autonomous Systems at the Technische Universität Darmstadt. His Master's thesis, entitled “Feature Extraction for Inverse Reinforcement Learning”, was written under the supervision of Gerhard Neumann and Christian Daniel.
Robots can learn to accomplish a given task by imitating previously observed demonstrations. However, to adapt to different situations, true imitation must go beyond blindly repeating demonstrated actions. Instead, imitation learning is deeply connected with the problem of learning the intentions behind observed behaviour. Inverse Reinforcement Learning can recover these intentions by learning a corresponding reward function; Reinforcement Learning can then achieve imitation by learning a policy that maximizes that reward function.
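As a minimal illustration of this pipeline (and not a description of any particular published method), the following sketch alternates a Reinforcement Learning step (soft value iteration) with a maximum-entropy-style Inverse Reinforcement Learning update (feature-expectation matching for a linear reward) on a toy four-state chain MDP. The MDP, the indicator features, and all step sizes and iteration counts are invented purely for readability.

import numpy as np

n_states, n_actions, gamma = 4, 2, 0.9
P = np.zeros((n_actions, n_states, n_states))   # P[a, s, s'] transition model
for s in range(n_states):
    P[0, s, max(s - 1, 0)] = 1.0                # action 0: move left
    P[1, s, min(s + 1, n_states - 1)] = 1.0     # action 1: move right
phi = np.eye(n_states)                          # one indicator feature per state

def soft_value_iteration(r, iters=100):
    # Soft (maximum-entropy) value iteration; returns a stochastic policy pi[s, a].
    V = np.zeros(n_states)
    for _ in range(iters):
        Q = r[None, :] + gamma * P @ V          # Q[a, s]
        Qmax = Q.max(axis=0)
        V = Qmax + np.log(np.exp(Q - Qmax).sum(axis=0))
    return np.exp(Q - V[None, :]).T

def feature_expectations(pi, start=0, horizon=50):
    # Discounted expected feature counts when following pi from the start state.
    d = np.zeros(n_states); d[start] = 1.0
    mu = np.zeros(phi.shape[1])
    for t in range(horizon):
        mu += gamma ** t * d @ phi
        d = np.einsum('s,sa,asp->p', d, pi, P)  # propagate the state distribution
    return mu

# "Expert" demonstrations: a policy that always moves right.
expert_pi = np.tile([0.0, 1.0], (n_states, 1))
mu_expert = feature_expectations(expert_pi)

# IRL: adjust the reward weights until the learner matches the expert's features.
w = np.zeros(n_states)
for _ in range(200):
    pi = soft_value_iteration(phi @ w)          # RL step under the current reward
    w += 0.1 * (mu_expert - feature_expectations(pi))
print("learned reward weights:", np.round(w, 2))

The learner ends up assigning the highest weight to the rightmost state, i.e. it recovers a reward under which the expert's behaviour is (soft-)optimal rather than memorizing the expert's actions.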
Many real-world applications such as autonomous driving have an intractable number of possible states. As even experts are usually not able to identify all relevant features, Inverse Reinforcement Learning depends on feature extraction for learning meaningful reward functions. Furthermore, intentions can be inferred at different levels of abstraction: for example, steering a car to the right might serve the purpose of taking a corner, while taking that corner might itself serve the purpose of reaching a given destination. Hierarchical reward functions ease both Inverse Reinforcement Learning and Reinforcement Learning by making it possible to reuse previously learned high-level goals and low-level strategies. The problem of building and utilizing such hierarchical decompositions provides an interesting route for future research.
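One way such a decomposition could look is sketched below; the skill names, subgoal positions, tolerance, and completion bonus are all hypothetical and chosen only to illustrate the reuse argument: low-level skill rewards are defined once, and different high-level task rewards merely sequence them.

from typing import Callable, Dict, List
import numpy as np

State = np.ndarray  # e.g. a 2-D car position; purely illustrative

def reach(goal: State) -> Callable[[State], float]:
    # Low-level reward: negative Euclidean distance to a subgoal.
    return lambda s: -float(np.linalg.norm(s - goal))

# Library of reusable low-level skill rewards, defined (or learned) once.
skills: Dict[str, Callable[[State], float]] = {
    "corner_A": reach(np.array([5.0, 0.0])),
    "corner_B": reach(np.array([5.0, 5.0])),
}

def hierarchical_reward(plan: List[str], tol: float = 0.5) -> Callable[[State], float]:
    # High-level reward: work through a sequence of subgoals, paying a
    # bonus whenever the currently active low-level skill is completed.
    stage = 0
    def r(s: State) -> float:
        nonlocal stage
        if stage >= len(plan):
            return 0.0                      # whole plan finished
        low_level = skills[plan[stage]]
        if low_level(s) > -tol:             # close enough: subgoal reached
            stage += 1
            return 1.0
        return low_level(s)
    return r

# Two different high-level tasks reuse the same low-level corner skills.
r_route_1 = hierarchical_reward(["corner_A", "corner_B"])
r_route_2 = hierarchical_reward(["corner_B"])
print(r_route_1(np.array([0.0, 0.0])))      # -5.0: still approaching corner_A

In this toy decomposition, learning a new route only requires inferring a new high-level plan, while the low-level corner skills, once learned, are shared across tasks.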
Keywords. Machine Learning, Robotics, Inverse Reinforcement Learning, Imitation Learning, Manipulation and Grasping, Hierarchical Learning, Feature Extraction, Reinforcement Learning