Reinforcement Learning (under complete and partial observability), Transfer and Curriculum Learning, Robotics, Optimal Control.
Mail. TU Darmstadt, FB-Informatik, FG-IAS, Hochschulstr. 10, 64289 Darmstadt
Office. Room E327, Building S2|02
Pascal joined the Intelligent Autonomous Systems Group in May 2019 as a PhD student and is working on the ROBOLEAP
project, developing methods for reinforcement learning in unstructured, partially observable real-world environments. In general, his research interests revolve around aspects of reinforcement learning that, in his opinion, limit its applicability to real-world settings, two of them being partial observability and knowledge transfer. He is convinced that truly intelligent systems need to be able to learn from incomplete observations of their environment and to reuse previously acquired knowledge to bootstrap and speed up learning in new situations.
Before starting his PhD, Pascal completed his Bachelor's degree in Computer Science and his Master's degree in Autonomous Systems at the Technische Universitaet Darmstadt. He wrote his Master's thesis on "Generalization and Transferability in Reinforcement Learning", supervised by Hany Abdulsamad, Boris Belousov and Jan Peters, in which he investigated concepts from numerical continuation, parametric programming and concurrent systems theory for the task of knowledge transfer, ultimately developing a method for autonomous curriculum generation for reinforcement learning problems.
- Klink, P.; Abdulsamad, H.; Belousov, B.; Peters, J. (2019). Self-Paced Contextual Reinforcement Learning, Proceedings of the 3rd Conference on Robot Learning (CoRL).