Kay Hansel

Research Interests

Robot Learning, Machine Learning, Reinforcement Learning, Robotics, Optimal Control, Telerobotics, Human-Robot Interaction

Affiliation

TU Darmstadt, Intelligent Autonomous Systems, Computer Science Department

Contact

kay@robot-learning.de
kay.hansel@tu-darmstadt.de
Room E225, Building S2|02, TU Darmstadt, FB-Informatik, FG-IAS, Hochschulstr. 10, 64289 Darmstadt
+49-6151-16-20073

Kay Hansel joined the Intelligent Autonomous Systems (IAS) Lab as a Ph.D. student in May 2021. His research interests span robotics and artificial intelligence, including machine learning, reinforcement learning, and deep learning, with a particular interest in the interdisciplinary connections between these fields.

Before starting his Ph.D., Kay Hansel completed his Bachelor's degree in Applied Mathematics at the RheinMain University of Applied Sciences and his Master's degree in Autonomous Systems at the TU Darmstadt. His thesis, "Probabilistic Dynamic Mode Primitives", was written under the supervision of Svenja Stark and Hany Abdulsamad.

Reviewing

IEEE IROS, IEEE ICRA, CoRL, IEEE RA-L, and various Robotics & ML workshops.

Teaching Assistant

Lecture | Years
Computational Engineering and Robotics | SS 2022, WS 2022/2023
Robot Learning | WS 2022/2023, SS 2023

Currently, Kay's research focuses on the broad field of teleoperation and shared control. In this setting, an operator remotely controls a robot in potentially hazardous and unpredictable environments. Executing complex robot manipulation tasks safely under partial observability poses a real challenge to the operator, and differences in embodiment between robot and operator, together with the risk of communication delays, make precise manipulation nearly infeasible. Therefore, Kay is currently working on shared control, in which information such as prior knowledge and inductive biases is employed to improve teleoperation.

During his academic research, Kay had the invaluable opportunity to visit the Intelligent Robotics and Biomechatronics Laboratory led by Prof. Hasegawa as a visiting scholar from May 2023 to July 2023. Part of the Department of Micro-Nano Mechanical Science and Engineering at Nagoya University, the lab is renowned for its fundamental and applied studies on mechatronics technologies for advanced human assistance systems.


Recent Publications

Robot Control

    • Hansel, K.; Urain, J.; Peters, J.; Chalvatzaki, G. (2023). Hierarchical Policy Blending as Inference for Reactive Robot Control, 2023 IEEE International Conference on Robotics and Automation (ICRA), IEEE.
    • Le, A. T.; Hansel, K.; Peters, J.; Chalvatzaki, G. (2023). Hierarchical Policy Blending As Optimal Transport, 5th Annual Learning for Dynamics & Control Conference (L4DC), PMLR.

Telerobotics

    • Zhu, Y.; Nazirjonov, S.; Jiang, B.; Colan, J.; Aoyama, T.; Hasegawa, Y.; Belousov, B.; Hansel, K.; Peters, J. (2023). Visual Tactile Sensor Based Force Estimation for Position-Force Teleoperation, IEEE International Conference on Cyborg and Bionic Systems, pp.49--52.

Robot Learning

    • Hansel, K.; Moos, J.; Abdulsamad, H.; Stark, S.; Clever, D.; Peters, J. (2022). Robust Reinforcement Learning: A Review of Foundations and Recent Advances, Machine Learning and Knowledge Extraction, 4, 1, pp.276--315, MDPI.
    • Hansel, K.; Moos, J.; Derstroff, C. (2021). Benchmarking the Natural Gradient in Policy Gradient Methods and Evolution Strategies, Reinforcement Learning Algorithms: Analysis and Applications, pp.69--84, Springer.
    • Belousov, B.; Abdulsamad, H.; Klink, P.; Parisi, S.; Peters, J. (2021). Reinforcement Learning Algorithms: Analysis and Applications, Studies in Computational Intelligence, Springer International Publishing.
    • Hansel, K. (2021). Probabilistic Dynamic Mode Primitives, Master Thesis.

Supervised Theses and Projects

Year | Thesis/Project | Student(s) | Topic | Together with
2021 | RL:IP | Gao, Z.; Shao, F. | Towards Intelligent Shared-Control with TIAGo++ | Chalvatzaki, G.
2021 | RL:IP | Langer, M. | Learn to Play Tangram with Graph-based Reinforcement Learning | Funk, N.
2022 | RL:IP | Zimmermann, M.; Langer, M.; Marino, D. | Learn to Play Tangram with Graph-based Reinforcement Learning | Funk, N.
2022 | RL:IP | Zimmermann, M.; Zöller, M.; Aristakesyan, A. | Learn to Play Tangram with Graph-based Reinforcement Learning | Funk, N.
2023 | MSc Thesis | Gao, Z. | Hierarchical Contextualization of Movement Primitives | Chalvatzaki, G.; Peters, J.; Antink, C. H.
[Ongoing] | MSc Thesis | Langer, M. | Energy-based Models for 6D Pose Estimation | Funk, N.; Peters, J.
[Ongoing] | MSc Thesis | Gece, A. | Leveraging Structured-Graph Correspondence in Imitation Learning | Le, A. T.; Chalvatzaki, G.
[Ongoing] | MSc Thesis | Zoeller, M. | Enhancing Smoothness in Policy Blending with Gaussian Processes | Le, A. T.; Peters, J.