Kay Hansel

Quick Info

Research Interests

Machine Learning, Reinforcement Learning, Robot Learning, Robotics, Optimal Control, Telerobotics, Human-Robot Interaction

More Information

LinkedIn ORCID Google Scholar Research Gate GitHub

Contact Information

Kay Hansel
TU Darmstadt, FG IAS,
Hochschulstr. 10, 64289 Darmstadt
Office: Room E225, Building S2|02

Kay Hansel joined the Intelligent Autonomous Systems (IAS) Lab as a Ph.D. student in May 2021. He is interested in experimenting with and researching robots, exploring concepts in artificial intelligence such as machine learning, reinforcement learning, and deep learning, and thereby discovering interdisciplinary connections between various academic disciplines.

Before starting his Ph.D., Kay Hansel completed his Bachelor's degree in Applied Mathematics at the RheinMain University of Applied Sciences and his Master's degree in Autonomous Systems at TU Darmstadt. His thesis, "Probabilistic Dynamic Mode Primitives", was written under the supervision of Svenja Stark and Hany Abdulsamad.

Currently, Kay's research focuses on the broad field of teleoperation and shared control. In teleoperation, an operator remotely controls a robot in potentially hazardous and unpredictable environments. Executing complex robot manipulation tasks safely under partial observability poses a real challenge to the operator: the differing embodiments of robot and operator, together with communication delays, make precise manipulation nearly infeasible. Kay therefore works on shared control, where additional information, e.g. prior knowledge and inductive biases, is employed to improve teleoperation.
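One common formalization of shared control combines the operator's command with the output of an autonomous assistance policy. The sketch below is purely illustrative (the function names, the convex-blending scheme, and the arbitration weight `alpha` are assumptions for exposition, not a description of Kay's method):

```python
import numpy as np

def blend_commands(human_cmd, assist_cmd, alpha):
    """Shared control as a convex blend of two commands.

    alpha in [0, 1] arbitrates authority:
    0 -> pure teleoperation (operator only), 1 -> full autonomy.
    Names and weighting are illustrative, not from a specific paper.
    """
    alpha = float(np.clip(alpha, 0.0, 1.0))
    return (1.0 - alpha) * np.asarray(human_cmd, dtype=float) \
        + alpha * np.asarray(assist_cmd, dtype=float)

# Example: a noisy operator velocity command is nudged toward the
# direction preferred by an assistance policy (e.g. a learned prior).
human = np.array([0.9, 0.1, 0.0])    # operator's end-effector velocity
assist = np.array([1.0, 0.0, 0.0])   # assistance policy's preference
blended = blend_commands(human, assist, alpha=0.5)
```

In practice the arbitration weight can itself be state-dependent, which is where prior knowledge about the task enters.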

Key References

Hierarchical Policy Blending for Robot Control


  • Zhu, Y.; Nazirjonov, S.; Jiang, B.; Colan, J.; Aoyama, T.; Hasegawa, Y.; Belousov, B.; Hansel, K.; Peters, J. (2022). Visual Tactile Sensor Based Force Estimation for Position-Force Teleoperation, IEEE International Conference on Cyborg and Bionic Systems.   Download Article [PDF]   BibTeX Reference [BibTex]

Reinforcement Learning

  • Hansel, K.; Moos, J.; Abdulsamad, H.; Stark, S.; Clever, D.; Peters, J. (2022). Robust Reinforcement Learning: A Review of Foundations and Recent Advances, Machine Learning and Knowledge Extraction, 4, 1, pp.276--315, MDPI.   Download Article [PDF]   BibTeX Reference [BibTex]
  • Hansel, K.; Moos, J.; Derstroff, C. (2021). Benchmarking the Natural Gradient in Policy Gradient Methods and Evolution Strategies, Reinforcement Learning Algorithms: Analysis and Applications, pp.69--84, Springer.   Download Article [PDF]   BibTeX Reference [BibTex]
  • Belousov, B.; Abdulsamad, H.; Klink, P.; Parisi, S.; Peters, J. (2021). Reinforcement Learning Algorithms: Analysis and Applications, Studies in Computational Intelligence, Springer International Publishing.   Download Article [PDF]   BibTeX Reference [BibTex]

Imitation Learning

Teaching Assistant

  • Robot Learning (WS 2022/2023)
  • Computational Engineering and Robotics (SS 2021, WS 2022/2023)

Student Supervision

  • RL:IP.SS22, Zimmermann, M., Langer, M., Marino, D., Learn to Play Tangram with Graph-based Reinforcement Learning (w/ Niklas Funk);
  • RL:IP.WS21, Langer, M., Learn to Play Tangram with Graph-based Reinforcement Learning (w/ Niklas Funk);
  • RL:IP.WS21, Gao, Z., Shao, F., Towards Intelligent Shared-Control with TIAGo++;
