Kay Hansel
Quick Info
Research Interests
Machine Learning, Reinforcement Learning, Robot Learning, Robotics, Optimal Control, Telerobotics, Human-Robot Interaction
More Information
LinkedIn
ORCID
Google Scholar
Research Gate
GitHub
Contact Information
Kay Hansel
TU Darmstadt, FG IAS,
Hochschulstr. 10, 64289 Darmstadt
Office: Room E225, Building S2|02
Phone (work): +49-6151-16-20073
Email: kay.hansel@tu-darmstadt.de
Email: kay@robot-learning.de
Kay Hansel joined the Intelligent Autonomous Systems (IAS) Lab as a Ph.D. student in May 2021. He is interested in experimenting with and researching robots, exploring concepts in artificial intelligence such as machine learning, reinforcement learning, and deep learning, and thereby discovering interdisciplinary connections between academic disciplines.
Before starting his Ph.D., Kay Hansel completed his Bachelor's degree in Applied Mathematics at RheinMain University of Applied Sciences and his Master's degree in Autonomous Systems at TU Darmstadt. His Master's thesis, "Probabilistic Dynamic Mode Primitives", was written under the supervision of Svenja Stark and Hany Abdulsamad.
Currently, Kay's research focuses on the broad field of teleoperation and shared control, in which an operator remotely controls a robot in potentially hazardous and unpredictable environments. Executing complex robot manipulation tasks safely under partial observability poses a real challenge to the operator: the differing embodiments of robot and operator, together with the risk of communication delays, make precise manipulation nearly infeasible. Therefore, Kay is currently working on shared control, where information such as prior knowledge and inductive biases is employed to improve teleoperation.
Key References
Hierarchical Policy Blending for Robot Control
- Hansel, K.; Urain, J.; Peters, J.; Chalvatzaki, G. (2023). Hierarchical Policy Blending as Inference for Reactive Robot Control, 2023 IEEE International Conference on Robotics and Automation (ICRA), IEEE.
- Le, A. T.; Hansel, K.; Peters, J.; Chalvatzaki, G. (2022). Hierarchical Policy Blending As Optimal Transport, 5th Annual Learning for Dynamics & Control Conference (L4DC).
Teleoperation
- Zhu, Y.; Nazirjonov, S.; Jiang, B.; Colan, J.; Aoyama, T.; Hasegawa, Y.; Belousov, B.; Hansel, K.; Peters, J. (2022). Visual Tactile Sensor Based Force Estimation for Position-Force Teleoperation, IEEE International Conference on Cyborg and Bionic Systems.
Reinforcement Learning
- Hansel, K.; Moos, J.; Abdulsamad, H.; Stark, S.; Clever, D.; Peters, J. (2022). Robust Reinforcement Learning: A Review of Foundations and Recent Advances, Machine Learning and Knowledge Extraction, 4, 1, pp.276--315, MDPI.
- Hansel, K.; Moos, J.; Derstroff, C. (2021). Benchmarking the Natural Gradient in Policy Gradient Methods and Evolution Strategies, Reinforcement Learning Algorithms: Analysis and Applications, pp.69--84, Springer.
- Belousov, B.; Abdulsamad, H.; Klink, P.; Parisi, S.; Peters, J. (2021). Reinforcement Learning Algorithms: Analysis and Applications, Studies in Computational Intelligence, Springer International Publishing.
Teaching Assistant
- Robot Learning (WS 2022/2023)
- Computational Engineering and Robotics (SS 2021, WS 2022/2023)
Student Supervision
- RL:IP.SS22, Zimmermann, M., Langer, M., Marino, D., Learn to Play Tangram with Graph-based Reinforcement Learning (w/ Niklas Funk);
- RL:IP.WS21, Langer, M., Learn to Play Tangram with Graph-based Reinforcement Learning (w/ Niklas Funk);
- RL:IP.WS21, Gao, Z., Shao, F., Towards Intelligent Shared-Control with TIAGo++;