Kay Hansel

Research Interests

Robot Learning, Machine Learning, Reinforcement Learning, Robotics, Optimal Control, Telerobotics, Human-Robot Interaction

Affiliation

TU Darmstadt, Intelligent Autonomous Systems, Computer Science Department

Contact

kay@robot-learning.de
kay.hansel@tu-darmstadt.de
Room D202, Building S2|02, TU Darmstadt, FB-Informatik, FG-IAS, Hochschulstr. 10, 64289 Darmstadt
+49-6151-16-25385

Kay Hansel joined the Intelligent Autonomous Systems (IAS) Lab as a Ph.D. student in May 2021. He is interested in experimenting with and researching robots, exploring concepts in artificial intelligence such as machine learning, reinforcement learning, and deep learning, and thereby discovering interdisciplinary connections between academic disciplines.

Before starting his Ph.D., Kay Hansel completed his Bachelor's degree in Applied Mathematics at the RheinMain University of Applied Sciences and his Master's degree in Autonomous Systems at the TU Darmstadt. His thesis, "Probabilistic Dynamic Mode Primitives", was written under the supervision of Svenja Stark and Hany Abdulsamad.

Reviewing

IEEE IROS, IEEE ICRA, CoRL, IEEE RA-L, and various Robotics & ML workshops.

Teaching Assistant

Lecture | Years
Computational Engineering and Robotics | SS 2022, WS 2022/2023
Robot Learning | WS 2022/2023, SS 2023
Robot Learning Integrated Project | SS 2024

Currently, Kay's research focuses on the broad field of teleoperation and shared control, in which an operator remotely controls a robot in potentially hazardous and unpredictable environments. Executing complex robot manipulation tasks safely under partial observability poses a real challenge to the operator: differences between the robot's and the operator's embodiments, together with the risk of communication delays, make precise manipulation nearly infeasible. Therefore, Kay is currently working on shared control, where information such as prior knowledge and inductive biases is employed to improve teleoperation.
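As a minimal, hypothetical illustration of the shared-control idea (a sketch only, not the specific methods developed in this research), the executed command can be a blend of the operator's input and an assistive policy that encodes prior knowledge about the task, with a weight that trades off teleoperation against autonomy. All names and the simple proportional assistant below are illustrative assumptions.

```python
def assistive_policy(state, goal, gain=1.0):
    """Illustrative assistant: proportional controller pulling toward a known goal."""
    return [gain * (g - s) for s, g in zip(state, goal)]

def blend(operator_cmd, state, goal, alpha=0.5):
    """Convex combination of operator and assistance commands.

    alpha = 0 gives pure teleoperation; alpha = 1 gives pure autonomy.
    """
    assist_cmd = assistive_policy(state, goal)
    return [(1 - alpha) * u_h + alpha * u_a
            for u_h, u_a in zip(operator_cmd, assist_cmd)]

# Operator pushes right; the assistant pulls toward a goal above the robot.
cmd = blend(operator_cmd=[1.0, 0.0], state=[0.0, 0.0], goal=[0.0, 1.0], alpha=0.5)
```

In practice, the blending weight would itself be inferred, e.g. from the operator's intent or the assistant's confidence, rather than fixed by hand.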

During his academic research, Kay had the invaluable opportunity to visit the Intelligent Robotics and Biomechatronics Laboratory led by Prof. Hasegawa as a visiting scholar from May 2023 to July 2023. Part of the Department of Micro-Nano Mechanical Science and Engineering at Nagoya University, the lab is renowned for its fundamental and applied studies on mechatronics technologies for advanced human assistance systems.


Recent Publications

Robot Control

    •   Hansel, K.; Urain, J.; Peters, J.; Chalvatzaki, G. (2023). Hierarchical Policy Blending as Inference for Reactive Robot Control, 2023 IEEE International Conference on Robotics and Automation (ICRA), IEEE.
    •   Le, A. T.; Hansel, K.; Peters, J.; Chalvatzaki, G. (2023). Hierarchical Policy Blending As Optimal Transport, 5th Annual Learning for Dynamics & Control Conference (L4DC), PMLR.

Telerobotics

    •   Zhu, Y.; Nazirjonov, S.; Jiang, B.; Colan, J.; Aoyama, T.; Hasegawa, Y.; Belousov, B.; Hansel, K.; Peters, J. (2023). Visual Tactile Sensor Based Force Estimation for Position-Force Teleoperation, IEEE International Conference on Cyborg and Bionic Systems (CBS), pp.49-52.
      --- Best Paper Award Finalist ---
    •   Chen, Q.; Zhu, Y.; Hansel, K.; Aoyama, T.; Hasegawa, Y. (2023). Human Preferences and Robot Constraints Aware Shared Control for Smooth Follower Motion Execution, IEEE International Symposium on Micro-NanoMechatronics and Human Science (MHS), IEEE.
      --- Best Paper Award ---
    •   Becker, N.; Gattung, E.; Hansel, K.; Schneider, T.; Zhu, Y.; Hasegawa, Y.; Peters, J. (2024). Integrating Visuo-tactile Sensing with Haptic Feedback for Teleoperated Robot Manipulation, IEEE ICRA 2024 Workshop on Robot Embodiment through Visuo-Tactile Perception.

Robot Learning

    •   Hansel, K.; Moos, J.; Abdulsamad, H.; Stark, S.; Clever, D.; Peters, J. (2022). Robust Reinforcement Learning: A Review of Foundations and Recent Advances, Machine Learning and Knowledge Extraction (MAKE), 4, 1, pp.276--315, MDPI.
    •   Hansel, K.; Moos, J.; Derstroff, C. (2021). Benchmarking the Natural Gradient in Policy Gradient Methods and Evolution Strategies, Reinforcement Learning Algorithms: Analysis and Applications, pp.69--84, Springer.
    •   Belousov, B.; Abdulsamad, H.; Klink, P.; Parisi, S.; Peters, J. (2021). Reinforcement Learning Algorithms: Analysis and Applications, Studies in Computational Intelligence, Springer International Publishing.
    •   Hansel, K. (2021). Probabilistic Dynamic Mode Primitives, Master's Thesis.

Supervised Theses and Projects

Year | Thesis/Project | Student(s) | Topic | Together with
2021 | RL:IP | Gao, Z.; Shao, F. | Towards Intelligent Shared-Control with TIAGo++ | Chalvatzaki, G.
2021 | RL:IP | Langer, M. | Learn to Play Tangram with Graph-based Reinforcement Learning | Funk, N.
2022 | RL:IP | Zimmermann, M.; Langer, M.; Marino, D. | Learn to Play Tangram with Graph-based Reinforcement Learning | Funk, N.
2022 | RL:IP | Zimmermann, M.; Zöller, M.; Aristakesyan, A. | Learn to Play Tangram with Graph-based Reinforcement Learning | Funk, N.
2023 | MSc Thesis | Gao, Z. | Hierarchical Contextualization of Movement Primitives | Chalvatzaki, G.; Peters, J.; Antink, C. H.
2023 | MSc Thesis | Gece, A. | Leveraging Structured-Graph Correspondence in Imitation Learning | Le, A. T.; Chalvatzaki, G.
2023 | MSc Thesis | Langer, M. | Energy-based Models for 6D Pose Estimation | Funk, N.; Peters, J.
2024 | RL:IP | Gattung, E.; Becker, N. | Hands-on Control | Schneider, T.
2024 | S:HR | Beyer, H. | Controlling Humanoids: A Comprehensive Analysis of Teleoperation Frameworks |
2024 | MSc Thesis | Cosserat, E. | Refining 6D Pose Estimation with Tactile Sensors | Schneider, T.; Duret, G.; Peters, J.; Chen, L.
Ongoing | RL:IP | Becker, N.; Sovailo, K.; Zhu, C. | Hands-on Control | Schneider, T.; Funk, N.
Ongoing | RL:IP | Sun, Y.; Huang, Y. | Control Barrier Functions for Assisted Teleoperation | Gueler, B.
Ongoing | MSc Thesis | Zoeller, M. | Enhancing Smoothness in Policy Blending with Gaussian Processes | Le, A. T.; Peters, J.
Ongoing | MSc Thesis | Zach, S. | Reactive Motion Generation through Probabilistic Dynamic Graphs | Le, A. T.; Peters, J.