Kay Hansel
Research Interests
Robot Learning, Machine Learning, Reinforcement Learning, Robotics, Optimal Control, Telerobotics, Human-Robot Interaction
Affiliation
TU Darmstadt, Intelligent Autonomous Systems, Computer Science Department
Contact
kay@robot-learning.de
kay.hansel@tu-darmstadt.de
Room D202, Building S2|02, TU Darmstadt, FB-Informatik, FG-IAS, Hochschulstr. 10, 64289 Darmstadt
+49-6151-16-25385
Kay Hansel joined the Intelligent Autonomous Systems (IAS) Lab as a Ph.D. student in May 2021. He enjoys experimenting with robots, exploring concepts in artificial intelligence such as machine learning, reinforcement learning, and deep learning, and discovering interdisciplinary connections between academic disciplines along the way.
Before starting his Ph.D., Kay Hansel completed a Bachelor's degree in Applied Mathematics at RheinMain University of Applied Sciences and a Master's degree in Autonomous Systems at TU Darmstadt. His Master's thesis, "Probabilistic Dynamic Mode Primitives", was written under the supervision of Svenja Stark and Hany Abdulsamad.
Reviewing
IEEE IROS, IEEE ICRA, CoRL, IEEE RA-L, and various Robotics & ML workshops.
Teaching Assistant
Lecture | Years |
Computational Engineering and Robotics | SS 2022, WS 2022/2023 |
Robot Learning | WS 2022/2023, SS 2023 |
Robot Learning Integrated Project | SS 2024 |
Currently, Kay's research focuses on teleoperation and shared control, where an operator remotely controls a robot in potentially hazardous and unpredictable environments. Executing complex robot manipulation tasks safely under partial observability poses a real challenge to the operator: mismatched embodiments between robot and operator, together with communication delays, make precise manipulation nearly infeasible. Kay therefore works on shared control, in which additional information, e.g., prior knowledge and inductive biases, is employed to improve teleoperation.
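To make the arbitration idea concrete, here is a minimal sketch (not taken from Kay's publications) of linear policy blending, a common shared-control scheme: an assistive policy derived from prior task knowledge is mixed with the raw operator command, and the blending weight grows as the robot approaches an inferred goal. All names, gains, and the goal-distance heuristic below are illustrative assumptions.

```python
import numpy as np

def blend_commands(u_human, u_assist, alpha):
    """Linear arbitration: alpha = 0 is pure teleoperation,
    alpha = 1 is pure autonomous assistance."""
    return (1.0 - alpha) * u_human + alpha * u_assist

def assistance_gain(ee_pos, goal, radius=0.2):
    """Heuristic prior (hypothetical rule): assist more strongly
    the closer the end effector gets to the inferred goal."""
    dist = np.linalg.norm(goal - ee_pos)
    return float(np.clip(1.0 - dist / radius, 0.0, 1.0))

# Illustrative numbers: a noisy, delayed operator command is gently
# corrected toward a grasp target inferred from prior task knowledge.
ee_pos = np.array([0.45, 0.10, 0.30])    # current end-effector position (m)
goal = np.array([0.50, 0.05, 0.25])      # inferred target position (m)
u_human = np.array([0.02, 0.00, 0.00])   # raw teleoperated velocity command
u_assist = 0.5 * (goal - ee_pos)         # simple proportional assistive policy

alpha = assistance_gain(ee_pos, goal)    # ~0.57 for these numbers
u_cmd = blend_commands(u_human, u_assist, alpha)
print(u_cmd)
```

In Kay's own work, such a fixed blending weight is replaced by more principled arbitration, e.g., policy blending cast as probabilistic inference or as optimal transport over multiple expert policies (see the Robot Control publications below).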
During his Ph.D., Kay had the invaluable opportunity to visit the Intelligent Robotics and Biomechatronics Laboratory led by Prof. Hasegawa as a visiting scholar from May to July 2023. Part of the Department of Micro-Nano Mechanical Science and Engineering at Nagoya University, the lab is renowned for its fundamental and applied studies on mechatronics technologies for advanced human assistance systems.
Recent Publications
Robot Control
- Hansel, K.; Urain, J.; Peters, J.; Chalvatzaki, G. (2023). Hierarchical Policy Blending as Inference for Reactive Robot Control, 2023 IEEE International Conference on Robotics and Automation (ICRA), IEEE.
- Le, A. T.; Hansel, K.; Peters, J.; Chalvatzaki, G. (2023). Hierarchical Policy Blending As Optimal Transport, 5th Annual Learning for Dynamics & Control Conference (L4DC), PMLR.
Telerobotics
- Zhu, Y.; Nazirjonov, S.; Jiang, B.; Colan, J.; Aoyama, T.; Hasegawa, Y.; Belousov, B.; Hansel, K.; Peters, J. (2023). Visual Tactile Sensor Based Force Estimation for Position-Force Teleoperation, IEEE International Conference on Cyborg and Bionic Systems (CBS), pp.49--52. (Best Paper Award Finalist)
- Chen, Q.; Zhu, Y.; Hansel, K.; Aoyama, T.; Hasegawa, Y. (2023). Human Preferences and Robot Constraints Aware Shared Control for Smooth Follower Motion Execution, IEEE International Symposium on Micro-NanoMechatronics and Human Science (MHS), IEEE. (Best Paper Award)
- Becker, N.; Gattung, E.; Hansel, K.; Schneider, T.; Zhu, Y.; Hasegawa, Y.; Peters, J. (2024). Integrating Visuo-tactile Sensing with Haptic Feedback for Teleoperated Robot Manipulation, IEEE ICRA 2024 Workshop on Robot Embodiment through Visuo-Tactile Perception.
Robot Learning
- Hansel, K.; Moos, J.; Abdulsamad, H.; Stark, S.; Clever, D.; Peters, J. (2022). Robust Reinforcement Learning: A Review of Foundations and Recent Advances, Machine Learning and Knowledge Extraction (MAKE), 4, 1, pp.276--315, MDPI.
- Hansel, K.; Moos, J.; Derstroff, C. (2021). Benchmarking the Natural Gradient in Policy Gradient Methods and Evolution Strategies, Reinforcement Learning Algorithms: Analysis and Applications, pp.69--84, Springer.
- Belousov, B.; Abdulsamad, H.; Klink, P.; Parisi, S.; Peters, J. (2021). Reinforcement Learning Algorithms: Analysis and Applications, Studies in Computational Intelligence, Springer International Publishing.
- Hansel, K. (2021). Probabilistic Dynamic Mode Primitives, Master's Thesis.
Supervised Theses and Projects
Year | Thesis/Project | Student(s) | Topic | Together with |
2021 | RL:IP | Gao, Z., Shao, F. | Towards Intelligent Shared-Control with TIAGo++ | Chalvatzaki, G. |
2021 | RL:IP | Langer, M. | Learn to Play Tangram with Graph-based Reinforcement Learning | Funk, N. |
2022 | RL:IP | Zimmermann, M., Langer, M., Marino, D. | Learn to Play Tangram with Graph-based Reinforcement Learning | Funk, N. |
2022 | RL:IP | Zimmermann, M., Zöller, M., Aristakesyan, A. | Learn to Play Tangram with Graph-based Reinforcement Learning | Funk, N. |
2023 | MSc Thesis | Gao, Z. | Hierarchical Contextualization of Movement Primitives | Chalvatzaki, G., Peters, J., Antink, C. H. |
2023 | MSc Thesis | Gece, A. | Leveraging Structured-Graph Correspondence in Imitation Learning | Le, A. T., Chalvatzaki, G. |
2023 | MSc Thesis | Langer, M. | Energy-based Models for 6D Pose Estimation | Funk, N., Peters, J. |
2024 | RL:IP | Gattung, E., Becker, N. | Hands-on Control | Schneider, T. |
2024 | S:HR | Beyer, H. | Controlling Humanoids: A Comprehensive Analysis of Teleoperation Frameworks | |
2024 | MSc Thesis | Cosserat, E. | Refining 6D pose estimation with tactile sensors | Schneider, T., Duret, G., Peters, J., Chen, L. |
Ongoing | RL:IP | Becker, N., Sovailo, K., Zhu, C. | Hands-on Control | Schneider, T., Funk, N. |
Ongoing | RL:IP | Sun, Y., Huang, Y. | Control Barrier Functions for Assisted Teleoperation | Gueler, B. |
Ongoing | MSc Thesis | Zöller, M. | Enhancing Smoothness in Policy Blending with Gaussian Processes | Le, A. T., Peters, J. |
Ongoing | MSc Thesis | Zach, S. | Reactive Motion Generation through Probabilistic Dynamic Graphs | Le, A. T., Peters, J. |