Vignesh Prasad

Quick Info

Research Interests

Human-Robot Interaction, Interaction Modelling, Human Motion Prediction, Computer Vision, SLAM, Robot Navigation

More Information

Google Scholar · GitHub · LinkedIn · Personal Website

Contact Information

Mail. Vignesh Prasad
TU Darmstadt, FG MuP
Hochschulstr. 1, 64289 Darmstadt
Office. S1|02 145
Phone. +49-6151-16-24475

Vignesh Prasad joined TU Darmstadt in July 2019 as a Ph.D. student jointly supervised by Prof. Jan Peters and Prof. Ruth Stock-Homburg as part of the Handshaking Turing Test project. In his Ph.D., Vignesh investigates learning physically interactive behaviours for humanoid robots. His current research areas include Human-Robot Interaction, Learning from Demonstrations, Human Motion Prediction, and Social Robotics.

Prior to this, Vignesh worked as a researcher in the Machine Vision Group at TCS Innovation Labs, Kolkata, under Dr. Brojeshwar Bhowmick, where he worked on deep learning for monocular 3D reconstruction and computer vision. During this time, Vignesh's work won the Best Paper Award at the 2018 Indian Conference on Computer Vision, Graphics and Image Processing (ICVGIP). Vignesh completed his Bachelor's and Master's degrees in Computer Science and Engineering at IIIT Hyderabad, India. His Master's thesis, titled "Learning Effective Navigational Strategies for Active Monocular Simultaneous Localization and Mapping", was carried out at the Robotics Research Center under Dr. K. Madhava Krishna in collaboration with Prof. Balaraman Ravindran.

Publications

Prasad, V.; Stock-Homburg, R.; Peters, J. (2022). Human-Robot Handshaking: A Review, International Journal of Social Robotics (IJSR), 14, 1, pp.277-293.

Prasad, V.; Koert, D.; Stock-Homburg, R.; Peters, J.; Chalvatzaki, G. (2022). MILD: Multimodal Interactive Latent Dynamics for Learning Human-Robot Interaction, IEEE-RAS International Conference on Humanoid Robots (Humanoids).

Prasad, V.; Stock-Homburg, R.; Peters, J. (2021). Learning Human-like Hand Reaching for Human-Robot Handshaking, Proceedings of the IEEE International Conference on Robotics and Automation (ICRA).

Stock-Homburg, R.; Peters, J.; Schneider, K.; Prasad, V.; Nukovic, L. (2020). Evaluation of the Handshake Turing Test for Anthropomorphic Robots, Proceedings of the ACM/IEEE International Conference on Human Robot Interaction (HRI), Late Breaking Report.

Prasad, V.; Stock-Homburg, R.; Peters, J. (2020). Advances in Human-Robot Handshaking, International Conference on Social Robotics, Springer.

For a full list of his publications, please see his Google Scholar page.

Supervision

Master Theses

  • Xu, R. (Ongoing) "SLAM-itation: SLAM-based Robotic Teleoperation" (co-supervisor: Suman Pal)
  • Comellas, O. H. (Ongoing) "Binaural Sound Localisation with Spiking Neural Networks" (co-supervisor: Sven Schultze)
  • Frisch, Y. (2022) "Analysis of Self-supervised Keypoint Detection Methods for Robot Learning" (co-supervisors: Ali Younes and Georgia Chalvatzaki)
  • Yang, Z. (2022) "Exploring Gripping Behaviours and Haptic Emotions for Human-Robot Handshaking"
  • Redkin, M. (2021) "Personalizing Customer Interactions with Service Robots using Hand Gestures"
  • Kohl, M. (2021) "Learning Latent Interaction Models using Interaction Primitives"

Bachelor Theses

  • Prescher, E. (Ongoing) "Visual Hierarchical Interaction Recognition and Segmentation"
  • Sterker, L. (Ongoing) "Social Affordance Segmentation and Learning using Hidden semi-Markov Models"
  • Gassen, M. (2021) "Learning a Library of Physical Interactions for Social Robots" (co-supervisor: Dorothea Koert)
  • Scherbring, L. (2021) "Analyzing the role of Physical Interactions on Robot Acceptance"
  • Ajmera, Y. (2021) "Learning Movement Primitives for Handshaking Behaviours"
  • Baierl, M. (2020) "Learning Action Representations For Primitives-Based Motion Generation"

  
