I have graduated and moved to the Bosch Center for Artificial Intelligence in Renningen, near Stuttgart.

Duy Nguyen-Tuong

Research Interests

Robotics, Machine Learning.

Contact Information

Robert-Bosch-Campus 1
71272 Renningen, Germany
+49-711-811-49408
duy.nguyentuong@robot-learning.de

Duy Nguyen-Tuong was a Ph.D. student in the Robot Learning Lab at the Max Planck Institute for Biological Cybernetics (now the MPI for Intelligent Systems) from 2007 to 2011, working in the department of Bernhard Schölkopf under the supervision of Jan Peters. During his Ph.D., he had an extended research stay at the University of Southern California in 2008. Before joining the Robot Learning Lab, he studied control and automation engineering at the University of Stuttgart and the National University of Singapore.

His main research interest is the application of machine learning techniques to control, robotics and data mining. One research focus is the development of regression methods for online model learning in real time, which can be used, for example, for robot model-based control. Another focus is learning multivalued models, i.e., mappings from one input to many possible outputs. Multivalued models play an important role in many robotics applications, such as learning inverse kinematics or operational space robot control.
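As a rough illustration of what online model learning for model-based control means in practice, the sketch below (hypothetical example code, not taken from any of the publications listed here) streams samples from a toy one-degree-of-freedom system, fits a Gaussian process regressor to a sliding window of recent data, and queries the learned inverse dynamics model for a feedforward torque. The real-time methods developed in this research line rely on far more efficient local or incremental approximations than the naive window refit shown here.

    # Hypothetical sketch: online learning of an inverse dynamics model
    # for a toy 1-DoF system, used as feedforward in model-based control.
    # Real-time methods use efficient incremental/local approximations;
    # refitting a full GP on a sliding window is for illustration only.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)

    def observed_torque(q, qd, qdd):
        # assumed toy dynamics: tau = I*qdd + m*g*l*sin(q) + d*qd + noise
        return 0.5 * qdd + 4.9 * np.sin(q) + 0.1 * qd + 0.01 * rng.standard_normal()

    X, y = [], []
    model = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)

    for t in range(200):
        q, qd, qdd = rng.uniform(-1.0, 1.0, size=3)    # new streaming sample
        X.append([q, qd, qdd])
        y.append(observed_torque(q, qd, qdd))
        X, y = X[-100:], y[-100:]                      # keep a sliding window of data
        model.fit(np.array(X), np.array(y))            # "online" model update

    # query the learned model for the feedforward torque of a desired motion
    tau_ff = model.predict(np.array([[0.3, 0.0, 0.5]]))[0]
    print(f"predicted feedforward torque: {tau_ff:.3f}")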

To promote machine learning techniques in robotics and control, he co-organized the NIPS 2009 workshop on Probabilistic Approaches for Robotics and Control. More details on his research activities can be found under Research > Model Learning. He co-taught the course Autonome Lernsysteme (Autonomous Learning Systems) at TU Darmstadt in Fall 2012. In summer 2017, he is teaching a lecture course on Machine Learning at the University of Stuttgart.

Duy graduated in Spring 2011 with a Ph.D. from the University of Freiburg. He currently works at Bosch Corporate Research in Renningen near Stuttgart, Germany. While continuing his work on model learning there, he has recently become interested in control and policy learning, which can be useful for numerous automotive and industrial applications.

Duy Nguyen-Tuong can be found on [Google Scholar] and [DBLP].

Publications

  •   [Bib] Peters, J.; Lee, D.; Kober, J.; Nguyen-Tuong, D.; Bagnell, J.; Schaal, S. (2017). Chapter 15: Robot Learning, Springer Handbook of Robotics, 2nd Edition, pp. 357-394, Springer International Publishing.
  •   [Bib] Vinogradska, J.; Bischoff, B.; Nguyen-Tuong, D.; Peters, J. (2017). Stability of Controllers for Gaussian Process Forward Models, Journal of Machine Learning Research (JMLR), 18, 100, pp. 1-37.
  •   [Bib] Vinogradska, J.; Bischoff, B.; Nguyen-Tuong, D.; Romer, A.; Schmidt, H.; Peters, J. (2016). Stability of Controllers for Gaussian Process Forward Models, Proceedings of the International Conference on Machine Learning (ICML).
  •   [Bib] Bischoff, B.; Nguyen-Tuong, D.; van Hoof, H.; McHutchon, A.; Rasmussen, C.E.; Knoll, A.; Peters, J.; Deisenroth, M.P. (2014). Policy Search for Learning Robot Control Using Sparse Data, Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA).
  •   [Bib] Peters, J.; Kober, J.; Muelling, K.; Nguyen-Tuong, D.; Kroemer, O. (2013). Learning Skills with Motor Primitives, Proceedings of the 16th Yale Learning Workshop.
  •   [Bib] Nguyen-Tuong, D.; Peters, J. (2012). Online Kernel-based Learning for Task-Space Tracking Robot Control, IEEE Transactions on Neural Networks and Learning Systems, 23, 9, pp. 1417-1425.
  •   [Bib] Peters, J.; Kober, J.; Muelling, K.; Nguyen-Tuong, D.; Kroemer, O. (2012). Robot Skill Learning, Proceedings of the European Conference on Artificial Intelligence (ECAI).
  •   [Bib] Bocsi, B.; Nguyen-Tuong, D.; Csato, L.; Schoelkopf, B.; Peters, J. (2011). Learning Inverse Kinematics with Structured Prediction, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
  •   [Bib] Peters, J.; Kober, J.; Muelling, K.; Nguyen-Tuong, D.; Kroemer, O. (2009). Towards Motor Skill Learning for Robotics, Proceedings of the International Symposium on Robotics Research (ISRR), Invited Paper.
  •   [Bib] Peters, J.; Nguyen-Tuong, D. (2008). Real-Time Learning of Resolved Velocity Control on a Mitsubishi PA-10, International Conference on Robotics and Automation (ICRA).
  •   [Bib] Peters, J.; Kober, J.; Nguyen-Tuong, D. (2008). Policy Learning - A Unified Perspective with Applications in Robotics, Proceedings of the European Workshop on Reinforcement Learning (EWRL).