
I have moved to Imperial College in London, UK where I am a Lecturer (UK Assistant Professor). Check out my new homepage ...

Quick Info

Research Interests

Machine Learning, Robotics, Control, Signal Processing.

More Information

  • Curriculum Vitae
  • Publications
  • Google Citations
  • DBLP

Marc Deisenroth

Marc joined the IAS in December 2011 as a Senior Research Scientist & Group Leader (Learning for Control).

From February 2010 to December 2011, Marc was a full-time Research Associate in Dieter Fox's lab at the University of Washington (Seattle). Marc completed his Ph.D. at the Karlsruhe Institute of Technology (Germany) with Uwe D. Hanebeck, conducting his Ph.D. research under Carl Edward Rasmussen's supervision at the Max Planck Institute for Biological Cybernetics (2006–2007) and at the University of Cambridge (2007–2009).

Marc's research interests center around methodologies from modern Bayesian machine learning and their application to control and autonomous robotic systems. Marc's goal is to make robotic and control systems more autonomous by modeling and accounting for uncertainty in a principled way. Potential applications include intelligent prostheses, autonomous robots, and healthcare assistants.
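As a purely illustrative sketch (none of this code is from the page), a minimal Gaussian-process regression in NumPy shows the kind of principled uncertainty quantification described above: predictions far from the observed data come with larger predictive variance, which a controller can account for. The kernel choice, hyperparameters, and toy data are all assumptions.

```python
# Illustrative sketch: Gaussian-process regression with predictive uncertainty.
# Kernel, noise level, and data are toy assumptions, not from the page.
import numpy as np

def rbf_kernel(a, b, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance between 1-D input arrays a and b."""
    sq = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * sq / lengthscale**2)

def gp_posterior(x_train, y_train, x_test, noise=1e-2):
    """Posterior mean and variance of a zero-mean GP at the test inputs."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_test)
    K_ss = rbf_kernel(x_test, x_test)
    L = np.linalg.cholesky(K)                 # stable solve via Cholesky
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss) - np.sum(v**2, axis=0)
    return mean, var

x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])     # toy training inputs
y = np.sin(x)                                 # toy observations
mean, var = gp_posterior(x, y, np.array([0.5, 3.0]))
# Predictive variance grows away from the data: var at x=3.0 exceeds var at x=0.5.
```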


Research Interests

  • Machine Learning: Gaussian processes, Reinforcement learning, Bayesian inference, Graphical models, Active learning/optimal design, Bayesian optimization
  • Control and Robotics: Optimal control, Learning legged locomotion, Robot learning, Imitation learning
  • Signal Processing: Bayesian state estimation, System identification, Inference and learning in nonlinear dynamical systems

Key References

  1. Deisenroth, M.P.; Turner, R.; Huber, M.; Hanebeck, U.D.; Rasmussen, C.E. (2012). Robust Filtering and Smoothing with Gaussian Processes, IEEE Transactions on Automatic Control.
  2. Deisenroth, M.P.; Rasmussen, C.E.; Peters, J. (2009). Gaussian Process Dynamic Programming, Neurocomputing, 72, pp. 1508–1524. (Among the most cited Neurocomputing articles.)
  3. Deisenroth, M.P.; Rasmussen, C.E. (2011). PILCO: A Model-Based and Data-Efficient Approach to Policy Search, International Conference on Machine Learning (ICML 2011).
  4. Deisenroth, M.P.; Fox, D.; Rasmussen, C.E. (2011). Learning to Control a Low-Cost Robotic Manipulator Using Data-Efficient Reinforcement Learning, Robotics: Science & Systems (RSS 2011).
  5. Deisenroth, M.P.; Mohamed, S. (2012). Expectation Propagation in Gaussian Process Dynamical Systems, Advances in Neural Information Processing Systems (NIPS 2012), MIT Press.
  6. Deisenroth, M.P.; Ohlsson, H. (2011). A General Perspective on Gaussian Filtering and Smoothing: Explaining Current and Deriving New Algorithms, American Control Conference (ACC 2011).
  7. Deisenroth, M.P.; Calandra, R.; Seyfarth, A.; Peters, J. (2012). Toward Fast Policy Search for Learning Legged Locomotion, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2012).

Student Supervision at TU Darmstadt

  • Roberto Calandra (Ph.D. student)
  • Nooshin HajiGhasemi (M.Sc. student)
  • Felix Schmitt (M.Sc. student)
  • Sanket Kamthe (M.Sc. student)
  • Brian Pfretzschner (B.Sc. student)
  • Peter Englert (M.Sc., will be starting a Ph.D. at the University of Stuttgart): Model-based Imitation Learning by Probabilistic Trajectory Matching (February 2013)
  • Nakul Gopalan (M.Sc., will be starting a Ph.D. at Brown University): Feedback Error Learning for Gait Acquisition (November 2012)

