I am now the DeepMind Chair of Machine Learning and Artificial Intelligence at University College London, Deputy Director of the UCL Centre for Artificial Intelligence, and part of the UNESCO Chair on Artificial Intelligence at UCL. Check out my new homepage ...
Research Interests
Machine Learning, Robotics, Control, Signal Processing.
More Information
Marc Deisenroth
Marc joined the IAS in December 2011 as a Senior Research Scientist & Group Leader (Learning for Control).
From February 2010 to December 2011, Marc was a full-time Research Associate in Dieter Fox's lab at the University of Washington (Seattle). Marc completed his Ph.D. at the Karlsruhe Institute of Technology (Germany) with Uwe D. Hanebeck. He conducted his Ph.D. research under Carl Edward Rasmussen's supervision at the Max Planck Institute for Biological Cybernetics (2006–2007) and at the University of Cambridge (2007–2009).
Marc's research interests center around methodologies from modern Bayesian machine learning and their application to control and autonomous robotic systems. Marc's goal is to make robotic and control systems more autonomous by modeling and accounting for uncertainty in a principled way. Potential applications include intelligent prostheses, autonomous robots, and healthcare assistants.
News
- Peter Englert's work on model-based imitation learning has been accepted by the journal Adaptive Behavior.
- Roberto Calandra successfully applied Bayesian optimization to learning gait parameters for biped locomotion. Have a look at the videos!
- Peter Englert obtained strong results for imitation learning based on probabilistic trajectory matching. Check out our ICRA paper!
- Zhikun Wang applied fast online Bayesian inference to an intention inference problem in the context of robot table tennis. Check out our IJRR paper!
- RSS 2013: I am the Robotics: Science & Systems 2013 workshop chair.
- EWRL 2012: I am program chair of the 10th European Workshop on Reinforcement Learning (EWRL 2012).
- Check out our YouTube channel on autonomous learning.
Research Interests
- Machine Learning: Gaussian processes, Reinforcement learning, Bayesian inference, Graphical models, Active learning/optimal design, Bayesian optimization
- Control and Robotics: Optimal control, Learning legged locomotion, Robot learning, Imitation learning
- Signal Processing: Bayesian state estimation, System identification, Inference and learning in nonlinear dynamical systems
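As a minimal illustration of the Gaussian-process tools listed above, the sketch below implements standard zero-mean GP regression with a squared-exponential kernel, the building block behind GP-based filtering, dynamics modeling, and policy search. All function names, hyperparameter values, and the toy sine-wave data are illustrative choices, not taken from any of the papers referenced on this page.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential (RBF) covariance between two 1-D input sets."""
    d2 = (A[:, None] - B[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X, y, Xs, noise=1e-2):
    """Posterior mean and variance of a zero-mean GP at test inputs Xs."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))   # noisy training covariance
    Ks = rbf_kernel(X, Xs)                          # train-test covariance
    Kss = rbf_kernel(Xs, Xs)                        # test covariance
    L = np.linalg.cholesky(K)                       # stable solve via Cholesky
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(Kss) - np.sum(v**2, axis=0)       # predictive variance
    return mean, var

# Toy example: noisy observations of sin(x)
X = np.linspace(0, 2 * np.pi, 8)
y = np.sin(X) + 0.05 * np.random.default_rng(0).normal(size=8)
mean, var = gp_posterior(X, y, np.array([np.pi / 2]))
```

With only eight noisy observations, the posterior mean near x = π/2 already tracks sin(x) closely, while the predictive variance quantifies the remaining uncertainty; this explicit uncertainty is what model-based methods such as PILCO exploit for data-efficient learning.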
Key References
- Deisenroth, M.P.; Turner, R.; Huber, M.; Hanebeck, U.D.; Rasmussen, C.E. (2012). Robust Filtering and Smoothing with Gaussian Processes, IEEE Transactions on Automatic Control.
- Deisenroth, M.P.; Rasmussen, C.E.; Peters, J. (2009). Gaussian Process Dynamic Programming, Neurocomputing, 72, pp. 1508–1524. (Among the most cited Neurocomputing articles.)
- Deisenroth, M.P.; Rasmussen, C.E. (2011). PILCO: A Model-Based and Data-Efficient Approach to Policy Search, International Conference on Machine Learning (ICML 2011).
- Grossberger, L.; Hohmann, M.R.; Peters J.; Grosse-Wentrup, M. (2017). Investigating Music Imagery as a Cognitive Paradigm for Low-Cost Brain-Computer Interfaces, Proceedings of the 7th Graz Brain-Computer Interface Conference.
- Alte, D. (2016). Control of a robotic arm using a low-cost BCI, Bachelor Thesis.
- Weber, P.; Rueckert, E.; Calandra, R.; Peters, J.; Beckerle, P. (2016). A Low-cost Sensor Glove with Vibrotactile Feedback and Multiple Finger Joint and Hand Motion Sensing for Human-Robot Interaction, Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN).
- Grossberger, L. (2016). Towards a low-cost cognitive Brain-Computer Interface for Patients with Amyotrophic Lateral Sclerosis, Bachelor Thesis.
- Rueckert, E.; Lioutikov, R.; Calandra, R.; Schmidt, M.; Beckerle, P.; Peters, J. (2015). Low-cost Sensor Glove with Force Feedback for Learning from Demonstrations using Probabilistic Trajectory Representations, ICRA 2015 Workshop on Tactile and Force Sensing for Autonomous Compliant Intelligent Robots.
- Pfretzschner, B. (2013). Autonomous Car Driving using a Low-Cost On-Board Computer, Bachelor Thesis.
- Deisenroth, M.P.; Fox, D.; Rasmussen, C.E. (2011). Learning to Control a Low-Cost Robotic Manipulator Using Data-Efficient Reinforcement Learning, Robotics: Science & Systems (RSS 2011).
- Deisenroth, M.P.; Mohamed, S. (2012). Expectation Propagation in Gaussian Process Dynamical Systems, Advances in Neural Information Processing Systems (NIPS/NeurIPS), Cambridge, MA: The MIT Press.
- Deisenroth, M.P.; Ohlsson, H. (2011). A General Perspective on Gaussian Filtering and Smoothing: Explaining Current and Deriving New Algorithms, American Control Conference (ACC 2011).
- Deisenroth, M.P.; Calandra, R.; Seyfarth, A.; Peters, J. (2012). Toward Fast Policy Search for Learning Legged Locomotion, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
Student Supervision at TU Darmstadt
- Roberto Calandra (Ph.D. student)
- Nooshin HajiGhasemi (M.Sc. student)
- Felix Schmitt (M.Sc. student)
- Sanket Kamthe (M.Sc. student)
- Brian Pfretzschner (B.Sc. student)
- Peter Englert (M.Sc., will be starting a Ph.D. at the University of Stuttgart): Model-based Imitation Learning by Probabilistic Trajectory Matching (February 2013)
- Nakul Gopalan (M.Sc., will be starting a Ph.D. at Brown University): Feedback Error Learning for Gait Acquisition (November 2012)