Fabio Muratore

Quick Info

Research Interests

Robotics, Machine Learning,
Physics Simulations,
Automatic Control

More Information

Publications Code

Contact Information

Mail. Fabio Muratore
TU Darmstadt, Fachgebiet IAS
Hochschulstraße 10
64289 Darmstadt
Honda Research Institute Europe
Carl-Legien-Straße 30
63073 Offenbach am Main
Office. Room E323,
Robert-Piloty-Gebäude S2|02

Fabio Muratore joined the Institute for Intelligent Autonomous Systems (IAS) at TU Darmstadt in April 2017 as a Ph.D. student. He is working in a joint project with and at the Honda Research Institute Europe (HRI) in Offenbach am Main, supervised by Jan Peters and Michael Gienger.
Fabio received his bachelor's degree in mechatronics as well as his master's degree in mechanical engineering from the Technical University of Munich. During his studies, he focused on automatic control, structural dynamics, and artificial neural networks.

Research Topic

There is broad consensus in both academia and industry that physical human-robot interaction holds great potential for future robotic applications. While a generic technology, its applications are emerging increasingly in the factory domain, particularly in production and assembly processes. Learning concepts for manipulation tasks, however, are still rather academic, as they impose a number of assumptions on the underlying problem and require scientists to produce the results. In particular, learning force-based manipulation is mainly realized through kinesthetic teaching, which necessitates expensive, specialized hardware as well as an expert teacher.
The joint research project Motor Dreaming between IAS and HRI takes a different perspective on the problem. It aims to combine data-driven learning and exploitative learning in an efficient way. As an alternative to building a skill representation exclusively from data, the core idea is to additionally use generative models that allow an internal simulation of the task, also known as mental rehearsal. This step involves devising physics simulation models of the real situation and playing them through in different variations. Such a mental rehearsal makes it possible to incorporate uncertainty, thus aiming to increase the robustness of reproduction by learning solutions that can cope with large parameter variations.
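The mental-rehearsal idea above can be sketched as domain randomization: instead of evaluating a policy in a single simulator, its physics parameters are drawn from a distribution and the policy is judged across all sampled variants. The toy 1-D point-mass simulator, the parameter ranges, and the PD controller below are hypothetical stand-ins for illustration, not the project's actual setup.

```python
import random

def simulate_episode(policy, mass, friction, num_steps=100, dt=0.05):
    """Roll out a policy in a toy 1-D point-mass simulator.

    The dynamics (a point mass pushed by the policy's force and slowed
    by viscous friction) stand in for a full physics engine.
    """
    pos, vel, ret = 1.0, 0.0, 0.0  # start displaced from the origin
    for _ in range(num_steps):
        force = policy(pos, vel)
        acc = (force - friction * vel) / mass
        vel += acc * dt
        pos += vel * dt
        ret -= pos ** 2  # reward: drive the mass back to the origin
    return ret

def randomized_returns(policy, num_domains=20, seed=0):
    """Mental rehearsal: evaluate the policy over many simulator
    instances whose physics parameters are sampled from a distribution."""
    rng = random.Random(seed)
    returns = []
    for _ in range(num_domains):
        mass = rng.uniform(0.8, 1.2)       # nominal 1.0 kg +/- 20 %
        friction = rng.uniform(0.05, 0.5)  # wide range -> robustness pressure
        returns.append(simulate_episode(policy, mass, friction))
    return returns

# A simple PD controller standing in for a learned policy.
pd_policy = lambda pos, vel: -5.0 * pos - 1.0 * vel
rets = randomized_returns(pd_policy)
worst, avg = min(rets), sum(rets) / len(rets)
```

A policy that only scores well on the nominal parameters but collapses on the sampled variants would show a large spread between `worst` and `avg`; robust policies keep both close together.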

Key References

  1. Muratore, F.; Eilers, C.; Gienger, M.; Peters, J. (2020). Bayesian Domain Randomization for Sim-to-Real Transfer, arXiv e-prints, arXiv:2003.02471.
  2. Muratore, F.; Gienger, M.; Peters, J. (in press). Assessing Transferability from Simulation to Reality for Reinforcement Learning, IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI).


Check out my open-source repository for reinforcement learning from randomized simulations, SimuRLacra.

Blog Post

I wrote an easy-to-follow blog post on quantifying the transferability of sim-to-real control policies at sim2realai.github.io.


Bayesian Domain Randomization (BayRn)

Sim-to-Real transfer of a policy learned with BayRn on the Furuta Pendulum

Simulation-Based Policy Optimization with Transferability Assessment (SPOTA)

Sim-to-real transfer of policies learned with SPOTA on the Ball-Balancer and Cart-Pole platforms

The subsequent videos are part of the supplementary material to our 2018 CoRL paper and show comparisons of SPOTA, EPOpt, TRPO, and LQR policies on the ball-on-plate task.
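The cross-evaluation shown in these videos rests on SPOTA's core quantity, an estimated optimality gap: on each sampled domain, the candidate policy's return is compared against the best achievable return, and the averaged difference measures how much is lost by committing to one policy. The sketch below illustrates this in a heavily simplified form; the toy point-mass rollout, the gain-based "policies", and all parameter ranges are hypothetical and only stand in for the actual simulators (Vortex, Bullet) and learned policies.

```python
import random

def rollout(policy_gain, mass, steps=100, dt=0.05):
    """Return of a PD-style controller u = -k*pos - 0.5*vel in a toy
    point-mass simulator with randomized mass (a stand-in for a real
    physics engine such as Vortex or Bullet)."""
    pos, vel, ret = 1.0, 0.0, 0.0
    for _ in range(steps):
        acc = (-policy_gain * pos - 0.5 * vel) / mass
        vel += acc * dt
        pos += vel * dt
        ret -= pos ** 2
    return ret

def estimated_optimality_gap(candidate_gain, gains, num_domains=25, seed=1):
    """SPOTA-style transferability estimate (simplified): on each sampled
    domain, compare the candidate against the best gain from a finite set;
    the average difference approximates the optimality gap."""
    rng = random.Random(seed)
    gaps = []
    for _ in range(num_domains):
        mass = rng.uniform(0.5, 1.5)  # sampled domain parameter
        j_candidate = rollout(candidate_gain, mass)
        j_best = max(rollout(g, mass) for g in gains)
        gaps.append(j_best - j_candidate)
    return sum(gaps) / len(gaps)

gap = estimated_optimality_gap(candidate_gain=2.0, gains=[1.0, 2.0, 4.0, 8.0])
```

A small estimated gap suggests the candidate policy is near-optimal across the sampled domains, which is what licenses the transfer to an unseen (real) domain.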

Spotlight talk at CoRL 2018
Evaluation of SPOTA, EPOpt, TRPO, and LQR policies in Vortex while varying selected physics parameters of the simulation
Cross-evaluation of SPOTA, EPOpt, TRPO, and LQR policies trained in Vortex and in Bullet, then tested in both

Teaching & Student Supervision

Current Theses and Projects

Approach me any time to suggest your ideas.

Year | Type | In cooperation with | Student(s) | Topic

Completed Theses and Projects

Year | Type | In cooperation with | Student(s) | Topic | Document
2019 | Student Project | GCC | 6 groups à 2 students | Parallelizing a Reinforcement Learning Algorithm |
2019 | Bachelor's Thesis | Benedict Flade & HRI | Simon Kohaut | Simultaneous Map Correction and Auto Calibration for Hybrid Localization |
2019 | Bachelor's Thesis | HRI | Robin Menzenbach | Benchmarking Sim-2-Real Algorithms on Real-World Platforms | pdf
2019 | Bachelor's Thesis | Boris Belousov & HRI | Christian Eilers | Bayesian Optimization for Learning from Randomized Simulations | pdf
2019 | Seminar | -- | 3 groups à 3 students | Reinforcement Learning Class |
2019 | Integrated Project | Boris Belousov | Jonas Eschmann, Robin Menzenbach, Christian Eilers | Underactuated Trajectory-Tracking Control for Long-Exposure Photography (part II) | pdf
2019 | Master's Thesis | HRI | Markus Lamprecht | Benchmarking Robust Control against Reinforcement Learning Methods on a Robotic Balancing Problem | pdf
2018 | Integrated Project | Boris Belousov | Jonas Eschmann, Robin Menzenbach, Christian Eilers | Underactuated Trajectory-Tracking Control for Long-Exposure Photography (part I) | pdf
2018 | Master's Thesis | HRI | Felix Treede | Learning Robust Control Policies from Simulations with Perturbed Parameters | pdf
