I have graduated and moved to the Bosch Center for Artificial Intelligence in Renningen near Stuttgart.

Fabio Muratore

Research Interests

Robotics, Machine Learning,
Physics Simulations,
Automatic Control

More Information

Publications Google Scholar Code

Contact Information

Mail. Fabio Muratore
TU Darmstadt, Fachgebiet IAS
Hochschulstraße 10
64289 Darmstadt
and
Honda Research Institute Europe
Carl-Legien-Straße 30
63073 Offenbach am Main
Office. Room E323,
Robert-Piloty-Gebäude S2|02
+49-6151-1625376
+49-6151-1625375
fabio@robot-learning.de

Fabio Muratore joined the Institute for Intelligent Autonomous Systems (IAS) at TU Darmstadt in April 2017 as a Ph.D. student and stayed until November 2021. He worked on a joint project with, and at, the Honda Research Institute Europe (HRI) in Offenbach am Main, supervised by Jan Peters and Michael Gienger.
Fabio received his bachelor's degree in mechatronics as well as his master's degree in mechanical engineering from the Technical University of Munich. During his studies, he focused on automatic control, structural dynamics, and artificial neural networks.



Research Topic

There is broad consensus in academia as well as in industry that physical human-robot interaction holds great potential for future robotic applications. While it is a generic technology, applications are emerging increasingly in the factory domain, particularly in production and assembly processes.
Learning concepts for manipulation tasks, however, remain rather academic: they impose a number of assumptions on the underlying problem and require scientists to produce the results. In particular, learning force-based manipulation is mainly realized through kinesthetic teaching, which necessitates expensive, specialized hardware as well as an expert.

The joint research project Motor Dreaming between IAS and HRI takes a different perspective on the problem. It aims to combine data-driven and exploitative learning in an efficient way. As an alternative to building a skill representation exclusively from data, the core idea is to additionally use generative models that allow an internal simulation of the task, also known as mental rehearsal. This step involves devising physical simulation models of the real situation and being able to play them through in different variations. Such a mental rehearsal allows incorporating uncertainty, thus aiming to increase the robustness of reproduction by learning solutions that can deal with large parameter variations.
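The idea of playing a simulation through in different variations can be sketched in a few lines. The parameter names, ranges, and the `policy_return` callback below are purely illustrative assumptions, not the project's actual models:

```python
import random

def sample_physics_params():
    """Draw one randomized set of physics parameters (hypothetical names and ranges)."""
    return {
        "mass": random.gauss(1.0, 0.1),        # link mass [kg]
        "friction": random.uniform(0.2, 0.8),  # Coulomb friction coefficient
        "delay": random.randint(0, 3),         # actuation delay [time steps]
    }

def mental_rehearsal(policy_return, n_rollouts=100):
    """Estimate a policy's expected return over many randomized simulation instances.

    `policy_return` maps one parameter set to the return of a simulated rollout.
    A policy that scores well across the whole distribution is more likely to be
    robust to the (unknown) parameters of the real system.
    """
    returns = [policy_return(sample_physics_params()) for _ in range(n_rollouts)]
    return sum(returns) / len(returns)
```

In practice the rollouts would run in a physics engine; here the callback stands in for one simulated episode.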

Key References

    •   Muratore, F.; Ramos, F.; Turk, G.; Yu, W.; Gienger, M.; Peters, J. (2022). Robot Learning from Randomized Simulations: A Review. Frontiers in Robotics and AI, 9.
    •   Muratore, F.; Gruner, T.; Wiese, F.; Belousov, B.; Gienger, M.; Peters, J. (2021). Neural Posterior Domain Randomization. Conference on Robot Learning (CoRL).
    •   Muratore, F.; Eilers, C.; Gienger, M.; Peters, J. (2021). Data-efficient Domain Randomization with Bayesian Optimization. IEEE Robotics and Automation Letters (RA-L), with presentation at the IEEE International Conference on Robotics and Automation (ICRA). IEEE.
    •   Muratore, F.; Gienger, M.; Peters, J. (2021). Assessing Transferability from Simulation to Reality for Reinforcement Learning. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 43(4), pp. 1172-1183. IEEE.

Code

Check out my open-source repository for reinforcement learning from randomized simulations, SimuRLacra.
In the rare event you need a Python implementation of a backlash model, this might help you.

Videos

Neural Posterior Domain Randomization (NPDR)

Brief summary of the approach

Bayesian Domain Randomization (BayRn)

Sim-to-Real transfer of policies learned with BayRn on the Barrett WAM (ball-in-a-cup task) and a Furuta pendulum (swing-up and balance task)

Simulation-Based Policy Optimization with Transferability Assessment (SPOTA)

Sim-to-real transfer evaluation of a policy learned with SPOTA on the Ball-Balancer and Cart-Pole platforms
Spotlight talk at CoRL 2018
Evaluation of SPOTA, EPOpt, TRPO, and LQR policies in Vortex varying selected physics parameters of the simulation
Cross-evaluation of SPOTA, EPOpt, TRPO, and LQR policies trained in Vortex and in Bullet, then tested in both

(Blog) Posts

I wrote a very easy-to-follow blog post on quantifying the transferability of sim-to-real control policies at sim2realai.github.io.
You can also find a short how-to on creating beautiful figures for your papers with Inkscape and LaTeX here.
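The core of the Inkscape-plus-LaTeX workflow is Inkscape's "PDF + LaTeX" export, which splits a drawing into a graphics PDF and a text overlay that LaTeX typesets in the document's own font. A minimal example, assuming the drawing was exported as figure.pdf and figure.pdf_tex:

```latex
% Export from Inkscape with "Omit text in PDF and create LaTeX file" checked;
% this yields figure.pdf (graphics) and figure.pdf_tex (text overlay).
\documentclass{article}
\usepackage{graphicx}
\usepackage{xcolor}  % the generated .pdf_tex file may reference colors
\begin{document}
\begin{figure}
  \centering
  \def\svgwidth{\columnwidth}  % scale the figure to the column width
  \input{figure.pdf_tex}
  \caption{Figure drawn in Inkscape, with text typeset by LaTeX.}
\end{figure}
\end{document}
```

Because the labels are typeset by LaTeX, math in figure labels matches the body text exactly.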

Teaching & Student Supervision

Completed Theses and Projects

Start | Type | In cooperation with | Student(s) | Topic | Document
2021 | Master's Thesis | Boris Belousov | Theo Gruner | Wasserstein-Optimal Bayesian System Identification for Domain Randomization | pdf
2020 | Integrated Project | - | Fabian Damken, Heiko Carrasco | Combining Domain Randomization and Self-Paced Contextual Reinforcement Learning | pdf
2020 | Integrated Project | - | Julien Brosseit, Benedikt Hahner | Combining Domain Randomization and Policy Distillation | pdf
2020 | Integrated Project | Boris Belousov | Theo Gruner, Arlene Kühn, Florian Wiese | Likelihood-Free Inference for Domain Randomization | pdf
2020 | Master's Thesis | Thomas Weisswange | David Rother | Reinforcement Learning in Decentralized Multi-Goal Multi-Agent Settings | pdf
2020 | Master's Thesis | Claudio Zito | Jonas Eschmann | Partially Unsupervised Deep Meta Reinforcement Learning | pdf
2019 | Student Project | GCC | 6 groups à 2 students | Parallelizing a Reinforcement Learning Algorithm | -
2019 | Bachelor's Thesis | Benedict Flade | Simon Kohaut | Simultaneous Map Correction and Auto Calibration for Hybrid Localization | -
2019 | Bachelor's Thesis | HRI | Robin Menzenbach | Benchmarking Sim2Real Algorithms on Real-World Platforms | pdf
2019 | Bachelor's Thesis | Boris Belousov & HRI | Christian Eilers | Bayesian Optimization for Learning from Randomized Simulations | pdf
2019 | Seminar | - | 3 groups à 3 students | Reinforcement Learning Class | -
2019 | Integrated Project | Boris Belousov | Jonas Eschmann, Robin Menzenbach, Christian Eilers | Underactuated Trajectory-Tracking Control for Long-Exposure Photography (part II) | pdf
2019 | Master's Thesis | HRI | Markus Lamprecht | Benchmarking Robust Control against Reinforcement Learning Methods on a Robotic Balancing Problem | pdf
2018 | Integrated Project | Boris Belousov | Jonas Eschmann, Robin Menzenbach, Christian Eilers | Underactuated Trajectory-Tracking Control for Long-Exposure Photography (part I) | pdf
2018 | Master's Thesis | HRI | Felix Treede | Learning Robust Control Policies from Simulations with Perturbed Parameters | pdf