Fabio Muratore

Quick Info

Research Interests

Robotics, Machine Learning,
Physics Simulations,
Automatic Control

More Information

Curriculum Vitae Publications

Contact Information

Mail. Fabio Muratore
TU Darmstadt, Fachgebiet IAS
Hochschulstraße 10
64289 Darmstadt
Honda Research Institute Europe
Carl-Legien-Straße 30
63073 Offenbach am Main
Office. Room E323,
Robert-Piloty-Gebäude S2|02

Fabio Muratore joined the Institute for Intelligent Autonomous Systems (IAS) at TU Darmstadt in April 2017 as a Ph.D. student. He is working in a joint project with and at the Honda Research Institute Europe (HRI) in Offenbach am Main, supervised by Jan Peters and Michael Gienger.
Fabio received his bachelor's degree in mechatronics as well as his master's degree in mechanical engineering from the Technical University of Munich. During his studies, he focused on automatic control, structural dynamics, and artificial neural networks.

Research Topic

There is broad consensus in both academia and industry that physical human-robot interaction holds great potential for future robotic applications. While it is a generic technology, applications emerge increasingly in the factory domain, particularly in production and assembly processes. Learning concepts for manipulation tasks, however, remain rather academic: they impose a number of assumptions on the underlying problem and require experts to produce the results. In particular, learning of force-based manipulation is mainly realized using kinesthetic teaching, which necessitates expensive, specialized hardware as well as an expert teacher.
The joint research project Motor Dreaming between IAS and HRI takes a different perspective on the problem. It aims to combine data-driven learning and exploitative learning in an efficient way. As an alternative to building a skill representation exclusively from data, the core idea is to additionally use generative models that allow an internal simulation of the task, also known as mental rehearsal. This step involves deriving physical simulation models from the real situation and playing them through in different variations. Such a mental rehearsal makes it possible to incorporate uncertainty, thus aiming to increase the robustness of reproduction by learning solutions that can cope with large parameter variations.
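The core idea of mental rehearsal with parameter variations can be loosely illustrated as follows. This is a minimal sketch, not the project's actual models or code: the plant, the parameter distribution, and all function names (sample_params, rollout_return, robust_value) are hypothetical stand-ins for a randomized physics simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_params(rng):
    """Sample one simulator instance (hypothetical parameters and ranges)."""
    return {
        "mass": rng.normal(1.0, 0.1),        # nominal 1.0 kg, 10% std
        "friction": rng.uniform(0.05, 0.2),  # viscous friction coefficient
    }

def rollout_return(policy, params, horizon=100, dt=0.05):
    """Toy internal simulation: a damped point mass the policy must hold at 0."""
    x, v, ret = 1.0, 0.0, 0.0
    for _ in range(horizon):
        a = policy(x, v)
        v += (a - params["friction"] * v) / params["mass"] * dt
        x += v * dt
        ret += -x**2  # reward: stay near the origin
    return ret

def robust_value(policy, n_domains=50):
    """Average return over sampled domains -- the 'mental rehearsal' estimate."""
    return np.mean([rollout_return(policy, sample_params(rng))
                    for _ in range(n_domains)])

pd_policy = lambda x, v: -5.0 * x - 2.0 * v  # simple PD controller
print(robust_value(pd_policy))
```

A policy that scores well under robust_value has been "rehearsed" against many plausible physical instances of the task, rather than a single nominal model.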

Blog Post

Check out my blog post on quantifying the transferability of sim-to-real control policies at sim2realai.github.io.


Simulation-Based Policy Optimization with Transferability Assessment (SPOTA)

Sim-to-real evaluation of SPOTA on the Ball-Balancer and Cart-Pole platforms from Quanser

The following videos are part of the supplementary material of our 2018 CoRL paper and show a comparison of SPOTA, EPOpt, TRPO, and LQR policies on the ball-on-plate task.

Spotlight talk at CoRL 2018
Evaluation of SPOTA, EPOpt, TRPO, and LQR policies in Vortex while varying selected physics parameters of the simulation
Cross-evaluation of SPOTA, EPOpt, TRPO, and LQR policies trained in Vortex and in Bullet, then tested in both
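The cross-evaluation protocol above (train in one simulator, test in every simulator) can be sketched in a few lines. This is a hypothetical toy example, not the actual Vortex or Bullet setup: each "engine" is a 1-D linear plant, and training is a simple grid search over a proportional gain.

```python
import numpy as np

def make_sim(gain_error):
    """Toy 'simulator': a 1-D plant whose actuator gain differs per engine."""
    def step(x, a):
        return 0.9 * x + gain_error * a
    return step

def evaluate(policy, sim, horizon=50):
    """Return of one rollout: penalize distance from the origin."""
    x, ret = 1.0, 0.0
    for _ in range(horizon):
        x = sim(x, policy(x))
        ret += -abs(x)
    return ret

def train(sim):
    """Pick the best proportional gain for this simulator (grid search)."""
    gains = np.linspace(-2.0, 2.0, 41)
    scores = [evaluate(lambda x, k=k: k * x, sim) for k in gains]
    best = gains[int(np.argmax(scores))]
    return lambda x: best * x

# Two engines that disagree on one physics parameter (hypothetical values).
sims = {"EngineA": make_sim(1.0), "EngineB": make_sim(0.6)}
for src, train_sim in sims.items():
    pi = train(train_sim)
    row = {dst: round(evaluate(pi, test_sim), 2) for dst, test_sim in sims.items()}
    print(src, "->", row)
```

The diagonal of the resulting matrix shows in-domain performance; the off-diagonal entries reveal how much a policy overfits to the simulator it was trained in, which is exactly what the Vortex/Bullet cross-evaluation probes.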

Key References

Muratore, F.; Gienger, M.; Peters, J. (2019). Assessing Transferability from Simulation to Reality for Reinforcement Learning, arXiv preprint arXiv:1907.04685.   [Details]   [PDF]   [BibTeX]
Muratore, F.; Treede, F.; Gienger, M.; Peters, J. (2018). Domain Randomization for Simulation-Based Policy Optimization with Transferability Assessment, Conference on Robot Learning (CoRL).   [Details]   [PDF]   [BibTeX]

Teaching & Student Supervision

Current Theses and Projects

Year | Type | In cooperation with | Student(s) | Topic

Completed Theses and Projects

Year | Type | In cooperation with | Student(s) | Topic | Document
2019 | Bachelor's Thesis | HRI | Robin Menzenbach | Benchmarking Sim-2-Real Algorithms on Real-World Platforms |
2019 | Bachelor's Thesis | Boris Belousov & HRI | Christian Eilers | Bayesian Optimization for Learning from Randomized Simulations | pdf
2019 | Seminar | - | 3 Groups | Reinforcement Learning Class |
2019 | Integrated Project | Boris Belousov | Jonas Eschmann, Robin Menzenbach, Christian Eilers | Underactuated Trajectory-Tracking Control for Long-Exposure Photography (part II) |
2019 | Master's Thesis | HRI | Markus Lamprecht | Benchmarking Robust Control against Reinforcement Learning Methods on a Robotic Balancing Problem | pdf
2018 | Integrated Project | Boris Belousov | Jonas Eschmann, Robin Menzenbach, Christian Eilers | Underactuated Trajectory-Tracking Control for Long-Exposure Photography (part I) | pdf
2018 | Master's Thesis | HRI | Felix Treede | Learning Robust Control Policies from Simulations with Perturbed Parameters | pdf


