Fabio Muratore

Quick Info

Research Interests

Robotics, Physics Simulations,
Machine Learning, Automatic Control

More Information

Curriculum Vitae | Publications

Contact Information

Mail. Fabio Muratore
TU Darmstadt, Fachgebiet IAS
Hochschulstraße 10
64289 Darmstadt
and
Honda Research Institute Europe
Carl-Legien-Straße 30
63073 Offenbach am Main
Office. Room E323,
Robert-Piloty-Gebaeude S2|02
Phone. +49-6151-1625376
Fax. +49-6151-1625375

Fabio Muratore joined the Institute for Intelligent Autonomous Systems (IAS) at TU Darmstadt in April 2017 as a Ph.D. student. He works on a joint project with, and is located at, the Honda Research Institute Europe (HRI) in Offenbach am Main, supervised by Jan Peters and Michael Gienger.
Fabio received his bachelor's degree in mechatronics as well as his master's degree in mechanical engineering from the Technical University of Munich. During his studies he focused on automatic control, structural dynamics, and artificial neural networks.

Research Topic

There is broad consensus in both academia and industry that physical human-robot interaction holds great potential for future robotic applications. While it is a generic technology, applications increasingly emerge in the factory domain, particularly in production and assembly processes. Learning concepts for manipulation tasks are, however, still rather academic, since they impose a number of assumptions on the underlying problem and require scientists to produce the results. In particular, learning force-based manipulation is mainly realized through kinesthetic teaching, which necessitates expensive, specialized hardware as well as an expert.
The joint research project Motor Dreaming between IAS and HRI takes a different perspective on the problem. It aims to combine data-driven learning and exploitative learning in an efficient way. As an alternative to building a skill representation exclusively from data, the core idea is to additionally use generative models that allow an internal simulation of the task, also known as mental rehearsal. This step involves deriving physical simulation models from the real situation and playing them through in different variations. Such a mental rehearsal makes it possible to incorporate uncertainty, thus aiming to increase the robustness of reproduction by learning solutions that can cope with large parameter variations.
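To make the mental-rehearsal idea more concrete, here is a minimal, self-contained Python sketch: a fixed controller is scored by averaging returns over many internal simulations whose physics parameters are re-sampled for every rollout, so the score reflects robustness to model uncertainty rather than performance on a single nominal model. The toy 1D ball-on-plate dynamics, the parameter ranges, and all names are illustrative assumptions, not the project's actual models or code.

    # Minimal sketch: score a controller by averaging returns over internal simulations
    # ("mental rehearsals") whose physics parameters are re-sampled for every rollout.
    # The toy dynamics, parameter ranges, and gains are illustrative placeholders.
    import random

    def rollout(gains, mass, friction, steps=200, dt=0.01):
        """Simulate a toy 1D ball-on-plate task; the action is the plate angle."""
        kp, kd = gains
        pos, vel, ret = 0.2, 0.0, 0.0
        for _ in range(steps):
            angle = -kp * pos - kd * vel                 # linear state-feedback controller
            acc = 9.81 * angle - friction * vel / mass   # toy dynamics: tilted plate + damping
            vel += acc * dt
            pos += vel * dt
            ret -= pos**2 + 0.01 * angle**2              # negative quadratic cost as reward
        return ret

    def expected_return_over_domain(gains, n_samples=100):
        """Monte Carlo estimate of the return under randomized physics parameters."""
        returns = []
        for _ in range(n_samples):
            mass = random.uniform(0.4, 0.6)       # uncertain ball mass [kg]
            friction = random.uniform(0.05, 0.3)  # uncertain damping coefficient
            returns.append(rollout(gains, mass, friction))
        return sum(returns) / len(returns)

    print(expected_return_over_domain(gains=(2.0, 0.5)))

A policy-search method would then optimize the controller against exactly this kind of randomized objective, which is the spirit of the domain randomization approach evaluated in the videos below.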

Videos

Simulation-Based Policy Optimization with Transferability Assessment (SPOTA)

The subsequent videos are part of the supplementary material to our 2018 CoRL paper and show a comparison of SPOTA, EPOpt, TRPO, and LQR policies on the ball-on-plate task.

Spotlight talk at CoRL 2018

Evaluation of SPOTA, EPOpt, TRPO, and LQR policies in Vortex while varying selected physics parameters of the simulation
Cross-evaluation of SPOTA, EPOpt, TRPO, and LQR policies trained in Vortex and in Bullet, then tested in both simulators (a toy version of this cross-evaluation is sketched below)
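The cross-evaluation can be pictured as a small grid: each policy, tuned with one engine's nominal parameters in mind, is tested under both engines' nominals. The sketch below reuses the toy rollout() from the example above; the two nominal parameter sets standing in for Vortex and Bullet and the gain pairs standing in for the trained policies are made-up assumptions for illustration.

    # Illustrative cross-evaluation grid, reusing rollout() from the sketch above.
    # The nominal parameter sets stand in for two physics engines (e.g., Vortex and Bullet);
    # the gain pairs stand in for policies trained against each engine.
    engines = {
        "engine_A": {"mass": 0.45, "friction": 0.10},
        "engine_B": {"mass": 0.55, "friction": 0.25},
    }
    policies = {
        "trained_in_A": (2.5, 0.4),
        "trained_in_B": (1.8, 0.7),
    }
    for pol_name, gains in policies.items():
        for eng_name, p in engines.items():
            ret = rollout(gains, p["mass"], p["friction"])
            print(f"{pol_name} tested in {eng_name}: return = {ret:.2f}")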

Key References

Muratore, F.; Treede, F.; Gienger, M.; Peters, J. (2018). Domain Randomization for Simulation-Based Policy Optimization with Transferability Assessment, Conference on Robot Learning (CoRL).
Pieczona, S. J.; Muratore, F.; Zäh, M. F. (2016). An Approach for Modelling the Structural Dynamics of a Mechanical System based on a Takagi-Sugeno Representation, International Conference on Competitive Manufacturing (COMA).

Teaching & Student Supervision

Current Theses and Projects

Year | Type | In cooperation with | Student | Topic
2018 | Master's Thesis | HRI | Markus Lamprecht | Benchmarking Robust Control against Methods from Reinforcement Learning on a Robotic Balancing Problem

Completed Theses and Projects

Year | Type | In cooperation with | Student | Topic | Document
2018 | Integrated Project | Boris Belousov | Jonas Eschmann, Robin Menzenbach, Christian Eilers | Underactuated Trajectory-Tracking Control for Long-Exposure Photography | pdf
2018 | Master's Thesis | HRI | Felix Treede | Learning Robust Control Policies from Simulations with Perturbed Parameters | pdf

  
