Reinforcement Learning; Robotics; Deep Reinforcement Learning
Mail. Davide Tateo
TU Darmstadt, Fachgebiet IAS
Hochschulstraße 10
64289 Darmstadt
Office.
Room E303,
Robert-Piloty-Gebaeude S2|02
+49-6151-16-20811
davide.tateo@tu-darmstadt.de
The main goal of his research group is to develop learning algorithms that can be deployed on real systems. To achieve this objective, the group focuses on fundamental properties of learning algorithms, such as the ability to act under (safety) constraints.
Currently, he is involved in a wide variety of projects: the collaborative KIARA project, which aims to bring advanced manipulation skills to risky scenarios; the DeepWalking project, which learns human gaits from demonstrations; and the INTENTION project, which develops legged robot locomotion exploiting active perception techniques. He previously worked on the SKILLS4ROBOTS project, whose objective was to develop humanoid robots that can acquire and improve a rich set of motor skills.
During his Ph.D. research, Davide worked under the supervision of Prof. Andrea Bonarini and Prof. Marcello Restelli, focusing in particular on Hierarchical and Inverse Reinforcement Learning. He also co-developed MushroomRL, a Reinforcement Learning Python library.
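As a brief illustration of this tooling: a minimal MushroomRL experiment is typically assembled from an environment, a policy, an agent, and a Core interaction loop. The sketch below follows the library's published tutorial for tabular Q-Learning on a grid world; the exact module paths and constructor signatures are assumptions taken from that tutorial and may differ between MushroomRL releases.

# Minimal MushroomRL-style experiment sketch (based on the library's tutorial;
# module paths and signatures are assumptions and may vary between releases).
from mushroom_rl.algorithms.value import QLearning
from mushroom_rl.core import Core
from mushroom_rl.environments import GridWorld
from mushroom_rl.policy import EpsGreedy
from mushroom_rl.utils.parameters import Parameter

# Environment: a small grid-world MDP with fixed start and goal cells.
mdp = GridWorld(width=3, height=3, goal=(2, 2), start=(0, 0))

# Epsilon-greedy exploration and a constant learning rate, wrapped as Parameter objects.
policy = EpsGreedy(epsilon=Parameter(value=1.0))
agent = QLearning(mdp.info, policy, Parameter(value=0.6))

# The Core object runs the agent-environment interaction loop.
core = Core(agent, mdp)
core.learn(n_steps=10000, n_steps_per_fit=1)  # train, fitting after every step
dataset = core.evaluate(n_episodes=10)        # collect evaluation rollouts

In the library's design, the same Core abstraction drives both tabular and deep agents, so the experiment structure stays unchanged when swapping algorithms.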
Liu, P.; Zhang, K.; Tateo, D.; Jauhri, S.; Hu, Z.; Peters, J.; Chalvatzaki, G. (2023). Safe Reinforcement Learning of Dynamic High-Dimensional Robotic Tasks: Navigation, Manipulation, Interaction, 2023 IEEE International Conference on Robotics and Automation (ICRA), IEEE.
Al-Hafez, F.; Tateo, D.; Arenz, O.; Zhao, G.; Peters, J. (2023). LS-IQ: Implicit Reward Regularization for Inverse Reinforcement Learning, International Conference on Learning Representations (ICLR).
Urain, J.; Tateo, D.; Peters, J. (2023). Learning Stable Vector Fields on Lie Groups, Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), IEEE RA-L Track.
Bjelonic, F.; Lee, J.; Arm, P.; Sako, D.; Tateo, D.; Peters, J.; Hutter, M. (2023). Learning-Based Design and Control for Quadrupedal Robots With Parallel-Elastic Actuators, IEEE Robotics and Automation Letters, 8, 3, pp.1611-1618.
Parisi, S.; Tateo, D.; Hensel, M.; D'Eramo, C.; Peters, J.; Pajarinen, J. (2022). Long-Term Visitation Value for Deep Exploration in Sparse Reward Reinforcement Learning, Algorithms, 15, 3, pp.81.
Akrour, R.; Tateo, D.; Peters, J. (2022). Continuous Action Reinforcement Learning from a Mixture of Interpretable Experts, IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 44, 10, pp.6795-6806.
Memmel, M.; Liu, P.; Tateo, D.; Peters, J. (2022). Dimensionality Reduction and Prioritized Exploration for Policy Search, 25th International Conference on Artificial Intelligence and Statistics (AISTATS).
Liu, P.; Zhang, K.; Tateo, D.; Jauhri, S.; Peters, J.; Chalvatzaki, G. (2022). Regularized Deep Signed Distance Fields for Reactive Motion Generation, 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
Urain, J.; Tateo, D.; Peters, J. (2022). Learning Stable Vector Fields on Lie Groups, Robotics and Automation Letters (RA-L).
Carvalho, J.; Tateo, D.; Muratore, F.; Peters, J. (2021). An Empirical Analysis of Measure-Valued Derivatives for Policy Gradients, International Joint Conference on Neural Networks (IJCNN).
D'Eramo, C.; Tateo, D.; Bonarini, A.; Restelli, M.; Peters, J. (2021). MushroomRL: Simplifying Reinforcement Learning Research, Journal of Machine Learning Research (JMLR).
Liu, P.; Tateo, D.; Bou-Ammar, H.; Peters, J. (2021). Efficient and Reactive Planning for High Speed Robot Air Hockey, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
Liu, P.; Tateo, D.; Bou-Ammar, H.; Peters, J. (2021). Robot Reinforcement Learning on the Constraint Manifold, Proceedings of the Conference on Robot Learning (CoRL).
D'Eramo, C.; Tateo, D.; Bonarini, A.; Restelli, M.; Peters, J. (2020). Sharing Knowledge in Multi-Task Deep Reinforcement Learning, International Conference on Learning Representations (ICLR).
Urain, J.; Ginesi, M.; Tateo, D.; Peters, J. (2020). ImitationFlow: Learning Deep Stable Stochastic Dynamic Systems by Normalizing Flows, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
Urain, J.; Tateo, D.; Ren, T.; Peters, J. (2020). Structured policy representation: Imposing stability in arbitrarily conditioned dynamic systems, NeurIPS 2020, 3rd Robot Learning Workshop, pp.7.
Tateo, D. (2019). Building structured hierarchical agents, Ph.D. Thesis.
Beretta, C.; Brizzolari, C.; Tateo, D.; Riva, A.; Amigoni, F. (2019). A Sampling-Based Algorithm for Planning Smooth Nonholonomic Paths, European Conference on Mobile Robots (ECMR).
Tateo, D.; Erdenlig, I. S.; Bonarini, A. (2019). Graph-Based Design of Hierarchical Reinforcement Learning Agents, 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE.
Akrour, R.; Tateo, D.; Peters, J. (2019). Towards Reinforcement Learning of Human Readable Policies, ECML/PKDD Workshop on Deep Continuous-Discrete Machine Learning.
Tateo, D.; Banfi, J.; Riva, A.; Amigoni, F.; Bonarini, A. (2018). Multiagent Connected Path Planning: PSPACE-Completeness and How to Deal with It, Thirty-Second AAAI Conference on Artificial Intelligence (AAAI2018), pp.4735-4742.
Tateo, D.; D'Eramo, C.; Nuara, A.; Bonarini, A.; Restelli, M. (2017). Exploiting structure and uncertainty of Bellman updates in Markov decision processes, 2017 IEEE Symposium Series on Computational Intelligence (SSCI).
Tateo, D.; Pirotta, M.; Restelli, M.; Bonarini, A. (2017). Gradient-based minimization for multi-expert Inverse Reinforcement Learning, 2017 IEEE Symposium Series on Computational Intelligence (SSCI).