Reinforcement Learning; Robotics; Deep Reinforcement Learning
Mail. Davide Tateo
TU Darmstadt, Fachgebiet IAS
Hochschulstraße 10
64289 Darmstadt
Office.
Room E303,
Robert-Piloty-Gebaeude S2|02
+49-6151-16-20811
davide.tateo@tu-darmstadt.de
During his Ph.D. research, Davide worked under the supervision of Prof. Andrea Bonarini and Prof. Marcello Restelli, focusing in particular on Hierarchical and Inverse Reinforcement Learning. In that period, he also co-developed MushroomRL, a Python library for Reinforcement Learning.
Parisi, S.; Tateo, D.; Hensel, M.; D'Eramo, C.; Peters, J.; Pajarinen, J. (submitted). Long-Term Visitation Value for Deep Exploration in Sparse Reward Reinforcement Learning, Journal of Machine Learning Research (JMLR).
Akrour, R.; Tateo, D.; Peters, J. (submitted). Reinforcement Learning from a Mixture of Interpretable Experts, Transactions on Pattern Analysis and Machine Intelligence (TPAMI).
D'Eramo, C.; Tateo, D.; Bonarini, A.; Restelli, M.; Peters, J. (2020). Sharing Knowledge in Multi-Task Deep Reinforcement Learning, International Conference on Learning Representations (ICLR).
Urain, J.; Ginesi, M.; Tateo, D.; Peters, J. (2020). ImitationFlow: Learning Deep Stable Stochastic Dynamic Systems by Normalizing Flows, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
Urain, J.; Tateo, D.; Ren, T.; Peters, J. (2020). Structured Policy Representation: Imposing Stability in arbitrarily conditioned dynamic systems, NeurIPS 2020, 3rd Robot Learning Workshop, pp.7.
Tateo, D. (2019). Building structured hierarchical agents, Ph.D. Thesis.
Beretta, C.; Brizzolari, C.; Tateo, D.; Riva, A.; Amigoni, F. (2019). A Sampling-Based Algorithm for Planning Smooth Nonholonomic Paths, European Conference on Mobile Robots (ECMR).
Tateo, D.; Erdenlig, I. S.; Bonarini, A. (2019). Graph-Based Design of Hierarchical Reinforcement Learning Agents, 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE.
Akrour, R.; Tateo, D.; Peters, J. (2019). Towards Reinforcement Learning of Human Readable Policies, ECML/PKDD Workshop on Deep Continuous-Discrete Machine Learning.
Tateo, D.; Banfi, J.; Riva, A.; Amigoni, F.; Bonarini, A. (2018). Multiagent Connected Path Planning: PSPACE-Completeness and How to Deal with It, Thirty-Second AAAI Conference on Artificial Intelligence (AAAI2018), pp.4735-4742.
Tateo, D.; D'Eramo, C.; Nuara, A.; Bonarini, A.; Restelli, M. (2017). Exploiting structure and uncertainty of Bellman updates in Markov decision processes, 2017 IEEE Symposium Series on Computational Intelligence (SSCI).
Tateo, D.; Pirotta, M.; Restelli, M.; Bonarini, A. (2017). Gradient-based minimization for multi-expert Inverse Reinforcement Learning, 2017 IEEE Symposium Series on Computational Intelligence (SSCI).