Carlo D'Eramo

Quick Info

Research Interests

Multi-task Reinforcement Learning, Meta-Reinforcement Learning, Deep Reinforcement Learning

More Information

Personal website · LinkedIn · Google Scholar

Contact Information

Mail: Carlo D'Eramo
TU Darmstadt, Fachgebiet IAS
Hochschulstraße 10
64289 Darmstadt
Office: Room E323,
Robert-Piloty-Gebäude S2|02
Phone: +49-6151-1625376
Fax: +49-6151-1625375


Carlo D'Eramo is a postdoctoral researcher in the Intelligent Autonomous Systems group, working on Multi-Task Reinforcement Learning and Meta-Reinforcement Learning. Carlo joined the lab in April 2019 after receiving his Ph.D. in Information Technology from Politecnico di Milano (Milan, Italy) in February 2019.

During his Ph.D., Carlo worked under the supervision of Prof. Marcello Restelli, focusing in particular on value-based Reinforcement Learning. In his thesis, he proposed novel methodologies to address the problem of overestimation in Bellman updates and to improve exploration in stochastic MDPs. During his Ph.D., he also developed Mushroom, a Python library for Reinforcement Learning. In his postdoc, he will address the critical issues of sample efficiency and feature extraction in Deep Reinforcement Learning by means of novel Multi-Task and Meta-Reinforcement Learning techniques.
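The overestimation problem mentioned above is easy to demonstrate numerically: the max operator in the Bellman update turns zero-mean estimation noise into a positive bias, since E[max_a Q̂(a)] ≥ max_a E[Q̂(a)]. A minimal, self-contained sketch (illustrative only, not taken from the thesis or from Mushroom; all names are hypothetical):

```python
import numpy as np

# Illustrative example: five actions, all with the same true value 0,
# estimated from a few noisy samples each. Taking the max over the
# noisy estimates (as the Bellman update does) yields a positive bias.
rng = np.random.default_rng(0)

true_values = np.zeros(5)           # true action values: all exactly 0
n_runs, n_samples = 10_000, 10      # Monte Carlo runs, samples per estimate

# Shape (n_runs, n_samples, actions): noisy reward samples per action.
samples = rng.normal(loc=true_values, scale=1.0,
                     size=(n_runs, n_samples, 5))
estimates = samples.mean(axis=1)    # sample-mean estimate of each action value
max_of_estimates = estimates.max(axis=1).mean()  # avg of max over actions

print(f"true maximum value:        {true_values.max():.3f}")
print(f"average estimated maximum: {max_of_estimates:.3f}")  # clearly > 0
```

The gap between the two printed numbers is the overestimation bias; it shrinks as `n_samples` grows and widens with more actions or noisier samples, which is why the phenomenon matters in stochastic MDPs.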

Prior to his Ph.D., he received a double M.Sc. in Computer Engineering from Politecnico di Milano and the University of Illinois at Chicago (UIC) in 2015, and a B.Sc. in Computer Engineering from Politecnico di Milano in 2011.

Key references

Tosatto, S.; D'Eramo, C.; Pajarinen, J.; Restelli, M.; Peters, J. (2019). Exploration Driven by an Optimistic Bellman Equation. Proceedings of the International Joint Conference on Neural Networks (IJCNN).

D'Eramo, C.; Cini, A.; Restelli, M. (2019). Exploiting Action-Value Uncertainty to Drive Exploration in Reinforcement Learning. Proceedings of the International Joint Conference on Neural Networks (IJCNN).

Tosatto, S.; D'Eramo, C.; Pajarinen, J.; Restelli, M.; Peters, J. (2018). Technical Report: "Exploration Driven by an Optimistic Bellman Equation".

Tosatto, S.; D'Eramo, C.; Pirotta, M.; Restelli, M. (2017). Boosted Fitted Q-Iteration. Technical Report, Politecnico di Milano.

Tosatto, S.; D'Eramo, C.; Pirotta, M.; Restelli, M. (2017). Boosted Fitted Q-Iteration. Proceedings of the International Conference on Machine Learning (ICML).

D'Eramo, C.; Nuara, A.; Pirotta, M.; Restelli, M. (2017). Estimating the Maximum Expected Value in Continuous Reinforcement Learning Problems. Proceedings of the AAAI Conference on Artificial Intelligence (AAAI).

Tateo, D.; D'Eramo, C.; Nuara, A.; Bonarini, A.; Restelli, M. (2017). Exploiting Structure and Uncertainty of Bellman Updates in Markov Decision Processes. IEEE Symposium Series on Computational Intelligence (SSCI).

D'Eramo, C.; Nuara, A.; Restelli, M. (2016). Estimating the Maximum Expected Value through Gaussian Approximation. Proceedings of the International Conference on Machine Learning (ICML).
