Daniel Palenicek


Reviewing

NeurIPS, ICLR, CoRL, RLC, IROS, and various ML & Robotics workshops.

Teaching Assistant

Lecture | Years
Computational Engineering and Robotics | SS 2022, WS 2022/23
Statistical Machine Learning | SS 2023, WS 2023/24

Daniel joined the Intelligent Autonomous Systems lab on October 1st, 2021 as a Ph.D. student. He is part of the 3AI project with Hessian.AI. In his research, Daniel focuses on increasing the sample efficiency of model-based reinforcement learning algorithms by studying the impact that model errors have on learning.

Before starting his Ph.D., Daniel completed his Bachelor's and Master's degrees in Business Informatics (Wirtschaftsinformatik) at the Technische Universität Darmstadt. He wrote his Master's thesis, entitled "Dyna-Style Model-Based Reinforcement Learning with Value Expansion", in the Computer Science Department under the supervision of Michael Lutter and Jan Peters. During his studies, Daniel also completed two research internships, one at the Bosch Center for AI and one at Huawei Noah's Ark Lab in London.

Publications

    •     Bhatt, A.; Palenicek, D.; Belousov, B.; Argus, M.; Amiranashvili, A.; Brox, T.; Peters, J. (2024). CrossQ: Batch Normalization in Deep Reinforcement Learning for Greater Sample Efficiency and Simplicity, International Conference on Learning Representations (ICLR), Spotlight.
    •     Palenicek, D.; Lutter, M.; Carvalho, J.; Peters, J. (2023). Diminishing Return of Value Expansion Methods in Model-Based Reinforcement Learning, International Conference on Learning Representations (ICLR).
    •     Gruner, T.; Belousov, B.; Muratore, F.; Palenicek, D.; Peters, J. (2023). Pseudo-Likelihood Inference, Advances in Neural Information Processing Systems (NeurIPS).
    •     Palenicek, D.; Lutter, M.; Peters, J. (2022). Revisiting Model-based Value Expansion, Multi-disciplinary Conference on Reinforcement Learning and Decision Making (RLDM).
    •     Cowen-Rivers, A.I.; Palenicek, D.; Moens, V.; Abdullah, M.A.; Sootla, A.; Wang, J.; Bou-Ammar, H. (2022). SAMBA: Safe Model-Based & Active Reinforcement Learning, Machine Learning.
    •     Palenicek, D. (2021). A Survey on Constraining Policy Updates Using the KL Divergence, Reinforcement Learning Algorithms: Analysis and Applications, pp. 49-57.
    •     Belousov, B.; Abdulsamad, H.; Klink, P.; Parisi, S.; Peters, J. (2021). Reinforcement Learning Algorithms: Analysis and Applications, Studies in Computational Intelligence, Springer International Publishing.
    •     Zhou, M.; Luo, J.; Villella, J.; Yang, Y.; Rusu, D.; Miao, J.; Zhang, W.; Alban, M.; Fadakar, I.; Chen, Z.; Chongxi-Huang, A.; Wen, Y.; Hassanzadeh, K.; Graves, D.; Chen, D.; Zhu, Z.; Nguyen, N.; Elsayed, M.; Shao, K.; Ahilan, S.; Zhang, B.; Wu, J.; Fu, Z.; Rezaee, K.; Yadmellat, P.; Rohani, M.; Perez-Nieves, N.; Ni, Y.; Banijamali, S.; Cowen-Rivers, A.; Tian, Z.; Palenicek, D.; Bou-Ammar, H.; Zhang, H.; Liu, W.; Hao, J.; Wang, J. (2020). SMARTS: Scalable Multi-Agent Reinforcement Learning Training School for Autonomous Driving, Conference on Robot Learning (CoRL), Best System Paper Award.

Talks and Interviews

Supervised Theses and Projects

Thesis/Project | Student(s) | Topic | Together with
B.Sc. Thesis | Dennert D. | Diminishing Return of Value Expansion Methods in Offline Model-Based Reinforcement Learning | -
B.Sc. Thesis | Ahmad F. | Diminishing Return of Value Expansion Methods in Discrete Model-Based Reinforcement Learning | -
RL:IP WS 2023/24 | Dennert D., Scherer C., Ahmad F. | XXX: eXploring X-Embodiment with RT-X | Tim Schneider, Theo Gruner & Maximilian Tölle
RL:IP WS 2023/24 | Jacobs T. | XXX: eXploring X-Embodiment with RT-X | Tim Schneider, Theo Gruner & Maximilian Tölle
RL:IP WS 2023/24 | Böhm A., Pfenning I., Lenz J. | Unveiling the Unseen: Tactile Perception and Reinforcement Learning in the Real World | Tim Schneider & Theo Gruner
RL:IP SS 2023 | Krämer E. | Latent Tactile Representations for Model-based RL | Tim Schneider & Theo Gruner