Daniel Palenicek


Reviewing

NeurIPS, ICLR, CoRL, RLC, IROS, and various ML & Robotics workshops.

Teaching Assistant

Lecture | Years
Computational Engineering and Robotics | SS 2022, WS 2022/23
Statistical Machine Learning | SS 2023, WS 2023/24

Daniel joined the Intelligent Autonomous Systems lab on October 1st, 2021 as a Ph.D. student. He is part of the 3AI project with Hessian.AI. His research focuses on increasing the sample efficiency of model-based reinforcement learning algorithms by studying the impact that model errors have on learning.

Before starting his Ph.D., Daniel completed his Bachelor's and Master's degrees in Business Informatics (Wirtschaftsinformatik) at the Technische Universität Darmstadt. He wrote his Master's thesis, entitled "Dyna-Style Model-Based Reinforcement Learning with Value Expansion", in the Computer Science Department under the supervision of Michael Lutter and Jan Peters. During his studies, he also completed two research internships, one at the Bosch Center for AI and one at Huawei Noah's Ark Lab London.

Publications

    •   Bhatt, A.; Palenicek, D.; Belousov, B.; Argus, M.; Amiranashvili, A.; Brox, T.; Peters, J. (2024). CrossQ: Batch Normalization in Deep Reinforcement Learning for Greater Sample Efficiency and Simplicity, International Conference on Learning Representations (ICLR), Spotlight.
    •   Bhatt, A.; Palenicek, D.; Belousov, B.; Argus, M.; Amiranashvili, A.; Brox, T.; Peters, J. (2024). CrossQ: Batch Normalization in Deep Reinforcement Learning for Greater Sample Efficiency and Simplicity, European Workshop on Reinforcement Learning (EWRL).
    •   Lenz, J.; Gruner, T.; Palenicek, D.; Schneider, T.; Pfenning, I.; Peters, J. (2024). Analysing the Interplay of Vision and Touch for Dexterous Insertion Tasks, CoRL 2024 Workshop on Learning Robot Fine and Dexterous Manipulation: Perception and Control.
    •   Palenicek, D.; Gruner, T.; Schneider, T.; Böhm, A.; Lenz, J.; Pfenning, I.; Krämer, E.; Peters, J. (2024). Learning Tactile Insertion in the Real World, IEEE ICRA 2024 Workshop on Robot Embodiment through Visuo-Tactile Perception.
    •   Palenicek, D.; Gruner, T.; Schneider, T.; Böhm, A.; Lenz, J.; Pfenning, I.; Krämer, E.; Peters, J. (2024). Learning Tactile Insertion in the Real World, 40th Anniversary of the IEEE International Conference on Robotics and Automation (ICRA@40).
    •   Palenicek, D.; Lutter, M.; Carvalho, J.; Peters, J. (2023). Diminishing Return of Value Expansion Methods in Model-Based Reinforcement Learning, International Conference on Learning Representations (ICLR).
    •   Gruner, T.; Belousov, B.; Muratore, F.; Palenicek, D.; Peters, J. (2023). Pseudo-Likelihood Inference, Advances in Neural Information Processing Systems (NeurIPS).
    •   Palenicek, D.; Lutter, M.; Peters, J. (2022). Revisiting Model-based Value Expansion, Multi-disciplinary Conference on Reinforcement Learning and Decision Making (RLDM).
    •   Cowen-Rivers, A.I.; Palenicek, D.; Moens, V.; Abdullah, M.A.; Sootla, A.; Wang, J.; Bou-Ammar, H. (2022). SAMBA: Safe Model-Based & Active Reinforcement Learning, Machine Learning.
    •   Palenicek, D. (2021). A Survey on Constraining Policy Updates Using the KL Divergence, Reinforcement Learning Algorithms: Analysis and Applications, pp. 49-57.
    •   Belousov, B.; Abdulsamad, H.; Klink, P.; Parisi, S.; Peters, J. (2021). Reinforcement Learning Algorithms: Analysis and Applications, Studies in Computational Intelligence, Springer International Publishing.
    •   Zhou, M.; Luo, J.; Villella, J.; Yang, Y.; Rusu, D.; Miao, J.; Zhang, W.; Alban, M.; Fadakar, I.; Chen, Z.; Chongxi-Huang, A.; Wen, Y.; Hassanzadeh, K.; Graves, D.; Chen, D.; Zhu, Z.; Nguyen, N.; Elsayed, M.; Shao, K.; Ahilan, S.; Zhang, B.; Wu, J.; Fu, Z.; Rezaee, K.; Yadmellat, P.; Rohani, M.; Perez-Nieves, N.; Ni, Y.; Banijamali, S.; Cowen-Rivers, A.; Tian, Z.; Palenicek, D.; Bou-Ammar, H.; Zhang, H.; Liu, W.; Hao, J.; Wang, J. (2020). SMARTS: Scalable Multi-Agent Reinforcement Learning Training School for Autonomous Driving, Conference on Robot Learning (CoRL), Best System Paper Award.

Talks and Interviews

Supervised Theses and Projects

Thesis/Project | Topic | Student(s) | Together with
M.Sc. Thesis | Investigating bottlenecks of CrossQ's sample efficiency | Vogt F. |
M.Sc. Thesis | On-robot Deep Reinforcement Learning for Quadruped Locomotion | Kinzel J. | Nico Bohlinger
B.Sc. Thesis | Diminishing Return of Value Expansion Methods in Offline Model-Based Reinforcement Learning | Dennert D. |
B.Sc. Thesis | Diminishing Return of Value Expansion Methods in Discrete Model-Based Reinforcement Learning | Ahmad F. |
RL:IP.SS24 | Training Large Scale Robot Transformer Models | Scherer C. | Tim Schneider & Theo Gruner & Maximilian Tölle
RL:IP.SS24 | XXX: eXploring X-Embodiment with RT-X | Jacobs T. | Tim Schneider & Theo Gruner & Maximilian Tölle
RL:IP.SS24 | Unveiling the Unseen: Tactile Perception and Reinforcement Learning in the Real World | Böhm A., Krämer E. | Tim Schneider & Theo Gruner
RL:IP.WS23/24 | XXX: eXploring X-Embodiment with RT-X | Dennert D., Scherer C., Ahmad F. | Tim Schneider & Theo Gruner & Maximilian Tölle
RL:IP.WS23/24 | XXX: eXploring X-Embodiment with RT-X | Jacobs T. | Tim Schneider & Theo Gruner & Maximilian Tölle
RL:IP.WS23/24 | Unveiling the Unseen: Tactile Perception and Reinforcement Learning in the Real World | Böhm A., Pfenning I., Lenz J. | Tim Schneider & Theo Gruner
RL:IP.SS23 | Latent Tactile Representations for Model-based RL | Krämer E. | Tim Schneider & Theo Gruner