Journal Papers

  • Abdulsamad, H.; Peters, J. (in press). Model-Based Reinforcement Learning via Stochastic Hybrid Models, IEEE Open Journal of Control Systems, Special Section: Intersection of Machine Learning with Control.
  • Abdulsamad, H.; Nickl, P.; Klink, P.; Peters, J. (2024). Variational Hierarchical Mixtures for Probabilistic Learning of Inverse Dynamics, IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 46, 4, pp.1950-1963.
  • Hansel, K.; Moos, J.; Abdulsamad, H.; Stark, S.; Clever, D.; Peters, J. (2022). Robust Reinforcement Learning: A Review of Foundations and Recent Advances, Machine Learning and Knowledge Extraction (MAKE), 4, 1, pp.276-315, MDPI.
  • Klink, P.; Abdulsamad, H.; Belousov, B.; D'Eramo, C.; Peters, J.; Pajarinen, J. (2021). A Probabilistic Interpretation of Self-Paced Learning with Applications to Reinforcement Learning, Journal of Machine Learning Research (JMLR).
  • Abdulsamad, H.; Peters, J. (2021). Model-Based Reinforcement Learning for Stochastic Hybrid Systems, arXiv.
  • Akrour, R.; Abdolmaleki, A.; Abdulsamad, H.; Peters, J.; Neumann, G. (2018). Model-Free Trajectory-based Policy Optimization with Monotonic Improvement, Journal of Machine Learning Research (JMLR).

Conference Papers

  • Schneider, T.; Belousov, B.; Abdulsamad, H.; Peters, J. (2022). Active Inference for Robotic Manipulation, 5th Multidisciplinary Conference on Reinforcement Learning and Decision Making (RLDM).
  • Abdulsamad, H.; Nickl, P.; Klink, P.; Peters, J. (2021). A Variational Infinite Mixture for Probabilistic Inverse Dynamics Learning, Proceedings of the IEEE International Conference on Robotics and Automation (ICRA).
  • Abdulsamad, H.; Dorau, T.; Belousov, B.; Zhu, J.-J.; Peters, J. (2021). Distributionally Robust Trajectory Optimization Under Uncertain Dynamics via Relative Entropy Trust-Regions, arXiv.
  • Tosatto, S.; Carvalho, J.; Abdulsamad, H.; Peters, J. (2020). A Nonparametric Off-Policy Policy Gradient, Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS).
  • Abdulsamad, H.; Peters, J. (2020). Hierarchical Decomposition of Nonlinear Dynamics and Control for System Identification and Policy Distillation, 2nd Annual Conference on Learning for Dynamics and Control.
  • Abdulsamad, H.; Peters, J. (2020). Learning Hybrid Dynamics and Control, ECML/PKDD Workshop on Deep Continuous-Discrete Machine Learning.
  • Belousov, B.; Abdulsamad, H.; Schultheis, M.; Peters, J. (2019). Belief Space Model Predictive Control for Approximately Optimal System Identification, 4th Multidisciplinary Conference on Reinforcement Learning and Decision Making (RLDM).
  • Celik, O.; Abdulsamad, H.; Peters, J. (2019). Chance-Constrained Trajectory Optimization for Nonlinear Systems with Unknown Stochastic Dynamics, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
  • Schultheis, M.; Belousov, B.; Abdulsamad, H.; Peters, J. (2019). Receding Horizon Curiosity, Proceedings of the 3rd Conference on Robot Learning (CoRL).
  • Klink, P.; Abdulsamad, H.; Belousov, B.; Peters, J. (2019). Self-Paced Contextual Reinforcement Learning, Proceedings of the 3rd Conference on Robot Learning (CoRL).
  • Watson, J.; Abdulsamad, H.; Peters, J. (2019). Stochastic Optimal Control as Approximate Input Inference, Proceedings of the 3rd Conference on Robot Learning (CoRL).
  • Abdulsamad, H.; Naveh, K.; Peters, J. (2019). Model-Based Relative Entropy Policy Search for Stochastic Hybrid Systems, 4th Multidisciplinary Conference on Reinforcement Learning and Decision Making (RLDM).
  • Abdulsamad, H.; Arenz, O.; Peters, J.; Neumann, G. (2017). State-Regularized Policy Search for Linearized Dynamical Systems, Proceedings of the International Conference on Automated Planning and Scheduling (ICAPS).
  • Arenz, O.; Abdulsamad, H.; Neumann, G. (2016). Optimal Control and Inverse Optimal Control by Distribution Matching, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
  • Akrour, R.; Abdolmaleki, A.; Abdulsamad, H.; Neumann, G. (2016). Model-Free Trajectory Optimization for Reinforcement Learning, Proceedings of the International Conference on Machine Learning (ICML).
  • Parisi, S.; Abdulsamad, H.; Paraschos, A.; Daniel, C.; Peters, J. (2015). Reinforcement Learning vs Human Programming in Tetherball Robot Games, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).

Books

  • Belousov, B.; Abdulsamad, H.; Klink, P.; Parisi, S.; Peters, J. (2021). Reinforcement Learning Algorithms: Analysis and Applications, Studies in Computational Intelligence, Springer International Publishing.

Theses

  • Abdulsamad, H. (2016). Stochastic Optimal Control with Linearized Dynamics, Master's Thesis.