I am now a research fellow at Aalto University.

Hany Abdulsamad

Research Interests

  • Machine Learning, Reinforcement Learning, Optimal Control, Robotics.
  • Bayesian Inference, Hierarchical Models, Switching Dynamics, Robust Control.

Publications

Publication Page · Google Scholar · ResearchGate · GitHub

Contact

hany@robot-learning.de

Hany Abdulsamad joined the Intelligent Autonomous Systems lab in April 2016 as a PhD student. His research interests include optimal control, trajectory optimization, reinforcement learning, and robotics. During his PhD, Hany is working on the SKILLS4ROBOTS project, which aims to enable humanoid robots to acquire and improve a rich set of motor skills.

Before starting his PhD, Hany completed his Bachelor's and Master's degrees in Electrical Engineering and Information Technology at the Technische Universitaet Darmstadt. He wrote his Master's thesis, entitled "Stochastic Optimal Control with Linearized Dynamics", in the Computer Science Department under the supervision of Gerhard Neumann, Oleg Arenz, and Jan Peters.

Source Code

Mixture Models: https://github.com/hanyas/mimo
Trajectory Optimization: https://github.com/hanyas/trajopt
Switching Dynamics: https://github.com/hanyas/sds
REPS: https://github.com/hanyas/reps

References

Stochastic Optimal Control

    • Abdulsamad, H.; Dorau, T.; Belousov, B.; Zhu, J.-J.; Peters, J. (2021). Distributionally Robust Trajectory Optimization Under Uncertain Dynamics via Relative Entropy Trust-Regions, arXiv.
    • Watson, J.; Abdulsamad, H.; Peters, J. (2019). Stochastic Optimal Control as Approximate Input Inference, Conference on Robot Learning (CoRL).
    • Schultheis, M.; Belousov, B.; Abdulsamad, H.; Peters, J. (2019). Receding Horizon Curiosity, Proceedings of the 3rd Conference on Robot Learning (CoRL).
    • Celik, O.; Abdulsamad, H.; Peters, J. (2019). Chance-Constrained Trajectory Optimization for Nonlinear Systems with Unknown Stochastic Dynamics, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
    • Abdulsamad, H.; Arenz, O.; Peters, J.; Neumann, G. (2017). State-Regularized Policy Search for Linearized Dynamical Systems, Proceedings of the International Conference on Automated Planning and Scheduling (ICAPS).
    • Arenz, O.; Abdulsamad, H.; Neumann, G. (2016). Optimal Control and Inverse Optimal Control by Distribution Matching, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).

Hierarchical Learning

    • Abdulsamad, H.; Peters, J. (2021). Model-Based Reinforcement Learning for Stochastic Hybrid Systems, arXiv.
    • Abdulsamad, H.; Nickl, P.; Klink, P.; Peters, J. (2021). A Variational Infinite Mixture for Probabilistic Inverse Dynamics Learning, Proceedings of the IEEE International Conference on Robotics and Automation (ICRA).
    • Abdulsamad, H.; Peters, J. (2020). Hierarchical Decomposition of Nonlinear Dynamics and Control for System Identification and Policy Distillation, 2nd Annual Conference on Learning for Dynamics and Control.
    • Abdulsamad, H.; Peters, J. (2020). Learning Hybrid Dynamics and Control, ECML/PKDD Workshop on Deep Continuous-Discrete Machine Learning.
    • Abdulsamad, H.; Naveh, K.; Peters, J. (2019). Model-Based Relative Entropy Policy Search for Stochastic Hybrid Systems, 4th Multidisciplinary Conference on Reinforcement Learning and Decision Making (RLDM).

Reinforcement Learning

    • Klink, P.; Abdulsamad, H.; Belousov, B.; D'Eramo, C.; Peters, J.; Pajarinen, J. (2021). A Probabilistic Interpretation of Self-Paced Learning with Applications to Reinforcement Learning, Journal of Machine Learning Research (JMLR).
    • Belousov, B.; Abdulsamad, H.; Klink, P.; Parisi, S.; Peters, J. (2021). Reinforcement Learning Algorithms: Analysis and Applications, Studies in Computational Intelligence, Springer International Publishing.
    • Tosatto, S.; Carvalho, J.; Abdulsamad, H.; Peters, J. (2020). A Nonparametric Off-Policy Policy Gradient, Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS).
    • Klink, P.; Abdulsamad, H.; Belousov, B.; Peters, J. (2019). Self-Paced Contextual Reinforcement Learning, Proceedings of the 3rd Conference on Robot Learning (CoRL).
    • Akrour, R.; Abdolmaleki, A.; Abdulsamad, H.; Peters, J.; Neumann, G. (2018). Model-Free Trajectory-based Policy Optimization with Monotonic Improvement, Journal of Machine Learning Research (JMLR).
    • Akrour, R.; Abdolmaleki, A.; Abdulsamad, H.; Neumann, G. (2016). Model-Free Trajectory Optimization for Reinforcement Learning, Proceedings of the International Conference on Machine Learning (ICML).
    • Parisi, S.; Abdulsamad, H.; Paraschos, A.; Daniel, C.; Peters, J. (2015). Reinforcement Learning vs Human Programming in Tetherball Robot Games, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).

Master's Theses

Bachelor's Theses