I am now a research fellow at Aalto University.

Hany Abdulsamad

Hany Abdulsamad joined the Intelligent Autonomous Systems (IAS) lab in April 2016 as a PhD student. His research interests include optimal control, trajectory optimization, reinforcement learning, and robotics. During his PhD, Hany has been working on the SKILLS4ROBOTS project, which aims to enable humanoid robots to acquire and improve a rich set of motor skills.

Before starting his PhD, Hany completed his Bachelor's and Master's degrees in Electrical Engineering and Information Technology at the Technische Universität Darmstadt. He wrote his Master's thesis, entitled "Stochastic Optimal Control with Linearized Dynamics", in the Computer Science Department under the supervision of Gerhard Neumann, Oleg Arenz, and Jan Peters.

Research Interests

Machine Learning, Reinforcement Learning, Optimal Control, Robotics.
Bayesian Inference, Hierarchical Models, Switching Dynamics, Robust Control.

Source Code

Mixture Models: https://github.com/hanyas/mimo
Trajectory Optimization: https://github.com/hanyas/trajopt
Switching Dynamics: https://github.com/hanyas/sds
Relative Entropy Policy Search (REPS): https://github.com/hanyas/reps

References

Stochastic Optimal Control

  1. Abdulsamad, H.; Dorau, T.; Belousov, B.; Zhu, J.-J.; Peters, J. (2021). Distributionally Robust Trajectory Optimization Under Uncertain Dynamics via Relative Entropy Trust-Regions, arXiv.
  2. Watson, J.; Abdulsamad, H.; Peters, J. (2019). Stochastic Optimal Control as Approximate Input Inference, Proceedings of the 3rd Conference on Robot Learning (CoRL).
  3. Schultheis, M.; Belousov, B.; Abdulsamad, H.; Peters, J. (2019). Receding Horizon Curiosity, Proceedings of the 3rd Conference on Robot Learning (CoRL).
  4. Celik, O.; Abdulsamad, H.; Peters, J. (2019). Chance-Constrained Trajectory Optimization for Nonlinear Systems with Unknown Stochastic Dynamics, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
  5. Arenz, O.; Abdulsamad, H.; Neumann, G. (2016). Optimal Control and Inverse Optimal Control by Distribution Matching, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
  6. Abdulsamad, H.; Arenz, O.; Peters, J.; Neumann, G. (2017). State-Regularized Policy Search for Linearized Dynamical Systems, Proceedings of the International Conference on Automated Planning and Scheduling (ICAPS).

Hierarchical Learning

  1. Abdulsamad, H.; Peters, J. (2021). Model-Based Reinforcement Learning for Stochastic Hybrid Systems, arXiv.
  2. Abdulsamad, H.; Nickl, P.; Klink, P.; Peters, J. (2021). A Variational Infinite Mixture for Probabilistic Inverse Dynamics Learning, Proceedings of the IEEE International Conference on Robotics and Automation (ICRA).
  3. Abdulsamad, H.; Peters, J. (2020). Hierarchical Decomposition of Nonlinear Dynamics and Control for System Identification and Policy Distillation, 2nd Annual Conference on Learning for Dynamics and Control (L4DC).
  4. Abdulsamad, H.; Peters, J. (2020). Learning Hybrid Dynamics and Control, ECML/PKDD Workshop on Deep Continuous-Discrete Machine Learning.
  5. Abdulsamad, H.; Naveh, K.; Peters, J. (2019). Model-Based Relative Entropy Policy Search for Stochastic Hybrid Systems, 4th Multidisciplinary Conference on Reinforcement Learning and Decision Making (RLDM).

Reinforcement Learning

  1. Klink, P.; Abdulsamad, H.; Belousov, B.; D'Eramo, C.; Peters, J.; Pajarinen, J. (2021). A Probabilistic Interpretation of Self-Paced Learning with Applications to Reinforcement Learning, Journal of Machine Learning Research (JMLR).
  2. Belousov, B.; Abdulsamad, H.; Klink, P.; Parisi, S.; Peters, J. (2021). Reinforcement Learning Algorithms: Analysis and Applications, Studies in Computational Intelligence, Springer International Publishing.
  3. Tosatto, S.; Carvalho, J.; Abdulsamad, H.; Peters, J. (2020). A Nonparametric Off-Policy Policy Gradient, Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS).
  4. Klink, P.; Abdulsamad, H.; Belousov, B.; Peters, J. (2019). Self-Paced Contextual Reinforcement Learning, Proceedings of the 3rd Conference on Robot Learning (CoRL).
  5. Akrour, R.; Abdolmaleki, A.; Abdulsamad, H.; Peters, J.; Neumann, G. (2018). Model-Free Trajectory-based Policy Optimization with Monotonic Improvement, Journal of Machine Learning Research (JMLR).
  6. Akrour, R.; Abdolmaleki, A.; Abdulsamad, H.; Neumann, G. (2016). Model-Free Trajectory Optimization for Reinforcement Learning, Proceedings of the International Conference on Machine Learning (ICML).
  7. Parisi, S.; Abdulsamad, H.; Paraschos, A.; Daniel, C.; Peters, J. (2015). Reinforcement Learning vs Human Programming in Tetherball Robot Games, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).

