Hany Abdulsamad
Hany is currently a research fellow at Aalto University.
Research Interests
- Machine Learning, Reinforcement Learning, Optimal Control, Robotics.
- Bayesian Inference, Hierarchical Models, Switching Dynamics, Robust Control.
Publications
Publication Page | Google Scholar | ResearchGate | GitHub
Contact

Before starting his PhD, Hany completed his Bachelor's and Master's degrees in Electrical Engineering and Information Technology at the Technische Universität Darmstadt. He wrote his Master's thesis, entitled "Stochastic Optimal Control with Linearized Dynamics", in the Computer Science Department under the supervision of Gerhard Neumann, Oleg Arenz, and Jan Peters.
Source Code
Mixture Models: https://github.com/hanyas/mimo
Trajectory Optimization: https://github.com/hanyas/trajopt
Switching Dynamics: https://github.com/hanyas/sds
REPS: https://github.com/hanyas/reps
References
Stochastic Optimal Control
- Abdulsamad, H.; Dorau, T.; Belousov, B.; Zhu, J.-J.; Peters, J. (2021). Distributionally Robust Trajectory Optimization Under Uncertain Dynamics via Relative Entropy Trust-Regions, arXiv.
- Watson, J.; Abdulsamad, H.; Peters, J. (2019). Stochastic Optimal Control as Approximate Input Inference, Proceedings of the 3rd Conference on Robot Learning (CoRL).
- Schultheis, M.; Belousov, B.; Abdulsamad, H.; Peters, J. (2019). Receding Horizon Curiosity, Proceedings of the 3rd Conference on Robot Learning (CoRL).
- Celik, O.; Abdulsamad, H.; Peters, J. (2019). Chance-Constrained Trajectory Optimization for Nonlinear Systems with Unknown Stochastic Dynamics, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
- Abdulsamad, H.; Arenz, O.; Peters, J.; Neumann, G. (2017). State-Regularized Policy Search for Linearized Dynamical Systems, Proceedings of the International Conference on Automated Planning and Scheduling (ICAPS).
- Arenz, O.; Abdulsamad, H.; Neumann, G. (2016). Optimal Control and Inverse Optimal Control by Distribution Matching, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
Hierarchical Learning
- Abdulsamad, H.; Peters, J. (2021). Model-Based Reinforcement Learning for Stochastic Hybrid Systems, arXiv.
- Abdulsamad, H.; Nickl, P.; Klink, P.; Peters, J. (2021). A Variational Infinite Mixture for Probabilistic Inverse Dynamics Learning, Proceedings of the IEEE International Conference on Robotics and Automation (ICRA).
- Abdulsamad, H.; Peters, J. (2020). Hierarchical Decomposition of Nonlinear Dynamics and Control for System Identification and Policy Distillation, 2nd Annual Conference on Learning for Dynamics and Control (L4DC).
- Abdulsamad, H.; Peters, J. (2020). Learning Hybrid Dynamics and Control, ECML/PKDD Workshop on Deep Continuous-Discrete Machine Learning.
- Abdulsamad, H.; Naveh, K.; Peters, J. (2019). Model-Based Relative Entropy Policy Search for Stochastic Hybrid Systems, 4th Multidisciplinary Conference on Reinforcement Learning and Decision Making (RLDM).
Reinforcement Learning
- Klink, P.; Abdulsamad, H.; Belousov, B.; D'Eramo, C.; Peters, J.; Pajarinen, J. (2021). A Probabilistic Interpretation of Self-Paced Learning with Applications to Reinforcement Learning, Journal of Machine Learning Research (JMLR).
- Belousov, B.; Abdulsamad, H.; Klink, P.; Parisi, S.; Peters, J. (2021). Reinforcement Learning Algorithms: Analysis and Applications, Studies in Computational Intelligence, Springer International Publishing.
- Tosatto, S.; Carvalho, J.; Abdulsamad, H.; Peters, J. (2020). A Nonparametric Off-Policy Policy Gradient, Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS).
- Klink, P.; Abdulsamad, H.; Belousov, B.; Peters, J. (2019). Self-Paced Contextual Reinforcement Learning, Proceedings of the 3rd Conference on Robot Learning (CoRL).
- Akrour, R.; Abdolmaleki, A.; Abdulsamad, H.; Peters, J.; Neumann, G. (2018). Model-Free Trajectory-based Policy Optimization with Monotonic Improvement, Journal of Machine Learning Research (JMLR).
- Akrour, R.; Abdolmaleki, A.; Abdulsamad, H.; Neumann, G. (2016). Model-Free Trajectory Optimization for Reinforcement Learning, Proceedings of the International Conference on Machine Learning (ICML).
- Parisi, S.; Abdulsamad, H.; Paraschos, A.; Daniel, C.; Peters, J. (2015). Reinforcement Learning vs Human Programming in Tetherball Robot Games, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
Master Theses
- Tom Buchholz, Variational Locally Projected Regression, co-supervised with Janosch Moos.
- Tim Schneider, Active Inference for Robotic Manipulation, co-supervised with Boris Belousov.
- Yannick Eich, Distributionally Robust Optimization for Hybrid Systems, co-supervised with Joe Watson.
- Kay Hansel, Probabilistic Dynamic Mode Primitives, co-supervised with Svenja Stark.
- Janosch Moos, Approximate Variational Inference For Mixture Models, co-supervised with Svenja Stark.
- Tim Dorau, Distributionally Robust Optimization for Optimal Control, co-supervised with Boris Belousov.
- Thomas Lautenschläger, Variational Inference for Switching Dynamics.
- Markus Semmler, Sequential Bayesian Optimal Experimental Design, co-supervised with Boris Belousov and Michael Lutter.
- Peter Nickl, Bayesian Inference for Regression Models using Nonparametric Infinite Mixtures.
- Matthias Schultheis, Bayesian Reinforcement Learning for System Identification, co-supervised with Boris Belousov.
- Pascal Klink, Generalization and Transferability in Reinforcement Learning, co-supervised with Boris Belousov.
- Onur Celik, Chance Constraints for Stochastic Optimal Control and Stochastic Optimization.
Bachelor Theses
- Tim Schneider, Guided Policy Search for In-Hand Manipulation, co-supervised with Filipe Veiga.
- Ana Borg, Infinite-Mixture Policies in Reinforcement Learning.
- Nourhan Khaled, Benchmarking Reinforcement Learning Algorithms on Tetherball Games.