Hany Abdulsamad

Quick Info

Research Interests

Machine Learning, Robotics, Reinforcement Learning, Optimal Control, Trajectory Optimization, Policy Search, Motor Skill Learning.

More Information

Publication Page · Google Citations · DBLP

Contact Information

Mail. TU Darmstadt, FB-Informatik, FG-IAS, Hochschulstr. 10, 64289 Darmstadt
Office. Room E304, Robert-Piloty Building S2|02
Work. +49-6151-16-25387

Hany Abdulsamad joined the Intelligent Autonomous Systems lab in April 2016 as a PhD student. His research interests include optimal control, trajectory optimization, reinforcement learning, and robotics. During his PhD, Hany is working on the SKILLS4ROBOTS project, which aims to enable humanoid robots to acquire and improve a rich set of motor skills.

Before starting his PhD, Hany completed his Bachelor's and Master's degrees in Electrical Engineering and Information Technology at the Technische Universität Darmstadt. He wrote his Master's thesis, "Stochastic Optimal Control with Linearized Dynamics", in the Computer Science Department under the supervision of Gerhard Neumann, Oleg Arenz, and Jan Peters.

Key References

  1. Abdulsamad, H.; Arenz, O.; Peters, J.; Neumann, G. (2017). State-Regularized Policy Search for Linearized Dynamical Systems. Proceedings of the International Conference on Automated Planning and Scheduling (ICAPS).
  2. Abdulsamad, H.; Naveh, K.; Peters, J. (2019). Model-Based Relative Entropy Policy Search for Stochastic Hybrid Systems. 4th Multidisciplinary Conference on Reinforcement Learning and Decision Making (RLDM).
  3. Klink, P.; Abdulsamad, H.; Belousov, B.; Peters, J. (2019). Self-Paced Contextual Reinforcement Learning. Proceedings of the 3rd Conference on Robot Learning (CoRL).
  4. Watson, J.; Abdulsamad, H.; Peters, J. (2019). Stochastic Optimal Control as Approximate Input Inference. Proceedings of the 3rd Conference on Robot Learning (CoRL).
  5. Schultheis, M.; Belousov, B.; Abdulsamad, H.; Peters, J. (2019). Receding Horizon Curiosity. Proceedings of the 3rd Conference on Robot Learning (CoRL).
  6. Celik, O.; Abdulsamad, H.; Peters, J. (2019). Chance-Constrained Trajectory Optimization for Nonlinear Systems with Unknown Stochastic Dynamics. Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
  7. Parisi, S.; Abdulsamad, H.; Paraschos, A.; Daniel, C.; Peters, J. (2015). Reinforcement Learning vs Human Programming in Tetherball Robot Games. Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).

For all publications, please see his Publication Page.

Source Code

https://github.com/hanyas
