I have joined INRIA Scool as a permanent research fellow; see my new homepage.

Riad Akrour

Research Interests

Machine Learning for Decision Problems, Hierarchical Representations and Controls, Preference Learning

More Information

  • Publications
  • Google Scholar Citations

Contact Information

Mail. TU Darmstadt, FB-Informatik, FG-IAS, Hochschulstr. 10, 64289 Darmstadt
Office. Room E226, Robert-Piloty-Gebäude S2|02
Phone (lab). +49-6151-16-20074
Email. riad(at)robot-learning(dot)de

Riad Akrour is a research scientist in the Intelligent Autonomous Systems group working on hierarchical Reinforcement and Inverse Reinforcement Learning. Riad joined the lab in April 2015 after receiving his PhD in computer science from Université Paris-Sud 11 (Orsay, France) in October 2014.

During his PhD thesis, Riad worked with Michèle Sebag and Marc Schoenauer on reducing the expertise requirements of Policy Learning algorithms, allowing uninitiated users to teach robots new tasks. They did so by proposing a learning framework, Preference-based Reinforcement Learning, in which the user gives binary feedback (better/worse) on trajectories demonstrated by the robot, reducing the user's role to that of a mere critic. During his postdoc, he is expected to focus on the automatic discovery of structure in robot trajectories and to develop (hierarchical) algorithms capable of exploiting it.
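To make this interaction model concrete, here is a minimal, hypothetical sketch of a preference-based loop; it is not the thesis algorithm, and all names, the toy 1-D task, and the policy update below are illustrative assumptions. The robot demonstrates two trajectories, the user answers which one is better, and the policy is nudged toward the preferred one.

    # Illustrative sketch of a preference-based RL loop (hypothetical, not the thesis algorithm).
    # The user acts only as a critic: given two demonstrated trajectories, they answer "better/worse".
    import random

    def rollout(policy, horizon=10):
        """Demonstrate a trajectory on a toy 1-D task as a list of (state, action) pairs."""
        state, trajectory = 0.0, []
        for _ in range(horizon):
            action = policy(state)
            trajectory.append((state, action))
            state += action
        return trajectory

    def user_prefers(traj_a, traj_b, goal=5.0):
        """Stand-in for the human critic: prefers the trajectory whose final state is closer to a goal."""
        return traj_a if abs(traj_a[-1][0] - goal) < abs(traj_b[-1][0] - goal) else traj_b

    def update_policy(preferred):
        """Placeholder improvement step: bias new actions toward the mean action of the preferred trajectory."""
        mean_action = sum(a for _, a in preferred) / len(preferred)
        return lambda state, m=mean_action: 0.5 * m + 0.5 * random.uniform(-1.0, 1.0)

    policy = lambda state: random.uniform(-1.0, 1.0)   # initial random policy
    for _ in range(20):                                # 20 rounds of binary feedback
        preferred = user_prefers(rollout(policy), rollout(policy))
        policy = update_policy(preferred)

In the actual framework the critic is a human rather than a scripted comparison, and the crude averaging above only stands in for a proper policy improvement step driven by the collected preferences.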

Prior to his PhD, he received a diploma in Computer Engineering from École Nationale Supérieure d'Informatique (Algiers, Algeria) and an MSc in Artificial Intelligence and Decision from Université Pierre et Marie Curie (Paris, France).

Research Interests

  • Reinforcement Learning and Inverse Reinforcement Learning
  • Continuous Optimization
  • Preference-based Reinforcement Learning
  • Robotics

Key References

Reinforcement Learning

  • Akrour, R.; Pajarinen, J.; Neumann, G.; Peters, J. (2019). Projections for Approximate Policy Iteration Algorithms, Proceedings of the International Conference on Machine Learning (ICML).
  • Akrour, R.; Abdolmaleki, A.; Abdulsamad, H.; Peters, J.; Neumann, G. (2018). Model-Free Trajectory-based Policy Optimization with Monotonic Improvement, Journal of Machine Learning Research (JMLR).

Optimization

  • Akrour, R.; Atamna, A.; Peters, J. (2021). Convex Optimization with an Interpolation-based Projection and its Application to Deep Learning, Machine Learning (MACH), 110(8), pp. 2267-2289.
  • Akrour, R.; Sorokin, D.; Peters, J.; Neumann, G. (2017). Local Bayesian Optimization of Motor Skills, Proceedings of the International Conference on Machine Learning (ICML).

Preference-based Reinforcement Learning

  • Wirth, C.; Akrour, R.; Fürnkranz, J.; Neumann, G. (2017). A Survey of Preference-Based Reinforcement Learning Methods, Journal of Machine Learning Research (JMLR).
  • Akrour, R.; Schoenauer, M.; Souplet, J.-C.; Sebag, M. (2014). Programming by Feedback, Proceedings of the International Conference on Machine Learning (ICML).