Publication Details

Reference Type: Conference Paper
Author(s): Akrour, R.; Tateo, D.; Peters, J.
Year: 2019
Title: Towards Reinforcement Learning of Human Readable Policies
Journal/Conference/Book Title: ECML/PKDD Workshop on Deep Continuous-Discrete Machine Learning
Keywords: Hierarchical Reinforcement Learning; Interpretable Machine Learning
Abstract: Reinforcement learning (RL) has demonstrated its ability to solve high-dimensional tasks by leveraging non-linear function approximators. These successes, however, are mostly confined to simulated domains. When deploying RL to the real world, several concerns regarding the use of a 'black-box' policy might be raised. In an effort to make RL more interpretable, we propose in this paper a policy iteration scheme that retains a complex function approximator for its internal value predictions but constrains the policy to have a simple, human-readable structure. We show that our proposed algorithm can solve continuous-action deep RL benchmarks and return policies that can be fully visualized and interpreted by a human non-expert.
URL(s): https://mrbungle.zdv.uni-mainz.de/index.php/s/QGJSlVf3VL0v9GF
Link to PDF: https://www.ias.informatik.tu-darmstadt.de/uploads/Team/RiadAkrour/decodeml2019.pdf
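Note: The abstract describes a policy-iteration scheme in which the value estimates may come from a complex function approximator while the deployed policy is constrained to a simple, human-readable form. The sketch below is a minimal illustrative toy version of that idea, not the paper's implementation: it assumes a hypothetical 1-D control task, a polynomial critic as a stand-in for the complex value approximator, and a linear controller a = K * s + b as the readable policy class.

import numpy as np

rng = np.random.default_rng(0)
gamma = 0.95

def step(s, a):
    """Toy 1-D task (illustrative assumption): drive the state towards zero."""
    s_next = s + 0.1 * a + 0.01 * rng.standard_normal()
    reward = -(s_next ** 2 + 0.01 * a ** 2)
    return s_next, reward

def features(s):
    """Polynomial features: stand-in for a richer value-function approximator."""
    return np.array([1.0, s, s ** 2, s ** 3, s ** 4])

K, b = 0.0, 0.0      # human-readable policy: a = K * s + b
w = np.zeros(5)      # critic weights

for _ in range(30):
    # Collect transitions under the current (noisy) simple policy.
    S = rng.uniform(-2.0, 2.0, size=200)
    A = K * S + b + 0.3 * rng.standard_normal(200)
    nxt = [step(s, a) for s, a in zip(S, A)]
    S2 = np.array([x[0] for x in nxt])
    R = np.array([x[1] for x in nxt])

    # Policy evaluation: fit the critic to one-step TD targets by least squares.
    Phi = np.stack([features(s) for s in S])
    Phi2 = np.stack([features(s) for s in S2])
    targets = R + gamma * Phi2 @ w
    w = np.linalg.lstsq(Phi, targets, rcond=None)[0]

    # Policy improvement: greedy actions under the critic (using the toy model),
    # projected back onto the simple linear policy class by least squares.
    candidates = np.linspace(-2.0, 2.0, 41)
    def q_value(s, a):
        s2, r = step(s, a)
        return r + gamma * features(s2) @ w
    greedy = np.array([candidates[np.argmax([q_value(s, a) for a in candidates])]
                       for s in S])
    X = np.stack([S, np.ones_like(S)], axis=1)
    K, b = np.linalg.lstsq(X, greedy, rcond=None)[0]

print(f"Readable policy: a = {K:.2f} * s + {b:.2f}")

The point mirrored here is that only the critic carries the representational complexity; the policy that would actually be deployed reduces to two numbers that a non-expert can inspect directly.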

  
