Publication Details

Reference Type: Conference Proceedings
Author(s): Wierstra, D.; Foerster, A.; Peters, J.; Schmidhuber, J.
Title: Solving Deep Memory POMDPs with Recurrent Policy Gradients
Journal/Conference/Book Title: Proceedings of the International Conference on Artificial Neural Networks (ICANN)
Keywords: policy gradients, reinforcement learning
Abstract: This paper presents Recurrent Policy Gradients, a model-free reinforcement learning (RL) method creating limited-memory stochastic policies for partially observable Markov decision problems (POMDPs) that require long-term memories of past observations. The approach involves approximating a policy gradient for a Recurrent Neural Network (RNN) by backpropagating return-weighted characteristic eligibilities through time. Using a "Long Short-Term Memory" architecture, we are able to outperform other RL methods on two important benchmark tasks. Furthermore, we show promising results on a complex car driving simulation task.
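The core update described in the abstract, weighting the policy's characteristic eligibilities by the return, can be illustrated in miniature. The sketch below is not the paper's method: it replaces the LSTM with a hand-coded memory (a one-hot copy of the initial cue) and a linear-softmax policy, and the toy cue-matching task is invented for illustration. It only shows the return-weighted eligibility update Δθ ∝ (R − b) ∇θ log π(a|memory) that the paper extends, via backpropagation through time, to recurrent networks.

```python
import numpy as np

rng = np.random.default_rng(0)

N_ACTIONS = 2
FEAT = 2  # one-hot encoding of the remembered cue

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def run_episode(theta):
    """Roll out one episode of a toy memory task.

    A cue (0 or 1) is shown at the start; the agent must later pick the
    matching action. Returns (reward, eligibility), where the eligibility
    is grad_theta log pi(a | memory) for the single decision step.
    """
    cue = rng.integers(2)
    memory = np.eye(FEAT)[cue]        # stand-in for the LSTM's hidden state
    probs = softmax(theta @ memory)   # linear-softmax policy
    action = rng.choice(N_ACTIONS, p=probs)
    reward = 1.0 if action == cue else 0.0
    # Gradient of log softmax for a linear policy:
    # (onehot(action) - probs) outer memory
    grad_log = np.outer(np.eye(N_ACTIONS)[action] - probs, memory)
    return reward, grad_log

theta = np.zeros((N_ACTIONS, FEAT))
baseline = 0.0
for _ in range(2000):
    R, elig = run_episode(theta)
    # Return-weighted eligibility update with a moving-average baseline.
    theta += 0.1 * (R - baseline) * elig
    baseline += 0.05 * (R - baseline)

# After training, the policy should almost always match the cue.
success = np.mean([run_episode(theta)[0] for _ in range(500)])
print(f"success rate: {success:.2f}")
```

In the paper, the eligibility ∇θ log π is instead accumulated over all timesteps of an episode by backpropagating through the LSTM's recurrence, which is what allows the policy to learn *what* to store in memory rather than being handed it, as in this sketch.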

