Publication Details

Reference Type: Journal Article
Author(s): Hachiya, H.; Akiyama, T.; Sugiyama, M.; Peters, J.
Year: 2009
Title: Adaptive Importance Sampling for Value Function Approximation in Off-policy Reinforcement Learning
Journal/Conference/Book Title: Neural Networks
Keywords: off-policy reinforcement learning; value function approximation; policy iteration; adaptive importance sampling; importance-weighted cross-validation; efficient sample reuse
Abstract: Off-policy reinforcement learning is aimed at efficiently using data samples gathered from a different policy than the currently optimized one. A common approach is to use importance sampling techniques for compensating for the bias of value function estimators caused by the difference between the data-sampling policy and the target policy. However, existing off-policy methods often do not take the variance of the value function estimators explicitly into account and, therefore, their performance tends to be unstable. To cope with this problem, we propose using an adaptive importance sampling technique which allows us to actively control the trade-off between bias and variance. We further provide a method for optimally determining the trade-off parameter based on a variant of cross-validation. We demonstrate the usefulness of the proposed approach through simulations.
Volume: 22
Number: 10
Pages: 1399-1410
Link to PDF: https://www.ias.informatik.tu-darmstadt.de/uploads/Publications/Publications/hachiya-AdaptiveImportanceSampling_5530.pdf
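
The bias-variance trade-off described in the abstract can be pictured as flattening the importance weights with an exponent nu in [0, 1]: nu = 0 ignores the mismatch between behavior and target policy (low variance, biased), while nu = 1 applies full importance weighting (unbiased, high variance), with intermediate values interpolating between the two. The sketch below is only an illustration of that idea under stated assumptions (a linear value model fitted by weighted least squares, per-sample policy probabilities as inputs); it is not the authors' exact algorithm, and all function and variable names are hypothetical.

```python
import numpy as np

def flattened_weights(pi_target, pi_behavior, nu):
    """Per-sample importance weights (pi_target / pi_behavior) ** nu.

    nu = 0 -> ordinary (uncorrected) estimator: low variance, biased.
    nu = 1 -> full importance weighting: unbiased, high variance.
    Intermediate nu trades off bias against variance (illustrative only).
    """
    return (pi_target / pi_behavior) ** nu

def weighted_value_regression(phi, targets, weights, reg=1e-6):
    """Weighted least-squares fit of a linear value model V(s) = phi(s) @ theta."""
    W = np.diag(weights)
    A = phi.T @ W @ phi + reg * np.eye(phi.shape[1])
    b = phi.T @ W @ targets
    return np.linalg.solve(A, b)

# Toy usage: compare estimators for several nu; in practice nu would be
# chosen by a validation criterion such as importance-weighted cross-validation.
rng = np.random.default_rng(0)
phi = rng.normal(size=(100, 5))                                    # state features
targets = phi @ rng.normal(size=5) + 0.1 * rng.normal(size=100)    # e.g. observed returns
pi_t = rng.uniform(0.2, 1.0, size=100)                             # target-policy probabilities
pi_b = rng.uniform(0.2, 1.0, size=100)                             # behavior-policy probabilities

for nu in (0.0, 0.5, 1.0):
    theta = weighted_value_regression(phi, targets, flattened_weights(pi_t, pi_b, nu))
    print(nu, np.linalg.norm(theta))
```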
