Publication Details

Reference Type: Thesis
Author(s): Hensel, M.
Title: Correlated Exploration in Deep Reinforcement Learning
Journal/Conference/Book Title: Bachelor Thesis
Abstract: Reinforcement Learning (RL) provides a general framework for sequential decision making. Recently, RL has shown success in combination with deep learning [70, 2] by using massive computation and data budgets. The potential of deep RL to generalize to new situations and to handle noisy, complex data makes it a promising approach for creating intelligent autonomous robot systems. Unfortunately, real-world data is expensive to collect, so deep RL for robotics currently remains a research area. Simulation can be used to satisfy the sample requirements of RL. However, the real world and the simulation differ, so policies trained in simulation do not necessarily perform well in the real world. A possible way of closing this gap is to optimize the policy for the real system by training on the real system. The training procedure requires a way of collecting data that is applicable to the real world; we refer to this as local exploration. In this thesis we evaluate standard exploration methods with respect to their fitness for local exploration. In particular, we are interested in whether the default Gaussian action noise remains the first choice for local exploration over correlated noise such as parameter noise or noise sampled from a stochastic process. To this end we evaluate three tasks in simulation, two of which we also evaluate on a real robot system.
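The two exploration schemes the abstract contrasts can be sketched as follows: uncorrelated Gaussian action noise draws an independent perturbation at every step, while a stochastic process such as Ornstein-Uhlenbeck produces temporally correlated perturbations. This is a minimal illustrative sketch, not code from the thesis; the parameter values (theta, sigma, dt) are common defaults chosen here as assumptions.

```python
import numpy as np

def gaussian_action_noise(rng, action_dim, sigma=0.2):
    """Uncorrelated exploration: an independent Gaussian sample each step."""
    return rng.normal(0.0, sigma, size=action_dim)

class OrnsteinUhlenbeckNoise:
    """Temporally correlated exploration noise from an OU process,
    one example of 'noise sampled from a stochastic process'."""

    def __init__(self, action_dim, theta=0.15, sigma=0.2, dt=1e-2, seed=0):
        self.theta = theta      # mean-reversion rate: pulls the state back to 0
        self.sigma = sigma      # noise scale
        self.dt = dt            # integration time step
        self.rng = np.random.default_rng(seed)
        self.x = np.zeros(action_dim)

    def sample(self):
        # Euler-Maruyama step: dx = -theta * x * dt + sigma * sqrt(dt) * N(0, 1)
        dx = (-self.theta * self.x * self.dt
              + self.sigma * np.sqrt(self.dt) * self.rng.normal(size=self.x.shape))
        self.x = self.x + dx
        return self.x
```

Because each OU sample depends on the previous one, consecutive perturbations point in similar directions, which yields smoother action sequences than step-wise Gaussian noise.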

