Publication Details

Reference Type: Conference Proceedings
Author(s): Ewerton, M.; Maeda, G.; Koert, D.; Kolev, Z.; Takahashi, M.; Peters, J.
Year: 2019
Title: Reinforcement Learning of Trajectory Distributions: Applications in Assisted Teleoperation and Motion Planning
Journal/Conference/Book Title: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Abstract: The majority of learning-from-demonstration approaches do not address suboptimal demonstrations or cases in which drastic changes in the environment occur after the demonstrations were made. For example, in real teleoperation tasks, the demonstrations provided by the user are often suboptimal due to interface and hardware limitations. In tasks involving co-manipulation and manipulation planning, the environment often changes due to unexpected obstacles, rendering previous demonstrations invalid. This paper presents a reinforcement learning algorithm that exploits relevance functions to tackle such problems. It introduces the Pearson correlation as a measure of the relevance of policy parameters with respect to each component of the cost function to be optimized. The method is demonstrated in a static environment where the quality of the teleoperation is compromised by the visual interface (operating a robot in a three-dimensional task using a simple 2D monitor). Afterward, we test the method in a dynamic environment using a real 7-DoF robot arm, where distributions are computed online via Gaussian Process regression.
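
A minimal sketch of the relevance idea described in the abstract, not the authors' implementation: over a batch of sampled rollouts, the Pearson correlation between each policy parameter and each component of the cost gives a parameter-by-cost relevance matrix. The function name, dimensions, and data below are illustrative assumptions only.

import numpy as np

def relevance_matrix(params, cost_components):
    # params:          (n_rollouts, n_params) sampled policy parameters
    # cost_components: (n_rollouts, n_costs)  per-component cost of each rollout
    # Returns a (n_params, n_costs) matrix of absolute Pearson correlations,
    # read here as the relevance of each parameter to each cost term.
    p = (params - params.mean(axis=0)) / (params.std(axis=0) + 1e-12)
    c = (cost_components - cost_components.mean(axis=0)) / (cost_components.std(axis=0) + 1e-12)
    return np.abs(p.T @ c) / params.shape[0]

# Hypothetical usage: 50 rollouts, 10 policy parameters, 3 cost components.
rng = np.random.default_rng(0)
theta = rng.normal(size=(50, 10))
costs = np.column_stack([theta[:, 0] ** 2,
                         theta[:, 3] + 0.1 * rng.normal(size=50),
                         rng.normal(size=50)])
print(relevance_matrix(theta, costs).round(2))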
Place Published: Macau, China
Pages: 4294--4300
Date: November 4-8, 2019
Link to PDF: https://www.ias.informatik.tu-darmstadt.de/uploads/Member/PubMarcoEwerton/Ewerton_IROS_2019.pdf

  
