I have graduated and have become a post-doctoral researcher at INRIA Lille in France. Please check out my new Personal Website.
Reinforcement Learning under Uncertainty, Monte Carlo Tree Search, Multi-Armed Bandits, POMDPs, MDPs, Information Theory, Robotics
Tuan is a fourth-year Ph.D. researcher in the Intelligent Autonomous Systems Group at TU Darmstadt. He has an interdisciplinary background, with a bachelor's degree in Computer Science from Vietnam and a Master's thesis in Electronics and Computer Engineering from Hanyang University, Korea. He has gained research experience mainly in computer vision, interpreting and understanding deep convolutional neural networks, entropy-regularized Markov decision processes, and embedded systems, in both academia (ESOS Lab, Korea; HMI Lab, Vietnam; DFKI Berlin, Germany; Auburn University, USA) and industry.
During his Ph.D., Tuan is developing principled methods that allow robots to operate in unstructured, partially observable real-world environments. In recent work, he proposed a framework for applying Partially Observable Markov Decision Processes (POMDPs) in Monte Carlo planning settings. This work was accepted for publication at the IJCAI 2020 conference, which had an acceptance rate of 12.6%. He is now focusing on applying his framework to robot planning problems such as disentangling and Mikado tasks.
- , Submitted to the Journal of Artificial Intelligence Research (JAIR).
- , Ph.D. Thesis.
- , IEEE Robotics and Automation Letters, and 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
- , Proceedings of the International Conference on Machine Learning (ICML).
- , Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI).
- MS Thesis (co-supervised with Joni Pajarinen and Georgia Chalvatzaki), Cedric Derstroff, Memory Representations for Partially Observable Reinforcement Learning
- Robot Learning: Integrated Project (Winter 2021, co-supervised with Carlo D'Eramo), Lukas Schneider, Benchmarking Advances in MCTS in Go and Chess
- Robot Learning: Integrated Project (Winter 2021, co-supervised with Georgia Chalvatzaki and Carlo D'Eramo), Daniel Mansfeld, Alex Ruffini, Learning Laplacian Representations for Continuous MCTS
- Robot Learning: Integrated Project (Winter 2019, co-supervised with Boris Belousov), Maximilian Hensel, Accelerated Mirror Descent Policy Search