I have graduated and moved to the Bosch Center for Artificial Intelligence in Renningen near Stuttgart.
Motor Control & Learning, Robotics, Machine Learning, Biomimetic Systems.
chris@robot-learning.de
Christian Daniel joined the Institute for Intelligent Autonomous Systems (IAS) in August 2011 as a Master's student. He was born and raised in Frankfurt am Main, Germany, where he also went to school and completed his civilian service.
Before writing his Master's thesis at the IAS, he received his Bachelor of Science from TU Darmstadt in the field of computational fluid dynamics. He then left Darmstadt for one year to study at EPFL in Lausanne, Switzerland, where he shifted his focus away from computational fluid dynamics towards robotics in general and artificial intelligence in particular. After finishing the academic year at EPFL, he had the opportunity to stay on as a research assistant at EPFL's LASA lab, working with Aude Billard and Dan Grollman. Back in Germany, he went on to specialize in AI and became a Master's student at the IAS lab. His Master's thesis at IAS won the Datenlotsenpreis 2013 for the best Master's thesis in Computer Science. After completing it, Christian became a Ph.D. student at the IAS lab.
Christian's research interests center on skill and transfer learning, on how robot learning compares to human learning, and on what makes humans 'intelligent'. These are fundamental questions on the way to true artificial intelligence that remain open. While skill learning has been studied for some time, much remains to be done; transfer learning, on the other hand, is an area of research that is still largely unexplored.
For all publications, please see my Publication Page.
Manually designing reward functions for real robot tasks is often a lengthy and complicated process. We show how machine learning methods can be used to learn a reward function from human ratings within the reinforcement learning framework, replacing hand-coded reward functions; a sketch of this loop follows the references below.
Daniel, C.; Kroemer, O.; Viering, M.; Metz, J.; Peters, J. (2015). Active Reward Learning with a Novel Acquisition Function, Autonomous Robots (AURO), 39, pp.389-405.
Download Article BibTeX Reference
Daniel, C.; Viering, M.; Metz, J.; Kroemer, O.; Peters, J. (2014). Active Reward Learning, Proceedings of Robotics: Science & Systems (R:SS).
Download Article BibTeX Reference
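The core loop can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes each outcome is summarized by a fixed-length feature vector, replaces the paper's acquisition function with a simple predictive-uncertainty threshold, and uses a hypothetical ask_human_for_rating stand-in for the human expert.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    reward_model = GaussianProcessRegressor()  # GP reward model over outcome features
    rated_features, ratings = [], []
    QUERY_THRESHOLD = 0.5                      # assumed uncertainty threshold

    def ask_human_for_rating(features):
        # Stand-in for the human expert who rates the observed outcome.
        return float(input("Rate this outcome (0-10): "))

    def reward(features):
        # Predict the reward; query the human only when the model is too uncertain.
        mean, std = reward_model.predict(features.reshape(1, -1), return_std=True)
        if std[0] > QUERY_THRESHOLD:
            rating = ask_human_for_rating(features)
            rated_features.append(features)
            ratings.append(rating)
            reward_model.fit(np.array(rated_features), np.array(ratings))
            return rating
        return mean[0]

    # Example: evaluate a random 5-D outcome; early calls trigger human queries.
    print(reward(np.random.randn(5)))

The learned reward() can then be plugged into a policy search method in place of a hand-coded reward function.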
Many tasks can only be solved by a combination of sub-skills. We show how a robot can learn to adapt a sequence of skills to achieve an overarching task goal; a sketch of such a skill-sequencing loop follows the reference below.
Daniel, C.; Neumann, G.; Kroemer, O.; Peters, J. (2013). Learning Sequential Motor Tasks, Proceedings of the 2013 IEEE International Conference on Robotics and Automation (ICRA). Download Article BibTeX Reference
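The following is a minimal sketch of the skill-sequencing idea, not the paper's implementation: a gating function picks which sub-skill to run next, and each skill's parameters are sampled anew in the current state. The Skill class, ToyEnv, and the hand-coded gating rule are hypothetical stand-ins.

    import numpy as np

    class Skill:
        # One parameterized sub-skill: a Gaussian over its parameter vector.
        def __init__(self, mean, cov):
            self.mean, self.cov = mean, cov
        def sample_params(self, state):
            # The sequencing layer could condition on the state; here we simply
            # sample from the skill's own parameter distribution.
            return np.random.multivariate_normal(self.mean, self.cov)

    class ToyEnv:
        # Stand-in for the robot and task: executing a skill shifts the state.
        def reset(self):
            self.state = np.zeros(2)
            return self.state
        def execute(self, skill, params):
            self.state = self.state + params
            return self.state, -np.linalg.norm(self.state - 1.0)

    def run_episode(env, skills, gating, horizon=3):
        state, total = env.reset(), 0.0
        for _ in range(horizon):
            k = gating(state)                        # which sub-skill to run next
            params = skills[k].sample_params(state)  # adapt it to the situation
            state, r = env.execute(skills[k], params)
            total += r
        return total

    skills = [Skill(np.zeros(2), 0.1 * np.eye(2)), Skill(np.ones(2), 0.1 * np.eye(2))]
    print(run_episode(ToyEnv(), skills, gating=lambda s: int(s.sum() < 1.0)))

In the paper, both the gating and the skill parameters are learned from reward, so the sequence itself adapts to the overarching task goal.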
Real robot applications often allow for more than one solution. Learning multiple solutions for the same task makes the robot more robust to changes in the environment, as learned backup solutions can be activated when the previously best solution becomes unavailable. Additionally, learning multiple solutions avoids the averaging problem and yields policies that are piecewise linear and can therefore approximate non-linear relations; a sketch of such a mixture policy follows the reference below.
Daniel, C.; Neumann, G.; Peters, J. (2012). Learning Concurrent Motor Skills in Versatile Solution Spaces, Proceedings of the International Conference on Intelligent Robots and Systems (IROS). Download Article BibTeX Reference
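A mixture of linear sub-policies makes the piecewise-linear point concrete. The following is a minimal sketch under assumed dimensions, not the paper's implementation: a softmax gating picks one of several solution modes for a given context, and each mode is linear in the context.

    import numpy as np

    n_options, ctx_dim, param_dim = 3, 2, 4
    W = np.random.randn(n_options, param_dim, ctx_dim)  # per-option linear maps
    b = np.random.randn(n_options, param_dim)
    gate_w = np.random.randn(n_options, ctx_dim)        # softmax gating weights

    def sample_params(context):
        logits = gate_w @ context
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        o = np.random.choice(n_options, p=probs)  # pick one solution mode
        return W[o] @ context + b[o], o           # linear within that mode

    params, option = sample_params(np.random.randn(ctx_dim))
    print(option, params)

Because different contexts can activate different modes, the overall context-to-parameter mapping is piecewise linear and can approximate non-linear relations, while each retained mode remains a full solution to the task.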