Offered Master and Bachelor Theses

We are always looking for talented students and have many interesting thesis opportunities on offer. We are also open to students' own suggestions, as long as they touch on topics relevant to intelligent systems.

> Click here for Offered Topics / Angebotene Themen < Note that these theses are only available to TU Darmstadt students, who can contact the respective team members directly for more information. In exceptional cases, we also offer theses to external students; such students need to contact Jan Peters at jan.peters@tu-darmstadt.de first.

Ongoing B.Sc. and M.Sc. Theses

Student | Advisor | Type | Topic
Niklas Kappes | Joao Carvalho | M.Sc. | Smooth Exploration
Benedikt Hahner | Joe Watson | M.Sc. | Sim2GP: Bayesian Dynamics Models via Differentiable Physics
Jan Mackensen | Dorothea Koert | M.Sc. | A Human-Centered Approach for AI-Aided Anomaly Detection in Time Series
Zhiyuan Hu | Joe Watson, Oleg Arenz | M.Sc. | An Inference-based Approach to Reinforcement and Imitation Learning with Diverse Demonstration Data
Rolf Gattung | Georgia Chalvatzaki, Davide Tateo | B.Sc. | Active volumetric scene understanding for robotics
Marleen Sinsel | Dorothea Koert | B.Sc. | AI-Aided Pointing Gesture Detection for Human-Robot-Interaction
Ruiyong Pi | Vignesh Prasad, Sven Schultze | M.Sc. | Mapless Social Robot Navigation
Johannes Heeg | Puze Liu, Davide Tateo | M.Sc. | Smooth Exploration for Robotics on the Geometric Manifold
Sabin Grube | Snehal Jauhri | M.Sc. | Combining Deep Reinforcement Learning and 3D Vision for Dual-arm Robotic Tasks
Han Gao | Georgia Chalvatzaki | M.Sc. | AffordanceParts: Learning Explainable Object Parts with Invertible Neural Networks
Fabian Hahne | Vignesh Prasad | B.Sc. | Learning Human-Robot Interaction with Gaussian Processes HSMMs
Felix Nonnengießer | Alap Kshirsagar, Boris Belousov, Prof. Gemma Roig | M.Sc. | Visuotactile Sensing for In-Hand Object Pose Tracking
Arne Backstein | Vignesh Prasad | B.Sc. | Temporal Latent Space Modelling using Hidden Markov Models
Mario Gomez | Kai Ploeger | B.Sc. | Adaptive Planning for Sideswap Juggling Patterns
Sebastian Müller | Kai Streiling (FB3), Max Stasica (FB3), Jan Peters | M.Sc. | Visual illusions in sensorimotor control – a reinforcement learning study
Janik Schöpper | Dorothea Koert | B.Sc. | Situational Adaptive Autonomy in a Shared Workspace
Kevin Fröhlich | Dorothea Koert, Lisa Scherf | M.Sc. | Learning Action Conditions from Human Demonstrations
Zeyuan Sun | Rupert Mitchell, Jan Peters, Heinz Köppl | M.Sc. | Self Expanding Neural Networks for Reinforcement Learning
Dominik Horstkötter | Tim Schneider, Boris Belousov, Alap Kshirsagar | B.Sc. | Learning to Assemble SL-Block Structures from Vision and Touch
Alper Gece | Kay Hansel, An Thai Le, Georgia Chalvatzaki, Marius Pesavento | M.Sc. | Leveraging Structured-Graph Correspondence in Imitation Learning
Maximilian Langer | Niklas Funk, Kay Hansel | M.Sc. | Energy-based Models for 6D Pose Estimation
Renhao Cao | Alap Kshirsagar | M.Sc. | Action Recognition in Multi-person Sports
Aron Hernandez Rivero | Alap Kshirsagar | B.Sc. | Improving Basketball Officiating through AI
Fabian Wahren | Theo Vincent, Boris Belousov | M.Sc. | Adapt your network: Investigating neural network architectures in Q-learning methods
Hannes Mittwollen | Suman Pal, Jan Hämmelmann | B.Sc. | Detection and 6D-pose estimation of objects using their CAD-Models
Marcel Mittenbühler | Ahmed Hendawy, Carlo D'Eramo, Georgia Chalvatzaki | M.Sc. | Lifelong Robot Learning with Pretrained Multimodal Models
Noah Farr | Davide Tateo, Georgia Chalvatzaki | B.Sc. | Designing reward functions for robotic tasks
Christian Hammacher | Niklas Funk | M.Sc. | Object Pose Estimation and Manipulation from Pointclouds using Energy-based Models
Philipp Vincent Ebert | Kai Ploeger | B.Sc. | Learning Latent Dynamics for Control

Ongoing IP Projects

Topic | Students | Advisor
System identification and control for Telemax manipulator | Kilian Feess | Davide Tateo, Junning Huang
Pendulum Acrobatics | Florian Wolf | Kai Ploeger, Pascal Klink
Interactive Semi-Supervised Action Segmentation | Martina Gassen, Erik Prescher, Frederic Metzler | Lisa Scherf, Felix Kaiser, Vignesh Prasad
Kinematically Constrained Humanlike Bimanual Robot Motion | Yasemin Göksu, Antonio De Almeida Correia | Vignesh Prasad, Alap Kshirsagar
Learn to play Tangram | Max Zimmermann, Marius Zöller, Andranik Aristakesyan | Kay Hansel, Niklas Funk
Characterizing Fear-induced Adaptation of Balance by Inverse Reinforcement Learning | Zeyuan Sun | Alap Kshirsagar, Firas Al-Hafez
Tactile Environment Interaction | Changqi Chen, Simon Muchau, Jonas Ringsdorf | Niklas Funk
Latent Generative Replay in Continual Learning | Marcel Mittenbühler | Ahmed Hendawy, Carlo D'Eramo
Memory-Free Continual Learning | Dhruvin Vadgama | Ahmed Hendawy, Carlo D'Eramo
Simulation of Vision-based Tactile Sensors | Duc Huy Nguyen | Boris Belousov, Tim Schneider
Learning Bimanual Robotic Grasping | Hanjo Schnellbächer, Christoph Dickmanns | Julen Urain De Jesus, Alap Kshirsagar
Learning Deep probability fields for planning and control | Felix Herrmann, Sebastian Zach | Davide Tateo, Georgia Chalvatzaki, Jacopo Banfi
On Improving the Reliability of the Baseline Agent for Robotic Air Hockey | Haozhe Zhu | Puze Liu
Self-Play Reinforcement Learning for High-Level Tactics in Robot Air Hockey | Yuheng Ouyang | Puze Liu, Davide Tateo
Control and System identification for Unitree A1 | Lu Liu | Junning Huang, Davide Tateo
Kinodynamic Neural Planner for Robot Air Hockey | Niclas Merten | Puze Liu
Visuotactile Shear Force Estimation | Erik Helmut, Luca Dziarski | Niklas Funk, Boris Belousov
Robot Drawing With a Sense of Touch | Noah Becker, Zhijingshui Yang, Jiaxian Peng | Boris Belousov, Mehrzad Esmaeili
Black-Box System Identification of the Air Hockey Table | Anna Klyushina, Marcel Rath | Theo Gruner, Puze Liu
Autonomous Basil Harvesting | Jannik Endres, Erik Gattung, Jonathan Lippert | Aiswarya Menon, Felix Kaiser, Arjun Vir Datta, Suman Pal
Latent Tactile Representations for Model-Based RL | Eric Krämer | Daniel Palenicek, Theo Gruner, Tim Schneider
Model Based Multi-Object 6D Pose Estimation | Helge Meier | Felix Kaiser, Arjun Vir Datta, Suman Pal
Reinforcement Learning for Contact Rich Manipulation | Noah Farr, Dustin Gorecki | Aiswarya Menon, Arjun Vir Datta, Suman Pal
Measuring Task Similarity using Learned Features | Henrik Metternich | Ahmed Hendawy, Pascal Klink, Carlo D'Eramo

Completed PhD Theses

  • Abdulsamad, H. (2022). Statistical Machine Learning for Modeling and Control of Stochastic Structured Systems, Ph.D. Thesis.
          Bib
  • Belousov, B. (2022). On Optimal Behavior Under Uncertainty in Humans and Robots, Ph.D. Thesis, Technical University of Darmstadt.
          Bib
  • Arenz, O. (2021). Sample-Efficient I-Projections for Robot Learning, Ph.D. Thesis, TU Darmstadt.
          Bib
  • Loeckel, S. (2021). Machine Learning for Modeling and Analyzing of Race Car Drivers, Ph.D. Thesis.
        Bib
  • Lutter, M. (2021). Inductive Biases for Machine Learning in Robotics and Control, Ph.D. Thesis.
        Bib
  • Muratore, F. (2021). Randomizing Physics Simulations for Robot Learning, Ph.D. Thesis.
        Bib
  • Tosatto, S. (2021). Off-Policy Reinforcement Learning for Robotics, PhD Thesis.
        Bib
  • Koert, D. (2020). Interactive Machine Learning for Assistive Robots, Ph.D. Thesis.
        Bib
  • Lampariello, R. (2020). Optimal Motion Planning for Object Interception and Grasping, Ph.D. Thesis.
        Bib
  • Tanneberg, D. (2020). Understand-Compute-Adapt: Neural Networks for Intelligent Agents, Ph.D. Thesis.
        Bib
  • Buechler, D. (2019). Robot Learning for Muscular Systems, Ph.D. Thesis.
        Bib
  • Ewerton, M. (2019). Bidirectional Human-Robot Learning: Imitation and Skill Improvement, PhD Thesis.
        Bib
  • Gebhardt, G.H.W. (2019). Using Mean Embeddings for State Estimation and Reinforcement Learning, PhD Thesis.
        Bib
  • Gomez-Gonzalez, S. (2019). Real Time Probabilistic Models for Robot Trajectories, Ph.D. Thesis.
        Bib
  • Parisi, S. (2019). Reinforcement Learning with Sparse and Multiple Rewards, PhD Thesis.
        Bib
  • Koc, O. (2018). Optimal Trajectory Generation and Learning Control for Robot Table Tennis, PhD Thesis.
        Bib
  • Lioutikov, R. (2018). Parsing Motion and Composing Behavior for Semi-Autonomous Manipulation, PhD Thesis.
        Bib
  • Veiga, F. (2018). Toward Dextrous In-Hand Manipulation through Tactile Sensing, PhD Thesis.
        Bib
  • Manschitz, S. (2017). Learning Sequential Skills for Robot Manipulation Tasks, PhD Thesis.
          Bib
  • Paraschos, A. (2017). Robot Skill Representation, Learning and Control with Probabilistic Movement Primitives, PhD Thesis.
          Bib
  • Vinogradska, J. (2017). Gaussian Processes in Reinforcement Learning: Stability Analysis and Efficient Value Propagation, PhD Thesis.
        Bib
  • Calandra, R. (2016). Bayesian Modeling for Optimization and Control in Robotics, PhD Thesis.
        Bib
  • Daniel, C. (2016). Learning Hierarchical Policies from Human Feedback, PhD Thesis.
        Bib
  • Hoof, H.v. (2016). Machine Learning through Exploration for Perception-Driven Robotics, PhD Thesis.
        Bib
  • Kroemer, O. (2015). Machine Learning for Robot Grasping and Manipulation, PhD Thesis.
        Bib
  • Muelling, K. (2013). Modeling and Learning of Complex Motor Tasks: A Case Study with Robot Table Tennis, PhD Thesis.
          Bib
  • Wang, Z. (2013). Intention Inference and Decision Making with Hierarchical Gaussian Process Dynamics Model, PhD Thesis.
          Bib
  • Kober, J. (2012). Learning Motor Skills: From Algorithms to Robot Experiments, PhD Thesis.
          Bib
  • Nguyen-Tuong, D. (2011). Model Learning in Robot Control, PhD Thesis (Completed at IAS/Tuebingen before move to TU Darmstadt).
          Bib

Completed Master Theses

  • Baierl, M. (2023). Score-Based Generative Models as Trajectory Priors for Motion Planning, Master Thesis.
        Bib
  • Brosseit, J. (2023). The Principle of Value Equivalence for Policy Gradient Search, Master Thesis.
        Bib
  • Gao, Z. (2023). Hierarchical Contextualization of Movement Primitives, Master Thesis.
        Bib
  • Herrmann, P. (2023). 6DCenterPose: Multi-object RGB-D 6D pose tracking with synthetic training data, Master Thesis.
        Bib
  • Janjus, B. (2023). Genetic Programming For Interpretable Reinforcement Learning, Master Thesis.
        Bib
  • Jehn, M. (2023). NAS with GFlowNets, Master Thesis.
      Bib
  • Keller, L. (2023). Context-Dependent Variable Impedance Control with Stability Guarantees, Master Thesis.
        Bib
  • Carrasco, H. (2022). Particle-Based Adaptive Sampling for Curriculum Learning, Master Thesis.
        Bib
  • Chue, X. (2022). Task Classification and Local Manipulation Controllers, Master Thesis.
        Bib
  • Hellwig, H. (2022). Residual Reinforcement Learning with Stable Priors, Master Thesis.
        Bib
  • Jarnefelt, O. (2022). Sparsely Collaborative Multi-Agent Reinforcement Learning, Master Thesis.
      Bib
  • Kaiser, F. (2022). Multi-Object Pose Estimation for Robotic Applications in Cluttered Scenes, Master Thesis.
      Bib
  • Mueller, P.-O. (2022). Learning Interpretable Representations for Visuotactile Sensors, Master Thesis.
        Bib
  • Musekamp, D. (2022). Amortized Variational Inference with Gaussian Mixture Models, Master Thesis.
        Bib
  • Newswanger, A. (2022). Indoor Visual Navigation on Micro-Aerial Drones without External Infrastructure, Master Thesis.
      Bib
  • Schneider, J. (2022). Model Predictive Policy Optimization amidst Inaccurate Models, Master Thesis.
      Bib
  • Sieburger, V. (2022). Development of a Baseline Agent in Robot Air Hockey, Master Thesis.
        Bib
  • Toelle, M. (2022). Curriculum Adversarial Reinforcement Learning, Master Thesis.
      Bib
  • Vincent, T. (2022). Projected Bellman Operator, Master Thesis.
      Bib
  • Xu, X. (2022). Visuotactile Grasping From Human Demonstrations, Master Thesis.
        Bib
  • Yang, Z. (2022). Exploring Gripping Behaviours and Haptic Emotions for Human-Robot Handshaking, Master Thesis.
          Bib
  • Zhang, K. (2022). Learning Geometric Constraints for Safe Robot Interactions, Master Thesis.
        Bib
  • Zhao, P. (2022). Improving Gradient Directions for Episodic Policy Search, Master Thesis.
        Bib
  • Brendgen, J. (2021). The Relation between Social Interaction And Intrinsic Motivation in Reinforcement Learning, Master Thesis.
      Bib
  • Buchholz, T. (2021). Variational Locally Projected Regression, Master Thesis.
        Bib
  • Derstroff, C. (2021). Memory Representations for Partially Observable Reinforcement Learning, Master Thesis.
      Bib
  • Eich, Y. (2021). Distributionally Robust Optimization for Hybrid Systems, Master Thesis.
        Bib
  • Gruner, T. (2021). Wasserstein-Optimal Bayesian System Identification for Domain Randomization, Master Thesis.
        Bib
  • Hansel, K. (2021). Probabilistic Dynamic Mode Primitives, Master Thesis.
        Bib
  • He, J. (2021). Imitation Learning with Energy Based Model, Master Thesis.
        Bib
  • Huang, J. (2021). Multi-Objective Reactive Motion Planning in Mobile Manipulators, Master Thesis.
          Bib
  • Kaemmerer, M. (2021). Measure-Valued Derivatives for Machine Learning, Master Thesis.
        Bib
  • Lin, J.A. (2021). Functional Variational Inference in Bayesian Neural Networks, Master Thesis.
      Bib
  • Liu, L. (2021). Detection and Prediction of Human Gestures by Probabilistic Modelling, Master Thesis.
      Bib
  • Moos, J. (2021). Approximate Variational Inference for Mixture Models, Master Thesis.
        Bib
  • Palenicek, D. (2021). Dyna-Style Model-Based Reinforcement Learning with Value Expansion, Master Thesis.
      Bib
  • Patzwahl, A. (2021). Multi-sensor Fusion for Target Motion Prediction with an Application to Robot Baseball, Master Thesis.
      Bib
  • Rathjens, J. (2021). Accelerated Policy Search, Master Thesis.
        Bib
  • Schneider, T. (2021). Active Inference for Robotic Manipulation, Master Thesis.
        Bib
  • Sun, H. (2021). Can we improve time-series classification with Inverse Reinforcement Learning?, Master Thesis.
      Bib
  • Wang, Y. (2021). Bimanual Control and Learning with Composable Energy Policies, Master Thesis.
          Bib
  • Wegner, F. (2021). Learning Vision-Based Tactile Representations for Robotic Architectural Assembly, Master Thesis.
        Bib
  • Yang, H. (2021). Variational Inference for Curriculum Reinforcement Learning, Master Thesis.
      Bib
  • Ye, Z. (2021). Efficient Gradient-Based Variational Inference with GMMs, Master Thesis.
        Bib
  • Zhang, Y. (2021). Memory Representations for Partially Observable Reinforcement Learning, Master Thesis.
      Bib
  • Zhou, Z. (2021). Approximated Policy Search in Black-Box Optimization, Master Thesis.
        Bib
  • Borhade, P. (2020). Multi-agent reinforcement learning for autonomous driving, Master Thesis.
      Bib
  • Dorau, T. (2020). Distributionally Robust Optimization for Optimal Control, Master Thesis.
        Bib
  • Galljamov, R. (2020). Sample-Efficient Learning-Based Controller for Bipedal Walking in Robotic Systems, Master Thesis.
        Bib
  • Georgos, A. (2020). Robotics under Partial Observability, Master Thesis.
      Bib
  • Klein, A. (2020). Learning Robot Grasping of Industrial Work Pieces using Dense Object Descriptors, Master Thesis.
      Bib
  • Krabbe, P. (2020). Learning Riemannian Movement Primitives for Manipulation, Master Thesis.
      Bib
  • Lautenschlaeger, T. (2020). Variational Inference for Switching Dynamics, Master Thesis.
        Bib
  • Mentzendorff, E. (2020). Multi-Objective Deep Reinforcement Learning through Manifold Optimization, Master Thesis.
      Bib
  • Ploeger, K. (2020). High Acceleration Reinforcement Learning for Real-World Juggling with Binary Rewards, Master Thesis.
        Bib
  • Schotschneider, A. (2020). Learning High-Level Behavior for Autonomous Vehicles, Master Thesis.
      Bib
  • Semmler, M. (2020). Sequential Bayesian Optimal Experimental Design for Nonlinear Dynamics, Master Thesis.
        Bib
  • Sharma, S. (2020). SAC-RL: Continuous Control of Wheeled Mobile Robot for Navigation in a Dynamic Environment, Master Thesis.
        Bib
  • Tekam, S. (2020). A Gaussian Mixture Model Approach to Off-Policy Policy Gradient Estimation, Master Thesis.
        Bib
  • Tengang, V. M. (2020). 3D Pose Estimation for Robot Mikado, Master Thesis.
      Bib
  • Weimar, J. (2020). Exploring Intrinsic Motivation for Quanser Reinforcement Learning Benchmark Systems, Master Thesis.
        Bib
  • Williamson, L. (2020). Learning Nonlinear Dynamical Systems with the Koopman Operator, Master Thesis.
      Bib
  • Zecevic, M. (2020). Learning Algorithms, Invariances, and the Real World, Master Thesis.
      Bib
  • Baig, I. (2019). Deep End-to-End Calibration, Master Thesis.
        Bib
  • Becker, P. (2019). Expected Information Maximization: Using the I-Projection for Mixture Density Estimation, Master Thesis.
        Bib
  • Bous, F. (2019). Generating Spectral Envelopes for Singing Synthesis with Neural Networks, Master Thesis.
      Bib
  • Carvalho, J.A.C. (2019). Nonparametric Off-Policy Policy Gradient, Master Thesis.
        Bib
  • Cui, K. (2019). A Study on TD-Regularized Actor-Critic Methods, Master Thesis.
      Bib
  • Delfosse, Q. (2019). Grasping Objects Using a Goal-Discovering Architecture for Intrinsically-Motivated Learning, Master Thesis.
      Bib
  • Eschenbach, M. (2019). Metric-based Imitation Learning, Master Thesis.
      Bib
  • Hartmann, V. (2019). Efficient Exploration using Value Bounds in Deep Reinforcement Learning, Master Thesis.
        Bib
  • Hoffmann, D. (2019). Pedestrian Detection, Tracking and Intention Prediction in the context of autonomous Driving, Master Thesis.
      Bib
  • Hubecker, S. (2019). Curiosity Driven Reinforcement Learning for Autonomous Driving, Master Thesis.
      Bib
  • Huegelmann, N. (2019). Generating adaptable and reusable robot movements for a robot kitchen, Master Thesis.
      Bib
  • Jukonyte, L. (2019). Probabilistic Motion and Intention Prediction for Autonomous Vehicles, Master Thesis.
      Bib
  • Kircher, M. (2019). Learning from Human Feedback: a Comparison of Interactive Reinforcement Learning Algorithms, Master Thesis.
        Bib
  • Klink, P. (2019). Generalization and Transferability in Reinforcement Learning, Master Thesis.
        Bib
  • Knaust, M. (2019). Intuitive imitation learning for one-handed and bimanual tasks using ProMPs, Master Thesis.
        Bib
  • Laux, M. (2019). Deep Adversarial Reinforcement Learning for Object Disentangling, Master Thesis.
        Bib
  • Liu, Z. (2019). Local Online Motor Babbling: Learning Motor Abundance of A Musculoskeletal Robot Arm, Master Thesis.
        Bib
  • Mohan, D.S. (2019). Learning hand adjustments for haptic interactions, Master Thesis.
      Bib
  • Nass, D. (2019). Risk-Sensitive Policy Search with Applications to Robot-Badminton, Master Thesis.
        Bib
  • Nickl, P. (2019). Bayesian Inference for Regression Models using Nonparametric Infinite Mixtures, Master Thesis.
        Bib
  • Pal, S. (2019). Deep Robot Reinforcement Learning for Assisting a Human, Master Thesis.
      Bib
  • Sadybakasov, A. (2019). Learning Vision-Based Tactile Skills for Robotic Architectural Assembly, Master Thesis.
        Bib
  • Saoud, H. (2019). Improving Sample-Efficiency with a Model-Based Deterministic Policy Gradient, Master Thesis.
      Bib
  • Schultheis, M. (2019). Approximate Bayesian Reinforcement Learning for System Identification, Master Thesis.
        Bib
  • Weigand, S. (2019). Guided Reinforcement Learning Under Partial Observability, Master Thesis.
        Bib
  • Wilberg, A. (2019). An Exploration of Interacting Machine Learning Methods for Agent Navigation, Master Thesis.
      Bib
  • Woelker, A. (2019). Local Pixel Manipulation Detection with Deep Neural Networks, Master Thesis.
        Bib
  • Zhang, S. (2019). Integration of self-imitation and model-based learning to actor-critic algorithms, Master Thesis.
      Bib
  • Ziese, A. (2019). Fast Multi-Objective Redundancy Resolution for Highly-Redundant Mobile Robots, Master Thesis.
      Bib
  • Brandherm, F. (2018). Learning Replanning Policies with Direct Policy Search, Master Thesis.
        Bib
  • Celik, O. (2018). Chance Constraints for Stochastic Optimal Control and Stochastic Optimization, Master Thesis.
        Bib
  • Dienlin, D. (2018). Generative Adversarial Models for Deep Driving, Master Thesis.
      Bib
  • Dittmar, D. (2018). Distributed Reinforcement Learning with Neural Networks for Robotics, Master Thesis.
        Bib
  • Ritter, C. (2018). Deep Learning of Inverse Dynamic Models, Master Thesis.
        Bib
  • Song, Y. (2018). Minimax and entropic proximal policy optimization, Master Thesis.
        Bib
  • Thai, H. L. (2018). Deep Reinforcement Learning for POMDPs, Master Thesis.
        Bib
  • Trick, S. (2018). Multimodal Uncertainty Reduction for Intention Recognition in a Human-Robot Environment, Master Thesis.
      Bib
  • Wang, Z. (2018). Representation Learning for Tactile Manipulation, Master Thesis.
        Bib
  • Zhi, R. (2018). Deep reinforcement learning under uncertainty for autonomous driving, Master Thesis.
        Bib
  • Ahmad, P. (2017). Analysis of Financial Big Data using Machine Learning and Graph Processing, Master Thesis.
      Bib
  • Beckmann, L. (2017). Lane estimation with deep neural networks, Master Thesis.
      Bib
  • Gabriel, A. (2017). Empowered Skills, Master Thesis.
        Bib
  • Gadiya, P. (2017). Large Scale Real-Time Data Analytics Platform for Energy Exchange, Master Thesis.
      Bib
  • Gondaliya, K. (2017). Learning to Categorize Issues in Distributed Bug Tracker Systems, Master Thesis.
      Bib
  • Hoelscher, J. (2017). Interactive Planning Under Uncertainty, Master Thesis.
        Bib
  • Pinsler, R. (2017). Data-Efficient Learning of Robotic Grasps From Human Preferences, Master Thesis.
        Bib
  • Sharma, D. (2017). Adaptive Training Strategies for Brain Computer Interfaces, Master Thesis.
        Bib
  • Shinde, S. (2017). POMDPs for Continuous States and Observations for Robotics, Master Thesis.
        Bib
  • Weibrecht, N. (2017). Auswertung von Sensordaten mit Machine Learning Algorithmen, Master Thesis.
      Bib
  • Abbenseth, J. (2016). Cooperative Path-Planning for Service Robots, Master Thesis.
      Bib
  • Abdulsamad, H. (2016). Stochastic Optimal Control with Linearized Dynamics, Master Thesis.
        Bib
  • Achieser, I. (2016). Potential evaluation of eye- and headtracking data as a robust and real-time capable predictor for driver intention detection and integration into an algorithm for maneuver prediction, Master Thesis.
      Bib
  • Belousov, B. (2016). Optimal Control of Ball Catching, Master Thesis.
        Bib
  • Hüttenrauch, M. (2016). Guided Deep Reinforcement Learning for Robot Swarms, Master Thesis.
        Bib
  • Hesse, T. (2016). Learning a Filter for Noise Attenuation in EEG Data for Brain-Computer Interfaces, Master Thesis.
        Bib
  • Koert, D. (2016). Combining Human Demonstrations and Motion Planning for Movement Primitive Optimization, Master Thesis.
        Bib
  • Kohlschuetter, J. (2016). Learning Probabilistic Classifiers from Electromyography Data for Predicting Knee Abnormalities, Master Thesis.
        Bib
  • Luck, K. (2016). Multi-Group Factor Extension of the GrouPS algorithm and Real-World Robot Learning, Master Thesis.
      Bib
  • Schuster, R. (2016). 3D Object Proposals from Stereo and Optical Flow, Master Thesis.
      Bib
  • Stapf, E. (2016). Predicting Traffic Flows for Traffic Engineering in Software-Defined Networks, Master Thesis.
        Bib
  • Stark, S. (2016). Learning Probabilistic Feedforward and Feedback Policies for Generating Stable Walking Behaviors, Master Thesis.
        Bib
  • Wilbers, D. (2016). Context-driven Movement Primitive Adaptation, Master Thesis, IAS, TU Darmstadt.
        Bib
  • Novoty, M. (2015). Application of Decision-Support Technologies for an Autonomous Evasion System for UAVs, Master Thesis.
      Bib
  • Tanneberg, D. (2015). Spiking Neural Networks Solve Robot Planning Problems, Master Thesis.
        Bib
  • Vandommele, T. (2015). Entwicklung eines Algorithmus zur Klassifikation von Schläfrigkeit durch videobasierte Fahrerbeobachtung, Master Thesis.
      Bib
  • Wieland, A. (2015). Probabilistic Methods for Forecasting of Electric Load Profiles, Master Thesis.
        Bib
  • Arenz, O. (2014). Feature Extraction for Inverse Reinforcement Learning, Master Thesis.
          Bib
  • Barnikol, S. (2014). Machine Learning for Active Gait Support with a Powered Ankle Prosthesis, Master Thesis.
          Bib
  • Chebotar, Y. (2014). Learning Robot Tactile Sensing for Object Manipulation, Master Thesis.
        Bib
  • Dann, C. (2014). Value-Function-Based Reinforcement Learning with Temporal Differences, Master Thesis.
          Bib
  • Ewerton, M. (2014). Modeling Human-Robot Interaction with Probabilistic Movement Representations, Master Thesis.
        Bib
  • Gebhardt, G.H.W. (2014). Embedding Kalman Filters into Reproducing Kernel Hilbert Spaces, Master Thesis.
        Bib
  • Kamthe, S. (2014). Multi-modal Inference in Time Series, Master Thesis.
          Bib
  • Manschitz, S. (2014). Learning Sequential Skills for Robot Manipulation Tasks, Master Thesis.
          Bib
  • Merfels, C. (2014). Large-scale probabilistic feature mapping and tracking for autonomous driving, Master Thesis.
      Bib
  • Mindt, M. (2014). Probabilistic Inference for Movement Planning in Humanoids, Master Thesis.
        Bib
  • Mundo, J. (2014). Extracting Low-Dimensional Control Variables for Movement Primitives, Master Thesis.
        Bib
  • Reubold, J. (2014). 3D Object Reconstruction from Partial Views, Master Thesis.
          Bib
  • Ringwald, J. (2014). Combination of Movement Primitives for Robotics, Master Thesis.
        Bib
  • Zeiss, S. (2014). Manipulation Skill for Robotic Assembly, Master Thesis.
        Bib
  • Englert, P. (2013). Model-based Imitation Learning by Probabilistic Trajectory Matching, Master Thesis.
        Bib
  • Haji Ghasemi, N. (2013). Approximate Gaussian Process Inference with Periodic Kernels, Master Thesis.
          Bib
  • Lioutikov, R. (2013). Learning time-dependent feedback policies with model-based policy search, Master Thesis.
        Bib
  • Schmitt, F. (2013). Probabilistic Nonlinear Model Predictive Control based on Pontryagin's Minimum Principle, Master Thesis.
          Bib
  • Daniel, C. (2012). Hierarchical Relative Entropy Policy Search, Master Thesis.
      Bib
  • Gopalan, N. (2012). Feedback Error Learning for Gait Acquisition, Master Thesis.
          Bib
  • Zhou, R. (2012). Free Space Detection Based On Occupancy Gridmaps, Master Thesis.
          Bib
  • Muelling, K. (2009). Modeling Human Table Tennis, MSc Thesis (Completed at IAS/Tuebingen before move to TU Darmstadt).
      Bib
  • Kober, J. (2008). Reinforcement Learning for Motor Primitives, MSc Thesis (Completed at IAS/Tuebingen before move to TU Darmstadt).
          Bib

Completed Bachelor Theses

  • Boehm, A. (2023). Active Exploration for Tactile Texture Perception, Bachelor Thesis.
      Bib
  • Chemangui, E. (2023). Detecting Human Uncertainty from Multimodal Behavioral Data in a Task with Perceptual Ambiguity, Bachelor Thesis.
      Bib
  • Maurer, C. (2023). Quantifying Policy Uncertainty for Interactive Reinforcement Learning with Unreliable Human Action Advice, Bachelor Thesis.
      Bib
  • Atashak, M. (2022). Will it Blend? Learning to Coexecute Subskills, Bachelor Thesis.
        Bib
  • Daniv, M. (2022). Graph-Based Model Predictive Visual Imitation Learning, Bachelor Thesis.
        Bib
  • Kinzel, J. (2022). Modelling and Control of a Spherical Pendulum on a 4-DOF Barrett WAM, Bachelor Thesis.
      Bib
  • Lokadjaja, S. (2022). Parallel Tempering VIPS, Bachelor Thesis.
        Bib
  • Magnus, L. (2022). Real-time Object Tracking for Assembly, Bachelor Thesis.
        Bib
  • Menzenbach, S. (2022). Leveraging Learned Graph-based Heuristics for efficiently solving the Combinatorics of Assembly, Bachelor Thesis.
      Bib
  • Meser, M. (2022). Multi-Instance Pose Estimation for Robot Mikado, Bachelor Thesis.
      Bib
  • Nikitina, D. (2022). Inference Methods for Markov Decision Processes, Bachelor Thesis.
      Bib
  • Prescher, E. (2022). Visual Hierarchical Recognition And Segmentation Of Interactions, Bachelor Thesis.
        Bib
  • Siebenborn, M. (2022). Evaluating Decision Transformer Architecture on Robot Learning Tasks, Bachelor Thesis.
        Bib
  • Sterker, L. (2022). Social Interaction Segmentation and Learning using Hidden semi-Markov Models, Bachelor Thesis.
        Bib
  • Woortman, N. (2022). Comparing and Personalizing Human Following Behaviors for Mobile Ground Robots, Bachelor Thesis.
        Bib
  • Ali, M. (2021). An Educational Framework for Robot Learning, Bachelor Thesis.
      Bib
  • Gassen, M. (2021). Learning a library of Physical Interactions for Social Robots, Bachelor Thesis.
        Bib
  • Helfenstein, F. (2021). Benchmarking Deep Reinforcement Learning Algorithms, Bachelor Thesis.
        Bib
  • Schneider, L. (2021). Distributional Monte-Carlo Tree Search, Bachelor Thesis.
      Bib
  • Zoeller, M. (2021). Graph Neural Networks for Model-Based Reinforcement Learning, Bachelor Thesis.
        Bib
  • Baierl, M. (2020). Learning Action Representations For Primitives-Based Motion Generation, Bachelor Thesis.
        Bib
  • Damken, F. (2020). Variational Autoencoders for Koopman Dynamical Systems, Bachelor Thesis.
        Bib
  • Eiermann, A. (2020). Optimierung von Biologischen Systemen, Bachelor Thesis.
      Bib
  • Kirschner, M. (2020). Integration of LIDAR SLAM for an autonomous vehicle, Bachelor Thesis.
      Bib
  • Nukovic, L. (2020). Evaluation of the Handshake Turing Test for anthropomorphic Robots, Bachelor Thesis.
      Bib
  • Scharf, F. (2020). Proximal Policy Optimization with Explicit Intrinsic Motivation, Bachelor Thesis.
        Bib
  • Stadtmueller, J. (2020). Dimensionality Reduction of Movement Primitives in Parameter Space, Bachelor Thesis.
        Bib
  • Divo, F. (2019). Trajectory Based Upper Body Gesture Recognition for an Assistive Robot, Bachelor Thesis.
        Bib
  • Ebeling, L. (2019). Experimental validation of an MPC-POMDP model of ball catching, Bachelor Thesis.
        Bib
  • Hensel, M. (2019). Correlated Exploration in Deep Reinforcement Learning, Bachelor Thesis.
        Bib
  • Kaiser, F. (2019). Towards a Robot Skill Library Using Hierarchy, Composition and Adaptation, Bachelor Thesis.
      Bib
  • Keller, L. (2019). Application of state-of-the-art RL algorithms to robotics simulators, Bachelor Thesis.
      Bib
  • Kinold, J. (2019). Development of a Simulation Model for an Autonomous Vehicle, Bachelor Thesis.
      Bib
  • Lang, M. (2019). Imitation Learning for Highlevel Robot Behavior in the Context of Elderly Assistance, Bachelor Thesis.
        Bib
  • Lutz, P. (2019). Automatic Segmentation and Labeling for Robot Table Tennis Time Series, Bachelor Thesis.
      Bib
  • Suess, J. (2019). Robust Control for Model Learning, Bachelor Thesis.
        Bib
  • Weiland, C. (2019). Deep Model-based Reinforcement Learning: Propagating Rewards Backwards through the Model for All Time-Steps, Bachelor Thesis.
      Bib
  • Borg, A. (2018). Infinite-Mixture Policies in Reinforcement Learning, Bachelor Thesis.
        Bib
  • Khaled, N. (2018). Benchmarking Reinforcement Learning Algorithms on Tetherball Games, Bachelor Thesis.
        Bib
  • Kolev, Z. (2018). Joint Learning of Humans and Robots, Bachelor Thesis.
        Bib
  • Rinder, S. (2018). Trajectory Kernels for Bayesian Optimization, Bachelor Thesis.
        Bib
  • Schneider, T. (2018). Guided Policy Search for In-Hand Manipulation, Bachelor Thesis.
        Bib
  • Schotschneider, A. (2018). Collision Avoidance in Uncertain Environments for Autonomous Vehicles using POMDPs, Bachelor Thesis.
        Bib
  • Tschirner, J. (2018). Boosted Deep Q-Network, Bachelor Thesis.
          Bib
  • Fiebig, K.-H. (2017). Multi-Task Logistic Regression in Brain-Computer Interfaces, Bachelor Thesis.
      Bib
  • Frisch, Y. (2017). The Effects of Intrinsic Motivation Signals on Reinforcement Learning Strategies, Bachelor Thesis.
        Bib
  • Hesse, R. (2017). Development and Evaluation of 3D Autoencoders for Feature Extraction, Bachelor Thesis.
        Bib
  • Lolkes, C. (2017). Incremental Imitation Learning with Estimation of Uncertainty, Bachelor Thesis.
        Bib
  • Pfanschilling, V. (2017). Self-Programming Mutation and Crossover in Genetic Programming for Code Generation, Bachelor Thesis.
        Bib
  • Polat, H. (2017). Nonparametric deep neural networks for movement planning, Bachelor Thesis.
        Bib
  • Rother, D. (2017). Transferring Insights on Biological Sleep to Robot Motor Skill Learning, Bachelor Thesis.
        Bib
  • Semmler, M. (2017). Exploration in Deep Reinforcement Learning, Bachelor Thesis.
        Bib
  • Szelag, S. (2017). Transferring Insights on Mental Training to Robot Motor Skill Learning, Bachelor Thesis.
        Bib
  • Thiem, S. (2017). Simulation of the underactuated Sake Robotics Gripper in V-REP and ROS, Bachelor Thesis.
        Bib
  • Zecevic, M. (2017). Matching Bundles of Axons Using Feature Graphs, Bachelor Thesis.
        Bib
  • Plage, L. M. (2016). Reinforcement Learning for tactile-based finger gaiting, Bachelor Thesis.
        Bib
  • Alte, D. (2016). Control of a robotic arm using a low-cost BCI, Bachelor Thesis.
          Bib
  • Becker, P. (2016). Learning Deep Feature Spaces for Nonparametric Inference, Bachelor Thesis.
        Bib
  • Grossberger, L. (2016). Towards a low-cost cognitive Brain-Computer Interface for Patients with Amyotrophic Lateral Sclerosis, Bachelor Thesis.
      Bib
  • Klink, P. (2016). Model Learning for Probabilistic Movement Primitives, Bachelor Thesis.
        Bib
  • Marg, V. (2016). Reinforcement Learning for a Dexterous Manipulation Task, Bachelor Thesis.
          Bib
  • Nakatenus, M. (2016). Multi-Agent Reinforcement Learning Algorithms, Bachelor Thesis.
      Bib
  • Palenicek, D. (2016). Reinforcement Learning for Mobile Missile Launching, Bachelor Thesis.
      Bib
  • Ramstedt, S. (2016). Deep Reinforcement Learning with Continuous Actions, Bachelor Thesis.
        Bib
  • Schultheis, M. (2016). Learning Priors for Error-related Decoding in EEG data for Brain-Computer Interfacing, Bachelor Thesis.
        Bib
  • Unverzagt, F.T. (2016). Modeling Robustness for Multi-Objective Optimization, Bachelor Thesis.
        Bib
  • Alexev, S. (2015). Reinforcement Learning für eine mobile Raketenabschußplattform, Bachelor Thesis.
      Bib
  • Berninger, K. (2015). Hierarchical Policy Search Algorithms, Bachelor Thesis.
        Bib
  • Blank, A. (2015). Learning a State Representation for a Game Agent’s Reactive Behaviour, Bachelor Thesis.
        Bib
  • End, F. (2015). Learning Versatile Solutions with Policy Search, Bachelor Thesis.
        Bib
  • Mayer, C. (2015). Learning to Sequence Movement Primitives for Rhythmic Tasks, Bachelor Thesis.
        Bib
  • Schaefer, A. (2015). Prediction of Finger Flexion from ECoG Data with a Deep Neural Network, Bachelor Thesis.
      Bib
  • Amend, S. (2014). Feature Extraction for Policy Search, Bachelor Thesis.
        Bib
  • Brandl, S. (2014). Learning to Pour Using Warped Features, Bachelor Thesis.
        Bib
  • Hesse, T. (2014). Spectral Learning of Hidden Markov Models, Bachelor Thesis.
        Bib
  • Hochlaender, A. (2014). Deep Learning for Reinforcement Learning in Pacman, Bachelor Thesis.
          Bib
  • Hoelscher, J. (2014). Tactile Exploration of Object Properties, Bachelor Thesis.
        Bib
  • Huhnstock, N. (2014). Tactile Sensing for Manipulation, Bachelor Thesis.
          Bib
  • Laux, M. (2014). Online Feature Learning for Reinforcement Learning, Bachelor Thesis.
          Bib
  • Luck, K. (2014). Latent Space Reinforcement Learning, Bachelor Thesis.
        Bib
  • Mattmann, A. (2014). Modeling How To Catch Flying Objects: Optimality Vs. Heuristics, Bachelor Thesis.
        Bib
  • Schroecker, Y. (2014). Artificial Curiosity for Motor Skill Learning, Bachelor Thesis.
        Bib
  • Smyk, M. (2014). Learning Generalizable Models for Compliant Robots, Bachelor Thesis.
        Bib
  • Thai, H.L. (2014). Laplacian Mesh Editing for Interaction Learning, Bachelor Thesis.
        Bib
  • von Willig, J. (2014). Reinforcement Learning for Heroes of Newerth, Bachelor Thesis.
        Bib
  • Notz, D. (2013). Reinforcement Learning for Planning in High-Dimensional Domains, Bachelor Thesis.
          Bib
  • Pfretzschner, B. (2013). Autonomous Car Driving using a Low-Cost On-Board Computer, Bachelor Thesis.
        Bib
  • Schoengen, S. (2013). Visual feature learning for interactive segmentation, Bachelor Thesis.
          Bib
  • Distler, M. (2012). Koennen Lernalgorithmen interagieren aehnlich wie im Gehirn?, Bachelor Thesis.
          Bib
  • Hensch, P. (2012). Comparing Reinforcement Learning Algorithms on Tic-Tac-Toe, Bachelor Thesis.
      Bib
  • Hess, S. (2012). Levitation Sphere, Bachelor Thesis.
      Bib
  • Sharma, D. (2012). Combining Reinforcement Learning and Feature Extraction, Bachelor Thesis.
          Bib
  • Zimpfer, A. (2012). Vergleich verschiedener Lernalgorithmen auf einem USB-Missilelauncher, Bachelor Thesis.
      Bib

Honors Theses and Advanced Design Projects

  • Sperling, J. (2021). Learning Robot Grasping of Industrial Work Pieces using Dense Object Descriptors, Honors Thesis.
      Bib
  • Smyk, M. (2016). Model-based Control and Planning on Real Robots, Honors Thesis.
      Bib
  • Koert, D. (2015). Inverse Kinematics for Optimal Human-Robot Collaboration, Honors Thesis.
          Bib
  • Abdulsamad, H.; Buchholz, T.; Croon, T.; El Khoury, M. (2014). Playing Tetherball with Compliant Robots, Advanced Design Project.
          Bib
  • Ho, D.; Kisner, V. (2014). Trajectory Tracking Controller for a 4-DoF Flexible Joint Robotic Arm, Advanced Design Project.
          Bib

Completed Seminar Theses

  • Alles, I. (2012). Models for Biological Motor Control: Modules of Movements, Seminar Thesis, Proceedings of the Robot Learning Seminar.
        Bib
  • Arenz, O. (2012). Extensive Games, Seminar Thesis, Proceedings of the Autonomous Learning Systems Seminar.
          Bib
  • Arenz, O. (2013). Inverse Optimal Control, Seminar Thesis, Proceedings of the Robot Learning Seminar.
          Bib
  • Dann, C. (2012). Algorithms for Fast Gradient Temporal Difference Learning, Seminar Thesis, Proceedings of the Autonomous Learning Systems Seminar.
          Bib
  • Dittmar, D. (2013). Slice Sampling, Seminar Thesis, Proceedings of the Robot Learning Seminar.
          Bib
  • Englert, P. (2012). Locally Weighted Learning, Seminar Thesis, Proceedings of the Autonomous Learning Systems Seminar.
          Bib
  • Fischer, A. (2012). Inverse Reinforcement Learning, Seminar Thesis, Proceedings of the Autonomous Learning Systems Seminar.
          Bib
  • Gabriel, A. (2012). An introduction to Structural Learning - A new approach in Reinforcement Learning, Seminar Thesis, Proceedings of the Robot Learning Seminar.
        Bib
  • Glaser, C. (2012). Learning in Reality: A case study of Stanley, the robot that Won the DARPA Challenge, Seminar Thesis, Proceedings of the Robot Learning Seminar.
        Bib
  • Gopalan, N. (2012). Gaussian Process Latent Variable Models for Dimensionality Reduction and Time Series Modeling, Seminar Thesis, Proceedings of the Robot Learning Seminar.
        Bib
  • Graber, T. (2012). Models for Biological Motor Control: Optimality Principles, Seminar Thesis, Proceedings of the Robot Learning Seminar.
        Bib
  • Hardock, S. (2012). Applications in Robot Helicopter Acrobatics, Seminar Thesis, Proceedings of the Autonomous Learning Systems Seminar.
          Bib
  • Isaak, J. (2012). Interaction Learning, Seminar Thesis, Proceedings of the Robot Learning Seminar.
        Bib
  • Kruk, S. (2013). Planning with Multiple Agents, Seminar Thesis, Proceedings of the Autonomous Learning Systems Seminar.
          Bib
  • Kunz, F. (2013). An Introduction to Temporal Difference Learning, Seminar Thesis, Proceedings of the Autonomous Learning Systems Seminar.
          Bib
  • Kutschke, M. (2012). Imitation Learning, Seminar Thesis, Proceedings of the Autonomous Learning Systems Seminar.
        Bib
  • Lioutikov, R. (2012). Machine learning and the brain, Seminar Thesis, Proceedings of the Robot Learning Seminar.
        Bib
  • Mindt, M. (2012). Learning robot control, Seminar Thesis, Proceedings of the Robot Learning Seminar.
        Bib
  • Mogk, R. (2012). Efficient Planning under Uncertainty with Macro-actions, Seminar Thesis, Proceedings of the Robot Learning Seminar.
        Bib
  • Mueck, J. (2012). Learning physical Models of Robots, Seminar Thesis, Proceedings of the Autonomous Learning Systems Seminar.
          Bib
  • Pignede, T. (2012). Evolution of Reinforcement Learning in Games or How to Win against Humans with Intelligent Agents, Seminar Thesis, Proceedings of the Autonomous Learning Systems Seminar.
          Bib
  • Ploetz, T. (2012). Deterministic Approximation Methods in Bayesian Inference, Seminar Thesis, Proceedings of the Robot Learning Seminar.
        Bib
  • Reubold, J. (2012). Kernel Descriptors in comparison with Hierarchical Matching Pursuit, Seminar Thesis, Proceedings of the Robot Learning Seminar.
        Bib
  • Schnell, F. (2013). Hierarchical Reinforcement Learning in Robot Control, Seminar Thesis, Proceedings of the Robot Learning Seminar.
          Bib
  • Schoenberger, D. (2012). Planning in POMDPs, Seminar Thesis, Proceedings of the Autonomous Learning Systems Seminar.
          Bib
  • Schroecker, Y. (2013). Planning for Relational Rules, Seminar Thesis, Proceedings of the Autonomous Learning Systems Seminar.
          Bib
  • Stark, S. (2012). Do Reinforcement Learning Models Explain Neural Learning?, Seminar Thesis, Proceedings of the Autonomous Learning Systems Seminar.
          Bib
  • Stein, A. (2013). Learning Robot Locomotion, Seminar Thesis, Proceedings of the Autonomous Learning Systems Seminar.
          Bib
  • Swiezinski, L. (2013). Lifecycle of a Jeopardy Question Answered by Watson DeepQA, Seminar Thesis, Proceedings of the Autonomous Learning Systems Seminar.
          Bib
  • Thiel, T. (2012). Learning in Robot Soccer, Seminar Thesis, Proceedings of the Robot Learning Seminar.
          Bib
  • Tschirsich, M. (2013). Learning Robot Control, Seminar Thesis, Proceedings of the Robot Learning Seminar.
          Bib
  • Viering, M. (2012). Hierarchical Reinforcement Learning in Robot Control, Seminar Thesis, Proceedings of the Robot Learning Seminar.
          Bib
  • Will, K. (2013). Autonomous Chess-Playing, Seminar Thesis, Proceedings of the Autonomous Learning Systems Seminar.
          Bib
  • Zoellner, M. (2013). Reinforcement Learning in Games, Seminar Thesis, Proceedings of the Autonomous Learning Systems Seminar.
          Bib

Completed IP Projects

Topic | Students | Advisor
Tactile Active Exploration of Object Shapes | Irina Rath, Dominik Horstkötter | Tim Schneider, Boris Belousov, Alap Kshirsagar
Object Hardness Estimation with Tactile Sensors | Mario Gomez, Frederik Heller | Alap Kshirsagar, Boris Belousov, Tim Schneider
Task and Motion Planning for Sequential Assembly | Paul-Hallmann, Nicolas Nonnengießer | Boris Belousov, Tim Schneider, Yuxi Liu
A Digital Framework for Interlocking SL-Blocks Assembly with Robots | Bingqun Liu | Mehrzad Esmaeili, Boris Belousov
Learn to play Tangram | Max Zimmermann, Dominik Marino, Maximilian Langer | Kay Hansel, Niklas Funk
Learning the Residual Dynamics using Extended Kalman Filter for Puck Tracking | Haoran Ding | Puze Liu, Davide Tateo
ROS Integration of Mitsubishi PA 10 robot | Jonas Günster | Puze Liu, Davide Tateo
6D Pose Estimation and Tracking for Ubongo 3D | Marek Daniv | Joao Carvalho, Suman Pal
Task Taxonomy for robots in household | Amin Ali, Xiaolin Lin | Snehal Jauhri, Ali Younes
Task and Motion Planning for Sequential Assembly | Paul-Hallmann, Patrick Siebke, Nicolas Nonnengießer | Boris Belousov, Tim Schneider, Yuxi Liu
Learning the Gait for Legged Robot via Safe Reinforcement Learning | Joshua Johannson, Andreas Seidl Fernandez | Puze Liu, Davide Tateo
Active Perception for Mobile Manipulation | Sophie Lueth, Syrine Ben Abid, Amine Chouchane | Snehal Jauhri
Combining RL/IL with CPGs for Humanoid Locomotion | Henri Geiss | Firas Al-Hafez, Davide Tateo
Multimodal Attention for Natural Human-Robot Interaction | Aleksandar Tatalovic, Melanie Jehn, Dhruvin Vadgama, Tobias Gockel | Oleg Arenz, Lisa Scherf
Hybrid Motion-Force Planning on Manifolds | Chao Jin, Peng Yan, Liyuan Xiang | An Thai Le, Junning Huang
Stability analysis for control algorithms of Furuta Pendulum | Lu Liu, Jiahui Shi, Yuheng Ouyang | Junning Huang, An Thai Le
Multi-sensorial reinforcement learning for robotic tasks | Rickmer Krohn | Georgia Chalvatzaki, Snehal Jauhri
Learning Behavior Trees from Video | Nick Dannenberg, Aljoscha Schmidt | Lisa Scherf, Suman Pal
Subgoal-Oriented Shared Control | Zhiyuan Gao, Fengyun Shao | Kay Hansel
Learn to play Tangram | Maximilian Langer | Kay Hansel, Niklas Funk
Theory of Mind Models for HRI under partial Observability | Franziska Herbert, Tobias Niehues, Fabian Kalter | Dorothea Koert, Joni Pajarinen, David Rother
Learning Safe Human-Robot Interaction | Zhang Zhang | Puze Liu, Snehal Jauhri
Active-sampling for deep Multi-Task RL | Fabian Wahren | Carlo D'Eramo, Georgia Chalvatzaki
Interpretable Reinforcement Learning | Patrick Vimr | Davide Tateo, Riad Akrour
Optimistic Actor Critic | Niklas Kappes, Pascal Herrmann | Joao Carvalho
Active Visual Search with POMDPs | Jascha Hellwig, Mark Baierl | Joao Carvalho, Julen Urain De Jesus
Utilizing 6D Pose-Estimation over ROS | Johannes Weyel | Julen Urain De Jesus
Learning Deep Heuristics for Robot Planning | Dominik Marino | Tianyu Ren
Learning Behavior Trees from Videos | Johannes Heeg, Aljoscha Schmidt, Adrian Worring | Suman Pal, Lisa Scherf
Learning Decisions by Imitating Human Control Commands | Jonas Günster, Manuel Senge | Junning Huang
Combining Self-Paced Learning and Intrinsic Motivation | Felix Kaiser, Moritz Meser, Louis Sterker | Pascal Klink
Self-Paced Reinforcement Learning for Sim-to-Real | Fabian Damken, Heiko Carrasco | Fabio Muratore
Policy Distillation for Sim-to-Real | Benedikt Hahner, Julien Brosseit | Fabio Muratore
Neural Posterior System Identification | Theo Gruner, Florian Wiese | Fabio Muratore, Boris Belousov
Synthetic Dataset generation for Articulation prediction | Johannes Weyel, Niklas Babendererde | Julen Urain, Puze Liu
Guided Dimensionality Reduction for Black-Box Optimization | Marius Memmel | Puze Liu, Davide Tateo
Learning Laplacian Representations for continuous MCTS | Daniel Mansfeld, Alex Ruffini | Tuan Dam, Georgia Chalvatzaki, Carlo D'Eramo
Object Tracking using Depth Camera | Leon Magnus, Svenja Menzenbach, Max Siebenborn | Niklas Funk, Boris Belousov, Georgia Chalvatzaki
GNNs for Robotic Manipulation | Fabio d'Aquino Hilt, Jan Kolf, Christian Weiland | Joao Carvalho
Benchmarking advances in MCTS in Go and Chess | Lukas Schneider | Tuan Dam, Carlo D'Eramo
Architectural Assembly: Simulation and Optimization | Jan Schneider | Boris Belousov, Georgia Chalvatzaki
Probabilistic Object Tracking using Depth Camera | Jan Emrich, Simon Kiefhaber | Niklas Funk, Boris Belousov, Georgia Chalvatzaki
Bayesian Optimization for System Identification in Robot Air Hockey | Chen Xue, Verena Sieburger | Puze Liu, Davide Tateo
Benchmarking MPC Solvers in the Era of Deep Reinforcement Learning | Darya Nikitina, Tristan Schulz | Joe Watson
Enhancing Attention Aware Movement Primitives | Artur Kruk | Dorothea Koert
Towards Semantic Imitation Learning | Pengfei Zhao | Julen Urain, Georgia Chalvatzaki
Can we use Structured Inference Networks for Human Motion Prediction? | Hanyu Sun, Liu Lanmiao | Julen Urain, Georgia Chalvatzaki
Reinforcement Learning for Architectural Combinatorial Optimization | Jianpeng Chen, Yuxi Liu, Martin Knoll, Leon Wietschorke | Boris Belousov, Georgia Chalvatzaki, Bastian Wibranek
Architectural Assembly With Tactile Skills: Simulation and Optimization | Tim Schneider, Jan Schneider | Boris Belousov, Georgia Chalvatzaki, Bastian Wibranek
Bayesian Last Layer Networks | Jihao Andreas Lin | Joe Watson, Pascal Klink
BATBOT: BATter roBOT for Baseball | Yannick Lavan, Marcel Wessely | Carlo D'Eramo
Benchmarking Deep Reinforcement Learning | Benedikt Volker | Davide Tateo, Carlo D'Eramo, Tianyu Ren
Model Predictive Actor-Critic Reinforcement Learning of Robotic Tasks | Daljeet Nandha | Georgia Chalvatzaki
Dimensionality Reduction for Reinforcement Learning | Jonas Jäger | Michael Lutter
From exploration to control: learning object manipulation skills through novelty search and local adaptation | Leon Keller | Svenja Stark, Daniel Tanneberg
Robot Air-Hockey | Patrick Lutz | Puze Liu, Davide Tateo
Learning Robotic Grasp of Deformable Object | Mingye Zhu, Yanhua Zhang | Tianyu Ren
Teach a Robot to solve puzzles with intrinsic motivation | Ali Karpuzoglu | Georgia Chalvatzaki, Svenja Stark
Inductive Biases for Robot Learning | Rustam Galljamov | Boris Belousov, Michael Lutter
Accelerated Mirror Descent Policy Search | Maximilian Hensel | Boris Belousov, Tuan Dam
Foundations of Adversarial and Robust Learning | Janosch Moos, Kay Hansel | Svenja Stark, Hany Abdulsamad
Likelihood-free Inference for Reinforcement Learning | Maximilian Hensel, Kai Cui | Boris Belousov
Risk-Aware Reinforcement Learning | Maximillian Kircher, Angelo Campomaggiore, Simon Kohaut, Dario Perrone | Samuele Tosatto, Dorothea Koert
| Jonas Eschmann, Robin Menzenbach, Christian Eilers | Boris Belousov, Fabio Muratore
Learning Symbolic Representations for Abstract High-Level Planning | Zhiyuan Hu, Claudia Lölkes, Haoyi Yang | Svenja Stark, Daniel Tanneberg
Learning Perceptual ProMPs for Catching Balls | Axel Patzwahl | Dorothea Koert, Michael Lutter
Bayesian Inference for Switching Linear Dynamical Systems | Markus Semmler, Stefan Fabian | Hany Abdulsamad
Characterization of WAM Dynamics | Kai Ploeger | Dorothea Koert, Michael Lutter
Deep Reinforcement Learning for playing Starcraft II | Daniel Palenicek, Marcel Hussing, Simon Meister | Filipe Veiga
Enhancing Exploration in High-Dimensional Environments | Lu Wan, Shuo Zhang | Simone Parisi
Building a Grasping Testbed | Devron Williams | Oleg Arenz
Online Dynamic Model Learning | Pascal Klink | Hany Abdulsamad, Alexandros Paraschos
Spatio-spectral Transfer Learning for Motor Performance Estimation | Karl-Heinz Fiebig | Daniel Tanneberg
From Robots to Cobots | Michael Burkhardt, Moritz Knaust, Susanne Trick | Dorothea Koert, Marco Ewerton
Learning Hand-Kinematics | Sumanth Venugopal, Deepak Singh Mohan | Gregor Gebhardt
Goal-directed reward generation | Alymbek Sadybakasov | Boris Belousov
Learning Grammars for Sequencing Movement Primitives | Kim Berninger, Sebastian Szelag | Rudolf Lioutikov
Learning Deep Feature Spaces for Nonparametric Inference | Philipp Becker | Gregor Gebhardt
Lazy skill learning for cleaning up a table | Lejla Nukovic, Moritz Fuchs | Svenja Stark
Reinforcement Learning for Gait Learning in Quadrupeds | Kai Ploeger, Zinan Liu | Svenja Stark
Optimal Control for Biped Locomotion | Martin Seiler, Max Kreischer | Hany Abdulsamad
Semi-Autonomous Tele-Operation | Nick Heppert, Marius, Jeremy Tschirner | Oleg Arenz
Teaching People how to Write Japanese Characters | David Rother, Jakob Weimar, Lars Lotter | Marco Ewerton
Local Bayesian Optimization | Dmitry Sorokin | Riad Akrour
Bayesian Deep Reinforcement Learning - Tools and Methods | Simon Ramstedt | Simone Parisi
Controlled Slip for Object Release | Steffen Kuchelmeister, Albert Schotschneider | Filipe Veiga
Learn intuitive physics from videos | Yunlong Song, Rong Zhi | Boris Belousov
Learn an Assembling Task with Swarm Robots | Kevin Daun, Marius Schnaubelt | Gregor Gebhardt
Learning To Sequence Movement Primitives | Christoph Mayer | Christian Daniel
Learning versatile solutions for Table Tennis | Felix End | Gerhard Neumann, Riad Akrour
Learning to Control Kilo-Bots with a Flashlight | Alexander Hendrich, Daniel Kauth | Gregor Gebhardt
Playing Badminton with Robots | J. Tang, T. Staschewski, H. Gou | Boris Belousov
Juggling with Robots | Elvir Sabic, Alexander Wölker | Dorothea Koert
Learning and control for the bipedal walker FaBi | Manuel Bied, Felix Treede, Felix Pels | Roberto Calandra
Finding visual kernels | Fide Marten, Dominik Dienlin | Herke van Hoof
Feature Selection for Tetherball Robot Games | Xuelei Li, Jan Christoph Klie | Simone Parisi
Inverse Reinforcement Learning of Flocking Behaviour | Maximilian Maag, Robert Pinsler | Oleg Arenz
Control and Learning for a Bipedal Robot | Felix Treede, Phillip Konow, Manuel Bied | Roberto Calandra
Perceptual coupling with ProMPs | Johannes Geisler, Emmanuel Stapf | Alexandros Paraschos
Learning to balance with the iCub | Moritz Nakatenus, Jan Geukes | Roberto Calandra
Generalizing Models for a Compliant Robot | Mike Smyk | Herke van Hoof
Learning Minigolf with the BioRob | Florian Brandherm | Marco Ewerton
iCub Telecontrol | Lars Fritsche, Felix Unverzagt | Roberto Calandra
REPS for maneuvering in Robocup | Jannick Abbenseth, Nicolai Ommer | Christian Daniel
Learning Ball on a Beam on the KUKA lightweight arms | Bianca Loew, Daniel Wilberts | Christian Daniel
Sequencing of DMPs for Task- and Motion Planning | Markus Sigg, Fabian Faller | Rudolf Lioutikov
Tactile Exploration and Mapping | Thomas Arnreich, Janine Hoelscher | Tucker Hermans
Multiobjective Reinforcement Learning on Tetherball BioRob | Alexander Blank, Tobias Viernickel | Simone Parisi
Semi-supervised Active Grasp Learning | Simon Leischnig, Stefan Lüttgen | Oliver Kroemer