Completed Master Theses
For completed PhD theses, see PhD Theses. Here, only undergraduate and M.Sc. theses are listed.
- Aristakesyan, A. (2024). Analysing Sparse Attention in Reinforcement Learning for Continuous Control: alpha-entmax and Attention Masking for Morphology-Aware Decision-Making, Master Thesis.
- Chen, C. (2024). Learning Tactile Manipulation Policies from Human Demonstrations, Master Thesis.
- Cosserat, E. (2024). Refining vision-based 6D-pose estimation using tactile sensing, Master Thesis.
- Geiss, H.J. (2024). Inverse Reinforcement Learning for Musculoskeletal Control of Humanoid Locomotion, Master Thesis.
- Guenster, J. (2024). Handling Long-Term Constraints And Uncertainty in Safe RL, Master Thesis.
- Hahner, B. (2024). Posterior Sampling Reinforcement Learning with Function-space Variational Inference, Master Thesis.
- Hellinger, N. (2024). Smart, self-calibrating force/torque sensor, Master Thesis.
- Meser, M. (2024). Model Predictive Control for Humanoid Control, Master Thesis.
- Ouyang, Y. (2024). Hierarchical Reinforcement Learning with Self-Play for Robotic Air Hockey, Master Thesis.
- Qian, C. (2024). Learning Dexterous Manipulation from Videos, Master Thesis.
- Rau, M.J. (2024). Scaling learned, graph-based assembly policies, Master Thesis.
- Schlee, D. (2024). Control Strategies for a Rimless Wheel Micro-Rover, Master Thesis.
- Siebke, P. (2024). Analysis of Quasimetric Reinforcement Learning, Master Thesis.
- Tisch, T. (2024). Dynamic Multi-Agent Reward Sharing, Master Thesis.
- Wahren, F. (2024). Adapt your network: Investigating neural network’s architecture in Q-learning methods, Master Thesis.
- Baierl, M. (2023). Score-Based Generative Models as Trajectory Priors for Motion Planning, Master Thesis.
- Brosseit, J. (2023). The Principle of Value Equivalence for Policy Gradient Search, Master Thesis.
- Gao, Z. (2023). Hierarchical Contextualization of Movement Primitives, Master Thesis.
- Gece, A. (2023). Leveraging Structured-Graph Correspondence in Imitation Learning, Master Thesis.
- Hammacher, C. (2023). Object Pose Estimation and Manipulation from Pointclouds using Energy-based Models, Master Thesis.
- Heeg, J. (2023). Task Space Exploration in Robot Reinforcement Learning, Master Thesis.
- Herrmann, P. (2023). 6DCenterPose: Multi-object RGB-D 6D pose tracking with synthetic training data, Master Thesis.
- Hilt, F. (2023). Statistical Model-based Reinforcement Learning, Master Thesis.
- Janjus, B. (2023). Genetic Programming For Interpretable Reinforcement Learning, Master Thesis.
- Jehn, M. (2023). NAS with GFlowNets, Master Thesis.
- Kappes, N. (2023). Natural Gradient Optimistic Actor Critic, Master Thesis.
- Keller, L. (2023). Context-Dependent Variable Impedance Control with Stability Guarantees, Master Thesis.
- Langer, M. (2023). Energy-based Models for 6D Pose Estimation, Master Thesis.
- Liu, L. (2023). Robot Gaze for Communicating Collision Avoidance Intent in Shared Workspaces, Master Thesis.
- Carrasco, H. (2022). Particle-Based Adaptive Sampling for Curriculum Learning, Master Thesis.
- Chue, X. (2022). Task Classification and Local Manipulation Controllers, Master Thesis.
- Hellwig, H. (2022). Residual Reinforcement Learning with Stable Priors, Master Thesis.
- Jarnefelt, O. (2022). Sparsely Collaborative Multi-Agent Reinforcement Learning, Master Thesis.
- Kaiser, F. (2022). Multi-Object Pose Estimation for Robotic Applications in Cluttered Scenes, Master Thesis.
- Mueller, P.-O. (2022). Learning Interpretable Representations for Visuotactile Sensors, Master Thesis.
- Musekamp, D. (2022). Amortized Variational Inference with Gaussian Mixture Models, Master Thesis.
- Newswanger, A. (2022). Indoor Visual Navigation on Micro-Aerial Drones without External Infrastructure, Master Thesis.
- Schneider, J. (2022). Model Predictive Policy Optimization amidst Inaccurate Models, Master Thesis.
- Sieburger, V. (2022). Development of a Baseline Agent in Robot Air Hockey, Master Thesis.
- Szelag, S. (2022). Multi-Agent Reinforcement Learning for Assembly, Master Thesis.
- Toelle, M. (2022). Curriculum Adversarial Reinforcement Learning, Master Thesis.
- Vincent, T. (2022). Projected Bellman Operator, Master Thesis.
- Xu, X. (2022). Visuotactile Grasping From Human Demonstrations, Master Thesis.
- Yang, Z. (2022). Exploring Gripping Behaviours and Haptic Emotions for Human-Robot Handshaking, Master Thesis.
- Zhang, K. (2022). Learning Geometric Constraints for Safe Robot Interactions, Master Thesis.
- Zhao, P. (2022). Improving Gradient Directions for Episodic Policy Search, Master Thesis.
- Brendgen, J. (2021). The Relation between Social Interaction And Intrinsic Motivation in Reinforcement Learning, Master Thesis.
- Buchholz, T. (2021). Variational Locally Projected Regression, Master Thesis.
- Derstroff, C. (2021). Memory Representations for Partially Observable Reinforcement Learning, Master Thesis.
- Eich, Y. (2021). Distributionally Robust Optimization for Hybrid Systems, Master Thesis.
- Gruner, T. (2021). Wasserstein-Optimal Bayesian System Identification for Domain Randomization, Master Thesis.
- Hansel, K. (2021). Probabilistic Dynamic Mode Primitives, Master Thesis.
- He, J. (2021). Imitation Learning with Energy Based Model, Master Thesis.
- Huang, J. (2021). Multi-Objective Reactive Motion Planning in Mobile Manipulators, Master Thesis.
- Kaemmerer, M. (2021). Measure-Valued Derivatives for Machine Learning, Master Thesis.
- Lin, J.A. (2021). Functional Variational Inference in Bayesian Neural Networks, Master Thesis.
- Liu, L. (2021). Detection and Prediction of Human Gestures by Probabilistic Modelling, Master Thesis.
- Moos, J. (2021). Approximate Variational Inference for Mixture Models, Master Thesis.
- Palenicek, D. (2021). Dyna-Style Model-Based Reinforcement Learning with Value Expansion, Master Thesis.
- Patzwahl, A. (2021). Multi-sensor Fusion for Target Motion Prediction with an Application to Robot Baseball, Master Thesis.
- Rathjens, J. (2021). Accelerated Policy Search, Master Thesis.
- Schneider, T. (2021). Active Inference for Robotic Manipulation, Master Thesis.
- Sun, H. (2021). Can we improve time-series classification with Inverse Reinforcement Learning?, Master Thesis.
- Wang, Y. (2021). Bimanual Control and Learning with Composable Energy Policies, Master Thesis.
- Wegner, F. (2021). Learning Vision-Based Tactile Representations for Robotic Architectural Assembly, Master Thesis.
- Yang, H. (2021). Variational Inference for Curriculum Reinforcement Learning, Master Thesis.
- Ye, Z. (2021). Efficient Gradient-Based Variational Inference with GMMs, Master Thesis.
- Zhang, Y. (2021). Memory Representations for Partially Observable Reinforcement Learning, Master Thesis.
- Zhou, Z. (2021). Approximated Policy Search in Black-Box Optimization, Master Thesis.
- Borhade, P. (2020). Multi-agent reinforcement learning for autonomous driving, Master Thesis.
- Dorau, T. (2020). Distributionally Robust Optimization for Optimal Control, Master Thesis.
- Galljamov, R. (2020). Sample-Efficient Learning-Based Controller for Bipedal Walking in Robotic Systems, Master Thesis.
- Georgos, A. (2020). Robotics under Partial Observability, Master Thesis.
- Klein, A. (2020). Learning Robot Grasping of Industrial Work Pieces using Dense Object Descriptors, Master Thesis.
- Krabbe, P. (2020). Learning Riemannian Movement Primitives for Manipulation, Master Thesis.
- Lautenschlaeger, T. (2020). Variational Inference for Switching Dynamics, Master Thesis.
- Mentzendorff, E. (2020). Multi-Objective Deep Reinforcement Learning through Manifold Optimization, Master Thesis.
- Ploeger, K. (2020). High Acceleration Reinforcement Learning for Real-World Juggling with Binary Rewards, Master Thesis.
- Schotschneider, A. (2020). Learning High-Level Behavior for Autonomous Vehicles, Master Thesis.
- Semmler, M. (2020). Sequential Bayesian Optimal Experimental Design for Nonlinear Dynamics, Master Thesis.
- Sharma, S. (2020). SAC-RL: Continuous Control of Wheeled Mobile Robot for Navigation in a Dynamic Environment, Master Thesis.
- Tekam, S. (2020). A Gaussian Mixture Model Approach to Off-Policy Policy Gradient Estimation, Master Thesis.
- Tengang, V. M. (2020). 3D Pose Estimation for Robot Mikado, Master Thesis.
- Weimar, J. (2020). Exploring Intrinsic Motivation for Quanser Reinforcement Learning Benchmark Systems, Master Thesis.
- Williamson, L. (2020). Learning Nonlinear Dynamical Systems with the Koopman Operator, Master Thesis.
- Zecevic, M. (2020). Learning Algorithms, Invariances, and the Real World, Master Thesis.
- Baig, I. (2019). Deep End-to-End Calibration, Master Thesis.
- Becker, P. (2019). Expected Information Maximization: Using the I-Projection for Mixture Density Estimation, Master Thesis.
- Bous, F. (2019). Generating Spectral Envelopes for Singing Synthesis with Neural Networks, Master Thesis.
- Carvalho, J.A.C. (2019). Nonparametric Off-Policy Policy Gradient, Master Thesis.
- Cui, K. (2019). A Study on TD-Regularized Actor-Critic Methods, Master Thesis.
- Delfosse, Q. (2019). Grasping Objects Using a Goal-Discovering Architecture for Intrinsically-Motivated Learning, Master Thesis.
- Eschenbach, M. (2019). Metric-based Imitation Learning, Master Thesis.
- Hartmann, V. (2019). Efficient Exploration using Value Bounds in Deep Reinforcement Learning, Master Thesis.
- Hoffmann, D. (2019). Pedestrian Detection, Tracking and Intention Prediction in the context of autonomous Driving, Master Thesis.
- Hubecker, S. (2019). Curiosity Driven Reinforcement Learning for Autonomous Driving, Master Thesis.
- Huegelmann, N. (2019). Generating adaptable and reusable robot movements for a robot kitchen, Master Thesis.
- Jukonyte, L. (2019). Probabilistic Motion and Intention Prediction for Autonomous Vehicles, Master Thesis.
- Kircher, M. (2019). Learning from Human Feedback: a Comparison of Interactive Reinforcement Learning Algorithms, Master Thesis.
- Klink, P. (2019). Generalization and Transferability in Reinforcement Learning, Master Thesis.
- Knaust, M. (2019). Intuitive imitation learning for one-handed and bimanual tasks using ProMPs, Master Thesis.
- Laux, M. (2019). Deep Adversarial Reinforcement Learning for Object Disentangling, Master Thesis.
- Liu, Z. (2019). Local Online Motor Babbling: Learning Motor Abundance of A Musculoskeletal Robot Arm, Master Thesis.
- Mohan, D.S. (2019). Learning hand adjustments for haptic interactions, Master Thesis.
- Nass, D. (2019). Risk-Sensitive Policy Search with Applications to Robot-Badminton, Master Thesis.
- Nickl, P. (2019). Bayesian Inference for Regression Models using Nonparametric Infinite Mixtures, Master Thesis.
- Pal, S. (2019). Deep Robot Reinforcement Learning for Assisting a Human, Master Thesis.
- Sadybakasov, A. (2019). Learning Vision-Based Tactile Skills for Robotic Architectural Assembly, Master Thesis.
- Saoud, H. (2019). Improving Sample-Efficiency with a Model-Based Deterministic Policy Gradient, Master Thesis.
- Schultheis, M. (2019). Approximate Bayesian Reinforcement Learning for System Identification, Master Thesis.
- Weigand, S. (2019). Guided Reinforcement Learning Under Partial Observability, Master Thesis.
- Wilberg, A. (2019). An Exploration of Interacting Machine Learning Methods for Agent Navigation, Master Thesis.
- Woelker, A. (2019). Local Pixel Manipulation Detection with Deep Neural Networks, Master Thesis.
- Zhang, S. (2019). Integration of self-imitation and model-based learning to actor-critic algorithms, Master Thesis.
- Ziese, A. (2019). Fast Multi-Objective Redundancy Resolution for Highly-Redundant Mobile Robots, Master Thesis.
- Brandherm, F. (2018). Learning Replanning Policies with Direct Policy Search, Master Thesis.
- Celik, O. (2018). Chance Constraints for Stochastic Optimal Control and Stochastic Optimization, Master Thesis.
- Dienlin, D. (2018). Generative Adversarial Models for Deep Driving, Master Thesis.
- Dittmar, D. (2018). Distributed Reinforcement Learning with Neural Networks for Robotics, Master Thesis.
- Ritter, C. (2018). Deep Learning of Inverse Dynamic Models, Master Thesis.
- Song, Y. (2018). Minimax and entropic proximal policy optimization, Master Thesis.
- Thai, H. L. (2018). Deep Reinforcement Learning for POMDPs, Master Thesis.
- Trick, S. (2018). Multimodal Uncertainty Reduction for Intention Recognition in a Human-Robot Environment, Master Thesis.
- Wang, Z. (2018). Representation Learning for Tactile Manipulation, Master Thesis.
- Zhi, R. (2018). Deep reinforcement learning under uncertainty for autonomous driving, Master Thesis.
- Ahmad, P. (2017). Analysis of Financial Big Data using Machine Learning and Graph Processing, Master Thesis.
- Beckmann, L. (2017). Lane estimation with deep neural networks, Master Thesis.
- Gabriel, A. (2017). Empowered Skills, Master Thesis.
- Gadiya, P. (2017). Large Scale Real-Time Data Analytics Platform for Energy Exchange, Master Thesis.
- Gondaliya, K. (2017). Learning to Categorize Issues in Distributed Bug Tracker Systems, Master Thesis.
- Hoelscher, J. (2017). Interactive Planning Under Uncertainty, Master Thesis.
- Pinsler, R. (2017). Data-Efficient Learning of Robotic Grasps From Human Preferences, Master Thesis.
- Sharma, D. (2017). Adaptive Training Strategies for Brain Computer Interfaces, Master Thesis.
- Shinde, S. (2017). POMDPs for Continuous States and Observations for Robotics, Master Thesis.
- Weibrecht, N. (2017). Auswertung von Sensordaten mit Machine Learning Algorithmen, Master Thesis.
- Abbenseth, J. (2016). Cooperative Path-Planning for Service Robots, Master Thesis.
- Abdulsamad, H. (2016). Stochastic Optimal Control with Linearized Dynamics, Master Thesis.
- Achieser, I. (2016). Potential evaluation of eye- and headtracking data as a robust and real-time capable predictor for driver intention detection and integration into an algorithm for maneuver prediction, Master Thesis.
- Belousov, B. (2016). Optimal Control of Ball Catching, Master Thesis.
- Hüttenrauch, M. (2016). Guided Deep Reinforcement Learning for Robot Swarms, Master Thesis.
- Hesse, T. (2016). Learning a Filter for Noise Attenuation in EEG Data for Brain-Computer Interfaces, Master Thesis.
- Koert, D. (2016). Combining Human Demonstrations and Motion Planning for Movement Primitive Optimization, Master Thesis.
- Kohlschuetter, J. (2016). Learning Probabilistic Classifiers from Electromyography Data for Predicting Knee Abnormalities, Master Thesis.
- Luck, K. (2016). Multi-Group Factor Extension of the GrouPS algorithm and Real-World Robot Learning, Master Thesis.
- Schuster, R. (2016). 3D Object Proposals from Stereo and Optical Flow, Master Thesis.
- Stapf, E. (2016). Predicting Traffic Flows for Traffic Engineering in Software-Defined Networks, Master Thesis.
- Stark, S. (2016). Learning Probabilistic Feedforward and Feedback Policies for Generating Stable Walking Behaviors, Master Thesis.
- Wilbers, D. (2016). Context-driven Movement Primitive Adaptation, Master Thesis.
- Novoty, M. (2015). Application of Decision-Support Technologies for an Autonomous Evasion System for UAVs, Master Thesis.
- Tanneberg, D. (2015). Spiking Neural Networks Solve Robot Planning Problems, Master Thesis.
- Vandommele, T. (2015). Entwicklung eines Algorithmus zur Klassifikation von Schläfrigkeit durch videobasierte Fahrerbeobachtung, Master Thesis.
- Wieland, A. (2015). Probabilistic Methods for Forecasting of Electric Load Profiles, Master Thesis.
- Arenz, O. (2014). Feature Extraction for Inverse Reinforcement Learning, Master Thesis.
- Barnikol, S. (2014). Machine Learning for Active Gait Support with a Powered Ankle Prosthesis, Master Thesis.
- Chebotar, Y. (2014). Learning Robot Tactile Sensing for Object Manipulation, Master Thesis.
- Dann, C. (2014). Value-Function-Based Reinforcement Learning with Temporal Differences, Master Thesis.
- Ewerton, M. (2014). Modeling Human-Robot Interaction with Probabilistic Movement Representations, Master Thesis.
- Gebhardt, G.H.W. (2014). Embedding Kalman Filters into Reproducing Kernel Hilbert Spaces, Master Thesis.
- Kamthe, S. (2014). Multi-modal Inference in Time Series, Master Thesis.
- Manschitz, S. (2014). Learning Sequential Skills for Robot Manipulation Tasks, Master Thesis.
- Merfels, C. (2014). Large-scale probabilistic feature mapping and tracking for autonomous driving, Master Thesis.
- Mindt, M. (2014). Probabilistic Inference for Movement Planning in Humanoids, Master Thesis.
- Mundo, J. (2014). Extracting Low-Dimensional Control Variables for Movement Primitives, Master Thesis.
- Reubold, J. (2014). 3D Object Reconstruction from Partial Views, Master Thesis.
- Ringwald, J. (2014). Combination of Movement Primitives for Robotics, Master Thesis.
- Zeiss, S. (2014). Manipulation Skill for Robotic Assembly, Master Thesis.
- Englert, P. (2013). Model-based Imitation Learning by Probabilistic Trajectory Matching, Master Thesis.
- Haji Ghasemi, N. (2013). Approximate Gaussian Process Inference with Periodic Kernels, Master Thesis.
- Lioutikov, R. (2013). Learning time-dependent feedback policies with model-based policy search, Master Thesis.
- Schmitt, F. (2013). Probabilistic Nonlinear Model Predictive Control based on Pontryagin's Minimum Principle, Master Thesis.
- Daniel, C. (2012). Hierarchical Relative Entropy Policy Search, Master Thesis.
- Gopalan, N. (2012). Feedback Error Learning for Gait Acquisition, Master Thesis.
- Zhou, R. (2012). Free Space Detection Based On Occupancy Gridmaps, Master Thesis.
- Muelling, K. (2009). Modeling Human Table Tennis, MSc Thesis (completed at IAS/Tuebingen before the move to TU Darmstadt).
- Kober, J. (2008). Reinforcement Learning for Motor Primitives, MSc Thesis (completed at IAS/Tuebingen before the move to TU Darmstadt).
Completed Bachelor Theses
- Ahmad, F. (2024). Diminishing Return of Value Expansion Methods in Discrete Model-Based Reinforcement Learning, Bachelor Thesis.
- Dennert, D. (2024). Diminishing Return of Value Expansion Methods in Offline Model-Based Reinforcement Learning, Bachelor Thesis.
- Feess, K. (2024). Learning Advanced Manipulation Skills on the SE(3) Manifold with Stability Guarantees, Bachelor Thesis.
- Glaser, M. (2024). Real2Sim for Play-Doh Manipulation, Bachelor Thesis.
- Heller, F. (2024). Learning Object Stress and Deformation With Graph Neural Networks, Bachelor Thesis.
- Horstkoetter, D. (2024). InsertionNet for Autonomous Assembly of Self-Interlocking Blocks, Bachelor Thesis.
- Nguyen, D. H. (2024). Simulation of GelSight Tactile Sensors for Reinforcement Learning, Bachelor Thesis.
- Althaus, T. (2023). Inverse Reinforcement Learning from Observation for Locomotion on the Unitree A1 Robot, Bachelor Thesis.
- Boehm, A. (2023). Active Exploration for Tactile Texture Perception, Bachelor Thesis.
- Chemangui, E. (2023). Detecting Human Uncertainty from Multimodal Behavioral Data in a Task with Perceptual Ambiguity, Bachelor Thesis.
- Gomez Andreu, M. (2023). Towards Dynamic Robot Juggling: Adaptive Planning for Siteswap Patterns, Bachelor Thesis.
- Maurer, C. (2023). Quantifying Policy Uncertainty for Interactive Reinforcement Learning with Unreliable Human Action Advice, Bachelor Thesis.
- Atashak, M. (2022). Will it Blend? Learning to Coexecute Subskills, Bachelor Thesis.
- Daniv, M. (2022). Graph-Based Model Predictive Visual Imitation Learning, Bachelor Thesis.
- Kinzel, J. (2022). Modelling and Control of a Spherical Pendulum on a 4-DOF Barret WAM, Bachelor Thesis.
- Lokadjaja, S. (2022). Parallel Tempering VIPS, Bachelor Thesis.
- Magnus, L. (2022). Real-time Object Tracking for Assembly, Bachelor Thesis.
- Menzenbach, S. (2022). Leveraging Learned Graph-based Heuristics for efficiently solving the Combinatorics of Assembly, Bachelor Thesis.
- Meser, M. (2022). Multi-Instance Pose Estimation for Robot Mikado, Bachelor Thesis.
- Nikitina, D. (2022). Inference Methods for Markov Decision Processes, Bachelor Thesis.
- Prescher, E. (2022). Visual Hierarchical Recognition And Segmentation Of Interactions, Bachelor Thesis.
- Siebenborn, M. (2022). Evaluating Decision Transformer Architecture on Robot Learning Tasks, Bachelor Thesis.
- Sterker, L. (2022). Social Interaction Segmentation and Learning using Hidden semi-Markov Models, Bachelor Thesis.
- Woortman, N. (2022). Comparing and Personalizing Human Following Behaviors for Mobile Ground Robots, Bachelor Thesis.
- Ali, M. (2021). An Educational Framework for Robot Learning, Bachelor Thesis.
- Gassen, M. (2021). Learning a library of Physical Interactions for Social Robots, Bachelor Thesis.
- Helfenstein, F. (2021). Benchmarking Deep Reinforcement Learning Algorithms, Bachelor Thesis.
- Schneider, L. (2021). Distributional Monte-Carlo Tree Search, Bachelor Thesis.
- Zoeller, M. (2021). Graph Neural Networks for Model-Based Reinforcement Learning, Bachelor Thesis.
- Baierl, M. (2020). Learning Action Representations For Primitives-Based Motion Generation, Bachelor Thesis.
- Damken, F. (2020). Variational Autoencoders for Koopman Dynamical Systems, Bachelor Thesis.
- Eiermann, A. (2020). Optimierung von Biologischen Systemen, Bachelor Thesis.
- Kirschner, M. (2020). Integration of LIDAR SLAM for an autonomous vehicle, Bachelor Thesis.
- Nukovic, L. (2020). Evaluation of the Handshake Turing Test for anthropomorphic Robots, Bachelor Thesis.
- Scharf, F. (2020). Proximal Policy Optimization with Explicit Intrinsic Motivation, Bachelor Thesis.
- Stadtmueller, J. (2020). Dimensionality Reduction of Movement Primitives in Parameter Space, Bachelor Thesis.
- Divo, F. (2019). Trajectory Based Upper Body Gesture Recognition for an Assistive Robot, Bachelor Thesis.
- Ebeling, L. (2019). Experimental validation of an MPC-POMDP model of ball catching, Bachelor Thesis.
- Hensel, M. (2019). Correlated Exploration in Deep Reinforcement Learning, Bachelor Thesis.
- Kaiser, F. (2019). Towards a Robot Skill Library Using Hierarchy, Composition and Adaptation, Bachelor Thesis.
- Keller, L. (2019). Application of state-of-the-art RL algorithms to robotics simulators, Bachelor Thesis.
- Kinold, J. (2019). Development of a Simulation Model for an Autonomous Vehicle, Bachelor Thesis.
- Lang, M. (2019). Imitation Learning for Highlevel Robot Behavior in the Context of Elderly Assistance, Bachelor Thesis.
- Lutz, P. (2019). Automatic Segmentation and Labeling for Robot Table Tennis Time Series, Bachelor Thesis.
- Schulze, L. (2019). Off-line Model-Based Predictive Control aplicado a Sistemas Robóticos, Bachelor Thesis.
- Suess, J. (2019). Robust Control for Model Learning, Bachelor Thesis.
- Weiland, C. (2019). Deep Model-based Reinforcement Learning: Propagating Rewards Backwards through the Model for All Time-Steps, Bachelor Thesis.
- Borg, A. (2018). Infinite-Mixture Policies in Reinforcement Learning, Bachelor Thesis.
- Khaled, N. (2018). Benchmarking Reinforcement Learning Algorithms on Tetherball Games, Bachelor Thesis.
- Kolev, Z. (2018). Joint Learning of Humans and Robots, Bachelor Thesis.
- Rinder, S. (2018). Trajectory Kernels for Bayesian Optimization, Bachelor Thesis.
- Schneider, T. (2018). Guided Policy Search for In-Hand Manipulation, Bachelor Thesis.
- Schotschneider, A. (2018). Collision Avoidance in Uncertain Environments for Autonomous Vehicles using POMDPs, Bachelor Thesis.
- Tschirner, J. (2018). Boosted Deep Q-Network, Bachelor Thesis.
- Fiebig, K.-H. (2017). Multi-Task Logistic Regression in Brain-Computer Interfaces, Bachelor Thesis.
- Frisch, Y. (2017). The Effects of Intrinsic Motivation Signals on Reinforcement Learning Strategies, Bachelor Thesis.
- Hesse, R. (2017). Development and Evaluation of 3D Autoencoders for Feature Extraction, Bachelor Thesis.
- Lolkes, C. (2017). Incremental Imitation Learning with Estimation of Uncertainty, Bachelor Thesis.
- Pfanschilling, V. (2017). Self-Programming Mutation and Crossover in Genetic Programming for Code Generation, Bachelor Thesis.
- Polat, H. (2017). Nonparametric deep neural networks for movement planning, Bachelor Thesis.
- Rother, D. (2017). Transferring Insights on Biological Sleep to Robot Motor Skill Learning, Bachelor Thesis.
- Semmler, M. (2017). Exploration in Deep Reinforcement Learning, Bachelor Thesis.
- Szelag, S. (2017). Transferring Insights on Mental Training to Robot Motor Skill Learning, Bachelor Thesis.
- Thiem, S. (2017). Simulation of the underactuated Sake Robotics Gripper in V-REP and ROS, Bachelor Thesis.
- Zecevic, M. (2017). Matching Bundles of Axons Using Feature Graphs, Bachelor Thesis.
- Plage, L. M. (2016). Reinforcement Learning for tactile-based finger gaiting, Bachelor Thesis.
- Alte, D. (2016). Control of a robotic arm using a low-cost BCI, Bachelor Thesis.
- Becker, P. (2016). Learning Deep Feature Spaces for Nonparametric Inference, Bachelor Thesis.
- Grossberger, L. (2016). Towards a low-cost cognitive Brain-Computer Interface for Patients with Amyotrophic Lateral Sclerosis, Bachelor Thesis.
- Klink, P. (2016). Model Learning for Probabilistic Movement Primitives, Bachelor Thesis.
- Marg, V. (2016). Reinforcement Learning for a Dexterous Manipulation Task, Bachelor Thesis.
- Nakatenus, M. (2016). Multi-Agent Reinforcement Learning Algorithms, Bachelor Thesis.
- Palenicek, D. (2016). Reinforcement Learning for Mobile Missile Launching, Bachelor Thesis.
- Ramstedt, S. (2016). Deep Reinforcement Learning with Continuous Actions, Bachelor Thesis.
- Schultheis, M. (2016). Learning Priors for Error-related Decoding in EEG data for Brain-Computer Interfacing, Bachelor Thesis.
- Unverzagt, F.T. (2016). Modeling Robustness for Multi-Objective Optimization, Bachelor Thesis.
- Alexev, S. (2015). Reinforcement Learning für eine mobile Raketenabschußplattform, Bachelor Thesis.
- Berninger, K. (2015). Hierarchical Policy Search Algorithms, Bachelor Thesis.
- Blank, A. (2015). Learning a State Representation for a Game Agent’s Reactive Behaviour, Bachelor Thesis.
- End, F. (2015). Learning Versatile Solutions with Policy Search, Bachelor Thesis.
- Mayer, C. (2015). Learning to Sequence Movement Primitives for Rhythmic Tasks, Bachelor Thesis.
- Schaefer, A. (2015). Prediction of Finger Flexion from ECoG Data with a Deep Neural Network, Bachelor Thesis.
- Amend, S. (2014). Feature Extraction for Policy Search, Bachelor Thesis.
- Brandl, S. (2014). Learning to Pour Using Warped Features, Bachelor Thesis.
- Hesse, T. (2014). Spectral Learning of Hidden Markov Models, Bachelor Thesis.
- Hochlaender, A. (2014). Deep Learning for Reinforcement Learning in Pacman, Bachelor Thesis.
- Hoelscher, J. (2014). Tactile Exploration of Object Properties, Bachelor Thesis.
- Huhnstock, N. (2014). Tactile Sensing for Manipulation, Bachelor Thesis.
- Laux, M. (2014). Online Feature Learning for Reinforcement Learning, Bachelor Thesis.
- Luck, K. (2014). Latent Space Reinforcement Learning, Bachelor Thesis.
- Mattmann, A. (2014). Modeling How To Catch Flying Objects: Optimality Vs. Heuristics, Bachelor Thesis.
- Schroecker, Y. (2014). Artificial Curiosity for Motor Skill Learning, Bachelor Thesis.
- Smyk, M. (2014). Learning Generalizable Models for Compliant Robots, Bachelor Thesis.
- Thai, H.L. (2014). Laplacian Mesh Editing for Interaction Learning, Bachelor Thesis.
- von Willig, J. (2014). Reinforcement Learning for Heros of Newerth, Bachelor Thesis.
- Notz, D. (2013). Reinforcement Learning for Planning in High-Dimensional Domains, Bachelor Thesis.
- Pfretzschner, B. (2013). Autonomous Car Driving using a Low-Cost On-Board Computer, Bachelor Thesis.
- Schoengen, S. (2013). Visual feature learning for interactive segmentation, Bachelor Thesis.
- Distler, M. (2012). Koennen Lernalgorithmen interagieren aehnlich wie im Gehirn?, Bachelor Thesis.
- Hensch, P. (2012). Comparing Reinforcement Learning Algorithms on Tic-Tac-Toe, Bachelor Thesis.
- Hess, S. (2012). Levitation Sphere, Bachelor Thesis.
- Sharma, D. (2012). Combining Reinforcement Learning and Feature Extraction, Bachelor Thesis.
- Zimpfer, A. (2012). Vergleich verschiedener Lernalgorithmen auf einem USB-Missilelauncher, Bachelor Thesis.
Honors Theses and Advanced Design Projects
- Sperling, J. (2021). Learning Robot Grasping of Industrial Work Pieces using Dense Object Descriptors, Honors Thesis.
- Smyk, M. (2016). Model-based Control and Planning on Real Robots, Honors Thesis.
- Koert, D. (2015). Inverse Kinematics for Optimal Human-Robot Collaboration, Honors Thesis.
- Abdulsamad, H.; Buchholz, T.; Croon, T.; El Khoury, M. (2014). Playing Tetherball with Compliant Robots, Advanced Design Project.
- Ho, D.; Kisner, V. (2014). Trajectory Tracking Controller for a 4-DoF Flexible Joint Robotic Arm, Advanced Design Project.
Completed Seminar Theses
- Alles, I. (2012). Models for Biological Motor Control: Modules of Movements, Seminar Thesis, Proceedings of the Robot Learning Seminar.
- Arenz, O. (2012). Extensive Games, Seminar Thesis, Proceedings of the Autonomous Learning Systems Seminar.
- Arenz, O. (2013). Inverse Optimal Control, Seminar Thesis, Proceedings of the Robot Learning Seminar.
- Dann, C. (2012). Algorithms for Fast Gradient Temporal Difference Learning, Seminar Thesis, Proceedings of the Autonomous Learning Systems Seminar.
- Dittmar, D. (2013). Slice Sampling, Seminar Thesis, Proceedings of the Robot Learning Seminar.
- Englert, P. (2012). Locally Weighted Learning, Seminar Thesis, Proceedings of the Autonomous Learning Systems Seminar.
- Fischer, A. (2012). Inverse Reinforcement Learning, Seminar Thesis, Proceedings of the Autonomous Learning Systems Seminar.
- Gabriel, A. (2012). An introduction to Structural Learning - A new approach in Reinforcement Learning, Seminar Thesis, Proceedings of the Robot Learning Seminar.
- Glaser, C. (2012). Learning in Reality: A case study of Stanley, the robot that Won the DARPA Challenge, Seminar Thesis, Proceedings of the Robot Learning Seminar.
- Gopalan, N. (2012). Gaussian Process Latent Variable Models for Dimensionality Reduction and Time Series Modeling, Seminar Thesis, Proceedings of the Robot Learning Seminar.
- Graber, T. (2012). Models for Biological Motor Control: Optimality Principles, Seminar Thesis, Proceedings of the Robot Learning Seminar.
- Hardock, S. (2012). Applications in Robot Helicopter Acrobatics, Seminar Thesis, Proceedings of the Autonomous Learning Systems Seminar.
- Isaak, J. (2012). Interaction Learning, Seminar Thesis, Proceedings of the Robot Learning Seminar.
- Kruk, S. (2013). Planning with Multiple Agents, Seminar Thesis, Proceedings of the Autonomous Learning Systems Seminar.
- Kunz, F. (2013). An Introduction to Temporal Difference Learning, Seminar Thesis, Proceedings of the Autonomous Learning Systems Seminar.
- Kutschke, M. (2012). Imitation Learning, Seminar Thesis, Proceedings of the Autonomous Learning Systems Seminar.
- Lioutikov, R. (2012). Machine learning and the brain, Seminar Thesis, Proceedings of the Robot Learning Seminar.
- Mindt, M. (2012). Learning robot control, Seminar Thesis, Proceedings of the Robot Learning Seminar.
- Mogk, R. (2012). Efficient Planning under Uncertainty with Macro-actions, Seminar Thesis, Proceedings of the Robot Learning Seminar.
- Mueck, J. (2012). Learning physical Models of Robots, Seminar Thesis, Proceedings of the Autonomous Learning Systems Seminar.
- Pignede, T. (2012). Evolution of Reinforcement Learning in Games or How to Win against Humans with Intelligent Agents, Seminar Thesis, Proceedings of the Autonomous Learning Systems Seminar.
- Ploetz, T. (2012). Deterministic Approximation Methods in Bayesian Inference, Seminar Thesis, Proceedings of the Robot Learning Seminar.
- Reubold, J. (2012). Kernel Descriptors in comparison with Hierarchical Matching Pursuit, Seminar Thesis, Proceedings of the Robot Learning Seminar.
- Schnell, F. (2013). Hierarchical Reinforcement Learning in Robot Control, Seminar Thesis, Proceedings of the Robot Learning Seminar.
- Schoenberger, D. (2012). Planning in POMDPs, Seminar Thesis, Proceedings of the Autonomous Learning Systems Seminar.
- Schroecker, Y. (2013). Planning for Relational Rules, Seminar Thesis, Proceedings of the Autonomous Learning Systems Seminar.
- Stark, S. (2012). Do Reinforcement Learning Models Explain Neural Learning?, Seminar Thesis, Proceedings of the Autonomous Learning Systems Seminar.
- Stein, A. (2013). Learning Robot Locomotion, Seminar Thesis, Proceedings of the Autonomous Learning Systems Seminar.
- Swiezinski, L. (2013). Lifecycle of a Jeopardy Question Answered by Watson DeepQA, Seminar Thesis, Proceedings of the Autonomous Learning Systems Seminar.
- Thiel, T. (2012). Learning in Robot Soccer, Seminar Thesis, Proceedings of the Robot Learning Seminar.
- Tschirsich, M. (2013). Learning Robot Control, Seminar Thesis, Proceedings of the Robot Learning Seminar.
- Viering, M. (2012). Hierarchical Reinforcement Learning in Robot Control, Seminar Thesis, Proceedings of the Robot Learning Seminar.
- Will, K. (2013). Autonomous Chess-Playing, Seminar Thesis, Proceedings of the Autonomous Learning Systems Seminar.
- Zoellner, M. (2013). Reinforcement Learning in Games, Seminar Thesis, Proceedings of the Autonomous Learning Systems Seminar.
Completed IP Projects
Topic | Students | Advisor |
---|---|---|
Visuotactile Shear Force Estimation | Erik Helmut, Luca Dziarski | Niklas Funk, Boris Belousov |
AI Olympics with RealAIGym | Tim Faust, Erfan Aghadavoodi, Habib Maraqten | Boris Belousov |
Solving Insertion Tasks with RL in the Real World | Nick Striebel, Adriaan Mulder | Joao Carvalho |
Student-Teacher Learning for simulated Quadrupeds | Keagan Holmes, Oliver Griess, Oliver Grein | Nico Bohlinger |
Reinforcement Learning for Contact Rich Manipulation | Dustin Gorecki | Aiswarya Menon, Arjun Datta, Suman Pal |
Autonomous Gearbox Assembly by Assembly by Disassembly | Jonas Chaselon, Julius Thorwarth | Aiswarya Menon, Felix Kaiser, Arjun Datta |
Learning to Assemble from Instruction Manual | Erik Rothenbacker, Leon De Andrade, Simon Schmalfuss | Aiswarya Menon, Vignesh Prasad, Felix Kaiser, Arjun Datta |
Black-box System Identification of the Air Hockey Table | Shihao Li, Yu Wang | Theo Gruner, Puze Liu |
Unveiling the Unseen: Tactile Perception and Reinforcement Learning in the Real World | Alina Böhm, Inga Pfenning, Janis Lenz | Daniel Palenicek, Theo Gruner, Tim Schneider |
XXX: eXploring X-Embodiment with RT-X | Daniel Dennert, Christian Scherer, Faran Ahmad | Daniel Palenicek, Theo Gruner, Tim Schneider, Maximilian Tölle |
Kinodynamic Neural Planner for Robot Air Hockey | Niclas Merten | Puze Liu |
XXX: eXploring X-Embodiment with RT-X | Tristan Jacobs | Daniel Palenicek, Theo Gruner, Tim Schneider, Maximilian Tölle |
Reactive Human-to-Robot Handovers | Fabian Hahne | Vignesh Prasad, Alap Kshirsagar |
Analysis of multimodal goal prompts for robot control | Max Siebenborn | Aditya Bhatt, Maximilian Tölle |
Control Barrier Functions for Assistive Teleoperation | Yihui Huang, Yuanzheng Sun | Berk Gueler, Kay Hansel |
Learning Torque Control for Quadrupeds | Daniel Schmidt, Lina Gaumann | Nico Bohlinger |
Robot Learning for Dynamic Motor Skills | Marcus Kornamann, Qimeng He | Kai Ploeger, Alap Kshirsagar |
Q-Ensembles as a Multi-Armed Bandit | Henrik Metternich | Ahmed Hendawy, Carlo D'Eramo |
Pendulum Acrobatics | Florian Wolf | Kai Ploeger, Pascal Klink |
Learn to play Tangram | Max Zimmermann, Marius Zöller, Andranik Aristakesyan | Kay Hansel, Niklas Funk |
Characterizing Fear-induced Adaptation of Balance by Inverse Reinforcement Learning | Zeyuan Sun | Alap Kshirsagar, Firas Al-Hafez |
Tactile Environment Interaction | Changqi Chen, Simon Muchau, Jonas Ringsdorf | Niklas Funk |
Latent Generative Replay in Continual Learning | Marcel Mittenbühler | Ahmed Hendawy, Carlo D'Eramo |
Memory-Free Continual Learning | Dhruvin Vadgama | Ahmed Hendawy, Carlo D'Eramo |
Simulation of Vision-based Tactile Sensors | Duc Huy Nguyen | Boris Belousov, Tim Schneider |
Learning Bimanual Robotic Grasping | Hanjo Schnellbächer, Christoph Dickmanns | Julen Urain De Jesus, Alap Kshirsagar |
Learning Deep probability fields for planning and control | Felix Herrmann, Sebastian Zach | Davide Tateo, Georgia Chalvatzaki, Jacopo Banfi |
On Improving the Reliability of the Baseline Agent for Robotic Air Hockey | Haozhe Zhu | Puze Liu |
Self-Play Reinforcement Learning for High-Level Tactics in Robot Air Hockey | Yuheng Ouyang | Puze Liu, Davide Tateo |
Robot Drawing With a Sense of Touch | Noah Becker, Zhijingshui Yang, Jiaxian Peng | Boris Belousov, Mehrzad Esmaeili |
Black-Box System Identification of the Air Hockey Table | Anna Klyushina, Marcel Rath | Theo Gruner, Puze Liu |
Autonomous Basil Harvesting | Jannik Endres, Erik Gattung, Jonathan Lippert | Aiswarya Menon, Felix Kaiser, Arjun Vir Datta, Suman Pal |
Latent Tactile Representations for Model-Based RL | Eric Krämer | Daniel Palenicek, Theo Gruner, Tim Schneider |
Model Based Multi-Object 6D Pose Estimation | Helge Meier | Felix Kaiser, Arjun Vir Datta, Suman Pal |
Reinforcement Learning for Contact Rich Manipulation | Noah Farr, Dustin Gorecki | Aiswarya Menon, Arjun Vir Datta, Suman Pal |
Measuring Task Similarity using Learned Features | Henrik Metternich | Ahmed Hendawy, Pascal Klink, Carlo D'Eramo |
Interactive Semi-Supervised Action Segmentation | Martina Gassen, Erik Prescher, Frederic Metzler | Lisa Scherf, Felix Kaiser, Vignesh Prasad |
Kinematically Constrained Humanlike Bimanual Robot Motion | Yasemin Göksu, Antonio De Almeida Correia | Vignesh Prasad, Alap Kshirsagar |
Control and System identification for Unitree A1 | Lu Liu | Junning Huang, Davide Tateo |
System identification and control for Telemax manipulator | Kilian Feess | Davide Tateo, Junning Huang |
Tactile Active Exploration of Object Shapes | Irina Rath, Dominik Horstkötter | Tim Schneider, Boris Belousov, Alap Kshirsagar |
Object Hardness Estimation with Tactile Sensors | Mario Gomez, Frederik Heller | Alap Kshirsagar, Boris Belousov, Tim Schneider |
Task and Motion Planning for Sequential Assembly | Paul Hallmann, Nicolas Nonnengießer | Boris Belousov, Tim Schneider, Yuxi Liu |
A Digital Framework for Interlocking SL-Blocks Assembly with Robots | Bingqun Liu | Mehrzad Esmaeili, Boris Belousov |
Learn to play Tangram | Max Zimmermann, Dominik Marino, Maximilian Langer | Kay Hansel, Niklas Funk |
Learning the Residual Dynamics using Extended Kalman Filter for Puck Tracking | Haoran Ding | Puze Liu, Davide Tateo |
ROS Integration of Mitsubishi PA 10 robot | Jonas Günster | Puze Liu, Davide Tateo |
6D Pose Estimation and Tracking for Ubongo 3D | Marek Daniv | Joao Carvalho, Suman Pal |
Task Taxonomy for robots in household | Amin Ali, Xiaolin Lin | Snehal Jauhri, Ali Younes |
Task and Motion Planning for Sequential Assembly | Paul Hallmann, Patrick Siebke, Nicolas Nonnengießer | Boris Belousov, Tim Schneider, Yuxi Liu |
Learning the Gait for Legged Robot via Safe Reinforcement Learning | Joshua Johannson, Andreas Seidl Fernandez | Puze Liu, Davide Tateo |
Active Perception for Mobile Manipulation | Sophie Lueth, Syrine Ben Abid, Amine Chouchane | Snehal Jauhri |
Combining RL/IL with CPGs for Humanoid Locomotion | Henri Geiss | Firas Al-Hafez, Davide Tateo |
Multimodal Attention for Natural Human-Robot Interaction | Aleksandar Tatalovic, Melanie Jehn, Dhruvin Vadgama, Tobias Gockel | Oleg Arenz, Lisa Scherf |
Hybrid Motion-Force Planning on Manifolds | Chao Jin, Peng Yan, Liyuan Xiang | An Thai Le, Junning Huang |
Stability analysis for control algorithms of Furuta Pendulum | Lu Liu, Jiahui Shi, Yuheng Ouyang | Junning Huang, An Thai Le |
Multi-sensorial reinforcement learning for robotic tasks | Rickmer Krohn | Georgia Chalvatzaki, Snehal Jauhri |
Learning Behavior Trees from Video | Nick Dannenberg, Aljoscha Schmidt | Lisa Scherf, Suman Pal |
Subgoal-Oriented Shared Control | Zhiyuan Gao, Fengyun Shao | Kay Hansel |
Learn to play Tangram | Maximilian Langer | Kay Hansel, Niklas Funk |
Theory of Mind Models for HRI under partial Observability | Franziska Herbert, Tobias Niehues, Fabian Kalter | Dorothea Koert, Joni Pajarinen, David Rother |
Learning Safe Human-Robot Interaction | Zhang Zhang | Puze Liu, Snehal Jauhri |
Active-sampling for deep Multi-Task RL | Fabian Wahren | Carlo D'Eramo, Georgia Chalvatzaki |
Interpretable Reinforcement Learning | Patrick Vimr | Davide Tateo, Riad Akrour |
Optimistic Actor Critic | Niklas Kappes, Pascal Herrmann | Joao Carvalho |
Active Visual Search with POMDPs | Jascha Hellwig, Mark Baierl | Joao Carvalho, Julen Urain De Jesus |
Utilizing 6D Pose-Estimation over ROS | Johannes Weyel | Julen Urain De Jesus |
Learning Deep Heuristics for Robot Planning | Dominik Marino | Tianyu Ren |
Learning Behavior Trees from Videos | Johannes Heeg, Aljoscha Schmidt and Adrian Worring | Suman Pal, Lisa Scherf |
Learning Decisions by Imitating Human Control Commands | Jonas Günster, Manuel Senge | Junning Huang |
Combining Self-Paced Learning and Intrinsic Motivation | Felix Kaiser, Moritz Meser, Louis Sterker | Pascal Klink |
Self-Paced Reinforcement Learning for Sim-to-Real | Fabian Damken, Heiko Carrasco | Fabio Muratore |
Policy Distillation for Sim-to-Real | Benedikt Hahner, Julien Brosseit | Fabio Muratore |
Neural Posterior System Identification | Theo Gruner, Florian Wiese | Fabio Muratore, Boris Belousov |
Synthetic Dataset Generation for Articulation Prediction | Johannes Weyel, Niklas Babendererde | Julen Urain, Puze Liu |
Guided Dimensionality Reduction for Black-Box Optimization | Marius Memmel | Puze Liu, Davide Tateo |
Learning Laplacian Representations for continuous MCTS | Daniel Mansfeld, Alex Ruffini | Tuan Dam, Georgia Chalvatzaki, Carlo D'Eramo |
Object Tracking using Depth Camera | Leon Magnus, Svenja Menzenbach, Max Siebenborn | Niklas Funk, Boris Belousov, Georgia Chalvatzaki |
GNNs for Robotic Manipulation | Fabio d'Aquino Hilt, Jan Kolf, Christian Weiland | Joao Carvalho |
Benchmarking advances in MCTS in Go and Chess | Lukas Schneider | Tuan Dam, Carlo D'Eramo |
Architectural Assembly: Simulation and Optimization | Jan Schneider | Boris Belousov, Georgia Chalvatzaki |
Probabilistic Object Tracking using Depth Camera | Jan Emrich, Simon Kiefhaber | Niklas Funk, Boris Belousov, Georgia Chalvatzaki |
Bayesian Optimization for System Identification in Robot Air Hockey | Chen Xue, Verena Sieburger | Puze Liu, Davide Tateo |
Benchmarking MPC Solvers in the Era of Deep Reinforcement Learning | Darya Nikitina, Tristan Schulz | Joe Watson |
Enhancing Attention Aware Movement Primitives | Artur Kruk | Dorothea Koert |
Towards Semantic Imitation Learning | Pengfei Zhao | Julen Urain, Georgia Chalvatzaki |
Can we use Structured Inference Networks for Human Motion Prediction? | Hanyu Sun, Liu Lanmiao | Julen Urain, Georgia Chalvatzaki |
Reinforcement Learning for Architectural Combinatorial Optimization | Jianpeng Chen, Yuxi Liu, Martin Knoll, Leon Wietschorke | Boris Belousov, Georgia Chalvatzaki, Bastian Wibranek |
Architectural Assembly With Tactile Skills: Simulation and Optimization | Tim Schneider, Jan Schneider | Boris Belousov, Georgia Chalvatzaki, Bastian Wibranek |
Bayesian Last Layer Networks | Jihao Andreas Lin | Joe Watson, Pascal Klink |
BATBOT: BATter roBOT for Baseball | Yannick Lavan, Marcel Wessely | Carlo D'Eramo |
Benchmarking Deep Reinforcement Learning | Benedikt Volker | Davide Tateo, Carlo D'Eramo, Tianyu Ren |
Model Predictive Actor-Critic Reinforcement Learning of Robotic Tasks | Daljeet Nandha | Georgia Chalvatzaki |
Dimensionality Reduction for Reinforcement Learning | Jonas Jäger | Michael Lutter |
From exploration to control: learning object manipulation skills through novelty search and local adaptation | Leon Keller | Svenja Stark, Daniel Tanneberg |
Robot Air-Hockey | Patrick Lutz | Puze Liu, Davide Tateo |
Learning Robotic Grasp of Deformable Object | Mingye Zhu, Yanhua Zhang | Tianyu Ren |
Teach a Robot to solve puzzles with intrinsic motivation | Ali Karpuzoglu | Georgia Chalvatzaki, Svenja Stark |
Inductive Biases for Robot Learning | Rustam Galljamov | Boris Belousov, Michael Lutter |
Accelerated Mirror Descent Policy Search | Maximilian Hensel | Boris Belousov, Tuan Dam |
Foundations of Adversarial and Robust Learning | Janosch Moos, Kay Hansel | Svenja Stark, Hany Abdulsamad |
Likelihood-free Inference for Reinforcement Learning | Maximilian Hensel, Kai Cui | Boris Belousov |
Risk-Aware Reinforcement Learning | Maximillian Kircher, Angelo Campomaggiore, Simon Kohaut, Dario Perrone | Samuele Tosatto, Dorothea Koert |
 | Jonas Eschmann, Robin Menzenbach, Christian Eilers | Boris Belousov, Fabio Muratore |
Learning Symbolic Representations for Abstract High-Level Planning | Zhiyuan Hu, Claudia Lölkes, Haoyi Yang | Svenja Stark, Daniel Tanneberg |
Learning Perceptual ProMPs for Catching Balls | Axel Patzwahl | Dorothea Koert, Michael Lutter |
Bayesian Inference for Switching Linear Dynamical Systems | Markus Semmler, Stefan Fabian | Hany Abdulsamad |
Characterization of WAM Dynamics | Kai Ploeger | Dorothea Koert, Michael Lutter |
Deep Reinforcement Learning for playing Starcraft II | Daniel Palenicek, Marcel Hussing, Simon Meister | Filipe Veiga |
Enhancing Exploration in High-Dimensional Environments | Lu Wan, Shuo Zhang | Simone Parisi |
Building a Grasping Testbed | Devron Williams | Oleg Arenz |
Online Dynamic Model Learning | Pascal Klink | Hany Abdulsamad, Alexandros Paraschos |
Spatio-spectral Transfer Learning for Motor Performance Estimation | Karl-Heinz Fiebig | Daniel Tanneberg |
From Robots to Cobots | Michael Burkhardt, Moritz Knaust, Susanne Trick | Dorothea Koert, Marco Ewerton |
Learning Hand-Kinematics | Sumanth Venugopal, Deepak Singh Mohan | Gregor Gebhardt |
Goal-directed reward generation | Alymbek Sadybakasov | Boris Belousov |
Learning Grammars for Sequencing Movement Primitives | Kim Berninger, Sebastian Szelag | Rudolf Lioutikov |
Learning Deep Feature Spaces for Nonparametric Inference | Philipp Becker | Gregor Gebhardt |
Lazy skill learning for cleaning up a table | Lejla Nukovic, Moritz Fuchs | Svenja Stark |
Reinforcement Learning for Gait Learning in Quadrupeds | Kai Ploeger, Zinan Liu | Svenja Stark |
Optimal Control for Biped Locomotion | Martin Seiler, Max Kreischer | Hany Abdulsamad |
Semi-Autonomous Tele-Operation | Nick Heppert, Marius, Jeremy Tschirner | Oleg Arenz |
Teaching People how to Write Japanese Characters | David Rother, Jakob Weimar, Lars Lotter | Marco Ewerton |
Local Bayesian Optimization | Dmitry Sorokin | Riad Akrour |
Bayesian Deep Reinforcement Learning: Tools and Methods | Simon Ramstedt | Simone Parisi |
Controlled Slip for Object Release | Steffen Kuchelmeister, Albert Schotschneider | Filipe Veiga |
Learn intuitive physics from videos | Yunlong Song, Rong Zhi | Boris Belousov |
Learn an Assembling Task with Swarm Robots | Kevin Daun, Marius Schnaubelt | Gregor Gebhardt |
Learning To Sequence Movement Primitives | Christoph Mayer | Christian Daniel |
Learning versatile solutions for Table Tennis | Felix End | Gerhard Neumann, Riad Akrour |
Learning to Control Kilo-Bots with a Flashlight | Alexander Hendrich, Daniel Kauth | Gregor Gebhardt |
Playing Badminton with Robots | J. Tang, T. Staschewski, H. Gou | Boris Belousov |
Juggling with Robots | Elvir Sabic, Alexander Wölker | Dorothea Koert |
Learning and control for the bipedal walker FaBi | Manuel Bied, Felix Treede, Felix Pels | Roberto Calandra |
Finding visual kernels | Fide Marten, Dominik Dienlin | Herke van Hoof |
Feature Selection for Tetherball Robot Games | Xuelei Li, Jan Christoph Klie | Simone Parisi |
Inverse Reinforcement Learning of Flocking Behaviour | Maximilian Maag, Robert Pinsler | Oleg Arenz |
Control and Learning for a Bipedal Robot | Felix Treede, Phillip Konow, Manuel Bied | Roberto Calandra |
Perceptual coupling with ProMPs | Johannes Geisler, Emmanuel Stapf | Alexandros Paraschos |
Learning to balance with the iCub | Moritz Nakatenus, Jan Geukes | Roberto Calandra |
Generalizing Models for a Compliant Robot | Mike Smyk | Herke van Hoof |
Learning Minigolf with the BioRob | Florian Brandherm | Marco Ewerton |
iCub Telecontrol | Lars Fritsche, Felix Unverzagt | Roberto Calandra |
REPS for maneuvering in Robocup | Jannick Abbenseth, Nicolai Ommer | Christian Daniel |
Learning Ball on a Beam on the KUKA lightweight arms | Bianca Loew, Daniel Wilberts | Christian Daniel |
Sequencing of DMPs for Task- and Motion Planning | Markus Sigg, Fabian Faller | Rudolf Lioutikov |
Tactile Exploration and Mapping | Thomas Arnreich, Janine Hoelscher | Tucker Hermans |
Multiobjective Reinforcement Learning on Tetherball BioRob | Alexander Blank, Tobias Viernickel | Simone Parisi |
Semi-supervised Active Grasp Learning | Simon Leischnig, Stefan Lüttgen | Oliver Kroemer |