We offer these current topics directly to Bachelor and Master students at TU Darmstadt; feel free to DIRECTLY contact the thesis advisor if you are interested in one of these topics. **Excellent external students from other universities may be accepted, but please email Jan Peters first. Note that we cannot provide funding for any of these thesis projects.**

We highly recommend that you take either our robotics and machine learning lectures (Robot Learning, Statistical Machine Learning) or those of our colleagues (Grundlagen der Robotik, Probabilistic Graphical Models and/or Deep Learning). Even more important to us is that you take both Robot Learning: Integrated Project, Part 1 (Literature Review and Simulation Studies) and Part 2 (Evaluation and Submission to a Conference) before doing a thesis with us.

In addition, we are usually happy to devise new topics on request to suit the abilities of excellent students. Please **DIRECTLY** contact the thesis advisor if you are interested in one of these topics. When you contact the advisor, it would be nice if you could mention (1) **WHY** you are interested in the topic (dreams, parts of the problem, etc), and (2) **WHAT** makes you special for the projects (e.g., class work, project experience, special programming or math skills, prior work, etc.). Supplementary materials (CV, grades, etc) are highly appreciated. Of course, such materials are not mandatory but they help the advisor to see whether the topic is too easy, just about right or too hard for you.

**Only contact *ONE* potential advisor at a time! If you contact a second advisor without first concluding discussions with the first one (i.e., deciding for or against the thesis with her or him), we may not consider you at all. Only if you are super excited about at most two topics, send an email to both supervisors, so that the supervisors are aware of the additional interest.**

**Scope:** Master's thesis **Advisor:** Tuan Dam, Carlo D'Eramo, Joni Pajarinen **Start:** ASAP **Topic:** Applying reinforcement learning to autonomous driving is a promising but challenging research direction due to the high uncertainty and varying environmental conditions in the task, so efficient reinforcement learning is needed. Recent work has suggested solving the Bellman optimality equation with stability guarantees [2], but unfortunately no guarantee of zero bias has been proposed in this context, making reinforcement learning susceptible to getting stuck in dangerous solutions. In this work, we formulate the Bellman equation as a convex-concave saddle point problem and solve it using a newly proposed accelerated primal-dual algorithm [3]. We will test the algorithm on benchmark problems and on an autonomous driving task such as the one shown on the right (video: https://youtu.be/Hp8Dz-Zek2E), where an efficient, unbiased solution is needed.
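As a sketch of the reformulation (our notation, following the Fenchel-dual smoothing idea used in SBEED [2]; the thesis may use a different variant), the squared Bellman residual can be written as a saddle point over an auxiliary dual function rho:

```latex
% Bellman residual under a state-action distribution \mu:
\delta_V(s,a) \;=\; r(s,a) + \gamma\,\mathbb{E}_{s'}\!\left[V(s')\right] - V(s)

% Using the pointwise identity x^2 = \max_{\rho}\left(2\rho x - \rho^2\right):
\min_V \; \mathbb{E}_{(s,a)\sim\mu}\!\left[\delta_V(s,a)^2\right]
\;=\;
\min_V \, \max_{\rho} \;
\mathbb{E}_{(s,a)\sim\mu}\!\left[\, 2\,\rho(s,a)\,\delta_V(s,a) \;-\; \rho(s,a)^2 \,\right]
```

The inner objective is concave in rho and, for linear parameterizations of V, convex in V, which is the convex-concave setting targeted by the primal-dual algorithm of [3].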

[1] Ofir Nachum, Yinlam Chow, and Mohammad Ghavamzadeh. Path consistency learning in Tsallis entropy regularized MDPs. arXiv preprint arXiv:1802.03501, 2018.

[2] Bo Dai et al. SBEED: Convergent reinforcement learning with nonlinear function approximation. International Conference on Machine Learning, PMLR, 2018.

[3] Erfan Yazdandoost Hamedani and Necdet Serhat Aybat. A primal-dual algorithm for general convex-concave saddle point problems. arXiv preprint arXiv:1803.01401, 2018.

**Scope:** Master Thesis **Advisor:** Georgia Chalvatzaki **Added:** 2020-10-19 **Start:** ASAP **Topic:** In this thesis, we will study the extraction of orientation-invariant features of objects by means of latent representation learning for mobile grasping (i.e., grasping policies for mobile manipulators). Contrary to approaches with static camera installations and stationary robot arms, which train in an end-to-end fashion and overfit to a single grasp solution per object, we want to sample points across the object's area and evaluate their quality. We will address this challenge by investigating how to extract latent object features that are invariant to the camera pose induced by the mobile manipulator's different viewpoints, leveraging latent space sampling and energy-based models (in a self-supervised learning setting). Subsequently, we will investigate the use of the objects' latent features for learning mobile grasping policies with deep RL algorithms. The ultimate goal is the fluent coordination of the mobile manipulator's whole pose for performing efficient grasping of objects. Starting from building the framework in a simulation environment, by the end of the thesis you will have the chance to transfer your algorithms to our mobile manipulator robot, TIAGo++.

**Scope:** Master Thesis **Advisor:** Georgia Chalvatzaki **Added:** 2020-10-19 **Start:** ASAP **Topic:** Task and Motion Planning (TAMP) is a fundamental research problem in robotics and AI. It combines discrete task decisions about objects and actions with continuous motion decisions about paths. Although TAMP is a long-studied problem, most works separate task actions from geometric motions. The need for robots to perform in unstructured environments while executing several tasks has led to the integration of task-level descriptors with low-level geometric information. Generalization over different environments, tasks, and objects remains an open issue for TAMP in mobile manipulation. TAMP considers the need for automated robot planning in which the robot is endowed only with a model of the world; it inherits the challenges of both motion planning and AI planning, and spans continuous and combinatorially large spaces. Moreover, we have to consider continuous constraints, kinematic reachability, joint limits, and collisions, all over long horizons for executing multi-stage tasks. Recently, a novel approach called PDDLstream was introduced that augments high-level discrete task planning with continuous parameters describing motion constraints involving robot configurations, object poses, and robot trajectories. In this master thesis, we are going to explore ways to learn to plan such sequences of actions. Given a structure of sub-tasks in a task graph, we will study the ability of neural networks to serve as value functions for learning a policy that sequences and adapts the sub-task execution and its continuous parameters. We will mainly explore the ability of graph neural networks to dynamically represent the task graph. In a framework like PDDLstream, where each discrete state in the task graph represents a goal in the workspace, learning goal-conditioned value functions will allow us to learn to plan from simple to more complex and hence long-horizon tasks. We are going to work both in simulation and potentially on our own bimanual mobile manipulator robot TIAGo++.
Please contact me for further information on the topic.

**Scope:** Master Thesis **Advisor:** Georgia Chalvatzaki **Added:** 2020-10-19 **Start:** ASAP **Topic:** In this thesis, we are going to explore the intriguing field of integrating differentiable physics engines with deep learning. Our goal is to create a goal generator that perturbs the physical properties of simulated objects, in order to combine it with methods for learning to grasp. We are going to investigate how we can infer different physical properties of objects (e.g., mass, shape, deformation). In a generative adversarial setting, the generator shall create objects with different physical properties, and the discriminator shall evaluate whether the object properties predicted by the generator are realistic, by classifying them as real or fake compared to a real input. Prior works on perceiving/altering objects' properties, like Galileo and 3D-PhysNet, will be our starting point. This challenging topic combines work in computer graphics, physics, and deep learning. It can also be split into two distinct complementary theses (encouraging team-work), if requested. Please contact me for further information.

**Scope:** Bachelor/Master Thesis **Advisor:** Joao Carvalho, Fabio Muratore **Added:** 2020-08-05 **Start:** ASAP **Topic:** Exploration in parameter space is extremely important for robotics. It allows us to roll out a deterministic policy, instead of injecting noise at every step in the environment with a stochastic policy (which could damage the robot).
Episodic policy search is one way to do parameter exploration and can be viewed as optimizing the parameters of a search distribution in order to select policies that lead to higher returns. Current methods solve this problem with gradient-based (PGPE, NES) or gradient-free approaches (REPS, PoWER). The former need to compute the derivative of an expectation w.r.t. the distributional parameters (e.g., the mean of a Gaussian). Most algorithms use the log-ratio trick for this, which is known to produce high-variance estimates. In this thesis you will look into another estimator from this class, the Measure-Valued Derivative, study a new method, and benchmark it against state-of-the-art algorithms. The algorithms you develop in your thesis will not only be applicable to robotic tasks, but can also be beneficial for optimizing many other objective functions in machine learning.
If you are interested or have any questions, feel free to reach us.
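To make the contrast concrete, here is a minimal toy sketch (our own example, not project code) of the measure-valued derivative of a Gaussian mean, which avoids the high-variance score-function (log-ratio) term: the derivative of the Gaussian density w.r.t. its mean decomposes into a difference of two Rayleigh-shifted densities.

```python
import numpy as np

# Sketch: measure-valued derivative (MVD) estimator of
# d/dmu E_{x ~ N(mu, sigma^2)}[f(x)].
# The density derivative w.r.t. mu splits into a positive part (mu + sigma*R)
# and a negative part (mu - sigma*R), with R ~ Rayleigh(1), scaled by a constant.

def mvd_gaussian_mean(f, mu, sigma, n_samples=100_000, rng=None):
    rng = np.random.default_rng(rng)
    r = rng.rayleigh(scale=1.0, size=n_samples)   # Rayleigh(1) samples
    c = 1.0 / (sigma * np.sqrt(2.0 * np.pi))      # normalizing constant of the decomposition
    # Difference of expectations under the positive and negative parts.
    return c * (f(mu + sigma * r).mean() - f(mu - sigma * r).mean())

# Example: f(x) = x^2, so E[f] = mu^2 + sigma^2 and the true gradient is 2*mu.
grad = mvd_gaussian_mean(lambda x: x**2, mu=1.5, sigma=0.3, rng=0)
```

Note that no derivative of f is needed and no log-density score term appears, which is where the variance reduction over the log-ratio trick comes from.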

**Scope:** Master's thesis **Advisor:** Michael Lutter, Joni Pajarinen **Start:** ASAP

**Topic:** Imagine a future where a user teaches a robot how to combine parts into objects or structures. Now imagine that the robot is able to go even further: build objects with desired properties, for example height, stability, or shape, using object parts it has not seen before. In this thesis, we use reinforcement learning and Monte Carlo Tree Search to train a robot to build novel objects from novel object parts, based on a database of previously demonstrated object part assemblies. Object parts and objects will be modeled as graphs where each graph node specifies to which kinds of other graph nodes it can be connected. Putting two object parts together then results in a larger graph merged from the two object part graphs. Experiments will be performed mainly in simulation, but, if desired, the approach can also be evaluated on a real robot. Suitable background knowledge for this thesis can be gained, for example, in the robot learning or reinforcement learning lectures.
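A purely illustrative sketch of the graph-merging idea (the node types, compatibility rule, and all names below are our own hypothetical choices, not the project's representation):

```python
# Illustrative sketch: object parts as graphs whose nodes carry connector
# types; assembling two parts merges their node and edge sets and adds an
# edge between one pair of compatible connectors.

def compatible(type_a, type_b, rules):
    return (type_a, type_b) in rules or (type_b, type_a) in rules

def assemble(part_a, part_b, joint, rules):
    """Merge two part graphs {'nodes': {name: type}, 'edges': set} at `joint`."""
    na, nb = joint
    assert compatible(part_a['nodes'][na], part_b['nodes'][nb], rules)
    return {
        'nodes': {**part_a['nodes'], **part_b['nodes']},
        'edges': part_a['edges'] | part_b['edges'] | {(na, nb)},
    }

# Hypothetical parts: a beam with two pegs, and a plate with one hole.
rules = {('peg', 'hole')}
beam = {'nodes': {'b1': 'peg', 'b2': 'peg'}, 'edges': {('b1', 'b2')}}
plate = {'nodes': {'p1': 'hole'}, 'edges': set()}
tower = assemble(beam, plate, joint=('b1', 'p1'), rules=rules)
```

The merged graph is again a part graph, so assembly can be applied recursively to build larger structures, which is what makes a tree search over assembly actions natural.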

**Scope:** Master's thesis **Advisor:** Tuan Dam, Joni Pajarinen **Start:** ASAP

**Topic:** Google DeepMind recently showed how Monte Carlo Tree Search (MCTS) combined with neural networks can be used to play Go at a super-human level. However, one disadvantage of MCTS is that the search tree explodes exponentially with respect to the planning horizon. In this Master thesis, the student will integrate the main advantage of MCTS, namely optimistic decision making, into a policy representation whose size is limited with respect to the planning horizon. The outcome will be an approach that can plan further into the future. The application domain will include partially observable problems where decisions can have far-reaching consequences.
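For reference, the "optimistic decision making" at the core of MCTS is usually the UCT rule; a minimal sketch (standard UCB1 form, not code from this project):

```python
import math

# Sketch of UCT action selection: pick the child maximizing an optimistic
# upper confidence bound on its value. Q[a] is the mean return of child a,
# N[a] its visit count, and c trades off exploration vs. exploitation.

def uct_select(Q, N, c=math.sqrt(2)):
    total = sum(N.values())
    def ucb(a):
        if N[a] == 0:
            return float('inf')   # always try unvisited actions first
        return Q[a] + c * math.sqrt(math.log(total) / N[a])
    return max(Q, key=ucb)

best = uct_select(Q={'left': 0.4, 'right': 0.6}, N={'left': 10, 'right': 10})
```

The exploration bonus shrinks as a child is visited more often, so the search becomes increasingly greedy on well-estimated branches.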

**Scope:** Master's thesis, Bachelor Thesis **Advisor:** Julen Urain De Jesus **Start:** Anytime

**Topic:** In recent years, deep generative models such as GANs, VAEs, and Normalizing Flows have demonstrated powerful capabilities for generating images. Meanwhile, the generative models used in robotics, motion primitives, have not yet taken the step towards deep models, which limits their generalization capabilities. In this project, we want to study the use of deep generative models for imitation learning in robotics in order to increase the generalization capabilities of motion primitives. During the project, we expect to obtain a motion primitive that can be adapted with respect to environment information given by images. See this write up for more details.

**Scope:** Master's thesis **Advisor:** Joe Watson **Start:** ASAP **Topic:**

Recent work has presented a control-as-inference formulation that frames optimal control as input estimation. The linear Gaussian assumption can be shown to be equivalent to the LQR solution, while approximate inference through linearization can be viewed as a Gauss–Newton method, similar to popular trajectory optimization methods (e.g. iLQR). However, the linearization approximation limits both the tolerable environment stochasticity and exploration during inference.
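As a sketch of the correspondence (our notation; the referenced formulation may differ in its details), with linear dynamics the quadratic cost terms can be read as Gaussian likelihoods of fictitious zero observations:

```latex
x_{t+1} = A x_t + B u_t + w_t, \qquad w_t \sim \mathcal{N}(0, \Sigma_w)

\exp\!\Big(-\tfrac{1}{2}\textstyle\sum_t x_t^\top Q\, x_t + u_t^\top R\, u_t\Big)
\;\propto\;
\prod_t \mathcal{N}\!\left(0 \,\middle|\, x_t,\, Q^{-1}\right)
        \mathcal{N}\!\left(0 \,\middle|\, u_t,\, R^{-1}\right)
```

Estimating the inputs u_{1:T} in this linear Gaussian model (e.g., by Kalman smoothing) then recovers the LQR solution, and iterating the inference with re-linearized dynamics yields the Gauss-Newton-like behavior of iLQR-style trajectory optimizers.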

The aim of this thesis is to use alternative approximate inference methods (e.g., quadrature, Monte Carlo, variational inference) and to investigate the benefits for stochastic optimal control and trajectory optimization. Ideally, prospective students are interested in optimal control, approximate inference methods, and model-based reinforcement learning.

**Scope:** Master's thesis **Advisor:** Julen Urain De Jesus, Puze Liu **Start:** ASAP **Topic:**

Many robotics tasks are multimodal. Grasping is one example: the robot can grasp an object in several different configurations. However, most episodic RL approaches are limited to Gaussian distributions.

In this project, we want to use deep reinforcement learning to learn complex distributions for our policies and to solve difficult multi-modal problems. Even though we will start exploring this problem in simulation, we expect, by the end of the thesis, to be able to adapt the algorithms to real robots.

**Scope:** Master's thesis, Bachelor's thesis **Advisor:** Michael Lutter **Start:** Anytime Soon **Topic:** One way to achieve reinforcement learning with few samples is model-based reinforcement learning, but historically these approaches have lacked the asymptotic performance of model-free approaches. Only very recently, two papers showed comparable asymptotic performance with lower sample complexity using probabilistic models composed of network ensembles.

Within this thesis, you should develop a probabilistic version of Deep Lagrangian Networks (Lutter et al., ICLR 2019), a physics-derived architecture that only allows physically plausible models, and use this probabilistic representation for model-based exploration and policy improvement. For the probabilistic version, you should use the deterministic and robust Bayesian network approach presented earlier this year (Wu et al., ICLR 2019).

Finally, you should demonstrate your sample-efficient approach on the physical Cartpole & Furuta Pendulum, learn the swing-up using only the physical system, and publish a paper about it :D. So if you are excited to try out Bayesian deep learning and want to get your hands dirty with model-based RL, this thesis is perfect for you. If you are interested, just message me (michael@robot-learning.de) and I am happy to discuss more details.

**TL;DR:**

- Extend DeLaN to Bayesian deep learning
- Use this probabilistic model for efficient exploration and policy improvement
- Impress everybody by learning the swing-up only using the physical cartpole
- Publish your thesis at a machine learning conference
- Good knowledge of machine learning & deep learning required
- Good programming skills in Python required

**Scope:** Master's Thesis **Advisor:** Julen Urain De Jesus, Hany Abdulsamad **Start:** Anytime **Topic:** Solving the infinite-horizon optimal control problem is very hard, and dynamic programming has to deal with the curse of dimensionality. To tackle the problem, infinite-horizon value functions or controllers have been learned with trajectory optimization techniques. The first works in this direction were done by Atkeson et al. two decades ago. One of the latest, most notable cases is Guided Policy Search, in which iLQR was applied in order to learn state-feedback controllers.

With the rise of Neural Ordinary Differential Equations (Neural ODEs) and their strong similarity to Pontryagin's Maximum Principle, this project aims to study the possibility of applying indirect methods from trajectory optimization as a basis for learning both infinite-horizon value functions and state-feedback controllers. See this write up for more details.

**Scope:** Master's or Bachelor's thesis **Advisor:** Dorothea Koert **Start:** ASAP **Topic:** In the context of the KoBo34 project, which aims to build an assistive robot for elderly people, we offer different thesis topics in the context of learning robot skills for human robot interaction as well as predicting human motions into the future and recognizing human intentions. If you are interested in this research area please contact me directly to discuss more concrete topics.

**Scope:** Master's or Bachelor's thesis **Advisor:** Riad Akrour, Oleg Arenz **Start:** ASAP **Topic:** Correlated exploration is any exploration mechanism that enforces correlation of the action noise with respect to time or states. Correlated exploration is important for robotics in order to reduce or eliminate jerkiness of exploration and maintain the physical integrity of the robot. Correlated exploration has been studied for low-dimensional policy representations [1, 2], and we have demonstrated the suitability of such a learning scheme, for specialized policies, directly on a robotics platform [3]. It has also been shown that correlated exploration can be applied to larger, neural-network-based policies [4]. However, the exploration scheme of [4], if seen as an episodic contextual policy search algorithm, is rather primitive in its adaptation of the exploration noise and does not offer the guarantees necessary to be applied directly on a robot. In this thesis, we propose to leverage our expertise in entropy-regularized policy search algorithms [5, 6] to improve over these shortcomings and to provide a safe and efficient correlated exploration algorithm for robotics. The successful candidate is expected to investigate the following topics:

- Set up a baseline by integrating the correlated exploration of [4] into recent versions of DDPG such as [7].
- Compare uncorrelated and correlated exploration on simulated tasks and on the Quanser robots.
- Improve over existing correlated exploration formulations by, for example, integrating the gradient update of DDPG into our well-founded formulations of entropy-regularized episodic policy search algorithms [5, 6].

The successful candidate is expected to conduct their thesis with scientific rigor and a drive for quality such that their work finds its place at a top machine learning or robotics conference.

[1] Rückstieß, T. et al.; State-dependent exploration for policy gradient methods; ECML 2008.

[2] van Hoof, H. et al.; Generalized exploration in policy search.; MLJ 2017.

[3] Parisi, S. et al.; Reinforcement learning vs human programming in tetherball robot games; IROS 2015.

[4] Plappert, M. et al.; Parameter space noise for exploration; ICLR 2018.

[5] Akrour, R. et al.; Model-free trajectory-based policy optimization with monotonic improvement; JMLR 2018.

[6] Arenz, O. et al.; Efficient gradient-free variational inference using policy search; ICML 2018.

[7] Fujimoto, S. et al.; Addressing function approximation error in actor-critic methods; ICML 2018.
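As a concrete illustration of temporally correlated action noise, here is a standard baseline sketch (the Ornstein-Uhlenbeck noise often paired with DDPG; note this is *not* the parameter-space scheme of [4]):

```python
import numpy as np

# Sketch: Ornstein-Uhlenbeck exploration noise. Successive samples are
# correlated in time (mean-reverting towards zero with rate theta), so the
# perturbed actions vary smoothly instead of jittering independently.

class OUNoise:
    def __init__(self, dim, theta=0.15, sigma=0.2, dt=1e-2, rng=None):
        self.theta, self.sigma, self.dt = theta, sigma, dt
        self.x = np.zeros(dim)
        self.rng = np.random.default_rng(rng)

    def sample(self):
        dx = (-self.theta * self.x * self.dt
              + self.sigma * np.sqrt(self.dt) * self.rng.standard_normal(self.x.shape))
        self.x = self.x + dx
        return self.x

noise = OUNoise(dim=2, rng=0)
samples = np.array([noise.sample() for _ in range(1000)])
```

Consecutive samples are strongly correlated, which is exactly the smoothness property that protects the robot's hardware during exploration.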

**Scope:** Master's thesis, Bachelor's thesis **Advisor:** Jan Peters, Ruth Stock-Homburg, Katharina Schneider **Start:** ASAP

**Topic:**
Companies in various industries have started introducing anthropomorphic, social robots that interact with customers by gesturing and showing facial expressions with their extremities and head. In this way, they have a social presence that, in turn, can create an emotional bond with the human within the interaction. Accordingly, physical and haptic contact between a social robot and a human is an important part of human-robot interaction. Handshaking is a simple human interaction, but it is a complex movement and can be applied in several different social contexts, such as greeting or congratulating. Therefore, an anthropomorphic, social robot that interacts with humans should be motor-intelligent and have the ability to show a human-like and authentic handshake behavior. While first theoretical frameworks about the human hand movement in handshaking have been investigated, their implementation for anthropomorphic robots using the handshake Turing test is not yet well understood. The thesis is embedded in the interdisciplinary FIF project "Handshake Turing Test – Androide robot vs. human." The aim of this thesis is, first, to survey the literature on theories of the human handshake and the handshake Turing test; second, to develop a concept of handshaking for an anthropomorphic, social robot; and third, to test the concept on our real anthropomorphic robot Elenoide (see picture).

**Scope:** Bachelor's thesis/Project **Advisor:** Davide Tateo **Start:** ASAP **Topic:**
Hierarchical Reinforcement Learning (HRL) is the field of Reinforcement Learning (RL) that considers structured agents. In this field, a high-level task is decomposed into simpler subtasks. The resulting control policy is represented as a hierarchy of policies, where each policy solves a subtask. While the original HRL literature focuses on how to exploit domain knowledge and structured exploration to speed up learning, more recent approaches, based on Deep Learning, focus on using the hierarchical structure to solve tasks that cannot be solved, or that are difficult to learn, with classical Deep RL approaches. While classical HRL approaches are particularly well suited for MDPs with finite state-action spaces, the more recent Deep HRL approaches can work in complex robotic tasks with continuous states and actions.

One major drawback of the recent literature is that Deep HRL approaches share one of the major issues of "flat" Deep RL: the resulting policy is difficult for humans to interpret and thus cannot be trusted in safety-critical applications, as we cannot analyze and predict the global behavior. Another major drawback of Deep HRL algorithms is that it is difficult to insert prior knowledge of the environment into the policy structure, making it even more difficult to apply these kinds of algorithms in real-world scenarios.

To solve these issues, we propose a novel HRL framework, inspired by control theory, where the design of the hierarchical agent is performed using block diagrams. This framework simplifies the design of hierarchical agents and proposes a different paradigm for HRL: we build structured agents that do not execute a policy following the stack principle (i.e., function calls), but are instead composed of a set of different parallel controllers. More details about this framework can be found here.

The objective of this thesis is to simplify the design of hierarchical agents using the above-mentioned framework by implementing graphical tools to easily define the structure of the agent and to analyze the agent's behavior while it interacts with the environment. Also, we need to improve the existing codebase by refactoring interfaces and implementing new features.

**Minimum knowledge**

- Good Python programming skills.

**Preferred knowledge**

- Knowledge of Python graphical and graph libraries;
- Basic knowledge of Reinforcement Learning;
- Knowledge of recent Deep RL methodologies;

**Accepted candidate will**

- Learn the basics of the proposed framework by looking at the existing codebase;
- Implement graphical tools to design and analyze Hierarchical Reinforcement Learning agents;
- Refactor the currently existing framework for designing HRL agents;
- Add new functionalities to the Hierarchical Reinforcement Learning framework;
- Test the developed framework on toy problems or, optionally, on real robots;
- Optionally, implement some standard Hierarchical Reinforcement Learning algorithms.

**Scope:** Master's thesis | Bachelor's thesis **Advisor:** Julen Urain **Start:** ASAP **Topic:**

Object segmentation algorithms have shown that data can be segmented according to the information it contains. This opens the door to considering time-dependent data such as trajectories or videos. Being able to segment human movements with respect to the different actions being performed would provide a powerful method to understand human tasks, predict them, and hopefully mimic them with a robot. In this project, the student is expected to study different algorithms for unsupervised segmentation of human actions and to study how well the learned models can predict human motion.

**Scope:** Master's thesis **Advisor:** Vincent Berenz (a collaborator at Tübingen at the at the Max Planck Institute for Intelligent Systems) **Start:** ASAP **Topic:**

Robotic scripted dance is common. On the other hand, interactive dance, in which the robot uses runtime sensory information to continuously adapt its moves to those of its (human) partner, remains challenging. It requires integrating various sensors, action modalities, and cognitive processes. The selected candidate's objective will be to develop such an interactive dance, based on the software suite for simultaneous perception and motion generation our department has built over the years. The target robot on which the dance will be applied is the wheeled robot Softbank Robotics Pepper. This master thesis is with the Max Planck Institute for Intelligent Systems and is located in Tuebingen. More information: https://am.is.tuebingen.mpg.de/jobs/master-thesis-interactive-dance-performed-by-sofbank-robotics-pepper

**Scope:** Master's Thesis, Bachelor's thesis **Advisor:** Tuan Dam, Pascal Klink **Start:** ASAP **Topic:**
Reinforcement Learning under partial observability of the true system state, albeit having great potential, is still an open problem. A critical ingredient of recent model-free RL approaches in partially observable domains is the right choice of memory model, which has so far been limited to recurrent neural networks or full histories [1][2]. The goal of this project is to investigate and compare the performance of different models, including ones used in Computer Vision or Natural Language Processing (e.g., Recurrent Ladder Networks [3]), in partially observable domains to gain new insights. The student will compare the performance of the memory models on selected tasks in simulation. If desired, the student also has the chance to test a few of the memory models on a real robotic task of playing Mikado.

**Minimum knowledge**

- Good Python programming skills.

**Preferred knowledge**

- Knowledge of deep neural networks and deep recurrent neural networks;
- Basic knowledge of Reinforcement Learning, POMDPs, and memory representations in POMDPs;
- Knowledge of recent Deep RL methodologies;

[1] Deep recurrent q-learning for partially observable mdps, Hausknecht et al. https://www.aaai.org/ocs/index.php/FSS/FSS15/paper/download/11673/11503

[2] Learning deep neural network policies with continuous memory states, Zhang et al. https://ieeexplore.ieee.org/iel7/7478842/7487087/07487174.pdf

[3] Recurrent Ladder Networks, Prémont-Schwarz et al. http://papers.nips.cc/paper/7182-recurrent-ladder-networks.pdf

**Scope:** Master's thesis, Bachelor thesis **Advisor:** Jan Peters **Start:** ASAP **Topic:** Inspired by results in neuroscience, especially on the cerebellum, Kawato & Wolpert introduced the MOSAIC (modular selection and identification for control) learning architecture. In this architecture, local forward models, i.e., models that predict future states and events, are learned directly from observations. Based on the prediction accuracy of these models, corresponding inverse models can be learned. In this thesis, we want to focus on the problem of learning to control a robot system with hysteresis in its friction.
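A toy sketch of the MOSAIC selection principle (our own simplification; the function names and the Gaussian error model are assumptions, not the architecture's exact form): each module's responsibility is a softmax over its forward-model prediction errors, and control can then be blended accordingly.

```python
import numpy as np

# Toy sketch: MOSAIC-style module selection. Each module i has a forward
# model predicting the next state; the responsibility of module i is a
# softmax of its (negative, scaled) squared prediction error.

def responsibilities(predictions, observed, noise_scale=0.1):
    errors = np.sum((predictions - observed) ** 2, axis=-1)  # per-module error
    logits = -errors / (2.0 * noise_scale ** 2)              # Gaussian log-likelihoods
    logits -= logits.max()                                   # numerical stability
    w = np.exp(logits)
    return w / w.sum()

# Two modules predict the next state; the observation is closer to module 0,
# so module 0 should receive most of the responsibility.
preds = np.array([[1.0, 0.0], [0.0, 1.0]])
w = responsibilities(preds, observed=np.array([0.9, 0.1]))
```

The corresponding inverse models would be trained and combined with these responsibilities as weights, so the module whose forward model explains the data best dominates control.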

**Scope:** Master's Thesis, Bachelor's thesis **Advisor:** Hany Abdulsamad **Start:** ASAP **Topic:** Model-based Reinforcement Learning is an approach to learn complex tasks given local approximations of the nonlinear dynamics of the environment and cost functions. It has proven to be a sample-efficient approach for learning on real robots. Classical approaches for learning such local models place certain restrictions on the overall structure, for example the number of local components and the switching dynamics. State-of-the-art research has recently moved to more general settings with nonparametric approaches that require less structure. The aim of this thesis is to review the literature on this subject and to compare existing algorithms on real robots like the BioRob or the Barrett WAM.

**Scope:** Master's Thesis **Advisor:** Hany Abdulsamad **Start:** ASAP **Topic:** Standard learning control techniques focus on learning deterministic controllers. Even advanced policy search methods that rely on stochastic search distributions use stochastic controllers only for the purpose of exploration; the final policy is applied only in its deterministic form. There are, however, cases in which a deterministic controller is always sub-optimal, such as scenarios with random unstructured disturbances. In this thesis, we want to address the problem of learning truly stochastic optimal controllers in an adversarial setting, and investigate whether adversarial learning can be used to generalize standard policy search methods. This topic includes very interesting and deep connections to robust control, game theory, and multi-agent learning.

**Scope:** Master's Thesis **Advisor:** Joe Watson **Start:** Anytime **Topic:** Model-based Reinforcement Learning for robotics typically requires learning nonlinear stochastic dynamical systems. This project aims to combine the Koopman operator, Bayesian machine learning, and neural network models to represent these systems as linear Gaussian dynamical systems in some high-dimensional embedding. See this write up for more details.
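A minimal sketch of the underlying idea (plain extended dynamic mode decomposition with a hand-picked feature map; all names and the toy system are our own assumptions): lift the state through nonlinear features and fit a linear operator by least squares.

```python
import numpy as np

# Sketch of extended dynamic mode decomposition (EDMD): choose features
# psi(x), collect snapshot pairs (x_t, x_{t+1}), and fit a Koopman matrix K
# by least squares so that psi(x_{t+1}) ~= K @ psi(x_t).

def features(x):
    # Hand-picked lifting; in the thesis this would be a learned (neural) embedding.
    return np.stack([x[..., 0], x[..., 1], x[..., 0] ** 2], axis=-1)

def fit_koopman(X, Y):
    Psi_x, Psi_y = features(X), features(Y)
    K, *_ = np.linalg.lstsq(Psi_x, Psi_y, rcond=None)
    return K.T   # so that psi(y) ~= K @ psi(x)

# Data from the nonlinear system x1' = 0.9*x1, x2' = 0.8*x2 + 0.1*x1^2,
# which is exactly linear in the lifted coordinates (x1, x2, x1^2).
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
Y = np.stack([0.9 * X[:, 0], 0.8 * X[:, 1] + 0.1 * X[:, 0] ** 2], axis=1)
K = fit_koopman(X, Y)
```

Because the toy system is exactly linear in the chosen lifting, least squares recovers the true operator; the research question is how to learn such liftings (and their uncertainty) for real robot dynamics.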

**Scope:** Master's Thesis, Bachelor's thesis **Advisor:** Hany Abdulsamad **Start:** ASAP **Topic:** A great challenge in applying Reinforcement Learning approaches is the need for human intervention to reset the scenario of a learned task, which makes the process very tedious and time-consuming. A clear example is learning table tennis, where we are either limited to using a ball gun with a predictable pattern of initial positions, or a human is needed to play against the robot. However, given a second robotic player, we propose a new setup in which the two agents cooperate to develop two different strategies, where one agent learns to support the second in becoming a great table tennis player. It will be interesting to see whether, in such a scenario, the agents discover what might resemble a defensive and an aggressive strategy in table tennis. The thesis will concentrate on developing the concept of cooperation and testing the results in simulation and on our own real table tennis setup.

**Scope:** Master's Thesis **Advisor:** Hany Abdulsamad **Start:** ASAP **Topic:** Let's stop reinventing the wheel. Most recent approaches to model-based RL revolve around the concept of trajectory optimization (iLQG, GPS, PILCO <-- all related to DDP). They can all be categorized in terms of direct and indirect shooting methods, two categories of optimal control that have existed for a very long time. It is time to clarify this connection. The aim of this thesis is, first, to dive into the literature of control and model-based RL, and second, to investigate the possibility of applying information-theoretic bounds to standard optimal control techniques.

**Scope:** Master's Thesis **Advisor:** Pascal Klink, Carlo D'Eramo **Start:** ASAP **Topic:** The idea of gradually learning to accomplish a complicated task via a guiding sequence of intermediate ones - referred to as Curriculum Learning - has shown great experimental success. The goal of this project is to investigate a recent take on Curriculum Learning in the domain of Reinforcement Learning, which interprets it as a form of Expectation Maximization. More precisely, the goal is to push the capabilities of this formulation by using advanced sampling methods to sample tasks for learning instead of simple approximations that have been used so far. The ideal candidate:

- is knowledgeable in Reinforcement Learning (as this is the basis for the project)
- has a basic understanding of Mixed-Integer Programming (or is not afraid of diving into this domain)
- has basic knowledge in the domain of Variational Inference
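To give a rough feeling for the EM view of curriculum learning, here is a toy sketch (all names, the one-dimensional task parameter, and the competence model are our own illustrative assumptions, not the project's actual formulation): candidate tasks are reweighted by closeness to the target task and by the agent's current competence, and the curriculum distribution is refitted to the weighted samples.

```python
import math

def gaussian(x, mu, sigma):
    """Unnormalized Gaussian density (the normalization cancels in the weights)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

def em_curriculum(target=2.0, alpha=2.0, iters=50):
    """EM-flavoured curriculum sketch: an E-step weights candidate tasks by
    (hypothetical) agent competence and closeness to the target task; an
    M-step moves the curriculum mean to the weighted average."""
    grid = [i * 0.05 for i in range(-40, 81)]  # candidate task parameters in [-2, 4]
    agent = 0.0                                # toy 'competence centre' of the agent
    mu = 0.0                                   # current curriculum mean
    for _ in range(iters):
        # E-step: target density times exponentiated competence near the agent
        weights = [gaussian(c, target, 1.0) * math.exp(alpha * gaussian(c, agent, 1.0))
                   for c in grid]
        total = sum(weights)
        # M-step: weighted maximum-likelihood update of the curriculum mean
        mu = sum(w * c for w, c in zip(weights, grid)) / total
        # the agent 'trains' on the curriculum, so its competence drifts toward it
        agent += 0.5 * (mu - agent)
    return mu
```

In this toy dynamic, the curriculum starts near the agent's initial competence and drifts toward the target task as the agent improves; the advanced sampling methods mentioned above would replace the crude grid-and-reweight approximation.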

**Scope:** Master's thesis **Advisors:** Boris Belousov, Georgia Chalvatzaki, Bastian Wibranek **Start:** ASAP

**Topic:** Many real-world problems can be reduced to combinatorial optimization over graphs. For example, the search for a combination of elements that produces a desired structure satisfying given design constraints, such as load bearing or form matching, is a ubiquitous problem in architecture. Commonly, heuristic search algorithms are employed that require expert input from the architect to guide the search. This thesis will investigate optimization approaches based on graph embedding techniques, such as graph neural networks, to improve the state of the art in combinatorial optimization in the architectural domain. The thesis will involve collaboration with the Digital Design Unit at FB Architektur.
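To make the search setting concrete, here is a minimal, hypothetical sketch of greedy construction with a pluggable scoring function (vertex cover is used only as a stand-in problem; `degree_score` is the classical heuristic that a learned graph embedding, e.g. a graph neural network scoring partial solutions, would replace):

```python
def greedy_construct(edges, score):
    """Greedily build a vertex cover: at each step add the node that the
    scoring function ranks highest among nodes touching uncovered edges.
    `score(node, uncovered)` is the slot where a learned model would go."""
    uncovered = set(edges)
    solution = []
    while uncovered:
        candidates = sorted({v for e in uncovered for v in e})
        best = max(candidates, key=lambda v: score(v, uncovered))
        solution.append(best)
        uncovered = {e for e in uncovered if best not in e}
    return solution

def degree_score(node, uncovered):
    """Classical heuristic stand-in: the number of uncovered edges at the node."""
    return sum(node in e for e in uncovered)
```

The design constraints of the architectural problem would enter through the candidate set and the scoring function, which is exactly where expert input is currently needed.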

**Scope:** Master's thesis **Advisors:** Julen Urain De Jesus, Georgia Chalvatzaki **Start:** ASAP

**Topic:** Semantic understanding of motion is an important tool for both Imitation Learning and Human-Robot Interaction. If a robot is able not only to imitate a taught motion but also to understand the semantic building blocks of a specific task (pick the object, put it on the floor, move the ball, ...), it will generalize better to new environments. During this thesis, the student will develop algorithms for semantic segmentation of human motion for both Semantic Imitation Learning and Human-Robot Interaction. See this write-up for more details.

**Scope:** Master's thesis **Advisors:** Julen Urain De Jesus, Georgia Chalvatzaki **Start:** Anytime

**Topic:** For safe Human-Robot Interaction it is important that the robot learns human motion models, so that it can predict future human states and make decisions based on the predicted information. In this thesis, we want to explore different probabilistic models for nonlinear stochastic dynamics and evaluate their predictive abilities. These models will afterwards be applied to human motion prediction, and finally the predictions will be integrated into path planning and motion control algorithms for robot navigation. See this write-up for more details.
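As a toy illustration of uncertainty-aware prediction (a linear constant-velocity model standing in for the nonlinear stochastic models the thesis targets; all names and parameters are our assumptions), the key point is that a probabilistic model yields not just a predicted position but a predictive variance that grows with the horizon, which downstream planning can exploit:

```python
def predict_cv(pos, vel, var_pos, var_vel, q, dt, steps):
    """Propagate a 1-D constant-velocity motion model `steps` timesteps ahead,
    tracking the predictive position variance. Uncertainty grows with the
    prediction horizon, which a planner can use for safety margins."""
    trajectory = []
    cov_pv = 0.0  # position-velocity covariance
    for _ in range(steps):
        pos += vel * dt
        # covariance propagation for F = [[1, dt], [0, 1]], plus process noise q
        var_pos += 2.0 * dt * cov_pv + dt * dt * var_vel + q
        cov_pv += dt * var_vel
        var_vel += q
        trajectory.append((pos, var_pos))
    return trajectory
```

A path planner would then keep the robot outside, say, a two-standard-deviation corridor around the predicted human trajectory.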

**Scope:** Bachelor's / Master's Thesis **Advisor:** Joe Watson **Start:** Anytime **Topic:**
For learning in robotic manipulation, there are two cultures: 'end-to-end' vs inductive biases.
While the former is purely data-driven, the latter incorporates ideas from computer vision and visual servoing for more interpretable and sample-efficient performance.
This is an open-ended project aimed at investigating novel frameworks for perception-based manipulation that combine 'structure' (computer vision) with learning.

The ideal candidate:

- has knowledge of both robotics and (geometric) computer vision
- is interested in working on real robotic manipulators
- can write clean, maintainable software

**Scope:** Master's thesis **Advisor:** Joni Pajarinen **Start:** ASAP

**Topic:** Efficient exploration is one of the most prominent challenges in deep reinforcement learning. Exploration of the state space is critical for finding high-value states and for assigning credit to the actions that lead to them. Exploration in model-free reinforcement learning has relied on classical techniques, empirical uncertainty estimates of the value function, or random policies. In model-based reinforcement learning, value bounds have been used successfully to direct exploration. In this Master's thesis project, the student will investigate how lower and upper value bounds can be used to focus exploration in model-free reinforcement learning on the most promising parts of the state space. This thesis topic requires background knowledge in reinforcement learning, gained e.g. through machine learning or robot learning courses.
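A hedged sketch of the upper-bound idea in a tabular setting (a deterministic chain MDP with an optimistically initialised upper bound; an illustration of acting greedily on a value bound, not the proposed method):

```python
def chain_exploration(n=5, gamma=0.9, episodes=50):
    """Chain MDP: states 0..n-1, actions 0=left / 1=right, reward 1 on
    reaching the rightmost state. The agent acts greedily on an upper
    value bound (initialised at the known maximum return of 1), so
    exploration is directed toward state-actions whose bound is still loose."""
    upper = [[1.0, 1.0] for _ in range(n)]  # valid bound: returns never exceed 1
    for _ in range(episodes):
        s = 0
        for _ in range(4 * n):  # cap episode length
            a = 0 if upper[s][0] > upper[s][1] else 1  # optimistic greedy action
            s2 = max(s - 1, 0) if a == 0 else min(s + 1, n - 1)
            r = 1.0 if s2 == n - 1 else 0.0
            done = s2 == n - 1
            # deterministic MDP: a full backup keeps `upper` a valid upper bound
            upper[s][a] = r if done else r + gamma * max(upper[s2])
            if done:
                break
            s = s2
    return upper
```

Because the bound only ever shrinks toward the true value, the agent stops revisiting actions once their bound drops below the best alternative; a lower bound would additionally allow pruning provably suboptimal actions.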

**Scope:** Master's Thesis **Advisors:** Joe Watson, Michael Lutter **Start:** Anytime **Topic:** Model-based Reinforcement Learning for robotics typically requires learning the nonlinear dynamics of complex multibody mechanical systems. The Recursive Newton-Euler Algorithm (RNEA) is an established means of efficiently modelling such systems. This project looks at using the Lie-algebra perspective of RNEA to implement the algorithm on a differentiable computation graph, the basis of deep learning models. This potentially offers a means of learning high-fidelity and interpretable models for robotics from data. See this write-up for more details.
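To illustrate the underlying recursion, here is a simplified planar point-mass version of RNEA (our own sketch; the thesis targets the full Lie-algebra spatial formulation for general rigid bodies). A forward pass propagates accelerations from the base outward, a backward pass accumulates forces and joint torques; since everything is plain arithmetic, an autodiff framework could trace it directly.

```python
import math

def rnea_planar(q, qd, qdd, masses, lengths, g=9.81):
    """Inverse dynamics of a planar chain of point-mass links via the
    Newton-Euler recursion. Returns the joint torques for the given
    joint positions q, velocities qd, and accelerations qdd."""
    n = len(q)
    # absolute link angles/rates/accelerations (measured from straight down)
    th, thd, thdd = [], [], []
    s = sv = sa = 0.0
    for i in range(n):
        s, sv, sa = s + q[i], sv + qd[i], sa + qdd[i]
        th.append(s); thd.append(sv); thdd.append(sa)
    # forward pass: link vectors and mass accelerations (gravity folded in
    # as a fictitious upward base acceleration, the standard RNEA trick)
    acc = [(0.0, g)]
    rel = []
    for i in range(n):
        u = (math.sin(th[i]), -math.cos(th[i]))   # joint-to-mass direction
        nrm = (math.cos(th[i]), math.sin(th[i]))  # perpendicular direction
        rel.append((lengths[i] * u[0], lengths[i] * u[1]))
        ax = acc[-1][0] + lengths[i] * (thdd[i] * nrm[0] - thd[i] ** 2 * u[0])
        ay = acc[-1][1] + lengths[i] * (thdd[i] * nrm[1] - thd[i] ** 2 * u[1])
        acc.append((ax, ay))
    # backward pass: f_i = m_i * a_i + f_{i+1}; torque about each joint is
    # tau_i = tau_{i+1} + rel_i x f_i (planar cross product)
    tau = [0.0] * n
    fx = fy = 0.0
    for i in reversed(range(n)):
        taunext = tau[i + 1] if i + 1 < n else 0.0
        fx = masses[i] * acc[i + 1][0] + fx
        fy = masses[i] * acc[i + 1][1] + fy
        tau[i] = taunext + rel[i][0] * fy - rel[i][1] * fx
    return tau
```

For a single link this reduces to the familiar pendulum dynamics tau = m l^2 qdd + m g l sin(q), and for a straight-down static chain all torques vanish, which makes the recursion easy to sanity-check.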