I have graduated and joined the Bosch Center for Artificial Intelligence in Renningen, Germany.

Julia Vinogradska

Research Interests

Machine Learning, Reinforcement Learning, Optimal Control

Contact Information

+49-711-811-27767
julia.vinogradska(a)de.bosch.com

Julia Vinogradska joined the Intelligent Autonomous Systems lab on September 1, 2014 as an external PhD student in cooperation with the Bosch Center for Artificial Intelligence. Before her PhD, Julia completed her Master's degree in Mathematics at the University of Stuttgart (Universität Stuttgart). Her thesis, entitled "Automorphisms of Graph Groups" ("Automorphismen von Graphgruppen"), was written under the supervision of Prof. Diekert. Her research at IAS includes, among other topics, reinforcement learning algorithms and the study of their stochastic stability.

Research Interests

Learning control has become a viable approach in both the machine learning and control communities. Many successful applications impressively demonstrate its advantages: in contrast to classical control methods, learning control does not presuppose a detailed understanding of the underlying dynamics but instead infers the required information from data. Thus, relatively little expert knowledge about the dynamics is needed, and fewer assumptions, such as a parametric form with parameter estimates, must be made.

As it is desirable to minimize system interaction time in real-world applications, model-based approaches are often preferred. However, one drawback of model-based approaches is that the model is inherently approximate, yet at the same time it is implicitly assumed to capture the system dynamics sufficiently well. These conflicting assumptions can derail learning, and solutions to the approximate control problem may fail at the real-world task, especially when predictions are highly uncertain. Gaussian processes (GPs) offer an elegant, fully Bayesian approach to modeling system dynamics that incorporates uncertainty. Given observed data, a GP infers a distribution over all plausible dynamics models and is thus a natural choice for model-based reinforcement learning.
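To make this concrete, the following minimal sketch shows how a GP conditioned on observed transitions yields both a predicted next state and a predictive variance. The kernel choice, hyperparameters, and the toy one-dimensional dynamics are illustrative assumptions, not the specific models used in the work described here.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel: k(a, b) = variance * exp(-||a - b||^2 / (2 l^2))
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_posterior(X, y, X_star, noise=1e-2):
    # GP posterior mean and (diagonal) variance at X_star, conditioned on (X, y).
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    K_s = rbf_kernel(X_star, X)
    K_ss = rbf_kernel(X_star, X_star)
    alpha = np.linalg.solve(K, y)
    mean = K_s @ alpha
    cov = K_ss - K_s @ np.linalg.solve(K, K_s.T)
    return mean, np.diag(cov)

# Toy 1-D dynamics observed with noise: x_{t+1} = sin(x_t) + eps (illustrative only)
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(30, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(30)

mean, var = gp_posterior(X, y, np.array([[0.5]]))
```

The predictive variance `var` is what distinguishes this from a point-estimate model: it quantifies how plausible alternative dynamics are at a query state, which downstream algorithms can exploit.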

Julia's research focuses on closed-loop control systems with GP forward dynamics models, a field with several open questions she hopes to address during her PhD. One major difficulty of GPs as forward dynamics models in closed-loop control is that predictions become intractable when the input to the GP is a distribution. Well-known approximation methods exist that can be computed efficiently but offer only rough estimates of the output state distribution. Several applications, however, demand highly accurate multi-step-ahead predictions for which these estimates are not precise enough.

One such field is the stability analysis of closed-loop control systems with GP forward dynamics models, which evaluates the system behaviour under a given control policy. For example, one may be interested in whether a policy succeeds, or from which starting states it succeeds. In particular, the goal is to derive guarantees that the system will exhibit a certain (desired) behaviour. While stability analysis in classical control dates back to the 19th century, there has been little research in this direction for GP dynamics models so far. Yet such guarantees are crucial for learning control in safety-critical applications. Julia works on several problems with GP dynamics models in this field: highly accurate approximations for multi-step-ahead predictions that enable stability analysis; stability of the closed-loop control structure (i) over finite time horizons, (ii) in the presence of disturbances, and (iii) asymptotically; and learning control based on GP forward dynamics for finite and infinite time horizons.
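To illustrate why multi-step-ahead prediction through a GP is hard, the sketch below propagates an uncertain state by Monte Carlo sampling: each particle is pushed through the model's predictive distribution, so the state distribution after several steps is represented only by samples. The hand-coded `gp_predict` is a hypothetical stand-in for a fitted GP posterior, and the dynamics and noise levels are invented for illustration; deterministic approximations (e.g. moment matching or quadrature-based schemes) aim to replace this sampling baseline with more accurate, analyzable estimates.

```python
import numpy as np

rng = np.random.default_rng(1)

def gp_predict(x):
    # Hypothetical stand-in for a GP posterior over 1-D dynamics:
    # returns a predictive mean and an input-dependent predictive variance.
    mean = 0.9 * x + 0.1 * np.sin(x)
    var = 0.01 * (1.0 + x ** 2)
    return mean, var

def propagate(x0_mean, x0_var, steps=10, n_samples=5000):
    # Monte Carlo multi-step-ahead prediction: sample the initial state
    # distribution, then repeatedly draw each particle's successor state
    # from the model's predictive distribution.
    x = rng.normal(x0_mean, np.sqrt(x0_var), size=n_samples)
    for _ in range(steps):
        mean, var = gp_predict(x)
        x = rng.normal(mean, np.sqrt(var))
    return x.mean(), x.var()

m, v = propagate(1.0, 0.05)
```

Note that the exact output distribution after even one step is a mixture over all inputs weighted by the input distribution, which has no closed form for standard kernels; that intractability is precisely what motivates the highly accurate approximations mentioned above.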

Publications

  •   Luis, C.E.; Bottero, A.G.; Vinogradska, J.; Berkenkamp, F.; Peters, J. (submitted). Uncertainty Representations in State-Space Layers for Deep Reinforcement Learning under Partial Observability, Transactions on Machine Learning Research (TMLR).
  •   Luis, C.E.; Bottero, A.G.; Vinogradska, J.; Berkenkamp, F.; Peters, J. (2024). Value-Distributional Model-Based Reinforcement Learning, Journal of Machine Learning Research (JMLR).
  •   Luis, C.E.; Bottero, A.G.; Vinogradska, J.; Berkenkamp, F.; Peters, J. (2023). Model-Based Uncertainty in Value Functions, Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS).
  •   Bottero, A.G.; Luis, C.E.; Vinogradska, J.; Berkenkamp, F.; Peters, J. (2022). Information-Theoretic Safe Exploration with Gaussian Processes, Advances in Neural Information Processing Systems (NIPS / NeurIPS).
  •   Vinogradska, J.; Bischoff, B.; Koller, T.; Achterhold, J.; Peters, J. (2020). Numerical Quadrature for Probabilistic Policy Search, IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 42, 1, pp. 164-175.
  •   Vinogradska, J.; Bischoff, B.; Peters, J. (2018). Approximate Value Iteration based on Numerical Quadrature, Proceedings of the International Conference on Robotics and Automation, and IEEE Robotics and Automation Letters (RA-L), 3, pp. 1330-1337.
  •   Bischoff, B.; Vinogradska, J.; Peters, J. (2018). Verfahren und Vorrichtung zum Einstellen mindestens eines Parameters eines Aktorregelungssystems und Aktorregelungssystem (Method and Device for Setting at Least One Parameter of an Actuator Control System, and Actuator Control System), Patent PCT/EP2018/067213, EP000003646122A1, WO002019002349.
  •   Bischoff, B.; Vinogradska, J.; Peters, J. (2018). Verfahren und Vorrichtung zum Einstellen mindestens eines Parameters eines Aktorregelungssystems und Aktorregelungssystem (Method and Device for Setting at Least One Parameter of an Actuator Control System, and Actuator Control System), Patent PCT/EP2018/071742, EP000003698222A1, WO002019076511.
  •   Bischoff, B.; Vinogradska, J.; Peters, J. (2018). Verfahren und Vorrichtung zum Betreiben eines Aktorregelungssystems, Computerprogramm und maschinenlesbares Speichermedium (Method and Device for Operating an Actuator Control System, Computer Program, and Machine-Readable Storage Medium), Patent PCT/EP2018/071753, EP000003698223A1, EP000003698223B1, WO002019076512.
  •   Vinogradska, J.; Bischoff, B.; Nguyen-Tuong, D.; Peters, J. (2017). Stability of Controllers for Gaussian Process Forward Models, Journal of Machine Learning Research (JMLR), 18, 100, pp. 1-37.
  •   Vinogradska, J. (2017). Gaussian Processes in Reinforcement Learning: Stability Analysis and Efficient Value Propagation, PhD Thesis.
  •   Bischoff, B.; Vinogradska, J.; Peters, J. (2017). Verfahren und Vorrichtung zum Einstellen mindestens eines Parameters eines Aktorregelungssystems und Aktorregelungssystem (Method and Device for Setting at Least One Parameter of an Actuator Control System, and Actuator Control System), Patent PCT/EP2018/071742, DE102017218813A1.
  •   Bischoff, B.; Vinogradska, J.; Peters, J. (2017). Verfahren und Vorrichtung zum Einstellen mindestens eines Parameters eines Aktorregelungssystems, Aktorregelungssystem und Datensatz (Method and Device for Setting at Least One Parameter of an Actuator Control System, Actuator Control System, and Data Set), Patent DE102017211209A1.
  •   Bischoff, B.; Vinogradska, J.; Peters, J. (2017). Verfahren und Vorrichtung zum Betreiben eines Aktorregelungssystems, Computerprogramm und maschinenlesbares Speichermedium (Method and Device for Operating an Actuator Control System, Computer Program, and Machine-Readable Storage Medium), Patent PCT/EP2018/071753, DE102017218811A1.
  •   Vinogradska, J.; Bischoff, B.; Nguyen-Tuong, D.; Romer, A.; Schmidt, H.; Peters, J. (2016). Stability of Controllers for Gaussian Process Forward Models, Proceedings of the International Conference on Machine Learning (ICML).