Boris Belousov

Boris Belousov has graduated from IAS and remains an adjunct member while working as a Senior Researcher at the German Research Center for AI (DFKI), Research Department: Systems AI for Robot Learning. He holds an MSc degree in Electrical Engineering from FAU Erlangen-Nürnberg with a major in Communications and Multimedia Engineering, and a BSc degree in Applied Mathematics and Physics from the Moscow Institute of Physics and Technology with a specialization in Electrical Engineering and Cybernetics.

Supervision

Boris Belousov has supervised numerous theses and projects. See Supervised Theses for details.

Teaching

Reinforcement Learning WS'18
Statistical Machine Learning SS'18
Robot Learning IP WS'17

Reviewing

JMLR, NeurIPS, ICML, AAAI, ICLR, CoRL, ICRA, IROS, AURO, RA-L, T-RO, R:SS

Boris Belousov is interested in optimal control, information theory, robotics, and reinforcement learning. To realize the vision of future intelligent systems that autonomously set and accomplish goals, learn from experience, and adapt to changing conditions, Boris develops methods that are firmly grounded in Bayesian decision theory. He has worked on maximum entropy reinforcement learning, risk-sensitive policy search, active learning, distributionally robust trajectory optimization, curriculum learning, domain randomization, and visuotactile manipulation.

Publications

Systems AI for Robot Learning

    •     Bib
      Toelle, M.; Belousov, B.; Peters, J. (2023). A Unifying Perspective on Language-Based Task Representations for Robot Control, CoRL Workshop on Language and Robot Learning: Language as Grounding.
    •     Bib
      Bhatt, A.; Palenicek, D.; Belousov, B.; Argus, M.; Amiranashvili, A.; Brox, T.; Peters, J. (2024). CrossQ: Batch Normalization in Deep Reinforcement Learning for Greater Sample Efficiency and Simplicity, International Conference on Learning Representations (ICLR), Spotlight.
    •     Bib
      Vincent, T.; Belousov, B.; D'Eramo, C.; Peters, J. (2023). Iterated Deep Q-Network: Efficient Learning of Bellman Iterations for Deep Reinforcement Learning, European Workshop on Reinforcement Learning (EWRL).
    •     Bib
      Lutter, M.; Belousov, B.; Mannor, S.; Fox, D.; Garg, A.; Peters, J. (2023). Continuous-Time Fitted Value Iteration for Robust Policies, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI).
    •     Bib
      Siebenborn, M.; Belousov, B.; Huang, J.; Peters, J. (2022). How Crucial is Transformer in Decision Transformer?, Foundation Models for Decision Making Workshop at Neural Information Processing Systems.
    •     Bib
      Galljamov, R.; Zhao, G.; Belousov, B.; Seyfarth, A.; Peters, J. (2022). Improving Sample Efficiency of Example-Guided Deep Reinforcement Learning for Bipedal Walking, 2022 IEEE-RAS 21st International Conference on Humanoid Robots (Humanoids).
    •     Bib
      Belousov, B.; Abdulsamad, H.; Klink, P.; Parisi, S.; Peters, J. (2021). Reinforcement Learning Algorithms: Analysis and Applications, Studies in Computational Intelligence, Springer International Publishing.

Reinforcement Learning and Tactile Manipulation for Robotic Assembly

    •     Bib
      Liu, Y.; Belousov, B.; Funk, N.; Chalvatzaki, G.; Peters, J.; Tessmann, O. (2023). Auto(mated)nomous Assembly, International Conference on Trends on Construction in the Post-Digital Era, pp.167-181, Springer, Cham.
    •     Bib
      Funk, N.; Mueller, P.-O.; Belousov, B.; Savchenko, A.; Findeisen, R.; Peters, J. (2023). High-Resolution Pixelwise Contact Area and Normal Force Estimation for the GelSight Mini Visuotactile Sensor Using Neural Networks, Embracing Contacts Workshop at ICRA 2023.
    •     Bib
      Zhu, Y.; Nazirjonov, S.; Jiang, B.; Colan, J.; Aoyama, T.; Hasegawa, Y.; Belousov, B.; Hansel, K.; Peters, J. (2023). Visual Tactile Sensor Based Force Estimation for Position-Force Teleoperation, IEEE International Conference on Cyborg and Bionic Systems (CBS), pp.49-52.
    •     Bib
      Boehm, A.; Schneider, T.; Belousov, B.; Kshirsagar, A.; Lin, L.; Doerschner, K.; Drewing, K.; Rothkopf, C.A.; Peters, J. (2023). Tactile Active Texture Recognition With Vision-Based Tactile Sensors, NeurIPS Workshop on Touch Processing: a new Sensing Modality for AI.
    •     Bib
      Belousov, B.; Wibranek, B.; Schneider, J.; Schneider, T.; Chalvatzaki, G.; Peters, J.; Tessmann, O. (2022). Robotic Architectural Assembly with Tactile Skills: Simulation and Optimization, Automation in Construction, 133, pp.104006.
    •     Bib
      Funk, N.; Chalvatzaki, G.; Belousov, B.; Peters, J. (2021). Learn2Assemble with Structured Representations and Search for Robotic Architectural Construction, Conference on Robot Learning (CoRL).
    •     Bib
      Wibranek, B.; Liu, Y.; Funk, N.; Belousov, B.; Peters, J.; Tessmann, O. (2021). Reinforcement Learning for Sequential Assembly of SL-Blocks: Self-Interlocking Combinatorial Design Based on Machine Learning, Proceedings of the 39th eCAADe Conference.
    •     Bib
      Belousov, B.; Sadybakasov, A.; Wibranek, B.; Veiga, F.; Tessmann, O.; Peters, J. (2019). Building a Library of Tactile Skills Based on FingerVision, Proceedings of the 2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids).
    •     Bib
      Wibranek, B.; Belousov, B.; Sadybakasov, A.; Peters, J.; Tessmann, O. (2019). Interactive Structure: Robotic Repositioning of Vertical Elements in Man-Machine Collaborative Assembly through Vision-Based Tactile Sensing, Proceedings of the 37th eCAADe and 23rd SIGraDi Conference.
    •     Bib
      Wibranek, B.; Belousov, B.; Sadybakasov, A.; Tessmann, O. (2019). Interactive Assemblies: Man-Machine Collaboration through Building Components for As-Built Digital Models, Computer-Aided Architectural Design Futures (CAAD Futures).

Maximum Entropy Reinforcement Learning and Stochastic Optimal Control

    •     Bib
      Abdulsamad, H.; Dorau, T.; Belousov, B.; Zhu, J.-J.; Peters, J. (2021). Distributionally Robust Trajectory Optimization Under Uncertain Dynamics via Relative Entropy Trust-Regions, arXiv.
    •     Bib
      Klink, P.; Abdulsamad, H.; Belousov, B.; D'Eramo, C.; Peters, J.; Pajarinen, J. (2021). A Probabilistic Interpretation of Self-Paced Learning with Applications to Reinforcement Learning, Journal of Machine Learning Research (JMLR).
    •     Bib
      Eilers, C.; Eschmann, J.; Menzenbach, R.; Belousov, B.; Muratore, F.; Peters, J. (2020). Underactuated Waypoint Trajectory Optimization for Light Painting Photography, Proceedings of the IEEE International Conference on Robotics and Automation (ICRA).
    •     Bib
      Lutter, M.; Clever, D.; Belousov, B.; Listmann, K.; Peters, J. (2020). Evaluating the Robustness of HJB Optimal Feedback Control, International Symposium on Robotics.
    •     Bib
      Lutter, M.; Belousov, B.; Listmann, K.; Clever, D.; Peters, J. (2019). HJB Optimal Feedback Control with Deep Differential Value Functions and Action Constraints, Conference on Robot Learning (CoRL).
    •     Bib
      Belousov, B.; Peters, J. (2019). Entropic Regularization of Markov Decision Processes, Entropy, 21, 7, MDPI.
    •     Bib
      Klink, P.; Abdulsamad, H.; Belousov, B.; Peters, J. (2019). Self-Paced Contextual Reinforcement Learning, Proceedings of the 3rd Conference on Robot Learning (CoRL).
    •     Bib
      Nass, D.; Belousov, B.; Peters, J. (2019). Entropic Risk Measure in Policy Search, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
    •     Bib
      Belousov, B.; Peters, J. (2018). Mean Squared Advantage Minimization as a Consequence of Entropic Policy Improvement Regularization, European Workshops on Reinforcement Learning (EWRL).
    •     Bib
      Belousov, B.; Peters, J. (2017). f-Divergence Constrained Policy Improvement, arXiv.
    •     Bib
      Belousov, B.; Neumann, G.; Rothkopf, C.; Peters, J. (2016). Catching Heuristics Are Optimal Control Policies, Advances in Neural Information Processing Systems (NIPS / NeurIPS).

Information-Theoretic Active Exploration

    •     Bib
      Schneider, T.; Belousov, B.; Chalvatzaki, G.; Romeres, D.; Jha, D.K.; Peters, J. (2022). Active Exploration for Robotic Manipulation, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
    •     Bib
      Schneider, T.; Belousov, B.; Abdulsamad, H.; Peters, J. (2022). Active Inference for Robotic Manipulation, 5th Multidisciplinary Conference on Reinforcement Learning and Decision Making (RLDM).
    •     Bib
      Muratore, F.; Gruner, T.; Wiese, F.; Belousov, B.; Gienger, M.; Peters, J. (2021). Neural Posterior Domain Randomization, Conference on Robot Learning (CoRL).
    •     Bib
      Belousov, B.; Abdulsamad, H.; Schultheis, M.; Peters, J. (2019). Belief Space Model Predictive Control for Approximately Optimal System Identification, 4th Multidisciplinary Conference on Reinforcement Learning and Decision Making (RLDM).
    •     Bib
      Schultheis, M.; Belousov, B.; Abdulsamad, H.; Peters, J. (2019). Receding Horizon Curiosity, Proceedings of the 3rd Conference on Robot Learning (CoRL).