Research Interests
Robot Learning, Robotics, Machine Learning, Cognitive Science and Biomimetic Systems.
Affiliations
1. TU Darmstadt, Intelligent Autonomous Systems, Computer Science Department
2. German Research Center for AI (DFKI), Research Department: SAIROL
3. Hessian Centre for Artificial Intelligence
Contact
myname@ias.tu-darmstadt.de
Room E315, Building S2|02, TU Darmstadt, FB-Informatik, FG-IAS, Hochschulstr. 10, 64289 Darmstadt
+49-6151-16-00000
About Me
I'm [Your Name], a passionate [Your Profession/Interest] based in [Your Location]. Welcome to my corner of the internet, where I share my thoughts, experiences, and interests with the world. I believe in [Your Belief/Philosophy], and I'm dedicated to [Your Goal/Purpose].
My Journey
My journey began [Start Date/Year], and it's been an incredible adventure ever since. I've had the privilege of [Highlight One or Two Significant Achievements or Experiences], and I continue to strive for excellence in everything I do.
My Interests
I have a wide range of interests, including [List 3-5 Interests / Hobbies]. These passions fuel my creativity and drive my desire to learn and grow every day.
Blog
On my blog, you'll find articles and insights on a variety of topics. I love sharing my knowledge and experiences with my readers. Whether it's [Choose a few Topics, e.g., Travel, Technology, Cooking], I aim to provide valuable content that informs and inspires.
Portfolio
Browse through my portfolio to see some of my [Work/Projects/Creations]. I'm proud of what I've accomplished so far, and I'm always open to new opportunities and collaborations.
Get in Touch
I'm a firm believer in the power of connections. If you'd like to collaborate, have questions, or just want to say hello, don't hesitate to [Provide Contact Information/Links to Social Media].
Stay Updated
Make sure to subscribe to my newsletter or follow me on [Social Media Platforms] to stay updated on my latest adventures, blog posts, and projects.
Thank you for visiting my personal homepage. I hope you find something here that resonates with you or sparks your curiosity. Feel free to explore, connect, and join me on this exciting journey!
Publications
- Urain, J.; Mandlekar, A.; Du, Y.; Shafiullah, M.; Xu, D.; Fragkiadaki, K.; Chalvatzaki, G.; Peters, J. (2026). Deep Generative Models in Robotics: A Survey on Learning from Multimodal Demonstrations, IEEE Transactions on Robotics (T-RO), 42, pp.60-79.
- Maier, L.; Schulze, L.; Lilow, R.; Hahn, L.; Krasowski, N.; Barth, A.; Gaebel, S.; Güran, F.; Hanau, O.; Wagner, G.; Borgmann, F.; Arenz, O.; Peters, J. (2026). Mathematical Foundations of Modeling ETL Process Chains, Workshop on Geometry, Topology, and Machine Learning.
- Palenicek, D.; Vogt, F.; Watson, J.; Posner, I.; Peters, J. (2026). XQC: Well-conditioned Optimization Accelerates Deep Reinforcement Learning, International Conference on Learning Representations (ICLR).
- Schulze, L.; Negri, J.D.; Barasuol, V.; Medeiros, V.S.; Becker, M.; Peters, J.; Arenz, O. (2026). Floating-Base Deep Lagrangian Networks, IEEE International Conference on Robotics and Automation (ICRA).
- Helmut, E.; Funk, N.; Schneider, T.; de Farias, C.; Peters, J. (2026). Tactile-Conditioned Diffusion Policy for Force-Aware Robotic Manipulation, IEEE International Conference on Robotics and Automation (ICRA).
- Drewing, N.; Al-Hafez, F.; Zhao, G.; Peters, J.; Seyfarth, A.; Findeisen, R.; Sharbafi, M. (2026). Learning Human Gait with Muscle Control and Metabolic Cost Integration, Proceedings of the American Control Conference (ACC).
- Schneider, T.; de Farias, C.; Calandra, R.; Chen, L.; Peters, J. (2026). APPLE: Toward General Active Perception via Reinforcement Learning, International Conference on Learning Representations (ICLR).
- Vincent, T.; Tripathi, Y.; Faust, T.; Akgül, A.; Oren, Y.; Kandemir, M.; Peters, J.; D'Eramo, C. (2026). Bridging the Performance Gap Between Target-Free and Target-Based Reinforcement Learning, International Conference on Learning Representations (ICLR).
- Drolet, M.; Al-Hafez, F.; Bhatt, A.; Peters, J.; Arenz, O. (2026). Discrete Variational Autoencoding via Policy Search, International Conference on Learning Representations (ICLR).
- Duret, G.; Heller, F.; Mazurak, D.; Kshirsagar, A.; Schneider, T.; Zara, F.; Peters, J.; Chen, L. (2026). Real-Time Simulation of Deformable Tactile Sensors and Objects in Robotic Grasping using Graph Neural Networks with Inductive Biases, IEEE-RAS International Conference on Soft Robotics (RoboSoft).
- Cai, Y.; Jansonnie, P.; Arenz, O.; de Farias C.; Peters, J. (2026). GaussTwin: Unified Simulation and Correction with Gaussian Splatting for Robotic Digital Twins, IEEE International Conference on Robotics and Automation (ICRA).
- Mower, C.; Wan, Y.; Yu, H.; Grosnit, A.; Gonzalez-Billandon, J.; Zimmer, M.; Liu, P.; Palenicek, D.; Tateo, D.; Peters, J.; Qu, K.; Zhang, M.; Lan, G.; Cadena, C.; Hutter, M.; Tian, G.; Zhuang, Y.; Shao, K.; Zhuang, X.; Hao, J.; Wang, J.; Bou Ammar, H. (2026). A robot operating system framework for using large language models in embodied AI, Nature Machine Intelligence.
- Duret, G.; Samsonenko, A.; Zara, F.; Peters, J.; Chen, L. (2026). Automatic Physically-Based Sim2Real for Tactile Images through Differentiable Path-Tracing Rendering, Proceedings of the IEEE International Conference on Robotics and Automation (ICRA).
- Jin, Y.; Funk, N.W.; Prasad, V.; Li, Z.; Franzius, M.; Peters, J.; Chalvatzaki, G. (2026). SE(3)-PoseFlow: Estimating 6D Pose Distributions for Uncertainty-Aware Robotic Manipulation, Proceedings of the IEEE International Conference on Robotics and Automation (ICRA).
- Guler, B.; Pompetzki, K.; Sun, Y.; Manschitz, S.; Peters, J. (2026). A Safety-Aware Shared Autonomy Framework with BarrierIK Using Control Barrier Functions, IEEE International Conference on Robotics and Automation (ICRA).
- Nguyen, K.; Le, A.T.; Peters, J.; Vu, M.N. (2026). DoublyAware: Dual Planning and Policy Awareness for Temporal Difference Learning in Humanoid Locomotion, IEEE Robotics and Automation Letters (RA-L), 11, 2, pp.2162-2169.
- Holzmann, P.; Pfefferkorn, M.; Braatz, R.D.; Peters, J.; Findeisen, R. (2026). Using Learned Flow-Matching Surrogate Models for Adaptive Receding-Horizon Control, European Control Conference (ECC).
- Kim, D.; Lee, Y.; Park, M.; Kim, K.; Nahendra, I.; Seno, T.; Min, S.; Palenicek, D.; Vogt, F.; Kragic, D.; Peters, J.; Choo, J.; Lee, H. (2026). FlashSAC: Fast and Stable Off-Policy Reinforcement Learning for High-Dimensional Robot Control, Robotics: Science and Systems (RSS).
- Duret, G.; Mazurak, D.; Zara, F.; Peters, J.; Chen, L. (2026). Breaking the 3D Dataset Bottleneck: Fast Scalable Generation of Aligned 3D Assets from Scratch for Category 6D Pose Estimation and Robotic Grasping, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
- Montenegro, A.; Liu, P.; Li, S.; Metelli, A.M.; Peters, J. (2026). Mind Your Steps: A General Learning Framework for Accurate Humanoid Foothold Tracking, Robotics: Science and Systems (RSS).
- Flynn, H.; Watson, J.; Posner, I.; Peters, J. (2026). Posterior Sampling Reinforcement Learning with Gaussian Processes for Continuous Control: Sublinear Regret Bounds for Unbounded State Spaces, International Conference on Machine Learning (ICML).
- Diwan, A.A.; Tateo, D.; Mower, C.; Bou Ammar, H.; Peters, J.; Arenz, O. (2026). Trust Region Inverse Reinforcement Learning, International Conference on Machine Learning (ICML).
- Vincent, T.; Gerhardt, K.; Tripathi, Y.; Maraqten, H.; White, A.; White, M.; Peters, J.; D'Eramo, C. (2026). Gradient Iterated Temporal-Difference Learning, Reinforcement Learning Journal (RLJ).
- Scherer, C.; Watson, J.; Palenicek, D.; Gruner, T.; Posner, I.; Peters, J. (2026). Coherent Off-Policy Improvement of Large Behaviour Models with Learned Rewards, ICRA 2026 Workshop on Reinforcement Learning in the Era of Imitation Learning.
- Vincent, T.; Palenicek, D.; Belousov, B.; Peters, J.; D'Eramo, C. (2025). Iterated Q-Network: Beyond One-Step Bellman Updates in Deep Reinforcement Learning, Transactions on Machine Learning Research (TMLR), J2C Certificate.
- Prasad, V.; Heitlinger, L.; Koert, D.; Stock-Homburg, R.; Peters, J.; Chalvatzaki, G. (2025). Learning Multimodal Latent Dynamics for Human-Robot Interaction, IEEE Transactions on Robotics (T-RO), 41, pp.4418-4438.
- Liu, P.; Bou-Ammar, H.; Peters, J.; Tateo, D. (2025). Safe Reinforcement Learning on the Constraint Manifold: Theory and Applications, IEEE Transactions on Robotics (T-RO), 41, pp.3442-3461.
- Gu, S.; Liu, P.; Kshirsagar, A.; Chen, G.; Peters, J.; Knoll, A. (2025). ROSCOM: Robust Safe Reinforcement Learning on Stochastic Constraint Manifolds, IEEE Transactions on Automation Science and Engineering (T-ASE), 22, pp.5841-5851.
- Luis, C.E.; Bottero, A.G.; Vinogradska, J.; Berkenkamp, F.; Peters, J. (2025). Uncertainty Representations in State-Space Layers for Deep Reinforcement Learning under Partial Observability, Transactions on Machine Learning Research (TMLR).
- Watson, J.; Song, C.; Weeger, O.; Gruner, T.; Le, A.T.; Hansel, K.; Headway, A.; Arenz, O.; Trojak, W.; Cranmer, M.; D'Eramo, C.; Bülow, F.; Goyal, T.; Peters, J.; Hoffman, M.W. (2025). Machine Learning with Physics Knowledge for Prediction: A Survey, Transactions on Machine Learning Research (TMLR).
- Carvalho, J.; Le, A.T.; Kicki, P.; Koert, D.; Peters, J. (2025). Motion Planning Diffusion: Learning and Adapting Robot Motion Planning with Diffusion Models, IEEE Transactions on Robotics (T-RO), 41, pp.4881-4901.
- Kienle, C.; Alt, B.; Katic, D.; Jäkel, R.; Peters, J. (2025). QueryCAD: Grounded Question Answering for CAD Models, IEEE International Conference on Robotics and Automation (ICRA).
- Le, A. T.; Hansel, K.; Carvalho, J.; Watson, J.; Urain, J.; Biess, A.; Chalvatzaki, G.; Peters, J. (2025). Global Tensor Motion Planning, IEEE Robotics and Automation Letters (RA-L), and ICRA 2026 (RA-L Track), 10, 7, pp.7302-7309.
- Toelle, M.; Gruner, T.; Palenicek, D.; Schneider, T.; Guenster, J.; Watson, J.; Tateo, D.; Liu, P.; Peters, J. (2025). Towards Safe Robot Foundation Models using Inductive Biases, SafeVLM Workshop @ IEEE International Conference on Robotics and Automation (ICRA), Spotlight.
- Le, A. T.; Nguyen, K.; Vu, M.N.; Carvalho, J.; Peters, J. (2025). Model Tensor Planning, Transactions on Machine Learning Research (TMLR).
- Arriaga, O.; Adam, R.O.; Laux, M.; Gutzeit, L.; Ragni, M.; Peters, J.; Kirchner, F. (2025). Bayesian Inverse Physics for Neuro-Symbolic Robot Learning, Conference on Neurosymbolic Learning and Reasoning.
- Vincent, T.; Wahren, F.; Peters, J.; Belousov, B.; D'Eramo, C. (2025). Adaptive Q-Network: On-the-fly Target Selection for Deep Reinforcement Learning, International Conference on Learning Representations (ICLR).
- Diwan, A.A.; Urain, J.; Kober, J.; Peters, J. (2025). Noise-conditioned Energy-based Annealed Rewards (NEAR): A Generative Framework for Imitation Learning from Observation, International Conference on Learning Representations (ICLR).
- Straub, D.; Niehues, T.F.; Peters, J.; Rothkopf, C.A. (2025). Inverse decision-making using neural amortized Bayesian actors, International Conference on Learning Representations (ICLR).
- Huang, J.; Tateo, D.; Liu, P.; Peters, J. (2025). Adaptive Control based Friction Estimation for Tracking Control of Robot Manipulators, IEEE Robotics and Automation Letters, and IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 10, pp.2454-2461.
- Duret, G.; Bourennane, Y.; Mazurak, D.; Samsonenko, A.; Zara, F.; Peters, J.; Chen, L. (2025). Facilitate and scale up the creation of 3D meshes and 6D category-based datasets with generative models: GenVegeFruits, Proceedings of the IEEE International Conference on Image Processing (ICIP).
- Palenicek, D.; Vogt, F.; Peters, J. (2025). Scaling CrossQ with Weight Normalization, Multi-disciplinary Conference on Reinforcement Learning and Decision Making (RLDM).
- Vincent, T.; Faust, T.; Tripathi, Y.; Peters, J.; D'Eramo, C. (2025). Eau De Q-Network: Adaptive Distillation of Neural Networks in Deep Reinforcement Learning, Multi-disciplinary Conference on Reinforcement Learning and Decision Making (RLDM).
- Lenz, J.; Pfenning, I.; Gruner, T.; Palenicek, D.; Schneider, T.; Peters, J. (2025). Exploring the Role of Vision and Touch in Reinforcement Learning for Dexterous Insertion Tasks, Multi-disciplinary Conference on Reinforcement Learning and Decision Making (RLDM).
- Bohlinger, N.; Kinzel, J.; Palenicek, D.; Antczak, L.; Peters, J. (2025). Gait in Eight: Efficient On-Robot Learning for Omnidirectional Quadruped Locomotion, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
- Bohlinger, N.; Stasica, M.; Bick, A.; Mohseni, O.; Fritzsche, J.; Hübler, C.; Peters, J.; Seyfarth, A. (2025). Bridge the Gap: Enhancing Quadruped Locomotion with Vertical Ground Perturbations, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
- Koosha, T. A.; Kshirsagar, A.; Augustat, N.; Hahne, F.; Mühl, D.; Melzig, C. A.; Bremmer, F.; Peters, J.; Endres, D. M. (2025). Staring Down the Elevator Shaft: Postural Responses to Virtual Heights in an Indoor Environment, Proceedings of the Annual Meeting of the Cognitive Science Society (CogSci).
- Chowdhury, A.; Maurer, H.; Kshirsagar, A.; Ploeger, K.; Peters, J.; Mueller, H. (2025). The Earlier You Know, the Smoother You Act, Conference of the Human Movement Science Section of the German Association of Sports Science.
- Jankowski, J.; Maric, A.; Liu, P.; Tateo, D.; Peters, J.; Calinon, S. (2025). Distilling Contact Planning for Fast Trajectory Optimization in Robot Air Hockey, Robotics: Science and Systems (RSS).
- Vincent, T.; Faust, T.; Tripathi, Y.; Peters, J.; D'Eramo, C. (2025). Eau De Q-Network: Adaptive Distillation of Neural Networks in Deep Reinforcement Learning, Reinforcement Learning Journal (RLJ).
- Bohlinger, N.; Ai, B.; Dai, L.; Li, D.; Mu, T.; Wu, Z.; Fay, K.; Christensen, H.I.; Peters, J.; Su, H. (2025). Towards Embodiment Scaling Laws in Robot Locomotion, Conference on Robot Learning (CoRL).
- Pompetzki, K.; Le, A. T.; Gruner, T.; Watson, J.; Chalvatzaki, G.; Peters, J. (2025). Geometrically-Aware Goal Inference: Leveraging Motion Planning as Inference, Multi-disciplinary Conference on Reinforcement Learning and Decision Making (RLDM).
- Celik, O.; Li, Z.; Blessing, D.; Li, G.; Palenicek, D.; Peters, J.; Chalvatzaki, G.; Neumann, G. (2025). DIME: Diffusion-Based Maximum Entropy Reinforcement Learning, International Conference on Machine Learning (ICML).
- Aditya, D.; Huang, J.; Bohlinger, N.; Kicki, P.; Walas, Peters, J.; Luperto, M.; Tateo, D. (2025). Robust Localization, Mapping, and Navigation for Quadruped Robots, European Conference on Mobile Robots (ECMR).
- Schulze, L.; Peters, J.; Arenz, O. (2025). Context-Aware Deep Lagrangian Networks for Model Predictive Control, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
- Celik, O.; Li, Z.; Blessing, D.; Li, G.; Palenicek, D.; Peters, J.; Chalvatzaki, G.; Neumann, G. (2025). DIME: Diffusion-Based Maximum Entropy Reinforcement Learning, EXAIT Workshop at International Conference on Machine Learning (ICML).
- Nguyen, K.; Le, A. T.; Pham, T.; Manfred, H.; Peters, J.; Vu, M.N. (2025). FlowMP: Learning Motion Fields for Robot Planning with Conditional Flow Matching, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
- Rother, D.; Herbert, F.; Kalter, F.; Koert, D.; Pajarinen, J.; Peters, J.; Weisswange, T.H. (2025). Entropy-based blending of policies for multi-agent coexistence, Autonomous Agents and Multi-Agent Systems, 39, pp.27.
- Keller, L.; Tanneberg, D.; Peters, J. (2025). Neuro-Symbolic Imitation Learning: Discovering Symbolic Abstractions for Skill Learning, IEEE International Conference on Robotics and Automation (ICRA).
- Ding, H.; Jaquier, N.; Peters, J.; Rozo, L. (2025). Fast and Robust Visuomotor Riemannian Flow Matching Policy, IEEE Transactions on Robotics (T-RO), 41, pp.5327-5343.