Aditya Kapoor

Research Interests

Reinforcement Learning, Multi-Agent Systems, Representation Learning, Robotics, Robot Learning, Human-Robot Interaction, Cognitive Science

More Information

Website || Github || LinkedIn || X || Google Scholar

Contact Information

Office: Room G33, Kilburn Building, Oxford Road, Manchester M13 9PL
aditya [dot] kapoor [at] robot [hyphen] learning [dot] de

I am an ELLIS PhD student jointly at the University of Manchester and TU Darmstadt, where I am advised by Dr. Mingfei Sun and Prof. Jan Peters. I also work closely with Dr. Yilun Du and Benjamin Freed. My research focuses on developing intelligent, embodied agents capable of effective communication, collaboration, and decision-making in multi-agent environments. I work at the intersection of foundation models, reinforcement learning, and multi-agent systems, aiming to create agents that can dynamically adapt, learn from interactions, and contribute to collective problem-solving.

During my PhD, I am exploring methods to build scalable and adaptable multi-agent societies that communicate efficiently, learn continuously, and coordinate effectively across diverse tasks and environments. By leveraging foundation models, I aim to enhance agents' perception, communication, and decision-making capabilities. In particular, I work on enabling agents to selectively draw on shared knowledge, respond to other agents' behavior, and optimize both individual and collective performance.

Before my PhD, I was a predoctoral researcher at Tata Consultancy Services Research & Innovation in Mumbai, where I worked with Dr. Mayank Baranwal and Dr. Harshad Khadilkar. I also briefly worked with Dr. Vighnesh Vatsal and Dr. Jay Gubbi at TCS Bangalore. I completed my Bachelor of Engineering in Computer Science at BITS Pilani, Goa.

My research is inspired by human teamwork, where individuals contribute unique skills within a group to accomplish shared goals. One of the key challenges I address is balancing autonomous action with cooperative roles in multi-agent systems. I focus on enabling agents to decompose complex objectives into manageable sub-tasks, coordinate effectively, and develop flexible decision-making strategies. By integrating reinforcement learning with foundation models, I aim to enhance agent adaptability, allowing agents to engage in high-level interactions and dynamic role assignments within a team.

Beyond multi-agent collaboration, my research has broader implications for human-AI interaction, particularly in scenarios where robots and intelligent systems must seamlessly cooperate with human operators. By improving representation learning and decision-making frameworks, I aim to develop AI-driven multi-agent societies that are robust, scalable, and capable of assisting in real-world tasks such as industrial operations, autonomous driving, and cooperative AI.

During my time at the IAS lab, I aspire to contribute to advances in reinforcement learning, communication-based multi-agent coordination, and foundation-model-driven decision-making. My long-term vision is to develop AI systems that can operate autonomously while interacting effectively with both humans and other intelligent agents.

If you are interested in collaborating, feel free to reach out via email at aditya [dot] kapoor [at] robot [hyphen] learning [dot] de or aditya [dot] kapoor [at] postgrad [dot] manchester [dot] ac [dot] uk.
