Robot Perception

In robotics, tactile sensing is a rapidly growing field, particularly vision-based tactile sensing, which allows robots to 'see' through touch. At our laboratory, we develop algorithms that process the raw signals of tactile sensors and transform them into representations that robots can understand and respond to. This research is not just about interpreting touch; it is about enabling robots to perceive the intricacies of texture, the hardness of surfaces, and the weight of objects, all through tactile feedback. By integrating this tactile data with advanced control systems, we enhance the robots' ability to interact with and manipulate their environment in a more nuanced and sophisticated manner. This research is a cornerstone of our efforts to create more sensitive, responsive, and dexterous robots, and a featured aspect of our laboratory's work.

Active Exploration

Active Exploration for Robotic Manipulation


Robotic manipulation remains a largely unsolved problem despite significant advances in robotics and machine learning over the last decades. One of the central challenges of manipulation is partial observability, as the agent usually does not know all physical properties of the environment and the objects it is manipulating in advance. A recently emerging theory that deals with partial observability in an explicit manner is Active Inference. It does so by driving the agent to act in a way that is not only goal-directed but also informative about the environment. In this work, we apply Active Inference to a hard-to-explore simulated robotic manipulation task, in which the agent has to balance a ball into a target zone. Since the reward of this task is sparse, in order to explore this environment, the agent has to learn to balance the ball without any extrinsic feedback, driven purely by its own curiosity. We show that the information-seeking behavior induced by Active Inference allows the agent to explore such challenging, sparse environments systematically. Finally, we conclude that an information-seeking objective is beneficial in sparse environments and allows the agent to solve tasks on which methods without directed exploration fail.
    •       Bib
      Schneider, T.; Belousov, B.; Chalvatzaki, G.; Romeres, D.; Jha, D.K.; Peters, J. (2022). Active Exploration for Robotic Manipulation, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
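
To make the information-seeking objective concrete, here is a minimal Python sketch of an intrinsic exploration bonus based on disagreement within an ensemble of learned dynamics models. Ensemble disagreement is a common proxy for expected information gain; the sketch is illustrative and does not reproduce the exact Active Inference objective used in the paper.

```python
# Minimal sketch of an information-seeking (intrinsic) reward, assuming an
# ensemble of learned dynamics models whose disagreement serves as a proxy
# for the agent's model uncertainty. Illustrative only; not the exact
# Active Inference objective from the paper.
import numpy as np


def intrinsic_reward(ensemble_predictions: np.ndarray) -> float:
    """Disagreement among ensemble predictions of the next state.

    ensemble_predictions: shape (n_models, state_dim), each row one model's
    prediction of the next state for the same (state, action) pair.
    High disagreement ~ high expected information gain from this transition.
    """
    return float(ensemble_predictions.var(axis=0).mean())


def total_reward(extrinsic: float, ensemble_predictions: np.ndarray,
                 beta: float = 1.0) -> float:
    # Sparse extrinsic reward plus curiosity bonus; beta trades off
    # goal-directed and information-seeking behavior.
    return extrinsic + beta * intrinsic_reward(ensemble_predictions)
```

In a sparse task, the extrinsic term is zero almost everywhere, so exploration is initially driven entirely by the curiosity bonus until the target zone is reached.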

Tactile Sensing

High-Resolution Pixelwise Contact Area and Normal Force Estimation for the GelSight Mini Visuotactile Sensor Using Neural Networks

Visuotactile sensors are gaining momentum in robotics because they provide high-resolution contact measurements at a fraction of the price of conventional force/torque sensors. It is, however, not straightforward to extract useful signals from their raw camera stream, which captures the deformation of an elastic surface upon contact. To utilize visuotactile sensors more effectively, powerful approaches are required that can extract meaningful contact-related representations. This paper proposes a neural network architecture called CANFnet that provides a high-resolution pixelwise estimation of the contact area and normal force given the raw sensor images. The CANFnet is trained on a labeled experimental dataset collected using a conventional force/torque sensor, thereby circumventing material identification and complex modeling for label generation. We test CANFnet using GelSight Mini sensors and showcase its performance on real-time force control and marble rolling tasks. We also show that CANFnet generalizes across different sensors of the same type. Thus, the trained CANFnet provides a plug-and-play solution for pixelwise contact area and normal force estimation for visuotactile sensors.

    •       Bib
      Funk, N.; Mueller, P.-O.; Belousov, B.; Savchenko, A.; Findeisen, R.; Peters, J. (2023). High-Resolution Pixelwise Contact Area and Normal Force Estimation for the GelSight Mini Visuotactile Sensor Using Neural Networks, Embracing Contacts-Workshop at ICRA 2023.
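
As an illustration of the kind of pixelwise prediction CANFnet performs, the following is a small, self-contained PyTorch sketch of a fully convolutional encoder-decoder that maps a raw tactile image to a contact-area map and a normal-force map. The layer sizes, heads, and input resolution are assumptions made for the example, not the actual CANFnet architecture or training setup.

```python
# Illustrative sketch (not the actual CANFnet architecture): a small fully
# convolutional encoder-decoder mapping a GelSight-style RGB image to two
# pixelwise outputs, a contact-area probability map and a normal-force map.
import torch
import torch.nn as nn


class PixelwiseContactNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.contact_head = nn.Conv2d(16, 1, 1)  # per-pixel contact logits
        self.force_head = nn.Conv2d(16, 1, 1)    # per-pixel normal force

    def forward(self, img):
        feat = self.decoder(self.encoder(img))
        contact = torch.sigmoid(self.contact_head(feat))  # probability in [0, 1]
        force = torch.relu(self.force_head(feat))          # non-negative force
        return contact, force


# Example usage on a dummy 240x320 tactile image batch.
model = PixelwiseContactNet()
contact_map, force_map = model(torch.rand(1, 3, 240, 320))
```

Supervision for both output heads would come from a dataset labeled with a conventional force/torque sensor, as described above.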

Event-Triggered Sensing and Control

Evetac: An Event-based Optical Tactile Sensor for Robotic Manipulation

Optical tactile sensors have recently become popular. They provide high spatial resolution but struggle to offer fine temporal resolution. To overcome this shortcoming, we study the idea of replacing the RGB camera with an event-based camera and introduce a new event-based optical tactile sensor called Evetac. Alongside the hardware design, we develop touch-processing algorithms to process its measurements online at 1000 Hz. We devise an efficient algorithm to track the elastomer's deformation through the imprinted markers despite the sensor's sparse output. Benchmarking experiments demonstrate Evetac's capabilities of sensing vibrations up to 498 Hz, reconstructing shear forces, and significantly reducing data rates compared to RGB optical tactile sensors. Moreover, Evetac's output and the marker tracking provide meaningful features for learning data-driven slip detection and prediction models. The learned models form the basis for a robust and adaptive closed-loop grasp controller capable of handling a wide range of objects. We believe that fast and efficient event-based tactile sensors like Evetac will be essential for bringing human-like manipulation capabilities to robotics.

    •       Bib
      Funk, N.; Helmut, E.; Chalvatzaki, G.; Calandra, R.; Peters, J. (submitted). Evetac: An Event-based Optical Tactile Sensor for Robotic Manipulation, submitted to the IEEE Transactions on Robotics (T-RO).
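
To give a flavor of the touch processing involved, the snippet below sketches how a sparse event stream might be accumulated and used to update marker positions with a simple nearest-neighbor centroid rule. It assumes events arrive as (x, y, timestamp, polarity) tuples and that initial marker positions are known; it is a simplified illustration, not Evetac's actual tracking algorithm.

```python
# Simplified illustration of event-based marker tracking; not Evetac's
# actual algorithm. Events are assumed to be (x, y, timestamp, polarity)
# tuples with integer pixel coordinates.
import numpy as np


def accumulate_events(events, height, width):
    """Accumulate a short (e.g. 1 ms) slice of events into a count image."""
    frame = np.zeros((height, width), dtype=np.int32)
    for x, y, _t, _p in events:
        frame[y, x] += 1
    return frame


def update_markers(markers, events, radius=5.0):
    """Shift each marker toward the centroid of nearby events.

    markers: (M, 2) array of (x, y) marker positions.
    events:  (N, 4) array-like of (x, y, timestamp, polarity).
    """
    markers = np.asarray(markers, dtype=float).copy()
    xy = np.asarray(events, dtype=float)[:, :2]
    for i, m in enumerate(markers):
        dist = np.linalg.norm(xy - m, axis=1)
        near = xy[dist < radius]
        if len(near) > 0:
            markers[i] = near.mean(axis=0)  # new estimate of the marker center
    return markers
```

Running such an update on every short event slice yields marker displacements that can serve as input features for downstream slip detection or force reconstruction.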