Robot Perception

In robotics, tactile sensing is a rapidly advancing field, particularly vision-based tactile sensing, which allows robots to 'see' through touch. At our laboratory, we develop algorithms that process the raw signals from tactile sensors and transform them into representations that robots can understand and act upon. This research is not just about interpreting touch; it is about enabling robots to perceive the texture, hardness, and weight of objects through tactile feedback alone. By integrating this tactile data with advanced control systems, we enhance robots' ability to interact with and manipulate their environment in a more nuanced and dexterous manner. This line of work is a cornerstone of our efforts to create more sensitive, responsive, and capable robots, and a featured part of our laboratory's research.

Event-Triggered Tactile Sensing and Control

Evetac: An Event-based Optical Tactile Sensor for Robotic Manipulation. Optical tactile sensors have recently become popular. They provide high spatial resolution, but struggle to offer fine temporal resolutions. To overcome this shortcoming, we study the idea of replacing the RGB camera with an event-based camera and introduce a new event-based optical tactile sensor called Evetac. Along with hardware design, we develop touch processing algorithms to process its measurements online at 1000 Hz. We devise an efficient algorithm to track the elastomer's deformation through the imprinted markers despite the sensor's sparse output. Benchmarking experiments demonstrate Evetac's capabilities of sensing vibrations up to 498 Hz, reconstructing shear forces, and significantly reducing data rates compared to RGB optical tactile sensors. Moreover, Evetac's output and the marker tracking provide meaningful features for learning data-driven slip detection and prediction models. The learned models form the basis for a robust and adaptive closed-loop grasp controller capable of handling a wide range of objects. We believe that fast and efficient event-based tactile sensors like Evetac will be essential for bringing human-like manipulation capabilities to robotics.

  • Funk, N.; Helmut, E.; Chalvatzaki, G.; Calandra, R.; Peters, J. (2024). Evetac: An Event-based Optical Tactile Sensor for Robotic Manipulation. IEEE Transactions on Robotics (T-RO), 40, pp. 3812-3832.
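
To illustrate the kind of online processing such a sensor calls for, the sketch below shows one simple way to track elastomer markers from a sparse event stream at 1 kHz: events are accumulated over 1 ms windows, assigned to the nearest marker, and each marker is nudged toward the centroid of its assigned events. The event format, window length, radius, and update rule (and the function name track_markers) are illustrative assumptions, not the algorithm from the paper.

# Minimal sketch of event-based marker tracking, loosely inspired by the idea
# described above. Assumptions (not from the paper): events arrive as
# (x, y, t) tuples sorted by time, marker seed positions are known, and a
# windowed-centroid update at 1 kHz suffices for illustration.
import numpy as np


def track_markers(events, marker_positions, window=1e-3, radius=6.0, alpha=0.5):
    """Update marker positions from a stream of events.

    events           : (N, 3) array of [x, y, t], sorted by timestamp t (seconds)
    marker_positions : (M, 2) array of initial [x, y] marker centers (pixels)
    window           : accumulation window in seconds (1 ms -> 1 kHz updates)
    radius           : events farther than this from every marker are ignored
    alpha            : smoothing factor for the position update
    Returns a list of (M, 2) arrays, one marker-position estimate per window.
    """
    positions = marker_positions.astype(float).copy()
    trajectory = []
    t0, t_end = events[0, 2], events[-1, 2]

    while t0 < t_end:
        # Collect all events that fall into the current 1 ms window.
        mask = (events[:, 2] >= t0) & (events[:, 2] < t0 + window)
        xy = events[mask, :2]

        if len(xy) > 0:
            # Assign each event to its nearest marker and move that marker
            # toward the centroid of the events assigned to it.
            dists = np.linalg.norm(xy[:, None, :] - positions[None, :, :], axis=2)
            nearest = np.argmin(dists, axis=1)
            in_range = dists[np.arange(len(xy)), nearest] < radius
            for m in range(len(positions)):
                assigned = xy[in_range & (nearest == m)]
                if len(assigned) > 0:
                    positions[m] = (1 - alpha) * positions[m] + alpha * assigned.mean(axis=0)

        trajectory.append(positions.copy())
        t0 += window

    return trajectory

Marker displacements relative to the seed positions, computed from such a trajectory, are the kind of feature that slip detection and prediction models can consume.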

Tactile Sensing

High-Resolution Pixelwise Contact Area and Normal Force Estimation for the GelSight Mini Visuotactile Sensor Using Neural Networks. Visuotactile sensors are gaining momentum in robotics because they provide high-resolution contact measurements at a fraction of the price of conventional force/torque sensors. It is, however, not straightforward to extract useful signals from their raw camera stream, which captures the deformation of an elastic surface upon contact. To utilize visuotactile sensors more effectively, powerful approaches are required, capable of extracting meaningful contact-related representations. This paper proposes a neural network architecture called CANFnet that provides a high-resolution pixelwise estimation of the contact area and normal force given the raw sensor images. The CANFnet is trained on a labeled experimental dataset collected using a conventional force/torque sensor, thereby circumventing material identification and complex modeling for label generation. We test CANFnet using GelSight Mini sensors and showcase its performance on real-time force control and marble rolling tasks. We also find that CANFnet generalizes across different sensors of the same type. Thus, the trained CANFnet provides a plug-and-play solution for pixelwise contact area and normal force estimation for visuotactile sensors.

  • Funk, N.; Mueller, P.-O.; Belousov, B.; Savchenko, A.; Findeisen, R.; Peters, J. (2023). High-Resolution Pixelwise Contact Area and Normal Force Estimation for the GelSight Mini Visuotactile Sensor Using Neural Networks. Embracing Contacts Workshop at ICRA 2023.
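
As a rough illustration of the pixelwise estimation problem, the sketch below shows a small fully convolutional PyTorch network with two per-pixel heads, one for contact probability and one for non-negative normal force. The class name PixelwiseContactForceNet, the layer sizes, and the head design are assumptions for illustration only; the actual CANFnet architecture and training procedure are described in the paper.

# Minimal sketch of a pixelwise contact-area / normal-force estimator in the
# spirit of CANFnet. The two-head design and layer choices are illustrative
# assumptions, not the architecture from the paper.
import torch
import torch.nn as nn


class PixelwiseContactForceNet(nn.Module):
    def __init__(self, in_channels: int = 3, base: int = 32):
        super().__init__()
        # Encoder: downsample the raw tactile image twice.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, base, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(base, 2 * base, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: upsample back to the input resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(2 * base, base, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(base, base, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Two pixelwise heads: contact probability and non-negative normal force.
        self.contact_head = nn.Conv2d(base, 1, 1)
        self.force_head = nn.Conv2d(base, 1, 1)

    def forward(self, x):
        features = self.decoder(self.encoder(x))
        contact = torch.sigmoid(self.contact_head(features))       # in [0, 1]
        force = nn.functional.softplus(self.force_head(features))  # >= 0
        return contact, force


# Example: a batch of 240x320 GelSight-like RGB images.
net = PixelwiseContactForceNet()
contact_map, force_map = net(torch.randn(4, 3, 240, 320))
print(contact_map.shape, force_map.shape)  # both (4, 1, 240, 320)

Because the outputs are per-pixel maps at the input resolution, summing the force map over the predicted contact region directly yields a total normal force signal that can drive real-time force control.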

Placing by Touching: An empirical study on the importance of tactile sensing for precise object placing. This work deals with a practical everyday problem: stable object placement on flat surfaces starting from unknown initial poses. Common object-placing approaches require either complete scene specifications or extrinsic sensor measurements, e.g., cameras, that occasionally suffer from occlusions. We propose a novel approach for stable object placing that combines tactile feedback and proprioceptive sensing. We devise a neural architecture called PlaceNet that estimates a rotation matrix, resulting in a corrective gripper movement that aligns the object with the placing surface for the subsequent object manipulation. We compare models using different sensing modalities, such as force-torque sensing and an external motion capture system, as well as two classical baseline approaches, in real-world object placing tasks with different objects. The experimental evaluation of our placing policies with a set of unseen everyday objects reveals significant generalization of our proposed pipeline, suggesting that tactile sensing plays a vital role in the intrinsic understanding of robotic dexterous object manipulation.

  • Lach, L.; Funk, N.; Haschke, R.; Lemaignan, S.; Ritter, H.; Peters, J.; Chalvatzaki, G. (2023). Placing by Touching: An empirical study on the importance of tactile sensing for precise object placing. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
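
The sketch below illustrates one way such a corrective rotation could be predicted: a small MLP maps tactile and proprioceptive features to a 6D rotation representation that is orthonormalized into a valid rotation matrix. The class name PlacingRotationNet, the feature dimensions, and the 6D parameterization are illustrative assumptions rather than the PlaceNet design from the paper.

# Minimal sketch of a network that predicts a corrective rotation matrix from
# tactile and proprioceptive features, as in the placing setup described above.
# Input dimensions and the 6D rotation parameterization are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def rotation_6d_to_matrix(x6: torch.Tensor) -> torch.Tensor:
    """Map a 6D vector to a valid rotation matrix via Gram-Schmidt."""
    a1, a2 = x6[..., :3], x6[..., 3:]
    b1 = F.normalize(a1, dim=-1)
    b2 = F.normalize(a2 - (b1 * a2).sum(-1, keepdim=True) * b1, dim=-1)
    b3 = torch.cross(b1, b2, dim=-1)
    return torch.stack((b1, b2, b3), dim=-2)


class PlacingRotationNet(nn.Module):
    def __init__(self, tactile_dim: int = 256, proprio_dim: int = 7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(tactile_dim + proprio_dim, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 6),  # 6D rotation representation
        )

    def forward(self, tactile_features, proprio):
        x = torch.cat([tactile_features, proprio], dim=-1)
        return rotation_6d_to_matrix(self.mlp(x))  # (batch, 3, 3)


# The predicted rotation would be applied as a corrective gripper motion
# before releasing the object onto the surface.
net = PlacingRotationNet()
R = net(torch.randn(2, 256), torch.randn(2, 7))
print(R.shape)  # (2, 3, 3)

The 6D parameterization is a common choice for regressing rotations with neural networks because it avoids the discontinuities of Euler angles and quaternions; any other continuous rotation representation would serve equally well in this sketch.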

Learning Dynamic Tactile Sensing with Robust Vision-based Training. Dynamic tactile sensing is a fundamental ability for recognizing materials and objects. However, while humans are born with partially developed dynamic tactile sensing and master this skill quickly, today’s robots remain in their infancy. The development of such a sense requires not only better sensors, but also the right algorithms to deal with these sensors’ data. For example, when classifying a material based on touch, the data is noisy, high-dimensional and contains irrelevant signals as well as essential ones. Few classification methods from machine learning can deal with such problems. In this paper, we propose an efficient approach to inferring suitable lower-dimensional representations of the tactile data. In order to classify materials based on only the sense of touch, these representations are autonomously discovered using visual information of the surfaces during training. However, accurately pairing vision and tactile samples in real robot applications is a difficult problem. The proposed approach therefore works with weak pairings between the modalities. Experiments show that the resulting approach is very robust and yields significantly higher classification performance based on only dynamic tactile sensing.

  • Kroemer, O.; Lampert, C.H.; Peters, J. (2011). Learning Dynamic Tactile Sensing with Robust Vision-based Training. IEEE Transactions on Robotics (T-RO), 27, 3, pp. 545-557.
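
The sketch below illustrates the weak-pairing idea in a simplified form: tactile and visual samples are aligned only through per-surface averages, and an SVD of their cross-covariance yields a low-dimensional tactile projection that can then be used for touch-only classification. The function name tactile_projection, the data shapes, and the maximum-covariance-style projection are illustrative simplifications and not the exact method of the paper.

# Minimal sketch of discovering a low-dimensional tactile representation with
# the help of weakly paired visual data. "Weak pairing" is modeled here as
# group-level correspondence only: tactile and visual samples share a surface
# id, not a one-to-one match.
import numpy as np


def tactile_projection(tactile, vision, tactile_groups, vision_groups, k=5):
    """Return a (Dt, k) projection for tactile features using visual guidance."""
    surface_ids = np.intersect1d(np.unique(tactile_groups), np.unique(vision_groups))
    # Weak pairing: align the two modalities only through per-surface means.
    T = np.stack([tactile[tactile_groups == g].mean(axis=0) for g in surface_ids])
    V = np.stack([vision[vision_groups == g].mean(axis=0) for g in surface_ids])
    T -= T.mean(axis=0)
    V -= V.mean(axis=0)
    # Directions in tactile space that maximally covary with the visual data.
    U, _, _ = np.linalg.svd(T.T @ V, full_matrices=False)
    return U[:, :k]


# Hypothetical data: tactile (200 samples, 40-D) and visual (150 samples, 60-D)
# recordings of 10 surfaces, paired only at the surface level.
rng = np.random.default_rng(0)
tactile = rng.normal(size=(200, 40))
vision = rng.normal(size=(150, 60))
tactile_groups = rng.integers(0, 10, size=200)
vision_groups = rng.integers(0, 10, size=150)

W = tactile_projection(tactile, vision, tactile_groups, vision_groups)
tactile_low = tactile @ W  # touch-only features for a downstream material classifier

At test time only the tactile projection W is needed, so material classification can proceed from touch alone, which is the point made in the paper's evaluation.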