Tucker Hermans (University of Utah)
Patrick van der Smagt (TU Munich)
Heni Ben Amor (Arizona State)
July 17 (cf. http://www.roboticsconference.org/workshops.html), in Room 108 on the first floor of the Istituto Tecnico Galileo Galilei
In order for robots to be deployed in real-world homes, flexible manufacturing sites, and dangerous environments, they must be endowed with sophisticated interaction and manipulation capabilities that are robust to the uncertainty of the real world. A newly available class of reliable tactile sensors and affordable 3D cameras must be leveraged to accomplish this goal. Both tactile and visual sensing provide noisy, high-dimensional signals that control and planning algorithms must exploit to interact competently with the world. Learning serves as a key tool for dealing with such sensory information. Applications of such learning have shown how robots can autonomously discover objects, interactively detect objects in clutter, improve manipulation skills, and generalize these skills to previously unseen objects.
However, the methods used across these application challenges have been quite disparate, and the different interaction domains have not yet shared their approaches with one another. A workshop is needed to bring together experts who can shed light on the most promising of these competing directions. We therefore propose a workshop focused on how learning can best be used to improve robot interaction based on visual and tactile sensing. While primarily concerned with manipulation, we define interaction quite broadly here, including haptics in robotic surgery, physical human-robot interaction, and active object exploration for mapping. We will invite speakers with expertise in tactile sensing, visual perception, and robot learning to give differing viewpoints on the problem of learning for interaction. We will couple these talks with interactive presentations of recent research submitted to the workshop and add poster spotlights between the talks. We will conclude the workshop by engaging the invited speakers and contributing researchers in a structured debate. To foster focused discussion, we will ask both poster presenters and invited speakers to define relevant questions prior to the workshop, and we will collect new questions throughout the day.
Title: From haptic perception to decision-making for grasp and manipulation
Abstract: Haptic perception remains a grand challenge for artificial hands. The functionality of artificial dexterous manipulators could be enhanced by “haptic intelligence” that enables identification of objects and their features via touch alone. This could be especially useful when reliable visual and/or proprioceptive feedback is unavailable. In the first part of the talk, studies will be presented in which a robot hand outfitted with a deformable, multimodal tactile sensor was used to replay human-inspired haptic “exploratory procedures” to perceive salient geometric features such as edges and fingertip-sized bumps and pits. Tactile signals generated by active fingertip motions were used to extract inputs for offline support vector classification and regression models. The influence of tactile sensor capabilities, exploratory procedure selection, and hand compliance on perception accuracy will be discussed. In the second part of the talk, recent efforts toward real-time haptic perception and decision-making will be presented. Probabilistic modeling techniques, widely used for speech recognition, were applied to raw multimodal tactile sensor data from the haptic exploration of edges. Hidden Markov Models were used to build classifiers that perceive edge orientation, with respect to the body-fixed reference frame of an artificial fingertip, in real time. The implementation of anytime algorithms for reinforcement learning and real-time decision-making is now underway for a two-fingered manipulation task.
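As an illustrative aside (not taken from the talk materials), the HMM-based classification described above can be sketched as follows: one Gaussian HMM is trained per edge-orientation class on sequences of tactile features, and a new exploration sequence is labeled by whichever class model assigns it the highest likelihood. The data, feature dimensionality, class labels, and use of the hmmlearn library below are assumptions for illustration only, not the speaker's implementation.

# Sketch: per-class Gaussian HMMs for classifying edge orientation from
# multimodal tactile time series. All data here are synthetic placeholders.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)

def synthetic_sequences(n_seq, seq_len, n_features, offset):
    """Generate toy tactile feature sequences for one orientation class."""
    return [offset + rng.normal(size=(seq_len, n_features)) for _ in range(n_seq)]

# Hypothetical classes: edge orientations relative to the fingertip frame.
classes = {"0_deg": 0.0, "45_deg": 0.5, "90_deg": 1.0}
train = {c: synthetic_sequences(20, 50, 4, off) for c, off in classes.items()}

# Train one HMM per class on that class's concatenated sequences.
models = {}
for c, seqs in train.items():
    X = np.vstack(seqs)               # (total_timesteps, n_features)
    lengths = [len(s) for s in seqs]  # per-sequence lengths for hmmlearn
    m = GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
    m.fit(X, lengths)
    models[c] = m

def classify(seq):
    """Label a new tactile sequence by the class whose HMM scores it highest."""
    return max(models, key=lambda c: models[c].score(seq))

test_seq = synthetic_sequences(1, 50, 4, classes["45_deg"])[0]
print("predicted orientation:", classify(test_seq))

In practice the maximum-likelihood decision could be taken incrementally as tactile samples arrive, which is what makes this family of models attractive for the real-time perception discussed in the talk.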
Title: RoboBrain: Learning Deep Latent Haptic Features for Model Predictive Control
Title: What Gibson Might Say If He Were a Roboticist Presenting at This Workshop…
Abstract: The psychologist James J. Gibson has altered the way psychologists think about perception, be it tactile or visual or related to any other sense. Given his impact, roboticists like to use Gibson’s name to gain some interdisciplinary street cred, myself included. But are we willing to accept the consequences of his insights about perception? It would probably mean changing the way we tackle the problem in some fundamental way. In this split-personality talk, I would like to present some of the implications, at least given my own understanding of his research. And I would like to present some efforts towards perception that attempt to take these implications seriously. Now that I think about it… it may mean that a “focus on how learning can best be used to improve robot interaction” (from the workshop description) might not be the best way to focus. To provide evidence of split personality, I will also present some work that applies machine learning to visual perception. This work is probably not very “Gibsonian”, but it helped us win the Amazon Picking Challenge. So maybe there is an alternative to Gibson after all? I will try to leave some time at the end of my talk to discuss this question.