RSS 2015 Workshop: Learning for Visual and Tactile Interaction

Organizers

Tucker Hermans (University of Utah)
Patrick van der Smagt (TU Munich)
Heni Ben Amor (Arizona State)

Date and Location

July 17 (see http://www.roboticsconference.org/workshops.html), in Room 108 on the first floor of the Istituto Tecnico Galileo Galilei.

Conference Details

RSS 2015 in Rome, Italy.

Description

In order for robots to be deployed in real-world homes, flexible manufacturing sites, and dangerous environments, they must be endowed with sophisticated interaction and manipulation capabilities that are robust to the uncertainty of the real world. A newly available class of reliable tactile sensors and affordable 3D cameras must be leveraged to accomplish this goal. Both tactile and visual sensing provide noisy, high-dimensional signals that control and planning algorithms must use to interact competently with the world. Learning serves as a key tool for dealing with such sensory information. Applications of such learning have shown how robots can autonomously discover objects, interactively detect objects in clutter, improve manipulation skills, and generalize these skills to previously unseen objects.

However, the methods used across these many application challenges have been quite disparate, and the different interaction domains have yet to share their approaches with one another. A workshop is needed to bring together experts who can shed light on the most promising of these competing directions. Thus, we propose a workshop focused on how learning can best be used to improve robot interaction based on visual and tactile sensing. While we are primarily concerned with manipulation, we define interaction quite broadly here, including haptics in robotic surgery, physical human-robot interaction, and active object exploration for mapping. We have invited speakers with expertise in tactile sensing, visual perception, and robot learning to give differing viewpoints on the problem of learning for interaction. We will couple these talks with interactive presentations of recently submitted research in the area and add poster spotlights between the talks. We will conclude the workshop by engaging the invited speakers and contributing researchers in a structured debate. To foster focused discussion, we will require both poster presenters and invited speakers to define relevant questions prior to the workshop, and we will collect new questions throughout the day.

Schedule

  • 8:40 - 8:45 Welcome and Introduction
  • 8:45 - 9:30 Invited Speaker 1: Ashutosh Saxena and Ian Lenz - RoboBrain: Learning Deep Latent Haptic Features for Model Predictive Control
  • 9:30 - 9:45 Contributed Talk 1: Tapomayukh Bhattacharjee - Sensing Incidental Contact to Inform Manipulation in Clutter
  • 9:45 - 10:30 Invited Speaker 2: Rod Grupen - TBA
  • 10:30 - 11:00 Coffee Break
  • 11:00 - 11:45 Invited Speaker 3: Veronica Santos - From haptic perception to decision-making for grasp and manipulation
  • 11:45 - 12:00 Contributed Talk 2: Sergey Levine and Chelsea Finn - End-to-End Training of Deep Visuomotor Policies
  • 12:00 - 12:45 Invited Speaker 4: Oliver Brock - What Gibson Might Say If He Were a Roboticist Presenting at This Workshop…
  • 12:45 - 14:45 Lunch
  • 14:45 - 15:00 Contributed Talk 3: Mario Gianni - Dynamic contact sensing for articulated tracked vehicles
  • 15:00 - 15:45 Invited Speaker 5: Stefan Schaal - Learning and Control in Sensory Rich Environments
  • 15:45 - 16:30 Coffee Break
  • 16:30 - 17:30 Panel Discussion: Veronica Santos, Oliver Brock, Sergey Levine, Rod Grupen
  • 17:30 - 17:45 Findings and Closing Remarks

Contributed Papers

  • Tapomayukh Bhattacharjee and Charles C. Kemp, "Sensing Incidental Contact to Inform Manipulation in Clutter" pdf
  • Mario Gianni, Fiora Pirri, and Manuel A. Ruiz Garcia, "Dynamic contact sensing for articulated tracked vehicles" pdf
  • Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel, "End-to-End Training of Deep Visuomotor Policies" pdf

Invited Speakers

  • Veronica Santos, UCLA (Confirmed)

Title: From haptic perception to decision-making for grasp and manipulation

Abstract: Haptic perception remains a grand challenge for artificial hands. The functionality of artificial dexterous manipulators could be enhanced by “haptic intelligence” that enables identification of objects and their features via touch alone. This could be especially useful when reliable visual and/or proprioceptive feedback is unavailable. In the first part of the talk, studies will be presented in which a robot hand outfitted with a deformable, multimodal tactile sensor was used to replay human-inspired haptic “exploratory procedures” to perceive salient geometric features such as edges and fingertip-sized bumps and pits. Tactile signals generated by active fingertip motions were used to extract inputs for offline support vector classification and regression models. The influence of tactile sensor capabilities, exploratory procedure selection, and hand compliance on perception accuracy will be discussed. In the second part of the talk, recent efforts toward real-time haptic perception and decision-making will be presented. Probabilistic modeling techniques, widely used for speech recognition, were applied to raw multimodal tactile sensor data from the haptic exploration of edges. Hidden Markov Models were used to build classifiers that perceive edge orientation in real time with respect to the body-fixed reference frame of an artificial fingertip. The implementation of anytime algorithms for reinforcement learning and real-time decision-making is now underway for a two-fingered manipulation task.
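
To make the two learning stages in this abstract concrete, here is a minimal, self-contained sketch, not the speaker's actual pipeline: the tactile feature dimensions, class labels, edge orientations, synthetic data, and the choice of scikit-learn and hmmlearn are all assumptions made for illustration.

    # Illustrative sketch only; data is synthetic and all dimensions/labels are assumed.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from hmmlearn.hmm import GaussianHMM

    rng = np.random.default_rng(0)

    # Part 1: offline support vector classification of geometric features.
    # Hypothetical data: one 64-D tactile feature vector per exploratory
    # procedure, labeled 0 = edge, 1 = bump, 2 = pit.
    X = rng.normal(size=(300, 64))
    y = rng.integers(0, 3, size=300)
    svm_clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    svm_clf.fit(X[:240], y[:240])
    print("SVM held-out accuracy:", svm_clf.score(X[240:], y[240:]))

    # Part 2: HMM-based classification of edge orientation. One Gaussian HMM
    # is trained per orientation class; a new tactile sequence is assigned to
    # the class whose model gives the highest log-likelihood.
    orientations_deg = [0, 45, 90]                      # illustrative classes
    models = {}
    for ang in orientations_deg:
        # 20 training sequences of 50 time steps x 4 tactile channels each.
        seqs = [rng.normal(loc=ang / 90.0, size=(50, 4)) for _ in range(20)]
        lengths = [len(s) for s in seqs]
        hmm = GaussianHMM(n_components=3, covariance_type="diag", n_iter=20)
        hmm.fit(np.vstack(seqs), lengths)
        models[ang] = hmm

    test_seq = rng.normal(loc=0.5, size=(50, 4))        # unseen tactile sequence
    predicted = max(models, key=lambda ang: models[ang].score(test_seq))
    print("predicted edge orientation:", predicted, "degrees")

Classifying by per-class log-likelihood is one standard way to turn generative sequence models into a classifier; because the likelihoods can be updated as tactile samples arrive, the current best-scoring class can be reported at any point during an exploration, which is what makes this style of model a natural fit for the real-time, anytime setting the abstract mentions.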

  • Ashutosh Saxena, Cornell University (Confirmed)

Title: RoboBrain: Learning Deep Latent Haptic Features for Model Predictive Control

  • Oliver Brock, TU Berlin (Confirmed)

Title: What Gibson Might Say If He Were a Roboticist Presenting at This Workshop…

Abstract: The psychologist James J. Gibson has altered the way psychologists think about perception, be it tactile or visual or related to any other sense. Given his impact, roboticists like to use Gibson’s name to gain some interdisciplinary street cred, myself included. But are we willing to accept the consequences of his insights about perception? It would probably mean changing the way we tackle the problem in some fundamental way. In this split-personality talk, I would like to present some of the implications, at least given my own understanding of his research. And I would like to present some efforts towards perception that attempt to take these implications seriously. Now that I think about it… it may mean that a “focus on how learning can best be used to improve robot interaction” (from the workshop description) might not be the best way to focus. To provide evidence of split personality, I will also present some work that applies machine learning to visual perception. This work is probably not very “Gibsonian” but it helped us win the Amazon Picking Challenge. So maybe there is an alternative to Gibson after all? I will try to leave some time at the end of my talk to discuss this question.

  • Stefan Schaal, USC (Confirmed)
  • Rod Grupen, UMass Amherst (Confirmed)

Discussion Topics

  • How can we leverage the large success of learning methods in vision for tactile data?
  • For what tasks are vision and tactile sensing required? Which tasks do they make easier?
  • What manipulation tasks beyond grasping can tactile sensing aid in solving?
  • What interaction tasks beyond manipulation can benefit from tactile sensing?
  • Can the use of tactile and visual sensing decrease the need for learning from demonstration or human labelling of robot data?
  • Is it possible to perform online learning with high-dimensional visual and tactile sensor streams?
  • How can we accurately compare methods for tactile manipulation using different sensors?
  • How can we adapt data collected on different robots with various sensors to improve learning for all robots equipped with tactile and visual sensors?
  • What technical issues occur in combining visual and tactile data in learning?
  • What integration issues arise when combining visual and tactile sensing for manipulation?
  • What benefits do the complementary aspects of visual and tactile sensing give when designing learning tasks for robot manipulation?
  • How can we use learning to better bridge the divide between manipulation and perception?
  • What successful perception tasks have not yet been used to improve robot manipulation and interaction?
  • What perceptual tasks are most important to work on in order to improve robot manipulation?
  • How can we integrate feedback and learning with planning to improve robot manipulation and long-term autonomy?