Format:
Invited Talk: 25 min + 5 min questions
Mini-Talk with Poster: 10 min
Poster Spotlight Talks: one slide, 2 min each


A. SESSION: Policy Learning by Imitation for Robotics

Invited Talk: Learning Nonparametric Policies by Imitation
David Grimes and Rajesh Rao, University of Washington

Poster Spotlights:
Policy gradient approach to multi-robot learning, Francisco Melo, Instituto Superior Técnico
On the Development of Benchmarks for Policy Learning Algorithms, Chris Mansley and Michael Littman, Rutgers University
Improving Gradient Estimation by Incorporating Sensor Data, Gregory Lawrence, U.C. Berkeley
Reinforcement Learning and Weak Derivatives for Motor Learning in Robotics, E. A. Theodorou, J. Peters, S. Schaal, USC, MPI & ATR
Learning Robot Control Policies, Daniel H. Grollman, Odest Chadwicke Jenkins, Brown University


B. SESSION: Decomposition of Tasks in Human-Robot Interaction

Invited Talk: Machine Learning Application to Robotics and Human-Robot Interaction
Aude Billard, EPFL

Poster Spotlights:
Methods for Sensorimotor Human Computer Interaction Design, Yon Visell, McGill University
Symbol grounding in manipulation, Ville Kyrki, Lappeenranta University of Technology
The Modular Decomposition Problem for the Motor Learning and Control of Humanoid Robots, Camille Salaun, Olivier Sigaud, Vincent Padois, Université Pierre et Marie Curie

Mini-Talk: Machine learning for developmental robotics
Manuel Lopes, Luis Montesano, Francisco Melo, Instituto Superior Técnico


C. SESSION: Learning for Optimal Control

Invited Talk: Probabilistic inference methods for nonlinear, non-Gaussian, hybrid control
Nando de Freitas, University of British Columbia

Invited Talk: A new mathematical framework for optimal choice of actions
Emo Todorov, UCSD

Poster Spotlights:
The conditioning effect of stochastic dynamics in continuous reinforcement learning, Yuval Tassa, Hebrew University, Jerusalem
Learned system dynamics for adaptive optimal feedback control, Djordje Mitrovic, Stefan Klanke, Sethu Vijayakumar, University of Edinburgh
Feature Discovery in RL using Genetic Programming, Sertan Girgin, Philippe Preux, INRIA
Reinforcement Learning with Multiple Demonstrations, Adam Coates, Pieter Abbeel, Andrew Y. Ng, Stanford University


D. SESSION: Towards Intelligent Robots

Invited Talk: STAIR: The STanford Artificial Intelligence Robot project
Andrew Ng, Stanford University

Mini-Talk: Robot Perception Challenges for Machine Learning
Chieh-Chih Wang, National Taiwan University

Poster Spotlights:
Context-Based Similarity: Forming higher-level abstract concepts from raw sensorimotor experience, Brandon Rohrer, Sandia National Laboratories
Learning Non-Gaussian Stochastic Systems for Dynamic Textures, Byron Boots, Geoff Gordon, Sajid Siddiqi, CMU
A Step Towards Autonomy in Robotics via Reservoir Computing, E. Antonelo, X. Dutoit, B. Schrauwen, D. Stroobandt, H. Van Brussel, M. Nuttin, KU Leuven
Towards Active Learning for Socially Assistive Robots, Adriana Tapus, Maja Mataric, USC


E. SESSION: Learning for Mobile Robots

Invited Talk: Bayesian Reinforcement Learning in Continuous POMDPs with Application to Robot Navigation
Joelle Pineau, McGill University

Mini-Talk: Self-Supervised Learning from High-Dimensional Data for Autonomous Offroad Driving
Ayse Naz Erkan, Raia Hadsell, Pierre Sermanet, Koray Kavukcuoglu, Marc’Aurelio Ranzato, Urs Muller, Yann LeCun, NYU

Poster Spotlights:
Relocatable Action Models for Autonomous Navigation, Bethany R. Leffler, Michael L. Littman, Rutgers University
Learning to Associate with CRF-Matching, Fabio Ramos, Dieter Fox, U. Washington
CRF-Based Semantic and Metric Maps, Bertrand Douillard, Dieter Fox, Fabio Ramos, University of Washington
TORO: Tracking and Observing Robot, Deepak Ramachandran, Rakesh Gupta, Honda Research Institute
Maximum Entropy Inverse Reinforcement Learning, Brian D. Ziebart, J. Andrew Bagnell, Anind K. Dey, CMU

F. SESSION: Learning for Robots with Complex Mechanics

Invited Talk: TBA
Luis Sentis, Stanford University

Poster Spotlights:
Learning 3-D Object Orientation from Images, Ashutosh Saxena, Justin Driemeyer, Andrew Y. Ng, Stanford University
Bayesian Nonparametric Regression with Local Models, Jo-Anne Ting, Stefan Schaal, USC
Active Learning for Robot Control, Philipp Robbel (MIT), Sethu Vijayakumar (University of Edinburgh), Marc Toussaint (TUB)
Learning Robot Low Level Control Primitives: A Case Study, Diego Pardo, Cecilio Angulo, Ricardo Tellez, Technical University of Catalonia

Borderline:

Tekkotsu as a Framework for Robot Learning Research, David S. Touretzky, Ethan J. Tira-Thompson, CMU

Automatic learning of individual human models from 3D marker data, Hildegard Köhler, Tobias Feldmann, Annika Wörner, Martin Pruzinec, University of Karlsruhe

Efficient Sample Reuse by Covariate Shift Adaptation in Value Function Approximation, Hirotaka Hachiya, Takayuki Akiyama, Masashi Sugiyama

Plan Recognition and Execution with Reservoir Computing, X. Dutoit, H. Van Brussel, M. Nuttin, KU Leuven

Robot Task Allocation Through Distributed Belief Propagation, Jonas Nathan Schwertfeger, Odest Chadwicke Jenkins, Brown University

Robot task learning without [student] sweat, Cedric Hartland, Nicolas Bredeche, Michèle Sebag, Université Paris-Sud

Evolving a Neuro-Controller for a Quadrotor Helicopter, Jakob Schwendner, Mark Edgington, Jan Hendrik Metzen, Yohannes Kassahun, DFKI

Batch Reinforcement Learning for Controlling a Mobile Wheeled Pendulum, Claudio Caccia, Andrea Bonarini, Alessandro Lazaric, Marcello Restelli, University of Milan

Rejects:

S-Learning: Real-time bootstrapped modeling of complex unstructured environments, Brandon Rohrer, Sandia National Laboratories

Two of the Cedric Hartland submissions...

How to Give a Robot Obsessive Compulsive Disorder, Lawrence Amsel, Columbia University

Accurate localization of 3D shapes with partial occlusions, I. Frosio, I. Cattinelli, N. A. Borghese, University of Milan