We generally need three steps:
- Write proposal.
- Get some tentative speakers confirmed.
- Send the proposal out and hope it gets accepted (so far, I have had
a 100% success rate).
Website with a program similar to
http://www-clmc.usc.edu/~jrpeters/pmwiki.php/Research/NIPS2006
Confirmation of tentative speakers
Dear XXX, we would like to invite you to give a talk at our planned workshop "ZZZZ", which will be proposed to the conference organizers of NIPS (Neural Information Processing Systems) 2007 (at the Westin Resort and Spa and Westin Hilton, Whistler, B.C., Canada). The workshop will be organized by Ashutosh Saxena and Jan Peters. Please see the attached abstract for more information.
At this point, we would simply like to ask whether we may list you as a tentative speaker -- no firm commitment is necessary at this time. We would also welcome any suggestions for additional prospective participants for the workshop. Please note that we will not be able to provide travel funds or reimbursement of potential conference fees for this workshop. Best wishes,
Ashutosh Saxena & Jan Peters
Workshop proposal
WORKSHOP PROPOSAL
Title: Robotics Challenges for Machine Learning
Organizers:
Marc Toussaint (Technical University of Berlin),
Jan Peters (Max Planck Institute for Biological Cybernetics)
WWW: http://www.robot-learning.de
Abstract:
Creating autonomous robots that can assist humans in situations of
daily life is a great challenge for machine learning. While this aim
has been a long-standing vision of robotics, artificial intelligence,
and the cognitive sciences, we have yet to achieve the first step of
creating robots that can accomplish a multitude of different tasks,
triggered by environmental context or higher-level
instruction. Despite the wide range of machine learning problems
encountered in robotics, the main bottleneck towards this goal has
been a lack of interaction between the core robotics and machine
learning communities. To date, many roboticists still dismiss machine
learning approaches as generally inapplicable or inferior to
classical, hand-crafted solutions. Similarly, machine learning
researchers have yet to acknowledge that robotics can play the same
role for machine learning that, for instance, physics has played for
mathematics: as a major application as well as a driving force for new
ideas, algorithms, and approaches.
Some fundamental problems we encounter in robotics that equally
inspire current research directions in Machine Learning are:
- learning and handling models (e.g., of robots, tasks, or environments)
- learning deep hierarchies or levels of representations (e.g., from sensor & motor representations to task abstractions)
- regression in very high-dimensional spaces for model and policy learning
- finding low-dimensional embeddings of movement as an implicit generative model
- methods for probabilistic inference of task parameters from vision, e.g., the 3D geometry of manipulated objects
- the integration of multi-modal information (e.g., proprioceptive, tactile, vision) for state estimation and causal inference
- probabilistic inference in non-linear, non-Gaussian stochastic systems (e.g., for planning as well as optimal or adaptive control)
Robotics challenges can inspire and motivate new Machine Learning
research, as well as serve as an interesting field of application for
standard ML techniques.
Conversely, with the current rise of real, physical humanoid robots in
robotics research labs around the globe, the need for machine learning
in robotics has grown significantly. Only if machine learning succeeds
at making robots fully adaptive are we likely to be able to take real
robots out of the research labs and into real, human-inhabited
environments. To do so, future robots will need to be able to make
proper use of perceptual stimuli such as vision and proprioceptive &
tactile feedback, and to translate these into motor commands.
To close this complex loop, machine learning will be needed at various
stages, ranging from sensory-based action determination through
high-level plan generation to motor control at the torque level. Among
the important problems hidden in these steps are problems that can be
understood from both the robotics and the machine learning points of
view, including perceptuo-action coupling, imitation learning,
movement decomposition, probabilistic planning problems, motor
primitive learning, reinforcement learning, model learning, and motor
control.
Format:
The goal of this workshop is to bring together people who are
interested in robotics as a source and inspiration for new Machine
Learning challenges, or who work on Machine Learning methods as a
new approach to robotics challenges. In the robotics context, the
questions we intend to tackle include:
Reinforcement Learning, Imitation, and Active Learning:
- What methods from reinforcement learning scale into the domain of
robotics?
- How can we improve policies acquired through imitation by trial
and error?
- Can we turn many simple learned demonstrations into proper policies?
- Does the knowledge of the cost function of the teacher help the
student?
- Can statistical methods help in generating actions that actively
influence our perception? E.g., can these be used to plan visuo-motor sequences that minimize our uncertainty about the scene?
- How can image understanding methods be extended to provide
probabilistic scene descriptions suitable for motor planning?
Motor Representations and Control:
- Can we decompose human demonstrations into elemental movements,
e.g., motor primitives, and learn these efficiently?
- Is it possible to build libraries of basic movements from
demonstration? How can we create higher-level structured representations and abstractions based on elemental movements?
- Can structured (e.g., hierarchical) temporal stochastic models
be used to plan the sequencing and superposition of movement primitives?
- Is probabilistic inference the road towards composing complex
action sequences from simple demonstrations? Are superpositions of motor primitives and the coupling in timing between these learnable?
- How can we generate compliant controls for executing complex movement
plans that include both superposition and hierarchies of elemental movements? Can we find learned versions of prioritized hierarchical control?
- Can we learn how to control in task-space of redundant robots in
the presence of underactuation and complex constraints? Can we learn force or hybrid control in task-space?
- Is real-time model learning the way to cope with executing tasks on
robots with unmodeled nonlinearities and with manipulating uncertain objects in unpredictable environmental interactions?
- What new regression techniques can help real-time model learning
improve the execution of such tasks?
Learning structured models and representations:
- What kind of probabilistic models provide a compact and suitable
description of real-world environments composed of manipulable objects?
- How can abstractions or compact representations be learnt from
sensori-motor data?
- How can we extract features of the sensori-motor data that are
relevant for motor control or decision making? E.g., can we extract
visual features of objects directly related to their manipulability
or "affordance"?
Confirmed Tentative Speakers:
Auke Ijspeert, Aude Billard, Sethu Vijayakumar, Andrew Ng, Joelle Pineau,
Christopher G. Atkeson, Drew Bagnell, Nikos Vlassis, Stefan Schaal,
Emo Todorov, Sridhar Mahadevan, Russ Tedrake, Nando de Freitas
Program:
TBA
Posters:
There will be a call for posters upon acceptance of the workshop. The
deadline for abstract submissions will be October 21, 2007, and
notifications will be sent on October 26, 2007.
Participants:
This workshop will bring together researchers from both the robotics
and machine learning communities in order to explore how to approach
the current statistical learning challenges in robotics in a
principled way. Participants of the workshop (including the audience)
are encouraged to take an active part by responding with questions and
comments about the talks and by giving stand-up talks. Please contact
the organizers if you would like to reserve some time in advance for
expressing your views on a particular topic.
Organizers:
The workshop is organized by Marc Toussaint (http://ml.cs.tu-berlin.de/~mtoussai),
Technical University of Berlin, Germany, and by
Jan Peters (http://www.jan-peters.net), Max Planck Institute for
Biological Cybernetics, Tuebingen, Germany & University of Southern
California (USC).
CV of Organizer Marc Toussaint:
Marc Toussaint joined the Machine Learning group at Technical
University Berlin (Prof. Klaus-Robert Mueller) in May 2007 as PI of
the Machine Learning & Robotics Lab, funded by an Emmy Noether
excellence grant. Before that, he spent two years as a post-doc at the
School of Informatics in Edinburgh (Prof. Chris Williams & Prof. Sethu
Vijayakumar) and six months at the Robotics lab of the Honda Research
Institute (Offenbach, Germany). In 2003, Marc graduated with a PhD
from the University of Bochum, Germany; he received a Diploma in
Theoretical Physics in 1999 and a Pre-Diploma in Mathematics in 1996
from the University of Cologne. Marc gained extensive experience in
Machine Learning methods and their application to real-world Robotics
during his time in Edinburgh and at the Honda Research Institute. His
research centers on learning appropriate representations for
stochastic search, reasoning, and motor control, and on the use of
probabilistic (inference) methods for solving (PO)MDPs and motor
control/planning problems.
CV of Organizer Jan Peters:
Jan Peters joined the Max-Planck Institute for Biological Cybernetics in April 2007
as a research scientist and is currently heading the Robot Learning Lab in the Department
of Bernhard Schoelkopf. Before joining the MPI, Jan Peters graduated with a Ph.D. in
Computer Science from the University of Southern California (USC), with a thesis committee of
Stefan Schaal, Firdaus Udwadia, Gaurav Sukhatme, and Chris Atkeson (CMU),
and he remains affiliated with the Computational Learning and Motor Control Lab at USC
as an adjunct researcher. Before his Ph.D., he received a German M.Sc. in Informatics from the
University of Hagen in 2000, a German M.Sc. in Electrical Engineering from Munich University
of Technology (TU Muenchen) in 2001, an M.Sc. in Computer Science in 2002,
and an M.Sc. in Mechanical Engineering in 2005, the latter two from USC.
Jan Peters has extensive experience in machine learning for robotics. In addition
to his work at the MPI and USC, he has spent several years performing robot learning
research at the Department of Robotics at the German Aerospace Center (DLR) in
Germany, at the Robotics & Control Division of Siemens Advanced Engineering in Singapore,
at the Department of Humanoid Robotics and Computational Neuroscience at the Advanced
Telecommunication Research (ATR) Center in Japan, as well as at the National University of
Singapore (NUS).
Jan has previously organized several important workshops, including the "Learning for
Locomotion" workshop at the Robotics: Science & Systems (RSS) 2005 conference, which was
influential on the DARPA Learning Locomotion program, and the NIPS 2006 workshop
"Towards a New Reinforcement Learning?".
---
Workshop: "Towards a New Reinforcement Learning?"
Organizers: Jan Peters, Drew Bagnell, Stefan Schaal
Conference: NIPS 2006
Date: December 8, 2006
Location: Westin Resort and Spa and Westin Hilton, Whistler, B.C., Canada
Abstract
During the last decade, many areas of statistical machine learning have reached a high level of maturity, with novel, efficient, and theoretically well-founded algorithms that have increasingly removed the need for the heuristics and manual parameter tuning which dominated the early days of neural networks. Reinforcement learning (RL) has also made major progress in theory and algorithms, but is somewhat lagging behind the success stories of classification, supervised, and unsupervised learning. Beyond the long-standing question of the scalability of RL to larger, real-world problems, even in simpler scenarios a significant amount of manual tuning and human insight is needed to achieve good performance, as exemplified by issues such as eligibility factors, learning rates, and the choice of function approximators and their basis functions for policy and/or value functions. Some of the reasons for the progress of other statistical learning disciplines come from connections to well-established fundamental learning approaches, such as maximum likelihood with EM, Bayesian statistics, linear regression, linear and quadratic programming, graph theory, and function space analysis.
The main question of this workshop is therefore how other statistical learning techniques may be used to develop new RL approaches in order to achieve properties such as higher numerical robustness, easier use in terms of open parameters, probabilistic and Bayesian interpretations, better scalability, the inclusion of prior knowledge, etc.
Format
Our goal is to bring together researchers who have worked on reinforcement learning techniques that move towards new approaches by bringing other statistical learning methods to bear on RL. The workshop will consist of short presentations, posters, and panel discussions. Topics to be addressed include, but are not limited to:
- Which methods from supervised and unsupervised learning are the most promising for developing new RL approaches?
- How can modern probabilistic and Bayesian methods benefit Reinforcement Learning?
- Which approaches can help reduce the number of open parameters in Reinforcement Learning?
- Can the Reinforcement Learning problem be reduced to classification or regression?
- Can reinforcement learning be seen as a big filtering or prediction problem where the prediction of good actions is the main objective?
- Are there useful alternative ways to formulate the RL problem? E.g., as a dynamic Bayesian network, by using multiplicative rewards, etc.
- Can reinforcement learning be accelerated by incorporating biases, expert data from demonstration, prior knowledge on reward functions, etc.?
Tentative Speakers
Andrew Ng, Drew Bagnell, Mark W. Andrews, Pascal Poupart, Rich Sutton, Yaakov Engel, Satinder Singh, Mohammad Ghavamzadeh, John Langford, Michael Littman, Geoff Gordon
Participants
This workshop will bring together researchers from different areas of machine learning in order to explore how to approach new topics in reinforcement learning. Attendees of the workshop are encouraged to actively participate by responding with questions and comments about the talks and by giving stand-up talks. We also solicit additional short presentations from participants -- please contact the organizers if you would like to reserve a slot.