Deadlines, Journal Impact and Conference Quality
We keep track of the deadlines most important to us here.
We also maintain a series of pages on journal impact and conference quality, where we collect the statistics we can find on top venues.
The Google Scholar rankings for Robotics and Machine Learning/AI are somewhat bizarre.
Software: Implementation of Some of our Learning Methods
- Alexandros Paraschos developed a toolbox for using probabilistic movement primitives.
- Simone Parisi developed tensorl, a minimal TensorFlow toolbox with some of the most famous RL algorithms.
- Oleg Arenz implemented VIPS, an algorithm for learning Gaussian mixture models (GMMs) for variational inference.
- StochasticSearch contains implementations of episodic REPS, CECER, and MORE. This is a preliminary version with only basic documentation. Credit goes to G. Neumann, A. Abdolmaleki, C. Daniel, A. Paraschos, and H. van Hoof.
- Simone Parisi developed MiPS, a minimal toolbox for Matlab with some of the most famous policy search algorithms, as well as some recent multi-objective methods and benchmark problems in reinforcement learning.
- Jens Kober created a basic MATLAB/Octave implementation of the PoWER algorithm: matlab_PoWER.zip. The required motor primitive code can be downloaded from http://www-clmc.usc.edu/Resources/Software.
- Jens Kober created a basic MATLAB/Octave implementation of the motor primitives for hitting and batting. You can find it here.
- Gerhard Neumann developed the Reinforcement Learning Toolbox, a C++ software library for reinforcement learning. As it is quite old, it lacks recent algorithms and is unfortunately no longer maintained.
- Jan Peters developed the Policy Gradient Toolbox back in the day. It is no longer maintained.
- Duy Nguyen-Tuong created Local-GP software that can be downloaded here.
- Roberto Calandra developed the Rprop Optimization Toolbox for Matlab.
- Kevin Luck created a reference implementation of his PePPER algorithm.
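For readers unfamiliar with Rprop, the algorithm behind the toolbox above adapts a separate step size for each parameter based only on gradient signs. The following is a minimal Python sketch of the iRprop- variant, assuming a simple quadratic objective; the function names and hyperparameter values are illustrative, not taken from the Matlab toolbox:

```python
import numpy as np

def rprop_minimize(grad_fn, x0, n_iters=100, delta0=0.1,
                   eta_plus=1.2, eta_minus=0.5,
                   delta_min=1e-6, delta_max=50.0):
    """Minimal sketch of iRprop-: each parameter keeps its own step
    size, grown when the gradient sign is stable and shrunk when it
    flips (hyperparameters here are illustrative defaults)."""
    x = np.asarray(x0, dtype=float).copy()
    delta = np.full_like(x, delta0)
    prev_grad = np.zeros_like(x)
    for _ in range(n_iters):
        g = grad_fn(x)
        sign_change = g * prev_grad
        # Same sign as last step: grow the step size.
        delta = np.where(sign_change > 0,
                         np.minimum(delta * eta_plus, delta_max), delta)
        # Sign flipped: shrink the step size and zero the gradient,
        # so this coordinate skips one update (the iRprop- rule).
        delta = np.where(sign_change < 0,
                         np.maximum(delta * eta_minus, delta_min), delta)
        g = np.where(sign_change < 0, 0.0, g)
        # Move against the gradient sign by the per-parameter step.
        x -= np.sign(g) * delta
        prev_grad = g
    return x

# Toy usage: minimize f(x) = sum((x - 3)^2), whose gradient is 2*(x - 3).
x_opt = rprop_minimize(lambda x: 2.0 * (x - 3.0), np.zeros(3))
```

Because only the gradient sign enters the update, Rprop is insensitive to gradient magnitude, which is why it remains a popular choice for batch optimization of small models.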
Data Sets: From our Publications