Rprop Optimization Toolbox

The Rprop Optimization Toolbox implements for MATLAB all four algorithms of the Rprop family (Rprop+, Rprop-, IRprop+ and IRprop-) according to the specifications presented in [1,2,3,4].

The Rprop methods are first-order minimization algorithms whose main feature is the automatic adaptation of the step length in order to speed up convergence. Rprop is particularly popular for optimizing the weights of artificial neural networks due to its fast convergence.
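As an illustration of this step-length adaptation, here is a minimal sketch of the Rprop- update rule on a toy quadratic objective (this is not the toolbox's internal code; factors 1.2/0.5 and the step bounds are the standard values from [1]):

```matlab
% Sketch of the core Rprop- update: each parameter keeps its own step size,
% which grows while the gradient keeps its sign and shrinks when the sign
% flips; only the SIGN of the gradient sets the update direction.
grad  = @(x) 2 * x;                   % gradient of the toy objective f(x) = sum(x.^2)
x     = [3; -2];                      % starting point
delta = 0.1 * ones(size(x));          % initial per-parameter step sizes
gprev = zeros(size(x));               % gradient from the previous iteration
for it = 1:100
    g = grad(x);
    s = g .* gprev;                   % sign agreement with the previous gradient
    delta(s > 0) = min(delta(s > 0) * 1.2, 50);    % same sign: accelerate
    delta(s < 0) = max(delta(s < 0) * 0.5, 1e-6);  % sign flip: decelerate
    x = x - sign(g) .* delta;         % step each parameter by its own step size
    gprev = g;
end
```

The other variants differ mainly in how they handle a sign flip (e.g. Rprop+ additionally reverts the last step, a mechanism known as weight-backtracking).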

The interface of the Rprop Toolbox has been designed to be mostly compatible with MATLAB's built-in fminunc optimization function.

The rprop function is called as follows:

[x,e,exitflag,stats] = rprop(funcgrad,x0,parameters,varargin);

funcgrad: a function (or function handle) that returns [f g], where f is the objective value and g is the gradient matrix.
x0: the initial parameters to optimize.
parameters: an optional structure to override the default options of the optimization function.
varargin: can be used to pass additional arguments to funcgrad.
x: the optimized parameters.
e: the final objective value.
exitflag: specifies which exit condition was satisfied.
stats: a structure with additional information about the optimization process.
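A hypothetical usage example with default options, minimizing the 2-D Rosenbrock function (the function name rosenbrock below is an assumption for illustration; see the demos in the package for the actual option fields of the parameters structure):

```matlab
% Objective returning both the value f and the gradient g, as rprop expects.
function [f, g] = rosenbrock(x)
    f = 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;
    g = [-400*x(1)*(x(2) - x(1)^2) - 2*(1 - x(1));
          200*(x(2) - x(1)^2)];
end

% Optimize from the standard Rosenbrock starting point; the parameters
% structure is optional, so the defaults are used here.
x0 = [-1.2; 1];
[x, e, exitflag, stats] = rprop(@rosenbrock, x0);
```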

Experimental results show that the Rprop Toolbox can be orders of magnitude faster (in terms of wall-clock time) than the fminunc function. Moreover, the Rprop Toolbox can exploit GPU acceleration (given suitable hardware, drivers and the Parallel Computing Toolbox) to further speed up large-scale optimization tasks.
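If the toolbox accepts gpuArray inputs directly (an assumption here; check the package demos for the supported mechanism), a large-scale problem could be moved to the GPU along these lines:

```matlab
% Hypothetical sketch: keep the parameters on the GPU so that funcgrad
% evaluates the objective and gradient there (requires the Parallel
% Computing Toolbox and a supported GPU).
x0 = gpuArray(randn(1e6, 1));            % large parameter vector on the GPU
quadfg = @(x) deal(0.5 * sum(x.^2), x);  % toy quadratic: f = ||x||^2 / 2, g = x
[x, e] = rprop(quadfg, x0);
x = gather(x);                           % bring the result back to the CPU
```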

More details and instructions about the usage of the toolbox, as well as several demos, can be found in the package.


  • 5-Jun-12: First release of the Rprop Optimization Toolbox


Copyright (c) 2011, Roberto Calandra. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

  1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
  2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
  3. The names of its contributors may not be used to endorse or promote products derived from this software without specific prior written permission.
  4. If used in any scientific publications, the publication has to refer specifically to the work published on this webpage.

This software is provided by us "as is" and any express or implied warranties, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose are disclaimed. In no event shall the copyright holders or any contributor be liable for any direct, indirect, incidental, special, exemplary, or consequential damages however caused and on any theory of liability, whether in contract, strict liability or tort, arising in any way out of the use of this software, even if advised of the possibility of such damage.



For any comments, suggestions or questions, you can contact me.

Bibtex reference

    @misc{Calandra2011,
        author = {Calandra, Roberto},
        title = {Rprop Toolbox for {MATLAB}},
        year = {2011},
        howpublished = {\url{http://www.ias.informatik.tu-darmstadt.de/Research/RpropToolbox}}
    }


  1. Martin Riedmiller and Heinrich Braun. A direct adaptive method for faster backpropagation learning: the RPROP algorithm. Proceedings of the International Conference on Neural Networks, pp. 586-591, IEEE Press, 1993.
  2. Martin Riedmiller. Advanced supervised learning in multilayer perceptrons - from backpropagation to adaptive learning techniques. International Journal of Computer Standards and Interfaces 16(3), pp. 265-278, 1994.
  3. Christian Igel and Michael Hüsken. Improving the Rprop Learning Algorithm. In H. Bothe and R. Rojas, eds.: Second International Symposium on Neural Computation (NC 2000), pp. 115-121, ICSC Academic Press, 2000.
  4. Christian Igel and Michael Hüsken. Empirical Evaluation of the Improved Rprop Learning Algorithm. Neurocomputing 50(C), pp. 105-123, 2003.

