Optimisation

Header file optimise.h

Optimisable function class `Optimisable_function'

A real function of a real vector, whose value can be optimised by adjusting the elements of the vector. To optimise a function, the class is used as follows (a minimal sketch appears after this list):
  1. Define a subclass which inherits from this class for the particular function to be optimised.
  2. Define a method in the class to evaluate the function given a vector of parameters. Typically this entails copying the elements of the vector into the members of one or more temporary objects, which are then used to evaluate the function in whatever way is appropriate.
  3. If required, define a method to set up the vector of parameters given a set of the objects used by the function.
  4. Optionally, define a method which applies constraints to the elements in the vector of parameters.
  5. Invoke whichever optimisation scheme seems most appropriate, from the list below.
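
A minimal sketch of steps 1 and 2, for illustration only: the manual does not fix the name of the evaluation method, so `value' and the class name below are placeholders, and element access to the library's `vector' class via operator[] is assumed.

   #include "optimise.h"

   class Quadratic_cost : public Optimisable_function {
   public:
      // Step 2 (placeholder name `value'): evaluate
      // f(a, b) = (a - 3)^2 + (b + 1)^2 for the parameter vector c = (a, b).
      double value(const vector& c) const
      {
         const double da = c[0] - 3.0;
         const double db = c[1] + 1.0;
         return da * da + db * db;
      }
   };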

Service methods

Amount of parameter perturbation
Specifies the perturbation to apply to each parameter, given the current parameter values. This is used in calculating numerical Jacobian matrices. A default method returns a value of 0.1 for each parameter.
   vector delta(const vector& c) const
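Continuing the `Quadratic_cost' sketch above, a subclass might override `delta' to perturb each parameter in proportion to its magnitude (copy construction and operator[] on the library's `vector' class are assumed; <cmath> is needed for std::fabs):

   vector delta(const vector& c) const
   {
      // Perturb each parameter by 1% of its magnitude, with a small floor
      // so that zero-valued parameters are still perturbed.
      vector d = c;
      for (int i = 0; i < 2; i++)      // the example function has two parameters
         d[i] = 0.01 * std::fabs(c[i]) + 1.0e-6;
      return d;
   }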
Constrain parameters
Applies constraints to proposed parameters.
   int constrain(vector& c) const
Returns 1 if constraints were applied.
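Continuing the sketch above, a `constrain' method that clamps both parameters to the box [-10, 10] could be added to the subclass:

   int constrain(vector& c) const
   {
      // Clamp each parameter to [-10, 10] and report whether anything changed.
      int applied = 0;
      for (int i = 0; i < 2; i++) {
         if (c[i] < -10.0) { c[i] = -10.0; applied = 1; }
         if (c[i] >  10.0) { c[i] =  10.0; applied = 1; }
      }
      return applied;
   }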

Optimisers

`dumb' minimisation: move each parameter in turn
double mindumb(
   vector& c, // parameters (initial and optimised)
   const int maxits = 0, // maximum iterations (0: no limit)
   const int verbose = 0 // 1 for diagnostic output
)
Returns the value of the objective function at the final parameter values, which overwrite the initial contents of c.
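A typical call, continuing the sketch above (the two-element `vector' constructor and operator[] are assumptions about the library's vector class):

   Quadratic_cost f;
   vector c(2);                          // assumed: a two-element parameter vector
   c[0] = 0.0;
   c[1] = 0.0;
   double fmin = f.mindumb(c, 100, 1);   // at most 100 iterations, diagnostics on
   // c now holds the optimised parameters, fmin the function value there.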
bracket minimum along a line
int bracket_minimum(
   const vector& x0, // initial position
   const vector& dx0, // initial change vector, pointing towards minimum
   vector& x1, // calculated position at one side of minimum
   vector& x2, // calculated position at other side of minimum
   vector& xint, // calculated position close to minimum
   const double stepscale = 1.0, // scaling factor for gradient vector
   const double stepfac = 1.5, // factor for changing step size (>1)
   const double stepmin = 1.0e-8, // minimum stepsize allowed
   const int maxits = 0, // maximum iterations allowed (0 => no limit)
   const int verbose = 0 // 1 for diagnostic output
)
Returns 1 on success, 0 if the step size underflowed or the maximum number of iterations was reached.
line minimum by bisection
int minimum_bisection(
   const vector& xa, // position at one side of minimum
   const vector& xb, // position at other side of minimum
   const vector& xint, // intermediate position
   vector& xmin, // position of minimum
   double& fmin, // function value at minimum
   const double tol = 1.0e-6, // desired position accuracy of minimum
   const int maxits = 0, // maximum iterations allowed (0 => no limit)
   const int verbose = 0 // 1 for diagnostic output
)
Returns 0 if the maximum number of iterations was reached.
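The two routines are naturally used together as a line search: `bracket_minimum' finds points straddling the minimum along a given direction, and `minimum_bisection' then refines it. A sketch, continuing the example above (vector construction and indexing are again assumptions):

   vector x0(2), dx0(2), x1(2), x2(2), xint(2), xmin(2);
   x0[0]  = 0.0;  x0[1]  = 0.0;    // starting position
   dx0[0] = 1.0;  dx0[1] = -0.3;   // initial step towards the minimum
   double fmin;
   if (f.bracket_minimum(x0, dx0, x1, x2, xint) &&
       f.minimum_bisection(x1, x2, xint, xmin, fmin, 1.0e-8)) {
      // xmin now holds the line minimum and fmin the function value there.
   }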
minimum by the method of steepest descent
Uses `jacob' to calculate the derivatives df/dc_i.
int minsteep(
   vector& c, // parameters (initial and optimised)
   const double tol = 1.0e-6, // acceptable gradient tolerance
   const double stepscale = 1.0, // scaling factor for gradient vector
   const double stepfac = 1.5, // factor for changing step size (>1)
   const double stepmin = 1.0e-8, // minimum stepsize allowed
   const int maxits = 0, // maximum iterations allowed (0 => no limit)
   const int verbose = 0 // 1 for diagnostic output
)
Returns 0 on failure.
conjugate gradient minimisation
int minimum_cg(
   vector& x, // initial and optimised position
   const double acc = 1.0e-6, // desired position accuracy of minimum
   const double stepscale = 1.0, // scaling factor for gradient vector
   const double stepfac = 1.5, // factor for changing step size (>1)
   const double stepmin = 1.0e-8, // minimum stepsize allowed
   const int maxits = 0, // maximum iterations allowed (0 => no limit)
   const int verbose = 0 // 1 for diagnostic output
)
Returns 0 on failure.
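A typical call, continuing the sketch above; `minsteep' is invoked in the same way:

   vector x(2);
   x[0] = 5.0;
   x[1] = 5.0;
   int ok = f.minimum_cg(x, 1.0e-8);   // defaults for the remaining controls
   // ok is 0 on failure; otherwise x holds the optimised parameters.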

Generic optimiser class `Optimiser'

This is a base class for a set of different optimiser methods. Each method takes an `Optimisable_function' and a starting guess at the parameters, and searches for the parameter values giving the minimum. No objects of class `Optimiser' may be created, as it is pure abstract; objects of the concrete subclasses may be created, as may pointers to the base class. The subclasses correspond closely to the methods defined in the `Optimisable_function' class.

Each concrete subclass requires control data. These are stored as internal data of the optimiser object, rather than being passed each time an optimisation is performed. Defaults are set when an object of a concrete subclass is created.

The base class includes input and output methods for pointers. When the input method is used, a word is read from the input stream and compared against known subclass names; if a match is found, an instance of the subclass is created and its control data read. The output method writes the subclass name and then its control data.
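
Purely as an illustration of the input mechanism, and not of the library's actual method names or signatures, the behaviour is roughly that of the following sketch (the name `read_control_data' is hypothetical):

   #include <iostream>
   #include <string>

   Optimiser* read_optimiser(std::istream& in)
   {
      std::string name;
      in >> name;                            // the subclass's I/O name
      Optimiser* opt = 0;
      if      (name == "dumb")               opt = new Dumb_optimiser;
      else if (name == "steepest_descent")   opt = new Steep_optimiser;
      else if (name == "conjugate_gradient") opt = new CG_optimiser;
      if (opt)
         opt->read_control_data(in);         // hypothetical: read the control data
      return opt;
   }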

Dumb optimiser class: `Dumb_optimiser'

Parameters:
   int maxits; // maximum iterations (0 => no limit)		default: 0
   int verbose; // 0/1 for diagnostics off/on			default: 0

I/O name: `dumb'.

Steepest descent optimiser class: `Steep_optimiser'

Parameters:
   double tol; // tolerance of minimum (spatial)		default: 1.0e-7
   double stepscale; // scaling factor for gradient vector	default: 1.0
   double stepfac; // factor for changing step size (>1)	default: 1.5
   double stepmin; // minimum stepsize allowed			default: 1.0e-7
   int maxits; // maximum iterations (0 => no limit)		default: 0
   int verbose; // 0/1 for diagnostics off/on			default: 0

I/O name: `steepest_descent'.

Conjugate gradient optimiser class: `CG_optimiser'

Parameters:
   double tol; // tolerance of minimum (spatial)		default: 1.0e-7
   double stepscale; // scaling factor for gradient vector	default: 1.0
   double stepfac; // factor for changing step size (>1)	default: 1.5
   double stepmin; // minimum stepsize allowed			default: 1.0e-7
   int maxits; // maximum iterations (0 => no limit)		default: 0
   int verbose; // 0/1 for diagnostics off/on			default: 0

I/O name: `conjugate_gradient'.

Monte-Carlo optimiser classes

Each class operates in a slightly different way.

Base class

`MC_optimiser', I/O name: `monte-carlo'. Defines the data common to all Monte-Carlo optimisers. Parameters:
   double pseudotemp; // pseudo-temperature
   int nsteps; // global steps to take
   int verbose; // diagnostic options (0=off)

Subclasses