# JSim Optimization Algorithms

## Introduction

No single optimization algorithm is best for all problems. Optimization algorithms vary in their approach, efficiency, robustness and applicability to particular problem domains. This document describes the optimization algorithms currently supported by JSim in some detail so users can make intelligent use of them. JSim's currently available optimizers are listed below. Other algorithms are in development.

## Overview

Some terminology is useful when discussing the merits of optimization algorithms:

• Bounded algorithms are those that require upper and lower bounds for each parameter varied. Unbounded algorithms require no such bounds.
• Parallel algorithms are those that can take advantage of multiple system processors for faster processing. See Using JSim Multiprocessing for further information on JSim multiprocessing.

All of JSim's currently available optimization algorithms share the following control parameters:

• max # runs: The optimizer will stop once it has run the model this many times.
• min RMS error: The optimizer will stop if the mean RMS error between reference data and model output is less than this value.

Algorithm-specific control parameters are listed with each algorithm description.
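
As a sketch, the two shared criteria combine into a simple stopping test. The function and parameter names below are illustrative, not JSim's actual API:

```python
def should_stop(run_count, rms_error, max_runs, min_rms):
    """Return True when either shared stopping criterion is met.

    run_count -- number of model runs performed so far
    rms_error -- mean RMS error between reference data and model output
    """
    if run_count >= max_runs:   # "max # runs" reached
        return True
    if rms_error < min_rms:     # fit already better than "min RMS error"
        return True
    return False
```

Each optimizer additionally stops when its algorithm-specific criteria (e.g. minimum par step) are met.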

## Simplex

JSim's Simplex is a bounded, non-linear, steepest-descent algorithm. It does not currently support parallel processing. (description needs work)

Algorithm-specific control parameters:

• parameter initial step: default=0.01
• minimum par step: The optimizer will stop if it is considering a parameter step smaller than this value.

## GGopt

GGopt is an unbounded non-linear algorithm originally written by Glad and Goldstein. This algorithm does not currently support parallel processing. (description needs work)

Algorithm-specific control parameters:

• minimum par step: The optimizer will stop if it is considering a parameter step smaller than this value.
• maximum # iterations: default=10
• relative error: default=1e-6

## Nelder-Mead

Nelder-Mead is an unbounded, steepest-descent algorithm by Nelder and Mead, also called the non-linear Simplex method. This algorithm supports multiprocessing (MP).

During each iteration of a P-parameter optimization, this algorithm performs P or P+1 parameter queries (model runs). Several additional single queries are also performed. Ideal MP speedup on an N-processor system is roughly of order P (if P is a factor of N) or of order N (if N is a factor of P).

This algorithm differs from the previously available JSim "Simplex" (above) in that:

• it is unbounded, while "Simplex" is bounded;
• it supports MP, while "Simplex" does not;
• it is a newer implementation of the algorithm (Java vs. Fortran).

Algorithm-specific control parameters:

• parameter initial step: default=0.01
• minimum par step: The optimizer will stop if it is considering a parameter step smaller than this value.
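
To make the per-iteration structure concrete, here is a minimal, self-contained Nelder-Mead sketch. It is a simplified textbook version, not JSim's Java implementation; the `step` and `tol` arguments loosely mirror the "parameter initial step" and "minimum par step" controls:

```python
def nelder_mead(f, x0, step=0.1, tol=1e-8, max_iter=500):
    """Simplified Nelder-Mead minimization of f over a list of parameters."""
    n = len(x0)
    # Initial simplex: x0 plus one vertex perturbed per parameter (P+1 vertices).
    simplex = [list(x0)] + [
        [x0[j] + (step if j == i else 0.0) for j in range(n)] for i in range(n)
    ]
    for _ in range(max_iter):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        if f(worst) - f(best) < tol:
            break
        # Centroid of all vertices except the worst.
        c = [sum(v[j] for v in simplex[:-1]) / n for j in range(n)]
        xr = [c[j] + (c[j] - worst[j]) for j in range(n)]          # reflection
        if f(xr) < f(best):
            xe = [c[j] + 2.0 * (c[j] - worst[j]) for j in range(n)]  # expansion
            simplex[-1] = xe if f(xe) < f(xr) else xr
        elif f(xr) < f(simplex[-2]):
            simplex[-1] = xr
        else:
            xc = [c[j] + 0.5 * (worst[j] - c[j]) for j in range(n)]  # contraction
            if f(xc) < f(worst):
                simplex[-1] = xc
            else:
                # Shrink all vertices toward the best one.
                simplex = [best] + [
                    [(v[j] + best[j]) / 2.0 for j in range(n)] for v in simplex[1:]
                ]
    simplex.sort(key=f)
    return simplex[0]
```

The P+1 simplex vertices correspond to the P or P+1 queries per iteration mentioned above; a parallel implementation can evaluate them concurrently.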

## GridSearch

GridSearch is a bounded, parallel algorithm. The algorithm operates via progressively restricted search of parameter space on a regularly spaced grid of npoints per dimension. Each iteration, npoints^nparm points are searched for the minimum residual. Each parameter dimension is then restricted to one grid delta around that minimum and the search repeats until stopping criteria are met.

Search bounds in each dimension narrow by a factor of at least 2/(npoints-1) each iteration. Thus, npoints must be at least 4. Each iteration requires up to npoints^nparm residual calculations. Residual calculations are reused when possible, and this reuse is most efficient when npoints is 1 + 2^N for some N. Therefore, npoints defaults to 5, which is the smallest "efficient" value.

This algorithm is not efficient for very smooth residual functions in high-dimensional spaces. It works well on noisy functions and in low-accuracy situations (e.g. when only 3 significant digits are required). With npoints large, it copes well with multiple local minima. An effective strategy may be to use several iterations of GridSearch to estimate a global minimum, and then fine-tune the answer with a steepest-descent algorithm.

The number of points searched during each iteration is typically large compared to the number of available processors. Typical MP speedup on an N-processor system is therefore on the order of N.

Algorithm-specific control parameters:

• minimum par step: The optimizer will stop if it is considering a parameter step smaller than this value.
• max # iterations: stop after this many iterations, default=10.
• # grid points: npoints above, default=5.
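
The progressive restriction described above can be sketched as follows. This is an illustrative reimplementation, not JSim's code, and it omits the residual-reuse optimization:

```python
from itertools import product

def grid_search(f, bounds, npoints=5, max_iter=10):
    """Progressively restricted grid search.

    bounds -- list of (lo, hi) pairs, one per parameter.
    """
    for _ in range(max_iter):
        # Regularly spaced grid of npoints per dimension.
        axes = [[lo + k * (hi - lo) / (npoints - 1) for k in range(npoints)]
                for lo, hi in bounds]
        # Search all npoints**nparms grid points for the minimum residual.
        best = min(product(*axes), key=f)
        # Restrict each dimension to one grid delta around the minimum.
        bounds = [(max(lo, x - (hi - lo) / (npoints - 1)),
                   min(hi, x + (hi - lo) / (npoints - 1)))
                  for (lo, hi), x in zip(bounds, best)]
    return list(best)
```

Each loop iteration narrows every dimension by at least a factor of 2, matching the 2/(npoints-1) bound stated above for npoints=5.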

## Nl2sol

This version of Nl2sol is derived from NL2SOL, a library of FORTRAN routines implementing an adaptive non-linear least-squares algorithm. It has been modified to perform all calculations in double precision. It is an unbounded optimizer and does not support multiprocessing.

Algorithm-specific control parameters:

• maximum # runs: default=50
• relative error: default=1e-6

## Sensop

Sensop is a variant of the Levenberg-Marquardt algorithm that utilizes the maximum parameter sensitivity to determine step size. It is a bounded optimizer that supports multiprocessing.

Algorithm-specific control parameters:

• minimum par step: The optimizer will stop if it is considering a parameter step smaller than this value.
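
The "parameter sensitivity" driving Sensop's step-size choice can be illustrated with a simple finite-difference helper. This is a generic sketch, not Sensop's actual formulation:

```python
def sensitivities(model, params, h=1e-6):
    """Finite-difference sensitivities: s[i][j] = d(output j) / d(param i).

    model -- function mapping a parameter list to a list of outputs
    """
    y0 = model(params)
    sens = []
    for i, p in enumerate(params):
        pert = list(params)
        pert[i] = p + h * max(1.0, abs(p))   # relative perturbation
        step = pert[i] - p
        yi = model(pert)
        sens.append([(a - b) / step for a, b in zip(yi, y0)])
    return sens
```

A sensitivity-based optimizer scales its step in each parameter by how strongly that parameter influences the model output; the per-parameter perturbations above are also independent, which is what makes such methods amenable to multiprocessing.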

## Simulated Annealing

Simulated annealing is an algorithm inspired by the annealing process in metallurgy. As the problem "cools" from its initial "temperature", random fluctuations of model parameters are reduced. JSim implements a bounded version of this algorithm that supports multiprocessing.

Algorithm-specific control parameters:

• initial temperature: default=100. This parameter has no physical meaning (e.g. Kelvin), but must be positive.
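
A minimal bounded simulated-annealing sketch is below. The geometric cooling schedule, step scaling, and acceptance rule are common textbook choices, not necessarily JSim's:

```python
import math
import random

def simulated_annealing(f, x0, lo, hi, t0=100.0, cooling=0.95, max_runs=5000):
    """Bounded simulated annealing; lo/hi are per-parameter bounds, t0 the
    "initial temperature" control."""
    random.seed(0)  # fixed seed so the example is reproducible
    x, fx = list(x0), f(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(max_runs):
        # Random step in each parameter, scaled down as the system cools,
        # then clipped to the bounds.
        cand = [min(hi[i], max(lo[i],
                x[i] + random.uniform(-1.0, 1.0) * (t / t0) * (hi[i] - lo[i])))
                for i in range(len(x))]
        fc = f(cand)
        # Always accept improvements; accept uphill moves with
        # Boltzmann probability exp(-(fc - fx) / t).
        if fc < fx or random.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fc < fbest:
                best, fbest = list(cand), fc
        t *= cooling
    return best
```

As the temperature falls, both the step size and the chance of accepting an uphill move shrink, so the search gradually freezes near the best region found.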

## Genetic Algorithms

This algorithm is available only in JSim version 2.01 and above.

Genetic algorithms are a family of algorithms that generate a population of candidate solutions and select the best solutions in each iteration. A new population of solutions is created during each iteration. There are different ways of specifying how a new population is generated from the existing one. The error calculation is used to score and rank the candidate solutions. "Fit" individuals in the existing population are selected using one of three methods: (1) roulette-wheel, (2) tournament selection, or (3) elitism. In the roulette-wheel method, the probability of a solution being selected is inversely proportional to its error. In tournament selection, two random solutions are compared and the one with the lower error is placed in the new population. In elitism, the solutions with the lowest errors (within a cutoff fraction) are selected. New solutions are created by "mutating" and "crossing over" existing solutions.

Algorithm-specific control parameters:

• Population: default=25. Number of trials in each generation. If max # runs < Population, then Population defaults to the value of max # runs and only the parent generation is calculated. Suggested values are max # runs = 400 and Population = 100, giving four generations.
• Mutation rate: default=0.1. The probability of any single parameter being changed.
• Crossover rate: default=0.5. The probability of creating a new solution by combining existing solutions.
• Select Method: default=1. Acceptable values are (1) roulette-wheel, (2) tournament selection, and (3) elitism. See the description above for these terms.
• Mutation step: default=0.05. The amount by which a mutation changes a parameter, specified as a fraction of the parameter's range.
• Elite cutoff: default=0.5. Fraction of the population considered fit for the elitism selection method (Select Method=3).
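
The selection, crossover, and mutation steps described above can be sketched as follows. This generational GA uses tournament selection plus a single carried-over elite; it illustrates the terms above but is not JSim's implementation:

```python
import random

def genetic_optimize(f, lo, hi, pop_size=25, generations=4,
                     mutation_rate=0.1, crossover_rate=0.5, mutation_step=0.05):
    """Generational genetic algorithm over bounded parameters lo..hi."""
    random.seed(0)  # fixed seed so the example is reproducible
    n = len(lo)
    pop = [[random.uniform(lo[i], hi[i]) for i in range(n)]
           for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = [min(pop, key=f)]  # elitism: carry the best solution forward
        while len(new_pop) < pop_size:
            # Tournament selection: the lower-error of two random solutions.
            a, b = random.choice(pop), random.choice(pop)
            child = list(a if f(a) < f(b) else b)
            if random.random() < crossover_rate:
                # Uniform crossover with a second randomly chosen solution.
                other = random.choice(pop)
                child = [c if random.random() < 0.5 else o
                         for c, o in zip(child, other)]
            for i in range(n):
                if random.random() < mutation_rate:
                    # Mutate by a fraction of the parameter's range, then clip.
                    child[i] += random.uniform(-1.0, 1.0) * mutation_step * (hi[i] - lo[i])
                    child[i] = min(hi[i], max(lo[i], child[i]))
            new_pop.append(child)
        pop = new_pop
    return min(pop, key=f)
```

Because the candidate evaluations within a generation are independent, a population of this kind parallelizes naturally across processors.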