OOOT: Object-Oriented Optimization Toolbox

This project is maintained by DesignEngrLab

Example 1: Simple Unconstrained 2-D Optimization

In this first simple example, we solve the unconstrained Rosenbrock's banana function with the Nelder-Mead search. The Nelder-Mead method, which is also at the heart of MATLAB's fminsearch function, is a clever direct-search method that works well for low-dimensional problems with no constraints (if constraints are used, you will need to specify a merit/penalty function).

            var optMethod = new NelderMead();

The objective function is the famous Rosenbrock's banana function in two dimensions. All objective functions must be objects of classes that implement the IObjectiveFunction interface. Within the toolbox there is one such class called polynomialObjFn. Like polynomialInequality and polynomialEquality, this class stores a list of polynomial terms that are summed together. An individual term can be composed of a constant and any (or all) variables raised to any power. These classes cannot handle functions in which sums are multiplied together or raised to a power. Therefore, the equation for Rosenbrock's function shown at the top of its Wikipedia page must be expanded into individual terms, as shown below.
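
Expanding the two-dimensional form, f(x1, x2) = (1 - x1)^2 + 100*(x2 - x1^2)^2, in the x1, x2 notation used here yields the six separate terms used in the code:

            100*x1^4 - 200*x1^2*x2 + x1^2 - 2*x1 + 100*x2^2 + 1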

            optMethod.Add(new polynomialObjFn()
            {
                Terms = new List<string>
                {
                    "100*x1^4",
                    "-200*x1^2*x2",
                    "x1^2",
                    "-2*x1",
                    "100*x2^2",
                    "1",
                }
            });

Next, additional details of the optimization method are required. Actually, Nelder-Mead requires the least additional primping. One thing it does require is at least one convergence criterion. Since we know the optimum is 0 (at {1, 1}), we can use the "ToKnownBestFConvergence" criterion with a tolerance of 0.0001.

            optMethod.Add(new ToKnownBestFConvergence(0, 0.0001));
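
Because only "at least one" convergence criterion is required, further criteria can be added with additional calls to Add. As a rough sketch only (the MaxIterationsConvergence class name and its constructor argument are assumptions and may differ in your copy of the toolbox):

            // assumed class and constructor: also stop the search after 500 iterations
            optMethod.Add(new MaxIterationsConvergence(500));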


In this example, we will provide a starting point, but it is not required. If one is not provided, a random point is chosen.

            double[] xInit = new[] { 5.0, 50.0 };

The next line is where the optimization actually occurs. Note that the Run command requires an "out double[]" as its first argument. In this way, the optimization can return both a single double equal to the value of the objective function at the optimum and, through the out parameter, the vector of optimal decision variables. There are two other overloads for the Run command: you can simply specify the out vector and nothing else, or you can provide the out vector and the number of decision variables.

            double[] xStar;
            var fStar = optMethod.Run(out xStar, xInit);
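
For reference, the two other overloads mentioned above would be invoked roughly as follows (a sketch based on the description; the exact signatures may differ):

            // starting point omitted: a random one is chosen
            fStar = optMethod.Run(out xStar);
            // or: only the number of decision variables (here, 2) is given
            fStar = optMethod.Run(out xStar, 2);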

Finally, output is printed from the optimization. Since optMethod is an object, we can probe it after the run to get at important data, such as how the process converged.

            Console.WriteLine("Convergence Declared by " + optMethod.ConvergenceDeclaredByTypeString);
            Console.WriteLine("X* = " + StarMath.MakePrintString(xStar));
            Console.WriteLine("F* = " + fStar);
            Console.WriteLine("NumEvals = " + optMethod.numEvals);

Whatever computer you run this on, you should get the same output, since a starting point was provided and there is no randomness in the process:

            Convergence Declared by ToKnownBestFConvergence
            X* = { 1.007 , 1.014 }
            F* = 9.57200828679561E-05
            NumEvals = 264