Limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) for Unconstrained Optimization in C#


The Limited-memory Broyden–Fletcher–Goldfarb–Shanno method is an optimization algorithm belonging to the family of quasi-Newton methods for unconstrained non-linear optimization. In short, it is an off-the-shelf optimizer for seeking minimum or maximum points of any differentiable, possibly non-linear function, requiring only an expression for the function and its gradient. The goal of this article is to provide and demonstrate a C# port of the L-BFGS method originally written by Jorge Nocedal in Fortran.

Introduction

Function optimization is a common problem found in many numerical applications. Suppose we have a differentiable function f : Rn → R and we would like to find its minimum or maximum value over its space of possible input values. These kinds of problems arise often in parameter optimization of machine learning models and other statistics-related applications (and in a myriad of other applications too; however, we will pay special attention to cases related to machine learning).

The problem of maximizing or minimizing a function can be simply stated as max f(x) or min f(x). When there are no constraints limiting the possible values of x, it is well known from calculus that a maximum or minimum of such a function can only occur where the first derivative f'(x) of the function f(x) with respect to x is zero. Newton's method is a common method for finding the roots of a differentiable function, and it is indeed a suitable method for finding those values at which the first derivative of the function is zero.
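To make this concrete, the sketch below (an illustrative example only, not part of the ported code; all names are hypothetical) shows a one-dimensional Newton iteration that locates a stationary point by driving the first derivative to zero:

```csharp
using System;

static class NewtonExample
{
    // Didactic sketch: Newton's method applied to the first derivative,
    // x_{k+1} = x_k - f'(x_k) / f''(x_k), to locate a stationary point of f.
    public static double FindStationaryPoint(Func<double, double> df, Func<double, double> d2f,
        double x0, int maxIterations = 100, double tolerance = 1e-8)
    {
        double x = x0;
        for (int k = 0; k < maxIterations; k++)
        {
            double step = df(x) / d2f(x); // assumes f''(x) stays away from zero
            x -= step;
            if (Math.Abs(step) < tolerance)
                break; // converged: f'(x) is numerically zero
        }
        return x;
    }
}
```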

 

[Figure: surface plot and contour lines of the example function f(x,y) = exp{-(x-1)²} + exp{-(y-2)²/2}, a smooth, non-linear function with a single maximum. Images have been created using Wolfram Alpha.]

 

However, while Newton’s method may work very well with functions of a single variable, the generalization to higher dimensions requires replacing the first derivative f’(x) with the function’s gradient vector g of partial derivatives, and the second derivative f’’(x) with the inverse Hessian matrix H⁻¹. The problem is that the Hessian is a square matrix with as many rows and columns as there are parameters of our function f, and, besides being very costly and challenging to compute, even its size may be infeasible to accommodate in main memory when dealing with very high-dimensional problems.
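Written out, the multidimensional Newton update (the standard textbook form, restated here in the notation above) is:

```latex
x_{k+1} = x_k - H^{-1}(x_k)\, g(x_k)
```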

To overcome those limitations, several methods attempt to replace the Hessian matrix with an approximation built from the first partial derivatives alone. Such methods are called quasi-Newton methods, as they replace H in Newton’s method by an approximation instead of using the full Hessian. In case the function being optimized has a particular form or special characteristic that can be exploited, it is also possible to use more specialized methods such as Gauss–Newton, Levenberg–Marquardt or even lower-bound approximation methods to avoid computing the full Hessian.

The quasi-Newton BFGS method for non-linear optimization builds an approximation of the Hessian matrix from estimates extracted solely from first-order information. However, even though it is cheaper to compute, its memory requirements are still high, as the method still needs to store a dense N × N matrix for the inverse Hessian approximation (where N is the number of variables of the function). To overcome this problem, the L-BFGS method, the limited-memory version of BFGS, never explicitly forms or stores the approximation matrix, but keeps only a few vectors that represent it implicitly.
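To illustrate the idea (a didactic sketch under simplifying assumptions, not the ported Nocedal routine), L-BFGS keeps only the m most recent pairs s_k = x_{k+1} - x_k and y_k = g_{k+1} - g_k, and applies the implicit inverse Hessian approximation to a gradient through the well-known two-loop recursion:

```csharp
static class LbfgsSketch
{
    // Didactic sketch of the L-BFGS two-loop recursion: applies the implicit
    // inverse Hessian approximation to the gradient using only the stored
    // (s, y) correction pairs, without ever forming an N x N matrix.
    // Assumes at least one pair has been stored.
    public static double[] TwoLoopRecursion(double[] gradient, double[][] s, double[][] y)
    {
        int m = s.Length;                        // number of stored correction pairs
        int n = gradient.Length;                 // number of variables
        double[] q = (double[])gradient.Clone();
        double[] alpha = new double[m];

        // First loop: from the newest pair down to the oldest.
        for (int i = m - 1; i >= 0; i--)
        {
            double rho = 1.0 / Dot(y[i], s[i]);
            alpha[i] = rho * Dot(s[i], q);
            for (int j = 0; j < n; j++)
                q[j] -= alpha[i] * y[i][j];
        }

        // Initial approximation H0 = gamma * I, a common scaling choice.
        double gamma = Dot(s[m - 1], y[m - 1]) / Dot(y[m - 1], y[m - 1]);
        double[] r = new double[n];
        for (int j = 0; j < n; j++)
            r[j] = gamma * q[j];

        // Second loop: from the oldest pair up to the newest.
        for (int i = 0; i < m; i++)
        {
            double rho = 1.0 / Dot(y[i], s[i]);
            double beta = rho * Dot(y[i], r);
            for (int j = 0; j < n; j++)
                r[j] += s[i][j] * (alpha[i] - beta);
        }

        return r; // r approximates H^-1 * gradient; the search direction is -r
    }

    static double Dot(double[] a, double[] b)
    {
        double sum = 0;
        for (int i = 0; i < a.Length; i++)
            sum += a[i] * b[i];
        return sum;
    }
}
```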

Source code

The code presented here is based upon a direct translation of the original code by Jorge Nocedal. His original code was released under the permissive BSD license, and honoring the original license and the author’s considerations, this code is released under the BSD as well.

[Figure: L-BFGS class diagrams.]

The L-BFGS method is implemented in the BroydenFletcherGoldfarbShanno class. Upon object creation, it is possible to specify a function and its gradient either through lambda expressions or by passing handles to named functions. After the object has been initialized, a simple call to Minimize runs the standard L-BFGS method until convergence. The details of the Minimize method are given below.

The version of the code detailed here only supports minimization problems. However, note that any maximization problem can be converted into a minimization problem (and vice-versa) by taking the opposite of the function and its gradient, as sketched below.
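For example (a minimal sketch, assuming using System; here g is the example function used later in this article and gradientOfG is its hand-computed gradient):

```csharp
// Sketch: converting the maximization of g(x,y) into a minimization problem
// by negating both the function and its gradient.
Func<double[], double> g = (x) =>
    Math.Exp(-Math.Pow(x[0] - 1, 2)) + Math.Exp(-Math.Pow(x[1] - 2, 2) / 2);

Func<double[], double[]> gradientOfG = (x) => new double[]
{
    -2 * Math.Exp(-Math.Pow(x[0] - 1, 2)) * (x[0] - 1),     // dg/dx
    -Math.Exp(-Math.Pow(x[1] - 2, 2) / 2) * (x[1] - 2)      // dg/dy
};

Func<double[], double> f = (x) => -g(x);                     // minimize f to maximize g
Func<double[], double[]> gradientOfF = (x) =>
    Array.ConvertAll(gradientOfG(x), d => -d);               // gradient is negated as well
```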

Using the code

Code usage is rather simple. Suppose we would like to maximize the function g(x,y) = exp{-(x-1)²} + exp{-(y-2)²/2}.

Since we would like to perform a maximization, we first have to convert it into a minimization problem. The minimization version of the function is simply given by taking f(x,y) = -g(x,y) = -exp{-(x-1)²} - exp{-(y-2)²/2}.

This function has a single minimum, as can be seen by plotting its surface. The minimum lies at the point (x,y) = (1,2). As expected, this point coincides with the roots of the partial derivative functions, as shown in the plots below:

 

[Figure: the example optimization cast as a minimization problem, min f(x,y) = -(exp(-(x-1)²) + exp(-(y-2)²/2)), computed by Wolfram Alpha.]

[Figure: partial derivatives with respect to x (left) and y (right). The roots occur at 1 and 2, respectively. Images computed using Wolfram Alpha.]

 

The first step in solving this optimization problem automatically is to specify the target function. The target function f, in this case, can be specified through a lambda expression of the form:
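A minimal sketch of what this might look like (assuming using System; and a Func&lt;double[], double&gt; delegate over a parameter vector with x[0] = x and x[1] = y; the exact delegate type is an assumption here):

```csharp
// The target function f(x, y) = -exp(-(x-1)^2) - exp(-(y-2)^2 / 2),
// written over a parameter vector with x[0] = x and x[1] = y.
Func<double[], double> f = (x) =>
    -Math.Exp(-Math.Pow(x[0] - 1, 2)) - Math.Exp(-Math.Pow(x[1] - 2, 2) / 2);
```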

The next step is to specify the gradient g of the function f. In this particular case, we can compute the partial derivatives by hand: df/dx = 2 (x-1) e^(-(x-1)^2) and df/dy = (y-2) e^(-(y-2)^2/2), with respect to x and y, respectively. Writing a lambda function to compute the gradient vector g, we have:
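A corresponding sketch for the gradient, under the same vector convention (x[0] = x, x[1] = y):

```csharp
// Gradient of f, returning { df/dx, df/dy }.
Func<double[], double[]> g = (x) => new double[]
{
    // df/dx = 2 (x-1) e^(-(x-1)^2)
    2 * Math.Exp(-Math.Pow(x[0] - 1, 2)) * (x[0] - 1),

    // df/dy = (y-2) e^(-(y-2)^2 / 2)
    Math.Exp(-Math.Pow(x[1] - 2, 2) / 2) * (x[1] - 2)
};
```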

We can also note that this example was rather simple, so the gradient vector was easy to compute by hand. However, the gradient could also have been computed automatically using Accord.NET‘s FiniteDifferences class. In either case, all we have to do now is create our L-BFGS solver and call its Minimize() method to begin the optimization:
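A sketch of how the solver might then be created and run; the class name, the Minimize() call and the Solution property follow the description above, but the exact constructor parameter names are assumptions:

```csharp
// Create the L-BFGS solver for a function of two variables, passing the
// function and gradient lambdas, then run the optimization.
var lbfgs = new BroydenFletcherGoldfarbShanno(numberOfVariables: 2,
                                              function: f, gradient: g);

double minimum = lbfgs.Minimize();   // runs L-BFGS until convergence
double[] solution = lbfgs.Solution;  // expected to be approximately { 1.0, 2.0 }
```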

After the optimization is complete, the solution will be available in the Solution property of the lbfgs object, and will be equal to { 1.0, 2.0 }. In case one is interested in progress reports (such as when optimizing very large functions), it is also possible to register a listener with the Progress event handler. The complete version of the sample application accompanying the source code is given below:
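Since the original listing is not reproduced here, the following is a reconstruction sketch of what such a sample application could look like, using the same assumed constructor signature as above:

```csharp
using System;

class Program
{
    static void Main()
    {
        // Objective: minimize f(x, y) = -exp(-(x-1)^2) - exp(-(y-2)^2 / 2),
        // which is equivalent to maximizing the original g(x, y).
        Func<double[], double> f = (x) =>
            -Math.Exp(-Math.Pow(x[0] - 1, 2)) - Math.Exp(-Math.Pow(x[1] - 2, 2) / 2);

        // Gradient of f with respect to x and y.
        Func<double[], double[]> g = (x) => new double[]
        {
            2 * Math.Exp(-Math.Pow(x[0] - 1, 2)) * (x[0] - 1),
            Math.Exp(-Math.Pow(x[1] - 2, 2) / 2) * (x[1] - 2)
        };

        // Create the solver (constructor parameter names are assumptions) and
        // run the optimization. A Progress event for iteration reports is
        // mentioned in the text but not shown here.
        var lbfgs = new BroydenFletcherGoldfarbShanno(numberOfVariables: 2,
                                                      function: f, gradient: g);

        double minimum = lbfgs.Minimize();

        Console.WriteLine("Minimum value: {0}", minimum);
        Console.WriteLine("Solution: ({0}, {1})", lbfgs.Solution[0], lbfgs.Solution[1]);
    }
}
```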

Conclusion

In this post, we showed how to use a reduced set of the Accord.NET Framework‘s classes to perform non-linear optimization. This routine is also used by the Conditional Random Fields and Hidden Conditional Random Fields trainers to optimize the parameters of such models. The solver routines have been adapted from Nocedal’s original Fortran source code, which, according to a 2001 message from the author, has also been reported to be available in the public domain. Nevertheless, the reduced set of the framework available for download within this post is released under a BSD license, as an alternative to the version available within the Accord.NET Framework, which is available only under an LGPL license.

As always, I hope someone will find it useful 🙂

