Quadratic Programming in C#

I have manually translated and adapted the QuadProg solver for quadratic programming problems made by Berwin A. Turlach. His code was originally published under the GNU Library General Public License, which has since been superseded by the GNU Lesser General Public License. This adapted version honors the original work and is thus distributed under the same license.

The code in this article is part of the Accord.NET Framework. Download the full framework through NuGet and use this code in your own applications.

Introduction

Despite the name, the terms linear and quadratic programming bear little resemblance to the set of activities most people now know as programming. Those terms usually refer to a specific set of function optimization methods, i.e. methods which can be used to determine the maximum or minimum points of special kinds of functions under a given set of solution constraints. For example, suppose we would like to determine the minimum value of the function:

f(x, y) = 2x + y + 4

under the constraints that x and y must be non-negative (i.e. either positive or zero). This may seem fairly simple and trivial, but remember that practical linear programming problems may have hundreds or even thousands of variables and possibly millions of constraints.

When the problem to be solved involves a quadratic function instead of a linear function, but still presents linear constraints, this problem can be cast as a quadratic programming problem. Quadratic functions are polynomial functions in which each term may have at most a total degree of 2. For example, consider the function

f(x, y, z) = 2x² + 5xy + y² – z² + x – 5.

Now let’s check the sum of the degrees for each variable on the polynomial terms. We start by writing the missing terms of the polynomial

f(x, y, z) = 2x²y⁰z⁰ + 5x¹y¹z⁰ + x⁰y²z⁰ – x⁰y⁰z² + x¹y⁰z⁰ – 5x⁰y⁰z⁰

and then proceed to check the sum of the degrees at each term. In the first term, 2+0+0 = 2. For the second, 1+1+0 = 2, and so on. Those functions have a nice property that they can be expressed in a matrix form

f(x) = ½ xᵀQx + cᵀx.

Here, x and c are vectors. The matrix Q is a symmetric matrix specifying how the variables combine in the quadratic terms of the function. If this matrix is positive definite, then the function is convex, and the optimization has a single, unique optimum (we say it has a global optimum). The coefficients c specify the linear terms of our function.

Source code

The available source code is based on a translation of the Fortran code written by Berwin A. Turlach. However, some modifications have been made. Fortran uses column-major ordering for matrices, meaning that matrices are stored in memory in sequential order of column elements. Almost all other languages use row-major ordering, including C, C++, Java and C#. In order to improve data locality, I have modified the code to use the transpose of the original matrices D and A. I have also modified the QP formulation adopted in the Goldfarb and Idnani paper to reflect the form presented in the introduction.

In this article, we will be using the GoldfarbIdnani class from the Accord.NET Framework, as well as the QuadraticObjectiveFunction. Their source code is available at GoldfarbIdnani.cs and QuadraticObjectiveFunction.cs, respectively. Please note that the sample application version available in this blog post will most likely not be the most recent, updated, fixed and enhanced version of the code. For the latest version, be sure to download the latest version of the framework on the project site or through a NuGet package.

Using the code

The first step in solving a quadratic programming problem is, well, specifying the problem. To specify a quadratic programming problem, one would need two components: a matrix D describing the relationship between the quadratic terms, and a vector d describing the linear terms. Perhaps this would work better with an example.

Suppose we are trying to solve a minimization problem. Given a function, the goal in such problems is to find the correct set of function arguments which would result in the minimum possible value for the function. An example of a quadratic minimization problem is given below:

min f(x, y) = 2x² – xy + 4y² – 5x – 6y

subject to the constraints:

x – y = 5
x ≥ 10


Surface of the objective function (generated with Wolfram Alpha).

However, note that this problem involves a set of constraints. The solution for this minimization problem is required to lie in the region specified by the constraints. More specifically, any (x, y) pair that is a candidate for being a minimum of the function must respect the relations x – y = 5 and x ≥ 10. Thus, instead of lying at the unconstrained minimum of the function surface shown above, the solution lies slightly off the center of the surface. This is an obviously easy problem to solve manually, but it fits well for this demonstration.



As can be seen (and also demonstrated live by asking Wolfram Alpha), the solution lies at the point (10, 5), and the constrained minimum of the function is 170. So, now that we know what a quadratic programming problem looks like, how can we actually solve it?

Specifying the objective function

The first step in solving a quadratic programming problem is to specify the objective function. Using this code, there are three ways to specify it. Each of them has its own advantages and disadvantages.

1. Manually specifying the QP matrix.

This is the most common approach for numerical software, and probably the most cumbersome for the user. The problem matrix has to be specified manually. This matrix is sometimes denoted Q, D or H as it actually denotes the Hessian matrix for the problem.

The matrix Q is used to describe the quadratic terms of our problem. It is an n × n matrix, in which n corresponds to the number of variables in our problem, covering all possible combinations of variables. Recall our example given at the start of this section. We have 2 variables, x and y. Thus, our matrix Q is 2 × 2. The possible combinations for x and y are expressed in the table below.

        x      y
 x     x*x    x*y
 y     y*x    y*y

To form our matrix Q, we can take all coefficients associated with each pair mentioned in the table above. The diagonal elements should also be multiplied by two (this is because the matrix is actually the Hessian matrix of the problem: it is the matrix of all second-order derivatives of the function. Since we have at most quadratic terms, the elementary power rule of differentiation “drops” the ² from the x² and y² terms – I think a mathematician would hit me with a stick for explaining it like this, but it serves well as a quick, non-technical explanation).

Remember that our quadratic terms were 2x² – 1xy + 4y². Writing the terms in their proper positions and differentiating, we have:
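    Q = [  4   -1 ]
        [ -1    8 ]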


As can be seen, the matrix is also symmetric (and often, but not always, positive definite). The next, more trivial, step is to write a vector d containing the linear terms. The linear terms are –5x – 6y, and thus our vector d can be given by:
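    d = [ -5   -6 ]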


Therefore our C# code can be created like this:
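(A minimal sketch; Q and d are just plain .NET arrays.)

    // Quadratic terms of 2x² – xy + 4y², with the diagonal doubled (Q is the Hessian):
    double[,] Q =
    {
        {  4, -1 },
        { -1,  8 },
    };

    // Linear terms –5x – 6y:
    double[] d = { -5, -6 };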

2. Using lambda expressions

This approach is a bit more intuitive and less error prone. However, it involves lambdas functions and some people find it hard to follow them. Another disadvantage is that we will lose the edit & continue debugging ability of visual studio. The advantage is that the compiler may catch some obvious syntax errors automatically. Below we are using the QuadraticObjectiveFunction from Accord.NET to specify our target objective function.
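A sketch of this approach, assuming the QuadraticObjectiveFunction constructor that accepts a lambda expression:

    double x = 0, y = 0;

    // The lambda is inspected symbolically; x and y act only as placeholders.
    var f = new QuadraticObjectiveFunction(() => 2*x*x - x*y + 4*y*y - 5*x - 6*y);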

Note that the x and y variables could have been initialized to any value. They are only used as symbols, and not used in any computations.

3. Using text strings

This approach is more intuitive but a bit more error prone. The function can be specified using strings, as in a standard mathematical formula. However, since all we have are strings, there is no way to enforce static, compile time checking.
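A sketch, assuming the string-parsing constructor of QuadraticObjectiveFunction:

    var f = new QuadraticObjectiveFunction("2x² - xy + 4y² - 5x - 6y");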

Couldn’t be easier.

Specifying the constraints

The next step in specifying a quadratic programming problem is to specify the constraints. The constraints can be specified in almost the same way as the objective function.

1. Manually specifying the constraints matrix

The first option is to manually specify the constraint matrix A and vector b. The constraint matrix expresses the way the variables should be combined when compared to the corresponding values in vector b. It is possible to specify either equality constraints or inequality constraints. The formulation used in this code is slightly different from the one used in Turlach’s original implementation. The constraints are supposed to be in the form:

A₁x = b₁   (equality constraints)
A₂x ≥ b₂   (inequality constraints)

This means that each row of matrix A expresses a possible combination of the variables x which should be compared to the corresponding entry of b. An integer variable m can be specified to denote how many of the first rows of matrix A should be treated as equalities rather than inequalities. Recall that in our example the constraints are given by 1x – 1y = 5 and 1x ≥ 10. Let’s write this down in tabular form:

 #     x     y     ?      b
 q1    1    -1     =      5
 q2    1     0     ≥     10

Thus our matrix A and vector b can be specified as:
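    A = [ 1   -1 ]        b = [  5 ]
        [ 1    0 ]            [ 10 ]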



And not forgetting that m = 1, because the first constraint is actually an equality.
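In C#, these again become plain arrays (a minimal sketch):

    double[,] A =
    {
        { 1, -1 },   // x – y  = 5   (equality)
        { 1,  0 },   // x      ≥ 10  (inequality)
    };

    double[] b = { 5, 10 };

    // number of leading rows of A that are equalities
    int m = 1;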

2. Using classes and objects

A more natural way to specify constraints is using the classes and objects of the Accord.NET Framework. The LinearConstraint class allows one to specify a single constraint using an object-oriented approach. It doesn’t have the most intuitive usage on earth, but it is much more expressive. It can also be read aloud, if that adds anything! 🙂

The specification is centered around the notion that variables are numbered and have an associated index. For example, x is the zero-th variable of the problem. Thus x has an index of 0 and y has an index of 1. So for example, reading aloud the last constraint, it is possible to express how the variables at indices 0 and 1, when combined as 1x and –1y, should be equal to value 5.
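A sketch of this object-based specification, assuming the LinearConstraint properties mentioned above (VariablesAtIndices, CombinedAs, ShouldBe and Value):

    var constraints = new List<LinearConstraint>();

    // x ≥ 10
    constraints.Add(new LinearConstraint(numberOfVariables: 1)
    {
        VariablesAtIndices = new int[] { 0 },
        ShouldBe = ConstraintType.GreaterThanOrEqualTo,
        Value = 10
    });

    // 1x – 1y = 5
    constraints.Add(new LinearConstraint(numberOfVariables: 2)
    {
        VariablesAtIndices = new int[] { 0, 1 },
        CombinedAs = new double[] { 1, -1 },
        ShouldBe = ConstraintType.EqualTo,
        Value = 5
    });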

3. Using lambda expressions

A more intuitive way to express constraints is again using lambda expressions. And again the problems are the same: some people find it hard to follow and we lose edit & continue.
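A hypothetical sketch only (the exact constructor signature may differ; this assumes a constraint can be parsed from a boolean lambda over the x and y symbols declared earlier):

    var constraints = new List<LinearConstraint>();
    constraints.Add(new LinearConstraint(f, () => x - y == 5));
    constraints.Add(new LinearConstraint(f, () => x >= 10));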

4. Using text strings

Same as above, but with strings.
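Again a hypothetical sketch, assuming the constraints can be parsed from plain strings:

    var constraints = new List<LinearConstraint>();
    constraints.Add(new LinearConstraint("x - y = 5"));
    constraints.Add(new LinearConstraint("x >= 10"));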

Finally, creating and solving the problem

Once we have specified what we want, we can now ask the code for a solution. In case we have opted for manually specifying the objective function through Q and d, and the constraints through the matrix A, vector b and integer m, we can use:
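A sketch (here the raw matrices are wrapped in a QuadraticObjectiveFunction; depending on the framework version, a constructor taking the arrays directly may also be available):

    var solver = new GoldfarbIdnani(new QuadraticObjectiveFunction(Q, d), A, b, m);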

In case we have opted for creating a QuadraticObjectiveFunction object and a list of constraints instead, we can use:
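For example:

    var solver = new GoldfarbIdnani(f, constraints);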

After the solver object has been created, we can call Minimize() to solve the problem. In case we have opted for manually specifying Q and d, we can use:
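A sketch, assuming the parameterless Minimize overload:

    bool success = solver.Minimize();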

The solution will be available in the Solution property of the solver object, and will be given by:
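A sketch, assuming the Solution and Value properties:

    double[] solution = solver.Solution;   // expected to be { 10, 5 }
    double minimum = solver.Value;         // expected to be 170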

Sample application

The Accord.NET Framework now includes a sample application demonstrating the use of the Goldfarb-Idnani Quadratic Programming Solver. It can be downloaded at the Accord.NET Framework site, and also comes together with recent versions of the framework (> 2.6).


Solver sample application included in the Accord.NET Framework.

Remarks

Because the code has been translated by hand (in contrast to using automatic translators such as f2c), there could be potential bugs in the code. I have tested the code’s behavior against R’s quadprog package and have not found errors so far. But this does not mean the code is bug-free. As always, as is the case for everything else in this blog, this code is published in good faith, but I cannot guarantee the correctness of everything. Please read the disclaimer for more information.


Limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) for Unconstrained Optimization in C#


The Limited-memory Broyden–Fletcher–Goldfarb–Shanno method is an optimization method belonging to the family of quasi-Newton methods for unconstrained non-linear optimization. In short, it is an off-the-shelf optimizer for seeking minimum or maximum points of any differentiable, possibly non-linear function, requiring only an expression of the function and its gradient. The goal of this article is to provide and demonstrate a C# port of the L-BFGS method originally written by Jorge Nocedal in Fortran.

Introduction

Function optimization is a common problem found in many numerical applications. Suppose we have a differentiable function f : Rⁿ → R and we would like to obtain its minimum or maximum value while traversing its space of possible input values. Such problems often arise in parameter optimization of machine learning models and other statistics-related applications (and in a myriad of other applications too – however, we will pay special attention to cases related to machine learning).

The problem of maximizing or minimizing a function can be simply stated as max f(x) or min f(x). When there aren’t any constraints limiting the possible values of x, it is widely known from calculus that the maximum or minimum of such a function occurs where the first derivatives f’(x) of the function f(x) with respect to x are zero. Newton’s method is a common method for finding the roots of a differentiable function, and is indeed a suitable method for finding those values at which the first derivatives of a function are zero.

 

Surface (left) and contour lines (right) of the example convex, but non-linear function f(x,y) = exp{-(x-1)²} + exp{-(y-2)²/2}. Images have been created using Wolfram Alpha.

 

However, while Newton’s method may work very well with functions of a single variable, the generalization to higher dimensions requires replacing the first derivative f’(x) with the function’s gradient vector g of partial derivatives, and the second derivative f’’(x) with the inverse Hessian matrix H⁻¹. The problem is that the Hessian is a square matrix with as many rows and columns as parameters of our function f, and, besides being very costly and challenging to compute, even its size may be infeasible to accommodate in main memory when dealing with very high dimensional problems.

To overcome those limitations, several methods attempt to replace the Hessian matrix with an approximation extracted from the first partial derivatives alone. Those methods are called quasi-Newton methods, as they replace H in Newton’s method by an approximation instead of using the full Hessian. In case the function being optimized has any particular form or special characteristic which could be exploited, it is also possible to use more specialized methods such as the Gauss-Newton, Levenberg-Marquardt or even lower-bound approximation methods to avoid computing the full Hessian.

The quasi-Newton BFGS method for non-linear optimization builds an approximation for the Hessian matrix based on estimates extracted solely from first-order information. However, even though it is cheaper to compute, its memory requirements are still high, as the method needs to accommodate a dense N × N matrix for the inverse Hessian approximation (where N is the number of variables of the function). To overcome this problem, the L-BFGS method, or the limited-memory version of the BFGS method, never forms or stores the approximation matrix, but only a few vectors which can be used to represent it implicitly.

Source code

The code presented here is based upon a direct translation of the original code by Jorge Nocedal. His original code was released under the permissive BSD license, and honoring the original license and the author’s considerations, this code is released under the BSD as well.

L-BFGS Class diagrams

The L-BFGS method is implemented within the BroydenFletcherGoldfarbShanno class. Upon object creation, it is possible to specify a function and its gradient either through lambda expressions or by passing handles to named functions. After the object has been initialized, a simple call to Minimize runs the standard L-BFGS method until convergence. The details of the Minimize method are given below.

The version of the code detailed here only supports minimization problems. However, note that any minimization problem can be converted into a maximization problem (and vice versa) by taking the negative of the function and its gradient.

Using the code

Code usage is rather simple. Suppose we would like to maximize the function g(x,y) = exp{-(x-1)²} + exp{-(y-2)²/2}:

Since we would like to perform a maximization, we first have to convert it to a minimization problem. The minimization version of the function is simply given by taking f(x,y) = –g(x,y):
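    f(x, y) = –exp{–(x–1)²} – exp{–(y–2)²/2}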

This is a convex function, as can be seen by plotting its surface. The minimum of the function lies at the point (x, y) = (1, 2). As expected, this point coincides with the roots of the partial derivative functions, as shown in the line plots below:

 

Example optimization cast as a minimization problem: min f(x,y) = -(exp(-(x-1)²) + exp(-(y-2)²/2)), computed by Wolfram Alpha.

The roots of the partial derivatives with respect to x (left) and y (right) occur at 1 and 2, respectively. Images computed using Wolfram Alpha.

 

The first step in solving this optimization problem automatically is to specify the target function. The target function f, in this case, can be specified through a lambda function in the form:
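A sketch of the objective written as a delegate over a parameter vector (here w[0] plays the role of x and w[1] of y):

    Func<double[], double> f = (w) =>
        -(Math.Exp(-Math.Pow(w[0] - 1, 2)) +
          Math.Exp(-Math.Pow(w[1] - 2, 2) / 2));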

The next step is to specify the gradient g for the function f. In this particular case, we can manually compute the partial derivatives to be df/dx = 2 e^(-(x-1)²) (x-1) and df/dy = e^(-(y-2)²/2) (y-2), with respect to x and y, respectively. Writing a lambda function to compute the gradient vector g, we have:
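A corresponding sketch for the gradient:

    Func<double[], double[]> g = (w) => new double[]
    {
        // df/dx = 2 e^(-(x-1)²) (x-1)
        2 * Math.Exp(-Math.Pow(w[0] - 1, 2)) * (w[0] - 1),

        // df/dy = e^(-(y-2)²/2) (y-2)
        Math.Exp(-Math.Pow(w[1] - 2, 2) / 2) * (w[1] - 2)
    };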

We can also note that this example was rather simple, so the gradient vector was easy to calculate. However, the gradient could also have been computed automatically using Accord.NET‘s FiniteDifferences class. In either case, all we have to do now is to create our L-BFGS solver and call its Minimize() method to begin optimization:
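A sketch, assuming the BroydenFletcherGoldfarbShanno constructor accepts the number of variables, the function and its gradient:

    // (numberOfVariables, function, gradient)
    var lbfgs = new BroydenFletcherGoldfarbShanno(2, f, g);

    lbfgs.Minimize();

    double[] solution = lbfgs.Solution;   // expected to be close to { 1.0, 2.0 }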

After the optimization is complete, the solution will be available in the Solution property of the lbfgs object, and will be equal to { 1.0, 2.0 }. In case one is interested in progress reports (such as when optimizing very large functions), it is also possible to register a listener for the Progress event. The complete version of the sample application accompanying the source code is given below:

Conclusion

In this post, we showed how to use a reduced set of the Accord.NET Framework’s classes to perform non-linear optimization. This routine is also used by the Conditional Random Fields and Hidden Conditional Random Fields trainers to optimize parameters for such models. The solver routines have been adapted from Nocedal’s original Fortran source code which, tracing back to a 2001 message from the author, has also been reported to be available in the public domain. Nevertheless, the reduced set of the framework available for download within this post is available under a BSD license, as an alternative to the version available within the Accord.NET Framework, which is available only under the LGPL license.

As always, I hope someone will find it useful 🙂


Machine Learning Books and Lectures

Jordan, one of the readers of the blog, wrote to point out some cool references for machine learning, mathematics and artificial intelligence.

Thanks again for all your contributions. I’m a .NET programmer coming from a background of studying politics and Latin American studies so when the moment came that I realized I was interested in A.I., I didn’t have many resources to turn to.

I have a collection of books and links that I found very, very helpful when I was trying to learn about these concepts. I was thinking they could be useful to some of your readers who were in my shoes two years ago. So without further ado:

Handbook of Normal Frames and Coordinates

"Handbook of Normal Frames and Coordinates," Iliev, Bozhidar Z. (2006)

“This can be a heavy text, but if you understand the concept, this can take you to the next level – I would categorize this as Topology but has applications in quantum physics, theoretical mathematics, theoretical physics etc…- in context of your blogs – granted this is a dedicated read – I think readers will be able to appreciate and truly understand what a Hilbert Space is after reading this book.”

Linear Algebra

"Linear Algebra," Jim Heffron (2008)

“I liked this one because it was free and is still highly rated – I have also heard great reviews about David Poole’s book on linear algebra, but I couldn’t get myself to shell out the money when this was free.”

Complex To Real

http://www.complextoreal.com/tutorial.htm

“There are a ton of great articles here – I have personally read the ones on fourier transforms and wavelet transforms – great stuff”

Stanford Lectures – Fourier Analysis series

(free through Youtube or iTunesU)

“Email the Professor for the companion book. At this point it may have been published – but well worth shelling out the dough in any case.”

Bio-Inspired Artificial Intelligence

"Bio-Inspired Artificial Intelligence," Floreano and Mattiussi (2008)

“Excellent reference – fairly in depth for the number of topics it covers, lots of illustrations (illustrations are always a good thing 🙂 and I’ve also found it to be a useful source for inspiration. I bought this book while I was looking into fuzzy swarm intelligence – it doesn’t have all the answers, but I am simply in love with this book.”

Video lectures on Machine Learning

http://videolectures.net/pascal/

“A collection of video lectures featuring very brilliant people – let’s face it… if you’re doing or are interested in mathematics this complex… you probably don’t know too many people who you can talk to / learn from on the subject unless you’re in a University studying this kind of thing – these are some great lectures on machine learning – I just recently found this site but wish I had found it sooner – it’s great if you’re just exploring machine learning or are very well versed in it – however, I fall somewhere in the middle of that distribution so take it for what it’s worth!”

Fearless Symmetry

"Fearless Symmetry," Ash and Gross (2006)

Another accessible book to those without heavy training in math – great intro to Galois Theory, the Riemann Hypothesis and several other concepts.

Zero: The Biography of a Dangerous Idea

    "Zero: The Biography of a Dangerous Idea," Charles Seife (2000)

This one is more historical and conceptual than technical but it’s a captivating read and will help get you through those hard times when you want to put down that book on K-Dimensional Manifolds, but still need to satisfy your mathematical mind (some say it’s never at rest once you catch that "learning bug").

Khan Academy

http://www.khanacademy.org/

Finally,  when you get lost, go here! The Khan Academy is a not-for-profit 501(c)(3) with the mission of providing a world-class education to anyone, anywhere.

 

Thanks Jordan! I hope readers can enjoy those resources as much as I did. Cheers!

Handwritten Digits Recognition with C# SVMs in Accord.NET


I’ve posted a new article on CodeProject, entitled “Handwriting Recognition Revisited: Kernel Support Vector Machines”. It is a continuation of a previous article on handwritten digits recognition but this time using SVMs instead of KDA.


The code uses the SVM library in Accord.NET Framework. The framework, which runs on .NET and is written mostly in C#, supports standard or multiclass support vector machines for either classification or regression, having more than 20 kernel functions available to choose from.

In the article, the same set and the same amount of training and testing samples have been used as in the previous Kernel Discriminant Analysis article, for comparison purposes. The experimental results show that SVMs can outperform KDA both in terms of efficiency and accuracy, requiring much less processing time and memory while still producing robust results.

Matrix manipulation using Accord.NET


Matrix manipulation in Accord.NET
is very straightforward: Just add a new using directive on top of your class to
have (literally) about a hundred new extension methods that operate directly on
.NET multi-dimensional arrays.



Introduction



Accord.NET uses a somewhat different approach for matrix manipulation in contrast to other libraries. By using C# 3.0 extension methods, Accord adds several of the standard methods you would expect from a matrix library, such as linear system solving, matrix algebra and numerical decompositions, directly to the standard double[,] (or, more generally, T[,]) arrays of the .NET framework.

This approach offers some advantages since it avoids mismatches when one is using
multiple libraries, each with their own and often incompatible Matrix implementations.
Extending multi-dimensional arrays makes the use of matrices much more natural in
the .NET world, as no specialized Matrix classes have to be used and no code has
to be rewritten just to use a particular Matrix implementation.

 

Please note, however, that most methods implemented by Accord are not equivalent, from a performance point of view, to the heavily optimized versions found in more specialized numerical packages such as BLAS. Their real power comes when prototyping or turning algorithms into code. Once an algorithm is written, several unit tests can be created to test the correctness of the code. After that, because we have an early working prototype, it will be much easier to perform optimizations as needed. The key point is to have a working version early, which can (if actually necessary) be optimized later.

Using extension methods

The first step is to include a new using directive at the top of your source file.
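In C#, that amounts to a single line:

    using Accord.Math;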



This is all that is necessary after you have referenced the Accord.Math library in your project. Now you can use all of the methods and operations described below. The following list is nowhere near complete; it shows only some basics and highlights from the current version of Accord.

Declaring matrices

Using standard .NET declarations

To declare matrices, no special code is required. Just create multi-dimensional
arrays as usual.
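For example:

    double[,] A =
    {
        { 1, 2, 3 },
        { 6, 2, 0 },
        { 0, 0, 1 },
    };

    double[] v = { 1, 2, 3 };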

Using Accord.NET extensions

Additionally, one may wish to use one of the convenience methods of Accord.NET to
create specialized matrices such as the Identity matrix, multiples of the Identity
matrix or Diagonal matrices.

Accord.NET (C#) MATLAB®
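A sketch of what these helpers look like (assuming the Matrix.Identity and Matrix.Diagonal factory methods):

    double[,] I = Matrix.Identity(3);                         // eye(3)
    double[,] D = Matrix.Diagonal(new double[] { 1, 2, 3 });  // diag([1 2 3])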

Using Accord.NET extensions with implicit typing

By using implicit type variable declaration, the code acquires a certain lightweight feeling and gets closer to its MATLAB®/Octave counterpart (MATLAB is a registered trademark of The MathWorks, Inc.).

Accord.NET (C#) MATLAB®

An even closer alternative will be possible by using Algorithm Environments, an upcoming feature in Accord. When available, this feature will allow the construction of mathematical code using such environments directly. The code will be based on current Octave syntax. Octave is a high-level language, primarily intended for numerical computations, that is mostly compatible with MATLAB.

Matrix operations

All standard matrix operations such as transpose, inverse, column and row manipulations
are available in the extension methods. In the example below, A is the same standard
double[,] matrix declared in the first section of this
article. All methods return a new double[,] matrix
as a result, leaving the original matrix A untouched.

Operation Accord.NET (C#) MATLAB®
Transpose
Inverse
Pseudo-Inverse
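For instance (assuming the Transpose, Inverse and PseudoInverse extension methods):

    double[,] At    = A.Transpose();      // A'
    double[,] Ainv  = A.Inverse();        // inv(A)
    double[,] Apinv = A.PseudoInverse();  // pinv(A)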

Matrix Algebra

All common algebraic operations for matrices are also implemented. Those are the
common operations such as addition, subtraction and multiplication. Here, division
is actually a shortcut for multiplying by the inverse.

Operation Accord.NET (C#) MATLAB®
Addition
Subtraction
Multiplication
Division

The above also works with vectors and scalars.

Operations Accord.NET (C#) MATLAB®
Multiplying by a scalar
Multiplying by a column vector
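A sketch of a few of these operations (assuming the Add, Subtract and Multiply extension methods; names may vary between framework versions):

    // given two conformable matrices A and B
    double[,] sum        = A.Add(B);          // A + B
    double[,] difference = A.Subtract(B);     // A - B
    double[,] product    = A.Multiply(B);     // A * B   (matrix product)
    double[,] scaled     = A.Multiply(2.0);   // 2 * A   (scalar product)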

Special element-wise operations

Element-wise operations are operations performed on each element of the matrix or vector. In MATLAB, they are sometimes known as the dot operations, since they are usually denoted by prefixing a dot to the common operators.

Operation Accord.NET (C#) MATLAB®
Multiplication
Division
Power
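A sketch (assuming the ElementwiseMultiply and ElementwiseDivide extension methods):

    double[,] dotProduct  = A.ElementwiseMultiply(B);  // A .* B
    double[,] dotQuotient = A.ElementwiseDivide(B);    // A ./ B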

Vector operations

Accord.NET can also perform many vector operations. Among them are the many flavors
of products between vectors, such as the inner, the outer and the Cartesian.

Operation Accord.NET (C#) MATLAB®
Inner product (a.k.a. the scalar product)
Outer product (a.k.a. the matrix product) w = u’*v
Cartesian product  
Euclidean Norm
Sum
Product

Matrix characteristics

Some common matrix characteristics, such as the determinant and trace, are readily
available.

Operation Accord.NET (C#) MATLAB®
Determinant
Trace

Other characteristics

Other available characteristics are the Summation and Product of vector and matrix
elements.

Operation Accord.NET (C#) MATLAB®
Sum vector
Sum of elements
Sum along columns
Sum along rows

Linear Algebra

Linear algebra is certainly one of the most important fields of mathematics, if not the most important one. Accord includes many methods for numerical linear algebra, such as matrix inversion and matrix decompositions. Most of them were originally based on JAMA and MAPACK, but today Accord has some additions from routines translated from EISPACK (mainly the Generalized Eigenvalue Decomposition, which is currently absent from JAMA).

Operation Accord.NET (C#) MATLAB®
Solve a linear system x = A.Solve(B) x = A B

Eigenvalue Decomposition

Operation Accord.NET (C#) MATLAB®
Standard decomposition
Generalized decomposition
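A sketch of the decomposition classes (assuming EigenvalueDecomposition and GeneralizedEigenvalueDecomposition from Accord.Math.Decompositions):

    // Standard eigenvalue decomposition: [V, D] = eig(A)
    var evd = new EigenvalueDecomposition(A);
    double[]  eigenvalues  = evd.RealEigenvalues;
    double[,] eigenvectors = evd.Eigenvectors;

    // Generalized eigenvalue decomposition: [V, D] = eig(A, B)
    var gevd = new GeneralizedEigenvalueDecomposition(A, B);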

Singular Value Decomposition

Operation Accord.NET (C#) MATLAB®
Economy decomposition

QR Decomposition

Operation Accord.NET (C#) MATLAB®
Standard decomposition

Cholesky decomposition

Operation Accord.NET (C#) MATLAB®
Standard decomposition

LU Decomposition

Operation Accord.NET (C#) MATLAB®
Standard decomposition

Special operators

There are some other useful operators available in Accord.NET. There are facilities
to create index vectors (very common in Matlab for accessing sub-portions of a matrix),
select elements, find elements based on a selection criteria, and so on.

Operation Accord.NET (C#) MATLAB®
Create a vector of indices idx = 1:9
Selecting elements B = A(idx)
Finding elements matching a certain criteria (For example, finding x ∈ v / x > 2). v = [ 5 2 2 7 1 0];
idx = find(v > 2)
Reshaping a vector to a matrix. m = [1 2 3 4];
reshape(m,2,2)

 

More than just matrices

Accord.NET also offers many other standard mathematical functions such as the Gamma
function, log-Gamma, Digamma, Bessel functions, Incomplete beta integrals, and so
on.



For a complete listing of the framework features, please check the project page at Origo. In particular, don’t forget to check out the class diagram for the Accord.Math namespace.

Cheers!

 

Legal notice: MATLAB is a registered trademark of The MathWorks, Inc. All
other trademarks are the property of their respective owners.

Automatic Image Stitching with Accord.NET


I have just posted a new article on CodeProject, entitled Automatic Image Stitching with Accord.NET. It is a demonstration of automatic image stitching by interest point matching using the Accord and AForge.NET Frameworks.

 

Automatic Image Stitching in C# using Accord.NET Framework

 

The method used is pretty classic in the computer vision literature. There are many other variations of the method, each with its own advantages and disadvantages, but most of them are built around the same ideas: feature detection, matching and blending. This is one of the most straightforward and free implementations available, since some of the more sophisticated methods are patented.

Generalized Eigenvalue Decomposition in C#


For quite some time I have been trying to find a Java or C# implementation of the Generalized Eigenvalue Decomposition. I just wanted something simple, that didn’t depend on any external libraries such as BLAS or LAPACK. As I could not find any simple and understandable implementation of this decomposition, I decided to implement one myself.

Introduction

As Wikipedia says, a generalized eigenvalue problem is the problem of finding a vector v that obeys

Av = λBv

where A and B are matrices. If v obeys this equation, with some λ, then we call v the generalized eigenvector of A and B, and λ is called the generalized eigenvalue of A and B which corresponds to the generalized eigenvector v. The possible values of λ must obey the following equation

det(A – λB) = 0.

An interesting feature of the generalized eigenvalue decomposition is that it finds the eigenvectors of the matrix B⁻¹A even if B is singular and does not have an inverse. If B is nonsingular, the problem could be solved by reducing it to a standard eigenvalue problem of the form B⁻¹Ax = λx. However, because B can be singular, an alternative algorithm, called the QZ method, is necessary.

Solving the problem using the QZ method is equivalent to computing the eigendecomposition of B⁻¹A without the need to invert B, which could be impossible or ill-conditioned if B is singular or near-singular. It is also computationally simpler, as it saves the time and memory needed for inverting and storing the inverse of B.

EISPACK

EISPACK is a software library for numerical computation of eigenvalues and eigenvectors of matrices, written in FORTRAN. It was originally written around 1972–1973, based heavily on algorithms originally implemented in ALGOL. Because those algorithms are much more human-readable than their LAPACK counterparts, I decided to port the related functions from EISPACK.

The functions which implement the Generalized Eigenvalue Decomposition in EISPACK are called QZHES, QZIT, QZVAL and QZVEC. As their name implies, they use the QZ method for finding the generalized eigenvalues of a matrix pair (A,B).

Source code

The source code is available in the download link in the upper part of this article. It can also be found in the latest versions of the Accord.NET Framework. The algorithm presented in the code is equivalent to the eig(A,B) command of MATLAB.

Because the code has been ported from FORTRAN, many sections of the algorithms have stayed as they appear in their original functions. Please be aware that it may contain lots of labels and goto’s, as most structured programming features we take for granted today were not commonplace at the time.

Using the code

Using the code is much simpler than dealing with LAPACK functions. Because the functions have been implemented directly in C#, the code is somewhat more human readable and has also been wrapped in the same nice object-oriented approach as the other matrix decompositions that can be found in MAPACK and JAMA.

Using the given sources, the MATLAB code [V, D] = eig(A, B) becomes equivalent to the following C# code:
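(A sketch, assuming the GeneralizedEigenvalueDecomposition class and its Eigenvectors and DiagonalMatrix properties.)

    var gevd = new GeneralizedEigenvalueDecomposition(A, B);

    double[,] V = gevd.Eigenvectors;     // generalized eigenvectors
    double[,] D = gevd.DiagonalMatrix;   // generalized eigenvalues on the diagonal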

in the sense that both satisfy the identity A*V = B*V*D .


Accord.NET Framework – An extension to AForge.NET

Today I have just released the first version of Accord.NET. The Accord.NET Framework is a C# framework I have developed over the years while working on several areas of artificial intelligence, machine learning and statistics.

The Accord.NET Framework extends the excellent AForge.NET Framework with new tools and libraries. In order to use Accord.NET, you must have AForge.NET already installed. The version number of Accord.NET will be used to indicate its compatibility status with AForge.NET versions; thus, the first version starts at 2.0.0.

 

The framework is composed of a set of libraries and sample applications which demonstrate their features:

  • Accord.Statistics – library with statistical analysis and other tools;
  • Accord.Imaging – extension to the AForge.NET Imaging library with new filters and routines;
  • Accord.Neuro – extension to the AForge.NET Neuro library with other learning algorithms;
  • Accord.MachineLearning – extension to AForge’s machine learning library with Support Vector Machines;
  • Accord.Audio – experimental library with filters and audio processing routines.

 

The Framework has just been released, so be ready to expect bugs and unpolished/unfinished areas. The code is released under a LGPL license. For additional help, issues, and discussions, please refer to the recently created forums.

Recognition of Handwritten Digits using Non-linear Kernel Discriminant Analysis


I’ve just submitted a new article to CodeProject, entitled "Handwriting Recognition using Kernel Discriminant Analysis". The article is a demonstration of handwritten digit recognition using Non-linear Discriminant Analysis with Kernels and using the Optical Recognition of Handwritten Digits data set from the University of California’s Machine Learning Repository.

Recognition of handwritten digits using Kernel Discriminant Analysis.

 

The Code Project is a free repository of source code, tutorials and development resources. It is also home of a large and ever-growing community of software developers, architects and other technology professionals. It has fairly active forums, and is a reasonably good resource for resolving difficult software development issues.

Kernel Support Vector Machines for Classification and Regression in C#

Kernel methods in general have gained increased attention in recent years, partly due to the growth in popularity of Support Vector Machines. Support Vector Machines are linear classifiers and regressors that, through the kernel trick, operate in reproducing kernel Hilbert spaces and are thus able to perform non-linear classification and regression in their input space.

Foreword

If you would like to use SVMs in your .NET applications, download the Accord.NET Framework through NuGet. Afterwards, creating support vector machines for binary and multi-class problems with a variety of kernels becomes very easy. The Accord.NET Framework is an LGPL framework which can be used freely in commercial, closed-source, open-source or free applications. This article explains a bit of how the SVM algorithms and the overall SVM module were designed before being added as part of the framework.

Contents

  1. Introduction
    1. Support Vector Machines
    2. Kernel Support Vector Machines
      1. The Kernel Trick
      2. Standard Kernels
  2. Learning Algorithms
    1. Sequential Minimal Optimization
  3. Source Code
    1. Support Vector Machine
    2. Kernel Support Vector Machine
    3. Sequential Minimal Optimization
  4. Using the code
  5. Sample application
    1. Classification
    2. Regression
  6. See also
  7. References

Introduction

Support Vector Machines

Support vector machines (SVMs) are a set of related supervised learning methods used for classification and regression. In simple words, given a set of training examples, each marked as belonging to one of two categories, a SVM training algorithm builds a model that predicts whether a new example falls into one category or the other. Intuitively, an SVM model is a representation of the examples as points in space, mapped so that the examples of the separate categories are divided by a clear gap that is as wide as possible. New examples are then mapped into that same space and predicted to belong to a category based on which side of the gap they fall on.

A linear support vector machine is composed of a set of given support vectors z and a set of weights w. The computation for the output of a given SVM with N support vectors z1, z2, … , zN and weights w1, w2, … , wN is then given by:

F(x) = ∑_{i=1}^{N} wᵢ ⟨zᵢ, x⟩ + b

Kernel Support Vector Machines

The original optimal hyperplane algorithm proposed by Vladimir Vapnik in 1963 was a linear classifier. However, in 1992, Bernhard Boser, Isabelle Guyon and Vapnik suggested a way to create non-linear classifiers by applying the kernel trick (originally proposed by Aizerman et al.) to maximum-margin hyperplanes. The resulting algorithm is formally similar, except that every dot product is replaced by a non-linear kernel function. This allows the algorithm to fit the maximum-margin hyperplane in a transformed feature space. The transformation may be non-linear and the transformed space high dimensional; thus though the classifier is a hyperplane in the high-dimensional feature space, it may be non-linear in the original input space.

Using kernels, the original formulation for an SVM with support vectors z1, z2, …, zN and weights w1, w2, …, wN is now given by:

F(x) = ∑_{i=1}^{N} wᵢ k(zᵢ, x) + b

It is also very straightforward to see that, using a linear kernel of the form K(z, x) = ⟨z, x⟩ = zᵀx, we recover the original formulation for the linear SVM.

The Kernel trick

The Kernel trick is a very interesting and powerful tool. It is powerful because it provides a bridge from linearity to non-linearity to any algorithm that solely depends on the dot product between two vectors. It comes from the fact that, if we first map our input data into a higher-dimensional space, a linear algorithm operating in this space will behave non-linearly in the original input space.

Now, the Kernel trick is really interesting because that mapping never needs to be explicitly computed. If our algorithm can be expressed only in terms of an inner product between two vectors, all we need to do is replace this inner product with the inner product from some other suitable space. That is where the “trick” resides: wherever a dot product is used, it is replaced with a Kernel function. The kernel function denotes an inner product in feature space and is usually denoted as:

K(x,y) = <φ(x),φ(y)>

Using the Kernel function, the algorithm can then be carried into a higher-dimension space without explicitly mapping the input points into this space. This is highly desirable, as sometimes our higher-dimensional feature space could even be infinite-dimensional and thus infeasible to compute.

Standard Kernels

Some common Kernel functions include the linear kernel, the polynomial kernel and the Gaussian kernel. Below is a simple list with their most interesting characteristics.

 

Linear Kernel: The Linear kernel is the simplest kernel function. It is given by the common inner product <x, y> plus an optional constant c. Kernel algorithms using a linear kernel are often equivalent to their non-kernel counterparts, i.e. KPCA with a linear kernel is equivalent to standard PCA. k(x, y) = xᵀy + c
Polynomial Kernel: The Polynomial kernel is a non-stationary kernel. It is well suited for problems where all the data is normalized. k(x, y) = (αxᵀy + c)ᵈ
Gaussian Kernel: The Gaussian kernel is by far one of the most versatile kernels. It is a radial basis function kernel, and is the preferred kernel when we don’t know much about the data we are trying to model. k(x, y) = exp(−‖x − y‖² / (2σ²))

For more Kernel functions, check Kernel functions for Machine Learning Applications. The accompanying source code includes definitions for over 20 distinct Kernel functions, many of them detailed in the aforementioned post.

Learning Algorithms

Sequential Minimal Optimization

Previous SVM learning algorithms involved the use of quadratic programming solvers. Some of them used chunking to split the problem into smaller parts which could be solved more efficiently. Platt’s Sequential Minimal Optimization (SMO) algorithm takes chunking to the extreme by breaking the problem down into 2-dimensional sub-problems that can be solved analytically, eliminating the need for a numerical optimization algorithm.

The algorithm makes use of Lagrange multipliers to compute the optimization problem. Platt’s algorithm is composed of three main procedures or parts:

  • run, which iterates over all points until convergence to a tolerance threshold;
  • examineExample, which finds two points to jointly optimize;
  • takeStep, which solves the 2-dimensional optimization problem analytically.

The algorithm is also governed by three extra parameters besides the Kernel function and the data points.

  • The parameter C controls the trade-off between allowing some training errors and forcing rigid margins. Increasing the value of C increases the cost of misclassifications, but may result in models that do not generalize well to points outside the training set.
  • The parameter ε controls the width of the ε-insensitive zone, used to fit the training data. The value of ε can affect the number of support vectors used to construct the regression function. The bigger ε, the fewer support vectors are selected and the solution becomes more sparse. On the other hand, increasing the ε-value by too much will result in less accurate models.
  • The parameter T is the convergence tolerance. It is the criterion for completing the training process.

After the algorithm ends, a new Support Vector Machine can be created using only the points whose Lagrange multipliers are greater than zero. The expected outputs yᵢ can be individually multiplied by their corresponding Lagrange multipliers αᵢ to form a single weight vector w.

F(x) = ∑_{i=0}^{N} αᵢ yᵢ k(zᵢ, x) + b = ∑_{i=0}^{N} wᵢ k(zᵢ, x) + b

Sequential Minimal Optimization for Regression

A version of SVM for regression was proposed in 1996 by Vladimir Vapnik, Harris Drucker, Chris Burges, Linda Kaufman and Alex Smola. The method was called support vector regression and, as is the case with the original Support Vector Machine formulation, depends only on a subset of the training data, because the cost function for building the model ignores any training data close to the model prediction that is within a tolerance threshold ε.

Platt’s algorithm has also been modified for regression. Although it still maintains much of its original structure, the difference lies in the fact that the modified algorithm uses two Lagrange multipliers α̂ᵢ and αᵢ for each input point i. After the algorithm ends, a new Support Vector Machine can be created using only the points for which both Lagrange multipliers are greater than zero. The multipliers α̂ᵢ and αᵢ are then subtracted to form a single weight vector w.

F(x) = ∑_{i=0}^{N} (α̂ᵢ − αᵢ) k(zᵢ, x) + b = ∑_{i=0}^{N} wᵢ k(zᵢ, x) + b

The algorithm is also governed by the same three parameters presented above. The parameter ε, however, receives a special meaning: it governs the size of the ε-insensitive tube over the regression line. The algorithm has been further developed and adapted by Alex J. Smola and Bernhard Schölkopf, and further optimizations were introduced by Shevade et al. and Flake et al.

Source Code

Here is the class diagram for the Support Vector Machine module. We can see it is very simple in terms of standard class organization.


Class diagram for the (Kernel) Support Vector Machines module.

Support Vector Machine

Below is the class definition for the Linear Support Vector Machine. It is pretty much self explanatory.
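Since the full listing is omitted here, the following is only a minimal sketch (not the framework’s exact source) of what such a class looks like:

    public class SupportVectorMachine
    {
        public double[][] SupportVectors { get; set; }
        public double[] Weights { get; set; }
        public double Threshold { get; set; }

        // Computes F(x) = sum_i w_i <z_i, x> + b
        public virtual double Compute(double[] inputs)
        {
            double sum = Threshold;
            for (int i = 0; i < SupportVectors.Length; i++)
            {
                double dot = 0;
                for (int j = 0; j < inputs.Length; j++)
                    dot += SupportVectors[i][j] * inputs[j];
                sum += Weights[i] * dot;
            }
            return sum;
        }
    }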

Kernel Support Vector Machine

Here is the class definition for the Kernel Support Vector Machine. It inherits from Support Vector Machine and extends it with a Kernel property. The Compute method is also overridden to include the chosen Kernel in the model computation.
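Again as a minimal sketch, assuming an IKernel interface that exposes a Function(x, y) method:

    public class KernelSupportVectorMachine : SupportVectorMachine
    {
        public IKernel Kernel { get; set; }

        public override double Compute(double[] inputs)
        {
            double sum = Threshold;
            for (int i = 0; i < SupportVectors.Length; i++)
                sum += Weights[i] * Kernel.Function(SupportVectors[i], inputs);
            return sum;
        }
    }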

Sequential Minimal Optimization

Here is the code for the Sequential Minimal Optimization (SMO) algorithm.

Using the code

In the following example, we will be training a Polynomial Kernel Support Vector Machine to recognize the XOR classification problem. The XOR function is a classic example of a pattern classification problem that is not linearly separable.

Here, remember that the SVM is a margin classifier that classifies instances as either 1 or –1. So the training and expected output for the classification task should also be in this range. There are no such requirements for the inputs, though.
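For the XOR problem, the training data could therefore be written as (a sketch; the inputs here also use –1/+1, although any values would do):

    double[][] inputs =
    {
        new double[] { -1, -1 },
        new double[] { -1,  1 },
        new double[] {  1, -1 },
        new double[] {  1,  1 },
    };

    int[] outputs = { -1, 1, 1, -1 };   // XOR, mapped to the -1/+1 range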

To create the Kernel Support Vector Machine with a Polynomial Kernel, do:
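A sketch, assuming the KernelSupportVectorMachine constructor takes the kernel and the number of inputs:

    // Polynomial kernel of degree 2, machine with 2 inputs
    var machine = new KernelSupportVectorMachine(new Polynomial(2), 2);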

After the machine has been created, create a new Learning algorithm. As we are going to do classification, we will be using the standard SequentialMinimalOptimization algorithm.
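A sketch, assuming the SequentialMinimalOptimization class takes the machine and the training data and exposes a Run method:

    var smo = new SequentialMinimalOptimization(machine, inputs, outputs);
    double error = smo.Run();   // returns the training error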

After the model has been trained, we can compute its outputs for the given inputs.
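For example:

    double answer = machine.Compute(new double[] { 1, -1 });   // expected to be near +1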

The machine should be able to correctly identify all of the input instances.

Sample application

The sample application is able to perform both Classification and Regression using Support Vector Machines. It can read Excel spreadsheets and determines the task to be performed based on the number of columns in the sheet. If the input table contains two columns (e.g. X and Y) it will be interpreted as a regression problem X –> Y. If the input table contains three columns (e.g. x1, x2 and Y) it will be interpreted as a classification problem <x1, x2> belongs to class Y, Y being either 1 or –1.

Classification

To perform classification, load a classification task data such as the Yin Yang classification problem.


Yin Yang classification problem. The goal is to create a model which best determines whether a given point belongs to class blue or green. It is a clear example of a non-linearly separable problem.


Creation of a Gaussian Kernel Support Vector Machine with σ = 1.2236, C = 1.0, ε = 0.001 and T = 0.001.


Classification using the created Support Vector Machine. Notice it achieves an accuracy of 97%, with sensitivity and specificity rates of 98% and 96%, respectively.

Regression

To perform regression, we can load the Gaussian noise sine wave example.


Noisy sine wave regression problem.

 


Creation of a Gaussian Kernel Support Vector Machine with σ = 1.2236, C = 1.0, ε = 0.2 and T = 0.001.

After the model has been created, we can plot the model approximation for the sine wave data. The blue line shows the curve approximation for the original red training dots.


Regression using the created Kernel Support Vector Machine. Notice the coefficient of determination r² of 0.95. The closer to one, the better.

See also

 

References