Gaussian Mixture Models and Expectation-Maximization

Like K-Means, Gaussian Mixture Models (GMM) can be regarded as a type of unsupervised learning or clustering method. They are among the most statistically mature methods for clustering. But unlike K-Means, GMMs are able to build soft clustering boundaries, i.e., points in space can belong to any cluster with a given probability.

The code presented here is part of the Accord.NET Framework. The Accord.NET Framework is a framework for developing machine learning, computer vision, computer audition, statistics and math applications in .NET. To use the framework in your projects, install it by typing Install-Package Accord.MachineLearning in your IDE’s NuGet package manager.

Contents


  1. Introduction
  2. Source code
    1. Normal Distributions
    2. Mixture Distributions
    3. Expectation-Maximization
    4. Gaussian Mixture Models
  3. Using the code
  4. Sample application
  5. Conclusion
  6. References
  7. See also

Introduction

In statistics, a mixture model is a probabilistic model which assumes the underlying data belong to a mixture distribution. In a mixture distribution, the density function is just a convex combination (a linear combination in which all coefficients or weights are non-negative and sum to one) of other probability density functions:

$p(x) = w_1 p_1(x) + w_2 p_2(x) + \ldots + w_n p_n(x)$

The individual pi(x) density functions that are combined to make the mixture density p(x) are called the mixture components, and the weights w1, w2, …, wn associated with each component are called the mixture weights or mixture coefficients.

The most common mixture distribution is the Gaussian (Normal) mixture, in which each of the mixture components is a Gaussian distribution with its own mean and variance parameters.

To give a more solid and concrete understanding of Gaussian mixture models, we will jump directly into how to represent those abstractions in source code, through the use of class diagrams. Hopefully class diagrams give a better and more direct overview of the subject than a lengthy theoretical discussion.

Source code

First we will discuss the characteristics of any probability distribution. The IDistribution interface (depicted below) says that any probability distribution may have a probability function and a distribution function. It also says that probability distributions can be fitted to sets of observations through a parameter estimation method.

Class diagram for the IDistribution interface.

Since distribution functions can be either univariate or multivariate, the methods above accept any number of scalar values as input through the use of the params keyword. The Fit method for parameter estimation also accepts a general System.Array because we could be given either an array of univariate observations in the form of a double[] or an array of multivariate observations in the form of a jagged double[][] array. Since each observation may have a different weight in the estimation process, the weights parameter can be used to pass those weights to the fitting method.
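
A rough sketch of what such an interface can look like is shown below. The member names here are illustrative and may differ slightly from the actual framework interfaces:

    // Illustrative sketch of a distribution interface; the actual
    // Accord.NET members may differ slightly across versions.
    public interface IDistribution
    {
        // The probability density (or mass) function, evaluated at point x.
        double ProbabilityFunction(params double[] x);

        // The cumulative distribution function, evaluated at point x.
        double DistributionFunction(params double[] x);

        // Estimates the distribution parameters from a set of observations,
        // given as a double[] (univariate) or double[][] (multivariate).
        // Each observation may carry its own weight in the estimation.
        void Fit(System.Array observations, double[] weights);
    }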

Probability distributions may also have some associated measures, mainly the Expectation and the Variance of the modeled random variable. Depending on whether the distribution is univariate or multivariate, the expected values and variances can be either a single scalar or a vector of scalars. For multivariate distributions the variance can also be represented by a symmetric, positive-definite matrix known as the variance-covariance matrix.

Class diagram for the distribution interfaces.

Moreover, probability distributions can also be continuous or discrete. In continuous distributions the probability function is known as the Probability Density Function (pdf), and in discrete distributions the probability function is known as the Probability Mass Function (pmf). However, since our distribution of interest, the Gaussian, is a continuous distribution, we will be focusing only on continuous multivariate distributions:

Class diagram for the continuous multivariate distribution interfaces.

The Normal (Gaussian) distribution and the mixture distributions fall under the multivariate continuous distributions category and are implemented as such.

Normal distributions

The Normal (or Gaussian) multivariate distribution is a multivariate distribution whose parameters are the mean vector μ and the variance-covariance matrix Σ. Its probability density function is given by:

$p(x) = \frac{1}{(2\pi)^{k/2} |\Sigma|^{1/2}} \exp\left( -\frac{1}{2} (x - \mu)^T \Sigma^{-1} (x - \mu) \right)$

where k is the dimension of the vector x.

The more detailed class diagram for the normal distribution shows the multivariate characteristics of most of its methods, in particular of the Fit method for estimating Gaussian parameters from a multivariate set of data. Each sample, given in the form of a double[], may also have different associated weights. In this case, the weight associated with each of the samples can be passed through the weights parameter. The source code for the Gaussian distribution can be found in the NormalDistribution.cs class.

Class diagram for the NormalDistribution class.

To estimate the parameters of a Gaussian distribution all we have to do is compute the mean and the variance-covariance matrix out of the given data. This is a very straightforward procedure and can be done by using the standard methods of the Tools class in the Accord.Statistics namespace. The Fit method updates the distribution parameters, overwriting the current distribution.
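
For instance, the estimation can be sketched as follows. Tools.Mean and Tools.Covariance are the kind of standard methods referred to above; the exact signatures may vary between framework versions:

    using Accord.Statistics;

    // Each row is one multivariate observation
    double[][] observations =
    {
        new double[] { 1.0, 2.0 },
        new double[] { 1.2, 2.3 },
        new double[] { 0.8, 1.9 },
    };

    // Maximum-likelihood estimates of the Gaussian parameters
    double[] mean = Tools.Mean(observations);             // mean vector
    double[,] cov = Tools.Covariance(observations, mean); // covariance matrix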

Mixture distributions

Mixture distributions are just convex combinations of other probability distributions. As such, mixture distributions have an array of component distributions and a coefficient array which contains the weights of each of the component distributions.

Class diagram for the Mixture&lt;T&gt; distribution.

In the generic implementation above, T is the type of the distributions used in the mixture. The most common mixture distribution, the Gaussian Mixture Distribution, could then be created by instantiating a Mixture object, passing the initial Normal distributions as constructor parameters.

Alternatively, the parameters of the mixture distributions could be estimated from a set of observations by calling the Fit method. To estimate the parameters of a mixture distribution we will be using a common technique known as the Expectation-Maximization algorithm.
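
For example, a two-component Gaussian mixture could be created and estimated as sketched below; namespaces and constructor overloads may vary across framework versions, and the univariate NormalDistribution is used here for brevity:

    using Accord.Statistics.Distributions.Univariate;

    // Create a mixture of two univariate Gaussians with equal weights
    var mixture = new Mixture<NormalDistribution>(
        new NormalDistribution(-2, 1),   // mean -2, standard deviation 1
        new NormalDistribution(3, 1));   // mean +3, standard deviation 1

    // Alternatively, estimate the parameters from data using E-M
    double[] samples = { -2.5, -1.9, -2.2, 3.1, 2.8, 3.4 };
    mixture.Fit(samples);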

Expectation-Maximization

Expectation-Maximization (EM) is a well-established maximum likelihood algorithm for fitting a mixture model to a set of training data. It should be noted that EM requires an a priori selection of model order, namely, the number M of components to be incorporated into the model. In the Accord.NET Framework, it can be called implicitly by using the Mixture.Fit method or explicitly by using the ExpectationMaximization class. The source code for Expectation-Maximization can be found in the ExpectationMaximization.cs file.

The general E-M algorithm comprises the following simple steps:

  1. Initialization

    Initialize the distribution parameters, such as the means, covariances and mixing coefficients, and evaluate the initial value of the log-likelihood (the goodness of fit of the current distribution against the observation dataset);

  2. Expectation

    Evaluate the responsibilities (i.e. weight factors of each sample) using the current parameter values;

  3. Maximization

    Re-estimate the parameters using the responsibilities found in the previous step;

  4. Repeat

    Re-evaluate the log-likelihood and check if it has changed; if it has changed less than a given threshold, the algorithm has converged.

 

This procedure is realized in the Fit method of the Mixture&lt;T&gt; class.
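
In condensed form, and for the univariate case, the loop can be sketched as shown next. This is a didactic illustration of the four steps above, not the verbatim framework source:

    using System;

    static class ExpectationMaximizationSketch
    {
        // Fits a univariate Gaussian mixture by E-M. The arrays w, mu and s2
        // hold the initial mixing coefficients, means and variances, and are
        // updated in place until the log-likelihood converges.
        public static void Fit(double[] x, double[] w, double[] mu, double[] s2,
            double threshold = 1e-8)
        {
            int n = x.Length, k = w.Length;
            double[,] r = new double[n, k]; // responsibilities
            double previous = double.NegativeInfinity;

            while (true)
            {
                // E-step: evaluate the responsibilities using current parameters
                for (int i = 0; i < n; i++)
                {
                    double sum = 0;
                    for (int j = 0; j < k; j++)
                        sum += r[i, j] = w[j] * Pdf(x[i], mu[j], s2[j]);
                    for (int j = 0; j < k; j++)
                        r[i, j] /= sum;
                }

                // M-step: re-estimate the parameters using the responsibilities
                for (int j = 0; j < k; j++)
                {
                    double nj = 0, mean = 0;
                    for (int i = 0; i < n; i++) { nj += r[i, j]; mean += r[i, j] * x[i]; }
                    mean /= nj;

                    double variance = 0;
                    for (int i = 0; i < n; i++)
                        variance += r[i, j] * (x[i] - mean) * (x[i] - mean);

                    mu[j] = mean; s2[j] = variance / nj; w[j] = nj / n;
                }

                // Repeat: re-evaluate the log-likelihood and check convergence
                double current = 0;
                for (int i = 0; i < n; i++)
                {
                    double p = 0;
                    for (int j = 0; j < k; j++) p += w[j] * Pdf(x[i], mu[j], s2[j]);
                    current += Math.Log(p);
                }

                if (Math.Abs(current - previous) < threshold) return;
                previous = current;
            }
        }

        // Gaussian probability density function
        static double Pdf(double x, double mean, double variance) =>
            Math.Exp(-(x - mean) * (x - mean) / (2 * variance))
                / Math.Sqrt(2 * Math.PI * variance);
    }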

Gaussian Mixture Models

Gaussian mixture models are among the most commonly used examples of mixture distributions. The GaussianMixtureModel class encompasses a Mixture<NormalDistribution> object and provides methods to learn from data and to perform actual classification through a simplified interface.

Class diagram for Gaussian Mixture Model

Moreover, a common problem which arises in mixture model fitting through E-M is the proper initialization of the mixture parameters. One popular approach is to initialize the mixture parameters using the centroids detected by the K-Means algorithm: a KMeans object is first created and computed, and its clusters are then passed to the mixture model for further processing.
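
In outline, the initialization can be sketched like this; Tools.Mean and Tools.Covariance are the helper methods mentioned earlier, and the framework's internal details may differ:

    using System.Linq;
    using Accord.MachineLearning;
    using Accord.Statistics;

    double[][] data = { /* the learning samples */ };
    int k = 2;

    // Run K-Means to obtain a hard partition of the data
    KMeans kmeans = new KMeans(k);
    int[] labels = kmeans.Compute(data, 1e-5);

    for (int j = 0; j < k; j++)
    {
        // Gather the observations assigned to cluster j
        double[][] cluster = data.Where((p, i) => labels[i] == j).ToArray();

        // The centroid and cluster covariance seed the j-th Gaussian
        // component, and the fraction of points assigned to the cluster
        // seeds its mixing coefficient; E-M then refines these estimates.
        double[] mean = Tools.Mean(cluster);
        double[,] covariance = Tools.Covariance(cluster, mean);
        double coefficient = cluster.Length / (double)data.Length;
    }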

Using the code

Code usage is rather simple. First, instantiate a new GaussianMixtureModel object passing the desired number of components in the Gaussian mixture. Then, call the Compute(double[][], double) method passing the learning samples and a desired convergence threshold.

After training has been completed, a new sample can be classified by calling the Classify(double[]) method.
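
For example (method signatures as described in the text):

    // Create a Gaussian mixture model with two components
    GaussianMixtureModel gmm = new GaussianMixtureModel(2);

    // Estimate the model parameters from the learning samples, iterating
    // until the log-likelihood changes less than the given threshold
    double[][] samples =
    {
        new[] { 0.0, 0.1 }, new[] { 0.2, 0.3 },
        new[] { 5.0, 5.1 }, new[] { 5.2, 4.9 },
    };
    gmm.Compute(samples, 0.0001);

    // Classify a new observation into the most likely component
    int c = gmm.Classify(new double[] { 5.1, 5.0 });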

Sample application

The accompanying sample application demonstrates how Gaussian Mixture Models can be used to cluster normally-distributed samples of data. By clicking the “Create random data” button, a given number of Normal distributions will be created in the two-dimensional space using random mean and covariance matrices.

After the data has been created, you can use the “Fit a Gaussian Mixture Model” button to fit a mixture of Gaussians to the data. Each of the Gaussians will receive a random color, and each sample will be colored according to the Gaussian to which it most probably belongs.


Left: a random dataset containing three Gaussian clusters. Right: the clusters as identified by the Gaussian Mixture Model.


Left: a random dataset containing 10 Gaussian clusters. Right: the clusters as identified by the Gaussian Mixture Model.

Conclusion

In this article we saw how Gaussian Mixture Models can be successfully used to create soft clustering boundaries around data. Those soft boundaries are possible because in a mixture model each sample is said to belong to a cluster only with a certain probability.

A main drawback of GMMs is that the number of Gaussian mixture components, as in K-Means, is assumed to be known a priori, so they cannot be considered a totally unsupervised clustering method. To aid in this situation, one could use additional criteria such as the Minimum Message Length (MML) criterion.

Another problem arises with the correct initialization of the mixture components. A poor choice of initial parameters will invariably lead to poor results, as the E-M algorithm converges only to a local optimum. A commonly used solution is initialization by randomly sampling from the data. Another common solution, covered by the implementation presented here, is to use K-Means to perform initialization. However, K-Means itself may also converge to a poor solution and negatively impact the mixture model fitting.

References

  • Raja, Y.; Gong, S. Gaussian Mixture Models. Department of Computer Science, Queen Mary and Westfield College, England.
  • Bishop, C. M. (2007), Pattern Recognition and Machine Learning (Information Science and Statistics), Springer.
  • Wikipedia contributors, “Normal distribution,” Wikipedia, The Free Encyclopedia, http://en.wikipedia.org/w/index.php?title=Normal_distribution (accessed October 18, 2010).
  • Wikipedia contributors, “Probability density function,” Wikipedia, The Free Encyclopedia, http://en.wikipedia.org/w/index.php?title=Probability_density_function (accessed October 18, 2010).
  • Wikipedia contributors, “Mixture density,” Wikipedia, The Free Encyclopedia, http://en.wikipedia.org/w/index.php?title=Mixture_density (accessed October 18, 2010).

See also

K-Means clustering in C#


K-Means is one of the most popular, classic and simple approaches to clustering. Clustering is a method of unsupervised learning, and a common technique for statistical data analysis used in many fields, including machine learning, data mining, pattern recognition, image analysis and bioinformatics [3].

The code presented here is also part of the Accord.NET Framework. The Accord.NET Framework is a C# framework for developing machine learning, computer vision, computer audition, statistics and math applications in .NET. It is based on the AForge.NET Framework. Please see the starting guide for more details. The latest version of the framework includes the latest version of this code plus many other statistics and machine learning tools.

Contents

  1. Introduction
    1. Lloyd’s K-Means algorithm
  2. Source code
  3. Using the code
  4. Sample application
  5. Conclusion
  6. Acknowledgements
  7. References
  8. See also

Introduction

In statistics and machine learning, k-means clustering is a method of cluster analysis which aims to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean [4]. In its most common form, the algorithm is an iterative greedy algorithm which converges to a local optimum after a certain number of iterations.


Illustration of the K-Means algorithm.

The algorithm works by first selecting k locations at random to be the initial centroids for the clusters. Each observation is then assigned to the cluster which has the nearest centroid, and the centroids are recalculated as the means of the observations assigned to them. The algorithm then repeats this process until the cluster centroids no longer change, or until the change is less than a given threshold.

There are other refinements and extensions of the algorithm. The version depicted above is its most common form, also referred to as Lloyd’s algorithm.

Lloyd’s K-Means algorithm

  1. Place K points into the space represented by the objects that are being clustered. These points represent initial group centroids.
  2. Assign each object to the group that has the closest centroid.
  3. When all objects have been assigned, recalculate the positions of the K centroids.
  4. Repeat Steps 2 and 3 until the centroids no longer move. This produces a separation of the objects into groups from which the metric to be minimized can be calculated.
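
For reference, a plain, framework-independent sketch of these steps could be written as follows:

    using System;
    using System.Linq;

    static class LloydSketch
    {
        // Runs Lloyd's algorithm given the data points and the initial
        // centroids (Step 1), returning the final cluster assignments.
        public static int[] Compute(double[][] points, double[][] centroids, double threshold)
        {
            int[] labels = new int[points.Length];
            while (true)
            {
                // Step 2: assign each point to the group with the closest centroid
                for (int i = 0; i < points.Length; i++)
                    labels[i] = Enumerable.Range(0, centroids.Length)
                        .OrderBy(j => Distance(points[i], centroids[j])).First();

                // Step 3: recalculate each centroid as the mean of its members
                double shift = 0;
                for (int j = 0; j < centroids.Length; j++)
                {
                    double[][] members = points.Where((p, i) => labels[i] == j).ToArray();
                    if (members.Length == 0) continue; // leave empty clusters in place

                    double[] mean = new double[points[0].Length];
                    foreach (double[] p in members)
                        for (int d = 0; d < mean.Length; d++)
                            mean[d] += p[d] / members.Length;

                    shift = Math.Max(shift, Distance(centroids[j], mean));
                    centroids[j] = mean;
                }

                // Step 4: repeat until the centroids no longer move
                if (shift < threshold) return labels;
            }
        }

        // Squared Euclidean distance between two points
        static double Distance(double[] a, double[] b) =>
            a.Zip(b, (x, y) => (x - y) * (x - y)).Sum();
    }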

 

Source code

The source code has been implemented using Accord.NET and is now part of the framework. In the current version (2.1.2), the following classes related to K-Means are contained inside the Accord.MachineLearning namespace. The source code available on this page contains only the parts of the framework actually needed to support the algorithm.

Class diagram for the K-Means algorithm.

The KMeans class is the main class representing the K-Means algorithm. The algorithm itself is implemented in the Compute(double[][] data, double threshold) method, which accepts a set of observations and a convergence threshold to determine when the method should stop.

 

The implementation is quite straightforward and does not use additional techniques to avoid convergence problems. More refined techniques may be added to the implementation in the future, so please make sure to download the latest version of Accord.NET Framework for the most up-to-date revision of the code.

Using the code

To use the code, create a new instance of the KMeans class passing the desired number of clusters to its constructor. Additionally, you may also pass a distance function to be used as a distance metric during clustering. The default is to use the squared Euclidean distance.
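
A minimal example; the constructor and Compute signatures follow the description above, while the exact distance-delegate overload may vary between versions:

    // Create a new K-Means algorithm with 5 clusters; a custom distance
    // function could be passed to another constructor overload instead
    // of the default squared Euclidean distance.
    KMeans kmeans = new KMeans(5);

    // Compute the algorithm until the centroids change less than the
    // threshold, obtaining one cluster label for each observation
    double[][] data = { /* the observations */ };
    int[] labels = kmeans.Compute(data, 1e-5);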

 

Sample application

The k-means clustering algorithm is commonly used in computer vision as a form of image segmentation. The results of the segmentation are often used to aid border detection and object recognition. The sample application performs image segmentation using the standard squared Euclidean distance over RGB pixel color space. There are, however, better distance metrics to be used for image segmentation, such as weighted distances and other color spaces, which will not be addressed in this example.


Original image (from Ossi Petruska Flickr page*).

To perform image segmentation, we will first translate our image into an array of pixel values. The single image will be read, pixel by pixel, into a jagged array where each element is a double array of length 3. Each element of those double arrays will contain one of the three RGB values scaled to the interval [-1,1].

After we perform clustering on this array of pixel values, each pixel will have an associated cluster label. Each of these values will then be replaced by its corresponding cluster centroid. In the application, this procedure runs when one clicks the Run button.
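
The translation step can be sketched as below. This is an illustrative version: GetPixel is used for clarity, although locked-bits access would be much faster.

    using System.Drawing;

    // Reads an image into a jagged array of RGB values scaled to [-1, +1]
    static double[][] ToDoubleArray(Bitmap image)
    {
        double[][] pixels = new double[image.Width * image.Height][];
        for (int y = 0, i = 0; y < image.Height; y++)
            for (int x = 0; x < image.Width; x++, i++)
            {
                Color c = image.GetPixel(x, y);
                pixels[i] = new double[]
                {
                    c.R / 127.5 - 1.0, // scale 0..255 into -1..+1
                    c.G / 127.5 - 1.0,
                    c.B / 127.5 - 1.0,
                };
            }
        return pixels;
    }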

After segmentation, the following resulting images can be obtained:


Same image after K-Means clustering with k = 5.


Image after K-Means clustering with k = 10.

 

* The sample image used above has been licensed by Ossi Petruska in a Creative Commons Attribution-NonCommercial-ShareAlike 2.0 Generic license.

Conclusion

K-Means is a very simple and popular approach to clustering. The implementation presented here is the same implementation used in Accord.NET. As can be seen, the method can be easily extended with custom distance functions through delegates or lambda expressions, and can be used in different contexts, such as image segmentation, without further modifications. As a suggestion for improvement, the method can be further sped up by using the triangle inequality, as suggested in the paper "Using the triangle inequality to accelerate k-means" by Charles Elkan.

In the next article, we will see how we can use K-Means to initialize the Expectation-Maximization algorithm for estimating Gaussian Mixture densities in Gaussian Mixture Models. Those articles will form the basis for Continuous density Hidden Markov Models.

Acknowledgements

To Antonino Porcino, who provided the first version of the code and valuable information about many other methods and algorithms.

References

See also

Handwritten Digits Recognition with C# SVMs in Accord.NET


I’ve posted a new article on CodeProject, entitled “Handwriting Recognition Revisited: Kernel Support Vector Machines”. It is a continuation of a previous article on handwritten digit recognition, but this time using SVMs instead of KDA.


The code uses the SVM library in the Accord.NET Framework. The framework, which runs on .NET and is written mostly in C#, supports standard or multiclass support vector machines for either classification or regression, with more than 20 kernel functions available to choose from.

In the article, the same dataset and the same amount of training and testing samples have been used as in the previous Kernel Discriminant Analysis article, for comparison purposes. The experimental results show that SVMs can outperform KDA both in terms of efficiency and accuracy, requiring much less processing time and memory while still producing robust results.

Matrix manipulation using Accord.NET


Matrix manipulation in Accord.NET is very straightforward: just add a new using directive on top of your class to have (literally) about a hundred new extension methods that operate directly on .NET multi-dimensional arrays.



Introduction



Accord.NET uses a somewhat different approach for matrix manipulation in contrast to other libraries. By using C# 3.0 extension methods, Accord adds several of the standard methods you would expect from a matrix library, such as linear system solving, matrix algebra and numerical decompositions, directly to the standard double[,] (or, more generally, T[,]) matrices of the framework.

This approach offers some advantages, since it avoids mismatches when one is using multiple libraries, each with their own and often incompatible matrix implementations. Extending multi-dimensional arrays makes the use of matrices much more natural in the .NET world, as no specialized Matrix classes have to be used and no code has to be rewritten just to use a particular Matrix implementation.

 

Please note, however, that most methods implemented by Accord are not equivalent to the heavily optimized versions of more specialized numerical packages, such as BLAS, from a performance standpoint. Their real power comes when prototyping or realizing algorithms into code. Once an algorithm is written, several unit tests can be created to test the correctness of the code. After that, because we have an early working prototype, it will be much easier to perform optimizations as needed. The key point is to have a working version early, which can (if actually necessary) be optimized later.

Using extension methods

The first step is to include a new using directive at the top of your source file.
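
For the matrix extensions, the directive in question imports the Accord.Math namespace:

    using Accord.Math;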



This is all that is necessary after you have referenced the Accord.Math library in your project. Now you can use all of the methods and operations described below. The following list is nowhere near complete, but shows some basics and highlights from the current version of Accord.

Declaring matrices

Using standard .NET declarations

To declare matrices, no special code is required. Just create multi-dimensional arrays as usual.

Using Accord.NET extensions

Additionally, one may wish to use one of the convenience methods of Accord.NET to create specialized matrices such as the identity matrix, multiples of the identity matrix, or diagonal matrices.
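
For example; Matrix.Identity and Matrix.Diagonal reflect my recollection of the Accord.Math convenience methods, and overloads may differ by version:

    // Standard .NET declaration: no special code required
    double[,] A =
    {
        { 1, 2, 3 },
        { 4, 5, 6 },
    };

    // Accord.NET convenience methods; MATLAB equivalents in comments
    double[,] I = Matrix.Identity(3);                        // eye(3)
    double[,] M = Matrix.Identity(3).Multiply(5.0);          // 5*eye(3)
    double[,] D = Matrix.Diagonal(new double[] { 1, 2, 3 }); // diag([1 2 3])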


Using Accord.NET extensions with implicit typing

By using implicit type variable declaration, the code acquires a certain lightweight feeling and gets closer to its MATLAB®/Octave counterpart (MATLAB is a registered trademark of The MathWorks, Inc.).


A much closer alternative will be possible by using Algorithm Environments, an upcoming feature in Accord. When available, this feature will allow construction of mathematical code directly; the code will be based on current Octave syntax. Octave is a high-level language, primarily intended for numerical computations, that is mostly compatible with MATLAB.

Matrix operations

All standard matrix operations such as transpose, inverse, and column and row manipulations are available in the extension methods. In the example below, A is the same standard double[,] matrix declared in the first section of this article. All methods return a new double[,] matrix as a result, leaving the original matrix A untouched.

  • Transpose
  • Inverse
  • Pseudo-Inverse
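
A sketch of these operations; the extension-method names below reflect my recollection of the era's Accord.Math API and should be treated as illustrative:

    double[,] A =
    {
        { 1, 2 },
        { 3, 4 },
    };

    double[,] At = A.Transpose();     // MATLAB: A'
    double[,] Ai = A.Inverse();       // MATLAB: inv(A)
    double[,] Ap = A.PseudoInverse(); // MATLAB: pinv(A)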

Matrix Algebra

All common algebraic operations for matrices are also implemented. Those are the common operations such as addition, subtraction and multiplication. Here, division is actually a shortcut for multiplying by the inverse.

  • Addition
  • Subtraction
  • Multiplication
  • Division

The above also works with vectors and scalars.

  • Multiplying by a scalar
  • Multiplying by a column vector
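
A sketch of the algebraic operations, including the scalar and vector variants (method names again illustrative of the era's API):

    double[,] A = { { 1, 2 }, { 3, 4 } };
    double[,] B = { { 5, 6 }, { 7, 8 } };

    double[,] sum        = A.Add(B);      // MATLAB: A + B
    double[,] difference = A.Subtract(B); // MATLAB: A - B
    double[,] product    = A.Multiply(B); // MATLAB: A * B
    double[,] division   = A.Divide(B);   // MATLAB: A / B, i.e. A * inv(B)

    double[,] scaled = A.Multiply(2.0);                   // MATLAB: 2 * A
    double[]  column = A.Multiply(new double[] { 1, 2 }); // matrix times column vector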

Special element-wise operations

Element-wise operations are operations performed on each element of the matrix or vector. In MATLAB, they are sometimes known as the dot operations, since they are usually denoted by prefixing a dot to the common operators.

  • Multiplication
  • Division
  • Power
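
Sketched below, with the usual caveat that the exact member names may differ between versions:

    double[,] A = { { 1, 2 }, { 3, 4 } };
    double[,] B = { { 5, 6 }, { 7, 8 } };

    double[,] m = A.ElementwiseMultiply(B); // MATLAB: A .* B
    double[,] d = A.ElementwiseDivide(B);   // MATLAB: A ./ B
    double[,] p = A.ElementwisePower(2);    // MATLAB: A .^ 2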

Vector operations

Accord.NET can also perform many vector operations. Among them are the many flavors of products between vectors, such as the inner, the outer and the Cartesian products.

  • Inner product (a.k.a. the scalar product)
  • Outer product (a.k.a. the matrix product), in MATLAB: w = u'*v
  • Cartesian product
  • Euclidean norm
  • Sum
  • Product
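
An illustrative sketch; member names such as InnerProduct, OuterProduct and CartesianProduct are given from memory and may differ between versions:

    using Accord.Math;

    double[] u = { 1, 2, 3 };
    double[] v = { 4, 5, 6 };

    double inner    = u.InnerProduct(v);     // scalar product
    double[,] outer = u.OuterProduct(v);     // MATLAB: w = u'*v
    double[][] cart = u.CartesianProduct(v); // all pairs (u_i, v_j)
    double norm     = Norm.Euclidean(u);     // MATLAB: norm(u)
    double sum      = u.Sum();               // MATLAB: sum(u)
    double product  = u.Product();           // MATLAB: prod(u)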

Matrix characteristics

Some common matrix characteristics, such as the determinant and trace, are readily available.

  • Determinant
  • Trace

Other characteristics

Other available characteristics are the summation and product of vector and matrix elements.

  • Sum vector
  • Sum of elements
  • Sum along columns
  • Sum along rows
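
Sketched below for both of the lists above; the dimension convention in Sum is an assumption and may differ in the actual API:

    double[,] A = { { 1, 2 }, { 3, 4 } };

    double determinant = A.Determinant(); // MATLAB: det(A)
    double trace       = A.Trace();       // MATLAB: trace(A)

    double[] columnSums = A.Sum(0);       // sum along columns, MATLAB: sum(A)
    double[] rowSums    = A.Sum(1);       // sum along rows,    MATLAB: sum(A, 2)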

Linear Algebra

Linear algebra is certainly one of the most important fields of mathematics, if not the most important one. Accord includes many methods for numerical linear algebra, such as matrix inversion and matrix decompositions. Most of them were originally based on JAMA and MAPACK, but today Accord has some additions from routines translated from EISPACK (mainly the Generalized Eigenvalue Decomposition, which is currently absent from JAMA).

  • Solve a linear system: x = A.Solve(B) in Accord.NET; x = A \ B in MATLAB®

Eigenvalue Decomposition

  • Standard decomposition
  • Generalized decomposition

Singular Value Decomposition

  • Economy decomposition

QR Decomposition

  • Standard decomposition

Cholesky Decomposition

  • Standard decomposition

LU Decomposition

  • Standard decomposition
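
Illustrative usage of the decomposition classes; they live in the Accord.Math.Decompositions namespace, and constructor options vary by version:

    using Accord.Math;
    using Accord.Math.Decompositions;

    double[,] A = { { 4, 2 }, { 2, 3 } };
    double[,] B = { { 1, 0 }, { 0, 2 } };

    var lu   = new LuDecomposition(A);
    var qr   = new QrDecomposition(A);
    var chol = new CholeskyDecomposition(A);
    var evd  = new EigenvalueDecomposition(A);
    var gevd = new GeneralizedEigenvalueDecomposition(A, B);
    var svd  = new SingularValueDecomposition(A);

    // Solving the linear system A x = B, as in the item above
    double[,] x = A.Solve(B); // MATLAB: x = A \ B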

Special operators

There are some other useful operators available in Accord.NET. There are facilities to create index vectors (very common in MATLAB for accessing sub-portions of a matrix), select elements, find elements matching a certain criterion, and so on.

  • Create a vector of indices, in MATLAB: idx = 1:9
  • Selecting elements, in MATLAB: B = A(idx)
  • Finding elements matching a certain criterion (for example, finding x ∈ v such that x > 2), in MATLAB: v = [5 2 2 7 1 0]; idx = find(v > 2)
  • Reshaping a vector to a matrix, in MATLAB: m = [1 2 3 4]; reshape(m, 2, 2)
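
Possible Accord.NET equivalents are sketched below; members such as Indices, Submatrix, Find and Reshape are given from memory and should be treated as assumptions:

    using Accord.Math;

    int[] idx = Matrix.Indices(0, 3);  // like MATLAB's idx = 1:3, but zero-based

    double[] v = { 5, 2, 2, 7, 1, 0 };
    double[] b = v.Submatrix(idx);     // selecting elements, B = v(idx)

    int[] found = v.Find(x => x > 2);  // MATLAB: find(v > 2)

    double[] m = { 1, 2, 3, 4 };
    double[,] r = m.Reshape(2, 2);     // MATLAB: reshape(m, 2, 2)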

 

More than just matrices

Accord.NET also offers many other standard mathematical functions, such as the Gamma function, log-Gamma, digamma, Bessel functions, incomplete Beta integrals, and so on.



For a complete listing of the framework features, please check the project page at Origo. In particular, don’t forget to check out the class diagram for the Accord.Math namespace.

Cheers!

 

Legal notice: MATLAB is a registered trademark of The MathWorks, Inc. All
other trademarks are the property of their respective owners.

Automatic Image Stitching with Accord.NET


I have just posted a new article on CodeProject, entitled Automatic Image Stitching with Accord.NET. It is a demonstration of automatic image stitching by interest point matching using the Accord and AForge.NET Frameworks.

 

Automatic Image Stitching in C# using Accord.NET Framework

 

The method used is pretty classic in the computer vision literature. There are many other variations of the method, each with their own advantages and disadvantages, but most of them are built around the same ideas: feature detection, matching and blending. This is one of the most straightforward and freely available implementations, since some of the more sophisticated methods are patented.

New additions to Accord.NET: Computer Vision namespace, Camshift and Viola-Jones Detector


The next version of the Accord.NET Framework will feature two important additions, along with a new namespace to accommodate them: the Camshift object tracker and the Viola-Jones object detector. Both will be located inside the new Accord.Vision namespace for computer vision algorithms.

 

Accord.NET Camshift Object Tracker.

Camshift object tracker

Accord.NET Viola-Jones method for face detection.

Viola-Jones object detector

Other cool additions include the availability of Multi-class Kernel Support Vector Machines (with support for parallelized learning algorithms), generic Cross-validation classes for model evaluation and generic Grid-search classes for model parameter tuning.

The Linear and Non-linear Kernel Discriminant Analysis implementations have been updated to use the Generalized Eigenvalue Decomposition instead of an Eigenvalue Decomposition preceded by a matrix inversion. This has cut execution time nearly in half. Other optimizations have also been made to some radial basis kernel functions, although they are not very noticeable.

The next version of the Accord.NET Framework will be labeled version 2.1.0. There are some interface changes that may break compatibility with older versions. Also, the minimum required version of the AForge.NET Framework has been raised to 2.1.3. You may have to update your assemblies if you wish to upgrade an existing project.

Generalized Eigenvalue Decomposition in C#


For quite some time I have been trying to find a Java or C# implementation of the Generalized Eigenvalue Decomposition. I just wanted something simple that didn’t depend on any external libraries such as BLAS or LAPACK. As I could not find any simple and understandable implementation of this decomposition, I decided to implement one myself.

Introduction

As Wikipedia says, a generalized eigenvalue problem is the problem of finding a vector v that obeys

$A\mathbf{v} = \lambda B\mathbf{v}$

where A and B are matrices. If v obeys this equation, with some λ, then we call v the generalized eigenvector of A and B, and λ is called the generalized eigenvalue of A and B which corresponds to the generalized eigenvector v. The possible values of λ must obey the following equation

$\det(A - \lambda B) = 0.$

An interesting feature of the generalized eigenvalue decomposition is that it finds the eigenvectors of the matrix B⁻¹A even if B is singular and does not have an inverse. If B is nonsingular, the problem could be solved by reducing it to a standard eigenvalue problem of the form B⁻¹Ax = λx. However, because B can be singular, an alternative algorithm, called the QZ method, is necessary.

Solving the problem using the QZ method is equivalent to computing the eigendecomposition of B⁻¹A without the need to invert B, which could be impossible or ill-conditioned if B is singular or near-singular. It is also computationally simpler, as it saves the time and memory needed for inverting and storing the inverse of B.

EISPACK

EISPACK is a software library for numerical computation of eigenvalues and eigenvectors of matrices, written in FORTRAN. It was originally written around 1972–1973, based heavily on algorithms implemented in ALGOL. Because those algorithms are much more human-readable than their LAPACK counterparts, I decided to port the related functions from EISPACK.

The functions which implement the Generalized Eigenvalue Decomposition in EISPACK are called QZHES, QZIT, QZVAL and QZVEC. As their name implies, they use the QZ method for finding the generalized eigenvalues of a matrix pair (A,B).

Source code

The source code is available in the download link in the upper part of this article. It can also be found in the latest versions of the Accord.NET Framework. The algorithm presented in the code is equivalent to the eig(A,B) command of MATLAB.

Because the code has been ported from FORTRAN, many sections of the algorithms have stayed as they appear in their original functions. Please be aware that it may contain lots of labels and gotos, as most structured programming features we take for granted today were not commonplace at the time.

Using the code

Using the code is much simpler than dealing with LAPACK functions. Because the functions have been implemented directly in C#, the code is somewhat more human-readable, and has also been wrapped in the same nice object-oriented approach as the other matrix decompositions that can be found in MAPACK and JAMA.

Using the given sources, the MATLAB command [V, D] = eig(A, B) becomes equivalent to the C# code sketched below, in the sense that both satisfy the identity A*V = B*V*D.
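
A minimal sketch; the GeneralizedEigenvalueDecomposition class is the one described in this article, while the exact property names may vary between framework versions:

    using Accord.Math.Decompositions;

    // Equivalent of MATLAB's [V, D] = eig(A, B)
    double[,] A = { { 1, 2 }, { 3, 4 } };
    double[,] B = { { 5, 6 }, { 7, 8 } };

    var gevd = new GeneralizedEigenvalueDecomposition(A, B);

    double[,] V = gevd.Eigenvectors;   // generalized eigenvectors, one per column
    double[,] D = gevd.DiagonalMatrix; // generalized eigenvalues on the diagonal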

References

RANdom Sample Consensus (RANSAC) in C#

RANSAC is an iterative method to build robust estimates for parameters of a mathematical model from a set of observed data which is known to contain outliers. The RANSAC algorithm is often used in computer vision, e.g., to simultaneously solve the correspondence problem and estimate the fundamental matrix related to a pair of stereo cameras.

The code presented here is part of the Accord.NET Framework. The Accord.NET Framework is a framework for developing machine learning, computer vision, computer audition, statistics and math applications in .NET. To use the framework in your projects, install it by typing Install-Package Accord.MachineLearning in your IDE’s NuGet package manager.

Introduction

RANSAC is an abbreviation for “RANdom SAmple Consensus”. It is an iterative method to estimate parameters of a mathematical model from a set of observed data which may contain outliers. It is a non-deterministic algorithm in the sense that it produces a reasonable result only with a certain probability, with this probability increasing as more iterations are allowed. The algorithm was first published by Fischler and Bolles in 1981.

The basic assumption is that the data consists of “inliers”, i.e., data whose distribution can be explained by some mathematical model, and “outliers” which are data that do not fit the model. Outliers could be considered points which come from noise, erroneous measurements or simply incorrect data. RANSAC also assumes that, given a set of inliers, there exists a procedure which can estimate the parameters of a model that optimally explains or fits this data.

Example: Fitting a simple linear regression

We can use RANSAC to robustly fit a linear regression model using noisy data. Consider the example below, in which we have a cloud of points that seems to belong to a line. These are the inliers of the data. The other points, which can be seen as measurement errors or extreme noise values, are points expected to be considered outliers.


Linear structure contained in noisy data.

RANSAC is able to automatically distinguish the inliers from the outliers through the evaluation of the linear regression model. To do so, it randomly selects subsets from the data and attempts to fit linear regression models using them. The model which best explains most of the data will then be returned as the most probably correct model fit.

The image below shows the result of fitting a linear regression directly (as shown by the red line) and using RANSAC (as shown by the blue line). We can see that the red line represents the data structure poorly, because it considers all points in order to fit the regression model. The blue line seems to be a much better representation of the linear relationship hidden inside the overall noisy data.


Hidden linear structure inside noisy data. The red line shows the fitting of a linear regression model directly considering all data points. The blue line shows the same result using RANSAC.

Source code

The code below implements RANSAC using a generic approach. Models are considered to be of the reference type TModel, and the type of data manipulated by this model is considered to be of the type TData. This approach allows for the creation of a general-purpose RANSAC algorithm which can be used in very different contexts, be it the fitting of linear regression models or the estimation of homography matrices from pairs of points in different images.

This code is available in the class RANSAC of the Accord.NET Framework (source).

Besides the generic parameters, the class utilizes three delegate functions during execution.

  • The Fitting function, which should accept a subset of the data and use it to fit a model of the chosen type, which should be returned by the function;
  • The Degenerate function, which should check if a subset of the training data is already known to result in a poor model, to avoid unnecessary computations; and
  • The Distance function, which should accept a model and a subset of the training data to compute the distance between the model's predictions and the expected values for the given points. It should return the indices of only those points whose predicted and expected values are within a given tolerance threshold of each other.

Using the code

In the following example, we will fit a simple linear regression of the form x→y using RANSAC. The first step is to create a RANSAC algorithm passing the generic type parameters of the model to be built, i.e. SimpleLinearRegression, and of the data to be fitted, i.e. a double array.

In this case we will be using a double array because the first position will hold the values for the input variable x, and the second position will hold the values for the output variable y. If you are already using .NET 4, it is possible to use the Tuple type instead.

After the creation of the RANSAC algorithm, we should set the delegate functions which will tell RANSAC how to fit a model, how to tell if a set of samples is degenerate and how to check for inliers in data.

Finally, all we have to do is call the Compute method passing the data. The best model found will be returned by the function, while the set of inlier indices for this model will be returned as an out parameter.
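
Putting the pieces together, a sketch of the whole procedure might look like this; the Threshold property and the delegate names are assumed from the description above and may differ slightly in the actual class:

    using System;
    using System.Linq;
    using Accord.MachineLearning;
    using Accord.Statistics.Models.Regression.Linear;

    double[][] data =
    {
        new[] { 0.0, 0.1 }, new[] { 1.0, 1.1 }, new[] { 2.0, 1.9 },
        new[] { 3.0, 3.2 }, new[] { 4.0, 12.0 }, // the last point is an outlier
    };

    var ransac = new RANSAC<SimpleLinearRegression>(2); // 2 points define a line
    ransac.Threshold = 0.5; // tolerance for inliers (assumed property name)

    // How to fit a model to a subset of the data:
    ransac.Fitting = delegate(int[] sample)
    {
        double[] x = sample.Select(i => data[i][0]).ToArray();
        double[] y = sample.Select(i => data[i][1]).ToArray();

        var regression = new SimpleLinearRegression();
        regression.Regress(x, y);
        return regression;
    };

    // How to detect a degenerate subset (here, two identical x values):
    ransac.Degenerate = delegate(int[] sample)
    {
        return data[sample[0]][0] == data[sample[1]][0];
    };

    // Which points agree with a candidate model, within the tolerance:
    ransac.Distances = delegate(SimpleLinearRegression r, double threshold)
    {
        return Enumerable.Range(0, data.Length)
            .Where(i => Math.Abs(r.Compute(data[i][0]) - data[i][1]) < threshold)
            .ToArray();
    };

    // Run the algorithm: the best model found is returned, and the
    // indices of its inliers come back through the out parameter.
    int[] inliers;
    SimpleLinearRegression best = ransac.Compute(data.Length, out inliers);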

Sample application

The accompanying sample application demonstrates the fitting of the simple linear regression model with and without using RANSAC. The application accepts Excel worksheets containing the independent variable values in the first column and the dependent variable values in the second column.


References

Accord.NET Framework – An extension to AForge.NET

Today I have just released the first version of Accord.NET. The Accord.NET Framework is a C# framework I have developed over the years while working on several areas of artificial intelligence, machine learning and statistics.

The Accord.NET Framework extends the excellent AForge.NET Framework with new tools and libraries. In order to use Accord.NET, you must have AForge.NET already installed. The first version number of Accord.NET will be used to indicate its compatibility status with AForge.NET versions; thus, the first version starts at 2.0.0.

 

The framework comprises a set of libraries and sample applications which demonstrate their features:

  • Accord.Statistics – library with statistical analysis and other tools;
  • Accord.Imaging – extension to the AForge.NET Imaging library with new filters and routines;
  • Accord.Neuro – extension to the AForge.NET Neuro library with other learning algorithms;
  • Accord.MachineLearning – extension to AForge’s machine learning library with Support Vector Machines;
  • Accord.Audio – experimental library with filters and audio processing routines.

 

The framework has just been released, so expect bugs and unpolished/unfinished areas. The code is released under the LGPL license. For additional help, issues, and discussions, please refer to the recently created forums.