Linear Discriminant Analysis in C#


Linear discriminant analysis (LDA) is a method used in statistics and machine learning to find the linear combination of features which best characterizes or separates two or more classes of objects or events. The resulting combination may be used as a linear classifier or, more commonly, for dimensionality reduction before later classification.

The code presented here is also part of the Accord.NET Framework. The Accord.NET Framework is a framework for developing machine learning, computer vision, computer audition, statistics and math applications. It is based on the already excellent AForge.NET Framework. Please see the starting guide for more details. The latest version of the framework includes the latest version of this code plus many other statistics and machine learning tools.

Motivations

The goals of LDA are somewhat similar to those of PCA. Unlike LDA, however, PCA is an unsupervised technique and as such does not use the label information of the data, effectively ignoring this often useful information. For instance, consider two cigar-like clusters in 2 dimensions, one cigar having y = c and the other y = –c (with c being an arbitrary constant), as the image below suggests:

[Figure: two parallel cigar-shaped clusters]

This example was adapted from the note on Fisher Linear Discriminant Analysis by Max Welling.

The cigars are positioned in parallel and very closely together, so that the variance in the total data set, ignoring the labels, lies along the direction of the cigars. For classification, this would be a terrible projection, because the classes would get mixed together and any useful information would be destroyed:

[Figure: PCA projections of the cigar data]

A much more useful projection is orthogonal to the cigars, which would perfectly separate the data points:

[Figure: LDA projections of the cigar data]

The first row of images was obtained by performing PCA on the original data. The left image is the PCA projection using the first two components. However, since the analysis reports that the first component accounts for 96% of the information, one could infer that all other components can be safely discarded. The result is the image on the right, clearly not a very useful projection.

The second row of images was obtained by performing LDA on the original data. The left image is the LDA projection using the first two dimensions. However, since the analysis reports that the first dimension accounts for 100% of the information, all other dimensions can be safely discarded. This is indeed true, as the result in the right image shows.


Analysis Overview

Linear Discriminant Analysis is closely related to principal component analysis (PCA) and factor analysis in that all three look for linear combinations of variables which best explain the data. LDA explicitly attempts to model the difference between the classes of data. PCA, on the other hand, does not take into account any difference in class, and factor analysis builds the feature combinations based on differences rather than similarities. Linear Discriminant Analysis considers maximizing the following objective:

J(w) = \frac{w^T S_B w}{w^T S_W w}

where

S_B = \sum_{c=1}^{C} (\mu_c - \bar{x}) (\mu_c - \bar{x})^T

S_W = \sum_{c=1}^{C} \sum_{i \in c} (x_i - \mu_c) (x_i - \mu_c)^T

are the Between-Class Scatter Matrix and the Within-Class Scatter Matrix, respectively. The optimal solution can be found by computing the eigenvalues of S_W^{-1} S_B and taking the eigenvectors corresponding to the largest eigenvalues to form a new basis for the data. These can be conveniently obtained by computing the generalized eigenvalue decomposition of S_B and S_W.
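
Setting the gradient of J(w) to zero shows that each optimal direction w must satisfy the generalized eigenvalue problem (a standard derivation step, stated here for completeness):

S_B w = \lambda S_W w, \quad \lambda = J(w)

so the directions with the largest eigenvalues are the most discriminative ones.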


Source Code

The source code below shows the main algorithm for Linear Discriminant Analysis.
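
The framework's actual implementation is not reproduced here; the listing below is only a minimal, self-contained sketch of the two-class case, in which the single discriminant direction has the closed form w ∝ S_W^{-1}(μ_1 − μ_2). The multi-class case instead takes the leading eigenvectors of S_W^{-1} S_B, as described above. The class and method names (FisherLda, ComputeDirection, Solve) are illustrative and are not part of Accord.NET.

using System;

// Minimal two-class Fisher discriminant sketch (for illustration only).
public static class FisherLda
{
    // Computes the single Fisher discriminant direction for a two-class
    // problem: w is proportional to Sw^-1 * (mean1 - mean2).
    public static double[] ComputeDirection(double[][] class1, double[][] class2)
    {
        int d = class1[0].Length;

        double[] mean1 = Mean(class1);
        double[] mean2 = Mean(class2);

        // Within-class scatter Sw: sum of the per-class scatter matrices.
        double[,] sw = new double[d, d];
        Accumulate(sw, class1, mean1);
        Accumulate(sw, class2, mean2);

        // Right-hand side: difference between the class means.
        double[] diff = new double[d];
        for (int i = 0; i < d; i++)
            diff[i] = mean1[i] - mean2[i];

        // Solve Sw * w = (mean1 - mean2) instead of inverting Sw explicitly.
        return Solve(sw, diff);
    }

    // Sample mean of a set of d-dimensional observations.
    static double[] Mean(double[][] samples)
    {
        int d = samples[0].Length;
        double[] mean = new double[d];
        foreach (double[] x in samples)
            for (int i = 0; i < d; i++)
                mean[i] += x[i] / samples.Length;
        return mean;
    }

    // Adds sum_i (x_i - mean)(x_i - mean)^T to the given scatter matrix.
    static void Accumulate(double[,] scatter, double[][] samples, double[] mean)
    {
        int d = mean.Length;
        foreach (double[] x in samples)
            for (int i = 0; i < d; i++)
                for (int j = 0; j < d; j++)
                    scatter[i, j] += (x[i] - mean[i]) * (x[j] - mean[j]);
    }

    // Solves the linear system A * x = b by Gaussian elimination with
    // partial pivoting (sufficient for this illustration).
    static double[] Solve(double[,] a, double[] b)
    {
        int n = b.Length;
        double[,] m = (double[,])a.Clone();
        double[] x = (double[])b.Clone();

        for (int k = 0; k < n; k++)
        {
            // Bring the largest remaining entry of column k to the diagonal.
            int pivot = k;
            for (int i = k + 1; i < n; i++)
                if (Math.Abs(m[i, k]) > Math.Abs(m[pivot, k]))
                    pivot = i;

            for (int j = 0; j < n; j++)
            {
                double t = m[k, j]; m[k, j] = m[pivot, j]; m[pivot, j] = t;
            }
            double tb = x[k]; x[k] = x[pivot]; x[pivot] = tb;

            // Eliminate the entries below the diagonal.
            for (int i = k + 1; i < n; i++)
            {
                double f = m[i, k] / m[k, k];
                for (int j = k; j < n; j++)
                    m[i, j] -= f * m[k, j];
                x[i] -= f * x[k];
            }
        }

        // Back-substitution.
        for (int i = n - 1; i >= 0; i--)
        {
            for (int j = i + 1; j < n; j++)
                x[i] -= m[i, j] * x[j];
            x[i] /= m[i, i];
        }

        return x;
    }
}

Projecting a sample x onto the resulting direction then amounts to taking the dot product of w with x; thresholding that value separates the two classes.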


Using The Code

Code usage is simple, as it follows the same object model used in the previous PCA example.
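
Since the original usage snippet is not reproduced here, the sketch below shows how the analysis is typically driven, assuming the classic Accord.NET object model (a LinearDiscriminantAnalysis constructor taking the input matrix and the class labels, followed by Compute, Transform and Classify calls). The data values are made up for illustration, and the exact member names may differ between framework versions.

// Illustrative data: two small classes in two dimensions.
double[,] inputs =
{
    {  4,  1 }, {  2,  4 }, {  2,  3 }, {  3,  6 }, {  4,  4 },
    {  9, 10 }, {  6,  8 }, {  9,  5 }, {  8,  7 }, { 10,  8 }
};

// Class label of each row of the input matrix.
int[] output = { 1, 1, 1, 1, 1, 2, 2, 2, 2, 2 };

// Create the analysis from the labeled data and compute it.
var lda = new LinearDiscriminantAnalysis(inputs, output);
lda.Compute();

// Project the original data onto the discriminant space...
double[,] projection = lda.Transform(inputs);

// ...or classify a new observation directly.
int answer = lda.Classify(new double[] { 4, 5 });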


Considerations

Although linear discriminants are useful in many situations, in many practical cases they are not suitable. A nice insight is that LDA can be extended for use in non-linear classification via the kernel trick. Using kernels, the original observations can be effectively mapped into a higher-dimensional non-linear space. Linear classification in this non-linear space is then equivalent to non-linear classification in the original space.
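
As a small illustration of the kernel idea (the complete Kernel Discriminant Analysis is left to the next post, as noted below), a kernel simply replaces the inner product between two observations. A common choice is the Gaussian kernel sketched here; the class name and parameters are illustrative only.

using System;

public static class Kernels
{
    // Gaussian (RBF) kernel: k(x, z) = exp(-||x - z||^2 / (2 * sigma^2)).
    // Substituting this value for the plain dot product implicitly maps
    // the observations into a higher-dimensional feature space.
    public static double Gaussian(double[] x, double[] z, double sigma)
    {
        double squaredDistance = 0;
        for (int i = 0; i < x.Length; i++)
        {
            double d = x[i] - z[i];
            squaredDistance += d * d;
        }
        return Math.Exp(-squaredDistance / (2 * sigma * sigma));
    }
}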

While the code available here already includes Kernel Discriminant Analysis, this is something I'll address in the next post. If you have any suggestions or questions, please leave me a comment.

Finally, as usual, I hope someone finds this information useful.


References:

- Max Welling, Fisher Linear Discriminant Analysis (class notes).

14 Comments

  1. Hi!

    Thanks for the great article. I’ve found this one and the one on PCA very useful to me (I’m writing a paper on face recognition).

    I do have a question though:

    If using LDA / PCA for data compression, how would you go about retrieving the original data? (Obviously with some sort of loss of quality since you are discarding information).

    Thanks
    Mike

  2. Hi Mike,

    In the Accord.NET Framework you can perform PCA reversion by calling the Revert method of the PrincipalComponentAnalysis class.

    The reversion works by multiplying the data by the inverse of the transformation matrix. Since the transformation matrix is an orthogonal matrix of eigenvectors, this is equivalent to multiplying the data by the transpose of the eigenvectors. You may also need to re-scale and re-center the data using the original column means and standard deviations.

    Now that you mention it, I guess I haven't implemented a Revert method for LDA yet. But it should work the same way.

    Best regards,
    César

  3. César,

    How could I use LDA to separate the cars from the pedestrians in a set of images of cars and pedestrians? Specifically, which variables should I use? For example, would the gradient of each pattern serve as input to the algorithm (assuming that pedestrians and cars differ in that respect)?

    Regards,

    Sergio Penedo

  4. Hi

    Just wondering if anyone else is having the same problem as me. At the moment I am trying to make a face recognition program. I am only at the detection stage, but the program can't detect my colleague's face. (He is Zimbabwean.)

    Any suggestions?

  5. Hi Anonymous,

    The classification method has been corrected in the latest release of the framework. However, I have not yet updated the sample application available on this blog post. Sorry about that.

    You can always download the latest version of the framework, with the latest enhancements and bugfixes at the project page.

    Best regards,
    César

  6. Hi

    I am a computer engineering student and I have an LDA face recognition project. I would like to examine your source code, but the source code and sample application download links are not available anymore. Could you please update them or send me the code? Thanks.

    Hi César, from my understanding of Fisher LDA, if there are more dimensions than classes then there should be at most c-1 discriminants for c classes. So would we want to do something like “eigs = eigs.Submatrix(0, Math.Min(Classes.Count - 1, dimensions), indices);” to get the discriminants when we have more dimensions than classes? Let me know what you think.
    Thanks,
    Eric

  8. Hi Eric!

    Yes, it would definitely be possible! But when using the framework, it should also be possible to pass the desired number of discriminants to the Compute method when creating the analysis. So if you wish, you could limit the number of discriminants right at the beginning, and it should hopefully work without having to change the source code. Hope it helps!

    Best regards,
    Cesar
