Kernel Discriminant Analysis in C#


Kernel (Fisher) discriminant analysis (kernel FDA) is a non-linear generalization of linear discriminant analysis (LDA) using techniques of kernel methods. Using a kernel, the originally linear operations of LDA are done in a reproducing kernel Hilbert space with a non-linear mapping.

This code has also been incorporated into the Accord.NET Framework, which includes the latest version of this code plus many other statistics and machine learning tools.

Analysis Overview

KDA is an extension of LDA to non-linear distributions, just as KPCA is to PCA. For information about LDA, please check the previous post, Linear Discriminant Analysis in C#. The algorithm presented here is a multi-class generalization of the original algorithm by Mika et al. in Fisher discriminant analysis with kernels (1999).

The objective of KDA is to find a transformation maximizing the between-class variance and minimizing the within-class variance. It can be shown that, with kernels, the original objective function can be expressed as:

J(\alpha) = \frac{\alpha^T S_B \alpha}{\alpha^T S_W \alpha}

with:

S_B = \sum_{c=1}^{C} (\mu_c - \bar{x}) (\mu_c - \bar{x})^T

S_W = \sum_{c=1}^{C} K_c (I - 1_{l_c}) K_c^T

where K_c is the kernel matrix for class c, μ_c is the vector of column means of K_c, x̄ is the vector of column means of the full kernel matrix, I is the identity matrix, l_c is the number of samples in class c, and 1_{l_c} is an l_c × l_c matrix with all entries equal to 1/l_c.
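As a concrete illustration, the contribution of a single class to S_W can be accumulated directly from the full kernel matrix. The sketch below is hypothetical code rather than the framework's implementation; it assumes K is the full l × l kernel matrix, idx holds the indices of the samples in class c, and Sw is an l × l accumulator initialized to zero. It uses the identity K_c (I − 1_{l_c}) K_c^T = K_c K_c^T − l_c μ_c μ_c^T:

    // Hypothetical sketch: accumulate the contribution of class c to Sw.
    int l = K.GetLength(0);
    int lc = idx.Length;

    // uc: mean of the columns of Kc (an l-vector).
    var uc = new double[l];
    for (int i = 0; i < l; i++)
    {
        foreach (int m in idx)
            uc[i] += K[i, m];
        uc[i] /= lc;
    }

    // Sw += Kc Kc^T - lc * uc uc^T, which equals Kc (I - 1_lc) Kc^T.
    for (int i = 0; i < l; i++)
        for (int j = 0; j < l; j++)
        {
            double kk = 0;
            foreach (int m in idx)
                kk += K[i, m] * K[j, m];
            Sw[i, j] += kk - lc * uc[i] * uc[j];
        }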

The Kernel Trick

The kernel trick can transform any algorithm that depends solely on dot products between vectors. Wherever a dot product is used, it is replaced with the kernel function:

K(x, y) = \langle \varphi(x), \varphi(y) \rangle

Thus, a linear algorithm can easily be transformed into a non-linear one: the computation is carried out in a higher-dimensional feature space without ever explicitly mapping the input points into that space. For more information, and a cool video showing how the kernel trick works, please see the introductory text in the previous post, Kernel Principal Component Analysis in C#.
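In code, a kernel is simply a function of two vectors. A minimal sketch (with hypothetical interface and class names, not necessarily the framework's exact types) could look like this:

    using System;

    // A kernel function replaces the dot product <x, y> wherever it appears.
    public interface IKernel
    {
        double Function(double[] x, double[] y);
    }

    // Gaussian kernel: k(x, y) = exp(-||x - y||^2 / (2 sigma^2)).
    public class GaussianKernel : IKernel
    {
        private readonly double sigma;

        public GaussianKernel(double sigma)
        {
            this.sigma = sigma;
        }

        public double Function(double[] x, double[] y)
        {
            double norm = 0;
            for (int i = 0; i < x.Length; i++)
            {
                double d = x[i] - y[i];
                norm += d * d;
            }
            return Math.Exp(-norm / (2 * sigma * sigma));
        }
    }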

Adding regularization

As the problem proposed above is ill-posed (we are estimating l-dimensional covariance structures from l samples), the S_W matrix may become singular, and computing the objective function may become problematic. To avoid singularities, we can add a multiple of the identity matrix to S_W:

S_W := S_W + \lambda I

Adding λ improves numerical stability: for sufficiently large λ, the matrix S_W becomes positive definite. The λ term can also be seen as a regularization term, favoring solutions with small expansion coefficients [Mika et al., 1999].
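In code, this amounts to nothing more than adding λ to the main diagonal of S_W before solving the eigenproblem; a minimal sketch:

    // Regularization sketch: Sw := Sw + lambda * I.
    // Only the main diagonal needs to be touched.
    for (int i = 0; i < Sw.GetLength(0); i++)
        Sw[i, i] += lambda;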

 

Source Code

The code below implements a generalization of the original Kernel Discriminant Analysis algorithm by Mika et al. in Fisher discriminant analysis with kernels (1999).
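In outline, the computation proceeds as follows. This is a simplified sketch rather than the full framework source: the IKernel interface is the hypothetical one sketched earlier, BuildScatterMatrices stands in for the S_B/S_W construction shown above, and the generalized eigenvalue solver is assumed to come from a linear algebra library (the framework itself uses a generalized eigenvalue decomposition, as noted in the comments below).

    // Simplified sketch of the KDA computation (not the actual framework source).
    // coefficients, trainingSet and kernel are fields kept for projecting
    // new data later.
    public void Compute(double[][] x, int[] labels, IKernel kernel, double lambda)
    {
        int l = x.Length;

        // 1. Kernel matrix over the training set: K[i, j] = k(x_i, x_j).
        var K = new double[l, l];
        for (int i = 0; i < l; i++)
            for (int j = 0; j < l; j++)
                K[i, j] = kernel.Function(x[i], x[j]);

        // 2. Between-class and within-class scatter matrices, following the
        //    formulas above (BuildScatterMatrices is a placeholder for that
        //    construction; the single-class S_W contribution sketched earlier
        //    extends directly).
        double[,] Sb, Sw;
        BuildScatterMatrices(K, labels, out Sb, out Sw);

        // 3. Regularize: Sw := Sw + lambda * I, to avoid singularities.
        for (int i = 0; i < l; i++)
            Sw[i, i] += lambda;

        // 4. Solve the generalized eigenproblem Sb * a = eig * Sw * a. The
        //    eigenvectors with the largest eigenvalues span the discriminant
        //    space (sort them by decreasing eigenvalue if the solver does not).
        var evd = new GeneralizedEigenvalueDecomposition(Sb, Sw);
        this.coefficients = evd.Eigenvectors;
        this.trainingSet = x;
        this.kernel = kernel;
    }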

After completing the analysis, we may wish to project new data into the discriminant space. The following code projects a data matrix into this space using the basis found during the analysis.
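A minimal sketch of such a projection, assuming the fields stored by the Compute sketch above: each projected coordinate is a kernel expansion over the training points, y_j(x) = \sum_i a_{ij} \, k(x_i, x).

    // Sketch: project a data matrix into the discriminant space. Each output
    // coordinate is a kernel expansion over the training set:
    //   result[p, j] = sum_i coefficients[i, j] * k(trainingSet[i], data[p])
    public double[,] Transform(double[][] data, int discriminants)
    {
        int n = data.Length;
        int l = trainingSet.Length;
        var result = new double[n, discriminants];

        for (int p = 0; p < n; p++)
            for (int j = 0; j < discriminants; j++)
            {
                double sum = 0;
                for (int i = 0; i < l; i++)
                    sum += coefficients[i, j] * kernel.Function(trainingSet[i], data[p]);
                result[p, j] = sum;
            }

        return result;
    }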

Sample Application

The sample application shows how Kernel Discriminant Analysis works. The application can load Excel worksheets containing tables in the same format as the included sample worksheet.

[Screenshots: Kernel Discriminant Analysis (KDA) sample application]

The first image shows the Wikipedia example for Kernel Principal Component Analysis. The second image shows the analysis carried out with a Gaussian kernel with sigma = 3.6. The third image shows the kernel-space equivalents of the between-class and within-class scatter matrices. The last image shows the subset spanned by each class and its kernel scatter matrix.

[Screenshots: projection onto the first two discriminant dimensions (left) and feature-space mapping (right)]

The picture on the left shows the projection of the original data set onto the first two discriminant dimensions. Note that just one dimension would already be sufficient to linearly separate the classes. On the right, the picture shows how a point in input space (given by the mouse cursor) gets mapped into feature space.

Linear Discriminant Analysis as a special case

[Screenshots: Linear Discriminant Analysis obtained as a special case of Kernel Discriminant Analysis]

We can also verify that a projection equivalent to Linear Discriminant Analysis is obtained by using a linear kernel in the Kernel Discriminant Analysis, as the sketch below shows.
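A minimal sketch, reusing the hypothetical IKernel interface from earlier: with k(x, y) = ⟨x, y⟩ the feature mapping is the identity, so the analysis reduces to ordinary LDA.

    // Linear kernel: k(x, y) = <x, y>, the plain dot product.
    public class LinearKernel : IKernel
    {
        public double Function(double[] x, double[] y)
        {
            double sum = 0;
            for (int i = 0; i < x.Length; i++)
                sum += x[i] * y[i];
            return sum;
        }
    }

    // Running KDA with this kernel yields a projection equivalent to LDA, e.g.:
    // var kda = new KernelDiscriminantAnalysis(data, labels, new LinearKernel());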

References

Mika, S., Rätsch, G., Weston, J., Schölkopf, B., & Müller, K.-R. (1999). Fisher discriminant analysis with kernels. In Neural Networks for Signal Processing IX (pp. 41–48). IEEE.

12 Comments

  1. This stuff is really cool. Thanks for your efforts.

    As I was testing the source code and also your demo for KDA I got an error while trying to transform.

    “Index was outside the bounds of the array.
    at Accord.Statistics.Analysis.KernelDiscriminantAnalysis.Transform(Double[,] data, Int32 discriminants) in C:\Users\Cesar\Desktop\Accord.NET\Sources\Accord.Statistics\Analysis\KernelDiscriminantAnalysis.cs:line 239”

    In your demo it occurs when I use the Yin Yang example.

    Also, while using the LDA source, a similar error occurs when trying to compute.

    Could you please help me? Am I doing something wrong?

  2. Hi MSA,

    Thanks for letting me know. I had updated the library code but forgot to update the samples.

    To fix it, you can open the file Samples\Statistics\KDA\MainForm.cs and add the line:

    kda.Threshold = 0;

    right below the line 124 where it reads:

    kda = new KernelDiscriminantAnalysis(data, labels, kernel);

    I will update the downloadable sources soon. If you wish you can also download the latest version of the Accord.NET Framework, which contains the most updated version of the code (including the sample applications).

    Thanks again,
    César

  3. Hello,

    I’ve updated the source code. Please re-download it from the same link as before. The newest version also uses the generalized eigenvalue decomposition instead of the standard eigenvalue decomposition, resulting in faster and more accurate results.

    Best regards,
    César

    Hi,

    I have to find a signature in a grayscale image file and remove it.

    I am using C# for image analysis; could you please help me out?

  5. hello,

    Great work, I really appreciate it. But I have a small doubt about the code.
    When I find the discriminant projection for the input data using kda.Result, I get the projection of the input data. But when I try to find the projection of some other data (test data, which I want to compare to the input data projection) using kda.Transform(output_data), I get all values in the resulting array as 0. Can you please explain how to correct this?

  6. Hi Cesar,

    Absolutely great work you’ve done in Accord.NET.

    I’ve been trying to implement LDA & KDA. I’ve gotten LDA to run successfully, but I keep getting an “Argument out of range” error when executing the KDA.Compute() line.

    Here’s my KDA training function:

    private void KDA_Training(ref KernelDiscriminantAnalysis KDA, Double[,] input, int[] output, Double regularisation, Double threshold, Double GaussianSigma)
    {
        IKernel kernel = new Gaussian(GaussianSigma);

        KDA = new KernelDiscriminantAnalysis(input, output, kernel);
        KDA.Threshold = threshold;
        KDA.Regularization = regularisation;
        KDA.Compute();
    }

    Parameters:
    input[,] is [120,24], positive doubles.
    output[] is three-class: {0, 1, 2}.
    regularization = 0.1
    GaussianSigma = 3.6
    threshold = 5

    I can’t seem to figure out what’s going wrong. Also, do you have the source code for your KDA sample application (specifically the KDA object calls) available?

    Thank you very much.

    Jack

  7. Hi Jack,

    First of all, thanks for the feedback!

    About your problem, may I ask which version of the code you are using? Are you using the latest version from the Accord.NET project site, or are you using the standalone sources available in this post? I would highly recommend downloading the latest sources, as they might contain the latest fixes and enhancements.

    In either case, the full sources of the framework and of all sample applications are available at Google Code. In particular, the sources for the KDA sample application are here:

    http://code.google.com/p/accord/source/browse/#svn%2Ftrunk%2FSamples%2FStatistics%2FKDA

    From your snippet it is somewhat hard to see what is wrong. But the output[] array is expected to have the same length as the number of samples (rows) in your input matrix, and each entry of the output vector should contain the class indicator (an integer label, such as 0, 1 or 2).
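    For example, an illustrative three-class setup (with made-up numbers) would look like this:

    // Four samples with two features each (one sample per row)...
    double[,] input =
    {
        { 1.0, 2.0 },  // sample 0
        { 1.1, 1.9 },  // sample 1
        { 5.0, 6.0 },  // sample 2
        { 9.0, 0.5 },  // sample 3
    };

    // ...and one integer class label per sample.
    int[] output = { 0, 0, 1, 2 };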

    If this doesn’t help, you can also send your example data to me so I can take a closer look at what is causing the problem.

    Best regards,
    César

  8. Hi Cesar,

    Thank you for your speedy reply!

    I’ve been running the version linked in this blog post, and have been using an output array, containing int class labels, whose size matches the first dimension of my input array. Apologies for being unclear earlier. Also, I’ve tried both unnormalized and normalized input arrays.

    After switching to the latest v2.2 build that you linked to, I’ve encountered an error in the KSVM section of my code. It seems that MultiClassVectorLearning.Configure and ksvm.Machines[][].Indices existed in past versions of the library, but no longer exist in 2.2.

    Further, after I revert to using Accord.MachineLearning v2.1.1.0 to get around this problem, Visual Studio gives me this error:

    Argument 2: cannot convert from ‘Accord.Statistics.Kernels.IKernel [c:\C-sharp\MachineLearning\Accord DLLs\Accord.Statistics.dll]’ to ‘Accord.Statistics.Kernels.IKernel’

    I’ve tried directly referencing your pre-compiled Accord.Statistics DLL, as well as recompiling your Accord.Statistics library locally and referencing the newly created DLL. Both ways give me the same error.

    Do you know how to fix this?

    Thank you very much.

    Jack

  9. Hi Jack,

    Many, many apologies for the issues. However, the Machines property of the MulticlassSupportVectorMachine still exists. Which error were you getting, exactly? You can also access individual machines by using the class indexer (i.e. ksvm[i,j]) to get each machine responsible for each classification i-vs-j sub-problem.

    The Configure method was part of the MulticlassSupportVectorLearning class, but was renamed to Algorithm. It was part of a change to support additional configuration settings for the learning algorithms and improve the semantics of the property. Sorry about that.

    In either case, I would still recommend you to stay with the latest version. The errors you are getting now are caused by wrong assembly references in the project: you are probably referencing two assemblies from different versions of the library. Most likely, your MachineLearning.dll is from one version and your Statistics.dll is from the other. Please check that all assembly versions match. As a last resort, try removing all Accord.NET references and adding them again, making sure they all come from the same version. You may also need to clean your build to make sure Visual Studio isn’t using any cached intermediate objects.

    For an example on how to setup a multi-class machine, please check this example in the documentation.
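    In sketch form (where inputs and outputs hold your training data; the linked documentation example is the authoritative version), the setup looks something like:

    // Create one machine for a 3-class problem on 24-dimensional inputs.
    var machine = new MulticlassSupportVectorMachine(24, new Gaussian(3.6), 3);

    // Create the learning algorithm for the multi-class machine.
    var teacher = new MulticlassSupportVectorLearning(machine, inputs, outputs);

    // What used to be Configure is now the Algorithm property:
    teacher.Algorithm = (svm, classInputs, classOutputs, i, j) =>
        new SequentialMinimalOptimization(svm, classInputs, classOutputs);

    double error = teacher.Run();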

    And sorry for all the trouble. Please be assured I will assist you in everything needed to get your solution working again. If you wish we can also continue this discussion by email.

    Best regards,
    César
