Partial Least Squares Analysis and Regression in C#

Partial Least Squares Regression (PLS) is a technique that generalizes and combines features from principal component analysis and (multivariate) multiple regression. It has been widely adopted in the fields of chemometrics and the social sciences.

The code presented here is also part of the Accord.NET Framework, a framework for developing machine learning, computer vision, computer audition, statistics and math applications. Please see the starting guide for more details. The latest version of the framework includes the latest version of this code plus many other statistics and machine learning tools.

Introduction

Partial least squares regression (PLS-regression) is a statistical method that is related to principal components regression. The goal of this method is to find a linear regression model by projecting both the predicted variables and the observable variables to new, latent variable spaces. It was developed in the 1960s by Herman Wold to be used in econometrics. However, today it is most commonly used for regression in the field of chemometrics.

In statistics, latent variables (as opposed to observable variables), are variables
that are not directly observed but are rather inferred (through a mathematical model) from other variables that are observed (directly measured). Mathematical models that aim to explain observed variables in terms of latent variables are called latent variable models.

A PLS model will try to find the multidimensional direction in the X space that explains the maximum multidimensional variance direction in the Y space. PLS-regression is particularly suited when the matrix of predictors has more variables than observations, and when there is multicollinearity among the X values. It is interesting to note that standard linear regression would likely fail to produce meaningful, interpretable models in those cases.

Overview

Multivariate Linear Regression in Latent Space

Multiple Linear Regression is a generalization of simple linear regression for multiple inputs. In turn, Multivariate Linear Regression is a generalization of Multiple Linear Regression for multiple outputs. Multivariate linear regression is a general linear regression model which can map an arbitrary-dimension space into another arbitrary-dimension space using only linear relationships. In the context of PLS, it is used to map the latent variable space for the inputs X into the latent variable space for the output variables Y. Those latent variable spaces are spanned by the loading matrices for X and Y, commonly denoted P and Q, respectively.

The goal of PLS algorithms is therefore to find those two matrices. There are mainly two algorithms for doing this: NIPALS and SIMPLS.

Algorithm 1: NIPALS

Here is an exposition of the NIPALS algorithm for finding the loading matrices required for PLS regression. There are, however, many variations of this algorithm which normalize or do not normalize certain vectors.

Algorithm:

• Let X be the mean-centered input matrix,
• Let Y be the mean-centered output matrix,
• Let P be the loadings matrix for X, and let pi denote the i-th column of P;
• Let Q be the loadings matrix for Y, and let qi denote the i-th column of Q;
• Let T be the score matrix for X, and let ti denote the i-th column of T;
• Let U be the score matrix for Y, and let ui denote the i-th column of U;
• Let W be the PLS weight matrix, and let wi denote the i-th column of W; and
• Let B be a diagonal matrix of diagonal coefficients bi

Then:

1. For each factor i to be calculated:
    1. Initially choose ui as the column of Y with the largest sum of squares
    2. While (ti has not converged to a desired precision)
        1. wi ∝ X'ui (estimate X weights)
        2. ti ∝ Xwi (estimate X factor scores)
        3. qi ∝ Y'ti (estimate Y weights)
        4. ui = Yqi (estimate Y scores)
    3. bi = ti'ui (compute prediction coefficient b)
    4. pi = X'ti (compute factor loadings for X)
    5. X = X – ti pi' (deflate X)
    6. Y = Y – bi ti qi' (deflate Y)

In other statistical analyses such as PCA, it is often interesting to inspect how much of the variance can be explained by each of the principal component dimensions. The same can also be accomplished for PLS, both for the input (predictor) variables X and the output (predicted) variables Y. For the output variables, the amount of variance explained by each factor can be computed as bi². For the input variables, it can be computed as the sum of the squared elements of the corresponding column of the matrix P, i.e. as Sum(pi²).

Algorithm 2: SIMPLS

SIMPLS is an alternative algorithm for finding the PLS matrices P and Q that has been derived considering the true objective of maximizing the covariance between the latent factors and the output vectors. Both NIPALS and SIMPLS are equivalent when there is just one output variable Y to be regressed, but their answers can differ when Y is multi-dimensional. However, because the construction of the weight vectors used by SIMPLS is based on the empirical variance–covariance matrix of the joint input and output variables, outliers present in the data can adversely impact its performance.

Algorithm:

• Let X be the mean-centered input matrix,
• Let Y be the mean-centered output matrix,
• Let P be the loadings matrix for X, and let pi denote the i-th column of P;
• Let C be the loadings matrix for Y, and let ci denote the i-th column of C;
• Let T be the score matrix for X, and ti denote the i-th column of T;
• Let U be the score matrix for Y, and ui denote the i-th column of U; and
• Let W be the PLS weight matrix, and wi denote the i-th column of W.

Then:

1. Create the cross-product matrix S = X'Y
2. For each factor i to be calculated:
    1. Perform SVD on the cross-product matrix and store the first left singular vector in wi and the first right singular vector multiplied by its singular value in ci.
    2. ti ∝ X*wi (estimate X factor scores; ti is normalized to unit length)
    3. pi = X'*ti (estimate X factor loadings)
    4. ci = ci/norm(ti) (estimate Y weights)
    5. wi = wi/norm(ti) (estimate X weights)
    6. ui = Y*ci (estimate Y scores)
    7. vi = pi (form the basis vector vi)
    8. Make vi orthogonal to the previous basis vectors and normalize it
    9. Make ui orthogonal to the previous scores T
    10. Deflate the cross-product matrix: S = S – vi*(vi'*S)

Source Code

This section contains the realization of the NIPALS and SIMPLS algorithms in C#. The models have been implemented considering an object-oriented structure that is particularly suitable to be data-bound to Windows.Forms (or WPF) controls.

Class Diagram

Class diagram for the Partial Least Squares Analysis.

Performing PLS using NIPALS

Here is the source code for computing PLS using the NIPALS algorithm:
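The framework's C# listing is part of the Accord.NET sources; purely as an illustration of the steps listed earlier, here is a compact NumPy sketch of one common NIPALS variant (the initialization, normalization and deflation choices below are assumptions of this sketch, not necessarily those of the framework's implementation):

```python
import numpy as np

def nipals_pls(X, Y, n_factors, tol=1e-10, max_iter=500):
    # Sketch of one NIPALS variant: w and q are kept normalized,
    # and both X and Y are deflated after each extracted factor.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n, p = X.shape
    m = Y.shape[1]
    T = np.zeros((n, n_factors)); U = np.zeros((n, n_factors))
    P = np.zeros((p, n_factors)); Q = np.zeros((m, n_factors))
    W = np.zeros((p, n_factors)); b = np.zeros(n_factors)
    for i in range(n_factors):
        # start u from the column of Y with the largest sum of squares
        u = Y[:, np.argmax((Y ** 2).sum(axis=0))].copy()
        t_old = np.zeros(n)
        for _ in range(max_iter):
            w = X.T @ u; w /= np.linalg.norm(w)   # estimate X weights
            t = X @ w                             # estimate X factor scores
            q = Y.T @ t; q /= np.linalg.norm(q)   # estimate Y weights
            u = Y @ q                             # estimate Y scores
            if np.linalg.norm(t - t_old) < tol:
                break
            t_old = t
        b[i] = (t @ u) / (t @ t)                  # prediction coefficient
        p_i = X.T @ t / (t @ t)                   # X factor loadings
        X = X - np.outer(t, p_i)                  # deflate X
        Y = Y - b[i] * np.outer(t, q)             # deflate Y
        T[:, i], U[:, i], P[:, i] = t, u, p_i
        Q[:, i], W[:, i] = q, w
    return T, U, P, Q, W, b
```

Note that, because of the deflation step, the extracted score vectors ti come out mutually orthogonal.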

Performing PLS using SIMPLS

And here is the source code for computing PLS using the SIMPLS algorithm:
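Again as an illustration rather than the framework's actual C# code, a NumPy sketch of the SIMPLS steps could look like the following (the scaling conventions are assumptions of this sketch):

```python
import numpy as np

def simpls(X, Y, n_factors):
    # Sketch of SIMPLS; S is the X'Y cross-product matrix.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n, p = X.shape
    m = Y.shape[1]
    T = np.zeros((n, n_factors)); U = np.zeros((n, n_factors))
    P = np.zeros((p, n_factors)); C = np.zeros((m, n_factors))
    W = np.zeros((p, n_factors)); V = np.zeros((p, n_factors))
    S = X.T @ Y                                 # cross-product matrix
    for i in range(n_factors):
        left, sing, right = np.linalg.svd(S, full_matrices=False)
        w = left[:, 0]                          # first left singular vector
        c = right[0] * sing[0]                  # first right vector * value
        t = X @ w                               # X factor scores
        nt = np.linalg.norm(t)
        t, c, w = t / nt, c / nt, w / nt        # normalize scores, rescale
        p_i = X.T @ t                           # X factor loadings
        u = Y @ c                               # Y scores
        v = p_i.copy()                          # basis vector
        if i > 0:
            v -= V[:, :i] @ (V[:, :i].T @ p_i)  # orthogonalize v to previous
            u -= T[:, :i] @ (T[:, :i].T @ u)    # orthogonalize u to previous
        v /= np.linalg.norm(v)
        S -= np.outer(v, v @ S)                 # deflate cross-product matrix
        T[:, i], U[:, i], P[:, i] = t, u, p_i
        C[:, i], W[:, i], V[:, i] = c, w, v
    return T, U, P, C, W
```

Unlike NIPALS, only the small cross-product matrix is deflated here; X itself is left untouched.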

Multivariate Linear Regression

Multivariate Linear Regression is computed in a similar manner to a Multiple Linear Regression. The only difference is that, instead of having a weight vector and an intercept, we have a weight matrix and an intercept vector.

The weight matrix and the intercept vector are computed in the PartialLeastSquaresAnalysis class by the CreateRegression method. If the analyzed data was already mean-centered before being fed to the analysis, the constructed intercept vector will consist only of zeros.
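As an illustrative sketch (not the actual CreateRegression implementation), the weight matrix and intercept vector of a multivariate linear regression can be obtained by ordinary least squares:

```python
import numpy as np

def multivariate_regression(X, Y):
    # Weight matrix B and intercept vector a such that Y ≈ X @ B + a.
    # When X and Y were already mean-centered, a comes out as zero.
    x_mean, y_mean = X.mean(axis=0), Y.mean(axis=0)
    B = np.linalg.lstsq(X - x_mean, Y - y_mean, rcond=None)[0]
    a = y_mean - x_mean @ B
    return B, a
```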

Using the code

As an example, let's consider the example data from Hervé Abdi, where the goal is to predict the subjective evaluation of a set of 5 wines. The dependent variables we want to predict for each wine are its likeability and how well it goes with meat or dessert (as rated by a panel of experts). The predictors are the price and the sugar, alcohol, and acidity content of each wine.

Next, we proceed to create the Partial Least Squares Analysis using the Covariance
method (data will only be mean centered but not normalized) and using the SIMPLS
algorithm.

After the analysis has been computed, we can proceed and create the regression model.

Now that the regression has been computed, we can find how well it performs. The coefficient of determination r² for the variables Hedonic, Goes with Meat and Goes with Dessert can be computed by the CoefficientOfDetermination method of the MultivariateLinearRegression class and will be, respectively, 0.9999, 0.9999 and 0.8750 – the closer to one, the better.
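For reference, the coefficient of determination for a single output variable can be computed as follows (a minimal sketch, not the framework's method):

```python
def r_squared(y_true, y_pred):
    # r² = 1 - SS_res / SS_tot; values close to one indicate a good fit
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot
```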

Sample application

The accompanying sample application performs Partial Least Squares Analysis and Regression in Excel worksheets. The predictors and dependent variables can be selected once the data has been loaded in the application.

Left: Wine example from Hervé Abdi. Right: Variance explained by PLS using the SIMPLS algorithm

Left: Partial Least Squares Analysis results and regression coefficients for the
full regression model. Right: projection of the dependent and predictors variables
using the three first factors.

Results from the Multivariate Linear Regression performed in Latent Space using
three factors from PLS.

Left: Example data from Geladi and Kowalski. Right: Analysis results for the data using NIPALS. We can see that just two factors are enough to explain the whole variance of the set.

Left: Projections to the latent spaces. Right: Loadings and coefficients for the
two factor Multivariate Linear Regression model.

Results from the Partial Least Squares Regression showing a perfect fit using only the first two factors.

Hidden Markov Model-Based Sequence Classifiers in C#

Distinct Hidden Markov Models can be trained individually on data obtained from each class of a classification problem. If we create a set of those models and specialize each model to recognize each of the separate classes, we can then use the HMMs ability to calculate the likelihood that a given sequence belongs to the model to determine the most likely class of an unknown sequence.

The code presented here is also part of the Accord.NET Framework, a framework for developing machine learning, computer vision, computer audition, statistics and math applications. It is based on the already excellent AForge.NET Framework. Please see the starting guide for more details. The latest version of the framework includes the latest version of this code plus many other statistics and machine learning tools.

Introduction

Hidden Markov Models

An introduction to Hidden Markov Models (HMM) was presented in a previous article, entitled Hidden Markov Models in C#. HMMs are stochastic methods to model temporal and sequence data.

They allow, among other things, (1) inferring the most likely sequence of states that produced a given output sequence, (2) inferring which will be the most likely next state (and thus predicting the next output), and (3) calculating the probability that a given sequence of outputs originated from the system (allowing the use of hidden Markov models for sequence classification). In the context of this article, we will be most interested in ability (3).

Hidden Markov Models can be seen as finite state machines where, for each sequence unit observation, there is a state transition and, for each state, there is an output symbol emission. The picture on the left summarizes the overall definition of an HMM given in the previous article.

Hidden Markov Model-Based Sequence Classifier

Hidden Markov Models can be trained individually on data obtained from each class of a classification problem. If we create a set of models and specialize each model to recognize each of the separate classes, we can then exploit the HMMs' ability to calculate the likelihood that a given sequence belongs to the model to perform classification of discrete sequences.

After all models have been trained, the probability of the unknown-class sequence can be computed for each model. As each model is specialized in a given class, the one which outputs the highest probability can be used to determine the most likely class for the new sequence, as shown in the picture below.

Schematic representation of the sequence classification procedure.

Source Code

The implementation is very straightforward, as shown in the picture below. The Tag property of the HiddenMarkovModel class can be used to store additional information about which class the model will be representing.

Class diagram for the HMM-based sequence classifier.

Computing the most likely class for a given sequence

Computing the most likely class for a given sequence is as simple as iterating through all models in the set and selecting the one which has produced the highest likelihood value.
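In sketch form, this is just an argmax over the models' likelihoods (here, hypothetical likelihood callables stand in for the trained HMMs):

```python
def classify(models, sequence):
    # models: dict mapping class label -> function returning the
    # (log-)likelihood of a sequence under that class's trained model
    return max(models, key=lambda label: models[label](sequence))
```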

Training each model to recognize each of the output classes

To train each model to recognize each of the output classes, we have to split the training data into subsets whose outputs correspond to each of the classes of the classification problem.

Using the Code

Code usage is very simple. Once the input data is available in the form of a sequence array and the output data in the form of an integer array, create a new HiddenMarkovClassifier object with the appropriate parameters.

Then, just call Learn() passing the input and output data and the convergence limit for the learning algorithm. After that, call Compute() passing a new integer sequence to be classified by the system.

Sample Application

The accompanying sample application can read Excel spreadsheets containing integer sequences and labels for those sequences. An example spreadsheet contained inside the Resources folder in the compressed archive file demonstrates the expected format for this data.

Left: Input sequences data. Right: Hidden Markov Models contained in the Sequence Classifier with initial configurations.

Left: Trained models. Right: Performance test for the Hidden Markov Model Sequence Classifier.

Hidden Markov Models in C#

Hidden Markov Models (HMM) are stochastic methods to model temporal and sequence data. They are especially known for their application in temporal pattern recognition such as speech, handwriting, gesture recognition, part-of-speech tagging, musical score following, partial discharges and bioinformatics.

The code presented here is also part of the Accord.NET Framework. The Accord.NET Framework is a framework for developing machine learning, computer vision, computer audition, statistics and math applications. It is based on the already excellent AForge.NET Framework. Please see the starting guide for mode details. The latest version of the framework includes the latest version of this code plus many other statistics and machine learning tools.

Introduction

Hidden Markov Models were first described in a series of statistical papers by Leonard E. Baum and other authors in the second half of the 1960s. One of the first applications of HMMs was speech recognition, starting in the mid-1970s. Indeed, one of the most comprehensive explanations on the topic was published in “A Tutorial On Hidden Markov Models And Selected Applications in Speech Recognition”, by Lawrence R. Rabiner in 1989. In the second half of the 1980s, HMMs began to be applied to the analysis of biological sequences, in particular DNA. Since then, they have become ubiquitous in the field of bioinformatics.

A dynamical system of discrete nature assumed to be governed by a Markov chain emits a sequence of observable outputs. Under the Markov assumption, it is also assumed that the latest output depends only on the current state of the system. Such states are often not known to the observer when only the output values are observable.

Hidden Markov Models attempt to model such systems and allow, among other things, (1) to infer the most likely sequence of states that produced a given output sequence, to (2) infer which will be the most likely next state (and thus predicting the next output) and (3) calculate the probability that a given sequence of outputs originated from the system (allowing the use of hidden Markov models for sequence classification).

The “hidden” in Hidden Markov Models comes from the fact that the observer does not know which state the system may be in, but has only a probabilistic insight on where it should be.

Definition

Hidden Markov Models can be seen as finite state machines where, for each sequence unit observation, there is a state transition and, for each state, there is an output symbol emission.

Notation

Traditionally, HMMs have been defined by the following quintuple:

$\lambda = (N, M, A, B, \pi)$

where

• N is the number of states for the model
• M is the number of distinct observations symbols per state, i.e. the discrete alphabet size.
• A is the NxN state transition probability distribution given in the form of a matrix A = {aij}
• B is the NxM observation symbol probability distribution given in the form of a matrix B = {bj(k)}
• π is the initial state distribution vector π = {πi}

Note that, if we omit the structure parameters M and N, we have the more often used compact notation

$\lambda = (A, B, \pi)$
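As a concrete (made-up) example of the notation, a model with N = 2 states and M = 2 symbols could be written as:

```python
# λ = (A, B, π) for a hypothetical 2-state, 2-symbol model
A  = [[0.7, 0.3],    # N x N state transition matrix; rows sum to 1
      [0.4, 0.6]]
B  = [[0.9, 0.1],    # N x M observation symbol matrix; rows sum to 1
      [0.2, 0.8]]
pi = [0.6, 0.4]      # initial state distribution vector
```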

Canonical problems

There are three canonical problems associated with hidden Markov models, which I’ll quote from Wikipedia:

1. Given the parameters of the model, compute the probability of a particular output sequence. This requires summation over all possible state sequences, but can be done efficiently using the Forward algorithm, which is a form of dynamic programming.
2. Given the parameters of the model and a particular output sequence, find the state sequence that is most likely to have generated that output sequence. This requires finding a maximum over all possible state sequences, but can similarly be solved efficiently by the Viterbi algorithm.
3. Given an output sequence or a set of such sequences, find the most likely set of state transition and output probabilities. In other words, derive the maximum likelihood estimate of the parameters of the HMM given a dataset of output sequences. No tractable algorithm is known for solving this problem exactly, but a local maximum likelihood can be derived efficiently using the Baum-Welch algorithm or the Baldi-Chauvin algorithm. The Baum-Welch algorithm is an example of a forward-backward algorithm, and is a special case of the Expectation-maximization algorithm.

The solutions to those problems are exactly what makes Hidden Markov Models useful. The ability to learn from data (using the solution of problem 3), then make predictions (solution to problem 2) and classify sequences (solution of problem 1), is nothing but applied machine learning. From this perspective, HMMs can be seen as supervised sequence classifiers and sequence predictors with some other useful and interesting properties.

Choosing the structure

Choosing the structure for a hidden Markov model is not always obvious. The number of states depends on the application and on what interpretation one is willing to give to the hidden states. Some domain knowledge is required to build a suitable model and also to choose the initial parameters that an HMM can take. There is also some trial and error involved, and there are sometimes complex tradeoffs that have to be made between model complexity and difficulty of learning, just as is the case with most machine learning techniques.

Additional information can be found on http://www.cse.unsw.edu.au/~waleed/phd/tr9806/node12.html.

Algorithms

The solutions to the three canonical problems are the algorithms that make HMMs useful. Each of the three problems is described in the three subsections below.

Evaluation

The first canonical problem is the evaluation of the probability of a particular output sequence. It can be efficiently computed using either the Viterbi-forward or the Forward algorithms, both of which are forms of dynamic programming.

The Viterbi algorithm originally computes the most likely sequence of states which has originated a sequence of observations. In doing so, it is also able to return the probability of traversing this particular sequence of states. So to obtain Viterbi probabilities, please refer to the Decoding problem described below.

The Forward algorithm, unlike the Viterbi algorithm, does not find a particular sequence of states; instead it computes the probability that any sequence of states has produced the sequence of observations. In both algorithms, a matrix is used to store computations about the possible state sequence paths that the model can assume. The forward algorithm also plays a key role in the Learning problem, and is thus implemented as a separate method.
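As an illustration of the scaled Forward recursion (plain Python rather than the article's C# implementation):

```python
import math

def forward_log_likelihood(pi, A, B, obs):
    # alpha recursion with per-step scaling to avoid numerical underflow;
    # returns log P(obs | model)
    N = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(N)]
    log_prob = 0.0
    for t in range(len(obs)):
        if t > 0:
            alpha = [B[j][obs[t]] * sum(alpha[i] * A[i][j] for i in range(N))
                     for j in range(N)]
        scale = sum(alpha)          # scaling factor for time step t
        log_prob += math.log(scale)
        alpha = [a / scale for a in alpha]
    return log_prob
```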

Decoding

The second canonical problem is the discovery of the most likely sequence of states that generated a given output sequence. This can be computed efficiently using the Viterbi algorithm. A traceback is used to detect the maximum probability path travelled by the algorithm. The probability of travelling such a sequence is also computed in the process.
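A minimal sketch of the Viterbi recursion with traceback (again illustrative Python, not the article's implementation):

```python
def viterbi(pi, A, B, obs):
    # delta recursion with backpointers; returns the most likely state path
    N = len(pi)
    delta = [pi[i] * B[i][obs[0]] for i in range(N)]
    back = []
    for t in range(1, len(obs)):
        psi, new_delta = [], []
        for j in range(N):
            best = max(range(N), key=lambda i: delta[i] * A[i][j])
            psi.append(best)
            new_delta.append(delta[best] * A[best][j] * B[j][obs[t]])
        back.append(psi)
        delta = new_delta
    path = [max(range(N), key=lambda i: delta[i])]
    for psi in reversed(back):      # traceback over the stored pointers
        path.append(psi[path[-1]])
    path.reverse()
    return path
```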

Learning

The third and last problem is the problem of learning the most likely parameters that best model a system, given a set of sequences originated from this system. Most implementations I've seen did not consider the problem of learning from a set of sequences, but only from a single sequence at a time. The algorithm below, however, is fully suitable for learning from a set of sequences and also uses scaling, which is another thing I have not seen in other implementations.

The source code follows the original algorithm by Rabiner (1989). There are, however, some known issues with the algorithms detailed in Rabiner's paper. More information about those issues is available in a later section of this article entitled “Remarks”.

Using the code

Let's suppose we have gathered some sequences from a system we wish to model. The sequences are expressed as an integer array such as:

For us, it can be obvious that the system is outputting sequences that always start with a zero and have one or more ones at the end. But let's try to fit a Hidden Markov Model to predict those sequences.
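For instance, training data matching that description might look like (hypothetical values, in place of the original listing):

```python
# hypothetical training sequences: each starts with a zero
# and ends with one or more ones
sequences = [
    [0, 1],
    [0, 1, 1, 1],
    [0, 0, 1, 1],
    [0, 1, 1],
]
```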

Once the model is trained, let's test it to see if it recognizes some sequences:

Of course the model performs well, as this is a rather simple example. A more useful test case would consist of allowing for some errors in the input sequences in the hope that the model will become more tolerant to measurement errors.

We can see that, despite having a very low probability, the likelihood values for the sequences containing a simulated measurement error are greater than the likelihoods for the sequences which do not follow the sequence structure at all.

In a subsequent article, we will see that those low values for the likelihoods will not be a problem because HMMs are often used in sets to form sequence classifiers. When used in such configurations, what really matters is which HMM returns the highest probability among others in the set.

Remarks

A practical issue in the use of Hidden Markov Models to model long sequences is the numerical scaling of conditional probabilities. The probability of observing a long sequence given most models is extremely small, and the use of these extremely small numbers in computations often leads to numerical instability, making application of HMMs to genome length sequences quite challenging.

There are two common approaches to dealing with small conditional probabilities. One approach is to rescale the conditional probabilities using carefully designed scaling factors, and the other approach is to work with the logarithms of the conditional probabilities. For more information on using logarithms please see the work entitled “Numerically Stable Hidden Markov Model Implementation”, by Tobias P. Mann.
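The log-domain approach hinges on adding probabilities without leaving log space, which is usually done with the log-sum-exp trick:

```python
import math

def log_sum_exp(log_values):
    # computes log(sum(exp(v) for v in log_values)) without underflow:
    # the largest term is factored out before exponentiating
    m = max(log_values)
    return m + math.log(sum(math.exp(v - m) for v in log_values))
```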

Known issues

The code on this article is based on the Tutorial by Rabiner. There are, however, some problems with the scaling and other algorithms. An errata depicting all issues is available in the website “An Erratum for ‘A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition’” and is maintained by Ali Rahimi. I have not yet verified if the implementation presented here also suffers from the same mistakes explained there. This code has worked well under many situations, but I cannot guarantee its perfectness. Please use at your own risk.

Acknowledgements

Thanks to Guilherme C. Pedroso, for the help with the Baum-Welch generalization for multiple input sequences. He has also co-written a very interesting article using hidden Markov models for gesture recognition, entitled “Automatic Recognition of Finger Spelling for LIBRAS based on a Two-Layer Architecture” published in the 25th Symposium On Applied Computing (ACM SAC 2010).

Kernel Functions for Machine Learning Applications

In recent years, Kernel methods have received major attention, particularly due to the increased popularity of the Support Vector Machines. Kernel functions can be used in many applications as they provide a simple bridge from linearity to non-linearity for algorithms which can be expressed in terms of dot products. In this article, we will list a few kernel functions and some of their properties.

Many of these functions have been incorporated in Accord.NET, a framework for creating machine learning, statistics, and computer vision applications.

Kernel Methods

Kernel methods are a class of algorithms for pattern analysis or recognition, whose best known element is the support vector machine (SVM). The general task of pattern analysis is to find and study general types of relations (such as clusters, rankings, principal components, correlations, classifications) in general types of data (such as sequences, text documents, sets of points, vectors, images, graphs, etc) (Wikipedia, 2010a).

The main characteristic of Kernel Methods, however, is their distinct approach to this problem. Kernel methods map the data into higher dimensional spaces in the hope that in this higher-dimensional space the data could become more easily separated or better structured. There are also no constraints on the form of this mapping, which could even lead to infinite-dimensional spaces. This mapping function, however, never needs to be computed explicitly because of a tool called the kernel trick.

The Kernel trick

The Kernel trick is a very interesting and powerful tool. It is powerful because it provides a bridge from linearity to non-linearity to any algorithm that can be expressed solely in terms of dot products between two vectors. It comes from the fact that, if we first map our input data into a higher-dimensional space, a linear algorithm operating in this space will behave non-linearly in the original input space.

Now, the Kernel trick is really interesting because that mapping never needs to be computed. If our algorithm can be expressed only in terms of an inner product between two vectors, all we need to do is replace this inner product with the inner product from some other suitable space. That is where the “trick” resides: wherever a dot product is used, it is replaced with a Kernel function. The kernel function denotes an inner product in feature space and is usually denoted as:

K(x,y) = <φ(x),φ(y)>

Using the Kernel function, the algorithm can then be carried into a higher-dimension space without explicitly mapping the input points into this space. This is highly desirable, as sometimes our higher-dimensional feature space could even be infinite-dimensional and thus unfeasible to compute.
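A tiny demonstration of the trick: for the homogeneous polynomial kernel of degree 2, the explicit feature map φ lists all degree-2 monomials, yet the inner product in that feature space reduces to a simple function of the original dot product:

```python
import itertools

def phi(x):
    # explicit degree-2 monomial map for the homogeneous polynomial kernel
    return [a * b for a, b in itertools.product(x, x)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

x, y = [1.0, 2.0], [3.0, 4.0]
# the kernel trick: <phi(x), phi(y)> equals <x, y>² without computing phi
assert dot(phi(x), phi(y)) == dot(x, y) ** 2
```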

Kernel Properties

Kernel functions must be continuous, symmetric, and most preferably should have a positive (semi-)definite Gram matrix. Kernels which satisfy Mercer's theorem are positive semi-definite, meaning their kernel matrices have only non-negative eigenvalues. The use of a positive definite kernel ensures that the optimization problem will be convex and the solution will be unique.

However, many kernel functions which aren't strictly positive definite have also been shown to perform very well in practice. An example is the Sigmoid kernel, which, despite its wide use, is not positive semi-definite for certain values of its parameters. Boughorbel (2005) also experimentally demonstrated that kernels which are only conditionally positive definite can possibly outperform most classical kernels in some applications.

Kernels can also be classified as anisotropic stationary, isotropic stationary, compactly supported, locally stationary, nonstationary or separable nonstationary. Moreover, kernels can be labeled scale-invariant or scale-dependent, which is an interesting property, as scale-invariant kernels make the training process invariant to a scaling of the data.

Choosing the Right Kernel

Choosing the most appropriate kernel highly depends on the problem at hand – and fine tuning its parameters can easily become a tedious and cumbersome task. Automatic kernel selection is possible and is discussed in the works by Tom Howley and Michael Madden.

The choice of a Kernel depends on the problem at hand because it depends on what we are trying to model. A polynomial kernel, for example, allows us to model feature conjunctions up to the order of the polynomial. Radial basis functions allow us to pick out circles (or hyperspheres) – in contrast with the Linear kernel, which allows us only to pick out lines (or hyperplanes).

The motivation behind the choice of a particular kernel can be very intuitive and straightforward depending on what kind of information we are expecting to extract about the data. Please see the final notes on this topic from Introduction to Information Retrieval, by Manning, Raghavan and Schütze for a better explanation on the subject.

Kernel Functions

Below is a list of some kernel functions available in the existing literature. As was the case with previous articles, the LaTeX notation for each of the formulas below is readily available from its alternate text html tag. I cannot guarantee all of them are perfectly correct, so use them at your own risk. Most of them have links to articles where they have been originally used or proposed.

1. Linear Kernel

The Linear kernel is the simplest kernel function. It is given by the inner product <x,y> plus an optional constant c. Kernel algorithms using a linear kernel are often equivalent to their non-kernel counterparts, i.e. KPCA with linear kernel is the same as standard PCA.

$k(x, y) = x^T y + c$

2. Polynomial Kernel

The Polynomial kernel is a non-stationary kernel. Polynomial kernels are well suited for problems where all the training data is normalized.

$k(x, y) = (\alpha x^T y + c)^d$
Adjustable parameters are the slope alpha, the constant term c and the polynomial degree d.

3. Gaussian Kernel

The Gaussian kernel is an example of a radial basis function kernel.

$k(x, y) = \exp\left(-\frac{ \lVert x-y \rVert ^2}{2\sigma^2}\right)$

Alternatively, it could also be implemented using

$k(x, y) = \exp\left(- \gamma \lVert x-y \rVert ^2 \right)$

The adjustable parameter sigma plays a major role in the performance of the kernel, and should be carefully tuned to the problem at hand. If overestimated, the exponential will behave almost linearly and the higher-dimensional projection will start to lose its non-linear power. On the other hand, if underestimated, the function will lack regularization and the decision boundary will be highly sensitive to noise in the training data.
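As an illustration, both parameterizations are one-liners, and with gamma = 1/(2σ²) they coincide (a sketch, not the framework's implementation):

```python
import math

def gaussian_kernel(x, y, sigma=1.0):
    # k(x, y) = exp(-||x - y||² / (2 sigma²))
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-sq_dist / (2.0 * sigma ** 2))

def gaussian_kernel_gamma(x, y, gamma=0.5):
    # alternative form: k(x, y) = exp(-gamma ||x - y||²)
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)
```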

4. Exponential Kernel

The exponential kernel is closely related to the Gaussian kernel, with only the square of the norm left out. It is also a radial basis function kernel.

$k(x, y) = \exp\left(-\frac{ \lVert x-y \rVert }{2\sigma^2}\right)$

5. Laplacian Kernel

The Laplace Kernel is completely equivalent to the exponential kernel, except for being less sensitive to changes in the sigma parameter. Being equivalent, it is also a radial basis function kernel.

$k(x, y) = \exp\left(- \frac{\lVert x-y \rVert }{\sigma}\right)$

It is important to note that the observations made about the sigma parameter for the Gaussian kernel also apply to the Exponential and Laplacian kernels.

6. ANOVA Kernel

The ANOVA kernel is also a radial basis function kernel, just as the Gaussian and Laplacian kernels. It is said to perform well in multidimensional regression problems (Hofmann, 2008).

$k(x, y) = \sum_{k=1}^n \exp (-\sigma (x^k - y^k)^2)^d$

7. Hyperbolic Tangent (Sigmoid) Kernel

The Hyperbolic Tangent Kernel is also known as the Sigmoid Kernel and as the Multilayer Perceptron (MLP) kernel. The Sigmoid Kernel comes from the Neural Networks field, where the bipolar sigmoid function is often used as an activation function for artificial neurons.

$k(x, y) = \tanh (\alpha x^T y + c)$

It is interesting to note that an SVM model using a sigmoid kernel function is equivalent to a two-layer perceptron neural network. This kernel was quite popular for support vector machines due to its origin in neural network theory. Also, despite being only conditionally positive definite, it has been found to perform well in practice.

There are two adjustable parameters in the sigmoid kernel, the slope alpha and the intercept constant c. A common value for alpha is 1/N, where N is the data dimension. A more detailed study on sigmoid kernels can be found in the works by Hsuan-Tien Lin and Chih-Jen Lin.

8. Rational Quadratic Kernel

The Rational Quadratic kernel is less computationally intensive than the Gaussian kernel and can be used as an alternative when using the Gaussian becomes too expensive.

$k(x, y) = 1 - \frac{\lVert x-y \rVert^2}{\lVert x-y \rVert^2 + c}$

9. Multiquadric Kernel

The Multiquadric kernel can be used in the same situations as the Rational Quadratic kernel. As is the case with the Sigmoid kernel, it is also an example of a non-positive definite kernel.

$k(x, y) = \sqrt{\lVert x-y \rVert^2 + c^2}$

10. Inverse Multiquadric Kernel

The Inverse Multiquadric kernel, as with the Gaussian kernel, results in a kernel matrix with full rank (Micchelli, 1986) and thus forms an infinite-dimensional feature space.

$k(x, y) = \frac{1}{\sqrt{\lVert x-y \rVert^2 + c^2}}$

11. Circular Kernel

The circular kernel is used in geostatistical applications. It is an example of an isotropic stationary kernel and is positive definite in R2.

$k(x, y) = \frac{2}{\pi} \arccos ( - \frac{ \lVert x-y \rVert}{\sigma}) - \frac{2}{\pi} \frac{ \lVert x-y \rVert}{\sigma} \sqrt{1 - \left(\frac{ \lVert x-y \rVert}{\sigma} \right)^2}$
$\mbox{if}~ \lVert x-y \rVert < \sigma \mbox{, zero otherwise}$

12. Spherical Kernel

The spherical kernel is similar to the circular kernel, but is positive definite in R3.

$k(x, y) = 1 - \frac{3}{2} \frac{\lVert x-y \rVert}{\sigma} + \frac{1}{2} \left( \frac{ \lVert x-y \rVert}{\sigma} \right)^3$

$\mbox{if}~ \lVert x-y \rVert < \sigma \mbox{, zero otherwise}$

13. Wave Kernel

The Wave kernel is also symmetric positive semi-definite (Huang, 2008).

$k(x, y) = \frac{\theta}{\lVert x-y \rVert} \sin \left( \frac{\lVert x-y \rVert }{\theta} \right)$

14. Power Kernel

The Power kernel is also known as the (unrectified) triangular kernel. It is an example of scale-invariant kernel (Sahbi and Fleuret, 2004) and is also only conditionally positive definite.

$k(x,y) = - \lVert x-y \rVert ^d$

15. Log Kernel

The Log kernel seems to be particularly interesting for images, but is only conditionally positive definite.

$k(x,y) = - \log (\lVert x-y \rVert ^d + 1)$

16. Spline Kernel

The Spline kernel is given as a piece-wise cubic polynomial, as derived in the works by Gunn (1998).

$k(x, y) = 1 + xy + xy \min(x,y) - \frac{x+y}{2} \min(x,y)^2 + \frac{1}{3}\min(x,y)^3$

However, what it actually means is:

$k(x,y) = \prod_{i=1}^d 1 + x_i y_i + x_i y_i \min(x_i, y_i) - \frac{x_i + y_i}{2} \min(x_i,y_i)^2 + \frac{\min(x_i,y_i)^3}{3}$

with $x, y \in R^d$.

17. B-Spline (Radial Basis Function) Kernel

The B-Spline kernel is defined on the interval [−1, 1]. It is given by the recursive formula:

$k(x,y) = B_{2p+1}(x-y)$

$\mbox{where~} p \in N \mbox{~with~} B_{i+1} := B_i \otimes B_0.$

In the work by Bart Hamers it is given by:

$k(x, y) = \prod_{p=1}^d B_{2n+1}(x_p - y_p)$

Alternatively, Bn can be computed using the explicit expression (Fomel, 2000):

$B_n(x) = \frac{1}{n!} \sum_{k=0}^{n+1} \binom{n+1}{k} (-1)^k (x + \frac{n+1}{2} - k)^n_+$

Where x+ is defined as the truncated power function:

$x^d_+ = \begin{cases} x^d, & \mbox{if }x > 0 \\ 0, & \mbox{otherwise} \end{cases}$

18. Bessel Kernel

The Bessel kernel is well known in the theory of function spaces of fractional smoothness. It is given by:

$k(x, y) = \frac{J_{v+1}( \sigma \lVert x-y \rVert)}{ \lVert x-y \rVert ^ {-n(v+1)} }$

where J is the Bessel function of first kind. However, in the Kernlab for R documentation, the Bessel kernel is said to be:

$k(x,x') = - Bessel_{(nu+1)}^n (\sigma |x - x'|^2)$

19. Cauchy Kernel

The Cauchy kernel comes from the Cauchy distribution (Basak, 2008). It is a long-tailed kernel and can be used to give long-range influence and sensitivity over the high dimension space.

$k(x, y) = \frac{1}{1 + \frac{\lVert x-y \rVert^2}{\sigma^2} }$

20. Chi-Square Kernel

The Chi-Square kernel comes from the Chi-Square distribution:

$k(x,y) = 1 - \sum_{i=1}^n \frac{(x_i-y_i)^2}{\frac{1}{2}(x_i+y_i)}$

However, as noted by commenter Alexis Mignon, this version of the kernel is only conditionally positive-definite (CPD). A positive-definite version of this kernel is given in (Vedaldi and Zisserman, 2011) as

$k(x,y) = \sum_{i=1}^n \frac{ 2 x_i y_i}{(x_i + y_i)}$

and is suitable to be used by methods other than support vector machines.

21. Histogram Intersection Kernel

The Histogram Intersection Kernel is also known as the Min Kernel and has been proven useful in image classification.

$k(x,y) = \sum_{i=1}^n \min(x_i,y_i)$
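Since the formula is just a sum of element-wise minima, it is trivial to sketch in C# (the names here are illustrative only, not the framework's own):

```csharp
using System;

// Histogram intersection (min) kernel: sums the element-wise minima of the two vectors.
public static class HistogramIntersectionExample
{
    public static double Min(double[] x, double[] y)
    {
        double sum = 0.0;
        for (int i = 0; i < x.Length; i++)
            sum += Math.Min(x[i], y[i]);
        return sum;
    }
}
```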

22. Generalized Histogram Intersection

The Generalized Histogram Intersection kernel is built based on the Histogram Intersection Kernel for image classification but applies in a much larger variety of contexts (Boughorbel, 2005). It is given by:

$k(x,y) = \sum_{i=1}^m \min(|x_i|^\alpha,|y_i|^\beta)$

23. Generalized T-Student Kernel

The Generalized T-Student Kernel has been proven to be a Mercer kernel, thus having a positive semi-definite kernel matrix (Boughorbel, 2004). It is given by:

$k(x,y) = \frac{1}{1 + \lVert x-y \rVert ^d}$

24. Bayesian Kernel

The Bayesian kernel could be given as:

$k(x,y) = \prod_{l=1}^N \kappa_l (x_l,y_l)$

where

$\kappa_l(a,b) = \sum_{c \in \{0;1\}} P(Y=c \mid X_l=a) ~ P(Y=c \mid X_l=b)$

However, it really depends on the problem being modeled. For more information, please see the work by Alashwal, Deris and Othman, in which they used a SVM with Bayesian kernels in the prediction of protein-protein interactions.

25. Wavelet Kernel

The Wavelet kernel (Zhang et al, 2004) comes from Wavelet theory and is given as:

$k(x,y) = \prod_{i=1}^N h(\frac{x_i-c}{a}) \: h(\frac{y_i-c}{a})$

Where a and c are the wavelet dilation and translation coefficients, respectively (the form presented above is a simplification, please see the original paper for details). A translation-invariant version of this kernel can be given as:

$k(x,y) = \prod_{i=1}^N h(\frac{x_i-y_i}{a})$

where in both cases h(x) denotes a mother wavelet function. In the paper by Li Zhang, Weida Zhou, and Licheng Jiao, the authors suggest a possible h(x) as:

$h(x) = \cos(1.75x)\exp(-\frac{x^2}{2})$

which they also prove to be an admissible kernel function.

Source Code

The latest version of the source code for almost all of the kernels listed above is available in the Accord.NET Framework. Some are also available in the sequel of this article, Kernel Support Vector Machines for Classification and Regression in C#. They are provided together with a comprehensive and simple implementation of SVMs (Support Vector Machines) in C#. However, for the latest sources, which may contain bug fixes and other enhancements, please download the most recent version available of Accord.NET.

References

Citing this work

If you would like, please cite this work as: Souza, César R. “Kernel Functions for Machine Learning Applications.” 17 Mar. 2010. Web. <http://crsouza.blogspot.com/2010/03/kernel-functions-for-machine-learning.html>.

Logistic Regression in C#

Logistic regression (sometimes called the logistic model or logit model) is used for prediction of the probability of occurrence of an event by fitting data to a logistic curve. Logistic regression is used extensively in the medical and social sciences as well as marketing applications such as prediction of a customer’s propensity to purchase a product or cease a subscription.

The code presented here is part of the Accord.NET Framework. The Accord.NET Framework is a framework for developing machine learning, computer vision, computer audition, statistics and math applications in .NET. To use the framework in your projects, install it by typing Install-Package Accord.MachineLearning in your IDE’s NuGet package manager.

Overview

Logistic regression is a generalized linear model used for binomial regression. Like many forms of regression analysis, it makes use of several predictor variables that may be either numerical or categorical.

Logistic Sigmoid Function

The logistic sigmoid function is given by

g(z) = 1 / (1 + Exp(-z))

where in the context of logistic regression z is called the logit.
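In C#, this function can be written directly from the formula above (an illustrative sketch, not the framework's code):

```csharp
using System;

public static class SigmoidExample
{
    // Logistic sigmoid g(z) = 1 / (1 + e^-z): maps the logit z to a probability in (0, 1).
    public static double Sigmoid(double z)
    {
        return 1.0 / (1.0 + Math.Exp(-z));
    }
}
```

Note that g(0) = 0.5, and the output saturates towards 0 and 1 for large negative and positive logits, respectively.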

Logistic Regression Model

The logistic regression model is a generalized linear model. This means that it is just a linear regression model taken as input for a non-linear link function. The linear model has the form

z = c1x1 + c2x2 + … + cnxn + i = ct x + i

where c is the coefficient vector, i is the intercept value and x is the observation vector for n variables. In the context of logistic regression, z is called the logit. The logit is then applied as input to the nonlinear logistic sigmoid function g(z), giving as result a probability.

So in a binomial problem where we are trying to determine whether an observation belongs to class C1 or class C2, the logistic model tells us that:

p(C1|x) = g(ct x + i)

p(C2|x) = 1 – p(C1|x)

where p(C1|x) denotes the probability of class C1 given the observation x. In other words, it denotes the probability of x belonging to class C1.

Coefficients

The coefficients for the logistic regression are the values which multiply each observation variable from a sample input vector. The intercept is analogous to the independent term in a polynomial linear regression model or the threshold or bias value for a neuron in a neural network model.

Odds Ratio

After computation of the logistic regression model, every coefficient will have an associated measure called the odds ratio. The odds ratio is a measure of effect size, describing the strength of association or non-independence between two binary data values. It can be obtained by raising Euler's number to the coefficient value:

Odds ratio = e^c

Standard Error

The standard error for the coefficients can be retrieved from the inverse Hessian matrix calculated during the model fitting phase and can be used to give confidence intervals for the odds ratio. The standard error for the i-th coefficient of the regression can be obtained as:

SEi = sqrt(diag(H^-1)i)

Confidence Intervals

The confidence interval around the logistic regression coefficient is plus or minus 1.96*SEi, where SEi is the standard error for coefficient i. We can then define:

95% C.I.i = <lower, upper> = <coefficienti – 1.96 * SEi,  coefficienti + 1.96 * SEi>
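These two quantities are straightforward to compute. The sketch below is illustrative only; the names are not from the framework's analysis classes.

```csharp
using System;

public static class CoefficientStatsExample
{
    // Odds ratio: Euler's number raised to the coefficient value.
    public static double OddsRatio(double coefficient)
    {
        return Math.Exp(coefficient);
    }

    // 95% confidence interval: coefficient plus or minus 1.96 standard errors.
    public static double[] ConfidenceInterval95(double coefficient, double standardError)
    {
        double margin = 1.96 * standardError;
        return new[] { coefficient - margin, coefficient + margin };
    }
}
```

A coefficient of zero gives an odds ratio of one, meaning the variable has no multiplicative effect on the odds.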

Wald Statistic and Wald’s Test

The Wald statistic is the ratio of the logistic coefficient to its standard error. A Wald test is used to test the statistical significance of each coefficient in the model. A Wald test calculates a Z statistic, which is:

z = coefficient / standard error

This z value can be squared, yielding a Wald statistic with a chi-square distribution, or, alternatively, it can be taken as is and compared directly with a Normal distribution.

The Wald test outputs a p-value indicating the significance of individual independent variables. If the value is below a chosen significance threshold (typically 0.05), then the variable plays some role in determining the outcome variable that most likely is not the result of chance alone. However, there are some problems with the use of the Wald statistic. The likelihood-ratio test is a better alternative to the Wald test.

Likelihood-Ratio and Chi-Square Test

The likelihood-ratio is the ratio of the likelihood of the full model over the likelihood of a simpler nested model. When compared to the null model the likelihood-ratio test gives an overall model performance measure. When compared to nested models, each with one variable omitted, it tests the statistical significance of each coefficient in the model. These significance tests are considered to be more reliable than the Wald significance test.

The likelihood-ratio is a chi-square statistic given by:

D = -2 ln(likelihood of the nested model / likelihood of the full model)

The model with more parameters will always fit at least as well (i.e. have a greater or equal log-likelihood). Whether it fits significantly better and should thus be preferred can be determined by deriving the probability or p-value of the obtained difference D. In many cases, the probability distribution of the test statistic can be approximated by a chi-square distribution with (df1 − df2) degrees of freedom, where df1 and df2 are the degrees of freedom of models 1 and 2 respectively.

Regression to a Logistic Model

If we consider the mapping

φ(<x1, x2, …, xn>) = <x1, x2, … xn, 1>

The logistic regression model can also be rewritten as

p(C1|φ) = g(wt φ) = f(φ, w)

so that w contains all coefficients and the intercept value in a single weight vector. Doing so allows the logistic regression model to be expressed as a common optimization model in the form f(φ, w), allowing many standard non-linear optimization algorithms to be applied directly in the search for the parameter vector w that best fits the probability of a class C1 given an input vector φ.
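A minimal sketch of this reformulation in C# (the names are hypothetical; the weight vector w is assumed to store the intercept in its last position):

```csharp
using System;

public static class LogisticModelExample
{
    // φ(<x1, ..., xn>) = <x1, ..., xn, 1>: appends a constant input that absorbs the intercept.
    public static double[] Phi(double[] x)
    {
        var phi = new double[x.Length + 1];
        Array.Copy(x, phi, x.Length);
        phi[x.Length] = 1.0;
        return phi;
    }

    // p(C1|x) = g(w^T φ(x)), where g is the logistic sigmoid.
    public static double Probability(double[] w, double[] x)
    {
        double[] phi = Phi(x);
        double z = 0.0;
        for (int i = 0; i < w.Length; i++)
            z += w[i] * phi[i];
        return 1.0 / (1.0 + Math.Exp(-z));
    }
}
```

With an all-zero weight vector the model is indifferent and outputs probability 0.5 for any observation.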

Likelihood function

The likelihood function for the logistic model is given by:

L(w) = ∏i yi^ti (1 − yi)^(1−ti)

Here yi = p(C1|xi) and ti ∈ {0, 1} is the observed outcome for the i-th sample.

but, as the log of products equals the sum of logs, taking the log of the likelihood function results in the Log-likelihood function in the form:

ln L(w) = ∑i [ ti ln(yi) + (1 − ti) ln(1 − yi) ]

Furthermore, if we take the negative of the log-likelihood, we will have an error function called the cross-entropy error function:

E(w) = − ∑i [ ti ln(yi) + (1 − ti) ln(1 − yi) ]

which gives both better numerical accuracy and enables us to write the error gradient in the same form as the gradient of the sum-of-squares error function for linear regression models (Bishop, 2006).

Another important detail is that the cross-entropy error surface is convex, so it has no spurious local minima: any local minimum is also a global minimum, so one does not need to worry about getting trapped in a valley when descending the error function.

Iterative Reweighted Least-Squares

The method of Iterative Reweighted Least-Squares is commonly used to find the maximum likelihood estimates of a generalized linear model. In most common applications, an iterative Newton-Raphson algorithm is used to calculate those maximum likelihood values for the parameters. This algorithm uses second order information, represented in the form of a Hessian matrix, to guide incremental coefficient changes. This is also the algorithm used in this implementation.

In the Accord.NET machine learning framework, the Iterative Reweighted Least-Squares algorithm is implemented in the IterativeReweightedLeastSquares class (source).
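For intuition, here is a compact, unoptimized sketch of a single IRLS (Newton-Raphson) update. This is not the Accord.NET implementation and all names are illustrative; the design matrix is assumed to already contain a column of ones for the intercept. The update is w ← w + (XᵀRX)⁻¹ Xᵀ(t − y), where y holds the current predicted probabilities and R = diag(y(1 − y)).

```csharp
using System;

public static class IrlsExample
{
    public static double Sigmoid(double z) => 1.0 / (1.0 + Math.Exp(-z));

    // One Newton-Raphson step: w <- w + (X^T R X)^{-1} X^T (t - y).
    public static double[] Step(double[][] X, double[] t, double[] w)
    {
        int n = X.Length, p = w.Length;
        var H = new double[p, p]; // X^T R X (the Hessian of the cross-entropy error)
        var g = new double[p];    // X^T (t - y) (minus the error gradient)
        for (int i = 0; i < n; i++)
        {
            double y = Sigmoid(Dot(X[i], w));
            double r = y * (1.0 - y); // weight for this sample
            for (int j = 0; j < p; j++)
            {
                g[j] += X[i][j] * (t[i] - y);
                for (int k = 0; k < p; k++)
                    H[j, k] += X[i][j] * r * X[i][k];
            }
        }
        double[] delta = Solve(H, g); // solve H * delta = g
        var wNew = new double[p];
        for (int j = 0; j < p; j++) wNew[j] = w[j] + delta[j];
        return wNew;
    }

    static double Dot(double[] a, double[] b)
    {
        double s = 0.0;
        for (int i = 0; i < a.Length; i++) s += a[i] * b[i];
        return s;
    }

    // Gaussian elimination with partial pivoting for the small p x p system.
    static double[] Solve(double[,] A, double[] b)
    {
        int n = b.Length;
        var M = (double[,])A.Clone();
        var x = (double[])b.Clone();
        for (int col = 0; col < n; col++)
        {
            int piv = col;
            for (int r = col + 1; r < n; r++)
                if (Math.Abs(M[r, col]) > Math.Abs(M[piv, col])) piv = r;
            for (int c = 0; c < n; c++) { double tmp = M[col, c]; M[col, c] = M[piv, c]; M[piv, c] = tmp; }
            { double tmp = x[col]; x[col] = x[piv]; x[piv] = tmp; }
            for (int r = col + 1; r < n; r++)
            {
                double f = M[r, col] / M[col, col];
                for (int c = col; c < n; c++) M[r, c] -= f * M[col, c];
                x[r] -= f * x[col];
            }
        }
        for (int r = n - 1; r >= 0; r--)
        {
            for (int c = r + 1; c < n; c++) x[r] -= M[r, c] * x[c];
            x[r] /= M[r, r];
        }
        return x;
    }
}
```

Iterating this step until the parameter change becomes negligible yields the maximum likelihood estimates, provided the Hessian stays well-conditioned (the regularized variants used in practice guard against ill-conditioning).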

Source Code

Below is the main code segment for the logistic regression, performing one pass of the Iterative Reweighted Least-Squares algorithm.

Furthermore, as the cross-entropy error function is convex, the Logistic Regression Analysis can perform regression without having to experiment with different starting points. Below is the code to compute a logistic regression analysis. The algorithm iterates until the largest absolute parameter change between two iterations becomes less than a given limit, or the maximum number of iterations is reached.

Using the code

Code usage is very simple. To perform a Logistic Regression Analysis, simply create an object of the type LogisticRegressionAnalysis and pass the input and output data as constructor parameters. Optionally you can pass the variables names as well.

Then just call Compute() to compute the analysis.

After that, you can display information about the regression coefficients by binding the CoefficientCollection to a DataGridView.

Sample application

The sample application creates Logistic Regression Analyses by reading Excel workbooks. It can also perform standard Linear Regressions, although there aren’t many options available to specify linear models.

Example data set adapted from http://www.statsdirect.co.uk/help/regression_and_correlation/logi.htm.

The image on the left shows information about the analysis performed, such as coefficient values, standard errors, p-value for the Wald statistic, Odds ratio and 95% confidence intervals. It also reports the log-likelihood, deviance and likelihood-ratio chi-square test for the final model. The newest available version can also compute likelihood-ratio tests for each coefficient, although not shown in the image.

Final Considerations

The logistic regression model can be seen as exactly equivalent to a neural network with a single output neuron using a sigmoid activation function and no hidden layer, trained by a Newton's method learning algorithm. Thus we can say Logistic Regression is just a special case of Neural Networks. However, one possible (and maybe unique) advantage of logistic regression is that it allows a simpler interpretation of its results in the form of odds ratios and statistical hypothesis testing. Statistical analysis using neural networks is also possible, but some may argue it is not as straightforward as using ordinary logistic regression.

Multilayer perceptron neural networks with sigmoid activation functions can be created using the Accord.Neuro package and namespace.

The Kernel Trick

K(x,y) = <φ(x),φ(y)>

In machine learning, the kernel trick is a trick that seems naive, but has an almost unbelievable power: it transforms any linear algorithm that can be expressed in terms of inner products into a non-linear algorithm.

The idea is almost amusing. You have a technique (such as PCA) that only lets you work with linear functions, and you have some arbitrary non-linear function that does not meet this criterion. With the kernel trick, you can still make your technique work. All you have to do is increase the number of dimensions of the space you are working in. And increase it a lot. More specifically, you can simply move your problem into a space in which there is an independent dimension for each possible input of your function!

Video by Udi Aharoni demonstrating how points that are not linearly separable in a two-dimensional space can almost always be linearly separated in higher-dimensional spaces.

Once this mapping is done, any function can be represented as a linear operation, because all possible inputs will be completely independent (since each will be located in a different dimension)! But of course, if your function accepts a continuous range of inputs, this will require an infinite-dimensional space, such as a Hilbert space, which is difficult to work with. In many applications, such as PCA, all you need is an inner product, which in this case you can compute in the original (low-dimensional) space. And computing this inner product is the role of the kernel function.

See also:

Kernel Discriminant Analysis in C#

Kernel (Fisher) discriminant analysis (kernel FDA) is a non-linear generalization of linear discriminant analysis (LDA) using techniques of kernel methods. Using a kernel, the originally linear operations of LDA are done in a reproducing kernel Hilbert space with a non-linear mapping.

This code has also been incorporated in Accord.NET Framework, which includes the latest version of this code plus many other statistics and machine learning tools.

Analysis Overview

KDA is an extension of LDA to non-linear distributions, just as KPCA is to PCA. For information about LDA, please check the previous post, Linear Discriminant Analysis in C#. The algorithm presented here is a multi-class generalization of the original algorithm by Mika et al. in Fisher discriminant analysis with kernels (1999).

The objective of KDA is to find a transformation maximizing the between-class variance and minimizing the within-class variance. It can be shown that, with Kernels, the original objective function can be expressed as:

$J(\alpha) = \frac{\alpha^T S_B \alpha}{\alpha^T S_W \alpha}$

with:

$S_B = \sum_c l_c (u_c - u)(u_c - u)^T \mbox{~~~and~~~} S_W = \sum_c K_c (I - 1_{l_c}) K_c^T$

where Kc is the Kernel matrix for class c, uc is the column means vector for Kc, u is the overall means vector, I is the identity matrix, lc is the number of samples in class c and 1lc is an lc x lc matrix with all entries equal to 1/lc.

The Kernel Trick

The Kernel trick transforms any algorithm that solely depends on the dot product between two vectors. Wherever a dot product is used, it is replaced with the kernel function.

K(x,y) = <φ(x),φ(y)>

Thus, a linear algorithm can easily be transformed into a non-linear algorithm. So the algorithm can be carried in a higher-dimension space without explicitly mapping the input points into this space. For more information and a cool video showing how the kernel trick works, please see the introductory text in the previous post, Kernel Principal Component Analysis in C#.
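A tiny numerical illustration: for the homogeneous polynomial kernel of degree 2 on R², the explicit feature map is known in closed form, φ(x) = (x1², √2·x1x2, x2²), so we can check that the kernel value equals the inner product in feature space. The code below is illustrative only, not part of the framework.

```csharp
using System;

// For k(x, y) = (x · y)^2 with x, y in R^2, the explicit feature map is
// φ(x) = (x1^2, sqrt(2) x1 x2, x2^2). The kernel computes <φ(x), φ(y)>
// without ever building φ explicitly.
public static class KernelTrickExample
{
    public static double PolyKernel(double[] x, double[] y)
    {
        double dot = x[0] * y[0] + x[1] * y[1];
        return dot * dot;
    }

    public static double[] Phi(double[] x)
    {
        return new[] { x[0] * x[0], Math.Sqrt(2.0) * x[0] * x[1], x[1] * x[1] };
    }

    public static double ExplicitInnerProduct(double[] x, double[] y)
    {
        double[] px = Phi(x), py = Phi(y);
        double sum = 0.0;
        for (int i = 0; i < px.Length; i++)
            sum += px[i] * py[i];
        return sum;
    }
}
```

Both computations give the same number, but the kernel never builds φ, which is what makes the trick viable when φ lives in a very high (or infinite) dimensional space.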

As the problem proposed above is ill-posed (we are estimating l dimensional covariance structures from l samples), the Sw matrix may become singular, and computing the objective function may become problematic. To avoid singularities, we can add a multiple of the identity matrix to the Sw matrix:

$S_W := S_W + \lambda I$

The λ term adds numerical stability, as for large λ the matrix Sw becomes positive definite. It can also be seen as a regularization term, favoring solutions with small expansion coefficients [Mika et al, 1999].

Source Code

The code below implements a generalization of the original Kernel Discriminant Analysis algorithm by Mika et al. in Fisher discriminant analysis with kernels (1999).

After completing the analysis, we may wish to project new data into discriminant space. The following code projects a data matrix into this space using the basis found during the analysis.

Sample Application

The sample application shows how Kernel Discriminant Analysis works. The application can load Excel worksheets containing tables in the same format as the included sample worksheet.

The first image shows the Wikipedia example for Kernel Principal Component Analysis. The second image shows the analysis carried out with a Gaussian kernel with sigma = 3.6. The third image shows the kernel between-class and within-class equivalent matrices. The last image shows the subset spanned by each class and its kernel scatter matrix.

The picture on the left shows the projection of the original data set onto the first two discriminant dimensions. Note that just one dimension would already be sufficient to linearly separate the classes. On the right, the picture shows how a point in input space (given by the mouse cursor) gets mapped into feature space.

Linear Discriminant Analysis equivalent as a special case

We can also check that a projection equivalent to Linear Discriminant Analysis can be obtained by using a Linear kernel in the Kernel Discriminant Analysis.

Linear Discriminant Analysis in C#

Linear discriminant analysis (LDA) is a method used in statistics and machine learning to find a linear combination of features which best characterize or separates two or more classes of objects or events. The resulting combination may be used as a linear classifier, or, more commonly, for dimensionality reduction before later classification.

The code presented here is also part of the Accord.NET Framework. The Accord.NET Framework is a framework for developing machine learning, computer vision, computer audition, statistics and math applications. It is based on the already excellent AForge.NET Framework. Please see the starting guide for more details. The latest version of the framework includes the latest version of this code plus many other statistics and machine learning tools.

Motivations

The goals of LDA are somewhat similar to those of PCA. But unlike LDA, PCA is an unsupervised technique and as such does not use the label information of the data, effectively ignoring this often useful information. For instance, consider two cigar-like clusters in 2 dimensions, one cigar having y = c and the other y = –c (with c being an arbitrary constant), as the image below suggests:

This example was adapted from the
note on Fisher Linear Discriminant Analysis by Max Welling.

The cigars are positioned in parallel and very closely together, such that the variance in the total dataset, ignoring the labels, is in the direction of the cigars. For classification, this would be a terrible projection, because all labels will get mixed and we will destroy any useful information:

A much more useful projection is orthogonal to the cigars, which would perfectly separate the data-cases:

The first row of images was obtained by performing PCA on the original data. The left image is the PCA projection using the first two components. However, as the analysis says the first component accounts for 96% of the information, one can infer all other components could be safely discarded. The result is the image on the right, clearly a not very useful projection.

The second row of images was obtained by performing LDA on the original data. The left image is the LDA projection using the first two dimensions. However, as the analysis says the first dimension accounts for 100% of the information, all other dimensions could be safely discarded. Well, this is actually true, as we can see on the result in the right image.

Analysis Overview

Linear Discriminant Analysis is closely related to principal component analysis (PCA) and factor analysis in that all three look for linear combinations of variables which best explain the data. LDA explicitly attempts to model the difference between the classes of data. PCA, on the other hand, does not take into account any difference in class, and factor analysis builds the feature combinations based on differences rather than similarities. Linear Discriminant Analysis considers maximizing the following objective:

$J(w) = \frac{w^T S_B w}{w^T S_W w}$

where

$S_B = \sum_c (\mu_c - \bar{x})(\mu_c - \bar{x})^T \mbox{~~~and~~~} S_W = \sum_c \sum_{i \in c} (x_i - \mu_c)(x_i - \mu_c)^T$

are the Between-Class Scatter Matrix and Within-Class Scatter Matrix, respectively. The optimal solution can be found by computing the eigenvalues of $S_W^{-1} S_B$ and taking the eigenvectors corresponding to the largest eigenvalues to form a new basis for the data. Equivalently, those can be obtained by computing the generalized eigenvalue decomposition of SB and SW.
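As a sketch, the within-class scatter matrix can be accumulated directly as sums of outer products of deviations from each class mean. The code below is illustrative only (SB would be built analogously from the class means and the overall mean).

```csharp
using System;

// Within-class scatter matrix S_W for classes of d-dimensional points,
// accumulated as sums of (x - mean_c)(x - mean_c)^T over every class c.
public static class ScatterExample
{
    public static double[,] WithinClassScatter(double[][][] classes)
    {
        int dim = classes[0][0].Length;
        var sw = new double[dim, dim];
        foreach (double[][] samples in classes)
        {
            // compute the class mean
            var mean = new double[dim];
            foreach (double[] x in samples)
                for (int j = 0; j < dim; j++)
                    mean[j] += x[j] / samples.Length;

            // accumulate the outer products of the deviations
            foreach (double[] x in samples)
                for (int j = 0; j < dim; j++)
                    for (int k = 0; k < dim; k++)
                        sw[j, k] += (x[j] - mean[j]) * (x[k] - mean[k]);
        }
        return sw;
    }
}
```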

Source Code

The source code below shows the main algorithm for Linear Discriminant Analysis.

Using The Code

Code usage is simple, as it follows the same object model in the previous PCA example.

Considerations

Besides being useful in many situations, in many practical cases linear discriminants are not suitable. A nice insight is that LDA can be extended for use in non-linear classification via the kernel trick. Using kernels, the original observations can be effectively mapped into a higher dimensional non-linear space. Linear classification in this non-linear space is then equivalent to non-linear classification in the original space.

While the code available here already contains code for Kernel Discriminant Analysis, this is something I’ll address in the next post. If you have any suggestions or questions, please leave me a comment.

Finally, as usual, I hope someone finds this information useful.

Kernel Principal Component Analysis in C#

Kernel principal component analysis (kernel PCA) is an extension of principal component analysis (PCA) using techniques of kernel methods. Using a kernel, the originally linear operations of PCA are done in a reproducing kernel Hilbert space with a non-linear mapping.

The code presented here is also part of the Accord.NET Framework. The Accord.NET Framework is a framework for developing machine learning, computer vision, computer audition, statistics and math applications. It is based on the already excellent AForge.NET Framework. Please see the starting guide for more details. The latest version of the framework includes the latest version of this code plus many other statistics and machine learning tools.

Analysis Overview

KPCA is an extension of PCA to non-linear distributions. Instead of directly doing a PCA, the original data points are mapped into a higher-dimensional (possibly infinite-dimensional) feature space defined by a (usually nonlinear) function φ through a mathematical process called the “kernel trick”.

The Kernel trick

The kernel trick is a very interesting technique, able to transform any algorithm that solely depends on the dot product between two vectors. To apply the trick, we replace all dot products in an algorithm with a kernel function. Thus, a linear algorithm can easily be transformed into a non-linear algorithm. This non-linear algorithm is equivalent to the linear algorithm operating in the range space of φ. However, because kernels are used, the φ function is never explicitly computed. This is desirable [8], as the high-dimensional space can have an arbitrarily large dimensionality, or can even be infinite-dimensional (such as when the chosen kernel function is a Gaussian) and thus be infeasible to compute.

Interesting video by Udi Aharoni demonstrating how points that are not linearly separable in a 2D space can almost always be linearly separated in a higher dimensional (3D) space.

Principal Components in Kernel Space

Like in PCA, the overall idea is to perform a transformation that will maximize the variance of the captured variables while minimizing the overall covariance between those variables. Using the kernel trick, the covariance matrix is substituted by the Kernel matrix and the analysis is carried out analogously in feature space. An eigenvalue decomposition is performed and the eigenvectors are sorted in descending order of eigenvalue, so that those vectors form a basis in feature space that explains most of the variance in the data in its first dimensions.

However, because the principal components are in feature space, we will not be directly performing dimensionality reduction. Suppose that the number of observations m exceeds the input dimensionality n. In linear PCA, we can find at most n nonzero eigenvalues. On the other hand, using Kernel PCA we can find up to m nonzero eigenvalues because we will be operating on an m x m kernel matrix [2].

A more detailed discussion can be seen in the works of Mika et al. (Kernel PCA and De-Noising in Feature Spaces) and Scholkopf et al. (Nonlinear Component Analysis as a Kernel Eigenvalue Problem). A more accessible explanation of the topic is also available in the work from Ian Fasel.

The pre-image problem

In standard PCA it was possible to revert the transformation, retrieving the original data points and enabling the use of PCA for compression* and denoising. However, as the results provided by kernel PCA live in some high-dimensional feature space, they need not have pre-images in input space. In fact, even if a pre-image exists, it need not be unique.

Thus, in KPCA, it is only possible to obtain an approximate pre-image for a given set of patterns projected on the feature space. This problem can (and has) been tackled by many approaches, some of them of iterative nature, but some closed-form solutions also exist. Typically those solutions make use of the fact that distances in feature space are related to distances in input space. Thus, those solutions try to achieve an optimal transformation that can embed those feature points in input space respecting those distance relationships.

One of the first solutions of this form was proposed by Kwok and Tsang in their paper “The Pre-Image Problem in Kernel Methods”. A better approach is also described by Bakir in his thesis “Extensions to Kernel Dependency Estimation”, along with a comprehensive description of various other methods for handling the pre-image problem.

Note: actually the use of KPCA for compression is a bit limited, since one needs to store the original data set to compute transformations and its approximate reversion.

Code Overview

Below is the main code behind KPCA. This is a direct, naive implementation of the algorithm that may not scale well to very large datasets. It is implemented on top of the standard PCA code presented previously, and thus on top of the AForge.NET Framework.

Next is the main code behind the reversion of the projection. It is a direct, naive implementation of the closed-form solution proposed by Kwok and Tsang. It is currently completely unoptimized, but this makes it a clearer illustration of the algorithm in its essential form. The source code available for download may be better optimized in the future.

Using the Code

As with PCA, code usage is simple. First, choose a kernel implementing the IKernel interface, then pass it to the KernelPrincipalComponentAnalysis constructor together with the data points.

Then, simply call Compute() to perform the analysis. Once done, information about the analysis will be available through the object's properties, and transformations of additional data can be computed by calling Transform().
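The steps above can be sketched as follows. This is a minimal usage example following the API described here (IKernel, the KernelPrincipalComponentAnalysis constructor, Compute() and Transform()); the data values and the Gaussian kernel width are illustrative, not taken from the sample application:

```csharp
using Accord.Statistics.Analysis;
using Accord.Statistics.Kernels;

// Input data: one observation per row.
double[,] data =
{
    { 2.5, 2.4 },
    { 0.5, 0.7 },
    { 2.2, 2.9 },
    { 1.9, 2.2 },
};

// Choose a kernel implementing the IKernel interface;
// here, a Gaussian kernel with an (illustrative) sigma of 1.0.
IKernel kernel = new Gaussian(1.0);

// Create the analysis from the data points and the chosen kernel.
var kpca = new KernelPrincipalComponentAnalysis(data, kernel);

// Perform the analysis.
kpca.Compute();

// Project the original (or any additional) data
// into kernel principal component space.
double[,] projection = kpca.Transform(data);
```

The same Transform() call can later be applied to new observations that were not part of the original analysis.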

Sample Application

To demonstrate the use of KPCA, I created a simple Windows Forms Application which performs simple statistical analysis and KPCA transformations. It can also perform projection reversion by solving the approximate pre-image problem.

Left: Schölkopf's toy example. Middle: kernel principal components detected by the KPCA. Right: the first two dimensions obtained from the first two kernel principal components.

Left: Wikipedia example. Middle: Projection using a non-centered Gaussian kernel. Right: Reconstruction of the original data using all Kernel principal components, showing some denoising ability.

Linear PCA as a special case

Additionally, we can verify that linear PCA is indeed a special case of kernel PCA by using a linear kernel on the example given by Lindsay Smith in his A Tutorial On Principal Component Analysis.
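A sketch of this check, using the small two-variable data set from Smith's tutorial (values reproduced here for convenience) and the Linear kernel in place of the Gaussian one:

```csharp
using Accord.Statistics.Analysis;
using Accord.Statistics.Kernels;

// The two-variable data set from Lindsay Smith's PCA tutorial.
double[,] data =
{
    { 2.5, 2.4 }, { 0.5, 0.7 }, { 2.2, 2.9 }, { 1.9, 2.2 },
    { 3.1, 3.0 }, { 2.3, 2.7 }, { 2.0, 1.6 }, { 1.0, 1.1 },
    { 1.5, 1.6 }, { 1.1, 0.9 },
};

// With a linear kernel, k(x, y) = <x, y>, so kernel PCA
// reduces to standard (linear) PCA.
IKernel kernel = new Linear();

var kpca = new KernelPrincipalComponentAnalysis(data, kernel);
kpca.Compute();

// The projection should match the one produced by standard PCA
// on the same data (up to sign of the components).
double[,] projection = kpca.Transform(data);
```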

Discriminatory Power Analysis by Receiver-Operating Characteristic Curves (Part 2 of 2: C# Source Code)

Part 2 of 2: C# Source Code. Go to Part 1 of 2: Theory.

Happy new year everyone! As promised, here is the second part of the article Performing Discriminatory Power Analysis by Receiver-Operating Characteristic Curves. This second part shows how to create ROC curves in C#. The following C# code implements the creation, visualization and analysis of ROC curves. The sample application accompanying the source code demonstrates how to use them inside your own applications.

Source code

Confusion matrix

The ConfusionMatrix class represents a 2×2 contingency table. All the derived measures discussed in the first part of this article, such as sensitivity and specificity, are available through its properties and methods.
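As a refresher on what those measures are, here is a minimal, self-contained sketch of how they follow from the four cells of a 2×2 table. This mirrors what the ConfusionMatrix class exposes, but is not the framework class itself; the cell counts are made up for illustration:

```csharp
using System;

class ConfusionMeasuresDemo
{
    static void Main()
    {
        // The four cells of a 2x2 contingency table (illustrative counts).
        int tp = 40, fn = 10;  // actual positives: hits and misses
        int tn = 45, fp = 5;   // actual negatives: correct rejections and false alarms

        // Sensitivity (true positive rate): fraction of actual positives detected.
        double sensitivity = (double)tp / (tp + fn);

        // Specificity (true negative rate): fraction of actual negatives rejected.
        double specificity = (double)tn / (tn + fp);

        // Overall accuracy across all cases.
        double accuracy = (double)(tp + tn) / (tp + tn + fp + fn);

        Console.WriteLine("Sensitivity: {0}", sensitivity); // 0.8
        Console.WriteLine("Specificity: {0}", specificity); // 0.9
        Console.WriteLine("Accuracy:    {0}", accuracy);    // 0.85
    }
}
```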

The ReceiverOperatingCharacteristic class represents a ROC curve. The collection of points defining the curve is available through the Points property. Because each point is an object inheriting from ConfusionMatrix, the collection can be bound directly to a DataGridView, enabling quick visual inspection of the curve.

Additional information, such as the total area under the curve and its associated standard error, is available through the Area and Error properties, respectively.

Using the code

Code usage is very simple. To create a ROC curve using the aforementioned classes, simply create a new ReceiverOperatingCharacteristic object, passing the actual data, as measured by the experiment, and the test data, as given by the prediction model.

The actual data must be a dichotomous variable, with only two valid values. The “false” value is assumed to be the lowest value of the dichotomy, while the “true” value is assumed to be the highest. It is recommended to use 0 and 1 for simplicity, although this isn’t mandatory.

The test data, as given by the prediction model, must have continuous values between the lowest and highest values of the actual data. For example, if the two valid values are 0 and 1, then the test values must lie inside the [0,1] range.

To compute the curve using different cutoff values, call the Compute method, passing either the desired number of points or the desired increment in the cutoff value between points.
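Putting the above together, a minimal sketch of the whole workflow looks like this. It follows the API exactly as described in this article (constructor taking actual and test data, Compute with a point count, and the Points, Area and Error properties); the data values are illustrative:

```csharp
using System;
using Accord.Statistics.Analysis;

// Actual (dichotomous) outcomes: 0 = negative, 1 = positive.
double[] actual = { 0, 0, 0, 0, 1, 1, 1, 1 };

// Prediction scores from some model, continuous in the [0,1] range.
double[] test = { 0.1, 0.3, 0.4, 0.6, 0.5, 0.7, 0.8, 0.9 };

// Create the ROC curve from the measured and predicted data.
var roc = new ReceiverOperatingCharacteristic(actual, test);

// Compute the curve using 100 cutoff points.
roc.Compute(100);

// Inspect the results; roc.Points can also be bound to a DataGridView.
Console.WriteLine("Area under the curve: " + roc.Area);
Console.WriteLine("Standard error:       " + roc.Error);
```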

Sample applications

Together with the source code, there is an accompanying sample application demonstrating the use of the ROC analysis. To open the example table, click File –> Load and select the Excel spreadsheet located in the executable's folder. To plot the curve, select the number of points or the threshold increment and click Plot.

A “good” classifier.

An approximately “random” classifier.

A perfect, ideal classifier.

Curve points for the “good” classifier.