Kernel Support Vector Machines for Classification and Regression in C#

Kernel methods in general have gained increased attention in recent years, partly due to the growth in popularity of Support Vector Machines. Support Vector Machines are linear classifiers and regressors that, through the Kernel trick, operate in reproducing kernel Hilbert spaces and are thus able to perform non-linear classification and regression in their input space.

Foreword

If you would like to use SVMs in your .NET applications, download the Accord.NET Framework through NuGet. Afterwards, creating support vector machines for binary and multi-class problems with a variety of kernels becomes very easy. The Accord.NET Framework is an LGPL framework which can be used freely in commercial, closed-source, open-source or free applications. This article explains a bit of how the SVM algorithms and the overall SVM module were designed before being added to the framework.

Contents

  1. Introduction
    1. Support Vector Machines
    2. Kernel Support Vector Machines
      1. The Kernel Trick
      2. Standard Kernels
  2. Learning Algorithms
    1. Sequential Minimal Optimization
  3. Source Code
    1. Support Vector Machine
    2. Kernel Support Vector Machine
    3. Sequential Minimal Optimization
  4. Using the code
  5. Sample application
    1. Classification
    2. Regression
  6. See also
  7. References

Introduction

Support Vector Machines

Support vector machines (SVMs) are a set of related supervised learning methods used for classification and regression. In simple words, given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that predicts whether a new example falls into one category or the other. Intuitively, an SVM model is a representation of the examples as points in space, mapped so that the examples of the separate categories are divided by a clear gap that is as wide as possible. New examples are then mapped into that same space and predicted to belong to a category based on which side of the gap they fall on.

A linear support vector machine is composed of a set of given support vectors z and a set of weights w. The computation for the output of a given SVM with N support vectors z1, z2, … , zN and weights w1, w2, … , wN is then given by:

F(x) = \sum_{i=1}^{N} w_i \, \langle z_i, x \rangle + b

Kernel Support Vector Machines

The original optimal hyperplane algorithm proposed by Vladimir Vapnik in 1963 was a linear classifier. However, in 1992, Bernhard Boser, Isabelle Guyon and Vapnik suggested a way to create non-linear classifiers by applying the kernel trick (originally proposed by Aizerman et al.) to maximum-margin hyperplanes. The resulting algorithm is formally similar, except that every dot product is replaced by a non-linear kernel function. This allows the algorithm to fit the maximum-margin hyperplane in a transformed feature space. The transformation may be non-linear and the transformed space high dimensional; thus though the classifier is a hyperplane in the high-dimensional feature space, it may be non-linear in the original input space.

Using kernels, the original SVM formulation for a machine with support vectors z1, z2, … , zN and weights w1, w2, … , wN becomes:

F(x) = \sum_{i=1}^{N} w_i \, k(z_i, x) + b

It is also straightforward to see that, using a linear kernel of the form k(z, x) = \langle z, x \rangle = z^T x, we recover the original formulation for the linear SVM.

The Kernel trick

The Kernel trick is a very interesting and powerful tool. It is powerful because it provides a bridge from linearity to non-linearity to any algorithm that solely depends on the dot product between two vectors. It comes from the fact that, if we first map our input data into a higher-dimensional space, a linear algorithm operating in this space will behave non-linearly in the original input space.

Now, the Kernel trick is really interesting because that mapping never needs to be computed. If our algorithm can be expressed only in terms of an inner product between two vectors, all we need to do is replace this inner product with the inner product from some other suitable space. That is where the “trick” resides: wherever a dot product is used, it is replaced with a Kernel function. The kernel function denotes an inner product in feature space and is usually written as:

k(x, y) = \langle \varphi(x), \varphi(y) \rangle

Using the Kernel function, the algorithm can then be carried into a higher-dimension space without explicitly mapping the input points into this space. This is highly desirable, as sometimes our higher-dimensional feature space could even be infinite-dimensional and thus infeasible to compute.
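
As a quick, concrete illustration (not part of the original text), take the homogeneous polynomial kernel of degree two on two-dimensional inputs. Expanding the square shows it is an ordinary inner product in a three-dimensional feature space, even though the mapping φ is never computed explicitly:

k(x, y) = (x^T y)^2 = x_1^2 y_1^2 + 2 x_1 x_2 y_1 y_2 + x_2^2 y_2^2
        = \langle \varphi(x), \varphi(y) \rangle,
\quad \text{where } \varphi(x) = (x_1^2, \sqrt{2}\, x_1 x_2, x_2^2).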

Standard Kernels

Some common Kernel functions include the linear kernel, the polynomial kernel and the Gaussian kernel. Below is a simple list with their most interesting characteristics.

 

  • Linear Kernel: the simplest kernel function. It is given by the common inner product ⟨x, y⟩ plus an optional constant c. Kernel algorithms using a linear kernel are often equivalent to their non-kernel counterparts, i.e. KPCA with a linear kernel is equivalent to standard PCA.

    k(x, y) = x^T y + c

  • Polynomial Kernel: a non-stationary kernel, well suited for problems where all the data is normalized.

    k(x, y) = (\alpha x^T y + c)^d

  • Gaussian Kernel: by far one of the most versatile kernels. It is a radial basis function kernel, and is the preferred kernel when we don’t know much about the data we are trying to model.

    k(x, y) = \exp\left(-\frac{\lVert x - y \rVert^2}{2\sigma^2}\right)

For more Kernel functions, check Kernel functions for Machine Learning Applications. The accompanying source code includes definitions for over 20 distinct Kernel functions, many of them detailed in the aforementioned post.
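
To make the formulas above concrete, here is a minimal sketch of how such kernels can be expressed in C#. The IKernel interface below mirrors the shape of the framework's kernel interface (a single Function method over two vectors), but treat the listing as an illustration, not the framework's exact source:

public interface IKernel
{
    double Function(double[] x, double[] y);
}

public class Linear : IKernel
{
    public double Constant { get; set; }
    public Linear(double constant = 0) { Constant = constant; }

    // k(x, y) = x'y + c
    public double Function(double[] x, double[] y)
    {
        double sum = Constant;
        for (int i = 0; i < x.Length; i++)
            sum += x[i] * y[i];
        return sum;
    }
}

public class Polynomial : IKernel
{
    public int Degree { get; set; }
    public double Constant { get; set; }
    public Polynomial(int degree, double constant = 1) { Degree = degree; Constant = constant; }

    // k(x, y) = (x'y + c)^d, taking alpha as 1 for simplicity
    public double Function(double[] x, double[] y)
    {
        double sum = Constant;
        for (int i = 0; i < x.Length; i++)
            sum += x[i] * y[i];
        return System.Math.Pow(sum, Degree);
    }
}

public class Gaussian : IKernel
{
    public double Sigma { get; set; }
    public Gaussian(double sigma) { Sigma = sigma; }

    // k(x, y) = exp(-||x - y||^2 / (2 sigma^2))
    public double Function(double[] x, double[] y)
    {
        double norm = 0;
        for (int i = 0; i < x.Length; i++)
        {
            double d = x[i] - y[i];
            norm += d * d;
        }
        return System.Math.Exp(-norm / (2 * Sigma * Sigma));
    }
}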

Learning Algorithms

Sequential Minimal Optimization

Previous SVM learning algorithms involved the use of quadratic programming solvers. Some of them used chunking to split the problem into smaller parts which could be solved more efficiently. Platt’s Sequential Minimal Optimization (SMO) algorithm takes chunking to the extreme by breaking the problem down into 2-dimensional sub-problems that can be solved analytically, eliminating the need for a numerical optimization algorithm.

The algorithm makes use of Lagrange multipliers to solve the optimization problem. Platt’s algorithm is composed of three main procedures:

  • run, which iterates over all points until convergence to a tolerance threshold;
  • examineExample, which finds two points to jointly optimize;
  • takeStep, which solves the 2-dimensional optimization problem analytically.

The algorithm is also governed by three extra parameters besides the Kernel function and the data points.

  • The parameter C controls the trade-off between allowing some training errors and forcing rigid margins. Increasing the value of C increases the cost of misclassifications, but may result in models that do not generalize well to points outside the training set.
  • The parameter ε controls the width of the ε-insensitive zone used to fit the training data. The value of ε can affect the number of support vectors used to construct the regression function. The bigger ε is, the fewer support vectors are selected and the sparser the solution becomes. On the other hand, increasing ε too much results in less accurate models.
  • The parameter T is the convergence tolerance. It is the criterion for completing the training process.

After the algorithm ends, a new Support Vector Machine can be created using only the points whose Lagrange multipliers are greater than zero. The expected outputs yi can be individually multiplied by their corresponding Lagrange multipliers αi to form a single weight vector w.

F(x) = \sum_{i=1}^{N} \alpha_i y_i \, k(z_i, x) + b = \sum_{i=1}^{N} w_i \, k(z_i, x) + b

Sequential Minimal Optimization for Regression

A version of SVM for regression was proposed in 1996 by Vladimir Vapnik, Harris Drucker, Chris Burges, Linda Kaufman and Alex Smola. The method was called support vector regression and, as is the case with the original Support Vector Machine formulation, depends only on a subset of the training data, because the cost function for building the model ignores any training data within a tolerance threshold ε of the model prediction.

Platt’s algorithm has also been modified for regression. While it retains much of its original structure, the modified algorithm uses two Lagrange multipliers, âi and ai, for each input point i. After the algorithm ends, a new Support Vector Machine can be created using only the points for which at least one of the two multipliers is greater than zero. The difference âi − ai then forms a single weight wi.

F(x) = \sum_{i=1}^{N} (\hat{\alpha}_i - \alpha_i) \, k(z_i, x) + b = \sum_{i=1}^{N} w_i \, k(z_i, x) + b

The algorithm is also governed by the same three parameters presented above. The parameter ε, however, receives a special meaning: it governs the size of the ε-insensitive tube around the regression line. The algorithm was further developed and adapted by Alex J. Smola and Bernhard Schölkopf, and further optimizations were introduced by Shevade et al. and Flake et al.

Source Code

Here is the class diagram for the Support Vector Machine module. We can see it is very simple in terms of standard class organization.

[Figure: Class diagram for the (Kernel) Support Vector Machines module.]

Support Vector Machine

Below is the class definition for the Linear Support Vector Machine. It is pretty much self-explanatory.
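
(The original listing is not reproduced on this page; the following is a minimal sketch of the same design, using the member names adopted throughout this article. The framework's actual source may differ in its details.)

// A linear support vector machine: a set of support vectors z_i,
// their weights w_i, and a threshold (bias) term b.
public class SupportVectorMachine
{
    public SupportVectorMachine(int inputs)
    {
        this.Inputs = inputs;
    }

    public int Inputs { get; private set; }          // dimension of the input space
    public double[][] SupportVectors { get; set; }   // the vectors z_i
    public double[] Weights { get; set; }            // the weights w_i
    public double Threshold { get; set; }            // the bias term b

    // Computes F(x) = sum_i w_i * <z_i, x> + b
    public virtual double Compute(double[] input)
    {
        double sum = Threshold;
        for (int i = 0; i < SupportVectors.Length; i++)
        {
            double dot = 0;
            for (int j = 0; j < input.Length; j++)
                dot += SupportVectors[i][j] * input[j];
            sum += Weights[i] * dot;
        }
        return sum;
    }
}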

Kernel Support Vector Machine

Here is the class definition for the Kernel Support Vector Machine. It inherits from Support Vector Machine and extends it with a Kernel property. The Compute method is also overridden to include the chosen Kernel in the model computation.
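
(A sketch under the same caveats as the previous listing.)

// The kernel machine inherits from the linear one, adds a Kernel
// property, and overrides Compute to replace the dot product with
// the kernel function.
public class KernelSupportVectorMachine : SupportVectorMachine
{
    public KernelSupportVectorMachine(IKernel kernel, int inputs)
        : base(inputs)
    {
        this.Kernel = kernel;
    }

    public IKernel Kernel { get; private set; }

    // Computes F(x) = sum_i w_i * k(z_i, x) + b
    public override double Compute(double[] input)
    {
        double sum = Threshold;
        for (int i = 0; i < SupportVectors.Length; i++)
            sum += Weights[i] * Kernel.Function(SupportVectors[i], input);
        return sum;
    }
}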

Sequential Minimal Optimization

Here is the code for the Sequential Minimal Optimization (SMO) algorithm.
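
(The full listing is likewise not reproduced on this page. The sketch below keeps the overall structure described earlier, but folds run and examineExample into a single loop and replaces Platt's second-choice heuristic with a random partner, in the spirit of the well-known simplified SMO variant. Treat it as an illustration, not the framework's implementation.)

public class SequentialMinimalOptimization
{
    private KernelSupportVectorMachine machine;
    private double[][] x; private int[] y;     // training data and -1/+1 labels
    private double[] alpha; private double b;  // Lagrange multipliers and bias

    public double Complexity { get; set; } = 1.0;   // the parameter C
    public double Tolerance { get; set; } = 1e-3;   // the convergence tolerance T

    public SequentialMinimalOptimization(KernelSupportVectorMachine machine,
        double[][] inputs, int[] outputs)
    {
        this.machine = machine; this.x = inputs; this.y = outputs;
        this.alpha = new double[inputs.Length];
    }

    // Current decision function f(p) = sum_i alpha_i y_i k(x_i, p) + b
    private double F(double[] point)
    {
        double sum = b;
        for (int i = 0; i < x.Length; i++)
            if (alpha[i] > 0)
                sum += alpha[i] * y[i] * machine.Kernel.Function(x[i], point);
        return sum;
    }

    public void Run()
    {
        if (x.Length < 2) return;
        var rand = new System.Random(0);
        int passes = 0;
        while (passes < 10) // stop after several passes without changes
        {
            int changed = 0;
            for (int i = 0; i < x.Length; i++)
            {
                double Ei = F(x[i]) - y[i];
                // Only optimize points that violate the KKT conditions
                if ((y[i] * Ei < -Tolerance && alpha[i] < Complexity) ||
                    (y[i] * Ei > Tolerance && alpha[i] > 0))
                {
                    int j = rand.Next(x.Length - 1);
                    if (j >= i) j++; // pick a distinct second multiplier
                    if (TakeStep(i, j)) changed++;
                }
            }
            passes = (changed == 0) ? passes + 1 : 0;
        }

        // Keep only the points with nonzero multipliers as support vectors
        var sv = new System.Collections.Generic.List<double[]>();
        var w = new System.Collections.Generic.List<double>();
        for (int i = 0; i < x.Length; i++)
            if (alpha[i] > 0) { sv.Add(x[i]); w.Add(alpha[i] * y[i]); }
        machine.SupportVectors = sv.ToArray();
        machine.Weights = w.ToArray();
        machine.Threshold = b;
    }

    // Solves the 2-dimensional sub-problem analytically (Platt, 1998)
    private bool TakeStep(int i, int j)
    {
        double Ei = F(x[i]) - y[i], Ej = F(x[j]) - y[j];
        double ai = alpha[i], aj = alpha[j];

        // Clipping bounds keeping both multipliers inside [0, C]
        double L, H;
        if (y[i] != y[j]) { L = System.Math.Max(0, aj - ai); H = System.Math.Min(Complexity, Complexity + aj - ai); }
        else { L = System.Math.Max(0, ai + aj - Complexity); H = System.Math.Min(Complexity, ai + aj); }
        if (L == H) return false;

        double kii = machine.Kernel.Function(x[i], x[i]);
        double kjj = machine.Kernel.Function(x[j], x[j]);
        double kij = machine.Kernel.Function(x[i], x[j]);
        double eta = 2 * kij - kii - kjj; // second derivative of the objective
        if (eta >= 0) return false;

        // Unconstrained minimum along the constraint line, then clip
        double newAj = aj - y[j] * (Ei - Ej) / eta;
        newAj = System.Math.Min(H, System.Math.Max(L, newAj));
        if (System.Math.Abs(newAj - aj) < 1e-8) return false;
        double newAi = ai + y[i] * y[j] * (aj - newAj);

        // Update the threshold so the KKT conditions hold for i or j
        double b1 = b - Ei - y[i] * (newAi - ai) * kii - y[j] * (newAj - aj) * kij;
        double b2 = b - Ej - y[i] * (newAi - ai) * kij - y[j] * (newAj - aj) * kjj;
        if (newAi > 0 && newAi < Complexity) b = b1;
        else if (newAj > 0 && newAj < Complexity) b = b2;
        else b = (b1 + b2) / 2;

        alpha[i] = newAi; alpha[j] = newAj;
        return true;
    }
}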

Using the code

In the following example, we will be training a Polynomial Kernel Support Vector Machine to recognize the XOR classification problem. The XOR function is a classic example of a pattern classification problem that is not linearly separable.

Here, remember that the SVM is a margin classifier that labels instances as either 1 or –1, so the expected training outputs for the classification task should also be given in this range. There are no such requirements for the inputs, though.
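
For reference, the XOR problem can be encoded as follows (a reconstruction, since the original listing is not reproduced on this page):

// XOR classification problem. Note the outputs are encoded
// as -1 and +1, as required by the margin classifier.
double[][] inputs =
{
    new double[] { -1, -1 },  // -1 xor -1 = false  -> -1
    new double[] { -1,  1 },  // -1 xor +1 = true   -> +1
    new double[] {  1, -1 },  // +1 xor -1 = true   -> +1
    new double[] {  1,  1 },  // +1 xor +1 = false  -> -1
};

int[] outputs = { -1, 1, 1, -1 };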

To create the Kernel Support Vector Machine with a Polynomial Kernel, do:
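
(A sketch of the original listing; the Polynomial kernel constructor takes the degree, as in the regression example further down in the comments.)

// Create a Kernel Support Vector Machine using a Polynomial kernel
// of degree 2, for input vectors of dimension 2.
var machine = new KernelSupportVectorMachine(new Polynomial(2), inputs: 2);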

After the machine has been created, create a new Learning algorithm. As we are going to do classification, we will be using the standard SequentialMinimalOptimization algorithm.
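
(Again a sketch; the property names Complexity and Tolerance are the same ones used in the regression snippet in the comments section below.)

// Create the SMO teacher for the machine and the XOR data,
// set the parameters discussed above, and train.
var smo = new SequentialMinimalOptimization(machine, inputs, outputs);
smo.Complexity = 1.0;   // the parameter C
smo.Tolerance = 1e-3;   // the convergence tolerance T
smo.Run();              // iterate until convergence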

After the model has been trained, we can compute its outputs for the given inputs.
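
(Sketch continued from above.)

// Query the trained machine. The sign of the output gives the class.
for (int i = 0; i < inputs.Length; i++)
{
    double output = machine.Compute(inputs[i]);
    int decision = System.Math.Sign(output);   // either -1 or +1
    System.Console.WriteLine("({0}, {1}) -> {2}", inputs[i][0], inputs[i][1], decision);
}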

The machine should be able to correctly identify all of the input instances.

Sample application

The sample application is able to perform both Classification and Regression using Support Vector Machines. It can read Excel spreadsheets and determines the task to be performed based on the number of columns in the sheet. If the input table contains two columns (e.g. X and Y), it will be interpreted as a regression problem X → Y. If the input table contains three columns (e.g. x1, x2 and Y), it will be interpreted as a classification problem ⟨x1, x2⟩ belongs to class Y, with Y being either 1 or -1.

Classification

To perform classification, load a classification dataset such as the Yin Yang classification problem.

[Figure: Yin Yang classification problem. The goal is to create a model which best determines whether a given point belongs to the blue or the green class. It is a clear example of a non-linearly separable problem.]

[Figure: Creation of a Gaussian Kernel Support Vector Machine with σ = 1.2236, C = 1.0, ε = 0.001 and T = 0.001.]

[Figure: Classification using the created Support Vector Machine. Notice it achieves an accuracy of 97%, with sensitivity and specificity rates of 98% and 96%, respectively.]

Regression

To perform regression, we can load the Gaussian noise sine wave example.

[Figure: Noisy sine wave regression problem.]

[Figure: Creation of a Gaussian Kernel Support Vector Machine with σ = 1.2236, C = 1.0, ε = 0.2 and T = 0.001.]

After the model has been created, we can plot the model approximation for the sine wave data. The blue line shows the curve approximation for the original red training dots.

[Figure: Regression using the created Kernel Support Vector Machine. Notice the coefficient of determination r² of 0.95. The closer to one, the better.]

See also

 

References

49 Comments

  1. Nice tutorial, very good for the development of science and technology. My thesis is now in progress. I took the theme of power forecasting system using support vector regression whether this accord can be used for forecasting process. please help and advice. thank you

  2. thank you in advance on Mr Cesar already giving advice. I had already downloaded the file to me and I have tried but the problem is my lack of understanding about the concept of neural network so that I find difficult to adapt to the support vector regression. actually almost same concept with the Times Series Prediction sample application of the AForge.NET Framework.

  3. Hi Anik,

    I see that the output labels in this dataset are presented as 1 and 2. Are you converting those to -1 and 1 before feeding the learning algorithm? This is a necessary precondition for the learning process.

    Also you could try to normalize those values to have zero mean and unit variance. After the data is normalized, try a Gaussian kernel with varying sigma values. You could first try with sigma values varying from 0.1 to 1.0 in 0.1 increments, then from 1 to 10 in 1.0 increments. If you are using the Accord.NET Framework, you can use the GridSearch class to help in the parameter tuning.

    Regards,
    César

  4. Great Tutorial.
    But i having problem for dataset more than 3 attribute as time series data to complete the process of regression. example dataset that has 4 attribute like
    x1, x2, x3, y where y depends on the attribute value x.

  5. Hello,

I will take a look at the multiple-input problems described here. Can you provide more details on which kind of problems you are having? Also, are you normalizing the inputs before feeding the learning algorithm?

    Best regards,
    César

  6. I will predict the value y by using SVM. The following data that I use for the prediction process.

    x1 x2 x3 y
    2 3 4 29
    4 2 2 74
    3 4 2 49
    4 2 5 83
    2 5 6 51
    4 3 2 79
    3 2 1 34
    thank help me.

  7. how to use SMO with c # application that you created above for more than 2-dimensional problems features?, please explain with examples ..
    Thank you very much..

  8. Hello,

    @albert:
    The following code snippet can be used to train a regression SVM using your data example:

    // Input data
    double[][] inputs =
    {
        new double[] { 2, 3, 4 },
        new double[] { 4, 2, 2 },
        new double[] { 3, 4, 2 },
        new double[] { 4, 2, 5 },
        new double[] { 2, 5, 6 },
        new double[] { 4, 3, 2 },
        new double[] { 3, 2, 1 },
    };

    // Output data
    double[] outputs = { 29, 74, 49, 83, 51, 79, 34 };

    // Create a new machine using a Polynomial kernel of degree 2
    var machine = new KernelSupportVectorMachine(new Polynomial(2), 3);

    // Create a new SMO learning algorithm with the given parameters
    var smo = new SequentialMinimalOptimizationRegression(machine, inputs, outputs);
    smo.Complexity = 1.0;
    smo.Epsilon = 1e-2;
    smo.Tolerance = 1e-2;

    // Train the machine
    double error = smo.Run();

    // Retrieve the regression outputs
    double[] y = machine.Compute(inputs);

    // The outputs will be very close to the original:
    // y = 29.01, 73.98, 49.00, 100.41, 50.98, 79.01, 34.00

    @agung:

The sample application is just a sample application. It is not intended to be used as a full-featured application for SVMs. In order to make full use of the source code available here, you may want to implement your own application using the Accord.NET Framework, which does include all the functionality presented here.

    Best regards,
    César

  9. can i classify wisconsin breast canser data set with this application?? and i can’t dowload the source code..
    Thank you very much..

  10. @Anonymous:
    With the sample application, probably no. But with the source code, yes. Something happened with the download servers, which are currently down. I have uploaded the source code and the sample application to the Accord.NET project page instead.

    @agung:
    Please download the full Accord.NET framework. It has support for both Kernel PCA and Multi-class SVMs. There are some examples on the documentation on how to use them. If the online documentation isn’t available, you can also check the help file which comes together with the framework.

    Best regards,
    César

  11. Thanx for your attention.
    I have been downloaded Accord.NET at your link. I want to use SVM and KPCA method for my C# application. But I can’t use Accord.NET. Please tell me how to use Accord.NET?

  12. Hello,

    I want to know how can we find the distance of each sample from the seperating margin. I would also like to know how can in incorporate different formulation of SVM in the code.

    Thanks

  13. Thanx before,

    I have 200 entries feature the results of the kernel principal component analysis with gamma = 500, as follows:

    Example on one of the data:
    Raw data:
    (46 53 54 73 107 65 77 56 46 89 65 24 38 54 55 71 79 102 92 82 89 70 60 38 54 55 71 79 102 92 82 89 70 60 42 47 49 57 97 97 124 128 110 87 79 52 47 49 57 97 97 124 128 110 87 79 52 36 54 49 89 136 154 161 179 158 116 64 66 54 49 89 136 154 161 179 158 116 64 66 33 49 61 139 154 161 189 184 184 135 110 43 49 61 139 154 161 189 184 184 135 110 43 38 55 71 157 158 158 184 166 176 177 119 55 55 71 157 158 158 184 166 176 177 119 55 46 55 80 156 166 184 182 177 179 162 155 50 55 80 156 166 184 182 177 179 162 155 50 36 34 92 168 173 179 183 178 179 175 151 50 34 92 168 173 179 183 178 179 175 151 50 38 43 111 168 180 164 206 170 180 171 140 48 43 111 168 180 164 206 170 180 171 140 48 54 51 113 171 173 187 186 181 168 182 158 41 51 113 171 173 187 186 181 168 182 158 41 41 32 113 161 170 183 173 179 148 154 151 35 32 113 161 170 183 173 179 148 154 151 35 43 49 104 159 141 148 183 164 133 145 151 47 49 104 159 141 148 183 164 133 145 151 47 47 127 112 165 111 124 190 153 130 137 153 93 127 112 165 111 124 190 153 1)

    KPCA feature (with 10 principal component):
    (-0.0168221382595615 -0.0977834005677016 -0.0422072220804251 0.106677188631142 -0.00534655068037047 0.0921886970828544 0.0742572332258422 -0.0256983526385723 0.136221182959765 -0.0211490956645272)

    Then my training KPCA feature using SMO with kernel = 1.2236
    and the accuracy is only 53% only. How to increase the accuracy of the data features?

  14. How can i use Kernel support vector machines if my data class is unbalanced. i.e. for a two class problem , i have large training examples for one class while very few training sample for the second class. The trained svm is biased to the class with large training sample. How can i correct this thing

  15. Hi Tri,

    To calculate the number of steps used during learning, you may add a counter variable in the SequentialMinimalOptimization/Regression class and perform an increment every time the takeStep() method is called.

    Best regards,
    César

  16. Hi,

    I have downloaded the source code and tried to classify my dataset. My dataset contains more than 10,000 features and has only two classes. But, I couldn’t be sure about what is the input format for it.

    Also, please help me if my my problem can be solved by this or not.

    Thanks in advance.

    Binod

  17. Hi Binod,

Support vector machines are well suited for datasets containing a high number of features but a manageable number of samples. Yes, I would suppose your problem could be solved using an SVM.

    The application demonstrated here is just a sample application. It is not designed to work with custom datasets. You can roll your own application using the Accord.NET Framework and use the sample application as a starting point. The getting started guide has a nice example on how to create a new C# project for SVM training using the framework.

    Best regards,
    César

  18. Thank you Cesar.

    My features have binary values i.e. either 1 or -1. And my ratio of two classes is more than 1:500.

    This is making great problem in my case. The precision is very less.

    Please let me know, if you have worked using SVM in this type of situation and how well SVM works for this type of dataset.

    Thanking you.
    Binod

  19. Hi Binod,

For such cases, there is a small modification which can be done in the Sequential Minimal Optimization to handle unbalanced classes. Unfortunately, the implementation of the algorithm presented on this page or in the Accord.NET Framework does not implement this feature at this time.

If I remember correctly, it basically consists of using different cost (C) values for each of the classes. There is a brief section in this guide (section 7) dealing with the subject. I hope it helps.

    Best regards,
    César

  20. Hi Souza,

    Normally after training we get the weights. How to get these weights in the program and use it for computation of outputs(prediction)?

  21. Hi bvis,

    If you can, please download the latest version of the Accord.NET Framework. A starting guide is available here and shows how to create a simple application using support vector machines.

    Best regards,
    César

  22. Dear César,
    I’m working with SVR. So can you send me SVM for regression applications to I reference. Sorry for my english is not good.
    Thank you very much.
    Hoang Tuyet

  23. Hello Cesar,

    Great work! The link you provide like accord-net.origo.ethz.ch/wiki/getting_started cannot be open using firefox and IE. Pls guide me to open this link

  24. I want to find the lagrangian multiplier for this data
    Data | x1 x2 y
    ————————————-
    1 | 1 6 +1
    2 | 1 10 +1
    3 | 4 11 +1
    4 | 5 2 -1
    5 | 7 6 -1
    6 | 10 4 -1

    Can someone help me find the alpha for all the data? (Using SMO)
    Please explain each step in simple mathematical tutorial.

  25. Hello,

    Thanks Cesar for Accord, it helps me a lot.

    can you provide a simple example on how to use RANSAC with SVM for inputs varaibles selection ?

    Best Regards,

    David Alexander

  26. Hello:
    I am a student in computer engineering, I’ve built a SVM network and I want to recognize the faces of people, I used SMO algorithm to obtain alpha i value, I get features by using of Legendre Polynomial of order 4.
    When I train the model of SVM the error rate was very high ,
    my question what is the best kernel function that fitting this
    Work and what are the best parameters.
    Best Regards

  27. Hi César Souza,

    thanks for your tutorial!

    I tried your code snippets which you have explained above. Unfortunately I am running into some troubles by applying your training data (and other). My results are always constant across all data. The SMO algorithm does not change any alpha (it is always equal to zero)…. Can you give any hint what I am doing wrong?

    Thanks in advance and regards
