// Create a new MAT file reader
var reader = new MatReader(@"C:\Users\cdesouza\Desktop\matrix.mat");

// Let's assume the .mat file had a structure object called
// "structure" in it. We can access it and its members using:
var s = reader["structure"]["string"].GetValue();

// Otherwise, if we didn't know which objects were in the .MAT file, we
// could have used the following to iterate over all objects in the file:
foreach (var field in reader.Fields)
    Console.WriteLine(field.Key);

// If we had that structure under the name "structure",
// we could list all of its fields using:
foreach (var field in structure.Fields)
    Console.WriteLine(field.Key); // "a", "string"

// Let's say that we found out that this structure had a field called
// "a". We should first determine the type of this field using:
var aType = structure["a"].Type; // byte[,]

// Now, we can retrieve the value of field "a" using:
byte[,] a = structure["a"].GetValue<byte[,]>();

To install the Accord.NET Framework into your C# application, install Accord.Math and Accord.IO through NuGet.

In those situations, you would use **std::nth_element** to partially sort the array so that the element you want falls in the n-th position you need.

The **std::nth_element** function also goes one step further: it guarantees that all elements on the left of the nth position that you asked for will be less than the value in that position. Those values, however, can be in arbitrary order.

Unfortunately, knowing the niftiness of this function doesn’t help much if you can’t call it from the environment you are programming in. The good news is that, if you are in .NET, you can call Accord.Sort.NthElement (overloads) from C#, VB.NET or any other .NET language using Accord.NET.

An example can be seen below:

// Declare a random array of values
int[] a = { 10, 2, 6, 11, 9, 3, 4, 12, 8, 7, 1, 5 };

// We would like to determine which element
// will be at the 6th position of the sorted array:
int element = Sort.NthElement(a, 6);

// The array will be modified in place to become
int[] expected = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 };

The careful reader might have noticed that the entire array has been sorted, instead of just the left part as previously advertised. However, this is on purpose: for very small arrays (like the one in this example), it is more advantageous to use a full, but simpler, sorting algorithm than a partial quicksort. When the arrays are large enough (the current threshold is 32 elements), a partial quicksort with a median-of-three pivoting strategy kicks in.

**Disclaimer:** this post is directed towards beginners in computer vision and may contain very loose descriptions or simplifications of some topics in order to spark interest in the field among inexperienced readers – for a succinct and peer-reviewed discussion about the topic, the reader is strongly advised to refer to https://arxiv.org/abs/1608.07138 instead.

In the following paragraphs, we will briefly explore three different ways of performing video classification: the task of determining to which of a finite set of labels a video belongs.

Initial video classification systems were based on handcrafted features – in other words, on techniques for extracting useful information from a video that were discovered or invented through the expertise of the researcher.

For example, a very basic approach would be to consider that whatever could be considered as a “corner” in an image (see link for an example) would be quite relevant for determining what was inside this image. We could then present this collection of points (which would be much smaller than the image itself) to a classifier and hope it would be able to determine its contents from this simplified representation of the data.

In the case of video, the most successful example of those is certainly the Dense Trajectories approach of Wang *et al*., that captures frame-level descriptors over pixel trajectories determined by optical flow.

Well, some could actually think that using corners or other features to try to guess what is in an image or video is not straightforward at all – why not simply use the image or video itself as a whole, present it to the classifier, and see what it comes up with? Well, the truth is that this had been tried, but until a few years ago it simply wouldn’t work except for simple problems. For some time we didn’t know how to build classifiers that could handle extremely large amounts of data and still extract anything useful from it.

This started to change in 2006, when interest in training large neural networks started to rise. In 2012, a deep convolutional network with 8 layers managed to win the ImageNet challenge, far outperforming other approaches based on Fisher Vector encodings of handcrafted features. Since then, many works have shown how deep nets could perform way better than other methods in image classification and other domains.

However, while deep nets have certainly been shown to be the undisputed winners in image classification, the same could not yet be said about video classification, at least not at the beginning of 2016. Moreover, training deep neural networks for video can also be extremely costly, both in terms of the computational power needed and the number and sheer size of the examples needed to train those networks.

So, could we design classification architectures that could be both powerful *and* easy to train – in the sense that it wouldn’t be necessary to rely on huge amounts of labelled data to learn even the most basic, pixel-level aspects of a video, but instead we could leverage this knowledge from techniques that are already known to work fairly well?

Well, indeed, the answer seems to be yes – as long as you pay attention to some details that can actually make a huge difference.

This is shown in the paper “Sympathy for Details: Hybrid Classification Architectures for Action Recognition.” This paper is mostly centered on two things: a) showing that it is possible to make standard methods perform as well as deep nets for video; and b) showing that combining Fisher Vectors of traditional handcrafted video features with a multi-layer architecture can actually perform **better** than most deep nets for video, achieving state-of-the-art results at the date of submission.

The paper, co-written with colleagues and advisors from Xerox Research Centre and the Computer Vision Center of the Autonomous University of Barcelona, has been accepted for publication in the 14th European Conference on Computer Vision (ECCV’16), to be held in Amsterdam, The Netherlands, this October, and can be found here.

If yes, then check whether you have other devices attached to your Surface charger (such as your Lumia phone), and disconnect them.

If this solves your issue, the problem is that the charger was not managing to charge both devices at the same time (especially if you were charging your cellphone using a USB 3.0 cable).

If you haven’t seen the problem personally, it might be difficult to guess from the picture what is going on. The problem is that PowerPoint’s Ribbon is huge, given that it is running on a 21″ monitor and not on a tablet anymore.

The problem doesn’t seem to occur with Word or Excel when they are transposed from the Surface screen to the external monitor. It seems to be exclusively related to PowerPoint.

Fortunately, there is a solution for this problem. If you have **Office 365**, open the file

C:\Program Files\Microsoft Office 15\root\office15\powerpnt.exe.manifest

using a text editor. Then, look for the text True/PM in the following block:

<asmv3:application xmlns:asmv3="urn:schemas-microsoft-com:asm.v3">
  <asmv3:windowsSettings xmlns="http://schemas.microsoft.com/SMI/2005/WindowsSettings">
    <dpiAware>True/PM</dpiAware>
  </asmv3:windowsSettings>
</asmv3:application>

And change it to:

<asmv3:application xmlns:asmv3="urn:schemas-microsoft-com:asm.v3">
  <asmv3:windowsSettings xmlns="http://schemas.microsoft.com/SMI/2005/WindowsSettings">
    <dpiAware>False</dpiAware>
  </asmv3:windowsSettings>
</asmv3:application>

Now save the file and open PowerPoint. PowerPoint should now scale properly when you transpose its window from the Surface Pro 3 screen to the external monitor, and vice-versa:


This might be the case, for example, if you just tried to open an old Windows.Forms application in your brand new Surface Pro computer.

- Go to the Forms designer, then select your Form (by clicking its title bar)
- Press F4 to open the Properties window, then locate the **AutoScaleMode** property
- Change it from **Font (default)** to **Dpi**.

Now, go to Program.cs (or the file where your Main method is located) and change it to look like

using System;
using System.Collections.Generic;
using System.Linq;
using System.Windows.Forms;

namespace Classification.BoW
{
    static class Program
    {
        [STAThread]
        static void Main()
        {
            if (Environment.OSVersion.Version.Major >= 6)
                SetProcessDPIAware();

            Application.EnableVisualStyles();
            Application.SetCompatibleTextRenderingDefault(false);
            Application.Run(new MainForm());
        }

        [System.Runtime.InteropServices.DllImport("user32.dll")]
        private static extern bool SetProcessDPIAware();
    }
}

Save and compile. Now your form should look crisp again.

I encountered this problem while opening and editing Accord.NET sample applications in Visual Studio on a Surface Pro 3.

- http://stackoverflow.com/questions/13228185/winforms-high-dpi-blurry-fonts
- http://stackoverflow.com/questions/27933868/creating-dpi-aware-c-sharp-clickonce-application-with-winforms
- http://blogs.telerik.com/winformsteam/posts/14-02-11/winforms-scaling-at-large-dpi-settings-is-it-even-possible-

Hidden Markov models are simple models that can be created to recognize whether a sequence of observations is similar to previous sequences the model has seen before. However, if we create one HMM for each type of sequence that we are trying to distinguish, and then individually ask each model whether it recognizes the given sequence, we have just created a **hidden Markov model classifier**.

However, we might have a slightly better way of classifying those sequences. The method above for creating a classifier (i.e. creating individual models for each sequence type, then asking each model how strongly it recognizes a new sequence) is known as **generative learning**. But we could also have created, from the ground up, a model focused purely on distinguishing between sequence types without modeling them first. This would be known as **discriminative learning**.

And as mentioned in the tagline for this article, HCRFs are the discriminative doppelganger of the HMM classifier. Let’s see then how we can use them.

If you would like to explore them in your projects, the Accord.NET Framework provides Hidden Markov Models, Hidden Markov Model Classifiers, Conditional Random Fields and Hidden Conditional Random Fields.

Because HCRFs are discriminative models, they can be optimized directly using gradient methods that closely resemble what is normally done with Neural Networks.

One of the best algorithms among those is Resilient Backpropagation [1]. The interesting part is that this is also one of the best algorithms for training Neural Networks.

For a real-world example of how HCRFs can be applied, I would kindly invite you to look at my master’s defense presentation. It details exactly how to transform HMMs into linear-chain HCRFs through nice animations, and then applies them to sign language recognition. Use **mouse clicks** to pass slides so you can see the animations!

Sign language recognition with Support Vector Machines and Hidden Conditional Random Fields

Incidentally, the presentation above shows exactly two kinds of sequence recognition: discrete sequence recognition and multivariate/continuous sequence recognition.

The **discrete case** was the case of fingerspelling: a frame classifier translated images into symbols (i.e. numbers 0, 1, 2, …), and then a sequence of those numbers was classified into a finite set of words. After, the **continuous case** was the case of natural word recognition: a feature extractor extracted multidimensional features, including the (x,y) positions of the user’s hands and face, and then the sequences of those features were classified into a finite set of words.

Let’s consider concrete code examples on how those could have been implemented using the Accord.NET Framework, in C# inside .NET.

In this simple example we will consider only discrete data. Note that this is the simplest case for a Hidden Markov Model (in discrete hidden Markov models we don’t have to worry about probability distribution assumptions for the model states). This example is also available at the Hidden Resilient Backpropagation Learning algorithm documentation page and in the article Sequence Classifiers Part II: Hidden Conditional Random Fields at CodeProject.

// Suppose we would like to learn how to classify the
// following set of sequences among three class labels:
int[][] inputs =
{
    // First class of sequences: starts and
    // ends with zeros, ones in the middle:
    new[] { 0, 1, 1, 1, 0 },
    new[] { 0, 0, 1, 1, 0, 0 },
    new[] { 0, 1, 1, 1, 1, 0 },

    // Second class of sequences: starts with
    // twos and switches to ones until the end.
    new[] { 2, 2, 2, 2, 1, 1, 1, 1, 1 },
    new[] { 2, 2, 1, 2, 1, 1, 1, 1, 1 },
    new[] { 2, 2, 2, 2, 2, 1, 1, 1, 1 },

    // Third class of sequences: can start
    // with any symbols, but ends with three.
    new[] { 0, 0, 1, 1, 3, 3, 3, 3 },
    new[] { 0, 0, 0, 3, 3, 3, 3 },
    new[] { 1, 0, 1, 2, 2, 2, 3, 3 },
    new[] { 1, 1, 2, 3, 3, 3, 3 },
    new[] { 0, 0, 1, 1, 3, 3, 3, 3 },
    new[] { 2, 2, 0, 3, 3, 3, 3 },
    new[] { 1, 0, 1, 2, 3, 3, 3, 3 },
    new[] { 1, 1, 2, 3, 3, 3, 3 },
};

// Now consider their respective class labels
int[] outputs =
{
    /* Sequences 1-3 are from class 0: */ 0, 0, 0,
    /* Sequences 4-6 are from class 1: */ 1, 1, 1,
    /* Sequences 7-14 are from class 2: */ 2, 2, 2, 2, 2, 2, 2, 2
};

In this case, we can try to create a HCRF directly using a **MarkovDiscreteFunction**. There are many ways to create a HCRF and to define its feature vectors. The framework provides some functions that are already specialized in handling particular problems, such as, for example, the same set of problems that would be solvable by a discrete HMM classifier.

// Create the Hidden Conditional Random Field using a set of discrete features
var function = new MarkovDiscreteFunction(states: 3, symbols: 4, outputClasses: 3);
var classifier = new HiddenConditionalRandomField<int>(function);

Now that the model has been created, we can start training the model to recognize the sequences and their labels shown above.

// Create a learning algorithm
var teacher = new HiddenResilientGradientLearning<int>(classifier)
{
    Iterations = 50
};

// Run the algorithm and learn the models
teacher.Run(inputs, outputs);

After training has finished, we can ask the model which would be the most likely class labels for new sequences that we are about to show it. For example:

int y1 = classifier.Compute(new[] { 0, 1, 1, 1, 0 });    // output is y1 = 0
int y2 = classifier.Compute(new[] { 0, 0, 1, 1, 0, 0 }); // output is y2 = 0
int y3 = classifier.Compute(new[] { 2, 2, 2, 2, 1, 1 }); // output is y3 = 1
int y4 = classifier.Compute(new[] { 2, 2, 1, 1 });       // output is y4 = 1
int y5 = classifier.Compute(new[] { 0, 0, 1, 3, 3, 3 }); // output is y5 = 2
int y6 = classifier.Compute(new[] { 2, 0, 2, 2, 3, 3 }); // output is y6 = 2

As you can see, we created a classification algorithm that knows how to differentiate between the different kinds of sequences we fed it in the beginning.

In this more complex example we will consider multivariate real-valued data, which is closer to real-world problems. One particular problem where sequence recognition might be important is in the context of 3D gesture recognition. A 3D gesture is simply a sequence of 3D coordinates measured over time: if you move your hand from the mouse up to your head, and we measured this process using a sensor, we would have a sequence of 3D points saying where your hand was at each step of the capturing process.

Furthermore, if you took 5 seconds to perform this movement, and the sensor captures one coordinate a second, we would have a sequence of 5 points describing your movement.

So, let’s say we have one such sensor and we would like to distinguish between sequences belonging to different movements. Let’s say we have the movements “**hands-in-the-air**”, “**typing on a keyboard**”, and “**waving goodbye**”.

Let’s say we decided to represent our frames as:

double[] frame = { x, y, z };

Each movement then would be described as a sequence of those frames:

double[][] movement = { frame1, frame2, frame3 };

Let’s see a more concrete example. We have those sequences of 3D coordinates:

double[][] hands_in_air =
{
    new double[] { 1.0, 0.1, 0.0 }, // this movement
    new double[] { 0.0, 1.0, 0.1 }, // took 6 frames
    new double[] { 0.0, 1.0, 0.1 }, // to be recorded.
    new double[] { 0.0, 0.0, 1.0 },
    new double[] { 0.0, 0.0, 1.0 },
    new double[] { 0.0, 0.0, 0.1 }, // 6 frames
};

double[][] typing =
{
    new double[] { 0.0, 0.0, 0.0 }, // the typing
    new double[] { 0.1, 0.0, 1.0 }, // took only 4.
    new double[] { 0.0, 0.0, 0.1 },
    new double[] { 1.0, 0.0, 0.0 },
};

double[][] waving =
{
    new double[] { 0.0, 0.0, 1.0 }, // same for the
    new double[] { 0.1, 0.0, 1.0 }, // waving goodbye.
    new double[] { 0.0, 0.1, 1.0 },
    new double[] { 0.1, 0.0, 1.0 },
};

In this example, we are considering that we have only one sample of each gesture in our database. However, **this is a big no-no**. In practice, we should have *hundreds* of those samples for the magic to work.

Now let’s create our learning database. We have to create a table where we have all of our sequences or movements on the left side, and their corresponding output labels on the right.

// Those are the movements we want to distinguish:
double[][][] movements = { hands_in_air, typing, waving };

// Those are their labels:
int[] labels = { 0, 1, 2 };

Now, because we are dealing with multivariate samples (i.e. 3D points) we have to assume one shape for the states in our hidden Markov models. One common choice is to assume they are Gaussian. However, to make it even simpler, let’s assume they are *independent Gaussians*, that is, that the **X** coordinates follow one univariate Gaussian, the **Y** another, the **Z** yet another, and that they are all uncorrelated.

var initial = new Independent<NormalDistribution>
(
    new NormalDistribution(0, 1), // x
    new NormalDistribution(0, 1), // y
    new NormalDistribution(0, 1)  // z
    // PS: You can also mix a GeneralDiscreteDistribution here
);

Ok, we are almost there! Let’s create our classifier using those base distributions:

int numberOfWords = 3;  // we are trying to distinguish between 3 movement types
int numberOfStates = 5; // this value can be found by trial-and-error, but 5 is magic

var hmm = new HiddenMarkovClassifier<Independent<NormalDistribution>>
(
    classes: numberOfWords,
    topology: new Forward(numberOfStates), // important for time
    initial: initial
);

Now one interesting detail is the choice for the Forward topology above. Topologies are different ways to organize our states in our model. In other words, it tells which kinds of sequences are allowed and which ones are impossible. Choosing a Forward topology is equivalent to saying: *no movement may go back in time*. Which is true, by the way. So we go with it.

Great! Now we have our model. We then need a way to teach it to recognize the sequences we defined in the beginning. Thankfully, the framework is flexible enough to support **any** kind of distribution you give as the emission states. Most implementations can only handle Gaussians, Gaussian Mixtures, and the like. But we can use anything that supports the IFittableDistribution interface: Bernoullis, Multinomials, Poissons…

// Create a new learning algorithm to train the sequence classifier
var teacher = new HiddenMarkovClassifierLearning<Independent<NormalDistribution>>(hmm,

    // Train each model until the log-likelihood changes less than 0.001
    modelIndex => new BaumWelchLearning<Independent<NormalDistribution>>(hmm.Models[modelIndex])
    {
        Tolerance = 0.001,
        Iterations = 100,

        // This is necessary so the code doesn't blow up when it realizes
        // there is only one sample per word class. But this could also be
        // needed in normal situations as well.
        FittingOptions = new IndependentOptions()
        {
            InnerOption = new NormalOptions() { Regularization = 1e-5 }
        }
    });

Now, since we are estimating things from data, it might be the case that at some point one of our Normal distributions is asked to fit a set of numbers that are all the same. This would lead to a Normal distribution with **zero variance**, which is clearly not allowed by the current rules of the universe. To overcome this problem, we can signal the fitting algorithm that some exceptions may apply by specifying a FittingOptions object during the learning phase.

// Finally, we can run the learning algorithm!
double logLikelihood = teacher.Run(movements, labels);

After a while, this method should return with a working model for us.

// At this point, the classifier should be successfully
// able to distinguish between our three word classes:
int a = hmm.Compute(hands_in_air); // should output 0
int b = hmm.Compute(typing);       // should output 1
int c = hmm.Compute(waving);       // should output 2

Which is very nice. In this particular example, we could have a perfect model that was able to correctly predict the database we threw at it. But what if this hasn’t been the case?

If this wasn’t the case, there is always room for improvement. Let’s digivolve our hidden Markov classifier into a Hidden Conditional Random Field using a Markov Multivariate Function, and see what we can do with it:

// Now, we can use the Markov classifier to initialize a HCRF
var function = new MarkovMultivariateFunction(hmm);
var hcrf = new HiddenConditionalRandomField<double[]>(function);

Although hidden Markov models can operate with any choice of distribution function, unfortunately for HCRFs things aren’t that simple. HCRFs are very general models, and as such, they place very few restrictions on which formulation we would like to use. While this is great from a theoretical point of view, it means that the framework cannot be as helpful as it was before and offer generic support for all possible distributions in the world.

In this case, we can either create our own feature functions specifying how our HCRF should be created, or we could use a few built-in functions that are specialized for some key model formulations such as Gaussian HMMs, Gaussian Mixtures, Multivariate Gaussians, and so on.

Now, let’s check that we really didn’t lose anything by transforming one model into another.

// We can check that both are equivalent
for (int i = 0; i < movements.Length; i++)
{
    // Should be the same
    int expected = hmm.Compute(movements[i]);
    int actual = hcrf.Compute(movements[i]);

    // Should be the same
    double h0 = hmm.LogLikelihood(movements[i], 0);
    double c0 = hcrf.LogLikelihood(movements[i], 0);

    double h1 = hmm.LogLikelihood(movements[i], 1);
    double c1 = hcrf.LogLikelihood(movements[i], 1);

    double h2 = hmm.LogLikelihood(movements[i], 2);
    double c2 = hcrf.LogLikelihood(movements[i], 2);
}

However, here is where we will finally be able to apply discriminative learning to our previously generative-learned models. One of the best learning algorithms for HCRFs happens to be one of the best algorithms also available for Neural Networks. It is one of the fastest gradient algorithms restricted only to first-order information.

So let’s use it and see what happens:

// Now we can learn the HCRF using one of the best learning
// algorithms available, Resilient Backpropagation learning:

// Create a learning algorithm
var rprop = new HiddenResilientGradientLearning<double[]>(hcrf)
{
    Iterations = 50,
    Tolerance = 1e-5
};

// Run the algorithm and learn the models
double error = rprop.Run(movements, labels);

At this point, the HCRF should have improved over the previous log-likelihood for the generative model, and will still be able to successfully distinguish between our three movement types:

int hc1 = hcrf.Compute(hands_in_air);
int hc2 = hcrf.Compute(typing);
int hc3 = hcrf.Compute(waving);

As we could see, in this example we have created a HCRF classifier that is able to distinguish between sequences of multivariate observations (i.e. vectors).

- Accord.NET Framework for Machine Learning and Statistics
- Sequence Classifiers in C# – Part I: Hidden Markov Models
- Sequence Classifiers in C# – Part II: Hidden Conditional Random Fields

- Milind Mahajan, Asela Gunawardana, and Alex Acero. Training algorithms for hidden conditional random fields. International Conference on Acoustics, Speech, and Signal Processing. 2006.

*{“Exception has been thrown by the target of an invocation.”}*

The inner exception might read:

*[System.IO.FileLoadException] = {“Could not load file or assembly ‘Accord.Math, Version=2.13.1.0, Culture=neutral, PublicKeyToken=fa1a88e29555ccf7’ or one of its dependencies. The located assembly’s manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)”:”Accord.Math, Version=2.13.1.0, Culture=neutral, PublicKeyToken=fa1a88e29555ccf7″}*

In this case, it is very likely that those exceptions are occurring because the .NET run-time is looking for an assembly with the specific version indicated above. Even if you have new assemblies with exactly the same name and the exact same public key token, .NET might still refuse to deserialize your file.

In order to get around this, put the following static class into your application:

public static class ExtensionMethods
{
    private static readonly Object lockObj = new Object();

    public static object DeserializeAnyVersion(this BinaryFormatter formatter, Stream stream)
    {
        lock (lockObj)
        {
            try
            {
                AppDomain.CurrentDomain.AssemblyResolve += resolve;
                return formatter.Deserialize(stream);
            }
            finally
            {
                AppDomain.CurrentDomain.AssemblyResolve -= resolve;
            }
        }
    }

    private static Assembly resolve(object sender, ResolveEventArgs args)
    {
        var display = new AssemblyName(args.Name);
        if (display.Name == args.Name)
            return null;
        return ((AppDomain)sender).Load(display.Name);
    }
}

Now, go back to where you were using your deserializer and getting that exception, and instead of calling `formatter.Deserialize`, call `formatter.DeserializeAnyVersion`:

BinaryFormatter bf = new BinaryFormatter();
object o = bf.DeserializeAnyVersion(stream);

Deserialization now might work as expected; but please keep in mind that we might be losing some security here. However, this might be a concern only if your application is dynamically loading assemblies at run-time.

Here are some resources discussing the problem:

- Deserialize types moved across assemblies
- C# deserialization of System.Type throws for a type from a loaded assembly
- C# – BinaryFormatter.Deserialize is “Unable to find assembly”

This extension method will also be included in the Accord.NET Framework.

Warning 461 ‘OxyPlot.Axes.LinearAxis.LinearAxis(OxyPlot.Axes.AxisPosition, double, double, string)’ is obsolete

If you try to use OxyPlot.Axes.LinearAxis‘ constructor *with parameters*, the compiler will complain, telling you the method has been deprecated and shouldn’t be used. I couldn’t find the alternative solution on the web, but then it occurred to me that what is really being deprecated is the *passing of arguments* through the constructor’s parameters.

As such, the solution is simply to rewrite your code and call the axis’ default constructor instead, using C# object initialization syntax to configure your object:

var dateAxis = new OxyPlot.Axes.LinearAxis()
{
    Position = AxisPosition.Bottom,
    Minimum = range.Min,
    Maximum = range.Max,
    Key = "xAxis",
    MajorGridlineStyle = LineStyle.Solid,
    MinorGridlineStyle = LineStyle.Dot,
    IntervalLength = 80
};

Hope it can be of help if you were getting those warnings like me.

- Download the machine learning framework.
- Browse the source code online.

As its authors put it, LIBLINEAR is a library for large linear classification. It is intended to be used to tackle classification and regression problems with millions of instances and features, although it can only produce linear classifiers, i.e. linear support vector machines.

The framework now offers *almost* all liblinear algorithms in C#, except for one. Those include:

- 0 — L2-regularized logistic regression (primal)
- 1 — L2-regularized L2-loss support vector classification (dual)
- 2 — L2-regularized L2-loss support vector classification (primal)
- 3 — L2-regularized L1-loss support vector classification (dual)
- 4 — (not included; see below)
- 5 — L1-regularized L2-loss support vector classification
- 6 — L1-regularized logistic regression
- 7 — L2-regularized logistic regression (dual) for regression
- 11 — L2-regularized L2-loss support vector regression (primal)
- 12 — L2-regularized L2-loss support vector regression (dual)
- 13 — L2-regularized L1-loss support vector regression (dual)

As can be seen, the command line option 4 is missing. Mode #4 refers to Crammer and Singer’s formulation for multi-class classification. However, the framework already provides different ways to obtain both multi-class and multi-label classifiers through both Voting and Decision Directed Acyclic Graph (DDAG) mechanisms.

Additionally, the framework also offers:

- Sequential Minimal Optimization
- Sequential Minimal Optimization for Regression
- Least-Squares Learning (LS-SVMs)
- Probabilistic Output Learning (Platt’s algorithm)

The framework can equally load data and load and save support vector machines using the LIBSVM format. This means it should be straightforward to create or learn your models using one tool and run them on the other, if necessary. For example, given that Accord.NET can run on mobile applications, it is possible to create and learn your models on a computing grid using liblinear and then integrate them into your Windows Phone application by loading them in Accord.NET.

The advantages are that:

- Learning algorithms implement one common interface, rather than several functions split across the code;
- Algorithms are available as a concise library, ready to be integrated in your existing or new applications, instead of being part of a black-box command line tool (but it can also be used as a command line tool, see sample applications);
- The algorithms can run in Windows, Windows RT, ASP.NET, Windows Phone, Android, iOS, Linux and MacOS (through Mono/Xamarin);
- They can be combined with other meta-algorithms available in the framework to create multi-class and multi-label support vector machines, as well as be part of cross-validation, bootstrapping and split-set validation techniques.

When studying and porting the algorithms, I have also set up a liblinear GitHub repository page to track changes between versions. I hope this repository can also be helpful for other people willing to track modifications done to the liblinear project.

In the following example we will create a linear machine to learn a simple linearly separable binary AND problem.

// Create a simple binary AND
// classification problem:
double[][] problem =
{
    //             a    b    a AND b
    new double[] { 0,   0,   0 },
    new double[] { 0,   1,   0 },
    new double[] { 1,   0,   0 },
    new double[] { 1,   1,   1 },
};

// Get the first two columns as the problem
// inputs and the last column as the output:

// input columns
double[][] inputs = problem.GetColumns(0, 1);

// output column
int[] outputs = problem.GetColumn(2).ToInt32();

// Plot the problem on screen
ScatterplotBox.Show("AND", inputs, outputs).Hold();

// However, SVMs expect the output value to be
// either -1 or +1. As such, we have to convert
// it so the vector contains { -1, -1, -1, +1 }:
//
outputs = outputs.Apply(x => x == 0 ? -1 : 1);

// Create a new linear-SVM for two inputs (a and b)
SupportVectorMachine svm = new SupportVectorMachine(inputs: 2);

// Create a L2-regularized L2-loss support vector classification
var teacher = new LinearDualCoordinateDescent(svm, inputs, outputs)
{
    Loss = Loss.L2,
    Complexity = 1000,
    Tolerance = 1e-5
};

// Learn the machine
double error = teacher.Run(computeError: true);

// Compute the machine's answers for the learned inputs
int[] answers = inputs.Apply(x => Math.Sign(svm.Compute(x)));

// Plot the results
ScatterplotBox.Show("SVM's answer", inputs, answers).Hold();

The linear SVM’s answer to the linearly separable AND problem. As can be seen, a linear SVM can correctly predict the color of each of the points in the original problem. This happens because the SVM learning algorithm is able to find the line that separates the blue points from the red points.

Now, we will move a bit further. We will use an explicit kernel expansion to learn the non-linearly separable exclusive or (XOR) problem.

// Create a simple binary XOR
// classification problem:
double[][] problem =
{
    //             a  b  a XOR b
    new double[] { 0, 0, 0 },
    new double[] { 0, 1, 1 },
    new double[] { 1, 0, 1 },
    new double[] { 1, 1, 0 },
};

// Get the two first columns as the problem
// inputs and the last column as the output

// input columns
double[][] inputs = problem.GetColumns(0, 1);

// output column
int[] outputs = problem.GetColumn(2).ToInt32();

// Plot the problem on screen
ScatterplotBox.Show("XOR", inputs, outputs).Hold();

The binary XOR problem. The XOR problem is not linearly separable, because it is not possible to draw a line separating the blue points from the red points. In this setting, we should expect a linear SVM learning algorithm to fail, because the line it needs to find does not exist.

// However, SVMs expect the output value to be
// either -1 or +1. As such, we have to convert
// it so the vector contains { -1, +1, +1, -1 }:
//
outputs = outputs.Apply(x => x == 0 ? -1 : 1);

// Create a new linear-SVM for two inputs (a and b)
SupportVectorMachine svm = new SupportVectorMachine(inputs: 2);

// Create a L2-regularized L2-loss support vector classification
var teacher = new LinearDualCoordinateDescent(svm, inputs, outputs)
{
    Loss = Loss.L2,
    Complexity = 1000,
    Tolerance = 1e-5
};

// Learn the machine
double error = teacher.Run(computeError: true);

// Compute the machine's answers for the learned inputs
int[] answers = inputs.Apply(x => Math.Sign(svm.Compute(x)));

// Plot the results
ScatterplotBox.Show("SVM's answer", inputs, answers).Hold();

As we can see, the linear SVM failed to predict the correct colors for each of the points. The problem is that the answers from a linear SVM are constrained to be hyperplanes (in this 2D case, lines) that separate the points. Because there is no line that separates the blue points from the red points in the XOR problem, the linear SVM learning algorithm tries its best, finding an approximate solution, but not the XOR solution we were looking for.

// Use an explicit kernel expansion to transform the
// non-linear classification problem into a linear one
//
// Create a quadratic kernel
Quadratic quadratic = new Quadratic(constant: 1);

// Project the inputs into a higher dimensionality space
double[][] expansion = inputs.Apply(quadratic.Transform);

// Create a new linear-SVM for the transformed input space
svm = new SupportVectorMachine(inputs: expansion[0].Length);

// Create the same learning algorithm in the expanded input space
teacher = new LinearDualCoordinateDescent(svm, expansion, outputs)
{
    Loss = Loss.L2,
    Complexity = 1000,
    Tolerance = 1e-5
};

// Learn the machine
error = teacher.Run(computeError: true);

// Compute the machine's answers for the learned inputs
answers = expansion.Apply(x => Math.Sign(svm.Compute(x)));

// Plot the results
ScatterplotBox.Show("SVM's answer", inputs, answers).Hold();

By using an explicit kernel expansion, we can use a linear SVM to learn a non-linearly separable problem. This is possible because the kernel transformation projects the data into a higher dimensionality space where the data is indeed linearly separable. For an intuition on how this could be possible, please check the blog post kernel functions for machine learning applications.
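To make the projection concrete: for two-dimensional inputs, the quadratic kernel with constant c = 1 corresponds to an explicit six-dimensional feature map (the exact coordinate ordering produced by Quadratic.Transform may differ, but the inner products agree):

```latex
K(x, y) = (x_1 y_1 + x_2 y_2 + 1)^2 = \langle \phi(x), \phi(y) \rangle,
\quad \text{where} \quad
\phi(x) = \left( x_1^2,\; x_2^2,\; \sqrt{2}\, x_1 x_2,\; \sqrt{2}\, x_1,\; \sqrt{2}\, x_2,\; 1 \right)
```

In this expanded space the XOR data becomes linearly separable: for binary inputs, the function x₁ + x₂ − 2 x₁x₂ computes XOR exactly, and it is a linear function of the coordinates of φ(x).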

Now, we move even further, and use a linear machine to load one of the toy LIBLINEAR problems available in LibSVM format using the framework’s SparseReader class.

// Create a new LibSVM sparse format data reader
// to read the Wisconsin's Breast Cancer dataset
//
var reader = new SparseReader("examples-sparse.txt");

int[] outputs;

// Read the classification problem into dense memory
double[][] inputs = reader.ReadToEnd(sparse: false, labels: out outputs);

// The dataset has output labels as 4 and 2. We have to convert them
// into negative and positive labels so they can be properly processed.
//
outputs = outputs.Apply(x => x == 2 ? -1 : +1);

// Create a new linear-SVM for the problem dimensions
var svm = new SupportVectorMachine(inputs: reader.Dimensions);

// Create a learning algorithm for the problem's dimensions
var teacher = new LinearDualCoordinateDescent(svm, inputs, outputs)
{
    Loss = Loss.L2,
    Complexity = 1000,
    Tolerance = 1e-5
};

// Learn the classification
double error = teacher.Run();

// Compute the machine's answers for the learned inputs
int[] answers = inputs.Apply(x => Math.Sign(svm.Compute(x)));

// Create a confusion matrix to show the machine's performance
var m = new ConfusionMatrix(predicted: answers, expected: outputs);

// Show it onscreen
DataGridBox.Show(new ConfusionMatrixView(m));

The confusion matrix for the binary classification problem. As can be seen, the higher values concentrate on the diagonal. Those values indicate how many hits (correct guesses) the machine was able to make. The values that do not lie on the diagonal indicate how many errors (and of what kind) the machine made in this classification problem.

The last version of this tutorial can also be seen on the project’s wiki pages for linear machines.

- R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin. LIBLINEAR: A Library for Large Linear Classification, Journal of Machine Learning Research 9(2008), 1871-1874. Software available at http://www.csie.ntu.edu.tw/~cjlin/liblinear