Partial Least Squares Regression (PLS) is a technique that generalizes and combines features from *principal component analysis* and *(multivariate) multiple regression*. It has been widely adopted in the fields of chemometrics and the social sciences.

- Download sample application and source code
- Browse the sample application source code online
- Browse the Partial Least Squares source code online
- Download the Accord.NET Statistics Framework

*The code presented here is also part of the Accord.NET Framework. The Accord.NET Framework is a framework for developing machine learning, computer vision, computer audition, statistics and math applications. Please see the starting guide for more details. The latest version of the framework includes the latest version of this code plus many other statistics and machine learning tools.*


## Introduction

Partial least squares regression (PLS-regression) is a statistical method that is related to principal components regression. The goal of this method is to find a linear regression model by projecting both the predicted variables *and* the observable variables to new, latent variable spaces. It was developed in the 1960s by Herman Wold to be used in econometrics. However, today it is most commonly used for regression in the field of chemometrics.

In statistics, latent variables (as opposed to observable variables) are variables that are not directly observed, but are rather inferred (through a mathematical model) from other variables that are observed (directly measured). Mathematical models that aim to explain observed variables in terms of latent variables are called latent variable models.

A PLS model will try to find the multidimensional direction in the *X* space that explains the maximum multidimensional variance direction in the *Y* space. PLS-regression is particularly suited when the matrix of predictors has more variables than observations, and when there is multicollinearity among the *X* values. It is worth noting that standard linear regression would likely fail to produce meaningful, interpretable models in those cases.

## Overview

### Multivariate Linear Regression in Latent Space

Multiple Linear Regression is a generalization of simple linear regression for multiple inputs. In turn, Multivariate Linear Regression is a generalization of Multiple Linear Regression for multiple *outputs*. Multivariate linear regression is a general linear regression model that can map a space of arbitrary dimension into another space of arbitrary dimension using only linear relationships. In the context of PLS, it is used to map the latent variable space for the inputs **X** into the latent variable space for the output variables **Y**. Those latent variable spaces are spanned by the loading matrices for **X** and **Y**, commonly denoted P and Q, respectively.

The goal of PLS algorithms is therefore to find those two matrices. There are mainly two algorithms to do this: *NIPALS* and *SIMPLS*.
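To make the idea of regression in latent space concrete, here is a minimal, illustrative Python sketch (a hypothetical helper, not part of the Accord.NET code): once the score matrix **T** has orthonormal columns, the least-squares coefficients of **Y** on **T** reduce to a single matrix product, C = T'Y, because the normal equations (T'T)C = T'Y collapse when T'T is the identity.

```python
def regress_in_latent_space(T, Y):
    """Least-squares regression of Y (n x q) on scores T (n x k).

    Assumes the columns of T are orthonormal, so that the normal
    equations (T'T) C = T'Y reduce to simply C = T'Y.
    """
    n, k, q = len(T), len(T[0]), len(Y[0])
    # C[a][b] = sum_i T[i][a] * Y[i][b]   (i.e. C = T'Y)
    return [[sum(T[i][a] * Y[i][b] for i in range(n))
             for b in range(q)]
            for a in range(k)]

def predict(T, C):
    """Reconstruct/predict outputs as Y_hat = T * C."""
    return [[sum(T[i][a] * C[a][b] for a in range(len(C)))
             for b in range(len(C[0]))]
            for i in range(len(T))]
```

Both NIPALS and SIMPLS produce score matrices with exactly this kind of orthogonality, which is what makes the final regression step so simple.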

### Algorithm 1: NIPALS

Here is an exposition of the NIPALS algorithm for finding the loading matrices required for PLS regression. There are, however, many variations of this algorithm which normalize or do not normalize certain vectors.

Algorithm:

- Let X be the mean-centered input matrix;
- Let Y be the mean-centered output matrix;
- Let P be the loadings matrix for X, and let p_i denote the i-th column of P;
- Let Q be the loadings matrix for Y, and let q_i denote the i-th column of Q;
- Let T be the score matrix for X, and t_i denote the i-th column of T;
- Let U be the score matrix for Y, and u_i denote the i-th column of U;
- Let W be the PLS weight matrix, and w_i denote the i-th column of W; and
- Let B be a diagonal matrix with diagonal coefficients b_i.

Then:

- For each factor i to be calculated:
  - Initially choose u_i as the column of Y having the largest sum of squares;
  - While t_i has not converged to a desired precision:
    - w_i ∝ X'u_i (estimate X weights)
    - t_i ∝ Xw_i (estimate X factor scores)
    - q_i ∝ Y't_i (estimate Y weights)
    - u_i = Yq_i (estimate Y scores)
  - b_i = t_i'u_i (compute prediction coefficient b)
  - p_i = X't_i (estimate X factor loadings)
  - X = X - t_i p_i' (deflate X)
  - Y = Y - b_i t_i q_i' (deflate Y)
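The steps above can be sketched in a few lines of plain Python. This is an illustrative, hypothetical helper, not the Accord.NET implementation; it assumes X and Y are already mean-centered and given as lists of rows:

```python
import math

def nipals_pls(X, Y, factors, tol=1e-14, max_iter=500):
    """Illustrative NIPALS sketch. X (n x p) and Y (n x q) must be
    mean-centered. Returns lists of score vectors t, X loadings p,
    Y weights q and coefficients b, one entry per factor."""
    E = [row[:] for row in X]   # working copy of X, deflated in place
    F = [row[:] for row in Y]   # working copy of Y
    n, p, q = len(E), len(E[0]), len(F[0])
    T, P, Q, B = [], [], [], []

    def column(M, j):
        return [row[j] for row in M]

    def mat_t_vec(M, v):        # M' * v
        return [sum(M[i][j] * v[i] for i in range(len(M))) for j in range(len(M[0]))]

    def mat_vec(M, v):          # M * v
        return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

    def norm(v):
        return math.sqrt(sum(x * x for x in v))

    def unit(v):
        nv = norm(v)
        return [x / nv for x in v]

    for _ in range(factors):
        # start u as the column of Y (i.e. F) with the largest sum of squares
        u = max((column(F, j) for j in range(q)), key=lambda v: sum(x * x for x in v))
        t = [0.0] * n
        for _ in range(max_iter):
            w = unit(mat_t_vec(E, u))      # w ∝ E'u  (X weights)
            t_new = unit(mat_vec(E, w))    # t ∝ Ew   (X factor scores)
            c = unit(mat_t_vec(F, t_new))  # c ∝ F't  (Y weights)
            u = mat_vec(F, c)              # u = Fc   (Y scores)
            if norm([a - b for a, b in zip(t, t_new)]) < tol:
                t = t_new
                break
            t = t_new
        b = sum(ti * ui for ti, ui in zip(t, u))   # b = t'u
        pv = mat_t_vec(E, t)                       # p = E't
        for i in range(n):                         # deflate X and Y
            for j in range(p):
                E[i][j] -= t[i] * pv[j]
            for j in range(q):
                F[i][j] -= b * t[i] * c[j]
        T.append(t); P.append(pv); Q.append(c); B.append(b)
    return T, P, Q, B
```

Each returned score vector t has unit norm, and the deflation step guarantees that it is orthogonal to the scores of all previous factors.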

In other statistical analyses, such as PCA, it is often interesting to inspect how much of the variance can be explained by each of the principal component dimensions. The same can also be accomplished for PLS, both for the input (predictor) variables **X** and the output (predicted) variables **Y**. For the input variables, the amount of variance explained by each factor can be computed as the sum of the squared elements of the corresponding column of **P**, *i.e.* as Sum(p_i²). For the output variables, it can be computed as b_i².

### Algorithm 2: SIMPLS

SIMPLS is an alternative algorithm for finding the PLS matrices **P** and **Q** that has been derived considering the true objective of maximizing the covariance between the latent factors and the output vectors. NIPALS and SIMPLS are equivalent when there is just one output variable **Y** to be regressed, but their answers can differ when **Y** is multidimensional. Moreover, because the construction of the weight vectors used by SIMPLS is based on the empirical variance-covariance matrix of the joint input and output variables, outliers present in the data can adversely impact its performance.

Algorithm:

- Let X be the mean-centered input matrix;
- Let Y be the mean-centered output matrix;
- Let P be the loadings matrix for X, and let p_i denote the i-th column of P;
- Let C be the loadings matrix for Y, and let c_i denote the i-th column of C;
- Let T be the score matrix for X, and t_i denote the i-th column of T;
- Let U be the score matrix for Y, and u_i denote the i-th column of U; and
- Let W be the PLS weight matrix, and w_i denote the i-th column of W.

Then:

- Create the covariance matrix C = X'Y
- For each factor i to be calculated:
  - Perform SVD on the covariance matrix; store the first left singular vector in w_i, and the first right singular vector scaled by the first singular value in c_i;
  - t_i ∝ Xw_i (estimate X factor scores)
  - p_i = X't_i (estimate X factor loadings)
  - c_i = c_i / norm(t_i) (estimate Y weights)
  - w_i = w_i / norm(t_i) (estimate X weights)
  - u_i = Yc_i (estimate Y scores)
  - v_i = p_i (form the basis vector v_i)
  - Make v_i orthogonal to the previous loadings V
  - Make u_i orthogonal to the previous scores T
  - Deflate the covariance matrix: C = C - v_i (v_i' C)
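As with NIPALS, the SIMPLS steps can be sketched in plain Python. This is an illustrative, hypothetical helper rather than the Accord.NET code; as a simplification, it substitutes a power iteration on SS' for the full SVD used to extract the dominant singular vector, and it omits the orthogonalization of u. X and Y are assumed mean-centered lists of rows:

```python
import math

def simpls_sketch(X, Y, factors, its=200):
    """Illustrative SIMPLS sketch for mean-centered X (n x p), Y (n x q)."""
    n, p, q = len(X), len(X[0]), len(Y[0])
    # covariance matrix S = X'Y
    S = [[sum(X[i][a] * Y[i][b] for i in range(n)) for b in range(q)] for a in range(p)]
    T, P, C, W, V = [], [], [], [], []

    def norm(v):
        return math.sqrt(sum(x * x for x in v))

    for _ in range(factors):
        # Step 1: dominant left singular vector of S via power iteration on S S'
        w = [1.0] * p
        for _ in range(its):
            z = [sum(S[a][b] * w[a] for a in range(p)) for b in range(q)]  # z = S'w
            w = [sum(S[a][b] * z[b] for b in range(q)) for a in range(p)]  # w = Sz
            nw = norm(w)
            w = [x / nw for x in w]
        c = [sum(S[a][b] * w[a] for a in range(p)) for b in range(q)]      # c = S'w

        # Step 2: X factor scores t ∝ Xw, normalized to unit length
        t = [sum(X[i][a] * w[a] for a in range(p)) for i in range(n)]
        nt = norm(t)
        t = [x / nt for x in t]

        # Step 3: X factor loadings p = X't
        pv = [sum(X[i][a] * t[i] for i in range(n)) for a in range(p)]

        # Step 4: rescale the weights by norm(t)
        w = [x / nt for x in w]
        c = [x / nt for x in c]

        # Steps 6-7: basis vector v = p, orthogonalized against previous V
        v = pv[:]
        for vj in V:
            proj = sum(a * b for a, b in zip(v, vj))
            v = [a - proj * b for a, b in zip(v, vj)]
        nv = norm(v)
        v = [x / nv for x in v]

        # Step 8: deflate the covariance matrix, S = S - v (v'S)
        vS = [sum(v[a] * S[a][b] for a in range(p)) for b in range(q)]
        S = [[S[a][b] - v[a] * vS[b] for b in range(q)] for a in range(p)]

        T.append(t); P.append(pv); C.append(c); W.append(w); V.append(v)
    return T, P, C, W
```

Because each new weight vector lives in the column space of the deflated covariance matrix, which is orthogonal to all previous basis vectors v, the extracted score vectors come out mutually orthogonal, just as in the C# implementation below.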

## Source Code

This section presents the implementation of the NIPALS and SIMPLS algorithms in C#. The models have been implemented with an object-oriented structure that is particularly suitable for data-binding to Windows.Forms (or WPF) controls.

## Class Diagram

## Performing PLS using NIPALS

Here is the source code for computing PLS using the *NIPALS* algorithm:

```csharp
/// <summary>
///   Computes PLS parameters using NIPALS algorithm.
/// </summary>
private void nipals(double[,] X, double[,] Y, int factors, double tolerance)
{
    // References:
    //  - Hervé Abdi, http://www.utdallas.edu/~herve/abdi-wireCS-PLS2010.pdf

    // Initialize and prepare the data
    int rows = sourceX.GetLength(0);
    int xcols = sourceX.GetLength(1);
    int ycols = sourceY.GetLength(1);

    // Initialize storage variables
    double[,] E = (double[,])X.Clone();
    double[,] F = (double[,])Y.Clone();
    double[,] T = new double[rows, factors];
    double[,] U = new double[rows, factors];
    double[,] P = new double[xcols, factors];
    double[,] C = new double[ycols, factors];
    double[,] W = new double[xcols, xcols];
    double[] B = new double[xcols];

    double[] varX = new double[factors];
    double[] varY = new double[factors];

    // Initialize the algorithm
    bool stop = false;

    #region NIPALS
    for (int factor = 0; factor < factors && !stop; factor++)
    {
        // Select t as the largest column from X,
        double[] t = E.GetColumn(largest(E));

        // Select u as the largest column from Y.
        double[] u = F.GetColumn(largest(F));

        // Will store weights for X and Y
        double[] w = new double[xcols];
        double[] c = new double[ycols];

        double norm_t = Norm.Euclidean(t);

        #region Iteration
        while (norm_t > 1e-14)
        {
            // Store initial t to check convergence
            double[] t0 = (double[])t.Clone();

            // Step 1. Estimate w (X weights): w ∝ E'*u
            //   (in Abdi's paper, X is referred as E).

            // 1.1. Compute w = E'*u;
            w = new double[xcols];
            for (int j = 0; j < w.Length; j++)
                for (int i = 0; i < u.Length; i++)
                    w[j] += E[i, j] * u[i];

            // 1.2. Normalize w (w = w/norm(w))
            w = w.Divide(Norm.Euclidean(w));

            // Step 2. Estimate t (X factor scores): t ∝ E*w
            //   (in Abdi's paper, X is referred as E).

            // 2.1. Compute t = E*w
            t = new double[rows];
            for (int i = 0; i < t.Length; i++)
                for (int j = 0; j < w.Length; j++)
                    t[i] += E[i, j] * w[j];

            // 2.2. Normalize t: t = t/norm(t)
            t = t.Divide(norm_t = Norm.Euclidean(t));

            // Step 3. Estimate c (Y weights): c ∝ F't
            //   (in Abdi's paper, Y is referred as F).

            // 3.1. Compute c = F'*t0;
            c = new double[ycols];
            for (int j = 0; j < c.Length; j++)
                for (int i = 0; i < t.Length; i++)
                    c[j] += F[i, j] * t[i];

            // 3.2. Normalize c: c = c/norm(c)
            c = c.Divide(Norm.Euclidean(c));

            // Step 4. Estimate u (Y scores): u = F*c
            //   (in Abdi's paper, Y is referred as F).

            // 4.1. Compute u = F*c;
            u = new double[rows];
            for (int i = 0; i < u.Length; i++)
                for (int j = 0; j < c.Length; j++)
                    u[i] += F[i, j] * c[j];

            // Recalculate norm of the difference
            norm_t = 0.0;
            for (int i = 0; i < t.Length; i++)
            {
                double d = (t0[i] - t[i]);
                norm_t += d * d;
            }
            norm_t = Math.Sqrt(norm_t);
        }
        #endregion

        // Compute the value of b which is used to
        // predict Y from t as b = t'u [Abdi, 2010]
        double b = t.InnerProduct(u);

        // Compute factor loadings for X as p = E'*t [Abdi, 2010]
        double[] p = new double[xcols];
        for (int j = 0; j < p.Length; j++)
            for (int i = 0; i < rows; i++)
                p[j] += E[i, j] * t[i];

        // Perform deflation of X and Y
        for (int i = 0; i < t.Length; i++)
        {
            // Deflate X as X = X - t*p';
            for (int j = 0; j < p.Length; j++)
                E[i, j] -= t[i] * p[j];

            // Deflate Y as Y = Y - b*t*c';
            for (int j = 0; j < c.Length; j++)
                F[i, j] -= b * t[i] * c[j];
        }

        // Calculate explained variances
        varY[factor] = b * b;
        varX[factor] = p.InnerProduct(p);

        // Save iteration results
        T.SetColumn(factor, t);
        P.SetColumn(factor, p);
        U.SetColumn(factor, u);
        C.SetColumn(factor, c);
        W.SetColumn(factor, w);
        B[factor] = b;

        // Check for residuals as stop criteria
        double[] norm_x = Norm.Euclidean(E);
        double[] norm_y = Norm.Euclidean(F);

        stop = true;
        for (int i = 0; i < norm_x.Length && stop; i++)
        {
            // If any of the residuals is higher than the tolerance
            if (norm_x[i] > tolerance || norm_y[i] > tolerance)
                stop = false;
        }
    }
    #endregion
}
```

## Performing PLS using SIMPLS

And here is the source code for computing PLS using the *SIMPLS* algorithm:

```csharp
/// <summary>
///   Computes PLS parameters using SIMPLS algorithm.
/// </summary>
private void simpls(double[,] X, double[,] Y, int factors)
{
    // References:
    //  - Martin Anderson, "A comparison of nine PLS1 algorithms". Journal of
    //    Chemometrics, 2009. Available on:
    //    http://onlinelibrary.wiley.com/doi/10.1002/cem.1248/pdf
    //  - Hervé Abdi, http://www.utdallas.edu/~herve/abdi-wireCS-PLS2010.pdf
    //  - Statsoft, http://www.statsoft.com/textbook/partial-least-squares/#SIMPLS
    //  - Sijmen de Jong, "SIMPLS: an alternative approach to partial least
    //    squares regression"
    //  - N.M. Faber and J. Ferré, "On the numerical stability of two widely used
    //    PLS algorithms", J. Chemometrics, 22, pp. 101-105, 2008.

    // Initialize and prepare the data
    int rows = sourceX.GetLength(0);
    int xcols = sourceX.GetLength(1);
    int ycols = sourceY.GetLength(1);

    // Initialize storage variables
    double[,] P = new double[xcols, factors]; // loadings for X such that X = TP + F
    double[,] C = new double[ycols, factors]; // loadings for Y such that Y = TC + E
    double[,] T = new double[rows, factors];  // factor score matrix T
    double[,] U = new double[rows, factors];  // factor score matrix U
    double[,] W = new double[xcols, factors]; // weight matrix W

    double[] varX = new double[factors];
    double[] varY = new double[factors];

    // Orthogonal loadings
    double[,] V = new double[xcols, factors];

    // Create covariance matrix C = X'Y
    double[,] covariance = X.TransposeAndMultiply(Y);

    #region SIMPLS
    for (int factor = 0; factor < factors; factor++)
    {
        // Step 1. Obtain the dominant eigenvector w of C'C. However, we
        //   can avoid computing the matrix multiplication by using the
        //   singular value decomposition instead, which is also more
        //   stable. The first weight vector w is the left singular vector
        //   of C = X'Y [Abdi, 2007].

        var svd = new SingularValueDecomposition(covariance,
            computeLeftSingularVectors: true,
            computeRightSingularVectors: false,
            autoTranspose: true);

        double[] w = svd.LeftSingularVectors.GetColumn(0);
        double[] c = covariance.TransposeAndMultiply(w);

        // Step 2. Estimate X factor scores: t ∝ X*w
        //   Similarly to NIPALS, the T factor of SIMPLS
        //   is computed as T = X*W [Statsoft] [Abdi, 2010].

        // 2.1. Estimate t (X factor scores): t = X*w [Abdi, 2010]
        double[] t = new double[rows];
        for (int i = 0; i < t.Length; i++)
            for (int j = 0; j < w.Length; j++)
                t[i] += X[i, j] * w[j];

        // 2.2. Normalize t (X factor scores): t = t/norm(t)
        double norm_t = Norm.Euclidean(t);
        t = t.Divide(norm_t);

        // Step 3. Estimate p (X factor loadings): p = X'*t
        double[] p = new double[xcols];
        for (int i = 0; i < p.Length; i++)
            for (int j = 0; j < t.Length; j++)
                p[i] += X[j, i] * t[j];

        // Step 4. Estimate X and Y weights. Actually, the weights have
        //   been computed in the first step during SVD. However, since
        //   the X factor scores have been normalized, we also have to
        //   normalize weights accordingly: w = w/norm(t), c = c/norm(t)
        w = w.Divide(norm_t);
        c = c.Divide(norm_t);

        // Step 5. Estimate u (Y factor scores): u = Y*c [Abdi, 2010]
        double[] u = new double[rows];
        for (int i = 0; i < u.Length; i++)
            for (int j = 0; j < c.Length; j++)
                u[i] += Y[i, j] * c[j];

        // Step 6. Initialize the orthogonal loadings
        double[] v = (double[])p.Clone();

        // Step 7. Make v orthogonal to the previous loadings
        //   http://en.wikipedia.org/wiki/Gram%E2%80%93Schmidt_process
        if (factor > 0)
        {
            // 7.1. Modified Gram-Schmidt for v [Martin Anderson, 2009]
            for (int j = 0; j < factor; j++)
            {
                double proj = 0.0;
                for (int k = 0; k < v.Length; k++)
                    proj += v[k] * V[k, j];

                for (int k = 0; k < v.Length; k++)
                    v[k] -= proj * V[k, j];
            }

            // 7.2. Modified Gram-Schmidt for u [Martin Anderson, 2009]
            for (int j = 0; j < factor; j++)
            {
                double proj = 0.0;
                for (int k = 0; k < u.Length; k++)
                    proj += u[k] * T[k, j];

                for (int k = 0; k < u.Length; k++)
                    u[k] -= proj * T[k, j];
            }
        }

        // 7.3. Normalize orthogonal loadings
        v = v.Divide(Norm.Euclidean(v));

        // Step 8. Deflate covariance matrix as s = s - v * (v' * s)
        //   as shown in simpls1 in [Martin Anderson, 2009] appendix.
        double[,] cov = (double[,])covariance.Clone();
        for (int i = 0; i < v.Length; i++)
        {
            for (int j = 0; j < v.Length; j++)
            {
                double d = v[i] * v[j];

                for (int k = 0; k < ycols; k++)
                    cov[i, k] -= d * covariance[j, k];
            }
        }
        covariance = cov;

        // Save iteration
        W.SetColumn(factor, w);
        U.SetColumn(factor, u);
        C.SetColumn(factor, c);
        T.SetColumn(factor, t);
        P.SetColumn(factor, p);
        V.SetColumn(factor, v);

        // Compute explained variance
        varX[factor] = p.InnerProduct(p);
        varY[factor] = c.InnerProduct(c);
    }
    #endregion

    // Set class variables
    this.scoresX = T;    // factor score matrix T
    this.scoresY = U;    // factor score matrix U
    this.loadingsX = P;  // loadings for X such that X = TP + F
    this.loadingsY = C;  // loadings for Y such that Y = TC + E
    this.weights = W;    // the columns of W are weight vectors
    this.coeffbase = W;

    // Calculate variance explained proportions
    this.componentProportionX = new double[factors];
    this.componentProportionY = new double[factors];

    double sumX = 0.0, sumY = 0.0;
    for (int i = 0; i < rows; i++)
    {
        // Sum of squares for matrix X
        for (int j = 0; j < xcols; j++)
            sumX += X[i, j] * X[i, j];

        // Sum of squares for matrix Y
        for (int j = 0; j < ycols; j++)
            sumY += Y[i, j] * Y[i, j];
    }

    // Calculate variance proportions
    for (int i = 0; i < factors; i++)
    {
        componentProportionY[i] = varY[i] / sumY;
        componentProportionX[i] = varX[i] / sumX;
    }
}
```

## Multivariate Linear Regression

Multivariate Linear Regression is computed in a similar manner to Multiple Linear Regression. The only difference is that, instead of having a weight vector and an intercept, we have a weight matrix and an intercept vector.

```csharp
/// <summary>
///   Computes the Multivariate Linear Regression for an input vector.
/// </summary>
/// <param name="input">The input vector.</param>
/// <returns>The calculated output.</returns>
public double[] Compute(double[] input)
{
    int N = input.Length;
    int M = coefficients.GetLength(1);

    double[] result = new double[M];
    for (int i = 0; i < M; i++)
    {
        result[i] = intercepts[i];

        for (int j = 0; j < N; j++)
            result[i] += input[j] * coefficients[j, i];
    }

    return result;
}
```

The weight matrix and the intercept vector are computed in the *PartialLeastSquaresAnalysis* class by the *CreateRegression* method. If the analyzed data had already been mean-centered before being fed to the analysis, the constructed intercept vector will consist only of zeros.

```csharp
/// <summary>
///   Creates a Multivariate Linear Regression model using
///   coefficients obtained by the Partial Least Squares.
/// </summary>
public MultivariateLinearRegression CreateRegression(int factors)
{
    int rows = sourceX.GetLength(0);
    int xcols = sourceX.GetLength(1);
    int ycols = sourceY.GetLength(1);

    // Compute regression coefficients B of Y on X as B = RQ'
    double[,] B = new double[xcols, ycols];
    for (int i = 0; i < xcols; i++)
        for (int j = 0; j < ycols; j++)
            for (int k = 0; k < factors; k++)
                B[i, j] += coeffbase[i, k] * loadingsY[j, k];

    // Divide by standard deviation if X has been normalized
    if (analysisMethod == AnalysisMethod.Correlation)
        for (int i = 0; i < xcols; i++)
            for (int j = 0; j < ycols; j++)
                B[i, j] = B[i, j] / stdDevX[i];

    // Compute regression intercepts A as A = meanY - meanX'*B
    double[] A = new double[ycols];
    for (int i = 0; i < ycols; i++)
    {
        double sum = 0.0;
        for (int j = 0; j < xcols; j++)
            sum += meanX[j] * B[j, i];
        A[i] = meanY[i] - sum;
    }

    return new MultivariateLinearRegression(B, A, true);
}
```

## Using the code

As an example, let's consider the example data from Hervé Abdi, where the goal is to predict the subjective evaluation of a set of 5 wines. The dependent variables that we want to predict for each wine are its likeability, and how well it goes with meat or dessert (as rated by a panel of experts). The predictors are the price and the sugar, alcohol, and acidity content of each wine.

```csharp
double[,] dependent =
{
    // Wine | Hedonic | Goes with meat | Goes with dessert
    { 14, 7, 8 },
    { 10, 7, 6 },
    {  8, 5, 5 },
    {  2, 4, 7 },
    {  6, 2, 4 },
};

double[,] predictors =
{
    // Wine | Price | Sugar | Alcohol | Acidity
    {  7, 7, 13, 7 },
    {  4, 3, 14, 7 },
    { 10, 5, 12, 5 },
    { 16, 7, 11, 3 },
    { 13, 3, 10, 3 },
};
```

Next, we proceed to create the Partial Least Squares Analysis using the Covariance method (data will only be mean centered but not normalized) and the SIMPLS algorithm.

```csharp
// Create the analysis using the covariance method and the SIMPLS algorithm
PartialLeastSquaresAnalysis pls = new PartialLeastSquaresAnalysis(predictors, dependent,
    AnalysisMethod.Covariance, PartialLeastSquaresAlgorithm.SIMPLS);

// Compute the analysis
pls.Compute();
```

After the analysis has been computed, we can proceed and create the regression model.

```csharp
// Create the Multivariate Linear Regression model
MultivariateLinearRegression regression = pls.CreateRegression();

// Compute the regression outputs for the predictor variables
double[][] aY = regression.Compute(predictors.ToArray());
```

Now that the regression has been computed, we can find out how well it performed. The coefficient of determination r² for the variables *Hedonic*, *Goes with Meat* and *Goes with Dessert* can be computed by the *CoefficientOfDetermination* method of the *MultivariateLinearRegression* class and will be, respectively, **0.9999**, **0.9999** and **0.8750** (the closer to one, the better).
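The r² values above come from the framework's own *CoefficientOfDetermination* method. As an independent sanity check, the standard formula R² = 1 - SSres/SStot can be computed per output column with a few lines of Python (an illustrative helper, not part of the framework):

```python
def r_squared(actual, predicted):
    """Per-column coefficient of determination, R^2 = 1 - SS_res / SS_tot,
    for multivariate outputs given as lists of rows."""
    q = len(actual[0])
    means = [sum(row[j] for row in actual) / len(actual) for j in range(q)]
    scores = []
    for j in range(q):
        ss_res = sum((a[j] - p[j]) ** 2 for a, p in zip(actual, predicted))
        ss_tot = sum((a[j] - means[j]) ** 2 for a in actual)
        scores.append(1.0 - ss_res / ss_tot)
    return scores
```

A perfect prediction yields 1.0 for every column, while a model no better than predicting the column mean yields 0.0.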

## Sample application

The accompanying sample application performs Partial Least Squares Analysis and Regression in Excel worksheets. The predictors and dependent variables can be selected once the data has been loaded in the application.

**Left:** Wine example from Hervé Abdi. **Right:** Variance explained by PLS using the SIMPLS algorithm.

**Left:** Partial Least Squares Analysis results and regression coefficients for the full regression model. **Right:** Projection of the dependent and predictor variables using the first three factors.

Results from the Multivariate Linear Regression performed in latent space using three factors from PLS.

**Left:** Example data from Geladi and Kowalski. **Right:** Analysis results for the data using NIPALS. We can see that just two factors are enough to explain the whole variance of the set.

Results from the two-factor Multivariate Linear Regression model.

### References

- Abdi, H. Partial least square regression, projection on latent structure regression, PLS-Regression. Wiley Interdisciplinary Reviews: Computational Statistics, 2, 97-106, 2010.
- Abdi, H. Partial least square regression (PLS regression). In N.J. Salkind (Ed.): Encyclopedia of Measurement and Statistics. Thousand Oaks (CA): Sage. pp. 740-744, 2007.
- Mevik, B-H. and Wehrens, R. The pls Package: Principal Component and Partial Least Squares Regression in R. Journal of Statistical Software, Volume 18, Issue 2, 2007.
- De Jong, S. SIMPLS: an alternative approach to partial least squares regression. Chemometrics and Intelligent Laboratory Systems, 18: 251–263, 1993.
- Geladi, P. and Kowalski, B.R. An example of 2-block predictive partial least squares regression with simulated data. *Anal. Chim. Acta* **185**, 19-32, 1986.
- Rosipal, R. and Trejo, L.J. Kernel Partial Least Squares for Nonlinear Regression and Discrimination. The Journal of Machine Learning Research, Volume 2, pp. 97-123, 2002.
- Garson, D. Partial Least Squares Regression (PLS). Website.
- Wikipedia contributors. “Partial least squares.” Wikipedia, The Free Encyclopedia. Website.