Calibration of PLS-DA model

Calibration of a PLS-DA model is very similar to conventional PLS with one difference — you need to provide information about class membership of each object instead of a matrix or a vector with response values. This can be done in two different ways. If you have multiple classes, it is always recommended to provide your class membership data as a factor with predefined labels or a vector with class names as text values. The labels/values in this case will be used as class names. It is also acceptable to use numbers as labels, but this makes interpretation of the results less readable and can possibly cause problems with the calculation of performance statistics. So use names!
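For instance, a vector with class names as text values can be turned into a factor with predefined labels like this (a minimal base R sketch with made-up values):

```r
# class membership as text values (hypothetical example)
classes = c("setosa", "versicolor", "versicolor", "virginica")

# convert to a factor — the labels will be used as class names
classes = factor(classes, levels = c("setosa", "versicolor", "virginica"))
levels(classes)
## [1] "setosa"     "versicolor" "virginica"
```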

It is very important to use the same labels/names for e.g. the calibration and the test set, because this is how the model identifies which class an object came from. If there is e.g. a typo in a label value, the model will assume that the corresponding object is a stranger (a member of none of the classes).
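The effect of an inconsistent label can be seen with plain base R — a comparison with a misspelled class name simply returns FALSE, so the corresponding object would not be attributed to the class (the misspelling below is deliberate):

```r
labels = c("virginica", "virginca")   # the second value has a typo
labels == "virginica"
## [1]  TRUE FALSE
```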

So let’s prepare our data.

data(iris)

cal.ind = c(1:25, 51:75, 101:125)
val.ind = c(26:50, 76:100, 126:150)

Xc = iris[cal.ind, 1:4]
Xv = iris[val.ind, 1:4]

cc.all = iris[cal.ind, 5]
cv.all = iris[val.ind, 5]

In this case, the fifth column of the Iris dataset is already a factor; otherwise we would have to convert it to a factor explicitly. Let's check that it is indeed correct.
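If the class column were stored as plain text instead, the explicit conversion could look like this (a sketch in base R; not actually needed for the Iris data):

```r
data(iris)
cal.ind = c(1:25, 51:75, 101:125)

# pretend the column is a character vector and convert it back to a factor
species.txt = as.character(iris[cal.ind, 5])
cc.all = factor(species.txt)
is.factor(cc.all)
## [1] TRUE
```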

show(cc.all)
##  [1] setosa     setosa     setosa     setosa     setosa     setosa    
##  [7] setosa     setosa     setosa     setosa     setosa     setosa    
## [13] setosa     setosa     setosa     setosa     setosa     setosa    
## [19] setosa     setosa     setosa     setosa     setosa     setosa    
## [25] setosa     versicolor versicolor versicolor versicolor versicolor
## [31] versicolor versicolor versicolor versicolor versicolor versicolor
## [37] versicolor versicolor versicolor versicolor versicolor versicolor
## [43] versicolor versicolor versicolor versicolor versicolor versicolor
## [49] versicolor versicolor virginica  virginica  virginica  virginica 
## [55] virginica  virginica  virginica  virginica  virginica  virginica 
## [61] virginica  virginica  virginica  virginica  virginica  virginica 
## [67] virginica  virginica  virginica  virginica  virginica  virginica 
## [73] virginica  virginica  virginica 
## Levels: setosa versicolor virginica

However, for a model with just one class, virginica, we need to prepare the class variable in a different way. In this case it is enough to provide a vector with logical values, where TRUE corresponds to a member and FALSE to a non-member of the class. Here is an example of how to do it (we will make two vectors — one for the calibration and one for the validation subset).

cc.vir = cc.all == 'virginica'
cv.vir = cv.all == 'virginica'
show(cc.vir)
##  [1] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## [12] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## [23] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## [34] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## [45] FALSE FALSE FALSE FALSE FALSE FALSE  TRUE  TRUE  TRUE  TRUE  TRUE
## [56]  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE
## [67]  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE

Now we can calibrate the models:

m.all = plsda(Xc, cc.all, 3, cv = 1)
m.vir = plsda(Xc, cc.vir, 3, cv = 1, classname = 'virginica')

You may have noticed one important difference: when the parameter c is a vector with logical values, you also need to provide a name for the class. If you do not, a default name will be used, but this may cause problems when you e.g. validate your model using a test set where class membership is a factor, as in this example.
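For example, the one-class model can then be applied to the validation set with the original factor as reference values — the class name given at calibration lets the model match objects labelled virginica (a sketch, assuming the mdatools package and the objects created above, and that predict() for PLS-DA models accepts the reference class values as a third argument):

```r
# apply the one-class model to the validation set, using the factor
# with class names as reference values (assumption: predict() matches
# them against the class name provided at calibration)
res = predict(m.vir, Xv, cv.all)
summary(res)
```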

Let’s look at the summary for each of the models. As you can see below, the summary for multi-class PLS-DA simply shows one set of results for each class. The performance statistics include explained X and Y variance (individual for the last component used and cumulative for all of them), values for the confusion matrix (True Positives, False Positives, True Negatives, False Negatives) as well as specificity and sensitivity values.

summary(m.all)
## 
## PLS-DA model (class plsda) summary statistics
## 
## Number of selected components: 3
## Info: 
## 
## Class #1 (setosa)
##     X expvar X cumexpvar Y expvar Y cumexpvar TP FP TN FN Spec Sens
## Cal     2.19       99.65     4.82       58.05 25  0 50  0    1    1
## CV      2.35       99.61     4.04       53.59 25  0 50  0    1    1
## 
## Class #2 (versicolor)
##     X expvar X cumexpvar Y expvar Y cumexpvar TP FP TN FN Spec Sens
## Cal     2.19       99.65     4.82       58.05 11  5 45 14 0.90 0.44
## CV      2.35       99.61     4.04       53.59 10  6 44 15 0.88 0.40
## 
## Class #3 (virginica)
##     X expvar X cumexpvar Y expvar Y cumexpvar TP FP TN FN Spec Sens
## Cal     2.19       99.65     4.82       58.05 24  3 47  1 0.94 0.96
## CV      2.35       99.61     4.04       53.59 24  3 47  1 0.94 0.96

Dealing with a multi-class PLS-DA model is similar to dealing with PLS2 models, where you have several y-variables. Every time you want to show a plot or results for a particular class, just provide the class number using the parameter nc. For example, this is how to show the summary only for the third class (virginica).

summary(m.all, nc = 3)
## 
## PLS-DA model (class plsda) summary statistics
## 
## Number of selected components: 3
## Info: 
## 
## Class #3 (virginica)
##     X expvar X cumexpvar Y expvar Y cumexpvar TP FP TN FN Spec Sens
## Cal     2.19       99.65     4.82       58.05 24  3 47  1 0.94 0.96
## CV      2.35       99.61     4.04       53.59 24  3 47  1 0.94 0.96

You can also show statistics only for the calibration or only for the cross-validation part; in that case you will see details about the contribution of every component to the model.

summary(m.all$calres, nc = 3)
## 
## PLS-DA results (class plsdares) summary:
## Number of selected components: 3
## 
## Class #3 (virginica)
##        X expvar X cumexpvar Y expvar Y cumexpvar TP FP TN FN Spec Sens
## Comp 1    91.97       91.97    46.36       46.36 24  5 45  1 0.90 0.96
## Comp 2     5.50       97.47     6.88       53.23 21  5 45  4 0.90 0.84
## Comp 3     2.19       99.65     4.82       58.05 24  3 47  1 0.94 0.96
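The cross-validated results can be inspected the same way — the model object is assumed to keep them in a separate field, cvres (a sketch using the model created above):

```r
# per-component statistics for the cross-validation results
summary(m.all$cvres, nc = 3)
```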

For one-class models the behaviour is similar, but there will always be just one set of results — for the corresponding class. Here is the summary.

summary(m.vir)
## 
## PLS-DA model (class plsda) summary statistics
## 
## Number of selected components: 3
## Info: 
## 
## Class #1 (virginica)
##     X expvar X cumexpvar Y expvar Y cumexpvar TP FP TN FN Spec Sens
## Cal     4.71       98.53     0.06       61.31 24  3 47  1 0.94 0.96
## CV      6.13       98.90     2.22       57.15 24  3 47  1 0.94 0.96

As in SIMCA, you can also get a confusion matrix for a particular result. Here is an example for the multiple-class model.

getConfusionMatrix(m.all$calres)
##            setosa versicolor virginica None
## setosa         25          0         0    0
## versicolor      0         11         3   11
## virginica       0          5        24    0

And for the one-class model.

getConfusionMatrix(m.vir$calres)
##           virginica None
## None              3   47
## virginica        24    1