This vignette illustrates the standard use of the PLNLDA function and the methods accompanying the R6 classes PLNLDA and PLNLDAfit.
The packages required for the analysis are PLNmodels plus some others for data manipulation and representation:
library(PLNmodels)
We illustrate our point with the trichoptera data set, a full description of which can be found in the corresponding vignette. Data preparation is also detailed in the specific vignette.
data(trichoptera)
trichoptera <- prepare_data(trichoptera$Abundance, trichoptera$Covariate)
The trichoptera data frame stores a matrix of counts (trichoptera$Abundance), a matrix of offsets (trichoptera$Offset) and some vectors of covariates (trichoptera$Wind, trichoptera$Temperature, etc.). In the following, we're particularly interested in the discrete covariate trichoptera$Group, which corresponds to disjoint time spans during which the catching took place. The correspondence between group labels and time spans is:
Label | Number of consecutive nights | Date |
---|---|---|
1 | 12 | June 59 |
2 | 5 | June 59 |
3 | 5 | June 59 |
4 | 4 | June 59 |
5 | 4 | July 59 |
6 | 1 | June 59 |
7 | 3 | June 60 |
8 | 4 | June 60 |
9 | 5 | June 60 |
10 | 4 | June 60 |
11 | 1 | June 60 |
12 | 1 | July 60 |
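Before moving on, the prepared data can be inspected quickly to confirm the fields described above (a small sketch, not part of the original analysis):

# The count table and the number of trapping nights in each catching group
str(trichoptera$Abundance)
table(trichoptera$Group)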
In the vein of Fisher (1936) and Rao (1948), we introduce a multi-class LDA model for multivariate count data which is a variant of the Poisson Lognormal model of Aitchison and Ho (1989) (see the PLN vignette as a reminder). Indeed, it can be viewed as a PLN model with a discrete group structure in the latent Gaussian space.
This PLN-LDA model can be written in a hierarchical framework where a sample of \(p\)-dimensional observation vectors \(\mathbf{Y}_i\) is related to some \(p\)-dimensional vectors of latent variables \(\mathbf{Z}_i\) and a discrete structure with \(K\) groups in the following way: \[\begin{equation} \begin{array}{rcl} \text{group structure } & \boldsymbol{\mu}_i = \boldsymbol{\mu}_{g_i} & g_i \in \{1, \dots, K\} \\ \text{latent space } & \mathbf{Z}_i \quad \text{indep.} & \mathbf{Z}_i \sim \mathcal{N}({\boldsymbol\mu}_i, \boldsymbol{\Sigma}) \\ \text{observation space } & Y_{ij} | Z_{ij} \quad \text{indep.} & Y_{ij} | Z_{ij} \sim \mathcal{P}\left(\exp\{Z_{ij}\}\right) \end{array} \end{equation}\] where \(g_i\) denotes the group sample \(i\) belongs to.
The different parameters \({\boldsymbol\mu}_k \in\mathbb{R}^p\) correspond to the group-specific main effects and the variance matrix \(\boldsymbol{\Sigma}\) is shared among groups. An equivalent way of writing this model is the following: \[\begin{equation} \begin{array}{rcl} \text{latent space } & \mathbf{Z}_i \sim \mathcal{N}({\boldsymbol\mu}_i,\boldsymbol\Sigma) & \boldsymbol{\mu}_i = \mathbf{g}_i^\top \mathbf{M} \\ \text{observation space } & Y_{ij} | Z_{ij} \quad \text{indep.} & Y_{ij} | Z_{ij} \sim \mathcal{P}\left(\exp\{Z_{ij}\}\right), \end{array} \end{equation}\] where, with a slight abuse of notation, \(\mathbf{g}_i\) is a group-indicator vector of length \(K\) (\(g_{ik} = 1 \Leftrightarrow g_i = k\)) and \(\mathbf{M} = [\boldsymbol{\mu}_1^\top, \dots, \boldsymbol{\mu}_K^\top]^\top\) is a \(K \times p\) matrix collecting the group-specific main effects.
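For intuition, stacking the indicator vectors \(\mathbf{g}_i\) over the \(n\) samples yields the usual one-hot design matrix of the grouping factor. A minimal sketch of how such a matrix could be built in R (for illustration only; this is not necessarily how PLNLDA encodes the grouping internally):

# One-hot (indicator) coding of the catching group: one column per group,
# with a 1 in column k for samples belonging to group k
G <- model.matrix(~ 0 + Group, data = trichoptera)
head(G)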
Just like PLN, PLN-LDA generalizes to a formulation close to a multivariate generalized linear model where the main effect is due to a linear combination of the discrete group structure, \(d\) covariates \(\mathbf{x}_i\) and a vector \(\mathbf{o}_i\) of \(p\) offsets in sample \(i\). The latent layer then reads \[\begin{equation} \mathbf{Z}_i \sim \mathcal{N}({\mathbf{o}_i + \mathbf{g}_i^\top \mathbf{M} + \mathbf{x}_i^\top\boldsymbol\Theta},\boldsymbol\Sigma) \end{equation}\] where \(\boldsymbol\Theta\) is a \(d\times p\) matrix of regression parameters.
Given a new observation with counts \(\mathbf{Y}\), offset \(\mathbf{o}\) and covariates \(\mathbf{x}\), together with the estimated parameters \(\hat{\boldsymbol{\Theta}}\), \(\hat{\boldsymbol{\Sigma}}\), the group means \(\boldsymbol{\mu}_k\) and the group sizes \(n_k\) from the training data, we can predict the observation's group using Bayes' rule as follows: for \(k \in \{1, \dots, K\}\), compute \[\begin{equation} \begin{aligned} f_k(\mathbf{Y}) & = p(\mathbf{Y} | \mathbf{g} = k, \mathbf{o}, \mathbf{x}, \hat{\boldsymbol{\Theta}}, \hat{\boldsymbol{\Sigma}}) \\ & = \boldsymbol{\Phi}_{PLN}(\mathbf{Y}; \mathbf{o} + \boldsymbol{\mu}_k + \mathbf{x}^\top \hat{\boldsymbol{\Theta}}, \hat{\boldsymbol{\Sigma}}) \\ p_k & = \frac{n_k}{\sum_{k' = 1}^K n_{k'}} \end{aligned} \end{equation}\] where \(\boldsymbol{\Phi}_{PLN}(\bullet; \boldsymbol{\mu}, \boldsymbol{\Sigma})\) is the density function of a PLN distribution with parameters \((\boldsymbol{\mu}, \boldsymbol{\Sigma})\). \(f_k(\mathbf{Y})\) and \(p_k\) are respectively plug-in estimates of (i) the probability of observing counts \(\mathbf{Y}\) in a sample from group \(k\) and (ii) the probability that a sample originates from group \(k\).
The posterior probability \(\hat{\pi}_k(\mathbf{Y})\) that observation \(\mathbf{Y}\) belongs to group \(k\) and most likely group \(\hat{k}(\mathbf{Y})\) can thus be defined as \[\begin{equation} \begin{aligned} \hat{\pi}_k(\mathbf{Y}) & = \frac{p_k f_k(\mathbf{Y})}{\sum_{k' = 1}^K p_{k'} f_{k'}(\mathbf{Y})} \\ \hat{k}(\mathbf{Y}) & = \underset{k \in \{1, \dots, K\}}{\arg\max} \hat{\pi}_k(\mathbf{Y}) \end{aligned} \end{equation}\]
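As a minimal numerical sketch of these two steps (assuming the per-group PLN log-densities \(\log f_k(\mathbf{Y})\) and the training group sizes \(n_k\) are already available; the helper below is hypothetical and only illustrates the arithmetic):

# Hypothetical helper: combine per-group log-densities `log_f` (length K) and
# training group sizes `n_k` into posterior probabilities and a predicted group
posterior_from_logf <- function(log_f, n_k) {
  log_prior <- log(n_k / sum(n_k))          # log(p_k)
  log_post  <- log_prior + log_f            # log(p_k f_k(Y)), up to a constant
  log_post  <- log_post - max(log_post)     # stabilize before exponentiating
  post      <- exp(log_post) / sum(exp(log_post))
  list(posterior = post, group = which.max(post))
}

The predict() method illustrated later in this vignette performs this computation for you, starting from the fitted model, and returns the log-posteriors by default.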
Classification and prediction are the main objectives in (PLN-)LDA. To reach this goal, we first need to estimate the model parameters. Inference in PLN-LDA focuses on the group-specific main effects \(\mathbf{M}\), the regression parameters \(\boldsymbol\Theta\) and the covariance matrix \(\boldsymbol\Sigma\). Technically speaking, we can treat \(\mathbf{g}_i\) as a discrete covariate and estimate \([\mathbf{M}, \boldsymbol{\Theta}]\) using the same strategy as for the standard PLN model. Briefly, we adopt a variational strategy to approximate the log-likelihood function and optimize the resulting variational surrogate of the log-likelihood with a gradient-ascent-based approach. To this end, we rely on the CCSA algorithm of Svanberg (2002) implemented in the C++ library nlopt (Johnson 2011), which we link to the package.
In the package, the PLN-LDA model is adjusted with the function PLNLDA, which we review in this section. This function fits the model and stores the result in an object of class PLNLDAfit, which inherits from the class PLNfit. We therefore strongly recommend that the reader be comfortable with PLN and PLNfit before using PLNLDA (see the PLN vignette).
We start by adjusting the above model to the trichoptera data set. We use Group, the catching time spans, as a discrete structure, and log(Offset) as an offset to account for differences in sampling effort. The model can be fitted with the function PLNLDA as follows:
myLDA_nocov <- PLNLDA(Abundance ~ 0 + offset(log(Offset)),
                      grouping = Group,
                      data = trichoptera)
##
## Performing discriminant Analysis...
## DONE!
Note that PLNLDA uses the standard formula interface, like every other model in the PLNmodels package.
The PLNLDAfit object
The myLDA_nocov variable is an R6 object of class PLNLDAfit, which comes with a couple of methods. The most basic is the show/print method, which prints a brief summary of the estimation process and of the available fields and methods:
myLDA_nocov
## Linear Discriminant Analysis for Poisson Lognormal distribution
## ==================================================================
## nb_param loglik BIC ICL
## 391 -799.954 -1560.804 -1190.093
## ==================================================================
## * Useful fields
## $model_par, $latent, $latent_pos, $var_par, $optim_par
## $loglik, $BIC, $ICL, $loglik_vec, $nb_param, $criteria
## * Useful S3 methods
## print(), coef(), sigma(), vcov(), fitted()
## predict(), predict_cond(), standard_error()
## * Additional fields for LDA
## $percent_var, $corr_map, $scores, $group_means
## * Additional S3 methods for LDA
## plot.PLNLDAfit(), predict.PLNLDAfit()
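For instance, the criteria displayed above can be retrieved directly from the corresponding fields (a quick sketch; output not shown):

# Direct access to a few fields listed by the print method
myLDA_nocov$loglik     # (approximate) log-likelihood of the fit
myLDA_nocov$BIC        # associated BIC
myLDA_nocov$criteria   # all criteria gathered together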
Comprehensive information about PLNLDAfit is available via ?PLNLDAfit.
The user can easily access several fields of the PLNLDAfit object using S3 methods; the most interesting ones are shown below.
The covariance matrix \(\hat{\boldsymbol{\Sigma}}\) is extracted with sigma():
sigma(myLDA_nocov) %>% corrplot::corrplot(is.corr = FALSE)
The regression coefficients (here NULL, as there are no covariates) are extracted with coef():
coef(myLDA_nocov)
## NULL
The group-specific main effects are stored in the $group_means field:
myLDA_nocov$group_means %>% head() %>% knitr::kable(digits = 2)
grouping1 | grouping2 | grouping3 | grouping4 | grouping5 | grouping6 | grouping7 | grouping8 | grouping9 | grouping10 | grouping11 | grouping12 |
---|---|---|---|---|---|---|---|---|---|---|---|
-26.00 | -6.82 | -25.57 | -24.40 | -27.35 | -27.93 | -26.24 | -25.39 | -25.46 | -5.64 | -3.46 | -24.38 |
-25.99 | -26.73 | -5.66 | -24.38 | -7.44 | -27.92 | -26.23 | -25.37 | -25.45 | -25.52 | -23.35 | -4.47 |
-2.38 | -3.96 | -2.36 | -2.92 | -6.50 | -5.54 | -2.69 | -2.03 | -2.71 | -2.65 | -23.46 | -3.85 |
-26.01 | -26.76 | -5.69 | -4.52 | -27.29 | -27.98 | -26.26 | -5.51 | -5.59 | -4.91 | -23.41 | -24.43 |
-0.28 | -0.26 | -0.62 | -1.09 | -0.61 | -0.11 | -0.66 | -0.80 | -0.39 | -0.45 | -0.61 | -1.16 |
-4.00 | -3.32 | -2.93 | -4.47 | -6.72 | -8.00 | -2.85 | -3.06 | -5.53 | -3.99 | -23.33 | -24.35 |
The PLNLDAfit class also benefits from two important methods: plot and predict.
The plot method

The plot method provides easy-to-interpret graphics which reveal here that the groups are well separated:
plot(myLDA_nocov)
By default, plot shows the first 3 axes of the LDA when there are 4 or more groups, and uses special representations for the edge cases of 3 or fewer groups.
ggplot2-savvy users who want to make their own representations can extract the \(n \times (K-1)\) matrix of sample scores from the PLNLDAfit object (a custom example is sketched after the correlation table below) …
myLDA_nocov$scores %>% head %>% knitr::kable(digits = 2)
LD1 | LD2 | LD3 | LD4 | LD5 | LD6 | LD7 | LD8 | LD9 | LD10 | LD11 |
---|---|---|---|---|---|---|---|---|---|---|
3216854 | -9287.47 | 66775.26 | -62489.30 | 23819.61 | 11181.17 | 1542.66 | -476.50 | 235.22 | 29.01 | -145.98 |
3216701 | -12719.42 | 67139.00 | -62859.04 | 23683.69 | 11202.68 | 1573.64 | -494.50 | 230.09 | 31.18 | -147.14 |
3216812 | -44478.71 | 71299.16 | -65085.26 | 22570.66 | 11001.74 | 1328.93 | -486.08 | 257.66 | 36.10 | -140.22 |
3216709 | -80370.78 | 75960.51 | -67417.44 | 21352.93 | 10673.76 | 886.39 | -402.62 | 298.41 | 38.42 | -131.47 |
3216686 | -59555.30 | 73225.87 | -66000.56 | 22084.17 | 10816.90 | 1072.95 | -415.42 | 278.64 | 36.13 | -136.13 |
3216702 | -33257.71 | 69805.49 | -64247.83 | 22979.81 | 11032.90 | 1350.78 | -459.37 | 251.66 | 33.11 | -142.32 |
…or the \(p \times (K-1)\) matrix of correlations between scores and (latent) variables
myLDA_nocov$corr_map %>% head %>% knitr::kable(digits = 2)
LD1 | LD2 | LD3 | LD4 | LD5 | LD6 | LD7 | LD8 | LD9 | LD10 | LD11 | |
---|---|---|---|---|---|---|---|---|---|---|---|
Che | -0.32 | 0.31 | 0.28 | 0.16 | -0.66 | -0.26 | 0.36 | -0.24 | 0.83 | -0.47 | 0.04 |
Hyc | -0.08 | -0.48 | 0.22 | 0.40 | 0.03 | -0.58 | -0.83 | 0.47 | -0.11 | -0.22 | 0.58 |
Hym | 0.07 | -0.19 | -0.09 | -0.74 | 0.32 | -0.10 | 0.29 | 0.29 | -0.35 | 0.03 | -0.60 |
Hys | -0.46 | 0.04 | -0.63 | -0.02 | 0.20 | -0.42 | 0.37 | 0.24 | -0.35 | -0.04 | 0.26 |
Psy | 0.25 | 0.31 | 0.16 | -0.29 | -0.38 | 0.25 | 0.11 | 0.17 | 0.50 | -0.57 | -0.36 |
Aga | 0.11 | -0.17 | -0.08 | -0.88 | 0.04 | -0.15 | 0.24 | -0.02 | -0.14 | -0.19 | -0.51 |
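As an example of such a custom representation (a minimal sketch, assuming ggplot2 is attached), one can redraw an individual map from the first two extracted score axes:

library(ggplot2)
# Gather the first two discriminant axes and the true group of each sample
scores_df <- as.data.frame(myLDA_nocov$scores[, 1:2])
colnames(scores_df) <- c("LD1", "LD2")
scores_df$group <- trichoptera$Group
# Hand-made individual map, colored by catching group
ggplot(scores_df, aes(x = LD1, y = LD2, color = group)) +
  geom_point() +
  labs(x = "LD1", y = "LD2", color = "Group")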
The predict method

The predict method behaves slightly differently from its siblings in other models of the PLNmodels package: the goal of predict is to predict the discrete class based on observed species counts (rather than predicting counts from known covariates).
By default, predict uses the argument type = "posterior" to output the matrix of log-posterior probabilities \(\log(\hat{\pi}_k)\):
predicted.class <- predict(myLDA_nocov, newdata = trichoptera)
## equivalent to
## predicted.class <- predict(myLDA_nocov, newdata = trichoptera, type = "posterior")
predicted.class %>% head() %>% knitr::kable(digits = 2)
1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
---|---|---|---|---|---|---|---|---|---|---|---|
-17.44 | -45.49 | -42.56 | -25.47 | -53.71 | -32.73 | -25.92 | -84.57 | -81.64 | -20.60 | -198.99 | -72.94 |
-9.67 | -17.04 | -58.00 | -21.34 | -70.24 | -27.31 | -14.70 | -15.58 | -15.22 | -14.70 | -117.05 | -64.49 |
-12.95 | -22.57 | -111.24 | -39.81 | -116.17 | -30.80 | -22.53 | -49.09 | -43.52 | -22.93 | -143.23 | -116.66 |
-18.43 | -30.99 | -118.89 | -113.20 | -141.03 | -54.03 | -41.82 | -111.46 | -100.23 | -44.51 | -300.82 | -219.91 |
-13.85 | -20.01 | -51.72 | -60.31 | -71.80 | -38.19 | -26.88 | -50.79 | -44.28 | -25.85 | -188.81 | -112.57 |
-9.26 | -13.49 | -36.26 | -24.37 | -47.25 | -23.75 | -14.62 | -15.91 | -15.30 | -14.33 | -97.21 | -65.81 |
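As a quick sanity check (a small sketch, not part of the original analysis), exponentiating these log-posteriors gives probabilities that sum to one within each row:

# Each row of exponentiated log-posteriors sums to (numerically) one
head(rowSums(exp(predicted.class)))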
You can also show them in the standard (and human-friendly) \([0, 1]\) scale with scale = "prob" to get the matrix \(\hat{\pi}_k\):
predicted.class <- predict(myLDA_nocov, newdata = trichoptera, scale = "prob")
predicted.class %>% head() %>% knitr::kable(digits = 3)
1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
---|---|---|---|---|---|---|---|---|---|---|---|
0.959 | 0.000 | 0 | 0 | 0 | 0 | 0.000 | 0.000 | 0.000 | 0.041 | 0 | 0 |
0.980 | 0.001 | 0 | 0 | 0 | 0 | 0.006 | 0.003 | 0.004 | 0.006 | 0 | 0 |
1.000 | 0.000 | 0 | 0 | 0 | 0 | 0.000 | 0.000 | 0.000 | 0.000 | 0 | 0 |
1.000 | 0.000 | 0 | 0 | 0 | 0 | 0.000 | 0.000 | 0.000 | 0.000 | 0 | 0 |
0.998 | 0.002 | 0 | 0 | 0 | 0 | 0.000 | 0.000 | 0.000 | 0.000 | 0 | 0 |
0.972 | 0.014 | 0 | 0 | 0 | 0 | 0.005 | 0.001 | 0.002 | 0.006 | 0 | 0 |
Setting type = "response", we can predict the most likely group \(\hat{k}\) instead:
predicted.class <- predict(myLDA_nocov, newdata = trichoptera, type = "response")
predicted.class
## 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26
## 1 1 1 1 1 1 9 2 1 1 1 1 2 3 2 2 2 3 3 3 9 3 4 1 4 4
## 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49
## 12 5 4 5 6 7 7 7 8 8 8 8 8 1 9 9 9 10 10 10 10 11 12
## Levels: 1 2 3 4 5 6 7 8 9 10 11 12
We can check that the predictions are quite close to the true groups (note that this is not a proper validation of the method, as we used the same data set for both model fitting and prediction and are thus at risk of overfitting).
table(predicted.class, trichoptera$Group, dnn = c("predicted", "true"))
## true
## predicted 1 2 3 4 5 6 7 8 9 10 11 12
## 1 10 0 0 1 0 0 0 0 1 0 0 0
## 2 1 4 0 0 0 0 0 0 0 0 0 0
## 3 0 1 4 0 0 0 0 0 0 0 0 0
## 4 0 0 0 3 1 0 0 0 0 0 0 0
## 5 0 0 0 0 2 0 0 0 0 0 0 0
## 6 0 0 0 0 0 1 0 0 0 0 0 0
## 7 0 0 0 0 0 0 3 0 0 0 0 0
## 8 0 0 0 0 0 0 0 4 1 0 0 0
## 9 1 0 1 0 0 0 0 0 3 0 0 0
## 10 0 0 0 0 0 0 0 0 0 4 0 0
## 11 0 0 0 0 0 0 0 0 0 0 1 0
## 12 0 0 0 0 1 0 0 0 0 0 0 1
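The agreement in this table can also be summarized by a single resubstitution accuracy (a small sketch; the caveat about re-using the training data still applies):

# Proportion of samples assigned to their true group (on the training data)
mean(predicted.class == trichoptera$Group)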
Finally, we can get the coordinates of the new data on the same graph as the original ones with type = "scores". This is done by averaging the latent positions \(\hat{\mathbf{Z}}_i + \boldsymbol{\mu}_k\) (found when the sample is assumed to come from group \(k\)) and weighting them with the \(\hat{\pi}_k\). Some samples have compositions that put them very far from their group mean.
library(ggplot2)
predicted.scores <- predict(myLDA_nocov, newdata = trichoptera, type = "scores")
colnames(predicted.scores) <- paste0("Axis.", 1:ncol(predicted.scores))
predicted.scores <- as.data.frame(predicted.scores)
predicted.scores$group <- trichoptera$Group
plot(myLDA_nocov, map = "individual", nb_axes = 2, plot = FALSE) +
  geom_point(data = predicted.scores,
             aes(x = Axis.1, y = Axis.2, color = group, label = NULL))
It is possible to correct for other covariates before finding the LDA axes that best separate the groups. In our case, we're going to use Wind as a covariate and illustrate the main differences with the previous fit:
myLDA_cov <- PLNLDA(Abundance ~ Wind + 0 + offset(log(Offset)),
                    grouping = Group,
                    data = trichoptera)
##
## Performing discriminant Analysis...
## DONE!
All fields of our new PLNLDA fit can be accessed as before, with similar results. The only important difference is the result of coef: since we included a covariate in the model, coef now returns a 1-column matrix for \(\hat{\boldsymbol{\Theta}}\) instead of NULL:
coef(myLDA_cov) %>% head %>% knitr::kable()
Wind | |
---|---|
Che | -0.3313695 |
Hyc | 1.2586177 |
Hym | -0.1928927 |
Hys | -0.4545873 |
Psy | 0.0378202 |
Aga | -0.0612222 |
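In other words, \(\hat{\boldsymbol{\Theta}}\) now has one row per species and a single column for the Wind effect, which can be checked directly (a quick sketch):

# One coefficient per species for the single covariate Wind
dim(coef(myLDA_cov))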
The group-specific main effects can still be accessed with $group_means:
myLDA_cov$group_means %>% head %>% knitr::kable(digits = 2)
grouping1 | grouping2 | grouping3 | grouping4 | grouping5 | grouping6 | grouping7 | grouping8 | grouping9 | grouping10 | grouping11 | grouping12 |
---|---|---|---|---|---|---|---|---|---|---|---|
-16.23 | -9.93 | -19.34 | -15.09 | -20.58 | -35.17 | -23.94 | -22.54 | -15.55 | 2.37 | 8.45 | -18.88 |
-19.26 | -35.11 | -2.26 | -12.80 | -10.51 | -39.13 | -26.82 | -23.98 | -11.79 | -13.91 | -4.36 | 0.75 |
-1.19 | -4.41 | -1.77 | -1.81 | -5.80 | -6.84 | -2.66 | -1.84 | -1.59 | -1.58 | -19.83 | -3.32 |
-19.57 | -26.54 | -3.34 | -0.53 | -21.76 | -30.99 | -24.12 | -5.21 | -1.31 | -0.70 | -14.96 | -20.50 |
-0.05 | -0.42 | -0.47 | -0.85 | -0.46 | -0.43 | -0.60 | -0.71 | -0.06 | -0.15 | -0.18 | -1.03 |
-1.90 | -4.07 | -1.76 | -2.48 | -5.52 | -10.43 | -2.72 | -2.75 | -3.39 | -1.84 | -18.27 | -21.49 |
The plot method

Once again, the plot method is very useful to get a quick look at the results.
plot(myLDA_cov)
The predict method

We can again predict the most likely group for each sample:
predicted.class_cov <- predict(myLDA_cov, newdata = trichoptera, type = "response")
and check that we recover the correct class in most cases (again, we used the same data set for model fitting and group prediction only for ease of exposition):
table(predicted.class_cov, trichoptera$Group, dnn = c("predicted", "true"))
## true
## predicted 1 2 3 4 5 6 7 8 9 10 11 12
## 1 11 0 0 1 0 0 0 0 1 0 0 0
## 2 1 4 0 0 0 0 0 0 0 0 0 0
## 3 0 1 4 0 0 0 0 0 0 0 0 0
## 4 0 0 0 3 1 0 0 0 0 0 0 0
## 5 0 0 0 0 2 0 0 0 0 0 0 0
## 6 0 0 0 0 0 1 0 0 0 0 0 0
## 7 0 0 0 0 0 0 3 0 0 0 0 0
## 8 0 0 0 0 0 0 0 4 1 0 0 0
## 9 0 0 1 0 0 0 0 0 3 0 0 0
## 10 0 0 0 0 0 0 0 0 0 4 0 0
## 11 0 0 0 0 0 0 0 0 0 0 1 0
## 12 0 0 0 0 1 0 0 0 0 0 0 1