Introduction

Mixed models are models that combine fixed and random effects.

Modern and classic approaches

The modern approach to mixed models involves estimating the variances of random effects and finding the conditional modes for each group (see below).

Compared to the classic ANOVA approach to mixed models, the modern approach is:

  • more flexible: it handles unbalanced designs, crossed random effects, and non-normal responses (GLMMs)
  • more computationally demanding, and classical denominator df/\(p\) values are harder to come by

Example

Testing the effects of acid rain on spruce tree growth.

We have a complex manipulation (what kind of air the trees are exposed to). As a result:

  • we can manipulate only a small number of whole trees
  • we take repeated measurements on each tree, so measurements from the same tree are not independent

Model

Treatment is a fixed effect

Tree is a random effect
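As a minimal sketch (data frame and column names hypothetical), this model could be fit with lme4:

```r
## hypothetical spruce data: one row per measurement, with columns
## growth (response), treatment (fixed effect), tree (grouping factor)
library(lme4)
fit <- lmer(growth ~ treatment + (1 | tree), data = spruce)
```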

Random effects

Random effects are based on unordered factors (grouping variables)

Is this a random effect?

Treating something as a random effect means treating the levels as interchangeable from the point of view of your scientific hypothesis.

There aren’t hard-and-fast rules about when you should model a predictor as a random effect. Here are some criteria …

Philosophical questions

Types of analysis

Inferential questions

Choosing to use a random effect will affect your inferences

Examples

Influenza vaccination experiment

Spruce trees

Practical questions

Random slopes

Fitting

Modern mixed-model packages (see below) can fit a wide variety of models. You just need to specify which effects are random.
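For example, lme4’s formula syntax (variable names hypothetical) expresses random intercepts, the random slopes mentioned above, and crossed random effects:

```r
library(lme4)
lmer(y ~ x + (1 | g), data = d)               # random intercept per level of g
lmer(y ~ x + (x | g), data = d)               # random intercept and random slope of x
lmer(y ~ x + (1 | g1) + (1 | g2), data = d)   # crossed random effects
```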

Too few levels

If you have something that should be a random-effect predictor but it has too few levels (a common rule of thumb is fewer than about five), a modern mixed model can’t reliably estimate its variance

It’s OK to treat your random effect as a fixed effect, as long as this is properly reflected in your scientific conclusions.
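A minimal sketch (names hypothetical), treating a grouping factor with only a few levels as a fixed blocking factor instead:

```r
## block has only a handful of levels, crossed with the treatment x,
## so we estimate a separate fixed effect for each block
fit <- lm(y ~ x + block, data = d)
```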

Residual structure and group structure

Standard random effects: we attach a random effect to the group

  • Sometimes called G-side modeling
  • Group membership is binary (same or different)
  • Hierarchy is possible
    • You and I are in the same country, but not the same village
    • But the model doesn’t know whether our villages are close together

R-side modeling: we impose a covariance structure on the residuals

  • Arbitrary variance-covariance structure (time, space, etc.)
    • I am very close to Ben, and sort of close to you
  • Can also allow for heteroscedasticity
  • Disadvantages
    • Harder to combine effects
    • Easier to mis-specify the model
    • Difficult to implement (especially for generalized models)
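A minimal sketch of R-side structure with nlme’s lme (the data frame d and its columns are hypothetical; time should be an integer index for corAR1):

```r
library(nlme)
fit <- lme(y ~ time,
           random = ~ 1 | group,                        # G-side: random intercept
           correlation = corAR1(form = ~ time | group), # R-side: AR(1) residual correlation
           weights = varIdent(form = ~ 1 | group),      # R-side: per-group residual variance
           data = d)
```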

How modern methods work

It’s complicated!

Typically based on the marginal likelihood: the probability of observing the outcomes, integrated over the possible values of the random effects.
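In symbols, writing \(\beta\) for the fixed-effect parameters, \(b\) for the random effects, and \(\theta\) for their variance parameters:

\[
L(\beta, \theta) = \int p(y \mid b, \beta)\, p(b \mid \theta)\, db
\]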

This balances the dispersion of the random effects around 0 against the dispersion of the data conditional on the random effects.

Shrinkage: estimated group-level values get “shrunk” toward the overall mean, especially for groups with few observations or extreme values
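A small demonstration with lme4’s built-in sleepstudy data, comparing raw per-subject means with the shrunken conditional modes:

```r
library(lme4)
fm <- lmer(Reaction ~ 1 + (1 | Subject), data = sleepstudy)
raw    <- with(sleepstudy, tapply(Reaction, Subject, mean))  # raw per-subject means
shrunk <- coef(fm)$Subject[, "(Intercept)"]                  # conditional modes (BLUPs)
grand  <- fixef(fm)[["(Intercept)"]]                         # overall mean
## each conditional mode lies between the raw subject mean and the grand mean
head(cbind(raw, shrunk, grand))
```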

How do we do it?

Different for linear mixed models (LMMs: normally distributed response) and generalized linear mixed models (GLMMs: binomial, Poisson, etc.)

  • LMMs: REML vs ML
    • analogy: dividing by \(n-1\) when estimating variance
    • analogy: paired \(t\)-test
    • REML is the natural extension of this idea to >2 treatment levels per block
  • GLMMs: penalized quasi-likelihood (PQL), Laplace approximation, Gauss-Hermite quadrature

Or:

  • Bayesian approach: put a combined prior on the parameters
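A sketch of these choices in lme4, using its built-in sleepstudy and cbpp data sets (lme4 doesn’t do PQL; MASS::glmmPQL does):

```r
library(lme4)
## LMM: REML (the default) vs ML
lmm_reml <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
lmm_ml   <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy, REML = FALSE)

## GLMM: Laplace approximation (nAGQ = 1, the default) vs adaptive
## Gauss-Hermite quadrature (nAGQ > 1; single scalar random effect only)
glmm_lap <- glmer(cbind(incidence, size - incidence) ~ period + (1 | herd),
                  data = cbpp, family = binomial)
glmm_agq <- update(glmm_lap, nAGQ = 10)
```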

Practical details in R

  • Classical designs
    • aov + Error() term
  • Modern methods
    • lme (nlme package): older, better documented, more stable, does R-side models, complex variance structures, gives denominator df/\(p\) values
    • (g)lmer (lme4 package): newer, faster, does crossed random effects, GLMMs
    • glmmTMB: more families for GLMMs; zero-inflation, etc.
    • many others (see task view)
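The same random-intercept model in each framework (data frame and column names hypothetical):

```r
aov(y ~ x + Error(g), data = d)                # classical: aov + Error() term
nlme::lme(y ~ x, random = ~ 1 | g, data = d)   # lme
lme4::lmer(y ~ x + (1 | g), data = d)          # lmer
glmmTMB::glmmTMB(y ~ x + (1 | g), data = d)    # glmmTMB
```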