The ‘McMaster Pandemic’ model

Ben Bolker

2022-06-14

model structure

The “McMasterPandemic” model is a compartmental model developed to model the COVID-19 pandemic. Its main features are:

library(McMasterPandemic)

vis_model(method="igraph",do_symbols=FALSE)
## Warning in rExp(p, return_val = "eigenvector", ...): CHECK: may not be working
## properly for testify?

## NULL

basic operation (simulation)

time-varying parameters

breakpoints

log-linear models

The

splines

calibration

model expansion

Testing and tracing

To incorporate testing and tracing mechanistically, we have expanded the basic model in order to be able to

Our basic approach is to add an additional set of compartments to our model, expanding each compartment into the following sub-compartments:

In addition to these compartments, we also add accumulator compartments for negative and positive test reports. The appropriate accumulator is incremented when a test is counted, at either the sampling time or the reporting time (see the testing_time argument below).

We can then recover the daily numbers of negative and positive tests by differencing the cumulative totals (adding observation error if desired).
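For example, the differencing step can be sketched in base R (the numbers here are made-up illustrations, not package output):

```r
## made-up cumulative positive-test reports on five consecutive days
cum_pos <- c(0, 3, 10, 22, 40)

## daily counts are first differences of the cumulative totals
daily_pos <- diff(cum_pos)   ## 3 7 12 18

## optionally add observation error, e.g. negative binomial noise
## with mean equal to the expected daily count
set.seed(101)
obs_pos <- rnbinom(length(daily_pos), mu = daily_pos, size = 5)
```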


Individuals can also move between epidemiological compartments while awaiting test results, e.g. from “presymptomatic, awaiting positive test results” (Ip_p) to “mild infection, awaiting positive test results” (Im_p). Moving between compartments does not change testing status, with one exception: when untested, severely symptomatic individuals are hospitalized for COVID, we assume they are tested immediately.

In order to reflect the range of possible testing strategies, we assign a testing weight to each compartment in the model that specifies what fraction of the current testing intensity is allocated to that compartment. We take the current testing intensity \(T\) (overall per capita tests/day) as a model input. Then, given testing weights \(w_i\), the *per capita* rate at which individuals in epidemiological compartment \(i\) move from _u to _n or _p (awaiting test results) is \[\begin{equation} \frac{T w_i}{\sum_j w_j P_j} \,, \end{equation}\] where \(P_j\) is the proportion of the population in compartment \(j\). The weights depend on the testing strategy and population state in complicated ways; if only confirmatory testing is being done (no screening, contact tracing, or surveillance testing), then the weights will be skewed toward symptomatic compartments – although not entirely, as observed test positivity rarely goes above 20%, and sensitivity is thought to be much higher than this. More contact tracing will increase the weights of non-symptomatically infected people. More random-surveillance testing will make the weights of all the groups more similar.
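A quick base-R sanity check of this expression (the intensity and weights below are hypothetical, not package defaults): weighting the per-compartment rates by the compartment proportions recovers the overall testing intensity \(T\).

```r
## hypothetical testing intensity and weights (not package defaults);
## "T_int" rather than "T" to avoid masking R's TRUE shorthand
T_int <- 0.005                                      # per capita tests/day
w <- c(S = 0.1, Ip = 0.3, Im = 1.0, Is = 2.0)       # testing weights w_i
P <- c(S = 0.90, Ip = 0.04, Im = 0.05, Is = 0.01)   # proportions P_j

## per capita rate of moving from _u to _n or _p, by compartment
rate <- T_int * w / sum(w * P)

## consistency check: total per capita tests/day recovers T
stopifnot(isTRUE(all.equal(sum(rate * P), T_int)))
```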

Including the testing structure increases the number of compartments substantially, and consequently yields a much larger flow matrix

vis_model(testify=TRUE,aspect="fill")
## Warning in rExp(p, return_val = "eigenvector", ...): CHECK: may not be working
## properly for testify?

Explicit testing structure is enabled if the parameter vector/list contains an element testing_intensity that is greater than 0. (As a side note, if you are using read_params("PHAC_testify.csv") to capture our most recent set of default parameters and you don’t want an explicit-testing model, you should use update(., testing_intensity=0).) The argument testing_time to the make_ratemat() function determines when testing is counted (“sample” or “report”); this can be passed to run_sim() (or from farther upstream) via the ratemat_args (list) argument. The default is to count tests at “sample” time, with a warning.
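Putting these options together, a sketch of a typical call sequence (untested; it assumes the functions and arguments described above and requires the McMasterPandemic package):

```r
library(McMasterPandemic)

## read the default parameter set; it includes testing_intensity > 0,
## so the explicit-testing model is enabled automatically
params <- read_params("PHAC_testify.csv")

## count tests at reporting time rather than the default sampling time
sim <- run_sim(params,
               ratemat_args = list(testing_time = "report"))

## alternatively, disable the explicit-testing structure entirely
params_no_test <- update(params, testing_intensity = 0)
```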

age structure

goals

The ability to simulate/forecast

why another simulator?

Given the large number of available simulators, why did we write another one?

Platform

Familiarity, convenience, and speed/ease of implementation, debugging, and maintenance are most important. For us, this rules out Julia and Python and strongly suggests R. Run-time speed may eventually be important, for (1) running stochastic replicates and varying-parameter ensembles and (2) incorporating the model in simulation-based estimation methods (ABC, iterated filtering/pomp). If we focus on estimation, we have a choice among Gibbs-based platforms (NIMBLE, JAGS) and platforms that only allow continuous latent variables (TMB, Stan).

Model

Compartmental structure

Interventions

Some of these are simple. We want to set up a general structure for importing a dated list of changes in parameters (date, focal parameter, new value or proportional change). Testing rates can be included as part of this “non-autonomous”/forcing/external part of the parameterization.
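One possible representation, sketched in base R (the column names and the relative/absolute convention are illustrative, not the package's actual schema):

```r
## a dated schedule of parameter changes: each row changes one parameter,
## either to an absolute value or by a proportional (relative) factor
schedule <- data.frame(
  Date     = as.Date(c("2020-03-15", "2020-04-01")),
  Symbol   = c("beta0", "testing_intensity"),
  Value    = c(0.5, 0.002),
  Relative = c(TRUE, FALSE),  # TRUE: multiply current value; FALSE: replace it
  stringsAsFactors = FALSE
)

## apply all changes scheduled up to a given date to a named parameter vector
apply_schedule <- function(params, schedule, date) {
  due <- schedule[schedule$Date <= date, ]
  for (i in seq_len(nrow(due))) {
    s <- due$Symbol[i]
    params[s] <- if (due$Relative[i]) params[s] * due$Value[i] else due$Value[i]
  }
  params
}

params <- c(beta0 = 1.0, testing_intensity = 0.001)
new_params <- apply_schedule(params, schedule, as.Date("2020-05-01"))
## beta0 halved to 0.5; testing_intensity replaced by 0.002
```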

Parameterization

Asymptomatic/undetected cases

Stochasticity

Geographic structure

Age structure

Delay distributions

(not sure how important this is?)

The disadvantages of linear-chain and convolution models are speed and, potentially, complexity/transparency (modeling the convolution with the generation interval (GI) handles an arbitrary distribution of infectiousness over time, but making it interact with testing and treatment pipelines seems complicated?)
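To illustrate the linear-chain device mentioned above (a standalone toy example, not package code): passing individuals through \(n\) sequential sub-compartments, each left at rate \(n/\tau\), gives an Erlang-distributed residence time with mean \(\tau\) and variance \(\tau^2/n\), instead of the (more dispersed) exponential residence time of a single compartment.

```r
## Erlang ("linear chain") residence times via n sub-compartments
set.seed(1)
n   <- 4      # number of sub-compartments
tau <- 5      # desired mean residence time (days)

## total residence time = sum of n exponential stages, each with mean tau/n
stage_times <- matrix(rexp(1e5 * n, rate = n / tau), ncol = n)
total_time  <- rowSums(stage_times)

mean(total_time)   # ~ tau = 5
var(total_time)    # ~ tau^2 / n = 6.25 (vs 25 for a single compartment)
```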

Relevant existing models (see model list)