Time-to-event data, including both survival and censoring times, are generated using the functions defSurv and genSurv. The survival data definitions require a variable name as well as a specification of a scale value, which determines the mean survival time at a baseline level of covariates (i.e., all covariates set to 0). The Weibull distribution is used to generate these survival times. In addition, covariates (which have been defined previously) that influence survival time can be included in the formula field. Positive coefficients are associated with longer survival times (and lower hazard rates). Finally, the shape of the distribution can be specified; a shape value of 1 reflects the exponential distribution. As of simstudy version 0.5.0, it is also possible to generate survival data that violate a proportional hazards assumption. In addition, data with two or more competing risks can be generated.
The density, mean, and variance of the Weibull distribution that is used in the data generation process are defined by the parameters \(\lambda\) (scale) and \(\nu\) (shape) as shown below.
\[\begin{aligned} f(t) &= \frac{t^{\frac{1}{\nu}-1}}{\lambda \nu} \exp\left(-\frac{t^{\frac{1}{\nu}}}{\lambda}\right) \\ E(T) &= \lambda^\nu \Gamma(\nu + 1) \\ Var(T) &= (\lambda^2)^\nu \left( \Gamma(2\nu + 1) - \Gamma^2(\nu + 1) \right) \end{aligned}\]
The survival time \(T\) data are generated based on this formula:
\[ T = \left( -\frac{\log(U) \, \lambda}{\exp(\beta^\prime x)} \right)^\nu, \]
where \(U\) is a uniform random variable between 0 and 1, \(\beta\) is a vector of parameters in a Cox proportional hazard model, and \(x\) is a vector of covariates that impact survival time. \(\lambda\) and \(\nu\) can also vary by covariates.
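As a quick sanity check (a sketch, not part of the vignette's own code), the moment formula above can be verified by simulating directly from the inverse-transform expression for \(T\) with no covariates (\(\beta^\prime x = 0\)); the scale and shape values below are arbitrary illustrative choices.

```r
# Sketch: verify E(T) = lambda^nu * Gamma(nu + 1) by simulating from
# T = (-log(U) * lambda)^nu, the generation formula with beta'x = 0.
# lambda and nu are arbitrary illustrative values.
set.seed(123)

lambda <- 10  # scale
nu <- 0.5     # shape

U <- runif(1e6)
T <- (-log(U) * lambda)^nu

theoretical_mean <- lambda^nu * gamma(nu + 1)
c(empirical = mean(T), theoretical = theoretical_mean)
```

With a million draws, the empirical mean should agree with \(\lambda^\nu \Gamma(\nu + 1)\) to two or three decimal places.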
Here is an example showing how to generate data with covariates. In this case the scale and shape parameters will vary by group membership.
# Baseline data definitions

def <- defData(varname = "x1", formula = 0.5, dist = "binary")
def <- defData(def, varname = "grp", formula = 0.5, dist = "binary")

# Survival data definitions

set.seed(282716)

sdef <- defSurv(varname = "survTime", formula = "1.5*x1", scale = "grp*50 + (1-grp)*25",
  shape = "grp*1 + (1-grp)*1.5")
sdef <- defSurv(sdef, varname = "censorTime", scale = 80, shape = 1)

sdef
## varname formula scale shape transition
## 1: censorTime 0 80 1 0
## 2: survTime 1.5*x1 grp*50 + (1-grp)*25 grp*1 + (1-grp)*1.5 0
The data are generated with calls to genData and genSurv:

# Generate baseline data and add survival times
dtSurv <- genData(300, def)
dtSurv <- genSurv(dtSurv, sdef)

head(dtSurv)
## id x1 grp censorTime survTime
## 1: 1 0 0 42.9 9.88
## 2: 2 0 1 77.2 17.34
## 3: 3 0 1 143.8 142.94
## 4: 4 1 1 181.9 16.47
## 5: 5 1 0 210.9 108.28
## 6: 6 0 1 34.1 8.12
# A comparison of survival by group and x1
dtSurv[, round(mean(survTime), 1), keyby = .(grp, x1)]
## grp x1 V1
## 1: 0 0 149.2
## 2: 0 1 23.4
## 3: 1 0 46.3
## 4: 1 1 12.2
Observed survival times and censoring indicators can be generated using the competing risk functionality and specifying a censoring variable:
dtSurv <- genData(300, def)
dtSurv <- genSurv(dtSurv, sdef, timeName = "obsTime", censorName = "censorTime",
  eventName = "status", keepEvents = TRUE)

head(dtSurv)
## id x1 grp censorTime survTime obsTime status type
## 1: 1 0 1 92.4 49.071 49.071 1 survTime
## 2: 2 1 0 45.8 25.639 25.639 1 survTime
## 3: 3 1 1 278.2 4.045 4.045 1 survTime
## 4: 4 0 0 12.7 13.325 12.663 0 censorTime
## 5: 5 0 0 26.5 323.764 26.531 0 censorTime
## 6: 6 1 0 23.5 0.157 0.157 1 survTime
# estimate proportion of censoring by x1 and group
round(1 - mean(status), 2), keyby = .(grp, x1)] dtSurv[,
## grp x1 V1
## 1: 0 0 0.71
## 2: 0 1 0.10
## 3: 1 0 0.44
## 4: 1 1 0.13
Here is a Kaplan-Meier plot of the data by the four groups:
Here is a survival analysis (using a Cox proportional hazard model) of a slightly simplified data set with two baseline covariates only:
# Baseline data definitions

def <- defData(varname = "x1", formula = 0.5, dist = "binary")
def <- defData(def, varname = "x2", formula = 0.5, dist = "binary")

# Survival data definitions

sdef <- defSurv(varname = "survTime", formula = "1.5*x1 - .8*x2", scale = 50, shape = 1/2)
sdef <- defSurv(sdef, varname = "censorTime", scale = 80, shape = 1)

dtSurv <- genData(300, def)
dtSurv <- genSurv(dtSurv, sdef, timeName = "obsTime", censorName = "censorTime",
  eventName = "status")

coxfit <- survival::coxph(Surv(obsTime, status) ~ x1 + x2, data = dtSurv)
The 95% confidence intervals of the parameter estimates include the values used to generate the data:
| Characteristic | log(HR)1 | 95% CI1 | p-value |
|---|---|---|---|
| x1 | 1.5 | 1.2, 1.8 | <0.001 |
| x2 | -0.89 | -1.1, -0.64 | <0.001 |

1 HR = Hazard Ratio, CI = Confidence Interval
In the previous example, we actually used the competing risk mechanism in genSurv to generate an observed time variable (which was the earliest of the censoring and event times). This is done by specifying a timeName argument that will represent the observed time value. The event status is indicated in the field set by the eventName argument (which defaults to "event"). If a variable name is indicated in the censorName argument, the censored events automatically have a value of 0. As we saw above, competing risk information can be generated as part of genSurv. However, there is an additional function, addCompRisk, that will generate the competing risk information using an existing data set. The example here will take that approach.
d1 <- defData(varname = "x1", formula = .5, dist = "binary")
d1 <- defData(d1, "x2", .5, dist = "binary")

dS <- defSurv(varname = "event_1", formula = "-10 - 0.6*x1 + 0.4*x2", shape = 0.3)
dS <- defSurv(dS, "event_2", "-6.5 + 0.3*x1 - 0.5*x2", shape = 0.5)
dS <- defSurv(dS, "censor", "-7", shape = 0.55)

dtSurv <- genData(1001, d1)
dtSurv <- genSurv(dtSurv, dS)

dtSurv
## id x1 x2 censor event_1 event_2
## 1: 1 0 0 81.38 32.80 21.85
## 2: 2 0 0 44.71 11.22 28.56
## 3: 3 0 1 5.61 22.91 52.99
## 4: 4 1 1 14.85 22.18 4.72
## 5: 5 0 0 43.83 13.58 36.07
## ---
## 997: 997 1 1 14.61 14.76 26.14
## 998: 998 1 1 30.19 6.42 16.13
## 999: 999 1 0 7.54 26.09 20.33
## 1000: 1000 1 0 19.50 27.08 6.43
## 1001: 1001 1 0 31.74 11.95 22.59
dtSurv <- addCompRisk(dtSurv, events = c("event_1", "event_2", "censor"),
  timeName = "time", censorName = "censor")

dtSurv
## id x1 x2 time event type
## 1: 1 0 0 21.85 2 event_2
## 2: 2 0 0 11.22 1 event_1
## 3: 3 0 1 5.61 0 censor
## 4: 4 1 1 4.72 2 event_2
## 5: 5 0 0 13.58 1 event_1
## ---
## 997: 997 1 1 14.61 0 censor
## 998: 998 1 1 6.42 1 event_1
## 999: 999 1 0 7.54 0 censor
## 1000: 1000 1 0 6.43 2 event_2
## 1001: 1001 1 0 11.95 1 event_1
The competing risk data can be plotted using the cumulative incidence functions (rather than the survival curves):
The data generation can all be done in two (instead of three) steps:
dtSurv <- genData(101, d1)
dtSurv <- genSurv(dtSurv, dS, timeName = "time", censorName = "censor")

dtSurv
## id x1 x2 time event type
## 1: 1 0 1 4.83 1 event_1
## 2: 2 0 1 22.42 1 event_1
## 3: 3 1 0 9.16 1 event_1
## 4: 4 1 0 25.13 1 event_1
## 5: 5 1 0 10.51 1 event_1
## ---
## 97: 97 0 0 13.88 2 event_2
## 98: 98 0 1 19.53 1 event_1
## 99: 99 0 0 15.10 0 censor
## 100: 100 0 1 10.99 2 event_2
## 101: 101 1 1 13.08 1 event_1
In the standard simstudy data generation process for survival/time-to-event outcomes that include covariates affecting the hazard rate, the ratio of hazards comparing different levels of a covariate is constant across all time points. For example, if we have a single binary covariate \(x\), the hazard \(\lambda(t)\) at time \(t\) is
\[\lambda(t|x) = \lambda_0(t) e ^ {\beta x}\] where \(\lambda_0(t)\) is a baseline hazard when \(x=0\). The ratio of the hazards for \(x=1\) compared to \(x=0\) is
\[\frac{\lambda_0(t) e ^ {\beta}}{\lambda_0(t)} = e ^ \beta,\]
so the log of the hazard ratio is a constant \(\beta\), and the hazard ratio is always \(e^\beta\).
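This constancy can be checked concretely with a small simulation sketch (not from the vignette itself). With shape \(\nu = 1\), the generation formula yields exponential times, so the proportional hazards property implies \(S(t \mid x{=}1) = S(t \mid x{=}0)^{e^\beta}\) at every \(t\); the parameter values below are arbitrary.

```r
# Sketch: under T = (-log(U) * lambda / exp(beta * x))^nu with nu = 1,
# the survival curves satisfy S1(t) = S0(t)^exp(beta) at every time point,
# i.e. the hazard ratio exp(beta) is constant. Values are illustrative.
set.seed(123)

beta <- -0.7
lambda <- 10
n <- 1e5

T0 <- -log(runif(n)) * lambda               # times when x = 0
T1 <- -log(runif(n)) * lambda / exp(beta)   # times when x = 1

for (t in c(2, 5, 10)) {
  S0 <- mean(T0 > t)  # empirical survival at t, x = 0
  S1 <- mean(T1 > t)  # empirical survival at t, x = 1
  cat(sprintf("t = %2d: S1 = %.3f, S0^exp(beta) = %.3f\n", t, S1, S0^exp(beta)))
}
```

At each time point the two printed values should match to about two decimal places, reflecting a hazard ratio of \(e^{-0.7}\) throughout.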
However, we may not always want to make the assumption that the hazard ratio is constant over all time periods. To facilitate this, it is possible to specify two different data definitions for the same outcome, using the transition field to specify when the second definition replaces the first. (While it would theoretically be possible to generate data for more than two periods, the process is more involved, and has not been implemented at this time.)
Constant/proportional hazard ratio
To start, here is an example assuming a constant log hazard ratio of -0.7:
def <- defData(varname = "x", formula = 0.4, dist = "binary")

defS <- defSurv(varname = "death", formula = "-14.6 - 0.7*x", shape = 0.35)
defS <- defSurv(defS, varname = "censor", scale = exp(13), shape = 0.5)

dd <- genData(500, def)
dd <- genSurv(dd, defS, digits = 2, timeName = "time", censorName = "censor")

fit <- survfit(Surv(time, event) ~ x, data = dd)
The Cox proportional hazards model recovers the correct log hazard ratio:

coxfit <- coxph(formula = Surv(time, event) ~ x, data = dd)
| Characteristic | log(HR)1 | 95% CI1 | p-value |
|---|---|---|---|
| x | -0.72 | -0.92, -0.52 | <0.001 |

1 HR = Hazard Ratio, CI = Confidence Interval
We can test the assumption of proportional hazards using weighted residuals. If the \(\text{p-value} < 0.05\), then we would conclude that the assumption of proportional hazards is not warranted. In this case \(p = 0.22\), so the model is apparently reasonable:
cox.zph(coxfit)
## chisq df p
## x 1.51 1 0.22
## GLOBAL 1.51 1 0.22
Non-constant/non-proportional hazard ratio
In this next case, the risk of death when \(x=1\) is lower at all time points compared to when \(x=0\), but the relative risk (or hazard ratio) changes at 150 days:
def <- defData(varname = "x", formula = 0.4, dist = "binary")

defS <- defSurv(varname = "death", formula = "-14.6 - 1.3*x", shape = 0.35, transition = 0)
defS <- defSurv(defS, varname = "death", formula = "-14.6 - 0.4*x", shape = 0.35,
  transition = 150)
defS <- defSurv(defS, varname = "censor", scale = exp(13), shape = 0.5)

dd <- genData(500, def)
dd <- genSurv(dd, defS, digits = 2, timeName = "time", censorName = "censor")

fit <- survfit(Surv(time, event) ~ x, data = dd)
The survival curve for the sample with \(x=1\) has a slightly different shape under this data generation process compared to the previous example under a constant hazard ratio assumption; there is more separation early on (prior to day 150), and then the gap is closed at a quicker rate.
If we ignore the possibility that there might be a different relationship over time, the Cox proportional hazards model gives an estimate of the log hazard ratio quite close to -0.70:
coxfit <- survival::coxph(formula = Surv(time, event) ~ x, data = dd)
| Characteristic | log(HR)1 | 95% CI1 | p-value |
|---|---|---|---|
| x | -0.71 | -0.90, -0.52 | <0.001 |

1 HR = Hazard Ratio, CI = Confidence Interval
However, further inspection of the proportionality assumption should make us question the appropriateness of the model. Since \(p<0.05\), we would be wise to see if we can improve on the model.
cox.zph(coxfit)
## chisq df p
## x 7.4 1 0.0065
## GLOBAL 7.4 1 0.0065
We might be able to see from the plot where proportionality diverges, in which case we can split the data set into two parts at the identified time point. (In many cases, the transition point or points won’t be so obvious, in which case the investigation might be more involved.) By splitting the data at day 150, we get the desired estimates:
dd2 <- survSplit(Surv(time, event) ~ ., data = dd, cut = c(150),
  episode = "tgroup", id = "newid")

coxfit2 <- survival::coxph(Surv(tstart, time, event) ~ x:strata(tgroup), data = dd2)
| Characteristic | log(HR)1 | 95% CI1 | p-value |
|---|---|---|---|
| x * strata(tgroup) |  |  |  |
| x * tgroup=1 | -1.3 | -1.6, -0.93 | <0.001 |
| x * tgroup=2 | -0.38 | -0.62, -0.13 | 0.003 |

1 HR = Hazard Ratio, CI = Confidence Interval
And the diagnostic test of proportionality confirms the appropriateness of the model:
cox.zph(coxfit2)
## chisq df p
## x:strata(tgroup) 2.44 2 0.3
## GLOBAL 2.44 2 0.3
Throughout this vignette, I have been using various assumptions for the parameters - formula, scale, and shape - that define the Weibull-based survival distribution. Where do these assumptions come from, and how can we determine what is appropriate to use in our simulations? That will depend, of course, on each specific application and use of the simulation, but there are two helper functions in simstudy, survGetParams and survParamPlot, that are intended to guide the process.

survGetParams will provide the formula and shape parameters (the scale parameter will always be set to 1) that define a curve close to points provided as inputs. For example, if we would like to find the parameters for a distribution where 80% survive until day 100 and 10% survive until day 200 (any number of points may be provided):
points <- list(c(100, 0.80), c(200, 0.10))
r <- survGetParams(points)
r
## [1] -17.007 0.297
We can visualize the curve that is defined by these parameters:
survParamPlot(f = r[1], shape = r[2], points)
And we can generate data based on these parameters:
defS <- defSurv(varname = "death", formula = -17, scale = 1, shape = 0.3)
defS <- defSurv(defS, varname = "censor", formula = 0, scale = exp(18.5), shape = 0.3)

dd <- genData(500)
dd <- genSurv(dd, defS, timeName = "time", censorName = "censor")