* Fixes related to `match` after changes in R devel.
* `getOption("ddhazard_max_threads")` defaults to one.
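  The option can be raised for a session with `options()`; a minimal sketch (the use of `parallel::detectCores()` is illustrative and not taken from this entry):

  ```r
  # raise the number of threads read via getOption("ddhazard_max_threads")
  n_cores <- parallel::detectCores()
  if (is.na(n_cores)) n_cores <- 1L
  options(ddhazard_max_threads = max(1L, n_cores - 1L))
  getOption("ddhazard_max_threads")
  ```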
* `all.equal`.
* Two bugs in `PF_get_score_n_hess` are fixed. One is that an off-diagonal block in the observed information matrix was not computed. The other is that parts of the score and observed information matrix were only correct if parts of them were multiplied by the duplication matrix.
* `nlopt` is no longer used in mode optimization. A Newton–Raphson method is used instead. This seems a bit faster in some cases and does not fail in some cases where `nlopt` did.
* A `fix_seed` argument is added to `PF_control`. `fix_seed = FALSE` combined with averaging and a low number of particles seems to yield better results.
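  A sketch of such a control object. Apart from `fix_seed`, the argument names below (the particle counts and the averaging start) are assumptions about `PF_control` and should be checked against `?PF_control`:

  ```r
  library(dynamichazard)
  ctrl <- PF_control(
    N_fw_n_bw = 250, N_smooth = 500, N_first = 1000, # assumed particle count arguments
    averaging_start = 100,                           # assumed argument that turns on averaging
    fix_seed = FALSE)                                # fix_seed is from this entry
  ```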
* A bug is fixed in `PF_EM` when some periods do not have any observations.
* `PF_EM`.
* `PF_get_score_n_hess` is added to compute the approximate negative observed information matrix and score vector.
* `nu` in `PF_control` scales the scale matrix to get an identical covariance matrix.
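  Assuming the proposal is a multivariate t-distribution (an assumption, not stated in this entry), a scale matrix \(S\) with \(\nu > 2\) degrees of freedom has covariance matrix \(\nu / (\nu - 2) \, S\), so scaling the scale matrix by \((\nu - 2) / \nu\) leaves the covariance matrix unchanged:

  ```r
  # numeric check of the scaling above; plain R, no package functions
  nu <- 8
  Q  <- matrix(c(.3, .1, .1, .2), 2L)  # target covariance matrix
  S  <- (nu - 2) / nu * Q              # rescaled scale matrix
  all.equal(nu / (nu - 2) * S, Q)      # TRUE: the covariance matrix is unchanged
  ```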
* `get_Q_0`.
* `predict.ddhazard` has been re-written. The output with `type = "term"` has changed. It now yields a list of lists. Each list contains a list for each `new_data` row. The time zero index is no longer included if `tstart` and `tstop` are not matched in `new_data`. Parallel computing is no longer supported. It likely did not yield any reduction in computation time with the previous implementation. Calls with `type = "term"` now use the `tstart` and `tstop` arguments and support predictions in the future. A covariance matrix is added to the terms in the predictions.
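  A hedged sketch of the new interface. The simulated data and the `by`, `max_T`, `Q_0`, and `Q` arguments to `ddhazard` are assumptions made for this example and not part of this entry; see `?ddhazard` and `?predict.ddhazard`:

  ```r
  library(dynamichazard)
  library(survival)
  set.seed(1)
  n <- 200                             # toy right-censored data for the sketch
  d <- data.frame(tstart = 0, tstop = pmin(rexp(n, .2), 10), x = rnorm(n))
  d$event <- as.integer(d$tstop < 10)
  fit <- ddhazard(                     # arguments besides the formula and data are assumed
    Surv(tstop, event) ~ x, data = d, by = 1, max_T = 10,
    Q_0 = diag(10, 2), Q = diag(.01, 2))
  # type = "term" with tstart/tstop giving the matching column names in new_data
  preds <- predict(fit, new_data = d[1:3, ], type = "term",
                   tstart = "tstart", tstop = "tstop")
  str(preds, max.level = 2)            # a list of lists as described in this entry
  ```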
* A bug is fixed with `type == "VAR"` in particle filters in the smoothing proposal distribution. This has a major impact for most calls.
* A bug is fixed with `type == "VAR"` in particle filters where the transition from the time zero state to time one was not used in the M-step estimation. This only has a larger impact for short series.
* A bug is fixed with `method == "bootstrap_filter"` where a wrong covariance matrix was used for the proposal distribution.
* A bug is fixed with `method == "AUX_normal_approx_w_particles"` where a wrong covariance matrix was used for the proposal distribution.
* A bug is fixed in `logLik.PF_clouds`. The log-likelihood approximation was too high, especially for the auxiliary particle filters.
* The `..._w_particles` methods in `PF_EM` have changed, so results differ from previous versions.
* `random` and `fixed` arguments are added to `PF_EM` as an alternative way to specify the random and fixed effect parts.
* The exponential model is supported in `PF_EM` and can be estimated with `model = "exponential"`.
* Covariance matrices in `ddhazard` objects are no longer degenerate (e.g., in the case where a second order random walk is used). Instead, the dimension is equal to the dimension of the error term.
* Arguments to `PF_EM` have been moved from the `control` list. Further, there is a `PF_control` function which should preferably be used to construct the object for the `control` argument of `PF_EM`.
* `static_glm`.
* `PF_EM` uses \(Q_0\) instead of \(Q\) for the artificial prior, and a bug has been fixed for sampling of the initial state in the backward filter. This has changed the output.
* It is no longer possible to call `PF_EM` with a `seed` argument. The new way to get reproducible results is to call `f1 <- PF_EM(...); .GlobalEnv$.Random.seed <- f1$seed; f2 <- eval(f1$call)`, somewhat as in `simulate.lm`.
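  The same pattern written out; the `...` stands in for the original arguments, and only the `seed` and `call` elements are taken from this entry:

  ```r
  library(dynamichazard)
  f1 <- PF_EM(...)                     # original fit with your own arguments
  .GlobalEnv$.Random.seed <- f1$seed   # restore the RNG state stored with the fit
  f2 <- eval(f1$call)                  # re-evaluate the stored call to reproduce the fit
  ```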
* The `model` argument to `ddhazard` should be changed from `"exp_bin"`, `"exp_clip_time"`, or `"exp_clip_time_w_jump"` to `"exponential"`.
* `glm` is used to find the first state vector.
* The `control` argument of `ddhazard` is changed.

The following has been changed or added:
* A bug is fixed in `get_risk_obj` when `is_for_discrete_model = TRUE` in the call. The issue was that individuals who were right censored in the middle of an interval were included although we do not know that they survive the entire interval. This will potentially affect the output for logit fits with `ddhazard`.
* `ddhazard_boot` now provides the option of different learning rates to be used rather than one if the first fit does not succeed.
* A new convergence criterion can be used by calling `ddhazard` with `control = list(criteria = "delta_likeli", ...)`. The relative change in the coefficients seems “preferable” as a default since it tends not to converge when the fit has large “odd” deviations due to a few observations. The likelihood criterion, though, stops earlier for models that do not have such deviations.
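  A sketch of such a call, reusing the toy data `d` from the sketch under the `predict.ddhazard` entry above (the `by`, `max_T`, `Q_0`, and `Q` arguments remain assumptions made for the example):

  ```r
  fit_ll <- ddhazard(
    Surv(tstop, event) ~ x, data = d, by = 1, max_T = 10,
    Q_0 = diag(10, 2), Q = diag(.01, 2),
    control = list(criteria = "delta_likeli"))  # stop on the relative likelihood change
  ```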
* `ddhazard`.
* A `hatvalues` method for `ddhazard`. These are described in the “ddhazard” vignette, and examples of usage are shown in the “Diagnostics” vignette.
* A `residuals` method and a vignette “Diagnostics” with examples of usage of the `residuals` function.
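  A brief sketch, reusing the fit `fit` from the sketch under the `predict.ddhazard` entry above; the `residuals` method is used in the same way, and both are illustrated in the “Diagnostics” vignette:

  ```r
  hat_vals <- hatvalues(fit)  # hat values for the fitted ddhazard object
  str(hat_vals)
  ```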
* Changes to the `rug` call in the shiny app demo, a bug fix in the simulation function for the logit model, and the computation time of the estimation added to the output.
* `ddhazard` can be called with `control = list(use_pinv = FALSE, ...)`.
* The `exp_combined` method.

The following have been added:
* A `weights` argument when calling `ddhazard`.
* `ddhazard_boot`. See the new vignette ‘Bootstrap_illustration’ for details (a usage sketch follows this list).
* `print` is added.
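A hedged sketch of a bootstrap call as referenced above. The simulated data, the `ddhazard` arguments besides the formula, and the `R` argument (the number of bootstrap samples) are assumptions made for this example; see `?ddhazard_boot` and the ‘Bootstrap_illustration’ vignette:

```r
library(dynamichazard)
library(survival)
set.seed(4)
n <- 200                               # toy right-censored data for the sketch
d <- data.frame(tstop = pmin(rexp(n, .2), 10), x = rnorm(n))
d$event <- as.integer(d$tstop < 10)
fit <- ddhazard(
  Surv(tstop, event) ~ x, data = d, by = 1, max_T = 10,
  Q_0 = diag(10, 2), Q = diag(.01, 2))
boot_out <- ddhazard_boot(fit, R = 199)  # R, the number of bootstrap samples, is assumed
```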