This vignette describes the analysis of data on the mean off-time
reduction in patients given dopamine agonists as adjunct therapy in
Parkinson’s disease, in a network of 7 trials of 4 active drugs plus
placebo (Dias et al.
2011). The data are available in this package as
parkinsons:
head(parkinsons)
#>   studyn trtn     y    se   n  diff se_diff
#> 1      1    1 -1.22 0.504  54    NA   0.504
#> 2      1    3 -1.53 0.439  95 -0.31   0.668
#> 3      2    1 -0.70 0.282 172    NA   0.282
#> 4      2    2 -2.40 0.258 173 -1.70   0.382
#> 5      3    1 -0.30 0.505  76    NA   0.505
#> 6      3    2 -2.60 0.510  71 -2.30   0.718
We consider analysing these data in three separate ways:

1. Using the arm-based data (means y and corresponding standard errors se);
2. Using the contrast-based data (mean differences diff and corresponding standard errors se_diff);
3. Using a combination, where some studies contribute arm-based data and others contribute contrast-based data.

Note: In this case, with Normal likelihoods for both arms and contrasts, we will see that the three analyses give identical results. In general, unless the arm-based likelihood is Normal, results from a model using a contrast-based likelihood will not exactly match those from a model using an arm-based likelihood, since the contrast-based Normal likelihood is only an approximation. Similarity of results depends on the suitability of the Normal approximation, which may not always be appropriate, e.g. with a small number of events or a small sample size for a binary outcome. The use of an arm-based likelihood (sometimes called an “exact” likelihood) is therefore generally preferable where possible.
We begin with an analysis of the arm-based data - means and standard errors.
We have arm-level continuous data giving the mean off-time reduction
(y) and standard error (se) in each arm. We
use the function set_agd_arm() to set up the network.
arm_net <- set_agd_arm(parkinsons, 
                      study = studyn,
                      trt = trtn,
                      y = y, 
                      se = se,
                      sample_size = n)
arm_net
#> A network with 7 AgD studies (arm-based).
#> 
#> ------------------------------------------------------- AgD studies (arm-based) ---- 
#>  Study Treatment arms
#>  1     2: 1 | 3      
#>  2     2: 1 | 2      
#>  3     3: 4 | 1 | 2  
#>  4     2: 4 | 3      
#>  5     2: 4 | 3      
#>  6     2: 4 | 5      
#>  7     2: 4 | 5      
#> 
#>  Outcome type: continuous
#> ------------------------------------------------------------------------------------
#> Total number of treatments: 5
#> Total number of studies: 7
#> Reference treatment is: 4
#> Network is connected
We let treatment 4 be set by default as the network reference
treatment, since this results in considerably improved sampling
efficiency over choosing treatment 1 as the network reference. The
sample_size argument is optional, but enables the nodes to
be weighted by sample size in the network plot.
Plot the network structure.
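For example, a call along the following lines produces the network plot; the weight_edges and weight_nodes arguments shown here are optional display settings.
plot(arm_net, weight_edges = TRUE, weight_nodes = TRUE)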
We fit both fixed effect (FE) and random effects (RE) models.
First, we fit a fixed effect model using the nma()
function with trt_effects = "fixed". We use \(\mathrm{N}(0, 100^2)\) prior distributions
for the treatment effects \(d_k\) and
study-specific intercepts \(\mu_j\). We
can examine the range of parameter values implied by these prior
distributions with the summary() method:
summary(normal(scale = 100))
#> A Normal prior distribution: location = 0, scale = 100.
#> 50% of the prior density lies between -67.45 and 67.45.
#> 95% of the prior density lies between -196 and 196.
The model is fitted using the nma() function.
arm_fit_FE <- nma(arm_net, 
                  trt_effects = "fixed",
                  prior_intercept = normal(scale = 100),
                  prior_trt = normal(scale = 100))
#> Note: Setting "4" as the network reference treatment.
Basic parameter summaries are given by the print()
method:
arm_fit_FE
#> A fixed effects NMA with a normal likelihood (identity link).
#> Inference for Stan model: normal.
#> 4 chains, each with iter=2000; warmup=1000; thin=1; 
#> post-warmup draws per chain=1000, total post-warmup draws=4000.
#> 
#>       mean se_mean   sd   2.5%   25%   50%   75% 97.5% n_eff Rhat
#> d[1]  0.52    0.01 0.48  -0.41  0.19  0.53  0.84  1.47  1554    1
#> d[2] -1.28    0.01 0.52  -2.30 -1.64 -1.28 -0.93 -0.27  1461    1
#> d[3]  0.04    0.01 0.33  -0.59 -0.19  0.04  0.27  0.66  1791    1
#> d[5] -0.30    0.00 0.21  -0.71 -0.44 -0.30 -0.16  0.13  2891    1
#> lp__ -6.69    0.06 2.38 -12.27 -8.05 -6.37 -4.95 -3.04  1581    1
#> 
#> Samples were drawn using NUTS(diag_e) at Mon Apr 29 16:39:50 2024.
#> For each parameter, n_eff is a crude measure of effective sample size,
#> and Rhat is the potential scale reduction factor on split chains (at 
#> convergence, Rhat=1).
By default, summaries of the study-specific intercepts \(\mu_j\) are hidden, but could be examined
by changing the pars argument:
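For example, a call along these lines would include them (the pars selection shown is illustrative):
print(arm_fit_FE, pars = c("d", "mu"))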
The prior and posterior distributions can be compared visually using
the plot_prior_posterior() function:
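For example:
plot_prior_posterior(arm_fit_FE)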
We now fit a random effects model using the nma()
function with trt_effects = "random". Again, we use \(\mathrm{N}(0, 100^2)\) prior distributions
for the treatment effects \(d_k\) and
study-specific intercepts \(\mu_j\),
and we additionally use a \(\textrm{half-N}(5^2)\) prior for the
heterogeneity standard deviation \(\tau\). We can examine the range of
parameter values implied by these prior distributions with the
summary() method:
summary(normal(scale = 100))
#> A Normal prior distribution: location = 0, scale = 100.
#> 50% of the prior density lies between -67.45 and 67.45.
#> 95% of the prior density lies between -196 and 196.
summary(half_normal(scale = 5))
#> A half-Normal prior distribution: location = 0, scale = 5.
#> 50% of the prior density lies between 0 and 3.37.
#> 95% of the prior density lies between 0 and 9.8.
The RE model is then fitted:
arm_fit_RE <- nma(arm_net, 
                  seed = 379394727,
                  trt_effects = "random",
                  prior_intercept = normal(scale = 100),
                  prior_trt = normal(scale = 100),
                  prior_het = half_normal(scale = 5),
                  adapt_delta = 0.99)
#> Note: Setting "4" as the network reference treatment.
#> Warning: There were 3 divergent transitions after warmup. See
#> https://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
#> to find out why this is a problem and how to eliminate them.
#> Warning: Examine the pairs() plot to diagnose sampling problems
We do see a small number of divergent transition errors, which cannot
simply be removed by increasing the value of the
adapt_delta argument (by default set to 0.95
for RE models). To diagnose, we use the pairs() method to
investigate where in the posterior distribution these divergences are
happening (indicated by red crosses):
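A call along the following lines could be used; the pars selection shown is illustrative:
pairs(arm_fit_RE, pars = c("d", "tau"))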
The divergent transitions occur in the upper tail of the heterogeneity standard deviation. In this case, with only a small number of studies, there is not very much information to estimate the heterogeneity standard deviation and the prior distribution may be too heavy-tailed. We could consider a more informative prior distribution for the heterogeneity variance to aid estimation.
Basic parameter summaries are given by the print()
method:
arm_fit_RE
#> A random effects NMA with a normal likelihood (identity link).
#> Inference for Stan model: normal.
#> 4 chains, each with iter=2000; warmup=1000; thin=1; 
#> post-warmup draws per chain=1000, total post-warmup draws=4000.
#> 
#>        mean se_mean   sd   2.5%    25%    50%    75% 97.5% n_eff Rhat
#> d[1]   0.55    0.02 0.66  -0.65   0.16   0.53   0.93  1.77  1437    1
#> d[2]  -1.31    0.02 0.71  -2.69  -1.74  -1.31  -0.89  0.00  1628    1
#> d[3]   0.04    0.01 0.50  -0.91  -0.25   0.04   0.33  0.97  1636    1
#> d[5]  -0.31    0.01 0.46  -1.26  -0.51  -0.29  -0.09  0.59  2024    1
#> lp__ -12.83    0.10 3.60 -20.63 -15.03 -12.50 -10.23 -6.88  1380    1
#> tau    0.40    0.02 0.40   0.01   0.13   0.29   0.52  1.49   618    1
#> 
#> Samples were drawn using NUTS(diag_e) at Mon Apr 29 16:40:01 2024.
#> For each parameter, n_eff is a crude measure of effective sample size,
#> and Rhat is the potential scale reduction factor on split chains (at 
#> convergence, Rhat=1).
By default, summaries of the study-specific intercepts \(\mu_j\) and study-specific relative effects
\(\delta_{jk}\) are hidden, but could
be examined by changing the pars argument:
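For example, a call along these lines would include them (the pars selection shown is illustrative):
print(arm_fit_RE, pars = c("d", "mu", "delta", "tau"))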
The prior and posterior distributions can be compared visually using
the plot_prior_posterior() function:
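For example:
plot_prior_posterior(arm_fit_RE)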
Model fit can be checked using the dic() function:
(arm_dic_FE <- dic(arm_fit_FE))
#> Residual deviance: 13.4 (on 15 data points)
#>                pD: 11.1
#>               DIC: 24.4
(arm_dic_RE <- dic(arm_fit_RE))
#> Residual deviance: 13.7 (on 15 data points)
#>                pD: 12.6
#>               DIC: 26.3
Both models fit the data well, having posterior mean residual deviance close to the number of data points. The DIC is similar between models, so we choose the FE model based on parsimony.
We can also examine the residual deviance contributions with the
corresponding plot() method.
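For example:
plot(arm_dic_FE)
plot(arm_dic_RE)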
For comparison with Dias et al. (2011), we can produce relative effects
against placebo using the relative_effects() function with
trt_ref = 1:
(arm_releff_FE <- relative_effects(arm_fit_FE, trt_ref = 1))
#>       mean   sd  2.5%   25%   50%   75% 97.5% Bulk_ESS Tail_ESS Rhat
#> d[4] -0.52 0.48 -1.47 -0.84 -0.53 -0.19  0.41     1561     2426    1
#> d[2] -1.81 0.32 -2.46 -2.02 -1.81 -1.59 -1.19     5125     2917    1
#> d[3] -0.48 0.49 -1.44 -0.81 -0.48 -0.15  0.47     2416     2508    1
#> d[5] -0.82 0.53 -1.86 -1.18 -0.83 -0.47  0.20     1762     1810    1
plot(arm_releff_FE, ref_line = 0)
(arm_releff_RE <- relative_effects(arm_fit_RE, trt_ref = 1))
#>       mean   sd  2.5%   25%   50%   75% 97.5% Bulk_ESS Tail_ESS Rhat
#> d[4] -0.55 0.66 -1.77 -0.93 -0.53 -0.16  0.65     1498     1706    1
#> d[2] -1.86 0.51 -2.94 -2.15 -1.86 -1.56 -0.87     4005     2430    1
#> d[3] -0.51 0.66 -1.72 -0.90 -0.49 -0.12  0.75     2471     2260    1
#> d[5] -0.85 0.79 -2.37 -1.28 -0.85 -0.41  0.71     1801     1677    1
plot(arm_releff_RE, ref_line = 0)
Following Dias et al. (2011), we produce absolute predictions of
the mean off-time reduction on each treatment assuming a Normal
distribution for the outcomes on treatment 1 (placebo) with mean \(-0.73\) and precision \(21\). We use the predict()
method, where the baseline argument takes a
distr() distribution object with which we specify the
corresponding Normal distribution, and we specify
baseline_trt = 1 to indicate that the baseline distribution
corresponds to treatment 1. (Strictly speaking,
type = "response" is unnecessary here, since the identity
link function was used.)
arm_pred_FE <- predict(arm_fit_FE, 
                       baseline = distr(qnorm, mean = -0.73, sd = 21^-0.5),
                       type = "response",
                       baseline_trt = 1)
arm_pred_FE
#>          mean   sd  2.5%   25%   50%   75% 97.5% Bulk_ESS Tail_ESS Rhat
#> pred[4] -1.25 0.53 -2.30 -1.61 -1.25 -0.89 -0.22     1796     2499    1
#> pred[1] -0.73 0.22 -1.16 -0.88 -0.73 -0.58 -0.29     3910     3936    1
#> pred[2] -2.53 0.39 -3.29 -2.80 -2.54 -2.27 -1.77     4640     2825    1
#> pred[3] -1.21 0.53 -2.22 -1.57 -1.21 -0.86 -0.15     2585     2980    1
#> pred[5] -1.55 0.57 -2.68 -1.94 -1.56 -1.16 -0.44     1993     2600    1
plot(arm_pred_FE)
arm_pred_RE <- predict(arm_fit_RE,
                       baseline = distr(qnorm, mean = -0.73, sd = 21^-0.5),
                       type = "response",
                       baseline_trt = 1)
arm_pred_RE
#>          mean   sd  2.5%   25%   50%   75% 97.5% Bulk_ESS Tail_ESS Rhat
#> pred[4] -1.27 0.69 -2.62 -1.69 -1.26 -0.85  0.05     1650     1667    1
#> pred[1] -0.73 0.22 -1.16 -0.87 -0.73 -0.58 -0.30     3976     3867    1
#> pred[2] -2.59 0.56 -3.68 -2.92 -2.57 -2.25 -1.49     3978     2541    1
#> pred[3] -1.23 0.69 -2.54 -1.66 -1.23 -0.81  0.13     2654     2319    1
#> pred[5] -1.58 0.83 -3.12 -2.05 -1.57 -1.11  0.04     1930     1701    1
plot(arm_pred_RE)
If the baseline argument is omitted, predictions of mean
off-time reduction will be produced for every study in the network based
on their estimated baseline response \(\mu_j\):
arm_pred_FE_studies <- predict(arm_fit_FE, type = "response")
arm_pred_FE_studies
#> ---------------------------------------------------------------------- Study: 1 ---- 
#> 
#>             mean   sd  2.5%   25%   50%   75% 97.5% Bulk_ESS Tail_ESS Rhat
#> pred[1: 4] -1.65 0.47 -2.57 -1.96 -1.66 -1.33 -0.75     2070     2888    1
#> pred[1: 1] -1.12 0.43 -1.95 -1.41 -1.13 -0.84 -0.28     3883     3420    1
#> pred[1: 2] -2.93 0.51 -3.92 -3.28 -2.93 -2.57 -1.96     3903     3458    1
#> pred[1: 3] -1.61 0.40 -2.39 -1.87 -1.60 -1.34 -0.84     4134     3242    1
#> pred[1: 5] -1.95 0.51 -2.97 -2.28 -1.95 -1.60 -0.95     2263     2753    1
#> 
#> ---------------------------------------------------------------------- Study: 2 ---- 
#> 
#>             mean   sd  2.5%   25%   50%   75% 97.5% Bulk_ESS Tail_ESS Rhat
#> pred[2: 4] -1.16 0.51 -2.19 -1.50 -1.17 -0.81 -0.20     1462     2235    1
#> pred[2: 1] -0.64 0.26 -1.15 -0.81 -0.64 -0.46 -0.13     4983     3344    1
#> pred[2: 2] -2.44 0.24 -2.92 -2.60 -2.44 -2.28 -1.99     4253     3244    1
#> pred[2: 3] -1.12 0.52 -2.12 -1.47 -1.12 -0.76 -0.10     2389     2725    1
#> pred[2: 5] -1.46 0.55 -2.56 -1.84 -1.46 -1.09 -0.39     1653     2265    1
#> 
#> ---------------------------------------------------------------------- Study: 3 ---- 
#> 
#>             mean   sd  2.5%   25%   50%   75% 97.5% Bulk_ESS Tail_ESS Rhat
#> pred[3: 4] -1.11 0.41 -1.94 -1.38 -1.11 -0.83 -0.30     1796     2328    1
#> pred[3: 1] -0.59 0.35 -1.27 -0.83 -0.59 -0.36  0.11     4190     3442    1
#> pred[3: 2] -2.40 0.38 -3.15 -2.65 -2.40 -2.13 -1.65     4199     3486    1
#> pred[3: 3] -1.07 0.46 -1.95 -1.38 -1.07 -0.77 -0.16     2930     2423    1
#> pred[3: 5] -1.41 0.47 -2.33 -1.72 -1.41 -1.10 -0.49     1972     2508    1
#> 
#> ---------------------------------------------------------------------- Study: 4 ---- 
#> 
#>             mean   sd  2.5%   25%   50%   75% 97.5% Bulk_ESS Tail_ESS Rhat
#> pred[4: 4] -0.39 0.30 -0.97 -0.60 -0.39 -0.18  0.19     2277     2640    1
#> pred[4: 1]  0.13 0.51 -0.86 -0.21  0.14  0.48  1.14     2326     2753    1
#> pred[4: 2] -1.67 0.55 -2.76 -2.05 -1.67 -1.30 -0.60     2282     2895    1
#> pred[4: 3] -0.35 0.25 -0.83 -0.52 -0.35 -0.18  0.14     4546     3077    1
#> pred[4: 5] -0.69 0.37 -1.43 -0.94 -0.69 -0.45  0.02     2468     2805    1
#> 
#> ---------------------------------------------------------------------- Study: 5 ---- 
#> 
#>             mean   sd  2.5%   25%   50%   75% 97.5% Bulk_ESS Tail_ESS Rhat
#> pred[5: 4] -0.56 0.34 -1.20 -0.79 -0.56 -0.33  0.13     2475     2599    1
#> pred[5: 1] -0.03 0.53 -1.10 -0.38 -0.03  0.32  1.00     2503     2883    1
#> pred[5: 2] -1.84 0.57 -2.96 -2.21 -1.84 -1.45 -0.71     2344     2950    1
#> pred[5: 3] -0.52 0.29 -1.10 -0.71 -0.52 -0.31  0.06     5216     3387    1
#> pred[5: 5] -0.86 0.40 -1.63 -1.13 -0.86 -0.59 -0.04     2629     2791    1
#> 
#> ---------------------------------------------------------------------- Study: 6 ---- 
#> 
#>             mean   sd  2.5%   25%   50%   75% 97.5% Bulk_ESS Tail_ESS Rhat
#> pred[6: 4] -2.20 0.18 -2.56 -2.31 -2.19 -2.08 -1.86     3342     2849    1
#> pred[6: 1] -1.67 0.51 -2.68 -2.02 -1.68 -1.33 -0.66     1728     2468    1
#> pred[6: 2] -3.48 0.55 -4.53 -3.86 -3.49 -3.10 -2.40     1567     2472    1
#> pred[6: 3] -2.16 0.37 -2.86 -2.41 -2.15 -1.91 -1.41     2043     2625    1
#> pred[6: 5] -2.50 0.17 -2.82 -2.61 -2.50 -2.39 -2.16     5040     3309    1
#> 
#> ---------------------------------------------------------------------- Study: 7 ---- 
#> 
#>             mean   sd  2.5%   25%   50%   75% 97.5% Bulk_ESS Tail_ESS Rhat
#> pred[7: 4] -1.81 0.18 -2.17 -1.93 -1.80 -1.68 -1.46     3122     2471    1
#> pred[7: 1] -1.28 0.52 -2.28 -1.64 -1.28 -0.94 -0.26     1617     2223    1
#> pred[7: 2] -3.09 0.56 -4.16 -3.48 -3.08 -2.70 -2.01     1483     2177    1
#> pred[7: 3] -1.76 0.37 -2.45 -2.02 -1.77 -1.51 -1.03     1895     2504    1
#> pred[7: 5] -2.11 0.20 -2.49 -2.24 -2.11 -1.96 -1.71     4216     3018    1
plot(arm_pred_FE_studies)
We can also produce treatment rankings, rank probabilities, and cumulative rank probabilities.
(arm_ranks <- posterior_ranks(arm_fit_FE))
#>         mean   sd 2.5% 25% 50% 75% 97.5% Bulk_ESS Tail_ESS Rhat
#> rank[4] 3.51 0.72    2   3   3   4     5     1946       NA    1
#> rank[1] 4.63 0.77    2   5   5   5     5     2089       NA    1
#> rank[2] 1.05 0.26    1   1   1   1     2     2382     2410    1
#> rank[3] 3.51 0.92    2   3   4   4     5     2355       NA    1
#> rank[5] 2.29 0.69    1   2   2   2     4     2180     2219    1
plot(arm_ranks)
(arm_rankprobs <- posterior_rank_probs(arm_fit_FE))
#>      p_rank[1] p_rank[2] p_rank[3] p_rank[4] p_rank[5]
#> d[4]      0.00      0.05      0.49      0.38      0.09
#> d[1]      0.00      0.04      0.07      0.12      0.78
#> d[2]      0.96      0.04      0.01      0.00      0.00
#> d[3]      0.00      0.17      0.26      0.44      0.12
#> d[5]      0.04      0.71      0.18      0.06      0.01
plot(arm_rankprobs)
(arm_cumrankprobs <- posterior_rank_probs(arm_fit_FE, cumulative = TRUE))
#>      p_rank[1] p_rank[2] p_rank[3] p_rank[4] p_rank[5]
#> d[4]      0.00      0.05      0.53      0.91         1
#> d[1]      0.00      0.04      0.11      0.22         1
#> d[2]      0.96      0.99      1.00      1.00         1
#> d[3]      0.00      0.17      0.43      0.88         1
#> d[5]      0.04      0.75      0.93      0.99         1
plot(arm_cumrankprobs)
We now perform an analysis using the contrast-based data (mean differences and standard errors).
With contrast-level data giving the mean difference in off-time
reduction (diff) and standard error (se_diff),
we use the function set_agd_contrast() to set up the
network.
contr_net <- set_agd_contrast(parkinsons, 
                              study = studyn,
                              trt = trtn,
                              y = diff, 
                              se = se_diff,
                              sample_size = n)
contr_net
#> A network with 7 AgD studies (contrast-based).
#> 
#> -------------------------------------------------- AgD studies (contrast-based) ---- 
#>  Study Treatment arms
#>  1     2: 1 | 3      
#>  2     2: 1 | 2      
#>  3     3: 4 | 1 | 2  
#>  4     2: 4 | 3      
#>  5     2: 4 | 3      
#>  6     2: 4 | 5      
#>  7     2: 4 | 5      
#> 
#>  Outcome type: continuous
#> ------------------------------------------------------------------------------------
#> Total number of treatments: 5
#> Total number of studies: 7
#> Reference treatment is: 4
#> Network is connected
The sample_size argument is optional, but enables the
nodes to be weighted by sample size in the network plot.
Plot the network structure.
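As before, a call along the following lines produces the network plot (the weighting arguments are optional):
plot(contr_net, weight_edges = TRUE, weight_nodes = TRUE)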
We fit both fixed effect (FE) and random effects (RE) models.
First, we fit a fixed effect model using the nma()
function with trt_effects = "fixed". We use \(\mathrm{N}(0, 100^2)\) prior distributions
for the treatment effects \(d_k\). We
can examine the range of parameter values implied by these prior
distributions with the summary() method:
summary(normal(scale = 100))
#> A Normal prior distribution: location = 0, scale = 100.
#> 50% of the prior density lies between -67.45 and 67.45.
#> 95% of the prior density lies between -196 and 196.
The model is fitted using the nma() function.
contr_fit_FE <- nma(contr_net, 
                    trt_effects = "fixed",
                    prior_trt = normal(scale = 100))
#> Note: Setting "4" as the network reference treatment.
Basic parameter summaries are given by the print()
method:
contr_fit_FE
#> A fixed effects NMA with a normal likelihood (identity link).
#> Inference for Stan model: normal.
#> 4 chains, each with iter=2000; warmup=1000; thin=1; 
#> post-warmup draws per chain=1000, total post-warmup draws=4000.
#> 
#>       mean se_mean   sd  2.5%   25%   50%   75% 97.5% n_eff Rhat
#> d[1]  0.52    0.01 0.48 -0.43  0.18  0.53  0.85  1.44  2039    1
#> d[2] -1.30    0.01 0.53 -2.32 -1.65 -1.29 -0.94 -0.28  2133    1
#> d[3]  0.05    0.01 0.33 -0.62 -0.17  0.05  0.27  0.70  3049    1
#> d[5] -0.30    0.00 0.21 -0.71 -0.45 -0.30 -0.16  0.10  3069    1
#> lp__ -3.19    0.04 1.46 -6.82 -3.90 -2.84 -2.12 -1.42  1737    1
#> 
#> Samples were drawn using NUTS(diag_e) at Mon Apr 29 16:40:20 2024.
#> For each parameter, n_eff is a crude measure of effective sample size,
#> and Rhat is the potential scale reduction factor on split chains (at 
#> convergence, Rhat=1).
The prior and posterior distributions can be compared visually using
the plot_prior_posterior() function:
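For example:
plot_prior_posterior(contr_fit_FE)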
We now fit a random effects model using the nma()
function with trt_effects = "random". Again, we use \(\mathrm{N}(0, 100^2)\) prior distributions
for the treatment effects \(d_k\), and
we additionally use a \(\textrm{half-N}(5^2)\) prior for the
heterogeneity standard deviation \(\tau\). We can examine the range of
parameter values implied by these prior distributions with the
summary() method:
summary(normal(scale = 100))
#> A Normal prior distribution: location = 0, scale = 100.
#> 50% of the prior density lies between -67.45 and 67.45.
#> 95% of the prior density lies between -196 and 196.
summary(half_normal(scale = 5))
#> A half-Normal prior distribution: location = 0, scale = 5.
#> 50% of the prior density lies between 0 and 3.37.
#> 95% of the prior density lies between 0 and 9.8.
The RE model is then fitted:
contr_fit_RE <- nma(contr_net, 
                    seed = 1150676438,
                    trt_effects = "random",
                    prior_trt = normal(scale = 100),
                    prior_het = half_normal(scale = 5),
                    adapt_delta = 0.99)
#> Note: Setting "4" as the network reference treatment.
#> Warning: There were 5 divergent transitions after warmup. See
#> https://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
#> to find out why this is a problem and how to eliminate them.
#> Warning: Examine the pairs() plot to diagnose sampling problems
We do see a small number of divergent transition errors, which cannot
simply be removed by increasing the value of the
adapt_delta argument (by default set to 0.95
for RE models). To diagnose, we use the pairs() method to
investigate where in the posterior distribution these divergences are
happening (indicated by red crosses):
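As before, a call along the following lines could be used; the pars selection shown is illustrative:
pairs(contr_fit_RE, pars = c("d", "tau"))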
The divergent transitions occur in the upper tail of the heterogeneity standard deviation. In this case, with only a small number of studies, there is not very much information to estimate the heterogeneity standard deviation and the prior distribution may be too heavy-tailed. We could consider a more informative prior distribution for the heterogeneity variance to aid estimation.
Basic parameter summaries are given by the print()
method:
contr_fit_RE
#> A random effects NMA with a normal likelihood (identity link).
#> Inference for Stan model: normal.
#> 4 chains, each with iter=2000; warmup=1000; thin=1; 
#> post-warmup draws per chain=1000, total post-warmup draws=4000.
#> 
#>       mean se_mean   sd   2.5%   25%   50%   75% 97.5% n_eff Rhat
#> d[1]  0.53    0.02 0.74  -0.76  0.13  0.52  0.93  1.85  1349 1.00
#> d[2] -1.32    0.02 0.75  -2.75 -1.73 -1.32 -0.90  0.08  1655 1.00
#> d[3]  0.05    0.01 0.50  -0.93 -0.24  0.04  0.33  1.08  2018 1.00
#> d[5] -0.30    0.01 0.49  -1.18 -0.52 -0.31 -0.10  0.67  1461 1.00
#> lp__ -8.15    0.08 2.80 -14.25 -9.84 -7.89 -6.17 -3.37  1259 1.00
#> tau   0.42    0.02 0.45   0.02  0.13  0.29  0.55  1.67   493 1.01
#> 
#> Samples were drawn using NUTS(diag_e) at Mon Apr 29 16:40:30 2024.
#> For each parameter, n_eff is a crude measure of effective sample size,
#> and Rhat is the potential scale reduction factor on split chains (at 
#> convergence, Rhat=1).
By default, summaries of the study-specific relative effects \(\delta_{jk}\) are hidden, but could be
examined by changing the pars argument:
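For example (the pars selection shown is illustrative):
print(contr_fit_RE, pars = c("d", "delta", "tau"))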
The prior and posterior distributions can be compared visually using
the plot_prior_posterior() function:
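For example:
plot_prior_posterior(contr_fit_RE)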
Model fit can be checked using the dic() function:
(contr_dic_FE <- dic(contr_fit_FE))
#> Residual deviance: 6.4 (on 8 data points)
#>                pD: 4.1
#>               DIC: 10.5
(contr_dic_RE <- dic(contr_fit_RE))
#> Residual deviance: 6.6 (on 8 data points)
#>                pD: 5.4
#>               DIC: 12
Both models fit the data well, having posterior mean residual deviance close to the number of data points. The DIC is similar between models, so we choose the FE model based on parsimony.
We can also examine the residual deviance contributions with the
corresponding plot() method.
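For example:
plot(contr_dic_FE)
plot(contr_dic_RE)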
For comparison with Dias et al. (2011), we can produce relative effects
against placebo using the relative_effects() function with
trt_ref = 1:
(contr_releff_FE <- relative_effects(contr_fit_FE, trt_ref = 1))
#>       mean   sd  2.5%   25%   50%   75% 97.5% Bulk_ESS Tail_ESS Rhat
#> d[4] -0.52 0.48 -1.44 -0.85 -0.53 -0.18  0.43     2058     2033    1
#> d[2] -1.81 0.33 -2.47 -2.03 -1.81 -1.59 -1.16     4970     3270    1
#> d[3] -0.47 0.49 -1.41 -0.79 -0.47 -0.15  0.53     2877     2737    1
#> d[5] -0.82 0.53 -1.88 -1.19 -0.82 -0.45  0.22     2169     2082    1
plot(contr_releff_FE, ref_line = 0)
(contr_releff_RE <- relative_effects(contr_fit_RE, trt_ref = 1))
#>       mean   sd  2.5%   25%   50%   75% 97.5% Bulk_ESS Tail_ESS Rhat
#> d[4] -0.53 0.74 -1.85 -0.93 -0.52 -0.13  0.76     1918     1301    1
#> d[2] -1.85 0.53 -2.94 -2.14 -1.85 -1.55 -0.83     2862     1173    1
#> d[3] -0.48 0.72 -1.83 -0.89 -0.49 -0.09  0.90     2249     1142    1
#> d[5] -0.83 0.96 -2.41 -1.29 -0.84 -0.37  0.70     1971     1443    1
plot(contr_releff_RE, ref_line = 0)
Following Dias et al. (2011), we produce absolute predictions of
the mean off-time reduction on each treatment assuming a Normal
distribution for the outcomes on treatment 1 (placebo) with mean \(-0.73\) and precision \(21\). We use the predict()
method, where the baseline argument takes a
distr() distribution object with which we specify the
corresponding Normal distribution, and we specify
baseline_trt = 1 to indicate that the baseline distribution
corresponds to treatment 1. (Strictly speaking,
type = "response" is unnecessary here, since the identity
link function was used.)
contr_pred_FE <- predict(contr_fit_FE, 
                       baseline = distr(qnorm, mean = -0.73, sd = 21^-0.5),
                       type = "response",
                       baseline_trt = 1)
contr_pred_FE
#>          mean   sd  2.5%   25%   50%   75% 97.5% Bulk_ESS Tail_ESS Rhat
#> pred[4] -1.25 0.53 -2.29 -1.61 -1.25 -0.88 -0.20     2225     2535    1
#> pred[1] -0.73 0.22 -1.16 -0.88 -0.73 -0.58 -0.30     3988     3629    1
#> pred[2] -2.54 0.40 -3.32 -2.81 -2.55 -2.27 -1.76     4703     3604    1
#> pred[3] -1.20 0.53 -2.23 -1.56 -1.19 -0.83 -0.16     2933     3110    1
#> pred[5] -1.55 0.58 -2.68 -1.94 -1.56 -1.16 -0.45     2319     2379    1
plot(contr_pred_FE)
contr_pred_RE <- predict(contr_fit_RE,
                       baseline = distr(qnorm, mean = -0.73, sd = 21^-0.5),
                       type = "response",
                       baseline_trt = 1)
contr_pred_RE
#>          mean   sd  2.5%   25%   50%   75% 97.5% Bulk_ESS Tail_ESS Rhat
#> pred[4] -1.26 0.77 -2.67 -1.68 -1.27 -0.84  0.12     1995     1516    1
#> pred[1] -0.73 0.22 -1.16 -0.88 -0.73 -0.59 -0.31     3715     3592    1
#> pred[2] -2.58 0.57 -3.75 -2.91 -2.58 -2.25 -1.46     2808     1235    1
#> pred[3] -1.21 0.75 -2.62 -1.66 -1.23 -0.80  0.22     2275     1195    1
#> pred[5] -1.56 0.98 -3.17 -2.06 -1.57 -1.09  0.05     2050     1535    1
plot(contr_pred_RE)
If the baseline argument is omitted, an error will be
raised, as there are no study baselines estimated in this network.
We can also produce treatment rankings, rank probabilities, and cumulative rank probabilities.
(contr_ranks <- posterior_ranks(contr_fit_FE))
#>         mean   sd 2.5% 25% 50% 75% 97.5% Bulk_ESS Tail_ESS Rhat
#> rank[4] 3.50 0.72    2   3   3   4     5     2582       NA    1
#> rank[1] 4.63 0.78    2   5   5   5     5     2109       NA    1
#> rank[2] 1.06 0.28    1   1   1   1     2     2410     2472    1
#> rank[3] 3.53 0.92    2   3   4   4     5     3405       NA    1
#> rank[5] 2.28 0.67    1   2   2   2     4     2284     2190    1
plot(contr_ranks)
(contr_rankprobs <- posterior_rank_probs(contr_fit_FE))
#>      p_rank[1] p_rank[2] p_rank[3] p_rank[4] p_rank[5]
#> d[4]      0.00      0.05      0.48      0.38      0.08
#> d[1]      0.00      0.04      0.07      0.12      0.78
#> d[2]      0.96      0.04      0.01      0.00      0.00
#> d[3]      0.00      0.16      0.26      0.45      0.13
#> d[5]      0.04      0.72      0.18      0.05      0.01
plot(contr_rankprobs)
(contr_cumrankprobs <- posterior_rank_probs(contr_fit_FE, cumulative = TRUE))
#>      p_rank[1] p_rank[2] p_rank[3] p_rank[4] p_rank[5]
#> d[4]      0.00      0.05      0.53      0.92         1
#> d[1]      0.00      0.04      0.11      0.22         1
#> d[2]      0.96      0.99      1.00      1.00         1
#> d[3]      0.00      0.16      0.43      0.87         1
#> d[5]      0.04      0.76      0.94      0.99         1
plot(contr_cumrankprobs)
We now perform an analysis where some studies contribute arm-based data, and others contribute contrast-based data. Replicating Dias et al. (2011), we consider arm-based data from studies 1-3, and contrast-based data from studies 4-7.
studies <- parkinsons$studyn
(parkinsons_arm <- parkinsons[studies %in% 1:3, ])
#>   studyn trtn     y    se   n  diff se_diff
#> 1      1    1 -1.22 0.504  54    NA   0.504
#> 2      1    3 -1.53 0.439  95 -0.31   0.668
#> 3      2    1 -0.70 0.282 172    NA   0.282
#> 4      2    2 -2.40 0.258 173 -1.70   0.382
#> 5      3    1 -0.30 0.505  76    NA   0.505
#> 6      3    2 -2.60 0.510  71 -2.30   0.718
#> 7      3    4 -1.20 0.478  81 -0.90   0.695
(parkinsons_contr <- parkinsons[studies %in% 4:7, ])
#>    studyn trtn     y    se   n  diff se_diff
#> 8       4    3 -0.24 0.265 128    NA   0.265
#> 9       4    4 -0.59 0.354  72 -0.35   0.442
#> 10      5    3 -0.73 0.335  80    NA   0.335
#> 11      5    4 -0.18 0.442  46  0.55   0.555
#> 12      6    4 -2.20 0.197 137    NA   0.197
#> 13      6    5 -2.50 0.190 131 -0.30   0.274
#> 14      7    4 -1.80 0.200 154    NA   0.200
#> 15      7    5 -2.10 0.250 143 -0.30   0.320
We use the functions set_agd_arm() and
set_agd_contrast() to set up the respective data sources
within the network, and then combine together with
combine_network().
mix_arm_net <- set_agd_arm(parkinsons_arm, 
                           study = studyn,
                           trt = trtn,
                           y = y, 
                           se = se,
                           sample_size = n)
mix_contr_net <- set_agd_contrast(parkinsons_contr, 
                                  study = studyn,
                                  trt = trtn,
                                  y = diff, 
                                  se = se_diff,
                                  sample_size = n)
mix_net <- combine_network(mix_arm_net, mix_contr_net)
mix_net
#> A network with 3 AgD studies (arm-based), and 4 AgD studies (contrast-based).
#> 
#> ------------------------------------------------------- AgD studies (arm-based) ---- 
#>  Study Treatment arms
#>  1     2: 1 | 3      
#>  2     2: 1 | 2      
#>  3     3: 4 | 1 | 2  
#> 
#>  Outcome type: continuous
#> -------------------------------------------------- AgD studies (contrast-based) ---- 
#>  Study Treatment arms
#>  4     2: 4 | 3      
#>  5     2: 4 | 3      
#>  6     2: 4 | 5      
#>  7     2: 4 | 5      
#> 
#>  Outcome type: continuous
#> ------------------------------------------------------------------------------------
#> Total number of treatments: 5
#> Total number of studies: 7
#> Reference treatment is: 4
#> Network is connected
The sample_size argument is optional, but enables the
nodes to be weighted by sample size in the network plot.
Plot the network structure.
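As before, a call along the following lines produces the network plot (the weighting arguments are optional):
plot(mix_net, weight_edges = TRUE, weight_nodes = TRUE)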
We fit both fixed effect (FE) and random effects (RE) models.
First, we fit a fixed effect model using the nma()
function with trt_effects = "fixed". We use \(\mathrm{N}(0, 100^2)\) prior distributions
for the treatment effects \(d_k\) and
study-specific intercepts \(\mu_j\). We
can examine the range of parameter values implied by these prior
distributions with the summary() method:
summary(normal(scale = 100))
#> A Normal prior distribution: location = 0, scale = 100.
#> 50% of the prior density lies between -67.45 and 67.45.
#> 95% of the prior density lies between -196 and 196.
The model is fitted using the nma() function.
mix_fit_FE <- nma(mix_net, 
                  trt_effects = "fixed",
                  prior_intercept = normal(scale = 100),
                  prior_trt = normal(scale = 100))
#> Note: Setting "4" as the network reference treatment.
Basic parameter summaries are given by the print()
method:
mix_fit_FE
#> A fixed effects NMA with a normal likelihood (identity link).
#> Inference for Stan model: normal.
#> 4 chains, each with iter=2000; warmup=1000; thin=1; 
#> post-warmup draws per chain=1000, total post-warmup draws=4000.
#> 
#>       mean se_mean   sd  2.5%   25%   50%   75% 97.5% n_eff Rhat
#> d[1]  0.54    0.01 0.50 -0.45  0.20  0.53  0.87  1.52  1351    1
#> d[2] -1.27    0.01 0.53 -2.33 -1.62 -1.27 -0.92 -0.21  1452    1
#> d[3]  0.05    0.01 0.33 -0.56 -0.16  0.05  0.27  0.70  2384    1
#> d[5] -0.29    0.00 0.20 -0.68 -0.42 -0.29 -0.16  0.11  3047    1
#> lp__ -4.69    0.05 1.94 -9.39 -5.70 -4.36 -3.26 -1.99  1589    1
#> 
#> Samples were drawn using NUTS(diag_e) at Mon Apr 29 16:40:45 2024.
#> For each parameter, n_eff is a crude measure of effective sample size,
#> and Rhat is the potential scale reduction factor on split chains (at 
#> convergence, Rhat=1).
By default, summaries of the study-specific intercepts \(\mu_j\) are hidden, but could be examined
by changing the pars argument:
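For example (the pars selection shown is illustrative):
print(mix_fit_FE, pars = c("d", "mu"))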
The prior and posterior distributions can be compared visually using
the plot_prior_posterior() function:
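For example:
plot_prior_posterior(mix_fit_FE)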
We now fit a random effects model using the nma()
function with trt_effects = "random". Again, we use \(\mathrm{N}(0, 100^2)\) prior distributions
for the treatment effects \(d_k\) and
study-specific intercepts \(\mu_j\),
and we additionally use a \(\textrm{half-N}(5^2)\) prior for the
heterogeneity standard deviation \(\tau\). We can examine the range of
parameter values implied by these prior distributions with the
summary() method:
summary(normal(scale = 100))
#> A Normal prior distribution: location = 0, scale = 100.
#> 50% of the prior density lies between -67.45 and 67.45.
#> 95% of the prior density lies between -196 and 196.
summary(half_normal(scale = 5))
#> A half-Normal prior distribution: location = 0, scale = 5.
#> 50% of the prior density lies between 0 and 3.37.
#> 95% of the prior density lies between 0 and 9.8.
The RE model is then fitted:
mix_fit_RE <- nma(mix_net, 
                  seed = 437219664,
                  trt_effects = "random",
                  prior_intercept = normal(scale = 100),
                  prior_trt = normal(scale = 100),
                  prior_het = half_normal(scale = 5),
                  adapt_delta = 0.99)
#> Note: Setting "4" as the network reference treatment.
#> Warning: There were 1 divergent transitions after warmup. See
#> https://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
#> to find out why this is a problem and how to eliminate them.
#> Warning: Examine the pairs() plot to diagnose sampling problems
We do see a small number of divergent transition errors, which cannot
simply be removed by increasing the value of the
adapt_delta argument (by default set to 0.95
for RE models). To diagnose, we use the pairs() method to
investigate where in the posterior distribution these divergences are
happening (indicated by red crosses):
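As before, a call along the following lines could be used; the pars selection shown is illustrative:
pairs(mix_fit_RE, pars = c("d", "tau"))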
The divergent transitions occur in the upper tail of the heterogeneity standard deviation. In this case, with only a small number of studies, there is not very much information to estimate the heterogeneity standard deviation and the prior distribution may be too heavy-tailed. We could consider a more informative prior distribution for the heterogeneity variance to aid estimation.
Basic parameter summaries are given by the print()
method:
mix_fit_RE
#> A random effects NMA with a normal likelihood (identity link).
#> Inference for Stan model: normal.
#> 4 chains, each with iter=2000; warmup=1000; thin=1; 
#> post-warmup draws per chain=1000, total post-warmup draws=4000.
#> 
#>        mean se_mean   sd   2.5%    25%    50%   75% 97.5% n_eff Rhat
#> d[1]   0.54    0.01 0.62  -0.70   0.16   0.54  0.93  1.74  2026 1.00
#> d[2]  -1.30    0.02 0.69  -2.71  -1.72  -1.29 -0.87  0.02  2090 1.00
#> d[3]   0.01    0.01 0.47  -0.95  -0.26   0.02  0.30  0.90  2852 1.00
#> d[5]  -0.29    0.01 0.42  -1.14  -0.50  -0.30 -0.09  0.53  2405 1.00
#> lp__ -10.64    0.09 3.25 -17.61 -12.61 -10.33 -8.33 -5.14  1183 1.00
#> tau    0.39    0.01 0.38   0.02   0.13   0.29  0.53  1.42   646 1.01
#> 
#> Samples were drawn using NUTS(diag_e) at Mon Apr 29 16:40:57 2024.
#> For each parameter, n_eff is a crude measure of effective sample size,
#> and Rhat is the potential scale reduction factor on split chains (at 
#> convergence, Rhat=1).
By default, summaries of the study-specific intercepts \(\mu_j\) and study-specific relative effects
\(\delta_{jk}\) are hidden, but could
be examined by changing the pars argument:
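For example (the pars selection shown is illustrative):
print(mix_fit_RE, pars = c("d", "mu", "delta", "tau"))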
The prior and posterior distributions can be compared visually using
the plot_prior_posterior() function:
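For example:
plot_prior_posterior(mix_fit_RE)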
Model fit can be checked using the dic() function:
(mix_dic_FE <- dic(mix_fit_FE))
#> Residual deviance: 9.4 (on 11 data points)
#>                pD: 7.1
#>               DIC: 16.5
(mix_dic_RE <- dic(mix_fit_RE))
#> Residual deviance: 9.5 (on 11 data points)
#>                pD: 8.4
#>               DIC: 17.9
Both models fit the data well, having posterior mean residual deviance close to the number of data points. The DIC is similar between models, so we choose the FE model based on parsimony.
We can also examine the residual deviance contributions with the
corresponding plot() method.
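For example:
plot(mix_dic_FE)
plot(mix_dic_RE)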
For comparison with Dias et al. (2011), we can produce relative effects
against placebo using the relative_effects() function with
trt_ref = 1:
(mix_releff_FE <- relative_effects(mix_fit_FE, trt_ref = 1))
#>       mean   sd  2.5%   25%   50%   75% 97.5% Bulk_ESS Tail_ESS Rhat
#> d[4] -0.54 0.50 -1.52 -0.87 -0.53 -0.20  0.45     1362     1865    1
#> d[2] -1.81 0.34 -2.49 -2.03 -1.80 -1.58 -1.13     5688     3158    1
#> d[3] -0.48 0.50 -1.45 -0.82 -0.48 -0.14  0.48     2201     2871    1
#> d[5] -0.83 0.53 -1.89 -1.18 -0.84 -0.47  0.23     1471     2375    1
plot(mix_releff_FE, ref_line = 0)
(mix_releff_RE <- relative_effects(mix_fit_RE, trt_ref = 1))
#>       mean   sd  2.5%   25%   50%   75% 97.5% Bulk_ESS Tail_ESS Rhat
#> d[4] -0.54 0.62 -1.74 -0.93 -0.54 -0.16  0.70     2009     2208    1
#> d[2] -1.83 0.50 -2.84 -2.13 -1.83 -1.54 -0.83     4185     2461    1
#> d[3] -0.53 0.64 -1.81 -0.90 -0.53 -0.14  0.73     2992     2498    1
#> d[5] -0.83 0.75 -2.36 -1.28 -0.83 -0.40  0.74     2220     1883    1
plot(mix_releff_RE, ref_line = 0)
Following Dias et al. (2011), we produce absolute predictions of
the mean off-time reduction on each treatment assuming a Normal
distribution for the outcomes on treatment 1 (placebo) with mean \(-0.73\) and precision \(21\). We use the predict()
method, where the baseline argument takes a
distr() distribution object with which we specify the
corresponding Normal distribution, and we specify
baseline_trt = 1 to indicate that the baseline distribution
corresponds to treatment 1. (Strictly speaking,
type = "response" is unnecessary here, since the identity
link function was used.)
mix_pred_FE <- predict(mix_fit_FE, 
                       baseline = distr(qnorm, mean = -0.73, sd = 21^-0.5),
                       type = "response",
                       baseline_trt = 1)
mix_pred_FE
#>          mean   sd  2.5%   25%   50%   75% 97.5% Bulk_ESS Tail_ESS Rhat
#> pred[4] -1.26 0.54 -2.28 -1.62 -1.26 -0.90 -0.20     1437     2402    1
#> pred[1] -0.72 0.22 -1.14 -0.87 -0.72 -0.58 -0.29     3556     3638    1
#> pred[2] -2.53 0.40 -3.33 -2.79 -2.53 -2.25 -1.78     4946     3640    1
#> pred[3] -1.20 0.54 -2.24 -1.56 -1.21 -0.83 -0.14     2203     2910    1
#> pred[5] -1.55 0.57 -2.67 -1.93 -1.56 -1.16 -0.43     1532     2360    1
plot(mix_pred_FE)
mix_pred_RE <- predict(mix_fit_RE,
                       baseline = distr(qnorm, mean = -0.73, sd = 21^-0.5),
                       type = "response",
                       baseline_trt = 1)
mix_pred_RE
#>          mean   sd  2.5%   25%   50%   75% 97.5% Bulk_ESS Tail_ESS Rhat
#> pred[4] -1.27 0.66 -2.55 -1.69 -1.26 -0.86  0.04     2089     2170    1
#> pred[1] -0.73 0.22 -1.17 -0.88 -0.73 -0.58 -0.31     4051     3817    1
#> pred[2] -2.56 0.55 -3.68 -2.90 -2.56 -2.22 -1.47     4127     2529    1
#> pred[3] -1.25 0.67 -2.60 -1.67 -1.26 -0.84  0.06     3027     2404    1
#> pred[5] -1.56 0.78 -3.19 -2.03 -1.55 -1.10  0.02     2234     2132    1
plot(mix_pred_RE)
If the baseline argument is omitted, predictions of mean
off-time reduction will be produced for every arm-based study
in the network based on their estimated baseline response \(\mu_j\):
mix_pred_FE_studies <- predict(mix_fit_FE, type = "response")
mix_pred_FE_studies
#> ---------------------------------------------------------------------- Study: 1 ---- 
#> 
#>             mean   sd  2.5%   25%   50%   75% 97.5% Bulk_ESS Tail_ESS Rhat
#> pred[1: 4] -1.66 0.46 -2.59 -1.96 -1.67 -1.34 -0.76     1906     2750    1
#> pred[1: 1] -1.13 0.44 -1.99 -1.42 -1.12 -0.83 -0.27     3421     3183    1
#> pred[1: 2] -2.93 0.52 -3.96 -3.29 -2.93 -2.58 -1.90     3322     2785    1
#> pred[1: 3] -1.61 0.40 -2.39 -1.88 -1.61 -1.33 -0.83     3506     3107    1
#> pred[1: 5] -1.95 0.51 -2.97 -2.28 -1.95 -1.62 -0.96     2057     2618    1
#> 
#> ---------------------------------------------------------------------- Study: 2 ---- 
#> 
#>             mean   sd  2.5%   25%   50%   75% 97.5% Bulk_ESS Tail_ESS Rhat
#> pred[2: 4] -1.18 0.53 -2.23 -1.53 -1.18 -0.83 -0.15     1384     2051    1
#> pred[2: 1] -0.64 0.26 -1.15 -0.82 -0.64 -0.47 -0.12     4802     3488    1
#> pred[2: 2] -2.45 0.25 -2.93 -2.61 -2.45 -2.29 -1.96     4260     3056    1
#> pred[2: 3] -1.13 0.54 -2.16 -1.49 -1.12 -0.77 -0.06     2016     2729    1
#> pred[2: 5] -1.47 0.56 -2.57 -1.84 -1.48 -1.09 -0.37     1495     2249    1
#> 
#> ---------------------------------------------------------------------- Study: 3 ---- 
#> 
#>             mean   sd  2.5%   25%   50%   75% 97.5% Bulk_ESS Tail_ESS Rhat
#> pred[3: 4] -1.13 0.42 -1.93 -1.41 -1.12 -0.85 -0.30     1680     2691    1
#> pred[3: 1] -0.59 0.36 -1.33 -0.83 -0.58 -0.35  0.09     3847     3220    1
#> pred[3: 2] -2.40 0.38 -3.16 -2.65 -2.40 -2.15 -1.63     4053     3307    1
#> pred[3: 3] -1.07 0.46 -1.98 -1.38 -1.07 -0.76 -0.16     2690     2739    1
#> pred[3: 5] -1.42 0.47 -2.33 -1.74 -1.41 -1.10 -0.52     1942     2711    1
plot(mix_pred_FE_studies)
We can also produce treatment rankings, rank probabilities, and cumulative rank probabilities.
(mix_ranks <- posterior_ranks(mix_fit_FE))
#>         mean   sd 2.5% 25% 50% 75% 97.5% Bulk_ESS Tail_ESS Rhat
#> rank[4] 3.49 0.71    2   3   3   4     5     2091       NA    1
#> rank[1] 4.63 0.79    2   5   5   5     5     2052       NA    1
#> rank[2] 1.06 0.29    1   1   1   1     2     2240     2381    1
#> rank[3] 3.54 0.93    2   3   4   4     5     2898       NA    1
#> rank[5] 2.28 0.67    1   2   2   2     4     2131     2344    1
plot(mix_ranks)
(mix_rankprobs <- posterior_rank_probs(mix_fit_FE))
#>      p_rank[1] p_rank[2] p_rank[3] p_rank[4] p_rank[5]
#> d[4]      0.00      0.04      0.51      0.37      0.08
#> d[1]      0.00      0.05      0.06      0.11      0.78
#> d[2]      0.95      0.04      0.01      0.00      0.00
#> d[3]      0.00      0.17      0.24      0.46      0.13
#> d[5]      0.04      0.71      0.18      0.05      0.01
plot(mix_rankprobs)
(mix_cumrankprobs <- posterior_rank_probs(mix_fit_FE, cumulative = TRUE))
#>      p_rank[1] p_rank[2] p_rank[3] p_rank[4] p_rank[5]
#> d[4]      0.00      0.04      0.55      0.92         1
#> d[1]      0.00      0.05      0.10      0.22         1
#> d[2]      0.95      0.99      1.00      1.00         1
#> d[3]      0.00      0.17      0.41      0.87         1
#> d[5]      0.04      0.75      0.94      0.99         1
plot(mix_cumrankprobs)