- Introduction
- Setup
- Example dataset
- Model
- Extracting draws from a fit in tidy-format using `spread_draws`
- Point summaries and intervals
- Combining variables with different indices in a single tidy format data frame
- Plotting intervals with multiple probability levels
- Intervals with densities
- Other visualizations of distributions: `stat_slabinterval`
- Posterior means and predictions
- Quantile dotplots
- Posterior predictions
- Posterior predictions, Kruschke-style
- Fit/prediction curves
- Comparing levels of a factor
- Ordinal models

## Introduction

This vignette describes how to use the `tidybayes` and `ggdist` packages to extract and visualize tidy data frames of draws from posterior distributions of model variables, means, and predictions from `brms::brm`. For a more general introduction to `tidybayes` and its use on general-purpose Bayesian modeling languages (like Stan and JAGS), see `vignette("tidybayes")`.

## Setup

The following libraries are required to run this vignette:

```
library(magrittr)
library(dplyr)
library(purrr)
library(forcats)
library(tidyr)
library(modelr)
library(ggdist)
library(tidybayes)
library(ggplot2)
library(cowplot)
library(rstan)
library(brms)
library(ggrepel)
library(RColorBrewer)
library(gganimate)
library(posterior)
library(distributional)
theme_set(theme_tidybayes() + panel_border())
```

These options help Stan run faster:
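
A typical configuration looks something like this (a sketch; adjust to your machine):

```
rstan_options(auto_write = TRUE)             # cache compiled Stan models on disk
options(mc.cores = parallel::detectCores())  # run MCMC chains in parallel across CPU cores
```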

## Example dataset

To demonstrate `tidybayes`, we will use a simple dataset with 10 observations from 5 conditions each:

```
set.seed(5)
n = 10
n_condition = 5
ABC =
  tibble(
    condition = rep(c("A", "B", "C", "D", "E"), n),
    response = rnorm(n * 5, c(0, 1, 2, 1, -1), 0.5)
  )
```

A snapshot of the data looks like this:

condition | response |
---|---|
A | -0.4204277 |
B | 1.6921797 |
C | 1.3722541 |
D | 1.0350714 |
E | -0.1442796 |
A | -0.3014540 |
B | 0.7639168 |
C | 1.6823143 |
D | 0.8571132 |
E | -0.9309459 |

This is a typical tidy format data frame: one observation per row. Graphically, a simple dot plot of the raw data might look like this:
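
```
ABC %>%
  ggplot(aes(y = condition, x = response)) +
  geom_point()
```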

## Model

Let’s fit a hierarchical model with shrinkage towards a global mean:

```
m = brm(
  response ~ (1|condition),
  data = ABC,
  prior = c(
    prior(normal(0, 1), class = Intercept),
    prior(student_t(3, 0, 1), class = sd),
    prior(student_t(3, 0, 1), class = sigma)
  ),
  control = list(adapt_delta = .99),
  file = "models/tidy-brms_m.rds" # cache model (can be removed)
)
```

The results look like this:

```
## Family: gaussian
## Links: mu = identity; sigma = identity
## Formula: response ~ (1 | condition)
## Data: ABC (Number of observations: 50)
## Draws: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
## total post-warmup draws = 4000
##
## Group-Level Effects:
## ~condition (Number of levels: 5)
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept) 1.16 0.42 0.61 2.21 1.00 1014 1444
##
## Population-Level Effects:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept 0.51 0.47 -0.47 1.44 1.00 945 1410
##
## Family Specific Parameters:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sigma 0.56 0.06 0.46 0.70 1.00 1734 1803
##
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
```

## Extracting draws from a fit in tidy-format using `spread_draws`

Now that we have our results, the fun begins: getting the draws out in a tidy format! First, we’ll use the `get_variables()` function to get a list of raw model variable names so that we know what variables we can extract from the model:
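
```
get_variables(m)
```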

```
## [1] "b_Intercept" "sd_condition__Intercept" "sigma" "r_condition[A,Intercept]"
## [5] "r_condition[B,Intercept]" "r_condition[C,Intercept]" "r_condition[D,Intercept]" "r_condition[E,Intercept]"
## [9] "lprior" "lp__" "accept_stat__" "stepsize__"
## [13] "treedepth__" "n_leapfrog__" "divergent__" "energy__"
```

Here, `b_Intercept` is the global mean, and the `r_condition[]` variables are offsets from that mean for each condition. Given these variables:

- `r_condition[A,Intercept]`
- `r_condition[B,Intercept]`
- `r_condition[C,Intercept]`
- `r_condition[D,Intercept]`
- `r_condition[E,Intercept]`

We might want a data frame where each row is a draw from either `r_condition[A,Intercept]`, `r_condition[B,Intercept]`, `...[C,...]`, `...[D,...]`, or `...[E,...]`, and where we have columns indexing which chain/iteration/draw the row came from and which condition (`A` to `E`) it is for. That would allow us to easily compute quantities grouped by condition, or generate plots by condition using ggplot, or even merge draws with the original data to plot data and posteriors simultaneously.

The workhorse of `tidybayes` is the `spread_draws()` function, which does this extraction for us. It includes a simple specification format that we can use to extract variables and their indices into tidy-format data frames.

Given a variable in the model like this:

`r_condition[D,Intercept]`

We can provide `spread_draws()` with a column specification like this:

`r_condition[condition,term]`

Where `condition` corresponds to `D` and `term` corresponds to `Intercept`. There is nothing too magical about what `spread_draws()` does with this specification: under the hood, it splits the variable indices by commas and spaces (you can split by other characters by changing the `sep` argument). It lets you assign columns to the resulting indices in order. So `r_condition[D,Intercept]` has indices `D` and `Intercept`, and `spread_draws()` lets us extract these indices as columns in the resulting tidy data frame of draws from `r_condition`:
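
A call along these lines (using `head(10)` to show the first ten rows) produces the table below:

```
m %>%
  spread_draws(r_condition[condition,term]) %>%
  head(10)
```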

condition | term | r_condition | .chain | .iteration | .draw |
---|---|---|---|---|---|
A | Intercept | 0.2021113 | 1 | 1 | 1 |
A | Intercept | 0.2423497 | 1 | 2 | 2 |
A | Intercept | -0.0947473 | 1 | 3 | 3 |
A | Intercept | 0.3562914 | 1 | 4 | 4 |
A | Intercept | 0.0307567 | 1 | 5 | 5 |
A | Intercept | 0.4712750 | 1 | 6 | 6 |
A | Intercept | -0.1679763 | 1 | 7 | 7 |
A | Intercept | -0.8257029 | 1 | 8 | 8 |
A | Intercept | -0.2229136 | 1 | 9 | 9 |
A | Intercept | -0.1243703 | 1 | 10 | 10 |

We can choose whatever names we want for the index columns; e.g.:
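
For instance (first ten rows shown):

```
m %>%
  spread_draws(r_condition[c,t]) %>%
  head(10)
```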

c | t | r_condition | .chain | .iteration | .draw |
---|---|---|---|---|---|
A | Intercept | 0.2021113 | 1 | 1 | 1 |
A | Intercept | 0.2423497 | 1 | 2 | 2 |
A | Intercept | -0.0947473 | 1 | 3 | 3 |
A | Intercept | 0.3562914 | 1 | 4 | 4 |
A | Intercept | 0.0307567 | 1 | 5 | 5 |
A | Intercept | 0.4712750 | 1 | 6 | 6 |
A | Intercept | -0.1679763 | 1 | 7 | 7 |
A | Intercept | -0.8257029 | 1 | 8 | 8 |
A | Intercept | -0.2229136 | 1 | 9 | 9 |
A | Intercept | -0.1243703 | 1 | 10 | 10 |

But the more descriptive and less cryptic names from the previous example are probably preferable.

In this particular model, there is only one term (`Intercept`), thus we could omit that index altogether to just get each `condition` and the value of `r_condition` for that condition:
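
For example (first ten rows shown):

```
m %>%
  spread_draws(r_condition[condition,]) %>%
  head(10)
```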

condition | r_condition | .chain | .iteration | .draw |
---|---|---|---|---|
A | 0.2021113 | 1 | 1 | 1 |
A | 0.2423497 | 1 | 2 | 2 |
A | -0.0947473 | 1 | 3 | 3 |
A | 0.3562914 | 1 | 4 | 4 |
A | 0.0307567 | 1 | 5 | 5 |
A | 0.4712750 | 1 | 6 | 6 |
A | -0.1679763 | 1 | 7 | 7 |
A | -0.8257029 | 1 | 8 | 8 |
A | -0.2229136 | 1 | 9 | 9 |
A | -0.1243703 | 1 | 10 | 10 |

**Note:** If you have used `spread_draws()` with a raw sample from Stan or JAGS, you may be used to using `recover_types` before `spread_draws()` to get index column values back (e.g. if the index was a factor). This is not necessary when using `spread_draws()` on `brms` models, because those models already contain that information in their variable names. For more on `recover_types`, see `vignette("tidybayes")`.

## Point summaries and intervals

`tidybayes` provides a family of functions for generating point summaries and intervals from draws in a tidy format. These functions follow the naming scheme `[median|mean|mode]_[qi|hdi]`, for example, `median_qi()`, `mean_qi()`, `mode_hdi()`, and so on. The first name (before the `_`) indicates the type of point summary, and the second name indicates the type of interval. `qi` yields a quantile interval (a.k.a. equi-tailed interval, central interval, or percentile interval) and `hdi` yields a highest (posterior) density interval. Custom point summary or interval functions can also be applied using the `point_interval()` function.

For example, we might extract the draws corresponding to posterior distributions of the overall mean and standard deviation of observations:
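
For example (first ten rows shown):

```
m %>%
  spread_draws(b_Intercept, sigma) %>%
  head(10)
```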

.chain | .iteration | .draw | b_Intercept | sigma |
---|---|---|---|---|
1 | 1 | 1 | -0.0423766 | 0.5499988 |
1 | 2 | 2 | 0.1185824 | 0.5462394 |
1 | 3 | 3 | -0.0212874 | 0.5676817 |
1 | 4 | 4 | -0.0386104 | 0.5767170 |
1 | 5 | 5 | 0.1288243 | 0.5644390 |
1 | 6 | 6 | 0.0607536 | 0.5616347 |
1 | 7 | 7 | 0.0418741 | 0.5266549 |
1 | 8 | 8 | 0.6844114 | 0.5196302 |
1 | 9 | 9 | 0.6180920 | 0.5988949 |
1 | 10 | 10 | 0.6364200 | 0.6091926 |

Like with `r_condition[condition,term]`, this gives us a tidy data frame. If we want the median and 95% quantile interval of the variables, we can apply `median_qi()`:
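
In code:

```
m %>%
  spread_draws(b_Intercept, sigma) %>%
  median_qi(b_Intercept, sigma)
```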

b_Intercept | b_Intercept.lower | b_Intercept.upper | sigma | sigma.lower | sigma.upper | .width | .point | .interval |
---|---|---|---|---|---|---|---|---|
0.5259484 | -0.4721829 | 1.436464 | 0.557849 | 0.4580284 | 0.7036759 | 0.95 | median | qi |

We can specify the columns we want to get medians and intervals from, as above, or if we omit the list of columns, `median_qi()` will use every column that is not a grouping column or a special column (like `.chain`, `.iteration`, or `.draw`). Thus in the above example, `b_Intercept` and `sigma` are redundant arguments to `median_qi()` because they are also the only columns we gathered from the model. So we can simplify this to:
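
```
m %>%
  spread_draws(b_Intercept, sigma) %>%
  median_qi()
```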

b_Intercept | b_Intercept.lower | b_Intercept.upper | sigma | sigma.lower | sigma.upper | .width | .point | .interval |
---|---|---|---|---|---|---|---|---|
0.5259484 | -0.4721829 | 1.436464 | 0.557849 | 0.4580284 | 0.7036759 | 0.95 | median | qi |

If you would rather have a long-format list of intervals, use `gather_draws()` instead:
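
For example:

```
m %>%
  gather_draws(b_Intercept, sigma) %>%
  median_qi()
```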

.variable | .value | .lower | .upper | .width | .point | .interval |
---|---|---|---|---|---|---|
b_Intercept | 0.5259484 | -0.4721829 | 1.4364642 | 0.95 | median | qi |
sigma | 0.5578490 | 0.4580284 | 0.7036759 | 0.95 | median | qi |

For more on `gather_draws()`, see `vignette("tidybayes")`.

When we have a model variable with one or more indices, such as `r_condition`, we can apply `median_qi()` (or other functions in the `point_interval()` family) as we did before:
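
For example:

```
m %>%
  spread_draws(r_condition[condition,]) %>%
  median_qi()
```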

condition | r_condition | .lower | .upper | .width | .point | .interval |
---|---|---|---|---|---|---|
A | -0.3292104 | -1.2718102 | 0.7241221 | 0.95 | median | qi |
B | 0.4771384 | -0.4718292 | 1.5114266 | 0.95 | median | qi |
C | 1.3004918 | 0.3701968 | 2.3546531 | 0.95 | median | qi |
D | 0.4960814 | -0.4547683 | 1.5259761 | 0.95 | median | qi |
E | -1.3983219 | -2.3852123 | -0.4104601 | 0.95 | median | qi |

How did `median_qi()` know what to aggregate? Data frames returned by `spread_draws()` are automatically grouped by all index variables you pass to it; in this case, that means `spread_draws()` groups its results by `condition`. `median_qi()` respects those groups, and calculates the point summaries and intervals within all groups. Then, because no columns were passed to `median_qi()`, it acts on the only column that is neither a special (`.`-prefixed) column nor a grouping column: `r_condition`. So the above shortened syntax is equivalent to this more verbose call:

```
m %>%
  spread_draws(r_condition[condition,]) %>%
  group_by(condition) %>%   # this line not necessary (done by spread_draws)
  median_qi(r_condition)    # r_condition not necessary (it is the only non-group column)
```

condition | r_condition | .lower | .upper | .width | .point | .interval |
---|---|---|---|---|---|---|
A | -0.3292104 | -1.2718102 | 0.7241221 | 0.95 | median | qi |
B | 0.4771384 | -0.4718292 | 1.5114266 | 0.95 | median | qi |
C | 1.3004918 | 0.3701968 | 2.3546531 | 0.95 | median | qi |
D | 0.4960814 | -0.4547683 | 1.5259761 | 0.95 | median | qi |
E | -1.3983219 | -2.3852123 | -0.4104601 | 0.95 | median | qi |

`tidybayes` also provides an implementation of `posterior::summarise_draws()` for grouped data frames (`tidybayes::summarise_draws.grouped_df()`), which you can use to quickly get convergence diagnostics:
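
For example:

```
m %>%
  spread_draws(r_condition[condition,]) %>%
  summarise_draws()
```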

condition | variable | mean | median | sd | mad | q5 | q95 | rhat | ess_bulk | ess_tail |
---|---|---|---|---|---|---|---|---|---|---|
A | r_condition | -0.3124791 | -0.3292104 | 0.4925077 | 0.4388009 | -1.0996520 | 0.5055601 | 1.002195 | 993.9697 | 1458.941 |
B | r_condition | 0.4878841 | 0.4771384 | 0.4919612 | 0.4404406 | -0.3032438 | 1.3157838 | 1.004103 | 1032.4160 | 1492.356 |
C | r_condition | 1.3211860 | 1.3004918 | 0.4910684 | 0.4447563 | 0.5447095 | 2.1591287 | 1.002873 | 1032.1502 | 1432.777 |
D | r_condition | 0.5053691 | 0.4960814 | 0.4931151 | 0.4518579 | -0.2774363 | 1.2967185 | 1.002082 | 1026.0885 | 1509.267 |
E | r_condition | -1.4021193 | -1.3983219 | 0.4856434 | 0.4314284 | -2.1946220 | -0.5856131 | 1.003443 | 1037.8651 | 1560.830 |

## Combining variables with different indices in a single tidy format data frame

`spread_draws()` and `gather_draws()` support extracting variables that have different indices into the same data frame. Indices with the same name are automatically matched up, and values are duplicated as necessary to produce one row per combination of levels of all indices. For example, we might want to calculate the mean within each condition (call this `condition_mean`). In this model, that mean is the intercept (`b_Intercept`) plus the effect for a given condition (`r_condition`).

We can gather draws from `b_Intercept` and `r_condition` together in a single data frame:
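
For example (first ten rows shown):

```
m %>%
  spread_draws(b_Intercept, r_condition[condition,]) %>%
  head(10)
```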

.chain | .iteration | .draw | b_Intercept | condition | r_condition |
---|---|---|---|---|---|
1 | 1 | 1 | -0.0423766 | A | 0.2021113 |
1 | 1 | 1 | -0.0423766 | B | 1.0692225 |
1 | 1 | 1 | -0.0423766 | C | 1.9237138 |
1 | 1 | 1 | -0.0423766 | D | 1.0640134 |
1 | 1 | 1 | -0.0423766 | E | -1.0131926 |
1 | 2 | 2 | 0.1185824 | A | 0.2423497 |
1 | 2 | 2 | 0.1185824 | B | 1.1102848 |
1 | 2 | 2 | 0.1185824 | C | 1.6346541 |
1 | 2 | 2 | 0.1185824 | D | 0.8431110 |
1 | 2 | 2 | 0.1185824 | E | -1.2025763 |

Within each draw, `b_Intercept` is repeated as necessary to correspond to every index of `r_condition`. Thus, the `mutate` function from dplyr can be used to find their sum, `condition_mean` (which is the mean for each condition):

```
m %>%
  spread_draws(`b_Intercept`, r_condition[condition,]) %>%
  mutate(condition_mean = b_Intercept + r_condition) %>%
  median_qi(condition_mean)
```

condition | condition_mean | .lower | .upper | .width | .point | .interval |
---|---|---|---|---|---|---|
A | 0.1985268 | -0.1335055 | 0.5492377 | 0.95 | median | qi |
B | 1.0016484 | 0.6581033 | 1.3364913 | 0.95 | median | qi |
C | 1.8358584 | 1.4717853 | 2.1765832 | 0.95 | median | qi |
D | 1.0190215 | 0.6731754 | 1.3676501 | 0.95 | median | qi |
E | -0.8904053 | -1.2257849 | -0.5537563 | 0.95 | median | qi |

`median_qi()` uses tidy evaluation (see `vignette("tidy-evaluation", package = "rlang")`), so it can take column expressions, not just column names. Thus, we can simplify the above example by moving the calculation of `condition_mean` from `mutate` into `median_qi()`:

```
m %>%
  spread_draws(b_Intercept, r_condition[condition,]) %>%
  median_qi(condition_mean = b_Intercept + r_condition)
```

condition | condition_mean | .lower | .upper | .width | .point | .interval |
---|---|---|---|---|---|---|
A | 0.1985268 | -0.1335055 | 0.5492377 | 0.95 | median | qi |
B | 1.0016484 | 0.6581033 | 1.3364913 | 0.95 | median | qi |
C | 1.8358584 | 1.4717853 | 2.1765832 | 0.95 | median | qi |
D | 1.0190215 | 0.6731754 | 1.3676501 | 0.95 | median | qi |
E | -0.8904053 | -1.2257849 | -0.5537563 | 0.95 | median | qi |

## Plotting intervals with multiple probability levels

`median_qi()` and its sister functions can produce an arbitrary number of probability intervals by setting the `.width =` argument:

```
m %>%
  spread_draws(b_Intercept, r_condition[condition,]) %>%
  median_qi(condition_mean = b_Intercept + r_condition, .width = c(.95, .8, .5))
```

condition | condition_mean | .lower | .upper | .width | .point | .interval |
---|---|---|---|---|---|---|
A | 0.1985268 | -0.1335055 | 0.5492377 | 0.95 | median | qi |
B | 1.0016484 | 0.6581033 | 1.3364913 | 0.95 | median | qi |
C | 1.8358584 | 1.4717853 | 2.1765832 | 0.95 | median | qi |
D | 1.0190215 | 0.6731754 | 1.3676501 | 0.95 | median | qi |
E | -0.8904053 | -1.2257849 | -0.5537563 | 0.95 | median | qi |
A | 0.1985268 | -0.0190356 | 0.4184225 | 0.80 | median | qi |
B | 1.0016484 | 0.7790907 | 1.2178606 | 0.80 | median | qi |
C | 1.8358584 | 1.6072368 | 2.0529455 | 0.80 | median | qi |
D | 1.0190215 | 0.7926312 | 1.2374315 | 0.80 | median | qi |
E | -0.8904053 | -1.1147945 | -0.6656555 | 0.80 | median | qi |
A | 0.1985268 | 0.0806090 | 0.3168711 | 0.50 | median | qi |
B | 1.0016484 | 0.8853918 | 1.1135458 | 0.50 | median | qi |
C | 1.8358584 | 1.7211107 | 1.9476755 | 0.50 | median | qi |
D | 1.0190215 | 0.8988493 | 1.1366991 | 0.50 | median | qi |
E | -0.8904053 | -1.0033817 | -0.7747320 | 0.50 | median | qi |

The results are in a tidy format: one row per group and uncertainty interval width (`.width`). This facilitates plotting. For example, assigning `-.width` to the `linewidth` aesthetic will show all intervals, making thicker lines correspond to smaller intervals. The `ggdist::geom_pointinterval()` geom automatically sets the `linewidth` aesthetic appropriately based on the `.width` column in the data to produce plots of points with multiple probability levels:
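
For example, a sketch of such a plot (the particular `.width` values here are an assumption):

```
m %>%
  spread_draws(b_Intercept, r_condition[condition,]) %>%
  median_qi(condition_mean = b_Intercept + r_condition, .width = c(.95, .66)) %>%  # widths chosen for illustration
  ggplot(aes(y = condition, x = condition_mean, xmin = .lower, xmax = .upper)) +
  geom_pointinterval()
```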

## Intervals with densities

To see the density along with the intervals, we can use `ggdist::stat_eye()` (“eye plots”, which combine intervals with violin plots), or `ggdist::stat_halfeye()` (interval + density plots):

```
m %>%
  spread_draws(b_Intercept, r_condition[condition,]) %>%
  mutate(condition_mean = b_Intercept + r_condition) %>%
  ggplot(aes(y = condition, x = condition_mean)) +
  stat_halfeye()
```

Or say you want to annotate portions of the densities in color; the `fill` aesthetic can vary within a slab in all geoms and stats in the `ggdist::geom_slabinterval()` family, including `ggdist::stat_halfeye()`. For example, if you want to annotate a domain-specific region of practical equivalence (ROPE), you could do something like this:

```
m %>%
  spread_draws(b_Intercept, r_condition[condition,]) %>%
  mutate(condition_mean = b_Intercept + r_condition) %>%
  ggplot(aes(y = condition, x = condition_mean, fill = after_stat(abs(x) < .8))) +
  stat_halfeye() +
  geom_vline(xintercept = c(-.8, .8), linetype = "dashed") +
  scale_fill_manual(values = c("gray80", "skyblue"))
```

## Other visualizations of distributions: `stat_slabinterval`

There are a variety of additional stats for visualizing distributions in the `ggdist::geom_slabinterval()` family of stats and geoms. See `vignette("slabinterval", package = "ggdist")` for an overview.

## Posterior means and predictions

Rather than calculating conditional means manually as in the previous example, we could use `add_epred_draws()`, which is analogous to `brms::posterior_epred()` (giving posterior draws from the expectation of the posterior predictive; i.e. posterior distributions of conditional means), but uses a tidy data format. We can combine it with `modelr::data_grid()` to first generate a grid describing the predictions we want, then transform that grid into a long-format data frame of draws from conditional means:
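
For example (first ten rows shown):

```
ABC %>%
  data_grid(condition) %>%
  add_epred_draws(m) %>%
  head(10)
```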

condition | .row | .chain | .iteration | .draw | .epred |
---|---|---|---|---|---|
A | 1 | NA | NA | 1 | 0.1597348 |
A | 1 | NA | NA | 2 | 0.3609321 |
A | 1 | NA | NA | 3 | -0.1160347 |
A | 1 | NA | NA | 4 | 0.3176810 |
A | 1 | NA | NA | 5 | 0.1595810 |
A | 1 | NA | NA | 6 | 0.5320286 |
A | 1 | NA | NA | 7 | -0.1261022 |
A | 1 | NA | NA | 8 | -0.1412915 |
A | 1 | NA | NA | 9 | 0.3951784 |
A | 1 | NA | NA | 10 | 0.5120497 |

To plot this example, we’ll also show the use of `ggdist::stat_pointinterval()` instead of `ggdist::geom_pointinterval()`, which summarizes draws into points and intervals within ggplot:
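
A sketch of that plot (the `.width` values are an assumption):

```
ABC %>%
  data_grid(condition) %>%
  add_epred_draws(m) %>%
  ggplot(aes(x = .epred, y = condition)) +
  stat_pointinterval(.width = c(.66, .95))  # .width values chosen for illustration
```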

## Quantile dotplots

Intervals are nice if the alpha level happens to line up with whatever decision you are trying to make, but getting a shape of the posterior is better (hence eye plots, above). On the other hand, making inferences from density plots is imprecise (estimating the area of one shape as a proportion of another is a hard perceptual task). Reasoning about probability in frequency formats is easier, motivating quantile dotplots (Kay et al. 2016, Fernandes et al. 2018), which also allow precise estimation of arbitrary intervals (down to the dot resolution of the plot, 100 in the example below).

Within the slabinterval family of geoms in `ggdist` is the `dots` and `dotsinterval` family, which automatically determine appropriate bin sizes for dotplots and can calculate quantiles from samples to construct quantile dotplots. `ggdist::stat_dotsinterval()` is the variant designed for use on samples:

```
ABC %>%
  data_grid(condition) %>%
  add_epred_draws(m) %>%
  ggplot(aes(x = .epred, y = condition)) +
  stat_dotsinterval(quantiles = 100)
```

The idea is to get away from thinking about the posterior as indicating one canonical point or interval, but instead to represent it as (say) 100 approximately equally likely points.

## Posterior predictions

Where `add_epred_draws()` is analogous to `brms::posterior_epred()`, `add_predicted_draws()` is analogous to `brms::posterior_predict()`, giving draws from the posterior predictive distribution.

Here is an example of posterior predictive distributions plotted using `ggdist::stat_slab()`:

```
ABC %>%
  data_grid(condition) %>%
  add_predicted_draws(m) %>%
  ggplot(aes(x = .prediction, y = condition)) +
  stat_slab()
```

We could also use `ggdist::stat_interval()` to plot predictive bands alongside the data:

```
ABC %>%
  data_grid(condition) %>%
  add_predicted_draws(m) %>%
  ggplot(aes(y = condition, x = .prediction)) +
  stat_interval(.width = c(.50, .80, .95, .99)) +
  geom_point(aes(x = response), data = ABC) +
  scale_color_brewer()
```

Altogether, data, posterior predictions, and posterior distributions of the means:

```
grid = ABC %>%
  data_grid(condition)

means = grid %>%
  add_epred_draws(m)

preds = grid %>%
  add_predicted_draws(m)

ABC %>%
  ggplot(aes(y = condition, x = response)) +
  stat_interval(aes(x = .prediction), data = preds) +
  stat_pointinterval(aes(x = .epred), data = means, .width = c(.66, .95), position = position_nudge(y = -0.3)) +
  geom_point() +
  scale_color_brewer()
```

## Posterior predictions, Kruschke-style

The above approach to posterior predictions integrates over the parameter uncertainty to give a single posterior predictive distribution. Another approach, often used by John Kruschke in his book *Doing Bayesian Data Analysis*, is to attempt to show both the predictive uncertainty and the parameter uncertainty simultaneously by showing several possible predictive distributions implied by the posterior.

We can do this pretty easily by asking for the distributional parameters for a given prediction implied by the posterior. We’ll do it explicitly here by setting `dpar = c("mu", "sigma")` in `add_epred_draws()`. Rather than specifying the parameters explicitly, you can also just set `dpar = TRUE` to get draws from all distributional parameters in a model, and this will work for any response distribution supported by brms. Then, we can select a small number of draws using `sample_draws()` and then use `ggdist::stat_slab()` to visualize each predictive distribution implied by the values of `mu` and `sigma`:

```
ABC %>%
  data_grid(condition) %>%
  add_epred_draws(m, dpar = c("mu", "sigma")) %>%
  sample_draws(30) %>%
  ggplot(aes(y = condition)) +
  stat_slab(
    aes(xdist = dist_normal(mu, sigma)),
    slab_color = "gray65", alpha = 1/10, fill = NA
  ) +
  geom_point(aes(x = response), data = ABC, shape = 21, fill = "#9ECAE1", size = 2)
```

We could even combine the Kruschke-style plots of predictive distributions with half-eyes showing the posterior means:

```
ABC %>%
  data_grid(condition) %>%
  add_epred_draws(m, dpar = c("mu", "sigma")) %>%
  ggplot(aes(x = condition)) +
  stat_slab(
    aes(ydist = dist_normal(mu, sigma)),
    slab_color = "gray65", alpha = 1/10, fill = NA, data = . %>% sample_draws(30), scale = .5
  ) +
  stat_halfeye(aes(y = .epred), side = "bottom", scale = .5) +
  geom_point(aes(y = response), data = ABC, shape = 21, fill = "#9ECAE1", size = 2, position = position_nudge(x = -.2))
```

## Fit/prediction curves

To demonstrate drawing fit curves with uncertainty, let’s fit a slightly naive model to part of the `mtcars` dataset:

```
m_mpg = brm(
  mpg ~ hp * cyl,
  data = mtcars,
  file = "models/tidy-brms_m_mpg.rds" # cache model (can be removed)
)
```

We can draw fit curves with probability bands:

```
mtcars %>%
  group_by(cyl) %>%
  data_grid(hp = seq_range(hp, n = 51)) %>%
  add_epred_draws(m_mpg) %>%
  ggplot(aes(x = hp, y = mpg, color = ordered(cyl))) +
  stat_lineribbon(aes(y = .epred)) +
  geom_point(data = mtcars) +
  scale_fill_brewer(palette = "Greys") +
  scale_color_brewer(palette = "Set2")
```

Or we can sample a reasonable number of fit lines (say 100) and overplot them:

```
mtcars %>%
  group_by(cyl) %>%
  data_grid(hp = seq_range(hp, n = 101)) %>%
  # NOTE: this shows the use of ndraws to subsample within add_epred_draws()
  # ONLY do this IF you are planning to make spaghetti plots, etc.
  # NEVER subsample to a small sample to plot intervals, densities, etc.
  add_epred_draws(m_mpg, ndraws = 100) %>%
  ggplot(aes(x = hp, y = mpg, color = ordered(cyl))) +
  geom_line(aes(y = .epred, group = paste(cyl, .draw)), alpha = .1) +
  geom_point(data = mtcars) +
  scale_color_brewer(palette = "Dark2")
```

Or we can create animated hypothetical outcome plots (HOPs) of fit lines:

```
set.seed(123456)

# NOTE: using a small number of draws to keep this example
# small, but in practice you probably want 50 or 100
ndraws = 20

p = mtcars %>%
  group_by(cyl) %>%
  data_grid(hp = seq_range(hp, n = 101)) %>%
  add_epred_draws(m_mpg, ndraws = ndraws) %>%
  ggplot(aes(x = hp, y = mpg, color = ordered(cyl))) +
  geom_line(aes(y = .epred, group = paste(cyl, .draw))) +
  geom_point(data = mtcars) +
  scale_color_brewer(palette = "Dark2") +
  transition_states(.draw, 0, 1) +
  shadow_mark(future = TRUE, color = "gray50", alpha = 1/20)

animate(p, nframes = ndraws, fps = 2.5, width = 432, height = 288, res = 96, dev = "png", type = "cairo")
```

Or we could plot posterior predictions (instead of means). For this example we’ll also use `alpha` to make it easier to see overlapping bands:

```
mtcars %>%
  group_by(cyl) %>%
  data_grid(hp = seq_range(hp, n = 101)) %>%
  add_predicted_draws(m_mpg) %>%
  ggplot(aes(x = hp, y = mpg, color = ordered(cyl), fill = ordered(cyl))) +
  stat_lineribbon(aes(y = .prediction), .width = c(.95, .80, .50), alpha = 1/4) +
  geom_point(data = mtcars) +
  scale_fill_brewer(palette = "Set2") +
  scale_color_brewer(palette = "Dark2")
```

This gets difficult to judge by group, so it is probably better to facet into multiple plots. Fortunately, since we are using ggplot, that functionality is built in:

```
mtcars %>%
  group_by(cyl) %>%
  data_grid(hp = seq_range(hp, n = 101)) %>%
  add_predicted_draws(m_mpg) %>%
  ggplot(aes(x = hp, y = mpg)) +
  stat_lineribbon(aes(y = .prediction), .width = c(.99, .95, .8, .5), color = brewer.pal(5, "Blues")[[5]]) +
  geom_point(data = mtcars) +
  scale_fill_brewer() +
  facet_grid(. ~ cyl, space = "free_x", scales = "free_x")
```

`brms::brm()` also allows us to set up submodels for parameters of the response distribution *other than* the location (e.g., mean). For example, we can allow a variance parameter, such as the standard deviation, to also be some function of the predictors.

This approach can be helpful in cases of non-constant variance (also called *heteroskedasticity* by folks who like obfuscation via Greek). E.g., imagine two groups, each with different mean response *and variance*:

```
set.seed(1234)
AB = tibble(
  group = rep(c("a", "b"), each = 20),
  response = rnorm(40, mean = rep(c(1, 5), each = 20), sd = rep(c(1, 3), each = 20))
)

AB %>%
  ggplot(aes(x = response, y = group)) +
  geom_point()
```

Here is a model that lets the mean *and standard deviation* of `response` be dependent on `group`:

```
m_ab = brm(
  bf(
    response ~ group,
    sigma ~ group
  ),
  data = AB,
  file = "models/tidy-brms_m_ab.rds" # cache model (can be removed)
)
```

We can plot the posterior distribution of the mean `response` alongside posterior predictive intervals and the data:

```
grid = AB %>%
  data_grid(group)

means = grid %>%
  add_epred_draws(m_ab)

preds = grid %>%
  add_predicted_draws(m_ab)

AB %>%
  ggplot(aes(x = response, y = group)) +
  stat_halfeye(aes(x = .epred), scale = 0.6, position = position_nudge(y = 0.175), data = means) +
  stat_interval(aes(x = .prediction), data = preds) +
  geom_point(data = AB) +
  scale_color_brewer()
```

This shows posteriors of the mean of each group (black intervals and the density plots) and posterior predictive intervals (blue).

The predictive intervals in group `b` are larger than in group `a` because the model fits a different standard deviation for each group. We can see how the corresponding distributional parameter, `sigma`, changes by extracting it using the `dpar` argument to `add_epred_draws()`:

```
grid %>%
  add_epred_draws(m_ab, dpar = TRUE) %>%
  ggplot(aes(x = sigma, y = group)) +
  stat_halfeye() +
  geom_vline(xintercept = 0, linetype = "dashed")
```

By setting `dpar = TRUE`, all distributional parameters are added as additional columns in the result of `add_epred_draws()`; if you only want a specific parameter, you can specify it (or a list of just the parameters you want). In the above model, `dpar = TRUE` is equivalent to `dpar = list("mu", "sigma")`.

## Comparing levels of a factor

If we wish to compare the means from each condition, `compare_levels()` facilitates comparisons of the value of some variable across levels of a factor. By default it computes all pairwise differences.

Let’s demonstrate `compare_levels()` with `ggdist::stat_halfeye()`. We’ll also re-order by the mean of the difference:
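
A sketch of that demonstration (the `reorder()` step is one way to implement the re-ordering by mean difference):

```
m %>%
  spread_draws(r_condition[condition,]) %>%
  compare_levels(r_condition, by = condition) %>%
  ungroup() %>%
  mutate(condition = reorder(condition, r_condition)) %>%  # order comparisons by their mean
  ggplot(aes(y = condition, x = r_condition)) +
  stat_halfeye()
```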

## Ordinal models

The `posterior_epred()` function for ordinal and multinomial regression models in brms returns multiple variables for each draw: one for each outcome category (in contrast to `rstanarm::stan_polr()` models, which return draws from the latent linear predictor). The philosophy of `tidybayes` is to tidy whatever format is output by a model, so in keeping with that philosophy, when applied to ordinal and multinomial `brms` models, `add_epred_draws()` adds an additional column called `.category` and outputs a separate row for each category for every draw and predictor.

We’ll fit a model using the `mtcars` dataset that predicts the number of cylinders in a car given the car’s mileage (in miles per gallon). While this is a little backwards causality-wise (presumably the number of cylinders causes the mileage, if anything), that does not mean this is not a fine prediction task (I could probably tell someone who knows something about cars the MPG of a car and they could do reasonably well at guessing the number of cylinders in the engine).

Before we fit the model, let’s clean the dataset by making the `cyl` column an ordered factor (by default it is just a number):
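
That cleaning step looks something like this:

```
mtcars_clean = mtcars %>%
  mutate(cyl = ordered(cyl))  # make cyl an ordered factor
```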

 | mpg | cyl | disp | hp | drat | wt | qsec | vs | am | gear | carb |
---|---|---|---|---|---|---|---|---|---|---|---|
Mazda RX4 | 21.0 | 6 | 160 | 110 | 3.90 | 2.620 | 16.46 | 0 | 1 | 4 | 4 |
Mazda RX4 Wag | 21.0 | 6 | 160 | 110 | 3.90 | 2.875 | 17.02 | 0 | 1 | 4 | 4 |
Datsun 710 | 22.8 | 4 | 108 | 93 | 3.85 | 2.320 | 18.61 | 1 | 1 | 4 | 1 |
Hornet 4 Drive | 21.4 | 6 | 258 | 110 | 3.08 | 3.215 | 19.44 | 1 | 0 | 3 | 1 |
Hornet Sportabout | 18.7 | 8 | 360 | 175 | 3.15 | 3.440 | 17.02 | 0 | 0 | 3 | 2 |
Valiant | 18.1 | 6 | 225 | 105 | 2.76 | 3.460 | 20.22 | 1 | 0 | 3 | 1 |

Then we’ll fit an ordinal regression model:

```
m_cyl = brm(
  cyl ~ mpg,
  data = mtcars_clean,
  family = cumulative,
  seed = 58393,
  file = "models/tidy-brms_m_cyl.rds" # cache model (can be removed)
)
```

`add_epred_draws()` will include a `.category` column, and `.epred` will contain draws from the posterior distribution for the probability that the response is in that category. For example, here is the fit for the first row in the dataset:
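
The first row of the dataset has `mpg = 21`, so a call along these lines produces the table below:

```
tibble(mpg = 21) %>%  # mpg = 21 matches the first row of mtcars_clean
  add_epred_draws(m_cyl) %>%
  median_qi(.epred)
```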

mpg | .row | .category | .epred | .lower | .upper | .width | .point | .interval |
---|---|---|---|---|---|---|---|---|
21 | 1 | 4 | 0.3567055 | 0.0928295 | 0.7235128 | 0.95 | median | qi |
21 | 1 | 6 | 0.6112210 | 0.2568508 | 0.8932980 | 0.95 | median | qi |
21 | 1 | 8 | 0.0144072 | 0.0002072 | 0.1296048 | 0.95 | median | qi |

Note: for the `.category` variable to retain its original factor level names you must be using `brms` version 2.15.9 or greater.

We could plot fit lines for predicted probabilities against the dataset:

```
data_plot = mtcars_clean %>%
  ggplot(aes(x = mpg, y = cyl, color = cyl)) +
  geom_point() +
  scale_color_brewer(palette = "Dark2", name = "cyl")

fit_plot = mtcars_clean %>%
  data_grid(mpg = seq_range(mpg, n = 101)) %>%
  add_epred_draws(m_cyl, value = "P(cyl | mpg)", category = "cyl") %>%
  ggplot(aes(x = mpg, y = `P(cyl | mpg)`, color = cyl)) +
  stat_lineribbon(aes(fill = cyl), alpha = 1/5) +
  scale_color_brewer(palette = "Dark2") +
  scale_fill_brewer(palette = "Dark2")

plot_grid(ncol = 1, align = "v",
  data_plot,
  fit_plot
)
```

The above display does not let you see the correlation between `P(cyl|mpg)` for different values of `cyl` at a particular value of `mpg`. For example, in the portion of the posterior where `P(cyl = 6|mpg = 20)` is high, `P(cyl = 4|mpg = 20)` and `P(cyl = 8|mpg = 20)` must be low (since these must add up to 1).

One way to see this correlation might be to employ hypothetical outcome plots (HOPs) just for the fit line, “detaching” it from the ribbon (another alternative would be to use HOPs on top of line ensembles, as demonstrated earlier in this document). By employing animation, you can see how the lines move in tandem or opposition to each other, revealing some patterns in how they are correlated:

```
# NOTE: using a small number of draws to keep this example
# small, but in practice you probably want 50 or 100
ndraws = 20

p = mtcars_clean %>%
  data_grid(mpg = seq_range(mpg, n = 101)) %>%
  add_epred_draws(m_cyl, value = "P(cyl | mpg)", category = "cyl") %>%
  ggplot(aes(x = mpg, y = `P(cyl | mpg)`, color = cyl)) +
  # we remove the `.draw` column from the data for stat_lineribbon so that the same ribbons
  # are drawn on every frame (since we use .draw to determine the transitions below)
  stat_lineribbon(aes(fill = cyl), alpha = 1/5, color = NA, data = . %>% select(-.draw)) +
  # we use sample_draws to subsample at the level of geom_line (rather than for the full dataset
  # as in previous HOPs examples) because we need the full set of draws for stat_lineribbon above
  geom_line(aes(group = paste(.draw, cyl)), linewidth = 1, data = . %>% sample_draws(ndraws)) +
  scale_color_brewer(palette = "Dark2") +
  scale_fill_brewer(palette = "Dark2") +
  transition_manual(.draw)

animate(p, nframes = ndraws, fps = 2.5, width = 576, height = 192, res = 96, dev = "png", type = "cairo")
```

Notice how the lines move together, and how they move up or down together or in opposition. We could take a slice through these lines at an x position in the above chart (say, `mpg = 20`) and look at the correlation between them using a scatterplot matrix:

```
tibble(mpg = 20) %>%
  add_epred_draws(m_cyl, value = "P(cyl | mpg = 20)", category = "cyl") %>%
  ungroup() %>%
  select(.draw, cyl, `P(cyl | mpg = 20)`) %>%
  gather_pairs(cyl, `P(cyl | mpg = 20)`, triangle = "both") %>%
  filter(.row != .col) %>%
  ggplot(aes(.x, .y)) +
  geom_point(alpha = 1/50) +
  facet_grid(.row ~ .col) +
  ylab("P(cyl = row | mpg = 20)") +
  xlab("P(cyl = col | mpg = 20)")
```

While talking about the mean for an ordinal distribution often does not make sense, in this particular case one could argue that the expected number of cylinders for a car given its miles per gallon is a meaningful quantity. We could plot the posterior distribution for the average number of cylinders for a car given a particular miles per gallon as follows:

\[
\textrm{E}[\textrm{cyl}|\textrm{mpg}=m] = \sum_{c \in \{4,6,8\}} c \cdot \textrm{P}(\textrm{cyl}=c|\textrm{mpg}=m)
\]

We can use the above formula to derive a posterior distribution for \(\textrm{E}[\textrm{cyl}|\textrm{mpg}=m]\) from the model. The model gives us a posterior distribution for \(\textrm{P}(\textrm{cyl}=c|\textrm{mpg}=m)\): when `mpg` = \(m\), the response-scale linear predictor (the `.epred` column from `add_epred_draws()`) for `cyl` (aka `.category`) = \(c\) is \(\textrm{P}(\textrm{cyl}=c|\textrm{mpg}=m)\). Thus, we can group within `.draw` and then use `summarise` to calculate the expected value:

```
label_data_function = . %>%
  ungroup() %>%
  filter(mpg == quantile(mpg, .47)) %>%
  summarise_if(is.numeric, mean)

data_plot_with_mean = mtcars_clean %>%
  data_grid(mpg = seq_range(mpg, n = 101)) %>%
  # NOTE: this shows the use of ndraws to subsample within add_epred_draws()
  # ONLY do this IF you are planning to make spaghetti plots, etc.
  # NEVER subsample to a small sample to plot intervals, densities, etc.
  add_epred_draws(m_cyl, value = "P(cyl | mpg)", category = "cyl", ndraws = 100) %>%
  group_by(mpg, .draw) %>%
  # calculate expected cylinder value
  mutate(cyl = as.numeric(as.character(cyl))) %>%
  summarise(cyl = sum(cyl * `P(cyl | mpg)`), .groups = "drop") %>%
  ggplot(aes(x = mpg, y = cyl)) +
  geom_line(aes(group = .draw), alpha = 5/100) +
  geom_point(aes(y = as.numeric(as.character(cyl)), fill = cyl), data = mtcars_clean, shape = 21, size = 2) +
  geom_text(aes(x = mpg + 4), label = "E[cyl | mpg]", data = label_data_function, hjust = 0) +
  geom_segment(aes(yend = cyl, xend = mpg + 3.9), data = label_data_function) +
  scale_fill_brewer(palette = "Set2", name = "cyl")

plot_grid(ncol = 1, align = "v",
  data_plot_with_mean,
  fit_plot
)
```

Now let’s do some posterior predictive checking: do posterior predictions look like the data? For this, we’ll make new predictions at the same values of `mpg` as were present in the original dataset (gray circles) and plot these with the observed data (colored circles):

```
mtcars_clean %>%
  # we use `select` instead of `data_grid` here because we want to make posterior predictions
  # for exactly the same set of observations we have in the original data
  select(mpg) %>%
  add_predicted_draws(m_cyl, seed = 1234) %>%
  # recover original factor labels
  mutate(cyl = levels(mtcars_clean$cyl)[.prediction]) %>%
  ggplot(aes(x = mpg, y = cyl)) +
  geom_count(color = "gray75") +
  geom_point(aes(fill = cyl), data = mtcars_clean, shape = 21, size = 2) +
  scale_fill_brewer(palette = "Dark2") +
  geom_label_repel(
    data = . %>% ungroup() %>% filter(cyl == "8") %>% filter(mpg == max(mpg)) %>% dplyr::slice(1),
    label = "posterior predictions", xlim = c(26, NA), ylim = c(NA, 2.8), point.padding = 0.3,
    label.size = NA, color = "gray50", segment.color = "gray75"
  ) +
  geom_label_repel(
    data = mtcars_clean %>% filter(cyl == "6") %>% filter(mpg == max(mpg)) %>% dplyr::slice(1),
    label = "observed data", xlim = c(26, NA), ylim = c(2.2, NA), point.padding = 0.2,
    label.size = NA, segment.color = "gray35"
  )
```

This looks pretty good. Let’s check using another typical posterior predictive checking plot: many simulated distributions of the response (`cyl`) against the observed distribution of the response. For a continuous response variable this is usually done with a density plot; here, we’ll plot the number of posterior predictions in each bin as a line plot, since the response variable is discrete:

```
mtcars_clean %>%
  select(mpg) %>%
  add_predicted_draws(m_cyl, ndraws = 100, seed = 12345) %>%
  # recover original factor labels
  mutate(cyl = levels(mtcars_clean$cyl)[.prediction]) %>%
  ggplot(aes(x = cyl)) +
  stat_count(aes(group = NA), geom = "line", data = mtcars_clean, color = "red", linewidth = 3, alpha = .5) +
  stat_count(aes(group = .draw), geom = "line", position = "identity", alpha = .05) +
  geom_label(data = data.frame(cyl = "4"), y = 9.5, label = "posterior\npredictions",
    hjust = 1, color = "gray50", lineheight = 1, label.size = NA) +
  geom_label(data = data.frame(cyl = "8"), y = 14, label = "observed\ndata",
    hjust = 0, color = "red", lineheight = 1, label.size = NA)
```