This note covers how to extract samples from an RStan fit object and some situations in which you might want to use the **generated quantities** block in your Stan file. Specifically, we cover generating posterior predictions and evaluating the pointwise posterior log likelihood of an autoregressive process with one time lag, known as AR(1). AR(p) processes are limited in their ability to model time series, and models such as vector autoregression and Gaussian processes tend to be much better alternatives (both of which can also be fit in Stan).

The motivation for an AR(1) model is the belief that your outcome variable at time \(t\) can be explained by its value at time \(t-1\). For example, we may believe that a country's monthly unemployment rate depends on the previous month's rate.

Mathematically we define an AR(1) process as, \[ \begin{align} y_t \sim \mathcal{N}(\alpha + \rho y_{t-1}, \sigma) \end{align} \] where \(\alpha\), \(\rho\), and \(\sigma\) are parameters. To make sure the series is stationary (i.e. well behaved and not explosive) we can bound the parameter on the lag variable such that \(\rho \in [-1,1]\).

This is a generative model so we can simulate the data and see if we can recover the parameters by fitting the model in Stan.

```
# AR(1) data-generating process
alpha <- 3
rho <- 0.6
dat <- list(N = 200)
dat$y <- numeric(dat$N)  # preallocate the series
dat$y[1] <- 1
for (n in 2:dat$N)
  dat$y[n] <- alpha + rho * dat$y[n - 1] + rnorm(1, 0, 1)
```
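
As a quick sanity check on the simulation, a stationary AR(1) process has mean \(\alpha/(1-\rho)\) and standard deviation \(\sigma/\sqrt{1-\rho^2}\), so with \(\alpha = 3\), \(\rho = 0.6\), and error sd \(1\) the series should hover around \(7.5\) with sd roughly \(1.25\). A rough check in R (the exact values will vary with your random draws):

```
# compare sample moments to the theoretical stationary values
mean(dat$y)  # should be near 3 / (1 - 0.6) = 7.5
sd(dat$y)    # should be near 1 / sqrt(1 - 0.6^2) ~ 1.25
```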

Our Stan model will look something like,

```
// AR(1) model
data {
  int<lower=0> N; // number of observations
  real y[N];      // time series data
}
parameters {
  real<lower=-1, upper=1> rho; // parameter on lag term
  real alpha;                  // constant term
  real<lower=0> sigma;         // sd of error
}
model {
  // likelihood
  for (n in 2:N)
    target += normal_lpdf(y[n] | alpha + rho * y[n - 1], sigma);
  // priors
  target += normal_lpdf(rho | 0, 0.5);
  target += normal_lpdf(alpha | 0, 1);
}
```
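
The introduction mentioned the **generated quantities** block for posterior predictions and the pointwise log likelihood. One way to add it to the model above is the following sketch (the handling of the first observation is a choice here, since \(y_1\) is conditioned on rather than modeled):

```
// sketch of a generated quantities block appended to the model above
generated quantities {
  vector[N] y_rep;   // posterior predictive draws
  vector[N] log_lik; // pointwise log likelihood
  y_rep[1] = y[1];   // first observation is conditioned on, not modeled
  log_lik[1] = 0;
  for (n in 2:N) {
    y_rep[n] = normal_rng(alpha + rho * y[n - 1], sigma);
    log_lik[n] = normal_lpdf(y[n] | alpha + rho * y[n - 1], sigma);
  }
}
```

The `log_lik` values can then be passed to packages such as **loo** for model comparison, and `y_rep` can be compared against the observed series for posterior predictive checks.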

Notice that we've put bounds on the data and parameters using angle brackets `<...>`.

```
library(rstan)
rstan_options(auto_write = TRUE) # compiled model saved in same directory as the Stan file
options(mc.cores = parallel::detectCores()) # run parallel::detectCores() chains in parallel
# fit the model
fit0 <- stan("models/AR1.stan", data = dat,
             iter = 500, chains = 4)
```

We can take a look at the diagnostics by printing `fit0` to the console and examining the effective sample sizes and \(\hat{R}\). We can also look at the traceplots to visually determine whether the chains have mixed well.

```
retro <- c(rgb(255, 102, 136, alpha = 150, max = 255),
           rgb(187, 102, 136, alpha = 150, max = 255),
           rgb(119, 102, 136, alpha = 150, max = 255),
           rgb(51,  102, 136, alpha = 150, max = 255))
print(fit0, digits = 2)
```

```
## Inference for Stan model: AR1.
## 4 chains, each with iter=500; warmup=250; thin=1;
## post-warmup draws per chain=250, total post-warmup draws=1000.
##
## mean se_mean sd 2.5% 25% 50% 75% 97.5% n_eff
## rho 0.64 0.00 0.05 0.55 0.61 0.64 0.67 0.74 392
## alpha 2.65 0.02 0.36 1.91 2.44 2.66 2.90 3.34 400
## sigma 0.93 0.00 0.05 0.84 0.89 0.93 0.96 1.02 528
## lp__ -273.32 0.07 1.31 -276.57 -273.97 -272.95 -272.33 -271.84 402
## Rhat
## rho 1
## alpha 1
## sigma 1
## lp__ 1
##
## Samples were drawn using NUTS(diag_e) at Wed Apr 12 19:48:58 2017.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at
## convergence, Rhat=1).
```

```
traceplot(fit0) + scale_color_manual(values = retro)
```

```
## Scale for 'colour' is already present. Adding another scale for
## 'colour', which will replace the existing scale.
```
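
Finally, since this note is also about pulling samples out of the fitted object: `rstan::extract()` returns the post-warmup draws as a named list of arrays, which you can summarize or plot directly. A sketch, applied to the `fit0` object above:

```
# extract post-warmup draws as a named list of arrays
post <- extract(fit0)
str(post$rho)                             # vector of posterior draws for rho
quantile(post$rho, c(0.025, 0.5, 0.975))  # posterior interval for rho
hist(post$alpha, col = retro[4], main = "Posterior of alpha")
```

By default `extract()` permutes and merges the chains; pass `permuted = FALSE` if you need the draws organized by chain and iteration.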