Bayesian Inference

NYU Applied Statistics for Social Science Research

Eric Novik

New York University

Introductions

  • State your name and where you are from
  • What is your field of study/work, and what are you hoping to learn in this class?

\[ \DeclareMathOperator{\E}{\mathbb{E}} \DeclareMathOperator{\P}{\mathbb{P}} \DeclareMathOperator{\V}{\mathbb{V}} \DeclareMathOperator{\L}{\mathscr{L}} \DeclareMathOperator{\I}{\text{I}} \DeclareMathOperator{\d}{\mathrm{d}} \]

Lecture 1: Bayesian Workflow

  • Overview of the course
  • Subjective vs Objective
  • Statistics vs AI/ML
  • Brief history of Bayesian inference
  • Stories: pricing books and developing drugs
  • Review of probability and simulations
  • Bayes’s rule
  • Introduction to Bayesian workflow
  • Wanna bet? Mystery box game

Subjective vs Objective

  • An experienced sommelier is able to correctly identify 8 types of wine from 8 samples
  • An astrologer correctly predicted whether the stock market would be up or down 8 times in a row
  • Is your inference about their predictive ability different? The data are the same!
  • Should we let data speak?

Statistics vs. AI/ML (Simplification!)

  • AI is about automating tasks that humans are able to do
    • Recognizing faces, cats, and other objects
    • Identifying tumors on a radiology scan
    • Playing Chess and Go
    • Driving a car
    • Post ChatGPT 4: Generating coherent text/images/video, signs of reasoning
  • Statistics is useful for answering questions that humans cannot answer unaided
    • How fast does a drug clear from the body?
    • What is the expected tumor size two months after treatment?
    • How would patients respond under a different treatment?
    • Should I take this drug?
    • Should we (FDA/EMA/…) approve this drug?

Note

“Machine learning excels when you have lots of training data that can be reasonably modeled as exchangeable with your test data; Bayesian inference excels when your data are sparse and your model is dense.” — Andrew Gelman

Turn to your neighbor and discuss

Classify each of the following: {decision, parameter inference, prediction, causal inference}

  1. How fast does a drug clear from the body?
  2. What is the expected tumor size two months after treatment?
  3. How would patients respond under a different treatment?
  4. Should I take this drug?
  5. Should we (FDA/EMA/…) approve this drug?

Question

Which category is missing data imputation?

Brief History

Summary of the book The Theory That Would Not Die

  • Thomas Bayes (1702(?) — 1761) is credited with discovering what we now call Bayes’s rule
  • His paper was published posthumously by Richard Price in 1763
  • Laplace (1749 — 1827) independently discovered the rule and published it in 1774
  • Scientific context: Newton’s Principia was published in 1687
  • Bayesian wins: German Enigma cipher, search for a missing H-bomb, Federalist papers, Moneyball, FiveThirtyEight

Laplace’s Demon

We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at any given moment knew all of the forces that animate nature and the mutual positions of the beings that compose it, if this intellect were vast enough to submit the data to analysis, could condense into a single formula the movement of the greatest bodies of the universe and that of the lightest atom; for such an intellect nothing could be uncertain, and the future just like the past would be present before its eyes.

Marquis Pierre Simon de Laplace (1749 — 1827)

“Uncertainty is a function of our ignorance, not a property of the world.”

Modern Examples of Bayesian Analyses

Elasticity of Demand

Stan with Hierarchical Models

Pharmacokinetics of Drugs

Stan with Ordinary Differential Equations

Nonparametric Bayes

Stan with Gaussian Processes

Example Simulation

  • We will use R’s sample() function to simulate rolls of a die and the replicate() function to repeat the rolling process many times
die <- 1:6; N <- 1e4
roll <- function(x, n) {
  sample(x, size = n, replace = TRUE)
}
roll(die, 2) # roll the die twice
[1] 1 4
rolls <- replicate(N, roll(die, 2))
rownames(rolls) <- c("X1", "X2")
rolls[, 1:10] # print first 10 outcomes
   [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
X1    5    1    2    6    5    5    4    6    1     3
X2    1    3    3    4    2    3    2    6    5     4
Y <- colSums(rolls) # Y = X1 + X2
head(Y)
[1]  6  4  5 10  7  8
mean(Y < 11) # == 1/N * sum(Y < 11)
[1] 0.9148
  • Given a random variable \(Y\), \(y^{(1)}, y^{(2)}, y^{(3)}, \ldots, y^{(N)}\) are simulations or draws from \(Y\)
  • Fundamental bridge (Blitzstein & Hwang p. 164): \(\P(A) = \E(\I_A)\)
  • For \(Y = X_1 + X_2\): \(\P(Y < 11) \approx \frac{1}{N} \sum^{N}_{n=1} \I(y^{(n)} < 11)\)
  • In R, \(Y < 11\) creates an indicator variable
  • And mean() does the average
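Since the sample space of two dice has only 36 equally likely outcomes, the Monte Carlo estimate above can be checked against the exact probability:

```r
# Exact P(Y < 11) from the 36 equally likely (X1, X2) pairs
outcomes <- expand.grid(X1 = 1:6, X2 = 1:6)
p_exact <- mean(outcomes$X1 + outcomes$X2 < 11)
p_exact # 33/36, approximately 0.9167
```

The simulated value 0.9148 is within Monte Carlo error of this exact answer.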

Example Simulation

This is a more direct demonstration of random draws from a distribution of \(Y = X_1 + X_2\)

# Define the possible values of 
# Y = X1 + X2 and their probabilities

Omega <- 2:12
Y_probs <- 
  c(1, 2, 3, 4, 5, 6, 5, 4, 3, 2, 1) / 36

# Simulate 10,000 realizations of Y
Y_samples <- sample(Omega,
                    size = 1e4, 
                    replace = TRUE, 
                    prob = Y_probs)

# Compute Pr(A)
mean(Y_samples < 11)
[1] 0.918
# Plot a histogram of the realizations of Y
par(bg = "#f0f1eb")
hist(Y_samples, 
     breaks = seq(1.5, 12.5, by = 1), 
     main = "Realizations of Y",
     xlab = "Y", ylab = "P",
     col = "lightblue",
     prob = TRUE)

Can’t Stop Board Game

Law of Total Probability (LOTP)

Let \(A_1, \ldots, A_n\) be a partition of \(\Omega\), so that the \(A_i\) are disjoint, \(\P(A_i) > 0\), and \(\bigcup_{i=1}^{n} A_i = \Omega\). \[ \P(B) = \sum_{i=1}^{n} \P(B \cap A_i) = \sum_{i=1}^{n} \P(B \mid A_i) \P(A_i) \]
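To make the LOTP concrete, we can partition on the value of the first die from the earlier dice example and recover \(\P(Y < 11)\) exactly (the choice of event and partition here is ours, not prescribed by the slide):

```r
# Partition by the value of the first die: A_i = {X1 = i}, P(A_i) = 1/6
# B = {X1 + X2 < 11}, so P(B | A_i) = P(X2 < 11 - i)
p_Ai <- rep(1/6, 6)
p_B_given_Ai <- sapply(1:6, function(i) mean(i + 1:6 < 11))
sum(p_B_given_Ai * p_Ai) # 33/36, matching the direct count
```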

BIPEPSE: AI as a tutor

Suppose you forgot the law of iterated expectations (Adam’s Law) which states \[ \E(Y) = \E_X(\E_{Y|X}(Y \mid X)) \]

Use the following AI prompts

  • Background: I am a Masters student in Statistics. I want to go over the law of iterated expectations. Intuition: First, give me an intuitive explanation. I will ask a follow-up question after.
  • Proof: Please, provide a step-by-step proof of the discrete case, explaining every move.
  • Example: Please provide a numerical example.
  • Problem: Now, please give me a practice problem.
  • Simulation: Write a simulation yourself and ask the AI to check it, OR say: write a simulation in R
  • Explanation: How is this for an explanation? [Enter an explanation in your words and have it check you]

Adam’s Law Simulation

library(purrr)
n <- 20
class_sizes <- sample(n)
mean_gpa_vec <- runif(n = n, 2.8, 3.2)
N <- sum(class_sizes)
W <- class_sizes / N
G <- map2(class_sizes, mean_gpa_vec, # one mean per class
          \(size, mu) rnorm(n = size, mean = mu, sd = 1))

# E(Y)
EY <- unlist(G) |> mean()
print(EY)

# E(E(Y | X))
class_means <- map_dbl(G, mean) # can't just average those
gpaXprob <- class_means * W
EEYX <- sum(gpaXprob)
print(EEYX)
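The estimate of \(\E(Y)\) should also agree with the exact value \(\sum_i W_i \mu_i\) computed from the known class means. A self-contained sketch of that check, with a fixed seed and new (illustrative) variable names:

```r
set.seed(123)
n_class <- 50
sizes <- sample(20:200, n_class, replace = TRUE) # class sizes
mus <- runif(n_class, 2.8, 3.2)                  # per-class mean GPA
W <- sizes / sum(sizes)                          # P(student is in class i)
# Simulate each student's GPA using their own class's mean
gpas <- unlist(mapply(function(s, m) rnorm(s, mean = m, sd = 1), sizes, mus))
EY_hat <- mean(gpas)      # Monte Carlo estimate of E(Y)
EY_theory <- sum(W * mus) # exact E_X(E(Y | X))
c(EY_hat, EY_theory)      # should be close
```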

Bivariate Case

  • A bivariate CDF \(F_{X,Y}(x, y) = \P(X \leq x, Y \leq y)\)

\[ 0 \leq F_{X,Y}(x, y) \leq 1 \quad \text{for all } x, y \]

  • If the PDF exists

\[ F_{X,Y}(x, y) = \int_{-\infty}^x \int_{-\infty}^y f_{X,Y}(u, v) \, \mathrm{d}v \, \mathrm{d}u \]

  • Normalization

\[ \int_{-\infty}^\infty \int_{-\infty}^\infty f_{X,Y}(x, y) \, \mathrm{d}x \, \mathrm{d}y = 1 \]

  • From CDF to PDF

\[ f_{X,Y}(x, y) = \frac{\partial^2}{\partial x \partial y} F_{X,Y}(x, y) \]

Bivariate Case

  • Marginalization

\[ f_Y(y) = \int_{-\infty}^\infty f_{X,Y}(x, y) \, \mathrm{d}x. \]

  • Conditional CDF

\[ F_{Y \mid X}(y \mid x) = \P(Y \leq y \mid X = x) \]

  • Conditional PDF: a function of \(y\) for fixed \(x\) \[ f_{Y|X}(y \mid x) = \frac{f_{X,Y}(x, y)}{f_X(x)} \]
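Marginalization can be checked numerically for the bivariate normal used in the demo that follows: integrating the joint density over \(x\) at a fixed \(y\) should recover the univariate Normal\((0, \sigma_Y)\) density. A sketch with a hand-coded density (the grid point \(y_0\) is an arbitrary choice):

```r
# Integrate the joint density over x at fixed y; compare to dnorm(y, 0, sigma_Y)
sigma_X <- 1; sigma_Y <- 2; rho <- 0.3
f_xy <- function(x, y) {
  # hand-coded bivariate normal density with mu = (0, 0)
  Q <- (x^2 / sigma_X^2 - 2 * rho * x * y / (sigma_X * sigma_Y) +
        y^2 / sigma_Y^2) / (1 - rho^2)
  exp(-Q / 2) / (2 * pi * sigma_X * sigma_Y * sqrt(1 - rho^2))
}
y0 <- 1
f_Y_numeric <- integrate(f_xy, -Inf, Inf, y = y0)$value
c(f_Y_numeric, dnorm(y0, mean = 0, sd = sigma_Y)) # should agree
```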

Bivariate Normal

\[ f_{X,Y}(x, y) = \frac{1}{2 \pi \sqrt{\det(\Sigma)}} \exp\left( -\frac{1}{2} \begin{bmatrix} x - \mu_X \\ y - \mu_Y \end{bmatrix}^\top \Sigma^{-1} \begin{bmatrix} x - \mu_X \\ y - \mu_Y \end{bmatrix} \right) \]

\[ \Sigma = \begin{bmatrix} \sigma_X^2 & \rho\,\sigma_X\sigma_Y \\ \rho\,\sigma_X\sigma_Y & \sigma_Y^2 \end{bmatrix}, \quad \rho = \frac{\operatorname{Cov}(X, Y)}{\sigma_X \, \sigma_Y}, \quad -1 \leq \rho \leq 1 \]

Bivariate Normal Demo

library(distributional)
library(dplyr) # for as_tibble(), filter(), and summarise() used below

sigma_X <- 1
sigma_Y <- 2
rho <- 0.3  # Correlation coefficient

rho_sigmaXY <- rho * sigma_X * sigma_Y # why?

cov_matrix <- matrix(c(sigma_X^2,   rho_sigmaXY,
                       rho_sigmaXY, sigma_Y^2), 2, 2)

dmv_norm <- dist_multivariate_normal(mu    = list(c(0, 0)), 
                                     sigma = list(cov_matrix))
dimnames(dmv_norm) <- c("x", "y")
variance(dmv_norm)
     x y
[1,] 1 4
covariance(dmv_norm)
[[1]]
       x   y
[1,] 1.0 0.6
[2,] 0.6 4.0
density(dmv_norm, cbind(3, 3)) |> round(2)   # should be low
[1] 0
cdf(dmv_norm, cbind(3, 3)) |> round(2)       # should be high
[1] 0.93

Bivariate Normal Demo: Sampling

# draws from the bivariate normal
draws <- generate(dmv_norm, 1e4)            
str(draws)
List of 1
 $ : num [1:10000, 1:2] 0.956 1.422 0.175 0.597 -1.106 ...
  ..- attr(*, "dimnames")=List of 2
  .. ..$ : NULL
  .. ..$ : chr [1:2] "x" "y"
draws <- as_tibble(draws[[1]])
head(draws)
# A tibble: 6 × 2
       x     y
   <dbl> <dbl>
1  0.956  1.75
2  1.42   1.68
3  0.175 -1.70
4  0.597  2.71
5 -1.11  -2.10
6 -0.826 -1.95
# estimated means
colMeans(draws) |> round(2)
    x     y 
-0.01  0.00 
# estimated covariance matrix
cov(draws) |> round(2)
     x    y
x 0.99 0.60
y 0.60 3.95
# estimated correlations matrix
cor(draws) |> round(2)
    x   y
x 1.0 0.3
y 0.3 1.0
# should be close to cdf(dmv_norm, cbind(3, 3)); why?
# both evaluate to P(X < 3 and Y < 3)
# & is an _and_ operator
mean(draws$x < 3 & draws$y < 3) |> round(2)
[1] 0.93
# P(X < 0 or Y < 1)
# | is an _or_, not conditioning operator
mean(draws$x < 0 | draws$y < 1) |> round(2)
[1] 0.8

Bivariate Normal Demo: Analytical

How would we do the above analytically? \[ \P(X < 0 \cup Y < 1) = \P(X < 0) + \P(Y < 1) - \P(X < 0 \cap Y < 1) \]

# Compute P(X < 0)
p_X_less_0 <- cdf(dist_normal(mu = 0, sigma = sigma_X), 0) # pnorm(0, mean = 0, sd = sigma_X)

# Compute P(Y < 1)
p_Y_less_1 <- cdf(dist_normal(mu = 0, sigma = sigma_Y), 1) # pnorm(1, mean = 0, sd = sigma_Y)

# Compute P(X < 0, Y < 1)
p_X_less_0_and_Y_less_1 <- cdf(dmv_norm, cbind(0, 1))

# P(X < 0 or Y < 1)
(p_X_less_0 + p_Y_less_1 - p_X_less_0_and_Y_less_1) |> round(2)
[1] 0.8
  • Would you rather do mean(draws$x < 0 | draws$y < 1) or the analytical solution?
  • How would you evaluate \(\P(X < 0 \mid Y < 1)\)?
draws |> filter(y < 1) |>
  summarise(pr = mean(x < 0)) |>
  as.numeric() |> round(2)
[1] 0.57
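The draws-based answer can be checked against the definition \(\P(X < 0 \mid Y < 1) = \P(X < 0, Y < 1) / \P(Y < 1)\). A base-R sketch using numerical integration of a hand-coded joint density (an alternative to the distributional package used above):

```r
# P(X < 0 | Y < 1) = P(X < 0, Y < 1) / P(Y < 1)
sigma_X <- 1; sigma_Y <- 2; rho <- 0.3
f_xy <- function(x, y) {
  # hand-coded bivariate normal density with mu = (0, 0)
  Q <- (x^2 / sigma_X^2 - 2 * rho * x * y / (sigma_X * sigma_Y) +
        y^2 / sigma_Y^2) / (1 - rho^2)
  exp(-Q / 2) / (2 * pi * sigma_X * sigma_Y * sqrt(1 - rho^2))
}
p_joint <- integrate(function(y) sapply(y, function(yy)
    integrate(function(x) f_xy(x, yy), -Inf, 0)$value),
  -Inf, 1)$value                        # P(X < 0, Y < 1)
p_Y <- pnorm(1, mean = 0, sd = sigma_Y) # P(Y < 1)
round(p_joint / p_Y, 2)                 # close to the 0.57 from the draws
```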

Bayes’s Rule

  • Bayes’s rule for events

\[ \P(A_j \mid B) = \frac{\P(A_j \cap B)}{\P(B)} = \frac{\P(B \mid A_j) \P(A_j)}{\sum_{i=1}^{n} \P(B \mid A_i) \P(A_i)}, \quad A_1, \ldots, A_n \text{ a partition of } \Omega \]

  • Bayes’s rule for continuous RVs

\[ \underbrace{\color{blue}{f_{\Theta|Y}(\theta \mid y)}}_{\text{Posterior}} = \frac{ \overbrace{ \color{red}{f_{Y|\Theta}(y \mid \theta)} }^{\text{Likelihood}} \cdot \overbrace{\color{green}{f_\Theta(\theta)}}^{ \text{Prior} } }{ \underbrace{\int_{-\infty}^{\infty} f_{Y|\Theta}(y \mid \theta) f_\Theta(\theta) \, \text{d}\theta}_{\text{Prior Predictive } f(y)} } \propto \color{red}{f_{Y|\Theta}(y \mid \theta)} \cdot \color{green}{f_\Theta(\theta)} \]
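A minimal grid approximation makes the “posterior \(\propto\) likelihood \(\times\) prior” statement concrete (the Beta(2, 2) prior and the Binomial data here are illustrative assumptions, not part of the slide):

```r
# Grid approximation of the posterior for a Beta(2, 2) prior
# and a Binomial likelihood with y = 7 successes in 10 trials
theta      <- seq(0.001, 0.999, length.out = 1e3)
delta      <- theta[2] - theta[1]
prior      <- dbeta(theta, 2, 2)
likelihood <- dbinom(7, size = 10, prob = theta)
posterior  <- prior * likelihood
posterior  <- posterior / sum(posterior * delta)  # normalize f(theta | y)
grid_mean  <- sum(theta * posterior * delta)
c(grid_mean, (2 + 7) / (2 + 2 + 10)) # compare to the conjugate Beta(9, 5) mean
```

The grid result matches the conjugate Beta(9, 5) posterior mean, illustrating that the normalizing constant (the prior predictive) only rescales the curve.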

Notation

  • \(\theta\): unknowns or parameters to be estimated; may be multivariate, discrete, or continuous; in your book \(\pi\)
  • \(y\): observations or measurements to be modelled (\(y_1, y_2, ...\))
  • \(\widetilde{y}\) : unobserved but observable quantities; in your book \(y'\)
  • \(x\): covariates
  • \(f( \theta )\): a prior model, P[DM]F of \(\theta\)
  • \(f(y \mid \theta, x)\): an observational model, P[DM]F when it is a function of \(y\); in your book: \(f(y \mid \pi)\)
  • \(f(y \mid \theta)\): a likelihood function when viewed as a function of \(\theta\); in your book: \(\L(\pi \mid y)\)
  • Some people write \(\L(\theta; y)\) or simply \(\L(\theta)\)
  • \(U(z)\): utility function where \(z\) can be some \(g(\theta, \tilde{y})\)
  • \(\E(U(z) \mid d)\): expected utility for decision \(d\)

Bayesian Workflow in a Nutshell

  • Specify a prior model \(f(\theta)\)
  • Pick a data model \(f(y \mid \theta, x)\), including conditional mean: \(\E(y \mid x)\)
  • Set up a prior predictive distribution \(f(y)\)
  • After observing data \(y\), we treat \(f(y \mid \theta, x)\) as the likelihood of observing \(y\) under all plausible values of \(\theta\), conditioning on \(x\) if necessary
  • Derive a posterior distribution for \(\theta\), \(\, f(\theta \mid y, x)\)
  • Evaluate model quality: 1) quality of the inferences; 2) quality of predictions; go back to the beginning
  • Compute derived quantities, such as \(\P(\theta > \alpha)\), \(f(\widetilde{y} \mid y)\), and \(d^* = \textrm{arg max}_d \ \E(U(z) \mid d)\)
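The workflow steps above can be sketched end-to-end with a conjugate Beta-Binomial model (all specific numbers here are illustrative assumptions):

```r
set.seed(1)
# 1) Prior model f(theta): theta ~ Beta(2, 2)
a <- 2; b <- 2
# 2) Data model f(y | theta): y ~ Binomial(n, theta)
n <- 20
# 3) Prior predictive f(y) by simulation
theta_prior  <- rbeta(1e4, a, b)
y_prior_pred <- rbinom(1e4, size = n, prob = theta_prior)
# 4-5) Observe y = 14; conjugacy gives the posterior Beta(a + y, b + n - y)
y <- 14
theta_post <- rbeta(1e4, a + y, b + n - y)
# 6) Posterior predictive draws f(y~ | y), used for checking predictions
y_tilde <- rbinom(1e4, size = n, prob = theta_post)
# 7) A derived quantity: P(theta > 0.5 | y)
mean(theta_post > 0.5)
```

In real problems the posterior rarely has a closed form, which is where Stan comes in; the structure of the steps stays the same.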