Introduction

Welcome to the 10th Edition of Making Bayesian Statistics Accessible to Everyone.

This edition contains 25 multiple-choice and true/false questions that test your understanding of the key concepts, computations, interpretations, and applications presented in Editions 1 through 8.

You will test your skills in:

  • Prior/posterior distributions

  • Conjugacy

  • Bayesian estimation

  • Credible intervals

  • Empirical Bayes

  • Jeffreys priors

  • Beta-binomial and other conjugate models

  • Noninformative priors for location/scale

Each multiple-choice question offers four options, exactly one of which is correct. Answers and explanations are provided at the end.


Quiz Questions

Multiple Choice Questions (1–15)

Q1. Which of the following statements best describes a conjugate prior?

A. A prior that leads to a posterior of the same functional form.

B. A prior that is always noninformative.

C. A prior that maximizes the likelihood.

D. A prior that is uniform over the parameter space.

Q2. Given \(Y \sim Bin(n, \theta)\) and \(\theta \sim Beta(\alpha, \beta)\), what is the posterior distribution of \(\theta | Y\)?

A. \(Beta(n+\alpha, \beta)\)

B. \(Beta(Y+\alpha, n-Y+\beta)\)

C. \(Gamma(Y+\alpha, \beta+n)\)

D. \(Beta(\alpha-Y, \beta+n+Y)\)

Q3. In a Beta-Binomial model, the mean of the \(Beta(\alpha, \beta)\) distribution is:

A. \(\alpha - \beta\)

B. \(\displaystyle \frac{\alpha}{\alpha + \beta}\)

C. \(\displaystyle \frac{\alpha + \beta}{\alpha}\)

D. \(\alpha + \beta\)

Q4. Which statement about Jeffreys prior is true?

A. It is always proper.

B. It is invariant under reparameterization.

C. It requires expert knowledge.

D. It is only used for binomial models.

Q5. What is the Jeffreys prior for a scale parameter \(\theta\)?

A. \(p(\theta) \propto \theta\)

B. \(p(\theta) \propto \displaystyle \frac{1}{\theta^2}\)

C. \(p(\theta) \propto \displaystyle \frac{1}{\theta}\)

D. \(p(\theta) \propto \sqrt{\theta}\)

Q6. For the Normal-Normal model \(Y \sim N(\theta, \sigma^2)\) with known \(\sigma^2\) and prior \(\theta \sim N(\mu, \tau^2)\), what is the posterior mean?

A. \(\bar{y}\)

B. \(\mu\)

C. Weighted average of \(\mu\) and \(y\)

D. \(\displaystyle \frac{\sigma^2}{\tau^2}\)

Q7. In the Bayesian approach, the posterior distribution is proportional to:

A. Prior + Likelihood

B. Prior / Likelihood

C. Prior × Likelihood

D. Prior – Likelihood

Q8. What is the main difference between credible and confidence intervals?

A. Credible intervals depend only on the data.

B. Confidence intervals are probability statements about parameters.

C. Credible intervals make probability statements about parameters.

D. They are identical under all conditions.

Q9. The prior \(Beta(1,1)\) represents:

A. A noninformative prior over [0,1]

B. A uniform prior over integers

C. A conjugate prior for Poisson

D. An improper prior

Q10. What happens to the influence of the prior as the sample size increases?

A. It increases

B. It stays the same

C. It becomes dominant

D. It decreases

Q11. Which of the following is not a conjugate model?

A. Normal-Normal

B. Poisson-Gamma

C. Multinomial-Dirichlet

D. Normal-Exponential

Q12. Which of the following is true regarding improper priors?

A. They always lead to improper posteriors.

B. They cannot be used in Bayesian analysis.

C. They are valid if the posterior is proper.

D. They are only used for multivariate models.

Q13. In empirical Bayes, the prior is:

A. Fully specified by expert opinion.

B. Chosen from noninformative families.

C. Estimated from the data.

D. Invariant to sample size.

Q14. The marginal distribution of \(Y\) in the Beta-Binomial model is:

A. Binomial

B. Gamma

C. Beta-Binomial

D. Uniform

Q15. When using a uniform prior over \([a,b]\), the posterior is zero outside:

A. \([a,b]\)

B. \([0,1]\)

C. All real numbers

D. Positive integers


True/False Questions (16–25)

Q16. The Beta(0.5,0.5) prior puts more weight at the extremes 0 and 1.
True / False

Q17. The posterior mean is always between the prior mean and the MLE.
True / False

Q18. Noninformative priors always lead to unbiased posteriors.
True / False

Q19. In a Jeffreys prior, the Fisher information plays a central role.
True / False

Q20. The variance of the Beta-Binomial distribution is always smaller than that of the corresponding Binomial.
True / False

Q21. The Normal-Inverse-Gamma is a conjugate prior for a Normal likelihood with unknown mean and variance.
True / False

Q22. An improper prior can never be used in a real Bayesian application.
True / False

Q23. The posterior distribution for \(\theta\) in the Exponential-Gamma model is Gamma.
True / False

Q24. The prior \(p(\theta) \propto 1\) is noninformative for location parameters.
True / False

Q25. Posterior variance always decreases with increasing sample size.
True / False


Answer Key and Explanations

Multiple Choice (1–15)

Q#   Answer   Explanation
1    A        Conjugate priors yield posteriors in the same family as the prior.
2    B        Posterior: \(Beta(y+\alpha,\ n-y+\beta)\).
3    B        The mean of \(Beta(\alpha, \beta)\) is \(\alpha/(\alpha+\beta)\).
4    B        Jeffreys priors are invariant under reparameterization.
5    C        The Jeffreys prior for a scale parameter is \(\propto 1/\theta\).
6    C        The posterior mean is a precision-weighted average of the prior mean and the data.
7    C        Posterior \(\propto\) Prior × Likelihood.
8    C        Credible intervals make direct probability statements about parameters.
9    A        \(Beta(1,1)\) is Uniform(0,1), a common noninformative prior.
10   D        The prior's influence decreases as the data dominate.
11   D        Normal-Exponential is not a conjugate pair.
12   C        Improper priors are acceptable if the posterior is proper.
13   C        Empirical Bayes estimates the prior from the data.
14   C        The marginal is a Beta-Binomial distribution.
15   A        The posterior is zero outside the support of the prior.
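If Q2, Q3, or Q8 gave you trouble, a quick numerical check can help. The sketch below is a minimal example, assuming SciPy is installed and using invented data (n = 20 trials, y = 14 successes) with a Beta(2, 2) prior: it performs the conjugate update, reads off the posterior mean, and computes a 95% equal-tailed credible interval, which is a direct probability statement about \(\theta\).

```python
# Minimal check of Q2, Q3, and Q8; data and prior are invented for illustration.
from scipy import stats

alpha, beta_ = 2, 2          # prior hyperparameters: theta ~ Beta(2, 2)
n, y = 20, 14                # hypothetical data: 14 successes in 20 trials

# Q2: conjugate update -- the posterior is Beta(y + alpha, n - y + beta)
post = stats.beta(y + alpha, n - y + beta_)

# Q3: posterior mean is (y + alpha) / (n + alpha + beta) = 16/24
print("posterior mean:", post.mean())

# Q8: a 95% equal-tailed credible interval, i.e., a probability statement
# about theta given the observed data
lo, hi = post.ppf(0.025), post.ppf(0.975)
print(f"95% credible interval: ({lo:.3f}, {hi:.3f})")
```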
True/False (16–25)

Q#   Answer   Explanation
16   True     Beta(0.5, 0.5) has density rising toward 0 and 1, putting extra weight at the extremes.
17   False    It holds in many conjugate models, but it is not guaranteed in general.
18   False    Bias can still occur, depending on the data and the prior.
19   True     The Jeffreys prior is proportional to the square root of the Fisher information.
20   False    The Beta-Binomial variance is larger; the model is overdispersed relative to the Binomial.
21   True     It is the standard conjugate prior for a Normal likelihood with unknown mean and variance.
22   False    Improper priors are often used in practice (e.g., Jeffreys priors), provided the posterior is proper.
23   True     The Gamma prior is conjugate to the Exponential likelihood, so the posterior is Gamma.
24   True     A flat prior is the standard noninformative choice for a location parameter.
25   False    Not guaranteed; it depends on the prior and the model structure.
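To see Q6 and Q10 in action, here is a minimal pure-Python sketch, with all numbers illustrative, for n i.i.d. observations with known variance: the posterior mean is a precision-weighted average of the prior mean \(\mu\) and the sample mean \(\bar{y}\), and as n grows the weight on the prior, along with the posterior variance, shrinks toward zero.

```python
# Normal-Normal posterior mean as a precision-weighted average (Q6, Q10).
# All numbers below are illustrative.
mu, tau2 = 0.0, 1.0          # prior: theta ~ N(mu, tau2)
sigma2 = 4.0                 # known sampling variance
ybar = 2.5                   # hypothetical sample mean

for n in (1, 10, 100, 1000):
    prior_prec = 1 / tau2                        # precision of the prior
    data_prec = n / sigma2                       # precision contributed by the data
    w = prior_prec / (prior_prec + data_prec)    # weight on the prior mean
    post_mean = w * mu + (1 - w) * ybar          # precision-weighted average (Q6)
    post_var = 1 / (prior_prec + data_prec)      # shrinks as n grows (Q10)
    print(f"n={n:5d}  prior weight={w:.4f}  "
          f"posterior mean={post_mean:.4f}  posterior var={post_var:.5f}")
```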

Well done! If you’ve mastered this quiz, you’re well on your way to thinking like a Bayesian.
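One last concrete picture before you go: Q13's empirical-Bayes idea. The sketch below, with all numbers invented, fits Beta prior hyperparameters to a set of group-level success proportions by matching moments; a real analysis would account for binomial sampling noise or use marginal maximum likelihood, so treat this as a rough illustration only.

```python
# Empirical Bayes (Q13): estimate the Beta prior from the data itself via a
# crude method-of-moments fit. The proportions below are made up, and the fit
# ignores binomial sampling noise.
props = [0.62, 0.55, 0.71, 0.48, 0.66, 0.59]    # hypothetical group proportions

m = sum(props) / len(props)                               # mean of proportions
v = sum((p - m) ** 2 for p in props) / (len(props) - 1)   # sample variance

# Match the Beta mean m = a/(a+b) and variance v = m(1-m)/(a+b+1)
common = m * (1 - m) / v - 1
alpha_hat, beta_hat = m * common, (1 - m) * common
print(f"estimated prior: Beta({alpha_hat:.2f}, {beta_hat:.2f})")
```

Estimating the prior from the data in this way is exactly the bridge to the hierarchical models previewed below.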

For deeper dives, return to Editions 1–8 and stay tuned for our upcoming series on hierarchical models.

Stay engaged with 3D Statistical Learning as we continue making Bayesian thinking second nature!