Introduction

In Edition 7, we address a foundational yet subtle aspect of Bayesian inference: noninformative priors. Also referred to as “vague,” “flat,” or “reference” priors, they are used when we want our prior beliefs to have minimal influence on the posterior distribution.

We’ll discuss both proper and improper priors, introduce the Jeffreys prior as a principled default, and visualize how different choices of noninformative priors behave, especially in the context of the binomial model.


1. What Are Noninformative Priors?

Noninformative priors aim to reflect a lack of prior knowledge about the parameter of interest. Ideally, they should have little to no influence on the posterior distribution, letting the data speak for themselves.

Terminology

These priors may be called:

  • Flat priors
  • Vague priors
  • Diffuse priors
  • Reference priors

2. Proper vs. Improper Priors

Proper Noninformative Priors

If the parameter \(\theta\) lies within a finite set or bounded interval, a uniform distribution can serve as a valid noninformative prior.

Example:
For \(\theta \in [0, 1]\), the uniform distribution \(\theta \sim \text{Beta}(1, 1)\) is a common choice.

Improper Priors

If \(\theta \in \mathbb{R}\), then \(p(\theta) = c\) is not integrable over the real line and is thus improper:

$$\int_{-\infty}^{\infty} c \, d\theta = \infty \quad \text{for all } c > 0$$

Still, such priors may be acceptable if the resulting posterior is proper (i.e., normalizable).

⚠️ Warning: Improper priors may yield improper posteriors — these must be handled with caution.


3. Example: Improper Prior with a Proper Posterior

Let \(Y \mid \theta \sim \mathcal{N}(\theta, 1)\) and assume the improper prior \(p(\theta) \propto 1\).

Posterior

\(p(\theta \mid y) \propto p(y \mid \theta)p(\theta) \propto \exp\left( -\frac{1}{2}(y - \theta)^2 \right)\)

This is proportional to the density of \(\mathcal{N}(y, 1)\), which is a proper distribution. Hence, the posterior is valid.
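As a quick numerical sanity check, here is a minimal R sketch (the observed value \(y = 1.3\) is purely illustrative and not from the example above). It confirms that the unnormalized posterior has a finite integral and, once normalized, matches the \(\mathcal{N}(y, 1)\) density:

```r
# Flat prior on theta, single observation y from N(theta, 1).
# y = 1.3 is an illustrative value, not taken from the original example.
y <- 1.3
unnorm_post <- function(theta) exp(-0.5 * (y - theta)^2)

# Finite normalizing constant => the posterior is proper.
const <- integrate(unnorm_post, lower = -Inf, upper = Inf)$value
all.equal(const, sqrt(2 * pi))                                    # TRUE

# After normalizing, the posterior density matches N(y, 1).
all.equal(unnorm_post(0.2) / const, dnorm(0.2, mean = y, sd = 1)) # TRUE
```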


4. Jeffreys’ Invariance Principle

A rule for constructing noninformative priors should ideally be invariant under reparameterization. That is, if \(\phi = h(\theta)\) is a one-to-one transformation, the prior obtained by applying the rule directly to \(\phi\) should agree with the one obtained by transforming \(p(\theta)\) via the change-of-variables formula:

\(p(\phi) = p(\theta) \left| \frac{d\theta}{d\phi} \right|\)

Jeffreys proposed using:

\(p(\theta) \propto \sqrt{J(\theta)}\)

where \(J(\theta)\) is the Fisher information:

\(J(\theta) = \mathbb{E} \left[ \left( \frac{d}{d\theta} \log p(y \mid \theta) \right)^2 \,\Big|\, \theta \right]\)

This construction respects the geometry of the parameter space and yields a prior that is invariant under one-to-one reparameterizations.
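A one-line check, following directly from the chain rule, shows why this rule satisfies the invariance principle: the Fisher information transforms as

$$J(\phi) = J(\theta) \left( \frac{d\theta}{d\phi} \right)^2 \quad \Longrightarrow \quad \sqrt{J(\phi)} = \sqrt{J(\theta)} \left| \frac{d\theta}{d\phi} \right|,$$

which is exactly the change-of-variables formula above, so applying Jeffreys’ rule in either parameterization gives the same prior.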


5. Example: Binomial Model and Jeffreys Prior

Let \(Y \sim \text{Binomial}(n, \theta)\). The likelihood is:

\(p(y \mid \theta) \propto \theta^y (1 - \theta)^{n - y}\)

The Fisher information is:

\(J(\theta) = \frac{n}{\theta(1 - \theta)}\)
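For completeness, this follows from the second-derivative form of the Fisher information (equivalent to the definition above under the usual regularity conditions) together with \(\mathbb{E}[y \mid \theta] = n\theta\):

$$-\frac{d^2}{d\theta^2} \log p(y \mid \theta) = \frac{y}{\theta^2} + \frac{n - y}{(1 - \theta)^2}, \qquad J(\theta) = \mathbb{E}\left[ \frac{y}{\theta^2} + \frac{n - y}{(1 - \theta)^2} \,\Big|\, \theta \right] = \frac{n}{\theta} + \frac{n}{1 - \theta} = \frac{n}{\theta(1 - \theta)}.$$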

Thus, Jeffreys prior is:

\(p(\theta) \propto \sqrt{J(\theta)} \propto \frac{1}{\sqrt{\theta(1 - \theta)}}\)

This corresponds to a Beta(0.5, 0.5) distribution, which is U-shaped and places more mass near the boundaries of \([0, 1]\).
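Because the Beta family is conjugate to the binomial likelihood, the Jeffreys prior also gives a closed-form posterior, \(\theta \mid y \sim \text{Beta}(y + 0.5,\, n - y + 0.5)\). A minimal R sketch of the update (the data \(y = 7\), \(n = 10\) are purely illustrative):

```r
# Jeffreys prior Beta(0.5, 0.5) + Binomial(n, theta) likelihood
# => posterior Beta(y + 0.5, n - y + 0.5) by conjugacy.
y <- 7; n <- 10              # illustrative data, not from the original post

a_post <- y + 0.5
b_post <- n - y + 0.5

a_post / (a_post + b_post)                 # posterior mean of theta
qbeta(c(0.025, 0.975), a_post, b_post)     # 95% equal-tailed credible interval
```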


6. Visual Comparison of Noninformative Priors

[Figure: comparison of noninformative prior densities for the binomial parameter \(\theta\) on \([0, 1]\).]
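The original figure is not reproduced here. A minimal R sketch of such a comparison, assuming the chunk plotted the uniform Beta(1, 1) prior against the Jeffreys Beta(0.5, 0.5) prior, might look like this:

```r
# Compare two noninformative priors for a binomial proportion on [0, 1]:
# the flat Beta(1, 1) prior and the U-shaped Jeffreys Beta(0.5, 0.5) prior.
theta <- seq(0.001, 0.999, length.out = 500)

plot(theta, dbeta(theta, 0.5, 0.5), type = "l", lwd = 2, col = "firebrick",
     xlab = expression(theta), ylab = "Prior density",
     main = "Noninformative priors for a binomial proportion")
lines(theta, dbeta(theta, 1, 1), lwd = 2, col = "steelblue")
legend("top", legend = c("Jeffreys: Beta(0.5, 0.5)", "Uniform: Beta(1, 1)"),
       col = c("firebrick", "steelblue"), lwd = 2, bty = "n")
```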


7. Conclusion

In this edition, we explored:

  • The rationale for noninformative priors,
  • The risks and conditions for using improper priors,
  • The motivation and calculation of the Jeffreys prior,
  • A visual comparison of different priors in the binomial setting.

Understanding noninformative priors provides a foundation for objective Bayesian analysis, especially in early stages of modeling when limited prior knowledge is available.

In the upcoming Edition 8, we continue with Noninformative Priors for Location and Scale Parameters.


Stay curious with us at 3 D Statistical Learning, where mathematics meets accessibility.

Special thanks to Dr. Dany Djeudeu for guiding this journey into the Bayesian world.