Introduction: What’s at Stake?

Imagine you have to make a decision, but you don’t know exactly what the true state of the world is. You’re stuck in a game against uncertainty, and you’d like to avoid regret.

Bayesian decision theory helps by using prior beliefs. But what if you don’t trust your prior? What if you want to guard against the worst-case scenario?

That’s where minimax rules come in.


What Is a Minimax Rule?

A minimax rule is a decision rule that minimizes your maximum possible risk.

Think of it as:

“Choose the rule that does best when things go worst.”

Instead of trusting a specific prior distribution like in Bayesian analysis, minimax rules ask:

“What’s the worst-case loss this rule could lead to, over all possible truths?”

Then it picks the rule that keeps that worst-case loss as small as possible.


A Simple Example

Let’s reuse a decision problem we’ve seen earlier:

  • Parameter space: \(\Theta = \{0, 1\}\)

  • Action space: \(\mathcal{A} = \{0, 1\}\)

  • Loss function: 0–1 loss (you lose 1 if your action is wrong)

We observe \(X \in \{0,1\}\) with:

  • \(P(X = 1 \mid \theta = 0) = 0.3\)

  • \(P(X = 1 \mid \theta = 1) = 0.9\)

You can define a decision rule \(\delta\) by saying what to do for \(X = 0\) and \(X = 1\).
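With only two observations and two actions, there are just four deterministic rules, so we can tabulate each rule's risk directly. A minimal sketch in Python, using the probabilities given above:

```python
from itertools import product

# Sampling distribution: P(X = 1 | theta)
p_x1 = {0: 0.3, 1: 0.9}

def risk(rule, theta):
    """Risk under 0-1 loss: probability the rule's action differs from theta."""
    # rule is a dict mapping observation x -> action
    r = 0.0
    for x in (0, 1):
        p_x = p_x1[theta] if x == 1 else 1 - p_x1[theta]
        if rule[x] != theta:
            r += p_x
    return r

# Enumerate all four deterministic rules delta: {0,1} -> {0,1}.
# The rule delta(x) = x attains the smallest worst-case risk (0.3)
# among the deterministic rules.
for a0, a1 in product((0, 1), repeat=2):
    rule = {0: a0, 1: a1}
    risks = [risk(rule, th) for th in (0, 1)]
    print(f"delta(0)={a0}, delta(1)={a1}: "
          f"R(theta=0)={risks[0]:.2f}, R(theta=1)={risks[1]:.2f}, "
          f"max={max(risks):.2f}")
```

Listing the risk pairs this way makes the worst-case comparison in the next step concrete.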

Let’s suppose you consider the class of Bayes rules based on priors:

$$
\pi_w(\theta = 1) = w, \quad \pi_w(\theta = 0) = 1 - w, \quad \text{with } w \in [0, 1]
$$

Now ask:
→ As \(w\) varies, how does the maximum risk of the Bayes rule behave?
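One way to probe this question numerically is to compute the Bayes (MAP) rule for several priors and record its worst-case risk. A sketch, assuming the setup above; the tie-breaking convention at the decision boundary is an arbitrary choice:

```python
P_X1 = {0: 0.3, 1: 0.9}  # P(X = 1 | theta)

def bayes_rule(w):
    """Bayes (MAP) rule for prior P(theta=1) = w under 0-1 loss.
    Ties are broken in favor of action 1 (an arbitrary convention)."""
    rule = {}
    for x in (0, 1):
        # Unnormalized posterior weights for theta = 0 and theta = 1
        post0 = (1 - w) * (P_X1[0] if x == 1 else 1 - P_X1[0])
        post1 = w * (P_X1[1] if x == 1 else 1 - P_X1[1])
        rule[x] = 1 if post1 >= post0 else 0
    return rule

def risk(rule, theta):
    """Risk under 0-1 loss: probability of choosing the wrong action."""
    return sum((P_X1[theta] if x == 1 else 1 - P_X1[theta])
               for x in (0, 1) if rule[x] != theta)

# For moderate w the Bayes rule is delta(x) = x with max risk 0.3;
# for extreme w it ignores the data and the max risk jumps to 1.
for w in (0.1, 0.25, 0.5, 0.9):
    rule = bayes_rule(w)
    max_risk = max(risk(rule, th) for th in (0, 1))
    print(f"w={w}: rule={rule}, max risk={max_risk:.2f}")
```

The worst-case risk is flat over a wide middle range of priors and degrades sharply once the prior is extreme enough to override the data.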


Bayes Risk vs. Max Risk

For a given prior \(w\), we can compute:

  • The Bayes risk \(r(\delta_w)\): your average risk, assuming \(w\) is the correct prior.

  • The maximum risk \(R_{\max}(\delta_w)\): your worst-case risk across \(\theta\).

Within this class, the minimax rule is the \(\delta_w\) whose maximum risk is as small as possible.

In this example, we find that the minimax rule corresponds to a Bayes rule for a special prior \(w^* = 0.25\), the one that makes both risks equal (achieving this requires randomizing the action when the posterior is tied):

$$
R(\delta_{w^*}, \theta = 0) = R(\delta_{w^*}, \theta = 1)
$$

This is called a least favorable prior, because it maximizes the Bayes risk — and yet still defines the minimax rule.
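We can check both claims numerically: the Bayes risk, as a function of \(w\), peaks at the least favorable prior, and randomizing the Bayes rule at that prior equalizes the two risks. A sketch under the probabilities above; the closed-form average risks of the four deterministic rules are worked out in the comments:

```python
# Average (Bayes) risk of each deterministic rule under prior P(theta=1) = w:
#   delta == 0     : w                    (wrong only when theta = 1)
#   delta(x) = x   : 0.3*(1-w) + 0.1*w
#   delta(x) = 1-x : 0.7*(1-w) + 0.9*w
#   delta == 1     : 1 - w                (wrong only when theta = 0)
def bayes_risk(w):
    """Bayes risk = smallest average risk achievable against prior w."""
    return min(w, 0.3 * (1 - w) + 0.1 * w, 0.7 * (1 - w) + 0.9 * w, 1 - w)

# The least favorable prior maximizes the Bayes risk over w.
grid = [i / 10000 for i in range(10001)]
w_star = max(grid, key=bayes_risk)
print(w_star, round(bayes_risk(w_star), 10))  # peak at w* = 0.25, value 0.25

# At w*, randomizing the action at X = 1 (choose action 1 with
# probability gamma = 5/6) equalizes the two risks at the minimax value.
gamma = 5 / 6
R_theta0 = 0.3 * gamma               # wrong iff X = 1 and we guess 1
R_theta1 = 0.1 + 0.9 * (1 - gamma)   # wrong iff X = 0, or X = 1 and we guess 0
print(round(R_theta0, 10), round(R_theta1, 10))  # 0.25 0.25, both below 0.3
```

The equalized worst-case risk of 0.25 beats the best deterministic rule's worst case of 0.3, which is why randomization matters here.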


Why This Matters

  • The Bayesian approach is great when you believe your prior.

  • The minimax approach is cautious: it avoids putting all your trust in any one prior.

Minimax rules are helpful when:

  • You want a conservative strategy,

  • You face adversarial or uncertain environments,

  • You want robust decisions even when priors are questionable.


Intuition Through Analogy

Imagine you’re building a bridge and choosing between materials. You don’t know exactly how strong the future winds will be, but you know the possible range.

  • The Bayes engineer builds assuming the most likely wind.

  • The Minimax engineer builds for the strongest possible wind.

Both are valid, but serve different goals.


Summary

| Rule Type | Trusts Prior? | Strategy | Best When… |
| --- | --- | --- | --- |
| Bayes | Yes | Minimize average (expected) loss | You have reliable prior beliefs |
| Minimax | No | Minimize worst-case loss | You want robustness or fear prior misspecification |

Next Steps

In the next edition, we’ll explore how to evaluate and compare decision rules using risk functions and admissibility. We’ll also see how Bayes and Minimax rules can sometimes agree, and when they definitely don’t.

Stay tuned for the next part!

We gratefully acknowledge Dr. Dany Djeudeu for preparing this course.