# Bayes’ Theorem

Bayes’ Rule combines the Law of Total Probability (LOTP) with the definition of conditional probability.

Bayes’ theorem describes the probability of an event based on prior knowledge of conditions that might be related to the event.

Let $B_{1},\ldots,B_{n}$ be a partition of $S$ and $A$ be any event. Then

$$P(B_{i}\mid A)=\frac{P(A\mid B_{i})\,P(B_{i})}{P(A)}=\frac{P(A\mid B_{i})\,P(B_{i})}{\sum_{k=1}^{n}P(A\mid B_{k})\,P(B_{k})}$$

- $P(B_{i})$ → Prior probability
- $P(B_{i}\mid A)$ → Posterior probability
- $P(A\mid B_{i})$ → Likelihood

How is Bayes’ rule derived? Apply the definition of conditional probability, and leverage the fact that intersection is commutative:

$$P(B_{i}\cap A)=P(A\mid B_{i})\,P(B_{i})=P(B_{i}\mid A)\,P(A)$$

Dividing both sides by $P(A)$ gives

$$P(B_{i}\mid A)=\frac{P(B_{i}\cap A)}{P(A)}=\frac{P(A\mid B_{i})\,P(B_{i})}{P(A)}$$
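The commutativity step can be checked numerically on a small (made-up) joint distribution — both factorizations of $P(B\cap A)$ agree:

```python
# Tiny numeric check of the derivation above on a hypothetical
# 2x2 joint distribution (all numbers are made up for illustration).
joint = {("A", "B"): 0.12, ("A", "notB"): 0.28,
         ("notA", "B"): 0.18, ("notA", "notB"): 0.42}

p_a = joint[("A", "B")] + joint[("A", "notB")]   # marginal P(A)
p_b = joint[("A", "B")] + joint[("notA", "B")]   # marginal P(B)
p_a_and_b = joint[("A", "B")]                    # joint P(A and B)

p_a_given_b = p_a_and_b / p_b   # conditional P(A | B)
p_b_given_a = p_a_and_b / p_a   # conditional P(B | A)

# Both factorizations recover the same joint probability:
assert abs(p_a_given_b * p_b - p_a_and_b) < 1e-12
assert abs(p_b_given_a * p_a - p_a_and_b) < 1e-12
```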

Example: one of two coins (one fair, one biased) is chosen and tossed 3 times; all three tosses come up heads. What is the probability the fair coin was chosen?

Solution: let

- $B_{1}=$ the fair coin was chosen
- $B_{2}=$ the biased coin was chosen
- $A=$ 3 heads observed in 3 tosses

We want to find $P(B_{1}\mid A)$, so we can use Bayes’ Theorem, computing the denominator via the LOTP: $P(A)=P(A\mid B_{1})P(B_{1})+P(A\mid B_{2})P(B_{2})$
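A sketch of the computation, with assumed numbers since the notes don't specify them (priors of 1/2 each and a biased coin with $P(\text{heads})=0.9$ are hypothetical):

```python
# Worked coin example. The priors and the bias are assumptions,
# not from the original problem statement.
p_b1 = 0.5                 # prior: fair coin chosen
p_b2 = 0.5                 # prior: biased coin chosen
p_a_given_b1 = 0.5 ** 3    # fair coin: P(3 heads in 3 tosses) = 1/8
p_a_given_b2 = 0.9 ** 3    # assumed biased coin with P(heads) = 0.9

# Denominator via the law of total probability
p_a = p_a_given_b1 * p_b1 + p_a_given_b2 * p_b2

# Bayes' theorem: posterior probability the fair coin was chosen
p_b1_given_a = p_a_given_b1 * p_b1 / p_a
print(p_b1_given_a)  # well below 1/2: three heads favor the biased coin
```

Seeing three heads drops the fair coin's probability from the 0.5 prior to about 0.15 under these assumed numbers.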

### Relation to more advanced control theory / ML

This is why we say (from Kalman Filter in Python):

$$\text{Posterior}=\frac{\text{Likelihood}\times\text{Prior}}{\text{Normalization}}$$

- This is at the core of the Bayes filter update

You just continuously update the posterior, setting the prior to the old posterior each time.

See Kalman Filter.
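A minimal discrete Bayes-filter update sketch of that loop (the two-state space and the measurement likelihoods are toy assumptions, not from the text):

```python
def bayes_update(prior, likelihood):
    """One Bayes filter update: posterior ∝ likelihood × prior, normalized."""
    unnormalized = [l * p for l, p in zip(likelihood, prior)]
    total = sum(unnormalized)          # normalization constant P(measurement)
    return [u / total for u in unnormalized]

# Two-state example: belief over {state0, state1}
belief = [0.5, 0.5]                      # initial prior
measurements = [[0.9, 0.2], [0.8, 0.3]]  # assumed P(z | state) per measurement

for likelihood in measurements:
    belief = bayes_update(belief, likelihood)  # old posterior becomes new prior

print(belief)  # belief concentrates on state0, which fits both measurements
```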

Bayes’ theorem is super useful because it turns a hard problem into an easy one.

Hard problems:

- P(Cancer = True | Test = Positive)
- P(Rain = True | Readings)

Stated like that, the problems seem unsolvable: there is no direct way to measure these conditional probabilities.

Easy problems:

- P(Test = Positive | Cancer = True)
- P(Readings | Rain = True)

Bayes’ Theorem lets us solve the hard problem by solving the easy problem.
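For the cancer example, the "easy" quantities are the test's sensitivity and false-positive rate plus the base rate; a sketch with hypothetical numbers (all three values are made up for illustration):

```python
# Hypothetical screening numbers -- none of these come from the notes.
p_cancer = 0.01              # prior: base rate P(Cancer = True)
p_pos_given_cancer = 0.9     # "easy" quantity: P(Test = Positive | Cancer = True)
p_pos_given_healthy = 0.05   # false-positive rate

# Denominator via the law of total probability
p_pos = (p_pos_given_cancer * p_cancer
         + p_pos_given_healthy * (1 - p_cancer))

# The "hard" quantity, P(Cancer = True | Test = Positive), via Bayes' theorem
p_cancer_given_pos = p_pos_given_cancer * p_cancer / p_pos
print(p_cancer_given_pos)  # small despite the accurate test: base rate dominates
```

With these numbers the posterior is only about 0.15, a classic illustration of how a low base rate tempers a positive test.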

### Bayes rule with Conditioning

From CS287:

$$P(x\mid y,z)=\frac{P(y\mid x,z)\,P(x\mid z)}{P(y\mid z)}$$
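This conditioned form can be sanity-checked numerically on a made-up joint distribution over three binary variables (all probabilities below are hypothetical):

```python
# Hypothetical joint P(x, y, z) over binary variables; probabilities sum to 1.
vals = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
probs = [0.05, 0.10, 0.15, 0.05, 0.20, 0.10, 0.25, 0.10]
joint = dict(zip(vals, probs))

def marg(**fixed):
    """Sum the joint over all assignments consistent with `fixed`."""
    return sum(p for (x, y, z), p in joint.items()
               if all({"x": x, "y": y, "z": z}[k] == v for k, v in fixed.items()))

# Check P(x | y, z) == P(y | x, z) * P(x | z) / P(y | z) at x=1, y=1, z=1
lhs = marg(x=1, y=1, z=1) / marg(y=1, z=1)

p_y_given_xz = marg(x=1, y=1, z=1) / marg(x=1, z=1)
p_x_given_z = marg(x=1, z=1) / marg(z=1)
p_y_given_z = marg(y=1, z=1) / marg(z=1)
rhs = p_y_given_xz * p_x_given_z / p_y_given_z

assert abs(lhs - rhs) < 1e-12
```

In words: ordinary Bayes' rule still holds with everything conditioned on $z$, since $z$ just restricts the sample space.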