First understand the Discrete Bayes filter, which gives you a really solid foundation; everything else builds on top of it. A Kalman filter just uses Gaussians.

Instead of the histograms used with a discrete Bayes filter, we now use Gaussians to model the data.

We have two fundamental equations:
$$\bar{x}_{\mathcal N} = x_{\mathcal N} \oplus f_{x_{\mathcal N}}(\bullet) \qquad \text{(predict step)}$$
$$x_{\mathcal N} = L \otimes \bar{x}_{\mathcal N} \qquad \text{(update step)}$$

$\oplus$ is Gaussian addition and $\otimes$ is Gaussian multiplication (see Gaussian Operator)

The subscript $\mathcal N$ in $x_{\mathcal N}$ indicates that the variable is a Gaussian
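As a sketch of these two operators (my own illustration, not the book's code): a Gaussian is just a `(mean, variance)` pair, $\oplus$ sums means and variances, and $\otimes$ multiplies the two densities and renormalizes back to a Gaussian:

```python
# Sketch of the ⊕ / ⊗ operators on Gaussians stored as (mean, variance).
# The function names are my own, not from any particular library.

def gaussian_add(g, h):
    """⊕: sum of independent Gaussians -> means and variances add."""
    return (g[0] + h[0], g[1] + h[1])

def gaussian_multiply(g, h):
    """⊗: product of two Gaussian densities, renormalized to a Gaussian."""
    mean = (g[1] * h[0] + h[1] * g[0]) / (g[1] + h[1])
    variance = (g[1] * h[1]) / (g[1] + h[1])
    return (mean, variance)

# predict: x̄ = x ⊕ f_x   (process moves us by 1.0 with variance 0.5)
prior = gaussian_add((10.0, 1.0), (1.0, 0.5))      # -> (11.0, 1.5)
# update: x = L ⊗ x̄      (measurement z = 12.0 with variance 1.5)
posterior = gaussian_multiply((12.0, 1.5), prior)  # -> (11.5, 0.75)
```

Note that the posterior variance is smaller than either input variance: multiplying Gaussians always sharpens the estimate, while adding them always widens it.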

These are the basic building blocks. However, this notation is rarely used in practice; below, we convert to the conventional Kalman filter notation.

In the literature, this is the notation (an alternative but equivalent implementation). The Kalman gain $K$ tells us how much we trust the process vs. the measurement:
$$K=\frac{\bar P}{\bar P+R}$$

**Predict**
$$\begin{array}{|l|l|l|}
\hline
\text{Univariate} & \text{Univariate} & \text{Multivariate}\\
& \text{(Kalman form)} & \\
\hline
\bar \mu = \mu + \mu_{f_x} & \bar x = x + dx & \bar{\mathbf x} = \mathbf{Fx} + \mathbf{Bu}\\
\bar\sigma^2 = \sigma_x^2 + \sigma_{f_x}^2 & \bar P = P + Q & \bar{\mathbf P} = \mathbf{FPF}^\mathsf T + \mathbf Q \\
\hline
\end{array}$$
- $\mathbf x,\, \mathbf P$ are the state mean and covariance
    - They correspond to $x$ and $\sigma^2$
- $\mathbf F$ is the state transition function
    - Corresponds to $f_x$
    - When multiplied by $\mathbf x$ it computes the prior
- $\mathbf Q$ is the process covariance
    - Corresponds to $\sigma^2_{f_x}$
- $\mathbf B$ and $\mathbf u$ model control inputs to the system
> [!note] Addressing discrepancies
> In the univariate case we never introduced a control input $Bu$, so without control the multivariate predict is just $\bar{\mathbf x} = \mathbf{Fx}$, which is fine. I initially thought the control input was folded into $\mathbf F$, but no: the prediction has two separate contributions, the state transition $\mathbf{Fx}$ and the control $\mathbf{Bu}$.
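A minimal NumPy sketch of the multivariate predict step, using a hypothetical constant-velocity model with an acceleration control input (all numbers are made up for illustration):

```python
import numpy as np

dt = 1.0
x = np.array([[0.0], [1.0]])          # state: [position, velocity]
P = np.diag([1.0, 1.0])               # state covariance

F = np.array([[1.0, dt],              # state transition: pos += vel*dt
              [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])   # control model for an acceleration input
u = np.array([[2.0]])                 # commanded acceleration
Q = np.diag([0.01, 0.01])             # process noise covariance

# predict: x̄ = Fx + Bu,  P̄ = FPFᵀ + Q
x_prior = F @ x + B @ u               # -> [[2.0], [3.0]]
P_prior = F @ P @ F.T + Q             # -> [[2.01, 1.0], [1.0, 1.01]]
```

With `u` set to zero this collapses to $\bar{\mathbf x} = \mathbf{Fx}$, matching the univariate case that had no control term.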
**Update**
$$\begin{array}{|l|l|l|}
\hline
\text{Univariate} & \text{Univariate} & \text{Multivariate}\\
& \text{(Kalman form)} & \\
\hline
& y = z - \bar x & \mathbf y = \mathbf z - \mathbf{H\bar x} \\
& K = \frac{\bar P}{\bar P+R}&
\mathbf K = \mathbf{\bar{P}H}^\mathsf T (\mathbf{H\bar{P}H}^\mathsf T + \mathbf R)^{-1} \\
\mu=\frac{\bar\sigma^2\, \mu_z + \sigma_z^2 \, \bar\mu} {\bar\sigma^2 + \sigma_z^2} & x = \bar x + Ky & \mathbf x = \bar{\mathbf x} + \mathbf{Ky} \\
\sigma^2 = \frac{\bar\sigma^2\,\sigma_z^2}{\bar\sigma^2+\sigma_z^2} & P = (1-K)\bar P &
\mathbf P = (\mathbf I - \mathbf{KH})\mathbf{\bar{P}} \\
\hline
\end{array}$$
- $\mathbf H$ is the measurement function
- $\mathbf z,\, \mathbf R$ are the measurement mean and noise covariance
    - Correspond to $z$ and $\sigma_z^2$ in the univariate filter
- $\mathbf y$ and $\mathbf K$ are the residual and Kalman gain
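A minimal NumPy sketch of the update equations, assuming a prior from some predict step and a sensor that observes position only (illustrative numbers, my own choices):

```python
import numpy as np

# prior from a predict step (made-up numbers for illustration)
x_prior = np.array([[2.0], [3.0]])
P_prior = np.array([[2.01, 1.0], [1.0, 1.01]])

H = np.array([[1.0, 0.0]])    # measurement function: observe position only
R = np.array([[1.0]])         # measurement noise covariance
z = np.array([[2.5]])         # measurement

# update: y = z - Hx̄,  K = P̄Hᵀ(HP̄Hᵀ + R)⁻¹,  x = x̄ + Ky,  P = (I - KH)P̄
y = z - H @ x_prior                    # residual
S = H @ P_prior @ H.T + R              # innovation covariance
K = P_prior @ H.T @ np.linalg.inv(S)   # Kalman gain
x = x_prior + K @ y                    # posterior mean
P = (np.eye(2) - K @ H) @ P_prior      # posterior covariance
```

Even though only position is measured, the velocity estimate also shifts, because the cross terms of $\bar{\mathbf P}$ feed into $\mathbf K$.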
> [!important] Your job as a designer
> Your job as a designer will be to design the state $\left(\mathbf x, \mathbf P\right)$, the process $\left(\mathbf F, \mathbf Q\right)$, the measurement $\left(\mathbf z, \mathbf R\right)$, and the measurement function $\mathbf H$. If the system has control inputs, such as a robot, you will also design $\mathbf B$ and $\mathbf u$.
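Putting the designer's pieces together: a toy pure-NumPy filter that tracks a 1-D position from noisy measurements. Every matrix below is my own illustrative design choice, not a prescribed one:

```python
import numpy as np

# Designer's choices for tracking 1-D position (illustrative values)
dt = 1.0
x = np.array([[0.0], [0.0]])              # state: position, velocity
P = np.diag([500.0, 49.0])                # large initial uncertainty
F = np.array([[1.0, dt], [0.0, 1.0]])     # process: constant velocity
Q = np.diag([0.01, 0.01])                 # process noise
H = np.array([[1.0, 0.0]])                # sensor measures position only
R = np.array([[5.0]])                     # measurement noise

zs = [1.0, 2.1, 2.9, 4.2, 5.0]            # fake measurements, roughly slope 1
for z in zs:
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    y = np.array([[z]]) - H @ x
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

# after a few steps the filter tracks position near 5 and velocity near 1,
# and P has shrunk far below its initial values
```

The large initial `P` tells the filter to trust early measurements over the (arbitrary) initial state, so it converges quickly even from a bad starting guess.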
### CS287
I was first learning about Kalman filters back in around Grade 9, when I was trying to build a self-balancing [[notes/Rocket Propulsion|Rocket]], and I watched through quite a few lectures.
They used these things in rockets! Actually, Kalman filters are used everywhere: self-driving cars, drones; everything I am interested in uses the Kalman filter.
From [[notes/CS287|CS287]]: a Kalman filter is just a special case of the [[notes/Bayes Filter|Bayes Filter]] where the dynamics and sensor models are linear with [[notes/Normal Distribution|Gaussian]] noise.
![[attachments/Screen Shot 2022-12-31 at 3.13.18 AM.png]]
### Kalman Filter from Visual SLAM book
Notation also used in [[notes/Visual SLAM|Visual SLAM]].
- $u$ is the motion data
- $z$ is the observation data
- $x$ is the state
- $y$ are the landmarks
### Related
- [[notes/Extended Kalman Filter|Extended Kalman Filter]]