# Expected Value

In probability theory, the expected value (also called expectation, expectancy, mathematical expectation, mean, average, or first moment) is a generalization of the weighted average.

```
title: Expected Value (Definition)
Let $X$ be a discrete [[Random Variable]] with [[Probability Mass Function|p.m.f.]] $f(x)$. Then the expected value $E[X]$ is
$$\mu = E[X] = \sum\limits_{x \in X} x \cdot f(x)$$
For a continuous [[Random Variable]] with p.d.f. $f(x)$, we have
$$\mu = E[X] = \int_{-\infty}^{\infty} x \cdot f(x)\,dx$$
```
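As a quick sanity check on the discrete definition, here is a minimal Python sketch (the p.m.f. values are made up for illustration):

```python
# Minimal sketch: E[X] for a discrete random variable.
# The p.m.f. below is a hypothetical example, not from the note.
pmf = {0: 0.2, 1: 0.5, 2: 0.3}  # f(x) for each x in the support

# E[X] = sum over x of x * f(x)
expected_value = sum(x * p for x, p in pmf.items())
print(expected_value)  # 0*0.2 + 1*0.5 + 2*0.3 = 1.1
```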

Informally, the expected value is the arithmetic mean of a large number of independently selected outcomes of a random variable.

When we take the expectation of a function of $X$, only the $x$ term changes; $f(x)$ stays the same (see property 2 below). For example, $E(X^{2})=\int_{-\infty}^{\infty} x^{2}\, f(x)\,dx$.

**Properties**

- For any constants $\alpha$ and $\beta$, $E(\alpha X+\beta Y)=\alpha E(X)+\beta E(Y)$ (linearity)
- $E(g(X))=\sum_{x\in X}g(x)\cdot f_{X}(x)$, where $f_{X}(x)$ is the p.m.f. of $X$
    - For example, if you have a p.m.f. $f(x)$ where
        - $f(0) = 0.2$
        - $f(1) = 0.5$
        - $f(2) = 0.3$
    - To find $E(X^{2})$, you compute $0^{2}\cdot 0.2+1^{2}\cdot 0.5+2^{2}\cdot 0.3=1.7$ (notice how the p.m.f. doesn't change)
    - Another example: $E(\ln X)=\sum_{x\in X}\ln(x)\, f(x)$

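The worked example above can be checked in a few lines of Python (same assumed p.m.f.; the `expectation` helper is just for illustration):

```python
import math

# p.m.f. from the example above
pmf = {0: 0.2, 1: 0.5, 2: 0.3}

def expectation(g, pmf):
    """E[g(X)] = sum of g(x) * f(x) over the support -- f never changes."""
    return sum(g(x) * p for x, p in pmf.items())

e_x2 = expectation(lambda x: x ** 2, pmf)
print(e_x2)  # 0^2*0.2 + 1^2*0.5 + 2^2*0.3 = 1.7

# E[ln X]: ln(0) is undefined, so here we restrict to the positive support
e_lnx = sum(math.log(x) * p for x, p in pmf.items() if x > 0)
```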

**Other Properties**

- $E(k)=k$; $Var(k)=0$, where $k$ is a constant
- $E(aX+b)=aE(X)+b$; $Var(aX+b)=a^{2}Var(X)$
- $E(X-\mu)=0$ for any r.v. $X$
- $Var(X)=E[(X-E(X))^{2}]=E(X^{2})-\mu^{2}$
    - To get this, expand the squared term to $E[X^{2}-2XE(X)+E(X)^{2}]$, treat $E(X)$ as a constant, and use $E(X)=\mu$, so it simplifies to $E(X^{2})-\mu^{2}$
- I was still a little confused at first; the full proof is here: https://www.probabilitycourse.com/chapter3/3_2_4_variance.php

- If $X$ and $Y$ are independent then $Var(X+Y)=Var(X)+Var(Y)$
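The variance properties above can be verified directly on the same hypothetical p.m.f. (any small discrete distribution would do):

```python
# Verify Var(X) = E[X^2] - mu^2 and Var(aX + b) = a^2 * Var(X)
# on a small made-up p.m.f.
pmf = {0: 0.2, 1: 0.5, 2: 0.3}

mu = sum(x * p for x, p in pmf.items())          # E[X]
e_x2 = sum(x ** 2 * p for x, p in pmf.items())   # E[X^2]
var = e_x2 - mu ** 2                             # shortcut formula

# Definition form E[(X - mu)^2] agrees with the shortcut
var_def = sum((x - mu) ** 2 * p for x, p in pmf.items())
assert abs(var - var_def) < 1e-12

# Var(aX + b) = a^2 * Var(X): shift/scale the support, keep the probabilities
a, b = 3, 7
var_lin = sum(((a * x + b) - (a * mu + b)) ** 2 * p for x, p in pmf.items())
assert abs(var_lin - a ** 2 * var) < 1e-12
```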

### Related

### Random

Wow, another Serendipity moment. Tying the Reinforcement Learning stuff (expected return) I was seeing to Competitive Programming, as well as the probability theory that I will be learning for my upcoming Statistics class.