Mathematical Expectation: Computing and Deriving the Expectations of Common Distributions

Contents

1. Definition of mathematical expectation
2. Expectation of a function of a random variable
3. Expectation of a function of a two-dimensional random variable
4. Properties of mathematical expectation
5. Expectations of common distributions: 5.1 the (0-1) distribution, 5.2 binomial, 5.3 Poisson, 5.4 geometric, 5.5 hypergeometric, 5.6 uniform, 5.7 exponential, 5.8 normal, 5.9 summary

1. Definition of Mathematical Expectation

Let the discrete random variable $X$ have the distribution law

$$P\{X=x_k\}=p_k,\quad k=1,2,\cdots.$$

If the series $\sum\limits_{k=1}^{\infty}x_kp_k$ converges absolutely, then its sum is called the mathematical expectation of $X$, denoted $E(X)$. That is,

$$E(X) = \sum\limits_{k=1}^{\infty}x_kp_k.$$

Let the continuous random variable $X$ have probability density $f(x)$. If the integral $\int_{-\infty}^{+\infty}xf(x)dx$ converges absolutely, then its value is called the mathematical expectation of $X$, denoted $E(X)$. That is,

$$E(X)=\int_{-\infty}^{+\infty}xf(x)dx.$$

Mathematical expectation is usually shortened to "expectation" and is also called the mean. It can be understood as a weighted average of the values of $X$, weighted by their probabilities.
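As an added illustration (not part of the original text), both definitions can be evaluated numerically. The sketch below assumes NumPy and SciPy are available; the pmf and density used are small made-up examples.

```python
# Minimal sketch: computing E(X) from the two definitions above.
# Assumes numpy and scipy are available; both distributions are illustrative examples.
import numpy as np
from scipy.integrate import quad

# Discrete case: E(X) = sum_k x_k * p_k
x = np.array([0, 1, 2, 3])
p = np.array([0.1, 0.2, 0.3, 0.4])           # probabilities, must sum to 1
print(np.sum(x * p))                         # 2.0

# Continuous case: E(X) = integral of x * f(x) dx  (here f(x) = 2x on (0, 1), so E(X) = 2/3)
f = lambda t: 2.0 * t
E_continuous, _ = quad(lambda t: t * f(t), 0.0, 1.0)
print(E_continuous)                          # ~0.6667
```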

2. Expectation of a Function of a Random Variable

Let $Y$ be a function of the random variable $X$: $Y=g(X)$, where $g$ is a continuous function.

If $X$ is a discrete random variable with distribution law $P\{X=x_k\}=p_k,\ k=1,2,\cdots$, and if $\sum\limits_{k=1}^{\infty}g(x_k)p_k$ converges absolutely, then

$$E(Y)=E[g(X)]=\sum\limits_{k=1}^{\infty}g(x_k)p_k.$$

If $X$ is a continuous random variable with probability density $f(x)$, and if $\int_{-\infty}^{+\infty}g(x)f(x)dx$ converges absolutely, then

$$E(Y)=E[g(X)]=\int_{-\infty}^{+\infty}g(x)f(x)dx.$$
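For instance (an added, hedged illustration, not from the original), take $X\sim U(0,1)$ and $g(x)=x^2$; the formula gives $E(X^2)=\int_0^1 x^2\,dx=1/3$ without ever deriving the distribution of $Y=X^2$. The sketch below checks this numerically.

```python
# Minimal sketch of E[g(X)]: no need to derive the distribution of Y = g(X) first.
# Illustrative example: X ~ U(0, 1), g(x) = x^2, so E[g(X)] = 1/3.
from scipy.integrate import quad

f = lambda x: 1.0                            # density of U(0, 1) on (0, 1)
g = lambda x: x ** 2
E_gX, _ = quad(lambda x: g(x) * f(x), 0.0, 1.0)
print(E_gX)                                  # ~0.3333
```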

3. Expectation of a Function of a Two-Dimensional Random Variable

Let $Z$ be a function of the random variables $X$ and $Y$: $Z=g(X,Y)$, where $g$ is a continuous function; then $Z$ is a one-dimensional random variable.

If $(X,Y)$ is a discrete random variable with distribution law $P\{X=x_i,Y=y_j\}=p_{ij},\ i,j=1,2,\cdots$, then

$$E(Z)=E[g(X,Y)] = \sum\limits_{j=1}^{\infty}\sum\limits_{i=1}^{\infty}g(x_i,y_j)p_{ij},$$

where the series on the right is assumed to converge absolutely. If $(X,Y)$ is a continuous random variable with probability density $f(x,y)$, then

$$E(Z)=E[g(X,Y)] = \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}g(x,y)f(x,y)dxdy,$$

again assuming absolute convergence.
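As an added numerical check (not in the original), take the illustrative case where $(X,Y)$ is uniform on the unit square and $g(x,y)=x+y$; the double integral should evaluate to $1$.

```python
# Minimal sketch of E[g(X, Y)] for a continuous pair (illustrative: (X, Y) uniform on [0, 1]^2).
from scipy.integrate import dblquad

f = lambda x, y: 1.0                       # joint density on the unit square
g = lambda x, y: x + y
# dblquad expects func(y, x) and integrates y over [gfun(x), hfun(x)], then x over [a, b].
E_Z, _ = dblquad(lambda y, x: g(x, y) * f(x, y), 0.0, 1.0, lambda x: 0.0, lambda x: 1.0)
print(E_Z)                                 # ~1.0 = E(X) + E(Y) = 0.5 + 0.5
```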

4. Properties of Mathematical Expectation

If $C$ is a constant, then

$$E(C)=C.$$

Since for now we mainly study discrete and continuous random variables, each of the following properties is verified for both cases.

Proof:

For the discrete case,

$$E(C) = \sum\limits_{k=1}^{\infty}Cp_k = C\sum\limits_{k=1}^{\infty}p_k = C.$$

For the continuous case,

$$E(C)=\int_{-\infty}^{+\infty}Cf(x)dx = C\int_{-\infty}^{+\infty}f(x)dx = C.$$

In both lines the last step uses the fact that the probabilities, or the density, sum or integrate to $1$.

If $X$ is a random variable and $C$ is a constant, then

$$E(CX)=CE(X).$$

Proof:

For the discrete case,

$$E(CX) = \sum\limits_{k=1}^{\infty}Cx_kp_k = C\sum\limits_{k=1}^{\infty}x_kp_k = CE(X).$$

For the continuous case,

$$E(CX)=\int_{-\infty}^{+\infty}Cxf(x)dx = C\int_{-\infty}^{+\infty}xf(x)dx = CE(X).$$

If $X$ and $Y$ are two random variables, then

$$E(X+Y)=E(X)+E(Y).$$

Proof:

For the discrete case,

$$E(X+Y) = \sum\limits_{j=1}^{\infty}\sum\limits_{i=1}^{\infty}(x_i+y_j)p_{ij} = \sum\limits_{j=1}^{\infty}\sum\limits_{i=1}^{\infty}x_ip_{ij}+\sum\limits_{j=1}^{\infty}\sum\limits_{i=1}^{\infty}y_jp_{ij} = E(X)+E(Y),$$

where the last step uses $\sum\limits_{j=1}^{\infty}p_{ij}=p_{i\cdot}$ and $\sum\limits_{i=1}^{\infty}p_{ij}=p_{\cdot j}$.

For the continuous case,

$$\begin{aligned} E(X+Y)&=\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}(x+y)f(x,y)dxdy\\&=\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}xf(x,y)dxdy+\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}yf(x,y)dxdy\\&=\int_{-\infty}^{+\infty}x\left[\int_{-\infty}^{+\infty}f(x,y)dy\right]dx+\int_{-\infty}^{+\infty}y\left[\int_{-\infty}^{+\infty}f(x,y)dx\right]dy\\&=\int_{-\infty}^{+\infty}xf_X(x)dx + \int_{-\infty}^{+\infty}yf_Y(y)dy\\&=E(X)+E(Y).\end{aligned}$$

This property extends to the sum of any finite number of random variables.

If $X$ and $Y$ are two mutually independent random variables, then

$$E(XY)=E(X)E(Y).$$

Proof:

For the discrete case, independence gives $p_{ij}=p_{i\cdot}p_{\cdot j}$, so

$$E(XY) = \sum\limits_{j=1}^{\infty}\sum\limits_{i=1}^{\infty}x_iy_jp_{ij} = \sum\limits_{j=1}^{\infty}\sum\limits_{i=1}^{\infty}x_iy_jp_{i\cdot}p_{\cdot j} = \sum\limits_{i=1}^{\infty}x_ip_{i\cdot}\cdot\sum\limits_{j=1}^{\infty}y_jp_{\cdot j} = E(X)\cdot E(Y).$$

For the continuous case, independence gives $f(x,y)=f_X(x)f_Y(y)$, so

$$\begin{aligned} E(XY)&=\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}xyf(x,y)dxdy\\&=\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}xf_X(x)\,yf_Y(y)dxdy\\&=\left[\int_{-\infty}^{+\infty}xf_X(x)dx\right]\cdot\left[\int_{-\infty}^{+\infty}yf_Y(y)dy\right]\\&=E(X)E(Y).\end{aligned}$$

This property extends to the product of any finite number of mutually independent random variables.
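A quick simulation (added here as a hedged illustration, not part of the original derivation) makes the last two properties tangible; the particular distributions below are arbitrary choices.

```python
# Minimal Monte Carlo sketch of E(X+Y) = E(X) + E(Y) and, for independent X and Y, E(XY) = E(X)E(Y).
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
X = rng.exponential(scale=2.0, size=n)     # E(X) = 2
Y = rng.uniform(0.0, 1.0, size=n)          # E(Y) = 0.5, drawn independently of X

print(np.mean(X + Y), np.mean(X) + np.mean(Y))   # both ~2.5
print(np.mean(X * Y), np.mean(X) * np.mean(Y))   # both ~1.0, since X and Y are independent
```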

5. Expectations of Common Distributions

5.1 The (0-1) Distribution

If the random variable $X$ follows the $(0\text{-}1)$ distribution, its distribution law is

$$P\{X=k\} = p^k(1-p)^{1-k}, \quad k=0,1,$$

and then $E(X)=p$.

Proof:

$$E(X)=\sum\limits_{k=0}^{1}k\,P\{X=k\} = 0\cdot p^0(1-p)^{1-0}+1\cdot p^1(1-p)^{1-1} = p.$$

5.2 Binomial Distribution

If $X\sim b(n,p)$, its distribution law is

$$P\{X=k\} = \binom{n}{k}p^kq^{n-k}, \quad k=0,1,2,\cdots,n,$$

where $q=1-p$, and then $E(X)=np$.

Proof:

$$\begin{aligned}E(X) &= \sum\limits_{k=0}^{n}k\binom{n}{k}p^kq^{n-k} \\&=\sum\limits_{k=0}^{n}k\frac{n!}{k!(n-k)!}p^kq^{n-k} \\&= \sum\limits_{k=1}^{n}np\frac{(n-1)!}{(k-1)!(n-k)!}p^{k-1}q^{n-k} \\&= np\,(p+q)^{n-1} \\&=np. \end{aligned}$$

In the third line the sum starts from $k=1$: the $k=0$ term is zero, so dropping it changes nothing, and it also avoids the meaningless factorial $(-1)!$. In the fourth line the factor $np$ does not depend on $k$ and can be pulled out of the sum; the remaining terms are exactly the binomial expansion of $(p+q)^{n-1}$, and $p+q=1$.
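To sanity-check $E(X)=np$ numerically (an added sketch, not from the original; the parameters are arbitrary), one can evaluate the defining sum directly:

```python
# Minimal sketch: the defining sum for a binomial expectation versus the closed form n*p.
from math import comb

n, p = 10, 0.3                              # illustrative parameters
q = 1 - p
E = sum(k * comb(n, k) * p**k * q**(n - k) for k in range(n + 1))
print(E, n * p)                             # both 3.0 (up to floating-point error)
```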

5.3 Poisson Distribution

If $X\sim \pi(\lambda)$, its distribution law is

$$P\{X=k\} = \frac{\lambda^k}{k!}e^{-\lambda}, \quad k=0,1,2,\cdots,$$

and then $E(X)=\lambda$.

Proof:

$$\begin{aligned}E(X) &= \sum\limits_{k=0}^{\infty}k\frac{\lambda^k}{k!}e^{-\lambda} \\&=\lambda e^{-\lambda}\sum\limits_{k=1}^{\infty}\frac{\lambda^{k-1}}{(k-1)!} \\&= \lambda e^{-\lambda}e^{\lambda} \\&= \lambda. \end{aligned}$$

In the second line the sum starts from $k=1$ because the $k=0$ term is zero, which also avoids the factorial of $-1$; the constant factor $\lambda e^{-\lambda}$ is pulled out of the sum. The third line uses the Taylor series $e^{\lambda}=\sum\limits_{n=0}^{\infty}\frac{\lambda^n}{n!}$; if this is hard to see, substitute $n=k-1$, so that $\sum\limits_{k=1}^{\infty}\frac{\lambda^{k-1}}{(k-1)!}=\sum\limits_{n=0}^{\infty}\frac{\lambda^{n}}{n!}$. The Taylor expansion is discussed in the earlier post 离散型随机变量及其常见分布律.

Second proof:

$$E(X) = \sum\limits_{k=0}^{\infty}k\frac{\lambda^k}{k!}e^{-\lambda} =\lambda \sum\limits_{k=1}^{\infty}\frac{\lambda^{k-1}}{(k-1)!}e^{-\lambda}.$$

Substituting $n=k-1$ gives

$$E(X)= \lambda \sum\limits_{n=0}^{\infty}\frac{\lambda^{n}}{n!}e^{-\lambda} =\lambda \cdot 1 = \lambda,$$

since $\sum\limits_{n=0}^{\infty}\frac{\lambda^{n}}{n!}e^{-\lambda}$ is exactly the sum of the Poisson probabilities over all possible values, which equals $1$. A proof that the Poisson probabilities sum to $1$ is given in the earlier post 离散型随机变量及其常见分布律.
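A truncated version of the defining sum (an added, hedged check; the truncation point and $\lambda$ are arbitrary) converges quickly to $\lambda$:

```python
# Minimal sketch: truncated Poisson expectation sum versus lambda.
from math import exp, factorial

lam = 4.0                                    # illustrative rate
E = sum(k * lam**k / factorial(k) * exp(-lam) for k in range(60))  # 60 terms is plenty here
print(E, lam)                                # both ~4.0
```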

5.4 Geometric Distribution

If $X\sim G(p)$, its distribution law is

$$P\{X=k\} = (1-p)^{k-1}p, \quad k = 1,2,3,\cdots,$$

and then $E(X)=\frac{1}{p}$.

Proof:

$$E(X) = \sum\limits_{k=1}^{\infty}k(1-p)^{k-1}p = p\sum\limits_{k=1}^{\infty}k(1-p)^{k-1}.$$

Consider the partial sum

$$S = 1\cdot(1-p)^0+2\cdot(1-p)^1+3\cdot(1-p)^2+\cdots+(k-1)\cdot(1-p)^{k-2}+k\cdot(1-p)^{k-1}, \quad (1)$$

$$(1-p)S=1\cdot(1-p)^1+2\cdot(1-p)^2+3\cdot(1-p)^3+\cdots+(k-1)\cdot(1-p)^{k-1}+k\cdot(1-p)^{k}. \quad (2)$$

Subtracting $(2)$ from $(1)$ gives

$$pS = (1-p)^0+(1-p)^1+(1-p)^2+\cdots+(1-p)^{k-1}-k\cdot(1-p)^{k} = \frac{1}{p}-\left(\frac{1}{p}+k\right)(1-p)^k.$$

Since $0<1-p<1$, both $(1-p)^k\to 0$ and $k(1-p)^k\to 0$ as $k\to\infty$, so letting $k\to\infty$ gives $pS=\frac{1}{p}$, i.e.

$$\sum\limits_{k=1}^{\infty}k(1-p)^{k-1} = \frac{1}{p^2}, \qquad E(X) = p\sum\limits_{k=1}^{\infty}k(1-p)^{k-1} = \frac{1}{p}.$$

Second proof:

$$E(X) = \sum\limits_{k=1}^{\infty}k(1-p)^{k-1}p = p\sum\limits_{k=1}^{\infty}k(1-p)^{k-1}.$$

The summand has the form $kx^{k-1}$, which is awkward to sum directly, but we know that $(x^k)' = kx^{k-1}$. Hence

$$\sum\limits_{k=1}^{\infty}kx^{k-1} =\left(\sum\limits_{k=1}^{\infty}x^k\right)' =\left(\frac{x(1-x^k)}{1-x}\right)'.$$

For $0<x<1$ and $k\to\infty$ we have $\lim\limits_{k\to\infty}x^k=0$, so

$$\sum\limits_{k=1}^{\infty}kx^{k-1} = \left(\frac{x}{1-x}\right)' = \frac{1}{(1-x)^2}.$$

Therefore, taking $x=1-p$ (with $0<1-p<1$),

$$E(X) = p\sum\limits_{k=1}^{\infty}k(1-p)^{k-1} = p\,\frac{1}{(1-(1-p))^2} = p\,\frac{1}{p^2}=\frac{1}{p}.$$
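As a hedged numerical illustration (not from the original; $p$ and the truncation length are arbitrary), a long truncation of the geometric sum approaches $1/p$:

```python
# Minimal sketch: truncated geometric expectation sum versus 1/p.
p = 0.2                                      # illustrative success probability
E = sum(k * (1 - p)**(k - 1) * p for k in range(1, 2000))  # terms beyond ~2000 are negligible here
print(E, 1 / p)                              # both ~5.0
```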

5.5 Hypergeometric Distribution

If $X\sim H(n,M,N)$, its distribution law is

$$P\{X=k\} = \frac{\binom{M}{k}\binom{N-M}{n-k}}{\binom{N}{n}}, \quad k= 0,1,\cdots,\min\{n,M\},$$

and then $E(X)=n\frac{M}{N}$.

Proof:

$$\begin{aligned} E(X) &= \sum\limits_{k=0}^{\min\{n,M\}}k\frac{\binom{M}{k}\binom{N-M}{n-k}}{\binom{N}{n}} \\&=\sum\limits_{k=0}^{\min\{n,M\}}k\frac{M!}{k!(M-k)!}\cdot\frac{(N-M)!}{(n-k)!(N-M-n+k)!}\cdot\frac{n!(N-n)!}{N!} \\&=\sum\limits_{k=1}^{\min\{n,M\}}\frac{M(M-1)!}{(k-1)!(M-k)!}\cdot\frac{(N-M)!}{(n-k)!(N-M-n+k)!}\cdot\frac{n(n-1)!(N-n)!}{N(N-1)!}\\ &= n\frac{M}{N}\frac{1}{\binom{N-1}{n-1}}\sum\limits_{k=1}^{\min\{n,M\}}\binom{M-1}{k-1}\binom{N-M}{n-k} \\&=n\frac{M}{N}\frac{1}{\binom{N-1}{n-1}}\binom{N-1}{n-1} \\&=n\frac{M}{N}. \end{aligned}$$

The second-to-last step uses Vandermonde's identity $\binom{m+n}{k} = \sum\limits_{i=0}^{k}\binom{m}{i}\binom{n}{k-i}$, which gives $\sum\limits_{k=1}^{\min\{n,M\}}\binom{M-1}{k-1}\binom{N-M}{n-k}=\binom{N-1}{n-1}$.
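The closed form can be checked against the defining sum for small concrete parameters (an added sketch; the numbers are arbitrary):

```python
# Minimal sketch: hypergeometric expectation from its pmf versus n*M/N.
from math import comb

N, M, n = 20, 7, 5                           # illustrative: population N, successes M, draws n
E = sum(k * comb(M, k) * comb(N - M, n - k) / comb(N, n) for k in range(0, min(n, M) + 1))
print(E, n * M / N)                          # both 1.75
```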

5.6 Uniform Distribution

If $X\sim U(a,b)$, its probability density is

$$f(x)=\begin{cases} \frac{1}{b-a}, & a<x<b, \\ 0, & \text{else}, \end{cases}$$

and then $E(X)=\frac{a+b}{2}$.

Proof:

$$\begin{aligned} E(X) &= \int_{-\infty}^{+\infty}xf(x)dx = \int_{-\infty}^{a}x\cdot0\,dx+\int_{a}^{b}x\frac{1}{b-a}dx+\int_{b}^{+\infty}x\cdot0\,dx\\&=0+\left(\frac{1}{2}\cdot\frac{1}{b-a}x^2\right)\bigg|_a^b+0 \\&=\frac{b^2-a^2}{2(b-a)}=\frac{a+b}{2}.\end{aligned}$$
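A quick numerical integration (an added, hedged illustration with arbitrary endpoints) reproduces the midpoint:

```python
# Minimal sketch: E(X) for U(a, b) via numerical integration versus (a + b) / 2.
from scipy.integrate import quad

a, b = 2.0, 5.0                              # illustrative endpoints
E, _ = quad(lambda x: x / (b - a), a, b)     # density is 1/(b-a) on (a, b), zero elsewhere
print(E, (a + b) / 2)                        # both 3.5
```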

5.7 Exponential Distribution

If $X\sim E(\theta)$, its probability density is

$$f(x)=\begin{cases} \frac{1}{\theta}e^{-x/\theta}, & x>0, \\ 0, & \text{else}, \end{cases}\qquad \theta>0,$$

and then $E(X)=\theta$.

Proof:

$$\begin{aligned} E(X) &= \int_{-\infty}^{+\infty}xf(x)dx = \int_{-\infty}^{0}x\cdot0\,dx+\int_{0}^{+\infty}x\frac{1}{\theta}e^{-x/\theta}dx\\&=0+\left(-xe^{-x/\theta}\right)\bigg|_0^{+\infty} -\int_{0}^{+\infty}-e^{-x/\theta}dx \quad (\text{integration by parts})\\&=0-\left(\theta e^{-x/\theta}\right)\bigg|_0^{+\infty}\\&= -(0-\theta) \\&= \theta.\end{aligned}$$

Second proof:

$$\begin{aligned} E(X) &= \int_{-\infty}^{+\infty}xf(x)dx = \int_{-\infty}^{0}x\cdot0\,dx+\int_{0}^{+\infty}x\frac{1}{\theta}e^{-x/\theta}dx\\&=0+\left(-xe^{-x/\theta}\right)\bigg|_0^{+\infty} -\int_{0}^{+\infty}-e^{-x/\theta}dx \quad (\text{integration by parts})\\&=\theta\int_{0}^{+\infty}\frac{1}{\theta}e^{-x/\theta}dx \\&= \theta \cdot 1 \\&= \theta.\end{aligned}$$

Here we insert a factor $\theta\cdot\frac{1}{\theta}$, which does not change the value but turns the remaining integral into $\int_{0}^{+\infty}\frac{1}{\theta}e^{-x/\theta}dx=F(+\infty)-F(0)=1$, the total probability of the exponential distribution.
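A brief simulation (an added sketch; the value of $\theta$ is arbitrary) agrees with $E(X)=\theta$. Note that NumPy's `scale` parameter plays the role of $\theta$ here.

```python
# Minimal sketch: sample mean of exponential draws versus theta.
import numpy as np

theta = 3.0                                  # illustrative parameter
rng = np.random.default_rng(1)
samples = rng.exponential(scale=theta, size=1_000_000)
print(samples.mean(), theta)                 # sample mean ~3.0
```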

5.8 Normal Distribution

If $X\sim N(\mu,\sigma^2)$, its probability density is

$$f(x)=\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(x-\mu)^2}{2\sigma^2}}, \quad -\infty<x<+\infty,$$

and then $E(X)=\mu$.

Proof:

$$E(X) = \int_{-\infty}^{+\infty}xf(x)dx = \int_{-\infty}^{+\infty}x\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(x-\mu)^2}{2\sigma^2}}dx.$$

Substituting $\frac{x-\mu}{\sigma}=t$, so that $x= t\sigma+\mu$ and $dx=\sigma\,dt$,

$$\begin{aligned} E(X)&=\frac{1}{\sqrt{2\pi}\sigma}\int_{-\infty}^{+\infty}(t\sigma+\mu)e^{-\frac{t^2}{2}}\sigma\, dt\\&= \frac{1}{\sqrt{2\pi}}\left(\int_{-\infty}^{+\infty}t\sigma e^{-\frac{t^2}{2}}dt+\int_{-\infty}^{+\infty}\mu e^{-\frac{t^2}{2}} dt\right) \\&= \frac{\sigma}{\sqrt{2\pi}}\left(-e^{-\frac{t^2}{2}}\right)\bigg|_{-\infty}^{+\infty}+\frac{1}{\sqrt{2\pi}}\mu\sqrt{2\pi} \\&=0+\mu \\&= \mu, \end{aligned}$$

where the second term uses $\int_{-\infty}^{+\infty}e^{-\frac{t^2}{2}} dt=\sqrt{2\pi}$. A detailed proof of this integral is given in the earlier post 连续型随机变量及其常见分布函数和概率密度, in the part verifying that the normal density is a valid probability density.

Second proof:

A general normal random variable can be standardized via $Z=\frac{X-\mu}{\sigma}$, and the expectation of the standard normal distribution is easy to compute; call it $E(X_0)$. Then, using the properties of expectation,

$$E\!\left(\frac{X-\mu}{\sigma}\right) = \frac{E(X)}{\sigma}-\frac{\mu}{\sigma} = E(X_0),$$

and we can solve for $E(X)$. In detail: the standard normal probability density is

$$f(x) = \frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}},$$

so

$$\begin{aligned} E(X_0) &= \int_{-\infty}^{+\infty}xf(x)dx = \int_{-\infty}^{+\infty}x\frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}dx\\&= \frac{1}{\sqrt{2\pi}}\left(-e^{-\frac{x^2}{2}}\right)\bigg|_{-\infty}^{+\infty} \\&=0.\end{aligned}$$

Therefore

$$E\!\left(\frac{X-\mu}{\sigma}\right) = \frac{E(X)}{\sigma}-\frac{\mu}{\sigma} = 0 \quad\Longrightarrow\quad E(X) = \mu.$$
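A brief numerical confirmation of $E(X)=\mu$ (an added sketch; $\mu$ and $\sigma$ are arbitrary):

```python
# Minimal sketch: E(X) for N(mu, sigma^2) via numerical integration of x * f(x).
import numpy as np
from scipy.integrate import quad

mu, sigma = 1.5, 2.0                         # illustrative parameters
f = lambda x: np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)
E, _ = quad(lambda x: x * f(x), -np.inf, np.inf)
print(E, mu)                                 # both ~1.5
```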

5.9 Summary

| Distribution | Parameters | Distribution law / probability density | Expectation |
| --- | --- | --- | --- |
| $(0\text{-}1)$ distribution | $0<p<1$ | $P\{X=k\} = p^k(1-p)^{1-k},\ k=0,1$ | $p$ |
| Binomial $X\sim b(n,p)$ | $n\geq1,\ 0<p<1$ | $P\{X=k\} = \binom{n}{k}p^kq^{n-k},\ k=0,1,\cdots,n$, with $q=1-p$ | $np$ |
| Poisson $X\sim \pi(\lambda)$ | $\lambda>0$ | $P\{X=k\} = \frac{\lambda^k}{k!}e^{-\lambda},\ k=0,1,2,\cdots$ | $\lambda$ |
| Geometric $X\sim G(p)$ | $0<p<1$ | $P\{X=k\} = (1-p)^{k-1}p,\ k=1,2,3,\cdots$ | $\frac{1}{p}$ |
| Hypergeometric $X\sim H(n,M,N)$ | $N,M,n$ with $N\geq M,\ N\geq n$ | $P\{X=k\} = \frac{\binom{M}{k}\binom{N-M}{n-k}}{\binom{N}{n}},\ k=0,1,\cdots,\min\{n,M\}$ | $n\frac{M}{N}$ |
| Uniform $X\sim U(a,b)$ | $a<b$ | $f(x)=\frac{1}{b-a}$ for $a<x<b$, $0$ otherwise | $\frac{a+b}{2}$ |
| Exponential $X\sim E(\theta)$ | $\theta>0$ | $f(x)=\frac{1}{\theta}e^{-x/\theta}$ for $x>0$, $0$ otherwise | $\theta$ |
| Normal $X\sim N(\mu,\sigma^2)$ | $\mu,\ \sigma>0$ | $f(x)=\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(x-\mu)^2}{2\sigma^2}},\ -\infty<x<+\infty$ | $\mu$ |
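As a closing, hedged cross-check (added; not part of the original table), the expectation column can be compared with the means reported by `scipy.stats` for sample parameter values. Note that `scipy.stats.expon` is parameterized by `scale`, which plays the role of $\theta$, and that `scipy.stats.hypergeom` takes (population size, number of successes, number of draws) in that order.

```python
# Minimal sketch: the table's expectation formulas versus scipy.stats means for sample parameters.
from scipy import stats

n, p, lam = 10, 0.3, 4.0
Npop, M, ndraw = 20, 7, 5                    # population size, successes, draws (the document's N, M, n)
a, b, theta, mu, sigma = 2.0, 5.0, 3.0, 1.5, 2.0

checks = [
    ("(0-1)",          stats.bernoulli.mean(p),                p),
    ("binomial",       stats.binom.mean(n, p),                 n * p),
    ("Poisson",        stats.poisson.mean(lam),                lam),
    ("geometric",      stats.geom.mean(p),                     1 / p),
    ("hypergeometric", stats.hypergeom.mean(Npop, M, ndraw),   ndraw * M / Npop),
    ("uniform",        stats.uniform.mean(loc=a, scale=b - a), (a + b) / 2),
    ("exponential",    stats.expon.mean(scale=theta),          theta),
    ("normal",         stats.norm.mean(loc=mu, scale=sigma),   mu),
]
for name, lib_mean, formula in checks:
    print(f"{name:15s} {lib_mean:.4f} {formula:.4f}")   # the two columns agree
```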
