Exponential Distribution

Originally published on my personal site: here

    Story

The Exponential distribution is the continuous counterpart to the [Geometric distribution](file://./Geometric-Distribution.md). The story of the Exponential distribution is analogous, but we are now waiting for a success in continuous time, where successes arrive at a rate of \(\lambda\) successes per unit of time. The average number of successes in a time interval of length \(t\) is \(\lambda t\), though the actual number of successes varies randomly. An Exponential random variable represents the waiting time until the first arrival of a success.

    ——adapted from Book BH

    Basic

Definition: A continuous r.v. \(X\) is said to have the Exponential distribution with parameter \(\lambda > 0\) if its PDF is

\[ f(x) = \lambda e^{-\lambda x}, \quad x > 0. \]

    The corresponding CDF is

\[ F(x) = 1 - e^{-\lambda x}, \quad x > 0. \]

To calculate the expectation and variance, we first consider \(X \sim Exp(1)\) with PDF \(f(x) = e^{-x}\); then

\[\begin{split} E(X) &= \int_0^{\infty} x e^{-x} dx = 1 \\ E(X^2) &= \int_0^{\infty} x^2 e^{-x} dx \\ &= -x^2 e^{-x}|_0^{\infty} + 2\int_0^{\infty} x e^{-x} dx \\ &= 2E(X) = 2 \\ Var(X) &= E(X^2) - E^2(X) = 2-1 = 1 \\ M_X(t) &= E(e^{tX}) = \int_0^{\infty} e^{tx} e^{-x} dx \\ &= \int_0^{\infty} e^{-(1-t)x} dx = \frac{1}{1-t} \quad \text{for } t<1 \end{split}\]
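As a quick numerical sanity check (my own addition, not part of the derivation), here is a minimal Monte Carlo sketch with NumPy; the sample size and the test point \(t=0.5\) are arbitrary choices:

```python
import numpy as np

# Draw a large sample from Exp(1) (NumPy's scale parameter is 1/lambda = 1 here)
rng = np.random.default_rng(seed=0)
x = rng.exponential(scale=1.0, size=1_000_000)

print(x.mean())              # ~ 1, matches E(X) = 1
print((x ** 2).mean())       # ~ 2, matches E(X^2) = 2
print(x.var())               # ~ 1, matches Var(X) = 1

t = 0.5                      # any t < 1 works
print(np.exp(t * x).mean())  # ~ 1/(1 - t) = 2, matches M_X(t)
```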

Now let \(Y = \frac{X}{\lambda}\). Then \(Y \sim Exp(\lambda)\): by the change-of-variables formula,

\[ f_Y(y) = f_X(x(y))\frac{dx}{dy} = e^{-\lambda y}\cdot\lambda, \]

which is the \(Exp(\lambda)\) PDF; or, directly,

\[ P(Y\le y) = P(X\le \lambda y) = 1 - e^{-\lambda y}, \]

which is the \(Exp(\lambda)\) CDF.

Hence we get

• \(E(Y) = E(X/\lambda) = 1/\lambda\)
• \(Var(Y) = Var(X/\lambda) = 1/\lambda^2\)
    • MGF (moment generating function):

\[\begin{split} M_Y(t) &= E(e^{tY}) = E(e^{tX/\lambda}) \\ &= E(e^{\frac{t}{\lambda}X}) = M_X(\frac{t}{\lambda}) = \frac{1}{1-t/\lambda} \\ &= \frac{\lambda}{\lambda - t} \quad \text{for } t<\lambda \end{split}\]
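Continuing the sanity check, a short sketch (with an arbitrary \(\lambda = 2.5\) of my own choosing) confirming the scaling results for \(Y = X/\lambda\):

```python
import numpy as np

lam = 2.5                                        # arbitrary rate, for illustration only
rng = np.random.default_rng(seed=1)
x = rng.exponential(scale=1.0, size=1_000_000)   # X ~ Exp(1)
y = x / lam                                      # Y = X/lambda ~ Exp(lambda)

print(y.mean(), 1 / lam)                         # E(Y) = 1/lambda
print(y.var(), 1 / lam ** 2)                     # Var(Y) = 1/lambda^2

t = 1.0                                          # any t < lambda works
print(np.exp(t * y).mean(), lam / (lam - t))     # M_Y(t) = lambda/(lambda - t)
```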

Memoryless Property

The memoryless property says that \(P(X \ge s+t ~|~ X \ge s) = P(X \ge t)\) for all \(s, t \ge 0\). Let \(X \sim Exp(\lambda)\); then

\[\begin{split} P(X \ge s+t ~|~ X \ge s) &= \frac{P(X \ge s+t, ~X \ge s)}{P(X \ge s)} \\ &= \frac{P(X \ge s+t)}{P(X \ge s)} \\ &= \frac{e^{-\lambda (s+t)}}{e^{-\lambda s}} = e^{-\lambda t} \\ &= P(X \ge t) \end{split}\]
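A small simulation sketch of the memoryless property (the values of \(\lambda\), \(s\), \(t\) below are arbitrary choices of mine): the empirical \(P(X \ge s+t ~|~ X \ge s)\) should agree with \(P(X \ge t) = e^{-\lambda t}\).

```python
import numpy as np

lam, s, t = 1.5, 0.7, 1.2                             # arbitrary values for illustration
rng = np.random.default_rng(seed=2)
x = rng.exponential(scale=1 / lam, size=2_000_000)    # X ~ Exp(lambda)

cond = (x[x >= s] >= s + t).mean()     # empirical P(X >= s+t | X >= s)
uncond = (x >= t).mean()               # empirical P(X >= t)

print(cond, uncond, np.exp(-lam * t))  # all three should be close
```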

Theorem: If \(X\) is a positive continuous r.v. with the memoryless property, then \(X\) has an exponential distribution. Similarly, if \(X\) is discrete with the memoryless property, then it has a geometric distribution.

Proof idea: express the memoryless property in terms of the survival function \(G(t) = P(X > t)\), which gives \(G(s+t) = G(s)G(t)\), and solve the resulting differential equation.

    Examples

eg.1 \(X_1 \sim Exp(\lambda_1), ~X_2 \sim Exp(\lambda_2)\), and \(X_1 \perp X_2\). Then \(P(X_1 < X_2) = \frac{\lambda_1}{\lambda_1 + \lambda_2}\).

    Proof: By LOTP (law of total probability),

\[\begin{split} P(X_1 < X_2) &= \int_0^{\infty} f_{X_1}(x) P(X_2 > X_1 ~|~ X_1=x) dx \\ &= \int_0^{\infty} f_{X_1}(x) P(X_2 > x ~|~ X_1=x) dx \\ &= \int_0^{\infty} f_{X_1}(x) P(X_2 > x) dx \quad \text{(independence)} \\ &= \int_0^{\infty} \lambda_1 e^{-\lambda_1 x} e^{-\lambda_2 x} dx \\ &= \lambda_1 \int_0^{\infty} e^{-(\lambda_1 + \lambda_2) x} dx \\ &= \frac{\lambda_1}{\lambda_1 + \lambda_2} \end{split}\]
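A minimal simulation sketch of eg.1 (the rates below are arbitrary choices of mine):

```python
import numpy as np

lam1, lam2 = 2.0, 3.0                                  # arbitrary rates for illustration
rng = np.random.default_rng(seed=3)
x1 = rng.exponential(scale=1 / lam1, size=1_000_000)   # X1 ~ Exp(lam1)
x2 = rng.exponential(scale=1 / lam2, size=1_000_000)   # X2 ~ Exp(lam2)

print((x1 < x2).mean())          # empirical P(X1 < X2)
print(lam1 / (lam1 + lam2))      # theoretical value 2/5 = 0.4
```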

eg.2 \(\{X_i\}_{i=1}^n\) are independent with \(X_j \sim Exp(\lambda_j)\). Let \(L = \min(X_1, \cdots, X_n)\); then \(L \sim Exp(\lambda_1 + \cdots + \lambda_n)\).

    Proof:

\[\begin{split} P(L > t) &= P\left(\min(X_1,\cdots,X_n) > t\right) \\ &= P(X_1 > t, \cdots, X_n >t) \\ &= P(X_1 > t) \cdots P(X_n >t) \quad \text{(indep.)} \\ &= e^{-\lambda_1 t}\cdots e^{-\lambda_n t} \\ &= e^{-(\lambda_1 + \cdots + \lambda_n)t}, \end{split}\]

which is the survival function of \(Exp\left(\sum_j \lambda_j\right)\), so \(L \sim Exp\left(\sum_j \lambda_j\right)\).

The intuition behind this result: consider \(n\) independent Poisson processes, the \(j\)-th with rate \(\lambda_j\), and think of

• \(X_1\) as the waiting time for a green car
• \(X_2\) as the waiting time for a red car
• ...

Then \(L\) is the waiting time for a car of any color (i.e., any car at all), so it makes sense that the combined rate is \(\lambda_1 + \cdots + \lambda_n\).
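To back up eg.2 and this intuition, here is a quick sketch (the rates and the test point are arbitrary choices of mine) checking that \(L = \min_j X_j\) has the mean and survival function of an \(Exp(\sum_j \lambda_j)\) variable:

```python
import numpy as np

lams = np.array([1.0, 2.0, 3.5])      # arbitrary rates lambda_j
rng = np.random.default_rng(seed=4)
# one column per X_j, one row per independent replication
xs = rng.exponential(scale=1 / lams, size=(1_000_000, len(lams)))
L = xs.min(axis=1)

total = lams.sum()
print(L.mean(), 1 / total)                  # mean of Exp(sum of rates)
t = 0.3                                     # arbitrary test point
print((L > t).mean(), np.exp(-total * t))   # empirical vs theoretical P(L > t)
```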

eg.3 (Difference of two exponentials) Let \(X \sim Exp(\lambda)\) and \(Y \sim Exp(\mu)\), with \(X \perp Y\). What is the PDF of \(Z=X-Y\)?

    Solution:
Recall the story of the Exponential distribution: one can think of \(X\) and \(Y\) as waiting times for two independent events. For example,

• \(X\) as the waiting time for a red car passing by
• \(Y\) as the waiting time for a blue car

If we see a blue car pass by first, then by the memoryless property of the Exponential the remaining waiting time for a red car still has the same distribution as \(X\). Likewise, if we see a red car pass by first, the remaining waiting time for a blue car has the same distribution as \(Y\). This remaining waiting time (up to sign) is exactly what we are interested in, namely \(Z\).

This intuition says that the conditional distribution of \(X-Y\) given \(X > Y\) is the distribution of \(X\), and the conditional distribution of \(X-Y\) given \(X \le Y\) is the distribution of \(-Y\) (in other words, the conditional distribution of \(Y-X\) given \(Y \ge X\) is the same as the distribution of \(Y\)).

Making this intuition precise:

• If \(X>Y\), i.e. \(Z>0\), then \(Z~|~X>Y\) is distributed as \(X\), that is

\[\begin{gathered} f_Z(z~|~X>Y) = \lambda e^{-\lambda z}, \\ \text{and since } f_Z(z~|~X<Y) = 0 \text{ for } z>0, \\ f_Z(z) = f_Z(z~|~X>Y)P(X>Y) = \frac{\mu}{\lambda + \mu}\lambda e^{-\lambda z}. \end{gathered}\]

• If \(X < Y\), i.e. \(Z < 0\), then \(Z~|~X<Y\) is distributed as \(-Y\), that is

\[\begin{gathered} f_Z(z~|~X<Y) = f_Y(y(z))\left|\frac{dy}{dz}\right| = \mu e^{\mu z} \\ \implies f_Z(z) = f_Z(z~|~X<Y)P(X<Y) = \frac{\lambda}{\lambda + \mu} \mu e^{\mu z} \end{gathered}\]

However, this is just a sketch. Below we derive the PDF mathematically.

From this point of view, the PDF of \(Z\) is best computed separately according to the sign of \(z\).

• If \(z > 0\): conditional on \(X < Y\) we have \(Z < 0 < z\), so the second term below vanishes, and

\[\begin{split} P(Z > z) &= P(X-Y>z ~|~ X>Y)P(X>Y) + P(Z>z~|~X<Y)P(X<Y) \\ &= P(X>z)P(X>Y) + 0 \quad \text{(memoryless)} \\ &= \frac{\mu}{\lambda + \mu} e^{-\lambda z} \quad \text{(by eg.1)} \\ \implies f_Z(z) &= \frac{\lambda\mu}{\lambda + \mu} e^{-\lambda z} \quad \text{for } z>0 \end{split}\]

• If \(z \le 0\): conditional on \(X > Y\) we have \(Z > 0 \ge z\), so the first term below vanishes, and

\[\begin{split} P(Z < z) &= P(Z<z ~|~ X>Y)P(X>Y) + P(X-Y<z~|~X<Y)P(X<Y) \\ &= 0 + P(Y-X > -z ~|~ Y>X)P(Y>X) \\ &= P(Y>X)P(Y > -z) \quad \text{(memoryless)} \\ &= \frac{\lambda}{\lambda + \mu}e^{\mu z} \quad \text{(by eg.1)} \\ \implies f_Z(z) &= \frac{\lambda\mu}{\lambda + \mu}e^{\mu z} \quad \text{for } z<0 \end{split}\]

Therefore, the PDF of \(Z\) has the form

\[ f_Z(z) = \frac{\lambda\mu}{\lambda + \mu} \begin{cases} e^{-\lambda z} &\quad z>0 \\ e^{\mu z} &\quad z<0 \end{cases} \]

Note: \(P(X=Y)=0\), since the corresponding integration domain is the line \(y=x\), which has measure 0; that is, \(P(Z=0) = 0\). This is why we need not worry about the case \(X=Y\).
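As a final sketch (with arbitrary \(\lambda\), \(\mu\) of my own choosing), one can compare an empirical density estimate of \(Z = X - Y\) with the piecewise formula above at a few points:

```python
import numpy as np

lam, mu = 1.0, 2.0                       # arbitrary rates for illustration
rng = np.random.default_rng(seed=5)
n = 2_000_000
z = rng.exponential(1 / lam, n) - rng.exponential(1 / mu, n)   # Z = X - Y

def f_Z(v):
    """The PDF of Z derived above."""
    c = lam * mu / (lam + mu)
    return c * np.exp(-lam * v) if v > 0 else c * np.exp(mu * v)

# crude density estimate from a histogram, compared with f_Z at a few points
hist, edges = np.histogram(z, bins=400, range=(-5, 5), density=True)
centers = (edges[:-1] + edges[1:]) / 2
for p in (-2.0, -0.5, 0.5, 2.0):
    i = int(np.argmin(np.abs(centers - p)))
    print(p, hist[i], f_Z(p))            # empirical vs theoretical density
```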
