Expectations, Variance, Covariance, and Moment Generating Functions (Study Notes, Probability and Statistics)

These notes cover expectations, variance, covariance, and moment generating functions, with the properties and examples of each: the expectation of functions and sums, the variance and covariance of a sum, and the correlation inequality.

Lecture 15: Feb 9, Expectations, Variance and Covariance (Ross 7.1, 7.4)

15.1: Expectations of functions and sums

(i) For a discrete random variable, E(g(X)) = ∑_x g(x) p_X(x). For a continuous random variable, E(g(X)) = ∫ g(x) f_X(x) dx. (This was proved for the discrete case in 394; Ross p. 145.)

If X and Y have joint pmf p_{X,Y}(x, y) or joint pdf f_{X,Y}(x, y), then

  E(g(X, Y)) = ∑_x ∑_y g(x, y) p_{X,Y}(x, y)   or   E(g(X, Y)) = ∫_x ∫_y g(x, y) f_{X,Y}(x, y) dx dy,

respectively. (This is proved in exactly the same way.)

(ii) Recall also (from 394) that

  E(g_1(X) + g_2(X)) = ∫ (g_1(x) + g_2(x)) f_X(x) dx = E(g_1(X)) + E(g_2(X)).

Now, if X and Y have joint pmf p_{X,Y}(x, y) or joint pdf f_{X,Y}(x, y), then

  E(g_1(X) + g_2(Y)) = ∫_x ∫_y (g_1(x) + g_2(y)) f_{X,Y}(x, y) dx dy
                     = ∫_x g_1(x) (∫_y f_{X,Y}(x, y) dy) dx + ∫_y g_2(y) (∫_x f_{X,Y}(x, y) dx) dy
                     = ∫_x g_1(x) f_X(x) dx + ∫_y g_2(y) f_Y(y) dy
                     = E(g_1(X)) + E(g_2(Y)).

15.2: Expectation of a product of (functions of) independent rvs

If X and Y are independent random variables,

  E(g_1(X) g_2(Y)) = ∫_x ∫_y g_1(x) g_2(y) f_{X,Y}(x, y) dx dy
                   = ∫_x ∫_y g_1(x) g_2(y) f_X(x) f_Y(y) dx dy
                   = (∫_x g_1(x) f_X(x) dx)(∫_y g_2(y) f_Y(y) dy)
                   = E(g_1(X)) E(g_2(Y)).

The proof for discrete random variables is similar.

15.3: Variance, covariance, and correlation

(i) Recall that if E(X) = µ,

  var(X) ≡ E((X − µ)^2) = E(X^2 − 2µX + µ^2) = E(X^2) − 2µE(X) + µ^2 = E(X^2) − (E(X))^2.

(ii) If E(X) = µ and E(Y) = ν, define cov(X, Y) ≡ E((X − µ)(Y − ν)). Then

  cov(X, Y) = E(XY − µY − νX + µν) = E(XY) − µE(Y) − νE(X) + µν = E(XY) − E(X)E(Y).

Note that var(X) = cov(X, X) and cov(X, −Y) = −cov(X, Y).

(iii) We see from 15.2 that if X and Y are independent, then cov(X, Y) = 0.

(iv) The converse is NOT true: cov(X, Y) = 0 does not imply that X and Y are independent.
Example: X and Y uniform on a circle: X = cos(U), Y = sin(U), where U ∼ U(0, 2π). Here X and Y are clearly dependent (X^2 + Y^2 = 1), yet cov(X, Y) = E(cos U sin U) − E(cos U)E(sin U) = 0.

(v) Define the correlation coefficient ρ by

  ρ(X, Y) = cov(X, Y) / √(var(X) var(Y)).

From (iii), if X and Y are independent then ρ(X, Y) = 0; as in (iv), the converse is in general NOT true. Also note that ρ(X, X) = +1 and ρ(X, −X) = −1. We shall show below (16.2) that −1 ≤ ρ ≤ 1 always.

Lecture 16: Feb 11, Variances and covariances of sums of random variables (Ross 7.4)

16.1: Variance and covariance of a sum

(i) Let X_i have mean µ_i, i = 1, ..., n, and let Y_j have mean ν_j, j = 1, ..., m, so that E(∑_{i=1}^n X_i) = ∑_{i=1}^n µ_i and E(∑_{j=1}^m Y_j) = ∑_{j=1}^m ν_j. Then

  cov(∑_{i=1}^n X_i, ∑_{j=1}^m Y_j) = E((∑_{i=1}^n X_i − ∑_{i=1}^n µ_i)(∑_{j=1}^m Y_j − ∑_{j=1}^m ν_j))
                                    = E(∑_{i=1}^n (X_i − µ_i) ∑_{j=1}^m (Y_j − ν_j))
                                    = E(∑_{i=1}^n ∑_{j=1}^m (X_i − µ_i)(Y_j − ν_j))
                                    = ∑_{i=1}^n ∑_{j=1}^m E((X_i − µ_i)(Y_j − ν_j))
                                    = ∑_{i=1}^n ∑_{j=1}^m cov(X_i, Y_j).

(ii) var(∑_{i=1}^n X_i) = cov(∑_{i=1}^n X_i, ∑_{j=1}^n X_j) = ∑_{i=1}^n ∑_{j=1}^n cov(X_i, X_j) = ∑_{i=1}^n var(X_i) + 2 ∑_{i<j} cov(X_i, X_j).

(iii) If X_i and X_j are independent for all pairs (X_i, X_j), then cov(X_i, X_j) = 0, so

  var(∑_{i=1}^n X_i) = ∑_{i=1}^n var(X_i).

16.2: The correlation inequality

Let X have variance σ_X^2 and Y have variance σ_Y^2. Then

  0 ≤ var(X/σ_X ± Y/σ_Y) = var(X)/σ_X^2 + var(Y)/σ_Y^2 ± 2 cov(X, Y)/(σ_X σ_Y) = 2(1 ± ρ(X, Y)).

Hence 0 ≤ 1 − ρ(X, Y), so ρ ≤ 1, and 0 ≤ 1 + ρ(X, Y), so ρ ≥ −1; i.e. −1 ≤ ρ ≤ 1.
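The identities in 15.1(ii) and 15.2 are easy to sanity-check by simulation. The following is only an illustrative sketch, not part of the original notes: it assumes NumPy, and the choices X ~ N(0, 1), Y ~ U(0, 1), g_1(x) = x^2 and g_2(y) = e^y are arbitrary.

import numpy as np

rng = np.random.default_rng(0)     # arbitrary seed for reproducibility
n = 1_000_000

# Independent X ~ N(0, 1) and Y ~ Uniform(0, 1); g1, g2 are arbitrary test functions.
X = rng.standard_normal(n)
Y = rng.uniform(0.0, 1.0, n)
g1 = lambda x: x**2
g2 = lambda y: np.exp(y)

# 15.1(ii): the expectation of a sum equals the sum of expectations.
print(np.mean(g1(X) + g2(Y)), np.mean(g1(X)) + np.mean(g2(Y)))

# 15.2: for independent X, Y the expectation of a product factorizes.
print(np.mean(g1(X) * g2(Y)), np.mean(g1(X)) * np.mean(g2(Y)))

With a million draws the two numbers printed on each line should agree to two or three decimal places.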
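The counterexample in 15.3(iv) can be checked the same way. Again this is a hedged sketch (the seed and the particular dependence check are arbitrary choices), using the uniform-on-the-circle construction from the notes: X = cos(U) and Y = sin(U) are dependent, since X^2 + Y^2 = 1, yet their covariance and correlation are essentially zero.

import numpy as np

rng = np.random.default_rng(1)
U = rng.uniform(0.0, 2.0 * np.pi, 1_000_000)
X, Y = np.cos(U), np.sin(U)

# cov(X, Y) and rho(X, Y) are both (approximately) zero:
print(np.cov(X, Y)[0, 1])          # near 0
print(np.corrcoef(X, Y)[0, 1])     # near 0

# But X and Y are not independent: e.g. E(X^2 Y^2) = 1/8, while E(X^2) E(Y^2) = 1/4.
print(np.mean(X**2 * Y**2), np.mean(X**2) * np.mean(Y**2))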
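Similarly, the sum formula in 16.1(ii) and the bound in 16.2 can be illustrated numerically. This is a sketch only; the linear construction Y = 0.6 X + noise is just an arbitrary way to make X and Y correlated.

import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

# Two correlated variables: X ~ N(0, 1), Y = 0.6 X + independent N(0, 1) noise.
X = rng.standard_normal(n)
Y = 0.6 * X + rng.standard_normal(n)

# 16.1(ii) with two terms: var(X + Y) = var(X) + var(Y) + 2 cov(X, Y).
lhs = np.var(X + Y, ddof=1)
rhs = np.var(X, ddof=1) + np.var(Y, ddof=1) + 2 * np.cov(X, Y)[0, 1]
print(lhs, rhs)

# 16.2: the correlation coefficient always lies in [-1, 1].
rho = np.corrcoef(X, Y)[0, 1]
print(rho, -1.0 <= rho <= 1.0)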
16.3: Mean and variance of a sample mean

Let X_1, ..., X_n be independent and identically distributed (i.i.d.), each with mean µ and variance σ^2. The sample mean is defined as X̄ = n^{−1} ∑_i X_i. Then

  E(X̄) = E(n^{−1} ∑_{i=1}^n X_i) = n^{−1} ∑_{i=1}^n E(X_i) = (nµ)/n = µ,

  var(X̄) = var(n^{−1} ∑_{i=1}^n X_i) = n^{−2} ∑_{i=1}^n var(X_i) = (nσ^2)/n^2 = σ^2/n.

We can estimate µ by X̄, and the variance of this estimator is σ^2/n; but now we need to estimate σ^2.

16.4: Mean of a sample variance

Let X_1, ..., X_n be i.i.d., each with mean µ and variance σ^2. Note that E(∑_i (X_i − µ)^2) = nσ^2, but we usually do not know µ. The sample variance is defined as S^2 = ∑_i (X_i − X̄)^2/(n − 1). Then

  (n − 1)S^2 ≡ ∑_{i=1}^n (X_i − X̄)^2 = ∑_{i=1}^n (X_i − µ + µ − X̄)^2
             = ∑_{i=1}^n ((X_i − µ)^2 − 2(X_i − µ)(X̄ − µ) + (X̄ − µ)^2)
             = ∑_{i=1}^n (X_i − µ)^2 − 2(X̄ − µ) ∑_{i=1}^n (X_i − µ) + n(X̄ − µ)^2
             = ∑_{i=1}^n (X_i − µ)^2 − 2(X̄ − µ) · n(X̄ − µ) + n(X̄ − µ)^2
             = ∑_{i=1}^n (X_i − µ)^2 − n(X̄ − µ)^2,

so

  E(S^2) = (n − 1)^{−1} (∑_{i=1}^n E((X_i − µ)^2) − n E((X̄ − µ)^2))
         = (n − 1)^{−1} (n var(X_i) − n var(X̄))
         = (n − 1)^{−1} (nσ^2 − n(σ^2/n)) = σ^2.
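A small simulation illustrates 16.3. This sketch is not part of the original notes; the Exponential(1) population (for which µ = 1 and σ^2 = 1) and the sample size n = 25 are arbitrary choices.

import numpy as np

rng = np.random.default_rng(3)
n, reps = 25, 200_000
mu, sigma2 = 1.0, 1.0              # Exponential(1) has mean 1 and variance 1

# reps independent samples of size n; one sample mean per row.
samples = rng.exponential(1.0, size=(reps, n))
xbar = samples.mean(axis=1)

print(xbar.mean(), mu)             # E(Xbar) is approximately mu
print(xbar.var(ddof=1), sigma2/n)  # var(Xbar) is approximately sigma^2 / n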
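The same kind of simulation illustrates 16.4: averaging S^2 over many samples gives roughly σ^2 when the divisor is n − 1, while dividing by n underestimates σ^2 by the factor (n − 1)/n. Again this is only an illustrative sketch with arbitrary choices of population and sample size.

import numpy as np

rng = np.random.default_rng(4)
n, reps = 10, 200_000
sigma2 = 1.0                                 # Exponential(1): variance 1

samples = rng.exponential(1.0, size=(reps, n))

s2_unbiased = samples.var(axis=1, ddof=1)    # divide by n - 1, as in 16.4
s2_biased = samples.var(axis=1, ddof=0)      # divide by n instead

print(s2_unbiased.mean(), sigma2)                 # approximately sigma^2
print(s2_biased.mean(), sigma2 * (n - 1) / n)     # approximately sigma^2 (n-1)/n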