A lecture from the Mathematical Statistics for Food and Resource Economics course (AEB 6933), focusing on covariance and correlation between random variables. It covers definitions, properties, and theorems related to covariance and correlation matrices, as well as the derivation of the ordinary least squares estimator for a linear regression equation.

Moments of More than One Random Variable
Lecture IX

AEB 6933 Mathematical Statistics for Food and Resource Economics
Professor Charles B. Moss
Fall 2007

I. Covariance and Correlation

A. Definition 4.3.1:
$$
\begin{aligned}
\mathrm{Cov}(X,Y) &= E\bigl[(X - E[X])(Y - E[Y])\bigr] \\
&= E\bigl[XY - X E[Y] - E[X]\,Y + E[X]E[Y]\bigr] \\
&= E[XY] - E[X]E[Y] - E[X]E[Y] + E[X]E[Y] \\
&= E[XY] - E[X]E[Y]
\end{aligned}
$$

1. Note that this is simply a generalization of the standard variance formulation. Specifically, letting $Y = X$ yields:
$$
\mathrm{Cov}(X,X) = E[X^2] - \bigl(E[X]\bigr)^2 = V(X)
$$

2. From a sample perspective (with the observations expressed as deviations from their means), we have:
$$
V(X) = \frac{1}{n}\sum_{t=1}^{n} x_t^2, \qquad
\mathrm{Cov}(X,Y) = \frac{1}{n}\sum_{t=1}^{n} x_t y_t
$$

3. Together the variances and covariances are typically written as a variance matrix:
$$
\begin{bmatrix} V(X) & \mathrm{Cov}(X,Y) \\ \mathrm{Cov}(Y,X) & V(Y) \end{bmatrix}
= \begin{bmatrix} \sigma_{xx} & \sigma_{xy} \\ \sigma_{yx} & \sigma_{yy} \end{bmatrix}
$$
Note that $\mathrm{Cov}(X,Y) = \mathrm{Cov}(Y,X)$, so $\sigma_{xy} = \sigma_{yx}$.

4. Substituting the sample measures into the variance matrix yields:
$$
S = \begin{bmatrix} s_{xx} & s_{xy} \\ s_{yx} & s_{yy} \end{bmatrix}
= \begin{bmatrix}
\frac{1}{n}\sum_{t=1}^{n} x_t^2 & \frac{1}{n}\sum_{t=1}^{n} x_t y_t \\
\frac{1}{n}\sum_{t=1}^{n} y_t x_t & \frac{1}{n}\sum_{t=1}^{n} y_t^2
\end{bmatrix}
$$
The sample covariance matrix can then be written more compactly as:
$$
S = \frac{1}{n} Z'Z, \qquad
Z = \begin{bmatrix} x_1 & y_1 \\ \vdots & \vdots \\ x_n & y_n \end{bmatrix}
$$

5. In terms of the theoretical distribution, the variance matrix can be written as:
$$
\begin{bmatrix}
\iint x^2 f(x,y)\,dx\,dy & \iint xy\, f(x,y)\,dx\,dy \\
\iint xy\, f(x,y)\,dx\,dy & \iint y^2 f(x,y)\,dx\,dy
\end{bmatrix}
$$

6. Example 4.3.2. Consider the discrete joint distribution:

            x = -1   x = 0   x = 1  |  P(Y=y)
   y =  1    0.167   0.083   0.167  |  0.417
   y =  0    0.083   0.000   0.083  |  0.167
   y = -1    0.167   0.083   0.167  |  0.417
   P(X=x)    0.417   0.167   0.417  |

Since $E[X] = E[Y] = 0$ by symmetry, $\mathrm{Cov}(X,Y) = \sum_x \sum_y xy\, f(x,y)$. The individual terms $xy\,f(x,y)$ are:
$$
\begin{bmatrix} -0.1667 & 0 & 0.1667 \\ 0 & 0 & 0 \\ 0.1667 & 0 & -0.1667 \end{bmatrix}
$$
so the terms cancel and $\mathrm{Cov}(X,Y) = 0$: $X$ and $Y$ are uncorrelated.

B. Theorem 4.3.2.
$$
V(X + Y) = V(X) + V(Y) + 2\,\mathrm{Cov}(X,Y)
$$

1. Minimizing the mean squared error $E\bigl[(Y - \alpha - \beta X)^2\bigr]$ of the linear predictor with respect to $\alpha$ and $\beta$ yields the first-order conditions:
$$
E[XY] - \alpha E[X] - \beta E[X^2] = 0, \qquad
E[Y] - \alpha - \beta E[X] = 0
$$
Substituting $\alpha = E[Y] - \beta E[X]$ into the first condition:
$$
E[XY] - E[X]E[Y] = \beta\bigl(E[X^2] - (E[X])^2\bigr)
\;\Rightarrow\;
\mathrm{Cov}(X,Y) = \beta\, V(X)
\;\Rightarrow\;
\beta = \frac{\mathrm{Cov}(X,Y)}{V(X)}
$$

2. A little razzle-dazzle: in matrix form,
$$
\min_{\beta} S = \min_{\beta}\,(Y - X\beta)'(Y - X\beta)
= \min_{\beta}\bigl( Y'Y - Y'X\beta - \beta'X'Y + \beta'X'X\beta \bigr)
$$
Using matrix differentiation:
$$
\frac{\partial S}{\partial \beta} = -2X'Y + 2X'X\beta = 0
\;\Rightarrow\;
\hat{\beta} = (X'X)^{-1}X'Y
$$

G. Theorem 4.3.6.
The best linear predictor (or more exactly, the minimum mean-squared-error linear predictor) of $Y$ based on $X$ is given by $\alpha^* + \beta^* X$, where $\alpha^*$ and $\beta^*$ are the least squares estimates.

II. Conditional Mean and Variance

A. Definition 4.4.1. Let $(X,Y)$ be a bivariate discrete random variable taking values $(x_i, y_j)$, $i, j = 1, 2, \ldots$. Let $P(y_j \mid X)$ be the conditional probability of $Y = y_j$ given $X$. Let $\phi(\cdot,\cdot)$ be an arbitrary function. Then the conditional mean of $\phi(X,Y)$ given $X$, denoted $E[\phi(X,Y) \mid X]$ or $E_{Y|X}[\phi(X,Y)]$, is defined by:
$$
E_{Y|X}\bigl[\phi(X,Y) \mid X\bigr] = \sum_{i=1}^{\infty} \phi(X, y_i)\, P(y_i \mid X)
$$

B. Definition 4.4.2. Let $(X,Y)$ be a bivariate continuous random variable with conditional density $f(y \mid x)$. Let $\phi(\cdot,\cdot)$ be an arbitrary function. Then the conditional mean of $\phi(X,Y)$ given $X$ is defined by:
$$
E_{Y|X}\bigl[\phi(X,Y) \mid X\bigr] = \int \phi(X, y)\, f(y \mid X)\, dy
$$

C. Theorem 4.4.1 (Law of Iterated Means).
$$
E\bigl[\phi(X,Y)\bigr] = E_X\Bigl[ E_{Y|X}\bigl[\phi(X,Y)\bigr] \Bigr]
$$
(where the symbol $E_X$ denotes the expectation with respect to $X$).

D. Theorem 4.4.2.
$$
V\bigl[\phi(X,Y)\bigr]
= E_X\Bigl[ V_{Y|X}\bigl[\phi(X,Y)\bigr] \Bigr] + V_X\Bigl[ E_{Y|X}\bigl[\phi(X,Y)\bigr] \Bigr]
$$
Proof: Writing $\phi$ for $\phi(X,Y)$, the definition of conditional variance
$$
V_{Y|X}[\phi] = E_{Y|X}[\phi^2] - \bigl(E_{Y|X}[\phi]\bigr)^2
$$
implies
$$
E_X\bigl[ V_{Y|X}[\phi] \bigr] = E[\phi^2] - E_X\Bigl[ \bigl(E_{Y|X}[\phi]\bigr)^2 \Bigr]
$$
By the definition of variance,
$$
V_X\bigl[ E_{Y|X}[\phi] \bigr] = E_X\Bigl[ \bigl(E_{Y|X}[\phi]\bigr)^2 \Bigr] - \bigl(E[\phi]\bigr)^2
$$
Adding these expressions yields
$$
E_X\bigl[ V_{Y|X}[\phi] \bigr] + V_X\bigl[ E_{Y|X}[\phi] \bigr]
= E[\phi^2] - \bigl(E[\phi]\bigr)^2 = V[\phi]
$$

E. Theorem 4.4.3. The best predictor (or the minimum mean-squared-error predictor) of $Y$ based on $X$ is given by $E[Y \mid X]$.
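As a numerical check on Example 4.3.2, the covariance can be computed directly from the joint probability table. This is an illustrative sketch, not part of the original lecture; it uses NumPy and the rounded probabilities from the table:

```python
import numpy as np

# Joint probability table from Example 4.3.2
# (rows: y = 1, 0, -1; columns: x = -1, 0, 1).
f = np.array([[0.167, 0.083, 0.167],
              [0.083, 0.000, 0.083],
              [0.167, 0.083, 0.167]])
x = np.array([-1.0, 0.0, 1.0])
y = np.array([1.0, 0.0, -1.0])

# Marginals: column sums give P(X = x), row sums give P(Y = y).
px = f.sum(axis=0)
py = f.sum(axis=1)
ex = (x * px).sum()   # E[X] = 0 by symmetry
ey = (y * py).sum()   # E[Y] = 0 by symmetry

# Cov(X, Y) = sum over cells of (x - E[X]) (y - E[Y]) f(x, y);
# the positive and negative corner terms cancel.
cov = sum((x[j] - ex) * (y[i] - ey) * f[i, j]
          for i in range(3) for j in range(3))
print(abs(cov) < 1e-12)  # True: X and Y are uncorrelated
```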
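The matrix result $\hat{\beta} = (X'X)^{-1}X'Y$ and the moment result $\beta = \mathrm{Cov}(X,Y)/V(X)$ from the least squares derivation can be verified numerically. The data below are simulated purely for illustration; the true coefficients 2.0 and 0.5 are arbitrary choices, not from the lecture:

```python
import numpy as np

rng = np.random.default_rng(0)  # hypothetical simulated data
n = 500
x = rng.normal(size=n)
y = 2.0 + 0.5 * x + rng.normal(scale=0.3, size=n)

# Matrix OLS: solve (X'X) beta = X'Y, with an intercept column in X.
X = np.column_stack([np.ones(n), x])
beta = np.linalg.solve(X.T @ X, X.T @ y)

# Moment form of the slope: Cov(X, Y) / V(X), using sample deviations.
xd, yd = x - x.mean(), y - y.mean()
slope_moment = (xd * yd).mean() / (xd * xd).mean()

# With an intercept included, the two slope formulas agree exactly.
print(np.allclose(beta[1], slope_moment))  # True
```

The agreement is algebraic, not approximate: the first-order condition for the intercept demeans both variables, which reduces the matrix formula to the covariance-over-variance ratio.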
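Theorem 4.4.2 (the decomposition of variance into the mean of the conditional variance plus the variance of the conditional mean) can likewise be illustrated by simulation. The two-point distribution for $X$ and the conditional normal for $Y \mid X$ below are hypothetical choices made so that both sides of the decomposition are easy to compute exactly:

```python
import numpy as np

rng = np.random.default_rng(1)  # hypothetical simulation for illustration
n = 200_000

# X takes the values 0 and 1 with equal probability; Y | X is normal
# with mean and standard deviation that depend on X.
x = rng.choice([0.0, 1.0], size=n)
cond_mean = np.where(x == 1.0, 3.0, 1.0)  # E[Y | X]
cond_sd = np.where(x == 1.0, 2.0, 1.0)    # sd(Y | X)
y = cond_mean + cond_sd * rng.normal(size=n)

# Exact decomposition: V(Y) = E_X[V(Y|X)] + V_X[E(Y|X)]
#   E_X[V(Y|X)] = 0.5 * 1 + 0.5 * 4 = 2.5
#   V_X[E(Y|X)] = variance of {1, 3}, each w.p. 1/2 = 1.0
# so V(Y) = 3.5; the sample variance should be close to that.
print(abs(y.var() - 3.5) < 0.1)  # True
```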