A lecture transcript from Appendix B&C of Wooldridge's textbook, focusing on probability and statistics beyond the univariate case. The lecture covers confidence intervals, hypothesis testing, Type I and Type II errors, joint distributions, marginal distributions, independence, covariance, and correlation, with examples and formulas to help students understand these concepts.

Typology: Exams

Pre 2010


Lecture 4: A Review of Probability and Statistics, Beyond the Univariate Case
Wooldridge Appendix B & C

Review from Last Time: Confidence Interval Example

It needs to be a two-tailed test for hypothesis testing to mirror confidence intervals. Setup: sample size $n$, population variance $\sigma^2$, sample mean $\bar{X}$.

Hypothesis testing checks against a critical value: reject $H_0$ if
$\frac{\bar{X} - \mu_0}{\sigma/\sqrt{n}} \ge z_{\alpha/2}$ or $\frac{\bar{X} - \mu_0}{\sigma/\sqrt{n}} \le -z_{\alpha/2}$.

CI: use the critical value to construct the interval $\bar{X} \pm z_{\alpha/2}\,\frac{\sigma}{\sqrt{n}}$ and see whether $\mu_0$ falls outside it. For 95% of the random samples we pull, the constructed confidence interval will contain $\mu$.

Type I and Type II Errors

In choosing the Type I error rate to target, it's often a matter of consequence/severity. Example from class (people die vs. people get too much treatment): $H_0$: cancer; $H_A$: no cancer. A Type I error is the probability that you say someone doesn't have cancer when in fact they do, so they don't get treated. That's a worse outcome than treating people who don't have cancer.

By shrinking the probability of a Type I error (making it unlikely that the null hypothesis is rejected), we must decrease the power of our test (increase the Type II error). Usually we can't estimate Type II error probabilities because they depend on $H_A$, which is usually not well defined.

Ways to increase power (i.e., deal with Type II errors): increase the sample size; use a more efficient measure to get a smaller $\sigma$.

Joint Distribution

We are now interested in looking at two (or more) random variables; we often ask important questions that rely on more than one factor. If we have two discrete random variables $X$ and $Y$, then the joint distribution is described by the joint probability density function (pdf) of $X, Y$: $f_{X,Y}(x, y) = P(X = x, Y = y)$. This easily translates to more than two variables.

Marginal pdf: in the discrete case, the marginal pdf of $X$ is $f_X(x) = \sum_y f(x, y)$, summing over all values of $Y$. Similarly for the marginal pdf of $Y$.

Example 1.4: $E(X) = \sum x_i f(x_i)$, so
$E(X) = 2(\tfrac{1}{9}) + 3(\tfrac{2}{9}) + 4(\tfrac{3}{9}) + 5(\tfrac{2}{9}) + 6(\tfrac{1}{9}) = \tfrac{2+6+12+10+6}{9} = \tfrac{36}{9} = 4$
$E(Y) = 0(\tfrac{3}{9}) + 1(\tfrac{4}{9}) + 2(\tfrac{2}{9}) = \tfrac{8}{9}$

Independence

Why is independence important? Knowing the outcome of $X$ does not change the probabilities of the possible outcomes of $Y$. Random variables (discrete or continuous) $X$ and $Y$ are independent iff the joint pdf is the product of the marginal pdfs: $f_{X,Y}(x, y) = f_X(x) f_Y(y)$. In the discrete case (to match what we wrote as the joint distribution): $f_{X,Y}(x, y) = P(X = x)\, P(Y = y)$.

Once we establish independence, we can prove this useful conclusion: if $X$ and $Y$ are independent, then $E(XY) = E(X)E(Y)$. Shown using discrete random variables:
$E(XY) = \sum_x \sum_y xy\, f_{X,Y}(x, y) = \sum_x \sum_y xy\, f_X(x) f_Y(y) = \sum_x x f_X(x) \sum_y y f_Y(y) = E(X)E(Y)$

Covariance

We need summary measures of how variables vary with one another; we are looking to use a single number (as we did with expected value and variance) to summarize something about an entire distribution. The population covariance describes the relationship between two variables in a population.

Set up as follows: $E(X) = \mu_X$ and $E(Y) = \mu_Y$. Covariance is defined as the expected value of the product $(X - \mu_X)(Y - \mu_Y)$. [Why does this make sense? If $X$ is above its mean while $Y$ is above its mean, the product is positive.]

$\mathrm{Cov}(X, Y) = \sigma_{XY} = E[(X - \mu_X)(Y - \mu_Y)]$
$= E[(X - \mu_X)Y - (X - \mu_X)\mu_Y]$
$= E[(X - \mu_X)Y] - \mu_Y \underbrace{E(X - \mu_X)}_{=0}$
$= E[(X - \mu_X)Y] = E(XY) - \mu_X E(Y) = E(XY) - \mu_X \mu_Y$

Go through PS 1.1b; note the similarities to the above:
$\sum_{t=1}^{n} (X_t - \bar{X})(Y_t - \bar{Y}) = \sum (X_t - \bar{X})Y_t - \bar{Y} \underbrace{\sum (X_t - \bar{X})}_{=0 \text{ from Part A}} = \sum (X_t - \bar{X})Y_t$
Part A: $\sum (X_t - \bar{X}) = \sum X_t - n\bar{X} = \sum X_t - n(\tfrac{1}{n}\sum X_t) = 0$

Important thing to note: be aware of measurements! Covariance can be altered simply by multiplying a random variable by a constant. Property: $\mathrm{Cov}(a_1 X + b_1, a_2 Y + b_2) = a_1 a_2 \mathrm{Cov}(X, Y)$.

Use PS 1.4 as an example of how to calculate covariance: $\mathrm{Cov}(X, Y) = E(XY) - E(X)E(Y)$. From before we know $E(X) = 4$ and $E(Y) = \tfrac{8}{9}$.
$E(XY) = (3 \cdot 1)(\tfrac{2}{9}) + (4 \cdot 2)(\tfrac{2}{9}) + (5 \cdot 1)(\tfrac{2}{9}) = \tfrac{6 + 16 + 10}{9} = \tfrac{32}{9}$
$\mathrm{Cov}(X, Y) = \tfrac{32}{9} - 4(\tfrac{8}{9}) = 0$

How does covariance relate to independence? PS 1.3: if $X, Y$ are independent random variables, then $\mathrm{Cov}(X, Y) = 0$. Assume discrete random variables for the first case.
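The duality between the two-tailed z-test and the confidence interval reviewed at the start can be checked numerically. A minimal sketch in Python, assuming a known population variance; the sample numbers (xbar = 52, mu0 = 50, sigma = 10, n = 100) are made up for illustration, and $z_{0.025} \approx 1.96$:

```python
import math

def z_test_and_ci(xbar, mu0, sigma, n, z_crit=1.959964):
    """Two-tailed z-test and the matching 95% CI (known population variance)."""
    se = sigma / math.sqrt(n)
    z = (xbar - mu0) / se
    reject = abs(z) >= z_crit                      # test rejects H0
    ci = (xbar - z_crit * se, xbar + z_crit * se)  # CI: Xbar +/- z * sigma/sqrt(n)
    outside = mu0 < ci[0] or mu0 > ci[1]           # mu0 falls outside the CI
    return reject, ci, outside

# The test rejects exactly when mu0 falls outside the constructed CI.
reject, ci, outside = z_test_and_ci(xbar=52.0, mu0=50.0, sigma=10.0, n=100)
```

Here $z = 2/(10/\sqrt{100}) = 2 \ge 1.96$, so the test rejects, and the CI $(50.04,\ 53.96)$ indeed excludes $\mu_0 = 50$.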
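The two ways to raise power noted above (a larger sample, a smaller $\sigma$) can be made concrete with the standard power formula for the two-sided z-test, $\Phi(-z_{\alpha/2} + \delta\sqrt{n}/\sigma) + \Phi(-z_{\alpha/2} - \delta\sqrt{n}/\sigma)$, where $\delta$ is the true deviation from $\mu_0$. A sketch with arbitrarily chosen numbers (not from the notes):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_two_sided_z(delta, sigma, n, z_crit=1.959964):
    """Power of the two-sided z-test when the true mean is mu0 + delta."""
    shift = delta * math.sqrt(n) / sigma
    return norm_cdf(-z_crit + shift) + norm_cdf(-z_crit - shift)

# Larger n raises power, i.e. lowers the Type II error rate beta = 1 - power.
p25 = power_two_sided_z(delta=2.0, sigma=10.0, n=25)
p100 = power_two_sided_z(delta=2.0, sigma=10.0, n=100)
```

Halving $\sigma$ has the same effect as quadrupling $n$, since power depends on them only through $\sqrt{n}/\sigma$.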
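The notes quote Example 1.4 / PS 1.4 without the underlying joint table. One process that reproduces the stated marginals and $E(XY)$ — an assumption for illustration, not something given in the notes — is two independent rolls $D_1, D_2$, each uniform on $\{1, 2, 3\}$, with $X = D_1 + D_2$ and $Y = |D_1 - D_2|$. The sketch below rebuilds the joint pdf, the marginals, and the covariance, and shows that $\mathrm{Cov}(X, Y) = 0$ here even though $X$ and $Y$ are not independent:

```python
from fractions import Fraction
from itertools import product

# Assumed generating process: D1, D2 independent and uniform on {1, 2, 3};
# X = D1 + D2, Y = |D1 - D2|. Each of the 9 outcomes has probability 1/9.
joint = {}
for d1, d2 in product((1, 2, 3), repeat=2):
    key = (d1 + d2, abs(d1 - d2))
    joint[key] = joint.get(key, Fraction(0)) + Fraction(1, 9)

# Marginals: sum the joint pdf over the other variable.
fX, fY = {}, {}
for (x, y), p in joint.items():
    fX[x] = fX.get(x, Fraction(0)) + p
    fY[y] = fY.get(y, Fraction(0)) + p

EX = sum(x * p for x, p in fX.items())               # 4, as in Example 1.4
EY = sum(y * p for y, p in fY.items())               # 8/9
EXY = sum(x * y * p for (x, y), p in joint.items())  # 32/9
cov = EXY - EX * EY                                  # 0, as in PS 1.4

# Zero covariance does NOT imply independence:
# f(2, 0) = 1/9, but fX(2) * fY(0) = (1/9)(3/9) = 1/27.
```

This makes the one-way implication in PS 1.3 vivid: independence forces zero covariance, but zero covariance does not force independence.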
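Both algebraic facts near the end — the PS 1.1b identity $\sum (X_t - \bar{X})(Y_t - \bar{Y}) = \sum (X_t - \bar{X})Y_t$ and the scaling property $\mathrm{Cov}(a_1 X + b_1, a_2 Y + b_2) = a_1 a_2 \mathrm{Cov}(X, Y)$ — can be verified on sample data. The numbers below are arbitrary made-up data, not from the problem sets:

```python
xs = [1.0, 2.0, 4.0, 7.0]
ys = [2.0, 1.0, 5.0, 4.0]
n = len(xs)
xbar = sum(xs) / n
ybar = sum(ys) / n

# PS 1.1b: the Ybar term drops out because deviations from the mean sum to 0.
full = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
short = sum((x - xbar) * y for x, y in zip(xs, ys))
assert abs(full - short) < 1e-9

def cov(u, v):
    """Sample covariance (dividing by n, matching the population formula)."""
    ub, vb = sum(u) / len(u), sum(v) / len(v)
    return sum((a - ub) * (b - vb) for a, b in zip(u, v)) / len(u)

# Scaling property: shifts b1, b2 do nothing; scales multiply through.
a1, b1, a2, b2 = 3.0, 5.0, -2.0, 1.0
lhs = cov([a1 * x + b1 for x in xs], [a2 * y + b2 for y in ys])
rhs = a1 * a2 * cov(xs, ys)
```

This is why "be aware of measurements" matters: restating $X$ in different units rescales the covariance by the conversion factor, which is one motivation for the unit-free correlation coefficient.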