Covariance

When discussing a single RV, we used the notion of variance to capture how much that RV could differ from its expected value. With two RVs we have a similar notion called the covariance, defined as

cov(X, Y) = E[(X − E[X])(Y − E[Y])] = E[XY] − E[X] E[Y].

Note that cov(X, X) = var(X). It is straightforward to show that if two RVs are independent, then cov(X, Y) = 0.

The covariance gives a way to quantify how two variables move together when they are not independent. When cov(X, Y) > 0, X and Y tend to have the same sign (i.e., one being positive indicates that the other is likely positive). In contrast, when cov(X, Y) < 0, X and Y tend to have opposite signs (i.e., one being positive indicates that the other is likely negative).

To show that the covariance actually quantifies something important, remember our rule that var(X + Y) = var(X) + var(Y) only when X and Y are independent? What about when the RVs are dependent? In general, we can say that:

var(X + Y) = var(X) + var(Y) + 2 cov(X, Y).

Correlation coefficient

The covariance discussed above seems like a very helpful way to quantify the dependence between RVs. One glaring problem is that it is not normalized. While the covariance can be bounded using the individual variances, any bound derived in this way would depend on the distribution of the variables (or even the scaling of the variables), so it is hard to compare from one situation to the next.

To alleviate this, we can introduce the correlation coefficient:

ρ(X, Y) = cov(X, Y) / √(var(X) var(Y)),

which can be shown to satisfy −1 ≤ ρ(X, Y) ≤ 1. When ρ(X, Y) = 0 we call the RVs uncorrelated.

ECE 3077 Notes by M. Davenport, J. Romberg and C. Rozell. Last updated 21:24, June 25, 2014
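The identities above can be checked numerically. The sketch below (an illustration, not part of the original notes; the variable names and the choice Y = 0.5X + noise are assumptions for the example) draws correlated samples, computes the covariance via E[XY] − E[X]E[Y], and verifies that var(X + Y) = var(X) + var(Y) + 2 cov(X, Y) and that the correlation coefficient lands in [−1, 1]:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)   # Y depends on X, so cov(X, Y) > 0

# cov(X, Y) = E[XY] - E[X] E[Y], estimated from the samples
cov_xy = np.mean(x * y) - np.mean(x) * np.mean(y)

# var(X + Y) = var(X) + var(Y) + 2 cov(X, Y) holds exactly
# for the (biased) sample variances of the same samples
lhs = np.var(x + y)
rhs = np.var(x) + np.var(y) + 2 * cov_xy

# correlation coefficient: normalized covariance, always in [-1, 1]
rho = cov_xy / np.sqrt(np.var(x) * np.var(y))
```

For these samples ρ is roughly 0.45: positive (X and Y tend to move together) but well below 1, since the added noise means Y is not determined by X.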
The correlation coefficient gives us a way to quantify dependence on a normalized scale that can be compared from one situation to the next. The sign of ρ(X, Y) tells you whether the RVs tend to move in the same or opposite directions. The magnitude of ρ(X, Y) tells you the extent to which this tendency holds. A value of |ρ(X, Y)| = 1 means that one RV can be exactly determined from the other (i.e., there is no randomness once you know one of the RVs).

We call X and Y uncorrelated if E[XY] = E[X] E[Y]. It is easy to see that uncorrelated RVs have ρ(X, Y) = 0.

Independence and correlation/covariance

It is straightforward to show that:

X, Y independent ⇒ X, Y uncorrelated and cov(X, Y) = 0,
X, Y uncorrelated or cov(X, Y) = 0 ⇏ X, Y independent.

So independent random variables are uncorrelated (and have covariance of zero), but uncorrelated random variables are not necessarily independent.
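A standard counterexample makes the one-way implication concrete. The sketch below (an assumed example for illustration, not from the notes) takes X uniform on {−1, 0, 1} and Y = X²: Y is a deterministic function of X, so the two are clearly dependent, yet E[XY] = E[X³] = 0 = E[X] E[Y], so cov(X, Y) = 0 and the RVs are uncorrelated.

```python
import numpy as np

# X takes the values -1, 0, 1 with equal probability
x = np.array([-1.0, 0.0, 1.0])
y = x ** 2                       # Y = X^2 is fully determined by X

# cov(X, Y) = E[XY] - E[X] E[Y]; each outcome is equally likely,
# so expectations are plain averages over the three outcomes
cov_xy = np.mean(x * y) - np.mean(x) * np.mean(y)
```

Here cov_xy comes out to exactly 0 even though knowing X pins down Y completely, which is why zero covariance (or zero correlation) must never be read as independence.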