This document, from the EECS 501 course at an unspecified university, covers covariance matrices: their definition, properties, applications, and the multidimensional Gaussian pdf. It relates covariance matrices to eigenvalues, eigenvectors, and the Karhunen-Loève expansion, and discusses the significance of the eigenvalue 0 and the grouping of eigenvalues.

Typology: Study notes

Pre 2010


EECS 501                    COVARIANCE MATRICES                    Fall 2001

DEF: A random vector is a vector of random variables ~x = [x_1 ... x_N]'.
Note: Unless otherwise stated, a random vector is a column vector.

DEF: The mean vector of random vector ~x is ~µ = E[~x] = [E[x_1] ... E[x_N]]'.

DEF: The covariance matrix K_x = Λ_x of ~x is the N × N matrix whose (i,j)th
element is (K_x)_ij = λ_{x_i x_j} = E[x_i x_j] − E[x_i]E[x_j].
Note: K_x = E[(~x − E[~x])(~x − E[~x])'] = E[~x ~x'] − E[~x]E[~x]' (outer products).
Also: the outer product ~x ~y' = [x_i y_j] is an N × N matrix of rank 1.
Note: the inner product ~x' ~y = Σ x_i y_i is a scalar, equal to the trace of
the outer product.

Properties:
1. K_x is a symmetric matrix: (K_x)_ij = λ_{x_i x_j} = λ_{x_j x_i} = (K_x)_ji.
2. K_x is a positive semidefinite matrix: for any vector ~a, the scalar
   ~a' K_x ~a = Σ_{i=1..N} Σ_{j=1..N} a_i (K_x)_ij a_j ≥ 0.
3. In particular, the diagonal elements satisfy (K_x)_ii = σ²_{x_i} ≥ 0. This is
   necessary but not sufficient for K_x to be positive semidefinite.

Thm: Let random vector ~y = A~x + ~b for any constant matrix A and vector ~b
(A need not be square). Then E[~y] = A E[~x] + ~b and K_y = A K_x A'.
Proof: K_y = E[(~y − E[~y])(~y − E[~y])'] = E[A(~x − E[~x]) (A(~x − E[~x]))']
= E[A(~x − E[~x])(~x − E[~x])' A'] = A K_x A', using (A~x)' = ~x' A'.

Proof of Property 2: define the rv y = ~a'~x = Σ_{i=1..N} a_i x_i. Then the
theorem gives σ²_y = ~a' K_x ~a, and a variance cannot be negative.

DEF: K_x has N eigenvalues λ_i and associated eigenvectors ~v_i, which solve
K_x ~v_i = λ_i ~v_i, i = 1 ... N.
Fact: K_x real and symmetric implies that the λ_i and ~v_i are real.
Fact: K_x is positive semidefinite iff λ_i ≥ 0, i = 1 ... N.
Matlab: eig computes the λ_i and ~v_i.

Thm: Let V = [~v_1 | ~v_2 | ... | ~v_N] (matrix of eigenvectors) and ~y = V'~x.
Then λ_{y_i y_j} = E[y_i y_j] − E[y_i]E[y_j] = λ_i δ_ij = 0 if i ≠ j.
Proof: K_x ~v_i = λ_i ~v_i → K_x V = V diag[λ_i] → K_y = V' K_x V = diag[λ_i],
since ~v_i' ~v_j = 0 if i ≠ j and the ~v_i have unit norm (V is an orthogonal
matrix: V'V = VV' = I).
Note: This is called decorrelating or (pre)whitening the vector ~x.
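The definitions and theorems above can be checked numerically. Below is a minimal sketch in Python/numpy rather than Matlab; the data, sample size, and matrices are arbitrary illustrative choices, and `np.linalg.eigh` plays the role of the `eig` call mentioned in the notes. Note that the identity K_y = A K_x A' holds exactly for sample covariances as well, which is what the assertions rely on.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 500 samples (rows) of a 3-dimensional random vector ~x.
x = rng.standard_normal((500, 3)) @ rng.standard_normal((3, 3))
Kx = np.cov(x, rowvar=False)          # sample covariance matrix, N x N

# Property 1: Kx is symmetric.
assert np.allclose(Kx, Kx.T)

# Property 2: Kx is positive semidefinite, i.e. all eigenvalues >= 0.
lam, V = np.linalg.eigh(Kx)           # eigenvalues lam_i, eigenvectors in columns of V
assert np.all(lam >= -1e-12)

# Thm: y = A x + b (A need not be square)  =>  Ky = A Kx A'.
A = rng.standard_normal((2, 3))
y = x @ A.T + np.array([1.0, -2.0])   # b shifts the mean, not the covariance
assert np.allclose(np.cov(y, rowvar=False), A @ Kx @ A.T)

# Decorrelation (whitening): y = V' x has covariance diag(lam_i),
# so its components are uncorrelated.
Ky = np.cov(x @ V, rowvar=False)
assert np.allclose(Ky, np.diag(lam))
```

The whitening check works because `eigh` returns an orthogonal V, so V' K_x V collapses to the diagonal matrix of eigenvalues, exactly as in the proof above.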
Decorrelating is an essential part of communications and signal processing in
noise.

DEF: The cross-correlation matrix is K_xy = E[(~x − E[~x])(~y − E[~y])'].
Note: K_xy = K_yx'.
Props: K_{x+y} = K_x + K_y + K_xy + K_yx = K_x + K_y + K_xy + K_xy', which is
symmetric. Compare to the scalar case σ²_{x+y} = σ²_x + σ²_y + 2λ_{xy}.
Also ~z = A~x + ~b → K_zy = A K_xy and K_yz = K_yx A'.

DEF: ~x and ~y are uncorrelated if K_xy = [0], i.e. E[~x ~y'] = E[~x]E[~y]'.