
A lecture note from a statistics course (BST 631: Statistical Theory I) covering hierarchical models, mixture distributions, multiple random variables, covariance, correlation, and bivariate normal distributions. It includes important results, illustrations, and examples.


Lecture 20 on BST 631: Statistical Theory I – Kui Zhang, 11/07/2006

Review of the previous lecture

Definitions: hierarchical model, mixture distribution.

Theorem: how to calculate the expected value of a random variable based on a hierarchical model: $EX = E(E(X|Y))$.

Examples: Binomial-Poisson hierarchy, Poisson-Gamma mixture.

Chapter 4 – Multiple Random Variables

Chapter 4.4 – Hierarchical Models and Mixture Distributions

Definition 4.4.4: A random variable $X$ is said to have a mixture distribution if the distribution of $X$ depends on a quantity that also has a distribution.

IMPORTANT RESULTS: These results are useful when one is interested only in the expectation and variance of a random variable.

Theorem 4.4.3: If $X$ and $Y$ are any two random variables, then $EX = E(E(X|Y))$, provided that the expectations exist.

Theorem 4.4.7: For any two random variables $X$ and $Y$,
$$\mathrm{Var}\,X = E(\mathrm{Var}(X|Y)) + \mathrm{Var}(E(X|Y)),$$
provided that the expectations exist.

Proof. First we have
$$\mathrm{Var}\,X = E(X - EX)^2 = E\big(X - E(X|Y) + E(X|Y) - EX\big)^2 = E\big(X - E(X|Y)\big)^2 + E\big(E(X|Y) - EX\big)^2 + 2E\big([X - E(X|Y)][E(X|Y) - EX]\big).$$
In addition,
$$E\big([X - E(X|Y)][E(X|Y) - EX]\big) = E\Big(E\big\{[X - E(X|Y)][E(X|Y) - EX] \,\big|\, Y\big\}\Big),$$
where, since $E(X|Y) - EX$ is constant given $Y$,
$$E\big\{[X - E(X|Y)][E(X|Y) - EX] \,\big|\, Y\big\} = \big(E(X|Y) - EX\big)\,E\big\{X - E(X|Y) \,\big|\, Y\big\} = \big(E(X|Y) - EX\big)\big(E(X|Y) - E(X|Y)\big) = 0.$$
Moreover,
$$E\big(X - E(X|Y)\big)^2 = E\Big(E\big\{[X - E(X|Y)]^2 \,\big|\, Y\big\}\Big) = E(\mathrm{Var}(X|Y))$$
and
$$E\big(E(X|Y) - EX\big)^2 = \mathrm{Var}(E(X|Y)).$$
Finally, we have $\mathrm{Var}\,X = E(\mathrm{Var}(X|Y)) + \mathrm{Var}(E(X|Y))$.

Illustrations:

1. Binomial-Poisson hierarchy: the model $X|Y \sim \mathrm{binomial}(Y, p)$ and $Y \sim \mathrm{Poisson}(\lambda)$.
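This hierarchy is easy to check numerically. A minimal simulation sketch (assuming NumPy; the values of $\lambda$, $p$, the sample size, and the seed are arbitrary illustration choices, not part of the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)
lam, p, n = 4.0, 0.3, 1_000_000  # arbitrary illustration values

# Binomial-Poisson hierarchy: draw Y ~ Poisson(lam), then X | Y ~ binomial(Y, p)
y = rng.poisson(lam, size=n)
x = rng.binomial(y, p)

# Theorem 4.4.3: EX = E(E(X|Y)) = E(pY) = p*lam
print(x.mean(), p * lam)

# Theorem 4.4.7: Var X = E(Var(X|Y)) + Var(E(X|Y)) = lam*p*(1-p) + p**2*lam
print(x.var(), lam * p * (1 - p) + p**2 * lam)
```

The sample mean and variance of the simulated $X$ should agree with the analytic values below to Monte Carlo accuracy.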
(Recall that $X \sim \mathrm{Poisson}(p\lambda)$.) Using Theorems 4.4.3 and 4.4.7, we get
$$EX = E(E(X|Y)) = E(pY) = p\lambda$$
and
$$\mathrm{Var}\,X = E(\mathrm{Var}(X|Y)) + \mathrm{Var}(E(X|Y)) = E\big(Yp(1-p)\big) + \mathrm{Var}(pY) = \lambda p(1-p) + p^2\lambda.$$

Definition 4.5.10: Let $-\infty < \mu_X < \infty$, $-\infty < \mu_Y < \infty$, $\sigma_X > 0$, $\sigma_Y > 0$, and $-1 < \rho < 1$ be five real numbers. The bivariate normal pdf with means $\mu_X$ and $\mu_Y$, variances $\sigma_X^2$ and $\sigma_Y^2$, and correlation $\rho$ is the bivariate pdf given by
$$f(x, y) = \frac{1}{2\pi\sigma_X\sigma_Y\sqrt{1-\rho^2}} \exp\left(-\frac{1}{2(1-\rho^2)}\left[\left(\frac{x-\mu_X}{\sigma_X}\right)^2 - 2\rho\left(\frac{x-\mu_X}{\sigma_X}\right)\left(\frac{y-\mu_Y}{\sigma_Y}\right) + \left(\frac{y-\mu_Y}{\sigma_Y}\right)^2\right]\right),$$
for $-\infty < x < \infty$, $-\infty < y < \infty$.

Alternatively, we can rewrite this formula in a more general way. Define
$$\Sigma = \begin{pmatrix} \sigma_X^2 & \rho\sigma_X\sigma_Y \\ \rho\sigma_X\sigma_Y & \sigma_Y^2 \end{pmatrix},$$
which is the variance-covariance matrix of the bivariate normal distribution; then we have
$$f(x, y) = \frac{1}{2\pi\,|\Sigma|^{1/2}} \exp\left(-\frac{1}{2}\,(x-\mu_X, y-\mu_Y)\,\Sigma^{-1}\,(x-\mu_X, y-\mu_Y)^T\right).$$

Properties:

1. The marginal distributions of $X$ and $Y$ are $n(\mu_X, \sigma_X^2)$ and $n(\mu_Y, \sigma_Y^2)$, respectively.

2. The correlation between $X$ and $Y$ is $\rho_{XY} = \rho$.

3. For any constants $a$ and $b$, the distribution of $aX + bY$ is $n(a\mu_X + b\mu_Y,\; a^2\sigma_X^2 + b^2\sigma_Y^2 + 2ab\rho\sigma_X\sigma_Y)$.

4. The conditional distributions of $X$ given $Y = y$ and of $Y$ given $X = x$ are also normal distributions. The pdf of $X|Y = y$ is $n\big(\mu_X + \rho(\sigma_X/\sigma_Y)(y - \mu_Y),\; \sigma_X^2(1-\rho^2)\big)$.

5. $X$ and $Y$ are independent if and only if $\rho = 0$.

Example (Marginal Normality Does Not Imply Joint Normality): Let $X$ and $Y$ be independent random variables, each with pdf $n(0, 1)$, and define
$$Z = \begin{cases} X, & \text{if } XY > 0, \\ -X, & \text{if } XY < 0. \end{cases}$$
Then $Z$ has a normal distribution, but $(Z, Y)$ is not bivariate normal.
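The example above can be seen numerically. In a quick simulation sketch (assuming NumPy; sample size and seed are arbitrary), $Z$ behaves like a standard normal marginally, yet $ZY \ge 0$ always, so the pair $(Z, Y)$ puts no probability mass in two of the four quadrants, which is impossible for a nondegenerate bivariate normal pair:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500_000
x = rng.standard_normal(n)
y = rng.standard_normal(n)

# Z = X if XY > 0, and Z = -X if XY < 0 (equivalently, Z = sign(Y)*|X|)
z = np.where(x * y > 0, x, -x)

# Marginally, Z is n(0, 1): sample mean near 0, sample variance near 1
print(z.mean(), z.var())

# But Z and Y always share the same sign, so Z*Y >= 0 everywhere --
# (Z, Y) assigns zero mass to the second and fourth quadrants
print(np.all(z * y >= 0))
```

Flipping the sign of $X$ is a measure-preserving move for the $n(0,1)$ marginal, which is why $Z$ stays normal even though the joint distribution is badly non-Gaussian.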