ISYE 2028 A and B

Lecture 10

Sampling Distributions and Test Statistics

Dr. Kobi Abayomi

April 2, 2009

1 Introduction: The context for Confidence Intervals and Hypothesis Testing: Sampling Distributions for Test Statistics

Here is a (non-exhaustive) illustration of the population-sample dichotomy that is at the center of what we are studying in this introductory course.

Population                      Sample
Random Variable                 Statistic
Population Mean, Expectation    Sample Mean
Parameter                       Estimate
μ                               x̄

We make assumptions, or define a population, to "fit" observed data. Our data are information about events we wish to speak about, or gain inference about.

The natural framework is that of an experiment: the population composes our assumptions about what might happen; the sample data compose what we actually observe. Our beliefs about what we see (the sample distribution) are related to our general assumptions (the population distribution).

We have canonical population models in our overview of random variables. Bernoulli, Binomial, Poisson, Normal, Exponential, etc. characterize types of experiments; we use these characterizations to make statements about data.

Bernoulli distribution to model simple events that can either happen or not. Like whether a coin turns up heads or not.

Binomial distribution to model sums or totals of Bernoulli events. Like whether a coin turns up heads k times in n tosses.

Poisson distribution to model Binomial type events when the probability of any event is very low, and the number of events is very high. Like the number of soldiers who are kicked in the head in a military campaign in 18th century France.

Exponential distribution to model continuous, positive events like waiting times, or time to failure.

Normal distribution to model averages of events, or events where the outcomes are continuous, or when we just don't know any better (ha!).

Chi-Square distribution to model squared deviations, sums of squared deviations, and squared normal random variables.

Moving on, we use these canonical random variables to make statements about observed data.

The setup is almost always this: we compare observed data to an expected value under our assumptions. This comparison yields a test statistic.

We then use our probability model (i.e. our fundamental assumption about population for the data) to make a probabilistic statement about the population parameter.

In general, a test statistic looks like this:

$$ \text{Test Statistic} = \frac{\text{observed value} - \text{expected value}}{\text{standard error}} \qquad (1) $$

In general the "observed value" will be some statistic or function of data. The "expected value" will be some parameter, the population counterpart of the statistic. We call statistics used in this context — to estimate population parameters — estimators. A popular notation is to use θ̂, read "theta-hat", as an estimator of the population parameter θ. We have already been exposed to one such estimator: μ̂ = x̄, the sample mean.

We use functions of data (statistics) to estimate parameters, and then our test statistics are rescalings by the standard deviation of our estimator. We call

$$ \sqrt{\operatorname{Var}(\hat{\theta})} $$

the standard error of the estimator.

2 Sampling Distributions as Test Statistics

We haven’t yet looked at hypothesis testing, but we have used a specific example of a test statistic for the population mean.

For instance, suppose X ∼ (μ, σ²) is a random variable [model], and we collect some data x₁, ..., xₙ with sample mean

$$ \bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i. $$

2.1 The distribution of the sample mean, variance known

You will recall that the sampling distribution x̄ ∼ N(μ, σ²/n) can be used to construct the test statistic

$$ Z = \frac{\bar{x} - \mu_0}{\sqrt{\sigma^2/n}} $$

which has the standard normal distribution N(0, 1).

The Z statistic is the deviation of the data from the null hypothesis, over its standard deviation. In words:

$$ Z \equiv \frac{\text{obs} - \text{exp}}{\text{S.D.}(\text{obs})} $$

is the statistic we want to use if we want to make a probabilistic statement about the true mean μ, using observed data X1, ..., Xn.

2.1.1 Example

We believe X ∼ (μ = 800, σ² = 40²). Find the probability that a random sample of 16 bulbs will have an average life of less than 775 hours.

Solution

We know X̄ ∼ N(μ_X̄ = 800, σ²_X̄ = 40²/16 = 100).

Then:

$$ P(\bar{X} < 775) = P\!\left(Z < \frac{775 - 800}{\sqrt{100}}\right) = P(Z < -2.5) = 0.0062 $$
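A quick check of this calculation in R, using the values from the example above:

pnorm(775, mean = 800, sd = 40/sqrt(16))   # P(Xbar < 775), about 0.0062
pnorm(-2.5)                                # same probability after standardizing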

2.1.2 Example

We saw that we could use the number of successes in a Binomial experiment as an estimate of the parameter for a Bernoulli.

We let p̂ = Y/n and Y ∼ Bin(n, p); remember that Y = ∑ᵢ Xᵢ and every Xᵢ ∼ Bern(p).^1

Then p̂ ≡ our estimate for the population proportion of success of the Bernoulli experiment notated with X.

Using some numbers for illustration: it is known that .42 of trick-or-treating nutritionists are overweight. How likely is it that 50 nutritionists, out of a sample of 100, are pleasantly plump?

Solution

First notice that we are given the population proportion p = .42 by the words "it is known". Notice also that we have crossed into the world of data: if the word "sample" is deleted then we could see this as merely a standard Binomial probability question.^2

Here we are asking a question about the distribution of p̂, the sample estimate of the population proportion: P(p̂ ≥ .5).

We know that the distribution of the sample estimate of the population proportion is normal: p̂ ∼ N(μ = .42, σ² = (.42)(.58)/100).

We'll use a z-statistic:

$$ z_o = \frac{\hat{p} - \mu}{\sigma} = \frac{.5 - .42}{\left((.42)(.58)/100\right)^{1/2}} \approx 1.63 $$

Then the probability that we'll see 50 or more heavy nutritionists out of a sample of 100 is P(p̂ ≥ .5) = P(Z ≥ z_o) = P(Z ≥ 1.63) ≈ .05.
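In R, the same normal approximation, with the exact Binomial tail shown for comparison:

p <- .42; n <- 100
se <- sqrt(p*(1 - p)/n)                              # standard error of p-hat
pnorm((.5 - p)/se, lower.tail = FALSE)               # normal approximation, about .05
pbinom(49, size = n, prob = p, lower.tail = FALSE)   # exact P(Y >= 50), a bit larger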

What do I want you to get from this illustration?

  • First: The sampling distribution is truly a distribution - we can answer probability questions about sample data by appealing directly to the sample distribution.
  • Second: The distribution of the sample mean is Normal. This is the result of the central limit theorem. Regardless of the distribution of the data, if our estimate of a parameter is a sample average (an average of data) we can use the CLT to make probability statements.
  • Third: Notice the special use of the Binomial setup to generate estimates of the Bernoulli parameter.
  • Fourth: Notice our usual construction of Z so that we can use our standard normal tables (in the back of the book or on your computer).

(^1) This is just the definition of a Binomial experiment.
(^2) And the answer would be P(Y ≥ 50) = ∑_{k=50}^{100} C(100, k) (.42)^k (.58)^{100−k}.

Situations often arise where the sample mean cannot sufficiently describe, or test for, important hypothetical differences in populations. We must appeal to other distributions, to other quantifications of difference, to test other hypotheses. A useful alternative is...

2.2 The T Distribution

In many situations we cannot assume that we know the variance of the sample mean. As well, we often do not have enough samples to apply the central limit theorem to the sampling distribution. In these situations we construct the t-statistic:

$$ t = \frac{\bar{x} - \mu}{s/\sqrt{n}} \qquad (2) $$

The t-distribution, T ∼ t(df ) is an approximation to the normal distribution. Notice I have written df as the parameter of the distribution.^3

The T distribution is centered at zero, just like the Z.^4 We let df ≡ degrees of freedom.

When we talk about sample data, we loosely define "degrees of freedom" as the number of independent observations: the number of observations we have left after we subtract the number of parameters we have to estimate.

df ≡ n − k

where we let n = the number of observations and k = the number of parameters to be estimated.

Notice that our constructed t-statistic is a deviation, which we expect to be Normal, rescaled (or divided) by our estimate of the standard deviation, √(s²/n).

(^3) What, if any, are the parameters for the Z ∼ N(0, 1) distribution? The parameters are μ = 0 and σ² = 1.
(^4) It turns out that E(T) = 0 and Var(T) = r/(r − 2).

Notice that

$$ T = \frac{(\bar{X} - \mu)/(\sigma/\sqrt{n})}{\sqrt{S^2/\sigma^2}} $$

is a standard normal random variable divided by the square root of a (rescaled) chi-square random variable. We showed at the end of lecture 8 that the distribution of X̄ is independent of the distribution of S², and that they are Normal and Chi-Squared, respectively.

We get the density function for the t distribution by writing

$$ T = \frac{Z}{\sqrt{V/r}} $$

with Z and V independent:

$$ f_{Z,V}(z, v) = \frac{1}{\sqrt{2\pi}}\, e^{-z^2/2} \cdot \frac{1}{\Gamma(r/2)\, 2^{r/2}}\, v^{r/2 - 1} e^{-v/2} $$

From what we know about transformations we get the joint distribution (letting U = V):

$$ f_{T,U}(t, u) = f_{Z,V}\!\left(\frac{t\sqrt{u}}{\sqrt{r}},\, u\right) |J| = \ldots $$

and we integrate over U (integrate out the Chi-Squared random variable) to get

$$ f_T(t) = \frac{\Gamma[(r+1)/2]}{\sqrt{\pi r}\,\Gamma(r/2)}\, \left(1 + t^2/r\right)^{-(r+1)/2} $$

Notice that this simplifies to

$$ T = \frac{\bar{X} - \mu}{S/\sqrt{n}} $$

which is the way you use it.
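As a quick sanity check, here is a small R sketch comparing the density formula above with R's built-in dt (the choice r = 5 is arbitrary):

r <- 5
t <- seq(-4, 4, by = 0.5)
f <- gamma((r + 1)/2) / (sqrt(pi*r) * gamma(r/2)) * (1 + t^2/r)^(-(r + 1)/2)
max(abs(f - dt(t, df = r)))   # essentially zero: the formula matches dt()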

2.2.1 Illustration and Setup

Suppose we have a process X ∼ (μ, σ²) with σ² unknown, and our estimator σ̂² = s². We want to look at the sample mean x̄ = (x₁ + ··· + xₙ)/n to gain inference about μ.

We then need to look at the probability distribution for T.

Example

What is the probability of a sample of size 25 having a mean of 518 grams or more, with a standard deviation of 40 grams, if the population mean yield is 500 grams?

Solution

The t statistic is

$$ t = \frac{518 - 500}{40/\sqrt{25}} = 2.25 $$

Then:

$$ P(\bar{X} > 518) = P(t_{24} > 2.25) = 0.02 $$
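In R, using the same values:

t_obs <- (518 - 500)/(40/sqrt(25))       # 2.25
pt(t_obs, df = 24, lower.tail = FALSE)   # P(t_24 > 2.25), about 0.02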

3 The Chi Squared Distribution and Test statistic

Example

Say we are interested in the fairness of a die. Here is the observed distribution after 120 tosses:

Die Face     1    2    3    4    5    6
Obs. Count  30   17    5   23   24   21

What is the probability that the die is fair?

Using only what we have done so far, we could test the hypothesis that the die is fair by doing a test of the mean:

What is the probability that the mean is 3.5?

We calculate the sample mean to be x̄ = 3.475. Using the variance of a fair die, σ² = 2.91, we can compute the sampling distribution, and thus the value of the z-statistic is

$$ z_0 = \frac{3.475 - 3.5}{\sqrt{2.91/120}} = -.161 $$

This yields

$$ P(Z \leq z_o) = .4364, $$

too high a probability for us to say the die is unfair.

For a better test^5 in this situation, I'll point out that the expected number of counts for each die face, under the hypothesis that the die is fair and each face is equally likely, should be (1/6) ∗ 120 = 20. Looking at our data, it seems we have more than 20 in some cases, and less than 20 in others; the positive and negative deviances tend to cancel out.

Die Face     1    2    3    4    5    6
Obs. Count  30   17    5   23   24   21
Exp. Count  20   20   20   20   20   20

A better test statistic here is the Chi-square^6 statistic, χ²:

$$ \chi^2 = \sum_{i}^{n} \frac{(\text{obs}_i - \text{exp}_i)^2}{\text{exp}_i} $$

The Chi-squared distribution is strictly positive and takes one parameter, ν - the degrees of freedom, or number of independent observations (number of observations minus the number of parameters to be estimated).

3.1 The Chi-Square test for Goodness of Fit

In general, the Chi-square statistic is a test of Goodness of Fit: how well the data fit, distributionally. Large values of the Chi-square statistic indicate large deviations of the observed from the expected, thus we reject the null hypothesis for large values of the test statistic. For Goodness of Fit type hypothesis tests, the deviations have already been squared: the tests are naturally one sided.

The observed count at each bin is obvious and collected in the data. We must calculate the expected count for each bin under the null hypothesis. Here, if the die is fair, the probability of getting in any bin is: P(Die is 1) = ··· = P(Die is 6) = 1/6. So the expected number of counts in each bin is (1/6) ∗ 120 = 20. This is the general procedure: if I call P(Bin_i) = π_i, then:

$$ \text{Expected}_i = \pi_i \cdot n $$

(^5) More powerful.
(^6) The χ² distribution, in general, is the sum of squared independent standard normal random variables. That is, χ² = ∑ᵢⁿ Zᵢ², where each Zᵢ ∼ N(0, 1). We say χ² has n degrees of freedom.

and we are 'testing' if

$$ \pi_i \neq \frac{n_i}{n}, \quad \text{for at least one } i $$

Here, our observed test statistic is

$$ \chi^2_o = \frac{(30 - 20)^2}{20} + \cdots + \frac{(21 - 20)^2}{20} = 18.00. $$

The number of degrees of freedom is n − 1, here 6 − 1 = 5. Notice that the total number of observations is fixed; that is how we calculate the expected frequency. Once the total is set we lose a degree of freedom.

Now

$$ P(\chi^2_5 > 18.00) < 0.005, $$

from the table in the back of the book (which you should familiarize yourself with). So we say the die is unfair.
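The same test in R, either by hand or with chisq.test (which uses equally likely bins by default):

obs <- c(30, 17, 5, 23, 24, 21)
exp <- rep(120/6, 6)                       # 20 expected counts per face under fairness
sum((obs - exp)^2 / exp)                   # chi-square statistic: 18
pchisq(18, df = 5, lower.tail = FALSE)     # p-value, about 0.003
chisq.test(obs)                            # the same test in one call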

3.2 The Chi-Square test for independence, the Two-Way layout

The Chi-Square test is useful for the contingency table, or two-way setup. Remember the contingency table from Chapter 1: a variable on rows, a variable on columns, each cell has the observed counts for each bivariate value of the variable.

We used this example:

                 Fashion Level
Class Level    Low   Middle   High   Total
Graduate         6        4      1      11
PhD.             5        1      2       8
Pre-K           30       25     75     130
Total           41       30     78     149

So, briefly, if we let X = Class Level and Y = Fashion Level, then the observed count for (X = Graduate and Y = Low) is 6, and the observed probability is P(X = Graduate and Y = Low) = 6/149 ≈ .04.

Under a null hypothesis that the distribution of Class level is independent of Fashion level, the probability P(X = Graduate and Y = Low) = P(X = Graduate) ∗ P(Y = Low); we would just multiply the marginal probabilities. I'll use this notation: nij ≡ the number of observations in row i, column j. As well, ni., n.j, n.. are the sum over row i, the sum over column j, and the total, respectively.

Then P(X = row i and Y = col j) = (ni./n..)(n.j/n..). So the expected number of counts in row i and column j, under a hypothesis of independence, is:

$$ \text{Expected}_{ij} = \frac{n_{i.}\, n_{.j}}{n_{..}} $$

For our data here we calculate

$$ \chi^2_o = \frac{(6 - 3.03)^2}{3.03} + \cdots + \frac{(75 - 68.05)^2}{68.05} = 14.92. $$

And

$$ P(\chi^2_4 > 14.92) < 0.005. $$

We conclude that class level and fashion are not independent.
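In R, chisq.test computes the expected counts and the statistic directly from the table (it will warn that some expected counts are small, which is worth keeping in mind):

tab <- matrix(c( 6,  4,  1,
                 5,  1,  2,
                30, 25, 75),
              nrow = 3, byrow = TRUE,
              dimnames = list(Class = c("Graduate", "PhD", "Pre-K"),
                              Fashion = c("Low", "Middle", "High")))
chisq.test(tab)            # X-squared about 14.9 on 4 degrees of freedom
chisq.test(tab)$expected   # expected counts under independence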

4 F-distribution for ratio of variances

If X₁, ..., Xₘ is distributed N(μ₁, σ₁²) and Y₁, ..., Yₙ is distributed N(μ₂, σ₂²), then the ratio

$$ F = \frac{S_1^2/\sigma_1^2}{S_2^2/\sigma_2^2} \qquad (3) $$

has what we call an F distribution with numerator degrees of freedom m − 1 and denominator degrees of freedom n − 1.

F is the ratio of two independent chi-squared variables, each divided by its degrees of freedom; call them U ∼ χ²(m − 1) and V ∼ χ²(n − 1). If U = (m − 1)S₁²/σ₁², then U ∼ χ²(m − 1). If V = (n − 1)S₂²/σ₂², then V ∼ χ²(n − 1). Then

$$ F = \frac{\dfrac{(m-1)S_1^2}{\sigma_1^2} \Big/ (m - 1)}{\dfrac{(n-1)S_2^2}{\sigma_2^2} \Big/ (n - 1)} $$

which just simplifies to (3).^7

An important identity for the F-distribution is:

$$ F_{1-\alpha,\nu_1,\nu_2} = F_{\alpha,\nu_2,\nu_1}^{-1} \qquad (4) $$

You’ll notice that you may have to use this fact in looking up values on the F -table in some books.
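A quick numeric check of identity (4) in R, reading the subscripts as quantile levels (the values α = 0.05, ν₁ = 5, ν₂ = 10 are arbitrary):

alpha <- 0.05; v1 <- 5; v2 <- 10
qf(1 - alpha, v1, v2)       # F_{1-alpha, v1, v2}
1 / qf(alpha, v2, v1)       # 1 / F_{alpha, v2, v1}: the same number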

5 Miscellanea

5.1 Boxplots

A boxplot is an illustration of the distribution of a sample.

Figure 1 This boxplot displays data that is skewed to the right and with an IQR = 4.

(^7) It turns out that E(F) = ν₂/(ν₂ − 2) and Var(F) = 2ν₂²(ν₁ + ν₂ − 2) / (ν₁(ν₂ − 2)²(ν₂ − 4)), where U ∼ χ²(ν₁), V ∼ χ²(ν₂), F = (U/ν₁)/(V/ν₂), and U is independent of V.

In R:

x <- rnorm(100)
y <- c(x, rnorm(20, 5))
boxplot(x)
boxplot(x, horizontal = T)
boxplot(y, horizontal = T)

5.2 Quantile-Quantile Plots

A quantile-quantile plot is a plot of data values on the (ordinate) y axis versus theoretical quantiles on the (abscissa) x axis. So a Q-Q plot, in the typical name, should look like a 45 degree line when these values are similar: the plot is

$$ X \text{ vs. } Y \;==\; F_X^{-1}(i/n) \text{ vs. } x_{(i)} \;==\; (x, y) $$

where Fₙ is the empirical cdf (the cdf induced by the data), i.e.

$$ F_n(x) = \frac{1}{n} \sum_{i=1}^{n} \mathbf{1}_{\{x_i \leq x\}} $$

A qth quantile, 0 < q < 1 say, is the value of the random variable (or data) yielded by evaluating the inverse cumulative distribution function at q. That is,

$$ F^{-1}(q) = q\text{th quantile} $$

or

$$ F(q\text{th quantile}) = q $$

Figure 2 This qqplot displays data from a t distribution with 5 degrees of freedom.

In R:

y <- rt(200, df = 5)

qqnorm(y); qqline(y, col = 2)

6 Exercises

  • Do exercises 8.39-8.49 on page 265.
  • Do exercises 8.57 and 8.61 on page 266.
  • Do exercises 8.69 and 8.73 on page 267.
  • Use R to make Normal Quantile-Quantile plots for averages of samples drawn from
    1. Binomial
    2. Poisson

distributions. Draw samples (say of size 10, 25, whatever) and take averages of these samples. Then make the Q-Q plot on the distribution of averages. Look at boxplots of the distributions of averages.
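A starting sketch in R for this last exercise (the particular choices Binomial(10, .3), Poisson(2), samples of size 25, and 1000 replications are placeholders; vary them as the exercise suggests):

xbar.binom <- replicate(1000, mean(rbinom(25, size = 10, prob = 0.3)))   # 1000 averages
xbar.pois  <- replicate(1000, mean(rpois(25, lambda = 2)))
qqnorm(xbar.binom); qqline(xbar.binom, col = 2)
qqnorm(xbar.pois);  qqline(xbar.pois,  col = 2)
boxplot(xbar.binom, horizontal = T)
boxplot(xbar.pois,  horizontal = T)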