This chapter explores the concepts of confidence intervals and hypothesis tests for means. It introduces the standard deviation of the sampling distribution, the central limit theorem, and the t distributions, then covers one-sample t confidence intervals, one-sample t-tests, and calculating sample sizes. Key concepts include the mean, the standard error of the mean, and the margin of error.
The standard deviation of the sampling distribution of the sample mean is σ/√n; it measures how much the sample statistic varies from sample to sample. It is smaller than the standard deviation of the population by a factor of √n: averages are less variable than individual observations. The mean of the sampling distribution is an unbiased estimate of the population mean μ, i.e., the mean of all possible sample means equals μ.
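A quick way to see this (not from the notes; the exponential population and the numbers below are made up for illustration) is to draw many samples and compare the spread of the sample means to σ/√n:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical skewed population, just for illustration.
population = rng.exponential(scale=2.0, size=100_000)
mu, sigma = population.mean(), population.std()

n = 40             # sample size
n_samples = 10_000 # number of repeated samples

# Draw 10,000 samples of size n (with replacement, fine for illustration)
samples = rng.choice(population, size=(n_samples, n))
sample_means = samples.mean(axis=1)

print("mean of sample means:", sample_means.mean())  # close to mu (unbiased)
print("SD of sample means:  ", sample_means.std())   # close to sigma/sqrt(n)
print("sigma / sqrt(n):     ", sigma / np.sqrt(n))
```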
Central Limit Theorem: when randomly sampling from any population with mean μ and standard deviation σ, if n is large enough, the sampling distribution of the sample mean is approximately normal: x̄ ~ N(μ, σ/√n). Usually n of 25 to 40 is enough to overcome extreme skewness or outliers.
o The mean of a random sample has a sampling distribution whose shape can be approximated by the Normal model; a larger sample will yield a better approximation.
x̄ (X-bar): the sample mean
Standard error of x̄: SE(x̄) = s/√n
Confidence Intervals
x̄ ± t* × SE(x̄), where t* has n − 1 degrees of freedom
o same thing as: x̄ ± ME (margin of error)
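A minimal sketch of this interval in code (the data values are hypothetical; the t* critical value comes from scipy.stats.t.ppf):

```python
import numpy as np
from scipy import stats

data = np.array([4.2, 5.1, 3.8, 4.9, 5.6, 4.4, 5.0, 4.7])  # hypothetical sample
n = len(data)
xbar = data.mean()
s = data.std(ddof=1)          # sample standard deviation (divides by n - 1)
se = s / np.sqrt(n)           # SE(x-bar) = s / sqrt(n)

conf = 0.95
t_star = stats.t.ppf(1 - (1 - conf) / 2, n - 1)   # t* critical value, df = n - 1
me = t_star * se              # margin of error

print(f"{conf:.0%} CI: {xbar - me:.3f} to {xbar + me:.3f}")
```

SciPy can also produce the same interval in one call with stats.t.interval(conf, n - 1, loc=xbar, scale=se).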
For a sample of size n, the sample standard deviation is s = √[ Σ(xᵢ − x̄)² / (n − 1) ], where n − 1 is the degrees of freedom. (SO much easier to plug into your calculator though: STAT > Edit > put the data into L1, then STAT > Calc > 1-Var Stats.)
SEM = s/√n, the standard error of the mean.
T Distributions
One-sample t statistic: t = (x̄ − μ₀) / (s/√n), with n − 1 degrees of freedom.
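For example, computing the statistic directly (made-up data, with a hypothetical null value μ₀ = 5):

```python
import numpy as np

data = np.array([4.2, 5.1, 3.8, 4.9, 5.6, 4.4, 5.0, 4.7])  # hypothetical sample
mu0 = 5.0                                   # hypothesized mean under H0

n = len(data)
t = (data.mean() - mu0) / (data.std(ddof=1) / np.sqrt(n))  # one-sample t statistic
df = n - 1
print(f"t = {t:.3f} with {df} degrees of freedom")
```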
When σ is known, the sampling distribution of x̄ is N(μ, σ/√n). But when σ is estimated by the sample standard deviation s, the sampling distribution follows a t distribution, t(μ, s/√n), with n − 1 degrees of freedom. When n is very large, s is a very good estimate of σ, and the corresponding t distributions are very close to the normal distribution. The t distributions become wider for smaller sample sizes, reflecting the lack of precision in estimating σ from s.
One-sample T-Confidence Intervals
Margin of error m = t* × s/√n, leading to the interval x̄ ± m.
One-sample T-Test
Stating the null and alternative hypotheses (H0 versus Ha)
Deciding on a one-sided or two-sided test
Choosing a significance level α
Calculating t and its degrees of freedom: t = (x̄ − μ₀) / (s/√n), df = n − 1
Finding the area under the curve with Table T
Stating the P-value and interpreting the result
The t procedures are robust to small deviations from normality; the results will not be affected too much. Factors that strongly matter:
Random sampling: the sample must be an SRS from the population.
Outliers and skewness: they strongly influence the mean and therefore the t procedures. However, their impact diminishes as the sample size gets larger because of the Central Limit Theorem. Specifically:
o When n < 15, the data must be close to normal and without outliers.
o When 15 < n < 40, mild skewness is acceptable but not outliers.
o When n > 40, the t statistic will be valid even with strong skewness.
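The steps above, sketched in code with scipy.stats.ttest_1samp (the data and the null value μ₀ = 5 are hypothetical; the default alternative is two-sided):

```python
import numpy as np
from scipy import stats

data = np.array([4.2, 5.1, 3.8, 4.9, 5.6, 4.4, 5.0, 4.7])  # hypothetical sample
mu0 = 5.0          # H0: mu = 5   vs.   Ha: mu != 5 (two-sided)
alpha = 0.05       # significance level

result = stats.ttest_1samp(data, popmean=mu0)   # returns the t statistic and P-value
print(f"t = {result.statistic:.3f}, df = {len(data) - 1}, P-value = {result.pvalue:.4f}")
print("Reject H0" if result.pvalue < alpha else "Fail to reject H0")
```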
Calculating Sample Size
m = z* × σ/√n  ⇔  n = (z* × σ / m)²
If the resulting n is less than 60, use t rather than z.
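A small sketch of the n = (z*σ/m)² calculation, with made-up values for σ and the desired margin of error m:

```python
import math
from scipy import stats

sigma = 12.0        # assumed population standard deviation (hypothetical)
m = 3.0             # desired margin of error (hypothetical)
conf = 0.95

z_star = stats.norm.ppf(1 - (1 - conf) / 2)   # about 1.96 for 95% confidence
n = math.ceil((z_star * sigma / m) ** 2)      # always round up to a whole number
print("required sample size:", n)             # 62 with these inputs
```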