Some continuous and discrete distributions

Table of contents

I. Continuous distributions and transformation rules.
   A. Standard uniform distribution $U[0,1]$.
   B. Uniform distribution $U[a,b]$.
   C. Standard normal distribution $N(0,1)$.
   D. Normal distribution $N(\mu, \sigma)$.
   E. Standard exponential distribution.
   F. Exponential distribution with mean $\lambda$.
   G. Standard Gamma distribution $\Gamma(r, 1)$.
   H. Gamma distribution $\Gamma(r, \lambda)$.

II. Discrete distributions and transformation rules.
   A. Bernoulli random variables.
   B. Binomial distribution.
   C. Poisson distribution.
   D. Geometric distribution.
   E. Negative binomial distribution.
   F. Hypergeometric distribution.

1 Continuous distributions.

Each continuous distribution has a "standard" version and a more general rescaled version. The transformation from one to the other is always of the form $Y = aX + b$, with $a > 0$, and the resulting identities:

$$f_Y(y) = \frac{1}{a}\, f_X\!\left(\frac{y-b}{a}\right) \qquad (1)$$

$$F_Y(y) = F_X\!\left(\frac{y-b}{a}\right) \qquad (2)$$

$$E(Y) = a\,E(X) + b \qquad (3)$$

$$\operatorname{Var}(Y) = a^2 \operatorname{Var}(X) \qquad (4)$$

$$M_Y(t) = e^{bt} M_X(at) \qquad (5)$$

1.1 Standard uniform $U[0,1]$

This distribution is "pick a random number between 0 and 1".

$$f_X(x) = \begin{cases} 1 & \text{if } 0 < x < 1 \\ 0 & \text{otherwise} \end{cases} \qquad F_X(x) = \begin{cases} 0 & \text{if } x \le 0 \\ x & \text{if } 0 \le x \le 1 \\ 1 & \text{if } x \ge 1 \end{cases}$$

$$E(X) = 1/2 \qquad \operatorname{Var}(X) = 1/12 \qquad M_X(t) = \frac{e^t - 1}{t}$$

1.2 Uniform $U[a,b]$

This distribution is "pick a random number between $a$ and $b$". To get a random number between $a$ and $b$, take a random number between 0 and 1, multiply it by $b - a$, and add $a$. The properties of this random variable are obtained by applying rules (1)–(5) to the previous subsection.

$$f_X(x) = \begin{cases} 1/(b-a) & \text{if } a < x < b \\ 0 & \text{otherwise} \end{cases} \qquad F_X(x) = \begin{cases} 0 & \text{if } x \le a \\ \frac{x-a}{b-a} & \text{if } a \le x \le b \\ 1 & \text{if } x \ge b \end{cases}$$

$$E(X) = \frac{a+b}{2} \qquad \operatorname{Var}(X) = \frac{(b-a)^2}{12} \qquad M_X(t) = \frac{e^{bt} - e^{at}}{t(b-a)}$$
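As a quick numerical sanity check of rules (3) and (4), here is a minimal Python sketch (not part of the original notes; it assumes NumPy is available, and the endpoint names `lo` and `hi` are just illustrative). It rescales standard uniform draws into $U[lo, hi]$ samples exactly as subsection 1.2 describes:

```python
import numpy as np

rng = np.random.default_rng(0)

lo, hi = 2.0, 5.0          # illustrative endpoints for U[lo, hi]
x = rng.random(100_000)    # X ~ U[0, 1], the "standard" version

# Apply Y = aX + b with slope a = hi - lo and intercept b = lo.
y = (hi - lo) * x + lo     # Y ~ U[lo, hi]

# Rule (3): E(Y) = a E(X) + b = (hi - lo)/2 + lo = (lo + hi)/2
print(y.mean(), (lo + hi) / 2)         # both close to 3.5
# Rule (4): Var(Y) = a^2 Var(X) = (hi - lo)^2 / 12
print(y.var(), (hi - lo) ** 2 / 12)    # both close to 0.75
```

The same pattern, sample the standard version and then apply $Y = aX + b$, works for every continuous family in these notes.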
1.3 Standard normal $N(0,1)$

This is the most important distribution in all of probability because of the Central Limit Theorem, which states that the sum (or average) of a large number of independent random variables is approximately normal, no matter what the original distributions look like. Specifically, if $X$ is a random variable with mean $\mu$ and standard deviation $\sigma$, if $X_1, X_2, \ldots$ are independent copies of $X$, and if $S_n = X_1 + \cdots + X_n$, then for large values of $n$, $S_n$ is approximately normal with mean $n\mu$ and standard deviation $\sigma\sqrt{n}$.

[The remainder of subsection 1.3, and subsections 1.4 through 1.7, are omitted from this partial preview.]

1.8 Gamma distribution $\Gamma(r, \lambda)$

This is a standard Gamma variable multiplied by $\lambda$, or equivalently the sum of $r$ independent exponential variables, each with mean $\lambda$.

$$f_X(x) = \frac{\lambda^{-r} x^{r-1} e^{-x/\lambda}}{(r-1)!}, \quad x > 0 \qquad F_X(x) = \text{complicated}$$

$$E(X) = \lambda r \qquad \operatorname{Var}(X) = \lambda^2 r \qquad M_X(t) = (1 - \lambda t)^{-r}$$

2 Discrete distributions and transformation rules.

The discrete random variables we will consider always take on integer values, so we never rescale them. Also, the cdf $F_X(x)$ is rarely useful, with the notable exception of the geometric distribution. The more relevant transformations are those for adding two independent random variables. If $Z = X + Y$, with $X$ and $Y$ independent, then

$$f_Z(z) = \sum_x f_X(x) f_Y(z - x) \qquad (6)$$

$$E(Z) = E(X) + E(Y) \qquad (7)$$

$$\operatorname{Var}(Z) = \operatorname{Var}(X) + \operatorname{Var}(Y) \qquad (8)$$

$$M_Z(t) = M_X(t) M_Y(t) \qquad (9)$$

2.1 Bernoulli

A Bernoulli random variable is a variable that can only take on the values 0 and 1. We let $p$ be the probability of 1 and $1 - p$ the probability of 0. This example is easy to analyze, and MANY interesting random variables can be built from this simple building block.

$$f_X(x) = \begin{cases} 1 - p & \text{if } x = 0 \\ p & \text{if } x = 1 \\ 0 & \text{otherwise} \end{cases}$$

$$E(X) = p \qquad \operatorname{Var}(X) = p(1 - p) \qquad M_X(t) = 1 + p(e^t - 1)$$

2.2 Binomial

A binomial random variable is the sum of $n$ independent Bernoulli random variables, all with the same value of $p$. The usual application is counting the number of successes in $n$ independent tries at something, where each try has probability $p$ of success. It also applies to sampling with replacement, and it is a very good approximation to sampling without replacement from very large populations. When $np$ and $n(1 - p)$ are both large (say, 20 or bigger), the binomial distribution is well approximated by the normal distribution. When $n$ is large (at least 30) and $p$ is small (less than 0.1), the binomial distribution is well approximated by the Poisson distribution.

$$f_X(x) = \binom{n}{x} p^x (1 - p)^{n - x}$$

$$E(X) = np \qquad \operatorname{Var}(X) = np(1 - p) \qquad M_X(t) = \bigl(1 + p(e^t - 1)\bigr)^n$$

2.3 Poisson

The Poisson distribution (pronounced pwah-SON) is the limit of the binomial when $n$ is large and $p$ is small; the correspondence is $\lambda = np$. The Poisson distribution replicates itself, in that the sum of a Poisson($\lambda$) random variable and an (independent!) Poisson($\mu$) random variable is a Poisson($\lambda + \mu$) random variable. Anything that involves the sum of many, many long-shot events (e.g., the number of people hit by lightning in a year, the number of broken bones from playing soccer, or the number of clicks on a Geiger counter) will be described by the Poisson distribution.

Closely related to the Poisson distribution is the Poisson process. In a Poisson process, events accumulate at a certain average rate $r$, and the number of events in "time" $t$ is then given by Poisson($rt$). Examples include radioactivity (clicks per unit time), typos ($r$ is errors per page and $t$ is the number of pages), and industrial defects ($r$ is the average number of defects per foot of sheet metal and $t$ is the number of feet).

$$f_X(x) = \frac{\lambda^x e^{-\lambda}}{x!}$$

$$E(X) = \lambda \qquad \operatorname{Var}(X) = \lambda \qquad M_X(t) = e^{\lambda(e^t - 1)}$$

2.4 Geometric

The geometric distribution describes the waiting time between successes in a binomial process. For instance, flip coins and count the turns until the first head, or roll dice and count the turns until you get a "6". It is very much like the exponential distribution, with $\lambda$ corresponding to $1/p$, except that the geometric distribution is discrete while the exponential distribution is continuous.

$$f_X(x) = p q^{x-1}, \quad x = 1, 2, \ldots, \text{ where } q = 1 - p$$

$$E(X) = 1/p \qquad \operatorname{Var}(X) = q/p^2 \qquad M_X(t) = \frac{p e^t}{1 - q e^t}$$

2.5 Negative binomial

The sum $X$ of $r$ independent geometric random variables has the negative binomial distribution, the discrete analog of the Gamma distribution (which describes the sum of $r$ independent exponential random variables).

$$f_X(x) = \binom{x - 1}{r - 1} p^r q^{x - r}, \quad x = r, r + 1, \ldots, \text{ where } q = 1 - p$$

$$E(X) = r/p \qquad \operatorname{Var}(X) = rq/p^2 \qquad M_X(t) = \frac{p^r e^{rt}}{(1 - q e^t)^r}$$

[Subsection 2.6, the hypergeometric distribution, is omitted from this partial preview.]
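The convolution rule (6) is the engine behind the "sum of independent copies" constructions in this section (Bernoulli to binomial, geometric to negative binomial). Here is a minimal sketch of rule (6) in action (not part of the original notes; it assumes NumPy, and the parameter values are arbitrary): convolving the Bernoulli pmf with itself $n$ times reproduces the binomial pmf of subsection 2.2.

```python
import numpy as np
from math import comb

p, n = 0.3, 6                       # arbitrary illustrative parameters
bernoulli = np.array([1 - p, p])    # Bernoulli pmf on {0, 1}

# Rule (6): the pmf of a sum of independent variables is the
# convolution of their pmfs. Convolve n Bernoulli pmfs together.
pmf = np.array([1.0])               # pmf of the empty sum: point mass at 0
for _ in range(n):
    pmf = np.convolve(pmf, bernoulli)

# Compare with the binomial pmf C(n, x) p^x (1-p)^(n-x).
binomial = np.array([comb(n, x) * p**x * (1 - p)**(n - x)
                     for x in range(n + 1)])
print(np.allclose(pmf, binomial))   # True
```

The same loop with a (suitably truncated) geometric pmf in place of the Bernoulli one would reproduce the negative binomial pmf of subsection 2.5, and rules (7) and (8) explain why the mean and variance there are just $r$ times the geometric ones.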