
University of Texas at Austin

HW Assignment 3

Problem 3.1 (Poisson Point Process). Let (S, S, μ) be a measure space, i.e., S is a non-empty set, S is a σ-algebra on S, and μ is a positive measure on S. A mapping N : Ω × S → N₀ ∪ {∞},

where (Ω, F, P) is a probability space, is called a Poisson Point Process (PPP) with mean measure μ if

  • the mapping N(B) (more precisely, the mapping ω ↦ N(ω, B)) is F-measurable (a random variable) for each B ∈ S and has the Poisson distribution¹ with parameter μ(B) (denoted by P(μ(B))) whenever μ(B) < ∞,
  • for each ω ∈ Ω, the mapping B ↦ N(ω, B) is an N₀ ∪ {∞}-valued measure, and
  • the random variables N(B_1), N(B_2), ..., N(B_d) are independent whenever the sets B_1, B_2, ..., B_d ∈ S are (pairwise) disjoint.

The purpose of this problem is to show that, under mild conditions on (S, S, μ), a PPP with mean measure μ exists.

(1) We assume first that 0 < μ(S) < ∞ and let (Ω, F, P) be a probability space which supports a random variable N and an iid sequence {X_k}_{k∈N}, independent of N, such that

  • N ∼ P(μ(S)), and
  • for each k, X_k takes values in S and P[X_k ∈ B] = μ(B)/μ(S), for all B ∈ S (technically, a measurable mapping from Ω into a measurable space is called a random element).

Show that such a probability space exists.

(2) For B ∈ S, define

    N(ω, B) = ∑_{k=1}^{N(ω)} 1_{X_k(ω) ∈ B},

i.e., N(ω, B) is the number of terms in X_1(ω), ..., X_{N(ω)}(ω) that fall into B. Show that N(B) is a random variable for each B ∈ S. (A simulation sketch of this construction follows the problem statement.)

(3) Pick (pairwise) disjoint B_1, ..., B_d in S and compute P[N(B_1) = n_1, ..., N(B_d) = n_d | N = m], for m, n_1, ..., n_d ∈ N₀.

(4) Show that N is a PPP with mean measure μ.

(5) Show that a PPP with mean measure μ exists when (S, S, μ) is merely a σ-finite measure space, i.e., when there exists a sequence {B_n}_{n∈N} in S such that S = ∪_n B_n and μ(B_n) < ∞, for all n ∈ N.
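Before turning to the solution, here is a minimal simulation sketch of the construction in parts (1)-(2), under concrete choices that are not part of the problem: S = [0,1]² and μ = λ·(Lebesgue measure), so that μ(S) = λ and μ(·)/μ(S) is the uniform distribution on the unit square.

    import numpy as np

    LAM = 50.0  # mu(S) = lambda; a hypothetical intensity, not from the problem

    def sample_ppp(lam, rng):
        # Part (1): N ~ P(mu(S)) and X_1, X_2, ... iid with law mu(.)/mu(S),
        # which is uniform on S = [0,1]^2 under the choices above.
        n = rng.poisson(lam)
        return rng.uniform(size=(n, 2))

    def count_in(points, lo, hi):
        # Part (2): N(omega, B) = number of the X_k(omega) falling into the
        # box B = [lo_1, hi_1) x [lo_2, hi_2).
        inside = np.all((points >= lo) & (points < hi), axis=1)
        return int(inside.sum())

    rng = np.random.default_rng(0)
    pts = sample_ppp(LAM, rng)
    # For B = [0, 0.5) x [0, 0.5): mu(B) = 50 * 0.25, so N(B) ~ P(12.5).
    print(count_in(pts, np.array([0.0, 0.0]), np.array([0.5, 0.5])))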

Solution:

(1) Even though this looks like an application of Kolmogorov's extension theorem, it is not (at least not of the version we know). The good news is that, because of all the independence, we can construct the probability space directly. We start with a space for N. This can simply be Ω_N = N₀, with F_N = P(Ω_N) (the power set of Ω_N), and the measure P_N characterized by

    P_N({k}) = e^{−μ(S)} μ(S)^k / k!,  k ∈ N₀.

As for X_k, k ∈ N, the main idea is to take (Ω_k, F_k, P_k) = (S, S, μ(·)/μ(S)) itself as the probability space. Indeed, the identity mapping is then exactly a random element with the required distribution.

Finally, we need to put all these probability spaces together. Because of the independence, we may simply multiply them, as in

    (Ω, F, P) = (Ω_N, F_N, P_N) × (Ω_1, F_1, P_1) × (Ω_2, F_2, P_2) × ...

An element ω ∈ Ω is of the form ω = (ω_N, ω_1, ω_2, ...), so that, if we define N(ω) = ω_N and X_k(ω) = ω_k, k ∈ N, the random variables N, {X_k}_{k∈N} are independent and have exactly the required distributions.

(2) It is clearly true that

    N(ω, B) = ∑_{k=1}^{∞} 1_{X_k(ω) ∈ B} · 1_{k ≤ N(ω)},

so that N(·, B) is a random variable, since all sets of the form {X_k ∈ B} and {N ≥ k} are F-measurable.

¹ A r.v. X is said to have the Poisson distribution with parameter c > 0 if P[X = n] = e^{−c} c^n / n!, for n ∈ N₀. When c = 0, P[X = 0] = 1.

(3) We can think of B_1, B_2, ..., B_d as bins (urns) and X_1, ..., X_m as m balls that we place in those urns with probabilities p_j = μ(B_j)/μ(S), j = 1, ..., d. When p_1 + ⋯ + p_d < 1, we throw the ball away with probability p_{d+1} = 1 − (p_1 + ⋯ + p_d). First of all, we must have n_1 + ⋯ + n_d ≤ m, since only m balls are available. Then, we realize that each run of such an experiment produces a vector of the form (j_1, j_2, ..., j_m), where j_k ∈ {1, 2, ..., d+1}, i.e., j_k is the bin in which the k-th ball has been placed (we set j_k = d+1 if the k-th ball has been thrown away). An elementary outcome such as (j_1, ..., j_m) happens with probability p_{j_1} p_{j_2} ⋯ p_{j_m}. The set of all elementary outcomes which contribute to the final count of n_1 balls in bin 1, n_2 balls in bin 2, etc., is

    E_{(n_1,...,n_d)} = {(j_1, ..., j_m) : ∑_{k=1}^{m} 1_{j_k = 1} = n_1, ∑_{k=1}^{m} 1_{j_k = 2} = n_2, ..., ∑_{k=1}^{m} 1_{j_k = d} = n_d}.

It is important to realize that all elementary outcomes in E_{(n_1,...,n_d)} have the same probability p_1^{n_1} p_2^{n_2} ⋯ p_{d+1}^{n_{d+1}}, where n_{d+1} = m − (n_1 + ⋯ + n_d), so, in order to compute the desired probability, all we have to do is count the number #E_{(n_1,...,n_d)} of elements in E_{(n_1,...,n_d)}. This is, however, a classical example of the use of the multinomial coefficient in counting, i.e.,

    #E_{(n_1,...,n_d)} = C(m; n_1, ..., n_d) = m! / (n_1! n_2! ⋯ n_d! (m − (n_1 + n_2 + ⋯ + n_d))!).

Therefore,

    P[N(B_1) = n_1, ..., N(B_d) = n_d | N = m]
      = C(m; n_1, ..., n_d) · (μ(B_1)/μ(S))^{n_1} ⋯ (μ(B_d)/μ(S))^{n_d} · ((μ(S) − (μ(B_1) + ⋯ + μ(B_d))) / μ(S))^{m − (n_1 + ⋯ + n_d)}.

(4) In order to compute the joint probability P[N(B_1) = n_1, ..., N(B_d) = n_d] over disjoint sets B_1, ..., B_d ∈ S and n_1, ..., n_d ∈ N₀, we use the law of total probability and the conditional probability computed in part (3). Indeed, if we set p_k = μ(B_k)/μ(S), for k = 1, ..., d, p_{d+1} = 1 − ∑_{k=1}^{d} p_k, and K = ∑_{k=1}^{d} n_k, we have

    P[N(B_1) = n_1, ..., N(B_d) = n_d]
      = ∑_{m=0}^{∞} P[N = m] · P[N(B_1) = n_1, ..., N(B_d) = n_d | N = m]
      = ∑_{m=K}^{∞} e^{−μ(S)} (μ(S)^m / m!) · (m! / (n_1! n_2! ⋯ n_d! (m − K)!)) · p_1^{n_1} p_2^{n_2} ⋯ p_d^{n_d} p_{d+1}^{m−K}
      = (e^{−μ(S)} / (n_1! n_2! ⋯ n_d!)) · μ(B_1)^{n_1} ⋯ μ(B_d)^{n_d} · ∑_{m=K}^{∞} (1 / (m − K)!) (μ(S) − ∑_{k=1}^{d} μ(B_k))^{m−K}
      = (e^{−μ(S)} / (n_1! n_2! ⋯ n_d!)) · μ(B_1)^{n_1} ⋯ μ(B_d)^{n_d} · e^{μ(S) − ∑_{k=1}^{d} μ(B_k)}
      = ∏_{k=1}^{d} e^{−μ(B_k)} μ(B_k)^{n_k} / n_k!.
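As a numerical aside (not part of the original solution), the series in the second line can be summed directly and compared with the final product form, again with the hypothetical values μ(S) = 10, μ(B_1) = 2, μ(B_2) = 3:

    from math import exp, factorial

    muS, muB = 10.0, [2.0, 3.0]
    n = [1, 2]
    K = sum(n)
    p = [b / muS for b in muB]
    p_disc = 1.0 - sum(p)

    # Second line of the display: sum over m >= K (truncated; the tail is negligible)
    lhs = sum(
        exp(-muS) * muS**m / factorial(m)
        * factorial(m) / (factorial(n[0]) * factorial(n[1]) * factorial(m - K))
        * p[0]**n[0] * p[1]**n[1] * p_disc**(m - K)
        for m in range(K, 80)
    )

    # Last line of the display: product of independent Poisson probabilities
    rhs = 1.0
    for b, nk in zip(muB, n):
        rhs *= exp(-b) * b**nk / factorial(nk)

    print(lhs, rhs)   # both approximately 0.0606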

The chain of equalities above establishes both the independence of the random variables N(·, B_k), k = 1, ..., d, over disjoint sets, as well as the fact that N(·, B) has the Poisson distribution with parameter μ(B). It remains to establish that, for each ω ∈ Ω, the mapping B ↦ N(ω, B) is an N₀ ∪ {∞}-valued measure. For a fixed ω ∈ Ω, N(ω, ·) is clearly a set function that maps S into N₀ ∪ {∞} and satisfies N(ω, ∅) = 0. To show σ-additivity, let {B_n}_{n∈N} be a sequence of (pairwise) disjoint sets in S. Then,

    N(ω, ∪_n B_n) = ∑_{k=1}^{N(ω)} 1_{X_k(ω) ∈ ∪_n B_n}
                  = ∑_{k=1}^{N(ω)} ∑_{n=1}^{∞} 1_{X_k(ω) ∈ B_n}
                  = ∑_{n=1}^{∞} ∑_{k=1}^{N(ω)} 1_{X_k(ω) ∈ B_n}
                  = ∑_{n=1}^{∞} N(ω, B_n),

where we are allowed to interchange the sums because all terms are non-negative.

(5) Assume μ(S) = ∞. Using σ-finiteness, write S = ∪_{n=1}^{∞} S_n, with μ(S_n) < ∞ for each n. Without loss of generality, we may additionally assume that μ(S_n) > 0 for all n ∈ N, and that S_n ∩ S_m = ∅ for n ≠ m. Using steps (1)-(4) above, for each n ∈ N we can construct a PPP N^(n) on (S_n, S_n, μ_n), where the σ-algebra S_n = {B ∩ S_n : B ∈ S} and μ_n(B) = μ(B), for B ∈ S_n. By multiplying the probability spaces that carry this sequence of PPPs, we may (and do) assume that all N^(n), n ∈ N, are defined on the same probability space and independent. It then remains to set N(ω, B) = ∑_{n=1}^{∞} N^(n)(ω, B ∩ S_n), for B ∈ S; the defining properties of a PPP with mean measure μ follow by summing the corresponding properties of the N^(n) over n.
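To illustrate part (5), here is a sketch of the gluing under concrete, hypothetical choices: S = [0, ∞) with μ = Lebesgue measure, partitioned into unit intervals, so each piece has measure 1. Only finitely many pieces are materialized, which is enough to evaluate N(B) for bounded B.

    import numpy as np

    def sample_glued_ppp(n_pieces, rng):
        # Independent PPPs N^(n), one per interval [n, n+1) (0-indexed here),
        # each with mean measure Lebesgue restricted to its interval, glued
        # together as in part (5).
        pieces = []
        for n in range(n_pieces):
            m = rng.poisson(1.0)                    # mu([n, n+1)) = 1
            pieces.append(n + rng.uniform(size=m))  # iid uniform on [n, n+1)
        return np.concatenate(pieces)

    rng = np.random.default_rng(1)
    # N(B) for B = [0, 2.5) should be Poisson with mean mu(B) = 2.5; for a
    # Poisson law, mean and variance coincide.
    counts = [(sample_glued_ppp(3, rng) < 2.5).sum() for _ in range(20000)]
    print(np.mean(counts), np.var(counts))          # both close to 2.5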