
Lecture 15: Generative models

TTIC 31020: Introduction to Machine Learning

Instructor: Greg Shakhnarovich

TTI–Chicago

October 29, 2010

Reminder: optimal classification

Expected classification error is minimized by

h(x) = argmax_c p(y = c | x) = argmax_c [ p(x | y = c) p(y = c) / p(x) ].

The Bayes classifier:

h∗(x) = argmax_c [ p(x | y = c) p(y = c) / p(x) ] = argmax_c p(x | y = c) p(y = c) = argmax_c { log p_c(x) + log P_c }.

Note: p(x) is the same for all c, and can be ignored.
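As a concrete (hypothetical) illustration of this rule, the sketch below implements h∗ for two univariate Gaussian class-conditionals with made-up means, variances, and priors; note that the evidence p(x) is never computed.

```python
import numpy as np

def log_gauss(x, mu, sigma):
    """Log of a univariate Gaussian density N(x; mu, sigma^2)."""
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

# Hypothetical class-conditional parameters (mu_c, sigma_c) and priors P_c.
params = {1: (-2.0, 1.0, 0.6),
          2: (+2.0, 1.5, 0.4)}

def bayes_classify(x):
    """h*(x) = argmax_c [log p_c(x) + log P_c]; p(x) is dropped (common to all c)."""
    scores = {c: log_gauss(x, mu, s) + np.log(P) for c, (mu, s, P) in params.items()}
    return max(scores, key=scores.get)

print(bayes_classify(0.3))   # class with the larger posterior at x = 0.3
```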


Bayes risk

[Figure: a 1D example with two classes y = 1 and y = 2, showing the regions where the Bayes classifier predicts h∗(x) = 1 and h∗(x) = 2.]

The risk (probability of error) of the Bayes classifier h∗ is called the Bayes risk R∗. This is the minimal achievable risk for the given p(x, y) with any classifier! In a sense, R∗ measures the inherent difficulty of the classification problem.

R∗ = 1 − ∫ max_c { p(x | y = c) P_c } dx
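The integral above can be approximated numerically. A minimal sketch, reusing the same hypothetical two-Gaussian setup as before:

```python
import numpy as np

def gauss(x, mu, sigma):
    """Univariate Gaussian density N(x; mu, sigma^2)."""
    return np.exp(-(x - mu)**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

# Hypothetical two-class problem: p_c(x) = N(x; mu_c, sigma_c^2) with priors P_c.
xs = np.linspace(-15.0, 15.0, 200001)                  # dense grid covering the support
weighted = np.vstack([0.6 * gauss(xs, -2.0, 1.0),      # p(x | y = 1) * P_1
                      0.4 * gauss(xs, +2.0, 1.5)])     # p(x | y = 2) * P_2

# R* = 1 - integral of max_c p(x | y = c) P_c, approximated on the grid.
bayes_risk = 1.0 - np.trapz(weighted.max(axis=0), xs)
print(f"Bayes risk R* ~= {bayes_risk:.4f}")
```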

Discriminant functions

We can construct, for each class c, a discriminant function

δ_c(x) ≜ log p_c(x) + log P_c

such that h∗(x) = argmax_c δ_c(x).

We can simplify δ_c by removing terms and factors common to all δ_c, since they won't affect the decision boundary. For example, if P_c = 1/C for all c, we can drop the prior term:

δ_c(x) = log p_c(x)

Two-category case

In the case of two classes y ∈ {±1}, the Bayes classifier is

h∗(x) = argmax_{c=±1} δ_c(x) = sign( δ_{+1}(x) − δ_{−1}(x) ).

The decision boundary is given by δ_{+1}(x) − δ_{−1}(x) = 0.

  • Sometimes f(x) = δ_{+1}(x) − δ_{−1}(x) is referred to as a discriminant function.

With equal priors, this is equivalent to the (log-)likelihood ratio test:

h∗(x) = sign[ log ( p(x | y = +1) / p(x | y = −1) ) ]
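A minimal sketch of this equal-prior likelihood-ratio rule for two univariate Gaussians with hypothetical means and a shared variance:

```python
import numpy as np

def log_gauss(x, mu, sigma):
    """Log of a univariate Gaussian density N(x; mu, sigma^2)."""
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

def llr_classify(x, mu_pos=1.0, mu_neg=-1.0, sigma=1.0):
    """Equal-prior two-class rule: h*(x) = sign( log p(x|y=+1) - log p(x|y=-1) )."""
    llr = log_gauss(x, mu_pos, sigma) - log_gauss(x, mu_neg, sigma)
    return +1 if llr >= 0 else -1

print(llr_classify(0.2), llr_classify(-0.7))   # -> +1, -1 for these hypothetical parameters
```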


Equal covariance Gaussian case

Consider the case of p_c(x) = N(x; μ_c, Σ), with equal priors for all classes.

δ_k(x) = log p(x | y = k)
       = −(d/2) log(2π) − (1/2) log|Σ|  [same for all k]  − (1/2)(x − μ_k)^T Σ^{-1} (x − μ_k)
       ∝ const − x^T Σ^{-1} x + μ_k^T Σ^{-1} x + x^T Σ^{-1} μ_k − μ_k^T Σ^{-1} μ_k

Now consider two classes k and q (the −x^T Σ^{-1} x term is common to both and can be dropped):

δ_k(x) ∝ 2 μ_k^T Σ^{-1} x − μ_k^T Σ^{-1} μ_k
δ_q(x) ∝ 2 μ_q^T Σ^{-1} x − μ_q^T Σ^{-1} μ_q


Linear discriminant

Two class discriminants:

δ_k(x) − δ_q(x) = μ_k^T Σ^{-1} x + x^T Σ^{-1} μ_k − μ_k^T Σ^{-1} μ_k − μ_q^T Σ^{-1} x − x^T Σ^{-1} μ_q + μ_q^T Σ^{-1} μ_q = w^T x + w_0

If we know what μ_1, ..., μ_C and Σ are, we can compute the optimal w, w_0 directly.
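A minimal sketch of that computation with hypothetical μ_k, μ_q, Σ (the factor of 2 follows the scaled discriminants above; any positive scaling leaves the decision unchanged):

```python
import numpy as np

# Hypothetical known parameters for two classes k and q sharing covariance Sigma.
mu_k = np.array([1.0, 2.0])
mu_q = np.array([-1.0, 0.5])
Sigma = np.array([[2.0, 0.3],
                  [0.3, 1.0]])
Sigma_inv = np.linalg.inv(Sigma)

# From delta_k(x) - delta_q(x) = w^T x + w_0, using the scaled discriminants above.
w  = 2.0 * Sigma_inv @ (mu_k - mu_q)
w0 = mu_q @ Sigma_inv @ mu_q - mu_k @ Sigma_inv @ mu_k

def classify(x):
    """Predict class k if the linear discriminant is positive, otherwise class q."""
    return "k" if w @ x + w0 > 0 else "q"

print(w, w0, classify(np.array([0.0, 1.0])))
```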

What should we do when we don’t know the Gaussians?

Generative models for classification

In generative models, one explicitly models p(x, y) or, equivalently, p_c(x) and P_c, to derive the discriminants. Typically, the model imposes a certain parametric form on the assumed distributions and requires estimation of the parameters from data.

  • Most popular: Gaussian for continuous data, multinomial for discrete data.
  • We will see non-parametric models later in this class. Often, the classifier is OK even if the data clearly don't conform to the assumptions.


Maximum likelihood density estimation

Let X = {x_1, ..., x_N} be a set of data points

  • no labels; in the current context, all of X comes from class c

We assume a parametric distribution model p(x; θ).

The (log-)likelihood of θ given X (assuming i.i.d. sampling):

log p(X; θ) ≜ Σ_{i=1}^N log p(x_i; θ).

The ML estimate of θ:

θ̂_ML ≜ argmax_θ log p(X; θ)

  • Intuitively: the observed data is most likely (has the highest probability) under these settings of θ.
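For a single Gaussian, the ML estimates have a closed form: the sample mean and the 1/N-normalized sample covariance. A minimal sketch, sanity-checked on hypothetical data drawn from a known Gaussian:

```python
import numpy as np

def gaussian_ml(X):
    """ML estimates for a Gaussian fitted to the rows of X (shape N x d):
    mu_hat = sample mean; Sigma_hat = (1/N) * sum_i (x_i - mu_hat)(x_i - mu_hat)^T."""
    mu_hat = X.mean(axis=0)
    centered = X - mu_hat
    Sigma_hat = centered.T @ centered / X.shape[0]   # note 1/N, not 1/(N-1)
    return mu_hat, Sigma_hat

rng = np.random.default_rng(0)
X = rng.multivariate_normal(mean=[0.0, 3.0], cov=[[1.0, 0.4], [0.4, 2.0]], size=5000)
mu_hat, Sigma_hat = gaussian_ml(X)
print(mu_hat, Sigma_hat)   # should be close to the generating mean and covariance
```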

Gaussians with unequal covariances

What if we remove the restriction that ∀c, Σ_c = Σ?

Compute the ML estimate of μ_c, Σ_c for each c.

We get discriminants (and decision boundaries) quadratic in x:

δ_c(x) = −(1/2) x^T Σ_c^{-1} x + μ_c^T Σ_c^{-1} x − ⟨const in x⟩

(as shown in PS1).

A quadratic form in x: x^T A x.
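A minimal sketch of evaluating these quadratic discriminants for two hypothetical classes with unequal covariances (equal priors assumed); the terms constant in x are kept here because they now differ across classes:

```python
import numpy as np

def quadratic_discriminant(x, mu_c, Sigma_c):
    """delta_c(x) = -1/2 x^T Sigma_c^{-1} x + mu_c^T Sigma_c^{-1} x + (terms constant in x).
    The constant-in-x terms (involving log|Sigma_c| and mu_c^T Sigma_c^{-1} mu_c)
    are included since they differ between classes when the covariances differ."""
    Sinv = np.linalg.inv(Sigma_c)
    const = -0.5 * mu_c @ Sinv @ mu_c - 0.5 * np.log(np.linalg.det(Sigma_c))
    return -0.5 * x @ Sinv @ x + mu_c @ Sinv @ x + const

# Hypothetical two-class setup with unequal covariances.
mu1, S1 = np.array([0.0, 0.0]), np.eye(2)
mu2, S2 = np.array([2.0, 0.0]), np.array([[3.0, 0.5], [0.5, 0.5]])
x = np.array([1.0, -0.5])
label = 1 if quadratic_discriminant(x, mu1, S1) > quadratic_discriminant(x, mu2, S2) else 2
print(label)
```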


Quadratic decision boundaries

What do quadratic boundaries look like in 2D?

Second-degree curves can be any conic section: ellipses, parabolas, hyperbolas, or degenerate cases such as pairs of lines.

Can all of these arise from two Gaussian classes?


Sources of error in generative models

Reminder: there are three sources of error: noise variance (irreducible), structural error due to our choice of model class, and estimation error due to our choice of a model from that class.

In generative models, estimation error may be due to overfitting.

Two ways to address this: regularization (MAP estimation instead of ML), and controlling the number of parameters (degrees of freedom) in the model.
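As one hypothetical illustration of regularizing a covariance estimate (a generic shrinkage sketch, not necessarily the estimator used later in the course):

```python
import numpy as np

def shrunk_covariance(Sigma_ml, alpha=0.1):
    """Shrink the ML covariance toward a spherical target:
    Sigma_reg = (1 - alpha) * Sigma_ml + alpha * (tr(Sigma_ml)/d) * I.
    alpha in [0, 1] trades estimation variance for structural bias."""
    d = Sigma_ml.shape[0]
    target = (np.trace(Sigma_ml) / d) * np.eye(d)
    return (1 - alpha) * Sigma_ml + alpha * target

Sigma_ml = np.array([[4.0, 1.9], [1.9, 1.0]])   # a hypothetical, poorly conditioned estimate
print(shrunk_covariance(Sigma_ml, alpha=0.2))
```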


Parameters in Gaussian ML

Single Gaussian in R^d: d parameters for the mean, plus the parameters of Σ.

  • Full covariance: Σ is symmetric, with variances σ_1^2, ..., σ_d^2 on the diagonal and covariances σ_ij off the diagonal → d + d(d − 1)/2 parameters.
  • Diagonal covariance: Σ = diag(σ_1^2, ..., σ_d^2) → d parameters.
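A quick way to tabulate how these counts grow with d:

```python
def gaussian_param_count(d, diagonal=False):
    """Free parameters of a single Gaussian in R^d: d for the mean, plus
    d + d*(d-1)//2 for a full (symmetric) covariance, or d for a diagonal one."""
    cov_params = d if diagonal else d + d * (d - 1) // 2
    return d + cov_params

for d in (2, 10, 100):
    print(d, gaussian_param_count(d), gaussian_param_count(d, diagonal=True))
    # e.g. d = 100: 5150 parameters with a full covariance vs. 200 with a diagonal one
```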