Lecture notes: Generative Models and Optimal Classification — Bayes classifier, Bayes risk, discriminant functions, the two-category equal-covariance Gaussian case, maximum likelihood density estimation, unequal covariances and quadratic decision boundaries, MAP estimation.
TTIC 31020: Introduction to Machine Learning
Instructor: Greg Shakhnarovich
TTI–Chicago
October 29, 2010
Expected classification error is minimized by

h(x) = argmax_c p(y = c | x) = argmax_c [ p(x | y = c) p(y = c) / p(x) ]

The Bayes classifier (writing p_c(x) ≜ p(x | y = c) and P_c ≜ p(y = c)):

h*(x) = argmax_c p(x | y = c) p(y = c) / p(x)
      = argmax_c p(x | y = c) p(y = c)
      = argmax_c { log p_c(x) + log P_c }.

Note: p(x) is equal for all c, and can be ignored.
[Figure: two class-conditional densities (y = 1 and y = 2) over x ∈ [−10, 10], with the regions where h*(x) = 1 and where h*(x) = 2 indicated.]
The risk (probability of error) of the Bayes classifier h* is called the Bayes risk R*. This is the minimal achievable risk for the given p(x, y) with any classifier! In a sense, R* measures the inherent difficulty of the classification problem.

R* = 1 − ∫_x max_c { p(x | y = c) P_c } dx
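A small numerical sketch (not from the original slides): approximating R* for two hypothetical 1-D Gaussian classes with equal priors, by evaluating 1 − ∫ max_c { p(x | y = c) P_c } dx on a grid. The class parameters below are illustrative assumptions.

    import numpy as np
    from scipy.stats import norm

    # Assumed two-class problem: 1-D Gaussian class-conditionals, equal priors.
    priors = [0.5, 0.5]                            # P_c
    densities = [norm(-2.0, 1.0), norm(2.0, 1.5)]  # p_c(x) = p(x | y = c)

    # R* = 1 - integral of max_c { p(x | y = c) P_c }, approximated on a grid.
    x = np.linspace(-12.0, 12.0, 20001)
    weighted = np.stack([P * d.pdf(x) for P, d in zip(priors, densities)])
    bayes_risk = 1.0 - np.trapz(weighted.max(axis=0), x)
    print(f"approximate Bayes risk R* = {bayes_risk:.4f}")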
We can construct, for each class c, a discriminant function

δ_c(x) ≜ log p_c(x) + log P_c

such that h*(x) = argmax_c δ_c(x).
Can simplify δ_c by removing terms and factors common to all δ_c, since they won't affect the decision boundary. For example, if P_c = 1/C for all c, can drop the prior term:

δ_c(x) = log p_c(x)
In the case of two classes y ∈ {±1}, the Bayes classifier is

h*(x) = argmax_{c = ±1} δ_c(x) = sign(δ_{+1}(x) − δ_{−1}(x)).

The decision boundary is given by δ_{+1}(x) − δ_{−1}(x) = 0.

With equal priors, this is equivalent to the (log-)likelihood ratio test:

h*(x) = sign( log [ p(x | y = +1) / p(x | y = −1) ] ).
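A minimal sketch (assumed example, not from the slides) of this two-class rule as a log-likelihood ratio test, using 1-D Gaussian class-conditionals with equal priors; the parameters and the helper name llr are illustrative.

    import numpy as np
    from scipy.stats import norm

    # Assumed class-conditional densities p(x | y = +1) and p(x | y = -1).
    p_pos = norm(+1.0, 1.0)
    p_neg = norm(-1.0, 1.0)

    def llr(x):
        """Log-likelihood ratio: log p(x | y = +1) - log p(x | y = -1)."""
        return p_pos.logpdf(x) - p_neg.logpdf(x)

    def bayes_classify(x):
        """h*(x) = sign(llr(x)) under equal priors."""
        return np.sign(llr(x))

    print(bayes_classify(np.array([-2.0, 0.3, 1.5])))  # -> [-1.  1.  1.]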
Consider the case of p_c(x) = N(x; μ_c, Σ), and equal prior for all classes.

δ_k(x) = log p(x | y = k)
       = −(d/2) log(2π) − (1/2) log|Σ|      (same for all k)
         − (1/2) (x − μ_k)^T Σ^{-1} (x − μ_k)
       ∝ const − x^T Σ^{-1} x + μ_k^T Σ^{-1} x + x^T Σ^{-1} μ_k − μ_k^T Σ^{-1} μ_k
Now consider two classes k and q:

δ_k(x) ∝ 2 μ_k^T Σ^{-1} x − μ_k^T Σ^{-1} μ_k
δ_q(x) ∝ 2 μ_q^T Σ^{-1} x − μ_q^T Σ^{-1} μ_q

The two-class discriminant:

δ_k(x) − δ_q(x) = μ_k^T Σ^{-1} x + x^T Σ^{-1} μ_k − μ_k^T Σ^{-1} μ_k − μ_q^T Σ^{-1} x − x^T Σ^{-1} μ_q + μ_q^T Σ^{-1} μ_q
                = w^T x + w_0,

with w = 2 Σ^{-1} (μ_k − μ_q) and w_0 = μ_q^T Σ^{-1} μ_q − μ_k^T Σ^{-1} μ_k: a linear function of x.
If we know what μ_1, ..., μ_C and Σ are, we can compute the optimal w, w_0 directly.
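A brief sketch (made-up parameters, not part of the slides): computing w and w_0 for two equal-covariance Gaussian classes with known μ_k, μ_q, Σ, and classifying by the sign of w^T x + w_0.

    import numpy as np

    # Assumed known parameters for two classes sharing one covariance.
    mu_k = np.array([1.0, 2.0])
    mu_q = np.array([-1.0, 0.0])
    Sigma = np.array([[2.0, 0.5],
                      [0.5, 1.0]])

    Sigma_inv = np.linalg.inv(Sigma)
    w = 2.0 * Sigma_inv @ (mu_k - mu_q)                       # linear term
    w0 = mu_q @ Sigma_inv @ mu_q - mu_k @ Sigma_inv @ mu_k    # bias term

    def classify(x):
        """Return 'k' if delta_k(x) > delta_q(x), else 'q'."""
        return "k" if w @ x + w0 > 0 else "q"

    print(classify(np.array([0.5, 1.0])))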
What should we do when we don’t know the Gaussians?
In generative models, one explicitly models p(x, y) or, equivalently, p_c(x) and P_c, to derive the discriminants. Typically, the model imposes a certain parametric form on the assumed distributions, and the parameters must be estimated from data.
Let X = {x_1, ..., x_N} be a set of data points.

We assume a parametric distribution model p(x; θ).

The (log-)likelihood of θ given X (assuming i.i.d. sampling):

log p(X; θ) ≜ ∑_{i=1}^{N} log p(x_i; θ).

The ML estimate of θ:

θ̂_ML ≜ argmax_θ log p(X; θ)
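As an illustration (not part of the slides): for a Gaussian model p(x; θ) = N(x; μ, Σ), the ML estimates have the closed form of the sample mean and the (1/N) sample covariance. A minimal sketch with synthetic data:

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic data from an assumed 2-D Gaussian.
    true_mu = np.array([1.0, -2.0])
    true_Sigma = np.array([[1.0, 0.3],
                           [0.3, 0.5]])
    X = rng.multivariate_normal(true_mu, true_Sigma, size=500)  # N x d

    # ML estimates: sample mean and 1/N sample covariance.
    mu_hat = X.mean(axis=0)
    centered = X - mu_hat
    Sigma_hat = centered.T @ centered / X.shape[0]

    print(mu_hat)
    print(Sigma_hat)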
What if we remove the restriction that ∀c: Σ_c = Σ?

Compute ML estimates of μ_c, Σ_c for each c.

We get discriminants (and decision boundaries) quadratic in x:

δ_c(x) = −(1/2) x^T Σ_c^{-1} x + μ_c^T Σ_c^{-1} x − ⟨const in x⟩

(as shown in PS1).
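A hedged sketch (illustrative data, not from the slides): fit μ̂_c, Σ̂_c per class by ML and classify with δ_c(x) = log p_c(x) + log P_c, which is quadratic in x. The full log-density is used here; it differs from the simplified δ_c above only by terms constant in x.

    import numpy as np
    from scipy.stats import multivariate_normal

    rng = np.random.default_rng(1)

    # Synthetic two-class data with different covariances (assumed example).
    X0 = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.0], [0.0, 0.2]], size=300)
    X1 = rng.multivariate_normal([2.0, 1.0], [[0.3, 0.1], [0.1, 1.5]], size=300)

    def gaussian_ml(X):
        """Per-class ML estimates: sample mean and 1/N sample covariance."""
        mu = X.mean(axis=0)
        C = X - mu
        return mu, C.T @ C / len(X)

    params = [gaussian_ml(X0), gaussian_ml(X1)]
    priors = [0.5, 0.5]  # equal priors assumed

    def delta(x):
        """delta_c(x) = log p_c(x) + log P_c for each class c."""
        return np.array([multivariate_normal(mu, S).logpdf(x) + np.log(P)
                         for (mu, S), P in zip(params, priors)])

    x_new = np.array([1.0, 0.5])
    print("predicted class:", int(np.argmax(delta(x_new))))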
A quadratic form in x: x^T A x.
What do quadratic boundaries look like in 2D?

Second-degree curves can be any conic section: an ellipse, a parabola, a hyperbola, or a degenerate case such as a pair of lines.
Can all of these arise from two Gaussian classes?
Reminder: there are three sources of error: noise variance (irreducible), structural error due to our choice of model class, and estimation error due to our choice of model from that class.

In generative models, estimation error may be due to overfitting.
Two ways to address this: regularization (MAP estimation instead of ML), and controlling the number of parameters (degrees of freedom) in the model.
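One concrete instance of the regularization idea, sketched under assumptions not specified in the slides: shrink the ML covariance toward a scaled identity, which can be read as a MAP-style estimate under a suitable prior. The mixing weight lam is an assumed hyperparameter.

    import numpy as np

    def regularized_cov(X, lam=0.1):
        """Shrink the ML covariance toward a spherical target.

        Sigma_reg = (1 - lam) * Sigma_ML + lam * (avg variance) * I.
        lam in [0, 1] is an assumed hyperparameter; lam = 0 recovers the ML estimate.
        """
        mu = X.mean(axis=0)
        C = X - mu
        Sigma_ml = C.T @ C / len(X)
        target = np.eye(X.shape[1]) * np.trace(Sigma_ml) / X.shape[1]
        return (1.0 - lam) * Sigma_ml + lam * target

    # With few samples in high dimension, Sigma_ML is singular; the
    # regularized estimate remains invertible.
    rng = np.random.default_rng(2)
    X_small = rng.normal(size=(5, 10))  # N = 5 points in d = 10
    print(np.linalg.cond(regularized_cov(X_small)))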
A single Gaussian in R^d: d parameters for the mean, plus the parameters of the covariance model.

Full covariance (d(d + 1)/2 parameters):

Σ = [ σ_1^2   σ_12    ...   σ_1d
      σ_12    σ_2^2   ...   σ_2d
      ...
      σ_1d    σ_2d    ...   σ_d^2 ]

Diagonal covariance (d parameters):

Σ = [ σ_1^2   0       ...   0
      0       σ_2^2   ...   0
      ...
      0       0       ...   σ_d^2 ]
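A tiny arithmetic check (not from the slides) of the parameter counts for a single Gaussian in R^d under the full and diagonal covariance models:

    def gaussian_param_count(d, diagonal=False):
        """d for the mean, plus d (diagonal) or d*(d+1)/2 (full symmetric) for Sigma."""
        return d + (d if diagonal else d * (d + 1) // 2)

    for d in (2, 10, 100):
        print(d, gaussian_param_count(d), gaussian_param_count(d, diagonal=True))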