TTIC 31020: Introduction to Machine Learning
Instructor: Greg Shakhnarovich
Lecture by Andreas Argyriou
TTI–Chicago
October 20, 2010
Large margin classification; optimal separating hyperplane
Support Vector Machines
Which decision boundary is better?
Regularization alone does not capture this intuition
Recall the geometry of linear classification:
Discriminant function:
  y(x) = w^T x + w_0
Distance from a correctly classified (x, y) to the boundary:
  y (w^T x + w_0) / ‖w‖
[Figure: geometry of the linear decision boundary y(x) = 0 in the (x_1, x_2) plane, showing the normal vector w, a point x with orthogonal projection x_⊥ onto the boundary, its signed distance y(x)/‖w‖, the boundary's offset −w_0/‖w‖ from the origin, and the decision regions R_1 (y > 0) and R_2 (y < 0).]
Important: the distance does not change if we scale w → aw, w_0 → aw_0.
Distance from a correctly classified (x, y) to the boundary:
  y (w^T x + w_0) / ‖w‖
Margin of the classifier on X = {(x_i, y_i)}_{i=1}^N, assuming it achieves 100% accuracy: the distance to the closest point,
  min_i  y_i (w^T x_i + w_0) / ‖w‖
We are interested in a large margin classifier:
  argmax_{w, w_0}  (1/‖w‖) min_i  y_i (w^T x_i + w_0)
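To make the margin concrete, here is a minimal NumPy sketch (not from the slides; the data and hyperplane values are made up for illustration) that evaluates min_i y_i (w^T x_i + w_0) / ‖w‖ and checks that rescaling (w, w_0) by a positive constant leaves the margin unchanged.

```python
import numpy as np

def margin(X, y, w, w0):
    """Margin of the hyperplane (w, w0) on a correctly classified set:
    min_i y_i (w^T x_i + w_0) / ||w||."""
    return np.min(y * (X @ w + w0)) / np.linalg.norm(w)

# Toy 2-D data with labels in {-1, +1}, separable by the hyperplane below.
X = np.array([[2.0, 2.0], [3.0, 1.5], [-1.0, -1.0], [-2.0, 0.5]])
y = np.array([1, 1, -1, -1])
w, w0 = np.array([1.0, 1.0]), -1.0

m = margin(X, y, w, w0)
# Rescaling (w, w0) -> (a*w, a*w0) with a > 0 leaves the margin unchanged.
assert np.isclose(m, margin(X, y, 5.0 * w, 5.0 * w0))
print(f"margin = {m:.3f}")
```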
So, we seek  argmax_{w, w_0}  (1/‖w‖) min_i  y_i (w^T x_i + w_0).
Hard optimization problem... but: we can set
  min_i  y_i (w^T x_i + w_0) = 1,
since we can rescale w, w_0 appropriately.
Then, the optimization becomes:
  argmax_{w, w_0}  1/‖w‖   s.t.   y_i (w^T x_i + w_0) ≥ 1,  ∀i = 1, ..., N.
Maximizing 1/‖w‖ is the same as minimizing ‖w‖^2, so this is equivalent to
  argmin_{w, w_0}  ‖w‖^2   s.t.   y_i (w^T x_i + w_0) ≥ 1,  ∀i = 1, ..., N.
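The problem above is a quadratic program. As an illustration only (not from the slides), the sketch below solves the hard-margin primal directly with cvxpy on the toy data from the previous snippet; it assumes cvxpy is installed and the data are linearly separable.

```python
import numpy as np
import cvxpy as cp  # assumption: cvxpy is available; any QP solver would do

# Toy linearly separable data with labels in {-1, +1} (made up for illustration).
X = np.array([[2.0, 2.0], [3.0, 1.5], [-1.0, -1.0], [-2.0, 0.5]])
y = np.array([1, 1, -1, -1])

w = cp.Variable(2)
w0 = cp.Variable()

# Hard-margin primal: minimize ||w||^2 s.t. y_i (w^T x_i + w_0) >= 1 for all i.
constraints = [cp.multiply(y, X @ w + w0) >= 1]
cp.Problem(cp.Minimize(cp.sum_squares(w)), constraints).solve()

print("w* =", w.value, " w0* =", w0.value)
# With the closest points sitting on the margin, the achieved margin is 1/||w*||.
print("margin =", 1.0 / np.linalg.norm(w.value))
```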
Consider the optimization problem
  w* = argmin_w  ‖w‖^2   s.t.   y_i (w^T x_i + w_0) ≥ 1  ∀i.
Theorem: the solution can be represented as
  w* = Σ_{i=1}^N α_i x_i.
This is the “magic” behind Support Vector Machines!
Proof: Let w* = w_X + w_⊥, where w_X = Σ_{i=1}^N α_i x_i ∈ Span(x_1, ..., x_N) and w_⊥ is orthogonal to Span(x_1, ..., x_N), i.e., w_⊥^T x_i = 0 for all i = 1, ..., N.
For all x_i we have
  w*^T x_i = w_X^T x_i + w_⊥^T x_i = w_X^T x_i,
therefore,
  y_i (w*^T x_i + w_0) ≥ 1  ⇒  y_i (w_X^T x_i + w_0) ≥ 1.
Now, we have
  ‖w*‖^2 = w*^T w* = (w_X + w_⊥)^T (w_X + w_⊥) = ‖w_X‖^2 + ‖w_⊥‖^2,
since w_X^T w_⊥ = 0.
Suppose w_⊥ ≠ 0. Then w_X also satisfies all the constraints, and
  ‖w_X‖^2 < ‖w_X‖^2 + ‖w_⊥‖^2 = ‖w*‖^2.
This contradicts the optimality of w*, hence w* = w_X. QED
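As a numerical sanity check (my own illustration, not part of the lecture), one can verify the representer property with scikit-learn: for a linear SVM, the weight vector can be reassembled from the dual coefficients and the support vectors, i.e., w* is a linear combination of training points.

```python
import numpy as np
from sklearn.svm import SVC  # assumption: scikit-learn is available

# Toy separable data as before (made up for illustration).
X = np.array([[2.0, 2.0], [3.0, 1.5], [-1.0, -1.0], [-2.0, 0.5]])
y = np.array([1, 1, -1, -1])

# A very large C approximates the hard-margin SVM.
clf = SVC(kernel="linear", C=1e10).fit(X, y)

# Representer property: w* lies in the span of the training points.
# sklearn stores the combination coefficients (alpha_i * y_i) in dual_coef_,
# and only the support vectors get nonzero coefficients.
w_from_span = clf.dual_coef_ @ clf.support_vectors_
assert np.allclose(w_from_span, clf.coef_)
print("w* =", clf.coef_.ravel(), "built from", len(clf.support_), "support vectors")
```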
In the general d-dimensional case, we solve the regularization problem:
  minimize  ‖w‖^2 = Σ_{j=1}^d w_j^2,
subject to the margin constraint
  ∀i,  y_i (w_0 + w^T x_i) − 1 ≥ 0.
The representer theorem tells us that the solution is expressed as a linear combination of the training examples.
min_w  ‖w‖^2 = Σ_{j=1}^d w_j^2,
subject to  y_i (w_0 + w^T x_i) − 1 ≥ 0,  i = 1, ..., N.
We will associate with each constraint the loss
  max_{α≥0}  α (1 − y_i (w_0 + w^T x_i))  =  0 if y_i (w_0 + w^T x_i) ≥ 1,  ∞ otherwise (constraint violated).
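To see why this max acts as a barrier, here is a tiny sketch (mine, not from the slides): for a single training point, the supremum over α ≥ 0 of α (1 − y_i (w_0 + w^T x_i)) is 0 when the margin constraint holds and grows without bound when it is violated.

```python
import numpy as np

def constraint_penalty(slack, alphas=np.linspace(0.0, 1e6, 7)):
    """Approximate max over alpha >= 0 of alpha * slack,
    where slack = 1 - y_i (w_0 + w^T x_i).
    The true supremum is 0 if slack <= 0 and +inf if slack > 0."""
    return np.max(alphas * slack)

# Constraint satisfied: y_i (w_0 + w^T x_i) = 1.5  ->  slack = -0.5  ->  best alpha is 0.
print(constraint_penalty(-0.5))  # 0.0
# Constraint violated: y_i (w_0 + w^T x_i) = 0.2  ->  slack = 0.8  ->  penalty grows with alpha.
print(constraint_penalty(0.8))   # 800000.0, unbounded as alpha -> infinity
```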
We can reformulate our problem now:
  min_w  ‖w‖^2 + Σ_{i=1}^N max_{α_i≥0} α_i (1 − y_i (w_0 + w^T x_i)).
We want all the constraint terms to be zero:
  min_w  ‖w‖^2 + Σ_{i=1}^N max_{α_i≥0} α_i (1 − y_i (w_0 + w^T x_i))
    = min_w max_{α_i≥0}  ‖w‖^2 + Σ_{i=1}^N α_i (1 − y_i (w_0 + w^T x_i))
    = max_{α_i≥0} min_w  ‖w‖^2 + Σ_{i=1}^N α_i (1 − y_i (w_0 + w^T x_i)),
where the inner expression is J(w, w_0; α) = ‖w‖^2 + Σ_{i=1}^N α_i (1 − y_i (w_0 + w^T x_i)). (Swapping min and max is justified here: this is a convex quadratic program with affine constraints, so strong duality holds.)
We need to minimize J(w, w_0; α) for any setting of α = [α_1, ..., α_N]^T.
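To keep the notation concrete, the following sketch (mine, not from the slides) evaluates J(w, w_0; α) on the toy data used above; the α values are arbitrary placeholders, not optimal multipliers.

```python
import numpy as np

def lagrangian_J(w, w0, alpha, X, y):
    """J(w, w_0; alpha) = ||w||^2 + sum_i alpha_i * (1 - y_i (w_0 + w^T x_i))."""
    slacks = 1.0 - y * (X @ w + w0)
    return np.dot(w, w) + np.dot(alpha, slacks)

# Toy data as before; alpha is chosen arbitrarily just to evaluate J once.
X = np.array([[2.0, 2.0], [3.0, 1.5], [-1.0, -1.0], [-2.0, 0.5]])
y = np.array([1, 1, -1, -1])
w, w0 = np.array([1.0, 1.0]), -1.0
alpha = np.array([0.3, 0.0, 0.3, 0.1])

print("J =", lagrangian_J(w, w0, alpha, X, y))
```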