Lecture 11: Support Vector Machines

TTIC 31020: Introduction to Machine Learning

Instructor: Greg Shakhnarovich

Lecture by Andreas Argyriou

TTI–Chicago

October 20, 2010

Plan for today

Large margin classification; optimal separating hyperplane

Support Vector Machines

Optimal linear classifier

Which decision boundary is better?

Regularization alone does not capture this intuition

Classification margin

Recall the geometry of linear classification:

Discriminant function:

y(x) = w^T x + w_0

Distance from a correctly classified (x, y) to the boundary:

\frac{1}{\|w\|} \, y \, (w^T x + w_0)

[Figure: geometry of a linear decision boundary in the (x_1, x_2) plane, showing w, a point x and its projection x_\perp onto the boundary, the signed distance y(x)/\|w\|, the offset -w_0/\|w\| of the boundary y = 0 from the origin, and the decision regions R_1 (y > 0) and R_2 (y < 0).]

Important: the distance does not change if we scale w \to a w, w_0 \to a w_0.
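A minimal numeric sketch (not part of the original slides; the weight vector, bias, and point are made up for illustration) of the two quantities above: the discriminant y(x) = w^T x + w_0, the signed distance (1/\|w\|) y (w^T x + w_0), and a check that rescaling (w, w_0) leaves the distance unchanged.

import numpy as np

w = np.array([2.0, 1.0])      # hypothetical weight vector
w0 = -1.0                     # hypothetical bias

def discriminant(x):
    # y(x) = w^T x + w_0
    return w @ x + w0

def signed_distance(x, y):
    # (1/||w||) * y * (w^T x + w_0): positive iff (x, y) is correctly
    # classified; its magnitude is the Euclidean distance to the boundary.
    return y * discriminant(x) / np.linalg.norm(w)

x = np.array([1.0, 2.0])
print(signed_distance(x, +1))                               # 3/sqrt(5) ~ 1.342

# Rescaling (w, w_0) -> (a w, a w_0) leaves the distance unchanged:
a = 10.0
print(+1 * (a * w @ x + a * w0) / np.linalg.norm(a * w))    # same value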

Large margin classifier

Distance from a correctly classified (x, y) to the boundary:

\frac{1}{\|w\|} \, y \, (w^T x + w_0)

Margin of the classifier on X = \{(x_i, y_i)\}_{i=1}^N, assuming it achieves 100% accuracy: the distance to the closest point,

\min_i \, \frac{1}{\|w\|} \, y_i \, (w^T x_i + w_0)

We are interested in a large margin classifier:

\arg\max_{w, w_0} \left\{ \frac{1}{\|w\|} \min_i \, y_i \, (w^T x_i + w_0) \right\}
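As a quick illustration (my own sketch, with a made-up toy dataset), the margin of a classifier that separates the data is the smallest of these signed distances; two separating hyperplanes can have quite different margins.

import numpy as np

def classifier_margin(w, w0, X, y):
    # min_i (1/||w||) y_i (w^T x_i + w_0); meaningful when every term is > 0,
    # i.e., when the classifier gets all training points right.
    return np.min(y * (X @ w + w0)) / np.linalg.norm(w)

X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])

# Two hyperplanes that both separate the data, with different margins:
print(classifier_margin(np.array([1.0, 1.0]), 0.0, X, y))   # ~2.83
print(classifier_margin(np.array([1.0, 0.0]), 0.0, X, y))   # 2.0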

Optimal separating hyperplane

So, we seek

\arg\max_{w, w_0} \left\{ \frac{1}{\|w\|} \min_i \, y_i \, (w^T x_i + w_0) \right\}

Hard optimization problem... but: we can set

\min_i \, y_i \, (w^T x_i + w_0) = 1,

since we can rescale w and w_0 appropriately.

Then, the optimization becomes:

\arg\max_{w, w_0} \frac{1}{\|w\|} \quad \text{s.t.} \quad y_i (w^T x_i + w_0) \ge 1, \; \forall i = 1, \dots, N

\Rightarrow \quad \arg\min_{w} \|w\|^2 \quad \text{s.t.} \quad y_i (w^T x_i + w_0) \ge 1, \; \forall i = 1, \dots, N.
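For concreteness, here is a hedged sketch (not the lecture's code) that feeds this exact constrained problem, minimize \|w\|^2 subject to y_i (w^T x_i + w_0) \ge 1, to a general-purpose solver (scipy's SLSQP) on a tiny separable toy set; a dedicated QP or SVM solver would be used in practice.

import numpy as np
from scipy.optimize import minimize

# Tiny linearly separable toy set; labels in {-1, +1}.
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
d = X.shape[1]

def objective(theta):
    w = theta[:d]                      # theta packs [w, w_0]
    return w @ w                       # ||w||^2

def margin_constraints(theta):
    w, w0 = theta[:d], theta[d]
    return y * (X @ w + w0) - 1.0      # every entry must be >= 0

res = minimize(objective,
               x0=np.ones(d + 1),      # a feasible start for this toy data
               constraints=[{"type": "ineq", "fun": margin_constraints}],
               method="SLSQP")

w_opt, w0_opt = res.x[:d], res.x[d]
print("w* =", w_opt, " w_0* =", w0_opt)
print("margin =", 1.0 / np.linalg.norm(w_opt))   # 1/||w*|| by the scaling above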

Representer theorem

Consider the optimization problem

w^* = \arg\min_w \|w\|^2 \quad \text{s.t.} \quad y_i (w^T x_i + w_0) \ge 1 \; \forall i

Theorem: the solution can be represented as

w^* = \sum_{i=1}^{N} \alpha_i x_i

This is the “magic” behind Support Vector Machines!
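One can check this numerically with an off-the-shelf SVM (a sketch with made-up data, not from the lecture). In scikit-learn's SVC with a linear kernel, dual_coef_ stores y_i \alpha_i for the support vectors, so the fitted weight vector is exactly a linear combination of training points, with the labels folded into the coefficients.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=+3.0, size=(20, 2)),
               rng.normal(loc=-3.0, size=(20, 2))])
y = np.array([1] * 20 + [-1] * 20)

# A large C approximates the hard-margin problem on separable data.
clf = SVC(kernel="linear", C=100.0).fit(X, y)

# dual_coef_ stores y_i * alpha_i for the support vectors, so the
# representer form w = sum_i (y_i alpha_i) x_i recovers the weight vector.
w_from_alphas = clf.dual_coef_ @ clf.support_vectors_
print(np.allclose(w_from_alphas, clf.coef_))   # expect True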

Representer theorem - proof I

w^* = \arg\min_w \|w\|^2 \quad \text{s.t.} \quad y_i (w^T x_i + w_0) \ge 1 \; \forall i
\quad \Rightarrow \quad w^* = \sum_{i=1}^{N} \alpha_i x_i

Let w^* = w_X + w_\perp, where w_X = \sum_{i=1}^{N} \alpha_i x_i \in \mathrm{Span}(x_1, \dots, x_N) and w_\perp is orthogonal to \mathrm{Span}(x_1, \dots, x_N), i.e., w_\perp^T x_i = 0 for all i = 1, \dots, N.

For all x_i we have

w^{*T} x_i = w_X^T x_i + w_\perp^T x_i = w_X^T x_i,

therefore,

y_i (w^{*T} x_i + w_0) \ge 1 \;\Rightarrow\; y_i (w_X^T x_i + w_0) \ge 1.

Representer theorem - proof II

w^* = \arg\min_w \|w\|^2 \quad \text{s.t.} \quad y_i (w^T x_i + w_0) \ge 1 \; \forall i
\quad \Rightarrow \quad w^* = \sum_{i=1}^{N} \alpha_i x_i

Now, we have

\|w^*\|^2 = w^{*T} w^* = (w_X + w_\perp)^T (w_X + w_\perp)
          = \underbrace{w_X^T w_X}_{\|w_X\|^2} + \underbrace{w_\perp^T w_\perp}_{\|w_\perp\|^2},

since w_X^T w_\perp = 0.

Suppose w_\perp \ne 0. Then, we have a solution w_X that satisfies all the constraints, and for which

\|w_X\|^2 < \|w_X\|^2 + \|w_\perp\|^2 = \|w^*\|^2.

This contradicts optimality of w^*, hence w^* = w_X. QED
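A small numeric illustration of the proof idea (my own sketch, with random data): projecting any w onto \mathrm{Span}(x_1, \dots, x_N) leaves every w^T x_i, and hence every margin constraint, unchanged, while the squared norm can only shrink.

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 10))            # N = 5 training points in d = 10 dims
y = np.array([1.0, 1.0, -1.0, -1.0, 1.0])
w = rng.normal(size=10)                 # an arbitrary candidate weight vector
w0 = 0.5

# Orthogonal projector onto Span(x_1, ..., x_N): w_X = P w, w_perp = w - w_X.
P = X.T @ np.linalg.pinv(X @ X.T) @ X
w_X = P @ w

# The orthogonal part contributes nothing to any w^T x_i ...
print(np.allclose(X @ w, X @ w_X))                            # True
# ... so every constraint value y_i (w^T x_i + w_0) is unchanged ...
print(np.allclose(y * (X @ w + w0), y * (X @ w_X + w0)))      # True
# ... while the norm does not increase:
print(np.linalg.norm(w_X) <= np.linalg.norm(w))               # True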

Margin and regularization

In the general d-dimensional case, we solve the regularization problem:

\text{minimize } \|w\|^2 = \sum_{j=1}^{d} w_j^2,

subject to the margin constraint

\forall i, \quad y_i (w_0 + w^T x_i) - 1 \ge 0.

The representer theorem tells us that the solution is expressed as a linear combination of the training examples.

Lagrange multipliers

\min_w \; \frac{1}{2} \|w\|^2 = \frac{1}{2} \sum_{j=1}^{d} w_j^2,

subject to y_i (w_0 + w^T x_i) - 1 \ge 0, \quad i = 1, \dots, N.

We will associate with each constraint the loss

\max_{\alpha \ge 0} \; \alpha \left[ 1 - y_i (w_0 + w^T x_i) \right] =
\begin{cases}
0, & \text{if } y_i (w_0 + w^T x_i) - 1 \ge 0, \\
\infty, & \text{otherwise (constraint violated).}
\end{cases}

We can reformulate our problem now:

\min_w \left\{ \frac{1}{2} \|w\|^2 + \sum_{i=1}^{N} \max_{\alpha_i \ge 0} \alpha_i \left[ 1 - y_i (w_0 + w^T x_i) \right] \right\}
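A toy check (not from the lecture) that each term \max_{\alpha \ge 0} \alpha [1 - y_i (w_0 + w^T x_i)] behaves as an indicator of the constraint: zero when the margin constraint holds, +\infty when it is violated.

import numpy as np

def constraint_penalty(margin_value):
    # max over alpha >= 0 of alpha * (1 - margin_value):
    # the best alpha is 0 if the constraint holds, and grows without
    # bound (giving +inf) if it is violated.
    slack = 1.0 - margin_value
    return 0.0 if slack <= 0 else np.inf

print(constraint_penalty(1.5))   # satisfied with room to spare -> 0.0
print(constraint_penalty(1.0))   # exactly on the margin        -> 0.0
print(constraint_penalty(0.3))   # violated                     -> inf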

Max-margin optimization

We want all the constraint terms to be zero:

\min_w \left\{ \frac{1}{2} \|w\|^2 + \sum_{i=1}^{N} \max_{\alpha_i \ge 0} \alpha_i \left[ 1 - y_i (w_0 + w^T x_i) \right] \right\}

= \min_w \max_{\{\alpha_i \ge 0\}} \left\{ \frac{1}{2} \|w\|^2 + \sum_{i=1}^{N} \alpha_i \left[ 1 - y_i (w_0 + w^T x_i) \right] \right\}

= \max_{\{\alpha_i \ge 0\}} \min_w \underbrace{\left\{ \frac{1}{2} \|w\|^2 + \sum_{i=1}^{N} \alpha_i \left[ 1 - y_i (w_0 + w^T x_i) \right] \right\}}_{J(w, w_0; \alpha)}.

We need to minimize J(w, w_0; \alpha) for any setting of \alpha = [\alpha_1, \dots, \alpha_N]^T.
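To make J concrete, here is a hedged sketch (names and data are illustrative, not from the lecture) that evaluates J(w, w_0; \alpha) = \frac{1}{2}\|w\|^2 + \sum_i \alpha_i [1 - y_i (w_0 + w^T x_i)] on toy inputs; this is the function to be minimized over w and w_0 for a fixed \alpha.

import numpy as np

def lagrangian(w, w0, alpha, X, y):
    # J(w, w_0; alpha) = 1/2 ||w||^2 + sum_i alpha_i [1 - y_i (w_0 + w^T x_i)]
    margins = y * (X @ w + w0)
    return 0.5 * (w @ w) + alpha @ (1.0 - margins)

# Hypothetical toy values, just to exercise the function.
X = np.array([[2.0, 2.0], [-2.0, -1.0]])
y = np.array([1.0, -1.0])
alpha = np.array([0.1, 0.1])
w = np.array([0.5, 0.5])
w0 = 0.0

print(lagrangian(w, w0, alpha, X, y))   # 0.25 + 0.1*(1-2) + 0.1*(1-1.5) = 0.10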