CS229 Lecture notes
Andrew Ng

Part IV
Generative Learning algorithms

So far, we've mainly been talking about learning algorithms that model p(y|x; θ), the conditional distribution of y given x. For instance, logistic regression modeled p(y|x; θ) as hθ(x) = g(θ^T x), where g is the sigmoid function. In these notes, we'll talk about a different type of learning algorithm.

Consider a classification problem in which we want to learn to distinguish between elephants (y = 1) and dogs (y = 0), based on some features of an animal. Given a training set, an algorithm like logistic regression or the perceptron algorithm (basically) tries to find a straight line (that is, a decision boundary) that separates the elephants and dogs. Then, to classify a new animal as either an elephant or a dog, it checks on which side of the decision boundary it falls, and makes its prediction accordingly.

Here's a different approach. First, looking at elephants, we can build a model of what elephants look like. Then, looking at dogs, we can build a separate model of what dogs look like. Finally, to classify a new animal, we can match the new animal against the elephant model, and match it against the dog model, to see whether the new animal looks more like the elephants or more like the dogs we had seen in the training set.

Algorithms that try to learn p(y|x) directly (such as logistic regression), or algorithms that try to learn mappings directly from the space of inputs X to the labels {0, 1} (such as the perceptron algorithm), are called discriminative learning algorithms. Here, we'll talk about algorithms that instead try to model p(x|y) (and p(y)). These algorithms are called generative learning algorithms. For instance, if y indicates whether an example is a dog (0) or an elephant (1), then p(x|y = 0) models the distribution of dogs' features, and p(x|y = 1) models the distribution of elephants' features.

After modeling p(y) (called the class priors) and p(x|y), our algorithm can then use Bayes rule to derive the posterior distribution on y given x:

    p(y \mid x) = \frac{p(x \mid y)\, p(y)}{p(x)}.

Here, the denominator is given by p(x) = p(x|y = 1)p(y = 1) + p(x|y = 0)p(y = 0) (you should be able to verify that this is true from the standard properties of probabilities), and thus can also be expressed in terms of the quantities p(x|y) and p(y) that we've learned. Actually, if we are calculating p(y|x) in order to make a prediction, then we don't actually need to calculate the denominator, since

    \arg\max_y p(y \mid x) = \arg\max_y \frac{p(x \mid y)\, p(y)}{p(x)} = \arg\max_y p(x \mid y)\, p(y).

1 Gaussian discriminant analysis

The first generative learning algorithm that we'll look at is Gaussian discriminant analysis (GDA). In this model, we'll assume that p(x|y) is distributed according to a multivariate normal distribution. Let's talk briefly about the properties of multivariate normal distributions before moving on to the GDA model itself.

1.1 The multivariate normal distribution

The multivariate normal distribution in n dimensions, also called the multivariate Gaussian distribution, is parameterized by a mean vector µ ∈ R^n and a covariance matrix Σ ∈ R^{n×n}, where Σ ≥ 0 is symmetric and positive semi-definite. Also written "N(µ, Σ)", its density is given by:

    p(x; \mu, \Sigma) = \frac{1}{(2\pi)^{n/2} |\Sigma|^{1/2}} \exp\left( -\frac{1}{2} (x - \mu)^T \Sigma^{-1} (x - \mu) \right).

In the equation above, "|Σ|" denotes the determinant of the matrix Σ.
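As a quick illustration of this density (not part of the original notes), here is a minimal NumPy sketch; the function name gaussian_density and the example values are my own.

```python
import numpy as np

def gaussian_density(x, mu, Sigma):
    """Evaluate the multivariate normal density N(mu, Sigma) at the point x."""
    x, mu, Sigma = np.asarray(x, float), np.asarray(mu, float), np.asarray(Sigma, float)
    n = mu.shape[0]
    diff = x - mu
    # Normalization constant (2*pi)^(n/2) * |Sigma|^(1/2)
    norm_const = (2 * np.pi) ** (n / 2) * np.sqrt(np.linalg.det(Sigma))
    # Quadratic form (x - mu)^T Sigma^{-1} (x - mu), computed with a linear solve
    quad = diff @ np.linalg.solve(Sigma, diff)
    return np.exp(-0.5 * quad) / norm_const

# Standard 2-D Gaussian evaluated at its mean: the density is 1/(2*pi), about 0.159
print(gaussian_density([0.0, 0.0], [0.0, 0.0], np.eye(2)))
```

Solving the linear system instead of forming Σ^{-1} explicitly is only a numerical preference; it computes the same quantity.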
For a random variable X distributed N(µ, Σ), the mean is (unsurprisingly) given by µ:

    E[X] = \int_x x\, p(x; \mu, \Sigma)\, dx = \mu.

The covariance of a vector-valued random variable Z is defined as Cov(Z) = E[(Z − E[Z])(Z − E[Z])^T]. This generalizes the notion of the variance of a real-valued random variable.

1.2 The Gaussian Discriminant Analysis model

When we have a classification problem in which the input features x are continuous-valued random variables, we can then use the Gaussian Discriminant Analysis (GDA) model, which models p(x|y) using a multivariate normal distribution. The model is:

    y \sim \mathrm{Bernoulli}(\phi)
    x \mid y = 0 \sim \mathcal{N}(\mu_0, \Sigma)
    x \mid y = 1 \sim \mathcal{N}(\mu_1, \Sigma)

Writing out the distributions, this is:

    p(y) = \phi^y (1 - \phi)^{1-y}
    p(x \mid y = 0) = \frac{1}{(2\pi)^{n/2} |\Sigma|^{1/2}} \exp\left( -\frac{1}{2} (x - \mu_0)^T \Sigma^{-1} (x - \mu_0) \right)
    p(x \mid y = 1) = \frac{1}{(2\pi)^{n/2} |\Sigma|^{1/2}} \exp\left( -\frac{1}{2} (x - \mu_1)^T \Sigma^{-1} (x - \mu_1) \right)

Here, the parameters of our model are φ, Σ, µ0 and µ1. (Note that while there are two different mean vectors µ0 and µ1, this model is usually applied using only one covariance matrix Σ.) The log-likelihood of the data is given by

    \ell(\phi, \mu_0, \mu_1, \Sigma) = \log \prod_{i=1}^m p(x^{(i)}, y^{(i)}; \phi, \mu_0, \mu_1, \Sigma)
                                     = \log \prod_{i=1}^m p(x^{(i)} \mid y^{(i)}; \mu_0, \mu_1, \Sigma)\, p(y^{(i)}; \phi).

By maximizing ℓ with respect to the parameters, we find the maximum likelihood estimates of the parameters (see problem set 1) to be:

    \phi = \frac{1}{m} \sum_{i=1}^m 1\{y^{(i)} = 1\}
    \mu_0 = \frac{\sum_{i=1}^m 1\{y^{(i)} = 0\}\, x^{(i)}}{\sum_{i=1}^m 1\{y^{(i)} = 0\}}
    \mu_1 = \frac{\sum_{i=1}^m 1\{y^{(i)} = 1\}\, x^{(i)}}{\sum_{i=1}^m 1\{y^{(i)} = 1\}}
    \Sigma = \frac{1}{m} \sum_{i=1}^m (x^{(i)} - \mu_{y^{(i)}})(x^{(i)} - \mu_{y^{(i)}})^T.

Pictorially, what the algorithm is doing can be seen as follows:

[Figure: the training set for the two classes, the contours of the two fitted Gaussians, and the straight-line decision boundary.]

Shown in the figure are the training set, as well as the contours of the two Gaussian distributions that have been fit to the data in each of the two classes. Note that the two Gaussians have contours that are the same shape and orientation, since they share a covariance matrix Σ, but they have different means µ0 and µ1. Also shown in the figure is the straight line giving the decision boundary at which p(y = 1|x) = 0.5. On one side of the boundary, we'll predict y = 1 to be the most likely outcome, and on the other side, we'll predict y = 0.
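To make these estimates concrete, here is a minimal NumPy sketch of fitting the GDA parameters and classifying a point by comparing p(x|y = 1)p(y = 1) against p(x|y = 0)p(y = 0). It is an illustration rather than part of the notes; the names fit_gda and predict_gda are my own.

```python
import numpy as np

def fit_gda(X, y):
    """Maximum likelihood GDA parameters from an m x n design matrix X and binary labels y."""
    X, y = np.asarray(X, float), np.asarray(y)
    m = X.shape[0]
    phi = np.mean(y == 1)
    mu0 = X[y == 0].mean(axis=0)
    mu1 = X[y == 1].mean(axis=0)
    # Shared covariance: average outer product of each example's residual from its class mean
    centered = X - np.where((y == 1)[:, None], mu1, mu0)
    Sigma = centered.T @ centered / m
    return phi, mu0, mu1, Sigma

def predict_gda(x, phi, mu0, mu1, Sigma):
    """Predict 1 if p(x|y=1)p(y=1) > p(x|y=0)p(y=0); the denominator p(x) cancels."""
    x = np.asarray(x, float)
    def quad(mu):
        diff = x - mu
        # Log-density up to a constant that is shared by both classes (same Sigma)
        return -0.5 * diff @ np.linalg.solve(Sigma, diff)
    score1 = quad(mu1) + np.log(phi)
    score0 = quad(mu0) + np.log(1 - phi)
    return int(score1 > score0)
```

Because the two classes share Σ, the Gaussian normalization constant is identical on both sides of the comparison and can be dropped, which is why the sketch compares only the quadratic forms plus the log class priors.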
1.3 Discussion: GDA and logistic regression

The GDA model has an interesting relationship to logistic regression. If we view the quantity p(y = 1|x; φ, µ0, µ1, Σ) as a function of x, we'll find that it can be expressed in the form

    p(y = 1 \mid x; \phi, \Sigma, \mu_0, \mu_1) = \frac{1}{1 + \exp(-\theta^T x)},

where θ is some appropriate function of φ, Σ, µ0, µ1.¹ This is exactly the form that logistic regression, a discriminative algorithm, used to model p(y = 1|x).

¹This uses the convention of redefining the x^{(i)}'s on the right-hand side to be (n + 1)-dimensional vectors by adding the extra coordinate x_0^{(i)} = 1; see problem set 1.

When would we prefer one model over another? GDA and logistic regression will, in general, give different decision boundaries when trained on the same dataset. Which is better?

We just argued that if p(x|y) is multivariate Gaussian (with shared Σ), then p(y|x) necessarily follows a logistic function. The converse, however, is not true; i.e., p(y|x) being a logistic function does not imply p(x|y) is multivariate Gaussian. This shows that GDA makes stronger modeling assumptions about the data than does logistic regression. It turns out that when these modeling assumptions are correct, then GDA will find better fits to the data, and is a better model. Specifically, when p(x|y) is indeed Gaussian (with shared Σ), then GDA is asymptotically efficient. Informally, this means that in the limit of very large training sets (large m), there is no algorithm that is strictly better than GDA (in terms of, say, how accurately it estimates p(y|x)). In particular, it can be shown that in this setting, GDA will be a better algorithm than logistic regression; and more generally, even for small training set sizes, we would generally expect GDA to do better.

In contrast, by making significantly weaker assumptions, logistic regression is also more robust and less sensitive to incorrect modeling assumptions. There are many different sets of assumptions that would lead to p(y|x) taking the form of a logistic function. For example, if x|y = 0 ∼ Poisson(λ0) and x|y = 1 ∼ Poisson(λ1), then p(y|x) will be logistic. Logistic regression will also work well on Poisson data like this. But if we were to use GDA on such data, fitting Gaussian distributions to such non-Gaussian data, then the results will be less predictable, and GDA may (or may not) do well.

To summarize: GDA makes stronger modeling assumptions, and is more data efficient (i.e., requires less training data to learn "well") when the modeling assumptions are correct or at least approximately correct. Logistic regression makes weaker assumptions, and is significantly more robust to deviations from modeling assumptions. Specifically, when the data is indeed non-Gaussian, then in the limit of large datasets, logistic regression will almost always do better than GDA.

2 Naive Bayes

In the Naive Bayes model for spam classification, each feature x_i ∈ {0, 1} indicates whether word i of the dictionary appears in an email, and the Naive Bayes assumption is that the x_i's are conditionally independent given y, so that p(x|y) = ∏_i p(x_i|y); the parameters are φ_y = p(y = 1), φ_{j|y=1} = p(x_j = 1|y = 1) and φ_{j|y=0} = p(x_j = 1|y = 0). Given a training set {(x^{(i)}, y^{(i)}); i = 1, . . . , m}, we can write down the joint likelihood of the data:

    \mathcal{L}(\phi_y, \phi_{j|y=0}, \phi_{j|y=1}) = \prod_{i=1}^m p(x^{(i)}, y^{(i)}).

Maximizing this with respect to φ_y, φ_{j|y=0} and φ_{j|y=1} gives the maximum likelihood estimates:

    \phi_{j|y=1} = \frac{\sum_{i=1}^m 1\{x_j^{(i)} = 1 \wedge y^{(i)} = 1\}}{\sum_{i=1}^m 1\{y^{(i)} = 1\}}
    \phi_{j|y=0} = \frac{\sum_{i=1}^m 1\{x_j^{(i)} = 1 \wedge y^{(i)} = 0\}}{\sum_{i=1}^m 1\{y^{(i)} = 0\}}
    \phi_y = \frac{\sum_{i=1}^m 1\{y^{(i)} = 1\}}{m}

In the equations above, the "∧" symbol means "and." The parameters have a very natural interpretation. For instance, φ_{j|y=1} is just the fraction of the spam (y = 1) emails in which word j does appear.

Having fit all these parameters, to make a prediction on a new example with features x, we then simply calculate

    p(y = 1 \mid x) = \frac{p(x \mid y = 1)\, p(y = 1)}{p(x)}
                    = \frac{\left( \prod_{i=1}^n p(x_i \mid y = 1) \right) p(y = 1)}{\left( \prod_{i=1}^n p(x_i \mid y = 1) \right) p(y = 1) + \left( \prod_{i=1}^n p(x_i \mid y = 0) \right) p(y = 0)},

and pick whichever class has the higher posterior probability.

Lastly, we note that while we have developed the Naive Bayes algorithm mainly for the case of problems where the features x_i are binary-valued, the generalization to where x_i can take values in {1, 2, . . . , k_i} is straightforward. Here, we would simply model p(x_i|y) as multinomial rather than as Bernoulli. Indeed, even if some original input attribute (say, the living area of a house, as in our earlier example) were continuous valued, it is quite common to discretize it (that is, turn it into a small set of discrete values) and apply Naive Bayes. For instance, if we use some feature x_i to represent living area, we might discretize the continuous values as follows:

    Living area (sq. feet) | < 400 | 400-800 | 800-1200 | 1200-1600 | > 1600
    x_i                    |   1   |    2    |    3     |     4     |    5

Thus, for a house with living area 890 square feet, we would set the value of the corresponding feature x_i to 3. We can then apply the Naive Bayes algorithm, and model p(x_i|y) with a multinomial distribution, as described previously. When the original, continuous-valued attributes are not well-modeled by a multivariate normal distribution, discretizing the features and using Naive Bayes (instead of GDA) will often result in a better classifier.
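Putting the estimates and the prediction rule together, here is a short NumPy sketch for binary features. It is illustrative only (not from the notes); the names fit_naive_bayes and predict_naive_bayes are my own, and no smoothing is applied yet.

```python
import numpy as np

def fit_naive_bayes(X, y):
    """Maximum likelihood estimates for binary-feature Naive Bayes (no smoothing)."""
    X, y = np.asarray(X), np.asarray(y)
    phi_y = np.mean(y == 1)
    # phi_{j|y=1}: fraction of y=1 examples in which feature j appears, and likewise for y=0
    phi_j_y1 = X[y == 1].mean(axis=0)
    phi_j_y0 = X[y == 0].mean(axis=0)
    return phi_y, phi_j_y0, phi_j_y1

def predict_naive_bayes(x, phi_y, phi_j_y0, phi_j_y1):
    """Posterior p(y = 1 | x) under the conditional-independence assumption."""
    x = np.asarray(x)
    # p(x|y) = product over features of p(x_j|y)
    px_y1 = np.prod(np.where(x == 1, phi_j_y1, 1 - phi_j_y1))
    px_y0 = np.prod(np.where(x == 1, phi_j_y0, 1 - phi_j_y0))
    return px_y1 * phi_y / (px_y1 * phi_y + px_y0 * (1 - phi_y))
```

With these unsmoothed estimates, a feature that never occurs in the training set makes both products zero, so the posterior evaluates to 0/0; that is exactly the problem the next section addresses.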
2.1 Laplace smoothing

The Naive Bayes algorithm as we have described it will work fairly well for many problems, but there is a simple change that makes it work much better, especially for text classification. Let's briefly discuss a problem with the algorithm in its current form, and then talk about how we can fix it.

Consider spam/email classification, and let's suppose that, after completing CS229 and having done excellent work on the project, you decide around June 2003 to submit the work you did to the NIPS conference for publication. (NIPS is one of the top machine learning conferences, and the deadline for submitting a paper is typically in late June or early July.) Because you end up discussing the conference in your emails, you also start getting messages with the word "nips" in them. But this is your first NIPS paper, and until this time, you had not previously seen any emails containing the word "nips"; in particular, "nips" did not ever appear in your training set of spam/non-spam emails. Assuming that "nips" was the 35000th word in the dictionary, your Naive Bayes spam filter therefore had picked its maximum likelihood estimates of the parameters φ_{35000|y} to be

    \phi_{35000|y=1} = \frac{\sum_{i=1}^m 1\{x_{35000}^{(i)} = 1 \wedge y^{(i)} = 1\}}{\sum_{i=1}^m 1\{y^{(i)} = 1\}} = 0
    \phi_{35000|y=0} = \frac{\sum_{i=1}^m 1\{x_{35000}^{(i)} = 1 \wedge y^{(i)} = 0\}}{\sum_{i=1}^m 1\{y^{(i)} = 0\}} = 0

I.e., because it has never seen "nips" before in either spam or non-spam training examples, it thinks the probability of seeing it in either type of email is zero. Hence, when trying to decide if one of these messages containing "nips" is spam, it calculates the class posterior probabilities, and obtains

    p(y = 1 \mid x) = \frac{\prod_{i=1}^n p(x_i \mid y = 1)\, p(y = 1)}{\prod_{i=1}^n p(x_i \mid y = 1)\, p(y = 1) + \prod_{i=1}^n p(x_i \mid y = 0)\, p(y = 0)} = \frac{0}{0}.

This is because each of the terms "∏_{i=1}^n p(x_i|y)" includes a term p(x_{35000}|y) = 0 that is multiplied into it. Hence, our algorithm obtains 0/0, and doesn't know how to make a prediction.

Stating the problem more broadly, it is statistically a bad idea to estimate the probability of some event to be zero just because you haven't seen it before in your finite training set. Take the problem of estimating the mean of a multinomial random variable z taking values in {1, . . . , k}. We can parameterize our multinomial with φ_i = p(z = i). Given a set of m independent observations {z^{(1)}, . . . , z^{(m)}}, the maximum likelihood estimates are given by

    \phi_j = \frac{\sum_{i=1}^m 1\{z^{(i)} = j\}}{m}.

As we saw previously, if we were to use these maximum likelihood estimates, then some of the φ_j's might end up as zero, which was a problem. To avoid this, we can use Laplace smoothing, which replaces the above estimate with

    \phi_j = \frac{\sum_{i=1}^m 1\{z^{(i)} = j\} + 1}{m + k}.

Here, we've added 1 to the numerator, and k to the denominator. Note that ∑_{j=1}^k φ_j = 1 still holds (check this yourself!), which is a desirable property since the φ_j's are estimates for probabilities that we know must sum to 1. Also, φ_j ≠ 0 for all values of j, solving our problem of probabilities being estimated as zero. Under certain (arguably quite strong) conditions, it can be shown that Laplace smoothing actually gives the optimal estimator of the φ_j's.
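As a tiny numerical illustration (my own example, not from the notes; the helper names are made up), compare the two estimators on a small sample:

```python
import numpy as np

def multinomial_mle(z, k):
    """Unsmoothed estimate: phi_j = count(z == j) / m, for j = 1..k."""
    z = np.asarray(z)
    counts = np.array([np.sum(z == j) for j in range(1, k + 1)])
    return counts / len(z)

def multinomial_laplace(z, k):
    """Laplace-smoothed estimate: phi_j = (count(z == j) + 1) / (m + k)."""
    z = np.asarray(z)
    counts = np.array([np.sum(z == j) for j in range(1, k + 1)])
    return (counts + 1) / (len(z) + k)

z = [1, 1, 2, 1]                   # four observations of a variable taking values in {1, 2, 3}
print(multinomial_mle(z, 3))       # [0.75 0.25 0.  ]  -- value 3 is estimated to be impossible
print(multinomial_laplace(z, 3))   # roughly [0.571 0.286 0.143] -- no zeros, still sums to 1
```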
Returning to our Naive Bayes classifier, with Laplace smoothing, we therefore obtain the following estimates of the parameters:

    \phi_{j|y=1} = \frac{\sum_{i=1}^m 1\{x_j^{(i)} = 1 \wedge y^{(i)} = 1\} + 1}{\sum_{i=1}^m 1\{y^{(i)} = 1\} + 2}
    \phi_{j|y=0} = \frac{\sum_{i=1}^m 1\{x_j^{(i)} = 1 \wedge y^{(i)} = 0\} + 1}{\sum_{i=1}^m 1\{y^{(i)} = 0\} + 2}

(In practice, it usually doesn't matter much whether we apply Laplace smoothing to φ_y or not, since we will typically have a fair fraction each of spam and non-spam messages, so φ_y will be a reasonable estimate of p(y = 1) and will be quite far from 0 anyway.)

2.2 Event models for text classification

To close off our discussion of generative learning algorithms, let's talk about one more model that is specifically for text classification. While Naive Bayes