Chapter 6. Classification and Prediction

• What is classification? What is prediction?

• Issues regarding classification and prediction

• Classification by decision tree induction

• Bayesian classification

• Rule-based classification

• Classification by back propagation

• Support Vector Machines (SVM)

• Associative classification

• Lazy learners (or learning from your neighbors)

• Other classification methods

• Prediction

• Accuracy and error measures

• Ensemble methods

• Model selection

• Summary


Bayesian Classification: Why?

• A statistical classifier: performs probabilistic prediction, i.e., it predicts class membership probabilities (the probability that a given tuple belongs to a particular class).

• Foundation: Based on Bayes' theorem, given by Thomas Bayes.

• Performance: A simple Bayesian classifier, the naïve Bayesian classifier, has performance comparable with decision tree and selected neural network classifiers.

• Class conditional independence: Naïve Bayesian classifiers assume that the effect of an attribute value on a given class is independent of the values of the other attributes. This assumption is called class conditional independence.

• Incremental: Each training example can incrementally increase or decrease the probability that a hypothesis is correct; prior knowledge can be combined with observed data.

• Standard: Even when Bayesian methods are computationally intractable, they can provide a standard of optimal decision making against which other methods can be measured.

• Bayesian Belief Networks: graphical models that allow the representation of dependencies among subsets of attributes.


Bayesian Theorem: Basics

• Let X be a data sample ("evidence"): the class label is unknown.
• Let H be a hypothesis that X belongs to class C.
– E.g., our world of tuples is confined to customers described by the attributes age and income; X is a 35-year-old customer with an income of $40,000.
• Classification is to determine P(H|X), the probability that the hypothesis holds given the observed data sample X.
– P(H|X) reflects the probability that customer X will buy a computer given that we know the customer's age and income.
• P(H) (prior probability): the initial probability of H.
– E.g., the probability that X will buy a computer, regardless of age, income, …
• P(X): the prior probability of X, i.e., the probability that the sample data is observed (that a person from our set of customers is 35 years old and earns $40,000).
• P(X|H) (likelihood): the probability of observing the sample X, given that the hypothesis holds.
– E.g., given that X will buy a computer, the probability that X is 31..40 with medium income.


Bayesian Theorem

• Given training data X, the posterior probability of a hypothesis H, P(H|X), follows Bayes' theorem (shown below).

• Informally, this can be written as: posterior = likelihood × prior / evidence

• Predicts X belongs to Ci iff the probability P(Ci|X) is the highest among all the P(Ck|X) for all the k classes

• Practical difficulty: requires initial knowledge of many probabilities, at significant computational cost.

P(H|X) = P(X|H) P(H) / P(X)
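To make the formula concrete, here is a minimal Python sketch that evaluates Bayes' theorem numerically; the prior, likelihood, and evidence values are purely hypothetical placeholders, not figures from the lecture.

```python
# Minimal sketch of Bayes' theorem: posterior = likelihood * prior / evidence.
# All numeric values below are hypothetical placeholders, not figures from the lecture.

def posterior(likelihood: float, prior: float, evidence: float) -> float:
    """Return P(H|X) = P(X|H) * P(H) / P(X)."""
    return likelihood * prior / evidence

# Hypothetical example: P(X|H) = 0.4, P(H) = 0.6, P(X) = 0.3
print(posterior(likelihood=0.4, prior=0.6, evidence=0.3))  # 0.8
```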


Towards Naïve Bayesian Classifier

• Let D be a training set of tuples and their associated class labels, and each tuple is represented by an n-D attribute vector X = (x1, x2, …, xn)

• Suppose there are m classes C1, C2, …, Cm.

• Classification is to derive the maximum posterior probability, i.e., the maximal P(Ci|X).

• This can be derived from Bayes' theorem:

P(Ci|X) = P(X|Ci) P(Ci) / P(X)

• Since P(X) is constant for all classes, only P(X|Ci) P(Ci) needs to be maximized.


Derivation of Naïve Bayes Classifier

• A simplified assumption: attributes are conditionally independent given the class (i.e., no dependence relation between attributes):

P(X|Ci) = ∏(k=1..n) P(xk|Ci) = P(x1|Ci) × P(x2|Ci) × … × P(xn|Ci)

• This greatly reduces the computation cost: only the class distribution needs to be counted.

• If Ak is categorical, P(xk|Ci) is the number of tuples in Ci having value xk for Ak, divided by |Ci,D| (the number of tuples of Ci in D).

• If Ak is continuous-valued, P(xk|Ci) is usually computed based on a Gaussian distribution with mean μ and standard deviation σ:

g(x, μ, σ) = (1 / (√(2π) σ)) · e^(−(x − μ)² / (2σ²))

and P(xk|Ci) is

P(xk|Ci) = g(xk, μCi, σCi)
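As a concrete illustration of these two estimates, here is a minimal Python sketch (not from the lecture) that computes the categorical estimate P(xk|Ci) by counting and the Gaussian density g(x, μ, σ) for a continuous attribute; the tiny toy dataset inside it is hypothetical.

```python
import math

# Hypothetical toy data: (attribute value, class label) pairs for one categorical attribute.
samples = [("medium", "yes"), ("high", "yes"), ("medium", "no"), ("low", "yes")]

def categorical_estimate(value, cls, data):
    """P(xk|Ci) for a categorical attribute: class-cls tuples with this value, divided by |Ci,D|."""
    in_class = [v for v, c in data if c == cls]
    return in_class.count(value) / len(in_class)

def gaussian(x, mu, sigma):
    """g(x, mu, sigma) = 1 / (sqrt(2*pi) * sigma) * exp(-(x - mu)**2 / (2 * sigma**2))."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

print(categorical_estimate("medium", "yes", samples))  # 2/3: two of the three 'yes' tuples are 'medium'
print(gaussian(35.0, mu=38.0, sigma=12.0))             # density of age = 35 under N(38, 12**2)
```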


Naïve Bayesian Classifier: Training Dataset

Classes:
C1: buys_computer = 'yes'
C2: buys_computer = 'no'

Data sample:
X = (age <= 30, income = medium, student = yes, credit_rating = fair)


Naïve Bayesian Classifier: An Example

• P(Ci):
P(buys_computer = "yes") = 9/14 = 0.643
P(buys_computer = "no") = 5/14 = 0.357

• Compute P(X|Ci) for each class:
P(age = "<=30" | buys_computer = "yes") = 2/9 = 0.222
P(age = "<=30" | buys_computer = "no") = 3/5 = 0.6
P(income = "medium" | buys_computer = "yes") = 4/9 = 0.444
P(income = "medium" | buys_computer = "no") = 2/5 = 0.4
P(student = "yes" | buys_computer = "yes") = 6/9 = 0.667
P(student = "yes" | buys_computer = "no") = 1/5 = 0.2
P(credit_rating = "fair" | buys_computer = "yes") = 6/9 = 0.667
P(credit_rating = "fair" | buys_computer = "no") = 2/5 = 0.4

• X = (age <= 30, income = medium, student = yes, credit_rating = fair)

• P(X|Ci):
P(X | buys_computer = "yes") = 0.222 × 0.444 × 0.667 × 0.667 = 0.044
P(X | buys_computer = "no") = 0.6 × 0.4 × 0.2 × 0.4 = 0.019

• P(X|Ci) × P(Ci):
P(X | buys_computer = "yes") × P(buys_computer = "yes") = 0.028
P(X | buys_computer = "no") × P(buys_computer = "no") = 0.007

Therefore, X belongs to class (“buys_computer = yes”)
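The same computation can be reproduced mechanically; the short Python sketch below simply multiplies the conditional probabilities and priors listed above (it assumes those values rather than recomputing them from the training table).

```python
# Reproduce the worked example above by multiplying the listed probabilities.
priors = {"yes": 9 / 14, "no": 5 / 14}

# P(attribute value | buys_computer = class), in the order
# age<=30, income=medium, student=yes, credit_rating=fair (values from the slide).
conditionals = {
    "yes": [2 / 9, 4 / 9, 6 / 9, 6 / 9],
    "no":  [3 / 5, 2 / 5, 1 / 5, 2 / 5],
}

scores = {}
for cls, prior in priors.items():
    p_x_given_c = 1.0
    for p in conditionals[cls]:
        p_x_given_c *= p                # P(X|Ci) under the independence assumption
    scores[cls] = p_x_given_c * prior   # P(X|Ci) * P(Ci)

print(scores)                       # {'yes': ~0.028, 'no': ~0.007}
print(max(scores, key=scores.get))  # 'yes' -> X is predicted to buy a computer
```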


Avoiding the 0-Probability Problem

• Naïve Bayesian prediction requires each conditional probability to be non-zero; otherwise, the predicted probability will be zero.

• Example: suppose a dataset with 1,000 tuples has income = low (0 tuples), income = medium (990), and income = high (10).

• Use the Laplacian correction (or Laplacian estimator):
– Add 1 to each case:
Prob(income = low) = 1/1003
Prob(income = medium) = 991/1003
Prob(income = high) = 11/1003

– The “corrected” prob. estimates are close to their “uncorrected” counterparts

P(X|Ci) = ∏(k=1..n) P(xk|Ci)

A single zero factor in this product drives the whole estimate to zero; the Laplacian correction avoids this.
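A minimal sketch of the Laplacian correction on the income example above (the counts 0/990/10 come from the slide; the helper function name is mine):

```python
# Laplacian correction: add 1 to each count so that no estimate is exactly zero.
counts = {"low": 0, "medium": 990, "high": 10}  # income counts from the 1000-tuple example

def laplace_estimates(counts):
    """Corrected probabilities: (count + 1) / (N + number of distinct values)."""
    total = sum(counts.values()) + len(counts)  # 1000 + 3 = 1003
    return {value: (c + 1) / total for value, c in counts.items()}

print(laplace_estimates(counts))
# {'low': 0.000997..., 'medium': 0.98803..., 'high': 0.010967...}  i.e. 1/1003, 991/1003, 11/1003
```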


Naïve Bayesian Classifier: Comments

• Advantages
– Easy to implement
– Good results obtained in most of the cases

• Disadvantages
– Assumption of class conditional independence, therefore loss of accuracy
– Practically, dependencies exist among variables
• E.g., hospital patients: profile (age, family history, etc.), symptoms (fever, cough, etc.), disease (lung cancer, diabetes, etc.)
• Dependencies among these cannot be modeled by a naïve Bayesian classifier

• How to deal with these dependencies?

– Bayesian Belief Networks


Bayesian Belief Networks

• A Bayesian network (or belief network) is a probabilistic graphical model that represents a set of variables and their probabilistic independencies. For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms: given symptoms, the network can be used to compute the probabilities of the presence of various diseases. A Bayesian belief network allows subsets of the variables to be conditionally independent.

• A Bayesian belief network is defined by two components:
a) a directed acyclic graph
b) a set of conditional probability tables (CPTs)

• A graphical model of causal relationships
– Represents dependency among the variables
– Gives a specification of the joint probability distribution

(Figure: a small example graph with nodes X, Y, Z, and P)
– Nodes: random variables
– Links: dependency
– X and Y are the parents of Z, and Y is the parent of P
– There is no dependency between Z and P
– The graph has no loops or cycles


Bayesian Belief Network: An Example

(Figure: a Bayesian belief network over six Boolean variables: FamilyHistory, Smoker, LungCancer, Emphysema, PositiveXRay, and Dyspnea.)

The conditional probability table (CPT) for the variable LungCancer (FH = FamilyHistory, S = Smoker):

        (FH, S)   (FH, ~S)   (~FH, S)   (~FH, ~S)
LC        0.8       0.5        0.7        0.1
~LC       0.2       0.5        0.3        0.9

The CPT shows the conditional probability of LungCancer for each possible combination of values of its parents.

Derivation of the probability of a particular combination of values of X from the CPT:

P(x1, …, xn) = ∏(i=1..n) P(xi | Parents(Yi))
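As an illustration of how a CPT is used, here is a minimal Python sketch (not from the lecture) that stores the LungCancer CPT above as a dictionary and multiplies node-wise conditional probabilities following the factorization above; the prior values for FamilyHistory and Smoker are hypothetical placeholders, since the slide only gives the LungCancer CPT.

```python
# CPT for LungCancer keyed by the parent values (FamilyHistory, Smoker), from the slide above.
cpt_lung_cancer = {
    (True, True): 0.8, (True, False): 0.5,
    (False, True): 0.7, (False, False): 0.1,
}

# Hypothetical priors for the two root nodes (NOT given on the slide).
p_family_history = 0.1
p_smoker = 0.3

def joint(fh: bool, s: bool, lc: bool) -> float:
    """P(FH, S, LC) = P(FH) * P(S) * P(LC | FH, S), following the factorization above."""
    p_fh = p_family_history if fh else 1 - p_family_history
    p_s = p_smoker if s else 1 - p_smoker
    p_lc_given_parents = cpt_lung_cancer[(fh, s)]
    p_lc = p_lc_given_parents if lc else 1 - p_lc_given_parents
    return p_fh * p_s * p_lc

print(joint(fh=True, s=True, lc=True))  # 0.1 * 0.3 * 0.8 = 0.024
```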


Training Bayesian Networks

• Several scenarios:

– Given both the network structure and all variables observable: learn only the CPTs

– Network structure known, some hidden variables: gradient descent (greedy hill-climbing) method, analogous to neural network learning

– Network structure unknown, all variables observable: search through the model space to reconstruct network topology

– Unknown structure, all hidden variables: No good algorithms known for this purpose
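For the first scenario (known structure, fully observed data), learning the CPTs reduces to counting relative frequencies; the sketch below is a hypothetical illustration of that idea for a single node with one parent, with made-up data.

```python
from collections import Counter

# Hypothetical fully observed data: (parent_value, child_value) pairs for one edge of the network.
observations = [(True, True), (True, True), (True, False), (False, False), (False, True)]

def learn_cpt(observations):
    """Estimate P(child | parent) by relative frequency, one entry per (parent, child) pair."""
    pair_counts = Counter(observations)
    parent_counts = Counter(parent for parent, _ in observations)
    return {
        (parent, child): count / parent_counts[parent]
        for (parent, child), count in pair_counts.items()
    }

print(learn_cpt(observations))
# {(True, True): 0.667, (True, False): 0.333, (False, False): 0.5, (False, True): 0.5}
```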
