
Classification and Prediction

Supervised vs. Unsupervised Learning

■ Supervised learning (classification)
■ Supervision: the training data (observations, measurements, etc.) are accompanied by labels indicating the class of the observations
■ New data is classified based on the training set
■ Unsupervised learning (clustering)
■ The class labels of the training data are unknown
■ Given a set of measurements, observations, etc., the aim is to establish the existence of classes or clusters in the data

*
Classification

Classification is a form of data analysis that extracts models describing important data classes

■ predicts categorical class labels (discrete, unordered)
■ classifies data (constructs a model) based on the training set and the values (class labels) in a classifying attribute, and uses it to classify new data

Example: a medical researcher wants to analyze cancer data to predict which one of three specific treatments a patient should receive
■ The task is classification, where a model or classifier is constructed to predict class (categorical) labels such as “treatment A,” “treatment B,” or “treatment C”

*
Prediction

Numeric Prediction

■ Example: a marketing manager wants to predict how much a given customer will spend during a sale at a supermarket
■ The model constructed predicts a continuous-valued function, or ordered value, as opposed to a class label: a predictor
■ i.e., it predicts unknown or missing numeric values
■ Regression analysis is typically used for numeric prediction

*
Classification—A Two-Step Process
1. Learning step (where a classification model is constructed)
2. Classification step (where the model is used to predict class labels for given data)

1. Model construction: a classifier is built describing a set of predetermined classes (training phase)
■ Each tuple/sample is assumed to belong to a predefined class, as determined by the class label attribute
■ The set of tuples used for model construction is the training set
■ The model is represented as classification rules, decision trees, or mathematical formulae
*
Classification—A Two-Step Process
2. Model usage: for classifying future or unknown objects
■ Estimate the accuracy of the model
■ The known label of each test sample is compared with the classified result from the model
■ Accuracy rate is the percentage of test set samples that are correctly classified by the model
■ The test set is independent of the training set
■ If the accuracy is acceptable, use the model to classify new data

If the training set were used to measure the classifier’s accuracy, this estimate would likely be optimistic, because the classifier tends to overfit the data
*
Process (1): Model Construction

[Diagram: training data is fed into classification algorithms, which produce a classifier (model), e.g.

IF rank = ‘professor’ OR years > 6
THEN tenured = ‘yes’]
*
Process (2): Using the Model in Prediction

[Diagram: the classifier is applied to testing data to estimate accuracy, then to unseen data, e.g. (Jeff, Professor, 4) → Tenured?]

NAME     RANK            YEARS   TENURED
Tom      Assistant Prof  2       no
Merlisa  Associate Prof  7       no
George   Professor       5       yes
Joseph   Assistant Prof  7       yes
Decision Tree Induction: Training Dataset
age income student credit_rating buys_computer
<=30 high no fair no
<=30 high no excellent no
31…40 high no fair yes
>40 medium no fair yes
>40 low yes fair yes
>40 low yes excellent no
31…40 low yes excellent yes
<=30 medium no fair no
<=30 low yes fair yes
>40 medium yes fair yes
<=30 medium yes excellent yes
31…40 medium no excellent yes
31…40 high yes fair yes
>40 medium no excellent no
Output: A Decision Tree for “buys_computer”
Decision trees can easily be converted to classification rules.

[Decision tree for buys_computer: the root (an internal node) tests age?, with one branch per outcome: <=30 (youth), 31..40 (middle-aged), and >40 (senior); the <=30 branch leads to a student? test, the 31..40 branch to a leaf labeled yes, and the >40 branch to a credit_rating? test with outcomes excellent and fair]

■ An internal node denotes a test on an attribute
■ A branch represents an outcome of the test
■ A leaf node (or terminal node) holds a class label
Decision Tree Induction
■ During the late 1970s and early 1980s, J. Ross Quinlan, a researcher in machine learning, developed a decision tree algorithm known as ID3 (Iterative Dichotomiser)
■ He later presented C4.5 (a successor of ID3), which became a benchmark
■ In 1984, a group of statisticians (L. Breiman, J. Friedman, R. Olshen, and C. Stone) published the book Classification and Regression Trees (CART), which described the generation of binary decision trees
■ ID3 and CART were invented independently of one another at around the same time, yet follow a similar approach for learning decision trees

*
Decision Tree Induction
Basic algorithm
■ ID3, C4.5, and CART adopt a greedy (i.e., nonbacktracking) approach
■ The tree is constructed in a top-down recursive divide-and-conquer manner
■ The training set is recursively partitioned (based on selected attributes) into smaller subsets as the tree is being built
■ Test attributes are selected on the basis of a heuristic or statistical measure, e.g., information gain or the Gini index (the Gini index enforces the resulting tree to be binary)

*
Decision Tree Induction Algorithm
Input: D, attribute list, and Attribute selection method.

■ The tree starts as a single node, N, representing the training tuples in D
■ If the tuples in D are all of the same class, then node N becomes a leaf and is labeled with that class
■ Otherwise, the algorithm calls Attribute selection method to determine the splitting criterion (splitting attribute)
■ The resulting partitions at each branch should be as “pure” as possible
■ A partition is pure if all the tuples in it belong to the same class
■ Node N is labeled with the splitting criterion, which serves as a test at the node; a branch is grown from node N for each outcome of the splitting criterion, and the tuples in D are partitioned accordingly
*
Decision Tree Induction
■ Conditions for stopping partitioning
■ All samples for a given node belong to the same class
■ There are no remaining attributes for further partitioning – majority voting is employed for labeling the leaf (the most common class in D)
■ There are no samples left

The computational complexity of the algorithm given training set D is O(n × |D| × log(|D|)), where
■ n is the number of attributes
■ |D| is the number of training tuples in D
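As a rough illustration of the two slides above, the following is a minimal Python sketch (not from the slides) of the top-down, recursive, divide-and-conquer loop for discrete-valued attributes. It assumes tuples are dictionaries and that a select_attribute(rows, attributes, target) function is supplied as the Attribute selection method (e.g., information gain or the Gini index, discussed next); all names and the data layout are illustrative assumptions.

```python
from collections import Counter

def majority_class(rows, target):
    """Most common class label among the tuples (majority voting at a leaf)."""
    return Counter(row[target] for row in rows).most_common(1)[0][0]

def build_tree(rows, attributes, target, select_attribute):
    """Greedy (nonbacktracking), top-down, recursive divide-and-conquer induction."""
    classes = {row[target] for row in rows}
    if len(classes) == 1:                 # all tuples in D have the same class -> leaf
        return classes.pop()
    if not attributes:                    # no remaining attributes -> majority-vote leaf
        return majority_class(rows, target)
    best = select_attribute(rows, attributes, target)   # splitting criterion
    remaining = [a for a in attributes if a != best]
    tree = {best: {}}
    for value in {row[best] for row in rows}:           # one branch per outcome
        subset = [row for row in rows if row[best] == value]
        tree[best][value] = build_tree(subset, remaining, target, select_attribute)
    return tree
```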

*
Let A be the splitting attribute

Partitioning scenarios Examples

If A is discrete-valued, then
one branch is grown for
each known value of A.

If A is continuous-valued,
then two branches
are grown, corresponding to
A <= split point and
A > split point

If A is discrete-valued
and a binary tree must be
produced, then the test is of
the form A ϵSA, where SA is
the splitting subset for A.
Algorithm for forming a decision tree from training tuples
Attribute Selection Measure:
Information Gain
■ The measure is based on pioneering work by Claude Shannon on information theory
■ Select the attribute with the highest information gain as the splitting attribute
■ This attribute minimizes the information needed to classify the tuples in the resulting partitions and reflects the least randomness or “impurity” in these partitions
■ It minimizes the expected number of tests needed to classify a given tuple and guarantees that a simple (but not necessarily the simplest) tree is found

*
Attribute Selection Measure:
Information Gain
■ Select the attribute with the highest information gain
■ Let pi be the probability that an arbitrary tuple in D belongs to class Ci, estimated by |Ci,D|/|D|
■ Expected information (entropy) needed to classify a tuple in D:

   Info(D) = -Σi=1..m pi log2(pi)

■ Information needed (after using A to split D into v partitions) to classify D:

   InfoA(D) = Σj=1..v (|Dj|/|D|) × Info(Dj)

■ Information gained by branching on attribute A:

   Gain(A) = Info(D) - InfoA(D)

*
Attribute Selection: Information Gain
Class C1: buys_computer = “yes” (9 tuples)
Class C2: buys_computer = “no” (5 tuples)

Info(D) = -(9/14) log2(9/14) - (5/14) log2(5/14) = 0.940

“age <=30” covers 5 of the 14 samples, with 2 yes’es and 3 no’s. Hence

Infoage(D) = (5/14) Info(2,3) + (4/14) Info(4,0) + (5/14) Info(3,2) = 0.694

Gain(age) = Info(D) - Infoage(D) = 0.246

Similarly, Gain(income) = 0.029, Gain(student) = 0.151, Gain(credit_rating) = 0.048
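As a quick check of these numbers, a small Python sketch that computes Info(D) and the gains on the 14-tuple buys_computer training set from the earlier slide (the table is re-entered here so the snippet is self-contained):

```python
from math import log2
from collections import Counter

# (age, income, student, credit_rating, buys_computer)
data = [
    ("<=30","high","no","fair","no"), ("<=30","high","no","excellent","no"),
    ("31..40","high","no","fair","yes"), (">40","medium","no","fair","yes"),
    (">40","low","yes","fair","yes"), (">40","low","yes","excellent","no"),
    ("31..40","low","yes","excellent","yes"), ("<=30","medium","no","fair","no"),
    ("<=30","low","yes","fair","yes"), (">40","medium","yes","fair","yes"),
    ("<=30","medium","yes","excellent","yes"), ("31..40","medium","no","excellent","yes"),
    ("31..40","high","yes","fair","yes"), (">40","medium","no","excellent","no"),
]
attrs = {"age": 0, "income": 1, "student": 2, "credit_rating": 3}

def info(rows):
    """Expected information (entropy): -sum p_i log2 p_i over the class labels."""
    counts, total = Counter(r[-1] for r in rows), len(rows)
    return -sum((c / total) * log2(c / total) for c in counts.values())

def info_a(rows, attr):
    """Information still needed after splitting the rows on attr."""
    idx, parts = attrs[attr], {}
    for r in rows:
        parts.setdefault(r[idx], []).append(r)
    return sum(len(p) / len(rows) * info(p) for p in parts.values())

def gain(rows, attr):
    return info(rows) - info_a(rows, attr)

print(round(info(data), 3))                 # 0.940
for a in attrs:
    print(a, round(gain(data, a), 3))       # age has the highest gain (~0.246), so it is chosen
```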

*
Computing Information-Gain for
Continuous-Valued Attributes
■ Let A be a continuous-valued attribute (e.g., age values)
■ Must determine the best split point for A
■ Sort the values of A in increasing order
■ Typically, the midpoint between each pair of adjacent values is considered as a possible split point
■ (ai + ai+1)/2 is the midpoint between the values of ai and ai+1
■ For each possible split point for A, evaluate InfoA(D), where the number of partitions is two, that is, v = 2
■ Split: D1 is the set of tuples in D satisfying A ≤ split-point, and D2 is the set of tuples in D satisfying A > split-point
■ The point with the minimum expected information requirement for A is selected as the split point for A
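A small illustrative sketch (not from the slides) of this midpoint search, assuming numeric values paired with class labels; entropy plays the role of Info:

```python
from math import log2
from collections import Counter

def entropy(labels):
    total = len(labels)
    return -sum(c / total * log2(c / total) for c in Counter(labels).values())

def best_split_point(values, labels):
    """Evaluate the midpoint of each pair of adjacent sorted values and return
    the split point that minimizes Info_A(D) over the two resulting partitions."""
    pairs = sorted(zip(values, labels))
    best = (float("inf"), None)
    for i in range(len(pairs) - 1):
        if pairs[i][0] == pairs[i + 1][0]:
            continue                       # identical adjacent values give no new midpoint
        mid = (pairs[i][0] + pairs[i + 1][0]) / 2
        left = [lab for v, lab in pairs if v <= mid]
        right = [lab for v, lab in pairs if v > mid]
        info_a = (len(left) * entropy(left) + len(right) * entropy(right)) / len(pairs)
        best = min(best, (info_a, mid))
    return best[1], best[0]                # (split point, expected information)

# Hypothetical age values with a yes/no label, just to exercise the function
ages = [23, 25, 31, 35, 42, 46, 51]
buys = ["no", "no", "yes", "yes", "yes", "no", "no"]
print(best_split_point(ages, buys))
```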
*
Gain Ratio for Attribute Selection
■ The information gain measure is biased towards attributes with a large number of values
■ A split on product_ID would result in a large number of partitions (as many as there are values), each one containing just one tuple, i.e., a pure partition
■ The information gained by partitioning on this attribute is maximal, yet such a partitioning is useless for classification


*
Gain Ratio for Attribute Selection
■ C4.5 (a successor of ID3) uses gain ratio to overcome the bias (a normalization of information gain)

   SplitInfoA(D) = -Σj=1..v (|Dj|/|D|) × log2(|Dj|/|D|)

■ GainRatio(A) = Gain(A)/SplitInfoA(D)
■ |Dj|/|D| is the number of tuples having outcome vj divided by the total number of tuples in the data set
■ SplitInfoA(D) represents the potential information generated by splitting the training data set, D, into v partitions, corresponding to the v outcomes of a test on attribute A
■ Ex.: SplitInfoincome(D) = 1.557, so gain_ratio(income) = 0.029/1.557 = 0.019
■ The attribute with the maximum gain ratio is selected as the splitting attribute
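To make the SplitInfo number concrete, a short check of the income example (income has 4 high, 6 medium, and 4 low tuples in the training set shown earlier):

```python
from math import log2

counts = [4, 6, 4]                       # income: high, medium, low
total = sum(counts)
split_info = -sum(c / total * log2(c / total) for c in counts)

print(round(split_info, 3))              # ~1.557
print(round(0.029 / split_info, 3))      # gain ratio for income ~0.019 (Gain(income) = 0.029)
```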
*
Gain Ratio for Attribute Selection

■ Information gain measures the information, with respect to classification, that is acquired based on the partitioning
■ Gain ratio, for each outcome, also considers the number of tuples having that outcome with respect to the total number of tuples in D

*
Gini Index (CART)
■ Measures the impurity of D, a data partition or set of training tuples
■ If a data set D contains examples from n classes, the Gini index, gini(D), is defined as

   gini(D) = 1 - Σj=1..n pj²

where pj is the relative frequency of class j in D, i.e., the probability that a tuple in D belongs to class Cj
■ Compute a weighted sum of the impurity of each resulting partition
■ If a data set D is split on A into two subsets D1 and D2, the Gini index giniA(D) is defined as

   giniA(D) = (|D1|/|D|) gini(D1) + (|D2|/|D|) gini(D2)

*
Gini Index (CART)
■ Reduction in impurity:

   Δgini(A) = gini(D) - giniA(D)

■ The attribute that provides the smallest giniA(D) (or, equivalently, the largest reduction in impurity) is chosen to split the node (need to enumerate all the possible splitting points for each attribute)

*
Gini Index (CART)
■ The Gini index considers a binary split for each attribute
■ First consider the case where A is a discrete-valued attribute having v distinct values, e.g., income with 3 values: low, medium, and high
■ How to determine the best binary split on income?
■ Examine all the possible subsets that can be formed using known values of income, excluding the full set of values and the empty set (neither represents a split)
■ There are 2^v – 2 possible ways to form two partitions of the data, D, based on a binary split on A

*
Computation of Gini Index
■ Ex.: D has 9 tuples with buys_computer = “yes” and 5 with “no”

   gini(D) = 1 - (9/14)² - (5/14)² = 0.459

■ Suppose the attribute income partitions D into 10 tuples in D1: {low, medium} and 4 tuples in D2: {high}

   giniincome ∈ {low,medium}(D) = (10/14) gini(D1) + (4/14) gini(D2) = 0.443

■ Similarly, gini{low,high}(D) is 0.458 and gini{medium,high}(D) is 0.450
■ Thus, split on {low, medium} (and {high}), since it has the lowest Gini index
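A short Python check of these Gini values on the (income, buys_computer) columns of the training set (re-entered here so the snippet is self-contained):

```python
def gini(labels):
    """gini(D) = 1 - sum_j p_j^2"""
    total = len(labels)
    return 1 - sum((labels.count(c) / total) ** 2 for c in set(labels))

def gini_split(rows, subset):
    """Weighted Gini of the binary split income-in-subset vs. income-not-in-subset."""
    d1 = [label for income, label in rows if income in subset]
    d2 = [label for income, label in rows if income not in subset]
    return (len(d1) * gini(d1) + len(d2) * gini(d2)) / len(rows)

rows = [("high", "no"), ("high", "no"), ("high", "yes"), ("medium", "yes"),
        ("low", "yes"), ("low", "no"), ("low", "yes"), ("medium", "no"),
        ("low", "yes"), ("medium", "yes"), ("medium", "yes"), ("medium", "yes"),
        ("high", "yes"), ("medium", "no")]

for subset in [{"low", "medium"}, {"low", "high"}, {"medium", "high"}]:
    print(subset, round(gini_split(rows, subset), 3))   # ~0.443, ~0.458, ~0.450
```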

*
Comparing Attribute Selection Measures

The three measures, in general, return good results, but:

■ Information gain:
■ biased towards multivalued attributes
■ Gain ratio:
■ tends to prefer unbalanced splits in which one partition is much smaller than the others
■ Gini index:
■ biased towards multivalued attributes
■ has difficulty when the number of classes is large
■ tends to favor tests that result in equal-sized partitions and purity in both partitions

*
Other Attribute Selection Measures
 CHAID: a popular decision tree algorithm, measure based on χ2 test for
independence
 C-SEP: performs better than info. gain and gini index in certain cases
 G-statistic: has a close approximation to χ2 distribution
 MDL (Minimal Description Length) principle (i.e., the simplest solution is
preferred):
 The best tree as the one that requires the fewest # of bits to both (1)
encode the tree, and (2) encode the exceptions to the tree
 Multivariate splits (partition based on multiple variable combinations)
 CART: finds multivariate splits based on a linear comb. of attrs.
 Which attribute selection measure is the best?
 Most give good results, but none is significantly superior to the others
31
Overfitting and Tree Pruning
 Overfitting: An induced tree may overfit the training data
 Too many branches, some of which may reflect anomalies due to noise or outliers
 Poor accuracy for unseen samples

 Two approaches to avoid overfitting
 Prepruning: Halt tree construction early – do not split a node if this would result in the goodness measure falling below a threshold
 Difficult to choose an appropriate threshold
 Postpruning (common): Remove branches from a “fully grown” tree—get a sequence of progressively pruned trees
 Use a set of data different from the training data to decide which is the “best pruned tree” (pruning set)


32
Overfitting and Tree Pruning

CART: cost complexity pruning algorithm
• The cost complexity of a tree is a function of the number of leaves in the tree and the error rate of the tree (the percentage of tuples misclassified by the tree)
• At each internal node N, compute the cost complexity of the subtree at N and the cost complexity of the tree at N if the subtree were pruned
• The two values are compared, and the subtree is pruned if doing so yields the smaller cost complexity

C4.5 uses a method called pessimistic pruning (uses the training set); it is similar to the cost complexity method but does not require the use of a separate prune set
33
Decision Tree
Decision trees can suffer from repetition and replication
■ Repetition occurs when an attribute is repeatedly tested along a given branch of the tree
■ In replication, duplicate subtrees exist within the tree

34
Enhancements to Basic Decision Tree Induction

 Allow for continuous-valued attributes
 Dynamically define new discrete-valued attributes that partition the continuous attribute value into a discrete set of intervals
 Handle missing attribute values
 Assign the most common value of the attribute
 Assign a probability to each of the possible values
 Attribute construction
 Create new attributes based on existing ones that are sparsely represented
 This reduces fragmentation, repetition, and replication
35
Classification in Large Databases
 Classification—a classical problem extensively studied by
statisticians and machine learning researchers
 Scalability: Classifying data sets with millions of examples and hundreds of attributes with reasonable speed
 When the data do not fit in memory, tree induction becomes inefficient due to swapping of the training tuples in and out of main and cache memories
 Why is decision tree induction popular?
 relatively faster learning speed (than other classification
methods)
 convertible to simple and easy to understand classification
rules
 can use SQL queries for accessing databases

 comparable classification accuracy with other methods

 RainForest (VLDB’98 — Gehrke, Ramakrishnan & Ganti)
 Builds an AVC-set (attribute-value, class label counts) for each attribute, which summarizes the information needed to evaluate that attribute’s split
36
Rainforest: Training Set and Its AVC Sets

The training examples are the 14-tuple buys_computer data shown earlier.

AVC-set on age:
age      buys_computer=yes   buys_computer=no
<=30     2                   3
31..40   4                   0
>40      3                   2

AVC-set on income:
income   yes   no
high     2     2
medium   4     2
low      3     1

AVC-set on student:
student  yes   no
yes      6     1
no       3     4

AVC-set on credit_rating:
credit_rating  yes   no
fair           6     2
excellent      3     3
37
BOAT (Bootstrapped Optimistic
Algorithm for Tree Construction)
 Use a statistical technique called bootstrapping to create
several smaller samples (subsets), each fits in memory
 Each subset is used to create a tree, resulting in several
trees
 These trees are examined and used to construct a new
tree T’
 It turns out that T’ is very close to the tree that would
be generated using the whole data set together
 Adv: requires only two scans of DB, an incremental alg.

38
Rule Based Classification

39
Using IF-THEN Rules for Classification
Represent the knowledge in the form of IF-THEN rules
R1: IF age = youth AND student = yes THEN buys_computer = yes
■ The “IF” part (or left-hand side) of a rule is known as the rule antecedent or precondition
■ The “THEN” part (or right-hand side) is the rule consequent (class prediction)
■ Assessment of a rule: coverage and accuracy
■ ncovers = number of tuples covered by R
■ ncorrect = number of tuples correctly classified by R
■ coverage(R) = ncovers / |D|     accuracy(R) = ncorrect / ncovers
Using IF-THEN Rules for Classification
Consider rule R1. Find its coverage and accuracy on the 14-tuple buys_computer data:
■ R1 covers 2 of the 14 tuples, and it correctly classifies both
■ coverage(R1) = 2/14 = 14.28%
■ accuracy(R1) = 2/2 = 100%
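A small sketch verifying the coverage and accuracy of R1 against the training set (only the age, student, and class columns are needed):

```python
# (age, student, buys_computer) columns of the 14-tuple training set
rows = [("<=30","no","no"), ("<=30","no","no"), ("31..40","no","yes"), (">40","no","yes"),
        (">40","yes","yes"), (">40","yes","no"), ("31..40","yes","yes"), ("<=30","no","no"),
        ("<=30","yes","yes"), (">40","yes","yes"), ("<=30","yes","yes"), ("31..40","no","yes"),
        ("31..40","yes","yes"), (">40","no","no")]

# R1: IF age = youth (<=30) AND student = yes THEN buys_computer = yes
covered = [r for r in rows if r[0] == "<=30" and r[1] == "yes"]
correct = [r for r in covered if r[2] == "yes"]

print(f"coverage = {len(covered)}/{len(rows)} = {len(covered)/len(rows):.1%}")     # 2/14 ≈ 14.3%
print(f"accuracy = {len(correct)}/{len(covered)} = {len(correct)/len(covered):.0%}")  # 2/2 = 100%
```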


Using IF-THEN Rules for Classification
■ If a rule is satisfied by X, the rule is said to be triggered.
X = (age = youth, income = medium, student = yes, credit_rating = fair)
■ X satisfies R1, which triggers the rule
■ If R1 is the only rule satisfied, then the rule fires by returning the class prediction for X
■ If more than one rule is triggered, we need conflict resolution:
■ Size ordering: assign the highest priority to the triggering rule that has the “toughest” requirement (i.e., the most attribute tests)
■ Class-based ordering: the classes are sorted in order of decreasing “importance” (decreasing order of prevalence, or of misclassification cost per class); all of the rules for the most prevalent (or most frequent) class come first, the rules for the next prevalent class come next, and so on
■ Rule-based ordering (decision list): rules are organized into one long priority list, according to some measure of rule quality (accuracy, coverage, or size, i.e., the number of attribute tests in the rule antecedent) or by experts; the rule that appears earliest in the list has the highest priority
Using IF-THEN Rules for Classification

• What if no rule is satisfied by X?
 A fallback or default rule can be set up to specify a default class based on the training set
 This is the majority class, or the majority class of the tuples that were not covered by any rule
 The default rule is evaluated at the end, if and only if no other rule covers X
 The condition in the default rule is empty

Rule Extraction from a Decision Tree
[Figure: the buys_computer decision tree from earlier, with age? at the root, student? under <=30, and credit_rating? under >40]

 Rules are easier to understand than large trees
 One rule is created for each path from the root to a leaf
 Each attribute-value pair along a path forms a conjunction; the leaf holds the class prediction
 Rules are mutually exclusive and exhaustive (there cannot be rule conflicts): one rule for each possible attribute-value combination, so no default rule is required

Example: rule extraction from our buys_computer decision tree

IF age = young AND student = no THEN buys_computer = no
IF age = young AND student = yes THEN buys_computer = yes
IF age = mid-age THEN buys_computer = yes
IF age = old AND credit_rating = excellent THEN buys_computer = yes
IF age = old AND credit_rating = fair THEN buys_computer = no
Rule Extraction from a Decision Tree
“How can we prune the rule set?”

 Given a rule precondition, any condition within it that does not improve the estimated accuracy of the rule can be pruned
 Any rule that does not contribute to the overall accuracy of the entire rule set can also be pruned
Rule Induction: Sequential Covering Method
 Sequential covering algorithm: Extracts rules directly from training
data
 Typical sequential covering algorithms: AQ, CN2, RIPPER
 Rules are learned sequentially; each rule for a given class Ci should cover many tuples of Ci but none (or few) of the tuples of other classes
 Steps:
 Rules are learned one at a time

 Each time a rule is learned, the tuples covered by the rules are
removed
 Repeat the process on the remaining tuples until termination
condition, e.g., when no more training examples or when the
quality of a rule returned is below a user-specified threshold

Decision-tree induction: learning a set of rules simultaneously


46
Sequential Covering Algorithm

while (enough target tuples left)
    generate a rule
    remove positive target tuples satisfied by this rule

[Figure: each learned rule covers a region of the positive examples; Rule 1, Rule 2, and Rule 3 cover successive subsets of the positive examples]
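A minimal Python skeleton of this covering loop, assuming tuples are dictionaries with a "class" key, rules are attribute→value dictionaries, and a learn_one_rule function (see the Learn-One-Rule slides below) is supplied; all names are illustrative assumptions, not the slides' algorithm verbatim.

```python
def covers(rule, row):
    """A rule (dict of attribute -> required value) covers a row if every test matches."""
    return all(row.get(attr) == val for attr, val in rule.items())

def sequential_covering(rows, target_class, learn_one_rule, min_quality=0.0):
    """Learn rules one at a time for target_class, removing covered positives each round."""
    rules = []
    remaining = list(rows)
    while any(r["class"] == target_class for r in remaining):   # enough target tuples left
        rule, quality = learn_one_rule(remaining, target_class)  # assumed helper
        if rule is None or quality <= min_quality:               # rule quality below threshold
            break
        rules.append(rule)
        # remove positive tuples satisfied by this rule; negatives stay for later rules
        remaining = [r for r in remaining
                     if not (covers(rule, r) and r["class"] == target_class)]
    return rules
```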

47
Rule Generation
To generate a rule, start with an empty rule and greedily add the best conjunct, e.g.:
1. IF THEN loan_decision = accept
2. IF income = high THEN loan_decision = accept
3. IF income = high AND credit_rating = excellent THEN loan_decision = accept

while (true)
    find the best predicate p
    if foil-gain(p) > threshold then add p to the current rule
    else break

[Figure: each added conjunct (e.g., A3 = 1, then A1 = 2, then A8 = 5) shrinks the covered region toward the positive examples and away from the negative examples]

■ The search adopts a greedy depth-first strategy


48
How to Learn-One-Rule?
 Start with the most general rule possible: condition = empty
 Add new attribute tests by adopting a greedy depth-first strategy
 Pick the one that most improves the rule quality
 Rule-quality measures consider both coverage and accuracy
 FOIL-gain (in FOIL & RIPPER): assesses the information gained by extending the condition of rule R to obtain rule R':

   FOIL_Gain = pos' × ( log2( pos' / (pos' + neg') ) - log2( pos / (pos + neg) ) )

 It favors rules that have high accuracy and cover many positive tuples
 pos (neg): the number of positive (negative) tuples covered by R
 pos' (neg'): the number of positive (negative) tuples covered by R'
 FOIL: First Order Inductive Learner
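A small sketch of the FOIL_Gain formula above and the FOIL_Prune measure from the next slide; the counts in the example are hypothetical:

```python
from math import log2

def foil_gain(pos, neg, pos_new, neg_new):
    """Information gained by extending rule R (covering pos/neg) to rule R' (pos_new/neg_new)."""
    if pos_new == 0:
        return 0.0
    return pos_new * (log2(pos_new / (pos_new + neg_new)) - log2(pos / (pos + neg)))

def foil_prune(pos, neg):
    """FOIL_Prune(R) = (pos - neg) / (pos + neg); prune R if the pruned rule scores higher."""
    return (pos - neg) / (pos + neg)

# Hypothetical counts: R covers 80 positives and 40 negatives;
# adding one more test leaves 60 positives and 10 negatives covered.
print(round(foil_gain(80, 40, 60, 10), 2))
print(round(foil_prune(80, 40), 2), round(foil_prune(60, 10), 2))
```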
50
How to Learn-One-Rule?
 Rule pruning is based on an independent set of test tuples
 pos/neg are the numbers of positive/negative tuples covered by R

   FOIL_Prune(R) = (pos - neg) / (pos + neg)

 If FOIL_Prune is higher for the pruned version of R, prune R

51
Classifier Eager or Lazy?
 Eager learners, when given a set of training tuples, construct a model before receiving new (e.g., test) tuples to classify.
 The learned model is ready and eager to classify previously unseen tuples.

 A lazy learner instead waits until the last minute before doing any model construction to classify a given test tuple.
 Given a training tuple, a lazy learner simply stores it (or does only a little minor processing) and waits until it is given a test tuple
 Because lazy learners store the training tuples or “instances,” they are also referred to as instance-based learners
 They can be computationally expensive, so they may be implemented on parallel hardware
 They require efficient storage techniques, e.g., case-based reasoning, kNN
52
kNN Classifier

53
[Figures: example decisions made by a 1-Nearest Neighbor and a 3-Nearest Neighbor classifier]
kNN Algorithm
■ Store all input data in the training set
■ For each pattern in the test set:
■ Search for the K nearest patterns to the input pattern using a Euclidean distance measure
■ The classification for the input pattern is the most common class among its K nearest neighbors

kNN is a lazy learner.
kNN Algorithm
Let X1 = (x11, x12, …, x1n) and X2 = (x21, x22, …, x2n)

   dist(X1, X2) = sqrt( Σi=1..n (x1i - x2i)² )

■ For each numeric attribute, take the difference between the corresponding values of that attribute in tuple X1 and in tuple X2, square this difference, and accumulate it; the square root is taken of the total accumulated distance count
■ Normalize the values of each attribute before using them
■ This helps prevent attributes with initially large ranges (e.g., income) from outweighing attributes with initially smaller ranges (e.g., binary attributes).
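A small sketch of min-max normalization to [0, 1] before computing Euclidean distances; the attribute values are made up for illustration:

```python
from math import sqrt

def min_max_normalize(column):
    """Scale a list of numeric values to [0, 1] using min-max normalization."""
    lo, hi = min(column), max(column)
    return [(v - lo) / (hi - lo) for v in column]

def euclidean(a, b):
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

ages = [35, 22, 63, 59, 25]
incomes = [35000, 50000, 200000, 170000, 40000]
norm = list(zip(min_max_normalize(ages), min_max_normalize(incomes)))
print(euclidean(norm[0], norm[1]))   # distance now reflects both attributes, not just income
```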
Training parameters and typical settings

Number of nearest neighbors
o The number of nearest neighbors (K) should be chosen based on cross-validation over a number of K settings.
o k = 1 is a good baseline model to benchmark against.
o A good rule of thumb is that k should be less than the square root of the total number of training patterns.
K-Nearest Neighbor Classifier
Example: 3-Nearest Neighbors

Customer   Age   Income   No. credit cards   Response
John       35    35K      3                  No
Rachel     22    50K      2                  Yes
Hannah     63    200K     1                  No
Tom        59    170K     1                  No
Nellie     25    40K      4                  Yes
David      37    50K      2                  ?
K-Nearest Neighbor Classifier
Example (distances computed on the raw attribute values)

Customer   Age   Income (K)   No. cards   Response   Distance from David
John       35    35           3           No         sqrt[(35-37)² + (35-50)² + (3-2)²] = 15.16
Rachel     22    50           2           Yes        sqrt[(22-37)² + (50-50)² + (2-2)²] = 15.00
Hannah     63    200          1           No         sqrt[(63-37)² + (200-50)² + (1-2)²] = 152.23
Tom        59    170          1           No         sqrt[(59-37)² + (170-50)² + (1-2)²] = 122.00
Nellie     25    40           4           Yes        sqrt[(25-37)² + (40-50)² + (4-2)²] = 15.74
David      37    50           2           Yes        (the 3 nearest neighbors are Rachel, John, and Nellie)
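A small sketch reproducing the 3-NN prediction for David from the table above (raw, unnormalized values, so income dominates the distance, as the normalization slide warns):

```python
from math import sqrt
from collections import Counter

# (name, age, income_in_K, cards, response)
train = [("John", 35, 35, 3, "No"), ("Rachel", 22, 50, 2, "Yes"),
         ("Hannah", 63, 200, 1, "No"), ("Tom", 59, 170, 1, "No"),
         ("Nellie", 25, 40, 4, "Yes")]
david = (37, 50, 2)

def dist(p, q):
    return sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

ranked = sorted(train, key=lambda r: dist(r[1:4], david))   # nearest first
neighbors = ranked[:3]

print([(r[0], round(dist(r[1:4], david), 2)) for r in neighbors])
print("prediction:", Counter(r[4] for r in neighbors).most_common(1)[0][0])   # Yes
```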
K Nearest Neighbors

 K Nearest Neighbors
 Advantage
 Nonparametric architecture
 Simple
 Powerful
 Requires no training time
 Disadvantage
 Memory intensive
 Classification/estimation is slow

61
Training dataset (nominal attributes)
Customer ID Debt Income Marital Status Risk

Abel High High Married Good


Ben Low High Married Doubtful
Candy Medium Very low Unmarried Poor
Dale Very high Low Married Poor
Ellen High Low Married Poor
Fred High Very low Married Poor
George Low High Unmarried Doubtful
Harry Low Medium Married Doubtful
Igor Very Low Very High Married Good
Jack Very High Medium Married Poor
k-nn

 K=3
 Distance
 Score for an attribute is 1 for a match and
0 otherwise
 Distance is sum of scores for each
attribute
Test Set
Customer ID   Debt   Income   Marital Status   Risk
Zeb High Medium Married ?
Yong Low High Married ?
Xu Very low Very low Unmarried ?
Vasco High Low Married ?
Unace High Low Divorced ?
Trey Very low Very low Married ?
Steve Low High Unmarried ?
Bayesian Classification
■ A statistical classifier: performs probabilistic prediction, i.e., predicts class membership probabilities
■ Foundation: based on Bayes’ Theorem
■ Performance: a simple Bayesian classifier, the naïve Bayesian classifier, has comparable performance with decision tree and selected neural network classifiers
■ Incremental: each training example can incrementally increase/decrease the probability that a hypothesis is correct — prior knowledge can be combined with observed data

*
Bayes’ Theorem: Basics
■ Let X be a data sample (“evidence”): class label is unknown
■ Let H be a hypothesis that X belongs to class C
■ Classification is to determine P(H|X) (i.e., the posterior probability): the probability that the hypothesis holds given the observed data sample X
■ P(H) (prior probability): the initial probability
■ E.g., any customer X will buy a computer, regardless of age, income, …
■ P(X): the probability that the sample data is observed
■ P(X|H) (likelihood): the probability of observing the sample X, given that the hypothesis holds
■ E.g., given that X will buy a computer, the probability that X is 31..40 with medium income

Bayes’ Theorem:

   P(H|X) = P(X|H) P(H) / P(X)

*
Naïve Bayesian Classification
■ Let D be a training set of tuples and their associated class labels, and each tuple is represented by an n-D attribute vector X = (x1, x2, …, xn)
■ Suppose there are m classes C1, C2, …, Cm
■ Classification is to derive the maximum posteriori, i.e., the maximal P(Ci|X)
■ This can be derived from Bayes’ theorem:

   P(Ci|X) = P(X|Ci) P(Ci) / P(X)

■ Since P(X) is constant for all classes, only P(X|Ci) P(Ci) needs to be maximized
■ With the naïve assumption of class-conditional independence:

   P(X|Ci) = Πk=1..n P(xk|Ci)

*
Naïve Bayes Classifier: An Example

P(Ci):   P(buys_computer = “yes”) = 9/14 = 0.643
         P(buys_computer = “no”) = 5/14 = 0.357

Compute P(xk|Ci) for each class:
P(age = “<=30” | buys_computer = “yes”) = 2/9 = 0.222
P(age = “<=30” | buys_computer = “no”) = 3/5 = 0.6
P(income = “medium” | buys_computer = “yes”) = 4/9 = 0.444
P(income = “medium” | buys_computer = “no”) = 2/5 = 0.4
P(student = “yes” | buys_computer = “yes”) = 6/9 = 0.667
P(student = “yes” | buys_computer = “no”) = 1/5 = 0.2
P(credit_rating = “fair” | buys_computer = “yes”) = 6/9 = 0.667
P(credit_rating = “fair” | buys_computer = “no”) = 2/5 = 0.4

*
Naïve Bayes Classifier: An Example

■ X = (age <= 30, income = medium, student = yes, credit_rating = fair)

■ P(X|Ci):
P(X | buys_computer = “yes”) = 0.222 × 0.444 × 0.667 × 0.667 = 0.044
P(X | buys_computer = “no”) = 0.6 × 0.4 × 0.2 × 0.4 = 0.019

■ To find the class Ci that maximizes P(X|Ci) P(Ci), we compute
P(X | buys_computer = “yes”) P(buys_computer = “yes”) = 0.044 × 0.643 = 0.028
P(X | buys_computer = “no”) P(buys_computer = “no”) = 0.019 × 0.357 = 0.007

Therefore, X belongs to class (“buys_computer = yes”)
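A small sketch that recomputes these products on the 14-tuple training set under the independence assumption (no Laplacian correction; the data layout is re-entered for self-containment):

```python
from collections import Counter

# (age, income, student, credit_rating, buys_computer)
data = [
    ("<=30","high","no","fair","no"), ("<=30","high","no","excellent","no"),
    ("31..40","high","no","fair","yes"), (">40","medium","no","fair","yes"),
    (">40","low","yes","fair","yes"), (">40","low","yes","excellent","no"),
    ("31..40","low","yes","excellent","yes"), ("<=30","medium","no","fair","no"),
    ("<=30","low","yes","fair","yes"), (">40","medium","yes","fair","yes"),
    ("<=30","medium","yes","excellent","yes"), ("31..40","medium","no","excellent","yes"),
    ("31..40","high","yes","fair","yes"), (">40","medium","no","excellent","no"),
]

def naive_bayes_scores(x):
    """Return P(X|Ci) P(Ci) for each class Ci."""
    scores = {}
    for c, n_c in Counter(row[-1] for row in data).items():
        rows_c = [row for row in data if row[-1] == c]
        p = n_c / len(data)                              # prior P(Ci)
        for k, value in enumerate(x):                    # product of P(xk|Ci)
            p *= sum(1 for row in rows_c if row[k] == value) / n_c
        scores[c] = round(p, 3)
    return scores

print(naive_bayes_scores(("<=30", "medium", "yes", "fair")))
# {'no': 0.007, 'yes': 0.028} -> predict buys_computer = yes
```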

*
Naïve Bayes Classifier: Training Dataset

Class:
C1:buys_computer = ‘yes’
C2:buys_computer = ‘no’

Data to be classified:
X = (age <=30,
Income = high,
Student = yes
Credit_rating = Fair)

*
Avoiding the Zero-Probability
Problem
■ Naïve Bayesian prediction requires each conditional probability to be non-zero; otherwise, the predicted probability will be zero
■ Ex.: suppose a data set with 1000 tuples has income = low (0 tuples), income = medium (990), and income = high (10)
■ Use the Laplacian correction (or Laplacian estimator)
■ Adding 1 to each case:
Prob(income = low) = 1/1003
Prob(income = medium) = 991/1003
Prob(income = high) = 11/1003
■ The “corrected” probability estimates are close to their “uncorrected” counterparts
*
Naïve Bayes Classifier: Comments
Advantages
■ Easy to implement
■ Good results obtained in most of the cases

Disadvantages
■ Assumption: class conditional independence, therefore loss of accuracy
■ Practically, dependencies exist among variables
■ E.g., hospital patients: profile (age, family history, etc.), symptoms (fever, cough, etc.), disease (lung cancer, diabetes, etc.)
■ Dependencies among these cannot be modeled by a naïve Bayes classifier

*
Draw decision tree for given training data

AGE COMPETITION TYPE PROFIT


Old Yes Swr Down
Old No Swr Down
Old No Hwr Down
Mid Yes Swr Down
Mid Yes Hwr Down
Mid No Hwr Up
Mid No Swr Up
New Yes Swr Up
New No Hwr Up
New No Swr Up

Classify (new, yes, hwr) with a naïve Bayes classifier


Model Evaluation and Selection
 Evaluation metrics: How can we measure accuracy? Other
metrics to consider?
 Use validation test set of class-labeled tuples instead of
training set when assessing accuracy
 Methods for estimating a classifier’s accuracy:
 Holdout method, random subsampling
 Cross-validation
 Bootstrap
 Comparing classifiers:
 Confidence intervals
 Cost-benefit analysis and ROC Curves
74
Classifier Evaluation Metrics: Confusion
Matrix
Confusion Matrix:
Actual class\Predicted class C1 ¬ C1
C1 True Positives (TP) False Negatives (FN)
¬ C1 False Positives (FP) True Negatives (TN)

Example of Confusion Matrix:


Actual class\Predicted buy_computer buy_computer Total
class = yes = no
buy_computer = yes 6954 46 7000
buy_computer = no 412 2588 3000
Total 7366 2634 10000

 Given m classes, an entry, CMi,j in a confusion matrix indicates


# of tuples in class i that were labeled by the classifier as class j
 May have extra rows/columns to provide totals
75
Classifier Evaluation Metrics: Accuracy,
Error Rate, Sensitivity and Specificity
A\P    C     ¬C
C      TP    FN    P
¬C     FP    TN    N
       P’    N’    All

 Classifier Accuracy, or recognition rate: the percentage of test set tuples that are correctly classified
   Accuracy = (TP + TN)/All
 Error rate: 1 – accuracy, or Error rate = (FP + FN)/All

 Class Imbalance Problem:
 One class may be rare, e.g. fraud, or HIV-positive
 Significant majority of the negative class and minority of the positive class (e.g., a classifier can be 97% accurate while misclassifying all the cancer tuples)
 Sensitivity: True Positive recognition rate; Sensitivity = TP/P
 Specificity: True Negative recognition rate; Specificity = TN/N

76
Classifier Evaluation Metrics: Accuracy,
Error Rate, Sensitivity and Specificity
 Class Imbalance Problem:
 Sensitivity = TP/P

 Specificity = TN/N

 Accuracy = (TP + TN)/All


Calculate sensitivity, specificity and
accuracy.

77
Classifier Evaluation Metrics: Example

Actual class\Predicted class   cancer = yes   cancer = no   Total    Recognition (%)
cancer = yes                   90             210           300      30.00 (sensitivity)
cancer = no                    140            9560          9700     98.56 (specificity)
Total                          230            9770          10000    96.50 (accuracy)

 Precision = 90/230 = 39.13%      Recall = 90/300 = 30.00%

 Although the classifier has a high accuracy, its ability to correctly label the positive (rare) class is poor, given its low sensitivity.
 It has high specificity, meaning that it can accurately recognize negative tuples.
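A quick check of the numbers above from the four confusion-matrix counts:

```python
TP, FN, FP, TN = 90, 210, 140, 9560          # cancer confusion matrix from the slide
P, N, ALL = TP + FN, FP + TN, TP + FN + FP + TN

sensitivity = TP / P                         # true positive recognition rate (= recall)
specificity = TN / N                         # true negative recognition rate
accuracy = (TP + TN) / ALL
precision = TP / (TP + FP)

print(f"sensitivity = {sensitivity:.2%}")    # 30.00%
print(f"specificity = {specificity:.2%}")    # 98.56%
print(f"accuracy    = {accuracy:.2%}")       # 96.50%
print(f"precision   = {precision:.2%}")      # 39.13%
```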
78
Classifier Evaluation Metrics:
Precision and Recall, and F-measures
 Precision: exactness – what % of tuples that the classifier labeled as positive are actually positive?
   Precision = TP / (TP + FP)
 Recall: completeness – what % of positive tuples did the classifier label as positive?
   Recall = TP / (TP + FN) = TP / P
 A perfect score is 1.0
 Recall is the same as sensitivity, or the true positive rate
 There is an inverse relationship between precision and recall (it is possible to increase one at the cost of reducing the other)

79
Classifier Evaluation Metrics:
Precision and Recall, and F-measures
 F measure (F1 or F-score): the harmonic mean of precision and recall

   F1 = 2 × precision × recall / (precision + recall)

 Fß: a weighted measure of precision and recall

   Fß = (1 + ß²) × precision × recall / (ß² × precision + recall)

 It assigns ß times as much weight to recall as to precision
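Applied to the cancer example from the previous slide, a short sketch of these two formulas:

```python
precision, recall = 90 / 230, 90 / 300           # from the cancer confusion matrix

f1 = 2 * precision * recall / (precision + recall)

def f_beta(p, r, beta):
    return (1 + beta ** 2) * p * r / (beta ** 2 * p + r)

print(round(f1, 3))                              # ~0.34: low, reflecting the poor recall
print(round(f_beta(precision, recall, 2), 3))    # F2 weights recall more heavily
print(round(f_beta(precision, recall, 0.5), 3))  # F0.5 weights precision more heavily
```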

80
Evaluating Classifier Accuracy:
Holdout & Cross-Validation Methods
 Holdout method
 Given data is randomly partitioned into two independent sets

 Training set (e.g., 2/3) for model construction

 Test set (e.g., 1/3) for accuracy estimation

 Random sampling: a variation of holdout
 Repeat holdout k times; accuracy = average of the accuracies obtained

81
Evaluating Classifier Accuracy:
Holdout & Cross-Validation Methods
 Cross-validation (k-fold, where k = 10 is most popular)
 Randomly partition the data into k mutually exclusive subsets D1, …, Dk, each of approximately equal size
 Training and testing is performed k times
 At the i-th iteration, use Di as the test set and the others as the training set
 Leave-one-out: k folds where k = # of tuples, for small data sets; only one sample is “left out” at a time for the test set
 Stratified cross-validation: folds are stratified so that the class distribution in each fold is approximately the same as that in the initial data
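A minimal sketch of plain (unstratified) k-fold cross-validation, assuming a train_and_score(train, test) callback that fits a model on the training rows and returns its accuracy on the test rows; the callback and row format are assumptions.

```python
import random

def k_fold_indices(n, k=10, seed=0):
    """Randomly partition indices 0..n-1 into k mutually exclusive folds of roughly equal size."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(rows, k, train_and_score):
    """At iteration i, fold i is the test set and the remaining folds form the training set."""
    folds = k_fold_indices(len(rows), k)
    accuracies = []
    for i in range(k):
        test = [rows[j] for j in folds[i]]
        train = [rows[j] for f in folds[:i] + folds[i + 1:] for j in f]
        accuracies.append(train_and_score(train, test))
    return sum(accuracies) / k
```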

82
Evaluating Classifier Accuracy: Bootstrap

 Bootstrap
 Works well with small data sets
 Samples the given training tuples uniformly with replacement
 i.e., each time a tuple is selected, it is equally likely to be selected again and re-added to the training set
 Several bootstrap methods; a common one is the .632 bootstrap
 A data set with d tuples is sampled d times, with replacement, resulting in a training set of d samples. The data tuples that did not make it into the training set end up forming the test set. About 63.2% of the original data end up in the bootstrap sample, and the remaining 36.8% form the test set (since (1 – 1/d)^d ≈ e^(-1) = 0.368)
 Repeat the sampling procedure k times; the overall accuracy of the model is

   Acc(M) = Σi=1..k ( 0.632 × Acc(Mi)test_set + 0.368 × Acc(Mi)train_set )
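A quick numerical illustration of the 63.2% / 36.8% split (the data set size is arbitrary):

```python
import random

d = 10000
rng = random.Random(0)
sample = [rng.randrange(d) for _ in range(d)]   # d tuples drawn with replacement
in_training = len(set(sample)) / d              # fraction of distinct originals in the bootstrap

print(round(in_training, 3))                    # ~0.632 of the original tuples
print(round(1 - in_training, 3))                # ~0.368 left over for the test set
print(round((1 - 1 / d) ** d, 3))               # the (1 - 1/d)^d ≈ e^-1 approximation
```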

83
Estimating Confidence Intervals:
Classifier Models M1 vs. M2
 Suppose we have 2 classifiers, M1 and M2. Which one is better?
 Use 10-fold cross-validation to obtain the mean error rates err(M1) and err(M2)
 These mean error rates are just estimates of the error on the true population of future data cases
 What if the difference between the 2 error rates is just attributed to chance?
 Use a test of statistical significance
 Obtain confidence limits for our error estimates (“One model is better than the other by a margin of error of ±4%.”)

84
Estimating Confidence Intervals:
Null Hypothesis
 Perform 10-fold cross-validation
 Assume samples follow a t distribution with k–1 degrees of
freedom (here, k=10)
 Use t-test (or Student’s t-test)
 Null Hypothesis: M1 & M2 are the same ( the difference in mean error rate between the
two is zero)

 If we can reject null hypothesis, then


 we conclude that the difference between M1 & M2 is
statistically significant
 Choose the model with the lower error rate

85
Estimating Confidence Intervals: t-test

 If only 1 test set is available: pairwise comparison
 For the ith round of 10-fold cross-validation, the same cross partitioning is used to obtain err(M1)i and err(M2)i
 Average over the 10 rounds to get the mean error rates err(M1) and err(M2)
 The t-test computes a t-statistic with k–1 degrees of freedom:

   t = ( err(M1) - err(M2) ) / sqrt( var(M1 - M2) / k )

   where var(M1 - M2) = (1/k) Σi=1..k [ err(M1)i - err(M2)i - ( err(M1) - err(M2) ) ]²

 If two test sets are available: use a non-paired t-test

   t = ( err(M1) - err(M2) ) / sqrt( var(M1)/k1 + var(M2)/k2 )

   where k1 and k2 are the numbers of cross-validation samples used for M1 and M2, respectively
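A small sketch of the paired version of the t-statistic above; the per-fold error rates are hypothetical numbers, not from the slides:

```python
from math import sqrt

# hypothetical per-fold error rates from the same 10-fold partitioning
err_m1 = [0.12, 0.15, 0.11, 0.14, 0.13, 0.16, 0.12, 0.14, 0.13, 0.15]
err_m2 = [0.14, 0.16, 0.14, 0.15, 0.14, 0.17, 0.15, 0.16, 0.14, 0.16]

k = len(err_m1)
diffs = [a - b for a, b in zip(err_m1, err_m2)]
mean_diff = sum(diffs) / k
var = sum((d - mean_diff) ** 2 for d in diffs) / k
t = mean_diff / sqrt(var / k)

# compare |t| against the t-table value for k-1 = 9 degrees of freedom at z = sig/2
print(round(t, 2))
```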
86
Estimating Confidence Intervals:
Table for t-distribution

 Symmetric
 Significance level,
e.g., sig = 0.05 or
5% means M1 & M2
are significantly
different for 95% of
population
 Confidence limit, z
= sig/2

87
Estimating Confidence Intervals:
Statistical Significance
 Are M1 & M2 significantly different?
 Compute t. Select significance level (e.g. sig = 5%)

 Consult table for t-distribution: Find t value corresponding

to k-1 degrees of freedom (here, 9)


 t-distribution is symmetric: typically upper % points of

distribution shown → look up value for confidence limit


z=sig/2 (here, 0.025)
 If t > z or t < -z, then t value lies in rejection region:

 Reject null hypothesis that mean error rates of M1 & M2

are same
 Conclude: statistically significant difference between M1

& M2
 Otherwise, conclude that any difference is chance
88
Issues Affecting Model Selection

 Accuracy
 classifier accuracy: predicting class label
 Speed
 time to construct the model (training time)
 time to use the model (classification/prediction time)
 Robustness: handling noise and missing values
 Scalability: efficiency in disk-resident databases
 Interpretability
 understanding and insight provided by the model
 Other measures, e.g., goodness of rules, such as decision tree
size or compactness of classification rules
90
Chapter 8. Classification: Basic Concepts

 Classification: Basic Concepts


 Decision Tree Induction
 Bayes Classification Methods
 Rule-Based Classification
 Model Evaluation and Selection
 Techniques to Improve Classification Accuracy:
Ensemble Methods
 Summary
91
Ensemble Methods: Increasing the Accuracy

 Ensemble methods
 Use a combination of models to increase accuracy

 Combine a series of k learned models, M1, M2, …, Mk, with

the aim of creating an improved model M*


 Popular ensemble methods
 Bagging: averaging the prediction over a collection of

classifiers
 Boosting: weighted vote with a collection of classifiers

 Ensemble: combining a set of heterogeneous classifiers

92
Bagging: Bootstrap Aggregation
 Analogy: diagnosis based on multiple doctors’ majority vote
 Training
 Given a set D of d tuples, at each iteration i, a training set Di of d tuples is sampled with replacement from D (i.e., a bootstrap sample)
 A classifier model Mi is learned for each training set Di
 Classification: classify an unknown sample X
 Each classifier Mi returns its class prediction
 The bagged classifier M* counts the votes and assigns the class with the most votes to X
 Prediction: can be applied to the prediction of continuous values by taking the average value of each prediction for a given test tuple
 Accuracy
 Often significantly better than a single classifier derived from D
 For noisy data: not considerably worse, more robust
 Proved improved accuracy in prediction

93
Boosting
 Analogy: Consult several doctors, based on a combination of
weighted diagnoses—weight assigned based on the previous
diagnosis accuracy
 How boosting works?
 Weights are assigned to each training tuple
 A series of k classifiers is iteratively learned
 After a classifier Mi is learned, the weights are updated to
allow the subsequent classifier, Mi+1, to pay more attention to
the training tuples that were misclassified by Mi
 The final M* combines the votes of each individual classifier,
where the weight of each classifier's vote is a function of its
accuracy
 Boosting algorithm can be extended for numeric prediction
 Comparing with bagging: Boosting tends to have greater accuracy,
but it also risks overfitting the model to misclassified data 94
Adaboost (Freund and Schapire, 1997)
 Given a set of d class-labeled tuples, (X1, y1), …, (Xd, yd)
 Initially, all the weights of tuples are set the same (1/d)
 Generate k classifiers in k rounds. At round i:
 Tuples from D are sampled (with replacement) to form a training set Di of the same size
 Each tuple’s chance of being selected is based on its weight
 A classification model Mi is derived from Di
 Its error rate is calculated using Di as a test set
 If a tuple is misclassified, its weight is increased; otherwise it is decreased
 Error rate: err(Xj) is the misclassification error of tuple Xj. Classifier Mi’s error rate is the sum of the weights of the misclassified tuples:

   error(Mi) = Σj wj × err(Xj)

 The weight of classifier Mi’s vote is

   log( (1 - error(Mi)) / error(Mi) )
95
Random Forest (Breiman 2001)
 Random Forest:
 Each classifier in the ensemble is a decision tree classifier and is

generated using a random selection of attributes at each node to


determine the split
 During classification, each tree votes and the most popular class is

returned
 Two Methods to construct Random Forest:
 Forest-RI (random input selection): Randomly select, at each node, F

attributes as candidates for the split at the node. The CART methodology
is used to grow the trees to maximum size
 Forest-RC (random linear combinations): Creates new attributes (or

features) that are a linear combination of the existing attributes


(reduces the correlation between individual classifiers)
 Comparable in accuracy to Adaboost, but more robust to errors and outliers
 Insensitive to the number of attributes selected for consideration at each
split, and faster than bagging or boosting
96
Classification of Class-Imbalanced Data Sets
 Class-imbalance problem: Rare positive example but numerous
negative ones, e.g., medical diagnosis, fraud, oil-spill, fault, etc.
 Traditional methods assume a balanced distribution of classes
and equal error costs: not suitable for class-imbalanced data
 Typical methods for imbalance data in 2-class classification:
 Oversampling: re-sampling of data from positive class

 Under-sampling: randomly eliminate tuples from negative


class
 Threshold-moving: moves the decision threshold, t, so that
the rare class tuples are easier to classify, and hence, less
chance of costly false negative errors
 Ensemble techniques: Ensemble multiple classifiers
introduced above
 Still difficult for class imbalance problem on multiclass tasks
97
Basics of Neural Network
 What is a Neural Network
 Neural Network Classifier
 Data Normalization
 Neuron and bias of a neuron
 Single Layer Feed Forward
 Limitation
 Multi Layer Feed Forward
 Back propagation
Neural Networks
What is a Neural Network?
•Biologically motivated approach to
machine learning

Similarity with biological network


Fundamental processing elements of a neural network
is a neuron
1.Receives inputs from other source
2.Combines them in someway
3.Performs a generally nonlinear operation on the result
4.Outputs the final result
Similarity with Biological Network

• Fundamental processing element of a


neural network is a neuron
• A human brain has 100 billion neurons
• An ant brain has 250,000 neurons
Synapses, the basis of learning and memory
Neural Network

 Neural Network is a set of connected


INPUT/OUTPUT UNITS, where each connection
has a WEIGHT associated with it.

 Neural Network learning is also called CONNECTIONIST


learning due to the connections between units.

 It is a case of SUPERVISED, INDUCTIVE or


CLASSIFICATION learning.
Neural Network
 Neural Network learns by adjusting the weights so as
to be able to correctly classify the training data and
hence, after testing phase, to classify unknown data.

 Neural Network needs long time for training.

 Neural Network has a high tolerance to noisy and


incomplete data
Neural Network Classifier
 Input: Classification data
It contains classification attribute
 Data is divided, as in any classification problem.
[Training data and Testing data]

 All data must be normalized.


(i.e., all values of attributes in the database are changed to fall in the interval [0,1] or [-1,1])
A neural network can work with data in the range (0,1) or (-1,1)

 Two basic normalization techniques


[1] Max-Min normalization
[2] Decimal Scaling normalization
One Neuron as a Network
 Here x1 and x2 are normalized attribute values of the data.
 y is the output of the neuron, i.e., the class label.
 x1 and x2 values multiplied by weight values w1 and w2 are the input to the neuron x.
 Value of x1 is multiplied by weight w1 and value of x2 is multiplied by weight w2.
 Given that
 w1 = 0.5 and w2 = 0.5,
 and say the value of x1 is 0.3 and the value of x2 is 0.8,
 the weighted sum is:
 sum = w1 × x1 + w2 × x2 = 0.5 × 0.3 + 0.5 × 0.8 = 0.55


One Neuron as a Network
 The neuron receives the weighted sum as input and calculates the output as a function of the input as follows:
 y = f(x), where f(x) is defined as
 f(x) = 0  when x < 0.5
 f(x) = 1  when x >= 0.5
 For our example, x (the weighted sum) is 0.55, so y = 1;
 that means the corresponding input attribute values are classified in class 1.
 If for another input the weighted sum were x = 0.45, then f(x) = 0,
 so we would conclude that those input values are classified to class 0.
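A tiny sketch of this single step-activation neuron, using the slide's weights and inputs:

```python
def classify(x1, x2, w1=0.5, w2=0.5, threshold=0.5):
    """Single neuron with a step activation: output class 1 when the weighted sum reaches the threshold."""
    weighted_sum = w1 * x1 + w2 * x2
    return 1 if weighted_sum >= threshold else 0

print(classify(0.3, 0.8))   # weighted sum 0.55 -> class 1
print(classify(0.3, 0.6))   # weighted sum 0.45 -> class 0
```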


Bias of a Neuron

 We need the bias value to be added to the weighted sum ∑wixi so that we can shift the decision boundary away from the origin.
 v = ∑wixi + b, where b is the bias

[Figure: parallel decision lines x1 - x2 = -1, x1 - x2 = 0, and x1 - x2 = 1 in the (x1, x2) plane, illustrating how the bias shifts the line]
Bias as extra input

[Diagram: input attribute values x0 = +1, x1, x2, …, xm with weights w0, w1, w2, …, wm feed a summing function followed by an activation function φ(·), producing the output class y]

   v = Σj=0..m wj xj ,   with w0 = b
Neuron with Activation

 The neuron is the basic information processing unit of a neural network. It consists of:

1. A set of links, describing the neuron inputs, with weights w1, w2, …, wm

2. An adder function (linear combiner) for computing the weighted sum of the inputs (real numbers):

   u = Σj=1..m wj xj

3. An activation function φ for limiting the amplitude of the neuron output:

   y = φ(u + b)
Why We Need Multi Layer?

 Linearly separable functions of x and y (e.g., AND, OR) can be handled by a single neuron
 Linearly inseparable functions (e.g., XOR) cannot
 Solution? Use more than one layer of neurons
A Multilayer Feed-Forward Neural Network

[Diagram: an input record xi enters the input nodes; weights wij connect the input nodes to the hidden nodes (outputs Oj); weights wjk connect the hidden nodes to the output nodes (outputs Ok), which give the output class. The network is fully connected.]
Neural Network Learning

 The inputs are fed simultaneously into the input


layer.

 The weighted outputs of these units are fed


into hidden layer.

 The weighted outputs of the last hidden layer are


inputs to units making up the output layer.
A Multilayer Feed Forward Network

 The units in the hidden layers and output layer are


sometimes referred to as neurodes, due to their symbolic
biological basis, or as output units.

 A network containing two hidden layers is called a three-


layer neural network, and so on.

 The network is feed-forward in that none of the weights


cycles back to an input unit or to an output unit of a
previous layer.
A Multilayered Feed – Forward Network

 INPUT: records without class attribute with normalized


attributes values.

 INPUT VECTOR: X = { x1, x2, …. xn}


where n is the number of (non class) attributes.

 INPUT LAYER – there are as many nodes as non-class


attributes i.e. as the length of the input vector.

 HIDDEN LAYER – the number of nodes in the hidden


layer and the number of hidden layers depends on
implementation.
A Multilayered Feed–Forward Network

 OUTPUT LAYER – corresponds to the class


attribute.
 There are as many nodes as classes (values
of the class attribute).

Ok k= 1, 2,.. #classes
• Network is fully connected, i.e. each unit provides input
to each unit in the next forward layer.
Classification by Back propagation

 Back Propagation learns by iteratively processing a


set of training data (samples).

 For each sample, weights are modified to minimize


the error between network’s classification and
actual classification.
Steps in Back propagation Algorithm

 STEP ONE: initialize the weights and biases.

 The weights in the network are initialized to


random numbers from the interval [-1,1].

 Each unit has a BIAS associated with it

 The biases are similarly initialized to random


numbers from the interval [-1,1].

 STEP TWO: feed the training sample.


Steps in Back propagation Algorithm
( cont..)

 STEP THREE: Propagate the inputs forward; we


compute the net input and output of each unit in
the hidden and output layers.

 STEP FOUR: back propagate the error.

 STEP FIVE: update weights and biases to reflect the


propagated errors.

 STEP SIX: terminating conditions.


Propagation through Hidden Layer (One Node)

[Diagram: an input vector x0 (the bias input), x1, …, xn is multiplied by the weight vector w0j, w1j, …, wnj; the weighted sum plus the bias of unit j is passed through the activation function f to produce the output y]

 The inputs to unit j are outputs from the previous layer. These are multiplied by their corresponding weights to form a weighted sum, which is added to the bias associated with unit j.
 A nonlinear activation function f is applied to the net input.
Propagate the inputs forward
 For unit j in the input layer, its output is equal to its input, that is, Oj = Ij for input unit j.
 The net input to each unit in the hidden and output layers is computed as follows.
 Given a unit j in a hidden or output layer, the net input is

   Ij = Σi wij Oi + θj

 where wij is the weight of the connection from unit i in the previous layer to unit j, Oi is the output of unit i from the previous layer, and θj is the bias of the unit.


Propagate the inputs forward
 Each unit in the hidden and output layers takes its net input and then applies an activation function. The function symbolizes the activation of the neuron represented by the unit. It is also called a logistic, sigmoid, or squashing function.
 Given a net input Ij to unit j, the output of unit j is computed as

   Oj = 1 / (1 + e^(-Ij))
Back propagate the error
 When reaching the output layer, the error is computed and propagated backwards.
 For a unit k in the output layer the error is computed by the formula:

   Errk = Ok (1 - Ok)(Tk - Ok)

 where Ok is the actual output of unit k (computed by the activation function, Ok = 1 / (1 + e^(-Ik))),
 Tk is the true output based on the known class label of the training sample, and
 Ok(1 - Ok) is the derivative (rate of change) of the activation function.


Back propagate the error

 The error is propagated backwards by updating weights and biases to reflect the error of the network’s classification.
 For a unit j in the hidden layer the error is computed by the formula:

   Errj = Oj (1 - Oj) Σk Errk wjk

 where wjk is the weight of the connection from unit j to unit k in the next higher layer, and Errk is the error of unit k.
Update weights and biases
 Weights are updated by the following equations, where l is a constant between 0.0 and 1.0 reflecting the learning rate; this learning rate is fixed for the implementation.

   Δwij = (l) Errj Oi
   wij = wij + Δwij

• Biases are updated by the following equations:

   Δθj = (l) Errj
   θj = θj + Δθj
Update weights and biases

 We are updating weights and biases after the presentation of each sample.
 This is called case updating.
 Epoch: one iteration through the training set is called an epoch.
 Epoch updating: alternatively, the weight and bias increments could be accumulated in variables and the weights and biases updated after all of the samples of the training set have been presented.
 Case updating is more accurate.


Terminating Conditions
 Training stops
• All wij in the previous epoch are below some
threshold, or

•The percentage of samples misclassified in the previous


epoch is below some threshold, or

• a pre specified number of epochs has expired.

• In practice, several hundreds of thousands of epochs may


be required before the weights will converge.
Backpropagation Formulas

[Diagram: the input vector xi enters the input nodes; hidden nodes compute Ij = Σi wij Oi + θj and Oj = 1/(1 + e^(-Ij)); output nodes compute Errk = Ok(1 - Ok)(Tk - Ok); hidden-node errors are Errj = Oj(1 - Oj) Σk Errk wjk; the updates are wij = wij + (l) Errj Oi and θj = θj + (l) Errj]
Example of Back propagation
Input units = 3, hidden units = 2, output units = 1

Initialize weights with random numbers from -1.0 to 1.0.

Initial input and weights:

x1   x2   x3   w14   w15   w24   w25   w34   w35   w46   w56
1    0    1    0.2   -0.3  0.4   0.1   -0.5  0.2   -0.3  -0.2


Example (cont.)

 Bias is added to the hidden and output nodes
 Initialize the biases with random values from -1.0 to 1.0

Bias (random):
θ4     θ5    θ6
-0.4   0.2   0.1


Net Input and Output Calculation
Ij = Σi wij Oi + θj        Oj = 1 / (1 + e^(-Ij))

Unit j   Net input Ij                                      Output Oj
4        0.2 + 0 - 0.5 - 0.4 = -0.7                        1/(1 + e^0.7) = 0.332
5        -0.3 + 0 + 0.2 + 0.2 = 0.1                        1/(1 + e^-0.1) = 0.525
6        (-0.3)(0.332) - (0.2)(0.525) + 0.1 = -0.105       1/(1 + e^0.105) = 0.474
Calculation of Error at Each Node
Errk = Ok (1 - Ok)(Tk - Ok)        Errj = Oj (1 - Oj) Σk Errk wjk

Unit j   Err j
6        0.474 (1 - 0.474)(1 - 0.474) = 0.1311        (we assume T6 = 1)
5        0.525 (1 - 0.525)(0.1311)(-0.2) = -0.0065
4        0.332 (1 - 0.332)(0.1311)(-0.3) = -0.0087
Calculation of Weight and Bias Updating
Learning rate l = 0.9

Δwij = (l) Errj Oi     wij = wij + Δwij     Δθj = (l) Errj     θj = θj + Δθj

Weight/Bias   New value
w46           -0.3 + 0.9(0.1311)(0.332) = -0.261
w56           -0.2 + 0.9(0.1311)(0.525) = -0.138
w14           0.2 + 0.9(-0.0087)(1) = 0.192
w15           -0.3 + 0.9(-0.0065)(1) = -0.306
……            (the remaining weights are updated similarly)
θ6            0.1 + 0.9(0.1311) = 0.218
……            (the remaining biases are updated similarly)
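A small sketch reproducing the worked example's numbers (forward pass, error backpropagation, and a few case updates) with the initial weights, biases, input (1, 0, 1), target 1, and learning rate 0.9 from the slides:

```python
from math import exp

def sigmoid(x):
    return 1 / (1 + exp(-x))

w = {"14": 0.2, "15": -0.3, "24": 0.4, "25": 0.1, "34": -0.5, "35": 0.2, "46": -0.3, "56": -0.2}
theta = {"4": -0.4, "5": 0.2, "6": 0.1}
x1, x2, x3, target, l = 1, 0, 1, 1, 0.9

# forward pass: net input = sum of weighted inputs plus bias, output = sigmoid(net input)
O4 = sigmoid(x1 * w["14"] + x2 * w["24"] + x3 * w["34"] + theta["4"])   # ~0.332
O5 = sigmoid(x1 * w["15"] + x2 * w["25"] + x3 * w["35"] + theta["5"])   # ~0.525
O6 = sigmoid(O4 * w["46"] + O5 * w["56"] + theta["6"])                  # ~0.474

# backpropagate the error
Err6 = O6 * (1 - O6) * (target - O6)          # ~0.1311
Err5 = O5 * (1 - O5) * Err6 * w["56"]         # ~-0.0065
Err4 = O4 * (1 - O4) * Err6 * w["46"]         # ~-0.0087

# case updating of a few weights and a bias
print(round(w["46"] + l * Err6 * O4, 3))      # ~-0.261
print(round(w["14"] + l * Err4 * x1, 3))      # ~0.192
print(round(theta["6"] + l * Err6, 3))        # ~0.218
```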
Neural Network as a Classifier
 Weakness
 Long training time
 Require a number of parameters typically best determined empirically,
e.g., the network topology or “structure.”
 Poor interpretability: Difficult to interpret the symbolic meaning behind
the learned weights and of “hidden units” in the network
 Strength
 High tolerance to noisy data
 Ability to classify untrained patterns
 Well-suited for continuous-valued inputs and outputs
 Successful on an array of real-world data, e.g., hand-written letters
 Algorithms are inherently parallel
 Techniques have recently been developed for the extraction of rules
from trained neural networks
SVM—Support Vector Machines
 A relatively new classification method for both linear and
nonlinear data
 It uses a nonlinear mapping to transform the original training
data into a higher dimension
 With the new dimension, it searches for the linear optimal
separating hyperplane (i.e., “decision boundary”)
 With an appropriate nonlinear mapping to a sufficiently high
dimension, data from two classes can always be separated by a
hyperplane
 SVM finds this hyperplane using support vectors (“essential”
training tuples) and margins (defined by the support vectors)
SVM—History and Applications

 Vapnik and colleagues (1992)—groundwork from Vapnik &


Chervonenkis’ statistical learning theory in 1960s
 Features: training can be slow but accuracy is high owing to
their ability to model complex nonlinear decision boundaries
(margin maximization)
 Used for: classification and numeric prediction
 Applications:
 handwritten digit recognition, object recognition, speaker
identification, benchmarking time-series prediction tests
SVM—General Philosophy

[Figures: two separating hyperplanes for the same data, one with a small margin and one with a large margin; the training tuples that lie on the margin boundaries are the support vectors]


SVM—When Data Is Linearly Separable

Let data D be (X1, y1), …, (X|D|, y|D|), where Xi is the set of training tuples
associated with the class labels yi
There are infinite lines (hyperplanes) separating the two classes but we want to
find the best one (the one that minimizes classification error on unseen data)
SVM searches for the hyperplane with the largest margin, i.e., maximum
marginal hyperplane (MMH)
SVM—Linearly Separable
 A separating hyperplane can be written as
W●X+b=0
where W={w1, w2, …, wn} is a weight vector and b a scalar (bias)
 For 2-D it can be written as
w0 + w1 x1 + w2 x2 = 0
 The hyperplane defining the sides of the margin:
H1: w0 + w1 x1 + w2 x2 ≥ 1 for yi = +1, and
H2: w0 + w1 x1 + w2 x2 ≤ – 1 for yi = –1
 Any training tuples that fall on hyperplanes H1 or H2 (i.e., the
sides defining the margin) are support vectors
 This becomes a constrained (convex) quadratic optimization problem: quadratic objective function and linear constraints → Quadratic Programming (QP) → Lagrangian multipliers
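In practice, the maximum-margin hyperplane can be found with an off-the-shelf QP-based solver. A minimal sketch using scikit-learn's SVC (assumed to be installed; the toy points are made up):

```python
from sklearn.svm import SVC

X = [[1, 1], [2, 1], [1, 2], [4, 4], [5, 4], [4, 5]]   # two linearly separable clusters
y = [-1, -1, -1, 1, 1, 1]

clf = SVC(kernel="linear", C=1000)   # large C -> (nearly) hard-margin hyperplane
clf.fit(X, y)

print(clf.coef_, clf.intercept_)     # W and b of the separating hyperplane W.X + b = 0
print(clf.support_vectors_)          # tuples lying on the margin-defining hyperplanes H1/H2
print(clf.predict([[3, 3]]))
```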
Why Is SVM Effective on High Dimensional Data?

 The complexity of trained classifier is characterized by the # of


support vectors rather than the dimensionality of the data
 The support vectors are the essential or critical training examples —
they lie closest to the decision boundary (MMH)
 If all other training examples are removed and the training is
repeated, the same separating hyperplane would be found
 The number of support vectors found can be used to compute an
(upper) bound on the expected error rate of the SVM classifier, which
is independent of the data dimensionality
 Thus, an SVM with a small number of support vectors can have good
generalization, even when the dimensionality of the data is high
SVM—Linearly Inseparable

[Figure: data plotted on attributes A1 and A2 that cannot be separated by a straight line]

 Transform the original input data into a higher dimensional space
 Search for a linear separating hyperplane in the new space


SVM: Different Kernel functions
 Instead of computing the dot product on the transformed data, it is mathematically equivalent to apply a kernel function K(Xi, Xj) to the original data, i.e., K(Xi, Xj) = Φ(Xi)·Φ(Xj)
 Typical kernel functions:
   Polynomial kernel of degree h:   K(Xi, Xj) = (Xi·Xj + 1)^h
   Gaussian radial basis function:  K(Xi, Xj) = e^(-||Xi - Xj||² / 2σ²)
   Sigmoid kernel:                  K(Xi, Xj) = tanh(κ Xi·Xj - δ)
 SVM can also be used for classifying multiple (> 2) classes and for regression analysis (with additional parameters)
SVM vs. Neural Network

 SVM
   Deterministic algorithm
   Nice generalization properties
   Hard to learn – learned in batch mode using quadratic programming techniques
   Using kernels can learn very complex functions

 Neural Network
   Nondeterministic algorithm
   Generalizes well but doesn’t have a strong mathematical foundation
   Can easily be learned in incremental fashion
   To learn complex functions—use a multilayer perceptron (nontrivial)
