
A Framework for Multiple Classifier Systems Comparison (MCSCF)

P. Shunmugapriya (Research Scholar/CSE, PEC, Puducherry) and S. Kanmani (Professor/IT, PEC, Puducherry)

Abstract

In this paper, we propose a new framework for comparing Multiple Classifier Systems in the literature. A Classifier Ensemble or Multiple Classifier System combines a finite number of classifiers of the same or different types, trained simultaneously for a common classification task. The ensemble is able to improve the generalization ability of classification compared with a single classifier. The objective of this paper is to introduce a framework, MCSCF, that analyses the existing research work on classifier ensembles. Our framework compares classifier ensembles on the basis of the constituent classifiers of an ensemble and their combination methods, the basis for classifier selection, standard datasets, evaluation criteria and the behavior of classifier ensemble outcomes. It is observed that different types of classifiers are combined using a number of different fusion methods, and classification accuracy is considerably improved in the ensembles irrespective of the application domain.

Keywords: Data Mining, Pattern Classification, Classifier Ensemble, Multiple Classifier Systems

1. Introduction

Combining classifiers to obtain higher classification accuracy is a rapidly growing area that enjoys considerable attention from the pattern recognition and machine learning communities. For ensembling, the classification capabilities of a single classifier are not required to be very strong. What is important is to use suitable combination strategies to improve the generalization of classifier ensembles. In order to speed up convergence and simplify structures, the combined components are often weak or simple [1]. Classifiers such as the Region Based classifier, Contour Based classifier, Enhanced Loci classifier, Histogram-based classifier, Crossing Based classifier, Neural Networks, K-nearest neighbor, SVM, Anorm, KDE, KMP (Kernel Matching Pursuit), Minimum Distance classifier, Maximum Likelihood classifier, Mahalanobis classifier, Naïve Bayesian, Decision Tree, Fisher classifier and Nearest Means classifier are often combined to form a classifier fusion or classifier ensemble, as shown in Table I. There are also many classifier combination methods available. By applying different combination methods to a variety of classifiers, classifier ensembles result with variations in their performance and classification accuracy. So far, different classifier ensemble methods have been experimented with on different application domains. The main objective of this paper is to provide a comparison framework of works on classifier combination, i.e., what classifiers are used in ensembles, what combination methods have been applied, and what application domains and datasets are considered.

[2] and [9] are classifier ensemble works carried out in a context-aware framework using an evolutionary algorithm; [2] is an online proposal. [3] uses a voting method to form an ensemble that classifies hyperspectral data. [5] considers distribution characteristics and diversity as the base features for classifier selection and uses the k-means algorithm; a new kernel-based probability c-means (KPCM) algorithm is designed for the classifier ensemble. [6] is double bagging, a variant of bagging; it uses SVM with 4 kernels and has shown better classification accuracy than bagging and boosting. A-priori knowledge about classifiers is considered an important criterion in selecting the classifiers in [7], which uses a number of classifiers and ensemble methods to form an ensemble that consistently achieves 90% classification accuracy. In [8], boosting is employed to obtain diversity among the classifiers, together with a very effective method of ensembling, stacked generalization. In [10], feature selection and GA are used, and the ensemble performs well in the accurate prediction of cancer data. In the area of remote sensing, [11] considers diversity as an important feature and works on satellite images. [12], [14], [18] and [19] are also works carried out giving importance to the diversity of classifiers. [16] is based on feature selection and uses the product rule for the ensemble. Certain works are carried out with the aim of examining the performance of classifier ensembles by using several combinations of classifiers and then looking for the best combination [17], [24], [25].

The text is organized as follows: Section 2 is about data mining, pattern classification and classifier ensembles. Section 3 presents the comparative framework of works on classifier ensembles. In Section 4, we discuss the comparison of ensemble results and our inferences from the framework. Section 5 is our conclusion.
2. Data Mining

Data mining, or Knowledge Discovery in Databases (KDD), is a method for data analysis developed in recent years. It is an interdisciplinary research area focusing on methodologies for discovering and extracting implicit, previously unknown and potentially useful knowledge (rules, patterns, regularities as well as constraints) from data. The mining task is mainly divided into four types: class/concept description, association analysis, classification or prediction, and clustering analysis. Among them, the classification task has received the most attention. In the task of classification, the mining algorithm generates an exact class description for the classification of unknown data by analyzing the existing data.

2.1. Pattern Classification

Classification is the action of assigning an object to a category according to the characteristics of the object. In data mining, classification refers to a form of data analysis which can be used to extract models describing important data classes or to predict future data trends, i.e., to classify objects (or patterns) into categories (or classes). Classification has two stages: (a) a model is determined from a set of data called training data, i.e., the classes have been established beforehand, and the model is represented as rules, decision trees or a mathematical formula; (b) the correctness of the evolved model is estimated by studying the results of the model on a set of data called test data, whose classes are also determined beforehand. Classification techniques that are available include Decision Tree Learning, Decision Rule Learning, Naïve Bayesian Classifier, Bayesian Belief Networks, Neural Networks, k-Nearest Neighbor Classifier and Support Vector classifiers. These techniques differ in the learning mechanism and in the representation of the learned model. Classification algorithms include decision trees, correlation analysis, Bayesian classification, neural networks, nearest neighbor, genetic algorithms, rough sets and fuzzy technology; methods like Support Vector Machine (SVM), Neural Networks, K-Nearest Neighbor, K-Means, Boosting and Quadratic classifiers also exist. SVM is the most preferred method in many classification and machine learning tasks. Many standard tools like WEKA, KNIME, RapidMiner, R and MATLAB toolboxes are available for classification, and frameworks like LIBLINEAR and LIBSVM are useful resources available on the web. Many people and industries are interested in decision support and prediction systems for better choices and risk assessment, and classification forms the basis of these systems. A minimal sketch of the two-stage classification process is given below.
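The following sketch illustrates the two stages described above. It is illustrative only: it assumes scikit-learn is available, and the Iris dataset and decision tree classifier are stand-ins for the training/test data and model of an actual application.

```python
# Minimal sketch of the two-stage classification process, assuming scikit-learn.
# The Iris dataset and decision tree are illustrative choices only.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)

# Stage (a): learn a model from labelled training data.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
model = DecisionTreeClassifier().fit(X_train, y_train)

# Stage (b): estimate the correctness of the model on held-out test data.
print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```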

2.2. Multiple Classifier Combination

Although many classifiers exist and are widely used for classification in different application areas, a single classifier is often unable to give good classification (performance) accuracy. To enhance the performance of classification algorithms, many works were carried out, leading to the idea of combining different classifiers to perform the classification task. Such a combination is known by names such as Combination of Multiple Classifiers, Committee Machine, Classifier Ensemble and Classifier Fusion; the most popular term is Classifier Ensemble. Combining classifiers to get higher accuracy has become an important research topic in recent years. A classifier ensemble is a learning paradigm where several classifiers are combined in order to generate a single classification result with improved performance accuracy. There are many methods available for combining classifiers; some of the popular ensembling methods are majority voting, decision templates, naïve Bayesian combination, boosting and bagging. There are many proposals for classifier ensembles, and work is still ongoing. To give an idea of the types of ensembles that have been proposed so far, we present a comparative framework of existing ensemble methods in Section 3. A small sketch of one of the simplest combination schemes, majority voting, is given below.
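The sketch below combines three heterogeneous classifiers by simple majority voting, one of the ensembling methods mentioned above. It assumes scikit-learn; the member classifiers and the breast cancer dataset are placeholders, not choices taken from any of the surveyed works.

```python
# Illustrative majority-voting ensemble over three heterogeneous classifiers,
# assuming scikit-learn; the member classifiers and dataset are placeholders.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

members = [GaussianNB(), KNeighborsClassifier(n_neighbors=3), SVC(kernel="rbf")]
for clf in members:
    clf.fit(X_tr, y_tr)

# Each member votes with a class label; the ensemble outputs the most frequent label.
votes = np.array([clf.predict(X_te) for clf in members])   # shape: (n_members, n_samples)
ensemble_pred = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

for clf, p in zip(members, votes):
    print(type(clf).__name__, "accuracy:", (p == y_te).mean())
print("Majority-vote ensemble accuracy:", (ensemble_pred == y_te).mean())
```

Typically, the ensemble's vote is at least as accurate as most of its members, which is the behavior the surveyed works report.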

3. Comparative Framework of Works on Classifier Ensemble

The framework for comparing the existing classifier ensembles is given in Table I. Features for classifier selection, types of classifiers, combination methods, datasets and evaluation criteria are the important features discussed in the framework. Ensemble works are carried out from two perspectives: (a) to study the performance of an ensemble by combining a number of different classifiers using various combination methods; in certain works, the classifiers to be put into the combination are selected on the basis of several features such as diversity, accuracy, feature selection, feature subspaces, random sampling, etc.; (b) to apply different ensemble methods to a number of application domains and several standard datasets and conclude on the performance of the ensemble.

3.1. Classifier Selection Basis

Classifiers are selected for a particular ensemble based on several features, as specified in the third column of Table I. The features are prior knowledge about classifiers, diversity of classifiers, feature selection, reject option and random subspaces. Schomaker et al. [7] consider prior knowledge about the behavior of classifiers to be an important factor: knowing how classifiers behave individually and when put into a combination plays an important role in their work. Another criterion for the selection of classifiers is the diversity of classifiers. The diversity of the classifier outputs is a vital requirement for the success of the ensemble [1]. Diversity indirectly speaks about the correctness of the individual classifiers that are to be combined. [5], [8], [11], [12], [14] and [19] are examples of ensemble works in which classifiers are selected on the basis of diversity. The diversity among a combination of classifiers is defined as follows: if one classifier makes some errors, then for the combination we look for classifiers which make errors on different objects [1]. There are two kinds of diversity measures: pairwise and non-pairwise. Pairwise measures are taken considering two classifiers at a time, and non-pairwise measures are taken considering all the classifiers at one time [23]. Both kinds of measures have different evaluation criteria; a small sketch of two common pairwise measures is given below.
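The sketch below computes two well-known pairwise diversity measures, the Q-statistic and the disagreement measure, as described by Kuncheva and Whitaker [23]. The function and its example inputs are illustrative; the inputs are assumed to be 0/1 indicators of whether each classifier labelled each sample correctly.

```python
# Sketch of two pairwise diversity measures, the Q-statistic and the disagreement
# measure (cf. Kuncheva and Whitaker [23]); inputs are assumed to be 0/1 indicators
# of whether each classifier labelled each sample correctly.
import numpy as np

def pairwise_diversity(correct_i, correct_j):
    a, b = np.asarray(correct_i, bool), np.asarray(correct_j, bool)
    n11 = np.sum(a & b)     # both classifiers correct
    n00 = np.sum(~a & ~b)   # both classifiers wrong
    n10 = np.sum(a & ~b)    # only classifier i correct
    n01 = np.sum(~a & b)    # only classifier j correct
    denom = n11 * n00 + n01 * n10
    q = (n11 * n00 - n01 * n10) / denom if denom else 0.0   # Q-statistic: lower means more diverse
    disagreement = (n01 + n10) / (n11 + n00 + n10 + n01)    # disagreement: higher means more diverse
    return q, disagreement

# Two classifiers that err on different samples are highly diverse:
print(pairwise_diversity([1, 1, 0, 1, 0, 1], [0, 1, 1, 1, 1, 0]))
```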

Diverse classifiers for any work are obtained by bootstrapping the original dataset, resampling the training set or adopting different feature sets, using Bagging [20], Boosting [21] and the Random Subspace method [22]. When classifiers are more diverse, it is reasonable to expect very good classification accuracy from them, because the more diverse the classifiers are, the better they are at covering different error conditions. Feature selection is another criterion for selecting classifiers. Data objects contain a number of features, and there may be some irrelevant features that are not helpful for classification. In that case, features that are important in identifying the uniqueness of an object are selected using feature selection algorithms. After deciding on the features, classifiers that can work on this particular feature subset are selected and combined; [10] and [15] discuss ensemble proposals based on feature selection. Classifiers of an ensemble are also selected based on the assumption that they are independent of each other and exhibit some diversity [5]. Distribution characteristic refers to the difference in the performance of classifiers in different regions of the characteristic space. Apart from performing the classification task correctly, classifiers should also be capable of identifying data objects that do not belong to a particular class and of rejecting them; when such classifiers are selected for the ensemble, they are called classifiers with a reject option. Apart from the above features, classifiers are also selected according to the type of application domain and the ensemble method to be applied. Certain ensembles are fitted into a context-aware framework, and in most such cases an evolutionary algorithm is used for classification, leading to very good classification performance; [2] and [9] discuss this type of methodology. GA plays a very important role in combining classifiers [11]. When GA is the ensemble method to be applied, classifiers that can work well in a GA framework are selected for the ensemble.

3.2. Classifiers for Ensemble

Different kinds of standard classifiers exist, and most of them perform well when put into an ensemble. The Region Based classifier, Contour Based classifier, Enhanced Loci classifier, Histogram-based classifier, Crossing Based classifier, Nearest Neighbor, K-Nearest Neighbor (with K taking values 1, 2, 3, etc.), Anorm, KDE, Minimum Distance classifier, Maximum Likelihood classifier, Mahalanobis classifier, SVM, KMP (Kernel Matching Pursuit), Decision Tree, Binary Decision Tree, Nearest Mean classifier, Maximum Entropy Model, heterogeneous base learners, Naïve Bayesian classifier, Linear Discriminant Analysis, Non-Linear Discriminant Analysis and Rough Set Theory are the classifiers used in most ensemble applications. When diversity is the base feature considered, the classifiers are trained using methods like bagging, boosting and random sampling, thereby diversifying them; the diverse classifiers are then put into the ensemble.
In the case of the context-aware framework, the classifiers are trained under different context conditions before being put into the ensemble [2], [9]. Cost sensitivity is considered an important factor for a classifier in some cases, and the classifier is then trained to possess cost sensitivity [18]; the cost-sensitive SVM (CS-SVC) is one such classifier. SVM as such is used in many ensembles. There are 4 kernel functions with which SVM can be combined to yield very good classification accuracy: the linear kernel, polynomial kernel, radial basis kernel and sigmoid kernel; SVM with the radial basis kernel yields very good performance.

3.3. Classifier Combination Methods

Majority Voting, Weighted Majority Voting, Naïve Bayesian Combination, Multinomial Methods, Probabilistic Approximation, Singular Value Decomposition, Trainable Combiners, Non-Trainable Combiners, Decision Templates, the Dempster-Shafer method, Bagging, Boosting and Stacking (Stacked Generalization) are the methods available for combining classifiers according to Ludmila I. Kuncheva [1]. Of the available methods, Majority Voting, the Dempster-Shafer method, Boosting, Stacking and Bagging are the popular methods used in most ensemble works [6], [7], [8], [12], [14], [17], [18], [19]. Bayesian decision rules are another popular means of ensembling; the rules are the product rule, sum rule, max rule, min rule and median rule [11], with the product rule being the most commonly used of the five [16]. Multinomial methods come in two types, the Behavior Knowledge Space (BKS) method and Wernecke's method, of which BKS is the more widely used [7]. In boosting, the AdaBoost algorithm is the most commonly used ensemble algorithm [12]. A small sketch of the sum and product rules applied to class-posterior estimates follows.
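The sketch below applies the sum and product rules to the class-posterior estimates of several probabilistic classifiers, in the spirit of the Bayesian decision rules used in [11] and [16]. The posterior values are made-up toy numbers, and the function is only an illustration of how the two rules differ.

```python
# Sketch of the sum and product combination rules applied to the class-posterior
# estimates of several probabilistic classifiers (cf. the Bayesian decision rules
# in [11] and [16]); the posterior values below are made-up toy numbers.
import numpy as np

def combine_posteriors(posteriors, rule="product"):
    """posteriors: array of shape (n_classifiers, n_samples, n_classes)."""
    p = np.asarray(posteriors, dtype=float)
    if rule == "product":
        scores = np.prod(p, axis=0)   # product rule: multiply posteriors across classifiers
    elif rule == "sum":
        scores = np.sum(p, axis=0)    # sum rule: add posteriors across classifiers
    else:
        raise ValueError("unknown rule")
    return scores.argmax(axis=1)      # predicted class per sample

# Toy example: three classifiers, two samples, two classes.
posts = [[[0.7, 0.3], [0.4, 0.6]],
         [[0.6, 0.4], [0.2, 0.8]],
         [[0.9, 0.1], [0.55, 0.45]]]
print(combine_posteriors(posts, "product"))   # -> [0 1]
print(combine_posteriors(posts, "sum"))       # -> [0 1]
```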

Apart from these, GA is used for forming classifier ensembles [2], [5], [9], [10], and classifiers are also fitted into a context-aware framework for the task of ensembling [2], [9]. In several cases, a method capable of integrating the advantages of existing classifiers is used to combine them. Another approach is that, after classification has been performed by some classifiers, an additional classifier, either of the same type or of a different kind, is used for the final classification [3]. In certain cases a new ensemble method is designed, a new ensemble algorithm is proposed, or modifications are made to existing ensemble methods and algorithms rather than using the existing methods as they are [12], [16], [24].

3.4. Datasets for Classifier Ensemble

Datasets that are used for an individual classifier can serve as datasets for an ensemble too. Like single classifiers, ensembles are used in almost all areas, such as machine learning, image processing, bioinformatics, geology, chemistry, medicine, robotics, expert and decision making systems, diagnosis systems and remote sensing. According to the type of application domain, datasets are taken from various available standard databases. The UCI datasets [12], [14], [16], [17], the ELENA database [5], [8] and E-FERET, E-YALE and E-INHA [2], [9] are some of the benchmark datasets used for experiments in most ensemble works. For remote sensing and land cover classification, satellite images are used. Some researchers collect datasets of their own for their works.

3.5. Evaluation Methods

The performance of a classifier is a compound characteristic, whose most important component is the classification accuracy [1]. Some of the standard measures that are often used for evaluating the performance of classifiers and ensembles are cross validation (with a confusion matrix as the result), Kappa statistics, McNemar's test, Cochran's Q test, the F-test, the OAO (One-Against-One) approach, the OAA (One-Against-All) approach, ROC (Receiver Operating Characteristic), AUC (Area Under the ROC Curve), FAR, FRR and the multinomial selection procedure for comparing classifiers [9], [10], [11], [14], [16], [17], [19], [24]. Cross-validation is the most popular method for estimating classification accuracy. Some variants of cross-validation, obtained by varying the training and testing data, are the K-Hold-Out Paired t-Test, the K-Fold Cross-Validation Paired t-Test and Dietterich's 5x2-Fold Cross-Validation Paired t-Test (5x2cv). A small sketch of estimating accuracy by k-fold cross-validation is given below.
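The sketch below estimates classification accuracy with 10-fold cross-validation, the most common of the evaluation methods listed above. It assumes scikit-learn; the SVM classifier and Iris dataset are placeholders only.

```python
# Sketch of estimating classification accuracy with 10-fold cross-validation,
# assuming scikit-learn; the SVM classifier and Iris dataset are placeholders.
from sklearn.datasets import load_iris
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)
scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=cv)   # one accuracy value per fold
print("Mean 10-fold CV accuracy: %.3f (+/- %.3f)" % (scores.mean(), scores.std()))
```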

4. Results and Discussion

Most of the works discussed here are carried out in order to enhance the performance of multiple classifier ensembles and to simplify system design. Different classifiers are combined using a number of different combination methods and have been experimented with on standard datasets. It is seen from [2], [3], [5], [6], [7], [8], [10], [12], [14], [16], [17], [19] and [24] that the newly designed methods are highly robust and show better classification accuracy than the existing ensemble methods and the individual constituent classifiers. Most of the new methods perform better than individual classifiers and the ensembles Bagging, Boosting, Rotation Forest and Random Forest. In addition, the running time is improved in [14], and the training time of the ensemble is reduced by 82% without any performance degradation in [18]. The new methods designed using an evolutionary algorithm in a context-aware framework have resulted in ensembles with the highest recognition rates and reduced error rates [2], [9]. The following can be inferred from the framework:

- Feature selection and diversity of classifiers play an important role in selecting the classifiers for an ensemble.
- 75% of the ensemble works use cross-validation for evaluating the performance of the classifier ensemble.
- Bayesian decision rules, bagging, boosting, stacking and voting are the methods most commonly used for the combination of classifiers.
- UCI datasets are used in most of the applications.
- 80% of the works address two-class classification problems.
- WEKA and MATLAB are the tools used for the implementation of most of the classifier works.
- 100% classification accuracy has not been achieved in any ensemble work.

5. Conclusion and Future Work

A classifier ensemble is a very successful technique in which the outputs of a set of separately trained classifiers are combined to form one unified prediction. First, it greatly improves the generalization performance of a classification system. Second, it can be viewed as an effective approach to classification because of its validity and its variety of potential applications. Although classifier ensembles have been used widely, the key problem for researchers is to effectively design individual classifiers that are highly accurate yet diverse, so that the ensemble is also highly accurate. We have given a framework that consolidates the types of existing ensemble methods. It uses classifier selection features, types of classifiers and combination methods for comparison. Any new work can easily fit into our framework, which will be useful to researchers for analysing existing work.

As future work, ensembles can be pushed towards 100% classification accuracy and applied to many unexplored application domains. Most of the works on binary classification can be extended to multi-class classification.

Table I: Comparative Framework of Works on Classifier Ensemble

[7]
On the Basis of: A-priori knowledge
Classifiers Used: 1. Region Based classifier 2. Contour Based classifier 3. Enhanced Loci 4. Histogram-based 5. Crossing based
Method of Ensembling: 1. Majority Voting (MV) 2. Dempster-Shafer (DS) 3. Behavioral Knowledge Space (BKS)
Dataset: Hand written numerals
Sample Size: *
Evaluated By: Comparing the classification results with those of the constituent classifiers
Results: 1. Classification accuracy of 90% 2. Better accuracy than the constituent classifiers

[12]
On the Basis of: Diversity of classifiers
Classifiers Used: 1. kNN (k=1) 2. kNN (k=3) 3. SVM (r=1) 4. Anorm 5. KDE
Method of Ensembling: AdaBoosting
Dataset: 4 UCI datasets: 1. Pima 2. Spam 3. Haberman 4. Horse-colic
Sample Size: Pima: 764 patterns, 8 features, 2 classes; Spam: 4016 patterns, 54 features, 2 classes; Horse-colic: 2 classes
Evaluated By: Comparing the classification results with those of the constituent classifiers
Results: Better performance of the new method than the individual classifiers of the ensemble

[8]
On the Basis of: Diversity of classifiers
Classifiers Used: 3 different classifiers diversified by boosting
Method of Ensembling: 1. Boosting 2. Stacked Generalization
Dataset: SATIMAGE dataset from the ELENA database
Sample Size: 6435 pixels, 36 attributes, 6 classes
Evaluated By: Comparing the classification results with those of the constituent classifiers
Results: 1. Boosting generates more diverse classifiers than cross validation 2. Highly robust compared to original boosting and stacking

[11]
On the Basis of: Diversity of classifiers
Classifiers Used: 1. Minimum Distance classifier 2. Maximum Likelihood classifier 3. Mahalanobis classifier 4. K Nearest Neighbor classifier
Method of Ensembling: Five Bayesian decision rules: 1. product rule 2. sum rule 3. max rule 4. min rule 5. median rule
Dataset: Remote sensing image - SPOT IV satellite image
Sample Size: 6 land cover classes
Evaluated By: 1. Overall accuracy 2. Kappa statistics 3. McNemar's test 4. Cochran's Q test 5. F-test
Results: 1. Diversity is not always beneficial 2. Increasing the number of base classifiers in the ensemble will not increase the classification accuracy

[14]
On the Basis of: Diversity of classifiers
Classifiers Used: SVM, KMP (Kernel Matching Pursuit)
Method of Ensembling: Bagging and majority voting
Dataset: I. UCI datasets: 1. Waveform 2. Shuttle 3. Sat; II. Image recognition: 6 plane class images
Sample Size: 614 sheets, 128 x 128 pixels
Evaluated By: 1. OAO approach 2. Comparing the classification results with those of the constituent classifiers
Results: 1. A new ensemble of SVM and KMP is designed 2. High classification accuracy of SVM 3. Quick running time of KMP

[19]
On the Basis of: 1. Diversity of classifiers 2. Best ensemble proposal of KDD Cup 2009
Classifiers Used: 1. Maximum Entropy Model 2. Heterogeneous base learner 3. Naïve Bayesian
Method of Ensembling: L1-regularized Maximum Entropy Model; Boosting; MODL-criterion Selective Naïve Bayesian; post-processing the results of the 3 methods with SVM
Dataset: Customer relationship management dataset ORANGE (a French telecom company's dataset), 2 versions, 3 tasks (Churn, Appetency, Up-selling)
Sample Size: Larger version: 15,000 feature variables, 50,000 examples; smaller version: 230 features, 50,000 examples
Evaluated By: 1. Cross validation 2. Overall accuracy
Results: 1. Good classification accuracy 2. Won 3rd place in the ensemble proposal of KDD Cup 2009

[5]
On the Basis of: 1. Distribution characteristics 2. Diversity
Classifiers Used: Classifiers with diversity
Method of Ensembling: 1. Kernel clustering 2. New KPCM algorithm
Dataset: Phoneme dataset from the ELENA database
Sample Size: 2 classes, 5000 samples, 5 features
Evaluated By: In comparison with bagging applied to the same dataset
Results: Better performance than bagging and any other constituent ensemble classifier

[18]
On the Basis of: 1. Feature subspaces 2. Diversity
Classifiers Used: Cost-sensitive SVCs
Method of Ensembling: 1. Random Subspace method 2. Bagging
Dataset: Hidden signal detection - hidden signal dataset
Sample Size: Training set: 7931 positive, 7869 negative; test set: 9426 positive, 179,528 negative
Evaluated By: Comparison with conventional SVCs in terms of detection rate and cost expectations
Results: 1. SVC parameter optimizations reduced by 89% 2. Reduction in overall training time by 82% without performance degradation

[10]
On the Basis of: Feature selection
Classifiers Used: 1. Fisher classifier 2. Binary Decision Tree 3. Nearest Mean classifier 4. SVM 5. Nearest Neighbor (1-NN)
Method of Ensembling: Ensemble feature selection based on GA
Dataset: 1. Colon cancer data 2. Hepatocellular carcinoma data 3. High-grade glioma dataset
Sample Size: Tr. 40 / Tst. 22; Tr. 33 / Tst. 27; Tr. 21 / Tst. 29
Evaluated By: 1. 10-fold cross validation 2. Assigning weights
Results: Better prediction accuracy

[16]
On the Basis of: Feature selection
Classifiers Used: n SVMs
Method of Ensembling: Product rule
Dataset: 12 UCI benchmark datasets
Sample Size: *
Evaluated By: 4-fold cross-validation
Results: 1. Simplified datasets 2. Reduced time complexity 3. A new FSCE algorithm is proposed 4. Higher classification accuracy

[9]
On the Basis of: 1. Evolutionary fusion 2. Context awareness
Classifiers Used: Best classifiers in each ensemble obtained by GA
Method of Ensembling: 1. K-means algorithm 2. GA
Dataset: Face recognition - FERET dataset
Sample Size: 6 contexts
Evaluated By: 1. ROC 2. FAR 3. FRR
Results: 1. Reduced error rates 2. Good recognition rate

[2]
On the Basis of: 1. Evolutionary fusion 2. Context awareness
Classifiers Used: n different classifiers trained with different context conditions
Method of Ensembling: Embedding the classifier ensemble into a context-aware framework
Dataset: 4 face recognition datasets: 1. E-FERET 2. E-YALE 3. E-INHA 4. IT (all 4 datasets further divided into 3 separate datasets I, II, III)
Sample Size: Nine datasets, each containing 10,000 face images under 100 kinds of illumination
Evaluated By: Creating a similar offline system (without result evaluator) trained and tested on the same dataset
Results: 1. Highest recognition rate compared with the individual classifiers 2. Most stable performance

[17]
On the Basis of: *
Classifiers Used: 1. Naïve Bayesian 2. K-Nearest Neighbor
Method of Ensembling: 1. Static Majority Voting (SMV) 2. Weighted Majority Voting (WMV) 3. Dynamic WMV (DWMV)
Dataset: UCI datasets: 1. Tic-Tac-Toe endgame 2. Chess endgame
Sample Size: 958 instances, 9 features, 2 classes; 3196 instances, 36 features, 2 classes
Evaluated By: Cross validation
Results: 1. Better classification accuracy than the individual classifiers 2. DWMV has higher classification accuracy

[6]
On the Basis of: *
Classifiers Used: 1. Decision Tree 2. SVM with 4 kernels: linear, polynomial, radial basis, sigmoid
Method of Ensembling: 1. Bagging 2. Double bagging
Dataset: 1. Condition diagnosis of electric power apparatus - GIS dataset 2. 15 UCI benchmark datasets
Sample Size: Each dataset has a different number of objects, classes and features
Evaluated By: In comparison with other ensembles' performance on the same data
Results: Better performance than popular ensemble methods like bagging, boosting, Random Forest and Rotation Forest

[3]
On the Basis of: *
Classifiers Used: n SVMs
Method of Ensembling: An additional SVM is used to fuse the outputs of all SVMs
Dataset: Hyperspectral AVIRIS data
Sample Size: 220 data channels, 145 x 145 pixels, 6 land cover classes
Evaluated By: 1. Overall classification results compared with an individual classifier 2. Simple voting
Results: Better accuracy than a single constituent classifier in the ensemble

[24]
On the Basis of: *
Classifiers Used: Any classifier capable of classifying CSP
Method of Ensembling: 1. Majority voting 2. Rejecting the outliers
Dataset: BCI EEG signals
Sample Size: *
Evaluated By: 20 cross validations
Results: 1. A new ensemble, CSPE, is designed 2. Better performance than LDA, RLDA and SVM 3. Average accuracy of 83.02% in BCI (comparatively a good accuracy)

[25]
On the Basis of: *
Classifiers Used: 1. Rough Set Theory 2. Decision Tree 3. SVM
Method of Ensembling: Integrating the advantages of RST, DT and SVM
Dataset: UCI Teaching Assistant Evaluation (TAE) dataset
Sample Size: 151 instances, 6 features, 2 classes
Evaluated By: 1. 102 (68%) training data / 49 (32%) test data 2. 6-fold cross validation
Results: 1. Improved class prediction with acceptable accuracy 2. Enhanced rule generation

Note:

* - Undefined in the literature

References

[1] Ludmila I. Kuncheva, Combining Pattern Classifiers: Methods and Algorithms, John Wiley & Sons, 2004.
[2] Zhan Yu, Mi Young Nam and Phill Kyu Rhee, "Online Evolutionary Context-Aware Classifier Ensemble Framework for Object Recognition", Proceedings of the 2009 IEEE International Conference on Systems, Man, and Cybernetics, San Antonio, TX, USA, pp. 3428-3433, October 2009.
[3] Jón Atli Benediktsson, Xavier Ceamanos Garcia, Björn Waske, Jocelyn Chanussot, Johannes R. Sveinsson and Mathieu Fauvel, "Ensemble Methods for Classification of Hyperspectral Data", IEEE International Geoscience and Remote Sensing Symposium (IGARSS 2008), Vol. 1, pp. I-62 - I-65, 2008.
[4] Gao Daqi, Zhu Shangming, Chen Wei and Li Yongli, "A Classifier Ensemble Model and Its Applications", IEEE International Joint Conference on Neural Networks, Vol. 2, pp. 1172-1177, 2005.
[5] Cheng Xian-yi and Guo Hong-ling, "The Technology of Selective Multiple Classifiers Ensemble Based on Kernel Clustering", Second International Symposium on Intelligent Information Technology Application, Vol. 2, pp. 146-150, 2008.
[6] Faisal M. Zaman and Hideo Hirose, "Double SVMBagging: A New Double Bagging with Support Vector Machine", Engineering Letters, 17:2, EL_17_2_09, May 2009.
[7] L.R.B. Schomaker and L.G. Vuurpijl (Eds.), "Classifier Combination: The Role of A-Priori Knowledge", Proceedings of the Seventh International Workshop on Frontiers in Handwriting Recognition, Amsterdam, September 11-13, 2000, pp. 143-152, ISBN 9076942-01-3, Nijmegen: International Unipen Foundation.
[8] Nima Hatami and Reza Ebrahimpour, "Combining Multiple Classifiers: Diversify with Boosting and Combining by Stacking", IJCSNS International Journal of Computer Science and Network Security, Vol. 7, No. 1, January 2007.
[9] Suman Sedai and Phill Kyu Rhee, "Evolutionary Classifier Fusion for Optimizing Face Recognition", Frontiers in the Convergence of Bioscience and Information Technologies, pp. 728-733, 2007.
[10] Kun-Hong Liu, De-Shuang Huang and Jun Zhang, "Microarray Data Prediction by Evolutionary Classifier Ensemble System", IEEE Congress on Evolutionary Computation, pp. 634-637, 2007.
[11] Man Sing Wong and Wai Yeung Yan, "Investigation of Diversity and Accuracy in Ensemble of Classifiers Using Bayesian Decision Rules", International Workshop on Earth Observation and Remote Sensing Applications, pp. 1-6, 2008.
[12] Abbas Golestani, Kushan Ahmadian, Ali Amiri and MohammadReza Jahed-Motlagh, "A Novel Adaptive-Boost-Based Strategy for Combining Classifiers Using Diversity Concept", 6th IEEE/ACIS International Conference on Computer and Information Science (ICIS 2007), pp. 128-134, 2007.
[14] Gou Shuiping, Mao Shasha and Licheng Jiao, "Isomerous Multiple Classifier Ensemble Method with SVM and KMP", International Conference on Audio, Language and Image Processing, pp. 1234-1239, 2008.
[15] D.M.J. Tax and R.P.W. Duin, "Growing a Multi-Class Classifier with a Reject Option", Pattern Recognition Letters, Vol. 29, pp. 1565-1570, 2008.
[16] Bing Chen and Hua-Xiang Zhang, "An Approach of Multiple Classifiers Ensemble Based on Feature Selection", Fifth International Conference on Fuzzy Systems and Knowledge Discovery, Vol. 2, pp. 390-394, 2008.
[17] S. Kanmani, Ch. P. C. Rao, M. Venkateswaralu and M. Mirshad, "Ensemble Methods for Data Mining", IE(I) Journal-CP, Vol. 90, pp. 15-19, May 2009.
[18] Barry Y. Chen, Tracy D. Lemmond and William G. Hanley, "Building Ultra-Low False Alarm Rate Support Vector Classifier Ensembles Using Random Subspaces", IEEE Symposium on Computational Intelligence and Data Mining, pp. 1-8, 2009.
[19] Hung-Yi Lo, Kai-Wei Chang, Shang-Tse Chen, Tsung-Hsien Chiang, Chun-Sung Ferng, Cho-Jui Hsieh, Yi-Kuang Ko, Tsung-Ting Kuo, Hung-Che Lai, Ken-Yi Lin, Chia-Hsuan Wang, Hsiang-Fu Yu, Chih-Jen Lin, Hsuan-Tien Lin and Shou-De Lin, "An Ensemble of Three Classifiers for KDD Cup 2009: Expanded Linear Model, Heterogeneous Boosting, and Selective Naive Bayes", JMLR: Workshop and Conference Proceedings (KDD Cup 2009), pp. 57-64, 2009.
[20] L. Breiman, "Bagging Predictors", Machine Learning, Vol. 24, No. 2, pp. 123-140, August 1996.
[21] Y. Freund and R. Schapire, "Experiments with a New Boosting Algorithm", Proceedings of the 13th International Conference on Machine Learning, Bari, Italy, pp. 148-156, July 1996.
[22] T. K. Ho, "The Random Subspace Method for Constructing Decision Forests", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 20, No. 8, pp. 832-844, August 1998.
[23] L. I. Kuncheva and C. J. Whitaker, "Ten Measures of Diversity in Classifier Ensembles: Limits for Two Classifiers", DERA/IEE Workshop on Intelligent Sensor Processing, Birmingham, U.K., pp. 10/1-10/6, February 2001.
[24] Xu Lei, Ping Yang, Peng Xu, Tie-Jun Liu and De-Zhong Yao, "Common Spatial Pattern Ensemble Classifier and Its Application in Brain-Computer Interface", Journal of Electronic Science and Technology of China, Vol. 7, No. 1, March 2009.
[25] Li-Fei Chen, "Improve Class Prediction Performance Using a Hybrid Data Mining Approach", International Conference on Machine Learning and Cybernetics, Vol. 1, pp. 210-214, 2009.
