
International Journal of Applied Engineering Research, ISSN 0973-4562, Vol. 10, No. 55 (2015)
Research India Publications; http://www.ripublication.com/ijaer.htm

Ant Colony Optimization Based Attribute Reduction for Disease Diagnostic System

D. Asir Antony Gnana Singh(1), P. Surenther(2), E. Jebamalar Leavline(3)
(1,2) Department of Computer Science and Engineering, Bharathidasan Institute of Technology, Anna University, Tiruchirappalli 620 024, India.
(3) Department of Electronics and Communication Engineering, Bharathidasan Institute of Technology, Anna University, Tiruchirappalli, India.
(1) E-mail: asirantony@gmail.com

Abstract
Attribute reduction plays a significant role in data preprocessing to improve the performance of machine learning algorithms. Attribute reduction is the process of selecting the best and most relevant attributes from high-dimensional data by eliminating redundant and irrelevant attributes, thereby improving the predictive accuracy of the machine learning algorithm. This paper proposes an ant colony optimization (ACO)-based attribute reduction technique for improving the performance of machine learning algorithms. The performance of the proposed algorithm is tested on a benchmark dataset. It is observed that the proposed algorithm yields better predictive accuracy on the training dataset compared to other approaches.
Keywords: Attribute reduction, ant colony optimization, disease diagnostic system
1. Introduction
Attribute reduction is the process of reducing dimensionality by removing irrelevant and redundant attributes (variables) from a given dataset before constructing a classification or predictive model. Real-world datasets contain irrelevant and redundant features: redundant attributes carry similar information, whereas irrelevant attributes do not carry the information needed to perform the prediction or classification. Feature selection techniques provide four main benefits when constructing predictive models: improved model interpretability, shorter training times, enhanced generalization, and reduced runtime.
Feature selection has been an active field of research in pattern recognition, machine learning, statistics, and data mining. The main objective of feature selection is to choose a subset of input variables, eliminating features that are irrelevant or carry no predictive information, to improve the performance of the classification algorithm.
Data mining is the process of using automated data analysis techniques to uncover previously hidden relationships among data items. Data mining often involves the examination of data stored in a database. There are three major data mining techniques, namely regression, classification, and clustering. Data mining attempts to discover patterns in large datasets. It utilizes techniques from artificial intelligence, machine learning, information theory, and database systems. The overall aim of data mining is to analyze the information in a dataset and transform it into a reasonable structure for further use in application development. Data mining is a part of knowledge discovery. Knowledge discovery contains two stages: data preprocessing and data postprocessing. Data preprocessing includes data cleaning, data transformation, etc. Data postprocessing involves building the classification or predictive model to perform the prediction or classification on the data.
Numerous data mining techniques have been developed and used recently, including association and classification. Association is one of the best known data mining techniques: a pattern is discovered based on the relationship of a particular item with other items in the same transaction. For example, the association technique is used in market basket analysis to identify which products customers frequently buy together. Based on this data, businesses can run corresponding marketing campaigns to sell more products and increase profit.
Classification is a fundamental data mining technique based on machine learning. Classification is used to categorize new data into one of a predefined set of classes or groups. Classification makes use of mathematical techniques such as decision trees, neural networks, linear programming, and statistics. Classification can be applied in many settings; for example, given all past records of staff who left a company, one can predict which current staff members are likely to leave in the future. In this case, the staff records are divided into two groups, "leave" and "stay", and the data mining software classifies the employees into one of these groups.
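The staff "leave"/"stay" example can be sketched with the simplest possible classifier, a one-level decision tree (a decision stump). The records and the attributes below are hypothetical, chosen only to illustrate the train-then-predict workflow described above.

```python
# Hypothetical staff records: (years_at_company, overtime_hours_per_week) -> label
train = [
    ((1, 20), "leave"), ((2, 18), "leave"), ((1, 25), "leave"),
    ((8, 5), "stay"), ((10, 3), "stay"), ((7, 6), "stay"),
]

def train_stump(data):
    """Pick the threshold on years_at_company that best separates
    the two classes (a one-level decision tree)."""
    best = None
    for t in sorted({x[0] for x, _ in data}):
        # Count records classified correctly by the rule "<= t means leave".
        correct = sum((y == "leave") == (x[0] <= t) for x, y in data)
        if best is None or correct > best[1]:
            best = (t, correct)
    return best[0]

threshold = train_stump(train)

def predict(record):
    return "leave" if record[0] <= threshold else "stay"

print(predict((2, 15)))  # short-tenure employee -> "leave"
print(predict((9, 4)))   # long-tenure employee  -> "stay"
```

Real systems replace the stump with full decision trees (such as J48, used later in this paper) or probabilistic models such as Naive Bayes.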

2. Related works
Classification is the method of assigning predefined class labels to unseen or test data. For this purpose, a set of labeled records is used to train a classifier, which is then used for grouping unseen data. The classification method is also known as supervised learning.

Sivagaminathan and Ramakrishnan proposed a hybrid approach combining neural networks and ACO for attribute selection. Twelve diagnosis datasets containing 21 to 34 attributes were used, and the approach gives 77% average accuracy on the thyroid disease dataset [1]. Parag Deoskar et al. presented an efficient support-based and ant colony-based optimization technique for lung cancer classification. They described an algorithm for the detection of lung cancer that performs three types of operations: first, it accepts the dataset and creates patterns; second, it identifies the relevant data from the patterns; and finally, it evaluates the effectiveness of the algorithm. For this experiment, the lung cancer dataset was collected from the UCI repository. Early detection of lung cancer helps to save human lives, and the detection accuracy produced by their technique is more efficient compared to other techniques [2].
Huang et al. used the classification precision and attribute weights of the constructed SVM classifier to design the pheromone update in ACO-based attribute selection. In this study, datasets from UCI and simulated datasets were used, and 94.65% average accuracy was obtained [3]. Chen et al. presented a rough set approach for attribute selection based on ACO. Mutual information is used as heuristic information. Attribute selection starts with the attribute core rather than an arbitrary attribute, which reduces the complete graph to a smaller form; attribute reduction is finished after the core is detected. They studied datasets from UCI with the C4.5 classifier and achieved 98.2% average classification accuracy [4].
Priyanka Dhasal et al. proposed image classification by feature selection; they used the ant colony optimization algorithm for feature selection and a support vector machine (SVM) for binary classification. They used the one-against-one (OAO) technique for pairwise comparisons between classes, while one-against-all (OAA) compares a class with all other classes. However, the SVM-based technique increases the complexity of preprocessing, and it is difficult to reduce the noise [5]. Zar Chi Su Su Hlaing et al. proposed a method for solving the travelling salesman problem (TSP) using an improved ant colony optimization algorithm. TSP is a combinatorial optimization problem with wide application, and ACO is used to find an optimized solution. They showed that the improved ACO is more effective than the ACS algorithm [6]. Maryam Bahojb et al. proposed feature selection based on ACO and a genetic algorithm (GA) for Persian font recognition. A support vector machine is used for classification based on the combination of GA and ACO, and the recognition accuracy is increased. One hundred iterations is taken as the maximum, the population size for the GA is 50, the mutation rate is 0.02, and the method is implemented in MATLAB [7].
Ling Chen et al. presented image feature selection based on ACO techniques. Feature selection is an essential task for image classification and recognition. The ACO algorithm achieves better classification with fewer features compared to other algorithms; five-fold cross-validation is used, and the selected features are taken as heuristic information [8]. Parthi Dey et al. proposed ACO-based routing for mobile ad-hoc networks towards improved quality of service (QoS) in MANETs. This method solves major issues in the routing of nodes. ACO is used as one of the best techniques in MANET routing protocols to avoid major complexity in mobile networks and improve QoS. The algorithm consists of reactive and proactive components, with ant agents used to establish connections between intermediate nodes [9]. Hesham Arafat et al. proposed rough set and ACO-based feature selection. Rough set theory offers a heuristic function to measure knowledge reduction, concept approximation, and object classification, and the combination of both algorithms gives a more effective approach; the feature extraction task is essential for a wide range of applications [10].
Akarsu and Karahoca [11] used ACO for clustering and feature selection for breast cancer classification. To remove irrelevant or redundant features from the dataset, a sequential backward search technique is applied. The attribute selection and clustering algorithms are combined in a wrapper method. The outcome showed that the accuracy of the FS-ACO clustering approach is better than that of the filter approaches.
3. System architecture
The ACO algorithm has recently been used in various kinds of data mining problems such as clustering and classification. Figure 1 shows the architecture of feature selection for classification using ACO.
Diseases data set
A diseases dataset consists of several factors. Most commonly, a dataset corresponds to the contents of a single database table, or a single statistical data matrix, where every column of the table represents a particular variable and each row corresponds to a given member of the dataset in question. The dataset lists values for each of the variables, such as the height and weight of an object, for each member of the dataset.
Attribute reduction using ACO techniques
Ant colony optimization is a probabilistic method for solving computational problems that can be used to find good paths through graphs. The algorithm is a member of the ant colony family. Ant colony optimization algorithms have been applied to many combinatorial optimization problems, ranging from quadratic assignment to protein folding and vehicle routing, and many derived methods have been adapted to dynamic problems in real variables, stochastic problems, multi-target problems, and parallel implementations.
Selecting the best features
Feature subset selection is the process of selecting, from a much larger set, a subset of features that characterizes the full dataset. There may be thousands of features present in a real-world dataset, and each feature may carry only a little bit of

information; it would be very difficult to treat all the features. Therefore, it is essential to extract or select the important features from the dataset. The best features selected are used for classification, and the efficiency and runtime are compared.

Fig. 1: Architecture of feature selection for classification using ACO. (Pipeline stages: Diseases dataset -> Data preprocessing -> Attribute reduction using ACO techniques -> Select best attributes -> Training dataset / Test dataset -> Predictive model building -> Accuracy on disease prediction.)

Ant Colony Optimization (ACO) algorithm
ACO is used for designing algorithms for complex combinatorial optimization problems. The main idea of the proposed approach is to provide a fully connected graph that acts as a search space for the ants to move in; the nodes are the features of a particular dataset and the links represent the connections between them. Each ant constructs a candidate solution by traversing a path of nodes and links; this path is the selected subset of features. After an ant has completed its tour, the quality of the traversed path (the selected features) is evaluated by running the ID3 algorithm on the selected features and checking the accuracy of the learned model. We perform a ten-fold cross-validation procedure to check the accuracy of the classifier. The average accuracy after performing ten-fold cross-validation is the fitness of that feature subset and is used to update the pheromone values. This procedure continues until a stopping criterion is met. The feature set that yields the best accuracy is returned as the solution [12, 13].

Pheromone initialization
The existence of pheromone values on the edges is the basic component of the ACO. In these experiments, the pheromone values on all the edges are initialized at the start of the algorithm with the same small amount of pheromone, so that no attribute is preferred over other attributes by the first ant. The initial pheromone is calculated as in Equation (1):

    τ_{i,j}(t=0) = 1 / Σ_{i=1}^{a} b_i        (1)

where a is the total number of attributes and b_i is the number of values in the domain of attribute i.

Pheromone update rule
After each ant alters the terms of a rule according to the MaxChange constraint, pheromone updating is performed. We have defined a new function to update the pheromone: whenever an ant has altered the terms of a rule Ra_j, the quality of rule Ra_j is estimated, and if the quality of rule Ra_j has improved, the pheromone of this rule is increased according to the amount of quality improvement. In these experiments, the new update strategy is employed in each iteration, and the pheromone helps to improve the quality of the rules. Pheromone updating is carried out according to Equations (2) and (3):

    ΔQ = Q_after-modify - Q_before-modify        (2)

    τ_{i,j}(t+1) = τ_{i,j}(t) + τ_{i,j}(t) · (ΔQ · c)        (3)

where ΔQ is the variation in the quality of the rule before and after the modification, and c regulates the influence of quality. It is also necessary to decrease the pheromone of terms that have not contributed to the construction of rules. For this purpose, pheromone evaporation is simulated. To simulate the evaporation phenomenon of a real ant colony, the amount of pheromone associated with each term τ_{i,j} that does not occur in the constructed rule must be decreased. The pheromone of unused terms is decreased by dividing the value of each τ_{i,j} by the summation of all τ_{i,j} [14].
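The pheromone mechanics described above can be sketched as a self-contained search loop. This is an illustrative reconstruction rather than the paper's exact algorithm: the fitness function below is a stand-in for the ten-fold cross-validated ID3 accuracy, and the subset size, evaporation rate, and the constant c are assumed values.

```python
import random

random.seed(42)

N_FEATURES, N_ANTS, N_ITER = 7, 10, 30
SUBSET_SIZE = 3
EVAPORATION = 0.1   # assumed rate; the paper divides unused terms instead
C = 1.0             # 'c' from Eq. (3): regulates the influence of quality

def fitness(subset):
    # Stand-in fitness: pretend features 0, 2, 4 are the informative ones.
    # In the paper this would be the cross-validated classifier accuracy.
    return len(set(subset) & {0, 2, 4}) / SUBSET_SIZE

# Eq. (1)-style initialization: every edge starts with the same small value.
pheromone = [1.0 / N_FEATURES] * N_FEATURES

best_subset, best_fit = None, -1.0
for _ in range(N_ITER):
    for _ in range(N_ANTS):
        # Each ant picks features with probability proportional to pheromone.
        subset, candidates = [], list(range(N_FEATURES))
        for _ in range(SUBSET_SIZE):
            weights = [pheromone[f] for f in candidates]
            f = random.choices(candidates, weights=weights)[0]
            subset.append(f)
            candidates.remove(f)
        q = fitness(subset)
        if q > best_fit:
            # Eq. (2)/(3)-style update: reinforce the improving solution
            # in proportion to the quality improvement.
            dq = q - max(best_fit, 0.0)
            for f in subset:
                pheromone[f] += pheromone[f] * dq * C
            best_subset, best_fit = sorted(subset), q
    # Evaporation: decay pheromone on all edges each iteration.
    pheromone = [p * (1 - EVAPORATION) for p in pheromone]

print(best_subset, best_fit)  # typically converges toward the informative features
```

The reinforcement biases later ants toward edges that appeared in good subsets, which is the core idea the equations above formalize.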
Heuristic function
The heuristic function indicates the quality of an attribute. Its value greatly influences an ant's decision to move from one node to another, and a good heuristic function is very helpful in solving problems with ACO. We have used the information gain of each feature as the heuristic function. We compute the information gain for each attribute in the dataset. When an ant wants to make a decision about the next node, the corresponding attribute's information gain is used as the heuristic value and is used for the probability calculation [15].
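The information gain heuristic can be computed directly from class-label frequencies. The sketch below uses a tiny made-up binary example; it shows the standard IG(class; feature) = H(class) - H(class | feature) computation rather than any Weka-specific implementation.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy H of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """IG(class; feature) = H(class) - H(class | feature)."""
    n = len(labels)
    conditional = 0.0
    for v in set(feature_values):
        sub = [y for x, y in zip(feature_values, labels) if x == v]
        conditional += len(sub) / n * entropy(sub)
    return entropy(labels) - conditional

# Toy diagnosis-style data: a binary test result vs. the class label.
feature = ["pos", "pos", "neg", "neg"]
labels  = ["sick", "sick", "well", "well"]
print(information_gain(feature, labels))  # -> 1.0 (feature fully determines class)
```

A feature that perfectly splits the classes gets the maximum gain, so ants are most likely to include it in their paths.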
4. Implementation
To analyze the performance of the proposed system, a diseases dataset, namely the diabetes dataset, is taken [16]. Initially, the heuristic function is computed between each attribute and the class using information gain. Then incremental search is adopted for subset generation, and the classification accuracy of each subset of attributes is calculated using the Naive Bayes and J48 classifiers in the Weka environment [17].
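The incremental search described above can be sketched as follows: grow the attribute subset one ranked attribute at a time, score each subset, and keep the best. Here the scoring function is a stand-in that simply returns the Naive Bayes accuracies reported in Table 1; in practice, evaluate(subset) would train and test a classifier on those attributes.

```python
# Naive Bayes accuracies per subset size, taken from Table 1 of this paper.
nb_accuracy = {1: 75.4, 2: 77.0, 3: 76.6, 4: 76.5, 5: 76.3, 6: 76.8, 7: 76.4}

def incremental_search(ranked_attributes, evaluate):
    """Grow subsets {a1}, {a1,a2}, ... in ranked order; return the best one."""
    best_subset, best_acc = None, -1.0
    subset = []
    for attr in ranked_attributes:
        subset.append(attr)
        acc = evaluate(tuple(subset))
        if acc > best_acc:
            best_subset, best_acc = tuple(subset), acc
    return best_subset, best_acc

# Attributes are assumed already ranked 1..7 by information gain.
subset, acc = incremental_search(range(1, 8), lambda s: nb_accuracy[len(s)])
print(subset, acc)  # -> (1, 2) 77.0
```

On the reported figures, the two-attribute subset gives the best Naive Bayes accuracy, which matches the peak visible in Table 1.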
Table 1. Classification accuracy with heuristic function and incremental search using Naive Bayes and J48.

No. of attributes | Subset (heuristic function calculated between class attribute and other attributes, incremental search) | Accuracy of Naive Bayes (NB) | Accuracy of J48
1 | {1}             | 75.4 | 73.04
2 | {1,2}           | 77.0 | 73.47
3 | {1,2,3}         | 76.6 | 74.60
4 | {1,2,3,4}       | 76.5 | 74.34
5 | {1,2,3,4,5}     | 76.3 | 73.69
6 | {1,2,3,4,5,6}   | 76.8 | 74.21
7 | {1,2,3,4,5,6,7} | 76.4 | 73.82

Fig. 2: Accuracy on J48 classifier.

Table 2. Classification accuracy without heuristic function, with random search, using Naive Bayes and J48.

No. of attributes | Subset (without heuristic function, random search) | Accuracy of Naive Bayes (NB) | Accuracy of J48
1 | {1}             | 65   | 63.8
2 | {1,2}           | 66.6 | 65.6
3 | {1,2,3}         | 73.8 | 74.8
4 | {1,2,3,4}       | 76.3 | 73.8
5 | {1,2,3,4,5}     | 76.3 | 73.8
6 | {1,2,3,4,5,6}   | 76.3 | 74.8
7 | {1,2,3,4,5,6,7} | 76.3 | 74.69

Fig. 3: Accuracy on Naive Bayes classifier.


Table 1 shows the number of attributes, the subsets selected based on the heuristic function with incremental search, and the corresponding accuracies for Naive Bayes and J48. Table 2 shows the number of attributes, the subsets selected without the heuristic function using random search, and the corresponding accuracies for Naive Bayes and J48. The classification accuracies with and without the heuristic function are shown in Figure 2 and Figure 3, respectively.

5. Conclusion

This paper proposed an ant colony optimization (ACO)-based attribute reduction algorithm for improving the performance of supervised learning algorithms, in terms of reducing the runtime and improving the accuracy, for a Disease Diagnostic System (DDS). This research work focused on feature reduction on the training diseases dataset from which the predictive model for the DDS is learnt. The diabetes dataset is used for evaluating the predictive accuracy of the DDS using an information gain (IG)-based heuristic function with incremental search for attribute subset generation and reduction. Results were obtained with the heuristic function using the Naive Bayes and J48 classifiers in terms of classification accuracy. It is observed that the heuristic function produces better results compared to classification without the heuristic function. In future, this work can be extended with other heuristic functions and search strategies for attribute reduction.

References
[1] Sivagaminathan, R. K., and Ramakrishnan, S., 2007, "A hybrid approach for feature subset selection using neural networks and ant colony optimization," Expert Syst. Appl., 33 (1), pp. 49-60.
[2] Parag Deoskar, Divakar Singh, and Anju Singh, "An Efficient Support Based Ant Colony Optimization Technique for Lung Cancer Data," International Journal of Advanced Research in Computer and Communication Engineering, Vol. 2, Issue 9, September 2013.
[3] Huang, C. L., 2009, "ACO-based hybrid classification system with feature subset selection and model parameters optimization," Neurocomputing, 73 (1-3), pp. 438-448.
[4] Chen, Y., Miao, D., and Wang, R., 2010, "A rough set approach to feature selection based on ant colony optimization," Pattern Recogn. Lett., 31 (3), pp. 226-233.
[5] Priyanka Dhasal, Shiv Shakti Shrivastava, Hitesh Gupta, and Parmalik Kumar, "An Optimized Feature Selection for Image Classification Based on SVM-ACO," International Journal of Advanced Computer Research (IJACR), Volume 2, Number 3, Issue 5, September 2012.
[6] Zar Chi Su Su Hlaing and May Aye Khine, "Solving Traveling Salesman Problem by Using Improved Ant Colony Optimization Algorithm," International Journal of Information and Education Technology, Vol. 1, No. 5, December 2011.
[7] Maryam Bahojb Imani, Tahereh Pourhabibi, Mohammad Reza Keyvanpour, and Reza Azmi, "A New Feature Selection Method Based on Ant Colony and Genetic Algorithm on Persian Font Recognition," International Journal of Machine Learning and Computing, Vol. 2, No. 3, June 2012.
[8] Chen, Ling, Bolun Chen, and Yixin Chen, "Image feature selection based on ant colony optimization," AI 2011: Advances in Artificial Intelligence, Springer Berlin Heidelberg, 2011, pp. 580-589.
[9] Suman Banik, Bibhash Roy, Parthi Dey, Nabendu Chaki, and Sugata Sanyal, "QoS Routing using OLSR with Optimization for Flooding," International Journal of Information and Communication Technology Research, ISSN 2223-4985, Vol. 1, No. 4, pp. 164-168, August 2011.
[10] Hesham Arafat, Rasheed M. Elawady, Sherif Barakat, and Nora M. Elrashidy, "Using Rough Set and Ant Colony Optimization in Feature Selection," Volume 2, Issue 1, January-February 2013, ISSN 2278-6856.
[11] Akarsu, E., and Karahoca, A., 2011, "Simultaneous feature selection and ant colony clustering," Procedia Computer Science, 3, pp. 1432-1438.
[12] Aghdam, M. H., Ghasem-Aghaee, N., and Basiri, M. E., 2009, "Text feature selection using ant colony optimization," Expert Syst. Appl., 36 (3), pp. 6843-6853.
[13] Liu, L., Dai, Y., and Gao, J., 2014, "Ant colony optimization algorithm for continuous domains based on position distribution model of ant colony foraging," Scientific World J., Article ID 428539, 9 pages.
[14] Sarac, Esra, and Selma Ayse Ozel, "An Ant Colony Optimization Based Feature Selection for Web Page Classification," The Scientific World Journal, 2014.
[15] Liu, B., Abbass, H. A., and McKay, B., 2002, "Density-based heuristic for rule discovery with ant-miner," Proc. 6th Australia-Japan Joint Workshop on Intelligent and Evolutionary Systems, p. 184.
[16] Blake, C. L., and Merz, C. J., 1996, UCI Repository of Machine Learning Databases, available from http://www.ics.uci.edu/~mlearn/MLRepository.html.
[17] WEKA, http://www.cs.waikato.ac.nz/ml/weka.
