
World Automation Congress © 2010 TSI Press.

A HIGH PERFORMANCE HYBRID METAHEURISTIC FOR TRAVELING SALESMAN PROBLEM

CHUN-WEI TSAI¹, JUI-LE CHEN³, SHIH-PANG TSENG²,³, MING-CHAO CHIANG², AND CHU-SING YANG¹

¹ Department of Electrical Engineering, National Cheng Kung University, Tainan 70101, Taiwan
  cwtsai87@gmail.com; csyang@ee.ncku.edu.tw

² Department of Computer Science and Engineering, National Sun Yat-sen University, Kaohsiung 80424, Taiwan
  tsp@mail.tajen.edu.tw; mcchiang@cse.nsysu.edu.tw

³ Department of Computer Science and Information Engineering, Tajen University, Pingtung 90741, Taiwan
  reler@mail.tajen.edu.tw; tsp@mail.tajen.edu.tw

ABSTRACT. The genetic algorithm (GA) is one of the most widely used metaheuristics for finding approximate solutions to complex problems in a variety of domains. As such, many researchers have focused their attention on enhancing the performance of GA in terms of either speed or quality, but rarely both. This paper presents an efficient hybrid metaheuristic, called High Performance Hybrid Metaheuristic (or HPHM for short), to reconcile these two seemingly conflicting goals. That is, the proposed method not only reduces the running time of GA and its variants but also keeps the loss of quality small. The underlying idea of the proposed algorithm is to leverage the strengths of GA, tabu search, and the notion of pattern reduction. To evaluate the performance of the proposed algorithm, we use it to solve the traveling salesman problem. Our experimental results indicate that the proposed algorithm can significantly enhance the performance of GA and its variants, especially in terms of speed.
Key Words: genetic algorithm, pattern reduction, and traveling salesman problem

1. Introduction
Over the past three decades or so, metaheuristics have probably been the most widely used techniques for finding the true or approximate solutions of complex and large problems [1] in a reasonable time. Researchers working on metaheuristics have developed many efficient algorithms, efficient in terms of either speed or quality but rarely both. Among them are the genetic algorithm (GA) [2, 3, 4, 5, 6, 7], tabu search (TS) [8, 9, 10], simulated annealing (SA) [11, 1], ant colony system (ACS) [12], particle swarm optimization (PSO) [13], the bees algorithm [14], harmony search [15], and others. However, no efficient method has ever been found to solve NP-complete problems [16] in a reasonable time, not even metaheuristics. Even more important, each metaheuristic has its pros and cons. As such, all metaheuristics are important in the sense that they can be used to solve problems in different domains. Moreover, in many situations, a problem for which a single metaheuristic cannot provide an efficient solution may be solved efficiently by a combination of metaheuristics, say, A and B. One of the many possibilities is that A is GA and B is TS. If we combine these two algorithms into a hybrid algorithm, say, C, then the TS part of C might be used to prevent the GA part of C from falling into local optima at early generations of its convergence process. Of course, hybrid algorithms have long been used to enhance the performance of metaheuristics. However, most of them take into account either speed or quality, but rarely both at the same time.

In this paper, we present an efficient hybrid metaheuristic to resolve the dilemma of reducing the running time of GA and its variants [17] while at the same time keeping the loss of quality small. The proposed algorithm is called High Performance Hybrid Metaheuristic (or HPHM for short) and is motivated by the observation that many sub-solutions of GA [17] quickly become more and more similar to each other, or even identical, as the number of iterations grows. For this reason, an alternative search strategy is generally required to prevent the diversity, or search ability, of the population from being reduced too quickly, that is, from being cut down to one or a very small number of distinct chromosomes (solutions). The proposed method is in essence a hybrid metaheuristic that leverages the strengths of three algorithms: GA [2], TS [10], and pattern reduction (PR) [17]. More precisely, the proposed algorithm uses GA to guide the global search, TS to guide the local search (i.e., to enhance the quality), and PR to reduce the computation time of GA and its variants. To evaluate the performance of HPHM, we use it to solve the traveling salesman problem (TSP) and compare the results with those found by GA and its variants. We analyze both the computation time and the quality of the end result because together they make a good indicator of the performance of the algorithms evaluated. Our simulation results indicate that the proposed algorithm can significantly reduce the computation time of GA and its variants while at the same time keeping the loss of quality small, especially for PREGA, which we proposed previously.

The remainder of the paper is organized as follows. Section 2 gives a brief introduction to the problem that is used to evaluate the performance of the proposed algorithm, the relationships between metaheuristics and PR, and the hybrid genetic algorithm. Section 3 first provides a detailed description of the proposed algorithm and then gives a simple example to illustrate how HPHM works. Performance evaluation of the proposed algorithm is presented in Section 4. Conclusions are drawn in Section 5.

2. Related Work
In this section, we give a brief introduction to the TSP that is used to evaluate the performance of the proposed algorithm, the relationships between metaheuristics and PR, and the hybrid genetic algorithm.

2.1. Traveling Salesman Problem


As we all know, many real-world problems are complex, in the sense that no efficient method has ever been found to solve them in a reasonable time. Among them are the combinatorial optimization problem (COP) [16], the scheduling problem, the vehicle routing problem, and the robot arm control problem, just to name a few. The combinatorial optimization problem has attracted the attention of researchers from different backgrounds, including applied mathematics and computer science.


In the area of combinatorial optimization, the TSP [18, 19, 20] has been widely used as a yardstick by which the performance of a new algorithm is measured because the TSP is NP-complete [21, 22, 23]. As such, any efficient solution to the TSP could be used to solve all the NP-complete problems [21, 22, 23] efficiently. On the other hand, metaheuristics have been widely used in solving the TSP. The problem asks for a permutation c(1), c(2), ..., c(n) of the given n cities that minimizes

    D = \sum_{i=1}^{n-1} d(c_{(i)}, c_{(i+1)}) + d(c_{(n)}, c_{(1)}).    (1)

In short, Eq. (1) gives the distance D of the tour that starts at city c(1), visits each city in sequence, and then returns directly to c(1) from the last city c(n).
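For concreteness, the tour length of Eq. (1) can be computed as in the following C++ sketch; the Euclidean distance function, the coordinate representation, and all identifiers are illustrative assumptions, not taken from the paper.

    #include <cmath>
    #include <cstdio>
    #include <utility>
    #include <vector>

    // Euclidean distance d(a, b) between two cities given as (x, y) pairs.
    static double d(std::pair<double, double> a, std::pair<double, double> b) {
        return std::hypot(a.first - b.first, a.second - b.second);
    }

    // Tour length D of Eq. (1): the n-1 consecutive legs plus the leg that
    // returns from the last city c(n) to the first city c(1).
    double tourLength(const std::vector<int>& tour,
                      const std::vector<std::pair<double, double>>& city) {
        double D = 0.0;
        for (std::size_t i = 0; i + 1 < tour.size(); ++i)
            D += d(city[tour[i]], city[tour[i + 1]]);
        return D + d(city[tour.back()], city[tour.front()]);
    }

    int main() {
        // Four cities at the corners of a 3-by-4 rectangle; the tour 0-1-2-3-0 has length 14.
        std::vector<std::pair<double, double>> city = {{0, 0}, {3, 0}, {3, 4}, {0, 4}};
        std::vector<int> tour = {0, 1, 2, 3};
        std::printf("D = %.1f\n", tourLength(tour, city));
    }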

2.2. Metaheuristics and Pattern Reduction


Metaheuristics and their variants provide an efficient way to find approximate solutions of COPs in a reasonable time, and many researchers have applied them to COPs. In [24, 17, 25, 26], we showed that metaheuristics involve a lot of redundant computations at the later iterations of their convergence process. As such, we proposed the PR algorithm and applied it to several problem domains, such as k-means for the clustering problem [24], the genetic algorithm for the traveling salesman problem [17], tabu search for the traveling salesman problem [26], and vector quantization for the image compression problem [25]. The purpose of the PR algorithm is to eliminate as many of the redundant computations as it can; it is basically made up of two operators: detection and compression. Moreover, we have developed three different detection techniques: time-oriented, space-oriented, and problem-specific. Our experience shows that fast metaheuristics focused on reducing the computation time, such as PR, the Learnable Evolution Model (LEM) [27], and the multiple-search genetic algorithm (MSGA) [28, 29], may cause a loss of quality in some situations. As a consequence, how to reduce the computation time while at the same time preserving or even enhancing the quality of the end result has become an important issue to be resolved.
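The following C++ fragment is a minimal sketch of the two PR operators for permutation-encoded chromosomes, under the simplifying assumption that a "common gene" is a locus at which every chromosome in the population carries the same city; the actual detection and compression rules of [17] may differ, and all identifiers are illustrative.

    #include <cstdio>
    #include <vector>

    // Detection: find the loci at which all chromosomes agree (assumed notion of a "common gene").
    std::vector<bool> detectCommonLoci(const std::vector<std::vector<int>>& population) {
        std::vector<bool> common(population.front().size(), true);
        for (const std::vector<int>& chromosome : population)
            for (std::size_t locus = 0; locus < chromosome.size(); ++locus)
                if (chromosome[locus] != population.front()[locus])
                    common[locus] = false;
        return common;
    }

    // Compression: freeze the detected loci so that later fitness evaluation, crossover,
    // and mutation can skip them; here we simply count how many loci are frozen.
    std::size_t compress(const std::vector<bool>& common) {
        std::size_t frozen = 0;
        for (bool c : common)
            if (c) ++frozen;
        return frozen;
    }

    int main() {
        std::vector<std::vector<int>> population = {{0, 1, 2, 3}, {0, 2, 1, 3}, {0, 1, 2, 3}};
        std::printf("frozen loci: %zu\n", compress(detectCommonLoci(population)));  // prints 2
    }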

2.3. Hybrid Genetic Algorithm


Since most, if not all, fast metaheuristics face the premature convergence problem, many hybrid genetic algorithms have been proposed to improve the quality of GA, such as GA combined with local search [30, 31] and GA combined with other metaheuristic algorithms. Brown et al. [32] presented SAGA, which alternates between GA and SA in each iteration; in other words, GA and SA are run alternately to find the solution and improve its quality. In [33], Adler used SA-recombination and SA-mutation to replace the crossover and mutation operators of simple GA. In [34], Thangiah et al. presented a different approach, which combines simulated annealing, tabu search, and the genetic algorithm to find better solutions for the vehicle routing problem. In [35], Katayama and Narihisa focused their attention on the problem of when to switch between GA and SA. Also among the research on hybrid GA are combinations of GA with tabu search and other metaheuristics. For instance, in [36], Ting et al. combined tabu search with GA to create TGA, in which tabu search is used to maintain the diversity of the population and thus avoid the premature convergence problem of GA. In [37], Lee et al. showed that GA can be combined with ant colony optimization to improve the quality of the end result for the multiple sequence alignment problem. Moreover, many researchers [38, 39, 40] have focused their attention on GA combined with local search because local search can be used to fine-tune the solutions found by GA to obtain a better result. However, it is worth mentioning that many local search methods are domain-specific in the sense that they are very effective for one problem domain but not for others. For instance, 2-opt, 3-opt, and Lin-Kernighan are very effective for the TSP [38] but not for other problems. Another problem in using a local search method is that it usually takes much longer than simple GA.¹ For this reason, [40] focused on reducing the computation time of the local search of a hybrid GA. In summary, the most important problem in the design of a hybrid GA (HGA) is how to balance the strengths of GA and the other metaheuristics, because without such a balance, an HGA may give a poor result.
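As an illustration of the kind of domain-specific local search mentioned above, the following C++ sketch performs one pass of 2-opt over a tour; the distance callback and all identifiers are assumptions made for the sake of the example.

    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <vector>

    // One 2-opt pass: for every pair of edges (a,b) and (c,e), reverse the segment
    // between them whenever the two replacement edges (a,c) and (b,e) are shorter.
    template <class Dist>
    bool twoOptPass(std::vector<int>& tour, Dist dist) {
        bool improved = false;
        const std::size_t n = tour.size();
        for (std::size_t i = 1; i + 1 < n; ++i) {
            for (std::size_t j = i + 1; j < n; ++j) {
                int a = tour[i - 1], b = tour[i], c = tour[j], e = tour[(j + 1) % n];
                if (dist(a, c) + dist(b, e) < dist(a, b) + dist(c, e)) {
                    std::reverse(tour.begin() + i, tour.begin() + j + 1);  // apply the 2-opt move
                    improved = true;
                }
            }
        }
        return improved;  // callers repeat passes until no further improvement
    }

    int main() {
        // Four cities at the corners of a unit square; the crossing tour 0-2-1-3 gets untangled.
        double x[] = {0, 1, 1, 0}, y[] = {0, 0, 1, 1};
        auto dist = [&](int a, int b) { return std::hypot(x[a] - x[b], y[a] - y[b]); };
        std::vector<int> tour = {0, 2, 1, 3};
        while (twoOptPass(tour, dist)) {}
        std::printf("%d %d %d %d\n", tour[0], tour[1], tour[2], tour[3]);  // prints: 0 1 2 3
    }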

3. The Proposed Algorithm


In this section, we begin with a brief introduction to the proposed algorithm, called High Performance Hybrid Metaheuristic (or HPHM for short), which is developed to enhance the performance of all the PR-enhanced algorithms [24, 17, 25, 26] we proposed previously as well as other GAs. Then, we give a simple example to illustrate how the proposed algorithm works.

3.1. Motivation
The High Performance Hybrid Metaheuristic is motivated by the observation that the solutions of simple, or traditional, GA (sGA) become more and more similar to each other, or even identical, as the number of iterations increases [2], especially for fast genetic algorithms (FGAs) such as LEM [27], MSGA [28, 29], and PREGA [17]. Most FGAs try to find the same solution in less time by either removing low-performance solutions from the population or using a fast convergence procedure to reduce the computation time. However, even though FGAs can find solutions much faster than sGA can, they all face the premature convergence problem, a problem that can seriously degrade the quality of their end result. An immediate solution to this problem is to increase the population size, but increasing the population size increases the computation time. LEM [27] and MSGA [28, 29] both try to avoid the premature convergence problem by re-creating part of the population.

¹ Simple GA (sGA) generally refers to GA with the original selection, crossover, and mutation operators, not combined with any other metaheuristic.


 1  Randomly generate an initial population S = {S1, S2, ..., Sn}
 2  While the termination criterion is not met
 3      /* The Genetic Algorithm (GA) Phase */
 4      For i = 1 to G_GA
 5          Evaluate the fitness value f_i of each chromosome S_i
 6          Select the fitter chromosomes to reproduce
 7          Perform the crossover and mutation operators to generate a new generation of population S'
 8          If (i > 2), perform PR [17] to detect and compress the common genes
 9          Let S = S'
10      End For
11      /* The Tabu Search (TS) Phase */
12      Select the best solution from S and call it s
13      For i = 1 to G_TS
14          Create a set of solutions S_t composed of the neighbors of s that are not currently in the tabu list
15          Choose the best solution s' in S_t
16          Update the tabu list based on s'
17          Let s = s'
18      End For
19      Recreate from the solution s' of TS a set of solutions S for the GA phase
20  End
21  Output the final result S

FIGURE 1. Outline of the HPHM algorithm.

In this paper, we propose a different approach to solving the premature convergence problem. The details of the proposed algorithm are given in the next subsection.

3.2. The HPHM Algorithm


The proposed algorithm leverages the strengths of three algorithms: GA for its global search ability, TS for its local search ability (to enhance the quality of GA), and PR for its ability to reduce the computation time of GA. In general, the simple GA designed by Holland [2] includes the initialization, selection, reproduction, crossover, and mutation operators. As Fig. 1 shows, a population of chromosomes is initially generated at random, in such a way that no particular directions are preferred. Then, in line 5, the fitness value of each chromosome is evaluated. After that, HPHM uses these values to decide which chromosomes survive the selection in line 6. Then, in line 7, the crossover and mutation operators are performed. The crossover operator is used to reproduce the chromosomes (solutions) and to explore new directions; the mutation operator is used to avoid falling into local optima by perturbing the values of genes in a chromosome. As described in Fig. 1, the proposed algorithm adds to sGA (i.e., to the GA phase) a new operator we proposed previously, called PR [17]. PR is composed of two operators: detection and compression. The pattern reduction algorithm can effectively and efficiently reduce the computation time of GA, GA-based, and other metaheuristic algorithms. In other words, it is the PR operator of HPHM that plays the role of reducing the computation time of GA.

The second phase of HPHM uses tabu search because it provides a mechanism to enhance the quality of the solutions found by GA. The reason is that the chromosomes quickly become very similar to each other as the number of generations increases, especially for PREGA; in other words, if we keep using GA to search for solutions, the improvement will be limited. For this reason, we employ tabu search as the second phase, as shown in lines 14 to 17 of Fig. 1. First, HPHM selects the best solution s from the set of solutions S found in the GA phase. Tabu search then creates a set of solutions S_t that are the neighbors of s not currently in the tabu list. After that, tabu search chooses the best solution s' from S_t and updates the tabu list, as shown in lines 15 to 16. When the proposed algorithm meets the termination criterion, the final solutions are output. Note that in Fig. 1, G_GA and G_TS represent, respectively, the numbers of iterations the GA and TS phases perform, and m the number of chromosomes of GA. S and S' represent the population of GA; s and s' the solution of TS. In addition, S_t represents a set of solutions the size of which is predefined.

Clearly, one of the most important things HPHM has to deal with is to determine when to stop the GA and TS phases and how to recreate from a single solution s' a set of solutions S for GA. The following describes how HPHM solves these two problems.

1. When to stop the GA and TS phases: In general, the first thing to do is to determine when to stop GA and TS. A simple strategy is to stop when either GA or TS converges; this resembles the stopping criterion of k-means, namely, when the algorithm can no longer improve the solution, or when the solutions remain stable for a certain number of iterations. However, as far as the experimental results of this paper are concerned, an even simpler method is taken: GA and TS are stopped at fixed intervals, namely, every G_GA generations and every G_TS iterations, respectively.

2. How to recreate from a single solution of TS the population for GA: The second thing HPHM has to deal with is to recreate the population for GA from the final solution of TS.
We propose two methods to recreate the population. The first method works as follows. The final solution of TS is copied unchanged as the first chromosome. Then, each of the remaining m − 1 chromosomes is generated as follows: the first gene is generated randomly, and each of the remaining n − 1 genes is generated as the nearest neighbor (NN) of the gene to its left; that is, g(i+1) is a nearest neighbor of g(i), where g(i) denotes the i-th gene. The second method differs from the first only in how the remaining m − 1 chromosomes are generated. As in the first method, the final solution of TS is copied unchanged as the first chromosome. Then, for each of the remaining m − 1 chromosomes, 90% of the genes are cloned from s' by randomly selecting a starting locus and consecutively copying 90% of the genes, and the remaining 10% are generated just like in the first method, except that the first gene is randomly chosen from among the genes not cloned. In other words, the chromosomes are generated in such a way that they are similar to, but not the same as, s', so as to balance diversification and intensification.
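A minimal C++ sketch of the first recreation method is given below; the distance callback, the random-number generator, and all identifiers are illustrative assumptions. The second method can be sketched analogously by first copying a randomly positioned block of 90% of the genes from s' and then filling in the remaining loci in the same nearest-neighbor fashion.

    #include <cmath>
    #include <random>
    #include <vector>

    // First recreation method: keep the TS solution s' unchanged as the first chromosome and
    // rebuild each of the remaining m-1 chromosomes as a nearest-neighbor tour started from a
    // randomly chosen city.
    template <class Dist>
    std::vector<std::vector<int>> recreatePopulation(const std::vector<int>& tsBest, int m,
                                                     Dist dist, std::mt19937& rng) {
        const int n = static_cast<int>(tsBest.size());
        std::vector<std::vector<int>> population = {tsBest};         // s' is copied unchanged
        std::uniform_int_distribution<int> firstGene(0, n - 1);
        for (int k = 1; k < m; ++k) {
            std::vector<bool> used(n, false);
            std::vector<int> chromosome = {firstGene(rng)};          // first gene chosen at random
            used[chromosome[0]] = true;
            while (static_cast<int>(chromosome.size()) < n) {        // g(i+1) = nearest unused neighbor of g(i)
                int best = -1;
                for (int c = 0; c < n; ++c)
                    if (!used[c] && (best < 0 || dist(chromosome.back(), c) < dist(chromosome.back(), best)))
                        best = c;
                used[best] = true;
                chromosome.push_back(best);
            }
            population.push_back(chromosome);
        }
        return population;
    }

    int main() {
        double x[] = {0, 1, 2, 3}, y[] = {0, 1, 0, 1};
        auto dist = [&](int a, int b) { return std::hypot(x[a] - x[b], y[a] - y[b]); };
        std::mt19937 rng(42);
        std::vector<int> tsBest = {0, 1, 2, 3};
        // Recreate a population of m = 3 chromosomes: s' plus two nearest-neighbor tours.
        return static_cast<int>(recreatePopulation(tsBest, 3, dist, rng).size()) - 3;  // 0 on success
    }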


[Figure 2 panels: best solution of GA; the operators of GA; the GA phase; the operators of TS; the TS phase; PR for GA (detection and compression); recreate the population for GA.]

FIGURE 2. A simple example for illustrating how HPHM works.


3.3. An Example
A TSP with six cities is given in Fig. 2 as an example to illustrate how HPHM works. As Fig. 2 shows, the GA phase is essentially a combination of sGA and PR. As usual, sGA is composed of the selection, crossover, and mutation operators. It is the PR operator of the GA phase that takes the responsibility of first detecting the genes that are common to all the individuals and then compressing them, so as to reduce the computation time at later generations. The PR algorithm keeps detecting and compressing common genes until it is time to enter the TS phase, at which point HPHM decompresses the common genes that have been compressed. In addition, the so-far-best chromosome (the best solution s in Fig. 2) is selected for the TS phase. The TS phase then performs the tabu search repeatedly until it is time to switch back to the GA phase or the stopping criterion is met. The recreation operator is responsible for recreating the m − 1 chromosomes in addition to s' for the GA phase. Overall, HPHM alternates between the GA and TS phases until the stopping criterion is met.
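As a rough C++ sketch of the TS phase of Fig. 1, the fragment below takes the neighbors of s to be the tours obtained by swapping two cities and keeps a fixed-length tabu list of recently visited tours; the neighborhood definition, the tabu-list length, and all identifiers are assumptions, since the paper does not fix these details.

    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <deque>
    #include <utility>
    #include <vector>

    // TS phase sketch: neighbors of s are tours obtained by swapping two cities, the tabu
    // list holds recently visited tours, and the best non-tabu neighbor becomes the new s.
    template <class Length>
    std::vector<int> tabuSearchPhase(std::vector<int> s, int iterations,
                                     std::size_t tabuSize, Length tourLength) {
        std::deque<std::vector<int>> tabu = {s};
        std::vector<int> best = s;
        for (int it = 0; it < iterations; ++it) {
            std::vector<int> bestNeighbor;
            for (std::size_t i = 0; i + 1 < s.size(); ++i)
                for (std::size_t j = i + 1; j < s.size(); ++j) {
                    std::vector<int> candidate = s;
                    std::swap(candidate[i], candidate[j]);
                    bool isTabu = std::find(tabu.begin(), tabu.end(), candidate) != tabu.end();
                    if (!isTabu && (bestNeighbor.empty() ||
                                    tourLength(candidate) < tourLength(bestNeighbor)))
                        bestNeighbor = candidate;                 // best neighbor s' not in the tabu list
                }
            if (bestNeighbor.empty()) break;                      // the whole neighborhood is tabu
            s = bestNeighbor;
            tabu.push_back(s);                                    // update the tabu list based on s'
            if (tabu.size() > tabuSize) tabu.pop_front();
            if (tourLength(s) < tourLength(best)) best = s;
        }
        return best;                                              // handed to the recreation operator
    }

    int main() {
        double x[] = {0, 1, 1, 0}, y[] = {0, 0, 1, 1};
        auto length = [&](const std::vector<int>& t) {
            double D = 0.0;
            for (std::size_t i = 0; i < t.size(); ++i)
                D += std::hypot(x[t[i]] - x[t[(i + 1) % t.size()]],
                                y[t[i]] - y[t[(i + 1) % t.size()]]);
            return D;
        };
        std::vector<int> best = tabuSearchPhase({0, 2, 1, 3}, 10, 5, length);
        std::printf("best tour length = %.1f\n", length(best));   // 4.0 for the unit square
    }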

4. Performance Evaluation
In this paper, the performance of the proposed algorithm is evaluated by using it to solve the TSP. All the empirical analyses were conducted on an IBM X3400 machine with a 2.0 GHz Xeon CPU and 8 GB of memory, running CentOS 5.0 (Linux kernel 2.6.18). Moreover, all the programs are written in C++ and compiled with g++ (the GNU C++ compiler). For all the algorithms evaluated, the nearest-neighbor method [41] is used as the initialization operator, and 2-opt [42] is employed as the local search method. In addition, all the simulations use PMX as the default crossover operator of the GA-based algorithms.

4.1. Data Sets and Parameter Settings


TABLE 1. Data sets for the TSP.

Data set   # of cities   Optimum
d198             198      15,780
d493             493      35,002
d657             657      48,912
pr1002         1,002     259,045
d1291          1,291      50,801
d1655          1,655      62,128
pr2392         2,392     378,032

The benchmarks used to evaluate the performance of these algorithms in solving the TSP are shown in Table 1. All the simulations are carried out for 30 runs, with the population size fixed at 25, the crossover probability at 1.0, the mutation probability at 0.01, and the number of generations at 1,000 for all the algorithms compared. Furthermore, all the algorithms alternate between GA and TS once every 50 iterations; in other words, they perform GA for 50 iterations, TS for another 50 iterations, then GA again for another 50 iterations, and so on until the stopping criterion is met. Moreover, for all the simulations, PR is started at the second generation.


TABLE 2. Simulation results.


(D: quality of the end result; T: time in seconds; the number in parentheses is the change relative to GA as defined in Eq. (2).)

GA (baseline) and PREGA:
Data set        GA D          GA T      PREGA D                PREGA T
d198         15,950.85        99.55     16,442.36 ( 3.08%)        1.94 (-98.05%)
d493         36,163.85       823.65     37,158.70 ( 2.75%)       14.60 (-98.23%)
d657         52,174.40     1,448.73     53,882.20 ( 3.27%)       27.19 (-98.12%)
u724         44,748.29     2,023.30     45,482.30 ( 1.64%)       32.93 (-98.37%)
pr1002      285,634.60     3,493.49    290,885.20 ( 1.84%)       58.64 (-98.32%)
d1291        55,718.04     6,329.50     54,360.04 (-2.44%)      132.62 (-97.90%)
d1655        70,332.43    10,713.37     69,879.55 (-0.64%)      215.70 (-97.99%)
pr2392      449,637.10    21,495.98    431,909.60 (-3.94%)      374.20 (-98.26%)

EHM-E1 and EHM-E2:
Data set     EHM-E1 D               EHM-E1 T              EHM-E2 D               EHM-E2 T
d198         16,010.71 ( 0.38%)       60.23 (-39.49%)     16,006.25 ( 0.35%)       66.75 (-32.95%)
d493         36,363.72 ( 0.55%)      484.32 (-41.20%)     36,472.30 ( 0.85%)      529.82 (-35.67%)
d657         52,176.75 ( 0.00%)      935.17 (-35.45%)     51,683.35 (-0.94%)      937.43 (-35.29%)
u724         45,047.86 ( 0.67%)    1,087.62 (-46.25%)     44,177.19 (-1.28%)    1,208.54 (-40.27%)
pr1002      276,405.70 (-3.23%)    1,895.28 (-45.75%)    272,521.00 (-4.59%)    2,295.15 (-34.30%)
d1291        54,222.00 (-2.69%)    3,928.13 (-37.94%)     54,313.75 (-2.52%)    4,055.78 (-35.92%)
d1655        66,945.93 (-4.81%)    6,032.48 (-43.69%)     66,456.34 (-5.51%)    6,752.36 (-36.97%)
pr2392      409,355.20 (-8.96%)   12,903.27 (-39.97%)    409,476.20 (-8.93%)   14,230.32 (-33.80%)

HPHM-E1 and HPHM-E2:
Data set     HPHM-E1 D              HPHM-E1 T             HPHM-E2 D              HPHM-E2 T
d198         15,996.91 ( 0.29%)       57.36 (-42.38%)     16,181.04 ( 1.44%)       18.66 (-81.25%)
d493         36,554.30 ( 1.08%)      448.04 (-45.60%)     36,367.47 ( 0.56%)      130.38 (-84.17%)
d657         52,173.25 ( 0.00%)      931.71 (-35.69%)     51,686.93 (-0.93%)      259.86 (-82.06%)
u724         45,088.78 ( 0.76%)    1,079.84 (-46.63%)     44,356.15 (-0.88%)      303.36 (-85.01%)
pr1002      277,225.10 (-2.94%)    2,059.04 (-41.06%)    273,145.60 (-4.37%)      600.82 (-82.80%)
d1291        54,113.08 (-2.88%)    3,910.35 (-38.22%)     54,069.63 (-2.96%)    1,104.67 (-82.55%)
d1655        66,835.42 (-4.97%)    5,894.64 (-44.98%)     66,457.76 (-5.51%)    1,903.30 (-82.23%)
pr2392      408,105.20 (-9.24%)   13,079.91 (-39.15%)    409,967.40 (-8.82%)    3,924.64 (-81.74%)

T: time in seconds.

4.2. Simulation Results


Table 2 compares GA (i.e., simple GA) [2], PREGA (GA with PR [17]), EHM-E1, EHM-E2, HPHM-E1, and HPHM-E2, where EHM denotes GA+TS and HPHM denotes GA+TS+PR. In addition, the suffix -E1 indicates that the first method is used to recreate the population for the GA phase; the suffix -E2 indicates that the second method is used. The columns marked D and T represent the quality of the end result and the running time, respectively. The number in parentheses, i.e., (x%), represents the enhancement of an algorithm compared to GA, defined as

    x = \frac{\phi - GA}{GA} \times 100,    (2)

where \phi indicates the corresponding value (D or T) of either PREGA, EHM-E1, EHM-E2, HPHM-E1, or HPHM-E2, and GA the corresponding value of simple GA.
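As a reading aid, using the d198 row of Table 2 as a worked example of Eq. (2): for the PREGA computation time, x = (1.94 − 99.55)/99.55 × 100 ≈ −98.05, i.e., a 98.05% reduction in computation time relative to GA.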

FIGURE 3. Enhancement of time and quality of the algorithms evaluated.

As Table 2 shows, PREGA can reduce the computation time of GA by 97.90% to 98.37% while limiting the loss of quality to no more than 3.08%. It is, however, still a loss, meaning that there is room for improvement. EHM-E1 and EHM-E2 provide another way to reduce the computation time of GA while keeping the loss of quality small. Table 2 also shows that EHM can be combined with the pattern reduction technique (the combination being HPHM) to improve the quality of the end result of PREGA in addition to reducing the computation time, especially for the large data sets. The results of HPHM-E2 show that cloning 90% of the genes of each new chromosome from the final solution of TS saves more computation time than HPHM-E1, while keeping the loss of quality small compared to PREGA. Fig. 3 compares the performance of GA, PREGA, EHM-E1, EHM-E2, HPHM-E1, and HPHM-E2 in solving the TSP benchmark pr2392, using GA as the base of comparison.




The result in Fig. 3 also shows that HPHM-E2 outperforms all the other methods in terms of both the computation time saved and the quality gained. In summary, the proposed algorithm provides an efficient way to balance the computation time and the quality of the end result of GA, especially when the problem is large.

5. Conclusions
The proposed algorithm is motivated by the observation that many time-efficient GAs are incapable of enhancing the quality of the end result, especially at the later generations of the convergence process. For this reason, in this paper, we present a hybrid metaheuristic, called HPHM, for enhancing the performance of GA and its variants. Our simulation results show that the proposed algorithm can efficiently enhance the performance of GA and its variants in terms of the quality of the end result on several well-known data sets for the TSP. In particular, the proposed algorithm can keep fast GA-based algorithms from falling into local minima at the early generations. In the future, our goal is to try different hybrid strategies to further enhance the quality of the end result of GA and its variants.
REFERENCES
[1] J. W. Pepper, B. L. Golden, and E. A. Wasil, "Solving the traveling salesman problem with annealing-based heuristics: A computational study," IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, vol. 32, no. 1, pp. 72-77, 2002.
[2] J. H. Holland, Adaptation in Natural and Artificial Systems. Boston, MA: MIT Press, 1992.
[3] D. E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley Professional, 1989.
[4] Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs. Berlin, Germany: Springer-Verlag, 1996.
[5] S. Karungaru, M. Fukumi, and N. Akamatsu, "Automatic human faces morphing using genetic algorithms based control points selection," International Journal of Innovative Computing, Information and Control, vol. 3, no. 2, pp. 247-256, 2007.
[6] F. Xhafa and J. Carretero, "Genetic algorithm based schedulers for grid computing systems," International Journal of Innovative Computing, Information and Control, vol. 3, no. 5, pp. 1053-1071, 2007.
[7] C.-H. Hsieh, W.-H. Ho, and J.-H. Chou, "Optimal PID controllers design of PWM feedback time-varying systems using orthogonal-functions approach and genetic algorithm," International Journal of Innovative Computing, Information and Control, vol. 5, no. 2, pp. 387-397, 2009.
[8] F. Glover, "Tabu search - Part I," ORSA Journal on Computing, vol. 1, no. 3, pp. 190-206, 1989.
[9] F. Glover, "Tabu search - Part II," ORSA Journal on Computing, vol. 2, no. 1, pp. 4-32, 1990.
[10] R. Martí, "Multi-start methods," in Handbook of Metaheuristics (F. W. Glover and G. A. Kochenberger, eds.), Boston, MA: Kluwer Academic Publishers, pp. 355-368, 1993.
[11] C. Lo and C. Hus, "Annealing framework with learning memory," IEEE Transactions on Systems, Man and Cybernetics, Part A, vol. 28, no. 5, pp. 1-13, 1998.
[12] M. Dorigo and L. M. Gambardella, "Ant colony system: A cooperative learning approach to the traveling salesman problem," IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 53-66, 1997.
[13] J. Kennedy and R. C. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942-1948, 1995.
[14] D. T. Pham, A. Ghanbarzadeh, E. Koc, S. Otri, S. Rahim, and M. Zaidi, "The bees algorithm," Technical report, Manufacturing Engineering Centre, Cardiff University, United Kingdom, 2005.
[15] Z. W. Geem, J. H. Kim, and G. Loganathan, "A new heuristic optimization algorithm: Harmony search," SIMULATION, vol. 76, no. 2, pp. 60-68, 2001.
[16] C. Blum and A. Roli, "Metaheuristics in combinatorial optimization: Overview and conceptual comparison," ACM Computing Surveys, vol. 35, no. 3, pp. 268-308, 2003.
[17] S.-P. Tseng, C.-W. Tsai, M.-C. Chiang, and C.-S. Yang, "Fast genetic algorithm based on pattern reduction," in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, pp. 214-219, 2008.
[18] E. L. Lawler, J. K. Lenstra, A. H. R. Kan, and D. B. Shmoys, The Traveling Salesman Problem. New York, NY: Wiley, 1985.
[19] M. M. Flood, "The traveling salesman problem," Operations Research, vol. 4, pp. 61-78, 1955.
[20] J. Yi, G. Yang, Z. Zhang, and Z. Tang, "An improved elastic net method with time-dependent parameters for traveling salesman problem," International Journal of Innovative Computing, Information and Control, vol. 5, no. 4, pp. 1089-1100, 2009.
[21] A. Barvinok, "The geometric maximum traveling salesman problem," Journal of the ACM, vol. 50, no. 5, pp. 641-664, 2003.
[22] T. Cormen, C. Leiserson, and R. Rivest, Introduction to Algorithms. MIT Press, 1990.
[23] M. Garey and D. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman and Company, 1979.
[24] C.-W. Tsai, C.-S. Yang, and M.-C. Chiang, "A novel pattern reduction algorithm for k-means based clustering," in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, pp. 504-509, 2007.
[25] C.-W. Tsai, C.-Y. Lee, M.-C. Chiang, and C.-S. Yang, "A fast VQ codebook generation algorithm via pattern reduction," Pattern Recognition Letters, 2009 (accepted).
[26] C.-W. Tsai, S.-P. Tseng, M.-C. Chiang, and C.-S. Yang, "A time-efficient method for metaheuristics: Using tabu search and tabu GA as a case," in Proceedings of the Ninth International Conference on Hybrid Intelligent Systems, 2009 (accepted).
[27] R. S. Michalski, "Learnable evolution model: Evolutionary processes guided by machine learning," Machine Learning, vol. 38, no. 1-2, pp. 9-40, 2000.
[28] C.-F. Tsai, C.-W. Tsai, and T. Yang, "A modified multiple-searching method to genetic algorithms for solving traveling salesman problem," in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, vol. 3, 2002.
[29] C.-F. Tsai, C.-W. Tsai, and C.-P. Chen, "A novel algorithm for multimedia multicast routing in a large scale network," Journal of Systems and Software, vol. 72, no. 3, pp. 431-441, 2004.
[30] R. K. Ahuja, Ö. Ergun, J. B. Orlin, and A. P. Punnen, "A survey of very large-scale neighborhood search techniques," Discrete Applied Mathematics, vol. 123, no. 1-3, pp. 75-102, 2002.
[31] S. Lin and B. W. Kernighan, "An effective heuristic algorithm for the traveling-salesman problem," Operations Research, vol. 21, no. 2, pp. 498-516, 1973.
[32] D. E. Brown, C. L. Huntley, and A. R. Spillane, "A parallel genetic heuristic for the quadratic assignment problem," in Proceedings of the Third International Conference on Genetic Algorithms, pp. 406-415, 1989.
[33] D. Adler, "Genetic algorithms and simulated annealing: A marriage proposal," in Proceedings of the IEEE International Conference on Neural Networks, vol. 2, pp. 1104-1109, 1993.
[34] S. R. Thangiah, I. H. Osman, and T. Sun, "Hybrid genetic algorithms, simulated annealing and tabu search methods for vehicle routing problems with time windows," Technical Report UKC/OR94/4, pp. 1-37, 1994.
[35] K. Katayama and H. Narihisa, "An efficient hybrid genetic algorithm for the traveling salesman problem," Electronics and Communications in Japan, Part III: Fundamental Electronic Science, vol. 84, no. 2, pp. 76-83, 2001.
[36] C.-K. Ting, S.-T. Li, and C. Lee, "On the harmonious mating strategy through tabu search," Information Sciences, vol. 156, no. 3-4, pp. 189-214, 2003.
[37] Z.-J. Lee, S.-F. Su, C.-C. Chuang, and K.-H. Liu, "Genetic algorithm with ant colony optimization (GA-ACO) for multiple sequence alignment," Applied Soft Computing, vol. 8, no. 1, pp. 55-78, 2008.
[38] R. Baraglia, J. I. Hidalgo, and R. Perego, "A hybrid heuristic for the traveling salesman problem," IEEE Transactions on Evolutionary Computation, vol. 5, no. 6, pp. 613-622, 2001.
[39] G. A. Jayalakshmi, S. Sathiamoorthy, and R. Rajaram, "A hybrid genetic algorithm: A new approach to solve traveling salesman problem," International Journal of Computational Engineering Science, vol. 2, pp. 339-355, 2001.
[40] H. D. Nguyen, I. Yoshihara, K. Yamamori, and M. Yasunaga, "Implementation of an effective hybrid GA for large-scale traveling salesman problems," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 37, no. 1, pp. 92-99, 2007.
[41] G. Reinelt, The Traveling Salesman: Computational Solutions for TSP Applications, vol. 840. Berlin Heidelberg: Springer-Verlag, 1994.
[42] R. Yang, "Solving large travelling salesman problems with small populations," pp. 157-162, 1997.
