
Republic of Tunisia
Ministry of Higher Education and Scientific Research
University of Carthage

Tunisia Polytechnic School

Operations Research Project

Solving The Knapsack Problem Using The Genetic Algorithm


May-June 2013

Prepared by: Lotfi Slim, Akram Bedoui


Second-year engineering students, Option: Economics and Scientific Management

Supervised by: Mme Safa Bhar, Operations Research Professor at TPS

Academic Year 2012 - 2013


Abstract
The knapsack problem is a problem in combinatorial optimization: given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It has been studied for more than a century and many approaches have been developed to solve it. One of them is the genetic algorithm, an evolutionary method inspired by the process of natural selection. Computer science comes to the help of mathematics to obtain fast near-optimal solutions to the KP.

Keywords: knapsack problem, optimization, metaheuristic, evolutionary algorithms, genetic algorithm, natural selection, Java application.


Contents
1 Introduction  1
2 Knapsack Problem Setting  2
  2.1 Origins  2
  2.2 Applications  3
  2.3 Problem formulation  3
  2.4 Problem variations  3
    2.4.1 Bounded vs unbounded knapsack problems  4
    2.4.2 Multiple-choice knapsack problem  5
    2.4.3 Subset sum problem  5
    2.4.4 Multiple knapsack problem  5
    2.4.5 Quadratic knapsack problem  6
3 Genetic Algorithm Formulation  7
  3.1 Overview  7
    3.1.1 Metaheuristic  7
    3.1.2 Evolutionary algorithms  8
  3.2 Genetic algorithms  11
    3.2.1 Brief history of GA  11
    3.2.2 GA concepts  12
    3.2.3 Flowchart explanation  13
  3.3 GA operators and terminologies  14
    3.3.1 Encoding  14
    3.3.2 Fitness function  15
    3.3.3 Selection  15
    3.3.4 Crossover  16
    3.3.5 Mutation  18
    3.3.6 Penalizing  18
  3.4 Advantages Of GAs  19
    3.4.1 Global Search Methods  19
    3.4.2 Blind Search Methods  20
    3.4.3 Probabilistic rules  20
    3.4.4 Parallel machines applicability  20


4 Implementation  21
  4.1 Programming languages used  21
  4.2 Graphical Interface  21
  4.3 C code listing  22
  4.4 Java code listing  33
  4.5 Tests and results  37
Conclusion  51


List of Figures
2.1 Knapsack problem illustration (from Wikipedia)  2
2.2 Knapsack example of solution  2
3.1 Structure of a single population evolutionary algorithm  8
3.2 Flowchart of genetic algorithm  12
3.3 Binary encoding example  14
3.4 Permutation encoding example  14
3.5 Value encoding example  15
3.6 Single Point Crossover example  17
3.7 Two Point Crossover example  17
3.8 Uniform Crossover example  17
3.9 Flipping mutation example  18
3.10 Interchanging mutation example  18
3.11 Reversing mutation example  18
4.1 Graphical interface: setting the parameters  22
4.2 Graphical interface: showing results  22

Chapter 1 Introduction
In the field of engineering, Operations Research (OR) is of huge use, as it is one of the most powerful tools of decision making. By using techniques such as mathematical modeling to analyze complex situations, operations research gives executives the power to make more effective decisions and build more productive systems, based on more complete data, consideration of all available options, careful predictions of outcomes and estimates of risk, and the latest decision tools and techniques such as simulation, optimization, probability and statistics. Employing techniques from other mathematical sciences, operations research arrives at optimal or near-optimal solutions to complex problems. We say optimal or near-optimal because the quality of the solution depends on the method used to solve the given problem. In fact, in this domain we distinguish between exact and heuristic methods. While the first gives the exact solution, the second converges to it within an error margin from the optimal solution. The reason why we use the latter is that it converges to a solution very quickly: heuristics typically run in polynomial time, in contrast to the exponential running time of exact methods on hard problems. A vast variety of problems have been formulated since the second half of the last century, and many solutions have been given to them. One of the most popular and typical examples is the Knapsack Problem, which is treated in the present work due to its simplicity. In the search for a solution, we will opt for the genetic algorithm, an example of the metaheuristic evolutionary algorithms. This report summarizes the work that we did during the last weeks and is divided into three main parts. While the first two parts are dedicated to the theoretical presentation of the Knapsack Problem and the Genetic Algorithm respectively, the last part presents the computational and programming work we elaborated.

Chapter 2 Knapsack Problem Setting


2.1 Origins

Suppose we are planning a hiking trip, and we are therefore interested in filling a knapsack with items that are considered necessary for the trip. There are N different item types that are deemed desirable; these could include a bottle of water, an apple, an orange, a sandwich, and so forth. Each item type has two attributes, namely a weight (or volume, or in other cases a cost) and a value (or a profit) that quantifies the level of importance associated with each unit of that type of item. Since the knapsack has a limited weight (or volume) capacity, the problem of interest is to figure out how to load the knapsack with a combination of units of the specified types of items that yields the greatest total value. What we have just described is called the knapsack, or rucksack, problem.

Figure 2.1: Knapsack problem illustration (from wikipedia)

Figure 2.2: Knapsack example of solution


2.2 Applications

A large variety of resource allocation problems can be cast in the framework of a knapsack problem. The general idea is to think of the capacity of the knapsack as the available amount of a resource and of the item types as activities to which this resource can be allocated. Two quick examples are the allocation of an advertising budget to the promotions of individual products and the allocation of your effort to the preparation of final exams in different subjects. In finance, the selection of capital investments and financial portfolios, and the selection of assets for asset-backed securitization, are examples that show how solving this problem can allow companies to be more competitive and profitable.

2.3 Problem formulation

The knapsack problem has been studied for more than a century, with early works dating as far back as 1897. The mathematician Tobias Dantzig is thought to be the first to have set the problem mathematically, as follows: let there be n items, x_1 to x_n, where item i has a value v_i and a weight w_i. The maximum weight that we can carry in the bag is W. It is common to assume that all values and weights are nonnegative. To simplify the representation, we also assume that the items are listed in increasing order of weight. Thus, the optimization problem is written as:

maximize   \sum_{i=1}^{n} v_i x_i
subject to \sum_{i=1}^{n} w_i x_i \le W
           x_i \in \{0, 1\}, \quad i = 1, \dots, n
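For small instances, this 0-1 formulation can be solved exactly by dynamic programming. The following sketch (an illustration only, not part of the project code described later) fills a table of the best achievable value for each capacity:

```c
#include <assert.h>

/* Classic dynamic-programming solver for the 0-1 knapsack problem.
   best[c] holds the maximum value achievable with capacity c using the
   items processed so far; iterating c downwards ensures each item is
   used at most once. Runs in O(n*W) time and O(W) space. */
int knapsack01(int n, const int v[], const int w[], int W) {
    int best[W + 1];
    for (int c = 0; c <= W; c++) best[c] = 0;
    for (int i = 0; i < n; i++)
        for (int c = W; c >= w[i]; c--)
            if (best[c - w[i]] + v[i] > best[c])
                best[c] = best[c - w[i]] + v[i];
    return best[W];
}
```

This exact method illustrates the contrast drawn in the introduction: it is fast when W is small, but the table grows with the capacity, which is why heuristics such as the GA become attractive on large instances.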

2.4 Problem variations

The last formulation is known as the 0-1 knapsack problem, as it restricts the number x_i of copies of each kind of item to zero or one. But there is a huge variety of variations of the knapsack problem in the scientific literature, among them:

2.4.1 Bounded vs unbounded knapsack problems

The bounded knapsack problem (BKP) removes the restriction that there is only one of each item, but restricts the number x_i of copies of each kind of item to an integer value c_i. Mathematically the bounded knapsack problem can be formulated as:

maximize   \sum_{i=1}^{n} v_i x_i
subject to \sum_{i=1}^{n} w_i x_i \le W
           x_i \in \{0, 1, \dots, c_i\}, \quad i = 1, \dots, n

The unbounded knapsack problem (UKP, sometimes called the integer knapsack problem) places no upper bound on the number of copies of each kind of item: the only restriction on x_i is that it is a non-negative integer. If the example with the colored bricks above is viewed as an unbounded knapsack problem, then the solution is to take three yellow boxes and three grey boxes. Mathematically the UKP can be formulated as:

maximize   \sum_{i=1}^{n} v_i x_i
subject to \sum_{i=1}^{n} w_i x_i \le W
           x_i \in \mathbb{N}, \quad i = 1, \dots, n
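The only change needed to solve the unbounded variant exactly by dynamic programming is the direction of the capacity loop, since each item may now be reused. Again this is an illustrative sketch, not project code:

```c
#include <assert.h>

/* DP for the unbounded knapsack problem: iterating the capacity
   upwards (instead of downwards as in the 0-1 case) allows each
   item to be reused any number of times. */
int knapsack_unbounded(int n, const int v[], const int w[], int W) {
    int best[W + 1];
    for (int c = 0; c <= W; c++) best[c] = 0;
    for (int c = 1; c <= W; c++)
        for (int i = 0; i < n; i++)
            if (w[i] <= c && best[c - w[i]] + v[i] > best[c])
                best[c] = best[c - w[i]] + v[i];
    return best[W];
}
```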


2.4.2 Multiple-choice knapsack problem

If the items are subdivided into k classes denoted N_i, and exactly one item must be taken from each class, we get the multiple-choice knapsack problem:
maximize   \sum_{i=1}^{k} \sum_{j \in N_i} v_{ij} x_{ij}
subject to \sum_{i=1}^{k} \sum_{j \in N_i} w_{ij} x_{ij} \le W
           \sum_{j \in N_i} x_{ij} = 1, \quad 1 \le i \le k
           x_{ij} \in \{0, 1\}, \quad 1 \le i \le k, \; j \in N_i

2.4.3 Subset sum problem

The subset sum problem (often the corresponding decision problem is given instead) is a special case of the decision and 0-1 problems in which, for each item, the weight equals the value (v_i = w_i):
maximize   \sum_{i=1}^{n} v_i x_i
subject to \sum_{i=1}^{n} v_i x_i \le W
           x_i \in \{0, 1\}, \quad i = 1, \dots, n

It is frequently used in the field of cryptography.

2.4.4 Multiple knapsack problem

If we have n items and m knapsacks with capacities W_j, we get:


maximize   \sum_{j=1}^{m} \sum_{i=1}^{n} v_i x_{ij}
subject to \sum_{i=1}^{n} w_i x_{ij} \le W_j, \quad 1 \le j \le m
           \sum_{j=1}^{m} x_{ij} \le 1, \quad 1 \le i \le n
           x_{ij} \in \{0, 1\}, \quad 1 \le i \le n, \; 1 \le j \le m
As a special case of the multiple knapsack problem, when the profits are equal to the weights and all bins have the same capacity, we obtain the multiple subset sum problem. This variation is used in many loading and scheduling problems in Operations Research and has a polynomial-time approximation scheme (PTAS).

2.4.5 Quadratic knapsack problem

The quadratic knapsack problem (QKP) maximizes a quadratic objective function subject to a linear capacity constraint on binary variables. The problem was first treated in the late 1970s and is described by:

maximize   \sum_{j=1}^{n} v_j x_j + \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} v_{ij} x_i x_j
subject to \sum_{j=1}^{n} w_j x_j \le W
           x_j \in \{0, 1\}, \quad 1 \le j \le n

Chapter 3 Genetic Algorithm Formulation


3.1 Overview

To solve the knapsack problem, many approaches have been developed over the past decades. In this framework we will focus on one particular method: the genetic algorithm. A GA is a metaheuristic that mimics the process of natural evolution to generate useful solutions to optimization and search problems.

3.1.1 Metaheuristic

As defined by Osman and Laporte, a metaheuristic is "an iterative generation process which guides a subordinate heuristic by combining intelligently different concepts for exploring and exploiting the search space; learning strategies are used to structure information in order to find efficiently near-optimal solutions." Metaheuristics are strategies that guide the search process. The goal is to efficiently explore the search space in order to find near-optimal solutions. The techniques which constitute metaheuristic algorithms range from simple local search procedures to complex learning processes. Metaheuristics are approximate and usually stochastic (non-deterministic). They may incorporate mechanisms to avoid getting trapped in confined areas of the search space. The basic concepts of metaheuristics permit an abstract-level description: metaheuristics are not problem-specific, but they may make use of domain-specific knowledge in the form of heuristics that are controlled by the upper-level strategy. Today's more advanced metaheuristics use search experience (embodied in some form of memory) to guide the search.


3.1.2 Evolutionary algorithms

EA principles
Evolutionary algorithms are stochastic search methods that mimic natural biological evolution. They operate on a population of potential solutions, applying the principle of survival of the fittest to produce better and better approximations to a solution. At each generation, a new set of approximations is created by selecting individuals according to their level of fitness in the problem domain and breeding them together using operators borrowed from natural genetics. This process leads to the evolution of populations of individuals that are better suited to their environment than the individuals they were created from, just as in natural adaptation. Evolutionary algorithms model natural processes such as selection, recombination, mutation, migration, locality and neighborhood. The following figure shows the structure of a simple evolutionary algorithm. Evolutionary algorithms work on populations of individuals instead of single solutions; in this way the search is performed in a parallel manner.

Figure 3.1: Structure of a single population evolutionary algorithm

EA techniques
Similar techniques differ in the implementation details and in the nature of the particular problem to which they are applied:
- Genetic algorithm: This is the most popular type of EA. One seeks the solution of a problem in the form of strings of numbers (traditionally binary, although the best representations are usually those that reflect something about the problem being solved), by applying operators such as recombination and mutation (sometimes one, sometimes both). This type of EA is often used in optimization problems.
- Genetic programming: Here the solutions are in the form of computer programs, and their fitness is determined by their ability to solve a computational problem.
- Evolutionary programming: Similar to genetic programming, but the structure of the program is fixed and its numerical parameters are allowed to evolve.
- Gene expression programming: Like genetic programming, GEP also evolves computer programs, but it explores a genotype-phenotype system, where computer programs of different sizes are encoded in linear chromosomes of fixed length.
- Evolution strategy: Works with vectors of real numbers as representations of solutions, and typically uses self-adaptive mutation rates.
- Memetic algorithm: A hybrid form of population-based method, inspired both by Darwinian principles of natural evolution and by Dawkins' notion of a meme, and viewed as a population-based algorithm coupled with individual learning procedures capable of performing local refinements. The focus of research is thus the balance between exploration and exploitation in the search.
- Differential evolution: Based on vector differences and therefore primarily suited for numerical optimization problems.
- Neuroevolution: Similar to genetic programming, but the genomes represent artificial neural networks by describing structure and connection weights. The genome encoding can be direct or indirect.
- Learning classifier system: LCS is a machine learning system with close links to reinforcement learning and genetic algorithms.

Biological terminology

At this point it is useful to formally introduce some of the biological terminology that will be used throughout this work. In the context of genetic algorithms, these biological terms are used in the spirit of analogy with real biology, though the entities they refer to are much simpler than the real biological ones.

All living organisms consist of cells, and each cell contains the same set of one or more chromosomes (strings of DNA) that serve as a blueprint for the organism. A chromosome can be conceptually divided into genes, each of which encodes a particular protein. Very roughly, one can think of a gene as encoding a trait, such as eye color. The different possible settings for a trait (e.g., blue, brown, hazel) are called alleles. Each gene is located at a particular locus (position) on the chromosome. Many organisms have multiple chromosomes in each cell. The complete collection of genetic material (all chromosomes taken together) is called the organism's genome. The term genotype refers to the particular set of genes contained in a genome. Two individuals that have identical genomes are said to have the same genotype. The genotype gives rise, under fetal and later development, to the organism's phenotype, its physical and mental characteristics, such as eye color, height, brain size, and intelligence.

Organisms whose chromosomes are arrayed in pairs are called diploid; organisms whose chromosomes are unpaired are called haploid. In nature, most sexually reproducing species are diploid, including human beings, who each have 23 pairs of chromosomes in each somatic (non-germ) cell in the body. During sexual reproduction, recombination (or crossover) occurs: in each parent, genes are exchanged between each pair of chromosomes to form a gamete (a single chromosome), and then gametes from the two parents pair up to create a full set of diploid chromosomes. In haploid sexual reproduction, genes are exchanged between the two parents' single-strand chromosomes.

Offspring are subject to mutation, in which single nucleotides (elementary bits of DNA) are changed from parent to offspring, the changes often resulting from copying errors. The fitness of an organism is typically defined as the probability that the organism will live to reproduce (viability) or as a function of the number of offspring the organism has (fertility).

In genetic algorithms, the term chromosome typically refers to a candidate solution to a problem, often encoded as a bit string. The genes are either single bits or short blocks of adjacent bits that encode a particular element of the candidate solution (e.g., in the context of multiparameter function optimization, the bits encoding a particular parameter might be considered to be a gene). An allele in a bit string is


either 0 or 1; for larger alphabets more alleles are possible at each locus. Crossover typically consists of exchanging genetic material between two single-chromosome haploid parents. Mutation consists of flipping the bit at a randomly chosen locus (or, for larger alphabets, replacing the symbol at a randomly chosen locus with a randomly chosen new symbol). Most applications of genetic algorithms employ haploid individuals, particularly single-chromosome individuals. The genotype of an individual in a GA using bit strings is simply the configuration of bits in that individual's chromosome. Often there is no notion of phenotype in the context of GAs, although more recently many workers have experimented with GAs in which there is both a genotypic level and a phenotypic level (e.g., the bit-string encoding of a neural network and the neural network itself).

Natural Selection
In nature, the individual that has better survival traits will survive for a longer period of time. This in turn provides it a better chance to produce offspring with its genetic material. Therefore, after a long period of time, the entire population will consist of many genes from the superior individuals and fewer from the inferior individuals. In a sense, the fittest survived and the unfit died out. This force of nature is called natural selection. The existence of competition among individuals of a species was certainly recognized before Darwin. The mistake made by the older theorists (like Lamarck) was to assume that the environment has a direct effect on an individual, that is, that the environment forces an individual to adapt to it. The molecular explanation of evolution shows that this is biologically impossible: the species does not adapt to the environment; rather, only the fittest survive.

3.2 Genetic algorithms

3.2.1 Brief history of GA

Although simulating evolution computationally had been possible since the 1950s, it took John Holland (from the University of Michigan) more than a decade to first introduce the concepts of GA after working on adaptive systems. The main results of his


work were published in his book Adaptation in Natural and Artificial Systems in 1975. But research in GAs remained largely theoretical until the mid-1980s, when The First International Conference on Genetic Algorithms was held in Pittsburgh, Pennsylvania. Nowadays genetic algorithms are applied to a broad range of subjects.

3.2.2 GA concepts

The most common type of genetic algorithm works like this: a population is created with a group of individuals generated randomly. The individuals in the population are then evaluated. The evaluation function is provided by the programmer and gives the individuals a score based on how well they perform at the given task. Two individuals are then selected based on their fitness: the higher the fitness, the higher the chance of being selected. These individuals then reproduce to create one or more offspring, after which the offspring are mutated randomly. This continues until a suitable solution has been found or a certain number of generations have passed, depending on the needs of the programmer.

Figure 3.2: Flowchart of genetic algorithm
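The loop just described can be condensed into a small self-contained sketch. Everything below (the instance data, the population size, the mutation rate, and the fixed random seed) is an illustrative assumption, not the project's actual implementation; it combines tournament selection, single-point crossover, and flip mutation, operators that are detailed in the following sections:

```c
#include <stdlib.h>

#define N_ITEMS 8
#define POP 30
#define GENS 100

/* A small hypothetical knapsack instance, chosen for illustration. */
static const int VALS[N_ITEMS] = {10, 5, 15, 7, 6, 18, 3, 12};
static const int WTS[N_ITEMS]  = { 2, 3,  5, 7, 1,  4, 1,  6};
static const int CAP = 15;

/* Fitness: total value of the selected items, or 0 when the
   capacity constraint is violated (death-penalty handling). */
static int fitness(const int c[]) {
    int v = 0, w = 0;
    for (int i = 0; i < N_ITEMS; i++)
        if (c[i]) { v += VALS[i]; w += WTS[i]; }
    return w <= CAP ? v : 0;
}

/* Binary tournament: pick two random individuals, keep the fitter. */
static const int *tournament(int pop[][N_ITEMS]) {
    int a = rand() % POP, b = rand() % POP;
    return fitness(pop[a]) >= fitness(pop[b]) ? pop[a] : pop[b];
}

/* Runs the GA and returns the best fitness ever observed. */
int ga_knapsack(void) {
    int pop[POP][N_ITEMS], next[POP][N_ITEMS], best = 0;
    srand(42);                        /* fixed seed for reproducibility */
    for (int i = 0; i < POP; i++)     /* Step 1: random initial population */
        for (int j = 0; j < N_ITEMS; j++)
            pop[i][j] = rand() % 2;
    for (int g = 0; g < GENS; g++) {
        for (int i = 0; i < POP; i++) {          /* Step 2: evaluate */
            int f = fitness(pop[i]);
            if (f > best) best = f;
        }
        for (int i = 0; i < POP; i++) {          /* Step 3: reproduce */
            const int *p1 = tournament(pop), *p2 = tournament(pop);
            int cut = rand() % N_ITEMS;          /* single-point crossover */
            for (int j = 0; j < N_ITEMS; j++)
                next[i][j] = j < cut ? p1[j] : p2[j];
            if (rand() % 100 < 5)                /* flip mutation, 5% rate */
                next[i][rand() % N_ITEMS] ^= 1;
        }
        for (int i = 0; i < POP; i++)            /* Step 4: next generation */
            for (int j = 0; j < N_ITEMS; j++)
                pop[i][j] = next[i][j];
    }
    return best;
}
```

On this instance the true optimum is 54 (items with weights 2, 3, 5, 1, 4); the GA normally gets at or near it within a few dozen generations.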


3.2.3 Flowchart explanation

Step 1: Create a random initial population
An initial population is created from a random selection of solutions. These solutions are represented by chromosomes, as in living organisms. A chromosome is a packet of genetic information organized in a standard way that completely defines an individual solution. The genetic structure (the way in which that information is packed and defined) enables the solutions to be manipulated. The genetic operators (the ways in which that information can be manipulated) enable the solutions to reproduce and evolve.

Step 2: Evaluate Fitness
A fitness value is assigned to each solution (chromosome) depending on how close it actually is to solving the problem. Therefore we need to define the problem, model it, simulate it, or have a data set as sample answers. Each possible solution has to be tested on the problem and the answer evaluated (or marked) on how good it is. The overall mark of each solution relative to the marks of all solutions produces a fitness ranking.

Step 3: Produce Next Generation
Those chromosomes with a higher fitness value are more likely to produce offspring. The population for the next generation is produced using the genetic operators: reproduction by copy or crossover, and mutation, applied to the chromosomes according to the selection rule. This rule states that the fitter an individual is, the higher the probability it has to reproduce.

Step 4: Next Generation or Termination
If the population in the last generation contains a solution that produces an output close enough or equal to the desired answer, then the problem has been solved. This is the ideal termination criterion of the evolution. If this is not the case, then the new generation will go through the same process as its parents did, and the evolution will continue. This will iterate until a solution is reached or another of the termination criteria is satisfied. A termination criterion that must always be included is a time-out (either as computing time or as the number of generations evaluated), since one drawback of evolutionary programming is that it is very difficult (impossible most of the time) to know whether the ideal termination criterion will ever be satisfied, or when.

3.3 GA operators and terminologies

3.3.1 Encoding

Encoding is the process of representing a solution in the form of a string that conveys the necessary information. Just as each gene in a chromosome controls a particular characteristic of the individual, each bit in the string represents a characteristic of the solution. Some encoding methods are:

Binary encoding
The most common method of encoding. Chromosomes are strings of 1s and 0s, and each position in the chromosome represents a particular characteristic of the problem.

Figure 3.3: Binary encoding example

Permutation encoding
Useful in ordering problems such as the TSP, where every chromosome is a string of numbers, each of which represents a city to be visited.

Figure 3.4: Permutation encoding example

Value encoding

Used in problems where complicated values, such as real numbers, are needed and where binary encoding would not suffice. Good for some problems, but it is often necessary to develop specific crossover and mutation techniques for these chromosomes.

Figure 3.5: Value encoding example

3.3.2 Fitness function

A fitness function value quantifies the optimality of a solution. The value is used to rank a particular solution against all the other solutions: a fitness value is assigned to each solution depending on how close it actually is to the optimal solution of the problem. An ideal fitness function correlates closely with the goal and is quick to compute.
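For the knapsack problem, a natural fitness function is simply the total value of the packed items, with infeasible chromosomes scored as zero. A minimal sketch, on a hypothetical four-item instance:

```c
#include <assert.h>

/* Hypothetical instance used only for illustration. */
static const int VALUES[4]  = {10, 40, 30, 50};
static const int WEIGHTS[4] = { 5,  4,  6,  3};
static const int CAPACITY   = 10;

/* Fitness of a binary chromosome: the total value of the selected
   items, or 0 when the weight limit is violated. */
int eval_fitness(const int chrom[4]) {
    int v = 0, w = 0;
    for (int i = 0; i < 4; i++)
        if (chrom[i]) { v += VALUES[i]; w += WEIGHTS[i]; }
    return w <= CAPACITY ? v : 0;
}
```

Scoring infeasible chromosomes as zero is the simplest option; section 3.3.6 discusses gentler penalty schemes.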

3.3.3 Selection

Selection is the process that determines which solutions are to be preserved and allowed to reproduce and which ones deserve to die out. The primary objective of the selection operator is to emphasize the good solutions and eliminate the bad solutions in a population while keeping the population size constant. The rule is: select the best, discard the rest. Practically, we identify the good solutions in a population, make multiple copies of them, and eliminate bad solutions so that the copies of good solutions take their place in the population. There are different techniques to implement selection in genetic algorithms:

Tournament selection
In tournament selection, several tournaments are played among a few individuals chosen at random from the population. The winner of each tournament is selected for the next generation. Selection pressure can be adjusted easily by changing the tournament size: weak individuals have a smaller chance of being selected if the tournament size is large.

Roulette Wheel selection

Each string in the current population has a slot assigned to it which is in proportion to its fitness. We spin the weighted roulette wheel thus defined n times (where n is the total number of solutions). Each time the roulette wheel stops, the string corresponding to that slot is selected. Strings that are fitter are assigned a larger slot and hence have a better chance of appearing in the new population.

Rank selection
Roulette wheel selection runs into problems when the fitness values differ very much: chromosomes with low fitness values have very little chance of being selected. This problem can be avoided using rank selection. Every chromosome receives a fitness from the ranking: the worst has fitness 1 and the best has fitness N. Rank selection preserves diversity but results in slow convergence.

Elitism
Elitism is a method which copies the best chromosomes to the new offspring population before crossover and mutation. When creating a new population by crossover or mutation, the best chromosome might otherwise be lost. Elitism forces GAs to retain some number of the best individuals at each generation, and it has been found that elitism significantly improves performance.
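The roulette-wheel scheme can be sketched as follows; separating the random spin from the wheel walk keeps the function deterministic and easy to test (in the GA itself, r would be drawn at random in [0, total fitness)):

```c
#include <assert.h>

/* Returns the index whose fitness slice contains the spin value r,
   where 0 <= r < sum of all fitness values. Fitter individuals own
   proportionally larger slices and are therefore picked more often. */
int roulette_pick(const int fit[], int n, int r) {
    for (int i = 0; i < n; i++) {
        if (r < fit[i]) return i;
        r -= fit[i];
    }
    return n - 1;   /* guard for r at the upper boundary */
}
```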

3.3.4 Crossover

Crossover is the process in which two chromosomes (strings) combine their genetic material (bits) to produce a new offspring which possesses characteristics of both. Two strings are picked from the mating pool at random to cross over. The method chosen depends on the encoding method. Crossover between two good solutions MAY NOT ALWAYS yield a better or equally good solution; but since the parents are good, the probability of the child being good is high. If an offspring is a poor solution, it will be removed in the next iteration during selection. The best-known methods are:

Single Point Crossover

A random point is chosen on the individual chromosomes (strings) and the genetic material is exchanged at this point.

Figure 3.6: Single Point Crossover example
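A sketch of the operator, assuming binary chromosomes stored as int arrays (in a real GA the cut point would be drawn at random):

```c
/* Single-point crossover: each child copies one parent up to the
   cut point and the other parent from the cut point onwards. */
void single_point_crossover(const int p1[], const int p2[], int n,
                            int cut, int c1[], int c2[]) {
    for (int i = 0; i < n; i++) {
        c1[i] = i < cut ? p1[i] : p2[i];
        c2[i] = i < cut ? p2[i] : p1[i];
    }
}
```

Two-point crossover works the same way with a second cut, swapping only the segment between the two cuts.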

Two Point Crossover Two random points are chosen on the individual chromosomes (strings) and the genetic material is exchanged at these points.

Figure 3.7: Two Point Crossover example

Uniform Crossover Each gene (bit) is selected randomly from one of the corresponding genes of the parent chromosomes. Uniform Crossover yields ONLY 1 ospring.

Figure 3.8: Uniform Crossover example


3.3.5 Mutation

Mutation is the process by which a string is deliberately changed so as to maintain diversity in the population. The Mutation Probability determines how often parts of a chromosome will be mutated. For chromosomes using Binary Encoding, randomly selected bits are inverted. We have 3 types of mutation: flipping, interchanging and reversing.

Flipping

Figure 3.9: Flipping mutation example

Interchanging

Figure 3.10: Interchanging mutation example

Reversing
The number of bits to be inverted depends on the Mutation Probability.

Figure 3.11: Reversing mutation example
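Flipping and reversing on a binary chromosome take only a few lines each (in a real GA the loci would be chosen at random according to the Mutation Probability):

```c
/* Flipping mutation: invert the bit at the chosen locus. */
void mutate_flip(int chrom[], int locus) {
    chrom[locus] ^= 1;
}

/* Reversing mutation: reverse the order of the genes between
   the two chosen loci (inclusive). */
void mutate_reverse(int chrom[], int lo, int hi) {
    while (lo < hi) {
        int t = chrom[lo];
        chrom[lo++] = chrom[hi];
        chrom[hi--] = t;
    }
}
```

Interchanging is simply the swap in the inner loop of mutate_reverse applied to two arbitrary loci.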

3.3.6 Penalizing

Rejecting infeasible individuals outright, the so-called death penalty heuristic, is a popular option in many evolutionary techniques (e.g., evolution strategies). Note that the rejection of infeasible individuals offers a few simplifications of the algorithm.


The method of eliminating infeasible solutions from a population may work reasonably well when the feasible search space is convex and constitutes a reasonable part of the whole search space (e.g., evolution strategies do not allow equality constraints, since with such constraints the ratio between the sizes of the feasible and infeasible search spaces is zero). Otherwise such an approach has serious limitations. For example, for many search problems where the initial population consists of infeasible individuals only, it might be essential to improve them (as opposed to rejecting them). Moreover, quite often the system can reach the optimum more easily if it is possible to cross an infeasible region (especially in non-convex feasible search spaces).

Penalizing infeasible individuals is the most common approach in the genetic algorithms community. Several researchers have studied heuristics for the design of penalty functions. Some hypotheses were formulated by Richardson, Siedlecki and Sklanski in 1989:
- penalties which are functions of the distance from feasibility are better performers than those which are merely functions of the number of violated constraints;
- for a problem having few constraints, and few full solutions, penalties which are solely functions of the number of violated constraints are not likely to find solutions;
- good penalty functions can be constructed from two quantities, the maximum completion cost and the expected completion cost;
- penalties should be close to the expected completion cost, but should not frequently fall below it. The more accurate the penalty, the better the solutions found. When the penalty often underestimates the completion cost, the search may not find a solution;
- a genetic algorithm with a variable penalty coefficient outperforms a fixed penalty factor algorithm, where the variability of the penalty coefficient is determined by a heuristic rule.
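Following the first hypothesis above, a distance-based penalty for the knapsack problem can subtract a cost proportional to the excess weight instead of rejecting the chromosome outright. The penalty factor of 2 here is an arbitrary illustrative choice, not a recommended value:

```c
/* Penalized fitness: total value minus a penalty proportional to
   the distance from feasibility (the excess weight), clamped at 0.
   Infeasible chromosomes thus keep a graded, nonzero score that
   lets the search cross infeasible regions. */
int penalized_fitness(const int val[], const int wt[], int n,
                      const int chrom[], int cap) {
    int v = 0, w = 0;
    for (int i = 0; i < n; i++)
        if (chrom[i]) { v += val[i]; w += wt[i]; }
    if (w > cap) v -= 2 * (w - cap);   /* distance-based penalty */
    return v > 0 ? v : 0;
}
```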

3.4 Advantages Of GAs

3.4.1 Global Search Methods

GAs search for the function optimum starting from a population of points of the function domain, not a single one. This characteristic suggests that GAs are global search methods. They can, in fact, climb many peaks in parallel, reducing the probability of getting trapped in a local optimum, which is one of the drawbacks of traditional optimization methods.

3.4.2 Blind Search Methods

GAs use only the information about the objective function. They do not require knowledge of the first derivative or any other auxiliary information, which allows a number of problems to be solved without formulating restrictive assumptions. For this reason, GAs are often called blind search methods.

3.4.3 Probabilistic rules

GAs use probabilistic transition rules during iterations, unlike traditional methods that use fixed transition rules. This makes them more robust and applicable to a larger range of problems.

3.4.4 Parallel machines applicability

GAs can easily be used on parallel machines. Since in real-world design optimization problems most of the computational time is spent evaluating solutions, with multiple processors all solutions in a population can be evaluated in a distributed manner. This reduces the overall computational time substantially.
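For illustration, here is a minimal sketch of this idea in C (our own example, not part of the project's code; the chromosome struct is an assumption). Each chromosome is scored independently of the others, so the evaluation loop can be annotated with an OpenMP pragma and distributed over the available processors; compiled without OpenMP support, the pragma is simply ignored and the loop runs serially.

```c
/* Minimal sketch (assumed 8-gene chromosome) showing why fitness
   evaluation parallelizes: each individual is scored independently. */
#define GENES 8

typedef struct {
    int genes[GENES];
    int fitness;
} chrom;

/* score one chromosome against the profit table */
static int score(const chrom *c, const int profit[])
{
    int s = 0;
    for (int i = 0; i < GENES; ++i)
        s += c->genes[i] * profit[i];
    return s;
}

/* with multiple processors, every iteration of this loop can run on a
   different core; without OpenMP it simply runs serially */
void evaluate_all(chrom pop[], int pop_size, const int profit[])
{
    #pragma omp parallel for
    for (int i = 0; i < pop_size; ++i)
        pop[i].fitness = score(&pop[i], profit);
}
```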


Chapter 4 Implementation
4.1 Programming languages used

In order to elaborate an application to solve KPs, we used two programming languages:

C: The calculus part (the details of the genetic algorithm) is developed in the C language, then included in the Java project as a library.

Java: To have an interactive application, we developed a graphical interface where the user can set the KP parameters. The main classes are:
* KnapSacProject.java: the first class to be run.
* Interface.java: contains the graphical interface.
* algorithm.java: loads the C files to calculate and solve the problem.
* StatusColumnCellRenderer.java: manages the coloring of the cells in the result table.

4.2 Graphical Interface

After resolution, if an item is chosen as part of the optimal solution, the corresponding cell is colored in green and the user should take it; otherwise it is colored in red and the user should leave it.


Figure 4.1: Graphical interface: setting the parameters

Figure 4.2: Graphical interface: showing results

4.3 C code listing

The C files are converted into headers so that Java can load them. The most important functions are as follows:

gaParameters.h
/* GA parameters: population size and how many children per generation */
#define POP 70
#define CHILDREN 40
/* the rate for penalizing the unsatisfied constraint */
#define RATE 10

/* How we represent each individual, with its chromosome,
   its fitness, and its probability */
typedef struct crom {
    char cromo[150];
    int fitness;
    int prob;
} crom;

int object(crom cromo, int ProfitT[], int size);
int penalty(crom cromo, int CostT[], int Cmax, int size);
void evaluate(crom *cromo, int ProfitT[], int CostT[], int Cmax, int size);
void crossover(crom par1, crom par2, crom *son1, crom *son2, int size);
void mutation(crom *cromo, int size);
void probability(crom *pop, int sizepop);
void select(crom *pop, int sizepop, int nselections, crom *result, int size);
int random(int start, int end);
int iround(double x);

galib.h
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>
#include <time.h>
#include "gaParameters.h"

void ga_solve(int *result, int *CostT, int *ProfitT, int Cmax, int GENS,
              int TURNS, int size)
{
    int i, j, k, gen, t;
    time_t start, stop;
    time(&start);

    /* declaration of the population, their children,
       an auxiliary variable and the best individual ever */
    crom pop[POP], children[CHILDREN], temp[POP + CHILDREN], best, best_turn;

    /* in the beginning the best individual is the zero one, so no need to
       set any more parameters since we'll just use the fitness for comparison */
    best.fitness = 0;

    /* run this program several times (TURNS) to get a report
       of its performance */
    for (t = 0; t < TURNS; ++t) {
        best_turn.fitness = 0;

        /* generate the initial population at random
           (or pseudo-random, as you wish) */
        for (i = 0; i < POP; ++i) {
            for (j = 0; j < size; ++j) {
                pop[i].cromo[j] = random(0, 2);
            }
        }
        gen = 0;
        /* repeat the reproduction steps until the max number
           of generations is reached */
        while (gen++ < GENS) {
            /* First, let's see how good they are... */
            for (i = 0; i < POP; ++i)
                evaluate(&pop[i], ProfitT, CostT, Cmax, size);
            /* ...and what is the chance of each one */
            probability(pop, POP);

            /* two by two, reproduce until the desired
               number of children is reached */
            for (i = 0; i < CHILDREN; i += 2) {
                select(pop, POP, 2, temp, size);
                crossover(temp[0], temp[1], &children[i], &children[i + 1], size);
            }

            /* Are our children good enough? */
            for (i = 0; i < CHILDREN; ++i)
                evaluate(&children[i], ProfitT, CostT, Cmax, size);

            /* a mutation experiment on the population */
            i = random(0, POP);
            mutation(&pop[i], size);
            /* let's see how good our mutant is */
            evaluate(&pop[i], ProfitT, CostT, Cmax, size);

            /* gather together the oldies and the newbies */
            /* first the oldies */
            for (i = 0; i < POP; ++i) {
                temp[i].fitness = pop[i].fitness;
                for (j = 0; j < size; ++j)
                    temp[i].cromo[j] = pop[i].cromo[j];
            }
            /* and now our children */
            for (k = 0; i < POP + CHILDREN; ++i, ++k) {
                temp[i].fitness = children[k].fitness;
                for (j = 0; j < size; ++j)
                    temp[i].cromo[j] = children[k].cromo[j];
            }

            /* select the best of this generation */
            for (i = 1, k = 0; i < POP + CHILDREN; ++i)
                if (temp[i].fitness > temp[k].fitness) {
                    /* we are looking for someone who respects the constraints */
                    if (!penalty(temp[i], CostT, Cmax, size))
                        k = i;
                }
            /* store it for later */
            if (temp[k].fitness > best_turn.fitness) {
                for (i = 0; i < size; ++i)
                    best_turn.cromo[i] = temp[k].cromo[i];
                best_turn.fitness = temp[k].fitness;
            }

            /* who will live and who shall die? */
            probability(temp, POP + CHILDREN);
            select(temp, POP + CHILDREN, POP, pop, size);
        } /* end of this generation */

        if (best_turn.fitness > best.fitness) {
            for (i = 0; i < size; ++i)
                best.cromo[i] = best_turn.cromo[i];
            best.fitness = best_turn.fitness;
        }
    } /* end of this turn */

    /* and the best individual ever was... */
    for (i = 0; i < size; ++i) {
        result[i] = best.cromo[i];
    }
    /* the last two slots are reserved for the objective function and the penalty */
    result[size] = object(best, ProfitT, size);
    result[size + 1] = penalty(best, CostT, Cmax, size);

    for (i = 0; i < size; ++i) {
        printf("The %d-th gene is equal to %d\n", i + 1, best.cromo[i]);
    }
    printf("The best fitness was: %d\n", object(best, ProfitT, size));
    printf("Penalty: %d\n", penalty(best, CostT, Cmax, size));

    time(&stop);
    double dif = difftime(stop, start);
    printf("Elapsed time is %.4lf seconds.\n", dif);
}

/* this function calculates the profit (objective) function */
int object(crom cromo, int ProfitT[], int size)
{
    int objt = 0, i;
    for (i = 0; i < size; ++i) {
        objt += cromo.cromo[i] * ProfitT[i];
    }
    return objt;
}

/* this function calculates the penalty function */
int penalty(crom cromo, int CostT[], int Cmax, int size)
{
    int P = 0, Pn, i;
    for (i = 0; i < size; ++i) {
        P += cromo.cromo[i] * CostT[i];
    }
    /* if (P - Cmax) > 0 (the amount by which the constraint is exceeded),
       the value returned will be penalty rate * this difference;
       else it will be 0, for the constraint is respected */
    Pn = ((P - Cmax) > 0) ? (RATE * (P - Cmax)) : 0;
    return Pn;
}

/* evaluate each individual through the objective and the penalty function */
void evaluate(crom *cromo, int ProfitT[], int CostT[], int Cmax, int size)
{
    int objt, P;
    /* apply the objective function to it */
    objt = object((*cromo), ProfitT, size);
    /* and its penalty for not respecting our constraints */
    P = penalty((*cromo), CostT, Cmax, size);
    /* guarantee that the fitness is always positive, because we don't
       want the probability function to give negative results; if the
       penalty is too large, zero the fitness, taking the individual out
       of further selection (probability of 0% to be chosen) */
    cromo->fitness = (objt - P > 0) ? (objt - P) : 0;
}

/* the breeding dance: two individuals exchange part of their chromosomes */
void crossover(crom par1, crom par2, crom *son1, crom *son2, int size)
{
    int point, i;
    /* get a point between 0 and size */
    point = random(0, size);
    /* first part: copy the genes from parent 1 to child 1
       and from parent 2 to child 2 */
    for (i = 0; i < point; ++i) {
        son1->cromo[i] = par1.cromo[i];
        son2->cromo[i] = par2.cromo[i];
    }
    /* here we cross their chromosomes, now copying from
       parent 2 to child 1 and from parent 1 to child 2 */
    for (; i < size; ++i) {
        son1->cromo[i] = par2.cromo[i];
        son2->cromo[i] = par1.cromo[i];
    }
}

/* the mutation, where one individual may have his life changed...
   or just die for it */
void mutation(crom *cromo, int size)
{
    int point;
    /* get a point between 0 and size */
    point = random(0, size);
    /* just invert the bit at the chosen point (bless the binary system) */
    cromo->cromo[point] = !cromo->cromo[point];
}

/* a better rand(): a well-defined range of values.
   NB: RAND_MAX is the maximum value returned by rand(). */
int random(int start, int end)
{
    return (int) ((((double) rand() / ((double) RAND_MAX + 1)) * end) + start);
}

/* for getting a total probability near 100% we must round the
   numbers the right way, using the closest-to-even rule */
int iround(double x)
{
    int r;
    double v;

    /* get the first decimal digit */
    v = x - (int) x;
    /* and turn it into an integer */
    v = v * 10.0;
    r = (int) v;
    /* if the first digit is greater than 5 we ceil the number */
    if (r > 5)
        v = ceil(x);
    /* less than 5, we floor it */
    else if (r < 5)
        v = floor(x);
    /* else we round to the closest even number */
    else if ((int) x % 2)
        v = ceil(x);
    else
        v = floor(x);
    r = (int) v;
    return r;
}

/* calculate the probability of each individual
   to be chosen, based on its fitness */
void probability(crom *pop, int sizepop)
{
    int i;
    double prob;
    int sum = 0, prob_sum = 0;
    /* calculate the sum of all fitnesses */
    for (i = 0; i < sizepop; ++i) {
        sum += pop[i].fitness;
    }
    /* for each one we divide its fitness by the sum of all and
       multiply by 100, resulting in its percentage */
    for (i = 0; i < sizepop; ++i) {
        prob = (double) (100 * pop[i].fitness) / sum;
        pop[i].prob = iround(prob);
        /* just in case we want to see if we really have 100%
           (and rarely we do) */
        prob_sum += iround(prob);
    }
}

void select(crom *pop, int sizepop, int nselections, crom *result, int size)
{
    int i, j, k, choice, theone, tries = 0;
    char *h_pop;
    /* use this dynamic array to avoid choosing the same individual twice */
    h_pop = (char *) malloc(sizeof(char) * sizepop);
    /* initialize it to 0, where 0 is not chosen and 1 is already chosen */
    memset((void *) h_pop, '\0', sizepop);
    for (i = 0; i < nselections; ++i) {
        tries = 0;
        do {
            j = 0;
            theone = 0;
            /* 0 to 100 percent */
            choice = random(1, 100);
            /* sum the probabilities until we reach the
               percentage randomly chosen */
            while (theone < choice && j < sizepop)
                theone += pop[j++].prob;
            /* get back to the chosen one */
            --j;
            /* after the loop, j stores the index of the chosen one,
               but in case we have passed the limit... */
            j = j % sizepop;
            if (j < 0)
                j = 0;
            /* loop until we choose someone not chosen before,
               or we have tried more than 20 times */
        } while (h_pop[j] && tries++ < 20);
        /* this one is now chosen */
        h_pop[j] = 1;
        /* do the copy dance */
        for (k = 0; k < size; ++k)
            result[i].cromo[k] = pop[j].cromo[k];
        /* only the fitness is copied, for the probability will be different */
        result[i].fitness = pop[j].fitness;
    }
    /* let's not waste memory */
    free(h_pop);
}

4.4 Java code listing

KnapSacProject.java

package knapsacproject;

public class KnapSacProject {
    public static void main(String[] args) {
        Interface in = new Interface();
        in.setVisible(true);
    }
}

Interface.java

package knapsacproject;

import com.algorithm.jni.algorithm;
import javax.swing.table.DefaultTableModel;

public class Interface extends javax.swing.JFrame {

    public native double[] geneticAlgorithm(int[] cost, int[] profit);

    public Interface() {
        initComponents();
    }

    private void jButton1ActionPerformed(java.awt.event.ActionEvent evt) {
        DefaultTableModel model = (DefaultTableModel) jTable1.getModel();
        Object[] objects = new Object[4];
        objects[0] = model.getRowCount() + 1;
        objects[1] = null;
        objects[2] = null;
        objects[3] = null;
        model.addRow(objects);
    }

    private void jButton2ActionPerformed(java.awt.event.ActionEvent evt) {
        DefaultTableModel model = (DefaultTableModel) jTable1.getModel();
        int[] rows = jTable1.getSelectedRows();
        for (int i = 0; i < rows.length; i++) {
            model.removeRow(rows[i] - i);
        }
        for (int i = 0; i < jTable1.getRowCount(); i++) {
            model.setValueAt(i + 1, i, 0);
        }
    }

    private void jButton3ActionPerformed(java.awt.event.ActionEvent evt) {
        jTextField17.setText("100");
        jTextField18.setText("1");
    }

    public int[] getTableData(DefaultTableModel model, Integer colIndex) {
        int nRow = model.getRowCount();
        int[] tableData = new int[nRow];
        for (int i = 0; i < nRow; i++) {
            tableData[i] = (int) model.getValueAt(i, colIndex);
        }
        return tableData;
    }

    private void jButton4ActionPerformed(java.awt.event.ActionEvent evt) {
        DefaultTableModel model = (DefaultTableModel) jTable1.getModel();
        int nRow = model.getRowCount();
        try {
            algorithm algo = new algorithm(getTableData(model, 1),
                    getTableData(model, 2),
                    Integer.parseInt(jTextField17.getText()),
                    Integer.parseInt(jTextField18.getText()),
                    Integer.parseInt(jTextField1.getText()));
            int[] result = algo.getResult();
            for (int i = 0; i < nRow; i++) {
                if (result[i] == 1)
                    model.setValueAt("take", i, 3);
                else
                    model.setValueAt("leave", i, 3);
            }
            jTable1.getColumnModel().getColumn(3)
                   .setCellRenderer(new StatusColumnCellRenderer());
            jTextField19.setText(Integer.toString(result[nRow]));
            jTextField20.setText(Integer.toString(result[nRow + 1]));
        } catch (Exception e) {
            System.out.println(e.getClass());
        }
    }

    public static void main(String args[]) {
        java.awt.EventQueue.invokeLater(new Runnable() {
            public void run() {
                new Interface().setVisible(true);
            }
        });
    }
}

algorithm.java
package com.algorithm.jni;

import com.Clibrary.jni.NativeUtils;
import java.io.IOException;

public class algorithm {

    public native int[] geneticAlgorithm(int[] cost, int[] profit,
            int cmax, int gens, int turns);

    static {
        try {
            NativeUtils.loadLibraryFromJar("/dist/libMyCLibrary.so");
        } catch (IOException e) {
            e.printStackTrace(); // This is probably not the best way to handle the exception :)
        }
    }

    protected int[] cost, profit, result;
    protected int gens, turns, cmax;

    public algorithm(int[] cost, int[] profit, int gens, int turns, int cmax) {
        this.cost = cost;
        this.profit = profit;
        this.gens = gens;
        this.turns = turns;
        this.cmax = cmax;
    }

    public int[] getResult() {
        return geneticAlgorithm(cost, profit, cmax, gens, turns);
    }

    public static void main(String[] args) {
    }
}

StatusColumnCellRenderer.java

package knapsacproject;

import java.awt.Color;
import java.awt.Component;
import javax.swing.JLabel;
import javax.swing.JTable;
import javax.swing.table.DefaultTableCellRenderer;
import javax.swing.table.DefaultTableModel;

public class StatusColumnCellRenderer extends DefaultTableCellRenderer {

    @Override
    public Component getTableCellRendererComponent(JTable table, Object value,
            boolean isSelected, boolean hasFocus, int row, int col) {
        // Cells are by default rendered as a JLabel.
        JLabel l = (JLabel) super.getTableCellRendererComponent(table, value,
                isSelected, hasFocus, row, col);
        // Get the status for the current row.
        DefaultTableModel model = (DefaultTableModel) table.getModel();
        if (model.getValueAt(row, col).equals("leave")) {
            l.setBackground(Color.RED);
        } else {
            l.setBackground(Color.GREEN);
        }
        // Return the JLabel which renders the cell.
        return l;
    }
}

4.5 Tests and results

As required, we apply our application to the resolution of the following 10 examples. In each case we note the optimal result and the time needed to compute it. Note that for time measurement we use Windows's command interpreter (cmd.exe).

Test no 1: Cmax = 27

Item no   profit   cost   leave vs take
1         5        2      take
2         8        1      take
3         6        1      take
4         3        8      leave
5         7        9      take
6         5        3      take
7         2        4      take
8         9        3      take
9         2        6      leave
10        7        3      take

Objective: 49    Time (ms): 0.0    Penalty: 0

Test no 2: Cmax = 20

Item no   profit   cost   leave vs take
1         10       5      take
2         7        6      leave
3         9        7      leave
4         8        8      leave
5         1        6      leave
6         1        4      leave
7         7        9      leave
8         7        7      take
9         7        8      take
10        3        4      leave

Objective: 26    Time (ms): 0.0    Penalty: 0

Test no 3: Cmax = 58

Item no   profit   cost   leave vs take
1         2        2      leave
2         7        1      leave
3         6        2      leave
4         3        6      take
5         1        10     take
6         9        4      take
7         8        3      take
8         8        10     leave
9         8        2      leave
10        3        9      take
11        5        8      take
12        2        2      take
13        6        6      take
14        3        6      take
15        5        5      take
16        4        9      take
17        9        8      leave
18        4        10     take
19        9        5      leave
20        3        8      take

Objective: 86    Time (ms): 0.0    Penalty: 0

Test no 4: Cmax = 38

Item no   profit   cost   leave vs take
1         1        8      leave
2         9        8      leave
3         6        6      take
4         4        10     leave
5         5        10     leave
6         9        3      take
7         7        1      take
8         7        7      leave
9         10       3      take
10        1        6      take
11        4        1      take
12        7        1      take
13        8        6      leave
14        1        7      leave
15        8        1      take
16        10       8      take
17        3        3      leave
18        4        10     leave
19        2        6      leave
20        10       4      take

Objective: 72    Time (ms): 0.0    Penalty: 0

Test no 5: Cmax = 86

Item no   profit   cost   leave vs take
1         4        9      leave
2         8        9      take
3         3        10     leave
4         9        6      take
5         8        4      take
6         5        7      take
7         3        2      take
8         7        8      take
9         7        1      leave
10        8        7      leave
11        6        1      take
12        7        3      take
13        7        2      take
14        5        9      leave
15        10       2      take
16        2        4      take
17        3        3      leave
18        5        8      take
19        8        3      take
20        5        4      leave
21        2        10     leave
22        7        2      take
23        1        2      take
24        7        3      take
25        5        6      take
26        2        5      leave
27        6        5      leave
28        10       1      take
29        9        10     leave
30        3        3      take

Objective: 118    Time (ms): 0.0    Penalty: 0

Test no 6: Cmax = 57

Item no   profit   cost   leave vs take
1         4        3      take
2         3        4      leave
3         6        7      leave
4         5        8      take
5         9        7      take
6         2        6      leave
7         7        9      leave
8         7        9      take
9         8        1      take
10        4        8      leave
11        7        10     leave
12        4        10     leave
13        2        9      leave
14        10       1      take
15        4        4      take
16        6        4      take
17        4        8      leave
18        10       6      take
19        7        6      leave
20        8        6      leave
21        9        9      leave
22        2        7      leave
23        4        5      leave
24        6        5      take
25        1        7      leave
26        8        10     leave
27        9        6      leave
28        4        4      leave
29        10       6      take
30        2        10     leave

Objective: 79    Time (ms): 0.0    Penalty: 0

Test no 7: Cmax = 109

Item no   profit   cost   leave vs take
1         3        6      take
2         2        8      leave
3         4        7      leave
4         5        5      leave
5         9        1      leave
6         7        4      take
7         8        10     leave
8         5        10     leave
9         8        3      take
10        3        8      leave
11        6        10     take
12        3        7      take
13        8        10     take
14        9        8      take
15        6        8      leave
16        9        2      take
17        3        4      leave
18        5        5      leave
19        2        9      take
20        8        1      take
21        5        3      take
22        10       3      take
23        5        8      leave
24        8        4      take
25        8        10     take
26        1        6      leave
27        6        7      leave
28        1        4      take
29        3        9      leave
30        7        7      leave
31        2        7      take
32        1        5      take
33        3        6      take
34        3        6      leave
35        2        3      take
36        10       6      take
37        7        2      leave
38        3        8      take
39        4        5      take
40        1        4      leave

Objective: 135    Time (ms): 0.0    Penalty: 0

Test no 8: Cmax = 61

Item no   profit   cost   leave vs take
1         5        1      leave
2         3        10     leave
3         3        3      leave
4         9        3      take
5         9        1      take
6         4        10     leave
7         6        2      leave
8         4        3      leave
9         10       5      leave
10        6        10     take
11        3        8      leave
12        5        4      leave
13        10       7      leave
14        5        1      leave
15        3        8      leave
16        5        3      leave
17        5        7      leave
18        4        9      leave
19        4        7      take
20        8        8      take
21        9        3      take
22        4        5      take
23        4        1      take
24        1        7      take
25        5        5      leave
26        1        7      leave
27        8        1      take
28        5        2      leave
29        2        2      take
30        3        1      leave
31        3        9      leave
32        4        4      take
33        8        3      leave
34        10       3      leave
35        7        2      leave
36        8        2      leave
37        7        4      take
38        7        2      leave
39        4        2      leave
40        2        1      leave

Objective: 72    Time (ms): 0.0    Penalty: 0

Test no 9: Cmax = 146

Item no   profit   cost   leave vs take
1         5        7      take
2         1        10     take
3         8        4      take
4         6        2      take
5         10       6      leave
6         9        6      leave
7         3        1      take
8         7        7      take
9         6        9      leave
10        5        8      leave
11        6        6      leave
12        10       7      take
13        9        5      leave
14        1        1      leave
15        5        10     leave
16        6        1      take
17        2        7      take
18        9        10     take
19        8        2      take
20        6        3      take
21        1        6      take
22        1        2      take
23        8        1      take
24        9        5      take
25        7        8      take
26        6        8      leave
27        9        2      take
28        6        10     leave
29        8        1      take
30        1        7      take
31        4        2      take
32        3        4      take
33        1        1      take
34        7        6      leave
35        6        8      take
36        9        10     leave
37        9        9      take
38        10       9      leave
39        6        8      take
40        2        4      take
41        6        3      leave
42        1        7      take
43        6        9      take
44        1        5      take
45        2        3      leave
46        5        3      take
47        2        5      take
48        5        8      take
49        9        3      leave
50        3        7      leave

Objective: 188    Time (ms): 0.0    Penalty: 0

Test no 10: Cmax = 90

Item no   profit   cost   leave vs take
1         2        7      leave
2         9        7      take
3         8        2      take
4         1        8      leave
5         3        9      leave
6         2        6      leave
7         9        5      take
8         4        5      take
9         5        1      take
10        7        2      take
11        10       6      take
12        9        10     leave
13        3        3      leave
14        3        4      take
15        7        3      leave
16        9        9      take
17        10       8      leave
18        4        10     leave
19        6        1      take
20        10       2      take
21        3        10     leave
22        9        9      leave
23        6        2      take
24        8        2      take
25        7        8      leave
26        5        4      take
27        1        4      leave
28        7        6      leave
29        5        10     leave
30        10       9      take
31        2        3      leave
32        8        2      leave
33        7        2      take
34        1        2      leave
35        6        10     leave
36        6        7      take
37        10       6      leave
38        1        7      take
39        6        2      leave
40        6        1      leave
41        5        8      leave
42        5        4      leave
43        4        5      leave
44        5        5      take
45        1        1      leave
46        8        2      leave
47        2        5      leave
48        3        6      leave
49        5        10     take
50        9        2      leave

Objective: 140    Time (ms): 0.0    Penalty: 0


Conclusion

Throughout this project, we built a graphical interface that allows us to easily manipulate and solve the 0-1 KP. The design took into consideration the mathematical background of GAs. As a further step, we could investigate how to pass the computation time measured in C to the Java application in order to display it on the interface. Moreover, we could broaden our mastery of solution methods by including other approaches such as dynamic programming, ant colony optimization, and the Lagrangian relaxation method. This would let us build a benchmark comparing these different methods in terms of speed of computation.
