
A greedy heuristic and simulated annealing approach for a bicriteria flowshop scheduling problem with precedence constraints - A practical manufacturing case
Samer Hanoun · Saeid Nahavandi

Abstract This paper considers a flowshop scheduling problem with two criteria, where the primary (dominant) criterion is the minimization of material waste and the secondary criterion is the minimization of the total tardiness time. The decision maker does not authorize trade-offs between the criteria. In view of the nature of this problem, a hierarchical (lexicographical) optimization approach is followed. An effective greedy heuristic is proposed to minimize the material waste, and a simulated annealing (SA) algorithm is developed to minimize the total tardiness time, subject to the constraint computed for the primary criterion. The solution accuracy is compared with the optimal solution obtained by complete enumeration for randomly generated problem sets. From the results, it is observed that the greedy heuristic produces the optimal solution and that the SA solution does not differ significantly from the optimal solution. Keywords Flowshop scheduling · Lexicographical optimization · Simulated annealing · Multicriteria scheduling

1 Introduction

The problem of scheduling to minimize material waste (cost) exists in many manufacturing domains. Joineries, corrugated board plants and glass manufacturing are typical examples. In joinery manufacturing, products such as kitchens, bathrooms, and cabinets are produced mainly from two materials. The cost of any product is determined largely by the material used for the front, which is far more expensive than the melamine material used for the sides and the rear. The dominant objective is always to increase the profit by minimizing the waste in the front material. Additionally, the second objective is to minimize the tardiness of the jobs. The tardiness could, in theory, be relaxed based on the schedule that provides the maximum cost savings; however, achieving the minimum tardiness is required to satisfy the customers' required due dates. This multiobjective nature of the problem requires reaching an acceptable
Samer Hanoun · Saeid Nahavandi, Centre for Intelligent Systems Research, Deakin University, Australia. e-mail: samer.hanoun@deakin.edu.au

compromise, where the quality of a solution has to satisfy the two criteria in the order specified. The multiobjective optimization problem (also called multicriteria) is the problem of finding the vector X = [x1, x2, ..., xk]^T which satisfies the m inequality constraints:

g_i(X) ≤ 0,   i = 1, 2, 3, ..., m   (1)

the l equality constraints:

h_i(X) = 0,   i = 1, 2, 3, ..., l   (2)

and optimizes the objective vector:

F(X) = [f1(X), f2(X), ..., fN(X)]^T   (3)
where X = [x1, x2, ..., xk]^T is the vector of decision variables and N is the number of objective functions. The decision variables can either be all continuous within their respective lower and upper bounds (xL and xU) or a mixture of continuous, binary (i.e., 0 or 1) and integer variables. The inequality and equality constraints restrict the solution space to be searched for the optimal solutions and define the feasible region for the vector X. The number of equality and inequality constraints can be none, a few or many, depending on the application. For example, in manufacturing, the inequality constraints g(x), which can be of min or max type (i.e., g(x) ≤ 0 or g(x) ≥ 0), are due to equipment, material, safety and other considerations. Examples of inequality constraints are the requirement that the pressure in a die casting machine should be below a specified value to avoid process run-away, failure of the material used for fabrication and undesirable defective products. In chemical engineering, the equality constraints, h(x) = 0, arise from mass, energy and momentum balances and can be algebraic and/or differential equations. There have been tremendous efforts to solve multiobjective flowshop scheduling problems. Evans [1] and Fry et al. [2] broadly classify the approaches as: (a) a priori approaches, in which the objectives are combined into one composite utility function and only one solution (the optimum solution) is computed (e.g., Nagar et al. [3]; Sridhar and Rajendran [4]; Framinan et al. [5]); and (b) a posteriori approaches, in which a set of efficient (or non-dominated, or Pareto-optimal) solutions is developed and presented to the decision maker, who chooses the best solution according to the preferences at the time of decision making (e.g., Framinan et al. [5]; T'kindt et al. [6]).
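The feasibility test implied by constraints (1) and (2) can be stated directly in code. A minimal sketch (not from the paper); the constraint functions and the numeric tolerance are hypothetical:

```python
def is_feasible(x, inequalities, equalities, tol=1e-9):
    """X is feasible iff g_i(X) <= 0 for every inequality constraint
    and h_i(X) = 0 (within tolerance) for every equality constraint."""
    return (all(g(x) <= tol for g in inequalities)
            and all(abs(h(x)) <= tol for h in equalities))

# Hypothetical constraints: x1 + x2 <= 10 and x1 - x2 = 0.
gs = [lambda x: x[0] + x[1] - 10]
hs = [lambda x: x[0] - x[1]]
print(is_feasible([4, 4], gs, hs), is_feasible([6, 5], gs, hs))  # → True False
```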
The a posteriori approach is effective if it is hard to design one composite function to aggregate all the objectives, it is unknown what the function looks like, or minimizing the utility function is computationally inaccessible. In such cases, the idea is to develop from the set of solutions a subset that contains the optimum solution and let the decision maker choose, thus following a generate-first-choose-later approach. The use of these approaches is distinguished based on the decision maker's preferences (ordering or relative importance of objectives and goals), and on the presentation of the solutions (a unique solution or a set of Pareto optima). Marler and Arora [7] surveyed the different methods for combining the criteria into one utility function for the a priori approach. A comprehensive literature survey of multicriteria scheduling problems and solutions is given by T'kindt and Billaut [8]. In the case where no trade-off between the criteria is authorized by the decision maker, lexicographical optimization is applied to minimize the objective functions in lexicographic order. The objective functions are arranged in order of importance and the following optimization problems are solved one at a time:

min_{X ∈ X} F_i(X)
subject to F_j(X) ≤ F_j(X_j*),   j = 1, 2, ..., i − 1;  i > 1;  i = 1, 2, ..., k.   (4)
Here, i represents a function's position in the preferred sequence, and F_j(X_j*) represents the optimum of the j-th objective function, found in the j-th iteration. Note that, after the first iteration (j = 1), F_j(X_j*) is not necessarily the same as the independent minimum of F_j(X), because new constraints have been introduced. For a bicriteria optimization, Hoogeveen [9] explains that if one performance criterion, say f, is far more important than the other one, g, then an obvious approach is to find the optimum value with respect to criterion f, denoted by f*, and then choose from among the set of optimum schedules for f the one that performs best on g. In the first stage, the value of the more important criterion f is minimized, whereas in the second stage, the second criterion g is minimized subject to the additional constraint that f ≤ f*. The resulting bicriteria problem is polynomially solvable only if the primary criterion is of the minimax type, like L_max, T_max or f_max, and the secondary criterion is one of Σ C_j, L_max, T_max or f_max; it is strongly NP-hard for Σ w_j C_j. The computational complexity is still open for some problems with Σ U_j as the primary criterion and L_max, T_max or E_max as the secondary criterion [9]. Problems with minisum criteria such as Σ T_j, Σ C_j and Σ F_j are more difficult to solve and are combinatorial problems. The methods and approaches for solving combinatorial scheduling problems are classified into two groups: (a) finding the exact optimal solution using implicit enumeration methods, based on either branch-and-bound or dynamic programming; (b) finding a near-optimal solution using heuristic techniques. Heuristics are categorized as either constructive (e.g., Nawaz et al. [10], Panneerselvam [11]) or improvement heuristics derived from meta-heuristic approaches, such as genetic algorithms (GA) and simulated annealing (SA) (e.g., Reeves [12], Rajendran et al. [13], Noorul Haq et al. [14], Suman [15]).
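The two-stage procedure can be sketched by brute force over a finite candidate set. A minimal Python illustration; the candidate set and both criteria f and g below are made up for demonstration:

```python
from itertools import permutations

def lexicographic_best(candidates, f, g):
    """Two-stage lexicographic choice: find f* = min f, then pick the
    best candidate w.r.t. g among those achieving f = f* (no trade-off)."""
    candidates = list(candidates)
    f_star = min(f(c) for c in candidates)               # stage 1: optimum of f
    f_optimal = [c for c in candidates if f(c) == f_star]
    return min(f_optimal, key=g)                         # stage 2: best g given f = f*

# Toy instance: schedules are permutations of three jobs; f counts
# out-of-order pairs w.r.t. (1, 2, 3); g rewards placing job 3 early.
cands = list(permutations([1, 2, 3]))
f = lambda s: sum(s[i] > s[j] for i in range(3) for j in range(i + 1, 3))
g = lambda s: s.index(3)
print(lexicographic_best(cands, f, g))  # → (1, 2, 3)
```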
Simulated annealing was formally introduced for solving combinatorial optimization problems in 1983 by Kirkpatrick et al. [16], and has since proven very effective for obtaining an optimal solution to single-objective optimization problems and for obtaining a Pareto set of solutions to multiobjective optimization problems [17], [18]. SA has been widely used to solve many scheduling problems, for job shop [19], open shop [20], and flowshop problems [21], [22]. A comprehensive review of SA for solving single and multiobjective optimization problems is provided by Suman and Kumar [23]. Theoretical research focused for many decades on solving the m-machine, two-machine and two-job flowshop problems to minimize the makespan. Methods such as mathematical programming have mainly been used; in practice, they have been abandoned because of their excessive computational burden, and heuristic solution procedures are being developed instead. Also, in industry, other objectives such as the total flowtime and the total tardiness are becoming more important for some manufacturers than the makespan. Research in flowshop scheduling should be inspired more by real-life problems than by problems encountered in mathematical abstractions, and must be motivated by what is important rather than by what the researchers can achieve. This research is motivated by a real and practical flowshop industry problem existing in the joinery manufacturing domain. The scheduling of jobs is carried out under

the cost reduction and the tardiness objectives. The cost of the product is determined by the amount of material sheets used in manufacturing. Minimizing the amount of waste in each material sheet can minimize the number of material sheets required, therefore increasing the profit. This can be achieved by scheduling jobs with similar materials together. For example, consider jobs J1 and J2 requiring 25 and 35 material sheets respectively. For a savings factor of 5% when scheduling these two jobs together, the reduction in cost is $1500 for a material sheet price of $500 (5% of the 60 sheets, i.e., 3 sheets at $500 each). This shows the amount of profit that can be achieved as well as the practicality of the problem. The joinery setup resembles a flowshop model, where the jobs are processed throughout five sequential stages. Each stage has a single machine. The stages required to process a job depend mainly on the job's type of operations (i.e., a job may skip some stages according to its technological operations). All jobs are processed through the first stage, which determines the total material cost. The tardiness of a job depends on its number of operations, the processing time of each operation, and the processing stages it requires. The tardiness of a job is determined once its last operation is completed. The decision maker orders the criteria by specifying a higher priority for the cost reduction criterion over the total tardiness time. This influences the methodology of the proposed solution, as maximizing the cost reduction criterion must be carried out first, before minimizing the total tardiness criterion. In this paper, we introduce the problem and explain the formulation in Section 2. Then, in Section 3, we present the proposed approach to solve the problem. In Section 4, the experimental results obtained are shown for the proposed algorithms. Finally, Section 5 presents the conclusion.

2 Problem Description

The problem addressed in this paper can be described as follows. A set of n jobs is to be processed through a five-stage flowshop (l = 5). Each stage has one machine available (i.e., m(l) = 1). All jobs have to operate through stages 1, 3 and 4, but may not operate through stages 2 and 5, depending on their technological requirements (i.e., type of operations). Each job Ji has ni operations and it is not necessary that ni = l (3 ≤ ni ≤ 5). Jobs are manufactured from different material types. Jobs with similar materials can be scheduled together in stage 1 to decrease the amount of material waste. Every two jobs with the same material type have a savings factor, which shows the reduction in material that can be achieved when producing the two jobs in sequence in stage 1. Each machine can perform only one operation at a time, and the precedence between the operations must be preserved (i.e., operation O_{i,j+1} cannot start unless operation O_{i,j} has finished). Figure 1 shows a schematic view of the possible routings followed by the jobs depending on the operations each job requires. It is worth noting that even though a job may have its own processing route, the job must visit the stages in the order of its operations. The tardiness of a job is determined based on the completion time of its last operation and its due date. It is desired to find the order (schedule) in which these n jobs should be processed through the five-stage flowshop to minimize the material waste (maximize the cost reduction) in stage 1 and minimize the total tardiness time throughout the five stages. The decision maker does not authorize any trade-offs between the objectives. In this case, the cost reduction criterion has a higher priority than the total tardiness time criterion.

Fig. 1 Schematic view of the jobs' routings in the five-stage flowshop (stages S1 to S5)

The problem of minimizing the material waste is found in other manufacturing domains such as corrugated board plants and the glass industry. For example, in corrugated board plants, corrugated board is produced from rolls of paper, and cut and slit into sheets of board. Approximately 30 cardboard qualities are produced from approximately 25 qualities of paper. Boxes are produced from these cardboard sheets. The main objective is to schedule the jobs with the same quality of cardboard together, so that the slitting and cutting of that specific quality of cardboard into cardboard sheets is optimized to minimize the material waste. The same objective exists in the glass industry, where products have different qualities, as do the glass sheets used. In our presented problem, the following assumptions are made:

- Jobs are available at time zero and no preemption is allowed (i.e., any started operation has to be completed without interruptions).
- A machine can perform only one operation at a time, of the same type as the stage it belongs to.
- An operation of a job can be performed by only one machine at a time.
- Once an operation has begun on a machine, it must not be interrupted.
- An operation of a job cannot be performed until its preceding operations are completed.
- The operation processing times and the number of operations for each job are known in advance.
- Each operation is processed as early as possible.
- The first stage, stage 1, in the flowshop is the one affecting the cost of production.
- Each job i is produced using two materials (i.e., material Ai and material Bi). Material A is far more expensive than material B. The profit revenue is determined mainly by the number of sheets used from material A. Jobs may have different types for material A, but all have the same type for material B.
Every two jobs, i and k, with the same type for material A have a savings factor S_ik, which shows the reduction in material that can be achieved when producing the two jobs in sequence, where S_ik = S_ki. Given the numbers of material sheets X_i and X_k and the costs per material sheet Cost_i and Cost_k, the cost savings factors CS_ik and CS_ki, where CS_ik = CS_ki, are calculated as:

CS_ik = Cost_i (X_i + X_k) S_ik   if A_i = A_k; 0 otherwise   (5)

CS_ki = Cost_k (X_k + X_i) S_ki   if A_i = A_k; 0 otherwise   (6)

where i = 1, ..., n and k = 1, ..., n. The total cost savings CS, which must be maximized by sequencing the jobs in stage 1, is defined by

Table 1 Jobs and operation stages

         J1   J2   J3
    S1   X    X    X
    S2   X         X
    S3   X    X    X
    S4   X    X    X
    S5   X    X

Fig. 2 The schedule of jobs J1, J2, J3 through the five-stage flowshop

CS = Σ_{i=1}^{n} Σ_{k=1}^{i} CS_ik   (7)
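The pairwise cost savings of eqs. (5) and (6) can be computed directly. A sketch; the job data, material names and the 5% savings factor are hypothetical:

```python
def pairwise_savings(jobs, savings):
    """Cost savings CS_ik for every ordered pair of jobs (eqs. 5-6):
    nonzero only when both jobs use the same type of material A.
    jobs: {name: (material_A, sheets_X, cost_per_sheet)}."""
    cs = {}
    for i, (mat_i, x_i, cost_i) in jobs.items():
        for k, (mat_k, x_k, _) in jobs.items():
            if i == k:
                continue
            cs[(i, k)] = cost_i * (x_i + x_k) * savings if mat_i == mat_k else 0.0
    return cs

jobs = {"J1": ("oak", 25, 500), "J2": ("oak", 35, 500), "J3": ("pine", 40, 300)}
cs = pairwise_savings(jobs, 0.05)
print(cs[("J1", "J2")], cs[("J1", "J3")])  # same material vs. different material
```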

For stages 2 to 5, the jobs operate based on the type of their operations. A job proceeds directly to the stage that its current operation requires. In the case that the machine in the required stage is busy, the job waits until the current machine finishes. Table 1 gives an illustrative example of three jobs, J1, J2, and J3, and their operating stages. Figure 2 and Figure 3 show the overall schedule throughout the five stages, given that {J1 J2 J3} is the schedule that produces the maximum cost savings in stage 1. It is not necessary for the jobs in stages 2 to 5 to follow the schedule produced from stage 1; the order there depends mainly on the number of operations, the types of operations and the processing times of the operations. This appears when operation S3 of job J2 starts before the same operation of job J1, because S3 of J1 has to wait for its preceding operation to finish while stage 3 is idle and not processing any jobs. The tardiness Ti of job i is determined by the completion time Ci of its last operation. It is calculated as:

Ti = max(0, Ci − di)   (8)

where di is the due date of job i. The total tardiness T, which must be minimized, is calculated as:

T = Σ_{i=1}^{n} Ti   (9)
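Equations (8) and (9) are straightforward to compute; a minimal sketch with made-up completion times and due dates:

```python
def total_tardiness(completion, due):
    """Total tardiness T = sum of max(0, C_i - d_i) over all jobs (eqs. 8-9)."""
    return sum(max(0, c - d) for c, d in zip(completion, due))

# Hypothetical data: three jobs, completion times and due dates in hours.
print(total_tardiness([10, 25, 40], [12, 20, 30]))  # → 15
```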

Fig. 3 The schedule of the five stages for jobs J1, J2, J3

3 Proposed Approach

The proposed approach is based on heuristic solutions to obviate the inherent computational complexity of the problem. The approach consists of two phases; each acts on solving one of the required criteria. The phases are carried out in order, to satisfy the priority ordering of the criteria. In phase one, jobs with similar materials are clustered into batches, and the schedule that maximizes the total cost savings for each batch is computed using a greedy heuristic. Clustering jobs into batches increases the cost savings, as jobs with different materials have zero cost savings (i.e., the cost saving between different batches equals zero). For a set of materials M = {M1, M2, ..., Mk}, the constructed batches are B1, B2, ..., Bk, each with a computed optimum schedule S1, S2, ..., Sk and corresponding total cost savings CS1, CS2, ..., CSk. The global schedule produced by phase one is:

SG = S1 + S2 + ... + Sk   (10)

and its total cost saving is:

CSG = Σ_{i=1}^{k} CSi   (11)

The constructed global schedule guarantees that the total cost savings criterion is always maximized, independent of the order in which the batches are sequenced. For example, for batches B1, B2, B3 with schedules S1 = {1, 3, 5}, S2 = {6, 2}, S3 = {4, 8, 7} and corresponding total cost savings CS1 = 120, CS2 = 400, CS3 = 310, the constructed global schedule SG will always have a total cost savings CSG = 830 for all possible sequences of S1, S2, S3. This holds because the cost saving between batches equals zero. In phase two, the batches are sequenced using a simulated annealing (SA) algorithm to minimize the total tardiness time of the constructed global schedule SG. The batch due date d_batch is taken to be the due date of the earliest job in the batch, and its processing time p_batch is the sum of the processing times of all jobs in the batch. The reason for choosing d_batch this way is to impose a high level of tightness on the SA algorithm for achieving the global optimum solution.

d_batch_i = min(d_1, d_2, ..., d_nb),   i = 1 ... k   (12)

p_batch_i = Σ_{j=1}^{nb} p_j,   i = 1 ... k   (13)
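The batch quantities of eqs. (12) and (13) can be sketched; a small illustration with hypothetical job data:

```python
def batch_attributes(jobs):
    """Batch due date = earliest job due date (eq. 12); batch processing
    time = sum of job processing times (eq. 13).
    jobs: list of (processing_time, due_date) pairs."""
    d_batch = min(d for _, d in jobs)
    p_batch = sum(p for p, _ in jobs)
    return d_batch, p_batch

# Hypothetical batch of three jobs (processing time, due date) in hours.
print(batch_attributes([(4, 30), (6, 22), (5, 41)]))  # → (22, 15)
```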

The tardiness Ti of each job is calculated based on its completion time and due date. The completion time of a job is affected by: 1) the position of the job in the global schedule SG; 2) the completion time of its last operation; 3) the number of operations and the processing time of each operation; and 4) the assignment algorithm used for allocating the operations to the designated stages. The assignment algorithm assigns a ready operation to its designated stage once the stage is available, and applies a FIFO (First-In-First-Out) dispatching rule for resolving conflicts between waiting operations. The FIFO rule is preferred over other dispatching rules such as SPT (shortest processing time), based on a manufacturing requirement to minimize the idle gaps between the jobs' operations.

3.1 Greedy Heuristic

In this section, the steps of the proposed greedy heuristic to maximize the cost savings for each batch are presented. The algorithm computes a set of schedules for each batch, and the schedule with the maximum cost savings is considered the dominant schedule for the batch.

Step 1 Input the following:

- Number of jobs (nb) in the batch
- The jobs in the batch [Ji, i = 1 ... nb]
- Cost savings matrix of the batch [CS_{i,j}, i = 1 ... nb, j = 1 ... nb]

Step 2 Initialize the following:

- List of schedules S_{i,j} = 0, i = 1 ... nb, j = 1 ... nb
- List of costs C_i = 0, i = 1 ... nb

Step 3 /* Case of only one job in the batch */
if nb = 1 then
    S_{1,1} = J1
    proceed to Step 5
else
    proceed to Step 4
end if

Step 4 /* Case of two or more jobs in the batch */
for i = 1 to nb do
    S_{i,1} = Ji

Table 2 The cost savings matrix for a set of five jobs

         J1        J2        J3        J4        J5
    J1   0         3739.68   2626.68   3635.8    3005.1
    J2   3739.68   0         3339      3739.68   1840.16
    J3   2626.68   3339      0         3940.02   2018.24
    J4   3635.8    3739.68   3940.02   0         3339
    J5   3005.1    1840.16   2018.24   3339      0

    C_i = 0
    current_job = Ji
    for k = 1 to nb − 1 do
        next_job = the job with the maximum cost savings value in row CS_{current_job} that is not already in schedule S_i
        S_{i,k+1} = next_job
        C_i = C_i + CS_{current_job, next_job}
        current_job = next_job
    end for
end for

Step 5 return the schedule S_i that has the maximum C_i, i = 1 ... nb

The proposed greedy algorithm is explained with the help of a numerical illustration. Consider five jobs with the same material type (i.e., all jobs belong to the same batch), with their corresponding cost savings matrix presented in Table 2. The algorithm computes a set of five schedules, each having one of the given jobs as the starting job. Once these schedules are computed, the schedule with the maximum cost savings C is considered the best schedule for these jobs. Each schedule is constructed in a greedy manner based on the cost savings matrix. For example, in iteration i = 1, the first computed schedule S1 starts with job J1, S1 = {1}. During iteration k = 1, J2 is added to S1 because J2 is the job with the maximum cost savings value in CS1 that is not in S1, giving S1 = {1 2} and C1 = 3739.68. In iteration k = 2, J4 is added to S1 in the same way, giving S1 = {1 2 4} and C1 = 7479.36. Note that the algorithm avoids choosing J1 as the job with the maximum cost savings because J1 already exists in S1. This condition prevents any loops from occurring in the produced schedule. In iteration k = 3, J3 satisfies the required condition, leading to S1 = {1 2 4 3} and C1 = 11419.38. Finally, in iteration k = 4, J5 is the last job, which leads to S1 = {1 2 4 3 5} and C1 = 13437.62. Figure 4 shows the greedy construction of schedule S1 at every iteration of the algorithm. In each iteration, the possible choices are shown, and the job that has the maximum cost savings value and is not in S1 is selected as the next job. The algorithm continues to compute the other remaining schedules (i.e., S2, S3, S4 and S5) and finally returns schedule S3 = {3 4 2 1 5}, the one that has the maximum total cost savings among the computed set of schedules. Table 3 presents a detailed summary of the procedure described above. It is worth noting that for n jobs, the heuristic constructs n schedules and selects among them the one that has the maximum cost savings.
The complexity of the heuristic is O(n^2), compared to O(n!) for obtaining the optimal schedule by enumeration.
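The heuristic can be sketched in a few lines of Python; here it is run on the matrix of Table 2 (the data structures and names are ours, not the paper's implementation):

```python
def greedy_batch_schedule(cs):
    """Greedy heuristic of Section 3.1: build one schedule per starting job,
    always appending the not-yet-scheduled job with the maximum cost savings
    from the current job, then return the best schedule found.
    cs: symmetric n x n cost savings matrix with cs[i][i] = 0."""
    n = len(cs)
    best_schedule, best_cost = None, -1.0
    for start in range(n):
        schedule, cost, current = [start], 0.0, start
        for _ in range(n - 1):
            # next job: maximum savings w.r.t. current, not already scheduled
            next_job = max((j for j in range(n) if j not in schedule),
                           key=lambda j: cs[current][j])
            cost += cs[current][next_job]
            schedule.append(next_job)
            current = next_job
        if cost > best_cost:
            best_schedule, best_cost = schedule, cost
    return best_schedule, best_cost

# Cost savings matrix of Table 2 (jobs J1..J5 mapped to indices 0..4).
cs = [[0, 3739.68, 2626.68, 3635.8, 3005.1],
      [3739.68, 0, 3339, 3739.68, 1840.16],
      [2626.68, 3339, 0, 3940.02, 2018.24],
      [3635.8, 3739.68, 3940.02, 0, 3339],
      [3005.1, 1840.16, 2018.24, 3339, 0]]
schedule, cost = greedy_batch_schedule(cs)
print([j + 1 for j in schedule], round(cost, 2))  # → [3, 4, 2, 1, 5] 14424.48
```

The result reproduces the schedule S3 = {3 4 2 1 5} with total cost savings 14424.48 reported in Table 3.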

Fig. 4 The greedy construction of schedule S1

3.2 Simulated Annealing

Simulated annealing (SA) is a meta-heuristic algorithm based on the basic idea of neighborhoods. It was derived from the analogy between the simulation of the annealing of solids and the strategy of solving combinatorial optimization problems [16]. A neighboring solution is derived from its originator solution by a random move, which results in a new, slightly different solution. This increases the chance of finding an improved solution within a neighborhood more than in less correlated areas of the search space. Also, SA overcomes the problem of getting stuck in local minima by allowing worse solutions (of lesser quality) to be accepted some of the time (i.e., allowing some uphill steps). The simplicity of the approach and its substantial reduction in computation time [24], [25] have made it a valuable tool for solving flowshop scheduling problems with the objective of minimizing the tardiness [26], [27]. In this section the main components of the SA algorithm are presented. The implementation details of the algorithm are described, as well as the procedure followed for setting the parameters. The initial sequence of batches (i.e., the initial solution) is constructed using the EDD (earliest due date) rule, and the Random Pairwise Interchange mechanism is used for obtaining the neighboring solution. The total tardiness time T is the cost function applied to each of the obtained neighboring solutions.

Table 3 Summary of the greedy heuristic algorithm

    Iteration i   Iteration k   Schedule Si    Cost Ci
    1             -             {1}            0
                  1             {1 2}          3739.68
                  2             {1 2 4}        7479.36
                  3             {1 2 4 3}      11419.38
                  4             {1 2 4 3 5}    13437.62
    2             -             {2}            0
                  1             {2 1}          3739.68
                  2             {2 1 4}        7375.48
                  3             {2 1 4 3}      11315.5
                  4             {2 1 4 3 5}    13333.74
    3             -             {3}            0
                  1             {3 4}          3940.02
                  2             {3 4 2}        7679.7
                  3             {3 4 2 1}      11419.38
                  4             {3 4 2 1 5}    14424.48
    4             -             {4}            0
                  1             {4 3}          3940.02
                  2             {4 3 2}        7279.02
                  3             {4 3 2 1}      11018.7
                  4             {4 3 2 1 5}    14023.8
    5             -             {5}            0
                  1             {5 4}          3339
                  2             {5 4 3}        7279.02
                  3             {5 4 3 2}      10618.02
                  4             {5 4 3 2 1}    14357.7

3.2.1 Initial Temperature

The selection of an initial temperature value influences the behavior of the SA algorithm. The starting temperature must be hot enough to allow a move to any neighborhood state. If this is not done, then the ending solution will be the same as (or very close to) the starting solution. Ideally, if the maximum distance (cost function difference) between one neighbor and another is known, then it can be used for determining the starting temperature. One choice is to start with a very high value, so that the search can move to any neighbor; however, this transforms the search (at least in the early stages) into a random search, though effectively the search will act as SA once the temperature is cool enough. Another choice is to start with a very high temperature and cool it rapidly until about 60% of the worst solutions are being accepted. This forms an accurate starting temperature, and it can then be cooled more slowly. In our approach, the initial temperature is chosen by experimentation. The range of change, Δf0, in the value of the objective function with different moves is determined. The initial value of the temperature T0 is calculated based on the initial acceptance ratio χ0 and the average increase in the objective function, Δf0:

T0 = − Δf0 / ln(χ0)   (14)

The following steps describe the algorithm used to calculate the value of T0. Non-improving solutions are accepted with a probability of about 95 percent in the preliminary iterations (i.e., χ0 = 0.95).

Step 1: /* Q represents the number of samples */
for q = 1 to Q do
    repeat
        Generate two solutions X1 and X2 at random
    until Z(X1) ≠ Z(X2)
    T0^q = |Z(X1) − Z(X2)| / (− ln(0.95))
end for

Step 2: T0 = (1/Q) Σ_{q=1}^{Q} T0^q
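The sampling procedure can be sketched as follows; the cost function Z here is a uniform random stand-in, not the paper's tardiness computation:

```python
import math
import random

def initial_temperature(sample_cost, q_samples=100, chi0=0.95):
    """Estimate T0 (eq. 14): average cost difference between random
    solution pairs, scaled so that such moves are accepted with
    probability chi0 at the start of the search."""
    total = 0.0
    for _ in range(q_samples):
        z1 = z2 = sample_cost()
        while z1 == z2:                      # resample until the costs differ
            z2 = sample_cost()
        total += abs(z1 - z2) / (-math.log(chi0))
    return total / q_samples

random.seed(1)
# Stand-in cost: total tardiness of a random solution, drawn uniformly.
t0 = initial_temperature(lambda: random.uniform(0, 100))
print(round(t0, 2))
```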

3.2.2 Cooling Schedule

The cooling schedule determines the way the temperature is changed. Enough iterations should be allowed at each temperature so that the system stabilizes at that temperature; however, the number of iterations needed at each temperature to achieve this might be very high compared to the problem size. As this is impractical, a compromise is required: either a large number of iterations at a few temperatures, a small number of iterations at many temperatures, or a balance between the two. In our approach, the temperature is decremented in a proportional manner:

T(i + 1) = α T(i)   (15)

where α is the cooling factor constant, chosen to be 0.98.

3.2.3 Number of Iterations

The number of iterations at each temperature is chosen so that the system is sufficiently close to the stationary distribution at that temperature. Enough iterations at each temperature are carried out to ensure that all represented states are searched and to enable reaching the global optimum. For our problem, 150 non-improving iterations are used to terminate the current temperature level.

3.2.4 Stopping Criterion

Various stopping criteria have been developed: i) a total number of iterations and a number of iterations to move at each temperature; ii) a minimum value of the temperature and a number of iterations to move at each temperature; iii) a number of iterations to move at each temperature and a predefined number of iterations to get a better solution. In our approach, a final temperature value Tf equal to 5 percent of the initial temperature T0 is used for stopping the algorithm (i.e., Tf = 0.05 T0).


3.2.5 SA Algorithm

Table 4 summarizes the parameter settings for the SA algorithm. The following steps represent the basic structure of the SA algorithm.

Step 1: Parameter Settings
- Obtain the initial temperature T0 according to the preliminary experiment
- Initialize the non-improving iterations at each temperature (nt = 150), the cooling factor α = 0.98 and the final temperature Tf = 0.05 T0

Step 2: Initial Solution
- Generate the initial sequence of batches π using the EDD rule (the EDD rule is used based on Johnson et al. [29] for starting with a good initial seed)
- Construct the global schedule SG for π
- Assign the operations of the jobs in SG to the stages in the flowshop and compute the total tardiness time Ts

Step 3:
let T = T0, π_best = π, T_best = Ts
while T ≥ Tf do
    let n = 1
    while n ≤ nt do
        - Generate a neighbor sequence π′ by randomly interchanging two batches in π
        - Construct the global schedule S′G for π′
        - Assign the operations of the jobs in S′G to the stages in the flowshop and compute T′s
        if Δ(T′s, Ts) < 0 then
            π = π′; Ts = T′s
            if Δ(Ts, T_best) < 0 then
                π_best = π; T_best = Ts
                n = 1
            end if
        else
            Generate a random number U
            if U < exp(−Δ(T′s, Ts)/T) then
                π = π′; Ts = T′s
            end if
        end if
        n = n + 1
    end while
    T = αT
end while
return π_best
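A compact sketch of the loop above; the tardiness function here is a toy stand-in (the paper computes it by assigning the operations of SG to the stages), and the parameter defaults follow Table 4:

```python
import math
import random

def simulated_annealing(batches, tardiness, t0, tf_ratio=0.05,
                        alpha=0.98, nt=150, rng=random):
    """SA of Section 3.2: start from the given batch order, generate
    neighbors by swapping two random batches, accept worse moves with
    probability exp(-delta/T), and cool by T <- alpha*T until T < 0.05*T0."""
    seq = list(batches)
    cost = tardiness(seq)
    best_seq, best_cost = seq[:], cost
    t = t0
    while t >= tf_ratio * t0:
        n = 1
        while n <= nt:
            i, j = rng.sample(range(len(seq)), 2)
            cand = seq[:]
            cand[i], cand[j] = cand[j], cand[i]     # random pairwise interchange
            new_cost = tardiness(cand)
            if new_cost < cost:
                seq, cost = cand, new_cost
                if cost < best_cost:
                    best_seq, best_cost = seq[:], cost
                    n = 0                            # reset non-improving counter
            elif rng.random() < math.exp((cost - new_cost) / t):
                seq, cost = cand, new_cost           # accept an uphill move
            n += 1
        t *= alpha
    return best_seq, best_cost

random.seed(7)
# Toy cost: total displacement from sorted order, minimized at [0, 1, 2, 3, 4].
cost_fn = lambda s: sum(abs(b - p) for p, b in enumerate(s))
best, val = simulated_annealing([3, 0, 2, 1, 4], cost_fn, t0=10.0)
print(best, val)
```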

Table 4 Settings for the SA algorithm

    Parameter                                        Setting
    Initial sequence                                 EDD (earliest due date)
    Neighborhood structure                           Interchange two randomly selected batches
    Initial temperature                              T0 = − Δf0 / ln(χ0), χ0 = 0.95
    Cooling schedule                                 T(i+1) = α T(i), α = 0.98
    Probability of acceptance                        Pa = exp(−Δ/T)
    Relative percentage deviation in cost function   Δ = (T′s − Ts) × 100 / Ts
    Number of iterations per temperature             150 non-improving iterations
    Final temperature                                Tf = 0.05 T0

4 Experimental Study

In this section, we address the computational results obtained from our proposed greedy heuristic and the SA algorithm developed in this paper. The objective is to compare the solution accuracy of the algorithms with the optimal solution for the addressed problem. The optimal solution is obtained using the complete enumeration method. The data for the set of problems used is randomly generated, based on existing rules in the application domain under consideration. The data generation rules are summarized as:

1. A job belongs to one of three size categories: small, medium and large. 30 percent of the jobs are small size jobs, 50 percent are medium size jobs, and 20 percent are large size jobs. A small job consists of 20 to 30 material sheets, medium jobs consist of 40 to 60 material sheets and large jobs consist of 70 to 90 material sheets. All jobs have a ratio of 2:3 of front material (material A) to melamine material (material B).
2. 80 percent of the jobs have a stage 5 operation, while 30 percent of the jobs have a stage 2 operation.
3. The cost of a front material sheet ranges from $100 to $1000.
4. The savings factor ranges from 5 percent to 10 percent, depending mainly on the design and layout of the job.
5. The set of front materials (material A) consists of 10 materials.
6. The processing time of one material sheet in stage 1 or stage 2 is approximately 10 minutes. For example, a small job consisting of 25 sheets would require around 250 minutes in stage 1, while a large job consisting of 80 sheets needs around 800 minutes in stage 1. The processing time of the operations in stages 3 to 5 is relative to the job size, design and layout. The job size determines the number of material sheets, while the design and layout specify the number of pieces to cut and the holes to drill on every sheet. Table 5 shows the range of the processing times of the operations in stages 3 to 5 relative to the job size.
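The generation rules above can be sketched as a small instance generator; the distribution choices follow the list, while the field names and helper function are ours:

```python
import random

def generate_job(rng):
    """One random job per the rules above: size category, front-material
    sheet count, cost and type, and optional stage-2/stage-5 operations."""
    size = rng.choices(["small", "medium", "large"], weights=[30, 50, 20])[0]
    lo, hi = {"small": (20, 30), "medium": (40, 60), "large": (70, 90)}[size]
    return {
        "size": size,
        "sheets_A": rng.randint(lo, hi),
        "cost_A": rng.randint(100, 1000),    # $ per front-material sheet
        "material_A": rng.randrange(10),     # one of 10 front materials
        "has_stage2": rng.random() < 0.3,
        "has_stage5": rng.random() < 0.8,
    }

rng = random.Random(42)
jobs = [generate_job(rng) for _ in range(100)]
print(sum(j["has_stage5"] for j in jobs))  # count of jobs with a stage-5 operation
```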
The processing time p_i of job i is the sum of the processing times of its operations throughout the five stages. It is calculated as:

p_i = \sum_{k=1}^{5} O_{ik}    (16)

where O_{ik} is the processing time of job i at stage k. The due dates are generated with different levels of tightness as proposed in [28]. Once the processing times of all jobs are generated, the total processing time

Table 5 Range of processing times in stages 3 to 5 relative to the job size

            Processing Times Range (in hours)
Job Size    Stage 3     Stage 4     Stage 5
Small       [0.5, 1]    [3, 4]      [8, 10]
Medium      [1, 2]      [6, 7]      [16, 20]
Large       [2, 3]      [9, 10]     [24, 30]

Table 6 Problem no. 1: Cost savings matrix for 5 jobs

      J1    J2    J3    J4    J5
J1     0   373   262   363   300
J2   373     0   333   373   184
J3   262   333     0   394   201
J4   363   373   394     0   333
J5   300   184   201   333     0

Table 7 Problem no. 2: Cost savings matrix for 6 jobs

      J1    J2    J3    J4    J5    J6
J1     0   916   549  1133   879   961
J2   916     0   357   845  1030   320
J3   549   357     0   567   732   288
J4  1133   845   567     0   881   453
J5   879  1030   732   881     0   769
J6   961   320   288   453   769     0

Table 8 Problem no. 3: Cost savings matrix for 7 jobs

      J1    J2    J3    J4    J5    J6    J7
J1     0   149   224   465   191   289   275
J2   149     0   144   241   161   239   223
J3   224   144     0   339   231   256   149
J4   465   241   339     0   437   517   446
J5   191   161   231   437     0   229   289
J6   289   239   256   517   229     0   353
J7   275   223   149   446   289   353     0

P = \sum_{i=1}^{n} p_i is computed. Then the due date for each job is generated from the uniform distribution:

[P(1 - TF - RDD/2), P(1 - TF + RDD/2)]    (17)

where TF is the average tardiness factor and RDD is the range of due dates. The settings TF = 0.6 and RDD = 0.4 are used; note that these settings produce tight due dates. The relative percentage deviation (RPD) of the objective value from the optimal is used as the performance measure and is calculated as:

RPD = (O_heuristic - O_optimal) \times 100 / O_optimal    (18)

Table 9 Problem no. 4: Cost savings matrix for 8 jobs

      J1    J2    J3    J4    J5    J6    J7    J8
J1     0   377   495   435   406   600   413   304
J2   377     0   466   385   556   548   565   652
J3   495   466     0   208   295   580   369   365
J4   435   385   208     0   263   371   436   365
J5   406   556   295   263     0   548   614   440
J6   600   548   580   371   548     0   684   574
J7   413   565   369   436   614   684     0   478
J8   304   652   365   365   440   574   478     0

Table 10 Problem no. 5: Cost savings matrix for 9 jobs

      J1    J2    J3    J4    J5    J6    J7    J8    J9
J1     0   174   252   242   198   252   365   284   270
J2   174     0   165   348   240   110   185   216   121
J3   252   165     0   226   145   156   180   270   193
J4   242   348   226     0   444   264   453   503   317
J5   198   240   145   444     0   261   416   249   185
J6   252   110   156   264   261     0   300   210   154
J7   365   185   180   453   416   300     0   298   255
J8   284   216   270   503   249   210   298     0   191
J9   270   121   193   317   185   154   255   191     0

Table 11 Problem no. 6: Cost savings matrix for 10 jobs

      J1    J2    J3    J4    J5    J6    J7    J8    J9   J10
J1     0   143   143   198   275   255   232   356   287   325
J2   143     0    99   182   184    95   154   181   171   196
J3   143    99     0   202   158   171   232   113   171    98
J4   198   182   202     0   284   282   314   223   169   259
J5   275   184   158   284     0   240   246   304   309   209
J6   255    95   171   282   240     0   202   245   243   193
J7   232   154   232   314   246   202     0   299   337   206
J8   356   181   113   223   304   245   299     0   245   187
J9   287   171   171   169   309   243   337   245     0   276
J10  325   196    98   259   209   193   206   187   276     0

Table 12 Computational results of the greedy heuristic for the sample problem set

Problem no.  Jobs  Optimal Solution         Copt  Heuristic Solution       CHeu
1            5     {3 4 2 1 5}              1440  {3 4 2 1 5}              1440
2            6     {6 1 4 5 2 3}            4362  {6 1 4 5 2 3}            4362
3            7     {1 4 6 7 5 3 2}          1999  {1 4 6 7 5 3 2}          1999
4            8     {3 6 7 5 2 8 4 1}        3886  {3 6 7 5 2 8 4 1}        3886
5            9     {3 8 4 7 5 6 1 9 2}      2546  {3 8 4 7 5 6 1 9 2}      2546
6            10    {2 10 1 8 5 9 7 4 6 3}   2594  {2 10 1 8 5 9 7 4 6 3}   2594

Additionally, the mean relative percentage deviation (MRPD) is calculated for each problem set. The greedy heuristic is tested on problem sets consisting of 5, 6, 7, 8, 9 and 10 jobs. These jobs are all clustered in one batch (i.e., they all have the same material).


Table 6 to Table 11 show the cost savings matrices for a sample of the problem sets. The total cost savings achieved by the greedy heuristic and the optimal total cost savings obtained by the complete enumeration method are presented in Table 12. The results show that the greedy heuristic achieves the optimal solution for all the sample problems presented. The heuristic was tested on a total of 150 problem sets consisting of 5 to 10 jobs and achieved the optimal solution for every problem set. The SA algorithm is tested on problem sets with combinations of n jobs and b batches. The RPD is calculated for every problem and the MRPD for every problem set to quantify how closely the SA solution approaches the optimal solution. The results are summarized in Table 13. The SA achieved the optimal solution for the small problems and produced solutions within 0.56% of the optimal on average, with a maximum deviation of 1.87%. Given the NP-hard nature of the problem, these experimental results show that the SA produces very high quality solutions at low computational cost.
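The complete-enumeration baseline can be illustrated on problem no. 1. Assuming, as the sequence {3 4 2 1 5} and Copt = 1440 in Table 12 suggest, that the total cost savings of a sequence is the sum of the pairwise savings between consecutive jobs, a brute-force sketch (an inference from the tables, not the authors' code) is:

```python
from itertools import permutations

# Cost savings matrix of Table 6 (5 jobs); S[i][j] is the saving
# realized when job j directly follows job i (0-indexed, symmetric).
S = [[0, 373, 262, 363, 300],
     [373, 0, 333, 373, 184],
     [262, 333, 0, 394, 201],
     [363, 373, 394, 0, 333],
     [300, 184, 201, 333, 0]]

def sequence_savings(seq, s=S):
    # total savings = sum over consecutive job pairs in the sequence
    return sum(s[a][b] for a, b in zip(seq, seq[1:]))

def enumerate_optimal(n, s=S):
    # brute force over all n! sequences; feasible only for small n,
    # which is why the paper limits enumeration to 5-10 jobs
    return max(permutations(range(n)), key=lambda p: sequence_savings(p, s))

best = enumerate_optimal(5)
print(sequence_savings(best))  # -> 1440, matching Copt for problem no. 1
```

The 0-indexed maximizer (2, 3, 1, 0, 4) corresponds to the sequence {3 4 2 1 5} reported in Table 12 (savings 394 + 373 + 373 + 300 = 1440).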

5 Conclusion

In this paper, a greedy heuristic and a SA algorithm were proposed to minimize the material cost and the total tardiness time. The problem is NP-hard and of significant practical relevance for many industrial applications, especially in joinery manufacturing. The proposed greedy heuristic achieved the optimal solution for maximizing the total material cost savings; this was verified by comparing the produced solutions against the optimal solutions obtained by the complete enumeration method. Additionally, the SA algorithm obtained solutions within an average of 0.56% of the optimal solution. This shows that both algorithms are very practical for use by industry practitioners, given their simplicity and low computational requirements. In summary, the experimental results presented in this paper are very encouraging and promising for the application of both the greedy heuristic and the SA algorithm in the joinery manufacturing domain. For future work, the flowshop model will be extended to handle parallel machines in each stage. This research may also be extended in the direction of providing the decision maker with the set of Pareto optimal solutions by optimizing both criteria simultaneously, based on a linear composite objective function, which gives the decision maker the flexibility of choosing the solution that best satisfies his or her preferences.

References

1. Evans GW (1984) An overview of techniques for solving multiobjective mathematical programs. Manage Sci 30:1268-1282
2. Fry TD, Armstrong RD, Lewis H (1989) A framework for single machine multiple objective sequencing research. Omega 17:595-607
3. Nagar A, Heragu SS, Haddock J (1995) A branch and bound approach for a two-machine flowshop scheduling problem. J Oper Res Soc 46:721-734
4. Sridhar J, Rajendran C (1996) Scheduling in flowshop and cellular manufacturing systems with multiple objectives: a genetic algorithmic approach. Prod Plan Control 7:374-382
5. Framinan JM, Leisten R, Ruiz-Usano R (2002) Efficient heuristics for flowshop sequencing with the objectives of makespan and flowtime minimisation. Eur J Oper Res 141:559-569
6. T'kindt V, Billaut J-C, Proust C (2001) Solving a bicriteria scheduling problem on unrelated parallel machines occurring in the glass bottle industry. Eur J Oper Res 135:42-49

Table 13 Computational results of the SA algorithm

Problem (n × b)   Optimal    T_SA     RPD     MRPD
10 × 5                45       45     0
20 × 5               226      230     1.77
40 × 5              1386     1386     0
60 × 5              3632     3632     0
80 × 5              6094     6094     0
100 × 5            10186    10186     0       0.295
10 × 6                43       43     0
20 × 6               223      223     0
40 × 6              1558     1578     1.28
60 × 6              3175     3175     0
80 × 6              8997     8997     0
100 × 6            10584    10584     0       0.213
10 × 7                25       25     0
20 × 7               206      208     0.97
40 × 7              1579     1607     1.77
60 × 7              2975     2975     0
80 × 7              6071     6071     0
100 × 7             9605     9614     0.09    0.471
10 × 8                40       40     0
20 × 8               366      366     0
40 × 8              1229     1252     1.87
60 × 8              3089     3124     1.13
80 × 8              6030     6053     0.38
100 × 8             9462     9506     0.47    0.641
10 × 9                32       32     0
20 × 9               192      195     1.56
40 × 9              1211     1232     1.73
60 × 9              2736     2748     0.44
80 × 9              5400     5400     0
100 × 9            10265    10329     0.62    0.725
10 × 10               21       21     0
20 × 10              198      201     1.52
40 × 10             1118     1132     1.25
60 × 10             3358     3381     0.68
80 × 10             4810     4872     1.29
100 × 10            8860     8982     1.38    1.02

Average RPD                           0.561

7. Marler RT, Arora JS (2004) Survey of multi-objective optimization methods for engineering. Struct Multidisc Optim 26:369-395
8. T'kindt V, Billaut J-C (2001) Multicriteria scheduling problems: A survey. RAIRO Oper Res 35:143-163
9. Hoogeveen H (2005) Multicriteria scheduling. Eur J Oper Res 167:592-623
10. Nawaz M, Enscore EE, Ham I (1983) A heuristic algorithm for the m-machine, n-job flowshop sequencing problem. Omega 11:91-95
11. Panneerselvam R (2006) Simple heuristic to minimize total tardiness in a single machine scheduling problem. Int J Adv Manuf Tech 30:722-726
12. Reeves CR (1995) A genetic algorithm for flowshop sequencing. Comput Oper Res 22:5-13
13. Rajendran C, Ziegler H (2004) Ant-colony algorithms for permutation flowshop scheduling to minimize makespan/total flowtime of jobs. Eur J Oper Res 155:426-438
14. Noorul Haq A, Saravanan M, Vivekraj AR, Prasad T (2006) A scatter search algorithm for general flowshop scheduling problem. Int J Adv Manuf Tech 31:731-736

15. Suman B (2002) Multiobjective simulated annealing: a metaheuristic technique for multiobjective optimization of a constrained problem. Found Comput Decis Sci 27:171-191
16. Kirkpatrick S, Gelatt CD, Vecchi MP (1983) Optimization by simulated annealing. Science 220:671-680
17. Eglese RW (1990) Simulated annealing: a tool for operational research. Eur J Oper Res 46:271-281
18. Ehrgott M, Gandibleux X (2000) A survey and annotated bibliography of multiobjective combinatorial optimization. OR Spektrum 22:425-460
19. Van Laarhoven PJM, Aarts EHL, Lenstra JK (1992) Job shop scheduling by simulated annealing. Oper Res 40:113-125
20. Liaw CF (1999) Applying simulated annealing to the open shop scheduling problem. IIE T 31:457-465
21. Dipak L, Chakraborty UK (2009) An efficient hybrid heuristic for makespan minimization in permutation flow shop scheduling. Int J Adv Manuf Tech 44:559-569
22. Varadharajan TK, Rajendran C (2005) A multi-objective simulated-annealing algorithm for scheduling in flowshops to minimize the makespan and total flowtime of jobs. Eur J Oper Res 167:772-795
23. Suman B, Kumar P (2006) A survey of simulated annealing as a tool for single and multiobjective optimization. J Oper Res Soc 57:1143-1160
24. Rajasekaran S (1990) On the convergence time of simulated annealing. University of Pennsylvania, Department of Computer and Information Science, Technical Report No. MS-CIS-90-89
25. Bertsimas D, Tsitsiklis J (1993) Simulated annealing. Statistical Science 8(1):10-15
26. Ben-Daya M, Al-Fawzan M (1996) A simulated annealing approach for the one-machine mean tardiness scheduling problem. Eur J Oper Res 93:61-67
27. Parthasarathy S, Rajendran C (1997) A simulated annealing heuristic for scheduling to minimize mean weighted tardiness in a flowshop with sequence-dependent setup times of jobs: a case study. Prod Plan Control 8:475-483
28. Potts CN, Van Wassenhove LN (1991) Single machine tardiness sequencing heuristics. IIE T 23(4):346-354
29. Johnson DS, Aragon CR, McGeoch LA, Schevon C (1989) Optimization by simulated annealing: an experimental evaluation. Part I, Graph partitioning. Oper Res 37:865-891
