
Computers & Industrial Engineering 83 (2015) 217–225


Integrated scheduling of production and distribution to minimize total cost using an improved ant colony optimization method 1

Ba-Yi Cheng a,b, Joseph Y.-T. Leung a,b,c,⇑, Kai Li a,b

a School of Management, Hefei University of Technology, Hefei 230009, PR China
b Key Laboratory of Process Optimization and Intelligent Decision-making, Ministry of Education, Hefei 230009, PR China
c Department of Computer Science, New Jersey Institute of Technology, Newark, NJ 07012, USA

article info

Article history:
Received 17 June 2014
Received in revised form 12 February 2015
Accepted 22 February 2015
Available online 2 March 2015

Keywords:
Scheduling
Production
Distribution
Ant colony optimization
Heuristics

abstract
In this paper, we consider an integrated scheduling problem of production and distribution for manufacturers. In the production part, the batch-processing machines have fixed capacity and the jobs have arbitrary sizes and processing times. Jobs in a batch are processed together, provided that the total size of the jobs in the batch does not exceed the machine capacity. The processing time of a batch is the largest processing time of all the jobs in the batch. In the distribution part, the vehicles have identical transport capacity and all deliveries are made by a third-party logistics (3PL) provider. The objective is to minimize the manufacturer's total cost of production and distribution. Since the problem is NP-hard in the strong sense, we propose an improved ant colony optimization method for the production part and a heuristic method for the distribution part. We derive a lower bound on the optimal total cost and test the proposed heuristic against this bound on a large number of randomly generated instances. Our results show that the performance of the heuristic is excellent, while the running time is no more than five seconds for 200 jobs.

1. Introduction and literature review


In recent years, integrated scheduling has generated a lot of interest among researchers. In contrast to classical scheduling, this type of scheduling problem is concerned with not only the production part, but also the inventory, distribution and other parts of the supply chain. The objective of integrated scheduling is to obtain an overall optimization of the supply chain. The obtained plan becomes a detailed schedule that can provide effective guidance for operations management.

In this paper, we consider a class of integrated scheduling problems involving production and distribution. In the production part, we consider manufacturers having batch-processing machines and jobs having arbitrary sizes and processing times. This production mode is widely used in many industries such as the porcelain manufacturing industry, the semi-conductor manufacturing industry (Jula & Leachman, 2010), the food processing industry (Melouk, Damodaran, & Chang, 2004), and so on. Take the porcelain manufacturing industry for example (Carter & Norton, 2013). The production of porcelain products consists of two steps: the first step is to make a standard mold using clay and other specific materials, and the second step is to process the mold into porcelain in a calcinatory. The calcinatory is heated to a very high temperature that exceeds 1800 degrees Fahrenheit most of the time, and the calcination step often takes more than one day. To keep the temperature high, a large amount of energy is needed. After the calcination step, qualified products can be delivered to the customers. The porcelain products have different sizes and the calcinatory has a fixed capacity. Thus, it is possible to put several porcelain products into the calcinatory so that they are heated together, provided that the total size of the porcelain products does not exceed the capacity of the calcinatory. The processing time of a batch is the largest processing time of all the jobs in the batch. Clearly, minimizing the makespan will minimize the amount of energy used.

After the porcelain products have been made, they need to be shipped to the customers. There are a number of vehicles and each vehicle has a fixed capacity. To obtain a low distribution cost, it is desirable to minimize the number of deliveries. This is especially true when the manufacturer hires a third-party logistics (3PL) provider to deliver the products, since the 3PL provider will charge the manufacturer an amount proportional to the number of deliveries. Clearly, the distribution schedule is influenced by the production plan. Therefore, an integrated scheduling method is needed to achieve a total optimization of production and distribution.

Another example of batch-processing machines can be found in the semi-conductor industry (Uzsoy, 1994). In the final testing stage, integrated circuits are subject to a burn-in operation that applies thermal stress to the circuits. Those circuits that pass the
1 This manuscript was processed by Area Editor Maged M. Dessouky.
⇑ Corresponding author at: Department of Computer Science, New Jersey Institute of Technology, Newark, NJ 07012, USA. Tel.: +1 973 596 3387; fax: +1 973 596 5777. E-mail address: leung@oak.njit.edu (J.Y.-T. Leung).
http://dx.doi.org/10.1016/j.cie.2015.02.017
0360-8352/© 2015 Elsevier Ltd. All rights reserved.


burn-in test will be delivered to the customers, while those that are
found to be defective will be discarded. The circuits are put on the
boards which will then be put into an oven. The oven has a fixed
capacity. Each circuit has a size, e.g., several boards. Therefore,
several circuits can be put into the oven at the same time, provided
that the total size of the circuits does not exceed the capacity of
the oven. Since the processing time of the burn-in operation is much longer than that of other operations (e.g., 120 h versus 4–5 h), the burn-in operation is often the bottleneck in the manufacturing process. Minimizing the makespan will minimize the energy used
and maximize the throughput of the system. Once the circuits pass
the burn-in operation, they will be shipped to the customers.
Again, minimizing the number of deliveries will save money for
the manufacturer.
The purpose of this paper is to propose a method to tackle this
problem. We will develop an improved ant colony optimization
method to batch the jobs in the production part. We then employ
the First-Fit-Decreasing (FFD) rule used in the bin-packing
problem to group the jobs into delivery runs. Our objective is to
minimize the sum of the production and delivery costs.
Integrated scheduling includes the system of supplier,
manufacturer and customers. As such, it is a three-stage problem
(Hall & Potts, 2003). Selvarajah and Steiner (2009) proposed a 3/2-approximation algorithm to minimize delivery and inventory holding costs. Sawik (2009) extended the problem to a long-term
product case. Yeung, Choi, and Cheng (2011) and Osman and
Demirli (2012) considered the three-stage problem with time
windows and synchronized replenishment, respectively. Yimer and
Demirli (2010) proposed a division technique for the problem. The
problem was divided into two phases, i.e., the manufacturing and
the delivery phase, and a genetic algorithm was used to solve it.
Two-stage integrated scheduling problems consist of two types,
where one type is concerned with the supplier and manufacturer
and the other type is concerned with the manufacturer and
customers. Chen and Vairaktarakis (2005) showed that most of the
two-stage problems are also NP-hard. Chen and Hall (2007)
investigated the conflict between the optimal schedules of the
supplier and the manufacturer. They proposed a cooperation
mechanism for the two sides. Agnetis, Hall, and Pacciarelli (2006)
proposed an interchange cost that is incurred when the orders of
jobs are different in the optimal schedules of the two sides. They
also provided a cooperation scheme. Torabi, Ghomi, and Karimi
(2006) considered the objective of minimizing the average of
holding, setup and delivery costs per unit time of the supply chain, and they designed an intelligent algorithm. The other kind of two-stage problem is the scheduling between the manufacturer and its customers. Agnetis, Aloulou, and Fu (2014) considered the coordination of production and interstage batch delivery using a third-party logistics (3PL) provider. Algorithms for solving the
problem include approximation algorithms (Averbakh & Xue,
2007) and intelligent algorithms (Naso, Surico, Turchiano, & Kaymak, 2007; Zegordi, Abadi, & Nia, 2007). Other works
stressing the two-stage problems between the manufacturer and
customers include the joint scheduling of time-sensitive products
(Chen & Pundoor, 2006) and capacity allocation (Hall & Liu,
2010). Chen (2010) gave an excellent survey of integrated
production and outbound distribution scheduling.
Current research on joint scheduling focuses on the classical production model (Pinedo, 2002), in which a machine processes one job at a time. However, little research has been done on the production model with batch-processing machines and arbitrary-size jobs. In contrast to the classical production model, this type of production is more complex to solve. Uzsoy (1994) introduced several heuristics for the single-machine scheduling problem, and approximation ratios were analyzed by Zhang, Cai, Lee, and Wong (2001). Indeed, the problems addressed in Uzsoy (1994) and Zhang et al. (2001) are identical to the problem addressed


in this paper, except that they did not consider the delivery part.
Approximation algorithms with better performances were
proposed to solve single-machine problems (see Kashan, Karimi,
& Ghomi, 2009; Li, Li, Wang, & Liu, 2005) and multi-machine
problems (Cheng, Yang, & Ma, 2012; Cheng, Yang, Hu, & Li,
2014). Jula and Leachman (2010) provided a greedy heuristic
method and Parsa, Karimi, and Kashan (2010) provided a branch
and price algorithm. Intelligent algorithms were also applied to
solve the problem, including genetic algorithms (Sevaux & Peres, 2003; Koh, Koo, Kim, & Hur, 2005; Damodaran, Manjeshwar, & Srihari, 2006; Kashan, Karimi, & Jenabi, 2008), simulated
annealing (Melouk et al., 2004) and ant colony optimization
(Cheng, Wang, Yang, & Hu, 2013).
The rest of the paper is organized as follows. In Section 2,
we define the problem and provide a lower bound for the
optimal solution. In Section 3, we design an improved ant
colony optimization (IACO) method and show the detailed
implementation. Then we report the experimental results in
Section 4, where 36 levels of instances are tested to show the
performance of the algorithm. In Section 5, we conclude this
paper and give directions for future research.

2. Model and preliminaries


The integrated scheduling problem can be defined as follows. There are n jobs to be processed and delivered to the customer. The job set is J = {1, 2, …, n}. The size of job j is s_j and its processing time is t_j. Jobs are grouped into batches to be processed on a batch-processing machine. The capacity of the machine is B; i.e., jobs can be processed together provided that their total size is no more than B. The processing of a batch b_k cannot be interrupted until all the jobs in it are completed. The processing time of b_k, denoted by T_k, is the longest processing time among all the jobs in b_k. The completion time of b_k is denoted by C_k. We assume that C_0 = 0. Note that some batches may be empty; i.e., there is no job in the batch. If batch b_k is an empty batch, then its processing time T_k is zero. We assume that T_0 = 0. The number of non-empty batches is denoted by K. Given a set of batches, the production cost, PC, is a linear function of the total processing time; i.e.,

PC = \sum_{k=1}^{n} T_k.

Once jobs are completed, they can be delivered to the customers. For simplicity, we assume that the size of a job on the production side is the same as its size on the distribution side. The vehicles have a common capacity G; i.e., products can be delivered in one run provided that the total size of all the products in the delivery run does not exceed G. The delivery set is denoted by D = {d_1, d_2, …, d_n}, where d_l is the l-th delivery run. Note that some delivery runs may be empty; i.e., there is no job in the delivery run. The number of non-empty delivery runs is denoted by L. The distribution cost, DC, is a linear function of L, since each delivery has a similar cost in practice. That is,

DC = \delta L,

where \delta > 0. The objective is to minimize the total cost of production and distribution; i.e.,

TC = PC + DC.

We first define four variables that will be used in the following integer program. For all 1 \le k, l \le n: w_k = 1 if b_k is created, otherwise w_k = 0; y_l = 1 if d_l is created, otherwise y_l = 0. For all 1 \le j \le n: x_{jk} = 1 if job j is in b_k, otherwise x_{jk} = 0; z_{jl} = 1 if job j is in d_l, otherwise z_{jl} = 0. We assume that w_0 = 0 and x_{j0} = 0. We use P to represent the integrated scheduling problem under investigation. The integer program of P is defined as follows.
Minimize  TC = PC + DC                                          (1)
Subject to
T_k = \max\{t_j \mid x_{jk} = 1\},   k = 1, …, n                (2)
C_k = C_{k-1} + T_k,   k = 1, …, n                              (3)
\sum_{k=1}^{n} x_{jk} = 1,   j = 1, …, n                        (4)
\sum_{j=1}^{n} x_{jk} s_j \le B,   k = 1, …, n                  (5)
x_{jk} \le w_k,   j = 1, …, n; k = 1, …, n                      (6)
\sum_{l=1}^{n} z_{jl} = 1,   j = 1, …, n                        (7)
\sum_{j=1}^{n} z_{jl} s_j \le G,   l = 1, …, n                  (8)
z_{jl} \le y_l,   j = 1, …, n; l = 1, …, n                      (9)
L = \sum_{l=1}^{n} y_l                                          (10)
K = \sum_{k=1}^{n} w_k                                          (11)
PC = \sum_{k=1}^{n} T_k                                         (12)
DC = \delta L,   \delta > 0                                     (13)
C_0 = T_0 = w_0 = x_{j0} = 0                                    (14)
w_k, y_l, x_{jk}, z_{jl} \in \{0, 1\}  for all j, k, l          (15)

Objective (1) represents the total cost of production and distribution, whose two parts are given in (12) and (13), respectively. Constraint (2) gives the processing time of a batch. Constraints (2) and (3) together ensure that the processing of a batch is non-preemptive. Constraint (4) guarantees that each job is in exactly one batch. Constraint (5) ensures that the total size of all the jobs in a batch does not exceed the machine capacity B. Constraint (6) ensures that if x_{jk} = 1 for some j, then w_k must also be equal to 1. Constraint (7) ensures that each job is in exactly one delivery. Constraint (8) guarantees that the total size of all the products in a delivery run does not exceed the vehicle capacity G. Constraint (9) ensures that if z_{jl} = 1 for some j, then y_l must also be equal to 1. Eq. (10) gives the number of non-empty delivery runs, while Eq. (11) gives the number of non-empty batches. Constraint (15) indicates that the four decision variables are binary.

We now determine the computational complexity of P. We will show that bin-packing is actually a special case of P. Since bin-packing is NP-hard in the strong sense, P must also be NP-hard in the strong sense. Consider a special case of P where all jobs have unit processing time and the machine capacity is identical to the vehicle capacity; i.e., B = G. Then we can view each batch as a bin with bin capacity B. Let K be the minimum number of batches (or bins). The optimal total cost will be TC = (1 + \delta)K. Since \delta is a constant, the optimal total cost is a linear function of the minimum number of batches (or bins) formed. Therefore, P becomes a bin-packing problem. Hence we have the following proposition.

Proposition 1. P is NP-hard in the strong sense even when all jobs have identical processing times.

2.1. Lower bound

Since P is NP-hard in the strong sense, it is computationally infeasible to obtain optimal solutions. Therefore, we design a heuristic to tackle this problem. In order to test the performance of our heuristic, we need a lower bound on the optimal solution against which we compare our heuristic. Consider a relaxation of the problem P where each job j with size s_j is split into s_j unit-size jobs. Each unit-size job has a processing time t_j, the same as job j. After the relaxation, we can obtain a lower bound, LB, by the following rule R.

Rule R
Step 1. Sort the unit-size jobs in non-increasing order of their processing times.
Step 2. Assign the first B jobs to the first batch, the next B jobs to the second batch, and so on, until all jobs have been assigned. Let {b'_1, b'_2, …, b'_{K'}} be the batch set obtained in this process.
Step 3. Calculate T'_k using formula (2).
Step 4. In the distribution part, we can use the same relaxation and get a lower bound on L as L' = \lceil \sum_{j=1}^{n} s_j / G \rceil.
Step 5. Compute LB = \sum_{k=1}^{K'} T'_k + \delta \lceil \sum_{j=1}^{n} s_j / G \rceil.

Proposition 2. LB = \sum_{k=1}^{K'} T'_k + \delta \lceil \sum_{j=1}^{n} s_j / G \rceil is a lower bound of the optimal total cost for P.

Proof. In Rule R, we split each job j with size s_j and processing time t_j into s_j jobs with unit size and processing time t_j. Clearly, this transformed instance is a relaxation of the original problem P. Thus, an optimal total cost of this transformed instance is a lower bound of the optimal total cost for P. We now determine the optimal total cost of this transformed instance.

Since the jobs in the transformed instance have unit size, the minimum number of deliveries is given by L' = \lceil \sum_{j=1}^{n} s_j / G \rceil. In the production part, the minimum number of batches is obtained by Step 2 of Rule R; i.e., K' = \lceil \sum_{j=1}^{n} s_j / B \rceil. Since the processing time of a batch is determined by the largest processing time of all the jobs in the batch, it is better to group the B jobs with the largest processing times into a single batch. Steps 1 and 2 of Rule R accomplish this. Thus, the minimum production cost is \sum_{k=1}^{K'} T'_k. Therefore, LB = \sum_{k=1}^{K'} T'_k + \delta \lceil \sum_{j=1}^{n} s_j / G \rceil is the optimal total cost of the transformed instance. □
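The five steps of Rule R translate directly into code. The sketch below is our own illustration (the function name is not from the paper): it builds the unit-size relaxation, fills batches of exactly B units in non-increasing order of processing time, and adds the delivery term.

```python
from math import ceil

def rule_r_lower_bound(sizes, times, B, G, delta):
    """Sketch of Rule R: split job j into s_j unit-size copies of
    processing time t_j, batch the sorted units B at a time, and
    bound the delivery count by ceil(total size / G)."""
    # Relaxation: one unit-size copy of t_j per unit of s_j.
    unit_times = []
    for s, t in zip(sizes, times):
        unit_times.extend([t] * s)
    # Step 1: non-increasing order of processing times.
    unit_times.sort(reverse=True)
    # Steps 2-3: batch k holds units kB .. (k+1)B - 1; T'_k is the
    # longest time in the slice, i.e. its first element (list is sorted).
    pc = sum(unit_times[i] for i in range(0, len(unit_times), B))
    # Step 4: minimum number of deliveries for the relaxed instance.
    deliveries = ceil(sum(sizes) / G)
    # Step 5: LB = sum of T'_k + delta * L'.
    return pc + delta * deliveries
```

For instance, with sizes (3, 2), times (5, 2), B = G = 4 and δ = 1, the relaxed units are (5, 5, 5, 2, 2), giving K' = 2 batches with T'_1 = 5, T'_2 = 2 and L' = 2, so LB = 7 + 2 = 9.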

In Section 4, we will test the performance of our heuristic against the lower bound. We compare the solutions obtained by IACO against LB to show the effectiveness of our heuristic.
3. Improved ant colony optimization
Ant colony optimization (ACO) (Dorigo, Maniezzo, &
Colorni, 1996) is an intelligent algorithm inspired by the behavior
of ants searching for food in nature. ACO has been applied to
solve combinatorial optimization problems such as the Travelling
Salesman Problem (Mavrovouniotis & Yang, 2013), Vehicle
Routing Problem (Balseiro, Loiseau, & Ramonet, 2011; Gajpal & Abad, 2009) and Flowshop Scheduling Problem (Neto & Filho, 2011).
In this section, we propose an improved ant colony
optimization (IACO) method to minimize the total cost of the
integrated scheduling problem. First, a batch list is obtained based
on the sizes of the jobs. Then a new candidate list is designed to


improve the performance. A faster convergence and better solutions are obtained in the production part. A heuristic is proposed to obtain an effective distribution schedule.
3.1. Coding
M artificial ants are used to search for solutions. Job j is represented by a node j and the path from i to j is represented as i → j. In each iteration, ants find new paths using the pheromone and the heuristic value of the paths. If a path is selected by an ant, the ant will reinforce the pheromone on the path. The pheromone evaporates at a constant rate. The maximum number of iterations is denoted by N_max; i.e., after N_max iterations, a feasible production schedule is obtained.
The parameters of IACO are as follows.

M:       number of ants
m:       the m-th ant
τ_ij:    pheromone density on the path i → j
τ_min:   minimum density of pheromone
τ_max:   maximum density of pheromone
h_ik:    probability of assigning job i to b_k
η_ik:    heuristic value of assigning job i to b_k
ρ:       evaporation rate of pheromone
N:       the N-th iteration
N_max:   maximum number of iterations
p:       a feasible solution for P
p_lb:    the best solution in one iteration
p_gb:    the best solution among all the iterations
Δτ_ij:   the change of pheromone density for τ_ij
Ω_k:     candidate list of batch b_k

Jobs should first be grouped into batches before being processed. Since K and the index of the current batch cannot be determined before the end of IACO, we use h_ik to denote the affinity of job i with batch b_k, where h_ik is defined as:

h_{ik} = \frac{\sum_{j \in b_k} \tau_{ij}}{|b_k|},               (16)

where |b_k| is the number of jobs that have been assigned to b_k. (We assume that b_k is non-empty when we compute h_ik.) In a schedule, we can view each batch as a rectangle whose length is the processing time of the batch and whose width is the machine capacity B. A job in a batch occupies a rectangle whose length is its processing time and whose width is its size. The unoccupied area in the batch is the wasted space of the batch. Clearly, minimizing the total wasted space of all the batches will minimize the makespan. It is easy to see that a smaller difference in processing times between the jobs in a batch will minimize the wasted space of the batch. Therefore, we define the heuristic value of assigning job i to b_k as:

\eta_{ik} = \frac{1}{1 + |T_k - t_i|},                           (17)

where T_k is the processing time of b_k.
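As an illustration, Eqs. (16) and (17) can be computed as follows; the function names and the dict-of-dicts pheromone store are our own illustrative choices, not the paper's implementation.

```python
def affinity(tau, i, batch):
    """Eq. (16): mean pheromone density between job i and the jobs
    already assigned to the (non-empty) batch b_k.
    tau[i][j] holds the pheromone density on the path i -> j."""
    return sum(tau[i][j] for j in batch) / len(batch)

def heuristic_value(T_k, t_i):
    """Eq. (17): jobs whose processing time is close to the batch's
    current processing time T_k waste less space, so they score higher."""
    return 1.0 / (1.0 + abs(T_k - t_i))
```

For example, a job whose processing time equals the batch's current T_k gets the maximum heuristic value of 1, and the value decays as the gap |T_k − t_i| grows.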

3.2. Candidate list

The candidate list is the set of jobs that can be selected and added to the current batch. In traditional ACO, any job can be selected and added to the candidate list if the machine capacity is not exceeded. When there are a large number of jobs, such a candidate list may lead to a long running time for ACO. In practice, jobs often have the same size, so we classify jobs with the same size as the same type. This candidate list can reduce the running time, and the number of types is no more than n. In those instances where many jobs have the same size, the efficiency is much better than that of traditional ACO.

Proposition 3. If we always choose the longest job among jobs with the same size to join the current batch, it has a tendency of lowering the makespan of the schedule.

Proof. Consider a job as a rectangle whose length and width are respectively t_j and s_j. The processing of a batch is also a rectangle whose length and width are respectively T_k and B. Then in a batch, denote the area that has not been covered by jobs as idle space (IS_k). For a feasible solution, we have

IS = B \sum_{k=1}^{K} T_k - \sum_{j=1}^{n} s_j t_j,

where IS is the total idle space in the solution. Since \sum_{j=1}^{n} s_j t_j is a constant, minimizing IS is equivalent to minimizing \sum_{k=1}^{K} T_k, which is the makespan of the schedule. Now, if we always choose the longest job among jobs with the same size to join the current batch, it will lower IS and hence \sum_{k=1}^{K} T_k. □

Proposition 4. When selecting jobs from the candidate list, if i and j satisfy s_i < s_j and s_i t_i > s_j t_j, then selecting i to add to the current batch is better than selecting j.

Proof. Using the rectangles in the proof of Proposition 3, we see that i leads to a smaller IS_k since s_i < s_j and s_i t_i > s_j t_j. Therefore, selecting i is better. □

Using the above propositions, we create candidate lists with the following rule.

Candidate List Rule
Step 1. Classify jobs into different types by their sizes. Select the job with the longest processing time in each type and add it to the candidate list. When job j is put into a batch from the candidate list, update the candidate list as in Steps 2–4.
Step 2. Find the type to which j belongs. If all the jobs in this type have been assigned, delete j from the candidate list and go to Step 4. Otherwise, go to Step 3.
Step 3. Put the job of this type with the longest processing time into the candidate list. Go to Step 4.
Step 4. Calculate the total size of the jobs in the current batch and delete from the candidate list all jobs whose size is larger than the remaining machine capacity. □
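A minimal sketch of the Candidate List Rule, assuming jobs are given as (size, processing time) pairs indexed by position; the data structures and helper names here are ours, not the paper's.

```python
def build_candidate_list(jobs):
    """Step 1: group jobs by size (one 'type' per distinct size) and,
    within each type, sort by non-increasing processing time. The
    candidate list holds the longest unassigned job of each type."""
    by_size = {}
    for j, (s, t) in enumerate(jobs):
        by_size.setdefault(s, []).append((t, j))
    for queue in by_size.values():
        queue.sort(reverse=True)          # longest job of each type first
    candidates = {s: queue[0] for s, queue in by_size.items()}
    return by_size, candidates

def assign_job(size, by_size, candidates, remaining_capacity):
    """Steps 2-4: move the chosen type's longest job into the batch,
    promote the next job of that type (if any), and drop candidates
    that no longer fit in the remaining machine capacity."""
    t, j = candidates.pop(size)
    by_size[size].pop(0)
    if by_size[size]:                     # Step 3: promote the next job
        candidates[size] = by_size[size][0]
    remaining_capacity -= size
    # Step 4: delete candidates larger than the remaining capacity.
    for s in [s for s in candidates if s > remaining_capacity]:
        del candidates[s]
    return j, remaining_capacity
```

Because the candidate list holds at most one job per size type, an ant examines at most e entries per selection instead of all unassigned jobs, which is where the speed-up over traditional ACO comes from.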

3.3. Update of pheromone

In IACO, ants choose paths by pheromone density and heuristic information. For ant m, the probability of choosing job i from the candidate list Ω_k of b_k is

P_{ik} = \begin{cases} \dfrac{h_{ik}^{\alpha} \eta_{ik}^{\beta}}{\sum_{j \in \Omega_k} h_{jk}^{\alpha} \eta_{jk}^{\beta}} & \text{if } i \in \Omega_k \\ 0 & \text{otherwise} \end{cases}        (18)

where α and β are the weights of the pheromone and the heuristic information, respectively.

Pheromone evaporates at a constant rate to prevent excessive accumulation. In order to improve the convergence rate, we use a rotation method to reinforce the pheromone of p_lb and p_gb. At the same time, we define q_ij as the number of assignments where jobs i and j are put in the same batch. The updating rule of the pheromone is

\tau_{ij}(t+1) = (1 - \rho)\,\tau_{ij}(t) + q_{ij}\,\Delta\tau_{ij},        (19)

where ρ is the constant evaporation rate. Δτ_ij is the reinforcement for the global best or local best solution and is defined as

\Delta\tau_{ij} = \begin{cases} f_{best}^{-1} & \text{if } i \text{ and } j \text{ are in the same batch} \\ 0 & \text{otherwise} \end{cases}        (20)

where f_best is the objective value of a global best or local best solution.

If only the pheromone of the global best solution is reinforced, then the ants will concentrate on that solution, which often leads to premature convergence. However, the local best solution, p_lb, often has different values in different iterations, so reinforcement of p_lb makes the pheromone increase across more solutions. In IACO, a rotation reinforcement method is designed; i.e., after the iterations whose index N is a multiple of k (N = κk, 1 ≤ N ≤ N_max), the pheromone on the paths of p_lb is increased; after all other iterations, the pheromone on the paths of p_gb is increased. Here k is a constant which is initialized at the beginning of the experiment.

Furthermore, in order to avoid a large gap between the pheromone of the best solution and that of other solutions, the pheromone density is restricted to the interval [τ_min, τ_max]; i.e., when the pheromone density becomes more than τ_max, it is changed to τ_max, and when the pheromone density becomes less than τ_min, it is changed to τ_min.

The value of τ_max is defined as

\tau_{max} = \frac{1}{(1 - \rho) f_{p_{gb}}},        (21)

where ρ is the constant evaporation rate and f_{p_gb} is the objective value of the global best solution.

The value of τ_min is defined as

\tau_{min} = \frac{\tau_{max}\,(1 - \sqrt{c})}{(e/2 - 1)\sqrt{c}},        (22)

where e is the number of types and c is a constant that is often set as c = 0.05. Note that if e is small, then the problem is easier to solve, and if e is large, then the problem is harder to solve. From Eq. (22), we see that if e is small, then τ_min will have a larger value, and hence the interval [τ_min, τ_max] will be smaller. Since the problem is easier to solve, we can afford a smaller interval. Conversely, if e is large, then τ_min will have a smaller value, and hence the interval [τ_min, τ_max] will be larger. Thus, τ_min is chosen to adapt to easy cases versus hard cases.

3.4. Distribution algorithm

After jobs are completed, they are delivered to the customers by vehicles whose transport capacities are G. Completed jobs are distributed by the following algorithm, which is primarily the First-Fit-Decreasing heuristic used in the bin-packing problem.

Distribution Algorithm A1
Step 1. Sort the completed jobs in non-increasing order of their sizes. Jobs with the same size are ordered arbitrarily. Put all the jobs in the candidate list.
Step 2. Put the first job in the candidate list into the first delivery d_1.
Step 3. Take the next job in the candidate list and put it into the lowest-indexed delivery d_l into which the job fits. If no delivery can accommodate this job, put it in a new delivery.
Step 4. Repeat Step 3 until all jobs in the candidate list have been assigned. Output the distribution schedule as {d_1, d_2, …, d_L}. □

Proposition 5. The running time of algorithm A1 is O(n log n). It generates a feasible distribution schedule for which the distribution cost is no more than 2·DC*, where DC* is the optimal distribution cost.

Proof. To obtain the time complexity, we analyze the four steps of algorithm A1. Step 1 takes O(n log n) time. Step 2 takes constant time. In Step 3 we need to find the first delivery into which the job fits. If we maintain all non-empty deliveries in a heap, then Step 3 takes O(log n) time. Step 4 repeats Step 3 n times, so the total time taken in Step 3 is O(n log n). Hence, the overall time complexity of algorithm A1 is O(n log n).

Now we analyze the performance of algorithm A1. Let π* be an optimal distribution schedule, and let there be L* deliveries in π*. We have

\sum_{j=1}^{n} s_j \le G L^*,        (23)

since the total size of the completed jobs in one delivery is no more than G. Consider the deliveries of A1. For any 1 ≤ g, h ≤ L with g ≠ h, we have

\sum_{j \in d_g} s_j + \sum_{j \in d_h} s_j > G,        (24)

since otherwise d_g and d_h could be combined into one delivery. Select the ordered pairs (g, h) with h = g + 1 (taking h = 1 when g = L), so that each delivery appears in exactly two pairs. Summing (24) over these pairs, we have

2 \sum_{j=1}^{n} s_j = \sum_{g=1}^{L} \Big( \sum_{j \in d_g} s_j + \sum_{j \in d_h} s_j \Big) > L G.        (25)

By (23) and (25), we have

L G < 2 \sum_{j=1}^{n} s_j \le 2 G L^*.

Thus, L < 2L*. By (13), we have DC < 2·DC*. □
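Algorithm A1 is essentially First-Fit-Decreasing. The sketch below is our own illustration, not the paper's code; for clarity it scans the open delivery runs linearly, which costs O(nL) rather than the O(n log n) achievable with the heap-based search mentioned in the proof of Proposition 5.

```python
def distribute_ffd(sizes, G):
    """Algorithm A1 (First-Fit-Decreasing): sort completed jobs in
    non-increasing order of size, then place each job into the
    lowest-indexed delivery run with enough residual vehicle capacity."""
    deliveries = []                        # each entry: [load, [job ids]]
    for j in sorted(range(len(sizes)), key=lambda j: -sizes[j]):
        for run in deliveries:
            if run[0] + sizes[j] <= G:     # Step 3: first run that fits
                run[0] += sizes[j]
                run[1].append(j)
                break
        else:                              # no run fits: open a new one
            deliveries.append([sizes[j], [j]])
    return deliveries
```

For example, with job sizes (5, 3, 3, 2, 2) and G = 6, the schedule uses three delivery runs with loads 5, 6 and 4, consistent with the fact that at most one run can be less than half full.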


3.5. Implementation of IACO
The detailed steps of IACO is as follows.
Step 1. Classify all the jobs into types by their sizes. In each type,
order the jobs in non-increasing order of their processing
times.
Step 2. Set the parameters of IACO. Initialize the pheromone
density between jobs and set N 1; m 1.
Step 3. Create a batch and the candidate list for ant m. Select the
job with the longest processing time and put it into the
batch. Delete the job from the candidate list.
Step 4. Select a job i from the candidate list by (18). Compare the
Level
Value

J1
Factor
50

J2
J3
Numberof jobs
100
200

S1
[1,10]

job with other jobs in the candidate list, and if j satisfies


si > sj and siti < sjtj, put j in the batch. If more than one

Table 2
Experimental
results

B < G (%)
for J1
insta

B G (%)

J1S1T1

0.00

J1S1T2
J1S1T3
J1S2T1
J1S2T2
J1S2T3
J1S3T1
J1S3T2
J1S3T3
J1S4T1
J1S4T2
J1S4T3

0.00
1.63
0.00
1.68
1.11
1.33
1.24
3.88
1.58
1.42
2.71

In order to test the performance of IACO, we design 36 levels


of instances and report the results. We show the gap between the
solutions found by IACO and the lower bound, normalized by the
lower bound. We also show the average running time of IACO,
measured in seconds. In each level, there are three parameters: the
number of jobs, the processing times of jobs and the job sizes. The
number of jobs are denoted by J1, J2 and J3 which represent an
instance with 50, 100 and 200 jobs, respectively. The processing
times are divided into three levels as T1, T2 and T3 which indicate
that the processing times of jobs obey a uniform distribution in the
intervals of [1,10], [1,20] and [1,30], respectively. In a similar
fashion, job sizes are divided into four levels S1, S2, S3 and S4,
which indicate that the sizes of the jobs obey a uniform
distribution in the interval [1,10], [10,20], [10,30] and [1,40],
respectively. The levels are shown in Table 1. A level can be
denoted by JxSyTz. For example, J2S1T3 represents the level
where there are 100 jobs, sj obeys the uniform distribution in the
interval [1,10] and tj obeys the uniform distribution in the interval
[1,30]. For each of the 36 levels, we randomly generate 10
S2
Intervalof sj
[10,20]

instances.

S3

S4

T1

[10,30]

[1,40]

[1,10]

T2
Intervalof tj
[1,20]

T3
[1,30]

We choose three different combinations of the machine

B > G (%)

B < G (%)

B G (%)

B > G (%)

B < G (%)

B G (%)

B> G(%)

0.00

0.00

0.00

0.00

0.00

0.00

0.00

0.00

1.198

0.00
0.00
1.72
1.79
1.64
3.44
1.43
4.23
1.08
0.83
2.16

0.00
0.00
1.08
3.03
1.82
1.72
1.40
1.61
1.42
1.83
0.54

0.00
1.39
2.19
2.59
1.34
2.61
2.16
2.04
1.17
1.35
0.98

0.00
0.00
1.53
0.80
1.20
1.08
2.28
0.53
2.28
1.44
1.17

0.00
1.61
1.89
1.23
0.74
1.59
2.76
1.27
1.69
0.92
1.30

0.00
0.00
1.33
0.93
2.13
2.89
3.05
2.41
1.28
0.67
1.63

0.00
0.00
1.29
1.96
2.41
2.22
1.01
3.09
1.09
0.67
0.48

0.00
0.00
5.30
1.96
0.87
2.83
0.84
2.11
1.17
1.18
0.63

1.205
1.302
0.957
1.702
1.131
1.375
1.366
1.445
1.358
1.944
1.509

nces.

job satisfies the condition, select the job with maximal
sj tj ≥ si ti. If no job satisfies the condition, put i in the batch.
Then update the candidate list.
Step 5. Repeat Step 4 until the candidate list is empty.
Step 6. Repeat Steps 3–5 until all the jobs have been assigned to
batches. A feasible solution has now been obtained. Set
m ← m + 1. If m > M, then go to Step 7, else go to
Step 3.
Step 7. Compare all the obtained solutions and find the best
solution. Update the pheromone by (19)–(22). Test
whether N = Nmax. If N = Nmax, then go to Step 8, else
set N ← N + 1 and begin the next iteration (i.e., execute
Steps 3 to 6).
Step 8. When N = Nmax, update the global best solution and
output the production schedule.
Step 9. Execute the distribution algorithm A1 to generate the
distribution schedule. Calculate the production cost and
distribution cost by (10) and (11). Output the total cost
TC.
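Steps 3–6 build batches greedily from a candidate list of jobs that still fit within the capacity B. The following is a minimal sketch of that batching skeleton only: the ants' pheromone-biased transition rule of Eq. (18) is replaced here by a plain size-times-processing-time priority, so this is an illustration of the loop structure, not the full IACO:

```python
def build_batches(jobs, B):
    """Greedy sketch of the batch-construction loop (Steps 3-6).

    jobs: list of (size, time) pairs; B: machine capacity.
    A batch's processing time is the largest time among its jobs.
    """
    # seed order and in-batch picks both use a size*time priority
    remaining = sorted(jobs, key=lambda j: j[0] * j[1], reverse=True)
    batches = []
    while remaining:
        seed = remaining.pop(0)              # open a new batch (Step 3)
        batch, load = [seed], seed[0]
        # candidate list: unassigned jobs that still fit (Steps 4-5)
        candidates = [j for j in remaining if load + j[0] <= B]
        while candidates:
            pick = max(candidates, key=lambda j: j[0] * j[1])
            batch.append(pick)
            load += pick[0]
            remaining.remove(pick)
            candidates = [j for j in remaining if load + j[0] <= B]
        batches.append(batch)                # batch full: start over (Step 6)
    return batches

batches = build_batches([(4, 7), (6, 2), (3, 9), (5, 5), (2, 1)], B=10)
# batches run one after another on the single machine
makespan = sum(max(t for _, t in b) for b in batches)
```

In the actual algorithm, M ants each build such a solution per iteration and the pheromone trails bias which job is picked next.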
4. Computational experiments

Computational experiments are performed to test the
performance of our heuristic. In Section 4.1, we describe the
parameter settings used in the experiments. In Section 4.2, we
report the experimental results.

We choose three different combinations of the machine capacity (B)
and the vehicle capacity (G). These are: (1) B = 40 and G = 60 for
the case B < G, (2) B = G = 40 for the case B = G, and (3) B = 60
and G = 40 for the case B > G. We also want to test the effect of
the weight (d) of the distribution cost on the performance of the
heuristic. We choose five different values of d:
d = 0.2, 0.5, 1.0, 1.5, 5.0.
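The vehicle capacity G enters through the distribution stage. The paper's distribution algorithm A1 is not reproduced in this section; purely as an illustration of the kind of loading such a heuristic performs, a first-fit assignment of completed jobs to identical vehicles of capacity G can be sketched as follows (function name and policy are ours, not the paper's):

```python
def load_vehicles(job_sizes, G):
    """First-fit loading of jobs into identical vehicles of capacity G.

    Illustrative only: stands in for a distribution heuristic such as
    the paper's A1, which is not reproduced here.
    """
    vehicles = []   # each vehicle is a list of job sizes
    loads = []      # running load of each vehicle
    for s in job_sizes:
        for i, load in enumerate(loads):
            if load + s <= G:        # first vehicle with room left
                vehicles[i].append(s)
                loads[i] += s
                break
        else:                        # no vehicle fits: open a new one
            vehicles.append([s])
            loads.append(s)
    return vehicles

trips = load_vehicles([30, 25, 20, 15, 10], G=40)
```

Fewer vehicle trips means a lower distribution cost, which is why the weight d of that cost matters to the overall schedule.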
In the experiment, we try different values of M. We find that
when M > 20, there is very little improvement. So we set

Table 1
Levels of instances.

Level             Meaning
J1, J2, J3        50, 100 and 200 jobs, respectively
S1, S2, S3, S4    sj uniform on [1,10], [10,20], [10,30], [1,40]
T1, T2, T3        tj uniform on [1,10], [1,20], [1,30]
Table 3
Additional experimental results for J1 instances.

Instances   d = 0.2                                 d = 5.0
            B < G (%)  B = G (%)  B > G (%)         B < G (%)  B = G (%)  B > G (%)
J1S1T1      0.00       0.00       0.00              0.00       0.00       0.00
J1S1T2      0.00       0.00       0.00              0.00       0.00       0.00
J1S1T3      0.00       0.00       0.00              0.00       0.00       1.30
J1S2T1      2.21       2.72       0.98              2.45       2.38       2.93
J1S2T2      2.87       2.79       1.85              2.33       1.28       3.25
J1S2T3      3.01       2.06       3.25              2.68       1.96       1.39
J1S3T1      2.68       1.90       2.38              3.01       2.67       3.00
J1S3T2      1.69       2.71       2.36              3.27       3.25       2.97
J1S3T3      2.64       3.61       2.90              3.11       1.63       2.58
J1S4T1      2.03       1.68       2.37              1.80       2.36       1.65
J1S4T2      2.39       2.08       1.39              2.48       1.03       2.89
J1S4T3      2.01       2.57       1.63              2.93       1.67       2.61

M = 20. In Eq. (18), we also try different values of a and b.
We find that in most instances IACO has a good performance
when a = 1 and b = 4. Other parameters are set as q = 0.1 and
Nmax = 200, and in Eq. (22), c is set as c = 0.05. In the
rotation scheme, k is set as k = ⌈Nmax/n⌉ = 3, where ⌈x⌉
represents the smallest integer not less than x.


We see that IACO obtains the best results when solving
J1S1T1 and J1S1T2, where the GAP values are zero across the
board (i.e., for all values of d, B and G). In both cases, the size of
each job obeys the uniform distribution on the interval [1,10].
Since the job sizes are small compared with the capacity of the
machine, jobs have more opportunities to form full batches,
i.e., batches with total size B. Since many batches are full, it is
not difficult to find an optimal solution. Comparing the various

The algorithm is implemented in JAVA on an Intel Duo
CPU, 2.936 GHz/1.96 GB RAM computer. The solutions of
IACO are compared with the lower bound given in Proposition 2.
For each level, we randomly generate 10 instances, and each
instance is executed 200 times. For each instance, we use GAP
values of d, we see that IACO performs better when d = 1.0,
where the largest GAP is 2.76%. For d = 1.5, the largest GAP
is 5.30%, and for d = 0.5, the largest GAP is
to show the solution of
IACO compared with the lower bound, where GAP is defined as

GAP = (TC − LB)/LB × 100%,
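As a quick numeric check of this definition, GAP can be computed directly (a minimal sketch; the cost values below are made up for illustration):

```python
def gap_percent(tc_best, lb):
    # GAP = (TC - LB) / LB * 100, where TC is the best total cost
    # over the repeated executions and LB is the lower bound
    return (tc_best - lb) / lb * 100.0

# e.g., best of several executions against a lower bound of 100
gap = gap_percent(min([103.2, 101.7, 104.9]), lb=100.0)  # about 1.7%
```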

Columns 2–4 show the results when d = 0.5; column 2 is for
the case B < G, column 3 is for the case B = G, and column 4
is for the case B > G. Columns 5–7 show the results when
d = 1.0, and columns 8–10 show the results when d = 1.5.
Column 11 presents the average running time of the 10
instances, measured in seconds.

4.2. Experimental results


where TC is the best value among the 200 executions and LB
is the lower bound given by Proposition 2.


Table 4
Experimental results for J2 instances.

Instances   d = 0.5                     d = 1.0                     d = 1.5                     Avg. time (s)
            B<G (%) B=G (%) B>G (%)     B<G (%) B=G (%) B>G (%)     B<G (%) B=G (%) B>G (%)
J2S1T1      0.00    0.00    0.00        0.00    0.00    0.00        0.00    0.00    0.00        2.293
J2S1T2      0.00    0.00    0.00        0.00    0.00    0.00        0.00    0.00    0.00        2.302
J2S1T3      0.00    0.00    0.00        0.00    0.83    0.99        0.00    0.00    0.00        2.986
J2S2T1      1.27    0.91    0.64        0.84    0.79    1.13        0.80    1.83    1.46        2.414
J2S2T2      0.52    1.69    0.33        1.84    1.25    1.34        1.31    1.29    0.95        2.725
J2S2T3      1.30    1.77    1.21        1.24    0.23    1.81        2.68    1.51    1.86        2.796
J2S3T1      1.90    1.64    0.95        2.63    1.63    1.73        2.64    2.12    1.99        2.822
J2S3T2      1.33    1.88    1.34        0.75    2.81    1.34        1.48    1.82    1.84        2.881
J2S3T3      2.12    2.01    0.74        1.83    1.43    1.40        2.31    1.09    2.68        3.182
J2S4T1      1.20    2.15    0.89        0.61    1.18    1.47        1.46    1.64    1.32        3.426
J2S4T2      1.03    0.61    0.55        0.34    0.34    0.50        1.28    1.21    1.46        3.974
J2S4T3      0.60    0.72    1.31        0.89    0.90    0.90        0.52    0.59    0.91        3.861


For each level, we calculate the average of the 10 GAP values
of the 10 instances, and the results are reported in Tables 2–4.
Table 2 presents the experimental results for instances with 50
jobs and d = 0.5, 1.0, 1.5. Column 1 shows the level of the
instances.


Table 5
Additional experimental results for J2 instances.

(GAP values, in %, for d = 0.2 and d = 5.0 under B < G, B = G and B > G.)


4.23%. With respect to the running time, all instances take less
than two seconds, showing that IACO achieves excellent
performance with a short running time.

Table 6
Experimental results for J3 instances.

Instances   d = 0.5                             d = 1.0
            B<G (%)  B=G (%)  B>G (%)           B<G (%)  B=G (%)  B>G (%)
J3S1T1      0.21     0.39     0.75              0.37     0.75     0.81
J3S1T2      0.12     0.69     0.58              0.17     0.52     0.44
J3S1T3      0.24     0.59     0.66              0.61     0.80     0.18
J3S2T1      1.18     0.77     0.65              0.83     0.79     1.22
J3S2T2      0.66     1.62     0.42              1.71     1.10     1.10
J3S2T3      2.22     1.74     1.19              1.12     2.08     1.64
J3S3T1      1.01     1.55     1.00              1.73     0.74     1.42
J3S3T2      1.20     1.78     1.21              0.87     2.01     1.33
J3S3T3      2.00     2.16     0.84              1.74     1.50     1.39
J3S4T1      1.14     2.10     0.84              0.79     1.12     1.66
J3S4T2      1.07     0.77     0.64              0.41     0.41     0.73
J3S4T3      0.50     0.41     1.27              0.84     0.74     0.93

We
want to test whether extreme values of d can affect the
performance of the algorithm. Table 3 presents the experimental
results for instances with 50 jobs and d = 0.2, 5.0. As can be
seen from Table 3, the performance of the algorithm when
d = 0.2 is similar to that for d = 5.0. Moreover, the results
shown in Table 3 are comparable to those of Table 2.
Table 4 reports the results for instances with 100 jobs and
d = 0.5, 1.0, 1.5. IACO also finds optimal solutions for J2S1T1
and J2S1T2, where sj obeys the uniform distribution on the
interval [1,10]. The GAP values are even better than those in
Table 2. The



Fig. 2. Performance of IACO in 1000 iterations; d = 1.0.

largest GAP value is 2.81%, compared with 5.30% in Table 2.
In terms of performance, there is not much difference among the
various values of d. The average running time is higher than that
in Table 2, but on all instances the average running time is less
than four seconds.
Table 5 presents the results for instances with 100 jobs and
d = 0.2, 5.0. Again, the results for d = 0.2 are similar to those
for d = 5.0, even though the two d values are quite different. The
results in Table 5 are slightly higher than those in Table 4, but
not by much.
Table 6 presents the results for large-scale instances, i.e.,
instances with 200 jobs. In this case, IACO does not find an
optimal solution for any instance; i.e., no GAP value is zero.
Similar to Table 2, IACO obtains better performance when
d = 1.0 (the largest GAP value is 2.08%) than when d = 0.5
(the largest GAP value is 2.22%) or d = 1.5 (the largest GAP
value is 3.32%). The average running time is a little higher than
in the cases of 50 and 100 jobs, with the largest average running
time close to five seconds.
Table 7 presents the results for instances with 200 jobs and
d = 0.2, 5.0. As can be seen from Table 7, the results for d = 0.2

Fig. 3. Performance of IACO in 1000 iterations; d = 1.5.

Table 7
Additional experimental results for J3 instances.

Instances   d = 0.2                             d = 5.0
            B<G (%)  B=G (%)  B>G (%)           B<G (%)  B=G (%)  B>G (%)
J3S1T1      0.00     0.00     0.00              0.00     0.00     0.31
J3S1T2      0.00     0.25     0.00              0.40     0.00     0.00
J3S1T3      1.31     2.03     2.50              1.98     2.18     2.46
J3S2T1      1.95     2.55     2.46              1.78     2.31     3.15
J3S2T2      2.17     2.61     2.74              2.80     3.00     2.49
J3S2T3      2.34     4.45     2.33              2.18     2.47     2.18
J3S3T1      1.91     2.45     2.37              2.44     3.00     2.77
J3S3T2      4.35     2.13     2.87              2.66     2.17     3.33
J3S3T3      2.84     2.37     2.49              1.67     3.54     2.39
J3S4T1      2.08     2.39     2.16              2.30     2.09     1.99
J3S4T2      2.80     3.12     2.74              2.58     2.60     2.34
J3S4T3      3.10     2.17     2.46              2.54     2.22     2.61

Fig. 4. Performance of IACO in 1000 iterations; d = 0.2.

Fig. 1. Performance of IACO in 1000 iterations; d = 0.5.

Fig. 5. Performance of IACO in 1000 iterations; d = 5.0.


are very similar to those for d = 5.0. Moreover, the results shown
in Table 7 are similar to those in Table 6.
From Tables 2–7, we see that IACO has excellent performance
and runs fast. Most of the time, it gives a GAP value of about
2–3%; the largest GAP is less than 6%. The running time is also
short: most instances take 2–3 s, and the longest takes about 5 s.
To investigate the performance of IACO further, we take a
more complex instance in which n = 500 and sj and tj obey
uniform distributions on the intervals [1,40] and [1,30],
respectively. We set Nmax = 1000 and record the convergence
process of IACO, reporting GAP after every 100 iterations. The
results are shown in Figs. 1–3 for the cases d = 0.5, 1.0 and 1.5,
respectively. In each figure, the blue line is for the case B < G,
the red line for the case B = G, and the green line for the case
B > G. In all three figures, we find that after 200 iterations the
GAP values are approximately 6–7%, and after 400 iterations
approximately 4–5%. As the number of iterations continues to
increase, there is hardly any improvement in the solutions.
Figs. 4 and 5 show the results for the cases d = 0.2 and
d = 5.0, respectively. In Fig. 4, we see that after 500 iterations
the GAP values are approximately 4–6%; after 700 iterations they
drop to 3–5%, and there is hardly any improvement with more
iterations. In Fig. 5, after 500 iterations the GAP values are
approximately 6–8%; after 700 iterations they drop to 4–6%,
again with hardly any further improvement. This shows that
IACO converges after about 700 iterations.

5. Conclusions

We study an integrated scheduling problem for manufacturers.
In the production part, the batch-processing machine has a fixed
capacity and the jobs have arbitrary sizes and processing times.
Once a batch is being processed, no interruption is allowed.
When jobs are completed, they are delivered by identical
vehicles with a common transport capacity. The objective is to
minimize the total cost of production and distribution. Since
the problem is NP-hard in the strong sense, we propose an
improved ant colony optimization method to schedule the jobs.
A new candidate list is designed according to the sizes of the
jobs. The performance of our heuristic is tested by 36 levels of
instances and the results show that our algorithm has excellent
performance.
Future research is needed to investigate an integrated
scheduling problem where the machine configurations are
more complex. For example, in practice, flow shop and job
shop are often encountered and the integrated problems are
even harder to solve. Since the single-machine problem is a
special case of flow shop and job shop, the two problems are
obviously NP-hard in the strong sense. Another direction for
future research is supply chain scheduling that includes
production, inventory and distribution. Total cost and service
level are both interesting objectives.
Acknowledgements
This work is partly supported by the National Natural
Science Foundation of China under Grants 71202048,
71131002, 71071045 and 71471052. This work is also partly
supported by the Specialized Research Fund for the Doctoral
Program of Higher Education of China under Grant

20110111120015. The authors would like to thank the two
referees, whose suggestions have greatly improved the
readability of the paper.