
Manufacturing Simulation and Optimization

The Use of Simulation Modelling Software in the Design and Analysis of Manufacturing Systems
Author: Joshua Jones
Module: Manufacturing E2
Tutor: Dr. F. Zahedi
Last updated/submitted: March 21, 2014
Table of Contents
1 Introduction
2 Aims and Objectives
3 Introduction to Modelling and Simulation
4 Model Details
4.1 The Exponential Distribution
4.2 The Normal Distribution
4.3 The Triangular Distribution
5 Experiment 1
6 Experiment 2
7 Experiment 3
7.1 Warm-up initialization
8 Experiment 4
8.1 Replications
9 Experiment 5
10 Experiment 6
10.1 Iteration Testing
11 Performance and Six Sigma analysis
11.1 What is Six Sigma (6σ)?
11.2 Other Manufacturing Tools
11.2.1 The Toyota Way
11.2.2 Ishikawa diagrams
12 Comparison to real world methods
13 Comparison to alternate computational methods
13.1 Why use Simulation?
13.2 ProModel
13.2.1 ProModel in Project Management
13.3 Simul8
14 Benefits and limitations of simulation
15 Project review and conclusions
15.1 Cumulative statistics for all experiments
16 References
Abstract and Executive Summary
The aim of this report is to explore the performance and capability of the discrete event modelling software ProModel, and of simulation software in general. This was carried out by simulating a manufacturing process and performing iterative changes to produce a highly optimized theoretical revision of the process.

The aim of the project was to learn the software and gain experience in simulating industrial processes. In this way the student can appreciate the benefits and limits of simulation, and elect to use simulation in a professional setting if deemed suitable; furthermore, learning the software enhances the skill set and the problem-solving approach. Understanding the discrete event approach allows a step-by-step style of problem solving to be fostered.

In carrying out this project the student was encouraged to experiment with changing situations and scenarios by conducting a range of six experiments. The varying nature of the experiments is designed to give the student an understanding of each aspect of the manufacturing process at the highest level: planning and optimization.

Where possible, the system has been modelled completely and accurately; the scenarios have been implemented and thoroughly tested, often for extended periods, to assess underlying trends.

Effort has been made to produce a more complete description of the system by assuming the type of product to be made and then researching the potential costing of the system; in this way the student can assess profit and cost, including initial setup and an approximate overhead.

Statistical process control has been highlighted as important and a range of methodologies have been discussed, with the Six Sigma defect rate being calculated for each experiment.

This project has explored the nature of discrete event simulation and looked at the options available in terms of software packages, for a comparison of suitability.
1 Introduction
In the fields of mechanical and industrial engineering, an appreciation of industrial processes is key to producing a viable product, both correctly and efficiently, in financial and environmental terms alike. In today's economy, operating a manufacturing process must be focused on using the latest process control tools to ensure the product is produced as cheaply and quickly as possible, with the fewest number of defects.

Many companies have developed systems to use when manufacturing, such as The Toyota Way (by Toyota), Six Sigma (by Motorola), Lean manufacturing, Lean Six Sigma, and many others. These methods form the ethos and the means of reducing waste, defects and costs, and of increasing the profitability of a system, thus providing value to stakeholders and customers. The overarching field is known as statistical process control. This report is aimed at exploring an industrial process via the use of discrete event simulation, using the package ProModel.
The recommended reading for this course was given as Discrete-Event System Simulation by J. Banks, J. S. Carson, et al.,[5] and this text will be used to form the basis of the theoretical ideas and points.
The goal of carrying out the simulation, and the optimized iterative simulation under a number of constraints, is to produce the maximum improvement in production rate while simultaneously using the least amount of resources and adding the fewest resources to the base model.

Using ProModel, the project has explored the viability of the simulation by testing its integrity and repeatability; that is, the simulation should be predictable to an extent. The project has also explored the benefits and limitations of a purely simulated analysis of the manufacturing system.
2 Aims and Objectives
The aim of the coursework is to build a model of a manufacturing process using the ProModel simulation package; a further aim is to use this model to carry out tests and experiments to understand how the system behaves. In terms of the Six Sigma development cycle (see figure 1) this would be carried out during the measure and analyse stages; in Design for Six Sigma (DFSS, see figure 2) this experimentation would occur in the new product (in this case manufacturing process) development stage. However, in practice the simulation could be used at all stages to validate and explore how making an iterative change may affect the system in an unforeseen way, especially for systems with a high level of process dependency, that is, where many processes need to succeed for a certain process to be possible. These dependencies should be carefully designed into the system to reduce the number of root causes for problems (see root cause analysis).
Figure 1: DMAIC Six Sigma Process
Figure 2: Design for Six Sigma (DFSS)
3 Introduction to Modelling and Simulation
A simulation is the imitation of the operation of a real world process or system over time.[5]

Simulation involves the generation of an artificial history of a system, and the observation of that history to draw inferences concerning the operating characteristics of the real system.[5] In the scope of ProModel, observing and interpreting the statistics produced by the output viewer allows certain trends to be picked out, and an understanding to be gathered of the system and how changes affect it.

When simulation is suitable:

Simulation is suitable when the perceived potential saving outweighs the cost of conducting the simulation; in practice this is usually the case, as the simulation software is a one-time purchase while the saving continues to grow with time.
Simulation enables the study of, and experimentation with, the internal interactions of a complex system,[5] and highlights the dependencies and consequences of changing any small aspect of the system, and the effect this has on the whole.

Simulation is suitable when changing the inputs and interpreting the outputs is possible; in this way the simulation can show how sensitive the output is to a particular input, and this may allow one input to be run at a minimum, eliminating waste without any decrease in performance. If issues already exist, simulation can highlight the root causes of problems and show which variables and inputs are the most important (most critical).

Simulation can be used to verify analytical solutions, determine the requirements for a certain machine to operate at peak efficiency, and experiment with new designs.[5]
Manufacturing represents one of the most important applications of simulation. This technique represents a valuable tool used by engineers when evaluating the effect of capital investment in equipment and physical facilities like factory plants, warehouses, and distribution centers. Simulation can be used to predict the performance of an existing or planned system and to compare alternative solutions for a particular design problem.[4]
Simulation should not be used when the problem can be solved using common sense (Banks and Gibson, 1997). Simulation should not be used when the problem can be solved analytically, although in practice, for complex systems, it is often easier simply to simulate. Simulation should not be used when the problem can be tested with direct experiments; this is not possible for the scenario in this project, however. Finally, simulation should not be used when the cost of doing so exceeds the savings.[5]
Disadvantages have been offset in some simulation packages by improving the functionality and reusability of models; for example, some packages are implementing system sections and templates to reduce the time it takes to model a system.[5] Some simulation packages offset their costs by including very detailed, thorough analysis. As technology improves, so does the capability of the software and the simulations. Simulation packages can handle very complex systems, which closed form models cannot.[5]
Applications:
1. Manufacturing
2. Semiconductor manufacturing
3. Construction engineering
4. Military applications
5. Logistics, transport, and distribution
6. Business operations and processes
7. Human systems: air traffic control, parking, population movement
4 Model Details
The simulation specification was the real world model of an assembly line that produces two different products, using a range of engineering operations at multiple stations. Both parts share a common raw material; given the nature of the description it will be assumed that the parts produced are some kind of container, the variation being that one type (Type II) also requires the addition of a cover or lid. It is envisioned that the product is perhaps some kind of barrel, with the variation being an open or closed top. At input, the initial loading bay for incoming parts can hold 5 units of raw material.
Initial thoughts on this system are that, given the fact that the products use a common raw material, sourcing this raw material is a critical node in the process life cycle; indeed it is the first one. Furthermore, the fact that both parts share a common first node is a cause of potential bottlenecks and supply issues. Ideally, separation of the two product lines would be preferable if favouring the most simplistic approach; however, the cost of producing two systems is prohibitive when similar production rates may be possible with one system.
The initial specification stated that all machines were single capacity, which is standard for real world machines such as CNC and turning/lathe machines, which can only accommodate one work piece at a time.

The specification calls for an initial split between Type I and Type II products of 50/50; given that only 5 raw units can be supplied at once, this may give some difference in the split between products.
Type I process: Turning, Milling, Drilling, Inspection (scrap if defective), Packing
Type II process: Milling, Drilling, Assembly (cover from storage), Inspection (scrap if defective), Packing
Table 1: Type I and Type II Process
Between each process there will be conveyor belts used to transport WIP (work in progress), and a number of personnel:
Skilled Worker: responsible for Turning, Milling, and Drilling operations
Semi-Skilled Worker: responsible for Assembly, Inspection, and Packing
Transporter: responsible for transporting WIP to stations and conveyors
Table 2: Employees
All conveyors are designed to be 10 m long, holding 3 parts, with a speed of 50 metres per minute; however, this is something that could realistically be changed.

The system is intended to give no priority to any WIP, and processes parts on a first come, first served basis.

Fail rates for both Type I and Type II products are 0.05, or a 5% defect rate at inspection; note that for Type II this occurs after the aluminium cover is added, and therefore a registered defect also counts as a wasted aluminium cover.

The transport activities are transfer to the next process, and transfer to scrap from inspection (having failed inspection).

The method of processing parts varies and uses differing distribution types:
Operation Distribution Value (minutes)
Arrivals Exponential Mean = 20
Turning Triangular (6, 8, 10)
Milling Triangular (9, 12, 14)
Drilling Normal Mean = 5, standard deviation = 1
Assembly Normal Mean = 7, standard deviation = 1
Inspection Normal Mean = 5, standard deviation = 2
Packing Normal Mean = 3, standard deviation = 0.5
Table 3: Distribution types
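To make the model description above concrete, the sketch below re-creates just the Type I routing (Table 1) with the timing assumptions of Table 3 in a general purpose discrete event library. This is not the ProModel model itself: it is a minimal illustration assuming the third-party Python package simpy is available, it treats every station as a single-capacity resource, and it omits the conveyors, the transporter, the Type II route, and the 5-unit input buffer for brevity.

```python
# Minimal sketch only: Type I route (Table 1) with the timings of Table 3,
# assuming the "simpy" package is installed. Not the ProModel model.
import random
import simpy

RUN_MINUTES = 40 * 60       # one 40-hour working week, as in Experiment 1
DEFECT_RATE = 0.05          # 5% of parts fail inspection

def type1_part(env, stations, results):
    """Route one Type I part: turning -> milling -> drilling -> inspection -> packing."""
    steps = [
        ("turning",    lambda: random.triangular(6, 10, 8)),    # args: (low, high, mode)
        ("milling",    lambda: random.triangular(9, 14, 12)),
        ("drilling",   lambda: random.normalvariate(5, 1)),
        ("inspection", lambda: random.normalvariate(5, 2)),
    ]
    for name, sample in steps:
        with stations[name].request() as req:      # every machine is single capacity
            yield req
            yield env.timeout(max(0.0, sample()))
    if random.random() < DEFECT_RATE:              # scrapped after inspection
        results["scrapped"] += 1
        return
    with stations["packing"].request() as req:
        yield req
        yield env.timeout(max(0.0, random.normalvariate(3, 0.5)))
    results["packed"] += 1

def arrivals(env, stations, results):
    """Raw material arrives with exponential inter-arrival times, mean 20 minutes."""
    while True:
        yield env.timeout(random.expovariate(1 / 20))
        env.process(type1_part(env, stations, results))

env = simpy.Environment()
stations = {name: simpy.Resource(env, capacity=1)
            for name in ("turning", "milling", "drilling", "inspection", "packing")}
results = {"packed": 0, "scrapped": 0}
env.process(arrivals(env, stations, results))
env.run(until=RUN_MINUTES)
print(results)
```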
4.1 The Exponential Distribution
How much time will elapse before an earthquake occurs in a given region?
How long do we need to wait before a customer enters our shop? How long
will it take before a call center receives the next phone call? How long will a
piece of machinery work without breaking down? Questions such as these are
often answered in probabilistic terms using the exponential distribution. All
these questions concern the time we need to wait before a given event occurs.
If this waiting time is unknown, it is often appropriate to think of it as a
random variable having an exponential distribution. Roughly speaking, the
time X we need to wait before an event occurs has an exponential distribution
if the probability that the event occurs during a certain time interval is
proportional to the length of that time interval.[1]
Figure 3: Exponential Distribution
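As a small numerical illustration of the arrival assumption in Table 3 (exponential inter-arrival times with a mean of 20 minutes), the probability that the next raw material arrival occurs within t minutes can be evaluated directly; the snippet below is illustrative only and uses no values beyond those in the table.

```python
# Illustrative only: the exponential arrival assumption from Table 3
# (mean inter-arrival time of 20 minutes) expressed numerically.
import math

MEAN = 20.0                 # minutes between raw material arrivals
RATE = 1.0 / MEAN           # lambda, arrivals per minute

def p_arrival_within(t):
    """P(next arrival occurs within t minutes) = 1 - exp(-lambda * t)."""
    return 1.0 - math.exp(-RATE * t)

for t in (5, 10, 20, 40):
    print(f"P(arrival within {t:>2} min) = {p_arrival_within(t):.3f}")
# Because the distribution is memoryless, these probabilities are the same
# however long we have already been waiting.
```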
4.2 The Normal Distribution
The normal distribution is the most commonly used statistical distribution, employed to model a wide range of events, and its use in this project has been extensive; for example, the Six Sigma statistical process control methodology employs the concept of standard deviation to measure the performance of a process. By plotting the mean and then the sample values either side of it (±σ) we can see how much the system varies, and also whether the system is performing within certain limits, the USL and LSL (upper and lower specification limits, respectively).

The normal distribution has mean = median = mode, symmetry about the centre, and 50% of values less than the mean and 50% greater than the mean.
Figure 4: Normal Distribution
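The USL/LSL idea mentioned above can be illustrated numerically: for a normally distributed process time, the fraction of parts falling inside the specification limits follows from the standard normal CDF. The mean and standard deviation below are the drilling values from Table 3, but the limits themselves are hypothetical values chosen for illustration.

```python
# Sketch of the USL/LSL idea: the fraction of normally distributed process
# times that falls inside the specification limits. The limits are assumed
# values, three sigma either side of the mean.
import math

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mean, sd = 5.0, 1.0         # drilling: mean 5 min, standard deviation 1 min
lsl, usl = 2.0, 8.0         # hypothetical specification limits

within_spec = phi((usl - mean) / sd) - phi((lsl - mean) / sd)
print(f"Fraction within specification: {within_spec:.5f}")   # ~0.99730
```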
4.3 The Triangular Distribution
The triangular distribution is used when only a small number of samples is available and only the lower, upper, and modal expected values are known; in this way the distribution places firm lower and upper bounds on the system's performance, and it is also known as the "lack of knowledge" distribution. A triangular distribution is a continuous probability distribution with a probability density function shaped like a triangle. It is defined by three values: the minimum value a, the maximum value b, and the peak (mode) value c.
Figure 5: Triangular Distribution
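Using the turning operation's (6, 8, 10) minute parameters from Table 3, the sketch below checks the analytic mean of a triangular distribution, (a + b + c)/3, against sampled values. It is illustrative only and is not part of the ProModel model.

```python
# Illustrative check of the triangular distribution using the turning
# operation's (min, mode, max) = (6, 8, 10) minutes from Table 3. Note that
# random.triangular takes its arguments as (low, high, mode).
import random

low, mode, high = 6.0, 8.0, 10.0
analytic_mean = (low + mode + high) / 3.0     # mean of a triangular distribution

samples = [random.triangular(low, high, mode) for _ in range(100_000)]
print(f"analytic mean = {analytic_mean:.2f}")
print(f"sampled mean  = {sum(samples) / len(samples):.2f}")
```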
5 Experiment 1
The first experiment was simply to implement the model in ProModel as accurately as possible and assess the performance. From figure 9 we can see that the turning station spends a long time blocked, and the drilling and assembly stations are lightly blocked; this may be due to conveyor availability in terms of speed and capacity, and also an incorrectly selected turning machine that is unable to perform the operations quickly enough.

The system was simulated for 40 hours, which was interpreted as a one-week period, assuming continuous operation for five 8-hour shifts, with one shift occurring per day. This system ran with 100% uptime.
Figure 6: Experiment 1 Outcomes
Experiment 1
Type 1 41
Type 2 57
Total 98
Reject 1 1
Reject 2 2
Reject Total 3
% defective 2.97%
% Successful 97.03%
Process Sigma 3.39
Aluminium covers 62
Aluminium covers wasted 5
% Aluminium waste 8.06%
Aluminium process sigma 2.90
DPMO 29703
Throughput
Per second 0.00068
Per minute 0.040833333
Per hour 2.45
Table 4: Experiment 1 Results (DPMO = defects per million opportunities)
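The quality figures in Table 4 can be reproduced from the raw counts. The sketch below assumes the report's convention of treating each part as a single defect opportunity and applying the usual 1.5 sigma shift when converting yield to a process sigma; with 98 good parts and 3 rejects in a 40-hour run it returns approximately 2.97% defective, 29,703 DPMO, a process sigma of 3.39, and 2.45 parts per hour.

```python
# Hedged reconstruction of the Table 4 quality figures from the raw counts,
# treating each part as a single defect opportunity (as the report appears to
# do) and applying the conventional 1.5 sigma shift.
from statistics import NormalDist

good, rejects, hours = 98, 3, 40
total = good + rejects

defect_fraction = rejects / total                                  # ~2.97% defective
dpmo = defect_fraction * 1_000_000                                 # ~29,703
process_sigma = NormalDist().inv_cdf(1 - defect_fraction) + 1.5    # ~3.39
throughput_per_hour = good / hours                                 # 2.45 parts per hour

print(f"% defective     = {defect_fraction:.2%}")
print(f"DPMO            = {dpmo:.0f}")
print(f"process sigma   = {process_sigma:.2f}")
print(f"throughput/hour = {throughput_per_hour:.2f}")
```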
Figure 7: Experiment 1 initial layout
Figure 8: Experiment 1 finished layout
Figure 9: Experiment 1 output
6 Experiment 2
Experiment 2 was aimed at optimizing experiment 1. Throughput increased by 0.3 parts per hour, leading to a volume increase of 13 parts; however, while overall volume has increased, so has the number of defects, and the number of aluminium covers both used and wasted. The sigma performance indicator has dropped from 3.39 in experiment 1 to 3.22 in experiment 2. This drop in quality is unfortunate, but in terms of overall productivity this experiment has had a positive impact, accruing 13 more parts for only 2 more defects. One vital statistic is the defects per million opportunities: initially 29,703 defects per million would occur on average, and this rises by nearly half in experiment 2, to 43,478. On the other hand, the time taken to produce a million parts is reduced from 139.78 years to 123.4 years, assuming an 8-hour shift constitutes a day and a year is 365 days of continuous operation.

In experiment 2 only the number of staff changes, with one more skilled worker being recruited, and the lengths of both the conveyor and the routing for the staff being optimized and balanced. As can be seen from the two graphs, figure 10 and figure 9, the bottleneck in the turning operation has been eliminated; this was done by balancing the conveyor speeds either side of the station.
Experiment 2
Type 1 52
Type 2 59
Total 111
Reject 1 1
Reject 2 4
Reject Total 5
% defective 4.21%
% Successful 95.69%
Process Sigma 3.22
Aluminium covers 66
Aluminium covers wasted 7
% Aluminium waste 10.61%
Aluminium process sigma 2.75
DPMO 43478.00
Throughput
Per second 0.00077
Per minute 0.04625
Per hour 2.775
Table 5: Experiment 2 Results (DPMO = defects per million opportunities)
Figure 10: Experiment 2 output
7 Experiment 3
Experiment 3 is concerned with the time taken to achieve steady state output, or the utilization of a warm-up period. At each time interval (hours) the productivity rate is erratic and differing; however, for a simulation with only user-specified inputs, the output should settle to a steady underlying rate once the erratic start-up stage is discounted. By taking an iterative approach to this problem, the system warm-up time was found to be 96 hours.
Experiment 3
Type 1 52
Type 2 58
Total 110
Reject 1 1
Reject 2 1
Reject Total 2
% defective 1.79%
% Successful 98.21%
Process Sigma 3.6
Aluminium covers 59
Aluminium covers wasted 1
% Aluminium waste 1.69%
Aluminium process sigma 3.62
DPMO 16949
Throughput
Per second 0.00076
Per minute 0.045833333
Per hour 2.75
Table 6: Experiment 3 Results (DPMO = defects per million opportunities)
Figure 11: Experiment 3 output
7.1 Warm-up initialization
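The 96-hour figure above was found iteratively. A common alternative, described in the Banks and Carson text, is Welch's moving-average procedure: smooth the output series with increasing window sizes and take the warm-up period as the point where the smoothed curve flattens. The sketch below is a hedged illustration of that idea; the hourly throughput values in it are invented, not taken from the model.

```python
# Hedged sketch of Welch's moving-average idea for choosing a warm-up point.
# `hourly_output` stands in for throughput per hour averaged over
# replications; the numbers here are invented for illustration only.
def moving_average(series, half_window):
    """Centred moving average over a window of 2*half_window + 1 points."""
    smoothed = []
    for i in range(half_window, len(series) - half_window):
        window = series[i - half_window:i + half_window + 1]
        smoothed.append(sum(window) / len(window))
    return smoothed

hourly_output = [0.4, 0.9, 1.6, 2.1, 2.5, 2.6, 2.7, 2.8, 2.7, 2.8, 2.75, 2.8]
for w in (1, 2, 3):
    print(w, [round(v, 2) for v in moving_average(hourly_output, w)])
# The warm-up (truncation) point is taken where the smoothed curve flattens;
# for this model the report judged that to be about 96 hours.
```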
8 Experiment 4
The number of replications required to provide a steady state value of output for the simulation was determined iteratively to be around 20 replications.
Figure 12: Experiment 4 output replications
Experiment 4
Type 1 60
Type 2 56
Total 116
Reject 1 3
Reject 2 3
Reject Total 6
% defective 4.92%
% Successful 95.08%
Process Sigma 3.15
Aluminium covers 59
Aluminium covers wasted 3
% Aluminium waste 5.08%
Aluminium process sigma 3.14
DPMO 50847
Throughput
Per second 0.00081
Per minute 0.048333333
Per hour 2.9
Table 7: Experiment 4 Results (DPMO = defects per million opportunities)
Figure 13: Experiment 4 output
8.1 Replications
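A standard complement to the iterative approach above is to size the number of replications from a pilot run: choose a target confidence-interval half-width for the output of interest and solve for n. The sketch below illustrates this with invented pilot throughput values and a normal quantile as an approximation to the t value; the report's own iterative answer was roughly 20 replications.

```python
# Hedged sketch of sizing the number of replications from a pilot run. The
# pilot values and the target half-width are invented for illustration.
import math
from statistics import mean, stdev, NormalDist

pilot = [2.9, 2.7, 3.1, 2.8, 2.6, 3.0, 2.9, 2.8, 3.2, 2.7]   # parts/hour from 10 pilot runs
target_half_width = 0.1                                       # desired precision, parts/hour
z = NormalDist().inv_cdf(0.975)                               # ~1.96 for 95% confidence

s = stdev(pilot)
n_required = math.ceil((z * s / target_half_width) ** 2)
print(f"pilot mean = {mean(pilot):.2f}, s = {s:.3f}, replications needed ~ {n_required}")
```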
9 Experiment 5
Experiment 5 was concerned with the possibility of changing the mix between Type I and Type II. Turning was utilized more than previously, and the general split was consistent with a 25% : 75% mix, as seen in figure 14. Due to the low utilization of aluminium this may represent some cost saving, and the system was quite efficient, running at 3.21 sigma overall and 3.32 sigma for the aluminium operation. The throughput was 2.775 parts per hour.
Figure 14: Experiment 5 output for mix
Experiment 5
Type 1 83
Type 2 28
Total 111
Reject 1 4
Reject 2 1
Reject Total 5
% defective 4.31%
% Successful 95.69%
Process Sigma 3.21
Aluminium covers 29
Aluminium covers wasted 1
% Aluminium waste 3.45%
Aluminium process sigma 3.32
DPMO 46729
Throughput
Per second 0.00077
Per minute 0.04625
Per hour 2.775
Table 8: Experiment 5 Results (DPMO = defects per million opportunities)
Figure 15: Experiment 5 output
10 Experiment 6
In experiment 6 the aim was to double the throughput of the system and optimize it; the baseline values were taken from experiment 4. Initially, total throughput was 116 parts at a rate of 2.9 parts per hour, running at 3.15 sigma. Small incremental changes made to map the behaviour of the simulation meant that by the 16th iteration the system was producing 204 parts at a rate of 5.1 parts per hour, running at 3.18 sigma; in real terms this means 88 more products and 4,118 fewer defective parts per million.

In model 6 another assembly station was added, along with two of each worker. The conveyor was further optimized and balanced by regulating the speeds and having some sections move faster than others, to allow for the variation in completion time at each station.
Experiment 6
Type 1 98
Type 2 106
Total 204
Reject 1 4
Reject 2 6
Reject Total 10
% defective 4.67%
% Successful 95.33%
Process Sigma 3.18
Aluminium covers 108
Aluminium covers wasted 2
% Aluminium waste 1.85%
Aluminium process sigma 3.59
DPMO 46729
Throughput
Per second 0.00142
Per minute 0.085
Per hour 5.1
Table 9: Experiment 6 Results (DPMO = defects per million opportunities)
Figure 16: Experiment 6 output
Figure 17: Experiment 6 output
Figure 18: Experiment 6 output
10.1 Iteration Testing
Figure 19: Experiment 6 iteration results
11 Performance and Six Sigma analysis
Figure 20: Six Sigma overview
Sigma | 1.5σ shift | DPMO | Defective | Yield | Short-term Cpk | Long-term Cpk
1 | -0.5 | 691,462 | 69% | 31% | 0.33 | -0.17
2 | 0.5 | 308,538 | 31% | 69% | 0.67 | 0.17
3 | 1.5 | 66,807 | 6.7% | 93.3% | 1.00 | 0.5
4 | 2.5 | 6,210 | 0.62% | 99.38% | 1.33 | 0.83
5 | 3.5 | 233 | 0.023% | 99.977% | 1.67 | 1.17
6 | 4.5 | 3.4 | 0.00034% | 99.99966% | 2.00 | 1.5
7 | 5.5 | 0.019 | 0.0000019% | 99.9999981% | 2.33 | 1.83
Table 10: Sigma levels used to quantify process performance
The 1.5σ shift accounts for long-term drift in the process mean over extended periods, and is beyond the scope of this simulation for all but the extreme time-scale cases.
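The DPMO column of Table 10 can be checked directly: with the conventional 1.5 sigma shift applied, the defect rate at a given sigma level is the upper-tail area beyond (sigma level - 1.5) standard deviations. A minimal check, assuming only the Python standard library:

```python
# A short check of the DPMO column in Table 10 using the 1.5 sigma shift.
from statistics import NormalDist

nd = NormalDist()
for sigma_level in range(1, 8):
    z = sigma_level - 1.5                      # apply the long-term 1.5 sigma shift
    dpmo = (1 - nd.cdf(z)) * 1_000_000
    print(f"sigma {sigma_level}: DPMO ~ {dpmo:,.1f}")
# sigma 3 gives ~66,807 and sigma 6 gives ~3.4, matching the table above.
```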
11.1 What is Six Sigma (6σ)?
Six Sigma at many organizations simply means a measure of quality that strives for near perfection. Six Sigma is a disciplined, data-driven approach and methodology for eliminating defects (driving toward six standard deviations between the mean and the nearest specification limit) in any process, from manufacturing to transactional and from product to service.[2] The statistical representation of Six Sigma describes quantitatively how a process is performing. To achieve Six Sigma, a process must not produce more than 3.4 defects per million opportunities. A Six Sigma defect is defined as anything outside of customer specifications. A Six Sigma opportunity is then the total quantity of chances for a defect.[2]

The fundamental objective of the Six Sigma methodology is the implementation of a measurement-based strategy that focuses on process improvement and variation reduction through the application of Six Sigma improvement projects. This is accomplished through the use of two Six Sigma sub-methodologies: DMAIC and DMADV. The Six Sigma DMAIC process (define, measure, analyze, improve, control) is an improvement system for existing processes falling below specification and looking for incremental improvement. The Six Sigma DMADV process (define, measure, analyze, design, verify) is an improvement system used to develop new processes or products at Six Sigma quality levels. It can also be employed if a current process requires more than just incremental improvement. Both Six Sigma processes are executed by Six Sigma Green Belts and Six Sigma Black Belts, and are overseen by Six Sigma Master Black Belts.[2]
Figure 21: Six Sigma overview
Figure 22: Six Sigma overview perspective
According to the Six Sigma Academy, Black Belts save companies approximately $230,000 per project and can complete four to six projects per year. (Given that the average Black Belt salary is $80,000 in the United States, that is a fantastic return on investment.) General Electric, one of the most successful companies implementing Six Sigma, has estimated benefits on the order of $10 billion during the first five years of implementation. GE first began Six Sigma in 1995 after Motorola and Allied Signal blazed the Six Sigma trail. Since then, thousands of companies around the world have discovered the far-reaching benefits of Six Sigma.

Many frameworks exist for implementing the Six Sigma methodology. Six Sigma consultants all over the world have developed proprietary methodologies for implementing Six Sigma quality, based on similar change management philosophies and applications of tools.[2]

Six Sigma mostly finds application in large organizations. An important factor in the spread of Six Sigma was GE's 1998 announcement of $350 million in savings thanks to Six Sigma, a figure that later grew to more than $1 billion. According to industry consultants like Thomas Pyzdek and John Kullmann, companies with fewer than 500 employees are less suited to Six Sigma implementation, or need to adapt the standard approach to make it work for them.[3] Six Sigma however contains a large number of tools and techniques that work well in small to mid-size organizations. The fact that an organization is not big enough to be able to afford Black Belts does not diminish its ability to make improvements using this set of tools and techniques. The infrastructure described as necessary to support Six Sigma is a result of the size of the organization rather than a requirement of Six Sigma itself.[3]
11.2 Other Manufacturing Tools
11.2.1 The Toyota Way
The 6 Ms (used in manufacturing industry):
1. Machine (technology)
2. Method (process)
3. Material (includes raw material, consumables and information)
4. Man power (physical work) / Mind power (brain work): kaizens, suggestions
5. Measurement (inspection)
6. Milieu / Mother Nature (environment)
The original 6 Ms used by the Toyota Production System have been expanded by some to include the following, and are referred to as the 8 Ms. However, this is not globally recognized. It has been suggested to return to the roots of the tools and to keep the teaching simple while recognizing the original intent; most programs do not address the 8 Ms.
7. Management / Money power
8. Maintenance
11.2.2 Ishikawa diagrams
Ishikawa diagrams (see figure 23), also called fishbone diagrams, herringbone diagrams, cause-and-effect diagrams, or Fishikawa, are causal diagrams created by Kaoru Ishikawa (1968) that show the causes of a specific event. Common uses of the Ishikawa diagram are product design and quality defect prevention, to identify potential factors causing an overall effect. Each cause or reason for imperfection is a source of variation. Causes are usually grouped into major categories to identify these sources of variation.

The categories typically include:
People: anyone involved with the process
Methods: how the process is performed and the specific requirements for doing it, such as policies, procedures, rules, regulations and laws
Machines: any equipment, computers, tools, etc. required to accomplish the job
Materials: raw materials, parts, pens, paper, etc. used to produce the final product
Measurements: data generated from the process that are used to evaluate its quality
Environment: the conditions, such as location, time, temperature, and culture, in which the process operates
Figure 23: Cause-and-effect (Ishikawa) diagram
12 Comparison to real world methods
Simulation can never completely emulate a real world system with 100% accuracy, because the real world does not operate in discrete values; precision is limited only by the level of precision possible in measurement, and there will always be more information, and more decimal places, in a real measurement than any discrete model can capture.
13 Comparison to alternate computational methods
13.1 Why use Simulation?
Accurate depiction of reality. Anyone can perform a simple analysis manually. However, as the complexity of the analysis increases, so does the need to employ computer-based tools. While spreadsheets can perform many calculations to help determine the operational status of simple systems, they use averages to represent schedules, activity times, and resource availability.[6]

This does not allow them to accurately reflect the randomness and interdependence present in reality with resources and other system elements. Simulation, however, does take into account the randomness and interdependence which characterize the behavior of your real-life business environment.[6]

Using simulation, you can include randomness through properly identified probability distributions taken directly from study data. For example, while the time needed to perform an assembly may average 10 minutes, special orders may take as many as 45 minutes to complete. A spreadsheet will force you to use the average time, and will not be able to accurately capture the variability that exists in reality.[6]

Simulation also allows interdependence through arrival and service events, and tracks them individually. For example, while order arrivals may place items in two locations, a worker can handle only one item at a time. Simulation accounts for that reality, while a spreadsheet must assume the operator to be available simultaneously at both locations.
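The point about averages can be made concrete with a very small calculation: two plans with the same mean assembly time behave very differently once occasional long jobs are sampled. The share of 45-minute special orders used below is an assumed figure chosen purely for illustration.

```python
# Minimal illustration of the averages-versus-variability point above. Both
# plans have the same mean assembly time (~10 minutes), but the day-to-day
# spread only appears once occasional 45-minute special orders are sampled.
import random

random.seed(1)

JOBS_PER_DAY = 40
AVERAGE_MIN = 10.0          # the "spreadsheet" assumption for every job
SPECIAL_SHARE = 0.10        # assumed fraction of special orders
SPECIAL_MIN = 45.0
ROUTINE_MIN = 6.1           # chosen so the overall mean stays close to 10 minutes

def simulated_day():
    """Total assembly minutes for one day when special orders are sampled."""
    return sum(SPECIAL_MIN if random.random() < SPECIAL_SHARE else ROUTINE_MIN
               for _ in range(JOBS_PER_DAY))

days = [simulated_day() for _ in range(1000)]
print("spreadsheet estimate :", JOBS_PER_DAY * AVERAGE_MIN, "minutes per day")
print("simulated day range  :", round(min(days), 1), "-", round(max(days), 1), "minutes")
```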
13.2 ProModel
Simulation is the cornerstone for decision support: with more than 4,000 companies using this technology, including 42 of the Fortune 100, ProModel is recognized as the industry-leading tool for rapid and accurate simulation-based decision support. The bottom line savings are realized in the following areas:[6]

Hard-dollar savings: lower capital expenditure; increased existing facility utilization reduces net cost; proper labor assignments prevent unnecessary new hires; accurate and insightful facility planning eliminates unnecessary rework costs.[6]

Soft-dollar savings: facility rearrangement or reassignment of duties increases productivity; reduced wait time improves customer satisfaction; accurate system depiction ensures valid decision-making information.[6]

Labor savings: rapid development establishes time and cost data quickly and accurately.[6]

Intangible benefits: increased understanding of the actual process improves employee education; coordinated simulation projects improve teamwork and communication and focus resources in areas which will provide the biggest benefit.[6]

The ProModel Optimization Suite is a powerful yet easy to use simulation tool for modeling all types of manufacturing systems, ranging from small job shops and machining cells to large mass production, flexible manufacturing systems, and supply chain systems. ProModel is a Windows based system with an intuitive graphical interface and object-oriented modeling constructs that eliminate the need for programming. It combines the flexibility of a general purpose simulation language with the convenience of a data-driven simulator. In addition, ProModel utilizes an optimization tool called SimRunner that performs sophisticated what-if analysis by running automatic factorial design of experiments on the model, providing the best answer possible.[6]
13.2.1 ProModel in Project Management
The major challenge in project management is being able to ensure that projects are delivered within defined constraints such as scope, time, and cost. Some of the most common problems faced by project managers are:
- Outsourcing decisions
- Inability to accurately predict resource requirements and cost
- Communicating a solution across an organization
- Varying task times
- Mitigating unperceived risks
- Missed deadlines
- Bottlenecks
- Insufficient and shared resources
- Inability to align resources
- Multiple conflicting goals, e.g. fastest completion time at lowest cost
- Accelerated schedules

Pfizer, ITT, Laureate Pharma, Merck, Hot Topic and others are using ProModel's PPM (Project and Portfolio Management) solutions to address these, and even more, issues in order to improve their project management and portfolio planning results. ProModel Simulation Solutions for Project Managers and Portfolio Planners: ProModel's Project Management solutions allow you to visualize, analyse, and optimize the execution of a project or portfolio of projects by taking into account variability, resource contention, and complex interdependencies. Unlike typical static analysis programs such as spreadsheets and project or portfolio management software, ProModel's technology expresses information in ranges of answers, with confidence levels and dependencies, which more accurately reflect how a project will actually perform.[7]
13.3 Simul8
SIMUL8 is a computer package for discrete event simulation. It allows the user to create a visual model of the system being investigated by drawing objects directly on the screen. Typical objects may be queues or service points. The characteristics of the objects can be defined in terms of, for example, capacity or speed. When the system has been modelled, a simulation can be undertaken. The flow of work items around the system is shown by animation on the screen so that the appropriateness of the model can be assessed. When the structure of the model has been confirmed, a number of trials can be run and the performance of the system described statistically. Statistics of interest may be average waiting times, utilisation of work centres or resources, etc.[8]
14 Benefits and limitations of simulation
Advantages:
- Does not disrupt the real process
- New processes can be tested for free or at little cost
- Reasons for process performance can be tested for feasibility
- Time sensitive tests can be sped up
- Insight into variable interaction can be found
- Bottleneck analysis
- Understand the system more completely
- Aids design and answers what-if questions and iterative approaches

Disadvantages:
- Requires special training
- Difficult to interpret
- Time consuming
- Expensive
- Analytical models may be cheaper and better

Table 11: Pros and cons of simulation
15 Project review and conclusions
15.1 Cumulative statistics for all experiments
Figure 24: All Results
Figure 25: All Results
Figure 26: All Results Sigma analysis
Figure 27: Long time period testing
Figure 28: Time to produce a million parts
In conclusion, the system has been optimized to a high level, and considerations of the system have been fully mapped out. Moving on from this project, a large variety of other projects may be tackled in a similar way. However, ProModel did lack features in the outputting of data, and using Excel was just as powerful; in order to incorporate the output viewer as part of our results, more powerful interpretation would need to be included. Overall ProModel was easy to use, although the outputs often gave little indication as to the performance of the system, and an iterative approach was taken to ensure incremental improvement.

In a real world project ProModel would be useful; however, the system quickly becomes unwieldy when dealing with larger sections. It is recommended to break the system down into subsystems to gain an intricate look at the relationships between variables and to establish variable input/output sensitivity before continuing on to producing a finished system.
16 References
[1] Unknown (2014). Exponential distribution. Available: http://www.statlect.com/ucdexp1.htm. Last accessed 21.03.14.
[2] iSixSigma. What is Six Sigma? Available: http://www.isixsigma.com/new-to-six-sigma/getting-started/what-six-sigma/
[3] Dusharme, Dirk. Six Sigma Survey: Breaking Through the Six Sigma Hype. Quality Digest.
[4] Benedettini, O., Tjahjono, B. (2008). Towards an improved tool to facilitate simulation modelling of complex manufacturing systems. International Journal of Advanced Manufacturing Technology 43 (1/2): 191-9. doi:10.1007/s00170-008-1686-z.
[5] Banks, J., Carson, J. S., et al. Discrete-Event System Simulation, 3rd ed. Prentice Hall International Series in Industrial and Systems Engineering.
[6] Benson, Deborah. Simulation Modeling and Optimization Using ProModel. PROMODEL Corporation. Proceedings of the 1997 Winter Simulation Conference, ed. S. Andradottir, K. J. Healy, D. H. Withers, and B. L. Nelson.
[7] ProModel Simulation Improves Project and Portfolio Management. promodel.com.
[8] Shalliker, Jim and Ricketts, Chris (2002). Intro to Simul8. Available: http://www.wirtschaft.fh-dortmund.de/eurompm/bilbao/S8intro.pdf. Last accessed 21.3.14.
