
DET 2011
7TH INTERNATIONAL CONFERENCE ON DIGITAL ENTERPRISE TECHNOLOGY






28 - 30 September 2011
Athens

Hosted by:
Laboratory for Manufacturing Systems and Automation
Director Professor G. Chryssolouris
Department of Mechanical Engineering and Aeronautics
University of Patras
GREECE

7TH INTERNATIONAL CONFERENCE ON DIGITAL ENTERPRISE TECHNOLOGY

PROCEEDINGS OF THE 7TH DET 2011
28 - 30 SEPTEMBER 2011 - ATHENS
Edited by: Professor G. Chryssolouris and Professor D. Mourtzis




This work is subject to copyright. All rights are reserved, whether the whole or part of the material
is concerned, specifically the rights of translation, reprinting, reuse of illustrations, broadcasting,
reproduction by photocopying machine or similar means, and storage in data banks.


Preface


Dear Colleagues:



Welcome to the 7th Digital Enterprise Technology International Conference (7th DET 2011).
Digital Enterprise Technology (DET) can be defined as the collection of systems and methods
for the digital modeling and analysis of the global product development and realization
process, in the context of lifecycle management. The aim of DET 2011 is to provide an
international forum for the exchange of leading edge scientific knowledge and industrial
experience.

A number of technical papers are presented addressing a wide variety of topics, including
Enterprise Modeling and Integration Technologies, Manufacturing Systems and Processes
Simulation, Enterprise Resource Planning, Supply Chain Management, Digital Factory, Real-
time Decision Making and Decision Support Systems, Complex System Modeling and
Analysis, e-Business and e-Commerce, Lean Production and Agile Manufacturing, Flexible
and Reconfigurable Manufacturing, Concurrent Engineering, Logistics and Manufacturing
Data Management, Virtual Reality and Manufacturing, Web Services and Manufacturing, Life
Cycle Design and Manufacturing, Energy efficient and Green manufacturing processes,
Environmentally sustainable production systems, Collaborative Manufacturing and
Engineering, Rapid Manufacturing, Reverse Engineering, Advanced Metrology, Engineering
Education and Training.


We wish to thank all of you for your participation, and we hope that you find this conference to
be an enriching experience.





Professor G. Chryssolouris
Professor D. Mourtzis
Laboratory for Manufacturing Systems and Automation (LMS)
Chairs, 7th DET 2011







7th DET 2011 Committees

CHRYSSOLOURIS G., LMS / University of Patras, Greece (Conference Chairman)
MAROPOULOS P., University of Bath, UK (Conference Series Chairman)
MOURTZIS D., LMS / University of Patras, Greece (Conference Designated Chairman)

International Program Committee (IPC)

BERNARD A. Ecole Centrale de Nantes, France
BOER C. ICIMSI-SUPSI, Switzerland
BYRNE G. University College Dublin, Ireland
CARPANZANO E. ITIA-CNR, Italy
CUNHA P. Instituto Politecnico de Setubal, Portugal
DENKENA B. Leibniz Universitaet Hannover, Germany
DUFLOU J. Katholieke Universiteit Leuven, Belgium
DUMUR D. Ecole Superieure d'Electricite, France
FISCHER A. TECHNION, Israel
GALANTUCCI L. Politecnico di Bari, Italy
HUANG, G. Q. University of Hong Kong
INASAKI I. Chubu University, Japan
JOVANE F. Politecnico di Milano, Italy
KIMURA F. Hosei University, Japan
KJELLBERG T. Royal Institute of Technology, Sweden
KOREN Y. University of Michigan, USA
KUMARA S. Pennsylvania State University, USA
LEVY G. Inspire AG, Switzerland
MAROPOULOS P. University of Bath
MONOSTORI L. Hungarian Academy of Sciences, Hungary
NEE A. National University of Singapore, Singapore
ROY R. Cranfield University, UK
SANTOCHI M. University of Pisa, Italy
SCHOLZ-REITER B. University of Bremen, Germany
SCHOENSLEBEN P. ETH Zurich, Switzerland
SELIGER G. Fraunhofer-Berlin, Germany
SHPITALNI M. TECHNION, Israel
SUH P. N. KAIST, Korea
SUTHERLAND J. Purdue University, USA
TETI R. University of Naples Federico II, Italy
TICHKIEWITCH S. Laboratoire G-SCOP, France
TOLIO T. ITIA-CNR, Italy
TOMIYAMA T. Delft University of Technology, Netherlands
TSENG M. Hong Kong University of Science and Technology, Hong Kong
VAN BRUSSEL H. Katholieke Universiteit Leuven, Belgium
VAN HOUTEN F. University of Twente, Netherlands
WERTHEIM R. Fraunhofer IWU, Germany
WESTKAEMPER E. University of Stuttgart, Germany

National Organizing Committee (NOC)

MOURTZIS D., LMS / University of Patras, Greece (Chairman)

MAKRIS S. LMS / University of Patras, Greece
MAVRIKIOS D. LMS / University of Patras, Greece
MICHALOS G. LMS / University of Patras, Greece
PANDREMENOS J. LMS / University of Patras, Greece
PAPAKOSTAS N. LMS / University of Patras, Greece
SALONITIS K. LMS / University of Patras, Greece
STAVROPOULOS P. LMS / University of Patras, Greece
Table of Contents


P01: Economical Analysis for Investment on Measuring Systems ..... 01
M. X. Zhang, P.G. Maropoulos, J. Jamshidi, N. B. Orchard

P02: Measurement Assisted Assembly and the Roadmap to Part-to-Part Assembly ..... 11
J. Muelaner, O. Martin, A. Kayani, P.G. Maropoulos

P03: Digital Factory Economics ..... 20
J. Volkmann, C. Constantinescu

P04: CORBA Based Architecture for Feature Based Design with ISO Standard 10303 Part 224 ..... 29
D. Kretz, J. Militzer, T. Neumann, C. Soika, T. Teich

P05: Integrated Dimensional Variation Management in the Digital Factory ..... 39
J. E. Muelaner, P.G. Maropoulos

P06: Integrated Large Volume Metrology Assisted Machine Tool Positioning ..... 47
Zheng Wang, P.G. Maropoulos

P07: HSC Machining Centre Basic Errors Prediction for Accuracy Control ..... 57
J. Jedrzejewski, W. Kwasny

P08: Decision-making for Metrology System Selection Based on Failure Knowledge Management ..... 65
Wei Dai, Xi Zhang, P. Maropoulos, Xiaoqing Tang

P09: Comprehensive Support of Technical Diagnosis by Means of Web Technologies ..... 73
M. Michl, J. Franke, C. Fischer, J. Merhof

P10: Metrology Enhanced Tooling for Aerospace (META): A Live Fixturing Wing Box Assembly Case Study ..... 83
O. C. Martin, J. Muelaner, Z. Wang, A. Kayani, D. Tomlinson, P. G. Maropoulos, P. Helgasson

P11: Planning Software as a Service - A New Approach for Holistic and Participative Production Planning Processes ..... 93
R. Moch, E. Müller

P12: Beyond the Planning Cascade: Harmonized Planning in Vehicle Production ..... 101
S. Auer, W. Mayrhofer, W. Sihn

P13: Dynamic Wavelet Neural Network Model for Forecasting Returns of SHFE Copper Futures Price ..... 109
Li Shi, L. K. Chu, Yuhua Chen

P14: A Novel Tool for the Design and Simulation of Business Process Models ..... 117
J. Pandremenos, K. Alexopoulos, G. Chryssolouris

P15: Virtual Reality Enhanced Manufacturing Systems Design ..... 125
Xiang Yang, R. Malak, C. Lauer, C. Weidig, B. Hamann, H. Hagen, J. C. Aurich

P16: Real Options Model for Valuating China Greentech Investments ..... 134
Qin Han, L. K. Chu

P17: User-Assisted Evaluation of Tool Path Quality for Complex Milling Processes ..... 142
C. Brecher, W. Lohse

P18: Implementation of Kinematic Mechanism Data Exchange Based on STEP ..... 152
Yujiang Li, M. Hedlind, T. Kjellberg

P19: A Framework of an Energy-informed Machining System ..... 160
Tao Peng, X. Xu

P20: Development of a STEP-based Collaborative Product Data Exchange Environment ..... 170
Xi Vincent Wang, X. Xu

P21: Life Cycle Oriented Evaluation of Flexibility in Investment Decisions for Automated Assembly Systems ..... 178
A. Kampker, P. Burggräf, C. W. Potente, G. Petersohn

P22: The Role of Simulation in Digital Manufacturing: Applications and Outlook ..... 187
D. Mourtzis, N. Papakostas, D. Mavrikios, S. Makris, K. Alexopoulos

P23: A Software Concept for Process Chain Simulation in Micro Production ..... 204
B. Scholz-Reiter, J. Jacobi, M. Lütjen

P24: An Inventory and Capacity-Oriented Production Control Concept for the Shop Floor Based on Artificial Neural Networks ..... 213
B. Scholz-Reiter, F. Harjes, O. Stasch, J. Mansfeld

P25: A Multi-agent-enabled Evolutionary Approach to Supply Chain Strategy Formulation ..... 221
Ray Wu, David Zhang

P26: Development of an Assembly Sequence Planning System Based on Assembly Features ..... 227
Hong Seok Park, Yong Qiang Wang

P27: Multi-objective Optimization for the Successive Manufacturing Processes of the Paper Supply Chain ..... 237
M. M. Malik, J. Taplin, M. Qiu

P28: Performance of 3-D Textured Micro-Thrust Bearings with Manufacturing Errors ..... 247
A.G. Haritopoulos, E.E. Efstathiou, C.I. Papadopoulos, P.G. Nikolakopoulos, L. Kaiktsis

P29: A Multilevel Reconfiguration Concept to Enable Versatile Production in Distributed Manufacturing ..... 257
S. Minhas, M. Halbauer, U. Berger

P30: Virtual Factory Manager of Semantic Data ..... 268
G. Ghielmini, P. Pedrazzoli, D. Rovere, M. Sacco, C.R. Boër, W. Terkaj, G. Dalmaso, F. Milella

P31: Agile Manufacturing Systems with Flexible Assembly Processes ..... 278
S. Dransfeld, K. Martinsen, H. Raabe

P32: A Virtual Factory Tool to Enhance the Integrated Design of Production Lines ..... 289
R. Hints, M. Vanca, W. Terkaj, E.D. Marra, S. Temperini, D. Banabic

P33: Methodology for Monitoring and Managing the Abnormal Situation (Event) in Non-hierarchical Business Networks ..... 299
A. Shamsuzzoha, S. Rintala, T. Kankaanpää, L. Carneiro, P. Ferreira, P. Cunha

P34: Implementation and Enhancement of a Graphical Modelling Language for Factory Engineering and Design ..... 307
C. Constantinescu, G. Riexinger

P35: Approach for the Development of Logistics Enablers for Changeability in Global Value Chain Networks ..... 315
B. Scholz-Reiter, S. Schukraft, M.E. Özsahin

P36: Automation of the Three-dimensional Scanning Process Based on Data Obtained from Photogrammetric Measurement ..... 322
R. Konieczny, A. Riel, M. Kowalski, W. Kuczko, D. Grajewski

P37: Recommending Engineering Knowledge in the Product Development Process with Shape Memory Technology ..... 330
R. Thei, T. Sadek, S. Langbein

P38: Kinematic Structure Representation of Products and Manufacturing Resources ..... 340
M. Hedlind, Yujiang Li, T. Kjellberg, L. Klein

P39: Multi-agent-based Real-time Scheduling Model for RFID-enabled Ubiquitous Shop Floor ..... 348
T. Qu, George Q. Huang, Y.F. Zhang, S. Sun

P40: Frequency Mapping for Robust and Stable Production Systems ..... 356
R. Schmitt, S. Stiller

P41: An Automated Business Process Optimisation Framework for the Development of Re-configurable Business Processes: A Web Services Approach ..... 363
A. Tiwari, C. Turner, A. Alechnovic, K. Vergidis

P42: Virtual Rapid Prototyping Machine ..... 373
E. Pajak, F. Gorski, R. Wichniarek, P. Zawadzki

P43: Manufacturing Systems Complexity: An Assessment of Performance Indicators Unpredictability ..... 383
K. Efthymiou, A. Pagoropoulos, N. Papakostas, D. Mourtzis, G. Chryssolouris

P44: A Fuzzy Criticality Assessment System of Process Equipment for Optimized Maintenance Management ..... 392
H.S. Qi, N. Alzaabi, S. Wood, M. Jani

P45: Realising the Open Virtual Commissioning of Modular Automation Systems ..... 402
Xiangjun Kong, B. Ahmad, R. Harrison, Atul Jain, L. Lee, Y. Park

P46: Web-DPP: An Adaptive Approach to Planning and Monitoring of Job-shop Machining Operations ..... 411
Lihui Wang, Mohammad Givehchi

P47: Simulation Aided Development of Alternatives for Improved Maintenance Network ..... 421
A. Azwan, A. Rahman, P. Bilge, G. Seliger

P48: A Framework for Performance Management in Collaborative Manufacturing Networks ..... 430
P. S. Ferreira, P. F. Cunha

P49: Risk Management in Early Stage of Product Life Cycle by Relying on Risk in Early Design (RED) Methodology and Using Multi-Agent System (MAS) ..... 440
L. Sadeghi, M. Sadeghi

P50: Challenges in Digital Feedback of Through-life Engineering Service Knowledge to Product Design and Manufacture ..... 447
T. Masood, R. Roy, A. Harrison, S. Gregson, Y. Xu, C. Reeve

P51: A Digital Decision Making Framework Integrating Design Attributes, Knowledge and Uncertainty in Aerospace Sector ..... 458
T. Masood, J. Erkoyuncu, R. Roy, A. Harrison

P52: Product to Process Lifecycle Management in Assembly Automation Systems ..... 467
B. Ahmad, I. U. Haq, T. Masood, R. Harrison, B. Raza, R. Monfared

P53: Digital Factory Simulation Tools for the Analysis of a Robotic Manufacturing Cell ..... 478
A. Caggiano, R. Teti

P54: Diverse Noncontact Reverse Engineering Systems for Cultural Heritage Preservation ..... 486
T. Segreto, A. Caggiano, R. Teti

P55: A Two-phase Instruments Selection System for Large Volume Metrology Based on Intuitionistic Fuzzy Sets with TOPSIS Method ..... 494
Bin Cai, J. Jamshidi, P.G. Maropoulos, P. Needham

P56: Viewpoints on Digital, Virtual, and Real Aspects of Manufacturing Systems ..... 504
H. Nylund, P. Andersson

P57: Virtual Gage Design for the Effective Assignment of Position Tolerances under Maximum Material Condition ..... 512
G. Kaisarlis, C. Provatidis

P58: Dimensional Management in Aerospace Assemblies: Case Based Scenarios for Simulation and Measurement of Assembly Variations ..... 522
P. Vichare, O. Martin, J. Jamshidi, P.G. Maropoulos

P59: Evaluation of Geometrical Uncertainty Factors during Integrated Utilization of Reverse Engineering and Rapid Prototyping Technologies ..... 532
G. Kaisarlis, S. Polydoras, C. Provatidis

P60: Knowledge Capitalization into Failure Mode and Effects Analysis ..... 542
G. Candea, C. Candea, C. Zgripcea

P61: Robust Design Optimization of Energy Efficiency: Cold Roll Forming Process ..... 550
J. Paralikas, K. Salonitis, G. Chryssolouris

P62: An Analysis of Human-Based Assembly Processes for Immersive and Interactive Simulation ..... 558
L. Rentzos, G. Pintzos, K. Alexopoulos, D. Mavrikios, G. Chryssolouris

P63: Prototype Designing with the Help of VR Techniques: The Case of Aircraft Cabin ..... 568
L. Rentzos, G. Pintzos, K. Alexopoulos, D. Mavrikios, G. Chryssolouris

P64: Knowledge Management Framework Supporting Manufacturing System Design ..... 577
K. Efthymiou, K. Alexopoulos, P. Sipsas, D. Mourtzis, G. Chryssolouris

P65: A Manufacturing Ontology Following Performance Indicators Approach ..... 586
K. Efthymiou, D. Melekos, K. Georgoulias, K. Sipsas, G. Chryssolouris

P66: Structuring and Applying Production Performance Indicators ..... 596
G. Pintzos, K. Alexopoulos, G. Chryssolouris

P67: A Web-based Platform for Distributed Mass Product Customization: Conceptual Design ..... 604
D. Mourtzis, M. Doukas, F. Psarommatis

P68: Machining with Robots: A Critical Review ..... 614
J. Pandremenos, C. Doukas, P. Stavropoulos, G. Chryssolouris

P69: A Pushlet-based Wireless Information Environment for Mobile Operators in Human Based Assembly Lines ..... 622
S. Makris, G. Michalos, G. Chryssolouris

P70: RFID-Based Real-time Shop-Floor Material Management: An AUTOM Solution and a Case Study ..... 632
T. Qu, H. Luo, Y. Zhang, X. Chen, G. Q. Huang
Proceedings of DET2011
7th International Conference on Digital Enterprise Technology
Athens, Greece
28-30 September 2011

ECONOMICAL ANALYSIS FOR INVESTMENT ON MEASURING SYSTEMS
Maria Xi Zhang
Department of Mechanical Engineering
The University of Bath, UK
X.Zhang@bath.ac.uk
Prof. Paul Maropoulos
Department of Mechanical Engineering
The University of Bath, UK
P.G.Maropoulos@bath.ac.uk


Jafar Jamshidi
Department of Mechanical Engineering
The University of Bath, UK
J.Jamshidi@bath.ac.uk
Nick Orchard
Manufacturing Technology
Rolls-Royce Plc, UK
Nick.Orchard@rolls-royce.com
ABSTRACT
Metrology processes contribute to the entire manufacturing system and can have a considerable
impact on the financial investment in coordinate measuring systems. However, in today's industry
there is a lack of generic methodologies for quantifying their economic value. To address this
problem, a mathematical model is proposed in this paper by statistical deductive reasoning. This is
done by defining the relationships between the Process Capability Index, measurement uncertainty
and tolerance band. The correctness of the mathematical model is verified by a case study. Finally,
several comments and suggestions on evaluating and maximizing the benefits of metrology
investment are given.
KEYWORDS
Metrology, Measurement, Economical Analysis, Process Capability Index, Statistical Process
Control.
List of Abbreviations and Annotations
Denotation: Meaning
C_p: Process capability index
σ: Standard deviation
USL: Upper specification limit
LSL: Lower specification limit
T: Tolerance band, T = USL - LSL
Subscript 1: True value of the manufacturing process
Subscript 2: Observed value determined by measurement
Subscript m: Measuring system
U/T: Uncertainty-to-tolerance ratio

1. INTRODUCTION
High volume and large scale assemblies and fabrications with complex surfaces are increasingly
engaged in the capital intensive industries, such as aerospace, automotive and ship building
(Maropoulos et al. 2008). Metrology has come to play an important role in modern manufacturing
systems. The metrology process, which comprises measuring and inspection activities, links all
phases of the product lifecycle, from design and manufacturing to in-service and after-market
maintenance (Cai et al. 2008, Kunzmann et al. 2005). Hence, the economic analysis of purchasing
new measuring systems and metrology processes is expected to deliver high rates of return for
the manufacturers, who are also normally the heavy investors (Quatraro 2003, Renishaw Plc 2008,
Swann 1999). However, in current industry there is a lack of generic methodologies to evaluate
the economic value that the metrology process brings to entire manufacturing systems (Schmitt
et al. 2010). There are few comprehensive methodologies which can properly quantify the economic
benefits that inspection processes deliver to complex modern production environments (Kunzmann
et al. 2005).
The aim of this paper is to quantitatively estimate the value that metrology processes add to the
manufacturing system. To achieve this aim, we first discuss whether or not the inspection process
can bring economic benefits to the manufacturers and the investors, through a brief literature
review as background study in Section 2. A mathematical model is then established in Section 3 to
define the relationships between the Process Capability Index (C_p), measurement uncertainty (U)
and tolerance band (T), followed by a case study to verify the correctness of the mathematical
model (Section 4) and a discussion of the results (Section 5). Finally, in Section 6, the
conclusion is given that the metrology process is economically beneficial to modern manufacturing
systems.

2. BACKGROUND STUDY
It is now well agreed that the product life cycle is built up of several linked phases, including
design, manufacturing, in-service and after-market maintenance (Figure 1). The customers'
requirements are captured in the design phase, and the specifications of the products are defined.
Meanwhile the suppliers are called in and the supply chain is established. The CAD model of the
product is then passed to the manufacturing phase, where machining, inspection and assembly
activities lead to the physical products. The products are distributed and serviced in the market,
and require maintenance and overhaul at certain times during their service periods (Zheng et al.
2008). Throughout this loop, metrology is the fundamental tool for gaining information and
knowledge in all phases of the product lifecycle, establishing links between these separate phases
(Maropoulos et al. 2008, Schmitt et al. 2010). Metrology therefore plays a vital role in the
quality control process to guarantee product conformance and increases productivity, particularly
in robust engineering projects.

Figure 1. Roles of Metrology in Product
Lifecycle

Historically concerns on metrology economics
started in the 1970s, when computer aided
measurement techniques were increasingly regarded
as a means to control industrial manufacturing and
quality of all kinds of products (Osanna 2002). In
1977, Peters raised the questions of why measure,
what to measure and how much the measurement
pays (Peters 1977). He was concerned with the
macro-economic contribution of metrology in
industrialized societies. But he did not specify the
details of questions related to the micro-economical
analysis. In 1993, Quinn, former director of Bureau
International des Poids et Mesures (BIPM), stated
that measurement and measurement related
operations have been estimated to account for
between 3% and 6% of the GDP of industrialized
countries. He concluded that the economics of
metrology would increase further due to the
continuous increase of high accuracy machining
requirements and online measurement
implementations (Quinn, 1993). Since then,
investigations on the macro-economical impact of
metrology have been conducted worldwide
particularly across the UK (Swann 2009), the EU (L'Arrangement 2003), the US (Tassey 1999) and
Japan (McIntyre 1997) by national authoritative organizations.
On the other hand, the micro-economics of metrology has been the subject of academic research and
industrial applications for decades. Some research has focused on improving the efficiency and
accuracy of CMM calibration procedures so as to save time and money. The temperature distribution
along the machined surface was theoretically modelled and verified experimentally to rapidly guide
groove depth prediction in laser machining processes (Chryssolouris and Yablon, 1993). The error
sources of CMM measurement have been identified and systematically analyzed, providing a
time-efficient method for quickly calibrating CMM performance (Savio, 2006). In the aerospace and
automotive industries, where large and complex parts are in demand, data sampling techniques for
the inspection of free-form surfaces have been developed to minimize inspection costs and time
(Chryssolouris et al. 2002).
Recently, some research has begun to focus on identifying the costs and benefits of metrology as
part of the entire industrial manufacturing system for robust engineering projects. When
considering the micro-economic value that processes add to manufacturing systems, it is necessary
to compare the costs of the processes against the benefits they generate (Greeff, 2004). The same
applies to metrology processes. It is relatively easy to calculate the cost of metrology using
several established guidelines (Semiconductor Equipment and Materials International 2004).
However, the benefit analysis of the investment in metrology equipment and processes remains
problematic and challenging. It requires detailed consideration of fuzzy aspects, such as
stability of production, measurement uncertainty, sample size and cost of risks (Schmitt et al.
2010). This is also what our research concentrates on.
To solve the above problem, a mathematical model is presented in this paper as a first step
towards a calculation method for quantifying the benefits of the metrology process. An important
parameter for statistical process control (SPC), C_p, is introduced during the mathematical
reasoning to reflect the cost effectiveness of metrology. Hence, in the next section (Section 3),
a mathematical model relating C_p, U and T is established based on statistical analysis techniques
and logical reasoning steps.

3. MATHEMATICAL MODEL
C_p is a parameter indicating the ability of a process to produce outputs within specification
limits, resulting from statistical analysis of the product quality of the entire manufacturing
system (NIST, 2010). Since C_p is derived from mathematical statistics, this section first gives a
brief introduction to the normal distribution, which is frequently used in statistics and
statistical process control and underlies the mathematical model of C_p. The sum of two normal
distributions is then discussed. Finally, the mathematical model is applied to the manufacturing
environment, where the relationship between C_p, U and the nominal T is defined.
3.1. TERMS RELATED TO NORMAL
DISTRIBUTION
Normal distribution is one of the most frequently
used mathematical models in probability theory and
statistical analysis. It is a continuous probability
distribution that describes, at least approximately,
any variable that tends to cluster around the mean.
As shown in Figure 2, the graph of normal
distribution is bell shaped with a peak value at the
mean (Patel 1982). In probability theory, the function that describes the relative likelihood of a
continuous random variable taking a given value in the observation space is called the probability
density function (PDF).
Figure 2. Normal distribution
The simplest case of a normal distribution is known as the standard normal distribution, whose PDF
is

φ(x) = (1 / √(2π)) · e^(-x²/2)
In more general cases, a normal distribution is derived by exponentiating a quadratic function:

f(x) = e^(ax² + bx + c),   with a < 0

This provides the classical bell curve shape of the normal distribution. To describe the function
expediently, rather than using a, b and c, it is usually assumed that

μ = -b / (2a)   and   σ² = -1 / (2a)
With these new parameters, the PDF is rewritten in a convenient standard form as

f(x) = (1 / (σ√(2π))) · e^(-(x - μ)² / (2σ²)) = (1/σ) · φ((x - μ)/σ)

This form clearly shows that any normal distribution can be regarded as a version of the standard
normal distribution that has been stretched horizontally by a factor σ and then shifted rightward
by a distance μ (as shown in Figure 2). In statistics, the parameter μ is called the bias, which
specifies the position of the bell curve's central peak. The parameter σ² is called the variance,
which indicates how concentrated the distribution is around its mean value. The square root of σ²,
the standard deviation σ, is the width of the density function.
Therefore, a normal distribution can be denoted as N(μ, σ²). When a random variable X is
distributed normally with mean μ and standard deviation σ, it is expressed as:

X ~ N(μ, σ²)

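For readers who want to check the stretch-and-shift identity numerically, the following short
Python snippet (an illustrative addition, not part of the original paper; the function names are
ours) compares the general PDF with the rescaled standard PDF at a few points, using μ = 1, σ = 2:

import math

def phi(x):
    # PDF of the standard normal distribution
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def normal_pdf(x, mu, sigma):
    # General normal PDF via the stretch-and-shift identity above
    return phi((x - mu) / sigma) / sigma

# Spot-check against the direct formula for mu = 1, sigma = 2 (sigma^2 = 4)
for x in (-1.0, 0.0, 2.5):
    direct = math.exp(-((x - 1.0) ** 2) / (2.0 * 4.0)) / (2.0 * math.sqrt(2.0 * math.pi))
    assert abs(direct - normal_pdf(x, 1.0, 2.0)) < 1e-12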
3.2. DETERMINING THE RELATIONSHIP BETWEEN C_p1, C_p2 AND U/T
A process is usually defined as a combination of tools, materials, methods and people engaged in
producing a measurable output, e.g. a production line for machined parts. All manufacturing
processes are therefore subject to statistical variability, which can be evaluated by statistical
methods (Altman, 2005). To measure the variability of the manufacturing process, the process
capability is defined and expressed in the form of C_p, reflecting how much natural variation a
process experiences relative to its specification limits. This allows a comparison between
different processes with respect to quality control (Greeff, 2004).

Figure 3. Process capability
The C_p statistic is defined based on the assumption that the measured results of the manufactured
parts are normally distributed (Figure 3). Assuming a two-sided specification, if μ and σ are the
mean and standard deviation of the normal data, and USL and LSL are the upper and lower
specification limits respectively, then the process capability indices are defined as follows:

C_p = (USL - LSL) / (6σ)   for a non-bias process, or

C_pk = min( (USL - μ) / (3σ), (μ - LSL) / (3σ) )   for a bias process

(Montgomery, 1996).
In order to clarify the problem, this research has taken C_p = (USL - LSL) / (6σ) (for the
non-bias process) for consideration and discussion. But the methodology developed from the
research can be generalized to the biased manufacturing process with slight
modifications (Please refer to Future Work).
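As an illustrative sketch only (not from the paper; the function name and sample values are
hypothetical), the two indices above can be computed from measured data as follows:

import statistics

def capability_indices(samples, lsl, usl):
    # Cp for a non-bias process; Cpk additionally accounts for bias
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    cp = (usl - lsl) / (6.0 * sigma)
    cpk = min((usl - mu) / (3.0 * sigma), (mu - lsl) / (3.0 * sigma))
    return cp, cpk

# Hypothetical measurements against a 10.0 +/- 0.1 specification
print(capability_indices([10.02, 9.98, 10.01, 9.99, 10.03, 9.97], 9.9, 10.1))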
As mentioned in the previous section, the true value of C_p that is observed after the measuring
process can be underestimated due to the intervention of measurement uncertainty. For ease of
understanding, we liken the process capability reduction to traffic lights revealing go / no-go
decision making in a real manufacturing environment (Figure 4). If a production manager receives a
process capability evaluation with a lower than expected C_p value, he or she receives a red
light, meaning the manufacturing should be stopped or a more accurate machine tool or process
purchased. The unsatisfactory C_p value may, however, be caused by U from the inspection process
(flashing as a yellow light) instead of by the machining process itself (green light). As
measurement instruments with lower uncertainties are generally cheaper than high precision
machining tools, it is more economical to invest in proper measurement instruments or metrology
processes. This provides the manufacturing system with a green light.

Figure 4. Influence of U on C_p.
The next step of our work therefore aims to quantitatively evaluate how much the observed C_p
value is reduced by the measurement uncertainty, and to determine the true original value. The
mathematical relationships between the true C_p value, the observed C_p value and the
uncertainty-to-tolerance ratio are deduced by mathematical transformations.
It is assumed that the outputs of the manufacturing process are normally distributed and well
centred on the mean value, that is to say there is no bias (μ = 0).
From the definition of C_p, it follows that

C_p1 = (USL - LSL) / (6σ1) = T / (6σ1)   and   C_p2 = (USL - LSL) / (6σ2) = T / (6σ2)

thus

σ1 = T / (6 C_p1)   and   σ2 = T / (6 C_p2)

Assuming U is within 2σ, then

σm = U / 2

Given the sum of normal distributions proved by Equation (A1) in Appendix A:

σ2² = σ1² + σm²

Inserting the three expressions above into this equation,

(T / (6 C_p2))² = (T / (6 C_p1))² + (U / 2)²

Dividing both sides of the equation by T², we achieve:

(1 / (6 C_p2))² = (1 / (6 C_p1))² + ((1/2) · (U/T))²

The mathematical relationship between C_p1, C_p2 and U/T has thus been identified, quantifying the
reduction of the C_p value due to measurement uncertainty. Note that the U/T value is a popular
parameter for inspection engineers selecting measuring systems; the equation above can therefore
be utilized when making rational investment decisions on purchasing new machining or metrology
systems.
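To make the relation concrete, the following minimal Python sketch (an addition for illustration,
not part of the original paper) rearranges the equation above into
C_p2 = 1 / √(1/C_p1² + 9·(U/T)²), under the σm = U/2 assumption stated in the derivation:

import math

def observed_cp(cp_true, u_over_t):
    # Rearranged relation: 1/Cp2^2 = 1/Cp1^2 + 9*(U/T)^2, which follows from
    # (1/(6*Cp2))^2 = (1/(6*Cp1))^2 + ((1/2)*(U/T))^2 with sigma_m = U/2.
    return 1.0 / math.sqrt(1.0 / cp_true ** 2 + 9.0 * u_over_t ** 2)

for ut in (0.0, 0.05, 0.1, 0.2):
    print(f"U/T = {ut:.2f} -> observed Cp2 = {observed_cp(2.0, ut):.3f}")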

4. METHOD VERIFICATION
In order to demonstrate the correctness of the final result, a series of correlation curves was
created to explicitly show the correlation between C_p1, C_p2 and U/T through data acquisition and
processing.
6

Table 1. Data collection to acquire the true C_p value (C_p1)

U/T    C_p1=2.00   C_p1=1.67   C_p1=1.5    C_p1=1.33
0      2           1.666667    1.5         1.333333
0.01   1.985754    1.658395    1.493962    1.329087
0.02   1.944775    1.634301    1.476275    1.316588
0.03   1.881775    1.596377    1.448144    1.296516
0.04   1.803046    1.547461    1.411331    1.269899
0.05   1.714986    1.490712    1.367882    1.237969
0.06   1.623069    1.429155    1.319858    1.202031
0.07   1.53141     1.365387    1.269137    1.163341
0.08   1.442775    1.301448    1.217302    1.123029
0.09   1.358816    1.238824    1.165596    1.082046
0.1    1.280369    1.178511    1.114941    1.041158
0.11   1.207715    1.121121    1.065977    1.000951
0.12   1.140792    1.066974    1.019112    0.96185
0.13   1.079332    1.016185    0.974581    0.924145
0.14   1.022968    0.96873     0.932487    0.888021
0.15   0.971286    0.9245      0.892841    0.853579
0.16   0.923869    0.883332    0.855594    0.820859
0.17   0.880314    0.845034    0.820653    0.789854
0.18   0.840247    0.809405    0.787904    0.760528
0.19   0.803323    0.776244    0.757219    0.732822
0.2    0.769231    0.745356    0.728464    0.706665
0.21   0.737691    0.716556    0.701509    0.681978
0.22   0.708454    0.689672    0.676225    0.658679
0.23   0.681298    0.664544    0.652489    0.636684
0.24   0.656023    0.641026    0.630185    0.615913
0.25   0.632456    0.618984    0.609208    0.596285
0.26   0.610437    0.598298    0.589456    0.577726
0.27   0.589829    0.578857    0.570838    0.560165
0.28   0.570507    0.560561    0.553268    0.543534
0.29   0.55236     0.543318    0.53667     0.527772
0.3    0.535288    0.527046    0.520972    0.512821
0.31   0.519202    0.511671    0.506107    0.498624
0.32   0.504023    0.497125    0.492018    0.485134
0.33   0.489679    0.483346    0.478647    0.472303
0.34   0.476104    0.470277    0.465946    0.460088
0.35   0.463241    0.457869    0.453869    0.448449
0.36   0.451037    0.446073    0.442372    0.437349
0.37   0.439443    0.434848    0.431418    0.426755
0.38   0.428416    0.424155    0.42097     0.416634
0.39   0.417916    0.413959    0.410996    0.406958
0.4    0.407909    0.404226    0.401466    0.3977

For data collection, it is assumed that there are six manufacturing systems, whose true C_p values
(C_p1) are 2.00, 1.75, 1.50, 1.25, 1.00 and 0.75 respectively. Given that inspection engineers
consider U/T an important reference parameter for measurement instrument selection, the U/T in the
equation above is treated as a variable varying continuously between 0 and 1, meaning that the
measuring systems selected by the inspection engineers range from perfect measurement machines
(without incurring any uncertainty) to the worst situation. Finally, the observed C_p values
(C_p2) for each case of instrument selection (changing U/T values) under the various manufacturing
systems (C_p1) are acquired via this equation. This reveals how the manufacturing system and its
capability index are influenced by the measurement uncertainty.

Following the method described above, a part of the data collection and acquisition is listed in
Table 1. The curves constructed from these data are shown in Figure 5, which explicitly reveals
how the process capability drifts down under the influence of measurement instrument selection
associated with
the changes in the measurement uncertainties.
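As an aside, the sweep described above can be scripted; this is a hedged sketch (not part of the
original paper; matplotlib is assumed to be available) that generates curves of C_p2 against U/T
using the relation from Section 3.2:

import math
import matplotlib.pyplot as plt

def observed_cp(cp_true, u_over_t):
    # Relation from Section 3.2 (sigma_m = U/2 assumption)
    return 1.0 / math.sqrt(1.0 / cp_true ** 2 + 9.0 * u_over_t ** 2)

ut_values = [i / 100.0 for i in range(0, 101)]  # U/T swept from 0 to 1
for cp1 in (2.0, 1.75, 1.5, 1.25):
    plt.plot(ut_values, [observed_cp(cp1, ut) for ut in ut_values], label=f"Cp1 = {cp1}")
plt.xlabel("U/T")
plt.ylabel("Cp2")
plt.legend()
plt.show()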
Figure 5. Influence of U/T on C_p (C_p2 plotted against U/T for C_p1 = 2, 1.75, 1.5 and 1.25).
5. DISCUSSION
5.1. CONSISTENT WITH ISO STANDARD
AND NPL GOOD PRACTICE GUIDE
The result illustrated in Figure 5 confirms that the C_p of a manufacturing system is
underestimated due to the U introduced in the inspection and verification process, as discussed in
Section 2. More importantly, the conclusion from this result is consistent with ISO 14253 (ISO,
1998) and the NPL Good Practice Guide (Flack and Hannaford, 2006).
ISO 14253 provides clear guidelines about the need to allow for the uncertainty of the
measurement instruments by reducing the size of
the acceptance bands, and therefore argues that
there is considerable interest in having access to
measurement instruments with lower uncertainty
(ISO, 1998). One step further, NPL Good Practice
Guide No.80 (Flack and Hannaford, 2006)
illustrates the impact of the U on the manufacturing
process (Figure 6). It defines the process tolerance as an intangible tolerance band that is
derived from the nominal tolerance range but is narrowed down as U expands.
Figure 6. Impact of U (Flack and Hannaford, 2006).
To this point we have further developed a
mathematical method which quantitatively and
directly discloses the impact of U on the process
capability.

5.2. WHAT DO THE CURVES INDICATE?
The series of curves in Figure 5 suggests several observations on how U impacts the manufacturing
system.
Firstly, as discussed in Section 4, it can be seen that the observed C_p value (C_p2) of the
manufacturing system reduces as U increases. If we follow the common rule of metrology whereby U
is considered to be 1/10 of the tolerance (U/T = 0.1) for a finely controlled process (Cheng et
al. 2009), it can be read from Figure 5 that a 6σ quality controlled process whose true C_p value
(C_p1) equals 2 will be underestimated at C_p = 1.28 due to the intervention of uncertainties from
the measurement and inspection processes. This result indicates that the apparent process
capability can be improved by purchasing new measuring systems. As a more accurate measurement
instrument is normally cheaper than a more capable machining tool (Kunzmann et al. 2005),
investing in new measuring systems and new inspection processes can be more economically viable
than purchasing new machining tools.
Secondly, it can be seen that the higher the C_p value of the manufacturing system, the more
sensitively it reacts to the measurement instruments. This is consistent with the production
engineers' common view that a highly capable machining system requires a highly accurate measuring
system to inspect and verify the quality of the final products. It also suggests that highly
capable machining systems are generally worth pairing with an investment in a highly accurate
measuring system.
Thirdly, and most importantly, our work has demonstrated that the economic value of the investment
in measurement equipment and inspection processes can be quantitatively defined by means of the
measurement uncertainty. As the research continues, an accurate measurement instrument can
possibly be evaluated by modelling the distribution density of the values of the parts' features
and the effect of U on the C_p value, contributing to decision making for more complicated
production environments. Investments in metrology processes can generate direct value by reducing
the scrap rate and production cost, thus bringing economic benefits to manufacturing systems.

6. CONCLUSION AND FUTURE WORK
The metrology process, which involves measurement and inspection activities, plays an increasingly
vital role in high value added manufacturing industries. The heavy investors, such as
manufacturers and venture capitalists, believe that proper investments in more capable metrology
equipment and processes yield a high rate of return in capital intensive industries.
This paper reviews the state of the art of metrology and dimensional measurement techniques within
current manufacturing industries. The question is how to evaluate the value added by metrology
processes to manufacturing systems. To answer this question, a mathematical model was established
by statistical deductive reasoning, defining the relationships between C_p, U and the tolerance
band, followed by a case study to verify the mathematical model. It is concluded that the
metrology process is economically beneficial to modern manufacturing systems. Finally, several
suggestions and comments on economical and productive investment in metrology systems were drawn
from the mathematical model derived and the data collected during the case study.
In future work, an economic evaluation model for investing in measurement equipment will be
developed based on the mathematical model in this paper. It will address the question of how much
it costs if a manufacturer makes an error in the uncertainty zones around the tolerance limits; in
other words, how much it costs to scrap a part that is actually in tolerance, and how much it
costs if a non-conforming part is allowed to reach the customer. These decision making strategies
will be developed with statistical analysis methodologies, and will deploy the risk management
techniques commonly used in high value added capital investment.

7. ACKNOWLEDGEMENTS
The work reported in this paper has been
undertaken as part of the EPSRC Innovative
Manufacturing Research Centre at the University of
Bath (grant reference GR/R67507/0) and has been
supported by a number of industrial companies. The
authors gratefully acknowledge this support and
express their thanks for the advice and support of all
concerned.
REFERENCES
Altman, W. Practical process control for engineers and
technicians, Newnes, New York, USA, 2006.
Cai, B., Guo, Y., Jamshidi, J., and Maropoulos, P.G.,
Measurability analysis of large volume metrology
process model for early design, Proceedings of the
5th International Conference on Digital Enterprise
Technology (DET), France, 22-24 October 2008,
2008, pp. 793 - 806.
Cheng, C. H., Huo, D., Zhang X., Dai W. and
Maropoulos, P. G., Large volume metrology process
model: measurability analysis with integration of
metrology classification model and feature-based
selection model, Proceedings of the 6th CIRP-
Sponsored International Conference on Digital
Enterprise Technology, pp. 1013-1026 , Springer,
Berlin, Germany, 2009.
G. Chryssolouris, A. Yablon, "Depth Prediction in Laser
Machining with the Aid of Surface Temperature
Measurements", CIRP Annals, Volume 42,
No.1, 1993, pp. 205-207
G. Chryssolouris, S. Fassois, E. Vasileiou, "Data
sampling technique (DST) for measuring surface
waving", International Journal of Production
Research, Volume 40, No.1, 2002, pp. 165-177.
Dance, D. L., Cost of ownership for metrology tools, Proceedings of the NIST Semiconductor
Metrology Workshop, Gaithersburg, MD, USA, 30/01/1995.
Flack, D. R. and Hannaford, J., Fundamental good
practice in dimensional metrology, Measurement
Good Practice Guide No. 80, NPL, United Kingdom,
2006.
Gindikin, S. G. Distributions and convolution
equations, Gordon and Breach, New York, USA,
1992.
Gohberg, I. C. and Feldman, I. A., Convolution
equations and projection methods for their solution,
Translations of Mathematical Monographs, Vol. 41,
2006, pp.120-125.
Greeff, G., Practical E-manufacturing and supply chain
management, Newnes, New York, USA, 2004.
Hratch G. S. and Robert L. W., Impact of measurement
and standards infrastructure on the national economy
and international trade, Measurement, Vol. 27, No. 3,
2000, pp. 179-196.
ISO, ISO 14253-1:1998, Geometrical Product
Specifications (GPS) -- Inspection by measurement of
workpieces and measuring equipment -- Part 1:
Decision rules for proving conformance or non-
conformance with specifications, 2005.
Kunzmann, H., Pfeifer, T., Schmitt, R., Schwenke, H.
and Weckenmann, A., Productive Metrology -
Adding Value to Manufacture, CIRP Annals -
Manufacturing Technology, Vol. 54, No. 2, 2005,
pp. 155-168.
L'Arrangement de reconnaissance mutuelle (CIPM
MRA), Evolving needs for metrology in trade,
industry, society and the role of the BIPM, BIPM,
2003, pp. 80.
Maropoulos, P. G., Guo, Y., Jamshidi, J. and Cai, B.,
Large volume metrology process models: A
framework for integrating measurement with assembly
planning, CIRP Annals - Manufacturing Technology,
Vol. 57, No. 1, 2008, pp. 477-480.
Mendenhall, W., Beaver, R. J. and Beaver, B. M.,
Introduction to probability and statistics, Thomson
Brooks/Cole Publishing, New York, USA, 2005.
McIntyre, J.R., Japan's Technical Standards:
Implications for Global Trade and Competitiveness,
Westport, CT: Quorum, 1997.
Montgomery D. C., Introduction to Statistical Quality
Control, John Wiley & Sons, New York, USA, 1996.
Osanna, P. H. and Durakbasa, N. M., Trends in
Metrology, Wien, Austria, 2002.
Patel, J. K., Handbook of the normal distribution,
Marcel Dekker, New York, USA, 1982.
Peters, J., Metrology in Design and Manufacturing -
Facts and Trends, CIRP Annals - Manufacturing
Technology, Vol. 26, No. 2, 1977, pp. 415-421.
Pfeifer, T., Production Metrology, Oldenbourg Verlag,
Germany, 2002, pp. 256.
Quatraro, G., Innovative solutions in the metrology
field, Rivista di Meccanica Oggi, Vol. 14, No. 69,
2003, pp 236-238.
9

Quinn T.J. BIPM Rapport 94/5, 1993.
Renishaw Plc, Annual Report 2008, 2008, Renishaw
Plc.
E. Savio, "Uncertainty in testing the metrological
performances of coordinate measuring machines",
CIRP Annals - Manufacturing Technology,
Vol. 55, No. 1, 2006, pp. 535-538.
Semiconductor Equipment and Materials International,
SEMI E35-0304 - Cost of ownership for
semiconductor manufacturing equipment metrics,
Semiconductor Equipment and Materials
International, 2004.
Schmitt, R., Lose, J. and Harding, M., The management
of measurement processes Key to robust and cost
optimal production of high quality products,
International Journal of Metrology and Quality
Engineering, Vol. 1, No. 1, 2010, pp. 16.
Swann, P. G., The Economics of Metrology and
Measurement - Report for National Measurement
Office, Department for Business, Innovation and
Skills, Innovative Economics Limited, 2009.
Swann, P., The Economics of Measurement - Report for
National Measurement System Review, 1999, pp. 64,
retrieved 01/06/2010,
http://www.bis.gov.uk/files/file9676.pdf.
Tassey G., Lessons learned about the methodology of
economic impact studies: the NIST experience,
Evaluation and Program Planning, Vol. 22, 1999, pp.
113-119.
US National Institute of Standards and Technology
(NIST), What is Process Capability?, NIST/SEMATECH
Engineering Statistics Handbook, retrieved
01/06/2010,
http://www.itl.nist.gov/div898/handbook/pmc/section1/pmc16.htm.
Zheng, L. Y. , McMahon, C. A., Li, L., Ding, L. and
Jamshidi, J., Key characteristics management in
product lifecycle management: a survey of
methodologies and practices, Proceedings of the
Institution of Mechanical Engineers; Part B; Journal
of Engineering Manufacture, Vol. 222, No. 8, 2008,
pp. 989-1008.
Zill, D. G. and Cullen, M. R., Advanced engineering
mathematics , Jones & Bartlett Publication, New
York, USA, 2006.

APPENDIX A. SUM OF NORMAL
DISTRIBUTIONS
In probability theory, the calculation of the sum of
normal distributions is based on the distributions of
the random variables involved and their
relationships.
In summary, the sum of two independent normal distributions is also normally distributed, with a
bias equal to the sum of the two biases and a variance equal to the sum of the two variances
(i.e., the square of the standard deviation is the sum of the squares of the standard deviations).
This can be expressed mathematically as follows:
if X ~ N(μ, σ²) and Y ~ N(ν, τ²), and they are independent, then

Z = X + Y ~ N(μ + ν, σ² + τ²)   (A1)

This proposition is proved below using the convolution method (Gohberg and Feldman, 2006).
In mathematical analysis, convolution is an operation on two functions f(x) and g(x), producing a
third function that is typically viewed as a modified version of one of the original functions
(Gindikin, 1992). In order to determine the sum of the normal distributions, according to the
total probability theorem (Mendenhall et al. 2005), the probability density function of Z is

f_Z(z) = ∬ f_{X,Y,Z}(x, y, z) dx dy
X and Y are independent, therefore

f_Z(z) = ∬ f_X(x) f_Y(y) f_Z(z | x, y) dx dy

f_Z(z | x, y) is trivially equal to δ(z - (x + y)), where δ is Dirac's delta function (Zill and
Cullen, 2006), therefore

f_Z(z) = ∬ f_X(x) f_Y(y) δ(z - (x + y)) dx dy   (A2)
As it was previously assumed that Z = X + Y, we substitute (z - x) for y in Equation (A2):

f_Z(z) = ∫ f_X(x) f_Y(z - x) dx   (A3)

which is recognized as the convolution of f_X with f_Y. Therefore the PDF of the sum of two
independent random variables X and Y, with PDFs f and g respectively, is the convolution

(f ∗ g)(x) = ∫ f(u) g(x - u) du,   integrated from -∞ to +∞

First, by assuming the two biases μ and ν to be zero, the two PDFs are

f(x) = (1 / (σ√(2π))) e^(-x² / (2σ²))   and   g(x) = (1 / (τ√(2π))) e^(-x² / (2τ²))
The convolution becomes

(f ∗ g)(x) = ∫ (1 / (σ√(2π))) e^(-u² / (2σ²)) · (1 / (τ√(2π))) e^(-(x - u)² / (2τ²)) du
           = [cons] ∫ exp( -( u² / (2σ²) + (x - u)² / (2τ²) ) ) du
           = [cons] ∫ exp( -( τ²u² + σ²(x - u)² ) / (2σ²τ²) ) du   (A4)

where [cons] is short for a constant factor, here and below.
Continuing the integral calculation from Equation (A4), and completing the square in u:

[cons] ∫ exp( -( (σ² + τ²)u² - 2σ²xu + σ²x² ) / (2σ²τ²) ) du
= [cons] exp( -x² / (2(σ² + τ²)) ) ∫ exp( -( (σ² + τ²) / (2σ²τ²) ) · ( u - σ²x / (σ² + τ²) )² ) du   (A5)
Note that the result of the integral ∫ exp(-k(u - A)²) du, taken from -∞ to +∞, does not depend on
A (this can be proved by the simple substitution w = u - A, dw = du, with the bounds of
integration remaining -∞ and +∞). Finally, from the last part of Equation (A5), we obtain the
convolution of the PDFs f(x) and g(x) as

(f ∗ g)(x) = [cons] · exp( -x² / (2(σ² + τ²)) )   (A6)

where the constant does not depend on the variable x.
where constant means not depending on variable
x.
Equation (A6) simply reveals that the sum of the
two PDFs can be viewed as a constant multiple
of

) ( 2
exp
2 2
2

x
. In other words, the sum of
normally distributed random variables is also
normally distributed with the new variance
of ) (
2 2
+ .
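As a quick numerical sanity check (an illustrative addition, not in the original appendix; numpy
is assumed), the variance-addition property can be confirmed by sampling:

import numpy as np

rng = np.random.default_rng(0)
sigma, tau = 1.5, 0.8                     # standard deviations of X and Y
x = rng.normal(0.0, sigma, 1_000_000)     # X ~ N(0, sigma^2)
y = rng.normal(0.0, tau, 1_000_000)       # Y ~ N(0, tau^2)
print((x + y).var())                      # empirical variance of the sum
print(sigma ** 2 + tau ** 2)              # predicted variance: 2.89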
Therefore, the initial proposition in Equation (A1) has been proved. In Section 3.2, the equality
relationship in Equation (A1) is utilized to link the relationships between C_p, U and T.
Proceedings of DET2011
7th International Conference on Digital Enterprise Technology
Athens, Greece
28-30 September 2011

MEASUREMENT ASSISTED ASSEMBLY AND THE ROADMAP TO PART-TO-PART ASSEMBLY
Jody Muelaner
The University of Bath
jody@muelaner.com
Amir Kayani
Airbus in the UK
amir.kayani@airbus.com


Oliver Martin
The University of Bath
o.c.martin@bath.ac.uk
Prof Paul Maropoulos
The University of Bath
p.g.maropoulos@bath.ac.uk
ABSTRACT
Cycle times and production costs remain high in aerospace assembly processes largely due to
extensive reworking within the assembly jig. Other industries replaced these craft based processes
with part-to-part assembly facilitated by interchangeable parts. Due to very demanding interface
tolerances and large flexible components it has not been possible to achieve the required
interchangeability tolerances for most aerospace structures. Measurement assisted assembly
processes can however deliver many of the advantages of part-to-part assembly without requiring
interchangeable parts. This paper reviews assembly concepts such as interface management, one-
way assembly, interchangeability, part-to-part assembly, jigless assembly and determinate
assembly. The relationship between these processes is then detailed and they are organized into a
roadmap leading to part-to-part assembly.
KEYWORDS
Part-to-Part, Measurement Assisted Assembly, Interface management, Fettling, Shimming, One-
Way Assembly, Determinate Assembly

1. INTRODUCTION
Traditionally the production of large aerospace
assemblies has involved the inefficiency of craft
production; craftsmen fettling or shimming parts to
fit and carrying out a wide variety of highly skilled
operations using general purpose tools. Reliance on
monolithic jigs has also meant this approach has not
resulted in flexibility since the jigs are highly
inflexible, costly and have long lead times.
It could be said that the production of large
aerospace assemblies combines the inefficiency of
craft production with the inflexibility of the early
forms of mass production. This is clearly an issue,
but why is such an inefficient mode of production
used? It is not due to a lack of competence,
awareness of the issues or willingness to embrace
new technologies; the aerospace industry benefits
from access to many of the best minds in
engineering and is well known for utilizing the
latest technologies in many areas.
The root causes are the difficulties in maintaining
very close tolerance requirements over large
structures and the large number of different
operations for relatively low production volumes.
Issues related to maintaining high tolerances are the
biggest challenges; the lightweight aero structure
has flexible components; interfaces are often
imprecise especially for composite components and
it is very difficult to drill patterns of holes in
different components which will match and lock the
assembly into its correct overall form.
The traditional solution to these issues is to use a
monolithic jig which holds flexible components to
their correct final form as the assembly is built-up,
interface gaps are then measured in the jig so that
shims can be fitted and holes are drilled through the
stack of components. It is then necessary to break
the assembly apart to debur holes, clean and apply
sealant before the final assembly takes place
(Pickett et al. 1999 ; Muelaner and Maropoulos
2008). This process results in additional process
steps, inflexibility due to reliance on monolithic jigs
and inefficient craft based production due to high
levels of reworking in-jig. Additionally, the variety
of operations at low volumes combined with the
high tolerances required makes it very difficult to
automate processes. Further increasing the number
of craft based processes required while maintaining
close tolerances means that where automation is
used, it is generally based on inflexible gantry
systems.
There has in recent years been a great deal of
interest in moving away from the inefficiencies of
the traditional build process and concepts such as
Part-to-Part Assembly, One-Way Assembly,
Predictive Shimming, Measurement Assisted
Assembly and Determinate Assembly are being
discussed in the literature. The precise definition of
these terms is not always clear and a key objective
of this paper is therefore to provide clear definitions
of some commonly used terms.

2. INTERFACE MANAGEMENT
Interface management involves processes which
seek to ensure that any clashes and gaps between
components are maintained within acceptable
limits. It is therefore the key to ensuring the
structural integrity of an assembly. Where interfaces
cannot be managed through interchangeable
components it often results in inefficient craft based
in-assembly fitting processes.
The standard approach to interface management
employed in high-volume manufacturing is to
produce components to sufficiently tight tolerances
to facilitate interchangeability while maintaining
acceptable interface conditions. The alternative to
interchangeability is to create bespoke interfaces by
making adjustments to the form of components.
Such adjustments may be additive (shimming) or
subtractive (fettling). It should be noted that where
bespoke interfaces are used to manage the interfaces
it is still possible to have interchangeable parts
within the assembly. For example a rib may be an
interchangeable part but not be produced to
interchangeability tolerances and shims then used to
achieve interface management.
Bespoke interfaces, whether created by fettling or
by shimming, may be produced using traditional in-
assembly reworking processes or using
measurement assisted predictive processes.
In the traditional approach components are pre-
assembled, assembly tooling is often used at this
stage to control the form of the assembly. Any gaps
and clashes are measured in this pre-assembly
condition, the assembly is broken apart, components
are fettled or shims are produced and the structure is
reassembled.
In measurement assisted assembly (MAA), or the
predictive approach, components are measured pre-
assembly and this measurement data is used to
determine cutting paths for predictive fettling or the
manufacture of predictive shims. The components,
and possibly also the shims, can then be assembled
as though they were interchangeable.
The various options for interface management are
illustrated in the form of a Venn diagram in Figure
1.

Figure 1. Venn Diagram for Interface Management
The interface between components typically
involves direct contact between the surfaces of
components and also hole-to-hole interfaces into
which fasteners are inserted to join components
together. The above classification of interface
management may be applied to hole-to-hole
interfaces as well as to the interfaces between
surfaces of components.
For example, in the case of an interchangeable
assembly all holes are pre-drilled in components. In
the traditional in-assembly fitting approach to
producing bespoke interfaces first any fettling or
shimming is completed and then holes are drilled
through the stack of components. The pre-assembly
generally then needs to be broken, deburred and
cleared of swarf before sealant can be applied and
the final assembly carried out.
Bespoke hole placements can also be produced
using a predictive approach in which holes are first
placed in one component prior to any assembly and
to a tolerance insufficient for interchangeability.
The hole positions can then be measured and holes
in the second component placed. In the case of the
second component holes must be located to
interchangeability tolerances (Muelaner and

Maropoulos 2010). It may initially seem that this
approach offers no advantage over full
interchangeability since holes must be placed in the
second component to interchangeability to
The advantage is gained however when a large
component is joined to one or more smaller
components. It is then possible to place holes in the
large component which requiring a high level of
accuracy. The accurate holes are placed in the small
components which is a relatively easy task.
2.1. INTERCHANGEABILITY (ICY)
Interchangeability (ICY) is the ability of
components to fit to one another without requiring
any reworking (interface management). An
interchangeable part can therefore be taken from
one assembly and placed into another assembly
without changing the form of the part.
Low cost, high volume manufacturing typically
depends on interchangeability but it is generally not
possible to achieve the required tolerances for
majority of aircraft structure interfaces
2.2. SHIMMING
Shimming involves adding additional spacers or
packers normally referred to as shims to an
assembly in order to fill gaps between components.
As explained above, traditionally this involves measuring actual gaps in a pre-assembled structure using feeler gauges, producing shims and re-assembling with the shims in place. This traditional in-assembly fitting process may require a number of iterations before all gaps are within tolerance.
In the case of predictive shimming (Kayani and Gray 2009) components are measured before being assembled. In this state the interface surfaces are fully visible, meaning that rather than simply determining gaps using feeler gauges, the full surface profile can be characterized using 3D scanning technology. It is then possible to produce shims which more fully conform to the surface profile of components.
The major advantage of this approach is that pre-assembly is not required and therefore one-way assembly is facilitated.
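To make the gap prediction concrete, a minimal sketch follows; it assumes the two scanned interface surfaces have been expressed as deviation maps on a common grid, and all names, values and the sign convention are illustrative assumptions rather than the cited method.

import numpy as np

def predictive_shim(dev_a, dev_b, nominal_gap=0.0, min_shim=0.1):
    # dev_a, dev_b: measured deviations (mm) of each interface surface
    # from nominal on a common (x, y) grid; positive values mean
    # material is missing, i.e. the joint opens up at that point.
    gap = nominal_gap + dev_a + dev_b          # predicted assembled gap
    # Fill only gaps at or above the minimum manufacturable shim
    # thickness; smaller gaps are left to fastener clamp-up.
    return np.where(gap >= min_shim, gap, 0.0)

# Illustrative 2 x 3 grid of deviations in millimetres.
dev_a = np.array([[0.00, 0.06, 0.12],
                  [0.02, 0.09, 0.18]])
dev_b = np.zeros_like(dev_a)
print(predictive_shim(dev_a, dev_b))   # shim machined where gap >= 0.1 mm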
2.3. FETTLING
Traditionally fettling is carried out in-jig in a similar way to traditional shimming operations. It is however also possible to carry out predictive fettling in which components are measured and bespoke interfaces created before assembly.
Predictive fettling was used to maintain the interface between the wing box ribs and the upper cover on the Advanced Low Cost Aircraft Structures (ALCAS) lateral wing box demonstrator. In this process measurements of the cover profile
were used to generate machining paths for the fettling of rib feet. The rib feet were then machined using a standard 6-axis industrial robot mounted on a gantry over the wing box. The accuracy of the robot was greatly increased through the application of closed loop control with feedback provided by a photogrammetry system.
In a traditional manual machining process high accuracy is achieved by initially cutting features oversized, measuring them and then using these measurements to guide the further removal of material in an iterative process. A similar but fully automated process was used in which the robot initially made roughing cuts of the rib feet, measurements were made and these were used to apply corrections to the finishing cut. The complete process is illustrated in Figure 2 (Muelaner et al. 2011).
Figure 2 ALCAS Rib Foot Fettling Process
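The roughing-measuring-finishing loop can be sketched as a simple feedback controller. In the following minimal Python illustration, cut() and measure() are hypothetical stand-ins for the robot and photogrammetry interfaces, which the source does not specify at this level of detail.

def fettle_rib_foot(target, rough_offset, cut, measure, tol=0.05, max_passes=5):
    # Rough cut deliberately oversize, then iteratively correct the
    # commanded height using feedback from the external metrology
    # system until the rib foot is within tolerance. Heights in mm.
    command = target + rough_offset
    cut(command)                       # roughing cut, oversize
    for _ in range(max_passes):
        error = measure() - target     # deviation seen by metrology
        if abs(error) <= tol:
            return True                # within tolerance
        command -= error               # correct for the observed error
        cut(command)                   # corrected finishing cut
    return False                       # flag for manual intervention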
The registration of point cloud data from multiple instrument locations (Mitra et al. 2004) may be important in enabling this type of predictive process.
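A standard way to bring scans from several instrument locations into a common frame is a least-squares rigid-body fit over shared reference targets. The sketch below uses the well-known SVD-based (Kabsch) solution and is offered as a generic illustration, not as the specific method of Mitra et al. (2004).

import numpy as np

def rigid_registration(p, q):
    # Best-fit rotation R and translation t mapping points p onto q in
    # the least-squares sense; p, q are (n, 3) arrays of the same
    # reference targets measured from two instrument locations.
    pc, qc = p.mean(axis=0), q.mean(axis=0)
    h = (p - pc).T @ (q - qc)                   # cross-covariance matrix
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))      # guard against reflection
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return r, qc - r @ pc

# Merge a point cloud taken at station 1 into station 2's frame.
targets_s1 = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
targets_s2 = targets_s1 + np.array([5.0, 2.0, 0.0])   # pure translation here
R, t = rigid_registration(targets_s1, targets_s2)
cloud_s1 = np.random.rand(100, 3)
cloud_in_s2 = cloud_s1 @ R.T + t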
2.4. DRILLING
Where interchangeability tolerances cannot be achieved it is necessary to place holes at bespoke positions in such a way that patterns of holes closely match so as to allow close fitting fasteners to pass through both components. The traditional way to achieve this is to drill through both components in the pre-assembly state.
There are a number of disadvantages to this approach; aerospace assemblies often contain thousands of holes and the drilling of these holes represents a significant percentage of the cost of building an airframe (Bullen 1997). By carrying out these operations within the capital intensive bottleneck to production - which is the main assembly jig
- the cost of drilling these holes is greatly increased.
Furthermore, when a stack of components is drilled through it is often necessary to break the assembly to clean and deburr before re-assembling, adding costly additional operations.
Orbital drilling (Kihlman 2005) may remove the need to break, clean and deburr, and therefore facilitate a one-way assembly process. It will not however remove the need to drill through components or facilitate part-to-part assembly; therefore, although some process steps are removed, drilling must still be carried out within the bottleneck of the jig.
Measurement assisted determinate assembly
(MADA) has been proposed as a potential
predictive approach to hole placement. In this
approach holes are first placed in large components
to relatively slack tolerances. The hole positions are
then measured and bespoke holes are accurately
placed to match in the smaller components; this is illustrated in Figure 3 (Muelaner and Maropoulos 2010).



Figure 3 MADA Predictive Hole Placement
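As a toy illustration of the principle in Figure 3, the fragment below (all values hypothetical) takes the measured positions of the slack-tolerance holes in the large component as the drilling targets for the small component, and checks the drift from nominal against an assumed design allowance such as an edge margin.

import numpy as np

# Nominal hole pattern (mm) shared by both components.
nominal = np.array([[0.0, 0.0], [100.0, 0.0], [200.0, 0.0]])

# Holes drilled in the large component to slack tolerances, then measured.
measured = nominal + np.array([[0.4, -0.3], [-0.2, 0.5], [0.1, -0.6]])

# MADA: drill the small component at the measured positions, to
# interchangeability tolerances (assumed achievable on a small part).
drill_targets = measured

# The bespoke pattern must still respect design limits such as edge
# margins; 1.0 mm is an invented allowance for illustration.
assert np.linalg.norm(drill_targets - nominal, axis=1).max() <= 1.0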
3. PART-TO-PART ASSEMBLY
Part-to-part assembly is an assembly process where
any interface management is conducted pre-
assembly allowing a rapid one-way assembly
process. Part-to-part assembly may therefore be
seen as the key requirement for an efficient build
process.
A full part-to-part assembly process would
involve either interchangeable components or
predictive fettling, shimming and hole placement
being carried out prior to assembly. Currently part-
to-part assembly is commonly achieved through
interchangeability but achieving this using
predictive processes is relatively unknown.
Figure 4 shows the Venn diagram used in section
2 with the interface management techniques which
are compatible with part-to-part assembly clearly
identified. It shows that both interchangeability and
the use of predictive processes to produce bespoke
interfaces are compatible with part-to-part
assembly, while the use of in-assembly fitting
processes to produce bespoke interfaces is not
compatible with one-way assembly.

Figure 4 Venn Diagram Showing Compatibility of
Interface Management Techniques with Part-to-part
assembly

4. ONE-WAY ASSEMBLY
One-way assembly is a process in which once parts
are assembled they are not removed from the
assembly; there is no requirement to pre-assemble,
break and reassemble. One-way assembly is a
precondition for part-to-part assembly.

Figure 5 Venn Diagram Showing One-Way Assembly in
Relation to Part-to-Part Assembly and Interface
Management
In order to achieve one-way assembly the following conditions must be met:
- Pre-assembly to measure gaps before carrying out interface management must not be required; any bespoke interfaces required for interface management must therefore involve a predictive measurement assisted process in which component measurements are used to predict gaps.
- Any hole drilling operations must not require deburring between components or the breaking apart of assemblies to remove swarf.
- There must be sufficient confidence that an assembly will be right first time that sealant can be applied the first time components are assembled.
The major difference between one-way assembly
and part-to-part assembly is therefore that in a one-
way assembly some drilling through of components
in the assembly is permitted provided this does not
require that the assembly is broken for cleaning and
deburring.
5. MEASUREMENT ASSISTED
ASSEMBLY
The term measurement assisted assembly (Kayani
and Jamshidi 2007) is used to refer to any process
where measurements are used to guide assembly
operations. This includes, but is not limited to, predictive interface management processes in which measurements of remote parts' interfaces are used to fettle or shim another component either before or during assembly.
position of components using measurement and
processes where automation operates under closed
loop control with feedback from an external
metrology system.
5.1. ASSEMBLE-MEASURE-MOVE
An assemble-measure-move (AMM) process is one
in which a component is approximately positioned
within an assembly, its position is then measured
and it is moved into the correct position. This is
generally an iterative process in which continuous
feedback is used to track a component into position.
Generally this is not compatible with a fully part-
to-part assembly process since once a component is
located using an assemble-measure-move process it
will then be necessary to drill through to fasten it in
position. It is of course possible to envisage a
process in which a component is fastened into
position using an adjustable clamping arrangement
but in practice for aerospace structures this is
unlikely.
This technique is of interest because, although it is not fully consistent with the goal of a part-to-part assembly process, it does allow the accurate
placement of components without requiring
accurate assembly tooling. It is therefore a useful
technique for certain difficult components within an
assembly in order to reduce tooling complexity or
as a get-out in a primarily determinate assembly.
These techniques are used within final aircraft
assembly at which stage the structure is largely
interchangeable and determinate (explained below).
6. ASSEMBLY TOOLING
Assembly tooling is used to hold components, and in the case of jigs to guide assembly machinery, during the assembly process. In the case of jigs and fixtures
it incorporates highly accurate component locators
allowing the tooling to determine the form of the
emerging assembly. In the case of work holding it is
the components themselves which determine the
form of the assembly (determinate assembly) and
therefore the tooling does not require any accurate
locators. The various forms of assembly tooling and
associated assembly methods are summarized in
Figure 6 and described in detail below.




Figure 6 Assembly Tooling and Associated Assembly Methods
6.1. JIGS AND JIG BUILT STRUCTURES
Traditionally aerospace structures are jig built; both the overall form of the assembly and the position of assembly features such as holes are determined by
the jig which controls component location and the
positioning of assembly machinery. A jig is
therefore a form of assembly tooling which
comprises accurate locators for both components
and assembly machinery.
It follows from these definitions that a jigless
assembly is a process which does not meet all of
these conditions. A process where fixtures are used to locate components, therefore controlling the form of the assembly, may be regarded as jigless, provided the tooling does not also control machinery positioning.
6.2. FIXTURES AND JIGLESS ASSEMBLY
Jigless assembly within an assembly fixture follows
essentially the same process as for a jig built
structure. Components are still assembled within the
tooling which controls the form of the assembly and
in-assembly fitting processes are carried out.
The key difference is that an assembly fixture is
generally very much simpler than an assembly jig
since it is only required to locate components and
not also locate machinery for fettling and drilling.
These functions are instead generally carried out by
automation such as dedicated drilling robots
equipped with vision systems (Hogan et al. 2003 ;
Calawa et al. 2004 ; Hempstead et al. 2006) or
standard flexible robots with external metrology
control (Summers 2005 ; Muelaner, Kayani et al.
2011).
Therefore in jigless assembly although a large
number of operations continue to be carried out at
late stages of the assembly process, these operations
are completed more efficiently and the simpler
tooling means that less capital is being tied up in
these operations.
6.3. ASSEMBLE-MEASURE-MOVE USING
WORK HOLDING TOOLING
As discussed above the assemble-measure-move
technique is probably not suitable for the complete
assembly of an airframe but is a useful technique for
certain components. It is essentially a form of
fixture built assembly in which the fixture is a robot
operating under closed loop control from a large
volume metrology instrument.
6.4. DETERMINATE ASSEMBLY (DA) USING
WORK HOLDING TOOLING
A determinate assembly is one in which the final
form of the assembly is determined by the form of
its component parts. The location of components' interface features such as contacting faces and holes will therefore strongly influence the final form of
the assembly. It is often assumed that a determinate
assembly must be made up of interchangeable parts
but this is not necessarily the case since determinate
assembly can be achieved using, for example,
measurement assisted determinate assembly, see
below.
6.4.1. DETERMINATE ASSEMBLY WITH KING
HOLES
King holes are holes which are placed specifically
to facilitate determinate assembly. In this approach, the holes which will finally be used to fasten components are not all placed during component manufacture; instead, just a few holes are placed in the components to facilitate a determinate assembly.
Once the components have been joined together
using the king holes the actual structural holes are

drilled through the component stack in the
conventional way. If required then the assembly can
be broken apart, cleaned, deburred and reassembled.
The king holes can also be drilled undersize so that once the other structural holes have been drilled and temporary fasteners fitted to them, the king hole fasteners can be removed and full size holes drilled through to replace the king holes.
Determinate assembly using king holes is
therefore an intermediate step towards the adoption
of a fully part-to-part determinate assembly.
6.4.2. MEASUREMENT ASSISTED DETERMINATE ASSEMBLY (MADA)
Measurement assisted determinate assembly is a
process in which measurement assisted predictive
processes are used to create bespoke interfaces. In
general large components are measured and smaller
bridging components are machined to interface with
the less well dimensionally controlled larger
components.
This allows all interface management to be
carried out at the component manufacturing stage
and for a fully part-to-part assembly process to then
take place.
6.4.3. DETERMINATE ASSEMBLY WITH INTERCHANGEABLE PARTS
Where sufficient tolerances can be achieved for
fully interchangeable parts then this will lead to the
minimum number of process steps and a fully part-
to-part and determinate assembly. This is the
ultimate goal for any assembly process.
6.5. COMPATIBILITY WITH PART-TO-PART
Any form of assembly tooling and associated
method, from a traditional jig built approach to a
fully determinate assembly, can be made to be
compatible with one-way assembly if predictive
fettling or shimming is combined with drilling
techniques which do not require deburring or
cleaning.
The only assembly methods which are fully
compatible with part-to-part assembly are MADA
and determinate assembly with interchangeable
parts.



Figure 7 Compatibility of Assembly Tooling and Associated Assembly
Methods with One-Way and Part-to-Part Assembly
6.6. RECONFIGURABLE TOOLING
Reconfigurable tooling involves constructing
assembly tooling from standard components which
can be readily adjusted or rebuilt to accommodate
design changes or new products (Kihlman 2002).
This can be thought of as being similar to
scaffolding. It solves the problems of inflexibility
inherent in reliance on jig built and fixture built
assembly processes. It does not however alleviate
issues associated with interface management
operations and in particular drilling being carried
out at a late stage in assembly.
7. THE ROADMAP TO PART-TO-PART
ASSEMBLY
Part-to-part assembly involves carrying out the maximum possible number of operations during component manufacturing. This means that time is
not spent working on components within the final
assembly where a high level of capital expenditure

is then tied up in these operations and a bottleneck to production exists.
It is the interfaces between component surfaces
and mating holes which ultimately determine the
form of any assembly, whether it has been built
within an assembly jig or as a determinate
assembly. Part-to-part assembly implies that all
holes and interfacing surfaces have been processed
to their final form before assembly takes place. It
therefore follows that there is no point in using an
assembly jig or fixture for a part-to-part assembly
since it would have no influence on the form of the
assembly once it was released from the jig. It is
therefore possible to state that achieving true part-
to-part assembly will require a determinate
assembly.
There are two approaches identified as facilitating a
fully part-to-part assembly process; MADA and
determinate assembly using interchangeable parts.
Since the king hole approach to determinate assembly involves the through drilling of holes during assembly it is not fully compatible with part-to-part assembly; it could however act as an important intermediate step towards part-to-part assembly using MADA. Similarly predictive shimming and
fettling processes may be initially developed within
a jigless assembly process and act as intermediate
steps towards part-to-part assembly using MADA.
The ultimate approach to part-to-part assembly, through the determinate assembly of interchangeable parts, will be facilitated by design for manufacture, which relaxes the component tolerances required, and by machine tool development, which allows tighter tolerances to be produced.
Although processes and technologies such as
reconfigurable tooling, assemble-measure-move and
orbital drilling may bring important benefits in the
short term they are not seen as directly contributing
to the development of part-to-part assembly.
The way in which the various processes and
technologies discussed do or do not contribute to
the development of part-to-part assembly is
illustrated in Figure 8.


Figure 8 The Roadmap to Part-to-Part Assembly
CONCLUSIONS
Concepts such as interface management, one-way
assembly, interchangeability, part-to-part assembly,
jigless assembly and determinate assembly have
been explained. The relationship between these
processes was detailed and it was shown that
predictive shimming, predictive fettling, design for
manufacture and the use of king holes will be of
particular importance in enabling part-to-part
assembly. These methods will have relevance to
other industries beyond the aerospace applications
discussed where bespoke interfaces are also
required. Examples of such applications include
steel fabrication, boat building and the construction
of power generation machinery.

ACKNOWLEDGEMENTS
Thanks are given to Carl Mason of CM Engineering
Services Ltd for support in defining the terms used
in this paper.
REFERENCES
Bullen, G. N. (1997). "The Mechanization / Automation
of Major Aircraft Assembly Tools." Production
and Inventory Management Journal: 84-87.
Calawa, R., S. Smith, I. Moore and T. Jackson (2004).
HAWDE Five Axis Wing Surface Drilling
Machine. Aerospace Manufacturing and
Automated Fastening Conference and
Exhibition. St. Louis, Missouri, USA, SAE
International.
Hempstead, B., B. Thayer and S. Williams (2006).
Composite Automatic Wing Drilling Equipment
(CAWDE). Aerospace Manufacturing and
Automated Fastening Conference and
Exhibition. Toulouse, France, SAE
International: 1-5.
Hogan, S., J. Hartmann, B. Thayer, J. Brown, I. Moore, J.
Rowe and M. Burrows (2003). Automated Wing
Drilling System for the A380-GRAWDE. SAE
Aerospace Automated Fastening Conference
and Exhibition - 2003 Aerospace Congress and
Exhibition. Montreal, Canada, SAE
International. 112: 1-8.
Kayani, A. and I. Gray (2009). Shim for Arrangement
Against a Structural Component and a Method
of Making a Shim, Airbus UK Ltd.
Kayani, A. and J. Jamshidi (2007). Measurement
Assisted Assembly For Large Volume Aircraft
Wing Structures. 4th International Conference
on Digital Enterprise Technology. P. G.
Maropoulos. Bath, United Kingdom: 426-434.
Kihlman, H. (2002). Affordable Reconfigurable
Assembly Tooling - An Aircraft Development
and Manufacturing Perspective. Department of
Mechanical Engineering. Linköping, Linköping Universitet: 111.
Kihlman, H. (2005). Affordable Automation for Airframe
Assembly. Department of Mechanical
Engineering. Linköping, Linköpings Universitet: 286.
Mitra, N. J., N. Gelfand, H. Pottmann and L. Guibas
(2004). Registration of point cloud data from a
geometric optimization perspective.
Eurographics/ACM SIGGRAPH symposium on
Geometry processing, Nice, France.
Muelaner, J. E., A. Kayani and P. Maropoulos (2011).
Measurement Assisted Fettling of Rib Feet to
Maintain Cover Interface on the ALCAS Wing
Box Demonstrator. SAE 2011 Aerotech
Congress & Exposition. Toulouse, France, SAE.
Muelaner, J. E. and P. G. Maropoulos (2008). Large
Scale Metrology in Aerospace Assembly,
Nantes, France.
Muelaner, J. E. and P. G. Maropoulos (2010). "Design
for Measurement Assisted Determinate
Assembly (MADA) of Large Composite
Structures." Journal of the CMSC 5(2): 18-25.
Pickett, N., S. Eastwood, P. Webb, N. Gindy, D.
Vaughan and J. Moore (1999). A Framework to
Support Component Design in Jigless
Manufacturing. Aerospace Manufacturing
Technology Conference & Exposition.
Bellevue, Washington, SAE International: 1-4.
Summers, M. (2005). Robot Capability Test and
Development of Industrial Robot Positioning
System for the Aerospace Industry. AeroTech
Congress & Exhibition. Grapevine,Texas, SAE
International: 1-13.


Proceedings of DET2011
7th International Conference on Digital Enterprise Technology
Athens, Greece
28-30 September 2011

DIGITAL FACTORY ECONOMICS
Johannes W. Volkmann 1
1 Institute of Industrial Manufacturing and Management, University of Stuttgart
Johannes.Volkmann@iff.uni-stuttgart.de

Carmen L. Constantinescu 1,2
2 Fraunhofer Institute for Manufacturing Engineering and Automation IPA
Carmen.Constantinescu@ipa.fraunhofer.de
ABSTRACT
Most factories already have a more or less adequate grasp on what a Digital Factory is, but hardly
realise the possible benefits that an implementation may achieve. Still, especially for SMEs, the
evaluation of situation-based implementation scenarios of digital tools in the context of a Digital Factory is an insurmountable challenge and often keeps potential users from further investigation.
The main challenge is the lack of accepted and appropriate methods and tools to evaluate the
economic efficiency, effectiveness and therefore the expected benefits of the employment of
integrated digital tools. This evaluation needs to address a selection of digital tools and needs to be
scalable to be able to evaluate different alternative implementation scenarios. This paper presents
the foundations and the first steps aiming at the development of a scalable methodology for the
evaluation and therefore the selection of suitable digital tools.
KEYWORDS
Digital Factory, Digital Tools, DF, Economics, Process Modeling

1. INTRODUCTION
Due to the increasing globalisation of production in
the last decade, many factories modernised or
replaced their methods and digital tools used for
factory planning and the continuous optimisation.
Most of these systems have been heavily adapted
and are now an integral part of the digital factory.
Considering the life cycle of these methods and digital tools, replacing them will be essential in the upcoming years (Buchta et al, 2009). Due to the crisis, these replacements have often been halted, and the existing systems often lag behind the possibilities that currently available technologies may provide.
With the end of the crisis factories are once again
concentrating on the challenges they face, especially
the increase in necessary interconnections for the
information and data exchange as well as the
amount of data that is generated now. Considering
these challenges, keeping control of the constant
planning tasks and at the same time, keeping the
planning processes flexible and fast is difficult to
achieve. One of the possible answers to get control
of this complexity and address these challenges is
the usage of methods and digital tools in the context
of a Digital Factory. They aim at keeping the
factory competitive, by introducing a new depth of
control. There is a plethora of tools to choose from,
all bringing their specific functions, risks and
implications to the table. Selecting the suitable
method or tool to fit into the existing systems and
addressing the specific needs of one factory is
difficult, therefore there is still a lot of potential to
be realised, especially in small and medium
enterprises (SME).
This paper presents the foundations and the first
steps aiming at the development of a scalable
method for the evaluation and selection of situation-
suitable methods and digital tools in the context of a
digital factory.
2. MOTIVATION
The planning processes in a factory are becoming
increasingly complex. On the one hand the product
world changes at an increasing pace, creating the
necessity of flexible production systems. A flexible
production system is defined by its capability and
ease to accommodate changes in the system (Karsak
and Tolga, 2000). It enables a production which is

cost effective, but at the same time able to handle
highly customised products (Gupta and Goyal,
1989). On the other hand the globalisation increases
the complexity of the planning itself. In analogy to the product life cycle, Westkämper et al (2006) stated the new paradigm "the factory is a product". This induced a holistic approach to the life cycle of a factory. The different phases of the planning processes have been ordered in the factory life cycle approach (Constantinescu and Westkämper, 2008). It divides the phases of a factory's life into four groups: strategy, structure, process and operation (Figure-1).

[Figure 1: the factory life cycle phases - Investment and Performance Planning; Site and Network Planning; Buildings, Infrastructure and Media Planning; Internal Logistics and Layout Planning; Process, Equipment and Workplace Planning; Ramp-up and Project Management; Factory Operation; Maintenance and Equipment Management; Product Development - arranged around the Digital Factory. Source: Fraunhofer IPA.]

Figure 1 The factory life cycle management phases
(Constantinescu et al, 2009)
The aforementioned groups are ordered and defined according to the granularity of the information used in the according phase. Inside those four groups, ten phases in total are identified to cover the whole life of a factory, ranging from the investment and performance planning to the operation and the final dismantling (Westkämper, 2008).
There is a plethora of different methods and
digital tools available to offer support for these
phases (Figure-2). The groups are created and
scaled considering the increasing granularity of the
data along the factory life cycle. The first phases are
in the group "strategy", generally supported by tools like manufacturing resource planning (MRP) or production planning and scheduling (PPS).
The second group "structure" covers the planning phases of the factory life cycle from the site and network planning up to parts of the process planning. The third group "process" details the information further and covers most of the process planning as well as the equipment planning. It also includes the ergonomics and workplace planning. The group "operation" includes all detailed information and reaches from the ramp-up to the dismantling and recycling.
and digital tools are specifically designed to support
very specific tasks in single phases, while others
address a whole phase or even the combination of
different phases. This creates a complexity in
selecting the suitable methods and digital tools to
address the challenges arising in a specific factory.

[Figure 2: digital tools along the factory life cycle - MRP/PPS, FEM, CAE/CAD, CAPP, CAP/CAQ, process simulation, MES, PDE, VIBN, CAM/NC, FDM, CAO/PLM/FLM. Source: Fraunhofer IPA.]
Legend:
MRP - Manufacturing Resource Planning
PPS - Production Planning and Control System
FEM - Finite Elements Method
CAE - Computer Aided Engineering
CAD - Computer Aided Design
MES - Manufacturing Execution System
PDE - Production Data Acquisition
VIBN - Virtual Plant Commissioning
CAM - Computer Aided Manufacturing
NC - Numerical Control
CAPP - Computer Aided Production Planning
CAP - Computer Aided Production
CAQ - Computer Aided Quality
FDM - Factory Data Management
CAO - Computer Aided Organisation
PLM - Product Lifecycle Management
FLM - Factory Lifecycle Management

Figure 2 Digital tools supporting the factory life cycle management (Westkämper, 2008)
A factor that often further increases the complexity of the selection process is legacy systems. Most factories already use methods and digital tools for at least some of their planning tasks. These are called legacy systems and have to be considered in the selection. They
increase the complexity as there are several
possibilities on how they can influence a new
selection. They can either be completely replaced,
be improved to be able to handle new challenges
(e.g. using new software modules or integrating new
interfaces) or they can be integrated with new,
complementary systems. They define the As-is
situation, influencing the system selection through
e.g. the existing data and the existing planning
workflow. Considering interfaces to legacy systems
can be crucial in order to ensure a smooth
introduction, acceptance amongst the users and the
long term usability.

The potential benefits of using methods of
industrial engineering and digital tools in the
context of a digital factory are not in question and
acknowledged by most factories (Bierschenk et al, 2004). It is generally accepted that there are relevant
potentials to be realised, but especially SMEs often
stop their approach to these technologies when
encountering the differences in trying to evaluate
the potential benefits and select the ideal match for
their specific situation. The methods and digital
tools used for the planning of the factories have a
huge impact on the value adding operation of a
factory and their initial as well as their maintenance
costs can be substantial. This means that a trial-and-error approach is not an option.
How a digital factory synchronises the three goals of the factory planning activities - decrease of time and money and increase of quality - is at first difficult to see. Considering the initial costs, showing a decrease of planning costs is most of the time not possible. Taking into account the
whole product costs, including the allocated
production and thus the production planning costs,
however may change this picture. Avoiding
unnecessary work and increasing the quality of the
planning may be able to lessen the overall expenses
even if the initial planning costs using the methods
and digital tools of a digital factory may be higher
than when being done manually. The general quality of planning is the harmonisation and optimisation of the process of production planning by considering e.g. the information management, the data and the seamless integration of product development and production planning (VDI 4499, 2008). These two points - the difficult to assess selection of a suitable system and the sometimes complex cost situation - induce the need for a method to support the analysis, selection and introduction of suitable methods and digital tools for the planning tasks specifically occurring in a factory.
This method for the selection of the suitable
methods and digital tools and the evaluation of their
economics in a digital factory will consider both,
the integration and operation. It needs to be generic
in order to be able to cover the very divergent
planning phases. To be able to be used for SMEs as
well as big international factories, it needs to be
scalable. In the next paragraphs, the most important terms used in this paper are clarified and delimited.
Economic efficiency in the general field of business
administration is usually defined as the value of the
output divided by the value of the input or as
earnings divided by expense (Wöhe and Döring, 2010b). This definition focuses on the financial
aspects. In the case of IT investments, only taking
the financial aspects into account is most of the time
not sufficient, as some of the benefits that the IT
creates are very difficult to quantify and therefore to
measure. This is especially the case, if not only
singular digital tools are considered, but a plethora
of tools in the context of a digital factory (Bracht et
al, 2011). Therefore, in this paper, we use the more
open definition of economics as costs, divided by
benefits (Vajna et al, 2009), as shown in equation-1.

economics = cost / benefit (1)

The benefit as denominator implies a quantification
of the benefits, which can be very difficult
especially when trying to cover complex coherences
like they exist in the field of methods and digital
tools in the context of a digital factory. The term "digital tool" is a varying one. The term is limited
here to the methods and tools specific to the factory
planning. A digital factory usually covers the
factory operation as well, but in this case it is not
explicitly taken into account, as the field of digital
tools there is vast and cannot be considered in
sufficient depth. The DF method approached in this paper needs to be generic and is therefore applicable to the factory operation as well, if the
data basis is generated. Digital tools in this
approach will not be reflected on the scale of
specific software (e.g. Siemens NX), but on the
system scale. In our approach, the considered
system scale is the classification, e.g. CAD, CAQ,
CAM.
The Fraunhofer Institute for Manufacturing
Engineering and Automation (IPA) has the Grid
Engineering for Manufacturing Laboratory 2.0
(GEMLab 2.0) available, which is the
implementation of the grid-flow-based approach to
a holistic and continuously integrated factory
engineering and design. It uses a combination of
commercial and self developed tools, integrated into
the Grid Engineering Architecture through a
standardised service. For its validation, an example
product was designed, on which a continuous
factory planning and optimisation scenario is based.
This scenario includes all required factory planning
steps (Constantinescu et al, 2009). In this paper, this
basis is used to construct an example scenario in which the digital tool used for the process planning step in the existing GEMLab is to be updated or replaced. This is used as a hypothetical case to
clarify the steps of the DF method presented in
chapter four.
3. STATE OF THE ART
The state of the art for this approach is structured into three parts. The first part is the confinement of the methods considered in this state of the art. The second part shows the state of the art in evaluation methods for economics. The third part is dedicated to the method development systematics and modelling languages needed for the DF approach.
There are two key aspects that may be seen as
basic properties of every evaluation method:

The time frame: It can either be ex ante or ex post, depending on whether the evaluation is done before or after the investment has taken place (Wöhe and Döring, 2010a).
The subject: The evaluation can be either
partial/singular or a portfolio/selection of
investment objects (Adam, 1999).

In the case of the method to be approached in this
paper, a portfolio consisting of several methods is
the object of evaluation.
The methods for the evaluation of economics in
general can be divided into two groups (Schabacker,
2001):

Comparative calculations, which assess whether actions without investments, e.g. changes in a production process, are expedient.
Investment calculations, which try to evaluate whether investments in a certain matter are expedient.

In this case, investment calculations are relevant,
as the DF method is about investments in methods
and digital tools in the context of a digital factory.
The method needs to be specified in respect to the
evaluated time frame and the object of evaluation.
Investment calculation can be divided itself into
several types of methods. Three approved and
established types are:

Static investment calculations: Static
investment calculations are based on average
costs and revenue over one defined period of
time. All input is gathered for one call date.
The differing value of money depending on the
time scale is therefore not considered. They are
the most common type of investment
calculations, as the effort for the data
acquisition is usually lower than with the other
investment calculation types (Perridon and
Steiner, 1995). Typical static methods are e.g.
the payback period rule, the comparative cost
method and the ROI reporting (VDI 2216,
1994).
Dynamic investment calculations: Dynamic investment calculations take into account the changing value of money over time. They cover several periods of time, considering the costs and revenues for each period and putting them into relation to the according cash value. They therefore take into account the timing of the cash flow and its differing value over time (Perridon and Steiner, 1995). The effort for dynamic investment calculations is significantly higher than for the static investment calculations, due to the information needed for every accounting period. Typical dynamic methods are e.g. the net present value method, the annuity method and the method of actuarial return (VDI 2216, 1994); a sketch contrasting a static and a dynamic calculation follows this list.
Cost-benefit-analysis: This analysis is a
comparison of objects or different alternatives
for a decision, based on purely financial
aspects. It takes into account the comparison of
the discounted investments in the future. The
evaluation scales of the costs and benefits as
well as the extent of the considered factors
cannot be objectively defined (Venhoff and
Gräber-Seissinger, 2004; Schabacker, 2001).
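As announced above, the following minimal Python sketch contrasts a static calculation (payback period) with a dynamic one (net present value); all figures are invented for illustration.

def payback_period(investment, annual_return):
    # Static: years until average returns repay the outlay; the time
    # value of money is ignored.
    return investment / annual_return

def net_present_value(investment, cash_flows, rate):
    # Dynamic: discount each period's net cash flow back to the call
    # date and subtract the initial investment.
    return sum(cf / (1 + rate) ** t
               for t, cf in enumerate(cash_flows, start=1)) - investment

# A digital-tool investment of 100 returning 30 per year for 5 years
# (units, e.g. kEUR, are arbitrary):
print(payback_period(100.0, 30.0))                  # ~3.3 years
print(net_present_value(100.0, [30.0] * 5, 0.08))   # ~19.8 at 8% discount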

These three types of investment calculations are
entirely focused on financial aspects. Therefore,
they are at best only partly able to support decision making if there are substantial effects of the investment which are not financial. Newer business management approaches have led to the development of more specialised methods which take into account not only financial aspects but mix them with other, quantifiable aspects. A very
commonly used method here is the value analysis or
scoring model. It creates a comparison of
alternatives with quantified non-financial factors. It
enables the user to compare complex alternatives
taking into account his predefined weighting of the
considered factors. There are different methods to
approach a replicable weighting, which can be
applied here (e.g. paired comparison). Even if the
procedure becomes intricate with an increasing
number of factors, it is a very common method. The
result, the sum of the weighted values for every
factor, is the ordinal order of precedence
(Zangemeister, 1976a). This implies that the
result is not suitable to be put into relation with e.g.
the total costs or the expected proceeds, which often
precludes its use for an investment decision.
Additionally, it is possible to integrate a statistical
confidence into the values of its factors. This allows
an inclusion of risks (Zangemeister, 1976b), but
highly increases the complexity of the method.
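A minimal sketch of such a scoring model follows; the factors, weights and scores are invented for illustration, and the returned ranking is, as noted, only ordinal.

def scoring_model(alternatives, weights):
    # Value analysis: sum of weighted factor scores per alternative.
    # alternatives: {name: {factor: score}}; weights: {factor: weight}.
    totals = {name: sum(weights[f] * score for f, score in scores.items())
              for name, scores in alternatives.items()}
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Weights as they might result from e.g. a paired comparison.
weights = {"ease of use": 0.2, "data integration": 0.5, "planning speed": 0.3}
alternatives = {
    "Tool A": {"ease of use": 7, "data integration": 5, "planning speed": 6},
    "Tool B": {"ease of use": 4, "data integration": 8, "planning speed": 7},
}
print(scoring_model(alternatives, weights))   # Tool B ranks first here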
There are additional approaches to the investment
evaluations that include non financial factors. One
well known approach is the balanced scorecard. Its
consideration of non financial factors is restricted to
a causal connection to financial goals (Kaplan and
Norton, 1996; Wöhe and Döring, 2010). For several
years, the investments necessary in the IT have been
increasing (Renkema and Berghout, 1997). Several

authors have evaluated the existing methods shown above
and mostly concluded they are not sufficient
(Renkema and Berghout, 1997; Hirschmeier, 2005).
This is even more the case, if the evaluated
investment is for a specialised field. To tackle these
evaluations, specialised evaluation methods have
been approached. Relevant for this paper are
methods in the field of evaluating IT investments
for production planning methods and digital tools.
There are methods aiming to address very specific evaluation challenges. One example here is the guideline VDI 2216, aiming at "Introducing strategies and economical benefits of CAD systems". It is used to estimate the economics ex ante and to address the benefits from the employment of CAD systems. As the title suggests, the term benefit is limited here to easily
quantifiable, financial benefits. A verification of the
estimated economics is included. The method
covers a small, very specific and limited part of
challenges that are similar to the ones to be
addressed here (limited to CAD and the product
construction). Therefore typical effects that occur in
a digital factory are not covered, e.g. the synergies
of different methods and tools using a common data
pool (VDI 2216, 1994). There are similar
guidelines, covering a specific type of software.
One example is the guideline VDI 2219 for
EDM/PDM systems (VDI 2219, 2002). Based on
these guidelines, an approach to evaluate the
introduction and use of digital tools in the product
planning is the benefit asset pricing model (BAPM).
This method considers the implementation and operation of new technologies and especially digital CAx tools. The method is specialised on tools for the product planning and therefore only partly covers the production planning. The synergy effects that are important in the production environment are only touched upon, not covered in detail. These
effects are significant in the field of the production
planning in the context of a digital factory.
Additionally, BAPM is used not to evaluate a group
of methods and digital tools but for a single
selection (e.g. a specific software tool) and does not
cover the dependencies and interfaces. To evaluate
methods and digital tools in the context of a digital
factory, it is therefore not sufficient. There are other
approaches addressing the measurement of factors
relevant in the economics that sometimes are not
easily quantifiable. An example would be the
measurement of the flexibility, which influences the
economics mainly in the operation of the planned
factory (Alexopoulos et al, 2011). An approach
directly in the field of evaluating methods and
digital tools in a digital factory is the method
DigiPlant Check. In this method, the expected
benefits of introducing a digital tool are estimated in
a workshop (Schraft and Kuhlmann, 2006). The
selection of the workshop attendants has a great
effect on the results of the estimation, making the
result subjective and not repeatable. There are
currently no suitable methods to evaluate the
benefits as well as the costs of methods of industrial
engineering and digital tools in the context of a
digital factory.
For the approach presented in chapter four, the
necessary state of the art in relevant modelling
notations is reviewed to support a selection. The
presented modelling notations are considered and
chosen taking some key properties into account:

Simplicity: The analysis of the current situation
of the evaluated factory is not directly
generating value and therefore needs to be as
simple as possible. Needing to learn complex
new languages or systematics in order to
analyse the existing situation would hinder or
even prevent the success of a method for
economic evaluation.
Scalability: The methods as well as the
language need to be able to easily scale to
analysis challenges in different sizes. This
scalability is twofold. On the one hand, while
sometimes a complete representation of
existing planning processes of a factory is
needed, other factories will only want to
analyse parts of their existing infrastructure.
On the other hand, the modelling needs to be
scalable taking into account the depth of
consideration. It needs to be able to consider
every last detail of a planning process as well
as the overall processes and e.g. their
ranking.
Visualisation: The created model needs to be
easily readable. The more complex the existing
planning processes are, the more important is
the verification of the model to ensure its
realism and actuality. To be able to easily
include the affected planning staff, the
visualisation needs to be human readable.
Interchangeability: The model created during
the situation analysis in the beginning needs to
be transferrable into the other steps, even if
different languages are employed there. A
common interface for exporting and importing
already created models, is key to keeping the
necessary effort down.

The modelling notations that are considered for a
selection are:

Unified Modelling Language (UML),
Event-driven process chain (EPC), and the
Business Process Model and Notation
(BPMN).

The UML is an object oriented, graphic based
business modelling language. The notation is
available online in an extensive documentation. It
has no focus on any specific area and a modularised
approach is allowed in the specifications. This
modularisation may lead to interchange problems, if
two instantiations use a different set of notations. In
order to circumvent such problems, a set of
compliance levels has been created (OMG, 2011a).
To model a factory planning process, activity
diagrams can be used. As a graphical notation
without any specific focus, the vastness of
possibilities is shown in the wide range of available
symbols. This makes the notation difficult to learn
completely and possibly difficult to read. Activity
diagrams do not offer a detailed description of a
business organisation unit together with materials
and information that is used in each of its functions.
The EPC originates from configuring ERP
systems and from improving the existing business
processes. It is not a complete notation language,
but an ordered graph of events and functions. There
are free tools available to create EPC graphs, which
as such are very intuitive to read. In contrast to a UML activity diagram, there are hardly any restrictions
to the connections that can be displayed together in
one graph. This implies the difficulties that arise
when trying to display complex planning processes:
Due to the complex and numerous interconnections,
the graphs become hard to handle. On the other
hand, it is a fast and intuitive procedure to draw an
EPC.
BPMN is a commonly used notation if complex
business processes are modelled. It is based on a
flowcharting technique, where business processes
are easily modelled along work flows. It is able to
handle very complex planning processes and due to
very clear rules for the visualisation, it is human
readable. It is able to scale according to the targeted
information depth of the model (OMG, 2011b). The
notation is available on the internet and there are
numerous tools around to simplify the modelling
process. It is possible to export BPMN diagrams
into XML to provide a common interface.
4. APPROACH
The DF approach is comprised of five steps. These
steps, their dependencies and their sequence are
depicted in Figure-3. They are explained in detail in
this chapter and then clarified using the example
scenario described at the end of chapter two.
The first step of the DF method is the "Target definition". First of all the desired type of the project - e.g. a new planning of methods and digital tools or the replacement or optimisation of the existing landscape - is defined. There are several key
aspects of the project that are specified, like the
timeframe, the financial borders and planning
phases that shall be covered. In order to measure the
outcome of the implementation at the end, a range
of KPIs and their expected changes are defined
here. Additionally, expected case specific benefits
can be defined here. Valid targets also include
strategic alignments (e.g. integrate product and
production data management). Specific key aspects
can be fixed here as well, if there are aspects that
cannot be changed (e.g. due to restrictions by clients
or law). The result of this phase is a definition of all
relevant requirement specifications and of the
relevant KPIs. In the example scenario of this paper,
the goals for the replacement of the process
planning digital tool are defined, e.g. maximum
introduction costs and hardware restrictions.
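A step-1 target definition could be captured in a simple data structure; the following sketch is purely illustrative and is not part of the published DF method.

from dataclasses import dataclass

@dataclass
class KpiTarget:
    # A KPI, its measured baseline and the change expected from the
    # new portfolio of digital tools; step 5 validates against this.
    name: str
    baseline: float
    target: float
    unit: str

    def achieved(self, measured: float) -> bool:
        improving_down = self.target < self.baseline
        return measured <= self.target if improving_down else measured >= self.target

# Invented example targets for the process planning replacement.
targets = [
    KpiTarget("process planning lead time", baseline=20.0, target=14.0, unit="days"),
    KpiTarget("planning error rate", baseline=5.0, target=2.0, unit="%"),
]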

[Figure 3: steps of the DF approach - 1 Target definition, 2 Situation analysis, 3 Economics evaluation, 4 Implementation plans / recommendations, 5 Validation and improvement - leading to the ideal portfolio of methods and digital tools.]

Figure 3 Steps of the DF approach
The second step is the "Situation analysis". Here,
the existing factory planning processes are analysed
in detail. This begins with the identification of the
affected departments. The existing planning
processes per department are then analysed. The
planning processes are modelled as depicted in
Figure-4 using existing reference models for the
factory planning processes as a basis
(Constantinescu and Westkämper, 2010). Their
interconnections are then analysed in detail. They
are then displayed to enable a verification of the
created instantiation of the planning process model
by the according departments, ensuring the accuracy
and realism of the created model. The model is
supplemented by a model of the data and
information, considering four aspects: Who
provides it, who uses it, who has rights on changing
it and where the master data is stored. This model is
created as an instantiation of the factory data

reference model. Afterwards, the model of the
planning processes is detailed, including the methods used for each planning process.
[Figure 4: step 2 builds a process model (existing planning processes and their interconnections, instantiated from the FhG-IPA factory planning reference model) and a data model (who provides the data, who uses it, who has rights on changing it, and where the master data is stored, instantiated from the FhG-IPA factory data reference model).]

Figure 4 Models in step 2 Situation analysis
The possible modelling languages are analysed in chapter three; one or a combination of several can be used here. Additionally the used digital tools
are listed, including their version, the license and
the hardware they are running on. This list of digital
tools is then classified according to the groups of
digital tools, shown in chapter two. A connection
between the model of the information and the list of
digital tools complete the model of the factory
planning. The result is a model or map of the "As-is" situation. The detail of the model or map is
adjusted to fit the goals, set in step one of the DF
method. If the goal only affects certain departments
or planning steps, the model is considered complete
if these are covered. It may be necessary to map
planning processes even if they are not directly
related to the field or department that the set goals
are centred on. This is the case, if the used
information in the specific non relevant phase is
relevant to the considered planning processes or
methods and digital tools used there cover one of
the targeted aspects of the planning process. All
steps of the DF method scale accordingly. The
model or map provides a basis for the identification
of the planning processes suitable for an evaluation.
This is necessary, as a complete evaluation could be
unnecessarily extensive. The model or map is
analysed to select points of action in accordance with the base library the evaluation method uses. This
ensures a timely execution and realistic
implementation propositions. The result of step two
is a map of the As-is status of the planning
processes, as well as a list of possible points of
action. In relation to the example scenario, the
existing process planning and adjacent steps are
modelled, including employed interfaces, data and
information. They are then analysed and a pre-
selection is made. In order to continue the thought
experiment of the example scenario, we assume
here the identified action point is the process
planning.
The third step, "Economics evaluation", is the core
component of the DF method. Based on the
suitable planning processes identified in step two,
scenarios are created, each differing in the
employed portfolio of methods and digital tools. An
additional scenario is created, mirroring the current
As-is situation of the factory to be analysed. This
is the reference point for the comparison of the
economics. For each scenario, the implicated costs
are calculated. These costs can be split into the
initial costs, e.g. licensing fees, training costs,
hardware acquisition costs, and the running costs,
e.g. service costs, hardware running costs. Their
quantification is calculated based on the scenarios,
taking into account the existing infrastructure and
its costs models. The benefit is then determined,
based on the case specific list of factory specific
relevant benefits, defined in step one. To determine
the benefit, a specialised method is derived based on
the specific requirements of the digital factory
environment. These benefits include soft factors, e.g. ease of usage, as well as hard factors, e.g. specific decreases in planning time. To achieve an overall quantification for a comparison that mainly considers relevant factors, the expected
changes of target KPIs are calculated based on the
data basis of the DF method. Using the expected
costs and benefits, a valuation factor is calculated.
The result is a quantified value for every scenario
including the "As-is" situation. This factor is the
basis of the selection of the first version of the ideal
implementation goal. The third step of DF is
depicted in Figure-5.
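The paper does not fix the exact form of the valuation factor; assuming the cost/benefit ratio of equation (1), a minimal ranking sketch could look as follows (all figures invented).

def valuation(initial_costs, running_costs, years, benefit_score):
    # Valuation factor per equation (1): economics = cost / benefit.
    # Costs over the evaluation horizon divided by the quantified
    # benefit (e.g. weighted expected KPI changes); lower is better.
    return (initial_costs + years * running_costs) / benefit_score

scenarios = {
    "As-is": valuation(0.0, 40.0, 5, 10.0),
    "Scenario 1": valuation(80.0, 25.0, 5, 18.0),
    "Scenario 2": valuation(120.0, 20.0, 5, 20.0),
}
for name, v in sorted(scenarios.items(), key=lambda kv: kv[1]):
    print(f"{name}: {v:.1f}")   # the first line is the first selection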

[Figure 5: the created scenarios - the "As-is" situation and scenarios 1 to N - are evaluated and ranked; the top-ranked scenario forms the first selection.]

Figure 5 Scenarios in step 3 Economics evaluation
Applying this step to the example scenario, a list of substitute digital tools for the process planning step is generated; each one is evaluated and the one with the best evaluation value is selected.
Using the specific implementation goal from step
three, the implementation itself is planned in step
four. This step shows the typical challenges that
arise when implementing the specific choice of
methods and digital tools. These challenges are then evaluated as to whether they might apply in the specific evaluated case and are valued by their possible

impact on the implementation process. Taking into
account these foreseeable challenges, a step by step
implementation plan is created. Using the process
planning model, created in step two of the DF
method, this implementation plan is adapted to
reduce the disturbing impact on the production and
its planning during the implementation and
migration to the new scenario. In case the expected
challenges are very significant or impossible to
overcome, an iterative loop back to step three is
created, to include this information into the
evaluation and selection of the ideal scenario. This
optional iteration is depicted in Figure-3 as a dashed
arrow. At the end of step four, the user has created a
step by step implementation plan including a list of
possible and inevitable challenges as well as plans
on how to overcome these. In the example scenario,
step four generates an implementation plan, taking
into account foreseeable challenges that might arise
with the specific new selection of a digital tool for
the planning process. For the example scenario,
such a challenge could be a non functional
information transfer into one of the adjacent
planning phases.
Step five of the DF method is the validation of
the implemented scenario. The experience gained
here is fed back into

- step two, in case of too extensive or incomplete
input from the situation analysis,
- step three, in case the defined KPIs are not
fulfilled as predicted in the evaluation,
- step four, in case of new or different challenges
arising during the implementation.

This feedback is depicted in Figure-3 as arrows
on the left side, going into the corresponding steps.
It is generated by applying the DF method in
industry cases and by the later verification through
comparison with the situation as it was beforehand.
Another source of experience is the existing
experience of partners. The feedback is directly
included in the basic libraries of the DF method and
as such improves the results of the following
iterative evaluations. If the implemented scenario
cannot be measured by the KPIs defined in step one
of the method, or if it is not possible to achieve or
measure the set goals, an optional feedback into
step one enables the user to adapt or even expand
the set of KPIs and goals. As this usually changes
the shape of the project, this step should seldom be
necessary and is therefore depicted in Figure-3 as a
dotted arrow. Using the example scenario, step five
is the validation of the integrated new digital tool.
The new implementation is used with a former
planning order, and the results are compared against
the goals set in step one.
5. ROADMAP
Due to the complexity of the challenges
addressed, the upcoming activities are of special
interest. The next steps will be the design of steps
one to four of the DF approach.
For the first step, a list of KPIs needs to be
developed and checked together with industry
partners in order to be complete, descriptive and
realistic. A ruleset will be created to guide the
creation of realistic targets for the method, to ensure
the applicability of the evaluation results.
For the second step, a method and a tool to
describe the planning processes will be selected. A
modelling notation, or a combination of several,
will be used to supply the basis for the selection of
points of action. A method for this selection will be
created, based on the experience gained from the
usage of numerous tools and industry cases.
The economic evaluation method in the third step
needs to be carefully designed, taking special care
to incorporate the experience gained from using the
method as a whole.
The implementation plans and recommendations
to be developed in step four are in a field where
research is extensive. The existing work in the field
will be classified and checked for its applicability to
factory planning, especially considering the
complex situation of a Digital Factory. Most likely,
the existing research will need to be extended
considerably. It is crucial for the realism of the
given plans and recommendations to base this
research on actual industry cases.
The libraries, methods and tools created for steps
one through four will then be combined and tested
intensively for their applicability to real-world
scenarios. This will include ex-ante evaluations as
well as the verification of already existing work.

CORBA BASED ARCHITECTURE FOR FEATURE BASED DESIGN WITH
ISO STANDARD 10303 PART 224
Daniel Kretz
University of Applied Sciences, Zwickau, Germany
daniel.kretz@fh-zwickau.de

Jörg Militzer
University of Applied Sciences, Zwickau, Germany
joerg.militzer@fh-zwickau.de

Tim Neumann
University of Applied Sciences, Zwickau, Germany
tim.neumann@fh-zwickau.de

Christiane Soika
University of Applied Sciences, Zwickau, Germany
christiane.soika@fh-zwickau.de

Tobias Teich
University of Applied Sciences, Zwickau, Germany
tobias.teich@fh-zwickau.de
ABSTRACT
Product design is a fundamental stage of the product lifecycle for the development of new products.
The success of a product, efficiency of production planning and manufacturing as well as accruing
costs are directly influenced by the decisions of the design engineers and the method of creating
product models regarding their further use. Today, especially small and medium-sized enterprises
are confronted with a gashed application landscape of incompatible computer systems e.g. for the
design, process planning and manufacturing which absolutely hampers an efficient product
development. To elaborate a solution for an efficient and integrated product development, we
require completely computer-interpretable product design models that provide the entire required
information in a standardized form. It is essential that these models support a very flexible data
exchange between different application systems as well as the reuse of existing product data and
finally a direct collaboration between design and manufacturing engineers together e.g. with expert
and assistance systems. This paper provides an insight of our solution to solve these issues by
utilizing feature based design with ISO standard 10303 application protocol 224 and the
development of a CORBA-based integrated architecture for process and production planning.
KEYWORDS
Computer Aided Design, Product Development, Feature Technology, ISO Standard 10303, Product
Model Data Exchange, CORBA

1. INTRODUCTION
Globalisation and the rapid development of
information and communication technology have
increased the competitive pressure between
enterprises enormously. Especially for
manufacturing industries, it is very important to
produce parts cost-effectively, meet quality criteria
and reduce error rates wherever possible. In fact,
product design is one of the most important stages
of product development because it already includes
the planning and definition of a product as well as
the elaboration of details (Bertsche, 2008). Today,
physical parts are commonly modelled and
visualized in either two or three dimensions with
computer aided design (CAD) systems. There are
many advantages of geometric modelling with CAD
systems, e.g. models can be manipulated or
analysed from different points of view, or complex
simulations like the construction of assemblies can
be performed. Furthermore, the design data can be
stored long-term and retrieved when necessary or
directly used for downstream processing (Regalla,
2010). Downstream processing of a design model
primarily addresses the succeeding product lifecycle
stages which mainly are process planning and
manufacturing. Therefore, design data is required as
input for process planning (Zhang and Alting, 1994)
and subsequently process planning establishes the
interface between the design process and
manufacturing of a product (Scallan, 2003).
Given the fact that most of the later production
costs are already determined during the design phase
(Cubberly and Bakerjian, 1989), a direct integration
of downstream application systems is a key to
success for small and medium-sized enterprises.
Especially the design model is fundamental for
process planning. Hence it is necessary to provide
not only geometric aspects but also further details
like physical and technological requirements,
measurements and limitations as well as the
assignment of design intentions to relevant
geometric aspects.
Considering the previously discussed issues, an
efficient and cost-effective product development
requires reusable and downstream-applicable design
models that support a wide spectrum for recording
design details. Furthermore, the underlying data
models of the product design have to be computer-
interpretable and exchangeable to avoid an error-
prone manual adoption of any existing information.
A direct integration, and hence an efficient product
development, is still prevented by heterogeneous
application systems and incompatible data models
or file formats (Xu and Nee, 2009).
Motivated by these facts, we have developed a
seamlessly integrated solution with the purpose of
generating process plans directly from a given CAD
drawing. We utilize the ISO standard 10303 for the
underlying data models, in order to be open to other
systems based on the standard for the exchange of
product model data (STEP), and apply the feature
based design approach to provide completely
computer-interpretable design models.
In this paper, we present a short overview of
feature based design with machining features and
the realisation of our integrated software
architecture for automated process planning.
Afterwards, we focus on our CAD module for
feature based design and the CORBA-based
communication layer for application integration.
2. PRODUCT DEVELOPMENT
2.1 PRODUCT DESIGN
As indicated in the introduction, geometric models
are essential for creating and visualizing the product
design. Although they are indispensable for the
development, their underlying data models involve
different weaknesses that prevent a direct
downstream processing of the provided product
information. Shah and Mantyla (1995) identified a
lack of design intention, the single-level structure,
tedious construction and microscopic data as the
main deficiencies of geometric modelling.
Especially the microscopic data prevents an
automated interpretation of the physical form of a
part and also hampers an association of process
planning relevant attributes like measurements,
tolerances, material properties or surface finish.
2.2 FEATURE-BASED MODELLING
Features are a solution for a context-dependent
semantic and meaningful product definition. They
describe geometric, physical, functional and
technological properties and aspects of a product.
Additionally, they can also represent application
specific information like time, costs or physical
properties (Ehrlenspiel et al, 2007).
Most of the CAD systems and also downstream
applications like computer-aided process planning,
computer-aided manufacturing or computer-aided
engineering systems (summarized as CAx systems)
are already feature-based because features define a
macroscopic and appropriate data structure that
associates engineering semantics with geometric
entities (Li et al, 2007). The data structure is
comparable with the object-oriented paradigm
where a class name implies the semantic meaning of
the feature and significant attributes are member
variables. Furthermore, feature instances can be
associated or aggregated with each other, or
composed into more complex feature types.
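As an aside, this object-oriented analogy can be made concrete with a few lines of Python; the class and attribute names below are illustrative assumptions and do not correspond to a normative feature library.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Feature:
    # Base type: any engineering-meaningful aspect of the product.
    label: str

@dataclass
class RoundHole(Feature):
    diameter: float          # mm
    bottom_condition: str    # e.g. "through" or "blind"

@dataclass
class Chamfer(Feature):
    first_offset: float      # mm
    second_offset: float     # degrees

@dataclass
class CompoundFeature(Feature):
    # Feature instances can be aggregated into more complex feature types.
    children: List[Feature] = field(default_factory=list)

counterbore = CompoundFeature("counterbore", [
    RoundHole("pilot", diameter=5.5, bottom_condition="through"),
    Chamfer("edge break", first_offset=0.5, second_offset=45.0),
])
print(counterbore)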
In the context of engineering, feature models are data
structures representing parts or assemblies primarily
by features. Shah and Mantyla (1995) classified the
main feature types as form, tolerance, assembly,
functional and material features. Especially form
and assembly features are important for the
definition of the geometric representation of a
product and they are strongly coherent to the
geometric data model.
There are different approaches for the
instantiation of features within a feature model like
design by features, feature mapping and feature
recognition (Shah and Mantyla, 1995). The first
approach, feature based design, addresses the direct
instantiation and parameterization of features by the
design engineer to create the product model. Feature
mapping is a transformation of existing feature
instances into corresponding features of another
context e.g. from design to machining or inspection
features. Finally, feature recognition summarizes a
set of techniques and approaches to retrieve form
features automatically from a geometric model.
Neither feature mapping nor feature recognition can
ensure the creation of complete and valid feature
models. Feature mapping requires transformation
rules but possibly not every detail can be retrieved
because different contexts commonly require
distinct information from different points of view
and hence not all parameters are provided by the
source model. Automatic feature recognition is one
of the most difficult tasks because of a complete
lack of high level semantics and interpretable data
structures in geometric models. Furthermore, there
can be interpretation ambiguities or an intersection
of features could even change the resulting topology
dramatically (Farin et al, 2002). In contrast,
interactive feature recognition requires user
interaction and is quite similar to the design by
features approach. Especially for further processing,
it is essential to provide valid models that
completely contain necessary input information.
Given these facts, we have favoured a feature
based design approach for our solution. In fact,
form features, or geometric features defined by
analytic surfaces like planes or cylinders, have
gained popularity in mechanical engineering and
they are now provided by most CAD solutions
(Tichkiewitch et al, 2007). During our research, we
have analysed several market-relevant CAD
systems like Autodesk Inventor, CATIA and
ProEngineer, and have found that the provided
design features are either not standardized or are
exported incompletely. Consequently, the provided
information is not sufficient for downstream
processing.
2.3 FEATURE BASED DESIGN WITH
MACHINING FEATURES
Especially in the context of mechanical engineering,
machining parts are modelled with machining
features. They are special form features whose
semantics imply corresponding manufacturing
operations for a mechanical engineer. To give an
example, a hole feature implies a drilling operation
and an outer diameter feature a turning operation.
There are two basic approaches to design by
machining features. The first one is destruction by
machining features and the second is called synthesis
by design features (Shah and Mantyla, 1995).
Destruction by machining features describes
subtractive operations that remove volumes, starting
from a raw stock, until the final part's shape is
created. This design approach is similar to the
application of machining operations like turning,
drilling or milling. In contrast, synthesis by design
features supports fuse as well as subtractive
operations.
Considering the similarity of the destruction by
machining features approach to the real production of
a machining part, we have decided to create the
design of machining parts directly by using only
machining features for the feature based design
model. The advantages are quite obvious. We have a
design model that is completely described with
features whose semantics imply suitable
manufacturing operations. Furthermore, we can
ensure that the entire information relevant for
process planning can be deposited in the design
model. Additionally, the design engineer can
simultaneously collaborate with the mechanical
engineer via their planning system. Because of the
integration of expert systems, the designer does not
need to take care of the real manufacturing process.
This is supported either by the automated process
planning system or by the process and production
planner.
Figure 1 - Simultaneous engineering with feature models
Figure-1 illustrates these explanations. In fact, the
destructive approach forces a change of thinking for
the designer because he does not create the final part
in the traditional way. But this enriches the design
model with information about the raw stock volume
and the topology of volumes that shall be removed,
which is absolutely essential for automated process
planning and further processing.
In the context of integration, Wang and Shen
(2007) noted that for the last three decades there has
been only a loose interaction between designers and
manufacturers, which leads to manufacturing
problems and requires iterative modifications,
resulting in longer manufacturing times.
Additionally, Li et al (2007) noted that information
sharing between the applications has not been
handled very well so far, which can be seen as one
of the main problems for collaborative engineering.
As shown in Figure-1, interpretation of feature
models between different application systems is an
important aspect. Context-dependent features for
creating the feature model are commonly provided
within a library. Consequently, we need a feature
library which solves the previously discussed issues
by providing a wide spectrum of machining features
and further features that represent additional
information for process planning like material
properties, surface finish, measurements and
tolerances.
2.4 ISO 10303 PART 224
'Industrial automation systems and integration -
Product data representation and exchange' is the
official title of the international standard ISO
10303. It is commonly known as STEP, the
'Standard for the exchange of product model data'
(Pratt, 2002), and is associated with several
preceding CAD exchange formats like IGES (Initial
Graphics Exchange Specification), SET (Standard
d'Echange et de Transfert) or PDDI (Product Data
Definition Interface). ISO 10303 consists of several
parts which define a description methodology,
integrated resources, application protocols,
conformance testing and implementation methods.
The great advantage of STEP is a continuous
product data description throughout the entire
product lifecycle, independent of any particular
computer system (Niemann et al, 2008). Although
STEP is closely related to data exchange via clear
text encoded STEP and XML files, it provides
valuable data models for a computer-interpretable,
logical representation of product data (Sage and
Rouse, 2009). In this context, application protocols
are the most important part for the practical
application of ISO 10303. They cover different
industries and branches, e.g. the automotive,
shipbuilding, avionics, electronic and
electrotechnical industries. Application protocols
(APs) define the scope, context and information
requirements of different application areas as well as
related functions and processes.
Each application protocol contains an application
activity model that defines the scope, processes and
information flows within an IDEF-0 activity
diagram. Furthermore, the application reference
model (ARM) defines data models and constraints
of the application context in formal information
models from a user's point of view. Finally, an AP
consists of the application interpreted model (AIM),
which is necessary for mapping the application-
specific information of the ARM into a neutral data
structure for standardized data exchange and storage
in databases or STEP and XML files. In general,
APs are based on common resources that contain
building blocks for the consistency of commonly
shared data structures, which can be reused,
specialized and extended as explained by Makris et
al (2008).
Considering our previously discussed demand for
a standardized feature library, especially for feature
based design with machining features, we have
identified AP 224 as applicable. This part of the ISO
10303 suite is titled 'Application protocol:
Mechanical product definition for process planning
using machining features' and its scope is the
specification of requirements for the representation
and exchange of information needed to define the
product data necessary for the manufacturing of a
single piece or assembly of mechanical parts (ISO
10303-224, 2006).
AP 224 contains 13 different units of
functionality with manufacturing features for design
as well as entities for extended product information,
administration and manufacturing data.
Manufacturing features of this protocol are
classified into three groups, machining features,
transition features and replicate features. Machining
features identify volumes of material that shall be
removed to obtain the final part geometry from the
initial stock (ISO 10303-224, 2006). Replicate
features are copies of machining features that can be
arranged in various patterns. Transition features like
chamfers, fillets or round edges define the transition
area between manufacturing features. Furthermore,
there are also compound features which are defined
as complex feature definitions combining several
relevant machining or transition features together
(Li et al, 2006). Figure-2 illustrates the AP 224
feature hierarchy and the classification of selected
manufacturing features in the form of a shortened
class diagram of the unified modelling language
(UML).

Figure 2 - UML class diagram representing selected ISO 10303 AP 224 manufacturing features
To create the design with the destruction by
machining features approach, we initially need a
definition of the base shape that identifies the raw
stock for the subtraction of machining feature
volumes. The base shape can be defined either
implicitly or explicitly. An explicit base shape
definition is necessary to define non-traditional base
shapes and requires a boundary representation of the
initial volume. In contrast, an implicit base shape is
defined by one of three basic forms, either a
cylindrical, block or n-gon form.
Table-1 illustrates the feature based design
process of a safety bolt with manufacturing features
of AP 224 and a selection of some significant
attributes, starting with a cylindrical base shape.
Table 1 - Feature based design of a safety bolt with ISO 10303 AP 224 manufacturing features

Cylindrical_base_shape: Base_shape_length 88 mm; Diameter 25 mm
Outer_diameter: Diameter 14 mm; Feature_length 20 mm; Reduced_size null
Outer_diameter: Diameter 16 mm; Feature_length 20 mm; Reduced_size null
Chamfer: First_offset 3 mm; Second_offset 45°
Round_hole: Bottom_condition through; Diameter 5.5 mm
Round_hole: Bottom_condition through; Diameter 9 mm
Open_slot with Square_U_profile (three instances)
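For illustration, the design of Table-1 can be written down as plain data: an implicit base shape plus the ordered machining features to be subtracted from it. The attribute names below follow the AP 224 vocabulary only loosely and are not a normative serialization.

# Safety bolt from Table-1 as a base shape plus ordered machining features;
# dimensions in mm, names are an informal rendering of the AP 224 attributes.
safety_bolt = {
    "base_shape": {"type": "Cylindrical_base_shape",
                   "base_shape_length": 88.0, "diameter": 25.0},
    "machining_features": [
        {"type": "Outer_diameter", "diameter": 14.0, "feature_length": 20.0},
        {"type": "Outer_diameter", "diameter": 16.0, "feature_length": 20.0},
        {"type": "Chamfer", "first_offset": 3.0, "second_offset_deg": 45.0},
        {"type": "Round_hole", "bottom_condition": "through", "diameter": 5.5},
        {"type": "Round_hole", "bottom_condition": "through", "diameter": 9.0},
        {"type": "Open_slot", "profile": "Square_U_profile"},
        {"type": "Open_slot", "profile": "Square_U_profile"},
        {"type": "Open_slot", "profile": "Square_U_profile"},
    ],
}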
3. PROCESS PLANNING SOLUTION
For the development of a flexible software solution,
we have favoured a modular structure which ensures
a decoupled execution of several module-specific
tasks and hence parallel processing. Furthermore, a
loose interconnection results in better
maintainability and exchangeability of modules or
services, as well as flexible optimization of existing
algorithms without changing the entire system.
Additionally, for automated process planning, we
had to ensure that every module can access existing
data beyond the feature model whenever necessary.
Furthermore, we require clearly defined interfaces to
avoid an isolated execution of the different process
planning steps and also to support a logical data
exchange.
3.1 SOFTWARE ARCHITECTURE
To fulfil these conditions, we have developed a
distributed five-tier architecture which reflects
several abstraction layers, as illustrated in Figure-3.
First of all, there is the presentation layer, which
provides the graphical user interface, primarily for
the designer and the mechanical engineer. To
coordinate the planning process and to avoid an
isolated execution of the different tasks, we define
the control layer below. This layer is characterized
by a strong coherence with the succeeding
application layer, because control events result from
application states or react to them, respectively. To
keep the application modules decoupled, we use
event-based, asynchronous communication between
the control layer and the application modules by
exchanging status messages. Consequently, we can
check the availability of services provided by the
modules and queue tasks directly.
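The following minimal Python sketch illustrates this event-based decoupling with a shared status queue; the module and message names are invented for the example and do not reflect the actual interfaces.

import queue
import threading
import time

# Shared bus for asynchronous status messages between the layers.
status_bus = queue.Queue()

def cad_module():
    # Stand-in application module: it announces its service, pretends to
    # work, and reports a result; the message names are assumptions.
    status_bus.put(("cad", "service_available"))
    time.sleep(0.1)
    status_bus.put(("cad", "model_ready"))

threading.Thread(target=cad_module, daemon=True).start()

# Control layer: react to application states without blocking the module.
for _ in range(2):
    module, state = status_bus.get(timeout=1.0)
    print(f"control layer saw: {module} -> {state}")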
Underneath, there is the application layer which
contains application specific modules and hence
implemented services and algorithms. Initially, there
is our CAD module, which is required to create and
instantiate the AP 224 feature based product model
as discussed in the previous section. The process
variant module maps the knowledge of a process
planning engineer and generates, guided by an ant
colony algorithm, various variants for
manufacturing a part which can then be selected for
production planning. Afterwards, the generated
process plan proposal is enriched with information
from the resource module with e.g. suitable
machines, operation times and costs (Winkler et al,
2010).




Figure 3 - Software architecture

Finally, the generated and enriched process plan
is evaluated based on the utilisation of required
resources, priorities, e.g. due dates or production
costs, and the current state of the manufacturing
environment. This is done by a multi-criterion
genetic optimization algorithm which performs a
virtual scheduling and calculates a weighted fitness
value for the evaluation and comparison of potential
solutions (Neumann et al, 2011).
The best determined variant is then submitted to
the process variant module and influences the
generation via the ant colony algorithm. This
process is repeated iteratively until a termination
event occurs. Further details about the combined
optimization with ant colony and genetic algorithms
are published by Soika et al (2011) and Militzer et
al (2010).
Below the application layer, we define the
communication layer. This layer is a connector for
the integration and collaboration of the particular
modules. It provides access to permanent data of the
persistence layer and supports the exchange of
logical program data during runtime that results
from calculations and analysis. Furthermore, it
supports remote calls of functions and algorithms
provided via services by several application
modules e.g. to invoke further processing functions,
register services or to submit state information
regarding current calculations and results.
Bottommost, there is the persistence layer which
contains data that is permanently required by our
solution like product models, decision logic and
information from an enterprise resource planning
system. Consequently, these data sets are saved on a
data storage device depending on the application
state and external events.
3.2 CAD MODULE
Developing an enhanced product model based on
ISO 10303 AP 224 focuses directly on the
implementation of our AP 224 CAD module. In
general, main functions are design, definition and
representation of parts and assemblies but also the
association and annotation of extended product
information like surface finish, restrictions,
functional requirements, tolerances or material
properties which are required for the succeeding
process planning.
Consequently, this component is responsible for
creating, manipulating and managing the complete
product model beyond pure design aspects.
Additionally, because of the previously described
software architecture and decoupled integration by
the communication layer, our CAD module can
access services as well as the decision rules to use
the expertise of mechanical engineers to provide a
feedback system and assist the designer during the
drafting process. Creating and manipulating a
product model while focusing on the logical
application layer addresses the instantiation of
arbitrary feature objects, their parameterization and
their association with each other.
Considering the instantiation and
parameterization of design features, the CAD
module is responsible for deriving a geometric
representation of the current product model.
Therefore, the implementation of the feature based
design approach 'destruction by machining features'
requires the retrieval of the described feature
volumes to subtract them from a base shape, or to
derive the resulting form of given shape aspects.
The process of deriving a geometric representation
from given form features is called 'geometry from
feature' (GFF) transformation (Shah and Mantyla,
1995). In terms of computer graphics, this approach
can be associated with destructive solid geometry
(DSG). DSG is a specialization of constructive solid
geometry (CSG), limited to subtraction operations
(Nasr and Kamrani, 2006). In general,
CSG is the creation of complex 3D solids by
performing Boolean operations on primitive solids
e.g. cylinders, cones, or blocks. Boolean CSG
operations which can be performed are addition,
subtraction and intersection. To provide an
attractive and expressive representation as well as
an efficient treatment, the CAD module requires a
solution for 3D visualization. This supports a
photorealistic rendering as well as a wireframe
representation but also 2D views for printing.
Consequently, we need a 3D geometry kernel which
provides complete CSG functionality. For the
visualization of form features from the product
model, we have implemented a modular viewer
concept which realizes the geometry from feature
function (Teich et al, 2010). Therefore, viewer
instances derive geometric parameters from the
given feature based product model and utilize
geometric objects and drawing functions to create
the geometric representation. This is realized with
the integrated 3D geometry kernel, e.g. by
constructing auxiliary volumes and performing the
required Boolean operations, as illustrated in
Figure-4 for our safety bolt from Table-1.


Figure 4 - DSG auxiliary volumes
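As a toy illustration of the DSG idea, solids can be modelled as point-membership predicates, so that "machining" a feature volume becomes a Boolean subtraction. A real kernel such as ACIS or Open CASCADE operates on boundary representations instead; the sketch below, with assumed dimensions loosely taken from Table-1, only mirrors the geometry-from-feature control flow.

from typing import Callable, Tuple

# A solid is represented as a point-membership predicate: point -> bool.
Solid = Callable[[Tuple[float, float, float]], bool]

def cylinder(radius: float, length: float) -> Solid:
    # Cylinder along z, base at z = 0 (the implicit cylindrical base shape).
    return lambda p: p[0] ** 2 + p[1] ** 2 <= radius ** 2 and 0 <= p[2] <= length

def subtract(a: Solid, b: Solid) -> Solid:
    # DSG: only the subtraction operation of CSG is needed.
    return lambda p: a(p) and not b(p)

# Geometry-from-feature: start from the raw stock and subtract each
# auxiliary feature volume in turn (here: one cross hole along x).
stock = cylinder(radius=12.5, length=88.0)
hole = lambda p: p[1] ** 2 + (p[2] - 44.0) ** 2 <= 2.75 ** 2
part = subtract(stock, hole)

print(part((0.0, 0.0, 10.0)))   # True: material remains here
print(part((0.0, 0.0, 44.0)))   # False: removed by the hole feature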
Our implementation concept is similar to the
proxy pattern as defined by Gamma et al. (2003).
The first implementation of the geometry from
feature approach, as introduced by Teich et al.
(2010), was realized with the commercial ACIS
geometry kernel from Spatial Corp.
In fact, for our target group - small and medium-
sized enterprises - we prefer an open or free
geometry kernel to avoid additional licensing costs.
Consequently, we have selected the Open
CASCADE geometry kernel, an open source library
freely available for different operating systems. It
provides a wide range of components for the
development of CAD and CAM applications, like
3D surface and solid modeling, CSG, numeric
simulation, and support for many CAD formats like
IGES and STEP AP 203, 214 as well as 209. It also
separates the geometric model data from the
representation view. Because of these benefits, we
have chosen the Open CASCADE geometry kernel
for the further development of our CAD module
and integrated it with the Qt cross-platform
application and UI framework to develop the user
interface.
3.3 COMMUNICATION LAYER
The communication layer is fundamental for the
collaboration of the different application modules
and for exchanging required information like the
feature based product model, the extracted data of
the ERP system, as well as process variants, and the
response from our resource module.
Especially for the feature based product model,
we have implemented the data exchange within the
logical application layer. Therefore, we utilize
object models as defined by the application
reference model (ARM) of application protocol
224, which aggregates generic entities of the
application interpreted model (AIM) into complex
and meaningful objects.
To realize the required communication and the
corresponding data exchange between the
application modules, we favoured a middleware
solution which handles most of the communication
aspects and consequently provides the basis for
developing distributed applications.
There are different advantages resulting from this
approach: modules are decoupled and work
independently, but nevertheless they can
collaborate. Implementing a distributed solution
forces the definition of clearly defined interfaces
and data structures for invoking remote calls and
exchanging information.
These aspects are also the reason for the next
benefit: we can parallelize e.g. the planning and
scheduling processes, either on different systems or
using multicore processors with independent
threads. Furthermore, the system can be extended at
any time, e.g. by developing adaptors to external
CAD, CAM, CAP or ERP systems. Considering the
overall system, we have different requirements for
the middleware platform. First of all, it needs to be
efficient for huge amounts of data, e.g. from the
ERP system, the CAD module and the resource
module. Additionally, we need to support different
programming languages. As previously mentioned,
the development of our CAD module requires an
efficient 3D geometry kernel. All of the geometry
kernels we evaluated are based on the C++
programming language. The process variant module
is based on the Java platform (Gaese et al, 2010)
and implementations of the scheduling module
utilize Java libraries as well as modules based on
the Microsoft .NET framework (Neumann et al, 2011).
Commonly used distribution or middleware
solutions like Microsoft's Component Object Model
(COM), Java remote method invocation (RMI),
Enterprise Java Beans (EJB) or .NET remoting are
either platform-specific or programming-language-
dependent technologies. An alternative solution is
given by web services, because they are platform-
and programming-language-independent. However,
they involve a major disadvantage: they are
inapplicable to the huge amounts of data which
need to be communicated between our application
modules during program runtime. This is due to the
fact that web services are based on XML
documents; thus, there is an immense
communication overhead, which results in higher
processing effort and processing time for all of the
communication participants.
Finally, we have evaluated the Common Object
Request Broker Architecture (CORBA) defined by
the Object Management Group (OMG) and
different Object Request Broker (ORB)
implementations which support common
programming languages like C, C++, Java and
many more. CORBA is a solution for the
development of large and distributed projects with
different programming languages as given in our
case. Furthermore, it is very efficient e.g. local
CORBA calls are nearly as efficient as traditional
function calls and the Internet Inter-ORB Protocol
(IIOP) uses a compact binary encoding of the
payload.
The specification of CORBA provides a
standardized and established middleware
architecture which is hardware-, platform- and
programming language independent. Programming
language independency is achieved by the special
interface definition language (IDL) which is used to
describe interfaces and communicated data
structures.
Consequently, this middleware solution provides
a great basis for the development of our integrated
and distributed software solution. For the
description of the interfaces, and to access and
exchange data structures respectively, we have
utilized the UML tool Rational Rose Enterprise
from IBM, which supports complete CORBA IDL
source code generation. To access and exchange the
feature based product data model between the
application modules, we have mapped the ARM of
AP 224. Figure-5 illustrates an extract of the UML
mapping from the class hierarchy of AP 224.
For compiling the generated IDL source files, we
have developed a set of automation routines which
build and generate programming libraries for our
target languages.


Figure 5 - AP 224 ARM mapping to UML for CORBA IDL generation
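To indicate what such an IDL-described interface might look like in use, the fragment below sketches a hypothetical AP 224 interface (as comments) and a client call using omniORBpy, one freely available Python ORB. The module, struct and operation names are assumptions for illustration, not the actual generated IDL.

# Hypothetical AP 224 fragment in CORBA IDL, shown as comments:
#
#   // ap224.idl, compiled with: omniidl -bpython ap224.idl
#   module AP224 {
#     struct RoundHole { double diameter; string bottom_condition; };
#     interface ModelServer { void add_feature(in RoundHole f); };
#   };
import sys
from omniORB import CORBA   # omniORBpy runtime
import AP224                # stubs generated by omniidl from the IDL above

orb = CORBA.ORB_init(sys.argv, CORBA.ORB_ID)
ior = sys.argv[1]           # stringified object reference of the server
server = orb.string_to_object(ior)._narrow(AP224.ModelServer)

# Remote call; IIOP marshals the struct in a compact binary encoding.
server.add_feature(AP224.RoundHole(5.5, "through"))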
4. CONCLUSIONS
Feature technology provides basic data structures
for an efficient product description which can be
efficiently reused during further processing,
especially for the description of interpretable
geometric part aspects. The ISO standard 10303
defines standardized data models for the
representation and exchange of product model data
throughout the entire product lifecycle. In fact,
heterogeneity of application systems is a cross-
industry problem and there are many efforts for an
efficient data exchange as well as application
integration e.g. via XML as discussed by
Chryssolouris et al (2004) for the ship repair
industry.
The important aspect is the underlying data
model that is communicated between different
application systems. Hence it is beneficial to
involve established standardized data models as
provided by ISO 10303. In consequence, we have
decided to realize our solution for generative
process planning for machining parts based on but
not limited to the application protocol 224. This
paper focused on the development of an integrated
and distributed software solution which utilizes the
data models defined by the AP 224 reference model
for automated process planning. Therefore, we have
presented our development approaches and results
especially of the CAD module and communication
infrastructure. The combination of CORBA with
STEP supports direct and simultaneous online data
access through the middleware layer as well as
online and offline data exchange via STEP and
XML files. Our presented approach is currently
focussed on single piece parts, especially in the area
of mechanical engineering. Further research deals
with the conjunction of ISO 10303 part 1102 for the
description of assemblies and the assignment of
assembling processes. Furthermore, we have
favoured the development of our own, independent
CAD module for feature based design with AP 224
machining features. This could be improved by
directly embedding the libraries and functionalities
e.g. as plugin-solution into common CAD systems.
5. ACKNOWLEDGMENTS
The authors and co-authors appreciate the support
of the European Union via the European
Commission and the European Social Fund (ESF).
REFERENCES
Bertsche B., Reliability in Automotive and Mechanical
Engineering: Determination of Component and
System Reliability, 1st Edition, Springer, Berlin,
2008
Chryssolouris G., Makris S., Xanthakis V. and Mourtzis
D., Towards the Internet-based supply chain
management for the ship repair industry,
International Journal of Computer Integrated
Manufacturing, Vol. 17, No. 01, 2004, pp. 45-57
Cubberly W. and Bakerjian R., Tool and Manufacturing
Engineers Handbook, Society of Manufacturing
Engineers, 1989
Ehrlenspiel K., Kiewert A. and Lindemann A.,
Kostengünstig entwickeln und konstruieren -
Kostenmanagement bei der integrierten
Produktentwicklung, 6th Edition, Springer, 2007
Farin G.E., Hoschek J. and Kim M.S., Handbook of
Computer Aided Geometric Design, North-Holland,
2002
Gamma E., Helm R., Johnson R.E. and Vlissides J.,
Design patterns, Addison Wesley, 2003
ISO 10303-224, Industrial automation systems and
integration - Product data representation and exchange
- Part 224: Application protocol: Mechanical product
definition for process planning using machining
features, 2006
Li W.D., Ong S.K. and Nee A.Y.C., Integrated and
Collaborative Product Development Environment:
Technologies and Implementations, 1st Edition,
World Scientific Pub Co, 2006
Li W.D., Ong S.K., Nee A.Y.C., and McMahon C.,
Collaborative Product Design and Manufacturing
Methodologies and Applications, 1st Edition,
Springer, 2007
Makris S., Xanthakis V., Mourtzis D. and Chryssolouris
G., On the information modelling for the electronic
operation of supply chains: A maritime case study,
Robotics and Computer-Integrated Manufacturing,
Vol. 24, No. 1, 2008, pp. 140-149
Militzer J., Teich T., Kretz D. and Neumann T.,
Automated process planning with geometric
dependencies derived from a feature-based CAD
drawing, Proceedings of the 20th International
Conference on Flexible Automation & Intelligent
Manufacturing, 2010, pp 169-176
Nasr E.A. and Kamrani A.K., Computer-Based Design
and Manufacturing, Springer, Berlin, 2006
Neumann T., Kretz D., Militzer J. and Teich T.,
Genetic Algorithms for the evaluation of process
variants, In: Proceedings of the 21st International
Conference on Flexible Automation and Intelligent
Manufacturing, 2011, pp 567-574
Niemann J., Tichkiewitch S. and Westkaemper E.,
Design of Sustainable Product Life Cycles,
Springer, Berlin, 2008
Pratt M.J., Parametric STEP for concurrent
engineering, Advances in Concurrent Engineering:
Proceedings of the 9th ISPE International Conference
on Concurrent Engineering, Cranfield UK, 2002, pp
933-940
Regalla S. P., Computer Aided Analysis and Design,
1st Edition, I K International Publishing House Pvt.
Ltd, 2010
Sage A.P. and Rouse W.B., Handbook of Systems
Engineering and Management, 2nd Edition, John
Wiley & Sons, 2009
Scallan, P., Process Planning. The Design/Manufacture
Interface, Butterworth Heinemann, 2003
Shah J.J. and Mantyla M., Parametric and Feature-Based
CAD/CAM: Concepts, Techniques, and Applications,
3rd Edition, John Wiley & Sons, Chichester, 1995
Soika C., Teich T., Militzer J. and Kretz D., Generation
of Process Variants in Automated Production Planning
by using Ant Colony Optimization, Proceedings of the
2011 International Conference on Computer and
Communication Devices, 2011, pp V2-61 - V2-65
Teich T., Militzer J., Jahn F., Kretz D. and Neumann T.,
Using ISO 10303-224 for 3D Visualization of
Manufacturing Features, Advanced Manufacturing
and Sustainable Logistics, Vol. 46, Part 3, pp 198-209,
2010
Tichkiewitch S., Tollenaere M., Ray P., Advances in
Integrated Design and Manufacturing in Mechanical
Engineering II, Springer, Netherlands, 2007
Wang L. and Shen W., Process Planning and Scheduling
for Distributed Manufacturing, Springer, Berlin, 2007
Winkler S., Mueller M. and Gaese T., Development of a
resource model to support automatic process planning
as part of a study about future-oriented competence-
clustering and generating methods for production
processes, CIRP 7th International Conference on
Intelligent Computation in Manufacturing Engineering
(ICME), Italy, 2010
Xu X. and Nee A.Y.C., Advanced Design and
Manufacturing Based on STEP, 1st Edition, Springer,
2009
Zhang H.-C. and Alting L., Computerized
Manufacturing Process Planning Systems, 1st
Edition, Springer, Netherlands, 1994

INTEGRATED DIMENSIONAL VARIATION MANAGEMENT IN THE DIGITAL
FACTORY
Jody Muelaner
The University of Bath
jody@muelaner.com
Prof Paul G Maropoulos
The University of Bath
p.g.maropoulos@bath.ac.uk

ABSTRACT
This paper describes how dimensional variation management could be integrated throughout design,
manufacture and verification, to improve quality while reducing cycle times and manufacturing cost
in the Digital Factory environment. Initially variation analysis is used to optimize tolerances during
product and tooling design and also results in the creation of a simplified representation of product
key characteristics. This simplified representation can then be used to carry out measurability
analysis and process simulation. The link established between the variation analysis model and
measurement processes can subsequently be used throughout the production process to
automatically update the variation analysis model in real time with measurement data. This live
simulation of variation during manufacture will allow early detection of quality issues and facilitate
autonomous measurement assisted processes such as predictive shimming.
A study is described showing how these principles can be demonstrated using commercially
available software combined with a number of prototype applications operating as discrete modules.
The commercially available modules include Catia/Delmia for product and process design, 3DCS
for variation analysis and Spatial Analyzer for measurement simulation. Prototype modules are used
to carry out measurability analysis and instrument selection. Realizing the full potential of
Metrology in the Digital Factory will require that these modules are integrated and software
architecture to facilitate this is described. Crucially this integration must facilitate the use of real-
time metrology data describing the emerging assembly to update the digital model.
KEYWORDS
Variation analysis, digital factory, measurability

1. INTRODUCTION
In its initial form, the Digital Factory may be seen as
the simulation of every detail of the manufacturing
process before it happens, allowing better planning
(Dwyer 1999). At a more advanced stage, the
simulation can be used not only during the planning
phase, but also to enhance the control of processes
on the production floor (Kuhn 2006).
The importance of design for manufacture has
been well established (Womack et al, 1990;
Fabricius 1994; Maropoulos et al, 2000); it has also
been suggested that design for measurability should
be a part of this (Muelaner et al, 2009). Additionally,
process modelling has been shown to contribute
significantly to process planning (Maropoulos, Yao
et al. 2000; Maropoulos et al, 2003). Previous work
has laid out a generic framework for measurement
planning (Cai et al, 2008) and presented prototype
instrument selection and measurability analysis
software (Muelaner et al, 2010).
This paper extends this work to show how
simulations of product variation created during the
product design phase can be integrated with
measurement simulation. This will initially give an
enhanced understanding of product variation and
verification.

At later stages in the product life cycle the use of
metrology to control processes, enable flexible
processes and manage component interfaces, will be
enhanced through the use of these integrated
simulations of product variation and measurement
uncertainty.
The manufacture of high quality products
requires close tolerances to be achieved. This is a
particular issue for large composite structures such
as the next generation of passenger aircraft and off-
shore wind turbines. The conventional methods for
maintaining close tolerances over large structures
involve the use of jigs to control the external form
of the structure combined with manual shimming
and fettling processes to maintain the interface
tolerances between components. These methods are
time consuming and dependent on highly skilled
manual operations. The conventional methods, in
their current form, are also not able to improve on
current external form tolerances due to the
limitations of environmental factors such as the
thermal expansion of jigs. This means that
improvements in aerodynamic profiles required for
increased efficiencies can not be realized.
As an example of a conventional assembly
process, components are loaded into a precisely
aligned assembly jig, the gaps between the
components are then carefully mapped using slip
gauges, and shims are produced to these
measurements. The components are removed from
the jig, reassembled with the shims in place, and the
measurement of gaps using slip gauges is repeated.
It may be necessary to repeat the shimming process
due to the inaccuracy inherent in such a manual
process. Once the gaps have been filled to within
the required tolerances, the components are drilled
through and then again removed from the jig so that
sealant can be applied. They are then finally
assembled. This process is illustrated in Figure 1.




Figure 1: Conventional Aerospace Assembly Process
The conventional approach described above is not
suitable for achieving the cost and process time
reductions required for the increased rates of
production forecast for products such as off-shore
wind turbines and next generation single aisle
passenger aircraft.
Alternative methods of maintaining tolerances are
in development. These generally also rely on jigs to
control the external form of structures, with
alternative processes used to maintain the interface
tolerances between components. These approaches
have been generically described as Measurement
Assisted Assembly (MAA) (Kayani and Jamshidi
2007). MAA includes processes such as predictive
shimming (Kayani and Gray 2009) and fettling,
where interface components are first measured and
this measurement data is then used to produce
shims or to fettle the interfacing components.

Although these approaches reduce the level of
manual rework required at the assembly stage, they
still generally require measurements to be taken in
the assembly jig, since they are not associated with
models able to predict the form of components
within the jig. They also do nothing to address the
limitations inherent in using large assembly jigs,
which are subject to thermal expansion, to control
aerodynamic form.
Determinate Assembly (DA) has been
demonstrated as a solution to reliance on jigs (Stone
2004) although in many applications it is not
possible to achieve the required component
tolerances. Measurement Assisted Determinate
Assembly (MADA) has therefore been suggested as
a way to implement DA for large assemblies with
tight tolerances (Muelaner and Maropoulos 2010).
An integrated approach to the design of products
and to the planning, monitoring and control of
processes is required, in order to design products
which minimise the need for dimensional control
during manufacture while maximising the
achievable aerodynamic profile accuracy and other
key characteristics.
This approach must consider the propagation of
variation through the product assembly during the
early stages of design, ensuring that tolerance
requirements do not put unnecessary demands on
products and that the key characteristics of the
assembly can be practically measured.
The design of processes must take into account
the variability in outputs from forming, assembly
and measurement processes. It is therefore necessary
to have models of machine tools, robots and
measurement instruments which include the
variability and uncertainty of these operations.
2. VARIATION MODELLING USING
SIMPLIFIED REPRESENTATIONS OF
KEY CHARACTERISTICS
Geometric Dimensioning and Tolerancing (GD&T),
the standard for Geometrical Product Specification
(GPS), provides a continuous definition that ALL
points on a surface are within a specified zone; of
course, this can never be fully verified. In reality,
representative discrete coordinate measurements are
typically taken to verify that a surface is within
tolerance. In a similar way, a point based model can
be used to represent continuous geometry for the
purpose of simulating product variability.
It is logical that the points defined for simulation
purposes should also be used for measurement. It is
important, however, that random measurement
locations are also used. This is because, if consistent
points are used for measurement, a process may
become optimized for these points, meaning that
they are no longer representative of the variability
of the surface as a whole.
Rules are required to streamline the process of
deciding how many control points are required to
verify given features. Such features should include
surfaces, holes, pins etc. It will then be possible for
the designer to work in a system where he specifies
the intent of his design and this is coded as both
GD&T for a standards based approach as well as
being discretised to a point based model for
variation analysis, measurement planning etc.
Use of a point based model has the advantage of
facilitating a relatively simple calculation of the
propagation of variability within an assembly. It
also gives greatly reduced data file sizes. For
example, a complex aircraft component such as a
composite wing cover could have a data size of
100 MB when stored as a CATIA file. If this
component were characterized quite rigorously,
with a point placed every 10 mm, the total data
required would still be less than 5 MB. More detail
on this calculation is given in Table 1. It is therefore
clear that, even where large profile tolerances are
represented using reasonably detailed point based
representations, considerable reductions in data file
sizes are possible.
Table 1: Data required for Point based Model

Total surface area for controlled surfaces: 60 m²
Grid spacing: 10 mm
Total control points: 600,000
Measurement resolution: 1 µm
Max scale for measurements: 100 m
Data required per point measurement: 4 Bytes
Data required including data label: 8 Bytes
Total data required to describe component: 4.58 MB
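The figures in Table 1 can be reproduced directly; the short sketch below assumes, as in the table, one labelled 8-byte measurement per grid point.

# Reproducing the Table 1 estimate: a 10 mm grid over 60 m^2 of controlled
# surface, with one 4-byte measurement plus a 4-byte label per point.
surface_area_m2 = 60.0
grid_spacing_m = 0.010
bytes_per_point = 8

points = surface_area_m2 / grid_spacing_m ** 2
total_mib = points * bytes_per_point / 2 ** 20

# A 4-byte measurement suffices because 100 m at 1 µm resolution needs
# only 10**8 distinct values, which fits into 32 bits.
print(f"{points:,.0f} control points, {total_mib:.2f} MiB")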
2.1. DEFINING COMPONENT INTERFACES
Points representing two components can be used to
simulate the interface between those components.
Further inputs will however be required from the
designer in determining exactly how components
will interface with each other. This can be
understood by considering a component with two
pins, one of which has a shoulder, and a plate with
one hole and a slot, as shown in Figure 2. The
assembly condition can be simulated by calculating
the distances between points and applying
translations and rotations to bring the movable plate
part into its assembled condition with the target pin
part.
Different assumptions can be made regarding the
details of how the pins and shoulders will constrain
the movement of the plate. For example if it is
assumed that the pin in a hole is relatively tight, and
will therefore control rotation about the x and z
axes, then a simple 3-2-1 fit can be used. In this
case three points on each of the target and the
movable parts are used to simulate the assembly
interface conditions and the following
transformations are carried out:
- Translate the component to make C1 coincident with T1. The distances in x, y and z between points C1 and T1 are calculated and then simply subtracted from all of the points defining the component geometry.
- Rotate the component about x and y (with origin at T1) so that C2 lies on the line through T1 and T2. These rotations can be carried out one at a time: the angles between the lines C1-C2 and T1-T2 are first calculated in the x-y and y-z planes and the corresponding rotations are then carried out by applying a rotation matrix.
- Rotate the component about z (with origin at T1) so that C3 lies on the plane through T1, T2 and T3.
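These three steps compose into a single rigid-body transformation. The following Python/NumPy fragment is a minimal sketch of such a three-point move (the function and variable names are ours, not those of 3DCS or any CAD tool); it uses the fact that mapping an orthonormal frame built on C1, C2, C3 onto the corresponding frame built on T1, T2, T3 satisfies exactly the three conditions above:

import numpy as np

def frame(p1, p2, p3):
    # Orthonormal frame: x along p1->p2, z normal to the plane p1-p2-p3.
    x = (p2 - p1) / np.linalg.norm(p2 - p1)
    n = np.cross(p2 - p1, p3 - p1)
    z = n / np.linalg.norm(n)
    return np.column_stack([x, np.cross(z, x), z])

def three_point_move(points, C, T):
    # Rigid transform taking C1 onto T1, C2 onto the line T1-T2 and
    # C3 into the plane T1-T2-T3 (the 3-2-1 conditions).
    R = frame(*T) @ frame(*C).T
    return (points - C[0]) @ R.T + T[0]

# Datum points C on the movable part and target points T on the fixed part.
C = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
T = np.array([[5., 2., 1.], [5., 3., 1.], [4., 2., 1.]])
part = np.array([[0., 0., 0.], [1., 0., 0.], [1., 1., 0.], [0., 1., 0.]])
print(three_point_move(part, C, T))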
Figure 2: Pin Shoulder Slot Location Example
It should be pointed out that this type of 3-2-1 fit is
called a Three-Point Move in 3DCS while the term
3-2-1 Move is used to describe a different type of
move using 6 points on each component!
It should be noted that different assumptions
about how the assembly will locate will lead to
different methods of fitting the points. For example
if it is assumed that the pin in a hole is relatively
loose but that the plate is clamped down onto the
shoulder so that it is the shoulder that controls
rotation about x and z then a more complex form of
3-2-1 is required, sometimes referred to as a step-
plane move, which involves the following steps:
- The part is located onto the shoulder, controlling translation in y and rotation about both x and z: points C1, C2 and C3 are moved into contact with a plane through T1, T2 and T3 by translating C1 to T1 and then rotating about x and z.
- The part is then located onto the pin in a single translation.
Other methods of fitting are also possible; for
example, a least-squares best fit could be used,
although this is unlikely to accurately simulate
real world conditions.
Figure 3: Shoulder Pin Slot Location Example
If it is not known whether the pin or the shoulder
will control rotation about x and z then it is possible
to apply a number of different fitting algorithms
with the transformation of the movable component
taking place in small iterations. It is then possible to
apply some test condition such as measuring the
distance between points to check which contact
condition will come into play first and then allow
this to position the component. By applying this
type of test it is possible to run a simulation in
which, due to component variability, the contact
conditions between components vary from assembly
to assembly.
The requirement for a rules based translation of
GD&T into a point based model of component
geometry was described above. Ultimately the CAD
system should also read the component interfaces
from the CAD assembly model and automatically
convert these into coordinate transformations with
iterative solutions to correctly simulate interface
conditions. Initially it is unlikely that such an
approach could be applied to the full range of
interfaces seen in complex aerospace assemblies.
The simulation of standard connections such as the
examples shown with pins and holes should
however be automated.
2.2. RUNNING MONTE CARLO SIMULATION
OF THE SIMPLIFIED GEOMETRY
Once a simplified, point based representation of
parts has been created and the interface conditions
between the parts in an assembly have been defined,
it is possible to simulate variability in the
assembly using the Monte Carlo method. Based on
the GD&T definitions or Statistical Process Control
(SPC) data, randomly generated errors are added to
each point, simulating component variability.
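A minimal sketch of this step (our illustration, not the authors' implementation; mapping a tolerance zone to a normal error with 3 sigma equal to the zone half-width is an assumption): each trial perturbs two interface points, adds a uniform float term for a hole-pin clearance, and accumulates the distribution of the resulting gap:

import numpy as np

rng = np.random.default_rng(1)

def simulate(n_trials=10_000, tol=0.2, clearance=0.1):
    # Key characteristic: the gap between two interface points nominally
    # 1.0 apart; the tolerance is mapped to a normal error with 3*sigma = tol.
    gaps = np.empty(n_trials)
    for k in range(n_trials):
        a = rng.normal(0.0, tol / 3, 3)
        b = np.array([1.0, 0.0, 0.0]) + rng.normal(0.0, tol / 3, 3)
        b[0] += rng.uniform(-clearance / 2, clearance / 2)  # float in the slot
        gaps[k] = np.linalg.norm(b - a)
    return gaps

g = simulate()
print(f"nominal 1.0, mean {g.mean():.4f}, standard deviation {g.std():.4f}")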
Additional randomly generated errors may also be
added to some of the points in order to simulate the
assembly variability due to float between, for
example, an oversized hole or slot and an undersized
pin. The complete simulation process for the Pin-
Shoulder-Slot example is shown in Figure 4.
Figure 4: Simulation of Pin-Shoulder-Slot Assembly using 3-2-1 Fit
3. INTEGRATION OF DIMENSIONAL
VARIATION MANAGEMENT ACROSS
THE DIGITAL FACTORY
The variation models described above can be used
to simulate product variability in order to optimize
tolerances during product and tooling design. The
simplified, point based, representation of product
key characteristics which was created for the
variation model can then be used to carry out
measurability analysis and process simulation. For
example measurement simulation (Calkins 2002;
New River Kinematics 2007; Muelaner, Cai et al.
2010) can be used to establish the uncertainty of
measurement for each of the points representing the
product geometry. Simulation of component
forming operations may also be carried out at this
stage to obtain improved estimates of actual
component variability.
Improved estimates of component variability and
measurement uncertainty can then be fed back into
the variation model to obtain improved simulation
results. The simulation of product variation is
therefore an iterative process, with the results refined
a number of times as the product and process are
developed.
Since the measurement process planning has been
based on the point based model originally created
for variation simulation, it is also possible to feed
live measurement results back into the simulation.
This allows the actual as-built condition of an
emerging assembly to be simulated. It is never
possible to know exactly what the as-built condition
is since there is always a degree of uncertainty of
measurement. The uncertainty of measurement
therefore replaces component variability in the
model to allow improved estimates of the final build
to be generated as the build process progresses.
This live simulation of variation during
manufacture will allow early detection of quality
issues and corrective actions to be taken. It will also
facilitate measurement assisted processes such as
predictive shimming and MADA, discussed above.
This integration of dimensional variation
management can be demonstrated using
commercially available software combined with a
number of prototype applications operating as
discrete modules. The commercially available
modules might include Catia/Delmia for product
and process design, 3DCS for variation analysis and
Spatial Analyzer for measurement simulation.
Prototype modules are used to carry out
measurability analysis and instrument selection.
The complete integrated dimensional variation
management process is summarized in Figure 5.
Figure 5: Generic Overview of Dimensional Variation Management
4. CONCLUSIONS
Rules are required to streamline the process of
deciding how many control points are required to
verify and/or simulate given features. Such features
should include surfaces, holes, pins etc. It will then
be possible for the designer to work in a system
where he specifies the intent of his design, and this
is coded both as GD&T for a standards based
approach and as a discretised point based model
for variation analysis, measurement planning etc.
Models of machine tools, robots and
measurement instruments are required which
include the variability and uncertainty of these
operations. These will provide inputs to the
variation simulation for assemblies.
Realizing the full potential of integrated
dimensional variation management will require that
these modules are integrated, and a software
architecture to facilitate this has been described.
Crucially, this integration must facilitate the use of
real-time metrology data describing the emerging
assembly to update the digital model.
REFERENCES
Cai, B., Guo, Y., Jamshidi, J. and Maropoulos, P. G.,
"Measurability Analysis Of Large Volume Metrology
Process Model For Early Design", 5th International
Conference on Digital Enterprise Technology, Nantes,
France, 2008
Calkins, J. M. (2002). Quantifying Coordinate
Uncertainty Fields in Coupled Spatial Measurement
Systems Mechanical Engineering. Blacksburg,
Virginia Polytechnic Institute and State University.
PhD: 226.
Dwyer, J. "Simulation - the digital factory."
Manufacturing Computer Solutions 5(3):1999 48-50
Fabricius, F. "Seven step procedure for design for
manufacture." World Class Design to Manufacture
1(2):1994 23-30
Kayani, A. and Gray, I. (2009). Shim for Arrangement
Against a Structural Component and a Method of
Making a Shim, Airbus UK Ltd.
Kayani, A. and Jamshidi, J., "Measurement Assisted
Assembly For Large Volume Aircraft Wing
Structures", 4th International Conference on Digital
Enterprise Technology, Bath, United Kingdom, 2007
Kuhn, W. "Digital factory - Simulation enhancing the
product and production engineering process",
Monterey, CA, United states, 2006,
Maropoulos, P. G., Bramall, D. G. and McKay, K. R.
"Assessing the manufacturability of early product
designs using aggregate process models." Proceedings
of the Institution of Mechanical Engineers, Part B:
Journal of Engineering Manufacture 217(9):2003
1203-1214
Maropoulos, P. G., Yao, Z., Bradley, H. D. and Paramor,
K. Y. G. "Integrated design and planning environment
for welding Part 1: product modelling." Journal of
Materials Processing Technology 107(1-3):2000 3-8
Muelaner, J. E., Cai, B. and Maropoulos, P. G., "Large
Volume Metrology Instrument Selection And
Measurability Analysis", 6th International Conference
on Digital Enterprise Technology, Hong Kong, 2009
Muelaner, J. E., Cai, B. and Maropoulos, P. G. "Large
Volume Metrology Instrument Selection and
Measurability Analysis." IMechE, Part B: J.
Engineering Manufacture 224:2010 853-868
Muelaner, J. E. and Maropoulos, P. G., "Design for
Measurement Assisted Determinate Assembly
(MADA) of Large Composite Structures", CMSC,
Reno, Nevada, 2010
New River Kinematics (2007). SpatialAnalyzer.
Stone, P. R. "Reconfigurable Fixturing". Aerospace
Manufacturing and Automated Fastening Conference
and Exhibition, St. Louis, MS, USA, 2004,
Womack, J. P., Jones, D. T. and Roos, D. "The Machine
that Changed the World"Rawson Associates New
York1990,
INTEGRATED LARGE VOLUME METROLOGY ASSISTED MACHINE TOOL
POSITIONING
Zheng Wang
University of Bath
zw215@bath.ac.uk
Paul G Maropoulos
University of Bath
p.g.maropoulos@bath.ac.uk
ABSTRACT
The concept of integrating metrology systems into production processes has generated significant
interest in industry, due to its potential for reducing production time and defective parts. One of the
most interesting methods of integrating metrology into production is the use of external metrology
systems to compensate machine tools in real-time. The development and preliminary experimental
performance evaluation of a prototype laser tracker assisted 3 axis machine are described in this
paper. Real-time correction of the machine tool's absolute volumetric error has been achieved. As a
result, significant increases in static repeatability and accuracy have been demonstrated, allowing
the low cost 3 axis machine to reliably reach static positioning accuracies below 45 µm throughout
the working volume without any prior calibration or error mapping, showing that the proposed
methods are feasible and can have very wide applications.
KEYWORDS
Machine Tool, Metrology, Laser Tracker, Real-time Error Compensation
1. INTRODUCTION
The field of machine tool error compensation arises
from the fact that no matter how much time and
effort is spent on the design, it is physically
impossible to construct the perfect machine tool.
Error compensation and accuracy enhancement of
machine tools has become a very heavily researched
area, due to the increasing demand on the
performance of machine tools for precision
manufacturing. There are two major categories of
error compensation: one approach is to attempt to
calibrate or measure the error map of the machine
before or after machine operations, which is then
applied during the operations; the other approach is
to monitor the error during machine operation and
use it to alter the machining process while the
machine is running, which is commonly referred to
as real-time compensation. The advantage of the
latter approach is that it is more accurate and allows
a lower cost, lower performance machine to be used
in operations that demand high accuracy (J. Ni,
1987).
The majority of the body of work on real-time
error compensation focuses on minimizing or
compensating for the intrinsic and environmental
sources of error for each component of the machine
tool (R. Ramesh et al 2000, J. Yuan and J. Ni 1998,
C.H. Lo 1995, W.T. Lei and Y.Y. Hsu 2003). Using
these traditional methods, in order to achieve
complete compensation of all the possible sources
of error, all of the individual contributors such as
geometric (21 errors for a 3 axis machine),
kinematic, thermal, and cutting forces must be
painstakingly modelled, and a large array of sensors
such as temperature sensors, load cells, and laser
interferometers must be installed to monitor the
status of the machine. The complexity of this
method means that it is time consuming to setup,
and is sensitive to the performance and position of
the sensors (S. Yang et al, 1996, and R. Ramesh et
al 2000).
In this paper, a simpler and more straightforward
real-time method of using an external metrology
instrument to directly measure the 3D position of
the tool is proposed as an alternative to the
traditional real-time machine tool error
compensation methods.
2. DEVELOPMENT OF THE SYSTEM
The metrology assisted positioning system
described in this paper is a part of a larger
undertaking in the Metrology Assisted Assembly
(MAA) project at the University of Bath. The MAA
project aims to develop a flexible, scalable and low
cost assembly cell to demonstrate the integration of
metrology systems directly into the manufacturing
processes of large aerospace components. The
integration of metrology instruments has the
potential of reducing the cost of the tooling, and
because the parts are measured while they are being
manufactured, inspection time can be reduced if not
eliminated, and the probability of reject parts is also
reduced.
The MAA cell will attempt to use metrology
instruments and reconfigurable tooling rather than
the traditional heavy and expensive jigs and fixtures
to solve the problem of locating the part and the
tool. This paper will be focused on the development
and verification of the system that is designed to
perform the tool locating task.
2.1. SYSTEM DESIGN GOALS
The system design is primarily driven by the
requirements from the aerospace industry, to be
used in manufacturing processes such as wing
assembly, where high accuracy must be maintained
over a large working volume. Therefore the system
needs to be able to meet the following requirements:
- Large volume 6 DOF positioning
- 50-100 µm coordinate accuracy
- Real-time metrology control
- Scalable
- Flexible multi-process
- Off-the-shelf hardware
- Low cost
2.2. PROPOSED SOLUTION
While the design originally focused on using
metrology instruments to guide an industrial robot,
it soon became clear that real-time control required
access to the low level control system of the robot,
to which most industrial robot manufacturers are not
willing to grant access because of the sensitivity of
their IP.
Therefore the decision was made to build a
simple and small 3 axis actuator using off-the-shelf
components, which meant that the researcher had
complete control over every aspect of the software
and hardware design. Early design concept mock-
ups can be seen in Figure 1.
Figure 1 Concept mock-ups envisioning the MAA 3 axis actuator on static mount and as the end effector of an industrial serial robot, monitored by a laser tracker
The main design philosophy of the system is to
use a laser tracker to provide an absolute position
reference to the actuator, such that despite its
inherently small working volume and lack of rigidity,
the 3 axis system can be moved around a large
assembly, either manually or by a serial robot, and
perform tasks that can typically only be carried out
with very large machine tools. Since the position
error of the device is compensated at all times by the
laser tracker, high feature to feature accuracy can be
achieved.
Although a full 6 DOF solution would have been
preferable, a simpler 3 axis system dramatically
reduces the complexity and cost of the system,
allowing the actuator to be fully compensated using
a single laser tracker. The lack of 6 DOF capability
can be somewhat mitigated if a serial robot is used
to position the 3 axis actuator in different
orientations.
2.3. FINAL SYSTEM DESIGN
This section describes the major components of the
final system. Since the proposed solution calls for
the construction of a custom machine tool, all of the
software and hardware needed to be designed and
constructed by the researcher.
Figure 2 Pictures of the prototype system
Pictures of the final prototype system can be seen
in Figure 2. It consists of a PC, a 3 axis machine, a
laser tracker and a box housing the motion
controller and servo drives. The laser tracker tracks
the position of a Spherical Mirror Reflector (SMR)
magnetically mounted on a SMR nest fixed to the Z
axis of the machine.
2.3.1. Laser tracker
The Laser Tracker (LT) is the instrument that
provides the absolute coordinate information to
the machine. It utilises interferometry for measuring
length, and a pair of high resolution angle encoders
to measure the horizontal and vertical angles of the
laser beam. Figure 3 shows a schematic of the
internal components of a typical laser tracker. In the
internal components of a typical laser tracker. In the
interferometry technique a coherent laser beam of
known wavelength passes through a beam-splitter.
One beam is reflected back within the system while
the other is aimed at a Spherical Mirror Reflector
(SMR), which is a sphere with an embedded corner-
cube reflector. When the two beams combine,
constructive and destructive interference at the laser
wavelength can be observed by the detector. The
number of the bright and dark patterns is counted by
the relevant electronics to calculate the distance.
The SMR is used as the instrument probe, thus the
laser tracker is a contact measurement system.
Laser trackers are considered to be among the
most reliable and well established metrology
systems, and an international standard exists for
their performance evaluation (ASME B89.4.19,
2006). Their main drawback is that the line of sight
between the laser tracker head and the SMR must be
maintained at all times, and only one SMR can be
tracked at any time.
Figure 3 Interferometry in Laser Trackers (Estler WT et al, 2002): laser source, position electronics, detector, beam splitters, position sensor, interferometer, servo electronics, servo mirror and retroreflector
Some laser trackers provide an Absolute Distance
Measurement (ADM) system, which modulates the
laser beam and detects the phase of the returned
light (Estler WT et al, 2002). By gradually reducing
the modulation frequency, the absolute distance of
the target can be determined with a high degree of
accuracy. ADM enabled laser trackers are more user
friendly, since when the line of sight is broken, the
tracker can reconnect with the SMR without homing
the SMR to the tracker's initial position, as is
required for an interferometer system. The ease of
use, however, comes at the cost of a slight decrease
in accuracy (FARO EUROPE GmbH & Co. KG,
2010).
The laser tracker used to compensate the 3 axis
actuator is an ADM-only FARO ION. It has a
Maximum Permissible Error (MPE) (ASME
B89.4.19, 2006) of 16 µm + 0.8 µm/m for distance
measurements and 20 µm + 5.0 µm/m for angular
measurements. Typical measurement uncertainties
are less than half of the MPE. It has a maximum
range of 40 m when using normal 1.5 inch SMRs
(FARO EUROPE GmbH & Co. KG, 2010).
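Read at the roughly 1.5 m working distance used in the experiments of Section 3, these specifications imply (our arithmetic):

$$\mathrm{MPE_{distance}} = 16 + 0.8 \times 1.5 \approx 17.2\ \mu\mathrm{m}, \qquad \mathrm{MPE_{angular}} = 20 + 5.0 \times 1.5 = 27.5\ \mu\mathrm{m},$$

consistent with the sub-30 µm tracker uncertainty assumed later when assessing the machine's accuracy.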
2.3.2. Hardware design and construction
The 3 axis actuator was designed and analyzed in
Dassault Systemes Catia as a fully functional virtual
machine tool with a kinematic model (Figure 4).
Figure 4 Hardware design process: requirements and part selection; CAD and digital mock-up; analysis; working prototype
DS Catia NC simulation is used to verify the axis
movements for clashes, to simulate NC machining
processes including material removal (Figure 5),
and generate G-Codes.
The hardware system consists of 3 THK linear
slides with 20 µm positioning repeatability, an
extruded aluminium frame and custom parts for
mounting the slides to each other.
While basic FEA was performed on the parts
connecting the axes, to ensure that the machine
would be able to carry the specified payload weight,
no attempt was made to manufacture them with high
accuracy. In fact, the parts are hand made from
aluminium plates and hand welded together.
Measurements of the axes' squareness using a laser
tracker showed that the axes are misaligned by up to
0.4 degrees.
Figure 5 - Catia machine tool simulation with tool path G-Code generation
Overall specifications of the 3 axis actuator are
listed below:
- Weight: 20 kg (with frame)
- Working volume (mm): X = 300, Y = 200, Z = 80
- Payload: 5 kg
- 3 THK KR linear slides
- Servo motors and drives: OMRON SMGAH
- Motion controller: OMRON Trajexia
2.3.3. Software
The integration of the laser tracker and the machine
is handled by the main control software. The main
control software runs on a Windows XP PC, and is
written in C# (Figure 6). Communication with the
motion controller is achieved using the serial port,
and communication with the laser tracker is
handled through the laser tracker SDK provided by
Faro. The main control software also provides an
easy to use interface for controlling the machine
manually, plotting position information from the
machine and the tracker, loading and executing
G-Code files, enabling or disabling compensation,
and recording and saving measurement results.
Figure 6 - Main control software, running on the PC and written in C#
Two concurrently running programs written in
OMRON BASIC run on the OMRON Trajexia
motion controller (Figure 7). One of the programs
provides machine status feedback to the PC, the
other processes commands sent from the PC and
converts them into servo motion commands.
Figure 7 - Motion controller programs written in OMRON BASIC
2.3.4. Controls and communications
The overall layout of the connections and data flows
is illustrated in Figure 8. The laser tracker is
connected to the PC via 100 Mbps Ethernet, and the
PC is connected to the motion controller through a
38400 baud RS232 serial connection. The motion
controller drives the servos through the proprietary
OMRON Mechatrolink-II connection.
A simplified overview of the control software,
including the programs running on the motion
controller, is shown in Figure 9. The first step,
before real-time compensation can start, is to locate
the 6 DOF position of the 3 axis machine in the laser
tracker coordinate system. This is accomplished by
pressing a button in the main control software,
which moves the machine through a series of three
points whose positions are measured by the laser
tracker. This provides enough information to
compute an Euler rotation matrix and an offset
vector to convert the machine coordinate system
into the tracker coordinate system and vice-versa.
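A minimal sketch of this location step (our reconstruction, not the actual control code): from three matched, non-collinear points the rotation matrix and offset vector follow from a pair of orthonormal frames; with noisy measurements a least-squares fit over more points would be preferred:

import numpy as np

def locate_machine(machine_pts, tracker_pts):
    # Recover R, t with p_tracker = R @ p_machine + t from three matched,
    # non-collinear points by building an orthonormal frame on each side.
    def frame(p):
        x = p[1] - p[0]; x = x / np.linalg.norm(x)
        n = np.cross(p[1] - p[0], p[2] - p[0]); n = n / np.linalg.norm(n)
        return np.column_stack([x, np.cross(n, x), n])
    R = frame(tracker_pts) @ frame(machine_pts).T
    t = tracker_pts[0] - R @ machine_pts[0]
    return R, t

# Synthetic check: rotate machine points by 30 deg about Z and offset them.
m = np.array([[0., 0., 0.], [100., 0., 0.], [0., 100., 0.]])
c, s = np.cos(np.radians(30)), np.sin(np.radians(30))
R_true = np.array([[c, -s, 0.], [s, c, 0.], [0., 0., 1.]])
tr = m @ R_true.T + np.array([1000., 500., -200.])
R, t = locate_machine(m, tr)
print(np.allclose(R, R_true), np.round(t, 6))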
Figure 8 - Layout of physical connections and data flow: laser tracker commands and measurements at 1024 Hz over 100 Mbps Ethernet; machine status at 40 Hz and motion commands at up to 40 Hz over 38400 baud RS232 serial; Mechatrolink-II from the motion controller to the servo drives
Then a G-Code command is read from the input
file. The G-Code command is sent to the G-Code
interpreter, which converts the G-Code command
into a simpler machine command that is
subsequently sent to the motion controller through
the RS232 serial link. Currently the G-Code
interpreter only supports a small subset of the G
commands. A motion controller program then
receives, parses, processes and executes the
command sent from the PC.
If the error compensation loop is disabled, the
motion controller will wait until the motion is
complete, and then sends a motion buffer empty
command to the PC, which triggers the main control
software to load the next line of G-Code.
Figure 9 - Overview of the control software
If error compensation is enabled, a compensation
vector is sent to the motion controller, which then
performs a synchronized 3 axis move command on
3 virtual axes using the compensation vector. The
movement of the virtual axes is then added to that of
the physical axes. On the OMRON MC, a virtual
axis is implemented exactly like a physical axis and
includes all the properties a real axis would have,
such as movement speed and acceleration; it is
essentially a simulation of a physical axis. By
superimposing virtual axis moves on top of the
physical axis moves, the end point of a currently
executing move can be seamlessly modified, without
having to abort the current move and start a new
one, which would cause the machine to briefly stop
all motion.
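The mechanism can be illustrated with a toy superimposition (our sketch; the real controller runs proper trapezoidal profiles internally): the servo set-point is the sum of the physical-axis and virtual-axis profiles, so commanding the virtual axis while the physical move is executing shifts the end point without interrupting motion:

import numpy as np

def profile(target, speed, dt=0.025):
    # Constant-speed set-point profile toward a target, a stand-in for
    # the controller's real trajectory generator.
    pos, out = 0.0, [0.0]
    while abs(target - pos) > speed * dt:
        pos += np.sign(target - pos) * speed * dt
        out.append(pos)
    out.append(target)
    return np.array(out)

# Physical axis commanded to 100 mm; a -0.3 mm compensation vector is
# commanded on the matching virtual axis while the move is in progress.
phys = profile(100.0, speed=50.0)
virt = profile(-0.3, speed=1.0)
n = max(len(phys), len(virt))
phys = np.pad(phys, (0, n - len(phys)), mode="edge")
virt = np.pad(virt, (0, n - len(virt)), mode="edge")
servo_setpoint = phys + virt
print(servo_setpoint[-1])  # 99.7: the end point has shifted seamlessly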
A detailed flowchart of the compensation loop
and its relationship with the other two concurrently
running threads is illustrated in Figure 10.
The laser tracker thread is an always running
background thread that continuously takes
measurements from the laser tracker at 1024Hz. The
motion controller status update program is a
program that runs on the motion controller. It sends
the key statuses of the machine including axis
positions and motion buffer state to the PC through
the serial connection. Since the compensation loop
depends on the data sent back from the motion
controller, the frequency of the feedback loop
depends directly on the frequency of the update
program. The status update is set at approximately
40Hz.
Figure 10 - Detailed flow chart of the error compensation loop showing the three concurrent threads
When the main control software receives an axis
position update from the motion controller, the
positions of the axes are transformed into the laser
tracker coordinate system, this coordinate represents
the position the machine thinks it is at. This
position is then compared with the actual
measurement from the tracker. If the error between
the predicted position and the measured position is
greater than the preset tolerance, a compensation
vector is generated, multiplied by a gain value and
sent back to the motion controller. The gain value is
used to fine tune the feedback response and prevent
oscillations. This is a purely proportional control
loop, thus the gain value is the proportional loop
gain.
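A toy version of this loop (our sketch with illustrative numbers, not the actual control code, using the 10 µm tolerance and 50% gain quoted in Section 3.1) shows why the gain must be tuned; a gain near 2 oscillates about the target instead of converging quickly:

def p_loop(target, gain, tol=0.010, max_cycles=50):
    # Proportional compensation: at each ~40 Hz status update, command a
    # move of gain * measured_error while the error exceeds the tolerance.
    pos = target + 0.100               # start 100 um off target (units: mm)
    for cycle in range(max_cycles):
        error = target - pos           # would come from the laser tracker
        if abs(error) <= tol:
            return pos, cycle
        pos += gain * error            # compensation vector scaled by gain
    return pos, max_cycles

for gain in (0.5, 1.9):
    pos, n = p_loop(10.0, gain)
    print(f"gain {gain}: settled at {pos:.4f} mm after {n} corrections")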
3. EVALUATION OF STATIC
POSITIONING PERFORMANCE
An experiment was carried out to assess the static
positioning performance of the machine with and
without real-time compensation from the laser
tracker.
3.1. EXPERIMENT DESIGN
A G-Code tool path of a 3D grid of 30 points was
generated, covering the entire working volume of the
machine. The grid consists of 2 planar grids of 15
points, separated by 60 mm in the machine Z axis, as
shown in Figure 11 and Figure 12. Each planar grid
is traversed twice, in both the forward and reverse
directions, in order to identify backlash errors,
which are not compensated in the servo controller.
The complete tool path is repeated 10 times;
therefore each point is reached 20 times, 10 in the
forward and 10 in the reverse directions, taking
approximately 1.5 hours.
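As an illustration (our sketch; the actual path was generated in Catia, and the 5 x 3 layout is an assumption consistent with the 280 x 180 mm extents shown in Figure 11), such a path can be written out as G-Code directly:

def grid_gcode(nx=5, ny=3, dx=70.0, dy=90.0, z_levels=(0.0, 60.0), feed=1000):
    # Serpentine grid of nx*ny points per plane (here 15), each plane
    # traversed forward and then in reverse, on two Z levels 60 mm apart.
    lines = [f"G90 G21 F{feed}"]        # absolute positioning, millimetres
    for z in z_levels:
        pts = [(ix * dx, iy * dy)
               for iy in range(ny)
               for ix in (range(nx) if iy % 2 == 0 else reversed(range(nx)))]
        for x, y in pts + pts[::-1]:    # forward pass, then reverse pass
            lines.append(f"G01 X{x:.1f} Y{y:.1f} Z{z:.1f}")
    return "\n".join(lines)

print(grid_gcode())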
Figure 11 - Tool path for each planar grid of 15 points (X = 280 mm, Y = 180 mm), showing the parking position and the forward and reverse passes
At each grid point, the machine dwells for 3
seconds, during which a 2 second laser tracker
measurement is taken after allowing the machine to
stabilize for 1 second.
During the experiment with laser tracker
feedback, the error tolerance was set at 10 µm and
the gain was set at 50%.
3.2. ANALYSIS OF RESULTS
3.2.1 Repeatability results
Since each point is repeated 20 times, the drift over
time of the subsequent measurements compared to
the first measurement can be plotted to demonstrate
the repeatability of the system. The positioning drift
over time for both the forward and backward passes
is plotted in Figure 12, providing a qualitative visual
representation of repeatability.
It is clear that significant drift and backlash errors
are present in the 3 axis machine, especially in the
Y and Z axes. When real-time compensation is
enabled, drift is considerably reduced.
Figure 12 - Plot of programmed tool path and exaggerated drift (x1000) for runs with (B) and without (A) real-time compensation
The increase in repeatability can be seen more
quantitatively in Figure 13. The saw-tooth shaped
plots for the Y and Z axes of the uncompensated
experiment show the differences between the
forward and reverse passes, illustrating the effect of
backlash on these axes.
Although the results of the uncompensated case
show that the machine has reasonable repeatability,
on the order of 20-30 µm, when compensation is
enabled backlash error is almost completely
eliminated and repeatability errors are at least
halved.
Figure 13 - Run chart of drift in X, Y and Z over 20 trials for runs with (B) and without (A) real-time compensation
3.2.2. Inter-point distance comparison
While repeatability is an important measure of
machine performance, the absolute volumetric
accuracy of positioning is a more meaningful
assessment of the machine.
The volumetric accuracy of the machine is
evaluated by comparing the measured inter-point
distances (IPD) to the nominal distances. That is, a
list of the distances between all of the measured
points for each pass is compared to the list of
nominal distances. Looking at the accuracy this way
instead of comparing the coordinates of the points
directly avoids the possible errors introduced in the
least squares point fitting and coordinate
transformation process which is required for direct
coordinate comparisons.
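The comparison itself is compact; a minimal sketch (ours, with four illustrative grid points and 5 µm simulated noise):

import numpy as np
from itertools import combinations

def ipd_deviations(measured, nominal):
    # For every pair of points, return the nominal inter-point distance and
    # the deviation of the measured distance from it; no best-fit coordinate
    # transformation between the two sets is required.
    idx = list(combinations(range(len(nominal)), 2))
    d_nom = np.array([np.linalg.norm(nominal[i] - nominal[j]) for i, j in idx])
    d_meas = np.array([np.linalg.norm(measured[i] - measured[j]) for i, j in idx])
    return d_nom, d_meas - d_nom

nominal = np.array([[0., 0., 0.], [280., 0., 0.], [280., 180., 0.], [0., 180., 60.]])
measured = nominal + np.random.default_rng(0).normal(0.0, 0.005, nominal.shape)
d_nom, dev = ipd_deviations(measured, nominal)
print(np.c_[d_nom, dev * 1000.0])   # nominal distance (mm) vs deviation (um)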
Figure 14 shows the IPD error plotted against the
corresponding IPD; note the 10x difference in Y
axis scale. Without compensation, the IPD error is
very large and increases with IPD, reaching almost
1 mm for the longest distances. This means that the
machine by itself has very poor volumetric
accuracy, which is not surprising considering that
no attempt was made to align the axes or
compensate for any errors such as backlash.
When real-time compensation is enabled,
however, IPD error is reduced dramatically to well
below 15 µm for all IPDs.
Figure 14 - Inter-point distance error vs. inter-point distance for runs with (B) and without (A) real-time compensation; note the change in y axis scale
This large reduction in error is shown more
clearly in histogram form in Figure 15. The
reduction in the parts per million (PPM) error is
approximately 100 fold.
Figure 15 - Inter-point distance PPM error comparison (distribution histograms with and without feedback)
The reduction of IPD error vs. IPD is plotted in
Figure 16, showing a remarkable 40 to 140 times
reduction in error when real-time compensation is
enabled.

Reduction of Maximum Interpoint Distance Deviation
0.00
20.00
40.00
60.00
80.00
100.00
120.00
140.00
50 100 150 200 250 300 350
Interpoint Distance (mm)
R
e
d
u
c
t
i
o
n

o
f

M
a
x

I
n
t
e
r
p
o
i
n
t

D
i
s
t
a
n
c
e

D
e
v
i
a
t
i
o
n

(
x

t
i
m
e
s
)

Figure 16 - Reduction of IPD error vs. IPD when
compensation is enabled
Considering that the laser tracker is
approximately 1.5 m away from the machine during
the test, the tracker's measurement uncertainty
would be better than 30 µm. Therefore the total
positioning accuracy of the machine would be
below 45 µm, and is likely to be considerably better,
since the IPD error is the combination of 2 positioning
errors. Moreover, the positioning accuracy of
the machine is very much limited by the
performance of the laser tracker.
4. CONCLUSIONS AND DISCUSSIONS
This paper has described the development, working
principles, and performance assessment of a
prototype 3 axis machine with real-time laser
tracker error compensation. The static positioning
experiments have shown very encouraging results,
demonstrating that the laser tracker real-time
compensation produced improvements in both
volumetric positioning repeatability and accuracy.
The inter-point distance results show that the
machine is able to consistently reach positioning
accuracies of better than 45 µm. While 45 µm static
accuracy may not be particularly impressive for a
small sized 3 axis CNC, because the system is
designed to be mobile and flexible it can be moved
to a different position when required while still
maintaining positioning accuracy close to that of the
laser tracker.
This implies that, since the laser tracker's
measurement uncertainties are typically below
100 µm at distances up to 10 metres, the system
described in this paper is effectively capable of
meeting typical aerospace positioning accuracy
within a spherical volume 20 m in diameter,
achieving what usually requires a much larger and
more expensive machine.
Although only the static performance is studied
in the paper, because the compensation is real-time,
there will also be dynamic performance
improvements. The dynamic performance of the
machine, more advanced control algorithms and
more optimal methods of filtering and combining
the data from both the servo encoders and the laser
tracker will be explored in future work. A milling
spindle attachment is also being designed, so that
the real machining performance of the system can
be evaluated as well.
REFERENCES
ASME B89.4.19. Performance Evaluation of Laser-
Based Spherical Coordinate Measurement Systems,
2006
Chih-Hao Lo, Jingxia Yuan, and Jun Ni, An Application
of Real-Time Error Compensation on a Turning
Center, International Journal of Machine Tools &
Manufacture, Vol 35, No. 12, 1995, pp 1669-1682.
Estler WT, Edmundson KL, Peggs GN and Parker DH,
Large Scale Metrology-An Update, CIRP Annals -
Manufacturing Technology, Vol 51, No. 2, 2002, pp
587-609
FARO EUROPE GmbH & Co. KG. Faro Laser Tracker
Ion Faro UK technical specification sheet, 2010
J. Ni, Study on Online Identification and Forecasting
compensatory control of volumetric Errors for
Multiple Axis Machine Tools, PhD Dissertation,
University of Wisconsin-Madison, 1987.
Jingxia Yuan and Jun Ni, The Real-Time Error
Compensation Technique for CNC Machining
Systems, Mechatronics, Vol 8, 1998, pp 359-380.
R. Ramesh, M.A. Mannan, and A.N. Poo, Error
compensation in machine tools - a review, Part I:
geometric, cutting-force induced and fixture-
dependent errors, International Journal of Machine
Tools & Manufacture, Vol 40, 2000, pp 1235-1256.
R. Ramesh, M.A. Mannan, and A.N. Poo, Error
compensation in machine tools - a review, Part II:
thermal errors, International Journal of Machine
Tools & Manufacture, Vol 40, 2000, pp 1257-1284.
S. Yang, J. Yang, and J. Ni, The Improvement of
Thermal Error Modelling and Compensation on a
Turning Centre, International Journal of Machine
Tools & Manufacture, Vol 37, No. 11, 1996, pp
527-537.
W.T. Lei and Y.Y. Hsu, Accuracy Enhancement of
Five-Axis CNC Machines through Real-Time Error
Compensation, International Journal of Machine
Tools & Manufacture, Vol 43, 2003, pp 871-877.
HSC MACHINING CENTRE BASIC ERRORS PREDICTION FOR ACCURACY
CONTROL
Jerzy Jedrzejewski
Wroclaw University of Technology, Poland
Jerzy.Jedrzejewski@pwr.wroc.pl
Wojciech Kwasny
Wroclaw University of Technology, Poland
Wojciech.Kwasny@pwr.wroc.pl
ABSTRACT
The paper deals with the modelling and compensation of the main errors of 5-axis high-speed
machining centres and presents a strategy for reducing and effectively compensating thermal errors.
A hybrid model of thermal errors and errors caused by high spindle rotational speeds is described.
Special attention is given to the modelling of errors arising in the spindle assembly and in the tilting
table with direct drives in axes A and C. The conditions which the prognostic modelling of thermal
errors must satisfy to ensure the effective compensation of the latter, the error identification
methods and the results of compensation by means of the multiregression function are presented.
KEY WORDS
Machine Tool, Error Prediction, Compensation
1. INTRODUCTION
As manufacturing competitiveness increases, the
attention of machine tool researchers and designers
focuses on increasing the productivity of machine
tools. One of the main productivity parameters is
machine tool accuracy in production conditions. The
accuracy can be improved through the accurate
identification, real-time prediction, reduction and
compensation of errors.
The more complex the multiaxial machine tool,
the more difficult it is to precisely determine, both
experimentally and theoretically, the time-varying
individual component errors in the machine tool
workspace.
In order to effectively reduce and compensate
errors they need to be accurately identified,
especially when the individual sources of errors
interact with one another. The identification must
therefore take these interactions into account, which
calls for an integrated model of errors. An accurate
predictive model requires fine-tuning based on
precise measurements.
A detailed survey of the most precise error
identification methods can be found in (Turek et al1,
2010). As the most advanced method among them
one should single out the tracking laser method,
which, however, has some limitations, especially the
fact that it requires a large machine tool workspace.
In order to effectively reduce and compensate
errors one needs a well developed strategy. Such a
strategy developed by the authors can be found in
(Jedrzejewski et al1, 2011) and the strategy used by
OKUMA is described in (Catalogue of OKUMA
Co.).
The authors' strategy, improved on the basis of
their latest studies, is presented below. A
particularly important result of the research on the
reduction of errors is their effective compensation in
real time (Turek et al2, 2010), which should lead to
a substantial improvement in machining accuracy.
The higher the cutting speed and the larger the
number of controllable axes, the more difficult it is
to predict volumetric errors and write them in a
program to ensure real time error compensation
(Moriwaki, 2006).
2. ERROR REDUCTION STRATEGY
The starting point for the machine tool error
reduction strategy is error recognition based on the
latest knowledge (Figure 1). Error recognition is
effected through measurements, modelling and
numerical simulation of the behaviour of errors in
natural machine tool operating conditions, taking
into account the variation of the manufacturing
environment over time. On this basis one can
preliminarily determine the possibilities for
reducing errors through the analysis of: the heat
sources, the intensity of heat transfer within the
machine tool and to the environment and the
resulting thermal displacements.
Figure 1 Thermal error minimizing strategy
Thermal displacements are the largest component
in the machine tool error and for this reason they are
the focus of attention of machine tool designers and
researchers. Thanks to analyses based on numerical
simulations one can ensure the thermal symmetry of
a machine tool through changes in its design. After
the heat sources are reduced the symmetry is the
principal way of minimizing thermal displacements.
Through such analyses one can also determine the
effectiveness of error reduction by forced heat
removal from the field of action of large heat
sources, such as high-torque motors in axes A and
C, linear drives in axes X, Y, Z, auxiliary drives,
bearing sets, rolling gears and so on.
The minimization of errors and their variation
over time and the optimization of the costs involved
play a major role in the strategy of improving the
properties of machine tools. As regards costs, they
can be optimized by reducing the energy
consumption by the machine tool in accordance
with the JIT principle (Tsuschiya, 2010). By
applying the above measures one can bring a
machine tool to a state in which the thermal error
will be minimal at minimum costs of the
minimization solutions. In this way one can also
create a machine tool thermal model suitable for the
accurate prediction of the error, aimed at its
compensation.
The displacements of the spindle in a high-speed
machining centre are the result not only of thermal
action, but also of the phenomena taking place in the
bearing unit and in the spindle itself when the speed
changes (Jedrzejewski and Kwasny, 2010;
Jedrzejewski et al, 2008). Thus, in order to
compensate displacements one needs a hybrid
prognostic model which takes into account both
thermal and dynamic displacements.
The error reduction strategy may also take into
account the compensation of geometric errors which
are usually reduced on the basis of their
identification through measurements of machine
tool geometric accuracy. Geometric errors are
compensated by the control system through an
algorithm using measurement data.
3. INTEGRATED HYBRID ERROR
MODEL
The machine tool thermal model presented by the
authors at the NEWTECH 2011 conference
(Jedrzejewski and Kwasny, 2011) takes into account
all the main error sources located in the spindle
assembly, linear drives X,Y,Z, the direct drives of
tilting table axis A and C and the path measuring
systems as well as the effect of changes in ambient
temperature and machining process temperature.
The idea of the hybrid model consists in the
integration of the machine tool thermal model with
the dynamic model of the spindle assembly, which
is particularly important in the case
of high-speed machine tools (Figure 2). Owing to
this it became possible to interrelate the thermal
displacements of the machine tool assemblies with
the axial displacements of the spindle tip, occurring
during changes in rotational speed. The
displacements have a dynamic character since they
are directly caused by the shortening or elongation
of the spindle as a result of an abrupt change in the
centrifugal forces acting on it.
In addition, the centrifugal forces change the
contact angles and internal forces in the spindle's
rolling bearings and, in designs with a slidable
sleeve, the gyrostatic moment leads to axial motions
of the sleeve with the spindle. These abrupt axial
displacements of the spindle tip, in the case of a
large change in rotational speed (e.g. to 45,000
rpm), may amount to as much as 40% of its total
displacements, as shown in figure 3. For speeds
below 15,000 rpm these phenomena are difficult to
observe. For the purposes of determining the
behaviour of the spindle the model is doubly hybrid,
because it combines the finite element method
(FEM) with the finite difference method (FDM) and
integrates the displacements produced by dynamic
forces.
Figure 2 Integrated model of machine tool errors
Since the thermal model takes into account
the interaction between heat sources and forced cooling,
displacements can be determined very accurately.
Owing to this, the possibilities of reducing them, by
reducing the intensity of the heat sources and the
cooling system's operation, can be thoroughly
examined. The model can also be used to determine
temperature measuring points which ensure the most
accurate representation of displacements for the
purposes of their prediction.
Accurate prediction is achieved when the model
is fine-tuned on the basis of a comparison of the
calculated temperatures and displacements with the
measured ones. Hence the precise measurement of
both temperature and displacements is of critical
importance. The fine-tuning of the model covers
both heat generation and transfer.
Figure 3 Components of total spindle tip displacement along axis Z
If the prognostic model is to be directly used for the
compensation of displacements in high-speed
machining centres, it must take into account not
only thermal displacements, but also displacements
generated by centrifugal forces. Moreover, it must
be capable of performing computations fast enough
for real time compensation. In high-speed
machining centres feed rates exceed 70 m/min,
spindle rotational speeds reach 50,000 rpm, thermal
displacements change dynamically and a large axial
spindle shift occurs.
Exemplary calculation results for spindle tip
displacement in axis Z and tilting table displacement
are shown in figure 4.
Figure 4 Results of thermal deformation calculations for 50,000 rpm and a selected tilting rotary table duty cycle
In the example, because of the heating up of the
whole structure, the table and the spindle moved as
much as 73 µm closer together. Considering that the
axial spindle shift amounts in this case to 40 µm, it
is apparent that the values are high and a hybrid
model is needed.
The error contributed by the tilting table's
controllable axes A and C is defined by the model
shown in figure 5. The model covers the generation
and transmission of heat in bearings and motors, and
thermal deformations. A precise algorithm,
containing iteration procedures (separate for motors
and bearings respectively), is used to determine
power losses. Power losses in motors should be
determined with the tilting table's repeatable duty
cycle (comprising the starting stage, a period of
continuous operation and the braking stage) taken
into account (fig. 6). The torque T_{total} loading
the motor during the continuous operation stage is
the sum of the moment of friction T_{friction}, the
cutting force torque T_{machining} and, in
particular, the acceleration torque T_{acc}
associated with motor acceleration and inertia,
which is expressed by the relation:

$$T_{total} = T_{acc} + T_{friction} + T_{machining} \qquad (1)$$
The torque value equivalent for the tilting table
duty cycle (T_{RMS}), marked in figure 5, is
calculated for the whole cycle as the weighted
root-mean-square average over the three cycle
stages, using the formula:

$$T_{RMS} = \sqrt{\frac{\sum_{i=1}^{3} T_i^2\, t_i}{t_{cycle}}} \qquad (2)$$
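As a check of equation (2) (our arithmetic, assuming from figure 6 that the feed stage occupies the stated 70% of the cycle and the acceleration and braking stages 15% each), the C-axis torques reproduce the quoted equivalent value:

$$T_{RMS} = \sqrt{0.15 \times 222^2 + 0.70 \times 50^2 + 0.15 \times 178^2} \approx 118\ \mathrm{Nm}.$$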
Figure 5 Model of thermal behaviour of tilting rotary tables with direct drive (A-axis and C-axis duty cycles)
Figure 6 Duty cycle of tilting rotary table for axis A or C (feed time = 70% of total cycle; C axis: 50 Nm feed torque, 222 Nm and 178 Nm transient torques, T_RMS = 118 Nm, Tc(Max) = 167 Nm; A axis: 70 Nm feed torque, 440 Nm and 353 Nm transient torques, T_RMS = 228 Nm, Tc(Max) = 315 Nm)
Exemplary table displacements along axis Z,
calculated using the model, are shown in figure 7.
The results were obtained for the duty cycle of axes
A and C and the torque values shown in figure 6.
It is apparent that the displacements of axis A in
direction Z contribute significantly more to the error
than the deformations of the tilting table being in
stationary motion at a rotational speed of 20 rpm.
The displacement of axis A was calculated for
swinging motion at a speed (in stationary motion) of
10 rpm, a rotation angle of 90° and an ambient
temperature of 20 °C. Displacement X, causing the
axial tensioning of the bearings in the supports of
axis A, assumes a significant value, which is
detrimental to the life of the bearings.
Figure 7 Results of thermal deformation computations

The above precise hybrid model of the machine
tool behaviour in the controllable axes in the
machine tool workspace makes it possible to
analyze the causes of errors and the displacements
in order to minimize them. The model is highly
complex, and as a result the computation of the
displacements to be compensated takes much longer
than real-time compensation would allow. Even the
use of state-of-the-art computing systems does not
sufficiently shorten the computing time.
Therefore prognostic models for the compensation
of such errors need to be simpler, but accurate
enough to ensure that compensation reduces
machining errors to a degree required for the given
machine tool and machining process.
4. ERROR PREDICTION AND ERROR
COMPENSATION PROCEDURES
Machine tool error compensation is the simplest and
most promising method of eliminating the effect of
errors on machining accuracy. Consequently,
research centres all over the world and machine tool
manufacturers conduct detailed research aimed at
developing effective compensation methods. As
already mentioned, in order to achieve this, one
must first have a precise error measuring method
and then develop an accurate error prediction model
and an effective compensation procedure. It
concerns not only the prediction of thermal errors,
but also the surface form errors due to deflections of
the cutter and the workpiece (Wan and Zhan, 2006),
as well as the machining errors induced by linear
interpolation between cutter locations (Chu et al,
2008).
Many methods (table 1) are employed to measure
errors but, despite the fact that many of them use
laser technology, it is still difficult to achieve the
volumetric (e.g. thermal) error identification
accuracy required by precision and high-precision
machine tools. The most advanced methods, based
on tracking laser technology, can be used only to a
limited extent since, as a rule, they do not guarantee
a measuring accuracy better than 3 µm and they are
dedicated to large-size machine tools.
hand, it is very difficult to rely on test object
measurements, which require a very complicated
measuring procedure and much experience.
The important factors are: how capable the given
method/device is of measuring the volumetric error,
the number of measured components, to what
machine tools the method is dedicated, how time
consuming and costly the measurements are and the
availability of the given method.
Table 1- Characteristics of measuring devices used to determine volumetric error
(for each device: volumetric error components; machine tool volumetric error; application; relative time; relative cost; availability on the market - more "I" marks denote more time-consuming or costlier measurements)

INTERMEDIATE METHODS
o DBB rod: 12 components measured as a group; calculated after additional measurements; medium-sized machine tools; time I; cost I; on market: yes
o KGM grid: 5 components measured as a group; calculated; medium-sized machine tools; time I; cost II; on market: yes

DIRECT METHODS
o 1D laser: 21 components measured individually; calculated; 3-axis machine tools; time IIIIIIIII; cost III; on market: yes
o 3D laser: 21 components measured as a group; calculated; 3-axis machine tools; time III; cost IIII; on market: yes
o Laser vector method: 21 components measured from 4 diagonals; calculated; medium-sized 3-axis machine tools; time IIIII; cost IIIII; on market: yes
o LBB rod: 21 components measured from 3 positions; calculated; medium-sized 3-axis machine tools; time IIII; cost IIIII; on market: yes
o 3D LBB rod: all components for linear and rotational axes; measured; medium-sized multiaxial machine tools; time II; cost IIIIII; on market: no
o Tracking laser: all components for linear and rotational axes; measured; large- and medium-sized machine tools; time IIII; cost IIIIIII; on market: yes
o Tracking laser with active target: all components for linear and rotational axes; measured; large- and medium-sized machine tools; time II; cost IIIIIIIII; on market: yes

OTHER
o 3PSD system: 21 components measured as a group; calculated; small-sized 3-axis machine tools; time III; cost I; on market: no
o 4DOF system: 4 components for rotary tables; calculated; rotary tables; time IIII; cost I; on market: no
o HSM system: 12 components for two rotation axes; measured; rotation axes; time II; cost II; on market: yes

LIMITATIONS
o DBB rod: only circular motion, limited rod length
o KGM grid: limited diameter of optical grid
o 1D laser: only linear axes
o 3D laser: lower accuracy in axes perpendicular to the axis being measured
o Laser vector method: only linear axes, limited extent of motions x, y, z
o LBB rod: only linear axes, limited telescope length, dead zone
o 3D LBB rod: limited telescope length, dead zone
o Tracking laser: limited reflector angle of view, dead zone, accuracy of 0.2-3 µm in a range of 0-5 m
o Tracking laser with active target: accuracy of 0.2-3 µm in a range of 0-5 m
o 3PSD system: only linear axes, small measuring range
o 4DOF system: only rotational axes
o HSM system: only machine tools with FIDIA class C control system

It may turn out that it makes sense to use not a
single highly complicated method, but several easily
available methods, e.g. one method for the linear
axes and another method for the rotational axes. The
choice of a method will always be determined by
the required precision of the machine tool and by
the measurement cost and time.
On the basis of a detailed survey of methods of
modelling errors, aimed at compensating the latter,
described in the literature one can distinguish the
following:
o linear and nonlinear regression;
o neural networks, fuzzy logic, transfer function;
o gray system theory;
o B splines and NURBS surfaces;
o independent compensation analysis;
o homogenous transformation functions.
From among them the regression functions are
most often used because of the simplicity of the
model and the ease of their implementation in the
compensation algorithm. Neural network methods are often used too; in that case, they should also take their own prediction errors into consideration (Chryssolouris et al., 1996).
For the modelling, one uses temperatures measured at the machine tool points where they are best correlated with the machine tool error. These points are determined by means of an accurate thermal model of the machine tool, which enables the numerical simulation of thermal errors. As few temperature measuring points as possible are selected, since the larger the number of temperatures input into the prognostic model, the more complicated the function becomes. A polynomial function, mathematically describing displacement versus temperature, is highly suitable for determining thermal errors. The function is verified experimentally, through displacement measurements, and theoretically, through model computations.
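As an illustration of such a polynomial function, the sketch below fits displacement versus temperature with NumPy; the calibration data are hypothetical, and a real machine tool would use temperatures from the selected measuring points together with experimentally measured displacements.

import numpy as np

# Hypothetical calibration data: temperature at a selected measuring
# point [deg C] and measured spindle displacement in Z [um]
temps = np.array([20.0, 24.5, 28.0, 31.5, 35.0])
disp = np.array([0.0, 6.2, 12.8, 20.1, 28.5])

# Fit a second-order polynomial describing displacement versus temperature
predict = np.poly1d(np.polyfit(temps, disp, deg=2))

# Thermal error predicted for a current temperature reading; this value
# would be handed to the compensation algorithm in the control system
print(f"predicted dZ at 30.0 deg C: {predict(30.0):.1f} um")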


Figure 8 Experimentally identified and predicted high speed machining centre spindle displacements and theoretical
compensation results


At the prognostic model's input there are measured machine tool temperatures, and at its output there are displacements, which are input into the compensation algorithm written into the control system and integrated with the interpolation model. For each machine tool it is necessary to determine the proper function and verify it experimentally in order to establish the effectiveness of the proposed compensation.
Besides the thermal module, the hybrid
prognostic model includes an error (due to
centrifugal forces/spindle rotational speed) module.
The latter predicts the value of shift versus
rotational speed. The current values of rotational
speed for the duty cycle being performed are
acquired from the control system.
Figure 8 shows a comparison of the spindle
displacements in axis Z, measured during the
operation of a 3-axis milling centre, with their post-
compensation values predicted by the integrated
hybrid model.
One can notice that the largest errors occur during
changes in rotational speed, caused by uncontrolled
spindle jumps. The errors can be considerably
reduced by dividing the correction function into two
functions: a thermal one (for temperatures) and a
shift function (for spindle revolutions).
5. CONCLUSION
A rationally designed strategy aimed at minimizing
the errors of highly efficient machine tools forms
the basis for the consistent and effective increasing
of machining accuracy. When applied to error
reduction, the precise hybrid model of the behaviour
of the main machine tool assemblies in machining
task performance conditions enables one to obtain a
considerable increase in the accuracy of the
processes involved. Such models, however, need to
be improved in order to achieve higher effectiveness
of error prediction in real time. Also the procedures
for real-time error compensation should be subject
to further improvement.
6. ACKNOWLEDGMENTS
The authors are grateful to the Ministry of Science
and Education of Poland and the Ministry of
Knowledge Economy of the Republic of Korea for sponsorship.
REFERENCES
Chryssolouris G., Lee M., Ramsey A., Confidence Interval Prediction for Neural Network Models, IEEE Transactions on Neural Networks, Vol. 7, No. 1, 1996, pp. 229-232
Chu C.H., Huang W.N., Hsu Y.Y., Machining accuracy improvement in five-axis flank milling of ruled surfaces, International Journal of Machine Tools and Manufacture, Vol. 48, Issues 7-8, 2008, pp. 914-921
Jedrzejewski J., Kwasny W., Modelling of angular contact ball bearings and axial displacements for high speed spindles, CIRP Annals - Manufacturing Technology, Vol. 59, 2010, pp. 377-382
Jedrzejewski J., Kwasny W., Reduction of machine tool thermal errors in drive axis, NEWTECH 2011 Brno, Proceedings, 2011, pp. 133-136
Jedrzejewski J., Kwasny W., Modrzycki W., Identification and reduction of thermal errors in a high performance 5-axis machining centre, Total Quality Management & Excellence, Vol. 39, No. 1, 2011, pp. 17-22
Jedrzejewski J., Kwasny W., Kowal Z., Modrzycki W., Operational behaviour of high speed spindle unit, MM Science Journal, October, 2008, pp. 40-43
Moriwaki T., Trends in recent machine tool technology, NTN Technical Review, No. 74, 2006, pp. 2-7
Tsuschiya S., Continuously renovating manufacturing machines and systems adaptable for new era, JIMTOF 2010, The International Machine Tool Engineers Conference, Proceedings, 2010, pp. 13-26
Turek P., Kwasny W., Jedrzejewski J., Advanced methods for the identification of machine tool errors, Inzynieria Maszyn, Vol. 15, No. 1-2, 2010, pp. 7-37 (in Polish)
Turek P., Jedrzejewski J., Modrzycki W., Methods of Machine Tool Error Compensation, Journal of Machine Engineering, Vol. 10, No. 4, 2010, pp. 5-25
Wan M., Zhang W.H., Efficient algorithms for calculations of static form errors in peripheral milling, Journal of Materials Processing Technology, Vol. 171, Issue 1, 2006, pp. 156-165
Catalogue of OKUMA Co., Universal center MU-400VA 5-axis vertical machining center


DECISION-MAKING FOR METROLOGY SYSTEM SELECTION BASED ON
FAILURE KNOWLEDGE MANAGEMENT
Wei Dai
School of Reliability and Systems Engineering,
Beihang University,
Beijing, 100191, P.R. China
dw@buaa.edu.cn
Xi Zhang
Department of Mechanical Engineering,
University of Bath,
Bath, BA2 7AY, United Kingdom
X.Zhang@bath.ac.uk


Paul G. Maropoulos
Department of Mechanical Engineering,
University of Bath,
Bath, BA2 7AY, United Kingdom
P.G.Maropoulos@bath.ac.uk
Xiaoqing Tang
School of Mechanical Engineering and Automation,
Beihang University,
Beijing, 100191, P.R. China
tangxq@buaa.edu.cn
ABSTRACT
Decision-making in relation to product quality is indispensable in order to reduce product
development risk. Based on the identification of the deficiencies of Quality Function Deployment
(QFD) and Failure Modes and Effects Analysis (FMEA), a novel decision-making method is
presented that concentrates on a knowledge management network under various failure scenarios.
An ontological expression of failure scenarios is presented together with a framework of failure
knowledge network (FKN). A case study is provided according to the proposed decision-making
procedure based on FKN. The methodology is applied in the Measurement Assisted Assembly
(MAA) process to solve the problem of prioritizing the measurement characteristics. The
mathematical model and algorithms of the Analytic Network Process (ANP) are introduced to calculate the priority values of measurement characteristics, together with an optimization algorithm for matching measurement targets with measurement systems. This paper
provides a practical approach for improved decision-making in relation to quality control.
KEYWORDS
Decision-making in quality control, Failure knowledge management, Decision-making model,
Analytic network process

1. INTRODUCTION
The inner and outer environments where the
products are being designed and developed are
complex and variable. Within such creative and
uncertain surroundings, potential risks can never be
fully avoided or mitigated (Jerrard et al. 2008).
Therefore, risk assessments are required at critical
decision-making points to keep product
development at the possible lowest risk.
Traditionally, the decision-making process is implemented qualitatively by subject domain experts. Among the numerous research papers available on manufacturing systems and process planning, most focus on the operations directly related to the processing phases (Chen and Ko 2009) and only a few discuss decision-making and optimization in terms of global quality control.
Failure knowledge can be employed as a
quantitative methodology for decision-making in
product quality. Different types of failures, which
lead to the breakdown of certain functions, emerge
in design, production and enterprise departments
with respective possibilities of occurrence during
the life cycle of similar products. The correlation
degrees for each failure with different
manufacturing characteristics, such as product
functions, components, processes and organizations
structure, are essential parts of failure knowledge.
When observing the knowledge network of failure scenarios, it is important to examine the perceived weights of manufacturing characteristics obtained from the market and customers and to compare them with the overall considerations at the time of task launch.
quality based on failure knowledge is composed of
the following six tasks: (i) predicting and
identifying risks and faults, (ii) analyzing the cause
and mechanism of the past similar failures, (iii)
presenting optional proposals, (iv) selecting the
optimal scheme, (v) conducting the designated plan,
and (vi) verifying the execution results.
In this paper, a quantitative approach for
decision-making in product quality is proposed. An
ontological expression of failure scenario is
presented together with a framework of failure
knowledge network (FKN). The decision-making
process in product quality based on FKN is
discussed in detail, followed by a case study carried
out to verify the novel decision-making process.
2. LITERATURE REVIEW
The main reason why similar failure cases are
repeated in practice is that the knowledge of past
failures is not well captured and communicated to
related people (Hatamura et al. 2003). In order to
utilize the knowledge of past failure cases, an
efficient and unified method has to be provided for
communicating failure experience.
2.1. FAILURE EXPRESSIONS AND
MANAGEMENT
Many organizations have constructed the databases
that store failure information in addition to manuals,
documents and procedures (Colding 2000).
However, because of poor transferring of failure
information to other parts of the organization, the
failure knowledge is not effectively communicated
and the same failures are repeated.
Failure Modes and Effects Analysis (FMEA) is a
method that is used to identify and prioritize
potential failures in products or processes, and has
been widely applied to acquire and update failure
knowledge within an organization (Dai et al. 2009a).
The advantages and disadvantages of applying
FMEA are extensively examined both in industry
and academia. Conclusively speaking, the
traditional FMEA uses three factors, Occurrence,
Severity and Detection, to determine the Risk
Priority Number (RPN), which is used to address
and prioritize the potential failures rapidly. However,
it has drawbacks such as the deficiencies in the
relationship expressions between different failure
components, so FMEA cannot be used as a
technique for knowledge formulation. In order to
address this deficiency of FMEA, failure scenario is
introduced (Kmenta and Ishii 2000) and the
ontological view of failure scenario is shown in
Figure 1.
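In the traditional formulation, RPN = Occurrence × Severity × Detection; for example, with each factor rated on a 1-10 scale, ratings of 4, 7 and 5 yield an RPN of 140.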
Figure 1 Ontological view of a failure scenario (entities: functions, components, processes and organizations, linked to identification, causes, effects and measures through failure conjunctions and failure transmission)
Failure scenarios of the mechanical products refer
to any customer-perceived deviations from the ideal
function of the product, including overload, impact,
corrosion, fatigue, creep, rupture, deformation and
cracking. There are four entities engaged in a failure
scenario: functions, components, processes and
organizations. The component entity is the carrier
of the failure. The amount of failure types regarding
one component is finite (Arunajadai et al. 2004),
and different types of failures have conjunctions if
they are related to the same component. The
conjunctive failures are subject to certain variations
of product characteristics, which play an important
role in the occurrence of conjunctive failures. The
function entity is used to record connections
between the failure and functions. When the failure
takes place, it is usually followed by a breakdown of
the corresponding function, and other failures will
occur if no corrective measure is adopted. On many
occasions a function is interfered with by different failures, each with its respective probability, and the
breakdown of this function can also give rise to
many other failures. The processes entity is used
to trace the chronological progression of the failure.
A failure can be regarded as a unified process,
through which input leads to output. As a
developing failure becomes evident, its effects are
firstly established, and the corrective actions will be
taken after analyzing the causes that need to be
addressed to deal with the event. The
organizations entity should be regarded as a
monitor to take care of the failure scenario. Each
individual in the organization has different roles
with different responsibilities during the failure
process. Their respective actions and behaviours, as

well as the failure status, are supervised and
classified to construct and improve a quality system.
2.2. RELATIONSHIPS BETWEEN DECISION-
MAKING AND KNOWLEDGE MANAGEMENT
Decision-making is one of the most common thinking activities and one of the most crucial processes of any business. It has been explained in many theoretical frameworks (Hammond et al. 1980; Kaplan and Schwartz 1977) in early research carried out during the 1980s and 1990s. In the digital manufacturing environment, which is often referred to as the knowledge age (Stutt and Motta 1998), more and more decisions related to productivity are highly dependent on decision makers' experience and knowledge (Kidd 1994). Therefore, decision-making techniques and decision-making support tools need to be developed to meet the timeliness and utility of the decision information required by people at all levels of the organization. The lack of knowledge is a major shortcoming of important business decisions (Wiig 1997).
To create, store, transfer and apply the large volume of knowledge within the business processes of distributed manufacturing organizations, Knowledge Management (KM) has been proposed, referred to as a discipline and a managerial policy initiative that encapsulates the strategies, systems and processes that enable and simplify the creation, capture, sharing, distribution and utilization of an organization's knowledge (Oliver and Kandadi 1997). The detailed process of knowledge management is shown in Figure 2. The main aspects of KM involve the creation of value from an organization's intangible assets and the information systems designed to facilitate the sharing and integration of knowledge (Alavi and Leidner 2001; Schultze and Leidner 2001). However, KM is still an immature discipline, mostly because a codified, generally accepted framework has not been established (Metaxiotis et al. 2005; McDermott 1999).
Figure 2 Knowledge management process (knowledge identification, creation, codification, storage, diffusion and use)
Both KM and the decision-making activity during the product development process concern the representation and processing of knowledge by machines, human beings, organizations or societies (Borghoff and Pareschi 1997). The overall aim of KM in decision-making is to ensure that the right knowledge is available in the right forms to the right entities at the right time for the right cost (Kotnour et al. 1997). The relationship between decision-making and KM can therefore be summarized as follows: decision-making is a knowledge-intensive managing activity requiring knowledge as its raw material. Proficiency and efficiency in KM is increasingly important to the competitiveness of decision makers.
3. PROPOSED APPROACH
3.1. FAILURE KNOWLEDGE NETWORK
In order to manage and structure the failure
knowledge network, research is required to deal
with the connections between the system
characteristics and the triggers, as well as the
connections between the system characteristics and
the results. Once the relationships have been
identified and clarified, it is possible to view the
failures, effects, causes, and actions in terms of
characteristics, with a trigger leading to a result.
Generally, failure scenarios are induced by
unexpected variations of certain manufacturing
characteristics during the new product development
(NPD), which includes design, processing, assembly
and validation. For this reason, the relationship
between failures and characteristics for both
processes and products, as well as the experiences
dealing with the similar failure processes, are the
invaluable source of knowledge for NPD. The
failure knowledge network (FKN) comprises
the following five parts: (i) the connection between
failures and product functions, (ii) the relationship
between failures and product components, (iii) the
correlation between failures and organizations, (iv)
the association between failures and product
processes, and (v) the conjunction among different
failures.


Figure 3 A schematic of the FKN

As shown in Figure 3, FKN can be described as a
four-dimensional matrix, including components,
functions, processes and organization. Each element
in the matrix is a failure scenario and represents the
related failures within the corresponding
dimensions. The conventional factors of failures are
embodied in the representation, including event,
detection, effect, severity, solution weight, cause,
monitor, reappearance, operation, efficiency and
precaution. The indexes of the factors are provided
by the subject matter experts and the engineers
according to the degree of correlation between the
corresponding characteristics and failures.
3.2. DECISION-MAKING MODEL BASED ON
FAILURE KNOWLEDGE MANAGEMENT
Traditionally, quality function deployment (QFD) is
employed in decision-making processes to
quantitatively map the customer requirements to
characteristics of design, processing, assembly and
validation. This is known as a top-down approach,
in which the qualitative requirements of customers
are related to the quantitative weights of
manufacturing characteristics during product
development (Labodova 2004; Thornton 1999). A
novel approach for QFD based on failure
knowledge management is proposed in this paper,
enabling the selection of the optimal schemes by
analyzing the correlation between similar product
failures and the relationships between the failures
and the manufacturing characteristics.
Herein, the use of the analytic network process
(ANP) (Saaty 1996) is proposed in order to
incorporate the dependency issues between the
failures and manufacturing characteristics in a
decision-making model. ANP differs from analytic
hierarchy process (AHP) in that it allows the inner
dependency within a cluster and outer dependencies
among clusters. Based on the hierarchical structure,
one can calculate the weights of manufacturing
characteristics by using the ANP method. ANP
provides a complete structure by which it is possible
to find the interactions between the elements and
the clusters from the problems, and then deduce the
priority values and proportion value of each
scheme. The ANP method includes two parts: (i) the
control hierarchy, which refers to the network
relationship of guideline and sub guideline,
influencing the internal relationship of systems, (ii)
the network hierarchy, which refers to the network
relationship between elements and clusters.
Figure 4 Decision-making model

Figure 4 shows the representation of the
decision-making model, which is based on two parts:
(i) the decision-making targets and (ii) the failure
knowledge network (FKN). The decision-making
targets include the precaution targets, the monitor
targets, the control targets, and the improvement
targets. The structure of FKN includes a cluster of
failure scenarios and four extra manufacturing
characteristics clusters, namely, functions,
components, processes and organizations.
The first step of the decision-making
methodology is to identify the failures and the
corresponding characteristics. The second step is to
determine the importance value of characteristics. In
the third step, the body of the house will be filled by
comparing the characteristics with respect to each
target or characteristic. Finally, the interdependent
priorities of the characteristics are obtained by
analyzing the dependencies among the targets and
characteristics. The supermatrix representation of
decision-making model can be obtained as shown in
Figure 5.

Figure 5 Decision-making supermatrix
As shown in Figure 5, W11, W22, ..., W66 are the inner dependency matrices of the targets and characteristics respectively. The other matrices are outer dependency matrices containing the column eigenvectors with respect to each target or characteristic. The priority values and proportion values of each scheme can be obtained by raising the weighted supermatrix to successive powers until it converges.
3.3. OPTIMIZATION ALGORITHM
The final goal of metrology systems selection is to
obtain an optimal arrangement between metrology
tasks and metrology systems as well as to
economically satisfy the customer requirements and
engineering restrictions. Generally, more than one metrology system can be applied to accomplish the same inspection and verification task, and in turn a single metrology system can accomplish numerous inspection and verification tasks with different matching degrees (Singhal et al. 2007; Zhou and Zhao 2002).
In the metrology system selection and optimization process, there are two types of constraints between inspection and verification tasks and metrology systems: (i) all inspection and verification tasks must be accomplished, and (ii) the capacity limits of each metrology system cannot be exceeded. Regarding the above constraints, it is pre-assumed that there are m metrology systems that can be applied to assure n tasks. For this kind of complexity in the optimal planning process, multiple-task optimization may be involved. Other constraint factors such as time, cost and priority must also be taken into account in the product development stages. Weighted zero-one goal programming (WZOGP) is a feasible method to optimize the matching process, and the general mathematical model is presented as follows.

$$\min \sum_{x=1}^{r} \sum_{z=1}^{t} \left( \omega_x \left( 1 - \frac{w_{xz}}{W} \right) \right) d_{xz} \qquad (1)$$

wherein:

$$W = \max_{1 \le x \le r,\; 1 \le z \le t} (w_{xz}), \qquad d_{xz} = 0 \text{ or } 1, \qquad \sum_{x=1}^{r} d_{xz} = 1, \quad z = 1, 2, \dots, t,$$

and

$$\sum_{x=1}^{r} \sum_{z=1}^{t} r_{xz}^{k}\, d_{xz} \le R_k, \qquad k = 1, 2, \dots, m.$$
In Formula (1), ω_x is the weight of task T_x (x = 1, 2, ..., r) and w_xz is the matching degree between task T_x and metrology system MS_z. d_xz is a 0-1 variable, wherein d_xz = 1 means that metrology system MS_z (z = 1, 2, ..., t) is selected to implement task T_x. R_k represents the kth resource restriction (including time, cost, etc.), and r_xz^k is the amount of resource R_k needed when utilizing metrology system MS_z to implement task T_x. The optimization of metrology system selection for inspection and verification tasks can be obtained by employing WZOGP. Modification of the matching result can be made by the designers and engineers based on the results of further simulations and evaluations in the digital environment.
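By way of illustration, the brute-force sketch below evaluates the WZOGP objective of Formula (1) over all feasible 0-1 assignments, assuming each task is implemented by exactly one metrology system and a single budget-type resource; all numbers are hypothetical, and a practical implementation would use an integer-programming solver instead of enumeration.

import itertools

weights = [0.5, 0.3, 0.2]                     # omega_x, task weights (assumed)
match = [[0.9, 0.6], [0.4, 0.8], [0.7, 0.7]]  # w_xz, matching degrees (assumed)
cost = [[200, 300], [150, 300], [100, 250]]   # r_xz for one budget resource (assumed)
budget = 600                                  # R_k (assumed)

W = max(max(row) for row in match)            # normalizing constant of Formula (1)

def objective(assign):
    # assign[x] = z encodes d_xz = 1: system z implements task x
    return sum(weights[x] * (1 - match[x][z] / W) for x, z in enumerate(assign))

feasible = (a for a in itertools.product(range(2), repeat=3)
            if sum(cost[x][z] for x, z in enumerate(a)) <= budget)
print("task -> system:", min(feasible, key=objective))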
4. EXPERIMENTATION AND CASE
STUDY
To verify the decision-making model proposed in the previous section, a simple artifact has been created using popular CAD/CAM modelling software. As shown in Figure 6, it is a rectangular block, on the top face of which two vertical holes are slotted symmetrically and a shallow groove is cut in the centre. The design characteristics of the artifact are derived and simplified from diesel engine blocks, which will be measured on Coordinate Measurement Machines (CMMs) for the purpose of 6-Sigma quality control in the Measurement Assisted Assembly (MAA) process.

In order to determine the weights and values of Measurement Characteristics (MCs) in the decision-making model, all the information required to construct a complete measurement specification has been loaded onto the CAD model in Figure 6, including its design specifications and related tolerance features.

Figure 6 The tolerance features to represent Measurement
Characteristics
In the case study, the decision-making target is to prioritize the MCs of the part and select an appropriate metrology system to satisfy the customer requirements and engineering restrictions economically. To generate the decision-making supermatrix, four of the design specifications and related tolerance features have been picked as the representation of the MCs, which are MC1 = cylindricity, MC2 = location of the slotted hole, MC3 = location of the cut groove and MC4 = the diameter of the bigger round step on top of the slotted hole. The weight of each characteristic has to be decided before executing the metrology resource planning activities (Dai et al. 2009b). Hence, the decision-making model is employed to express and codify the relationship between the MCs and failure scenarios.
Four MCs are set as cluster MC, and two dominant failure scenarios, which are F1 = overheating and F2 = detonation, are selected as the FS cluster. From the hierarchical structure, cluster FS is the control level whilst cluster MC is regarded as the network level. By extracting information from the FKN, one can obtain the pairwise comparison matrices A, B and C, and then evaluate their eigenvectors to group into the supermatrix SM in Figure 7.
Figure 7 Supermatrix structure for decision-making in MAA (pairwise comparison matrices A, B and C over the FS cluster {F1, F2} and the MC cluster {MC1, MC2, MC3, MC4}, grouped into the supermatrix SM)
To solve the supermatrix SM, one first needs to transform it into a weighted supermatrix. The convergent supermatrix (shown in Figure 8) is obtained by raising the weighted supermatrix to successive powers until it converges, and it shows the weight of each MC. The subsequent procedures of measurement planning can be conducted thereafter.

Figure 8 Decision-making supermatrix in MAA
In order to implement those quality aims in the
product inspection process, seven metrology
systems, including five laser trackers and two laser
radars, are ready to be deployed. The usage cost c_z and capacity limit R_z of each metrology system MS_z are listed in Table 1. Assuming that the total budget for this project is 2000, the economical optimization for metrology system selection then has to be applied.

Table 1- The usage cost and capacity limits of each metrology system

MS_z:  1    2    3    4     5    6     7
c_z:   200  300  500  1600  300  1000  500
R_z:   1    1    1    2     1    2     3

According to the constraint factors, including cost,
processing time and capacity limits of the metrology
operations, WZOGP is applied as described in section 3.3. Finally, the measurement system
selection result has been determined, as shown in
Figure 9.
Figure 9 Result of metrology system selection (mapping of the measurement targets T1-T3 to the selected measurement systems MS_z)
5. CONCLUSION
This study shows that the knowledge from past
failures of similar products is very useful for
decision-making in relation to product quality
control. This research has set up a failure knowledge based framework for decision-making in product quality control, and has materialized it to calculate the priorities and lower the risks of manufacturing characteristics during new product development. The methodologies developed include: (i) identifying the relationships between failure scenarios and manufacturing characteristics, (ii) defining the failure knowledge network according to the quantitative factors obtained, and (iii) employing the ANP method to deduce the priority and proportion of each scheme. Future research includes the construction of an IT assistant system, which can support decision-making by utilizing failure knowledge management.
6. ACKNOWLEDGMENTS
The authors wish to acknowledge the financial
support of the State Scholarship Fund of China as
well as the Engineering and Physical Sciences
Research Council's Innovative Design &
Manufacturing Research Centre at the University of
Bath, United Kingdom.

REFERENCES
Alavi, M., and Leidner, D. E. (2001). "Review:
Knowledge Management and Knowledge
Management Systems: Conceptual Foundations
and Research Issues." MIS Quarterly, 25(1),
107-136.
Arunajadai, S. G., Uder, S. J., Stone, R. B., and Tumer, I.
Y. (2004). "Failure mode identification through
clustering analysis." Quality and Reliability
Engineering International, 20(5), 511-526.
Borghoff, U. M., and Pareschi, R. (1997). "Information
technology for knowledge management."
Journal of Universal Computer Science, 3(8),
835-842.
Chen, L. H., and Ko, W. C. (2009). "Fuzzy linear
programming models for new product design
using QFD with FMEA." Applied Mathematical
Modelling, 33(2), 633-647.
Colding, B. N. (2000). "Prediction, Optimization and
Functional Requirements of Knowledge Based
Systems." CIRP Annals - Manufacturing
Technology, 49(1), 351-354.
Dai, W., Maropoulos, P., Tang, X., Huo, D., and Cai, B.
(2009a). "Quality Planning Based on Risk
Assessment." In: Proceedings of the 6th CIRP-
Sponsored International Conference on Digital
Enterprise Technology, 223-237.
Dai, W., Maropoulos, P., Tang, X., Jamshidi, J., and Cai,
B. (2009b). "Measurement Resource Planning:
A Methodology That Uses Quality
Characteristics Mapping." In: Proceedings of
the 6th CIRP-Sponsored International
Conference on Digital Enterprise Technology,
999-1012.
Hammond, K. R., McClelland, G. H., and Mumpower, J.
(1980). Human judgment and decision making:
Theories, methods, and procedures, Praeger
Publishers.
Hatamura, Y., Iino, K., Tsuchiya, K., and Hamaguchi, T.
(2003). "Structure of Failure Knowledge
Database and Case Expression." CIRP Annals -
Manufacturing Technology, 52(1), 97-100.
Jerrard, R. N., Barnes, N., and Reid, A. (2008). "Design,
Risk and New Product Development in Five
Small Creative Companies." International
Journal of Design, 2(1), 21-30.
Metaxiotis, K., Ergazakis, K., and Psarras, J. (2005). "Exploring the world of
knowledge management: agreements and
disagreements in the academic/practitioner
community." Journal of Knowledge
Management, 9(2), 6-18.
Kaplan, M. F., and Schwartz, S. (1977). Human judgment
and decision processes in applied settings,
Academic Press.
Kidd, A. (1994). "The marks are on the knowledge
worker." In: Proceedings of the SIGCHI
conference on Human factors in computing
systems: celebrating interdependence, Boston,
United States, 186-190.
Kmenta, S., and Ishii, K. (2000). "Scenario-Based
FMEA: A Life Cycle Cost Perspective." In:
Proceedings of ASME Design Engineering
Technical Conferences, Baltimore, Maryland.
Kotnour, T., Orr, C., Spaulding, J., and Guidi, J. (1997).
"Determining the benefit of knowledge
management activities." In: IEEE International
Conference on Systems, Man, and Cybernetics,
Orlando, United States, 94-99.
Labodova, A. (2004). "Implementing integrated
management systems using a risk analysis based
approach." Journal of Cleaner Production,
12(6), 571-580.

McDermott, R. (1999). "Why information technology
inspired but cannot deliver knowledge
management." California management review,
41(4), 103-117.
Oliver, S., and Kandadi, K. R. (1997). "How to develop
knowledge culture in organizations? A multiple
case study of large distributed organizations."
Journal of Knowledge Management, 10(4).
Saaty, T. L. (1996). Decision Making with Dependence and Feedback: The Analytic Network Process,
RWS Publications, Pittsburgh.
Schultze, U., and Leidner, D. E. (2001). "Studying
knowledge management in information systems
research: discourses and theoretical
assumptions." MIS Quarterly, 26(3), 213.
Singhal, K., Singhal, J., and Starr, M. K. (2007). "The
domain of production and operations
management and the role of Elwood Buffa in its
delineation." Journal of Operations
Management, 25(2), 310-327.
Stutt, A., and Motta, E. (1998). "Knowledge Modelling:
an Organic Technology for the Knowledge
Age." In: The Knowledge Web: Learning and
Collaborating on the Net, M. Eisenstadt and T.
Vincent, eds., Kogan Page, London, 211-224.
Thornton, A. C. (1999). "A Mathematical Framework for
the Key Characteristic Process." Research in
Engineering Design, 11(3), 145-157.
Wiig, K. M. (1997). "Knowledge management: Where
did it come from and where will it go?" Expert
Systems with Applications, Special Issue on
Knowledge Management, 13(1), 1-14.
Zhou, M., and Zhao, C. (2002). "An optimization model
and multiple matching heuristics for quality
planning in manufacturing systems." Computers
& Industrial Engineering, 42(1), 91-101.



COMPREHENSIVE SUPPORT OF TECHNICAL DIAGNOSIS BY MEANS OF
WEB TECHNOLOGIES
Markus Michl
Institute for Manufacturing Automation and
Production Systems, University of Erlangen-
Nuremberg
michl@faps.uni-erlangen.de
Christian Fischer
Institute for Manufacturing Automation and
Production Systems, University of Erlangen-
Nuremberg
cfischer@faps.uni-erlangen.de


Jochen Merhof
Institute for Manufacturing Automation and
Production Systems, University of Erlangen-
Nuremberg
merhof@faps.uni-erlangen.de
Jörg Franke
Institute for Manufacturing Automation and
Production Systems, University of Erlangen-
Nuremberg
franke@faps.uni-erlangen.de
ABSTRACT
Competing in today's production environment requires complex automated production facilities that have to be operated at high efficiency. In order to achieve this, software support for technical diagnosis is necessary to ensure quick reaction and troubleshooting in case of failures. This paper presents several system approaches that efficiently support the stages of monitoring, diagnosis and therapy by using web technologies. In detail, SVG (Scalable Vector Graphics) and VRML (Virtual Reality Modeling Language) based monitoring solutions will be discussed, as well as expert systems and multi-agent systems that are used during the diagnosis and therapy stages. These well-known system concepts, which have been pursued for several decades, can be enhanced by integrating various web technologies in their realization. This renders them more reusable and user friendly, thus making them more efficient and broadly applicable in the field of manufacturing automation.
KEYWORDS
Diagnosis, Monitoring, Web technologies, Expert systems, Scalable Vector Graphics

1. INTRODUCTION
These days the production environment is influenced by multiple fast-occurring changes emerging from technological progress, customers, competitors and legislation. Steadily progressing miniaturization and functional integration lead to increasingly capable and complex product components and end products. Additionally, more and more product variants in combination with highly dynamic product life cycles are required in order to fulfil the customers' wishes for individualization. Global markets together with technological levelling cause increasing cost pressure on companies. Last but not least, regulation for environmental protection forces enterprises to become more and more resource-efficient in terms of energy and material.
One important step in tackling the aforementioned requirements and prevailing in today's production environment is the usage of flexible automated manufacturing facilities. With these mechatronic production systems, enterprises can satisfy the customers' needs for highly innovative products at varying volumes (Göhringer, 2001; Pachow-Fraunenhofer et al., 2008).
Unfortunately these systems have one important
drawback. In general, the more complicated a


manufacturing system gets, the more it suffers from reduced availability. There are two reasons for this. First, the mean time between failures (MTBF) goes down. This is due to the increasing number of components, ranging from mechanical parts and electronic circuits to software, that make up an up-to-date manufacturing system; the consequence is that complete systems become more and more fault-prone, since the individual availabilities of the components multiply to form the availability of the system. Second, the mean time to repair (MTTR) grows, because the identification of a failure gets more and more time consuming as the different failure modes of the components add up (Pachow-Fraunenhofer et al., 2008; Näser and Müller, 2006).
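For example, a system built from five components, each with an individual availability of 99 %, reaches only about 0.99^5 ≈ 95 % overall availability.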
Availability is a critical factor in the calculation
of the overall equipment effectiveness (OEE), a
widely used key performance indicator for judging
the performance of manufacturing facilities. Due to
the calculation of the OEE figure (see equation 1) a
reduced availability influences the OEE value
negatively (Vorne Industries, 2008).

OEE = Availability × Performance × Quality (1)
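For example, 90 % availability, 95 % performance and 99 % quality yield an OEE of 0.90 × 0.95 × 0.99 ≈ 84.6 %.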

In order to stay competitive in the previously sketched production environment, this cannot be tolerated in the long run. Measures that can counter this development have to be identified and implemented.
2. INTRODUCING TECHNICAL
DIAGNOSIS
One significant action that can positively influence
the availability of manufacturing facilities is the
application of various software tools during
technical diagnosis in order to improve the overall
process. In the following the term technical
diagnosis as it is used in this paper will be defined
and it will be pointed out how web technology
based software solutions can support it.
2.1. DEFINING TECHNICAL DIAGNOSIS
The process of technical diagnosis consists of the
core steps monitoring, diagnosis and therapy and
the associated steps prevention and improvement
(see Figure 1).
Monitoring has the duty to collect, to process and
to analyse production data. Moreover processed
data and detected failures are visualized and
statistics together with detected failures are
reported. The next step in the process is the
diagnosis. It builds upon the data gathered in the
previous phase and has the task to further process
available data and create significant symptoms that aid in determining failure causes. Finally, diagnosis reports the failure cause to system
instances responsible for therapy. Therapy deals
with finding suitable counter measures to fix the
failure. Furthermore a well-developed therapy
system also supports a user during the workflow of
failure elimination by providing guidance.


Figure 1 Structure of the technical diagnosis process
Prevention and improvement can be seen as
additional benefits of technical diagnosis since they
operate on information provided by the three core
steps. Based upon the continuous collection and
analysis (e. g. trend analysis) of production data
critical production system states can be predicted
and preventive measures initiated before a failure
occurs. Improvement works upon statistics about
costs of failures and then tries to optimize the
production system in a way that the most expensive
failures are significantly reduced or eliminated from
the system (Härdtner, 1992; Faupel, 1992; Birkel, 1995).
2.2. SUPPORTING TECHNICAL DIAGNOSIS
BY MEANS OF WEB BASED SOFTWARE
There exist a lot of software solutions to support the
technical diagnosis process in order to make the
different phases as effective as possible. The right
part of Figure 2 shows an overview of different
approaches that can assist monitoring, diagnosis or
therapy. Especially for supporting the more
challenging phases of diagnosis and therapy there is
a wide array of tools with varying complexity and
capabilities available. They range from audio- or videoconferencing and e-mail based system solutions, over concepts from the world of social software like wikis or blogs, to highly complex and powerful solutions like agent-based systems or expert systems (Lehner, 2009).
Some of the possible approaches for supporting
technical diagnosis depicted in Figure 2 have their
origin in the web environment while other concepts
can profit considerable when blending them with
web technologies. Nowadays web technologies are
available for many fields such as document
technologies, communication protocols, and client-
server architecture and provide a lot of advantages
(Klasen et al, 2006).



Figure 2 Software approaches for supporting technical
diagnosis can be comprehensively enhanced by the
application of web technologies
Web technologies do not require the installation of software and use a web browser as the foundation for their usage. This allows for easy access as well as broad and quick availability, making them suitable as a basis for systems that are to be used at various on- or off-site locations, which is one of the typical use cases in today's distributed production environment. Current trends like faster mobile internet connections and the prolific growth of mobile end devices underline the importance of this fact. Furthermore, building upon open, royalty-free web standards provides the basis for system solutions that can be jointly developed, flexibly adapted to individual needs and used by several parties (Tapscott and Williams, 2006). The
impact of these features will later be addressed in
more detail, when individual systems are presented.
The rapid progress in the field of web technologies in recent years concerning development, standardization and refinement has led to a level of maturity that, along with the presented advantages, provides the motivation for the work presented in the remainder of this paper. It will focus on showing how web technologies can beneficially be applied in technical diagnosis software solutions.
3. WEB BASED MONITORING
SOLUTIONS
This chapter will present a monitoring solution
developed at the FAPS institute that makes use of
web technologies in order to provide data
acquisition and graphical visualization on the shop
floor as well as on the workcell level. This
combination provides a versatile, powerful, easy to
use and broadly accessible tool for the operator.
3.1. FLEXIBLE SYSTEM FOR PRODUCTION
DATA ACQUISITION
Core component of the developed production data
acquisition (PDA) system is a client-server based
architecture that is implemented in the
programming language Python (see Figure 3). For
each assembly cell that shall be monitored one
telemonitoring client (TM-client) is active. Its task
is to provide a standardized method for transmitting
data from the different automation components (e.
g. robots, transfer systems, etc.) to the
telemonitoring server (TM-server). This is done by
leaving the vendor-dependent communication
interface at a very low level and transforming the
gathered data as early as possible to a standardized
data format that has been defined based upon XML.
This renders the TM-client very versatile and allows it to be easily deployed at various assembly cells. The
only part that has to be adapted if the client is to
support other automation components is to write or
integrate a vendor specific converter. The
standardized XML data chunk is then sent to the
TM-server. It is capable of processing the data
received by different clients and storing them in the
respective data tables that have been set up in a
PostgreSQL database. The contents of this database
provide the input data for the visualization methods
that will be presented in the following chapter.
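As an illustration of this client-server exchange, the following sketch builds and sends such a standardized chunk; the element names, host name and port are assumptions, since the actual XML schema of the telemonitoring system is not given in this paper.

import socket
from xml.etree import ElementTree as ET

# Hypothetical standardized data chunk of a TM-client (assumed schema)
msg = ET.Element("pda_message", cell="assembly_cell_1")
comp = ET.SubElement(msg, "component", id="robot_1")
ET.SubElement(comp, "value", name="status").text = "normal_operation"
ET.SubElement(comp, "value", name="cycle_time_s").text = "12.4"
chunk = ET.tostring(msg, encoding="unicode")

# Send the chunk to the TM-server (assumed host and port), which parses
# it and stores the contents in the PostgreSQL database
with socket.create_connection(("tm-server.local", 5000)) as sock:
    sock.sendall(chunk.encode("utf-8"))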


Figure 3 Architecture of the production data acquisition
system
3.2. VERSATILE SHOP FLOOR
MONITORING BASED ON SVG
Scalable Vector Graphics (SVG) is a royalty free
XML based markup language for describing two-
dimensional scalable graphics in the web and is
developed and standardized by the W3C (W3C,
2010). It is the basis for our development of a
dynamic, versatile, browser based sketch of the
shop floor that has the capability to continuously
display the current state of all assembly cells
connected to the described telemonitoring system.
Functionality like this is highly useful for
production planners or maintenance personnel since
they have at their disposal an easily accessible and
meaningful overview of the whole production
environment by the use of a mere web browser.
3.2.1. Overview of Approach
Figure 4 shows the developed prototype during its
use in the FAPS demonstration factory. The SVG
based shop floor representation (upper part of
figure) is integrated inside a HTML webpage. Two
assembly cells - highlighted in grey and bright
green - have been connected to the system.

Figure 4 SVG based GUI for shop floor monitoring and
detailed analysis of individual automation components
This highlighting is one of the features that can be implemented by the use of SVG in combination with a server-side Python script. The script processes the gathered production data and, according to the current situation, changes the highlight colour of the assembly cell to one of the following:
Table 1- Colour codes used in the SVG shop floor layout

grey: work cell offline / status unknown
green: normal operation
yellow: warning
red: failure
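A minimal sketch of this recolouring is given below; the status names and the convention of one outline rectangle per assembly cell are assumptions, since the script itself is not published in this paper.

from xml.etree import ElementTree as ET

STATUS_COLOURS = {"offline": "grey", "normal": "green",
                  "warning": "yellow", "failure": "red"}

def highlight_cell(svg_text, cell_id, status):
    # Recolour the outline rectangle of one assembly cell according to
    # its current status (unknown statuses fall back to grey)
    ET.register_namespace("", "http://www.w3.org/2000/svg")
    root = ET.fromstring(svg_text)
    ns = {"svg": "http://www.w3.org/2000/svg"}
    rect = root.find(f".//svg:rect[@id='{cell_id}']", ns)
    rect.set("fill", STATUS_COLOURS.get(status, "grey"))
    return ET.tostring(root, encoding="unicode")

svg = ('<svg xmlns="http://www.w3.org/2000/svg">'
       '<rect id="cell_1" width="40" height="20" fill="grey"/></svg>')
print(highlight_cell(svg, "cell_1", "normal"))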

In the lower part of Figure 4 more detailed
information concerning automation components
integrated in an assembly cell can be reviewed by
clicking on one of the components in the SVG
visualization. This makes use of the xlink feature of
SVG which in our case calls a PHP-Script that
fetches a description and production data
concerning the selected components from the
database and displays them in the bottom half of the
website. Displayed data is continuously kept up to
date due to the usage of the AJAX mechanism (see
Garrett, 2005) that allows updating the displayed
data without having to reload the whole page.
3.2.2. Creation of the SVG Based Shop
Floor Representation
The SVG based shop floor representation makes use
of the modular architecture of the previously
mentioned telemonitoring system and its
configuration files in order to provide a highly
versatile and easy adaptable visualization. The
whole concept is based upon a SVG model library
that is stored in the database. For each automation
component a SVG representation of the respective
component is once created and then stored. When a
TM-client comes online for the first time or after an
adaption, the TM-server requests a configuration
file that specifies amongst other information also
position coordinates for the area the assembly cell
occupies on the shop floor, a list of automation
components that are situated in it and how they are
oriented within the cell environment (see Figure 5
left side). This information is then stored in the
database. By using this information, the PHP-Script
Hallenplan.php responsible for displaying the
website depicted in Figure 4 is able to create step by
step the SVG based shop floor representation (see
Figure 5 right side).


Figure 5 Creation of the SVG based shop floor
representation
This step is carried out each time the SVG visualization is called, which provides for the versatile and dynamic shop floor representation. As soon as changes in the production environment occur, they are incorporated in the graphical representation automatically by the means described above. The only step that has to be done is to specify the change in the respective configuration file of the assembly cell. Typical adaptions such as
o integration of new production cells,
o rearrangement of existing production cells,
o integration of new automation components into existing production cells,
o removal of automation components from existing production cells,
which occur regularly in today's production environment, are easy to deal with under this system concept.


3.2.3. Evaluation of the SVG Based
Approach
Besides the typical advantages of web based system
approaches, SVG based visualization features
further valuable advantages. In contrast to a picture
based representation like gif or jpeg it is freely
scalable. Unlike other web technologies like Adobe
Flash it is not proprietary and requires no plug-in. It
has to be mentioned that especially the widely used
Microsoft browsers have some weaknesses in
supporting the whole SVG standard while other
browsers like Opera or Chrome lead the way in this
domain (Schiller 2010). As explained in the previous chapter, in combination with a well-designed system concept for data acquisition and visualization, an SVG based approach can be effortlessly adapted to depict the current shop floor situation. This only requires changing a configuration file, thus making the concept suitable for usage in today's versatile production environment.
3.3. 3-D ASSEMBLY PROCESS
MONITORING BASED ON VRML
Virtual Reality Modeling Language (VRML) is a
markup language for describing 3-D scenery and is
developed by the Web3D consortium. It is an open
standard that provides means to specify geometry,
lighting, animations and user interactions. Although
a follow-up standard exists in Extensible 3D (X3D)
since a couple of years, VRML is still broadly used
due to the availability of lots of tools and products
supporting it (Web 3D Consortium, 2011).
In this work, VRML was used as one of the core
components of a powerful 3-D assembly process visualization, which usefully expands the capabilities
of the telemonitoring system. While the SVG based
shop floor representation gives a quick overview of
the production environment, the VRML based
representation serves for the detailed analysis of
processes within individual assembly cells.
3.3.1. Overview of Approach
The bottom left part of Figure 6 shows the GUI for
the VRML based process visualization that has been
developed. Most of the webpage is reserved for the 3-D environment visualizing the assembly cell. As mentioned, the models for the different automation components can be acquired in several ways. One option is to design them oneself with CAD tools; most of these are capable of saving to the VRML file format .wrl.
Another option is to build upon existing models that
are distributed by many manufacturers of
automation components. Either they offer .wrl files
or another CAD file format that can be converted to
.wrl. Programs for this task are commonly available
due to the broad distribution of the VRML format.
In case the available models are static - which
means they lack the interpolators in order to carry
out movements - some pre-processing steps have to
be carried out in order to convert them into
dynamic, movable models.


Figure 6 Visualization modes of the VRML based process
visualization
There exist two different visualization modes. First
of all the user can choose to view the current
situation in the assembly scene. This requires a
running telemonitoring system in the background in
order to provide continuously new data that can be
displayed in the 3-D VRML scenery. The second
visualization mode does not deal with the current
situation of the assembly cells, but with past ones.
Via another GUI that is depicted on the right side of
Figure 6 the user has the possibility to specify an
interesting time frame, e. g. one in which a failure
took place or bad quality was produced, and can
start the processing of that specific situation in the
visualization. By this means the user gains a
vivid and expressive overview that significantly
helps to assess past failure situations. Both
visualization modes can be enhanced by blending in
accurate machine and process data into the scenery
in a head-up display like functionality. With this
tool the user has access to exact numeric values and
their graphical representation at the same time,
improving user support even more.
3.3.2. Creation of the VRML Based 3-D
Assembly Process Visualization
In order to create a 3-D browser based visualization
the system concept outlined in Figure 7 has been set
up. The visualization environment consists of a
HTML file that has the VRML world and
JavaScript embedded. Since VRML needs a plug-in
in order to be processed by a web browser we utilise
the Cortona VRML plug-in. The HTML file together with its components is transferred once, at the beginning of the visualization, from the web server to a client via HTTP. JavaScript then carries out XML-HttpRequests in order
to continuously reload new data. This data is then
processed on the client side and used to model the
assembly process occurring in the real environment
by carrying out the necessary manipulations in the
VRML scene. This is done by a combination of
JavaScript and Cortona Engine functionalities.
On the server side the XML-HttpRequest is
processed by a Python script that extracts the
required data out of the database and then sends it to
the client side. The visualization of past assembly
sequences builds upon the same architecture but
loads selected process data instead of the current
one.
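A minimal sketch of such a server-side handler is shown below; the table and column names are assumptions, SQLite stands in for the PostgreSQL database of the telemonitoring system, and JSON is returned for brevity although the system's exchange format is XML based.

import json
import sqlite3  # stand-in for the PostgreSQL database of the TM-server

def latest_axis_values(cell_id, db_path="pda.db"):
    # Answer one XML-HttpRequest: fetch the most recent axis positions
    # of an assembly cell so the client can update the VRML scene
    con = sqlite3.connect(db_path)
    rows = con.execute(
        "SELECT axis, value FROM axis_positions "
        "WHERE cell = ? ORDER BY ts DESC LIMIT 6", (cell_id,)).fetchall()
    con.close()
    return json.dumps(dict(rows))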


Figure 7 System concept for VRML based process
visualization
3.3.3. Evaluation of the VRML Based
Approach
The presented system concept provides the user
with a powerful web based 3-D visualization of
assembly processes. The implemented visualization
modes support the user during monitoring tasks and
also support diagnosis steps that are carried out by
the user without further software support. In
comparison to other visualization methods like
tables, diagrams or 2-D sketches, information
contained in a 3-D environment is easy for humans
to grasp and judge. VRML provides a means by
which this can be done within a web browser,
making the system solution easy to access and
broadly available. One drawback of the VRML
approach is, of course, the time needed to adapt the
models of the automation components so that they
can carry out movements. The more complex the
assembly processes are, and the more detailed they
are to be depicted, the more time consuming these
adaptations to the scenery become (see also Ng et
al, 2008).
Another web based approach to give the user an
overview of what is going on in an assembly cell is
the use of webcams. They can be placed in the cell
and their images can then be embedded in a
website. This possibility has also been pursued in
our work. In comparison to the VRML approach,
this concept is much easier and faster to adopt but
has some drawbacks (see also Ng et al, 2008). First
of all, the user only has a fixed viewpoint or, in case
several cameras are used, a couple of viewpoints,
but has no possibility to move freely in the 3-D
environment as is possible with VRML. Occlusion
of interesting areas of the workspace therefore
becomes an issue. Secondly, the data rates required
to access the VRML based visualization are much
lower than those of video streams: the scenery only
has to be transferred once, and afterwards only
small data chunks containing the machine and
process values requested via XMLHttpRequest have
to be transferred.
4. WEB BASED DIAGNOSIS AND
THERAPY SOLUTIONS
While the system solutions presented in chapter 3
focused mainly on supporting the monitoring part of
technical diagnosis, this chapter will introduce
approaches that support diagnosis and therapy.
First, the application of an expert system solution
with a web based GUI in the field of electronics
production will be demonstrated. After that, the
usage of document technologies and communication
protocols from the web environment to set up an
easily maintainable and configurable multi agent
system for the diagnosis and therapy of
manufacturing processes will be explained.
4.1. EXPERT SYSTEM SOLUTIONS WITH
WEB BASED GUI
This chapter will present the application of a
commercially available expert system solution to
the solder paste stencil printing process. The
benefits of integrating the solution into this crucial
process step of electronics production will be
outlined in detail.
4.1.1. Motivation
The solder paste stencil printing process is the first
and also the most critical step when manufacturing
electronic components in surface mount technology
(SMT). Approximately two thirds of all
manufacturing failures originate here, making it the
most fault-prone process step (Oresjo, 2008). When
failures are detected during the printing process by
manual or automated optical inspection, it is
presently up to the operator to analyse and correct
them. This workflow thus relies heavily on the
individual experience of operators concerning this
complex process step with its complicated and
incompletely documented causation. In order to
tackle this deficit and empower operators to solve
failure situations more efficiently in the future,
process knowledge was gathered at our institute in
the form of literature surveys as well as series of
experiments, and was integrated into an expert
system solution.


4.1.2. Overview of Approach
A commercially available expert system toolkit was
used as the basis for our system solution. It provides
the usual components a knowledge based system is
supposed to provide (Beierle and Kern-Isberner,
2008). This includes:
knowledge base
inference mechanisms
acquisition mechanisms
explanation mechanisms
dialog component
For interaction between users and the system there
exist different user interfaces, each suited to the
needs of users with certain roles such as knowledge
engineer, administrator or end user.
4.1.3. Integration and Usage of Information
in the Expert System
In a pre-processing step, the relevant information
gathered on the solder paste stencil printing process
was structured in the form of mind maps in order to
facilitate its modelling in the expert system. As
already mentioned, the system provides a
comfortable web based GUI where the knowledge
engineer has the possibility to map the gathered
knowledge onto the internal data model of the
expert system (see Figure 8, upper part). To do this,
he can create cases for the different failure
situations that the system shall support. These cases
are then filled with a network-like structure
containing question nodes whose answers help to
narrow down the failure cause. During operation
these questions are answered by the user. Besides
the question nodes there also exist solution nodes
that each contain information on dealing with a
specific failure cause. Depending on the responses
to the questions, the diagnosis process will end at a
specific solution node providing a remedy for the
current problem. Besides the usual name and
description, both node types can be attributed
additional content like documents, audio and video
files or pictures. This additional information fosters
the correct and competent answering of the posed
questions or, in the case of solution nodes, gives
detailed information on how a troubleshooting
action has to be carried out and what prerequisites
have to be met. Additionally, for reusing part of a
case in another case, a subdiagnosis node exists that
allows structuring several question and solution
nodes under one node.
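To illustrate the principle, the following is a minimal sketch of such a question-solution network; the node contents are invented examples for the stencil printing process and are not taken from the actual knowledge base or the vendor toolkit's data model.

```python
# Minimal sketch of a question-solution network for stencil-printing
# diagnosis. Node names, questions and remedies are hypothetical.
QUESTION, SOLUTION = "question", "solution"

nodes = {
    "q_paste": (QUESTION, "Is the solder paste older than its shelf life?",
                {"yes": "s_replace_paste", "no": "q_squeegee"}),
    "q_squeegee": (QUESTION, "Is the squeegee pressure within spec?",
                   {"yes": "s_clean_stencil", "no": "s_adjust_pressure"}),
    "s_replace_paste": (SOLUTION, "Replace the solder paste.", None),
    "s_adjust_pressure": (SOLUTION, "Re-adjust the squeegee pressure.", None),
    "s_clean_stencil": (SOLUTION, "Clean the stencil apertures.", None),
}

def diagnose(start):
    """Walk the network, asking the user at each question node."""
    node = start
    while True:
        kind, text, answers = nodes[node]
        if kind == SOLUTION:
            return text
        answer = ""
        while answer not in answers:
            answer = input(f"{text} (yes/no): ").strip().lower()
        node = answers[answer]

if __name__ == "__main__":
    print("Suggested countermeasure:", diagnose("q_paste"))
```

In the real system, each node would additionally carry the multimedia content described above, and the traversal would be driven by the web based dialog component rather than console input.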
Once the knowledge has been modelled, checked
and made available to users on the shop floor via
the administration functionalities, users have access
to the stored knowledge via another GUI that is
specifically designed for their needs (see Figure 8,
bottom). The usage of the system is triggered by
machine failures or by the violation of defined
quality criteria, detected either manually or
automatically. The user can then address the expert
system with his current failure situation by
specifying error codes or unmet quality criteria and
is then guided through the previously explained
question-solution network via a dialog based
process in a web based GUI. This GUI is also
capable of displaying the additional multimedia
content mentioned earlier. By this means the user
has comprehensive support during the process of
identifying the failure cause and conducting
countermeasures, which leads to reduced downtimes.


Figure 8 Integration and usage of information in the
expert system
4.1.4. Evaluation
Since the 1990s, there has been a great deal of
research work on applying expert systems in the
field of manufacturing automation as a means to
support machine or process diagnosis (Zöllner 1995;
Birkel 1995; Warnecke and Bullinger 1991). But up
to now this powerful approach has not managed to
achieve broad application in the industrial
manufacturing environment. Reasons for this are the
rather complex underlying system architecture that
has to be dealt with and limited user acceptance.
But with more and more capable technologies
from the field of computer science at one's disposal,
the barriers to the application of such systems in
industrial settings are currently being reduced.
Construction kits and frameworks for expert
systems, like the one used in our work, have become
available, eliminating the complex development and
maintenance of the underlying software
infrastructure. Another huge contribution comes
from the progress in the field of web technologies
in recent years. Powerful and user friendly GUIs for
comfortably operating these complex systems are
now available inside browser based applications
(Hasan and Isaac, 2011). Due to the web based
approach, the systems can be updated, maintained
and used at different workcells at various
production sites without having to deploy a
software tool to each client. As costs per
deployment are reduced, leading to shorter
amortization periods of the investment, this
provides the large user community such systems
need in order to thrive. Furthermore, a large user
community ensures that the contained data grows
faster and that contents are more up-to-date, thus
increasing the benefit for the individual user and, in
succession, user acceptance. Lastly, the web based
GUI allows for the effortless integration of
multimedia content, which also contributes to
usability and acceptance.
4.2. MULTI AGENT SYSTEM SOLUTIONS
WITH WEB BASED GUI
As mentioned before, quick or even preventive
detection of possible failures and competent
reaction in case of failures during the production
process is crucial in today's highly competitive
production environment. One approach that can
provide these capabilities without having to rely
on user interaction is the usage of multi agent
systems (MAS). The underlying concepts of this
approach will be explained in the following chapter.
Furthermore, a demonstrator application developed
at our institute that makes use of web technologies
in order to enhance the setup and usability of this
approach will be presented.
4.2.1. Overview of the approach
The building blocks of a MAS are individual agents.
An agent is a piece of software that is capable of
acting autonomously within its environment in order
to pursue its goals. The combination of individual
agents into a larger system is called a MAS. For
further information on the theoretical concepts
regarding MAS see e.g. Wooldridge (2009).
Our demonstrator for experimenting with agent
based technologies in the field of technical
diagnosis is an assembly cell with cooperating
industrial robots performing the assembly of door
modules for automobiles. Besides the robots other
major components of the assembly cell are a
transfer system, a screwing system, a screw feeder,
a vibratory bowl feeder, an assembly gripper and a
control computer.
For the development of the MAS, these
components were each attributed a monitoring and a
diagnosis agent. Each monitoring agent has the task
of extracting the relevant information for its domain
from a database and then testing it for anomalies.
The same database that provides the input for the
system solutions presented in chapter 3 is used for
the MAS. In case of detected anomalies, the
monitoring agent issues a diagnosis request to the
responsible diagnosis agent via the provided
communication architecture. The diagnosis agent is
then able to assess the failure situation in detail by
referring to a knowledge base modelled using fault
trees. These have been constructed with information
gathered by a previously conducted failure modes
and effects analysis (FMEA).
Besides the various monitoring and diagnosis
agents, the system also uses one agent that acts as an
agent management system (AMS) and one acting as
directory facilitator (DF). The latter's role is to
provide directory services and store information on
which diagnosis capabilities are available in the
system and how they can be accessed. The AMS is
responsible for distributing diagnosis requests based
on the information provided by the directory
facilitator, handling user interaction and system
administration.
In order to enable the previously explained
system interactions, internal agent behaviour and
communication acts were specified. Internal agent
behaviour is provided to the agents by XML based
configuration files (a sketch of such a file follows
below) providing necessary input such as:
agent description (name, purpose)
connection details (AMS, DF)
dependencies on other agents
capabilities
Furthermore, communication acts for scenarios like:
register a service to the DF
request diagnosis
gather new data from the database
and others had to be modelled. The definition of the
communication acts is based upon FIPA's agent
communication language (ACL), specifically the
XML based specification provided by FIPA (2005).
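The following is a minimal sketch of such a configuration file and of reading it in Python; every element and attribute name is an assumption for illustration, not the actual file format of the demonstrator.

```python
# Hypothetical agent configuration for one monitoring agent of the
# assembly cell; all names below are assumptions for illustration.
import xml.etree.ElementTree as ET

CONFIG = """\
<agent name="screwing-monitor" purpose="monitoring">
  <connection ams="xmpp://ams@cell-server" df="xmpp://df@cell-server"/>
  <dependencies>
    <agent ref="screwing-diagnosis"/>
  </dependencies>
  <capabilities>
    <capability>torque-anomaly-detection</capability>
  </capabilities>
</agent>
"""

root = ET.fromstring(CONFIG)
print("Agent:", root.get("name"), "/", root.get("purpose"))
print("DF address:", root.find("connection").get("df"))
print("Depends on:", [a.get("ref") for a in root.find("dependencies")])
print("Capabilities:", [c.text for c in root.iter("capability")])
```

Keeping the behaviour in such declarative files is what allows the web based configuration GUI described in section 4.2.2 to generate new system configurations without code changes.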
The communication basis for the MAS also comes
from the field of web technologies, namely the
Extensible Messaging and Presence Protocol
(XMPP), which is used for transmitting messages
between the individual agents. XMPP is an XML
based instant messaging protocol that provides
built-in mechanisms for authentication and presence
information, which are useful features for the MAS.


The presented MAS is a powerful approach to
support automated technical diagnosis of assembly
processes at reasonable expense. Besides the quick
and competent reaction that these system solutions
provide, they offer modularity and the usage of
open standards. These factors allow reusing
individual components in other application
scenarios with no or only slight adaptations, thus
effectively reducing the development costs for this
type of system in the long run. The reuse and
recombination of modules becomes more and more
important in today's mutable production
environment where monolithic systems can no
longer be applied cost-effectively.
4.2.2. Web Based Configuration Interface
for MAS to Enhance Usability
Besides the mentioned advantages, MAS also have
one drawback. Due to their rather complex internal
behaviour and architecture, they are difficult for
users to set up and maintain. This is also a main
reason why this system approach has, up to now,
not had the impact in the industrial production
environment it should have had judging by its
capabilities alone. Although there are many sample
applications from the 1990s and 2000s (e.g.
Ouelhadj et al., 2000; Greenwood et al, 2007;
Albert et al., 2002), their distribution in industrial
manufacturing environments is rather scarce so far.
In order to tackle this issue and provide easy access
to agent based systems, a web based GUI allowing
the easy configuration of our MAS was developed.
As depicted in Figure 9, the user has a modelling
area (1) along with a menu bar (2) at his disposal,
where he is able to create agents and model their
dependencies (expressed by black lines). For the
creation of agents, a library offering templates for
different automation components is available (3).


Figure 9 SVG based GUI facilitating creation of XML
based configuration file for the MAS
Via the menu bar on the right hand side (4), the user
can assign individual capabilities to an agent along
with the agent type, and can transform the modelled
content into the XML based file format of the MAS,
thus creating a new system configuration.
Additionally, the menu at the bottom part (5) of the
web page allows modelled configurations to be
loaded and saved. Once again, the core technology
applied was SVG, which was used to create the
modelling area and the components contained in it.
The surrounding menu bars have been created using
HTML and JavaScript.
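As an illustration of this technology choice, the following minimal Python sketch generates two agent nodes and a dependency line as SVG, roughly in the manner of the modelling area; the shapes, labels and file name are assumed for illustration.

```python
# Minimal sketch: emit two agent nodes and a dependency line as SVG.
# Geometry and labels are hypothetical, not taken from the actual GUI.
import xml.etree.ElementTree as ET

svg = ET.Element("svg", xmlns="http://www.w3.org/2000/svg",
                 width="400", height="200")

def agent_node(x, y, label):
    """Draw one agent as a labelled rectangle."""
    g = ET.SubElement(svg, "g")
    ET.SubElement(g, "rect", x=str(x), y=str(y), width="140", height="40",
                  fill="white", stroke="black")
    text = ET.SubElement(g, "text", x=str(x + 10), y=str(y + 25))
    text.text = label

agent_node(20, 20, "screwing-monitor")
agent_node(220, 120, "screwing-diagnosis")
# The dependency between the two agents, expressed by a black line.
ET.SubElement(svg, "line", x1="160", y1="40", x2="220", y2="140",
              stroke="black")

ET.ElementTree(svg).write("mas_config.svg")
```

In the browser, the same SVG elements are created and manipulated via JavaScript, which is what makes the drag-and-drop modelling area possible without any client-side installation.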

4.2.3. Evaluation
MAS are a powerful method for creating capable
systems for technical diagnosis. Efforts in this
direction have been going on for decades. With the
rise of web technologies, the implementation of
these systems can be simplified by building upon
standardized data formats and communication
protocols from the web environment for the
specification and execution of the data exchange
between agents. Furthermore, web technologies can
also be used to provide easy to use GUIs for setting
up a MAS. This eliminates usability issues that have
hindered their success in the past. The GUI provides
a hierarchical overview of the dependencies in the
agent based system, thus reducing configuration
errors, and also offers a comfortable way of
developing agent behaviour without having to
directly edit text based configuration files.
The modular approach and usage of open web
standards make the MAS architecture ideal for
designing vendor independent systems. Individual
components from different vendors can interact in
order to fulfil the diagnosis tasks for complex
automated production facilities. This fosters future
developments where MAS can be combined with
newly arising concepts from the web environment
such as software as a service or cloud computing.
Additionally, emerging concepts like mass
collaboration platforms for engineering
manufacturing facilities could also be combined
with modular, agent based diagnosis systems, since
their hallmarks, such as modularity, vendor
independence and reusability, overlap (Tapscott and
Williams, 2006).
5. CONCLUSIONS
This paper motivated in detail why technical
diagnosis becomes more and more important in
today's manufacturing environment and why
approaches that rely solely on the operator are no
longer sufficient. Various methods for building
powerful software systems supporting or fulfilling
monitoring, diagnosis and therapy tasks with the
help of software technologies originating mostly
from the web environment have been presented.
The typical advantages of the different system
solutions were stated. By enhancing long-known
system solutions like expert systems or MAS with
web technologies, they become more standardized
and easier to use. This might finally facilitate a
breakthrough followed by the broad integration of
these system concepts in the field of manufacturing
automation.
6. ACKNOWLEDGMENTS
Work concerning the expert system solution for the
solder paste printing process is carried out in the
research project PADUA funded by the BMWi
under grant number KF2305702.
REFERENCES
Albert M., Längle T., Wörn H., Development Tool for
Distributed Monitoring and Diagnosis Systems, In:
Proceedings of Thirteenth International Workshop on
Principles of Diagnosis, Semmering, 2002, p 158-164
Beierle C., Kern-Isberner G., Methoden
wissensbasierter Systeme, 4th Edition,
Vieweg+Teubner Verlag, Wiesbaden, 2008, p 18
Birkel G., Aufwandsminimierter Wissenserwerb für die
Diagnose in flexiblen Produktionszellen, 1st Edition,
Springer-Verlag, Berlin, 1995, p 9, 18-19
Faupel B., Ein modellbasiertes Akquisitionssystem für
technische Diagnosesysteme, 1st Edition, trans-aix-
press, Aachen, 1992, p 4-5
FIPA, FIPA Specifications, URL:
http://www.fipa.org/specifications/index.html, 2005,
Retrieved: 31/05/2011
Greenwood D., Lyell M., Mallya A., Suguri H., The
IEEE FIPA Approach to Integrating Software Agents
and Web Services, In: AAMAS, Honolulu, 2007
Garrett J. J., Ajax: A New Approach to Web
Applications, URL: http://www.adaptivepath.com
/ideas/ajax-new-approach-web-applications, 2005
Retrieved: 04/06/2011
Göhringer J., Integrierte Telediagnose via Internet zum
effizienten Service von Produktionssystemen, 1st
Edition, Meisenbach Verlag, Bamberg, 2001, p 5-9
Hasan S. S., Isaac R. K., Müller, E., An integrated
approach of MAS-CommonKADS, Model-View-
Controller and web application optimization strategies
for web-based expert system development, Expert
Systems with Applications, Vol. 38, No. 1, 2011, pp
417-428
Härdtner G. M., Wissensstrukturierung in Diagnose-
expertensystemen für Fertigungseinrichtungen, 1st
Edition, Springer-Verlag, Berlin, 1992, p 14-15
Klasen F., Wollschlaeger M., et al, Einsatz von Web-
Technologien in der Automation, 1st Edition,
Berthold Druck und Direktwerbung GmbH, Frankfurt,
2006, p 9
Lehner F., Wissensmanagement: Grundlagen, Methoden
und technische Unterstützung, 3rd Edition, Carl
Hanser Verlag, München, 2009
Näser P., Müller E., Ganzheitlicher Ansatz zur
Anlagenverfgbarkeit, wt Werkstattstechnik online,
Vol. 96, No. 7/8, 2006, pp 555-560
Ng A. H. C., Adolfsson J., Sundberg M., De Vin, L. J.,
Virtual manufacturing for press line monitoring and
diagnostics, International Journal of Machine Tools
& Manufacture, Vol. 48, No. 5, 2008, pp. 565-
575
Oresjo S., Results from 2007 Industry Defect Level
Effectiveness Studies, In: IPC Printed Circuit Expo,
APEX and Designer Summit, 2008, p 4
Ouelhadj D., Hanachi C., Bouzouia B., Multi-agent
Architecture for Distributed Monitoring in Flexible
Manufacturing Systems (FMS), In: Proceedings of
the IEEE International Conference on Robotics &
Automation, San Francisco, 2000
Pachow-Fraunenhofer J., Heins M., Nyhuis P.,
Zustandsbeschreibung von Produktionsanlagen, wt
Werkstattstechnik online, Vol. 98, No. 7/8, 2008, pp
622-627
Schiller J., Codedread SVG Support, URL:
http://www.codedread.com/svg-support.php, 2010,
Retrieved: 04/06/2011
Tapscott D., Williams A. D., Wikinomics: How mass
collaboration changes everything, 1st Edition,
Portfolio, New York, 2006
Vorne Industries, Fast Guide to OEE, URL:
http://www.oee.com/tools/fast-guide-to-oee.pdf, 2008
Retrieved: 31/05/2011
W3C, Scalable Vector Graphics (SVG), URL:
http://www.w3.org/Graphics/SVG/, 2010, Retrieved:
04/06/2011
Web 3D Consortium, Open Standards for Real-Time 3D
Communication, URL: http://www.web3d.org
/x3d/vrml/, 2011, Retrieved: 04/06/2011
Warnecke H. J., Bullinger H. J., Expertensysteme in
Produktion und Engineering, 1st Edition, Springer
Verlag, Berlin, 1991
Wooldridge M., An Introduction to MultiAgent
Systems, 2nd Edition, John Wiley & Sons Ltd,
Chichester, 2009
Zöllner B., Adaptive Diagnose in der
Elektronikproduktion, 1st Edition, Carl Hanser
Verlag, München, 1995, p 126-139

METROLOGY ENHANCED TOOLING FOR AEROSPACE (META): A LIVE
FIXTURING, WING BOX ASSEMBLY CASE STUDY
O. C. Martin
University of Bath
O.C.Martin@bath.ac.uk
Z. Wang
University of Bath
zw215@bath.ac.uk
P. Helgosson
University of Nottingham
epxph3@nottingham.ac.uk


J. E. Muelaner
University of Bath
J.E.Muelaner@bath.ac.uk
A. Kayani
Airbus UK
Amir.Kayani@airbus.com
D. Tomlinson
Airbus UK
David.Tomlinson@Airbus.com


P. G. Maropoulos
University of Bath
P.G.Maropoulos@bath.ac.uk

ABSTRACT
Aerospace manufacturers typically use monolithic steel fixtures to control the form of assemblies;
this tooling is very expensive to manufacture, has long lead times and has little ability to
accommodate product variation and design changes. Traditionally, the tool setting and
recertification process is manual and time consuming; monolithic structures are required in order to
maintain the tooling tolerances for multiple years without recertification. As part of a growing
requirement to speed up tool-setting procedures, this report explores a coupon study of live
fixturing; that is, automated fixture setting, correction and measurement. The study aims to use a
measurement instrument to control the position of an actuated tooling flag; the flag will
automatically move until the Key Characteristic (KC) of the part/assembly is within tolerance of its
nominal position. This paper updates developments of the Metrology Enhanced Tooling for
Aerospace (META) Framework, which interfaces multiple metrology technologies with the tooling,
components, workers and automation. This will allow rapid or even real-time fixture re-certification
with improved product verification, leading to a reduced risk of product non-conformance and
increased fixture utilization while facilitating flexible fixtures.
KEYWORDS
Dimensional Metrology, Measurement, Tooling, Fixture, Assembly, META

1. BACKGROUND
In the wider community, tooling can include a wide
spectrum of tools; in the context of this paper,
tooling is used to refer to assembly tooling, which
encompasses both jigs and fixtures.
Monolithic aerospace assembly fixtures consist
of large traditional steel structures configured for a
single aircraft. For stability, the tooling is secured to
the reinforced-concrete factory floor. This
traditional build philosophy controls all the features
by: common jig location, master jig datum, jig
setting, certification points, build slips and pin
diameters. The positional, dimensional and
geometric accuracy of the assembly is implied from
the tooling. That is to say, if the tooling is correct
and the components are positioned correctly within
the tooling, then the assembly is correct. These
mechanical metrology checks ensure tolerances are
maintained.


Verification involves manually rotating pins and
moving slips to ensure that the assembly is correctly
positioned and held within the fixture. However, the
combined tolerance of the fixture and location
pins/slips must be less than the assembly tolerances;
ideally <10%, although this is rarely possible at the
wing assembly scale, where the design tolerances are
often <300 µm over 30 m. Subsequently, tooling is
built to a tolerance of around 150 µm, consuming up
to 50% of the assembly tolerance budget. Next
Generation Composite Wings (NGCW) hold new
challenges, as the composite materials cannot be
reworked easily if concessions are identified.
Consequently, more accurate assemblies, and
therefore assembly fixtures, will be required. These
requirements will further drive up the cost of
traditional fixtures.
In addition, the size and complexity of fixtures
means that they typically have construction lead
times in excess of 6 months, making late design
changes or the employment of concurrent
engineering a challenge. It is estimated that
assembly tooling accounts for approximately 5% of
the total build cost for an aircraft (Rooks, 2005) or
10% of the cost of the airframe (Burley et al,
1999). Fixture manufacture times and non-recurring
costs (NRCs) could be reduced if assembly fixtures
moved away from traditional hard tooling and
towards soft tooling, that is: away from large, rigid
structures and towards reconfigurable and flexible
tooling. A strong measurement platform and
infrastructure is required to maintain the required
tolerances within the tooling and the assembly
process.
2. INTRODUCTION
The key requirement for large-scale assembly is to
overcome the constraints associated with the
physical size of products and assemblies and the
corresponding dimensional and form tolerances
(Maropoulos et al, 2008). Advances in large volume
metrology are increasingly important in order to
achieve this; subsequently the realisation of
metrology enhanced tooling will become possible.
2.1. METROLOGY ENHANCED TOOLING
FOR AEROSPACE (META)
Gauge-less and fixture-less manufacture are reliant
on the exploitation of advanced metrology in the
dimensional inspection and monitoring of the
tooling, components and assemblies. Firmly
embedded metrology systems within the
manufacturing processes are still not a reality, as
most systems sit outside of the tooling rather than
being embedded within it. Metrology-assisted
robotic processes are being developed within
manufacturing cells with an emphasis on assembly,
rather than conventional automated drilling
processes (Jayaweera and Webb, 2010). In order to
place metrology systems within the control loop of
a manufacturing cell, prerequisites such as
autonomous operation, high reliability, high speed
measurement and flexibility are paramount (Gooch,
1998). This exploitation of technologies is stifled by
the lack of integration with core design and
assembly processes.
The future of metrology enhanced tooling relies
on the effective synergy of complementary
interfaces accommodated by a strong software
platform. These hybrid systems could utilise many
metrology technologies; for example, a macro co-
ordinate system could be set up using
photogrammetry or a network of lasers, which would
effectively surround and monitor key characteristics
of the tooling. Localised metrology could sit within
this larger metrological environment (laser radar,
portable co-ordinate measurement machines
(PCMMs), actuators, sensors, arms, scanners, etc.),
providing fine measurement of difficult features,
freeform surfaces, tooling pick-ups, part location
and verification. Potentially this environment could
provide the prerequisite of any automation attempt:
determining the sources and magnitude of any
dimensional variations of the components that are
currently being experienced during the manual
assembly stage (Saadat and Cretin, 2002). Figure 1
gives an overview of the Metrology Enhanced
Tooling for Aerospace (META) environment
introduced by Martin et al (2010).
The META framework's primary function is to
monitor the key characteristics of the tooling and
assembly, requiring a real-time or quasi-real-time
metrology system to ensure the fixture condition.
This monitoring eliminates the need to recertify
fixtures periodically, removing the need to take the
fixture out of production; current practice can take
weeks to recertify and rework a fixture, causing
downtime that will have an increasing impact as
production rates increase. Secondary functions
(Enhanced Processes), such as live tooling, do not
require real-time feedback, as the movements can be
iterative, unlike a machining operation. Machining
operations and automation, where an iterative loop is
not appropriate, must run directly from information
fed from the instrument (for example a laser
tracker) and not through the core software.
The META framework's tertiary function is the
collection of information. This information could
not only enhance the tooling and assembly during
operation, but also begin a large scale data collection
for use in statistical process control (SPC), providing
learning for future optimization of the assembly
processes.




Figure 1 - META Framework Overview
2.2. DATA FUSION
The META framework relies on instrument
networks for a number of reasons, mainly: reducing
measurement uncertainty, increasing the
measurement volume and providing complementary
technologies to enhance data collection. Due to the
expense of measurement instruments, instrument
networks can be realised by roving or multi-hop
systems using a single instrument many times.
Instrument hardware networks present many
challenges; using the data from each instrument in
the most efficient way is paramount. As different
instruments have differing strengths, the data
management has to be aware of such attributes and
respond appropriately. Multi-sensor data fusion is a
method for centrally combining and processing data
from a number of different sensors (Huang et al,
2007). The data fusion can be described as
complementary, competitive or cooperative
(Durrant-Whyte, 1988): complementary if the
sensors are independent but can offer additional
information by complementing one another;
competitive if the sensors independently measure
the same area/targets in order to eliminate random
error and reduce measurement uncertainty; and
cooperative if the sensors are independent but
different from each other and their combination
provides a level of information that each sensor
cannot achieve alone. Within dimensional
metrology, examples of such multi-sensor data
fusion include: the field of image fusion, tactile and
optical coordinate metrology, coherent and
incoherent optical measuring techniques, computed
tomography and scanning probe microscopy
(Weckenmann et al, 2009). It is likely that multi-
sensor data fusion will become increasingly
important as higher levels of integration with fast
processing speeds become a necessity for full-field,
large volume metrology and automation.
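To illustrate the competitive case numerically, the following is a minimal sketch, with purely illustrative values, of fusing two independent measurements of the same coordinate by inverse-variance weighting; the combined uncertainty is smaller than that of either instrument alone.

```python
# Minimal sketch of competitive data fusion: two independent instruments
# measure the same coordinate and are combined by inverse-variance
# weighting. All numeric values are illustrative, not trial data.
def fuse(x1, u1, x2, u2):
    """Return the fused value and standard uncertainty of two
    independent measurements x1 +/- u1 and x2 +/- u2."""
    w1, w2 = 1.0 / u1 ** 2, 1.0 / u2 ** 2
    x = (w1 * x1 + w2 * x2) / (w1 + w2)
    u = (1.0 / (w1 + w2)) ** 0.5
    return x, u

# e.g. a laser tracker (u = 0.05 mm) and photogrammetry (u = 0.10 mm)
x, u = fuse(100.02, 0.05, 99.95, 0.10)
print(f"fused: {x:.3f} mm +/- {u:.3f} mm")  # fused u is about 0.045 mm
```

Complementary and cooperative fusion require richer models, since the sensors no longer observe the same quantity, but the competitive case already shows why instrument networks can out-perform any single instrument.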
3. CASE STUDY: FIXTURE
AUTOMATION
This paper looks at the secondary function of the
META framework, metrology as an enabler of
live fixtures, as part of a growing requirement to
speed up tool-setting procedures; that is, automated
fixture setting, correction and measurement. The
case study aims to use a measurement instrument to
control the position of an actuated tooling flag; the
flag will automatically move until the Key
Characteristic (KC) of the part/assembly is within
tolerance of its nominal position. This reduces the
measurement uncertainty stack-up associated with
constructing and employing tooling-held tolerances;
effectively, the tolerance budget is only occupied by
the instrument's uncertainty and not the
manufacturing tolerances of the fixture. In the
META framework, the measurement can thus focus
on the assembly/component and not the fixture.


In the case of this study the actuated flag is a
Hexapod from Physik Instrumente (PI) and the KC
is the hinge line axis that runs through the hinge
bracket's bore. This trial was carried out
concurrently with the Airbus Tooling Hub activities,
based at the University of Nottingham.
3.1. HEXAPOD LOCATION
The methods used to build the trial fixture (Figure
2) cannot perfectly align the native co-ordinate
frame of the hexapod (Figure 3) to the jig co-
ordinate frame; aligning these frames accurately
would be a time consuming and laborious exercise.
A more robust and quicker method is to identify the
location and orientation of the hexapod's frame and
transform the relevant information into the jig co-
ordinate system when required; if the calculations
are completed with an appropriate degree of
accuracy, no loss of information will occur when
changing from frame to frame. This method allows
the hexapod to be placed approximately in its
nominal position without considering the hexapod's
position and orientation, which speeds up the tool
setting process, making the reconfiguration of
fixtures quicker. However, in order to manipulate
the hexapod into its CAD nominal position, the
hexapod's native co-ordinate frame must first be
defined relative to the jig axis system.


Figure 2 - Location of study on the demonstration fixture;
highlighted: the jig's co-ordinate frame

Figure 3 - Native hexapod co-ordinate frame in its CAD
nominal position
The hexapod is moved to the extreme of each
axis in isolation using PI's proprietary software
interface (Figure 4); each axis extremity is
measured using a Leica AT901 laser tracker and
New River Kinematics' SpatialAnalyzer (SA). This
enables the definition of the working envelope
(x = 50 mm, y = 50 mm, z = 25 mm) and the creation
of the physical, native co-ordinate frame of the
hexapod relative to the fixture's co-ordinate frame.
Subsequently, the hexapod can be manoeuvred into
its CAD nominal position by obtaining the
translations [x, y, z]^T and rotations [α, β, γ]^T from
the SA function 'compare to CAD'. This method is
consistent with the fixture build philosophy used for
the construction of the fixture. In turn, the physical
location of the hexapod's frame can be compared to
the CAD nominal location of the hexapod frame
(Figure 5). The transformation matrix from native to
CAD nominal (Equation 1) gives the offsets
required to reach the intended CAD nominal
position.
This is a specific transformation matrix that uses
the sequence: rotation about x (α), followed by
rotation about y (β), then rotation about z (γ);
finally, a translation in x, y, z is performed. This is
the sequence that the SA software uses.

$$
T=\begin{bmatrix}
\cos\beta\cos\gamma & \sin\alpha\sin\beta\cos\gamma-\cos\alpha\sin\gamma & \cos\alpha\sin\beta\cos\gamma+\sin\alpha\sin\gamma & t_x\\
\cos\beta\sin\gamma & \sin\alpha\sin\beta\sin\gamma+\cos\alpha\cos\gamma & \cos\alpha\sin\beta\sin\gamma-\sin\alpha\cos\gamma & t_y\\
-\sin\beta & \sin\alpha\cos\beta & \cos\alpha\cos\beta & t_z\\
0 & 0 & 0 & 1
\end{bmatrix}
\qquad (1)
$$
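The following minimal sketch evaluates Equation 1 numerically for an arbitrary pose; the angle and translation values are illustrative only, not trial data.

```python
# Minimal numpy sketch of Equation 1: build the homogeneous transform for
# the rotate-about-x, rotate-about-y, rotate-about-z, then translate
# sequence, and apply it to a point given in the native frame.
import numpy as np

def transform(alpha, beta, gamma, tx, ty, tz):
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    return np.array([
        [cb * cg, sa * sb * cg - ca * sg, ca * sb * cg + sa * sg, tx],
        [cb * sg, sa * sb * sg + ca * cg, ca * sb * sg - sa * cg, ty],
        [-sb,     sa * cb,                ca * cb,                tz],
        [0.0,     0.0,                    0.0,                    1.0],
    ])

# Illustrative offsets: 0.5 deg about each axis, a few mm of translation.
T = transform(*np.radians([0.5, 0.5, 0.5]), 2.0, -1.5, 0.8)
point_native = np.array([100.0, 50.0, 25.0, 1.0])  # homogeneous coords
print(T @ point_native)  # the same point expressed in the target frame
```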




Figure 4 - PI hexapod controller interface

Figure 5 - Actual position of the Hexapod's native co-
ordinate frame

3.2. HEXAPOD COMMUNICATION AND
CONTROL
The measurement information from the laser tracker
is continuously streamed into SA. SA converts the
native spherical co-ordinates from the laser tracker
into the Cartesian co-ordinates required for the
hexapod control. This post-processed data is
streamed via the User Datagram Protocol (UDP) to a
bespoke program created by the University of Bath
(UoB), designed to bridge the interface gap between
the PI hexapod interface and SA. The UoB interface
program (Figure 6) samples the UDP data stream,
checks whether the KC is within tolerance, sends
the required corrective movement to the hexapod
and checks whether the hexapod is stationary before
cycling again. The communication paths between
the hardware and software are shown in Figure 7.
The program also enables the control of a selection
of parameters, such as: the tolerance threshold, the
hexapod velocity, the enabling and disabling of the
hexapod's degrees of freedom, and closed or open
loop control.
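The following is a minimal sketch of this control cycle, not the UoB program itself; the UDP port, the packet layout and the two controller helper functions are assumptions for illustration.

```python
# Minimal sketch of the closed-loop cycle: sample the UDP stream from SA,
# compare the KC deviation against the tolerance threshold and command a
# corrective hexapod move. Port, packet layout and helpers are assumed.
import socket
import struct
import time

TOLERANCE_MM = 0.3  # the 300 um threshold used in the trial

def send_move(dx, dy, dz):
    """Placeholder: forward a relative correction to the PI controller."""

def is_stationary():
    """Placeholder: query the hexapod controller for its motion state."""
    return True

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", 5001))  # hypothetical port that SA streams to

while True:
    data, _ = sock.recvfrom(1024)
    dx, dy, dz = struct.unpack("<3d", data[:24])  # deviations from nominal
    if max(abs(dx), abs(dy), abs(dz)) <= TOLERANCE_MM:
        break  # KC within tolerance: stop iterating
    send_move(-dx, -dy, -dz)  # move against the measured deviation
    while not is_stationary():
        time.sleep(0.1)

print("KC within tolerance")
```

Because the connection between hexapod and point of interest is compliant (see section 3.3), the loop is iterative rather than a single commanded move.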


Figure 6 - UoB SA - Hexapod interface software



Figure 7 - Schematic of hardware/software communication

3.3. METROLOGICAL FEEDBACK
The metrology requirement is to measure the
deviation of the hinge bracket's bore from its
nominal CAD position; the hexapod will move,
attempting to reclaim the hinge line's CAD nominal
position. The hexapod is attached to the spar via a
zero point clamp (Figure 8); there is a substantial
offset from the point of attachment to the point of
interest (POI) (Figure 9), and between the hexapod
and the POI are compliant connective elements: zero
point clamp, spar, hinge bracket and vector bar. As
the relationship between the hexapod and the POI
cannot be considered a rigid body, the metrology
feedback has to be in a closed loop (Figure 10). If,
however, there were a rigid relationship or a
predictable relationship between the movement of
the hexapod and the POI, the PI hexapod is
inherently accurate enough to support an open loop
system, which is quicker and less resource intensive
(Figure 11). An open loop system is advantageous
when considering measurement resources and time;
a closed loop system requires continuous
measurement, whereas an open loop system requires
a single measurement. If many POIs require
measurement and actuation, closed loop systems are
bottlenecked by the metrology resource; this
happens to a much lesser extent with an open loop
system, as the measurement system can sequentially
measure each POI without stopping.


Figure 8 - Close up of the Zero point clamp
Figure 9 highlights the hexapod's native co-ordinate
frame after the origin has been translated to the
POI; it follows that measurements taken from this
new co-ordinate frame are essentially deviations
from the POI's nominal position. Consequently, the
co-ordinates, and hence the deviations from
nominal, are streamed from SA via UDP.


Figure 9 - Hexapod's native co-ordinate frame after
transformation to CAD nominal position of hinge line axis





Figure 10 - Closed loop control of fixture

Figure 11 - Open loop control of fixture
The measurement instrument used for the trial
was a Leica AT901 laser tracker; this was a readily
available instrument with a good level of accuracy,
capable of real-time, three dimensional
measurement. 3D co-ordinates were assumed to be
appropriate since the compliance of the material is
limited to two dimensions and this phase of the
trials is assessing the feasibility of the metrology
enhanced tooling philosophy. Figure 12 shows the
laser tracker point of measurement relative to the
zero-point clamp attachment point.



Figure 12 - Facility tooling for targeting the hinge line
4. RESULTS
The trial focused on moving two axes, without
rotation, engaging the hexapod's y-axis (Figure 9:
green arrow) and z-axis (Figure 9: blue arrow).
The reason for not actuating the x-axis was
structural: longitudinal movement was likely to
add additional stress to the fasteners, as the structure
had high rigidity in this plane. Rotational
movements were excluded at this stage because
only one POI was monitored; rotational movement
is more appropriate when best-fitting multiple
points. The most out-of-tolerance axis was z. The
closed loop configuration moved the POI a total of
0.421 mm in the y-axis and -1.572 mm in the z-axis;
the hexapod achieved the designated tolerance
threshold (300 µm) within two iterative cycles.
This is summarized in Figure 13.
However, the hexapod's encoders registered a
movement of 1.103 mm in the y-axis (Figure 14) and
-2.412 mm in the z-axis (Figure 15); this
difference can be attributed to the material
compliance. This confirms the assumption that the
POI and hexapod do not act as a rigid body.
However, Figure 14 and Figure 15 show that
after around 5 iterations the deviation between the
POI and hexapod displacements begins to level out,
reducing the significance of the component
deflection and offset.



Figure 13 - POI displacement from nominal in y-axis and z-axis after each move iteration, with measurement uncertainty bars
indicated

Figure 14 - Measured POI displacement (with uncertainty indicated) compared with displacement from hexapod's encoders; in
the y-axis




Figure 15 - Measured POI displacement (with uncertainty indicated) compared with displacement from hexapod's encoders; in
the z-axis

5. CONCLUSIONS
Figure 16 shows the elements of the META
framework exercised through the live fixturing
study.
The closed loop model holds obvious limitations
in terms of measurement resources; if the fixturing
requires 3DOF or 6DOF manipulation within a
global co-ordinate system, then the measurement
instrument is likely to be prohibitively expensive to
deploy on each actuated part. Subsequently, the
closed loop model has to multi-hop instruments to
each actuated pick-up. This is an inherently time
consuming process; however, the bottle-necking due
to the metrology resource on multiple POIs could be
reduced by cycling through each of the POIs and
assuming that a small number of iterations is
necessary to achieve the tolerance. This would
negate the requirement for metrology monitoring
and is substantiated in Figure 13, Figure 14 and
Figure 15. However, this is reliant on a degree of
actuator accuracy.
On the other hand, a closed loop model does
mean that the actuators do not need to be accurate,
just to have a fine resolution of movement. If the
pick-up only needs local measurement, or describing
in one or two dimensions, then inexpensive
measurement systems could be deployed on each
manipulator and closed loop systems could be used.
Open loop systems may be a more economical
solution as an enabler of live fixturing; one laser
tracker or photogrammetric survey could measure
all the POIs and the accuracy of the actuators could
be relied on to position the pick-ups to within
tolerance. However, this would need rigid body
relationships to be established, or known deflections
to be compensated for.



Figure 16 - META Framework with aspect utilized in hexapod control highlighted
6. ACKNOWLEDGMENTS
We would like to thank Steven Lockett and Geraint
Green (PI) for their technical support, as well as Dr
Tony Smith and the team (University of
Nottingham) for co-ordinating resources and hosting
the project.
This paper is part of a PhD sponsored by Airbus
UK and the EPSRC Innovative Manufacturing
Research Centre at the University of Bath (grant
reference GR/R67507/0).
REFERENCES
Burley G, Odi R, Naing S, Williamson A and Corbett J,
Jigless aerospace manufacture - The enabling
technologies, in Aerospace Manufacturing
Technology Conference & Exposition. Bellevue,
Washington: Society of Automotive Engineers, 1999
Durrant-Whyte HF, Sensor models and multisensor
integration, International Journal of Robotics
Research, Vol. 7, No. 6, 1988, pp 97-113
Gooch R, Optical metrology in manufacturing
automation, Sensor Review, Vol. 18, No. 2, 1998, pp
81-87
Huang Y, Lan Y, Hoffmann WC and Lacey RE,
Multisensor Data Fusion for High Quality Data
Analysis and Processing in Measurement and
Instrumentation, Journal of Bionic Engineering, Vol
4, No. 1, 2007, pp 53-62
Jayaweera N and Webb P, Metrology-assisted robotic
processing of aerospace applications, International
Journal of Computer Integrated Manufacturing, Vol.
23, No. 3, 2010, pp 283-296
Maropoulos PG, Guo Y, Jamshidi J and Cai B, Large
volume metrology process models: A framework for
integrating measurement with assembly planning,
CIRP Annals - Manufacturing Technology, Vol. 57,
No. 1, 2008, pp 477-480
Martin OC, Muelaner JE, Tomlinson D, Kayani A and
Maropoulos PG, Metrology Enhanced Tooling for
Aerospace (META) Framework, Proceedings of the
36th International MATADOR Conference,
Manchester, 2010, pp 363-366
Rooks B, Assembly in aerospace features at IEE
seminar, Assembly Automation, Vol. 25, No. 2, 2005,
pp 108-111
Saadat M and Cretin C, Dimensional variations during
Airbus wing assembly, Assembly Automation, Vol.
22, No. 3, 2002, pp 270-279
Weckenmann A, Jiang X, Sommer KD, Neuschaefer-
Rube U, Seewig J, Shaw L and Estler T, Multisensor
data fusion in dimensional metrology, CIRP Annals -
Manufacturing Technology, Vol. 58, No. 2, 2009, pp
701-721




PLANNING SOFTWARE AS A SERVICE - A NEW APPROACH FOR
HOLISTIC AND PARTICIPATIVE PRODUCTION PLANNING PROCESSES
Robert Moch
Chemnitz University of Technology
robert.moch@mb.tu-chemnitz.de
Egon Mller
Chemnitz University of Technology
egon.mueller@mb.tu-chemnitz.de



ABSTRACT
The application of planning software for production systems is on the point of changing
fundamentally. Technological innovations such as Cloud Computing and the resulting developments,
for instance the rising adoption of Software as a Service license models, offer manifold benefits for
enterprises in their software management and establish new possibilities of using software for
project driven work. This paper proceeds from the relation of current software to specific planning
phases and processes and reveals possible application scenarios for Planning Software as a Service,
with a description of the arising chances and risks. On the basis of a modest morphological analysis
of software applications, crucial determinants are identified and afterwards evaluated based on
scenario building. The result is guidance for adopting Software as a Service in planning phases
and processes, supporting the user in actively effectuating the upcoming change with minimized risks.
KEYWORDS
Factory Planning, Cloud Computing, Software as a Service, Web 2.0 Technologies


1. INTRODUCTION
Cloud Computing vendors promise manifold
benefits for customers utilising this new technology,
although potential users are still uncertain about the
chances and security of these innovative services.
Research in the field of internet services is lacking,
especially in the field of digital enterprise
technologies for planning and controlling
production systems.
This article arose from research work within the
interdisciplinary project IREKO (Günther and
Moch, 2009), which is funded by the European
Social Fund and the Free State of Saxony. In this
project, researchers from the field of factory planning
and management and organizational researchers
work on the sustainable implementation of innovation
in regional work contexts. This paper will illuminate
the emerging innovation of Cloud Computing and its
effect on planning production systems.
Initially, this paper will clarify the objective of
the research and the research questions. After
defining the term Cloud Computing and its
underlying terminologies, the paper will outline the
state of the art of factory planning projects.
Following that, the methods conducted will be
depicted and the findings and conclusions explained.
2. OBJECTIVE
The purpose of this paper is to discuss the
advantages of utilizing factory planning software
'in the cloud' and to give guidance for adopting
Cloud Computing technologies in the work context
of factory planning issues. The research questions to
be answered are the following:
(1) What chances and risks appear through the
availability of Cloud Computing for planning
factories and production systems?


(2) In which way can these chances be utilized
for planning projects?
(3) How can disadvantages and risks in using
Cloud Computing technologies for planning
production systems be avoided?
3. STATE OF THE ART
3.1. CLOUD COMPUTING TERMINOLOGIES
Cloud Computing is seen as the new paradigm for
providing software and hardware to business and
private customers. In a very simplified way, Cloud
Computing can be described as a technology which
enables users to access data and software from a
remote station with a display, without the need for
special computing performance on their site.
The idea of providing computing power on demand
was first mentioned by Parkhill (1966). Irwin (1967,
p. 223) describes Parkhill's concept of computer
utilization as follows:
'This development enables subscribers to have
access to and to share simultaneously a centrally-
located computer. The user need not be adjacent to
the computer's site. On the contrary, the trend is for
users to be located at some remote station or
terminal and through telephone lines gain
admittance to the computer's logic and memory.'
Cloud Computing is still a developing
technology and various definitions can be found in
the literature. For this article, the definition of the
National Institute of Standards and Technology
(NIST) is adopted (Mell and Grance, 2011, pp. 2):
Cloud computing is a model for enabling
ubiquitous, convenient, on-demand network access
to a shared pool of configurable computing
resources (e.g., networks, servers, storage,
applications, and services) that can be rapidly
provisioned and released with minimal management
effort or service provider interaction. This cloud
model promotes availability and is composed of five
essential characteristics, three service models, and
four deployment models.
The essential characteristics of cloud computing
are:
On-demand self service
Broad network access
Resource pooling
Rapid elasticity
Measured service
The deployment models can be subdivided into:
Private Cloud, Community Cloud, Public Cloud
and Hybrid Cloud. A Private Cloud is a service
that is exclusively provided for one enterprise or
organisation. This service can be offered by the
organisation itself or by a third party. The term
Community Cloud refers to providing IT
performance via the internet for a certain group of
enterprises and organisations. This service can also
be offered by a third party or by one of the
organisations itself. The term Public Cloud is
understood as a deployment model which is
characterised by one provider and several
enterprises or special groups of users as customers.
A Hybrid Cloud can be provided privately,
communally or publicly; the crucial criterion is the
interconnection of several discrete clouds through
adjusted and standardized technologies. This
deployment model is commonly applied for load
balancing between clouds.
Currently, the majority of Cloud Computing
services are provided by large-scale companies like
Microsoft, Amazon, IBM or Google as Public
Clouds, though there is a trend of small and
medium sized IT vendors offering specific customer
driven cloud solutions as Private or Public Clouds.
These offers particularly target customers who
appreciate regional contiguity and individual
support considering regional directives and
requirements.
NIST distinguishes three different Cloud
Computing service models: Infrastructure as a
Service (IaaS), Platform as a Service (PaaS) and
Software as a Service (SaaS). IaaS is understood as
the deployment of virtual hardware systems via the
internet. Users can freely install operating systems
and software; hence the customer is able to
determine the configuration of data storage,
applications and network components. The
provisioning and allocation of the virtual and
physical resources is assumed by the service
provider; hence the customer cannot control the
utilised hardware. The term PaaS refers to the
deployment of hardware and an operating system as
a development platform for applications. Users are
able to install, develop and test certain software
without bearing the related administration and
configuration effort for hardware and middleware.
Applications which are available as SaaS are
complete computer programs operated through
browsers or similar thin client systems. The
software is not installed on the customer's site,
which releases her or him from managing hardware,
operating system, drivers and applications. The
benefit of all service models is that no asset costs
occur; the services are billed according to
consumption. (Mell & Grance, 2011)
When discussing Cloud Computing
terminologies, Web 2.0 conceptions cannot be
ignored. Principally, Web 2.0 represents public SaaS
applications. Hoegg et al. (2006) state that Web 2.0
is not a specific technology; it is a philosophy of
cooperating within open standards and open minded
thinking. Web 2.0 is a participatory, user-centric
and collaborative way to create and obtain
information (Dearstyne, 2007). Web 2.0 enables
users to create content by exchanging information
and underlies self-regulating evolution processes
with embedded or formalized quality assurance
mechanisms (Hoegg et al., 2006). Conventional
knowledge management systems do not have the
capacity to create collective intelligence
(Grossmann and McCarthy, 2007). Web 2.0
facilitates efficient knowledge creation through
intuitive operation for many users (Knights, 2007).
According to McAfee (2006) and Hinchcliffe (2007),
the implementation of Web 2.0 in a company has
to be associated with social and organisational
innovations in order to be successful.
3.2. DIGITAL ENTERPRISE TECHNOLOGIES
AND CLOUD COMPUTING
Cloud Computing is still a research challenge
(Zhang, Cheng and Boutaba, 2010). The
development of the Cloud Computing industry is
seen as a chance for small and medium sized
enterprises (SME) to utilise IT applications not yet
implemented for financial or strategic reasons. For
many SME it is not profitable to invest in the
acquisition, implementation, administration and
maintenance of IT systems; they thus lose potential
in the planning and controlling of processes and,
furthermore, in their innovativeness. Through the
change of paradigm from IT buying to IT renting,
new utilisation strategies become possible. Within a
very short time, enterprises are able to deploy the
requested IT, because hardware and software are
already pre-installed by the service vendor. Merely
the configurations of user-specific settings and
access privileges have to be adjusted; hence, the
enterprises gain IT flexibility. The IT systems are
adapted in line with changing requirements: if, for
example, larger storage capacity is needed, virtual
scaling is possible at the click of a mouse; if new
software features are required, they can be booked.
Services that are no longer needed can be cancelled,
no longer causing any costs. A further benefit is
increased data security, since the files are secured
by a service vendor whose core competency is the
provision of safe cloud services.
The largest obstacle still to be conquered by Cloud
Computing technologies is the availability of the
internet. The potential of the applications is directly
correlated to the obtainable bandwidth. For instance,
complex 3D applications require high data
frequencies between the cloud and the user's
computer. 3D-CAD (three-dimensional Computer
Aided Design) software within the cloud is possible
and has already been developed by several vendors.
The benefits of such applications are the elimination
of installation and configuration effort, the decrease
of required computation performance at the user's
site and the creation of new possibilities for
cooperation. In an inter-company product
development project, for example, the utilised CAD
systems mostly vary. The work with exchange
formats is unavoidable, causes extra effort and
introduces additional sources of error. Through the
possibility of software application on demand,
enterprises are able to use common CAD systems
with their cooperation partners. By realising
cloud-based Product Lifecycle Management (PLM),
even more benefits emerge for cooperating
enterprises: Cloud Computing technologies offer the
possibility of creating a common database for the
enterprises of one supply chain to ensure
cross-company PLM. Enterprise Resource Planning
(ERP) systems are already successful SaaS
applications. Especially SME realize the advantages
and chances Cloud-ERP offers compared to
traditional ERP systems. However, changing the
ERP system vendor is very complex and expensive;
unpredictable efforts and costs may arise and the
risk of data loss is high. Through combining SaaS
and PaaS approaches, those disadvantages can be
avoided and the benefits of Cloud Computing
solutions utilised. In this use case, the hardware and
operating system are provided as PaaS and the
accustomed ERP system is installed by a second
service vendor offering the system as a service.
Hence the customer enterprise gains resources,
because the administration of software and the
scaling of hardware are done by service partners
while the enterprise is able to use the system as
usual. Furthermore, cloud-enabled ERP systems
open up possibilities for planning and controlling
the occupancy of production and order progress
from remote locations, for example by browser
based Production Planning and Control (PPC)
applications.
A clear cut in the vertical integration of cloud
software systems in enterprises emerges between
PPC and Manufacturing Execution Systems (MES).
Production-near systems like factory data capture or
production and logistics control centres, or even
complete MES, contribute essentially to the
maintenance of the entire production and are fully
integrated. The majority of the MES hardware
systems are on the shop floor and cannot be moved
into the cloud. Service and troubleshooting require
corresponding competencies of the workers on the
shop floor, which would be hard for service
providers to gain. Only the enforcement of universal
standards for MES will bring the possibility and
reasonableness of MES software as an internet
service.


3.3. PRODUCTION SYSTEM PLANNING
In the research field of production system planning, the Department for Factory Planning and Factory Management of the Chemnitz University of Technology has made an essential contribution to the structuring and execution of factory planning projects. Schenk et al. (2010) developed the 0+5+X Planning Model. This planning model describes the line of actions necessary to process a factory planning project, consisting of the three complexes 0, 5 and X. The first complex, 0, is the project definition. The second complex, project development, is highly software-intensive. The third and last complex is called project implementation. This research article focuses on the second complex, precisely because of its intensive software usage. This complex holds five sub-complexes:
Determination of the Production and
Performance Program
Determination of Functions
Dimensioning
Structuring
Design
Furthermore, all production system planning
projects can be associated with one or more of these
four basic cases:
Basic case A: the new development of production facilities
Basic case B: the reconfiguration of existing production facilities
Basic case C: the expansion of existing production facilities
Basic case D: the decommissioning of existing production facilities, also called revitalization
For all basic cases, the above-mentioned planning model can be applied. For the second planning complex, the assignment of digital factory planning tools to the sub-complexes according to Günther (2005) is shown in Figure 1. Spreadsheets and calculation software are commonly utilized to determine the production and performance program of a factory and to specify and quantify the functions and processes. Computer Aided Design (CAD) and Computer Aided Manufacturing (CAM) systems are also used to determine certain processes and to support quantitative decisions. Virtual Reality (VR) systems are utilized to support planning projects in the very detailed phases of structuring and designing plant systems. Model-based tools, such as visTABLE (Plavis, 2010), support dimensioning, structuring and designing tasks. Simulation tools are also deployed to dimension and structure plant systems, and to determine functions and processes. Holistic production system planning tools can be utilized for entire planning projects, though they mostly consist of several of the above-mentioned tools.
Moving these tools from individual planning service companies and institutes to the internet and offering them as SaaS applications promises manifold benefits. Two anticipated and very relevant advantages are improved participation and holistic, continuous planning processes, since all stakeholders are able to operate with the same systems and databases. Participative planning processes in particular are seen as the key to long-term success.
4. METHODS
4.1. MORPHOLOGICAL ANALYSIS
Figure 1 Production system planning project development and according digital tools (sub-complexes: production and performance program; determination of functions and processes; dimensioning; structuring; design. Tools: spreadsheets; CAD/CAM systems; VR systems; model-based tools; simulation; production system planning tools)

Having related current software to specific planning phases and processes, this paper will now reveal
possible application scenarios for Planning Software
as a Service with a description of the arising
chances and risks. Beginning with a modest
morphological analysis of software applications for
production planning projects, crucial benefits are
identified and valued based on Cloud Computing
application scenarios.
The modest morphological analysis is a method of morphological research (Zwicky, 1989). To derive the morphology of an object, Schlicksupp (1989) suggests five steps:
(1) Analysis, definition and generalization of the object.
(2) Definition of all characteristics of the research object.
(3) Arrangement of all characteristics in one column of the morphological table.
(4) Definition of all attributes (practical and theoretical) belonging to the characteristics and assignment to the lines of the morphological table.
(5) Combination of all possible attributes, to figure out all varieties of the research object (see the sketch below).
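As an illustration of step (5), the following minimal Python sketch enumerates all combinations over a small, assumed subset of the characteristics and attributes from Figure 2; the dictionary contents are illustrative, not the full morphology.

from itertools import product

# Characteristics of the research object and their attributes
# (an illustrative subset of the morphology in Figure 2).
morphology = {
    "Procedure": ["Initial", "Corrective"],
    "Type": ["Analog", "Digital"],
    "Depiction": ["2D", "3D", "VR"],
    "Scope": ["Rough Layout", "Detailed Layout"],
}

# Step (5): combine all possible attributes to enumerate all varieties
# of the research object; an exact morphology would additionally apply
# exclusion rules between mutually dependent attributes here.
characteristics = list(morphology)
for combination in product(*(morphology[c] for c in characteristics)):
    print(dict(zip(characteristics, combination)))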
Modest morphology means that not all characteristics are mutually exclusive and independent of each other, as would be the case in an exact morphology. The modest morphology of
planning tools is shown in Figure 2, referring to a characterization by Günther (2005). The grey fields
of the attributes in this figure highlight the strengths gained by utilising planning tools through Cloud Computing technology. The white cells, on the other hand, represent weaknesses for Cloud Computing technologies. By applying the procedure of morphological analysis, the main benefits of planning tools in the cloud appear to be, among others, multi-user operation for collaborative work and fast planning support, especially while being locally independent of the planning team. The compatibility with CAD systems and databases can also be assured by Cloud Computing and enhanced by inter-company data management systems. The morphology further indicates that the utilisation of planning software as an internet service should start at the beginning of the project; introducing such software in hindsight may cause more effort than benefit. An additional indication from the morphology concerns the performance of the planning tool with respect to depiction and scope. For 2D applications and rough layouts, the performance of cloud services, which mainly depends on bandwidth, suffices. 3D and VR systems for detailed layouts generate enormous data rates; hence these services are difficult to move into the cloud. The development and changing of complex models in the cloud is therefore a challenge from the present point of view. The application of a model library or a data management tool appears most feasible.
4.2. SCENARIO METHOD
Scenario method, scenario building and scenario technique are methodical research approaches to prognosticate future situations (Mietzner, 2009). All scenario research methods have one thing in common: they are generated on the basis of present developments (Lippold and Welters, 1976). Referring to Slaughter (2000), scenarios are used to reveal trends and alternative developments. According to Porter (1999), scenarios are especially useful for young industries with high uncertainties, to expose different possible situations of the future. These images of the future are the basis for planning processes, testing ideas and inspiring new developments (Ratcliffe, 1999). For the case of factory planning and the emerging new technology of Cloud Computing, two scenarios were developed to describe possible future trends concerning service and deployment models.
Figure 2 Modest morphology of digital planning tools (characteristics with their attributes):
Procedure: Initial; Corrective
Type: Analog; Digital
Type of model: Formal; Analog; Pictorial
Depiction: 2D; 3D; VR
Operation: Multi-user; Apportioned; Intuitive; Specialist; Planning Support
Compatibility: CAD; Database
Modelling: Development; Changes; Library
Scope: Rough Layout; Detailed Layout
Evaluation: Flow intensities

The first scenario is shown in Figure 3. In this scenario, a Cloud Computing software vendor
offers planning software as a service to customers.
The benefit for customers is that they do not have to buy planning software and purchase licenses. They are able to rent the software only for the life span of a project. A further advantage is the facilitation of collaborative work on data in a community cloud, whilst being independent of the local site.
Within the second scenario, shown in Figure 4, the SaaS vendor is also an IaaS customer. That offers the possibility of rapidly adjusting computing performance to the customers' needs, without hardware adjustments at the planning service vendor's site. For the planning service customers, apparently no difference to scenario 1 occurs, but their risks change.
The main risk for customers in both scenarios is
their dependence on the cloud service vendors.
Especially data security and the persistence of the
service are important risk factors. In scenario 1 the
risk of data piracy is lowered, because only one
vendor processes the data in a community cloud,
where exclusively enterprises operating in one
project access them. In scenario 2, the risk of
dependence on one vendor is lowered by splitting
up the two core services: application and
infrastructure. If one of the vendors vanishes, the
remaining vendor deputizes on an interim basis.
Another benefit for the vendors is the opportunity to
concentrate on their core business, such as cloud
software or cloud infrastructure.

5. FINDINGS
Summarizing the results of the morphological
analyses and the scenario method, the chances and
risks of digital factory planning tools utilized by
Cloud Computing technology are unveiled and the
research questions can be answered. The chances of
planning systems as an internet service are:
more opportunities for collaborative
planning projects
fewer exchange formats for files
consistent data stock
involvement of all stakeholders in the
complexes of the planning processes
holistic planning through all planning
levels
Risks of the investigated tools are:
rising dependence on service vendors
persistence
data security risks
lower performance through bandwidth
dependence
To utilize these chances, two possible future scenarios were developed, based on present trends and knowledge. Those scenarios show that, in the future, planning software vendors will offer their digital tools to their customers for a certain span of time under different business models. To avoid the risk of vendor dependence, a splitting of core functions can be introduced. To avoid the risk of data piracy, the
vendor should offer only community services to a certain group of customers.

Figure 3 Future case scenario 1: a combined planning and cloud service vendor provides planning data and software (Planning-SaaS in a community cloud) on its own hardware to the planning service customers

Figure 4 Future case scenario 2: a planning service vendor provides planning data and software (Planning-SaaS in a community cloud) to the planning service customers, while a cloud service vendor provides the hardware (IaaS in a public cloud)
As guidance for the project management of a
factory planning project the following
considerations can be stated in addition to Schenk et al.'s (2010) planning model:
I) Project definition: The planner needs to
define the types of data, files and models
to be used in the project and consider
how and where they can be stored in due
consideration of collaborative chances
and security risks.
II) Project development: The required software
functions according to the sub-complexes
have to be stated and corresponding
Cloud Computing services have to be
chosen or developed considering
emerging risks and chances.
III) Project implementation: The project planner needs to ensure that the corresponding data of the planning outcomes are available for the stakeholders realizing their share of the project. In this complex, the benefits of Cloud Computing services, such as a single database, no redundancies and flexible data access management, provide enormous facilitation.
6. CONCLUSION
The discussion in this article shows that Cloud Computing facilitates participative and holistic planning processes. Still, these new approaches have to be applied successfully in order to become innovations (O'Sullivan and Dooley, 2008). Globally cooperating enterprises continue to challenge the proper development of digital enterprise technologies (Wiendahl, 2009). Regional production networks also seek adequate support by IT systems (Moch and Müller, 2010). Cloud Computing can contribute to solving these problems in the manner of Teich's (2002) Extended Value Chain Management (EVCM), which describes cross-enterprise production planning and control and proposes support by an Application Service Provider (ASP), an approach similar to SaaS.
Therefore, more research is required, especially into the technological and organisational limits of Cloud Computing technologies.
REFERENCES
Dearstyne BW, Blogs, mashups & wikis, oh my!,
Information Management Journal, Vol. 41, No. 4,
2007, pp. 24-33
Grossman M and McCarthy RV, Web 2.0: Is the enterprise ready for the adventure?, Issues in Information Systems, Vol. 8, No. 2, 2007
Günther L and Moch R, IREKO Sustainable Implementation of Innovation in Regional Workcontext, TU Chemnitz, 2009, Retrieved: 30/06/2011, <http://ireko.tu-chemnitz.de/index.html.en>
Günther U, Methodik zur Struktur- und Layoutplanung wandlungsfähiger Produktionssysteme, IBF TU Chemnitz, Chemnitz, 2005
Hinchcliffe D, The state of Enterprise 2.0, ZDNet,
2007, Retrieved: 30/06/2011,
<http://www.zdnet.com/blog/hinchcliffe/the-state-of-
enterprise-20/143>
Hoegg R. et al., Overview of business models for Web
2.0 communities, Paper Presented at the GeNeMe,
Dresden, 2006
Irwin MR, The Regulatory Status of the Computer
Utility, Land Economics, Vol. 43, No. 2, 1967, pp.
223-227
Knights M, Web 2.0, Communication Engineer, Vol. 5,
No. 1, 2007, pp. 30-35
Murugesan S, Understanding Web 2.0, IT
Professional, Vol. 9, No. 4, 2007, pp. 34-41
McAfee AP, Enterprise 2.0: The Dawn of Emergent
Collaboration, Sloan Management Review, Vol. 47
No. 3, 2006, pp.20-29
Mell P and Grance T, NIST Definition of Cloud
Computing v15, NIST, 2011, Retrieved: 30/6/2011,
<http://csrc.nist.gov/publications/drafts/800-
145/Draft-SP-800-145_cloud-definition.pdf>
Mietzner D, Strategische Vorausschau und Szenarioanalysen - Methodenevaluation und neue Ansätze, Reger, G. and Wagner, D. (eds.), Gabler Research, Wiesbaden, 2009
Moch R and Müller E, Digital Enterprise Technologies
in Regional SME Production Networks, Book of
abstracts of the CENTERIS 2010 - Conference on
ENTERprise Information Systems, Quintela Varajao
JE, Cruz Cunha MM, Putnik GP, Trigo Ribeiro A
(eds.), UTAD & IPCA, Portugal, 2010, pp. 19
Plavis, visTABLE innovative Planungswerkzeuge,
2010, Retrieved: 30/06/2011,
<http://www.vistable.de>
Porter ME, Wettbewerbsstrategie: Methoden zur
Analyse von Branchen und Konkurrenten, Campus
Verlag, Frankfurt, 1999
O'Sullivan D and Dooley L, Applying Innovation,
Sage, Thousand Oaks, 2008
Parkhill D, The challenge of the computer utility,
Addison-Wesley, Reading, 1966
Ratcliffe J, Scenario Building: A Suitable Method For
Strategic Property Planning, The Property Research
Conference of the RICS, St John's College,
Cambridge, 1999


Schenk M, Wirth S, Mller E, Factory Planning Manual
Situation-Driven Production Facility Planning,
Springer, Berlin Heidelberg, 2010
Schlicksupp H, Innovation, Kreativität, Ideenfindung, Vogel, Würzburg, 1989
Slaughter R, Futures: Tools and Techniques, Indooroopilly, Qld., 2000
Teich T, Extended Value Chain Management - ein Konzept zur Koordination von Wertschöpfungsnetzen, TU Chemnitz, Chemnitz, 2002
Wiendahl HP, Global Manufacturing Challenges and
Solutions, Digital enterprise technology: perspectives
and future challenges, Cunha P and Maropoulos P
(eds.), Springer, New York, 2009, pp. 15-24
Zhang Q, Cheng L and Boutaba R, Cloud computing: state-of-the-art and research challenges, Journal of Internet Services and Applications, Vol. 1, No. 1, 2010, pp. 7-18
Zwicky F, Morphologische Forschung, Baeschlin,
Glarus, 1989
Proceedings of DET2011
7th International Conference on Digital Enterprise Technology
Athens, Greece
28-30 September 2011

BEYOND THE PLANNING CASCADE:
HARMONISED PLANNING IN VEHICLE PRODUCTION
S. Auer
Fraunhofer Austria Research GmbH
Fraunhofer Project Center for Production
Management and Informatics at SZTAKI
Stefan.Auer@fraunhofer.at
W. Mayrhofer
Fraunhofer Austria Research GmbH
Vienna University of Technology
Fraunhofer Project Center for Production
Management and Informatics at SZTAKI
Walter.Mayrhofer@fraunhofer.at

W. Sihn
Fraunhofer Austria Research GmbH
Vienna University of Technology
Fraunhofer Project Center for Production Management and Informatics at SZTAKI
Wilfried.Sihn@fraunhofer.at



ABSTRACT
Medium-term sales and operations planning and medium- to short-term production planning in the automotive industry often employ cascading planning processes. One of the shortcomings of cascading planning is the lack of coordination and feedback between different planning phases. Costly problems in production due to infeasible production programs and the necessary troubleshooting are often caused by unavailable resources or limited supplier capacities, because these restrictions of subsequent levels weren't discovered during long-term planning. The establishment of a system for the classification of planning restrictions and their originators is the main topic of this paper. In addition, it will highlight the connection between single planning tasks and the correlation of restrictions between different planning horizons. Finally, an experimental setup for the implementation of such a harmonised system will be presented.
KEYWORDS
Sequencing, Constraint Programming, Integrated Planning, Harmonised Planning

1. THE PERILS OF CASCADING
PLANNING
In the automotive and other industrial sectors
various systems for planning sales, purchasing,
production and supply chain management are used.
Those systems are often poorly harmonised and, in extreme cases, incompatible. Rivalling requests regarding scope, planning horizons, interest spheres, functionalities and organisational structure are frequently observed results of those shortcomings.
None of the currently available bill of material
based planning systems with sequencing
capabilities supports planning throughout the
different planning and organisational levels.
Experience from industry shows that the planning
function often is congruent with the organisational
structure, i.e. sales planning is done by the sales
department and production planning is performed by
the production or logistics department. Such a
separated approach to planning seems reasonable,
since departments are considering different levels of
abstraction for their planning, i.e. the sales
department plans overall car numbers and possible


combination of the different product versions, while
production planning deals with real customer orders
and subsequently with specific configurations.
However, this segmented planning approach causes inefficiencies that are also reflected in the fragmented landscape of planning systems in operation.
As the negotiations with and selection of suppliers require forecasts of the production output generated from sales plans at an early stage, the function of supply chain planning is often placed between sales and production planning. In order to identify cumulated material and part requirements, detailed forecasts about vehicle configurations and demands per part number are required.
Generally, the tools used in those planning processes allow for the integration of known restrictions into the planning model. However, only major constraints are recognized, and less prominent impediments are only discovered by chance or through the skill of an experienced planner. Small changes of the planning criteria, e.g. when the sales department overestimates the number of vehicles to be sold, are compensated by adapting internal or external capacities to make production possible. At the moment, such changes are conducted manually across all hierarchical planning levels, which is tedious and fault-prone. The resulting complexity and the data volume generated by the breakdown of the bill of material are not supported by most of the planning solutions for the long and medium term.
The development of an integrated planning approach as the basis of a software tool that harmonises the planning tasks over the different planning horizons is the subject of an ongoing research project called HarmoPlan. This paper will present the groundwork for the integrated planning approach, which is based on a classification of the origins of planning constraints, and the resulting software tool to bridge the usually employed planning cascade.
2. PRODUCTION FACTORS AND THE
SEQUENCING OF HIGH VARIETY
PRODUCTION LINES
Industrial production is the transformation of production factors into products. As shown in Figure 1, factors for planning purposes and fundamental factors can be differentiated. The key to a firm's success is the harmonisation of its production factors.


Figure 1 Production factors (Gutenberg, 1983)
Synchronized flow production is a flow-oriented
production system where parts are moved by means
of a transportation system through the production
stations arranged in sequence, in which the
machining time is restricted by a cycle time (Kis,
2004). The ongoing project HarmoPlan focuses on the planning process of final assembly in vehicle and component factories, where variant flow production with low automation and high labour intensity exists (Boysen et al, 2009).
Figure 2 shows a planning cascade for a
synchronized production line. Production factors
have an impact on different aggregation levels and
therefore have to be taken into account on every
level involved.
The key target of planning is to align market
requirements with disposable production factors.
Production restrictions in long- and mid-term
planning are fixed and production factors are limited
according to stated context-related restrictions.
Tasks and boundaries have to be coordinated to
avoid conflicts like bottlenecks on the one hand or
under-utilization of the production factors on the
other hand.
Usually, a new planning cycle is triggered by a new or updated market analysis. The next step is annual budget planning, which results in continuously updated sales forecasts with a horizon of seven to ten years. Subsequently, sales planning specifies models by their main criteria, such as engine, auto body, gear box, etc., and assigns possible production plants and production volumes to them. Location-related costs and the conditions of existing or planned production plants and suppliers affect the decision about the production site. Sales projections, installation rates and production numbers are input factors for production program planning. Examples of planning restrictions are the minimum load of an assembly line resulting from the model mix, and the capacity of plants with regard to working hours, technical situation and potential bottlenecks.

Typically, production program planning is done on a continuous basis. In a sub-process, the so-called slotting, real customer orders are allocated to the quantities usually produced per week, day or shift. Orders in daily or weekly order pools are often not fully specified and hence called dummy orders. If a real order is placed, an eligible dummy order is replaced by the fully specified customer order (März et al, 2010).
The production capacity is balanced by moving orders up or down the planning sequence, in order to take restrictions (i.e. capacity, material) into account. Planning and slotting are continued up to a certain point in time, where the sequence is frozen, which means that the sequence assigns a decided production cycle to each order from the order pool (Auer et al, 2010).
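The following minimal Python sketch illustrates the slotting step described above under simplifying assumptions: the Order class, the pool layout and the eligibility test (here just "is a dummy order") are hypothetical stand-ins for the real planning data.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Order:
    cycle: int                     # production cycle assigned by the sequence
    config: Optional[dict] = None  # None marks a not fully specified dummy order

def place_real_order(pool: list, config: dict) -> Optional[Order]:
    # Slotting: replace the first eligible dummy order in the pool with a
    # fully specified customer order. Real eligibility checks (capacity,
    # material, delivery date) are omitted in this sketch.
    for order in pool:
        if order.config is None:
            order.config = config
            return order
    return None  # pool exhausted; the order must move to another period

pool = [Order(cycle=c) for c in (101, 102, 103)]   # weekly order pool
slot = place_real_order(pool, {"model": "GB", "engine": "2.0 Diesel"})
print(slot)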



Figure 2 Planning tasks and interrelationships with fundamental factors, product and market (Auer et al, 2011)
3. PLANNING CONSTRAINTS
Fundamental factors of production (workforce, equipment, material) are adjusted to the needed capacities of a production program during production planning. The uncertainty of the requirements increases with the planning horizon. In the early planning steps (sales planning), only planned quantities exist, based on the sales volumes of the last period and the forecasts for the next, specified by main items (engine, body, design, etc.); no real customer orders exist yet. Sequencing in short-term planning, by contrast, requires fully specified orders, which include a delivery date and a dedicated customer (build-to-order) or a dealer or market allocation (build-to-stock).
Accurate information about capacity limitations
and required capacities for the existing or planned
orders of each period are necessary to realise valid
and consistent results across the whole planning
cascade. Basic planning data are also dubbed constraints if the solution space is limited by excluding certain events or sequences of events. Dangelmaier distinguishes between inherent constraints and task-related constraints (Dangelmaier, 2009). Inherent constraints are balancing equations or conditions valid for the entire production system. Task-related constraints denote technical, organisational and economic characteristics of the production system. This paper focuses on task-related constraints, which are relevant for every planning step of sequenced assembly lines. Furthermore, the originators of planning constraints can be classified into five groups:
Equipment
Workforce
Material
Product
Market
These groups form the five branches of the Ishikawa diagram in Figure 3. The diagram shows the fundamental factors to describe a production
system, which are essential to ensure the needed
output, at the uppermost three branches:
Equipment
Workforce
Inventory
The output of the production system is represented by the branch product, which defines which brands, models and types are available and how they can be configured. The market branch represents customer demands and needs as well as the outbound logistics, which has become more important as the minimization of transport is in focus (Bong, 2002). For a better understanding, the constraints
are illustrated with factual examples:
Equipment: A manipulator that is used to mount the front left door of a vehicle has a constant cycle time of 120 seconds, limiting the overall cycle time of the assembly line to 120 seconds. The resulting production constraint defines the number of vehicles that can be assembled per shift, day, week or month by dividing the available working time by the cycle time.
Workforce: In order to mount an electric sunroof
a station is usually manned with three persons to
cope with the workload of a common production
program. In spring and summer, when sunroofs are
ordered more frequently, it might become necessary
to loosen the restriction and opt for an additional
worker.
Inventory: If every vehicle produced requires a
certain part and the part supplier has a maximum
capacity of 500 parts per week, the output of cars is
also limited to 500 per week.
Product: Constraints on the product level mainly depend on the product structure and the interdependencies of the different versions that can be ordered. In truck manufacturing, the speed of the assembly line in combination with the truck length results in a cycle time, and the number of three- and four-axle trucks in an order pool limits the number of trucks that can be produced.
Market: A British market survey resulted in an increased sales forecast for right-hand-drive vehicles in the next period of 600 vehicles per month. This setting, as a strategic decision, directly limits the number of left-hand-drive vehicles.
Constraints can be defined as absolute or relative
constraints (Bong, 2002). Absolute constraints are
quantity or time constraints. A quantity constraint
has a variable quantity and fixed time period (e.g.
capacity 900 parts/week).


Figure 3 Originators of planning constraints


Constraints regarding time have fixed quantities and a variable time period (e.g. a product carrier has a capacity of 10 parts and will be shipped when full). Constraints can also represent a combination of two or more events. Such constraints are called relative constraints, e.g. sequence or distance constraints. Sequence constraints prohibit, for example, inefficient vehicle colour orders in the paint shop, i.e. that a white car body is succeeded by a black one. Distance constraints can set the minimum number of repetitions of a cycle, so that three cycles are required until the same option is allowed to be assembled again. These constraints are particularly relevant in short-term planning (sequencing) and have to be reinterpreted in terms of quantity or time constraints.
The boundary presented by a constraint can also be flexible, e.g. the weekly delivery lot size of a supplier, which can be adapted under special circumstances. These restrictions are called soft constraints, whilst restrictions caused by technological limitations cannot be violated and are denoted as hard constraints. During the implementation of the planning tasks it is important to take into account that soft restrictions can turn into hard constraints and that the boundaries are not necessarily constant across all planning steps.
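A possible data model for such constraints is sketched below in Python; the class and field names are assumptions made for illustration, not the authors' schema, but they capture the classification into originator groups, quantity/time/sequence/distance kinds and soft versus hard boundaries.

from dataclasses import dataclass
from enum import Enum, auto

class Originator(Enum):
    EQUIPMENT = auto()
    WORKFORCE = auto()
    MATERIAL = auto()
    PRODUCT = auto()
    MARKET = auto()

@dataclass
class Constraint:
    originator: Originator
    kind: str        # "quantity", "time", "sequence" or "distance"
    hard: bool       # hard constraints cannot be violated, soft ones can
    bound: float     # e.g. parts per week for a quantity constraint
    description: str = ""

# A soft quantity constraint: the weekly delivery lot size of a supplier,
# adaptable under special circumstances.
supplier_limit = Constraint(Originator.MATERIAL, "quantity", False, 900,
                            "supplier capacity of 900 parts/week")
print(supplier_limit)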

4. PRODUCT STRUCTURE


Figure 4 Example of product structure (Wagenitz, 2007)
The generic planning model mentioned above was developed to balance current theoretical thinking with the common practices of the European automotive industry (OEMs of passenger cars and trucks). Different planning tasks are allocated to different planning levels (e.g. strategic, tactical and operational) and horizons (e.g. short-term, mid-term, long-term) within this model. Every planning task has to deal with various planning objects, such as the number of vehicles per type or model in sales planning.
For a shorter planning horizon, the data needs to be more detailed, hence other planning objects become relevant (März et al, 2010). Figure 4 shows the relevant planning objects within the product structure, which allows the planner to derive a specific definition of each possible car configuration from the different types of vehicles. The main level defines the different model versions. Each version can further be customized with the addition or removal of optional components (Michalos et al, 2010).
To simulate a possible demand situation for required materials at an early time, when no or not enough customer orders exist to perform a planning based only on real customer orders, an estimated percentage distribution is assigned to every branch in the product structure. The material requirements can then be calculated from a combination of the determined quantities of material, the planned number of vehicles to be produced within a certain period and coding rules, which describe the interrelations of the different versions (Sinz, 2003). An example of the coding rule for the part number heavy battery is given below:
Option start-stop (O1)
Option independent vehicle heater (O2)
Option high-level audio/video system (O3)

A heavy battery is required if a start-stop system or (∨) the combination of vehicle heater and (∧) high-level entertainment system is ordered. Below is the notation of the rule:
O1 ∨ (O2 ∧ O3)

A rule-based installation logic exists for each part number in the bill of materials (BOM). Moreover, a hierarchical BOM, in which every single product is documented, could be a possibility to organise such a logic coding.
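As a hedged illustration, the rule O1 ∨ (O2 ∧ O3) could be evaluated per order as in the following Python sketch; the sample orders and the independence assumption behind the installation-rate estimate are hypothetical.

def needs_heavy_battery(options: set) -> bool:
    # Coding rule O1 v (O2 ^ O3): a heavy battery is installed if the
    # start-stop system (O1) is ordered, or the independent vehicle
    # heater (O2) together with the high-level audio/video system (O3).
    return "O1" in options or ("O2" in options and "O3" in options)

# Estimating the installation rate over an order pool; the four sample
# orders below are invented for illustration.
orders = [{"O2", "O3"}, {"O1"}, {"O3"}, set()]
rate = sum(needs_heavy_battery(o) for o in orders) / len(orders)
print(rate)  # 0.5: half of the sampled vehicles need the heavy battery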
The accumulated demand of materials has to be harmonised with the existing capacities after the demand calculation. Subsequently, the identified constraints need to be mapped according to the described product structure. Constraints can influence a single part number and the combined rule, or they might affect another level of the product structure (e.g. the body design level).

(Figure 4 depicts an example product structure: Type → Model USA / Model Germany / Model GB → Engine 1.8 R4 / 3.0 V6 / 2.0 Diesel → Body Design Convertible / Sedan / Station Wagon → Options such as colour, radio/GPS, headlight, seats.)

5. A NEW APPROACH FOR
HARMONISED PLANNING
The development of a planning system, which
supports a harmonised planning across all different
levels, is the objective of the project HarmoPlan.
Central issues of the research project are the shifting responsibilities and the different levels of abstraction along the planning cascade (i.e. the sales department plans based on the number of cars needed to meet market requirements, whereas production planning concentrates on planning objects like specific car configurations or parts lists needed to load the existing production facilities to capacity). The resulting complexity of the planning problem is a challenge for a tool that harmonises long-, mid- and short-term planning. Therefore, the chosen approach should provide a marketable solution to implement planning constraints in each planning task.
The established quantities are represented as monthly, weekly and daily volumes or order pools in long- and mid-term planning, and as order sequences in short-term planning. The different planning steps are influenced by various input data, which are contingent upon the respective planning horizons. It is important to detect the appropriate constraints for each planning step and each possible configuration, in order to align existing capacities with customer requirements and, moreover, to identify shortages at an early stage. Thus, the constraints engendered by the originators described earlier are saved in the so-called constraint manager, which collects the constraints and stores them in a standardised format.


Figure 5 Proposed planning approach
The most important attribute of the constraint manager is the traceability of the originator of each constraint within the database. If a planning task has to be conducted, a filter extracts the relevant constraints from the constraint manager. If a boundary is exceeded, the planner needs to identify the reason, in order to take measures to widen the bottleneck or to solve the problem by re-planning the process with regard to the critical constraints. The concept of the planning workflow, which will be covered by one planning tool, is shown in Figure 5.
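A minimal sketch of such a constraint manager, assuming a simple dictionary-based storage format and a filter keyed by planning task, might look as follows in Python (field names are illustrative, not the project's actual schema):

class ConstraintManager:
    # Collects constraints in a standardised format and keeps the
    # originator of each constraint traceable.
    def __init__(self):
        self._constraints = []

    def add(self, originator: str, task: str, rule: dict):
        self._constraints.append({"originator": originator, "task": task, **rule})

    def filter(self, task: str) -> list:
        # Extract the constraints relevant for one planning task,
        # e.g. "sales", "program" or "sequencing".
        return [c for c in self._constraints if c["task"] == task]

cm = ConstraintManager()
cm.add("material", "sequencing", {"kind": "distance", "min_cycles": 3})
cm.add("market", "sales", {"kind": "quantity", "bound": 600})
for c in cm.filter("sequencing"):
    print(c["originator"], c)  # a violated bound can be traced to its originator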

6. IMPLEMENTATION APPROACH
The target of the ongoing research project HarmoPlan is the realisation of an executable prototype based on the planning approach described above. This includes suitable interfaces with a state-of-the-art ERP system.
In order to develop the software solution according to the needs of future customers, it was planned from the beginning to develop the solution together with an industrial application partner. Possible application partners are companies which produce their products on sequenced assembly lines. This logically leads to the major OEMs from the automotive industry. However, since the organisational structures at the OEMs known to the project consortium are not conducive to the implementation of a prototype, a company from the special vehicle industry was chosen. This company produces vehicles at various international sites and faces the problems that are expected to be solved by the proposed solution.
In this company, the planning of quantities for the long-, middle- and short-term planning horizons is performed with the help of Excel solutions. The variant part lists are stored in the existing ERP system. In order to be able to generate the required information concerning quantities and dates for purchased parts and assembly groups, orders are stored within the system too, whereas scheduling is performed exclusively with the mentioned Excel solutions. The sequencing of the single lines is enabled mainly by the experience of every single planner. A rough set of standard rules exists. However, those rules cover the constraints only on a very rough level.
Permanently increasing numbers of parts, variants and additional options produce considerable complexity in the planning problem along the process chain and across all planning horizons. This makes it nearly impossible for the employees to oversee the interdependencies of decisions
between the different planning tasks. For this reason, capacity overloads are often detected late, and a mutual basis for exchanging information between the different departments does not exist.
The initial situation found in this company is ideal for implementing a prototype of the planning system and enables a step-by-step implementation of the concept. The procedure used to analyse the processes and establish the prototype is described below.
In order to map all business processes, it is essential to collect data about all information flows (manual and automated). Therefore, a so-called Function and Dataflow Chart (FDC) (Dörr, 2009) is used. The advantage of an FDC is its suitability for daily use and its easy application. The following figure shows a simplified example of an FDC.


Figure 6 Function and Dataflow Chart (FDC) of the implementation (recoverable labels: completion confirmation; production sequence for the upcoming week)
In addition to the process mapping, an analysis of the data and data structures is needed. This includes the following topics:
Product structure
Part lists structure
Options
Code rules
Building rates of different options
Existing sequencing rules
The concept of the future planning processes can be adapted to the needs of the different departments and to the planning tasks within the different planning horizons. As mentioned before, this is done best by means of generic planning processes.
Provisions are made for a step-by-step implementation of the planning solution. A critical point for the success of the implementation is the right modelling of the sequencing tool and the corresponding restrictions. That is the reason why the implementation starts with short-term planning; the solution will then be extended to the other planning horizons step by step, in order to finally cover the whole planning chain with one solution.

7. CONCLUSIONS
In order to attain the goal of a harmonised planning process and to harmonise the planning cascade from long- and mid-term to short-term planning, all relevant constraints have to be available for each planning task in the required dimension. For the purpose of collecting all required factors that influence the available and required capacity, their originators can be classified into five groups:
equipment
workforce
inventory
product
market
Planning restrictions from each group can be defined as absolute or relative constraints. In order to have the constraints easily available for each planning task, they are stored in an overall constraint management database in a standardized format. The realization of an integrated planning tool can help to realise the following potential:
A harmonised planning process that reduces friction between different departments
A common data set without redundancies
Early detection of bottlenecks and the ability to trace their causes, resulting in a reduction of expensive troubleshooting
Validation of the production program in each planning horizon and task
Constraints based on a common planning language for interdepartmental planning
Detection of the objects in the product structure causing bottlenecks and the setting of adequate measures in re-planning
Based on the methodology described in this paper, system specifications and the conceptual design of the envisaged planning tool are derived. Due to the high complexity of the combined planning problem, future work needs to focus on efficient solving algorithms that support well-founded and prompt decisions by the planning personnel. From a technical point of view, one of the key problems will be interoperability and the necessary interfaces between the ERP systems and the tool to be developed. For an extensive testing phase, and in order to secure the viability of the approach, an experimental setup of the solution is being developed in cooperation with industrial partners.

REFERENCES
Auer S., Winterer T., Mayrhofer W., Mrz L., Sihn W.,
Integration of Personnel and Production Programme
Planning in the Automotive Industry, Sihn W.,
Kuhlang P. , Proceedings: Sustainable Production and
Logistics in Global Production Networks, NWV,
Vienna, 2010, p 900-908
Auer S., März L., Tutsch H., Sihn W., Classification of
interdependent planning restrictions and their various
impacts on long-, mid- and short term planning of high
variety production, Proceedings: 44th CIRP
Conference on Manufacturing Systems in Madison,
Wisconsin, 2011
Bong H.-B., The Lean Concept or how to adopt
production system philosophies to vehicle logistics,
Survey on Vehicle Logistics, ECG The Association
of Vehicle Logistics, Brussels, 2002, p 321-324
Boysen N., Fliedner M., Scholl A., Sequencing mixed-
model assembly lines: Survey, classification and
model critique, European Journal of Operational
Research, Vol. 192/2, 2009, p 349-373
Dangelmaier W., Theorie der Produktionsplanung und
Steuerung, Springer, Germany, 2010
Dörr P., Fröhlich T., Optimierung der IT-Struktur im Auftragsabwicklungsprozess von kleinen und mittleren Unternehmen, Industriemanagement, Vol. 3, 2009, p 61-64
Gutenberg E., Grundlagen der Betriebswirtschaftslehre, Springer, Germany, 1983
Kis T., On the complexity of the car sequencing
problem, Operations Research Letters, Vol. 32/4,
2004, p 331-335
März L., Tutsch H., Auer S., Sihn W., Integrated
Production Program and Human Resource Allocation
Planning of Sequenced Production Lines with
Simulated Assessment, Dangelmaier, W., et al,
Advanced Manufacturing and sustainable Logistics,
Springer, Germany, 2010, p 408-419
Michalos G., Makris S., Papakostas N., Mourtzis D.,
Chryssolouris G., Automotive assembly technologies
review: challenges and outlook for a flexible and
adaptive approach, CIRP Journal of Manufacturing
Science and Technology 2, 2010, p 81-91
Sinz C., Verifikation regelbasierter Konfigurationssysteme, Fakultät für Informations- und Kognitionswissenschaften der Eberhard-Karls-Universität, Tübingen, 2003
Wagenitz A., Modellierungsmethode zur Auftragsabwicklung in der Automobilindustrie, Düsseldorf, 2007

Proceedings of DET2011
7th International Conference on Digital Enterprise Technology
Athens, Greece
28-30 September 2011

DYNAMIC WAVELET NEURAL NETWORK MODEL FOR FORECASTING
RETURNS OF SHFE COPPER FUTURES PRICE
Li Shi
Dept. of IMSE, The University of Hong Kong
amyshi0629@hku.hk
LK Chu
Dept. of IMSE, The University of Hong Kong
lkchu@hkucc.hku.hk


Yuhua Chen
Research institute of Tsinghua University in Shenzhen
jasonchen1227@gmail.com
ABSTRACT
Appropriate forecasting of commodity futures price returns is of crucial importance to achieve
hedging effectiveness against the returns volatility risk. This paper presents a nonparametric
dynamic recurrent wavelet neural network model for forecasting returns of Shanghai Futures
Exchange (SHFE) copper futures price. The proposed model employs a wavelet basis function as
the activation function for hidden-layer neurons of the neural network. The aim of this arrangement
is to incorporate the fractal properties discovered in futures price return series. In the wavelet
transform domain, fractal self-similarity information of the returns series over a certain time scale
can be extracted. Input variables are analyzed and selected to facilitate effective forecasting.
Statistical indices such as normal mean square error (NMSE) are adopted to evaluate forecasting
performance of the proposed model. The forecasted result shows that dynamic wavelet neural
network has good prediction properties compared with traditional linear statistical model such as
ARIMA and other neural network forecasting models.
KEYWORDS
Wavelet Neural Networks, SHFE Copper Futures, Forecasting, Financial Time Series, Fractal
Market
1. INTRODUCTION
Copper is an important material for industrial
production and yet its volatile price movement has
been a major concern of the manufacturing industry
in the past few years. Some practitioners choose to
use coppers exchange-traded commodity futures for
hedging against such price volatility. The
availability of an effective approach for forecasting
copper futures price is crucial in the performance of
such hedging in order to optimize production plans
or investment portfolios.
Since China is the world's largest consumer of copper, Shanghai Futures Exchange (SHFE) has hosted active trading of copper futures contracts and become an important venue for deciding the market price of copper. Therefore, the study of the price fluctuation of SHFE copper futures has assumed importance, as it is a prerequisite for
recurrent wavelet neural network model for
forecasting the returns of SHFE copper futures
prices. The model will try to minimise forecasting
errors and enhance forecasting capability compared
with other approaches.
The popular view of commodity futures prices derives from the theory of storage originating in the work of Kaldor (1939), Working (1948), and Brennan (1958). These authors try to explain the difference between synchronous spot and futures commodity prices in terms of the storage carrying cost and the convenience yield on inventory. The theory was further developed by Gibson and Schwartz (1990a), Miltersen and Schwartz (1998), and Schwartz (1997), etc. into


term structure commodity futures price models.
These models are based on storage theory. However, they do not seem to possess the flexibility needed to explain futures price fluctuations.
Since the simplest random walk model has been proved to perform better than many complex structural models in financial time series forecasting (Meese and Rogoff, 1983, 1986), researchers have started using time series analysis to forecast commodity futures prices (Taylor, 1986). Numerous studies have found that univariate time series models such as the Box-Jenkins ARIMA model perform well in forecasting commodity futures prices (Lawera, 2000). Time series analysis is useful in forecasting but lacks an economic foundation. In this paper, an ARIMA model of the copper futures price will be applied as a benchmark for comparison with the proposed model.
Commodity futures prices are not only determined by supply and demand but are also susceptible to inter-market capital movements, political developments and so on. These factors contribute to strongly fluctuating and non-linear price behaviour. Since neural networks have a high level of nonlinear approximation capability and adaptive self-learning, they offer enormous potential for the construction of a nonlinear forecasting model of commodity futures prices based on a given dataset. The use of neural networks for forecasting various commodity or commodity futures prices has been extensively studied, for example, by Grudnitski and Osburn (1993) and Zou et al. (2007).
However, most existing studies on copper futures prices have used simple Back-Propagation neural networks (BPNN) in their forecasting models. A novel recurrent neural network forecasting model is proposed in this paper. It has input-output feedback and hidden-neuron self-feedback incorporated into the BPNN. These feedback loops create internal states of the network, which allow it to exhibit dynamic temporal behaviour and hence enhance its nonlinear approximation ability.
An important feature of the forecasting approach proposed in this paper lies in the use of wavelets as the activation function for feature detection by the neural network. Wavelets can be used to approximate target functions with good effect. The combination of wavelets and neural networks can form a powerful forecasting model. Wavelets were first integrated with neural networks by Zhang and Benveniste (1992), and various architectures were then developed by Pati and Krishnaprasad (1993), Zhang, J. (1995), Zhang (1997), Billings (2005), etc. Commodity futures prices have been proved to possess a self-similarity and self-affinity structure, based on the fractal market hypothesis and fractal theory (Mandelbrot, 1982; Edgar, 1996). Since the wavelet analysis procedure is implemented with temporal translation and dilation of a mother wavelet, wavelets are found to be powerful in approximating commodity prices. The proposed model employs a wavelet basis function as the activation function of the hidden layer, whose purpose is to incorporate the fractal properties of the SHFE copper futures price.
This paper is organized as follows. In Section 2,
input data pre-processing and selection is discussed
for constructing a forecasting model expression. In
Section 3, the basics of wavelet transform are first
described. Then, a proper wavelet function is
chosen for better fitting to the target time series.
This is followed by the description of the
architecture and training of proposed model. Section
4 gives the forecasting results of SHFE copper
futures price returns using proposed model. The
concluding remarks are given in Section 5.
2. FORECASTING MODEL EXPRESSION
AND INPUT DATA SELECTION
As mentioned in Section 1, there exist two kinds of data that can be used to construct a forecasting model of copper futures prices. One is external data, which include the copper futures storage level and inter-market influences. The other is the historical data of the copper futures price. As such, the nonlinear forecasting model is constructed as follows:
$p(t) = f(p_{t-1}, p_{t-2}, \ldots, p_{t-l}) + g(x_{t-1}^{1}, x_{t-1}^{2}, \ldots, x_{t-1}^{m}) + e(t)$    (1)
(1)
where $p(t)$ is the copper futures price series; $(p_{t-1}, p_{t-2}, \ldots, p_{t-n})$ the historical data of $p(t)$; $f(\cdot)$ the representation of a nonlinear auto-regression function; $(x_{t-1}^{1}, x_{t-1}^{2}, \ldots, x_{t-1}^{m})$ the external factors; $g(\cdot)$ a nonlinear mapping function; and $e(t)$ the error between the forecasted and real prices.
It is obvious that the above dataset is a non-stationary time series, which has to be scaled in order to have its non-stationary components removed. Such scaling is required to reduce the search space of the neural networks and helps to obtain the optimal coefficients easily. The logarithmic first difference is applied to transform the non-stationary time series into a stationary series (McNelis, 2005). Considering that prices change little or not at all between two neighbouring days, the logarithmic first difference over every five days is taken. The following scaling function is applied:
$\Delta \log p_t = \log p_t - \log p_{t-5} = \log \frac{p_t}{p_{t-5}} = \log \left( 1 + \frac{p_t - p_{t-5}}{p_{t-5}} \right)$    (2)
So, $\Delta \log p_t$ has the meaning of a weekly price return rate. This approach will forecast the weekly return rate of the copper futures price instead of the price itself.


Actually, after the forecasted value of the rate is
determined, the copper futures price can be readily
obtained by applying the scaling function. Then
equation (1) can be scaled and represented as
follows:
$r(t) = f(r_{t-1}, r_{t-2}, \ldots, r_{t-l}) + g(r(x_{t-1}^{1}), r(x_{t-1}^{2}), \ldots, r(x_{t-1}^{m})) + e(t)$    (3)
The five most influential external factors are selected to form the input vectors $(x_{t-1}^{1}, x_{t-1}^{2}, \ldots, x_{t-1}^{m})$.
First, the Changjiang copper spot price in China and the copper futures inventory level in the SHFE appointed warehouse are chosen based on the traditional storage price theory. They are denoted by SP and INV respectively. Then, the London Metal Exchange (LME) three-month copper price is chosen, since LME and SHFE copper futures prices have been shown to exhibit significant correlation (Zhang, 2003). The WTI crude oil price is also included, because crude oil can impact economic welfare and indirectly influence the copper futures price. Since crude oil and other energy sources are priced in US dollars, the EUR/USD exchange rate is also adopted as an input, denoted by EUR. The input and output variables are listed in Table 1.
Table 1 Input and output variables
Input:
$r(P-l)$, $l = 1, \ldots, L$:  $100\,(\ln p_{t-5(l-1)} - \ln p_{t-5l})$
$r(INV)$:  $100\,(\ln inv_t - \ln inv_{t-5})$
$r(SP)$:  $100\,(\ln sp_t - \ln sp_{t-5})$
$r(LME)$:  $100\,(\ln lme_t - \ln lme_{t-5})$
$r(WTI)$:  $100\,(\ln wti_t - \ln wti_{t-5})$
$r(EUR)$:  $100\,(\ln eur_t - \ln eur_{t-5})$

Output:
$r(P)$:  $100\,(\ln p_{t+5} - \ln p_t)$
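Assuming the daily closing series are available as plain arrays, the transformation of equation (2) and Table 1 can be sketched in Python as follows; the price values are made up for illustration.

import numpy as np

def weekly_return(series):
    # 100 * (ln x_t - ln x_{t-5}): logarithmic first difference over five
    # trading days, i.e. the weekly return rate of equation (2).
    x = np.asarray(series, dtype=float)
    return 100.0 * (np.log(x[5:]) - np.log(x[:-5]))

# Hypothetical daily closing prices (seven trading days)
prices = [54100, 54320, 54050, 54500, 54610, 54800, 55020]
print(weekly_return(prices))  # the two available overlapping weekly returns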
3. DYNAMIC WAVELET NEURAL
NETWORKS
3.1. WAVELET TRANSFORM
This section will briefly describe the wavelet transform and discuss how wavelets can be applied to reconstruct functions or data series. The wavelet transform can be divided into two categories: the continuous wavelet transform (CWT) and the discrete wavelet transform (DWT).
The continuous wavelet transform $CWT_f(a, \tau)$ of a function $f(t)$ is given by Daubechies (1992):

$CWT_f(a, \tau) = \langle f(t), \psi_{a,\tau}(t) \rangle = \frac{1}{\sqrt{a}} \int f(t)\, \psi\!\left(\frac{t - \tau}{a}\right) dt$    (4)
where $\psi(t)$ is the mother wavelet function, and $\psi(t) \in L^2(\mathbb{R})$. Its Fourier transform $\Psi(\omega)$ has to satisfy the following admissibility condition:

$C_\psi = \int \frac{|\Psi(\omega)|^2}{|\omega|}\, d\omega < \infty$    (5)
$a$ and $\tau$ are the dilation and translation parameters respectively. In the CWT, $a$ and $\tau$ vary continuously over $\mathbb{R}$ (with the constraint $a \neq 0$). By scaling and shifting the mother wavelet, a set of wavelet basis functions is obtained as follows:

$\psi_{a,\tau}(t) = \frac{1}{\sqrt{a}}\, \psi\!\left(\frac{t - \tau}{a}\right); \quad a, \tau \in \mathbb{R}$    (6)
Equation (4) transforms $f(t)$ from the time domain into the wavelet domain (the $a, \tau$ domain). In the wavelet domain, the frequency information at a certain time can be obtained. It means that, in the time domain, wavelet basis functions can be used to approximate both the smooth global variation and the sharp local variation of the function. $f(t)$ can be reconstructed from the wavelet basis functions by using an inverse wavelet transform, which is given as follows:

$f(t) = \frac{1}{C_\psi} \int_{0}^{+\infty} \int_{-\infty}^{+\infty} CWT_f(a, \tau)\, \frac{1}{\sqrt{a}}\, \psi\!\left(\frac{t - \tau}{a}\right) \frac{da\, d\tau}{a^2}$    (7)
In the DWT, both $a$ and $\tau$ take discrete values only. For a binary representation, it is convenient to sample $a$ and $\tau$ on the so-called dyadic grid. This special case of the DWT is defined as the dyadic wavelet transform. In this case, $a$ and $\tau$ are represented as:

$a_j = 2^j, \quad \tau_{j,k} = 2^j k; \quad j, k \in \mathbb{Z}$    (8)
Thus the definition of the dyadic discrete wavelet is

$\psi_{j,k}(t) = 2^{-j/2}\, \psi(2^{-j} t - k); \quad j, k \in \mathbb{Z}$    (9)

where $j$ represents the number of wavelet basis functions, and $k$ determines the time position of the wavelets.
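For illustration, equation (4) can be approximated numerically by discretising the integral; the sketch below uses the standard Mexican hat mother wavelet as an example psi (the paper's own activation function appears later in equation (11)).

import numpy as np

def mexican_hat(t):
    # Standard Mexican hat mother wavelet, used here only as an example psi.
    return (1.0 - t**2) * np.exp(-t**2 / 2.0)

def cwt(f, t, a, tau):
    # Discretised equation (4):
    # CWT_f(a, tau) = (1/sqrt(a)) * integral of f(t) psi((t - tau)/a) dt
    dt = t[1] - t[0]
    return np.sum(f * mexican_hat((t - tau) / a)) * dt / np.sqrt(a)

t = np.linspace(-10.0, 10.0, 2001)
f = np.sin(2.0 * np.pi * 0.5 * t)        # test signal
print(cwt(f, t, a=1.0, tau=0.0))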
Both the CWT and the DWT can be incorporated into neural networks as activation functions. Existing wavelet neural networks can be categorized into two types (Billings, 2005). The adaptive wavelet neural network has wavelets as its activation function, which is obtained by performing the CWT. The unknown parameters of such a network include the weighting coefficients of the network and the dilation and translation factors of the wavelets. The other type is the fixed-grid wavelet neural network, whose activation function is obtained from the DWT. In such a wavelet neural network, the positions and dilations of the wavelets are predetermined, and only the weighting coefficients need to be optimized.
In this paper, an adaptive wavelet neural network is developed to achieve the desired flexibility and the accurate reconstruction of continuous time series. For practical implementation and computational efficiency, the inverse wavelet transform (equation (7)) can be expressed as

$f(t) = \sum_{j,k} w_{j,k}\, \psi_{j,k}(t)$    (10)

Equation (10) is used as the expression of the nonlinear mapping function $f(r_{t-1}, r_{t-2}, \ldots, r_{t-n})$ in equation (3). So, the model can track the self-similarity and self-affinity properties of the return series, resulting in a better approximation.
3.2. WAVELET FUNCTION SELECTION
An appropriate wavelet has to be selected in order to better reconstruct $f(t)$. Since the wavelets will be employed as the activation function in the neural network, the wavelet function has to be differentiable, and the Mexican hat wavelet function (see Figure 1) is chosen. This wavelet function is commonly used in time series decomposition and reconstruction. It is expressed as follows:

$\psi(t) = C e^{-t^2/2} \cos(5t)$    (11)

The Mexican hat wavelet offers further advantages for reconstructing $f(t)$. First, it is based on the CWT, is symmetrical, and provides an exact time-frequency analysis. This makes it a good choice for processing data that vary continuously in time. Second, it is a rapidly vanishing function, which leads to an accurate and efficient approximation of the target time series.
Figure 1 Mexican hat wavelet
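As an illustration of equations (10) and (11), the hypothetical Python sketch below evaluates the wavelet of equation (11) (with the normalization constant C set to 1) and a finite weighted expansion of dilated and translated wavelet terms; in the adaptive network, the weights, dilations and translations are the trainable parameters.

import numpy as np

def psi(t, C=1.0):
    # Equation (11): psi(t) = C * exp(-t^2 / 2) * cos(5t)
    return C * np.exp(-t**2 / 2.0) * np.cos(5.0 * t)

def wavelet_expansion(t, weights, dilations, translations):
    # Finite counterpart of equation (10): f(t) ~ sum_i w_i * psi((t - tau_i) / a_i)
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    for w, a, tau in zip(weights, dilations, translations):
        out += w * psi((t - tau) / a)
    return out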
Figure 2 Structure of proposed recurrent wavelet neural network (feedback structure with unit-delay $z^{-1}$ elements, wavelet neurons and sigmoid neurons)
3.3 ARCHITECTURE OF A RECURRENT WAVELET NEURAL NETWORK (RWNN)

As mentioned above, the returns of the SHFE copper futures price originate from the storage theory and are inevitably influenced by inter-market factors. In order to extract useful information from the related factors for effective forecasting, commonly used sigmoid functions are adopted to produce a nonlinear mapping from these data to the target returns. The sigmoid activation function used in the conventional neural network part of the model is expressed as follows:

$g\left(x_j(t)\right) = 1 / \left[1 + \exp\left(-x_j(t)\right)\right]$

$x_j(t) = \sum_{c=1}^{5} w_{c,j}\, r_c^{cn}(t)$

$r_c^{cn} = \left\{ r(IV),\, r(SP),\, r(LME),\, r(WTI),\, r(EUR) \right\}$   (12)

where $x_j$ is the input of hidden neuron $j$ in the conventional neural network part, and $r_c^{cn}$ are the inputs of the conventional neural network part as listed in table 1.
In order to incorporate dynamic temporal behaviour and to enhance the nonlinear approximation ability, a dynamic wavelet neural network with feedback topology is developed. In this model, the feed-forward part consists of a wavelet network combined with a conventional neural network using sigmoid activation functions. The feedback part includes an input-output feedback loop and a wavelet neuron self-feedback loop. Therefore, the nonlinear estimator in the proposed forecasting model can be expressed as follows:

$r(t) = \sum_{i=1}^{M} w_i\, \psi_i(t) + \sum_{j} w_j\, g\left(x_j(t)\right) + b, \quad \psi_i(t) = \psi\!\left(\frac{y_i(t) - \tau_i}{a_i}\right)$

$y_i(t) = \sum_{u=1}^{l} w_{u,i}\, r_u^{wn}(t) + w_{o,i}\, r(t-1) + w_{s,i}\, y_i(t-1)$

$r_u^{wn}(t) = \left\{ r(P_{t-1}), \ldots, r(P_{t-l}) \right\}$   (13)

where $y_i$ is the input of hidden neuron $i$ of the wavelet neural network part, and $r_u^{wn}$ are the inputs of the wavelet neural network part, the lagged returns listed in table 1.
The architecture of the proposed dynamic
recurrent neural network model is shown in figure 2.
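For clarity, one forward step of the estimator in equations (12) and (13) can be sketched as follows; this is an assumed NumPy formulation for illustration only, and all parameter names (W_u, W_c, w_i, w_j, w_o, w_s, tau, a, b) and shapes are ours, not the paper's.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def psi(t):
    # Wavelet activation from equation (11), with C = 1
    return np.exp(-t**2 / 2.0) * np.cos(5.0 * t)

def rwnn_step(r_wn, r_cn, r_prev, y_prev, p):
    """One forward step of the recurrent wavelet neural network, eq. (13).

    r_wn   : lagged returns, inputs of the wavelet part, shape (l,)
    r_cn   : exogenous returns (IV, SP, LME, WTI, EUR), shape (5,)
    r_prev : previous model output r(t-1), scalar
    y_prev : previous wavelet neuron inputs y_i(t-1), shape (M,)
    p      : dict of illustrative parameters
    """
    # Wavelet part: y_i(t) = sum_u w_ui * r_u^wn(t) + w_oi * r(t-1) + w_si * y_i(t-1)
    y = p["W_u"] @ r_wn + p["w_o"] * r_prev + p["w_s"] * y_prev
    wavelet_out = p["w_i"] @ psi((y - p["tau"]) / p["a"])
    # Conventional part, eq. (12): x_j(t) = sum_c w_cj * r_c^cn(t)
    sigmoid_out = p["w_j"] @ sigmoid(p["W_c"] @ r_cn)
    r_t = wavelet_out + sigmoid_out + p["b"]
    return r_t, y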
3.4 TRAINING THE PROPOSED NEURAL NETWORK

In order to obtain all the parameters in the model, i.e. the weights, the bias, and the translation and dilation parameters, a learning algorithm is used for training the network. In this paper, an improved real-time training algorithm for recurrent networks, the real-time recurrent learning (RTRL) algorithm (Williams and Zipser, 1995), is adopted due to its fast convergence and accurate training.

RTRL is a real-time back-propagation (BP) gradient descent training algorithm. It does not employ the total error measure, obtained by summing up the errors between the real returns and the model output over the training period (equation (14)). Instead, only the instantaneous error measure (see equation (15)) is used for calculating the parameter updates at each time instant of the continually running network.
$E_{Total} = \frac{1}{2} \sum_{t=t_0}^{t_n} \left( r_{real}(t) - r(t) \right)^{2}$   (14)
$E(t) = \frac{1}{2} \left( r_{real}(t) - r(t) \right)^{2}$   (15)
So, at every time $t$, the parameters are adapted according to

$\Delta par(t) = -\eta\, \frac{\partial E(t)}{\partial par}$   (16)

where $\eta$ is the learning rate. Thus, in this model, the update of each parameter is calculated by the following equations:
$\Delta b = -\eta\, \frac{\partial E(t)}{\partial r(t)} \frac{\partial r(t)}{\partial b} = \eta \left( r_{real}(t) - r(t) \right)$   (17)
$\Delta w_j = -\eta\, \frac{\partial E(t)}{\partial r(t)} \frac{\partial r(t)}{\partial w_j} = \eta \left( r_{real}(t) - r(t) \right) g\left(x_j(t)\right)$   (18)
$\Delta w_i = -\eta\, \frac{\partial E(t)}{\partial r(t)} \frac{\partial r(t)}{\partial w_i} = \eta \left( r_{real}(t) - r(t) \right) \psi_i(t)$   (19)
$\Delta w_{c,j} = -\eta\, \frac{\partial E(t)}{\partial r(t)} \frac{\partial r(t)}{\partial g_j(t)} \frac{\partial g_j(t)}{\partial w_{c,j}} = \eta \left( r_{real}(t) - r(t) \right) w_j\, g_j'(t)\, r_c^{cn}(t)$   (20)
For the recurrent weights $w_{x,i} \in \{w_{u,i}, w_{s,i}, w_{o,i}\}$ of the wavelet part, the chain rule involves the recursive sensitivities of $y_i(t)$:

$\Delta w_{x,i} = -\eta\, \frac{\partial E(t)}{\partial r(t)} \frac{\partial r(t)}{\partial y_i(t)} \frac{\partial y_i(t)}{\partial w_{x,i}} = \eta \left( r_{real}(t) - r(t) \right) \frac{w_i}{a_i}\, \psi_i'(t)\, \frac{\partial y_i(t)}{\partial w_{x,i}}$

$\frac{\partial y_i(t)}{\partial w_{u,i}} = r_u^{wn}(t) + w_{s,i}\, \frac{\partial y_i(t-1)}{\partial w_{u,i}} + w_{o,i}\, \frac{\partial r(t-1)}{\partial w_{u,i}}$

$\frac{\partial y_i(t)}{\partial w_{s,i}} = y_i(t-1) + w_{s,i}\, \frac{\partial y_i(t-1)}{\partial w_{s,i}} + w_{o,i}\, \frac{\partial r(t-1)}{\partial w_{s,i}}$

$\frac{\partial y_i(t)}{\partial w_{o,i}} = r(t-1) + w_{s,i}\, \frac{\partial y_i(t-1)}{\partial w_{o,i}} + w_{o,i}\, \frac{\partial r(t-1)}{\partial w_{o,i}}$

$\frac{\partial y_i(0)}{\partial w_{x,i}} = 0; \quad \frac{\partial r(0)}{\partial w_{x,i}} = 0$   (21)
$\Delta a_i = -\eta\, \frac{\partial E(t)}{\partial r(t)} \frac{\partial r(t)}{\partial \psi_i(t)} \frac{\partial \psi_i(t)}{\partial a_i} = -\eta \left( r_{real}(t) - r(t) \right) w_i\, \psi_i'(t)\, \frac{y_i(t) - \tau_i}{a_i^{2}}$   (22)
$\Delta \tau_i = -\eta\, \frac{\partial E(t)}{\partial r(t)} \frac{\partial r(t)}{\partial \psi_i(t)} \frac{\partial \psi_i(t)}{\partial \tau_i} = -\eta \left( r_{real}(t) - r(t) \right) w_i\, \psi_i'(t)\, \frac{1}{a_i}$   (23)
In order to prevent the RTRL process from being trapped in a local minimum, momentum factors are introduced for adjusting the network parameters. Taking $w_j$ as an example, the momentum factor of $w_j$ can be expressed as:

$m_{w_j} = k \left( w_j(t) - w_j(t-1) \right)$   (24)

where $k$ is the momentum coefficient. Then $w_j(t+1)$ is updated in accordance with the following rule:
$w_j(t+1) = w_j(t) + \Delta w_j(t+1) + m_{w_j}$   (25)
4. PERFORMANCE EVALUATION OF DYNAMIC RECURRENT WAVELET NEURAL NETWORK MODEL
4.1. DATA SETS

The input and output data used for the model evaluation are listed in table 1.

First, the daily close prices of copper futures and the inventory (INV) are collected from the SHFE official website. Unlike the LME, the SHFE has copper futures contracts expiring in each month of the year, so an exact three-month-to-maturity contract may not be available on a given day. Since every trading day has different futures prices, in order to deal with the discontinuity of futures prices, the contract nearest to three months to maturity is used; it is selected because it is actively traded. Since the first delivery day of a contract is the first business day of each month, at the beginning of each month the contract nearest to three months to maturity is selected and kept for a month. Then, at the beginning of the next month, the next such contract is selected to replace the previous one. In this way, a continuous series of copper futures prices can be formed.
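A hypothetical sketch of this monthly rolling rule is given below; the data layout and the contract-selection helper are assumptions for illustration, not the exchange's conventions.

import pandas as pd

def roll_contracts(daily_close: pd.DataFrame, pick_contract) -> pd.Series:
    """Build a continuous futures price series by monthly contract rolling.

    daily_close   : close prices indexed by trading date, one column per contract.
    pick_contract : user-supplied rule returning the column judged nearest to
                    three months to maturity on a given date (an assumption here).
    The chosen contract is fixed at the start of each month and kept until
    the next month begins, as described in the text.
    """
    pieces = []
    for _, month_df in daily_close.groupby(pd.Grouper(freq="MS")):
        if month_df.empty:
            continue
        first_day = month_df.index[0]
        col = pick_contract(first_day, month_df.columns)
        pieces.append(month_df[col])
    return pd.concat(pieces)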
Second, the daily close prices of the LME, SP, WTI and EUR are collected from the Bloomberg website. Third, the weekly return series of the above prices are computed using the equations listed in table 1.

The sampling time window is from January 4th, 2005 to November 9th, 2010. A total of 1,400 samples are obtained. The period contains significant economic cycles of rise and fall, which makes the proposed forecasting approach more comprehensive and meaningful. The weekly returns of the SHFE copper futures price during this time window are shown in figure 3.
Figure 3 Weekly returns of SHFE copper futures price
4.2 FINDING THE INPUT DIMENSION

The sample autocorrelation function (ACF) and the sample partial autocorrelation function (PACF) are employed to determine the input dimension of the auto-regression part of the model. Figure 4 is plotted using the weekly return series of figure 3. The input dimension is found to be four.
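Such an analysis can be reproduced with standard statistical tooling; the sketch below uses the statsmodels functions acf and pacf (with placeholder random data standing in for the weekly return series) and flags the lags whose partial autocorrelation exceeds the approximate 95% confidence band.

import numpy as np
from statsmodels.tsa.stattools import acf, pacf

# Placeholder data; in practice, 'returns' holds the weekly return series.
rng = np.random.default_rng(0)
returns = rng.normal(size=1400)

acf_vals = acf(returns, nlags=20)    # sample autocorrelation function
pacf_vals = pacf(returns, nlags=20)  # sample partial autocorrelation function

# Lags whose PACF exceeds the approximate 95% band suggest the AR input dimension.
band = 1.96 / np.sqrt(len(returns))
significant_lags = [lag for lag, v in enumerate(pacf_vals) if lag > 0 and abs(v) > band]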
Figure 4 ACF and PACF of copper futures price returns (sample autocorrelation function, top; sample partial autocorrelation function, bottom)
4.3 FORECASTING ACCURACY ASSESSMENT

Several error measurements have been chosen to evaluate the performance of the proposed model. The normalized mean square error (NMSE) and the mean absolute error (MAE) are adopted to evaluate the error between the target returns and the model output. The direction sign (DS) is adopted to show the hit ratio, defined as the percentage of the model's outputs moving in the same direction as the target returns. Table 2 lists these performance measurements and their equations.
Table 2 Performance measurements

NMSE: $\frac{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}{\frac{1}{n-1}\sum_{i=1}^{n}\left(y_i - \bar{y}\right)^2}$, where $\bar{y} = \frac{1}{n}\sum_{i=1}^{n} y_i$

MAE: $\frac{1}{n}\sum_{i=1}^{n} \left| y_i - \hat{y}_i \right|$

DS: $\frac{100}{n}\sum_{i=1}^{n} d_i$, where $d_i = 1$ if $\left(y_i - y_{i-1}\right)\left(\hat{y}_i - y_{i-1}\right) \ge 0$ and $d_i = 0$ otherwise
4.4 FORECASTING RESULTS

In this approach, a fixed forecasting scheme is adopted. The scheme involves training and estimating the parameters of the proposed model on the first 1,000 data in the dataset, and using these estimates to produce all the forecasts for the following out-of-sample data. Figure 5 shows the comparison of the following 200 out-of-sample testing data with the estimated outputs of the proposed model. Figure 5(a) shows the forecasting targets and Figure 5(b) shows the absolute forecasting error.
Figure 5 Comparison of out-of-sample data and estimated outputs: (a) returns of SHFE copper futures price, target output vs. RWNN output; (b) absolute forecasting error
Three different out-of-sample lengths (listed in table 3) are chosen for testing the forecasting performance of the proposed model. The various testing data lengths are applied to test the near-term and long-term forecasting ability of the proposed model.
Table 3 In-sample and out-of-sample time windows

In-sample data:
Feb. 1st, 2005 to Aug. 18th, 2009 (1000 data)

Out-of-sample data:
Aug. 19th, 2009 to Aug. 11th, 2009 (100 data)
Aug. 19th, 2009 to Jan. 7th, 2010 (200 data)
Aug. 19th, 2009 to Nov. 9th, 2010 (400 data)
Three other conventional forecasting models are also set up for comparison with the proposed model: a feed-forward wavelet neural network (WNN), a fully recurrent BP neural network (FRNN), and an ARIMA model. In figure 4, the ACF tails off and the PACF cuts off at lag 1, which indicates an AR(1) model from the ARIMA family. The results of these comparisons are listed in tables 4, 5 and 6.
Table 4 Performance comparison when forecasting 100 data

       RWNN    WNN     FRNN    AR(1)
NMSE   0.8993  1.1070  1.3425  1.6657
MAE    4.3104  4.8678  5.0864  5.9252
DS     66      59      58      56

Table 5 Performance comparison when forecasting 200 data

       RWNN    WNN     FRNN    AR(1)
NMSE   0.9593  1.3290  1.4327  1.8532
MAE    3.6325  4.5435  4.6325  7.2353
DS     63      55      56      54

Table 6 Performance comparison when forecasting 400 data

       RWNN    WNN     FRNN    AR(1)
NMSE   1.0625  1.5242  1.5076  1.9543
MAE    3.6982  5.0634  4.8735  7.8963
DS     62      54      56      53
The forecasting results in tables 4, 5 and 6 show that the dynamic recurrent wavelet model outperforms the other neural network models and the ARIMA model in both value accuracy and directional accuracy. The model achieves the best performance in both near-term and long-term forecasting.
5. CONCLUSIONS

This research has advanced the study of conventional neural networks for forecasting the returns of the SHFE copper futures price by presenting a dynamic recurrent neural network model. The proposed model combines the feature detection property of wavelets with the temporal memory behaviour of recurrent neural networks to capture the dynamics of copper futures returns, resulting in better forecasting. The proposed model also considers the influence of exogenous factors and extracts useful information from them to assist forecasting. The forecasting results show that the dynamic recurrent wavelet neural network model outperforms other conventional models in the near- and long-term forecasting of the returns of the SHFE copper futures price.
6. ACKNOWLEDGMENTS
The authors would like to thank the anonymous
reviewers for their valuable comments.
REFERENCES

Billings, Stephen A. and Wei, Hua-Liang, A new class of wavelet networks for nonlinear system identification, IEEE Transactions on Neural Networks, Vol. 16, No. 4, 2005
Brennan, M. J., The supply of storage, American Economic Review, Vol. 48, 1958, pp 50-72
Daubechies, Ingrid, Ten Lectures on Wavelets, Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 1992
Peters, Edgar E., Chaos and Order in the Capital Markets, John Wiley & Sons, NY, 1996
Gibson, R. and Schwartz, E. S., Valuation of Long Term Oil-Linked Assets, in Stochastic Models and Option Values, D. Lund and B. Oksendal (eds.), Elsevier, Amsterdam, 1990
Grudnitski, Gary and Osburn, Larry, Forecasting S&P and gold futures prices: An application of neural networks, The Journal of Futures Markets, Vol. 13, Issue 6, 1993, pp 631-643
Kaldor, N., Speculation and economic stability, Review of Economic Studies, Vol. 7, 1939, pp 1-27
Lawera, Martin Lukas, Futures prices: Data mining and modelling approaches, PhD thesis
Mandelbrot, B. B., The fractal geometry of nature, W. H. Freeman, NY, 1982
McNelis, Paul D., Neural networks in finance: gaining predictive edge in the market, Elsevier Inc., Oxford, 2005
Meese, R. A. and Rogoff, K., Empirical exchange rate models of the seventies: do they fit out-of-sample?, Journal of International Economics, Vol. 14, 1983, pp 3-24
Meese, R. A. and Rogoff, K., Was it real? The exchange rate-interest differential relation over the modern floating-rate period, Journal of Finance, Vol. 43, No. 4, 1988, pp 933-948
Miltersen, K. R. and Schwartz, E. S., Pricing of Options on Commodity Futures with Stochastic Term Structure of Convenience Yields and Interest Rates, Journal of Financial and Quantitative Analysis, Vol. 33, 1998, pp 33-59
Kohzadi, Nowrouz, Boyd, Milton S., Kermanshahi, Bahman and Kaastra, Iebeling, A comparison of artificial neural network and time series models for forecasting commodity prices, Neurocomputing, Vol. 10, Issue 2, 1996, pp 169-181
Pati, Y. C. and Krishnaprasad, P. S., Analysis and synthesis of feedforward neural networks using discrete affine wavelet transformations, IEEE Transactions on Neural Networks, Vol. 4, No. 1, 1993
Schwartz, E. S., The Stochastic Behavior of Commodity Prices: Implications for Valuation and Hedging, Journal of Finance, Vol. 52, 1997, pp 923-973
Taylor, Stephen J., Modelling financial time series, World Scientific, Singapore, 1986
Williams, R. J. and Zipser, D., Gradient-Based Learning Algorithms for Recurrent Networks and their Computational Complexity, in Backpropagation: Theory, Architectures and Applications, Lawrence Erlbaum Associates Publishers, New Jersey, USA, 1995
Working, H., Theory of the inverse carrying charge in futures markets, Journal of Farm Economics, Vol. 30, 1948, pp 1-28
Zhang, Jun, Wavelet neural networks for function learning, IEEE Transactions on Signal Processing, Vol. 43, No. 6, 1995
Zhang, Q., Using wavelet networks in nonparametric estimation, IEEE Transactions on Neural Networks, Vol. 8, No. 2, 1997
Zhang, Q. and Benveniste, A., Wavelet networks, IEEE Transactions on Neural Networks, Vol. 3, No. 6, 1992
Zhang, G. P., Correlation analysis between Shanghai copper futures and international copper futures markets, working paper, Shanghai Futures Exchange, 2003
Zou, H. F., Xia, G. P., Yang, F. T. and Wang, H. Y., An investigation and comparison of artificial neural network and time series models for Chinese food grain price forecasting, Neurocomputing, Vol. 70, Issue 16-18, 2007, pp 2913-2923
A NOVEL TOOL FOR THE DESIGN AND SIMULATION OF BUSINESS
PROCESS MODELS
John Pandremenos
Laboratory for Manufacturing Systems &
Automation, University of Patras, Rio, Patras
26500, Greece
jpandrem@lms.mech.upatras.gr
Kosmas Alexopoulos
Laboratory for Manufacturing Systems &
Automation, University of Patras, Rio, Patras
26500, Greece
alexokos@lms.mech.upatras.gr


George Chryssolouris
Laboratory for Manufacturing
Systems & Automation,
University of Patras, Rio,
Patras 26500, Greece
xrisol@lms.mech.upatras.gr

ABSTRACT
In this paper, a novel software tool is presented, serving both as a Business Model (BM) design tool and as a Decision Support System for the measurement and assessment of different BMs against specific Key Performance Indicators (KPIs). One of this tool's novelties lies in its capability to easily model and assess BMs that are oriented to Mass Customization (MC), through a set of dedicated functionalities and KPIs. This capability is demonstrated and evaluated through a case study stemming from the shoe industry.
KEYWORDS
Business Model, KPI, Simulation, Mass Customization, software tool

1. INTRODUCTION

Over the years, the global business and economic environment has been changing rapidly. The environment is becoming more and more competitive and uncertain, thereby making business decisions increasingly difficult. Of course, any manager or operator has a feel for the way his business works and the way it produces products or services for the market, but there is still the need for a systematic way of defining the scope of the business, describing the way one's enterprise operates as a productive unit and, finally, making strategic decisions. People in business are faced with questions which they would find much easier to answer if there were a dedicated method or a set of procedures and tools through which they could understand what the Business Model (BM) is and which elements form part of the BM, so that they could communicate more easily, compare themselves to other similar companies, or even try changing some of the factors of production in order to explore business opportunities without great risk. Osterwalder (2004) defines a BM as a conceptual tool that contains a set of elements and their relationships and allows expressing a company's logic for earning money. It is a description of the value that a company offers to one or several categories of customers, and of the architecture of the firm and its network of partners for creating, marketing and delivering this value and relationship capital, in order to generate profitable and sustainable revenue streams.
The objective of the work presented in this paper was the development of a software tool for the design, simulation and assessment of BMs, with emphasis on Mass Customization (MC). In the following chapter, the state of the art in similar tools is reviewed. Based on the limitations of these tools, the capabilities of the new software are pointed out in Section 3. In Section 4, the tool is applied to a case study from the shoe industry, in order for its applicability to be investigated. In the same section, the results are discussed, while in Section 5, conclusions are drawn.
2. EXISTING TOOLS FOR BM
ASSESSMENT
Over the past few years, a large number of new
commercial software tools, dedicated to the design,
simulation and assessment of business models,
sprang up. Most of these tools define business
processes, having graphical symbols or objects,
with individual process activities depicted as a
series of boxes and arrows. Special characteristics
of each process or activity may then be attached as
attributes to the process. For the aforementioned
description, there is a wide range of notations and
languages, dedicated to the BMs representation.
The Business Process Modelling Notation (BPMN)
is a widely used standard of business process
modelling, and provides a graphical notation for the
specification of business processes. Furthermore,
the majority of the tools using BPMN allow for
some type of analysis, depending on the
sophistication of the tool's underlying methodology.
The simulation performed can be either continuous
or discrete-event, dynamic and stochastic.
Moreover, the simulation tools typically provide
animation capabilities that allow the process
designer to observe how customers and/or work
objects flow through the system. The capabilities of
some indicative commercial tools are presented in
the following paragraph.
The Adeptia BPM Suite is a web based software
tool, capable of modelling and simulating a business
process along with the BPMN. It helps business
managers to calculate cost and time parameters of
As-Is and To-Be processes (ADEPTIA
website). The Bizflow Process Modeller, also based
on the BPMN, has modelling, simulating, executing
and optimizing process capabilities. The Process
Modeller compares the As-Is and To-Be model by
measuring their operational performance, based on
built-in and custom-made KPIs. Some indicative
KPIs are the turnaround time, lead time, customer
satisfaction and delivery time (HANDYSOFT
website). ARIS Business Architect & Designer is a
business process modelling tool, aiding IT managers
to discover the relationships between processes and
the used resources. Furthermore, it provides a wide
range of templates and supports many architectural
concepts (BPMN, BPEL etc.). ARIS Simulator
belongs in the same software family and simulates
the designed models. It analyses KPIs such as the
process throughput time, dynamic wait states,
organizational centre utilization and cost rates in
order to predict risks and identify bottlenecks
(SOFTWARE AG website). The Oracle BPM Suite
is another tool, based on BPMN and BPEL for
modelling, managing, simulating, optimizing and
executing business processes. The Oracle BPM
Suite provides business managers with real-time
lists, charts and KPIs analysis (ORACLE website).
Similarly to the previous tools, ProVision supports a
wide range of notations and languages and enables
users to analyse, design and simulate business
processes through the Monte Carlo and other
discrete event simulators. The As-Is and To-Be
analysis is another feature of ProVision which
compares information such as that on service level,
time, resources, error and cost reduction
(METASTORM website). Finally, Prosim uses the
IDEF3 language and is capable of automatically generating simulation models, analysing the efficiency of current models and testing the strength of the proposed scenarios (KBSI website).
Besides the commercial software available,
several scientific papers have been published on
tools and methodologies for the modelling and
simulation of BMs which, in the near future, could
be commercialized as well. Some indicative works
are reported hereafter.
Barjis (2009) proposed a method, which
employed the DEMO Methodology, developed by
Dietz (2006) for the modelling of business
processes, but adapted graphical notations and
formal semantics so as to generate models that led
to an automatic analysis or simulation. Soshnikov
and Dubovik (2004) presented an approach
combining a knowledge-based business process
description with an industry-standard functional
decomposition in order to obtain a structured
process modelling. Moreover, the approach could
provide, out of the box, a technology for business process simulation by logical inference in a corresponding knowledge base. Finally, Ren et al
(2008) from the IBM China Research Laboratory,
introduced an IBM asset named Supply Chain
Process Modeller (SCPM), which provided a
tailored business process modelling and a
simulation environment for business consultants.
This effort represents a viewpoint of how a better
trade-off can be achieved between the usability and
flexibility of a business process tool.
3. BUSINESS PROCESS MODELLER
AND SIMULATOR
The Business Process Modeller and Simulator (Figure 1), with the acronym BPM, is a BM design tool as well as a Decision Support System (DSS) for the measurement and assessment of BM performance. BPM provides an integrated approach to describe new business processes and process changes with the use of the tool's modelling features, specify the level of customization of the products being produced, determine alternative scenarios and strategies for these BMs, define Key Performance Indicators (KPIs) in order to measure the BM's performance quantitatively and, finally, evaluate business process models using the simulation features. Simulation enables the examination and testing of alternative BM scenarios prior to their actual implementation in the real environment. Finally, through BPM, the user is capable of observing the way that the level of customization affects the BM's KPIs (such as profit and lead time).

The main differentiation of the BPM compared with other existing tools lies in its capability to easily model and assess BMs that are oriented to MC, through a set of dedicated functionalities and KPIs.
3.1. MODELLING FEATURES

Modelling of the business processes is performed through a graphical BPMN editor (Eclipse website). The BPMN is a standard of business process modelling and provides a graphical notation for specifying business processes in a Business Process Diagram, based on a flowcharting technique very similar to the activity diagrams of the Unified Modelling Language (UML). The Resources are also modelled in BPM, in a flexible way. The flexibility lies in the tool's capability to formulate resource groups and alternative resources. The different types of resources that can be set are: Human, Equipment and Software. Cost attributes (fixed and/or variable cost) that may characterize a resource can also be specified.

In BPM, the process arrival rate and process instance data can be modelled (see Figure 2). The BPM supports two distribution types (uniform and normal) for modelling the arrival rate. The BPM supports two ways of specifying the input data: it can be generated, or it can be imported directly from either an Excel or a flat file.
After the Business Processes and Resources have
been modelled, the assignment of the latter to the
processes may take place. Process time and quality
attributes characterize such assignments (see Figure
3).
In order to easily include customization-related information in the model, two dedicated Graphical User Interfaces (GUIs) were developed: the first one provides information about the market segment, while the second one describes the value proposition (Figure 4).
In the Market segment GUI, the profile of the
customer(s) can be determined by selecting one or
more profiles, from the predefined ones available in
the GUI, which better describe the customer(s).
This functionality enables the direct comparison of
the profile with the KPIs (e.g. profit) under
investigation and obviously, is of great interest in
the case of MC. In the same GUI, the market share
per country is possible to be specified, aiding in
taking managerial decisions regarding the
production volume per season, the material
ordering, the arrangement of the logistics etc.
In the second GUI, the Value Proposition, the value of the product proposed by the company is expressed through the determination of the product type, the cost range, the quality drivers, as well as the level of customization, described in a quantitative manner. These parameters can later be included in the KPIs' expressions, in order for the influence of the value proposition on the company's BM to be investigated.
In BPM, a number of KPIs to be calculated
during simulation, can be determined:
Company strategy KPIs: impacts/restrictions
of the company strategic objectives (in
relation to the basic shoe design)
KPIs for the Logistics: suppliers, customers
etc.
Manufacturing KPIs: time, cost, production
parameters
Local PI calculation at each alternative
process/routing
Cumulative KPI calculation for a business
scenario simulation
Global KPI estimation at strategic level (e.g.
KPIs for a whole seasons production)
In BPM, there is a set of built-in KPIs. These
KPIs are the Cost, Time, Quality and Flexibility
indicators.
The cost KPI reports the total cost of the
resources, performing the tasks of one business
process instance.
The time KPI reports the total time for each
process instance to be completed.
The quality KPI measures the quality of a
process instance.
The flexibility KPI measures the sensitivity of the business process to changes, using the Penalty of Change Method (Chryssolouris 1992, Alexopoulos 2005).
However, the tool also provides the possibility of user-defined KPIs. Such an example is the KPI "Profit". Since a BM allows expressing a company's logic of earning money, the Profit can be considered a standard KPI for any BM simulation. This KPI has not been integrated into the default ones of the BPM, due to the fact that each company uses its own expressions for the calculation of profit. Finally, Jufer et al (2010) have determined in their study a set of KPIs related to MC, which could be employed by BPM. JavaScript is used for describing user-defined KPIs.
Figure 1 - Business Process Modeller and Simulator BPM software tool

Figure 2 - Preparing the simulation input

Figure 3 - Specifying resource process time and quality

Figure 4 - Market segment (left) and value proposition (right) GUIs

3.2. SIMULATION FEATURES

In order for the BM generated in BPM to be simulated, the process generation time as well as the simulation time are determined by the user. The process generation time can be defined in a probabilistic way, through a normal or uniform distribution. The results data are stored in Excel spreadsheets. Each time a different simulation scenario is created (by altering some of the BM attributes) and a new simulation is run, a new file is generated with the new results. This feature enables the easy storage of previous results, in order for the different scenarios and business strategies to be easily compared and evaluated. For this assessment, multi-criteria decision-making approaches can be employed, taking into consideration the KPIs used in the model, in order to identify the optimum BM (Chryssolouris, 2006). At a conceptual level, the simulation is performed in a typical way, in which entities and parts move through a series of queues and buffers, acquiring and releasing resources as they move through the business model domain. The entire model is driven by a sequence of discrete events, which occur when a task is completed, and by the entity movement that occurs as a consequence of these events. At the lowest level, the business process is simulated with the use of a Discrete Event Simulation engine (DesmoJ, 2011); the core of the BPM is a set of objects that wrap DesmoJ objects and map them directly onto the BPMN diagram symbols.
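Conceptually, such an engine reduces to a time-ordered event queue; the generic Python sketch below illustrates this principle only and is unrelated to the actual DesmoJ API.

import heapq

class DiscreteEventSimulator:
    """Generic event loop: events fire in time order, each may schedule more."""
    def __init__(self):
        self.clock = 0.0
        self._queue = []   # heap of (time, sequence, action)
        self._seq = 0

    def schedule(self, delay, action):
        heapq.heappush(self._queue, (self.clock + delay, self._seq, action))
        self._seq += 1

    def run(self, until):
        while self._queue and self._queue[0][0] <= until:
            self.clock, _, action = heapq.heappop(self._queue)
            action(self)   # e.g. a task completion that releases a resource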
4. CASE STUDY
4.1. DESCRIPTION
The BMs of two different shoe industries have been assessed with the help of the BPM tool. The first shoe industry mass-produces shoes with no customization options, while the second one produces highly customizable shoes.
The business processes constituting the BM of each case are almost identical (Figure 5). The BM initiates with the "Product Design and Development" process, where the conceptualization, design and development of the shoe take place. The "Material Purchasing", "Manufacturing" and "Logistics" processes follow, for the shoe production and delivery. At the same time, a "Marketing and Sales" process is run for the shoe.


Figure 5 - Shoe industry's BM abstract representation
Table 1 - BM modelling and simulation data

                                   Mass Production    Mass Customization
Process generation time            6 months           12 hours
Simulation time                    1 year             3 months
Produced pairs/Simulation time     1500               1
Customization types                N/A                1. Functional; 2. Aesthetical; 3. Fitting
Customization levels/coefficients  N/A                1. Low = 0.03; 2. Medium = 0.2; 3. High = 0.3

Business Processes data
Product Design and Development
  Cost (€)                         22500 - 7500       2 - 0.5
Design for MC
  Cost (€)                         N/A                60/hour
Material Purchasing
  Cost (€)                         150 - 375 K        MP cost * [Customization level] (see Eq. 5)
  Time (hours)                     Pre-ordered        4 - 240
Manufacturing
  Cost (€)                         90 - 150 K         7.5 - 12
  Time (days)                      35 - 50            1 - 3
Logistics
  Cost (€)                         N/A                4 - 5
  Time (days)                      N/A                1 - 3
Marketing and Sales
  Cost (€)                         18 - 24 K          1.2 - 1.6

The afore-described BM representation can be considered abstract and top-level. Each of these business processes may be explicitly described through a set of sub-processes.

The differentiation between the two productions lies in the "Design for MC" process (red box in Figure 5) that is required in the case of customized shoe production, in order for the customization features, provided by the customer, to be designed according to the shoe model. Additionally, the attributes characterizing each business process are completely different for the two cases. For instance, the cost of material purchasing in the case of customized shoe production is higher, since a build-to-order production is followed. On the other hand, in the case of mass production, the materials required for the season's production are ordered all together and thus, a better price is achieved.
The main data utilized for the modelling and simulation of these BMs are listed in Table 1. All of them stem from a real shoe industry. For the case of MC, three customization types are considered, each one having three levels. The levels' coefficients provided are user-defined and configured according to several aspects, such as statistical data on the price the customer is willing to pay for the shoe and the company's particularities. These coefficients act as a markup on several costs and prices (shoe pair's price, material cost etc.) affected by the addition of the customization options.

The KPI of interest in this study is the profit obtained in each case, in order for the two different production strategies to be compared, as well as the lead time (the time required from the shoe order by the customer until its delivery). Additionally, the effect of the customization level on the profit is also examined. Since the profit is not a built-in KPI, the equations from which it derives are given below:
Profit = [Income] - [Expenses] (1)
Income = [Price per pair] * [Produced pairs per season] (2)
Expenses = [Cost per pair] * [Produced pairs per season] (3)
Price per pair = [Market average price] * [Customization level] (4)
Customization level = 1 + ([Functional] + [Aesthetical] + [Fitting]) (5)

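Equations (1) to (5) combine into a small per-pair calculator; the Python sketch below is purely illustrative and uses made-up numbers rather than the data of Table 1.

def profit_per_pair(market_avg_price, cost_per_pair,
                    functional=0.0, aesthetical=0.0, fitting=0.0):
    # Eq. (5): customization level as 1 plus the sum of the level coefficients
    customization_level = 1.0 + functional + aesthetical + fitting
    # Eq. (4): the price scales with the customization level
    price = market_avg_price * customization_level
    # Eqs. (1)-(3) reduced to a single pair: profit = income - expenses
    return price - cost_per_pair

# Example: a 'high' fitting customization (coefficient 0.3) on illustrative numbers
print(profit_per_pair(market_avg_price=100.0, cost_per_pair=80.0, fitting=0.3))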
4.2. RESULTS - DISCUSSION

A set of graphs has been generated in order for the results to be better visualized and compared. In the graph of Figure 6, the profit/pair for the two production types is compared. It is obvious that through the production of customized shoes, the profit can be drastically increased (up to 661%). This is mainly due to the fact that the extra money the customer is willing to pay for a customized shoe far exceeds the additional production expenses accruing from the customization.
Figure 6 - Profit/Pair for the different production types
Another interesting graph produced is the profit/pair versus the different customization levels, for the MC case (Figure 7). It can easily be seen that the higher the customization level, the higher the profit/pair. The reason that the profit is not kept constant is that the price the customer is willing to pay gets higher as the customization level increases. However, the production cost does not increase proportionally from level to level and thus, a higher profit is observed.
Figure 7 - Profit/Pair for the different customization levels
The last graph derived from the simulation results is the time required for each event to take place (the lead time; the event starts with the shoe order and ends with its delivery), for the cases of MP and MC (Figure 8). The time for MP refers to the whole season's production, while for MC it refers to the production of a single pair. As can be noticed, although the time in both cases does not increase, there is a fluctuation from event to event around a value. This means that all the BM processes are carried out on time and thus, no delays are observed in the events' execution. For the case of MC, the time ranges from 4.45 to 28.09 days. This range may be acceptable to some companies, while to others it may not. In the latter case, remedial actions should be taken in order for the process time of some time-consuming tasks to be reduced. The same actions should also be taken if the time increased from event to event, which would happen if some processes were not accomplished on time, causing delays in the execution of the event.
Figure 8 Time versus event for MP and MC
5. CONCLUSIONS

A novel tool for the design and simulation of BMs oriented to MC has been presented in this paper. This tool, compared with the state of the art, can address customization parameters during modelling and simulation and thus evaluate alternative BMs in terms of customization aspects. The tool's efficiency has been demonstrated and proved through a case study stemming from the shoe industry, where the effect of MC on the company's profit has been investigated. Additionally, it was shown that the management and post-processing of the results data is quite simple and easy, since the data is stored in Excel sheets (generation of graphs, use of different sheets for each BM's results etc.).
In future work, the authors intend to enhance the
BPM with more built-in KPIs with special attention
being given to flexibility indicators. Moreover, the applicability of the tool will be further investigated with case studies from different industrial sectors.
6. ACKNOWLEDGEMENT
The work was partially supported by the EU funded
project: FP7-NMP-2007-3.3-1-213180, Design of
Customer Driven Shoes and Multi-site Factory
DOROTHY.
7. REFERENCES
K. Alexopoulos, A. Mamassioulas, D. Mourtzis, G.
Chryssolouris, "Volume and Product Flexibility: a
Case Study for a refrigerators Producing Facility",
10th IEEE International Conference on Emerging
Technologies and Factory Automation, 2005, pp. 891-
897
ADEPTIA website, Retrieved: 16 August 2011,
<http://www.adeptiabpm.com/>
Barjis J, A Business Process Modeling and Simulation Method Using DEMO, Lecture Notes in Business Information Processing, Vol. 12, 2009, pp 245-265
Chryssolouris G, Manufacturing systems: Theory and
Practice, 2nd edition, Springer, New York, 2006, p
577
G. Chryssolouris, M. Lee, "An Assessment of Flexibility
in Manufacturing Systems", Manufacturing Review,
1992, Volume 5, No.2, pp.105-116
DesmoJ website, Retrieved: 16 August 2011,
<http://desmoj.sourceforge.net/home.html>
Dietz JLG, Enterprise Ontology Theory and
Methodology, Springer, New York, 2006
Eclipse website, retrieved 16 August 2011,
http://www.eclipse.org/bpmn/
HANDYSOFT website, Retrieved: 16 August 2011,
<http://www.handysoft.com/products/bizflow_bpm>
Jufer N, Daaboul J, Bathelt J, Politze D, Laroche F,
Bernard A and Kunz A, Performance Factory in the
Context of Mass Customization, ICE 2010 16th
International Conference on Concurrent Enterprising
(ICE), Lugano, Switzerland, June 21, 2010
KBSI website, Retrieved: 16 August 2011,
<http://www.kbsi.com/COTS/ProSim.htm>
METASTORM website, Retrieved: 16 August 2011,
<http://www.metastorm.com/products/product_sheets/
Metastorm_ProVision_Product_Overview.pdf>
ORACLE website, Retrieved: 16 August 2011,
<www.oracle.com/us/technologies/bpm/029418.pdf>
Osterwalder A, The business model ontology, A
proposition in a design science approach, PhD
dissertation, University of Lausanne, 2004
Ren C, Wang W, Dong J, Ding H, Shao B and Wang Q,
Towards a flexible business process modeling and
simulation environment, Proceedings of the 2008
Winter Simulation Conference, 2008
SOFTWARE AG website, Retrieved: 16 August 2011,
<http://www.softwareag.com/corporate/products/aris_
platform/aris_design/business_architect/capabilities/de
fault.asp>
Soshnikov D and Dubovik S, Knowledge-Based
Business Process Modeling and Simulation,
Proceeding of the 6th International Workshop on
Computer Science and Information Technologies
(CSIT 2004), Budapest, Hungary, 2004, pp 169-176
VIRTUAL REALITY ENHANCED MANUFACTURING SYSTEM DESIGN
Xiang Yang
University of Kaiserslautern
yang@cpk.uni-kl.de
René C. Malak
University of Kaiserslautern
malak@cpk.uni-kl.de
Christian Lauer
University of Kaiserslautern
lauer@cpk.uni-kl.de

Christian Weidig
University of Kaiserslautern
weidig@cpk.uni-kl.de
Hans Hagen
University of Kaiserslautern
hagen@informatik.uni-kl.de

Bernd Hamann
University of California, Davis
hamann@cs.ucdavis.edu
Jan C. Aurich
University of Kaiserslautern
aurich@cpk.uni-kl.de
ABSTRACT
During the analysis and design of manufacturing systems, enterprises are challenged by existing restrictions and the running production. To deal with these key issues, different virtual factory approaches and tools have been widely implemented in recent years. By means of such approaches and tools, a manufacturing system can be adapted effectively as changes occur. Virtual Reality (VR), one of the most important approaches, is now applied in scientific and industrial fields. Current studies of VR applications mainly focus on the design of products rather than manufacturing systems. This paper presents VR as an innovative and collaborative design platform for manufacturing systems, which enables a holistic use of virtual factory tools. Based on this platform, three applications have been implemented, which are addressed at different levels of a manufacturing system. Furthermore, a noise simulation and a virtual machining tool have been visualized in a Cave Automatic Virtual Environment (CAVE).
KEYWORDS
Virtual Reality, Manufacturing System, Virtual Factory Tools, CAVE, Visualization

1. INTRODUCTION
Customers' demands as well as legal requirements are changing in a rapid manner. Facing these challenges requires a fast and systematic method to adjust production systems (Schönsleben, 2009).
the context of worldwide competition, the
production of innovative and low-cost products,
within an appropriate manufacturing system,
became a crucial part in the whole product life
cycle. As one of the most important virtual factory
tools, Virtual Reality (VR) takes a significant role
as it deals with definition, modelling and validation
of manufacturing systems (Smith and Heim, 1999).
Changes in a manufacturing system occur at
several levels and in different domains. An adaption
in one section often influences several other ones.
The total number of newly designed factories is declining in developed and developing countries. Therefore, adapting established manufacturing systems becomes more important due to the increasing change requirements (Kühn, 2006).
Considering this background, a systematic
planning of necessary changes in a manufacturing
system is essential for two major reasons. On the
one hand, the impacts of changes in an established
manufacturing system have to be analysed and rated
in a holistic manner. Downtime in factories should
be prevented. On the other hand, the efficiency of
analysing and planning has to be improved to cover
the increased change demands. In order to face
these challenges, the virtual factory has
considerable, still not fully exploited potential. This
paper is organized as follows.
The VR framework is introduced after discussing
related work. Then, three virtual factory tools are
presented, which cover different application fields
in the range of manufacturing system design and
improvement. Two of them are visualized in a Cave
Automatic Virtual Environment (CAVE). The last
section concludes the paper and gives an outlook.
2. RELATED WORK
2.1 TERM DEFINITION
Based on common understanding, the digital factory
is data- and information-centred, but a virtual
factory is constructed by using models. In this paper
the term virtual factory is used to cover both
meanings. A virtual factory is built with geometric
models and involved data and information.
Hence the virtual factory comprises modeling and
visualization, simulation and evaluation, data
management and communication. A holistic view
upon manufacturing systems can be provided by the
virtual factory. The rebuilding and visualization of a
manufacturing system is the base on which further
processes are simulated. Machining operations,
assembly processes or material flows are examples
for the opportunities a consistent database can play
out (Weimer et al, 2008).
2.2 DESCRIPTION OF A MANUFACTURING
SYSTEM
Today's production is a networked process in which multiple production units are included, so that a comprehensive investigation of the manufacturing system is necessary. Depending on the chosen view of a manufacturing system, the associated problems vary. To address each of these different problems, the whole manufacturing system is divided into several hierarchical levels (Wiendahl et al, 2010).
Figure 1 Production levels in a manufacturing system: (1) production network, (2) production location, (3) production segment, (4) production system, (5) production cells, (6) workstation
According to the cluster and classification of the major production units, six production levels are identified and scaled in a top-down fashion. In Figure 1, they are shown as production network, production location, production segment, production system, production cells and workstation (Westkämper and Zahn, 2009). Not all six levels are considered in this paper; only levels four through six are taken into account.
2.3. CURRENT USE OF VR
According to Chawla and Banerjee (2001), a virtual
environment provides a framework for
representing a facility layout in 3D, which
encapsulates the static and the dynamic behavior of
the manufacturing system.
A direct link of simulations to an immersive,
virtual environment, allowing user interaction and
changes during simulation processes, offers high
potential for exploring complex interactions
between users, objects and operations (Dorozhkin et
al, 2010). Therefore, VR is a reliable platform for
several virtual factory tools and suitable to support a
wide range of applications. They are, for example,
production planning, product design or the technical
qualification of employees (Oliveira et al, 2007).
Cecil and Kanchanapiboon (2007) decompose the
software applications in the field of manufacturing
system into three sub-areas:
factory-level prototyping
virtual assembly environments
virtual prototyping of lower-level activities
At the factory-level, VR is used in the majority of
the applications to support modification and
simulation of existing shop floors or to improve the
design of new layouts (Kesavadas and Ernzer,
1999). A well-designed layout is the basis, on which
idle time and bottlenecks in a manufacturing system
can be prevented. Improving the parts flow through
a shop floor or a factory is another key issue, which
can be solved by using those measures (Cecil and
Kanchanapiboon, 2007).
At the workplace-level, VR is used to analyse
single cells or assembly processes within a
workstation (Chryssolouris et al, 2000). Comparing several virtual approaches according to the related problems occurring in a physical assembly situation leads to suitable benchmarks. By identifying constraints and oblique problems in early design stages, other possibilities can be improved (Sharma et al, 1997).
In lower-level manufacturing processes, simulations and software tools like CAD/CAM are widely distributed in industry today. But just a few of them can be simulated properly in a virtual environment (Cecil and Kanchanapiboon, 2007).
3. A VR FRAMEWORK FOR
MANUFACTURING SYSTEM DESIGN
In this chapter a VR framework is illustrated, divided into several phases: modeling, application and adaption (Figure 2). It facilitates the virtual factory tools for manufacturing system design and enables an integrated use of them in a virtual environment.

Figure 2 Workflow of VR framework
3.1. MODELING
The modeling phase provides a data basis for
further use. Usually it starts with geometric
modeling and then generates a VR model. During
geometric modeling, different objects in a
manufacturing system are involved, such as the
machines, people, parts, transports, materials etc.
Due to the large data volume, several levels of
detail (LOD) are used in order to depict the objects
according to the top-down approach. For example,
in the engineering change application, the LOD is
kept at machine level and the parts, cutting tools or
work pieces are not taken into account.
Furthermore, additional information from the manufacturing system is described and integrated with the completed geometric model into a VR model. By using different VR platforms, this VR model is visualized and manipulated. This set-up defines the so-called virtual environment, in which the applications are implemented.
3.2. APPLICATION
In an application phase, two main components are
usually included: simulation and visualization.
Simulation is the kernel of the whole work flow.
Based on the modeled virtual world, it rebuilds the
manufacturing processes and provides essential
information for visualization and VR. Users are able to get a more realistic perception in a virtual environment. In other words, the simulation brings the virtual model to life.
Due to the complexity of the processes in manufacturing systems, the information obtained from the tools mentioned above is also complex, not intuitive and difficult to understand. Visualization makes the viewing and analysis of complex data in VR easier. Users can find the information useful for their customized needs and use it more efficiently. Therefore, visualization is a key method for helping analysts verify models, understand simulation results and communicate them to a non-technical audience.
3.3. ADAPTION
According to the analysis results from applications,
the manufacturing system will be adapted. A VR-
supported Continuous Improvement Process (CIP)
Workshop is used to implement such adaption. An
application-related discussion of this method is not
provided in this paper. Figure 3 shows this method,
for more details we refer to Aurich et al (2009).

Figure 3 Procedures of VR-based CIP-workshop
The workflow in Figure 3 starts with a data entry,
which includes essential production and process
data for generating an appropriate virtual
environment. Not only available geometric models
of machines and facilities, but also the simulation
results, measurement data or other manufacturing
data are integrated. Within the virtual environment a
CIP-workshop is performed in five steps: 1)
detection of problems, 2) analyzing selected
problems, 3) developing improvement measures, 4)
realizing the measures with workers and 5)
evaluating the results. Following the successful
implementation of the workshop the results are
immediately transferred to enable the realization of
these improvements in a physical production
environment.
4. IMPLEMENTATION
In this chapter, three applications are introduced,
which are addressed at different levels of
manufacturing systems. All these three applications
have been implemented by using the introduced
framework. The modeling level is introduced in general terms, as it shows few differences among the three applications. At the application level,
different simulations and visualizations have been
implemented to achieve different objectives.
However, the adaption level is not discussed in this
chapter.
4.1 MODELLING
The modeling level consists of geometric modeling
and modeling of the manufacturing system. The
geometric modeling describes, for example, the
shapes, materials and textures for objects in virtual
environments, which includes the room, the
machines, the transport and the support elements
etc. By using the modeling software 3ds Max, the number of polygons of the CAD objects is optimized to balance visual effect and computing performance (see Figure 4). Not all of the objects are modeled directly in this paper; for example, the indexable insert and the insert holder are provided by the manufacturer in order to ensure a high level of detail.

Figure 4 Geometric modeling
The modeling of manufacturing systems provides a database for information about objects as well as the interrelationships between them. Such data contains, for example, machine object features, layout information, or dynamic process information. These models are integrated into a VR model and exported using the Virtual Reality Modeling Language (VRML) standard or the OBJ format. In the case of the VRML standard, a VRML editor such as VrmlPad is used to construct sensors, events and other interactions in a VRML file. At the same time, Java and JavaScript are embedded into VRML. Java 3D is used to manipulate the OBJ file. As a result, it enables the modification of geometry, an interactive user interface, performing a simulation and building data interfaces. After these steps, the data for simulation and visualization is prepared.
4.2 APPLICATION
To discuss different issues in manufacturing
systems, three applications are shown in this
chapter. They are noise investigation, engineering
change management and a virtual cutting tool with
chip formation simulation. All of them are based on
the same geometric model and use customized data
from simulation, measurement or theory.
4.2.1 Sound Simulation
Noise from machining processes influences employees' health and can even cause serious diseases. It has become one of the most frequent occupational hazards in manufacturing. To ensure the health and safety of the employees in a factory, laws and guidelines exist. For example, in Germany, the Federal Ministry of Labour and Social Affairs (BMAS) limits noise and vibration levels within Germany's Occupational Safety Law (Arbeitssicherheitsgesetz, ASiG), the German ordinance LärmVibrationsArbSchV and other additional legal guidelines; see Yang et al (2010).
This application investigates the noise issue in industry using a simulation as well as a VR-supported method. The visualization of simulation results and the enhanced analysis capabilities in VR provide a new point of view for understanding this issue and fulfil the requirements of noise control and reduction during factory planning.

Figure 5 Geometric modeling
In order to determine the influence of noise, acoustic simulations have been implemented. Different numerical simulation methods based on solving wave equations, such as the Finite Element Method (FEM) and Finite Difference Time Domain (FDTD), are discussed and compared by Deines (2008). A geometric approach called Phonon Mapping was developed by Deines, which is implemented for this application.
The simulation kernel acts as a server, loading the model of the room geometry and generating user interface elements. The resulting VRML code is delivered to a VRML-compliant VR platform via HTTP. The server is implemented using C++ and Qt. Qt supports a simple graphical user interface for starting the server on selected network ports and generating an initial VRML file, which has to be opened by the VRML viewer application. At the same time, Qt provides a simple interface for managing network sockets, which is the basis for an HTTP connection.
After starting, the server loads a VRML model file and adds additional interactive user interface elements as VRML code. Buttons and sliders have been implemented using VRML and JavaScript. Their visual appearance is modeled using simple VRML geometry and saved as prototype nodes for repeated use. The button geometry is connected to a TouchSensor and the sliders to a PlaneSensor. These sensors release events which are routed to Script nodes containing simple interaction logic written in JavaScript. Commands are sent back to the server by loading a special URL which encodes the action. The simulation is calculated by the server and delivered to the viewer again. This communication is done via HTTP connections. The viewer opens a new connection using an HTTP request, asking for a file that encodes commands in its filename. The server performs its calculation and answers with a new VRML file delivered over this existing HTTP connection.
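The request/response pattern described here does not depend on the concrete C++/Qt implementation; a minimal Python sketch of the same idea (commands encoded in the requested path and answered with freshly generated VRML) might look as follows, with all names and the port being illustrative.

from http.server import BaseHTTPRequestHandler, HTTPServer

def run_simulation(command: str) -> str:
    # Placeholder for the acoustic simulation; returns VRML text for the viewer.
    return "#VRML V2.0 utf8\n# result for command: " + command

class CommandHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The viewer encodes the action in the requested path, e.g. /cmd/start
        body = run_simulation(self.path).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "model/vrml")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), CommandHandler).serve_forever()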
The simulation and visualization are implemented first using the VRML viewers Instant Player and Cortona3D Viewer, which enable the user to navigate and manipulate a VRML-based scene graph on a desktop-based workstation. In Figure 6,
two control modules are shown. The left side shows
the module for sound source placing and simulation
starting/stopping. After loading the geometric
model into the VRML viewer, the user can explore
the room and place the sound source within the
viewer application and start the acoustic simulation.
In the right module, one or more listeners
(employees) are placed in the explored room
according to predefined operation positions. After
those settings, the phonon collection step can be
performed, which calculates the sound levels at
each specified position.
When the simulation step is done, the sound
propagation inside the room is visualized by
animated phonon paths (see Figure 7). The playback
speed can be adjusted using the '++' and '--' buttons,
and the current simulation time step can be selected
by a slider (see Figure 6).
The phonon collection method calculates the
sound levels at the listener positions and enables
users to view the results interactively. Figure 8
shows the sound levels at different operator
positions. The virtual workers are shown with
corresponding colors according to the sound
pressure level: green for low sound pressure levels
<80dB, yellow/orange for critical sound pressure
levels <84dB and red when the sound pressure level
is too high according to the standard. The
simulation and visualization improved the
understanding of noise in the environment
significantly.
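A compact sketch of this colour assignment (the 80 dB and 84 dB thresholds are from the text; the RGB values are assumptions for illustration):

# Map a sound pressure level in dB to the display colour of a virtual
# worker. Thresholds follow the text; the RGB triples are illustrative.
def worker_color(spl_db):
    if spl_db < 80.0:
        return (0.0, 1.0, 0.0)   # green: low sound pressure level
    if spl_db < 84.0:
        return (1.0, 0.65, 0.0)  # yellow/orange: critical level
    return (1.0, 0.0, 0.0)       # red: too high according to the standard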

Figure 6 Simulation using VRML viewer

Figure 7 Sound propagation

Figure 8 Sound levels at different positions
4.2.2 Engineering Change
Enterprises have to change their manufacturing
systems continuously. The number of engineering
changes (ECs) necessary to manufacture new
products and to increase productivity is growing. A
VR-supported method has been developed to
validate EC analysis and design.

Geometric modeling and manufacturing system
modeling, shown in chapter 4.1, are used in this
application as well. An additional solution database
provides basic algorithms for planners to generate
project plans. The solution database is directly
linked to the application, which grants access to the
knowledge management from within the VR
environment.
This application runs easily on different hardware
platforms with the standard Java library;
visualization is possible on several output devices,
from desktop-based systems up to an immersive
CAVE. A graphic user interface is
divided into four fields, which are marked from A
to D in Figure 9. Panel A provides system
controls and project configuration. The user can
manage files, generate new EC projects or input
additional information for changed process chains.
Within panel B, users are able to view static and
dynamic information of objects and layouts as well
as the interrelationships among them. Panel C is an
evaluation area for EC results according to different
criteria, such as completeness and production
bottlenecks. A 3D view of the layout is shown in
window D. Besides the basic functions, such as
view and navigation, more functions have been
implemented by using Java 3D classes. Users are
allowed to select, move, add and delete objects.

Figure 9 Graphic user interface for analyzing and
planning of ECs in manufacturing system
A 3D visualization of the current manufacturing
system provides users with an understanding and a
panoramic view. Via interaction, users can access
and view object information directly and change the
manufacturing system directly in this 3D virtual
environment. After changes are performed, the
resulting impacts, including suggested solutions for
the ECs, are visualized to the user immediately. This
application is connected to a manufacturing system
database, which contains the necessary object
attributes such as machine sizes, facility layouts and
material flows. The database is realized as an object
library which bundles information from several
areas of the manufacturing system. All relevant
information is illustrated on the user interface and
can be modified directly.
The tasks to realize various ECs are predefined
and stored in a solution database. Expert knowledge
and lessons learned from previous ECs are
accumulated to support planners. From the 3D
environment, users can access this database and
view recommended solutions. Using a simplified
user interface, experienced planners and experts
can supplement and improve the solutions
continuously. The process chains are visualized as
well. The ECs processed by users effect changes to
these process chains. Based on the cycle times
estimated by planners, the program calculates the
effects on the process chains.
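The paper does not detail this calculation; as a minimal sketch, under the assumption that a serial process chain is limited by its slowest operation, the throughput effect of an EC could be estimated as follows (all names and the bottleneck rule are assumptions, not the tool's actual algorithm):

# Hypothetical sketch: estimate the throughput effect of an EC from
# planner-estimated cycle times, assuming a serial process chain limited
# by its bottleneck. Not the actual algorithm of the application.
def chain_throughput(cycle_times_s):
    """Parts per hour of a serial process chain."""
    return 3600.0 / max(cycle_times_s)

def ec_impact(before_s, after_s):
    """Relative throughput change caused by an engineering change."""
    t0, t1 = chain_throughput(before_s), chain_throughput(after_s)
    return (t1 - t0) / t0

# Example: an EC slows one operation from 42 s to 55 s
print(f"{ec_impact([38.0, 42.0, 30.0], [38.0, 55.0, 30.0]):+.1%}")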
The evaluation panel gives planners direct
feedback by estimating the impact of changes on the
cycle times and costs of the underlying process
chains. As an expeditious measure, the application
enables the planner to rate the impact of ECs on the
manufacturing system in a qualitative but
comprehensive manner. Additionally, the
completeness of ECs is checked and the required
operations are illustrated. The visualization of
machine capacities helps planners to allocate
resources and reminds them when machine
capacities are exceeded. After a change, the
requirements concerning the machines and the
layout are presented to the planners as well.
Eventually, this application automatically generates
project plans for the realization from the solution
database.
Using this application can accelerate the analysis
and planning of ECs in manufacturing systems and
increase the quality of the process at the same time.
It reduces inconsistent planning of ECs and hence
the necessity of reconfiguring them. Finally, it
provides a transparent basis for decisions when
comparing different options.
4.2.3 Virtual Machining
VR also allows an animation of machining
operations. The machining kinematics and part
geometries are commonly transferred to VR using
VRML. In order to animate the machining process
realistically, the chip formation process needs to be
visualized as well. This application presents the
animation of external cylindrical turning
considering the chip formation and the results of the
machining operation, such as process forces.
The chips are described numerically using
JavaScript which is embedded into the VRML. The
chip form is determined using equations to calculate
the chip side-curl radius (Nakayama and Arai,
1990), the chip up-curl radius (Li and Rong, 1999),
and the radius of the spiral chips (Nakayama and
Arai, 1992). In addition to these specific values, the
chip lengths are required; they are determined
experimentally. For details about these parameter
determinations we refer to Yang et al (2011).

Besides the chip form, the animation of chip
formation also requires the determination of the
chip flow, and consequently the chip flow angle.
The chip flow angle specifies the angle between the
tangent to the chip flow direction and the surface of
the machined work piece. According to Colwell
(1954), the chip flow angle is influenced by the
cutting conditions and the corner radius of the
indexable insert. The chip flow angles are
calculated with regard to these factors of influence.
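The cited chip-form equations are not reproduced in the paper; purely as an illustrative sketch, a chip centreline for such an animation could be generated as a helix whose radius is the up-curl radius and whose length is the experimentally determined chip length (this parametrisation is an assumption, not the Nakayama/Arai or Li/Rong formulas):

import math

# Illustrative only: points of a helical chip centreline. The up-curl
# radius sets the helix radius; the pitch (which a fuller model would
# derive from the side-curl radius) is taken as a free parameter here.
def chip_centerline(r_up, pitch, chip_length, steps=200):
    arc_per_rad = math.hypot(r_up, pitch / (2.0 * math.pi))
    total_angle = chip_length / arc_per_rad
    return [(r_up * math.cos(a), r_up * math.sin(a),
             pitch * a / (2.0 * math.pi))
            for a in (total_angle * i / steps for i in range(steps + 1))]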
In order to display the results of the machining
operation, such as tool wear, energy consumption
and surface roughness, the JavaScript additionally
accesses an experimentally generated database.
Figure 10 shows the graphic user interface displayed
in the VRML viewer, which consists of four
function panels labeled A to D.

Figure 10 User interface for virtual machining tool
The orthogonal process forces, the roughness of
the machined work piece, the energy demand of the
machine tool during chip removal and the tool wear
are shown on panel A for the preset cutting
conditions. The process forces are displayed
numerically, and tool wear, surface roughness as
well as energy consumption are evaluated using the
colored scales 'low', 'ordinary' and 'high'. Figure
11 illustrates the process results during the
animation of turning using two different cutting
conditions: feed f1 = 0.3 mm/rev, depth of cut
ap1 = 2 mm, cutting speed vc1 = 175 m/min (Figure
11-a) as well as f2 = 0.1 mm/rev, ap2 = 1.5 mm and
vc2 = 175 m/min (Figure 11-b).
The length and the diameter of the work piece as
well as the feed travel are determined through Panel
B. The cutting conditions are defined using Panel C
(Figure 10). Panel D offers system controls, with
which the user can select different points of view.


Figure 11 Process display at different cutting conditions
This application focuses on the animation of chip
formation during machining. The real-time animated
and the experimentally observed chip formation as
well as the chip flow are illustrated in Figure 12.
The chip formation during turning was recorded
using a high-speed camera.

Figure 12 Chip formation in VR vs. in reality
Slight differences are observed between the VR
animation and the real cutting process, which can be
attributed mainly to the various assumptions of the
equations used to describe the chip formation
process. Nevertheless, the benefit of this application
for virtual training and learning is significant.
5. IMPLEMENTATION IN CAVE
In this chapter the hardware and software
configurations of the CAVE system are briefly
introduced. Using this system two of the
applications described earlier in chapter 4.2 have
been implemented.
5.1 CAVE SYSTEM DESCRIPTION
The CAVE system located at the FBK institute,
University of Kaiserslautern, is a complex system
which is constructed using different hardware
technologies and software solutions. Figure 13
shows a few hardware components of the CAVE
set-up. Eight projectors (Figure 13-a), projecting on
four walls, offer an immersive virtual environment
of more than 17 m³. Passive stereo technology with
circular polarization is used for stereoscopic
rendering of the 3D scene. The system is operated
by a VR cluster, which contains eight clients and
one server. Four IR-cameras are part of a user
tracking system (Figure 13-c). The user interacts
with the virtual environment via different input
devices such as a fly stick shown in Figure 13-b.

Figure 13 Components of CAVE system: a) passive stereo
projectors, b) a fly stick with tracking markers, c) an IR
camera over the CAVE
COVISE and VRUI are used as the software
platform due to their wide range of hardware
support and their broad spectrum of functionality
modules. Both enable an easy integration of
different modules as well as visualization
functionality. For more details about COVISE and
VRUI we refer to Lang and Wössner (2004) as well
as Kreylos et al (2006).
5.2 VISUALISATION IN CAVE
The benefit of visualization in CAVE has been
discussed in chapter 2.3. The visualization of sound
propagation and sound levels in an immersive
environment helps to identify possible noise
problems in a manufacturing system and determine
improvement strategies.

Figure 14 Sound simulation in CAVE
Figure 14 shows a user exploring the virtual
factory in the CAVE and simulating the sound to
analyze the workstation or layout planning. The
virtual employees are scaled similarly to a real
person, so the user has a strong perception of a real
factory environment.
In Figure 15, an animation of a cutting process is
visualized in the CAVE. One of the advantages of
immersive animation is the free choice of one's
viewpoint: users are allowed to navigate to any
interesting and relevant investigation point, so that
different aspects can be analyzed specifically.
Another advantage in this case is the fact that the
fully immersive environment improves the user's
perception and the results of virtual operation
training.

Figure 15 Virtual machining implementation in CAVE
6. CONCLUSIONS
This paper presented an approach to support
analysis and design of manufacturing systems at
their different levels. The combination of various
tools and their integrated use enable the study of a
holistic application scenario.
Changes to a manufacturing system can first be
applied virtually in order to minimize the impact on
the real manufacturing system. This improves
planning quality, accelerates planning and avoids
production shutdowns.
Future research will focus on implementation of
more applications in order to improve the design of
a whole manufacturing system. Additionally, the
interrelationships between different levels will be
taken into account. Further research is required to
enable a cooperative application of these tools and
to consider the interactions between employees and
manufacturing systems.
Besides the networking of existing tools, the
development of further applications will be pursued.
The three introduced tools serve as examples of the
framework. More highly adapted applications will
be implemented to close the gaps in the digital
support of manufacturing system design. The
implementation of all these applications in VR as a
joint and scalable set of methods will be advanced.

7. ACKNOWLEDGMENTS
This work was funded by the German Research
Foundation (DFG) within the International Research
Training Group (IRTG) 1131 'Visualization of Large
and Unstructured Data Sets - Applications in
Geospatial Planning, Modeling, and Engineering'.
The implementation of engineering change in VR
resulted from the DFG-funded project 'Impact
Mechanisms of Engineering Changes in Production'
(fund number AU 185/15-1), and the CIP workshop
resulted from the DFG-funded project 'VR-supported
continuous improvement of mechanical
manufacturing' (fund numbers AU 185/13-1 and -2).
REFERENCES
Aurich, J.C., Ostermayer, D. and Wagenknecht, C.,
Improvement of manufacturing processes with virtual
reality based CIP workshops, International Journal of
Production Research, 47, 2009, pp 5297-5309
Cecil, J. and Kanchanapiboon, A., Virtual engineering
approaches in product and process design,
International Journal of Advanced Manufacturing
Technology, No. 31, 2007, pp 846-856
Chawla, R. and Banerjee, A., A virtual environment for
simulating manufacturing operations in 3D,
Proceedings of the 2001 Winter Simulation
Conference, Arlington, 2001, pp 991-997
Chryssolouris, G., Mavrikios, D., Fragos, D., and
Karabatsou V., A Virtual Reality based
experimentation environment for the verification of
human related factors in assembly processes,
Robotics & Computer Integrated Manufacturing, Vol.
16, No 4, 2000, pp 267-276
Colwell, L.V., Predicting the angle of chip flow for
single-point cutting tools, Transaction of the ASME,
76, 1954, pp 199-204
Deines, E., Acoustic simulation and visualization
algorithms, PhD thesis, University of Kaiserslautern,
2008
Dorozhkin, D.V., Vance, J.M., Rehn, G.D. and Lemessi,
M., Coupling of interactive manufacturing operations
simulation and immersive virtual reality, Virtual
Reality, 2010, pp 1-9
Kesavadas, T. and Ernzer, M., Design of virtual factory
using cell formation methodologies, American
Society of Mechanical Engineers, Material Handling
Division, 1999, pp 201-208
Kreylos, O., Bawden, G., Bernardin, T., Billen, M.I.,
Cowgill, E.S., Gold, R.D., Hamann, B., Jadamec, M.,
Kellogg, L.H., Staadt, O.G. and Sumner, D.Y.,
Enabling Scientific Workflows in Virtual Reality,
Proceedings of ACM SIGGRAPH International
Conference on Virtual Reality Continuum and its
Applications, ACM Press, New York, 2006, pp 155-
162
Kühn, W., Digitale Fabrik - Fabriksimulation für
Produktionsplaner, Hanser-Verlag, Munich, 2006
Lang, U. and Wössner, U., Virtual and augmented
reality developments for engineering applications,
Proceedings of the European Congress on
Computational Methods in Applied Sciences and
Engineering, Finland, 2004
Li, Z. and Rong, Y., A study on chip breaking limits in
machining, Machining Science and Technology, 3/1,
1999, pp 25-48
Nakayama, K. and Arai, M., The breakability of chip in
metal cutting, Proceedings of the international
conference on manufacturing Engineering, Melbourne,
Australia, 1990, pp 6-10
Nakayama, K. and Arai, M., Comprehensive chip form
classification based on the cutting mechanism,
Annals of the CIRP, 41/1, 1992, pp 71-74
Oliveira, D.M., Cao, S.C., Hermida, X.F. and Rodríguez,
F.M., Virtual reality system for industrial training
Proceedings of 2007 IEEE International Symposium
on Industrial Electronics, Vigo, 2007, pp 1715-1720
Schönsleben, P., Changeability of strategic and tactical
production concepts, CIRP Annals - Manufacturing
Technology, No. 58, 2009, pp 383-386
Sharma, B., Molineros, J. and Raghavan, B., Interactive
evaluation of assembly sequences with mixed (real
and virtual) prototyping, Proceedings of the IEEE
International Symposium on Assembly and Task
Planning (ISATP), New York, 1997, pp 287-292
Smith, R. P. and Heim, J. A., Virtual facility layout
design: the value of an interactive three-dimensional
representation, International Journal of Production
Research, 37(17), 1999, pp 847-860
Weimer, T., Kapp, R., Klemm, P. and Westkämper, E.,
Integrated Data Management in Factory Planning and
Factory Operation - An Information Model and its
Implementation, Proceedings of the 41st CIRP
Conference on Manufacturing Systems, Tokyo, 2008,
pp 229-234
Westkämper, E. and Zahn, H.E., Wandlungsfähige
Produktionsunternehmen - Das Stuttgarter
Unternehmensmodell, Springer, Berlin, 2009
Wiendahl, H.-P., Nyhuis, P. and Hartmann, W., Should
CIRP develop a Production Theory? Motivation -
Development Path - Framework, Sustainable
Production and Logistics in Global Networks, May
26-28, Neuer Wissenschaftlicher Verlag, Vienna,
2010, pp 3-18
Yang, X., Deines, E., Lauer, C. and Aurich, J.C., Virtual
reality enhanced human factor - an investigation in a
virtual factory, Proceedings of Joint Virtual Reality
Conference, Stuttgart, 2010
Yang, X., Max, T., Zimmermann, M., Hagen, H. and
Aurich, J.C., Virtual Reality animation of chip
formation during turning, Proceedings of 13th CIRP
Conference on Modeling of Machining Operations,
Sintra, 2011, pp 203-211

REAL OPTIONS MODEL FOR VALUATING CHINA GREENTECH
INVESTMENTS
Han Qin
The University Of Hong Kong
qinhan1120@hotmail.com
L.K. Chu
The University Of Hong Kong
lkchu@hkucc.hku.hk
ABSTRACT
The environmental issues of China have attracted global concern. The recent past has witnessed
significant investments in a broad range of greentech businesses, encouraged by a strong political
and economic drive. Among these, Carbon Capture and Storage (CCS) is considered a
promising business. This paper describes a decision problem faced by a firm in determining the
optimal timing to invest in a CCS project to reduce CO2 emissions and thereby minimize the
purchase of emissions credits. A real options model is developed to simulate the decision process,
in which the price of emissions credits is assumed to follow a binomial process. The model
quantifies the effect of inflation and depreciation and works out the optimal time to invest in a
greentech project. In this model, the firm is assumed to make the optimal decision about a CCS
project when the price of emissions credits reaches a critical value.
KEYWORDS
Environmental Issues, Real Options, Carbon Capture and Storage, China Greentech Investment,
Optimal Timing

1. INTRODUCTION
Environmental issues have increasingly influenced
corporations' overall strategy and operations all
over the world. New policy terms like Energy
Saving/Lean Energy, Carbon Emission Reduction,
Greenhouse Gases (GHG) Trading, Cap-and-Trade
Program, and so on, are spreading globally and
constraining companies' development. Every
company in any country has to obey these rules
to avoid trouble with its stakeholders.
Among all countries, China, with its gigantic
development achievements, immense population and
economic scale, and surging energy consumption
and demand, has become the focus of global
environmental issues. International policy principles
and international policy organizations have a
significant effect on the development of China's
greentech markets. In addition, certain international
agreements like the 1997 Kyoto Protocol require the
Chinese government to promulgate specific
environmental policy actions. Almost half of the
Certified Emission Reduction Certificates (CERs)
registered under the Kyoto Protocol have their
corresponding projects in China. Meanwhile,
bilateral relationships with other countries and
economic blocs have also affected China's
environmental markets.
Under the combined forces of politics and
economics, China is driving to, and has to, become a
responsible global citizen. So far, China has done a
lot to protect the environment from deteriorating
rapidly, and has laid a substantial foundation for a
greentech market. China's government has already
established plans and programs, laws and standards,
fiscal incentives and subsidies, industrial promotion,
and price management policies to respond to the
urgent environmental issues. The government has
released a series of greentech-relevant industry
revitalization plans for industries such as new
energy, equipment manufacturing, and logistics. As
a result, a wide range of businesses have begun to
implement mature or newly emerging greentech
solutions so as to respond to the broad
environmental issues.
China has already set a target of deriving 20% of
its energy from renewable sources by 2020.

However, even if this aggressive target is achieved,
80% of its energy will still be derived from coal. In
this paper, we choose to research the application of
Carbon Dioxide Capture and Storage (CCS) in the
cleaner conventional energy sector for academic and
practical reasons.
From the view of a firm involved in these
environmental issues, deciding, for example,
whether to invest in a CCS project, it faces not only
legal constraints but also economic and financial
structure changes. GHG emissions credits,
government fines, stakeholders' interventions and so
on may cost a lot of money and resources;
consequently, they may eventually decrease the
profit of a company and even hinder its
development. In order to minimize environmental
costs, companies can either purchase credits from
the market or reduce GHG emissions by installing
green technology equipment and systems or by
introducing new processes from other companies.
The purchase of credits may be economical in the
short run; however, the cost of these credits is
assumed to increase over time as the number of
available credits is reduced by regulation. With the
emerging markets in GHG emissions trading, it is
becoming increasingly important for managers to
determine whether to invest in environmental
programs and green technology or to purchase
emissions credits, and to make cost tradeoffs
between them.
Traditionally, investment selection decisions are
evaluated by the Discounted Cash Flow (DCF)
method, where the Net Present Value (NPV) is often
used. However, DCF often leads to an inevitable
underestimation of high-technology projects that
may actually be feasible, mainly because of the high
risk rate estimated at the beginning. Moreover, DCF
is not a sufficient methodology in situations of
strategic flexibility, where the investment decision
may be deferred to some proper future date. Nor can
it deal with the possibly varying cost of GHG
emission credits.
In essence, corporations have the option to defer
the purchase to some future time. One tool that can
prove beneficial in this type of investment
environment is the use of real options. This
approach treats options at different stages as part of
the decision-making process. A real options method
is applied in this paper to evaluate the investment
decisions of greentech projects. The optimal time to
invest in a greentech project is determined through
the real options model. According to the
characteristics of investment in greentech projects,
the model quantifies the influence of inflation and
depreciation and takes full account of emission
credit costs and profits. Its feasibility and
advantages have been investigated in the analysis of
a demonstration project supported by the
government. The result further provides practical
and managerial insights into the application of real
options analysis to greentech investment.

2. REVIEW OF CHINA GREENTECH
MARKET
According to the China Greentech Report 2009 (The
China Greentech Report™ 2009, 2009), greentech is
defined as 'Technologies, products and services that
deliver benefits to users of equal or greater value
than those of conventional alternatives, while
limiting the impact on the natural environment and
maximizing the efficient and sustainable use of
energy, water and other resources.'
Since the policy of reform and opening up in 1978,
China has gone through rapid economic growth and
turned into a huge and resilient economy, with a
sound improvement in the living standards of its
people. With an annual economic growth rate of
10% on average, China has become the third largest
economy and the second largest energy consumer in
the world (The China Greentech Report™ 2009,
2009).
However, this tremendous achievement comes at a
significant environmental cost. China is now the
largest emitter of greenhouse gases (GHGs) and
accounts for over 20% of the CO2 emissions from
the burning of fossil fuels annually, 80% of which
come from burning coal, China's predominant
energy source. What is more, China is facing the
dual problems of water scarcity and water pollution,
as well as serious land degradation. Although such
environmental problems are unavoidable for any
country with the same experience of
industrialization, China's immense scale and rapid
growth, as well as the urgent state of the world's
environment, make China's environmental issues a
global concern.
International policy principles like Sustainable
Development and Common But Differentiated
Responsibilities, and international policy
organizations, such as the United Nations (UN), the
World Trade Organization (WTO), the International
Monetary Fund (IMF), the World Bank and the
Asian Development Bank (ADB), all have a
significant effect on the development of China's
greentech markets. In addition, certain international
agreements require the Chinese government to
promulgate specific environmental policy actions,
of which the 1997 Kyoto Protocol is the most
famous. Almost half of the Certified Emission
Reduction Certificates (CERs) registered under the
Kyoto Protocol have their corresponding projects in
China. Last but not least, bilateral relationships with
other countries and economic blocs, for example,
the China-US Strategic and Economic Dialogue
(S&ED) and the EU-China Energy and Environment
Programme, have also affected China's
environmental markets.
Under the combined forces of politics and
economics, China is driving to become a responsible
global citizen. So far, China has done a lot to protect
the environment from deteriorating rapidly and has
laid a substantial foundation for a greentech market.
Guided by the policy principles of Scientific
Approach to Development, Harmonious Society,
Equal Emphasis on Mitigation and Adaptation,
Efficiency Improvement and Conservation, Energy
Structure Optimization and so on, China's
government has already taken many actions to solve
the problem.
Specifically, China has already established: 1.
Plans and Programs, including the Five Year
Guidelines and the 4 trillion Yuan economic
stimulus plan of 2008; 2. Laws and Standards,
including the Renewable Energy Law (2005) and
the Circular Economy Law (2008); 3. Fiscal
Incentives and Subsidies, including tax exemptions,
consumption-related taxes, natural-resource-related
taxes, and subsidies in areas like New Energy
Vehicles and Biomass Power Generation; 4.
Industrial Promotion, including favourable
financing for greentech sectors and the requirement
of environmental and energy disclosures from listed
companies; 5. Price Management Policies, including
the feed-in tariff for wind power released in July
2009. The government has released a series of
greentech-relevant industry revitalization plans for
industries such as new energy, equipment
manufacturing, and logistics. As a result, a wide
range of businesses have begun to implement
greentech solutions so as to respond to the broad
environmental issues. The range of businesses
consists of 3 broad categories (energy supply,
resource use and other markets), 9 broadly defined
market sectors and 40 focused segments, as
illustrated by Table 1.
Table 1 - The China Greentech Market Map
(source: The China Greentech Report 2009)

Energy Supply
- Cleaner Conventional Energy: Cleaner Coal, Cleaner Oil, Cleaner Gas, Nuclear Power
- Renewable Energy: Solar Energy, Wind Power, Bio-Energy, Hydro-Power, Wave Power, Geothermal Energy
- Electric Power Infrastructure: Transmission, Distribution, Energy Storage, Demand Management, Supply Flexibility

Resource Use
- Green Building: Optimized Design, Sustainable Materials, Energy Efficiency, Water Efficiency
- Cleaner Transportation: Cleaner Road, Cleaner Rail, Cleaner Air, Cleaner Waterway
- Cleaner Industry: Optimized Design, Sustainable Materials, Efficient Processing, Water Use
- Clean Water: Water Extraction, Water Treatment, Water Distribution, Wastewater Treatment

Other Markets
- Waste Management: Waste Collection, Waste Recycling, Energy From Waste, Waste Treatment, Sustainable Waste Disposal
- Sustainable Forestry And Agriculture: Sustainable Forest Management, Sustainable Land Management, Sustainable Farming Communities, Optimized Crops

China has already set a target of deriving 20% of
its energy from renewable sources by 2020.
However, even if this aggressive target is achieved,
80% of its energy will still be derived from coal. In
this paper, we choose to research the application of
Carbon Dioxide Capture and Storage (CCS) in the
cleaner conventional energy sector for academic and
practical reasons.

First, China is the world's largest CO2 emitter,
accounting for 24% of global energy-related CO2
emissions, compared with 21% for the US and 12%
for the EU-15. Second, coal remains the main source
of energy despite strong policy incentives for energy
efficiency, renewables and other low-carbon
technologies. In 2009, China derived 70% of its
primary energy from coal, and this heavy
dependence is projected to continue far into the
future (Seligsohn, Liu, Forbes, Dongjie and West,
2010). Third, CCS is able to reduce GHG emissions
while coal use continues, which makes this
technology a key element in the current state of
China. Last but not least, the Ministry of Science
and Technology is developing a long-term CCS
strategy, and some leading Chinese energy
enterprises, such as PetroChina and Shenhua Group,
have been investing in CCS technology
demonstration projects. This undoubtedly makes our
research feasible in terms of data collection and
useful in practical terms.

3. LITERATURE REVIEW
As organizations become increasingly
environmentally conscious, investment decisions on
greentech have been capturing the attention of
management. Some researchers have discussed how
environmental investments could benefit
organizations (Bonifant et al, 1995; Nehrt, 1996;
Porter et al, 1995). Nevertheless, these studies
mainly investigated the issues in an empirical or
conceptual way, and did not propose effective
decision models. In practice, some models have
been proposed for internal use by corporation
management, such as stochastic dynamic
optimization (Birge et al, 1996), mixed integer
programming (Mondschein et al, 1997), activity
based costing (Presley et al, 1994), and data
envelopment analysis (Sarkis, 1999). Even though
these models are advanced and scientific, the most
popular technique used by organizations is the
utilization of the Discounted Cash Flow method
(DCF) based on cost-benefit analysis. The DCF
method, where the Net Present Value (NPV) is often
used, is simple and practical, but, to a large extent,
ignores the option to defer an investment. Therefore,
the dynamic option value embedded in the options,
which can be very significant in some investments,
is neglected.
The fundamental hypothesis of the traditional
DCF method is that future cash flow is static and
certain, and that management does not need to
rectify the investment strategy according to
changing circumstances (Myers, 1977).
Nevertheless, this is inconsistent with the facts. In
practice, corporations often face plenty of
uncertainties and risks, and management tends to
prioritize operating flexibility and other strategic
issues, for which it may even be willing to sacrifice
currently valuable cash flow (Donaldson, 1983).
Ross (1995) pointed out that applying the NPV
(Net Present Value) method to investment
evaluation might lead to wrong decisions; for
example, some investments, which are not one-off
but consist of several follow-up investments, may
be rejected by management because of the negative
NPV of the early investment. The NPV method
maintains the principle of 'accept now or accept
never', and this obviously goes against the tradeoff
between the values of present investment and future
reinvestment. Myers (1987) argued that, although
part of the problem may have been improper
application of investment evaluations, the embedded
limitation of traditional DCF, which becomes
especially apparent in the evaluation of investments
with operational or strategic options, could not be
denied.
Unlike DCF, real options analysis is able to deal
with investment options and managerial flexibility.
The real options method has its roots in financial
options; it is the application and development of
financial options in the field of real asset
investment. It is generally believed that the
pioneering literature on real options is by Myers
(1977), in which he proposed that, although
corporate investments do not take a contractual
form as financial options do, investments under
conditions of high uncertainty still have
characteristics similar to financial options;
therefore, the options pricing method could be used
to evaluate such investments. Subsequently, Myers
(1977) suggested viewing corporate investment
opportunities as 'growth options', and Kester (1984)
further developed Myers' research, arguing that
even a project with a negative NPV could have
investment value if managers had the ability to
defer the investment in order to wait for a beneficial
opportunity.
After three decades of development, the theory of
real options has become an important branch of
research and a hot topic. According to the different
managerial flexibilities embedded in real options,
Copeland (1992) and Trigeorgis (1993) divided real
options into seven categories, including the Option
to Defer, Option to Alter Operating Scale, Option to
Abandon, Option to Switch, and so on. Researchers
have investigated these different options (O'Brien et
al, 2003; Sing, 2002).
So far, real options theory has been applied to
investment problems in many different fields, such
as biotechnology, natural resources, research and
development, securities evaluation, corporate
strategy, technology and so on (Miller et al, 2002).
It has also been applied to a gas company in Britain,
leading to the conclusion that certain projects are
not economically feasible unless permit prices rise
faster (Sarkis et al, 2005).
In conclusion, traditional evaluation
methodologies like DCF have undeniable
limitations in the face of investment options and
managerial flexibility. In the particular field of
greentech investment, where investment decisions
may be deferred to appropriate future dates and the
cost of emission credits may vary from time to time,
the real options method can prove beneficial.

4. REAL OPTION MODEL FOR
VALUATING CHINA GREENTECH
INVESTMENTS
The price of credit is unknown, and could vary from
time to time according to factors such as future
environment policy legislation, cost of alternative
fuels, product market demand and so on. Assuming
the price of credits follows a multiplicative binomial
process, based on the statistical growth rate α of the
credit, the expected price at the end of the first time
period can be described as:

E[C_1] = C_0 · e^α  (1)

where
C_1 = the price of the credit at the end of the first time period
C_0 = the initial price of the credit
α = the growth rate of the credit
To set up the real options model for valuating
China greentech investments, a binomial lattice,
which is the most popular and widely employed
model for option valuation, is applied to illustrate
the price process of the credits.
The value of the credit price moves either upward
or downward at each predefined time interval.
Denote the rate of an upward move by u and the
rate of a downward move by d, where u·d = 1. From
the volatility σ, u and d are calculated as follows:

u = e^σ  and  d = e^(-σ)  (2)

where
σ = the rate of volatility
Let q denote the probability of the credit price
moving upward, and 1 - q the probability of the
credit price moving downward. Since there is no
substantial reason to assume any specified
probabilities, we set q = 0.5.
In the binomial lattice, the expected credit price is:

E[C_1] = q·u·C_0 + (1 - q)·d·C_0  (3)

Now we can solve for σ by setting the right-hand
sides of Equations (1) and (3) equal, and attain σ as
a function of α (with q = 0.5 this gives
(e^σ + e^(-σ))/2 = e^α). According to Sarkis et al
(2005), taking into consideration the long-term real
growth rate and the inflation rate, the nominal
growth rate is estimated as α = 0.05366; therefore,
the volatility is estimated as σ = 0.3305.
Denote by T the total number of decision periods
at which an organization decides whether to invest
in the greentech equipment; for example, if the
organization considers the CCS investment decision
annually over 20 years, then T = 20. Let j denote the
number of upward moves of the credit price and k
the number of downward moves, where j + k = t. In
general, the expected price of credits is given by:

C_{t,j} = C_0 · u^j · d^(t-j)  (4)

where
C_{t,j} = the price of credits at time t after j upward moves
j = the number of upward moves, 0 ≤ j ≤ t
k = the number of downward moves, k = t - j
Under risk-neutral probabilities, the value of the
option is the discounted expected value. Let p
denote the risk-neutral probability of an up move; it
is given as:

p = ((1 + r_f) - d) / (u - d)  (5)

where r_f denotes the risk-free rate.
In addition, the effect of inflation and
depreciation should not be neglected in the process
of long-term decision making. Generally speaking,
the initial cost of the installation of the equipment
will be increased by the expected inflation rate and
decreased by the present value of the tax shield from
depreciation. Following the assumption of (Sarkis et
al, 2005), given a 7-year accelerated depreciation
schedule and a cost of capital about 10%, the
depreciation reduces the investment by 30%.
Although it is not accurate to discount the 7-year
depreciation tax shield into a present value, because
the depreciation in fact occurs year by year during
the seven years, it reflects the effect of depreciation
on the initial cost to some extent. However,
managers should keep in mind that this depreciation
schedule tends to delay the installation decision.
Another factor that would delay installation is the
operating cost of the equipment, but we ignore it in
this model as its effect on the decision is
comparatively small.

I_t = I_0 · (1 + i)^t · (1 - ψ)  (6)

where
I_t = the total cost at time period t
I_0 = the initial cost of the equipment
(1 + i)^t = the factor by which the cost is increased
by inflation, with i the expected inflation rate
ψ = the fraction of the cost removed by the present
value of the depreciation tax shield (here ψ = 0.3)
Now we are ready to calculate the option value at
each time period via comparison with the net
present value (NPV). The NPV at the last time point
is the present value of income, which equals the
emissions savings (denoted by S_t), multiplied by
the price of the credits (C_{T,j}), minus the present
value of cost (I_T), that is NPV_{T,j} = S_T·C_{T,j} - I_T.
In the middle of the decision period, however, the
situation is somewhat different, as we need to take
the expected cash flows of the next period into
consideration. Therefore, the NPV at a middle node
is the value of income less the cost, plus the
expected cash flows of the next period under
risk-neutral probabilities discounted by the
risk-free rate:

NPV_{t,j} = S_t·C_{t,j} - I_t + [p·NPV_{t+1,j+1} + (1-p)·NPV_{t+1,j}] / (1 + r_f)  (7)

To put them into one formula, the NPV is given by:

NPV_{t,j} = S_t·C_{t,j} - I_t + [p·NPV_{t+1,j+1} + (1-p)·NPV_{t+1,j}] / (1 + r_f),  t < T
NPV_{T,j} = S_T·C_{T,j} - I_T  (8)

where
NPV_{t,j} = the net present value at time t after j upward moves
S_t = the emission savings in tons at time t
As pointed out above, there are only two possible
movements of the credit price from one time point
to the next: u·C_{t,j} for an upward move and
d·C_{t,j} for a downward move. The option value
can be solved backward. That is to say, we start at
the final period: when t = T, if the NPV is positive,
management implements the investment; if not, it
discards the investment. Therefore, the option value
here is V_{T,j} = max(NPV_{T,j}, 0). Working back
through the lattice, when it comes to t < T, the
option value is the expected option value of the next
period under the risk-neutral probabilities (p)
discounted by the risk-free rate r_f, that is,
V_{t,j} = [p·V_{t+1,j+1} + (1-p)·V_{t+1,j}] / (1 + r_f).
In general, the option value is given by:

V_{T,j} = max(NPV_{T,j}, 0)
V_{t,j} = [p·V_{t+1,j+1} + (1-p)·V_{t+1,j}] / (1 + r_f),  t < T  (9)

where
V_{t,j} = the option value at time t after j upward moves
After obtaining all the option values and NPVs,
we can find the optimal time to exercise the option,
for example, to install the CCS equipment, by
moving forward through the lattice. The optimal
time to implement is the first time at which the NPV
exceeds the value of the option, that is:

t* = min{ t : NPV_{t,j} ≥ V_{t,j} }  (10)

At the beginning of the decision-making process,
the NPV is smaller than the value of the option. As
the price of credits increases, both the NPV and the
option value increase; however, the NPV grows at a
higher rate than the option value. Consequently,
there is a crossover at some time point at which the
NPV exceeds the option value. In other words, if
the price of credits never rises, there is no
possibility for the NPV to reach the level of the
option value, and the firm need not consider
investing in greentech equipment to mitigate the
purchase of credits. This should clearly draw the
attention of the related policy makers.
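A compact sketch of this valuation in Python follows; it is a minimal implementation of equations (2)-(10) as reconstructed above, and the parameter values in the example call are illustrative placeholders, not the case data of Section 5:

import math

# Minimal binomial-lattice sketch of equations (2)-(10). The parameter
# values in the example call below are illustrative, not the Shenhua case.
def ccs_lattice(C0, sigma, rf, I0, infl, psi, S, T):
    u, d = math.exp(sigma), math.exp(-sigma)           # eq (2)
    p = ((1.0 + rf) - d) / (u - d)                     # eq (5)
    cost = [I0 * (1.0 + infl)**t * (1.0 - psi) for t in range(T + 1)]  # eq (6)
    price = [[C0 * u**j * d**(t - j) for j in range(t + 1)]
             for t in range(T + 1)]                    # eq (4)
    npv = [[0.0] * (t + 1) for t in range(T + 1)]
    opt = [[0.0] * (t + 1) for t in range(T + 1)]
    for j in range(T + 1):                             # final period t = T
        npv[T][j] = S * price[T][j] - cost[T]          # eq (8), t = T
        opt[T][j] = max(npv[T][j], 0.0)                # eq (9), t = T
    for t in range(T - 1, -1, -1):                     # backward induction
        for j in range(t + 1):
            cont = (p * npv[t + 1][j + 1] + (1.0 - p) * npv[t + 1][j]) / (1.0 + rf)
            npv[t][j] = S * price[t][j] - cost[t] + cont               # eq (8)
            opt[t][j] = (p * opt[t + 1][j + 1]
                         + (1.0 - p) * opt[t + 1][j]) / (1.0 + rf)     # eq (9)
    # eq (10): first period in which exercising (NPV) beats holding (option)
    for t in range(T + 1):
        if any(npv[t][j] >= opt[t][j] for j in range(t + 1)):
            return t
    return None

# Illustrative call: rf, infl and S are assumed values, not case data.
print(ccs_lattice(C0=14.72, sigma=0.3305, rf=0.05, I0=1.4e9,
                  infl=0.03, psi=0.30, S=3.0e6, T=20))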

5. ANALYSIS OF THE DECISION TO
INVEST CCS PROJECT
We apply the model to the case of a hypothetical
power plant faced with the decision of whether to
install CCS equipment or to purchase CO2 credits,
and aim to work out the optimal timing, i.e. the
critical price of credits, at which to exercise the
decision.
Data on the cost of installing CCS equipment is
taken from the Shenhua CTL project (Ren, 2008),
one of the demonstration projects supported by the
government. In this example we assume an initial
investment of 1.4 billion for the CCS project, with
the growth rate and volatility estimated as in
Section 4. The price of credits was about 14.72 Euro
as of December 2010 according to Point Carbon's
OTC price assessments ('Point Carbon's OTC Price
Assessments', 2010); in addition, the Clean
Development Mechanism (CDM) could provide
roughly 11-32 $ per ton of CO2 for CCS investors,
and the initial credit price is set accordingly for the
calculation. We build the lattice in 20 annual steps,
and the result is demonstrated in Table 2:

Table 2 - Binomial lattice for CCS project
[For each year 0-20 and each lattice node, the table lists the present value (PV) and the real option value (RO) of the investment.]

6. CONCLUSIONS
This paper presented a real options method to
evaluate the investment decisions of greentech
projects. The optimal time to invest in a CCS
project is determined through the real options
model. According to the characteristics of
investment in greentech, the model quantifies the
influence of inflation and depreciation and takes full
account of emission credit costs and profits. The
result of the analysis of a demonstration project
verified its feasibility and further provided practical
and managerial insights into the application of real
options analysis to greentech investment.
A limitation of this model is that it ignores the
operating cost of the greentech project. Although
this can be taken into consideration by adjusting the
initial investment cost, the operating cost of
greentech may significantly influence the result of
the lattice. Another limitation possibly lies in the
simulation of the price paths, as it may produce
unbounded stochastic prices, whereas the
government will intervene in the price process. The
usefulness of the real options model for greentech
investment is not affected by these limitations;
however, we will seek to work out these problems
in future work.

REFERENCES
Birge, J. R., and Rosa, C. H., Incorporating Investment
Uncertainty into Green-House Policy Models, The
Energy Journal, Vol.17, No.1,1996, pp79-90.

Bonifant, B. C., Arnold, M. B., and Long, F. J., Gaining
Competitive Advantage through Environmental
Investments, Business Horizons,
Vol.38,No.4,1995,pp 37-47.

China Greentech Initiative, The China Greentech Report
2009, 2009, pp12

Donaldson, G., Lorsch, J., Decision Making at the
Top:The Shaping of Strategic Direction, New York:
Basic Books, 1983.

Kester, W. C., Today's Options for Tomorrow's
Growth, Harvard Business Review, No.2,1984,
pp153-160.

Miller, L. T., and Park, C. S., Decision Making under
Uncertainty-Real Options to the Rescue?,
Engineering Economist, Vol.47,No.2,2002,
pp105.

Mondschein, S. V., and Schilkrut, A., Optimal
Investment Policies for Pollution Control in the
Copper Industry, Interfaces, Vol.27, No.6,
1997, pp69-87.

Myers, S. C. , Determinants of Corporate Borrowing,
Journal of Financial Economics, No.5, 1977,
pp147-176.

Myers, S. C., Finance Theory and Financial Strategy,
Midland Corporate Finance Journal,
Vol.5 ,No.1, 1987, pp6-13.


Myers, S. C., and Turnbull, S. M., Capital Budgeting,
and the Capital Asset Pricing Model: Good
News and Bad News, Journal of Finance,
Vol.32, 1977, pp321-333.

Nehrt, C., Timing and Intensity Effects of
Environmental Investments, Strategic
Management Journal, Vol.17, No.7, 1996,
pp535-547.

O'Brien, J. P., Folta, T. B., Douglas, R. J., and Timothy,
B., A Real Options Perspective on
Entrepreneurial Entry in the Face of
Uncertainty, Managerial & Decision
Economics, Vol.24, No.8, 2003, pp515-544.

Point Carbon's OTC Price Assessments, 2010,
http://www.pointcarbon.com/

Porter, M. E., and Van der Linde, C., Green and
Competitive: Ending the Stalemate, Harvard
Business Review, Vol.73, No.5, 1995, pp120-
134.

Presley, A., and Sarkis, J., An Activity Based Strategic
Justification Methodology for ECM Technology,
The International Journal on Environmentally
Conscious Design and Manufacturing, Vol.3,
No.1, 1994, pp5-17.

Ren, X., Shenhua CTL (Clean Coal) Projects and CO2
Sequestration, 2008.

Ross, S. A., Uses, Abuses, and Alternatives to the Net-
Present-Value Rule, Financial Management,
No.24, 1995, pp96-102.

Sarkis, J., A Methodological Framework for Evaluating
Environmentally Conscious Manufacturing
Programs, Computers & Industrial
Engineering, Vol.36, No.4, 1999, pp793-810.

Sarkis, J., and Tamarkin, M. , Real Options Analysis for
Green Trading: The Case of Greenhouse
Gases, Engineering Economist, Vol.50,No.3,
2005, pp273-294.

Seligsohn, D., Liu, Y., Forbes, S., Dongjie, Z., and West,
L., CCS in China: Toward an Environmental,
Health, and Safety Regulatory
Framework, 2010, http://www.wri.org.

Sing, T. F., Time to Build Options in Construction
Processes, Construction Management &
Economics, Vol.20, No.2, 2002, pp119-131.


USER-ASSISTED EVALUATION OF TOOL PATH QUALITY FOR COMPLEX
MILLING PROCESSES
Christian Brecher
WZL of RWTH Aachen University
c.brecher@wzl.rwth-aachen.de
Wolfram Lohse
WZL of RWTH Aachen University
w.lohse@wzl.rwth-aachen.de

ABSTRACT
Complex machining technologies such as simultaneous five-axis milling are becoming increasingly
significant for today's production industry. Though most existing CAM systems support planning
engineers in designing these processes, they do not assist in evaluating the quality of the resulting
NC programs with respect to given objectives. Therefore, a new approach presented in this paper is
currently being developed for identifying potentially critical areas of planned five-axis tool paths. It
is based on different criteria aggregating information from process signals and on succeeding
evaluation functions as part of an inference network. The network interprets fuzzy and crisp rules
for computing risk coefficients that specify the inclination of tool paths to different machining
deficits such as surface marks, inaccuracy and low productivity. For evaluating machining
processes, the inference network demands input data that can be acquired on real and virtual
machine tools. The latter must consider the effects and interactions of controllers, machines and
cutting processes in a co-simulation in order to provide adequate results. A virtual machine tool
covering these requirements has been built up in current research. New findings from this work and
its integration into the assistance approach presented before are also described in this paper.
KEYWORDS
CAM Planning, User Assistance, Virtual Machine Tool, Simulation

1. INTRODUCTION
Manufacturing of complex products with high
quality requires efficient, robust and accelerated
production processes in order to maintain
competitiveness on the world market. Knowledge
about organisational concepts and modern
technologies increasingly gains significance, as
Schuh et al (2011) have observed for the branch of
toolmaking. Investing more engineering time to
optimise processes such as high-speed machining
often proves to be economic when conservative,
low technological settings can thus be avoided
(Altan et al, 2001). At the same time, product
variety is growing, complicating process
standardisation and the re-use of known technological
settings (Schuh, 2010). These developments lead to
requirements that can only be covered with
expertise and highly skilled personnel. However,
companies are facing difficulties in finding a
sufficient number of qualified employees, as a
recent survey for toolmakers in the United States
has shown (Moldmaking, 2011). These observations
may be transferred to other branches where
machining takes place, e.g. automotive and
aerospace industries.
The extended need for process knowledge and
higher production efficiency despite the increasing
complexity of variety calls for systematic
approaches that support the design of machining
processes. Though current CAM systems already
provide a large range of functionality for planning
operations, knowledge from simulations or
experiences from the shopfloor are frequently
neglected in commercial solutions.
This paper presents a knowledge-based approach
for assisting operations planners in evaluating the
quality of tool paths by densifying process data into
information regarding machining deficits. This way,
process knowledge is prepared systematically and
can be provided to less experienced employees in a
production enterprise.
The remainder of this paper is organised as
follows: First, several existing user assistance
approaches, which address different facets of CAM
planning, are discussed. Second, some remarks on
knowledge aggregation by employing fuzzy logic
are given. Third, the structure of the user assistance
approach and two possible sources for providing
data, namely process data traces and integrative
simulations are outlined. Fourth, the paper contains
an example application before it concludes with
some final remarks.
2. USER-ASSISTED CAM PLANNING
2.1. PATH-SYNCHRONOUS VISUALISATION
OF PROCESS DATA
As aforementioned, experiences gained from
machining in production are generally not returned
to work preparation, which is why machining
drawbacks cannot be considered in earlier planning
phases. A
first prerequisite for assisting CAM planners thus
consists in establishing systematic feedback
mechanisms for returning process data. Moreover, a
specification of the signals that are apt to contribute
to NC program improvements is necessary. This
definition depends on the optimisation objective. If
the workpiece must fulfil high standards concerning
surface quality, the dynamic behaviour of the
machine tool exerts a distinct influence on the
result. Consequently, process signals such as axis
positions, velocities and accelerations have to be
included in a thorough analysis. On the contrary, the
assessment of energy-efficiency must consider
different signals, e.g. currents of drives and the
main spindle.
Tracing of process data mainly gives time-based
curves that enable the examination of trends and
oscillations. However, geometrical information is
not directly accessible so that errors at critical
locations and features of a workpiece, e.g. surface
marks, violated allowances, etc., cannot be assigned
to signal patterns. A combined data representation
of 2D plots for values over time and a 3D
assignment to the tool path is a helpful tool for
making the analysis of NC programs as well as tool
and workpiece movements more efficient.
Considering dynamics-related signals such as
accelerations with respect to critical locations
constitutes a first means of supporting CAM
planners in optimising tool paths.
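As a hedged illustration of such a combined representation (not the WZL implementation, which extends a commercial CAM system), a path-synchronous view can be sketched in Python with a synthetic signal:

import numpy as np
import matplotlib.pyplot as plt

# Sketch: colour a 3D tool path by a process signal (here a synthetic TCP
# velocity) next to the corresponding 2D plot over time. Illustrative only.
t = np.linspace(0.0, 10.0, 500)                       # time [s]
path = np.column_stack((t, np.sin(t), 0.1 * t))       # synthetic path [mm]
vel = np.linalg.norm(np.gradient(path, t, axis=0), axis=1)  # TCP velocity

fig = plt.figure()
ax2d = fig.add_subplot(1, 2, 1)                       # values over time
ax2d.plot(t, vel)
ax2d.set(xlabel='time [s]', ylabel='TCP velocity [mm/s]')
ax3d = fig.add_subplot(1, 2, 2, projection='3d')      # assignment to path
sc = ax3d.scatter(path[:, 0], path[:, 1], path[:, 2], c=vel, s=4)
fig.colorbar(sc, ax=ax3d, label='TCP velocity [mm/s]')
plt.show()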
This kind of visualisation has been realised at
WZL as an extension of a commercial CAM system.
The base functionality is supplemented by a custom
module for administrating process data. The latter
can be provided by an integrated manufacturing
database and trace files (Brecher, Lohse and Vitr,
2011). The database enables systematic storage of
process information together with boundary
conditions such as NC programs, CAM setups,
active controller commands, etc. Trace files
represent a less complex way for returning process
information to CAM systems. These files contain
the converted output of software tools that are
developed by controller vendors for acquiring
process signals (e.g. Siemens SinuComNC Trace
and Heidenhain TNCScope).
The velocity of the tool centre point (TCP) along
the path of a test freeform surface is presented as an
example in figure-1. At locations with high
curvature, the programmed feed rate is never
reached, as the smaller spikes above the path show.
While this behaviour can be expected, it is more
interesting to see that the same goes for linear path
segments. Hence, the dynamic potential of the
employed machine tool is insufficient for covering
the requirements specified in operations planning,
and the real process behaviour is prone to
deviations.
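A simple kinematic estimate shows why a programmed feed rate may be unreachable even on short linear segments; the sketch below assumes constant axis acceleration and a trapezoidal velocity profile, and all numbers are illustrative rather than data from the paper:

import math

# Peak velocity reachable on a linear segment of length L when the axis
# must accelerate from and decelerate to (near) standstill, assuming a
# constant acceleration limit. Illustrative values, not machine data.
def reachable_feed(segment_mm, accel_mm_s2, programmed_mm_s):
    v_peak = math.sqrt(accel_mm_s2 * segment_mm)  # accelerate half, brake half
    return min(v_peak, programmed_mm_s)

# A 5 mm segment, 2 m/s^2 axis acceleration, 10 m/min programmed feed:
print(reachable_feed(5.0, 2000.0, 10000.0 / 60.0))  # ~100 mm/s < 166.7 mm/s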
The visualisation offers intuitive access to
process signals and thus to machining difficulties.
However, an analysis may be time-consuming due
to the variety and the extent of available data.
Therefore, CAM user assistance must go beyond
visualisation. In the following, some existing
approaches are discussed.
2.2 USER ASSISTANCE APPROACHES
FOR MACHINING PLANNING
First, a simplified conventional CAM process,
which can be found in similar forms in today's
production industry, is described in order to assign
the approaches presented below to the planning
phases in which they can be employed (figure-2).
Starting with a designed workpiece, CAM planning
in the narrower sense is carried out.

Figure 1 Path-synchronous visualisation of process signals


This phase may be subdivided into several steps of
which, in particular, path and technology planning
as well as a CAM-internal simulation are crucial
within the scope of this paper. Subsequently, NC
simulations, of which the employed models map
machining operations with a higher level of detail
than the CAM-internal tools, provide a second
possibility for optimisation prior to production.
Finally, the NC program is set-up on the machine
before manufacturing starts (cf. Kief, 2009).
User assistance can take place in several phases
along the presented planning process. The numerous
existing approaches are often related to intelligent
computing methods and optimisation techniques.
For example, they may base on fuzzy logic, genetic
algorithms, knowledge inference, neural networks,
simulated annealing, etc. (Teti and Kumara, 1997).
In early planning, manufacturing resources can be
selected semi-automatically or, under certain
circumstances, automatically. For example,
Brecher, Malk et al (2011) present a knowledge-
based system that assists in selecting appropriate
cutting tools for milling. They employ an inference
algorithm working on a set of given rules for
filtering a cutter database. Apart from geometrical
characteristics, the authors consider economical and
organisational criteria, namely availability of tools,
maximal material removal rate and costs. A similar
system has been developed by Carpenter and
Maropoulos (2000). It evaluates rules with respect
to different operation types and criteria, e.g. variety
reduction, and suggests feasible cutting conditions.
In both cases as well as in most other applications in
this domain, only general characteristics of the
target workpiece and suitable machining resources
are considered while specific process data, which
can be derived from simulations or earlier
machining operations, is omitted.
Many research approaches provide user support
for the phase of path and technology planning (e.g.
Wang et al, 2002; Tansel, Ozecelik et al, 2006;
Erdim et al, 2006). Wang et al (2002) rely on a
deterministic optimisation approach that comprises
a target function addressing productivity and
boundary conditions for the dynamic limits of main
and feed drives. The approach is restricted to
turning, and geometries have to be kept simple.
Tansel et al (2006) employ a hybrid algorithm for
selecting optimal cutting conditions. The authors
use neural networks that have been trained to
correlate inputs, e.g. feed rate and spindle speed, to
surface roughness and machining time. A selection
strategy takes the network results and iteratively
generates new sets of cutting conditions
as part of optimisation cycles. The approach thus
considers physical impacts as a black box with
unknown relations, so that the quality of
optimisation depends on the trained networks and
the training data set.
A different CAM assistance system has been
developed by Erdim et al (2006) who adapt feed
rates according to simulated process forces. The
latter are computed in two steps, i.e. a geometrical
analysis of the contact between workpiece and tool,
and empirical force equations. For adapting the feed
rate at each path segment, the authors use a linear
equation which results from experimental studies
and takes process forces as inputs. Influences on the
machining result that are exerted by the numerical
controller or the dynamical behaviour of the target
machine tool are not considered. Moreover, the
linearity that has been validated for a simple test
part and an airfoil geometry may not be
transferable to different applications.
Further works focus on user-assisting
functionalities for evaluating the manufacturability
of a given workpiece. For example, Korosec et al
(2005) have developed a neuro-fuzzy model that
identifies expected machining difficulties by
combining different criteria. These include
geometrical parameters such as ratio of surface area
to feature volume and curvature. Further, non-
geometric parameters, e.g. allowances and quality
requirements, are taken into account. Both kinds of
parameters are aggregated with a fuzzy inference
mechanism of which membership functions have
been parameterised by neural networks (cf.
Section 2.3). Data from actual or simulated
machining operations is not considered.
An assessment of process signals measured
during turning operations has been studied by
Tansel, Wang et al (2006). The authors evaluate
data from an acceleration sensor for calculating
specific criteria that indicate unstable cutting
conditions. This step is based on time-frequency
amplitudes, a damping index and a fuzzy inference
mechanism.
As this non-exhaustive list of applications shows,
CAM assistance can have many facets. This paper
deals with an approach for assessing quality of
planned tool paths by considering process data from
Figure 2 - Phases of a conventional CAM planning process


real or simulated milling operations. It contains
elements that exhibit similarities to the approaches
of Korosec et al (2005) and Tansel, Wang et al
(2006). However, employed data sources, structure
and aim of analysis differ in several parts.
2.3. DATA AGGREGATION
2.3.1. Knowledge Representation
To derive a concept that provides user support
further than visualisation (see Section 2.1), some
basic elements of knowledge management must be
considered first. Common concepts in this domain
distinguish between explicit and implicit
knowledge. Explicit knowledge can be expressed in
a form that is understandable for humans and,
depending on the expression syntax, for computers.
On the contrary, implicit knowledge is strongly
related to cognitive patterns of humans who have
gained this knowledge by experience; it cannot be
formulated explicitly, nor can computers interpret it.
Since an assistance approach built as an
enhancement of CAM systems demands an
explicit representation, the following is restricted to
the first category.
Furthermore, there are several layers making up a
so-called knowledge space, commonly represented
as a pyramid. According to Aamodt and Nygård
(1995), knowledge can be found on the second
highest level in knowledge space. It can be reached
by passing through four other layers, namely
characters/numbers, words/values, data and
information. To contribute to an improved CAM
planning, an assistance approach must lead from the
current knowledge layer towards a higher one in the
pyramid. Since the acquired process signals can be
assigned to the data layer, information must be
reached in order to provide effective user support.
Of course, further aggregation, in particular to
knowledge, is also desirable.
2.3.2. Theoretical background of fuzzy logic
Inter alia, fuzzy logic can be employed to achieve
the demanded knowledge aggregation. For
processing data with this technique, three steps have
to be carried out: fuzzification, inference and
defuzzification. The first step, fuzzification, implies
the introduction of linguistic variables. The latter
enable a linguistic description of data; for example,
axis accelerations can be classified as "high",
"average" and "low". To relate a crisp value to
these terms, membership functions are applied.
They assign a set of values $B$ (e.g. axis
accelerations) to the interval $[0, 1]$ (Kahlert and
Frank, 1994):

$\mu: B \rightarrow [0, 1]$   (1)

The definition of linguistic variables and
membership functions belongs to the process of
knowledge representation and may be found by
interviews with experts, experimental studies, etc.
As long as equation-1 is fulfilled, these functions
may be chosen arbitrarily; commonly, types such as
triangle and trapezoid functions or singletons are
chosen for $\mu$.
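As an illustration, the following minimal Python sketch shows a trapezoidal membership function; the interval parameters and the acceleration levels are hypothetical values, not taken from the presented approach.

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: ramps up on [a, b], equals 1 on [b, c],
    ramps down on [c, d], and is 0 outside [a, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Hypothetical classification of an axis acceleration value (m/s^2)
acc = 6.0
mu_low = trapezoid(acc, -1.0, 0.0, 1.0, 3.0)      # -> 0.0
mu_average = trapezoid(acc, 1.0, 3.0, 5.0, 7.0)   # -> 0.5
mu_high = trapezoid(acc, 5.0, 7.0, 50.0, 60.0)    # -> 0.5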
After fuzzifying input values, inference takes
place. This step requires the definition of rules that
consist of two elements: premises and implications.
The former are built up of conditions evaluating an
assignment of a variable to a linguistic value, and
fuzzy operators that concatenate the resulting sets
by unification (logical OR) or intersection
(logical AND). The concatenation gives the
degree of fulfilment of a rule, and, consequently, of
the implication. The latter assigns a linguistic output
variable to a specific classification so that the
membership of the variable is expressed as a value
in the interval [0, 1] (Kahlert and Frank, 1994).
Exemplarily, a rule of the form

$\mathrm{IF}\; P_1 \;\mathrm{AND}\; P_2 \;\mathrm{AND} \ldots \mathrm{AND}\; P_n \;\mathrm{THEN}\; I_j$   (2)

with premises $P_i$, $i \in \{1, \ldots, n\}$, and an
implication $I_j$ is considered. For each premise, $\mu_i$ is
computed according to equation-1. Subsequently,
the rule's degree of fulfilment $h$ can be computed as
follows:

$h = \min\{\mu_i \mid i \in \{1, \ldots, n\}\}$   (3)
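A one-line Python sketch of equation-3, assuming the premise membership values have already been obtained via equation-1:

def rule_fulfilment(memberships):
    """Equation-3: degree of fulfilment h of an AND-connected premise,
    i.e. the minimum over the premise membership values mu_i."""
    return min(memberships)

h = rule_fulfilment([0.7, 0.4])   # e.g. two fuzzified premises -> 0.4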

In general, several rules exert an influence on an
output value, so that a fuzzy inference system
(FIS) must be applied. A common and often
employed FIS has been designed by Mamdani and
Assilian (1975). It combines the fulfilment degrees
of all rules with the help of minimum and maximum
operators and thus merges the corresponding
fuzzy sets. The result is an aggregated membership
function $\mu_{res}(y)$ that depends on the output
parameter $y$.
Finally, defuzzification has to be carried out in
order to obtain a crisp value $y_{res}$. This operation may
be based on the centre-of-gravity technique, where the
centre-of-gravity of the aggregated fuzzy set is
determined by equation-4 (Klir and Yuan, 1995).

$y_{res} = \dfrac{\int y \, \mu_{res}(y) \, dy}{\int \mu_{res}(y) \, dy}$   (4)



Since calculating the integrals of equation-4 can
lead to a high computational effort, an
approximation is commonly used.
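The sketch below illustrates one such approximation, under the assumption that the output universe is discretised on a grid so that the integrals of equation-4 become sums; min/max follow the Mamdani scheme outlined above, and all membership shapes are invented for illustration.

import numpy as np

def mamdani_cog(rules, y_grid):
    """Mamdani-style inference on a discretised output universe.
    rules is a list of (h, mu_out) pairs: each fulfilment degree h clips
    (min) its output membership function mu_out sampled on y_grid, the
    clipped sets are merged by max, and the centre of gravity of
    equation-4 is approximated by sums over the grid."""
    mu_res = np.zeros_like(y_grid)
    for h, mu_out in rules:
        mu_res = np.maximum(mu_res, np.minimum(h, mu_out))
    if mu_res.sum() == 0.0:
        return 0.0  # no rule fired
    return float((y_grid * mu_res).sum() / mu_res.sum())

y = np.linspace(0.0, 1.0, 101)                     # risk coefficient axis
mu_critical = np.clip((y - 0.4) / 0.4, 0.0, 1.0)   # invented output levels
mu_unremarkable = np.clip((0.6 - y) / 0.4, 0.0, 1.0)
risk = mamdani_cog([(0.8, mu_critical), (0.1, mu_unremarkable)], y)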
3. ASSESSMENT OF NC PATH QUALITY
3.1. DESCRIPTION OF THE ASSISTANCE
APPROACH
An assistance approach that allows the assessment of
CAM-NC planning quality is currently being developed at
WZL. This approach contains deterministic
elements as well as inference mechanisms based on
fuzzy logic, which enable the densification of data
into information.
3.1.1. Risk and potential coefficients
First, measures for judging NC planning quality
have to be found. Further, quantification methods
must be defined to enable comparisons between
several planning variants and to indicate
possibilities for optimisation. For this purpose, the
presented approach introduces risk and potential
coefficients. These coefficients are computed for
discrete locations on the tool path; they are defined
as values between 0, expressing an uncritical area,
and 1, indicating a high probability for the
occurrence of machining difficulties.
Risk coefficients describe the susceptibility of
locations to machining errors such as surface marks
and violations of tolerances. Since numerous
different errors may occur, several risk coefficients
must be defined. Apart from errors, NC planning
quality can also be assessed with respect to
productivity. Since this objective is related to
machining time, technological settings influencing
the dynamics of machine motions have to be
analysed. These characteristics are considered by
potential coefficients that evaluate the degree of
exploitation of the available drive potential.
3.1.2. Inference networks
Coefficients are computed on the basis of so-called
inference networks. The latter can contain six
different element types, namely input ports, criteria,
calculation blocks, inference systems, output ports
and connections. An abstract representation of an
inference network containing these elements
(configurations and data flows different from the
depicted ones are possible) is shown in figure-3.
Except for connections, the different element types
are explained in the following.
Input ports represent interface elements that
allow importing base data into networks. They are
connected to two main sources: process signals and
CAM data from operations planning. The former
comprise axis-related values, in particular positions,
velocities and accelerations, as well as dynamic
signals of the tool centre point (TCP). The second
data type originates from the CAM setup. For
example, geometric information about the target
workpiece, programmed feeds and speeds of
machining operations and tool properties may be
extracted so that they are accessible for further
aggregation in inference networks.
In the presented approach, input signals are
processed by different means. The first possibility
consists of deterministic transformations that
consider physical relations and experience
regarding the occurrence of machining errors. These
transformations are called criteria; they prepare
signals for further steps in an inference network. As
a second possibility for pre-processing input signals,
simple mathematical operations can be used. For
Figure 3 - Abstract representation of an inference network


example, blocks computing the norm of a vector,
trigonometric and arithmetic functions can be part
of inference networks.
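As a rough Python illustration of how such blocks might compose into a network (signal names and values are hypothetical, not part of the actual implementation):

import numpy as np

# Input ports: sampled axis signals imported into the network
signals = {
    "acc_x": np.array([0.1, 2.5, 0.3, 4.0]),
    "acc_y": np.array([0.0, 1.5, 0.2, 3.5]),
}

def norm_block(*components):
    """Calculation block: Euclidean norm of a vector-valued signal."""
    return np.sqrt(sum(c ** 2 for c in components))

# A criterion or FIS block would process this intermediate signal next;
# an output port finally exposes the result to further analysis tools.
tcp_acc = norm_block(signals["acc_x"], signals["acc_y"])
output_port = {"tcp_acceleration": tcp_acc}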
Risk and potential coefficients offer a wide range
of interpretation; in particular, appropriate scales are
hard to define unambiguously. The same goes
for the assessment of dynamic values. For example,
it depends on various factors, e.g. the machining
process, the workpiece and experimental results,
whether the acceleration value at a specific location
is judged as critical. Linguistic descriptions and the
introduction of intervals for specifying overlapping
ranges constitute adequate means for coping with
the given assessment ambiguity. Therefore, the
presented approach comprises fuzzy inference
systems (FIS) that can be applied on the outcome of
other blocks (cf. Section 2.3). These FIS elements
follow the design of Mamdani and Assilian (1975)
and make use of the outlined centre-of-gravity
technique for determining crisp output values.
If an FIS is inserted into an inference network,
fuzzification and defuzzification mechanisms as
well as inference rules must be configured. Firstly,
it is necessary to define linguistic terms for the input
variables and to specify the corresponding
membership functions. Because of the heterogeneity
of variables, these specifications may differ
strongly. In a next step, rules are to be formulated.
Premises and implications are significantly related
to the character of risk and potential coefficients so
that their definition is based on expertise in milling
and results from reference workpieces. Finally,
linguistic terms for the risk coefficient have to be
assigned to membership intervals.
Though fuzzy mechanisms prevail in the
inference networks, it is also possible to use
Boolean inference in case of clearly defined
boundary conditions. This enhancement enables the
definition of crisp limits; for example, rapid and
cutting movements are thus distinguishable without
much effort; as a consequence, different movement
conditions can be considered when risk coefficients
are computed. Cascading of inference networks is
also possible. For example, an FIS can provide
input values for a criterion or another FIS. This
way, modelling gains flexibility.
Output ports must be part of inference networks
as terminators of the data flow. They represent a
sink for risk and potential coefficients and provide
access to the inference results for further analysis
tools.
3.1.3. CAM integration
Using the assistance functionalities requires a direct
integration into software tools that are employed in
operations planning. Therefore, the modelling and
evaluation environment for inference networks
has been implemented as an extension of the
commercial CAM system already mentioned in
Section 2.1. This way, process signals that are
available for visualisation may also be used as input
for aggregation into information, i.e. the computation
of criticality coefficients. Moreover, the inference
results may be returned to the graphical 3D context
of the CAM system. This visualisation enables users
to analyse these coefficients interactively, so that
optimisation steps can be carried out more easily.
3.2. DATA SOURCES
3.2.1. Acquisition of real process data
As explained above, inference networks transform
data into information that supports users in
assessing quality of CAM-NC planning. This
process requires adequate input data.
This data can inter alia be retrieved from real
numerical controllers (cf. Section 2.1). In general,
NCs are apt to acquire axis positions, velocities,
etc. with a sample time corresponding to the cycle
time of the fine interpolator. Since no external
sensors are necessary, data acquisition can easily be
carried out during machining. However, the
bandwidth of signals that can be traced
simultaneously is often limited, so that a complete
data set may require several repetitions of the
process before analysis can start. A second
drawback is the limited observability of
physical effects and interactions. Firstly, the sample
frequency may be too low for capturing specific
effects, and information is lost. Secondly, measured
values and values of interest may differ. For
example, deformations and oscillations of structural
machine components, tool deflections and
measuring errors prohibit a precise reconstruction of
TCP loci and orientations by transforming acquired
axis positions.
Alternatively, supplementary sensors can be
installed on target machine tools. Elaborated
measurement concepts can contribute to a higher
accuracy regarding process signals to be captured;
some quantities such as the tool tip position are
nevertheless inaccessible due to characteristics of
the cutting process. Moreover, additional sensor
equipment is expensive, which reduces its suitability
for the routine analysis of machining deficits.
3.2.2. Acquisition of virtual process data
Both ways of acquiring real process data may prove
to be useful for providing the required inputs for the
presented inference networks. However, real
machining must take place before data can be
analysed, and the drawbacks of inaccessible data
may be critical for the presented approach.


Therefore, virtual technologies are considered as
supplementary or alternative source for process
data.
To map milling processes reliably, many
different influences have to be taken into account.
In current work at WZL and Fraunhofer Institute for
Production Technology (IPT), a virtual
manufacturing system (VMS) is built up as
integrative simulation for machining processes
(figure-4). This simulation considers four sub-
systems of machine tools: numerical controllers,
drive controller loops, dynamic behaviour of the
machine tool structure and the cutting process in a
narrower sense (Brecher et al, 2009).
Each sub-system exerts a significant influence on
machining. Planning information stored in an NC
program is not translated directly into movements of
the target machine's axes. Instead, the NC interprets
the commands depending on its settings. For
example, dynamic axis restrictions have to be
considered. Whenever an NC recognises that a
programmed feed rate cannot be reached, it lowers
the path velocity, and planned technological settings
differ from actual ones. Moreover, algorithms for
smoothing movements are often activated to obtain
better surface qualities (Volkwein, 2006). This way,
the tool path also deviates from the planned
scenario. Since NC models commonly do not map
the described behaviour precisely, VMS contains an
NC simulation with an embedded virtual controller.
The latter is a simulation component being
developed by the NC vendor that relies on the same
source code as hardware NCs and accepts the same
configuration parameters. Consequently, the
computed lead positions match those of a real NC
almost entirely.
The simulated lead positions represent the input
values for models of drive controller loops, which
are implemented within a software system for
Computer-Aided Control Engineering (CACE).
These models frame two complex simulation
components that are connected via programming
interfaces: a flexible multibody system
mapping the dynamic machine behaviour with high
accuracy (developed by Hoffmann, 2008), and a
module for computing cutting forces (Klocke et al,
2008).
The force model comprises three elements.
Firstly, the contact situation between the current
workpiece and the active cutter is evaluated. The
tool is represented as discretised hull geometry
consisting of flat cutting disks. These may have
different diameters in case of ball nose cutters. After
calculating the current volumetric penetration of the
workpiece, entry and exit angles are determined for
each disk. Secondly, analytical models take these
values to compute a more precise shape of
the current chip with small discrete elements (the
resolution depends on available calculation time). In
this step, the number of flutes, the helix angle of the
cutter, the rotary spindle position, etc. are
considered. Thirdly, the discrete elements are
processed with empirical approaches from Altintas
et al (1991) for calculating cutting forces.
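A strongly simplified Python sketch of such a disk-wise force summation is given below. It assumes a linear relation between chip thickness and force with placeholder cutting coefficients, and it omits edge forces, runout and the detailed chip-shape computation of the actual model:

import numpy as np

def milling_force(phi, fz, ap, n_disks, helix_lag, Ktc=600.0, Krc=240.0):
    """Tangential/radial force of one flute at rotation angle phi [rad],
    summed over n_disks axial slices of the depth of cut ap [mm].
    Chip thickness per slice follows h = fz * sin(phi_k); the cutting
    coefficients Ktc, Krc [N/mm^2] are placeholders, not measured data."""
    dz = ap / n_disks
    Ft = Fr = 0.0
    for k in range(n_disks):
        phi_k = phi - k * helix_lag    # angular lag caused by the helix
        h = fz * np.sin(phi_k)
        if h > 0.0:                    # slice engaged in the cut
            Ft += Ktc * h * dz
            Fr += Krc * h * dz
    return Ft, Fr

print(milling_force(phi=1.2, fz=0.06, ap=3.0, n_disks=30, helix_lag=0.01))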
4. APPLICATION ON CONTOUR MILLING
4.1. EXAMPLE INFERENCE NETWORK
In this section, the application of an example
inference network to the assessment of surface
quality achieved with contour milling operations is
described. This network contains a specific criterion
and a fuzzy inference system for calculating a risk
coefficient. Possible influences on the surface may
result from abrupt machine movements. Therefore,
mainly acceleration signals of the machine axes are
considered as inputs.
To quantify the term "abrupt", a criterion called
smoothed (acceleration) differences (SD) is
introduced. It is based on the idea that brisk
accelerations which do not follow a general
dynamical trend are prone to causing surface marks.
As a mathematical representation of this idea, the
acceleration signal of an axis is processed with a
low-pass filter. The difference between the
unfiltered and the smoothed signal is then
calculated, giving an indicator of the abruptness of
the movement. This criterion is applied to each axis
that is active during the considered machining
operation.
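A minimal Python sketch of the SD criterion, in which a moving average stands in for the low-pass filter (the actual filter type and window length are not specified here and are assumptions):

import numpy as np

def smoothed_differences(acc, window=15):
    """SD criterion: absolute deviation of a raw axis acceleration
    signal from its low-pass filtered trend."""
    kernel = np.ones(window) / window
    trend = np.convolve(acc, kernel, mode="same")
    return np.abs(acc - trend)

# Hypothetical acceleration trace with one brisk outlier
acc = np.concatenate([np.zeros(20), [4.0], np.zeros(20)])
sd = smoothed_differences(acc)    # peaks at the outlier location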
In a second step, the criterion results must be
aggregated because the occurrence of abrupt
movements for several axes can lead to an
intensification of the effect on the surface. For this
aggregation, an FIS is introduced that takes the
criterion results as inputs. As described in
Section 2.3, the inputs must be fuzzified first. Since
the masses carried by each axis and the installed

Figure 4 - Components of the Virtual Manufacturing
System (Brecher et al, 2009)


drive power may differ, fuzzification requires
individually adapted definitions of membership
functions for each axis. In the discussed inference
network, three levels, "high", "average" and "low",
with trapezoidal membership functions are chosen,
and their parameters are set according to the
characteristics of the machine axes.
Subsequently, linguistic levels for the risk
coefficient are specified. Again, three levels have
proven to be adequate: "unremarkable",
"noticeable" and "critical". The configuration of the
FIS is completed by rules for assigning the SD
criteria of several axes to implications on the risk
coefficient (RCF). For example, rotary movements
(represented by the axes A and C for a machine tool
with tilting table kinematics) have shown a strong
influence on the result, so that the following rule is
part of the inference base:

$\mathrm{IF}\; SD_A = \mathrm{high} \;\mathrm{AND}\; SD_C = \mathrm{high} \;\mathrm{THEN}\; RCF_{SD} = \mathrm{critical}$   (5)

Finally, the FIS is connected to an output port that
provides access to the computed risk coefficient.
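Putting the pieces together, a heavily reduced Python sketch of how rule (5) might be evaluated; the membership parameters are invented for illustration, and the full FIS additionally uses the other output levels and centre-of-gravity defuzzification:

import numpy as np

def mu_high(sd, lo=0.02, hi=0.05):
    """Invented 'high' membership ramp for an SD criterion value."""
    return float(np.clip((sd - lo) / (hi - lo), 0.0, 1.0))

def rcf_sd(sd_a, sd_c):
    """Rule (5): IF SD_A = high AND SD_C = high THEN RCF_SD = critical.
    The AND is evaluated as a minimum; the fulfilment degree serves here
    directly as the activation of the 'critical' output level."""
    return min(mu_high(sd_a), mu_high(sd_c))

print(rcf_sd(0.06, 0.05))   # -> 1.0: both rotary axes clearly 'high'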
4.2. RESULTS
The inference network has been applied to a test
workpiece (figure-5) that was machined on a five-
axis milling centre of type Chiron Mill 2000 with an
NC of type Siemens 840D SolutionLine. This
scenario has been modelled within VMS, requiring
an extensive modelling effort for the dynamic
behaviour of the utilised machine tool. Moreover,
the NC configuration parameters being active on the
machine's NC have been extracted and transferred
to the virtual controller.
The aim consisted in evaluating the surface
quality after finishing. An end mill having a
diameter of 10 mm and cutting with 0.06 mm feed
per tooth at 2,500 rev/min was used for this
machining operation. The subsequent analysis is based
on (i) the surface of the produced workpiece, (ii)
TCP positions and axis accelerations traced for
comparison and validation during machining, (iii)
lead positions and accelerations of the axes
computed by the NC simulation, and (iv) simulation
values generated by a reduced VMS (structural
machine behaviour and virtual controllers only).
The TCP positions are necessary for correlating
identified surface marks to signal patterns as part of
the visualisation, and accelerations can be used as
inputs for the inference network.
The results for the cut on the third level are
shown in figure-5. In the magnified section on the
left side of the figure, surface marks can be seen
(numbers 1 to 4). The same workpiece is depicted
as a graphical object in the framing CAM system.
Along the tool path, the result from the data
aggregation, i.e. the risk coefficient for assessing
surface quality, is plotted as process signal. Distinct
peaks can be recognised at the very same locations
where surface marks occur. Since the analysis was
based on simulated signals, these machining
deficits could be predicted prior to actual machining
for the examined finishing operation.
4.3. FURTHER APPLICATIONS
The presented network has been used for assessing
surface quality. However, different objectives may
prevail in operations planning. Given the setup from

Figure 5 - Surface marks related to computed risk coefficients


Section 4.1, a typical analysis can also focus on
productivity. In this case, the introduction of a
potential coefficient, again with values in the
interval [0, 1], is apt to express the untapped
potential of a specific machining process
adequately. Each axis must be examined with
respect to the deviations between planned and
reached velocity at different locations along the tool
path. Since the workpiece geometry and the tool
orientation affect the dynamics of a movement, the
assessment process must take these boundary
conditions into account. For example, the latter can
be included as parameters in the definition of
membership functions that are part of an FIS
employed for aggregating potential coefficients of
the axes. Finally, the CAM planner obtains
aggregated information on the degree of
productivity at specific locations that are related to
NC lines, and feed rate optimisation can be carried
out.
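As an illustration only, such a potential coefficient could be formed from the relative velocity shortfall; this simple ratio is an assumption for the sketch, whereas the actual assessment additionally weighs workpiece geometry and tool orientation via an FIS:

import numpy as np

def potential_coefficient(v_planned, v_reached):
    """Illustrative potential coefficient in [0, 1]: relative deviation
    between planned and reached path velocity at each location."""
    v_planned = np.asarray(v_planned, dtype=float)
    v_reached = np.asarray(v_reached, dtype=float)
    return np.clip(1.0 - v_reached / v_planned, 0.0, 1.0)

print(potential_coefficient([5000.0, 5000.0], [4800.0, 2500.0]))
# -> [0.04 0.5]: the second location leaves half the feed unexploited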
The approach could also be adopted in an NC-
integrated system for online compensation by
combining assessment algorithms with real-time
adaptation of feed rates. Despite this principal
applicability, other techniques allowing faster
calculations or directly including optimisation are
probably more suitable.
5. CONCLUSIONS
In this paper, an approach for assessing quality of
CAM-NC planning by analysing the resulting tool
paths with respect to different characteristics has
been proposed. It could be shown that an
aggregation based on deterministic signal
transformations and fuzzy inference can prove to be
useful for predicting machining deficits prior to
production, as long as adequate input data, for
example provided by integrative simulations, is
available.
However, there are open issues. For example, the
current knowledge base has to be extended in order
to enable a more general assessment. Moreover,
influences determining membership functions must
be investigated. Further enhancements of virtual
machine tools are also needed for reducing the
dilemma between computational effort and
reliability of results. These topics are part of
ongoing research at WZL.
6. ACKNOWLEDGEMENTS
The authors sincerely thank their colleagues
Gustavo F. Cabral, Marcel Fey and Mirco Vitr for
their contributions to the presented research in the
domains of cutting force prediction, machine
simulation and CAM visualisation.
The depicted research has been funded by the
German Research Foundation DFG as part of the
Cluster of Excellence "Integrative Production
Technology for High-Wage Countries".
REFERENCES
Aamodt A, Nygård M, Different roles and mutual
dependencies of data, information, and knowledge -
An AI perspective on their integration, Data and
Knowledge Engineering, Vol. 16, No. 3, 1995,
pp. 191-222
Altan T, Lilli B, Yen Y C, Manufacturing of Dies and
Molds, CIRP Annals Manufacturing Technology,
Vol. 50, No. 2, 2001, pp. 404-422
Altintas Y, Spence A and Tlusty J, End Milling Force
Algorithm for CAD Systems, CIRP Annals
Manufacturing Technology, Vol. 40, No. 2, 1991,
pp. 31-34
Brecher C, Lohse W and Vitr M, CAx Framework for
Planning Five-Axis Milling Processes, Proceedings
of the 6th CIRP-Sponsored International Conference
on Digital Enterprise Technology, Hong Kong, Hong
Kong, 14-16 December 2009, pp. 419-432
Brecher C, Lohse W and Vitr M, CAM-NC Planning
with Real and Virtual Process Data, Proceedings of
the 17th IEEE International Conference on Concurrent
Enterprising, Aachen, Germany, 20-22 June 2011,
pp. 478-485
Brecher C, Malk A, Servos M and Vitr M,
Automatisierte Auswahl von Fräswerkzeugen, wt
Werkstattstechnik online, Vol. 101, No. 5, 2011,
pp. 298-302
Carpenter I D and Maropoulos P G, A flexible tool
selection decision support system for milling
operations, Journal of Material Processing
Technology, Vol. 107, 2000, pp. 143-152
Erdim H, Lazoglu I and Ozturk B, Feedrate scheduling
strategies for free-form surfaces, International
Journal of Machine Tools and Manufacture, Vol. 46,
2006, pp. 747-757
Hoffmann F, Optimierung der dynamischen Bahn-
genauigkeit von Werkzeugmaschinen mit der Mehr-
körpersimulation, PhD. Dissertation, RWTH Aachen
University, 2008
Kahlert J and Frank H, Fuzzy Logik und Fuzzy
Control, 2nd Edition, Vieweg, Braunschweig,
Germany, 1994
Klir, G J and Yuan B, Fuzzy Sets and Fuzzy Logic
Theory and Applications, Prentice Hall, Upper
Saddle River, USA, 1995
Klocke F, Bergs T, Meinecke M, Kords C, Minoufekr M,
Witty M and Glasmacher L, Model Based
Optimization of Trochoidal Roughing of Titanium,
Proceedings of the 11th Conference on Modeling of
Machining Operations, 16-18 September,
Gaithersburg, USA, 2008


Korosec M, Balic J and Kopac J, Neural network based
manufacturability evaluation of free form machining,
International Journal of Machine Tools and
Manufacture, Vol. 45, 2005, pp. 13-20
Kief H B, CNC-Handbuch 2009/2010, Hanser,
Munich, Germany, 2009
Mamdani E H and Assilian S, An experiment in
linguistic synthesis with a fuzzy logic controller,
International Journal of Man-machine Studies, Vol. 7,
1975, pp. 1-13
MoldMaking Technology, AMBA Survey Reveals
Hiring Issues, www.moldmakingtechnology.com,
2011, Retrieved: 12 June 2011
Schuh G, Hütte Produktion und Management,
Springer, Berlin, Germany, 2010
Schuh G, Boos W, Breme M, Hinsel C, Johann H,
Schoof U, Stoffel K, Kuhlmann K and Rittstieg M,
Synchronisierung im industriellen Werkzeugbau,
Proceedings of the Aachener Werkzeugmaschinen-
kolloquium, Aachen, Germany, 26-27 May 2011,
pp. 373-404
Tansel I N, Ozcelik B, Bao W Y, Chen P, Rincon D,
Yang S Y and Yenilmez A, Selection of optimal
cutting conditions by using GONNS, International
Journal of Machine Tools and Manufacture, Vol. 46,
2006, pp. 26-35
Tansel I N, Wang X, Chen P, Yenilmez A, Ozcelik B,
Transformations in machining. Part 2. Evaluation of
machining quality and detection of chatter in turning
by using s-transformation, International Journal of
Machine Tools and Manufacture, Vol. 46, 2006,
pp. 43-50
Teti R and Kumara S R T, Intelligent Computing
Methods for Manufacturing Systems, Annals of the
CIRP, Vol. 46, No. 2, 1997, pp. 629-652
Volkwein G, Konzept zur effizienten Bereitstellung von
Steuerungsfunktionalität für die NC-Simulation,
PhD. Dissertation, Technical University of Munich,
2006
Wang J, Kuriyagawa T, Wei X P and Guo D M,
Optimization of cutting conditions for single pass
turning operations using a deterministic approach,
International Journal of Machine Tools and
Manufacture, Vol. 42, 2002, pp. 1023-1033

IMPLEMENTATION OF KINEMATIC MECHANISM DATA EXCHANGE
BASED ON STEP
Yujiang Li
KTH Royal Institute of
Technology,
Production Engineering
yujiang.li@iip.kth.se
Mikael Hedlind
KTH Royal Institute of
Technology,
Production Engineering
mikael.hedlind@iip.kth.se
Torsten Kjellberg
KTH Royal Institute of
Technology,
Production Engineering
torsten.kjellberg@iip.kth.se

ABSTRACT
In this paper, the first known valid implementation of kinematic mechanism data exchange based on STEP
(ISO 10303, STandard for the Exchange of Product data) is presented. The result includes a general
conceptual framework and two developed prototype applications. The framework is designed for
integration of the STEP-based kinematic mechanism modeling with existing commercial CAx
systems. The two applications are implemented for kinematic data exchange between Siemens NX
and STEP-NC Machine via STEP AP214 (ISO 10303-214) files. Experiences of design and
development of the applications are presented in this paper, and a valid example of data exchange
using the developed applications is shown. As the first valid STEP implementation on kinematics, it
demonstrates the feasibility of STEP-based data exchange for kinematic mechanism. The research
result can also motivate deeper understanding and wider application of the STEP standard in the
field of digital factory.
KEYWORDS
Kinematics, CAD/CAM, STEP, modeling
1. INTRODUCTION
Numerous commercial CAx software systems have
been developed and applied in different fields of
digital factory. Meanwhile, diverse partnerships
have been built between IT software vendors,
industrial practitioners, and academic researchers.
As a result, people face the problem of how to
translate data formats among the diverse software
systems used by different partners. Therefore, the
demand for a system neutral solution of product data
exchange comes up in many perspectives: geometry,
kinematics, tolerances, classification and so on.
Kinematic mechanism is one of the most
important aspects in the field of industrial product
data exchange and sharing. The basic conceptual
technique to represent kinematic mechanisms in
CAD is common among the majority of applications: links
and joints are combined to describe topology and
geometry. Different types of motion constraints
have been defined, e.g. revolute, translational,
cylindrical, etc. The need arises very often for the
kinematic mechanism data exchange between
miscellaneous information systems. Meanwhile, as a
well-known standard for product data exchange,
STEP addresses a solution for this demand with a
particular integrated application resource, p105 (ISO
10303-105). However, applications for kinematic
data exchange, based on a standard, are very rare.
The widely-used STEP application protocol
AP214 (ISO 10303-214), integrating p105, offers a
standardized data model schema for integration of
the kinematics with geometry and assembly models.
Several research projects, as mentioned in the next
section, have tried to implement the kinematic
model of AP214 as subtasks. But until now, no
known valid implementation for STEP-based data
exchange of kinematic mechanisms has been created.
STEP, as a system-independent neutral standard
for product data exchange, is introduced to
provide a means for the representation and
unambiguous exchange of computer-interpretable


product information (stated in ISO 10303-1:1994).
With the development of the global market, companies
collaborate in different forms, e.g. virtual enterprise,
supply chain, or extended enterprise (Chryssolouris
et al 2008). In this context, it is common that
organizations in different locations using different
software systems have to work together. Therefore,
designers have to face the complex problem of how
to seamlessly exchange and share data with
each other in a highly collaborative environment.
Besides, designers and producers should exchange
not only the product geometry data, but also
information about processes and resources. Thus,
standardized data exchange is often an intuitive
solution. It is the comprehensive structure of the
STEP standard and the imperative need for a neutral
product data format that make almost all major CAx
software vendors support it more or less, especially
in the representation of 3D geometry.
However, applications with support for
kinematic mechanism data are hardly implemented
in industry. At present, most relevant companies
have to use slides, fax, telephone, or paper-based
documents to describe and exchange kinematic data.
Skilled CAx operators' manual re-input is usually
the only option to bridge the gap between different
data formats. This situation often leads to a huge
waste of resources.
This research intends to provide a feasible
framework to automate and standardize the
exchange of kinematic mechanism information in
an efficient way. The first known valid STEP-
based implementation for kinematic mechanism will
be presented with experiences from design and
development processes. This will demonstrate how
to use STEP models to facilitate the integrated data
exchange with kinematics, geometry, and assembly.
2. LITERATURE REVIEW
STEP p105 is a member of the integrated
application resources of STEP standard. It is a data
model providing support for kinematic mechanism
exchange and sharing for computer-aided kinematic
design and analysis systems. This part of STEP
standard was published in 1996 and then two
technical corrigenda were published in 2000. At
present, the second edition is under development
and the first draft of its usage within an AP is
planned for 2011 by ISO.
The major features presented in p105 are
structure, motion, and analysis related to kinematic
mechanism. Typically, the kinematic structure is
composed of links, joints, and pairs. As described in
the standard document, links topologically represent
the rigid parts in kinematic representation, pairs
define the geometric aspect for the kinematic motion
constraints and joints define the topology aspect of
kinematic structure. These concepts are applied
during the development in this research.
So far, there has been no known implementation
of valid p105 modeling. Almost all of the
literature found regards it only as a conceptual model
rather than an implementable integrated application
resource. An important reason for this gap is
that there has been no guide or example on
kinematics in any document of the STEP standard.
Therefore the application of STEP-based kinematic
modeling has only a brief history, as shown in
Figure-1.


Figure 1 - Milestones of kinematic modeling using STEP
In the beginning of the ESPRIT Projects
2614/5109 NIRO (Neutral Interfaces for Robotics),
the project partners developed a proposal for
kinematics in STEP. This proposal was accepted by
ISO as a basis for further STEP integration (Bey et
al, 1994).
Two years later, the ESPRIT Project 6457
InterRob (Interoperability of Standards for Robotics
in CIME) published the Specification of a STEP
Based Reference Model for Exchange of Robotics
Models. The aim of this specification is very
similar to application protocols of STEP: accurate
exchange of manufacturing data between different
systems. And its kinematic schemas were developed
mostly based on ISO/DIS 10303-105:1994
(Haenisch et al, 1996). Nevertheless, there has been
no known application that uses this
specification since it was published.
Research on machine control software in the
context of industrial economics is an early attempt to
involve the kinematic mechanism based on p105
(Birla and Kang, 1995). The STEP-based kinematic
model is defined for the members of machining
processes, in terms of fixtures, workpieces, and
tools. But the result does not fully conform to the
STEP standard.
An expandable conceptual model for assembly
information was proposed in a project held by
National Institute of Standard and Technology
(NIST) in USA, named Open Assembly Model
(OAM) (Sudarsan et al., 2005, Rachuri et al., 2006).
This model focuses on representations of geometry,
kinematics, and tolerances, and it is claimed to use
STEP as the underlying data structure. But it only
adopts the concepts defined in p105, rather than the
actual data model defined within its schema.


A small subset of p105 is used as a key part in a
semantic-based machine tool modeling approach for
5-axis machining applications by Tanaka et al.
(2008). The research extends the subset in its
machine tool kinematic model. It is developed as a
prototype system for ISO 14649 CNC data model.
An important attempt to implement kinematics of
the STEP standard is the IDA-STEP project
(Integrating Distributed Applications on the Basis of
STEP Data Models, LKSoft, 2004). An outcome of
the IDA-STEP project is a software prototype that
can access, view, and edit STEP data which can be
stored in a STEP database for internet-based
exchange and sharing between multiple devices.
This project succeeded in developing an early
prototype of a kinematic editor, and the resulting STEP
file has a relatively complete description of the
kinematic structure: joints, links, a limited number
of pair types, and range values, although slight
errors make it invalid with respect to the standard. A VRML file
can also be produced and viewed in a web browser.
According to the report, this is the first
implementation attempt based on p105 since it was
published.
Another implementation related to p105 that
needs to be mentioned is the SKM (Space
Kinematic Model) module within the ongoing
STEP-TAS (Thermal Analysis for Space) project
which aims to build the thermal network and test
environment for space missions (European Space
Agency, 2007). The SKM module utilizes the
kinematic structure of AP214 ARM (Application
Reference Model) to describe the motion constraints
of rigid bodies. But it is the AIM (Application
Interpreted Model) schema that is intended for
implementation and computer interpretation, rather
than the ARM.
3. RESEARCH APPROACH
Two integrations are in focus during the
development: integrating a kinematic mechanism
with an existing geometric model in a STEP AP214 file,
and integrating kinematic data translation with
existing commercial CAx systems.
At first, this research has to face the challenge of
exploring a way to merge a kinematic mechanism
representation into an existing STEP AP214 file that
includes a 3D geometry model. To guide this part of
the research, an AP214 valid kinematic model
developed by Hedlind et al. (2010) is used.
The research also focuses on the integration
between STEP translators and existing commercial
CAx systems. The STEP model, regarding the
representations of geometric aspects, has been
supported in many CAD systems, such as Siemens
NX, Autodesk CAD, Pro/Engineer, and CATIA.
Different ways are used in these systems, such as
independent translators, command line, or even
directly opening/saving, so that designers and
researchers can use STEP files to exchange
geometric data of their designs between different
software systems. Hence, there is no need to
develop a translator for geometric data model.
In order to demonstrate using STEP files to
bridge kinematic mechanism between different
systems, Siemens NX and STEP-NC Machine are
selected. The reason why Siemens NX is chosen is
that it provides a relatively open programming
interface, NX Open, which enables the accesses to
most of its functions including kinematic motion
simulation. The choice of the STEP-NC Machine
application was for its ability to simulate tool-paths
described in AP238 (ISO 10303-238) together with
AP214 machine tool geometry models. The machine
tool kinematics is natively defined in XML format.


Figure 2 - Operation design


In this research, it is explored how AP214 can be
utilized to exchange kinematic mechanism to
support machine tool motion analysis and operation
planning. The essential operations can be illustrated
with the work flow diagram shown in Figure-2. The
software developed in this research is named
KIBOS (KTH Implementation Based On STEP). In
this work flow, KIBOS for NX is executed within
the session of Siemens NX 7.5. It acquires a STEP
file, without kinematics, from the Siemens NX
native STEP exporter. Meanwhile, it also collects
kinematic data from the native NX model. Then,
KIBOS for NX merges the kinematic data with the
original STEP file in a data repository, and at last
exports a new STEP AP214 file including both
geometry and kinematics.
KIBOS for NX is designed to be firmly
integrated with Siemens NX. It is able to be
executed within NX and access NX functions. The
aim of this integration is to make the users
operations as simple as possible. This solution can
easily migrate to other CAx systems with the
similar functionality, i.e. a built-in STEP exporter
and an API (Application Program Interface) with
necessary functions.
The conceptual framework, as illustrated in
Figure-3, is applied here. Existing CAx software is
used with limited STEP export functionality.
Together with an external integration application to
get the requested additional information, new
possibilities for STEP-based data exchange are
enabled with less effort compared with developing a
completely new translator. This integration
procedure requires that the chosen CAx software
should have an accessible API or specified data
format to get the requested information and the ability to
export the first STEP file to be extended. These
requirements can be fulfilled by many kinds of CAx
software. Therefore this procedure can be applied in
different contexts when implementing exporter from
CAx to STEP.
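The following runnable Python mock illustrates the procedure with stand-in classes and invented data; it is not the actual KIBOS code, which relies on NX Open and JSDAI:

class MockCax:
    """Stand-in for a CAx system offering a native STEP exporter and an
    API for additional information (here: kinematics)."""
    def export_step(self):
        return {"geometry": ["part_body"], "assembly": ["root"]}
    def query_kinematics(self):
        return {"links": 6, "joints": ["A revolute", "C revolute"]}

def export_with_kinematics(cax):
    """Figure-3 procedure: take the first STEP export and extend it
    with the additionally requested kinematic information."""
    repository = cax.export_step()                     # first STEP file
    repository["kinematics"] = cax.query_kinematics()  # extra info via API
    return repository                                  # extended data set

print(export_with_kinematics(MockCax()))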


Figure 3 - An application integrated with CAx
On the other hand, KIBOS for STEP-NC
Machine is an independent application to read the
STEP file with geometry and kinematics, and to
output an XML file describing kinematic
information. The XML file conforms to the format
defined by STEP-NC Machine. Together with the
STEP file of a certain machine, it can be used to
simulate motion of a machining operation in STEP-
NC Machine.
4. SYSTEM DESIGN
This research includes development of two
applications, KIBOS for NX and KIBOS for STEP-
NC Machine. The two applications are similar in
technical background and conceptual design. In this
section they will be introduced in detail separately.
KIBOS is a STEP implementation based on
AP214 AIM schema. It is developed in the Java
language, because both Siemens NX and the STEP
standard have powerful programming interfaces for
Java. The API for Siemens NX, named NX Open
for Java, enables access to all functions required in
this implementation. Via NX Open, KIBOS for NX
retrieves information from NX native model and
executes STEP translation to produce the STEP file
with geometry and assembly.
ISO 10303-22 SDAI (Standard Data Access
Interface) specifies a programming interface to
access data models based on EXPRESS (ISO
10303-11), the language describing STEP data
model. ISO 10303-27 specifies a Java binding to
SDAI and is implemented in JSDAI, an open source
development package produced by LKSoftWare
GmbH. Applications in this research are developed
with JSDAI.
During system design, the integration of three
layers is focused on: physical data, resources, and
application. From the viewpoint of user demand, KIBOS
is simple software with simple functionality: data
format translation. But it needs to be seamlessly
integrated with other resources, e.g. NX Open,
STEP, and JSDAI. Hence, these resources are
collected in a layer to link the application with
physical data. Physical data is the data physically
stored on the hard disk, mostly used as input and
output. The application layer contains the developed
program with data manipulation logic.
4.1 KIBOS FOR NX
The system architecture of KIBOS for NX is
illustrated in Figure-4. At first, before development
of the application it is necessary to compile the AP214
AIM schema to an SDAI dictionary with JSDAI, so
that necessary Java classes can be imported for
early binding.

KIBOS for NX requires a native NX model with
assembly, geometry and kinematics. Optionally, in
order to make the output STEP file usable
in KIBOS for STEP-NC Machine, the user also
needs to label the faces where the cutting tool and
the workpiece should be placed. In this
implementation, the function of PMI (Product and
Manufacturing Information) note in NX is used to
label the surfaces.
4.2 KIBOS FOR STEP-NC MACHINE
The system architecture of KIBOS for STEP-NC
Machine (see Figure-5) is similar to KIBOS for NX.
The compiled library of AP214 AIM schema is also
required to read and parse the STEP file by the
developed STEP reader. Then, the XML creator
generates the XML file according to the data of
kinematic mechanism and general features. The
XML file is created in a format defined and
recognized by STEP-NC Machine. It stores
kinematic information for a certain machine, e.g.
kinematic chain definitions, axis definitions, axis
placements, and motion ranges. It also includes
placement data for cutting tool and fixture. KIBOS
for STEP-NC Machine will automatically place
both the STEP file and the XML file in the
"machine" folder within the program folder of STEP-
NC Machine.


Figure 4 - System architecture of KIBOS for NX
Figure 5 - System architecture of KIBOS for STEP-NC Machine


5. IMPLEMENTATION
Both KIBOS for NX and KIBOS for STEP-NC
Machine are typical STEP implementations
developed in the same development environment.
JSDAI and the compiled dictionary of AP214 AIM
provide full support for all the operations defined in
SDAI and the data model of AP214 AIM. The GUI
(graphical user interface) of KIBOS is developed
with SWT (Standard Widget Toolkit), which is an
open source library for platform-neutral GUI design
and implementation. All the source code of KIBOS
is written and compiled with Eclipse, which is a
well-known open-source IDE (Integrated
Development Environment).
KIBOS for NX is developed with the focus on
high integration with existing design environment
of Siemens NX. The implementation relies on NX
Open for Java to interact with NX session and NX
Open MenuScript to integrate the user operation.
Thus, the application can be used to export the
needed STEP file in the same way as other built-in
exporters, as shown in Figure-6.


Figure 6 - The integrated menu button
The user interface of KIBOS for NX is shown in
Figure-7. It is tailored to resemble other common data
format translators. The input model is the current
working model in NX, and the file path and name of
the output STEP file can be easily defined in the
textboxes of the user interface, or selected from a
file dialog by clicking the "Browse" button.


Figure 7 - Interface of KIBOS for NX
KIBOS for STEP-NC Machine is an independent
application. The GUI is designed to handle all
necessary configurations, as shown in Figure-8. The
user can set the path of STEP-NC Machine, the
input file, and the output XML file name. The actual
orientation of the axis (Z axis) and the reference
direction (X axis) of the machine can also be
defined in case they are not set to the default
orientation. The kinematic motion solver algorithm
can be specified from a list of predefined variants
defined by STEP-NC Machine. Note that the user
may not want to repeatedly type the same values in the
interface; therefore, a text file is provided to
pre-define the default values of all these
configurations.


Figure 8 - Interface of KIBOS for STEP-NC Machine
6. CASE STUDY
This case study demonstrates and validates the
presented system neutral solution for kinematic
mechanism data exchange.
A CAD model of a DMG 5-axis machine tool is
used in this sample, as shown in Figure-9. Although
it is a simplified model, it still has full capability to
demonstrate the motion of its 5 axes. This sample is
used to perform the following tasks:
1. Creating a kinematic model in NX,
2. Exporting a STEP file with geometry and
kinematics by KIBOS for NX,
3. Importing the information within the STEP file
by KIBOS for STEP-NC Machine,
4. Simulating machining operation with the
machine tool model in STEP-NC Machine.
The first task here is to create a complete
kinematic model of this machine with special
configuration such as axis definition and motion
range. PMI notes are used to label the faces where
the tool and the workpiece should be placed. In the
component-based motion simulation module, six
components/subassemblies are selected as the links
to form the five kinematic joints to represent the
five axes.
Then, using KIBOS for NX and KIBOS for
STEP-NC Machine, the kinematic mechanism of
the machine tool can be imported to STEP-NC
Machine from Siemens NX via a STEP AP214 file.




Figure 9 - CAD model of the DMG machine in NX
The kinematic mechanism is demonstrated through a
motion simulation in STEP-NC Machine. An
AP238 file for machining of an impeller is used as a
sample during this case study. This AP238 file is
downloaded from the sample data set on the official
website of STEP-NC Machine (STEP Tools Inc.,
2011). The machining operations described in this
file include fixture definition, tool paths, cutting
tools, and operation sequence. During the motion
simulation, the five-axis machining can be
simulated and displayed in STEP-NC Machine, as
shown in Figure-10.


Figure 10 - Demonstration in STEP-NC Machine
7. CONCLUSIONS
This paper focuses on the strategy of STEP-based
integration for kinematic mechanism exchange with
existing commercial CAx software systems. The
solution is presented with a general framework for
system neutral integration, and two applications are
developed to implement this framework. The major
features of this solution include:
- Seamless linkage with existing CAx systems,
- Full integration with data exported to STEP using the CAx native exporter,
- Valid standard data model with geometry, assembly, and kinematic mechanism,
- Standardized development environment.
As the first STEP valid implementation for
kinematic modeling, KIBOS validates and
demonstrates the capability of STEP AP214 AIM to
represent and exchange kinematic mechanisms.
The industrial and academic significance of this
research is demonstrated by its results. It
provides industrial practitioners with an
implementable framework for kinematic modeling
exchange between different CAx systems. IT
vendors can benefit by enhancing their
products with STEP-based kinematic modeling or
data exchange in addition to their current support
for standard geometric modeling. Besides, the
developed application can assist academic
researchers to create STEP files with valid
kinematic mechanism. In addition to machine tool
motion simulation (as shown in the case study), this
result can be applied in other perspectives of digital
factory, e.g. process planning, machine investment
management, factory layout design, manufacturing
configuration, ergonomics and so on.
8. ACKNOWLEDGMENTS
We are grateful for the support from Scania,
Sandvik Coromant, Volvo, VINNOVA, XPRES
(Initiative for excellence in production research),
and Siemens PLM Software, and for fruitful
discussions with members of ISO TC184 SC4 WG3
T24.
REFERENCES
Bey, I., Ball, D., Bruhm, H., Clausen, T., Jakob, W.,
Knudsen, O., Schlechtendahl, E. G., and Sørensen, T.,
"Neutral Interfaces in Design, Simulation, and
Programming for Robotics", 1st Edition, Springer-
Verlag, Berlin, 1994, p 18, ISBN 3-540-57531-6
Birla, S. and Kang, S., "Software engineering of machine
control systems: an approach to lifecycle economics",
Robotics and Automation 1995 Proceedings IEEE
International Conference, 1995, pp 1086-1092, ISBN
0-7803-1965-6

159

Chryssolouris, G., Makris, S., Mourtzis, D., and
Papakostas, N., "Knowledge Management in a Virtual
Enterprise - Web Based Systems for Electronic
manufacturing", Methods and Tools for Effective
Knowledge Life-Cycle-Management, 1st Edition, Part
1, Springer-Verlag, Berlin Heidelberg, 2009, pp 107-
126, DOI 10.1007/978-3-540-78431-9_6
European Space Agency, "thermal control STEP-TAS
SKM module", ESA, 2007, Retrieved: 08 01 2011,
<http://www.esa.int/TEC/Thermal_control/SEMUCO
S0LYE_0.html>
Haenisch, J., Kroszynski, U., Ludwig, A., Sørensen, T.,
"Specification of a STEP Based Reference Model for
Exchange of Robotics Models: Geometry, Kinematics,
Dynamics, Control, and Robotics Specific Data", 1st
Edition, Forschungszentrum Karlsruhe, Karlsruhe,
1996, p 6 - 1, ISSN 0948-1427
Hedlind, M., Lundgren, M., Archenti, A., Kjellberg, T.
and Nicolescu, C. M., "Manufacturing resource
modeling for model driven operation planning", CIRP
2nd International Conference on Process Machine
Interactions, 2010, ISBN 978-0-9866331-0-2
LKSoft, "Integrating Distributed Applications on the
Basis of STEP Data Models, Final Report",
1st Edition, LKSoftWare GmbH, Germany, 2004, p 26
Rachuri, S., Han, Y.-H., Foufou, S., Feng, S. C., Roy, U.,
Wang, F., Sriram, R. D. and Lyons, K. W., "A Model
for Capturing Product Assembly Information",
Journal of Computing and Information Science in
Engineering, vol. 6, No. 1, 2006, pp 11-21, DOI
10.1115/1.2164451
STEP Tools Inc., "STEP-NC Sample Data: Impeller
Part", STEP Tools Inc., 2011, Retrieved: 27 05 2011,
<http://www.steptools.com/products/stepncmachine/sa
mples/impeller/>
Sudarsan, R., Fenves, S. J., Sriram, R. D. and Wang, F.,
"A product information modeling framework for
product lifecycle management", Computer-Aided
Design, vol. 37, No. 13, 2005, pp 1399-1411, DOI
10.1016/j.cad.2005.02.010
Tanaka, F., Onosato, M., Kishinami, T., Akama, K.,
Yamada, M., Kondo, T. and Mitsui, S., "Modeling and
implementation of Digital Semantic Machining
Models for 5-axis machining application",
Manufacturing Systems and Technologies for the New
Frontier, 2008, pp 177-182, DOI
10.1007/978-1-84800-267-8_36


A FRAMEWORK OF AN ENERGY-INFORMED MACHINING SYSTEM
Tao Peng
Department of Mechanical Engineering,
University of Auckland, New Zealand
Email: tpen024@aucklanduni.ac.nz
Xun Xu
Department of Mechanical Engineering,
University of Auckland, New Zealand
Email: xun.xu@auckland.ac.nz
ABSTRACT
Sustainable manufacturing is regarded as an essential criterion for advancing the competitiveness of
manufacturing enterprises. Energy consumption, or energy efficiency, of a manufacturing system is
one of the key sustainable performance indicators. Monitoring of a manufacturing system requires
the inclusion of energy information, e.g. total consumption recording and detailed energy-flow
tracking in the entire system. Analysis of the gathered energy information in connection with other
applications, such as machining parameter optimization, is a necessary step towards energy-
informed manufacturing. Based on a literature review, a framework of an Energy-informed
Machining System (EiMS) is proposed. Machining processes, such as milling, turning, drilling and
grinding, are the scope of this research. With a true picture of energy consumption, a more efficient,
competitive and environmentally conscious production can be achieved. In order to integrate energy
information into the CAx chain, STEP/STEP-NC standards are used as the data representation and
exchange protocol, giving the manufacturing system an intelligent and interoperable nature.
KEYWORDS
Energy consumption, Sustainable machining, STEP/STEP-NC, Optimization

1. INTRODUCTION
Manufacturing, as a main sector of value creation, has played an important role in most industrialized countries for a long time. Three goals, i.e. time, cost and quality, are widely accepted and pursued by enterprises. Finding the optimal trade-offs among these goals attracts a great amount of research and development activity in this area. Moving into the 21st century, environmental issues and crises are looming large, which has forced manufacturers to take sustainability into account, if not to shift their focus from profit to environmental awareness. Companies have started to realize that sustainable manufacturing is not just law-abiding behaviour, but a great opportunity to increase their competitiveness in the global market (Romvall et al. 2010). Thus, the three goals need to be reconsidered and harmonized under the perspective of sustainability.
These days, more than 70% of manufacturing businesses in the UK and the US rely on CNC (Computerized Numerical Control) machines (Swamidas and Winch 2003). In most developing countries, such as China, the use of CNC machines is increasing rapidly. While taking advantage of the high efficiency, accuracy and productivity brought by CNC machines, effective and continuous monitoring and control of the automatic machines is also demanded to ensure safety and stability.
CNC machining, as one of the fundamental manufacturing technologies, consumes a significant amount of energy. To understand machining, which is usually regarded as a material removal process, under sustainability scenarios, it is necessary to analyze the system from the energy point of view. In this research, the machining process is reconsidered as an energy utilization process. Conventional machining operations, such as turning, milling, boring, grinding etc., are considered. In Section 2, the literature in the sustainable manufacturing area is reviewed, based on which a framework of an energy-informed machining system is presented in Section 3. Energy-informed machining parameters optimization is stated in Section 4 as an example of integrating energy data. A STEP-NC based data model is then proposed for machining parameters optimization in Section 5. In Section 6, discussion and future work are presented.
2. LITERATURE REVIEW
2.1. ENERGY CONSUMPTION IN MACHINING
Having a clear and comprehensive view of the energy flow within a production process is the basis for performing any energy efficiency research. In a machining process, different types of machine tools and auxiliary devices are the basic functional elements. Industrial motors, as a major energy source, have received remarkable research attention; a total energy breakdown is presented in a review by R. Saidur (2010). More than one component, such as the motor, pump etc., consumes energy (Figure-1). Only 15% of the total energy is consumed by the actual cutting component (Gutowski et al. 2005). More than 55% of the total energy is consumed in coolant and oil supply. An early work by Byrne and Scholta (1993) pointed out that the cutting fluids were also the main source of pollution, and it was evident that the conventional approach to developing manufacturing processes seriously limited the sustainability improvement.
However, reducing energy consumption is not simply a matter of reducing the use of cutting fluids, because the reduction of cutting fluids will result in more energy consumption during machining (Weinert et al. 2004), and will induce tool degradation and thermal transformation (Popke et al. 1999).
[Figure 1: Energy use breakdown for machining (Gutowski et al. 2005): machining 14.8%, centrifuge 10.8%, coolant 31.8%, oil pressure pump 24.4%, cooler/mist collector etc. 15.2%; constant share 85.2%, plotted against the number of vehicles produced]
Besides, from another point of view, research on energy consumption in different machining states found that the actual cutting process only takes a small percentage of the total energy. Figure-2 shows that only 25% of the input power is consumed by actual processing, while 45% is consumed in the idle state or losses (Dietmair and Verl 2008; 2009). Anderberg et al. (2010) estimated the ratio in a cost manner, indicating a setup cost of 31.3%, an idle cost of 26.1% and a direct machining cost of 38.7%.
[Figure 2: A Sankey diagram illustrating the distribution of the power consumed into losses and effective power for a machine tool (Dietmair and Verl 2008)]
In spite of the difference in numbers, it is clearly revealed that energy-saving activities can be directed towards the machine tool to minimize idle energy consumption and to introduce more energy efficient systems (Anderberg et al. 2010). Energy information in machining processes needs to be thoroughly monitored and analyzed. In this endeavor, it is important to first conduct comprehensive research on energy consumption in machining (Vijayaraghavan and Dornfeld 2010).
2.2. ENERGY CONSUMPTION MODEL
To understand the energy usage in the machining system, an energy consumption model of the machining processes is indispensable. One study conducted by Munoz and Sheng (1995) examined operating parameters, such as depth of cut, speed, feed-rate and tool rake angle, in relation to environmental factors. It is regarded as the fundamental work for environmentally conscious manufacturing. Another study by Draganescu et al. (2003) presented the relationship between parameters, such as cutting force, torque, spindle speed, feed-rate and depth of cut, and the specific consumed energy, using Response Surface Methodology based on experimental data.
A generic model was proposed by Dietmair and Verl to model the energy consumption behaviour of machines and components based on a statistical discrete event formulation. Using the modelling framework, decisions can be made by predicting the energy consumption of different configurations (Dietmair and Verl 2008; 2009). By energy analysis of four milling machines, a system-level
environmental analysis is presented by Dahmus and
Gutowski (2004). Cutting fluid preparation,
machine tool construction, tool preparation, and
cleaning are included in the analysis. Gutowski et al.
proposed a thermodynamic framework to
characterize the material and energy resources used
in manufacturing processes, which is the first step in
proposing and/or redesigning more efficient
processes (Gutowski et al. 2007; 2009).
Nevertheless, only limited research in this area can
be found. An accurate and complete energy model
which solves the problem is still lacking. A
mathematical model, independent of specific
machine tools or settings and based instead on the
physical or theoretical energy model of the machine
tool, would provide a powerful tool for reducing
energy consumption (Newman et al. 2010).
2.3. ENVIRONMENTALLY CONSCIOUS
MACHINING
Environmentally conscious machining is another
area that attracts research efforts worldwide.
Studies of the environmental impact of different
manufacturing processes, e.g. the grind-hardening
process (Salonitis et al. 2006) and the joining
process (Pandremenos et al. 2010), have been
reported, which provide results on reducing
environmental impact in production cases. In this
paper, minimizing energy consumption is the main
concern. Although it should be integrated into the
system, energy data is currently not regarded as an
integral part of processes such as the parameters
optimization process (Gupta 2005).
Sheng et al. (1995) stated that in order to fully
evaluate the trade-offs among different alternatives,
a set of quantifiable dimensions, such as energy
consumption, production rate, mass flow of waste
streams and quality parameters, needs to be analyzed
at the planning stage. A model-based approach to
process planning is proposed for a rapid, robust
estimation of energy and mass flows. Then, they
presented the feature-based two-phase planning
scheme, micro and macro-planning in detail. In a
micro-planning process, predictive models are used
to obtain process energy, machining time, mass of
waste streams and quality parameters (Srinivasan
and Sheng 1999). In macro-planning, the
interrelationship between features is examined to
generate the overall sequence. The micro and macro-
planning schemes are also integrated into an analytical
platform (Srinivasan and Sheng 1999). A prototype
process planner is designed based on the above
scheme to interact with a conventional planner as an
advisory environmental agent (Krishnan and Sheng
2000).
A multi-objective nonlinear programming model
for environmentally conscious process planning is
proposed by Jin et al. aiming at minimizing
machining cost, time and environmental impact (Jin
et al. 2009). Another model and methodology is
developed by Rajemi et al. for optimizing tool-life
and energy of a turning process, while considering
the energy budget (Rajemi et al. 2010).
Mouzon et al. (2007) proposed operational
methods for minimizing energy consumption and
total completion time by a multi-objective
programming model. Through experiments, the
dispatching rules provided an effective means to
accomplish the optimization task.
Cannata et al. (2009) presented a procedure
which effectively supports energy efficiency and/or
environmental impact analysis, management, and
control. An example application of the procedure,
with three scheduling scenarios as a case study,
served as an initial proof of concept.
Mani et al. (2009) re-emphasized the importance
of energy monitoring, discussed the potential use of
energy information, and recommended research on
energy-efficient process planning.
Avram et al. (2010) developed a unified
multi-criteria decision methodology for
sustainability assessment of the use phase of machine
tool systems. Economic, technical and
environmental criteria are considered together, and a
two-level analytic process is presented. Following
that, they proposed a methodology to estimate the
total mechanical energy requirements of the spindle
and feed axes by taking into account transient and
steady-state phases. The energy profile is obtained by
careful monitoring of the machine tool and auxiliary
equipment (Avram and Xirouchakis 2011).
Regarding online monitoring, current research
applies some of the energy-related signals for various
monitoring purposes. Spindle motor current (Lee
and Tarng 1999), cutting force (Huang and Chen
2000), energy per tooth analysis (Amer et al. 2005),
electrical power (Al-Sulaiman et al. 2005), and
audible energy sound (Rubio and Teti 2009) are
used for tool condition monitoring, including tool
breakage, tool wear and cutting parameters.
On the whole, it is evident that on the way
towards sustainable machining, energy consumption
should be considered systematically and concretely.
The current literature clearly suggests that
improvement in one aspect of the machining process
is not enough to achieve overall sustainability. In
this paper, a framework of an energy-informed
machining system is proposed to bridge the gaps.
3. FRAMEWORK OF ENERGY-
INFORMED MACHINING SYSTEM (EIMS)
3.1. OVERVIEW OF THE ENERGY FLOW


To develop an energy-informed machining system, it is important to have a clear picture of the energy flow within the entire system. The energy flow is first studied in detail, and critical energy components of machining are structured in three levels. This energy flow analysis provides an insight into the machining process, and knowledge on how energy is consumed in each part of the different levels. The energy input to the system is composed mainly of electricity. Then, it is utilized by machine tools, auxiliary devices and infrastructures to fabricate the final products.
Figure-3 depicts an overview of the energy flow. The top-to-bottom arrows represent the actual energy flow in the system, which is not directly visible. The bottom-up arrows describe the energy data that are collected in each part.
[Figure 3: Overview of the energy flow in a three-level structure]
The bottom level is named the process level, where data are obtained in five basic parts, i.e. machine tool, tooling, material supply, auxiliary devices, and energy losses. During machining, on one hand, various sensors extract online data from both the machine tool and auxiliary devices, e.g. cutting force and vibration, while on the other hand, programmed machining parameters are imported as offline data, such as cutting speed, feed-rate, depth of cut and tool properties. Energy consumption for raw material, coolant, and lubrication supply is obtained as well for overall analysis.
At the shop floor level, the problems lie in task scheduling. Three primary states of equipment are identified, namely machining, start/stop, and idle, which correspondingly represent direct, indirect, and non-related energy usage. Moreover, the energy consumption of a machine tool is considered with inclusion of machining time, start-up time, and idle time. At this level, systematic energy consumption optimization is looked into, and an intelligent scheduling algorithm is adopted to minimize indirect energy consumption.
In order to provide a comprehensive view of the energy consumption in an organization, the enterprise level will cover all the departments and infrastructure, such as lighting and air-conditioning. But detailed discussion at the enterprise level is beyond the scope of this paper.
3.2. FRAMEWORK OF EIMS
In a machining process, energy consumption can be expressed as,

E_total = E_mc + E_idle + E_loss    (1)

where E_total is the total energy input to the system, E_mc the energy consumed in machining, E_idle the energy consumed in the idling or waiting state, and E_loss the energy losses in the system.
During machining, energy usage is expressed as,

E_mc = E_ac + E_other    (2)

where E_ac is the energy consumption of actual cutting, and E_other the energy consumed in other components that support cutting, e.g. the coolant pump.
Based on the model of specific energy consumption presented by Draganescu et al. (2003), equation (2) is extended as,

E_ac = (D · F_ct · V) / (2 · C_coef · v_f · a_p · Z)    (3)

where F_ct is the cutting force, v_f the feed-rate, a_p the depth of cut, v_c the cutting speed, a_e the width of cut, Z the number of teeth, D the cutter diameter, η the machine efficiency, and V the material removed volume.
Energy consumption in the idle state and in different wastes is represented as,

E_idle = (t_tc + t_w) · E_ip    (4)

E_loss = E_heat + E_vib    (5)

where t_tc is the tool change time, t_w the time waiting for the next operation, E_ip the specific idle energy consumption per unit time, E_heat the energy transmitted into heat, and E_vib the energy transmitted into vibration.
In practice, signals such as cutting force and pump power can be selected as indicators for E_mc. Information such as specific idle energy, start/stop energy and idle time should be used to calculate the E_idle energy usage. Lastly, the vibration signal or work-piece temperature can be used to ensure proper control of E_loss.
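As an illustration of how the terms in equations (1)-(5) combine, a minimal Python sketch is given below. All numeric values are hypothetical placeholders, not measured data; the paper prescribes only the equations, not this implementation.

# Minimal sketch of the energy bookkeeping in equations (1)-(5).
def e_mc(e_ac, e_other):                  # equation (2)
    return e_ac + e_other

def e_idle(t_tc, t_w, e_ip):              # equation (4)
    return (t_tc + t_w) * e_ip

def e_loss(e_heat, e_vib):                # equation (5)
    return e_heat + e_vib

def e_total(mc, idle, loss):              # equation (1)
    return mc + idle + loss

E_mc = e_mc(e_ac=12.5, e_other=30.0)              # kJ, hypothetical
E_idle = e_idle(t_tc=40.0, t_w=120.0, e_ip=0.05)  # s, s, kJ/s
E_loss = e_loss(e_heat=6.0, e_vib=1.5)            # kJ
print(e_total(E_mc, E_idle, E_loss))              # -> 58.0 kJ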
A framework of an Energy-informed Machining
System (EiMS) is developed for conventional
material removal machining processes, e.g. milling,
turning, drilling etc. (Figure-4). The energy terms listed in
equation (1) are covered in the EiMS. Basically, there
are four modules in the framework, i.e. data
collection, monitoring, analysis, and database,
which are explained in the following sections.
3.2.1. Energy data during machining
Integration of different sensors for monitoring has
become a technology with a major impact on
machining (Zhou et al. 1995). In this data collection
module, energy-related data is obtained by various
machine tool sensors, e.g. cutting force sensors,
vibration sensors etc., and the duration of each
machining state is measured by controllers. These
data are appropriately pre-conditioned and
formatted, then sent to the monitoring module locally,
or remotely via web-based exchange standards,
e.g. MTConnect (2008). MTConnect is an open,
lightweight, royalty-free standard to facilitate the
organized retrieval of process information from
CNC machines. It is designed to foster greater
interoperability between shop floor equipment and
software applications used for monitoring and data
analysis (MTConnect 2008). As a matter of fact,
intensive research activities imply the vast potential
of adopting standards to support sustainable
machining (Lee et al. 2010; Mani et al. 2010), and
efficient and effective means to adopt standards
as enablers are in continuous demand.
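As a sketch of how such data collection might look in practice, the following Python fragment polls the standard "current" HTTP endpoint that MTConnect agents expose and filters a few sample types. The agent URL and the chosen data-item tags are illustrative assumptions, not part of the EiMS specification.

# Poll an MTConnect agent's "current" endpoint (an MTConnectStreams XML
# snapshot) and keep a few energy-related samples. URL and tags are assumed.
import urllib.request
import xml.etree.ElementTree as ET

AGENT_URL = "http://agent.example.com:5000/current"  # hypothetical agent

def read_current_samples(url=AGENT_URL):
    with urllib.request.urlopen(url, timeout=5) as resp:
        root = ET.fromstring(resp.read())
    samples = {}
    for elem in root.iter():
        tag = elem.tag.split("}")[-1]  # strip the XML namespace
        if tag in ("Load", "SpindleSpeed", "PathFeedrate", "PowerFactor"):
            name = elem.get("name") or elem.get("dataItemId") or tag
            samples[name] = (elem.text, elem.get("timestamp"))
    return samples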
3.2.2. Data monitoring
In the monitoring module, after rich data are obtained
from the machining processes, they are organized and
managed in a hierarchical structure (Figure-3).
Three different groups, namely direct, indirect, and
waste machining consumption, are formed to
organize the data. The data obtained from the machine
tool part, e.g. cutting force and pump power, are
organized under direct machining consumption. The
data collected from tooling, materials supply, and
auxiliary devices are regarded as indirect machining
consumption. Heat and chatter emission is
monitored as waste consumption. Hence, data can
be easily accessed and manipulated for different
purposes. Time measurements are recorded to give
time durations, e.g. machining time, idle time, tool
change time, tool usage time etc. Together with tool
life estimation, these data will be used in the
optimization process to minimize non-value-adding
energy consumption, e.g. idling.
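A minimal sketch of this grouping follows, assuming simple Python dictionaries for each group; the paper prescribes the three groups and the time records, not this exact structure or these readings.

# Hierarchical grouping used by the monitoring module (illustrative only).
from dataclasses import dataclass, field

@dataclass
class MonitoringRecord:
    direct: dict = field(default_factory=dict)    # machine tool data, e.g. cutting force, pump power
    indirect: dict = field(default_factory=dict)  # tooling, materials supply, auxiliary devices
    waste: dict = field(default_factory=dict)     # heat and chatter emission
    times: dict = field(default_factory=dict)     # machining, idle, tool change, tool usage ...

record = MonitoringRecord()
record.direct["cutting_force_N"] = 850.0   # hypothetical readings
record.direct["pump_power_kW"] = 1.2
record.indirect["coolant_supply_l_min"] = 6.0
record.waste["vibration_mm_s"] = 4.1
record.times["idle_s"] = 96.0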
3.2.3. Data analysis
Up to now, shop floor data have been successfully
applied to tool condition monitoring (Byrne et al.
1995; Rehorn et al. 2005) and quality control
(Ramesh et al. 2000; 2000). In sustainable
machining, how to effectively use these energy-
related signals is the main job of this module.
In this module, both real-time analysis and
statistical analysis can be carried out. Real-time
analysis focuses on online data processing during
machining and supports real-time control, such as
machining parameters optimization and quality control.
Statistical analysis concentrates on comprehensive
processing and the analysis of historical data. Statistical
results can also provide valuable information to
other applications, e.g. CAD, CAPP, and ERP. In
addition, machining parameter settings, energy
usage prediction, and energy constraints are taken
into account.


To ensure complete data representation and transfer from the analysis module to other applications, a standardized neutral format is an ideal means. The STEP/STEP-NC standards, also known as ISO 10303 (ISO 1994; ISO 1994; ISO 1994) and ISO 14649 (ISO 2003; ISO 2003; ISO 2003), are adopted, which provide a common basis upon which the EiMS can be built. STEP-NC, designed especially for the machining process, can carry complete information about design, planning, and production. The newly developed ISO 14649-201 machine tool data model (ISO 2011) starts to support environmental evaluation. This serves as a good start for further developing an energy-informed machining data model.
3.2.4. Database
As there is rich data related to the machining process, databases need to be constructed to manage the different data in a trouble-free manner, and to provide the machining configuration data to the energy-informed analysis module. It is known that the energy consumption of an operation depends not only on the machine tool, but also on the cutting tool and material properties. Hence, a machine tools database, a cutting tools database, a materials database, and an energy ratings database will be constructed. The energy ratings database is a base proposed in this framework. It is mainly used to embed expert knowledge in the EiMS to evaluate energy consumption performance. With this ratings database, the energy consumption of a machining process can be assessed by its level of efficiency. If the energy usage of a machining setting drops into the low-efficiency zone, knowledge-based suggestions will be given to adjust one or more parameters.
[Figure 4: Energy-informed Machining System (EiMS)]
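As an illustration of the intended use of the energy ratings database, the following Python sketch rates the specific energy of a setting against assumed efficiency bands; the band limits and suggestion texts are assumptions, not values from the framework.

# Rate a machining setting against assumed specific-energy bands.
RATING_BANDS = [            # (upper limit in J/mm^3, rating): assumed bands
    (2.0, "high efficiency"),
    (5.0, "normal"),
    (float("inf"), "low efficiency"),
]

SUGGESTIONS = {"low efficiency": "adjust feed-rate or depth of cut; review idle time"}

def rate_setting(energy_J, removed_volume_mm3):
    specific_energy = energy_J / removed_volume_mm3   # J/mm^3
    for limit, rating in RATING_BANDS:
        if specific_energy <= limit:
            return rating, SUGGESTIONS.get(rating, "keep current parameters")

print(rate_setting(52000.0, 8000.0))  # -> ('low efficiency', 'adjust feed-rate ...')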
4. MACHINING PARAMETERS OPTIMIZATION
Attainment of energy information from the machining process alone is not sufficient to improve systematic sustainable performance. Finding approaches to use these data effectively and efficiently is the key to upgrading performance. Machining parameters optimization, one of the key problems in machining research, is preliminarily exploited in this paper as a means to utilize energy information. Even a small improvement in energy cost can help an enterprise fight the competition in the global market (Gupta 2005). Those enterprises involving intensive machining operations can draw larger benefits from energy-informed parameters optimization in building up the digital factory.
An activity model of the machining parameters optimization is shown in Figure-5. This task is undertaken in the real-time analysis module. We take one of the energy-sensitive signals, the feed-rate, as an example. The planned feed-rate is sent to the machine tool in a STEP-NC file or as machine tool codes. During the machining process, signals from sensors and events from CNC controllers are collected and transferred to the energy monitoring module. After an increase of the energy consumption is detected, each energy component in equation (1) is analyzed; in this example it indicates that more energy is wasted in vibration due to the aggressive feed-rate. Then a multi-criteria energy optimization will be performed without sacrificing productivity or quality. Finally, the optimized feed-rate is fed back to update the actual machining. In this way, the goal of sustainable machining can be reached.
[Figure 5: IDEF0 of energy-informed machining parameters optimization: activities A1 Machining Process, A2-1 Energy Measurement, A2-2 Energy Analysis, A2-3 Energy Optimisation and A3 Energy Knowledge Database; flows include the STEP-NC Part 21 file and machine codes, sensor signals (dynamometer, acoustic emission, accelerometer, thermometer, power sensor), MTConnect data, time vectors, energy references, machine tool capability, production requirements, and the optimised machining parameters (modified Part 21 file)]
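The closed loop just described (detect excess vibration energy, back off the feed-rate, feed the optimised value back) can be sketched in Python as follows; the vibration limit, the 10% reduction step and the monitor response are illustrative assumptions, not values from the paper.

VIB_LIMIT = 5.0  # kJ per operation: hypothetical acceptance limit for E_vib

def optimise_feed_rate(planned_feed, read_vibration_energy, min_feed=0.02):
    # Reduce the feed-rate until the measured vibration energy is acceptable.
    feed = planned_feed
    while feed > min_feed:
        if read_vibration_energy(feed) <= VIB_LIMIT:
            return feed        # acceptable: feed this value back to the machine
        feed *= 0.9            # aggressive feed detected: back off by 10%
    return min_feed

# Hypothetical monitor response: vibration energy grows with the feed-rate.
print(optimise_feed_rate(0.30, lambda f: 40.0 * f))  # -> about 0.116 mm/tooth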
5. DATA MODEL FOR OPTIMIZATION
To integrate the energy-informed monitoring and analysis information into the CAx chain, a standardized neutral format that can represent and exchange data clearly is required. For this purpose, STEP/STEP-NC is adopted. The literature in this field indicates that although the bi-directional data flow enabled by the STEP/STEP-NC standards improves the intelligence and interoperability of the CAx chain, studies lean towards the CAD to CNC direction. However, taking the benefits of the energy monitoring and analysis system proposed in this paper, the CNC to CAD direction is enhanced (Figure-6). To the authors' knowledge, energy information of machining is not yet included in the STEP-NC standards.
[Figure 6: Integrating energy information into CAx chains]
ISO 14649-201 is a newly finalized part on machine tool data for the cutting process (ISO 2011), in which environmental evaluation is introduced for the first time. It is regarded as a good starting point for developing the energy data model. Figure-7, an EXPRESS-G diagram, illustrates the proposed data model for energy usage optimization.
ENTITY energy_usage_optimization is added as one of the attributes of machining_operation, which is defined in ISO 14649-10. It has one subtype, ENTITY machining_operation_scheduling. Two subtypes sit under machining_operation_scheduling, i.e. ENTITY machining_parameter_optimization and ENTITY element_working_state. There are three attributes, cutting_force, vibration and thermal_energy, under entity machining_parameter_optimization, and feed_per_tooth, depth_of_cut, cutting_speed and specific_cutting_resistance are attributes of cutting_force. ENTITY element_working_state connects to attributes of entity maching_tool_element in ISO 14649-201, which models the energy data in each element.

[Figure 7: Proposed STEP-NC data model for energy usage optimization]
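The proposed entities are defined normatively in EXPRESS (Figure-7); purely for illustration, their structure can be mirrored in Python dataclasses as below. Attribute names follow the text above, while the element_working_state fields are assumptions.

# Illustrative mirror of the proposed STEP-NC entities (normative form: EXPRESS).
from dataclasses import dataclass
from typing import List

@dataclass
class CuttingForce:
    feed_per_tooth: float
    depth_of_cut: float
    cutting_speed: float
    specific_cutting_resistance: float

@dataclass
class MachiningParameterOptimization:
    cutting_force: CuttingForce
    vibration: float
    thermal_energy: float

@dataclass
class ElementWorkingState:
    element_ref: str      # reference to a machine tool element (ISO 14649-201)
    energy_data: float    # field names here are assumptions for illustration

@dataclass
class MachiningOperationScheduling:   # subtype of energy_usage_optimization
    parameter_optimization: MachiningParameterOptimization
    element_states: List[ElementWorkingState]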
6. CONCLUSION AND FUTURE WORK
Despite the fact that the focus in machining has long
been on productivity and product quality, it is
important to re-evaluate the process within the
context of sustainability. In short, because
machining systems are complex and dynamic,
current energy efficiency research is not mature
enough to improve the performance of day-to-day
industrial practice, especially shop floor
activities. From this study, two major
observations are given.
Firstly, sustainability issues in machining
processes are relatively new concerns and, up to
now, studies are still at an early stage. Limited
practical models, and the absence of a theoretical
mathematical model for energy consumption in
machine tool systems, are found in the literature
survey. Therefore, better knowledge of the
relationship between important components or
parameters and energy consumption will offer great
opportunities to improve energy reduction in the
CNC machining industry.
Secondly, the goal of improving sustainability
requires proper utilization of the energy information
to upgrade the system performance in all aspects.
Current machining processes focus on achieving
time or quality objectives, so that they bring the
largest profit to the manufacturers. Energy
consumption is not considered as a main objective
in the processes. The employment of energy information
in machining processes is ineffective, and the
integration of energy data is lacking. Thus,
continuous research is required in processes
such as process planning, task scheduling, and
environmental impact analysis.
In this paper, the framework of an Energy-
informed Machining System (EiMS) is proposed
and explained to meet advancing sustainability
requirements. The paper gives an overview of the
energy flow in the machining system, which
categorizes the energy data into logical groups to
assist information management. Then, the authors
outline the functionality of each module in the proposed
EiMS. Machining parameters optimization is
chosen to demonstrate the mechanism of the EiMS,
and the system can also be extended to processes
such as energy-informed process planning.
Future work will concentrate on developing an
energy consumption model of a machine tool
system, and on extending and modifying the STEP-NC
based data model for energy-informed machining. A
pilot case study to assess the performance of the
proposed system is being considered. On the whole,
the EiMS introduced in this paper actively supports
sustainable machining and its inclusive adoption
among digital enterprises.
7. ACKNOWLEDGMENTS
The authors have been sponsored by China
Scholarship Council and University of Auckland
Joint Scholarship.
REFERENCES
Al-Sulaiman, F. A., M. A. Baseer, et al., "Use of
electrical power for online monitoring of tool


condition", Journal of Materials Processing
Technology, Vol. 166, No. 3, 2005, pp 364-371
Amer, W., Q. Ahsan, et al., "Machine tool condition
monitoring system using Tooth Rotation Energy
Estimation (TREE) technique", IEEE Symposium on
Emerging Technologies and Factory Automation,
2005
Anderberg, S., S. Kara, et al., "Impact of energy
efficiency on computer numerically controlled
machining", Proceedings of the Institution of
Mechanical Engineers, Part B: Journal of Engineering
Manufacture, Vol. 224, No. 4, 2010, pp 531-541
Avram, O., I. Stroud, et al., "A multi-criteria decision
method for sustainability assessment of the use phase
of machine tool systems", International Journal of
Advanced Manufacturing Technology, Vol. 53, No. 5-
8, 2010, pp 811-828
Avram, O. I. and P. Xirouchakis, "Evaluating the use
phase energy requirements of a machine tool system",
Journal of Cleaner Production, Vol. 19, No. 6-7, 2011,
pp 699-711
Byrne, G., D. Dornfeld, et al., "Tool Condition
Monitoring (TCM) - The Status of Research and
Industrial Application", CIRP Annals - Manufacturing
Technology, Vol. 44, No. 2, 1995, pp 541-567
Byrne, G. and E. Scholta, "Environmentally Clean
Machining Processes - A Strategic Approach", CIRP
Annals - Manufacturing Technology, Vol. 42, No. 1,
1993, pp 471-474.
Cannata, A., S. Karnouskos, et al., "Energy efficiency
driven process analysis and optimization in discrete
manufacturing", Proceedings of Industrial Electronics
Conference, 2009
Dahmus, J. B. and T. G. Gutowski, "An environmental
analysis of machining", Proceedings of International
Mechanical Engineering Congress and RD&D Expo,
Anaheim, California, USA, 2004
Dietmair, A. and A. Verl, "Energy Consumption
Modeling and Optimization for Production Machines",
Proceedings of the IEEE International Conference on
Sustainable Energy Technologies, 2008
Dietmair, A. and A. Verl, "A generic energy
consumption model for decision making and energy
efficiency optimisation in manufacturing",
International Journal of Sustainable Engineering, Vol.
2, No. 2, 2009, pp 123-133
Draganescu, F., M. Gheorghe, et al., "Models of machine
tool efficiency and specific consumed energy", Journal
of Materials Processing Technology, Vol. 141, No. 1,
2003, pp 9-15
Gupta, D. P., "Energy Sensitive Machining Parameter
Optimization Model", Master thesis, Department of
Engineering and Mineral Resources, West Virginia
University, 2005, pp 1-81
Gutowski, T., J. Dahmus, et al., "A thermodynamic
characterization of manufacturing processes",
Proceedings of the IEEE International Symposium on
Electronics and the Environment, 2007
Gutowski, T., C. Murphy, et al., "Environmentally
benign manufacturing: Observations from Japan,
Europe and the United States", Journal of Cleaner
Production, Vol. 13, No. 1, 2005, pp 1-17.
Gutowski, T. G., M. S. Branham, et al., "Thermodynamic
analysis of resources used in manufacturing
processes", Environmental Science and Technology,
Vol. 43, No. 5, 2009, pp 1584-1590.
Huang, P. T. and J. C. Chen, "Neural network-based tool
breakage monitoring system for end milling
operations", Journal of Industrial Technology, Vol. 16,
No. 2, 2000, pp 1-7.
ISO 10303-1, Industrial Automation Systems and
Integration - Product Data Representation and
Exchange - Part 1: Overview and Fundamental
Principles, 1994
ISO 10303-11, Industrial Automation Systems and
Integration - Product Data Representation and
Exchange - Part 11: Description Methods: The
EXPRESS Language Reference Manual, 1994
ISO 10303-21, Industrial Automation Systems and
Integration - Product Data Representation and
Exchange - Part 21: Implementation Methods: Clear
Text Encoding of the Exchange Structure, 1994
ISO 14649-1, Data Model for Computerized Numerical
Controllers: Part 1 Overview and Fundamental
Principles, 2003
ISO 14649-10, Data Model for Computerized Numerical
Controllers: Part 10 General Process Data, 2003
ISO 14649-11, Data Model for Computerized Numerical
Controllers: Part 11 Process Data for Milling, 2003
ISO 14649-201, Data Model for Computerized
Numerical Controllers: Part 201 Machine tool data for
Cutting process, 2011
Jin, K., H. C. Zhang, et al., "A multiple objective
optimization model for environmental benign process
planning", Proceedings of the IEEE 16th International
Conference on Industrial Engineering and Engineering
Management, 2009
Krishnan, N. and P. S. Sheng, "Environmental versus
conventional planning for machined components",
CIRP Annals - Manufacturing Technology, Vol. 49,
No. 1, 2000, pp 363-366
Lee, B. E., J. Michaloski, et al., "MT-Connect-Based
Kaizen for Machine Tool Processes", Proceedings of
the ASME International Design Engineering
Technical Conference & Computers and Information
in Engineering Conference, Montreal, Quebec,
Canada, 2010
Lee, B. Y. and Y. S. Tarng, "Application of the discrete
wavelet transform to the monitoring of tool failure in
end milling using the spindle motor current",
International Journal of Advanced Manufacturing
Technology, Vol. 15, No. 4, 1999, pp 238-243


Mani, M., K. W. Lyons, et al., "Introducing sustainability
early into manufacturing process planning",
Proceedings of the 14th International Conference on
Manufacturing Science and Engineering, Evanston,
IL, USA, 2009
Mani, M. L. M., S. Leong, et al., "Impact of Energy
Measurements in Machining Operations", Proceedings
of the ASME International Design Engineering
Technical Conference & Computers and Information
in Engineering Conference, Montreal, Quebec,
Canada, 2010
Mouzon, G., M. B. Yildirim, et al., "Operational methods
for minimization of energy consumption of
manufacturing equipment", International Journal of
Production Research, Vol. 45, No. 18-19, 2007, pp
4247-4271
MTConnect Institute, "What is MTConnect", 2008
<http://mtconnect.org/index.php?option=com_content
&task=view&id=15&Itemid=1>
Munoz, A. A. and P. Sheng, "An analytical approach for
determining the environmental impact of machining
processes", Journal of Materials Processing Tech.,
Vol. 53, No. 3-4, 1995, pp 736-758
Newman, S. T., A. Nassehi, et al., "Energy efficient
process planning for CNC machining." Draft, 2010, pp
1-22
Pandremenos, J., J. Paralikas, et al., "Environmental
assessment of automotive joining processes." 43rd
CIRP International Conference on Manufacturing
Systems, Vienna, Austria, 2010
Popke, H., T. Emmer, et al., "Environmentally clean
metal cutting processes - Machining on the way to dry
cutting", Proceedings of the Institution of Mechanical
Engineers, Part B: Journal of Engineering
Manufacture, Vol. 213, No. 3, 1999, pp 329-332
Rajemi, M. F., P. T. Mativenga, et al., "Sustainable
machining: Selection of optimum turning conditions
based on minimum energy considerations", Journal of
Cleaner Production, Vol. 18, No. 10-11, 2010, pp
1059-1065
Ramesh, R., M. A. Mannan, et al., "Error compensation
in machine tools - a review: Part I: geometric, cutting-
force induced and fixture-dependent errors",
International Journal of Machine Tools and
Manufacture, Vol. 40, No. 9, 2000, pp 1235-1256
Ramesh, R., M. A. Mannan, et al., "Error compensation
in machine tools - a review: Part II: thermal errors",
International Journal of Machine Tools and
Manufacture, Vol. 40, No. 9, 2000, pp 1257-1284
Rehorn, A. G., J. Jiang, et al., "State-of-the-art methods
and results in tool condition monitoring: A review",
International Journal of Advanced Manufacturing
Technology, Vol. 26, No. 7-8, 2005, pp 693-710
Romvall, K., M. Wiktorsson, et al., "Competitiveness by
integrating green perspective in production - a review
presenting challenges for research and industry",
Flexible Automation and Intelligent Manufacturing,
FAIM, California, USA, 2010
Rubio, E. M. and R. Teti, "Cutting parameters analysis
for the development of a milling process monitoring
system based on audible energy sound", Journal of
Intelligent Manufacturing, Vol. 20, No. 1, 2009, pp
43-54
Saidur, R., "A review on electrical motors energy use and
energy savings", Renewable and Sustainable Energy
Reviews, Vol. 14, No. 3, 2010, pp 877-898
Salonitis, K., G. Tsoukantas, et al., "Environmental
impact assessment of grind-hardening process." 13th
CIRP International Conference on Life Cycle
Engineering, Leuven, Belgium, 2006
Sheng, P., M. Srinivasan, et al., "Multi-Objective Process
Planning in Environmentally Conscious
Manufacturing: A Feature-Based Approach", CIRP
Annals - Manufacturing Technology, Vol. 44, No. 1,
1995, pp 433-437
Srinivasan, M. and P. Sheng, "Feature-based process
planning for environmentally conscious machining -
Part 1: Microplanning", Robotics and Computer-
Integrated Manufacturing, Vol. 15, No. 3, 1999, pp
257-270
Srinivasan, M. and P. Sheng, "Feature-based process
planning in environmentally conscious machining -
Part 2: Macroplanning", Robotics and Computer-
Integrated Manufacturing, Vol. 15, No. 3, 1999, pp
271-281
Swamidas, P. M. and G. W. Winch, "Exploratory study
of Adoption of Manufacturing Technology
Innovations in the USA and the UK", International
Journal of Production Research, Vol. 40, No. 12,
2003, pp 2677-2700
Vijayaraghavan, A. and D. Dornfeld, "Automated energy
monitoring of machine tools", CIRP Annals -
Manufacturing Technology, Vol. 59, No. 1, 2010, pp
21-24
Weinert, K., I. Inasaki, et al., "Dry machining and
minimum quantity lubrication", CIRP Annals -
Manufacturing Technology, Vol. 53, No. 2, 2004, pp
511-537
Zhou, Y., P. Orban, et al., "Sensors for intelligent
machining - a research and application survey",
Proceedings of the IEEE International Conference on
Systems, Man and Cybernetics, Part 2 (of 5),
Vancouver, BC, Canada, 1995



DEVELOPMENT OF A STEP-BASED COLLABORATIVE PRODUCT DATA
EXCHANGE ENVIRONMENT
Xi Vincent Wang
Department of Mechanical Engineering
School of Engineering, University of
Auckland, Auckland 1142, New Zealand
xwan262@aucklanduni.ac.nz
Xun W Xu*
Department of Mechanical Engineering
School of Engineering, University of
Auckland, Auckland 1142, New Zealand
xun.xu@auckland.ac.nz
ABSTRACT:
In a modern manufacturing enterprise, CAD/CAM/CNC solutions are normally provided by various
vendors. This forms a heterogeneous application environment. Despite the many integration
approaches developed in the last decades, software integration and product data exchange are still
challenging issues that need to be addressed. In this paper, the authors propose a collaborative
product data exchange mechanism based on a Distributed Interoperable Manufacturing Platform
(DIMP). In this platform, the STEP (ISO 10303) and STEP-NC (ISO 14649) data formats are utilized
to support the data flow. A novel data exchange mechanism is developed to provide the right
amount and level of product data subsets to the users.
KEYWORDS
STEP, STEP-NC, interoperable, data exchange, product data sharing

1. INTRODUCTION
During the past few decades, the manufacturing
business has developed remarkably with the
help of CAx software and CNC tools. Product
design starts in a CAD (Computer-Aided Design)
application, and CAPP (Computer-Aided Process
Planning) software helps the users to work on the
detailed process planning. After the manufacturing
information is finalized by the CAE (Computer-
Aided Engineering) system, the output is sent
to the CNC (Computer Numerical Control) system,
which finally drives the machine tools to manufacture
the product.
Although such computer-aided technologies bring
significant benefits to the manufacturing industry,
integration and interoperability
issues are still unsolved. Due to the heterogeneous
enterprise environments in which business partners
find themselves, multiple data formats, interfaces
and databases are defined and used, thus forming a
highly heterogeneous data environment. According
to a calculation by Parasolid's business
development manager (Anonymous, 2000),
approximately 20% of the product models imported
from different software kernels still contain errors
that have to be manually fixed, not to mention the
data loss during conversions among different
software applications. Based on a survey of
the German manufacturing industry (Konstruktion,
2006), it is reported that more than 75% of the large
design problems are directly related to
different CAD versions or systems, different file
formats and conversions.
Therefore, it is necessary to bridge the gap between
different applications and establish a high-
performance data flow in such an environment. In
the rest of this paper, recent research work on achieving
system interoperability and data portability is
reviewed. A data exchange mechanism based on
DIMP is presented toward the end.
2. STATE-OF-THE-ART INTEROPERABLE
RESEARCH
In recent years, research has been carried out all
around the world to achieve an interoperable and
collaborative environment with heterogeneous
software applications. In the following part of this
section, recent research work is reviewed and
discussed.
2.1. STANDARDIZED FILE FORMATS
In the current industrial and economic context,
heterogeneity has become a noticeable issue
for manufacturing enterprises. System integration
and interoperability is addressed as one of the key
needs that have to be achieved (Panetto and Molina,
2008). A widely recognized information model is
needed, especially for a collaborative and distributed
environment.
To work on multiple versions and views of a shared
model, Sadeghi et al. proposed a collaborative
architecture to allow experts to share and exchange
design information (Sadeghi et al., 2010). In this
architecture, the product design is exchanged through a
standardized constraint-based model to maintain
complex relationships in multi-disciplinary
collaborative design. Thanks to this data model,
conflicts arising during the synchronization process
can be described and resolved via the notification
mechanism.
Besides the design applications, research has also
been undertaken to integrate the whole CAD/CAM/CNC
chain. To facilitate a web-based design-
manufacturing environment, Álvares and Ferreira
proposed a web-based system using a data structure
similar to the ISO 14649 data model (Álvares et al.,
2008). In their system, files in neutral formats are
passed along a serial software chain composed of the
WebCADFeatures, WebCAPP and WebTuring
applications. To integrate more applications
seamlessly and efficiently, Brecher et al. proposed
an Open Computer-Based Manufacturing system
(OpenCBM) (Brecher et al., 2009). In this system,
standardized file formats are chosen to reduce the
cost of data transfer and exchange.
In a heterogeneous environment, data exchange is
a challenging issue when proprietary software tools
are integrated within the same architecture. Oh and
Yee presented a method for semantically mapping
different business documents to a conforming
document format, given the inevitable existence of
multiple product representations (Oh and Yee,
2008). In this research, the XML format is adopted to
support web-based applications and an SOA
(Service-Oriented Architecture) model through the web.
2.2. STEP/STEP-NC TO BRIDGE THE GAP
Since a standardized format is a potential solution to
realizing interoperability, research using
international standard formats has been undertaken as
well. STEP (the Standard for the Exchange of Product data
(ISO, 1994)) is established to describe the entire
product data throughout the life cycle of a product.
STEP contains different Application Protocols
(APs) which provide data models for targeted
applications, activities or environments. Compared
with previous standards, these data models offer a
set of effective tools for CA-interoperability
solutions (Gielingh, 2008).
Recently, a system named INFELT STEP was
proposed to maintain the integration of
CAD/CAM/CNC operations based on STEP data
models (Valilai and Houshmand, 2010). In this
three-layered system, different sections are defined
in each layer to interface different CAD,
CAPP/CAM and CNC software packages. INFELT
STEP has a distinct capability of enabling
collaboration of different enterprise-wide
CAD/CAPP/CAM/CNC systems in the design and
production of a product using multiple APs of the
STEP standard.
In the past few years many companies have studied
and introduced PDM, focusing on cost-cutting and
shortening the product development cycle. To
provide a solution via a common method of sharing
standard product and design information, a STEP-
compliant PDM system is developed to fulfil the
demand for logically integrated product data which
is stored physically in a distributed environment
(Yang et al., 2009). In this system, STEP-based
PDM schema is defined in XML format to support
the web service connecting PDM systems of several
partners through an open network accessible via the
internet. As another utilization of XML, Makris et
al. propose an approach providing efficient data
exchange in which the web is utilized as a
communication layer (Chryssolouris et al., 2004).
Combining STEP concept with XML, this work
supports the integration of decentralized business


partners and enables the information flow within the
value added chain (Makris et al., 2008).
Moreover, the data model for computerized
numerical controllers, which is also known as
STEP-NC (ISO, 2003), was established as an
international standard in 2003. As a data model to
connect CAD/CAM systems with CNC machines,
STEP-NC completes the integrated loop of
CAD/CAM/CNC. It has been proven that STEP-NC
contributes to both system interoperability
and data traceability (Asiabanpour et al., 2009).
Hence, it becomes possible to implement
interoperability in a STEP/STEP-NC compliant
environment (Newman et al., 2008).
Nassehi et al. proposed a framework to combat the
incompatibility problem among CAx systems
(Nassehi et al., 2008). In this framework, STEP-NC
data model is utilized as the basis for representing
manufacturing knowledge augmented with XML
schema while a comprehensive data warehouse is
utilized to store CNC information. The platform is
further explained in (Newman and Nassehi, 2007).
The system consists of manufacturing data
warehouse, manufacturing knowledgebase,
intercommunication bus, and diverse CAx interfaces
as main structures. Mobile agent technology is used
to support the intercommunication bus and CAx
interfaces.
Recently, Mokhtar and Houshmand (Mokhtar and
Houshmand, 2010) studied a similar manufacturing
platform combined with the axiomatic design
theory to realise interoperability along the CAx
chain. Two basic approaches are considered:
utilizing interfaces, and utilizing a neutral format
based on STEP. The methodology of axiomatic
design is proposed to generate a systematic roadmap
for an optimum combination of data exchange via
direct (using the STEP neutral format) or indirect
(using bidirectional interfaces) solutions in the CAx
environment.
Besides the approaches mentioned above, there are
methods developed to strengthen the interoperability
along the STEP/STEP-NC based CAD/CAM/CNC
chain. For instance, Vichare et al. developed data
models to describe all the elements of a CNC
machine tool (Vichare et al., 2009). In this approach,
called UMRM (Unified Manufacturing Resource
Model), machine-specific data is defined in the form
of a STEP-compliant schema. This data model acts
as a complementary part to the STEP-NC standard
to represent various machine tools in a standardized
form, which provides a universal representation of
the manufacturing information at the tail of the
CAD/CAM/CNC chain.
3. PRODUCT DATA EXCHANGE
MECHANISM VIA DIMP
Although using STEP/STEP-NC is a possible
solution to achieving system interoperability, some
drawbacks can be observed at the same time. Since
the key concept of STEP is to provide an integrated
information resource, it may bring up
synchronization and confidentiality issues. In
modern industry, manufacturing business is
normally conducted cooperatively. When the same
type of product is manufactured or provided by
different suppliers or sub-contractors
simultaneously, it is necessary for these suppliers/
sub-contractors to communicate with each other,
and to achieve synchronization and traceability.
Moreover, when a product is manufactured by
different suppliers one after another, a serial data
flow forms. Passing an integrated product data
package along the supply chain may breach the
confidentiality requirements of the suppliers,
especially those on the upper data stream. To
combat these synchronization and confidentiality
issues, a data exchange mechanism is designed
based on a Distributed Interoperable Manufacturing
Platform (DIMP) to minimize data exchange and
data transfer.
3.1. SYSTEM ARCHITECTURE OF DIMP
DIMP is proposed to achieve an interoperable
environment integrating multiple CAD/CAM/CNC
software packages (Wang et al., 2010). In DIMP, a
Service-Oriented Architecture is utilized to fulfil the
generic demands from the users directly (Figure 1).
Initially, DIMP collects the user's request and
generates a service request list in a pre-defined
document format. Based on this list, the platform
allocates the related software packages and product
documents, and then organizes them as a series of
software services before they are passed to the user. From
the user's point of view, the process can be
summarized as "request, find, bind and provide",
which is discussed in detail in (Wang and Xu, 2011).
As shown in Figure 1, the platform consists of three main modules: the Supervisory Module, the Database Module and the Application Warehouse Module. The Supervisory Module (SM) acts as the global control centre of the platform. After the user's request has been collected, the SM analyzes the request list and generates an optimized service list to fulfil the request. In the Application Warehouse Module, software tools are repackaged by functionality into Application Modules (AMs), which are developed to be self-contained and to execute autonomously. Based on the service list, the selected AMs are meshed into a Virtual Service Combination and delivered to the user to fulfil his/her own needs.
In the Database Module, all the information about the AMs is defined and stored, such as basic functionality descriptions, input/output types and authorization levels. This information is mapped to the user's request list and analyzed by the Supervisory Module before the appropriate AMs are selected and delivered. Besides the AM information, DIMP's database contains Project/Product documents as well. Along with the native data formats of the AMs, the product data are also saved in the STEP and STEP-NC neutral data formats for archiving purposes. As mentioned above, despite the advantages of the STEP/STEP-NC data format, intellectual property protection issues still exist; thus a collaborative product data exchange mechanism is designed and developed.
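To illustrate the kind of AM description such a database might hold, the sketch below defines a hypothetical registry record in Java (the language used for the prototype described later); all field and method names are illustrative assumptions, not DIMP's actual schema.

// Hypothetical sketch of an Application Module registry entry.
// Field names are illustrative assumptions, not DIMP's actual schema.
public record ApplicationModuleInfo(
        String name,               // e.g. a CAM toolpath generator
        String functionality,      // basic functionality description
        String inputType,          // e.g. "STEP-NC Part 21"
        String outputType,
        int authorizationLevel) {  // minimum clearance a user must hold

    // An AM can serve a request item if it offers the requested
    // functionality and the user is sufficiently authorized.
    public boolean canServe(String requestedFunction, int userLevel) {
        return functionality.equalsIgnoreCase(requestedFunction)
                && userLevel >= authorizationLevel;
    }
}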
Figure 1. Service-oriented DIMP architecture (Wang and Xu, 2011)
3.2. COLLABORATIVE PRODUCT DATA EXCHANGE MECHANISM
As mentioned above, when a number of suppliers, contractors and sub-contractors are working on one project cooperatively, it is even more important to overcome the synchronization and confidentiality issues. Based on DIMP, the concept of the Data Packet (DP) is conceived. A DP is defined as a self-contained, mobile cluster of data. The key idea of the DP is to provide the right amount of data to the right people in the right manner. Once the Service List identifies the data subset in need, the Data-Localization Mechanism extracts it and generates a stand-alone file before it is packaged in the Virtual Service Combination. After the DP is processed by the user, the modified information needs to be updated; the DP is then reassembled back into the data source, which is defined as the Data-Integration Mechanism. In general, the goal is to develop an algorithm that identifies the logical connections amongst different data subsets across different levels. According to these connections, a stand-alone file containing the DP is generated and delivered to the user. In practice, since STEP/STEP-NC describes product information from an object-oriented perspective, it is possible to identify and extract data in a specific scope. This provides the user with an interoperable environment for working on the appropriate subset of data. The synchronization issue mentioned above is thereby relieved and the confidentiality issue overcome.
3.2.1. Data-Localization Mechanism
The DP concept is realized by the pre-/post-processors encapsulated along with the AMs. After the service list is generated by the SM, the pre-processor searches the database and locates the top-level information assigned by the work task. After the DP is located, all the related information is extracted and transformed into a self-contained file for the target application.
To realize the DP concept, it is required to extract and process a DP from a STEP physical file (ISO, 2002). However, since the entity instances are defined in a text-based structure, the product information in the STEP file has to be re-described. Because STEP/STEP-NC describes the information in a task-oriented way using entities and reference relationships, this logic can be captured in a tree structure; hence a meta-data model is defined to re-represent the product information from a Part 21 file. In this data model, the entity instances stored in a STEP file are denoted as Nodes and the relationships between nodes as Edges. For each tree node, all the information defined in the original data source is kept.
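As a rough illustration of such a tree-structured meta-data model, the following Java sketch shows one possible node type; the class and field names are our own assumptions, not the authors' implementation.

// Illustrative sketch of the meta-data model: each Part 21 entity
// instance becomes a node, and each reference between instances an edge.
import java.util.ArrayList;
import java.util.List;

public class EntityNode {
    final int id;                 // instance id, e.g. 7 for "#7"
    final String type;            // entity type, e.g. "WORKPIECE"
    final String rawAttributes;   // attribute list kept verbatim from the source
    final List<EntityNode> edges = new ArrayList<>(); // references to child nodes

    EntityNode(int id, String type, String rawAttributes) {
        this.id = id;
        this.type = type;
        this.rawAttributes = rawAttributes;
    }

    // A reference such as "#13" inside the attributes becomes an edge.
    void addEdge(EntityNode child) {
        edges.add(child);
    }
}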
The DP extraction process can be summarized as in Figure 2. When the top-level entity of a DP is assigned, this entity is defined as the first Father Node. All the information defined in the entity is transformed and kept as attributes of the Father Node. When an attribute points to another node (a Child Node), a temporary Edge is built and attached to the Father Node. After the Father Node analysis is done, the mechanism re-checks the Service List. If the DP holds enough information according to the Service List, the system withdraws the unused temporary Edges and generates an executable STEP file based on the extracted DP. If the last Father Node has not reached the end of the DP, which means the user requires more information around that Node, the system allocates the Child Node attached to the Edge as the next Father Node and runs the analysis process again until all the required information is collected. By defining father-child logic, this mechanism enables the user to access the right amount of data across various data levels. For instance, the DP generator is able to provide general information to the user, such as the name and owner of a project, while the technical details, dimensions and parameter information of a specific task can be packaged in the DP as well.
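Read as an algorithm, the father/child expansion amounts to a bounded breadth-first traversal over entity references. The sketch below is one possible reading of Figure 2, assuming the file has already been parsed into a map from instance ids to the ids they reference; the predicate stands in for the Service List's decision of whether a node's children are still needed.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.IntPredicate;

public class DpExtractor {
    // Collects the instance ids belonging to a Data Packet: starting from the
    // top-level entity, child references are expanded only while the service
    // list (modelled here as a predicate) still requires more information.
    static Set<Integer> extract(Map<Integer, List<Integer>> references,
                                int topLevelId,
                                IntPredicate needsChildren) {
        Set<Integer> packet = new LinkedHashSet<>();
        Deque<Integer> fathers = new ArrayDeque<>();
        fathers.add(topLevelId);
        while (!fathers.isEmpty()) {
            int father = fathers.poll();
            if (!packet.add(father)) continue;          // node already visited
            if (!needsChildren.test(father)) continue;  // DP complete at this node
            for (int child : references.getOrDefault(father, List.of())) {
                fathers.add(child);                     // child becomes the next father
            }
        }
        return packet;  // ids to be written out as a stand-alone Part 21 file
    }
}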

Figure 2. DP analysis process
3.2.2. Data-Integration Mechanism
After the user finishes his/her job with the DP file, DIMP detects the output results and the post-processor is initialized before the data packets are re-assembled into the data source. Since STEP/STEP-NC stores the product data in an object-oriented way, the integrity of the data can be maintained and the broken links between the DP and the original data source reconnected. In complex cases, if multiple changes are applied to the same data set by different users, the Data-Integration Mechanism synchronizes the changes and makes a backup version for each of them in the Product/Project Database.


Thus the history of the product data is made traceable. To maintain consistent data semantics and syntax, a validation mechanism is in place.
As part of the post-processor, the validation mechanism detects unreasonable parameters, such as excessively large or small dimensions of a manufacturing feature and ill-defined manufacturing information (e.g. a negative drill diameter). Furthermore, the harmonization between the DP and the data source is validated as well. For instance, if the depth of a pocket is changed in the DP while the toolpath depth remains unchanged in the data source, the validation system detects the conflict and sends a warning message to the user. Upon completion of the validation process, the post-processor shuts down and a service-end message is delivered to the Supervisory Module before a new service is launched. Thanks to the object-oriented nature of STEP/STEP-NC, it is possible to develop the validation mechanism based on the DP concept.
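Such checks are straightforward once the DP is in tree form. The sketch below illustrates the two examples just mentioned (an ill-defined tool diameter and a pocket depth that no longer agrees with the toolpath depth in the data source); the method signature and the tolerance are illustrative assumptions, not the system's actual code.

import java.util.ArrayList;
import java.util.List;

public class DpValidator {
    // Illustrative post-processor checks; returns warning messages, empty if valid.
    static List<String> validate(double toolDiameter,
                                 double pocketDepthInDp,
                                 double toolpathDepthInSource) {
        List<String> warnings = new ArrayList<>();
        if (toolDiameter <= 0) {  // e.g. a negative drill diameter
            warnings.add("Ill-defined tool: diameter must be positive, got " + toolDiameter);
        }
        // Harmonization check between the DP and the data source.
        if (Math.abs(pocketDepthInDp - toolpathDepthInSource) > 1e-6) {
            warnings.add("Conflict: pocket depth in DP (" + pocketDepthInDp
                    + ") differs from toolpath depth in source ("
                    + toolpathDepthInSource + ")");
        }
        return warnings;
    }
}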
3.2.3. Case study
To implement the DP concept and evaluate the algorithm, a Graphical User Interface (GUI) was developed for the data exchange environment. The test product information is stored in a STEP-NC file, example 2 of ISO 14649-11 (ISO, 2004). As illustrated in Figure 3, the first section of the part program is the header section, marked by the keyword HEADER. In this section, general information and comments concerning the project are given, e.g. filename, author, date and organization. The second and main section of the program file is the data section, marked by the keyword DATA. This section contains all the information about manufacturing tasks and geometries.
To develop the system, the Java language was chosen because of its interoperability amongst different platforms. The system is developed using the Java Development Kit (JDK) in the Java Runtime Environment (JRE). First, the STEP-NC physical file is read in by the system and interpreted into the meta-data model, which is defined as a tree structure. In this way, the text-based STEP-NC information is transformed into programmable classes that can be analysed and processed. After receiving the control message from the Supervisory Module, the system searches the structure tree and starts the DP process.
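Since every statement in a Part 21 data section has the form "#id = TYPE(...);", a single pattern suffices for this interpretation step. The snippet below sketches how one such line could be lifted into the tree model; the regular expression and class names are our own illustration, not the system's actual code.

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class Part21LineReader {
    // Matches e.g. "#7= WORKPIECE('PART 2',#13,0.01,$,$,$,(#9,#10,#11,#12));"
    private static final Pattern INSTANCE =
            Pattern.compile("#(\\d+)\\s*=\\s*([A-Z0-9_]+)\\s*\\((.*)\\);");
    private static final Pattern REFERENCE = Pattern.compile("#(\\d+)");

    public static void main(String[] args) {
        String line = "#7= WORKPIECE('PART 2',#13,0.01,$,$,$,(#9,#10,#11,#12));";
        Matcher m = INSTANCE.matcher(line.trim());
        if (m.matches()) {
            System.out.println("node #" + m.group(1) + " of type " + m.group(2));
            Matcher r = REFERENCE.matcher(m.group(3)); // referenced instances become edges
            while (r.find()) {
                System.out.println("  edge -> #" + r.group(1));
            }
        }
    }
}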
Assume that a user requires a view of the basic information about a MACHINING_WORKINGSTEP and detailed information about its feature, a PLANAR_FACE. According to the service list based on the customer's request, the system first locates the top level of the DP and scans its entity classes. All the information of this entity is delivered to the new DP tree structure, and the connections between this entity and others are tagged and recorded in the system.
ISO-10303-21;
HEADER;
FILE_DESCRIPTION(('ISO 14649-11 EXAMPLE 2',
'COMPLEX PROGRAM WITH VARIOUS MANUFACTURING FEATURES'),'1');
FILE_NAME('EXAMPLE2.STP', ...);
FILE_SCHEMA(('MACHINING_SCHEMA','MILLING_SCHEMA'));
ENDSEC;
DATA;
#1= PROJECT('EXECUTE_EXAMPLE2',#2,(#7),$,$,$);
#2= WORKPLAN('MAIN WORKPLAN',(#4,#5,#6),$,#14,$);
...
#7= WORKPIECE('PART 2',#13,0.01,$,$,$,(#9,#10,#11,#12));
...
#78= BORING($,$,'BORING_HOLE2',20.,$,#266,#270,#230,$,15.,15.,$,$,$,.T.,1.,$);
...
#101= COMPOUND_FEATURE('COMPOUND_FEATURE_HOLE2',#7,(),#371,(#108,#109,#110));
#108= ROUND_HOLE('HOLE2_FLAT_BOTTOM',#7,(#78),#369,#545,#546,$,#214);
...
ENDSEC;
END-ISO-10303-21;

Figure 3. STEP-NC data structure
After the first level of data is collected, the process of extracting the next level of product information starts. The PLANAR_FACE feature 'PLANAR FACE1' is set as the new Father Node and transmitted to the DP automatically. The system then continues the data-collecting loop. When all the required information has been extracted, the reference relationships between the DP and the rest of the file are saved in the Database Module for future updating. Meanwhile, the DP data model is written back into the Part 21 physical file format and saved to the system. As illustrated in Figure 5, the output file contains exactly the amount of information requested by the user, and all the logical relationships between entity instances are kept intact.




Figure 4. STEP-NC Interpreter/DP Generator

Figure 5. Output Data Packet file
4. CONCLUSIONS
Nowadays, manufacturing organizations face interoperation difficulties because of the heterogeneous data environments they operate in and the multitude of software vendors they deal with. Meanwhile, product lifecycle management is challenged by the conflicting synchronization and confidentiality issues. This paper introduces a novel Data Packet concept to represent product semantics via the STEP/STEP-NC data model. The Data-Localization and Data-Integration mechanisms show that it is practicable to process a data subset containing a reasonable amount of information. These methods enable users to work on product components across different levels of abstraction using different representations. Furthermore, users are able to synchronize parallel modifications after the data subsets have been processed in a distributed environment.


REFERENCES
Álvares AJ, Ferreira JCE, Lorenzo RM. An integrated
web-based CAD/CAPP/CAM system for the remote
design and manufacture of feature-based cylindrical
parts. J Intell Manuf 2008; 19: 643-659.
Anonymous. Healing the wounds of data conversion.
CAD User AEC Magazine 2000; Vol. 13.
Asiabanpour B, Mokhtar A, Hayasi M, Kamrani A, Nasr
EA. An overview on five approaches for translating
cad data into manufacturing information. Journal of
Advanced Manufacturing Systems 2009; 8: 89-114.
Brecher C, Lohse W, Vitr M. Module-based Platform for
Seamless Interoperable CAD-CAM-CNC Planning. In:
Xu XW, Nee AYC, editors. Advanced Design and
Manufacturing Based on STEP. Springer, London,
2009, pp. 439-462.
Chryssolouris G, Makris S, Xanthakis V, Mourtzis D.
Towards the Internet based supply chain management
for the ship repair industry. International Journal of
Computer Integrated Manufacturing 2004; 17: 45-57.
Gielingh W. An assessment of the current state of product
data technologies. CAD Computer Aided Design
2008; 40: 750-759.
ISO. ISO 10303 -1: Industrial automation systems and
integration -- Product data representation and
exchange -- Part 1: Overview and fundamental
principles. Geneva, Switzerland: International
Organization for Standardization, 1994.
ISO. ISO 10303-21. Industrial automation systems and
integration -- Product data representation and
exchange -- Part 21: Implementation methods: Clear
text encoding of the exchange structure Geneva,
Switzerland: International Organization for
Standardization, 2002.
ISO. ISO 14649-1. Industrial automation systems and
integration -- Physical device control -- Data model for
computerized numerical controllers -- Part 1:
Overview and fundamental principles. Geneva,
Switzerland: International Organization for
Standardization, 2003.
ISO. ISO 14649-11 Industrial automation systems and
integration -- Physical device control -- Data model for
computerized numerical controllers -- Part 11: Process
data for milling. Geneva, Switzerland: International
Organization for Standardization, 2004.
Konstruktion NNW. Ungenutztes Potential im Engineering: Status, Trends und Herausforderungen bei CAD und PDM. Autodesk, München, 2006.
Makris S, Xanthakis V, Mourtzis D, Chryssolouris G. On
the information modeling for the electronic operation
of supply chains: A maritime case study. Robotics and
Computer-Integrated Manufacturing 2008; 24: 140-
149.
Mokhtar A, Houshmand M. Introducing a roadmap to
implement the universal manufacturing platform using
axiomatic design theory. International Journal of
Manufacturing Research 2010; 5: 252-269.
Nassehi A, Newman ST, Xu XW, Rosso Jr RSU. Toward interoperable CNC manufacturing. International Journal of Computer Integrated Manufacturing 2008; 21: 222-230.
Newman ST, Nassehi A. Universal Manufacturing
Platform for CNC Machining. Annals of the CIRP
2007; 56: 459.
Newman ST, Nassehi A, Xu XW, Rosso Jr RSU, Wang L, Yusof Y, et al. Strategic advantages of interoperability for global manufacturing using CNC technology. Robotics and Computer-Integrated Manufacturing 2008; 24: 699-708.
Oh SC, Yee ST. Manufacturing interoperability using a
semantic mediation. International Journal of Advanced
Manufacturing Technology 2008; 39: 199-210.
Panetto H, Molina A. Enterprise integration and
interoperability in manufacturing systems: Trends and
issues. Computers in Industry 2008; 59: 641-646.
Sadeghi M, Hadj-Hamou K, Noel F. A collaborative
platform architecture for coherence management in
multi-view integrated product modelling. International
Journal of Computer Integrated Manufacturing 2010;
23: 270-282.
Valilai OF, Houshmand M. INFELT STEP: An
integrated and interoperable platform for collaborative
CAD/CAPP/CAM/CNC machining systems based on
STEP standard. International Journal of Computer
Integrated Manufacturing 2010; 23: 1095.
Vichare P, Nassehi A, Newman S. A unified
manufacturing resource model for representation of
computerized numerically controlled machine tools.
Proceedings of the Institution of Mechanical
Engineers, Part B: Journal of Engineering Manufacture
2009; 223: 463-483.
Wang XV, Xu X. DIMP: an interoperable solution for software integration and product data exchange. Enterprise Information Systems, Special Issue on Information Integration Infrastructures Supporting Multidisciplinary Design Optimization, in progress, 2011.
Wang XV, Xu X, Hämmerle E. Distributed Interoperable Manufacturing Platform Based on STEP-NC. The 20th International Flexible Automation and Intelligent Manufacturing Conference (FAIM 2010), California State University, California, USA, 2010, pp. 153-160.
Yang J, Han S, Grau M, Mun D. OpenPDM-based
product data exchange among heterogeneous PDM
systems in a distributed environment. International
Journal of Advanced Manufacturing Technology 2009;
40: 1033-1043.


LIFE CYCLE ORIENTED EVALUATION OF FLEXIBILITY IN INVESTMENT
DECISIONS FOR AUTOMATED ASSEMBLY SYSTEMS

Prof. Dr.-Ing. Achim Kampker
WZL, Laboratory for Machine Tools and
Production Engineering, RWTH Aachen
University
a.kampker@wzl.rwth-aachen.de
Dipl.-Ing. Peter Burggräf
WZL, Laboratory for Machine Tools and
Production Engineering, RWTH Aachen
University
p.burggraef@wzl.rwth-aachen.de


Dipl.-Ing. Cathrin Wesch-Potente
WZL, Laboratory for Machine Tools and
Production Engineering, RWTH Aachen
University
c.wesch@wzl.rwth-aachen.de
Dipl.-Wirt.-Ing. Georg Petersohn
WZL, Laboratory for Machine Tools and
Production Engineering, RWTH Aachen
University
g.petersohn@wzl.rwth-aachen.de

ABSTRACT
Due to fast-changing market requirements and short product life cycles, flexibility is, besides purchasing and operating costs, one of the crucial characteristics of automated and partly automated assembly systems. Since the life cycle of an assembly system is longer than that of the assembled products, flexibility enables an assembly system to adapt to future product requirements as well as production scenarios. The approach proposed in this paper strives for a systematic and economic measurement of flexibility in investment decisions. It offers methods and key-figures supporting investment decisions for automated assembly systems. The right levels of flexibility and automation of an assembly system are evaluated by using a set of potential future scenarios of the system's life cycle. Based on two new key-figures called Return on Automation and Return on Flexibility, the approach allows different configurations of an assembly system to be compared and therefore supports well-informed investment decisions.
KEYWORDS
Assembly System, Flexibility, Decision Making, Life Cycle

1. INTRODUCTION
Companies in manufacturing industries are confronted with the challenges of increasing market dynamics, increasing competition and an uncertain environment, caused by the globalisation of markets and economic crises. Shorter product lifecycles, more product variants, volatile product demands and a concurrently increasing product complexity are characteristic consequences for companies in this market environment (Schuh et al, 2004; Schuh et al, 2005; Seidel and Garrel, 2011).
In this context, the ability to adapt to changing requirements is becoming an important competitive factor. A continuous adaptation of the manufacturing system to the market requirements is necessary. Since the future requirements for the manufacturing system cannot be forecast exactly, a proactive adaptation of the system is rarely possible and the manufacturing system is not optimally configured for the upcoming situation. Therefore, manufacturing flexibility is an important goal to achieve in the early planning phases of the system (Schuh et al, 2004; Schuh et al, 2005). In addition, low production costs are an important factor for competitiveness. The automation of manufacturing systems is one solution for reaching this goal. Since automation usually reduces the flexibility of the manufacturing system, a trade-off has to be made between these two goals.
The approach proposed in this paper attempts to give support in finding the right trade-off between flexibility and automation in investment decisions for automated assembly systems. Automated assembly systems are one example of a system with high investment costs on the one hand and the need for flexibility over the system's life cycle on the other. The approach is intended to be used in the early planning phases of automated assembly systems. Chapter two of this paper illustrates the required types of flexibility of assembly systems. Challenges in the economic evaluation of flexibility and existing approaches are summarized in chapter three. In chapter four the approach to a life cycle oriented evaluation of flexibility in investment decisions for automated assembly systems is introduced. In chapter five an industry case is presented. Chapter six concludes this paper.
2. FLEXIBILITY OF ASSEMBLY
SYSTEMS
The economic and life-cycle oriented evaluation of flexibility in investment decisions requires a clear definition of the necessary types of flexibility provided by assembly systems. As there are numerous approaches to the description and measurement of manufacturing flexibility, a common definition of manufacturing flexibility and its various types is not available in the literature (De Toni and Tonchia, 1996).
Flexibility acts as a counterbalance to uncertainty (Newman et al, 1993). It describes the ability of a manufacturing system to cope with unforeseen changes. The two main types of unforeseen changes which necessitate flexibility are external changes (demand, supply) and internal changes (system breakdowns, lack of material, delays). Manufacturing systems with a high degree of flexibility adapt to new situations caused by external and internal changes quickly and without significant new investments (Chryssolouris, 1996; De Toni and Tonchia, 1996). Chryssolouris (1996, 2006) suggested that the flexibility of a manufacturing system should be evaluated by the expected costs necessary for the adaptation of the system. There are numerous approaches to classifying flexibility into different types (e.g. Browne et al, 1984; Sethi et al, 1990). In the next step the main types of flexibility for assembly systems are derived from the external and internal requirements for the assembly system.
As in most industrial sectors, the life cycle of an assembly system is longer than that of the assembled products. Thus, the necessity for flexibility of automated and partly automated assembly systems is evident. Furthermore, this necessity is intensified by an increasing frequency of product changes caused by shorter product life cycles. These challenges can be met by a type of flexibility which allows the set of products to be changed easily (Schuh et al, 2004).
Frequent product changes result in a multitude of different variants of the same product type. In addition, the complexity of the product variants increases. Thus, the assembly system has to assemble different variants and types of products at the same time to remain competitive (Schuh et al, 2004).
Volatility of demand during a product life cycle is typical for most markets. To enable profitable assembly at different volumes, this challenge has to be counterbalanced by flexibility (Schuh et al, 2004).
With regard to these requirements, three main types of flexibility seem adequate for the classification of assembly system flexibility (Figure 1) (Browne et al, 1984; Suarez et al, 1991):
- Product flexibility describes the ability of the production system to produce a changed set of products without serious updates and replacements of the present resources.
- Mix flexibility describes the ability of a system to produce a number of different products at the same time.
- Volume flexibility describes the ability of an assembly system to vary the volume of products without remarkable consequences for production costs.

(Figure: assembly system flexibility split into product, mix and volume flexibility, each panel sketching product volume n over time t)
Figure 1 Main types of assembly system flexibility
3. CHALLENGES IN ECONOMIC
EVALUATION OF FLEXIBILITY
The fact that no commonly accepted approach to the evaluation of the flexibility of manufacturing systems exists shows the need for new decision support methodologies in industry (Rogalski et al, 2009). The multi-dimensionality of flexibility and the lack of direct measures of flexibility make an evaluation of manufacturing flexibility difficult (Cox, 1989). This is particularly true for the financial or economic evaluation of flexibility. While the investment in a flexible manufacturing system is easy to quantify, the financial benefits of an increased manufacturing flexibility are hard to determine (Zäh et al, 2006). Based on a review of existing approaches to the evaluation of manufacturing flexibility, the necessity for a life cycle oriented evaluation of assembly system flexibility in investment decisions is derived below.
Schuh et al (2004) developed a system of key
figures for the evaluation of product, mix and
volume flexibility. The system is able to measure
the flexibility on different organisational levels
(workstation, production line and production system
or production networks). A detailed monetary
evaluation of flexibility is not possible.
Abele et al (2006) extended the net present value method with a real options analysis for the evaluation of flexibility. The approach considers the temporal structure of the decision-relevant cash flows. The approach by Zäh et al (2003) is another example of using real options analysis in flexibility evaluation. Since real options analysis presupposes the existence of a market-traded financial option with the same cash flows over time, the usage of these approaches is very restricted.
Alexopoulos et al (2007) developed the DESYMA approach for the determination of the flexibility of a manufacturing system by statistical analysis of estimates of the manufacturing system's lifecycle cost. The estimates are calculated with discounted cash flows over a time horizon and for different market scenarios using a linear program. The approach only considers possible adaptations caused by different demand volumes. Georgoulias et al (2009) integrated the DESYMA approach into a toolbox approach for flexibility evaluation.
The approach for flexibility evaluation by Reinhart et al (2007) is divided into three steps (definition of the alternatives to evaluate, modelling the future with uncertain states of the environment, determination of the most economic alternative). Using discounted cash flows for the economic evaluation, the approach considers only volume flexibility.
Rogalski and Ovtcharova (2009) developed the ecoFLEX approach for the comparison of different manufacturing systems regarding their flexibility. The comparison is based on a linear program calculating flexibility areas, considering the mix and volume flexibility of a system. Monetary parameters are not considered in detail.
The approach developed by Röhl (2010) strives for the economic evaluation of manufacturing in the design phase, considering flexibility and risk criteria. The approach is not life cycle oriented and considers only volume and mix flexibility.
Table 1 Summary of the relevant approaches
(Each approach is rated as fulfilled, partially fulfilled or not fulfilled with respect to five characteristics: volume flexibility, mix flexibility, product flexibility, detailed monetary evaluation, life cycle oriented. Approaches: Schuh (2004), Abele (2006), Zäh (2003), Alexopoulos (2007), Reinhart (2007), Rogalski (2009), Röhl (2010).)


Table 1 summarises the characteristics of the relevant approaches introduced in this chapter. None of these approaches fully meets all the requirements for a life cycle oriented evaluation of flexibility in investment decisions for automated assembly systems. The following chapter introduces a new approach for an economic evaluation of flexibility based on two main key-figures.
4. LIFE CYCLE ORIENTED EVALUATION OF FLEXIBILITY IN INVESTMENT DECISIONS
As the future flexibility of assembly systems is determined within the investment decision, and therefore at the beginning of the life cycle, the approach to a life cycle oriented evaluation of flexibility aims to support investment decisions for assembly systems in the early phases of the system's design. The approach is based on two main key-figures: the Return on Automation (ROA) and the Return on Flexibility (ROF). These key-figures and the approach itself are detailed in the following paragraphs.
4.1 RETURN ON AUTOMATION AND
RETURN ON FLEXIBILITY
The ROA measures the costs and benefits of the assembly system with regard to its automation, and the ROF measures the costs and benefits of the assembly system with regard to its flexibility level, especially considering the three types of flexibility defined in chapter 2. The ROF is thus an economic measure for the ease with which an automated assembly system can adapt to new situations.
Both key-figures, ROA and ROF, are based on the definition of the return on investment. The return on investment (ROI) is the top key figure of the DuPont System of Financial Control, developed by the company DuPont de Nemours in 1919 (Meyer, 2006). The ROI is the ratio of the earnings of a system to the total investment within the system. The earnings are the sales of the system minus the cost of sales within a period. The total investment is the sum of the permanent investment and the current assets. The ratio of earnings to total investment is the basis of the definition of the ROA and ROF.
The calculation of the earnings of the assembly system for T years is based on the net present value (NPV) approach. The NPV, as a dynamic investment analysis, considers the temporal variability of revenues and expenses by discounting them with the required rate of interest. By using the NPV, a life cycle oriented evaluation of the assembly system is possible.
A configuration of an assembly system is beneficial with regard to its automation level if the ROA is positive, and vice versa. Using the ROA, different configurations and automation approaches of an assembly system can be compared with each other. The most beneficial configuration of the assembly system is the configuration with the greatest ROA. A fair comparison of the different configurations is only possible if the comparison is based on the same basic future scenario. The basic future scenario describes the most probable future demand of products and the product range which is going to be assembled in the assembly system. For the comparison of the different configurations and their automation approaches, a realistic basic future scenario has to be defined (see next paragraph). Equation 1 in Figure 2 shows the input parameters necessary for the calculation of the ROA. The ROA is only a measure of an economic automation of the assembly system for the most probable scenario, not considering the system's flexibility.


ROA = \left[ \sum_{t=1}^{T} \frac{AS_t - OE_t - AC_t}{(1+i)^t} - I_0 \right] \Big/ I_0 \cdot 100\% \qquad (1)

ROF_{OS} = \left[ \sum_{t=1}^{T} \frac{(AS_{OS,t} - AS_{BS,t}) - (OE_{OS,t} - OE_{BS,t}) - (AC_{OS,t} - AC_{BS,t})}{(1+i)^t} \right] \Big/ I_0 \cdot 100\% \qquad (2)

With:
ROA = Return on Automation [%]
ROF = Return on Flexibility [%]
AS = adjusted sales [€]
OE = operating expenses [€]
AC = adaptation costs in year t [€]
I_0 = investment in the assembly system [€]
i = required rate of interest [-]
T = number of years [-]
t = index for the year [-]
BS = index for the basic future scenario [-]
OS = index for the optional future scenario [-]

Figure 2 Equations of the ROA and ROF
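To make the two key-figures concrete, the sketch below evaluates equations (1) and (2) directly; the arrays run over the years t = 1,...,T and the parameter names mirror the definitions above. The code is our own illustration of the formulas, not part of the original approach.

public class FlexibilityKeyFigures {

    // Equation (1): ROA in percent for the basic future scenario.
    static double roa(double[] as, double[] oe, double[] ac, double i0, double i) {
        double npv = 0.0;
        for (int t = 1; t <= as.length; t++) {
            npv += (as[t - 1] - oe[t - 1] - ac[t - 1]) / Math.pow(1 + i, t);
        }
        return (npv - i0) / i0 * 100.0;
    }

    // Equation (2): ROF in percent of an optional scenario (OS) against the basic one (BS).
    static double rof(double[] asOs, double[] oeOs, double[] acOs,
                      double[] asBs, double[] oeBs, double[] acBs,
                      double i0, double i) {
        double npv = 0.0;
        for (int t = 1; t <= asOs.length; t++) {
            npv += ((asOs[t - 1] - asBs[t - 1]) - (oeOs[t - 1] - oeBs[t - 1])
                    - (acOs[t - 1] - acBs[t - 1])) / Math.pow(1 + i, t);
        }
        return npv / i0 * 100.0;
    }
}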
To be able to measure the ability of the assembly system to adapt to new situations, at least one optional future scenario has to be defined. These optional future scenarios describe the changes in the future demand of products or the changes in the product range which is going to be assembled in the assembly system. The optional future scenarios represent the uncertain future which the assembly system should be able to adapt to. Examples of these changes are the introduction of a new product or product variant to the assembly system or a change in the demand for the products. The ROF measures the changes in the revenues and expenses of the assembly system in comparison to the basic future scenario if the optional future scenario becomes real. Using the ROF, the different configurations of the assembly system can be compared with regard to their ability to adapt to the new situation. The ROF of an assembly system which is able to adapt to new situations very easily will be greater than the ROF of an assembly system which cannot adapt to the new situation as easily. The most beneficial configuration of the assembly system with regard to flexibility is the configuration with the greatest ROF. Equation 2 in Figure 2 shows the ROF for a specific optional future scenario.
Both key-figures are measured as a percentage of the initial investment at the beginning of the system's life cycle. The initial investment (I_0) includes all expenses necessary to enable the assembly system to start the assembly process.
The revenues of the assembly system are termed adjusted sales (AS). The adjusted sales are the sales of the products assembled in the considered assembly system minus the expenses for the products which are not caused by the considered assembly system. Expenses not caused by the considered assembly system are, for example, material costs, expenses for up- and downstream production processes, and selling and administrative expenses. The adjusted sales can be interpreted as the value added which the assembly system contributes to the products and the margins which can be achieved.
The expenses of the assembly system are separated into two categories: operating expenses (OE) and adaptation costs (AC). Operating expenses include all cost categories necessary for the daily operation of the assembly system:
- Labor costs
- Energy costs
- Costs for the workspace
- Tooling costs
- Costs of maintenance
- Logistics costs
- Costs of operating supplies
- Quality costs.
While the operating expenses describe the regular expenses for the daily operation of the system, the adaptation costs describe the irregular expenses necessary for adapting the system to a new situation caused by future incidents. Examples of future incidents are the introduction of a new product or a new product variant to the assembly system. The adaptation costs are the main indicator of the flexibility of the considered assembly system. The adaptation costs are low if the assembly system is flexible, and vice versa. For the basic future scenario the adaptation costs should normally be near zero, since the possible configurations of the assembly system have to be able to produce the demand of the basic future scenario. Cost categories within the adaptation costs are:
- Project costs
- Engineering costs
- Ramp-up costs
- Adaptation investments.
4.2 METHOD FOR EVALUATING FLEXIBILITY IN INVESTMENT DECISIONS
The method for evaluating flexibility in investment
decisions is separated into four steps (see Figure 3).
The steps of the method will be detailed in the
following paragraphs.
4.2.1 Definition of future scenarios
As already mentioned above, the ROA and ROF are calculated on the basis of different future scenarios. Thus, the first step of the method for evaluating flexibility in investment decisions is the definition of the different future scenarios. These scenarios have to be developed by experts with in-depth knowledge of the market in question, concerning the market development and the product strategy of the company (e.g. the marketing department). The scenarios have to provide the demand of all products and product variants which are going to be produced on the considered assembly system. Different scenarios may include a certain rise in demand or the introduction of a new product at a specific point in time. It is also possible to construct scenarios that specifically test the potential of only one of the types of flexibility. Scenarios for a specific test of volume flexibility are, for example, scenarios in which only the demand of the product changes and nothing else. In addition to the arrangement of a future scenario, the probability or likelihood of its occurrence also has to be estimated by the experts. The scenario with the highest likelihood is the basic future scenario. All other scenarios are optional future scenarios.
4.2.2 Calculation of the ROA
The only scenario of concern for the calculation of the ROA is the basic future scenario. The calculation of the ROA requires the variables in equation 1 to be obtained. The revenues and expenses included in these variables have to be collected and calculated for the different configurations of the assembly system. Possible data sources for these figures are the controlling department of the company and external quotations for the resources, equipment, etc. of the specific configuration of the assembly system. Based on the collected data, the ROA can be calculated for the different configurations of the assembly system, and an evaluation of the configurations concerning their automation approach is possible.
(Figure: the four steps of the method - definition of future scenarios (basic scenario BS and optional scenarios OS I, OS II, each with a likelihood p); calculation of the ROA per equation 1; calculation of the expected ROF per equation 2; comparison of the configurations by ROA and eROF)

Figure 3 Method for evaluating flexibility in investment decisions
4.2.3 Calculation of the expected ROF
The calculation of the expected ROF for the configurations of the assembly system is divided into two steps. In the first step, a flexibility check verifies whether the assembly system is capable of producing the demand of the different future scenarios; based on this check, the ROF for every optional future scenario and configuration is calculated. In the second step, the expected ROF of a configuration is calculated as the likelihood-weighted average over all optional future scenarios.
First of all, it is necessary to find out whether the configurations of the assembly system are capable of producing the customer demand of the future scenarios. Therefore, a comparison between the product requirements of the optional future scenario and the capabilities provided by the configurations of the assembly system is made. The flexibility check compares the requirements and the capabilities regarding process accuracy, product size, tooling possibility, process time and volume capacity. The outcome of the flexibility check is a detailed account of the aspects of the assembly system which would have to be adapted to meet the requirements of the optional future scenarios. Examples of these aspects are fixtures, tools or human capacities.
If an adaptation of the assembly system is needed, the revenues of the adapted system have to be examined with regard to the expenses needed to adapt it. Based on the detailed account of the aspects to be adapted, the required adaptation costs can be calculated. Using the information from the optional future scenario and the account of the aspects to be adapted, the adjusted sales and the operating expenses of the configuration of the assembly system in the specific optional future scenario can be calculated. The investment is unchanged in comparison to the basic future scenario, and the adjusted sales, operating expenses and adaptation costs of the configuration in the basic future scenario are already established from the calculation of the ROA. Using equation 2, the ROF for the option of adapting the specific configuration of the assembly system in the specific optional future scenario can be calculated. If no adaptation is necessary, the ROF for the specific optional future scenario is zero.
Finally, the expected ROF can be calculated from the ROFs of the different optional future scenarios. The ROFs of all scenarios are weighted with their individually estimated likelihoods (step 1 of the method) by multiplying the ROF of optional future scenario X with the probability of that scenario. Afterwards, the weighted ROFs of all scenarios are added to give the expected ROF of one configuration of the assembly system (Equation 3).

eROF = \sum_{X} P_X \cdot ROF_{OS_X} \qquad (3)

With:
eROF = expected ROF [%]
P_X = likelihood of optional future scenario X [-]
ROF_{OS_X} = ROF for optional future scenario X [%]
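A one-method sketch of equation (3) follows; the numbers in the comment are purely hypothetical (likelihoods of 0.35 and 0.15, as in a scenario set like Table 3, with assumed scenario ROFs of 4 % and 6 %).

public class ExpectedRof {
    // Equation (3): likelihood-weighted sum of the optional-scenario ROFs.
    static double expectedRof(double[] likelihoods, double[] rofs) {
        double eRof = 0.0;
        for (int x = 0; x < rofs.length; x++) {
            eRof += likelihoods[x] * rofs[x];
        }
        return eRof; // e.g. expectedRof({0.35, 0.15}, {4.0, 6.0}) = 2.3 [%]
    }
}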

4.2.4 Comparison of the configurations
Based on the calculated ROA and expected ROF, a comparison of the different configurations of the assembly system is possible. In the following chapter the method is applied to an industry case.
5. INDUSTRY CASE
The proposed approach is applied in this chapter to an industry case from the electronics industry. Three different configurations of an assembly system have been proposed for the production of electronic parts. The required type of flexibility for the assembly system is volume flexibility: the assembly system has to be capable of producing an increasing number of products per year.

(Figure: station sequence - Pre-Assembly, Assembly I & II, Testing I, Soldering, Testing II, Assembly III, Labelling, Packaging)
Figure 4 Configuration A of the assembly system
Figure 4 illustrates configuration A of the assembly system. The assembly process starts with the automated pre-assembly of the products, followed by two manual assembly stations, a testing rig, a soldering station, a second testing rig, a third assembly station, an automated labelling station and a packaging station. Configuration B differs from configuration A by an automated third assembly station. Configuration C extends configuration B by an automated packaging station. Table 2 summarizes the main information on the different configurations. Configuration A is the configuration with the lowest initial investment and capacity. Assembly station three is the first capacity constraint and the packaging station is the second capacity constraint. To extend the capacity of configuration A to the capacity of configuration C, adaptation costs of €100,000 for an automated third assembly station (capacity of configuration B) and €120,000 for an automated packaging station are necessary. The operating expenses of the configurations can be calculated using the variable costs per produced unit and the fixed costs determined by the number of employees (labor costs are €40,000 per employee and year). The adjusted sales are €3 per produced unit for every configuration of the assembly system.
Table 2 Configurations of the assembly system

                          Config. A   Config. B   Config. C
Investment [€]            800,000     900,000     1,020,000
Variable costs/unit [€]   1.72        1.75        1.82
Capacity/year [units]     305,000     320,000     330,000
Employees                 4           3           2


Since volume flexibility is the required type of flexibility of the assembly system, three different scenarios with different product demands are defined in Table 3. The scenarios differ in the percentage of yearly demand growth and in their likelihoods. Scenario I is the scenario with the lowest percentage growth. Because of its highest likelihood, scenario I is the basic future scenario. Scenarios II and III are the optional future scenarios.
Table 3 Scenarios of product demand

Year         Scenario I    Scenario II   Scenario III
             (+2%/year)    (+5%/year)    (+10%/year)
1            280,000       288,235       301,961
2            285,600       294,000       308,000
3            291,312       299,880       314,160
4            297,138       305,878       320,443
5            303,081       311,995       326,852
Likelihood   50%           35%           15%


Based on the given information and assuming a required rate of interest of 9 % p.a., the ROA of the configurations can be calculated. All configurations are capable of producing the demand of the basic future scenario, and adaptations of the assembly system are not necessary. Figure 5 summarizes the results of the comparison. Configuration B is the configuration with the highest ROA (5.04 %) and is therefore the most economic configuration for the basic future scenario. Configuration C has a low ROA because of its high initial investment and high variable costs per unit.
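As a plausibility check of equation (1), consider configuration B under scenario I, assuming that the operating expenses consist only of the variable costs (€1.75 per unit) and the labor costs (3 employees × €40,000 = €120,000 per year), with zero adaptation costs. The yearly earnings, demand · (€3.00 − €1.75) − €120,000, amount to €230,000, €237,000, €244,140, €251,423 and €258,851 for years 1 to 5. Discounted at 9 %, these sum to roughly €945,360, so ROA ≈ (945,360 − 900,000)/900,000 · 100 % ≈ 5.04 %, which matches the value reported above.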
For the calculation of the ROFs, the information whether an adaptation of the system is necessary or not is needed. In this industry case, the comparison of the product demand and the capacity of the configurations shows whether an adaptation of the assembly system is needed. In scenario II an adaptation of configuration A in year 4 is necessary; the adaptation costs are €100,000. In scenario III configuration A has to be adapted twice (years 2 and 4) and configuration B has to be adapted in year 4. The expected ROFs of the different configurations are also summarized in Figure 5. Because configuration A requires adaptations in both optional scenarios, it has an expected ROF of -0.14 %. Configuration C has the highest volume flexibility and therefore also the highest expected ROF, 2.88 %.
In this industry case configuration B is the most economic configuration over all scenarios. With a sum of ROA and expected ROF of 7.48 % and an ROA of 5.04 %, configuration B offers the best trade-off between initial investment and volume flexibility.

Figure 5 Results of the comparison
(Configuration A: ROA 2.96 %, eROF -0.14 %, sum 2.82 %; Configuration B: ROA 5.04 %, eROF 2.44 %, sum 7.48 %; Configuration C: ROA 0.18 %, eROF 2.88 %, sum 3.06 %)
6. SUMMARY AND CONCLUSIONS
Because of increasing market dynamics and competition, companies in the manufacturing industries have to consider the flexibility of their manufacturing systems in the early planning phases and especially in investment decisions. The approach proposed in this paper is capable of coping with the challenge of evaluating flexibility in investment decisions. This paper introduces the two main aspects of the approach:
- definition of the key-figures Return on Automation and Return on Flexibility
- introduction of the method for evaluating flexibility in investment decisions.
Furthermore, an application in an industry case has verified the relevance and potential of the approach.
The challenges for the new approach will be, on the one hand, the implementation of the method within an IT application and, on the other hand, the integration of such an application into the structure of existing companies' decision processes. The IT application has to provide different tools. Besides a calculation tool, tools for the definition of the different scenarios and the estimation of their likelihoods, as well as a tool for the flexibility check of the system, are essential. To ensure an easy integration of the IT application within a company, a central server with different front-end types is one possibility for an implementation of the approach. An efficient collection of the necessary data is very important for the integration of the application; therefore a standardized application for the data collection is necessary. On the one hand, this data collection application has to be able to cope with different data sources like ERP systems, existing databases, or company experts. On the other hand, it has to be able to select data of the right quality, which is essential for the evaluation approach. The data collector module suggested by Georgoulias et al (2009) is an example of such a data collection application.
The proposed approach is capable of evaluating the volume, mix and product flexibility of automated assembly systems and gives companies support for well-informed investment decisions.
7. ACKNOWLEDGMENTS
The new approach is a result of the COSMOS (Project No. 246371-2) research and development project, which is funded by the European Commission within call FP7-NMP-2009-SMALL-3 as part of the Seventh Framework Programme (FP7). COSMOS is the acronym for "Cost-driven Adaptive Factory based on modular self-contained factory units". The main objectives of COSMOS are the design, development and implementation of a distributed control system for the management of a factory with flexible, modular and evolvable automation (www.cosmosproject.eu).


REFERENCES
Abele E., Liebeck T., Wörn A., Measuring flexibility in
investment decisions for manufacturing systems,
Annals of the CIRP, Vol. 55, No. 1, 2006, pp 433-436
Alexopoulos K., Mourtzis D., Papakostas N.,
Chryssolouris G., DESYMA: assessing flexibility for
the lifecycle of manufacturing systems, International
Journal of Production Research, Vol. 45, No. 7,
2007, pp 1683-1694
Browne J., Dubois D., Rathmill K., Sethi S.P., Stecke
K.E., Classification of flexible manufacturing
systems, The FMS Magazine, Vol. 2, No. 2, 1984, pp
114-117
Chryssolouris G., Flexibility and its measurement,
Annals of the CIRP, Vol. 45, No. 2, 1996, pp 581-587
Chryssolouris G., Manufacturing systems: theory and practice, 2nd Edition, Springer-Verlag, New York, 2006
Cox T., Towards the measurement of manufacturing
flexibility, Production & Inventory Management
Journal, Vol. 20, No. 1, 1989, pp 68-89
De Toni A., Tonchia S., Manufacturing flexibility: a
literature review, International Journal of Production
Research, Vol. 36, No. 6, 1996, pp 1587-1617
Georgoulias K., Papakostas N., Mourtzis D.,
Chryssolouris G., Flexibility evaluation: a toolbox
approach, International Journal of Computer
Integrated Manufacturing, Vol. 22, No. 5, 2009, pp
428-442
Meyer C., Betriebswirtschaftliche Kennzahlen und
Kennzahlen-Systeme 3rd Edition, Verl. Wiss. &
Praxis, Sternenfels, 2006
Newman W.R., Hanna M., Maffei M.J., Dealing with
the uncertainties of manufacturing: flexibility, buffers
and integration, International Journal of Operations
and Production Management, Vol. 13, No. 1, 1993, pp
19-34
Reinhart G., Krebs P., Rimpau C., Czechowski D., Flexibilitätsbewertung in der Praxis: Einsatz einer Methode zur lebenszyklusorientierten Bewertung von Flexibilität in der Produktion, wt Werkstattstechnik online, Vol. 97, No. 4, 2007, pp 211-217
Rogalski S., Ovtcharova J., Flexibilitätsbewertungen von Produktionssystemen: ecoFLEX - eine branchenübergreifende Methodik, Zeitschrift für wirtschaftlichen Fabrikbetrieb, Vol. 104, No. 1-2, 2009, pp 64-70
Röhl J., Monetäre Flexibilitäts- und Risikobewertung: Stochastische Simulation von Produktionssystemen während der Produktentwicklungsphase, Shaker Verlag, Aachen, 2010
Schuh G., Gulden A., Wemhöner N., Kampker A., Bewertung der Flexibilität von Produktionssystemen: Kennzahlen zur Bewertung der Stückzahl-, Varianten- und Produktänderungsflexibilität auf Linienebene, wt Werkstattstechnik online, Vol. 94, No. 6, 2004, pp 299-304
Schuh G., Wemhöner N., Friedrich C., Lifecycle oriented evaluation of automotive body shop flexibility, In: Zaeh M.F. et al (eds), CARV 2005, Utz, Munich, 2005
Seidel H., Garrel J. von, Flexibilität in der Produktion kleiner und mittelständischer Unternehmen, wt Werkstattstechnik online, Vol. 101, No. 4, 2011, pp 278-279
Sethi A.K., Sethi S.P., Flexibility in manufacturing: a
survey, Journal of Flexible Manufacturing Systems,
Vol. 2, No. 4, 1990, pp 289-328
Suarez F.F., Cusumano M.A., Fine C.H. Flexibility and
performance: a literature critique and strategic
framework, Sloan School WP# 3298-91-BPS, MIT,
1991
Zäh M.F., Sudhoff W., Rosenberger H., Bewertung mobiler Produktionsszenarien mit Hilfe des Realoptionenansatzes, Zeitschrift für wirtschaftlichen Fabrikbetrieb, Vol. 98, No. 2, 2003, pp 646-651
Zäh M.F., Bredow M., Möller N., Müssig B., Methoden zur Bewertung von Flexibilität in der Produktion, Industrie Management, Vol. 22, No. 4, 2006, pp 29-32


THE ROLE OF SIMULATION IN DIGITAL MANUFACTURING: APPLICATIONS AND OUTLOOK
Dimitris Mourtzis
University of Patras
mourtzis@lms.mech.upatras.gr
Nikolaos Papakostas
University of Patras
papakost@lms.mech.upatras.gr


Dimitris Mavrikios
University of Patras
mavrik@lms.mech.upatras.gr
Sotiris Makris
University of Patras
makris@lms.mech.upatras.gr
Kosmas Alexopoulos
University of Patras
alexokos@lms.mech.upatras.gr
ABSTRACT
Digital manufacturing technologies have been considered an essential part of the continuous effort towards reducing a product's development time and cost, as well as towards expanding customization options. Simulation-based technologies constitute a focal point of digital manufacturing solutions, since they allow for the experimentation and validation of different product, process and manufacturing system configurations. This paper investigates simulation-based applications in a series of different technological and manufacturing domains. At first, the paper discusses current industrial practice, focusing on the use of Information Technology. Next, a series of simulation-based solutions are explored in the domains of product and production process design, as well as in the area of enterprise resource planning. The current technologies and research trends are discussed in the context of the new landscape of computing hardware technologies and the emerging computing services, including the initiatives comprising both the internet cloud and the internet of things.
KEYWORDS
Information Technology, Simulation, Computer-Aided Design, Computer-Aided Engineering,
Computer-Aided Manufacturing, Enterprise Resource Planning

1. INTRODUCTION
Information Technology (IT) has become one of the cornerstones of modern manufacturing. IT has helped manufacturers reduce development time, eliminate a significant part of the design and build cycles, and address the need for more customer-oriented product variants (Chryssolouris, 2006; Chryssolouris et al, 2008).
The recent events of the volcano's eruption in Iceland and the nuclear disaster in Fukushima have reaffirmed the need for greater flexibility in order for manufacturing organizations to cope with the dynamic nature of the market and its fluctuations.
This section deals with the current practice in manufacturing and emphasizes the technologies used in today's IT applications in the production environment. The following section deals with the IT applications used for the design of parts and products, including Computer Aided Design (CAD) with Virtual and Augmented Reality, Computer Aided Engineering (CAE) and Product Lifecycle Management (PLM) applications. Next, a set of applications specializing in manufacturing process design are studied, including applications belonging to the Computer Aided Process Planning and Manufacturing (CAPP, CAM) categories.
The Enterprise Resource Planning (ERP) systems are explored in the next section, focusing on the way that simulation is used for generating, evaluating and selecting production planning alternatives.


The current and future trends are discussed at the end, elaborating on the anticipated hardware and software developments and the emerging needs for seamless integration and collaboration among multi-disciplinary engineering teams.
1.1. SIMULATION AND INFORMATION
TECHNOLOGY IN MANUFACTURING
Information technologies (IT) are key to manufacturing competitiveness. It is widely accepted that IT is a major contributor to manufacturing innovation and productivity. Over the past few decades, the extensive use of IT in manufacturing has allowed these technologies to reach the maturity stage.
Manufacturing IT systems today support the design/development and production/operation functions of companies. Design/development functions refer to the implementation of new products and of the production systems that produce them, while production/operation functions comprise the activities concerned with planning and controlling the processes used in producing goods and services. IT systems stand at the root of these activities. Their application ranges from supporting simple machining applications to manufacturing and supply chain planning and control. In this chapter, the IT technologies used for product development and operation and for the control of manufacturing systems are discussed.
1.2. PRODUCT DEVELOPMENT
Nowadays, few products are developed by an
individual person or even a single company
working on its own. This is especially true for
complex products, such as cars, aircraft, electronics
and white goods or even for presumably less
complex products, such as shoes and clothing.
Development and design are carried out by a team
(including managers, engineers and stylists), which
may be collocated or geographically dispersed, and its members may belong to the same company or be employees of different companies. There are different reasons that lead to a distributed development team, including local competence and expertise or lower costs in specific areas. Under this
perspective, collaborative engineering is a method
of product development, which integrates
distributed teams for virtual collaboration.
Collaborative Product Development (CPD) is an Internet-based computational architecture that supports the sharing and transferring of knowledge and information across the product life cycle amongst geographically distributed companies, in support of sound engineering decisions in a collaborative environment (Rodriguez and Al-Ashaab, 2005). The
main goal of CPD is to integrate and leverage
knowledge, technologies, and resources among all
the collaborators through the full life cycle of
product development. In the last decades,
significant efforts have been made in the research of
CPD. In Mavrikios et al (2011), a CPD system is presented that consists of: a web-based platform for content management and user interaction, a tool for real-
time collaborative geometry modeling, Virtual &
Augmented Reality (VR/AR) platforms and a tool
for collaborative decision making, which is
demonstrated by a real life design case, related to
the development of a new laser machine.
Production engineering projects are usually
multidisciplinary and inter-organizational in nature.
The current situation in digital production engineering is characterized by a large number of
different IT tools, based both on PCs and on
workstation applications. Alexopoulos et al (2011)
presented a workflow system for collaborative
computer aided production engineering. This
workflow system supports the execution of
production engineering activities in the Extended
Enterprise and is built upon web services and the
Business Process Execution Language (BPEL).
1.3. OPERATION AND CONTROL
At the shop-floor level, the future controller
selection will be based on factors such as adherence
to open industry standards, multi-control discipline
functionality, technical feasibility, cost-
effectiveness, ease of integration, and
maintainability. More importantly, embedded
systems and small-footprint industrial strength
operating systems will gradually change the
prevailing architecture, by merging robust hardware
with open control. Integration of control systems
with CAD and CAM, scheduling and simulation
systems as well as real-time control, based on the
distributed networking between sensors and control
devices (Ranky, 2004) currently constitute key
research topics.
Recent developments made in the use of wireless
technologies on the shop floor, such as
radiofrequency identification (RFID), as a part of
automated identification systems, involve retrieving
the identity of objects and monitoring the items
moving through the manufacturing supply chain,
thus providing accurate and timely identification
information (McFarlane, 2003). However, the
integration of wireless IT technologies at an
automotive shop floor level is often prevented
because of the demanding industrial requirements,
namely immunity to interference, security, and high
degree of availability.
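To illustrate how such automated identification data might be consumed, the following minimal Python sketch (all tag identifiers, station names and events are hypothetical) folds a stream of RFID reads into an ordered trace of the stations each item has visited:

    from collections import defaultdict

    # Hypothetical RFID read events: (tag_id, station, timestamp)
    reads = [
        ("TAG-001", "welding", 10.0),
        ("TAG-001", "welding", 10.4),   # duplicate read at the antenna
        ("TAG-002", "welding", 12.5),
        ("TAG-001", "painting", 45.0),
    ]

    def build_traces(reads):
        """Fold raw tag reads into an ordered station trace per item."""
        traces = defaultdict(list)
        for tag_id, station, ts in sorted(reads, key=lambda r: r[2]):
            # Suppress consecutive duplicate reads at the same station
            if not traces[tag_id] or traces[tag_id][-1][0] != station:
                traces[tag_id].append((station, ts))
        return traces

    for tag, trace in build_traces(reads).items():
        print(tag, "->", trace)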
At an extended enterprise level, the logistics and
the supply chain deal with the flow and storage of
goods and the related information. The supply chain is


emerging as a key source of competitive advantage
and a leading reason for the emergence of inter-
organizational systems (Lewis and Talalayevsky,
1997). It is common practice for internet-based IT environments to be implemented, in
order to improve communication and collaboration
among all the parties within a supply chain
(Papazoglou et al, 2000). Since different systems
use specific data storage mechanisms, a direct
exchange of data among these systems is not
possible. This is the main reason for the slow
execution of the business process and the reduced
performance of the entire supply chain. In order to
step up the computer-based data exchange within
the supply chain, the effort required to accomplish
the communication among the different software
systems should be reduced, thus enabling the easy
and fast flow of information among the partners.
The requirement of communication among software
systems can be addressed by the adoption of a
neutral data format. In Chryssolouris et al (2004),
an XML, 3-tier, web-based approach is presented
for supporting the communication of different
partners for enabling the information flow within
the chain of the ship repair industry. In a similar
manner, Makris et al (2008) discuss how
information technology, particularly the ISO 10303
- STEP and XML can be jointly utilized in support
of the communication and data exchange among the
partners, within the ship repair industry, worldwide
and by using the web as a communication layer. In
Makris et al (2011), an internet-based supply chain
control logic, where supply chain partners provide
real time or near real time information, regarding
the availability of parts required for the production
of highly customizable vehicles, is presented.
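As a minimal illustration of such a neutral-format exchange, the following Python sketch serializes and parses a simple order document with standard XML tooling; the element names are illustrative assumptions, not those of the cited systems:

    import xml.etree.ElementTree as ET

    def export_order(order_id, part, qty):
        """Serialize an order into the neutral XML format (sender side)."""
        root = ET.Element("order", id=order_id)
        ET.SubElement(root, "part").text = part
        ET.SubElement(root, "quantity").text = str(qty)
        return ET.tostring(root, encoding="unicode")

    def import_order(xml_text):
        """Parse the neutral format back into native structures (receiver side)."""
        root = ET.fromstring(xml_text)
        return {"id": root.get("id"),
                "part": root.findtext("part"),
                "quantity": int(root.findtext("quantity"))}

    message = export_order("A-42", "hull-plate", 12)  # illustrative order
    print(import_order(message))

Because both partners agree only on the neutral document structure, neither needs access to the other's internal data storage mechanism.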
1.4. SIMULATION
Simulation is a very helpful and valuable IT
instrument in manufacturing. It can be used in an
industrial environment, allowing the system's
behaviour to be tested. It provides decision makers
and engineers with a tool for low cost, secure and
fast analysis to investigate the complexity of their
systems and the way that changes in the systems
configuration or in the operational policies may
affect the performance of the system or organization
(Hosseinpour and Hajihosseini, 2009).
Simulation models are categorized into static,
dynamic, continuous, discrete, deterministic, and
stochastic. Simulation is used during both a manufacturing system's design and its operation.
Usually, it is referred to as off-line and on-line
simulation respectively (Mirdamadi et al, 2007).
The system design generally involves making long
term decisions, such as facility layout and system
capacity configuration. In this case, simulation run
time is not a significant factor during the simulation
process. Computer simulation offers the great
advantage of studying and statistically analysing
what-if scenarios, thus reducing the overall time
and cost required for taking decisions. On the other
hand, the system's operation involves short term
decision making and as such, the simulation run
time is an important factor (Smith, 2003). In that
case, the run time of simulation is strongly related
to the number of entities contained in the production
system, the number of events they generate, the
complexity of the activities and the time horizon of
the simulation. If the on-line simulation is
integrated with the IT system, two important
achievements are possible: a) its capacity to reliably
predict the future behaviour of the shop floor, and
b) its capacity to emulate and / or determine the
control logic of a manufacturing system. Rao et al
(2008) present an approach of an on-line simulation
system for real-time shop floor control in a
Manufacturing Execution System (MES). The
simulation system can collect data from a physical
shop floor through the MES and the MES can also
execute the shop floor control strategy, which is
resolved by the simulation system.
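To make the off-line, what-if use of simulation concrete, the following deliberately simplified Python sketch (a single parallel-machine station with assumed arrival and service times, not any of the cited systems) compares the mean flow time of two candidate capacity configurations:

    import random

    def simulate(num_machines, jobs=500, arrival=1.0, service=1.8, seed=1):
        """Crude what-if model: mean flow time of jobs through a parallel
        machine station with exponential inter-arrival times (assumed data)."""
        random.seed(seed)
        free_at = [0.0] * num_machines        # next time each machine is free
        t, total_flow = 0.0, 0.0
        for _ in range(jobs):
            t += random.expovariate(1.0 / arrival)   # next job arrival
            m = min(range(num_machines), key=lambda i: free_at[i])
            start = max(t, free_at[m])
            free_at[m] = start + service
            total_flow += free_at[m] - t             # waiting + processing
        return total_flow / jobs

    for machines in (2, 3):                   # two candidate configurations
        print(machines, "machines -> mean flow time",
              round(simulate(machines), 2))

Running many such replications for each configuration is what allows the what-if scenarios to be analysed statistically before any investment is made.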
Since the late 1980s, the simulation software
packages have been providing visualization
capabilities, including animation and graphical user
interaction features. Apart from these enhanced
visualization and reporting capabilities, simulation
software should be easily integrated with a
company's IT tools. An important problem to deal
with, especially in on-line simulation
configurations, is the requirement of data
acquisition in the real production system. It
requires useful information to be extracted from the
physical system elements during their operation.
The data required for the simulation model should
have two main characteristics: availability and
quality. Data availability may be achieved when all
the necessary data is recorded or measured (e.g.
through sensors). The data quality depends both on
the errors of acquisition and the errors of
measurement. For this reason, simulation packages
are usually equipped with interfaces for the
exchange of data with different types of
applications, such as CAD tools, PDM, office tools,
MES, production planning and optimization
systems either in a specific data format or by using
standards, such as XML.
While the factory Digital Mock-Up (DMU)
software allows manufacturing engineers to
visualize the production process via a computer,
which allows an overview of the factory's
operations for a particular manufacturing job, the
Discrete Event Simulation (DES) helps engineers to
focus closely on each individual operation. DES


may support decision making in the early phases
(conceptual design and pre-study), evaluating and
improving several aspects of manufacturing and
assembly processes, such as the location and the
size of the inventory buffers, the assessment of a
change in product volume or mix, and the
throughput analysis (Papakostas, 2006).
There are several aspects that need to be
addressed by the simulation community for the
endorsement of a widespread use of modelling and
simulation for decision support in current and future
manufacturing systems. According to Fowler and Rose (2004), some important aspects are:
- Reduce the time required to design, collect information / data, build, execute, and analyze simulation models in support of manufacturing decision making. This will lead to an increased number of analysis cycles.
- New or improved simulation approaches to be used in manufacturing for operational / real-time decisions. Today, due to the increased amount of data and information collected and maintained by the current shop floor information systems, the application of such simulation models is feasible. In this context, the development of simulation / virtual and synchronized counterparts of the real factory should also be considered.
- Improved integration, in a plug & play manner, of the simulation software packages into the existing IT infrastructure.
2. PRODUCT DESIGN
The introduction of advanced simulation-based
visualization, interaction and collaboration
technologies has revolutionized the product design
process during the last fifteen years.
The Virtual Reality (VR) technology is often
defined as the use of real-time digital computers
and other special hardware and software to generate
the simulation of an alternate world or environment,
believable as real or true by the users (Lu et al,
1999). VR allows humans to immerse themselves within a 1:1 scale computer-generated 3D simulation of real
environments and interact with them in a realistic
way, which is something that traditional simulation
cannot do. VR has provided designers and
engineers with the means of virtually prototyping
and evaluating products already at the early stages
of the design phase. Functional simulation,
usability analysis, product ergonomics,
maintainability analysis are just some fields of
virtual prototyping applications, in which the VR
technology has brought significant added value (Ye
et al, 2007; Mavrikios et al, 2007a). In the field of
virtual manufacturing, a number of VR-based
environments have been demonstrated, providing
desktop and/or immersive functionality for process
analysis and training, already in their early product
development phases, in such processes as
machining, assembly, and welding (Mavrikios et al,
2006; Pappas et al, 2006; Chryssolouris et al, 2002).
VR-based human modelling and simulation have also been extensively used in recent years
for the investigation of human interactions with
complex products or work environments
(Alexopoulos et al, 2007; Mavrikios et al, 2007b).
The simulation of engineering applications, based
on the VR technology, is very challenging due to its
highly interactive context, imposed by the number
of functions and the need for realism. On that basis,
recent research works increasingly focus on the development of novel interface features in the
fundamental cornerstones of the VR technology,
namely in immersive visualization and interaction.
In this context, intuitive 3D interfaces and user-
friendly interaction metaphors for VR-based
immersive simulations (Figure 1) have been
introduced and validated in typical industrial use
cases (Rentzos et al, 2011).
Augmented Reality (AR) is an innovative, fast
emerging technology, which can overlay a real-
world environment with layers of spatially-related
computer-generated information, such as
alphanumeric data, multimedia and virtual 3D
elements. The AR technologies have been applied
to engineering applications relevant to the entire
product lifecycle, from product concept and design
to production and maintenance (Azuma et al, 2001).
AR has become a major part of the prototyping
process.


Figure 1 Drilling, riveting and cabin design test-cases, executed on a virtual aircraft (Rentzos et al., 2011)



Figure 2 Creating and displaying primitives in a 3D space (Kaufmann and Schmalstieg 2003)


Figure 3 A multi-user collaborative product design session (Mavrikios et al. 2011)

For example, in the automotive industry, AR has
been used for assessing interior design, on real car
body mock-ups, by overlaying different car
interiors, which are usually available only as 3D-
models in the initial phases of development (Fründ
et al, 2005). Some other recent studies introduced
AR-based re-formable mock-up systems for design
evaluation allowing the interactive creation and
modification of shapes as well as colors, textures,
and user interfaces (Figure 2) (Kaufmann and
Schmalstieg 2003, Park 2008).
Besides product development, several research
projects have been investigating the use of AR
techniques in production, as well as in service
scenarios, ranging from aircraft and car
manufacturers to machine tool and power plant
manufacturers (Weidenhausen et al, 2003). AR has
been applied into assembly systems in order to
simulate and display in real time process
information and guidelines with reference to the
components or subassemblies and the workflow
(Salonen et al, 2007). A number of research papers
have also suggested projection-based AR display
mechanisms to allow users to visualize machining
data that is projected onto real machining scenes
(Olwal et al, 2008).
Computer Supported Collaborative Design
(CSCD) and CPD have been two widely used terms,
describing the process of designing a product
through the collaboration among multidisciplinary
product developers, associated with the entire
product lifecycle. Several research activities related
to the development of web-based methodologies
and prototype systems for CSCD have been
reported in the scientific literature (Shen et al,
2008). Extensive research work on collaborative
CAD has been reported, addressing issues, such as
co-design systems and feature / assembly-based
representations, web-based visualization, 3D
representations for web-based applications and 3D
streaming over networks (Fuh and Li 2005). The
integration of different commercial client CAD
systems into a co-design platform has been
demonstrated in some cases (Li et al, 2007). Shared product visualization and collaborative design review have been another major area of research and
development work. Methods of sharing virtual
product representations over the web and a number


of CAD-integrated shared workspaces have been
presented in the scientific literature for distributed
design review (Sharma et al, 2006; Hren and
Jezernik, 2008). Shared virtual reality based
environments have also been suggested in support
of interactive collaboration in product design
review (Pappas et al 2006; Chryssolouris et al
2008). A number of recent studies have also
investigated the use of AR for the visualization of
product information in collaborative design
environments (Shen et al, 2008). The development
of an integrated web-based platform (Figure 3) for
collaborative design, including real-time
collaborative geometry modelling, interactive and
immersive product visualization along with a smart
decision support mechanism for collaborative
design evaluation has also been recently reported
(Mavrikios et al, 2011).
3. PROCESS DESIGN
A variety of approaches have been developed for
assisting the design of manufacturing processes in
digital manufacturing. The design of a production
system involves the solution of multiple problems,
such as those of the production technology selection
(Chuang et al, 2009), the equipment selection from
a set of candidate solutions for each operation
(Khouja et al, 2000; Manassero et al, 2004), the
balancing and dimensioning of workstations, the
dimensioning of buffers, the layout and resource
placement problem (Aly et al, 2010) and so forth.
The large number of product variants is the driving factor behind flexible manufacturing systems. For
planning and developing automated production
cells, the consideration of mechanics, electrics and
software (robotics, PLC) in a mechatronic resource
model is required (Baer, 2008). Digital production
engineering is a process that involves distributed
engineering teams using heterogeneous IT tools to collaboratively design and implement a manufacturing system. Indicatively, the process of
designing an assembly line can be seen in Figure 4.
The implementation of a workflow system for the
collaborative computer-aided production
engineering, which supports the simulation and
execution of production engineering activities in the
Extended Enterprise (EE) has been suggested and it
is built on the basis of Web services and the BPEL
(Business Process Execution Language). It also
manages the electromechanical data exchange, with
the use of XML that conforms to the
AutomationML format. An application of the tool,
developed for an assembly engineering project in
the automotive industry, is being presented
(Alexopoulos et al, 2011).
A new approach to manufacturing system design, involving the generation of assembly line design alternatives and their simulation / evaluation against multiple user-defined criteria, has been presented. The design problem has been formulated
in a way that it can be transformed from a decision
making problem into a search problem. A
systematic way of generating the solution space of
the search problem, based on real life design project
requirements, has been described in detail.
Investment cost, availability, equipment
reutilization, annual production volume and
flexibility metrics are the criteria used for the
evaluation of the alternative designs. The results of
this simulation-based application in an automotive
case, indicate that the tool is appropriate for
supporting decision making during the stage of
Rough Planning, although several functionalities
can be used during other stages, such as the detailed
planning concept (Michalos et al, 2011b).
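The transformation of the design problem into a search problem can be sketched roughly as follows; the design space, criterion formulas and weights in this Python sketch are illustrative placeholders for the richer simulation outputs and criteria used in the cited work:

    from itertools import product

    # Illustrative design space: number of stations x robots per station
    alternatives = list(product([4, 5, 6], [1, 2]))

    def evaluate(stations, robots):
        """Toy criterion values (placeholders for simulation outputs
        such as investment cost, annual volume and flexibility)."""
        cost = stations * 100 + robots * stations * 80   # investment
        volume = stations * robots * 900                 # annual volume
        flexibility = robots / stations                  # crude metric
        return cost, volume, flexibility

    def utility(cost, volume, flexibility, w=(0.4, 0.4, 0.2)):
        """Weighted sum of criteria, each normalized to [0, 1] using the
        extreme values of this particular design space."""
        return (w[0] * (1 - cost / 1560) +
                w[1] * (volume / 10800) +
                w[2] * flexibility * 2)

    best = max(alternatives, key=lambda a: utility(*evaluate(*a)))
    print("preferred design (stations, robots per station):", best)

In the cited work the solution space is generated systematically from real project requirements and each alternative is simulated; here, the evaluation step is reduced to closed-form placeholders so that the search structure itself remains visible.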
The key features of cooperating robotic cells in
the automotive assembly have been considered from
the perspective of the assembly process design. The
key elements in the engineer's decision making
process are highlighted, while designing, simulating
and implementing an assembly line for the Body In
White (BIW). A case study is demonstrated in which different scenarios are compared: the first considers the use of a conventional fixture-based configuration of a robotic cell for performing a welding operation, while the second features the use of cooperating robots. These cases are compared with the aid of a
simulation platform and future potential
developments are also discussed with focus being
given to the assembly cell design (Papakostas et al,
2011a). In Papakostas et al (2011b), a desktop
assistant has been implemented for generating and
simulating a series of alternatives in a 3D digital
manufacturing environment that defines the base
position of the cooperating robots in an assembly
cell and the paths of their Tool Centre Points (TCP)
(Figure 5).
A variety of metrics have been designed along
with different approaches for their integration into
software tools. The feasibility of integrating flexibility evaluation techniques into state-of-the-art Product Lifecycle Management (PLM) packages is being investigated. The existing
product data models and structures are capable of
storing most of the required data for the evaluation
of flexibility. As a result, flexibility evaluation
tools, capable of providing instant flexibility
performance estimation of the production lines / cells
designed, can be developed as add-ons to the design
packages. The concept is demonstrated on the
design of an assembly cell with the use of the
Penalty of Change (POC) metric (Michalos et al,
2011).




Figure 4 Automotive cell production planning use case diagram

Figure 5 Design of a cooperating robots cell with a simulation-based digital manufacturing environment (Papakostas et al, 2011)


The assembly line flexibility and its role in the
market have been evaluated in the automotive
industry. In simulated Body in White (BIW)
automotive assembly lines, different levels of
flexibility have been introduced. Market demand
has been assumed to be varying widely. The level
of flexibility has been quantifiably estimated. The
way that the different assembly lines respond to a
fluctuating demand has been determined and
compared. The effect of the assembly line
flexibility is discussed for each case (Georgoulias et
al, 2008). On the other hand, manufacturers are in
need of methods and tools, related to the assembly
process, able to quantify the exact cost elements. A
series of methods have been suggested for providing
a generalized cost model for assembly processes
that can be used as a tool to support decision
making during both the design and operation of
assembly systems. The advantage of such a model
lies in its generality and simplicity that allows for a
quick (yet not rough) estimation of cost
implications, related to different alternative
solutions to the design and operation problems that
manufacturers are confronted with. The approach
proposed is based on the Activity Based Costing
(ABC) technique and aspires to identify and
combine major assembly cost factors into a single
cost model, virtually applicable to any assembly
process. However, special emphasis is given to
applications coming from the automotive sector
(Michalos et al, 2008), where this cost model may
be easily combined with accurate simulation
models.
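A minimal Python sketch of the Activity Based Costing idea follows; the activities and cost rates are assumed for illustration and are not the actual factors of the cited model:

    # Assumed cost rates per activity driver (illustrative values)
    RATES = {
        "handling":   0.8,   # cost per handling operation
        "joining":    2.5,   # cost per joining operation
        "inspection": 1.2,   # cost per inspection
    }

    def assembly_cost(activity_counts):
        """ABC: total cost = sum over activities of driver count x rate."""
        return sum(RATES[a] * n for a, n in activity_counts.items())

    # Usage: estimated cost of assembling one product variant
    variant = {"handling": 14, "joining": 6, "inspection": 2}
    print("unit assembly cost:", assembly_cost(variant))

The generality of the approach comes from the fact that only the activity catalogue and the rates change between assembly processes; the costing logic itself stays the same.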
A challenge that has to be met during process design is the multi-aspect nature of the problem itself, meaning that many objectives and constraints need to be accounted for during the design. Investment cost, annual production volumes, availability, reutilization, flexibility, environmental friendliness and a vast range of other criteria need to be minimized / maximized. The simulation and evaluation of the performance of a single line design against these criteria is considered a very hard task, let alone the evaluation of several
hundreds or thousands of different design
alternatives. Decision support tools are required to
take up the computational tasks and allow the
simulation and examination of a wide population of
design alternatives, in search of the ones that better
satisfy the users criteria (Matta et al, 2001).
Product mix and volumes, number of variants and
cost, are the main design parameters. The
combination of these parameters yields different
design alternatives. However, quite often, it is
difficult to decide which configuration is the
optimum. Flexibility, in general, is a desired
attribute; nevertheless, the introduction of more than
four or five models in a line may increase the
complexity and the cost to prohibitively high levels.
Tools are required for the break-even analysis and
the establishment of the golden ratio among the
critical parameters. For instance, as shown in Figure
6, nomograms can be derived for each system
showing how the operational cost and production
volume are affected by different operating
parameters of the system. In this context, and based
on the companys objectives, areas of the
nomogram, where acceptable configurations exist,
can be identified and used for the selection of a
satisfactory configuration (Michalos et al, 2010).
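The data behind such a nomogram can be generated by sweeping the operating parameters and tabulating the resulting cost and volume, along the lines of the following sketch; all rates and formulas are assumptions made for illustration:

    # Sweep illustrative operating parameters and tabulate the
    # (volume, cost) pairs from which nomogram curves are drawn.
    def operating_point(shifts, cycle_time):
        volume = shifts * 8 * 3600 / cycle_time      # parts per day
        cost = 1200 * shifts + 0.05 * volume         # fixed + variable
        return volume, cost

    for shifts in (1, 2, 3):
        for cycle_time in (60, 90, 120):             # seconds per part
            v, c = operating_point(shifts, cycle_time)
            print(f"shifts={shifts} ct={cycle_time}s -> "
                  f"volume={v:.0f}, cost={c:.0f}")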

Figure 6 - Nomogram for the selection of line operating parameters (Michalos et al, 2010)
4. ENTERPRISE RESOURCE PLANNING
The need for reduced development time together
with the growing demand for more customer-
oriented product variants have led to the next
generation of information technology (IT) systems
in manufacturing (Chryssolouris et al, 2008).
Manufacturing organizations strive to integrate their
business functions and departments with new
systems in an enterprise database, following a
unified enterprise view (Chryssolouris, 2006).
These systems are based on the digital factory
concept, according to which, production data
management systems and simulation technologies
are jointly used for optimizing manufacturing
before starting the production and supporting the
ramp-up phases (Westkaemper, 2007). As
displayed in Figure 7, the digital factory and
manufacturing concept comprises technologies,
such as computer-aided design, engineering, process
planning and manufacturing, product data and life-
cycle management, simulation and virtual reality,
automation, process control, decision support,
manufacturing and enterprise resource planning,
logistics, supply chain management, and e-
commerce systems (Chryssolouris et al, 2008). This
section mainly focuses on the planning, scheduling,
simulation and supply chain management aspects of
the digital factory concept.



Figure 7 - ERP Systems and digital manufacturing
(Adapted from Kühn, 2006)
The Enterprise Resource Planning (ERP) systems
are becoming more and more prevalent throughout
the international business world. Most production /
distribution companies nowadays use ERP systems
to support their production and distribution
activities. Furthermore, these information systems
are designed to integrate and partially automate
financial, resource management, commercial, after-
sale, manufacturing and other business functions into one system built around a database. ERP software
packages are often available as part of much larger
software frameworks, which provide integration capabilities but, on the other hand, are far more expensive and usually require a great customization and configuration effort, along with considerable installation costs (especially with regard to SMEs). Moreover, the ERP software
packages offer web-enabled and e-commerce
capabilities. A key ingredient of most ERP systems
is the use of a unified database for the storage of
data associated with various system modules. ERP
systems attempt to integrate all data and processes
of an organization into a unified system. A typical
ERP system will use multiple software and
hardware components to achieve the integration.
Industrial sectors such as the shipbuilding, the
aerospace, the textile, the food and chemical
industries, have applied simulation, planning and
scheduling methods in order to support their
production activities (Chryssolouris et al, 2003).
The ERP systems often incorporate optimization capabilities for cost and time savings in virtually every manufacturing process. Indicative examples
involve cases from simple optimization problems,
shopfloor scheduling, and production planning to
today's complex decision-making problems
(Chryssolouris et al, 2000 and Chryssolouris et al,
2005). The all-around solution offered by ERP suites is one of the basic parts of the digital manufacturing paradigm. The implementation of digital manufacturing with the latest planning tools and techniques has proved to have a positive impact on enterprises and shows great potential for future growth.
4.1. APPLICATIONS IN PRODUCTION
The benefits of digital manufacturing have received
much attention, resulting in numerous research
studies and applications over the last decade.
Below are some indicative examples of recent simulation-based developments on the digital manufacturing concept, with applications in a
variety of industrial domains.
In 2000, Chryssolouris et al proposed an approach that involved the generation of scheduling alternatives, their transformation through a rule-based mechanism into nesting solutions (Figure 8) and finally their evaluation using different criteria
that reflected the overall production objectives, such
as meeting due dates, minimizing the cost and
maximizing the machine and stock sheet utilisation
(Chryssolouris et al, 2000).

Figure 8 - A carpet nesting schedule (Chryssolouris et al,
2000)
A four-level hierarchical model for scheduling
purposes, adapted to the characteristics and the
requirements of a refrigerator manufacturer was
proposed in 2003 (Chryssolouris et al, 2003). This
approach focused on the manufacturing procedures.
The method required the modeling of the factory's facilities and its workload with the help of a hierarchical model (Figure 9). The results showed
that the method applied produces adequate and easy-to-use results and that, depending on the dispatching rule used, a schedule with different performance measures is produced. Thus, the user
is able to select the appropriate rule in order to
produce a suitable plan.
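The dependence of the schedule's performance measures on the dispatching rule can be illustrated with a toy single-machine Python example; the rules shown and the job data are illustrative and not taken from the cited study:

    jobs = [  # (job, processing_time, due_date) -- illustrative data
        ("J1", 4, 10), ("J2", 2, 5), ("J3", 6, 8),
    ]

    def schedule(jobs, rule):
        """Sequence jobs by a dispatching rule; report mean tardiness."""
        order = sorted(jobs, key=rule)
        t, tardiness = 0, 0
        for name, p, due in order:
            t += p
            tardiness += max(0, t - due)
        return [j[0] for j in order], tardiness / len(jobs)

    print("SPT:", schedule(jobs, lambda j: j[1]))  # shortest processing time
    print("EDD:", schedule(jobs, lambda j: j[2]))  # earliest due date

Even on three jobs the two rules yield different sequences and different mean tardiness, which is exactly why the user is given the choice of rule.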
In 2005, Chryssolouris et al proposed a method focusing on the short-term scheduling of the crude oil unloading, blending and
distillation activities in a typical refinery (Figure
10). The conclusions drawn from the
implementation / installation phase in the refinery,
showed that the proposed approach, compared with
manual spreadsheet-based calculations, may
accelerate the scheduling process, proposing
alternative solutions of a comparable quality, while
increasing the accuracy of computations and
allowing the investigation of what-if scenarios.



Figure 9 - The four-level hierarchical model applied to a ship repair yard (Chryssolouris et al, 2004)


Figure 10 - Example of the refinery model (Chryssolouris et
al, 2005)
Based on a pilot case from the ship repair
industry, Mourtzis, in 2005, proposed a concept that
supported the management of a ship repair yard by
integrating, in a modular, open, platform-
independent and flexible system, a number of
important business functions including estimating,
tendering, purchasing, contract preparation /
monitoring and invoicing with the production
planning, scheduling and control (Figure 11). The
application of the framework could change the way
of planning and monitoring the critical shipyard
activities (Mourtzis, 2005). In 2005, Sun et al
presented a framework for critical success factor
(CSF) assessment of ERP system implementations
and proposed a structured approach to help a small
manufacturing enterprise (SME) identify the key
requirements and measurements that determine its
achievement of ERP implementation through
simulation (Figure 12) (Sun et al, 2005).
A simulation-based hybrid backwards scheduling framework for manufacturing systems, referred to as HBS, was proposed in 2006 by Lalas et al and mainly addressed discrete manufacturing environments (Lalas et al, 2006). Following the
vertical loading logic, HBS can provide efficient
scheduling solutions in a relatively short processing
time in situations where finding the optimal
schedule is not feasible, as in real manufacturing.
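The backwards logic can be sketched in Python as follows; this is a strong simplification of the cited HBS framework, assuming a single resource, a fixed routing and no calendar constraints:

    def backward_schedule(operations, due_date):
        """Place operations right-to-left from the due date so that each
        one finishes just in time for its successor."""
        plan, finish = [], due_date
        for name, duration in reversed(operations):
            start = finish - duration
            plan.append((name, start, finish))
            finish = start
        return list(reversed(plan))

    ops = [("cut", 3), ("weld", 5), ("paint", 2)]   # illustrative routing
    for name, start, end in backward_schedule(ops, due_date=20):
        print(f"{name}: start {start}, finish {end}")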

Figure 11 - System integration concept and components
interaction of the ship repair yard case (Mourtzis, 2005)


Figure 12 - Simulation model flow diagram and ERP
integration for CSF assessment (Sun et al, 2005)
A planning methodology and its application to
the food industry were presented in 2006 (Mourtzis,
2006). This method included a hierarchical model
of the systems resources and their workload. The
system simulated the factory's operation and
created both a schedule for its resources and a set of
performance measures, which enabled the user to
evaluate the proposed schedule (Figure 13).




Figure 13 Selection of planning policy and the scheduling
output produced by the system (Mourtzis, 2006)

Figure 14 - Structure of the scheduling system with real-
time feedback from the production (Monostori et al, 2007)
Figure 15 - Maintenance alternatives generation and
evaluation at a decision point (Papakostas et al, 2009)
Monostori et al in 2007, proposed a scheduling
system capable of real-time production control
(Figure 14). This system received feedback from
the daily production through the integration of
information coming from the process, quality, and
production monitoring subsystems. The system was
able to monitor a series of deviations and problems
of the manufacturing system and to suggest possible
alternatives for handling them (Monostori et al,
2007).
A research work by Papakostas et al (2009)
described a short-term planning methodology of the
line maintenance activities of an airline operator at
airports, during the turn-around time (TAT). The
proposed methodology supported decision making
for deferring maintenance actions that affected the
dispatching of the aircraft, aiming at high fleet operability and low maintenance cost (Figure 15).
Amongst the recent research approaches to ERP solutions is the Logistics Platform (Váncza et al, 2008), which presents a novel coordination mechanism for supply planning that rules out the opportunistic use of private information. The proposed framework increases the responsiveness of a network operating under uncertain market conditions, but it requires that the members of the network be committed to using the resources economically (Figure 16).

Figure 16 - The Logistics Platform and its connections
(Váncza et al, 2008)
Considering the current trend in the manufacturing world towards maximized communication and collaboration, the ERP system functionality has also been extended with supply chain management solutions. A research work in
2004 (Chryssolouris et al, 2004), demonstrated the
way that modern information technology could
support the communication of different partners and
enable the information flow within the value added
chain (Figure 17).
The data exchange between the different modules
is performed with the use of the eXtensible Markup
Language (XML) and a set of markup declarations
that define the XML document type, known as
Document Type Definition (DTD). The work
indicated that the performance of a supply chain
could be improved by applying a generic
hierarchical model through the appropriate planning
of the critical manufacturing operations. Further to
that, Michalos et al, in 2010, proposed a job rotation
tool that, at the planning phase, could determine and evaluate the possible alternatives for the next operator rotation by accounting for a set of user-defined criteria. This work was further
extended in 2011 (Michalos et al, 2011) by


implementing the method in a web-based tool, able
to generate job rotation schedules for human based
assembly systems and to test the tool on a truck
assembly case. Through user-friendly web
interfaces, production engineers can represent an
assembly line, the tasks to be performed for each
product and the operators' characteristics. An
intelligent search algorithm was used for generating
alternative solutions to the scheduling problem
whilst multiple criteria decision making was used
for evaluating the job rotation schedule alternatives,
according to criteria deriving from industrial
assembly line requirements (Figure 18).
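A minimal Python sketch of generating and ranking rotation alternatives follows, with exhaustive permutation standing in for the cited intelligent search algorithm and a single illustrative workload-balance criterion in place of the multiple industrial criteria; all names and loads are assumed:

    from itertools import permutations

    # Illustrative data: workload per station for the next interval and
    # the load each operator accumulated in earlier intervals.
    station_load = {"station1": 7, "station2": 4, "station3": 5}
    accumulated = {"Ann": 12, "Bob": 6, "Cleo": 9}

    def imbalance(assignment):
        """Criterion: spread of total workload if the operators, in the
        given order, staff station1..station3 for the next interval."""
        totals = [accumulated[op] + load
                  for op, load in zip(assignment, station_load.values())]
        return max(totals) - min(totals)

    # Generate all rotation alternatives and keep the most balanced one
    best = min(permutations(accumulated), key=imbalance)
    print("next rotation:", dict(zip(best, station_load)))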

Figure 17 - The value added chain communication model
(Mourtzis, 2006)
A web-based collaboration framework among
manufacturing companies with reference to
planning and coordinating their manufacturing
activities, was presented by Mourtzis (2011). The
Planning module (SPIRIT-P) implements the
planning and scheduling of manufacturing
operations, based on a 4-level hierarchical model.
The planning method, together with the 4-level
production network modeling approach, is flexible
enough to adapt to changes and disturbances, which
may occur in a dynamic supply network, such as:
the addition or removal of partners and orders,
highly customized products, lack of resources or
lack of materials (Figure 19).
During the last decades, a new generation of
decentralized, simulation-enabled, agent-based
factory control algorithms has appeared in the
literature. A software agent a) is a self-directed
object, b) has its own value systems and means of
communication with other such objects and c)
continuously acts on its own initiative. A system of such agents, called a multi-agent system, consists of a group of identical or complementary
agents that act together in order to achieve a set of
goals (Baker, 1998).
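The notion can be sketched in Python with a toy contract-net-style exchange, in which machine agents price an incoming order according to their own value systems and the best bid wins; this is a simplification under assumed names and rates, whereas the cited systems add negotiation protocols and real-time data:

    class MachineAgent:
        """Self-directed object with its own value system: it prices an
        order from its current queue and acts on its own initiative."""
        def __init__(self, name, queue_hours, rate):
            self.name, self.queue_hours, self.rate = name, queue_hours, rate

        def bid(self, order_hours):
            # Value system: processing cost plus a full-queue penalty
            return self.rate * order_hours + 2.0 * self.queue_hours

        def accept(self, order_hours):
            self.queue_hours += order_hours

    agents = [MachineAgent("M1", 5, 3.0), MachineAgent("M2", 1, 4.0)]

    def dispatch(order_hours):
        """The agents act together: the order goes to the cheapest bidder."""
        winner = min(agents, key=lambda a: a.bid(order_hours))
        winner.accept(order_hours)
        return winner.name

    print("order assigned to", dispatch(order_hours=6))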
A flexible agent-based system called RIDER (Real tIme Decision-making in manufacturing) was developed by Papakostas et al (1999). The system
encompassed both real-time and decentralized
manufacturing decision making capabilities. The
overall schedule for the manufacturing procedures
was generated by a backward scheduling algorithm,
by obtaining real-time information through a special
data exchange mechanism, besides communicating
with other manufacturing information technology
systems. The built-in evaluation mechanism
developed was flexible enough so as to take into
account the varied importance of different business
aspects, namely the production costs, as they were
affected by the proposed actions of each alternative
that was generated.
In 2005, Dammann et al presented an agent-based system for resource allocation and production
control. Agents were assigned to all orders and the
resources were adaptively conditioned according to
the logistic situation. Simulation experiments
proved that the approach delivered nearly the same
logistic results as the established ERP methods did.
However, the proposed agent-based system reacted to disturbances and unexpected events better than the ERP systems did (Dammann et al, 2005).


Figure 18 - Dynamic job rotation tool architecture (Michalos et al, 2011)


Figure 19 - Communication scheme (Mourtzis, 2011)
4.2. TECHNOLOGICAL TRENDS
ERP solutions, whether in the form of software packages or customized on-demand solutions, will continue to play a major role in the realization of digital manufacturing. Digital manufacturing would
allow for: a) the shortening of development time
and cost, b) the integration of knowledge coming
from different manufacturing processes and
departments, c) the decentralized manufacturing of
the increasing variety of parts and products in
numerous production sites, and, d) the focusing of
manufacturing organizations on their core
competences, working efficiently with other
companies and suppliers, on the basis of effective
IT-based cooperative engineering (Chryssolouris et
al, 2008). According to several studies, the ERP
market faces several technological trends at the
moment. The major drivers of these trends on
technological level include:
- Software as a service (SaaS): The tendency towards obtaining ERP functionality as a service has to be mentioned. Especially in the mid-market, ERP suites will no longer be hosted internally but will instead be obtained as a service offered by the ERP provider. New ways of providing software are being investigated, mainly linked with the development of cloud computing (Benlian et al, 2009; Walther, 2009; Borovskiy and Zeier, 2009).
- Mobile technology: Ubiquitous access to information by using mobile devices has become a reality, even for end consumers, over the last years. The ERP system providers increasingly face these challenges by offering mobile-capable ERP solutions (Schabel, 2009; Su, 2009).
- Tightly integrated Business Intelligence (BI): The importance of reporting and data analysis grows with the users' information needs. BI is becoming both easier to use over time and more tightly integrated into ERP suites (Suchánek, 2010; Winkelmann and Thiemann, 2010).

5. DISCUSSION
Digital manufacturing solutions have already
become an integral part of all engineering activities
taking place in a typical manufacturing
organization. Simulation is a core part of these
solutions in the form of feature-rich 3D
collaborative environments, allowing for the
realistic validation of alternative solutions.
In the future, it is expected that digital
manufacturing tools and applications will be
capable of generating and simulating more accurate
and detailed alternative solutions for a multitude of
product and process design activities. Simulation
experiments are anticipated to integrate macro- and
micro- models of manufacturing systems, ranging
from manufacturing sites to specific resources and
processes even to the ones requiring the use of
molecular dynamics based modeling. They are also
expected to embody some sort of knowledge
capturing abilities and intelligence, in some cases,
demonstrating basic forms of human intuition.
The evolution of software, computing hardware
and communication technologies has already driven
the engineering community to a new reality:
collaboration platforms that may be web-based, Service-Oriented Architecture (SOA)-based, Grid-based, Cloud-based or Agent-based.
The service-oriented architecture, in particular, in
which software resources are packaged as services,
allows the use of different web services, supporting
a series of communication platforms and protocols.
Data will be easier to exchange and use and
software will be integrated in a straightforward and
more concrete fashion. Grid computing and data
grids, offered to the users over a transparent grid
layer, allow the more efficient sharing of resources
and data over the Web. Cloud computing enables
the on demand access to a shared pool of computing
resources, allowing for the better utilization of the
existing hardware and software infrastructure. It is
expected that future simulation-based digital
manufacturing tools will make use of idle
computing resources, allowing for the
experimentation with much more detailed and
complex simulation models, thus leading to a
reduced number of design and development cycles.
Agent-based collaboration systems are expected
to replace a vast number of today's time-consuming
data retrieval and exchange activities, streamlining,


at the same time, the entire communication process
among engineers, teams, departments and
organizations distributed all over the world.
All these concepts and technologies are part of
the on-going research and are anticipated to improve virtually every aspect in the context of
product and process design and deployment
activities.
6. ACKNOWLEDGMENTS
This work has been partially supported by the
Integrated Projects MyCar (FP6-2004- NMP-NI-4-
026631) and FUTURA (FP6-2004-NMP-NI-4-
026621), funded by the European Commission.
REFERENCES
Alexopoulos K., Makris S., Xanthakis V. and
Chryssolouris G., A web-services oriented workflow
management system for integrated digital production
engineering, to be published in CIRP Journal of
Manufacturing Science and Technology, 2011
Alexopoulos K., Mavrikios D., Pappas M., Ntelis E. and
Chryssolouris G., Multi-Criteria Upper Body Human
Motion Adaptation, International Journal of
Computer Integrated Manufacturing, Vol. 20, No.1,
2007, pp. 57-70
Aly M. F., Abbas A. T. and Megahed S. M., Robot
workspace estimation and base placement
optimisation techniques for the conversion of
conventional work cells into autonomous flexible
manufacturing systems, International Journal of
Computer Integrated Manufacturing, Vol. 23, No. 12,
2010, pp. 1133-1148
Azuma R., Baillot Y., Behringer R., Feiner S., Julier S.
and MacIntyre B., Recent advances in augmented
reality IEEE Computer Graphics and Applications,
Vol. 21, No. 6, 2001, pp. 34-47
Baer T., Flexibility demands on Automotive Production
and their Effects on Virtual Production Planning, 2nd
CIRP Conference on Assembly Technologies and
Systems, Toronto, Canada, 2008, pp. 16-28
Baker A., A survey of factory control algorithms that can be implemented in a multi-agent heterarchy: dispatching, scheduling, and pull, Journal of Manufacturing Systems, Vol. 17, No. 4, 1998, pp. 297-320
Bechrakis K., Papakostas N., Giannelos N., Mourtzis D.
and Chryssolouris G., The Rider Tool: A Real Time
Approach to Manufacturing Decision Making, 4th
International Seminar on Intelligent Manufacturing
Systems Theory and Practice, Belgrade, Serbia, 1998,
pp. 11-18
Borovskiy V. and Zeier A., Enabling enterprise
composite applications on top of ERP systems,
Services Computing Conference, APSCC 2009 IEEE
Asia-Pacific, 2009, pp. 492-497
Bracht U. and Masurat T., The digital factory between
vision and reality, Computers in Industry, Vol. 56,
No. 4, 2005, pp. 325-333
Chryssolouris G., Manufacturing Systems: Theory and
Practice, 2nd Edition, Springer-Verlag, New York
2006
Chryssolouris G., Makris S., Xanthakis V. and Mourtzis,
D., Towards the Internet-based supply chain
management for the ship repair industry,
International Journal of Computer Integrated
Manufacturing, Vol. 17, No. 1, 2004, pp. 45-57
Chryssolouris G., Mavrikios D. and Pappas M., A Web
and Virtual Reality Based Paradigm for Collaborative
Management and Verification of Design Knowledge,
Methods and Tools for Effective Knowledge Life-
Cycle-Management, Springer-Verlag, Berlin
Heidelberg, 2008, pp. 91-105
Chryssolouris G., Mavrikios D., Fragos D., Karabatsou
V. and Pistiolis K., A Novel Virtual Experimentation
Approach to Planning and Training for Manufacturing
Processes-The Virtual Machine Shop, International
Journal of Computer Integrated Manufacturing, Vol.
15, No. 3, 2002, pp. 214-221
Chryssolouris G., Mavrikios D., Papakostas N., Mourtzis
D., Michalos G. and Georgoulias K., Digital
manufacturing: history, perspectives, and outlook,
Proceedings of the Institution of Mechanical
Engineers Part B: Journal of Engineering
Manufacture, Vol. 222, No. 5, 2008, pp. 451-462
Chryssolouris G., Mourtzis D. and Geronimaki M., An
approach to planning of industry: A case study for a
refrigerators producing facility, CIRP Journal of
Manufacturing Systems, Vol. 32, No. 6, 2003, pp.
499-506
Chryssolouris G., Mourtzis D., Papakostas N.,
Papachatzakis Z. and Xeromeritis S., Knowledge
Management Paradigms in Selected Manufacturing
Case Studies, Methods and Tools for Effective
Knowledge Life-Cycle-Management, A. Bernard, S.
Tichkiewitch (eds.), Part 3, 2008, pp. 521-532
Chryssolouris G., Papakostas N. and Mourtzis D., A
Decision Making Approach for Nesting Scheduling: A
Textile Case, International Journal of Production
Research, Vol. 38, No.17, 2000, pp. 4555-4564
Chryssolouris G., Papakostas N. and Mourtzis D.,
Refinery Short-term scheduling with tank farm,
inventory and distillation management: an integrated
simulation-based approach, European Journal of
Operations Research, Vol. 166, No. 3, 2005, pp. 812-
827
Chuang M., Yang Y. S. and Lin C. T., Production
technology selection: Deploying market requirements,
competitive and operational strategies, and
manufacturing attributes, International Journal of
Computer Integrated Manufacturing, Vol. 22, No. 4,
2009, pp. 345-355
Dammann M., Ouali K. and Wiendahl H. P., Adaptable
Agent-based Production Control, 3rd International


Conference on Reconfigurable Manufacturing, May
10-12 2005, Ann Arbor, Michigan, USA
Fowler W. J. and Rose O., Grand Challenges in Modeling and Simulation of Complex Manufacturing Systems, Simulation, Vol. 80, No. 9, 2004, pp. 469-
476
Fründ J., Gausemeier J., Matysczok C. and Radkowski,
R., Using Augmented Reality Technology to Support
the Automobile Development, Lecture Notes in
Computer Science, Vol. 3168, 2005, pp. 289-298
Fuh J. Y. H., Li W. D., Advances in collaborative CAD:
the-state-of-the art, Computer-Aided Design, Vol.
37, No. 5, 2005, pp. 571-581
Georgoulias K., Michalos G., Makris S. and
Chryssolouris G., The Effect of Flexibility on Market
Adaptation, 2nd CIRP Conference on Assembly
Technologies and Systems, Toronto, Canada, 2008,
pp. 280-288
Hossain L., Patrick J. D. and Rashid M. A., Enterprise
resource planning: Global Opportunities &
Challenges, Idea Group Publishing, London, 2002
Hosseinpour F. and Hajihosseini H., Importance of
Simulation in manufacturing, World Academy of
Science, Engineering and Technology, Vol. 51, 2009,
pp. 292-295
Hren G. and Jezernik A., A framework for collaborative
product review, International Journal of Advanced
Manufacturing Technology, Vol. 42, No. 7-8, 2008,
pp. 822-830
Jacobson S., Shepherd J., D'Aquila M. and Carter K., The ERP Market Sizing Report, 2006-2011, AMR
Research, 2007
Kaufmann H. and Schmalstieg D., Mathematics and
Geometry Education with Collaborative Augmented
Reality, Computers & Graphics, Vol. 27, No. 3,
2003, pp. 339-345
Khouja M., Booth E. D., Suh M. and Mahaney J. K.,
Statistical procedures for task assignment and robot
selection in assembly cells, International Journal of
Computer Integrated Manufacturing, Vol. 13, No. 2,
2000, pp. 95-106
Kühn W., Digital factory - integration of simulation
enhancing the product and production process towards
operative control and optimization, International
Journal of Simulation, Vol. 7, No. 7, 2006, pp. 27 - 39
Lalas C., Mourtzis D., Papakostas N. and Chryssolouris
G., A Simulation-Based Hybrid Backwards
Scheduling Framework for Manufacturing Systems",
International Journal of Computer Integrated
Manufacturing, Vol. 19, No. 8, 2006, pp. 762-774
Lewis I. and Talalayevsky A., Logistics and information
technology: a coordination perspective, Journal of
Business Logistics, Vol. 18, No. 1, 1997, pp. 141-157
Lu S. C. Y., Shpitalni M. and Gadh R., Virtual and
Augmented Reality Technologies for Product
Realization, Annals of the CIRP Keynote Paper, Vol.
48, No. 2, 1999, pp. 471-494
Makris S., Xanthakis V., Mourtzis D. and Chryssolouris
G., On the information modeling for the electronic
operation of supply chains: A maritime case study,
Robotics and Computer-Integrated Manufacturing,
Vol. 24, No.1, 2008, pp.140-149
Makris S., Zoupas P. and Chryssolouris G., Supply
chain control logic for enabling adaptability under
uncertainty, International Journal of Production
Research, Vol. 49, No. 1, 2011, pp. 121-137
Manassero G., Semeraro Q. and Tolio T., A new
method to cope with decision makers' uncertainty in
the equipment selection process, CIRP Annals -
Manufacturing Technology, Vol. 53, No. 1, 2004, pp.
389-392
Matta A., Tolio T., Karaesmen F. and Dallery Y., An
integrated approach for the configuration of automated
manufacturing systems, Robotics and Computer
Integrated Manufacturing, Vol. 17, No 1-2, 2001, pp.
19-26
Mavrikios D., Alexopoulos K., Xanthakis V., Pappas M.,
Smparounis K. and Chryssolouris G., A Web-based
Platform for Collaborative Product Design, Review
and Evaluation, Digital Factory for Human-oriented
Production Systems: The Integration of International
Research Projects, L. Canetta, C. Redaelli, M. Flores
(eds), 1st printing, Springer, 2011
Mavrikios D., Karabatsou V., Alexopoulos K. and
Chryssolouris G., A Virtual Reality based Paradigm
for Human-Oriented Design for Maintainability in
Aircraft Development Proceedings of the 7th
Aviation Technology, Integration and Operations
Conference (AIAA), Belfast, Northern Ireland, 2007
Mavrikios D., Karabatsou V., Fragos D. and
Chryssolouris G., A Prototype Virtual Reality Based
Demonstrator for Immersive and Interactive
Simulation of Welding Processes, International
Journal of Computer Integrated Manufacturing, Vol.
9, No. 3, 2006, pp. 294-300
Mavrikios D., Karabatsou V., Pappas M. and
Chryssolouris G., An efficient approach to human
motion modeling for the verification of human-centric
product design and manufacturing in virtual
environments, Robotics and Computer Integrated
Manufacturing, Vol. 23, No. 5, 2007, pp. 533-543
McFarlane D., Auto ID systems and intelligent
manufacturing control Engineering Applications of
Artificial Intelligence, Vol. 16, No. 4, 2003, pp. 365-
376
Michalos G., Makris S. and Chryssolouris G., An
approach to automotive assembly cost modelling,
2nd CIRP Conference on Assembly Technologies and
Systems, Toronto, Canada, 2008, pp. 478-487
Michalos G., Makris S. and Mourtzis D., An intelligent
search algorithm based method to derive assembly line
design alternatives On the Assembly Line Design


Problem, to be published in International Journal of
Computer Integrated Manufacturing, 2011
Michalos G., Makris S., Papakostas N. and Chryssolouris
G., A Framework for Enabling Flexibility
Quantification in Modern Manufacturing System
Design Approaches, 44th CIRP International
Conference on Manufacturing Systems, Madison,
USA, 2011
Michalos G., Makris S., Papakostas N., Mourtzis D. and
Chryssolouris G., Automotive assembly technologies
review: challenges and outlook for a flexible and
adaptive approach, CIRP Journal of Manufacturing
Science and Technology, Vol. 2, No. 2, 2010, pp. 81-
91
Michalos G., Makris S., Rentzos L. and Chryssolouris
G., Dynamic job rotation for workload balancing in
human based assembly systems, CIRP Journal of
Manufacturing Science and Technology, Vol. 2, No.
3, 2010, pp.153-160
Mirdamadi S., Fontanili F. and Dupont L., Discrete
Event Simulation-Based Real-Time Shop Floor
Control, 21st EUROPEAN Conference on Modelling
and Simulation (ECMS), Prague Czech Republic,
2007
Monostori L., Kádár B., Pfeiffer A. and Karnok D.,
Solution Approaches to Real-time Control of
Customized Mass Production, CIRP Annals -
Manufacturing Technology, Vol. 56, No. 1, 2007, pp.
431-434
Mourtzis D., An Approach to Planning of Food Industry
Manufacturing Operations: A Case Study, CIRP
Journal of Manufacturing Systems, Vol. 35, No.6,
2006, pp. 551-561
Mourtzis D., An integrated system for managing ship
repair operations, International Journal of Computer
Integrated Manufacturing, Vol. 18, No 8, 2005, pp.
721-733
Mourtzis D., Internet based collaboration in the
manufacturing supply chain, CIRP Journal of
Manufacturing Science and Technology, Article in
Press, 2011
Olwal A., Gustafsson J. and Lindfors C., Spatial
Augmented Reality on Industrial CNC-Machines,
Proceedings of SPIE 2008 Electronic Imaging, Vol.
6804, San Jose, CA, 2008
Papakostas N., Kopanakis A. and Alexopoulos K.,
Integrating digital manufacturing and simulation
tools in the assembly design process: A cooperating
robots cell case, CIRP Journal of Manufacturing
Science and Technology, Volume 4, Issue 1, 2011, pp.
96-100
Papakostas N., Makris S., Alexopoulos K., Mavrikios D.,
Stournaras A., and Chryssolouris G., Modern
automotive assembly technologies: status and
outlook, In Proceedings of the First CIRP
International Seminar on Assembly systems, Stuttgart,
Germany, 2006, pp. 39-44
Papakostas N., Michalos G., Makris S., Zouzias D. and
Chryssolouris G, Industrial applications with
cooperating robots for the flexible assembly,
International Journal of Computer Integrated
Manufacturing, Vol. 24, No. 7, 2011, pp. 650-660
Papakostas N., Mourtzis D., Bechrakis K., Chryssolouris,
G., Doukas D. and Doyle R., A flexible agent based
framework for manufacturing decision making, In
Proceedings of the Conference on Flexible
Automation and Intelligent Manufacturing (FAIM99),
Tilburg, The Netherlands, 2325 June 1999, pp. 789-
800
Papakostas N., Papachatzakis P., Xanthakis V., Mourtzis
D. and Chryssolouris G., An approach to operational
aircraft maintenance planning, International Journal
of Decision Support Systems, Vol. 48, No. 4, 2010,
pp. 604-612
Papazoglou M., Ribbers P. and Tsalgatidou A.,
Integrated value chains and their implications from a
business and technology standpoint, Decision
Support Systems, Vol. 29, No. 4, 2000, pp. 323-342
Pappas M., Karabatsou V., Mavrikios D. and
Chryssolouris G., Development of a Web-based
Collaboration Platform for Manufacturing Product and
Process Design Evaluation using Virtual Reality
Techniques, International Journal of Computer
Integrated Manufacturing, Vol. 19, No. 8, 2006, pp.
805-814
Pappas M., Mavrikios D., Karabatsou V. and
Chryssolouris G., VR-based methods and
developments for assembly verification, 1st CIRP
International Seminar on Assembly Systems, Stuttgart,
Germany, 2006, pp. 295-300
Park J., Augmented Reality Based Re-formable Mock-
Up for Design Evaluation, Proceedings of the 2008
International Symposium on Ubiquitous Virtual
Reality, IEEE Computer Society Washington, DC,
USA, 2008, pp. 17-20
Ranky P. G., A real-time manufacturing / assembly
system performance evaluation and control model
with integrated sensory feedback processing and
visualization, Assembly Automation, Vol. 24, No. 2,
2004, pp. 162167
Rao Y., He F., Shao X. and Zhang C., On-Line
Simulation for Shop Floor Control in Manufacturing
Execution System, Intelligent Robotics and
Applications, Volume 5315, 2008, pp. 141-150
Rentzos L., Pintzos G., Alexopoulos K., Mavrikios D.
and Chryssolouris G., Advancing the interactive
context of immersive engineering applications, 2nd
International Conference of Engineering Against
Fracture (ICEAF II), 2011, Mykonos, Greece
Rodriguez K. and Al-Ashaab A., Knowledge web-based
system architecture for collaborative product
development, Computers in Industry, Vol. 56, No. 1,
2005, pp. 125-140
Salonen T., Sski J., Hakkarainen M., Kannetis T.,
Perakakis M., Siltanen S., Potamianos A., Korkalo O.

203

and Woodward C., Demonstration of assembly work
using augmented reality, Proceedings of the 6th
ACM International Conference on Image and Video
Retrieval, 2007
Schabel S., ERP - Mobile Computing Thesis, Wien:
Universitt, Wien, 2009
Shankarnarayanan S., ERP systems using IT to gain a
competitive advantage, 2000, URL:
http://www.angelfire.com/co/troyc/advant.html
Sharma M., Raja V. and Fernando T., Collaborative
design review in a distributed environment,
Proceedings of the 2nd IPROMS Virtual International
Conference: Intelligent Production Machines and
Systems, 2006, pp. 65-70
Shen W., Hao Q. and Li W., Computer supported
collaborative design: Retrospective and perspective,
Computers in Industry, Vol. 59, No. 9, 2008, pp. 855-
862
Shen Y., Ong S. K. and Nee A. Y. C., AR-assisted
Product Information Visualization in Collaborative
Design, Computer-Aided Design, Vol. 40, No. 9,
2008, pp. 963-974
Smith S. J., Survey on the use of simulation for
manufacturing system design and operation, Journal
of Manufacturing Systems, Vol. 22, No. 2, 2003, pp.
157-171
Su C. J., Effective Mobile Assets Management System
Using RFID and ERP Technology, WRI
International Conference on Communications and
Mobile Computing, CMC, 2009, pp. 147-151
Suchnek P., Business Intelligence - The Standard Tool
of a Modern Company, 6th International Symposium
on Business Administration - Global Economic Crisis
and Changes, 2010
Sun A. Y. T., Yazdani A. and Overend J. D.,
Achievement assessment for enterprise resource
planning (ERP) system implementations based on
critical success factors (CSFs), International Journal
of Production Economics, Vol. 98, No. 2, 18
November 2005, pp. 189-203
Umble E. J., Haft R. and Umble M., Enterprise resource
planning: Implementation procedures and critical
success factors, European Journal of Operational
Research, Vol. 146, No. 2, 2003, pp. 241257
Vncza J., Egri P. and Monostori L., A coordination
mechanism for rolling horizon planning in supply
networks, CIRP Annals - Manufacturing
Technology, Vol. 57, No. 1, 2008, pp. 455-458.
Walther B., SAP Strategien und Lsungen fr Klein-
und Mittelstndische Unternehmen im Vergleich mit
Open-Source ERP-Systemen, Jena, 2009
Weidenhausen J., Knoepfle C. and Stricker D., Lessons
learned on the way to industrial augmented reality
applications, a retrospective on ARVIKA, Computers
& Graphics, Vol. 27, No 6, 2003, pp. 887-891
Westkaemper E., Strategic development of factories
under the influence of emergent technologies. CIRP
Annals, Vol 56, No. 1, 2007, pp. 419422
Winkelmann A. and Thiemann S., Strategisches
Marktverhalten von ERP-Anbietern vor dem
Hintergrund von Marktkonzentration und
technologischem Wandel, Springer, Heidelberg, 2010
Ye J., Badiyani S., Raja V. and Schlegel T.,
Applications of virtual reality in product design
evaluation, Proceedings of the 12th international
conference on Human-computer interaction:
applications and services, Lecture Notes in Computer
Science, Vol. 4553, 2007, pp. 1190-1199
Proceedings of DET2011
7th International Conference on Digital Enterprise Technology
Athens, Greece
28-30 September 2011
204

A SOFTWARE CONCEPT FOR PROCESS CHAIN SIMULATION IN MICRO
PRODUCTION
Bernd Scholz-Reiter
BIBA Bremer Institut für Produktion und Logistik GmbH at the University of Bremen
bsr@biba.uni-bremen.de

Janet Jacobi
BIBA Bremer Institut für Produktion und Logistik GmbH at the University of Bremen
jac@biba.uni-bremen.de

Michael Lütjen
BIBA Bremer Institut für Produktion und Logistik GmbH at the University of Bremen
ltj@biba.uni-bremen.de

ABSTRACT
Microstructure technology is one of the key technologies of the 21st century. With a higher level of
miniaturization, complexity of components and production processes increases. Despite high
process uncertainties, micro components require very small manufacturing tolerances. As micro
production is characterized by short production times and comparatively long setting-up times,
process chain planning becomes an important factor for production efficiency. This article
introduces a concept for the simulation of production process chains, which covers fabrication and
material flow planning, while addressing process uncertainties and software needs. The main
section concentrates on the software concept µ-ProST (µ-Process-chain Simulation Tool), which maintains the workflow and optimizes process chain design by mapping an already existing methodical model into the software concept. Finally, the article provides an evaluation of this work,
by presenting a prototypic simulation of a micro manufacturing decision scenario for a process
configuration.
KEYWORDS
micro production, process chain, software concept, logistic size effects, simulation tool

1. INTRODUCTION
Nowadays, many manufacturing areas benefit from
extremely small and reliable products. One sector in which such applications are well established is the medical one. For instance, minimally invasive surgery causes as little stress as possible to the patient and accelerates the healing process. A current research field and future application of micro production is micro fluid engineering, where miniaturized laboratories enable the secure mixing of explosive fluids. Although the trend towards miniaturization is growing, there is a lack of knowledge about micro production processes, for two main reasons. First of all, size effects (Vollertsen 2008) hamper the knowledge transfer from macro to micro production. Such a size effect could be a change of the surface-to-volume ratio which occurs when downscaling the product volume. Secondly,
micro manufacturing imposes special logistic
constraints (see Chapter 2 for examples).
Consequently, a better understanding of micro
production processes is becoming increasingly
important. Common tools to gain information about
a physical system, such as a production process, are:

Mathematical models,
real-world experiments, and
computational simulations.



A mathematical model is a clearly defined process
description which approximates the behaviour of a
complex process. Naturally, the quality of a
mathematical model depends mainly on the quantity
and quality of the underlying system observations,
mostly given by experimental data. To predict unknown or future process behaviour, the use of virtual experiments, i.e. computational simulations, can be of great importance. Imagine a scenario with the aim of predicting the CO2 content of the atmosphere in the year 2050. Obviously, it is not
acceptable to wait until the end of the experiment to get the desired information. However, a simulation,
based on given models and experiment data, might
achieve the result with sufficient accuracy.

To satisfy industrial customer needs, product
developers have to focus on cost-effective micro products and production processes, aiming at a
profitable and reliable middle and high volume
production. However, a high level of
miniaturization leads to a high degree of production
process complexity. The size-effects result in high
production process uncertainties. Additionally,
micro production processes are characterized by a
high number of interacting factors. Hence, slight
parameter changes in early production steps may
cause unpredictable parameter changes in later
steps. To avoid high amounts of sub-standard
goods, the process parameter configuration has to
be conducted within reasonably small ranges of
tolerance. Due to these influences, micro production
requires comparatively long set-up times and high
reconfiguration costs. With regard to production
efficiency, process chain planning becomes an
important factor for micro production. Process
chains include all production steps that have an
impact on the product quality, including parameters
of material, tools, and associated product
components. Based on the need for a better understanding, a variety of scientific research has already been done on micro production processes (Vollertsen 2004, Piotrowska 2009, Hu 2009, etc.). Most publications focus on downscaling the production process for single product components. However, it is not sufficient to study single production processes; the whole process chain must be taken into consideration. Up to now, a holistic concept for process chain planning in micro production has been missing.

To face the scientific and conceptual challenge of
process chain planning, this research presents the software concept µ-ProST (µ-Process-chain Simulation Tool), which combines a process chain model with simulation and experimentation. To motivate the need for such a software concept, this paper gives a short overview of the special constraints of micro production and argues why the use of planning tools from macro production is not sufficient. Thereafter, a process chain model
concept is introduced, based on the corresponding
production process models. Afterwards, a process chain simulation concept is presented, which allows the transfer from qualitative to quantitative technical cause-effect relationship information.
2. MICRO PRODUCTION
Intensified investigations on the physical aspects of
micro production demand a clear definition of micro
components. Vollertsen defines a part as a micro
component, if and only if at least two of its
geometrical dimensions are smaller than one
millimeter (Vollertsen 2004). Micro production
processes are employed to produce or handle such
micro components. Commonly, the development of
micro products and their production processes is
motivated by the desire to scale down already
existing macro products. In order to gain a
downscaled product, all relevant length dimensions
of the process parameters are reduced in a similar
way, i.e. by a constant factor. Nevertheless, micro
production stands for more than the mere physical
act of manufacturing. Micro production includes
assembling of micro components, micro tooling and
handling of the components as well as production
process planning and process parameter adjusting.
On the micro scale there are several influences on production efficiency which impede the use of conventional planning methods from macro production. For simplification, the subsequent paragraphs use µ instead of micro, i.e. µ-component for micro component.


Fig.1 Schematic representation of the three main groups of size effects (F: force, FA: adhesion force, Ff: friction force, FG: gravity) (Vollertsen 2009)
2.1 SIZE EFFECTS
At first sight, the development of a µ-production process, under the assumption of a well-known macro process, seems to be a simple task: if the process, including all process properties, is scaled down in a similar way, then the product will be as well. Although this is theoretically true, the downscaling of all dimensions and forces relevant to the production process is not possible. For instance, it would have to include the downscaling of natural constants, such as the density of material or the gravity force. The deviations of the process properties which occur when the geometrical dimensions, and thereby the product mass, are scaled down are called size effects.

To understand the origin of parameter deviations, two parameter types are distinguished: parameters which do not change with the mass are called intensive variables, whereas those which change with mass are called extensive variables. The size effects can be divided into three main categories: density, shape and microstructure (Vollertsen 2009). Shape effects occur due to the fact that holding the shape constant during downscaling leads to a change in the relation of surface to volume. The shape effects are distinguished into surface-related ones and those which can be described as a sum of volume- and surface-related sub-values, whose relative amounts change during scaling. The last category, the microstructure effects, combines all effects that occur because the simultaneous downscaling of all structural values is physically or practically not possible. An overview of the size effect categories is given in Figure 1. With regard to a manufacturing scenario, size effects can be beneficial as well as neutral or detrimental for the whole production process.
2.2 LOGISTICS IN MICRO SCALE
Similarly to the physical specifics of micro
manufacturing, the properties and logistics of
production processes in micro production differ
from the macro production ones.

By scaling the geometrical dimensions, process parameters such as the product dimensions, weight, hardness and sensitivity are affected in a direct way. Further, the strength of environmental influences, like temperature, dust, contamination, humidity or electrostatics, grows. Hence, it is not possible to describe micro production logistics by simply downscaling the processes from macro to micro scale, and investigations on the logistic behavior of micro processes are necessary.
2.2.1 Logistic Aspects in Micro Production
In particular, several parameters relevant to production efficiency change as a consequence of the size effects and thus enforce changes in the process properties. For instance, due to the increased surface-to-volume ratio of micro parts, their gravitational force is lower than the adhesion forces. Hence, the handling and the assembly of µ-components differ from macro production, and the development of handling techniques is a great challenge in µ-manufacturing. One possibility to avoid handling conflicts is to produce components in a larger composite and to strive for a separation as late as possible. This way, they can be handled with standard conveyor machinery like macro parts. Conversely, the intentional use of size effects gives rise to new process realizations which become possible only at micro scale. For example, contactless transportation systems using an air stream take advantage of the relatively small gravitational forces (Moesner 2004). Developing micro-specific processes enables effective production and is a great challenge of micro technology research.

A fundamental aspect of the efficiency of µ-production processes is a constantly high product quality. In micro production, precise manufacturing is decisive for product quality; geometrical structure deviations below one micrometer are a common goal. To ensure the high quality requirements, quality tests have to be made. Up to now, standardized methods and instruments for automated quality inspections are missing, and the limited resolution of optical instruments results in measurement uncertainties. Thus, there is a lack of knowledge about the behavior of µ-production processes and their parameter relationships. These arguments, combined with the fact that µ-production processes are highly sensitive and component post-processing is not feasible, lead to the conclusion that manufacturing constantly high quality is a main task for micro production and requires very small manufacturing tolerances.

In addition to these technology-based consequences, production and logistics related effects, like the possible use of smaller machines, lead to new potentials in µ-production planning. As micro production is characterized by short production times, a micro factory has to be able to handle short and unpredictable product orders. Thus, flexible job planning and a daily process adaptation to new orders are essential. High investment needs and fixed costs, which result from the large share of machine costs, are another factor relevant to efficiency.

Large product ranges and small series production cause frequent setup procedures. Hence, to avoid investment risks and idle machines, efficient planning and forecasting of resource and engine supply is significant to the company's profitability. Due to the lack of stable processes and standardized interfaces, as well as of suitable handling and measurement methods on the micro scale, production process planning has to be integrated at an early stage of the product development process (Scholz-Reiter et al. 2010). Therefore, the µ-ProST software concept involves an integrated µ-process chain planning tool including handling operations, assembly, quality tests and investment planning. Several research groups are facing the challenges posed by single micro manufacturing problems.
2.2.2 Planning of Micro Process Chains
Due to the high accuracy and the strong dependency between quality, handling and manufacturing in µ-production processes, an approach for the parallel planning of these three fields has been proposed (Scholz-Reiter et al. 2010). The proposed concept of an integrated µ-process chain provides a framework for the generation of a production planning tool with special respect to the characteristics found in micro production.

A manufacturing process chain describes the
chronological and logical order of all operations
necessary to produce the micro component or
subassembly. Due to the complexity of planning constraints, assembly and test operations have to be taken into account at an early stage of the production planning in order to avoid later cost- and time-intensive adjustments. This means that production technology, test and handling techniques are developed simultaneously. Thus, possible
adaptation problems between production technology
and handling operations, including discrepancies in
handling time, can be detected early and bottlenecks
can be found. The aim of the integrated micro
process chain concept is to provide a basis for
production program planning. The coordination of
planning processes demands standardized
descriptions of processes and their interfaces.
Challenges lie in their systematic preparation and presentation.

In summary, production process chain planning has
to be integrated into the product development process. To shorten the setting-up times, software support is necessary.
3. SOFTWARE CONCEPT
The special constraints in micro production entail specific application requirements for a micro planning tool. To meet these requirements, the first section of this chapter lists the main software components a planning tool kit should include. In the second section, the µ-ProST software concept is presented.
3.1 SOFTWARE REQUIREMENTS
This section presents the software components a planning tool kit should include.


Fig.2 The µ-ProST GUI, including the adjusted DESMO-J simulation tool
3.1.1 User Interface
Due to the flexibility of processes and machines, the proposed software has to include a user interface. The user interface allows the updating and addition of new data, as well as the creation of production or investment scenarios and evaluation methods. Of course, the user interface should be clearly structured and support uncomplicated operation. A help function, demonstration models and simulations are useful supplements.
3.1.2 Database
The software kit has to organise and store different types of data (product, tool, machine, customer etc.). Therefore, a database is one of the basic software parts. As high volume production of µ-components is a current research field, effective data storage has the potential to provide the groundwork for precise predictions. Moreover, the database forms the substructure needed to forecast customers' behaviour and to obtain valuable statistical data on micro technological properties.
3.1.3 Process Chain Model
As the interaction of the production steps along the process chain is important for achieving sophisticated products, the internal representation of a process chain is a decisive factor. Thus, changes in the process chain system have to be part of the model concept. Micro production processes are characterized by a high number of interacting factors.




Fig.3 Information flow during the development of the technical cause-effect model
Hence, slight parameter changes in early production
steps may cause unpredictable parameter changes in
later steps. To avoid a high amount of sub-standard
goods, the process parameter configuration has to
be conducted within reasonably small ranges of
tolerance. As a result, error propagation becomes an
essential instrument for the control of product
quality. Therefore, a sufficiently detailed description of the cause-effect relationships should be part of the process chain model, in order to detect critical parameter constellations in early configuration steps and to avoid high configuration costs.
3.1.4 Micro Process Models
The µ-ProST software tool does not compensate for the missing development of process models. Although the software is able to detect and handle a certain amount of model flaws, precise process modeling is
given physical or mathematical model descriptions
for all production steps. Every single model derives
the process step outcome from the process step
input, taking into account all parameters relevant to
process description and local technological cause-
effect relationships.

Aiming at optimal use of all existing process
information, the software should be able to handle
and store different model types. The most common
models are differential equations systems (for
further information a detailed overview of
mathematical models see Imboden 2008)
3.1.5 Simulation Tool
As the software acts as a planning tool, the most
important user application is the simulation tool. In
order to guarantee short set-up times and to satisfy the flexibility needs of a micro production company, a fast simulation method is indispensable.
Additionally, dealing with a limited amount of
knowledge gaps or process uncertainties may be
necessary.
3.1.6 Additional Interfaces
The high impact of parameter interactions in
process chains demands a fundamental
understanding of the used technologies, the process
properties and the process interactions. Additional
interfaces, such as model solvers, accelerate the process chain simulation by outsourcing complex derivations to suitable software. Finally, an interface for real-time data recording completes the software tool kit.
3.2 THE µ-PROST SOFTWARE CONCEPT
The software concept for the µ-Process-chain Simulation Tool (µ-ProST) is a model-based planning system. Due to continuous updates of the technology state, the µ-ProST software can serve as a central planning and documentation tool in a micro factory. As preliminary work before the start of the simulation, the initial process models must be developed. An overview of the development steps is given in Fig 3.



Fig.4 Cause-effect relationships of process parameters: (a) effect-network for the interface of two process steps; (b) reduced graph of the process chain
To handle the high number of different data types, object-oriented programming is chosen. To achieve a simple software structure and fast computing, the decision was made to use Java-based software development. As the common service functions (GUI (see Figure 2), database, graphical evaluations, job scheduling etc.) do not differ much from a software realization for macro production, the next section highlights only two software components which are adapted to the special constraints of micro production: the software representation of the process chain model and the adapted use of the Java-based simulation framework DESMO-J.
3.2.1 Process Chain Model Concept
For the simulation of a µ-production process, the mathematical and physical models representing the steps of the process chain have to be implemented and linked. We use the technical cause-effect relationship based process chain model µ-ProWi (Scholz-Reiter 2009). This holistic model approach is part of the CRC 747 research work. The µ-ProWi model approach expresses the qualitative technical cause-effect relationships in terms of directed, acyclic graphs, called effect-networks. The nodes of the network represent process variables. Process variables may be observable quantities, latent variables and also unknown parameters. The edges represent technical cause-effect dependencies between the process variables. A successful implementation of the µ-ProWi model will be able to forecast general interactions of process parameters and thus to identify significant control factors. Hence, effect-networks allow the representation of technical cause-effect relationship information.
To quantify the cause-effect relationships, we assume an extended modeling concept. Based on the µ-ProWi model approach, the cause-effect relationship graph is extended to a Bayesian network with continuous variables (Jensen 2001). Due to the continuous parameter representation, different parameter states can be distinguished and the impact of parameter changes can be evaluated. The technical cause-effect relationships can be represented by probability distributions. The construction and specification of the Bayesian network is divided into two main steps. Firstly, the graph structure is defined; secondly, the probability function has to be modeled. The graph structure is given by the qualitative µ-ProWi model, provided by the scenario input of the user. The computation of the probabilities is usually founded on experimental datasets. If a large database is available, the software should use it. If not, and that is the normal case in micro production, the model must be approximated from the given knowledge as well as possible (using the given process models and simulation methods to create a simulation-based database). A combined model approximation, involving uncertainty representations such as fuzzy logic, may be a good initial solution.
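To illustrate the extended modeling concept, the following minimal Java sketch, with illustrative class and parameter names of our own (EffectNode, bias, sigma) rather than µ-ProWi source code, models an effect-network edge as a linear-Gaussian conditional distribution and samples process variables along the graph:

import java.util.*;

/** Minimal sketch of a continuous effect-network node (hypothetical names). */
class EffectNode {
    final String name;
    final List<EffectNode> parents = new ArrayList<>();
    final List<Double> weights = new ArrayList<>();  // linear influence of each parent
    double bias;    // base value without parent influence
    double sigma;   // process uncertainty (std. dev. of Gaussian noise)

    EffectNode(String name, double bias, double sigma) {
        this.name = name; this.bias = bias; this.sigma = sigma;
    }
    void addParent(EffectNode parent, double weight) {
        parents.add(parent); weights.add(weight);
    }
    /** Samples the node value given already-sampled parent values. */
    double sample(Map<EffectNode, Double> values, Random rng) {
        double mean = bias;
        for (int i = 0; i < parents.size(); i++)
            mean += weights.get(i) * values.get(parents.get(i));
        return mean + rng.nextGaussian() * sigma;
    }
}

public class EffectNetworkDemo {
    public static void main(String[] args) {
        Random rng = new Random(42);
        // toy relation: punch force influences part quality (invented numbers)
        EffectNode punchForce = new EffectNode("punchForce", 100.0, 5.0);
        EffectNode quality = new EffectNode("quality", 0.2, 0.05);
        quality.addParent(punchForce, 0.007);
        Map<EffectNode, Double> values = new HashMap<>();
        values.put(punchForce, punchForce.sample(values, rng)); // root: no parents
        values.put(quality, quality.sample(values, rng));
        System.out.printf("force=%.1f quality=%.3f%n",
                values.get(punchForce), values.get(quality));
    }
}

Replacing the toy numbers with distributions trained from experimental or simulation-based data would yield the quantitative cause-effect model described above.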
3.2.2 Process Chain Simulation
The existing information structure suggests an iterative simulation procedure, based on the material flow along the process chain. Hence, the process steps can be implemented and calculated separately.

As a simulation framework we chose DESMO-J (Discrete-Event Simulation and Modeling with Java), developed by the University of Hamburg (Lechler 1999). Several aspects make the DESMO-J framework well suited for µ-process chain simulations. DESMO-J supports an object-oriented model representation and provides a complete separation of model and experiment. As an open-source project, DESMO-J is adaptable to our special conditions. Furthermore, a wide range of useful method implementations relevant for process-oriented modeling, such as stochastic distributions and statistical data collectors, can be adopted. Moreover, DESMO-J supports the synchronisation of simulation processes which act concurrently or allow a process interruption. Finally, DESMO-J supports both the process-oriented and the event-oriented modeling style, also known as the process-interaction approach and the event-scheduling approach, respectively.
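To illustrate the separation of model and experiment, the following skeleton follows the typical structure of a DESMO-J model as documented in the framework's tutorials; the class and method names (Model, Experiment, connectToExperiment) should be checked against the DESMO-J version in use, and the process-step logic is left as placeholders:

import desmoj.core.simulator.*;

/** Skeleton of a process-chain model in DESMO-J (structure per framework docs). */
public class ProcessChainModel extends Model {

    public ProcessChainModel() {
        // owner model, name, show in report, show in trace
        super(null, "MicroProcessChain", true, true);
    }

    @Override
    public String description() {
        return "Iterative simulation of a micro production process chain.";
    }

    @Override
    public void init() {
        // create queues, distributions and process-step entities here
    }

    @Override
    public void doInitialSchedules() {
        // activate the first process step / order generator here
    }

    public static void main(String[] args) {
        ProcessChainModel model = new ProcessChainModel();
        Experiment exp = new Experiment("ProcessChainExperiment");
        model.connectToExperiment(exp); // separation of model and experiment
        exp.stop(new TimeInstant(480)); // e.g. simulate one shift (time units)
        exp.start();
        exp.report();
        exp.finish();
    }
}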

The µ-ProST simulation approach extends the process-oriented simulation methods by an additional black-box model. The production
process chain is interpreted as a dynamic system x(t) generating an output y(t) dependent on an input u(t), where x, y and u are continuous vectors and t is the simulation time. The input
u(t) consists of all measurable production relevant
factors, including machine parameters. The output
y(t) represents the manufactured micro product,
including information about the logistic effort and
product quality. The simulation of the system
behavior is divided into different production steps
and their models, based on the material flow. Every
single production step itself is represented as a
dynamic system (black box model). The behavior of
the production step system may iteratively be
defined by a smaller intern process chain or by a
given process model. The system interfaces are
fixed by the input and output of the process steps.
Hence, the simulation of parameter change
propagation is possible. A high repeating simulation
rate with changing parameter constellations allows
the simulation of whole screening plans. Running
the simulation with given process step models leads
to a data base which enables the quantitative
modeling of cause-effect networks. If the structure of the networks is not limited, this may induce networks of extremely high order and complexity (Figure 4a). The reduction to a minimal acyclic graph is therefore a necessary task (Figure 4b). Suitable methods for the graph reduction and for the training of the conditional probabilities (e.g. the Expectation-Maximization method, Jensen 2001) must be chosen.
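The black-box interpretation can be captured in a small interface. The sketch below, with illustrative names of our own (ProcessStep, ProcessChain) and toy transformations rather than the µ-ProST implementation, shows how process steps mapping an input vector u to an output y can be chained so that parameter changes propagate along the material flow:

import java.util.*;

/** A process step as a black box: output y = f(input u) (illustrative sketch). */
interface ProcessStep {
    double[] apply(double[] u);  // u: machine/material parameters, y: product state
}

/** A process chain is itself a process step: steps are applied along the flow. */
class ProcessChain implements ProcessStep {
    private final List<ProcessStep> steps = new ArrayList<>();
    ProcessChain add(ProcessStep step) { steps.add(step); return this; }

    @Override
    public double[] apply(double[] u) {
        double[] state = u;
        for (ProcessStep step : steps)
            state = step.apply(state);  // changes propagate to later steps
        return state;
    }
}

public class BlackBoxDemo {
    public static void main(String[] args) {
        // toy steps: deep drawing scales a dimension, trimming adds an offset
        ProcessStep drawing = u -> new double[]{u[0] * 0.5, u[1]};
        ProcessStep trimming = u -> new double[]{u[0] - 0.01, u[1] * 0.98};
        ProcessChain chain = new ProcessChain().add(drawing).add(trimming);
        System.out.println(Arrays.toString(chain.apply(new double[]{1.0, 0.9})));
    }
}

Repeated runs with varied input vectors correspond to the screening plans mentioned above and generate the data base for the quantitative cause-effect modeling.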
3.2.3 Interaction of Process Chain
Simulation and the Technical-Cause-Effect
Model
As quality is strongly dependent on both the process chain simulation and the technical cause-effect model, we describe their interaction in more detail. On the one hand, the data gained from the process chain simulation, and the parameter dependencies contained in it, allow a better understanding of the technical cause-effect relationships and thus a correction of the process chain model. On the other hand, a successful simulation is only possible if the underlying process chain model, consisting of the process step models, has been well defined. Thus, solid process modeling and experimentation remain a primary task; no software tool can replace the research work and technical expertise. Nevertheless, if the simulation leads to a better model and a better model leads to a better simulation, repeating this information-gain cycle may be a possibility. Research studying such an iterative information gain has not been done yet. Another interesting field of investigation is the error behavior depending on the process chain model quality. Possibly, starting criteria assuring restricted error ranges in the simulation can be found.
3.3. EVALUATION
In order to demonstrate the principal use of the software, a prototypical production scenario is implemented. We assume a micro factory where a new production process is to be installed. The new µ-product c requires the assembly of two product components. Product component a must be produced by engine type A, component b by engine type B. In a second production step, components a and b are assembled by engine C. The assembly engine C is a fixed process element, which cannot be changed. The quality of product c depends on the qualities of the incoming product components and an unknown influence of engine C. In contrast to the assembly engine, the production properties of engines A and B are known. For both production step A and production step B, two engines (A1, A2, B1, B2) are available. The engines differ in production costs and manufacturing quality ranges. The influence of the engines on product component quality is given by a statistical distribution. The quality is defined by a number between zero (defect) and one (best quality); the more expensive engines produce a higher product quality. The engine parameters are listed in Table 1.
Table 1: Engine characteristics

Engine   Quality distribution   Costs/day
A1       U(0.8, 1)              500
A2       U(0.75, 0.95)          200
B1       N(0.9, 0.1)            300
B2       N(0.6, 0.25)           200

Based on the engine characteristics, a decision plan shall be made. Thus, in the case of a new product order, a fast decision on which engine combination fits the process chain optimally and delivers a maximum profit is supported. Due to the customers' requirements, only products fulfilling a certain quality threshold can be sold. Hence, the profit depends on the customer demands and the product's actual market price.

To enable a simple product quality forecast, the
behavior of engine C must be approximated. Based on the knowledge from macro production, we assume that the quality of product c depends strictly on the quality of the subcomponents a and b. Thus, the arithmetic average of the subcomponents' qualities is taken. The prototypic forecast simulates the production of twenty samples and derives the product component and final product qualities. The qualities for the engine scenario (A1, B1) are plotted in Figure 5.


Fig.5 Product quality of a prototypic simulation scenario
To compare the engine scenarios, the same simulation was applied to all combinatorial possibilities. The resulting qualities of product c are plotted in Figure 6.


Fig.6 Product quality comparison of the four prototypic simulation scenarios
As expected, the engine scenarios including the more expensive engines generate a higher product quality. In order to compare the value of the engine scenarios, the production profit, with respect to manufacturing costs, must be plotted against the quality threshold and the product price. We made a prototypic profit plot for the first two scenarios, (A1, B1) and (A2, B1). The results are shown in Figure 7; high regions represent the threshold-price combinations which allow profitable manufacturing.



Fig.7 Profit matrix of the engine scenarios
Based on these data, a suitable engine combination can be chosen rapidly when a new job order arrives. Obviously, it is possible that during the set-up of the process chain the behavior of engine C does not correspond to the assumed one. For instance, the product quality may be nearly independent of one or more product components. In this case, the experimental data of the set-up process can be used to update the missing knowledge and to gain a better model of the assembly process step.
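A forecast of this kind can be reproduced in a few lines. The following sketch is our own illustration rather than the µ-ProST implementation: it samples component qualities from the distributions in Table 1 (reading N(m, s) as mean and standard deviation), averages them as assumed for engine C, and estimates a daily profit for an assumed quality threshold, product price and daily output:

import java.util.Random;

/** Monte Carlo sketch of the engine-scenario forecast (illustrative numbers). */
public class EngineScenarioDemo {
    public static void main(String[] args) {
        Random rng = new Random(7);
        int samples = 10000;          // more than the 20-piece prototype, for stability
        double threshold = 0.8;       // assumed customer quality threshold
        double price = 0.5;           // assumed market price per sold product
        double dailyCost = 500 + 300; // scenario (A1, B1): costs/day from Table 1
        int piecesPerDay = 2000;      // assumed daily output

        int sold = 0;
        for (int i = 0; i < samples; i++) {
            double qa = 0.8 + 0.2 * rng.nextDouble();    // A1: U(0.8, 1)
            double qb = 0.9 + 0.1 * rng.nextGaussian();  // B1: N(0.9, 0.1)
            qb = Math.max(0, Math.min(1, qb));           // clamp quality to [0, 1]
            double qc = (qa + qb) / 2.0;                 // assumed behavior of engine C
            if (qc >= threshold) sold++;
        }
        double sellRate = (double) sold / samples;
        double profit = sellRate * piecesPerDay * price - dailyCost;
        System.out.printf("sell rate %.3f, estimated daily profit %.1f%n",
                sellRate, profit);
    }
}

Sweeping threshold and price over a grid yields exactly the kind of profit matrix shown in Figure 7.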
5. CONCLUSIONS
With respect to the complexity of micro components and production processes, a planning tool is essential for the effective high volume manufacturing of micro components. Aiming at such planning support, this paper first presented an overview of the restrictions on micro production engineering. We showed that, besides the physical aspects of a production process, the logistic process parameters also undergo size-effect changes when the product dimensions are downscaled. Based on the special logistic constraints of micro production, this article derived the need for planning software. The introduced µ-ProST software concept provides a solution for the simulation of complex process interactions in micro production and is able to detect quality-relevant process parameters in early configuration steps. Finally, a prototypic application to a micro manufacturing decision scenario for a process configuration problem was presented. First, the software generates an initial process model, based on given fragmentary process data and additional interaction assumptions. Second, the best possible process configuration is calculated. With the beginning of the production process, new data becomes available and the process model can be corrected. In the given scenario, the resulting information gain about the cause-effect relationship between the engines allows the manufacturer to change the process configuration and to raise the overall production profit. In summary, a planning tool regarding the special constraints of micro production is worthwhile, both in research, to gain information about unknown parameter and size effect interactions, and in the industrial sector. Nevertheless, proving the feasibility for planning problems with a high degree of complexity is a main future task.
6. ACKNOWLEDGMENTS
This research has been funded by the German
Research Foundation (DFG) as the subproject C4
Simultaneous Engineering of the Collaborative
Research Centre 747 Microforming (SFB747).
REFERENCES
Ahn, H., Optimierung von Produktentwicklungsprozessen, Deutscher Universitätsverlag, Wiesbaden, 1997
Eversheim, W., Schuh, G., Integrierte Produkt- und Prozessgestaltung, Springer-Verlag, Berlin Heidelberg, 2005
Hoxhold, B., Büttgenbach, S., Batch fabrication of micro grippers with integrated actuators, Microsystem Technologies, Vol. 14, No. 12, 2008, pp. 917-924
Hu, Z., Walther, R., Vollertsen, F., Umformwerkzeuge beim Mikrotiefziehen - Einfluss der geometrischen Abweichungen der Werkzeuge auf die Stempelkraft beim Mikrotiefziehen, Werkstattstechnik online, Vol. 11, 2009, pp. 814-819
Imboden, D., Koch, S., Systemanalyse: Einführung in die mathematische Modellierung natürlicher Systeme, Springer-Verlag, Berlin Heidelberg, 2008
Jensen, F. V., Bayesian Networks and Decision Graphs, Springer-Verlag, New York, 2001
Lechler, T., Page, B., DESMO-J: An Object-Oriented Discrete Simulation Framework in Java, Proc. 11th European Simulation Symposium, Erlangen, 1999, SCS-Publications, Delft, 1999, pp. 46-50
Moesner, F. M., Higuchi, T., Traveling Electric Field Conveyer for Contactless Manipulation of Microparts, IEEE Annual Meeting, Vol. 3, IEEE Press, New York, 2006, pp. 2004-2011
Piotrowska, I., Brandt, C., Karimi, H. R., Maaß, P., Mathematical model of micro turning process, International Journal of Advanced Manufacturing Technology, Vol. 45, No. 1, Springer-Verlag, Heidelberg, 2009, pp. 33-40
Scholz-Reiter, B., Lütjen, M., Brenner, N., Technologieinduzierte Wirkzusammenhänge in der Mikroproduktion - Entwicklung eines Modellierungskonzepts, 22. Digital Engineering - Herausforderungen für die Arbeits- und Betriebsorganisation, 2009, pp. 81-102
Scholz-Reiter, B., Brenner, N., Kirchheim, A., Integrated Micro Process Chains, APMS 2009, IFIP AICT 338, 2010, pp. 27-32
Scholz-Reiter, B., Lütjen, M., Heger, J., Integrated simulation method for investment decisions of micro production systems
Vollertsen, F., Schulze Niehoff, H., Hu, Z., State of the art in micro forming. Keynote paper, Proceedings of the 1st International Conference on New Forming Technology, Harbin Institute of Technology Press, China, 2004, pp. 17-28
Vollertsen, F., Categories of size effects, Production Engineering, Vol. 2, No. 4, Springer-Verlag, 2008, pp. 377-383
AN INVENTORY AND CAPACITY-ORIENTED PRODUCTION CONTROL
CONCEPT FOR THE SHOP FLOOR BASED ON ARTIFICIAL NEURAL
NETWORKS
Bernd Scholz-Reiter
BIBA Bremer Institut für Produktion und Logistik GmbH at the University of Bremen
bsr@biba.uni-bremen.de

Florian Harjes
BIBA Bremer Institut für Produktion und Logistik GmbH at the University of Bremen
haj@biba.uni-bremen.de

Jeanette Mansfeld
BIBA Bremer Institut für Produktion und Logistik GmbH at the University of Bremen
man@biba.uni-bremen.de

Oliver Stasch
University of Bremen
o.stasch@uni-bremen.de
ABSTRACT
The constantly growing demand for customized and innovative products results in highly complex
production processes. The corresponding large workload of the production planning and control
systems strengthens the interest in flexible, adaptive and intelligent approaches for both
manufacturing systems and the related production control. Methods from the field of artificial
intelligence, such as software agents or artificial neural networks, have proven their applicability in
this field. This paper presents a production control concept based on artificial neural networks for
the inventory and capacity-oriented control of a shop floor. An example demonstrates the overall
concept as well as the implementation and performance of the proposed control system.
KEYWORDS
Production Control, Shop Floor, Capacity, Inventory, Artificial Neural Networks

1. INTRODUCTION
The customer-oriented production of multi-variant
products with short production cycles plays an important role in today's market (Schäfer et al., 2004). This results in complex and dynamic
production processes, which are difficult to handle
for established production planning and control
systems (Barata & Camarinha-Matos, 2005). Due
to the orientation to small series, single pieces and
prototypes, shop floor productions have a particular
demand for a continuous advancement of
production control strategies and techniques.
In this context, methods from the field of
artificial intelligence, such as bio-inspired
algorithms (Scholz-Reiter et al., 2008), software
agents (Scholz-Reiter & Höhns, 2003) and artificial
neural networks (Rippel et al., 2010) (Scholz-Reiter
et al., 2010) have proven their applicability in
production related tasks. At this, the applications range from machine control (Kwan & Lewis, 2000) through prediction purposes (Natarajan et al.,
2006) to the determination of suitable operational
policies (Yildirim et al., 2006) (Chryssolouris et al.,
1991).
This paper introduces a production control
concept for the combined control of inventory levels
and capacity utilization within a shop floor
production. In this concept, artificial neural
networks act as capacity and inventory controllers in cascaded control loops.


The structure of the paper is as follows: The next
section gives a short overview of artificial neural
networks in general. Section 3 introduces the organizational form of shop floor production and the generic shop floor model that underlies the experiments. Section 4 describes the overall control concept and the neural controllers it uses. An experimental validation of the concept by means of the previously described model follows in section 5. The paper closes with a conclusion based on the obtained results and gives an outlook on future research.

2. ARTIFICIAL NEURAL NETWORKS
Artificial neural networks represent mathematical
imitations of neural systems found in nature
(Dreyfus, 2005). They consist of artificial neurons,
also called nodes, and weighted links, also known
as edges (Steeb, 2008). A typical neural network
consists of three layers, an input layer, one or more
hidden layers and an output layer (Haykin, 2008).
At this point, the number of hidden layers depends
on the type of network (Steeb, 2008). Figure-1
depicts a schematic view of an artificial neuron.


Figure - 1 Schematic view of an artificial neuron (Rippel et
al., 2010)
Within a neural network, the artificial neurons act as
data processing units. They process input data,
coming from other neurons or the environment, and
forward the calculated results. Therefore, neural
networks offer fast and parallel data processing (Dreyfus, 2005).
Further advantages of neural networks are a
comparatively small modelling effort and the ability
to learn from experience (Scholz-Reiter & Höhns, 2003). This learning ability empowers neural
networks to approximate complex mathematical
coherences, which are not exactly describable or
may be even unknown (Rippel et al., 2010). In this
case, the networks act as a kind of black-box.
The learning process can take place in three
different ways. Supervised learning is applicable if data in the form of matching input-output pairs exists (Chaturvedi, 2008). Their presentation to the
network triggers an adjustment of the internal
connections in a way that every input generates the
corresponding output. Reinforcement learning
follows a similar approach. At this point, the
network receives input and a feedback concerning
the correctness of the result (Haykin, 2008). The
exact desired output is not presented. Finally,
Unsupervised or Self-organized Learning denotes a
learning process without assistance. The neural
network receives only input data and tries to
approximate possible coherences within the
presented pattern autonomously (Kohonen, 2001).
In all three cases, the success of the learning procedure is verified by presenting an additional set of validation data. This avoids a mere memorisation of the initial training data (Haykin, 2008).
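To make the neuron computation of Figure-1 concrete, the following minimal Java sketch, an illustration of our own, computes a neuron's output as a thresholded weighted sum of its inputs, the basic operation from which the layered networks described above are built:

/** Minimal artificial neuron: thresholded weighted sum (illustrative sketch). */
public class Neuron {
    private final double[] weights;
    private final double threshold;

    public Neuron(double[] weights, double threshold) {
        this.weights = weights;
        this.threshold = threshold;
    }

    /** Fires 1.0 if the weighted input sum exceeds the threshold, else 0.0. */
    public double out(double[] inputs) {
        double in = 0.0;
        for (int i = 0; i < weights.length; i++)
            in += weights[i] * inputs[i];
        return in > threshold ? 1.0 : 0.0;
    }

    public static void main(String[] args) {
        // two-input neuron acting as a logical AND (toy weights)
        Neuron and = new Neuron(new double[]{0.6, 0.6}, 1.0);
        System.out.println(and.out(new double[]{1, 1})); // 1.0
        System.out.println(and.out(new double[]{1, 0})); // 0.0
    }
}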
3. SHOP FLOOR PRODUCTION
3.1 SHOP FLOOR PRINCIPLE
Shop floor production is a very dynamic and
complex form of production. The manufacturing of
prototypes, single pieces and small series results in
a high degree of customization (Rippel et al., 2010).
The production facility is organisationally divided
into specialised workshops, such as a turnery, a
sawmill and so on (Figure-2). Within the shop floor,
work pieces can pass machines and workshops in
any order (Slack et al., 2007). At this point, the
machining sequence depends on the technical
specifications of both the work piece and available
workstations or machines (Rippel et al., 2010).
Often, processing steps have a variable order or are
optional.


Figure 2 Shop floor organization (Scholz-Reiter et al.,
2011) (Pfohl, 2010)
The resulting flexibility leads to complex material
flows and highly dynamic production processes. As
a result, scheduling within a shop floor is quite
difficult and often referred to as the job shop or
shop floor scheduling problem (Chen et al., 2008).
The complexity of production planning and control
systems in this field is correspondingly high.
3.2 GENERIC SHOP FLOOR MODEL
The evaluation of the control approach introduced
in this paper takes place by means of a generic shop
floor model. The model consists of nine technically
different machines in four workshops (Figure-3).
Every workshop contains an input buffer in front of
the respective machines.
During the simulation period, six different types
of work pieces are manufactured. At this point, all
work piece types run through every workshop. To
reflect the general complexity and dynamics of a
shop floor, the manufacturing steps for one of the
work piece types is variable. Pieces of this type can
pass the production stages in varying orders, while
backflows are possible in workshop 3, as a
consequence of quality effects. Further, the set-up
and processing times differ for every work piece
and machine. This depends on technical
specifications and/or the sequence the work pieces
arrive in.


Figure 3 Schematic view of the shop floor model
The order release takes place in front of the first
workshop. The homogeneous lots comprise up to five work pieces. Finally, the
commissioning forms the end of the production
process.
4. THE NEURAL CONTROL CONCEPT
The proposed concept focuses on combined control of inventory levels and capacity utilization. At this, the capacity utilization denotes the time slice that a machine m processes a work piece or is set up for processing. The calculation is as follows:

CU_m = PT_m + ST_m    (1)

1. CU_m: Capacity utilization of machine m
2. PT_m: Time slice machine m works
3. ST_m: Time slice machine m is set up

The inventory level is based on the average of the machine-specific inventories of the considered workshop. Equation 2 defines the individual inventory calculation for every machine m:

I_m = Σ (PT_i + ST_i), summed over i = 1, ..., k    (2)

1. I_m: Inventory level of machine m
2. PT_i: Processing time for work piece i on machine m
3. ST_i: Setup time for work piece i on machine m
4. i: Current work piece i on machine m
5. k: Number of work pieces within the buffer
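Read this way, both control variables can be computed directly from a machine's activity shares and buffer contents. A minimal sketch, using our own helper names and assuming equations 1 and 2 as reconstructed above:

/** Sketch of the two control variables per machine (helper names are our own). */
public class ControlVariables {

    /** Eq. 1: utilization as the share of time spent processing or setting up. */
    static double capacityUtilization(double processingShare, double setupShare) {
        return processingShare + setupShare; // both as fractions of the period
    }

    /** Eq. 2: inventory as the total work content (minutes) in the buffer. */
    static double inventoryLevel(double[] processingTimes, double[] setupTimes) {
        double inventory = 0.0;
        for (int i = 0; i < processingTimes.length; i++)
            inventory += processingTimes[i] + setupTimes[i];
        return inventory;
    }

    public static void main(String[] args) {
        // three buffered work pieces, processing/setup times in minutes (toy data)
        double[] pt = {12.0, 20.0, 15.0};
        double[] st = {3.0, 5.0, 8.0};
        System.out.println("I_m  = " + inventoryLevel(pt, st) + " min"); // 63.0 min
        System.out.println("CU_m = " + capacityUtilization(0.25, 0.03)); // 0.28
    }
}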

Within the shop floor, every workshop is equipped
with one neural control network per control
variable. Together, the neural controllers form a
cascaded control structure, with the capacity control
as the inner and the inventory control as the outer
control loop. The inventory levels are decisive for
the distribution of work pieces between the different
workshops. The allocation of work pieces to
machines inside a workshop follows the capacity
utilization.
In this context, the general control flow provides a transfer of work pieces depending on set-points for the inventory levels of the workshops. Redistribution only takes place if it does not exceed the desired limit. To avoid a standstill of individual workshops, however, a transfer is allowed in special cases. A special case occurs when compliance with the desired inventory would lead to a blockade in one or more workshops. Within a workshop, the capacity control assigns the work pieces waiting in the buffer to the available machines, as sketched below.
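The following fragment, written with illustrative interfaces and names of our own rather than the authors' implementation, summarizes this cascaded decision flow: the outer (inventory) loop decides whether a work piece may move to the target workshop, the inner (capacity) step picks the machine ranked highest:

import java.util.List;

/** Sketch of the cascaded control flow (illustrative interfaces and names). */
interface InventoryController { double factor(Workshop w); }                   // outer loop
interface CapacityController  { Machine best(List<Machine> m, WorkPiece p); }  // inner loop

class Workshop { boolean previousStageBlocked; List<Machine> machines; }
class Machine {}
class WorkPiece {}

class CascadedControl {
    InventoryController inventoryCtrl;
    CapacityController capacityCtrl;

    /** Returns the machine the piece is assigned to, or null if it stays put. */
    Machine dispatch(WorkPiece piece, Workshop target) {
        // factor > 1 means the actual inventory is below the desired set-point
        boolean belowSetPoint = inventoryCtrl.factor(target) > 1.0;
        boolean specialCase = target.previousStageBlocked; // avoid upstream standstill
        if (!belowSetPoint && !specialCase) return null;
        // winner-takes-all over the capacity ranking inside the workshop
        return capacityCtrl.best(target.machines, piece);
    }
}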
The neural inventory controllers have a feed-forward architecture. The precise design depends on the position of the considered workshop within the material flow. In the following, the control network of a workshop with three machines will serve as an example. The network has a 4:12:12:1 topology with four inputs, one output and two hidden layers with 12 neurons each. Figure-4 shows a schematic view of the corresponding network. For space reasons, a black box replaces the detailed presentation of the hidden layers. This also refers to the approximation of possible coherences between the input and output data in a black-box manner.

Figure 4 Schematic view of a neural inventory network

The depicted network computes an inventory-based factor (W_KZ), which is decisive for the distribution decision. Hence, it processes the inventory deviation for the three machines as well as the number of breaks for the previous workshop in the material flow.

The deviation is defined as the quotient between the desired and the actual inventory level. The use of this quotient instead of the difference leads to a normalization of the input values for the neural networks. Further, the quotient reflects the ratio between actual and desired inventories. This simplifies the generalization of the neural networks, as absolute values always depend on closely restricted situations.

The amount of breaks denotes the time the machines of the previous workshop are blocked. A high amount of breaks is an indication of an overload. In this case, a redistribution can take place despite a possible exceeding of the inventory limits. With regard to the example, this implies the following input variables:

1. E_BSm: Normalized inventory error for machine m
2. PA_n-1: Amount of breaks for the previous workshop n-1

The corresponding neural network for the capacity control has a 6:12:12:3 topology. It processes six input values and computes a ranking for the three available machines. At this, the machine with the highest ranking gets the respective work piece for machining (winner-takes-all). Figure-5 depicts the neural capacity controller for the example workshop with three machines.

Figure 5 Schematic view of a neural capacity network

The controller processes the following input values:

1. t_AZn + t_RZn: Setup and processing time of the regarded work piece on machine m
2. e_KZm: Current capacity utilization of machine m
3. Y_KZm: Ranking for the redistribution decision with regard to machine m

Both types of control networks run through a supervised learning procedure. The learning and validation data were recorded during test runs of the shop floor model introduced in the previous section. During these runs, the control was based on simple priority rules. At this point, only redistribution decisions with the desired results found entrance into the training and validation database as input-output pairs.

5. EXPERIMENTAL VALIDATION

The experimental validation comprises two simulation runs, simulating a production period of 30 days each. Within both runs, 5000 orders run through the shop floor. As mentioned in section 3.2, every order comprises a lot of 1 up to 5 pieces. The main difference between the two setups is the
desired inventory level. The first run is based on a general inventory limit (mentioned as set-point in
section 4) of 60 minutes for every workshop of the
shop floor. The second simulation works with a
limit of 80 minutes.

The neural control concept affects workshops 2, 3 and 4, excluding workshop 1, as the first production stage receives its orders directly from the order release. The following exemplarily discusses the results obtained for workshop 2 with an inventory limit of 60 minutes; this is the first workshop with a neural control within the material flow. Figure-6 depicts the inventory course of machine 2 inside the considered workshop. The remaining two machines are not depicted. The illustration covers an extract of approximately 21 days.
At this point, the first 12 hours (grey shaded)
represent the initial period of the simulation. The
missing nine days cover the phasing-out period of
workshop 2. Further, days with a decreasing
occupancy towards the end of the simulation are left
out. The length of this period results from the front
position of the considered workshop within the
material flow. Neither period flows into the inventory analysis. The adjusted inventory curve shows a typical uneven course with only a few variances. The averages mostly correspond to the desired values: machine 2 has an average inventory of 63 minutes, machine 1 achieves approximately 70 minutes and machine 3 is around 75 minutes. The overall deviation amounts to between 3 and 16 minutes.
The capacity utilization is, in contrast to the
satisfactory inventory values, insufficient (Figure-
7). The utilization of the machines in workshop two
ranges from 27.40% to 28.96%. The average of
27.66% constitutes the minimal value for the whole
shop floor. At this point, the results extend to a
maximum of 41% for workshop 3. Further, the curves for all machines belonging to this workshop show a noticeably even course with only small deviations after the transient phase.
The insufficient capacity utilization has two causes: the distribution of work piece
types within the job data and the physical structure
of the shop floor. The work piece types are equally
distributed over the job data. The even course of the
utilization after the end of the transient phase is a
direct consequence. Further, the physical structure
Figure 6 Inventory course of machine 2 in workshop 2
Figure 7 Capacity utilization of workshop 2


determines the number of available processing
alternatives for a work piece. Workshop 3 contains,
in contrast to the other workshops, only two
machines and therefore achieves the highest
capacity utilization.
The lead times of the six work piece types
underline this development. Figure-8 sketches the
course of all types during the simulation. The
covered period amounts to the whole simulation run. The first two and a half days can be seen as the initial phase of the whole shop floor. The phasing-out period is left out.
The curves are even after the transient phase and end with a value of approximately nine hours. Work piece type one (red curve) is the only exception with a generally
lower lead time of seven hours. This results from
the varying processing order of this type, which
leads to a high flexibility for the redistribution
decisions.
Overall, when applying an inventory limit of 60
minutes for the whole shop floor, the simulation
results render an acceptable approximation of the
desired limit value. The corresponding capacity
utilization is qualitatively satisfactory. The utilization curves of the machines are even with small deviations after the transient phase. In contrast, the quantitative results are not satisfactory, as the utilization is only around 27.66% on average for the example workshop and at most 41% for the whole shop floor.
A repetition of the experiments with an inventory
limit of 80 minutes leads to quite similar results
with regard to the inventory limits (Figure-9 shows
the results, using machine 2 as an example again).
The average inventory of machine 1 amounts
approximately 84 minutes. Machines 2 and 3 hold
an average inventory of 87 and 94 minutes. The
deviation is on average slightly better than in the
first run and ranges from 1.8 up to 14.3 minutes.
The uneven course of the inventories remains
unchanged.
The capacity utilization for the example
workshop during the second run improves from
27.66% to 43.91% (Figure-10). This improvement
is remarkable, as the number of machines and the
used order data stays unchanged. The course of the
utilization is even, similarly to the first results with
a smaller inventory limit of 60 minutes.
Figure 8 Lead times of all six work piece types
Figure 9 Inventory course of workshop 2 with an inventory limit of 80 minutes


The increase of the limits reduces the lead times
for all six work piece types (Figure-11). The
maximum value during the simulation period is
around 7 hours. Similar to the first run, work piece type one shows the lowest lead time due to its variable
processing order. For this work piece type, the value
is around five hours. Overall, the reduction ranges
between two and four hours.
6. CONCLUSION AND OUTLOOK
This paper presents an approach for the combined
control of inventory and capacity utilization within
a shop floor production. The control concept
includes the use of artificial neural networks as
inventory and capacity controllers in a cascaded
control structure.
At this, the neural network for inventory control
is responsible for the redistribution of work pieces
between different workshops on the shop floor.
Meanwhile, the neural capacity controller assigns
single work pieces to available machines belonging
to the respective workshop.
The evaluation of this approach by means of a
generic material flow model shows a good
performance relating to the compliance of the set
inventory limit. The capacity results vary; they are closely coupled to the set inventory limits. A limit of 60 minutes for the whole shop
floor leads to a low capacity utilization, while an
increase to 80 minutes clearly improves the
obtained results.
The close relationship between the inventory
limit and the achieved utilization of the shop floor
makes a dynamic and continuous adjustment of the
set limit interesting for future research. Further, the
composition of the order data and its effect on
capacity utilization and inventory development
should be investigated.
In the field of neural network research, the further development of the neural controllers is of major interest. At this, the possible suitability
of different network architectures and
configurations should be a central point. As the
quality and performance of neural networks in
practical applications is closely related to the
learning process, the continuous learning of neural
networks is very important. Therefore, the
development of new, possibly hybrid network
architectures should be advanced.

Figure 10 Capacity utilization of workshop 2 for an inventory limit of 80 minutes
Figure 11 Lead times of all six work piece types for an inventory limit of 80 minutes

7. ACKNOWLEDGMENTS
This research is funded by the German Research Foundation (DFG) as part of the project "Automation of continuous learning and examination of the long-run behaviour of artificial neural networks for production control" (index SCHO 540/16-1).



A MULTI-AGENT-ENABLED EVOLUTIONARY APPROACH TO SUPPLY
CHAIN STRATEGY FORMULATION
Ray Y. Wu
University of Westminster
r.wu1@westminster.ac.uk
David Z. Zhang
University of Exeter
d.z.zhang@exeter.ac.uk

ABSTRACT
This paper presents a research framework for investigating the impact of different supply chain
strategies on operational performances of companies, and exploring how such strategies could be
formulated in a given competitive environment. Supply chains consist of multiple independent
companies with a dynamic relationship of interaction and competition. They behave as dynamic adaptive systems, presenting complex emergent behaviour whose uncertainty is difficult for management to cope with. The research framework employs multi-agent technology and
associated systems modelling methods to represent and simulate such interactive and competitive
behaviour in a supply chain network. Furthermore, on the basis of the multi-agent simulation
platform, an evolutionary approach is developed for identifying best strategies for supply chains
operating in different competitive settings. The research will gain further understanding as to how
strategies evolve in fast-changing, interactive and competitive situations, which will suggest
significant research implications and form practical guidance for industries.
KEYWORDS
Supply Chain Strategy, Software Agent, Simulation, Operational Performance

1. BACKGROUND
In the last two decades, with the implementation of
lean practices, the introduction of mass
customisation, and the move towards globalisation,
companies face more severe competition in the
markets than ever before. Constant pursuance, by all
companies, of the maximum fulfilment of customer
requirements for product variety, cost efficiency,
and responsiveness has resulted in a dramatic
change in the way supply chains are organised and
operate. For instance, many companies now source
globally rather than locally. With the move of
manufacturing sites to locations where cost could be
reduced, there has been a redistribution of profits
from manufacturers towards the downstream of the
supply chains. In order to obtain better competitive
positions, companies have made significant efforts
to improve the relationships with their customers
and suppliers and to develop strategic cooperation.
As a result, today's competition is emerging to a
greater extent between supply chains rather than
between companies (Fawcett and Magnam, 2002).
As individual participants in a supply chain tend
to maximise their own profit and there are few
incentives to improve the performance of the overall
supply chain, the global optimisation of supply
chain operations is difficult to achieve. Cooperation
among companies in the same supply chain is
necessary. However, goodwill from one company is
not sufficient to support cooperation since other
companies may simply take advantages of it.
Therefore, there is a need for a mechanism to
coordinate the operations of participants, such that
individuals' efforts to maximise their own
performance also make contribution towards the
global maximum of the supply chain performance.
Cooperation can take place in the form of close
production priority and delivery relationships and
through information sharing (Li and Liu, 2006).
Close relationships speed up flows of information,
goods and money and yield reliable supply chains.


Information sharing, on the other hand, enables
precise forecasting and improved planning and
scheduling, and reduces the bullwhip effect (Lee et al,
2000; Cachon and Fisher, 2000; Forza and Salvador,
2001). The coordination of cooperation between
members can be achieved by the use of coordination
strategies which include policies and contracts that
define the forms of relationships, information
sharing, risk sharing, and profit sharing between
companies (Tsay, 1999; Li and Kouvelis, 1999;
Klastorin et al, 2002; Qin et al, 2007; Xiao et al,
2007; and Miragliotta et al, 2009). Some policies
may also have to be dynamic to cope with complex
interactions between members and the dynamic
nature of competition (Tsay et al, 1998; Kamrad and
Siddique, 2004; and Jammernegg and Kischka,
2005). The questions are: what constitutes an effective coordination strategy for a supply chain, and how can a good strategy be identified?
In the literature, work has been carried out to
investigate ways to model the effects of some of the
policies, such as pricing, on supply chain
performances. The investigations so far have been
based on analytical techniques. However, as supply
chains are cooperative and competitive systems
where interactions among members are complex
(Surana et al, 2005) and could lead to chaotic
behaviour (Wu and Zhang, 2007), mathematical
models were found to be unable to represent fully
the level of complexity involved and predict the
dynamic behaviour of such systems (Axelrod,
1997). In this regard, game theory appears to
provide an alternative methodology. However it
requires explicit strategies-payoff data dependent on
interactions between customers and suppliers which
are not always available. Multi-agent technology,
with the ability to model complex systems using
distributed agents which interact to produce
emergent behaviour, appears to offer an advantage.
However, work carried out so far (Krothapall and
Deshmukh, 1999; Calinescu et al, 2003; van der Zee
and van der Vorst, 2005; and Piramuthu, 2005) in
this area mainly focused on operational, rather than
strategic policies. For instance, Zhang et al (2007)
and Akanle and Zhang (2008) have investigated the
use of multi-agent technology to model and
optimise operational decisions involved in a
dynamically integrated manufacturing system or
supply chain network. Others from the University of
Michigan have used multi-agents to model supply
chain operational environment and have developed
an internet-based game (the Trading Agent
Competition Supply Chain Management game, or
TAC SCM) (Eriksson et al, 2006), for
manufacturers to explore the effectiveness of
different operational decisions. The game considers
a three-tier supply chain, where suppliers and
customers are modelled as resources in the
environment and participants take part as
manufacturers competing against each other. It
provides a useful platform for manufacturers to
explore alternative operational decisions, but does
not support the implementation of coordination
strategies across supply chain members.
Therefore, in the area of supply chain
coordination, there is currently no technology
available to support the identification of effective
strategies. In fact, a comprehensive understanding of
the whole concept of coordination strategies for
supply chains is missing as investigations so far
have only managed to consider few policies that
might form part of the strategies. The fundamental
questions are: What are the sets of policies and
business practices that define a coordination strategy
for a supply chain? Are there strategies which will
result in better overall supply chain performance
than others? If there are, how are they to be
identified? Can they be identified through an
optimisation process rather than through an ad-
hoc trial and error process? Furthermore, how does
the best coordination strategy, according to a given
set of performance measures, vary with customer
demand patterns and the characteristics of market,
products and competitions?
This research will make an initial attempt to
answer these questions. In particular, the project
will investigate whether or not a hybrid approach
combining multivariate analysis, multi-agent
modelling, and evolutionary optimisation presents a
way of developing answers to these questions.
Multivariate analysis will be used to investigate
elements that constitute coordination strategies.
Multi-agent modelling will be investigated as
possible ways of simulating the effects of individual
strategies under specific competition environment,
while evolutionary optimisation will be investigated
as a possible mechanism for finding better and
better strategies. The detailed research methodology
is described below.
2. METHODOLOGY
Software agents are considered autonomous and good candidates for applications requiring constant adaptation. This feature makes the multi-agent system (MAS) a desirable tool for supply chain/network simulation.
During the past decades, with the development of
computer technology, application of software agents
for simulation provided manufacturing industry with
a convenient way of modeling processes that were
distributed over space and time (Kwon and Lee,
2001). Under these conditions, MAS technology subsequently became a focus of the research community.
The concept of MAS is based on distributed


artificial intelligence (DAI) and meanwhile it also
refers to system design and analysis using object-
oriented methodology with human interfaces
(Jennings et al., 1998). It is acknowledged that
MAS is characterized by autonomous interaction,
adaptability to environmental changes and rational behaviour (Lee and Kim, 2008; Li and Xiao, 2006).
The MAS consists of a group of software agents,
each of which takes specific roles within an
environment and interacts with others for achieving
their responsibilities and objectives (Fox et al.,
2000; Kwon and Lee, 2001).
In the context of supply chain networks which are
composed of interacting entities and exhibit a wide
range of dynamic behaviors in terms of environment
changes, MAS has seen enormous application with respect to SCM and is considered one of the most promising technologies in this discipline.
Fox et al. (2000) proposed procedures for
constructing models and tools which facilitate MAS
to sort out coordination and communication in real-
world application for SCM. Huang and Nof (2000)
described an approach through agent formation and
protocol formation to reduce uncertainty and to keep
productivity in manufacturing systems. By resorting to MAS, issues such as decision-making problems (Hu et al., 2001), adaptive inventory control in ERP (Kwon and Lee, 2001) and knowledge management (Wu, 2001) were also
supported and developed. Furthermore, Allwood
and Lee (2005) presented new agent architectures to
model competitive supply chain networks dynamics,
which had novel features including vendor
selection, preferred distribution, production and
inventory management, and price determination
based on competitive behavior.
The project uses multi-agent technology to
simulate the competition in a supply chain. Each
player in the supply chain is simulated by an intelligent software agent which intends to maximise its own performance. The details of the
methodology are described as follows.
3. SIMULATION MODEL AND ITS
ARCHITECTURE
The architecture of the model, as shown in Figure-1,
includes a three-tier supply network comprising
customers, retailers, manufacturers and suppliers for
a particular category of products. Competitions take
place among the supply chains of each product
brand. The participants in the same tier compete
though they do not communicate with each other. A
retailer may sell multiple brand products of different
manufacturers. A supplier may also provide raw
materials or components to different manufacturers.



Figure 1 An illustration of a supply network
3.1. CONSUMER AGENTS
Consumers are the final customers of products.
They generate demand and are simulated by
customer agents. The consumers' purchase behaviour and decision-making process are simulated by the decoy effect (Meyer and Johnson, 1995), in which the consumer-perceived trade-off is projected into the product attributes.
Consumers are classified into groups according to
age, income, occupation, education, and
psychological status. Each customer agent
representing a consumer is assigned to a consumer
group according to a statistical distribution and the
agent assumes the attributes of the group. Such
attributes determine the purchase behaviour of
individual agents. For instance, a consumer in a high
income group tends to pursue high end products.
Some consumers request products to be available as
soon as they purchase, while others do not bother
waiting for a few days. Some consumers are easily
affected by friends or relatives, while some others
trust only the reviews made by experts. Some are
loyal to big brands while some put emphasis on
functions. Consumers are connected by networks
through which they affect each other's purchase
decisions resulting in collective emergent behaviour.
There are different types of connections which are
all being simulated in this work.
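As an illustration of this assignment scheme, the sketch below draws customer agents from consumer groups according to a statistical distribution. The group names, population shares and attribute values are invented for the example; the study does not prescribe them.

import random
from dataclasses import dataclass

# Illustrative consumer groups with a statistical distribution over the
# population; the shares and attribute values below are assumptions.
GROUPS = {
    # name:         (share, price_sensitivity, patience_days, peer_influence)
    "high_income":  (0.2,   0.2,               1,             0.3),
    "brand_loyal":  (0.3,   0.5,               3,             0.2),
    "function_led": (0.5,   0.8,               7,             0.6),
}

@dataclass
class ConsumerAgent:
    group: str
    price_sensitivity: float   # how strongly price drives the purchase decision
    patience_days: int         # acceptable waiting time for delivery
    peer_influence: float      # weight of the social network on the decision

def spawn_consumer():
    """Assign a new customer agent to a group according to the distribution."""
    names = list(GROUPS)
    shares = [GROUPS[n][0] for n in names]
    name = random.choices(names, weights=shares, k=1)[0]
    _, ps, pd, pi = GROUPS[name]
    return ConsumerAgent(name, ps, pd, pi)

population = [spawn_consumer() for _ in range(1000)]
print(sum(c.group == "high_income" for c in population), "high-income consumers")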
3.2. GENERIC SUPPLIER AGENTS
The suppliers, manufacturers and retailers are
modelled with a generic supplier agent architecture,
which includes a sales sub-agent, a production sub-
agent, a procurement sub-agent, a coordination sub-
agent, and an integrated decision-making sub-agent,
as shown in Figure-2.










Figure 2 The architecture of a generic supplier agent
The sales sub-agent negotiates with customers, determines the sales price, forecasts and manages demands, controls inventory, and handles orders and deliveries. The production sub-agent produces the production schedule, determines production priority and makes resources plans, such as investment for production capacity. The procurement sub-agent carries out purchasing, manages raw materials and components inventory, and negotiates with suppliers. The coordination sub-agent determines the coordination strategy to optimise the supply chain operations. In a coordinated supply chain, as shown in Figure-3, the manufacturer has a closer relationship (coordinated links) with some of the retailers and suppliers. The coordination strategy, which includes profit, information, work and risk sharing policies, applies to these coordinated links. The integrated decision-making sub-agent makes higher (strategic) level decisions than the other sub-agents by setting strategic rules or constraints for them. The generic agent architecture is configured to generate retailer agents, manufacturer agents and supplier agents.

Figure 3 The supply chain of a product
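The composition described above maps naturally onto a small class structure. The following Python sketch is illustrative only (the system itself is implemented in Java, as section 5 notes); each sub-agent is reduced to one representative responsibility, and all method names and numbers are assumptions.

class SalesSubAgent:
    def quote(self, demand_forecast):
        # Negotiate price, manage demand and finished-goods inventory.
        return max(1.0, 10.0 - 0.01 * demand_forecast)

class ProductionSubAgent:
    def schedule(self, orders):
        # Produce the production schedule and set priorities.
        return sorted(orders, key=lambda o: o["due"])

class ProcurementSubAgent:
    def purchase(self, materials_needed):
        # Buy materials and negotiate with suppliers.
        return dict(materials_needed)

class CoordinationSubAgent:
    def applies(self, link):
        # Profit/information/work/risk sharing on coordinated links only.
        return link.get("coordinated", False)

class IntegratedDecisionSubAgent:
    def constraints(self):
        # Strategic rules constraining the other sub-agents.
        return {"max_inventory": 100}

class GenericSupplierAgent:
    """Configured as a retailer, manufacturer or supplier agent."""
    def __init__(self, role):
        self.role = role
        self.sales = SalesSubAgent()
        self.production = ProductionSubAgent()
        self.procurement = ProcurementSubAgent()
        self.coordination = CoordinationSubAgent()
        self.decision = IntegratedDecisionSubAgent()

manufacturer = GenericSupplierAgent("manufacturer")
print(manufacturer.sales.quote(demand_forecast=200))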
4. SIMULATION AND EVOLUTION PROCEDURE
A run of simulation is arranged so that there are several manufacturing agents (brand owners), suppliers and retailers operating together to fulfil customers' demand in a competing environment. At a particular time, each manufacturing agent is governed by a strategy comprising a set of strategic rules corresponding to manufacturing, marketing, purchasing and supply chain coordination respectively. These rules will determine the agent's decisions about production and inventory, its relationships with distributors, retailers and consumers, its policies as to how to align with suppliers, and its policies about information and profit sharing along the supply chain. Scenarios of conflicting interests among partners along the supply chain can also be simulated in comparison with a coordinated supply chain.

The process of strategy simulation and evolution will take the form of an iterative process. At the beginning, each manufacturing agent will be allocated a basic strategy. These agents will then enter an iterative loop of competition and strategy evolution. As shown in Figure-4, within each iterative cycle, agents implement their respective strategies through reconfiguration and compete for a period of time using the strategies. The results of competition are then analysed. Agents with top performance will carry their strategies to the next iterative cycle. Those in the middle will carry out an incremental improvement to their strategies, while those at the bottom will make a drastic change to their strategies. The strategies will be implemented as a combination of rules and data and their adaptation carried out through techniques similar to evolutionary computation. The agents then carry the updated strategies forward to the next cycle of competition and evolution. This process is repeated until a satisfactory winner results. The strategies of the winner in the final cycle will be considered a paradigm for companies operating in the specific competition environment.

Figure 4 The cycle of evolution
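The iterative cycle can be sketched as a loop over competition, ranking and strategy adaptation. In the minimal Python sketch below, the tier boundaries (top, middle and bottom thirds), the mutation magnitudes and the toy profit function are placeholder assumptions standing in for the full multi-agent competition.

import random

def compete(strategy):
    # Placeholder for a full competition period on the multi-agent platform:
    # here profit is a toy function of two strategy parameters.
    return -(strategy["price"] - 8) ** 2 - (strategy["sharing"] - 0.5) ** 2

def mutate(strategy, scale):
    return {k: v + random.uniform(-scale, scale) for k, v in strategy.items()}

# Each manufacturing agent starts from a basic strategy.
strategies = [{"price": random.uniform(5, 15), "sharing": random.random()}
              for _ in range(9)]

for cycle in range(50):
    ranked = sorted(strategies, key=compete, reverse=True)
    third = len(ranked) // 3
    top = ranked[:third]                                      # keep strategies as-is
    middle = [mutate(s, 0.1) for s in ranked[third:2*third]]  # incremental change
    bottom = [mutate(s, 2.0) for s in ranked[2*third:]]       # drastic change
    strategies = top + middle + bottom

print("winning strategy:", max(strategies, key=compete))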
5. IMPLEMENTATION
The simulation architecture of the supply network is built with an agent server and individual agents in Java. Consumers and their behaviour are implemented within the agent server. All other participants in the supply network are implemented by the generic supplier agent as mentioned in section 3.
The role of each individual agent can be configured
when the agent is registered. The strategy adopted in
an agent can be reconfigured during simulation.
The agent server is responsible for registration of
individual agents by recording their identifications.
It controls the time frame, i.e., the order and supply
cycle, during simulation. The agent server also
provides a supporting platform for information
exchanges among different agents and records the
simulation data into the central database. For
example, the agent server includes a few supporting
software tools such as a simulation manager, a time-
manager and an internal bank for the virtual
materials, information and cash flows within the
supply network.
An individual agent in the simulation model is
built with different methods and objectives within
each function of an organization as described in
section 3 to reflect different strategies in supply
chain management, for example, minimised inventory for lean practice, order-triggered replenishment for just-in-time (JIT), and keeping a certain inventory level with priority scheduling for quick responsiveness or agility. These different
strategies are dynamically reconfigurable during the
simulation to implement evolutionary approaches.
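A minimal sketch of the server-side mechanics described above follows (the actual system is written in Java); the class and method names are invented for illustration, and the central database is reduced to a simple log.

class AgentServer:
    """Registers agents, steps the order/supply cycle, records the data."""
    def __init__(self):
        self.agents = {}
        self.clock = 0
        self.log = []          # stands in for the central simulation database

    def register(self, agent_id, agent):
        # Registration of individual agents by recording their identifications.
        self.agents[agent_id] = agent

    def step(self):
        # One order-and-supply cycle of the simulated time frame.
        self.clock += 1
        for agent_id, agent in self.agents.items():
            self.log.append((self.clock, agent_id, agent.act()))

class RetailerAgent:
    def __init__(self, strategy="jit"):
        self.strategy = strategy   # reconfigurable during the simulation

    def act(self):
        # Order-triggered replenishment under JIT, buffer stock under agility.
        return "replenish_on_order" if self.strategy == "jit" else "hold_buffer"

server = AgentServer()
server.register("retailer-1", RetailerAgent("jit"))
server.step()
server.agents["retailer-1"].strategy = "agile"   # strategy reconfiguration
server.step()
print(server.log)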
6. SUMMARY
This paper has proposed a research framework to investigate the impact of different supply chain strategies on the operational performances of companies and how such strategies could be developed in a given competitive environment.
The research framework employs multi-agent
technology and associated systems modelling
methods to represent and simulate such interactive
and competitive behaviour in a supply chain
network. On the basis of the multi-agent simulation
platform, an evolutionary approach is developed for
identifying best strategies for supply chains
operating in different demand and competitive
settings. Through simulation, the research will gain
further understanding as to how strategies evolve in
fast-changing, interactive and competitive
situations, which may suggest significant research
implications and form practical guidance for
industries.
The future work includes case studies to collect
real world data to test the simulation model and run
the evolutionary programme for application
guidance and meaningful implications to each
specific case.
REFERENCES
Allwood JM and Lee JH, The design of an agent for
modeling supply chain network dynamics. Int. J.
Prod. Res., Vol. 43, 2005, pp 4875-4898
Akanle OM and Zhang DZ, Agent-based model for
optimising supply chain configurations, Int. J. of
Prod. Econ., Vol. 115, No. 2, 2008, pp 444-460
Axelrod R, The Complexity of Cooperation: Agent-
Based Models of Competition and Collaboration,
Princeton University Press, Princeton, New Jersey,
1997
Cachon GP and Fisher M, Supply chain inventory
management and the value of shared information,
Management Science, Vol. 46, No. 8, 2000, pp 1032-1048
Calinescu A, Efstathiou J and MacCarthy B, Agent-
Based Modelling in Manufacturing and Supply
Chains: Potential Benefits and Open Questions, The
International Workshop on Complex Agent-based
Dynamic Networks, 5 - 7 October 2003, Saïd Business
School, University of Oxford
Eriksson J, Finne N and Janson S, Evolution of a supply
chain management game for the Trading Agent
Competition, AI Communications, Vol. 19, 2006, pp 1-12
Fawcett SE and Magnam GM, The rhetoric and reality
of supply chain integration, Int. J. Physical
Distribution & Logistics, Vol. 32, No. 5, 2002, pp 339-
361
Forza C and Salvador F, Information flows for high-
performance manufacturing, Int. J. Prod. Econ., 70,
2001, pp 21-36
Fox MS, Barbuceanu M and Teigen R, Agent-oriented
supply-chain management. Int. J. Flex. Manuf. Sys.,
Vol. 12, 2000, pp 165-188
Hu Q, Kumar A and Zhang S, A bidding decision model
in multiagent supply chain planning. Int. J.
Production Research, Vol. 39 2001, pp 3295-3310
Huang C-Y and Nof SY, Formation of autonomous
agent networks for manufacturing systems. Int. J.
Prod. Res., Vol. 38, 2000, pp 607-624
Jammernegg W and Kischka P, Dynamic, customer-
oriented improvement of supply networks, European
Journal of Operational Research, Vol. 167, 2005, pp 413-426

226

Jennings NR, Sycara K and Wooldridge M, A roadmap
of agent research and development. Autonomous
Agents and Multi-agent Systems, Vol. 1, 1998, pp 7-38
Kamrad B and Siddique A, Supply contracts, profit
sharing, switching, and reaction options, Management
Science, Vol. 50, No. 1, 2004, pp 64-82
Klastorin TD, Moinzadeh K and Son J, Coordinating
orders in supply chains through price discounts, IIE
Transactions, Vol. 34, 2002, pp 679-689
Kwon OB and Lee JJ, A multi-agent intelligent system
for efficient ERP maintenance. Expert system with
application, Vol. 21, 2001, pp 191-202
Krothapall NKC and Deshmukh AV, Design of
negotiation protocols for multi-agent manufacturing
systems, Int. J. Prod. Res., Vol. 37, No. 7, 1999, pp
1601-1624
Lee JH and Kim CO, Multi-agent systems applications
in manufacturing systems and supply chain
management: a review paper. Int. J. Prod. Res., Vol.
46, 2008, pp 233-265
Lee HL, So KC and Tang CS, The value of information
sharing in a two-level supply chain, Management
Science, Vol. 46, No. 5, 2000, pp 626-643
Li CL and Kouvelis P, Flexible and risk-sharing supply
contracts under price uncertainty, Management
Science, Vol. 45, 1999, pp 1378-1398
Li J and Liu L, Supply chain coordination with quantity
discount policy, Int. J. Prod. Econ., Vol. 101, No. 1,
2006, pp 89-98
Li H and Xiao R, A multi-agent virtual enterprise model
and its simulation with Swarm. Int. J. Prod. Res.,
Vol. 44, 2006, pp 1719-1737
Makris, S and Chryssolouris, G, "Customer's behavior
modeling for manufacturing planning", Int. J. Com.
Int. Manuf., Vol. 23, No. 7, 2010, pp 619-629
Makris, S, Xanthakis, V, Mourtzis, D, and Chryssolouris,
G, "On the information modeling for the electronic
operation of supply chains: A maritime case study",
Robot. Com.-Int. Manuf., Vol. 24, No. 1, 2008, pp
140-149
Meyer R and Johnson EJ, Empirical generalizations in
the modeling of consumer choice, Marketing Science,
Vol. 14, 1995, pp 180-189
Miragliotta G, Brun A and Soydan IA, Coordinating
multi-business sales through management simulators,
Int. J. Prod. Econ., Vol. 121, No. 2, 2009, pp 533-549
Piramuthu S, Knowledge-based framework for
automated dynamic supply chain configuration,
European J. Oper. Res., Vol. 165, 2005, pp 219-230
Qin Y, Tang H and Guo C, Channel coordination and
volume discounts with price-sensitive demand, Int. J.
Prod. Econ., Vol. 105, 2007, pp 43-53
Surana A, Kumara S, Greaves M and Raghavan, UN,
Supply-chain networks: a complex adaptive systems
perspective, Int. J. Prod. Res., Vol. 43, No. 20, 2005,
pp 4235-4265
Tsay AA, The quantity flexibility contract and supplier
customer incentives, Management Science Vol. 45,
No. 10, 1999, pp 1339-1358
Tsay AA, Nahmias, S and Agrawal, N, Modeling supply
chain contracts: a review, In: Tayur, S., Ganeshan, R.,
Magazine, M. (Eds.), Quantitative models for supply
chain management, Kluwer Academic, New York.
1998
Van der Zee DJ and van der Vorst JGAJ, A modelling
framework for supply chain simulation: opportunities
for improved decision making, Decision Science,
Vol. 36, No. 1, 2005, pp 65-95
Wu DJ, Software agents for knowledge management:
coordination in multi-agent supply chains and
auctions. Expert Systems with Applications, Vol. 20,
2001, pp 51-64
Wu Y and Zhang DZ, Demand fluctuation and chaotic
behaviour by interaction between customers and
suppliers, Int. J. Prod. Econ., Vol. 107, No. 1, 2007,
pp 250-259
Xiao T, Qi X and Yu G, Coordination of supply chain
after demand disruptions when retailers compete, Int.
J. Prod. Econ., Vol. 109, No. 1-2, 2007, pp 162-179
Zhang Z, Anosike A and Lim MK, Dynamically
Integrated Manufacturing Systems (DIMS) a multi-
agent approach, IEEE T. Sys., Man and Cyb., Part A:
Systems and Humans, Vol. 37, No. 5, 2007, pp 824-
850



DEVELOPMENT OF AN ASSEMBLY SEQUENCE PLANNING SYSTEM
BASED ON ASSEMBLY FEATURES
Hong Seok Park
School of Mechanical and Automotive
Engineering, University of Ulsan, Ulsan,
Korea
phosk@ulsan.ac.kr
Yong Qiang Wang
School of Mechanical and Automotive
Engineering, University of Ulsan, Ulsan,
Korea
y.q.wang82@gmail.com
ABSTRACT
To meet the requirements of industries and support manufacturing planners to make decisions
rapidly and accurately, the assembly features-based assembly sequence planning system is
developed. The system employs a semantic technique for creating an assembly features model. And
there are several functional modules in the assembly sequence planning system to make full use of
assembly features. In the generation of assembly sequences for any product, the core technologies
include the reasoning mechanism for matching assembly features, the algorithm proposed for
automatic generation of assembly sequence and the evaluation method for obtaining the optimal
assembly sequences. To verify the validity and efficiency of the developed system, the assembly
features-based assembly sequence planning is applied to a practical problem, i.e. the assembly of an
automotive module such as oil pump and the corresponding results are presented.
KEYWORDS
Assembly Feature Model, Assembly Sequence Planning, Reasoning mechanism, Evaluation
Method

1. INTRODUCTION
Assembly involves the integration of components
and parts to produce a product or system (Chen et
al., 2008). Assembly planning aims to identify and
evaluate the different ways to build a functional
module from its components. Assembly sequence
planning (ASP) plays an important role in the
assembly plan and affects several aspects of
assembly process as well as assembly productivity
and cost. The assembly sequence planning is the
core problem in the development of computer-aided
assembly planning system. In addition, good ASP has been recognized as a practical way to reduce operation difficulty, the number of tools, assembly product costs and working time, and to improve quality and shrink time to market (Lai and Huang, 2004). Automating the generation of
assembly sequences and their optimisation can
ensure the competitiveness of manufactured goods
and increase profit margins (Romeo M. et al., 2006).
Currently, automatically generating feasible
assembly sequences is still an extraordinarily
difficult task due to the complexity increasing
exponentially with the number of parts. Hence, it
has been an objective for manufacturing industries
to look for effective and suitable methods to
overcome this challenge.
This paper focuses on the computer-aided ASP
system, more specifically on assembly sequence
planning and optimizing. An assembly feature-based
ASP system is proposed by which all feasible
assembly sequences can be reasoned out
automatically and the optimal assembly sequences
can be obtained easily according to the evaluation.
The arrangement of the paper is as follows.
Section 2 gives a literature review on ASP. Section 3
shows the strategy for developing ASP system. In
Section 4, core technologies of assembly feature-
based ASP system are elaborated. In Section 5,


architecture of assembly features-based ASP system
is designed, and the programming system is
implemented and the functionality of the system is
introduced. Section 6 demonstrates the application
of the developed ASP system with the practical
problem. Finally, the conclusions and further
research directions are summarized in Section 7.
2. ASSEMBLY SEQUENCE PLANNING IN
THE AREA OF ASSEMBLY
ASP has received much attention in manufacturing
industries and research projects over the past two
decades. There have been many attempts to solve
and optimize the ASP using various approaches.
These methods can be roughly classified into three
kinds: human-interaction manual method, geometric
feasibility reasoning approach and knowledge-based
reasoning method.
The early assembly sequence planners were
mainly interactive in nature (Priyadarshi and Gupta,
2009). Traditional ASP is performed manually according to the experience and knowledge of industrial engineers, and it mainly focuses on each user's query, either on the connection between a pair of parts or on the feasibility of a single assembly operation. However, if the product is complex, the planner needs to spend a lot of time and energy to determine the sequence, and sometimes cannot ensure that this sequence is feasible or optimal. Therefore, traditional manual analysis does not allow the feasibility of assembly sequences to be easily verified, and it is far from automation.
Thereafter several authors proposed geometry-
based reasoning approaches to generate assembly
sequence. Niu et al. (2003) applied a hierarchical
approach to generating precedence graphs for ASP.
Gu et al. (2008) proposed the procedures to
transform directed graph and AND/OR graph into
symbolic ordered binary decision diagram (OBDD)
for mechanical assembly sequences. Su et al. (2009)
used the 3D geometric constraint analysis (3D-GCA)
and algorithms for spatial ASP. Su (2009) also
presented a hierarchical approach to ASP and
optimal sequences based on geometric assembly
precedence relations (APRs). However, the geometry-based reasoning approach is prone to the combinatorial explosion problem. In order to reduce
the searching space of ASP of complex product,
numerous intelligent algorithms have been
developed and used to generate assembly sequence,
such as genetic algorithms (GAs) (G. Dini et al.,
1999, Romeo M. et al., 2006), artificial neural
network (ANN) (Chen et al., 2008), artificial
immune systems (Chang et al., 2009), particle
swarm optimization algorithm (Guo and Li, 2009;
Wang and Liu, 2010), symbiotic evolutionary
algorithm (Shin et al., 2011), and memetic
algorithm (MA) (Gao et al., 2008, Tseng et al.,
2007). Although most of the aforementioned algorithms improved the efficiency of the search for assembly sequences and avoid the combinatorial explosion problem, they depend upon initial positions and related parameters, which limits the efficiency of finding a globally optimal solution for a complex product. In addition, they tend to converge prematurely to local optima.
To automate the generation of assembly sequences, it is not enough to consider only geometric information. The above methods do not incorporate much assembly knowledge, so they lack sufficient assembly information to deal with ASP in practice. In this context,
knowledge-based reasoning is put forward. Here,
knowledge consists of geometric information,
assembly method, assembly tools and machines, and
other knowledge related to the assembly sequence.
Dong et al. (2005) applied a collaborative approach
to ASP, and knowledge-based approach is proposed
to integrate geometry-based reasoning with
knowledge-based reasoning. Chen et al. (2010)
proposed three-stage integrated approach to promote
the quality of assembly plan and facilitate assembly
sequence optimization via a knowledge-based
engineering system and a robust BPNN (Back
Propagation Neural Network) engine. Park (2000)
developed a knowledge-based system for generation
of an optimal assembly sequence. The advantages of
system are that assembly-oriented information can
be grasped and used efficiently, and some difficult
operation information can be used to evaluate
assembly sequence. Therefore, knowledge-based
method is feasible and available to generate
assembly sequence automatically.
From the above literature analysis, it is known that the information of the components themselves is underutilized and there is no system based on a suitable planning method for generating the assembly sequence.
Therefore, it is necessary to develop a new planning
strategy and algorithm to generate the appropriate
information model for assembly sequence planning
and apply the information model to plan assembly
sequence at the same time.
3. STRATEGY FOR DEVELOPING ASP
SYSTEM
In ASP, the primary step is to generate an assembly
product model. The efficiency of an ASP depends
heavily on it. Rationalization of an assembly
product model is judged by its potential to directly


use CAD data and its capability to effectively assist
the generation of assembly sequences. Features
which combine geometric and technological
information are defined for modelling and planning.
A feature-based product model is suitable for
automatic generation of assembly sequences. From
this viewpoint, the paper presents a novel assembly
feature model which integrated single-part models
and sub-assembly models, and assembly sequence
can be generated based on assembly features model.
Figure-1 shows the systematic procedure of ASP. In
order to implement the ASP system, there
are four main tasks to be studied.


Figure 1 Systematic procedure of ASP
1) Information model of assembly feature for
parts: Assembly features are extracted from the
analysis of parts and subassemblies to build the
model of assembly feature attributes.
2) Strategy for realization of reasoning
mechanism: reasoning mechanism is designed
to match assembly features and determine the
relationship among assembly features.
3) Algorithm for generating all feasible assembly
sequences: The proposed algorithm is used to
generate all feasible assembly sequences with
the help of reasoning mechanism.
4) Evaluation method for obtaining the optimal
assembly sequences: Evaluation method is used
to select the optimal assembly sequences
among all feasible assembly sequences.
4. DEVELOPMENT OF ASP SYSTEM
4.1. FORMING AN ASSEMBLY FEATURE
An assembly operation requires at least two parts,
and the two parts are assembled through their
respective assembly features. So an assembly
feature is here defined as an assembling bridge
between two parts, i.e. a contacting point. In other
words, assembly of parts can be transformed into
their assembly features matching. During the
assembly process, there is lots of assembly-specific
information which should be encapsulated in terms
of assembly features. For better and clear utilization,
assembly features can be divided into two types:


Type 1. Form features, which are described semantically, could be found from 3D drawings, e.g. pocket, column, etc. Form features are used to find their counter features for assembly.
Type 2. Geometric features are expressed by traditional dimensional data which could be obtained from 2D drawings, e.g. 50.00mm, -0.06mm. Geometric features are used to determine whether two parts could be connected or not.
Generally, one part has many generic features,
but not all of them are useful in the assembly process.
In order to have an accurate definition of assembly
feature, assembly drawing is needed to analyse the
assembly process of parts. Because the assembling
process happens at contact surface and two parts
interact with one another, assembly features should
be defined in pairs. This concept is used to
determine all assembly features.
Figure-2 illustrates an example of determination
of assembly features. In Figure-2, part 2 can be
assembled with part 1 using its outside feature, not
the inside feature. So, the outside feature is
considered as an assembly feature for part 2.
Meanwhile, part 1 also uses one of its features to
match part 2's outside feature, so the used feature is
also defined as an assembly feature for part 1.


Figure 2 Determination of assembly features
This kind of definition has several advantages. The first is that it provides a purposeful and precise basis for determining assembly features, because the features are described in consideration of the assembly process, which prevents insufficient or excessive definitions. Another advantage is that all defined assembly features are used up exactly after all parts are assembled completely, which ensures the accuracy and reliability of the results of an ASP.
4.2. GENERATING AN ASSEMBLY
FEATURE MODEL
An assembly feature model includes all necessary
information of ASP. In order to generate an
assembly feature, form features and geometric
features should be extracted respectively. Notably,
form features must be determined firstly, and then
geometric features will be added along with form
features. For the given product, the first step is to
decompose the whole product to determine the
amount of parts in this product and analyze
assembly relations among parts. The product could
be decomposed easily under the environment of
CATIA because there is a function to obtain the
exploded view to make assembly analysis. The
fundamental principle for determining form features
is that there is at least a pair of form features (each
part has one form feature) if two parts could be
assembled. Each form feature should be defined in a semantic way. Through assembly analysis, all
form features could be found and extracted from 3D
drawings. According to the fundamental principle,
the minimum amount of form features could be
determined. After determining all the form features,
the geometric features should be generated with the
help of 2D drawings and include not only dimensions but also technology data related to the assembly process, such as tolerance and roughness. After that, all assembly feature attributes could be obtained completely and they will be integrated into an assembly feature model.


Figure 3 Structure of an assembly feature model
Figure-3 illustrates an example of an assembly
feature model. In Figure-3, the part possesses six
assembly features which are integrated into an
assembly feature model. Every assembly feature is
expressed compactly and sufficiently using the same
tree structure which is made up of form feature and


geometric feature. Assembly feature 03 shows this
tree structure in detail.
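The tree structure of Figure-3 suggests a small data model. The sketch below is one possible encoding, with field names assumed for illustration; the paper does not prescribe a representation.

from dataclasses import dataclass, field

@dataclass
class GeometricFeature:
    dimension_term: str      # e.g. "diameter", "depth"
    value_mm: float          # nominal dimension from the 2D drawing
    tolerance_mm: float      # technology data such as tolerance

@dataclass
class AssemblyFeature:
    name: str                # e.g. "assembly feature 03"
    form: str                # semantic form feature, e.g. "pocket", "column"
    shape: str               # cross-section term, e.g. "circle", "square"
    geometry: GeometricFeature

@dataclass
class Part:
    name: str
    features: list = field(default_factory=list)  # all assembly features of the part

pin = Part("piston pin", [AssemblyFeature(
    "AF01", "column", "circle", GeometricFeature("diameter", 20.000, 0.0075))])
print(pin.features[0].form, pin.features[0].geometry.value_mm)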
4.3. DEVELOPMENT OF A REASONING
MECHANISM
In order to build assembly relationship among parts,
a reasoning mechanism is designed to determine the
relations among assembly features. The relationship represents the possibility of assembly between two parts. The reasoning mechanism is implemented
based on the assembly feature model. Moreover,
every assembly feature should be reasoned item by item due to the structure of an assembly feature model. In the course of each assembly
feature reasoning, form features and geometric
features should be matched respectively. Figure-4
shows the strategy of a reasoning mechanism. There
are four steps in the reasoning mechanism.
Step 1. Shape matching: Shape describes the
geometric cross section by specialised term,
and it contains circle, ellipse, triangle, square, rectangle and so on. Here, the shape of each
part should be compared firstly. If the shape is
described using the same term, it meets the
condition of shape matching.
Step 2. Dimension matching: After the previous
step, the geometry information of cross section
should be further checked. It contains diameter,
depth, length, width, long axis, minor axis,
bottom side, and height and so on. If the
dimension is described using the same term, it
meets the condition of dimension matching.
Step 3. Dimension value checking: In this step, the
reasoning could be continued by the dimension
value checking. Generally, if two parts are
assembled, the clearance between two parts
must be smaller than the maximal tolerance δ. For example, the assembly clearance between piston pin hole and piston pin should be 0.0025 ~ 0.0075mm under the cold assembly condition, so δ could be 0.0075mm. The condition of checking the dimension value is that the clearance should abide by equation (1), where D_part1 and D_part2 are the dimension values for the two compared parts, respectively:

|D_part1 - D_part2| ≤ δ mm    (1)

Step 4. Feature relationship determining: After the
previous three steps, it is assured that both form
features and geometric features meet each
condition, so that these two assembly features
satisfy the assembling conditions. If two
assembly features could be matched by the
reasoning mechanism, these two assembly
features can be assembled.
In the reasoning mechanism, Step 1 and Step 2
belong to semantic reasoning, but Step 3 is a
geometric reasoning. Step 4 is used to determine
and save the assembly features relationship.
Through repeated reasoning, all relations of assembly features will be obtained and they will be used in the next algorithm process.


Figure 4 Strategy of a reasoning mechanism
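The four steps can be written directly against such a feature encoding, with equation (1) appearing as the dimension value check. The following sketch is illustrative; the tuple encoding, the example values and the function names are assumptions.

# An assembly feature is reduced here to a tuple
# (form, shape, dimension_term, value_mm); delta is the maximal
# tolerance δ from equation (1).
def features_match(f1, f2, delta_mm):
    form1, shape1, term1, value1 = f1
    form2, shape2, term2, value2 = f2
    # A fuller version would also test form complementarity (e.g. column/pocket).
    if shape1 != shape2:                     # Step 1: shape matching
        return False
    if term1 != term2:                       # Step 2: dimension matching
        return False
    return abs(value1 - value2) <= delta_mm  # Step 3: equation (1)

# Step 4: determine and save the feature relationship.
pin = ("column", "circle", "diameter", 20.0000)
hole = ("pocket", "circle", "diameter", 20.0050)
relations = []
if features_match(pin, hole, delta_mm=0.0075):
    relations.append(("piston pin", "piston pin hole"))
print(relations)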
4.4. DEVELOPMENT OF AN ALGORITHM
An algorithm is used to generate all feasible
assembly sequences with the help of reasoning
mechanism, and it consists of backward reasoning
and merging mechanism. The basic idea of an
algorithm is to find the part to be assembled through
the relationship of assembly features. So, the
working procedure is part - feature - feature - part.
There are four important modules in this algorithm; more details follow.
Grouping module: This is a selection domain, which consists of two groups: Group A and Group B. At the beginning of the algorithm, Group A contains one part which is selected arbitrarily, and the rest of the parts are in Group B. With the operation of the iterative mechanism, parts are moved continually from Group B to Group A. If the assembly process is completely finished, namely, all parts have been assembled through assembly features, Group A will contain all parts and Group B is empty. So, the end condition of this algorithm is that Group B is empty.
Removing module: To reduce the searching effort of assembly features, this module reduces the number of parts of the product unceasingly via assembly part merging along the assembly sequence. In this module, after two parts are assembled, they are considered as one component and every assembled feature or part can be removed from Group B. The newly generated component has the remaining features of the two assembled parts except the features used for assembling them. Using this module, the quantity of assembling parts will be reduced and the solution space will be compressed
simultaneously.
Sequence generating module: This module is
constructed according to the basic idea of algorithm,
and it is a processing procedure. Every part has its
own features and they are also assigned to their part.
Using this module, one optional feature in a part
tries to find its counter feature in another part
according to the above reasoning mechanism. This
process proceeds until no existing features or parts are left.
Self-checking module: In this algorithm, self-checking is an indispensable procedure when two parts are assembled, because there may be some
additional assembly features which should be
deleted at assembling. Through assembly process,
some assembly features could be matched
automatically or covered by the assembled
counterpart, i.e. the accessibility to those features is
prevented. This happens due to the shape of a part.
Such kind of information should be described in the
database through analysing the assembly drawings.
The automatically matched and covered features
have to be removed from the features generated by
assembling two parts.


Figure 5 Generation of assembly sequence by cooperating four modules
Figure 5 shows the relationship among the four
modules and presents how they cooperate to
generate assembly sequences. As a result of the
cooperation of the modules, all feasible assembly
sequences can be generated, i.e. the sequences that
have no features or parts left after the whole
assembly process is completed.
4.5. DEVELOPMENT OF AN EVALUATION
METHOD
A practical product can have a large number of
feasible assembly sequences as the number of parts
increases. Procedures are therefore needed to
reduce the large quantity of sequences, in order to
select the optimal assembly sequence that best
meets the planner's needs for a particular purpose
under the practical conditions. The evaluation
method is thus designed to screen all feasible
assembly sequences. In the evaluation method,
three evaluation criteria are applied to obtain the
optimal assembly sequence, namely the base part,
direction changes, and special parts. They follow
the basic principles below:
1) Base part: If a part contains the maximum
quantity of assembly features and is heavier
than the other parts, it is considered as the
base part. The base part should be assembled
first, because it can take most of the other
parts and carry them.
2) Special part: In order to follow a reasonable
assembly sequence in terms of quality
assurance and safety, special parts, such as
sensitive parts (e.g. parts with high accuracy
or surface-roughness requirements), easily
broken parts such as glass, and dangerous
parts such as explosive parts, should be
assembled as late as possible.
3) Direction change: If the assembly direction is
changed to meet assembly requirements
during the assembly process, this kind of
operation adds extra assembly effort, time
and cost due to resetting the part and turning
the part and assembly tools, which degrades
assembly efficiency. The fewer direction
changes occur in an assembly sequence, the
better that sequence is.
Based on the above principles and the developed
assembly feature model, three rules are developed to
select the optimal assembly sequence. Each
principle forms one rule, and the three rules are
derived as follows.
Rule 1: The base part must be assembled first.
CHOOSE PART (a)-(f, W)
GET f %f - number of assembly features%
GET W %W - weight of part%
IF MAX (f, W)
THEN BASEPART (TRUE, part (a), FIRST)
SELECT SEQUENCE (q)
IF BASEPART (FALSE, part (a), FIRST)
THEN SEQUENCE (FALSE, sequence (q))
DELETE SEQUENCE
END

Rule 2: Special parts should be assembled as late as possible.
CHOOSE PART (a)-(Attribute)
%Attribute - sensitive, dangerous%
GET SPECIALPART (TRUE, part (a), LATE)
SELECT SEQUENCE (q)
GET LOCATION NUMBER ln
%ln - location number of special part%
COUNT p
%p - sum of the whole location numbers%
IF MIN (p)
THEN SEQUENCE (TRUE, sequence (q))
OBTAIN SEQUENCE
END

Rule 3: The number of direction changes in the
assembly sequence should be minimal.
SELECT SEQUENCE (q)
DEFINE PART (a), PART (b)
%Two parts are successive in this sequence%
GET dc %dc - number of direction changes%
IF MIN (dc)
THEN SEQUENCE (TRUE, sequence (q))
OBTAIN SEQUENCE
END


Figure 6 Mechanism of the evaluation method
Figure 6 shows the mechanism of the evaluation
method. The given rules reduce the search time by
eliminating unrealistic and uncommon solutions. If
an assembly sequence satisfies all three rules, it is
an optimal assembly sequence. Thus, the rule-based
evaluation method can generate reasonable, near-
optimal heuristic solutions efficiently.
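A compact C++ sketch of the rule-based screening is given below, assuming the evaluation factors have already been determined. Note that the pseudocode of Rule 2 minimizes a location-number sum; the sketch expresses the same assemble-as-late-as-possible intent by maximizing the special parts' positions, which is an interpretation rather than the original criterion. All names are illustrative.

#include <algorithm>
#include <string>
#include <vector>

// One candidate sequence with the evaluation factors used by the rules;
// the field names are illustrative, not the system's database schema.
struct CandidateSequence {
    std::vector<std::string> order;   // part names in assembly order
    int directionChanges;             // factor for Rule 3
};

std::vector<CandidateSequence> evaluate(
        std::vector<CandidateSequence> seqs,
        const std::string& basePart,
        const std::vector<std::string>& specialParts) {
    // Rule 1: drop sequences that do not start with the base part.
    seqs.erase(std::remove_if(seqs.begin(), seqs.end(),
        [&](const CandidateSequence& s) { return s.order.front() != basePart; }),
        seqs.end());

    // Rule 2: keep the sequences that place the special parts as late as
    // possible, i.e. maximize the sum of their positions.
    auto posSum = [&](const CandidateSequence& s) {
        int sum = 0;
        for (const std::string& sp : specialParts)
            sum += static_cast<int>(
                std::find(s.order.begin(), s.order.end(), sp) - s.order.begin());
        return sum;
    };
    int best = 0;
    for (const CandidateSequence& s : seqs) best = std::max(best, posSum(s));
    seqs.erase(std::remove_if(seqs.begin(), seqs.end(),
        [&](const CandidateSequence& s) { return posSum(s) < best; }),
        seqs.end());

    // Rule 3: keep the sequences with the fewest direction changes.
    int minDc = seqs.empty() ? 0 : seqs.front().directionChanges;
    for (const CandidateSequence& s : seqs)
        minDc = std::min(minDc, s.directionChanges);
    seqs.erase(std::remove_if(seqs.begin(), seqs.end(),
        [&](const CandidateSequence& s) { return s.directionChanges > minDc; }),
        seqs.end());
    return seqs;
}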
5. IMPLEMENTATION OF THE ASP SYSTEM
5.1. SYSTEM ARCHITECTURE
In order to develop the ASP system, the system
architecture and databases are designed first.
Figure 7 illustrates the architecture of the ASP
system and shows the flow of information. In the
system architecture, there are six databases: the
product information database, assembly process
database, assembly feature database, assembly
method database, feasible assembly sequence
database, and optimal assembly sequence database.
The product information and assembly process
databases are derived from the enterprise
information database, and the other databases are
generated as the system runs. In addition, there are
four functional modules that form the core of the
ASP system.


Figure 7 Architecture of ASP system
Definition of assembly features module: This
module is used to define the assembly features of
each part. Here, 3D and 2D drawings are used to
derive the assembly features and analyse the
assembly process.
Determination of assembly methods module: This
module is applied to extract the assembly method by
examining the assembly features. Company-specific
assembly methods are analysed and applied to the
assembly sequence planning.
Generation of all feasible assembly sequences
module: This module is supported by the reasoning
mechanism and the algorithm. All feasible assembly
sequences can be generated by this module.
Evaluation for obtaining optimal assembly
sequences module: This module is used to select the
optimal assembly sequences. All alternative
sequences are evaluated against the derived
evaluation criteria.
Through the evaluation of all feasible assembly
sequences, several optimal assembly sequences
may be selected, and the best one among them is
determined by the planner in consideration of
planning conditions such as the company
organization, working behaviours and so on.
5.2. REALIZATION OF ASP SYSTEM


Based on the system architecture and the holistic
design concept, the ASP system has been
implemented using C++ and the Microsoft
Foundation Classes (MFC) library on the Windows
XP Professional platform. Microsoft Visual C++ 6.0
is used as the integrated development environment
(IDE) to build the event-driven software. The MFC
library provides the user interface (UI) module. All
code is programmed in C++. The complete
developed ASP system is shown in Figure 8.


Figure 8 The whole ASP system
As shown in Figure 8, the ASP system consists of
four modules: Area 1, the product analysis module;
Area 2, the assembly method definition module;
Area 3, the assembly sequence generation module;
and Area 4, the evaluation module. These modules
are integrated into one interface, and the developed
ASP system is easy for the planner to operate
thanks to the well-designed interface.
5.3. FUNCTIONALITY OF ASP SYSTEM
5.3.1. Product analysis module
The functionality of this module is to analyse the
whole product and define the assembly features of
each part. In Figure 8, Area 1 shows the product
analysis interface. There are two methods to input
assembly feature information: one is to input the
data one by one, and the other is to import an
assembly feature file. Six buttons manage the
assembly features. The list shows the results of the
product analysis, and all assembly features are
displayed in it. The input of this module is each
part of the product, and the output is the assembly
features of each part, which are stored in the
assembly feature database.
5.3.2. Assembly method definition module
The role of this module is to define the assembly
method between two parts. In Figure 8, Area 2
shows the assembly method definition interface.
The assembly method is auxiliary information from
the enterprise database. The input is two parts and
their assembly features, and the output is the
assembly method sheet. Thanks to the product
analysis module, parts and assembly features can be
selected directly. From the assembly features, the
assembly method, assembly condition and
description of the assembly operation are added
from the enterprise database. In cases where it is
difficult to derive the assembly method directly
from the assembly feature model, such as for
surface contact, four buttons (add, modify, clear and
delete) are used to define assembly methods. The
list shows the assembly method between two parts,
which can be saved in the assembly method
database.
5.3.3. Assembly sequence generation
module
This is the core module of the developed system. Its
functionality is to generate all feasible assembly
sequences based on the assembly feature model.
There are two sub-modules: the reasoning module
and the processing module. The reasoning module
determines the relationships among assembly
features. All feasible assembly sequences are
generated automatically by the processing module.
In Figure 8, Area 3 shows the interface of this
module. The selection domain confirms the
assembly parts. Two buttons are used to add or
delete a part. The check and match buttons operate
the reasoning mechanism. The processing button
generates all feasible assembly sequences. Several
pop-up dialog boxes prompt the operating results.
The results are shown in the list of feasible
assembly sequences and stored in the feasible
assembly sequence database.
5.3.4. Evaluation module
The evaluation module selects the optimal assembly
sequences. In Figure 8, Area 4 shows the interface
of the evaluation module. The determine button
generates the evaluation factors automatically based
on the three rules. The evaluate button sifts all
feasible assembly sequences and obtains the optimal
assembly sequences among them. If there is no
evaluation result, there is at least one position
conflict among the designated parts, and the
evaluation factors should be modified. Two pop-up
dialog boxes prompt the operating results. The
results are shown in the list of optimal assembly
sequences and saved in the optimal assembly
sequence database.


6. APPLICATION OF THE DEVELOPED
SYSTEM TO A PRACTICAL PROBLEM
To verify the validity and efficiency of the
developed system, the assembly features-based ASP
system was applied to a practical problem, i.e. the
assembly of an automotive module, in this case an
oil pump. The assembly structure of the oil pump is
shown in Figure 9. Because the product is made up
of 17 parts, the maximum number of assembly
sequences is 17! = 355,687,428,096,000 in theory.


Figure 9 Whole product structure of oil pump


(a)

(b)
Figure 10 (a) Assembly feature file and (b) Assembly
method file

(a)

(b)
Figure 11 (a) The sheet of all feasible assembly sequences
and (b) Optimal assembly sequences file
First of all, assembly features are extracted from
the 2D and 3D drawings through assembly analysis
to build the assembly feature model. They are
stored in the assembly feature database in text
format, see Figure 10 (a). Next, the assembly
method and assembly condition are added according
to the assembly features. Assembly methods are
stored in the assembly method database in text
format, see Figure 10 (b). Then, the 17 parts are
added into the selection domain one by one. The
reasoning mechanism is carried out based on the
generated assembly feature model. The assembly
relationships among assembly features can be
determined to find counterpart features. After that,
576 feasible assembly sequences are generated by
pressing the process button, and the results are
shown in Figure 11 (a). Last, the three evaluation
criteria are generated automatically by pressing the
determine button. By pressing the evaluate button,
three optimal assembly sequences are obtained.
They are stored in the optimal assembly sequence
database in text format by pressing the save button.
The final results are shown in Figure 11 (b).
The functionality of the developed system has
thus been proved in practice. By applying this ASP
system, the assembly sequence planning was carried
out effectively and efficiently: the solution space
was markedly reduced by generating all feasible
assembly sequences automatically and finally
selecting the optimal assembly sequence.
7. CONCLUSIONS
This paper proposed an assembly feature model for
encapsulating assembly-oriented information.
Assembly operations can be transformed into a
matching mechanism over their assembly features.
Based on this model, a systematic ASP approach
comprising a reasoning mechanism, an algorithm
and an evaluation method is applied to generate all
feasible assembly sequences automatically and
finally obtain the optimal assembly sequences. On
the basis of the proposed strategy, an assembly
features-based ASP system was developed using
Microsoft Visual C++ 6.0. To demonstrate the
functionality of the ASP system, a practical
problem was used to validate the reliability of the
developed system. In other words, the developed
ASP system supports assembly planners in
completing the assembly sequence planning task.
Further development work will aim to extend the
assembly feature model to the whole assembly
planning task, such as the selection of appropriate
resources (e.g. personnel, assembly machines and
jigs/fixtures) and the calculation of assembly times,
and will focus on the further development of a
complete assembly planning system based on this
assembly feature model.
8. ACKNOWLEDGMENTS
This research was supported by the Ministry of
Knowledge Economy, Republic of Korea, under the
project "Configurable MES Platform for Productivity
Innovation & Process Optimizing of SME".
REFERENCES
Alok K. Priyadarshi and Satyandra K. Gupta,
Algorithms for generating multi-stage molding plans
for articulated assemblies, Robotics and Computer-
Integrated Manufacturing, Vol. 25, No. 1, 2009, pp
91-106
Chien-Cheng Chang, Hwai-En Tseng and Ling-Peng
Meng, Artificial immune systems for assembly
sequence planning exploration, Engineering
Applications of Artificial Intelligence, Vol. 22, No. 8,
2009, pp 1218-1232
G. Dini, F. Failli, B. Lazzerini and F. Marcelloni,
Generation of Optimized Assembly Sequences Using
Genetic Algorithms, CIRP Annals-Manufacturing
Technology, Vol. 48, No.1, 1999, pp 17-20
Hong-Seok Park, A Knowledge-Based System for
Assembly Sequence Planning, International Journal
of the Korean Society of Precision Engineering, Vol.
1, No. 2, 2000, pp 35-42
Hwai-En Tseng, Wen-Pai Wang and Hsun-Yi Shih,
Using memetic algorithms with guided local search to
solve assembly sequence planning, Expert Systems
with Applications, Vol. 33, No. 2, 2007, pp 451-467
Lai, H. Y. and Huang, C. T., A systematic approach for
automatic assembly sequence plan generation,
International Journal of Advanced Manufacturing
Technology, Vol. 24, No. 9-10, 2004, pp 752-763
Liang Gao, Weirong Qian, Xinyun Li and Junfen Wang,
Application of memetic algorithm in assembly
sequence planning, International Journal of
Advanced Manufacturing Technology, Vol. 49, No. 9-
12, 2010, pp 1175-1184
Qiang Su, A hierarchical approach on assembly
sequence planning and optimal sequences analyzing,
Robotics and Computer-Integrated Manufacturing,
Vol. 25, No. 1, 2009, pp 224-234
Qiang Su, Sheng-jie Lai and Jun Liu, Geometric
computation based assembly sequencing and
evaluating in terms of assembly angle, direction,
reorientation, and stability, Computer-Aided Design,
Vol. 41, No. 7, 2009, pp 479-489
Romeo M. Marian, Lee H.S. Luong and Kazem Abhary,
A genetic algorithm for the optimization of assembly
sequences, Computers & Industrial Engineering, Vol.
50, No. 4, 2006, pp 503-527
Tianlong Gu, Zhoubo Xu and Zhifei Yang, Symbolic
OBDD representations for mechanical assembly
sequences, Computer-Aided Design, Vol. 40, No. 4,
2008, pp 411-421
Tianyang Dong, Ruofeng Tong, Ling Zhang and Jinxiang
Dong, A collaborative approach to assembly
sequence planning, Advanced Engineering
Informatics, Vol. 19, No. 2, 2005, pp 155-168
Wen-Chin Chen, Pei-Hao Tai, Wei-Jaw Deng and Ling-
Feng Hsieh, A three-stage integrated approach for
assembly sequence planning using neural networks,
Expert Systems with Applications, Vol. 34, No. 3,
2008, pp 1777-1786
Wen-Chin Chen, Yung-Yuan Hsu, Ling-Feng Hsieh and
Pei-Hao Tai, A systematic optimization approach for
assembly sequence planning using Taguchi method,
DOE, and BPNN, Expert Systems with Applications,
Vol. 37, No. 1, 2010, pp 716-726
Xinwen Niu, Han Ding and Youlun Xiong, A
hierarchical approach to generating precedence graphs
for assembly planning, International Journal of
Machine Tools & Manufacture, Vol. 43, No. 14, 2003,
pp 1473-1486
Y.W. Guo, W.D. Li, A.R. Mileham, and G.W. Owen,
Application of particle swarm optimisation in
integrated process planning and scheduling, Robotics
and Computer-Integrated Manufacturing, Vol. 25, No.
2, 2009, pp 280-288

MULTI-OBJECTIVE OPTIMIZATION FOR THE SUCCESSIVE
MANUFACTURING PROCESSES OF THE PAPER SUPPLY CHAIN
M. M. Malik
University of Western
Australia
mohsin.malik@uwa.edu.au
J. H. Taplin
University of Western
Australia
john.taplin@uwa.edu.au
M. Qiu
University of Western
Australia
min.qiu@uwa.edu.au
ABSTRACT
The traditional production focus in the paper industry has been on maximizing machine utilization
and minimizing cost, but this has had adverse effects on overall supply chain benchmarks, such
as overcapacity, long lead times, excessive inventory and low customer service. A least-cost
production plan for the paper manufacturing and conversion stages results in poor cycle service
levels, where many customer orders may fail to meet their due dates. Conversely, a service
level maximization approach yields a poor solution with respect to production costs. Therefore,
the production planning problem in the paper supply chain faces more than one optimization
criterion, which transforms the traditional cost minimization objective into a multiple objective
optimization problem with consideration for meeting customer requirements for different grades
and the due dates. In this paper, a multi-objective optimization approach to the successive
production processes of paper manufacturing and conversion is advocated and applied to obtain a
range of compromise solutions between the two conflicting objectives of production cost
minimization and maximization of the cycle service levels.
KEYWORDS
Production planning in paper industry, Multi-objective optimization, Genetic Algorithms

1. INTRODUCTION
The traditional production focus in the paper
industry has been on economies of scale for cost
advantage, but this has had adverse effects on
overall supply chain benchmarks, such as
overcapacity, long lead times, excessive inventory
and low customer service (Ranta, Ollus & Leppänen
1992; Hameri & Holmström 1997; Hameri &
Lehtonen 2001; De Treville, Shapiro & Hameri
2004). This led to a gradual shift to a more flexible
production strategy with shorter production cycle
times and an increased number of grade changeovers
for better customer service. While the capacity-
driven strategy may still be valid for a few
standardized high-volume products, the
increased product customization in the pulp and
paper supply chain warrants a focus on meeting
customer requirements that is only possible through
a flexible production approach.
Hameri & Lehtonen (2001) described a transition
in the production strategy for five Nordic paper
mills manufacturing paperboard, specialty, and
standard fine paper. The volume driven strategy
with emphasis on maximum utilization and low cost
was replaced by a flexible approach that focused on
small lot sizes, shorter lead times and punctual
deliveries. Small lot sizes essentially mean a higher
frequency of grade changeovers, which improves
customer service, but additional costs are incurred
because of the increased number of production
setups. Different paper grades require a common
production resource, and whenever production
switches to a new grade, production time is lost in
setting up the machinery. In the paper industry,
setup costs are particularly important because the
machine keeps making paper while it adjusts to the
quality settings of the new grade, and the paper
produced during this transition time is rejected.
Therefore, apart from the


opportunity costs (i.e. lost production time),
significant material losses are also encountered.
Another aspect of sharing resources is that
different grades cannot be produced at the same
time; therefore, customer orders must be sequenced,
which has repercussions for the cycle service levels.
Apart from the order sequencing issue and its
effects on cycle service levels, inventory holding
costs are also an important consideration for
planning purposes. Grade changeovers can be
minimized for a particular production plan by
scheduling each grade only once during the
planning horizon until its demand is met; however,
the opportunity cost of capital tied up in inventory
and the direct costs of storing and holding goods
prohibit large stocks of inventory. Furthermore, in
some instances, securing additional capital may also
be a concern and, therefore, another reason to limit
inventory holding costs.

2. FROM SINGLE OBJECTIVE TO MULTI-
OBJECTIVE OPTIMIZATION
The conventional single objective optimization
literature identifies production planning at the
paper machine as a lot-sizing problem, which finds
a balance between low setup costs (favouring large
production lots) and low holding costs (favouring
lot-for-lot-like production, where sequencing
decisions have to be made due to the shared common
resources) (Rizk & Martel 2001). An aggregated cost function
representing grade changeover cost, inventory
holding cost and tardiness penalty is minimized to
obtain a single best solution. A least cost production
plan for the paper manufacturing and conversion
stages results in poor cycle service levels where
many of the customer orders fail to meet due dates.
In most real-world situations, a decision maker may
not opt for the least-cost solution, because not all
customer requirements are met. Conversely, a
production plan that endeavours to meet all
customer orders might be too expensive because of
too many grade changeovers. In such scenarios, a
single objective optimization approach, which either
minimizes the production cost or maximizes the
customer service, fails to capture the dynamics of the
decision environment. Instead, the decision maker is
more likely to be interested in the range of solutions
between the two extremes obtained by multi-objective
optimization.
Service level maximization and minimization of
production cost are conflicting objectives in the pulp
and paper supply chain. A single optimization
criterion of either cost minimization or
maximization of service levels yields a good
solution from one perspective but is likely to give
poor results for the corresponding conflicting
objective. Therefore, the production planning problem
in the pulp and paper supply chain faces
more than one optimization criterion, which
transforms the traditional cost minimization
objective into a multiple objective optimization
problem with consideration for meeting customer
requirements for different grades and the due dates.
Whenever an optimization problem is faced with
multiple and conflicting objectives, the usual
meaning of the optimum does not suffice in the
decision making context because a solution
optimizing all objectives simultaneously generally
does not exist. The identification of a best solution
requires a trade-off or compromise between the
conflicting objectives. The trade-off between
conflicting objectives has been most effectively
captured with the help of the widely known economic
concept of Pareto optimality, or dominance, wherein
solutions are sought for which it is impossible to
improve one objective without deterioration in
another. The multi-objective optimization
approach utilizes the Pareto dominance concept to
tackle conflicting objectives and differs from the
conventional single objective optimization approach
on the following counts:
- There are at least two distinct objectives instead of one.
- It results in multiple solutions, giving a range of values between the extreme possibilities for each objective.
- It possesses two different search spaces: the objective space and the decision space.
- The search process is not influenced by the magnitude of the cost coefficients associated with each objective.
The usefulness of a multi-objective optimization
approach is accentuated in situations where it is
hard to estimate the cost coefficients associated with
the objectives, because the search process is
unaffected by their magnitude. Even if these
coefficients are estimated, their magnitude
represents a bias that guides the search process in a
specific direction. A multi-objective optimization
approach removes the bias towards a particular
objective by normalizing the coefficients of
the aggregated objective function, by using only one
objective at a time, or by incorporating a Pareto
rank or dominance-based approach in which all
objectives are given equal importance during the
pair-wise comparison for dominance.
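The pair-wise comparison for dominance can be stated precisely; the following minimal C++ predicate, written for minimization objectives, returns true when solution a dominates solution b in the Pareto sense.

#include <cstddef>
#include <vector>

// Pareto dominance for minimization objectives: a dominates b if a is no
// worse in every objective and strictly better in at least one.
bool dominates(const std::vector<double>& a, const std::vector<double>& b) {
    bool strictlyBetter = false;
    for (std::size_t m = 0; m < a.size(); ++m) {
        if (a[m] > b[m]) return false;     // a is worse in objective m
        if (a[m] < b[m]) strictlyBetter = true;
    }
    return strictlyBetter;
}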
Traditionally, most optimization problems have
been solved through a single objective approach;
however, over the years a parallel line of research
has evolved by taking a new perspective on
combinatorial optimization problems hitherto
treated as single objective problems. For example,
vehicle routing, travelling salesman, timetabling,
machine scheduling, airline crew scheduling
and cutting stock problems have long been
optimized using single objectives, but there is a
growing realization in the research community that
most real-world problems need to satisfy more than
one optimization criterion. Routing problems
such as the travelling salesman and vehicle routing
problems are generally optimized by minimizing the
total travelled distance, but Ombuki, Ross & Hanshar
(2006) identified minimization of the number of
vehicles used as another objective and argued that
vehicle routing is intrinsically a multi-objective
optimization problem. Jozefowiez, Semet & Talbi
(2008) carried out a survey of multi-objective
optimization methods applied to routing problems
and noted that, depending upon the problem context,
the optimization considerations included minimizing
criteria such as travelled distance, number of
vehicles, vehicle waiting times, merchandise
deterioration, mean transit time, variance in transit
time, individual perceived risk, actual risk,
individual disutility, unused working hours and the
length of the longest tour, whereas route balancing,
capacity utilization and the size of the population
covered were maximized. Similarly, for
timetabling problems, Datta, Deb & Fonseca (2007)
proposed two conflicting minimization objectives:
the average number of weekly free time slots
between two classes for the students, and the
average number of weekly consecutive classes for
the teachers. For machine scheduling, Li et al. (2010)
used minimization of makespan, completion time
and tardiness as the optimization criteria. For crew
scheduling, total cost, delays and unbalanced
utilization have been simultaneously minimized
(Lucic & Teodorovic 2007). Ghoseiri, Szidarovszky
& Asgharpour (2004) used a dual-objective
scheduling approach for train operations,
considering lower fuel cost for the railway company
as one objective and shortening total passenger
time as the other.
In this paper, a multi-objective optimization
approach to the successive manufacturing processes
of the lot-sizing and cutting stock problems is
advocated, to obtain a range of compromise
solutions between the two conflicting objectives of
production cost minimization and maximization of
cycle service levels. In the next section, the
production context is defined and a bi-objective
formulation is developed for simultaneously
minimizing the production cost and maximizing
cycle service levels. Solution methods are described
in section 4 and experimental results are presented
in section 5. The discussion of the results is carried
out in section 6. The paper is concluded in section 7.
3. MODEL FORMULATION
3.1 PROBLEM DEFINITION
The planning problem is essentially to determine the
production levels of multiple finished products (FP)
and intermediate products (IP) over a finite planning
horizon in a paper mill, where paper production and
conversion are two successive stages. Large reels of
paper, called jumbos, are produced on paper
machines and are then cut into smaller rolls as per
customers' specifications during the conversion
process. A schematic of the two processes is shown
in figure 1:







Figure 1: A Schematic of Paper Manufacturing Process
The customer orders for the finished products
have the following characteristics:
- Paper grade
- Roll width
- Number of rolls required
- Order due date, which can be a particular day of the week-long planning horizon. It is assumed that, in case a roll requires further finishing activities, the quoted due date includes the necessary time buffer.
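For illustration, these order characteristics can be captured in a simple C++ record; the field names and units are assumptions, not the notation of the model below.

#include <string>

// Illustrative record of one finished-product order; field names and
// units are assumptions mirroring the characteristics listed above.
struct CustomerOrder {
    std::string grade;      // paper grade
    double      rollWidth;  // roll width, e.g. in mm
    int         numRolls;   // number of rolls required
    int         dueDay;     // due day within the week-long horizon (1-5)
};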
It is assumed that the cutting stage is
unconstrained, because the rate of cutting jumbo
reels is much faster than the production rate at the
paper machine. This is a reasonable assumption,
because the paper machine is usually the bottleneck
resource in the pulp and paper supply chain (Martel
et al. 2005). The FP demand over the entire planning
horizon has to be met; however, if an order cannot
be delivered in time, it incurs a tardiness cost M.
The cycle service level (CSL) is defined as the
probability that the cycle time for the customer's
order will be less than the quoted lead time (Hopp &
Spearman 2008). Mathematically,
CSL = Probability {Cycle Time < Lead Time}
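As a worked illustration of this definition, the following C++ sketch computes the empirical CSL of a set of orders from their realized cycle times and quoted lead times; it is an aid to the definition, not part of the planning model.

#include <cstddef>
#include <vector>

// Empirical cycle service level: the fraction of orders whose realized
// cycle time stayed below the quoted lead time (the definition above).
double cycleServiceLevel(const std::vector<double>& cycleTimes,
                         const std::vector<double>& leadTimes) {
    if (cycleTimes.empty()) return 0.0;
    std::size_t met = 0;
    for (std::size_t i = 0; i < cycleTimes.size(); ++i)
        if (cycleTimes[i] < leadTimes[i]) ++met;
    return static_cast<double>(met) / cycleTimes.size();
}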

The demand for intermediate products, i.e. jumbo
reels, is not known directly but is derived from the
independent demand, i.e. the FP demand. No
inventory of jumbo reels is kept; however, finished
products can be stored at the manufacturing facility.
Changeover costs are incurred whenever a different
grade of paper is manufactured on a paper machine.

3.2 MATHEMATICAL FORMULATION
In this section, a two-step procedure is used for the
simultaneous minimization of production cost and
maximization of cycle service levels. The major
components of paper production cost are trim loss
and grade changeover cost. While minimization of
trim loss is the only criterion for the paper
conversion process, a trade-off curve between the
two conflicting aims of grade changeover cost
minimization and improved cycle service levels is
obtained by employing a bi-objective formulation in
the following manner:
Step 1. The conversion process is solved with the
single objective of minimizing trim loss.
Step 2. Cutting patterns are allocated to the
different planning periods, triggering the
production of jumbo reels of the respective
grades, with two optimization objectives,
namely grade changeover cost minimization
and service level maximization.
Mathematically,

The objectives to be minimized are:

f_1 = \sum_{t=1}^{T} \sum_{i \in IP} K_{it} \delta_{it}    (1)

f_2 = \sum_{t=1}^{T} \sum_{i' \in FP} y_{i't}    (2)

subject to:

\sum_{i \in IP} ( a_{it} Q_{it} + k_{it} \delta_{it} ) \le C_t    (3)

I_{i't} = I_{i'(t-1)} + Q_{i't} - d_{i't}    (4)

\sum_{t=1}^{T} Q_{it} - \sum_{j \in J} x_{ij} = 0    (5)

\delta_{it} \in \{0, 1\}    (6)

Q_{it}, Q_{i't} \ge 0 and integer    (7)

The indexes, parameters, sets and decision
variables used in the above formulation are
explained in table 1. The planning problem has been
formulated as a bi-objective minimization problem,
with f_1 representing the grade changeover costs (1),
while service level improvements are indirectly
formulated in f_2 as the minimization of late orders
y_{i't} (2). Customer orders for a finished product i'
that will not be delivered by the customer-specified
due date are to be minimized, along with the grade
changeover costs incurred on the paper machine,
subject to the capacity constraint (3) and the
material balancing constraints (4) and (5). While
constraint (4) ensures that the end demand for
finished products is met, constraint (5) stipulates
that the cut finished products equal the number of
jumbo reels of the particular grade (IP), not
allowing any inventory of the intermediate products.
Constraints (6) and (7) ensure an integer solution to
the planning problem.
Table 1: Notations
T = Length of the planning horizon
t = A single planning period
i = An intermediate product (IP)
i' = A finished product (FP)
j = A cutting pattern (from the set J)
x_{ij} = Number of times the jth pattern is used on IP i to generate FP i'
d_{i't} = Demand for FP i' in period t
C_t = Paper machine's production capacity (hours)
k_{it} = Grade changeover time for IP i (hours)
K_{it} = Grade changeover cost for IP i
a_{it} = Capacity consumption rate of IP i (hours/metric ton)
Q_{it} = Quantity of IP i produced during period t
\delta_{it} = Setup indicator for IP i in period t
y_{i't} = Quantity of FP i' not delivered within the due date
Q_{i't} = Quantity of FP i' produced during period t
I_{i't} = Inventory of FP i' at the end of period t

4. SOLUTION APPROACH
There are various ways of applying multi-objective
optimization, but the scalar and
Pareto approaches are the main ones. Due to the
fundamental difference between the methods
employed to approximate the Pareto frontier, the
two approaches may differ substantially from each
other with regard to their suitability for a
specific decision context and the results obtained.
Therefore, it is deemed prudent to test both solution
approaches on the production problem of the two
successive stages of paper manufacturing. The
epsilon constraint method is selected as the scalar
approach, whereas the non-dominated sorting
genetic algorithm II (NSGA-II) is chosen as the
Pareto-based multi-objective evolutionary algorithm
(MOEA).
4.1 EPSILON CONSTRAINT METHOD
WITH STANDARD GA
The epsilon constraint method converts the multi-
objective optimization into a single objective problem
that is solved through conventional algorithms.
Different resolution algorithms, ranging from exact
methods to meta-heuristics depending upon the
problem context, have been used in conjunction with
the epsilon constraint method. A steady-state genetic
algorithm is used as the resolution algorithm for the
bi-objective epsilon constraint formulations (Palisade
2009a). The experimental settings for the GA
parameters were as follows:


A uniform crossover value of 0.5 is used across
all experiments, together with auto mutation. The
latter allows the genetic algorithm to increase the
mutation rate automatically when an organism
"ages" significantly, that is, when it has remained in
place over an extended number of trials. For many
models, especially where the optimal mutation rate
is not known, selecting auto mutation can give
better results faster (Palisade 2009b). Experiments
with initial populations of 50, 200, 500 and 1000
were performed, and it was noted that the
convergence pattern improved with a population
size of 500, but no improvements were recorded
with a size of 1000 despite the considerable increase
in computational workload. Therefore, the
population size of 500 was chosen. Similarly,
experiments showed that the GAs converged before
200 GA-equivalent generations, or 100,000
iterations; therefore, this was selected as the
stopping criterion for all the experiments.
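The mechanics of the method can be illustrated with a small, self-contained C++ sketch on a toy bi-objective problem, in which f1 is minimized subject to f2 <= epsilon while epsilon is swept in ten increments over the attainable range of f2 (mirroring the ten increments used in section 5.1). Exhaustive search stands in for the steady-state GA, and the toy objectives are assumptions, not the production model.

#include <cstdio>
#include <limits>

// Toy conflicting objectives over an integer decision x in [0, 20]:
// f1 falls and f2 rises as x grows (assumed, not the production model).
static double f1(int x) { return (20.0 - x) * (20.0 - x); }
static double f2(int x) { return double(x) * x; }

int main() {
    // Sweep epsilon over the attainable range of f2 in ten increments,
    // each time minimizing f1 subject to f2(x) <= epsilon; exhaustive
    // search over x stands in for the steady-state GA.
    const double f2min = f2(0), f2max = f2(20);
    for (int k = 0; k <= 10; ++k) {
        double eps = f2min + k * (f2max - f2min) / 10.0;
        double bestF1 = std::numeric_limits<double>::infinity();
        int bestX = -1;
        for (int x = 0; x <= 20; ++x)
            if (f2(x) <= eps && f1(x) < bestF1) { bestF1 = f1(x); bestX = x; }
        if (bestX >= 0)
            std::printf("eps=%6.1f  x=%2d  f1=%6.1f  f2=%6.1f\n",
                        eps, bestX, bestF1, f2(bestX));
    }
    return 0;
}

Each line of output is one point of the approximated Pareto frontier; tightening epsilon trades a worse f1 for a better f2, exactly the trade-off the Pareto frontier in section 5.1 depicts.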
4.2 NON DOMINATED SORTING
GENETIC ALGORITHM (NSGA-II)
Multi-objective evolutionary algorithms (MOEAs)
utilize the Pareto dominance concept to find a
set of non-dominated solutions. The non-dominated
sorting genetic algorithm (NSGA-II) uses a
non-dominated sorting mechanism to rank
the entire population of solutions. Srinivas & Deb
(1994) developed NSGA, the first
implementation of a non-dominated sorting
mechanism. Later, NSGA-II was introduced to
improve upon the three known deficiencies of
NSGA, i.e. the high computational complexity of non-
domination sorting, the lack of elitism and the use of a
user-specified sharing parameter for ensuring
diversity of solutions to inhibit early convergence
(Deb et al. 2002).
The non-domination sorting algorithm ranks the
whole population of solutions according to the
domination count n_i, i.e. the number of solutions
that dominate solution i. The best Pareto front
corresponds to n_i = 0 and, for each of these
solutions, the set S_i of solutions dominated by i
is also calculated. S_i is then used to find all the
other non-dominated fronts by updating the
domination counts and increasing the front index by
one each time. The process continues until the whole
population is ranked.
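A minimal C++ sketch of this ranking procedure is given below: it computes the domination counts n_i and the dominated sets S_i, then peels off the fronts one by one. It is an illustration of the mechanism, not the NSGA-II implementation of Deb et al. (2002).

#include <cstddef>
#include <vector>

// Same minimization-dominance predicate as sketched in section 2.
static bool dominates(const std::vector<double>& a,
                      const std::vector<double>& b) {
    bool strictlyBetter = false;
    for (std::size_t m = 0; m < a.size(); ++m) {
        if (a[m] > b[m]) return false;
        if (a[m] < b[m]) strictlyBetter = true;
    }
    return strictlyBetter;
}

// Rank a population of objective vectors: front 0 collects solutions with
// domination count n_i = 0; removing a front decrements the counts of the
// solutions in the corresponding S_i sets, exposing the next front.
std::vector<int> nonDominatedSort(const std::vector<std::vector<double>>& pop) {
    const std::size_t N = pop.size();
    std::vector<int> n(N, 0), rank(N, -1);
    std::vector<std::vector<std::size_t>> S(N);  // S[i]: solutions dominated by i
    for (std::size_t i = 0; i < N; ++i)
        for (std::size_t j = 0; j < N; ++j) {
            if (i == j) continue;
            if (dominates(pop[i], pop[j])) S[i].push_back(j);
            else if (dominates(pop[j], pop[i])) ++n[i];
        }
    std::size_t ranked = 0;
    for (int front = 0; ranked < N; ++front) {
        std::vector<std::size_t> current;
        for (std::size_t i = 0; i < N; ++i)
            if (rank[i] < 0 && n[i] == 0) current.push_back(i);
        for (std::size_t i : current) { rank[i] = front; ++ranked; }
        for (std::size_t i : current)
            for (std::size_t j : S[i]) --n[j];
    }
    return rank;
}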
NSGA-II was selected as the Pareto-based multi-
objective evolutionary algorithm. GANetXL, a
software platform that utilizes NSGA-II for multi-
objective optimization, has been used. It is written
in C++ and exploits a Component Object Model
(COM) interface to interact with Excel (Savic, Bicik
& Morley 2011). Its interface with Excel facilitated
the model development with the help of Visual
Basic for Applications (VBA) macros.
4.3 TEST DATA
The paper machine's speed, which determines its
capacity, was provided by an Australian manufacturer,
along with the grade changeover times. Trade journals
were consulted for cost data on different grades of
kraft paper. Only the details of the customer
orders for the finished products remained unknown,
and randomly generated data was used to represent
these unavailable parameters. The random generation
of test data was inspired by Gau & Wascher (1995),
but it was modified considerably for this study. The
details are as follows:
The customer orders are usually for cut rolls or
sheets obtained during the conversion stage of a paper
mill and are characterized by paper grade, roll
width or sheet dimensions, number of rolls
required and order due date. Cut roll widths l_i
were randomly generated from a uniform
distribution, so that the simulated widths
represent all values from 20% to 80% of the jumbo
reel length. In the production environment
considered, the number of cut rolls required is
determined by the machine capacity, because the
paper machine is the bottleneck resource in the
paper manufacturing supply chain. Its capacity is
determined by the machine speed, which in turn
determines the quantity of customer orders it can
handle in one week. Also, the randomly generated
roll widths affect the required number of jumbo
reels because of the different combinations of
cutting patterns. These two parameters restrict the
required quantity; therefore, the number of rolls
required is spread across all roll widths to match
the paper machine capacity. The order due dates
were also randomly generated, from a uniform
distribution over the five working days of a week-
long planning horizon, which was considered
enough to make the point regarding service level
considerations.
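A minimal C++ sketch of this order generation, using the standard <random> facilities, is shown below; the jumbo reel length, the number of orders and the seed are arbitrary assumed values.

#include <cstdio>
#include <random>

// Sketch of the randomized order generation described above: cut roll
// widths uniform in 20-80% of the jumbo reel length, due dates uniform
// over the five working days. Jumbo length, order count and seed are
// arbitrary assumptions for the illustration.
int main() {
    std::mt19937 gen(42);                  // fixed seed for repeatability
    const double jumboLength = 5000.0;     // mm (assumed value)
    std::uniform_real_distribution<double> width(0.2 * jumboLength,
                                                 0.8 * jumboLength);
    std::uniform_int_distribution<int> dueDay(1, 5);
    for (int order = 0; order < 10; ++order)
        std::printf("order %2d: width %7.1f mm, due day %d\n",
                    order, width(gen), dueDay(gen));
    return 0;
}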

5. RESULTS
5.1 EPSILON CONSTRAINT METHOD
WITH STANDARD GA
In the two-step process, the first step involves cutting
jumbo reels with a minimum trim loss criterion, and
the second step allocates the cutting patterns to
different planning periods, triggering the production
of jumbo reels of the respective grades, with two
minimization objectives, namely grade changeover
cost and late orders. Optimizing each objective
separately provided estimates of the ideal and nadir
objective vectors. These two vectors determined the
extreme values of the Pareto frontier and, with ten
epsilon increments, the Pareto frontier was
approximated as shown in figure 2. A well-spread
Pareto frontier is obtained between the nadir and
ideal vectors, represented in figure 2 by the light
and dark shaded circles respectively.



Figure 2: Approximated Pareto Frontier - Epsilon Constraint Method

The Pareto frontier gives the decision maker a
range of solutions to choose from. A least-cost
solution of $11,699 for grade changeover costs
results in a cycle service level of 0.838, but as the
grade changeover cost increased, the service levels
also improved. This is because the solutions
resulting in lower grade changeover costs correspond
to at most one setup per planning period, with the
possibility of carrying over the setup state to the
next planning period. For example, the solution
resulting in a grade changeover cost of $11,699 and a
cycle service level of 0.838 had only 5 setups in the
week-long planning horizon. The number of setups
in the entire planning horizon gradually increased to
8, 9, 12, 17, 18, 19 and 21, and so did the corresponding
cycle service levels. The maximum cycle service
level of 0.959 resulted only from 21 setups in
one week's production schedule, but it also incurred
a much higher cost of $51,045.
The important consideration here is whether the
estimated Pareto frontier is global or local, i.e. can
the solutions be improved further? The answer to
this question lies in the resolution algorithm. If an
exact algorithm had been used as the solution approach,
the estimated Pareto frontier would have been
global and could not have been improved any
further. A genetic algorithm was used as the solution
approach and, being a stochastic search algorithm,
the optimality of the obtained solutions cannot be
guaranteed in a single run. Repeated genetic
algorithm runs enhance the probability of obtaining
close-to-optimal solutions (Yuen, Fong & Lam
2001). However, this would have been
computationally prohibitive in this case, because
each of the ten solutions obtained would have had to
be re-run a number of times. Nevertheless, determining
the solution quality is important, and another
measure for this is to solve the same
problem with a multi-objective evolutionary
algorithm such as NSGA-II and to compare the
results.
5.2 NON DOMINATED SORTING GA
(NSGA-II)
The state-of-the-art non-dominated sorting genetic
algorithm NSGA-II was also applied to the same
problem. The initial population was generated
randomly. However, it was noted that no individual
in the initial population was a feasible solution.
The algorithm was allowed to run for 5000
generations with a population of 500. After nearly 50
hours of run time, the algorithm was unable to
generate a single feasible solution. Different GA
parameters were tried, but the generated solutions
were always infeasible. The possibility of obtaining
feasible solutions after 5000 generations cannot be
ruled out, but the computational cost was
prohibitive. The other alternative is to start with a
population of feasible solutions; this approach has
been reported in the literature for similarly hard
combinatorial problems.



Figure 3: Approximated Pareto Frontier - Multiple NSGA-II Runs

Datta, Deb & Fonseca (2007) also encountered
infeasibility of NSGA-II-generated solutions for a
highly constrained university timetabling problem,
and when they used feasible solutions as the initial
population, considerable improvement was
recorded. Similarly, Fangguo & Huan (2008)
ensured feasibility of all solutions for their
dominance-based multi-objective genetic algorithm
by using an initial feasible population. Sathe,
Schenk & Burkhart (2009) employed a clustering
algorithm along with NSGA-II in order to always
generate feasible solutions for a multi-constraint bin
packing problem. Li & Hamzaoui (2009) also used
initial feasible solutions for their NSGA-II
implementation. Varela et al. (2009) improved the
heuristic solution obtained for a variant of the
cutting-stock problem by using it as the initial
population for their multi-objective genetic
algorithm. Craig, While & Barone (2009) improved
hockey league scheduling with a multi-objective
evolutionary algorithm by using previous years'
schedules as the initial population. Reiter & Gutjahr
(2010) implemented NSGA-II for a bi-objective
vehicle routing problem by using a separate
algorithm to generate feasible solutions to be used
as an initial population.
The same approach of injecting feasible
solutions, including the results obtained by the
epsilon constraint method, into the randomized
population was used to solve the problem. However,
only 30% of the initial population was filled with
feasible solutions, while the rest of the
population comprised randomly generated
infeasible solutions. This was done to ensure
diversity among solutions. Different GA parameter
settings were tested, and this time the algorithm did
return improved feasible results. Details are as
follows:
Earlier, in order to generate feasible solutions
from the initial random population, a population
size of 500 was used, which resulted in a high
computational cost, fifty hours being required for a
5000-generation run. This was because of the
computational complexity of NSGA-II, which grows
quadratically with the population size, i.e.
O(MN^2), where M is the number of objectives and
N is the population size. With feasible solutions in
the initial population, the computational load can be
reduced by resorting to smaller populations,
especially because multiple runs are essential to
ensure that quality solutions are obtained by
stochastic optimizers such as GAs. Initially, a
population size of 100 was chosen, with a simple
crossover probability of 0.95 and a mutation-by-
gene probability of 0.05. An adaptive mutation
probability of 0.01 was also applied after 1000
generations. After a four-hour run and 2000
generations, there were nine improved feasible
solutions and all the rest were copies. Thus, 2000
generations was selected as the stopping criterion.
With the same initial feasible solutions and the
same proportion of randomized initial infeasible
solutions, the algorithm was run ten times.



Figure 4: NSGA-II Comparison with the Epsilon Constraint Results


All the solutions from the ten runs were combined
and the top hundred were used as the initial population
to generate the best Pareto front for the problem
(Figure 3). As is typical of a stochastic optimizer, the
last generation of each of the ten NSGA-II runs was
different; therefore, combining all solutions
to obtain the final frontier makes sense. As
expected, the combined run that includes the best
100 solutions turns out to be the global Pareto
frontier of the problem.
In figure 3, the Pareto frontiers obtained by Run 5
and Run 6 appear to cross the global Pareto frontier
obtained by the combined run. However, this does
not actually happen, because all the points obtained
by Run 5 and Run 6 are dominated by the
combined run. Run 5 and Run 6 simply
contain fewer solutions overall, and the additional
solutions of the combined run at the exact
locations of the overlaps give the false impression
that Run 5 and Run 6 are better.

6. DISCUSSION
The Pareto frontier obtained by the epsilon
constraint method, which is also the initial Pareto
frontier, is compared with the improvements
recorded by the NSGA-II solutions in figure 4. All
the NSGA-II solutions are equally good or better
than the epsilon constraint results, highlighting the
fact that the epsilon constraint method did not
produce a global Pareto frontier. This is not
surprising, because the GA experiment corresponding
to each epsilon value was only performed once, and it
is widely regarded that only repeated GA experiments
can ensure the best possible solutions (Malik, Qiu &
Taplin 2009). Figure 4 also shows that NSGA-II's
Pareto front stayed within the bounds given
by the ideal and nadir objective vectors. The overall
shape of the Pareto frontier also did not change
much, suggesting that NSGA-II was only able to
find improved solutions in the vicinity of the existing
feasible solutions. Therefore, it appears that the
initial Pareto front dictates NSGA-II's search
process.
The ability of the standard genetic algorithm,
when applied in the epsilon constraint method, to
obtain feasible solutions, and the inability of NSGA-
II to do the same from an initial random population,
can be explained by the different constraint
handling mechanisms of the two multi-objective
algorithms employed. The standard GA uses a
penalty function to handle constraints. The aim is to
transform a constrained optimization problem into
an unconstrained one by penalizing the objective
function by a value based on the constraint
violation. This is particularly useful for NP-hard
combinatorial optimization problems like the one
under study, because the feasibility of solutions is
achieved gradually by driving the soft penalties to
zero. On the contrary, NSGA-II's constraint
handling mechanism proved ineffective when the
entire initial population was infeasible. However,
injecting 30% feasible solutions into the initial
population did improve the results.
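As a minimal illustration of this soft-penalty transform (the functional form and the penalty weight are assumptions, not Evolver's internals):

#include <algorithm>

// Penalized objective for a problem min f(x) subject to g(x) <= 0:
// infeasible solutions pay a cost proportional to their violation,
// so the search is pulled gradually towards feasibility.
double penalizedObjective(double f, double g, double rho) {
    double violation = std::max(0.0, g);   // zero once the constraint holds
    return f + rho * violation;            // rho: penalty weight (assumption)
}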
Moreover, the bi-objective optimization of grade
changeovers and the corresponding cycle service levels
simplifies the lot-sizing decisions. In
conventional single objective optimization, one of
the important considerations in the selection of an
appropriate lot-sizing model is the maximum
number of products that can be manufactured in a
single planning period. A lot-sizing model that
allows a single product per planning period
gives poor results with regard to cycle service
levels, but there are cost savings. When multiple
products are allowed in each planning period, the
cycle service levels are maximized at an increase in
cost. In the bi-objective optimization, there is no need
to make a prior decision regarding the number of
products in each planning period. Multiple lot-
sizing models have been integrated into one
experiment, and the resulting Pareto frontier gives
the production manager a range of options to choose
from, depending upon the decision context.

7. CONCLUSION AND DIRECTION FOR
FUTURE RESEARCH
In this paper, a multi-objective optimization
approach is advocated for the successive
manufacturing processes of the paper industry
supply chain. A two-step solution approach to the
bi-objective production planning problem is
proposed. In the first step, a set of non-dominated
solutions is obtained by employing the epsilon
constraint method, which is then used as part of the
initial population for NSGA-II in the second step.
NSGA-II not only improves the quality of the epsilon
constraint solutions but also increases the number of
solutions on the Pareto frontier. Issues associated
with the successful implementation of the multi-
objective optimization algorithms were discussed,
and the importance of estimating the ideal and nadir
objective vectors to reflect the entire feasible
search space was highlighted.
The relevance of the multi-objective optimization
approach to real-world situations, such as the
production planning problem under study, is stressed.
Ideally, the mill would like to run long production
runs with minimum grade changeovers and to cut
the jumbo reels to stock in anticipation of customer
demand. The same decision context prevailed in
past years, but ever-increasing customer
requirements and market pressures now warrant a
trade-off between production cost, flexibility and
customer service. Typically, in the paper industry,
while some customers enjoy considerable leverage
over the paper mill and will insist on having their
orders delivered on time because of their own
constraints, the paper mill can also afford to delay
some orders, being flexible with a few of its
customers on order delivery and thereby saving on
production costs. The cost reduction achieved by
compromising on service levels can be an
advantage for both the supplier and the customers. For
the customers insisting on punctual deliveries, the
bi-objective formulation discussed in this paper
gives the mill manager a useful tool, because it
can help to quantify the associated extra cost.
REFERENCES
Craig, S, While, L & Barone, L 2009, 'Scheduling
for the National Hockey League Using a
Multi-objective Evolutionary Algorithm',
Proceedings of the 22nd Australasian Joint
Conference on Advances in Artificial
Intelligence.
Datta, D, Deb, K & Fonseca, C 2007, 'Multi-
Objective Evolutionary Algorithm for
University Class Timetabling Problem', in
Evolutionary Scheduling, vol. 49, eds K
Dahal, K Tan & P Cowling, Springer Berlin
/ Heidelberg, pp. 197-236.
De Treville, S, Shapiro, RD & Hameri, A-P 2004,
'From supply chain to demand chain: the
role of lead time reduction in improving
demand chain performance', Journal of
Operations Management, vol. 21, no. 6, p.
613.
Deb, K, Pratap, A, Agarwal, S & Meyarivan, T
2002, 'A fast and elitist multiobjective
genetic algorithm: NSGA-II', Evolutionary
Computation, IEEE Transactions on, vol. 6,
no. 2, pp. 182-197.
Fangguo, H & Huan, Q 2008, 'Multi-objective
Optimization for the Degree-Constrained
Minimum Spanning Tree', in Computer
Science and Software Engineering, 2008
International Conference on, pp. 992-995.
Gau, T & Wascher, G 1995, 'CUTGEN1: A
problem generator for the standard one-
dimensional cutting stock problem',
European journal of operational research,
vol. 84, no. 3, pp. 572-579.
Ghoseiri, K, Szidarovszky, F & Asgharpour, MJ
2004, 'A multi-objective train scheduling
model and solution', Transportation
Research Part B: Methodological, vol. 38,
no. 10, pp. 927-952.
Hameri, A-P & Lehtonen, J-M 2001, 'Production
and supply management strategies in
Nordic paper mills', Scandinavian Journal
of Management, vol. 17, no. 3, pp. 379-396.
Hameri, A-P & Holmström, J 1997, 'Operational
speed - an opportunity for the Finnish paper
industry', Paperi ja Puu/Paper and Timber,
vol. 79, no. 4, pp. 244-248.
Hopp, WJ & Spearman, ML 2008, 'Basic Factory
Dynamics', in Factory Physics, McGraw-
Hill Irwin, New York, p. 230.
Jozefowiez, N, Semet, F & Talbi, E-G 2008, 'Multi-
objective vehicle routing problems',
European Journal of Operational Research,
vol. 189, no. 2, pp. 293-309.

246

Li, X, Yalaoui, F, Amodeo, L & Chehade, H 2010,
'Metaheuristics and exact methods to solve
a multiobjective parallel machines
scheduling problem', Journal of Intelligent
Manufacturing, pp. 1-16.
Li, X, Yalaoui, F, Amodeo, L & Hamzaoui, A
2009, 'Multiobjective method to solve a
parallel machine scheduling problem', 7th
International Logistics & Supply Chain
Congress '09.
Lucic, P & Teodorovic, D 2007, 'Metaheuristics
approach to the aircrew rostering problem',
Annals of Operations Research, vol. 155,
no. 1, pp. 311-338.
Malik, MM, Qiu, M & Taplin, J 2009, 'An
integrated approach to the lot sizing and
cutting stock problems', in Industrial
Engineering and Engineering Management,
2009. IEEM 2009. IEEE International
Conference on, pp. 1111-1115.
Martel, A, Rizk, N, D'Amours, S & Bouchriha, H
2005, 'Synchronized Production-
Distribution Planning in the Pulp and Paper
Industry', in Logistics Systems: Design and
Optimization, eds A Langevin & D Riopel,
Springer US, pp. 323-350.
Ombuki, B, Ross, B & Hanshar, F 2006, 'Multi-
Objective Genetic Algorithms for Vehicle
Routing Problem with Time Windows',
Applied Intelligence, vol. 24, no. 1, pp. 17-
30.
Palisade 2009a, 'Evolver Extras', in Guide to Using
Evolver: The Genetic Algorithm Solver for
Microsoft Excel, Palisade Corporation, New
York, pp. 93-97.
Palisade 2009b, 'Evolver Reference Guide', in
Guide to Using Evolver: The Genetic
Algorithm Solver for Microsoft Excel,
Palisade Corporation, New York, pp. 93-97.
Ranta, J, Ollus, M & Leppänen, A 1992,
'Information technology and structural
change in the paper and pulp industry:
some technological, organizational and
managerial implications', Computers in
Industry, vol. 20, no. 3, pp. 255-269.
Reiter, P & Gutjahr, W 2010, 'Exact hybrid
algorithms for solving a bi-objective vehicle
routing problem', Central European Journal
of Operations Research, pp. 1-25.
Rizk, N & Martel, A 2001, 'Supply chain flow
planning methods: a review of the lot-sizing
literature', Working Paper DT-2001-AM-1,
Centre de recherche sur les technologies de
l'organisation réseau (CENTOR),
Université Laval, Quebec, Canada.
Sathe, M, Schenk, O & Burkhart, H 2009, 'Solving
Bi-objective Many-Constraint Bin Packing
Problems in Automobile Sheet Metal
Forming Processes', in Evolutionary Multi-
Criterion Optimization, vol. 5467, eds M
Ehrgott, C Fonseca, X Gandibleux, J-K Hao
& M Sevaux, Springer Berlin / Heidelberg,
pp. 246-260.
Savic, DA, Bicik, J & Morley, MS 2011, 'A DSS
generator for multiobjective optimisation of
spreadsheet-based models', Environmental
Modelling & Software, vol. 26, no. 5, pp.
551-561.
Srinivas, N & Deb, K 1994, 'Muiltiobjective
Optimization Using Nondominated Sorting
in Genetic Algorithms', Evolutionary
Computation, vol. 2, no. 3, pp. 221-248.
Varela, R, Muñoz, C, Sierra, M & González-
Rodríguez, I 2009, 'Improving Cutting-
Stock Plans with Multi-objective Genetic
Algorithm', in Communications in
Computer and Information Science,
Springer Berlin Heidelberg, pp. 332-344.
Yuen, SY, Fong, CK & Lam, HS 2001,
'Guaranteeing the probability of success
using repeated runs of genetic algorithm',
Image and Vision Computing, vol. 19, no.
8, pp. 551-560.



PERFORMANCE OF 3-D TEXTURED MICRO- THRUST BEARINGS
WITH MANUFACTURING ERRORS

C.I. Papadopoulos
School of Naval Architecture
and Marine Engineering,
National Technical University
of Athens, 15710
Zografos, Greece
chpap@central.ntua.gr

P.G. Nikolakopoulos
Machine Design Laboratory
Dept. of Mechanical and
Aeronautics Engineering
University of Patras, 26500
pnikolak@mech.upatras.gr


L. Kaiktsis
School of Naval Architecture
and Marine Engineering,
National Technical University
of Athens, 15710
Zografos, Greece
kaiktsis@naval.ntua.gr

A.G. Haritopoulos
School of Naval Architecture
and Marine Engineering,
National Technical University
of Athens, 15710
Zografos, Greece
nm08013@central.ntua.gr

E.E. Efstathiou
School of Naval Architecture
and Marine Engineering,
National Technical University
of Athens, 15710
Zografos, Greece
efst@central.ntua.gr

ABSTRACT
The selection and implementation of proper manufacturing processes is crucial in producing
mechanical components that meet the required performance expectations over their lifetime.
Manufacturing errors may substantially deteriorate performance, and decrease the lifetime of
journal and thrust bearings. Based on recent research, proper texturing may drastically improve the
performance of micro- thrust bearings in terms of both load carrying capacity and friction
coefficient. In the present work, the performance sensitivity of textured micro- thrust bearings to
manufacturing errors is investigated. Here, the bearings are modelled as three-dimensional micro-
channels consisting of a smooth moving wall (rotor), and a stationary wall (stator) exhibiting
periodic rectangular dimples. Several types of representative manufacturing errors are considered.
In particular, discrepancies in the size and shape of the texture geometry, as well as macroscopic
errors in the stator surface (concavity/convexity and waviness) are parametrically modelled. The
bearing operation is simulated by means of the numerical solution of the Navier-Stokes equations
for incompressible isothermal flow. By processing the simulation results, the effects of
manufacturing errors on the bearing load carrying capacity and friction coefficient are analyzed, for
representative width-to-length ratios. The effects are interpreted by means of pressure distributions
and visualization of the flow fields. It is found that, in a number of cases, manufacturing errors
result in improved performance of textured micro- thrust bearings.
KEYWORDS
Micro- thrust bearings, CFD, manufacturing errors, performance sensitivity.
1. INTRODUCTION
Thrust bearings are common machine elements in
industrial and marine systems. Micro- thrust
bearings are also common components in micro-
machines, such as micro-turbines, or in electronic
devices like hard disks.
Micro-texturing technologies can be used to
create micro- protrusions and recesses on a surface,
forming an artificial surface roughness. These
surface features are commonly characterized by the
periodic repetition of a basic geometric pattern.
Recent technological advances in surface treatment
methods, such as micro-stereolithography, chemical
etching, surface indentation, micro-machining,
LIGA processes and laser ablation, have enabled the
implementation of artificial texture patterns in
machine components, with resolution accuracy in
the micron range (Yang et al, 2009).
In thrust bearing applications, the presence of
micro texturing in part of the stator surface has been
shown to reduce friction coefficient, increase load
capacity, and improve other bearing properties, such
as the dynamic stiffness and the damping coefficient
values, see Andharia et al (2000), Tonder (1987),
Etsion et al (2004), Ozalp and Umur (2006),
Pascovici et al (2009), Buscaglia et al (2005),
Papadopoulos et al (2011a, 2011b). Computational
studies of textured thrust bearings have utilized both
the Reynolds equation and the Navier-Stokes
equations. For textured sliders, the applicability of
the Reynolds equation has been investigated in
Dobrica and Fillon (2009); it was shown that the
validity of the Reynolds equation depends on both
the Reynolds number value and the dimple
geometric parameters.
In untextured bearings, manufacturing errors of
the bearing surface, such as waviness, tilt, concavity
or convexity, are known to affect bearing
performance (load-carrying capacity and friction
coefficient), see e.g., Abramovitz (1955), Puri et al
(1983), and Ramanaiah and Sundarammal (1982). In
particular, bearings with convex or waved pad
surface may, in certain cases, be characterized by
improved performance, in comparison to that of flat
surface bearings.
Detailed analyses of the effects of manufacturing
errors have been reported in the recent literature for
aerostatic bearings. This type of bearings provides
full performance at zero speed, increased
positioning accuracy, and practically frictionless
operation. Aerostatic bearings are widely used in
metrology, in ultra-precision machining, and in
coordinate measuring machines. The effect of
manufacturing errors on a flat pad aerostatic bearing
design has been investigated by Bhat et al (2010),
by means of a Pareto optimization study; their results
show that multi-orifice aerostatic flat pad bearings
are highly sensitive to surface profile variations.
Cheng and Rowe (1995) presented a design
methodology for externally pressurized journal
bearings, which includes issues related to
manufacturing and the specification of tolerances.
The sensitivity of journal bearing performance to
deviations from the ideal cylindrical surface is
demonstrated in the combined experimental-
theoretical work of Stout and Pink (1980). Kwan
and Post (2000) considered the effects of
manufacturing errors on a single type of multi-
orifice, rectangular flat pad aerostatic bearings; they
found that deviations of the bearing surface from the
ideal plane had a significant effect on bearing load
capacity and stiffness coefficient. Li and Ding
(2007) studied the influence of the geometrical
parameters of a pocketed type orifice restrictor on
the performance of an aerostatic thrust bearing. For
a similar bearing geometry, Chen and He (2006)
investigated the effects of the recess shape, orifice
diameter and gas film thickness, on performance
parameters. Sharma and Pandey (2009) compared
the performance of surface profiled hydrodynamic
thrust bearings to that of conventional plane thrust
bearings; they showed that an increase in load
carrying capacity of approximately 30% can be
achieved by proper surface profile selection.
In summary, literature studies report either: (a)
the effects of manufacturing errors in flat-surface
bearings, or (b) the effects of surface texturing, in
the absence of manufacturing errors. Studies of
textured bearings with manufacturing errors have
not yet been reported. Therefore, following the work
of Papadopoulos et al (2011a, 2011b), in the present
study, the effects of manufacturing errors in textured
thrust bearings are studied. Here, 3-D thrust
bearings of different width-to-length and
convergence ratios, with optimal texturing patterns,
are considered. A parametric CAD model is developed,
accounting for different types of manufacturing
errors of the stator, in particular the convergence ratio
value, convexity / concavity, waviness, and the
dimensioning and shape of the texture cell pattern.
CFD simulations are performed, in which the thrust
bearings are modelled as 3-D micro-channels with a
moving and a stationary wall. Processing of the
simulation results yields the bearing performance
parameters. The results demonstrate that
manufacturing errors may, in several cases, lead to
improved bearing performance.
The paper is organized as follows: The problem
definition and computational approach are first
presented, with a short reference to the optimal
texture geometries obtained by Papadopoulos et al.
(2011b). CFD simulation results are then presented
and discussed, and, finally, the main findings are
summarized.
2. PROBLEM FORMULATION
2.1 MICRO- THRUST BEARING GEOMETRY
In Papadopoulos et al (2011b), the optimization of
texture geometry of three-dimensional converging
micro- thrust bearings with partial periodic
rectangular texturing was considered. A sketch of
such a bearing and its approximation in terms of a 3-
D textured channel is presented in Figure-1. The

249

channel consists of a moving smooth wall (rotor)
and a stationary textured wall (stator). In the case of
a smooth (untextured) stator, pressure buildup is
only feasible for converging channels. The presence
of dimples can result in substantial pressure buildup,
even for parallel or slightly diverging bearings.
In the present study, the three-dimensional
parametric CAD model of Papadopoulos et al
(2011b) has been generalized, to include different
types of manufacturing errors. The bearing height
varies from the inlet height H_1 (at x = 0) to its value
H_0 at the outlet (x = L). H_0 and H_1 are related
through the convergence ratio, k = (H_1 - H_0)/H_0,
taking positive, zero or negative values, for
converging, parallel and diverging sliders,
respectively. The value of the minimum film
thickness, H_min, depends on the actual stator
geometry; here, H_min is assumed constant in all
cases. The length L of the bearing is controlled by
the non-dimensional parameter l = L/H_min; here l is
equal to 100, as in Papadopoulos et al (2011b). The
bearing width, B, is controlled by the ratio B/L.
Part of the stationary wall is textured with
rectangular dimples. For all cases, an untextured
part (sill) at the inlet, of length equal to 1/100 of the
total length, i.e. l_ui = 0.01, is considered. The
untextured length at the bearing outlet is variable,
controlled by the non-dimensional parameter l_uo.
The textured part exhibits periodic texture cells.
Each texture cell is defined by the cell length, L_c,
the dimple length, L_d, and the dimple depth, H_d, see
Figure-1(b). These dimensional geometric
parameters are controlled by the texture density,
ρ_T = L_d/L_c, and the relative dimple depth, s = H_d/H_min.
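To make the parameterization concrete, the following minimal sketch (our illustration; the variable names, and the (N - 1 + ρ_T) cell-count convention read from the garbled nomenclature, are assumptions to be checked against the original paper) derives the dimensional geometry from the non-dimensional parameters:

def bearing_geometry(h_min, l, k, s, rho_t, n_dimples, l_ui=0.01, l_uo=0.3):
    """Dimensional bearing/texture geometry from non-dimensional parameters."""
    L = l * h_min                       # slider length: l = L / H_min
    H0 = h_min                          # outlet height (nominal minimum film thickness)
    H1 = H0 * (1.0 + k)                 # inlet height: k = (H1 - H0) / H0
    # Texture cell length; the (n_dimples - 1 + rho_t) divisor is our reading
    # of the nomenclature entry for L_c and may differ from the original.
    L_c = L * (1.0 - l_ui - l_uo) / (n_dimples - 1.0 + rho_t)
    L_d = rho_t * L_c                   # dimple length: rho_T = L_d / L_c
    H_d = s * h_min                     # dimple depth: s = H_d / H_min
    return {"L": L, "H0": H0, "H1": H1, "L_c": L_c, "L_d": L_d, "H_d": H_d}

# Example: the optimal B/L = 1 case of Table 1 (k_opt = 1.1, l_uo = 0.567,
# s = 0.308), with l = 100, N = 5, rho_T = 0.83 and H_min as the length unit.
geom = bearing_geometry(h_min=1.0, l=100.0, k=1.1, s=0.308,
                        rho_t=0.83, n_dimples=5, l_uo=0.567)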


Figure 1 (a) Three-dimensional textured converging slider geometry (parallel slider for H_1 = H_0, diverging slider for H_1 < H_0). (b) Geometry of dimples. (c) Typical thrust bearing application with partial texturing. (Papadopoulos et al., 2011b.)
Due to manufacturing errors, the stator surface
can be characterized by several deviations from the
nominal surface, see e.g. Kwan and Post (2000). In
the present work, a number of manufacturing errors
is considered, resulting in the following deviations
from nominal design, see Figure-2:
(i) Discrepancy in convergence ratio values.
(ii) Discrepancy in normalized dimple depth.
(iii) Discrepancy of dimple shape from the
reference orthogonal form.
(iv) Concavity / convexity of the base stator
surface.
(v) Waviness of the base stator surface in the
streamwise direction.
Discrepancy of dimple shape from the reference
orthogonal form is accounted for by considering
trapezoids, inscribed in the reference (rectangular)
dimple geometry. The discrepancy is quantified by
the non-dimensional parameters b_a, b_b, defined in
terms of the lengths B_a, B_b, see Figure 1, as follows:

$b_{a} = B_{a}/L_{d}, \qquad b_{b} = B_{b}/(L_{d}-B_{a})$   (1)
Concavity/convexity (in the streamwise direction)
is defined by first considering two points, with a
vertical coordinate C with respect to a local
coordinate system, with origin at the mid-point of
the base (planar) stator surface. Positive values of C
correspond to concave, while negative values to
convex manufactured surfaces. For a given value of
C, a parabola between the stator end-points is
readily defined, thus determining the
convex/concave stator base surface, see Figure-2.
Parameter C is controlled by the non-dimensional
parameter c = C/H_min.
Waviness in the streamwise direction is
considered sinusoidal, defined in terms of an
amplitude A and the wavenumber n. In the present
work, three different values of n are considered:
n = 1, 3, 5. Amplitude A is controlled by the normalized
waviness amplitude a = A/H_min. The local difference
between the manufactured and the nominal film
thickness is defined as follows:

$\Delta H(x) = a\, H_{min} \sin(2\pi n x/L), \qquad n = 1, 3, 5$   (2)
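As an illustration (our own sketch, not code from the paper; the 4ξ(1-ξ) parabola is our reconstruction of the end-point/mid-point description above), the two stator-surface deviations can be superposed on the nominal film thickness as follows:

import math

def stator_deviation(x, L, h_min, c=0.0, a=0.0, n=1):
    """Local deviation of the manufactured stator from the nominal surface.

    c : non-dimensional concavity (+) / convexity (-) amplitude, c = C / H_min
    a : non-dimensional waviness amplitude, a = A / H_min (wavenumber n)
    """
    xi = x / L
    # Parabola through the stator end-points with mid-point offset C = c * H_min
    # (the 4*xi*(1 - xi) form is our reconstruction of the description above).
    parabola = c * h_min * 4.0 * xi * (1.0 - xi)
    # Sinusoidal waviness, Eq. (2): dH(x) = a * H_min * sin(2*pi*n*x / L)
    waviness = a * h_min * math.sin(2.0 * math.pi * n * x / L)
    return parabola + waviness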
2.2 COMPUTATIONAL METHODOLOGY
In the present study, the flow is considered
isothermal, and cavitation is not accounted for. The
conservation equations for unsteady incompressible
and isothermal flow, with zero gravitational and
other external body forces, are:
Mass conservation equation:

$\nabla \cdot \mathbf{V} = 0$   (3)

Momentum equations:

$\partial \mathbf{V}/\partial t + (\mathbf{V}\cdot\nabla)\mathbf{V} = -(1/\rho)\,\nabla p + \nu\,\nabla^{2}\mathbf{V}$   (4)
Equations (3) and (4) are solved with the CFD
code ANSYS CFX. From dimensional analysis, it
follows that, for given geometry, the flow dynamics
depends on the Reynolds number, Re, defined here
in terms of the moving wall velocity and the
minimum film thickness, H_min. In the present study,
Re = 1, a value representative of micro-bearing
applications. Results are presented in non-dimensional
form.

Figure 2 Sketch of textured sliders with manufacturing
errors in the stator: (a) discrepancy in convergence ratio,
(b) discrepancy in dimple depth, (c) discrepancy in dimple
shape from orthogonal, (d) stator concavity, (e) stator
convexity, (f) stator waviness with wavenumber n=1, and
(g) stator waviness with wavenumber n=3.
Typical 3-D meshes generated consist of
approximately 600,000 hexahedral finite volumes.
Here, the density of grids utilized is similar to that
of the validated grids of Papadopoulos et al (2011b).
The bearing walls are considered impermeable.
The bottom wall is stationary. The upper wall is
assumed to be moving at a constant velocity U
(parallel to the x axis), i.e. u = U at the moving wall.
No-slip conditions are assumed at both walls. The inlet and
outlet surfaces of the bearing are considered
openings: the pressure is assumed constant, with the
same value p = 0 prescribed at both boundaries,
while a Neumann boundary condition is assumed
for the velocity. At the bearing sides, z = ±B/2, an
outflow condition is prescribed, prohibiting fluid
entrance to the computational domain.
All simulations were initialized from zero
velocity and pressure fields. For Re = 1, the
governing equations were integrated for a total non-
dimensional time (tU/H_min) of 10, with a time step of
0.05. In all cases, convergence to steady state was
verified by monitoring the computed velocity and
pressure at a number of representative points within
the flow domain.
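As a quick numerical check (ours), the time-integration settings quoted above correspond to 200 steps per run; the Reynolds number definition is the one given in the nomenclature:

# Non-dimensional time t* = t U / H_min: total 10, with step 0.05
total_t_star, dt_star = 10.0, 0.05
n_steps = int(round(total_t_star / dt_star))   # -> 200 time steps per simulation

def reynolds(rho, U, h_min, mu):
    """Re = rho * U * H_min / mu (nomenclature definition)."""
    return rho * U * h_min / mu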
The shear (friction) force and the vertical force
exerted on the rotor are calculated at steady state by
integrating the shear stress, τ, and the pressure, p,
respectively, over the rotor surface:

$F_{fr} = \int_{-B/2}^{B/2}\int_{0}^{L} \tau \, dx\, dz, \qquad F_{p} = \int_{-B/2}^{B/2}\int_{0}^{L} p \, dx\, dz = W$   (5)

The friction coefficient is defined as:

$f = F_{fr}/F_{p}$   (6)
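A minimal post-processing sketch (ours; it assumes wall fields sampled on a uniform x-z grid, not the actual ANSYS CFX post-processor) evaluating Eqs. (5)-(6) by numerical quadrature:

import numpy as np

def bearing_performance(tau_wall, p_wall, dx, dz):
    """Approximate Eqs. (5)-(6) on the rotor surface.

    tau_wall, p_wall : 2-D arrays (nx, nz) of wall shear stress and pressure,
                       sampled over [0, L] x [-B/2, B/2]
    dx, dz           : grid spacings in x and z
    """
    F_fr = np.trapz(np.trapz(tau_wall, dx=dx, axis=0), dx=dz)  # shear force, Eq. (5)
    W = np.trapz(np.trapz(p_wall, dx=dx, axis=0), dx=dz)       # load capacity, F_p = W
    f = F_fr / W                                               # friction coefficient, Eq. (6)
    return W, f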
2.3 OPTIMAL GEOMETRY PARAMETERS OF
REFERENCE TEXTURED BEARINGS
In the present study, the bearings considered are
characterized by manufacturing errors, with respect
to reference optimal stator texture patterns. These
optimal patterns correspond to maximum load
carrying capacity, and are the outcome of the
optimization study of Papadopoulos et al. (2011b),
see Table 1.
Table 1 - Parameters of optimal texture geometry reported in Papadopoulos et al (2011b): values of l_uo and s, for different values of k and B/L. These values are derived for constant values of dimple number and texture density, N = 5 and ρ_T = 0.83. For a given value of B/L, a convergence ratio of k = k_opt corresponds to the global maximum in load carrying capacity.

B/L ratio   Convergence ratio, k   Untextured outlet length, l_uo   Non-dimensional dimple depth, s
inf         0                      0.364                            0.695
inf         k_opt = 0.75           0.307                            0.412
1.0         0                      0.447                            0.412
1.0         k_opt = 1.1            0.567                            0.308
0.5         0                      0.532                            0.372
0.5         k_opt = 1.31           0.705                            0.380
3. RESULTS AND DISCUSSION
The tribological performance of thrust bearings is
expressed by two main integral parameters: the
normalized load carrying capacity, W*, and the
friction coefficient, f. To assess the performance of
textured thrust bearings with manufacturing errors, a
detailed computational investigation has been
performed, for representative values of the slider
width-to-length ratio, B/L, equal to infinity (2-D
sliders), 1, and 0.5. For all three values of B/L, two
values of convergence ratio have been considered,
equal to k = 0 (parallel sliders) and k = k_opt, see
Table-1. The types of manufacturing errors considered are
depicted in Figure-2. The following ranges are
selected for the parameters quantifying
manufacturing errors:
Discrepancy in convergence ratio values: Δk = [-0.2, 0.2].
Discrepancy in normalized dimple depth: Δs = [-0.3, 0.3].
Discrepancy of dimple shape from the reference orthogonal form: b_a = [0, 0.3], b_b = [0, 0.3].
Concavity / convexity of the base stator surface in the streamwise direction: c = [-0.3, 0.3].
Waviness of the base stator surface in the streamwise direction: n = 1, 3, 5; a = [0, 0.3].
Figures 3 and 4 present the effect of different
types of manufacturing errors on the load carrying
capacity and friction coefficient of infinite width,
parallel, and converging (with k = k_opt) bearings. The
following observations can be made:
As in parallel sliders pressure buildup is only due
to texturing, an increase in convergence ratio, k,
results in increased load carrying capacity (Figure-
3(a)), due to the addition of the wedge effect;
correspondingly, a decrease in k (diverging slider)
results in a decrease in W*. Nonetheless, a decrease
in k of the order of 0.1 can be tolerated, if a
decrease in W* of 30% can be accepted. The
monotonic increase in W* with k results in a
corresponding monotonic decrease in friction
coefficient, see Figure-3(f). In converging sliders
of infinite width, a variation of k around k_opt by
Δk = 0.2 results in a small decrease of less than 2%
in W* (Figure-4(a)). The friction coefficient is a
slowly decreasing function of k (Figure-4(f)).

In parallel sliders, the variation of dimple depth
around the optimum by 30% of H_min results in a
maximum decrease in W* of less than 10% (Figure
3(b)). A corresponding increase of less than 10%
is found for the friction coefficient (Figure 3(g)).
For converging bearings with k = k_opt, the effect of
varying s around the optimum value is even less
pronounced (Figure 4(b,g)).

In the presence of either concavity or convexity,
the load carrying capacity of parallel sliders
decreases (Figure-3(c)), with a corresponding
increase in friction coefficient (Figure-3(h)). The
decrease in W* is more pronounced for c > 0
(concave sliders), exceeding 35% at c = 0.3. For
converging bearings with k = k_opt, the effects of
varying parameter c are similar for c < 0 (convex
sliders); however, a non-negligible increase in W*
is gained for c > 0 (concave sliders), see Figure-
4(c). The effect on the friction coefficient is the
same as in parallel sliders.

In parallel sliders, the presence of waviness with n = 1
(one full sine pattern) has a positive effect on
performance, in terms of both W* and f.
Specifically, an improvement of approximately
20% is obtained for both parameters for a non-
dimensional wave amplitude a = 0.2, see Figure-
3(d,i). However, performance deteriorates at
increasing n values. The trends are the same for
converging bearings with k = k_opt (Figure-4(d,i)).

In parallel sliders, the effect of the deviation of the
dimple shape from an orthogonal one, expressed by
parameters b_a and b_b, is not significant, see Figure-
3(e,j). The effects are even smaller for converging
bearings with k = k_opt (Figure-4(e,j)).

For finite width bearings, variation of the
parameters characterizing manufacturing errors
results in the same trends in W* and f as those found
for infinite width sliders; nonetheless, the deviations
from the nominal design are more pronounced.
These deviations increase at decreasing B/L (Figures
5-8). It is emphasized that the results presented in
Figures 5-8 can be used for prescribing tolerances
in the design of textured micro- thrust bearings.
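To illustrate how such sensitivity curves can be turned into tolerances, the following sketch (ours; the sampled values below are placeholders consistent with the trends described for Figure 3(a), not data read from the figures) finds the parameter band that keeps W*/W*_opt above a chosen floor:

import numpy as np

def tolerance_band(param, w_ratio, floor=0.95):
    """Parameter values for which W*/W*_opt stays above `floor`.
    Returns the min/max of the admissible samples (assumes a single band)."""
    param, w_ratio = np.asarray(param), np.asarray(w_ratio)
    ok = w_ratio >= floor
    return (param[ok].min(), param[ok].max()) if ok.any() else None

# Placeholder samples of W*/W*_opt versus convergence-ratio error dk:
dk = np.array([-0.2, -0.1, 0.0, 0.1, 0.2])
w_ratio = np.array([0.55, 0.70, 1.00, 1.10, 1.20])
print(tolerance_band(dk, w_ratio, floor=0.95))   # -> (0.0, 0.2) for these samples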
To illustrate the effects of different
manufacturing errors on the pressure buildup
mechanism of infinite width sliders, Figures 9-11
are presented. Figure 9 presents pressure
distributions of parallel and converging bearings
with k = k_opt = 0.75, corresponding to nominal designs,
as well as to strongly concave/convex stators. Figure
9(a) shows that concavity results in substantial
pressure buildup in the dimpled region, with a sharp
decrease in the untextured part; the latter can be
attributed to the locally diverging slider geometry.
On the other hand, the effect of convexity is less
pronounced in the dimpled region; however, the
pressure decrease is smoother in the untextured part.
In both cases, the pressure maximum as well as the
pressure integral (load carrying capacity) are less
than those of the nominal design
(Figure 3(c)). In converging sliders (Figure 9(b)),
concavity results in a pressure rise which is higher
than that of the nominal design; correspondingly,
the pressure integral is increased, see Figure 4(c). A
less pronounced increase and a correspondingly
decreased pressure integral are attained for the case
of a convex infinite slider.

The effect of stator waviness on the pressure
distribution is presented in Figure 10. It is found
that, for both parallel and converging bearings, the
presence of one full sine pattern results in increased
overall pressure buildup, which explains the
increased load capacity values of Figures 3(d) and
4(d). Pressure buildup becomes less pronounced at
increasing wavenumber values.
Figure 11 presents the effect of parameters b_a and
b_b on the pressure distribution and flow structure
(streamline patterns). It is found that a slight
inclination of both dimple legs leads to a marginal
increase in pressure buildup (Figure 11(a)), due to
the decrease in the size of the corresponding
recirculation zones (Figure 11(b)). However, a
substantial increase in b_a and b_b results in decreased
pressure buildup, due to a decrease of the equivalent
(useful) dimple length.

(a) (b) (c) (d) (e)
0.6
0.7
0.8
0.9
1
1.1
1.2
1.3
1.4
-0.2 -0.1 0 0.1 0.2
W
*
/
W
*
o
p
t
k
130%
115%
105%
95%
85%
70%
100%

0.6
0.7
0.8
0.9
1
1.1
1.2
1.3
1.4
0.3 0.4 0.5 0.6 0.7 0.8 0.9 1
W
*
/
W
*
o
p
t
s
130%
115%
105%
95%
85%
70%
100%

0.6
0.7
0.8
0.9
1
1.1
1.2
1.3
1.4
-0.3 -0.2 -0.1 0 0.1 0.2 0.3
W
*
/
W
*
o
p
t
c
130%
115%
105%
95%
85%
70%
100%

0.6
0.7
0.8
0.9
1
1.1
1.2
1.3
1.4
0 0.1 0.2 0.3
W
*
/
W
*
o
p
t
a
Wave number=1
Wave number=3
Wave number=5
130%
115%
105%
95%
85%
70%
100%

0.6
0.7
0.8
0.9
1
1.1
1.2
1.3
1.4
0 0.05 0.1 0.15 0.2 0.25 0.3
W
*
/
W
*
o
p
t
ba , bb
ba
bb
130%
115%
105%
95%
85%
70%
100%
(f) (g) (h) (i) (j)
0.6
0.7
0.8
0.9
1
1.1
1.2
1.3
1.4
-0.2 -0.1 0 0.1 0.2
f
/
f
o
p
t
k
130%
115%
105%
95%
85%
70%
100%
0.6
0.7
0.8
0.9
1
1.1
1.2
1.3
1.4
0.3 0.4 0.5 0.6 0.7 0.8 0.9 1
f
/
f
o
p
t
s
130%
115%
105%
95%
85%
70%
100%

0.6
0.7
0.8
0.9
1
1.1
1.2
1.3
1.4
-0.3 -0.2 -0.1 0 0.1 0.2 0.3
f
/
f
o
p
t
c
130%
115%
105%
95%
85%
70%
100%

0.6
0.7
0.8
0.9
1
1.1
1.2
1.3
1.4
0 0.1 0.2 0.3
f
/
f
o
p
t
a
Wave number=1
Wave number=3
Wave number=5
130%
115%
105%
95%
85%
70%
100%

0.6
0.7
0.8
0.9
1
1.1
1.2
1.3
1.4
0 0.05 0.1 0.15 0.2 0.25 0.3
f
/
f
o
p
t
ba , bb
ba
bb
130%
115%
105%
95%
85%
70%
100%
Figure 3 B/L = inf, k = 0: ormalized values of load carrying capacity ( )
* *
opt
W W and friction coefficient ( )
opt
f f for different
parameters expressing manufacturing errors: (a),(f) discrepancy in convergence ratio; (b),(g) discrepancy in normalized dimple
depth; (c),(h) concavity (+) / convexity (-); (d),(i) waviness; (e),(j) discrepancy from orthogonal dimple shape.
Figure 4 B/L = inf, k = k_opt = 0.75: Normalized values of load carrying capacity (W*/W*_opt) and friction coefficient (f/f_opt) for different parameters expressing manufacturing errors: (a),(f) discrepancy in convergence ratio; (b),(g) discrepancy in normalized dimple depth; (c),(h) concavity (+) / convexity (-); (d),(i) waviness; (e),(j) discrepancy from orthogonal dimple shape.
Figure 5 B/L = 1, k = 0: Normalized values of load carrying capacity (W*/W*_opt) and friction coefficient (f/f_opt) for different parameters expressing manufacturing errors: (a),(f) discrepancy in convergence ratio; (b),(g) discrepancy in normalized dimple depth; (c),(h) concavity (+) / convexity (-); (d),(i) waviness; (e),(j) discrepancy from orthogonal dimple shape.
Figure 6 B/L = 1, k = k_opt = 1.1: Normalized values of load carrying capacity (W*/W*_opt) and friction coefficient (f/f_opt) for different parameters expressing manufacturing errors: (a),(f) discrepancy in convergence ratio; (b),(g) discrepancy in normalized dimple depth; (c),(h) concavity (+) / convexity (-); (d),(i) waviness; (e),(j) discrepancy from orthogonal dimple shape.
Figure 7 B/L = 0.5, k = 0: Normalized values of load carrying capacity (W*/W*_opt) and friction coefficient (f/f_opt) for different parameters expressing manufacturing errors: (a),(f) discrepancy in convergence ratio; (b),(g) discrepancy in normalized dimple depth; (c),(h) concavity (+) / convexity (-); (d),(i) waviness; (e),(j) discrepancy from orthogonal dimple shape.
Figure 8 B/L = 0.5, k = k_opt = 1.31: Normalized values of load carrying capacity (W*/W*_opt) and friction coefficient (f/f_opt) for different parameters expressing manufacturing errors: (a),(f) discrepancy in convergence ratio; (b),(g) discrepancy in normalized dimple depth; (c),(h) concavity (+) / convexity (-); (d),(i) waviness; (e),(j) discrepancy from orthogonal dimple shape.

Figure 9 B/L = inf: Distributions of non-dimensional pressure on the moving wall of: (a) a parallel bearing (l_uo = 0.364, s = 0.695), and (b) a converging bearing with k = k_opt = 0.75 (l_uo = 0.307, s = 0.412), for nominal designs (c = 0), convex sliders with c = -0.3, and concave sliders with c = 0.3.
Figure 10 B/L = inf: Distributions of non-dimensional pressure on the moving wall of: (a) a parallel bearing, and (b) a converging bearing with k = k_opt = 0.75, for nominal designs (a = 0), and wavy sliders with different wavenumber and non-dimensional amplitude values (n = 1, a = 0.2; n = 5, a = 0.3).
Figure 11 B/L = inf, k = 0: (a) Distributions of non-dimensional pressure on the moving wall for different values of parameters b_a and b_b. (b) Corresponding streamline patterns in the region of the first dimple, coded with velocity magnitude.
Figure 12 B/L = 0.5, k = k_opt = 1.31: Color-coded contours of non-dimensional pressure on the moving wall of the slider, for: (a) the nominal optimal slider; (b) a concave slider with c = 0.3; (c) a wavy slider with n = 1 and a = 0.3. (d) Corresponding distributions of non-dimensional pressure on the moving wall symmetry line (A-A).
The effect of finite bearing width on the pressure
distribution is presented in Figure 12, for
representative cases with B/L = 0.5. Here, converging
sliders with k = k_opt are considered: the nominal design, as well as two
sliders with manufacturing errors, the latter
characterized by substantially improved
performance. Figure 12(a),(d) shows that, in the case of the nominal design, a
substantial pressure buildup is present over the
textured part, as well as over a large portion of the
untextured part. Figure 12(b),(d) shows that, for a
concave slider with c = 0.3, an increased pressure
buildup slope is attained in the textured part,
resulting in a substantial increase of the pressure
integral (see Figure-8(c) for c = 0.3). Finally, Figure
12(c),(d) demonstrates that, in the case of a wavy
stator with n = 1 and a = 0.3, a substantial increase in
pressure buildup is attained over a large portion of the
untextured part; this substantially increases the
maximum pressure value, as well as the pressure
integral (by approximately 20%, see Figure-8(d) for
a = 0.3).
4. CONCLUSIONS
In the present study, the effects of different types of
manufacturing errors on the performance of micro-
thrust bearings have been studied by means of CFD
simulations. The results have demonstrated that, in
several cases, the presence of manufacturing errors
results in improved bearing performance, in terms of
both load carrying capacity and friction coefficient.
The present results can be used as guidelines for
prescribing tolerances of textured micro- thrust
bearings.
5. ACKNOWLEDGMENTS
This work has been partially supported by the EU
project MARINELIVE, grant Nr. 264057. This
support is gratefully acknowledged.
NOMENCLATURE

Micro- thrust bearing geometry variables (Figure 1a)
B          slider width (m)
B/L        slider width-to-length ratio
H_0, H_1   outlet, inlet height (m)
H_min      minimum film thickness (m)
k          convergence ratio: k = (H_1 - H_0)/H_0
L          slider length (m)
l          non-dimensional slider length: l = L/H_min
L_ui       untextured inlet length (m)
l_ui       non-dimensional untextured inlet length: l_ui = L_ui/L
L_uo       untextured outlet length (m)
l_uo       non-dimensional untextured outlet length: l_uo = L_uo/L
x*         non-dimensional x coordinate: x* = x/L

Dimple geometry variables (Figure 1b)
B_a        bottom left length of dimple (m)
b_a        non-dimensional bottom left length of dimple: b_a = B_a/L_d
B_b        bottom right length of dimple (m)
b_b        non-dimensional bottom right length of dimple: b_b = B_b/(L_d - B_a)
H_d        dimple depth (m)
L_c        texture cell length (m): L_c = L(1 - l_ui - l_uo)/(N - 1 + ρ_T)
L_d        dimple length (m)
N          number of dimples
s          relative dimple depth: s = H_d/H_min
ρ_T        texture density: ρ_T = L_d/L_c

Variables defining manufacturing errors
A          waviness amplitude (m)
a          non-dimensional waviness amplitude: a = A/H_min
C          concavity/convexity amplitude (m)
c          non-dimensional concavity/convexity amplitude: c = C/H_min
n          wavenumber

Physics variables
f          friction coefficient: f = F_fr/W
F_fr, F_p  friction force, vertical pressure force (N)
p          pressure (Pa)
p*         non-dimensional pressure: p* = p H_min^2 / (Re μ U L)
Re         Reynolds number: Re = ρ U H_min / μ
U          moving wall velocity (m s^-1)
u, v, w    streamwise, cross-flow and spanwise fluid velocities (m s^-1)
V          fluid velocity vector: V = u i + v j + w k
W          absolute value of external bearing force (load carrying capacity) (N)
W*         non-dimensional load carrying capacity: W* = W H_min^2 / (μ U B L^2)
μ          fluid dynamic viscosity (Pa s)
ρ          fluid density (kg m^-3)
τ          shear stress (Pa)
REFERENCES
Abramovitz S, Theory for a slider bearing with a convex pad
surface; Side flow neglected, Journal of Franklin Institute,
Vol. 259, No. 3, 1955, pp 221-233.
Andharia PI, Gupta, JL, and Deheri, GM, On the Shape of the
Lubricant Film for the Optimum Performance of a
Longitudinal Rough Slider Bearing, Industrial Lubrication
and Tribology, Vol. 52, No. 6, 2000, pp 273-276.
Bhat N, Barrans S and Kumar AS, Performance Analysis of
Pareto Optimal Bearings Subject to Surface Error
Variations, Tribology International, Vol. 43, No. 11, 2010,
pp 2240-2249.
Buscaglia GC, Ciuperca I, and Jai M, The Effect of Periodic
Textures on the Static Characteristics of Thrust Bearings,
ASME Journal of Tribology, Vol. 127, 2005, pp 899-902.
Cheng K, and Rowe WB, A Selection Strategy for the Design
of Externally Pressurized Journal Bearings, Tribology
International, Vol. 28, No. 7, 1995, pp 465-474.
Chen XD and He XM, The Effect of Recess Shape on
Performance Analysis of the Gas-Lubricated Bearing in
Optical Lithography, Tribology International, Vol. 39,
2006, pp 1336-1341.
Dobrica MB and Fillon M, About the Validity of Reynolds
Equation and Inertia Effects in Textured Sliders of Infinite
Width, Proceedings of the Institution of Mechanical
Engineers, Part J: Journal of Engineering Tribology, Vol.
223, No. 1, 2009, pp 69-78.
Etsion I, Halperin G, Brizmer V and Kligerman Y,
Experimental Investigation of Laser Surface Textured
Parallel Thrust Bearings, Tribology Letters, Vol. 17, No. 2,
2004, pp 295-300.
Kwan YBP and Post JB, A Tolerancing Procedure for
Inherently Compensated, Rectangular Aerostatic Thrust
Bearings, Tribology International, Vol. 33, 2000, pp 581-585.
Li Y and Ding H, Influences of the Geometrical Parameters of
Aerostatic Thrust Bearing with Pocketed Orifice-Type
Restrictor on its Performance, Tribology International, Vol.
40, No. 7, 2007, pp 1120-1126.
Ozalp AA and Umur H, Optimum Surface Profile Design and
Performance Evaluation of Inclined Slider Bearings,
Current Science, Vol. 90, No. 11, 2006, pp 1480-1491.
Papadopoulos CI, Efstathiou EE, Nikolakopoulos PG and
Kaiktsis L, Geometry Optimization of Textured 3-D Micro-
Thrust Bearings, Proceedings of ASME Turbo Expo 2011:
Turbine Technical Conference and Exposition, GT2011,
June 6-11, 2011, Vancouver, Canada.
Papadopoulos CI, Nikolakopoulos PG and Kaiktsis L,
Evolutionary Optimization of Micro- Thrust Bearings with
Periodic Partial Trapezoidal Surface Texturing. Journal of
Engineering for Gas Turbines and Power, Vol. 133, 2011,
pp 1-10.
Pascovici MD, Cicone T, Fillon M and Dobrica MB,
Analytical Investigation of a Partially Textured Parallel
Slider. Proceedings of the Institution of Mechanical
Engineers, Part J: Journal of Engineering Tribology, Vol.
223, No. 2, 2009, pp 151-158.
Puri V, Patel CM and Bhat MV, The Analysis of a Pivoted
Porous Slider Bearing with a Convex Pad Surface, Wear,
Vol. 88, 1983, pp 127-131.
Ramanaiah G and Sundarammal A, Effect of Bearing
Deformation on the Characteristics of a Slider Bearing,
Wear, Vol. 78, No. 3, 1982, pp 273-278.
Sharma RK and Pandey RK, Experimental Studies of Pressure
Distributions in Finite Slider Bearing with Single
Continuous Surface Profiles on the Pads, Tribology
International, Vol. 42, 2009, pp 1040-1045.
Stout KJ and Pink EG, Orifice Compensated EP Gas Bearings:
the Significance of Errors of Manufacture, Tribology
International, Vol. 13, No. 3, 1980, pp 105-111.
Tonder K, Effects of Skew Unidirectional Striated Roughness
on Hydrodynamic Lubrication, Wear, Vol. 115, No.1-2,
1987, pp 19-30.
Yang H, Ratchev S, Turitto M and Segal J, Rapid
Manufacturing of Non-Assembly Complex Micro-Devices
by Stereolithography, Tsinghua Science and Technology,
Vol. 14, 2009, pp 164-167.
A MULTILEVEL RECONFIGURATION CONCEPT TO ENABLE VERSATILE
PRODUCTION IN DISTRIBUTED MANUFACTURING
Sarfraz Ul Haque Minhas
Chair of Automation Technology BTU Cottbus
Germany
minhas@tu-cottbus.de
Marcel Halbauer
Chair of Automation Technology BTU Cottbus
Germany
marcel.halbauer@tu-cottbus.de


Ulrich Berger
Chair of Automation Technology, BTU Cottbus, Germany
ulrich.berger@tu-cottbus.de
ABSTRACT
The manufacturing industry is confronting challenges due to high diversity of product variants,
reduced product life cycles, short innovation cycles, faster time to market as well as strict
environmental regulations. These challenges have persuaded manufacturers to exploit concepts
related to open innovation, distributed manufacturing, modular and scalable production system
design and eco-efficient production. This paper provides a short review of the state of the art in the
reconfiguration of distributed production systems and focuses on new strategies to resolve the
complexities that arise. In this regard, a reconfiguration concept based on new
strategic objectives is proposed to enable customized production. The approach will be
implemented and validated in collaborative projects.
KEYWORDS
Optimization Module, Plug and Produce, Smart Robot Tooling

1. INTRODUCTION
The manufacturing industry is subject to
numerous current and future challenges. Meeting
these challenges requires drastic changes in
manufacturers' operating environments, particularly
when they intend to compete in the global
environment. As a matter of fact, manufacturers,
particularly the OEMs, have focussed on getting
quick and easy access to segregated markets as a
strategy to raise their competitiveness and market
share. In doing so, they need to address
individualized customer demands in the shortest
possible time.
Manufacturers are consolidating resources, as
well as relying on extensive outsourcing of design
and production, to bring innovations to market in the
shortest possible time. Furthermore, current and
future challenges concerning environmental
regulations require alternative, environmentally
friendly choices of materials and processes. They
also require adaptable and eco-efficient
manufacturing systems that exploit reconfigurable
and lean production concepts. Better organizational
structures have been identified and explored to
achieve responsive, expandable, adaptable,
reconfigurable and eco-efficient systems. These
characteristics demand intensive and systematic
collaboration among all manufacturing stakeholders,
i.e. OEMs, subcontractors, suppliers, dealers,
retailers as well as customers, in the form of
collaborative networks. These networks act as
strategic capacity builders that foster manufacturers'
competitiveness. Like other industrial sectors, the
automotive industry has been going through a huge
change in its organizational structures since it
began exploiting mass customization principles.
Increasing competition in satisfying customer
requirements, along with declining product costs,
shrinking innovation cycles and an increasing
production volume mix, has compelled
manufacturers to interact more deeply with
their suppliers as well as with customers, without
relying completely on their own competencies and
solutions (Scavarda and Hamacher, 2007). Several
strategies have been adopted over the history
of production in the automotive industry to address
customer demands, most notably
make to stock, build to order and assemble to order.
A hybrid strategy combining assemble to order and
make to stock is practically adopted in the vehicle
manufacturing sector, to achieve short delivery times
as well as to minimize production capacity
constraints (Brabazon and MacCarthy, 2006).
build to order strategy has its advantages for large,
expensive and customized parts. In the car industry
that would be e.g. engines or entire cars (Alicke,
2005). In Germany, build to order strategy has a
long tradition and more than 60% of the cars are
built to customer orders (Parry and Graves, 2008)
connected through highly dynamic value chain
management process. The value added chain in
automotive is characterized by intense flow of
materials, information and components along with
the tremendous amount of collaboration activities
among suppliers, manufacturers, dealers, retailers as
well as customers in ever expanding production
networks. Consequently, the collaboration requires
intense involvement of stakeholders in common
design, production as well as after sales services.
Corallo and Lazoi (2010) have highlighted some of
the recent practices being adopted by value added
network actors in an aerospace company to manage
and perform innovation activities. Similarly, issues
related to the future collaboration between the
stakeholders of automotive production to get highly
individualized and environmentally friendly cars are
reported in an article by Daum (2005). Emphasis
has been laid on a better understanding of
customer needs, in order to bring innovations in the
shortest possible time. This has triggered the expansion
of collaborative networks by enhancing the roles of
tier-1 suppliers and of customers in the product
development phase. The advent of internet-based
technologies has revolutionized the communication
possibilities in collaborative networks. In the
B2C context, this is particularly visible in product
configurators for consumer products as well as for
modern vehicles. The car configurators introduced
by every manufacturer, for instance, provide
customers with a set of options to customize vehicles
according to their choices. In B2B relations, the
existing communication infrastructure at
manufacturers as well as at suppliers is not
sufficient to deliver satisfactory
performance for effective collaboration (PTC,
2009). Additionally, short innovation cycles
have compelled OEMs to rely on more extensive
partnerships with other stakeholders, for better
utilization of capabilities and capacities within the
production networks. However, challenges arise
from multiple data repositories, insufficient process
support, and integration issues between the various
tools on the manufacturer and supplier sides; these
have made the development process sluggish and
prone to time delays and investment losses (PTC,
2009). Consequently, effective collaboration is
hindered by information and data exchange issues.
This necessitates searching for innovative solutions
for collaboration on the product design side, among
potential customers and manufacturers. Moreover, the
collaboration between suppliers and dealers must
embrace innovative collaborative platforms that
connect heterogeneous software tools and support
the exchange of data in production networks.
Another issue in the scope of this paper
relates to the environmental impact of production
activities in the collaborative networks. The
diversified product requirements from
heterogeneous markets and the global distribution of
manufacturing and supplier locations have
generated a pool of highly diversified
manufacturing and supply alternatives. Each
manufacturing and supply process generates
environmental emissions; subsequently, a huge
variety of alternative production schemes can be
generated. These should be optimized based on cost,
time and environmental efficiency, to decide on the
feasible production scheme that can manufacture
the customized product. Currently, the shop floor
processes are optimized considering only their associated
costs and manufacturing time. There is no
significant contribution in the scientific
literature that provides a concept for the assessment of
the environmental impact of customized
manufacturing in a distributed production system.
Previous studies mentioned in Olugu et al. (2010)
have focussed mainly on the areas of sustainability
costs, process optimization to reduce ecological
load, and recycling. Much attention is given to
limiting waste disposal from processes to less than
5% for the 90% of end-of-life vehicles (Olugu et al.,
2010; Schultmann et al., 2006), to comply with the
aspirations of the European Commission. The
environmental assessment of production processes,
i.e. manufacturing process chains or supply chains,
is generally carried out on a manufacturing-site
basis and with respect to the specific product. In
some cases it is even neglected, as the assessment
has not been enforced strictly by laws or any other
influential factor. Furthermore, the responsiveness and
reconfigurability of the production system have
raised questions about productivity when accommodating
the changing requirements of customized
production. The diversified customer demands are
fulfilled by introducing a huge diversity of vehicle
variants; OEMs follow this practice as a strategy to
gain competitive advantage. Wemhöner (2006)
indicated that the number of new Mercedes-Benz models
introduced per year increased over the last 15 years from an
average of one to 2.5, while the product life cycle of these
models was reduced from 9 years to 5-7 years.
BMW also claims that the possible
variations in the BMW 7 series can reach 10^17 (Hu et
al., 2008). This diversity is seen not only in vehicle
drives, power trains and other accessories, but also
in the new body structures made of new
materials, such as lightweight materials (Goede et
al., 2009) and multifunctional materials (Salonitis et
al., 2009), to achieve fuel economy as well as
eco-efficiency. The high diversity has increased the
complexity of controlling production, internally
as well as externally. Internally, it has posed a big
challenge to the order fulfilment process (Salvador
and Forza, 2004), as well as to the optimal
configuration of production setups. Externally, the
configuration of the supply chain under changing
customer requirements and environmental
regulations concerning materials and processes has
raised enormous complexities. One of the main
challenges in controlling production is the
configuration of the automotive production system
to new production requirements, and the reusability of
resources for new processes, operations and
applications. The automotive production setup
configuration process is accompanied by several
activities related to resource planning, design,
simulation, optimization, commissioning, and the
sequencing and scheduling of the tasks to be
accomplished on the specified resources in a
production network. The production configuration
process mainly constitutes the commissioning of
resources. This process is quite repetitive and
sluggish, accompanied by technical complexities.
The complexities in optimizing resources to reach
production objectives have had a negative
impact on the technical and economic objectives of
manufacturers in the serial start-up of production. A
case study mentioned in (N. N., 2005) depicts some
interesting statistics about serial start-ups of
production in the European automotive industry during
the years 2004-2005. These statistics indicate that,
during that period, about 60% of the running serial
start-ups missed their set objectives, while approximately
23% of the start-ups were neither economically nor
technically successful. The main problems that arise
during commissioning are unsuitable resources,
missing resources, resources with incorrect or misfit
specifications, and failure to reach the desired cycle time.
Besides, setup times and costs are extremely high. Furthermore, the
exchange of planning and commissioning related
data is not consistent and involves a high risk of loss
of accuracy in information exchanged through
heterogeneous software tools. Additionally, the
coordination with the external suppliers is
extremely sluggish, due to the absence of seamless and
standardized communication and data exchange
means and tools. Today, commissioning accounts
for about 11% of the time for vehicle volume
production of up to five years (Schuh et al., 2008;
Barbian, 2005). With increased customization,
resources must incorporate fast reconfiguration
possibilities with minimum setup times, or they must
be completely transformable with the least changeover
time. This also requires the adoption of fast
commissioning strategies, to achieve customized
production with an even shorter lead time. To sum up
the presented facts and trends, future
manufacturing systems need distributed production
networks that can be adjusted and configured
dynamically to enable customized production.
Besides, the optimization of the production processes
and of the resources used in the collaborative networks
must consider cost, time, quality and
environmental efficiency. Furthermore, at the
resource level, the corresponding process setups
must be reconfigured quickly, to achieve high
responsiveness and reusability. This will help reduce
changeover time and the associated development
costs and delays.
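To make the multi-criteria optimization called for above concrete, the following toy sketch (ours; the criteria names, weights and the weighted-sum scalarization are illustrative assumptions, not a method from this paper) ranks alternative production schemes by cost, time, quality and environmental emissions:

def rank_schemes(schemes, weights):
    """Rank production schemes by a weighted sum of normalized criteria.

    schemes : list of dicts with "cost", "time", "quality", "emissions"
    weights : dict criterion -> weight; quality is inverted so that all
              criteria become "lower is better"
    """
    def norm(key, invert=False):
        vals = [s[key] for s in schemes]
        lo, hi = min(vals), max(vals)
        span = (hi - lo) or 1.0
        return {id(s): (1.0 - (s[key] - lo) / span if invert
                        else (s[key] - lo) / span) for s in schemes}
    cost, time = norm("cost"), norm("time")
    qual, emis = norm("quality", invert=True), norm("emissions")
    def score(s):
        return (weights["cost"] * cost[id(s)] + weights["time"] * time[id(s)]
                + weights["quality"] * qual[id(s)]
                + weights["emissions"] * emis[id(s)])
    return sorted(schemes, key=score)   # lowest aggregate penalty first

# Hypothetical alternatives (all numbers are placeholders):
schemes = [
    {"name": "site A + supplier X", "cost": 100, "time": 12, "quality": 0.95, "emissions": 40},
    {"name": "site B + supplier Y", "cost": 90, "time": 15, "quality": 0.92, "emissions": 55},
]
best = rank_schemes(schemes, {"cost": 0.3, "time": 0.3,
                              "quality": 0.2, "emissions": 0.2})[0]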
2. STATE OF THE ART ON DISTRIBUTED
PRODUCTION SYSTEMS
Production is the totality of all functions
required to design, produce, distribute and service
products. A distributed manufacturing system
(Figure-2) comprises geographically networked
value-adding resources and material handling
resources, interconnected by transport
resources, meant for producing a variety of products
for segregated markets (Farid et al., 2006). Until
now, production systems have been in a
continuous process of alteration as well as
consolidation, due to changing product
specifications. Furthermore, governmental
legislation demands eco-friendly vehicles that have
less environmental impact in production, in use and
after use. Taking the example of the automotive
industry, the corresponding production systems
comprise a dense network of geographically
distributed manufacturing sites forming the OEM's global
infrastructure. The Japanese concept of just-in-time
production, as well as the subsequent supply chain
management, has led geographically distributed
suppliers to be linked directly with the respective
vehicle manufacturing production phase. The
distribution of suppliers and manufacturing
locations is uneven, due to inconsistent customer
demands or choices.


Figure 1 ANX platform for collaboration between OEMs
and suppliers (Cassivi et al., 2000)
Modularity is regarded as a key factor in organizing
not only product customization but also the
supply chains (Ro et al., 2007). It has facilitated the
collaboration with suppliers, which are
integrated bilaterally with manufacturers through
indigenous electronic data exchange tools
(Koperberg, 2007). The German automotive
manufacturer Daimler, for example, is believed to
have a strong partnership with its suppliers through a
common platform like ANX (see Figure-1).
Through this platform, not only the manufacturer
but also tier-2 and tier-3 suppliers are able to work with
the tier-1 supplier in an effective and much easier way
(Cassivi et al., 2000). The collaboration through
ANX allows real-time interaction between product
developers and production engineers. The most
common means known today are video
conferencing, data visualization tools for easy data
analysis, digital mock-up tools, application
control and data exchange to avoid any duplication
of files, and messengers for the exchange of office files
(Cassivi et al., 2000).
The distributed manufacturing system in automotive
manufacturing is illustrated in Figure-2. The shops
related to each manufacturing facility are closely
or loosely coupled with suppliers, subcontractors,
distributors, etc. Production planning in
such a distributed supply and manufacturing chain
is a highly complicated task, as the optimization of
processes has to be done locally as well as globally
along the whole interconnected manufacturing
network. Consequently, decision making in a
distributed production system has become
problematic, due to constraints related to time, cost
and environmental emissions. The scope of this
paper is restricted to the planning issues in
distributed manufacturing systems, as well as
configuration issues at the resource level, with focus
on body shop development and the machine shop,
respectively.

Figure 2 Distributed Manufacturing System (Network and
Automobile Manufacturing Phases)
The configuration or reconfiguration of a manufacturing system consists of the steps shown in the block diagram in Figure-3. At the network level, the decisions regarding the selection of the production location as well as the supply chain for manufacturing each of the anticipated products are made. At the site level, the processes and the corresponding resources are optimized based on the production goals, i.e. cost, time and quality. As the process quality has a direct influence on the product quality, the processes are selected depending upon the quality that can be achieved through them as well as on resource related constraints.

Figure 3 Main phases of configuration of a distributed production system (network level: decision on location, resources and suppliers; site level: resource optimization, simulation & development, commissioning, task/job scheduling and task/job sequencing)
Furthermore, the process cycle time has a direct influence on the production rate; therefore, the processes are optimized not only based on their feasibility to accomplish tasks but also on the inherent costs, cycle time and product quality. In the simulation phase (see Figure-3), tasks are simulated using the selected resources in digital factory tools. This is particularly meant for the assessment of the process and production related metrics. After simulation, new resources are added or existing ones altered. This activity requires intensive collaboration with material, component and equipment suppliers on one side and with the system integrators on the other, to assist in
finalizing commissioning and performance
evaluation. Afterwards, the scheduling and
sequencing of the jobs or tasks are carried out by
assigning them to resources in an optimal way.


2.2.1. Distributed Production Planning and
Scheduling
Production planning and scheduling refers to activities that deal with the selection and sequencing of production processes as well as the optimal assignment of tasks to manufacturing resources over a specific time. Several methodologies have been introduced in the literature to enable computer aided process planning, namely feature based planning (Cai, 2007; Mokhtar et al., 2007; Berger et al., 2008), artificial intelligence, i.e. neural network and genetic algorithm based planning (Joo, 2005; Monostori et al., 2000; Zhang et al., 1997; Venkatesan et al., 2009), and knowledge based approaches (Wu et al., 2010; Tsai et al., 2010). Only a handful of papers discuss planning and scheduling related issues in the distributed manufacturing environment. Recent literature, for example, discusses the coordination of local schedulers by using heuristic algorithms (Xu et al., 2010). The agent based concept has emerged as an innovative solution to planning and control problems in distributed production systems. Lima et al. (2005) presented a model for agent based production planning and control to dynamically adapt to local and distributed utilization of production resources and materials. In a highly individualized customer demand scenario, i.e. one-of-a-kind production, incremental process planning has been proposed to extend or modify a primitive plan incrementally according to the new product features (Tu et al., 2000). Likewise, an agent based approach is used to enable manufacturing organizations to dynamically and cost effectively integrate, optimize, configure, simulate and restructure their manufacturing systems as well as supply networks (Zhang et al., 2006). Agent based approaches are more flexible, efficient and adaptable to dynamic and distributed manufacturing environments. However, none of these contributions addresses planning based on strategic production goals. Thus, the optimization of processes and activities with respect to cost, time and the potential environmental impact of manufacturing and supply processes for automotive manufacturing has rarely been addressed. A very limited number of research papers address the optimization of the sequence or ordering of activities based on energy consumption at the individual machine level (Mouzon et al., 2007) or at the shop floor level (Vijayaraghavan and Dornfeld, 2010). Therefore, future manufacturing setups and processes need to be optimized based on cost, time and environmental efficiency to help manufacturers gain a competitive advantage by bringing cheap, high quality and innovative products to the market in a short delivery time.


2.2.2. Configuration at Resource Level
The second issue addressed in this paper is the configuration of resources to achieve customized production on the shop floor. The configuration activities at this level are mainly dominated by the commissioning of shop resources, to enable the smooth execution of the intended tasks with the anticipated quality. The commissioning process comprises several distinct activities. The major part of these activities is assisted by digital tools to enable a smooth and fast ramp-up as well as to reduce development and commissioning costs. The commissioning activities use advanced robot simulation tools to virtually simulate, validate and commission the robot application environment. This allows experiments which may be difficult to carry out using real systems. The commissioning process may involve intensive repetitive activities accompanied by tedious testing and hit-and-trial procedures to achieve the desired robot movements. The simulation of a new robot, or of a robot with unknown characteristics, is needed to specify the positions of the tool centre point (TCP) in order to execute the intended tasks precisely. Until now, robots have been used in high volume production applications. Typically, robots are programmed for new tasks first by teach-in procedures and then offline to obtain the desired path. The changeover of a robot from one process application scope to another is a time consuming process. Furthermore, for each changed product feature, the robot has to be programmed through teach-in procedures or through virtual simulation tools and then programmed offline. The process is also time consuming because robots lack absolute positioning accuracy and the programming and simulation tools are unreliable for configuring robots in a short time for executing complex tasks such as machining. These limitations call for a configuration strategy that enables the reusability of the robot as a plug-and-produce device for the execution of various processes.
3. CONCEPT FORMULATION
The concept for reconfiguration of production
systems is presented by considering two distinct
cases. The first is taken from the automotive body shop, while the second is from the machine shop. The formed, stamped and rolled components of the body-in-white are sent to the body shop, where the complete body-in-white is developed after assembly. The machine shop is an auxiliary part of automotive production setups. In the automotive industry, vehicle components for the end assembly are machined in this shop and delivered to the final assembly shop. The body shop is taken as a case study to present the reconfiguration concept at the network level, and the machine shop to introduce the resource reconfiguration concept. Hence, the mass customized manufacturing of products with small, changing lot sizes becomes possible.
3.1. RECONFIGURATION AT NETWORK
LEVEL
Currently, the complete vehicle body-in-white is developed at one particular location in a centralized body shop. The body shop at each vehicle manufacturer follows one of the basic layout forms shown in Figure-4.

Figure 4 Basic Body Shop Layouts: i) open basic forms, ii) closed basic forms, showing the different stations in the body shop (e.g. assembling, welding), the material/part flow and the cell borders
Hesse (2006) describes different basic layout forms for assembly systems. These basic forms are also followed in body shop assembly lines. The most prominent layouts are the Z-shaped, C-form and fishbone assembly layouts.
Considering the body-in-white as a product, the current body-in-white is modularized (Paralikas et al., 2011) to easily generate product families from the basic platform. Furthermore, parts or modules can be either carried over or reused after slight modifications. The modular body-in-white design has opened up many new possibilities for redefining the layout inside the body shop of a manufacturing plant as well as among different manufacturing facilities in a distributed manufacturing scenario.
Figure 5 Distributed Body Shop (OEM and Supplier Network): the present centralized body shop versus a future distributed body shop network of suppliers and OEMs (integrators) serving end customers
Figure-5 describes the present and future layouts for automotive body shops. Currently, the modularization of the body-in-white has promoted the structuring of body-in-white assembly lines on a fishbone layout. Various vehicle modules are produced separately on various sub-assembly lines, and the body-in-white is then integrated in a stepwise fashion. The flexibility at the cell level and at the assembly line level is very limited due to the dedicated joining stations and assembly robots. The cycle times are also fixed at the cell as well as at the assembly line level. Askar and Zimmermann (2007) noted that the body shop generally has no technical flexibility because the robots used have fixed cycle times. To handle variant diversity, modularity (Pandremenos et al., 2009) and metal hybrid body-in-white concepts (Grujicic et al., 2009) are adopted. Furthermore, new production concepts based on the migration manufacturing principle (Meichsner, 2009) have been exploited to develop various vehicle shapes. This principle, however, cannot be easily mapped to handle the diversity of luxury vehicle segments. All these efforts are based on the modularization of the products to handle variety in the manufacturing systems; however, the manufacturing setups themselves are not reconfigurable to develop products that are modularized at the product level. Minhas et al. (2011) and Zipter et al. (2011) introduced two novel concepts to make production setups reconfigurable or transformable to handle diversified assembly tasks. The versatile production setup concept (Minhas et al., 2011), in the form of a multi-technology joining cell for joining body-in-white subassemblies, is a specific case of making the joining cell scalable, modular and responsive to changing product development requirements. The robot farming concept (Zipter et
al., 2011) is envisaged to meet the challenges of volatile markets through a dynamic resource management concept. However, these concepts have not addressed the open innovation and environmental impact challenges that the body shop layout and production processes may face in the near future. Moreover, no contribution has so far been made to the configuration of body shop production considering distributed production. This production scenario is required when manufacturers are unable to cope with short innovation cycles and long lead times, and the body-in-white variants are developed through mutual collaborations of suppliers and OEMs. Papakostas et al. (1999) introduced the flexible agent based system RIDER that encompasses both real-time and decentralized manufacturing decision making capabilities in textile and cable producing enterprises. This concept is still valid for solving reconfiguration issues in a real-time and decentralized manufacturing environment in the automotive area. Current trends show that the roles of suppliers and manufacturers, as well as the relationships between manufacturers, are being reassessed to enable customized
production. Figure-5 shows a graphical representation of the distributed body shop. The manufacturer will analyze the customer requirements based on the vehicle style as well as external accessories, such as the roof, and assess the production schemes based on the available suppliers in order to deliver the required modules or parts in a specified time. The configuration or reconfiguration of the potential manufacturing or supply scheme will be made considering the associated cost of manufacturing the product as well as the production and delivery time. An additional factor will also be considered to assess the potential production schemes, namely their environmental impact. The architecture of the decision support tool is shown as a block diagram in Figure-6. The modular and scalable body-in-white is customized based on the style specifications from the potential customers. The customized version of the body-in-white is compared with the bill of materials and bill of processes of the reference body-in-white to derive the customized bill of materials and bill of processes. This information is used to decide at which locations production will be carried out to manufacture the specific body and where integration will take place. The environmental impact of each production scheme will be assessed by retrieving information directly from the knowledge base to calculate the environmental impact metrics of the production processes as well as of the supply means. In the case of missing information or a completely new production case, the production chain will be simulated in the environmental assessment tool.

Figure 6 Distributed Production Network Optimization Module: customized body-in-white specifications, reference BoP & BoM, supplier information and manufacturing sites feed an optimization module with cost, time and environmental metrics, supported by an environmental assessment database, a calculation agent (algorithms, equations), a knowledge base and a simulation tool, accessed by OEMs and suppliers through a graphical user interface; the output is the customized production process chain
The final decision about production at the supplier side or at the manufacturer side will be made considering the cost, time and environmental impact of the production as well as of the supply chains. This will enable economical, quick and eco-efficient production in a mass customization scenario.
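The paper does not formalize how the optimization module aggregates the cost, time and environmental metrics; the following minimal sketch assumes a simple weighted-sum ranking of candidate production schemes, with all class names, metric values and weights being illustrative assumptions rather than the actual tool.

import java.util.*;

// Illustrative weighted-sum ranking of candidate production schemes. In practice
// the metrics would first be normalized to comparable scales.
class Scheme {
    String name; double cost, time, co2; // metrics from the knowledge base or simulation
    Scheme(String n, double c, double t, double e) { name = n; cost = c; time = t; co2 = e; }
}

public class SchemeRanking {
    public static void main(String[] args) {
        List<Scheme> schemes = new ArrayList<Scheme>(Arrays.asList(
            new Scheme("production at supplier side", 120.0, 14.0, 8.5),
            new Scheme("production at OEM side", 100.0, 18.0, 9.0)));
        final double wCost = 0.5, wTime = 0.3, wEnv = 0.2; // assumed strategic weights
        Collections.sort(schemes, new Comparator<Scheme>() {
            public int compare(Scheme a, Scheme b) {
                double sa = wCost * a.cost + wTime * a.time + wEnv * a.co2;
                double sb = wCost * b.cost + wTime * b.time + wEnv * b.co2;
                return Double.compare(sa, sb); // lower aggregate score is preferred
            }
        });
        System.out.println("Preferred scheme: " + schemes.get(0).name);
    }
}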

3.2. RECONFIGURATION AT RESOURCE
LEVEL
At the resource level, the reconfiguration process must be flexible and fast enough to accomplish the tasks and deliver customized products in a shorter time. Customized production in the automotive industry involves the machining of complex automotive parts, moulds and dies. The machine shop today is composed of various stand-alone as well as flexible CNC machines connected in flexible layouts and transfer lines to allow the machining of parts with complex geometries in a multi-stage process (Eversheim, 1989). This structuring of machining is one of the most challenging tasks, as it is organized according to the product specifications, processes and operations. Furthermore, resources such as CNC machines are very expensive. Moreover, the NC process chain has become very complex and dynamic, and a huge amount of information has to be handled and exchanged along the process planning phase (Berger et al., 2008). At the resource layout level, the Smart Robot Tooling concept (see Figure-7) is introduced in this paper, which addresses machining using cost efficient resources such as robots. The employment of a robot for machining operations requires different machining strategies, parameters, applications and settings compared to a CNC machine. As a plug-and-
produce solution, the industrial robot machining cell is not limited to any specific manufacturing technology. It can be reconfigured for assembly and joining applications as well as for the transportation of materials and workpieces in its working area, possibly by crossing or linking with other cells. The configuration or reconfiguration of machining cells based on industrial robots is less challenging than the rearrangement of CNC machines in machining centres or machining parks. The limitations that hinder the industrial robot from being used for machining applications are its lack of absolute pose accuracy and the discrepancy between the offline robot program and the actual path followed by the robot when accomplishing a task.
Figure 7 Smart Robot Tooling Concept: exchangeable tooling for assembling, lathe operations, milling, grinding/polishing, screwing, glueing, spot welding, painting, laser welding/laser cutting and waterjet cutting, with laser sensor, ultrasonic and touch probe sensing
Additionally, a huge cloud of program points is generated for machining operations. Robot stability problems arise from the process forces and from the slow processing of measured data and feedback loops in high speed robotic motions (Wang et al., 2009). Euhus and Krain (2011) introduced a universal sensor module with an innovative Ethernet UDP export function, which provides this data with a sample frequency of 500 kHz. The introduced Smart Robot Tooling concept is envisaged to reconfigure the robot for new applications and to help reduce the high changeover and development costs associated with the reconfiguration process. The quick reconfiguration of the robot requires a hybrid concept that incorporates model based and sensor based solutions. The model based solution treats each robot as an individual entity: the robot characteristics (e.g. tolerances and accuracy) and its behaviour (e.g. under process forces) are first measured and stored in a database. This is particularly useful when robots are exchanged or replaced to transform or scale production setups; all the necessary information is taken from the database to ensure accuracy in the reconfiguration process. The sensor based compensation will eliminate the differences between the desired and the actual position of the robot. Positioning errors below the physical accuracy of the robot, caused by the non-static process forces created by the milling tool, should also be compensated to achieve the same level of machining quality as a CNC machine delivers. The high speed movements required to compensate the described errors call for the highly dynamic and stiff piezo-actuated platform introduced by Fraunhofer IPA, Germany.
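As a rough illustration of this hybrid concept, the sketch below combines a stored model based correction with a sensor based feedback correction along a single axis; the structure, method names and values are assumptions made for illustration, not the actual controller described above.

// Assumed structure of the hybrid compensation: the position command is the
// nominal pose plus a model based offset from the robot characteristics
// database plus a sensor based closed-loop correction.
public class HybridCompensation {

    // Model based part: offset measured beforehand for this specific robot
    // and looked up from a database (value here is purely illustrative).
    static double modelOffset(String robotId, double nominalPos) {
        return 0.12; // mm, hypothetical database lookup
    }

    // Sensor based part: proportional correction from the measured deviation.
    static double sensorCorrection(double desired, double measured, double gain) {
        return gain * (desired - measured);
    }

    public static void main(String[] args) {
        double nominal = 100.0, measured = 99.7, gain = 0.8; // mm, illustrative
        double command = nominal
                + modelOffset("robot-7", nominal)
                + sensorCorrection(nominal, measured, gain);
        System.out.println("Corrected position command: " + command + " mm");
    }
}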
4. CONCLUSIONS
Distributed manufacturing systems have emerged as a solution for enhancing agility and responsiveness in production systems. Furthermore, distributed production, together with customized product specifications, the corresponding pool of new materials and processes, and the time constraints of bringing innovations to the market in the shortest possible time, demands the configuration of the production system at the distributed network level based on the cost, time and environmental impact of the production processes. At the resource level, a plug-and-produce approach should be employed to make the resources quickly ready for customized production and reusable for new applications. The increasing material and process variety will lead OEMs to open up their current body shop production strategy from a centralized to a distributed network. This distributed network will be constituted by suppliers and OEMs to develop customized bodies-in-white in a modularized way, and it will push manufacturers towards more concrete and effective planning to achieve future strategic goals. Therefore, the configuration of the distributed production network must be based on the cost, time and potential environmental impact of the production processes at the suppliers and OEMs as well as of the associated supply chains. Additionally, the use of cost efficient and versatile resources is envisaged to reduce development and setup costs. Furthermore, at the resource level, the incorporation of cost efficient resources and their configuration is necessary to achieve productivity in customized production.
5. ACKNOWLEDGMENTS
The work reported in this paper is regarded as part of the dissemination activity of, and is supported by, two EC FP7 projects: e-Custom and COMET. e-Custom, "A Web-based Collaboration System for Mass Customization" (NMP2-SL-2010-260067), mainly addresses the optimization of production processes based on the discussed production goals, using a web based collaboration platform for customized production at a mass level. COMET is envisaged to provide plug-and-produce components and methods for the adaptive control of industrial robots, enabling
cost effective, high precision manufacturing in
factories of the future (FP7-2010-NMP-ICT-FOF-
258769).

REFERENCES
Alicke K, "Planung und Betrieb von Logistiknetzwerken: unternehmensübergreifendes Supply Chain Management", Springer, Berlin, 2005
Askar G and Zimmermann J, "Optimal Usage of
Flexibility Instruments in Automotive Plants",
Operations Research Proceedings, 2006, Part XVII,
2007, pp. 479-484
Barbian P, "Produktionsstrategie im Produktlebenszykluskonzept zur systematischen Umsetzung durch Produktionsprojekte", Universität Kaiserslautern, 2005
Berger U, Kretzschmann R, Arnold K P and Minhas S,
"Approach for the development of a heuristic process
planning tool for sequencing NC machining
operations", Applied Computer Science, Vol. 4, No. 2,
2008, pp 17-41
Berger U, Lebedynska Y and Minhas S, "An approach of a knowledge management system in an automated manufacturing environment", 9th WSEAS International Conference on AUTOMATION and INFORMATION (ICAI'08), Bucharest, Romania, 24-26 June 2008
Brabazon P G and MacCarthy B, "Fundamental behavior
of build-to-order systems", International Journal of
Production Economics, 2006, No. 104, pp 514-524.
Cai J, "Development of a reference feature-based
machining process planning data model for web-
enabled exchange in extended enterprise", PhD
Dissertation, Shaker Verlag, Aachen, 2007
Corallo A and Lazoi M, "Value Network Collaborations
for Innovations in an Aerospace Company",
Proceedings of 16th International Conference on
Concurrent Enterprise Collaborative Environments for
Sustainable Innovation, Lugano 21-23 June 2010
Cassivi L, Lefebvre L A and Hen L G, "Supply Chain
Integration in the Automobile Industry", Proceedings
from the 8th International Conference on Management
of Technology, Elsevier Advanced Technology,
Oxford UK, 2000
Daum H J, "Intangible Assets and Value-Based Network Control in the Automotive Industry. Part 1: The role of an intangible-based analysis of the value creation system - the example of Toyota", The New New Economy Analyst Report, 06 Feb. 2005
Eversheim W, "Organisation in der Produktionstechnik - Fertigung und Montage", Band 4, VDI Verlag, Düsseldorf, 1989, pp 33-54
Euhus D, Krain R, "First metal cut at TEKS monitored
with 500KHz", www.comet-project.eu, 2011,
http://www.comet-project.eu/publications/ARTIS-
first-metal-cut-at-TEKS.pdf

Farid A M and McFarlane D C, "A tool for assessing
reconfigurability of distributed manufacturing
systems", 12th IFAC/IFIP/IFORS/IEEE/IMS
Symposium on Information Control Problems in
Manufacturing, Saint-Etienne, France, 17-19 May
2006
Goede M, Stehlin M, Rafflenbeul L, Kopp G and Beeh E, "Super Light Car - lightweight construction thanks to a multi-material design and function integration", European Transport Research Review, 2009, pp 5-10
Grujicic M, Sellappan S, He T, Seyr N, Obieglo A, Erdmann M and Holzleitner J, "Total Life Cycle-Based Materials Selection for Polymer Metal Hybrid Body-in-White Automotive Components", ASM International 1059-9495, 2009, pp 111-127
Hesse S, "Automatische Montagemaschinen", Lotter B,
Wiendahl H-P (Editors), Montage in der industriellen
Produktion, Springer Verlag, Berlin, Heidelberg,
2006, pp 220-224
Hu S J, Zhu X, Wang H and Koren Y, "Product variety
and manufacturing complexity in assembly systems
and supply chains", CIRP Annals - Manufacturing
Technology, No. 57, 2008, pp 45-48
Joo J, "Neural Network-based Dynamic Planning Model
for Process Parameter Determination", Proceedings of
International Conference on Computational
Intelligence for Modelling, Control and Automation,
2005 and International Conference on Intelligent
Agents, Web Technologies and Internet Commerce,
Vienna, 28-30 Nov. 2005, pp 117-122
Koperberg S X, "The information flows and supporting
technology in the automotive supply chain: a suppliers
focus", 6th Twente Student Conference on IT,
Enschede, 2nd Feb. 2007
Lima R, Sousa R and Martins P, "Distributed production
planning and control agent based system",
International Journal of Production Research, 18th
International Conference on Production Research, 31
Jul -4 Aug. 2005, Italy
Meichsner T P, "Changeable and Reconfigurable
Manufacturing Systems Advanced Manufacturing",
Springer Verlag, 2009, pp 373-388
Minhas S U H, Lehmann C and Berger U, "Concept and
Development of Intelligent Production Control to
enable Versatile Production in the Automotive
Factories of the Future", Proceedings of the 18th CIRP
International Conference on Life Cycle Engineering,
Braunschweig, Germany, 2-4 May 2011, pp 57-62
Mouzon G, Yildirim M B and Twomey J, "Operational
methods for minimization of energy consumption of
manufacturing equipment", International Journal of
Production Research, 2007, Vol. 45, No. 18-19, pp
4247-4271
Mokhtar A, Tavakoli-Bina A and, Houshmand M,
"Approaches and challenges in machining feature
based process planning", Proceedings of 4th
International Conference on Digital Enterprise
Technology (DET 2007)
Monostori L, Viharos Z J and Markos S, "Satisfying
various requirements in different levels and stages of
machining using one general ANN based process
model", Journal of Materials Processing Technology,
Vol. 107, Issues 1-3, 2000, pp 228-235
N. N., "Konzept gegen Rckruf-Aktionen (Concept
against callback actions) ", Automobil-Produktion,
April 2005
Olugu E U, Wong K Y and Shaharoun A M, "A
comprehensive approach in assessing the performance
of an automobile closed loop supply chain", Journal of
Sustainability, Vol. 2, 2010, pp 871-889
Pandremenos J, Paralikas J, Salonitis K and Chryssolouris G, "Modularity concepts for automotive industry: A critical review", CIRP Journal of Manufacturing Science and Technology 1, 2009, pp 148-152
Papakostas N, Mourtzis D, Bechrakis K, Chryssolouris
G., Doukas D, "A flexible agent based framework for
manufacturing decision making", 9th International
Conference on Flexible Autom & Intel Manufacturing,
Tilburg, Netherlands, 23-25 June 1999
Paralikas J, Fysikopoulos A, Pandremenos J and
Chryssolouris G, "Product modularity and assembly
systems: An automotive case study", CIRP Annals-
Manufacturing Technology, Vol. 60, Issue 1, 2011, pp
165-168
Parry G and Graves A, "Build To Order - The Road to
the 5-Day-Car", Springer Verlag, London, 2008
PTC, "Five Key Capabilities for Collaborating Across
Automotive Design, Manufacturing & Supply
Chains", White Paper Automotive Design, Supplier &
Manufacturing Collaboration, 2009.
Ro Y K, Liker J K and Fixson S K, "Modularity as a
strategy for supply chain coordination: The case of
U. S. Auto", IEEE Transactions on Engineering
Management, Vol. 54, No. 1, 2007, pp 172-189
Salonitis K, Pandremenos J, Paralikas J and
Chryssolouris G, "Multifunctional Materials Used in
Automotive Industry: A Critical Review". In:
Engineering Against Fracture Proceedings of the 1st
Conference, Springer Science+ Business Media B. V.,
Patras, 2009
Salvador F and Forza C, "Configuring products to
address the customization-responsiveness squeeze: A
survey of management issues and opportunities",
International Journal of Production Economics, Vol.
91, Issue 3, 2004, pp 273-291
Scavarda L F and Hamachar S, "The role of SCM
capabilities to support automotive industry trends",
Brazilian Journal of Operations & Production
Management, Vol. 4, No. 2, 2007, pp 77-95
Schuh G, Stölzle W and Straube F, "Anlaufmanagement in der Automobilindustrie erfolgreich umsetzen: Ein Leitfaden für die Praxis", Springer Verlag, Berlin Heidelberg, 2008, p 2
Schultmann F, Zumkeller M and Rentz O, "Modeling
reverse logistic tasks within closed-loop supply
chains: An example from the automotive industry",
European Journal of Operational Research, Vol. 171,
2006, Issue 3, pp 1033-1050
Shi X, Chen J, Peng Y and Ruan X, "Development of a
knowledge-based process planning system for an auto
panel", The International Journal of Advanced
Manufacturing Technology, Vol. 19, No. 12, 2002, pp
898-904
Tu Y, Chu X and Yang W, "Computer-aided process
planning in virtual one-of-a-kind production", Journal
Computers in Industry, Vol. 41, 2000, pp 99-110
Tsai Y L, You C F, Lin J Y and Liu K Y, "Knowledge-
based Engineering for Process Planning and Die
Design for Automotive Panels", Computer-Aided
Design & Applications, Vol. 7, No. 1, 2010, pp 75-87
Venkatesan D, Kannan K and Saravanan R, "A genetic algorithm-based artificial neural network model for the optimization of machining processes", Neural Computing and Applications, Vol. 18, No. 2, 2009, pp 135-140
Vijayaraghavan A and Dornfeld D, "Automated energy
monitoring of machine tools", CIRP Annals-
Manufacturing Technology, Vol. 59, 2010, pp 21-24
Wang J, Zhang H and Fuhlbrigge T, "Improving
Machining Accuracy with Robot Deformation
Compensation", Proceedings of IEEE/RSJ
International Conference on Intelligent Robots and
Systems, St. Louis (USA), 2009, pp 3826-3831
Wemhöner N, "Flexibilitätsoptimierung zur Auslastungssteigerung im Automobilrohbau", PhD Dissertation, Shaker Verlag, Aachen, 2006
Wu M, Li D and Ji W, "Knowledge-based reasoning
assembly process planning approach to laser range-
finder", International Conference on Computer
Application and System Modeling (ICCASM),
Taiyuan, 22-24 Oct. 2010, pp V2-686-V2-690
Xu C, Sand G and Engell S, "Coordination of distributed
production planning and scheduling systems", 5th
International Conference on management and control
of production logistics, 08-10 Sep. 2010, Portugal
Zhang D Z, Anosike A I, Lim M K and Akanle O M, "An
agent-based approach for e-manufacturing and supply
chain integration", Journal of Computers and
Industrial Engineering, Vol. 51, Issue 2, 2006, pp
343-360
Zhang F, Zhang Y F and Nee A Y C, "Using genetic
algorithms in process planning for job shop
machining", IEEE Transactions on Evolutionary
Computation, Vol. 1, Issue 4, 1997, pp 278-289
Zipter V, Zürn M and Berger U, "Entwicklung einer Bewertungsmethode für die Integration von Robot-Farming-Konzepten in Montageprozesse - Ein Beitrag zur wandlungsfähigen Montage", Automation 2011 Congress "Zukunft verantwortungsvoll gestalten", Baden-Baden, VDI Verlag GmbH, Düsseldorf, 2011, pp 235-245
VIRTUAL FACTORY MANAGER OF SEMANTIC DATA
Giorgio Ghielmini
ICIMSI-SUPSI
giorgio.ghielmini@supsi.ch
Paolo Pedrazzoli
ICIMSI-SUPSI
paolo.pedrazzoli@supsi.ch

Diego Rovere
ICIMSI-SUPSI
diego.rovere@supsi.ch
Walter Terkaj
ITIA-CNR
walter.terkaj@itia.cnr.it
Claudio R. Boër
ICIMSI-SUPSI
claudio.boer@supsi.ch

Giovanni Dal Maso
Technology Transfer System S.r.l.
dalmaso@ttsnetwork.com
Ferdinando Milella
SimX ltd.
f.milella@simx.co.uk
Marco Sacco
ITIA-CNR
marco.sacco@itia.cnr.it


ABSTRACT
The growing importance of manufacturing SMEs within the European economy, in terms of Gross Domestic Product and number of jobs, emphasizes the need for proper ICT tools to support their competitiveness. Major ICT players already offer one-does-all Product Lifecycle Management suites, supporting several phases of the product-process-plant definition and management. However, these also show considerable shortcomings in terms of SME accessibility and degree of personalization, and they often lack an acceptable level of interoperability. These problems are being addressed by the development of a Virtual Factory Framework (VFF), within an EU funded project. The approach is based on four pillars: 1) Semantic Shared Data Model, 2) Virtual Factory Manager (VFM), 3) Decoupled Software Tools that lay on the shared data model and can interact through the VFM, 4) Integration of Knowledge. This paper will focus on the Virtual Factory Manager, proposing an evolution of the former VFF second Pillar (Sacco et al, 2010), which acts as a server supporting the I/O communications within the framework, serving its stored knowledge to the decoupled software tools that need to access its repository.
KEYWORDS
Virtual Factory, Enterprise Modelling, Reference Model, Interoperability, Semantic Data Model

1. INTRODUCTION
Market needs and expectations require a continuously and rapidly evolving production framework: thus production systems, from small to large scale and integrated factories, have to be conceived and set up in shorter and shorter times
(Chryssolouris et al, 2008). Several critical aspects,
related to this need of rapid prototyping of factories,
have to be addressed: it is critical to provide
sufficient product variety to meet customer
requirements, business needs and technical
advancements (Huang et al, 2005), while
maintaining economies of scale and scope within
the manufacturing processes (Terkaj et al, 2009).
Therefore, the current challenge in manufacturing
engineering consists in the innovative integration of
the product, process and factory worlds and the
related data, aiming at synchronizing their lifecycles
(Tolio et al, 2010).
The creation of a holistic, integrable, upgradable, scalable virtual representation of the factory can empower this synchronization, promoting high cost savings in the implementation of new manufacturing facilities or the reconfiguration of existing ones, thanks to the effective virtual
representation of buildings, resources, process, and
products: this is shown both by industrial practice
and academic scientific research. The entire factory
is simulated as a continuous and consistent digital
model, which can be used, without interruption, all
the way from the product idea to the final
dismantling of the production plants and buildings
(Bracht and Masurat, 2005).
This challenge is being addressed by the development of a Virtual Factory Framework (VFF), within an EU funded project (http://www.vff-project.eu/). The approach is based on four pillars: 1) Semantic Shared Virtual Factory Data Model (VFDM), 2) Virtual Factory Manager (VFM), 3) Decoupled Software Tools, based on the VFDM and able to interact through the VFM, and 4) Integration of Knowledge. The VFF objective is to foster an integrated virtual environment that supports factory processes along all the phases of the factory lifecycle.
This paper will focus on the Virtual Factory Manager (VFM), proposing an evolution of the former VFF Pillar II (Sacco et al, 2010). This evolution finds its justification in the identified weaknesses of the former second Pillar, found both in the support of the data consistency check against modifications performed by different modules and in the integration of the knowledge layer with the pure factory data layer. A viable solution has been identified in the adoption of ontologies as a means for data and relationship representation, promoting knowledge integration in the data model. This approach introduces a modification in the overall VFF architecture, where Pillar IV (knowledge integration) is no longer seen as the foundation of Pillar I (reference data model), as presented in (Sacco et al, 2010), but is rather considered as an additional decoupled module (Figure 1).
This paper presents the new VFF framework born from this evolution.
2. VIRTUAL FACTORY FRAMEWORK
As mentioned, an answer to the market requirements previously highlighted has been provided by the development of a first version of the Virtual Factory Manager (Sacco et al, 2011). That solution proved the validity of the concept of having an integrated virtual environment supporting the design and management of all the factory entities, ranging from the single product to the network of companies, along all the stages of the factory lifecycle. The centralized data management platform, based on a common description of the digital factory, demonstrated the capability to improve the integration process between software design tools (existing and newly developed ones) and to provide a shared knowledge base to be used during the factory modelling phases. Nevertheless, the former approach showed some weaknesses, both in the support of the data consistency check against modifications to parts of the factory instances performed by different modules and in the integration of the knowledge layer with the pure factory data layer. The result, highlighted by advanced tests, was a framework affected by some usability problems.
The main cause of this situation has been identified in the inability of the previous implementation of the VFDM to represent not only valid data structures but also their semantics. A viable solution has been identified in the adoption of ontologies as a means for data and relationship representation, in order to improve the integration of knowledge among the VFF pillars. This approach introduces some modifications in the whole VFF picture, affecting the way the components of the architecture interact. The result is a tighter cooperation between the Knowledge Manager and the VFDM pillars.
2.1. SEMANTIC VIRTUAL FACTORY
FRAMEWORK ARCHITECTURE
Figure 1 shows the new architecture of the Semantic Virtual Factory Framework, composed of the four pillars: Semantic Shared Data Model (Pillar I), Semantic VF Manager (Pillar II), Decoupled VF Modules (Pillar III) and Knowledge Manager (Pillar IV).
The Semantic VFDM establishes a coherent, standard, extensible set of ontologies for the common integrated representation of the factory objects and of the factory knowledge domain, based on the tools of the semantic web (mainly the Web Ontology Language - OWL). Section 3 is dedicated to a thorough description of the new
Semantic VFDM approach and of the reasons that drove the change.
Figure 1 - The Semantic Virtual Factory Framework architecture
This common ontology set is governed by the Semantic VFM (Pillar II), which complements the functionalities of access control, data versioning and selective data query, already implemented by the previous VFM, with full support for semantic data validation. In this way, Decoupled Modules (Pillar III) modifying single parts of the factory data immediately receive feedback on the consistency of their actions with the overall definition of the factory instance. Section 4 and Section 6 are respectively dedicated to the analysis of the Semantic VFM and to the description of the current prototype implementation.
In order to cope with the new features of the VFM and with the new data exchange formats, it has been necessary to intervene on the internal architecture of the Decoupled Modules and, in particular, of their VF Connector modules. Section 5 reports on the new structure of Pillar III, while an example of a new module interacting with the Semantic VFM prototype is provided in Section 7. The Knowledge Manager (Pillar IV), which was the only component of the previous architecture already based on the usage of ontologies, has also been affected by the new approach. With the new structure, in fact, it can directly interface the Semantic VFM as a decoupled module managing a dedicated part of the Semantic Data Model ontology. In this way another weak point of VFF has been removed, namely the need for the adaptation of formats and protocols between pillars.
3. SHARED SEMANTIC DATA MODEL
The Reference Model (Pillar I) establishes a coherent, standard, extensible Virtual Factory Data Model (VFDM) for the common representation of factory objects related to production systems, resources, processes and products. The common data model can be considered as the shared meta-language providing a common definition of the data that will be governed by the VFM (Pillar II) and used and updated by the Decoupled Functional Modules (Pillar III).
According to the original requirements, the VFDM has to be holistic, covering all the relevant fields related to the factory domain, and has to exploit existing technical standards to represent the data. Moreover, the VFDM has to be extensible and guarantee the proper granularity, providing at the same time the enablers for data consistency, data safety and proprietary data management.
Sacco et al (2011) conceived the VFDM as a set of XSD files (W3C, 2004c) defining the structure of the XML files that would be stored and managed by the VFM. This solution offers relevant advantages in terms of:
- Syntactic validation of the XML files according to the defined XSD files.
- Rich expressiveness, since several default data types can be further extended and complex constraints and properties can be modelled.
- Possibility to integrate several XSD files within a single project.
However, the XSD technology alone is not suitable for knowledge representation and several flaws can be highlighted:
- No explicit characterization of data with their relations on a semantic level.
- Intra-document references are supported, but inter-document references (cross-references) are poorly modelled, thus endangering referential consistency.
- Distributed data can hardly be managed.
- The integration of different knowledge domains can be cumbersome.
These considerations led to evaluating and finally adopting the Semantic Web technologies, which offer key advantages to the whole VFF because they enable to:
- Represent a formal semantics.
- Efficiently model and manage distributed data.
- Ease the interoperability of different applications.
- Process data outside the particular environment in which it was created.
- Exploit generic tools that can infer from and reason about an ontology, thus providing a generic support that is not customized to the specific domain.
The new semantic VFDM has been designed as an ontology (W3C, 2004a) by adopting the OWL language (W3C, 2004b). In particular, it defines all the classes, properties and restrictions that can be used to create the individuals to be stored in the data repository (Pillar II). Given the wide range and heterogeneity of the knowledge domains to be covered by the VFDM in the scope of VFF, it is necessary to integrate various knowledge domains, as already highlighted by Colledani et al (2008) and Valente et al (2010) in previous related works. Therefore, the VFDM has been decomposed into macro areas (i.e. bricks), creating a hierarchical structure of sub-ontologies named Factory, Building, System, Resource, Process, Product, Strategy, Performance and Management. This architecture allows decomposing the problem, downsizing its complexity while keeping a holistic approach. These sub-ontologies have been
developed by referring to the state-of-the-art
technical standards available in the different
domains, and in particular the Industry Foundation
Classes (IFC2x3, 2006), STEP-NC (ISO 14649-
10:2004), and ISA-95 (ISA-95).
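As a minimal sketch of what this decomposition implies on the implementation side, the snippet below loads one sub-ontology into a Jena ontology model; since OWL imports are resolved automatically by Jena, referencing one brick can pull in the sub-ontologies it depends on. The file name and path are illustrative assumptions, as the actual VFDM identifiers are not given here.

import com.hp.hpl.jena.ontology.OntClass;
import com.hp.hpl.jena.ontology.OntModel;
import com.hp.hpl.jena.ontology.OntModelSpec;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.util.iterator.ExtendedIterator;

public class VfdmLoader {
    public static void main(String[] args) {
        // In-memory OWL model; owl:imports of the sub-ontology are followed by default
        OntModel vfdm = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM);
        vfdm.read("file:vfdm/Resource.owl"); // hypothetical path to the Resource brick
        // List the classes made available by the brick and its imports
        for (ExtendedIterator<OntClass> it = vfdm.listClasses(); it.hasNext();) {
            System.out.println(it.next().getURI());
        }
    }
}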
4. VIRTUAL FACTORY MANAGER
This section presents the analysis of the
requirements for the VFM (Sect. 4.1) and its
proposed architecture (Sect. 4.2).
4.1 VFM REQUIREMENTS
The main goal of the VFM design and implementation consists in obtaining an open integration platform representing a common and shared communication layer between already existing and newly developed software tools supporting factory design and management.
The preliminary architecture of the VFM proposed by Sacco et al (2011) was related to a VF Data Model based on the XSD/XML format. The adoption of an ontology-based representation of the VF Data Model in the VFF project has led to a re-design of the VFM, in which Semantic Web technologies have been exploited. In the new architecture the previous basic requirements have been extended to include specific semantic functionalities:
- Platform independent interfacing capabilities. The VF modules are software tools developed by different vendors/organizations, with different programming languages, operating systems and HW architectures. The VFM has to interface all of them by providing its services in an open and proper way.
- Management of concurrent access and data consistency. Several software tools can access and/or modify partial areas of the factory data at different, and possibly overlapping, times. Therefore, the VFM is required to ensure that concurrent accesses occur without endangering data integrity or slowing down the planning process to unacceptable levels.
- Management of evolving factory data. The VFM has to provide functionalities for managing the evolution and revision of the data related to complex entities like production systems, processes and products. A typical VFDM object is made up of several files, depending on the sub-ontologies it refers to, as described in Section 3. Hence, a coherent versioning mechanism must take into consideration the inter-document references between sub-ontologies.
- Data safety, which must be ensured in case of hardware failures or user errors.
- Addition of customized functionalities. Third party developers need an appropriate mechanism to enrich the set of functionalities provided by the VFM without impacting its core.
- Response time. The interaction between the VFM and the VF modules requires the support of communication mechanisms that are able to provide answers in an appropriate time frame.
- A Semantic Web endpoint, which enables stakeholders to query virtual factory models with the required level of granularity for a more efficient and selective data access.
Most of the above already underlay the development of the previous version of the VFM. For this reason the architecture of the new semantic version shares similar features with its predecessor. However, the ability to support validation and queries of semantic data introduces novel aspects in the overall design of the VFM.
4.2 SEMANTIC VFM ARCHITECTURE
The architecture of the semantic VFM was designed to support the required functionalities. Each solution implemented by the VFM is based on stable and well established technologies, in order to obtain an overall system capable of responding to industrial reliability needs. The resulting VFM architecture is shown in Figure 2 as a UML component diagram.
The functionalities of the VFM are exposed as web services, which have been identified as a suitable and widely adopted solution to guarantee platform independent interfacing capabilities. The Application Server provides the front end for the exposure of the VFM functionalities and takes care of the information transport of the VFM.
The Information Exchanging Platform (IEP) is the main component of the VFM and provides VF modules and plugins with high-level access to the two functional cores of the VFM: the Versioning Layer and the Semantic Layer. It represents the preferred way (even if not the only one) to connect to the VFM, since it provides a complete set of methods for structured data retrieval and semantic validation, a data locking mechanism and factory version management.
The Versioning Layer contains the VF Data Repository where all the shared data are stored. The evolution of the factory data is managed by the Versioning System, which organizes and updates the set of virtual factory instances. The Versioning System guarantees data safety as well, since it allows restoring an older version at any time, thus preventing data losses due to user errors. Moreover, rollback methods can be used in case of data inconsistencies due to broken connections or other
factors, always ensuring data safety. In particular,
the locking mechanism exposed through the IEP
helps to manage the concurrent access of the VF
modules.


Figure 2 - Semantic VFM Architecture

The Semantic Layer is implemented by embedding in the VFM one of the most common and reliable Semantic Web frameworks: Jena (Jena, 2011). Through the IEP, users can carry out semantic validations of VFF models using Jena functionalities directly on the server. Thanks to Jena, the IEP can also provide a VFM SPARQL endpoint. By starting a Query Session on data extracted from the VF Data Repository, it is possible to perform SPARQL queries (W3C, 2008a). Through queries, each module (or plugin) can select and aggregate information and be fed with exactly the data it needs for its business process. Model modifications can also be executed using the SPARQL Update language (W3C, 2008a). Modified models can then be serialised into output files in the same format used by the VF Data Model ontology (RDF/XML).
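As an illustration of this query-then-update cycle, the sketch below runs a SPARQL SELECT and a SPARQL Update against an in-memory Jena model and serialises the result as RDF/XML. It uses only standard Jena/ARQ calls; the file name, namespace and class names are illustrative assumptions, not the actual VFDM vocabulary.

import com.hp.hpl.jena.query.*;
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.update.UpdateAction;

public class SparqlSession {
    public static void main(String[] args) {
        Model model = ModelFactory.createDefaultModel();
        model.read("file:factory-instance.owl"); // hypothetical project file from the repository

        // SELECT: list all individuals typed as a (hypothetical) vfdm:Resource
        String q = "PREFIX vfdm: <http://example.org/vfdm#> "
                 + "SELECT ?r WHERE { ?r a vfdm:Resource }";
        QueryExecution qe = QueryExecutionFactory.create(QueryFactory.create(q), model);
        ResultSet rs = qe.execSelect();
        while (rs.hasNext()) {
            System.out.println(rs.next().get("r"));
        }
        qe.close();

        // UPDATE: add a new individual, then serialise in the exchange format
        UpdateAction.parseExecute("PREFIX vfdm: <http://example.org/vfdm#> "
                + "INSERT DATA { vfdm:cell1 a vfdm:Resource }", model);
        model.write(System.out, "RDF/XML");
    }
}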
5. DECOUPLED VF MODULES
Currently, many software applications, called VF Decoupled Modules, are under development and will interface the VFM to access the information kept in the data model.
Since the VFM is the centre of the data exchange among modules, it has been conceived with openness in mind, to be able to deliver data to the large variety of tools involved in the factory planning
process. The decoupled modules range from completely new developments, to integrations of existing applications, to off-the-shelf commercial software, characterized by different operating systems and development languages, among them Windows, Linux and Java, C++, Python.
5.1. REQUIREMENTS ORIGINATED BY THE
VF MANAGER
Since the exposed functionality of the VFM is
implemented as a web service (W3C, 2011), all the
modules are required to implement a web service
client according to the WSDL file (Booth and Liu,
2007) describing the published interface.
Additionally, to address the issue that web services
are intrinsically stateless, the VFM has implemented
a few specific functions to keep track of the state of
its clients. Therefore the decoupled modules need to
actively support this mechanism. Finally, the data
received from the VFM are in RDF/XML format
(Beckett, 2004) and the utilization of third party
libraries for the handling of that format is essential.
5.2. ARCHITECTURE
The listed common requirements lead to a similar overall module architecture, which foresees a few predefined components.
The following diagram illustrates the generic
architecture of a VF decoupled module with the
mentioned components and a section of the VFM in
the bottom part of the picture.


Figure 3 - Decoupled VF Module Architecture
5.2.1. VF Connector
Since all the decoupled modules will face common
tasks related to the VFM connection, in order to
avoid repeated development efforts among the VFF
partners, a specific VF Connector for the most
common development languages (C++, Java and
Python) has been implemented.
The VF Connectors take care of the web service
client implementation and the connection state
mechanism.
5.2.2. RDF/XML Library
Each decoupled module will manage different parts
of the data model in different ways. Nevertheless
most of the results coming from the VFM are in
form of RDF/XML streams so that the development
effort will be reduced using already existing third
party libraries conceived for RDF/XML data
manipulation. The following table lists some of the
most used open source libraries.
Table 1 - RDF/XML Libraries
Library                 Language                                    Link
Redland RDF Libraries   C with Python, Perl, PHP, Ruby interfaces   http://librdf.org/
RDFLib                  Python                                      http://www.rdflib.net/
Jena RDF API            Java                                        http://jena.sourceforge.net/
Protégé API             Java                                        http://protege.stanford.edu/
Sesame OpenRDF          Java                                        http://www.openrdf.org

Semantic data handling is obviously more complex than that required for XSD/XML-based models. Most of the VF Decoupled Modules are not semantic applications (Motta and Sabou, 2006). As such, they access the VFDM semantic representation only to extract and modify plain data, as sketched below. Indeed, this is one of the few disadvantages of the proposed semantic approach, and it can be mitigated only by fully exploiting the related technology to ensure the full integration of the four Pillars of VFF.
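A minimal sketch of such plain-data extraction with Jena: the module parses an RDF/XML file received from the VFM and reads literal values without any reasoning. The file name and property URI are illustrative assumptions.

import com.hp.hpl.jena.rdf.model.*;

public class PlainDataExtractor {
    public static void main(String[] args) {
        // Parse an RDF/XML document received from the VFM
        Model m = ModelFactory.createDefaultModel();
        m.read("file:resource-data.rdf"); // hypothetical local copy of the stream

        // Extract plain values of a (hypothetical) name property, ignoring semantics;
        // this assumes the objects of the property are literals
        Property hasName = m.createProperty("http://example.org/vfdm#name");
        StmtIterator it = m.listStatements(null, hasName, (RDFNode) null);
        while (it.hasNext()) {
            Statement s = it.nextStatement();
            System.out.println(s.getSubject() + " -> " + s.getString());
        }
    }
}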
5.2.3. Business Logic
This part of the software is peculiar to each module
and will be developed on top of the mentioned
components.
Nevertheless it is possible to distinguish two
substantially different functionalities; the ad hoc
developed modules will provide a specific logic and
expose it through a graphical user interface while
the adaptation modules will provide the required
interface for a seamless integration of existing
commercial tools.

6. VF MANAGER PROTOTYPE
The Semantic VF Manager has been implemented on the basis of the previous prototype presented in Sacco et al (2011). Even though this version is a complete rewrite of the software, the main guidelines that drove the first prototype release have not changed. The choice of a development based on an open source and cross platform architecture is still valid. This allows the deployment of the tools in real industrial scenarios, where the VF Manager should be integrated into existing legacy intranet architectures. Adopting technologies with proven reliability and cross platform compatibility that are well known to IT personnel grants a smooth integration and successful operation inside most existing network configurations.
We hereby describe, for each of the main components of the architecture shown in Figure-2, the prototype choices of the applied software tools. The Application Server was implemented with Apache HTTP Server, one of the most deployed and reliable HTTP servers (The Apache Software Foundation, 2011a). The Servlet Container was developed with Apache Tomcat (The Apache Software Foundation, 2011b), used in numerous large-scale, mission-critical web applications across a wide range of industries and organizations. Tomcat is paired with the Tomcat connector module to support the integration with the Apache HTTP Server: this connector redirects the information received by the Apache Server to Tomcat, and therefore to the plug-ins. The Versioning Layer was developed by adopting Subversion (Collins-Sussman et al, 2004), an open source version control system widely used in the open source world. Access to the ontology model, i.e. creating, writing and reading models from OWL, has been implemented on top of the Jena framework (Jena, 2011), a proven library that implements Semantic Web technologies in Java.
Figure 4 - Architecture of the VF Manager Prototype (front end: IEP web service, administration and users web pages; managers: Jena, Subversion, Transaction, Users; utilities: projects, working copies, ontology models, logging)
The architecture of the VF Manager in Figure-4 highlights the internal division into three main layers: front end, managers and utilities.
The front end layer consists of three components that expose the functionalities:
- IEP provides a SOAP Web Service to the VF modules.
- Administration lets the administrative personnel manage the users and the open sessions using a web-based interface.
- Users Pages are web pages that a user can access to see their open sessions and to chat with other active users.
The managers layer groups most of the business logic of the VF Manager and is composed of the following components:
- Jena handles the ontology model using the Jena framework;
- Subversion is responsible for data storage and versioning using the SVNKit library (SVNKit, 2011);
- Transaction manages user sessions and commits, coordinating the Jena, Subversion and Users components;
- Users manages the user database, access control and the session lists.
The utilities layer consists of components providing a common infrastructure to handle projects, working copies, ontology models and logging.
All these components have been developed in Java and JSP (Java Server Pages) and deployed as a Tomcat web application.
This prototype is an evolution of the software presented in Sacco et al (2011) and focuses on the implementation of a wider set of features: data versioning, SPARQL query, locking, granularity, dependencies and web access.
The previous prototype focused only on versioning and locking. A key feature introduced in this version is granularity, implemented with the concept of projects to represent the minimal independent units (i.e. sets of files) that can be locked. A project can have one or more dependencies on other projects, allowing the reuse of shared resources. A project can not only declare its dependencies (allowing a module to manually retrieve them) but can also automatically include them to build a
complete ontology model. This model can be queried using the SPARQL language (W3C, 2008a). Finally, in this prototype we have implemented the ability to access the VF Manager using an Internet browser: administrators and registered users can access some functionalities of the VF Manager without needing to use a module application.
Further development of the prototype is targeted at consolidating the robustness of the implemented features and at improving the functionalities that can be accessed from the web interface. This last development will eventually enable a user to access the data in the VF Manager without relying on any module, possibly allowing limited editing capabilities. Another important feature that will be enabled by the web interface is access from mobile devices, such as smartphones and tablets.
7. FLP A DECOUPLED VF MODULE
In order to illustrate the interaction of decoupled
modules with the VFM, one of the several tools
developed for the VFF project has been chosen: the
Factory Layout Planner (FLP) (Ceruti et al, 2010).
The FLP, together with two other applications
(GIOVE Virtual Factory by ITIA-CNR and Visual
Components with a SimX adaptation module), was
already involved in the feasibility demonstration of
the former VFM (Sacco et al, 2011).
7.1. FUNCTIONALITY
FLP is a client/server application that enables the
collaborative development of a factory layout.
The main functionality of the FLP consists of:
3D visual editing of the layout
3D visual editing of the building
running Discrete Events Simulation (DES)
Figure 5 - FLP Main Window
The application is characterized by a two-level
architecture, with a fat client dealing with complex
3D models and real-time requirements, and a server
which acts as a synchronization manager and as a
VFM web client.
7.2. IMPLEMENTATION
FLP is an application written in Java. For handling
the data received from the VFM in RDF/XML
format, the third party library Jena (Jena, 2011) is
used.
7.3. ACCESSED DATA
FLP will interact with different areas of the VFDM
to exploit its functionality:
the multisite information - the production plants
of the enterprise [read-write]
the building data [read-write]
the resource templates - for every resource
template (or type) a set of information (icon,
VRML file(s) for the 3D representation, properties
and further data) [read-only]
the layout data (which contains for example the
instantiated resources, their position and their
property values) [read-write]
the production plans and processes data to feed
the DES engine [read-only]
the results of the DES simulation [read-write]
7.4. SAMPLE USE CASES
The prototype of the VFM hosts a partial Data
Model and enables the FLP to access data related to
the layout of a factory, both resource types and
instantiated resources. The FLP use cases treated
here are related to those data.
7.4.1. Caching resource types (read only)
The FLP composes a layout by creating and placing
into the 3D window instances of resources from the
resource catalogue. The FLP itself does not modify
the resource templates, which consist of resource
type description data and typically very large 3D
model files (those data are not subject to frequent
changes). Given those assumptions, for performance
purposes, the FLP maintains a local cache of the
data, checking from time to time whether a
synchronization of the local cache is required (a
sketch of such a check follows the list below).
The steps to accomplish this task consist of:
select the current revision of the sub-project
Resource Library
download the ontology file(s) containing the
individuals
load the file(s) into a Jena ontology model in
order to access the content
retrieve all the 3D files from the resource
catalogue folder and store them locally
7.4.2. Layout planning (read-write)
The FLP user selects the current version of a layout
with the purpose of modifying it. The FLP must
signal its intention to the VFM, so that other users
do not modify the same data and the Data Model
remains consistent. A precondition for this use case
is that the local catalogue of resources has been
successfully synchronized for the selected project.
The additional steps (compared with use case
7.4.1) required to accomplish this task consist of the
following (a hypothetical client-side view of this
workflow is sketched after the list):
start a transaction on the selected project
display all the instances of resources defined in
the project in an editable 3D view of the layout
apply the modifications to the Jena ontology
model and modify the local project file(s)
send the modified files back to the VFM
make the changes permanent and available to all
other VFF users by committing the open
transaction
8. CONCLUSIONS
The new approach driven by the Semantic VFDM
and enabled by the Semantic VFM represents a step
forward in the improvement of the Virtual Factory
Framework and, in particular, towards the target of a
fully integrated architecture of all the Pillars. Data
individuals and their semantics now come from the
same coherent source and can be viewed from
different perspectives, according to the needs of the
accessing clients (Knowledge Manager or
Decoupled Modules).
A first prototype of the Semantic VFM, exploring
the potentials and the issues related to this new
approach, has been presented. In particular, the
applied Semantic Web technologies represent the
cornerstone for obtaining a framework where the
different stakeholders can effectively contribute in a
harmonized way to the definition of the virtual
factory along all the phases of its lifecycle.
In the coming months, improved versions of the
VFM will implement the defined architecture and
fulfil the expected functionality. An increasing
number of Decoupled Modules will interface with
the VFM and fill the VF Data Repository with
individuals according to the developed Virtual
Factory Data Model, thus validating the new
approach.
9. ACKNOWLEDGMENTS
The research reported in this paper has received
funding from the European Union Seventh
Framework Programme (FP7/2007-2013) under
grant agreement No: NMP2 2010-228595, Virtual
Factory Framework (VFF).
REFERENCES
Beckett, D., RDF/XML Syntax Specification
(Revised), W3C, 2004, Retrieved: 15.06.2011,
<http://www.w3.org/TR/rdf-syntax-grammar/>
Booth, D. and Liu, C.K., Web Services Description
Language (WSDL) Version 2.0 Part 0: Primer, W3C,
2007, Retrieved: 15.06.2011,
<http://www.w3.org/TR/wsdl20-primer/>
Bracht, U., Masurat, T., "The digital factory between
vision and reality", Computers in Industry, Volume
56, Issue 4, 2005, pp. 325-333
Carroll, J.J., Dickinson, I., Dollin, C., Seaborne, A.,
Wilkinson, K. and Reynolds, D., Jena: Implementing
the Semantic Web Recommendations, Proceedings of
the 13th international World Wide Web conference,
2003, pp 74-83
Ceruti, I.F., Dal Maso, G., Ghielmini, G., Pedrazzoli, P.
and Rovere, D., Factory Layout Planner, ICE - 16th
International Conference on Concurrent Enterprising,
2010, Lugano, Switzerland
Chryssolouris, G., Mavrikios, D., Papakostas, N.,
Mourtzis, D., Michalos, G., Georgoulias, K., "Digital
manufacturing: history, perspectives, and outlook",
Proceedings of the Institution of Mechanical
Engineers Part B: Journal of Engineering
Manufacture, Volume 223, No. 5, 2008, pp. 451-462
Colledani, M., Terkaj, W., Tolio, T. and Tomasella, M.
Development of a Conceptual Reference Framework
to manage manufacturing knowledge related to
Products, Processes and Production Systems, In:
Bernard A, Tichkiewitch S (eds), Methods and Tools
for Effective Knowledge Life-Cycle-Management,
Springer, 2008, pp 259-284.
Collins-Sussman, B., Fitzpatrick, B.W. and Pilato, C.M.,
Version Control with Subversion, 1st Edition,
O'Reilly Media, Sebastopol, CA, 2004, p 320
Colombetti, M., Ingegneria della conoscenza: modelli
semantici, 2010-11 Edition, Facoltà di Ingegneria
dell'Informazione, Politecnico di Milano, Italy, 2011,
p 42
Huang, G.Q., Simpson, T.W. and Pine II, B.J., The power of
product platforms in mass customization,
International Journal of Mass Customisation, Vol. 1,
No. 1, 2005, pp 1-13
ISA-95, ISA-95: the international standard for the
integration of enterprise and control systems, ISA-
95.com, Retrieved: 15.06.2011, <http://www.isa-
95.com/>
Jena, Jena - A Semantic Web Framework for Java,
SourceForge.com, 2011, Retrieved: 15.06.2011,
<http://www.openjena.org/>
IFC2x3, IFC2x3 Release, buildingSmart, 2006,
Retrieved: 15.06.2011, <http://buildingsmart-
tech.org/specifications/ifc-releases/ifc2x3-release>
McBride, B., An Introduction to RDF and the Jena
RDF API, 2010, Retrieved: 15.06.2011,
<http://jena.sourceforge.net/tutorial/RDF_API/>
McCarthy, P., Search RDF data with SPARQL, IBM,
developerWorks, 2005, Retrieved: 15.06.2011,
<http://www.ibm.com/developerworks/xml/library/j-
sparql/>
Motta, E. and Sabou, M., Next Generation Semantic
Web Applications, 1st Asian Semantic Web
Conference (ASWC), 2006, Beijing, China
Sacco, M., Dal Maso, G., Milella, F., Pedrazzoli, P.,
Rovere, D. and Terkaj, W., Virtual Factory
Manager, HCI International, 2011, Orlando, USA
Sacco, M., Pedrazzoli, P. and Terkaj, W., VFF: Virtual
Factory Framework, ICE - 16th International
Conference on Concurrent Enterprising, 2010,
Lugano, Switzerland
SVNKit, [Sub]Versioning for Java, TMate Software,
2011, Retrieved: 15.06.2011, <http://svnkit.com/>
The Apache Software Foundation, Apache HTTP Server
Project, The Apache Software Foundation, 2011,
Retrieved: 15.06.2011, <http://httpd.apache.org/>
The Apache Software Foundation, Apache Tomcat 6.0,
The Apache Software Foundation, 2011, Retrieved:
15.06.2011, <http://tomcat.apache.org/tomcat-6.0-
doc/index.html>
Terkaj, W., Tolio, T., Valente, A., Designing
Manufacturing Flexibility in Dynamic Production
Contexts, In: Tolio, T. (ed) Design of Flexible
Production Systems. Springer, 2009, pp 1-18
Tolio, T., Ceglarek, D., ElMaraghy, H.A., Fischer, A.,
Hu, S., Laperrière, L., Newman, S., Váncza, J.,
SPECIES - Co-evolution of Products, Processes and
Production Systems, CIRP Annals - Manufacturing
Technology 59 (2), 2010, pp 672-693
Valente, A., Carpanzano, E., Nassehi, A. and Newman,
S. T., A STEP compliant knowledge based schema to
support shop-floor adaptive automation in dynamic
manufacturing environments, CIRP Annals -
Manufacturing Technology 59 (1), 2010, pp 441-444
W3C, OWL Web Ontology Language - Use Cases and
Requirements, W3C, 2004a, Retrieved: 15.06.2011,
<http://www.w3.org/TR/webont-req/#onto-def>
W3C, OWL Web Ontology Language - Reference,
W3C, 2004b, Retrieved: 15.06.2011,
<http://www.w3.org/TR/owl-ref/>
W3C, SPARQL Query Language for RDF, W3C,
2008a, Retrieved: 15.06.2011,
<http://www.w3.org/TR/rdf-sparql-query/>
W3C, SPARQL Update, W3C, 2008b, Retrieved:
15.06.2011,
<http://www.w3.org/Submission/SPARQL-Update/>
W3C, Web Services Activity, W3C, 2011, Retrieved:
15.06.2011, <http://www.w3.org/2002/ws/>
W3C, XML Schema Part 1: Structures Second Edition,
W3C, 2004c, Retrieved: 15.06.2011,
<http://www.w3.org/TR/xmlschema-1/>
ISO 14649-10:2004 Industrial automation systems and
integration -- Physical device control -- Data model
for computerized numerical controllers -- Part 10:
General process data
AGILE MANUFACTURING SYSTEMS WITH FLEXIBLE ASSEMBLY
PROCESSES
Sebastian Dransfeld
SINTEF Raufoss Manufacturing AS
sebastian.dransfeld@sintef.no
Kristian Martinsen
Gjøvik University College
kristian.martinsen@hig.no
Håkon Raabe
SINTEF Bedriftsutvikling
hakon.raabe@sintef.no
ABSTRACT
Traditionally, automated manufacturing has required high volumes and large batches. New
technologies for flexible assembly lower the volume requirements and increase the possibilities for
product variation. The effectiveness of flexible assembly does, however, place new demands on -
but also open new opportunities for - the manufacturing organisation, the manufacturing logistics
chain and the control of material flow. This paper describes case studies in two Norwegian
manufacturing companies. One is a furniture manufacturer for the consumer market; the other is an
automotive 1st tier supplier. Both are faced with increasing customisation of products, with
increased variation and decreased volumes of each individual product. The focus of the research is
how the manufacturing organisation and the internal material flow need to adapt to gain from the
investment in flexible automation solutions.
KEYWORDS
Agile Manufacturing, Flexible Assembly, Manufacturing Organisation
1. INTRODUCTION
The concept of Agile manufacturing is still being
refined by the research community, but according to
Oleson (1998) agility is understood as the ability to
respond effectively to unexpected or rapidly
changing events. Wiendahl et al (2007) define
Agile manufacturing as the strategic ability of an
entire company to open up new markets, to develop
the requisite products and services, and to build up
the necessary manufacturing capacity. Yusuf et al
(1999) compiled a list of the main points in the
definitions of agile manufacturing, where high
quality and highly customised products with high
information and value-adding content,
responsiveness to change and uncertainty, social
and environmental issues, and the synthesis of
diverse technologies are some of the topics on the list.
An agile manufacturing system must be able to
respond quickly to changes. The assembly process
is often used as an enabler to create mass-customised
products, and it is a key process for many
companies to achieve agility. The aim is typically to
co-locate the main T-point and the decoupling-point
intermediate store. Components are typically
purchased or manufactured to an intermediate stock
by using Just-in-Time/pull principles. The finished
product is assembled to order to deliver the
diversity the customer requires.
To achieve the required flexibility in assembly,
the assembly process is often realised by utilising
manual labour, but demands to improve efficiency
and quality lead companies to find automated
solutions or combinations of manual work and
automation (Consiglio et al, 2007; Krüger et al,
2009). Conventional automated assembly systems
do not handle frequent change, unpredictable
events, and disturbances.
During the last two decades, however, novel
concepts for highly flexible and reconfigurable
assembly cells have been proposed and realised in
research labs. Concepts such as plug and produce
(Arai et al, 2000) lead to increased flexibility and
reconfigurability, but few of these have been
commercialised and made available as standardised
systems on the shop floor.
A reason why these novel concepts have not been
implemented on a larger scale might be that it is
difficult to standardise assembly operations, so even
though standardised assembly components exist,
like vision systems and robots, the complete
assembly cell will be specialised and complex.
Specialised, complex equipment will always have a
high entry barrier for use and a large investment cost,
and it will often take a long time before a
manufacturing organisation can utilise its potential.
This requires a focus on how to create user-friendly
systems. A car, for example, is a highly
complex device, but its user interface is extremely
simple. Better understanding of complex equipment
can also be supported by improving feedback and
adding learning abilities.
Even if a technological solution has been found
for an assembly challenge, the resulting system
must fit into the total manufacturing system.
Requirements for operator presence and internal
material flow must be fulfilled, and the
manufacturing organisation must be able to utilise
the potential of the automated assembly system and
work around the limitations. Even if modern
assembly technologies are very advanced, it will
never have all the capabilities of a human operator,
but it will also have other qualities which the human
is incapable of.
The missing links between advanced assembly
technology and successful industrial
implementation are tools to improve the interaction
between the operator and the equipment, tools to
assess the right level of automation and to make the
automated solution fit into the manufacturing
organisation, and tools to help manufacturing
organisations utilise the potential of advanced
assembly technology.
This paper presents a case study of two
manufacturing companies who have introduced
flexible and reconfigurable assembly systems which
will completely replace manual operations.
The paper is sectioned into three parts. The first
part of the paper will give a short introduction to the
state-of-the-art in assembly technology. The second
part will present the two case studies with their
current manufacturing system and challenges, and
the assembly system which is proposed. Section
three will sum up the results from the case study
and present important factors to consider when
implementing flexible assembly systems, and
propose areas for further research to make current
technology more available for industrial use.
2. RESEARCH FOCUS AND
METHODOLOGY
This paper is a case study of two mass producing
companies which deliver mass customised products.
Another part of the research project has examined
the core assembly processes, and proposed, tested,
and developed prototypes to prove that it is possible
to automate these. The research in this paper
focuses on how these solutions can be implemented
in these companies' manufacturing systems and
production processes.
The information for the study has been collected
by following the research on the new automated
assembly systems, and interviewing people at
various levels in the manufacturing organisations.
Introducing complex technology to a
manufacturing organisation will fail if the
organisation is not prepared, if the technology does
not live up to expectations, or if the system lacks
user-friendliness.
3. STATE-OF-THE-ART IN FLEXIBLE
ASSEMBLY TECHNOLOGY
The design and implementation of assembly systems
have evolved over half a century and have been
influenced by many different research areas.
Especially the automotive industry has pushed
flexibility and adaptivity for automated assembly
solutions (Michalos et al, 2010). Key challenges have
been approached from a variety of angles, and not
least there is no common agreement on which
implementation approach is the correct one to fulfil
the dream of an assembly system with human-like
flexibility and machine-like dexterity and accuracy.
The central component of a flexible assembly
system is the industrial robot. Industrial robots are
continuously improved with technology like
automated calibration (Arai et al, 2002), improved
absolute accuracy (Watanabe et al, 2005), and
lightweight design with intrinsic safety and
similarities to humans (Albu-Schäffer et al, 2007)
with kinematic redundancy, compliance, and two-
armed setups.
Grippers are another central component under
heavy development. Most gripping principles, like
contact, needle, vacuum, etc., have been developed,
but the flexibility of grippers is still evolving.
Grippers resembling human hands (Butterfass et al,
2001), with several degrees of freedom and
integrated sensors, are available, but still carry a
premium price tag.
Sensors are the key element for assembly systems
to interact with their surroundings. Sensors are
essential for part recognition, part joining, and
quality control. The most common sensors resemble
human capabilities, like vision, contact, and force
and torque sensors (Santochi and Dini, 1998).
A robot equipped with grippers and sensors is
capable of executing assembly tasks; the missing
piece of physical hardware is the feeder. The most
common feeder principles are vibratory bowl
feeders, elevator feeders, belt feeders, and drum
feeders (Boothroyd, 2005). Flexible feeders which
present parts on a flat surface using vision sensors
for part location have become common and are
commercialised (Zenger et al, 1984). These feeders
use common feed principles like belts, flipping, and
vibration to move and reorder parts. Another
approach under heavy research is bin picking,
where the feeder is skipped, and the robot picks
parts directly from a bin (Kristensen et al, 2001).
By themselves these physical components are not
capable of doing anything. They must be taught and
controlled.
To perform a task, a robot needs to execute
movements and interact with the environment. For
assembly robots the interaction is mostly gripping,
and the movements can be for transport or guiding.
The main part of a robot program can be created
offline on a computer and simulated to check for
functionality, or it can be created online in the
assembly system. Offline capabilities have been
available for several years, but they lack accuracy in
representing the real world, so online adjustments
are often needed. To improve online capabilities,
modern robots can be equipped with force feedback
systems, which make it simpler for operators to
physically move the robot through the required
trajectories (Pires et al, 2009), or vision systems,
which make it possible to learn trajectories by
watching an operator (Fujita, 1988).
Robots can also be used in cooperation with
operators, either by assisting them with heavy
lifting or in arrangements where a robot does some
parts of an assembly and the operator the rest
(Krüger et al, 2006).
All components in an assembly cell must be
controlled. Current research focuses on the
distribution of control. By distributing control, only
parts of an assembly system will be affected by
minor changes. Holonic Manufacturing Systems is
the common term for such distributed systems,
which form the software side of creating Plug and
Produce assembly systems (Arai et al, 2000).
To increase quality and robustness, and improve
operator understanding of complex assembly
systems, feedback and monitoring systems are
crucial. Both short term information to understand
the current status of an assembly system, and long
term information to use for improvement analysis
must be available.
By distributing control and integrating closed
loop feedback and monitoring systems in assembly
systems, autonomy and self-X capabilities can be
introduced. X can be capabilities like calibration,
adjustment, optimisation, etc. (Scholz-Reiter and
Freitag, 2007).
The technology for creating advanced and
complex assembly systems is available. The task is
to find the correct technology which creates a
feasible system, at the correct cost, and with the
required capability. It is not feasible to create a
complete system at once; it has to be built up
gradually to keep complexity and investments under
control. This will be a long term iterative process,
and it is important that the manufacturing
organisation and strategy is committed for a long
term.
4. CASE STUDIES
4.1. EKORNES
4.1.1 Case introduction
Ekornes is the largest furniture manufacturer in the
Nordic region, with a vision to be one of the world's
most attractive suppliers of ergonomically designed
furniture for the home. Ekornes has almost all of its
manufacturing operations in Norway. It is a very
profitable company which has proved able to
counter the operator costs in a high-cost country
with technology development and automation, as
well as premium prices based on very strong brands.
The main products are high-quality reclining chairs
and sofas of the Stressless brand. All finished
products are manufactured to customer orders.
A modular design of the recliners includes a very
limited range of internal steel frames, foam-
moulded cushions, and swivel bases. This has
facilitated batch manufacturing of a number of parts
and modules to stock. The manufacturing of the
seating cover however, is not initialised until a
customer order is placed. Due to the complexity of
handling limp materials - in this case mostly
furniture hide (soft leather) parts and some textile
parts - the manufacturing of seating covers has been
kept mostly manual. The seating cover lead time
varies, but is typically below two weeks. New
models (or changes) are introduced every year, and
each seating cover consists of several (~20) parts
which must be routed through manufacturing
together.
A typical recliner consists of steel frames inside
moulded foam plastic cushions for the seat and back
body of the chair. Armrests and a typical
accompanying footstool are similar, but have
simpler frames. All these main modules have covers
for surface finish and seating comfort. The cover
modules are frequently layered with one or more
fibre layers attached to them. The swivel base is
typically of laminated wood, and some high-end
models even have leather covers and steel parts on
the base. An essential seating comfort feature is the
possibility to adjust the seat, back, and headrest
position.
4.1.2 Current manufacturing operation
The process to create a seating cover is first to cut
hide and fibre to create the core parts of the cover.
The cutting of hide is performed on manually
operated punching machines, or in digital cutters.
The quality inspection of the hide, and the manual
or digital placement of the cover parts - while still
maximising hide utilisation - are vital elements in
the quality of the finished product. The fibre is then
attached to the hide parts on traditional industrial
sewing machines. Some parts are sewn in special
sewing machines with a gathering seam, in order to
improve fit of the seating cover to the chair. These
subassemblies are then sewn together in several
steps to create a complete seating cover. This also
includes some parts of fabric (typically a technical
textile). In the final assembly process, the seating
covers are drawn over the cushions and chair frame.
The cover is the part of the chair with the highest
product variety. Thus, the process times and process
complexity vary a lot. The overall planning of the
factory is based on running a certain number of
standard seating units per day - across departments.
In the cover manufacturing, this creates a need to
schedule each day as a mixture of covers with short,
standard and long process or cycle times, in order to
achieve a fairly stable capacity factor. A large
variation in process complexity and cycle times is
traditionally an argument in favour of manual and
thus flexible operations. However, Ekornes also
utilises a piece-rate remuneration system for its
operators. This has undisputedly contributed to
industry leading and increasing productivity. On the
other hand, this stimulates a certain specialisation
among the operators. They will naturally perform
better on operations they know well, and will try to
compose small batches wherever possible. This
tendency towards operator specialisation also
supports the need to have a general mixture of cover
models each day. Even though the overall principle
in the cover production is one-piece flow based on
customer orders, the general production pattern is
typically 2-5 covers for the same chair model
grouped together in one production order. It should
be noted that customers - furniture dealers - often
place orders for more than one chair of the same
kind. The technology in the sewing department also
drives process specialisation to a certain extent.
Some of the cover making sewing processes
requires special sewing machines with features such
as gathering seam.
4.1.3 Current product flow
The production flow in the cover making process
for recliners in the main factory may be described as
follows: Parts for a given production order (one or a
few covers of the same model) are punched in the
manual punching machines or cut in the digital
cutters. The parts are then manually loaded into
trolleys - one for each production order - that are
moved around the sewing department on an
overhead conveyor.
The sewing department is all on one floor, and
seemingly one large department. It consists,
however, of a number of sub-stations and sub-
departments. The trolleys are first directed to sub-
stations performing so-called pre-sewing operations.
The typical example is the attachment of fibre to the
cover hide parts, often combined with a gathering
seam. The cover parts are manually lifted out of the
trolley, and sewn together with more standardised
material, like fibre parts, that are picked from buffer
stocks at the sub-station. Finished sub-assemblies
are then put back in the trolley. The trolley is then
directed to the sub-department for the actual model
range. There are approximately 10 such mini
sewing departments, each containing 20-25
manually operated sewing machines, and typically
handling 2-3 cover models. The cover parts and
sub-assemblies are again lifted out of the trolley,
and the sewing operations, constituting the major
share of the sewing process time, are then
performed by a sewing operator. The finished
covers are put back in the trolley, which moves over
to the assembly department. Here the covers are
unloaded and reloaded on one-piece flow trolleys
together with the other parts and modules for the
chair, and moved to finishing assembly.
A typical process time for the sewing of a
complete cover is less than one hour, but the
throughput time in the sewing department is
typically 2-4 days. The work-in-process buffer
stocks are in front of the sub-stations and sub-
departments. This enables the production managers
and organisers to balance the department load by
mixing orders with short and long process times, as
well as providing the operators an opportunity to
pick production orders that suit their competencies,
and preferably form batches.
The core assembly process in the production of
seating covers is the sewing. Sewing is important to
create the right quality and look and feel of the
chair. Other processes have been tested, but sewing
provides the best quality and is viewed as vital for
customer satisfaction.
4.1.4 Proposed flexible assembly solution
Why then should the sewing be automated? Why is
a hybrid cell, where the operator sews and part
transport is automated, not sufficient? At Ekornes,
the argument
is manual process time, and as mentioned earlier the
major share of the time in the sewing process is at
the sewing machine. Through automating steel and
wood part manufacturing, painting, foam moulding,
and other standardised components manufacturing,
total manual process time for a typical recliner has
been reduced from 5 to less than 3 hours. Sewing
operations, however, still account for close to 1
hour, as they did 20 years ago. The target of the
automated sewing project at Ekornes is to reduce
manual sewing time by 50%. This will be achieved
through automating the sewing processes best suited
for automation. The first automated sewing
processes are based on special machines sewing
special, fairly standardised cover parts. The
automated sewing cell discussed below is much
more flexible, and performs the fibre attachment to
different cover hide parts, also including gathering
seam. Gathering seam requires a special sewing
machine, and the material cannot be fixated when
fed through the sewing machine.
The automated sewing cell consists of a robot, a
sewing machine, several sensors, and control units.
Each physical unit contains its own controller to
simplify exchange, and to prepare for future sensor
development. The sensors are essential to make
automated sewing possible. Two or more
components are stacked together, and the robot
guides them through the sewing machine. Because
of sharp corners, the sewing process is not
necessarily continuous for one component, and
because of complex shapes, one component might
be relocated and regripped. This setup will behave
similarly to how an operator handles the process, and
has been deemed the only viable process to handle
the variation of parts and the gathering seam.
Since the components which are to be sewn are
limp or non-rigid, there are several sensors which
supervise the process which is then continuously
adjusted. The sensors are responsible for detecting
how the components are located, how they move
through the sewing process, and to detect whether
the sewing operation progress.
Before the sewing process begins, the different
components must be located, stacked and
transported to an initial position where the assembly
cell can pick up the stack and move it to the sewing
machine. The components vary in size - dimensions
from 10 centimetres up to 1 metre - and shape -
width/height ratio from 1:1 to 1:10, and variations
of circularity. By designing a flexible gripping
system, a limited set of tools can handle all parts.
By including a tool change system, one assembly
cell can handle all variants.
The system includes monitoring of all critical
input and output parameters to improve feedback,
and to improve understanding of how the assembly
system works. Monitoring information will also be
available for long term analysis. As mentioned
earlier, the sewing process must handle limp
materials and it is not feasible to fixate the parts.
This results in a process which is not repeatable,
thus the need for continuous sensor supervision and
process regulation. The operator - who is close to
the process - must be able to understand and tune
the process. In a variable process like sewing, it is
the experience of the operator which creates optimal
process parameters.
As the process is not repeatable, it is impossible
to handle all problems which may occur. As the
furniture hide is visible to the customer, it is crucial
that the process does not create any visible features
on any hide. If a piece of hide is damaged, it is a
time consuming operation to replace it, as it must fit
in colour and finish with the other pieces for a chair.
So it is preferred to stop the sewing process early,
and hand the part to an operator for repair.
It is important to have tools to improve the process,
but with the described assembly task, the teaching of
new products will be just as crucial. There are many
factors which are important to create a routine to
sew one part: sewing speed, where to gather, how
much to gather, curvature, entry and exit points on
the part, entry and exit at the sewing machine, etc.
Some of these factors can be planned ahead, but
most must be defined by experienced personnel.
The proposed solution to create a program is to
draw a path on a picture of the part, and then split the
path into segments. Different factors can then be set
for each segment. The operator can then adjust all of
these based on visual and sensor feedback to create
an optimal process.
4.1.5 Integration of assembly in
manufacturing organisation
The theoretical performance of the proposed
assembly system makes it a feasible solution. The
initial investment of a robot, a sewing machine,
cameras, sensors, and grippers is relatively small. It
is also possible to extend the automatic material
flow chain, to increase operation time without
human interaction.
In theory, one sewing cell can process all the different
components, as the sewing principle is the same. In
practice this is not feasible, as it increases the
need for equipment like grippers and feeders.
To handle the total production volume, several
sewing cells will be needed. Since the principle for
all cells will be equal, it will be simple to move
products, programs, grippers etc. between cells. So
even though some specialisation will be needed for
one cell, the cells will be more flexible than
operators because their specialisation can change.
Factors like variation in part size, sewing time etc.
will affect whether a cell might specialise in one
part, or maybe handle all parts for a complete chair.
Another important factor will be how components
for one chair can be routed through manufacturing
as fast as possible, and without losing track of the
components which belong together.
As the current sewing operations have little
automation, it is important not to create a system
which is too large and complex. The operators need
to be gradually introduced to the automated
processes. The proposed flexible assembly solution
handles a certain degree of variation, but as how to
handle a part to produce the correct seam is
workmanship, teaching and adjustment of the
sewing parameters must be made available to the
operators. Precise and consistent feedback is also
needed to make operators understand how different
actions affect the process.
By keeping some material handling operations
manual, the operators will be kept busy, and the
total complexity of the system will be reduced
because the processes will be decoupled. A buffer
will be located at each sewing cell, so one operator
can handle several sewing cells, and can work
asynchronously.
Depending on the level of automation, the
operator can do stacking and feed each component,
feed a stack of components, or fill a feeding device.
Traditional feeders do not work well with limp
materials, so some preparation must be done before
a handling robot can take over. Stacks are relatively
stable and can be transported on conveyors. The
cycle time for sewing one part ranges from 10
seconds to one minute, so handling will not be a
bottleneck.
As mentioned earlier it is difficult to create a
process without errors, so a repair operation is
needed. As the products are made to order, the
repair has to be done in connection to the process,
the parts for repair cannot be stored and fixed in
batches.
4.2. KONGSBERG AUTOMOTIVE
4.2.1 Case introduction
Kongsberg Automotive (KA) is a worldwide 1st tier
supplier for the automotive industry. Their product
line includes systems for seat comfort, clutch
actuation, cable actuation, gear shifters,
transmission control systems, stabilizing rods,
couplings, electronic engine controls, speciality
hoses, tubes and fittings. The case product line in
this paper is couplings. Almost all coupling
manufacturing operations are located in Norway,
and the company has been able to counter the
operator costs by focusing on high-technology
products and automated manufacturing.
The primary market for couplings is air brake
systems for commercial vehicles. Couplings are
designed for a variety of tube dimensions, with a
variety of interfaces and interface dimensions.
Currently the programme consists of 100-150
unique variants. The product specific volume range
is from ~10,000 to ~4,000,000, so most products are
manufactured to stock.
To increase KA's share of the value chain for air
brake systems, they have introduced complete
manifolds with fitted couplings, which reduce their
customers' requirement for assembly. Manifolds
consist of a housing plate and a set of couplings.
The product specific volume for each manifold will
be much lower than the component volumes, so
most manifolds must be made to order.
Since air brakes are a safety-critical system in a
commercial vehicle, large product changes are rare.
However, small continuous changes to improve
product performance, both for functionality and for
internal processes, do occur.
A new product programme was introduced some
years ago, where one innovative product was a
coupling with a washer based port side. This
coupling can be mounted into a port by pressing,
whilst traditionally couplings had to be screwed.
This simplifies assembly, and creates new
possibilities for manifold design and assembly.
By utilising the washer based coupling, no taps
are needed in the ports of the manifold house, so the
housing plates can be created by injection
moulding. By designing injection moulding tools
with inserts - and combined with the large variety
of couplings - it is possible to create an endless
variety of products.
4.2.2 Current manufacturing operation
Today couplings are produced in high speed
dedicated manufacturing lines. Most components
for the couplings are made in-house from composite
granulate and extruded brass rods. Because of short
cycle times - below 2 seconds - and small product
sizes - from 5 to 50 millimetres - components are
placed in large bins. Components are fed into
specialised high-speed automated assembly cells by
a mixture of dedicated and flexible feeders and
placed in boxes or blisters.
With a yearly volume of ~100 million couplings,
this is the sensible solution. The initial volume for
manifolds is estimated at some hundred thousand.
Because of the large theoretical variety of
assemblies, and the relatively low total and product
specific volume, a different solution had to be
developed for the assembly of manifolds.
The current manual assembly of manifolds is
relatively simple. One operator takes one manifold,
the corresponding couplings, and a fixture and
presses one coupling at a time into the manifold. All
couplings must be checked for correct positioning,
and the needed pressure force is so high that safety
equipment is needed. This setup is duplicated to
achieve the needed production volume.
As the assembly operations are relatively simple,
the cycle time will be low. Since all ports on a
manifold might be equal, but require different
fittings, the probability of failure is quite large,
especially as the volume for one specific assembly
reaches one. These concerns are also a driving force
behind choosing an automated solution.
4.2.3 Proposed flexible assembly solution
Why does the assembly operation need to be
automated? At KA, the argument is short supply
chains and high quality requirements. All
components are produced at a production facility in
Norway, utilising highly automated assembly lines.
These provide high efficiency, high quality, and are
the most profitable manufacturing solution. Manual
assembly would have to be located in a low-cost
country, dividing manufacturing geographically and
increasing throughput time. As the air brake system is safety
critical, product and process control is essential.
The core assembly process for assembly of one
manifold is simple; it is the large variation of
parameters which is the driving force behind a
flexible assembly solution. And since the product
specific volume is low, all variants has to be
included in the assembly cell to create a large
enough total volume.
The proposed automated solution consists of two
robots, equipped with sensors, grippers and tool
change system. One robot is responsible for picking
couplings and placing them into the manifold. The
other robot is responsible for picking the manifold,
presenting it for coupling insertion, and holding the
manifold in the press. No fixture is needed, as the
robot can hold the part while pressing. The sensors
are responsible for part location, support for part
insertion, and process and quality control.
A two-robot setup like this will behave like an
operator who can simultaneously pick and press
parts, so the potential for an efficient and profitable
solution is present.
There are three main tasks which require
flexibility: location and gripping of parts, joining of
parts and feeding of parts. And as research has
shown (Krger et al, 2009), feeding and gripping
can be the costly and time consuming part of an
assembly system.
For flexible introduction of parts into the
assembly cell, vision is chosen. By using vision for
location of parts, parts can be introduced in a
variety of ways without the need to re-teach
locations. Parts can be introduced in kits, blisters or
flexible feeders, and it will not matter for the
assembly process. It will only require parts in some
allowed poses and within the field of vision.
To simplify gripping, and reduce the amount of
flexibility needed, design for automated assembly
has been utilised. The different couplings have
similar features, with different sizes. The manifolds
can be divided into similar groups which all receive
features with equal position and size, unrelated to
performance of the product. This limits the need for
flexible grippers, and a set of grippers and a tool
change system can handle the product variation.
Joining of the coupling and manifold is always
done at the same position. This simplifies
generation of trajectories, and improves the
possibilities for optimisation. Theoretical positions
for each joining operation can be found by
extracting information from CAD models, but because of
the inaccuracies in the robot's positioning system,
inaccuracies in gripping, and inaccuracies in the
injection moulding process, inline optimisation of
positions is required. To support this process, the
robots are equipped with force sensors. The force
sensors can be used to improve initial positions, and
be used during operation to supervise whether the
assembly operation was completed successfully.
Due to demands for low cycle time - below 3
seconds for one pick-and-place cycle - there is no
time for force-controlled insertion.
One primary obstacle in using vision is the
calibration between vision coordinates and robot
coordinates. The cell can self-calibrate by using a
rough digital cell description, known calibration
objects, and force feedback.
To improve teaching time and performance of the
vision system, couplings and manifold houses are
designed with unique features. Also, by utilising
similar features of different components, vision
analysis for location can be shared through parameter sets.
Force feedback is also used to prevent collisions,
which can occur if parts are picked incorrectly or
other failure situations.
All sensor information is monitored and provided
as feedback to the operators. This improves
understanding, thus improves the ability for
operators to tune and optimise the system. Often
operators see that improvements can be made, but
they do not understand why the assembly system
acts as it does, and which parameters to change to
improve the situation.
Since the brake system is a safety critical
component, full traceability of all components is
required, especially as the system will go through
continuous changes for improvement.
4.2.4 Integration of assembly in
manufacturing organisation
The theoretical performance of the proposed
assembly system makes it a feasible solution. The
initial investment of two robots, cameras, force
sensors, and grippers is relatively small. It is also
possible to extend the automatic material flow
chain, to increase operation time without human
interaction.
Some basic efficiency performance indicators
have been proposed to make the total OEE
acceptable: an introduction time of less than one
hour for an unknown manifold, less than 15 minutes
for an unknown coupling, unmanned production for 8
hours, and zero defects. By utilising the flexible
setup, the possibilities of offline programming of
robots and vision, and the integrated sensors, this is
within target.
The assembly process in itself is not very
difficult, and the manufacturing organisation is used
to handling automated manufacturing equipment. The
key to achieve a successful implementation is that
the operators can understand and improve the
system continuously. Precise and consistent
feedback is also needed to make operators
understand how different actions affect the process
and monitoring and traceability will secure the
quality of the product.
Even though the system consists of a set of
advanced technological solutions, the cell is still not
as flexible as an operator. An operator can pick a set
of components, a fixture and start assembling. By
making it possible to feed parts into the assembly
cell by different methods (kits, blisters, feeders) it is
possible to introduce only the needed parts for the
current batch: kits for small batches, flexible
feeding for larger batches.
By introducing sensors into the cell, the assembly
operation will be continuously supervised. Data can
be used for optimisations, both online and offline. A
lot of information can be used to facilitate offline
programming; preliminary vision analysis can be
created with a set of pictures and a simulation.
By keeping some handling operations manual,
operators are kept busy, and the assembly system
can be decoupled from the rest of the manufacturing
system.
4.3 Case conclusions
The main challenge of the automated solutions
presented in the case studies is the need for
flexibility. This results in an advanced automated
solution, which again requires empowered
operators. By utilising standard assembly
equipment, focusing initially on the core assembly
process and leaving some material handling tasks to
the operator, the initial investments in the assembly
cells have been kept low.
It is important to let the operator work
asynchronous to the assembly cell, so that the
operator has time to interact with the system to get a
better understanding of the process. The monitoring
and feedback system will also improve learning of
the operator, and increase the possibilities for the
operator to improve system performance.
By designing the assembly operation to be fail-safe,
we will have no errors, but we will need to repair
products. This is still a better solution than an
all-manual one, and a situation which can be
improved. Too often an all-or-nothing approach is
used when automation is introduced.
The last crucial factor is the empowerment of the
operator. The systems must be designed for
continuous change. A continual presence of an
operator with skills to change and improve the
system is necessary, and tools for feedback and
optimisation must be present.
5. KEY FACTORS
In this chapter, the authors list some of the
key factors for a flexible automated assembly
process. We have identified 5 main factors, each
with 5 sub-factors: Material flow, Human
interaction, Technical solution, Efficiency and
Changeability. Table 1 shows an overview of the
main key factors and their sub-factors. Table 2
suggests 3 levels for each sub-factor. The key factors
are mutually dependent on each other, directly or
indirectly.
Table 1 - Key factors
Material flow: Decoupling point; Redundant paths vs. special; Kit/batch/group; Automated feeding; Volume/product portfolio
Human interaction: Man-machine cooperation; Feedback/learning; Competence; Maintenance/support; Complexity
Technical solution: Self-X; Monitoring and control; Autonomy; Traceability; Standardisation of equipment
Efficiency: Availability; Quality; Initial investment; Variable costs; Cycle time
Changeability: Changeover-ability; Reconfigurability; Plug and produce; Ramp-up; Offline capabilities
Table 2 - Factor levels
Main element | Sub-element | Level 1 | Level 2 | Level 3
Material flow | Decoupling point | Store (finished goods) | Before assembly | Components manufacturing
Material flow | Redundant paths vs. special | Full redundancy/flexibility | Partial redundancy/flexibility | Special machines
Material flow | Kit/batch/group | One-piece flow | Grouping | Batch
Material flow | Automated feeding | Manual | Asynchronised manual (conveyor, blister, etc.) | Automated (feeder, bin pick, etc.)
Material flow | Volume/product portfolio | Low volume, short product range | Low volume, medium product range | Medium volume, medium product range
Human interaction | Man-machine cooperation | Synchronised | Asynchronised | Fully automated
Human interaction | Feedback/learning | None | Monitoring | Monitoring and analysis
Human interaction | Competence | Low | Medium | High
Human interaction | Maintenance/support | No support | Medium | Close contact/instant
Human interaction | Complexity | Simple | Medium | Advanced
Technical solution | Self-X | None | Some | Extensive
Technical solution | Monitoring and control | None | Simple | Extensive
Technical solution | Autonomy | None | Simple | Extensive
Technical solution | Traceability | None | Date and time/batch | Each individual and process
Technical solution | Standardisation of equipment | Standard off-the-shelf | Medium | One-of-a-kind special purpose
Efficiency | Availability | 50% | 75% | 80%
Efficiency | Quality rate | 90% | 95% | 100%
Efficiency | Initial investment | Low | Medium | High
Efficiency | Variable costs | Low | Medium | High
Efficiency | Cycle time | High | Medium | Low
Changeability | Changeover-ability | High | Medium | Low
Changeability | Reconfigurability | High | Medium | Low
Changeability | Plug and produce | None | Some | Yes
Changeability | Ramp-up | Long time | Short time | Instant
Changeability | Offline capabilities | None | Some | Fully
5.1. MATERIAL FLOW
Material flow or manufacturing logistics is a key
factor for the usefulness of an assembly solution.
Within the Material flow we have identified 5 sub-
factors: Location of the decoupling point, which is
the place in the process chain where the product
dedicated to a specific customer. A system based on
make to order would probably need a more
flexible solution than make to stock. Moreover is
the redundancy of the material flow, one-piece-flow
vs. batch production, how components are
transported and fed into the process as well as the
product portfolio and volumes important factors
within the material flow.
5.2. HUMAN INTERACTION
Human interaction is the second key factor, with the
following sub-factors: Man-machine cooperation,
Feedback/learning, Competence, Maintenance/support
and Complexity. First, the factors cover to what
degree manual labour is a part of the process: can
the operator work asynchronously? What is the
automation level? On the other hand, the factors
cover how human knowledge is developed and used
to achieve a good automation solution. To what
degree does the operator get feedback from the
process through sensors? Are there any monitoring
and/or analytical/decision support systems? What
is the competence level of the operators and
technical staff on automation? Is the process
dependent on outside support from a third party?
How complex is the system?
5.3. TECHNICAL SOLUTION
Section 3 gives a brief overview of the state of the
art in flexible and agile automation technology.
More or less cognitive systems with Self-X
capabilities, such as self-optimisation, self-calibration,
self-adjustment, self-repair, etc., are
one important factor in reaching agility. The
monitoring and control mentioned under human
interaction also depend on the technical solution.
The degree of autonomy (Scholz-Reiter and Freitag,
2007) is the degree to which an assembly cell can
control itself in a decentralised way. Traceability of
components, products and process parameters is
furthermore an important technological key factor
for quality assurance, and for process analysis and
improvement.
5.4. EFFICIENCY
The OEE factors - availability, quality and cycle
time - are, in addition to variable costs and
investment costs, the key factors regarding
efficiency in this study.
5.5. CHANGEABILITY
The main factors for changeability are the
changeover-ability within the existing product
portfolio, the reconfigurability when the system
needs to change, and the Plug and produce facility,
as well as the ramp-up period and the capability for
off-line programming and manufacturing of
grippers and pallets.
6. CONCLUSIONS
Although the development of solutions for
advanced automated assembly has come a long way
during the last decades, there are still several tasks
which manual operators can do better than
automated systems.
The research in this paper shows that it is
possible to exchange specialised manual operations
with automated processes if the surrounding
manufacturing organisation is adapted to handle the
new equipment, and the operators are empowered to
transfer their skills to the automated system.
Technical solutions to handle advanced assembly
tasks are available, but when combining these with
advanced control structures, most operators will
have problems understanding why the system
responds as it does. And understanding is important
to make operators and manufacturing operations
interested in automated solutions. Improved
solutions for feedback and learning in complex
assembly systems are therefore needed, and should
be a focus area in research.
7. ACKNOWLEDGMENTS
The authors are most grateful to Ekornes and
Kongsberg Automotive for their support in this
research project. We would also like to thank the
Norwegian Research Council for funding the
research through the BIA programme, and SFI
NORMAN.
REFERENCES
Albu-Schäffer A., Haddadin S., Ott Ch., Stemmer A.,
Wimböck T. and Hirzinger G., The DLR Lightweight
Robot - Design and Control Concepts for Robots in
Human Environments, Industrial Robot: An
International Journal, Vol. 34, No. 5, 2007, pp 376-
385
Arai T., Aiyama Y., Maeda Y., Sugi M. and Ota J.,
Agile Assembly system by Plug and Produce,
CIRP Annals Manufacturing Technology, Vol. 49,
No. 1, 2000, pp 1-4
Arai T., Maeda Y., Kikuchi H. and Sugi M., Automated
Calibration of Robot Coordinates for Reconfigurable
Assembly Systems, CIRP Annals Manufacturing
Technology, Vol. 51, No. 1, 2002, pp 5-8
Boothroyd G., Assembly Automation and Product
Design, 2nd Edition, CRC Press, Boca Raton, 2005
Butterfass J., Grebenstein M., Liu H., Hirzinger G.,
DLR-Hand II: next generation of a dextrous robot
hand, Robotics and Automation. ICRA 2001. IEEE
International Conference on, Vol. 1, No. 1, 2001, pp
109-114
Consiglio S., Seliger G. and Weinert N., Development
of Hybrid Assembly Workplaces, CIRP Annals
Manufacturing Technology, Vol. 56, No. 1, 2007, pp 37-40
Fujita N., Assembly of Blocks by Autonomous
Assembly Robot with Intelligence, CIRP Annals
Manufacturing Technology, Vol. 37, No. 1, 1988, pp 33-36
Kristensen S., Estable S., Kossow M. and Brösel R.,
Bin-picking with a solid state range camera,
Robotics and Autonomous Systems, Vol. 35, No. 3-4,
2001, pp 143-151
Krüger J., Bernhardt R., Surdilovic D. and Seliger G.,
Intelligent Assist Systems for Flexible Assembly,
CIRP Annals Manufacturing Technology, Vol. 50,
No. 1, 2006, pp 21-24
Krüger J., Lien T.K. and Verl A., Cooperation of human
and machines in assembly lines, CIRP Annals
Manufacturing Technology, Vol. 58, No. 2, 2009, pp
628-646
Michalos G., Makris S., Papakostas N., Mourtzis D. and
G. Chryssolouris, Automotive assembly technologies
review: challenges and outlook for a flexible and
adaptive approach, CIRP Journal of Manufacturing
Science and Technology, Vol. 2, No. 2, 2010, pp 81-
91
Oleson J.D., Pathways to agility, 1st Edition, John
Wiley & Sons Inc., New York, 1998
Pires J.N., Veiga G. and Araújo R., Programming-by-
demonstration in the coworker scenario for SMEs,
Industrial Robot: An International Journal, Vol. 36,
No. 1, 2009, pp 73 - 83
Santochi M. and Dini G., Sensor Technology in
Assembly Systems, CIRP Annals Manufacturing
Technology, Vol. 47, No. 2, 1998, pp 503-524
Scholz-Reiter B. and Freitag M., Autonomous Processes
in Assembly Systems, CIRP Annals Manufacturing
Technology, Vol. 56, No. 2, 2007, pp 712-729
Watanabe A., Sakakibara S., Ban K., Yamada M., Shen
G. and Arai T., Autonomous Visual Measurement for
Accurate Setting of Workpieces in Robotic Cells,
CIRP Annals Manufacturing Technology, Vol. 54,
No. 1, 2005, pp 13-18
Wiendahl H.-P., ElMaraghy H.A., Nyhuis P., Zh M.F.,
Wiendahl H.-H., Duffie N. and Brieke M.,
Changeable Manufacturing Classification, Design

288

and Operation, CIRP Annals Manufacturing
Technology, Vol. 56, No. 2, 2007, pp 783-809
Yusuf Y.Y., Sarhadi M. and Gunasekaran A., Agile
Manufacturing: The Drivers, concepts and attributes,
International Journal of Production Economics, Vol.
62, No. 1-2, 1999, pp 33-43
Zenger D., Dewhurst P. and Boothroyd G., Automatic
Handling of Parts for Robot Assembly, CIRP Annals
Manufacturing Technology, Vol. 33, No. 1, 1984, pp
279-281
A VIRTUAL FACTORY TOOL TO ENHANCE THE INTEGRATED DESIGN OF
PRODUCTION LINES
Réka Hints
Technical University of
Cluj-Napoca
reka.hints@tcm.utcluj.ro
Marius Vanca
Technical University of
Cluj-Napoca
marius.vanca@tcm.utcluj.ro
Walter Terkaj
ITIA-CNR
walter.terkaj@itia.cnr.it


Elena Domenica Marra
COMAU - Turin
elena.marra@COMAU.com
Stefano Temperini
COMAU - Turin
stefano.temperini@COMAU.com
Dorel Banabic
Technical University of
Cluj-Napoca
banabic@tcm.utcluj.ro
ABSTRACT
Virtual manufacturing concepts have been adopted by most industrial companies, including small and medium-sized ones, to face global competition and to deal with the top challenges of the manufacturing industry, i.e. improving quality, reducing delivery time and decreasing costs. However, most virtual manufacturing methodologies, tools and software systems are not integrated well enough to perform the required activities in an efficient manner. Attention is usually focused on local and specific proficiency, thus hampering the sharing of information between departments, the parallelization of work and communication along the product or factory life-cycle. Indeed, the transmission of data and results is usually difficult and carried out by means of expensive and/or time-consuming manual work. This paper presents a software tool, named Design Synthesis Module (DSM), that faces some of the aforementioned problems by adopting the approach proposed by the Virtual Factory Framework (VFF) project, consisting of a holistic virtual environment that integrates several decoupled functional tools sharing the same data model to support the design and management of factories. The proposed solution is one of the tools integrated in VFF and aims at improving the proposal and design phases of production lines in terms of quality, time and cost by supporting the management of production system configuration data across several departments. DSM will support the bidding and system design activities by enabling a quick evaluation of system configurations, easy adjustments and reuse of data, and concurrent design and integration with other tools.
KEYWORDS
Virtual Factory, Integrated Design, Concurrent Design, Production Lines, Life Cycle Cost Analysis

1. INTRODUCTION
The world economy has been passing through a major crisis since 2008, without yet recovering significantly. As part of this global uncertainty, the manufacturing industry is facing extraordinary challenges. Market conditions are tougher than ever and companies have to look for solutions to increase the competitiveness and efficiency of their manufacturing processes even more than before.
In these circumstances, the use of virtual manufacturing and digital representations of factories and their processes becomes even more important for optimizing production activities. In the course of rapidly advancing information technology, digital tools and systems are applied in all industrial branches, supporting a great variety of tasks along the lifecycle of a factory. Because of the complexity of tackling product
design and manufacturing as a whole, software tools are traditionally designed to focus on specific issues and tasks (Tolio et al., 2010). However, the wide range of software tools and applications used in virtual manufacturing today - for data processing, for the graphical representation of manufacturing devices, for planning purposes, etc. - has to efficiently integrate, collaborate and exchange information across all manufacturing processes. It is not enough for these tools to be efficient and effective towards their own goals, since this practice has drawbacks when considering the requirements of networked collaboration (Mottura et al., 2008) and of concurrent engineering for the design of products, processes and production systems.
Another important factor companies should consider for increasing competitiveness is the ability to respond quickly to business requests and opportunities. Manufacturing companies have to develop the ability to prepare offers for customers in the shortest possible time, exceeding customer expectations by providing several solutions for one inquiry and by being able to easily reconfigure and adapt an existing or already designed system.
Another challenge manufacturing companies have to face is dealing with frequent changes. Companies work on quite stable product categories produced in high volumes but, at the same time, they must cope with frequent product modifications and short product life-cycles (Terkaj et al, 2009). Dealing with change is one of the most fundamental challenges facing organizations today (Wiendahl et al, 2007; Sacco et al, 2010).
The generation and propagation of changes creates a multitude of possible scenarios that companies must face in order to stay competitive. These scenarios are often unpredictable, and this, together with a lack of unified solution approaches, represents a major cause of complexity when operating in dynamic manufacturing environments (Tolio et al, 2010).
Recent research efforts identify the concept of reconfigurability as the answer to the need to face continuous changes in production (Koren et al, 1999). In Wiendahl et al (2007) reconfigurability is defined as the operative ability of a production system or device to switch with minimal effort and delay to a particular family of work pieces or subassemblies through the addition or removal of functional elements.
Besides reconfigurability, flexibility is also a key requirement to be met by manufacturing organizations and their systems in order to cope with the changes that may appear. While, as shown above, reconfigurability is the operative ability of a manufacturing system to switch to a particular family of part types, flexibility is a broader concept involving the tactical ability of the entire production and logistics areas to switch between families of components (Tolio et al, 2010). However, flexibility cannot be properly considered in the decision-making process if it is not defined in quantifiable terms. Alexopoulos et al (2010) present an interesting method for modelling and assessing the flexibility of a manufacturing system.
A more in-depth research direction has been the introduction of the focused flexibility paradigm. Focused flexibility may represent an important means to rationalize the way flexibility is embedded in manufacturing systems. Focused Flexibility Manufacturing Systems - FFMS (Terkaj et al, 2010) - represent a competitive answer for the analyzed production context, since they guarantee an optimal trade-off between productivity and flexibility.
Another concept under research, with synergies to reconfigurability, flexibility, adaptability and changeability, is co-evolution. Co-evolution involves the repeated configuration of product, process and production system over time, in order to profitably face and proactively shape market dynamics, namely changes (Tolio et al, 2010).
The topics pointed out above - the integration and collaboration of software tools used in virtual manufacturing, concurrent engineering, and the reconfigurability, flexibility and adaptability of manufacturing systems - are addressed both by software providers and by the scientific community. In recent years several research projects have been carried out in these areas, e.g. Modular Plant Architecture (MPA), A configurable virtual reality system for Multi-purpose Industrial Manufacturing Applications (IRMA) and Digital Factory for Human-Oriented Production System (DiFac).
The complexity of the aforementioned problems calls for support tools that effectively address them in all phases of the factory lifecycle. Major ICT players already offer all-encompassing Product Lifecycle Management suites supporting most of the processes. However, they do not offer all the required functionalities and they lack interoperability. Moreover, Small and Medium Enterprises cannot afford the present expensive PLM software suites. An answer to the problems and requirements highlighted so far can be given by the large-scale European project focused on the development of a new Virtual Factory Framework (VFF), which can be defined as "an integrated collaborative virtual environment aimed at facilitating the sharing of resources, manufacturing information and knowledge, while supporting the design and management of all the factory entities, from a single product to networks of companies,
along all the phases of their lifecycles" (Sacco et al, 2010; Sacco et al, 2011).
VFF aggregates a series of decoupled software tools that implement various methods and services for factory design, performance evaluation, management, etc. In this paper we present one of these tools, the Design Synthesis Module (DSM), which deals with several of the general problems addressed by VFF: the integration of various software tools, the collaboration and parallelization of work, easy reconfigurability through quick adjustments of system configurations, data reuse, work automation, and the enabling of concurrent design. DSM addresses all of these in the context of the offer/bid preparation and system design activities carried out by manufacturing companies.
Although there are already several software tools on the market (e.g. Enovia, TACTIC, Arena, Teamcenter) addressing some of the problems listed above, it seems that none deals with all of these issues together while at the same time being specialized in offer preparation and pre-design activities and remaining affordable for SMEs.
This paper is organised as follows. Section 2 describes the problem statement in more detail and explains the role of VFF and DSM in solving this problem. Section 3 presents the DSM tool in detail. Section 4 presents a case study - an industrial scenario where DSM will prove its benefits. Section 5 summarizes the expected results of DSM, as well as the problems that remain unsolved and are planned for future research.
2. PROBLEM STATEMENT AND VFF
As highlighted in the Introduction, manufacturing companies have been innovating heavily during the past years in order to improve their competitiveness and business performance and to increase their market share. As shown in Tolio et al (2010), the current challenge in manufacturing engineering consists in the innovative integration of the product, process and factory worlds and the related data, aiming at synchronizing their lifecycles. The effective collaboration and integration of many dispersed actors across various departments, involved in different production flow activities, stands at the basis of time-efficient and cost-efficient manufacturing innovation.
Knowledge sharing and management is another very important aspect to be considered here, since a considerable number of data files have to be shared and exchanged by the various groups of engineers involved in offer preparation, pre-design and design activities. As suggested by Mahdjoub et al (2010), the design process has to be rationalized to manage knowledge, skills and the technological patrimony.
The challenge is that most enterprise information systems are not well integrated or maintained, even though data and information can in principle be transmitted anywhere at any time in an e-manufacturing environment (Wang and Nee, 2009).
As already mentioned in the previous section, besides the integration of different kinds of collaboration tools, another challenge faced by industrial companies today is the ability to quickly adjust system configurations and to reuse data. This ability is highly valuable in today's frequently changing manufacturing environments, but also for preparing customer offers that provide several options and solutions for the same inquiry in a very short time. In virtual manufacturing, the new software tools are trying to cover these aspects of system reconfigurability and flexibility as well, and solutions in the direction of production system modularisation and reconfigurability have been adopted by more and more industrial companies.
In addition to extensive collaboration and adaptive system configurations, integrated system design involves concurrency. Concurrent engineering is a work methodology based on the parallelization of tasks (i.e. performing tasks concurrently). As written in Wang and Nee (2009), the ideal process of concurrent or simultaneous engineering is characterized by the parallel work of a potentially distributed community of designers who know about the parallel work of their colleagues and collaborate as necessary. The process is approached as a whole; the accumulation of results is not performed sequentially.
Two theories stand at the basis of integrated and concurrent engineering. The first redefines the basic design process structure that was used for decades. The new idea is that all elements of a production system's life-cycle should be taken into careful consideration in the early design phases, when the representation of the system is still in an abstract, or at least virtual, state. Using the conventional sequential method (also called the Waterfall Model), incompatible elements of the design are not discovered until late in the process, when it is usually more expensive to make changes. In contrast, the integrated design process requires an iterative approach and multidisciplinary collaboration, including key stakeholders and design professionals, from conception to completion. The collaboration between designers has to converge towards optimizing engineering design cycles.
The second theory says that all design activities should occur at the same time, i.e. concurrently.
Figure 1 Sequential vs. Concurrent System Design
Figure 1 shows a graphical representation of the difference between the conventional system design approach and the iterative/concurrent approach.
Applying these new theories does not mean that all problems are solved; many organizational and managerial challenges arise when applying these methods. Opening the design process to allow concurrency creates problems of its own: ensuring compatibility between the different collaboration tools, enabling the communication between engineers, etc. There must be a strong basis for teamwork, since the overall success of the methods relies on the ability of engineers to work together effectively. This can be a difficult obstacle, but with proper processes and software tools it can be overcome.
Considering the above-mentioned challenges and the directions of ongoing research, it can be said that modern factories have to be modular, scalable, flexible, open, agile and knowledge-based in order to quickly adapt to continuously changing market demands, technology options and regulations. All these concepts are addressed by the European research project Virtual Factory Framework. The goal of VFF is to create the next-generation Virtual Factory, meant also to stand at the basis of future applications in this research area. VFF aims at supporting various factory activities and at facilitating the sharing of resources, information and knowledge related to manufacturing processes. It implements the framework for a collaborative virtual environment based on object-oriented technologies.
This framework is based on four key Pillars: (I) Reference Model, (II) Virtual Factory Manager, (III) Functional Modules and (IV) Integration of Knowledge. All the functionalities required by the factory planning processes are provided by different decoupled modules (Pillar III) that work on a consistent reference factory model (Pillar I) thanks to the VF Manager (Pillar II), which plays an integrating role by interfacing all the modules (Sacco et al, 2010).
Based on international standards and advanced techniques, a series of dissemination, exploitation and validation strategies are developed within VFF. Several industrial use cases are designed within the validation scenarios. The framework will be validated based on these scenarios and its impact on real factories will be evaluated.
The VF Manager (VFM) is the core of VFF and
handles the common space of abstract objects
representing the factory. The VFM orchestrates the
VFF functional modules and guarantees data
consistency and data availability among them.
The preliminary architecture of the VFM proposed by Sacco et al (2011) was related to a VF Data Model based on the XSD/XML format. However, XSD technology alone is not suitable for data consistency checks and knowledge representation. A more viable solution has been identified in the adoption of an ontology as the means for representing data and relationships, promoting knowledge integration in the data model (Ghielmini et al, 2011).
According to its latest architecture, the Information Exchange Platform (IEP) is the main component of the VFM and provides VF modules with high-level access to the two functional cores of the VFM: the Versioning Layer and the Semantic Layer. The Versioning Layer contains the VF Data Repository where all the shared data are stored. The Versioning System guarantees data safety, allowing an older version to be restored at any time (Sacco et al, 2011).
The VF modules are decoupled functional software tools that operate independently but all use the same Factory Data Model. They are designed and implemented to cover one or more of the factory life-cycle phases. They should be seen as collaborative tools targeted at increasing performance in the design, management, evaluation and reconfiguration of new or existing production facilities. The holistic aspect of VFF is ensured by the broad range of these functional modules.
The exposed functionality of the VFM is
implemented as web services (W3C, 2011), thus all
the functional modules are required to implement a web service client according to the WSDL file (Booth and Liu, 2007) describing the published interface (Sacco et al, 2010). In this way, all of the functional modules respect the same set of interfaces defined by the VF Manager and can thus be easily integrated.
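As a hedged illustration of what such a web service client might look like in Java (the technology initially considered for DSM's implementation, as noted later in this paper), the sketch below uses the standard JAX-WS API. The endpoint URL, service name and the single operation shown are assumptions for illustration only; the real interface is whatever the published WSDL defines.

import java.net.URL;
import javax.jws.WebService;
import javax.xml.namespace.QName;
import javax.xml.ws.Service;

public class VfmClientSketch {

    // Hypothetical service endpoint interface with one illustrative
    // operation; the actual operations are those declared in the IEP WSDL.
    @WebService
    public interface IepPort {
        String getResource(String resourceUri);
    }

    public static void main(String[] args) throws Exception {
        URL wsdl = new URL("http://vfm.example.org/iep?wsdl");            // assumed endpoint
        QName name = new QName("http://vfm.example.org/", "IepService");  // assumed service QName
        Service service = Service.create(wsdl, name);   // build a client from the WSDL
        IepPort port = service.getPort(IepPort.class);  // obtain a typed proxy
        System.out.println(port.getResource("urn:vff:resource:station01"));
    }
}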
As already introduced in the previous section, the Design Synthesis Module (DSM) is one of the VFF functional modules, focused on facilitating the integrated and concurrent design of production systems, as well as the preparation of offers for existing or potential customers. Like the whole VFF, the DSM module is going to be implemented in a sufficiently generic and flexible manner so that it can be used by a wide range of industrial companies facing similar problems related to customer proposal preparation, the reconfigurability of systems, and the integration and concurrency of pre-design and design activities - not only by the industrial partners of VFF. More details about DSM are provided in the following section.
3. DESIGN SYNTHESIS MODULE (DSM)
In order to win a competition, locally or globally, customer satisfaction has to be treated with the highest priority. This has led to the need for configurable and customizable systems and to ever more complex manufacturing processes. Designing easily reconfigurable manufacturing systems for a potential customer has become one of the desired targets.
In these circumstances the offer preparation process is critical for industrial companies, because it is characterized by strict time constraints and because it plays a crucial role in winning an order from the customer. If the bid is not won, the entire effort and cost spent during this phase are potentially lost. Additionally, this phase has to be carried out in a very short time period - usually 2 or 3 weeks. Since several departments are involved, parallelization of the work across these departments would clearly speed up the process and make the whole phase more efficient.
Presented in more detail, the proposal phase consists of the following activities:
- Technical Proposal
- Cost Estimation
- Macro-level Design Planning
- Macro-level Manufacturing Planning
- Macro-level Acquisition Planning
- Macro-level Buy-off Planning
Nowadays there are still many industrial companies that use Excel files to store the data related to the configuration of the production resources used to design a manufacturing/assembly system. This happens even though several departments have to work on the same set of files, which means that employees sometimes have to wait for each other's work before they can start their own, since parallel work is not possible.
Additionally, there are cases in which the usage of data coming from various sources of information (catalogues, standards, older projects, etc.) involves a lot of time-consuming search and copy-paste activities, since there is no integration between these sources of information.
Moreover, it is very hard to present the customer with several options or solutions for one single bid inquiry while the available time is very short, system reconfiguration is hard to facilitate, and evolved software tools are not used to achieve concurrency.
The DSM module, as part of VFF, will address the above-mentioned problems and will try to offer suitable solutions to all of them. Its main objective is to improve this process, as well as the design and development phase, of an industrial company specialized in creating production systems. This goal is achieved by facilitating and speeding up the integrated work of the departments involved in these business processes.
DSM aims at providing shared access for exploring and concurrently modifying the configuration of the production system and its resources and components.
DSM will be a desktop software application integrated with the Virtual Factory Manager for accessing the central VFF data. Additionally, it will have local storage in order to provide offline availability of data. The synchronization of the local storage with the VFF Data Repository may be performed on user request or automatically, depending on the user preferences.
The production resources required for the production systems to be created reside in the VFF Data Repository and are retrieved by DSM through the VF Manager. However, if a new production resource is needed, the user can design a new resource by accessing various databases that are external to the VF Data Repository but reachable by DSM. These newly created production resources, together with their characteristics and sub-components, will also be stored in the VFF Data Repository and can be used in later projects or by other users.
Access to external databases is necessary because, at the moment, it is not foreseen that the VF Data Model will deal with a very detailed representation of the components of a production resource (e.g. a machine tool). In the scope of VFF, a production resource is considered a black box receiving input and providing output. The external databases accessed by the DSM module might be catalogues of components selected to design a production resource, or technical standards that can be used to estimate the characteristics of the resources (e.g. MTBF) and/or of the processes (e.g. the time to execute a manual operation).
In particular, the module will focus on the configuration of resources depending on the information arriving from:
- the operational departments, concerning the operations to be performed by resources;
- the system engineering departments, concerning aggregated data about the characteristics of the manufacturing system (e.g. the number of resources);
- the design departments.
If no resource described in the available catalogue meets the specifications received as input, the module starts the configuration of new resources.
The module will automatically perform computations and estimations of costs, reliability, etc., which are currently done manually using Excel files. The application will assist with the calculation of the Life Cycle Cost (LCC) of a production system. Some costs can be directly calculated by the module (e.g. the total investment cost of machine tools), whereas other costs (e.g. energy cost, spare parts cost) can be estimated by other VF modules by exploiting the defined characteristics of the production resources.
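To make the cost logic concrete, the following minimal sketch (in Java, continuing the earlier example language) combines a directly known investment cost with yearly costs estimated from resource characteristics such as MTBF and MTTR. All figures and the simple additive model are illustrative assumptions, not the actual DSM cost model.

public class LccSketch {
    public static void main(String[] args) {
        double investment = 250_000.0;   // purchase cost, directly known [EUR]
        double hoursPerYear = 6_000.0;   // planned operating hours per year
        double years = 10.0;             // planned service life

        // Corrective maintenance estimated from MTBF/MTTR:
        double mtbf = 400.0;             // mean time between failures [h]
        double mttr = 4.0;               // mean time to repair [h]
        double repairCostPerHour = 80.0; // technician plus downtime cost [EUR/h]
        double failuresPerYear = hoursPerYear / mtbf;
        double maintenancePerYear = failuresPerYear * mttr * repairCostPerHour;

        // In the real framework these would be estimated by other VF modules:
        double energyPerYear = 12_000.0;
        double sparePartsPerYear = 5_000.0;
        double operatorPerYear = 30_000.0;

        double lcc = investment + years * (maintenancePerYear + energyPerYear
                + sparePartsPerYear + operatorPerYear);
        System.out.printf("Estimated LCC over %.0f years: EUR %.0f%n", years, lcc);
    }
}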
The duration of the proposal process is expected to be significantly shortened thanks to the DSM module, by:
- speeding up the definition/evaluation of the production resources;
- enabling a quick reuse of data;
- enabling concurrent design and characterization of the production resources.
From the conceptual point of view, the module will split the handled data into two important categories. Projects will contain the information related to the new production systems being prepared; each new offer inquiry is seen as a new project. To cover one of the capabilities missing so far, each project may contain several solutions for the same bid preparation. Each solution will contain the definition and characteristics of the production line resources chosen to be part of the respective system. Several users can work in parallel on the same project, since all the data is easily shared through the VF Data Repository and VF Manager. Figure 2 (left) illustrates the content of such a project as an example.
Figure 2 A typical DSM project and Master Data
structure
The master data will contain all of the resources (stations, equipment, tools, etc.) that can be used and reused in projects to create solutions for customer inquiries. Elements are taken from here and used in projects. A copy is always created when such an element has to be part of a specific solution, so that it can then be customized for that solution without affecting the basic characteristics of the element residing in the master data or in other projects. Figure 2 (right) shows a typical content of the master data hierarchy. When a new resource created in a project is expected to be reused in other projects, it can be published to the master data, but this operation is limited to a certain user role. In general, all write operations on the master data will require a special privilege, since modifications here have to be made with care, as they impact the basic set of resources for all future projects.
One important aspect is that the master data usually differs from one manufacturing company to another. That is why DSM will provide support for easy configuration of the master data structure based on an XML definition (a hypothetical example is sketched after this paragraph); this way the tool can be used for a wide range of production systems. Other useful features provided by the DSM module are: export/import of master data objects, projects and parts of projects to external files in different formats (e.g. export/import to/from Excel, export to PDF, etc.); data security based on users and roles, using the capabilities provided by the VF Manager; freezing a project solution which is considered final, so that accidental modifications cannot be performed later on (e.g. freezing the proposal phase solutions once the project is in the design phase); search capabilities for all kinds of elements; easy copy-paste of objects from other projects to the current solution; easy drag & drop of objects from master data to projects; and a user interface (look & feel) that each user can customize.
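As a hedged illustration of such an XML-based master data definition, the snippet below sketches a structure with categories, resources and characteristics. All element and attribute names are assumptions for illustration, not the actual DSM schema.

<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical master data structure definition; names are illustrative. -->
<masterData company="ExampleCo">
  <category name="Stations">
    <resource id="ST-01" name="Assembly station"/>
  </category>
  <category name="Equipment">
    <resource id="EQ-10" name="Spot welding gun">
      <characteristic name="MTBF" unit="h" value="400"/>
      <characteristic name="MTTR" unit="h" value="4"/>
    </resource>
  </category>
  <category name="Tools">
    <resource id="TL-07" name="Gripper"/>
  </category>
</masterData>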
As seen in the architecture diagram (Figure 3), the module will provide a connector for interacting with the VF Manager in order to access resources from the VFF Data Repository.
Figure 3 Design Synthesis Module Architecture
Other connectors will be designed for DSM in order to access third-party systems and data storages (e.g. catalogues, standards). The VFM Connector takes care of the web service client implementation and of the connection state mechanism, since the VF Manager will be accessed through its IEP WebService interface. The SPARQL query language will be used to access the semantic data. However, DSM is not a semantic application, and it will thus access the semantic representation only to extract and modify plain data. In fact, exploiting the semantic information is one of the goals for future versions of DSM.
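As a hedged illustration, a query of the kind DSM might issue to extract plain data could look as follows. The vocabulary (class and property names, namespace) is hypothetical, since the actual VF data model defines its own ontology.

# Retrieve the stations of one solution together with their MTBF values.
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX vff: <http://vff.example.org/factory#>

SELECT ?station ?mtbf
WHERE {
  ?station rdf:type vff:Station ;
           vff:partOfSolution vff:solutionA ;
           vff:hasMTBF ?mtbf .
}
ORDER BY ?station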
Figure 3 shows the multi-layered architecture of DSM. The module will provide a stand-alone user interface. An additional web user interface may also be provided in the future, but it is not in focus at the moment. The stand-alone user interface is required because the users need to be able to work offline with the application.
[Figure 3 depicts the multi-layered DSM architecture: the user works with the DSM user interface, under which the business logic layer and a persistence layer with a local database operate; a VFM Connector links DSM to the VF Manager through the IEP (Information Exchange Platform) web services, behind which sit the Versioning System and Data Repository; further connectors reach other storage systems (catalogues, standards, etc.) and other applications, while other DSM instances access the VF Manager in the same way.]
The core of the DSM module is its business logic layer, which will handle all data processing, computations and logic operations.
Another issue DSM will have to deal with, like any other VFF functional module, is the fact that the data received from the VFM is in RDF/XML format (Beckett, 2004).
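For readers unfamiliar with the format, the fragment below shows what a production resource serialized in RDF/XML might look like; the vff namespace and property names are hypothetical, matching the query sketch above rather than the actual VF data model.

<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:vff="http://vff.example.org/factory#">
  <!-- A station resource with one literal property (MTBF in hours). -->
  <vff:Station rdf:about="http://vff.example.org/factory#station01">
    <vff:hasMTBF rdf:datatype="http://www.w3.org/2001/XMLSchema#double">400.0</vff:hasMTBF>
  </vff:Station>
</rdf:RDF>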
At the moment the DSM module implementation is at a very early stage. So far most of the effort has been put into clarifying in detail all the features that this module will provide. Although Java was initially chosen as the implementation technology, recent discussions have brought up the option of using .NET and Windows Presentation Foundation, so the initial decision might change in the near future.
4. CASE STUDY: COMAU
Since the final goal of the Virtual Factory is to improve the performance of the real factory, it is necessary to verify the impact of the VFF approach. This calls for the cooperation of industrial companies in defining demonstration scenarios that aim at testing and validating the framework. Within the VFF project, four demonstration scenarios have been designed by pairing different factory planning processes and industrial sectors represented by the project partners: Factory Design and Optimisation; Factory Ramp-up and Monitoring; Factory Reconfiguration and Logistics; and a final scenario, named Next Factory, which aims at demonstrating the applicability of VFF along the entire factory lifecycle. This integrated scenario focuses on the woodworking and automotive sectors, represented by Homag AG and COMAU Powertrain SpA (Sacco et al, 2010).
COMAU is a global supplier of industrial automation systems for the automotive manufacturing sector, offering full service, from product engineering to production systems and maintenance services, together with a worldwide organization. Besides automotive, COMAU is active in several other industrial sectors, including aerospace and other non-automotive applications. Additionally, COMAU is an innovation leader committed to the continuous improvement of products, processes and services through the production of advanced manufacturing systems. As mentioned above, COMAU is involved in VFF as an industrial partner that contributes important engineering expertise and expects to improve its processes by using the framework and the functional modules that are going to be developed.
Like other industrial organizations, COMAU competes in a dynamic marketplace that demands short time-to-market and agility in production. COMAU carries out five business phases when supplying a production system: proposal (concept/pre-design), design and development, build & install, run & monitor, and performance improvement; only the first two are relevant to the topic of this paper. All these activities would benefit if adequate and integrated virtual factory methods and tools were available (Sacco et al, 2010).
In the proposal phase, COMAU receives a bid inquiry from a potential customer and prepares one or more technical and commercial offers for the production system; some very high-level design activities are involved. Once the order is won, COMAU receives the final specifications and starts the final design of the production system. The pre-design information prepared for the proposal phase is of great value for the design phase. Employees of several departments are involved in these two phases, as well as in the other ones, and they need to collaborate very well for a successful result.
Several documents/files are produced by COMAU to support the proposal activity: CAD drawings of the production system layout, Excel files describing the production system configuration and its resources, etc. The second category is of interest in this case. The Excel files are used by the different departments, and one such file is created for each station that will compose the production line. Thus, a single file per station is shared by several departments and it is not possible to work in parallel; moreover, filling in the Excel file is time-consuming. Several design loops may also be necessary before the final proposal can be presented, for several reasons: technical improvement (process and resource modifications), commercial improvement (decreasing the solution costs), or modifications required by the client himself during the proposal phase. The system configuration modifications cause changes in the efficiency and productivity of the designed solution. A further limitation is that, due to the constraints imposed by the existing time-consuming process, proposing new and alternative solutions takes a long time. The proposal of several alternative solutions would be faster and more beneficial if COMAU were helped to:
- speed up the design of the solutions, thereby also gaining time for preparing alternatives;
- perform a complete and detailed Life Cycle Cost (LCC) analysis of the production system during the proposal phase.
Once the negotiation between the customer and COMAU is successful, an order containing the detailed specifications is issued and the design and development phase starts. The activities carried out during this phase are similar to those carried out during the proposal process, although the level of detail of the data describing the production system is higher. The files produced during the proposal process are taken as a reference to start the design and development phase. Several departments work together, again, to configure a refined version of the production system.
The improvement needs highlighted above in the case of COMAU are addressed by VFF, and in particular by DSM. The following goals are set for DSM in relation to improving COMAU's processes:
- efficient and effective management of the data and information about the production system configuration to be shared between the COMAU departments;
- tools to support the system design activity by quickly evaluating the performance and cost of the production system configurations and their resources; in particular, the creation of simulation models needs to be automated as much as possible.
With the help of the DSM module, COMAU aims at speeding up the choice of macro-components when designing machine configurations. This can be done by selecting the electromechanical and mechanical macro-components from a database. Moreover, each macro-component should be associated with relevant technical data such as MTBF (Mean Time Between Failures) and MTTR (Mean Time To Repair) values and preventive maintenance requirements (both while the machine is running and while it is stopped). In addition, it will be possible to add information about spare parts and dedicated operator costs, thus enabling the evaluation of the LCC of macro-components, machines or even entire production systems.
Additionally, with the shared access provided by DSM, the several design loops that exist today in the proposal phase can be reduced. Through integration with other VFF modules, the visualization of the 3D design of the production system layout will be triggerable from DSM. Moreover, the integration of some of the already adopted support tools would make it possible to run more activities in parallel, reducing the time required during the design process.
Figure 4 shows how the integration between departments currently works with respect to the definition of the production system configuration, and how it is expected to work once the DSM module is in place.

Figure 4 Departments collaboration before/after DSM
[The figure contrasts the two situations: using Excel files, the departments (Operational, Proposal, Design, Purchasing, System Engineering, Financial) exchange data directly with one another; using DSM, they all interact through DSM together with the VFM and the VFF Data Repository.]
5. CONCLUSIONS
DSM is expected to increase the efficiency and effectiveness of the activities involved in the proposal and design phases by:
- parallelization of the work
- avoiding the current heavy use of spreadsheet files
- speeding up the definition/evaluation of the production resources
- enabling concurrent and integrated design
- enabling quick reuse of data (from other proposals and other projects), facilitating easy adjustment of the proposals
- exploring and modifying the resources and components of the production system (their characteristics, attributes, etc.)
- accessing external databases (component catalogues, standards, etc.) and other tools
- supporting the calculation of the Life Cycle Cost (LCC) of the production system
- export/import capabilities
- role-based data security
One direction of future research, to be addressed by following versions of DSM, is to investigate how the semantic information and knowledge (not just plain data) provided by the VF Manager can be used by DSM to further improve the processes it is aimed at.
6. ACKNOWLEDGMENTS
The research reported in this paper has received
funding from the European Union Seventh
Framework Programme (FP7/2007-2013) under
grant agreement No: NMP2 2010-228595, Virtual
Factory Framework (VFF).

This paper has been elaborated as part of the projects "PhD research in the field of engineering with the purpose of developing a science-based society - SIDOC", Contract no. POSDRU/88/1.5/S/60078, and PCCE-100/2010.
REFERENCES
Alexopoulos K., Papakostas N., Mourtzis D. and Chryssolouris G., "A method for comparing flexibility performance for the lifecycle of manufacturing systems under capacity planning constraints", International Journal of Production Research, Vol. 49, No. 11, 2010, pp 3307-3317
Beckett D., "RDF/XML Syntax Specification (Revised)", W3C, 2004, Retrieved: 15.06.2011, <http://www.w3.org/TR/rdf-syntax-grammar/>
Booth D. and Liu C.K., "Web Services Description Language (WSDL) Version 2.0 Part 0: Primer", W3C, 2007, Retrieved: 15.06.2011, <http://www.w3.org/TR/wsdl20-primer/>
Chryssolouris G., Mavrikios D., Papakostas N., Mourtzis D., Michalos G. and Georgoulias K., "Digital manufacturing: history, perspectives, and outlook", Proceedings of the Institution of Mechanical Engineers Part B: Journal of Engineering Manufacture, Vol. 222, No. 5, 2008, pp 451-462
Denkena B., Shpitalni M., Kowalski P., Molcho Z. and Zipori Y., "Knowledge management in process planning", CIRP Annals - Manufacturing Technology, Vol. 56, No. 1, 2007, pp 175-180
Ghielmini G., Pedrazzoli P., Rovere D., Terkaj W., Boër C.R., Dal Maso G., Milella F. and Sacco M., "Virtual Factory Manager of Semantic Data", Proceedings of DET2011 - 7th International Conference on Digital Enterprise Technology, Athens, 2011
Koren Y., Heisel U., Jovane F., Moriwaki T., Pritschow G., Ulsoy G. and Van Brussel H., "Reconfigurable Manufacturing Systems", CIRP Annals - Manufacturing Technology, Vol. 48, No. 2, 1999, pp 527-540
Mahdjoub M., Monticolo D., Gomes S. and Sagot J.-C., "A collaborative Design for Usability approach supported by Virtual Reality and a Multi-Agent System embedded in a PLM environment", Computer-Aided Design, Vol. 42, No. 5, 2010, pp 402-413
Mottura S., Viganò G., Greci L., Sacco M. and Carpanzano E., "New Challenges in Collaborative Virtual Factory Design", in: Azevedo A. (Ed.), Innovation in Manufacturing Networks, Springer, Boston, 2008, pp 17-24
Pedrazzoli P., Sacco M., Jönsson A. and Boër C., "Virtual Factory Framework: Key Enabler For Future Manufacturing", in: Cunha P.F. and Maropoulos P.G. (eds.), Digital Enterprise Technology, Springer, US, 2007, pp 83-90
Sacco M., Dal Maso G., Milella F., Pedrazzoli P., Rovere D. and Terkaj W., "Virtual Factory Manager", in: Shumaker R. (ed.), Virtual and Mixed Reality - Systems and Applications, Lecture Notes in Computer Science, Springer, 2011, pp 397-406
Sacco M., Pedrazzoli P. and Terkaj W., "VFF: Virtual Factory Framework", ICE - 16th International Conference on Concurrent Enterprising, Lugano, Switzerland, 2010
Terkaj W., Tolio T. and Valente A., "Designing Manufacturing Flexibility in Dynamic Production Contexts", in: Design of Flexible Production Systems, Springer, 2009, pp 1-18
Terkaj W., Tolio T. and Valente A., "Stochastic Programming Approach to support the Machine Tool Builder in Designing Focused Flexibility Manufacturing Systems (FFMSs)", International Journal of Manufacturing Research, Vol. 5, No. 2, 2010, pp 199-229
Tolio T., Ceglarek D., ElMaraghy H.A., Fischer A., Hu S., Laperrière L., Newman S. and Váncza J., "SPECIES - Co-evolution of Products, Processes and Production Systems", CIRP Annals - Manufacturing Technology, Vol. 59, No. 2, 2010, pp 672-693
Wang L. and Nee A.Y.C., "Collaborative Design and Planning for Digital Manufacturing", Springer, London, 2009
Westkämper E., Constantinescu C. and Hummel V., "New Paradigm in Manufacturing Engineering: Factory Life Cycle", Production Engineering, Vol. 13, No. 1, 2006, pp 143-146
Wiendahl H.-P., ElMaraghy H.A., Nyhuis P., Zäh M.F., Wiendahl H.-H., Duffie N. and Brieke M., "Changeable manufacturing - classification, design and operation", CIRP Annals - Manufacturing Technology, Vol. 56, No. 2, 2007, pp 783-809
W3C, "OWL Web Ontology Language - Reference", W3C, 2004b, Retrieved: 15.06.2011, <http://www.w3.org/TR/owl-ref/>
W3C, "SPARQL Query Language for RDF", W3C, 2008a, Retrieved: 15.06.2011, <http://www.w3.org/TR/rdf-sparql-query/>
W3C, "XML Schema Part 1: Structures Second Edition", W3C, 2004c, Retrieved: 15.06.2011, <http://www.w3.org/TR/xmlschema-1/>

METHODOLOGY FOR MONITORING AND MANAGING THE ABNORMAL
SITUATION (EVENT) IN NON-HIERARCHICAL BUSINESS NETWORK

AHM Shamsuzzoha
University of Vaasa, Finland
ahsh@uwasa.fi
Sami Rintala
Wapice Oy, Finland
sami.rintala@wapice.com
Timo Kankaanpää
University of Vaasa, Finland
tka@uwasa.fi

Luis Carneiro
INESC Porto, Portugal
luis.carneiro@inescporto.pt
Pedro Sena Ferreira
CENI, Portugal
psf@ceni.pt
Pedro Cunha
CENI, Portugal
pcunha@ceni.pt
ABSTRACT
Because of their substantial impact, potential and value, business collaborations are nowadays becoming an important issue in contemporary business management. Although a business network supports competitive advantages, it often becomes difficult to manage the integration of the operational processes. During these operational processes, unexpected situations or events may evolve within the network boundary. These events create various obstacles to running the business collaboration smoothly. Monitoring and managing such abnormal situations creates huge challenges for the collaborating firms with respect to their production processes. The main objective of event monitoring and management (EMM) is to provide warnings about, and to manage, any uneven situations that might cause serious damage to the firms. The research presented in this article provides the fundamental concepts of EMM that are common to any industrial establishment. A case example is also highlighted within the scope of this paper, with the view to demonstrating the ICT-based EMM process applicable in a non-hierarchical business network.
KEYWORDS
Event Monitoring and Management (EMM), Event Management Ontology, Non-Hierarchical
Business Network, Virtual Organization

1. INTRODUCTION
In the current competitive business world, manufacturing firms are moving forward to expand their businesses across organizational boundaries. This new environment inspires and motivates them to collaborate in order to achieve specific business benefits. Collaborative business offers extended market opportunities for the partners in terms of sharing valuable resources and expertise with each other (Camarinha-Matos et al., 2009). In such an operational environment, several uneven or abnormal situations often evolve that hamper the overall benefit of the business collaboration. Such an abnormal situation, often termed an event, has an adverse effect on productivity and profitability. Therefore, the business mission is to prevent and reduce the frequency and severity of abnormal situations through early detection and IT-based monitoring and management.
Increased demand for higher efficiency and productivity creates tremendous pressure in process control, which results in uneven situations within manufacturing firms (Alexopoulos et al., 2010). These abnormal situations (events) result in a wide range of minor to major production disruptions, which need close monitoring and control. Continuous monitoring and control of events is a prerequisite for achieving key business goals, including improved operational efficiency, optimized processes, reduced plant incidents, lower life cycle and maintenance costs, and increased plant uptime and profitability. In order to ensure continuous or real-time operational monitoring, collaborative business partners need an ICT-based communication infrastructure. The event monitoring and management (EMM) system is basically controlled by the IT-based infrastructure, while the non-hierarchical business network promotes the definition and description of the events.
Approaches to monitoring and managing events vary from firm to firm and also depend on the structure of the business network, i.e. whether it is hierarchical or non-hierarchical. In a hierarchical business collaboration, the largest partner usually takes overall control of the network and the other partners support its decisions or commands. In a non-hierarchical network, on the other hand, all the partners enjoy equal power and control over the business collaboration. In this paper, EMM is demonstrated for the non-hierarchical business network. This research develops an IT-based framework for categorizing the abnormal situations or unexpected events faced by the collaborating partners in their daily business operations, and presents the essential practice for controlling them.
The rest of the paper is organized as follows. Section 2 reviews the existing literature on EMM applicable to collaborative business networks, while Section 3 defines the event and its different types commonly encountered in a business network. Section 4 outlines the concept of the non-hierarchical business network. Section 5 illustrates the methodology for the EMM process, whereas Section 6 presents an example based on monitoring and managing an event in a non-hierarchical network. The basic outcomes from this research are discussed and concluded, with future research directions, in Section 7.
2. LITERATURE REVIEW
Monitoring and managing abnormal situations in any industrial establishment is considered an important step towards minimizing the damage they cause. Such situations occur within manufacturing firms without any notice, and the damage can be so severe that it cannot be recovered from easily. Abnormal situations in any business network have always challenged the aggressive application of increasingly complex processes, sophisticated control strategies and highly integrated approaches to production planning (Cochran and Bullemer, 1996). They are responsible for a wide range of minor to major process disruptions, through which productivity is considerably reduced, operational efficiency is lowered and maintenance costs increase substantially. Consequently, the success of a firm is usually measured by its capability to prevent, or at least immediately identify and resolve, deviations from planned activities (Andreas, 2003).
Abnormal activities in an operational process need to be identified in order to minimize their negative impacts before they become detrimental to customer satisfaction and operational efficiency. Managing an event requires two actions: firstly, eliminating the possible delay between the identification of the event and the remedial action for it; secondly, eliminating the delay between the time when the event occurs and the time when the decision maker finds out about it (Categoric Software, 2002). In order to remove or minimize both types of delay, manufacturing firms need to generate rule-based resolution plans. The formal method of event monitoring and management can be defined as repair, reschedule, re-plan and learn (Andreas, 2003). If it is impossible to repair an immediate event, the subsequent steps or processes should be rescheduled; this rescheduling might require re-planning the complete operational activities. In the learning process, the main objective is to prevent future occurrences of events through proper managerial attention (Llog, 2002).
The general trend in managing collaborative events among participating firms is online communication using SMS and/or e-mail. This type of communication pattern ensures true visibility within the business collaboration, which enables collaborative partners to resolve individual events in a more efficient and effective way (Rintala et al., 2010). This approach mimics the learning mode for the traditional industrial partners and pushes them to reduce deviations, preferably by prevention. The visibility of information flow among partners is considered a crucial step towards a collaborative event monitoring and management system (Kemmeter and Kimberly, 2002; Banker, 2002). It appears to be a useful technique or methodology for organization managers when implementing the decision-making process for preventing abnormal situations.
Different aspects of event monitoring and management can be achieved through traditional information buffering, by introducing automated process monitoring techniques such as statistical process control (SPC), or by deploying expensive and error-prone human attention (Thomson, 1967; Pfeffer, 1972; Deming, 1992; Shigeo, 1986). In an automated event monitoring technique, human attention is replaced by software, which offers a strategy to stabilize inter-organizational processes more efficiently and more effectively. In this process, the execution data collected from the organizations is stored in a central database that allows the necessary monitoring of the inter-organizational processes. Another technique, the track-and-trace solution, is usually applied in managing events in logistics and supply chain networks, where the service providers monitor their logistics operations and update the customers by reporting the progress of the delivery shipment with real-time or near real-time information (Stefansson and Tilanus, 2001; Bretzke et al., 2002). This logistics and supply chain environment ensures the production of highly customizable products by providing a robust plan, ensuring the supply of the right part at the right time at a rather reasonable cost, thus eliminating quality defects in the end product (Makris et al., 2011).
3. DEFINING EVENT AND ITS TYPE IN
THE BUSINESS ENVIRONMENT
The term event can be defined as a deviation from a plan, such as an unexpected delivery delay, an operational accident, unnoticed labour unrest, etc. The importance of monitoring and managing events is quite high from both organizational and market perspectives. The event monitoring and management process has to be considered from various perspectives, such as identifying, detecting and analyzing event occurrences, and identifying and managing the implementation of countermeasures (Rintala et al., 2010). Before adopting a suitable strategy to control events, entrepreneurs need to identify the different types of events that commonly evolve during operations. Based on the various operational circumstances, the events within a business network can be classified as follows; a compact encoding of this taxonomy is sketched after the list.

- Internal event: a logistic occurrence can be classified as an internal event, meaning that the event has only internal impact and the VO (Virtual Organization) does not need to be notified. This type of event has no operational and/or economic impact on the VO; it only impacts an individual partner internally.
- External event: this type of event usually evolves from factors external to an enterprise or a VO.
- VO event: this type of event has an economic and/or operational impact on the surrounding VO.
- SOS event: this event carries the highest priority level and has a huge impact on the operations of a VO. The production process needs to be shut down entirely if there are SOS events.
- Predicted event: an event that has been predicted in advance, and for which managerial procedures have been defined, is termed a predicted event. This type of event can be noticed before it causes a negative impact.
- Unpredicted event: an event that has not been predicted in advance but whose impact is visible is termed an unpredicted event.
- Temporal event: the consequence of this type of event is a time deviation, i.e. an operational activity being either late or early.
- Quantitative event: this defines the quantitative measure of an event, such as a resource level being null, low, medium or high.
- Qualitative event: this type of event concerns the characteristics of the occurrences (events) in qualitative terms. For instance, a product needs to be coloured blue but accidentally ends up black due to a problem with the painting machine.
- Spatial event: this identifies spatial features in terms of location, status, state, etc.; for instance, the address of a partner, or the location of a shipment such as a GPS position.
- Structured or expected event: an event that evolves as a consequence of a preceding event and cannot be avoided by any means. In this situation, management usually makes plans well ahead to tackle such an event.
- Unstructured or unexpected event: this type of event is never expected by the organizational management team, and therefore no actions or plans have been made in advance.
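As a hedged illustration, the taxonomy above could be encoded as follows (a Java sketch; the paper prescribes no implementation, so the grouping and names are purely illustrative):

/** Hypothetical encoding of the event types listed above. */
public enum EventType {
    // Scope of impact
    INTERNAL, EXTERNAL, VO, SOS,
    // Predictability
    PREDICTED, UNPREDICTED, STRUCTURED_EXPECTED, UNSTRUCTURED_UNEXPECTED,
    // Nature of the deviation
    TEMPORAL, QUANTITATIVE, QUALITATIVE, SPATIAL
}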

All the events mentioned above have their individual states and possible consequences. The management team of a partner organization needs to pre-plan for possible events which might damage or slow down the operational processes. Although most events that occur during operational processes are unknown or unidentified, managers need to be prepared to handle the possible consequences of an event. The generic types of events always depend on the circumstances of each business network and company branch, and even on the economic situation. The events of the business partners are related to their objectives. It is beneficial for the business network to differentiate among events in order to minimize their impact on the operations of the partner organizations. The information attached to these events can be used to conclude more precisely on the urgency of the events and their immediate impacts on the VO partners.
4. CONCEPT OF NON-HIERARCHICAL
BUSINESS NETWORK
The concept of business collaboration or networking
is evolving rapidly due to its inherent potential
and benefits. It is a prime concern in today's
competitive business environment to be competitive
in terms of sharing resources and expertise with
other companies. The first and most essential
prerequisite for developing such networking is to
build trust among the partner organizations. Without
established trust and commitment, it is almost
impossible to be successful in any kind of
collaboration. Manufacturing firms also need to
collaborate with respect to their individual
strengths and weaknesses. This open environment can
help to build valuable trust and longer-term
relationships among the networking partners.
Various types of networking are commonly available
in the business domain, such as the virtual
organization breeding environment (VBE), the
virtual enterprise (VE), the virtual organization,
the business community (BC) (Carneiro et al., 2010),
etc. The objectives of all these networking types
are mostly similar, differing in their levels and
terms and conditions. However, specific business
opportunities demand a certain type of networking.
It also depends on geographical location, culture,
motivation level, etc.
Two other types of business collaboration can be
distinguished in terms of the level of control and
of running a network: hierarchical and
non-hierarchical networks. In a hierarchical
network, larger organizations are mostly networked
with comparatively small and weak organizations to
achieve specific business goals. Here the partners
do not enjoy equal power but are dominated by the
larger organizations in the network. On the other
hand, in a non-hierarchical network (NHN), business
partners of usually similar capacities and
capabilities collaborate for a definite mission
(Shamsuzzoha et al., 2010). The partners in an NHN
enjoy equal power within the network, which is
mostly organized among small and medium enterprises
(SMEs). The NHN is usually implemented among SMEs
that produce high-variety, low-volume complex
products. The collaboration helps SMEs in terms of
designing complex products and preparing quotations.
5. METHODOLOGY FOR EMM PROCESS
A specific approach is necessary in order to manage
events within any organization. Proper
identification and planning enhance the overall
event management activities. Before considering
operational events, the following steps or measures
should be the focus (a schematic pipeline is
sketched after the list):

(a) Event identification
(b) Event assessment and prioritization
(c) Event monitoring and control
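
Read as a pipeline, the three steps chain naturally; the toy sketch below fixes only the order of the steps, and all keys and thresholds are illustrative assumptions.

    def emm_pipeline(observations):
        # (a) identification: keep only observations flagged as deviations
        events = [o for o in observations if o.get("deviation", False)]
        # (b) assessment and prioritization: highest rank first (cf. Table 1)
        events.sort(key=lambda e: e.get("rank", 1), reverse=True)
        # (c) the sorted list is handed over to monitoring and control
        return events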

In a collaborative business environment, each
partner should share its events with the others and
manage and monitor them collaboratively. However,
individual partners are also responsible for their
own planned deviations of operations (events) and
should manage them accordingly. The detailed
processes of event management in a collaborative
business environment are displayed in Figure 1 and
discussed briefly in the following paragraphs.

[Figure: organizational strategic objectives and IT-based tools drive event identification, assessment and prioritization of events, and event monitoring and control, supported by identified, analyzed, planned and scheduled controls, event reporting and an event database]
Figure 1 The generic processes of event management
5.1. EVENT IDENTIFICATION
The prime concern before managing events is to
identify the potential events that cause unnecessary
interruption to the business operations. The main
objective of event identification is to avoid future
uncertainties and to be able to manage these events
when they occur. The nature of identified events may
include interruptions to regular operations, order
and delivery fluctuations in the case of logistics,
fluctuations in resources in the case of inventory
management, quality failures in production
processes, etc. Both internal and external events
within a firm need to be identified. It is often not
an easy but a cumbersome task to identify the
potential events, especially in the case of events
that occur as feedback loops.
In a collaborative business, the identification of
events needs to be done on a shared basis in order
to avoid repetition and for mutual benefit. The
partners in the business network can implement an
IT-based tool to identify and mitigate the events,
which are stored within an interface engine. The
identified events should be visible to the partners
on a daily, monthly and yearly basis. During event
identification, information must be gathered,
transmitted to the partners and filtered for
important features. This effective information
sharing among partners is the key factor in
decreasing external and internal uncertainties. Fast
observation and sharing of the event is crucial.
Throughput times are short, and the faster the
partner network can adjust its activities to the
required changes, the better the results it
achieves.
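
As a sketch of this gather-filter-share step, the hypothetical helper below keeps only events above a priority threshold before notifying the partners; the field and method names are assumptions, not part of the cited tool.

    def share_important_events(events, partners, min_priority=3):
        # Filter the gathered event information for important features
        important = [e for e in events if e.get("priority", 1) >= min_priority]
        # Transmit the filtered events to every partner, e.g. via the IT-based tool
        for partner in partners:
            partner.notify(important)
        return important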
5.2. EVENT ASSESSMENT AND
PRIORITIZATION
After identifying the potential events within an
organization, those events need to be assessed for
prioritization. This prioritization helps in
choosing appropriate management actions for the
predefined events. The events can usually be ranked
on a scale of 1 to 5, where 1 represents the
lowest-level event, 5 the highest, and the others
lie in between. The complete scale of events with
their descriptions, impacts and likelihoods is shown
in Table 1. From Table 1, it can be observed that if
events are not properly identified and assessed,
they might, in the worst case, cause the
discontinuation of the business. The probability of
the events occurring can also be identified and
assessed for business safety reasons.
Table 1 - Assessment scale for event prioritization

Rank  Description of the event  Impact       Likelihood
1     Low level                 Can be       May not be noticeable
                                ignored      (insignificant)
2     Moderate level            Moderate     Might be harmful for business
3     Substantial level         Medium       Frequent business interruption
4     Serious level             Crucial      Causes huge loss of business
                                             potential
5     Extremely serious level   Destructive  Possibility of business
                                             discontinuation
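
The scale of Table 1 can be encoded directly; the dictionary below restates the table, while the threshold logic is an illustrative assumption.

    ASSESSMENT_SCALE = {
        1: ("Low level", "Can be ignored"),
        2: ("Moderate level", "Moderate"),
        3: ("Substantial level", "Medium"),
        4: ("Serious level", "Crucial"),
        5: ("Extremely serious level", "Destructive"),
    }

    def needs_highest_attention(rank: int, threshold: int = 4) -> bool:
        # Events ranked 'Serious' or above receive the highest attention
        return rank >= threshold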

The partners in a collaborative business network
might have their own event assessment criteria,
through which they can share their experiences with
the other partners. During the assessment work, the
potential consequences of events should be noted
from the point of view of their impacts and
likelihood. Any critical event at one partner might
have an equal or even worse impact on the other
partners in the collaborative business. The
selection of proper partners in the business network
can reduce the potential events; good partners using
a proper event monitoring system are less risky than
average partners. The overall compensation or cost
of mitigating the impact of events can be shared
among the partners. The assessment of events gives
an overall view of all the consequences and
highlights the most important events requiring the
highest attention.
5.3. EVENT MONITORING AND CONTROL
Event monitoring is the last major element of
event management and is considered a very important
step in event planning. Monitoring an event means
reviewing and updating it continuously. It is an
ongoing process, and constant monitoring is
essential for the mitigation plan. Effective
monitoring and reporting of events help to identify
insufficient resources, inefficient use of resources
and substandard performance that detract from
customer service and product delivery. Monitoring
and reporting also support reactive systems
management, which can help the organization position
itself to meet its current needs and plan for
periods of growth, mergers or expansion of product
lines. Regular event monitoring, whether centralized
or decentralized, enables management to ensure that
resources are operating properly and used
efficiently, and to identify the root causes of
problems. The basic framework for the detection,
handling and reaction planning of an event is
displayed in Figure 2.


Figure 2 The basic framework for EMM (Rintala et al.,
2010)
From Figure 2, it can be observed that event
detection or monitoring is closely interlinked with
event handling and event reaction planning. The
event detection process identifies the operational
status within the VO and transfers the updated event
for the required handling from an activity-based
point of view. If an event is identified as a
manifestation of risk which threatens the
realization of the production system or logistics
network, it is then deployed to event reaction
planning. During event reaction planning, detailed
information related to possible events is collected
and prioritized, and resolution scenarios are
planned (Andreas, 2003). This planning process
supports the event handling, which also gets
feedback from the definitions of past events. In
event reaction planning, any deviation definitions
of possible events are forwarded to the event
detection process for the necessary handling.
Through the event management process, the risks
evolving from potential events can be deployed to be
monitored or utilised in other ways.
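
A minimal, self-contained sketch of one pass through this detection-handling-reaction loop is given below; the dictionary keys and the resolution text are illustrative stand-ins for real detection and handling logic.

    def emm_cycle(event, past_definitions):
        # Detection: decide whether the event manifests a risk to production
        if event.get("threatens_production", False):
            # Reaction planning: collect details and plan a resolution scenario
            plan = {"event": event["name"], "resolution": "re-route or re-schedule"}
            # Feedback: the plan enriches the definitions of past events
            past_definitions.append(plan)
            return "handled", plan
        # Otherwise the event is simply kept under observation
        return "monitored", None

    past = []
    print(emm_cycle({"name": "machine break", "threatens_production": True}, past))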
Monitoring the status of specific events is
required in order to control the progress of their
respective action plans. It is essential to make
sure that the partners in the collaborative business
network are aware of the status of top-level events
and the corresponding plans to manage them. Before
controlling the events, it is necessary to identify
the specific controls, analyze the controls and plan
and schedule them accordingly. To identify and
analyze the controls, the changes of events in the
business network, such as changing customer needs,
technology, partner strategies and competitors, have
to be monitored, and the event assessment has to be
updated correspondingly. The information obtained
from event identification and assessment is required
for formulating the strategic plan and schedule for
monitoring and controlling the actions for event
mitigation.
6. EMM FOR NON-HIERARCHICAL
NETWORK (NHN): AN EXAMPLE
The monitoring and managing of an event in a non-
hierarchical business network can be explained by
the following example, where an ICT-based framework
is presented. The overall structure of the
collaborative event management is displayed in
Figure 3, where three VOs, namely "Red pen", "Star
T-shirt" and "Big boots", are connected with each
other within a business network or BC. This IT-based
collaborative event monitoring and management system
provides real-time information on an event within
the BC. It also provides the details of the event,
such as the event's priority level, its name, the
time of occurrence, the name of the VO where the
event occurs, the name of the source company or
partner, the status of the event and a possible
solution for managing it. Figure 3 displays two
example events, namely "machine break" and "employee
strike", within different VOs in the BC. After the
necessary countermeasures have been taken to manage
an event, the event log updates the status of the
event to indicate whether it has been managed or
not.

Figure 3 Overview of collaborative EMM in NHN
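
The event record sketched below collects the detail fields named above into a single structure; the field names and values are illustrative assumptions about such a log, not the actual data model of the described system.

    event_log_entry = {
        "priority": "high",           # event's priority level
        "name": "machine break",
        "time": "2011-09-28T10:15",   # time of occurrence
        "vo": "Red pen",              # VO where the event occurs
        "source": "PartnerCompanyA",  # source company or partner
        "status": "unconcluded",
        "solution": None,
    }

    def conclude(entry, solution):
        # After the countermeasure, the log updates the status of the event
        entry["status"] = "concluded"
        entry["solution"] = solution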
In an NHN, the event monitoring and management
system can also be presented in a map view using
Google Maps, where the locations of the VO partners
are easily visible. Figure 4 displays an example of
partners' locations, where the possible source of an
event is visible along with the locations of the
affected partners. In this map view, the events can
be filtered by choosing a single VO or information
from all the VOs in the business network. Below the
map view in Figure 4, two tables are displayed,
titled "unconcluded events" and "concluded events".
The unconcluded events are the events that still
need to be managed by the system or are in progress,
whereas the concluded events present a history log
of past events that occurred within the business
network.

Figure 4 The map view of collaborative event
management approach
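
Filtering for the map view then reduces to selecting a VO and splitting the log into the two tables; a small sketch, reusing the keys of the hypothetical record above:

    def map_view(events, selected_vo=None):
        # Filter by a single VO, or take information from all VOs
        if selected_vo is not None:
            events = [e for e in events if e["vo"] == selected_vo]
        # Split into the two tables shown below the map
        unconcluded = [e for e in events if e["status"] != "concluded"]
        concluded = [e for e in events if e["status"] == "concluded"]
        return unconcluded, concluded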
Figure 5 displays two sub-windows of event
management, namely "create event" and "define
deviation", which can be opened from the main event
management window by pushing the "create event"
button situated in the top right corner of the
window. The "create event" sub-window contains the
create event and notification menu, which captures
the basic information for creating an event, such as
the event name, event description, urgency level of
the event (high, medium, low), event type and effect
range.



Figure 5 Display of the sub-windows for creating and
defining an event
The other sub-window, "define deviation", provides
the information relating to the filter tasks, such
as the name of the VO, the required task, the
affected orders, the necessary resources and the
specification of the deviation type. All this
required information initiates the event monitoring
and management process of an individual company or
of a collaborative business network in a real-time
environment. This implementation example provides
the fundamental ideas of monitoring and managing an
event in a business collaboration based on an IT
system.
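
The two sub-windows can be thought of as two input payloads; the sketches below simply restate the listed fields as dictionaries, with all values hypothetical.

    create_event_form = {
        "event_name": "employee strike",
        "event_description": "walkout at assembly line 2",
        "urgency_level": "high",            # high / medium / low
        "event_type": "VO event",
        "effect_range": "whole VO",
    }

    define_deviation_form = {
        "vo_name": "Star T-shirt",
        "required_task": "sewing",
        "affected_orders": ["ORD-0042"],
        "necessary_resources": ["sewing line 1"],
        "deviation_type": "temporal",       # e.g. a late operational activity
    }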
7. DISCUSSION AND CONCLUSIONS
The evolving trends of business networks demand
continuous safety during operational processes in
order to ensure quality and productivity. However,
this environment often cannot be ensured due to
operational disruptions caused by various factors at
the production sites. Before forming a business
collaboration, partners need to plan for the
possible disruptions which might occur during a
production run. Such a disruption or abnormal
situation, termed an event, can be expected or
unexpected depending on the nature of the
operational consequences. An expected disruption can
be pre-planned for, according to its adverse effect
on the operational processes within the business
network, in order to completely avoid or
substantially reduce the resulting damage. An
unexpected disruption, in contrast, cannot be
noticed early and can cause severe damage to the
operational activities. Both expected and unexpected
events lower productivity due to decreased
efficiency.
It is therefore a prime concern for a business
network to avoid such events by monitoring and
controlling them properly. The event monitoring and
managing (EMM) scenario evolving in any business
network needs to be carefully controlled in order to
run the collaboration successfully. EMM needs to be
demonstrated to the individual partners within the
business network so that they understand its interim
consequences. The effect of events on production
processes can leave the business network with
increased lead time and cost and reduced operational
safety. A pre-planned approach to handling events is
always encouraged among the collaborative partners
to ensure smooth operational processes.
In order to perform EMM in a non-hierarchical
business network, the collaborative partners need to
have a common communication infrastructure. This
communication infrastructure formulates the event
descriptions and maintains the remedial measures and
the history of the events. The event monitoring and
management system is basically controlled by an
IT-based infrastructure, through which the business
network promotes the definition and description of
the events commonly encountered within the network.
The identification, assessment and prioritization of
events are the prerequisites for managing them
properly. The event management and monitoring
process differs between hierarchical and
non-hierarchical business networks: in a
non-hierarchical network, the process is monitored
and controlled with the equal participation of the
partners, whereas in a hierarchical network it is
mostly dominated and controlled by the largest
partner.
The research presented in this article provides
the fundamental concept of the events that usually
occur in an industrial establishment. The objective
of the research is to identify events, scale them
down and demonstrate the practicality of how an
event can be managed and monitored properly,
especially in a non-hierarchical collaborative
environment. The ICT-based event management and
information modelling approach for event monitoring
is illustrated within the scope of this paper. An
example case is also elaborated with a view to
explaining the procedural steps and methodology of
the event monitoring and managing process. Future
research will extend this work towards the analysis
of several real case implications within
non-hierarchical networks, with a view to
generalizing the concept and plan for the EMM
process.
8. ACKNOWLEDGMENTS
The authors would like to acknowledge the co-funding
of the European Commission within the NMP priority
of the Seventh RTD Framework Programme (2007-13) for
the Net Challenge project (Innovative Networks of
SMEs for Complex Products Manufacturing), Ref. CP-FP
229287-2. The authors also acknowledge the valuable
comments from the reviewers and the extensive
collaboration provided by the project team during
this research work.
REFERENCES
Alexopoulos K, Makris S, Xanthakis V and
Chryssolouris G, A Web-Services Oriented Workflow
Management System for Integrated Production
Engineering, Proceedings of 43rd CIRP International
Conference on Manufacturing Systems: Sustainable
Production and Logistics in Global Networks, May
26-28, 2010, Vienna, Austria, pp 351.
Andreas O, Supply Chain Event Management: Three
Perspectives, The International Journal of Logistics
Management, Vol. 14, No. 2, 2003, pp 1-13.
Banker S, Supply Chain Collaboration: The Processes,
ARC Advisory Group: March, 2002, pp 1.
Bretzke W-R, Stölzle W, Karrer M and Ploenes P, Vom
Tracking & Tracing zum Supply Chain Event Management
- Aktueller Stand und Trends, Düsseldorf: KPMG
Consulting AG, 2002.
Camarinha-Matos LM and Afsarmanesh H, Collaborative
Networks: Reference Modeling, Springer
Science+Business Media, LLC, ISBN-13:
978-0-387-79426-6, 2008.
Camarinha-Matos LM, Afsarmanesh H, Galeano N and
Molina A, Collaborative Networked Organizations -
Concepts and Practice in Manufacturing Enterprises,
Computers & Industrial Engineering, Vol. 57, No. 1,
2009, pp 46-60.
Carneiro L, Almeida R, Azevedo AL, Kankaanpää T and
Shamsuzzoha AHM, An Innovative Framework Supporting
SME Networks for Complex Product Manufacturing,
Proceedings of the 11th IFIP Working Conference on
Virtual Enterprises (PRO-VE'10), 11-13 October,
2010, Saint-Etienne, France.
Categoric Software, Business Activity, Real Time
Enterprise, Position Statement, 2002, pp 5.
Cochran E and Bullemer P, Abnormal Situation
Management: Not By New Technology Alone,
Paper presented at the AICHE 1996 Safety
Conference, Chicago, IL. March 13-17, 1996.
Committee of Sponsoring Organizations of the
Treadway Commission (COSO), Enterprise Risk
Management - Integrated Framework, American
Institute of Certified Public Accountants, Jersey
City, NJ, 2004.
Deming E, Deming's 14 Points for Management,
Salisbury: The Laverham Press, 1992.
Pfeffer J, Merger as a Response to Organizational
Interdependence, Administrative Science Quarterly,
Vol. 17, 1972, pp 382-394.
Hallikas J, Karvonen I, Pulkkinen U, Virolainen V-M
and Tuominen M, Risk Management Processes In
Supplier Networks, International Journal of
Production Economics, Vol. 90, No. 1, 2004, pp 47-
58.
Harland C, Supply Chain Management: Relationships,
Chains and Networks, British Journal of Management,
(March), 1996, pp 63-80.
Hallikas J, Virolainen V-M and Tuominen M, Risk
Analysis and Assessment in Network Environments,
International Journal of Production Economics, Vol.
78, No. 1, 2002, pp 45-55.
Jarillo JC, On Strategic Networks, Strategic
Management Journal, Vol. 9, No. 1, 1988, pp 31-41.
Kankaanpää T, Shamsuzzoha AHM, Carneiro L, Almeida
R, Helo P, Fornasiero R, Ferreira PS and Chiodi A,
Methodology for Non-Hierarchical Collaboration
Networks for Complex Products Manufacturing,
Proceedings of the 16th International Conference on
Concurrent Enterprising, June 21-23, 2010, Lugano,
Switzerland.
Kemmeter J and Kimberly K, Supply Chain Event
Management in the Field: Success with Visibility,
AMR Research Report, 2002.
Llog, Supply Chain Event Management: Real Time
Exception Supervision and Prevention, 2002.
Makris S, Zoupas P and Chryssolouris G, Supply Chain
Control Logic for Enabling Adaptability Under
Uncertainty, International Journal of Production
Research, Vol. 49, No. 1, 2011, pp 121-137.
Rintala S, Saari A-K, Jussila H, Kankaanpää T,
Shamsuzzoha A, Toscano C, Carneiro L, Almeida R,
Chiodi A and Campbell S, Monitoring and Event
Management: Concept and Approach, Deliverable D3.5,
Final Version, Net-Challenge Project: Ref. CP-FP
229287-2, 2010.
Shamsuzzoha AHM, Kankaanpää T, Helo P, Carneiro L,
Almeida R and Fornasiero R, Non-Hierarchical
Collaboration in Dynamic Business Communities,
Proceedings of the 11th IFIP Working Conference on
Virtual Enterprises (PRO-VE'10), 11-13 October,
2010, Saint-Etienne, France.
Shingo S, Zero Quality Control: Source Inspection
and the Poka-Yoke System, Cambridge, MA:
Productivity Press, 1986.
Stefansson G and Tilanus B, Tracking and Tracing:
Principles and Practice, International Journal of
Services Technology and Management, Vol. 2, Nos. 3-
4, 2001, pp 187-206.
Thomson JD, Organizations in Action, New York:
McGraw-Hill Publishing Company, 1967.

IMPLEMENTATION AND ENHANCEMENT OF A GRAPHICAL MODELLING
LANGUAGE FOR FACTORY ENGINEERING AND DESIGN
Carmen Constantinescu
Fraunhofer Institute for Manufacturing
Engineering and Automation IPA,
Stuttgart, Germany
Carmen.Constantinescu@ipa.fraunhofer.de

Günther Riexinger
Fraunhofer Institute for Manufacturing
Engineering and Automation IPA,
Stuttgart, Germany
Guenther.Riexinger@ipa.fraunhofer.de
ABSTRACT
The optimal design and permanent adaptation of factories guarantees sustainable success and
competitiveness in a global economy. Thus, factory engineering is a key issue to be addressed. In
order to support the engineering of factories, a generic and extensible Reference Model for Factory
Planning has been developed. The Reference Model comprises systemised planning phases
and their corresponding planning activities. In this paper, different concepts and languages to model
the factory planning reference process during the Factory Life Cycle are presented. Based on a
requirement analysis considering the functionalities of the existing modelling languages, a suitable
graphical notation is selected. Furthermore, the implementation in the phase of equipment and
workplace planning is presented. The requirement for future enhancement of the graphical modelling
notation is illustrated and a corresponding roadmap is introduced.
KEYWORDS
Factory Engineering and Design, Reference Model for Factory Planning, Factory Life Cycle,
Graphical Modelling Languages

1. INTRODUCTION
To achieve sustainable success and competitiveness
in a global economy, factories need to be able to
face all challenges with optimally designed
production systems. These challenges and
requirements arise from global markets, the growing
customisation of products with short life cycles and
new adaptive production technologies. Factories and
production systems need to be designed with a focus
on sustainable and adaptable production systems
(Bullinger et al, 2009).
As a consequence, the factory planning frequency
and design complexity increase. Planning tasks and
their influencing parameters become more and more
complex and need to be approached with a holistic
optimisation of the process chains and factory
planning sequences. Furthermore, a considerable
effort is needed to coordinate all the information
and data exchange between the responsible engineers
and planners involved in a factory planning project.
A holistic and multi-scale factory engineering and
design reference process is needed to support
factory and process planning along the whole Factory
Life Cycle. Thus, the generic and extensible
Reference Model for Factory Planning was developed
to support the design and engineering of factories
(Constantinescu and Westkämper, 2009).
In this paper, the scope and concept of the
Reference Model for Factory Planning is illustrated,
and different concepts and graphical modelling
languages to model the factory planning reference
process during the Factory Life Cycle are presented.
The modelling languages are evaluated considering
the requirements and needed functionalities of the
Reference Model. An implementation example in the
phase of Equipment and Workplace Planning is given,
and new concepts for the future enhancement of the
employed graphical modelling language are presented,
arising from current research on the scenario-based
evaluation of factory planning processes and the use
of the Reference Model.

2. SCOPE AND CONCEPT OF THE
REFERENCE MODEL FOR FACTORY
ENGINEERING AND DESIGN
This section addresses the scope and concept of the
Reference Model for Factory Engineering and
Design concerning factory and process planning.
One objective of the Reference Model for Factory
and Process Planning is to support planners and
interdisciplinary teams in different planning phases
with generic factory planning steps and activities.
Several research works have approached the concepts,
purposes and tasks of factory and process planning
and provide guidelines for the factory planning
process, e.g. Aggteleky (1990), Chryssolouris (2005)
or Grundig (2009). According to these fundamental
planning concepts and the Factory Life Cycle phases
proposed by Westkämper (2008), the fundamentals of
the Reference Model for Factory and Process Planning
have been established.
The systematised and structured planning phases
that are shown in Figure-1 are the foundation of the
Reference Model. All phases and activities required
as standard and mandatory for the purpose of
factory and process planning have to be
implemented. The planning phases consist of
corresponding planning steps and individual
planning activities with inputs and output data
objects. Furthermore, these planning entities and the
identified relationships between the individual
phases and planning steps support the management
of factory data over the entire Factory Life Cycle
and therefore the development of a holistic Factory
Data Model (Constantinescu and Kluth, 2011).
Due to the individual, flexible and scenario-based
nature of the factory planning process, the
Reference Model has to be instantiated to cover the
needs and requirements of a specific planning
project and several types of factories operating in
different industrial sectors. Thus, the main features
and requirements characterising the Reference
Model are:

a) The Reference Model has to be generic, to
cover all the standardised aspects of factory
planning and to provide the basic planning
entities.
b) The Reference Model has to be modular to
enable the reconfigurable and independent use
of its factory planning phases, steps and
activities.
c) Furthermore it has to be extensible and open to
enable the implementation of additional data
entities and further planning steps within its
planning phases.

Moreover, the Reference Model has to be clearly
structured and well-defined through the use of a
graphical modelling method that covers all the
requirements concerning the scope and concept of
the Reference Model. Therefore, suitable graphical
modelling concepts and languages are presented and
evaluated in chapter 3. The Reference Model aims
to be a common basis for planners and
interdisciplinary teams involved in the factory
design process. It supports the overall factory design
process and brings benefits such as higher planning
efficiency, higher planning quality and lower
planning cost. Furthermore the Reference Model
illustrates dependencies between the single factory
design processes and its involved planning teams. In
comparison to other state-of-the-art factory planning
approaches, a wider perspective of factory
engineering and design processes with all its
interrelations and information flows is described.
Thus, different types of factories and industry
sectors are able to instantiate and refine the
Reference Model for their individual needs.
To provide additional benefits concerning the
analysis of scenario-based and individually
instantiated planning alternatives, the Reference
Model for Factory and Process Planning needs to be
enhanced with additional factors or parameters.
Thus, this paper presents a first approach concerning
the enhancement of the employed graphical
modelling language.
[Figure: planning phases of the Reference Model for Factory and Process Planning in the Digital Factory - Product Development; Investment and Performance Planning; Site and Network Planning; Buildings, Infrastructure and Media Planning; Internal Logistics and Layout Planning; Process, Equipment and Workplace Planning; Ramp-up and Project Management; Factory Operation; Maintenance and Equipment Management]
Figure 1 Continuously Integrated Factory Engineering
and Design with the Planning Phases of the Reference
Model for Factory and Process Planning
© Fraunhofer IPA


2.1. PLANNING PHASES OF THE
REFERENCE MODEL
The Reference Model for Factory Engineering and
Design and its systemisation of planning phases and
their corresponding planning activities refer to the
"Factory is a Product" and the "Multi-scale Factory"
approaches, which concentrate on the whole Factory
Life Cycle and its phases (Westkämper et al, 2006).
The focus of the Reference Model for Factory and
Process Planning is on the factory planning phases
and their planning steps. Regarding the scope of the
Reference Model, these phases have been further
structured in:

a) Investment and Performance Planning,
b) Site and Network Planning,
c) Buildings, Infrastructure and Media
Planning,
d) Internal Logistics Planning,
e) Layout Planning,
f) Process Planning,
g) Equipment and Workplace Planning and
h) Ramp-up and Project Management.

Each planning phase is composed of individual
steps and activities, which can be related to other
planning phases through the definition of the
information exchange. There is no defined order for
a specific planning sequence; instead, the
established planning phases can be arranged
situation-based and individually adapted to specific
planning needs. Thus, the overall planning phases
and steps are as detailed as possible and as generic
as necessary in order to be instantiated for
different types of factory planning scenarios and
industry sectors.
2.2. DETAILING OF THE PHASE:
EQUIPMENT AND WORKPLACE PLANNING
The developed Reference Model will be applied in
various factory engineering and design projects and
planning scenarios. Thus, a short overview of the
phase "Equipment and Workplace Planning" is given,
as the implementation of the Reference Model for
Factory and Process Planning is illustrated in this
planning phase.
Equipment and Workplace Planning is closely
related to all phases of the Factory Life Cycle.
Information from earlier phases like logistics and
layout design or process planning is crucial for the
design of factory equipment and workplaces. Required
production technologies are identified and
established in accordance with the product
requirements and production processes. Thus, the
required production resources (e.g. machines,
devices, tools, etc.) are defined. Besides the
functional design and configuration of machines, the
dimensions and capabilities are defined according to
the production operations and expected capacity
requirements (Wiendahl, 2005). Furthermore, the
capability and capacity of the production resources
need to fulfil the planned production volume
requirements (Aggteleky, 1990). In order to meet the
needs for flexible configuration of production
systems, the adaptability of planned resources has
to be considered during this planning phase
(Westkämper and Zahn, 2009).
To face these challenges, the classical methods of
industrial engineering have to be used through
innovative and efficient digital tools, which are
employed within all phases of Factory Engineering
and Design. These tools support the planning of the
number and performance of individual machines for
instance. Furthermore, the planning activities can be
followed by simulations in order to optimise the
machine parameters like setup times and the
capacity utilisation.
Parallel to the equipment planning, the
workplace has to be designed by taking into
consideration ergonomic and safety aspects. The
workplace is planned to ensure the optimal
coordinated interaction of personnel and equipment
under consideration of human capabilities and
requirements (Eversheim, 2002). The focus is on
ergonomic design of workplaces. The defined
processes, equipment and environmental conditions
(e.g. noise and ambient temperature) should be
based on human characteristics (e.g. concerning
anthropometry, biomechanics, physiology and
psychology) and abilities (Schlick, 2010). Thus,
digital human models can be used to ergonomically
optimise and validate the planned processes and
assembly sequences respectively (Schlick, 2009).
Furthermore advanced planning methods and tools
support the whole phase of equipment and
workplace planning with manufacturing and
assembly processes simulations.

3. MODELLING FOUNDATIONS:
EMPLOYMENT OF GRAPHICAL
MODELLING METHODS
Concerning the development of the Reference
Model, concepts and languages to model the factory
planning reference process during the Factory Life
Cycle have to be evaluated and a suitable modelling
method has to be selected. The evaluation is based
on the criteria and requirements from: a) Scope, b)
Clearness and Visualisation and c) Implementation
of the Reference Model.
The modelling language should meet all the
requirements regarding the scope of a generic,
modular, extensible and open Reference Model. Thus,
the modelling language must be flexible if changes
in the factory planning sequence are needed.
Furthermore, the modelling language has to represent
and visualise all planning phases and the factory
planning process with the corresponding input and
output entities as well as the involved stakeholders
(e.g. product designers, production planners,
quality engineers, etc.). It should consist of
structural elements which provide a high degree of
clearness with well-defined notation methods.
Additionally, the modelled Reference Model should be
easy to instantiate and implement in various
industry scenarios and sectors.
Thus, in the following section four widespread
graphical modelling languages and concepts are
introduced and analysed regarding their applicability
to represent the Reference Model.
The modelling methodology Integration Definition
for Function Modelling (IDEF0), as well as the
related Integration Definition for Process
Description Capture Method (IDEF3), is able to model
and describe the process steps and activities of a
factory planning scenario. These standards are part
of the IDEF modelling method family and are based on
the Structured Analysis and Design Technique (SADT).
The IDEF standards were established by the United
States Air Force within the Integrated
Computer-Aided Manufacturing Programme (Clarkson and
Eckert, 2005). The analysed IDEF languages provide
useful methods and notations for the functional
analysis and representation of processes. The
process steps are described in hierarchical
structure diagrams with input and output mechanisms
concerning the flow of information and resources
(Parnell et al, 2010).
Furthermore, the Event-driven Process Chain (EPC)
modelling method allows the graphical modelling of
business and factory planning processes. The main
focus is on representing business process concepts
from a business perspective rather than describing
the technical realisation of these processes. EPCs
are part of the Architecture of Integrated
Information Systems (ARIS) framework developed by
Scheer (Weske, 2007). Business process and factory
planning workflows can be represented in EPC
flowcharting diagrams. EPC diagrams consist of
graphs with events and functions that can be
associated with information or resource objects and
organisational units. EPC and IDEF diagrams have
clear structures, but can become very complicated if
complex processes have to be represented.
Furthermore, they are not very flexible if changes
in the process sequence occur (Anderl et al, 2008).
Thus, a different method to model and represent the
factory planning reference processes during the
Factory Life Cycle has to be selected.
Another very common standard among graphics-based
languages is the Unified Modelling Language (UML).
It is an object-oriented modelling language that is
used in the field of software engineering and
managed by the Object Management Group (OMG).
Concerning graphical notations and modelling
techniques, UML provides structure diagrams and
behaviour diagrams. To represent the Reference Model
for Factory and Process Planning, activity diagrams
can be used to describe single steps of the factory
planning process (Object Management Group, 2011b).
UML activity diagrams are technically oriented and
illustrate the network of process steps with their
connections and data flows. UML 2.x diagrams use
Petri-net-like semantics with support for parallel
executed systems and a wide scope of modellable
situations. However, the involved stakeholders need
experience in UML modelling to understand the
activity diagrams of a business process model. Thus,
a more business-oriented modelling solution is
required.
A graphical modelling notation which focuses on
the graphical representation and implementation of
business processes is the Business Process Model and
Notation (BPMN). BPMN is also published by the OMG
and provides modelling solutions and flowcharting
techniques which are comparable to UML activity
diagrams. The focus of the BPMN standard is to
provide a graphical notation to specify and model
business processes and whole process landscapes with
the relations between the single planning activities
(Object Management Group, 2011a). Furthermore, BPMN
diagrams have a high clarity and are easy to
understand for all stakeholders involved in the
planning process. Thus, the graphical modelling
method and notation BPMN was selected for the
purpose of comprehensively modelling the Reference
Model for Factory and Process Planning. The results
of the evaluation are presented in Table-1.
Table 1 Modelling Methods Analysis and Evaluation

       Scope   Clearness and    Implementation
               Visualisation
IDEF   +       -                -
EPC    +       +                +
UML    ++      +                -
BPMN   ++      ++               +





4. MODELLING METHOD FOR THE
REFERENCE MODEL: BPMN
The graphical modelling method Business Process
Model and Notation (BPMN) and its functionalities
are presented in this section. BPMN is published by
the Object Management Group (OMG) as a standard for
the modelling, implementation and execution of
business processes. It is understandable by all
business users, from the initial concepts and drafts
of the processes to the implementation of the
methods, tools and data models that are needed to
perform, manage, monitor or support business
processes (Object Management Group, 2011a).
Business processes are modelled in Business
Process Diagrams that use graphical elements with
the flowcharting technique. The main graphical
elements of the modelling notation are Flow Objects,
with Events, Activities and Gateways, as well as
Connecting Objects, with Sequence Flows, Message
Flows and Associations.
[Figure: a Pool containing Lane A and Lane B with the main BPMN flow elements]
Figure 2 Main Graphical Elements of BPMN
These elements are modelled within Swimlanes
that represent the organisational structures or
business stakeholders in Pools and Lanes. The flow
of data and the associations to data entities are
shown through Data Objects. Furthermore, Groups can
be formed and Annotations can be made through
Artifacts. An overview of these basic modelling
elements is presented in Figure-2. The graphical
elements of BPMN are specified as a standard visual
language that all process modellers and stakeholders
can understand and recognise. These elements and
shapes are implemented within the Reference Model to
compose the Factory and Process Planning Reference
Diagrams of the established factory planning phases.
Furthermore, in version 2.0 of the BPMN standard,
new features like XML schemas for model
transformation and the modelling of orchestrations
and choreographies have been included.
4.1 MODELLING OF THE FACTORY
PLANNING PROCESS WITH BPMN
For modelling the factory planning process within
the Reference Model, Flow Objects are used as one
of the main graphical elements of the Reference
Model. Flow Objects constitute the factory planning
process sequence with a start and an end event.
Furthermore, convergence and divergence within the
process sequence are defined through gateways
(Object Management Group, 2011a). The main gateways
used in the Reference Model are: a) Exclusive
Gateways (XOR) for modelling branching points where
one process sequence can be separated into two or
more alternatives and only one of them can be
chosen, b) Inclusive Gateways (OR) for modelling
branching points where two or more process flows can
be split or merged and c) Parallel Gateways (AND) to
divide a process path into two or more parallel
process sequences.
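
For illustration, the fragment below emits a minimal BPMN 2.0 process containing an exclusive gateway, using only Python's standard library; the element names follow the OMG schema, while the ids, task names and process content are hypothetical.

    import xml.etree.ElementTree as ET

    BPMN_NS = "http://www.omg.org/spec/BPMN/20100524/MODEL"
    ET.register_namespace("", BPMN_NS)

    def el(parent, tag, **attrs):
        # Create a child element in the BPMN namespace
        return ET.SubElement(parent, f"{{{BPMN_NS}}}{tag}", **attrs)

    definitions = ET.Element(f"{{{BPMN_NS}}}definitions")
    process = el(definitions, "process", id="equipment_planning")
    el(process, "startEvent", id="start")
    el(process, "task", id="identify", name="Identification of Requirements")
    el(process, "exclusiveGateway", id="xor1")   # XOR: only one alternative is chosen
    el(process, "task", id="design", name="Technical and Functional Design")
    el(process, "endEvent", id="end")
    flows = [("start", "identify"), ("identify", "xor1"),
             ("xor1", "design"), ("design", "end")]
    for i, (src, tgt) in enumerate(flows):
        el(process, "sequenceFlow", id=f"flow{i}", sourceRef=src, targetRef=tgt)

    print(ET.tostring(definitions, encoding="unicode"))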
Within the Reference Model, single process steps
and activities are connected through Sequence
Flows to define a process sequence. These
Connecting Objects illustrate the order in which the
factory planning processes are performed.
Data Objects are used to define input and output
data entities of single planning steps as well as to
provide information about the related planning
phase of the required data for the process to be
performed.
Furthermore, individual stakeholders who are
involved in particular planning activities are
stated, and milestones are defined. Besides the use
of Swimlanes, Groups are also defined to cluster
similar planning processes and therefore give a
simplified overview of each planning phase. In the
following chapter 4.2, an implementation example is
given.
4.2 IMPLEMENTATION EXAMPLE IN THE
PHASE OF EQUIPMENT AND WORKPLACE
PLANNING
BPMN is selected and employed as the modelling
method for the Reference Model presented in
chapter 2.1. Every planning step and activity, as
well as the corresponding relations, has been
implemented using BPMN. In this section, an example
of the implementation is presented for the equipment
and workplace planning phase. The phase is
structured in four main generic planning steps,
which are shown in Figure-3.

[Figure: Equipment and Workplace Planning and Design in relation to the other planning phases of the Reference Model]
Figure 3 Main Planning Steps of the Equipment and
Workplace Planning
In the first planning step, "Identification of
Requirements", the process requirements and other
general needs regarding Equipment and Workplace
Planning are defined. Safety and legal regulations,
as well as production data consisting of layout,
media, process and logistics information, are taken
into account. The following planning steps are the
"Technical and Functional Equipment and Workplace
Design" as well as the "Ergonomic and Safety Related
Workplace Design". The inclusive (OR) gateways used
specify that the execution of the planning steps can
be performed in parallel or individually in a given
sequence. After the last main planning step,
"Validation and Optimisation", a loop back can be
performed. Within this planning step, detailed
evaluations or simulations are performed, and new
knowledge can be taken into account to refine the
planned equipment and workplaces in an iterative
planning sequence.
To present the implementation example with BPMN in
detail, the planning step "Technical and Functional
Equipment and Workplace Design" is illustrated in
Figure-4. This generic example can be further
detailed and instantiated to the specific
requirements of any Factory Engineering and Design
project. The planning step "Technical and Functional
Equipment and Workplace Design" was chosen as an
example planning step within the Equipment and
Workplace Planning phase. It is composed of the
following single planning activities:
a) Equipment and Tool Planning: This planning
activity is closely related to process and layout
planning. In this planning activity, the required
production resources (e.g. machines, devices, tools,
etc.) are defined. The dimensions and functional
design of the production resources are planned under
consideration of their required capability and
capacity. Furthermore, all technological and
economic requirements as well as aspects like
maintenance and set-up have to be considered in this
planning activity.

Figure 4 BPMN Model of the Technical and Functional Equipment and Workplace Design
b) Workspace Infrastructure and Equipment
Handling: Within this planning activity, the
workspace infrastructure is designed considering the
connection to other components and to the internal
logistics system as well as the supply of, and
access to, the energy and media network of the
factory. Furthermore, the equipment handling is
planned in close relation to the phases Internal
Logistics and Buildings, Infrastructure and Media
Planning. The handling and transport systems are
defined in detail.


c) Automation and Control Planning: Within this
planning activity, the machines are developed in
close connection with other planning activities such
as virtual commissioning and the ramp-up phase.
Automated processes have to be designed and adjusted
to the production needs and the overall workplace
design. Collision-free assembly sequences and motion
paths have to be defined concerning the development
of complex robotic manufacturing zones.
d) Detailed Workplace Layout: The finalised
detailed design of the workplace layout, regarding
all technical and functional constraints as well as
the ergonomic and safety aspects from the planning
step "Ergonomic and Safety Related Workplace
Design", is established within this activity. Thus,
the final definition and location of the production
resources with the assigned work spaces and paths is
defined.
5. REQUIREMENT FOR FUTURE
ENHANCEMENTS OF BPMN
During the development and use of the Reference
Model, new requirements for the further enhancement
and application of the employed modelling method
BPMN emerged. Factory planners and engineers have to
evaluate different customised and instantiated
planning processes and sequences. Therefore,
additional factors and economic aspects such as
cost, time and quality have to be taken into account
and integrated into the Reference Model. These
additional factors or attributes are necessary to
enable the evaluation of the factory planning
process and to assess the effectiveness of a defined
planning scenario (Schenk and Müller, 2010).
Thus, the extension of the selected BPMN standard
and the further enhancement of the corresponding
modelling tools are recommended, as the integration
of economic factors and requirement aspects has not
been sufficiently considered within the BPMN
standard and current tools. The BPMN standard 2.0
(Object Management Group, 2011a) provides guidelines
and specifications for the extension of the method
and notation as well as for the implementation of
additional modelling features. There are approaches
to incorporate quality requirements and economic
factors into the BPMN standard and modelling tools,
but these do not cover the specific needs and
requirements of the factory planning process (Saeedi
et al, 2010).
Furthermore, the planner must be able to manage
factory planning risks, such as design risks, to
reach the factory planning objectives. These risks
often result from unreliable or incomplete
information on planning parameters (Weig, 2008). The
aspects and parameters concerning the risk
assessment of different factory planning scenarios
are also not supported within the current BPMN
standard. Therefore, the necessary parameters and
planning factors have to be integrated into the
employed modelling method to enable the evaluation
of factory planning requirements and the management
of risks within the planning project and its single
planning processes (Olson and Wu, 2008).
One approach to extending the BPMN standard is
the assignment of customised factors and attributes
directly to BPMN Flow Objects. The additional data
entities have to be analysed and evaluated with
individual and scenario-based calculation methods
and tools, as shown in Figure-5.
[Figure: additional factors (time, cost, risks) attached to planning scenarios 1 to n and fed into scenario-based evaluation methods]
Figure 5 Future Integration of Additional Factors
and Scenario-Based Evaluation of the Reference Model
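
BPMN 2.0's extensionElements is the standard hook for attaching such attributes to a Flow Object; the sketch below adds hypothetical time, cost and risk factors in a custom namespace (the namespace, element and attribute names are assumptions, not part of the standard).

    import xml.etree.ElementTree as ET

    BPMN_NS = "http://www.omg.org/spec/BPMN/20100524/MODEL"
    PLAN_NS = "http://example.org/factory-planning"   # hypothetical extension namespace

    task = ET.Element(f"{{{BPMN_NS}}}task",
                      id="layout", name="Detailed Workplace Layout")
    ext = ET.SubElement(task, f"{{{BPMN_NS}}}extensionElements")
    # Customised factors assigned directly to the BPMN Flow Object
    ET.SubElement(ext, f"{{{PLAN_NS}}}factors",
                  {"cost": "12000", "time": "5d", "risk": "medium"})

    print(ET.tostring(task, encoding="unicode"))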
Besides the extension of the BPMN standard,
new and enhanced planning and evaluation
functionalities have to be integrated in future BPMN
modelling tools. In the next research steps new
solutions will be developed to enhance and support
the planning of factories and production systems as
well as the scenario-based evaluation of factory
planning processes.
6. CONCLUSIONS
This paper presents different concepts, standards
and languages to model a factory planning reference
process during the Factory Life Cycle. For this
purpose, the generic and extensible Reference Model
for Factory Planning is illustrated. Different
graphical modelling standards and languages are
evaluated, and the BPMN standard is selected and
described in detail. To explain the employment of
BPMN, an implementation example in the phase of
equipment and workplace planning is given.
Concerning the limitations of the described
graphical modelling notation BPMN, further
enhancement for the evaluation of factory planning
processes within the Reference Model is required.
Thus, a first approach regarding the extension of the
BPMN standard with customised factors and
attributes directly assigned to BPMN Objects is
presented.
As a result, the proposed extension of the BPMN
standard and the further development of
corresponding modelling tools will bring a
significant enhancement to the evaluation and risk
management of factory planning processes within
the Reference Model for holistic Factory
Engineering and Design.

The research activities conducted in this paper are
partially funded by the European Commission under
the Project: VFF - Holistic, extensible, scalable and
standard Virtual Factory Framework, FP7-NMP-
2008-3.4-1.
REFERENCES
Aggteleky B., Fabrikplanung: Werksentwicklung und
Betriebsrationalisierung - Vol. 2: Betriebsanalyse
und Feasibility-Studie, 2nd Edition, Hanser,
München, 1990, pp 465-476
Anderl R., Malzacher J. and Raßler J., Proposal for
an Object Oriented Process Modeling Language,
Enterprise Interoperability III: New Challenges and
Industrial Approaches (I-ESA Conference 2008), 1st
Edition, Springer, Berlin, 2008, pp 533-545
Bullinger H-J., Spath D., Warnecke H-J. and
Westkämper E., Handbuch Unternehmensorganisation:
Strategien, Planung, Umsetzung, 3rd Edition,
Springer, Berlin, 2009, pp 26-27
Chryssolouris G., "Manufacturing Systems: Theory and
Practice", 2nd. Edition, Springer, New York, 2005, pp
329-452
Clarkson P. and Eckert C., Design process
Improvement: A Review of Current Practice, 1st
Edition, Springer, London, 2005, pp 74-76
Constantinescu C. and Westkämper E., A Reference
Model for Factory Engineering and Design,
Proceedings of the International Conference on
Digital Enterprise Technology (DET), Hong Kong, 2009
Constantinescu C. and Kluth A., Flexible Connection of
Product and Manufacturing Worlds: Concept,
Approach and Implementation, Proceedings of the
44th CIRP Conference on Manufacturing Systems,
Madison, USA, 2011
Eversheim W., Arbeitsvorbereitung, 4th Edition,
Springer, Berlin, 2002, pp 97-99
Grundig C.-G., Fabrikplanung: Planungssystematik -
Methoden - Anwendungen, 3rd Edition, Hanser,
München, 2009
Object Management Group (OMG), BPMN 2.0, OMG
documents BPMN 2.0, 2011a, Retrieved: 23.05.2011,
<http://www.omg.org/spec/BPMN/2.0/>
Object Management Group (OMG), OMG Unified
Modeling Language, Superstructure, OMG
documents UML Version 2.4, 2011b, Retrieved:
03.06.2011, <http://www.omg.org/spec/UML/2.4/
Superstructure>
Olson D. and Wu D., New Frontiers in Enterprise Risk
Management, 1st Edition, Springer, Berlin, 2008, pp
209-221
Parnell G., Driscoll P. and Henderson, D., Decision
making in systems engineering and management, 2nd
Edition, Wiley, Hoboken, 2010, pp 40-49
Saeedi K., Zhao L. and Falcone P., Extending BPMN
for Supporting Customer-Facing Service Quality
Requirements, Proceedings of the IEEE International
Conference on Web Services (ICWS), IEEE Computer
Society, Miami, 2010, pp 616-623
Schenk M. and Müller E., Factory Planning Manual:
Situation-Driven Production Facility Planning,
Springer, Berlin, 2010
Schlick C., Industrial Engineering and Ergonomics:
Visions, Concepts, Methods and Tools, Springer,
Berlin, 2009
Schlick C., Arbeitswissenschaft, 3rd Edition, Springer,
Berlin, 2010, pp 949-1152
Weig S., Konzept eines integrierten
Risikomanagements für die Ablauf- und
Strukturgestaltung in Fabrikplanungsprojekten,
Herbert Utz, München, 2008, pp 53-78
Weske M., Business Process Management, 1st Edition,
Springer, Berlin, 2007, pp 158-169
Westkämper E., Constantinescu C. and Hummel V., New
paradigms in Manufacturing Engineering: Factory Life
Cycle, Annals of the Academic Society for Production
Engineering, Research and Development, Vol. XIII/1,
2006, pp 143-147
Westkämper E., Fabrikplanung vom Standort bis zum
Prozess, 8. Deutscher Fachkongress Fabrikplanung,
Ludwigsburg, 2008
Westkämper E. and Zahn E., Wandlungsfähige
Produktionsunternehmen. Das Stuttgarter
Unternehmensmodell, 1st Edition, Springer, Berlin,
2009
Wiendahl H.-P., Planung modularer Fabriken: Vorgehen
und Beispiele aus der Praxis, 1st Edition, Hanser,
München, 2005

APPROACH FOR THE DEVELOPMENT OF LOGISTICS ENABLERS FOR
CHANGEABILITY IN GLOBAL VALUE CHAIN NETWORKS
Bernd Scholz-Reiter
BIBA Bremer Institut für Produktion und
Logistik GmbH at the University of Bremen
bsr@biba.uni-bremen.de
Susanne Schukraft
BIBA Bremer Institut für Produktion und
Logistik GmbH at the University of Bremen
skf@biba.uni-bremen.de

Mehmet-Emin Özsahin
BIBA Bremer Institut für Produktion und
Logistik GmbH at the University of Bremen
oez@biba.uni-bremen.de

ABSTRACT
Recently, logistics networks have increasingly been faced with dynamically changing influences in
their environment. In order to cope with these volatile trends, flexible adaptations with a short-term
horizon are often used but are not sufficient. Rather, a permanent adaptation of the network
structures is necessary. In this context, our current research deals with the changeability of value
chain networks triggered by internal and external influences. The general objective is the development
of a methodology for changeable network structures to support or enable necessary changes. With a
focus on logistics processes and network elements, this paper describes an approach to analyse
existing value chain networks and to identify all changeable objects and their specific change
drivers. Furthermore, an approach for the development of change enablers and the evaluation of
occurring change demands is presented. The practical applicability of the approach is assured
through the participation of two industrial partners.
KEYWORDS
value chain network, logistics, changeability, change drivers, change enablers

1. INTRODUCTION
The markets for production companies are
increasingly dynamic due to shorter product life
cycles, increasing variety in small batches and
changing customer requirements (Wildemann, 2007).
These dynamic influences do not only directly affect
the product ranges and production systems, but also
the entire value chain network (Zahn, 2010). At the
same time, the markets experience an increasing
globalization. Thus, manufacturing companies are
more and more associated within globally distributed
value chains. Global value chain networks can be
used for international locational advantages
(Stabell, 1998). However, the progressive global
integration of internal corporation plants as well
as external partners and suppliers leads to complex
mechanisms within the networks. These mechanisms are
partially or completely unknown and difficult to
forecast (Zahn, 2010). To ensure that the effects of
dynamic influences do not lead to negative emergent
behaviour in value chain networks, these networks
must be quickly reactive and adaptable.
Consequently, changeability is demanded of global
value chain networks. This paper describes an
approach for the planning and optimization of
changeable global value chain networks. Thereby, the
focus lies on
the logistics structures and processes within value
chain networks. The paper on hand describes firstly
basic terms and definitions regarding value chain
networks and changeability (section 2).
Furthermore, the industrial partners that participate
in the research will be introduced in section 3. The
main part of the paper lays on the approach for the
development of logistics change enablers that are
described in section 4. Finally, a conclusion and
outlook is given in section 5.
2. CHANGEABILITY OF VALUE CHAIN
NETWORKS
As a result of outsourcing non-core activities, many companies today are much more reliant on
external suppliers of goods and services (Christopher, 2005). This trend is reinforced by an
increasing globalization, which offers new possibilities and risks: on the one hand, globalization
provides new cooperation opportunities and business markets; on the other hand, it also leads to
increasing competition. In view of volatile environments and rising demands on the goods and
services to be provided, networks are an increasingly preferred organizational form of economic
activities (Sydow, 1992). Consequently, competition occurs not only between single companies
but between whole value added chains.
This trend also concerns the logistics processes and structures in different ways. The integration
of single companies into globally dispersed networks increases the importance of operational
logistics processes as well as of their planning and control. Furthermore, the concentration on core
competences reinforces the shifting of logistics tasks to logistics service providers. Thus, logistics
has become an important part of global value chain networks (Christopher, 2005).
The material flow within a network extends upstream over the different supplier levels as well
as downstream over the direct customers up to the final customers. The totality of this material
flow is called the supply chain or value chain. Logistics within a value chain can accordingly be
seen as a sequence of transformation processes of procurement, production and distribution
logistics.
Today, value chain networks face numerous internal and external factors of influence. External
influences are, inter alia, caused by changes in the supplier and sales markets and in social,
political and economic conditions; they are driven by increasing product individualization, the
internationalization of markets and short technical innovation cycles. Internal influences are
entailed by employees and production methods, products, technologies and network partners.
Caused by these influences, value chain networks are confronted with the challenge of adapting
themselves to changing conditions and reacting dynamically. A co-evolution of value chain
networks with their constantly changing problem fields thereby becomes necessary. This requires
that value chain networks possess the ability to change (Zahn et al., 2010).
In the literature there are several definitions of changeability for different objects of
investigation. This paper uses the two definitions that come closest to our understanding of
changeability in value chain networks. On the one hand, changeability implies that companies
inherently dispose of readily applicable process variability, structural variability and behavioral
variability (Westkämper et al., 2000); they can therefore react both reactively and anticipatively to
changes (Westkämper, 1999). Reinhart et al. (2002) and Zäh et al. (2004), on the other hand,
understand changeability as an enlargement of flexibility: changeability denotes the potential to
carry out changes reactively and, if required, proactively beyond available flexibility corridors.
Based on these definitions, changeability of value chain networks, with a focus on logistics
structures and processes, can be understood as follows. Changeability is the proactive or reactive
change of structures and processes beyond existing flexibility corridors within value chain
networks. The necessary changes are supported by the use of process-transcendent and structure-
transcendent change enablers. In this context, change enablers are tools with whose help the change
of a value chain network can be enabled and optimized. Heger (2007) describes that changeability
requires efficient change processes. These change processes require an identification of the
influences described above, named change drivers. Furthermore, it is necessary to create
changeability in a value chain network by means of adequate change enablers.
3. INVESTIGATION OBJECT
The general objective of our research is the development of a methodology for the creation of
changeable global value chain networks, with a focus on logistics structures and processes. The
applicability is assured by the participation of two industrial partners, which allows the practical
use and evaluation of the developed methodology.
One of the industrial partners is a service providing company active in the area of apparel
logistics. The company owns three logistics centers in Northern Germany where the finished goods
are stored, picked and handled according to incoming orders and customer requirements. The
company is integrated in a global value chain network. The manufacturing of the apparel goods
occurs in four production locations in Eastern Asia. Some of these locations belong to the
company's own group; the others are integrated in the network through a classical customer-
supplier relationship. The customers are located predominantly in Germany and Western Europe;
a small but growing share is delivered to customers in Eastern Europe and Asia. Besides bulk
buyers like department stores and mail-order companies, numerous retail dealers and specialist
suppliers are also amongst the customers. Further network partners are service providers for the
transportation of the products. The process focus lies on the logistics distribution processes and
order execution. Upon receipt of customer orders, the demands are transmitted to the production
centers, which control the procurement of raw materials and the production. After the transport of
the finished goods to Germany, the goods are taken over by the distribution centers, stored and
dispatched to the customers. This order execution constitutes the core business of the company.
The other project partner is a leading manufacturer of submarine and aerial telecommunication
cables, submarine power and offshore cables, as well as technical plastics and environmental
products, with its headquarters in Northern Germany. The main focus of the company lies on the
cable production located at the headquarters. Besides that, the delivery of the finished products, the
installation and the subsequent service provision are also part of the core competences. The
company's network includes approximately 50 suppliers located mainly in Germany and the rest of
Europe. The customers are distributed worldwide, with some focus on Europe and, for selected
product lines, a high share of deliveries to Eastern Europe, Asia, and North and South America.
The network also includes service providing companies for the storage and transportation of the
products. One of them is located directly beside the headquarters and is responsible for the logistics
handling from the arrival of the goods until their finishing. The focus of the cooperation within the
project lies in the evaluation of the logistics processes of the company.
Both companies are part of a globally dispersed network of suppliers, customers and service
providers. They face varied challenges, caused by a rising individualization of the products as well
as changed market standards and customer requirements. In the past, various efforts have already
been made to adapt to changing conditions and external influences. The analysis and utilization of
this experience is an important component of the research and is considered within the approach,
which is described below in detail.
4. APPROACH FOR THE
DEVELOPMENT OF LOGISTICS
CHANGE ENABLERS
The approach for the development of change enablers focuses on the logistics view and can be
divided into four steps, as pictured in Figure 1.

Work steps and results (cf. Figure 1):
1. As-is analysis of the value chain network on three levels (structure, process, organization) ->
identified objects and functions within the network, including resources and operating figures.
2. Identification of changeable objects and change drivers based on previous change processes ->
identified changeable objects including existing flexibility corridors; identified change drivers.
3. Development of change enablers to support necessary changes -> change enablers to gain
changeable network structures, change enablers for specific change drivers, change enablers for
specific network objects.
4. Evaluation of the change demand regarding performance and economic feasibility -> rated
scenarios; decision basis for the selection of a specific network scenario.

Figure 1 Approach for the development of logistics change enablers
The approach starts with a detailed analysis of the existing network (section 4.1) in order to gain
an understanding of the structures, processes and interactions within the network. The next step is
the identification of all objects within the network that need the ability to adapt. These objects are
subject to internal or external influencing factors, which will also be identified (section 4.2). The
result is the knowledge of all changeable objects, including their existing flexibility and their
specific change drivers. This is followed by the development of change enablers, i.e. methods and
concepts to support or enable necessary changes (section 4.3). The results of this phase can be
separate methods for specific objects or scenarios as well as general methods used over the whole
value chain network; the development of overall methods leads to changeable network structures.
The last step of the approach is the evaluation of the potential change demand (section 4.4). A
change has to be considered if the operating figures of the changeable objects persistently exceed
or fall below defined action control limits. Possible network alternatives, including the as-is
situation and the particular necessary change steps, have to be rated qualitatively and
quantitatively, which results in a recommendation for one specific network scenario. Subsequently,
the specific steps of the approach are explained in detail.
4.1. AS-IS ANALYSIS
The as-is analysis for the logistics view onto the value chain network follows a three-level
model, within which data regarding the structure, processes and organizational matters of a
network are evaluated. Across all three levels, operating figures and indicators are furthermore
identified. The result is the aggregation of the evaluated data: the collection of all objects within
the network and their specific functions, including the resources that are necessary to fulfill the
respective functions (Figure 2).

[Figure 2 shows objects 1..n on the structure, process and organization levels, each with functions
1..m, and the resources (employees, time, finances, information, technology, etc.) assigned to each
function.]

Figure 2 Three-Level-Model for the as-is analysis
On the structure level the existing network and product structures are examined. The evaluation
of the network structures includes the identification of the functional areas within the company in
focus as well as of external company sites and network partners, including their functions and
geographical dispersion. Within the product structure, the company's product range is evaluated
and analyzed regarding its specific characteristics. In order to link network and product structure,
it is determined which network partners are relevant for which product areas. Additionally, the
system load is determined by evaluating the quantities of final products, components and raw
materials that have been purchased or produced over time for a specific product.
Within the process level the main processes necessary for order processing are identified and
assigned to the main categories procurement, production and distribution. Besides the working
steps, the necessary resources and time demands are evaluated. Additionally, the operating figures
that measure the performance of the individual sub-processes are evaluated.
On the organization level the focus lies on the resources that are necessary for the fulfillment of
tasks. These resources include, amongst others, the company's employees as well as time and
information aspects. One main point is the evaluation of the organizational structure: here it is
analyzed how the functions that are responsible for the process handling are integrated within the
organization. A comparison of the organizational structure with the process handling allows the
identification of interfaces and communication channels. Across the whole company, an analysis
of the information and communication structures follows, in order to figure out how information
is exchanged within the network. The last point of the organization level is the evaluation of the
company's project handling: here it is examined how optimization projects are generally handled
within the company. Other points of interest are the overall cooperation within the network and the
experience gained from previous optimization and change projects.
The data collected within the different levels are aggregated as pictured in Figure 2. The
functions resulting from the process analysis are assigned to the objects identified on the structure
level. The organization level is included by assigning the necessary resources, such as employees,
information and technology, to the respective functions.
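To make the aggregation concrete, the following minimal sketch shows one way to hold the
aggregated as-is data in a simple structure; all class names, objects, figures and resource categories
are hypothetical and merely illustrate the object-function-resource mapping described above.

```python
from dataclasses import dataclass, field

@dataclass
class Function:
    """A function fulfilled by a network object, e.g. picking or dispatch."""
    name: str
    resources: dict = field(default_factory=dict)          # resource -> demand
    operating_figures: dict = field(default_factory=dict)  # KPI -> value

@dataclass
class NetworkObject:
    """A node of the value chain network, e.g. a distribution centre."""
    name: str
    level: str  # 'procurement', 'production' or 'distribution'
    functions: list = field(default_factory=list)

# Hypothetical aggregation of as-is data for one network object
dc = NetworkObject("distribution centre", "distribution", [
    Function("storage", {"employees": 12, "time_h_per_order": 0.1}),
    Function("picking", {"employees": 25, "time_h_per_order": 0.3},
             {"picks_per_hour": 95.0}),
])
```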
A short insight into the data that have to be evaluated during the as-is analysis is given by the
introduction of the industrial partners in section 3, where the companies' network partners, product
ranges and main processes are outlined. The level of examination within the as-is analysis, as well
as in the following steps, depends on the information transparency within the value chain network.
For the companies in focus, the evaluation of information can be executed on the level of single
objects. In the case of external network partners, the level of examination depends on the
information transparency between the companies; in extreme cases the lowest level of examination
is the network partner itself.
4.2. IDENTIFICATION OF CHANGEABLE
OBJECTS AND CHANGE DRIVERS
Based on the results of the as-is analysis, the objects that can be affected by change drivers have
to be identified from the totality of network objects. These objects, named changeable objects in
the following, are of great interest for the subsequent development of changeable network
structures. Along with the identification of these changeable objects, the change drivers that lead
to necessary changes are evaluated. Besides that, it is necessary to identify their existing flexibility
corridors in order to be able to recognize possible change demands. Normally, objects possess a
certain flexibility corridor within which they can be adapted to changed conditions without
undergoing general changes. If operating figures constantly reach values above or below the
flexibility corridor, a change has to be considered.
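As a minimal illustration of such corridor monitoring, the sketch below flags a potential change
demand when an operating figure stays outside its flexibility corridor for several consecutive
observations; the persistence rule and all numbers are assumptions for the example, not part of the
methodology itself.

```python
def change_demand(values, lower, upper, persistence=3):
    """Return True when an operating figure leaves the flexibility
    corridor [lower, upper] for `persistence` consecutive observations."""
    run = 0
    for v in values:
        run = run + 1 if (v < lower or v > upper) else 0
        if run >= persistence:
            return True
    return False

# e.g. monthly delivery reliability in percent, corridor 90..99
print(change_demand([97, 96, 88, 87, 86, 91], lower=90, upper=99))  # True
```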
To identify these changeable objects, their specific change drivers and their existing flexibility
corridors, it is helpful to draw on past experience. This experience can be used to derive general
aspects that characterize changeable objects, which allows a thorough identification of all
changeable objects within a network and their specific change drivers. For this reason, previous
change projects are detected, analyzed and evaluated together with the industrial partners. The first
step of this analysis is the identification of the superior change driver that induced the decision to
undergo the examined change process. Examples of such change drivers are, amongst others, the
adaptation of the product range caused by changed customer demands, or the sale of finished
products in a new market based on increasing purchasing power. The next step is the identification
of the objects and functions that were affected by the change. The changeable objects are analyzed
concerning the changes and adaptations they underwent. Possible changes are, e.g., the
enlargement or diminution of an object's functions, the transfer of functions to other objects, or the
elimination of single functions. In this context, an analysis of the necessary resources, such as
employees, time, financial aspects and information, and of the demands on these resources within
the change process is executed. Moreover, an evaluation of the concerned objects' performance
trends takes place, using the operating figures from the as-is analysis. Thereby, the focus lies on
the alteration of the characteristic curves over time before, during and after the change, which
allows finding the objects' flexibility corridors. To identify the change drivers within the change
project, it is examined which change drivers caused the documented trends and led to operating
figures above or below the defined limits.
Based on the experience from previous change projects, general aspects are derived that allow
the identification of changeable objects throughout the network. Starting from the changeable
objects, the change drivers are then identified by examining the factors that have an impact on the
performance of these objects.
4.3. DEVELOPMENT OF CHANGE
ENABLERS
The described deviation of operating figures from the defined flexibility corridor indicates that
the concerned functions are no longer able to fulfill their tasks satisfactorily. In these cases it has
to be evaluated whether a permanent adaptation of the network is necessary, or whether it is
possible to restore the performance by isolated concepts. If a permanent adaptation of the network
is necessary, the possible network scenarios have to be identified and evaluated qualitatively and
quantitatively; this evaluation is described in detail below. Beforehand, possible methods and
concepts (change enablers) to support or enable permanent network changes are described.
Regarding their sphere of action, three classes of change enablers can be differentiated.
The first class of change enablers stands for methods that can be implemented within the
network with the aim of attaining changeable network structures. These change enablers are
implemented before a specific change demand occurs and support an efficient change process.
Their development is based on the changeable objects: as described above, the identification
includes the deduction of the general demands on the resources concerned by possible changes.
Possible change enablers could be concepts regarding the qualification of employees or the
information flow within the network.
The second class of change enablers comprises methods that can be deployed when specific
change drivers occur. As the design and development of a change enabler can require high cost and
time efforts, it is not efficient to provide change enablers for every change driver. Thus, the
development of these change enablers requires an evaluation and prioritization of the change
drivers in advance, considering, inter alia, the occurrence probability and the expected impact.
Based on this evaluation, the change drivers can be prioritized, which serves as a basis for deciding
which change drivers need a specific change enabler. An example of a specific change enabler is a
method for the systematic selection of customers in order to reduce the risk of payment deficits
caused by a lack of creditworthiness.
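A minimal sketch of such a prioritization, with hypothetical change drivers and assumed estimates
for occurrence probability and impact, could rank the drivers by their expected impact as follows.

```python
# Hypothetical change drivers: (occurrence probability 0..1, impact 1..10)
drivers = {
    "shift in customer demand": (0.6, 7),
    "transport cost increase":  (0.7, 4),
    "new market entry":         (0.2, 9),
    "supplier insolvency":      (0.1, 8),
}

# Rank by expected impact = probability * impact; specific change enablers
# would be developed only for the top-ranked drivers.
ranked = sorted(drivers.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
for name, (p, i) in ranked:
    print(f"{name}: score {p * i:.1f}")
```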
The last class of change enablers includes methods for specific network objects. These change
enablers can be reasonable if a network contains critical objects that are often affected by change
drivers or that are important for the company's performance.
4.4. EVALUATION OF CHANGE DEMAND
As described above, the change demand has to be evaluated if, due to change drivers, the
changeable objects can no longer fulfill their functions within their flexibility corridors. In this
case, possible scenarios of alternative value chain networks have to be developed and evaluated.
To decide about possible adaptations and network scenarios, it is necessary to evaluate the change
demand in advance. This evaluation includes two steps, as pictured in Figure 3. Firstly, the network
scenarios have to be analyzed and compared regarding their performance and costs. Secondly, the
necessary change processes have to be developed and assessed regarding time, resource and cost
efforts.

[Figure 3 compares the possible network scenarios (network 1, network 1.1, network 1.2, ...,
network 1.m) regarding performance and evaluates the associated change processes regarding
effort (time and costs) and feasibility.]

Figure 3 Evaluation of change demand
For the evaluation of the different network scenarios, the characteristics of each scenario,
including the current network, have to be analyzed under the changed conditions caused by the
change drivers. For the logistics target achievement, it is examined how the performance of the
concerned objects develops throughout the planned change. The local change of performance for
single objects within the value chain network is thereby subject to restrictions that can be induced
by the performance of upstream and downstream value chain levels. For the evaluation of the
change effects, it is therefore necessary to know the interdependencies and reciprocal interactions
of the logistics target values, their characteristics and their mechanisms of action along the whole
value chain network. According to Wiendahl (2007), these complex interactions are known as the
polylemma of operations planning in production logistics and the polylemma of materials
management in the areas of procurement and distribution. Here, not only the isolated cause-effect
relations within the logistics area of the value chain network but also the links between both fields
of logistics tension are considered (cf. Fastabend 1997, Nyhuis 2003).
Thereby, the whole value chain network can be seen as a sequence of transformation processes
of procurement, production and distribution logistics. Thus, for the identification, the
characteristics of the operating figures and characteristic curves have to be analyzed for all value
chain steps. Based on this, the reciprocal interactions of the operating figures and characteristic
curves are examined.
The result of this analysis is a parameterized value chain network that shows all characteristics
of the operating figures and characteristic curves of the network objects and their
interdependencies with upstream and downstream value chain levels. With the use of these
characteristics, the effects of planned changes on the parameterized value chain network are
analyzed: the change of operating figures within the different value chain levels leads, through the
examined interactions, to the planned change process. The evaluation allows illustrating the
changed logistics target achievement within the changed value chain network. In addition to the
logistics performance, the performance of adjacent areas that can be indirectly affected by the
change has to be examined, too. This is necessary to avoid isolated optimizations that lead to a
deterioration in other areas.
In addition to the described evaluation of the network scenarios, the change process has to be
evaluated qualitatively and quantitatively. Therefore, it is necessary to identify and assess the
change steps, including the resources involved, the time and cost efforts, and the feasibility. If the
change can be supported by specific change enablers for single objects or change drivers, as
mentioned above, the effort to implement the change enabler has also to be considered.
The result of the change demand evaluation is a rating of the analyzed scenarios regarding their
performance and economic efficiency as well as the necessary effort for the change process. This
evaluation provides a decision basis for the selection of a specific network scenario.
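A simple way to picture such a decision basis is a weighted scoring of the scenarios; the criteria,
weights and ratings below are illustrative assumptions, not values from the described evaluation.

```python
# Normalized criteria (0..1, higher is better; change_effort = 1.0 means
# no change effort). Weights and ratings are invented for the example.
weights = {"logistics_performance": 0.4, "operating_cost": 0.3,
           "change_effort": 0.2, "feasibility": 0.1}

scenarios = {
    "as-is network": {"logistics_performance": 0.45, "operating_cost": 0.70,
                      "change_effort": 1.00, "feasibility": 1.00},
    "scenario 1.1":  {"logistics_performance": 0.80, "operating_cost": 0.60,
                      "change_effort": 0.40, "feasibility": 0.80},
    "scenario 1.2":  {"logistics_performance": 0.70, "operating_cost": 0.75,
                      "change_effort": 0.55, "feasibility": 0.90},
}

def score(ratings):
    return sum(weights[c] * ratings[c] for c in weights)

for name, ratings in scenarios.items():
    print(f"{name}: {score(ratings):.2f}")
print("recommended:", max(scenarios, key=lambda s: score(scenarios[s])))
```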
5. CONCLUSION AND OUTLOOK
For value chain networks to react adequately to dynamically changing influences, changeable
structures must already be developed during network configuration. In this context, our research
pursues the aim of developing concepts and methods for the realization of changeable value chain
networks. The presented paper describes an approach for the development and evaluation of
changeable value chain networks and their change demands. Besides an approach for the as-is
analysis of existing value chain networks and for the identification of changeable objects, their
existing flexibility corridors and specific change drivers, the paper introduces different classes of
change enablers and an approach for the evaluation of occurring change demands.
The practical applicability of the approach will be verified by means of two industrial partners.
This requires knowledge of their existing network structures. Thus, further steps of investigation
will be the detailed consideration of the network structures and the identification of changeable
objects and their specific change drivers. A prototypical evaluation of potential change demands
will be realized based on the possible change drivers within the partners' networks. This will be
followed by the evaluation of specific change enablers that enable the industrial partners to react
to potential change drivers.
Besides that, a general characterization of changeable objects will be derived, which serves as a
basis for the subsequent development of change enablers to gain changeable network structures.
6. ACKNOWLEDGMENTS
This research is funded by the German Federal Ministry of Education and Research (BMBF) as
part of the project POWer.net (Planning and Optimization of changeable global value chain
networks).
REFERENCES
Christopher, M., Logistics and supply chain management: creating value-adding networks, 3rd
Edition, Pearson Education Limited, Harlow, Great Britain, 2005
Fastabend, H., Kennliniengestützte Synchronisation von Fertigungs- und Montageprozessen,
Fortschritt-Berichte VDI Reihe 2, No. 452, VDI-Verlag, Düsseldorf, 1997
Heger, C., Bewertung der Wandlungsfähigkeit von Fabrikobjekten, Diss. Leibniz University, PZH
Verlag, Garbsen, 2007
Nyhuis, P. and Wiendahl, H.-P., Logistische Kennlinien. Grundlagen, Werkzeuge und
Anwendungen, Springer-Verlag, Berlin, 2003
Reinhart, G., Berlak, J., Effert, C. and Selke, C., Wandlungsfähige Fabrikgestaltung, in: ZWF
Zeitschrift für wirtschaftlichen Fabrikbetrieb, Vol. 1/2, No. 97, 2002, pp 18-23
Stabell, Ch. B. and Fjeldstad, O. D., Configuring Value For Competitive Advantage: On Chains,
Shops, and Networks, in: Strategic Management Journal, Vol. 19, 1998, pp 413-437
Sydow, J., Strategische Netzwerke. Evolution und Organisation, Wiesbaden, 1992
Westkämper, E., Die Wandlungsfähigkeit von Unternehmen, in: wt Werkstattstechnik online, Vol.
4, No. 89, 1999, pp 131-140
Westkämper, E., Zahn, E., Balve, B. and Tilebein, M., Ansätze zur Wandlungsfähigkeit von
Produktionsunternehmen, in: Werkstattstechnik, No. 90, 1/2, 2000, pp 22-26
Wiendahl, H.-P., Betriebsorganisation für Ingenieure, Volume 6, Hanser Fachbuchverlag,
München, 2007
Wildemann, H., Fertigungssegmentierung. Leitfaden zur fluss- und logistikgerechten
Fabrikgestaltung, 11. Vol., München: TCW, 2007
Zäh, M.F., Müller, N., Prasch, M. and Sudhoff, W., Methodik zur Erhöhung der
Wandlungsfähigkeit von Produktionssystemen, in: ZWF Zeitschrift für wirtschaftlichen
Fabrikbetrieb, Vol. 4, No. 99, 2004, pp 173-177
Zahn, E., Tilebein, M., Reichel, A., Goll, F. and Haag, H., Strategische Früherkennung in
Wertschöpfungsnetzwerken, Schriftenreihe der Hochschulgruppe für Arbeits- und
Betriebsorganisation e.V. (HAB), 2010, pp 87-101


AUTOMATION OF THE THREE-DIMENSIONAL SCANNING PROCESS
BASED ON DATA OBTAINED FROM PHOTOGRAMMETRIC
MEASUREMENT
Roman KONIECZNY
Poznan University of Technology
Institute of Mechanical Technology
Poznań, Poland
roman.konieczny@put.poznan.pl

Andreas RIEL
Grenoble University of Technology
G-SCOP Laboratory
Grenoble, France
andreas.riel@grenoble-inp.fr

Maciej KOWALSKI
Poznan University of Technology
Institute of Mechanical Technology
Poznań, Poland
maciejkow@poczta.fm

Wiesław KUCZKO
Poznan University of Technology
Institute of Mechanical Technology
Poznań, Poland
wieslaw.kuczko@doctorate.put.poznan.pl

Damian GRAJEWSKI
Poznan University of Technology
Institute of Mechanical Technology
Poznań, Poland
damian.grajewski@put.poznan.pl

ABSTRACT
The article presents a general concept for the automation of a three-dimensional scanning process
using structured light projection technology. To take measurements, a scanner is moved and
positioned at set points by a robot. A cloud of points, representing the scanned object and generated
as a result of the photogrammetric measurement process, is used as input data for the creation of
the robot control program. The model generated in the process is analyzed by an application which
has been developed by the authors and which calculates positions for an industrial robot with a
fitted scanner. The described procedure has been tested during measurements of car body parts
using a GOM Atos scanner, a Tritop photogrammetric system and a Kuka industrial robot.
KEYWORDS
Reverse Engineering, Photogrammetry, 3D Scanning.

1. INTRODUCTION
The engineering design process is nowadays widely supported by CAD/CAM/CAE systems,
which have become a key tool of engineering designers. To develop a technical product, one must
create its digital images, for use not only in the computer design records but also in virtual
simulations, engineering calculations, design optimization and the development of technological
software.
From another point of view, the shapes of products are becoming more and more complicated
as customers' esthetic and ergonomic requirements grow. A model or utility design is often created
by a visual artist or stylist designer who uses traditional materials such as wood, clay or plastic
foam. Consequently, the complex surfaces created as a result of artistic ideas are very difficult to
represent using conventional CAD tools.
Digital representations required in the different stages of the product development process are
often obtained using reverse engineering (RE) methods. The objective of RE is to precisely
recognize and document the structures, dimensions and operations of existing technical objects.
This technique is often used when the documentation of a damaged or destroyed object is not
available. It can also be applied for the digitalization of a physical conceptual model created by a
visual designer; reverse engineering is in this case an integral part of the product development
process (Sokovic and Kopac, 2006), (Zhang, 2003).
The easiest way to create a digital image of an object is to measure it using manual or automated
measurement tools and, based on the obtained data, to create a digital representation, usually in the
form of solid or surface 3D CAD models. If the shape of the object is too complicated and the usual
measurement methods do not provide the amount of data necessary to create a model, a 3D
scanning process is used.
Techniques most frequently used for digitalization include optical scanning with the use of
structured light, and laser scanning (Cheng et al, 2010), (Kus, 2009), (Son et al, 2002). Both
methods are very accurate, with up to 0.02 mm accuracy, but also time-consuming, despite the
short duration of a single measurement of only a few seconds. When scanning large objects, e.g.
car bodies, it is necessary to take a number of measurements from different perspectives, which
can lengthen the process to several hours.
To shorten the time needed by a scanner operator to determine at which point the next
measurement should be taken, the authors propose using the methods described in this study,
which allow the scanning process to be automated.
In the proposed methodology, the coordinates of the consecutive scanner positions, necessary
to correctly collect information about the geometric shapes of the examined model, are determined
on the basis of an approximate geometric model generated from quick photogrammetric
measurements. A control program for the industrial robot is generated from the determined
coordinates, and the robot moves the scanner through the series of measurements.
There are ready-to-use systems available on the market in which the scanner is coupled with a
robot. However, these systems are only used to determine the spatial position of the scanner based
on an available CAD model and to compare the model generated by the scanning process with the
CAD model, producing a coloured map of deviations (Gom, 2010); alternatively, a set of fixed
scanner positions is used to measure a class of similar objects (Callieri et al, 2004), (Zhao et al,
2008).
The solution designed by the authors, apart from determining the scanner positions for objects
of known geometry, allows automating the scanning of objects for which a CAD representation is
not available.
2. THREE-DIMENSIONAL SCANNING
3D scanning is a technique in which the shape of a real object is mapped and saved in a digital
form. Optical scanners which operate based on the structured light projection method (so-called
stripe scanners) are most commonly used in reverse engineering.
The scanner's projector projects a pattern of stripes of known density onto a given object. The
straight lines are distorted according to the deformation of the object surface, and the image is
recorded on the sensor matrices of two cameras. Using the input data (the structure of the light, the
camera-recorded image, the calibration parameters, and the angle between the projection direction
and the read-out direction), coordinates are calculated for each pixel of the camera. One
measurement generates a cloud of points whose number directly depends on the resolution of the
cameras used (Cheng et al, 2010). The principle of operation of a stripe scanner is presented in
Figure 1.


Figure 1 Principle of operation of a 3D scanner
Engineering photogrammetry is an alternative method for obtaining three-dimensional data. It
is used especially for measuring large objects or for fast inspection of object location (Clarke and
Wang, 2000), (Hefele and Brenner, 2000), (Luhmann et al, 2007), (Maas, 1997).
Photogrammetric measurements in reverse engineering processes allow generating a cloud of
reference points representing a given object on the basis of an appropriately taken series of
photographic pictures. Photogrammetric analysis requires suitable photographic equipment and
appropriate automatic image analyzing software. Depending on the requirements, metric
(phototheodolites) or non-metric cameras are used, as many photogrammetric systems allow
calibrating standard digital cameras. Specific markers are used in close range photogrammetry:
some of them are positioned on the measured object (markers), others are positioned in its vicinity
(code markings), allowing the created pictures to be oriented against each other. Additionally,
scaling rods must be positioned in the photographed scene to re-scale the produced cloud of points
to its actual size.
The prepared object is then photographed from different camera positions. The received images
are analyzed and, thanks to the superposition of data from many pictures, three-dimensional data
are generated in the form of a cloud of reference points representing the markers placed on the
object. The data are then imported by the scanner software, which prevents errors connected with
matching the successive scans. If a GOM Atos scanner is used, as in the authors' study, the
maximum size of the measured object should not exceed three times the size of the measuring field
(500 x 500 mm).
By contrast, with photogrammetric measurements, objects of fifteen meters and more can be
scanned without any accuracy losses. In the described project, the collected data are additionally
used to create a rough model of the examined object, based on which the developed application
generates the next desired spatial positions of the scanner for the individual areas of scanning.
3. AUTOMATED SCANNING
METHODOLOGY
3.1. INITIAL PHOTOGRAMMETRIC
MEASUREMENT
The algorithm of the industrial robot's programming method using photogrammetric data is
presented in Figure 2.



Figure 2 Analysis of the automated scanning process



At first the object is prepared for photogrammetric measurements using the Tritop system.
Markers are positioned on the object so that the mesh of triangles, which is later spread over the
cloud of points to create a rough model, corresponds as closely as possible with the real object;
any characteristic points, like pockets or protrusions, are also represented.
Because of the requirements of the subsequent scanning procedure, transparent and strongly
reflective objects require additional matting using chalk spray. The object, once prepared using the
mentioned techniques, is positioned in the photographed scene together with the scaling rods and
calibration crosses (Fig. 3).


Figure 3 Object prepared for photogrammetric
measurements
Every cross has a set of code markings allowing the pictures to be oriented against each other;
additionally, one of them determines the origin of the local coordinate system. The next step
consists in taking pictures, which are sent online to a computer via a local wireless network. There
they are subjected to an image-based analysis, generating a cloud of reference points reflecting the
markers positioned on the object (Fig. 4).
Figure 4 also presents the imaged calibration crosses, the local coordinate system and the scaling
rods matched in size with the measured object.

Figure 4 Object generated using the acquired 3D data
The coordinates of the points in REF format are sent to the scanner software, while the same
data in IGS format are imported into the Catia V5 software, where a mesh of triangles is spread
and later saved in STL format. Figure 5 presents a rough model of a fender with the reference
points shown.

Figure 5 Rough fender model in triangle mesh format
3.2. DATA PROCESSING FOR ROBOT
CONTROL
The representation of a surface in STL format gives the coordinates of the vertices of a mesh of
triangles and the coordinates of the normal vector of each triangle, determining the outward and
inward facing surfaces. The data in text format are imported by the described application. In the
first phase of the algorithm, the whole object is divided into smaller sectors whose sizes directly
depend on the size of the scanner measurement field.
The measurement field is a surface; however, considering the scanner's depth of focus, it has
been assumed that triangles included in a cube with side length a, defined according to the
measurement field, will be searched. The location of the first cube is determined on the basis of
the vertex having the smallest X, Y, Z coordinates. The following cubes are built by adding the
side length a, one after the other, in the positive directions of the X, Y and Z axes.
After defining the length of the cube side and its position, a spatial filter is created, which
searches the data obtained from the STL file for all triangles present in the area of the cube. The
set of selected triangles also includes those which are only partially present in the set space. In the
next step, the surface area of those triangles is calculated, and a weighting factor, proportional to
the area, is determined.
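The sketch below illustrates the spatial filter and the area-based weighting with NumPy; the
vertex-based inclusion test is a simplification of the filter described above, and the array layout is
an assumption.

```python
import numpy as np

def triangles_in_cube(tris, cube_min, a):
    """Select the triangles that lie (at least partly) inside an
    axis-aligned cube with corner `cube_min` and side length a.
    `tris` has shape (n, 3, 3): n triangles x 3 vertices x XYZ.
    A triangle counts as inside if any of its vertices falls into
    the cube (a simplification of the filter described in the text)."""
    inside = np.any(np.all((tris >= cube_min) & (tris <= cube_min + a),
                           axis=2), axis=1)
    return tris[inside]

def triangle_areas(tris):
    """Surface area of each triangle, used as the weighting factor."""
    e1, e2 = tris[:, 1] - tris[:, 0], tris[:, 2] - tris[:, 0]
    return 0.5 * np.linalg.norm(np.cross(e1, e2), axis=1)
```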
Considering this factor, and based on the normal vectors of the respective triangles, an averaged
direction vector is calculated for the analyzed fragment of the surface. Its origin is determined by
averaging the coordinates of the centres of the triangles located in the analyzed set. An additional
condition, verified in the described procedure, analyzes the angular deviation of every triangle's
normal vector from the direction of the resultant vector.


Figure 6 Scanner spatial positions generated by computer
software
Every triangle must be positioned at an angle appropriate to the averaged direction vector of the
examined surface. This results from the fact that the scanner has a maximum angle of observation
of the examined surface that still allows correct scanning. If, for a given triangle, the angular
deviation is higher than the adopted acceptable value and, moreover, the percentage share of its
surface area in the surface area of the considered fragment of the solid exceeds the declared
threshold value, its data, together with the data of other triangles failing to meet the conditions, are
saved in a separate file. These data are loaded again by the application and determine the positions
of additional scanning procedures as soon as the analysis of the other areas is finished.
For every required measurement, the position of the scanner is determined on the axis of the
resultant vector, at a previously defined distance l, which ensures a correct scanning process.
Figure 6 shows the scanner positions calculated by the presented algorithm for the scanned fender.
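A minimal sketch of this step, under the same assumed array layout as above, computes the
area-weighted mean normal of a sector and places the scanner at the distance l along it.

```python
import numpy as np

def scanner_pose(tris, normals, l):
    """Stand-off point and viewing direction for one sector; `tris` has
    shape (n, 3, 3) and `normals` (n, 3) are the STL facet normals."""
    e1, e2 = tris[:, 1] - tris[:, 0], tris[:, 2] - tris[:, 0]
    w = 0.5 * np.linalg.norm(np.cross(e1, e2), axis=1)  # area weights
    origin = (w[:, None] * tris.mean(axis=1)).sum(axis=0) / w.sum()
    n_mean = (w[:, None] * normals).sum(axis=0)
    n_mean /= np.linalg.norm(n_mean)
    # Scanner sits at distance l along the averaged normal, looking back
    return origin + l * n_mean, -n_mean
```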


Figure 7 Scanning simulation in Catia environment
3.3. SCANNING TESTS
The described tests have been carried out using a Kuka KR30 industrial robot equipped with a
specially designed and produced holder for the GOM Atos optical scanner, a GOM Tritop
photogrammetric measurement system and the software developed by the authors.
The Kuka KR30 industrial robot (Fig. 8) is responsible for moving the scanner to the next set
position. To minimize the total scanner travel and the scanning time, the application uses a
procedure that schedules the scanning points with the use of a genetic algorithm.






Figure 8 Automatic scanning process performed by the industrial robot Kuka KR30

To adapt the genetic algorithm for the application, a crossover operator has been individually
developed, with configurable input parameters including the number of individuals in a generation,
the number of generations and the probability of mutation. An optional initial optimization using
the nearest neighbour method has also been provided for; see the sketch below.
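The sketch below shows only the nearest neighbour seeding; the genetic algorithm itself
(selection, the custom crossover operator and mutation) is omitted, and the function name is
hypothetical.

```python
import numpy as np

def nearest_neighbour_order(points):
    """Initial ordering of scanner positions by the nearest-neighbour
    heuristic, used here as a seed for further optimization."""
    points = np.asarray(points, dtype=float)
    order, unvisited = [0], list(range(1, len(points)))
    while unvisited:
        last = points[order[-1]]
        nxt = min(unvisited, key=lambda i: np.linalg.norm(points[i] - last))
        order.append(nxt)
        unvisited.remove(nxt)
    return order
```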
The procedure, depending on the complexity of the scanned model's geometry, allows
shortening the total scanner travel by almost a factor of three compared to unscheduled data. To
be able to position the scanner precisely, the robot must have its global coordinate system linked
with the local system determined by the calibration cross. Before the final creation of the robot
control program, its planned movements are visualized by loading the path geometry data into the
CATIA system (Fig. 7). This method protects against possible scanner collisions when scanning
the test object.
The system must be calibrated before the scanning procedure is launched, i.e., the local system's
position and orientation in the global robot system must be determined by indicating the origin of
the local coordinate system and its X and Y axes. To do so, the robot is set up using a manual
control system so that a selected, characteristic point of the actuator coincides with the points of
the local system.
The method is accurate enough for scanning with the use of a robot. The tool, i.e. the scanner,
must also be calibrated. The origin of the tool's coordinate system is located at the intersection of
the optical axis of the projector and the frontal scanner plane. The points generated by the
application are later loaded into the Matlab environment, where they are converted into coordinates
using a homogeneous transformation matrix typical for the robot controller. In the case of the Kuka
robot, a frame describing the tool (gripper) position includes an X, Y, Z position vector and A, B,
C angles of rotation around the individual axes; a sketch of this conversion is given below. After
the set scanner position is reached, the user calls up a single scanner shot remotely, while the
scanner travels to the next position under the control of a Matlab script. Fourteen single
measurements were taken for the fender presented in Figure 9.
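The conversion from a homogeneous transformation matrix to such a frame could look as
follows; the sketch assumes the common KUKA convention R = Rz(A)·Ry(B)·Rx(C) with angles
in degrees and should be checked against the controller documentation before use.

```python
import numpy as np

def matrix_to_kuka_frame(T):
    """Convert a 4x4 homogeneous transformation into an (X, Y, Z, A, B, C)
    frame, assuming R = Rz(A) @ Ry(B) @ Rx(C) and angles in degrees."""
    R, t = T[:3, :3], T[:3, 3]
    a = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
    b = np.degrees(np.arctan2(-R[2, 0], np.hypot(R[0, 0], R[1, 0])))
    c = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
    return (*t, a, b, c)
```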
3.4. TEST RESULTS
The scanning procedure produces a 3D fender model in the form of a triangle mesh (Fig. 9).
Figure 10 presents the model generated by scanning with a scanner fitted on a stand. Some missing
surface fragments can be seen in hard-to-reach places. They result from the robot's constrained
operating area, which limits the size of the scanned object and the access to points which require
the actuator to be rotated out of the range of its axes of rotation. Large objects can be moved relative
to the robot, maintaining the global coordinate system in the same position, with the calculations
made again in the application.
4. CONCLUSIONS
An innovative concept for the automation of a three-dimensional scanning process using
structured light projection technology has been presented, along with its implementation using
well-established industrial equipment and control software. The significant shortening of the entire
scanning procedure, which results from cutting the time needed by the operator to move the scanner
to the next position, is undoubtedly the biggest advantage of the described method.
In the presented example, it took the operator almost an hour to scan the fender using a scanner
fitted on a stand, while it took only about 10 minutes to scan it using the robot. The labour intensity
required for preparing the scanner and the object for the scanning procedure is similar in both cases
(positioning markers, matting, selecting the appropriate field of measurement, calibrating the
scanner).




Figure 9 3D fender model scanned using the described
method



Figure 10 3D fender model scanned using the standard
method
The described method requires additional photogrammetric measurements, but these take only
a little time, thanks to the online analysis of the consecutive pictures straight after they are sent to
the system (less than 10 minutes in the case of the fender); for large objects, such measurements
are also necessary in the traditional method.
Moreover, the spreading of the cloud of points itself and its analysis in the described application
are not time consuming. The longest time is spent on the robot preparation: defining the origin of
the global coordinate system and the tool (scanner). This procedure, however, is carried out only
once for a given setting of the calibration cross which determines the system.
At this point in time, the authors are not aware of any similar 3D-scanning automation solution
which is comparable in ease of implementation, cost and efficiency. They are now working further
on the described method, adding a turntable that allows the entire scanning procedure to be applied
to medium-sized objects and thus extends its functionality.


5. ACKNOWLEDGMENTS
This work has been financially supported by the
Polish Ministry of Science and Higher Education,
research project no. 1478/B/T02/2009/36.
REFERENCES
Atkinson, K.B., Close Range Photogrammetry and Machine Vision, Whittles Publishing, U.K.,
1996
Callieri M., Fasano A., Impoco G., Cignoni P., Scopigno R., Parrini G., Biagini G., RoboScan: an
Automatic System for Accurate and Unattended 3D Scanning, Proceedings of the 2nd
International Symposium on 3D Data Processing, Visualization, and Transmission,
Thessaloniki, Greece, 2004, p. 805-812
Cheng F.-H., Lu Ch.-T., Huang Y.-S., 3D Object Scanning System By Coded Structured Light,
Third International Symposium on Electronic Commerce and Security, Guangzhou, China,
2010, p. 213-217
Clarke T. and Wang X., The Control of a Robot End-Effector using Photogrammetry, International
Archives of Photogrammetry and Remote Sensing, Vol. 33 (B5), Amsterdam, 2000, p. 137-142
Hefele, J. and Brenner, C., Robot Pose Correction Using Photogrammetric Tracking, Machine
Vision and Three-Dimensional Imaging Systems for Inspection and Metrology, Photonics East,
Boston, 2000
Kus A., Implementation of 3D Optical Scanning Technology for Automotive Applications,
Sensors, 9 (2009), p. 1967-1979
Luhmann T., Robson S., Kyle S., Harley I., Close Range Photogrammetry: Principles, Techniques
and Applications, Wiley, 2007
Maas, H.-G., Dynamic Photogrammetric Calibration of Industrial Robots, Videometrics - SPIE's
42nd Annual Meeting, San Diego, 1997, p. 106-112
Sokovic, M. and Kopac, J., Reverse Engineering as Necessary Phase by Rapid Product
Development, Journal of Materials Processing Technology, 175, 2006, p. 398-403
Son S., Park H., Lee K.H., Automated Laser Scanning System for Reverse Engineering and
Inspection, International Journal of Machine Tools & Manufacture, 42, 2002, p. 889-897
Zhang, Y., Research into the Engineering Application of Reverse Engineering Technology, Journal
of Materials Processing Technology, 139 (2003), p. 472-475
Zhao Y., Zhao J., Zhang L., Qi L., Development of a Robotic 3D Scanning System for Reverse
Engineering of Freeform Part, International Conference on Advanced Computer Theory and
Engineering, Phuket, Thailand, 2008, p. 246-250
Automated Robot Inspection Cell for Quality Control on Sheet Metal Components,
<http://www.gom.com/uploads/media/automated_metrology_EN.pdf>


RECOMMENDING ENGINEERING KNOWLEDGE IN THE PRODUCT
DEVELOPMENT PROCESS WITH SHAPE MEMORY TECHNOLOGY
Ralf Theiß
Ruhr-Universität Bochum
theiss@lmk.rub.de

Sven Langbein
Ruhr-Universität Bochum
langbein@lmk.rub.de

Tim Sadek
Ruhr-Universität Bochum
sadek@lmk.rub.de

ABSTRACT
Shape Memory Technology (SMT) opens up new approaches to actuators and sensors, but a broad
application of SMT is hindered by two major issues: first, industrial users lack the necessary
knowledge to apply SMT in their products; second, there is a lack of simplified tools for scientists
to make their research findings available to industrial users more easily. Therefore, SMT-specific
engineering knowledge has been collected in the Knowledge and Method base for Shape Memory
Alloys (KandMSMA). An assistance system has been integrated into the KandMSMA which
supports scientists in publishing new content and industrial users in finding relevant content.
The article at hand presents an analysis of the initial situation which led to the development of the
assistance system, and a description of the assistance system itself. Based on this, the article
concludes with an evaluation of the usability of the assistance system and an outlook on an
enhanced SMT-based product development process.
KEYWORDS
Shape Memory Technology, Product Development, Recommender System, Methodical Support

1. INTRODUCTION
SMT represents an innovative, but heretofore sparsely used, approach to developing novel
actuators and sensors. Shape Memory Alloys (SMA) are, for instance, employed in unlocking
actuators as described in (Sadek et al, 2010). However, there are two crucial obstacles to a broader
employment in diverse products. For one thing, the material properties and effects place higher
demands on the skills of the product developers. For another, developers lack exactly the
knowledge that is necessary for a useful integration of SMT into their products. As described in
(Langbein, 2009), the required knowledge comprises information on the effects and material
properties of different SMA and, especially, information that supports the SMT-specific product
development process as described methodically in (Sadek et al, 2010) and (Breidert and Welp,
2002).
Within the interdisciplinary SFB459, various research findings in the field of materials science
and a methodology for the development of SMT-based products have been generated. These
research findings were then edited for industrial transfer and made available online in the wiki-
based KandMSMA. The acquired research results stem from diverse fields of materials science,
mechanical science and product development.
In order to address the lack of information and to enable an optimal transfer of the research
results to industry, an approach for improved information provision has to pursue two goals. The
first goal is the development of a recommendation-based assistance system for the support of
scientific authors and industrial users, which will enable a simplified documentation of research
results and their provision in the KandMSMA. Here, easy-to-operate and supportive software tools
keep the effort for inserting and searching for information low, so that an improved exchange of
information between industrial users, especially product developers, and scientists is facilitated.
The second goal is to integrate methodical support, oriented on existing process models for
methodical product development, into the assistance system to be developed. This methodical
support has to provide information adapted to the developer's progress in the product development
process.
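As an illustration of what a recommendation-based provision of contents can mean, the sketch
below ranks wiki articles by text similarity; this is not the KandMSMA's actual mechanism, and
the article titles are invented for the example.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented article titles and (abbreviated) texts
articles = {
    "Two-way effect of NiTi alloys": "shape memory effect two-way NiTi training",
    "Dimensioning of SMA wire actuators": "actuator wire dimensioning stress strain NiTi",
    "Laser melting of SMA components": "laser melting processing NiTi components",
}
titles = list(articles)
tfidf = TfidfVectorizer().fit_transform(articles.values())

def recommend(title, k=2):
    """Return the k articles most similar to the one currently read."""
    i = titles.index(title)
    sims = cosine_similarity(tfidf[i], tfidf).ravel()
    sims[i] = -1.0  # exclude the article itself
    return [titles[j] for j in sims.argsort()[::-1][:k]]

print(recommend("Dimensioning of SMA wire actuators"))
```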
2. INITIAL SITUATION
In the course of the research in SFB459, the wiki-based KandMSMA has been implemented in
several iteration stages (cf. (Welp and Breidert, 2001)). The KandMSMA already includes a major
part of the research findings in the form of articles on different groups of themes. The KandMSMA
is therefore a central tool for the documentation and provision of interdisciplinary research findings
and for the support of the development process of SMT-based products. The contents of the
KandMSMA essentially stem from three topic areas, as shown in Table 1: basic knowledge of
SMT, a methodical guideline for product development with SMT, as well as the manufacturing
and processing of SMA pre-products and SMA components.

Table 1 Different content domains within the KandMSMA
- Basic knowledge of SMT: effects; material characteristics (alloys, polymers); characteristics in
use (corrosion, wear)
- Guidelines for product development: process model for SMT; methods for dimensioning;
standardisation; application samples
- Manufacturing and processing of SMA: processing (master forming, forming, finishing);
processing tools (laser melting)
The first area comprises basic knowledge on shape memory materials and effects. The
information included in this area describes the material properties of SMA as well as the
corresponding experiments; the microstructural composition of SMA and the characteristics of the
phase transformation stages are elaborated on in particular. The second area comprises information
on a methodical guideline for product development with SMT. Here, the individual steps and the
methods used in them are described for a product development process that is adapted to the
development of SMT-based products. Furthermore, examples of the employment of SMT in
diverse products are to be found in this area. The third area finally deals with aspects of the
manufacturing and processing of shape memory alloys. On the one hand, information on SMA and
their components comes up under this area; on the other hand, there is also information on the
manufacturing and processing of SMA.
A major task of the KandMSMA that has already been accomplished to date is the collection
and provision of the information on SMT generated in SFB459. In contrast to conventional
databases and wiki systems, and due to the future implementation of an assistance system, the
efficient and targeted provision of the contents stored in the KandMSMA comes to the fore. The
usage of the KandMSMA and the integrated assistance system supports developers in the
development of SMT-based products along the entire product development process and allows
them to access the required information faster. The focal point of the information contained in the
KandMSMA is the support of a developer in the early phases of product development, i.e.
planning, conception and design.
A further intended task of the assistance system in the KandMSMA is the support of developers
in the qualitative inspection of the results generated in the product engineering process with
appropriate tools. Besides classic checklists or to-do lists, which a developer can use in individual
steps during product development, simple interactive tools, e.g. for dimensioning, are supposed to
be available, too. In addition, the user is to be enabled through appropriate contents to use other
supportive tools for development with SMT, for instance in the way described in (Breidert and
Welp, 2003). The use of the KandMSMA thereby leads to a considerable reduction of the iteration
loops and thus of the development effort.


2.1. TARGET GROUP FOR SUPPORT VIA
AN ASSISTANCE SYSTEM
The support via the assistance system primarily
addresses industrial users. However, it is also
necessary to address the circle of scientific authors
with this support. The profiles and goals of the
addressed groups of people from research and
industry are heterogeneous. They differ in their way
of usage as well as in their standard of knowledge.
Both in industry and in research, the standard of
knowledge on SMT varies considerably, so that
differently experienced users or user groups have to
be considered for the development of the assistance
system, as shown in Table 2. Since the KandMSMA
and the assistance system are available online, the
differing experience of the users with online media
and online publications also has to be considered.
The essential user groups can therefore be
characterized by means of the following criteria:
standard of knowledge in the domain of SMT,
experience with publication in and use of online
media, manner of use of the KandMSMA, frequency
of usage of the KandMSMA, and demand for
support in the use of the KandMSMA and on SMT.
The following reference groups and application
scenarios are therefore taken as a basis for the
development of the assistance system:
Science (I): SMT-experienced material scientists
who are sufficiently experienced in dealing with
online media and who are planning to publish their
research findings in the KandMSMA. It is expected
that these users have sufficient experience in their
area of studies. Still, due to the scope and
complexity of the information in the KandMSMA,
they are not able to integrate their research findings
sufficiently well into the structure of the
KandMSMA. These users are to be supported
primarily in the integration of these contents into the
KandMSMA.
Science (II): SMT-experienced material
scientists who want to inform themselves of the
latest research findings on different SMA and to
integrate their own findings into the KandMSMA.
Just as user group Science (I), they have sufficient
experience in their own area of studies. By contrast,
they have only inadequate experience in dealing
with online media. Due to the scope and complexity
of the information in the KandMSMA, they are also
unable to integrate their research findings
sufficiently well into the structure of the
KandMSMA. On the one hand, the user group is to
be assisted in integrating their research findings into
the existing contents of the KandMSMA with less
effort. On the other hand, they are to be enabled to
navigate to the contents that are interesting to them
by means of simple and intuitive tools.
Industry (I): Product developers who want to
implement SMT in their products and therefore need
information on the development of SMT-based
products. The developers lack the necessary basic
knowledge and methodical knowledge for the
development of SMT-based products. Consequently,
support for the developers has to be given in two
ways. For one thing, the developers are to be led
through the development process methodically. For
another, they are to receive the information
necessary for their current step in the development
process.
Industry (II): SMT-inexperienced prospective
customers who want to inform themselves about
SMT and its uses in general. This exemplary
user group is particularly interested in sample
applications and information on the different effects
that are presented to them in a targeted fashion.
The emphases in the development of the
assistance system are on developers (Industry I) and
scientific authors (Science I and II).
Table 2 User properties

                               Science (I)       Science (II)      Industry (I)         Industry (II)
Knowledge on SMT               high              high              low                  low
Experience with
online media                   high              low               medium               low
Intention of use               transfer of       transfer of       application of       collecting
                               research results  research results  SMT in products      information
Type of use                    active,           active,           passive (assisted),  passive (assisted),
                               publishing        publishing        consuming            consuming
Frequency of use               medium - high     seldom            medium               seldom
Required assistance
for publishing                 low - medium      high              - (high)             - (high)
Required assistance
for using KandMSMA             low               low               medium               high



2.2. CHALLENGES DURING THE
PROVISION OF INFORMATION IN THE
KANDMSMA
In the existing KandMSMA, groups of themes and
articles are arranged in an evolved hierarchical
structure, a section of which is depicted in Figure 1.
This structure leads to difficulties with the
integration and finding of information in the
KandMSMA, both for the scientific authors and for
the industrial users. This deficit is now to be
remedied by the assistance system which is
integrated into the KandMSMA.

[Figure: excerpt of the hierarchical content structure with an example
article; legible node labels include Analytical Methods, Actuator
Dimensioning, Phenomenological Model, Development example, Requirements,
Function structure, Working principles, and Dimensioning and Simulating.]
Figure 1 Section of the content structure used in the
KandMSMA with article example
Two essential problems with the integration and
finding of contents emerge from the present wiki
system of the KandMSMA. The first problem
consists in the basic possibility of assigning articles
to several groups of themes and thereby to different
positions in the hierarchical structure. For example,
it is possible to class an article on the manufacturing
of an SMA both under the area of manufacturing
and under the area of SMA. This problem can be
further subdivided into three sub-problems:
1. While articles are integrated, the structure
can be complemented with user-individual
substructures, mostly without using existing suitable
substructures. This leads to several thematically
identical parallel structures within the KandMSMA.
2. New articles are integrated into
inappropriate substructures, which leads to an
intermingling of different groups of themes and
makes the subsequent location of contents more
difficult.
3. New articles are not classed deeply enough
within the hierarchy and thereby not
unambiguously, so that articles on different groups
of themes are located at the same place in the
hierarchy. Owing to the low hierarchy level, these
articles are indeed correctly classed, but are hard to
find within the bulk of articles located in this place.
This situation emphasizes a second problem. Due
to their varying standards of knowledge, users of the
KandMSMA can only with difficulty class articles
in the complex and scientifically oriented
hierarchical structure of the KandMSMA.
Furthermore, the structure of the articles allows for
different interpretations by users from diverse
disciplines and with various standards of
knowledge. As a consequence, less adept users who
do not know this structure sufficiently will have
difficulty finding the articles they are looking for.
This aspect applies especially to industrial
development engineers who want to inform
themselves about SMT and integrate it into their
products. In addition to this, the existing system is
characterized by a merely static provision of
information to the user and by a requirement of
substantial human resources for changing and
annotating articles.
3. APPROACH TO RECOMMENDATION-
BASED ASSISTANCE
In order to address the problems in the existing
KandMSMA, a new approach has been developed to
assist users with a sophisticated assistance
system. The approach is based on two basic
functions, as described in the two following
subchapters. On the one hand, recommendations are
made for the annotation of contents, so that these
can be integrated into the KandMSMA in the
simplest possible way. On the other hand, suitable
recommendations of contents are generated in a
personalized way, based on these annotations, for
individual users, in particular product developers.
Here, the users are offered two kinds of
recommendations. The first kind comprises
recommendations that are similar or contextually
related to the contents viewed by the user. The
second kind of recommendation is a series of
articles, in which the individual contents are ordered
in a sequence suitable for reading and processing.
This second kind of recommendation is especially
suitable for the stepwise support of a product
development process. The implementation of these
two functions facilitates a detailed and more
efficient provision of information, so that published
research findings can be made available to the
developers in an easy fashion. By means of a
personalized provision of information, a reduction
of the time that a user needs for finding information
in the KandMSMA is also to be achieved.
3.1 RECOMMENDATION OF ANNOTATIONS
FOR NEWLY-ADDED CONTENTS
A new solution approach, based on a
semi-automated annotation of articles, has been
developed for the publishing of new contents in
articles. In this context, the term semi-automated
refers to the fact that the assistance system
recommends seemingly suitable annotations by
means of a combination of content-based and rule-
based methods. Users can accept, alter or reject
these annotations for their articles. In this approach,
the complex hierarchical arrangement of articles in
the KandMSMA is replaced by a list of acceptable
keywords and categories, from which the
annotations and thereby the relations between
articles are created. Contrary to the existing
solution, articles are thus available in a non-
hierarchical structure and are therefore non-
redundant in the hierarchy. Individual users who
want to integrate and annotate new articles are not
necessarily meant to know the entire set of available
keywords and categories, but can confine themselves
to the subareas that are relevant to them. Moreover,
by recommending possible annotations, users are to
be induced to use existing keywords and categories
rather than to introduce keywords and categories of
their own. Finally, the semi-automated annotation is
supposed to minimize the formation of parallel
content structures and competing annotations, which
make the subsequent location of contents more
difficult. In this way, the time and the number of
actions required to insert new contents into the
structure of the KandMSMA are reduced through
the semi-automated annotation of the contents.
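
To illustrate the mechanics of the semi-automated annotation, a minimal
sketch is given below. It is not the KandMSMA implementation, whose code
is not published in this paper; all names and the keyword list are
assumptions. It merely shows how candidate annotations from a controlled
vocabulary could be ranked against a new article by simple term matching,
with the final accept, alter or reject decision left to the author.

import re
from collections import Counter

# Hypothetical controlled vocabulary of acceptable keywords; the actual
# KandMSMA list is maintained editorially and is not reproduced here.
VOCABULARY = ["shape memory alloy", "actuator", "dimensioning",
              "corrosion", "laser melting", "requirements"]

def tokenize(text):
    """Lower-case the text and count word occurrences."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def recommend_annotations(article, max_hits=3):
    """Rank vocabulary entries by how completely they occur in the article."""
    words = tokenize(article)
    scored = []
    for keyword in VOCABULARY:
        # A multi-word keyword matches to the degree that all its words appear.
        score = min(words[w] for w in keyword.split())
        if score > 0:
            scored.append((score, keyword))
    scored.sort(reverse=True)
    return [kw for _, kw in scored[:max_hits]]

# The author may now accept, alter or reject the proposed annotations.
print(recommend_annotations(
    "The dimensioning of a shape memory alloy wire actuator ..."))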
3.2 PERSONALIZED RECOMMENDATION
OF INFORMATION
As the second basic function of the assistance
system, the recommender system (Klahold, 2009)
recommends to users a subset of the contents of the
KandMSMA that is relevant in a context, e.g. the
information necessary for development. The context
comprises the user profile, the totality of the
contents of the KandMSMA and the situation in
which the user interacts with the assistance system.
The user profile itself is composed of various
explicit properties, i.e. indicated by the user, and
implicit properties, which are deduced from the
user's behavior. Examples of explicit information
are contact data and personal data as well as
information on the industrial or scientific
background of the users, their research fields or
professions. There is also the option for a user to
state preferences for different topics, which the
assistance system considers for recommendations.
Topic preferences are also deduced from the user's
behavior as implicit properties. Since the assistance
system is to provide accompanying support for the
developers along the development process, the
progress in the development process additionally
has to be recorded in the user profile. Finally, a
situation describes further influencing variables
which the system can consider during the generation
of a recommendation. Examples of this are the
system deployed by the user or the protocol of the
current browser session. During the development of
the assistance system, the consideration of the
situation only plays a minor part.
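
As a purely illustrative sketch (all field names are assumptions, not the
system's actual data model), the composition of such a user profile from
explicit properties, implicit properties and the recorded development
progress could be expressed as follows.

from dataclasses import dataclass, field

@dataclass
class UserProfile:
    """Illustrative profile combining explicit and implicit properties."""
    # Explicit properties, stated by the user.
    name: str
    background: str                          # e.g. "industry" or "science"
    stated_preferences: set = field(default_factory=set)
    # Implicit properties, deduced from observed behavior.
    derived_preferences: dict = field(default_factory=dict)
    viewed_articles: list = field(default_factory=list)
    # Progress in the development process, recorded for methodical support.
    development_phase: str = "planning"

    def record_view(self, article_id, annotations):
        """Update the implicit preferences when the user views an article."""
        self.viewed_articles.append(article_id)
        for topic in annotations:
            self.derived_preferences[topic] = \
                self.derived_preferences.get(topic, 0.0) + 1.0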
On the whole, the information used by the
assistance system can be subdivided into three main
categories (Burke and Ramezani, 2011). The first
category collects information on the user for whom
the recommendation is generated. This information
is contained in the user profile as mentioned above.
The second category describes information that can
be deduced from the collective behavior of all users.
This behavior especially includes the contents that
have been searched for and viewed successively, as
well as the annotations assigned by the individual
users. Furthermore, evaluations and opinions can
also be included in the recommendation generation.
The third and last category comprises information
contained in the articles of the KandMSMA
themselves. The properties of these articles, for
example their annotations, serve as the primary
basis for this.
Content-based and collaborative approaches as
well as combinations of these are used for
processing the available information. At the same
time, further supportive approaches, such as
editorially created rule-based systems, can be
deployed. There is a variety of approaches, as
illustrated e.g. in (Burke and Ramezani, 2011) and
(Manning et al, 2008). Content-based approaches
determine recommendations on the basis of the
similarity of contents. Contents are compared e.g.
by reference to the terms occurring in them or the
annotations used for them. This means that contents
which are similar to the contents that have already
been viewed or defined as preferences in the user
profile are recommended. Collaborative approaches,
by contrast, neglect the actual contents and their
properties. Instead, they use statistics on the
behavior of users during the employment of the
KandMSMA and search for similarities in user
behavior. Here, the sequences and coherences in
which the contents of the KandMSMA are called up
and employed by users are of particular interest.
Both content-based and collaborative approaches
make use of mathematical algorithms such as Term
Frequency-Inverse Document Frequency (TF-IDF,
cf. (Burke and Ramezani, 2011) and (Manning et al,
2008)), Latent Semantic Analysis (LSA, as in
(Landauer et al, 1998) or (Guillermo and Jose,
2010)) or Latent Dirichlet Allocation (LDA, (Blei et
al, 2003) and (Krestel et al, 2009)) in order to
determine the relevance of a recommended content.
A more detailed explanation of these algorithms is
beyond the scope of this paper.
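
To make the content-based idea concrete, the following minimal sketch
applies standard TF-IDF weighting with cosine similarity (in the sense of
Manning et al, 2008). It is an illustration under assumed example data,
not the authors' implementation.

import math
import re
from collections import Counter

def tf_idf_vectors(docs):
    """Compute a sparse TF-IDF weight vector for each document."""
    tokenized = [re.findall(r"[a-z]+", d.lower()) for d in docs]
    df = Counter(t for tokens in tokenized for t in set(tokens))
    n = len(docs)
    return [{t: tf * math.log(n / df[t]) for t, tf in Counter(tokens).items()}
            for tokens in tokenized]

def cosine(a, b):
    """Cosine similarity between two sparse vectors."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

articles = ["martensitic phase transformation of alloys",
            "dimensioning of wire actuators",
            "phase transformation stages in shape memory alloys"]
vecs = tf_idf_vectors(articles)
# Rank the other articles by similarity to the currently viewed one (index 0).
ranking = sorted(range(1, len(vecs)),
                 key=lambda i: cosine(vecs[0], vecs[i]), reverse=True)
print(ranking)  # article 2 shares weighted terms with article 0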



3.3 CHALLENGES IN RECOMMENDATION
GENERATION
Two significant problems have emerged especially
in the realization of the semi-automated annotation.
The first one consists in the finding of appropriate
approaches that are suitable for a support of the
annotation of new contents. As already described,
these are based on user-individual or
social/collaborative information or on the contents
to be annotated. Even though all of these approaches
would be generally suitable to recommend
annotations, in this special case of publishing new
contents they are utilizable only under specific
conditions. Thus if the author is not the only one
who has access to these contents, collaborative
approaches are utilizable after the publication of the
contents in the KandMSMA only. Therefore, they
are not suitable for recommending annotations for
new content.
Contrary to this, annotations for new contents can
already be recommended when using content-based
and user-individual approaches. The second crucial
problem with semi-automated annotation consists in
the fact that these approaches have to draw on a
sufficiently large reference corpus in order to be
able to recommend appropriate annotations for new
contents. By contrast with the recommender systems
and databases applied elsewhere, only a rather small
text corpus is available to the present KandMSMA
(cf. e.g. (Burke and Ramezani, 2011) and (Manning
et al, 2008)). Hence a concept for an assistance
system that solves all of the problems mentioned
above by a combination of the different approaches
has been developed specifically for SMT as
described in the following.
4. OVERALL CONCEPT OF THE
ASSISTANCE SYSTEM
The development aims at the creation of an
assistance system that is integrated into the already
existing KandMSMA and that, on the one hand,
effectively provides the user with personalized
information from the KandMSMA and, on the other
hand, enables a simplified publication of new
information. Figure 3 shows the basic sub-
approaches and their interaction in facilitating the
recommendation of annotations and articles.
4.1 RECOMMENDING ANNOTATIONS FOR
NEWLY INTEGRATED CONTENT
On the one hand, the contents to be newly integrated
are analyzed by individual methodical components
of the assistance system, so that a recommendation
for an annotation can be generated. In this process,
the aforementioned content-based approaches are
employed in combination with editorially generated
rules. Editorial parts are of particular importance to
the recommendation of annotations. In this manner,
improvements in the annotation could be achieved
in the text corpus of the KandMSMA at hand, and
recommended series of articles could be generated
by using editorial rules.
For the recommendation of articles from the
KandMSMA, on the other hand, collaborative
methods are used in addition to the content-based
and editorially defined rules. In order to recommend
annotations for new contents, the author first
integrates the content into the KandMSMA.


[Figure: the user interface (used for inserting content and annotations;
displays content and recommendations) forwards a new article, its
annotation (optionally given by user input) and the user profile to the
recommendation system, which tracks viewed/used content and user behavior.
Recommended article annotations are consolidated from text analysis and
profile analysis (content-based, editor-defined, behavior-based); article
recommendations are consolidated from content-based, editor-defined and
collaborative methods via association rules, text statistics and database
queries, yielding articles considered relevant by other users, articles
considered relevant by editors and statistically relevant articles. The
result is stored in the KandMSMA as article + annotation + optional
notification for experts.]
Figure 3 Overall concept for the assistance system


The content which is thus forwarded to the
assistance system is then examined along with the
user profile via a text and profile analysis. In a
second step, a comparison between the new content
and the existing content is drawn. By means of this,
it becomes possible, e.g., to class a new article with
the topic of requirement determination in the
planning phase according to the terms used in it.
Editorially created association rules utilize the terms
used in an article in order to recommend additional
annotations. In the profile analysis, interests and
actions that are saved in the user profile are matched
with the new contents. It is then determined to what
extent a user has already been active in a subject
area. This makes it possible, e.g., to inform experts
in this subject area about new contents in cases
where there is no accordance. The individual
annotation recommendations from these sub-
approaches are subsequently merged into a general
annotation, which the user can now accept or alter.
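
How this merging into one general annotation might work can be pictured as
a weighted consolidation. The weights and names below are illustrative
assumptions, since the paper does not specify the merging rule.

def consolidate(sources, weights, top_k=5):
    """Merge annotation scores from several sub-approaches into one proposal.

    sources maps a sub-approach name (e.g. 'content', 'rules', 'profile')
    to its annotation scores; weights expresses the trust in each approach.
    """
    merged = {}
    for name, scores in sources.items():
        for annotation, score in scores.items():
            merged[annotation] = (merged.get(annotation, 0.0)
                                  + weights.get(name, 1.0) * score)
    return sorted(merged, key=merged.get, reverse=True)[:top_k]

proposal = consolidate(
    {"content": {"actuator": 0.9, "alloys": 0.4},
     "rules":   {"dimensioning": 1.0, "actuator": 0.5},
     "profile": {"alloys": 0.7}},
    weights={"content": 1.0, "rules": 1.5, "profile": 0.5})
print(proposal)  # the author may now accept or alter these annotations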
4.2 APPROACH TO RECOMMEND CONTENT
FROM THE KANDMSMA
Analogously to the recommendation of
annotations for new contents, user profiles and
actions are used as a basis for the recommendation
of existing contents. For this, the user is observed by
the assistance system while making use of the
KandMSMA. This observation is to determine
especially the user's interests and context, which can
be deduced e.g. from the annotations of the articles
read. These annotations are employed by three
groups of methods for recommendation generation.
In the behavior-based group, the user profile is
matched with the profiles of other users, and
contents that other users have viewed in the same
context or with the same interest are selected. The
editorially rule-based recommendation group
deduces further recommendations for potentially
relevant contents and series of articles from the
viewed articles. Content-based methods are finally
used as the last group, generating recommendations
of articles that are similar in content to the articles
viewed. With adaptations, the mathematical
algorithms described for the recommendation of
annotations can also be used here. Analogously to
the recommendation of annotations, the individual
recommendations are finally merged by a control
system into a general recommendation.
4.3 PRESENTING RECOMMENDATIONS TO
THE USER
Starting from support during the conceptual
development of SMT-based products, developers
are to be supported in the design of their products
and in the realization of a prototype.
Developers are to be offered checklists,
catalogues and examples as methods of support.
These methods are to be complemented with tools,
e.g. ones that support developers in their
calculations. An example of a tool of that kind is
depicted in Figure 4 (German is used as the main
language in the KandMSMA). This tool enables a
coarse dimensioning of wire actuators. The tool
itself is embedded into an article which describes
the fundamental principles and calculations for
dimensioning an SMA wire actuator.


Figure 4 Screenshot section of an embedded tool for
dimensioning SMA wire actuators
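
The paper does not reproduce the tool's equations. As a hedged
illustration only, a coarse dimensioning of an SMA wire actuator can start
from the elementary relations F = sigma * A and stroke = epsilon * L,
where the usable stress sigma and the recoverable strain epsilon are
assumed here as typical NiTi wire values, not values from the paper.

import math

def dimension_sma_wire(force_n, stroke_mm, stress_mpa=150.0, strain=0.04):
    """Coarse dimensioning of an SMA wire actuator (illustrative only).

    Required cross-section:  A = F / sigma   (MPa = N/mm^2)
    Required wire length:    L = stroke / recoverable strain
    Returns (wire diameter in mm, wire length in mm).
    """
    area_mm2 = force_n / stress_mpa
    diameter_mm = 2.0 * math.sqrt(area_mm2 / math.pi)
    length_mm = stroke_mm / strain
    return diameter_mm, length_mm

d, l = dimension_sma_wire(force_n=20.0, stroke_mm=2.0)
print("wire diameter = %.2f mm, wire length = %.0f mm" % (d, l))
# -> wire diameter = 0.41 mm, wire length = 50 mm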
Functions and interfaces for presenting
recommendations are grouped into the following 4
groups: search, portals, lists (e.g. tag clouds) and
graphs, which are used to provide the
recommendations to the user. Search, portals and
lists in particular are commonly known to most
users because of their wide use on the internet.
The recommendations are integrated into these
functions. For example, the search results and the
content presented in portals are filtered and ordered
based on the user preferences. Lists in the form of
tag clouds also suggest relevant annotations to the
user. The support of the developers occurs via a
targeted provision of the information and tools
contained in the KandMSMA and is guided by the
assistance system. As a result, the assistance system
is to gradually lead developers through the
development process by means of the information
provided in the articles and also to instruct them to
proceed in a methodical way. This support is given
by a recommendation of the KandMSMA articles
relevant to the respective developer on the basis of
the personalized user profiles and statistics on the
use of the KandMSMA by other users.
To this end, user profiles which describe the
interests and focus areas as well as the usage history
of a user are generated by the system. This means
that the system can attempt to classify developers
into a phase of the product development according
to the articles they have viewed. In consequence of
this classification, the assistance system then
recommends suitable contents and series to the
developers. Figure 5 illustrates the function for the
recommendation of series relevant to the progress in
the development process that is provided by the
assistance system. Here, the recommended contents
are depicted in the form of a graph.

[Figure: graph of recommended articles, with nodes Mainpage, Determine
requirements, Structure requirements, Analyse requirements, Document
requirements, Documented requirements and Correlation network.]
Figure 5 Example of the recommendation of an article
series for determining requirements
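
A minimal sketch of such a phase classification is given below. The
mapping from annotations to phases and the majority-vote heuristic are
assumptions for illustration only, not the system's actual logic.

from collections import Counter

# Hypothetical mapping from article annotations to development phases.
PHASE_OF_ANNOTATION = {"requirements": "planning",
                       "function structure": "conception",
                       "working principles": "conception",
                       "dimensioning": "design"}

def classify_phase(viewed_annotations):
    """Guess the developer's current phase from viewed-article annotations."""
    votes = Counter(PHASE_OF_ANNOTATION[a]
                    for a in viewed_annotations if a in PHASE_OF_ANNOTATION)
    return votes.most_common(1)[0][0] if votes else "planning"

# A developer who mostly reads requirements articles is classified into the
# planning phase, so the requirements article series is recommended.
print(classify_phase(["requirements", "requirements", "dimensioning"]))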
5. USABILITY OF THE APPROACH
The evaluation of the implemented assistance
system consists of several test series, beginning with
a simple survey and ending with an evaluation
project. At the current state of the project, a first set
of acceptance and usability tests was conducted by
means of a survey, whose summarized results are
presented in the following section.
The survey was targeted at determining the
acceptance and usability of the supporting
functionality and interfaces provided by the
assistance system. It is important to determine the
acceptance by the users in order to improve this
functionality and these interfaces for faster and
more simplified access in the further development
of the assistance system. This can include changes
to existing functions or even the removal of
functionality that is not accepted. It has to be noted
that the focus is on the usability of the assistance
system.
The determination of the usability is based on the
analysis of the benefit of the functionality provided
by the assistance system for solving given tasks. For
this reason, users have to answer five SMT-specific
questions with an increasing level of difficulty. The
level of difficulty of a question is defined by the
complexity of the actions which are necessary to
answer it. For example, easier questions can be
answered by searching for specific articles in the
KandMSMA. In contrast to this, questions with a
higher level of difficulty require reading and
understanding a series of articles.
The analysis of the acceptance is based on the
frequency of use of the different functions and
interfaces for answering the questions. In addition to
this, users are asked to estimate the popularity of the
provided interfaces and their knowledge in the
domain of SMT. Functions and interfaces are
grouped into the 4 groups already introduced above:
search, portals, lists (e.g. tag clouds) and graphs,
which are used to provide the recommendations to
the user.
In the course of the experiment, users have to
answer the questions first without the help of the
assistance system and then with it.
5.1 EXPERIMENT SETUP AND EXECUTION
Based on the described targets, a questionnaire
with three parts has been generated. The first part of
the questionnaire consists of a self-assessment and
an estimation of the popularity of different
interfaces and functionalities. The second part
contains the SMT-specific questions, which have to
be answered first without and then with the
assistance system. Finally, the third part asks the
users about their use of the functions and interfaces
to find information in the KandMSMA.
Questions for self-assessment and popularity are
based on rating scales with values from 1 to 5. All
other questions are open. The answers to the SMT-
specific questions are checked for correctness. The
experiment was conducted with 17 students from
engineering science, sales engineering, medical
science and business administration, who represent
the targeted user groups. Each user answers a
questionnaire under observation. The observer
ensures that solely the KandMSMA is used and
documents its use. In addition to this, the users are
allowed to consult the observer in case of problems
in comprehending the questionnaire or the
experiment.
5.2 RESULTS
The following two diagrams illustrate the influence
of the assistance system on the answering of the
SMT-specific questions and the acceptance of the
provided functions and interfaces. The first diagram
(Figure 6) shows the answering of the 5 SMT-
specific questions (Q1-Q5) without and with the use
of the assistance system. The questioning of 17
users with 5 SMT-specific questions results in
5 × 17 = 85 answers in each of the two rounds.
Without the assistance system the users have given
28 correct answers in sum, i.e. 33% of the possible
answers. It has to be noted that predominantly
questions with a low level of difficulty have been
answered correctly. With the use of the assistance
system the number of correct answers increases to
64, which represents 75% of the possible answers.
An increase in correct answers for the more difficult
questions is also notable.


[Bar chart: number of correctly answered questions per question Q1-Q5,
without and with the KandMSMA; y-axis from 0 to 80.]

Figure 6 Number of correctly answered questions (y-axis)
with and without the assistance system
The second diagram (Figure 7) shows the use of
the interfaces for finding information within the
KandMSMA. Especially interfaces which are known
to the users from common internet sites are used.
Unknown or sophisticated functions like graphs are
used only by a few users.

[Bar chart: number of users per interface group
(Search, Portal, Lists, Graph); y-axis from 0 to 18.]

Figure 7 Used interfaces and functions
(y-axis: number of users)
Users have noted that the more sophisticated
functions and interfaces are not obvious and not
familiar enough to use them effectively. As a
conclusion of this experiment, it has to be stated that
especially the interfaces have to be improved in
order to provide better access to the functionality of
the assistance system.
6. DEVELOPMENT PROCESS OF SMT-
BASED PRODUCTS
The assistance system described in the previous
chapters has the central task of supporting the
product developer in various tasks along the product
development process. For example, if a user reads
an article on requirement determination, the
assistance system will recommend further articles
from the group of themes on requirements. As
already mentioned in the introduction, the
recommendation of series of articles, which lead
users to their self-defined goals, constitutes a crucial
functionality of the assistance system. Figure 8
illustrates the process model that was taken as a
basis for the development process.

[Figure: process model covering the phases of product engineering
(planning, conceptualization, design, procurement, realization, use,
recycling), separated by quality gates, from product idea/market analysis
to end of life. Involved roles include customer, contractor, sales,
engineering, production, procurement, sub-contractor, quality management
and service; activities shown include requirements, specifications, cost,
concept, simulation/experiment, guidelines, manufacturing, check,
assembling, test, order, shipment, controlling, inspection, monitoring,
maintenance, disassembly, material recycling, functional recycling and
disposal. The Knowledge and Method base for Shape Memory Alloys
(KandMSMA) and the assistance system for information provision (AS)
accompany the entire process.]
Figure 8 Process model for the development of SMT-based
products
Initially, steps similar to those in other process
models, e.g. VDI 2221, are gone through. After the
first two phases of planning and conception, the
phases of design, procurement and realization
follow, which cater particularly to the specifics of
the SMT-oriented process model. The steps that are
to be processed during the succession of the phases
are depicted in detail in the assistance system.
Methods and tools, for instance, are to be elucidated
and made available in the corresponding articles as
early as during the planning phase.
7. SUMMARY AND OUTLOOK
The introduced approach is intended to support
developers in diverse tasks along the product
development process and to instruct them to proceed
in a methodical way. Depending on the identified
user context, contents of the KandMSMA are
compiled as personalized recommendations and are
then presented to the user. In addition to that, the
assistance system offers support for the publication
of new contents by enabling a semi-automated
integration of these contents into the KandMSMA
on the basis of recommendations of suitable
annotations. The assistance system, which provides
support along the product development process,
makes use of the existing KandMSMA and is
directly integrated, along with the available
interfaces, into its web frontend. Moreover, this
enables the use of the assistance system on every
web-enabled computer with a standard browser.
With advancing development, the functionality of
the assistance system has to be evaluated. For this
purpose, objective evaluation criteria and scenarios,
which represent the expected users of the
KandMSMA in their respective contexts, have to be
developed systematically.
The presented results of the experiment indicate
that further development efforts should focus on the
presentation of the functions of the assistance
system. Therefore, the interface design and the
integration of the functions into the web frontend
have to be improved and evaluated in further tests.
ACKNOWLEDGMENTS
We thank the German Research Foundation (DFG,
www.dfg.de) for funding our research within the
Special Research Project SFB459 Shape Memory
Technology (www.rub.de/sfb459/). The presented
results and findings would not have been possible
without this funding.
REFERENCES
Sadek, T., Lygin, K. and Langbein, S., SMA Actuator
System for Unlocking Tasks in Interiors of Vehicles,
Proceedings of the 12th International Conference on
New Actuators, ACTUATOR 2010
Langbein, S., Local configuration and partial activation
of shape memory effects to create smarter parts,
Ruhr-University Bochum, Institute of Product and
Service Engineering, Germany, 2009
Breidert, J. and Welp, E. G., Actuator Development
Using a Knowledge Base, Proceedings of the 7th
International Conference on New Actuators,
ACTUATOR 2002
Welp, E. G. and Breidert, J., Knowledge Base for
Designing with Shape Memory Alloys, Proceedings
of the 13th International Conference on Engineering
Design, ICED 2001
Breidert, J. and Welp, E. G., Tools Supporting the
Development of Modular Systems, Proceedings of
the 14th International Conference on Engineering
Design, ICED 2003
Klahold, A., Empfehlungssysteme, Vieweg+Teubner,
Germany, 2009
Burke, R. and Ramezani, M., Matching Recommendation
Technologies and Domains, in: Recommender
Systems Handbook, Springer, Germany, 2011
Manning, C. D., Raghavan, P. and Schütze, H., An
Introduction to Information Retrieval, Cambridge
University Press, USA, 2008
Landauer, T. K., Foltz, P. W. and Laham, D., An
Introduction to Latent Semantic Analysis, Discourse
Processes, Vol. 25, 1998
Guillermo, J.-B., Jose, A. L., Olmos, R. and Escudero, I.,
Latent Semantic Analysis Parameters for Essay
Evaluation using Small-Scale Corpora, Journal of
Quantitative Linguistics, Vol. 17, 2010
Blei, D. M., Ng, A. Y. and Jordan, M. I., Latent Dirichlet
Allocation, Journal of Machine Learning Research,
MIT Press, 2003
Krestel, R., Fankhauser, P. and Nejdl, W., Latent Dirichlet
Allocation for Tag Recommendation, Proceedings of
RecSys 2009, ACM, 2009
VDI 2221: Methodology for the development and
construction of technical systems and products, VDI-
Verlag, Germany, 1993

KINEMATIC STRUCTURE REPRESENTATION OF PRODUCTS AND
MANUFACTURING RESOURCES
Mikael Hedlind
KTH Royal Institute of Technology,
Production Engineering
mikael.hedlind@iip.kth.se
Lothar Klein
LKSoftWare GmbH
lothar.klein@lksoft.com



Yujiang Li
KTH Royal Institute of Technology,
Production Engineering
yujiang@iip.kth.se
Torsten Kjellberg
KTH Royal Institute of Technology,
Production Engineering
torsten.kjellberg@iip.kth.se
ABSTRACT
The basics of kinematic modelling in the majority of CAE applications
concern the definition of motion constraints for components relative to
other components. The main concepts are links and joints, which combined
build the topology and geometry of the mechanism. With additional
information about joint type, actuation and motion range, the model
provides useful information for motion studies. The kinematic structure
schema of the standard ISO 10303-105 provides proven capability to
represent this information. In the second edition of this standard,
currently under development, the granularity and functionality of the
model will be increased and further integrated with other parts of the
standard ISO 10303. Case studies are presented on the utilization of the
added capabilities in different applications within product and
manufacturing resource representation to illustrate the importance of
these features. This paper reports on the authors' contribution to this
standard.
KEYWORDS
Kinematic, Modelling, Computer aided engineering (CAE)

1. INTRODUCTION
The kinematic data models provided in the current
ISO 10303 standards do not address all needs in the
engineering area. Another disadvantage of the
current data model is that it is not fully integrated
into the overall framework provided by the general
representation approach used throughout ISO
10303.
The research results provided in this paper show
how kinematics is going to be further integrated into
the general representation structure of ISO 10303,
making it possible to address further needs in
engineering.
There are two known implementation projects of
the current kinematics representation schema
p105ed1 (ISO 10303-105 edition 1) as used in the
application protocol ISO 10303-214. The first was
the European project IDA-STEP (Integrating
Distributed Applications on the Basis of STEP Data
Models) (Rech et al, 2004) and later the Swedish
project DFBB (Digital Factory Building Block) (Li
et al, 2011). During these projects, limitations of the
standard were identified. Together with possible
solution strategies, they were presented to ISO
TC184 SC4 (2009) by Klein and Hedlind, which
led to the initiation of a new edition of the kinematic
data models in ISO 10303.
This paper presents the research that has
contributed to the first draft (ISO TC184 SC4
WG12, 2011) of p105ed2 (ISO 10303-105 edition 2
working draft). The main principle of how
kinematics is represented is preserved from
p105ed1, but the data model for p105ed2 has been
reworked to facilitate new functionalities and the
reuse of data structures from the integrated generic
resources of ISO 10303. With this increased
integration, the granularity of the kinematics
representation elements is made available for
association with other, non-kinematic properties.
The kinematic model in p105ed1 is
overconstrained, preventing the re-use of the same
constructs in variations of the same model; e.g. for a
topological model, only one mechanism could be
provided. These limitations have been overcome in
p105ed2 by deep integration with the ISO 10303
representation structures. It is also ensured that each
part of the model can be re-used, e.g. for variations.
2. REPRESENTATION OF KINEMATICS
During the analysis and synthesis of kinematic
mechanisms, model simplification is common.
Typical simplifications are, e.g., to assume that a
part is completely rigid, even if it is known to bend
under stress, or to neglect that there are tolerances or
plays in kinematic pairs; a rotational pair, for
instance, typically also allows axial play.
The ability to represent complicated mechanisms
with a simplified model requires an understanding
of the principles of kinematics. This section focuses
on how the principles of kinematics are represented
in ISO 10303.
2.1. REPRESENTATION PRINCIPLES
Kinematic joints and links are the topological aspect
of a mechanism. In a graph, a joint is represented
as an edge and a link as a vertex. It is important to
notice that a joint always relates exactly two
links. So in the case that 3 links are related to each
other, 2 or even 3 joints have to be used, even if
this is not immediately obvious.
Figure 1 illustrates the kinematic topology with
open and closed kinematic loops for an ABB
IRB6660 robot (the 3D model and link names are
provided by ABB). The 3D model consists of 108
components, and for the purpose of analyzing the
reachable work volume of the robot, 9 links and 9
joints have been identified.
A kinematic pair is the geometric aspect of a joint
and provides motion constraints between two links.
Each link comes with its own coordinate system,
also called geometric context, together with
geometric elements such as locations, orientations,
curves and surfaces that are needed to formulate the
kinematic interaction with other links. The
kinematic interaction is described by a pair that
relates geometric elements of the two involved
links with each other, together with other
information.
Motion constraints can be specified for translation
and rotation, which in 3 dimensions amounts to 6 DOF
(degrees of freedom). These constraints can also be
expressed as rolling or sliding with reference to a
curve or surface. For the robot in Figure 1, all
kinematic pairs are constrained to revolute motion,
i.e. only 1 DOF. Additional information for a
kinematic pair comprises motion range, actuation
and pair value. The pair value is data defining a
specific state, e.g. an angle for a revolute pair.


Figure 1 Kinematic topology with open and closed
kinematic loops
The behavior of a mechanism can be analysed in
two ways, either forward or backward.
In forward analysis, the pair values are given, and
the objective is to find the position and orientation
of one or several links. With a sequence of pair
values, possibly described continuously by a
function, motion paths for the links can be
calculated. Pair values can only be set for actuated
pairs, and the motion path is constrained by the
actuated direction.
In backward analysis, the position and orientation
of one or several links is given. The objective is to
find the corresponding pair values. A link motion
path can be turned into functions describing pair
values. Actuated pairs are used to achieve the link's
position and orientation, constrained by the actuated
direction.
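
Forward analysis can be sketched in a few lines of code. The sketch below
assumes revolute pairs rotating about the local z-axis with fixed link
lengths (a planar simplification of the general case); it is not part of
the standard, only an illustration of how pair values map to link poses
via homogeneous transforms.

import numpy as np

def revolute_z(theta, link_length):
    """Transform across a revolute pair: rotate by theta about z, then
    translate link_length along the rotated x-axis to the next pair frame."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0, link_length * c],
                     [s,  c, 0.0, link_length * s],
                     [0.0, 0.0, 1.0, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

def forward(pair_values, link_lengths):
    """Pose of the last link relative to the base for given pair values."""
    pose = np.eye(4)
    for theta, length in zip(pair_values, link_lengths):
        pose = pose @ revolute_z(theta, length)
    return pose

# Planar 2R example: two revolute pairs, both links 1 m long.
pose = forward([np.pi / 2, -np.pi / 2], [1.0, 1.0])
print(pose[:3, 3])  # -> [1. 1. 0.], the position of the last link's origin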
In kinematic synthesis, the objective is to design a
mechanism that achieves a specified motion. This
process starts with defining the kinematic topology
and continues with the geometric aspect, i.e. pair
types, motion ranges and actuated directions. This
process is typically iterative, with interrelated
decisions to be made. During this elaborative work,
pairs with suitable DOF and actuated direction can

be chosen in variants without having a directly
corresponding, physically realizable mechanism.
Notations for representing kinematics are
essential in engineering for finding mechanism
design solutions, similar to the notation for
mathematics. The design of clock mechanisms was
one of the early engineering domains to drive
research on a notation for kinematics. The basic
notation set by Reuleaux (1876), with the concepts
of kinematic pairs, joints and links, is practiced in
CAE applications today. Reuleaux showed how his
notation can be used for the analysis and synthesis
of mechanisms and how similarities between
mechanisms can be identified.
For this paper it is of interest to point out
Reuleaux's proposal (1876) on how to treat
mechanisms with non-rigid links. For this, Reuleaux
introduced the concepts of tension-organ (e.g. wire
or chain) and pressure-organ (e.g. fluid or gas). This
enabled the kinematic modeling of links capable
only of pulling or pushing. This is a capability that
is still not common in CAE applications, nor
supported in p105ed1. In p105ed2 this issue is
addressed with the added functionality to represent
the actuated direction of a kinematic pair.
A graphical notation for kinematic mechanisms,
consisting of pairs and links and their motion, is
provided in ISO 3952 Kinematic diagrams.
2.2. HISTORY OF ISO 10303 KINEMATICS
The only ISO 10303 application protocol using
p105ed1 is ISO 10303-214. An application protocol
based on the ISO 10303 framework and modelling
methodology using p105ed1 is DIN PAS 1013,
developed by the MechaSTEP industry research
group in Germany.
The first proposal for a kinematic representation
schema in ISO 10303 was developed by the
European research projects NIRO, Neutral Interfaces
for Robotics (Bey et al, 1994), and later InterRob,
Interoperability of Standards for Robotics in CIME
(Mikosch, 1997). This proposal was accepted by
ISO as the basis for further integration with the ISO
10303 framework. During this integration process,
several changes were made to align it with the
modelling principles used throughout ISO 10303.
These changes were considered by the NIRO project
to make implementation less efficient (Bey et al,
1994).
The NIRO proposal was more compact in the
number of entities compared to p105ed1 while still
covering a similar scope. The NIRO schema did not
require the same degree of reasoning over a data set
to get all information. This is exemplified when
comparing the different schemas describing a
kinematic pair (Bey et al, 1994). Figure 2 shows a
comparison of the instantiated data sets for a
prismatic pair using the different schemas. In the
NIRO proposal, the pair entity has mandatory
attributes for the pair values and optional attributes
for the motion range. In p105ed1, three entity
instances are required instead: first the kinematic
pair entity itself, then one entity for the motion
range and another entity for the pair value, both
referencing the pair entity. In p105ed2 this requires
two entities: a pair entity can directly describe the
motion range optionally, while the pair value is kept
as a separate entity.

NIRO proposal (one entity):
  prismatic_pair
    actual_translation = 5.
    lower_limit_actual_translation (OPTIONAL) = 10.
    upper_limit_actual_translation (OPTIONAL) = $

p105ed1 (three entities):
  prismatic_pair
  prismatic_pair_range  -- applies_to_pair
    lower_limit_actual_translation = 10.
    upper_limit_actual_translation = .UNLIMITED.
  prismatic_pair_value  -- applies_to_pair
    actual_translation = 5.

p105ed2 (two entities):
  prismatic_pair_with_range
    lower_limit_actual_translation (OPTIONAL) = 10.
    upper_limit_actual_translation (OPTIONAL) = $
  prismatic_pair_value  -- applies_to_pair
    actual_translation = 5.

Figure 2 Instantiation of prismatic pair and data for
motion range and pair value using different schemas
One underlying reason for these differences is
how an optional attribute is viewed. The argument
during the p105ed1 development was that optional
attributes should be avoided, as the cardinality can
then be considered unclear. For p105ed2 this has
been resolved with normative text declaring that the
absence of motion range data implies that the
motion range is unlimited. It is also possible to
describe a pair without a range using a supertype
entity that does not specify a motion range but has
the same constraints in DOF.
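
This cardinality convention maps naturally onto optional fields, as the
following illustrative sketch shows. The class below is a stand-in, not
the EXPRESS entity of the standard; following the p105ed2 reading, an
absent limit means unlimited motion in that direction.

from dataclasses import dataclass
from typing import Optional

@dataclass
class PrismaticPairWithRange:
    """Illustrative counterpart of a prismatic pair with optional range."""
    lower_limit: Optional[float] = None  # None = unlimited, per p105ed2 reading
    upper_limit: Optional[float] = None

    def in_range(self, translation):
        if self.lower_limit is not None and translation < self.lower_limit:
            return False
        if self.upper_limit is not None and translation > self.upper_limit:
            return False
        return True

pair = PrismaticPairWithRange(lower_limit=10.0)  # upper limit absent
print(pair.in_range(5.0), pair.in_range(25.0))   # -> False True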
Even though p105ed1 is designed according to the
modelling principles applied in ISO 10303, it makes
little use of the integrated generic resources, and
this limits the association between elements of the
kinematic representation and other properties. In
p105ed2 a higher integration with the integrated
generic resources has been accomplished, enabling
e.g. properties such as friction to be associated with a
kinematic pair. The integration is done through
reused representation structures, retrieved by
declaring more entities as subtypes of integrated
generic resource entities. This also required
rearrangements of the p105ed1 overall structure.
With these changes it is also believed that p105ed2
will be easier to understand and implement. The
level of changes makes p105ed2 incompatible
with p105ed1, but the kinematic representation
principles are unchanged; all concepts from
p105ed1 are preserved or transformed into
equivalent concepts.
The overall structures of p105ed1 and p105ed2 are
given below.
p105ed1 consists of the following 3 schemas:
Kinematic structure schema;
Kinematic motion representation schema;
Kinematic analysis control and result schema.
p105ed2 consists of the following 6 schemas:
Kinematic property schema;
Kinematic topology schema;
Kinematic structure schema;
Kinematic state schema;
Kinematic motion representation schema;
Kinematic analysis control and result schema.
The p105ed1 Kinematic structure schema has
been split into 4 more refined and specialised
schemas. The focus of this paper is on the 3 schemas
for kinematic topology, structure and state.

3. ISO 10303 KINEMATICS EDITION 2
The following are excerpts from p105ed2, with
instantiation examples that illustrate the benefits of
the higher integration and the increased functionality.
The general representation structure of ISO 10303 is
defined in ISO 10303-43. It provides concepts for
the entities representation, representation_context
and representation_item and defines how to relate
them. In ISO 10303-42 the general representation
capabilities are specialized for geometry and general
topology. P105ed2 provides further specialization of
these concepts for kinematics.
For these examples a single acting cylinder is
used, illustrated in Figure 3, as this is a common
machine component with properties where the
improvements in p105ed2 become obvious. This
machine component consists of 2 links, a cylinder
and a piston. The piston has 2 DOF: it can translate
and rotate. There is one actuated direction, which
drives the piston to the right, see Figure 3. Moving
the piston to the left requires an externally applied
force. The piston rotates freely.

[Figure: ISO 1219 symbol with the two links labelled cylinder and piston]

Figure 3 Single acting cylinder (ISO 1219 symbol) and
corresponding kinematic topology
3.1 KINEMATIC TOPOLOGY SCHEMA
Figure 4 illustrates an excerpt of the p105ed2
kinematic topology schema. Kinematic_joint and
kinematic_link are made subtypes of the generic
topological entities edge and vertex, respectively,
declared in the topology schema of ISO 10303-42,
where these concepts are well recognized for shape
representations. From the viewpoint of an application
protocol using p105ed2, the granularity of the
kinematic representation is now increased for
association with other properties. In p105ed1 the
joint and link entities are isolated from the main
representation structure of ISO 10303, which
prohibits relationships with other properties.
The introduced entity kinematic_topology
_structure is made a subtype of the representation
entity from ISO 10303-43 and collects kinematic
joints as representation items. In a similar way, but
with more specialized entities, it is also possible to
explicitly represent substructures, network structures
and directed structures as tree structures.


Figure 4 Kinematic topology schema
3.2 KINEMATIC STRUCTURE SCHEMA
Figure 5 illustrates an excerpt of the p105ed2
kinematic structure schema. As a kinematic pair is
the geometric aspect of a joint, this entity is made a
subtype of geometric_representation_item from
ISO 10303-42, which brings a geometric context.
This can be combined with globally defined units
for the whole mechanism.
In kinematics, low and high order pairs are
common concepts. A low order pair does not require
a reference to a shape for defining its DOF. A high
order pair requires references to surfaces or curves
to define its motion constraints.
The concept of low and high order kinematic
pairs was used in the NIRO schema proposal, but
removed from the schema in the integration process
for p105ed1 with the argument that it does not carry
any clear semantics, and thereby this distinction was
considered superfluous. For p105ed2 this distinction
has been brought back. The semantics has been
enriched by a new breakdown of the different pair
types, with added functionality to better support
kinematic synthesis. Most of the low order pairs can
primarily be described by listing their DOF in terms
of rotation and translation. However, there are some
pair types that require additional geometric
information, and therefore an additional
classification has been introduced, the
low_order_pair_with_motion_coupling.
This arrangement made it possible to have a
supertype of simple low order pairs enabling
control of the different DOF individually, which
supports kinematic synthesis. Instances of this
supertype should only be used when there is no
specific low order pair subtype with the desired
DOF configuration.

Figure 5 Kinematic structure schema
The following specialized pair types are available
in p105ed2.

Low order pairs:
fully constrained pair (no DOF);
revolute pair (one rotation DOF);
prismatic pair (one translation DOF);
cylindrical pair (one rotation and one
translation DOF);
universal pair (two rotation DOF);
homokinetic pair (two rotation DOF);
spherical pair with pin (two rotation DOF);
spherical pair (three rotation DOF);
planar pair (one rotation and two translation
DOF);
unconstrained pair (three rotation and three
translation DOF);
Low order pairs with motion coupling:
screw pair;
rack and pinion pair;
gear pair;
High order pairs:
point on surface pair;
sliding surface pair;
rolling surface pair;
point on planar curve pair;
sliding curve pair;
rolling curve pair.
The newly introduced specialized pairs are the
homokinetic pair and the spherical pair with pin. The
homokinetic_pair was first introduced in ISO
10303-214 and is now included in p105ed2.
Spherical_pair_with_pin was included because it is
also supported by ISO 3952.
Figure 6 illustrates how the subtypes of the
low_order_kinematic_pair redeclare each DOF
attribute and derive the valid DOF configuration from
local domain rules. This way, the DOF
configuration is explicitly provided for all the
subtypes of low_order_kinematic_pair.


Figure 6 Low order kinematic pair


As defined in p105ed1, a kinematic pair can only
be actuated in all its DOF or not at all. A
consequence of this is that one or several artificial
links are needed if the pair is not actuated in all DOF.
Figure 7 illustrates this for the cylinder component.
The actuated prismatic pair can move the piston in
both directions, and the non-actuated revolute pair
enables the piston to rotate freely. As this is an
unnatural way of describing motion constraints, this
part has been changed in p105ed2, with a solution
that also enables the representation of single acting
actuation.


Figure 7 Use of artificial link to emulate the kinematic
pair of a cylinder component
The coordinate system of the kinematic pair,
named contact frame, is used as reference for the
direction. For low order pairs the contact frame and
the pair frame for second link coincide. The
enumeration items of actuated direction are:
bidirectional;
positive_only;
negative_only;
not_actuated.
In this way the cylinder component can be
represented without the use of an artificial link, as
illustrated in Figure-8.


Figure 8 Cylindrical pair actuated in one direction
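As a sketch of this per-direction actuation (the attribute names below are invented for illustration; only the enumeration items come from the text), the single-acting cylinder of Figure-8 could be captured as one cylindrical pair:

from enum import Enum

class ActuatedDirection(Enum):
    BIDIRECTIONAL = "bidirectional"
    POSITIVE_ONLY = "positive_only"
    NEGATIVE_ONLY = "negative_only"
    NOT_ACTUATED = "not_actuated"

# Single-acting cylinder: the translation DOF is actuated in one
# direction only and the rotation DOF is free, so no artificial link
# is needed.
cylinder_pair = {
    "type": "cylindrical_pair",
    "translation_actuation": ActuatedDirection.POSITIVE_ONLY,
    "rotation_actuation": ActuatedDirection.NOT_ACTUATED,
}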
Figure-9 illustrates how the property static
friction can be assigned to a specific kinematic pair
in a specific structure using p105ed2 in an
application protocol. The complete set of entities to
represent the property, e.g. the measure value of the
static friction, is left out due to limited space, but it
follows the established way of property modeling
in ISO 10303.

Figure 9 Kinematic pair with assigned static friction
property
3.3 KINEMATIC STATE SCHEMA
Figure-10 and Figure-11 illustrate excerpts of the
p105ed2 kinematic state schema. A mechanism state
representation is made a subtype of the
representation entity from ISO 10303-43 and
collects pair values as representation items.

[EXPRESS-G excerpt: mechanism_state_representation is a subtype of representation_schema.representation; its items S[1:?] are of type pair_value (RT); context_of_items (DER, RT) references geometry_schema.geometric_representation_context; represented_mechanism references kinematic_structure_schema.mechanism_representation.]

Figure 10 Kinematic state schema
Subtypes of the pair_value entity for each type of
kinematic pair declare values to define a state of the
pair. Pair_value is made a subtype of the
geometric_representation_item entity from
ISO 10303-42, which brings a geometric context
and defined units to the pair value.


Figure 11 Kinematic pair value
One example of what this higher integration with
ISO 10303-42 and ISO 10303-43 enables is the
possibility to associate a nominal state with a
measured deviation.
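A minimal sketch of this arrangement, with Python stand-ins for the EXPRESS entities (attribute names paraphrased, not normative):

from dataclasses import dataclass

@dataclass
class PairValue:
    # Stand-in for a pair_value subtype: the state of one kinematic
    # pair; in the standard, units come from the geometric context.
    pair_name: str
    value: float
    unit: str

@dataclass
class MechanismState:
    # Stand-in for mechanism_state_representation: collects pair
    # values as its representation items.
    mechanism: str
    items: list[PairValue]

nominal = MechanismState("cylinder", [PairValue("prismatic", 100.0, "mm")])
measured = MechanismState("cylinder", [PairValue("prismatic", 100.012, "mm")])
deviation = measured.items[0].value - nominal.items[0].value  # 0.012 mm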


4. KINEMATIC PAIR ERRORS
A mechanism's geometric accuracy is of interest for
both products and manufacturing resources.
Identification of systematic kinematic errors enables
compensation to achieve higher accuracy.
For machine tools, kinematic errors are one of the
major error sources affecting geometric accuracy.
Kinematic errors for both linear and rotary axes are
divided into component errors and location errors
(Schwenke et al., 2008).
Component errors address deviation with six
measure values that are dependent on the axis motion.
For a linear axis these measures are: one positioning
error, two linear errors, and three angular errors.
Location errors for a linear axis address deviation
with three measure values: one in-plane position
error and two angular errors. Location errors are
defined with respect to the average line of the axis motion.
When p105ed2 is used in an application protocol,
kinematic errors can be represented as properties of
a kinematic pair. Component errors as measured
would be associated to the kinematic pair in a given
state defined by a pair value. An interpolation of
component errors for different pair values would be
associated to the kinematic pair in a given
mechanism.
Volumetric accuracy for a machine tool is defined
as "the maximum error between any two points in a
specified volume of measurement" (McKeown,
1973). Calculating volumetric accuracy from
kinematic pair errors using p105ed2 data implies an
error stack-up analysis based on the kinematic
topology schema and the kinematic structure schema.
Figure-12 illustrates how deviation from straight-
line motion (component error) for a prismatic pair
will be represented using p105ed2 in an application
protocol. On the left side is the representation
structure for interpolated deviation data using
bounded curves (defined by ISO 10303-42 as a
curve of finite arc length with identifiable end
points). On the right side is the representation
structure for measured deviation in one state for the
prismatic pair. The interpolated property definition
is related as dependent on the property definition for
measured deviation.


Figure 12 Prismatic pair with representation of straight-line motion deviation



In machine tool metrology, straight-line motion
parameters and measures are defined in ISO 230-1
and are well known in industry. The parameter naming
convention is based on a three-letter combination.
For a linear axis these are e.g. EXZ for the linear
deviation and EZZ for the positional deviation. The
last letter indicates the direction of motion using a
nomenclature defined in ISO 841 for a set of NC
machines. As the direction of motion is defined by
the kinematic pair, the last letter can preferably be
omitted from the name of the
measure_representation_item. This gives a uniform
representation of error components, independent of
the axis name. If the three-letter combination is
requested, it can be represented as an alias
identification in the context of ISO 841 for the
measure_representation_item.
Donmez et al. (1986) provided a general
methodology to predict the resulting error of a
sequence of kinematic pair errors. The methodology
was developed for machine tool error compensation,
but it can also be applied to other mechanisms; it is
based on the multiplication of homogeneous
transformation matrices (HTM). The elements of one
HTM are the error elements of one kinematic pair.
With this methodology any state, defined by nominal
pair values, can be analysed to predict the resulting
geometric error.
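A small numerical sketch of such a stack-up (the small-angle error HTM below is the generic form, not necessarily Donmez et al.'s exact parameterization, and all error values are invented):

import numpy as np

def htm(dx=0.0, dy=0.0, dz=0.0, ea=0.0, eb=0.0, ec=0.0):
    """Homogeneous transformation matrix for small positional errors
    (dx, dy, dz) and small angular errors (ea, eb, ec) about x, y, z."""
    return np.array([
        [1.0, -ec,  eb, dx],
        [ ec, 1.0, -ea, dy],
        [-eb,  ea, 1.0, dz],
        [0.0, 0.0, 0.0, 1.0],
    ])

# One HTM per kinematic pair in the chain, evaluated at a given state
# (nominal pair values); the error elements would come from the pair
# properties described above.
chain = [
    htm(dz=0.004, eb=2e-5),    # e.g. positioning and angular error of one axis
    htm(dx=0.002, dy=-0.001),  # e.g. straightness errors of a second axis
]
total = np.eye(4)
for t in chain:
    total = total @ t

point = np.array([0.0, 0.0, 0.0, 1.0])  # nominal tool point
print(total @ point)  # predicted deviated position for this state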
5. CONCLUSIONS
With p105ed2, new kinematic modelling
capabilities are enabled within the ISO 10303
framework. Artificial constraints in p105ed1 have
been removed, and the p105ed2 schema gives more
compact data sets. The presented examples of
enabled capabilities and of higher integration with
other parts of ISO 10303 illustrate the importance of
these features.
Further research will address enabling the usage
of mathematical functions to represent continuous
motion in kinematic pairs.
For applications of the examples of increased
integration, a more stringent modelling specification
will be required than can be given in this paper. With
an application protocol or application modules
using p105ed2, the representation will be
unambiguous. Note that the schemas presented in
this paper are taken from a working draft of the
standard on which consensus in principle has been
achieved.
6. ACKNOWLEDGMENTS
This work is funded by VINNOVA (the Swedish
Governmental Agency for Innovation Systems) and
the ProSTEP iViP association, and has been
supported by XPRES (Initiative for excellence in
production research). Important contributions to this
research have been the discussions on machine
tool modelling within ISO TC184 SC4 WG3 T24,
especially with R. Fesperman (NIST), F. Proctor
(NIST), and A. Archenti (KTH). We also thank Scania
and Volvo for fruitful discussions on user
requirements in modelling kinematics.
REFERENCES
Bey I., Ball D., Bruhm H., Clausen T., Jakob W.,
Knudsen O., Schlechtendahl E. G., and Sørensen T.,
Neutral Interfaces in Design, Simulation, and
Programming for Robotics, Research Report ESPRIT
Project 5109, 1994, ISBN 3-540-57531-6
Donmez M.A., Blomquist D.S., Hocken R.J., Liu C.R.,
Barash M.M., "A general methodology for machine
tool accuracy enhancement by error compensation",
Precision Engineering, Vol. 8, No. 4, 1986, p 187-196,
doi:10.1016/0141-6359(86)90059-0
ISO TC184 SC4, WG12 Meeting Minutes, Document
reference WG12 N6693, 2009
ISO TC184 SC4, Industrial automation systems and
integration Product data representation and
exchange, Part 105: Integrated application resource:
Kinematics, second edition, Working Draft,
Document reference WG12 N7301, 2011
Li Y., Hedlind M., Kjellberg T., Implementation of
kinematic mechanism data exchange based on STEP,
Proceedings of DET2011 7th International Conference
on Digital Enterprise Technology, Athens, Greece,
2011
McKeown P.A., Loxham J., Some Aspects of The
Design of High Precision Measuring Machines, CIRP
Annals, Vol.22, No.1, 1973
Mikosch F., Interoperability of Standards for Robotics
in CIME, Research Report ESPRIT Project 6457,
1997, ISBN 3540618848
Rech R., Klein L., Randis R., Baltramaitis T.,
Nargelas V., "IDA-STEP, Integrating Distributed
Applications on the Basis of STEP Data Models",
Final Report, Project Reference IST-2000-30082
(European Fifth Framework Programme), 2004
Reuleaux F., Kinematics of Machinery; Outlines of a
Theory of Machines, Kennedy, A.B.W. (Editor,
Translator), MacMillan and Co., London, 1876
Schwenke H., Knapp W., Haitjema H., Weckenmann A.,
Schmitt R., Delbressine F., Geometric error
measurement and compensation of machines - An
update, CIRP Annals, Vol. 57, No. 2, 2008,
p 660-675, doi:10.1016/j.cirp.2008.09.008

MULTI-AGENT-BASED REAL-TIME SCHEDULING MODEL FOR RFID-
ENABLED UBIQUITOUS SHOP FLOOR
Yingfeng Zhang
Key Laboratory of Contemporary Design and
Integrated Manufacturing Technology,
Ministry of Education, Northwestern
Polytechnical University, PR China
zhangyf@nwpu.edu.cn
George Q. Huang
Department of Industrial and Manufacturing
Systems Engineering,
The University of Hong Kong,
Hong Kong, PR China
gqhuang@hku.hk

Ting Qu
Department of Industrial and Manufacturing
Systems Engineering,
The University of Hong Kong,
Hong Kong, PR China
quting@hku.hk
Shudong Sun
Key Laboratory of Contemporary Design and
Integrated Manufacturing Technology,
Ministry of Education, Northwestern
Polytechnical University, PR China
sdsun@nwpu.edu.cn
ABSTRACT
Because of the lack of timely feedback of manufacturing information during the production execution stage,
real-time production scheduling is very difficult to implement. In this paper, an overall architecture
of multi-agent based real-time scheduling for a ubiquitous shopfloor environment is proposed to close the
loop of production planning and control. Several contributions are significant. Firstly, wireless devices
such as RFID are deployed at value-adding points in a ubiquitous shopfloor environment to form a
Machine Agent for the collection and processing of real-time shopfloor data. Secondly, a Capability
Evaluation Agent is designed to optimally assign tasks to the involved machines at the process
planning stage based on the real-time utilization ratio of each machine. The third contribution is a Real-
time Scheduling Agent model for manufacturing task scheduling / re-scheduling strategies and methods
according to real-time feedback. Finally, a Process Monitor Agent model is established for tracking
and tracing the manufacturing execution based on a critical event structure.
KEYWORDS
Multi-agent, Real-time Scheduling, Ubiquitous Manufacturing, Auto-ID, RFID

1. INTRODUCTION
Manufacturing scheduling is the process of selecting
and assigning manufacturing resources for specific
time periods to the set of manufacturing processes in
the plan (Shen et al., 2006). It is an important
manufacturing planning activity dealing with
resource utilization and the time span of
manufacturing operations. Agent-based
manufacturing scheduling systems are a promising
way to provide this optimization (Chan et al., 2002).
Recently, rapid developments in wireless sensors,
communication and information network
technologies (e.g. radio frequency identification -
RFID or Auto-ID, Bluetooth, Wi-Fi, GSM, and
infrared) have nurtured the emergence of Ubiquitous
Manufacturing (UM) (Huang et al., 2010; Zhang et
al., 2011) as a core Advanced Manufacturing
Technology in next-generation manufacturing
systems. A UM system is based on a wireless sensor
network that facilitates the automatic collection and
real-time processing of field data in manufacturing
processes (Jun et al., 2009). It will facilitate real-
time shop-floor scheduling in a ubiquitous
manufacturing environment.


Despite the significant progress in agent
technologies and real-time manufacturing data
collection, the following research questions still
exist in applying real-time scheduling methods to
real-life manufacturing shopfloors.
(1) At the process planning stage, a task is only
assigned to a type of machine; it is not assigned to a
specific machine. As a result, tasks may not be
optimally distributed among the machines, which has
an adverse effect on the scheduling stage. Without
considering real-time machine workloads and shop
floor dynamics, process planning may become
suboptimal or even invalid at the time of execution.
(2) Due to the lack of manufacturing
information capturing and processing methods,
current shop-floor monitoring is inaccurate, incomplete,
inconsistent, and subject to time delays. In addition, it
is not integrated with the scheduling system.
Therefore, real-time shop-floor scheduling is
difficult to implement.
In this paper, we integrate the advantages of
multi-agent technologies and auto-ID powered
ubiquitous manufacturing technologies to
implement real-time shop-floor scheduling. The
proposed multi-agent real-time scheduling
architecture aims to close the loop of production
planning and control from process planning to
finished products.
The rest of the paper is arranged as follows.
Section 2 will review the relevant literature under
three categories of multi-agent system for
manufacturing, real-time scheduling, and ubiquitous
manufacturing. Section 3 presents the overall
architecture of multi-agent based real-time shop-
floor scheduling. The multi-agent models such as
Machine Agent, Capability Evaluation Agent,
Scheduling Agent, and Process Monitor Agent are
discussed in Section 4. Section 5 describes the
software framework of the proposed multi-agent
based real-time shop-floor scheduling system.
Conclusions and further works are summarized in
Section 6.
2. LITERATURE REVIEW
The research considered in this paper can be
portrayed against the literature along several
directions. They are (1) multi-agent system for
manufacturing, (2) real-time scheduling, and (3)
ubiquitous manufacturing.
2.1. MULTI-AGENT SYSTEM FOR
MANUFACTURING
Agent technology is a branch of Artificial
Intelligence (AI) and has been widely accepted and
developed in manufacturing applications for its
autonomy, flexibility, reconfigurability, and
scalability (Sikora, 1998, Macchiaroli, 2002 and
Maturana, 2004). An agent based concurrent design
environment (Krothapalli, 1999) has been proposed
to integrate design, manufacturing and shop-floor
control activities. A compromising and dynamic
model in an agent-based environment (Sikora, 1998)
has been designed for all agents carrying out their
own tasks, sharing information, and solving
problems when conflicts occur. Papakostas et al.
(1999) describe a flexible agent based framework
for manufacturing decision making. Some mobile
agent-based systems (Shin et al., 2004) have been
applied to real-time monitoring and information
exchange for manufacturing control.
proposed an architecture where many facilitator
agents coordinate the activities of manufacturing
resources in a parallel manner. Jiao et al. (2006)
applied the MAS paradigm for collaborative
negotiation in a global manufacturing supply chain
network. Besides, in various kinds of applications
such as distributed resource allocation (Bastos et al.,
2005), online task coordination and monitoring (Lee
and Lau, 1999 and Maturana et al., 2004), or supply
chain negotiation (Wu, 2001), the agent-based
approach has played an important role to achieve
outstanding performance with agility. Monostori et
al. (2006) introduce software agents and multi-
agent systems and discuss the open issues and
strategic research directions in all domains of
manufacturing where problems of uncertainty,
temporal dynamics, and information sharing arise.
2.2. REAL-TIME SCHEDULING
In order to satisfy customer requirements and meet
the delivery time punctually in MTO (Make to
Order) environments, production scheduling and
planning is an important process for avoiding delay
in the production process and for improving
manufacturing performance. Previous approaches
focus on the process allocation of equipment to
production tasks before the production starts (Wong
et al, 2006). Aghezzaf (2007) adopts a mixed integer
programming model for developing a capacity and
warehouse management plan that satisfies the
expected market demand with the lowest possible
cost. Mendes et al. (2009) integrate a genetic
algorithm with heuristic priority rules to solve
resource constrained project scheduling problems.
Guo et al. (2008) propose a genetic algorithm for
solving the order scheduling with multiple
constraints for maximizing the total satisfaction
level of all the orders while minimizing their total
throughput time. Recently, real-time scheduling
strategies and methods are investigated to facilitate
production management. Buyurgan et al. (2008)
present a framework that employs the analytical
hierarchy process (AHP) in advanced manufacturing


systems for real-time scheduling and part routing.
Poon et al. (2011) propose a real-time production
operations decision support system (RPODS)
for solving stochastic production material
demand problems.
uncertainties such as uncertain processing time,
uncertain orders and uncertain arrival times, Guo et
al (2011) propose a mathematical model for order
scheduling problem with the objectives of
maximizing the total satisfaction level of all orders
and minimizing their total throughput time. Cho et
al. (2007) use a continuous control-theoretic
approach for distributed production scheduling at
the shop floor and machine capacity control at the
CNC level.
2.3. UBIQUITOUS MANUFACTURING
In the past ten years, rapid developments in wireless
sensors, communication and information network
technologies (e.g. radio frequency identification -
RFID or Auto-ID, Bluetooth, Wi-Fi, GSM, and
infrared) have nurtured the emergence of Ubiquitous
Manufacturing (UM) (Huang et al., 2009) as core
Advanced Manufacturing Technology (AMT) in
next-generation manufacturing systems (NGMS). A
UM system is based on wireless sensor network that
facilitates the automatic collection and real-time
processing of field data in manufacturing processes.
In this way, the error-prone, tedious manual data
collection activities are reduced or even eliminated
(Jun et al., 2009). UM provides a networked
manufacturing environment free from excessive and
difficult wiring efforts in manufacturing workshops
(Jones, 1999). In UM, real-time visibility and
interoperability have been considered core
characteristics (Huang et al., 2008) that close the
loop of production planning and control for adaptive
decision making. By taking advantage of data
capacity stored in an RFID tag, Qiu et al. (2007)
propose a RFID-enabled framework in support of
manufacturing information integration. A new
paradigm, called UbiDM: Design and Manufacture
via Ubiquitous Computing Technology (Suh et al.,
2008), has been proposed for the design and
manufacturing of a product by using ubiquitous
computing technology. The importance of the UM
has also been widely identified for strategic research
and development in industrialized European Union,
North Americas, and Japan where manufacturing is
widely considered as one of the major means of
creating the national wealth. In a UM framework,
management and control facilities of shop floor are
required to implement real-time traceability,
visibility and interoperability in improving the
performance of shop-floor planning, execution and
control by using workflow management architecture
(Zhang et al., 2010) and RFID-enabled smart
gateway (Zhang et al., 2011). Besides, the facilities
must be able to effectively deal with the complex
manufacturing information following the standard
schemas and transmit it in time between
workstations, shop floors and enterprise.
3. ARCHITECTURE OF MULTI-AGENT
BASED REAL-TIME SHOP-FLOOR
SCHEDULING
The overall architecture of multi-agent based real-
time shop-floor scheduling is shown in Figure-1. It
aims to implement real-time scheduling for a
ubiquitous shop-floor environment. Through auto-
ID technologies, the dynamic manufacturing
information can be captured. Then, at the process
planning stage, tasks can be better assigned
according to the real-time status and capacity of
each machine. This provides accurate information
for production scheduling. During the execution stage,
dynamic scheduling can be achieved based on
the real-time manufacturing data.
Four agents are designed in this research to fulfil
real-time shop-floor scheduling. They are briefly
described as follows:
(1) Machine Agent (MA)
It is responsible for capturing real-time
manufacturing data through equipped auto-ID devices
and for processing the captured data into meaningful
manufacturing information. Real-time workstation
application services can then be provided.
(2) Capability Evaluation Agent (CEA)
It is used to evaluate the capability of the
machines. Based on the real-time status transmitted
by machine agent, the process planning can assign
the tasks to optimal machines.
(3) Real-time Scheduling Agent (RSA)
It provides intelligent models and algorithms to
optimally plan or re-plan the start time and finish
time of each process of each task according to the
real-time shopfloor information.
(4) Process Monitor Agent (PMA)
It is responsible for reflecting the real-time status
of the different manufacturing resources. During
production execution, disturbances and changes on the
shopfloor are tracked and traced in a timely manner,
and the loop of production planning and control for
real-time shop-floor scheduling can thereby be closed.



[Figure: orders flow through MRP and process planning; tasks are assigned to machines via the Capability Evaluation Agent; Machine Agents [1..n] on the shop floor capture real-time data from RFID readers and tags; real-time capability information feeds the Capability Evaluation Agent, real-time execution information feeds the Process Monitor Agent, and the Real-time Scheduling Agent produces the schedule and triggers re-scheduling.]
Figure 1 Architecture of multi-agent based real-time shop-floor scheduling
4. MULTI-AGENT MODELS
4.1. MACHINE AGENT MODEL
MA is responsible for wrapping the workstation
applications to process the complex real-time data
captured from Auto-ID devices such as RFID. On
one hand, it is used to connect and centrally manage
the multiple types of auto-ID devices for capturing
real-time manufacturing data according to a specific
logic flow. On the other hand, it is also used to
process the captured manufacturing data into
meaningful manufacturing information and provide
real-time workstation application services.
Figure-2 shows the MA model. It is implemented
with intelligence logics so as to sense and identify
the real-time manufacturing status of each machine.
It includes two components, namely data capturing
and application service.
(1) Data Capturing
This component aims at managing the behaviours
of the auto-ID devices installed at a machine to
capture the dynamic data of the manufacturing
resources. It consists of two modules.
The definition and auto-driven module is used to
wrap the various drivers of heterogeneous auto-ID
devices into a driver library, which enables a
newly plugged auto-ID device to be "Plug and
Play" with only a simple definition of some basic
parameters. Two driver modes, standard interface
driven and third-party driven, are designed in this
module.
The standard data capturing module is responsible for
wrapping heterogeneous auto-ID devices into
standard methods so that their perception functions
can be easily invoked under a uniform model. Two
types of methods, namely readingData(Parameter[1],
..., Parameter[i]) and writingData(Parameter[1],
..., Parameter[i]), are involved in this module.
(2) Application Services
This component aims to provide value-added
information based on the captured manufacturing
data by auto-ID devices. It also consists of two
modules.
The reasoning module is designed to enhance the
intelligence of the MA. It lets the MA know
which type of manufacturing resource is coming to or
leaving the machine. Rule-based methods are
adopted to help the MA make decisions based
on the real-time manufacturing environment and
production logics. The fundamental element of a
rule is the function: a function has a name, a set of
arguments, and a return value, and a function can
itself be an argument of another function. All rules are
described in a standard structure and stored in an
XML file which can be further updated. The MA
can apply the corresponding rule by choosing and
loading it from the XML file.
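A hedged sketch of such a rule structure (the XML layout and the function names below are invented for illustration; the paper does not publish the actual rule schema):

import xml.etree.ElementTree as ET

# Hypothetical rule file: a rule is a function with a name, arguments
# and a return value, and a function may be an argument of another.
RULES_XML = """
<rules>
  <rule name="wip_arriving">
    <function name="equals">
      <arg><function name="tag_event_type"/></arg>
      <arg>ENTER_SENSING_ZONE</arg>
    </function>
  </rule>
</rules>
"""

FUNCTIONS = {
    "tag_event_type": lambda event: event["type"],
    "equals": lambda a, b: a == b,
}

def evaluate(node, event):
    # Recursively evaluate a <function> element; leaf functions with no
    # arguments are applied to the incoming auto-ID event.
    if node.tag == "function":
        args = [evaluate(child, event) for child in node]
        return FUNCTIONS[node.get("name")](*(args or [event]))
    if node.tag == "arg":
        child = node.find("function")
        return evaluate(child, event) if child is not None else node.text.strip()
    raise ValueError(node.tag)

rules = ET.fromstring(RULES_XML)
rule = rules.find("rule[@name='wip_arriving']")
print(evaluate(rule.find("function"), {"type": "ENTER_SENSING_ZONE"}))  # True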


The real-time information processing module is used
to deal with the various real-time data captured by
the auto-ID devices installed at the machine side.
In contrast to the reasoning module, it focuses on how
to form more meaningful real-time manufacturing
information. For example, the getMaterials
processing method will deal with all the data
relevant to the materials of this machine and return
detailed real-time information such as used
materials, produced semi-finished products, etc.

[Figure: the Machine Agent wraps auto-ID devices (HF and UHF RFID tagged items, bar-codes) in a sensing zone; the Data Capturing component holds the Definition and Auto-Driven module and the Standard Data Capturing module; the Application Service component holds the Reasoning module and the Real-time Manufacturing Processing module.]
Figure 2 Machine agent model
4.2. CAPABILITY EVALUATION AGENT
MODEL
CEA is used to evenly assign the processes of tasks
to the involved machines. Its model is shown in
Figure-3. Each task consists of n processes.
For each process [i], according to its process
planning, the CEA will find an optimal machine with
the corresponding capability based on a competitive
bidding mechanism. Without considering real-
time machine workloads and shopfloor dynamics,
process plans may become suboptimal or even
invalid at the time of execution.
As shown in Figure-3, a Utilization Ratio (UR) is
used in the CEA to evaluate the capability of each
machine and choose an optimal one for each
process [i]. In the bidding stage, the group of machines
with the corresponding capability is selected first for
each process. Then, the MAs of these machines bid for
the task; they must report their real-time status, e.g.
currently used capacity, to the CEA. Finally, the CEA
calculates the URs of these MAs and selects the
optimal one according to the objective function. The
objective function is:

$\min_i \; UR_i = \frac{UC_i}{TC_i} \times 100\%$    (1)

In this formulation, $UR_i$ is the utilization ratio of machine $i$; $UC_i$ is the used capability of machine $i$, which changes dynamically with the queue of machine $i$; $TC_i$ is the total capability of machine $i$, a constant that represents a time period.
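A minimal sketch of this bid evaluation (machine names and capacity figures are invented): each candidate MA reports its used and total capability, the CEA computes the UR of equation (1), and the least-loaded capable machine wins the bid:

# Bids from the MAs of machines with the required capability:
# (used capability UC_i, total capability TC_i) over the same period.
bids = {
    "m1": (32.0, 40.0),
    "m2": (18.0, 40.0),
    "m3": (35.0, 40.0),
}

def utilization_ratio(uc: float, tc: float) -> float:
    # UR_i = UC_i / TC_i * 100%  (equation 1)
    return uc / tc * 100.0

winner = min(bids, key=lambda m: utilization_ratio(*bids[m]))
print(winner, utilization_ratio(*bids[winner]))  # m2 45.0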

[Figure: for each task [i] with processes [1..n] from process planning, the Capability Evaluation Agent runs a bid among MA[1]..MA[n] based on their utilization ratios and assigns each process to a machine.]
Figure 3 Capability evaluation agent model
4.3. REAL-TIME SCHEDULING AGENT
MODEL
RSA is designed to implement real-time scheduling.
Its overall model is shown in Figure-4. Its inputs
include the initial information from the CEA and the
real-time manufacturing execution information from
the PMA. Its outputs are the task queues, which consist
of a series of tuples {i, j, k, ST, FT}. Here,
{i, j, k, ST, FT} denotes that process j of task i is
assigned to machine k, to be started at ST and
finished at FT.
Three modules, namely the problem formulation
module, the solving module and the re-scheduling
module, are involved in the RSA.
(1) Problem Formulation
Before giving the mathematical formulation, the corresponding notations are defined first; the details can be seen in Table 1.

Table 1 - Notations

$M = \{m_1, m_2, ..., m_m\}$ : set of machines
$T = \{t_1, t_2, ..., t_n\}$ : set of tasks
$n_i$ : the number of processes of task $i$
$TP_i = \{p_1, p_2, ..., p_{n_i}\}$ : set of processes of task $i$
$(T_i, P_j, M_k)$ : represents that process $j$ of task $i$ is machined at machine $k$
$S(T_i, P_j, M_k)$ : starting time of $(T_i, P_j, M_k)$
$PT(T_i, P_j, M_k)$ : processing time of $(T_i, P_j, M_k)$ at machine $k$
$D = \{d_1, d_2, ..., d_n\}$ : set of delivery times of $T$

Based on these notations, the established mathematical model is described as follows.

Objective function:

$\min \max_{i,j,k} \left[ S(T_i, P_j, M_k) + PT(T_i, P_j, M_k) \right]$    (2)

Subject to:

$S(T_i, P_{j+1}, M_a) \geq S(T_i, P_j, M_b) + PT(T_i, P_j, M_b)$    (3)

$S(T_x, P_c, M_k) \geq S(T_y, P_d, M_k) + PT(T_y, P_d, M_k)$ or
$S(T_y, P_d, M_k) \geq S(T_x, P_c, M_k) + PT(T_x, P_c, M_k)$    (4)

$d_i - S(T_i, P_j, M_k) - PT(T_i, P_j, M_k) \geq 0$    (5)

$i, x, y \in [1, n]$, $j \in [1, n_i]$, $c \in [1, n_x]$, $d \in [1, n_y]$, $k, a, b \in [1, m]$

Equation (2) is the objective function: it takes the maximum finishing time of the last process over all manufacturing tasks. This value changes with different scheduling results, so by minimizing it the optimal schedule can be obtained. Equations (3), (4) and (5) are constraints; they represent the process sequence constraint, the resource constraint and the delivery constraint, respectively.
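The sketch below (tuple layout taken from the {i, j, k, ST, FT} queue entries described in section 4.3; the data values are invented) evaluates the objective of equation (2) and checks constraints (3) and (4) for a candidate schedule:

# Candidate schedule: one (task i, process j, machine k, ST, FT) entry
# per assigned process, matching the RSA output format.
schedule = [
    (1, 1, 1, 0.0, 4.0),
    (1, 2, 2, 4.0, 7.0),
    (2, 1, 2, 7.0, 9.0),
]

def makespan(sched):
    # Objective of equation (2): maximum finishing time over all
    # processes; the RSA searches for the schedule minimizing it.
    return max(ft for (_, _, _, _, ft) in sched)

def feasible(sched):
    # Constraint (3): process j of a task cannot start before process
    # j-1 of the same task has finished.
    for (i, j, _, st, _) in sched:
        for (i2, j2, _, _, ft2) in sched:
            if i2 == i and j2 == j - 1 and st < ft2:
                return False
    # Constraint (4): no two processes may overlap on the same machine.
    for a in sched:
        for b in sched:
            if a is not b and a[2] == b[2] and a[3] < b[4] and b[3] < a[4]:
                return False
    return True

print(feasible(schedule), makespan(schedule))  # True 9.0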
(2) Solving module
According to the objective function and the
constraints, the solving module is responsible for
calculating an optimal solution based on an intelligent
algorithm.
For example, the Genetic Algorithm (GA) has been
widely studied, experimented with, and applied in many
manufacturing fields. It can be used in the RSA to
solve the established scheduling problem.
(3) Re-scheduling module
As described in the introduction section, real-time
scheduling plays an important role in current
enterprise management. For each schedule, during
the execution stage, the PMA will feed back the real-
time production information. The re-scheduling module
is used to evaluate whether to execute re-scheduling or
not.
As described previously, the real-time production
information is tracked and traced by the PMA during
manufacturing execution. When exceptions occur, the
re-scheduling module will first identify the type of
the exception. Then, a re-scheduling strategy is put
into effect. Two strategies, namely local re-
scheduling and overall re-scheduling, are designed
in this module. The local re-scheduling strategy
re-schedules a small part of the tasks while the
schedule of the other tasks is not changed. It deals with
exceptions such as the queue exception of a
machine or temporary tasks. The overall re-scheduling
strategy re-schedules all the tasks. It is
seldom executed, only when the majority of the tasks
are affected by exceptions.

[Figure: the Real-time Scheduling Agent receives initial information from the Capability Evaluation Agent and real-time manufacturing information from the Process Monitor Agents and Machine Agents; its Problem Formulation, Solving and Re-scheduling modules produce the task queue.]
Figure 4 Real-time scheduling agent model
4.4 PROCESS MONITOR AGENT MODEL
The PMA model is shown in Figure-5. It acts as a
sandwich layer and plays an important role in managing
and controlling the material and information flows in
the entire shopfloor. Users or other systems can
get or update the real-time WIP (work-in-progress)
information by simply sending a request to the PMA.
During manufacturing execution, the real-
time visibility and traceability of shop-floor WIP
is collected. At the beginning, the PMA will
invoke the data source service to get the necessary
information relevant to the production order, such as
the product BOM and schedule information, from
the up-level EISs (Enterprise Information Systems).
Based on the obtained manufacturing information and
the information schema (wipML) of WIP, a new
WIP instance is created, which includes the
manufacturing BOM (bill of materials) information.
For each node of the manufacturing BOM, its
dynamic information nodes can be captured by the
distributed MAs, and the binding model is used to
build up the binding relationship between the dynamic
information nodes and the corresponding MAs.
During execution, the large volume of manufacturing
information captured by each gateway is
processed by the critical event structure according to
the RSA's requests. Two main components are
involved in the designed PMA to fulfil this purpose;
they are:
(1) Data Source Service
It provides data acquisition, processing and
updating services for sharing and integrating
information between the manufacturing execution level
and the EISs. Due to the difficulties of information
sharing and integration among heterogeneous EISs,
B2MML (Business to Manufacturing Markup
Language) standards are adopted in this component
to provide standard schemas for manufacturing
elements. Through the data source service, the
necessary or dynamic information can
be easily extracted from or updated to
heterogeneous EISs.
The inputs of this component are the parameters
of the data source of the EISs from or to which users
want to acquire or update information, while the
outputs are standard information based on
B2MML schemas.
(2) WIP Tracking and Tracing
It is responsible for configuring the distributed
MAs according to specific logical relationships to get
real-time information on WIP in the entire shopfloor.
The critical event structure is used to obtain more
meaningful and actionable information from a large
amount of low-level events and to control the event-
driven information systems. It aggregates series
of events from the auto-ID devices into high-level
events. Then, based on the timely information stored
in the repository, supervisors can monitor and control
the production process of the overall shopfloor.
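A hedged sketch of this aggregation (the event names and the in/out pairing rule are assumptions for illustration; the paper does not define the concrete critical event types):

# Low-level events as they might arrive from auto-ID devices.
low_level = [
    {"tag": "WIP-007", "reader": "machine-3-in",  "t": 100},
    {"tag": "WIP-007", "reader": "machine-3-out", "t": 160},
]

def aggregate(events):
    # Fold matching in/out reads into one high-level event carrying
    # the derived processing time.
    ins = {e["tag"]: e for e in events if e["reader"].endswith("-in")}
    outs = {e["tag"]: e for e in events if e["reader"].endswith("-out")}
    return [
        {"event": "OPERATION_COMPLETED", "tag": tag,
         "machine": ins[tag]["reader"].rsplit("-", 1)[0],
         "duration": outs[tag]["t"] - ins[tag]["t"]}
        for tag in ins if tag in outs
    ]

print(aggregate(low_level))
# [{'event': 'OPERATION_COMPLETED', 'tag': 'WIP-007',
#   'machine': 'machine-3', 'duration': 60}]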

[Figure: the Process Monitor Agent sits between the Machine Agents and the EISs (PDM, SCM, ERP). Its Data Source Service gets production information and creates a wipML instance; its WIP Tracking and Tracing component configures critical events (CE) over the distributed MAs and gets real-time CE information, providing real-time visibility and traceability and feeding real-time manufacturing information to the Real-time Scheduling Agent.]
Figure 5 Process monitor agent model
5. CONCLUSIONS
Ubiquitous Manufacturing is emerging as an
advanced manufacturing technology (AMT). It
relies substantially on wireless auto-ID / RFID
sensors and wireless information networks for the
collection and synchronization of real-time field
data from manufacturing shop floors. It enables
shop-floor management to realize real-time
production scheduling.
This paper has proposed a reference multi-agent
based real-time scheduling architecture for a ubiquitous
shop-floor environment. The contributions are
summarized as follows.
(1) A machine agent is designed to collect and
process real-time shopfloor data captured by
auto-ID devices such as RFID. These auto-ID
devices are deployed at the machine side to form value-
adding points in a ubiquitous shopfloor
environment.
(2) A capability evaluation agent is designed to
optimally assign tasks to the involved machines
at the process planning stage based on the real-time
utilization ratio of each machine.
(3) A real-time scheduling agent is designed for
manufacturing task scheduling / re-scheduling
strategies and methods according to real-time
feedback.
(4) A process monitor agent is established for
tracking and tracing the manufacturing execution
based on a critical event structure.
The current work will be further extended in our
future research in the following aspects. Firstly,
the proposed multi-agent based real-time shop-floor
scheduling architecture and its models only provide
a reference for ubiquitous manufacturing, and a
great effort is still needed to support the various
EISs of different companies. Secondly, the
proposed multi-agent based scheduling model
should be further extended to real-time internal
logistics planning and scheduling, etc.
ACKNOWLEDGMENTS
We are most grateful to the various companies who
provided technical and financial support to this
research. The authors would like to acknowledge the
financial support of the National Science Foundation
of China (50805116) and the Grant of Northwestern
Polytechnical University (11GH0134).
REFERENCES
Aghezzaf, E.H., Production planning and warehouse
management in supply networks with inter-facility
mold transfers, European Journal of Operational
Research, 2007, Vol. 182, pp 1122-1139.
Bastos, R.M., Oliveira, F.M. and Oliveira, J.P.,
Autonomic computing approach for resource
allocation, Expert Systems with Applications, 2005,
Vol. 28, No. 1, pp 9-19.
Buyurgan, N., Saygin, C., Application of the analytical
hierarchy process for real-time scheduling and part
routing in advanced manufacturing systems, Journal
of Manufacturing Systems, 2008, Vol. 27, pp 101-110
Chan, F. and Zhang, J., A Multi-Agent-Based Agile
Shop Floor Control System, International Journal of
Advanced Manufacturing Technology, 2002, Vol. 19,
pp 764-774.
Cho, S., Prabhu, V. V., Distributed adaptive control of
production scheduling and machine capacity, Journal
of Manufacturing Systems, 2007, Vol. 26, No. 2, pp
65-74.
Guo, Z. X., Wong, W. K., Leung, S. Y. S., Fan, J. T., &
Chan, S. F. Genetic optimization of order scheduling
with multiple uncertainties, Expert Systems with
Applications, 2008, Vol. 35, pp 1788-1801.
Huang, G.Q., Wright, P., and Newman, S., Wireless
Manufacturing: a Literature Review, Recent
Development, and Case Studies, International
Journal of Computer Integrated Manufacturing, 2009,
Vol. 22, No. 7, pp 1-16.


Huang, G., Zhang, Y.F., Jiang, P., RFID-based wireless
manufacturing for real-time management of job shop
WIP inventories, International Journal of Advanced
Manufacturing Technology, 2008, Vol. 23, No. 4, pp
469-477.
Jia, H.Z., Ong, S.K., Fuh, J.Y.H., Zhang, Y.F. and Nee,
A.Y.C., An adaptive upgradable agent-based system
for collaborative product design and manufacture,
Robotics and Computer-Integrated Manufacturing,
2004, Vol. 20, No. 2, pp 79-90.
Jiao, J.R., You, X. and Kumar, A., An agent-based
framework for collaborative negotiation
in the global manufacturing supply chain network,
Robotics and Computer Integrated Manufacturing,
2006, Vol. 22, No. 3, pp 239-255.
Jones, L., Working without wires, Industrial
Distribution, 1999, Vol. 88, No. 8, pp M6-M9.
Jun, H., Shin, J., Kim, Y., Kiritsis, D., Xirouchakis, P.,
A framework for RFID applications in product
lifecycle management, International Journal of
Computer Integrated Manufacturing, 2009, Vol. 22,
No. 7, pp 595-615.
Krothapalli, N. and Deshmukh, A., Design of
negotiation protocols for multi-agent manufacturing
systems, International Journal of Production
Research, 1999, Vol. 37, No. 7, pp 1601-1624.
Lee W.B. and Lau H.C.W., Multi-agent modelling of
dispersed manufacturing networks, Expert Systems
with Applications, 1999, Vol. 16, No. 3, pp 297-306.
Macchiaroli, R. and Riemma, S., A negotiation scheme
for autonomous agents in job shop scheduling,
International Journal of Computer Integrated
Manufacturing, Vol. 15, No. 3, 2002, pp 222-232.
Maturana F.P., Tichy, P., Slechta, P., Discenzo, F.,
Staron, R.J., Hall, K., Distributed multi-agent
architecture for automation systems, Expert Systems
with Applications, 2004, Vol. 26, No. 1, pp 49-56.
Mendes, J., Gonçalves, J., Resende, M., A random key
based genetic algorithm for the resource constrained
project scheduling problem, Computers and
Operations Research, 2009, Vol. 36, pp 92-109.
Monostori, L., Váncza, J., Kumara, S.R.T., Agent-Based
Systems for Manufacturing, CIRP Annals -
Manufacturing Technology, 2006, Vol. 55, No. 2, pp
697-720.
Papakostas, N., Mourtzis, D., Bechrakis, K.,
Chryssolouris, G., Doukas, D., A flexible agent based
framework for manufacturing decision making, In
Proceedings of the 9th International Conference on
Flexible Automation and Intelligent Manufacturing,
Tilburg, Netherlands, June 1999, pp 789-800.
Poon, T. C., Choy, K. L., Chow, Chan, F.T.S., Lau,
H.C.W., A real-time production operations decision
support system for solving stochastic production
material demand problems, Expert Systems with
Applications, 2011, Vol. 38, pp 4829-4838.
Qiu, R.G., RFID-enabled automation in support of
factory integration, Robotics and Computer-
Integrated Manufacturing, 2007, Vol. 23, No. 6, pp
677-683.
Shen, W., Wang, L. and Hao, Q., Agent-Based
Distributed Manufacturing Process Planning and
Scheduling: A State-of-the-Art Survey, IEEE
Transactions on Systems, Man, and Cybernetics, Part
C: Applications and Reviews, 2006, Vol. 36, No. 4,
pp 563-577.
Sikora, R. and Shaw, M.J., A multi-agent framework for
the coordination and integration of information
systems, Management Science, 1998, Vol. 44, No.
11, pp 65-78.
Suh, S.H., Shin, S.J., Yoon, J.S., Um, J.M., UbiDM: A
new paradigm for product design and manufacturing
via ubiquitous computing technology, International
Journal of Computer Integrated Manufacturing, 2008,
Vol. 21, No. 5, pp 540-549.
Wong, T. N., Leung, C. W., Mak, K. L., & Fung, R. Y.
K., Dynamic shopfloor scheduling in multi-agent
manufacturing systems, Expert Systems with
Applications, 2006, Vol. 31, pp 486494.
Wu, D.J., Software agents for knowledge management:
Coordination in multi-agent supply chains and
auctions, Expert Systems with Applications, 2001,
Vol. 20, No. 1, pp 51-64.
Zhang, Y.F., Huang, G.Q., Qu, T., Ho, K., Agent-based
workflow management for RFID-enabled real-time
reconfigurable manufacturing, International Journal
of Computer Integrated Manufacturing, 2010, Vol. 23,
No. 2, pp 101-112.
Zhang, Y.F, Qu, T., Ho, K., Huang G.Q., Agent-based
Smart Gateway for RFID-enabled real-time wireless
manufacturing, International Journal of Production
Research, 2011, Vol. 49, No. 5, pp 1337 - 1352.
FREQUENCY MAPPING FOR ROBUST AND STABLE PRODUCTION
SYSTEMS
Sebastian T. Stiller
Laboratory for Machine Tools and Production
Engineering WZL, RWTH Aachen University,
Germany
s.stiller@wzl.rwth-aachen.de
Robert Schmitt
Laboratory for Machine Tools and Production
Engineering WZL, RWTH Aachen University,
Germany
r.schmitt@wzl.rwth-aachen.de


ABSTRACT
Characterized by a complex network of interwoven tasks, time delays, iterations, and rework caused
by problems and changes of customer specifications, product realization processes are highly
dynamic systems. Applying control engineering methods to product realization processes in order to
treat their dynamic behaviour can bring a significant benefit for the quality management of these
processes. Using control-theoretic metrics and terms, the paper discusses the transferability of the field
of control theory to the analysis and design of organizational production processes and order fulfilment.
Furthermore, it provides an approach towards a description model for quality control loops.
Mapping a production system within the frequency domain will facilitate the setup of the required
control loops and the configuration of a stable and robust production system.
KEYWORDS
Quality Management, product realization, control engineering, discrete state space

1. INTRODUCTION
The environment of enterprises stays turbulent:
manufacturing enterprises are influenced by multiple
dynamic external factors such as the
individualization of products, the acceleration of
product life cycles or the pace of technical
innovations. Moreover, dynamic internal business
factors such as the capability of processes, the
utilization of resources, or the qualification and
capability of employees have a significant influence
on companies' normative, strategic, and tactical
orientation. Therefore, manufacturing enterprises
have to handle the growing variety and dynamics
from internal and external sources (Schmitt and
Beaujean, 2009). Nevertheless, companies cannot
afford the effort to plan all probable states of their
production systems or to invest in costly activities
like fire fighting or specialized task forces to cope
with the consequences of these internal and external
dynamics (Jovane, 2009).
Considering these rising organizational and
technological challenges, the field of quality
management has to answer the question of how the
described dynamics can be handled on the strategic,
tactical and operative levels. As an integrative
approach, quality management models have to
support the decision processes of enterprises
across the different hierarchical levels and the
company-internal boundaries between departments
or divisions. Quality management thereby
empowers enterprises to identify their desired steady-
state equilibrium and provides the principles,
methods and tools to develop enterprises towards
this ideal point or to stabilize them within the equilibrium.
2. SEEKING ROBUSTNESS AND
STABILITY - A NEW UNDERSTANDING
OF QUALITY
Based on the characterizing terms for the
performance measurement dimensions of production
systems, a framework of a quality management
model has to define and analyze the challenges of
markets and customers while also considering
the strategic objectives, the entrepreneurial
conditions and the corporate skills.
Nowadays the design of operations and
processes is often based on management and
quality concepts which are heavily influenced by
various philosophies, concepts and methods such as
Total Quality Management (TQM), Lean
Management, and Six Sigma. Moreover, existing
explaining models like the DIN ISO 9000:2005
series, or evaluating models like the EFQM Model,
are well known and widespread
throughout various industries, strongly interlocked
with the principles, concepts and methods of these
philosophies.
Fulfilling the characteristics of an explanation
model, the EFQM Model measures, for example, the
current state of maturity of operations based on a
strong statement about the cause and response chain
of successful enterprises: "Excellent companies
provide the leadership to build strategies which
incorporate people, partnerships and resources,
implementing and operating efficient and effective
processes, products and services in order to satisfy
the stakeholders and gain extraordinary financial
results."
How do these models encounter the rising and
central questions of companies about a stable
equilibrium in rapidly changing, dynamic
environments, as a guarantor for competitiveness
and viability? Can these models help companies to
align their strategies, structure, and operations
towards the desired equilibrium?
Both models, the EFQM and the DIN ISO
9001:2007, have in common that they emphasize
the increase of the overlap rate of customer demands
and product features (Gucanin, 2003) as the target
equilibrium (figure 1).

[Figure: maximization of the overlap rate between customer requirements and product features.]
Figure 1 Quality Management as a maximization problem
Accordingly, the models proclaim the optimization
of the overlap of customer demands and product
features as a maximization problem. But are
companies really able to align their strategies to a
one-dimensional maximization problem? The
analysis of this optimization problem shows that the
complexity of the desired equilibrium was reduced
using different implicit restrictions. With the two
target fields of the traditional quality understanding,
customer demands and product features, two main
restrictions were implied:
Restriction 1 (organizational-sided
assumption): the companies possess all the
skills to operate exactly as their strategies
dictate.
Restriction 2 (market-sided assumption):
companies already know or have decided who
their customers are.
The entrepreneurial praxis has many times proven
these implicit assumptions to be too restrictive.
Especially in high-wage countries, companies which
produce within the given definition of quality,
delivering high quality products and running both
effective and efficient processes, are increasingly
suppressed and substituted by competitors from low-
wage countries (Tseng, 2003). Hence, pure
adherence to the traditional quality understanding
does not guarantee economic and entrepreneurial
success. Therefore enterprises cannot trust in the
unidirectional maximization of their quality target
parameters, but have to balance their position within
the target field considering conflicting target
parameters.
Due to their primarily value-adding-oriented view
of the process, the quality management models
further fail to give information about what a
company has to align in order
to reach the aforementioned desired equilibrium.
Meanwhile, the models cannot provide answers
about how to establish the needed feedback
mechanisms in order to institutionalize
organizational learning and the dampening of
oscillations caused by disturbances.
The philosophy of entrepreneurial quality
management attempts to close these gaps and
ameliorate existing quality management models.
Instead of the traditional maximization problem of
customer demands and product features, a new
model has to allow companies to identify targets
and balance them towards their desired equilibrium.
The management model is built on the
entrepreneurial quality philosophy, which disperses
the one-dimensional focus and breaks open the given
restrictions.
To start with, Restriction 1 assumed that the
operations are able to produce the exact product
characteristics which the management dictates. In
order to relax this situation, the field of product
features must be advanced towards a higher
resolution. This is achieved by the consideration of
the corporate orientation and the corporate skills.
Simultaneously, the remaining market-sided
constraint, assuming that the company already
knows the targeted customers, is dissolved since the
customer requirements stay as the counterpart to the
organizational characteristics. Figure 2 shows the
triangle of the Entrepreneurial Quality Philosophy.

[Figure: triangle of corporate skills, customer requirements and corporate orientation, with the overlap rate between requirements and characteristics in the centre.]

Figure 2 The Entrepreneurial Quality Philosophy
After the introduction of the entrepreneurial
quality management philosophy, these elements
must be incorporated into a framework which enables
companies to design their structures, operations, and
mechanisms in order to reach the desired
equilibrium of entrepreneurial quality.
3. THE AACHEN QUALITY
MANAGEMENT MODEL
The Aachen Quality Management Model shown in
figure 3 was designed to meet this need. It provides
a scope of action, which allows the design of
entrepreneurial quality management for a company
by considering strategic objectives, entrepreneurial
conditions, resources and product life cycles
(Schmitt, 2007). The constituting elements of the
Aachen Quality Management Model are Market,
Management, Quality Stream and Resources &
Services.

Figure 3 The Aachen quality management model
The unique outline of the Quality Stream consists of
two structural elements: the Quality Forward Chains
and the Quality Backward Chain.
The Quality Forward Chains account for the
proactive and preventive measures per product
group and life cycle, such as quality gates in the
product development process. Therefore they cannot
just be interpreted as the value creation processes,
but also reflect the different states of the products
within the product development and production
processes.
The Quality Backward Chain works as the central
feedback mechanism organizing reactive and
corrective actions for all processes and product
groups. As stated before, the functioning of each
mechanism requires a closed-loop feedback
mechanism, where the system states are
continuously planned, monitored and adapted from
the view of the relevant perspective. Building
integrated and cascading quality control loops, the
proper cooperation of the Quality Forward Chains
and the Quality Backward Chain is the central
prerequisite for stability within the field of
Entrepreneurial Quality.
The notions of Entrepreneurial Quality, stability
and robustness within the Aachen Quality
Management Model can be used to derive further
models, for example the redesign of process and
project landscapes within product realization
(Schmitt, 2008). In the following paragraphs, a
model targeting the balanced design between
corporate orientation and skills by frequency
mapping will be presented.
4. TODAY'S CHALLENGES FOR THE
STABILIZATION IN PRODUCT
REALIZATION
Within the quality stream and the Quality Forward
Chains, the product realization processes combine
the major business processes of producing
companies (innovation, product development and
production), containing many of the companies'
core competencies. Besides the rising technical
complexity of products, three main drivers of
disturbances for the management of product
realization processes are frequently discussed in the
literature and within companies (figure 4):

[Figure: the main challenges within the design of product realization, mapped onto the production cycle (ramp-up, production, idle state, ramp-down, downtime) and the product realization phases (innovation, product development, production): (1) iterations and interdependencies caused by information dependency (example: V-model); (2) rework caused by change requests, problems and failures, and learning effects (example: rework after the assessment of work products at a quality gate); (3) managers either fail to recognize the effect of problems or changes in time or are overwhelmed by escalations (example: reviews, milestones and quality gates). Delays within the processes and the controller lead to oscillating systems with feedback loops.]
Figure 4 Challenges within product realization and production cycles according to Heinen

All process phases of product realization
are characterized by iterations and
information dependencies. The dependency
between different activities or tasks is
especially known within innovation and
product development processes, where
activities need a degree of information from
other activities in order to be initialised.
Examples of this effect are development
according to concurrent engineering or
development activities within the V-model.
The permanent communication brings the risk
of oscillations in the work progress (Browning
and Ramasesh, 2007).
Rework caused by change requests of the
customer, learning effects and iterations,
especially during product development,
and both problems and failures during
development and production amplify the
dynamic effects (Eppinger et al., 1994).
Moreover, managers of product realization
processes either recognize problems late,
risking the violation of the product or
project targets, or tend to be overwhelmed
by the controlling work due to frequent
audits, assessments or reviews. Hence, not
only the iterations and rework cycles of the
processes, but also the corrective measures
of the managers are distinguished by time
delays (Schmitt, 2010).
From the perspective of systems theory, product
realization processes are highly dynamic systems
characterized by feedback links between activities,
by time delays within the controlled processes, and
by the impact of the controller itself.
5. TOWARDS A MODEL FOR THE
ROBUST DESIGN OF PRODUCT
REALIZATION
Increased performance of realization processes in
terms of time-to-market, productivity and costs can
be achieved by focusing on both the structural design
and the control policy of realization processes,
principal tasks of modern quality management.
The model contains three sectors, ranging from
the illustration of the product realization with the
help of a process reference model, through the
identification of the critical elements for the
evaluation of the robustness of the product
realization, to the frequency mapping which
allows the simulation of different management
policies assessing the stability of the different
controller conditions (figure 5).

[Figure: the three sectors of the model. Left: process reference model of the product realization process (activities, information flows and dependencies). Centre: elements for the evaluation of product realization process robustness - the control process around workload, finished work, unknown rework, learning effects, change requests and problems, with problem/change management and process management acting on scope, employees and deadlines across the production cycle (ramp-up, production, idle state, ramp-down, downtime). Right: frequency mapping for robust process design - conceptual design of the closed loop structure and selection of the controller parameterization.]

Figure 5 The three sectors of the model for the robust design of product realization

While the first subsection of chapter 5 explains
the core of the model, which is linked to all three
sectors, the following subsections give an
introduction to each sector.
5.1. THE CORE MODEL

Each sector inherits parts of the structure or
systematics of the core model. Thereby the interfaces
of all sectors and the proper transmission of the
parameters and structures between the sectors can
be easily secured.
The core model has both structural and systemic
design components.
5.1.1. The structural model component
The structural model is derived from the basic
architecture of control loops in control engineering.
According to the definition of control loops, a
quality loop contains, besides the controlled process,
three major stages, each executing a part of the
quality control process: the sensor unit, the
controller unit, and the actuator unit (figure 6).


Figure 6 Architecture of an elementary control loop
Sensor unit
The main task of the sensor unit is to
monitor the system and inform the controller about its current
state. All control units are assumed to
work in a discrete state space: constant
monitoring of the process is assumed to be impossible,
so the control elements operate only at
equidistant points in time. Hence the
sampling frequency is constant and does not
depend on a defined event. Examples of quality
sensors are: reports from factory workers, defect
detection during QA spot tests, customer
complaints, or new issues discovered while
resolving known problems.

Control unit
The main task of the control stage is the selection
of measures and management policies for changing and
adapting the controlled system. In the context of the
quality control system, however, the selection or development
of an effective solution for a given issue alone is
not sufficient.

Actuator unit
The implementation of the measure is the main
function of the actuator. It has to locate the exact
stage for initiating the measure within the forward
chains, deciding the scope, speed and costs of the
implementation. Examples of measures are:
adjustments of scope in product development, changes in
staff headcount, or postponement of deadlines.
Additionally, the actuator stage is responsible for
providing the means of evaluating the success of a
measure.
5.1.2. The systemic model component
For control-theoretical methods like the
frequency mapping of product realization, which is
a quantifiable approach to stability analysis, systemic
and quantitative components also need to be
defined. As in one of the early steps of any
analysis and design project in control engineering, a
control variable has to be identified (Lunze, 2007).
A quantifiable variable is needed which
corresponds to the process and product quality of
the system and can give a measure of the dynamics
in product realization. The recommended measure
for the status and quality of product realization
processes is the amount of checked and released
tasks or work products per period, which also plays a
significant role within the rework cycle model
known in system dynamics (Cooper,
1980). The model compares the amount of tasks to
be done, set by the management, with the amount of
tasks completed, checked and released. The
difference between these values, the control
deviation, depicts the necessary rework increasing
the inventory of tasks to be done. Hence the sensor
has the function to check and release or deny the
work products of the tasks. The manager can affect
the proportion of released and open tasks by, e.g.,
increasing the staff or changing the project scope.
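To make the loop concrete, the following minimal sketch simulates such a rework cycle in discrete time; the rates, the rework fraction and the fixed review delay are illustrative assumptions, not values taken from the model described here.

```python
# Illustrative discrete-time rework cycle: a delayed sensor (review) returns
# a share of "finished" work to the inventory of tasks to be done.
# All numeric parameters are assumed for demonstration only.
def simulate_rework_cycle(total_tasks=100.0, periods=30,
                          completion_rate=10.0, rework_fraction=0.2,
                          review_delay=2):
    to_do = total_tasks
    released = 0.0
    pending_review = [0.0] * review_delay   # work waiting for the delayed check
    history = []
    for _ in range(periods):
        done = min(completion_rate, to_do)
        to_do -= done
        pending_review.append(done)
        checked = pending_review.pop(0)     # sensor acts with a time delay
        rework = rework_fraction * checked  # control deviation -> new inventory
        released += checked - rework
        to_do += rework
        history.append((to_do, released))
    return history

for k, (open_tasks, done) in enumerate(simulate_rework_cycle()):
    print(f"period {k:2d}: open = {open_tasks:6.1f}, released = {done:6.1f}")
```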
The structural and systemic components are used
by the three model sectors: reference process
description, robust process analysis and frequency
control.
5.2. DESCRIPTION OF A PROCESS
REFERENCE MODEL
According to ROSEMANN the main objective of a
reference model is to streamline the design of
enterprise-individual (particular) models by
providing a generic solution (Rosemann, 2003).
Hence reference models are blueprints of best
practice, which accelerate the modelling of
individual processes by providing a set of
potentially relevant processes and structures.
The process reference model inherits the structure
of the quality loops from the core model. Therefore
the model contains a reference description of a
production realization process, and the process steps
of the sensor, controller and actuator units of the
quality control loops. Its process view defines the
integrated monitoring and processing of failures,
problems and change requests and channels the
information towards the controller which maintains
the management of the product realization process.
Within the actuator the management policies for
countermeasures are defined.
It also captures the information network such as
information flows, process dependencies and
interfaces between roles, responsibilities and ICT
solutions supporting the processes or quality loop
units.
The reference model can be easily adapted to
product realization processes and the control loops
of different companies allowing gathering all the
necessary information for the analysis of robustness
and stability in the frequency mapping sector.
5.3. ROBUST PROCESS ANALYSIS
In control engineering, systems are called robust
if the control variables show the desired behavior
even if system parameters shift significantly. In
quality management, TAGUCHI introduced a well-known
method for robust design. Similar to control
engineering, robustness is defined as the
insensitivity of products, processes and systems to
noise. With the signal-to-noise ratio, TAGUCHI
defined a method to quantify robustness.
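For reference, TAGUCHI's signal-to-noise ratio in its standard nominal-the-best form relates the mean response to its variance over the noise replicates (this is the textbook formula, quoted here for illustration):

$$ SN = 10 \log_{10}\left(\frac{\bar{y}^{2}}{s^{2}}\right) $$

where $\bar{y}$ is the mean and $s^{2}$ the variance of the observed quality characteristic; a larger ratio indicates a design that is less sensitive to noise.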
The robust process analysis uses a similar
approach. It assesses the robustness of the product
realization processes according to the tailored
process reference model. For example, the
connectivity between tasks is one important
measure of process robustness: it takes into account
the tasks running parallel to a given task in the sense of
concurrent engineering, permanently exchanging
work products. Moreover, the number
of tasks affected by rework in one task provides
another important measure of the robustness level.
Information breaks due to system interfaces and
transmission between roles and responsibilities
also allow inference on the robustness of the tasks.
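A minimal sketch of how such indicators could be computed from a binary task dependency matrix follows; the concrete metric definitions are assumptions for illustration, since the text introduces connectivity and rework reach only qualitatively.

```python
# Illustrative robustness indicators over a binary dependency matrix D,
# where D[i][j] = 1 if task j consumes work products of task i.
def connectivity(D, i):
    """Number of tasks exchanging work products with task i."""
    return sum(D[i][j] or D[j][i] for j in range(len(D)) if j != i)

def rework_reach(D, i):
    """Number of tasks transitively affected if task i is reworked."""
    affected, frontier = set(), {i}
    while frontier:
        j = frontier.pop()
        for k in range(len(D)):
            if D[j][k] and k not in affected:
                affected.add(k)
                frontier.add(k)
    return len(affected - {i})

D = [[0, 1, 1],   # task 0 feeds tasks 1 and 2
     [0, 0, 1],   # task 1 feeds task 2
     [1, 0, 0]]   # task 2 feeds task 0 (an iteration loop)
print(connectivity(D, 0), rework_reach(D, 0))   # -> 2 2
```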
5.4. TIME DISCRETE CONTROL MODEL
FOR FREQUENCY MAPPING
When Toyota introduced takt time as the central
element for the synchronization of its production
system, the design state space for production
systems shifted from the planning of processes in
the time domain to the frequency domain. This method
is well known in control engineering, which
transfers complicated differential equations in the time
domain via the Laplace transformation to the frequency
domain, where the system can be set up easily
following simple mathematical rules. The
methodology of frequency mapping follows this fundamental
idea.
From a systems engineering viewpoint, the
product realization process contains a series of
connected inventories with internal precedence
relationships. The inventories are the checked and
released tasks or work products of the product
realization process, or of a single phase, waiting for
further processing. All quality control loop
elements, such as the sensor for the detection of unfinished
or incorrect tasks or tasks affected by change requests,
are characterized by time delays endangering the
stability of the product realization system.
Describing the model in a time-discrete state space
accounts for these delays and eases the analysis
and design of the controller and control parameters.
The analyzed robustness of the process elements
contributes to the parameter set of the model.
The z-transformation of the resulting difference
equations makes the analysis of the system's
behaviour in the frequency domain possible.
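As a minimal illustration of this step, consider an assumed first-order backlog dynamic in which the controller reduces the open-task inventory x_k proportionally (gain $K$, measure effectiveness $b$) but only after a delay of $n$ periods:

$$ x_{k+1} = x_k - bK\,x_{k-n} \qquad\Longrightarrow\qquad z^{n+1} - z^{n} + bK = 0 $$

The closed loop is stable only if all roots of the characteristic polynomial lie inside the unit circle; for $n = 0$ this reduces to $|1 - bK| < 1$, and growing delays shrink the admissible range of the controller gain. These concrete dynamics are an assumption chosen for illustration, not the authors' full model.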
6. CONCLUSIONS
Quality management has to give answers to
companies acting in markets characterized by
change and increasing competition. Hence, the
restriction of the traditional philosophy of quality
management as a maximization problem has to be
resolved towards an entrepreneurial understanding
of quality management as a stabilization problem.
A core element enabling companies to cope
with change and disturbances in business processes
like product realization are feedback mechanisms.
The structure and conduct of a quality control loop
model can stabilize realization processes by
dampening oscillations caused by iterations, rework
and changes. The three sectors of the illustrated
model take into account structural and systemic aspects
of product realization processes, leading to the
methodology of frequency mapping in order to
define stable working points. Further research will
address the development of the robust process
analysis and the frequency mapping, and will evaluate the
model in company environments.
7. ACKNOWLEDGMENTS
The research is based on the results of the
graduate program Ramp-Up Management
(Development of Decision Models for the
Production Ramp-Up) and the research project
(QC) Quantifiable Closed Quality Control
within the Cornet framework. The (QC) project is
funded with budget funds of the Federal Ministry
of Economics and Technology (BMWi) via the
German Federation of Industrial Research
Associations "Otto von Guericke" e.V. (AiF). The
authors would like to thank all parties involved.
REFERENCES
Browning T. R., Ramasesh R. V. A Survey of Activity
Network-based Process Models for Managing Product
Development Projects, Production and Operations
Management, 16, 2007, pp. 217-240
Cooper, Kenneth G., Naval ship production: A claim
settled and a framework built, Interfaces, 10 (6), 1980,
pp. 30-36
Eppinger S. D., Whitney D. E., Smith R. P., Gebala D., A
model-based Method for Organizing Tasks in Product
Development, Research and Engineering Design, 6
(1), 1994, pp. 1-13
Jovane F., Westkämper E., Williams D., The
ManuFuture Road - Towards Competitive and
Sustainable High-Adding-Value Manufacturing,
Springer, Berlin, 2009, pp. 31
Lunze J., Regelungstechnik 1 - Systemtheoretische
Grundlagen, Springer, Berlin, Germany, 2007, p. 317
Rosemann M., Application Reference Models and
Building Blocks for Management and Control,
Handbook on enterprise architecture, Springer Verlag,
2003
Schmitt R., Beaujean P., The Quality Backward Chain.
The Adaptive Controller of Entrepreneurial Quality,
In: Huang, G. Q., Mak, K. L., Maropoulos, P. G.
(Eds.), Proceedings of the 6th CIRP-Sponsored
International Conference on Digital Enterprise
Technology, Springer, Berlin, 2009, pp. 1133-1143
Schmitt R., Stiller S., Beaujean P., Introducing Quality
Control Loops for the Integrated Analysis and Design
of Stable Production Systems, 10th International
Symposium on Measurement and Quality Control
2010, Osaka University, Japan
Schmitt, R., Beaujean, P., Kristes, D., Entrepreneurial
Quality Management. A new definition of quality,
IEEE International Engineering Management
Conference, IEEE; New Jersey, 2008, pp. 275-280
Tseng M. M., Industry Development Perspectives:
Global Distribution of Work and Market, presented at
the CIRP 53rd General Assembly, Montreal, Canada,
2003
Proceedings of DET2011
7th International Conference on Digital Enterprise Technology
Athens, Greece
28-30 September 2011

AN AUTOMATED BUSINESS PROCESS OPTIMISATION FRAMEWORK FOR
THE DEVELOPMENT OF RE-CONFIGURABLE BUSINESS PROCESSES: A
WEB SERVICES APPROACH
Ashutosh Tiwari
Cranfield University, Cranfield, UK
a.tiwari@cranfield.ac.uk
Christopher Turner
Cranfield University, Cranfield, UK
c.j.turner@cranfield.ac.uk


Alex Alechnovic
Learning Resources International (LSN-LRI),
Olney, UK
AAlechnovic@lsn-lri.org.uk
Kostas Vergidis
Cranfield University, Cranfield, UK

ABSTRACT
The practice of optimising business processes has, until recently, been undertaken mainly as a
manual task. This paper provides insights into an automated business process optimisation
framework by using web services for the development of re-configurable business processes. The
research presented here extends the framework of Vergidis (2008) by introducing web services as a
mechanism for facilitating business process interactions, identifying enhancements to support
business processes and undertaking three case studies to evaluate the proposed enhancements. The
featured case studies demonstrate that an increase in the number of available web services gives rise
to improvements in the business processes generated. This research highlights an increase in the
efficiency of the algorithm and the quality of the business process designs that result from the
enhancements. Future research directions are proposed for the further improvement of the
framework.
KEYWORDS
Web service, Business process, Optimisation

1. INTRODUCTION
Business processes map complex organisational
interactions, often describing tasks that are
undertaken manually. However, business process
automation can be achieved by translating manual
procedures or using semi-automated tools.
Davenport (1990) defines a business process as a set
of related tasks which are executed to achieve
desired outcomes. The aim of a business process is
to perform a business operation, i.e. any service-
based operation that is producing value for the
organisation (Tiwari et al., 2010). According to
Vergidis et al. (2008), business process definitions
are usually very simplistic or specific to the industry
from which they emerge.
It may be asked why there is a need to optimise a
business process. Hammer (1990) indicates that
companies tend to use information technology to
speed up old business processes without changing
them. This can lead to inefficient processes that do
not recognise or incorporate more recent automated
process steps. Optimisation, in relation to business
processes, is about improving performance and
achieving maximum results within time, cost and
efficiency parameters. Vergidis et al. (2007)
suggested that improving performance helps to
establish competitive advantage for organisations.
These authors also noted that optimisation has a
direct implication on costs and process duration.
Business processes are represented in this paper as
being composed of tasks, the discrete steps or sub-components
of a process, and resources, the inputs
and outputs of a task. Within this paper each task
is represented by a web service performing a
specific function. Resources within a process link up
all of the tasks; they are the inputs and outputs of
each web service in a process (in graph theory
terms, resources as described in this paper can be thought
of as edges). The use of web services in this
research stems from the rise of the Service Oriented
Architecture (SOA), on which web services are
based. The functionality of each web service is
described by the interface it exposes. The interface
of a web service defines the inputs it may receive
and the outputs it returns. Nagappan et al. (2003)
state that web services are modular business
applications that expose business logic as a service
over the Internet by subscribing to, invoking and
finding other services. The concept of interchangeability
of web services is an important process
improvement benefit utilised by the business
process optimisation framework to allow tasks to
be swapped in and out.
Each task within a process can be given attributes
such as cost and efficiency. In this way a task of a
process may be changed in order to change the overall
attribute totals, for example to reduce the overall cost of a
process if each task has a cost attribute attached to
it. In the case studies within this paper the attributes
cost and efficiency are used as optimisation
parameters (reduced cost and increased efficiency
are the target outcomes).
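A brief sketch of this representation is given below; the class and function names are illustrative (the framework's internal data structures are not published in this form), while the attribute values are taken from the task library presented later in Table 2.

```python
# Sketch of tasks as web services with cost/efficiency attributes.
from dataclasses import dataclass

@dataclass
class WebServiceTask:
    name: str
    inputs: set        # resources consumed (edges into the task)
    outputs: set       # resources produced (edges out of the task)
    sdp: int           # cost attribute (Service Delivery Price)
    sft: int           # efficiency attribute (Service Fulfilment Target)

def process_attributes(tasks):
    """Aggregate attribute totals for a candidate process design."""
    return sum(t.sdp for t in tasks), sum(t.sft for t in tasks)

login = WebServiceTask("Entrust Login", {"0"}, {"1"}, 206, 103)
pay = WebServiceTask("Internet Payment Systems", {"1", "2"}, {"3"}, 226, 105)
print(process_attributes([login, pay]))   # -> (432, 208)
```

Swapping one task for a cheaper web service with matching inputs and outputs lowers the process cost total without breaking the resource links.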
The business process optimisation framework
presented in this paper is a soft computing approach
utilising Evolutionary Multi-objective Optimisation
Algorithms (EMOAs). Evolutionary techniques
allow for the production and exploration of a
population of diverse process designs based on a
specific set of process requirements (Tiwari et al.,
2010). Wang et al. (2004) note that process
optimisation is a difficult task due to the inherent
discontinuous nature of the underlying mathematical
models. In terms of related work, Hofacker and
Vetschera (2001) have attempted to transform and
optimise a business process model using GAs,
though they concede that the results they obtained
were not satisfactory. Tiwari et al. (2006) and
Vergidis et al. (2006) extended the mathematical
model of Hofacker and Vetschera (2001) and
utilised multi-objective optimisation algorithms,
such as the Non-Dominated Sorting Genetic
Algorithm 2 (NSGA2). The results obtained from
these investigations were satisfactory and led to the
development of the business process optimisation
framework (bpoF). The aim of the research presented
in this paper is to show that an increase in the
number of available web services gives rise to
improvements in the business processes generated.
In the case studies the use of a modified and
extended web service library is evaluated to this
end.
2. BUSINESS PROCESS OPTIMISATION
FRAMEWORK (BPOF)
Vergidis (2008) proposed an evolutionary multi-objective
optimisation framework for business
process designs. The main steps and the structure of
the business process optimisation framework (bpoF)
are shown in Figure 1. This research utilises
NSGA2 and the Large Scale Search Algorithm
(LSSA). NSGA2 is responsible for producing and plotting the
optimised results; it is one of the most popular engines for
this type of multi-objective optimisation. The
main parameters for use with bpoF are shown in
Table 1.
From Table 1 it can be seen that the population is
set to 250. This means that 250 versions of a
business process are produced for each generation
that the evolutionary algorithm is run for (25,000
generations are iterated, as shown in Table 1). Table 1
also shows that two objectives are being
optimised: cost, represented in the case
studies as Service Delivery Price (SDP), and
efficiency, represented as Service Fulfilment Target
(SFT).
The Business Process Optimisation Framework
(bpoF), presented here, is described in detail in
Vergidis (2008).
Table 1 - General parameters

Population   Generations   Crossover Probability   Mutation Probability   Objectives
250          25,000        0.8                     0.2                    2



Figure 1 - The main steps of the business process optimisation framework (bpoF) (Vergidis, 2008)
The bpoF consists of five steps:
1. Generate random population: The first step of
the optimisation process is the generation of a
random population. This step occurs only once in
the optimisation process, as the population is then
evolved for a defined number of generations.
However, for each of the sets there is a constraint in
the random allocation of tasks. The constraint is that
a task must appear only once in the same set. This
constraint avoids having duplicate tasks in one set
and in a potential business process design.
2. Check constraints: For each solution of the
population, the problem constraints are checked.
Note that bpoF checks the constraints prior to
solution evaluation due to a specific reason: the
constraints modify the solution. One particular
constraint measures the Degree of Infeasibility
(DoI) of the solution. It is here that the Process
Composition Algorithm (PCA) is run. The PCA is
an algorithm for composing new business process
designs. The PCA ensures that there is a one-to-one
relationship between the inputs and outputs of tasks
within the solution to ensure consistency in the
optimisation. If a solution cannot be built into a
graph (because edges between graph tasks are
missing for example) the solution is deemed
infeasible and a penalty is added. Additional
optional constraints can force solutions to contain or
exclude a certain set of tasks.
3. Evaluate solution: The solution evaluation
involves two stages based on the proposed
representation: (i) The Task Attribute Matrix
(TAM) is created and (ii) the various Process
Attributes (PAs) are calculated. The TAM is created
based on an updated version of the solution
involving the tasks in the design and their attribute
values. Based on this matrix, the solution is
evaluated in terms of the process attribute values.
4. Perform crossover: Crossover is a genetic
operator that exchanges information between two
solutions. Crossover occurs directly in the Nd set of
each solution. Initially, the solutions are selected for
crossover based on a given crossover probability.
The solutions that are chosen for crossover are split
into pairs. For each pair a unique crossover-point is
defined based on a random number (between 1 and
nd-1). Note that step 2 checks whether the solution
is feasible.
5. Perform mutation: This genetic operator
randomly alters information in a chosen solution.
The operator is applied on the N
d
set of tasks of a
particular solution. When mutation occurs a task is
replaced with an arbitrary task from the task library
(the task library is a set of tasks which may be
inserted into a given process if the input and output
resources of a task, selected from the library,
correspond with the input and output resources of a
set of adjoining tasks in that process). A simplified
sketch of this evolutionary loop is given below.
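The sketch below is illustrative only: the selection scheme and the constraint handling are placeholders (the real framework uses NSGA2 ranking, the Degree of Infeasibility penalty and the Process Composition Algorithm, none of which are reproduced here), and all names are assumptions.

```python
# Skeleton of the bpoF loop: generate once, then evolve with crossover
# and mutation; constraint checking and evaluation are stubbed out.
import random

def evolve(task_library, evaluate, n_tasks=6, pop_size=250,
           generations=25000, p_cross=0.8, p_mut=0.2):
    # Step 1: random population, no duplicate tasks within a solution
    pop = [random.sample(task_library, n_tasks) for _ in range(pop_size)]
    for _ in range(generations):
        for sol in pop:
            # Step 4: single crossover point with a random partner
            if random.random() < p_cross:
                mate = random.choice(pop)
                cut = random.randint(1, n_tasks - 1)
                sol[cut:], mate[cut:] = mate[cut:], sol[cut:]
            # Step 5: replace one task with an arbitrary library task
            if random.random() < p_mut:
                sol[random.randrange(n_tasks)] = random.choice(task_library)
        # Steps 2-3: feasibility checking and TAM-based evaluation would
        # rank the population here; a single objective stands in for both
        pop.sort(key=evaluate)
    return pop
```

In the real framework, steps 2 and 3 repair or penalise infeasible designs before NSGA2's non-dominated sorting ranks the population on both SDP and SFT.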
In this approach the Large Scale Search
Algorithm (LSSA) maps all possible solutions that
constitute the overall search space. NSGA2
complements LSSA by providing the capability to
visualize the results in the form of a scatter graph
with Pareto front. In order to view individual results
and process graphs the JGraph (2011) software is
used. JGraph (2011) translates each individual into
a flowchart showing tasks linked by resources (and
includes AND and OR construct indicators).

3. BUSINESS PROCESS OPTIMISATION
USING WEB SERVICES
As mentioned earlier in this paper each task in
the process is represented by a web service. Each
web service has two attributes: cost (Service
Delivery Price or SDP) and efficiency (Service
Fulfilment Target or SFT). These values are
fictional though they do represent a rating
mechanism for web services. Each business process
can be composed of many web services generated in
a random order. Every web service has its specific
function which depends on its inputs and outputs.
This ensures that the algorithm is able to select a
better web service, depending on the requirements.
For each scenario appropriate web services are
chosen and converted into a script-readable
format for the algorithms. After execution, NSGA2
generates 250 graphical business processes and a
log file with cost and efficiency values. These
values are plotted with a scatter diagram to compare
the results.
Three scenarios are featured in this paper. The
first scenario describes an automated sales
forecasting process (scenario provided by Grigori et
al. (2004)). The scenario is described by the initial
business process design shown in Figure 2. The
initial process design shows the main process steps
(each represented by a web service) and the
interconnections between the steps. The scenario starts
with two inputs: (a) company name and (b) market
update request.
One output is produced in this scenario: (a)
results report (which is the completed sales
forecast). An information retrieval stage, whereby
financial information about a business is retrieved
along with an update from the financial markets, is
the first step in this scenario. In subsequent stages a
sales forecast is created and a graph service
provides a number of visualisations of the forecast
data for inclusion in the report.
Scenario 2, shown in Figure 3, describes the
placing of an order in an on-line store (scenario
provided by Havey (2005)). The initial process
design starts with three inputs: (a) customer ID &
password, (b) order details and (c) website tracking
request (required to track the customer's progress
through the website). The customer credentials are
necessary to access the on-line store and form the
first step of this process. A secure online payment is
then made for the goods, shown in step 2 (Vergidis,
2008). Paying for the order invokes the payment
validation in step 3 and the monitoring of the order
progress, step 4. The web analytics track the
customer's progress in the website and are the final
step. Three outputs are evident from this process: (a)
payment confirmation confirms that the payment
processing is successful, (b) order tracking status
returns the order status in terms of delivery to the
customer and (c) website statistics record the
customers' behaviour in the website and influence
the store's marketing strategy in terms of
customers' individual needs (Vergidis, 2008).
Scenario 3, shown in Figure 4, describes an initial
design for a fraud investigation process (scenario
provided by Havey (2005)). The process requires one
input, the security credentials of the customer. The
first step of the process utilises the security
credentials to allow the customer access to the data.
Steps 2 and 3 are executed in parallel; one check is
carried out on the customer's identity and one on
their credit history. After the checks in steps 2 and 3
are completed, the outcomes are compiled into a
report which forms the single output of the scenario.





Figure 2 - Initial business process design of the sales forecasting scenario (scenario 1)






Figure 3 - Initial business process design of the on-line order placement scenario (scenario 2)

Figure 4 - Initial business process design of the fraud investigation scenario (scenario 3)
Table 2 shows an example task library for use
with bpoF. In Table 2 it can be seen that each task is
represented by a web service, with each web service
requiring a number of inputs and outputs. The cost
(Service Delivery Price or SDP) and efficiency
(Service Fulfilment Target or SFT) attribute values
for each web service are also shown. In order to
expand and refine the web service library beyond
the work of Vergidis (2008) it has been necessary to
employ a categorisation of web services.
The most common approach to web service
categorisation is to classify web services by their
functionality. The intention is to help bpoF to find
appropriate web services more quickly. This
categorisation strategy draws on three
different initiatives. Large Internet companies such
as Google, Yahoo, eBay and Amazon expose a
number of different web services through their
APIs; they also attempt to provide a directory with
basic categories that can be assessed in terms of the
functional characteristics of web services. Another
approach is to study the library in terms of the
inputs and outputs of each web service and make a
comparison on that basis. Lastly, there is an
opportunity to assess web services by business
models, since almost every web service is built on a
revenue-raising basis. Javalgi et al. (2005) identified
four main internet activities which add value to
information-based products: search, transaction,
evaluation, and problem solving. The three
initiatives described above have influenced the
route selected for the categorisation used in this
work. After creating the enlarged web service
library, an analysis was completed to detect the most
frequent transformational activities and unite them
into functional groups. The categorisation is displayed in
Table 3.
Table 2 - Example partial task library (used in Scenario 2)

No.  Task Name                      Input(s)  Output(s)  SDP  SFT
0    Achworks Soap (Rico Pamplona)  1, 2      3          208  113
1    Drupal Authentication          0         1          200  103
2    Entrust Login                  0         1          206  103
3    ecommStats Web Analytics       6         7          218  112
4    Internet Payment Systems       1, 2      3          226  105



Table 3 - Web service categorisation for use with bpoF

Functionality            Description                                  No. of web services
Information Analysis     Responsible for searching, evaluating,       21
                         comparing, forecasting, listing, and
                         monitoring information.
Event Driven Services    Responsible for providing notifications      10
                         and alerts based on triggers.
Security Checks          Perform verification checks to detect        8
                         frauds, assess risks, and validate
                         information.
Location Based Services  Provide geography-related services.          8
User Profile             Involve operations with user accounts.       8
Statistic Services       Track real-time and historical records       7
                         to generate reports.
Payment Processing       Facilitate the purchasing process from       6
                         filling the shopping basket to confirming
                         payment.
Records Management       Update, delete, remove, add, amend, edit.    6
Data Manipulation        In charge of exchanging information          6
                         between different parties, converting and
                         calculating new combinations, and
                         providing translating or transliterating
                         services.
Authentication Services  In charge of identifying requesters'         5
                         credentials and authorising requesters to
                         access or perform certain actions.
Integration Services     Provide services for embedding               5
                         information by integrating third-party
                         services and their results into own web
                         services and business processes.
Transaction              Services that support transactional          4
                         processing.
Online Order Placement   Placing and managing online orders.          2
Communication            Provide messaging, networking, hosting       1
                         or queuing services.
Problem Solving          Involve services that bring two parties      1
                         together to do a job for each other.
4. OPTIMISATION RESULTS
The aim of business process optimisation is the
automated improvement of business processes using
pre-specified measures of performance. The
importance of business process optimisation lies in
its ability to re-design a business process based on
quantitative evaluation criteria. This concept
stresses the need to generate alternative business
process designs based on the given process
requirements, and quantitatively evaluate and
compare these designs. In this research two
parameters are being optimised; the cost (Service
Delivery Price or SDP) and efficiency (Service
Fulfilment Target or SFT) values for each web
service help in the evaluation and selection of
optimal process designs. This section of the paper
presents the results gained from the optimisation of
the three process scenarios set out in section 3,
using bpoF as the optimisation engine. In order to
determine the effect of extending the web service
library the results of Vergidis (2008) gained for the
same three scenarios are presented along with the
authors results. The results are displayed in the
form of scatter graphs.
4.1 SCENARIO 1: DEMONSTRATING BPOF

This scenario modelled a sales forecasting process.
The results are shown in Figure 5. In Figure 5 the
overall search space is represented by the darker
dots with the optimised results shown in a lighter
grey. Each cloud represents a set of process results
of a given size (in the cloud containing point B1,
for example, each process has 4 tasks). Three test cases
are highlighted in Figure 5, and the differences
between the cases are shown in Table 4.
Table 4 - Test case differences for scenario 1
(Category: Statistics, Information Analysis, Event Driven Services)

Case  Tasks  Differences                                     SDP   SFT
B1    4      No chart service                                853   446
B2    5      Chart service added                             1056  553
B3    6      More services added with charting capabilities  1257  658



Figure 5 - Scenario 1 results


4.2 SCENARIO 1: A COMPARISON
BETWEEN PUBLISHED AND AUTHORS'
RESULTS
As mentioned before, in order to effectively assess
the use of an enlarged task library it has been
necessary to compare the authors' results with the
published results of Vergidis (2008). Such a
comparison should highlight differences in the way
NSGA2 generates output in terms of efficiency and
cost. The NSGA2 algorithm in bpoF has, in both
cases, been used to obtain the results displayed in
this section. Using the same scenarios as Vergidis
(2008), though with an enlarged task library, the
following results were obtained, as shown in Table 5
and in the form of the process graphs in Figure 6
(Vergidis (2008) result) and Figure 7 (authors'
result). With this scenario the differences between
the authors' results and those obtained by Vergidis
(2008) are minor. Though, by looking at Figure 8 it
can be seen that the curve of values from the authors'
results (shown with black dots) is slightly better
than that achieved in Vergidis (2008) (shown with grey dots).
Table 5 - Result comparison table for scenario 1

Criteria         Vergidis (2008)            Authors' Results
Tasks            6                          6
Cost             1264                       1261
Efficiency       663                        663
ANDs             2                          2
ORs              2                          2
Unique Services  Xignite Get Balance Sheet  Mergent Company Fund.
Web Services     20                         41


Figure 6 - Scenario 1 (Vergidis, 2008)
Both results have an equal number of tasks and
clauses. The use of NSGA2 by Vergidis (2008)
produced more expensive results than those achieved
by the authors.

Figure 7 - Scenario 1, authors' results

Figure 8 - Scenario 1: Scatter graph comparison of authors'
and Vergidis (2008) results
4.3 SCENARIO 2: A COMPARISON
BETWEEN PUBLISHED AND AUTHORS'
RESULTS
From looking at Table 6 and the process graphs
shown in Figure 9b (authors' results) and Figure 9a
(results of Vergidis (2008)) it may be noted that the
authors' results had a higher efficiency than those of
Vergidis (2008). In addition, both results have an
equal number of tasks and clauses. The results of
Vergidis (2008) are more expensive and less
efficient than the authors' results (the scatter graph
of the results is shown in Figure 10).
Table 6 - Result comparison table for scenario 2

Criteria         Vergidis (2008)  Authors' Results
Tasks            6                6
Cost             1240             1237
Efficiency       658              667
ANDs             2                2
ORs              1                1
Unique Services  SXIP Login       Yahoo Maps
Web Services     29               48










Figure 10 - Scenario 2: Scatter graph comparison of
authors' and Vergidis (2008) results
4.4 SCENARIO 3: A COMPARISON
BETWEEN PUBLISHED AND AUTHORS'
RESULTS
The curve of values obtained from the authors' results
is longer and less sharp than that of Vergidis (2008). This
curve may be seen in Figure 11. The results are
detailed in Table 7 and are characterised by the
following observations. Both results have an equal
number of tasks but a different number of clauses.
The results of the Vergidis (2008) NSGA2 are situated
near the central part of the cloud (shown in
Figure 11). The authors' results sketched two extra
clouds.








The authors' result (shown in Figure 12 and detailed
in Table 7) has a lower cost than that achieved by
Vergidis (2008).
Table 7 - Result comparison table for scenario 3

Criteria         Vergidis (2008)        Authors' Results
Tasks            10                     10
Cost             2062                   2047
Efficiency       1086                   1079
ANDs             0                      0
ORs              10                     2
Unique Services  Drupal Authentication  WebservicesX.NET Validate Email
Web Services     31                     72


Figure 11 - Scenario 3: Scatter graph comparison of
authors' and Vergidis (2008) results
Figure 9 - Scenario 2: Vergidis (2008) results (a)
and authors' results (b)



Figure 12 - Scenario 3, authors' result

The results described in this section demonstrate the
benefit of using an extended web service library
with bpoF. It is also the case that additional new
process combinations are created when a greater
range of web services is provided to the
framework (as evidenced in the scatter diagrams).
The three scenarios in this paper illustrate the range
of processes that may be optimised by this
approach.
5. CONCLUSIONS
This paper has explored the optimisation of business
processes by providing insights into the use of web
services for the development of re-configurable
business processes. In particular, the effect of using
an expanded web service library has been
investigated. From the case study scenarios featured
in this work it is clear that an expanded library
based on a categorisation of common web services
can lead to the identification of better processes. In
terms of the experiments in this paper, processes
that have a higher efficiency and lower cost may be
identified through the use of the expanded web
service library. A categorisation of web services has
been provided for the classification of web services
used in the research presented in this paper. This
has aided the selection of web services for use
within a process by bpoF. Observations in this
research note that additional result clouds
containing new and novel process designs also
result from the use of an expanded web service
library. Further work is required to standardise the
way web services are defined and made available to
users. There is also no standard way of determining
the true price of using a web service along with the
comparative level of efficiency it provides; addressing this
would increase the number of web services
available to the practice of business process
optimisation. The bpoF would also benefit from the
development of an additional library composed of
process templates and sub-sequences. Such a
template library would aid the efficient construction
of valid sub-processes and enable bpoF to explore a
wider variety of solutions. The ability to re-use
existing processes, technology and business
requirements has great potential in the future.
REFERENCES
Davenport, T, The New Industrial Engineering: Information
Technology and Business Process Redesign, Sloan
Management Review, Vol. 31, No. 4, 1990, pp 11-27.
Grigori, D., Casati, F., Castellanos, M., Dayal, U., Sayal, M.
and Shan, M.-C, Business Process Intelligence, Computers
in Industry, Vol. 53, 2004, pp. 321-343.
Hammer, M, Reengineering Work: Don't Automate,
Obliterate, Harvard Business Review, Vol. 68, No. 4, 1990,
pp 104-112.
Havey, M, Essential Business Process Modelling, O'Reilly,
U.S.A., 2005.
Hofacker, I. and Vetschera, R, Algorithmical Approaches to
Business Process Design, Computers & Operations
Research, Vol. 28, 2001, pp. 1253-1275.
Javalgi, R., Radulovich. L., Pendleton, G., Scherer, R,
Sustainable Competitive Advantage of Internet Firms: A
Strategic Framework and Implications for Global
Marketers, International Marketing Review, Vol. 22, No. 6,
2005, pp 658-672.
JGraph An Open Source Java Graph Library that Provides
General Graph Objects for Visual Display, JGraph,
Retrieved: 01/06/2011, http://www.jgraph.com.
Nagappan, R., Skoczylas, R. and Sriganesh, R.P., Developing
Java Web Services: Architecting and Developing Secure Web
Services Using Java, John Wiley & Sons, Inc., Bognor Regis, 2003.


Tiwari, A., Vergidis, K. and Majeed, B, Evolutionary Multi-
Objective Optimisation of Business Processes, in
Proceedings of IEEE Congress on Evolutionary
Computation 2006, Vancouver, Canada, 2006, pp. 3091-
3097.
Tiwari, A., Vergidis, K., and Turner, C.J, Evolutionary Multi-
Objective Optimisation of Business Processes, In: Gao,
X.Z., Gaspar-Cunha, A., Köppen, M., Schaefer, G., Wang, J.
(Eds.) Advances in Intelligent and Soft Computing: Soft
Computing in Industrial Applications, Springer,
Heidelberg, 2010, pp 293-301.
Vergidis, K, Business Process Optimisation Using An
Evolutionary Multi-Objective Framework, PhD Thesis,
Cranfield University, UK, 2008.
Vergidis, K., Tiwari, A. and Majeed, B, Business Process
Improvement Using Multi-Objective Optimisation, BT
Technology Journal, Vol. 24, No. 2, 2006, pp. 229-235.
Vergidis, K., Tiwari, A., Majeed, B, Business Process
Analysis and Optimization: Beyond Reengineering, IEEE
Transactions on Systems, Man, and Cybernetics,
Vol. 38, No. 1, 2008, pp 69-82.
Vergidis, K., Tiwari, A., Majeed, B. and Roy, R, Optimisation
of Business Process Designs: An Algorithmic Approach
with Multiple Objectives, International Journal of
Production Economics, Vol. 109, 2007, pp 105-121.
Wang, K., Salhi, A. and Fraga, E.S, Process Design
Optimisation Using Embedded Hybrid Visualisation and
Data Analysis Techniques Within a Genetic Algorithm
Optimisation Framework, Chemical Engineering and
Processing, Vol. 43, 2004, pp. 663-675.



Proceedings of DET2011
7th International Conference on Digital Enterprise Technology
Athens, Greece
28-30 September 2011

VIRTUAL RAPID PROTOTYPING MACHINE
PAJĄK Edward, Prof.
Poznan University of Technology, Faculty of
Mechanical Engineering and Management
edward.pajak@put.poznan.pl
GÓRSKI Filip, MSc. Eng.
Poznan University of Technology, Faculty of
Mechanical Engineering and Management
filip.gorski@doctorate.put.poznan.pl

WICHNIAREK Radosław, MSc. Eng.
Poznan University of Technology, Faculty of
Mechanical Engineering and Management
radoslaw.wichniarek@doctorate.put.poznan.pl
ZAWADZKI Przemysław, MSc. Eng.
Poznan University of Technology, Faculty of
Mechanical Engineering and Management
przemyslaw.zawadzki@put.poznan.pl

ABSTRACT
Conventional design techniques do not allow full testing of a device as complex as a numerically
controlled machine before a physical prototype is created, and doing so requires investing time and
money. If the designed machine realizes a new process, it is almost impossible to test it and tune its
parameters without a physical prototype. This paper presents the possibilities of using virtual reality to create
a fully functional virtual prototype of a machine for additive manufacturing. The machine itself is a
new, innovative device for producing physical prototypes of parts with a multidirectional Fused
Deposition Modeling process. Based on the CAD model of the 5-axis FDM machine, a virtual prototype
was created, along with a virtual additive manufacturing process. The virtual machine is operated using
NC code prepared from the product CAD model. It can be used to preview the
process and to check how various process parameters affect part quality; therefore, optimal
process parameters can be determined without a physical prototype of the machine. Furthermore,
various design aspects can be tested using the virtual machine, allowing design verification, also
without investing resources into a physical prototype.
KEYWORDS
Additive Manufacturing, Rapid Prototyping, Fused Deposition Modeling, Virtual Prototyping

1. INTRODUCTION
Rapid prototyping and manufacturing using additive
manufacturing technologies (AMTs) allows creating
a physical prototype of an object based on a 3D CAD
(Computer Aided Design) model, with no need for
special tooling. AMTs have found their place
among other manufacturing technologies: they are
invaluable when there is a need for quick
production of a physical prototype of a designed part
(Górski et al, 2010).
Constant development of additive technologies
results in new and improved methods, in
consideration of the increasing requirements that
must be fulfilled by these methods and by the prototypes
manufactured using them. A promising development
trend is multidirectional additive prototype
manufacturing, currently studied in many research
centers across the world. Multidirectional
manufacturing breaks with an approach traditional
for AMTs: unidirectional division of the manufactured
object.
Research on multidirectional prototype
manufacturing using one of the most popular
methods, Fused Deposition Modeling (FDM), is
also conducted in the Laboratory of Rapid Prototyping,
located in the Institute of Mechanical Technology,
in the Faculty of Mechanical Engineering of Poznan
University of Technology. The current result of this
work is a design of a device for multidirectional
material deposition.
Full testing of such a device using conventional
design techniques (CAD systems only) would not
be possible without creating its physical prototype.
This is where modern techniques of product
development become useful, especially
Virtual Reality (VR). VR techniques expand the
range of application of a model created in a CAD
system, allowing it to be placed in a virtual environment
of any form, in the presence of other objects. The essence of
a virtual environment is the representation of mutual
interactions between objects and their behavior in
response to actions taken by the user. Prototyping using
virtual reality techniques (Virtual Prototyping, VP)
enables presentation, testing and analysis of 3D
CAD models without the need to produce a physical
prototype. Virtual models allow analysing the manufacturing,
assembly, operation and recycling of the product
and the influence of these processes on its costs. Using
virtual prototypes, especially in early phases of
product development, allows making appropriate
decisions taking time and costs into account (Weiss
et al, 2005).
The presented paper shows the possibilities of virtual
reality in the field of creating virtual prototypes of
production machines. Based on research regarding
multidirectional prototype manufacturing (among
others, the design of a device manufacturing
prototypes with the 5-axis FDM method), a virtual
version of the rapid manufacturing process and a virtual
prototype of the machine realizing it were created.
The prepared virtual reality applications allow wrong
assumptions to be eliminated and appropriate limits of
process parameter variation to be determined at a
very early stage of development work.
Verification and optimization of the device design are
also possible.
Work on creating the virtual prototype of the
machine and the virtual additive manufacturing process
was carried out using the EON Studio environment,
an authoring tool for making virtual reality
applications.

2. MULTIDIRECTIONAL FUSED
DEPOSITION MODELING PROCESS
2.1. BASICS OF 5-AXIS FDM PROCESS
Manufacturing models with FDM technology
consists in layered deposition of plastified build and
support material in thread form, by an extrusion
head with two nozzles. A numerically controlled
machine deposits build and support material on the
model table, based on subsequent horizontal
cross-sections prepared from the digital 3D model. ABS
is the material most frequently used in this method.
Obtained models are characterized by acceptable
strength and durability and can be put to
further processing, like machining, gluing or painting,
to acquire sufficient surface quality. The produced part
faithfully represents the digital model and, after
removal of the support material, is almost immediately
ready for use. The FDM method, according to the Wohlers
Report, is one of the most popular additive
manufacturing technologies worldwide.


Figure 1 FDM technology scheme
Almost every additive manufacturing method (and
all the prevalent ones) consists in producing models
layer by layer. As a consequence of the layered structure,
volume errors are generated. In general, volume
errors are differences between the volume of material
used for manufacturing and the volume resulting from
the digital representation. Volume errors can take
various forms (Weiss et al, 2010) and can
influence many characteristics of the model:
surface quality, dimensional and shape accuracy or
mechanical properties.
Analysis of the additive manufacturing process in
the FDM method shows that the magnitude of the volume
error is mostly dependent on the orientation of the model in
the working chamber (layer deposition direction).
The volume error influences parameters like surface
quality and part accuracy. Consumption of support
material and manufacturing time are also highly
dependent on model orientation itself, so it can be
said that this one parameter influences all key
aspects of the additive manufacturing process. This
subject is studied by many research centers
worldwide, and the conducted research aims at preparing
a methodology for finding the optimal orientation that makes
the prototype reach the desired characteristics
(Daekeon et al, 2005, Thrimurthulu et al., 2003,
Pandey et al, 2006).
Selection of the optimal model orientation during the
additive manufacturing process is a difficult and
time-consuming problem, because it depends on
many, frequently opposing factors like time and
surface quality or strength. Analysis of different
research efforts in the field of optimal
orientation selection is a starting point for developing the
technology of multidirectional layer deposition of
plastic materials. Multidirectional layer deposition
is characterized by a varying direction of material
deposition, frequently obtained by changing the
orientation of the model in the working chamber. This
approach has a number of advantages, like the
possibility of reaching better parameters of the final
product or of significantly reducing the
consumed support material (Yang et al, 2003).
The technology of multidirectional deposition of
plastic layers for additive manufacturing requires
different process planning than conventional,
widespread unidirectional material deposition
technologies. It becomes necessary to develop
algorithms for dividing the model into smaller
parts/solids/elements produced in various directions,
which results in a hierarchical structure of the object
(Figure 2).
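The sketch below shows one possible data structure for such a hierarchical decomposition; it is an illustration of the idea in Figure 2, not the decomposition algorithm itself, which the paper does not specify.

```python
# Illustrative hierarchical build structure: each element carries its own
# deposition direction and is deposited before its children.
from dataclasses import dataclass, field

@dataclass
class BuildElement:
    name: str
    direction: tuple                  # layer deposition direction (unit vector)
    children: list = field(default_factory=list)

    def build_order(self):
        """Depth-first traversal yielding elements in manufacturing order."""
        yield self
        for child in self.children:
            yield from child.build_order()

root = BuildElement("base solid", (0, 0, 1))
root.children.append(BuildElement("overhanging feature", (1, 0, 0)))
for element in root.build_order():
    print(element.name, element.direction)
```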




Figure 2 Hierarchical structure of model for multidirectional additive manufacturing process

2.2. DESIGN AND FUNCTIONS OF 5-AXIS
FDM MACHINE
The development of multidirectional plastic layer
deposition technology is connected with the problem of
designing a mechanism for changing the orientation of the
manufactured part. The mechanism has to give
the extrusion head the possibility to deposit layers
from different directions.
The result of the research on the multidirectional
additive prototyping technology, conducted in the
Laboratory of Rapid Prototyping, is the design of a
device for 5-axis manufacturing with the Fused
Deposition Modeling method. Multidirectivity was
achieved by adding two extra rotational axes
allowing the model orientation to be changed during
manufacturing. Movement in these two axes is
realized using a rotary table with a cradle. An
important issue is proper programming of the
extrusion head movement, to avoid collisions with
material already deposited.
Because of the appointed tasks and the requirements of
the FDM process, the device has the following
features:
- Five controllable axes. The material deposition head moves in the X and Y axes, while feed in the Z axis is realized by the table on which the model is manufactured. Rotation in two additional axes (A and C) is ensured by the cradle and rotary table.
- Rigid frame based on welded steel sections.
- Possibility of heating the working chamber to the temperature necessary to provide the right conditions for the FDM process. The chamber has thermal insulation minimizing heat losses.
- Extrusion head for material deposition, with the possibility of heating the applied material to the temperature ensuring a proper course of the process. The head has a feature of cutting the thread of material (for additional movements between material deposition movements), which is fed automatically from the spool. The head can work with various materials.
- Machine control realized with a dedicated computer application, communicating with the machine controller.
The machine consists of the following subassemblies:
- Body: welded frame from steel sections.
- X and Y axis drive: realized with ball screws and servomotors (drive transmission with cogbelt).
- Z drive: realized like the X and Y drives.
- A and C axis drive: rotary table with cradle allowing a full turn in the C axis and a turn in the A axis in the range between -10° and +100°. The drive can be disassembled, giving the possibility of working with only three axes.
- Head for material deposition.
- Material feeder: material in the form of wire unwound from the spool.
- Heating and insulation system.
Figure 3 contains the scheme of the machine with
marked main subassemblies.



Figure 3 5-Axis FDM Machine and its main components



3. VIRTUAL RAPID PROTOTYPING
MACHINE
3.1. PRINCIPLES AND TASKS OF VIRTUAL
RP MACHINE
The virtual additive manufacturing machine is a
virtual reality computer application whose main
purpose is to simulate the operation of the designed
innovative machine for 5-axis prototype
manufacturing with FDM technology. The virtual Rapid
Prototyping machine is therefore a virtual prototype
of an additive manufacturing device. The destination of the
machine means that, apart from the standard
functionality of a virtual prototype, it is necessary to
expand it with a virtual version of the 5-axis FDM rapid
manufacturing process.
Thus, the functionalities can be divided into two
groups, which are obviously related and blend with
each other, but can be considered entirely
separately:
- representation of the principles of operation of the machine,
- representation of the manufacturing process realized using the machine.
Results of trials and tests conducted on the virtual
machine are intended to verify the
machine construction and to examine the
limits and possibilities of using the innovative
technology of 5-axis Fused Deposition Modeling.
The detailed tasks of the virtual machine are as follows:
1. Visualization of the prototype manufacturing
process in five axes, with the possibility of visually
checking produced prototypes and
identifying occurring volume errors.
2. Visualization of machine operation:
movement of drives and other movable
elements (at the level of detail necessary for
construction verification), checking of
collisions.
3. Representation of human-machine interaction:
possibility of manual control of particular axes,
adjustment of velocities and layer thickness,
zeroing of the machine coordinate system,
running the process based on supplied NC code.
The functionality related to simulation of the
manufacturing process in 5-axis FDM technology
has been created in the form of a separate module.
The functionality contained in this module is also
integrated with the main application; the necessity of
creating the separate module is related to the test
procedure, since machine operation is not always
necessary, especially while testing important
aspects of the 5-axis FDM process itself. Besides,
the presence of the machine geometry in the virtual
environment and the visualization of its kinematics
significantly decrease the application performance,
especially on computers with low processor
capacity.
The virtual RP machine was created based on a three-dimensional
geometrical model prepared in a CAD
system. This model was imported into the virtual
environment (created using EON Studio software)
and given proper visual traits. Then the
behaviors and events related to machine
operation in response to user actions were designed
and implemented. The application was provided with an
appropriate graphical user interface and the possibility
of running on any computer station.

3.2. VIRTUAL FDM PROCESS
Representation of the FDM manufacturing process in
a virtual environment requires tools related
to the dynamic creation of geometry. The purpose of
the application realizing the virtual process is the
interpretation of the supplied NC code, prepared in an
earlier stage. Based on the code, the additive
process of model manufacturing is re-created.
The functionality of the virtual process is available as an
integral part of the virtual RP machine and also as a
separate module, for supporting visualizations and
process study. This module functions entirely
separately, as an individual application of the EON Studio
software.
The basic task of the module is the visualization of the
manufacturing process of the 5-axis FDM method and
of the model which results from the
process. This functionality is realized based on the
supplied NC code, prepared for the machine. This
code is prepared using a CAD/CAM-based
application, from the CAD model of the manufactured
part. The code contains mostly instructions
regarding tool and object movement.
Code destined for the virtual reality application is
submitted to a conversion process, to simplify the
procedures of its later reading and interpretation.
The conversion is conducted in an additional
application created by the authors especially for this
purpose (Figure 4).
The ready NC code, in the form of a text file, can be loaded
into the virtual reality application. Interpretation of
the code consists in translating the information
connected with tool and object kinematics into a
form compatible with the nodes realizing the
kinematics in the EON Studio software.




Figure 4 CC Editor for code conversion and editing
The loading and interpretation process consists of
the following stages (a simplified sketch in code is given after the list):
1. Submitting the name of the file with the code and
the target path by the user.
2. Checking the file correctness.
3. Loading a single line from the file. Division
of the line into separate words, interpretation of
the first word for command identification
(movement with material deposition; dead
movement in Z axis - next layer; movement in
fourth and fifth axis - next elementary object).
4. Creation of a position vector for the tool based
on the rest of the line, containing coordinates in
appropriate axes. According to the rules of
standard NC code (so-called G-code), if a
command does not contain information for all
axes, coordinates from the previous command are
adopted.
5. Adding the position vector to the tool
movement table.
6. In case of going to the next layer or object -
marking the fact by writing a value in the
appropriate table.
7. Repetition of points 3-6 until the end of the file.
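A minimal sketch of these stages is shown below; the command words for layer and object changes and the exact table layout are assumptions, since the authors' converted code format is not given in the paper.

```python
# Illustrative loader for the converted NC code: movement commands fill a
# table of tool positions, while layer/object changes are marked by index.
def load_nc(path):
    moves, layer_marks, object_marks = [], [], []
    last = {"X": 0.0, "Y": 0.0, "Z": 0.0, "A": 0.0, "C": 0.0}
    with open(path) as f:
        for line in f:
            words = line.split()
            if not words:
                continue
            command, coords = words[0], words[1:]
            # G-code rule: axes missing from a command keep their old value
            for w in coords:
                last[w[0]] = float(w[1:])
            if command == "LAYER":        # assumed marker: dead move in Z
                layer_marks.append(len(moves))
            elif command == "OBJECT":     # assumed marker: 4th/5th-axis move
                object_marks.append(len(moves))
            else:                         # ordinary (deposition) movement
                moves.append(dict(last))
    return moves, layer_marks, object_marks
```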
The end effect of loading the code is a set of tables containing the subsequent positions of the tool (a simplified sketch of this loading stage is given below). For simplification, in the module realizing the virtual FDM process, all movements are performed by the tool. This solution provides an effect identical with the real one when the displacement values in the object-related axes are given opposite signs, which is realized while sending the information about a single movement to the node responsible for the tool kinematics.
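
As an illustration, the following Python sketch shows how such a loading stage could be implemented; the command names (G0, G1, G2) and the table layout are illustrative assumptions, not the exact codes used by the converted NC files.

def load_nc_code(path):
    """Parse a converted NC-code text file into a tool movement table."""
    moves = []              # (x, y, z, a, c) tool positions
    deposition = []         # True if the move deposits material
    layer_starts, object_starts = [], []
    pos = {"X": 0.0, "Y": 0.0, "Z": 0.0, "A": 0.0, "C": 0.0}
    with open(path) as f:
        for line in f:
            words = line.split()
            if not words:
                continue
            command = words[0]          # first word identifies the command
            for word in words[1:]:      # e.g. "X12.5" -> axis "X", value 12.5
                pos[word[0]] = float(word[1:])
            # axes absent from the command keep their previous coordinates,
            # as in standard G-code
            if command == "G0":         # assumed: dead movement -> next layer
                layer_starts.append(len(moves))
            elif command == "G2":       # assumed: rotary move -> next object
                object_starts.append(len(moves))
            deposition.append(command == "G1")   # assumed deposition command
            moves.append((pos["X"], pos["Y"], pos["Z"], pos["A"], pos["C"]))
    return moves, deposition, layer_starts, object_starts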
After loading and interpreting the code, it is possible to begin the course of the virtual manufacturing process in the FDM technology. This process is realized in the following steps (see the sketch after this list):
1. Zeroing the counter containing the ordinal numbers of subsequent movements.
2. Taking the information from the movement table (created during code interpretation). The index of the taken information is equal to the value of the mentioned counter (zeroed in the first step).
3. Taking the information from the object table and the layer table. Based on this information, it is determined whether or not the movement will be related to the deposition of a thread of material. The place (index) of shifting to the next object/layer is marked.
4. Sending the displacement vector to the node realizing the kinematics; movement of the tool.
5. In the case of a movement with material deposition - sending the information about the previous and current position of the head to the object representing the single thread of material (a geometrical prototype of a cylinder).
6. After the movement - automatic incrementing of the counter. Repetition of steps 2-5 in a loop until the end of the movement table.
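
A minimal sketch of this playback loop, assuming the tables produced by the loading stage and two callbacks standing in for the EON Studio kinematics and thread-drawing nodes, could look as follows:

def run_virtual_fdm(moves, deposition, move_tool, draw_thread):
    """Replay the movement table of the virtual FDM process."""
    counter = 0                        # step 1: zero the movement counter
    previous = moves[0]
    while counter < len(moves):        # steps 2-6, repeated in a loop
        target = moves[counter]        # step 2: read the movement table
        move_tool(target)              # step 4: send the vector to kinematics
        if deposition[counter]:        # step 5: deposit a thread of material
            draw_thread(previous, target)   # cylinder between two positions
        previous = target
        counter += 1                   # step 6: increment after the movement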




Figure 5 Virtual FDM process preview
The process is realized in a continuous loop; stopping it is possible using the "Reset" button in the graphical user interface. The progress of the process is displayed as a percentage and is also visualized in the form of a progress bar. If there is no need to visualize the whole process, the visualization can be turned off using the "Preview" ("Podgląd") button. Turning off the visualization accelerates the process of virtual model generation.
The end result of the virtual process is the ready prototype. Writing the information about the shifts between layers and objects allows the selective visualization of individual elementary objects or layers. The application allows saving the information about the current state of all objects representing the threads of material, for a later quick re-creation of the object visualization.


Figure 6 Virtual model resulting from virtual FDM
process
The application contains functions allowing model verification: the volume of the used material can be calculated (by summing the volumes of all used material threads) and visual verification can be performed; a preview of the STL file (the standard file format for Rapid Prototyping) containing the original geometry is also a feature of the application.
The applied method of representation of the FDM process has some disadvantages. The main issue is the high demand for processing power, because of the high number of geometrical objects (material threads). This demand can be partially decreased by realizing the visualization of the layer contours only (the external model shell remains for visualization), but the possibility of calculating the volume of the end model is then unavailable, making it impossible to evaluate the total volume error.

3.3. VIRTUAL MACHINE FUNCTIONALITY
AND OPERATION

The virtual 5-axis additive manufacturing machine was created on the basis of a CAD model prepared in the CATIA v5 environment. The model was converted to a form recognized by the EON Studio virtual reality software and imported into the virtual environment. Then, its visual features were adjusted along with the object hierarchy. The next step was


the modeling of the machine kinematics (displacement in five axes) and the programming of the methods of operation (manual control, NC code interpretation) and additional functionalities. The last stage was the integration of the formerly created module realizing the virtual FDM process with the main application and making the application available for general tests.
The created virtual reality application has the following functionalities:
1. Control of movement in all five axes. Control can be realized manually; there is a possibility of reaching specific, submitted coordinates. Velocity adjustment for each axis separately is also implemented.
2. Selection of the working mode - classic 3-axis or 5-axis. In the 3-axis mode, the rotary table with the cradle is removed and all possibilities of movement in the two rotary axes are blocked.
3. Adjustment of the coordinate system (zero point). There are two predefined zero points, for the 3-axis and the 5-axis configuration. Any zero point can be defined by setting the tool and the table in the desired position and using the appropriate option to mark the position as the new zero.
4. Loading and interpretation of the program in the form of NC code. A choice of realization mode is possible - the first mode simulates material deposition (for process testing) and the second simulates only the kinematics of the machine (without material deposition).
5. A set of functions related to the visual aspects of the application - hiding and showing the elements of the machine body and casing for better visibility, an additional camera placed on the FDM head (simultaneous process preview from different positions), and predefined positions of the main camera.
The application is provided with a graphical user interface (Figure-7), which consists of buttons, labels, text boxes, a pop-up menu (allowing almost all elements of the interface to be turned on and off) and sliders. Some of the commands have keyboard shortcuts; for actions performed manually in reality (opening the doors, removing the material cassettes or finished models), the mouse can be used, and there is a possibility of implementing special VR devices (gloves, tracking systems). A virtual control panel has not been implemented, because the machine design assumes control of all the functions through a computer application.


Figure 7 Graphical user interface of virtual RP machine


The realization of the main machine functionality, which is the movement of the material deposition head (in the X and Y axes) and of the table containing the manufactured model (in the Z axis and the two rotary axes A and C), required changing the mutual dependencies of objects and introducing auxiliary nodes, to allow the zeroing of positions and the limiting of the field of movement.
For each axis, kinematic nodes were introduced (for the X, Y and Z axes a linear motion, for A and C a rotary motion). The activity of these nodes was connected with interface elements - keyboard buttons and text buttons. The velocities were made dependent on the values of the sliders, also controlled by the user. The user is also informed about the current position in each axis by a text message, constantly displayed on the screen.
A separate problem is the realization of movement to a desired location. The earlier defined kinematic nodes are not used for this purpose. Instead, special logical nodes ensuring a smooth transition between two given values in a provided time interval are implemented. Each axis is assigned one such node; the values sent to this node are taken from a text box filled by the user with the demanded coordinates. The time of movement is calculated as the ratio of the distance (calculated as the difference between the current and the demanded position) and the velocity from the sliders.
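
In Python-like form, the duration handed to such a smooth-transition node reduces to a one-line ratio (function name hypothetical):

def move_duration(current_position, demanded_position, slider_velocity):
    """Movement time = distance / velocity, per axis (velocity > 0 assumed)."""
    return abs(demanded_position - current_position) / slider_velocity

# e.g. move_duration(10.0, 35.0, 5.0) -> 5.0 time units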
The most important functionality is, however, the possibility of loading and executing the NC code contained in a text file. The algorithm of reading and interpreting the code is practically identical with the algorithm applied for the virtual FDM process. The only difference is in the instant realization - tables of movement are not created; subsequent lines of code are interpreted on the fly and sent to the nodes realizing the movement. The currently performed line of code is visible in the main application window.
An important issue in the code realization is the moment of transition to the next line. Because there are five nodes realizing the movement separately for each axis, the signals about the ending of the current movement need to be gathered from all of them. The transition to the next line of code is realized only after the movements in all axes are finished, as sketched below. Not implementing this solution could result in errors during code execution.
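
One way to realize such synchronization, sketched here under the assumption that each axis node fires a callback when its movement ends (names are illustrative), is a small collector object:

class LineSynchroniser:
    """Collects 'movement finished' signals from the five axis nodes and
    advances to the next NC line only when all of them have reported."""
    AXES = {"X", "Y", "Z", "A", "C"}

    def __init__(self, interpret_next_line):
        self.interpret_next_line = interpret_next_line   # callback
        self.finished = set()

    def on_axis_finished(self, axis):
        self.finished.add(axis)
        if self.finished == self.AXES:   # all five movements are complete
            self.finished.clear()
            self.interpret_next_line()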
As mentioned, the machine can work in two modes, 5-axis and 3-axis. Switching to the 3-axis mode is realized by removing the rotary table with the cradle. The virtual machine allows free switching between the modes - a change of the working mode results in hiding the elements realizing movement in the additional axes. The possibility of changing the position in these axes, both manually and by the text box, is also blocked.
The application allows zeroing the coordinate system, like on a real numerically controlled machine. Two standard zero points are defined as a base, one for the 5-axis and the other for the 3-axis mode. At any moment, the current position in a selected axis can be marked as the zero point (using the buttons of the graphical interface). The current location of the coordinate system can be checked at any moment, by clicking one of the buttons X, Y or Z. The visual prototype of the coordinate system is then displayed, in the form of three colored lines with literal designations of the particular axes.


Figure 8 Visualization of current zero position of machine
coordinate system
To make the operation of the virtual machine easier, a number of functions related to the visual simulation aspects were implemented. Some of them are:
- hiding/showing of the sheet metal casing and the doors,
- changing the transparency of the machine body and folding covers,
- hiding every element of the machine except the FDM head and the table,
- hiding/showing the view from the camera placed in the head,
- predefined positions of the main camera,
- showing/hiding the elements of the graphical user interface.
The creation and programming of all the relations between the objects of the virtual machine required combining the techniques of visual programming with classic programming using a script language. The EON Studio software allows creating interactions solely by visual programming, but with complex object behavior (as in the case of the virtual RP machine) this would be extremely difficult. Using scripts written in the available VBScript language allows many procedures to be simplified and enables a more efficient control of the virtual environment.


4. FURTHER RESEARCH
Data about the construction of the model resulting from the virtual FDM process (position and shape of the particular threads of material) can be exported in the form of vectors to a text file for later use. An additional application of this functionality is the possibility of building a standard CAD model based on the vector data. This model, representing the method of manufacturing, can later be used for many purposes, e.g. in strength calculations using Computer Aided Engineering software.
The virtual process can be further developed by implementing the simulation of physical laws and phenomena (e.g. material shrinkage and model deformations resulting from temperature shifts). The virtual environment allows the implementation of more advanced behaviors of objects and materials. Unfortunately, creating a full model taking dynamics and temperature-related phenomena into consideration is very labor consuming and additionally increases the demand for processing power.
5. CONCLUSIONS
The solutions in the field of virtual prototyping of production machines (especially those realizing rapid additive manufacturing processes) have very wide perspectives of development. Presently, the main factor limiting the application of such virtual prototypes is computer performance: the simulation of an additive manufacturing process, even in the case of a less complicated model, requires performing complex calculations, and displaying the structure of a model representing the method of its production (e.g., as a result of the FDM process, out of threads of plastified material) demands graphical processors of very high efficiency. Connecting the virtual process with the full functionality of the virtual prototype (kinematics of the working elements, object collision detection, use of special VR devices to operate the virtual machine in a manner similar to the operation of the real machine) requires a further increase in computer performance.
Virtual prototyping of objects as complex as production machines requires detailed knowledge from the fields of mechanical technology, process engineering and ergonomics, as well as programming, the creation of 3D computer graphics and the operation of special VR devices. However, a virtual prototype created in an appropriate way allows conducting tests that would otherwise be possible only after building a physical, working prototype of the device. The results of tests performed on the virtual machine allow drawing conclusions regarding the optimization of the process realized on the machine and, in consequence, the verification of the machine design. The savings of time and funds (which would otherwise be consumed on building and testing a physical prototype during a conventional design process) are big enough to fully justify the application of virtual reality at this stage of development.
6. ACKNOWLEDGEMENTS
The paper describes the results of a study performed within the research grant KBN 22-3390 "Metodyka wielokierunkowego wytwarzania prototypów technikami przyrostowymi" (Methodology of multidirectional prototype manufacturing using additive technologies).


MANUFACTURING SYSTEMS COMPLEXITY: AN ASSESSMENT OF PERFORMANCE INDICATORS' UNPREDICTABILITY
Konstantinos Efthymiou
University of Patras
efthymiu@lms.mech.upatras.gr

Aris Pagoropoulos
University of Patras
apagor@lms.mech.upatras.gr

Nikolaos Papakostas
University of Patras
papakost@lms.mech.upatras.gr

Dimitris Mourtzis
University of Patras
mourtzis@lms.mech.upatras.gr

George Chryssolouris
University of Patras
xrisol@mech.upatras.gr
ABSTRACT
In the modern interconnected environment, manufacturing systems, in their pursuit of cost, time and flexibility optimization, are becoming more and more complex, exhibiting a dynamic and non-linear behaviour. Unpredictability is a distinct characteristic of such behaviour and affects production planning significantly. This paper presents a novel approach for the assessment of unpredictability in the manufacturing domain. In particular, the fluctuation of critical manufacturing performance indicators is studied with the help of the Lempel-Ziv Kolmogorov complexity measure, in order for the complexity of a manufacturing system to be evaluated. Finally, the method's potential is examined with the application of the proposed approach to an automotive industrial use case.
KEYWORDS
Manufacturing complexity, unpredictability, production planning, performance indicators

1. INTRODUCTION
In the globalized and interconnected market, demand fluctuation, along with the requirements of high product quality, low cost, short lead times and high customization, leads to an increase in manufacturing complexity (Chryssolouris, 2006). Unpredictability, a typical characteristic of a complex system, may have a quite significant negative impact on a production system's design, planning and operation. Determining quantitative complexity metrics is considered a prerequisite for understanding complexity mechanics and managing complexity efficiently (Hon, 2005; Wiendahl and Scheffczyk, 1999). The scope of the current study is the examination of complexity in manufacturing systems, by assessing the unpredictability of performance indicators with the use of the Lempel-Ziv complexity measure.
The rest of the paper is organized as follows. Chapter 2 presents a review of the existing literature on manufacturing modelling approaches. Chapter 3 describes the proposed methodology for the assessment of unpredictability, by introducing the application of the Lempel-Ziv complexity measure to the analysis of manufacturing performance indicators' timeseries. A case study from the automotive industry, which illustrates the efficacy of the approach in real industrial environments, is provided in Chapter 4. In the case study, the complexity assessment of an assembly line and the relationships among flexibility, production mix and unpredictability are studied. Chapter 5 concludes with the basic outcomes of this work and proposes future research directions.
2. LITERATURE REVIEW

Figure 1 Classification diagram of the main manufacturing complexity analysis methods: entropy (information theory); timeseries analysis (Fourier analysis, Lyapunov exponents, bifurcation diagrams, phase portraits, fractal dimension, chaos and non-linear dynamics); axiomatic theory; product and machine coding; fluid dynamics analogy; Lempel-Ziv and algorithmic complexity

Over the past years, several approaches, utilizing
different methods and tools, have been proposed for modelling and measuring manufacturing complexity. Most of the approaches can be classified into five main categories, based on the tools used for the complexity analysis. The first category of methods follows the information theory approaches, having Shannon's entropy as the fundamental measure. The second category is related to timeseries analysis techniques, such as Fourier analysis and non-linear dynamics tools. In the third category, several approaches study complexity on the basis of the axiomatic theory. The fourth category includes methods that attempt to address complexity by defining a coding system for machines and products. The last category concerns methods inspired by ideas from fluid dynamics, which aim to introduce a Reynolds-like number as a metric to manufacturing, in order for complexity to be assessed by defining a threshold between a steady and a turbulent manufacturing behaviour. The diagram of Figure 1 schematically illustrates the classification of the aforementioned categories and their subcategories.
Entropy, as introduced in information theory (Shannon and Weaver, 1949), is associated with the uncertainty of the occurrence of a series of events. In the manufacturing domain, the information entropy approach is utilized in order for the complexity of a production system to be assessed, and complexity is regarded as the sum of the individual entropy rates for each process and product variant. Following this approach, in (Deshmukh et al, 1998) a theoretical framework is proposed for assessing the static complexity of manufacturing systems. Static complexity is associated with the different types of resources and parts in the system and can be regarded as the measure of information required to describe the system and its components. Similarly, in (Hu et al, 2008) the effect of product variability and assembly process information on the manufacturing system complexity is studied with the help of entropy based metrics. Entropy metrics are also used in (Frizelle and Woodcock, 1995) for studying input-output systems, focusing in particular on queue measurements. The complexity of mixed model assembly lines is analysed in (Zhu et al, 2008), where the entropy of each station is computed as the entropy caused by the introduced variants plus the entropy induced by the preceding stations. Based on the complexity metric of (Zhu et al, 2008), the effect of complexity on the throughput of different assembly system configurations is studied in (Wang, 2010).
In (Suh, 2005), complexity is considered as the measure of uncertainty in satisfying the aims (functional requirements) of a system and is classified into the following types: real and imaginary, time dependent and time independent, periodic and combinatorial. A series of axioms concerning complexity is defined, and within the resulting framework, relations between design parameters and functional requirements are established in a matrix form. In terms of manufacturing, the objective is the maximization of productivity by reducing the complexity of the manufacturing system, following a process called Design-Centric Complexity (DCC) theory. According to (Lu and Suh, 2009), the introduction of functional periodicity, by reinitializing the system's functions on a periodic basis, is suggested in order for the continuous drifting of system ranges to be disrupted.
In timeseries analysis, chaos and non-linear dynamics techniques are used for the assessment of complexity in manufacturing systems. Phase portraits and time delay plots are utilized in order to examine the scheduling of a simple manufacturing system (Giannelos et al, 2007).

Figure 2 Proposed Manufacturing Complexity Assessment Methodology based on LZ complexity analysis

Based on this analysis, a new dispatching rule is proposed,
presenting promising results in terms of time performance characteristics. Time delay plots, i.e. the Poincaré maps, are also used in (Peters, 2003) for studying the effect of the buffer size on the performance of a manufacturing system. The adaptability to demand of a steel construction industry, under different operational policies and parameters, is studied utilizing the maximal Lyapunov exponents and bifurcation diagrams (Papakostas and Mourtzis, 2007). Similarly, the maximal Lyapunov exponents are also utilized in (Alfaro and Sepulveda, 2005), along with Fourier analysis and fractal dimensions, for examining the chaotic behaviour of a production system, based on buffer index timeseries. In (Papakostas et al, 2009), a simulation based method, along with a regression analysis and a non-linear dynamics analysis, is proposed. The aim of that methodology is the determination of a manufacturing system's sensitivity to workload changes, and the measurement and control of the system's complexity. In another work, a sensitivity analysis is performed in order to identify a system's chaotic behaviour, by introducing small perturbations in the initial conditions (Schmitz et al, 2002).
A coding system for classifying the information of major components of industrial systems has also been proposed. In the context of this coding framework, complexity is defined as a function of the quantity and the uniqueness of information (ElMaraghy et al, 2005; ElMaraghy and Urbanic, 2003).
In (Efthymiou et al, 2010), the introduction of the Reynolds number concept to a manufacturing system as an indicator of complexity is proposed. The aim is the identification of the transition regime between steady and turbulent manufacturing operations, in analogy to laminar and turbulent flows. Similar concepts coming from the fluid dynamics domain are also proposed in (Schleifenbaum et al, 2010) for production systems and in (Romano, 2009) for supply chains.
Although the existing approaches to manufacturing complexity analysis may lead to useful results, they do not provide a direct assessment of the unpredictability of the manufacturing performance indicators that are significant parameters for decision making during the design, planning and operation of manufacturing systems (Chryssolouris, 2006). Additionally, a series of difficulties arises in applying the existing approaches to real industrial problems. These obstacles are discussed in the paragraph hereafter.
Entropy based approaches require the definition of the different states of a system's components. In addition, a series of assumptions related to the independence of the system's states should be made. Finally, there is the problem of inserting subjectivity into the analytical association of the entropy measures with the system's performance (Papakostas et al, 2009). The complexity approaches based on a coding system insert the subjective definition of the codes, which subsequently leads to a subjective assessment of complexity. Moreover, in case the code of a component or a part is missing, the complexity assessment is not feasible. The axiomatic theory methods demand the knowledge of the uncertainty of a system's specific requirement. This uncertainty is connected with the estimation of a probability that should be known or assumed. The chaos and non-linear dynamics theory tools are useful only when the system under study is chaotic. The phase portraits and the bifurcation diagrams provide a schematic way of presenting a system's irregularity, but they do not provide a specific value that can be easily compared with the values of other systems. Finally, the approaches inspired by fluid dynamics are still at an early stage of development.
3. MANUFACTURING COMPLEXITY
ASSESSMENT METHODOLOGY
In the present method, complexity is approached as the unpredictability of manufacturing performance indicators. The assessment of unpredictability is performed by applying the Lempel-Ziv complexity analysis to the manufacturing performance indicators' timeseries.



Figure 3 Case Study inputs and outputs of the discrete event simulation model

In (Lempel and Ziv, 1976), a complexity measure (LZ) based on symbolic dynamics and on Kolmogorov's work (Kolmogorov, 1978) is introduced. LZ is a nonparametric measure for finite sequences, related to the number of distinct substrings and the rate of their occurrence along the sequence, which assesses the degree of disorder or irregularity of a sequence. LZ values close to zero indicate a system presenting the least complex behaviour, while systems with LZ values near one are related to stochastic, unpredictable behaviour (Ferreira et al, 2003). The LZ measure presents several advantages in comparison with other timeseries complexity techniques. First, LZ can be applied both to deterministic and to stochastic (and chaotic) systems. Second, the stationarity of the timeseries under investigation is not required for the application of LZ. Third, LZ provides a universal measure of complexity, facilitating the comparison of different manufacturing systems.
The proposed methodology consists of three main steps, namely, the simulation of the manufacturing system, the LZ analysis of the performance indicators' timeseries and the estimation of the mean value of the LZ measures. In the first step, the simulation model of the manufacturing system under study is developed. The system is examined under a range of demand rates λ varying from 0.1 up to 1. The idea is to study the system under a wide range of order pressures, from low demand rates up to high ones. So, a series of simulations is performed, one for each value of λ. The output of this step is the performance indicators' timeseries; each performance indicator timeseries corresponds to a value of λ. In the next step, the timeseries are analyzed with the use of Lempel-Ziv and a complexity measurement is obtained for each timeseries. In the last step, the mean value of the LZ measures of a performance indicator's timeseries, for the range of λ from 0.1 up to 1, is estimated. The mean value is considered as a weighted indicator of the manufacturing system's unpredictability and is denoted as mean LZ. The flowchart of the proposed methodology is presented in Figure 2.
3.1. UNPREDICTABILITY ASSESSMENT
WITH LEMPEL-ZIV KOLMOGOROV
COMPLEXITY
The LZ analysis of a performance indicator's timeseries, denoted {I_i}, i ∈ N+, consists of two phases: a. the timeseries preparation and b. the computation of the complexity. The first phase includes: a. the transformation of the performance indicator's timeseries into a sequence of 0s and 1s and b. the definition of two subsequences of the produced sequence. The {I_i} timeseries is transformed into a sequence S consisting of 0s and 1s. The S sequence is written as s(i), i ∈ N+, according to the rule:

s(i) = 0, if I_i < I*
s(i) = 1, if I_i ≥ I*   (1)

where I* is the mean value of the timeseries.
The definition of two subsequences follows. Let:
- P and Q be two subsequences of S,
- PQ be the concatenation of P and Q,
- PQπ be the sequence derived from PQ after the last character is deleted,
- v(PQπ) denote the vocabulary of all different subsequences of PQπ.

Table 1: Mean values of the Lempel-Ziv Kolmogorov complexity

Product Mix (A%, B%, C%)      | Line            | Indicator | Total | Underbody A | Underbody B | Underbody C
Product Mix A (20%, 30%, 50%) | Assembly Line A | Flowtime  | 0.48  | 0.44 | 0.50 | 0.49
                              |                 | Tardiness | 0.11  | 0.11 | 0.08 | 0.05
                              | Assembly Line B | Flowtime  | 0.45  | 0.42 | 0.48 | 0.46
                              |                 | Tardiness | 0.10  | 0.09 | 0.07 | 0.04
Product Mix B (33%, 33%, 33%) | Assembly Line A | Flowtime  | 0.45  | 0.39 | 0.48 | 0.51
                              |                 | Tardiness | 0.07  | 0.07 | 0.06 | 0.07
                              | Assembly Line B | Flowtime  | 0.43  | 0.37 | 0.45 | 0.49
                              |                 | Tardiness | 0.06  | 0.07 | 0.06 | 0.06
Product Mix C (10%, 80%, 10%) | Assembly Line A | Flowtime  | 0.20  | 0.07 | 0.43 | 0.35
                              |                 | Tardiness | 0.19  | 0.18 | 0.03 | 0.17
                              | Assembly Line B | Flowtime  | 0.19  | 0.06 | 0.44 | 0.34
                              |                 | Tardiness | 0.17  | 0.17 | 0.03 | 0.16


In general, the P and Q subsequences can be denoted as:

P = s(1), s(2), ..., s(r)   (2)
Q = s(r+1)   (3)
PQπ = s(1), s(2), ..., s(r)   (4)

where r ∈ [1, n].

The second phase is the computation of the complexity. The sequence S is scanned from left to right and a complexity counter c(n) is increased by one unit every time a new subsequence of consecutive characters is encountered. The steps followed are described hereafter.
1. At the beginning of the computation, c(n) = 1, P = s(1), Q = s(2), PQ = s(1)s(2) and PQπ = s(1). In general:

P = s(1), s(2), ..., s(r)   (5)
Q = s(r+1)   (6)
PQπ = s(1), s(2), ..., s(r)   (7)

If Q belongs to v(PQπ), then Q is a subsequence of PQπ.
2. Renew Q to be s(r+1), s(r+2) and check if Q belongs to v(PQπ).
3. Repeat steps 1 & 2 until Q does not belong to v(PQπ), then increase c(n) by 1.
4. Renew P to be the sequence P = s(1), ..., s(r+i) and Q to be s(r+i+1).
5. Repeat steps 1, 2, 3 & 4 until Q is the last character, i.e. up to the point where r equals n. The complexity counter c(n) at this point defines the number of different subsequences of S.
In order for the LZ measure to be made independent of the sequence length, c(n) is normalized with respect to the complexity of a random binary sequence:

b(n) = n / log2(n)   (8)

Thus, the normalized LZ used within the current study is given by:

LZ measure: C(n) = c(n) / b(n) = c(n) × log2(n) / n   (9)
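
For illustration, a minimal Python sketch of the whole measure (binarization by eq. (1), phrase counting per steps 1-5, and normalization by eqs. (8)-(9)) could read as follows; the substring test plays the role of the vocabulary check against v(PQπ):

import math

def lz_measure(series):
    """Normalized Lempel-Ziv measure C(n) = c(n) * log2(n) / n."""
    i_star = sum(series) / len(series)            # eq. (1): binarize at mean
    s = "".join("0" if x < i_star else "1" for x in series)
    n = len(s)
    c, r, q = 1, 1, 1                  # c(n) = 1 for the first character
    while r + q <= n:
        # is Q = s[r:r+q] in the vocabulary of PQ with its last char deleted?
        if s[r:r + q] in s[:r + q - 1]:
            q += 1                     # Q found: extend it by one character
        else:
            c += 1                     # new subsequence: increase the counter
            r += q
            q = 1
    if q > 1:                          # count an unfinished final subsequence
        c += 1
    return c * math.log2(n) / n        # eqs. (8)-(9)

For a constant or periodic series the value stays near zero, while a random binary series yields values close to one, in line with (Ferreira et al, 2003).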
The mean value of the LZ measure of a performance indicator's timeseries, over the examined set of λ values, is given by:

mean LZ(Performance Indicator) = (1/w) × Σ_{i=1..w} C_i(n)   (10)

where w is the number of the examined λ values.
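
The three methodology steps can then be chained in a few lines; run_simulation is a hypothetical stand-in for the discrete event model returning one indicator timeseries per demand rate λ, and lz_measure is the sketch given above:

def mean_lz(run_simulation, lambdas=tuple(l / 10 for l in range(1, 11))):
    """Eq. (10): average the LZ measures over the examined lambda values."""
    measures = [lz_measure(run_simulation(lam)) for lam in lambdas]
    return sum(measures) / len(measures)          # w = len(lambdas)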
4. INDUSTRIAL USE CASE
The efficacy of the proposed approach is presented with the help of an industrial use case from the automotive sector. Two identical assembly lines (AL), each consisting of 17 consecutive stations, are simulated with the discrete event simulation software Witness 2007. Each line produces three different types of car floors, namely underbodies A, B and C. The only difference between the two assembly lines is that the setup times of the second assembly line are double those of the first. Thus, the first assembly line is considered to be more flexible than the second one.
The output of the assembly lines' discrete event simulation models is the performance indicators' timeseries. Two types of performance indicators are provided by the simulation and are further analysed with LZ, namely flowtime and tardiness; they are given by the following equations.

Figure 4 Mean LZ complexity measures of flowtime analysis

F_n^i = ET_n - AT_n   (11)

where F_n, ET_n and AT_n represent the flowtime, the completion (end) time and the arrival date of job n at time step i, respectively.

T_n^i = max(0, ET_n - DD_n)   (12)

where T_n and DD_n represent the tardiness and the due date of job n at time step i, respectively.
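
In code form, the two indicators reduce to a trivial sketch (time units as produced by the simulation):

def flowtime(end_time, arrival_time):
    """Eq. (11): F_n = ET_n - AT_n."""
    return end_time - arrival_time

def tardiness(end_time, due_date):
    """Eq. (12): T_n = max(0, ET_n - DD_n); zero for jobs finished on time."""
    return max(0.0, end_time - due_date)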
Flowtime and tardiness timeseries are further divided into three timeseries, for underbodies A, B and C. In particular, the notations are:
- F_A/T_A: flowtime/tardiness timeseries of underbody A
- F_B/T_B: flowtime/tardiness timeseries of underbody B
- F_C/T_C: flowtime/tardiness timeseries of underbody C
- F_T/T_T: flowtime/tardiness timeseries of all the underbodies
Three different groups of experiments are carried out (Table 1), for three different product mixes. Each group consists of 10 different experiments for 10 different values of λ, ranging from 0.1 up to 1 with a step of 0.1.
The results of the analysis are presented with the help of Table 1 and the diagrams of Figures 4 and 5. Table 1 includes the mean values of the Lempel-Ziv complexity for the flowtime and tardiness timeseries of both assembly lines, for the three different product mixes. The diagrams in Figure 4 illustrate the mean value of the LZ coming from the analysis of the flowtime timeseries of both assembly lines under the three different product mixes. In particular, diagram (a) presents the weighted complexity indicator, based on the flowtime of all the underbodies. Figure 4b illustrates the mean LZ of the flowtime timeseries of underbodies A, B and C in the case of product mix A. Similarly to diagram (b), diagrams (c) and (d) show the mean LZ in the cases of product mixes B and C respectively. Figure 5 includes diagrams similar to those in Figure 4, but in Figure 5 it is the tardiness timeseries analysis that is illustrated, instead of the flowtime.
4.1. ASSEMBLY LINES UNPREDICTABILITY
The maximum mean value of the LZK complexity, i.e. 0.51, occurs in the case of product mix B for assembly line A, based on the analysis of the flowtime timeseries. In general, the mean values of the LZK complexity of assembly line A, coming from the flowtime timeseries analysis, range from 0.07 up to 0.51. Similarly, assembly line B is characterized by
the same range of LZ flowtime mean values; in particular, the LZ fluctuates between 0.06 and 0.48. Additionally, the average LZ of the tardiness timeseries is characterized by low values; specifically, the values are significantly close to zero, with a maximum of 0.18.

Figure 5 Mean LZ complexity measures of tardiness analysis
A process that is least complex and predictable has an LZ value close to zero, whereas a process with the highest complexity and unpredictability-randomness has an LZ close to one. A value of the LZ near zero is associated with a simple deterministic process, such as a periodic motion, in contrast to a value near one, which is related to a stochastic and unpredictable process (Ferreira et al, 2003). Thus, both assembly lines A and B can be considered as deterministic systems of a low complexity and a high predictability, since the average LZ values are close to zero. This ascertainment is in agreement with the characteristics of the assembly lines, since the process and setup times are deterministic fixed values, while the demand rate is also deterministic and periodic.
4.2. FLEXIBILITY AND UNPREDICTABILITY
The setup times of assembly line A are half of the setup times of assembly line B. This difference leads A to have a higher flexibility than B. In order for the flexibility to be quantified, the FLEXIMAC indicator (Alexopoulos et al, 2008) is utilized; A and B are characterized by 0.2242 and 0.0632 respectively. It is evident from the diagrams of both Figures 4 and 5 that flexibility is proportional to complexity. Apart from one case in Figure 5b, the mean value of the LZ of assembly line A is always higher than the respective LZ mean values of assembly line B. Both the flowtime and the tardiness unpredictability are affected by flexibility. Thus, a strong correlation between flexibility and complexity, in terms of unpredictability and randomness, is identified. The relationship between flexibility and complexity can be useful during the design or the planning of a manufacturing system, indicating flexibility thresholds that should not be exceeded in order for any unpredictable running of the manufacturing system to be avoided. Avoiding randomness in a manufacturing system facilitates its successful monitoring and controlling.
4.3. PRODUCT MIX AND
UNPREDICTABILITY
Assembly lines A and B are studied under three different product mixes. Product mix A consists of orders of underbodies A, B and C in a ratio of 20%, 30% and 50% respectively. Product mix B consists of equal underbody orders, with a ratio of 33% each. Finally, in the case of product mix C, the underbody B orders ratio is much higher than that of the other two underbodies: the ratio of underbody B orders is almost 80%, while the ratio of each of the underbodies A and C is 10%.
The diagrams of Figure 5, presenting the mean values of the LZ analysis of the tardiness timeseries, indicate a relationship between the product mix and the unpredictability. It is observed that the lower the underbody order ratio is, the higher the mean LZK value. In particular, in Figure 5b, unpredictability is inversely proportional to the underbodies' ratio. A similar correlation is also shown in Figures 5c and 5d. In Figure 5c, unpredictability fluctuates around 0.06 for all the underbodies, whose orders ratio is 33%. Diagram (d) shows that the tardiness unpredictability of underbodies A and C is almost the same and much greater than the unpredictability of underbody B; underbodies A and C share the same orders ratio of 10%, while the underbody B orders ratio is 80%. It should be noted that this correlation between unpredictability and product mix is observed only with the tardiness and not with the flowtime. The diagrams of Figure 4, which present the mean values of the LZ of the flowtime timeseries, do not reveal a connection between unpredictability and product mix.
The relationship between the mean LZ of F_T and the mean LZ complexity measures of F_A, F_B and F_C can be studied with the help of Table 1. The F_T timeseries exhibits a behaviour similar to that of F_A, F_B and F_C in terms of unpredictability. The same remark can be made for the tardiness timeseries as well. The values of the mean LZ of the T_T, T_A, T_B and T_C timeseries fluctuate in the same range, without presenting great differences.

Figure 6 Design of Experiments
5. DISCUSSION
This paper proposes a new method of modelling and analysing the complexity of manufacturing systems from a performance indicators' unpredictability point of view. The assessment of unpredictability is based on the performance indicators' timeseries analysis, with the use of the Lempel-Ziv Kolmogorov complexity measure. The efficacy of the approach is presented with a case study from the automotive industry. Two assembly lines, characterized by different flexibility and producing three underbodies, are examined under a range of demand rates and three different product mixes. Both assembly lines present high predictability and can be characterized by low complexity. The values of the mean LZ are in line with the characteristics of both assembly lines, which are deterministic. A proportional relationship between flexibility and unpredictability is observed after analysing the flowtime and tardiness timeseries. This correlation of flexibility and complexity, in terms of unpredictability, can be useful for monitoring and controlling manufacturing systems, and it should be studied further and thoroughly. Another correlation is also identified, that of the product mix and the unpredictability, but only in the case of the unpredictability of tardiness. It is observed that the lower the ratio of an underbody's orders is, the higher its unpredictability.
6. ACKNOWLEDGMENTS
This work has been partially supported by the K. Karatheodoris grant from the University of Patras.

REFERENCES
Hu S.J., Zhu X., Wang H., Koren Y., "Product variety and manufacturing complexity in assembly systems and supply chains", CIRP Annals - Manufacturing Technology, Vol. 57, No. 1, 2008, pp. 45-48
Frizelle G., Woodcock E., "Measuring complexity as an aid to developing operational strategy", International Journal of Operations and Production Management, Vol. 15, No. 5, 1995, pp. 26-39
Deshmukh A., Talavage J., Barash M., "Complexity in manufacturing systems - Part 1: Analysis of static complexity", IIE Transactions, 1998, pp. 645-655
Suh N.P., "A theory of complexity and applications", Oxford University Press, USA, 2005
Lu S.C.-Y., Suh N.P., "Complexity in design of technical systems", CIRP Annals - Manufacturing Technology, Vol. 58, No. 1, 2009, pp. 157-160
ElMaraghy H.A., Kuzgunkaya O., Urbanic R.J., "Manufacturing systems configuration complexity", CIRP Annals - Manufacturing Technology, Vol. 54, No. 1, 2005, pp. 445-450
ElMaraghy H., Urbanic R.J., "Modelling of manufacturing systems complexity", CIRP Annals - Manufacturing Technology, Vol. 52, No. 1, 2003, pp. 363-366
Shannon C.E., Weaver W., "A Mathematical Theory of Communication", 1st Edition, University of Illinois Press, Urbana, 1949
Papakostas N., Mourtzis D., "An Approach for Adaptability Modeling in Manufacturing - Analysis Using Chaotic Dynamics", CIRP Annals - Manufacturing Technology, Vol. 56, No. 1, 2007, pp. 491-494
Giannelos N., Papakostas N., Mourtzis D., Chryssolouris G., "Dispatching policy for manufacturing jobs and time-delay plots", International Journal of Computer Integrated Manufacturing, Vol. 20, No. 4, 2007, pp. 329-337
Papakostas N., Efthymiou K., Mourtzis D., Chryssolouris G., "Modelling the complexity of manufacturing systems using nonlinear dynamics approaches", CIRP Annals - Manufacturing Technology, Vol. 58, No. 1, 2009, pp. 437-440
Wiendahl H.P., Scheffczyk H., "Simulation based analysis of complex production systems with methods of nonlinear dynamics", Annals of the CIRP, Vol. 48, 1999
Peters K., Worbs J., Parlitz U., Wiendahl H.-P., "Manufacturing systems with restricted buffer sizes", Nonlinear Dynamics of Production Processes, 2004, pp. 39-54
Alfaro M.D., Sepulveda J.M., "Chaotic behaviour in manufacturing systems", International Journal of Production Economics, 2006, pp. 150-158
Efthymiou K., Papakostas N., Mourtzis D., Chryssolouris G., "Fluid Dynamics Analogy to Manufacturing Systems", 42nd CIRP Conference on Manufacturing Systems, Grenoble, France, 2009
Schleifenbaum H., Uam J.Y., Schuh G., Hinke C., "Turbulence in Production Systems - Fluid Dynamics and its Contributions to Production Theory", Proceedings of the World Congress on Engineering and Computer Science 2009, Vol. II
Romano P., "How can fluid dynamics help supply chain management?", International Journal of Production Economics, Vol. 118, No. 2, 2009, pp. 463-472
Zhu X., Hu S.J., Koren Y., Marin S.P., "Modeling of manufacturing complexity in mixed model assembly lines", Journal of Manufacturing Science and Engineering, Transactions of the ASME, Vol. 130, No. 5, pp. 051013-1 - 051013-10
Wang H., Hu S.J., "Manufacturing complexity in assembly systems with hybrid configurations and its impact on throughput", CIRP Annals - Manufacturing Technology, Vol. 59, No. 1, 2010, pp. 53-56
Lempel A., Ziv J., "On the complexity of finite sequences", IEEE Transactions on Information Theory, Vol. 22, No. 1, 1976, pp. 75-81
Ferreira F.F., Francisco G., Machado B.S., Murugnandam P., "Timeseries analysis for minority game simulations of financial markets", Physica A, Vol. 321, 2003, pp. 619-632
Hon K.K.B., "Performance and Evaluation of Manufacturing Systems", CIRP Annals - Manufacturing Technology, Vol. 54, No. 2, 2005, pp. 139-154
Chryssolouris G., "Manufacturing Systems: Theory and Practice", 2nd Edition, Springer-Verlag, New York, 2006
Alexopoulos K., Papakostas N., Mourtzis D., Gogos P., Chryssolouris G., "Quantifying the flexibility of a manufacturing system by applying the transfer function", International Journal of Computer Integrated Manufacturing, Vol. 20, No. 6, 2007, pp. 538-547
Schmitz J.P.M., van Beek D.A., Rooda J.E., "Chaos in discrete production systems", Journal of Manufacturing Systems, Vol. 21, No. 3, 2002, pp. 236-246

A FUZZY CRITICALITY ASSESSMENT SYSTEM OF PROCESS EQUIPMENT
FOR OPTIMIZED MAINTENANCE MANAGEMENT
Qi H.S.
School of Engineering, Design and Technology
(SoEDT)
University of Bradford
West Yorkshire, BD7 1DP, England
h.qi@bradford.ac.uk
Alzaabi R.N.
School of Engineering, Design and
Technology (SoEDT)
University of Bradford
West Yorkshire, BD7 1DP, England


Wood A.S.
School of Engineering, Design and Technology
(SoEDT)
University of Bradford
West Yorkshire, BD7 1DP, England
A.S.Wood@bradford.ac.uk
Jani M.
School of Engineering, Design and
Technology (SoEDT)
University of Bradford
West Yorkshire, BD7 1DP, England

ABSTRACT
In modern chemical plants it is essential to establish an effective maintenance strategy, which will deliver financially driven results at optimized conditions, i.e. minimum cost and time, by means of a criticality review of the equipment under maintenance. In this paper a fuzzy logic based criticality assessment system for a local company's equipment is introduced. This fuzzy system is shown to improve the conventional crisp criticality assessment system. Results from case studies show that the fuzzy logic based system can perform the same analysis as the conventional crisp system and, in addition, outperform it, e.g. it outputs more criticality classifications with improved reliability and a greater number of different ratings that account for fuzziness.
KEYWORDS
Equipment criticality assessment, Maintenance management, Fuzzy logic
1. INTRODUCTION
In modern chemical plants, it is essential to establish an effective maintenance strategy. Criticality-based maintenance (CBM) is a prioritized approach to the maintenance of process equipment in the chemical process industries (CPI). In a process and hazard criticality ranking (PHCR) study, each equipment item is evaluated with a "what if it fails" scenario. This requires personnel with thorough knowledge of the process/equipment under study. The PHCR value is a relative ranking in an overall criticality hierarchy that is used to determine priorities for maintenance programs, inspections and repairs (Ciliberti V Anthony, 1998). A decision-making support system of this kind, which can achieve expert-level competence in solving problems in task areas by gathering a body of knowledge about specific functions, is called a knowledge-based or expert system. More often, the two terms, expert system (ES) and knowledge-based system (KBS), are used synonymously (Fasanghari, M. and Montazer, G.A., 2010).
In this paper a crisp criticality assessment system (CCAS) currently used in a local chemical company based in West Yorkshire, UK, is presented (Jani M. B., 2004). The vagueness of the system was discovered during its implementation. To improve the system's robustness, fuzzy logic is applied to the CCAS and consequently a fuzzy criticality assessment system (FCAS) is developed. Finally, the advantages of the new FCAS over the existing CCAS are demonstrated with some real cases.
2. CRITICALITY ASSESSMENT SYSTEM
(CAS) AND EXPERT SYSTEM (ES) IN
DECISION MAKING
2.1. CRITICALITY ASSESSMENT REVIEW
A criticality assessment review of equipment provides the structure around which a chemical plant can form its operational maintenance plan. The review assesses the process criticality of individual equipment items, taking into consideration the potential impact upon the Environment and Health & Safety, and the financial impact upon the business in the event of equipment failure (Dekker R. et al., 1998; Lee J. and Hong Y., 2003). Normally, a Multi-Criterion Classification of the Critical Equipment (MCCCE) technique, as defined by Felix et al. (2006), is used in a criticality review and assessment. Through the criticality review and assessment, companies can achieve:
- a proper preventive maintenance for safer equipment, better equipment availability for production, and lower maintenance costs;
- active planning, forecasting, scheduling and follow-up of most work, with minimum downtime and need for emergency repairs;
- an accurate and complete recording of equipment maintenance activities and their associated costs (material and labour), which provides the necessary maintenance data for maintenance managers to analyse and control maintenance costs.
Afefy H. Islam (2010) reported that, by implementing the equipment criticality assessment for the plant components, about 22.17% of the annual spare parts cost was saved as a result of the preventive maintenance.
2.2. CRITICALITY ASSESSMENT SYSTEM
USED AT THE LOCAL CHEMICAL
COMPANY
A criticality assessment review of the equipment at the local chemical company in West Yorkshire, UK, was carried out during 2003-2004 (Jani M. B., 2004). The review looked at all the plant equipment in considerable detail, down to instrument level. The assessment method in use was based upon a corporate procedure, as shown in Figure-1; several tasks were conducted through the review, such as collecting and reviewing equipment criticality data and concurrently building and collecting data for critical spares.

Figure 1 Flow chart map of criticality assessment
procedure (Jani M. B., 2004)
The assessment method was based on a corporate
procedure for criticality assessment and involved
looking at the primary function of an item and
establishing the consequences of loss of its function
with the three factors/features listed in Table-1.
Table 1- Three factors for the criticality assessment
1. Environment, Health and Safety (EHS)
2. Impact on Business (IoB)
3. Annual Maintenance Cost (AMC)

This procedure was applied to all facilities, structures, systems, equipment (rotating or fixed), and components in the plant, including electrical, mechanical and instrumentation. All equipment within the plant was evaluated and processed through the criticality assessment process, based upon site experience and team knowledge represented by a Team of Plant Experts (TPES).
2.2.1. Team of Plant Experts (TPES)
The Team of Plant Experts was a group of staff in the company with a good mix of expertise and knowledge of the production process, the environment (e.g. discharge of contents into air and waste water, and other regulations), as well as the maintenance/operation of the plant. The team members, normally 8 to 10 staff at the plant site depending upon the area of operation being considered, included the Operational Supervisor, Operator, Safety/EHS Representative, Area Engineer, Process Engineer, Production representative, Shift manager, Maintenance Supervisor/Manager/representative, and Technical representative.
The potential effect of each asset on each of the
three aforementioned aspects (shown in Table-1) in
the case of its failure was determined by TPES. The

most probable failure situation associated with each of the assets, among a number of failure scenarios, was determined by TPES in terms of the level of impact of the failure on the company as far as maintenance was concerned. Crisp scores (0, 1, 2, 3 or 4) were assigned by TPES to each of the assets with regard to the effect on EHS, IoB and AMC (see Table-1).
2.2.2. Structure of the Crisp Criticality
Assessment System (CCAS)
The structure of the CCAS is illustrated in Figure-2; it consists of three inputs and two outputs. Input One is the Effect on Environment, Health and Safety (EHS). The EHS score of each of the assets, assigned by TPES based upon its hazardous extent, could be 0, 1, 2, 3 or 4, as shown in Table-2. Input Two is the Effect of Impact upon Business (IoB). The IoB score of each of the assets, assigned by TPES based on the business loss if the whole unit were shut down for a certain time, could be 0, 1, 2, 3 or 4, as shown in Table-3. Input Three is the Effect upon Annual Maintenance Cost (AMC). The AMC score of each of the assets, assigned by TPES based on the equivalent cost of maintenance, could be 0, 1, 2, 3 or 4, as shown in Table-4.










Figure 2 Structure of the Crisp Criticality Assessment
System (CCAS)
Based on Input One and Input Two, the system provides the level of criticality (LC) as Output One for each of the assets, as shown in Figure-2. The LC was decided using a rule table (see Table-5) designed by TPES. The LC of each of the assets was classified as HIGH (score 2), MEDIUM (score 1) or LOW (score 0) according to its scores on EHS and IoB. As a result, all assets were grouped into three categories (i.e. Low, Medium and High) based on the LC score. The decision on the maintenance priority of an individual asset was based on the category of the asset.
Input Three, i.e. the AMC score, did not actually have any effect as far as the LC classification was concerned. However, it did play a role in determining the total criticality score (TCS) for each asset, which was Output Two of the CCAS, as shown in Figure-2. The TCS score was derived based on the following formula:

TCS = EHS × 4 + IoB × 3 + AMC × 1   (1)

where 4, 3 and 1 are weight factors assigned by TPES for the three inputs, respectively, reflecting the level of influence of each input on the total criticality score (TCS). EHS (with weight factor 4) has a higher effect on TCS, as well as on LC, than IoB (with weight factor 3). The AMC (with weight factor 1) has the least effect on TCS and has no effect on LC. For some other companies, the third input may become influential, and the weight factor should be considered differently (consequently, the third input may not be ignored as far as LC is concerned). The company used the TCS, which varies from zero to a maximum of 32 (based on Formula-1), to differentiate the relative criticality of individual assets within the same LC category whenever necessary. As the company used only the first two inputs to decide the level of criticality (LC), this paper only considers the first two inputs.
Table 2- HAZARD impact

Effect on EHS | Description | Score
Not Hazardous (NH) | No hazards* exist | 0
Slightly Hazardous (SH) | Potential First Aid injury on site; non-regulated release could occur; local odour complaint | 1
Hazardous (H) | Potential OII*, LT1* on site; regulated release exceeding permit conditions could occur; offsite odour complaint | 2
Extremely Hazardous (EH) | Potential serious permanent injury on site; potential offsite injuries (FA*); regulated release occurs causing local environmental damage; multiple offsite odour complaints; local media coverage | 3
Deadly Hazardous (DH) | Potential loss of life on site; potential serious offsite injuries (OII+); regulated release occurs causing long term environmental damage; national media coverage | 4

*Notes: the corresponding definitions/descriptions for Hazard, OII, LT1 and FA can be found in ref. (Jani M. B., 2004)
Table 3- BUSINESS impact

Effect on IoB | Description | Score
No effect (NE) | No impact on production | 0
Less effect (LE) | Shutdown for up to 1 hr (equivalent to a business loss of up to £5,000) | 1
Medium effect (ME) | Shutdown for 1-8 hrs (equivalent to a £5,000-£50,000 business loss) | 2
High effect (HE) | Shutdown for 8-24 hrs (equivalent to a £50,000-£100,000 business loss) | 3
Very high effect (VE) | Shutdown for more than 24 hrs (equivalent to more than £100,000 loss) | 4

Table 4- MAINTENANCE impact

Effect on AMC | Description | Score
Very Low (VL) | < £1,000 per year | 0
Low (L) | £1,000-£10,000 per year | 1
Medium (M) | £10,000-£20,000 per year | 2
High (H) | £20,000-£50,000 per year | 3
Very High (VH) | > £50,000 per year | 4
Table 5 - Rule table for Level of Criticality (LC) score

IoB \ EHS | 0 | 1 | 2 | 3 | 4
0 | LOW (0) | LOW (0) | LOW (0) | MEDIUM (1) | HIGH (2)
1 | LOW (0) | LOW (0) | LOW (0) | MEDIUM (1) | HIGH (2)
2 | LOW (0) | LOW (0) | MEDIUM (1) | MEDIUM (1) | HIGH (2)
3 | LOW (0) | MEDIUM (1) | MEDIUM (1) | HIGH (2) | HIGH (2)
4 | LOW (0) | MEDIUM (1) | HIGH (2) | HIGH (2) | HIGH (2)
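Encoded as a lookup table, Table-5 behaves as follows (an illustrative Python sketch, not the company's spreadsheet implementation; rows and columns follow the table):

# Rows are IoB scores 0-4, columns are EHS scores 0-4 (Table 5).
LC_RULES = [
    # EHS: 0  1  2  3  4
    [0, 0, 0, 1, 2],  # IoB 0
    [0, 0, 0, 1, 2],  # IoB 1
    [0, 0, 1, 1, 2],  # IoB 2
    [0, 1, 1, 2, 2],  # IoB 3
    [0, 1, 2, 2, 2],  # IoB 4
]
LC_NAMES = {0: "LOW", 1: "MEDIUM", 2: "HIGH"}

def level_of_criticality(ehs: int, iob: int) -> str:
    """Crisp CCAS classification: read the LC category off the rule table."""
    return LC_NAMES[LC_RULES[iob][ehs]]

assert level_of_criticality(ehs=3, iob=2) == "MEDIUM"  # asset 3 in Table 10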
2.2.3. Necessity for system improvement
Advantages of CCAS. The CCAS ran successfully at the company. By using the CCAS, the company assessed all equipment, as shown in Figure-3: 17.9% of the assets were in the High category, 26.8% were in the Medium category and 55.3% belonged to the Low category (Jani M. B., 2004). The criticality assessments were recorded in an Excel spreadsheet, allowing easy manipulation and sorting of data. This spreadsheet became a control document with an appropriate change and control procedure. New equipment was assessed and added to the list when it was installed.
The benefits of implementing the CCAS at the company's West Yorkshire plant include:
- Reducing the risk of serious failures on high-criticality assets;
- Reducing costs through a reduced labour requirement (as low-criticality assets require less attention);
- Reducing the usage of parts due to unnecessary maintenance;
- Reducing planned maintenance stoppages due to unnecessary maintenance;
- Higher productivity attributed to the improved reliability of critical assets.
Utilisation of the CCAS can minimise unplanned events such as:
- injury to people, both employees and the public;
- damage to the environment;
- loss of process material;
- damage to capital assets;
- increases in operating costs.
In addition, the CCAS is easy to use, and it is easy to update the assessment scores with new inputs for the company's assets.

[Figure 3: All Categories Critical Equipment Chart (Jani M. B., 2004) - a bar chart of equipment counts (0-350) per equipment type (Pumps, Valves & actuators, Instruments, Agitators, Vessels, Containment, Relief devices, Lifting equipment, Comms equipment, Heat exchangers, Filters, Boilers, Chimneys, and Total), each split into High, Medium and Low criticality categories.]
Issues for improvement. To evaluate the system performance, three issues have been identified:
1. The input. The input scores for EHS and IoB are simple, but have some drawbacks. For instance, when the TPES evaluates an individual asset, its members are likely to have different views on what EHS (and IoB) scores should be assigned. The CCAS cannot accommodate these differences. For example, during the criticality assessment of an agitator motor, which was used in the Effluent Plant to give motion to an agitator, the TPES showed some differences of opinion on the EHS score for the agitator motor: of the 10 TPES members, 5 gave a score of 0 and the other 5 gave a score of 1. In the CCAS, however, the TPES had to agree on a single integer-valued score, and eventually everybody agreed on a score of 1. Such rigidity of the CCAS regarding input information might filter out useful information, i.e. differences in TPES opinions might indicate that the actual score should be assigned with some degree of uncertainty/fuzziness, e.g. a possibility of the score lying between 0 and 1. As far as the IoB score is concerned, apart from having no tolerance for differences among TPES opinions (the same as for the EHS score), the CCAS treats, for example, a loss of £5,000 and a loss of £50,000 the same, as they both score 2 (see Table-3). It would be better if a system could take the actual estimated value as the input.
2. The output. The output score for the level of criticality is an integer of 0, 1 or 2, representing Low, Medium and High respectively.


It is known that the company also wanted to rank assets within the same level-of-criticality group in terms of importance to the production operation, which was one of the reasons that the third input, AMC, was included in the CCAS. It would be better if the input information, in terms of EHS and IoB, could be used not only to assess individual assets to different levels of criticality but also to rank the assets within each level-of-criticality group.
3. The rule set. The rule set in Table-5, set up by the TPES, is the core of the CCAS. The robustness of the rules used affects the quality of the criticality assessment. The 25 rules, generally speaking, represent the knowledge of the team of experts (i.e. the TPES) and are reliable. However, it is possible that human error and uncertainty existed in the determination of the 25 rules, which might make some of the rules less trustworthy and rather subjective. So it is necessary to evaluate and fine-tune the rules to make them better represent the logic of the physical system.
The issues mentioned above can be addressed naturally by integrating fuzzy logic inference engines and fuzzy membership functions into the CCAS.
2.3. FUZZY EXPERT SYSTEM
The quality of decisions, in terms of repair priorities and resource assignment, is a critical factor for a production company. A decision support system plays a vital role in enhancing the decision process. One problem in a decision process is how to deal with, or represent the meaning of, the vague concepts usually used in situation characterisation, such as those implicit in linguistic expressions like 'very hazardous' or 'very expensive to repair'. One possible approach to handling vague concepts is Fuzzy Set Theory, formulated and developed by Lotfi Zadeh around 50 years ago (Zadeh L. A., 1973). Fuzzy set theory is a generalisation of classical set theory that provides a way to absorb the uncertainty inherent in phenomena whose information is vague, and supplies a strict mathematical framework that allows their study with some precision and accuracy. A fuzzy set presents a boundary with a gradual contour, in contrast with a classical set, which presents a discrete border. Since fuzzy logic can easily be adopted as a means of both capturing human expertise and dealing with uncertainty, fuzzy systems have been successfully applied to various applications and large-scale complex systems that exist everywhere in our society (Yager R. R., 1980; Zimmermann H. J., 1992; Zadeh L. A., 1996; Betroluzza C., et al., 1995; Garavelli A. C., 1999; Tran L. T. and Duckstein L., 2002; Buyukozkan G. and Feyzioglu O., 2004; Lu K. Y. and Sy C. C., 2009). Fuzzy expert systems have been developed for decisions involving uncertainty and ambiguity (Tran L. T. and Duckstein L., 2002), where fuzzy logic enables an expert system (ES) to cope with uncertainty and to deal with both quantitative and qualitative variables. Buyukozkan G. and Feyzioglu O. (2004) pointed out that fuzzy logic decision systems can encode expert knowledge in a direct and easy way, using rules with linguistic labels. The main tasks in developing a fuzzy logic decision system consist of determining the membership functions, the fuzzy rules, fuzzification and defuzzification. The membership functions and fuzzy rules are generated to best represent the company's expert knowledge.
3. DEVELOPMENT OF A FUZZY
CRITICALITY ASSESSMENT SYSTEM
(FCAS)
3.1. FCAS SET-UP FOR LEVEL OF
CRITICALITY ASSESSMENT
To keep the same system structure, the new FCAS uses EHS and IoB as two fuzzy inputs for the assessment of the level of criticality (LC). The structure of the FCAS is illustrated in Figure-4 (Alzaabi R. N., 2005).



Figure 4 Structure of the Fuzzy Criticality Assessment
System (FCAS)
The FCAS consists of two fuzzy events as the system inputs (i.e. EHS and IoB), one inference engine based on 25 IF-THEN rules using the Mamdani method, and one crisp output obtained through de-fuzzification using the Centroid method (Mamdani E. H., 1977).
3.1.1. Two fuzzy inputs: EHS and IoB
Each crisp input of the previous CCAS is replaced by a corresponding fuzzy input with fuzzy membership functions, as shown in Figure-5 and Figure-6. Five fuzzy labels are assigned to each input, as shown in the right-hand column of Table-6 for EHS and Table-7 for IoB. EHS, as


antecedent 1, has five labels, i.e. NH, SH, H, EH, DH. IoB, as antecedent 2, has five labels, i.e. NE, LE, ME, HE, VE. For comparison, the scores in the left-hand column of the tables are those used in the CCAS.
Table 6 - Fuzzy labels for EHS

Score | Effect on EHS | Fuzzy Label
0 | Not Hazardous | NH
1 | Slightly Hazardous | SH
2 | Hazardous | H
3 | Extremely Hazardous | EH
4 | Deadly Hazardous | DH
Table 7 - Fuzzy labels for IoB

Score | Effect on IoB | Fuzzy Label
0 | No effect on production | NE
1 | Shutdown of the whole unit for up to 1 hr (equivalent to a loss of up to £5,000) | LE
2 | Shutdown for 1-8 hrs (equivalent to a £5,000-£50,000 loss) | ME
3 | Shutdown for 8-24 hrs (equivalent to a £50,000-£100,000 loss) | HE
4 | Shutdown for more than 24 hrs (equivalent to more than £100,000 loss) | VE

The membership function for EHS is established to give numerical meaning to each label, as shown in Figure-5. A triangular membership function is used. EHS is assumed to lie within a universe of discourse U1 = {EHS | 0 ≤ EHS ≤ 4}; that is, we limit the universe of discourse to the range of interest for EHS. The lower boundary is zero, which makes sense because it means no hazardous effect on production. This is also identical to the set-up of the existing crisp system (CCAS).

0 0.5 1 1.5 2 2.5 3 3.5 4
0
0.2
0.4
0.6
0.8
1
EHS
D
e
g
r
e
e

o
f

m
e
m
b
e
r
s
h
i
p
0 1 2 3 4

Figure 5 EHS membership functions
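For illustration, a triangular membership function and the five EHS labels can be sketched as follows (Python; the exact breakpoints are not given in the paper, so unit-spaced triangles centred on the scores 0-4 are an assumption of ours):

def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership: rises from a to the peak at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Assumed EHS labels NH..DH centred at scores 0..4, one unit wide on each side.
EHS_MF = {lab: (i - 1.0, float(i), i + 1.0)
          for i, lab in enumerate(["NH", "SH", "H", "EH", "DH"])}

# An averaged TPES score of 2.5 is half "Hazardous", half "Extremely Hazardous".
degrees = {lab: tri(2.5, *abc) for lab, abc in EHS_MF.items()}
assert degrees["H"] == degrees["EH"] == 0.5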
The membership function for IoB is established to give numerical meaning to each label, as shown in Figure-6. A trapezoidal membership function is used. IoB is assumed to lie within a universe of discourse U2 = {IoB | 0 ≤ IoB < ∞}; that is, we use an unlimited universe of discourse for the range of interest of IoB. The lower boundary is zero, which makes sense because it means no effect on production, or a shutdown of the whole unit for zero hours, equivalent to a business loss of £0. This is also identical to the set-up of the existing crisp system (CCAS).

[Plot: five membership functions, one per label, over the ImpactOnBusiness axis 0 to 4 × 10^5 (£), with degree of membership 0-1 on the vertical axis.]
Figure 6 IoB membership functions
Table 8 - Fuzzy labels for LC

Crisp Score | Fuzzy Score | Level of Criticality | Fuzzy Label
0 | 0 ≤ LC ≤ 0.5 | LOW | L
1 | 0.5 < LC ≤ 1.5 | MEDIUM | M
2 | 1.5 < LC ≤ 2.5 | HIGH | H
3 | 2.5 < LC ≤ 3 | VERY HIGH | VH

[Plot: four membership functions labelled Low, Medium, High and Very High over the CriticalityClassification axis 0-3, with degree of membership 0-1 on the vertical axis.]
Figure 7 Criticality classification membership functions
3.1.2. The output: Level of Criticality (LC)
Four fuzzy labels, i.e. L (Low), M (Medium), H (High) and VH (Very High), are assigned to the Level of Criticality (LC), as shown in the right-hand column of Table-8. For comparison, the left-hand column of the table gives the Level of Criticality scores assigned by the TPES and used in the CCAS.
The membership function for LC is established to give numerical meaning to each label. A triangular membership function is used for LC, as shown in Figure-7. The universe of discourse of LC, as the consequent in the rule-based fuzzy logic approach, is U3 = {LC | 0 ≤ LC ≤ 3}; we limit the universe of discourse to the range of interest for LC. This is also identical to the set-up of the existing crisp system (CCAS).
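Read as a crisp decision rule, the band edges of Table-8 amount to the following small mapping (an illustrative sketch):

def lc_category(score: float) -> str:
    """Map a de-fuzzified LC score to its category, per the bands in Table-8."""
    if score <= 0.5:
        return "LOW"
    if score <= 1.5:
        return "MEDIUM"
    if score <= 2.5:
        return "HIGH"
    return "VERY HIGH"

assert lc_category(1.93) == "HIGH"  # the worked example discussed with Figure-9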
3.1.3. IF-THEN rule-base
IF-THEN rules have been set up for the fuzzy inference; they can be presented in matrix form, referred to as a Fuzzy Associative Memory (FAM),
which has a form similar to the rule table (Table-5) used in the CCAS. A FAM is a matrix that uses the labels of one input for the row names and the labels of the other input variable for the column names. Each cell in the matrix contains an output label denoting the output resulting from the specific input combination represented by the row and column (Buyukozkan G. and Feyzioglu O., 2004). For the FCAS, using EHS and IoB as the inputs and LC as the output, the FAM developed to generate the fuzzy output is given in Table-9. Since five labels are defined for each input, the FAM is a 5×5 matrix. 24 of the 25 rules in the rule matrix in Table-9 are identical to the rules designed by the TPES for the CCAS (see Table-5). One new rule, i.e. 'If EHS is DH and IoB is VE, then LC is VH', is introduced for the FCAS. (In comparison, in the company's CCAS, 2 (= HIGH) is the output when EHS and IoB both score 4.)
Table 9 - Fuzzy Associative Memory (FAM) matrix for criticality classifications

IoB \ EHS | NH | SH | H | EH | DH
NE | L | L | L | M | H
LE | L | L | L | M | H
ME | L | L | M | M | H
HE | L | M | M | H | H
VE | L | M | H | H | (VH)


Figure 8 The profile of the fuzzy inference representing the 25 IF-THEN rules used in the FCAS
The input variables appear only in the antecedent part (i.e. the IF part) of the fuzzy rules, while the output variable is found only in the consequent part (i.e. the THEN part), for example: 'IF EHS is EH and IoB is LE, THEN LC is M'. Figure-8 shows the profile of the fuzzy inference based on the Mamdani method, produced using the Matlab Fuzzy Logic Toolbox, representing the 25 IF-THEN rules of the FCAS in Table-9. The profile shows a transition of the level of criticality from 0 to 3, representing LOW, MEDIUM, HIGH and VERY HIGH respectively. The profile also indicates that EHS is superior to IoB in terms of its effect on LC, as was implemented in the company's CCAS. The profile shows that the IF-THEN inference engine in the FCAS truly represents the opinions and knowledge of the company's experts (the TPES).
3.1.4. De-fuzzification and crisp output for the LC
The LC score for each asset (i.e. LOW, MEDIUM, HIGH or VERY HIGH) is obtained through aggregation and de-fuzzification. Min-Max inference is used in the rule evaluation: it takes the minimum of the antecedents as a rule's strength and the maximum of the rule strengths for each consequent. The Centroid method is used for de-fuzzification. The final level of criticality (LC) for each asset is one of the four categories (from L to VH), based on the fuzzy set definition of LC shown in Table-8.
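Putting the pieces together, the sketch below implements Min-Max rule evaluation over the FAM of Table-9, followed by Centroid de-fuzzification (Python). All membership breakpoints are our assumptions, and both inputs are taken on the 0-4 score axis rather than the monetary axis the FCAS uses for IoB, so it reproduces the method rather than the paper's exact numbers:

def tri(x, a, b, c):
    """Triangular membership rising a->b and falling b->c (as in the earlier sketch)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

FAM = [  # rows: IoB labels NE..VE, columns: EHS labels NH..DH (Table 9)
    ["L", "L", "L", "M", "H"],
    ["L", "L", "L", "M", "H"],
    ["L", "L", "M", "M", "H"],
    ["L", "M", "M", "H", "H"],
    ["L", "M", "H", "H", "VH"],
]
LC_CENTRE = {"L": 0.0, "M": 1.0, "H": 2.0, "VH": 3.0}

def mamdani_lc(ehs: float, iob: float, step: float = 0.01) -> float:
    """Min-Max inference over the FAM, then Centroid de-fuzzification."""
    strength = {lab: 0.0 for lab in LC_CENTRE}
    for i in range(5):      # IoB labels NE..VE, assumed centred at scores 0..4
        for j in range(5):  # EHS labels NH..DH, assumed centred at scores 0..4
            w = min(tri(iob, i - 1.0, i, i + 1.0),
                    tri(ehs, j - 1.0, j, j + 1.0))
            lab = FAM[i][j]
            strength[lab] = max(strength[lab], w)
    # Centroid of the aggregated (clipped) output sets over the LC axis [0, 3].
    num = den = 0.0
    for k in range(int(3.0 / step) + 1):
        x = k * step
        mu = 0.0
        for lab, c in LC_CENTRE.items():
            mu = max(mu, min(strength[lab], tri(x, c - 1.0, c, c + 1.0)))
        num += mu * x
        den += mu
    return num / den if den else 0.0

With these assumed membership functions, mamdani_lc(2.5, 2.0) evaluates to approximately 1.0, i.e. squarely in the MEDIUM band of Table-8; the paper's value of 1.2 for the comparable case differs because the FCAS takes IoB on the real monetary axis.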


Figure 9 Inputs and Output
Figure-9 demonstrates how the output is obtained when the EHS and IoB values are entered. The lower-left arrow in the figure shows the container where the inputs are entered (EHS = 2 and IoB = £161,000). In the fuzzy inference process, the EHS input fires rules eleven to fifteen, as shown in the left column, while the IoB input simultaneously fires rules five, ten, fifteen, twenty and twenty-five, as shown in the middle column. The resulting Criticality Classification then appears where the upper-right arrow points; the value 1.93 is the result obtained from the fuzzy inference. Based on Table-8, the asset is classified as level 2 (or HIGH) in terms of level of criticality (as 1.5 < 1.93 < 2.5), which is the same as the result obtained from the case company's current CCAS. Based on the definition of the criticality classification membership functions (see Figure-7), the score of 1.93 can be interpreted as: the corresponding asset is 93% level 2 (HIGH) and 7% level 1 (MEDIUM), as indicated in Figure-7. For further comparison of the two systems (i.e. FCAS and CCAS), 6 cases are studied closely; these are summarised in Table-10.
4. RESULTS AND DISCUSSION WITH CASE STUDIES
4.1. CASE STUDIES
The power and robustness of the fuzzy criticality assessment system (FCAS) can be seen by noting the differences between the CCAS and the FCAS shown in Table-10, which includes the criticality assessments of 6 assets. Columns 3 and 4 of Table-10 are the two inputs, EHS and IoB. For the CCAS the two inputs are integers. For the FCAS, however, the IoB input is the real value (i.e. the equivalent number of hours lost and the corresponding business loss in £) and the EHS input is a statistical average of the collective scores from the individual members of the TPES. Column 5 gives the outputs obtained from both the CCAS and the FCAS.
Table 10 - Comparison of FCAS with CCAS

Asset No. | System | INPUT ONE: EHS | INPUT TWO: IoB | OUTPUT: LC
1 | Crisp | 3 | 4 | 2 = H
1 | Fuzzy | 3.5 = (4×3 + 3×3 + 3.5×2)/8 | 36 hrs (~£200,000) | 2.4 = H (0.4 VH, 0.6 H)
2 | Crisp | 3 | 4 | 2 = H
2 | Fuzzy | 3.375 = (4×3 + 3×5)/8 | 24 hrs (~£100,000) | 2.2 = H (0.2 VH, 0.8 H)
3 | Crisp | 3 | 2 | 1 = M
3 | Fuzzy | 2.5 = (3×3 + 2×3 + 2.5×2)/8 | 4 hrs (~£25,000) | 1.2 = M (0.2 H, 0.8 M)
4 | Crisp | 2 | 2 | 1 = M
4 | Fuzzy | 1.4 = (2×4 + 1×6)/10 | 6 hrs (~£35,000) | 0.7 = M (0.7 M, 0.3 L)
5 | Crisp | 1 | 1 | 0 = L
5 | Fuzzy | 0.5 = (1×5 + 0×5)/10 | 0.5 hr (~£2,500) | 0.4 = L (0.4 M, 0.6 L)
6 | Crisp | 0 | 0 | 0 = L
6 | Fuzzy | 0.4375 = (1×3 + 0×4 + 0.5×1)/8 | 0 hr (~£0) | 0.3 = L (0.3 M, 0.7 L)

Take the third case in Table-10 as an example, where, from the CCAS, the asset scores 3 on the effect of EHS and 2 on the effect of IoB. Consequently, the level of criticality (LC) of this asset scores 1, which means that the asset's criticality is Medium. In the FCAS, however, by taking into account the differences of opinion among the TPES when assessing this asset, the EHS score is statistically 2.5 (instead of 3): 3 of the 8 TPES members gave a score of 3, another 3 gave a score of 2 and the remaining 2 were neutral (2.5 is used here to represent the neutral view). For IoB, 4 hrs, representing a production shutdown of 4 hours and an equivalent loss of £25,000, is used as the input. Consequently, the level of criticality (LC) of the asset is 1.2, which can be interpreted using the fuzzy set definition as 80% Medium and 20% High (see Figure-7); thus the asset's criticality is largely Medium, the same as obtained from the CCAS.
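The averaged EHS input for this case is simply the mean of the panel's eight scores (a one-line check in Python, with the two neutral members counted at 2.5):

scores = [3, 3, 3, 2, 2, 2, 2.5, 2.5]   # the 8 TPES members' EHS scores for asset 3
ehs_input = sum(scores) / len(scores)
assert ehs_input == 2.5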
4.2. ADVANTAGES OF THE FUZZY SYSTEM (FCAS) OVER THE CRISP SYSTEM (CCAS)
The results of the case studies show that the new fuzzy system has several advantages over the current crisp system.
First, the fuzzy system can do what the conventional system offers: if the crisp values from the third case discussed previously are input into the FCAS, then LC = 1 results, which is identical to the result obtained from the CCAS. In addition, as shown in Table-10, both systems derive the same results as far as the LC category is concerned, i.e. assets 1 and 2 are in the High category, assets 3 and 4 are in the Medium category and assets 5 and 6 are in the Low category.
Secondly, the fuzzy system offers the possibility of a much more detailed criticality classification than the conventional crisp system, by taking account of the fuzziness and greyness existing in the real-world production system and the subjectivity, bias and imperfection of experts. It is known that the company also wants to rank assets within the same level-of-criticality group in terms of importance to the production operation, which was one of the reasons that the third input, AMC, was included in the CCAS. In the FCAS this can be realised naturally by using EHS and IoB not only to assess individual assets to different levels of criticality but also to rank the assets within the same criticality group. As shown in Table-10, the FCAS ranks all 6 assets based on their fuzzy scores, i.e. asset 1 ranks first, down through to asset 6 last, in terms of criticality. The conventional CCAS, however, cannot provide this information: asset 1 is equal to asset 2 in the High category, asset 3 is equal to asset 4 in the Medium category and asset 5 is equal to asset 6 in the Low category.
Thirdly, the fuzzy criticality system allows the team of experts (the TPES) to express their differences of opinion when assessing and scoring each asset, and carries that fuzziness and vagueness into the criticality assessment process. Consequently, the fuzzy system provides the criticality ranking of the company's assets with less bias and higher reliability.
The analysis of all the results obtained from the FCAS shows that some assets received fuzzy scores either higher or lower than they should be according to the experts' evaluation. This observation indicates that there is possibly room for fine-tuning some of the 25 rules, which will be discussed in detail in another paper.
5. CONCLUSIONS
In modern chemical plants it is essential to establish an effective maintenance strategy that delivers financially driven results under optimised conditions, i.e. minimum cost and time, by means of a criticality review of equipment in terms of maintenance. The crisp criticality assessment system (CCAS) for a local company's equipment is a very useful tool for the company's effective production maintenance management. However, it was found that the system lacks flexibility and reliability, and that it can be improved by introducing fuzzy sets into the system. Consequently, a new fuzzy criticality assessment system (FCAS) has been developed and is presented in this paper. The system was developed using the Matlab fuzzy logic toolbox with the Mamdani inference method. It is found that:
1. The fuzzy system improves the existing crisp criticality assessment system; it can do what the conventional system can offer.
2. The fuzzy system offers the possibility of a much more detailed criticality classification than the conventional crisp system, by taking account of the fuzziness and greyness existing in the real-world production system and the bias and imperfection of experts. In addition to assessing individual assets to different levels of criticality (LC), the FCAS can naturally use the EHS and IoB input information to rank the assets within each LC group.
3. The fuzzy criticality system allows the team of experts (the TPES) to express their differences of opinion when assessing and scoring each asset, and carries that fuzziness and vagueness into the criticality assessment process. Consequently, the fuzzy system provides the criticality ranking of the company's assets with less bias and higher reliability.
4. Using the new FCAS, the quality of the company's maintenance management can be further optimised by evaluating the existing 25 rules and fine-tuning some of them wherever necessary; this will be studied in future work.
REFERENCES
Afefy H. Islam, Reliability-Centered Maintenance
Methodology and Application: A Case Study,
Engineering, Vol. 2, 2010, pp 863-873
Alzaabi R. N., A Fuzzy Criticality Assessment System,
Final Year Project Report, University of Bradford,
UK, 2005
Betroluzza C., Corral N. and Salas A., On a New Class
of Distances between Fuzzy Numbers, Mathware and
Soft Computing, Vol. 2, No. 2, 1995, pp. 71-84
Buyukozkan G., and Feyzioglu O., A Fuzzy Logic
Based Decision Making Approach for New Product
Development, International Journal of Production
Economics, Vol. 90, 2004, pp27-45
Ciliberti V. Anthony, Use criticality-based maintenance for optimum equipment reliability, Chemical Engineering Progress, Vol. 94, No. 7, July 1998, pp 63-67
Dekker R., Kleijn M. and Rooij P. de, A Spare Parts
Stocking Policy Based on Equipment Criticality,
International Journal of Production Economics, Vol.
56-57, 1998, pp 69-77
Fasanghari, M. and Montazer, G.A, Design and
implementation of fuzzy expert system for Tehran
Stock Exchange portfolio recommendation, Expert
System with Applications, Vol. 37, No. 9, 2010, pp
6138-6147
Felix C., Leon H. Gomez De and Cartagena J. R.,
Maintenance Strategy Based on a Multicriterion
Classification of Equipments, Reliability Engineering
and System Safety, Vol. 91, No. 4, 2006, pp 444-451
Garavelli A. C., Gorgoglione M. and Scozzi N., Fuzzy
Logic to Improve the Robustness of Decision Support
Systems under Uncertainty, Computers & Industrial
Engineering, Vol. 37, 1999, pp 477-480
Jani M. B., Project on Critical Equipment Review,
Internal report, University of Bradford, UK, 2004
Lee J. and Hong Y., A Stock Rationing Policy in a (s, S)
Controlled Stochastic Production System with 2-phase
Coxian Processing Times and Lost Sales,
International Journal of Production Economics,
Vol. 83, 2003, pp 299-307
Lu K. Y. and Sy C. C., A real-time decision-making of
maintenance using fuzzy agent, Expert Systems with
Applications Vol. 36 No. 2, 2009, pp 2691-2698
Mamdani E. H., Application of Fuzzy Logic to Approximate Reasoning Using Linguistic Synthesis, IEEE Transactions on Computers, Vol. C-26, No. 12, 1977, pp. 1182-1191
Tran L. T. and Duckstein L., Comparison of Fuzzy Numbers Using a Fuzzy Distance Measure, Fuzzy Sets and Systems, Vol. 130, No. 3, 2002, pp. 331-341
Yager R. R., A General Class of Fuzzy Connectives,
Fuzzy Sets and Systems, Vol. 4, No. 3, 1980, pp. 235-
242


Zadeh L. A., Outline of a New Approach to the Analysis of Complex Systems and Decision Processes, IEEE Transactions on Systems, Man and Cybernetics, No. 1, 1973, pp. 28-44
Zadeh L. A., Fuzzy Logic = Computing with Words, IEEE Transactions on Fuzzy Systems, Vol. 4, No. 2, 1996, pp. 103-111
Zimmermann H. J., Fuzzy Logic for the Management of Uncertainty, John Wiley & Sons, Inc., USA, 1992 (Preface in: L. Zadeh and J. Kacprzyk)



REALISING THE OPEN VIRTUAL COMMISSIONING OF MODULAR AUTOMATION SYSTEMS

X. Kong, B. Ahmad, R. Harrison, A. Jain, Y. Park
Wolfson School of Mechanical and Manufacturing Engineering
Loughborough University
Leicestershire, LE11 3TU, UK
{x.kong, b.ahmad, r.harrison, a.jain, y.park}@lboro.ac.uk

Leslie J. Lee
Ford Motor Company
Powertrain Operations
Dunton Engineering Centre
Laindon, Essex SS15 6EE, UK
llee1@ford.com


ABSTRACT
To address the challenges in the automotive industry posed by the need to rapidly manufacture more
product variants, and the resultant need for more adaptable production systems, radical changes are
now required in the way in which such systems are developed and implemented. In this context, two
enabling approaches for achieving more agile manufacturing, namely modular automation systems
and virtual commissioning, are briefly reviewed in this contribution. Ongoing research conducted at
Loughborough University which aims to provide a modular approach to automation systems design
coupled with a virtual engineering toolset for the (re)configuration of such manufacturing
automation systems is reported. The problems faced in the virtual commissioning of modular
automation systems are outlined. AutomationML, an emerging neutral data format which has the potential to address integration problems, is discussed. The paper proposes and illustrates a
collaborative framework in which AutomationML is adopted for the data exchange and data
representation of related models to enable efficient open virtual prototype construction and virtual
commissioning of modular automation systems. A case study is provided to show how to create the
data model based on AutomationML for describing a modular automation system.
KEYWORDS
Modular Automation Systems, Component Based, Virtual Commissioning, AutomationML

1. INTRODUCTION
After years of booming markets, the automotive industry is now facing unprecedented challenges, arising mainly from excessive global production capacity, decreasing product lifecycles and increasing product variants (Jens Kiefer et al, 2006). Despite the improvements made by just-in-time and lean production strategies, the current manufacturing systems used by industry cannot respond efficiently and effectively to this paradigm shift. This is due, to a significant extent, to the fixed configurations and hierarchical structures (in both the hardware and the software) of conventional manufacturing systems, which cannot be rearranged and reused efficiently as market needs change and thus face a constant threat of obsolescence (R. Harrison et al, 2006). Michalos et al also provide a comprehensive review of the challenges and outlook for automotive assembly technologies (G. Michalos et al, 2010). To fulfil the demands of mass customisation, there is a strong need for new forms of manufacturing systems. Among the proposed approaches, Reconfigurable Manufacturing Systems (RMS) are regarded as promising. RMS enable rapid responsiveness


in the mass-customisation production era by providing customised flexibility on demand in a short time (ElMaraghy et al, 2009); on the other hand, the time needed to build and validate an RMS is increasing as its complexity grows (S. Lee et al, 2007), while competition for key market shares makes shorter production ramp-up times of key importance (Reinhart, G. and G. Wünsch, 2007).
To address these crucial production-related challenges, two emerging enablers are recognised here for building reconfigurable automation systems cost-effectively and in minimum time. These are:
1. Adopting a modular approach to build
reconfigurable automation systems by
composing such systems of reusable
autonomous mechatronic units. This approach
enhances the changeability of a reconfigurable
automation system.
2. Introducing the concept of Virtual
Commissioning (VC) to implement and
validate reconfigurable automation systems in
virtual environments prior to the physical
system being implemented. By adopting virtual
commissioning the ramp-up time can be
significantly compressed.
In this context, the objective and scope of this paper are to 1) provide the background context for modular automation systems and virtual commissioning, 2) introduce new research work in this field being carried out at Loughborough University, 3) identify current problems in the virtual commissioning of modular automation systems, propose a collaborative framework targeted at addressing these problems, and consider how to realise the data transformation between tool-specific data formats and a neutral data format, AutomationML, and 4) develop an open data model based on AutomationML for modular automation systems, transforming the current data model of Loughborough University's modular automation system into this format.
2. MODULAR AUTOMATION SYSTEMS
Current manufacturing automation systems are normally implemented in rigid hierarchical structures. The current approach, whilst well established and using well-proven methods, still follows a classical rigid sequential model and uses an ad-hoc collection of poorly integrated tools and methods to take customer requirements and translate them into the desired system. As shown in Figure-1, in the current approach the design, build and validation of automation systems take place sequentially. In such an engineering process the validation of a system cannot be carried out until the final stage of the system's development, when all the electrical and mechanical units and the control software have been integrated. Obviously, any unforeseen delays that occur during these activities will delay the succeeding activities and hence the system delivery date. This adversely affects the lead time of a production machine and thus results in a failure to gain a competitive edge and market share (R. Harrison et al, 2001). Such an engineering approach also relies heavily on the knowledge and experience of the engineering team. Moreover, the control code developed for such systems is often monolithic and unstructured, making it difficult to understand, modify and reuse. Because of this, any alteration to the automation system is time-consuming, complex, error-prone and expensive. This has an adverse impact on commissioning and ramp-up time and can also lead to performance degradation.


Figure 1- Current Engineering Process of a Traditional
Automation System (R.Harrison et al, 2006)
To gain a competitive edge in the market by
providing more product variants more rapidly,
innovative approaches to automation system
engineering are required to achieve agility in the
manufacturing systems. An important consideration
is that new production systems must be scalable in
capacity and functionalities thereby making them
able to convert quickly to produce new products
(Mehrabi et al, 2000). In this context, modular
production systems are designed at the onset to be
re-configurable and created from basic hardware
and software modules that can be re-arranged
quickly and reliably (R.Harrison et al, 2006).
There are several modular approaches in the
literature from both academic and industrial
researchers. These modular approaches commonly
break down an automation system into reusable
autonomous production units. By combining these
units a modular automation system can be built to
achieve reconfiguration. Typical examples include
Component-Based approach proposed by Harrison
et al for the design and implementation of modular
assembly automation systems (R.Harrison et al,
2006), Actor-Based Assembly Systems (ABAS)


built on autonomous mechatronic units (Martinez Lastra, J.L., 2002), the Modular Machine Design Environment (MMDE) (Moore, P.R et al, 2003) proposed and implemented in the VIR-ENG research project for designing, implementing and verifying control systems for agile modular manufacturing machinery, a modular autonomous material-handling equipment solution for flexible automation described in (Bj et al, 2004), a fully automated robotic system built in a modular way to meet the needs of a high-throughput chemistry laboratory described by Manley (Manley, J.D et al, 2008), and a modular approach to production system engineering adopting mechatronic objects, proposed by researchers from Daimler AG and the University of Magdeburg (M. Bergert, J.K, 2010).
The component-based approach (R.Harrison et
al, 2006) proposed by researchers from
Loughborough University aims at building
reconfigurable modular automation systems for
automotive power-train assembly systems. In this
approach, a whole transport and assembly system
can be decomposed ultimately into reusable and re-
configurable components with embedded
knowledge of control, 3D modelling, kinematics,
and particular resources. A simplified representation
of the structure of a component-based modular
approach is shown in Figure-2 (upper part).
Components can be designed, implemented, and
validated concurrently and independently by various
vendors. An automation system developed using
this approach is inherently modular, reconfigurable
and can be quickly developed in a time- and cost-
effective manner through combining pre-validated
components, as shown in Figure-2 (lower part).
Evaluation work at ThyssenKrupp Krause showed
that a saving of about 50% in overall build time of a
control system on a reference assembly machine can
be achieved by using a component-based approach
(R.Harrison et al, 2001).


Figure 2- Architecture of Modular Component-based
Approach
The authors are currently working on a research
project named Business Driven Automation (BDA)
applying the component-based concept through
collaborative research involving Loughborough
University, Ford Motor Company and their machine
builders and control vendors. This project aims to
enable the realisation of next generation business-
driven automation systems which can be readily
evolvable under the direct control of the end-user
and can be pre-defined in a modular form to enable
the majority of process engineering to occur before
the beginning of product engineering. Providing a
virtual engineering environment for component-
based assembly automation systems and a common
engineering model that can effectively support the
supply chain partners throughout the machine's
lifecycle is the main objective of this research. This
virtual engineering environment is to facilitate the
virtual construction, testing and validation of new
production facilities prior to their physical build.
Fundamentally, this engineering application is to 1)
reuse the proven system commonalities from the
previous projects, 2) provide efficient
(re)configuration capabilities within the powertrain
assembly systems and 3) provide robust launch of
new production systems. The virtual engineering
toolset enables the development of a practical and
effective set of reusable machine components that
could be easily deployed and integrated to build a
desired automation system. The functionality and
know-how for operation and error recovery are
embedded into the components; making them
intelligent in the context of having the ability to
decide what to do and when to do a task.
Based on the requirements of the end-user (i.e. Ford) and their supply chain partners, the engineering toolset has been designed as a set of modules. These include a) a Core Component Editor, b) a virtual machine operator (V-Man) and c) Runtime/Installation support. These modules are briefly described below.
a) The Core Component Editor provides a 3D virtual modelling environment to develop and (re)configure manufacturing systems.
b) The purpose of the V-Man engineering
module is to provide support for semi-
automatic and manual assembly stations,
integrating and optimising the interaction
between machines and operators.
c) The Runtime/Installation module brings
engineering concurrency between mechanical
and controls engineering by automatically
generating the control software, reusing
information from 3D CAD models.
The virtual engineering toolset application in the
context of virtual commissioning of automation
systems is discussed in detail in the next section.


3. VIRTUAL COMMISSIONING
The shrinking production cycle is making production ramp-up time an important factor in a product's economic success.
starts with the complete assembly of a production
system and ends with the achievement of the
targeted quality at a specified cost and output rate
(S.Lee et al, 2007). The ramp-up phase can be
divided into the commissioning and run-up phases.
Control system malfunction is a major source of delay, prolonging the ramp-up phase.
Presently, control software engineering is responsible for more than half of the malfunctions of highly automated production equipment and is typically carried out during the commissioning phase. An investigation for the German Association of Machine Tool Builders showed that the correction of defective control software consumes up to 60% of commissioning time and accounts for 15% of time-to-delivery (Reinhart, G. and G. Wünsch, 2007).
virtual commissioning, in which a virtual prototype
of the to-be system is used to validate control
software on an actual Programmable Logic
Controller (PLC) and Human-Machine Interface
(HMI) before the physical integration of all the
devices occurs on the shop floor, whereby a saving of ramp-up time can be achieved, as shown in Figure-3.


Figure 3- Time Benefit of Virtual Commissioning (S.Lee et
al, 2007)
Current approaches to build a virtual prototype
for virtual commissioning can be classified into Full
Simulation of Machinery (FSM) and Hardware-in-
the-Loop (HIL) simulation. The FSM approach
includes a simulation of the production equipment
as well as the control hardware itself. This approach
can be carried out within the control system
hardware; however, the control software can only
be tested on a pseudo-code basis. In a HIL
simulation, on the other hand, the control software
can be tested under more realistic conditions by
connecting the virtual prototype of a machine to a
real control hardware, thereby avoid making
changes to the software afterwards. The HIL
simulation approach has been applied by most
researchers in the commissioning of different levels
of plant hierarchy (Reinhart, G. and G. Wnsch,
2007).
There are a large number of engineering tools for
virtual commissioning in the market from a range of
vendors. Typical examples of the state-of-the-art
commercial solutions include Delmia Automation,
UGS Tecnomatix, INVISION, WinMod and
ControlBuild. Each of these tools has its own
strengths and limitations and provides several good
functionalities to conduct the virtual commissioning
of a machine. However, from a control point of view, none of these tools fully supports the required industrial functionalities, such as information reuse from simulated machine models to generate the required control logic. An industrial survey conducted by the authors within the automotive sector has shown that currently available tools meet only about 10-20 percent of the users' control requirements.
In order to provide a more complete solution, Loughborough University is conducting research which aims to enable the virtual engineering toolset (discussed in the previous section) to fully support the virtual commissioning of automation systems. In this context, the tools are provided with User Interfaces (UI) and functions dedicated to the design of an automation system's control layout, as well as a lightweight 3D virtual environment. CAD models of machine elements can be imported and assembled to build a 3D virtual representation of components. Kinematics can then be applied to the moving parts of components by defining the type of motion (such as rotation or translation) and the direction and amplitude of the motion. The control behaviours of each component are defined using a state-transition diagram. Each state defines either a static position of a component (e.g. the home position) or a dynamic state (e.g. moving to the work position). This allows an animation of a machine's behaviour to be viewed, thus enabling virtual testing, debugging and validation of system behaviour. This not only enables the virtual commissioning of a machine but also makes possible the realisation of the concept of a pre-validated and pre-commissioned library of machine components which can be quickly configured to develop new systems.
In order to enable 100% commissioning of
control software prior to the physical build of a
machine, the authors are also investigating a novel
control system software architecture and associated
programming method which can reuse machine
configuration information from simulated CAD
models of a machine to generate the control logic
and Human Machine Interface (HMI) screens via a
runtime installation module. The runtime
installation module accepts the control logic
information in XML format from the virtual


engineering toolset. This information is then
processed and converted into executable PLC
control code. This will enable machine builders to
develop control applications at a higher level of
abstraction by utilising the functionality of reusable
components without worrying about their low-level
programming details. The integration of this novel
control method with BDA virtual engineering tools
will allow testing of the virtual models of machines
against their generated control code and the HMI
screens using the physical control hardware (such as
PLCs) in the loop. This will also enable the end-
user to train their technical staff before the physical
machines arrive at their shop floor. The virtual
engineering tool and the concept of its integration
with control systems are illustrated in Figure-4.
Unlike other commercially available virtual
engineering solutions, the CCE tool aims to rely on
generic, open data formats for both control and
modelling data. This has the potential to increase
its integration capabilities with other engineering
tools. However, the CCE toolset does not currently
adopt a standard neutral data format.
From the review of the relevant available engineering tools, it becomes clear that no single tool available in the market can fulfil all the requirements of automation system engineering. To perform the complete process of virtual commissioning, several different engineering tools normally need to be used in combination. If the tools involved are from the same IT vendor, seamless data exchange between the IT systems based on vendor-specific proprietary data interfaces is normally available; however, if the tools are from different vendors, there is no possibility of exchanging cross-functional data models between two tools without a loss of information, due to the lack of a common data model for data exchange (Manley, J.D et al, 2008).
To achieve a successful industrial introduction of virtual commissioning, some typical issues, summarised below, still need to be addressed:
- Insufficient data exchange between engineering tools from different vendors: A virtual prototype of the to-be system is the precondition of virtual commissioning. Building this virtual model requires combining data from different disciplines (mechanical, electrical, control logic, etc.), which normally come from different engineering tools. Data exchange between virtual commissioning tools and these discipline-specific tools is still a challenge due to the proprietary data formats. Currently, some of these data are exchanged in paper-based ways, which is mostly manual, repetitive, error-prone and time-consuming.
- Lack of a common data model to represent modular automation systems validated via VC: Reusing existing models to build a new system is a key principle of a modular approach. The virtual models validated by virtual commissioning, such as topology information and control logic information, should be stored in common data models based on neutral data formats so that they can subsequently be reused by different tools. This is not the case at present, due to the different data structures and data formats of the various VC tools.
- No complete solution for the direct deployment of control logic data from virtual systems to real systems: The PLC program should be generated automatically from the control logic information which is already defined during virtual construction and then validated by virtual commissioning. However, there is a lack of tools which can directly translate this control information into fully usable control code.
The above challenges have been difficult to address in the past due to the lack of a suitable open standard data format. A tool-neutral, cross-functional data exchange format named AutomationML is being developed by the AutomationML organisation for data exchange in automation system engineering. AutomationML has been developed to enable efficient data exchange between different discipline-specific automation engineering tools. In this paper we investigate its use to support the data representation of a modular automation system, thus enabling more open data exchange. The following section provides a brief description of AutomationML and its potential to address the above challenges.


Figure 4- Virtual Engineering Environment Developed in
BDA Project
4. AUTOMATIONML - A NEUTRAL DATA EXCHANGE FORMAT FOR AUTOMATION ENGINEERING
To address the existing heterogeneous tool landscape in automation system engineering, a neutral data format, AutomationML, is being developed by a consortium of companies including Daimler; it was initially released in


2008. The goal of AutomationML is to provide a
tool-independent format for data representation and
data exchange between different software tools
involved in automation system engineering without
loss of information.
AutomationML aims to be a neutral data format usable in the whole process of automation systems engineering. Its data representation capabilities are still being expanded by the AutomationML organisation; for instance, a new working group was initiated in February 2011 focusing on network models, device descriptions and wiring plans. In its current version, AutomationML covers information on plant topology, geometry, kinematics and logic (sequencing, behaviour and interlocking). This information is essential for building virtual prototypes for virtual commissioning and for the deployment of the resultant machines. If the tools utilised during virtual prototyping and machine deployment are from different vendors, data exchange between them is difficult due to the large number of required interfaces. By using AutomationML, data exchange can potentially be realised with a significantly reduced number of interfaces.
After virtual commissioning, validated virtual models at different levels (e.g. system and component behaviour models) will be available for further application and reuse if this information can be saved in a proper neutral data format. As illustrated in Figure-5, AutomationML adopts an object-oriented paradigm and allows the modelling of real plant components as data objects encapsulating information from different disciplines as their properties, typically including data on geometry, kinematics, behaviour, position within the hierarchical plant topology and relations to other objects. An object can consist of other sub-objects and can itself be part of a larger composition or aggregation. Moreover, AutomationML employs existing industry data formats for the storage of the different aspects of engineering information, as shown in Figure-5: COLLADA is used for the storage of geometric and kinematic information, PLCopen XML serves for the storage of sequences and behaviours, and CAEX is used as the top-level format that connects the different data formats to comprise the plant topology.


Figure 5- Architecture of AutomationML (Drath. R, 2008)
5. DATA EXCHANGE IN VC VIA AUTOMATIONML
A framework for virtual prototype construction and virtual commissioning, as illustrated in Figure-6, has been proposed and is being developed at Loughborough University to enhance the openness of the CCE tool. This framework adopts AutomationML as the neutral data format for data exchange and data representation. To implement the transformation of discipline-specific data, a plug-in based framework called the Conditioner Pipeline Framework (CPF) needs to be implemented. The simplified structure of the CPF is shown in Figure-7. Using the CPF, the transformation is performed in the following three steps (sketched in code after this list):
1. Load data from the input data format via a loader module.
2. Transform the information in a conditioner to the target data format, or optionally to an intermediate data format, such as IML for logic data.
3. Save the transformed data in a target data format, such as CAEX, COLLADA or PLCopen XML.
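The three steps can be pictured with a minimal Python sketch. The element names, file layout and flattened intermediate model below are illustrative assumptions only; neither the actual CPF code nor the real PLCopen XML schema is reproduced here:

from xml.etree import ElementTree as ET

def load_cce_logic(path):
    """Step 1: load tool-specific data (here, a CCE-style XML export)."""
    return ET.parse(path).getroot()

def to_intermediate(cce_root):
    """Step 2: condition the data into a flat intermediate (IML-like) model:
    one (component, target_state, condition) record per transition."""
    records = []
    for comp in cce_root.iter("Component"):
        for tr in comp.iter("Transition"):
            records.append((comp.get("name"),
                            tr.get("target"),
                            tr.findtext("Condition", default="")))
    return records

def save_as_sfc(records, path):
    """Step 3: save in the target neutral format (a much simplified,
    PLCopen-XML-flavoured SFC skeleton, not the real schema)."""
    root = ET.Element("project")
    sfc = ET.SubElement(root, "SFC")
    for comp, target, cond in records:
        step = ET.SubElement(sfc, "step", name="%s_%s" % (comp, target))
        ET.SubElement(step, "transitionCondition").text = cond
    ET.ElementTree(root).write(path, xml_declaration=True, encoding="utf-8")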


Figure 6- Virtual commissioning framework based on
AutomationML
For the transformation of CAD files, the CPF can be implemented using the COLLADA DOM to access COLLADA files. In terms of logic data mapping, data in different formats, such as Gantt charts, PERT charts and logic networks, will first be mapped to an Intermediate Modelling Layer (IML); in a second step the resulting IML models will be transformed into SFC (Sequential Function Chart) saved in the PLCopen XML format. IML defines 11 abstract elements representing the main categories of data typically used in logic models, in order to decouple the neutral format PLCopen XML from the different input and output data formats when implementing the transformation. Alternatively, various types of logic data can be transformed into SFC directly by using Extensible Stylesheet Language Transformations (XSLT) technology.




Figure 7- Simplified structure of CPF
To implement the transformation of plant topology information, a library called the AutomationML Engine, provided by the AutomationML organisation, will be used to handle CAEX files. This process is more complex, as there is a great deal of user-defined information in topology data. The key issues to be addressed in implementing the data format transformation are summarised as follows:
1. Extract tool-specific data from different files. In the CCE tool, hierarchy information is saved in XML files while most of the logic information is stored in a database.
2. Map the different terminologies used by different tools to describe the same object; e.g. the position information of an object is named 'link point' in the CCE tool while its equivalent in AutomationML is 'frame'.
3. Build class libraries, including a role class library, an interface class library and, especially, a system unit class library, which are missing from the CCE tool's data representation. Predefined AutomationML object types (classes) are essential to the AutomationML data format because it follows an object-oriented paradigm. In contrast to the role class library and the interface class library, which are AutomationML standard libraries, the system unit class library needs to be defined by users. A comparison between the data structure of the CCE tool and that of AutomationML is illustrated in Figure-8.

Figure 8 -Data structures of AutomationML and CCE Tool
By employing this framework, the following advantages can be gained:
- Efficient data exchange to build virtual prototypes: If each engineering tool involved in automation system engineering stores its data in an open standard neutral data format, or provides interfaces to import/export this standard data format, efficient data exchange between these tools and virtual commissioning can be achieved even if the required tools are from different vendors, and duplicated work can thereby be avoided.
- A tool-independent data representation for validated virtual models: Validated virtual models saved in a tool-independent data format remain reusable even if the VC tools are upgraded or even changed. This enables the seamless reusability of those models and protects past engineering investment and expertise.
- A common control behaviour model: This is the foundation for the automatic generation of the PLC program. After virtual commissioning, all the validated control behaviour data can be saved as SFC models. This has the potential to significantly reduce the effort needed to implement the direct deployment of control logic onto real machines.
In the following section, a case study shows how to transform the data model of a modular automation system in the CCE engineering tool into an equivalent model based on the AutomationML data format.
6. CASE STUDY
This section presents a case study of building an open data model for a Festo-test-rig-based modular automation system which has been validated in the CCE tool, as shown in Figure-9. The current data structure and the equivalent AutomationML-based data structure for the Festo rig are described in turn.

Figure 9 - Real Festo rig (left) and its virtual prototype
in CCE (right)
In the CCE tool, all components are categorised into actuators, sensors and non-controls. An actuator contains geometry, kinematic and logic information. A sensor contains geometry and state information, while a non-control has only geometry information. The geometry information of a component is stored in a file in the VRML data format. The logic information of a component is described as a state-transition diagram.
A system is built by combining the components it
is composed of. All the information of a system,


except its geometry, can be exported as an XML file for further analysis or reuse. The simplified structure of the XML file holding the information about a section of the Festo rig is shown in Figure-10. The information included in this file is difficult to reuse in other tools because it does not follow an open standard, although it is an XML-based file. Also, the fact that all the information is stored in the same file makes it difficult to extract discipline-specific information for further application, e.g. using the logic information to generate PLC code.
[Figure content: Festo Rig.xml contains a <System> element holding <Component> elements (e.g. Pusher); each component references its geometry file in a <Geo> element (pusher.wrl, a VRML file) and holds its state-transition diagram in an <STD> element built from <State>, <Transition> and <Condition> elements.]

Figure 10 - Data structure of a system (the Festo rig) in the CCE tool
In this context, a new data model based on AutomationML has been developed for representing such a modular automation system. The current data models of the CCE tool can be transformed into the AutomationML-based data models using the CPF introduced in the previous section. In this new data model, the hierarchical information, the geometry and kinematic information, and the logic information of the Festo rig are stored in separate XML-based files in their corresponding data formats, which are CAEX, COLLADA and PLCopen XML (described by a Sequential Function Chart) respectively. In the CAEX file, three kinds of classes, namely Role Classes, Interface Classes and System Unit Classes, are defined first. According to the information contained in the Festo rig, three roles (resource, product and process), two interfaces (COLLADAInterface and PLCOpenInterface) and 14 system units (work part, floor, sensor, pusher, swivel arm, conveyor, rotary table, etc.) are defined. The role class Resource further includes three sub-role classes: Actuator, Sensor and Non-control. All the system unit classes inherit from the corresponding role classes; e.g. Pusher inherits from Actuator, Floor inherits from Non-control and Work Part inherits from Product. The hierarchical data structure (CAEX) includes COLLADAInterface and PLCOpenInterface links to the geometry data and the control logic data. The simplified data structure of the Festo rig based on AutomationML is shown in Figure-11.

Figure 11 - Data structure of the Festo rig based on the
AutomationML format
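As a minimal sketch of what such an instance hierarchy looks like when built programmatically, the following Python (standard-library ElementTree) emits a CAEX-flavoured skeleton for the Pusher component; the paths, attribute names and file names mimic CAEX/AutomationML conventions but are simplified assumptions of ours and not schema-validated:

from xml.etree import ElementTree as ET

caex = ET.Element("CAEXFile", FileName="FestoRig.aml")
ih = ET.SubElement(caex, "InstanceHierarchy", Name="FestoRig")

pusher = ET.SubElement(ih, "InternalElement", Name="Pusher",
                       RefBaseSystemUnitPath="FestoRigSUCLib/Pusher")
# External interfaces link out to the discipline-specific files.
for name, cls, ref in [
    ("Geometry", "COLLADAInterface", "pusher.dae"),
    ("Logic", "PLCOpenInterface", "pusher_sfc.xml"),
]:
    itf = ET.SubElement(pusher, "ExternalInterface", Name=name,
                        RefBaseClassPath="InterfaceClassLib/" + cls)
    attr = ET.SubElement(itf, "Attribute", Name="refURI")
    ET.SubElement(attr, "Value").text = ref

ET.ElementTree(caex).write("FestoRig.aml", xml_declaration=True, encoding="utf-8")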
As more and more engineering tools become
AutomationML compliants, this new data model
can be directly reused by other engineering tools for
the virtual commissioning of automation systems.
Furthermore, discipline-specific information can
now be readily extracted and used in further
engineering tools, e.g. a PLC code generator to
automatically generate PLC code using the logic
information which has been validated in the CCE tool
and saved in the Sequential Function Chart format.
7. SUMMARY AND OUTLOOK
Reconfigurable Manufacturing Systems and Virtual
Commissioning have been regarded as two key
enablers to achieve agile manufacturing in response
to the need for mass-customisation. The research
work carried out in MSI Research Institute at
Loughborough University provides an innovative
virtual engineering approach and a corresponding
application tool to the implementation of modular
automation systems. The main advantages of this
approach are: 1) the modelled components can be
reused and reconfigured to achieve various machine
configurations, 2) virtual machine prototypes will
be highly portable as the data has been saved in a
generic, open data format, and 3) the control logic
information included in the validated machine
models can be deployed directly to the corresponding real machines, thereby avoiding time-consuming and error-prone manual work.
No suitable neutral format existed for automation
system description prior to the advent of
AutomationML. Without such a format, data exchange between the CCE tools and other
discipline-specific tools is difficult because,
potentially, a great number of point-to-point
interfaces need to be maintained. The authors have
identified AutomationML as a suitable format to
address the above challenges in virtual
commissioning of modular automation systems
considering its capabilities for neutral data


representation and object-oriented architecture. A
collaborative framework based on AutomationML
is being developed at Loughborough University.
This framework offers the potential to achieve
efficient data exchange between the CCE virtual
commissioning tools and other relevant engineering
tools and applications. A data model based on
AutomationML for describing CCE-based modular
automation systems has been defined. These
application system models have been validated via
virtual commissioning using the CCE tools. The
neutral format-based data model created enables the
validated information to be efficiently reused by
relevant engineering tools from different vendors.
Finally, it should be noted that to implement
complete seamless virtual engineering, a range of
other issues still remain to be addressed. These
include:
- More information from different disciplines, such as I/O mapping, hydraulics and pneumatics, needs to be included in virtual models to realise complete virtual commissioning. The AutomationML development organisation is working to include these kinds of information in AutomationML.
- The virtual prototyping capability needs to be extended to support the direct deployment of control software. This remains problematic for multiple PLCs due to the wide variety of PLC brands that dominate the market, each with its own vendor-specific software.
8. ACKNOWLEDGEMENT
The authors gratefully acknowledge the support of
the EPSRC and our industrial collaborators through
the IMCRC Business Driven Automation (BDA)
project in carrying out this research.
REFERENCES
Bj, et al., Using autonomous modular material handling
equipment for manufacturing flexibility, in
Proceedings of the 36th conference on Winter
simulation. 2004, Winter Simulation Conference:
Washington, D.C.
Drath, R., et al. AutomationML - the glue for seamless
automation engineering. in Emerging Technologies
and Factory Automation, 2008. ETFA 2008. IEEE
International Conference on. 2008.
ElMaraghy, H.A., M.A. Ismail, and H.A. ElMaraghy,
Component Oriented Design of Change-Ready MPC
Systems, in Changeable and Reconfigurable
Manufacturing Systems. 2009, Springer London. p.
213-226.
Harrison, R., et al., Reconfigurable modular automation
systems for automotive power-train manufacture.
International Journal of Flexible Manufacturing
Systems, 2006. 18(3): p. 175-190.
Harrison, R., et al., Distributed engineering of
manufacturing machines. Proceedings of the
Institution of Mechanical Engineers, Part B: Journal
of Engineering Manufacture, 2001. 215(2): p. 217-
231.
Kiefer, J., T. Bär and H. Bley, Mechatronic-oriented Engineering of Manufacturing Systems Taking the Example of the Body Shop, in Conference on Life Cycle Engineering. 2006: Leuven.
Lee, S., et al., A component-based approach to the design
and implementation of assembly automation system.
Proceedings of the Institution of Mechanical
Engineers, Part B: Journal of Engineering
Manufacture, 2007. 221(5): p. 763-773.
Bergert, M. and J. Kiefer, Mechatronic Data Models in Production Engineering, in 10th IFAC Workshop on Intelligent Manufacturing Systems. 2010: Lisbon, Portugal.
Manley, J.D., et al., Modular Approaches to Automation
System Design Using Industrial Robots. Journal of the
Association for Laboratory Automation, 2008. 13(1):
p. 13-23.
Martinez Lastra, J.L., Reference Mechatronic Architecture for Actor-based Assembly Systems. 2004, Tampere: Tampere University of Technology.
Mehrabi, M.G., A.G. Ulsoy, and Y. Koren,
Reconfigurable manufacturing systems: Key to future
manufacturing. Journal of Intelligent Manufacturing,
2000. 11(4): p. 403-419.
Michalos, G., et al., Automotive assembly technologies review: challenges and outlook for a flexible and adaptive approach. CIRP Journal of Manufacturing Science and Technology, 2010. 2(2): p. 81-91.
Moore, P.R., et al., Virtual engineering: an integrated
approach to agile manufacturing machinery design and
control. Mechatronics, 2003. 13(10): p. 1105-1121.
Reinhart, G. and G. Wünsch, Economic application of
virtual commissioning to mechatronic production
systems. Production Engineering, 2007. 1(4): p. 371-
379.



WEB-DPP: AN ADAPTIVE APPROACH TO PLANNING AND MONITORING
OF JOB-SHOP MACHINING OPERATIONS
Lihui Wang
Virtual Systems Research Centre
University of Skövde, Sweden
lihui.wang@his.se
Mohammad Givehchi
Virtual Systems Research Centre
University of Skövde, Sweden
mohammad.givehchi.yazdi@his.se


ABSTRACT
Utilising the existing IT infrastructure, the objective of this research is to develop an integrated
Web-based distributed process planning system (Web-DPP) for job-shop machining operations and
their runtime execution monitoring. Our approach tries to engage a dispersed working group in a
collaborative environment, allowing the team members to share real-time information through the
Web-DPP. This paper analyses the challenges, and presents both the system design specification
and the latest development of the Web-DPP system. Particularly, it proposes a two-tier architecture
for effective decision making and introduces a set of event-driven function blocks for bridging the
gap between high-level planning and low-level execution functions. By connecting to a Wise-
ShopFloor framework, it enables real-time execution monitoring during the machining operations,
locally or remotely. The closed-loop information flow makes adaptive planning possible.
KEYWORDS
Process Planning, Machining Feature, Function Block, Job-Shop Machining, Uncertainty

1. INTRODUCTION
Recently, outsourcing, joint ventures, and cross-
border collaborations have led to a job environment
geographically distributed across organisational and
national boundaries. This distributed environment is
further complicated by the uncertainties of today's job-
shop machining operations. Manufacturing systems
are thus required to be more flexible and adaptive to
unpredictable changes in a dynamic environment,
where manufacturing operations may be rearranged
or replaced upon the changes. Such manufacturing
systems must contain collaborative and intelligent
entities that can adaptively adjust themselves so as
to achieve a specified objective.
As a constituent component of the manufacturing
systems, the entity for process planning is required
to be responsive and adaptive to the rapid
adjustment of production capacity and functionality.
Unfortunately, traditional process planning methods
are time-consuming and error-prone, if applied
directly to such a changing environment. Therefore,
adaptive and intelligent process planning has been a
hot topic for the last decade.
A process plan generated in advance is often found unsuitable or unusable for the targeted resources, resulting in wasted early process planning effort and a productivity drop while idle machines wait for the remaining machining operations to be re-planned. To be responsive to sudden changes, a
distributed and adaptive approach is considered
suitable and is thus proposed here for handling the
dynamic situation, e.g. job-shop machining.
Process planning generally refers to those
preparatory tasks from design to manufacturing of a
mechanical product, such as process sequencing,
machine and cutter selection, tool path planning,
operation optimisation, NC code generation, as well
as setup/fixture planning, etc. Since the introduction
of computers to the field of process planning in the
1960s, subsequent research has been numerous. By the end of the 1980s, more than 156 computer-aided process planning systems had been reported in the literature survey by Alting and Zhang (1989).
Among many others, previous research studies on
process planning include object-oriented approach
(Zhang et al., 1999), neural network-based approach
(Park et al., 1996; Devireddy and Ghosh, 1999;
Monostori et al., 2000), Petri net-based approach
(Xirouchakis et al., 1998), genetic algorithm-based
approach (Zhang et al., 1997), multi-agent bidding-
based approach (Gu et al., 1997), constraint-based
approach (Márkus et al., 2002), feature-driven
approach (Wang and Norrie, 2001), and information
and knowledge management (Lutters et al., 1999;
Denkena et al., 2007).
The reported approaches and their combinations
have been applied to several specific problem
domains, such as setup planning (Ong and Nee,
1996), process sequencing (Yeo et al., 1998), tool
selection (Lim et al., 2001), cutting parameter
selection (Arezoo et al., 2000), and tool path
planning (Boogert et al., 1996), to name a few.
More recently, research efforts on process planning
have shifted to distributed process planning (Wang
et al., 2003), planning and scheduling integration
(Ueda et al., 2007), reconfigurable process planning
(Azab and ElMaraghy, 2007), and intelligent
process planning based on capacity profile of
machine tools (Newman and Nassehi, 2009). The
common objective of the recent research is to
generate robust, precise yet flexible process plans,
effectively.
Nevertheless, small-and-medium-sized firms in
job-shop machining business are experiencing more
shop-floor uncertainties today than ever before,
including frequent product changeover, urgent job
insertion, job delay, broken tools, unavailability of
machines or fixtures, and labour shortage, due to
multi-tier outsourcing, customised product demand,
and much shortened product lifecycle. The reported
process planning approaches and systems in the
literature are mostly limited to static problems with
decisions made in advance. Their adaptability to
unpredictable changes on shop floors, however,
remains insufficient. Most process planning systems
available today are centralised in architecture, and
off-line in data processing. It is difficult for a
centralised off-line system to make adaptive
decisions, yet in advance, without knowing actual
runtime conditions on the shop floors.
In this paper, we present a web-based distributed
process planning (Web-DPP) approach to solving
the uncertainty problems. The ultimate goal of the
Web-DPP is to improve the system performance
when planning job-shop machining operations on
dynamic shop floors with high adaptability and
responsiveness. We introduce a two-layer structure: supervisory planning (SP) and operation planning
(OP). SP focuses on high-level machining sequence
generation, while OP focuses on machine-specific
working step planning and execution. More details
of the Web-DPP concept are provided in Section 2,
followed by the architecture design in Section 3.
Section 4 presents in detail the system analysis of
the Web-DPP in IDEF0, which is implemented in
Section 5. A test part machining is reported in
Section 6 to demonstrate the capability of the Web-
DPP prototype. Finally, in Section 7, our research
contributions and future work are summarised.

2. WEB-DPP CONCEPT
Figure-1 illustrates our long-term research activities
in distributed process planning, dynamic
scheduling, real-time monitoring and remote
control, in a shared cyber workspace, where Web-
DPP is the focal point of adaptive decision making
based on real-time monitoring information and
available resources from dynamic scheduling. The
four major activities shown in Figure-1 close the
loop of information flow. With the support of real-
time information from the monitoring module, the
decision making during process planning and scheduling can be made adaptive to changes and well informed of the situation.



Figure 1 Distributed process planning as related to other
decision modules in a shared cyber workspace
Process planning is the task that transforms design
information into manufacturing processes and
determines optimal sequence for machining. A
process plan generally consists of two parts: generic
data (machining method, machining sequence, and
machining strategy) and machine-specific data (tool
data, cutting conditions, and tool paths). A two-
layer hierarchy is, therefore, considered suitable to
separate the generic data from those machine-
specific ones in Web-DPP. This concept is shown in
Figure-2.
Other than SP and OP, machining features and
function blocks are two crucial concepts adopted in
the Web-DPP. They carry the machining process
information and go through a number of functional


modules of the system. As shown in Figure-2,
machining features such as step, pocket and hole are
first created and maintained as part of product data
by a machining feature-based design system (for
non-feature-based design systems, a third-party utility tool is needed for feature recognition). The
tasks of Web-DPP are divided into two groups and
accomplished at two different levels: shop-level
supervisory planning and controller-level operation
planning. The former handles product data analysis,
machining feature decomposition, setup planning,
machining process sequencing, jig/fixture selection,
machine selection, etc. The latter focuses on the
detailed working procedures for each machining operation, including cutting tool selection, cutting
parameters assignment, tool path planning, and NC
control code generation. Between supervisory
planning and operation planning, scheduling
functions can be integrated by means of function
blocks. Because of the two-level structure, the
decision-making in Web-DPP becomes distributed
in terms of (1) timing (supervisory planning in
advance vs. operation planning at runtime) and (2)
location (supervisory planning in one engineering
workstation vs. operation planning within many
machine controllers). The separation of decisions
also makes the high-level process plans generic and
portable to alternative machines. In other words, since a final process plan is generated adaptively at runtime by the machine controllers, there is no need
to generate redundant alternate process plans, thus
resulting in reduced unnecessary re-planning effort
and machine waiting time.



Figure 2 Staged decision making in Web-DPP
3. SYSTEM ARCHITECTURE
The Web-DPP is not limited to process planning. It
also handles job dispatching and job execution
monitoring at shop floor and machine levels. Such
functionalities are designed into a Wise-ShopFloor
framework (Wang, 2008), as shown in Figure-3,
where the Web-DPP in the Logic Container shares
information with other decision modules.



Figure 3 Web-DPP as part of Wise-ShopFloor framework
Facilitated by the Wise-ShopFloor, availability of
machining resources and their current status are
made available for dynamic scheduling, which in turn helps the Web-DPP with job dispatching.
The detailed system architecture of Web-DPP is
shown in Figure-4, where supervisory planning,
execution control and operation planning are the
three major components. It is the execution control that looks after job dispatching (in units of setups), based on the up-to-date scheduling and monitoring data and the availability of machines.
In this research, we neglect the feature-based
design and feature recognition at the product design stage, on the assumption that machining
features are already available in product data. They
are either created directly by using a feature-based
design system or recognised by a third-party feature
recognition solution.
During supervisory planning, a generic setup plan
can also be created by grouping machining features
according to their tool access directions (TAD). The
generic setup plan is for 3-axis machines as they
form the basic configuration of machine tools in a
typical machining shop. Necessary setup merging
for 4- or 5-axis machines is conducted during the
execution control and before job dispatching to best
utilise the capability of the higher-end machines.
(Setup merging is beyond the scope of this paper
and will be reported separately.) Decision making at


different stages is supported by the networked
knowledge bases and databases (at the bottom in
Figure-4).
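As an illustration of the generic setup planning step, the sketch below groups machining features by their tool access direction; the feature names and TADs are invented for the example, and the real system's grouping also applies geometry-reasoning rules not shown here.

```python
# Minimal sketch of generic setup planning by tool access direction (TAD):
# features sharing a TAD are grouped into one setup for a 3-axis machine.
# Feature names and TADs are hypothetical example data.
from collections import defaultdict

features = [
    ("F1_step", "+Z"), ("F2_pocket", "+Z"), ("F3_hole", "+Z"),
    ("F4_pocket", "-X"), ("F5_slot", "-X"),
]

setups = defaultdict(list)
for name, tad in features:
    setups[tad].append(name)  # one generic setup per tool access direction

for i, (tad, group) in enumerate(sorted(setups.items()), start=1):
    print(f"Setup {i} (TAD {tad}): {group}")
```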
4. SYSTEM ANALYSIS
The system analysis of the Web-DPP is carried out
using IDEF0. The three core components of the
Web-DPP in Figure-4 are modelled in IDEF0 in
Figure-5, together with their inter-relationship and
data/information flow, where M1 to M5 represent
human, computer, network, security and machine,
respectively.
Meta function blocks (MFBs) are used in this
research to encapsulate machining sequences (of
setups and machining features), and are the output
of supervisory planning. As its name suggests, an MFB only contains generic information about
process planning of a product. It is a high-level
process template, with suggested cutter types and
tool-path patterns, for subsequent manufacturing
tasks. (Readers are referred to (Wang et al., 2009)

for more details about function blocks.)

Figure 4 System architecture of Web-DPP with combined browser/server functionality

Figure 5 IDEF0 model of Web-based distributed process planning

Execution
function blocks (EFBs) are the function blocks that
are ready to be downloaded to a specific machine.
Basically, an EFB can be created by instantiating a
series of MFBs associated with a task. Each task
corresponds to its own set of EFBs, so that the
monitoring functions can be conducted for each task
unit. The structure of an operation function block (OFB) is the same as that of an EFB. However, an OFB specifies and completes an EFB with machine-specific details about a machining operation. Moreover, operation planning can override and update the actual values of variables in the EFB, so as to make it locally optimised and adaptable to various events that occur during machining operations. We use the two different terms EFB and OFB to distinguish a given function block, because they are two separate entities with different levels of detail in their contents, fulfilling different levels of execution, residing in different systems, and, moreover, possibly deployed in physically distributed controllers.
In other words, a function block holds a set of
pre-defined algorithms that can be triggered by an
arriving event to the function block. A decision can
be made by executing an appropriate algorithm. The
relationship between function blocks and machining features is depicted in Figure-6.
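The following minimal Python sketch illustrates this event-triggered behaviour: a function block holds named algorithms that an arriving event fires against the block's internal data. The class and event names are invented for illustration and do not reproduce the authors' function block implementation.

```python
# Sketch of an event-driven function block: an arriving event triggers one
# of the block's pre-defined algorithms. Names are illustrative only.
class FunctionBlock:
    def __init__(self, name):
        self.name = name
        self.algorithms = {}   # event name -> algorithm (callable)
        self.data = {}         # internal variables

    def on(self, event, algorithm):
        self.algorithms[event] = algorithm

    def fire(self, event, **inputs):
        self.data.update(inputs)            # event data updates variables
        algo = self.algorithms.get(event)
        return algo(self.data) if algo else None

# A pocket-roughing block that reacts to initialise and run events
pocket_fb = FunctionBlock("PocketRoughing")
pocket_fb.on("INIT", lambda d: f"initialise pocket {d.get('feature_id')}")
pocket_fb.on("RUN", lambda d: f"cutting at feed {d.get('feed')} mm/min")

print(pocket_fb.fire("INIT", feature_id="F2"))
print(pocket_fb.fire("RUN", feed=800))
```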
4.1. SUPERVISORY PLANNING
An incoming main manufacturing plan (from a
higher-level production planning system) triggers

the SP to generate a machining sequence plan of a given product. Feature-based reasoning is applied in the SP by considering only regular manufacturing resources (e.g. 3-axis machines, regular fixtures and cutters, etc.), the generic knowledge base of machining technology and the manufacturing constraints. The generated machining sequence plan is passed to a function block designer and packed as networked meta function blocks (Figure-4). Details of the internal structure and data flow of the SP are depicted in Figure-7. Within the SP, the function block designer is responsible for defining new function block types, specifying task-specific algorithms for each function block type, and mapping machining features to meta function blocks according to the generated machining sequences.

Figure 6 Information evolution from machining features to function blocks (FBs)

Figure 7 IDEF0 model of supervisory planning
4.2. EXECUTION CONTROL
Scheduling information and monitoring events are
integrated into and handled by the execution control
module, which makes it an important integration
point of data, activities and decision making of the
Web-DPP system. The functionalities of execution
control include setup merging (on a 4- or 5-axis
machine), event handling, job (EFB) dispatching,
and execution monitoring of an FB, as shown in
Figure-8. FB monitoring is facilitated by triggering
an FB-embedded algorithm that can send the current
machining status (including feature ID, machine ID,
cutting conditions, job completion rate, etc.) back to
the execution control module. Such information is
crucial for dynamic scheduling and job dispatching
of the next batch according to the availability of the
machines on the shop floor.
4.3. OPERATION PLANNING
OP is a real-time execution module of operation
function blocks (OFBs). It not only specifies and
optimises the process plans received from the SP
(i.e. cutting tool selection, machining sequence
optimisation, machining parameters selection, and
tool path generation), but also executes the OFBs
one by one, dynamically, in an execution engine
(the Executor). In this way, the operation planning
process on a machine controller can be truly
adaptive, which means it can dynamically modify
its process plan according to the dynamics of the
actual machining process. As most legacy CNC
controllers are of closed architecture, they cannot
recognise function blocks yet. Our implementation and testing are carried out in an open-architecture
CNC controller. Moreover, in order to utilise the
legacy machines already installed on a shop floor,
conventional G-code can be generated directly by
our function blocks. Details of the OP are given in
Figure-9.
5. PROTOTYPE IMPLEMENTATION
As part of the Wise-ShopFloor framework, Web-
DPP adopts the same browser-server architecture
and VCM (view-control-model) design pattern with
built-in secure session control and data protection
(see Figure-3).

Figure 8 IDEF0 model of execution control

The proposed solution for meeting
both the user requirements of rich visual data
sharing and real-time constraints is listed below:
- Use interactive scene graph-based Java 3D models for visualisation;
- Provide users with a browser-based graphical user interface for process planning;
- Deploy the major planning and control logics in a secure application server.
Figure-10 depicts the package diagram for Web-
DPP implementation. The eight system modules are
grouped into supervisory planning, execution
control and operation planning, to fulfil the desired
functionalities as depicted in Figure-5. The system
modules are accessible via the dedicated user
interfaces. Figure-11 shows one snapshot of the system for web-based distributed process planning.

Figure 9 IDEF0 model of operation planning

Figure 10 Package diagram for Web-DPP implementation

The prismatic test part included in the figure is also
used in the case study in the next section to
showcase the capability and validate the feasibility
of the Web-DPP concept.


Figure 11 Setup grouping and process sequencing
6. A CASE STUDY
The test part shown in Figure 11 consists of 14
machining features. After applying the 5 geometry-
reasoning rules (Wang et al., 2006), the 14
machining features are grouped into two setups and
a critical machining sequence is generated for each
setup, which mainly considers datum references and
manufacturing constraints at this stage. The non-critical machining features remain in parallel (e.g. F11–F14 in Figure-11), whose machining sequence will be determined at a later stage by the controller-level operation planning.
Setup-1 of the test part is then mapped to a
composite function block (CFB) as shown in
Figure-12, consisting of seven basic function blocks
(BFBs), each representing one type of machining
feature. The required machining sequence for the
test part is now represented by the event flow
among the BFBs corresponding to the cutting of each machining feature. Note that the same FB can
be called more than once to machine the same
machining feature type, e.g. the four holes on the
top surface of the test part. In this research, setups
or CFBs are the units of job dispatching to the
machines available at the moment. Once dispatched
to a chosen machine, detailed operation planning
takes place to specify machine-specific operations,
including cutter ID and machining parameters. In
the current implementation, this is carried out in a
front-end computer of the machine due to limited
access to the controller with closed architecture.
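A small sketch of this instantiation step is given below: a dispatched EFB is completed into an OFB by filling in machine-specific values. The parameter names and the selection rule are invented for illustration and are not taken from the Web-DPP implementation.

```python
# Hedged sketch: completing a dispatched execution function block (EFB)
# into an operation function block (OFB) with machine-specific details.
# Parameter names and the simple selection rule are invented examples.
def to_ofb(efb, machine_db, machine_id):
    machine = machine_db[machine_id]
    ofb = dict(efb)                      # an OFB shares the EFB structure
    ofb["machine_id"] = machine_id
    ofb["cutter_id"] = machine["cutters"][efb["feature_type"]]
    ofb["spindle_rpm"] = int(machine["max_rpm"] * 0.8)  # local optimisation
    return ofb

machine_db = {"M1": {"cutters": {"pocket": "T07"}, "max_rpm": 12000}}
efb = {"feature_id": "F2", "feature_type": "pocket", "tool_path": "zigzag"}
print(to_ofb(efb, machine_db, "M1"))
```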
Figure-13 illustrates how this is done during the
operation planning. It shows the final machining sequence, the setup formation, and the optional G-code for legacy machines, all derived by the embedded algorithms of the function blocks; each line of the sequence and setup listings cuts one corresponding
machining feature. In the future when they can be
recognised by CNC controllers, the function blocks

will be called directly instead of sending G-code, which would give the controllers more flexibility for adaptive machining.

Figure 12 Machining sequence embedded in a CFB
As mentioned earlier, the Web-DPP is also
designed for execution monitoring by triggering one
algorithm of the function block that is in running
status. As illustrated in Figure-14, the cutting
conditions of a 3-side pocket and the current cutter
location (x, y, z) are displayed together with the job
completion rate (running progress) of 64%. This
added feature provides a production engineer with a
holistic view of the shop floor if every machine is
networked and online. The information retrieved in


real-time can largely help in dynamic scheduling,
job routing, line balancing, and shop floor execution
control. The machined part (slightly different from
what is shown in Figures-11 and 14, but with
surrounding material for quick fixturing) using a 5-
axis machine and function block generated G-code
is demonstrated in Figure-15.
7. CONCLUSIONS
This paper presents a Web-based approach for
distributed process planning (or Web-DPP) in a
dynamic manufacturing environment, particularly
for job-shop machining operations. The advantage
of such a system is the adaptive decision making to
unpredictable changes. The Web-DPP is designed
as a part of a Wise-ShopFloor system to separate
generic information from machine-specific ones.
The novelty of this work can be summarised as:
- Two-layer system architecture for distributed decision making;
- Feature-based reasoning for machining sequence determination;
- Function blocks for controller-level operation planning; and
- Closed-loop information flow for scheduling and job dispatching via real-time monitoring.

[Figure content: machining sequence, setup 1, cutter data, cutting parameters, optional G-code generation]
Figure 13 Operation planning by function block-embedded algorithms

[Figure content: DPP execution control connected to a CNC machine via Ethernet]

Figure 14 Real-time function block execution monitoring



Figure 15 Function block-enabled machining
Our future work will focus on new algorithm
development, functionality enhancement and testing
using real-world cases via open-architecture CNC controllers in a dynamic environment. Integration
with a third-party scheduling system and a more
sophisticated feature-parsing system is also under
investigation, the results of which will be reported
separately.
REFERENCES
Alting L and Zhang H, Computer Aided Process
Planning: the State-of-the-Art Survey, International
Journal of Production Research, Vol. 27, No. 4, 1989,
pp 553-585
Arezoo B, Ridgway K and Al-Ahmari AMA, Selection
of Cutting Tools and Conditions of Machining
Operations Using an Expert System, Computers in
Industry, Vol. 42, 2000, pp 43-58
Azab A and ElMaraghy HA, Mathematical Modeling
for Reconfigurable Process Planning, Annals of the
CIRP, Vol. 56, No. 1, 2007, pp 467-472
Boogert RM, Kals HJ and van Houten FJ, Tool Paths
and Cutting Technology in Computer-Aided Process
Planning, International Journal of Advanced
Manufacturing Technology, Vol. 11, 1996, pp 186-197
Denkena B, Shpitalni M, Kowalski P, Molcho G and
Zipori Y, Knowledge Management in Process
Planning, Annals of the CIRP, Vol. 56, No. 1, 2007,
pp 175-180
Devireddy CR and Ghosh K, Feature-based Modeling
and Neural Network-based CAPP for Integrated
Manufacturing, International Journal of Computer
Integrated Manufacturing, Vol. 12, No. 1, 1999, pp
61-74
Gu P, Balasubramanian S and Norrie DH, Bidding-
based Process Planning and Scheduling in a Multi-
Agent System, Computers & Industrial Engineering,
Vol. 32, No. 2, 1997, pp 477-496
Lim T, Corney J, Ritchie JM and Clark DER,
Optimizing Tool Selection, International Journal of
Production Research, Vol. 39, No. 6, 2001, pp 1239-
1256
Lutters D, Wijnker TC and Kals HJJ, Information
Management in Process Planning, Annals of the
CIRP, Vol. 48, No. 1, 1999, pp 385-388
Márkus A, Váncza J and Kovács A, Constraint-based
Process Planning in Sheet Metal Bending, Annals of
the CIRP, Vol. 51, No. 1, 2002, pp 425-428
Monostori L, Viharos ZJ and Markos S, Satisfying
Various Requirements in Different Levels and Stages
of Machining Using One General ANN-based Process
Model, Journal of Materials Processing Technology,
Vol. 107, 2000, pp 228-235
Newman ST and Nassehi A, Machine tool Capability
Profile for Intelligent Process Planning, Annals of the
CIRP, Vol. 58, No. 1, 2009, pp 421-424
Ong SK and Nee AYC, Fuzzy-set-based Approach for
Concurrent Constraint Setup Planning, Journal of
Intelligent Manufacturing, Vol. 7, No. 2, 1996, pp
107-120
Park MW, Rho HM and Park BT, Generation of
Modified Cutting Condition Using Neural Network for
an Operation Planning System, Annals of the CIRP,
Vol. 45, No. 1, 1996, pp 475-478
Ueda K, Fujii N and Inoue R, An Emergent Synthesis
Approach to Simultaneous Process Planning and
Scheduling, Annals of the CIRP, Vol. 56, No. 1,
2007, pp 463-466
Wang L, Wise-ShopFloor: An Integrated Approach for
Web-based Collaborative Manufacturing, IEEE
Transactions on Systems, Man, and Cybernetics
Part C: Applications and Reviews, Vol. 38, No. 4,
2008, pp 562-573
Wang L, Cai N, Feng H-Y and Liu Z, Enriched
Machining Feature Based Reasoning for Generic
Machining Process Sequencing, International
Journal of Production Research, Vol. 44, No. 8, 2006,
pp 1479-1501
Wang L and Norrie DH, Process Planning and Control
in a Holonic Manufacturing Environment, Journal of
Applied Systems Studies, Vol. 2, No. 1, 2001, pp 106-
126
Wang L, Song Y and Gao Q, Designing Function
Blocks for Distributed Process Planning and Adaptive
Control, Engineering Applications of Artificial
Intelligence, Vol. 22, No. 7, 2009, pp 1127-1138
Xirouchakis P, Kiritsis D and Persson J-G, A Petri net Technique for Process Planning Cost Estimation,
Annals of the CIRP, Vol. 47, No. 1, 1998, pp 427-430
Yeo SH, Ngoi BKA and Chen H, Process Sequence
Optimization Based on a New Cost-tolerance Model,
Journal of Intelligent Manufacturing, Vol. 9, 1998, pp
29-37
Zhang F, Zhang YF and Nee AYC, Using Genetic
Algorithms in Process Planning for Job Shop
Machining, IEEE Transactions on Evolutionary
Computation, Vol. 1, No. 4, 1997, pp 278-289
Zhang Y, Feng SC, Wang X, Tian W and Wu R, Object
Oriented Manufacturing Resource Modeling for
Adaptive Process Planning, International Journal of
Production Research, Vol. 37, No. 18, 1999, pp 4179-
4195

SIMULATION AIDED DEVELOPMENT OF ALTERNATIVES FOR IMPROVED
MAINTENANCE NETWORK
Azrul Azwan Abdul Rahman
Technische Universität Berlin
rahman@mf.tu-berlin.de
Pinar Bilge
Technische Universität Berlin
bilge@mf.tu-berlin.de
Günther Seliger
Technische Universität Berlin
seliger@mf.tu-berlin.de
ABSTRACT
The success of organizations operating in complex environments depends on how well their value
chain can adapt to disruptions caused by unanticipated events. Building this resilience requires the capability to identify uncertainties and model their impact on operations. Achieving this effectively is very difficult. Thus, increasing resilience in maintenance and repair networks calls for an
adequate approach to address uncertainties. It is necessary to consider the maintenance activities
within and outside the company and those affecting all supplier partners of equipment. This paper
presents a comprehensive analysis, a potential approach to model their impact and alternatives to
increase the flexibility of the network to ensure profitability and continuity.
KEYWORDS
Simulation, Maintenance, Network

1. INTRODUCTION
Organizations involved in the business field of Maintenance, Repair and Overhaul (MRO) are more strongly affected by unsteady incoming work orders from scheduled and voluntary maintenance than product manufacturers.
The complexity of MRO activities is indicated by
highly fluctuating work volumes between the
maintenance orders, different disassembly and
assembly depths, unplanned express orders as well
as differentiated qualification levels of the workers.
The differences in the condition of operating products and the existence of several workshops in a maintenance and repair network for an MRO organization further exacerbate the situation.
Therefore, to guarantee on-time delivery and
short turnaround times, despite the capacitive, logistical and order-specific conditions, a reasonable
sequencing in the context of maintenance planning
and control is needed.
For the realization of the sequencing, suitable scheduling and priority rules are required. There is a wide variety of tools to support the analysis of complex systems such as those involving maintenance, from simple static spreadsheet-based tools to those incorporating more sophisticated simulation technology.
In order to solve sequencing problems, Nyhuis and Hartmann (2010) recommend the use of modelling and simulation tools for the validation of a suitable rule. Models emphasise the main features of a
system to clarify interrelationships and ensure
transparency (VDI 2893, 2006). Simulation tools
are widely used for manufacturing systems as well
as services, defence, healthcare and public services
(Jahangirian, 2010).
Simulation is defined as experimentation with a simplified imitation of an operating system as it progresses through time, for the purpose of better understanding and/or improving the whole system
(Robinson, 2004). Simulation techniques have the
capability to analyse the performance of any
operating system without affecting the real system.
This paper is based on an industrial case study in
a cleaning and waste management service company
for vehicular maintenance. The services provided by
the company range from the punctual emptying of
refuse bins to ecological waste utilisation and
disposal in its own plants as well as street cleaning
and winter road maintenance.


2. MAINTENANCE NETWORK
In the past decade, various maintenance strategies
have been proposed for complex systems. In
summary, the European Federation of National
Maintenance Societies defines maintenance as the
combination of technical, administrative and
managerial actions during the lifecycle of a product
intended to retain or restore it to a state in which it
can perform its required function (Klemme-Wolff,
2009).
The main features of complex systems include
business processes, their organisation, the resources
used, and the outcomes. Seliger (2007) introduces
the factors of value creation networks as product,
processes, equipment, organisation and people.
Vehicles represent the products in the considered maintenance and repair network. Business processes
cover all MRO activities, whether preventive,
predictive, proactive or corrective. Resources and
materials are used as equipment in the workshops.
Replacement materials are obtained on demand
from a centralised warehouse. The organisation,
planning and control in and between the workshops
entail progressive detailing and the performance of
respective MRO processes. The qualification level,
number of employees, their knowledge and working
habits, relationships, and absenteeism considerably
influence the performance of the MRO network.
Based on the performance requirements of the
network, more specific maintenance planning can
be carried out. A comprehensive maintenance and
repair network addresses all aspects of MRO, from
preventive to corrective maintenance.
The performance of such a complex system with
operating and maintenance activities is determined
by the reliability and availability of the system
components depending on time and costs.
Time is one of the key performance indicators here, including operating time, daily overhaul time, periodic maintenance time, condition-based maintenance time and additional run-to-failure time of vehicles (Rajpal, 2006).
The operating time is generally recorded as
cumulative working time of the product since the
last overhaul and gains profit for the service
provider company.
The daily overhaul time consists of services and
inspections conducted daily before the first
operation of the vehicle and after the completion of
daily assignments. These routine works include
small inspections and cleaning before a failure is caused, and should not entail any expense (Wireman,
1990).
Periodic maintenance is a preventive method with
predetermined plans and schedules for MRO
activities to keep a product in stated working
condition through the process of checking and
reconditioning (Sharma, 2011).
Condition based maintenance is a predictive
approach. It implements modern measurement tools
and signal processing methods proactively to
diagnose the condition of the vehicle during the
operation time and optimise the maintenance
intervals.
Preventive and predictive maintenance incur
costs based on replacement materials, lost
operations, workforce and materials such as rags and
lubricants for MRO activities (Salonen, 2011).
Corrective maintenance comprises immediate, unplanned and unscheduled activities in response to failures or deficiencies, returning the product to a defined operating state. These are caused by components with random failure distributions and no measurable deterioration, or by infeasible or poorly performed preventive measures.
Additional costs occur in breakdown times related
to scrap, rework or overtime for recovery.
3. DESCRIPTION OF THE AS-IS STATE
The business of collecting waste and cleaning the
streets throughout the year requires an effective
fleet with different kinds of vehicles operating in a
predefined area.
The vehicles' condition is affected by several
factors such as type, number, age, and arrangement
of components in the vehicle. The operating and
environmental conditions including operating
personnel, working habits, and safety measures also
impact the wear (Ebling, 1997).
The steps in running the MRO activities depend
on the conditions of the vehicle and vary in
duration, required workforce and equipment. To
prevent unexpected large failures, some
maintenance activities and services are collected
preventively in scheduled overhaul sets and run
regularly on the vehicles.
The MRO activities are processed in workshops
with different repair stations. This aspect of the maintenance and repair presents a hybrid flow sequencing problem with a pre-assignment of vehicles to repair stations.
The objective of the study is to balance the
volume of orders utilising the capacity of the
network in order to minimize the throughput time of
maintenance orders. To solve it, different simulation
alternatives have been applied.
3.1. PROCESS FLOW OF MRO ACTIVITIES
The vehicle fleet contains more than 1,600
vehicles enabling the cleaning activities and waste
management. There are 34 vehicle types, such as garbage trucks, rinsing vehicles, collection vehicles and road sweepers.
The main target of the vehicles is to fulfil their
tasks to clean and dispose in their predefined area.
Each vehicle of the fleet is assigned to a location in
the network according to its operation area. That
means an operating vehicle starts its daily tour from a specific station and returns to the same location and parks there when its work shift is over.
Some small breakdowns lead to temporary
interruption of the daily tour, e.g. flat tire. These are
repaired either by the driver himself or by a mobile
MRO workshop promptly without the need to make
a maintenance order.
After the assignment is finished, some small overhaul activities are completed; these are part of the daily business and do not require a maintenance order.
A new preventive maintenance order is placed
after the daily tour in the event of a scheduled
maintenance check. In this case, the vehicle is
checked for further damages which do not interrupt
the daily assignment, but can be repaired during the
MRO activities.
Damages or breakdowns of the vehicle reported
by the driver and his team or found during the daily
overhaul lead to a corrective maintenance order, if
the vehicle is not available for the next shift.
In this case, the vehicle will be moved from the
parking area to the MRO area which is also called a
maintenance and repair workshop, hereinafter
referred to as workshop. Around 24 operating and
parking areas are assigned to 14 workshops which
are the first point to receive vehicles with MRO
requirements.
The municipal maintenance and repair network, developed in 1951, consists of two main workshops (MW) and 12 small workshops (W) distributed across the state.
The small workshops are able to handle simple
repairing activities specifically for their certain
types of vehicles. The main workshops differ from
small workshops by offering a large spectrum of
preventive maintenance services (scheduled) and
corrective repair activities (voluntary). If preventive
maintenance or more severe corrective maintenance
is required, the vehicles are moved from a small
workshop to one of the main workshops and return
after the completion of the MRO activities to its
home workshop and hereafter to its home area.
A workshop is full when all of its repair stations are occupied and the staff are working on vehicles. This means the next incoming vehicle will queue at an already occupied repair station or will be moved from the parking places to wait in a designated buffer.
The capacity of these buffers is also limited and if
they are also full, the vehicles must wait in the
parking area of the operating area.
Waiting in any parking area to be maintained or
repaired results in an overall increase in non-
operating time for one of these vehicles. Such
waiting times are to be minimized for better
performance of the service business. Therefore an
optimal utilization of the network is needed. The
current network works without any predefined
priorities, but workshop staff and management self-adjust, based on the repair portfolio of other workshops and their personal relations to colleagues there, to expedite some time-critical orders.
3.2. DATA ANALYSIS
The general procedure of modelling and simulation has been used as a basis for performing this study. A workflow for the case study is shown in Figure 1, starting from defining the problem and objective through to the interpretation of the results. Once the problem and objective have been defined, the next and most important step is data mining and system analysis.


Figure 1 A workflow for the case study
For the analysis of the network and process flow
and the case study, the data of the MRO network
from 2009 is taken. The volume of orders is
analysed in detail for the current maintenance and
repair network. In order to simplify the complexity
of the case study, a survey of some system features
is helpful.
This can be done with the value creation factors
introduced in chapter 2 using the question method:
where (workshop), what (vehicle, and maintenance
order), how (MRO activities), when (date), and who
(employees).
Within one year, approximately 39,000
maintenance orders are received and serve as input
data for the simulation case study. An excerpt of the input data is shown in Table 1.


Table 1: Example of used inputs for data analysis

Location | Maintenance order number | Maintenance order type | Vehicle type | Order item | Service description | Date | Working time (dd:hh:mm:ss)
MW1 | 52840765 | Preventive | Garbage truck (Type 19) | 001 | Full inspection | 12.01.2009 | 01:11:33:44
    |          |            |                         | 002 | Replacements | 12.01.2009 | 00:08:26:01
W8  | 52846969 | Corrective | Collection vehicle (Type 1) | 001 | Installation auxiliary heating | 12.01.2009 | 00:02:37:43
MW2 | 52840773 | Corrective | Road sweeper (Type 18) | 001 | Small inspection | 13.01.2009 | 00:05:51:18
    |          |            |                        | 002 | Replacement brake light | 13.01.2009 | 00:02:37:43
    |          |            |                        | 003 | Replacement gear system | 14.01.2009 | 00:13:23:58
    |          |            |                        | 004 | Deletion of failure memory | 14.01.2009 | 00:00:09:32
... | ...      | ...        | ...                    | ... | ... | ... | ...
MW1 | 52841420 | Preventive | Collection vehicle (Type 1) | 001 | Replacement brake system | 12.01.2009 | 01:07:16:23
W3  | 52847738 | Corrective | Garbage truck (Type 19) | 001 | Replacement of lubricants | 13.01.2009 | 00:02:12:35
    |          |            |                         | 002 | Filter cleaning | 13.01.2009 | 00:01:53:51
    |          |            |                         | 003 | Replacement alternator | 14.01.2009 | 00:03:06:26
    |          |            |                         | 004 | Replacement wipers | 14.01.2009 | 00:01:17:34
MW1 | 52842108 | Corrective | Rinsing vehicle (Type 5) | 001 | Replacement of lubricants | 13.01.2009 | 00:02:33:44

"Location" shows the workshop in which MRO
activities proceed. The "maintenance order number"
is the ordinal number of failure. "Maintenance order
type" differentiates between corrective or preventive
maintenance and vehicle type classifies the
vehicle handled during the order.
One maintenance order consists of many order
items followed by the description of the performed
service (service description). In some orders,
corrective or preventive items are listed together if
some unscheduled activities caused by damages are
completed in the same order.
"Date" defines the exact time (year, month, and
day) of failure and an order item is completed. The
duration of a maintenance order is defined as the
interval between the earliest and latest date of its
order items.
"Working time" is the value-adding real working time of the workshop staff to fulfil the required
service. The fifth entry in Table 1 is explained here in detail for a better understanding of the process flow: a failure of a garbage truck was reported in January 2009 and this vehicle of vehicle type 19 was not able to operate on the next day. The corrective maintenance order 52847738 was received on January 13th in the small workshop W3. Four order items were identified by the staff of W3 and ranked according to the feasibility of the small workshop. Its staff spent 2 hours 12 minutes to replace the lubricants and 1 hour 53 minutes to clean the filters on the same day. All four items were completed in two work days and the real working time of the workshop staff on the vehicle amounted to only 8 hours 28 minutes.
3.2.1. Performance Figures
The order frequency indicates the number of
MRO orders as recorded breakdowns per year.
According to VDI 2893 Mean Time To Repair
(MTTR) stands for the average breakdown time per repair. It shows the ratio of the total amount of breakdown time ($t_{failure}$) divided by the number of recorded breakdowns ($x_{failures}$) per year:

$$\mathrm{MTTR}_{Vehicle} = \frac{t_{failure}}{x_{failures}} \quad (1)$$

Mean Time Between Repair (MTBR) is the
average time between repairs. It consists of the total
operating time ($t_{operating}$) divided by the number of recorded breakdowns per year:

$$\mathrm{MTBR}_{Vehicle} = \frac{t_{operating}}{x_{failures}} \quad (2)$$

Mean Time Between Failures (MTBF) is the
average running time of the MRO network between
breakdowns:

$$\mathrm{MTBF}_{Vehicle} = \frac{\mathrm{MTTR}_{Vehicle} + \mathrm{MTBR}_{Vehicle}}{x_{failures}} \quad (3)$$
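A short numeric sketch of formulas (1)-(3) is given below; the downtime, uptime and failure-count values are invented example numbers.

```python
# Numeric sketch of formulas (1)-(3); the example values are invented.
def mttr(t_failure, x_failures):       # (1) average breakdown time per repair
    return t_failure / x_failures

def mtbr(t_operating, x_failures):     # (2) average operating time per repair
    return t_operating / x_failures

def mtbf(mttr_v, mtbr_v, x_failures):  # (3) as defined in this paper
    return (mttr_v + mtbr_v) / x_failures

t_fail, t_op, x = 120.0, 1800.0, 24    # hours down, hours up, failures/year
print(mttr(t_fail, x))                 # 5.0 h per repair
print(mtbr(t_op, x))                   # 75.0 h between repairs
print(mtbf(mttr(t_fail, x), mtbr(t_op, x), x))
```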


3.2.2. Classification of maintenance orders
Due to large differences between the reported working times, it is not suitable to use the mean of
working times for the modelling of the considered
MRO activities.
Thus, to get a realistic output from the modelling,
working times for each vehicle type are separated
into corrective and preventive orders and analysed
to provide occurrence probabilities. Therefore five
time classes are defined for each vehicle type. The
lower limit l for the first class is the minimum
duration of the reported working times:

$$x^{l}_{j,k,1} = \min\{\, t_{repair,j,k} \,\} \quad (4)$$

Here, j denotes the vehicle type, t stands for the specific working time and k indicates whether the reported time is for corrective or
preventive orders. Index 1 stands for the first time
class. The upper limit h for the fifth class is the
highest duration of the observed working times
within the considered vehicle group:

$$x^{h}_{j,k,5} = \max\{\, t_{repair,j,k} \,\} \quad (5)$$

The width of the specific time classes of each
vehicle group and their appropriate lower and upper
levels are calculated as follows:

$$x_{j,k} = \frac{1}{5}\left( x^{h}_{j,k,5} - x^{l}_{j,k,1} \right) \quad (6)$$

The reported working times for each vehicle type
are sorted in descending order and assigned to the time classes as defined in formulas (4)-(6). A working time is assigned to a class if it is smaller than the upper level but higher than the lower level of a time class. Afterwards, the number of observations in each time class is counted and set in relation to the total amount of observations for each vehicle
type to obtain their relative share. This relative
share, which is now considered as the occurrence
probability of the average mean of each time class,
is an appropriate dimension to get a realistic output
from the modelling.
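The classification procedure of formulas (4)-(6) can be sketched as follows; the sample working times are invented, and ties at the upper class border are resolved by clamping to the fifth class.

```python
# Sketch of the five-class working-time classification (formulas (4)-(6))
# and the resulting occurrence probabilities; sample times are invented.
times = [0.5, 0.8, 1.2, 2.0, 2.4, 3.1, 4.0, 5.5, 7.2, 9.9]  # hours

lo, hi = min(times), max(times)       # formulas (4) and (5)
width = (hi - lo) / 5                 # formula (6): class width

counts = [0] * 5
for t in times:
    idx = min(int((t - lo) / width), 4)   # assign to one of five classes
    counts[idx] += 1

total = len(times)
for i, c in enumerate(counts):
    mean = lo + (i + 0.5) * width     # class mean used as model input
    print(f"class {i+1}: mean {mean:.2f} h, probability {c / total:.2f}")
```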
4. MODELLING AND SIMULATION
Despite the importance of the model design phase
in the simulation process, it is very often
overlooked. In this phase, the project participants
are to be identified, the project goals are to be
clearly delineated, and the basic project plan is to be
developed. If these activities are not conducted
effectively, the model developed could be too
detailed or generic. While incorporation of detail
may increase the credibility, excessive levels of
detail may render a model hard to build, debug,
understand, and deploy. The determination of the
detail level is a primary goal of the design stage.
The preliminary work of the conceptual model
design is followed by the development of the model.
This involves choosing the modelling approach,
building the model, and doing verification and
validation of the model. The choice of approach can
make a large difference in the subsequent model
building and model execution times.
In this project, the sequencing problem for
maintenance and repair network has been modelled
using discrete event simulation with top-down
approach (Figure 2). A top-down approach is
essentially the breaking down of a system
(maintenance and repair network) to gain insight
into its compositional subsystems (workshops and
repair stations).

Figure 2 - The project view from Top-down Approach
In a top-down approach an overview of the
system is formulated, specifying but not detailing
any first-level subsystems. Each subsystem is then
refined in yet greater detail, sometimes in many
additional subsystem levels, until the entire
specification is reduced to base elements.
After determining the modelling approach, the
next step is to build the model with appropriate
system elements. Three levels of systems and
subsystems have been defined and modelled with
the discrete-event simulation software from
Siemens AG., Tecnomatix Plant Simulation. There
are maintenance and repair network levels,
workshop levels and repair stations levels. At
maintenance and repair network level, twelve small
workshops and two main workshops are modelled
based on their locations in the state (Figure 2). A
buffer is placed in front of each workshop to
represent a parking place for waiting vehicles which
are to be maintained or repaired in this workshop.
The First-In-First-Out (FIFO) strategy is used at
the buffer so that the first vehicle entering the buffer


will be the first sent to the available repair station.
The capacity of these buffers is fixed but varies with workshop size. As a result, an overload of maintenance orders in the buffers is possible. To overcome this problem, a dummy overall buffer is modelled.
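A minimal sketch of this buffering logic (hypothetical class and method names, not the actual Plant Simulation model) could look as follows:

from collections import deque

class WorkshopBuffer:
    # FIFO parking place in front of a workshop; orders that do not fit
    # are diverted to the shared dummy overall buffer.
    def __init__(self, capacity, overall_buffer):
        self.capacity = capacity
        self.queue = deque()
        self.overall = overall_buffer

    def arrive(self, order):
        if len(self.queue) < self.capacity:
            self.queue.append(order)    # wait for a free repair station
        else:
            self.overall.append(order)  # buffer full: divert to overall buffer

    def next_order(self):
        # FIFO: the first vehicle to enter is the first sent to a station
        return self.queue.popleft() if self.queue else None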
The workshops are then refined at the workshop level (Figure 3), where the number of repair stations and workers for each workshop is modelled.
At the next level, each repair station is refined considering its capacity and specification, the availability of resources and the productivity of the workers at the repair station.
The discrete-event simulation is entity-based; it
deals with entity flows rather than with single
entities. In discrete-event simulation the operation
of a system is represented as a chronological
sequence of events. Each event occurs at an instant
in time and marks a change of state in the system.
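As an illustration of this mechanism, a minimal, generic event loop can be sketched in Python (this is not the Plant Simulation implementation; all names are illustrative):

import heapq
from itertools import count

def run(initial_events, horizon):
    # Each event is a (time, action) pair; an action runs at its instant,
    # changes the system state and may return a list of follow-up events.
    tie = count()  # tie-breaker keeps heap entries comparable at equal times
    queue = [(t, next(tie), a) for t, a in initial_events]
    heapq.heapify(queue)
    while queue:
        time, _, action = heapq.heappop(queue)
        if time > horizon:
            break
        for t, a in action(time):
            heapq.heappush(queue, (t, next(tie), a))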
In this model, entities represent maintenance orders. There are two types of maintenance orders: preventive and corrective. The preventive maintenance orders are generated based on the preventive maintenance list from 2009; information such as the date of maintenance, the vehicle type and the planned workshop is listed in a data sheet similar to Table 1. Each maintenance order is created at the specified date by a generator, sent to the planned workshop's buffer and waits there until a repair station is available.
The corrective maintenance orders are created by random generators based on the MTBF, which is calculated separately for each vehicle type as shown in chapter 3.3.1. One random generator has been modelled for each vehicle type, so 34 random generators create the corrective maintenance orders independently.
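The paper does not state the assumed failure-time distribution; assuming exponentially distributed times between failures (a common choice when a vehicle type is characterised by its MTBF alone), one such generator could be sketched as:

import random

def corrective_order_times(mtbf_hours, horizon_hours, seed=None):
    # Failure instants of one vehicle type: interarrival times drawn
    # from an exponential distribution whose mean equals the MTBF.
    rng = random.Random(seed)
    t, instants = 0.0, []
    while True:
        t += rng.expovariate(1.0 / mtbf_hours)
        if t > horizon_hours:
            return instants
        instants.append(t)

# One independent generator per vehicle type, e.g. 34 in the model
orders_type_1 = corrective_order_times(mtbf_hours=400, horizon_hours=8760)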
The implementation of unplanned maintenance orders follows these rules (Figure 4): when a corrective maintenance order is created, the location of the vehicle and the responsible workshop are identified, and the availability of the responsible workshop is checked.

Figure 3 Maintenance and repair network models


If the workshop is able to receive the
maintenance order and there are available repair
stations or places at the buffer, the workshop will
accept it. The vehicle will be sent to the buffer
before reaching the repair station for maintenance
activities. Then the maintenance order will be
completed according to its order items and closed.


Figure 4 Procedure for corrective maintenance order
implemented in simulation model

If the responsible workshop cannot accept the
maintenance order, the availability of main
workshops is checked. One of the main workshops
will accept the order if it has enough capacity. The
maintenance activities will then take place at the
main workshop. In case both main workshops
cannot accept the order, it will be sent to the overall
buffer. At the overall buffer, the maintenance order
will wait until the buffer of the responsible
workshop is able to receive it.
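These rules can be condensed into a short dispatch sketch (hypothetical workshop interface with can_accept/accept methods; illustrative only):

def route_corrective_order(order, responsible, main_workshops, overall_buffer):
    # Rule 1: the responsible workshop accepts if a repair station or a
    # buffer place is available.
    if responsible.can_accept():
        responsible.accept(order)
        return
    # Rule 2: otherwise, try the main workshops.
    for main in main_workshops:
        if main.can_accept():
            main.accept(order)
            return
    # Rule 3: otherwise, the order waits in the overall buffer until the
    # responsible workshop's buffer can receive it.
    overall_buffer.append(order)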
Corrective and preventive maintenance orders are
classified during their transfer from the generator to
the workshop. Each order is randomly assigned to a class, and thus a maintenance time, based on the occurrence probabilities of the classes introduced in chapter 3.3.2.
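In code, this weighted assignment amounts to a single draw from the empirical class distribution (illustrative values):

import random

class_means = [1.0, 3.0, 5.0, 7.0, 9.0]       # mean time per class (hours)
class_probs = [0.45, 0.25, 0.15, 0.10, 0.05]  # occurrence probabilities
maintenance_time = random.choices(class_means, weights=class_probs, k=1)[0]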
In modelling for a simulation, the level of abstraction always comes into question. A low level of abstraction keeps the model close to the real system, whereas a high level of abstraction may not represent it adequately. The optimum level of abstraction is hard to define; in many cases, data availability and the duration of the study determine the achievable abstraction level.
Due to these factors, some assumptions have to be made and implemented in the model. The assumptions made for this case study are:
i. Every workplace is able to receive every type
of vehicle (resource independent).
ii. Every worker is able to perform all kinds of maintenance (capability independent).
iii. Each maintenance order involves only one
worker at a particular time. There are no
parallel activities in one maintenance order.
iv. Seasonal effects are not considered and no priority is applied for seasonal vehicles.
With these assumptions implemented in the model, certain aspects of reality are not considered. To ensure that the model still represents the real system well enough to achieve the objective of the study, verification and validation were conducted.
Verification is a determination of whether the computer implementation of the conceptual model is correct. It was conducted by following the principles of structured programming, using the interactive run controller (debugger) and monitoring the model animation.
Validation, on the other hand, is a determination of whether the conceptual model can be substituted for the real system for the purposes of experimentation. Validation of this model has been done through consistency checks, input-output transformation and comparison with historical input data.
The model was simulated for one year. The outcomes (utilisation of the workshops, utilisation of the workshop buffers and throughput time for maintenance orders) were recorded. The results of the simulation are presented and explained in the next chapter.
5. ANALYSIS OF THE OUTCOME
The simulation was run several times and the
average outcomes recorded. Figure 5 shows the
utilisation of workshops in the maintenance
network.
The value in the graph represents the mean
utilisation of all repair stations at the workshop
based on working, waiting and pausing percentages.
Working means the repair station has a vehicle to be
repaired and a worker to repair it. The repair station


is empty in the waiting mode, and the pausing mode represents breaks in the working shift.
Three workshops with high utilisation can be identified from the graph. Workshop 3 (W3), workshop 2 (W2) and workshop 8 (W8) have an average utilisation of more than 70% over the year. The remaining workshops average between 50% and 60%, so the utilisation across the network is imbalanced. Two determining factors for this imbalance have been identified.
Figure 5 Utilisation of workshops (percentages working, waiting and pausing; W = workshop, MW = main workshop) from simulation of the AS-IS state model
The first is a high ratio of preventive maintenance orders to available repair stations at certain workshops. Preventive maintenance orders, created through the scheduled list implemented in 2009, have to be handled at the planned workshop and cannot be transferred. Large numbers of preventive maintenance orders have therefore been scheduled for particular workshops without considering their capacity, which leads to high utilisation of these workshops.
The second factor behind the imbalance comes from the corrective maintenance order rules and arrangement. For corrective maintenance, the operation area of a vehicle determines the responsible workshop, and some vehicle types show high numbers of corrective maintenance orders, with higher maintenance throughput times than others.
Figure 6 shows the simulated annual number of maintenance orders and the average maintenance throughput times according to vehicle type. The assignment of the responsible workshop, combined with the corrective maintenance orders of certain vehicle types, increases the workshop utilisation.
Figure 6 Average maintenance throughput time and number of maintenance orders according to vehicle type
The next outcome from the simulation is the maintenance throughput time for each vehicle type. The times are recorded from the opening of a maintenance order until its completion. With different maintenance order classes, random maintenance times and uncertain waiting times, the maintenance throughput time varies for every vehicle type. Four to six vehicle types show longer throughput times. In order to minimise the maintenance throughput time in the whole network, these vehicle types need to be monitored closely during scheduling and the assignment of responsible workshops.
One element of the maintenance throughput time is waiting time, most of which occurs at the workshop buffers. To investigate the workshops' influence on throughput time, the utilisation of the workshop buffers is recorded as their ratio of full to empty capacity over the simulation year. In other words, the utilisation of the workshop buffers reveals the bottlenecks of the maintenance network. The recorded outcome is shown in Figure 7. Compared with the workshop utilisation, a similar trend can be found.
Workshop 3 (W3) and workshop 2 (W2) are the two workshops with high buffer utilisation; their buffers are full during nearly the whole simulation run. This leads to longer waiting times for the maintenance orders scheduled and assigned to these workshops.



Figure 7 Utilisation of workshop buffers from simulation of the AS-IS state model
6. CONCLUSION AND FUTURE WORK
This paper has presented a case study of the current state of an MRO network and its improvement potentials. The objective of the study is to balance the volume of maintenance orders by better utilising the capacity of the network. Discrete event simulation has been used to understand the system, to analyse the possible causes of the imbalanced maintenance network and to experiment on the network with different maintenance strategies, rules and scenarios.
Within the scope of this paper, a part of the study has been presented: definition of the problem and objective, data mining, system analysis, modelling of the AS-IS state, simulation, and analysis of the simulation outcomes. The data analysis provided the input for the modelling and simulation steps. The outcomes of the AS-IS simulation show that imbalances of utilisation appear in the current maintenance and repair network. The vehicle types with longer throughput times and the bottlenecks of the MRO network, which lead to these imbalances, have been identified.
As the main outcome of this paper, the analysis of the AS-IS state will be used to develop alternative network models in future work. Several alternative maintenance network models with different maintenance strategies are going to be developed and simulated. The outcomes of these alternatives will then be analysed and compared, technically and economically, with the AS-IS state and with each other. The best alternative will be chosen and its outcomes and strategies interpreted. To conclude the case study, the best strategy for the maintenance network will be proposed to the industry partner for a possible improvement.
7. ACKNOWLEDGMENTS
This case study is part of the Fraunhofer Innovation Cluster Maintenance, Repair and Overhaul in Energy and Transport (MRO). We would like to thank Hendrik Grosser and Marcus Kim for their contributions to the case study.

A FRAMEWORK FOR PERFORMANCE MANAGEMENT IN
COLLABORATIVE MANUFACTURING NETWORKS
Pedro S. Ferreira
CENI
psf@ceni.pt
Pedro F. Cunha
CENI, Instituto Politécnico de Setúbal
pcunha@ceni.pt
ABSTRACT
The framework for performance management developed in the Net-Challenge project aims at
providing a practical approach to performance management for organisations getting involved in
Collaborative Networks. The framework scope comprises objectives and strategy setting, strategy
deployment, performance measurement and evaluation, monitoring and improvement. The
framework aims at the alignment and achievement of strategic and operational business objectives
in the Virtual Organisation and in its supporting Business Community environments. The approach
relies on the identification of key stakeholders and on their key success factors which provide the
external perspective driving the performance evaluation and improvement. An important component
of the framework is the net of performance factors, the drivers of performance, which is identified
collaboratively, oriented by the external perspective so that the value for stakeholders is kept in
sight. The reference processes for both environments are proposed, connecting all the other
framework components.
KEYWORDS
Performance management, Collaborative networks

1. INTRODUCTION
Through collaborative networks (CN), member
organisations aim at delivering high performance to
their stakeholders and at sustaining competitive
advantage, by sharing knowledge and resources.
Performance management, essential to the success
of collaborative networks, requires approaches
suitable to this type of networks and to their
objectives. The evolution of organisational models,
from companies with sharp boundaries, formal
relationships with other companies and a focus on
internal efficiency and effectiveness, to networks
has a profound impact upon performance
management practices (Folan and Browne, 2005).
Though single-organisation performance management concepts and recommendations have been applied to networks and are to a great extent valid, the new challenges require dealing with a larger domain, including new processes, new stakeholders and a less clear-cut notion of what is internal and external to virtual and real organisations. Other specific issues of networks are the duration of their life and their virtual nature.
The concepts of virtual organization (VO) and virtual organization breeding environment (VBE) (Camarinha and Afsarmanesh, 2003) were used by the Net-Challenge project (Carneiro et al, 2010). A Business Community (BC), according to the Net-Challenge project, is similar to a VBE: it is mainly composed of SMEs in the same industry, usually in geographic proximity, and may be open or restricted, depending on the membership policy.
Performance management is even more important
in CN to assure the delivery of value to the
stakeholders since organisations are more loosely
connected. Moreover, since trust is a fundamental
enabler of collaboration, performance management
should contribute to trust in CN, by delivering
objective information on performance of networked
organisations and of their members. In this context,
several contributions can be found for performance
measurement considering it a way to demonstrate
the benefits of participating in CN and to promote

431

the acceptance of these organisational forms
(Camarinha and Abreu, 2007) and aiming at
achieving equity among partners (Alfaro et al,
2005).
In CN, performance management calls for
suitable approaches and processes to identify critical
factors and indicators, to formulate actions to take
advantage of opportunities or overcome weaknesses
and improve the system's performance as it is
defined by the stakeholders (Cunha et al, 2008). In
general terms, performance management is
concerned with setting and sharing the goals to be
achieved and developing and managing resources
and initiatives, in order to achieve the goals set.
Performance cannot be objectively defined; it can only have a clear definition within each specific context (Lebas, 1995). In fact, defining performance requires knowing to whom performance is delivered (Otley, 1999), which is why stakeholders are central in a properly formulated approach.
Performance management covers objectives,
strategies, performance measurement and
evaluation, monitoring, learning and improvement
(Otley, 1999). The following activities are part of a
performance management process:
1. Definition of objectives and strategy
formulation (what the organisation wants to be
good at and what strategy is chosen to get
there);
2. Definition of what to measure and targets
setting;
3. Setup of a measurement system;
4. Measurement and analysis of performance;
5. Decision and carrying out of actions to assure
targets are achieved.
Activities 2 to 4 of the previous list, which form the performance measurement process, receive inputs from the first activity and deliver outputs to the last one. Performance measurement is about collecting data about the past so that a projection into the future can be made and improvement actions can be decided.
Performance management is tightly integrated
with process design. Processes must be designed
and continuously tuned for specific objectives that
contribute to the organisation's strategy. The
alignment of processes and of collaborating
organisations and the development of suitable
performance indicators that provide objective and
explicit representation of performance and benefits
within a collaborative network are tough challenges.
Approaches like the Supply Chain Operations
Reference (SCOR) model that proposes
performance indicators for supply chains are not
oriented for collaborative processes throughout the
supply network (Camarinha and Afsarmanesh,
2008), cannot cope with the dynamics of CN and
cannot measure performance on soft factors related
to the collaboration (Lebas, 1995, La Forme et al,
2007).
Collaboration has the potential to affect positively
several performance factors such as flexibility,
agility, resources utilization, specialization,
dependence on third party, competencies
development, innovation, which have consequences
in market position, regulation, etc. (Abreu and
Camarinha, 2008, La Forme et al, 2007).
Collaborative networks may even be a survival
mechanism in face of turbulent markets due to their
implicit agility (Camarinha and Afsarmaneh, 2004).
As an example, collaborative forecasting may
enable better customer service levels or a reduction
in inventory (Holweg et al., 2005).
Flexibility or changeability in general can be
achieved mainly in the Business Community
domain since it is a long term acquisition resulting
from planning. Once a VO is set in form and
purpose, its changeability can be limited to the
accommodation of small changes or disturbances
such as changing requirements from the customer or
the reaction to unexpected events.
The benefits of collaboration may also come indirectly, through knowledge creation and sharing among the organisations in a network, affecting many different performance factors. Collaboration is at the base of the reference collaboration processes developed in Net-Challenge (e.g. collaborative planning and capacity management) to improve performance on KSF such as delivery time, sales, capacity, etc.
Besides benefits, collaboration also has costs,
related to trust building, time to achieve a common
language, systems integration, trial and failure, etc.
Hence, there are some requirements for collaboration to take into consideration in partner selection, such as competence uniqueness, coherence with the network's strategy, flexibility and adaptability, and reliability (Wiendahl and Lutz, 2002).
In the following sections of this paper the Net-
Challenge framework for performance management
is presented. First, an initial overview of the
framework and of its elements is given. The central
concepts (key stakeholder, key success factor, key
performance factor and key performance indicator)
and how they relate to each other are detailed in
sub-sections 2.1 to 2.4. The reference processes for
performance management included in the
framework are presented in sub-section 2.5.
The validation of the framework is addressed in
section 3 and the conclusions are summarised in
section 4.


2. THE FRAMEWORK FOR
PERFORMANCE MANAGEMENT
The proposed approach to performance management
aims at guaranteeing the alignment and achievement
of strategic and operational business objectives in
the Virtual Organisation and in its supporting
Business Community environments. It relies on
establishing a strategy based on key success factors
(KSF) and on identifying and cascading them
internally in alignment with the strategy. Figure 1
represents the Net-Challenge framework for
performance management in its context.

Figure 1 Components of the framework for performance management
The main components of the Net-Challenge framework for performance management are:
- a stakeholders' perspective of value, which defines what performance is (external environment);
- the interlinked factors in the CN which can be acted upon in order to change performance (internal environment);
- the reference performance management processes aiming at making the strategy succeed;
- the information system;
- the process resources;
- the communication processes (internal and with stakeholders).
Performance can be changed by taking actions in the two environments, which means in two time horizons: in the VO, depending on its lifetime, and in the BC, where members develop their capabilities, share knowledge whenever possible and try to get to know each other.
In the BC, a management process can improve the instruments related to membership and those made available to VOs to support their formation and operation, such as the standard processes, templates and specific ICT tools. A BC management process for strategy revision also weighs the actual performance and the changes in the environment and adjusts the strategy if necessary, whilst a capability improvement process and an event handling preparedness process address the improvement of the members' capabilities.
Two reference processes for performance management, in the BC and in the VO, are part of the Net-Challenge framework; they interface with the reference collaboration processes as briefly explained. In order to expedite the processes, particularly in the VO, some resources are provided: scenario templates, which characterise typical business scenarios and propose sets of factors to be monitored that are relevant in that business context; lists of factors; and the corresponding performance indicators and their definitions.
The information system collects data from VO partners and BC members as required to calculate performance indicators, conveys the evaluation of performance and feeds a central repository of information (BC member and VO profiles). Aggregated and disaggregated data allow analysing the performance of a VO, of the VO partners and of the whole BC. This system supports the search for partners based on claimed capabilities, qualified processes and actual performance. The types of information transactions are depicted in Figure 2.

Figure 2 Flow of information concerning the members' performance and profile
Internal and external communication of performance is essential to convey the BC strategy, to mobilise organisations for improvement and to reward the members, in the sense that reputation may be a member's KSF. The framework does not explicitly include a reward system: if BC members and VO partners perform well, they will be invited often. The


search for partners takes performance into
consideration. Penalties may be foreseen in the partners' agreement (VO contract). In any case, one penalty is not being invited to VOs and, in extreme cases, being excluded from the BC.
In order to speed up and guide organisations in the analysis process and in the identification of KSF, and also to clarify the concepts of the performance management framework, the framework contains examples of KSF for the BC's and the VO's stakeholders. To find the KSF, the key question is: "What are the most important requirements that the stakeholder wants from the organisation (and from other competing organisations) that will determine his evaluation or, ultimately, make him decide for one organisation?" Since it is important to establish a common understanding of the meaning of each KSF, a KSF glossary is a necessary process resource.
2.1. A STAKEHOLDERS' VALUE-BASED APPROACH
The base principle in the present approach is that
performance is determined by the stakeholders.
A key stakeholder is an entity with an interest in
the organisation's activity or in its outcomes, which
has the power to influence them considerably.
Knowing who the key stakeholders are and what
they are expecting from the organisation is the
starting point to fulfil their expectations. The key
stakeholders of the BC are BC member, VO,
Customer and Society. The VO's key stakeholders
are Broker, Partner, Customer, BC and Society.
Figure 3 represents the stakeholders and their
relations.
Even though the role of the broker (the organisation holding the business opportunity) is indispensable, the customer is considered a stakeholder in order to highlight the VO's orientation towards value creation for the customer, to emphasise the customer's specific requirements and to keep the specific role of the broker clear.

Figure 3 The BC's and VO's stakeholders
The Business Community is a VO's stakeholder since it only fulfils its potential and its mission through the VO. The VO is a BC's stakeholder by definition, since the BC must provide the conditions for the formation and success of VOs. Society is a key stakeholder of both the BC and the VO, but with different perspectives related to their different time horizons and purposes.
Stakeholders are the ones who ultimately evaluate the performance of an organisation. It is therefore fundamental to know the attributes they value most (in the product, service, job or whatever kind of deliverable) and that they expect the organisation and its competitors to provide, i.e., the success factors. The key success factors (KSF) are the most important success factors for the key stakeholders, the ones the organisation will concentrate on. The difficulty in determining the KSF lies in identifying the few things that will drive the organisation's strategy and its success. This performance management system is inherently multi-goal.
It is important to distinguish the success factors (stakeholder centred) from the factors internal to the CN (organisation or process centred), which condition the success factors and will here be called performance factors. The key success factors have to be learned by asking the stakeholders. The way an organisation satisfies the KSF will determine its competitive advantage, and for that reason the KSF are at the base of strategy formulation.
There has been no consensus concerning this
terminology. The concept of KSF is used with this
name (La Forme et al., 2007) and is also named key
strategic factor (Kenny, 2005). A related concept in
the SCOR model is the value proposition statement
which identifies the KSF for types of customer in
segmented markets (Bolstorff and Rosenbaum,
2003). Many authors do not distinguish the external
and internal perspectives when using the names
critical success factor (Kaplan and Norton, 1996,
SCC) and key performance factor (Kaydos, 1999),
among others.
The benefit concept is central in the approach to
performance measurement of the ECOLEAD project
(Camarinha-Matos and Abreu, 2007), since it is the
driver of the collaborative network behaviour.
According to those authors, the goal in a CN is the
maximization of a benefit which is an attribute of its
specific value system.
Since the KSF are related with competitiveness, it
should be noted that competition and the possibility
to choose alternatives exist both in BC and in VO
and the choice will be determined by the
performance on the KSF. As an example, an
organisation may decide to participate or not in a
BC and may be or not allowed to participate.
The identification of the most important factors that affect the key success factors, the key performance factors (KPF), enables acting on the


processes and measuring them in alignment with the strategy.
2.2 THE KEY PERFORMANCE FACTORS
A performance factor is an enabler or a constraint
that affects one or more success factors and, thus,
the performance of the organisation. The
organisation acts on the performance factor by
changing processes, methods, tools and resources.
The stakeholder has no direct interest in the performance factors and may not even know about them. For example, production flexibility is of no interest to the customer, but it may be a performance factor that affects delivery time and product mix, which are customers' KSF.
The key performance factors are those performance factors that the organisation identifies as the most important, those with the highest impact on other factors, requiring priority in monitoring and improvement. The name is also used by other authors (Kaydos, 1999; La Forme et al., 2007), though Kaydos does not limit it to internal factors.
The analysis of the KSF and the determination of the corresponding KPF require a systematic cause-effect analysis involving diverse points of view from the partnering organisations. The concepts of the causal model of Lebas (1995), strategy deployment (Kaydos, 1999), the Hoshin Kanri method and other related approaches are of interest for this purpose. Like the balanced scorecard (BSC), this method allows linking the strategy with the organisation's internal factors or processes and with the performance indicators. Furthermore, by determining the KPF, an organisation is answering the question "What must we do in order to satisfy the expectations of our stakeholders?", formulated by Otley (2007) to link the drivers of performance with the stakeholders and to extend the BSC; in the present framework, these are naturally linked. Moreover, the process of identification and definition of KSF and KPF contributes to creating a common language within the Business Community.
As the map of cause and effect is built, one goes from success factors that the stakeholder asks of the VO or broker to enablers that are planned and achieved long before, within the Business Community. The consideration of a time dimension and a time scale puts some performance factors outside the time scope of the VO and reveals some success factors that the VO expects from the BC. The perspective of the future inherent in the KPF should be emphasised: the KPF are causal factors of the KSF and thus lie further back in time. The indicators that measure the KPF are leading indicators of performance on the KSF.
In order to speed up the analysis, a list of the KPF found to be most relevant in the context of collaborative networks was also included in the framework to support the proposed KSF. It resulted from a cause-effect analysis, starting from each KSF and identifying its main KPF successively by repeating the key question: "What are the factors that have a major impact on this KSF or KPF?" This generic question, or Otley's variant, leaves the scope of the analysis unconstrained; however, the analysis may be limited to a specific area or macro-process.
The distinction between KSF and KPF and the open scope of this framework distinguish it from others such as Hon's (2005), which specifically targets manufacturing systems and includes both KPF and KSF in the five groups of metrics proposed.
The search for flexibility, or changeability, as Wiendahl et al. (2007) name the general characteristic, is an important driver to form CN, which are agile by nature, as pointed out above. Thus, changeability appears naturally as a KPF supporting one or more KSF. The toolbox developed by Georgoulias et al. (2007) is of interest to the present framework: it addresses three types of flexibility (corresponding to three possible KPF) and enables analysing flexibility at different production levels through data aggregation. Krappe et al. (2007) integrated flexibility measurement into change management processes and claim that the integrated process allows choosing the most appropriate response to improve the manufacturing system's flexibility at any level. This response could be the best network configuration to achieve a desired flexibility.
The identification of the KPF requires the
consideration of the nature of collaborative
organisations and of the role of collaboration.
The VO performs well if its stakeholders get what they want and obtain higher value from it than they would from its alternatives. However, evaluating the global performance of the VO may be insufficient, and the individual partners' contributions must also be evaluated. The case of a VO that delivers to the customer as agreed, in spite of some members' bad performance and only due to the extraordinary effort of the other members, shows that a second dimension besides factorization (going from effect to cause) is needed: disaggregation (the contributions of the different members to a global performance). Both dimensions of KPF development are depicted in Figure 4.




Figure 4 Factorization and disaggregation of KPF
For disaggregation it may be useful to consider the
different ways partners may work at a given
moment, represented in Figure 5:
i) the partners work together for a single output;
ii) the partners work individually for a single
output (sequentially or in parallel);
iii) the partners work individually for multiple
outputs (independently).
In the design and planning activities, i) may be prevalent; this would be the case of collaboration in the strict sense (sharing resources, knowledge, etc.). During the execution of manufacturing processes, ii) may be more usual, a case that requires coordination. In the first case, the evaluation of the individual contributions may not be pertinent. This has implications for disaggregation and for the calculation of KPI. The disaggregation of one KPF into KPF_i enables exposing the performance of individual partners through individual KPI.

Figure 5 The different ways partners work
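As a sketch of disaggregation, a VO-level indicator can be computed from per-partner contributions so that the same data also exposes the individual KPI (a hypothetical on-time-completion example; all names are illustrative):

def on_time_rates(partner_orders):
    # partner_orders: {partner: [True/False per completed order]}
    per_partner = {p: sum(flags) / len(flags)
                   for p, flags in partner_orders.items()}
    all_flags = [f for flags in partner_orders.values() for f in flags]
    vo_level = sum(all_flags) / len(all_flags)  # aggregate KPI of the VO
    return vo_level, per_partner                # global KPI and KPI_i

vo_kpi, partner_kpi = on_time_rates({"P1": [True, False], "P2": [True, True]})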
2.3 COLLABORATION AS A KEY
PERFORMANCE FACTOR
The collaboration ability of BC members, along
with their technical and management capabilities, is
a relevant issue raising the need for the assessment
of the collaboration preparedness of a candidate to
join a VBE or a VO (Afsarmanesh and Camarinha-
Matos, 2005). In fact, some of the KPF found and
included in the framework as examples are related
to the processes of admission of members into the
BC and of search and selection of partners during
the formation and reconfiguration of the VO.
In the Net-Challenge performance management
framework, collaboration is present in the internal
factors (performance factors). The identification of
the role of collaboration, in the process of
identification of the KPF, is important as it reveals
ex ante the benefits that may result from
collaboration.
However, directly assessing the performance of an organization on collaboration is difficult, and many approaches to do so lack practicality. Indirect methods try to measure collaboration either by its consequences or by its factors, or both. One difficulty when reviewing the research work is the differences in the underlying, explicit or implicit, definitions of collaboration. Camarinha-
Matos and Abreu (2007) propose KPI to measure
collaboration based on a benefit evaluation.
Westphal et al. (2010) address the measurement of
collaboration proposing the measurement of its
effects and of its enablers. Borgatti and Jones (1996)
presented a method to measure past collaboration
which could be an indicator of preparedness for
collaboration. Thomson et al. (2007) developed a
conceptual multidimensional model of collaboration
to measure collaboration. Simatupang and Sridharan
(2005) developed a collaboration index to measure
supply chain collaboration in three dimensions -
information sharing, decision synchronisation and
incentive alignment. Bititci et al (2004, 2008) presented four conditions for collaboration that were not met in some known cases of failure, and collected and systematised, in the form of recommendations, the results of a survey. In summary, the recommendations resulting from the companies' experience of collaboration are that trust is the base of collaboration and communication is important to create trust; upon trust, relationships must be built with awareness of cultural differences; then methods and some formalisation must exist; afterwards, collaboration can be practised by investing in it; and when problems happen, collaboration and a constructive approach should be used to solve them.
The Net-Challenge approach is aligned with Bititci's conditions and recommendations. The performance factors considered in the present framework to affect collaboration in the VO are:
- Motivation and customer orientation;
- Agreement and partners' top management commitment;
- Trust;
- Communication in the VO;
- Leadership and problem solving instances;
- Methods and tools;
- Organisations' culture and individual social skills;
- Balance of internal process development among partners;
- Geographical distance.
The agreement factor concerns rules, obligations, benefits and risks to be agreed upon explicitly. Some factors depend on the selection of partners, some on the agreements established to form the VO, and others on the capabilities of people and organisations. Which of these are key depends on each network configuration and should be determined in view of its strengths and weaknesses.
The importance of communication and of establishing communication channels is well known; it was highlighted, for example, in a Net-Challenge reference process that prepares the VO to respond to events, which is part of the VO formation process. Though difficult to achieve, it is desirable that the performance factors affecting collaboration be independent of each other so that evaluation is easier.
No research was conducted to establish the relative importance of those KPF, so weights could not be set on a sound basis. Nevertheless, organisations can determine such weights based on their experience.
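One simple way to operationalise such experience-based weights is a weighted score over the collaboration factors (a sketch; the factor names, ratings and weights are illustrative):

def collaboration_score(ratings, weights):
    # Weighted mean of factor ratings; the weights reflect the
    # organisations' own experience of each factor's importance.
    return sum(ratings[f] * w for f, w in weights.items()) / sum(weights.values())

score = collaboration_score(
    ratings={"trust": 4, "communication": 3, "methods_and_tools": 5},  # 1-5 scale
    weights={"trust": 0.40, "communication": 0.35, "methods_and_tools": 0.25},
)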
2.4 MEASUREMENT OF PERFORMANCE
Key performance indicators (KPI) allow monitoring the performance of the organisations on the selected key factors (success and performance factors). A performance indicator, sometimes called a metric, is a variable that quantitatively measures a performance factor. Key performance indicators are the (few) indicators selected to represent the overall performance of a system or organisation. Some KPI are proposed within the framework for the suggested KSF and KPF, with the main objective of speeding up the analysis during the formation of the BC and the VO.
Although quantitative indicators were preferred, for some factors only qualitative indicators could be found. Some are measured periodically, others only once, which is the case of the qualitative measures obtained at the VO's dissolution phase in a review of the VO's performance.
Many authors have proposed KPI that can also be used or extended in the context of CN; some of them are cited in this article. The KPI are selected according to the factor to be measured, the particularities of the business processes, the availability of data, etc. If the KPF is disaggregated, the corresponding KPI should have that ability.
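One possible data structure for such a KPI, keeping per-member values alongside an aggregate so that a disaggregated KPF can be mirrored by a disaggregated KPI, is sketched below (field names are assumptions, not part of the framework):

from dataclasses import dataclass, field
from typing import Dict

@dataclass
class KPI:
    name: str
    factor: str     # the KSF or KPF this indicator measures
    unit: str
    values: Dict[str, float] = field(default_factory=dict)  # member -> value

    def overall(self) -> float:
        # Simple mean as the aggregate; a real KPI may need weighting rules
        return sum(self.values.values()) / len(self.values)

kpi = KPI("delivery reliability", factor="delivery time", unit="%",
          values={"P1": 92.0, "P2": 85.0})
assert kpi.overall() == 88.5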
2.5. PERFORMANCE MANAGEMENT
PROCESSES
Two reference processes describe how performance management takes place in the BC and VO environments, across their lifecycle phases. The processes, whose diagrams in BPMN notation are shown in Figure 6 and Figure 7, contain the steps from strategy formulation to performance evaluation and improvement, taking into consideration the differences between the BC and the VO and their lifetimes. The purpose of the reference processes is to help CN design their standard processes, which will be adjusted to the specific business characteristics.
In the BC process, the sub-process "Develop a strategy" (detailed in Figure 8) deals with the identification of the KSF and the consequent strategy formulation. Once a strategy is decided, it is necessary to understand how every process of the Business Community, its members and the VO contributes to the execution of the strategy, so that total alignment can be achieved; this takes place in the "Deploy the strategy" sub-process (detailed in Figure 8), where the KPF and KPI are identified. The proposed KSF, KPF and their KPI are process resources of the framework that provide guidance and speed up the process. Only then can the performance measurement processes be set up, so that performance measurement takes place as detailed in Figure 8 c).
In the VO environment, the starting point is a business opportunity and customer or market requirements. An agreement among partners is required to formalise the VO, also concerning the KSF, KPF and KPI relevant to the BC and to the specific business. Decisions taken at this moment concerning performance management are constrained by the BC's KPI and by the customer's requirements and targets.




Figure 6 Strategic planning and performance management in the BC

Figure 7 Performance management in the VO

a) Sub-process Develop a strategy

b) Sub-process Deploy the strategy

c) Sub-process Measure performance
Figure 8 Sub-processes of performance management in
the BC
3. VALIDATION
The Net-Challenge framework for performance management is planned to be tested in pilot applications led by industrial partners in the textile and garment sector and in the footwear sector. In these sectors, supply chains are hierarchical, and many companies are very small and have a very informal approach to performance management. The pilot applications will mostly focus on the validation of the process resources (KSF, KPF, KPI and business scenarios) and of the sub-processes used to negotiate them and to set up the performance management processes. Interviews with key people in the participating organisations will enable an initial validation and adjustment and will determine the needs and requirements for training and assistance.
Information will be acquired through inquiries about the initial conditions and the effectiveness of process execution. The contribution of the performance management processes to the alignment of strategies and to


customer orientation will also be examined. Both the validation and the specific requirements will enrich the framework.
4. CONCLUSIONS AND FUTURE WORK
The paper presents a comprehensive, coherent and practical framework to assist CN in managing performance, which has been missing even though several works have addressed specific aspects. The framework makes use of some existing base concepts, which were extended and articulated to support the development of new processes applicable in the BC and VO domains and embedding collaboration in a straightforward way.
A methodical identification of the links from stakeholders to KSF and to KPF provides guidance for the quite often chaotic task of selecting KPI. The concepts adjusted to the new environment, the reference processes developed, the lists of KSF, KPF and KPI and other supporting elements are combined in the framework to make it comprehensive and usable. CN can derive from this framework standard processes tailored to their specific conditions. Validation of the framework must still be done, and it may lead to the improvement of its components.
The benchmarking of collaborative practices in the BC, inspired by the proposal of Simatupang and Sridharan (2004) for supply chains, may be an instrument to include in the framework to determine the success factors, to spread good practices of collaboration and to improve the performance of VO and BC.
5. ACKNOWLEDGMENTS
The authors would like to acknowledge the co-funding of the European Commission within the NMP priority of the Seventh RTD Framework Programme (2007-13) for the Net-Challenge project (Innovative Networks of SMEs for Complex Products Manufacturing), Ref. CP-FP 229287-2.
The authors also acknowledge the valuable
collaboration provided by the project team during
the research work.
REFERENCES
Afsarmanesh, H., Camarinha-Matos, L.M., A Framework for management of virtual organization breeding environments, Collaborative Networks and their Breeding Environments (PRO-VE'05), 2005, pp 35-48
Alfaro, J., Rodriguez, R., and Ortiz, A., A performance
measurement system for virtual and extended
enterprises, Proceedings of the Sixth IFIP Working
Conference on Virtual Enterprises, Vol. 186, 2005, pp
285-292
Bititci, U. S., Martinez, V., Albores, P. and Parung, J.,
"Creating and Managing Value in Collaborative
Networks", International Journal of Physical
Distribution & Logistics Management, Vol. 34, No.
3/4, 2004, pp 251
Bititci, U., Butler, P., Cahill, W. and Kearney, D.,
Collaboration: A key competence for competing in
the 21st century, SIOM Research Paper Series, No.
003, 2008
Bolstorff, P., Rosenbaum, R., Supply chain excellence:
a handbook for dramatic improvement using the
SCOR model, American Management Association,
2003.
Borgatti, S. and Jones, C., A measure of past
collaboration, Connections, Vol. 19, No. 1, 1996, pp
58-60
Camarinha-Matos, L. M., and Abreu, A., Performance
indicators for collaborative networks based on
collaboration benefits, Production Planning &
Control, Vol.18, No. 7, 2007, pp 592-609
Camarinha-Matos, L. M., Afsarmanesh, H.,
Collaborative Networked Organizations - A research
agenda for emerging business models, Kluwer
Academic Publishers, 2004
Camarinha-Matos, L. M., Afsarmanesh, H., Elements of
a base VE infrastructure Computers in Industry, Vol.
51, No. 2, 2003, pp 139-163
Camarinha-Matos, L. M., and Afsarmanesh, H., Related
work on reference modeling for collaborative
networks, Collaborative networks: reference
modeling, 2008, pp 15-28
Carneiro, L., Almeida, R., Azevedo, A., Kankaanpaa, T.,
and Shamsuzzoha, A., An Innovative Framework
Supporting SME Networks for Complex Product
Manufacturing, IFIP Advances in Information and
Communication Technology, Boston, 2010, pp 204-
211
Cunha, P. F., Ferreira, P. S., and Macedo, P.,
Performance evaluation within cooperate networked
production enterprises, International Journal of
Computer Integrated Manufacturing, Vol. 21, No. 2,
2008, pp 174-179
Folan, P. and Browne, J., A review of performance
measurement: Towards performance management,
Computers in Industry, Vol. 56, 2005, pp. 663-680
Georgoulias, K., Papakostas, N., Makris, S. and
Chryssolouris, G., A Toolbox Approach for
Flexibility Measurements in Diverse Environments,
CIRP Annals - Manufacturing Technology, Vol. 56,
No. 1, 2007, pp 423-426
Gunasekaran, A., and Patel, C., McGaughey, R. E., A
framework for supply chain performance
measurement, Int. J. Production Economics, Vol. 87,
2004, pp 333-347


Holweg, M., Disney, S., Holmström, J. and Småros, J., Supply Chain Collaboration: Making Sense of the Strategy Continuum, European Management Journal, Vol. 23, No. 2, 2005, pp 170-181
Hon, K.K.B., Performance and Evaluation of
Manufacturing Systems, CIRP Annals -
Manufacturing Technology, Vol. 54, No. 2, 2005, pp.
139-154
Kaplan, R. S., and Norton, D. P., The Balanced
Scorecard, Harvard Business School Press, Boston,
1996
Kaydos, W. J., Operational performance measurement: increasing total productivity, St. Lucie Press, Boca Raton, 1999
Kenny, G., Strategic Planning and Performance
Management: Develop and Measure a Winning
Strategy, Elsevier/Butterworth-Heinemann, London,
2005
Krappe, H., Stanev, S., Ovtcharova, J., Georgoulias, K.,
Chryssolouris, G., Abul, H.A., "Development of
Flexibility Methods and their Integration into Change
Management Processes for Agile Manufacturing",
New Technologies for the Intelligent Design and
Operation of Manufacturing Networks, Fraunhofer
IRB Verlag, 2007, pp. 37-52
La Forme, F.-A. G., Genoulaz, V., and Campagne, J.-P.,
A framework to analyse collaborative performance,
Computers in Industry, Vol. 58, 2007, pp 687-697
Lebas, M. J., Performance measurement and
performance management, International Journal of
Production Economics, Vol. 58, 1995, pp 23-35
Otley, D., Performance management: a framework for
management control systems research, Management
Accounting Research, Vol.10, 1999, pp 363-382
Otley, D., Accounting performance measurement: a
review of its purposes and practices, Business
Performance Measurement: Unifying Theory and
Integrating Practice, Cambridge, 2007, pp 11-35
Simatupang, T.M., and Sridharan, R., Benchmarking
supply chain collaboration: An empirical study,
Benchmarking: An International Journal, Vol. 11, No.
5, 2004, pp 484-503
Simatupang, T.M., and Sridharan, R., The collaboration
index: a measure for supply chain collaboration,
International Journal of Physical Distribution &
Logistics Management, Vol. 35, No. 1, 2005, pp 44-62
Thomson A. M., Perry, J. L. and Miller, T.K.,
Conceptualizing and Measuring Collaboration, J
Public Adm Res Theory, Vol.19, No.1, 2007, pp 23-56
Westphal, I., Thoben, K.-D., Seifert, M., Managing collaboration performance to govern virtual organizations, J Intell Manuf, Vol. 21, 2010, pp 311-320
Wiendahl, H.-P. and Lutz, S., Production in Networks,
CIRP Annals - Manufacturing Technology, 2002, pp
573-586
Wiendahl, H.-P., ElMaraghy, H.A., Nyhuis, P., Zäh, M.F., Wiendahl, H.-H., Duffie, N. and Brieke, M., Changeable Manufacturing - Classification, Design and Operation, CIRP Annals - Manufacturing Technology, 2007, pp 783-809


RISK MANAGEMENT IN EARLY STAGE OF PRODUCT LIFE CYCLE BY
RELYING ON RISK IN EARLY DESIGN (RED) METHODOLOGY AND USING
MULTI-AGENT SYSTEM (MAS)
Leyla Sadeghi
Cemagref - Unit TSAN 1, rue Pierre-Gilles
de Gennes - CS 10030 - 92761 Antony cedex
leyla.sadeghi@cemagref.fr

Mohsen Sadeghi
Graduate School of Management and
Economics
Sharif University of Technology
sadeghi@gsme.sharif.edu



ABSTRACT
Risk assessment and management play a critical role in the design phase of the product process. The aim of this article is to sustain risk assessment and management during the early design phase of a product. The results presented in this work contribute to managing risk during the product design phase through the development of a computerized system that utilizes the concepts of Multi-Agent Systems, the RED (Risk in Early Design) methodology and rule-based intelligent techniques. The suitable design decision is selected according to the acceptable risk; the Multi-Agent System helps facilitate applying the RED methodology for risk assessment and management. This paper first describes the motivation for this research and the related context and environment. Secondly, a brief state of the art of failure analysis methods is introduced. In the third part, a structured model is proposed for applying the RED methodology by utilizing a Multi-Agent System; this model applies feature-based and parametric design concepts, together with a Multi-Agent System, to handle the Risk in Early Design (RED) method. Then, the results obtained with the RMD_MAS_RED tool are presented. Finally, the perspectives of this work are presented.
KEYWORDS
Design Phase, Risk in Early Design, Risk Assessment, Risk Management, Multi Agent System

1. INTRODUCTION
Risk assessment and management play a critical role in the design phase of the product process. Early assessment and management of risks is necessary to anticipate and prevent failures from occurring or recurring. The impact of risk assessment on the product is greatest in the design phase, especially the conceptual design phase, so to increase product safety, performance and reliability, risk assessment needs to be moved forward to the conceptual design phase [1]. Because the product has not yet assumed a physical form at the conceptual design stage, risk assessment is difficult in this phase. In an effort to perform risk assessments based on function rather than physical components, the Risk in Early Design (RED) method was developed [1].
It is known that formal risk analysis is considered by designers to be a time consuming, tedious and often useless activity (in the mechanical and semiconductor industries).
This paper presents the RMD_MAS_RED (Risk Management in Design by using Multi-Agent Systems and the Risk in Early Design method) tool, which was developed to capture, assess, organize, store, share and update knowledge and information in order to support the RED method by using multi-agent systems.



2. LITERATURE REVIEW
Designers are highly concerned with risk in the early design stages. One of the first steps of risk assessment and management is failure identification and analysis. Several failure analysis methods exist and are used in industry, but Failure Mode and Effect Analysis (FMEA) is the most widely used [2]. This method examines the components of a system and their failure mode characteristics to assess risk and reliability [3]. To assure systematic and thorough coverage of all failure modes, the information is usually arranged in a tabular format. The table consists of at least three columns: one for the component name and its reference number, a second for the failure modes and a third for the effects. Other columns might include the failure detection method, corrective action and criticality [4]. Some examples of FMEA shortcomings are:
1. FMEA is tedious and time consuming because it relies on experts to examine each component of a system to identify its potential failure modes [5]; as a consequence, it often adds little value to the designed artifact and is not very economical.
2. FMEA is applied too late, so it does not affect important design decisions [9]; if FMEA is performed earlier in the design stage, it has to be repeated whenever the design is changed [7, 8].
3. FMEA requires a detailed level of system design and is thus not optimal for use during conceptual design [10, 11].
4. FMEA does not capture component interactions explicitly, and it relies heavily on expert knowledge to assess failure consequences and their criticality [11].
Another method is Fault Tree Analysis (FTA). FTA is an event-oriented analysis which starts with the identification of a high-level failure event [11]. This tool provides a logical framework for analyzing the failure behavior of a system by identifying sequences of events that have negative impacts. FTA now has a history of about 50 years and has become a tool for reorganizing and translating the failure behavior of a physical system into a visual diagram and logic model [12].
By employing this technique, one can generate qualitative descriptions and quantitative estimations (when sufficient data are available) of the risk elements [13]. FTA is a well-accepted technique and is very suitable for finding failure relationships, but it is difficult to understand and involves complex logic, so it cannot be performed by novice designers. It is useful both in designing new products and in dealing with identified problems in existing products.
Like FMEA, FTA is a well-accepted, standard
technique. It is likely to identify more possible
failure causes than FMEA. However, FTA also
relies greatly on expert input and it shares similar
criticism that FMEA is subject to that [11].
Formally capturing of component interactions and
system dynamics is crucial for supporting design
decisions during early conceptual development. So
FTA is not appropriate for risk analyzing in early
design phase.
An attempt toward the identification of failure
modes during conceptual design was made possible
through the Function-Failure Design Method
(FFDM). FFDM establishes a mathematical
relationship between product function and failure
modes and was developed by Tumer and Stone
(Stone et al. 2005; Stone et al. 2004; Tumer and
Stone 2003) [14, 15]. This method uses a functional
model in combination with historical failure
information to map functionality to potential failure
modes [14]. The Functional Basis, a standard
taxonomy for describing functionality, was used to
model systems and components at the highest
(functional) level. The method then collected failure
data from historical databases and designer
elicitation and mapped these failures onto functions,
hence building a knowledge base that relates failure
modes directly to functionality without needing to
know the details of the design form or solutions [14].
FFDM produces the type and number of failures
that occurred for a particular product. A bill of
materials, which is a list of the components making
up the product, and a functional model are used to
document the functional data. In other words,
FFDM involves the formation of a function-failure
matrix that can be used as a knowledge base to
identify and analyze potential failures in a design [15].
In FFDM, the function-component matrix (EC
matrix) is created with the help of the bill of
materials and the functional model. This matrix has
m columns (components) and n rows (functions)
[16]. For creating the component-failure matrix (CF
matrix), the bill of materials and the failure
documentation are used. Finally, the function-failure
matrix (EF matrix) is obtained by multiplying the
function-component matrix (EC) by the
component-failure mode matrix (CF):

EC × CF = EF    (1)

Using the matrix EF, designers can design out
identified failure modes during the conceptual
design stage.
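As a minimal illustration of Equation 1, the following Python/NumPy sketch evaluates EF from hypothetical EC and CF matrices; the function, component and failure labels are invented placeholders, and in practice EC and CF would be populated from the functional model, the bill of materials and the historical failure database as described above.

```python
import numpy as np

# Hypothetical labels; in practice these come from the functional
# model, the bill of materials and the historical failure database.
functions  = ["store energy", "transmit torque"]        # n rows
components = ["battery", "shaft", "coupling"]           # m columns
failures   = ["corrosion", "fatigue"]                   # k columns

# EC (n x m): entry is 1 if the component solves the function, else 0.
EC = np.array([[1, 0, 0],
               [0, 1, 1]])

# CF (m x k): number of recorded failures per component and failure mode.
CF = np.array([[3, 0],
               [0, 5],
               [1, 2]])

# EF (n x k): Equation 1 -- failures mapped onto functions.
EF = EC @ CF
for func, row in zip(functions, EF):
    print(func, dict(zip(failures, row)))
```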
FFDM provides a starting point for determining the
likelihood of system failure based on a set of
functions [11]. Using this method, designers can
analyze potential functional failures before any
component selection is made. Several methods have
been developed based on FFDM, for example:


- A methodology to enable the design of health
monitoring modules concurrently with system
conceptual design, in order to reveal, model, and
eliminate associated risks and failures (Hutcheson et
al., 2006); and
- A formulation for a functional failure likelihood
and impact-based risk assessment approach, which
classifies function-failure combinations from high
risk to low risk for designers (Lough et al. 2006;
Krus and Lough 2007) [14].
In this research we use the second method, the Risk
in Early Design (RED) method, which formulates a
functional-failure likelihood and impact risk
assessment. This approach classifies function-failure
combinations from high risk to low risk and
provides designers with a tool that can be used to
qualitatively rank and order functional failures and
their consequences during conceptual design [11].
In other words, the RED method is an engineering
design tool for identifying and assessing risks in
early design. It produces risk assessments based on
catalogued historical failure data, translating
recorded information about function and failure into
categorized risk likelihood and impact for a product
[6]. Indeed, this method is a natural extension of
FFDM that creates a relationship between function
and risk in early design through a mathematical
mapping from product function to likelihood and
impact risk assessments [17]. It uses a database of
historical failure event information to highlight the
specific areas of a product that are at risk of failure.
RED's aim is to identify risks and to communicate
those risks [1].
Therefore, the RED method promotes the
identification of risk related to product function on a
historical basis. A 2-D Fever Chart, with axes of
likelihood and impact of failure, is used to
communicate those risks [1]. In the following, we
describe this method. The RED method consists of
1+4 steps (Figure 1), focusing on the relationship
between function and risk by representing a
mapping from function to risk likelihood and
impact [1].







Figure 1 - Steps of RED Method
The starting point is RED database population. A
database of historical products and corresponding
failure information is necessary for the RED
method. Failure reports have to be gathered from
various products. This database should provide
enough information to identify the specific
component and failure involved.
In the first step, a functional model is created for
generating a function-failure matrix as part of RED.
Because this step has to describe what the product
will do, functional modeling is useful for this work
[1]. In other words, using this functional information
about what the product does, designers can begin to
perform analyses and studies of how the product
can perform these functions [5].
In the next step, the function-failure matrix for the
product is produced using Equation 1. This matrix
provides the number of failures for each particular
function according to the historical database. This
step provides a starting point for determining the
likelihood and impact of product failure through a
particular function.
Step 3 yields information in the form of function,
failure mode, impact and likelihood; the aim of this
step is the risk calculation. The risk likelihood and
impact calculations are extensions of FFDM, and an
appropriate mapping must be selected to apply this
extension. A key assumption behind these mappings
is that a fully populated database of related
historical failures has been established. Without
such a foundation, the risk assessments are not
likely to produce relevant and adequate risk data [1].
In the final step of RED, these risk elements must
be communicated in a way that is easy to
understand [1]. So, after selecting the appropriate
combination of mappings, the data is summarized
through the use of a 2-D Risk Fever Chart.
A Fever Chart is a 5-by-5 matrix that shows impact
on the horizontal axis and likelihood on the vertical
axis. Each cell of the matrix displays the number of
risk elements falling into that impact-likelihood
combination [18].
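To make the binning concrete, here is a minimal sketch, assuming hypothetical (likelihood, impact) ratings on a 1-5 scale as would be produced by the RED mappings, that counts risk elements into such a 5-by-5 grid:

```python
import numpy as np

# Hypothetical (likelihood, impact) ratings on a 1-5 scale, standing in
# for the output of the RED likelihood and impact mappings for each
# function-failure combination.
risk_elements = [(5, 4), (2, 1), (4, 4), (1, 2), (3, 5)]

fever = np.zeros((5, 5), dtype=int)   # rows: likelihood, cols: impact
for likelihood, impact in risk_elements:
    fever[likelihood - 1, impact - 1] += 1

# Print with likelihood increasing up the vertical axis, as on the chart.
for row in reversed(range(5)):
    print(f"L={row + 1}", fever[row])
```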
When all the risks for a product are plotted on the
Fever Chart, designers can quickly get a visual feel
for the risk level of the entire system. If most of the
risk elements are in the green or low-risk areas, then
the product is considered low risk; if a large number
of risk elements are plotted in the red or high-risk
areas, then the designers can identify that there are
significant areas of concern [1] (Figure 2).



Figure 2 - Risk Fever Chart

All RED information can be obtained before the
product has assumed a physical form, which is when
the potential for positive impacts on product
performance, costs and reliability is greatest. Our
proposed method relies on the RED methodology
and extends it by means of a multi-agent system.

3. THE MODEL
First, it is necessary to note that in this research we
use features instead of components in the FFDM
and RED methods. So, in FFDM, the EC matrix
shows the relationship between function and
feature, and the CF matrix shows the relationship
between feature and failure. As a result, the EF
matrix gives the relationship between function and
failure. In other words, we analyze the definition
and expression of functions, and we build the
function model of the product. Furthermore, we
decompose the functions and achieve a function-
feature mapping.
A feature has some functions and geometries, and it
can be combined with other features to create new
parts; these new parts then form products and are
related to product design and product
manufacturing. Based on the definition of a feature,
a feature consists of information such as functional
information and geometric information [19]. So we
defined a Feature Agent and a Feature Risk
Management Agent.
Also, since risk assessment is based on function
instead of physical form, a Function Agent and a
Function Risk Management Agent are defined. A
design database is defined to hold all the data
needed for reasoning, to save records of the
interactions and to update the agents. The main
structure of the multi-agent system for our approach
is given in Figure 3. Knowledge sharing and
exchange between the agents is particularly
important to define.




Figure 3 - Structure of Multi- Agent System

The Design Meta Agent provides support to the
design activities and initiates queries to the other
agents in the multi-agent system. The Design Meta
Agent consists of two agents: the Feature Agent and
the Function Agent. The Feature Agent holds a list
of feature parameters, and a database is needed for
the data representation of functions and features.
This database plays a significant role in decision
making at the early design stage.
The Risk Management Meta Agent is defined for
determining and calculating risk and consists of the
Function Risk Management Agent and the Feature
Risk Management Agent. According to historical
data, the Feature Risk Management Agent
calculates the risk related to each feature, and a
database represents historical products and
corresponding failure information. To construct a
database for performing RED, failure reports from
various products should be gathered. The failures
recorded in the database provide part of the context
within which the product risk is considered. The
Function Risk Management Agent determines the
risk related to functions.
Communication and messages between these agents
must be defined to sustain RED in the design phase.
Figure 4 shows the proposed communication
architecture of the multi-agent system. For
successful communication in such environments,
the agents have to share knowledge with each other.
Messages (a minimal sketch of this exchange
follows the list):
1. Which feature can realize the function?
2. List of alternative features that can realize the
function.
3. What is the risk related to the function according
to the selected feature?
4. What are the existing failures for the features?
5. Existing failures for the features (relationship
between failure and feature).
6. Relationship between function and feature.
7. Risk related to the function according to the
selected feature.
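The following is a minimal, hypothetical Python sketch of this request/response exchange; the agent classes, method names and stubbed databases are illustrative assumptions, not the Visual Basic implementation described in Section 4.

```python
# Hypothetical sketch of the message exchange between the four agents.
# The databases are stubbed as dictionaries; a real implementation would
# query the function-feature and feature-failure databases.

FUNCTION_FEATURE = {"transmit torque": ["shaft", "coupling"]}   # message 6
FEATURE_FAILURE = {"shaft": {"fatigue": 5},                     # message 5
                   "coupling": {"fatigue": 2, "corrosion": 1}}

class FeatureAgent:
    def features_for(self, function):            # messages 1 and 2
        return FUNCTION_FEATURE.get(function, [])

class FeatureRiskManagementAgent:
    def failures_for(self, feature):             # messages 4 and 5
        return FEATURE_FAILURE.get(feature, {})

class FunctionRiskManagementAgent:
    def __init__(self, feature_rm):
        self.feature_rm = feature_rm
    def risk_for(self, function, features):      # messages 3 and 7
        risk = {}
        for feature in features:
            for failure, count in self.feature_rm.failures_for(feature).items():
                risk[failure] = risk.get(failure, 0) + count
        return risk

class FunctionAgent:
    def __init__(self, feature_agent, function_rm):
        self.feature_agent, self.function_rm = feature_agent, function_rm
    def assess(self, function):
        features = self.feature_agent.features_for(function)
        return self.function_rm.risk_for(function, features)

agents = FunctionAgent(FeatureAgent(),
                       FunctionRiskManagementAgent(FeatureRiskManagementAgent()))
print(agents.assess("transmit torque"))   # {'fatigue': 7, 'corrosion': 1}
```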
Here, the methodology is introduced, applying
feature-based, parametric design concepts and a
multi-agent system to implement the Risk in Early
Design (RED) method. We discuss agent
communication and knowledge sharing based on
the RED method for


achieving this purpose. Figure 4 illustrates the
interactions between the function, feature, function
risk management and feature risk management
agents and their communication by exchanging
messages. These messages express information that
the transmitting agent wants the other agents to take
into account.









Figure 4 - Multi-Agent System for sustaining RED

According to the analysis of customers' needs and
requirements, the designer determines which
functions can satisfy those needs and requirements.
Indeed, functional modeling describes what the
product will do, so a function entity is created by
the Function Agent; the risk will then be calculated
based on this entity. First, the function has to be
realized by features, so initially the request "Which
feature can realize the function?" is created by the
Function Agent and sent to the Feature Agent. The
Feature Agent creates a list of features that can
realize the function, with the help of the Function-
Feature Database. This "list of features" is sent back
to the Function Agent by the Feature Agent. Indeed,
the relationship between function and feature is
created in this step. The Function-Feature Database
is populated with a 1 entry if the feature, in the
corresponding column, solves the function;
otherwise a 0 is entered.
In the next step, the aim is the calculation of risk.
The Risk Management Meta Agent is defined for
determining and calculating risk; this Meta Agent
includes the Function Risk Management and
Feature Risk Management Agents. So the message
"What is the risk related to the function according to
the selected feature?" is sent from the Function
Agent to the Function Risk Management Agent. To
answer this message, the relationship between
function and failure is needed.
The Feature Risk Management Agent establishes
the relationship between feature and failure,
calculating the risk related to each feature according
to historical failure data. The Feature-Failure
Database contains these historical failures and
supports the Feature Risk Management Agent in the
risk calculation. Therefore, the message "What are
the existing failures for the feature?" is sent from
the Function Risk Management Agent to the
Feature Risk Management Agent. To answer this
question, the Feature Risk Management Agent uses
the Feature-Failure Database, determines the
failures of the feature and then sends them in the
form of a message to the Risk Management Meta
Agent.
In addition, another message from the Function
Agent, consisting of the "relationship between
function and feature" found in the previous steps, is
sent to the Function Risk Management Agent. So
both the "relationship between failure and feature"
and the "relationship between function and feature"
exist in the Function Risk Management Agent. By
combining these relationships, the "relationship
between function and failure" is created. This
relationship shows the type and number of failures
that have occurred for a particular function, and it
can be used as a knowledge base to identify and
analyze potential failures in the design of a product.
Indeed, we provide a starting point for the risk
calculation through the exchange of requests,
knowledge and information between the function,
feature, function risk management and feature risk
management agents by means of messages 1 to 5.
The next step is the risk calculation. The
information about the relationship between function
and failure in the Function Risk Management Agent
is translated into risk likelihood and impact
elements for the product by applying the risk
likelihood and impact mappings. These risk
likelihood and impact elements must be
communicated in a way that is easy to understand.
So, after selecting the appropriate combination of
mappings, the data is summarized in the Risk Fever
Chart. Each cell of the Risk Fever Chart displays the
number of elements falling into that impact-
likelihood combination, so designers can quickly
get a visual feel for the risk level of the product. The
Risk Fever Chart thus provides clear communication
of risk.
Finally, the "risk related to the function according to
the selected feature" is analyzed, and this result is
sent to the Function Agent in the form of a message.

4. THE IMPLEMENTED SOFTWARE
This section presents the management of risk during
the product design phase through the development
of a computerized system utilizing the concepts of
multi-agent systems, the RED methodology and
rule-based intelligent techniques. The suitable
design decision is selected according to the
acceptable risk. This approach is validated through
the development of the RMD_MAS_RED (Risk
Management in Design by using a Multi-Agent
System and the Risk in Early Design method) tool
and its application to a case study.


This approach is implemented in Visual Basic
(VB), using the concept of a multi-agent system to
structure the program. In this tool, agents
communicate with each other by exchanging
messages through a communication language; for
successful communication in such environments,
the agents have to share knowledge with each other.
For each Meta Agent, we defined an interface for
the creation of agents and for the environment of
their operation, serving as the agent platform. These
interfaces show the flow of messages between
agents. As described in the previous section, seven
messages communicate and share knowledge and
information between these agents.
Figures 5 and 6 show the Design Meta Agent
interface. This Meta Agent includes the Function
Agent and the Feature Agent; it provides support to
the design activities and initiates queries to the
other agents in the multi-agent system.

Figure 5 - Design Meta Agent interface (Function Agent).


Figure 6 - Design Meta Agent interface (Feature Agent).

By entering a function name in the function agent
part of the interface, one can calculate the risk
related to this function according to the selected
feature. The feature agent part of the interface can
add new features to the database, as well as function
and feature information; this helps to keep the
function-feature database up to date. This database
plays a significant role in decision making at the
early design stage.
Additionally, Figures 7 and 8 show the Risk
Management Meta Agent interface. This Meta
Agent includes the Function Risk Management
Agent and the Feature Risk Management Agent,
and it is defined for determining and calculating
risk.

Figure 7 - Risk Management Meta Agent interface
(Function Risk Management Agent).

Figure 8 - Risk Management Meta Agent interface (Feature
Risk Management Agent).

The feature-failure database represents historical
products and corresponding failure information;
failure reports from various products are gathered
in this database. This database has to be updated
during the process, so the Risk Management Meta
Agent interface can update the feature-failure
database by means of the next and previous buttons.

5. THE PERSPECTIVES
The perspective of future work is extending this
approach to detailed design. In the Risk in Early
Design (RED) method, the relationship between
function and failure is found, with the help of the
Function-Failure Design Method (FFDM), by
multiplying the function-component matrix (EC)
and the component-failure mode matrix (CF); the
failures for each feature are considered by use of the
Function-Feature Database, which is populated with
a 1 entry if the feature, in the corresponding column,
solves the function, and a 0 otherwise. The
relationships and interactions of features with one
another are therefore not determined, so this
approach does not cover that aspect of risk.
A similar approach, with some modifications, can
be developed for risk calculation and management
in detailed design. For example, the parameters of a
feature, and in fact the values of these parameters,
can affect risk (the angle of a slot, the diameter of a
hole, etc.). Also, during the detailed design phase,
designers can specify properties for each function,
for example tolerance and roughness, and these
conditions also affect the risk of the product. Indeed,
risk calculation and management can be realized by
defining an appropriate knowledge base and
database; by use of such a knowledge base and
database, this approach can be adapted for detailed
design.

REFERENCES
[1] Grantham Lough, K., Stone, R. and Tumer, I., "Prescribing
and Implementing the Risk in Early Design (RED) Method",
Proceedings of DETC06: ASME International Design
Engineering Technical Conferences and Computers and
Information in Engineering Conference, pp 1-9.
[2] Stone, R., Tumer, I. and Stock, M.E., "Linking product
functionality to historic failures to improve failure analysis
in design", Research in Engineering Design, Vol. 16, 2005,
pp 96-108.
[3] Kurtoglu, T. and Tumer, I.Y., "A Risk-Informed Decision
Making Methodology for Evaluating Failure Impact of Early
System Designs", Proceedings of the ASME 2008
International Design Engineering Technical Conferences &
Computers and Information in Engineering Conference
(IDETC/CIE 2008), August 3-6, 2008, Brooklyn, New York,
USA.
[4] "Hazard Analysis", Process Automation Handbook, Part I,
Section 7, 2007, pp 407-423.
[5] Grantham Lough, K., Stone, R.B. and Tumer, I.Y.,
"Implementation Procedures for the Risk in Early Design
(RED) Method", Journal of Industrial and Systems
Engineering, Vol. 2, No. 2, Summer 2008, pp 126-143.
[6] Grantham Lough, K., Stone, R.B. and Tumer, I.Y., "Failure
Prevention in Design through Effective Catalogue
Utilization of Historical Failure Events", Journal of Failure
Analysis and Prevention, Vol. 8, 2008, pp 469-481.
[7] Arunajadai, S.G., Stone, R.B. and Tumer, I.Y., "A
Framework for Creating a Function-Based Design Tool for
Failure Mode Identification", Proceedings of DETC02:
ASME 2002 Design Engineering Technical Conferences and
Computers and Information in Engineering Conference,
Montreal, Canada, September 29 - October 2, 2002.
[8] Price, J.P., "Effortless incremental design FMEA",
Proceedings of the Annual Reliability and Maintainability
Symposium, 1996.
[9] McKinney, B., "FMECA, The Right Way", Proceedings of
the 1991 IEEE Annual Reliability and Maintainability
Symposium, Orlando, FL, pp 253-259.
[10] Bell, D., Cox, L., Jackson, S. and Shaefer, P., "Using
Causal Reasoning for Automated Failure Modes & Effects
Analysis (FMEA)", Proceedings of the 1992 IEEE Annual
Reliability and Maintainability Symposium, pp 343-353.
[11] Kurtoglu, T. and Tumer, I.Y., "A Graph-Based Framework
for Early Assessment of Functional Failures in Complex
Systems", Proceedings of DETC07: ASME 2007
International Design Engineering Technical Conferences
and Computers and Information in Engineering Conference,
September 4-7, 2007, Las Vegas, NV.
[12] Ericson, C.A. II, "Fault Tree Analysis - A History",
Proceedings of the 17th International System Safety
Conference, 1999.
[13] Mullai, A., "Risk Management System - Risk Assessment
Frameworks and Techniques", Publication series 5:200.
[14] Kurtoglu, T., Tumer, I.Y. and Jensen, D.C., "A functional
failure reasoning methodology for evaluation of conceptual
system architectures", Research in Engineering Design,
DOI: 10.1007/s00163-010-0086-1.
[15] Tumer, I.Y., Stone, R.B. and Bell, D.G., "Requirements for
a Failure Mode Taxonomy for Use in Conceptual Design",
International Conference on Engineering Design (ICED 03),
Stockholm, August 19-21, 2003.
[16] Zhang, X.J. and Gui, C.L., "Gene Models in Intelligent
Computer-Aided Design", Chinese Journal of Mechanical
Engineering, 2001, 37(2), pp 8-11.
[17] Muller, G. and Vercouter, L., "Decentralized Monitoring of
Agent Communications with a Reputation Model", Lecture
Notes in Computer Science, Vol. 3577, 2005, pp 144-161.
[18] Tumer, I.Y. and Stone, R.B., "Mapping Function to Failure
during High-Risk Component Development", accepted to
Research in Engineering Design; conference version
published in ASME/DETC 2001.
[19] Hao, Y. and Qin, "Feature-Function Expression Model and
Gene Coding for Products", Fifth International Conference
on Natural Computation.



CHALLENGES IN DIGITAL FEEDBACK OF THROUGH-LIFE
ENGINEERING SERVICE KNOWLEDGE TO PRODUCT DESIGN AND
MANUFACTURE
Tariq Masood
Manufacturing Department, School of Applied
Sciences, Cranfield University, UK
(Seconded at Life Cycle Engineering, Rolls-
Royce plc, Derby, UK)
Email: Tariq.Masood@Cranfield.ac.uk
Rajkumar Roy
Head, Manufacturing Department, Cranfield
University, UK and Director, The EPSRC
Centre for Innovative Manufacturing in
Through-life Engineering Services
Email: R.Roy@Cranfield.ac.uk


Andrew Harrison
Rolls-Royce Engineering Associate Fellow,
Life Cycle Engineering, Rolls-Royce plc,
Derby, UK
Email: Andrew.Harrison@Rolls-Royce.com
Stephen Gregson
Head of Engineering for Services Global
Transformation Programme, Rolls-Royce plc,
Derby, UK
Email: Stephen.Gregson@Rolls-Royce.com


Yuchun Xu
Manufacturing Department, Cranfield
University, UK and The EPSRC Centre for
Innovative Manufacturing in Through-life
Engineering Services
Email: Yuchun.Xu@Cranfield.ac.uk
Carl Reeve
Engineering for Services Global
Transformation Programme, Rolls-Royce plc,
Derby, UK
Email: Carl.Reeve@Rolls-Royce.com
ABSTRACT
Even though knowledge management has been a subject of research for a long time,
management of through-life service knowledge has started getting more attention quite
recently. With the help of literature review and analysis, this paper identifies possible
drivers to extend the product life cycle; presents definitions of knowledge and service
knowledge, and identifies research gaps and challenges in digital feedback of through-
life service knowledge to product design and manufacture. The paper presents a causal
loop model to represent causes and effects of through-life service knowledge on product
design and manufacture. A digital framework is presented to address challenges in digital
feedback of through-life service knowledge to product design and manufacture. The
digital framework is developed with the intention of developing a service knowledge
backbone demonstrator application. Industrial experts have validated the initial
framework. Detailed case studies shall be undertaken to enhance this framework in
future.
KEYWORDS
Product design, digital feedback, design for service, through-life engineering service
knowledge, manufacturing.


1. INTRODUCTION
The knowledge-intensive industries (e.g. aerospace,
construction) build complex and long-life products
(e.g. aircraft, engines, buildings) and tend to
encourage the generation of very large amounts of
information and knowledge within the overall
design-use-upgrade life cycle (Tang et al, 2010).
Pressures have mounted in the airline industry over
the last decade to reduce operating cost and improve
service, while revenues are declining (Harrison,
2006). This tendency is reinforced because the in-
service life is getting longer as a result of:
The product-to-service shift, which is
exemplified by Rolls-Royce's fleet service
agreements (e.g. Trent XWB), aimed at
reducing the risk and cost of long-term service
and maintenance events to the customer by
providing a fixed cost per flight hour. This type
of agreement provides a basis for continuity in
service records, resulting in increased
documentation;
The evolution of product-service systems (PSS),
especially technical or industrial PSS; and
Emerging changes in technology.
The product-to-service shift has necessitated
developing strong digital feedback links from
through-life service to the design and manufacturing
stages of the product life cycle. Digital feedback is
necessary to transform tacit knowledge into explicit
knowledge. However, there are challenges in
achieving digital feedback of through-life service
knowledge.
This paper presents recent advances in through-
life service knowledge feedback to product design
and manufacture. The scope of the paper is generic
and covers the general body of literature. This paper
presents a definition of knowledge in the industrial
setting, distinguishing it from data, information and
wisdom/action. It also presents definitions of
service, service knowledge, and service knowledge
management. Issues and barriers to knowledge
reuse are also discussed. The paper presents current
challenges in through-life service knowledge
feedback to product design and manufacture. A
service knowledge backbone (SKB) framework is
proposed to overcome these challenges. A causal
loop model (CLM) is also presented, which shows
relationships between challenges and the SKB.
Finally, the paper concludes with key
recommendations for future research.
2. METHODOLOGY
The following steps are followed as part of the
overall methodology for this paper (see Figure 1):
Identifying challenges in digital feedback of
through-life service knowledge (through
literature);
Identifying existing solutions to the challenges
(through literature);
Creating a digital framework to address the
identified challenges;
Developing a causal loop model incorporating
the challenges and the digital framework; and
Vetting of the digital framework using
industrial experts.
The main sources used to identify challenges and
solutions include journals and theses available via
Cranfield University's Search Point and CERES.
Keywords used for the different searches include,
but are not limited to, service knowledge feedback,
product design, service knowledge management,
manufacturing, and product life cycle.


Figure 1: Methodology - service knowledge feedback to
product design and manufacture
3. RECENT ADVANCES IN SERVICE
KNOWLEDGE MANAGEMENT
Knowledge management (KM) started gaining
popularity when the concept of a Knowledge
Economy was defined stating that the value of the
organisation lies not within the commodities
(product or service) that it produces, but within the
knowledge applied within the organisation to
produce it (Alavi and Leidner, 2001). Knowledge is
defined in different ways and accepting this may be
helpful to reduce further confusion. The hierarchy
of data-information-knowledge-wisdom is most
commonly referred to as a knowledge pyramid
(Hey, 2004). The shape of the hierarchy is
representative of large amounts of data that are
refined to create smaller amounts of information,
followed by further distillation to create knowledge
and, finally, wisdom. Young et al (2005)
defined data as text or numbers (Young et al, 2005).
Data is the raw form of parameters e.g. signals
going to cockpit instruments. According to Wilson
(2002) and Young et al (2005), data becomes
information when embedded in a relevant context
(Wilson, 2002; Young, et al, 2005). Therefore,
information is meaningful data. Examples may


include air speed, altitude, etc. Hey (2004) defined
knowledge as subjective, personal and shaped by an
individual's perceptions and experiences (Hey,
2004). Young et al (2005) defined knowledge as the
interpretation of information in order to assign
meaning (Young, et al, 2005). Hence, it is
information with understanding, whether within the
human head, manually documented or
computerised/automated, e.g. a pilot or autopilot.
A meaningful definition of knowledge in an
industrial setting is actionable understanding. The
assertion is that industry functions on a return-on-
investment basis: to warrant the investment of time
or other resources in knowledge management, there
must be an equal or greater benefit returned for the
activity to achieve sustainability. An industrial
definition of knowledge (worth managing) must
therefore contain a reference to the potential for its
value release.
ACTIONABLE UNDERSTANDING contains
the two key characteristics: that it has to convey
understanding and must be able to be acted upon
(this implies it has intellectual property value and
therefore has worth to the organisation justifying
management above and beyond that extended to
pure information). It also implies that it only really
exists when contained in a human brain or other
device where the understanding can be applied to an
input to generate a decision. We could therefore
assert that Knowledge does not exist on paper; it
only becomes knowledge when it is interpreted by a
human being or other active medium (such as a
computer algorithm) where it is capable of
influencing decisions and actions.
According to Ackoff (1989), wisdom is linked to
the future, since it integrates design and vision,
whereas the other pyramid levels are connected to
the past, since they are related to known things.
Hence, it is an extrapolative process, through which
individuals differentiate between right and wrong,
good or bad by extension of their understanding to
different contexts. Figure 2 presents data,
information, knowledge and wisdom across axes of
understanding and context independence.
Knowledge can be typified broadly in terms of
tacit or explicit. Tacit knowledge comes from
experience and is quite unstructured and hard to
communicate (Nonaka, 1994; Sobodu, 2002).
Explicit knowledge is the one that can be
transmitted in formal, systematic and well-
structured language (Nonaka, 1994; Sobodu, 2002).
The scope of this paper includes research that
captures tacit knowledge and makes it explicit by
externalizing through knowledge capture and re-
use.
[Figure 2 maps Data, Information, Knowledge and Wisdom
against axes of understanding and context independence:
data plus context yields information (relations); information
plus actionable understanding yields knowledge (patterns);
wisdom corresponds to (actionable) principles.]

Figure 2 Data, Information, Knowledge and Wisdom
KM is defined as a crucial construct in
understanding how humans convert information into
thought and consequently into action (Malhotra,
2001). KM is an activity for using information
technology in order to systematically classify, store,
and apply organisational and personal data and
information so that: (1) quality and quantity of the
creative knowledge within an organisation is
promoted, (2) the feasibility of knowledge is
improved, and (3) value is created for the
organisation (Liang, 2002). Five key activities can
be performed within the KM context in order to
remain competitive: acquisition, selection,
generation, internalisation and externalisation
(Holsapple and Singh, 2001). The realisation of
each stage is a pre-requisite for proceeding to the
following stage, hence resulting in the
externalisation of knowledge. A web-based KM
system for a virtual electronics manufacturing
enterprise is presented in (Chryssolouris et al,
2008).
Knowledge capture (or knowledge elicitation) is
important as loss of knowledge commonly occurs
when employees document investigations. When
investigating new issues employees will generally
look for similar issues that have occurred in the
past, but documentation created for the prior
solutions might not have captured the richness of
knowledge actually applied in the decision making
process. Hence, knowledge capture (elicitation)
helps the organisation in retention of knowledge for
future use (Sobodu, 2002).
Knowledge re-use (or feedback) is defined as the
sharing of best practice to help people resolve
common technical issues (Markus, 2001). Weise
(1996) defined knowledge re-use as the sharing of
information and documentation. The theory of
knowledge re-use, presented by Markus (2001),
emphasises the role of knowledge management
systems (KMS) and their repositories. The
knowledge is systematically processed and stored,
then re-used repeatedly when any similar situation
arises.


Benefits of re-using knowledge to organisations
include enhanced value of knowledge (Markus,
2001), capability to re-use knowledge (Baxter et al,
2008), common product characteristics (Baxter et
al, 2008), reduced time to develop new products
(Markus, 2001; Baxter et al, 2008), and reduced
business costs (Markus, 2001). Organisations re-use
knowledge to gain corporate competitive advantage
(Ma, 2005). It is more cost effective to re-use
knowledge that has already been created: Ma (2005)
argued that the process of learning from past
mistakes and successes can radically reduce design
and related production costs. Masood et al (2011a)
presented a framework for the integrated use of
design attributes, knowledge and uncertainty in the
aerospace sector.
Drawbacks of knowledge re-use include
information overload (Ma, 2005; Tang et al, 2010),
lack of ability to grade the value of knowledge
(Tang et al, 2010), loss of time and heavy search
costs (Garud and Kumaraswamy, 2005), varying
requirements of knowledge re-users (Markus,
2001), and knowledge re-users' unawareness of
knowledge sources (Galup et al, 2003). The costs
associated with introducing and maintaining
contextual KM environments and employees'
mentality towards knowledge sharing and re-use are
two main barriers to knowledge re-use (Nunes et al,
2009).
Ontologies are another means of sharing and
reusing information (Nunes et al, 2009). Kabilan
(2007) described a key use of ontology as sharing of
information between people, databases and
applications. Gruber (1995) defined ontology as a
representation of a conceptualisation and as a
formal specification of the concepts and terms of
the information universe of a specific domain.
Ontologies aim at making implicit domain
knowledge explicit (Kabilan, 2007). Hence, an
ontology is a form of knowledge representation
(Doultsinou, 2010). However, an ontology is not
synonymous with a knowledge base: a knowledge
base is an ontology populated with data
(Doultsinou, 2010), although there is a blurred line
where the ontology finishes and the knowledge base
starts (Noy and McGuinness, 2001). Ontologies are
used mainly for the following reasons (Noy and
McGuinness, 2001):
Sharing common understanding of the structure
of information among people or software agents
Facilitating reuse of domain knowledge
Making domain assumptions explicit
Disconnecting domain knowledge from the
operational knowledge
Exploring domain knowledge.
The ontology is the basis for a context-aware
content description of knowledge sources (Han and
Park, 2009). The concept of ontology has been used
and applied in areas, such as KM, knowledge
acquisition, information retrieval and mining, and
knowledge modelling. Ontologies can be
categorised by three levels of abstraction, i.e. upper,
mid-level and domain (Semy et al, 2004). An upper
ontology is independent of any particular domain
and provides a framework; a mid-level ontology can
be used as a link between abstract concepts used in
the upper and domain ontologies; and a domain
ontology specifies concepts related to a particular
domain.
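As a toy illustration of these three levels of abstraction, the sketch below layers invented concepts and walks the parent links from a domain concept up to the upper ontology; a real ontology would be expressed in a dedicated language such as OWL.

```python
# Toy three-level ontology: each concept maps to its parent concept.
# The upper level is domain-independent; the domain level is specific.
upper  = {"PhysicalObject": None, "Process": None}
mid    = {"Artifact": "PhysicalObject", "MaintenanceActivity": "Process"}
domain = {"AeroEngine": "Artifact", "BladeInspection": "MaintenanceActivity"}

ontology = {**upper, **mid, **domain}

def ancestors(concept):
    """Walk parent links from a domain concept up to the upper ontology."""
    chain = []
    while concept is not None:
        chain.append(concept)
        concept = ontology[concept]
    return chain

print(ancestors("AeroEngine"))       # ['AeroEngine', 'Artifact', 'PhysicalObject']
print(ancestors("BladeInspection"))  # ['BladeInspection', 'MaintenanceActivity', 'Process']
```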
Knowledge Management Systems (KMS) are IT-
based systems developed to sustain and improve the
organisational processes of knowledge creation,
storage/retrieval, transfer, and application (Alavi
and Leidner, 2001). The following are the major
classes of KMS:
Informational Knowledge Systems, which
primarily store and manage knowledge on a
potential use basis. Examples may include
databases and directories.
Knowledge Management Tools, which try to
simplify the access or provide direction to
knowledge and information within KMS by
decreasing the amount of time needed for the
user. Examples may include search tools or
portal applications.
Dynamic Knowledge Systems, which elicit
timely, on-demand and in-context information
and knowledge from people when someone
demands it. Examples may include KM help
desks.

4. DIGITAL FEEDBACK OF THROUGH-
LIFE SERVICE KNOWLEDGE TO
PRODUCT DESIGN AND
MANUFACTURE
The study of Service Knowledge Management
(SKM) is relatively new in the reported literature.
However, its popularity is growing among
academics and industry peers. To define SKM,
service needs to be defined first. The United
Nations provided a broad definition of services:
"Services are not separate entities over which
ownership rights can be established" (United
Nations, 2002). They cannot be traded separately
from their production. Services are heterogeneous
outputs produced to order and typically consist of
changes in the condition of the consuming units
realised by the activities of the producers at the


demand of the customers. By the time their
production is completed they must have been
provided to the consumers. Overall, services are
defined as activities, benefits, and satisfaction that
create returns directly or in relation to the sale of
goods (Kamponpan, 2007). The characteristics of
services include intangibility, inseparability,
heterogeneity, perishability and ownership.
Maintenance is a major service activity, required for
a variety of products having mechanical, electrical
or hydraulic systems, for example. The aim of a
maintenance service should be an increase in
customer functionality, which may be achieved
through increased availability; this in turn boosts
equipment reliability and cuts down repair time
(Viles et al, 2007). Broad
maintenance types and strategies include preventive
maintenance, corrective maintenance and predictive
maintenance. An approach to operational aircraft
maintenance planning is presented in (Papakostas et
al, 2010).
Servitization includes innovation of capabilities
and processes within an organisation that can lead
to a better value creation through a change from
selling products to selling product-service systems
(PSS) (Neely, 2008). Servitization literature makes
a significant distinction among four concepts: PSS,
servitization, servitized organisation and global
value system (Neely, 2008). PSS originated in
Scandinavia in the late 1990s and can be defined as
a marketable set of products and services capable of
jointly fulfilling user needs, while delivering value
in use (Meier et al, 2010; Annamalai et al, 2011).
The product/service ratio in this set can vary, either
in terms of function fulfilment or economic value
(Mont, 2002). A technical PSS is defined as a PSS
having the following characteristics (Roy and
Cheruvu, 2009):
A physical product core (e.g. aero engine)
enhanced and customised by a mainly non-
physical service shell (e.g. maintenance,
training, operation, disposal)
Relatively higher monetary value and
importance of the physical PSS core, and
Business to business relation between PSS
manufacturers and customers (Aurich et al,
2006).
It is observed that the term industrial PSS is also
used in the same sense as technical-PSS in literature
(Roy and Cheruvu, 2009).
Service Knowledge (SK) can be defined as the
amalgamation of processed information, which is
required by service personnel for the execution of
their activities (i.e. planned and unplanned
maintenance, service exchange, product repair and
overhaul, retrofitting and upgrades, training) stored
by them to be re-used when needed. Experience of
service personnel, gained through their tasks,
should also be integrated into SK (Doultsinou,
2010). This, however, reflects only a traditional
organisation where service is an isolated activity
relative to product design. In a PSS, service
knowledge is of equal if not greater value to the
product and service design teams, as they have the
greatest flexibility to turn the understanding gained
from service use into action to improve future
designs. The service delivery team can only use
service knowledge to recover to the intrinsic
performance level that is designed into the system;
the designer has the power to fundamentally change
the baseline performance.
SKM deals with SK capture and re-use to support
product design and service engineering. Product
design has been categorised in the literature as
original or adaptive (Mountney, 2009). An original
design is a completely new solution and product. An
adaptive design will satisfy an existing requirement
by providing a solution in a new way, therefore
requiring new or substantial changes to existing
components and possibly assemblies. Adaptive
design may also be applied to incremental
improvement to an existing solution to meet a new
requirement. There are three major stages to the
product design process:
Concept Design Stage concerned with the
product function. During this stage, the intended
functions of the product and potential solutions
to achieve them are explored;
Preliminary Design Stage concerned with the
relationship between function and form. The
requirements and functions finalised during the
concept stage are transformed into an initial
engineering general arrangement (i.e. a physical
representation) during this stage; and
Detailed Design Stage concerned with the
detail form of every component in the product.
The arrangement from the preliminary stage is
optimised and finalised, each part is fully
defined (including geometric dimensions and
tolerances), the final material selection takes
place and the product is assessed for technical
and economic viability. The necessary
documentation is also created to enable the
product to be produced and maintained.
Design for Manufacture (DfM) methodology
aims at the design of products that are easier to
manufacture by assessing their manufacturability
during early design stages. Manufacturability can
be defined as process capability to meet the product


attributes. An extension of DfM is Design for
Manufacture and Assembly (DfMA), which aims at
the design of products that are easier to manufacture
and assemble. The DfMA methodology aims to
enable greater thought around production and
assembly requirements before the detailed design
stage (Boothroyd et al, 2002).
mechanical engineering design has been considered
in the literature, where the main driver for the
selection is cost assuming all relevant technologies
are available (Lovatt and Shercliff, 1998). Nowack
(1997) reported on assessment of
manufacturability during early design stages and
provided guidelines. Assembly process templates
for the automotive industry are presented in
(Papakostas et al, 2010).
DfS is a design process that aims to reduce
maintenance costs at the design stage by supporting
product design with service information. The
knowledge pyramid is important to designers in this
context. In literature, several authors discussed
certain aspects of service information that are
beneficial to designers. Norman (1988) mentioned
that past operating experience could contribute
towards forecast reliability/availability that also
depends on the sample size. Jones and Hayes (1997)
discussed the value of collecting field failure
information during a product's life, and the analysis
of this data to assess the product's reliability.
Petkova (2003) described the flow of in-service
information back to the manufacturer (within the
context of the consumer electronics industry) and
stressed the significance of failure root causes for
the improvement of product quality.
In order to benefit the DfS process, identification of
information types is important. Constructive in-
service information in product design includes: in-
service component life, failure types, failure causes,
deterioration mechanisms regarding various
components, the occurrence rate and impact of these
mechanisms, reliability data and spares cost (Jagtap
et al, 2007). The service information that designers
are interested in is linked with the following (Jagtap
et al, 2007):
Failure mechanisms (e.g. failure mode);
Maintainability (e.g. accessibility);
Reliability (e.g. Weibull analysis of reliability);
Service instructions (e.g. inspection
recommendations);
Operating data (e.g. difference of various
performance parameters with operator);
Component cost (e.g. repair cost);
Design information (e.g. technical diagrams);
and
Component life (e.g. average life).
Availability of in-service information can help in
reducing maintenance costs, prediction of product
reliability/availability, evaluation of product
reliability in the field, maintenance optimisation,
reliability improvement of future products and
fulfilment of the maintainability and reliability
requirements (Jagtap et al, 2007).
Mapping service knowledge onto design
requirements is very important for efficient SKM.
Baxter et al (2009b) concluded that there is a clear
design bias in the manufacturing knowledge
literature, that service research is severely lacking,
and that service (operation) is under-represented in
the manufacturing knowledge domain. Service
knowledge is important, particularly in the wake of
the shifting nature of production and service (Baxter
et al, 2009a). Hence, there is a research gap and
further research is recommended.
Current industrial practices to inform design
functions mainly include face-to-face
communications (i.e. design review meetings, etc.),
communities-of-practice sessions, group or
individual emails, and file folders on local PCs or
LANs (for storage purposes). These forms of
communication have their pros and cons; e.g. face-
to-face meetings are a very good means of
knowledge transfer, and telephone/email
communications deliver required knowledge
quickly. However, these forms of communication
are not structured well enough to allow knowledge
to be used in the longer term as and when required.
Also, search capabilities, which are essential for
finding relevant knowledge quickly and at the right
time, are very poor in current forms of
communication. In order to address these
drawbacks, digital feedback is required. Digital
feedback may be provided through structured
knowledge-base systems that have capabilities for
uploading, searching, prioritising and reporting (a
minimal sketch is given below).
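The record structure and search behaviour below are a hypothetical minimal sketch of such a knowledge-base system; the fields, class names and keyword search are illustrative assumptions, not the SKB implementation.

```python
from dataclasses import dataclass, field

@dataclass
class ServiceKnowledgeRecord:
    """One structured entry fed back from a service investigation."""
    title: str
    deterioration_mechanism: str   # e.g. failure mode observed in service
    mitigation_guidance: str       # actionable understanding for designers
    priority: int                  # 1 (low) .. 5 (high impact)
    tags: list = field(default_factory=list)

class ServiceKnowledgeBase:
    def __init__(self):
        self.records = []
    def upload(self, record):
        self.records.append(record)
    def search(self, keyword):
        """Return matching records, highest priority first."""
        hits = [r for r in self.records
                if keyword.lower() in (r.title + " " + " ".join(r.tags)).lower()]
        return sorted(hits, key=lambda r: r.priority, reverse=True)

kb = ServiceKnowledgeBase()
kb.upload(ServiceKnowledgeRecord(
    title="Blade root wear", deterioration_mechanism="fretting",
    mitigation_guidance="increase coating thickness at root contact face",
    priority=4, tags=["turbine", "wear"]))
for record in kb.search("wear"):
    print(record.priority, record.title, "->", record.mitigation_guidance)
```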
Hence, it is important in future research to develop
digital feedback links between through-life
engineering service knowledge and design to
support the product (e.g. maintenance) and the
customer. These links may be additional to the links
between service knowledge and manufacturing,
which may also have commonalities amongst them.
A clear set of requirements specifications for
through-life engineering service knowledge, its
capture and representation (data/information
systems), and re-use methodologies is required to
fulfil such requirements. SKM-related research
initiatives have been undertaken in the recent past,
including but not limited to IPAS, DATUM,
IITKM, HIPARSYS, X-Media, SILOET,
SAMULET and SKB (Masood et al, 2011b). Some
of these are still ongoing, e.g. SILOET, SAMULET
and SKB. Current service knowledge feedback
challenges are discussed in the following section.


4.1. CURRENT CHALLENGES
The following current challenges in the feedback of
through-life engineering service knowledge have
been identified from the literature review and
analysis presented in this paper:
Challenge 1: Feedback to Product Design:
Service Engineering is a core part of the product life
cycle, but there is a lack of focus on effectively
feeding back service knowledge to product design
stages. There is a lack of available structured
methodologies for capturing and structuring service
knowledge in order to map service knowledge onto
design requirements. The challenge here is to devise
an effective methodology to capture service
knowledge gained from previous learning (possibly
in a structured way) and then re-use by feedback to
conceptual and detailed product design stages so
that new/revised product design incorporates the
new learning. The future work may include: (1)
Development of a representation of the service
knowledge that can be used by design engineers to
improve product design; (2) Identification of service
knowledge required by product design engineers at
conceptual and detailed design stages; and (3)
Development of an effective methodology to re-use
service knowledge for product design and service
engineering stages.
Challenge 2: Feedback to Manufacturing:
Service knowledge is important for manufacturing
and assembly life cycle stages to improve its
processes when fed back through new/revised
product design based upon previous service
experience. This may also include considerations of
repair margins as required by repair engineers
through new/revised product designs incorporating
such considerations. The challenges here are two
fold: (1) to feed back through-life service
knowledge gained from previous service events,
root cause analysis, etc where repair engineers may
say that more margins are needed while design
engineers may argue against it; and (2) to feedback
from manufacturing/assembly to design engineers in
order to optimise their manufacturing/assembly
processes where they may ask designers to revise
features, profiles or contours etc for alternatives for
which they may have required fixtures and tooling
etc. In both cases, establishing an effective feedback
loop is challenging.
Challenge 3: Feedback to Service/Repair
Engineering: Service knowledge is important for
service/repair engineering functions of an
organisation, especially for its uses in root cause
analysis and problem solving, mitigation of
operational risks, improving repair policies,
recommendations of repair margins, etc.
Challenge 4: Use of Through-Life Engineering
Service Knowledge in Reducing Product Life Cycle
Cost: The knowledge of previous service
experience could help reduce product life cycle cost
by giving priority to mitigate risks imposed on those
product commodities, which exhibit high costs.
Challenge 5: Corporate Definition of Through-
Life Engineering Service Knowledge: There are
many definitions of service knowledge that define
how knowledge is managed and used. However, the
suitability for a specific knowledge type in a
specific situation varies accordingly. Through-life
service knowledge has earlier been defined as
"actionable understanding". This definition does not
apply only to through-life service knowledge but
has a broader scope, defining service knowledge as
discussed earlier in this paper. However, getting
corporate-level consensus on this definition may be
challenging, especially in large global
organisations.
A digital framework is proposed in the following
section to address the service knowledge feedback
challenges.
5. SERVICE KNOWLEDGE BACKBONE
(SKB) FRAMEWORK
In order to address the challenges of through-life
service knowledge feedback to product design and
manufacture, as discussed above, a service
knowledge backbone (SKB) framework is proposed
in Figure 3.
[Figure 3 depicts the Service Knowledge Backbone as the
TO-BE feedback channel spanning the design, development
and service stages: new design, development and service
knowledge is uploaded to the backbone, which returns
design, development and service/repair mitigation guidance;
the existing AS-IS feedback and forward links run directly
between the stages.]

Figure 3: SKB Framework
The SKB framework proposes to develop a
through-life service knowledge base of deterioration
mechanisms from the service stage of the product
life cycle, with a feedback link to product design
features. The present link between the engineering
service stage and the product design stage is either
weak or takes a long time to acquire the levels of
knowledge necessary to undertake design
modifications. It is proposed here that risk
mitigation guidance should be uploaded to the
service knowledge base whenever investigations
into new service events are complete. It is important
to derive key mitigation guidance from high-impact
service events, which might have gone through root
cause analysis, functional analysis and risk
mitigation, in terms of what must be done, should be
done, could be done and should not be done. Such
risk mitigation guidance should first inform product
design engineers (at the concept, preliminary and
detailed design stages), as most deterioration
mechanisms could be prevented or reduced at the
design stages. The SKB framework provides an
interface to the communities of practice, where new
knowledge (design, development or service) is
uploaded when available, and in return the
stakeholders (design, development or service) can
get "actionable understanding" in terms of
mitigation guidance (design, development or
service/repair). This framework places particular
emphasis upon strengthening the weaker AS-IS link
between service and design.
Causal loop models (CLMs) can represent the
causal effects of activities (Forrester, 1961;
Sterman, 2000). This type of modelling helps
identify aspects of complexity, and dynamics can be
modelled through the technique (Masood, 2009;
Masood et al, 2010; Masood and Weston, 2011).
Implementation of CLMs has been reported in
several case studies, either standalone or as part of
integrated approaches (Masood, 2009; Zhen et al,
2009). The causes and effects of establishing the
SKB on the feedback of through-life service
knowledge to product design and manufacture are
presented as a CLM in Figure 4. The CLM is
mapped across the design, development and service
stages of the product life cycle. The design stage
includes conceptual, preliminary and detailed
product design. The development stage includes
product engineering, manufacturing, assembly and
testing. The service stage includes product service,
repair and maintenance. Links between causes and
effects are represented across the product life cycle
stages, with positive or negative polarity
representing increasing or decreasing effects of the
related causes.
The main idea revolves around providing an enhanced SKB and observing the effects of the different causes linked to it. For example, if service knowledge capture is improved, it has a positive effect on the SKB, which in turn positively affects conceptual and detailed design characteristics. On the other hand, it could also increase the cost of capturing service knowledge and maintaining the SKB, which further increases life cycle cost. The CLM further suggests that improved design could positively affect manufacturing (fixtures, tooling, inspection, quality). The overall effect could be a reduced maintenance burden and frequency of occurrence, in effect reducing operational disruption, which could lead to fewer maintenance and repair events. However, too strong a rear-view-mirror approach to risk management may detract from forward-looking design. The CLM also suggests that life cycle cost could be reduced as an overall effect of the presented feedback loops; however, a balanced view of benefits and costs needs to be taken.
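To illustrate the mechanics of such a model, the sketch below (illustrative Python; node names are paraphrased from the CLM of Figure 4 and the tooling is an assumption, not the authors' method) encodes a CLM as a signed directed graph and computes the net sign of a causal chain as the product of its edge signs.

# A minimal sketch of a causal loop model as a signed digraph.
# Edge sign +1 means the cause increases the effect, -1 that it decreases it.
links = {
    ("service knowledge capture", "SKB"): +1,
    ("SKB", "design characteristics"): +1,
    ("design characteristics", "maintenance burden"): -1,
    ("maintenance burden", "operational disruption"): +1,
    ("service knowledge capture", "cost of knowledge capture"): +1,
    ("cost of knowledge capture", "life cycle cost"): +1,
}

def path_effect(path):
    # Net sign of a causal chain: the product of the edge signs along it.
    sign = 1
    for cause, effect in zip(path, path[1:]):
        sign *= links[(cause, effect)]
    return sign

chain = ["service knowledge capture", "SKB", "design characteristics",
         "maintenance burden", "operational disruption"]
print(path_effect(chain))  # -1: improved capture ultimately reduces disruption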


Figure 4: Causal loop model: Through-life engineering service knowledge feedback to product design and manufacture


6. CONCLUSIONS
A state-of-the-art literature review and analysis on
through-life service knowledge feedback to product
design and manufacture is presented in this paper. It
is emphasised that knowledge should be defined as ACTIONABLE UNDERSTANDING in an
industrial setting. The following through-life service
knowledge feedback challenges are discussed in the
paper:
Feedback of through-life service knowledge to
product design, manufacturing and
service/repair engineering;
Use of through-life service knowledge in
reducing product life cycle cost; and
Defining and implementing a corporate
definition of service knowledge.
A Service Knowledge Base (SKB) framework is
proposed to address the challenges of through-life
service knowledge feedback. A Causal Loop Model
(CLM) is also presented, which is drawn across
product life cycle stages of design, development and
service. Feedback of through-life service knowledge
to product design (conceptual/preliminary/detailed),
manufacturing/ assembly, and service/repair
engineering are modelled in this CLM. The CLM
presents increasing or decreasing links between
causes and effects associated with provision of an
enhanced SKB. It is proposed through this CLM
that the through-life service knowledge feedback
challenges could be overcome by establishing an
enhanced SKB. For future research, it is
recommended to develop further methodologies for
effectively capturing, representing, and re-using
through-life service knowledge to support product
design and manufacture. It is also recommended
that such frameworks be demonstrated through
application development and industrial case studies.
7. ACKNOWLEDGMENTS
The authors acknowledge Rolls-Royce plc, EPSRC
and TSB, UK for providing funds to Knowledge
Transfer Partnership Program No. 7767 on SKB.
KT-Box Project partners are also acknowledged for
sponsoring the SKB project.
REFERENCES
Ackoff, R.L., "From data to wisdom", Journal of Applied
Systems Analysis, Vol. 16, No. 1, 1989, pp 3-9.
Alavi, M. and D.E. Leidner, "Review: Knowledge
management and knowledge management
systems: Conceptual foundations and research
issues", MIS quarterly, Vol. 25, No. 1, 2001, pp
107-136.
Annamalai, G., R. Hussain, et al, "An Ontology for
Product-Service Systems", Functional Thinking
for Value Creation, 2011, pp 231-236.
Aurich, J.C., C. Fuchs, et al, "Life cycle oriented design
of technical Product-Service Systems", Journal
of Cleaner Production, Vol. 14, No. 17, 2006,
pp 1480-1494.
Baxter, D., J. Gao, et al, "A framework to integrate
design knowledge reuse and requirements
management in engineering design", Robotics
and Computer-Integrated Manufacturing, Vol.
24, No. 4, 2008, pp 585-593.
Baxter, D., R. Roy, et al, "A knowledge management
framework to support product-service systems
design", International Journal of Computer
Integrated Manufacturing, Vol. 22, No. 12,
2009a, pp 1073-1088.
Baxter, D., R. Roy, et al, "Managing knowledge within
the manufacturing enterprise: an overview"
International Journal of Manufacturing
Technology and Management, Vol. 18, No. 2,
2009b, pp 183-209.
Boothroyd, G., P. Dewhurst, et al, "Product design for
manufacture and assembly", M. Dekker, 2002.
Chryssolouris, G., S. Makris, et al, Knowledge
Management in a Virtual Enterprise - Web
Based Systems for Electronics Manufacturing,
"Methods and Tools for Effective Knowledge
Life-Cycle-Management", A. Bernard and S.
Tichkiewitch, Springer, 2008, pp 107-126.
Doultsinou, A., Service knowledge capture and re-use to
support product design, School of Applied
Sciences, Cranfield University, Cranfield, PhD
Thesis, 2010, p 377.
Forrester, J.W., "Industrial Dynamics", MIT Press,
Cambridge, MA, 1961.
Galup, S.D., R. Dattero, et al, "The enterprise knowledge
dictionary", Knowledge Management Research & Practice, Vol. 1, No. 2, 2003, pp 95-101.
Garud, R. and A. Kumaraswamy, "Vicious and virtuous
circles in the management of knowledge: The
case of Infosys Technologies", MIS Quarterly,
Vol. 29, No. 1, 2005, pp 9-33.
Gruber, T.R, "Toward principles for the design of
ontologies used for knowledge sharing",
International Journal of Human and Computer
Studies, Vol. 43, 1995, pp 907-928.
Han, K.H. and J.W. Park, "Process-centered knowledge
model and enterprise ontology for the
development of knowledge management
system", Expert Systems with Applications, Vol.
36, No. 4, 2009, pp 7441-7447.
Harrison, A., "Design for Service: Harmonising Product
Design With a Service Strategy", ASME Turbo
Expo 2006: Power for Land, Sea, and Air
(GT2006), American Society of Mechanical
Engineers, Barcelona, Spain, 2006, pp 135-143.
Hey, J., "The data, information, knowledge, wisdom
chain: The metaphorical link", Relatório técnico, Intergovernmental Oceanographic Commission (UNESCO), 2004.
Holsapple, C.W. and M. Singh, "The knowledge chain
model: activities for competitiveness", Expert
Systems with Applications, Vol. 20, No. 1, 2001,
pp 77-98.


Jagtap, S., A.L. Johnson, et al, "In-service information
required by engineering designers", 16th
International Conference on Engineering
Design (ICED'07), The Design Society, 2007.
Jones, J. and J. Hayes, "Use of a field failure database for
improvement of product reliability", Reliability
Engineering & System Safety, Vol. 55, No. 2,
1997, pp 131-134.
Kabilan, V., Ontology for Information Systems (O4IS)
Design Methodology, The Royal Institute of
Technology, PhD Thesis, 2007.
Kamponpan, L., Service supply chains, Cranfield
University, Cranfield, MSc Thesis, 2007.
Liang, T.-P. "Theory and practice of electronic
commerce", Taipei, Taiwan, Hwa-Tai
Publishing, 2002.
Lovatt, A.M. and H.R. Shercliff, "Manufacturing process
selection in engineering design Part 1: the role
of process selection", Materials and Design,
Vol. 19, No. 5-6, 1998, pp 205-215.
Ma, W., Factors affecting knowledge reuse: a
framework for study, National Sun Yat-sen
University, Taiwan, MSc Thesis, 2005.
Malhotra, Y., "Expert systems for knowledge
management: crossing the chasm between
information processing and sense making",
Expert Systems with Applications, Vol. 20, No.
1, 2001, pp 7-16.
Markus, M.L., "Toward a Theory of Knowledge Reuse:
Types of Knowledge Reuse Situations and
Factors in Reuse Success", Journal of
Management Information Systems, Vol. 18, No.
1, 2001, pp. 57-93.
Masood, T., Enhanced Integrated Modelling Approach
to Reconfiguring Manufacturing Enterprises,
Wolfson School of Mechanical and
Manufacturing Engineering, Loughborough
University, Loughborough, PhD Thesis, 2009, p
314.
Masood, T., J.A. Erkoyuncu, et al, "A digital decision
making framework integrating design attributes,
knowledge and uncertainty in aerospace sector",
7th International Conference on Digital
Enterprise Technology (DET), Athens, Greece,
2011a, pp 1-9.
Masood, T., R. Roy, et al, "Service knowledge feedback
to product design: Recent advances, challenges
and future trends", Cranfield University &
Rolls-Royce, Not submitted journal article,
2011b, pp 1-31.
Masood, T. and R. Weston, "An integrated modelling
approach in support of next generation
reconfigurable manufacturing systems",
International Journal of Computer Aided
Engineering and Technology, Vol. 3, No. 3-4,
2011, pp 372-398.
Masood, T., R. Weston, et al, "A computer integrated
unified modelling approach to responsive
manufacturing", International Journal of
Industrial and Systems Engineering, Vol. 5, No.
3, 2010, pp 287-312.
Meier, H., R. Roy, et al, "Industrial Product-Service
Systems-IPS2", CIRP Annals-Manufacturing
Technology, Vol. 59, No. 2, 2010, pp 607-627.
Mont, O.K. "Clarifying the concept of product-service
system", Journal of Cleaner Production, Vol.
10, No. 3, 2002, pp 237-245.
Mountney, S.L., Acquisition and sharing of innovative
manufacturing knowledge for preliminary
design, School of Applied Sciences, Cranfield
University, Cranfield, PhD Thesis, 2009.
Neely, A., "Exploring the financial consequences of the
servitization of manufacturing", Operations
Management Research, Vol. 1, No. 2, 2008, pp
103-118.
Nonaka, I., "A dynamic theory of organizational
knowledge creation", Organization Science,
Vol. 5, No. 1, 1994, pp 14-37.
Norman, D., "Incorporating operational experience and
design changes in availability forecasts",
Reliability Engineering and System Safety, Vol.
20, 1988, pp 245-261.
Nowack, M.L., Design guideline support for
manufacturability, University of Cambridge,
Cambridge, PhD Thesis, 1997.
Noy, N.F. and D.L. McGuinness, Ontology
Development 101: A Guide to Creating Your
First Ontology, 2001, Accessed 2010.
Nunes, V.T., F.M. Santoro, et al, "A context-based model
for Knowledge Management embodied in work
processes", Information Sciences, Vol. 179, No.
15, 2009, pp 2538-2554.
Papakostas, N., K. Efthymiou, et al, "Assembly Process
Templates for the Automotive Industry", 3rd
CIRP Conference on Assembly Technologies
and Systems (CATS 10), Trondheim, Norway,
2010, pp 151-156.
Papakostas, N., P. Papachatzakis, et al, "An approach to
operational aircraft maintenance planning",
International Journal of Decision Support
Systems, Vol. 48, No. 4, 2010, pp 604-612.
Petkova, V.T., An analysis of field feedback in
consumer electronics industry, Eindhoven
University of Technology, Netherlands, PhD
Thesis, 2003.
Roy, R. and K.S. Cheruvu, "A competitive framework
for industrial product-service systems",
International Journal of Internet Manufacturing
and Services, Vol. 2, No. 1, 2009, pp 4-29.
Semy, S.K., M.K. Pulvermacher, et al, "Toward the use
of an upper ontology for US government and
US military domains: An evaluation", MITRE
CORP, BEDFORD MA, 2004.
Sobodu, O., Knowledge Capture and Representation in
Design and Manufacture, School of Industrial
and Manufacturing Science, Cranfield
University, Cranfield, MSc, 2002.
Sterman, J.D., "Business dynamics: systems thinking and
modelling for a complex world", McGraw-Hill,
2000.
Tang, L., Y. Zhao, et al, "Codification vs personalisation:
A study of the information evaluation practice
between aerospace and construction industries",


International Journal of Information
Management, Vol. 30, No. 4, 2010, pp 315-325.
United Nations, Manual on statistics of international
trade in services, United Nations Publications,
New York, 2002, pp 1-190,
<http://www.oecd.org/dataoecd/32/45/2404428.
pdf>
Viles, E., D. Puente, et al, "Improving the corrective
maintenance of an electronic system for trains",
Journal of Quality in Maintenance Engineering,
Vol. 13, No. 1, 2007, pp 75-87.
Weise, R.H., "Representing the corporation: strategies for
legal counsel", USA, Aspen Law & Business
Publishers, 1996.
Wilson, T.D., "The nonsense of knowledge management"
Information Research, Vol. 8, No. 1, 2002.
Young, B., A.F. Cutting-Decelle, et al, "Sharing
Manufacturing Information and Knowledge in
Design Decision and Support", Advances in
Integrated Design and Manufacturing in
Mechanical Engineering, 2005, pp 173-185.
Zhen, M., T. Masood, et al, "A structured modelling
approach to simulating dynamic behaviours in
complex organisations", Production Planning &
Control: The Management of Operations, Vol.
20, No. 6, 2009, pp 496 - 509.





A DIGITAL DECISION MAKING FRAMEWORK INTEGRATING
DESIGN ATTRIBUTES, KNOWLEDGE AND UNCERTAINTY IN
AEROSPACE SECTOR
Tariq Masood
Decision Engineering Centre, Manufacturing
Department, Cranfield University, UK
(Seconded at Life Cycle Engineering, Rolls-
Royce plc, Derby, UK)
Email: Tariq.Masood@Cranfield.ac.uk
John Ahmet Erkoyuncu
Manufacturing Department, School of Applied
Sciences, Cranfield University, UK
Email: J.A.Erkoyuncu@Cranfield.ac.uk


Rajkumar Roy
Head, Manufacturing Department, Cranfield
University, UK and Director, The EPSRC
Centre for Innovative Manufacturing in
Through-life Engineering Services
Email: R.Roy@Cranfield.ac.uk
Andrew Harrison
Rolls-Royce Engineering Associate Fellow,
Life Cycle Engineering, Rolls-Royce plc,
Derby, UK
Email: Andrew.Harrison@Rolls-Royce.com


ABSTRACT
The delivery of integrated product and service solutions is growing in the aerospace industry,
driven by the potential of increasing profits. Such solutions require a life cycle view at the design
phase in order to support the delivery of the equipment. The influence of uncertainty associated
with design for services is increasingly a challenge due to information and knowledge constraints.
There is a lack of frameworks that aim to define and quantify the relationship of information and knowledge with uncertainty. Driven by this gap, this paper presents a framework to illustrate
the link between uncertainty and knowledge within the design context for services in the
aerospace industry. The paper combines industrial interaction and literature review to initially
define (1) the design attributes, (2) the associated knowledge requirements, and (3) uncertainties
experienced. The concepts and inter-linkages are developed with the intention of developing a
software prototype. Future recommendations are also included.
KEYWORDS
Knowledge, design, uncertainty, digital feedback, life cycle.
1. INTRODUCTION
The aerospace industry is experiencing a shift from
ad-hoc service provision to integrated product and
service solutions that enable the delivery of the
availability and capability required from an engine
(Alonso-Rasgado and Thompson, 2006). This has
promoted an emphasis on the life cycle implications
of engine design due to the shift in the business
model, which incentivises reduced maintenance cost
whilst enhancing equipment operability/
functionality (Datta and Roy, 2009). The need to
predict service requirements much earlier than the
traditional model (e.g. spares sales) and the bundled
nature of service delivery has increased the
uncertainties experienced by the Original
Equipment Manufacturer (OEM) (Erkoyuncu et al,
2009). As a result, the OEMs are facing challenges


associated with the boundaries of their knowledge
in delivering services within the emerging business
model.
Knowledge can be defined in terms of a justified
true belief (Nonaka, 1994). It involves personalised
information, which is processed in the minds of
individuals (Alavi and Leidner, 2001). In an
industrial setting, knowledge is considered as an
actionable understanding. Knowledge has
typically been classified into tacit and explicit
knowledge and the associated contents depend on
the context. Tacit knowledge refers to the personal
and experienced based nature of knowledge
(Sobodu, 2002). On the other hand, explicit
knowledge involves formally documented,
systematic, and well structured language (Nonaka,
1994). Knowledge within the context of life cycle
design includes a number of aspects associated to
the different phases of an aero-engine (Doultsinou,
2010). The existence of knowledge enhances the
confidence in events that have been predicted.
Uncertainty refers to things that are not known or
known imprecisely (Walker et al, 2003). The
sources of uncertainty have often been classified
into two bases, including epistemic and aleatory
(Erkoyuncu et al, 2010). Aleatory uncertainty refers
to the uncertainty that arises from natural,
unpredictable variation in the performance of the
system under study (Daneshkhah, 2004). On the
other hand, epistemic uncertainty arises from lack
of knowledge about the behaviour of the system that
is conceptually resolvable (Thunnissen, 2005). It is
worth recognising that uncertainty does not have to hold negative consequences; it may also lead to positive outcomes. It may, though, have a constraining role from a decision-making perspective when designing an engine.
The link between knowledge and uncertainty has
often been highlighted (in the case of epistemic
uncertainty). Ackoff (1989) argues that with increased knowledge the level of uncertainty diminishes, though no mechanism has been proposed in the literature that shows the relationship between knowledge and uncertainty in a qualitative or quantitative manner. Understanding this relationship will further enhance decision making
during the design process. For instance, it will be
possible to conduct cost-benefit analysis to
understand the value of changing the level of
knowledge.
In light of the challenge of achieving optimised
engine design, this paper aims to develop a
framework/methodology: (1) to demonstrate the
influence of knowledge on uncertainty, and (2) to
visualise the implications of changing the level of
knowledge on the level of uncertainty experienced
in life cycle design. The objectives include:
Capture the required areas of knowledge;
Define a mechanism to capture the required
value of knowledge;
Identify a mechanism to capture the current
state of knowledge;
Develop a mechanism to change the knowledge
level whilst representing the benefit; and
Build a mechanism that links the level of
knowledge and the level of uncertainty.
Design attributes, knowledge and uncertainty in
design are discussed in the following. A digital
decision making framework based upon design
attributes, knowledge and uncertainty is also
presented along with discussion. This is followed by
conclusions and future work.
2. DESIGN ATTRIBUTES
Within the context of this study, design attributes
represent key features of customer requirements
regarding aerospace-engine design architecture.
Some of the key attributes include specific fuel
consumption, weight, maintenance cost, and unit
cost. Each design attribute should be considered as a
source of value to the customer (increasing their
revenue potential or reducing their costs). Whilst
there are many design attribute level options to
achieve product level requirements, analysing
different options in a systematic and rapid manner is
essential. Variation in options is driven by the
performance against targets for each of the
attributes, which may necessitate improving some
design attributes and downgrading others. Thus, the
manufacturer needs to devise measures (e.g. choose
a design attribute value to change) to account for
any difference between the current design attribute
state and the customer-required level. Following are
the key engine design attributes:
Specific Fuel Consumption (SFC): The weight
flow rate of fuel required to produce a unit of
power or thrust, for example, pounds per
horsepower-hour;
Weight: Whilst the granularity may vary (e.g. engine, component), it focuses on the weight of the product, e.g. in pounds;
Noise: The total noise from all sources other
than a particular one of interest (usually
measured in decibels);
Unit cost: The cost of a given unit of a product;
Life Cycle Cost (LCC): A measurement of the
total cost of using equipment over the entire
time of service of the equipment; includes
initial, operating, and maintenance costs;
Emission: The substance discharged into the air,
e.g. by internal combustion engine;


Development and testing cost: Costs incurred
during development and testing;
Thrust: A propulsive force produced by fluid
pressure or change of momentum of the fluid in
a jet engine, rocket engine, etc; and
Reliability: Consistent and productive engines,
parts, etc.
Each design attribute will typically be assigned a
minimum and maximum (or additional threshold)
value agreed with the customer that guides the
solution provider throughout the equipment life
cycle. In achieving the requirements for each design
attribute the solution provider may face a number of
factors that influence its performance in achieving
these targets. Additionally, the targets may change
throughout the life cycle. The performance of the
solution provider in reacting to and/or driving
design attribute requirements throughout the
equipment life cycle partly determines the
satisfaction level of the customer and hence
influences competitive positioning.
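To make the threshold idea concrete, the short sketch below (illustrative Python; the attribute, unit and numbers are hypothetical, not values from the paper) represents a design attribute with customer-agreed minimum and maximum values and reports how far the current design state sits outside the agreed band.

# A minimal sketch of a design attribute with customer-agreed thresholds.
from dataclasses import dataclass

@dataclass
class DesignAttribute:
    name: str
    unit: str
    minimum: float   # customer-agreed lower threshold
    maximum: float   # customer-agreed upper threshold
    current: float   # current design state

    def deviation(self) -> float:
        # Distance outside the agreed band (0.0 if within it).
        if self.current < self.minimum:
            return self.minimum - self.current
        if self.current > self.maximum:
            return self.current - self.maximum
        return 0.0

sfc = DesignAttribute("specific fuel consumption", "lb/(hp*h)", 0.30, 0.38, 0.41)
print(f"{sfc.deviation():.2f}")  # 0.03 above the agreed maximum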
3. KNOWLEDGE IN DESIGN
For the purpose of this paper, knowledge is defined
in the industrial setting as actionable
understanding. A knowledge hierarchy (data-
information-knowledge-wisdom) is defined, in
which simple data could be enhanced up to
information, knowledge and then wisdom level by
increasing understanding and context independence.
The authors contend that industrial value is only
released when this hierarchy generates sufficient
understanding to enable more effective or efficient
decisions and actions to be taken.
For example, customer value from an aero-engine is
released during the service phase of the product life
cycle. Whilst functioning in service the engine
contributes to the customer revenue generation (by
supplying the motive power). In stark contrast,
whilst out of operation for servicing the engine
contributes only to costs. It is therefore a key
requirement to understand the drivers of loss of
function and maintenance requirements in order to
achieve the maximum functional availability of the
product. A knowledge of the maintenance drivers
with availability of mitigation guidance for future
designs or redesigns is clearly important and of
significant value in this context.
Digital feedback of through-life engineering
service knowledge to product design and
manufacture is challenging. There is a lack of
available structured methodologies for capturing
and structuring service knowledge in order to map
service knowledge onto design requirements. The
challenge here is to devise an effective methodology
to capture service knowledge gained from previous
learning, possibly in a structured way, and then
feedback to conceptual and detailed product design
stages so that new/revised product designs
incorporate the new learning.
Service knowledge is also important for the
service/repair engineering functions of an
organisation, especially for its uses in root cause
analysis, problem solving, mitigation of operational
risks, improving repair policies, recommendations
of repair margins, etc. The knowledge of previous
service experience could help reduce product life
cycle cost by giving priority to mitigation of risks
on those product commodities, which exhibit high
costs. Keeping product life cycle cost at minimum is
challenging.
The following feedback loops of through-life
engineering service knowledge are considered in
this paper: (1) service to design; and (2)
manufacture/assembly to design. The design
function has to understand manufacture, assembly and operation, and the challenge is to
achieve a balanced design that minimises the design
cost impact in all three stages (with appropriate
weighting to their impact on customer value). In
both cases, establishing an effective feedback loop is
also challenging.
The through-life engineering service knowledge
and its impact on product design and manufacture are presented as a causal loop model (CLM) in Figure
1. The CLM is mapped across design, development
and service stages of product life cycle. The design
stage includes conceptual, preliminary and detailed
product design. The development stage includes
product engineering, manufacturing, assembly and
testing. The service stage includes product service,
repair and maintenance. Here links between causes
and effects are represented across product life cycle
stages that have positive or negative links
representing increasing or decreasing effects of
related causes. The CLM revolves around
enhancing the service knowledge backbone (SKB)
of an aerospace organisation, which could be partly
achieved by improving service knowledge capture.
The enhanced SKB could increase knowledge levels in the conceptual and detailed product design stages, which could lead to optimised design characteristics. This could increase confidence in previous designs and design robustness, and decrease design costs and hence life cycle costs. Improved design characteristics could lead to improved design of fixtures, tooling and inspection, and in effect of the actual equipment. This could result in minimising maintenance burden, frequency of occurrence and
operational disruption. As a result, the number of
maintenance and repair events could be reduced
with a commensurate reduction in cost of


maintenance and repair leading to a reduction in life
cycle cost. Quality could be improved by achieving
an increased level of design robustness and together
with decreased operational disruption it could result
in improved customer service.
On the other side, an increase in service knowledge capture will also result in higher costs of capturing and maintaining knowledge, hence increasing the life cycle cost. Variability in customer requirements is another factor that could increase design costs and hence the life cycle cost. A robust
design may increase requirements of
capabilities/skills, while higher confidence in design
may reduce these requirements. Improved design of
fixtures, tooling and inspection also increases these
requirements, on provision of which the state of
fixtures, tooling and inspection improves as well as
quality. Provision of these requirements will result
in higher life cycle costs in both cases. However,
there is an optimum point at which the maximum
value versus cost of enhancing service knowledge is
achieved.

[Figure: causal loop diagram spanning the Service, Development and Design stages; nodes include Service knowledge capture, Service Knowledge Backbone (SKB), Conceptual and detailed design characteristics, Robust Design, Design/State of fixtures, tooling and inspection, Quality, Maintenance burden, Frequency of occurrence, Operational disruption, No. of maintenance and repair events, Cost of maintenance and repair, Design cost, Cost of capturing and maintaining knowledge, Capabilities/skills requirements, Variability in customer requirements, Confidence in previous design, Customer service and Life cycle cost, connected by (+)/(-) links.]

Figure 1: Causal loop model: Through-life engineering service knowledge feedback to product design and manufacture
4. UNCERTAINTY IN DESIGN
There are many types of uncertainties from a
service perspective that can be experienced during
the product design process. The sources vary driven
by a number of factors and their degree of influence
evolves over time. Major categories of uncertainties
experienced in service delivery include:
Engineering uncertainty considers factors that
affect strategic decisions with regards to the
future service and support requirements (i.e.
how will the service be delivered? Offshore,
obsolescence management, rate of system
integration issues);
Operation uncertainty considers factors that
affect service and support delivery involved on
a daily basis. It focuses on equipment level
activities (i.e. how much service need will there
be? Onshore, maintenance, quality of
components and manufacturing, operating
parameters);
Affordability uncertainty considers the
predictability in the customer's ability to fund a
project throughout its contractual duration (e.g.
Customer ability to spend, customer willingness
to spend);
Commercial uncertainty considers factors that
affect the contractual agreement, (e.g. exchange
rates, interest rates, commodity and energy
prices);
Performance uncertainty considers factors that
affect reaching the performance goals (e.g. key
performance indicators); and
Training uncertainty considers factors that
affect the delivery of training to the customer.
The specified categories of uncertainties may
have a strategic or operational influence over the
design considerations. Along these lines, the


affordability and commercial categories guide how
the contract should be agreed at the outset from a
financial perspective, whilst also taking account of
relationships across the supply network. Industry
and the customer jointly contribute the level of
uncertainty experienced in these categories. On the
other hand, the influence of the operation,
engineering and training categories tend to be at an
operational level on how service and support is to
be delivered. It is also interesting to note the inter-linkages among these categories. For
instance, with the delivery of training the
uncertainty in the performance of the equipment
reduces. This is mainly associated with the enhanced
skill level to operate the equipment. A digital
decision making framework is presented in the
following section.
5. DESIGN ATTRIBUTES-KNOWLEDGE-
UNCERTAINTY (DKU) FRAMEWORK
5.1. DKU FRAMEWORK PROPOSED
A digital decision making framework (DKU
Framework) is proposed that links the role of design
attributes, knowledge and uncertainty. The overall
framework is presented in Figure 2. The DKU
framework visualises specific relationships between
the design attributes, knowledge and uncertainty in
a map form. Industrial product-service system
delivery is linked with design attributes, knowledge
and uncertainty through outgoing knowledge
adaptation and incoming service prediction
capability (as shown in Figure 2).


Figure 2: DKU Framework
CLMs are created and presented in the following
to further elaborate on the DKU framework.
5.2. DKU FRAMEWORK DISCUSSION ON
CAUSAL LOOP MODELS
A causal loop model (CLM) can represent causal
effects of activities (Daneshkhah, 2004; Masood et
al, 2011). This type of modelling helps identify
aspects of complexities and dynamics can be
modelled through this technique (Masood, 2009).
Implementation of CLM has been reported in
several case studies either standalone or as part of
integrated approaches (Masood, 2009; Rashid et al,
2009; Zhen et al, 2009; Masood et al, 2010; Masood
and Weston, 2011).
A CLM of the DKU Framework is presented in
Figure 3, which looks into causes and effects related
to an enhanced SKB. Here, knowledge of nine (9) important design attributes is considered (shown in columns): weight, SFC, noise,
unit cost, LCC/maintenance cost, emission,
development & testing cost, thrust and
reliability/operational disruption. Desired design
attribute trends are taken as initial conditions for
this CLM i.e. lower weight, lower SFC, lower noise,
lower unit cost, lower LCC/maintenance cost, lower
emission, lower development & testing cost, higher
thrust and lower operational disruption (higher
reliability). Uncertainties are categorised into
engineering, operation, affordability and
commercial. Engineering uncertainties include rate
of system integration issues, level of obsolescence,
rate of rework, rate of capability upgrade, failure
rate of software, maintaining design rights, cost
estimating data reliability & quality, efficiency of
engineering efforts, and cost of licensing and
certification. Operation uncertainties include
quality of components and manufacturing,
component stress and load, operating parameters,
maintainer performance, availability of maintenance
support resources, effectiveness of maintenance
policy part level, complexity of equipment,
equipment utilisation rate, performance of internal
logistics, supply chain logistics, rate of materials,
sufficiency of spare parts, performance of suppliers
logistics, failure rate of hardware, location of
maintenance, rate of beyond economical repair, turn
around (repair) time, choice of fuel, mean time
between failure data, no fault found rate, and rate of
emergent work. Affordability uncertainties include
customer ability to spend, customer willingness to
spend, and project life cost. Commercial
uncertainties include exchange rates, interest rates,
commodity and energy prices, material cost,
environmental impact, customer equipment usage,
suitability of requirements, labour hour, labour rate,
labour efficiency, clarity of customer requirements,
and experience in other engine service provision.
The DKU-CLM presented in Figure 3 revolves
around causes and effects of an enhanced SKB,


which are mapped onto design attributes (in
columns) and uncertainties (in rows). The CLM
presents positive or negative effects of an enhanced
SKB onto uncertainty types resulting in positive or
negative effect on design attribute. Taking the
reliability design attribute, it proposes that the effect
of an enhanced SKB would be negative onto
engineering uncertainty for reliability, which further
results in higher reliability. It has a similar effect on the other uncertainties for reliability (operational,
affordability and commercial) that are considered in
this paper. It should be noted here that the paper
discusses uncertainty types and their resultant
effects; it does not go into detailed uncertainties, for
which increasing or decreasing effect may be
different. Thrust (another design attribute) has
similar effects to reliability, which ends up in an
increase with enhanced SKB while reducing
respective uncertainties. The enhanced SKB affects
respective uncertainties (engineering, operational,
affordability and commercial) negatively for other
design attributes (noise, weight, SFC, unit cost,
LCC/maintenance cost, emission and development
& training cost) resulting in negative effect on these
design attributes.

[Figure: matrix-style causal loop model with Engineering, Operation, Affordability and Commercial uncertainty rows against the nine design attributes in columns; the Enhanced Service Knowledge Backbone (+) links negatively to each uncertainty category, which in turn links to each attribute with the signs discussed in the text.]

Figure 3: DKU Causal Loop Model SKB, Design attributes and Uncertainties
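The sign logic of this mapping can be stated compactly. The sketch below (illustrative Python; the tabular encoding is an assumption, though the signs follow the text) composes the negative effect of an enhanced SKB on each uncertainty category with that category's effect on each design attribute, reproducing the net increase for thrust and reliability and the net decrease for the remaining attributes.

# A minimal sketch of the SKB -> uncertainty -> attribute sign propagation.
SKB_TO_UNCERTAINTY = -1  # an enhanced SKB reduces each uncertainty category

# Sign of the link from an uncertainty category to each attribute: more
# uncertainty raises the cost-type attributes and lowers thrust/reliability.
UNCERTAINTY_TO_ATTRIBUTE = {
    "weight": +1, "SFC": +1, "noise": +1, "unit cost": +1,
    "LCC/maintenance cost": +1, "emission": +1,
    "development & testing cost": +1, "thrust": -1, "reliability": -1,
}

for attribute, sign in UNCERTAINTY_TO_ATTRIBUTE.items():
    net = SKB_TO_UNCERTAINTY * sign
    print(f"enhanced SKB -> {attribute}: {'+' if net > 0 else '-'}")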
Figure 4, through a causal loop model,
demonstrates the link between knowledge and
uncertainty within the context of reliability.
Further information about the content includes:
Rate of system integration refers to the
combination of individual systems whether
developed in house, outsourced or both. It typically
forms a major responsibility of OEMs, whilst
uncertainties drive the performance of individual
systems and the integrated architecture. Any
negative issues that may be experienced result in
diminishing reliability and increasing operational
disruption.
Level of obsolescence defines the uncertainty
in not being able to find replacement parts. As
obsolescence increases, with the arising need to
source alternative parts, the reliability diminishes
due to the new parts that are introduced to the
system.
Rate of capability upgrade involves
technological advancements that are made along the
equipment life cycle to enhance equipment
capability. It creates the uncertainty of how the
system will respond to changes. Furthermore,
capability upgrades can be made with the ambition of reducing uncertainty in reliability.
Quality of components and manufacturing is
associated with the reliability of parts that have been
developed either internally or externally, which


involves uncertainty in the quality. There is a
correlation between the quality and reliability.
Sufficiency of spare parts, when considered at
the integrated system level, influences the operation
of other integrated parts which affect the reliability.
Rate of rework largely originates from errors in
maintenance, which causes rework in the service
provision. As a source of uncertainty it has an
influence over reliability.
Failure rate of software and Failure rate for
hardware have a direct influence over the
reliability of equipment. The uncertainty is
associated with when, where and how significant
the failure is.
Maintainer performance considers service
delivery from a resource dimension. The uncertainty
originates from human centred drivers such as skill
and motivation, which influence how the reliability
evolves.

[Figure: causal loop diagram centred on the Enhanced Service Knowledge Backbone (SKB) (+) and Reliability (+); surrounding nodes include Rate of System Integration Issues, Level of Obsolescence, Rate of Rework, Rate of Capability Upgrade, Failure Rate of Software, Failure Rate of Hardware, Quality of Components & Manufacturing, Sufficiency of Spare Parts, Maintainer Performance, Unit Cost (-), LCC/Maintenance Cost (-), Development & Training Cost (-), Customer Ability to Spend (A1), Customer Willingness to Spend, Exchange Rates (C1), Interest Rates (C2), Suitability of Requirements, Sales & Revenues (+), Organisation's Ability to Spend and Organisation's Strategic Vision, connected by (+)/(-) links.]

Figure 4: DKU Causal Loop Model SKB, Reliability and
Uncertainties
Reliability forms a central focus of the Service Knowledge Backbone. Any shift in reliability directly influences unit cost, maintenance cost, and development and training cost. Such changes fundamentally affect the customer's ability to spend and sales/revenues. The cost of purchasing new equipment and services will vary with the proposed reliability level. Spend shifts towards the purchase of new equipment and services rather than maintenance costs, which have now been reduced, i.e. it encourages a long-term rather than a short-term increase in OEM revenue.
The digital framework will be developed in MS
Excel and aims to be used as a decision support
tool. The main advantages of using MS Excel are its: (1) wide use and availability, and
(2) flexibility to make changes. The step-wise input
process will involve two sets of input requirements
to facilitate qualitative and quantitative analysis.
Firstly, the user will be able to choose from a
pre-defined set of attributes, uncertainties and
service knowledge types based on their relevance to
the project/research at hand. This will assist the
qualitative analysis, mainly based on a tick box type
approach. Secondly, the quantitative analysis aims
to illustrate the degree of dependency between
uncertainty and attributes as well as between
knowledge and uncertainty. Various approaches
such as the analytic hierarchy process, which
facilitates pair-wise comparisons, will be
implemented to reflect the significance of each
element. As an output the tool will show the link
between uncertainties and knowledge. The tool will
offer further analysis to reflect the benefit in
reducing/increasing uncertainty by making a change
in the knowledge level.
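As a sketch of the quantitative step (illustrative Python; the attribute names and judgement values are hypothetical), the analytic hierarchy process turns a pairwise-comparison judgement matrix into weights; the common geometric-mean approximation of the principal eigenvector is used here.

# A minimal AHP sketch: pairwise judgements -> attribute weights.
import numpy as np

attributes = ["SFC", "weight", "unit cost"]

# Saaty-style judgements: A[i, j] = importance of attribute i relative to j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1.0 / 3.0, 1.0, 2.0],
    [1.0 / 5.0, 1.0 / 2.0, 1.0],
])

# The geometric mean of each row, normalised, approximates the principal
# eigenvector and hence the weight of each attribute.
geometric_means = A.prod(axis=1) ** (1.0 / A.shape[0])
weights = geometric_means / geometric_means.sum()

for name, weight in zip(attributes, weights):
    print(f"{name}: {weight:.3f}")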
The limitations of the DKU framework include its so far limited industrial testing; it needs to be applied more widely in industry. The framework could also be compared with, and tested on, platforms other than MS Excel.

6. CONCLUSIONS AND FUTURE WORK
This paper presents the DKU framework, which supports decision making by clarifying the link between the level of knowledge and uncertainty in life cycle design within the aerospace sector. Additionally, the
related preliminary CLMs are proposed in order to
visualise the application. The paper focuses on
addressing two major challenges:
How can the influence of knowledge on
uncertainty be captured and demonstrated?
How can the implications of changing the level
of knowledge or uncertainty be illustrated?
Based on initial feedback, through industrial
interaction (including semi-structured interviews),
there are a number of implications of the proposed
framework for decision-making. The framework
supports efficient and effective product design


through visualisation of the service impact of design
decisions at an earlier stage in the life cycle where
greater design freedom exists. The framework also
supports demonstrating the link between
achievements of the design attributes and the
associated knowledge and uncertainty. This enables
us to build an understanding of how uncertainties
can influence achievement of attributes and how
knowledge could be used to reduce the influence.
To summarise the following key industrial benefits
are envisioned by implementing the DKU
framework:
Reduced life cycle cost;
Enhanced, cost effective product & service
design;
Better targeting of knowledge requirements;
and
Improved understanding of the implications of
uncertainty on life cycle design.
The following future work is recommended:
Building and further enhancing relationships
between knowledge and uncertainty;
Building and further enhancing a dynamic
relationship between knowledge based
uncertainty and the design implications;
A mechanism to illustrate the value of changing
the level of knowledge in relation to the degree
of uncertainty and the implications of this on
the life cycle design. Also to have a cost-benefit
analysis of enhanced knowledge;
Need for frameworks to assess the knowledge
level for design attributes;
Mathematical optimisation of attribute set (over
time) given the influence of uncertainty;
Assessment of the value of knowledge;
Comparison of existing vs. required capability
and seeking benefits (if any) in changing the
knowledge value;
Further exploring the role and methods for
through-life engineering service knowledge
feedback to product design and manufacture in
life cycle engineering.
Despite these benefits, the DKU framework has a few limitations: its testing across different industrial sectors has been limited, and it is currently tied to an MS Excel based software platform; applying it on other platforms, as required by other industries, may be explored in future work.
7. ACKNOWLEDGMENTS
Rolls-Royce plc, EPSRC and Technology Strategy
Board are acknowledged for providing funds to
Service Knowledge Backbone Project (Knowledge
Transfer Partnership Program No. 7767). KT-Box
Project partners are also acknowledged for
providing funds. Authors are also grateful for kind
support from the Cranfield IMRC for funding this
research and the industrial partners for their support.
REFERENCES
Ackoff, R.L., "From data to wisdom", Journal of Applied
Systems Analysis, Vol. 16, No. 1, 1989, pp 3-9.
Alavi, M. and Leidner, D.E., "Review: Knowledge
management and knowledge management
systems: Conceptual foundations and research
issues", MIS quarterly, Vol. 25, No. 1, 2001, pp
107-136.
Alonso-Rasgado, T. and Thompson, G., "A rapid design
process for Total Care Production", Journal of
Engineering Design, Vol. 17, No. 6, 2006, pp
509-531.
Daneshkhah, A.R., "Uncertainty in Probabilistic Risk Assessment", University of Sheffield, 2004, pp 45-87.
Datta, P.P. and Roy, R., "Cost modelling techniques for availability type service support contracts: a literature review and empirical study", Proceedings of the 1st CIRP IPS2 Conference, Cranfield, 2009, pp 216-223.
Doultsinou, A., "Service knowledge capture and re-use to
support product design", School of Applied
Sciences, Cranfield University, Cranfield, PhD
Thesis, 2010, p 377.
Erkoyuncu, J.A., Roy, R., Shehab, E. and Cheruvu, K.,
"Understanding service uncertainties in
industrial product-service system cost
estimation", The International Journal of
Advanced Manufacturing Technology, Vol. 52,
No. 9-12, 2010, pp 1223-1238.
Erkoyuncu, J.A., Roy, R., Shehab, E. and Wardle, P.,
"Uncertainty challenges in service cost
estimation for product-service systems in the
aerospace and defence industries", Proceedings
of the 1st CIRP IPS2 Conference, Cranfield,
2009, pp 200-207.
Masood, T., "Enhanced Integrated Modelling Approach
to Reconfiguring Manufacturing Enterprises",
Wolfson School of Mechanical and
Manufacturing Engineering, Loughborough
University, Loughborough, PhD Thesis, 2009, p
314.
Masood, T., Roy, R., Harrison, A., Xu, Y., Gregson, S.
and Reeve, C., "Challenges in digital feedback
of through-life service knowledge to product
design and manufacture", 7th International
Conference on Digital Enterprise Technology
(DET), Athens, Greece, 2011, pp 1-11.
Masood, T. and Weston, R., "An integrated modelling
approach in support of next generation
reconfigurable manufacturing systems",
International Journal of Computer Aided
Engineering and Technology, Vol. 3, No. 3-4,
2011, pp 372-398.
Masood, T., Weston, R. and Rahimifard, A., "A
computer integrated unified modelling approach
to responsive manufacturing", International


Journal of Industrial and Systems Engineering,
Vol. 5, No. 3, 2010, pp 287-312.
Nonaka, I., "A dynamic theory of organizational
knowledge creation", Organization science, Vol.
5, No. 1, 1994, pp 14-37.
Rashid, S., Masood, T. and Weston, R.H., "Unified
modelling approach in support of organization
design and change", Proceedings of the
Institution of Mechanical Engineers, Part B:
Journal of Engineering Manufacture, Vol. 223,
No. B8, 2009, pp 1055-1079.
Sobodu, O., "Knowledge Capture and Representation in
Design and Manufacture", School of Industrial
and Manufacturing Science, Cranfield
University, Cranfield, MSc, 2002.
Thunnissen, D., "Propagating and Mitigating Uncertainties in the Design of Complex Multidisciplinary Systems", California Institute of Technology, Pasadena, California, PhD Thesis, 2005, pp 56-73.
Walker, W.E., Harremoes, P., Rotmans, J., Sluijs, J.P.V.,
Asselt, M.B.A.V., Janssen, P. and Krauss, K.V.,
"Defining Uncertainty: A conceptual basis for
uncertainty management in model based
decision support", Integrated Assessment, Vol.
4, No. 1, 2003, pp 5-17.
Zhen, M., Masood, T., Rahimifard, A. and Weston, R.,
"A structured modelling approach to simulating
dynamic behaviours in complex organisations",
Production Planning & Control, Vol. 20, No. 6,
2009, pp 496-509.




PRODUCT TO PROCESS LIFECYCLE MANAGEMENT IN ASSEMBLY
AUTOMATION SYSTEMS
Izhar Ul Haq
Wolfson School of Mechanical &
Manufacturing Engineering, Loughborough
University, UK
Email: izhar.msi@gmail.com
Tariq Masood
Manufacturing Department, School of Applied
Sciences, Cranfield University, UK
Email: Tariq.Masood@Cranfield.ac.uk

Bilal Ahmad
Wolfson School of Mechanical &
Manufacturing Engineering, Loughborough
University, UK
Email: B.Ahmad@lboro.ac.uk
Robert Harrison
Wolfson School of Mechanical &
Manufacturing Engineering, Loughborough
University, UK
Email: R.Harrison@lboro.ac.uk

Baqar Raza
Wolfson School of Mechanical &
Manufacturing Engineering, Loughborough
University, UK
Email: B.R.Muhammad@lboro.ac.uk

Radmehr P Monfared
Wolfson School of Mechanical &
Manufacturing Engineering, Loughborough
University, UK
Email: r.p.monfared@lboro.ac.uk

ABSTRACT
Presently, the automotive industry is facing enormous pressure due to global competition and ever
changing legislative, economic and customer demands. Product and process development in the
automotive manufacturing industry is a challenging task for many reasons. Current product life
cycle management (PLM) systems tend to be product-focussed: although information about processes and resources is present, it is mostly linked to the product. Process is an important aspect,
especially in assembly automation systems that link products to their manufacturing resources. This
paper presents a process-centric approach to improve PLM systems in large-scale manufacturing
companies, especially in the powertrain sector of the automotive industry. The idea is to integrate
the information related to key engineering chains i.e. products, processes and resources based upon
PLM philosophy and shift the trend of product-focussed lifecycle management to process-focussed
lifecycle management, the outcome of which is Product, Process and Resource Lifecycle Management (PPRLM) rather than PLM alone.
KEYWORDS
Product design, product life cycle, manufacturing process resource, powertrain assembly
automation, reconfiguration
1. INTRODUCTION
Today the global marketplace is changing
rapidly. Industries have to enhance their strategy in
order to respond efficiently to customer
requirements and market needs (Kalkowska and
Trzcielinski 2004). The long term goals of
manufacturing enterprises are to stay in business,
grow and maximise their profits (Gunasekaran,
Marri et al. 2000). The 21st century business
environment can be characterised by expanding
global competition and customer individualism
leading to a high variety of products made in
relatively low volumes. In 1970s the cost of


products was considered the lever for obtaining
competitive advantage. In 1980s quality superseded
the cost and therefore became an important
competitive dimension (Singh 2002). Now low unit
cost and high quality products no longer solely
define the competitive advantage for most
manufacturing enterprises. Today, customers take
both minimum cost and high quality for granted.
Factors like customisation, delivery, performance,
and environmental issues such as waste generation
are now assuming a more predominant role as
differentiators in defining the success of
manufacturing enterprises in terms of increased
market share and profitability (Gunasekaran, Marri
et al. 2000; Singh 2002). The question is what can
be done under these globally changing
circumstances in order to stay in business and retain
a competitive advantage.
The automotive industry is often described as
the engine of Europe (ACEA 2008). Powertrain
system is one of the key areas within the lifecycle of
automotive manufacturing. At the present time, this
industrial sector is under enormous pressure. In
the past, business plans were designed for 10 to 15 years, but today's need is for 6 to 9 months (Haq,
Harrison et al. 2007). For rapid response to ever
changing market demands, the western automotive
industry is looking to shorten production lifecycle
time when introducing new engine models (Masood 2009; Haq 2009). The time taken by the western
automotive industry to design a new engine model,
build production lines and commence mass
production is typically about 42 months, while Japanese automotive companies take 36 months, and this
differential remains today (Harrison, West et al.
2001; Monfared, West et al. 2002; Haq, Harrison et
al. 2007). Also, it has been recognised in the automotive industry that a 6-month delay in the launch of a new product, such as a motor vehicle or a large subassembly (e.g. transmission units and engines), can reduce its profit margin by one third (Lee, Harrison et al. 2007; Haq,
Monfared et al. 2010). Potentially, this is due to the
fact that manufacturing system requirements are
less effectively synchronised with product (design)
and geographically distributed manufacturing
operations in order to meet the global market
demands.
In response to ever increasing business needs,
highly flexible and responsive manufacturing
systems are needed to accommodate unpredictable
business changes (Masood 2009; Masood and
Weston 2011; Masood and Weston 2011). In
addition new business models, such as PLM, are
emerging to boost innovation during product design
and process development using information and
communication technologies (Sudarsan, Fenves et
al. 2005). The key components of PLM strategy are
to bridge the gap between innovative product design
and product delivery by managing design and
manufacturing execution processes in a concurrent
engineering environment (Sharma 2005). Such
models are supported by a number of engineering
tools e.g. Computer Aided Design (CAD),
Computer Aided Manufacturing (CAM) and
Computer Aided Process Planning (CAPP) (Shyam
2006). These tools are developed for individual
system requirements in order to decrease lead time
and increase customisation. However, lack of data
interoperability (i.e. deficiency of data
standardisation for data structure/format) and high
ongoing integration and maintenance costs make
systems more complex and risky.
This paper summarises on-going research efforts
on the development of new process centric
Powertrain assembly automation systems for the
western automotive industry, in particular Ford
Motor Company and their supply chain
collaborators (e.g. Krause, Schneider Electric, and
Bosch Rexroth). This process-focused assembly automation systems research is based on the PLM philosophy; however, instead of being product-centric, it focuses on process-centric PLM. Product, Process and Resource (PPR) are the
key elements of engineering domain in any
automotive industry. Processes are the links
between products and resources, and focussing on processes automatically covers all key engineering domains; therefore, the approach may be called Product, Process and Resource Lifecycle Management (PPRLM). The
PPRLM concept is applicable to assembly
automation systems and may also be applied to
other manufacturing industries. Product-centric PLM is adequate for many manufacturing industries, but for assembly automation systems the focus shifts from product to process, because engineers mostly concentrate on how different products may be assembled economically. This necessity motivates process-focussed PLM systems, so that manufactured products may be assembled efficiently.
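A minimal sketch of this process-centric idea is given below (illustrative Python; the class and instance names are assumptions, not the project's data model): the process sits at the hub, linking the products it assembles to the resources that execute it.

# A minimal PPR sketch: the process links products to resources.
from dataclasses import dataclass, field

@dataclass
class Product:
    name: str

@dataclass
class Resource:
    name: str  # e.g. a fixture, robot or assembly station

@dataclass
class Process:
    name: str
    products: list = field(default_factory=list)   # what the process assembles
    resources: list = field(default_factory=list)  # what it executes on

head_assembly = Process("cylinder head assembly")
head_assembly.products.append(Product("I4 engine"))
head_assembly.resources.append(Resource("bolt-tightening station"))
print(head_assembly)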
2. STATUS AND SCOPE OF PLM
In the early 1980s, engineering design entered a
new era with the advent of CAD to facilitate
designers to create, reuse and manipulate geometric
product models (Farhad and Deba 2005). In parallel
to this advent, Computer Aided Manufacturing and
Engineering (CAM/CAE) tools and Product Data
Management (PDM) systems appeared for easy,
quick and secure access to data during the product
design process. The first generation of PDM systems, although effective within the engineering domain, failed to encompass non-engineering areas within the enterprise such as sales, marketing


and supply chain management as well as external
agents like customers and suppliers (Tony Liu and
William Xu 2001). With the evolution of PDM
systems, the first wave of enterprise applications
such as Enterprise Resource Planning (ERP),
Customer Relationship Management (CRM),
Supply Chain Management (SCM), and so on, were
introduced. The purpose was to streamline and
improve the manufacturing business practices. But
the focus of each enterprise application solution is
on specific lifecycle processes and cannot
adequately address the need for collaborative
capabilities throughout the product lifecycle (Ming,
Yan et al. 2008). As a result PDM systems were not
able to provide the necessary support for
ERP/CRM/SCM. This was because PDM systems
were designed to handle engineering data and
usually require resources having engineering and
technical knowledge (Farhad and Deba 2005).
Secondly, in today's business world, multinational
companies work with project teams spread all over
the globe, where PDM offers insufficient support
for global communication within the system (Tony
and William 2001).
During the mid-1990s, the concept of PLM evolved (Kopcsi, Kovács et al. 2007), with the aim to
streamline product development and boost
innovation in manufacturing by managing all the
information about an enterprise throughout the
product lifecycle (Sudarsan, Fenves et al. 2005).
The entire product lifecycle consists of a set of
processes, which include customer requirements,
product strategy, product portfolio planning,
product specifications, conceptual design, detailed
design, design analysis, prototyping and testing,
process planning, inventory management, sourcing,
production, inspection, packing, distribution,
operation and service, disposal and recycle. This
clearly indicates that processes throughout the entire
lifecycle are complex in nature (Ming, Yan et al.
2005). To deal with such complexity PLM is a
business strategy (Farhad and Deba 2005), to
rapidly plan, organise, manage, measure and deliver
new products or services much faster and more
economically in an integrated way (Ming, Yan et al.
2005). Therefore, PLM not only provides
management throughout the entire product lifecycle,
but also distinguishes itself from other enterprise
application systems such as ERP, CRM and SCM
by enabling effective collaboration among
networked participants (Ming, Yan et al. 2008). The
highest level of collaboration is based on web-based
services with standard industry processes followed
by industry players allowing virtual collaboration,
real time information processing and real time
process integration (Sharma 2005).
It has been recognised that current PLM implementations are document-oriented, lack customisable data models and face many inter-enterprise integration difficulties (Aziz, Gao et al. 2005). In addition, PLM seeks to extend PDM beyond design and manufacturing into other areas like marketing, sales and after-sale service (Farhad and Deba 2005). Therefore, appropriate technology solutions for PLM are imperatively required to facilitate the implementation and deployment of PLM systems for the benefit of industrial applications (Ming, Yan et al. 2008). The world's leading universities, institutes and solution vendors recognise PLM as a big wave in the enterprise application software market (Ming, Yan et al. 2005). In 2002, manufacturing companies invested $2.3 billion in PLM systems; a likely reason is that companies urgently want to improve their ability to innovate, get products to market faster and reduce errors (Sudarsan, Fenves et al. 2005; Haq, Monfared et al. 2010). The greatest acceptance and usage of PLM solutions has been in the automotive and aerospace industries, both of which have hundreds of engineers located at various design centres that need to be brought together (Shyam 2006).
According to (Ming, Yan et al. 2005), the University of Tokyo leads in academic research contribution and mainly focuses on topics such as lifecycle engineering, lifecycle design based on simulation, lifecycle planning, lifecycle optimisation, reuse and rapid lifecycle, etc. Similarly, the focus of the MIT Center for Innovation in Product Development is on platform architecture, distributed object-based modelling environments, information flow modelling and product development integration. For further details of the most recent academic and industrial state of the art, refer to (Ming, Yan et al. 2005). However, the focus of all academic research groups is on product design and development activities using modern computing and internet technologies to facilitate design collaboration and potential innovation. In fact, such product-centric structures are no longer appropriate (Baxter, Roy et al. 2009). As a result, few efforts have been documented so far, and the results obtained are still unsatisfactory. Similarly, there is still a significant gap between the increasing demands from industrial companies and the available solutions from vendors; e.g. using traditional product data management systems to exchange engineering data with suppliers has proved difficult, slow and geographically limited. Flawed coordination among teams, system and data incompatibility, and complex approval processes are also common (Ming, Yan et al. 2005; Ming, Yan et al. 2008). Furthermore, data interoperability issues are inevitable because the PLM systems that a company employs to support its activities can be made of many components, each of which can be provided by a different vendor (Shyam 2006).




3. CURRENT INDUSTRIAL PRACTICE
Manufacturing companies are facing intense pressure due to global competition and ever-changing customer demands. Product lifecycles have shortened considerably over the past decade. Every change in the product is associated with the heavy cost of redesigning and rebuilding the tools to manufacture the changed product. PLM systems help maintain the past history of product information so as to adapt quickly to changed customer needs. PLM systems act as the main data repository for all the information related to a certain product, from concept to the end of the product's life. PLM is a business strategy and is more product-centric, as described by (Farhad and Deba 2005). The complete information about the associated processes and resources is not well established in existing PLM systems. This is because creating a linked database of PPR information in PLM systems is highly laborious, and sometimes fruitless if changes to the products are too frequent. In fact, process is an important parameter in assembly automation systems that could link the products to the resources.
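To make the linking role of the process concrete, a minimal sketch is given below (in Python; the class and field names are illustrative assumptions, not Ford's data model or any vendor's PLM schema). A Process record joins Products to Resources, so that the impact of a product change can be traced through the process rather than by searching a product-centric repository.

from dataclasses import dataclass, field

@dataclass
class Product:
    """A part or sub-assembly, e.g. a cylinder head."""
    part_id: str
    description: str

@dataclass
class Resource:
    """A machine, station or tool on the assembly line."""
    resource_id: str
    description: str

@dataclass
class Process:
    """The linking element: which resources act on which products."""
    process_id: str
    operation: str                                  # e.g. "bolt run-down"
    inputs: list = field(default_factory=list)      # Product instances
    resources: list = field(default_factory=list)   # Resource instances

    def impacted_resources(self):
        # Trace a product change to the affected resources via the process.
        return [r.resource_id for r in self.resources]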
The product-centric approach may be acceptable in the manufacturing industries. However, current enterprise systems do not adequately perform the intended function of knowledge reuse when the product changes (Masood, Erkoyuncu et al. 2011; Masood, Roy et al. 2011). The same applies to the PLM systems in automotive engine assembly plants. The reason is that in the assembly of powertrain systems, the product (i.e. the engine) is assembled from hundreds of individual parts, and a change in one part may cause a rippling effect through the whole set of assembly processes. This creates the need to concentrate more on the process parameters rather than on the individual product parameters. This phenomenon has been studied at different plants of the Ford Motor Company, UK. For any assembly plant in general, and an automotive plant in particular, it has been observed and experimentally verified that PLM can best be utilised with a process-focussed approach. When the PLM system is process-focussed in assembly automation systems, it automatically takes products and resources into account, because the processes connect products and resources. To implement this approach, the product is still given its due importance, but the related manufacturing processes take priority.
Product and process development in the manufacturing sector of the automotive industry is a challenging task for many reasons. Ongoing globalisation, mass customisation and technological revolution bring new challenges to the well-established automotive sector. Change is continual for several reasons, including changing customer requirements, the need for variety, technological advancement, changing environmental regulations, increased safety requirements and many more. The life of the assembly machines and resources surpasses the life of the products made with them. Heavy investments can go unutilised or be wasted when a product changes. New strategies are required, especially for systems in the automotive sector, due to rapid changes in products and consequent processes, to meet new business requirements. Launching a new product variant in the automotive industry is also challenging because of fragmented and manual processes, even though collaborative engineering has shifted activities from serial to parallel, and advanced information and communication infrastructure has replaced paper-based processes. In practice, the lack of theoretical, systematic and standardised methods to perform information and knowledge integration has led to the construction of incomplete, irrelevant or out-of-date knowledge bases.
Assembly lines, such as a powertrain assembly line for automotive engines, have a limited capacity to produce a variety of products. The built-in capability has to be limited to justify the investment, striking a trade-off between unpredictable changes and the increased cost of flexibility. Technological advancement also discourages investing too heavily in present technology, which might become inefficient in forthcoming years. Designing, and even reconfiguring, an assembly line is an extensive process that requires expert knowledge and business intelligence and involves several domains. Hence, it becomes inevitable to use the best ICT tools and infrastructure. PLM helps in the process of designing and reconfiguring the line. However, the focus of these activities is still mostly on the product and the information that revolves around the product, because PLM has emerged from product data management. The information about the processes and resources does reside in the PLM systems, but it has no practical meaning whenever there is a change in the product. Secondly, even if the PPR information has been relationally structured in the PLM systems, useful decision making cannot be supported by them, and the information about the processes and resources is only used once the product design or changes are finalised or agreed upon conceptually. At this stage, process planning becomes challenging, which might result in redefining the process constraints across supply chain partners or redesigning the product. This is because no information about pre-defined processes or machine mechanisms is available. The unavailability of the processes and, in turn, the resources and constraints at the conceptual phase of product design is a major deficiency. A particular assembly system needs process information up-front so that decisions about manufacturing/assembling (manufacturing by assembly) the possible varieties of products can be made confidently. This will also help the supply chain partners, especially the assembly machine tool builders, to predict the time and cost of building newly required machines.
PLM seeks to manage information through all product lifecycle stages such as design, manufacturing, assembly, marketing, sales and after-sale service. However, PLM usage throughout the product lifecycle is still mainly limited to product design, as shown in Figure 1. It can be seen from Figure 1 that PLM is used nearly 10 times less frequently in the service phases than in the design phase (Lee, Ma et al. 2008).


Figure 1 - PLM Usage throughout Product Lifecycle (Lee, Ma et al. 2008).

Use of a PLM tool enhances collaboration, but the potential benefits of PLM are still only partially exploited. PLM has been used for the collaborative design, manufacture and service of products across the extended enterprise over the past decade. PLM systems support the management of a portfolio of products, processes and services from initial concept, through design, engineering, launch, production and use, to final disposal. They coordinate product, project and process information throughout the entire product value chain among the various players, internal and external to the enterprise. They also support a product-centric business solution that unifies the product lifecycle by enabling online sharing of product knowledge and business applications (Sudarsan, Fenves et al. 2005).
The establishment and use of product data in PLM for assembly automation systems becomes a labour-intensive challenge, both in terms of data integration and the continued management of the application tools. Looking at the PLM system used by Ford, this paper presents an approach to ease the adoption and continued management of PLM systems, especially in assembly automation systems, by adding a resource library of possible processes as a part of the current PLM repository.
In assembly automation systems, the product-centric approach is not very useful, because the final assembled product is a combination of several products, which are directly related to the processes and resources in a linked manner. In the manufacturing industries, the focus on the product gives the required results, but in assembly automation this is not the case. Once the design is finalised, it is difficult to manage and control the resources and associated processes; the management of processes and resources should therefore start before the product design is finished. Hence, there is a need to incorporate a process-centric, rather than product-centric, approach in PLM systems for assembly automation systems in general and the automotive sector in particular. This paper proposes the PPRLM approach in order to address existing PLM system limitations within the manufacturing industry, and in particular within powertrain assembly automation systems.
4. PROPOSED PPRLM RESEARCH
The authors propose a process-centric approach to the PLM infrastructure. The idea is to integrate the information of the three key engineering chains, i.e. products, processes and resources, based on the PLM philosophy (i.e. to overlap the activities of product, process and resource design). This will shift the trend from product-focussed lifecycle management to process-focussed lifecycle management, the outcome of which is PPRLM rather than PLM alone.
In this research, the PPRLM approach focuses on powertrain assembly automation systems. An assembly operation is completely different from a machining/cutting operation. In assembly systems, the sequence of assembly matters more than the method of manufacture, a distinction that has not yet been realised and implemented in terms of real PLM exploitation.
The process is thus a very legitimate issue for PLM to focus on. With respect to PLM, the following need to be considered. Firstly, a deep understanding of processes is needed. Secondly, an explicit (not tacit) definition of processes is required. Thirdly, re-engineering of such processes is required to adapt them to a digital environment. Finally, integration of processes is required across the organisation. The component-based design approach to automation systems is directly linked to work on the modular composition of automated manufacturing systems (Raza, Kirkham et al. 2009). The component/modular automation approach is proven to reduce the downtime of a line by reducing the time taken to reconfigure it, thus saving the business money and increasing its competitiveness, which is vital for the low-volume, high-specification production typical of western manufacturers (Harrison, West et al. 2001; Haq 2009; Raza, Kirkham et al. 2009). In order to support this type of engineering, the product data needs to be both managed and integrated into the overall line design. This can only be achieved by linking PLM with the machine design lifecycle. This composition of product, process and machine design is central to the new level of PLM for automated manufacturing suggested in this paper.
Figure 2 represents the three key engineering chains of the PPRLM approach in a medium-to-large enterprise, especially in the automotive sector. These chains may consist of sub-chains. The chains and sub-chains need to communicate with each other in order to accomplish business objectives. Supply chain partners and business management chains communicate with and through the engineering chains. The dashed lines represent relationships, while the solid lines represent communication among different chains. The machine builders develop resources that define processes by carrying out operations to realise the desired assembly processes. The end-user defines the final assembled product, and the control vendors define the control logic required for the machine tools to fulfil their tasks. For powertrain assembly systems, the product is the final assembled engine and not a one-off part. Therefore, the traditional product-centred approach to PLM systems fails to fulfil the required objectives. For assembly systems, process takes precedence, and the process-centred PLM paradigm, proposed and verified in this research, is efficient for utilising PLM applications. The Business Process Modelling (BPM) chain defines the final product and the possible processes. Machine builders define the resources and the potential processes achievable by those resources. Product engineers from the end-user do not necessarily worry about machine configuration or control logic, but rather about the end product and an efficient process. Similarly, the machine tool builders' concentration is on the resources and the processes derived from them, and not on the final product. The processes are the linking key/sub-chain for all the stakeholders in assembly automation systems, yet they are not being given their due importance. The authors suggest defining the products and resources in terms of processes to help achieve agreement among all the stakeholders and speed up the assembly process.
The major consideration when formulating the library of processes is the identification of simple mechanisms in the machine tools/workstations used for the assembly. These simple processes/mechanisms may be: clamping a part, lifting a part, rotating a part, gripping a part, stopping a part, locating a part, etc. These mechanisms define the simple processes, and combinations of different mechanisms can create complex processes. When creating such processes, the following questions need to be considered (a sketch of such a composition follows the list):
- Can mechanisms be readily re-used in this specific application domain?
- Can mechanisms be readily integrated with components from other vendors?
- Can mechanisms be readily reconfigured to create different variants of similar processes as requirements change?
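As an illustration only, the sketch below records such simple mechanisms with the three questions above as attributes and composes them into a more complex process; the mechanism names and vendors are hypothetical, not entries from the actual Ford library.

from dataclasses import dataclass

@dataclass
class Mechanism:
    """A simple, single-operation building block, e.g. 'clamp part'."""
    name: str
    vendor: str
    reusable: bool          # readily re-usable in this application domain?
    open_interface: bool    # integrates with other vendors' components?
    reconfigurable: bool    # adaptable to variants of similar processes?

def compose_process(name, mechanisms):
    """Combine simple mechanisms into a complex process, flagging any
    building block that fails one of the three reuse checks."""
    issues = [m.name for m in mechanisms
              if not (m.reusable and m.open_interface and m.reconfigurable)]
    return {"process": name,
            "steps": [m.name for m in mechanisms],
            "review_needed": issues}

clamp = Mechanism("clamping part", "VendorA", True, True, True)
lift = Mechanism("lifting part", "VendorB", True, False, True)
print(compose_process("load station", [clamp, lift]))
# {'process': 'load station', 'steps': ['clamping part', 'lifting part'],
#  'review_needed': ['lifting part']}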



Figure 2 - Key Engineering Chains of a Technical Organisation
5. CASE STUDY APPLICATION OF
PRODUCT, PROCESS AND RESOURCE
LIFE CYCLE MANAGEMENT (PPRLM)
AT FORD PRODUCTION FACILITIES
This research was conducted as part of a larger research project at Loughborough University in collaboration with the Ford Motor Company, UK. Ford is one of the leading automotive manufacturers in the world and has a desire to gain competitive advantage through research and development. The initial research study helped to gain an insight into the evolution of the industry and the competitive dynamics prevalent in the market, as well as the significant developments in the industry and the key trends and issues. This research study is congruent with the strategic business planning of the automotive industry, and of Ford in particular.
An engine assembly line is a highly sophisticated and complex combination of sequential operations and activities that are often automated. Presently, the data available at Ford is product (engine)-focussed, and the relationships among Products, Processes and Resources (PPR) are not explicitly available. Every time there is a change in the engine design, the process engineers have to go through all the stations of the assembly line to determine the potential changes to be made to the line. The opportunity offered by pre-defined processes is not being exploited.
Ford's engine production lines are a state-of-the-art industrial application of complex engine assembly operations. These production lines typically include various combinations of production resources such as machines, conveyors and human operators. Globalisation and changing customer requirements force the industry to make customised products in the shortest possible time and at the best possible quality. At the same time, this study finds that the implementation of ICT, and especially of PLM systems, should be carefully examined for its impact on the industry's (in this case, Ford's) competitiveness.
The proposed research concept is to establish relationships between PPR in a logical way and make this knowledge available to all the stakeholders throughout the supply chain. For this, the authors propose a new PPRLM model, as discussed in section 4 of this article. For the real implementation of the PPRLM concept, a Ford case study was planned in four major steps:
- Develop a standardised method to describe assembly mechanisms and their associated interfaces by working collaboratively with Ford and the machine tool builders;
- Develop a standardised method to identify and classify mechanisms;
- Decompose an existing assembly line into a set of mechanisms; and
- Develop a mechanism identification template to facilitate the capture of standard mechanisms.
Any particular station of the engine assembly line can be decomposed to the level of basic building blocks, i.e. modules of mechanisms, which are independent of each other and can each perform one operation on their own. Different modules can be combined to make a new station with changed process capabilities. These mechanisms are the building blocks of the pre-defined processes. To obtain a library of processes, resources are decomposed into mechanisms, each performing a specified task. These mechanisms define the lowest level of granularity of the functions performed by resources. The resources work upon products to make new products with the desired characteristics. The mechanisms are combined to define processes or series of processes. The focus of PLM is thus shifted from the product to mechanisms or processes, and this is the basis of a process-focussed approach for an assembly automation system.
In order to develop mechanisms or processes, one of the key objectives of this research work was to consider the commonalities of production machines (i.e. engine assembly machines) within existing projects and categorise them into common mechanisms for the development of Manufacturing Process Mechanisms Resource Libraries (MPMRL). Such reusable mechanisms should have the ability to be reconfigured easily and quickly according to any new business requirement. A mechanism describes a unit, which could be functional, control or structural, meeting specific process tasks, e.g. part move, bolt run-down, etc. Therefore, in this research, mechanism decomposition was viewed from three different perspectives, namely function, process and mechanism detail, as briefly described here:
Functionality describes the physical operation to be performed by the mechanism. From a functionality view, mechanisms are categorised into 11 functional elements, i.e. testing, gauging, robot, sensor, lubrication, joining, tooling, translation, grasping, transport and fixtures.
Process describes the steps the mechanism must perform in order to achieve the function (e.g. lifting, rotating).
Mechanism detail considers the mechanism at a very specific level, e.g. the control logic, geometrical, hydraulic, pneumatic and electrical aspects that combine to fulfil the mechanism function.
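A minimal sketch of how one library entry might capture these three perspectives follows; the enumeration reproduces the 11 functional elements listed above, while the entry's field values are purely illustrative.

from enum import Enum, auto

class Function(Enum):
    """The 11 functional elements used to classify mechanisms."""
    TESTING = auto(); GAUGING = auto(); ROBOT = auto()
    SENSOR = auto(); LUBRICATION = auto(); JOINING = auto()
    TOOLING = auto(); TRANSLATION = auto(); GRASPING = auto()
    TRANSPORT = auto(); FIXTURES = auto()

# One hypothetical MPMRL entry seen from the three perspectives.
bolt_run_down = {
    "function": Function.JOINING,                    # what it does
    "process": ["locate bolt", "engage", "torque"],  # how it achieves it
    "detail": {"control": "PLC sequence",            # specific realisation
               "drive": "electric nut-runner",
               "geometry": "CAD reference"},
}
print(bolt_run_down["function"].name)  # JOINING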
The development of such an MPMRL constitutes the creation of an experience-based knowledge repository for designing, building and implementing new automation systems. As a result, the PPRLM concept not only significantly changes the existing way in which business and engineering processes are carried out, but also makes processes more agile, reconfigurable and robust. This research provides a roadmap to streamline the business and engineering processes across the supply chain collaborators and to assess the potential improvements.
Finally, process models are developed from the end-user's (Ford's) perspective, as shown in Figure 3. The purpose of such process models is to link products, processes and resources (i.e. mechanisms) to design and build new automation systems.






Figure 3 - Part of Proposed New Process Model

Figure 3 illustrates part of the new engineering process models for utilising existing libraries of pre-tested mechanisms and for designing newly required mechanisms at various lifecycle phases of a powertrain program. The required engineering services are interpreted into engineering application modules (element builder, component builder, system builder, etc., as highlighted in Figure 3) required at each stage of the engineering model. The proposed models identify in great detail which application functionality (e.g. component builder or system viewer) is required for each business or engineering process, and which engineering expertise (with what skill level) should use the new engineering applications. The new model also specifies changes to the current process flow, and the information and resource requirements of each process. Furthermore, it introduces a set of interaction mechanisms with the supply chain (e.g. the exchange of information and documents, and their timing within the program lifecycle) to outsource certain parts of the design process without losing control over program management or knowledge ownership. Due to the application of PPRLM, more engineering activities can be completed concurrently, which results in compression of the overall program time.
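The compression effect can be illustrated with a toy critical-path calculation; the activity names and durations below are invented for the example and are not Ford program data.

# Hypothetical activity durations in weeks.
activities = {"product design": 40, "process design": 30,
              "machine build": 35, "commissioning": 15}

# As-Is: activities run one after another, so durations simply add up.
serial_time = sum(activities.values())  # 120 weeks

# To-Be (illustrative): with pre-defined mechanism libraries, process
# design and machine build can overlap product design; commissioning
# then follows the longest of the parallel strands.
parallel_time = max(activities["product design"],
                    activities["process design"],
                    activities["machine build"]) + activities["commissioning"]

print(serial_time, parallel_time)  # 120 55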
6. DISCUSSION AND RESULTS
The PPRLM approach was applied at one of the leading automotive companies, i.e. Ford. It was observed during the case study that time to market is one of the crucial factors for the survival of the western automotive industry in today's competitive era. In response, the automotive industry is looking for more advanced, collaborative, generic and open solutions to meet instant changes in market demands.
The key performance measures were examined against the end-user's business and engineering priorities. One of the top priorities for Ford is to establish a well-integrated and proactive approach to the manufacture of powertrain automation systems prior to product engineering, and to establish relationships between Product, Process and Resource in order to provide lifecycle support to powertrain automation systems. The ultimate goal is to bring more agility into manufacturing systems and to enhance robustness with less time, cost and physical resource. An overview of the existing approaches within the automotive industry to designing and building a new powertrain assembly automation system is given in Figure 4. The lack of advanced, open and generic solutions means that new automation systems must always be designed and built from scratch. Furthermore, the existing sequential approaches (see Figure 4) to designing and building new automation systems have raised many fundamental issues, which currently manifest themselves during the implementation and commissioning phases of new automation systems.
In response to these fundamental limitations of the existing approach, the PPRLM approach is proposed, designed and developed in this research work for future powertrain automation systems. Figure 4 presents an overview comparison between the two approaches. This new vision potentially enables supply chain collaborators to design, build and reconfigure future powertrain assembly automation systems more robustly using advanced, open and generic solutions. It offers complete lifecycle support (i.e. from concept through build and test to launch) with less engineering effort and better process management within the supply chain of collaborators.
Migration from the existing (As-Is) to the future (To-Be) approach has been critically assessed, measured and evaluated based on four key performance measures (i.e. robustness, time, cost and resources). Significant benefits are predicted for the proposed PPRLM approach to designing and building new automation systems. In order to compare and analyse the potential benefits of PPRLM, simulation models were developed. Following a comprehensive data analysis, significant improvements are predicted with the application of the To-Be approach, particularly in the planning and feasibility phases of the automation system. For instance, robustness in the planning and feasibility phases increases from 50% to 92%.
Time saving is one of the major objectives for the western automotive industry. After re-engineering the business and engineering processes to apply PPRLM, all processes were rescheduled. This rescheduling of process timing is based on two important considerations: one is the availability of generic solutions (pre-defined and pre-validated mechanisms) prior to product engineering; the second is the availability of new engineering software to utilise such mechanism libraries in a more virtual and collaborative environment when designing and building a new powertrain assembly automation system.
Similar to the robustness analysis, the As-Is and To-Be approaches were compared for time using dynamic modelling. From an end-user perspective, three different measures were compared: a) the time saving between program approval (PA) and Job 1; b) the average time reduction in the ramp-up period; and c) the reduction in overall project time. Due to the application of PPRLM, an average time saving of five months is predicted from PA to J1, and a saving of 70 to 80 days during the ramp-up period.
Statistical cost analysis is the third important measure and was examined using simulation models. For the sake of manageability, the cost analysis is limited to the cost of the human resources assigned to each business/engineering process. Thirteen different engineering groups are involved from the end-user's (i.e. Ford's) perspective, across eight different engineering domains, to facilitate the design and development of powertrain assembly systems. Reducing the investment cost of any new program associated with all these engineering groups is therefore very important to Ford's senior management. The predicted impact on time due to the application of PPRLM has a direct impact on the engineering cost associated with all thirteen engineering groups. The simulation models predict an average saving of 30% per typical program.
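A hedged sketch of this kind of cost aggregation is shown below; the groups, hours and rate are invented for the example (three of the thirteen groups are shown) and do not reproduce the study's figures.

# Hypothetical engineer-hours per engineering group for one program.
hours_as_is = {"controls": 4000, "mechanical": 6000, "simulation": 2500}
HOURLY_RATE = 80.0  # illustrative flat rate

def programme_cost(hours, rate):
    """Total human-resource cost over all business/engineering processes."""
    return sum(h * rate for h in hours.values())

as_is = programme_cost(hours_as_is, HOURLY_RATE)
to_be = as_is * 0.70  # the predicted ~30% average saving per program
print(f"As-Is: {as_is:,.0f}  To-Be (predicted): {to_be:,.0f}")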



Figure 4: Comparison between As-Is and To-Be Approaches



Finally, human resource estimation is also an important factor in deciding on the real implementation of a new approach. As new business/engineering processes are proposed and introduced within the different lifecycle domains of a powertrain assembly automation system due to the application of PPRLM, human resources are re-assigned to these processes based on the technical or managerial expertise required. Simulation models were devised to optimise the resource capacity of the thirteen different engineering groups.
7. CONCLUSIONS AND FUTURE
WORK
This research has focused on moving from product-centric to process-centric lifecycle management for automotive powertrain assembly systems. Based on the literature reviewed and industrial visits made to the Ford Motor Company (UK), a detailed understanding was gained of the design and build of new powertrain automation systems. It was realised that existing automation system design and build is very complex in nature and often requires 3 to 4 years from concept to launch. Thousands of business and engineering activities are carried out between globally distributed supply chain collaborators. Despite technological advancements, the existing solutions are still fragmented and are typically implemented in a sequential manner. Also, there is no well-established and proactive engineering approach available to investigate design alternatives prior to the building and testing of physical systems. In addition, the current methodology does not support easy and quick reconfiguration to accommodate unforeseen business changes. Fundamentally, this is due to the fact that the engineering support for the management of powertrain automation system implementation is not sufficiently developed to cover the whole lifecycle. As a result, the current ramp-up period and reconfiguration processes are too costly and too long, with very little design reuse.
This research has described in detail the application of PPRLM within the powertrain sector of the automotive industry. To make existing PLM systems more process-focussed, this research has initially worked on two aspects: 1) standard resource libraries of manufacturing processes; and 2) the new engineering services required to reuse such mechanisms in future automation systems. The proposed concept makes advanced, generic and open manufacturing solutions available prior to product engineering. The application of this research has been carried out at one of the leading automotive companies, the Ford Motor Company, UK. Key performance measures have been examined based on the end-user's business priorities. The application of the PPRLM approach has the potential to enable automation systems to be designed and developed in parallel with product engineering. As a result, an overall time saving of 9 months is predicted for a typical engine program. In addition, the impact of PPRLM in enhancing design robustness, and the potential savings in human resources and engineering cost, have been estimated and discussed in this paper.
The core concept driving this research is to deliver agility and reconfigurability within automation systems via new engineering services utilising reusable libraries of mechanisms. This research to date has been principally focused on the end-user. The PPRLM approach now needs to be implemented in engineering departments covering all aspects of the powertrain sector of the automotive industry on future engine programs. From the authors' viewpoint, there is a strong need to expand the core concept of this research within the business context of the other supply chain partners. This will help to identify their detailed business needs and to understand their approach to the design and build of automation systems. It will also provide a greater insight into the overall effectiveness of the PPRLM approach.
8. ACKNOWLEDGMENTS
The authors gratefully acknowledge the support of the EPSRC, via the Loughborough University Innovative Manufacturing and Construction Research Centre (IMCRC), as part of the Business Driven Automation project.
REFERENCES
ACEA (2008). "The Engine of Europe." European Automobile Manufacturers Association. http://www.acea.be/index.php/news/news_detail/the_engine_of_europe/.
Aziz, H., J. Gao, et al. "Open standard, open source and
peer-to-peer tools and methods for collaborative
product development." Computers in Industry
56(3): 2005. 260-271.
Baxter, D., R. Roy, et al. "A knowledge management
framework to support product-service systems
design." International Journal of Computer
Integrated Manufacturing 22(12): 2009. 1073-
1088.
Farhad, A. and D. Deba "Product Lifecycle Management:
Closing the Knowledge Loops." Computer
Aided Design & Applications 2(5): 2005. 577-
590.
Gunasekaran, A., H. B. Marri, et al. "Design and
Implementation of Computer Integrated
Manufacturing in Small and Medium-Sized
Enterprises: A Case Study." The International
Journal of Advanced Manufacturing Technology
16(1): 2000. 46-54.
Haq, I., R. Monfared, et al. "A new vision for the
automation systems engineering for automotive

477

powertrain assembly." International Journal of
Computer Integrated Manufacturing 23(4):
2010. 308-324.
Haq, I. U. (2009). Innovative Configurable and
Collaborative Approach to Automation Systems
Engineering for Automotive Powertrain
Assembly Wolfson School of Mechanical and
Manufacturing Engineering. Loughborough,
Loughborough University. PhD.
Haq, I. U., R. Harrison, et al. (2007). Lifecycle
Framework for Modular Configurable
Automation System. Fourth IFAC Conference
on Management & Control of Production &
Logistics. Romania.
Harrison, R., A. A. West, et al. "Distributed engineering
of manufacturing machines." Proceedings of the
Institution of Mechanical Engineers, Part B:
Journal of Engineering Manufacture 215(2):
2001. 217-231.
Kalkowska, J. and S. Trzcielinski "Some conditions of
implementing CE in advanced manufacturing
systems". Robot Motion and Control, 2004.
RoMoCo'04. Proceedings of the Fourth
International Workshop.2004. 299-309
Kopcsi, S., G. Kovács, et al. "Ambient intelligence as
enabling technology for modern business
paradigms." Robotics and Computer-Integrated
Manufacturing 23(2): 2007. 242-256.
Lee, S., R. Harrison, et al. "A component-based approach
to the design and implementation of assembly
automation system." Proceedings of the
Institution of Mechanical Engineers, Part B:
Journal of Engineering Manufacture 221(5):
2007. 763-773.
Lee, S. G., Y. S. Ma, et al. "Product lifecycle
management in aviation maintenance, repair and
overhaul." Computers in Industry 59(2-3): 2008.
296-303.
Masood, T. (2009). Enhanced Integrated Modelling
Approach to Reconfiguring Manufacturing
Enterprises. Wolfson School of Mechanical and
Manufacturing Engineering. Loughborough,
Loughborough University. PhD Thesis: 314.
Masood, T., J. A. Erkoyuncu, et al. "A digital decision
making framework integrating design attributes
knowledge and uncertainty in aerospace sector".
7th International Conference on Digital
Enterprise Technology, Athens, Greece.2011.
Masood, T., R. Roy, et al. "Challenges in digital
feedback of through-life service knowledge to
product design and manufacture". 7th
International Conference on Digital Enterprise
Technology, Athens, Greece.2011.
Masood, T. and R. Weston "An integrated modelling
approach in support of next generation
reconfigurable manufacturing systems."
International Journal of Computer Aided
Engineering and Technology 3(3-4): 2011. 372-
398.
Masood, T. and R. H. Weston "Enabling competitive
design of next generation reconfigurable
manufacturing enterprises". 4th International
Conference on Changeable, Agile,
Reconfigurable and Virtual Production,
Montreal, Canada.2011.
Ming, X. G., J. Q. Yan, et al. "Technology Solutions for
Collaborative Product Lifecycle Management -
Status Review and Future Trend." Concurrent
Engineering 13(4): 2005. 311-319.
Ming, X. G., J. Q. Yan, et al. "Collaborative process
planning and manufacturing in product lifecycle
management." Comput. Ind. 59(2-3): 2008. 154-
166.
Monfared, R., A. West, et al. "An implementation of the
business process modelling approach in the
automotive industry." Proceedings of the
Institution of Mechanical Engineers, Part B:
Journal of Engineering Manufacture 216(11):
2002. 1413-1427.
Raza, M. B., T. Kirkham, et al. (2009). Evolving
knowledge based product lifecycle management
from a digital ecosystem to support automated
manufacturing. Proceedings of the International
Conference on Management of Emergent
Digital EcoSystems. France, ACM.
Sharma, A. "Collaborative product innovation:
integrating elements of CPI via PLM
framework." Computer-Aided Design 37(13):
2005. 1425-1434.
Shyam, D. (2006). Industry Requirements and the
Benefits of Product Lifecycle Management.
School of Industrial and Manufacturing Science.
Cranfield, Cranfield University. MSc.
Singh, N. "Integrated product and process design: a
multi-objective modeling framework." Robotics
and Computer-Integrated Manufacturing 18(2):
2002. 157-168.
Sudarsan, R., S. J. Fenves, et al. "A product information
modeling framework for product lifecycle
management." Computer-Aided Design 37(13):
2005. 1399-1411.
Tony Liu, D. and X. William Xu "A review of web-based product data management systems." Computers in Industry 44: 2001. 251-262.



Proceedings of DET2011
7th International Conference on Digital Enterprise Technology
Athens, Greece
28-30 September 2011

DIGITAL FACTORY SIMULATION TOOLS FOR THE ANALYSIS OF A
ROBOTIC MANUFACTURING CELL
Caggiano A.
Fraunhofer Joint Laboratory of Excellence for
Advanced Production Technology,
Dept. of Materials & Production Engineering,
University of Naples Federico II, Italy
alessandra.caggiano@unina.it
Teti R.
Fraunhofer Joint Laboratory of Excellence for
Advanced Production Technology,
Dept. of Materials & Production Engineering,
University of Naples Federico II, Italy
roberto.teti@unina.it
ABSTRACT
Modern manufacturing systems need continuous improvement in order to meet the rapidly changing
market requirements. A new concept in the field of production engineering has been conceived to
optimize manufacturing systems design and reconfiguration: the Digital Factory. This approach is
based on the integration of diverse digital methodologies and tools, including production data
management systems and simulation technologies. In this paper, the Digital Factory approach is
applied to the analysis of an existing manufacturing system dedicated to aircraft engine components
production. Different manufacturing cell configurations involving the employment of handling
robots are studied through integration of modelling and simulation activities carried out by means of
both Discrete Event Simulation (DES) and 3D motion simulation software tools.
KEYWORDS
Digital Factory, Manufacturing Systems, Discrete Event Simulation, 3D Simulation

1. INTRODUCTION
Today's manufacturing industry is characterised by a very dynamic environment, pushing towards frequent reconfiguration and improvement of manufacturing systems. One of the main requirements of current manufacturing systems is so-called responsiveness to external drivers such as market demands, which enables the rapid launch of new products, fast adjustment of system capacity and functionality, and easy integration of new technologies into existing systems (Tolio et al, 2010).
Manufacturing system design and reconfiguration require a number of alternative solutions to be examined as often as possible. Analytical methods, both static and dynamic, have been proposed in the literature and are often used to calculate performance measures of alternative solutions (Gershwin, 1994; Matta et al, 2005; Li and Meerkov, 2009). However, when applied to the analysis of modern complex manufacturing systems, such methods can be very complicated and time-consuming.
For this reason, design and reconfiguration can be effectively supported by the employment of Information Technology (IT) (Westkämper, 2007; Maropoulos, 2003). In recent years, the evolution of IT has encouraged a significant introduction of new digital technologies in manufacturing (Chryssolouris et al, 2008).
In this framework, the Digital Factory concept has been introduced as a new paradigm in which production data management systems and simulation technologies are jointly used to optimize manufacturing system design and reconfiguration (Bracht and Masurat, 2005; Gregor et al, 2009; Schloegl, 2005; Woern et al, 2000). The key factor is the integration of the various processes and activities by using common data for all the applications (Kuhn, 2006).
The employment of digital modelling and simulation tools can reduce the time and cost of designing complex manufacturing systems, avoiding hard analysis and experimentation (De Vin et al, 2004; Papakostas et al, 2011).


Simulation plays a primary role, with different purposes depending on the simulation software tool employed (Hosseinpour and Hajihosseini, 2009).
Discrete Event Simulation (DES) is a valuable tool for experimenting with different manufacturing system 'what if' scenarios, allowing the system performance to be investigated in terms of production flow, bottlenecks, productivity, etc. (Caggiano et al, 2009). On the other hand, 3D motion simulation can be conveniently adopted to examine manufacturing system layout, ergonomics and robotics issues (Ramirez Cerda, 1995; Caggiano and Teti, 2010).
In this paper, the Digital Factory approach is
applied to the analysis of two actual manufacturing
cells dedicated to the production of aircraft engine
products.
Different manufacturing cell configurations
involving the employment of a handling robot are
studied through integration of modelling and
simulation activities carried out by means of both
DES and 3D simulation software tools.
3D motion simulation is employed to perform a detailed design of the manufacturing cells and assess their feasibility with reference to robot motion issues, such as the possibility of reaching all the targets, the safety of movements throughout the manufacturing cell and the organisation of a suitable layout.
The 3D simulation results concerning layout modifications, equipment arrangement, and estimated robot loading/unloading and processing times are used to set up the DES models of the various manufacturing cell configurations. The DES software tool is employed to analyse the behaviour of each system configuration in terms of production flow, productivity, utilization of the available facilities, system bottlenecks, and so on. The DES results are then examined in order to compare the diverse manufacturing cell configurations, with the aim of supporting the decision making process related to cell optimisation. The research work shows the essential role of data integration among different tools in carrying out an accurate and comprehensive analysis of a manufacturing system, since in most cases a single simulation tool is not sufficient to take into account all the relevant issues of the design or reconfiguration tasks, in agreement with the Digital Factory concept.
2. INDUSTRIAL CASE STUDY
The Digital Factory approach has been employed in
this research work to carry out investigations on a
real industrial case study.
The manufacturing system under examination,
dedicated to turbine vanes production, belongs to
the facilities of an aircraft engine manufacturing
company. Two grinding phases together with air
cleaning, deburring, washing and precision
measuring operations are required for product
completion. Two parallel manufacturing cells,
provided with the same grinding machine model, are
available in the plant and they can perform similar
operations on various turbine vane part numbers.
In each of the two manufacturing cells, a human
operator places every vane on its proper fixture
(different for each grinding phase), moves the part-
fixture assembly to the various machines and
performs the manual deburring of vanes.
Since manual loading of the vane-fixture assembly into the grinding machine is a very time-consuming operation (it can take up to 15 min), the company has provided one of the two identical grinding machines with a small robot dedicated to loading and unloading parts and fixtures on and off the grinding machine. This solution decreased the loading time from 15 to 3 min, i.e. an 80% reduction in its duration.
At present, two distinct manufacturing cells are available for turbine vane grinding: one with a loading/unloading robot integrated in the grinding machine, and one without it that needs manual loading/unloading. The schemes of the current cells are shown in Figure 1 a-b, while their components are summarised in Table 1.
Both cells have a deburring station where the human operator performs manual deburring on each vane after the grinding and air cleaning phases. This is a critical operation, since it is largely dependent on the operator's experience, manual ability and attention. An incorrect deburring process, or even sporadic errors due to a drop in operator concentration, can produce severe damage to the part. Such damage cannot be eliminated through repair machining, and the vane must be rejected.
In order to reduce such risks, the introduction of a robot to automate the deburring process and avoid human mistakes has been envisaged. The same robot could also be employed to move part-fixture assemblies among the various machines, thus leaving the human operator with the single task of placing parts on fixtures.
To decide into which of the two available manufacturing cells the new robot should be introduced, the employment of simulation tools represents very valuable support. These tools can be utilized to verify the feasibility of the robot employment in terms of layout and robotics issues, as well as to examine the consequences of the introduced changes on the manufacturing system productivity.



Figure 1 - Schemes of the current cells: (a) without loading robot (b) with loading robot.
Table 1. Cell components: (a) without loading robot (b) with loading robot.
(a) Without loading robot:
1. Fixtures Buffer
2. Input/Output Vanes Buffer
3. Vane/Fixture Assembly Bench
4. Coordinate Measuring Machine
5. Washing Station
6. Automatic Deburring Station
7. Air Cleaning Station
8. Grinding Input/Output Buffers
9. Grinding Machine
10. Tool Storage
11. Intermediate Buffer
(b) With loading robot:
1.-9. As in (a)
10. Loading/Unloading Robot
11. Tool Storage
12. Intermediate Buffer
3. MANUFACTURING CELL SIMULATION
3.1. 3D SIMULATION
The introduction of a robot as a new material handling system requires a deep analysis to investigate the feasibility of the solution in terms of reachability of all targets, safety of movements and layout reconfiguration.
This study can be carried out through the employment of 3D motion simulation tools. These can be suitably engaged in the design of a material handling system, such as a robot, to verify the material handling layout and path as well as the integration with other handling systems, equipment and operators (Kuhn, 2006). Kinematics modules manage the computation of robot kinematics, while collision detection modules detect collisions between surfaces.
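As a minimal illustration of the kind of computation a kinematics module performs, the sketch below evaluates the forward kinematics of a planar two-link arm; commercial simulation tools of course handle full 6-degree-of-freedom chains together with collision geometry.

from math import cos, sin

def fk_2link(theta1, theta2, l1, l2):
    """Forward kinematics of a planar 2-link arm: joint angles (rad) and
    link lengths (mm) -> end-effector (x, y) position."""
    x = l1 * cos(theta1) + l2 * cos(theta1 + theta2)
    y = l1 * sin(theta1) + l2 * sin(theta1 + theta2)
    return x, y

# With both joints at 0 rad the arm is stretched along x.
print(fk_2link(0.0, 0.0, 1000.0, 1050.0))  # (2050.0, 0.0)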
Geometrical and functional features of machines, equipment and material handling systems are particularly relevant in this type of simulation. Models can be created on the basis of available libraries, through design within the software environment, or by importing existing CAD files. For this simulation activity, 3D models of the components of both manufacturing cells were created, while the already available CAD files of parts and fixtures were imported.


As regards the robot models, dedicated libraries
offered by the simulation software were employed
to retrieve the already existing loading/unloading
robot model as well as to select and test the most
appropriate model for the new handling/deburring
robot.
On the basis of the schemes shown in Figure 1 a-b, all the components of the two manufacturing cells were arranged in the 3D simulation software to set up the global layout with properly sized machines and devices (Figures 3 and 4).
For layout optimization, particular attention must be paid to safety, in order to avoid any interference between the robot movement and the other cell components as well as the human operator. Moreover, the accessibility constraints related to the robot must be taken into account to appropriately locate the components of the manufacturing cell and set their relative distances.
In this perspective, the employment of 3D simulation proves essential to virtually verify the activities that the robot has to perform in the manufacturing cell, and to determine whether the current layout allows all the tasks to be executed, both in terms of reachability and of safety against possible collisions.
One purpose of this simulation is to determine the type of robot that should be employed, as well as the proper location of the robot, which depends on its size and on the need to reach all the targets, in particular the deburring station where the robot will perform the automatic deburring process. By creating target points in the 3D simulation software, it is possible to check whether a robot is able to reach all the targets with the current layout, and to modify the cell configuration if this condition is not satisfied.
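A coarse pre-check of this condition can be sketched as follows; the target coordinates are hypothetical, and a target passing this spherical-workspace filter still requires the full inverse kinematics and collision check inside the 3D simulator.

from math import dist

MAX_REACH_MM = 2050.0  # maximum reach of the robot under evaluation

def unreachable(base, targets, reach=MAX_REACH_MM):
    """Flag targets whose straight-line distance from the robot base
    exceeds the maximum reach (a necessary, not sufficient, condition)."""
    return [name for name, p in targets.items() if dist(base, p) > reach]

# Hypothetical cell coordinates in mm, robot base at the origin.
targets = {"deburring station": (1500, 900, 400),
           "washing station": (2100, 1200, 500)}
print(unreachable((0, 0, 0), targets))  # ['washing station']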
As regards the selection of the most suitable robot model for the manufacturing cells, several factors were considered: first, the payload to be carried and the robot size. Since the vane-fixture assemblies weigh about 15 kg, to which the gripper weight must be added, the 20 kg robot category was considered too risky and the immediately higher 50 kg category was chosen. Another parameter to be considered is the robot dimension, since the robot should be able to reach all the desired targets within the manufacturing cell. Finally, very good accuracy is required in order to perform an acceptable deburring of the products.
On the basis of these criteria, the robot chosen for the handling and deburring tasks in the manufacturing cell is the FANUC M710iC/50. It has 6 degrees of freedom and can perform handling, loading and unloading of medium loads (payload about 50 kg). The maximum reach is 2050 mm, the weight is 560 kg and the repeatability is < 0.07 mm.

Figure 3. 3D model of the manufacturing cell with the new
handling/deburring robot for 3D motion simulation.

Figure 4. 3D model of the manufacturing cell with both the
existing loading/unloading robot and the new
handling/deburring robot for 3D motion simulation.
A 3D model of the FANUC M710iC/50 robot provided with the proper kinematics was obtained from the software robot database.
A gripper for the robot, similar to a fork, was designed by the company engineers to handle the fixtures by inserting its prongs into the two grooves of the fixture base. The movement required to unload a fixture-part assembly from a machine consists of a horizontal translation to insert the prongs into the grooves and a vertical movement to raise the assembly.
In order to take into account operational safety requirements in the manufacturing cell, not only the event of collisions between robot and machines should be considered, but also any hazard related to human-robot interaction due to the presence of a human operator assembling vanes and fixtures on the assembly table.
A possible solution consists in enclosing with a barrier the entire manufacturing cell zone within which the robot is free to move.


The simulation helped identify the zones where communication with the external environment needs to be allowed.
In particular, the assembly bench was configured as a rotating table provided with input and output positions: once the operator has mounted the part on the fixture outside the cell, the part-fixture assembly enters the bounded zone automatically, allowing the operator to use a working position distinct from the robot area. A slot was designed in the barrier with a height sufficient to allow the transfer of the part-fixture assembly.
All the stations that do not need to be reached by the operator were located inside the manufacturing cell boundaries. As regards the grinding machines, the robot should be able to load the part-fixture assembly onto them, while the operator needs to access the tool storage area on the grinding machine side for tool change and maintenance. Thus, the safety barrier was placed in line with the front of each grinding machine.
With the described layout configurations, shown in Figures 3 and 4, the simulation of the robot movement throughout the cell was carried out to investigate the feasibility of the whole manufacturing cycle.
A cycle was simulated to verify the possibility for the robot to reach each single target, to examine the path followed by the robot from one target to the next, and to check whether any collision occurred during the robot motion.
The robot model proved to be suitable for the manufacturing cell, as it was able to reach all the targets once all the manufacturing cell components were suitably arranged.
The results of this simulation offered information on the proper layout to be adopted as well as on the robot movement and loading/unloading times.
Once the feasibility of the robot introduction into both existing manufacturing cells was verified through 3D simulation, Discrete Event Simulation represents a valid support tool to decide in which of the two cells the robot would be more conveniently placed.
3.2. DISCRETE EVENT SIMULATION
Discrete Event Simulation (DES) was employed to evaluate the effects of the robot introduction into the two manufacturing cells in terms of productivity and resource utilization.
DES models of the two existing manufacturing cells (one without any robot and the other with a loading/unloading robot) and of the two new manufacturing cells (both provided with a new handling/deburring robot) were built using the 3D simulation results.
The 3D models of the manufacturing cell components already created for the 3D simulation were imported using the standard exchange format IGES and arranged according to the final 3D simulation layout. Other relevant data from the 3D simulation results were the robot speed and the loading/unloading times on the cell components.
Both the existing and the new manufacturing cells were simulated, thus obtaining four simulation cases to be employed for comparison and assessment (Figure 5 a-d).
To further improve the productivity analysis, for each of the four simulation cases the number of fixtures per phase was progressively increased to examine its influence on production time reduction.
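A minimal sketch of such a fixture sweep, written with the simpy DES library and hypothetical per-vane process times rather than the industrial data, is given below: each fixture is a limited resource held for the whole vane cycle, while the grinder is held only for loading and grinding, so the makespan stops improving once enough fixtures keep the bottleneck machine saturated.

import simpy

KIT_SIZE = 34                          # vanes in a full kit
LOAD, GRIND, DEBURR = 3.0, 25.0, 10.0  # hypothetical times in minutes

def vane(env, fixtures, grinder, finished):
    with fixtures.request() as f:      # fixture held for the whole cycle
        yield f
        with grinder.request() as m:   # grinder held for load + grind only
            yield m
            yield env.timeout(LOAD + GRIND)
        yield env.timeout(DEBURR)      # off-machine work, fixture still busy
    finished.append(env.now)

def makespan(n_fixtures):
    env = simpy.Environment()
    fixtures = simpy.Resource(env, capacity=n_fixtures)
    grinder = simpy.Resource(env, capacity=1)
    finished = []
    for _ in range(KIT_SIZE):
        env.process(vane(env, fixtures, grinder, finished))
    env.run()
    return max(finished)

for n in range(1, 7):
    print(f"{n} fixtures: {makespan(n):.0f} min")  # saturates at 2 fixtures here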
The resulting simulation runs, consisting of the execution of the cycle to produce a full kit of vanes (34 units), showed that adding fixtures per phase is convenient only up to 4 fixtures for cases n. 1 and n. 2, as further fixtures are unable to reduce the production time and only increase the investment cost. As regards cases n. 3 and n. 4, 3 fixtures per phase appear to be sufficient: the production time is not affected by the addition of new fixtures, which only add to the cost.
In Figure 6, for each of the four cell configurations, the total time required to produce an entire kit of vanes is plotted versus the number of fixtures per phase. It can be observed that the cell configuration showing the minimum production time is n. 3, i.e. the one with a central handling/deburring robot. Actually, the introduction of the central handling/deburring robot into cell n. 1 yields a significant decrease in production time, whereas the same robot in cell n. 2 causes only a small time reduction.
To support the decision concerning where the robot should be placed, it is useful to analyse the two possible configurations that would be created by introducing the robot either in cell n. 1 or in cell n. 2. In the first configuration, the central robot is introduced into cell n. 1, thus leading to the layout of cell n. 3, while cell n. 2 remains unchanged. In the second configuration, the central robot is introduced into cell n. 2, thus leading to the layout of cell n. 4, while cell n. 1 remains unchanged.






Figure 5. DES models for the four manufacturing cell arrangements:
(a) current cell without any robot; (b) current cell with loading robot;
(c) new cell with a central robot; (d) new cell with both loading robot and central robot

Figure 6. Production time vs. number of fixtures per phase for the 4 simulation cases.
Cell n.1: cell without any robot
Cell n.2: cell with a loading/unloading robot
Cell n.3: cell with a central handling/deburring robot
Cell n.4: cell with both loading/unloading and handling/deburring robots



Figure 7. Production time vs. number of fixtures per phase for the two configurations.
Configuration 1: cell n.2 and cell n.3; Configuration 2: cell n.1 and cell n.4
The production time versus the number of fixtures
per phase is reported for both the first and the second
configuration in Figure 7. The bar chart shows that
the first configuration leads to shorter production
times for any number of fixtures from 1 to 6. Thus,
this configuration appears to be the best solution,
and the handling/deburring robot should be placed into
cell n. 1, presently without any robot, so as to optimize
the production time.
As regards the number of fixtures per phase,
Figure 7 shows that the minimum production time
for the first configuration is reached with 4 fixtures.
However, by examining Figure 6, one further
consideration can be made: the minimum production
time for cell n. 2 is achieved when 4 fixtures per
phase are employed, but for cell n. 3, only 3 fixtures
per phase are sufficient, since no improvement is
observed with further fixtures.
Thus, the optimal solution in terms of both
productivity and fixtures cost is given by the first
configuration composed of cell n. 2 with 4 fixtures
per phase and cell n. 3 with 3 fixtures per phase.
4. CONCLUSIONS
In this paper, the Digital Factory approach was
applied to the analysis of a real manufacturing
system dedicated to the fabrication of aircraft engine
products.
Diverse manufacturing cell configurations
involving the introduction of handling robots were
studied through integration of modelling and
simulation activities carried out by means of DES
and 3D motion simulation software tools.
3D motion simulation was employed to perform a
detailed design of the manufacturing cells and
assess the feasibility of different cell configurations
in terms of robot motion, reachability of all targets
and safety of movements.
The 3D motion simulation results concerning
layout, equipment arrangement, and estimated robot
loading/unloading and processing times were used
to define the DES models of the manufacturing cells.
For each cell configuration, DES was employed
to analyse the cell behaviour in terms of production
flow, productivity, resource utilization and
bottlenecks of the system. The DES results were
therefore examined to compare the diverse cell
configurations and their performance in terms of
production time in order to support the decision
making for optimal solution identification.
A single simulation tool was not sufficient to
consider all the relevant issues in manufacturing
system design: in the Digital Factory framework,
the integration of different simulation tools is essential
to achieve an accurate and comprehensive analysis.
5. ACKNOWLEDGMENTS
This research work was carried out within the
framework of the Executive Program of Scientific
and Technological Co-operation between Italy and
Hungary, Ministry of Foreign Affairs, under the
Joint Project on: Digital Factory, in collaboration
with the Budapest University of Technology &
Economics, Budapest, Hungary (2011-2013).
The Fraunhofer Joint Laboratory of Excellence
for Advanced Production Technology (Fh -
J_LEAPT) at the Department of Materials and
Production Engineering, University of Naples
Federico II, is gratefully acknowledged for its
support to this research activity.
REFERENCES
Bracht U. and Masurat T., The Digital Factory Between
Vision And Reality. Computers in Industry, Vol. 56,
No. 4, 2005, pp. 325-333


Caggiano A., Keshari A. and Teti R., Analysis and
Reconfiguration of a Manufacturing Cell through
Discrete Event Simulation, 2nd IPROMS Int.
Researchers Symposium, Ischia, Italy, 2009, ISBN
978-88-95028-38-5, pp. 175-180
Caggiano A. and Teti R., Simulation of a Robotic
Manufacturing Cell for Digital Factory Concept
Implementation, 7th CIRP Int. Conf. on Intelligent
Computation in Manuf. Engineering, CIRP ICME 10,
Capri, Italy, 23-25 June 2010, ISBN 978-88-95028-
65-1, pp. 76-79
Chryssolouris G., Mavrikios D., Papakostas N., Mourtzis
D., Michalos G. and Georgoulias K., "Digital
manufacturing: history, perspectives, and outlook",
Proceedings of the Institution of Mechanical Engineers
Part B: Journal of Engineering Manufacture, Volume
222, No. 5, 2008, pp.451-462
De Vin L.J., Ng A.H.C. and Oscarsson J., Simulation
Based Decision Support for Manufacturing System
Life Cycle Management, Journal of Advanced
Manufacturing Systems, Vol. 3 No. 2, 2004, pp. 115-
128
Gershwin S.B., Manufacturing Systems Engineering,
PTR Prentice Hall, Englewood Cliffs, N.J., 1994
Gregor M., Medvecký Š., Matuszek J. and Štefánik A.,
Digital Factory, Journal of Automation, Mobile
Robotics & Intelligent Systems (JAMRIS), Vol. 3, No.
3, 2009, ISSN 1897-8649, pp. 123-132
Hosseinpour F., Hajihosseini H., Importance of
Simulation in Manufacturing, World Academy of
Science, Engineering and Technology, Vol. 51, March
2009, ISSN: 2070-3724, pp. 285-288
Kühn W., Digital Factory - Integration of Simulation
Enhancing the Product and Production Process
towards Operative Control and Optimisation,
International Journal of Simulation, Vol. 7, No. 7,
2006, ISSN 1473-8031, pp. 27-39
Li J. and Meerkov S.M., Production Systems
Engineering, Springer, New York, 2009
Maropoulos P.G., Digital Enterprise Technology -
Defining Perspectives and Research Priorities,
International Journal of Computer Integrated
Manufacturing, Vol. 16, 7-8, 2003, pp. 467-478
Matta A., Semeraro Q. and Tolio T., Configuration of
AMSs, In: Design of Advanced Manufacturing
Systems, (ed. Matta A., Semeraro Q.), Springer, 2005,
ISBN 1-4020-2930-6, pp. 125-189
Papakostas N., Michalos G., Makris S., Zouzias D. and
Chryssolouris G., Industrial applications with
cooperating robots for the flexible assembly,
International Journal of Computer Integrated
Manufacturing, Vol. 24, No. 7, 2011, pp. 650-660
Ramirez Cerda C.B., Performance Evaluation of an
Automated Material Handling System for a Machining
Line using Simulation, Proc. of 1995 Winter
Simulation Conference, Arlington, Virginia, 1995,
ISBN: 0-7803-3018-8, pp. 881-888
Schloegl W., Bringing the Digital Factory Into Reality -
Virtual Manufacturing with Real Automation Data,
International Conference on Changeable, Agile,
Reconfigurable and Virtual Production CARV, 2005,
pp. 187-192
Tolio T., Ceglarek D., ElMaraghy H.A., Fischer A., Hu
S.J., Laperriere L., Newman S.T. and Vancza J.,
SPECIES - Co-evolution of products, processes and
production systems, CIRP Annals, Vol. 59, No. 2,
2010, pp. 672-693
Westkämper E., Digital Manufacturing in the Global
Era, Digital Enterprise Technology: Perspectives and
Future Challenges (ed. Cunha, P.F., Maropoulos,
P.G.), Springer, New York, 2007, ISBN 978-0-387-
49863-8, pp. 3-14
Woern H., Frey D. and Keitel J., Digital Factory -
Planning and Running Enterprises of the Future, 26th
Annual Conference of the IEEE Industrial Electronics
Society, 2000, ISBN 0-7803-6456-2, pp. 1286-1291



DIVERSE NON CONTACT REVERSE ENGINEERING SYSTEMS FOR
CULTURAL HERITAGE PRESERVATION
Segreto T.
Fraunhofer Joint Laboratory of Excellence for
Advanced Production Technology,
Dept. of Materials & Production Engineering,
University of Naples Federico II, Italy
tsegreto@unina.it
Caggiano A.
Fraunhofer Joint Laboratory of Excellence for
Advanced Production Technology,
Dept. of Materials & Production Engineering,
University of Naples Federico II, Italy
alessandra.caggiano@unina.it


Teti R.
Fraunhofer Joint Laboratory of Excellence for
Advanced Production Technology,
Dept. of Materials & Production Engineering,
University of Naples Federico II, Italy
roberto.teti@unina.it

ABSTRACT
Reverse Engineering (RE) is the process of duplicating an existing part, subassembly or product,
without drawings, documentation or a computer model. RE is widely employed for applications in
areas as diverse as manufacturing engineering, industrial design and cultural heritage. As regards
the latter, the use of RE for tangible cultural heritage can be developed for purposes such as
reproduction, computer-aided repair, virtual museums, and artefact condition monitoring. Digital
data acquisition, i.e. the first stage of the RE procedure, is critical as the choice of the detection
methodology can affect the quality of the point cloud and the resulting surface reconstruction or
CAD model creation. In this paper, two non contact RE laser systems, respectively based on a
coordinate measuring machine and a portable 3D scanning device, are utilised for data acquisition
and digital reconstruction of an antique porcelain sculpture of complex geometry to comparatively
assess the RE systems' performance.
KEYWORDS
Reverse Engineering, Laser Scanning, Cultural Heritage

1. INTRODUCTION
Objects are very important to the study of human
civilisation because they provide a concrete basis for
ideas and an essential contribution to their
validation. Indeed, the significance of physical
artefacts can be interpreted against the backdrop of
socioeconomic, political, ethnic, religious and
philosophical values of a particular culture. Efforts
in the conservation of objects from earlier times, i.e.
the preservation of tangible cultural heritage
artefacts, demonstrate a recognition of the necessity
of the past and of the things that tell its story (Levoy
et al, 2000). As a matter of fact, the actuality of an
object, as opposed to a reproduction, draws people
in and gives them a literal way of touching the past.
This unfortunately poses a danger as things are
damaged by the hands of tourists, by the light
required to display them, and by other risks related
to making an object known and available (Levy and
Dawson, 2006).
Digital acquisition methods, known as Reverse
Engineering (RE), provide a formidable
technological solution that is able to acquire the


shape and the appearance of artefacts with
unprecedented precision in human history (Varady
et al, 1997; Rho et al, 2002; Berndt and Carlos,
2000). Nowadays, RE is widely used in numerous
applications, such as manufacturing (from the
analysis of competitors' products to quality control
of industrial parts), industrial design (model
creation for virtual reality environments), and
tangible cultural heritage conservation (Raja and
Fernandes, 2008). The employment of RE methods
for cultural heritage artefact preservation can be
developed for several purposes such as reproduction
(faithful copies can be easily and quickly obtained
from the 3D model through machining processes or
rapid prototyping), computer-aided repair
(appropriate software tools allow the virtual
assembly of objects for artefact reconstruction
without actual physical intervention), virtual
museum realisation (location of the 3D objects in
their original environment through the creation of
suitable scenarios), monitoring (detection of artefact
modifications over time) (Segreto et al, 2010;
Bernard, 2005; Papaioannou, 2001; Buzzichelli et al
2003).
For all these categories of applications, a large
number of systems based on different approaches
(mechanical, optical, laser and ultrasonic based
sensors) have been developed and utilised. Data
acquisition is a critical step of the RE procedure; as
a matter of fact, the choice of the RE method affects
the quality of the acquired point cloud and,
consequently, the resulting reconstructed surface or
CAD model (Chivate and Jablokow, 1993).
In this paper, two different non contact RE
scanning systems based on laser devices are utilised
for the 3D data acquisition and digital
reconstruction of a cultural heritage artefact
consisting of an antique porcelain sculpture of
complex geometry, representing the bust of a child.
2. CASE STUDY
The cultural heritage artefact under study is an early
20th century bisque porcelain sculpture
manufactured in Limoges, reproducing one of the
two original terracotta busts, made in 1777 by Jean-
Antoine Houdon (1741-1828), which depict Louise
and Alexandre, the children of Alexandre-Théodore
Brongniart, a distinguished Neoclassical French
architect (www.louvre.fr).
The sculpture, made of porcelain on a blue base
with gilt rings, reproduces the head, shoulders and
upper chest of the young Alexandre (Figure 1), with
the following features:
- Overall dimensions: 190 mm x 130 mm x 80 mm (height x width x depth)
- Weight: 0.5 kg
- Origin: Limoges, France
- Dating: 1923
- Preservation state: very good condition, without chips, cracks or repairs.


Figure 1 Early XX century porcelain sculpture depicting
the bust of the young Alexandre Brongniart
3. DATA ACQUISITION THROUGH
DIVERSE 3D LASER SCANNING
SYSTEMS
Two diverse RE laser scanning systems,
respectively based on a Coordinate Measuring
Machine (CMM) and a Portable Scanning System
(PSS), were employed for the 3D digital data
acquisition of the bust of Alexandre Brongniart.
3.1. COORDINATE MEASURING MACHINE
(CMM)
A CMM is a high precision measuring device
consisting of four main components: the machine,
the measuring probe, the control or computing
system, and the measuring software. A CMM works
in much the same way as a finger when it traces map
coordinates. Its 3 axes form the machines
coordinate system. The CMM utilised in this paper
(Dea Global Image 091508) is characterized by a
precision error, E, equal to 4.9 m for point-to-point
measurement, a repeatability of 1.7 m and a
resolution of 0.1 m. The non contact probing
system mounted on the central carriage of the CMM
is composed of the probe head and the actual probe,
Metris LC 15. The latter is an optical laser probe
with a high resolution, a 8 m accuracy, and a laser
stripe scanner (scan speed: 19200 pps) working on
the basis of the triangulation principle.
The 3D digital data acquisition of the porcelain
bust was carried out through subsequent scans
characterized by the following parameters: point
distance = 0.15 mm, stripe distance = 0.15 mm,


overlap = 0.2 mm. Each scan is executed with a
specific probe head orientation, controlled by the
combination of two angles, A and B.
In order to acquire the sculpture geometry, 35
diverse A-B angle combinations were employed.
Moreover, two different positions of the bust
(Figure 2) were required to acquire the entire
sculpture geometry, because of the constraints
related to the A angle maximum range value, equal
to 97.5°. As a consequence of the chosen scan
parameters, the large number of angle combinations
and the two different positions required for the
complex sculpture geometry, the duration of the data
acquisition was rather long: about 7 working days.
The first position of the sculpture on the CMM
allowed the acquisition of the major part of the
geometry (Figure 3a) and required the longest time;
however, as the A angle constraints did not allow the
probe to reach the area comprised between the chin
and the neck (Figure 3b), a second position was
employed to scan this specific area (Figure 3c).
Then, the two diverse scans were aligned and
merged in order to obtain a single final point cloud
(Figure 3d).

Figure 2 Two different positions of the bust for CMM laser scanning

(a) Initial point cloud (b) Missing area (c) Chin area scan (d) Final point cloud
Figure 3 Point clouds acquired through the CMM laser scanner
3.2. PORTABLE SCANNING SYSTEM (PSS)
The 3D Portable Scanning System (PSS) consists of
a high precision CIMCORE Infinite II SC
anthropomorphic arm with 7 rotational axes
provided with an optical laser system. The arm has
an ergonomic pistol grip to enable the manual
measurement of 3D points at any orientation within
the arm's spherical reach (2.8 m), with a precision
of 0.040 mm and a repeatability of 0.028 mm.
The Perceptron Scanworks v4 optical laser system
mounted on the arm allows the collection of up to
23,000 points per second with a precision of 0.024 mm.
The 3D digital data acquisition of the porcelain
bust was carried out through subsequent line scans
characterized by the following parameters: profile =
mat white (referring to the object surface colour),
scan rate = 30 Hz (maximum available value).
Each scan was executed by manually following
the surface profile without restriction to a specific


angle orientation; in this way, data acquisition was
extremely fast (1 hour).
Only one position was sufficient to acquire the
entire geometry, as the arm can be rotated with a
large freedom of movement (Figure 4). As shown in
Figure 5, the diverse scans are highlighted by
different colours and need further processing in
order to reduce scan overlap.
In Table 1, the details of data acquisition and
digital model reconstruction for the two laser based
scanning systems are summarized.

Table 1 - Data acquisition and digital model reconstruction details

Data acquisition system                    CMM                              Portable Scanning System
Line scans                                 253                              51
Number of points                           1,251,149                        104,529
Number of triangles                        2,501,750                        207,650
Number of curves                           4,967                            2,232
Angles                                     35 diverse A-B angle             manual scans, no restriction to
                                           combinations                     a specific angle orientation
Positions of the sculpture                 2                                1
Duration of data acquisition               about 7 working days             about 1 hour
Duration of digital model reconstruction   about 7 working days             about 5 working days
4. REVERSE ENGINEERING DIGITAL
MODEL RECONSTRUCTION
The digital model reconstruction of the two acquired
point clouds obtained through the diverse RE
systems was performed using the same 3D
metrology software platform: Polyworks V11 by
Innovmetric (Reference Manual, 2011).
This software consists of several modules, each
dedicated to a specific phase of the reverse
engineering procedure. The IMAlign module allows
the scanning of objects and the alignment of the
resulting datasets. The IMEdit module covers several
essential steps for polygonal model editing and the
generation of curves, NURBS patches and models
that can be exported as CAD files readable by other
software tools. The IMInspect module allows
comparing data (e.g. a point cloud) to a reference
(e.g. a CAD model), measuring the dimensions of
specific features, and generating comparison and
verification reports.
For CAD file reconstruction and data comparison
several steps were required:
- Point cloud processing: different operations were
performed to improve the two point clouds: noise
and overlap reduction, redundant points deletion.
The resulting clouds were then wrapped to draw a
triangular surface between every three data
points.
- Polygon Model editing: the geometries based on
triangles (polygon mesh) were enhanced through
several actions (e.g. fill holes, reconstruct mesh,
optimize mesh, etc.). In Figure 6 the meshes
obtained from (a) the CMM and (b) the PSS
acquired point clouds are shown.
- Curves and patches generation: curves were
created on the polygonal model in order to obtain
closed sets for NURBS patch construction.
Because of the sculpture geometry complexity,
the curves automatically generated by the
software needed further improvement to achieve
the 4-sided boundaries required for patch
generation. Figure 7 shows the curves and patches
generated by the software for (a) the CMM and
(b) the PSS polygonal models, indicating in violet
the patches to be modified, and in yellow the
patches surrounded by closed sets of 4
magnetised curves. As can be noticed, many
errors occurred, and a large number of curve
editing operations, such as edge and corner
reconstruction, boundary creation, and fitting,
were required to improve the curves and patches. The
final result is shown in Figure 8 for (a) the CMM
and (b) the PSS curves. The final NURBS patches
were then employed to create the NURBS models
over the two polygonal models.
- CAD phase: the NURBS models were finally
exported as IGES files, as shown in Figure 9 for (a)
the CMM and (b) the PSS models.
The time for the digital reconstruction starting
from the two acquired point clouds was about 5
working days for the PSS point cloud and about 7
working days for the CMM point cloud.
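As a rough illustration of the generic point-cloud clean-up and meshing stage described above, the sketch below uses the open-source Open3D library; the paper used the Polyworks V11 modules, whose APIs differ, and the file names and parameter values here are illustrative assumptions.

```python
# Hedged sketch of point cloud processing and polygonal model creation using
# Open3D; parameter values and file names are illustrative assumptions.
import open3d as o3d

pcd = o3d.io.read_point_cloud("bust_merged.ply")            # merged point cloud
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20,    # noise reduction
                                        std_ratio=2.0)
pcd = pcd.voxel_down_sample(voxel_size=0.15)                # redundant point deletion
pcd.estimate_normals()                                      # required for meshing
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
o3d.io.write_triangle_mesh("bust_mesh.ply", mesh)           # polygonal model
```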




Figure 4 Portable scanning system: acquisition of the bust

Figure 5 Portable scanning system: acquired point cloud

Figure 6 Polygonal models obtained from (a) the CMM and (b) the PSS point cloud

Figure 7 Curves and patches generated by the software for (a) the CMM (b) the PSS polygonal model



Figure 8 Improvement of curves/patches for (a) the CMM and (b) the PSS models

Figure 9 NURBS models obtained from (a) the CMM and (b) the PSS

5. DIVERSE LASER REVERSE
ENGINEERING SYSTEMS COMPARISON
A comparison between the results obtained with the
two diverse laser scanning systems, CMM and PSS,
was carried out on the polygonal models generated
from the acquired point clouds. The 3D deviation
parameter selected was the shortest distance
between the two polygonal meshes of Figure 6 and
is reported as a coloured map in Figure 10. The
chosen deviation range is ±0.5 mm, where the red
colour represents the +0.5 mm distance and the
violet colour the -0.5 mm distance.
The areas in red and blue correspond to the highest
deviations between the CMM and PSS models,
verified for the smallest and most complex features
of the sculpture such as the eyes, the mouth, and the
curls. These high deviations reflect the quality
difference between the two models, which can be
easily verified by visual examination of the two
final CAD models shown in Figure 11.
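A comparable, although unsigned, deviation map can be computed with a nearest-neighbour query. The sketch below, assuming NumPy/SciPy and vertex arrays expressed in mm, is only an approximation of the mesh-to-mesh shortest-distance comparison performed in the metrology software.

```python
# Illustrative sketch: approximate per-vertex deviation between the two models
# as the distance from each CMM mesh vertex to the nearest PSS mesh vertex.
import numpy as np
from scipy.spatial import cKDTree

def deviation_map(cmm_vertices, pss_vertices, clip_mm=0.5):
    """(N,3)/(M,3) vertex arrays in mm; distances saturated at the 0.5 mm scale."""
    tree = cKDTree(pss_vertices)
    dist, _ = tree.query(cmm_vertices)      # unsigned nearest-neighbour distance
    return np.minimum(dist, clip_mm)        # colour scale saturates at 0.5 mm

# usage: d = deviation_map(v_cmm, v_pss); colour the CMM vertices by d
```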
Undoubtedly, the CMM CAD model is the more
accurate and truthful of the two reverse engineered
digital reconstructions of the sculpture, although the
processing time for the CMM procedure was much
longer than for the PSS procedure, particularly as
regards the point cloud acquisition (over 50 hours
versus 1 hour). At any rate, it may be worth noting
that the investment cost for the CMM-based
scanning system is about 3 times higher than for the
portable scanning system.




Figure 10 Comparison between the CMM and PSS polygonal models; the two models are superimposed in the same figure

Figure 11 Final CAD models obtained from (a) the CMM and (b) the PSS
6. CONCLUSION
The full process of converting the complex
geometry of an antique porcelain sculpture into its
3D digital representation has been illustrated as a
case study of tangible cultural heritage preservation.
The geometrical information of the artefact was
collected using two diverse laser based RE systems,
the first mounted on a CMM and the second on a
Portable Scanning System (PSS).
The same 3D digital reconstruction procedure
was applied to the point clouds acquired with the
two RE systems in order to create 3D digital models
of the artefact to be used as references in the
assessment and comparison of the RE systems
performance.
The CMM RE procedure was shown to be the
more accurate and truthful, but its point cloud
acquisition time was more than one order of
magnitude longer than for the PSS RE procedure.
7. ACKNOWLEDGMENTS
This research work was carried out within the
framework of the Executive Program of Scientific
and Technological Co-operation between Italy and
China, Ministry of Foreign Affairs, under the
Significant Bi-Lateral Project on: Reverse


Engineering Methods for 3D Digital
Reconstruction, in collaboration with Tianjin
University, Tianjin, China (duration: 2010-2013).
The Fraunhofer Joint Laboratory of Excellence
for Advanced Production Technology (Fh -
J_LEAPT) at the Department of Materials and
Production Engineering, University of Naples
Federico II, is gratefully acknowledged for its
support to this research activity.
REFERENCES
Bernard A., Virtual Engineering: Methods and Tools,
Journal of Engineering Manufacture, Vol. 219, No B5,
2005, pp 413-422
Berndt E and Carlos J, Cultural Heritage in the Mature
Era of Computer Graphics, IEEE Computer Graphics
and Applications, Vol. 20, No 1, 2000, pp 36-37
Buzzichelli G, Iuliano L, Pocci D, Vezzetti E, Reverse
Engineering: applicazione al recupero di bassorilievi,
Progetto Restauro, Vol. 25 2003, pp 3-7
Chivate PN, Jablokow AG, Solid-model Generation
from Measured Point Data, Computer-Aided Design,
Vol. 25, No 9, 1993, pp 587-599Levoy M et al., The
digital Michelangelo Project: 3D scanning of Large
Statues, 27th Conf. on Computer Graphics and
Interactive Techniques, July 2000, pp 131-144
Levy R and Dawson P, Reconstructing a Thule
Whalebone House Using 3D Imaging, IEEE
MultiMedia, Vol. 13, No. 2, 2006, pp 78-83
Papaioannou G, Karabassi EA, Theoharis T, Virtual
Archaeologist: Assembling the Past, IEEE Computer
Graphics and Applications, Vol. 21, No 2, 2001, pp 53-
59
Polyworks V11, Reference Manual, 2011, Innovmetric
Raja V and Fernandes KJ Reverse Engineering: An
Industrial Perspective, Raja V and Fernandes KJ Eds.,
Springer-Verlag, London, 2008, pp 242
Rho H-Min, Jun Y, Park S, Choi H-R, A Rapid Reverse
Engineering System for Reproducing 3D Human Busts,
CIRP Annals, Vol. 51, No 1, 2002, pp 139-143
Segreto T, Caggiano A, Teti R, 3D Digital
Reconstruction of a Cultural Heritage Marble Sculpture,
7th CIRP Int. Conf. on Intelligent Computation in
Manufacturing Engineering CIRP ICME 10, Capri,
Italy, 23-25 June 2010, pp 116-119
Varady T, Martin RR, Cox J, Reverse Engineering of
Geometric Models an Introduction, Computer-Aided
Design, Vol. 29, No. 4, 1997, pp 255-26


A TWO-PHASE INSTRUMENT SELECTION SYSTEM FOR LARGE VOLUME
METROLOGY BASED ON INTUITIONISTIC FUZZY SETS WITH TOPSIS
METHOD
Bin Cai
University of Bath
b.cai@bath.ac.uk

Jafar Jamshidi
University of Bath
enpjj@bath.ac.uk


Paul G Maropoulos
University of Bath
p.g.maropoulos@bath.ac.uk

Paul Needham
BAE Systems Limited
Paul.Needham@baesystems.com
ABSTRACT
Instrument selection is deemed a compulsory and critical process in automated inspection
planning for large volume metrology applications. The process identifies the capable and suitable
metrology devices with respect to the desired measurement tasks. Most research efforts in the past
have focused on probe selection for coordinate measuring machines (CMMs). However, the
increasing demand for accurate measurement in large scale and complex assembly and fabrication
industries, such as aerospace and power generation, leads these industries to invest in different
measurement systems and technologies. The increasing number of systems with different
capabilities creates difficulties in selecting the most competent Large Volume Metrology Instrument
(LVMI) for a given measurement task. Research in this area is scarce, due to the vast number of
candidate instruments and, at the same time, the complexity of understanding their real
capabilities. This paper proposes a two-phased approach to select the capable LVMIs and rank
them according to pre-defined Measurability Characteristics (MCs). Intuitionistic fuzzy sets
combined with the TOPSIS method are employed to solve this vague and conflicting multi-criteria
problem. A numerical case study is given to demonstrate the effectiveness of the system.
KEYWORDS
Inspection process planning; Measurability characteristics; Large volume metrology; TOPSIS
methods; Measurement
1. INTRODUCTION
In recent years Large Volume Metrology (LVM) has
been rapidly advancing and widely applied in high
value manufacturing industries such as aerospace,
automotive and power generation (Estler et al, 2002;
Peggs et al, 2009). New developments of LVM
systems, their application techniques and
performance evaluation methods have paved the way
for enhancing product performance and quality with
reduced cost. Many efforts have been made to
integrate measurement throughout manufacturing
processes in order to maximize the benefits of the
latest technologies (Maropoulos et al, 2007 and
2008). As a result, metrology is considered not only
as a quality control measure but also as an active
element in the early design stages.
Inspection process planning (IPP) has been
demonstrated to be an effective process for taking
metrology into account from the beginning of
manufacturing processes (Li and Gu, 2004; Zhao et
al, 2009). The inspection plan is created along with
the product design, before the commencement of any
production activity.


Figure 1 Structure of the proposed instrument selection system in UML class diagram
This approach can eliminate rework by reducing the
possible negative impact of engineering changes.
However, research regarding LVM IPP was absent
from the literature until Cai et al (2011) proposed the
first systematic large volume metrology inspection
system. Unlike probe selection in most IPP systems
for coordinate measuring machines (CMMs),
selecting the suitable LVM instrument faces more
complexity and vagueness, due to the large number
of available instruments and the uncertain
relationships among instrument performance criteria.
Previous work (Cai et al, 2008 and 2010; Muelaner
et al, 2010) has successfully defined the process of
measurability analysis, in which a variety of criteria
are specified with corresponding evaluation methods.
Instrument selection is based on the result of
measurability analysis, although its automation is
severely limited. However, many task requirements
and their relative importance, which is usually
unequal, are ambiguous when the criteria are defined.
In addition, some parameters of the alternative
instruments, e.g. inspection time and inspection cost,
cannot be quantified at this stage without a detailed
sampling strategy and instrument configuration.
Vague relationships among criteria, such as the
inherent trade-off between cost and uncertainty, also
lead to uncertain decisions. In most applications,
the selection process involves more than one
decision maker (DM), e.g. designers and
metrologists. The assigned preferences for the
alternatives may differ due to each DM's unique
understanding of the task and unequal knowledge of
the instruments. This leads to different assigned
weights when the significance of the different criteria
is evaluated by the DMs. The problem is therefore
formulated as a multi-criteria, multi-person decision
making problem.
Fuzzy set theory (FST) was first introduced by
Zadeh (1965), with the objective of denoting
vagueness and fuzziness in a set and processing the
unquantifiable and incomplete information in
decision problems. Fuzzy linguistic models enable
the conversion of vague verbal expressions such as
"extremely", "very" and "medium" into fuzzy
numbers, which allows DMs to estimate the
performance of alternatives and make decision
based on quantitative data. Atanassov (1986)
defined the concept of intuitionistic fuzzy set (IFS)
as a generalization of FST, characterized by a
membership function and a non-membership
function. IFS with technique for order performance
by similarity to ideal solution (TOPSIS) have
recently attracted great attention in multi-attributes
decision-making (MADM) process due to the
consideration of both positive-ideal and negative-
ideal solution (Karsak, 2002; Bozdag et al, 2003;
Chen et al, 2006; Boran et al, 2009; Onut et al,


2009). Precise decisions can be made while
conflicting criteria are assessed using different
weights.
A two-phased instrument selection system is
proposed in this paper. The system structure is
presented as a UML class diagram in Figure 1.
Measurability Characteristics (MCs) are identified
and grouped into quantitative and qualitative
attributes. Phase 1 enables the filtration of
instruments based on the crisp requirements of the
inspection task. The remaining instruments are
assessed in Phase 2 according to qualitative criteria,
and a ranked list of alternatives is given as the result.
2. MEASURABILITY
CHARACTERISTICS
It is imperative to clearly identify the
requirements of the measurement in order to select
the appropriate instrument. Based on previous
work (Cai et al, 2008 and 2010; Muelaner et al,
2010), the proposed MCs are categorized into two
groups to be assessed in the two phases, respectively.
For detailed definitions of the MCs, consult the
above literature.
2.1 Crisp Measurability Characteristics
Crisp MCs, defined as C_ci, can be precisely
assessed based on the following criteria:
(a) the environmental conditions under which the
inspection task will be carried out; for instance,
the temperature, altitude and humidity must meet
the instrument's specified capabilities;
(b) the inspection range, or the distance of the
measurement points from the instrument;
(c) the material properties of the target product;
for example, magnetic targets cannot be applied
to aluminium or plastics, and transparent or
reflective surfaces cannot be scanned efficiently
by some laser based measurement systems;
(d) the stiffness of the product, e.g. only non-
contact systems can be deployed on a product
with high flexibility due to undesired surface
movements;
(e) the uncertainty requirement of the inspection,
e.g. the uncertainty of the selected instrument
should be confined by decision rules (BS EN ISO
14253-1, 1999; ASME B89.7.3.1, 2001).
2.2 Fuzzy Measurability Characteristics
Criteria with vagueness are defined as C_fi:
(a) the uncertainty capability of the chosen
instrument;
(b) the overall cost of deploying the instrument,
which includes non-recurring costs, e.g.
purchasing the system and mandatory training,
and recurring costs, e.g. maintenance and
depreciation;
(c) the measurement speed;
(d) the Technology Readiness Level (TRL) of the
instrument.
Defining these MCs as fuzzy is beneficial due to the
incomplete information available at this early
planning stage and the conflicting relationships
among them. For instance, the uncertainty
performance of most instruments is related to the
measuring distance to the target, which is unknown
without the detailed configuration and topological
plan of a specific instrument. Measurement speed
and cost can only be determined when both the
sampling strategy and the system setup are available.
In addition, a non-linear trade-off relationship exists
between cost and uncertainty, resulting in ambiguous
decisions: an attempt to use a more accurate
instrument has a potentially higher cost.
Table 1 Example of crisp MCs

Inspection ID: 1
Crisp MCs                 Details
Environmental conditions  Temperature 25 °C; Altitude 500 m; Humidity 35%
Stiffness limitation      contact & non-contact
Material property         magnetic targets applicable
Uncertainty requirement   0.2 mm
Range                     14 m
3. PHASE 1: INSTRUMENT FILTRATION
Figure 2 shows the algorithm of Phase 1 as a
UML activity diagram. The following steps detail
the algorithm of instrument filtration.
Step 1 - Retrieving the inspection requirements.
In this step, the inspection features extracted from
the design are retrieved individually with their
associated parameters. The task identification
process is detailed in Cai et al (2011). Crisp MCs are
then obtained accordingly and set as criteria C_ci for
later evaluation. Table 1 shows an example of
interpreted crisp MCs.
Step 2 - Filtering the instruments.
A capable instrument list (CIL) and an incapable
instrument list (IIL) are created to temporarily store
the results, facilitating the filtration process.
Instruments located in the large volume metrology
instrument database are activated sequentially with
their associated specifications I_si. The data structure
of the database is shown in Figure 1, and Table 2 gives



Figure 2 UML activity diagram of Phase 1
an example for the FARO Laser Tracker. Comparisons
are then carried out between C_ci and I_si in the
following order: stiffness of the product,
environmental conditions, material properties,
uncertainty requirement and inspection range. By
assessing the more discriminating criteria first, this
sequence ensures that a minimum number of
comparison loops is executed. Once an unsatisfied
criterion is detected, I_i is moved from the CIL to the
IIL and the remaining C_ci checks are cancelled to
save computational power.


Table 2 Stored data of the FARO Laser Tracker

Instrument ID                  1
Instrument type                Laser Tracker
Maximum operating temperature  50 °C
Minimum operating temperature  -15 °C
Maximum operating altitude     2450 m
Minimum operating altitude     -700 m
Maximum acceptable humidity    95% non-condensing
Minimum acceptable humidity    0%
Low reflective target          no
Magnetic target                yes
Range                          55 m
Uncertainty                    ADM: 16 µm + 0.8 µm/m;
                               Interferometer: 4 µm + 0.8 µm/m

The output of Phase 1 is a list of all the capable
instruments with respect to the inspection tasks, and
it is passed to the next stage for further selection.
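A compact sketch of this filtration loop is given below; the dictionary field names are illustrative assumptions, not the authors' database schema.

```python
# Hedged sketch of the Phase 1 instrument filtration described above.
def filter_instruments(task, instruments):
    cil, iil = [], []                    # capable / incapable instrument lists
    checks = [                           # ordered as in the text
        lambda t, i: t["contact_allowed"] or i["non_contact"],       # stiffness
        lambda t, i: i["t_min"] <= t["temperature"] <= i["t_max"],   # environment
        lambda t, i: t["material"] in i["applicable_materials"],     # material
        lambda t, i: i["uncertainty_mm"] <= t["uncertainty_req_mm"], # uncertainty
        lambda t, i: i["range_m"] >= t["range_m"],                   # range
    ]
    for inst in instruments:
        # all() short-circuits: once a criterion fails, the rest are skipped
        (cil if all(chk(task, inst) for chk in checks) else iil).append(inst)
    return cil, iil
```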
4. PHASE 2: FUZZY INSTRUMENT SELECTION
4.1 INTUITIONISTIC FUZZY SETS
Zadeh (1965) defined a fuzzy set as:

$A = \{\langle x, \mu_A(x)\rangle \mid x \in X\}$   (1)

where $\mu_A : X \to [0,1]$ is the membership function
indicating the degree to which the element $x$ belongs
to the set $A$. The closer the value of $\mu_A(x)$ is to 1,
the more $x$ belongs to $A$.
An intuitionistic fuzzy set $A$ can be written as:

$A = \{\langle x, \mu_A(x), \nu_A(x)\rangle \mid x \in X\}$   (2)

where $\mu_A : X \to [0,1]$ is the membership function
and $\nu_A : X \to [0,1]$ is the non-membership function,
with the condition:

$0 \le \mu_A(x) + \nu_A(x) \le 1$   (3)

Another unique parameter $\pi_A(x)$, known as the
intuitionistic fuzzy index, is defined as:

$\pi_A(x) = 1 - \mu_A(x) - \nu_A(x)$   (4)

The multiplication operator of two IFSs $A$ and $B$
in a finite set $X$ is defined as:

$A \otimes B = \{\langle x,\ \mu_A(x)\mu_B(x),\ \nu_A(x) + \nu_B(x) - \nu_A(x)\nu_B(x)\rangle \mid x \in X\}$   (5)
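For readers who prefer code, the following tiny Python helper (an illustrative sketch, not part of the paper) encodes an intuitionistic fuzzy number together with the index of Eq. 4 and the multiplication operator of Eq. 5.

```python
# Illustrative helper: an intuitionistic fuzzy number (IFN) with the index of
# Eq. 4 and the multiplication operator of Eq. 5.
from dataclasses import dataclass

@dataclass
class IFN:
    mu: float                     # membership degree
    nu: float                     # non-membership degree (0 <= mu + nu <= 1)

    @property
    def pi(self) -> float:        # intuitionistic fuzzy index, Eq. 4
        return 1.0 - self.mu - self.nu

    def __mul__(self, other):     # Eq. 5, applied element-wise
        return IFN(self.mu * other.mu,
                   self.nu + other.nu - self.nu * other.nu)
```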
4.2 IFS-BASED INSTRUMENT SELECTION
Boran et al (2009) and Onut et al (2009)
proposed similar approaches to solve the MADM
supplier selection problem using TOPSIS. IFSs
were utilized to select the appropriate supplier by
aggregating individual opinions of DMs for
weighting the importance of both criteria and
alternatives (Boran et al, 2009). Their research
results demonstrated the effectiveness of the
approach. A similar method is adopted in this work.
The following steps detail the algorithm and process
of applying IFSs to instrument selection.
Step 1 - Modelling the MADM problem.
(a) Let the capable alternative instruments stored
in the CIL from Phase 1 be a finite set
$I = \{I_1, \ldots, I_m\}$.
(b) Let the fuzzy MCs be a finite criteria set
$C = \{C_1, \ldots, C_n\}$, which includes instrument
uncertainty, overall cost, inspection speed and TRL.
(c) Let $D = \{D_1, \ldots, D_l\}$ be the decision maker
set, including both the designers and the metrologists
involved in the decision making process.
(d) Let $R^{(k)} = [r_{ij}^{(k)}]_{m \times n}$ denote the
decision matrix of the kth decision maker, where
$r_{ij}^{(k)}$ is the performance rating of the alternative
instrument $I_i$ with respect to the criterion $C_j$.
(e) The alternative instruments are linguistically rated
by the DMs using the terms defined in Table 3. The
importance of the DMs is evaluated using the linguistic
terms in Table 4, where the typical converted IFNs
are also given.
Table 3 Linguistic performance terms and corresponding IFNs

Linguistic performance evaluation           IFNs
Extremely good (EG) / extremely high (EH)   (1.00, 0.00)
Very good (VG) / very high (VH)             (0.80, 0.10)
Good (G) / high (H)                         (0.70, 0.20)
Fair (F) / medium (M)                       (0.50, 0.40)
Bad (B) / low (L)                           (0.25, 0.60)

Table 4 Linguistic importance terms and corresponding IFNs

Linguistic importance    IFNs
Very Important (VI)      (0.90, 0.10)
Important (I)            (0.75, 0.20)
Medium (M)               (0.50, 0.45)
Unimportant (U)          (0.35, 0.60)
Very Unimportant (VU)    (0.10, 0.90)

Step 2 - Assigning linguistic importance to the designers
and metrologists, and calculating the corresponding
weights.
Let $D_k = (\mu_k, \nu_k, \pi_k)$ be the intuitionistic fuzzy
rating of the kth decision maker expressed through a
linguistic term. The weight of the kth decision maker is
calculated as:

$\lambda_k = \dfrac{\mu_k + \pi_k \left( \frac{\mu_k}{\mu_k + \nu_k} \right)}{\sum_{k=1}^{l} \left[ \mu_k + \pi_k \left( \frac{\mu_k}{\mu_k + \nu_k} \right) \right]}$   (6)

where $0 \le \lambda_k \le 1$ and $\sum_{k=1}^{l} \lambda_k = 1$.
.
Step 3 - Aggregating the decision matrix with
respect to the individual performance ratings of the
decision makers.
Having fused the individual opinions $r_{ij}^{(k)}$ from all
the weighted DMs, the group opinion is aggregated into
the intuitionistic fuzzy decision matrix. The IFWA
operator proposed by Xu (2007a and 2007b) is utilized
in the aggregation process:

$r_{ij} = \mathrm{IFWA}_{\lambda}(r_{ij}^{(1)}, \ldots, r_{ij}^{(l)}) = \left[ 1 - \prod_{k=1}^{l} (1 - \mu_{ij}^{(k)})^{\lambda_k},\ \prod_{k=1}^{l} (\nu_{ij}^{(k)})^{\lambda_k} \right]$   (7)

The matrix is then written as:

$P = [r_{ij}]_{m \times n}$   (8)

where $r_{ij} = (\mu_{ij}, \nu_{ij}, \pi_{ij})$.
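Eq. 6 and Eq. 7 translate directly into a few lines of Python. The sketch below is illustrative, reusing the IFN class defined earlier; it computes the DM weights and the IFWA aggregation.

```python
# Sketch of the DM weighting (Eq. 6) and the IFWA operator (Eq. 7).
import math

def dm_weights(dm_ratings):       # dm_ratings: one IFN per decision maker
    s = [r.mu + r.pi * (r.mu / (r.mu + r.nu)) for r in dm_ratings]
    total = sum(s)
    return [x / total for x in s] # weights sum to 1

def ifwa(ifns, lam):              # weighted aggregation of IFNs, Eq. 7
    mu = 1.0 - math.prod((1.0 - r.mu) ** w for r, w in zip(ifns, lam))
    nu = math.prod(r.nu ** w for r, w in zip(ifns, lam))
    return IFN(mu, nu)
```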
Step 4 - Assigning linguistic importance to the
criteria and calculating the corresponding weights.
The system allows the decision makers to assign
different weights to each criterion, which is a key
advantage for emphasizing the vague relations
existing among criteria, e.g. uncertainty, inspection
speed and cost.
It is assumed that the kth decision maker weights
the jth criterion with an intuitionistic fuzzy number
$w_j^{(k)} = (\mu_j^{(k)}, \nu_j^{(k)}, \pi_j^{(k)})$. The overall
weight of the jth criterion is calculated using the
IFWA operator:

$w_j = \mathrm{IFWA}_{\lambda}(w_j^{(1)}, \ldots, w_j^{(l)}) = \left[ 1 - \prod_{k=1}^{l} (1 - \mu_j^{(k)})^{\lambda_k},\ \prod_{k=1}^{l} (\nu_j^{(k)})^{\lambda_k} \right]$   (9)

and the weight matrix is then formed as:

$W = [w_1, \ldots, w_n]$   (10)

where $w_j = (\mu_j, \nu_j, \pi_j)$.
.
Step 5 - Creating the weighted decision matrix by
aggregating P and W.
P and W are multiplied using Eq. 5, resulting in the
weighted intuitionistic fuzzy decision matrix with
elements:

$r'_{ij} = \langle \mu_{ij}\,\mu_j,\ \nu_{ij} + \nu_j - \nu_{ij}\,\nu_j \rangle$   (11)

The matrix is then written as:

$R = [r'_{ij}]_{m \times n}$   (12)

with

$\pi'_{ij} = 1 - \mu'_{ij} - \nu'_{ij}$   (13)

where $r'_{ij} = (\mu'_{ij}, \nu'_{ij}, \pi'_{ij})$.
.
Step 6 - Calculating the separation distance of each
alternative from the positive-ideal and the negative-
ideal solution.
Criteria such as uncertainty, TRL and speed,
denoted by $J_1$, are beneficial when rating the
alternative instruments. By contrast, the overall cost
is considered a cost criterion, denoted by $J_2$. The
intuitionistic fuzzy positive-ideal solution $A^*$ and
negative-ideal solution $A^-$ are defined as:

$A^* = \{\langle C_j,\ \max_i \mu'_{ij},\ \min_i \nu'_{ij} \rangle \mid j \in J_1\} \cup \{\langle C_j,\ \min_i \mu'_{ij},\ \max_i \nu'_{ij} \rangle \mid j \in J_2\}$   (14)

$A^- = \{\langle C_j,\ \min_i \mu'_{ij},\ \max_i \nu'_{ij} \rangle \mid j \in J_1\} \cup \{\langle C_j,\ \max_i \mu'_{ij},\ \min_i \nu'_{ij} \rangle \mid j \in J_2\}$   (15)

The normalized Euclidean distance is adopted in this
paper to measure the separation between the
alternatives and the positive-ideal solution $A^*$ and
the negative-ideal solution $A^-$:

$D_i^* = \sqrt{\dfrac{1}{2n} \sum_{j=1}^{n} \left[ (\mu'_{ij} - \mu_j^*)^2 + (\nu'_{ij} - \nu_j^*)^2 + (\pi'_{ij} - \pi_j^*)^2 \right]}$   (16)

$D_i^- = \sqrt{\dfrac{1}{2n} \sum_{j=1}^{n} \left[ (\mu'_{ij} - \mu_j^-)^2 + (\nu'_{ij} - \nu_j^-)^2 + (\pi'_{ij} - \pi_j^-)^2 \right]}$   (17)

Step 7 - Ranking the alternative instruments based
on the relative closeness coefficient.
All instruments are then scored with the relative
closeness coefficient $RC_i$ with respect to the
positive-ideal solution:

$RC_i = \dfrac{D_i^-}{D_i^* + D_i^-}$   (18)

The candidates are then ranked according to the
value of $RC_i$. A higher score indicates greater
suitability of the corresponding alternative
instrument.
The algorithm of Phase 2 is shown in Figure 3,
and the most suitable instrument is highlighted as
the result of the instrument selection process.
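The whole of Phase 2 - ideal solutions (Eqs. 14-15), normalized Euclidean distances (Eqs. 16-17) and the closeness coefficient (Eq. 18) - can be sketched as follows. For brevity the ideal entry per criterion is taken from the alternative with the extreme membership degree, which coincides with Eqs. 14-15 for the case study data; this is an illustrative sketch, not the authors' implementation.

```python
# Hedged sketch of the TOPSIS ranking step over a weighted IFN matrix R.
def topsis_scores(R, benefit):
    """R: list of rows (one per instrument) of IFNs; benefit[j] flags benefit criteria."""
    n = len(R[0])
    col = lambda j: [row[j] for row in R]
    pos = [max(col(j), key=lambda e: e.mu) if benefit[j]
           else min(col(j), key=lambda e: e.mu) for j in range(n)]   # Eq. 14
    neg = [min(col(j), key=lambda e: e.mu) if benefit[j]
           else max(col(j), key=lambda e: e.mu) for j in range(n)]   # Eq. 15

    def dist(row, ideal):          # normalized Euclidean distance, Eqs. 16-17
        return (sum((a.mu - b.mu) ** 2 + (a.nu - b.nu) ** 2 + (a.pi - b.pi) ** 2
                    for a, b in zip(row, ideal)) / (2 * n)) ** 0.5

    return [dist(r, neg) / (dist(r, pos) + dist(r, neg)) for r in R]  # Eq. 18 (RC)
```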
5. NUMERICAL CASE STUDY
An instrument is required for an inspection task
whose crisp MCs are shown in Table 1. The
available instruments stored in the database include
2 laser trackers, 2 laser scanners, a laser radar, an
iGPS and a photogrammetry system. The filtration
process is implemented as follows:
(a) the photogrammetry system is removed from
the CIL due to insufficient range coverage;
(b) the laser scanners and the iGPS are filtered out
due to the unsatisfied uncertainty requirement.
Under these circumstances, the 2 laser trackers and
the laser radar remain from Phase 1 as the
alternative instruments:

I1: Laser Tracker 1
I2: Laser Tracker 2
I3: Laser Radar

One designer (DM1) and two metrologists (DM2
and DM3) are involved in the performance
evaluation process, based on the four fuzzy MCs:

C1: Instrument uncertainty
C2: Overall cost
C3: Inspection speed
C4: TRL

The process of fuzzy instrument selection consists
of the following steps:
Step 1 - Assigning linguistic importance to the DMs
and calculating the corresponding weights.
Each DM is assigned a linguistic importance term,
shown in Table 6. This assignment is based on the
degree of knowledge possessed by each DM regarding
the specific inspection task and instrument. The
corresponding weights are obtained using Eq. 6.
Step 2 - Aggregating the decision matrix with
respect to the individual performance ratings of the
decision makers.
The performance ratings from each DM are shown
in Table 7.
Table 6 DMs' importance and corresponding weights

                       DM1        DM2             DM3
Linguistic importance  Important  Very important  Medium
Weight                 0.356      0.406           0.238

Table 7 Performance ratings of the alternatives

Criteria          Instrument  DM1  DM2  DM3
C1 (uncertainty)  I1          EG   EG   VG
                  I2          VG   G    VG
                  I3          G    G    F
C2 (cost)         I1          VH   VH   H
                  I2          M    H    M
                  I3          EH   EH   EH
C3 (speed)        I1          VG   F    F
                  I2          G    G    F
                  I3          VG   EG   VG
C4 (TRL)          I1          VG   EG   VG
                  I2          G    VG   G
                  I3          F    G    B


The linguistic ratings are then converted to IFNs
using Table 3. The intuitionistic fuzzy decision
matrix P is calculated according to Eq. 7 (rows
I1-I3, columns C1-C4):

      C1                     C2                     C3                     C4
I1    (1.000, 0.000, 1.000)  (0.780, 0.118, 0.662)  (0.639, 0.244, 0.395)  (1.000, 0.000, 1.000)
I2    (0.764, 0.133, 0.632)  (0.594, 0.302, 0.292)  (0.594, 0.302, 0.292)  (0.746, 0.151, 0.595)
I3    (0.661, 0.236, 0.426)  (1.000, 0.000, 1.000)  (1.000, 0.000, 1.000)  (0.553, 0.332, 0.220)

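As a quick cross-check (illustrative, using the helper functions sketched earlier), a single entry of P can be reproduced from the linguistic ratings and the DM weights:

```python
# Reproduce the P entry for I1 under C2 (ratings VH, VH, H; see Tables 3, 6, 7).
lam = dm_weights([IFN(0.75, 0.20), IFN(0.90, 0.10), IFN(0.50, 0.45)])
r = ifwa([IFN(0.80, 0.10), IFN(0.80, 0.10), IFN(0.70, 0.20)], lam)
print(round(r.mu, 3), round(r.nu, 3))   # -> 0.78 0.118, matching the matrix
```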


Figure 3 UML activity diagram of Phase 2
Step 3 - Calculating the aggregated weights of the criteria.
The importance assigned by the DMs to each
criterion is shown in Table 8, together with the
corresponding converted IFNs. The weight matrix is
aggregated using Eq. 9 as:

W = [w_j] = [(0.861, 0.128, 0.011), (0.787, 0.189, 0.023),
             (0.799, 0.170, 0.031), (0.576, 0.371, 0.053)]

Step 4 - Creating the weighted decision matrix by
aggregating matrices P and W.
With the constructed intuitionistic fuzzy decision
matrix P and the weight matrix W, the aggregated
weighted decision matrix R is established using
Eq. 11 (rows I1-I3, columns C1-C4):

      C1                     C2                     C3                     C4
I1    (0.861, 0.128, 0.011)  (0.614, 0.285, 0.102)  (0.511, 0.373, 0.117)  (0.576, 0.371, 0.053)
I2    (0.658, 0.224, 0.098)  (0.467, 0.434, 0.099)  (0.474, 0.421, 0.105)  (0.429, 0.466, 0.105)
I3    (0.569, 0.334, 0.097)  (0.787, 0.189, 0.024)  (0.799, 0.170, 0.031)  (0.318, 0.580, 0.102)

Table 8 Assigned importance for all criteria

Criteria  DM1              DM2              DM3
C1        I (0.75, 0.20)   VI (0.90, 0.10)  VI (0.90, 0.10)
C2        VI (0.90, 0.10)  I (0.75, 0.20)   M (0.50, 0.45)
C3        I (0.75, 0.20)   I (0.75, 0.20)   VI (0.90, 0.10)
C4        M (0.50, 0.45)   M (0.50, 0.45)   I (0.75, 0.20)

Step 5 - Calculating the separation distance of each
alternative from the positive-ideal and the negative-
ideal solution.
In this case, uncertainty, TRL and speed are
considered beneficial criteria and cost is deemed a
cost criterion; therefore J1 = [C1, C3, C4] and
J2 = [C2]. The intuitionistic fuzzy positive-ideal
solution A* and negative-ideal solution A- are
obtained as:

A* = {(0.861, 0.128, 0.011), (0.467, 0.434, 0.099),
      (0.799, 0.170, 0.031), (0.576, 0.371, 0.053)}

A- = {(0.569, 0.334, 0.097), (0.787, 0.189, 0.024),
      (0.474, 0.421, 0.105), (0.318, 0.580, 0.102)}

The normalized Euclidean distances between the
alternatives and the positive-ideal solution A* and
the negative-ideal solution A- are obtained using
Eq. 16 and Eq. 17; the results are reported in Table 9.


Table 9 Separation measures and relative closeness
coefficients

Instrument  D*     D-     RC
I1          0.148  0.192  0.565
I2          0.183  0.162  0.469
I3          0.228  0.147  0.393

Step 6 - Ranking the alternative instruments based
on the relative closeness coefficient.
All instruments are then scored with the relative
closeness coefficient RC_i with respect to the
positive-ideal solution, as shown in Table 9. The
ranking order is I1 > I2 > I3, and I1 is selected as
the most appropriate instrument among the
alternatives, since it has the highest RC.

To demonstrate the sensitivity of the decision
model, a different importance set is assigned to the
criteria, as shown in Table 10, and the results are
given in Table 11.
Table 10 Assigned importance for all criteria - case 2

Criteria  DM1              DM2              DM3
C1        I (0.75, 0.20)   I (0.75, 0.20)   I (0.75, 0.20)
C2        VI (0.90, 0.10)  I (0.75, 0.20)   M (0.50, 0.45)
C3        VI (0.90, 0.10)  VI (0.90, 0.10)  VI (0.90, 0.10)
C4        M (0.50, 0.45)   M (0.50, 0.45)   I (0.75, 0.20)

Table 11 Separation measures and relative closeness
coefficients - case 2

Instrument  D*     D-     RC
I1          0.168  0.165  0.495
I2          0.182  0.153  0.456
I3          0.157  0.178  0.531

In this second case, the inspection speed is
considered more important than in the previous case,
while less importance is given to the inspection
uncertainty. This shift leads to a clearly different
decision, as I3 is ranked first due to its significantly
higher rating than I1 and I2 in terms of measurement
speed. The decision model successfully captures this
change in the priority of the criteria and reveals the
correct selection result.
6. CONCLUSION
The recent developments in inspection process
planning methodology demand instrument
selection as a mandatory and vital process, which
paves the way for subsequent planning activities.
Nevertheless, measurement device selection for
large volume metrology (LVM) has rarely been
studied, as most research efforts focus on probe
selection for coordinate measuring machines. The
large and increasing number of available
instruments, with a variety of assessment criteria,
presents a barrier to an applicable selection system.
A two-phased LVM instrument selection system
using intuitionistic fuzzy sets combined with
TOPSIS method is described in this paper.
Measurability Characteristics (MCs) are first
identified with respect to specific inspection task
and grouped into quantitative (crisp) and qualitative
(fuzzy) attributes. An instrument filtration
procedure is implemented in Phase 1 based on the


results of assessing crisp MCs. In the second phase,
the remaining instruments are ranked using
intuitionistic fuzzy group decision-making method.
Vague criteria are appropriately assessed in this
early stage by taking advantage of linguistic
importance and performance rating.
A numerical case study shows the process of the
proposed approach. Furthermore, the sensitivity of
the decision model to the variable priority of the
criteria has been successfully demonstrated.
REFERENCES
Atanassov, K.T., Intuitionistic Fuzzy Sets, Fuzzy Sets
and Systems, Vol. 20, 1986, pp. 87-96
B89.7.3.1, Guidelines for Decision Rules: Considering
Measurement Uncertainty in Determining
Conformance to Specifications, American Society of
Mechanical Engineers, 2001
Boran, F.E., Genc, S., Kurt, M. and Akay, D., A Multi-
Criteria Intuitionistic Fuzzy Group Decision Making
for Supplier Selection with TOPSIS Method, Expert
Systems with Applications, Vol. 36, No. 8, 2009,
pp. 11363-11368
Bozdag, C.E., Kahraman, C. and Ruan, D., Fuzzy Group
Decision Making for Selection among Computer
Integrated Manufacturing Systems, Computers in
Industry, Vol. 51, 2003, pp. 13-29
BS EN ISO 14253-1, Geometrical Product
Specifications (GPS) - Inspection by Measurement of
Workpieces and Measuring Equipment - Part 1:
Decision Rules for Proving Conformance or Non-
conformance with Specifications, 1999
Cai, B., Guo, Y., Jamshidi, J. and Maropoulos, P.G.,
Measurability Analysis of Large Volume Metrology
Process Model for Early Design, Proceedings of the
Fifth International Conference on Digital Enterprise
Technology, 2008, pp. 807-820
Cai, B., Dai, W., Muelaner, J.E. and Maropoulos, P.,
Measurability Characteristics Mapping for Large
Volume Metrology Instruments Selection,
Proceedings of Advances in Manufacturing
Technology - XXIII, 7th International Conference on
Manufacturing Research, 2010, pp. 438-442
Cai, B., Jamshidi, J. and Maropoulos, P.G., A Two-
Phase Instruments Selection System for Large Volume
Metrology based on Intuitionistic Fuzzy Sets with
TOPSIS Method, 7th International Conference on
Digital Enterprise Technology, Greece, paper accepted
Chen, C.T., Lin, C.T. and Huang, S.F., A Fuzzy
Approach for Supplier Evaluation and Selection in
Supply Chain Management, International Journal of
Production Economics, Vol. 102, 2006, pp. 289-301
Estler, W.T., Edmundson, K.L., Peggs, G.N. and Parker,
D.H., Large-scale Metrology - An Update, CIRP
Annals - Manufacturing Technology, Vol. 51, No. 2,
2002, pp. 587-609
Karsak, E.E., Distance-Based Fuzzy MCDM Approach
for Evaluating Flexible Manufacturing System
Alternatives, International Journal of Production
Research, Vol. 40, No. 13, 2002, pp. 3167-3181
Li, Y.D. and Gu, P.H., Free-form Surface Inspection
Techniques - State of the Art Review, Computer-Aided
Design, Vol. 36, 2004, pp. 1395-1417
Maropoulos, P.G., Zhang, D., Chapman, P., Bramall,
D.G. and Rogers, B.C., Key Digital Enterprise
Technology Methods for Large Volume Metrology
and Assembly Integration, International Journal of
Production Research, Vol. 45, No. 7, 2007,
pp. 1539-1559
Maropoulos, P., Guo, Y., Jamshidi, J. and Cai, B., Large
Volume Metrology Process Models: A Framework for
Integrating Measurement with Assembly Planning,
Annals of the CIRP, Vol. 57, 2008, pp. 477-480
Muelaner, J.E., Cai, B. and Maropoulos, P.G., Large
Volume Metrology Instrument Selection and
Measurability Analysis, Proceedings of the Institution
of Mechanical Engineers, Part B: Journal of
Engineering Manufacture, Vol. 224, No. 6, 2010,
pp. 853-868
Onut, S., Kara, S.S. and Isik, E., Long Term Supplier
Selection Using a Combined Fuzzy MCDM
Approach: A Case Study for a Telecommunication
Company, Expert Systems with Applications, Vol. 36,
No. 2, 2009, pp. 3887-3895
Peggs, G.N., Maropoulos, P.G., Hughes, E.B., Forbes,
A.B., Robson, S., Ziebart, M. and Muralikrishnan, B.,
Recent Developments in Large-scale Dimensional
Metrology, Proceedings of the Institution of
Mechanical Engineers, Part B: Journal of
Engineering Manufacture, Vol. 223, No. 6, 2009,
pp. 571-595
Xu, Z.S., Intuitionistic Fuzzy Aggregation Operators,
IEEE Transactions on Fuzzy Systems, Vol. 15, No. 6,
2007(a), pp. 1179-1187
Xu, Z.S., Intuitionistic Preference Relations and their
Application in Group Decision Making, Information
Sciences, Vol. 177, 2007(b), pp. 2363-2379
Zadeh, L.A., Fuzzy Sets, Information and Control,
Vol. 8, 1965, pp. 338-353
Zhao, F., Xu, X. and Xie, S.Q., Computer-aided
Inspection Planning - The State of the Art,
Computers in Industry, Vol. 60, 2009, pp. 453-466


VIEWPOINTS ON DIGITAL, VIRTUAL, AND REAL ASPECTS OF
MANUFACTURING SYSTEMS
Hasse Nylund
Department of Production Engineering
Tampere University of Technology
Tampere, Finland
hasse.nylund@tut.fi
Paul Andersson
Department of Production Engineering
Tampere University of Technology
Tampere, Finland
paul.andersson@tut.fi
ABSTRACT
Information and Communication Technology (ICT) plays a key role in improving the efficiency of
manufacturing activities. This paper proposes an approach where a manufacturing system is
logically divided into digital, virtual, and real existences. The digital and virtual parts present the
ICT view where the digital part includes the information and knowledge while the virtual part
corresponds to computer models and simulations. Both have their roles in improving the
manufacturing activities happening in the real part i.e. producing products and services to
customers. These issues are discussed from theoretical aspects explaining structures of
manufacturing entities and systems as well as manufacturing activities and improvement. The issues
are also explained and demonstrated in a context of an academic research environment.
KEYWORDS
Manufacturing, Digital, Virtual, Improvement

1. INTRODUCTION
In this paper digital, virtual, and real characteristics
of manufacturing systems are discussed. The main
idea is to justify the division of the computer world
into digital and virtual existences although existing
in an integrated fashion. The digital part refers to the
information and knowledge while the virtual part
includes computer models and simulations. Both of
the parts exist in multiple areas of overall
management of manufacturing systems and, at
best, offer more efficient support for decision
making in different manufacturing activities.
The idea for the division derives partially from
the concept of holons. Koestler (1989) introduced
the term holon, which derives from the Greek word
"holos", meaning a whole, and the suffix "-on",
meaning a part. In other words, a holon is a whole
system, and at the same time, it is part of a larger
system. Valckenaers et al., (1994) initiated the
concept of Holonic Manufacturing Systems (HMS).
HMS describes a manufacturing system in terms of
holons, autonomous and co-operative building
blocks of manufacturing systems, capable of
performing their own tasks independently, and to
collaborate with each other to fulfil their common
objectives. Additionally, in HMS, each holon
internally consists of an information processing
part and often a physical processing part (Van
Brussel et al., 1998).
paper, the physical processing part corresponds to
the real part and the information processing part is
divided into the digital and virtual parts.
The division is discussed from different
viewpoints, including:
Describing the structures of manufacturing
entities and systems.
Discussing of the role of the division in
manufacturing activities and improvement.
The theoretical issues are also presented in the
context of an academic research environment, a
piloting environment for current and future research
topics, aiming to narrow the gap between the
theoretical research issues and their implementation
into industrial environments.

1.1. BACKGROUND
The research on digital and virtual manufacturing
systems, factories, and enterprises has no commonly
used definitions. However, they usually share the
idea of managing the typically isolated and separate
manufacturing activities as a whole by the means of
Information and Communications Technology (ICT)
(Nylund and Andersson, 2011). Typical examples,
often found in the literature, are (see, for example:
Bracht and Masurat, 2005; Maropoulos, 2003;
Offodile and Abdel-Malek, 2002; Reiter, 2003;
Souza et al., 2006):
- An integrated approach to improve product and production engineering technology.
- Computer-aided tools for planning and analysing real manufacturing processes.
- A collection of new technologies, systems, and methods.
Kühn (2006) describes the digital factory concept, which offers an integrated approach to enhance the product and production engineering processes. Typical areas of the processes are, for example: product development and product lifecycle management; production process development and optimisation; factory and material flow design and improvement; and operative production planning and control.
Constantinescu and Westkämper (2010) have presented a reference model for factory engineering and design that is used as an integrated planning environment. The planning viewpoints are structured into four main clusters: strategic, structure, and process planning, as well as the planning of factory operation and use.
The digital and virtual aspects can also be divided
into information and knowledge management as the
digital view, and computer models and simulations
as the virtual part. Examples of the application areas
of the virtual part are:
- Computer-aided manufacturing (CAM), e.g. offline programming for virtual tool path generation to detect collisions, analyse material removal, and optimise cycle times (Kühn, 2006).
- Visual interaction applications, e.g. virtual environments and 3D-motion simulations that offer realistic 3D graphics and animations to demonstrate different activities.
- Simulation for the reachability and sequences of operations, as well as internal work cell layout and material handling design (Kühn, 2006).
- Discrete event simulation (DES) solutions covering the need for and the quantity of equipment and personnel, as well as the evaluation of operational procedures and performance (Law and Kelton, 2000).
- DES can also be focused on traditional supply chain sales and delivery processes, as well as on complex networked manufacturing activities, including the logistical accuracy and delivery reliability of an increasing product variety (Wiendahl and Lutz, 2002).
Effective knowledge management consists of four essential processes: creation, storage and retrieval, transfer, and application, which are dynamic and continuous phenomena (Alavi and Leidner, 2001). Examples of the application areas of the digital part are:
- Email messages, Internet Relay Chat (IRC), instant messaging, message boards, and discussion forums.
- More permanent information and knowledge derived from the informal discussions, stored in applications such as Wikipedia.
- Formally presented information systems, such as Enterprise Resource Planning (ERP), Product Data Management (PDM), and Product Lifecycle Management (PLM).
The total information and knowledge of a
manufacturing system can be explained with explicit
and tacit components (Nonaka and Takeuchi, 1995).
The explicit part of the knowledge can be described
precisely in a formal way and can be included as the
digital part. The skills of humans are explained as the tacit dimension of knowledge, which, when presented digitally, may lead to unclear situations and be wrongly understood. The importance of the transformation from tacit to explicit knowledge has been recognized as one of the key priorities of knowledge presentation (Chryssolouris et al., 2008). The division into digital, virtual, and real intentionally omits the tacit dimension, as the division is intended to be used in decision-making processes by humans, based on their skills and knowledge. In the end, humans are the ones making the decisions, or the ones creating the decision-making mechanisms. Mavrikios et al.
(2007) have developed a concept of a Collaborative
Manufacturing Environment (CME) advancing the
capabilities of humans in areas of e.g. information
sharing, knowledge management, and decision
making.
The importance of the possibilities offered by
ICT tools and principles is ever more
acknowledged, not only in academia, but also in
industry. The Strategic Multi-annual Roadmap,
prepared by the Ad-Hoc Industrial Advisory Group
for the Factories of the Future Public-Private
Partnership (AIAG FoF PPP), lists ICT as one of the
key enablers for improving manufacturing systems
(FoF, 2010). The report describes the role of ICT at three levels: smart, virtual, and digital factories.

Smart factories involve process automation
control, planning, simulation and optimisation
technologies, robotics, and tools for sustainable
manufacturing. Virtual factories focus on the value
creation from global networked operations
involving global supply chain management. Digital
factories aim at a better understanding and the
design of manufacturing systems for better product
life cycle management involving simulation,
modelling and management of knowledge.
2. DIGITAL, VIRTUAL, AND REAL
The division of manufacturing systems into digital,
virtual, and real parts was first introduced in
(Nylund et al., 2008) and has then been further
developed in the context of a framework for
Extended Digital Manufacturing Systems (EDMS)
(Nylund and Andersson, 2011). It is intended as a platform for collaboration between all manufacturing activities and related parties. The
EDMS contains the digital information and
knowledge of the real manufacturing system
activities, as well as the computer aided tools for
simulation, modelling, analysis, and change
management of manufacturing systems.
2.1. MANUFACTURING ENTITIES
Internally, the manufacturing entities consist of
digital, virtual, and real existences as their
autonomy, and a communication part as the enabler
of collaboration with other manufacturing entities.
The internal structure of the manufacturing entities
is presented in Figure 1.

Figure 1 Structure of a manufacturing entity (Nylund et al., 2008)
The real part represents what exists physically in a system, while the virtual part is a representation of the physical entity, usually as a computer model.
The digital part holds the information and
knowledge of the real and virtual parts to fully
describe their characteristics and properties. The
communication part is responsible for the
collaboration activities between the entities.
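As a minimal illustration of this internal structure, the following sketch expresses an entity with its digital, virtual, real, and communication parts; the class and attribute names are hypothetical, since the paper prescribes no data model:

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DigitalPart:
    information: dict = field(default_factory=dict)   # formal data about the entity
    knowledge: dict = field(default_factory=dict)     # e.g. known capabilities

@dataclass
class VirtualPart:
    model_ref: str = ""        # reference to the computer/simulation model

@dataclass
class RealPart:
    physical_id: str = ""      # identifier of the physical resource

@dataclass
class ManufacturingEntity:
    digital: DigitalPart
    virtual: VirtualPart
    real: Optional[RealPart]   # an entity may exist only virtually (cf. Section 3.1)

    def communicate(self, message: dict) -> dict:
        # Communication part: the single interface towards other entities.
        return {"from": self.real.physical_id if self.real else "virtual-only",
                "ack": message}

lathe = ManufacturingEntity(DigitalPart(), VirtualPart("lathe.sim"), RealPart("lathe-01"))
print(lathe.communicate({"request": "turning"}))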
The communication part connects the different
manufacturing entities and enables viewing them on
different structuring levels. Wiendahl et al. (2007) recognize five structuring levels in the context of changeability: manufacturing networks, sites, segments, cells, and stations. Each level can be
treated as a manufacturing entity, which consists of
the entities on a lower level and their collaboration.
The connection of the structuring levels can be
explained with, for example, Fractal Manufacturing
Systems (FrMS). In FrMS, a fractal is an
independently acting entity (on each of the
structuring levels) that can be precisely described
(Warnecke, 1993). For example, a manufacturing
segment consists of manufacturing cells and their
interaction. At the same time, it is a part of a
manufacturing site interacting with other segments
of the site.
2.2. MANUFACTURING SYSTEMS
In addition to describing the internal structure of a
manufacturing system divided into digital, virtual,
and real existences, the overall management of a
manufacturing system can be explained with the
same division. Figure 2 shows the basic principle of
utilising the division by dividing a manufacturing
system into its subsystems.

Figure 2 Structure of a manufacturing system
The real manufacturing system is responsible for
the physical operations, i.e. transforming material
and components into finished products and related
services. In addition to material, the operations
consume energy and result in other outputs, such as
removed and unused material, other forms of
physical waste, and emissions. The behaviour of the
real manufacturing system is controlled in the digital manufacturing system, and the results of the activities are then analyzed in the digital manufacturing system for planning and managing upcoming production activities.
The virtual manufacturing system can represent the current manufacturing system or a proposed future version of the real manufacturing system. In the case of the current system, it is usually used for forecasting, e.g. simulating changes in production volume and mix. Development concerns the future of a system, where the effects of possible configuration changes or of implementing new manufacturing resources can be experimented with.
2.3. MANUFACTURING ACTIVITIES
In a manufacturing system, the autonomous
manufacturing entities are acting and interacting
with each other. The activities can be explained as
services, loosely based on Service Oriented
Architecture (SOA). SOA consists of self-describing
components that support the composition of
distributed applications (Papazoglou and
Georgakopoulos, 2003) enabling the autonomous
manufacturing entities to negotiate and share their
information and knowledge. The basic conceptual
model of the SOA architecture consists of service
providers, service requesters, and service brokers
(Barry, 2003). Applying the principles of SOA to
a manufacturing environment, the architecture can
be explained as follows (Nylund and Andersson,
2010b):
- Products are service requesters when they are realized as orders sent into a manufacturing system. They initiate the service requests they require in order to be manufactured.
- Resources, such as machine tools, are service providers having the capabilities needed to provide the services that are requested.
- A service broker plays the role of managing and controlling the manufacturing activities. Its function is to find service providers for the requesters on the basis of criteria such as cost, quality, and time.
The actual service can occur in the real part as a
physical transformation process, or in the virtual
part as a simulation. The input information for the
transformation process as well as the output
information produced from the process exists in the
digital part.
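As an illustration of this broker-mediated matching, the sketch below scores candidate providers on cost, quality, and time; the names, weights, and data are hypothetical, as the paper specifies no implementation:

# Hypothetical sketch of SOA-style broker matching in a manufacturing context.
# Providers advertise capabilities; the broker scores candidates on cost,
# quality, and time, the criteria named in the text.

def broker_select(request, providers, weights=(0.4, 0.3, 0.3)):
    """Return the best provider offering the requested capability, or None."""
    w_cost, w_quality, w_time = weights
    candidates = [p for p in providers if request["capability"] in p["capabilities"]]
    if not candidates:
        return None  # no known provider: triggers a new service request
    # Lower cost and time are better; higher quality is better.
    return min(candidates,
               key=lambda p: w_cost * p["cost"] + w_time * p["time"]
                             - w_quality * p["quality"])

providers = [
    {"name": "lathe", "capabilities": {"turning"}, "cost": 5.0, "time": 2.0, "quality": 0.90},
    {"name": "mill", "capabilities": {"milling", "drilling"}, "cost": 7.0, "time": 1.5, "quality": 0.95},
]
print(broker_select({"capability": "milling"}, providers)["name"])  # -> mill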
2.4. MANUFACTURING IMPROVEMENT
In (Nylund and Andersson, 2010a) improving a
manufacturing system is discussed from the
viewpoints of continuous, incremental, and radical
improvements.
Continuous improvement focuses on keeping the current system running and preventing the deterioration of the system. From the digital point of view, it mostly involves gathering predefined data from the daily manufacturing activities to measure and monitor the behaviour and variation of the system. By monitoring the values, predefined actions can be taken or, in the absence of such actions, new action plans can be developed. From the virtual point of view, continuous improvement can be e.g. planning and scheduling using simulation solutions to fully utilize the potential of the system.
Incremental improvement focuses on the
development of an existing system. It causes
changes to the part or the function structure of the
system. Typical issues include configuring the
system to a new desired state, or implementing
something new to meet the requirements of the
desired state.
Radical improvement aims at something new that is more effective than the existing system. It questions the underlying assumptions on how the activities are currently conducted. From the digital and virtual points of view, it creates new incremental improvement activities rather than requiring digital or virtual activities itself.
One can argue that a radical improvement precedes an incremental improvement, which in turn creates changes for continuous improvement. As radical improvements happen rarely compared with incremental improvements, most of the changes to continuous improvement occur without a radical improvement.
Any improvement activity aims at a better performing manufacturing system. The performance of manufacturing systems has to be measurable; therefore, suitable performance metrics are required. Behn (2003) recognizes eight generally applicable purposes of measuring performance: to evaluate, control, budget, motivate, promote, celebrate, learn, and improve, where improvement is the core purpose behind the other seven. Typical performance metrics for manufacturing improvement can be divided into manufacturing process monitoring, manufacturing flow efficiency, and competence of the company (Nylund et al., 2011b). Examples of performance metrics are discussed in Section 3.5.
3. RESEARCH ENVIRONMENT
The theoretical issues discussed are currently being
examined in an academic research environment. The
aim of the environment is to offer a research
platform that can be utilised in:
- Designing, developing, and testing current and future research topics.
- Prototyping possible solutions for industrial partners in ongoing research projects.
- Serving as an educational environment for university students and company personnel, introducing the latest results in the area of intelligent manufacturing.

3.1. OVERVIEW OF THE ENVIRONMENT
The research environment consists of typical
manufacturing resources and work pieces as
physical entities. The resources of the research
environment, offering different manufacturing
capabilities, are (see Figure 3):
- Machine tools (a lathe and a machining centre) for machining operations.
- Robots for material handling and robotized machining operations.
- An automated storage for storing blank parts and finished work pieces.
- Laser devices for e.g. machining and surface treatment.
- A punch press, existing only virtually, for the punching of sheet metal parts. The real punch press is located at a factory of an industrial project partner company.

Figure 3 Overview of the manufacturing resources of the research environment
The work pieces that can be manufactured in the environment are fairly simple cubical, cylindrical, and flat parts. They have several parameterized features that can be varied within certain limits, e.g. dimensions (width, length, and depth), number of holes, internal corner radii, and sheet thickness. The main reasons for the parameterization are, firstly, that the number of different parts can be increased through variation without requiring a large number of different part types. Secondly, the parameters can be set so that changing them also requires capabilities of a different kind, i.e. different manufacturing resources. This gives more opportunities to compare alternative ways to manufacture the work pieces based on selected criteria, such as the cheapest or fastest way to manufacture a work piece.
3.2. VIEWPOINTS OF THE ENVIRONMENT
The research environment can be seen from the
digital, virtual, and real viewpoints. Figure 4 shows
the real and virtual views of the whole research
environment. The environment can be viewed from three different structuring levels: the whole environment, machining and robot cells, and the individual machine tools and robots.
The real part of the environment exists in a heavy
laboratory and is divided into two main areas, one
including the robots and laser devices, and the
second the machine tools and the automated storage.
The real manufacturing entities on each structuring
level have their corresponding computer models and
simulation environments as their virtual parts.


Figure 4 Real and virtual views of the research
environment
The information and knowledge of the environment is stored in local databases of the manufacturing entities, as well as in a common Knowledge Base (KB) for the whole environment; these present the digital part of the environment. The actual connection is enabled by and executed via the KB (see Lanz et al., 2008), as all communication activities use or update it. The KB is
the base for the development activities of the
research environment, presenting the role of a
service broker. It is a system where the data of the
environment can be stored and retrieved for and by
different applications existing in the environment.

Figure 5 An overall view of the manufacturing tests of the research environment (Nylund et al., 2011a)
3.3. MANUFACTURING CAPABILITIES
Figure 5 presents an overall view of the process
starting from comparing new product requirements
with manufacturing capabilities. The manufacturing
methods of the resources form the manufacturing
capabilities, which are formally presented and stored
in the KB. Similarly, the product requirements are
described formally and stored in the KB.
In the case of a new product, a CAD model is required. The model is analyzed to recognize the features of the product. Each feature leads to a service request, which is sent to the process planning part of the environment. This part includes the known process plans of the environment and holds the information and knowledge of the manufacturing capabilities, i.e. it can be seen as the digital memory of what has been manufactured within the environment. If a suitable service exists, i.e. this kind of feature has been manufactured before, the system will return the result that the feature can be manufactured. Otherwise, a new service request is created and the manufacturability of the feature can be tested.
3.4. TEST MANUFACTURING
The manufacturing tests can be divided into three categories: digital, virtual, and real test manufacturing. In principle, digital test manufacturing would be the favourable choice, as it basically compares a set of parameters of a new service request to the formally described capabilities, i.e. the services that exist. The formal matching of product requirements and resource capabilities is currently under development and, at the moment, can cope with fairly simple scenarios.
The second choice would be virtual test manufacturing, i.e. typically modelling and simulation. It requires more time, depending on whether existing simulation models can be used or new ones need to be constructed. The creation of a new simulation model may involve reconfiguring the existing virtual system or implementing something new into the system. In these alternatives, the test manufacturing is still carried out with computers, i.e. it does not interrupt the use of the real manufacturing resources.
Real test manufacturing requires the physical resources, and the time used reduces the time available for daily operations to manufacture customer orders. Real test manufacturing is the choice when the simulation models are not accurate enough for their results to be fully trusted or understood. In some cases it is also reasonable to conduct additional tests with real manufacturing resources to reduce the risk of implementing faulty processes.
Whether digital or virtual test manufacturing is sufficient is determined by humans, based on their skills and knowledge of the matter at hand, and has to be evaluated separately each time a decision needs to be made.
The result after the test manufacturing alternatives is either a rejected or an accepted new service. In the case of an accepted service, it is added into the KB as a new capability. A service is rejected if the product feature cannot be manufactured within the existing system or, even if it could be manufactured, it is e.g. too expensive, takes too much time, or does not yield the desired quality. In these cases, feedback goes back to product development to consider whether the feature can be redesigned.
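The escalation implied by the three test categories can be sketched as follows. This is a deliberate simplification with hypothetical names: in the environment, the choice between virtual and real testing is made by humans, and capabilities are assumed here to be stored as parameter ranges in the KB:

# Hypothetical sketch of the digital -> virtual -> real test escalation.

def digital_test(request, knowledge_base):
    # Digital test: compare the request parameters to formally stored
    # capabilities, assumed here to be allowed parameter ranges.
    for capability in knowledge_base:
        if all(p in request and lo <= request[p] <= hi
               for p, (lo, hi) in capability["ranges"].items()):
            return True
    return False

def test_manufacture(request, knowledge_base, simulate, real_test):
    if digital_test(request, knowledge_base):
        return "existing capability"
    # In the paper the choice between virtual and real testing is a human
    # judgement; a simple escalation stands in for it here.
    if not (simulate(request) or real_test(request)):
        return "rejected"        # feedback to product development
    # Accepted new services are added to the KB as new capabilities.
    knowledge_base.append({"ranges": {p: (v, v) for p, v in request.items()}})
    return "new capability"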
3.5. PERFORMANCE METRICS
Several performance metrics are gathered from the
manufacturing activities of the research
environment. The metrics are used to monitor and
maintain the operation of the environment as well as
to measure the performance. The metrics are the
same whether they are being collected from the
simulation models or from the real system.
Examples of metrics gathered from the research
environment are:
- Delivery reliability, throughput, and tact times
- Resource utilisation
- Process, changeover, and unit times and costs
- Work in progress and buffer sizes
- Production volume and material consumption
- Emissions, pollution, waste, and energy consumption
- Process quality assurance and stability monitoring
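Since the metrics are the same for both sources, a single record type can serve simulated and real data alike; a minimal sketch with hypothetical field names and values:

from dataclasses import dataclass

# Hypothetical metrics record, filled from either a simulation run or the
# real system; identical schemas keep the two directly comparable.

@dataclass
class MetricsSample:
    source: str               # "simulation" or "real"
    throughput_time_h: float
    tact_time_s: float
    utilisation: float        # 0..1
    wip: int                  # work in progress
    energy_kwh: float

sim = MetricsSample("simulation", 6.2, 95.0, 0.81, 14, 120.0)
real = MetricsSample("real", 6.8, 101.0, 0.77, 17, 131.0)
print(round(real.throughput_time_h - sim.throughput_time_h, 2))  # model vs reality gap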
4. CONCLUSIONS AND FUTURE WORK
The digital, virtual, and real aspects of
manufacturing systems were discussed in this paper.
The main idea was to separate the digitally
presented information and knowledge from the
virtual models of manufacturing entities. The virtual
part is a representation of the real part, i.e. a presentation of the same thing using ICT tools and principles, such as modelling and simulation, while the digital part is common to both of them, as it is used in decision making within the overall management strategy of a manufacturing company.
The division into digital, virtual, and real aspects was used to explain the structures of individual manufacturing entities and systems, as well as to describe their role in manufacturing activities and improvement. These theoretical viewpoints were also explained in the context of an academic research environment, utilized in current research topics where applicable.
The research issues discussed in this paper are
also utilized and further developed in several
scientific research projects. Short descriptions of the
topics of the selected research projects, related to the
subject of this paper, are as follows:
Knowledge Intensive Product and Production
Management from Concept to Re-cycle in Virtual
Collaborative Environment (KIPPcolla). The project
focuses on the development of better product-
process knowledge exchange and knowledge
intensive meaningful models. The main focus is on
the digital part i.e. formalizing the ICT-related
activities.
The CSM-Hotel project aims to create a concept
to collect several small and medium enterprises
(SMEs) under the same factory roof offering
partially shared hardware and software resources
and solutions for collaboration. This enables the companies to focus on their core competences, and a new level of sustainability can be reached. The
research environment is used as a demonstration
platform in the project.
Framework and toolset for developing, analyzing
and controlling sustainable and competitive
production networks (NICO). The project focuses
on supporting Finnish companies in realizing
competitive and sustainable products and production
networks. Based on the identified requirements and
best available practices of CSM, the research project
will build higher level static analysis tools and
simulation supported dynamic ICT tools for the
design, implementation, measurement and
development of CSM factories and networks.
5. ACKNOWLEDGMENTS
The research presented in this paper is co-financed
by Tekes (the Finnish Funding Agency for
Technology and Innovation) and several major
companies in Finland.
REFERENCES
Alavi M and Leidner DE, Knowledge Management and
Knowledge Management Systems: Conceptual
Foundations and Research issues, MIS Quarterly,
Vol. 25, 2001, pp. 107-136
Barry DK, Web services and service-oriented architecture: the savvy manager's guide, 2003, San Francisco, CA, Morgan Kaufmann Publishers
Behn R, Why Measure Performance? Different Purposes
Require Different Measures, Public Administration
Review, Vol. 63, No. 5, 2003, pp 586-606
Bracht U and Masurat T, The Digital Factory between
vision and reality, Computers in Industry, Vol. 56,
2005, pp 325-333

Chryssolouris G, Mavrikios D, Papakostas N, Mourtzis
D, Michalos G and Georgoulias K, "Digital
manufacturing: history, perspectives, and outlook",
Proceedings of the Institution of Mechanical Engineers
Part B: Journal of Engineering Manufacture, Vol. 222,
No. 5, 2008, pp.451-462
Constantinescu C and Westkämper E, Reference Model for Factory Engineering and Design, Proceedings of The 6th CIRP-Sponsored International Conference On Digital Enterprise Technology, Vol. 66, 2010, pp. 1551-1564
FoF: Ad-hoc Industrial Advisory Group, 2010, Factories
of the Future PPP - Strategic Multi-annual Roadmap,
20th January 2010
Hellmann A, Jessen U and Wenzel S, e-Services - a part
of the Digital Factory, The 36th CIRP-International
Seminar on Manufacturing Systems, 2003, pp 199-203
Koestler A, The Ghost in the Machine, Arkana Books, 1989
Kühn W, Digital Factory - Integration of simulation
enhancing the product and production process towards
operative control and optimization, International
Journal of Simulation, Vol. 7, No. 7, 2006, pp 27-39
Lanz M, Garcia F, Kallela T and Tuokko R, Product-
Process Ontology for Managing Assembly Specific
Knowledge between Product Design and Assembly
System Simulation, In: Ratchev S and Koelemeijer S
(eds.), Micro-assembly technologies and applications,
2008, pp. 99-108
Law AM and Kelton WD, Simulation modeling and
analysis, 3rd Edition, 2000 McGraw-Hill, New York
Maropoulos PG, Digital enterprise technology-defining
perspectives and research priorities, International
Journal of Computer Integrated Manufacturing, Vol.
16, Nos. 7-8, 2003, pp 467-478
Mavrikios D, Pappas M, Karabatsou V, and
Chryssolouris G, "A new concept for collaborative
product & process design within a human-oriented
Collaborative Manufacturing Environment", The
Future of Product Development: Proceedings of the
17th CIRP Design Conference , Krause F (ed), Part 5,
2007, pp. 301-310, Springer-Verlag
Nonaka I and Takeuchi H, The Knowledge-Creating
Company: How Japanese Companies Create the
Dynamics of Innovation. Oxford University Press,
Oxford, 1995
Nylund H, Salminen K and Andersson P, Digital virtual holons - An approach to digital manufacturing
systems, In: Mitsuishi, M., Ueda, K. & Kimura, F.
(eds.) Manufacturing Systems and Technologies for
the New Frontier, The 41st CIRP Conference on
Manufacturing Systems, 2008, Tokyo, Japan, pp 103-
106
Nylund H and Andersson PH, Digital manufacturing
supporting integrated improvement of products and
production systems, Proceedings of the 17th CIRP
International Conference on Life Cycle Engineering,
2010a, Hefei, China, pp. 156-162
Nylund H and Andersson PH, Simulation of service-
oriented and distributed manufacturing systems.
Robotics and Computer Integrated Manufacturing,
Vol. 26, No. 6, 2010b, pp. 622-628
Nylund H and Andersson PH, Framework for extended
digital manufacturing systems, International Journal
of Computer Integrated Manufacturing, Vol. 24, No. 5,
2011, pp. 446-456
Nylund H, Lanz M, Tuokko R and Andersson PH, A
holistic view on modelling and simulation in the
context of an intelligent manufacturing environment,
21st International Conference on Flexible Automation
and Intelligent Manufacturing (FAIM2011), 2011a,
Taichung, Taiwan, pp. 801-808
Nylund H, Lanz M, Ranta A, Ikkala K and Tuokko R,
Viewpoints on Competitive and Sustainable
Performance Metrics of an Intelligent Manufacturing
Environment, Submitted to: Strojniški vestnik - Journal of Mechanical Engineering, 2011b
Offodile OF and Abdel-Malek LL, The virtual
manufacturing paradigm: The impact of IT/IS
outsourcing on manufacturing strategy, International
Journal of Production Economics, Vol. 75, 2002, pp
147-159
Papazoglou MP and Georgakopoulos D, Service
Oriented Computing, Communications of the ACM,
Vol.46, No.10, 2003, pp 25-28
Reiter WF, Collaborative engineering in the digital
enterprise, International Journal of Computer
Integrated Manufacturing, Vol. 16, Nos. 7-8, 2003, pp
586-589
Souza MCF, Sacco M and Porto AJV, Virtual
manufacturing as a way for the factory of the future,
Journal of Intelligent Manufacturing, Vol. 17, 2006, pp
725-735
Valckenaers P, Van Brussel H, Bongaerts L and Wyns J,
Results of the Holonic Control System Benchmark at
the KU Leuven, Proceedings of the CIMAT
Conference, 1994, Troy, NY, USA, pp 128-133
Van Brussel H, Wyns J, Valckenaers P, Bongaerts L and
Peeters P, Reference architecture for holonic
manufacturing systems: PROSA, Computers in
Industry, Vol. 37, 1998, pp 255-274
Warnecke HJ, The Fractal Company: A Revolution in
Corporate Culture, Springer-Verlag, Berlin,
Germany, 1993, p 228
Wiendahl HP and Lutz S, Production in Networks,
CIRP Annals - Manufacturing Technology, Vol. 51,
No. 2, 2002, pp. 573-586
Wiendahl HP, ElMaraghy HA, Nyhuis P, Zäh MF and
Wiendahl HH, Changeable Manufacturing -
Classification, Design and Operation, CIRP Annals,
Vol. 56, No. 2, 2007, pp. 783-809

VIRTUAL GAGE DESIGN FOR THE EFFECTIVE ASSIGNMENT OF
POSITION TOLERANCES UNDER MAXIMUM MATERIAL CONDITION
George Kaisarlis
Mechanical Design & Control Systems
Section, School of Mechanical Engineering,
National Technical University of Athens
(NTUA), Greece
gkaiss@central.ntua.gr
Christopher Provatidis
Mechanical Design & Control Systems
Section, School of Mechanical Engineering,
National Technical University of Athens
(NTUA), Greece
cprovat@central.ntua.gr


ABSTRACT
Geometric Dimensioning and Tolerancing (GD&T) is currently the dominant approach for design
and manufacturing of mechanical components and assemblies. A frequently used geometrical
tolerance is the Position Tolerance assigned at the Maximum Material Condition. Recently, due to
the progress made in coordinate measuring machines and 3D CAD modellers, the concept of
"virtual gage" appeared as an alternative to the traditionally used physical gages. The paper focuses
on the implementation rather than the inspection of GD&T specifications in 3D-CAD models of
mechanical assemblies by the use of virtual gages. The objective is to provide an easy-to-use CAD-
based tool for tolerancing visualisation and functionality analysis during the design phase. The
paper extends the utilization of published virtual gage models and investigates their direct
implementation on commercially available 3D-CAD environments. Preliminary validation of the proposed approach on the components of a mechanical assembly shows promising results in terms of time savings and usability.
KEYWORDS
3D-CAD, Geometrical Tolerances, Dimensional Tolerances, Gages, Simulation

1. INTRODUCTION
Geometric Dimensioning and Tolerancing (GD&T)
is currently the dominant approach that controls
inevitable deviations from nominal geometric and
dimensional requirements within appropriate limits
during the design and manufacture of mechanical
components and assemblies. The main objectives of tolerance assignment are to safeguard that the critical functional specifications of the assembly will be met and that the produced components will be interchangeable. A frequently used geometrical
tolerance is the Position Tolerance due to its
versatility and economic advantages. When it is
assigned at the Maximum Material Condition
(MMC) an increase in the position tolerance is
allowed, equal to the departure of the particular
feature from the maximum material condition size.
In the era of digital enterprise technology,
appropriate and cost effective assignment and
interpretation of dimensional and geometrical
tolerances, in conjunction with tolerancing
principles such as the MMC, constitute a major area
of concern for modern manufacturing (Diplaris and
Sfantsikopoulos, 2006).
Physical gages are traditionally used in industry
for the functional verification of components' accuracy specifications with low uncertainties;
however, they remain expensive and inflexible.
Recently, due to the progress made in Coordinate
Measuring Machines (CMM) and the rapid
development of 3D CAD modellers, the concept of
the virtual gage appeared as an attractive alternative
for component inspection. A virtual gauge is usually

defined by a set of surfaces with a fixed orientation
establishing the boundaries of the required tolerance
zone.
The paper focuses on the implementation rather
than the inspection of GD&T specifications in 3D-
CAD models of mechanical assemblies by the use
of virtual gages. The objective is to provide an easy-
to-use CAD-based tool for tolerancing visualisation
and functionality analysis during the design phase.
A set of virtual gage design criteria that are in
accordance with current industrial practice and
relevant technical standards are introduced. The
paper extends the utilization of published virtual
gage models on dimensional as well as geometric
tolerance requirements and investigates their direct
implementation on commercially available 3D-
CAD environments. The preliminary validation of the proposed approach on components with true position geometric tolerances under MMC, which form a mechanical assembly, shows promising results in terms of time savings and usability.
The rest of the paper is organised as follows: in
the second Section the published technical literature
in the areas of virtual gaging and position
tolerancing is briefly referenced. Background
information about geometrical position tolerance
assignment and interpretation according to the
current GD&T standards is then provided together
with the outline of the problem that the paper is
focused on. The proposed approach for virtual gage
design in 3D CAD systems is then presented. The
functionality of the approach, illustrated through an
application example, is discussed in Section 5. Main
conclusions and future work orientation are
included in the final Section of the paper.
2. LITERATURE REVIEW
A large number of research articles on various
tolerancing issues in design for manufacturing have
been published over the last years, e.g.
Chryssolouris et al (2002), Marziale and Pollini
(2011). Designation of geometrical positional
tolerance has been adequately studied under various
aspects including tolerance analysis and synthesis,
composite positional tolerancing, geometric
tolerance propagation, reverse engineering tolerance
assignment, datum establishment, virtual
manufacturing, inspection procedures, e.g. Jiang
and Cheraghi (2001), Anselmetti and Louati (2005),
Diplaris and Sfantsikopoulos (2006), Kaisarlis et al
(2008), Martin et al (2011). Research publications
in the field of inspection and verification methods
for GD&T specifications are also numerous.
However, the potential use of simulated gages as an
alternative to the standard industrial practice of part
inspection by mechanical/functional gages has been investigated by a relatively limited number of researchers. The existing technical literature in this area is directly linked with the development of rigorous mathematical conditions and constraints that ensure mechanical assembly under various considerations, the theoretical interpretation of geometric tolerances, and the widespread industrial implementation of CMMs over the last twenty years.
An early major study linking geometrical position tolerancing specifications and simulated/virtual gaging was published by Etesami (1991).
mathematical model that addresses the verification
of 2D positional tolerance requirements for three
types of features (linear, circular and parallel lines)
is presented. The derived formulation leads to the
development of parametric acceptance zones
constrained by the datum features and associated to
part features considered in the tolerance statement.
Although the extension of the model to 3D is
discussed, no reference to a specific representation
is given. Later, Pairel (1997), (2007) developed a conceptual model of virtual gauges for a broader range of GD&T specifications, called the "fitting gauge". The model is based on the concepts of geometrical tolerance zone, virtual conditions, and perfect datum features used by standardized tolerancing. Integrated in specifically developed experimental soft gaging software, the "fitting gauge" model permits the building of the virtual gages defined by the geometric tolerances and the
inspection of the manufactured components
according to a precise order, starting from a file that
contains the CMM sampled points of the part to be
inspected. The approach is compared with
traditional three dimensional metrology inspection
practices for the case of position tolerancing, (Pairel
2009). The comparison results show that the risk of
rejecting in-tolerance parts, which exists when using
classical CMM software, is eliminated by the
inspection through the virtual fitting gauges model.
Recently, Mailhe et al (2008), (2009) introduced
a verification method for ISO geometrical
specifications that is based on a virtual gauge and
the statistical representation of real manufactured
surfaces. The approach leads to a conformance test
that directly takes into consideration the
propagation of measurement uncertainties. A
method for modelling successive machining process
that considers the geometrical and dimensional
deviations produced with each machining setup and
the influence of these deviations on further setups is
studied by Moujibi et al (2010) in a recent research
publication. The authors present a virtual inspection
approach in order to identify and evaluate geometric
position tolerance values. The evaluation of the
geometric tolerance values is performed on the real
workpiece obtained by simulation of errors in a
multistage machining process.

The virtual representation of the real object,
usually as a set of CMM digitized point cloud data,
has been widely studied, among others, in most of
the above referenced research works. However, the
concept of point cloud data analysis for performing geometrical tolerance verification is completely disjoint from the tolerance assignment process. In that context, the potential benefits of a high-level CAD representation of tolerances, one that goes beyond the simple 3D tolerance annotation of current 3D-CAD software packages used during design intent definition, appear to be disregarded in the published technical literature.
The use of the Pairel "fitting gauge" model for the representation of geometric tolerances in CAD/CAM systems, as an important tool for extending the tolerancing possibilities during tolerance designation, was recently introduced by Pairel et al (2007). In that publication the authors
point out the important distinction between the
syntax of a geometric tolerance and its semantics. In
an earlier publication, Dantan and Ballu (2002)
have stressed the importance of the meaning of
tolerance specifications in mechanical assemblies
expressed as specification semantics and the
potential of their modelling in the form of virtual
gages. However, Dantan and Ballu (2002) do not
discuss the integration of their approach in 3D-CAD
systems. Moreover, the model proposed by Pairel et
al (2007) only addresses geometric tolerances and
does not include the representation of dimensional
specifications. According to ASME Y14.43 (2003),
the common usage of a functional gage is to verify a
workpiece's ability to be assembled. This shall be
accomplished through inspection of both the size
and the geometric characteristics of the workpiece
feature(s) under consideration.
The approach proposed in this paper is strongly based on the Pairel et al (2007) "fitting gauge" model, extending its application to both the geometrical and dimensional tolerance specifications of mechanical components and assemblies by the use of virtual gages. Moreover, our approach is readily implementable on most current, commercially available 3D-CAD modellers without the need to develop a specialized soft gaging software module.
3. BACKGROUND AND PROBLEM
DESCRIPTION
In the following paragraphs the geometrical position
tolerance assignment and interpretation according to
the current GD&T standards is presented and the
industrial engineering problem that the paper is
focused on is outlined.
3.1. POSITION TOLERANCING UNDER MMC
IN CURRENT GD&T STANDARDS
Position tolerancing is standardized in current
GD&T international and national standards, such as
ISO 5458 (1998), ISO 1101 (2004) and ASME
Y14.5M (2009). Although the ISO and the ASME
tolerancing systems are not fully compatible,
(Zbigniew, 2009), they both define position
geometrical tolerance as the total permissible
variation in the location of a feature about its exact
true position. For cylindrical features such as holes
or bosses the position tolerance zone is usually the
diameter of the cylinder within which the axis of the
feature must lie, the center of the tolerance zone
being at the exact true position, Figure-1, whereas
for size features such as slots or tabs, it is the total
width of the tolerance zone within which the center
plane of the feature must lie, the center plane of the
zone being at the exact true position.
Figure 1 Cylindrical tolerance zone of geometric true position according to ISO 1101
The position tolerance of a feature is denoted with
the size of the diameter of the cylindrical tolerance
zone (or the distance between the parallel planes of
the tolerance zone) in conjunction with the
theoretically exact dimensions that determine the
true position and their relevant datums, Figure-2.
Datums are, consequently, fundamental building
blocks of a positional tolerance frame in positional
tolerancing. Datum features are chosen to position
the toleranced feature in relation to a Cartesian
system of three mutually perpendicular planes,
jointly called Datum Reference Frame (DRF), and
restrict its motion in relation to it. Positional
tolerances often require a three plane datum system,
named as primary, secondary and tertiary datum
planes. The required number of datums (1, 2 or 3) is
derived by considering the degrees of freedom of
the toleranced feature that need to be restricted.
Change of the datums and/or their order of precedence in the DRF results in different geometrical accuracies, Figure-3.

Figure 2 Position tolerancing of a cylindrical feature
(Kaisarlis et al, 2008)

Figure 3 Influence of datum precedence in a DRF to the
location of a feature (Kaisarlis et al, 2008)
A fundamental difference between GD&T and conventional coordinate tolerancing is that the former creates explicitly defined coordinate systems and respective DRFs. All features on a part are unambiguously related to these coordinate systems through geometric tolerances in feature control frames, and geometrical deviations of location, orientation, and run-out are thus controllable.
It can easily be shown that a position tolerance increases the area inside which the axis of, e.g., a rotational feature must lie by 57%, compared with the area available through conventional coordinate tolerancing. Assignment of position tolerances reduces in this way the number of rejects without affecting the product quality.
in Figure-4 (a) and (b), in multiple-hole assemblies
position tolerances are particularly convenient and
replace coordinate tolerances for the majority of
applications (Diplaris and Sfantsikopoulos, 2006).
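The 57% figure follows from comparing the two tolerance-zone areas, under the usual assumption that the circular position zone has a diameter equal to the diagonal of the square coordinate zone of side t:

\[
A_{\text{square}} = t^{2}, \qquad
A_{\text{circle}} = \frac{\pi}{4}\left(t\sqrt{2}\right)^{2}
                  = \frac{\pi}{2}\,t^{2} \approx 1.57\,t^{2},
\]

so the circular zone offers about 57% more area for the feature axis than the square zone.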
Position tolerances are particularly helpful when
they are assigned at the Maximum Material
Condition. At MMC, an increase in position
tolerance is allowed, equal to the departure of the
feature from the maximum material condition size,
ISO 2692 (1988), ASME Y14.5M (2009). As a
consequence, a feature with size beyond maximum
material but within the dimensional tolerance zone
and its axis lying inside the enlarged MMC cylinder
is acceptable, Figure-5. The accuracy required by a
position tolerance is thus relaxed through the MMC
assignment and the reject rate reduced. Moreover,
according to the current ISO and ASME standards, datum features of size that are included in the DRF of position tolerances can also apply on either an MMC, Regardless of Feature Size (RFS), or Least Material Condition (LMC) basis.
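A worked numeric sketch of the MMC bonus with hypothetical values; the relation is the one stated above, following ISO 2692 and ASME Y14.5M:

# MMC bonus on an internal feature (hole); values are hypothetical.
# Allowed position tolerance = specified tolerance + departure from MMC size.

hole_mmc = 10.00        # MMC size of the hole (smallest hole), mm
hole_actual = 10.15     # actual produced size, within the size tolerance, mm
pos_tol_at_mmc = 0.20   # specified position tolerance at MMC, mm

bonus = hole_actual - hole_mmc        # departure from MMC
allowed = pos_tol_at_mmc + bonus      # enlarged (MMC) tolerance zone diameter
print(round(allowed, 2))              # -> 0.35 mm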


Figure 4 Co-ordinate tolerancing and composite position tolerance under MMC (Diplaris and Sfantsikopoulos, 2006)

Figure 5 Position tolerance zone and MMC bonus on a
sample cylindrical feature of the Figure-4 component

Position tolerances mainly concern clearance fits.
They achieve the intended function of a clearance
fit by means of the relative positioning and
orientation of the axis of the true geometric
counterpart of the mating features with reference to
one, two or three Cartesian datums. The relationship
between mating features in such a clearance fit may
be classified either as a fixed or a floating fastener
type (ASME Y14.5M, 2009). A floating fastener situation exists where two or more parts are assembled with fasteners such as bolts and nuts, and all parts have clearance holes for the bolts. In a fixed fastener situation, a bolt passes through a clearance hole in one part and threads into a tapped hole in the mating part.
3.2. EFFECTIVE GD&T ASSIGNMENT IN 3D-CAD ENVIRONMENT
Digital modelling of parts, products, and large assemblies, and their further analysis in various "virtual" ways before they are put into production, in order to resolve most development problems at the earliest possible stage and reduce the risk of errors, has been made possible by sophisticated CAD/CAM/CAE software packages. Available CAD systems today provide high-level tools for the realistic representation and visualization of the components' nominal geometry. In current
industrial practice, the designer works on the
nominal model of the product within a CAD system
which, GD&T annotated or not, only represents
nominal product information. Most of the
simulations to predict the behaviour and
performance of the final product are carried out in
this model. However, the nominal model limits the
ability to deal with geometric variability resulting
from the real environment during the product life
cycle, such as material property defects,
manufacturing processes, assembly errors, etc.
Any engineering component can be viewed as a set of specifications regarding its geometrical and functional characteristics that reflect the design intent. The CAD model of the part must provide high-level dimensioning and tolerancing information for functional use, coupled with its
nominal geometric model. Today, several tools for tolerance specifications in the form of annotations have been implemented into parametric 3D CAD systems. The majority of currently used solid CAD
modellers permit tolerances for a feature to be
associated with geometric entities in the CAD file.
However, tolerance annotation in most current 3D-
CAD systems mainly concerns the syntax of
geometric and dimensional tolerances, i.e. their
writing on the technical drawing, and not their
semantics, i.e. their functional meaning with regard
to the part and the final product assembly.
During the design phase of the product, there are
primarily three ways to determine whether the
product and process meet the dimensional and
geometrical product requirements, as they are
designed. These include (1) making an educated guess, (2) building several hundred assemblies using production tools and measuring the results, or
(3) simulating the design and assembly of the
product including 3-D geometry and GD&T
(Moujibi et al, 2010). The only clear and practical
way to determine if the product and process, as
designed, meet the dimensional and geometrical
product requirements is to use the third way,
namely, simulation.
Tolerance assignment is a rather complex, time
consuming and error prone stage during the design
process, which imposes high requirements for
designers as it is still strongly based on their
experience, design data of existing products and/or
related design handbooks. In that stage, designers
need to consider comprehensively the geometric
features, functional requirements, mating
conditions, tolerance specifications in accordance
with the standards, and many other factors. The kind of tolerance specification that is designed (e.g. position tolerance with or without MMC versus coaxiality or runout tolerance on a cylindrical feature), as well as whether it is technologically reasonable, depends mainly on the designer's experience. Systematic approaches and formal CAD system tools that aim at facilitating the tolerance assignment task can certainly reduce design cost, improve design quality, and enhance the overall efficiency of product development.
The goal of our research is the study of easy-to-
use solutions for the GD&T functionality
confirmation of mechanical components and
assemblies during the tolerance designation stage.
The proposed approach is based on the strong
interrelation between the design and the inspection
phases and the tolerance interpretation according to
the international standards. In particular, the paper
investigates the direct implementation of virtual
gage models on commercially available 3D-CAD
environments.
4. VIRTUAL GAGE DESIGN IN 3D-CAD
SYSTEMS
In industrial geometrical metrology, the term "soft gaging" has been established to describe the comparison of a set of coordinate measurement
data, such as the data generated by a CMM, with a
CAD model for purposes of part acceptance/
rejection. In general terms, the soft gaging process
works as follows (ASME Y14.43, 2003): (a) a part's

nominal geometry is modelled with CAD software,
(b) the CAD model is imported into the soft gaging
software, where tolerance attributes are attached to
part features (some CAD systems are able to
perform this step internally), (c) the soft gaging
software is used to generate a worst-case model
based on the nominal CAD geometry varying by the
amount allowed by the tolerances. This worst-case
model is called a soft gage, (d) a part is measured
on a CMM, generating a cloud of coordinate data
points, (e) the soft gaging software compares this
cloud of points (or, sometimes, a reverse-
engineered CAD model based on it) with the soft
gage model and displays out-of-tolerance
conditions. Advantages of this method are that
complex shapes may be measured with accuracy
and little or no hard tooling. The major
disadvantage is that, as with most CMM
measurements, the acceptance of a feature is based
on a sample of points, allowing the possibility that
small out-of-tolerance areas might not be evaluated.
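Step (e), the out-of-tolerance comparison, can be sketched for the simplest case of a cylindrical gage boundary at true position; the helper below is hypothetical, and the sampled points are assumed to be expressed in the gage coordinate frame:

import math

# Hypothetical sketch of step (e): check CMM-sampled points of a hole surface
# against a cylindrical soft-gage (virtual condition) boundary at true position.

def inside_vc_cylinder(points, axis_xy, vc_diameter):
    """True if every sampled (x, y, z) point of the hole surface clears the
    VC 'pin', i.e. lies at or beyond the VC radius from the true position axis."""
    ax, ay = axis_xy
    r_vc = vc_diameter / 2.0
    return all(math.hypot(x - ax, y - ay) >= r_vc for x, y, z in points)

points = [(5.02, 0.01, z / 10.0) for z in range(10)]   # sampled hole surface
print(inside_vc_cylinder(points, (0.0, 0.0), 10.0))    # VC pin of 10 mm diameter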
In the scope of this paper, the term "virtual gage" is adopted as an enhanced version of the above described "soft gage". We consider the virtual gage a powerful tool that can be utilized not only for GD&T inspection purposes on the manufactured component, but also during the critical step of tolerance assignment in the design process. In our
approach the design of virtual gages is primarily
based on a range of design criteria that have been
established in the early days of technical research
on the field of computer based tolerance analysis,
(Farmer and Gladman, 1986). These criteria serve
both as foundations and as evaluators in the
development of our approach and safeguard that the
basic dimensioning and tolerancing principles are
followed. In that context, the design of virtual gages
should:
i. Enable designers to specify and analyze
designed dimensions and tolerances so the
functional requirements of design can be verified;
ii. Convey the design information in a clear,
concise and efficient manner;
iii. Be unambiguously interpreted by all parts that
are involved in the different stages of a product life
cycle (e.g. design, manufacturing, assembly,
inspection);
iv. Provide the means to return the inspection
results for evaluating parts and identifying faults in
manufacturing processes.
Since the core concepts of physical and virtual
gages are technically the same, the basic principles
that apply during the design and manufacturing of
mechanical/functional gages have to be taken into
account. These design principles and rules are
published in relevant technical handbooks and are
standardized in series of ad hoc standards, e.g.
ASME Y14.43 (2003), ISO/R 1398 (1971).
The fundamental tolerancing principle, or "principle of independency", as per ISO 8015 is another critical consideration. Each specified
dimensional or geometrical requirement on a
drawing shall be met independently, unless a
particular relationship is specified. However,
depending upon its function, a feature of size is
simultaneously controlled by its size and any
applicable geometric tolerances. Material conditions
such as RFS, MMC or LMC may also be applicable.
In case of MMC or LMC application the geometric
tolerances are clearly no longer independent from
the dimensional ones. According to ASME Y14.5M (2009), consideration must be given to the collective effects of MMC or LMC and applicable tolerances in determining the clearance between parts (fixed or floating fastener formula), the guaranteed area of contact, and thin wall conservation, as well as in establishing gage feature sizes.
The proposed approach further develops the "fitting gauge" model, considering the fact that functional gages are devices that measure the collective effects of size and geometric tolerances at the same time, representing a simulated mating condition. A virtual condition exists only for tolerances (e.g. position tolerance) that control size features (e.g. holes, pins, bosses, tabs) and is defined as the collective effect of size and geometric tolerances that must be considered in determining the fit or clearance between mating parts or features. The following formulas apply for the calculation of the virtual condition (VC) boundary of a size feature dimensioned at MMC:

(a) External features (Ex), Figure-6(a):
VC_Ex = MMC (feature size) + Geometric Tolerance at MMC (form, orientation, or location)   (1)

(b) Internal features (In), Figure-6(b):
VC_In = MMC (feature size) - Geometric Tolerance at MMC (form, orientation, or location)   (2)

When the MMC bonus on the geometric tolerance term is ignored, the above equations 1 and 2 also apply when the size feature is dimensioned at RFS, for the calculation of its maximum outer boundary (Ex) and minimum inner boundary (In), respectively. Coupled with the virtual condition, the resultant condition is defined as the single worst-case boundary generated by the collective effects of a feature of size at its specified MMC or LMC, the geometric tolerance for that material condition, the size tolerance, and the additional geometric tolerance derived from the feature's departure from its specified material condition. The following formulas apply in order to calculate the resultant condition (RC) boundary of a size feature dimensioned at MMC:

(c) External features (Ex):
RC_Ex = LMC (feature size) - Geometric Tolerance at LMC (form, orientation, or location)   (3)

(d) Internal features (In), Figure-6(c):
RC_In = LMC (feature size) + Geometric Tolerance at LMC (form, orientation, or location)   (4)

When a size feature is dimensioned using the LMC concept, the above equations 1 and 2 are used for the calculation of the RC boundaries, whereas equations 3 and 4 apply for the calculation of the VC.
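Equations 1-4 reduce to a few lines of arithmetic; a sketch with hypothetical values for an internal feature (hole) dimensioned at MMC:

# Virtual and resultant condition of an internal feature (hole) at MMC,
# per equations 1-4; all values are hypothetical.

mmc, lmc = 10.00, 10.30               # size limits of the hole, mm
gt_at_mmc = 0.20                      # geometric tolerance at MMC, mm
gt_at_lmc = gt_at_mmc + (lmc - mmc)   # tolerance including the full MMC bonus

vc_in = mmc - gt_at_mmc               # eq. (2): boundary the mating part must clear
rc_in = lmc + gt_at_lmc               # eq. (4): single worst-case outer boundary
print(round(vc_in, 2), round(rc_in, 2))   # -> 9.8 10.8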

Figure 6 Virtual and Resultant Condition boundaries on an MMC dimensioned internal feature (ASME Y14.5, 2009)
In assembling a complex product, several
dimensions recognized as critical have a strong
impact on its functional performance. These critical
dimensions may either address dimensional (e.g.
clearance) or geometrical (e.g. perpendicularity)
specifications of the assembly and are chained to
particular GD&T specifications of the components
that form the product. The recognition of the latter can either be performed by an experienced designer or be aided by relevant tolerance stack-up analysis CAD tools, such as the TolAnalyst module in the SolidWorks 3D-CAD software of Dassault Systèmes. The design of the virtual gages in our approach addresses the features of the components' CAD models that are associated with the above mentioned specifications.
In the framework of this study, a virtual gage is
designed as a set of ideal virtual features, in the same
3D-CAD system that the designer uses to model the
nominal geometry of the components which form a
mechanical assembly. The geometrical boundaries
of these features correspond to the tolerance zone of
a specified GD&T requirement. In the proposed
approach, for each type of GD&T specification (e.g.
flatness, perpendicularity, true position, runout) the
3D-CAD software is used to individually model the
relevant tolerance zone as a stand-alone solid CAD
part. This set of parts, which corresponds to the set
of the initially designated GD&T specifications,
composes the virtual gage and is assembled on the
nominal CAD model of the component.
The denoted datums and their precedence are taken into account during the assignment of the mates that constrain the assembly of the virtual gage with the component. Therefore, the degrees of freedom of each individual solid part of the virtual gage are restrained in accordance with the DRF of the geometric tolerance that each part represents. In case the component's features or the DRF datums are dimensioned at MMC (or LMC), the size of the CAD modelled tolerance zone is appropriately controlled in accordance with equations (1)-(4). Therefore, different configurations of the virtual gage are created within the CAD system, corresponding to the required VC or RC of each feature. This can be easily performed by the designer using readily-available CAD tools, such as the Design Table option in the SolidWorks 3D-CAD software of Dassault Systèmes, which directly links configured parameters with Microsoft Excel worksheets. At that stage the designer is able to visualize the allocated GD&T specifications directly in 3D and gain a better understanding of their impact. Finally, the components, properly constrained together with their virtual gages, are assembled in order to perform overall conformance tests that include interference checks, minimum and maximum clearance controls, etc.
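To illustrate how such a Design Table can be driven (a minimal sketch with our own naming, not the SolidWorks API), the rows for an MMC-controlled hole simply grow the tolerance zone by the bonus as the actual size departs from MMC:

    def design_table_rows(mmc, lmc, pos_tol_at_mmc, step):
        # One configuration per candidate hole size: the position
        # tolerance zone grows by the departure from MMC (the bonus).
        size = mmc
        while size <= lmc + 1e-9:
            yield {"hole_diameter": round(size, 3),
                   "zone_diameter": round(pos_tol_at_mmc + (size - mmc), 3)}
            size += step

    for row in design_table_rows(6.0, 6.5, 0.5, step=0.25):
        print(row)
    # {'hole_diameter': 6.0, 'zone_diameter': 0.5}
    # {'hole_diameter': 6.25, 'zone_diameter': 0.75}
    # {'hole_diameter': 6.5, 'zone_diameter': 1.0}

Each row corresponds to one configuration of a gage part; in SolidWorks the same mapping would live in the Excel worksheet behind the Design Table.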
5. APPLICATION EXAMPLE AND
DISCUSSION
In order to assess the effectiveness of the virtual gage approach in a commonly used 3D-CAD environment, a typical mechanical assembly is used as an application example. The assembly concerns a simple gearbox that comprises seven individual components, namely: a main housing part, two round cover plates, a top cover plate, one worm gear, a worm gear shaft and an offset shaft, Figure-7. For fastening purposes, twelve commercially available fasteners (not modelled in Figure-7) are used to assemble the two round cover plates and the top cover plate with the main housing part. The components were designed in the SolidWorks 2009 SP2.1 CAD software of Dassault Systèmes and annotated according to the GD&T standards by the use of the SolidWorks DimXpert module. The gearbox assembly is offered by Dassault Systèmes as part of the educational material of their TolAnalyst tolerance stack-up analysis module.


Figure 7 Gearbox application example assembly: nominal geometry of the components' 3D CAD models

Figure 8 GD&T specifications designated on round cover
plate of the gearbox assembly

Figure 9 GD&T specifications designated on main housing
of the gearbox assembly
For the economy of the paper, we present and discuss the virtual gage application only for the geometrical specifications of the assembly that concern the position tolerances designated on the group of four holes on the round cover plates, Figure-8, and the two corresponding groups of four holes on each side of the main housing part, Figure-9. These groups of holes constitute typical floating fastener cases (ASME Y14.5M, 2009). The nominal diameter and the dimensional tolerance designated to all the holes of the mentioned groups is Ø6.25 ± 0.25 mm. The position tolerances designated on the groups of holes and the relevant datum features are highlighted in Figure-10 and denoted in Figures-8 and -9. They are all dimensioned at MMC; however, the size of the position tolerance is 0.5 mm for the round cover plate holes and for the front group of holes of the housing, whereas for the back group of the housing it is 0.25 mm. Moreover, for the latter, datum feature E is not designated at MMC, as is the case for the former ones (datums B and G, respectively).


Figure 10 Groups of holes and relevant datum features
examined by the virtual gage approach
The critical assembly requirement that is examined here is the location and orientation of the worm gear shaft, which clearly influences the location and orientation of the worm gear and is therefore considered of paramount importance for the proper function of the gearbox. This critical assembly requirement is directly associated with the location, and in fact with the position tolerance, assigned to the four groups of Ø6.25 holes. Normally, the designer's main concern during the assignment of their dimensional (± 0.25 mm) and position tolerances is to safeguard that the fasteners pass through them, at the worst case conditions of the assembly, without interference. Nevertheless, the relative location of the front and back groups of holes in the main housing, especially when assigned at MMC, has here a direct impact on the location and orientation of the worm gear shaft.
For the implementation of the initially assigned position tolerances and the visualisation of their impact on the critical assembly requirements, the proposed virtual gage approach was applied. Following the approach of Section 4, three virtual gages (two for the round cover plates and one for the main housing) that model the position tolerance zones of the groups of Ø6.25 holes were designed and assembled on the components, e.g. Figure-11. By the use of the SolidWorks DimXpert module, the allowable variation of the position tolerance zone sizes, imposed by the MMC assignment, was associated with respective Excel worksheets, forming the Design Tables that allowed their easy manipulation. Table-1 gives the maximum values of the position tolerances for the different groups of holes in the cases of feature size at LMC, at MMC, and with datum size at MMC. Through the design process, the acceptable limit for the worm gear shaft dislocation was assigned as 1.8 mm.
Table 1 - Position tolerance values for the Ø6.25 holes in the three groups of the gearbox assembly (in mm)

                             Max position    Max position    Max position
                             tolerance       tolerance       tolerance on
                             on LMC          on MMC          MMC-MMC (Datum)
Back group, main housing     0.25            0.75            0.95
Front group, main housing    0.5             1               1.2
Cover plate groups           0.5             1               1.1
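The middle column of Table-1 can be cross-checked from the specified tolerances and the full size bonus, as in the short sketch below (our illustration; the datum-shift column additionally requires the datum feature sizes, which the table does not list):

    # Holes are 6.25 +/- 0.25 mm, so the departure from MMC to LMC is 0.5 mm.
    size_bonus = 0.5
    specified = {"back group": 0.25, "front group": 0.5, "cover plates": 0.5}
    for group, tol_at_mmc in specified.items():
        print(group, tol_at_mmc + size_bonus)
    # back group 0.75, front group 1.0, cover plates 1.0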

The components of the gearbox, properly constrained together with their virtual gages, were then mated and assembled in order to perform a series of conformance tests with variable values for the position tolerances of the Ø6.25 groups of holes. The different scenarios examined included the designation of datum feature E at MMC and the assignment of a 0.5 mm position tolerance to the back group of the main housing holes. Supported by the TolAnalyst tolerance stack-up analysis tool, it was observed that in both scenarios the 1.8 mm limit was violated. Moreover, by setting the position tolerance of the same group of holes to 0.75 mm through the Design Table sheet, interference was directly detected. The ease of manipulation, in conjunction with the limited time needed for the examination of the above scenarios and the performance of interference checks, minimum and maximum clearance controls, etc., encourage the use of the proposed virtual gage approach for this kind of engineering problem.


Figure 11 Virtual gages for position tolerances of two
holes assembled on main housing CAD model
6. CONCLUSIONS
Anticipating the effects of manufacturing variation,
as quantified by the assigned GD&T specifications,
in the earliest possible product life stage is
paramount in order to achieve competitive quality,
cost and development time targets. The complete
3D CAD visualization of the dimensional and
geometrical tolerances that is accomplished by the
proposed approach, in conjunction with currently
available tolerance analysis and synthesis CAD
tools, facilitates and advances the design process
significantly. The proposed virtual gage approach
offers an easy-to-use tool that assists the tolerance
assignment task in a time and cost efficient way,
compatible with current industrial practice. It is therefore considered a tool that can certainly reduce design cost, improve design quality and enhance the overall efficiency of product development. To the extent of the authors' knowledge, the approach is the first of its kind for this type of engineering problem that is directly implemented within a commercially available 3D
CAD environment. Future work is oriented towards
the enhancement of the approach through the
implementation of other types of geometric
tolerances (e.g. single and total runout) with the
criteria imposed by the ISO and ASME standards.

REFERENCES
ASME Y14.43-2003, "Dimensioning and Tolerancing Principles for Gages and Fixtures", The American Society of Mechanical Engineers, New York, 2003
Anselmetti B and Louati H, "Generation of manufacturing tolerancing with ISO standards", International Journal of Machine Tools & Manufacture, Vol. 45, 2005, pp 1124-1131
ASME Y14.5M-2009, "Dimensioning and Tolerancing", The American Society of Mechanical Engineers, New York, 2009
Chryssolouris G, Fassois S and Vasileiou E, "Data sampling technique (DST) for measuring surface waving", International Journal of Production Research, Vol. 40, No. 1, 2002, pp 165-177
Dantan J-Y and Ballu A, "Assembly Specification by Gauge with Internal Mobilities (GIM) - A Specification Semantics Deduced from Tolerance Synthesis", Journal of Manufacturing Systems, Vol. 21, No. 3, 2002, pp 218-235
Diplaris S and Sfantsikopoulos M, "Maximum material condition in process planning", Production Planning & Control, Vol. 17, No. 3, 2006, pp 293-300
Etesami F, "Position Tolerance Verification Using Simulated Gaging", The International Journal of Robotics Research, Vol. 10, No. 4, 1991, pp 358-370
Farmer LE and Gladman CA, "Tolerance Technology - Computer Based Analysis", Annals of the CIRP, Vol. 35, Part 1, 1986, pp 7-10
ISO 1101, "Geometrical Product Specifications (GPS) - Geometrical tolerancing - Tolerances of form, orientation, location and run-out", The International Organization for Standardization, Geneva, 2004
ISO 5458, "Geometrical Product Specifications (GPS) - Geometrical Tolerance - Position Tolerancing", The International Organization for Standardization, Geneva, 1998
ISO/R 1938, "ISO system of limits and fits - Part II: Inspection of plain workpieces", The International Organization for Standardization, Geneva, 1998
ISO 2692, "Technical drawings - Geometrical tolerancing - Maximum material principle", The International Organization for Standardization, Geneva, 1988
Jiang G and Cheraghi SH, "Evaluation of 3-D feature relating positional error", Precision Engineering, Vol. 25, 2001, pp 284-292
Kaisarlis G, Diplaris S and Sfantsikopoulos M, "Geometrical position tolerance assignment in reverse engineering", International Journal of Computer Integrated Manufacturing, Vol. 21, No. 1, 2008, pp 89-96
Maihle J, Linares JM, Sprauel JM and Bourdet P, "Geometrical checking by virtual gauge, including measurement uncertainties", CIRP Annals - Manufacturing Technology, Vol. 57, No. 2, 2008, pp 513-516
Maihle J, Linares JM and Sprauel JM, "The statistical gauge in geometrical verification. Part II. The virtual gauge and verification process", Precision Engineering, Vol. 33, No. 4, 2009, pp 342-352
Martin P, Dantan JY and D'Acunto A, "Virtual manufacturing: prediction of work piece geometric quality by considering machine and set-up accuracy", International Journal of Computer Integrated Manufacturing, Vol. 24, No. 7, 2011, pp 610-626
Marziale M and Polini W, "A review of two models for tolerance analysis of an assembly: Jacobian and torsor", International Journal of Computer Integrated Manufacturing, Vol. 24, No. 1, 2011, pp 74-86
Moujibi N, Rzine B, Saka A, Bouayad A, Radouani M and Elfahim B, "Simulation of Geometric Tolerance Values Based on Manufacturing Process Constraints", Journal of Studies on Manufacturing, Vol. 1, Iss. 2-3, 2010, pp 91-99
Pairel E, "The gauge model: A new approach for coordinate measurement", Proceedings of the XIV IMEKO World Congress, Tampere, Finland, 1997, pp 278-283
Pairel E, "Three-Dimensional Verification of Geometric Tolerances with the Fitting Gauge Model", ASME Journal of Computing and Information Science in Engineering, Vol. 7, No. 1, 2007, pp 26-31
Pairel E, "Three-dimensional metrology with the virtual fitting gauges", CD Proceedings of the 11th CIRP International Conference on Computer Aided Tolerancing, Annecy, France, 2009
Pairel E, Hernandez P and Giordano M, "Virtual Gauge Representation for Geometric Tolerances in CAD-CAM Systems", in Davidson JK (ed.), Models for Computer Aided Tolerancing in Design and Manufacturing, Springer, The Netherlands, 2007, pp 3-12
Humienny Z, "State of art in standardization in GPS area", CIRP Journal of Manufacturing Science and Technology, Vol. 2, 2009, pp 1-7
DIMENSIONAL MANAGEMENT IN AEROSPACE ASSEMBLIES: CASE
BASED SCENARIOS FOR SIMULATION AND MEASUREMENT OF
ASSEMBLY VARIATIONS
Parag Vichare
The University of Bath
P.Vichare@bath.ac.uk
Oliver Martin
The University of Bath
o.c.martin@bath.ac.uk


Jafar Jamshidi
The University of Bath
enpjj@bath.ac.uk
Paul G Maropoulos
The University of Bath
p.g.maropoulos@bath.ac.uk
ABSTRACT
In the manufacture of any complex product, the consideration of tolerances and their build up
during assembly is vitally important. Tolerances within an assembly are defined during the setting
of engineering specifications and in physical terms they arise from the individual components, their
manufacturing imperfections, materials and their compliance, the means by which they are fastened
and the assembly sequence used. The methodology reported in this paper aims at assessing and predicting the dimensional influence of: (i) designed tolerances, including component and assembly level datum structures, using a Monte Carlo approach; (ii) designed assembly processes, including assembly sequence, fastening parameters and material compliance, using finite element analysis (FEA) methods; and (iii) component and sub-assembly level measurement data, for revising the assembly sequence if any concessions were issued on manufactured components. Thus, the proposed methodology can be applied before (in the design phase) and during (in the production phase) assembly process execution. The methodology is exploited in a case study for consolidating design, tooling and metrology information to identify how tolerances can be managed more efficiently within an aircraft structure, so that the assembly key characteristics can be maintained or improved while component tolerances are relaxed and interface management processes are minimised using measurement assisted assembly techniques.
KEYWORDS
Aerospace Assembly; Assembly Measurement; Tolerance Analysis; Design Verification

1. INTRODUCTION
Although the early design phase is identified as the best opportunity to envisage the performance of a new product or process, it is impossible to consider every aspect of discrepancy at that stage, because very limited information is available regarding novel product development philosophies. This is especially true for aerospace/defence products, where new design and development philosophies are adopted and conceived for the long term. Unlike automotive development philosophies, aerospace/defence products have capital intensive prototype development programmes/phases typically stretching beyond 5 years. A typical automobile may consist of several thousand parts, while an aircraft can have a few million parts. The two industries also differ in the number of units manufactured per year: the automotive industry produces several million units per year, as opposed to the aerospace industry, which manufactures a few hundred aircraft per year. This has created a
fundamental difference in the way new design
concepts are undertaken in both industries. For
example, new design concepts in the automotive
industry are first designed and verified using design
methodologies such as Digital Mock-up (DMU) and
Design for Manufacture and Assembly (DFMA),
which represent an ideal or nominal state of the
manufacturing and assembly processes (Bernard
and Tichkiewitch, 2008, Chryssolouris, et al.,
2002). In reality, a number of deviations exist between the real and the simulated world, as digital mock-up technology does not simulate the impacts of all sources of assembly variation; consequently, results from a simulation can deviate significantly from the real outcome of the actual assembly. Thus, further revisions of the manufacturing processes are made in the production phase using process monitoring techniques such as Statistical Process Control (SPC). However, the aerospace/defence industry lacks the luxury of revising production processes, because traditional process monitoring tools such as manufacturing process capability and quality control charts have limited applicability where production volume is very low, especially for a one-off prototype, where novel manufacturing and assembly concepts are employed to develop a new functional prototype. This has created tremendous pressure on new aircraft design and development philosophies, resulting in evolving assembly technologies such as Determinate Assembly, Measurement Assisted Assembly (MAA), predictive shimming, Automated Wing Drilling Equipment (AWDE), and assembly variation propagation modelling and analysis workbenches. These techniques improve assembly operations, simulate component mating whilst considering the in-process state of the assembly, reduce post-assembly processes such as readjustment, fettling or shimming, incorporate the use of reconfigurable tooling and minimise the effects of production inconsistencies related to geometric dimensions and tolerances; they thus hold considerable merit if utilised in aero-structure development projects (Chryssolouris, et al., 2000, Jamshidi, et al., 2010, Maropoulos and Ceglarek, 2010).
For complex aerospace assemblies, product
verification methods in the digital domain and real
world (production phase) are currently fragmented,
prolonged and sub-optimal due to the fact that there
is no cohesive design methodology for developing
aerospace/defence products which can i) simulate
assembly variability considering tolerances
specified on the product and assembly jigs, ii)
identify the key tolerances responsible for affecting
assembly characteristics, iii) identify component
and assembly level measurement stages and
mandatory inspection points required to maintain
the assembly key characteristics, iv) incorporate
component and assembly measurement data for
planning subsequent assembly processes and v)
reconfigure assembly processes depending upon
measurement data. The paper reports on a new
hybrid methodology (Figure 1) that integrates
model-based analysis with physical measurement
plans and data in order to facilitate the early
verification of complex designs from the
perspective of satisfying key assembly criteria. A scenario based case study is presented where the proposed methodology is exploited, firstly for verifying designed tolerances and assembly processes, and secondly for simulating the influence of the component and sub-assembly level measurement data on the final state of the assembly.
2. SIMULATION AND MEASUREMENT
OF ASSEMBLY VARIATIONS AND
ASSOCIATED CHALLENGES
2.1. SIMULATION OF ASSEMBLY
VARIATION
Aspects of variation propagation through designed
tolerances in single- and multi-stage compliant assembly have been researched (Hu and Camelio, 2006, Loose, et al., 2010), where dimensional variation has been modelled using statistical and mathematical prediction techniques. A few approaches for analysing the effects of geometric tolerances have been commercialised and are extensively used today.
In addition to geometric tolerance, other
triggering factors for assembly variation are
material compliance, assembly sequences and
fastening techniques. Currently, all these factors are
well studied and can be understood using state-of-
the-art proprietary tools and PLM systems before
executing physical production. Various European
Union (EU) research initiatives such as VIVACE
WP 1.5.1 (3D functional Tolerancing for Helicopter
Rotor) (Brujic, et al., 2005), NGCW project
(Jamshidi, et al., 2008), offer important information
regarding the necessity of combining geometric
tolerance analysis with material compliance.
Material compliance effects along with designed
GD&T were simulated in another EU project,
ALCAS, where datum structures, component and
assembly level tolerances were interrogated for
identifying the key tolerances responsible for
affecting assembly characteristics (Muelaner, et al.,
2011).

Figure 1. Implementation framework for a new Hybrid Model Based and Physical Testing Design Verification Methodology
Research in identifying the magnitudes of sources
of variation in assembly has enabled design
engineers to specify realistic product key
characteristics (PKCs) while considering
technology specific (assembly, manufacturing and
measurement) key process variables (KPVs).
However, new emerging assembly technologies
such as flexible assembly jigs, and reconfigurable
tooling in globally dispersed manufacturing always
induce unknown process variables. The
conventional method to assemble a component
within a global assembly coordinate system utilises
holding fixtures and locators with known
coordinates and datum features (Balsamo, et al.,
1999). Today, fixtureless assembly operations are
possible due to measurement-assisted-assembly
(MAA) processes (Jamshidi, et al., 2010), in which
a component's placement can be accurately related to the global coordinate system using optical, non-contact and dynamically closed-loop measurement systems. Thus, modelling these state-of-the-art MAA processes becomes imperative for predicting assembly variability; without this capability, a simulation model may advocate assembly tolerances that are too tight and therefore expensive for manufacturers to produce.
MAA processes in conjunction with assembly
variation parameters have never been verified in a
coherent manner. Recently, assembly process
decomposition approaches have been reported
(Nagel and Liou, 2010) for configuring assembly
systems. Here the assembly process is decomposed
into modular auxiliary processes, guided by rule-
based patterns that define how different types of
assembly activities break down into auxiliary
processes including their temporal and logical
relationships. Such modular process decomposition
approaches can be adopted for modelling MAA
processes.
However, there is a lack of a modelling methodology to observe the collective and individual effects of the above mentioned factors on a complex aerospace assembly. A possible reason could be that each contributing factor has its own research depth and is thus studied separately using proprietary analysis tools in the supply chain. Hence, the majority of the analysis knowledge and its intricacy remain with suppliers (Tier 1 or 2) rather than customers. This knowledge has to be systematically presented, which would allow customers to understand the influence and possible covariances of the various contributors to the assembly variation.
modelling variation propagation in aerospace
assembly lack a mechanism to identify unknown
process variables due to the fact that they can only
model well-researched and standard assembly
processes and resulting dimensional variations.
Thus, systematic knowledge regarding new
assembly processes and technologies is required to
be able to identify and analyse process variables.
2.2. MEASUREMENT OF ASSEMBLY
VARIATION
In addition to the early product design and specification stage, assembly variations need to be simulated throughout the production phase to determine the effects of concessions and non-conformances issued on the individual components manufactured in the supply chain. Such manufacturing problems are dealt with in situ to absorb as much variation as possible without affecting the product's key characteristics.
systems and networks are deployed in the assembly
execution processes where components can be
measured before, during and after assembly
operations.
Similarly, resources such as assembly jigs, tooling, fixtures and material handling frames are designed and manufactured to realise the designed prototype. The positional, dimensional and geometric accuracy of the assembly is implied from the tooling; that is to say, if the tooling is correct and the components are positioned correctly within the tooling, then the assembly is correct. Measurement assisted processes ensure that designed tolerances are maintained. Measurement technology suppliers create their own software to complement their respective measurement systems, for example CAM2 (Faro), iSPACE (Nikon) and emScon (Leica).
systems from different manufacturers require
integration. Third party software developers, for
example SpatialAnalyzer (SA) by New River
Kinematics (NRK) can often accommodate more
system interfaces with a range of functionalities
such as measurement data acquisition, post-
measurement analysis, measurement network
uncertainty evaluation, CAD data comparison and
automation of metrology instruments and complex
measurement processes and/or analysis via a
scripting process.
In addition, a feedback loop that links assembly execution and production instances (through measurement systems) to the initial (product, process and resource) design is missing within current PLM systems. Thus, the context of corrective measures taken throughout the assembly stages remains detached from the initial assembly variation model. If manufacturing or assembly capability does not meet the design intent, the component/assembly variation data need to be utilised in the simulation model to determine out-of-spec constraints that may adversely affect the subsequent assembly. In order to achieve this, hybrid model based assembly variation and physical testing modules need to be incorporated in the PLM system to: (i) maintain the required tolerances within the tooling and the assembly process; (ii) preserve the information regarding corrective actions taken throughout the assembly stages; and (iii) illustrate the effects of manufacturing discrepancies at component and assembly level.
3. IMPLEMENTATION FRAMEWORK
FOR A NEW HYBRID MODEL BASED
AND PHYSICAL TESTING DESIGN
VERIFICATION METHODOLOGY
The goal of the framework (Figure 1) is to
integrate model-based analysis with physical
measurement plans and data in order to facilitate not
only early verification of complex designs but also
in-process assembly stages from the perspective of
satisfying key assembly criteria. The framework
provides a hybrid design and verification
mechanism to combine and revise digital model
based verification aspects with the measurement
results obtained throughout physical assembly
build-up processes. The framework firstly interrogates the initial design (product, process and resource) information to construct the verification model; this model is then validated and revised during physical assembly build-up by utilising in-process (component and sub-assembly level) measurement data. This hybrid approach is iterative in nature and consists of a set of analysis routines throughout the design and manufacture of the new aero-structure assembly. These analysis routines are illustrated in Figure 2 and are described below:
Two modules and their associated requirements (Figure 1), namely the digital model based requirements and the physical testing requirements of the proposed hybrid methodology, are specified in the framework in order to develop a cohesive environment where design, tooling and metrology information can be utilised to design, reconfigure and sequence prototype assembly processes.
Within the proposed framework, the first task is to assess the contribution of the designed tolerances towards dimensional uncertainty. It is proposed to adopt a Monte Carlo approach, in which the assembly structure is repeatedly analysed with changes to the key parameters that are likely in practice. This task uses a state-of-the-art, commercially available tolerance analysis package for decomposing geometric tolerances into positional vectors and identifying the axial and
planar deviation fields applicable to any points of
interest in the assembly structure. Thus, the effects
of designed geometric tolerances on the component
level can be understood, key tolerances can be
identified, and decisions can be made regarding the
manufacturing processes to adjust those key
tolerances.


Figure 2. The role of Measurement Assisted Assembly in integrating design of product and assembly system for aerospace
structures
The second task is to extend this analysis in order
to model typical combinations of fastenings (mainly
bolts and rivets) and investigate the effects that key
parameters have upon the assembly tolerances.
Finite element analysis (FEA) methods are used to
model and determine the characteristics of single
occurrences of such fastenings and investigate the
effects that parameters such as material properties
and applied forces and torques have upon the
resulting tolerances of the assembly. This investigation has also considered the effects of compliance within the individual components, with a view to improving assembly and reducing the tolerance levels. The key parameters that have significant effects are identified.
Process planning aspects such as the selection of the measurement system, the uncertainty of the measurement network, the identification of features to be measured and the positioning of measuring instruments are often not thoroughly dealt with in the design phase. As a consequence, unexpected geometric variations acquired while producing the prototype make project objectives difficult to achieve, subsequently resulting in delays. Within the proposed framework, component or assembly measurement processes are considered as a part of assembly build operations. Consequently, the third task is to revise the developed verification model by using physical measurement data. Thus, to be able to verify and revise the simulation model continuously, every measurement taken while manufacturing components or executing the assembly build-up has to be fed into the simulation model.
4. HYBRID MODEL BASED AND
PHYSICAL TESTING DESIGN
VERIFICATION METHODOLOGY
4.1. NEW AERO-STRUCTURE PROTOTYPE
DESIGN AND MANUFACTURE
The development of a new aerospace structure is depicted in Figure 2. It starts with the design of an initial implementation prototype that is tested for its ability to meet the functional requirements. Initially, overall prototype tolerances are designed on the basis of past experience and design specifications to derive the product key characteristics (PKCs). At this stage, datum structures are allocated using GD&T and are considered in designing assembly jigs, tooling and fixtures. This is then followed by specifying component level tolerances, which define the manufacturing key characteristics (MKCs). The MKCs represent manufacturing machine process parameters that can significantly affect PKCs. Often, for new manufacturing concepts, manufacturing process capability data is either non-existent or very hard to obtain (Maropoulos, et al., 2011). Thus, measurement assisted manufacturing/assembly processes (as shown in Figure 2) are inevitable in such situations to provide real time enhancement of process capability. The proposed framework aims to simulate these measurement assisted processes as a part of physical assembly execution. Thus, measurement features, measurement resources and the associated uncertainty can be planned or considered in the design phase. This provides a common platform to integrate design, tooling and metrology knowledge and innovation.
4.2. THE METHODOLOGY
The goal of this analysis is to simulate the effects of
component and assembly level tolerances on the
assembly variability by decomposing geometric
tolerances into positional vectors for identifying the
axial and planar deviation fields applicable to any
point of interest in the assembly structure. Thus, the
effects of designed geometric tolerances on the
component level can be understood, key tolerances
can be identified, and decisions can be taken
regarding the manufacturing processes to adjust
those key tolerances. 3DCS Analyst, a versatile and sophisticated tool allowing point or feature based Monte Carlo simulation, has been selected to carry out the assembly variation modelling. The following steps are taken to prepare the assembly variation model:
Step 1: Identification of assembly stages and
associated components / sub-assemblies required for
the tolerance analysis. Load the appropriate nominal
assemblies or components into the 3DCS Analyst
software and create the product assembly tree in
accordance with the actual assembly sequence and
hierarchy. This may mean that product and tooling
are grouped together into sub-assemblies.
Step 2: Create points on components to represent
the physical interface between components as
shown in Figure 3. If the component geometry is available, it is possible to dynamically link the points to geometric features and create feature points. It is also possible to create points in 3DCS (through a spreadsheet function) which represent physical measurements (measured coordinates) taken on the component. This functionality is exploited for revising the nominal variation model by consolidating measurement information in the variation simulation.
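As a minimal sketch of this spreadsheet route (our illustration; the file layout and point names are assumptions, not the 3DCS format), measured coordinates can be read in and substituted for their nominal counterparts before re-running the model:

    import csv

    def load_measured_points(path):
        # Read measured coordinates into {point_name: (x, y, z)};
        # a simple name,x,y,z CSV layout is assumed for illustration.
        points = {}
        with open(path, newline="") as f:
            for name, x, y, z in csv.reader(f):
                points[name] = (float(x), float(y), float(z))
        return points

    nominal = {"rear_spar_p1": (0.0, 0.0, 0.0),
               "rear_spar_p2": (500.0, 0.0, 0.0)}
    # Measured points (hypothetical file) override the nominal ones:
    # nominal.update(load_measured_points("rear_spar_measured.csv"))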


Figure 3. Assembly interface points defined in 3DCS Analyst
Step 3: Define tolerances on these points to represent component variability based on the component specifications. In fact, two values can be given for each tolerance: a range, representing a tolerance or a known manufacturing or measurement uncertainty; and an offset, representing a known error, such as might be available when assessing an actual build condition for a single assembly where measurement data exists for the component.
assembly tree to form an assembly. Each of these
individually manufactured components can have
assembly interfacing points with several different
tolerances as shown in Figure 4. It is the effect of
these tolerances in the assembly build philosophy
that makes assembly tolerance simulation studies
necessary.
Any point with a tolerance associated with it will deviate according to the distribution and limits of the tolerance. A linear tolerance is used to vary a point in one direction according to the specified distribution, where the direction is specified by a vector. Similarly, a circular tolerance is used to induce the variation of a point in a circular zone perpendicular to the specified vector. Tolerances that are specified correctly show the exact effect they have on an assembly when simulations are carried out.
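The two tolerance types can be mimicked in a few lines of NumPy (a sketch under our own assumptions, e.g. treating the plus/minus tolerance band as 3 sigma of a normal distribution, which is only one of the distributions such tools offer):

    import numpy as np

    rng = np.random.default_rng(0)

    def deviate_linear(point, direction, tol, n):
        # Vary a point along a unit vector; the +/- tol band is treated
        # as 3 sigma of a normal distribution (an assumption).
        p = np.asarray(point, float)
        d = np.asarray(direction, float)
        d = d / np.linalg.norm(d)
        return p + np.outer(rng.normal(0.0, tol / 3.0, n), d)

    def deviate_circular(point, normal, tol, n):
        # Vary a point uniformly within a circular zone of diameter tol,
        # perpendicular to the specified vector.
        p = np.asarray(point, float)
        v = np.asarray(normal, float)
        v = v / np.linalg.norm(v)
        a = np.cross(v, [1.0, 0.0, 0.0])
        if np.linalg.norm(a) < 1e-9:        # normal parallel to the x-axis
            a = np.cross(v, [0.0, 1.0, 0.0])
        a = a / np.linalg.norm(a)
        b = np.cross(v, a)                  # a and b span the zone plane
        r = (tol / 2.0) * np.sqrt(rng.uniform(0.0, 1.0, n))
        th = rng.uniform(0.0, 2.0 * np.pi, n)
        return p + np.outer(r * np.cos(th), a) + np.outer(r * np.sin(th), b)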
Step 4: Create individual measurements at each
toleranced point to understand the effects of
tolerances in a model. Measurements are the
measured dimensions of a part's features or the
dimensions between features in an assembly. These
measurements quantify the variation induced in the
desired areas of the model. The outputs of a
tolerance simulation model are based on these
measurements. Measurements covering a variety of geometric aspects that define the KCs, such as spatial coordinate, gap, deflection, thickness, flatness, etc., are composed as shown in Figure 4. This measurement planning is done in 3DCS Analyst in conjunction with SpatialAnalyzer, to execute physical measurement plans and generate the measurement information required for verifying and updating the variation model. Specific measurement process execution aspects considered in this hybrid
design verification methodology are discussed in
Step 7.


Figure 4. Variation analysis steps 4 and 5
Step 5: Create assembly moves to imitate the effects of assembly processes. Moves not only define how one part locates to another, but they can also be defined in a specific order to describe an assembly or manufacturing sequence. For instance, as shown in Figure 5, the spars are rested against various brackets (the Side stay, Gear rib and Trailing edge brackets shown in Figure 4) and hard stops (also called step gauges) mounted on the Main assembly jig. This requires the simulation model to simulate the bending effect caused by both resting elements (brackets and hard stops). When the Rear spar is assembled to all brackets, it appears to bend in the horizontal plane. Similarly, when the spar is pushed against the hard stops, it appears to bend in the vertical plane. Thus, using two moves, one for the horizontal and one for the vertical plane, can simulate the actual bending that could have resulted in the build process. The first step in the spar assembly move is to define the first three primary locator points in the bending plane. This is followed by defining two secondary plane location points and one tertiary plane location point. Later, additional primary locator points need to be defined for simulating the bending of the primary plane; this corresponds to the spar resting on a number of brackets or hard stops throughout its length. Similarly, after assembling the Rear spar and Front spar onto the Main assembly jig, the Rib posts are assembled on the respective spars using appropriate moves to represent the actual assembly sequence.
Step 6: Perform Monte Carlo statistical analyses by running the simulation. Ensure that the final analysis objectives at the specified locations are covered through the created measurements. The variation
simulation takes as inputs the nominal form of
components, the variability of components and the
assembly process. It then outputs the overall form
and variability of the assembly. This allows
sensitivity studies to show to what extent individual
component tolerances contribute to the overall
assembly Key Characteristics (KCs) and for
different build sequences to also be compared to
show how they affect the overall assembly
variability. Review the nominal, mean, 6 sigma and
percent out-of-spec of each measurement. Perform
High-Low-Mean (HLM) Sensitivity Analysis and
evaluate tolerance contributions for each
measurement as shown in Figure 5.
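A toy version of this analysis for a one-dimensional stack (our illustration, with made-up contributors and tolerances, not the project data) shows the reviewed quantities and a crude HLM-style ranking:

    import numpy as np

    rng = np.random.default_rng(1)

    # Three contributors to one measured gap; +/- tolerances taken as 3 sigma.
    tols = {"spar_locator": 0.15, "bracket": 0.10, "rib_post": 0.05}
    spec_lo, spec_hi, n = -0.3, 0.3, 100_000

    samples = sum(rng.normal(0.0, t / 3.0, n) for t in tols.values())
    print(f"mean = {samples.mean():.4f} mm")
    print(f"6 sigma = {6 * samples.std():.3f} mm")
    oos = 100.0 * np.mean((samples < spec_lo) | (samples > spec_hi))
    print(f"out of spec = {oos:.2f} %")

    # High-Low-Mean style ranking: swing of the measurement when one
    # contributor sits at its high/low limit and the others stay nominal.
    for name, t in sorted(tols.items(), key=lambda kv: -kv[1]):
        print(f"{name}: swing = {2 * t:.2f} mm")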

Figure 5. Variation analysis steps 5 and 6
Step 7: The physical component or assembly
level measurement information is generated in this
step (Figure 6), which is then fed to the initial
variation model. Assembly variation and
uncertainty propagation begin with the
commissioning of the fixturing. The combined tolerance of the fixture, location pins, slips and facility tooling must be less than the assembly tolerances, ideally below 10%, although this is rarely possible. Fixtures can have global tolerances of around 0.15 mm over 15-30 m, consuming a large proportion of the assembly tolerance budget. Additionally, the fixture must be commissioned with an accuracy that is an order of magnitude better than the fixture's build tolerance; again, due to the fixture's scale and the fabrication environment, the achievable measurement uncertainty is more like 30% of the fixture's tolerance budget. The measurement uncertainty and the tolerances associated with building the fixture further drive tight tolerances during the design process.
The environmental instabilities present during production affect the measurement systems, so that the performance is often less than the instrument manufacturer's stated specification, especially with laser based instruments. The required tolerances are achieved by networking instruments together; measuring points from multiple positions enables optimisation algorithms to be employed, which reduces the associated point uncertainties to a level of around 50 µm (at 3σ). This high accuracy constellation of points becomes a datum structure for subsequent measurements, both for verification and build purposes. Networking instruments also overcomes the line-of-sight issues associated with complex fixture geometries.
Measurement networks are only appropriate for static point measurement; measurement assisted assembly normally requires dynamic measurement instruments, and dynamic measurements have a higher associated measurement uncertainty. However, the measurement uncertainty must be less than the tolerance to be achieved. During variation analysis, using the hybrid model described in Figure 1, measurement uncertainty can be used where traditionally nominal build tolerances are input. This method reduces the model variation, as the uncertainty is inherently less than the tolerance. The measurement process, instrument placement and measurement uncertainty can also be simulated to model variation using MAA methods.
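The variance shrinkage this brings can be illustrated with a two-contributor toy stack (the numbers are placeholders: a 0.5 mm design tolerance replaced by a 0.05 mm measurement uncertainty, both treated as 3-sigma bands):

    import numpy as np

    rng = np.random.default_rng(2)
    n = 100_000

    # Contributor 1 driven by its design tolerance vs. by a measured value.
    design_only = rng.normal(0, 0.5 / 3, n) + rng.normal(0, 0.3 / 3, n)
    with_measurement = rng.normal(0, 0.05 / 3, n) + rng.normal(0, 0.3 / 3, n)

    print(f"design-only 6 sigma:  {6 * design_only.std():.3f} mm")
    print(f"with measured input:  {6 * with_measurement.std():.3f} mm")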


Figure 6. Physical measurement process execution and measurement data generation
Step 8: Define assembly processes for predicting assembly variation. This task exploits the TAA (Tolerance Analysis of Deformable Assembly) workbench of CATIA to analyse the internal stress and relative strain of each assembly component resulting from planned assembly operations such as clamping, bolting, riveting, welding, unclamping and force application. This also involves simulation of the individual component arranged on the assembly tooling for measurement purposes. Any compliance resulting from fixturing prior to the measurement process can be investigated and considered for identifying the stress and strain fields induced in the component when it is assembled in the final assembly, as shown in Figure 7. The analysis conceives a comprehensive FEA simulation strategy for determining the effects of compliance, fastening techniques and the associated tolerances.


Figure 7. Simulating effects of assembly processes, in-process deviations and concessions issued on the components
5. IMPLEMENTATION AND TESTING VIA
A CASE STUDY
The study was part of a major transnational
aerospace project focusing on advanced aircraft
structures. The specific objective of the case study
included understanding of how metrology data from
the wingbox build process could be used to; (i)
determine whether discrete components meet their
GD&T specifications, (ii) plan the measurement
processes for verification, and (iii) predict the final
assembly dimensions.
This case study used state of the art software and
systems to simulate and measure assembly
variation. Dimensional tolerances were considered
as a primary source of assembly variation at the
initial design stage. Required tolerances in the
simulation were selected form part specifications
and condition of supply (CoS) documents. A hybrid
approach has been followed in the analysis, where a
simulation model was developed to address the
model based verification functions (as shown in
Figure 1). The numerous assembly processes
involved in building the pilot wing box were
simulated. The variation simulation requires the
nominal form of components, the variability of
components and the assembly process as inputs.
Subsequently, the output represents the overall form
and variability of the complete assembly. This
allows sensitivity studies to show to what extent
individual component GD&T specifications
contribute to the overall variability of PKCs. Figure
8 shows this variability across a rear spar lower
web, as the blue zone. Different build sequences can
be compared to show how they affect the overall
assembly variability, and this can be represented as
shown in Figure 8.


Figure 8. The effect of utilising measurement data for
reducing simulation variability for complex assemblies
The second phase of the case study involved
carrying out measurements at the assembly jig
throughout the assembly stages. The measurement
data of the wing box Product Key Characteristics (PKCs)
was then compared with the simulation results.
When using actual component measurements and
re-running the assembly process simulation model,
there is a clear improvement in the variability
prediction of PKCs. Figure 8 demonstrates this
radical improvement in the assembly variability
simulation of the rear spar lower web, as the
predicted variability is now confined in the red zone
across the length of the spar.
The research has developed and documented new
procedures for: (i) incorporating measurement data
into the simulation models of complex assemblies,
and (ii) utilising such data for modelling variability
of the overall assembly process. This measurement
data includes component level measurements and
the measurements of key features after component
assembly. These measurements can be used in the
simulation model to both validate the operation of
the model and replace simulation predictions in a
flexible way.
When measurement data is incorporated, the simulation variability arises from the measurement/instrument uncertainty, which is significantly smaller than the specified tolerances, thereby reducing the simulated assembly variation. Hence, these procedures can be utilised in the future for accurately predicting the assembly variability of subsequent assembly build stages and new build philosophies. Improving the accuracy of model based verification would allow the early detection of assembly non-conformance, such as the early prediction of steps and gaps during assembly, hence enabling the planning of predictive shimming and machining.
6. CONCLUSION
The proposed methodology provides a cohesive design verification environment for unifying fragmented state-of-the-art aero-structure manufacturing and assembly processes. The methodology conceives an iterative environment which, firstly, simulates component and assembly level variation due to designed tolerances, compliance and the involved assembly processes; secondly, considers the effects of measurement assisted assembly technologies in making subsequent assembly planning decisions; and thirdly, provides assembly planning options depending upon in-process assembly measurement data. Effectively, considerable learning is achieved using this design methodology, which can be utilised for developing and verifying next-generation aircraft building philosophies.
REFERENCES
Balsamo A, Di Ciommo M, Mugno R, Rebaglia BI,
Ricci E and Grella R, "Evaluation of CMM
uncertainty through Monte Carlo simulations", CIRP
Annals - Manufacturing Technology, 1999, pp.425-428.
Bernard A and Tichkiewitch S, "Methods and tools for
effective knowledge life-cycle-management",
Springer, Berlin, 2008.
Brujic D, Ristic M, Mattone M, Maggiore P and
Fernández-Castañeda J, "Progress Report on New
Modelling Techniques", VIVACE
2.3/5/IMPERIAL/T/05503-1.1, 2005.
Chryssolouris G, Kotselis J, Koutzampoikidis P, Zannis
S and Mourtzis D, "Dimensional accuracy modelling
of stereolithography parts", CIRP Journal of
Manufacturing Systems, Vol. 30, No.2, 2000, pp.170-
174.
Chryssolouris G, Fassois S and Vasileiou E, "Data
sampling technique (DST) for measuring surface
waving", International Journal of Production
Research, Vol. 40, No.1, 2002, pp.165-177.
Hu SJ and Camelio J, "Modeling and Control of
Compliant Assembly Systems", CIRP Annals -
Manufacturing Technology, Vol. 55, No.1, 2006,
pp.19-22.
Jamshidi J, Zheng M, Cai B, Dai D, Wang Z, Ferri C,
Muelaner JE, Robinson D and Maropoulos PG,
"NGCW MAA SoA Report No. 7: SMART
Metrology", Technical Report, Airbus:
HIVOLB1356A057, 2008.
Jamshidi J, Kayani A, Iravani P, Maropoulos PG and
Summers MD, "Manufacturing and assembly
automation by integrated metrology systems for
aircraft wing fabrication", Proceedings of the
Institution of Mechanical Engineers, Part B: Journal
of Engineering Manufacture, Vol. 224, No.1, 2010,
pp.25-36.
Loose JP, Zhou Q, Zhou SY and Ceglarek D,
"Integrating GD&T into dimensional variation models
for multistage machining processes", International
Journal of Production Research, Vol. 48, No.11,
2010, pp.3129-3149.
Maropoulos PG and Ceglarek D, "Design verification
and validation in product lifecycle", CIRP Annals -
Manufacturing Technology, Vol. 59, No.2, 2010,
pp.740-759.
Maropoulos PG, Vichare P, Martin O, Muelaner J,
Summers MD and Kayani A, "Early design
verification of complex assembly variability using a
Hybrid - Model Based and Physical Testing -
Methodology", CIRP Annals - Manufacturing
Technology, Vol. 60, No.1, 2011, pp.207-210.
Muelaner J, Vichare P, Martin O and Maropoulos PG,
"ALCAS Wing Build Tolerance Evaluation", 2011.
Nagel JKS and Liou FW, "Designing a Modular Rapid
Manufacturing Process", Journal of Manufacturing
Science and Engineering-Transactions of the Asme,
Vol. 132, No.6, 2010, pp.061006.


EVALUATION OF GEOMETRICAL UNCERTAINTY FACTORS DURING
INTEGRATED UTILIZATION OF REVERSE ENGINEERING AND RAPID
PROTOTYPING TECHNOLOGIES
Stamatios Polydoras
Mechanical Engineer, PhD Cand., NTUA
Mechanical Design & Control Systems
Section, School of Mechanical Engineering,
National Technical University of Athens
(NTUA), Greece
polsntua@mail.ntua.gr

George Kaisarlis
Mechanical Engineer, PhD, NTUA
Mechanical Design & Control Systems
Section, School of Mechanical Engineering,
National Technical University of Athens
(NTUA), Greece
gkaiss@central.ntua.gr


Christopher Provatidis
Mechanical Engineer, Professor NTUA,
Mechanical Design & Control Systems
Section, School of Mechanical
Engineering, NTUA, Greece
cprovat@central.ntua.gr

ABSTRACT
Rapid Prototyping Technologies (RPTs) quickly accomplish the realization of concepts related to
new product designs. The integration of RPTs with Reverse Engineering (RE) is nowadays widely
used in a range of applications, e.g. manufacturing of spare parts, digital reconstruction and
fabrication of anatomic structures. For certain applications, the geometrical accuracy of the RE
RP-produced part is critical. Nevertheless, due to inevitable uncertainties introduced in every step of
the process, the final component exhibits a variety of geometrical deviations. The paper indicates
that despite the advancement in the combined use of digital RE RP technologies for Rapid
Manufacturing (RM) purposes, there are still issues to be considered in application-level before
fully achieving the geometrical accuracy potential of RP and RE. Focusing on the evaluation of the
geometrical uncertainties during the RP stage of mechanical components RM process, affecting
parameters are identified and to a certain extent quantified through the use of an illustrative case
study.
KEYWORDS
Rapid Prototyping, Rapid Manufacturing, Reverse Engineering, Dimensional & Geometrical
Accuracy

1. INTRODUCTION TO RE & RP
TECHNOLOGIES
Reverse Engineering (RE) and Rapid Prototyping (RP) technologies have both grown considerably during the last decades and are nowadays widely utilised in product design, digital manufacturing, digital reconstruction and many other applications in the technical world (Chen and Ct, 1997; Shi et al, 2000).
RE is the in depth study and analysis of an
existing product or model in order to recreate the
information, engineering decisions and
specifications generated during the original design
(ElMaraghy, 1998). In RE, existing mechanical
components, for which technical documentation is
not available or accessible, or does not exist, have to
be reconstructed and manufactured through a
variety of techniques, e.g. by the use of contact or
non-contact Coordinate Measuring Machines
(CMM) and dedicated software. RE techniques are
obviously needed only when engineering drawings
and other technical data are not available.
RP, also described as Layer Manufacturing,
consists of 3D digital data utilization for object
fabrication in successive layers and is performed in
many ways, with several (different in principle) technologies and systems developed over the years (Pham, 2001; Salonitis et al, 2003; Yongnian
et al, 2009). The most prevailing representatives of
RP today are Stereolithography (SL), Fused
Deposition Modeling (FDM), Selective Laser
Sintering (SLS), 3D Printing and UV Curing (by
Multi-jetting, DLP resin flashing and droplet
deposition), (Grenda, 2010).
Until recently, almost all RP machines were expensive to buy (costing several tens to hundreds of thousands of US$) and also to run, especially the laser based systems (Wohlers and Grimm, 2002). In the last 5 years, however, there has been a breakthrough, with small, office-friendly, desktop scaled RP systems, often referred to as "3D Modelers" or "3D Printers", introduced to the market by almost all of the major RP vendors (3D Systems, Stratasys and others) for less than 20,000 US$ (Grenda, 2011). The Dimension uPrint machine, hosted in NTUA's RP-RE Laboratory since 2010 and used for the needs of the present work, is a fine example of a small 3D modeler, Figure-1. Recently it has also been offered by Hewlett Packard as an extension to their product range of conventional printers.


Figure 1 Dimension (Stratasys) uPrint 3d Modeler

The spread of 3D Modelers has also led to a boost in use and to a widening of the range of applications for RP, making it more accessible to small enterprises, freelance professionals, even students and hobbyists. Within this extended applicability of RP, integrated RE-RP utilisation is now practised more often than before, mainly for the sake of artefact reproductions (spare parts, organic, medical and cultural). In many of those cases the dimensional and geometrical accuracy of the reproduced object might be important, or even critical; therefore there is a need for the identification, allocation and evaluation of the factors that introduce uncertainty to the overall result, in order for the best possible quality to be obtained.
The paper reports the preliminary results of an
on-going study that concerns the evaluation of the
geometrical uncertainties during the integrated
utilization of RE and RP technologies for the Rapid
Manufacturing (RM) of mechanical components. In
the scope of the paper, the parameters involved in several stages of the RP process that impact the geometrical uncertainty of the fabricated component are identified and, to a certain extent, quantified through the use of a case study. Finally, future research directions are highlighted.
2. INTEGRATED RE RP APPROACH:
UNCERTAINTY FACTORS
In the integrated RE-RP approach, the contribution to inaccuracy, or to uncertainty regarding the geometrical deviations of the result, is twofold. One part comes from the inaccuracies of the RE process providing the geometry for fabrication, and the other part comes from the RP process followed, according to the RP technology and system selected or available to perform the build. It should be noted that, depending on the targeted application, the overall accuracy achieved may range from unimportant (e.g. digitizing and reproducing a concept model) to critical (e.g. when an assembly featuring several dimensional and geometrical fits is reverse engineered). In any case, it is very useful to have a systematic approach available in order to keep inaccuracies under the best possible control, regardless of the equipment available.
Accuracy of measurement and/or digitization, as
well as the overall uncertainty of the produced data
is an issue of major concern in RE (Kaisarlis et al,
2006; 2007). The RE objective of remanufacturing a needed mechanical component, which has to fit and perform well in an existing assembly and, moreover, has to observe the originally assigned functional characteristics of the product, is rather delicate. In order to achieve that, a broad range of technical specifications of the RE component, such as material specifications, heat treatment, surface treatment, surface finish, shape and size, together with their relevant accuracy requirements, have to be assessed. Having a significant impact on its manufacturing cost, assemblability and performance, the assignment of the dimensional and geometrical accuracy specifications of the reverse engineered component is one of the most critical RE tasks. The
integrated RE-RP approach presented in the paper is
based on the methodology for the designation of
geometric and dimensional tolerances that match, as
closely as possible, to the original (yet unknown)
dimensional and geometrical accuracy
specifications, published by Kaisarlis et al (2006;
2007). In RE such accuracy specifications for
component reconstruction have to be reestablished,
one way or the other, practically from scratch. RE
tolerancing becomes even more sophisticated in
case that CMM data and a few or just only one of
the original components to be reversibly engineered
are within reach. Moreover, if operational use has
led to considerable wear/damage, then the
complexity of the problem increases considerably.
Although RE has an apparently significant role to
play in mechanical maintenance and plant
equipment availability, RE-accuracy and
tolerancing issues do not seem to have been, to this
date, adequately addressed. An approach to tackle
with this task in a systematic way that concerns the
identification and quantification of the full range of
RE geometrical uncertainty factors (equipment,
software and process-related) is currently under
development by the authors.
With RP, even in its early days in the 1990s and
later on, the accuracy of the parts produced has
always been a question, even a matter of debate
among RP vendors. It has been expressed, mainly by
the vendors, in absolute values of deviating
dimensions, as a percentage of the dimension, or
even in terms closer to actual paper printing, such as
DPI, often causing confusion to potential users.
Consequently, several research and benchmarking
works have been carried out to characterise and
compare methods and machines in terms of their
accuracy performance, e.g. Ippolito et al (1995),
Shellabear (1999). They all agree that the majority
of RP methods and systems are by nature inferior to
conventional machining processes in terms of
accuracy, in most cases being capable of achieving
an accuracy of a few tenths of a millimetre. They
also show that several influencing factors are
introduced throughout the steps of the complete RP
process chain that add uncertainty and have to be
considered, some of them often counteracting and
neutralising others. The main influencing factors
related to RP geometrical uncertainty, as
acknowledged and proposed by the authors in the
presented work, are described below.
2.1 TESSELLATION - STL QUALITY AND
ERRORS
STL files are the de facto RP format. An STL file is
practically a mesh of triangles completely
surrounding the boundary surfaces of the CAD
model to be fabricated (Kumar and Dutta, 1997).
In some cases STLs are formed directly from point
clouds from CMM digitization. Clearly, STLs are
approximations of the real geometries and they
inherently introduce geometric uncertainty,
especially to curved, free-form and/or complex
features and regions of a part, as the triangles
deviate more or less from the actual part's surface.
Within a CAD environment there is normally
adequate control over the generated STL quality,
through the so-called 'facet deviation - chordal
tolerance' and 'minimum triangle angle' parameters.
Nevertheless, the user must always be conservative
and balanced at STL generation, as very small
values of these parameters can lead to very large
STL files, redundant and not easily processed, while
on the opposite end the desired detail and accuracy
can be lost with very large values. Further, errors
like inverted normals, holes and triangle overlaps
can also negatively influence the final accuracy and
must be kept under control. Figure-2 graphically
depicts the tessellation error.


Figure 2 Tessellation Error
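To make the role of the chordal tolerance concrete, the following short Python sketch (illustrative only, not part of the original study) estimates, from elementary circle geometry, the minimum number of facets needed to keep the sagitta of each chord within tolerance on a circular cross-section. The 3,75 mm radius corresponds to the 7,5 mm nominal diameter of the case study parts, and the tolerance values are the 'coarse' and 'fine' presets of Table-1.

    import math

    def facets_for_circle(radius_mm: float, chord_tol_mm: float) -> int:
        """Minimum number of straight facets so that the sagitta (max
        distance between chord and arc) stays within the chordal
        tolerance: s = R * (1 - cos(theta/2)) <= tol."""
        half_angle = math.acos(max(-1.0, 1.0 - chord_tol_mm / radius_mm))
        theta = 2.0 * half_angle              # max angle one facet may subtend
        return max(3, math.ceil(2.0 * math.pi / theta))

    # Example: a 7,5 mm diameter cylinder (R = 3,75 mm)
    for tol in (0.035, 0.013):                # 'coarse' and 'fine' facet deviations
        print(tol, facets_for_circle(3.75, tol))
        # 0.035 -> 23 facets, 0.013 -> 38 facets

The relation makes explicit why a coarser tolerance yields visibly flattened cylindrical features, as in Figure-6.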

2.2 STL MANIPULATION ERRORS
In order to reduce the redundancy of STL files,
correct or eliminate STL errors and optimize build
parameters and strategies, it is quite common to
utilise dedicated RP software in an independent step
between CAD and the RP machine, e.g. the Magics
RP software by Materialise. Using such RP
software, several optimizations concerning
orientation, support structures, part nesting etc. can
be performed. Mesh correction and STL triangle
number reduction, in cases where redundancy and
extreme detail are diagnosed, are also possible. Of
course, errors can be generated throughout this step,
such as detail and geometry deterioration and even
feature elimination.


2.3. RP SYSTEM RELATED ERRORS
Systemic, process or machine-related errors are
usually a very dominant factor in every RP system
and are the most likely ones to have a standardized
and repetitive behaviour. They can therefore be
cautiously diagnosed, investigated, registered and,
under specific conditions, even compensated
(Kechagias et al, 1999). They can further be placed
under four main categories:
2.3.1 Driving, positioning and Laser beam
or deposition thickness variations
In the case of laser-based or deposition-based RP
technologies and machines, such as the FDM uPrint
examined in the present work, variation of the
thickness of the beam or of the deposited material
along the paths of each layer of the part may occur,
which - regarding the inner and outer borderline
paths of the part in a layer - induces errors, although
compensation might already have been applied by
the machine's system software. Depending on the
form of each feature on the part, accuracy can vary
among different form features of the same part.
Driving and positioning of the X-Y and Z axes of
RP machines (often by stepper or servo motors)
during the build process are also subject to the
resolution and repeatability limitations of the
embedded hardware.
2.3.2 Orientation and Layer Thickness
In general, all RP technologies show a discrete
difference between the XY-plane and the Z-direction
of the build, such that parts cannot be characterized
as uniform. Thermomechanical phenomena in
almost every RP technology differ between the X-Y
plane and the Z-axis. Along the Z-axis, swelling or
shrinking could occur at a different scale than in
the XY plane of the layers, and the stair-stepping
phenomenon is always present (Polydoras and
Sfantsikopoulos, 2001). Finally, because of the
fixed layer thickness of a build in an RP machine,
Z-oriented dimensions of part features are always
fitted by the system to their closest value that is an
exact multiple of the layer thickness used by the
specific RP machine, with the residue recognized as
another inaccuracy. This is often described as the
Z-axis quantisation phenomenon.
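Since this residue is predictable, it can be estimated in advance. The minimal Python sketch below returns the two admissible layer multiples bracketing a nominal Z-dimension; the 0,254 mm layer thickness is an assumption (a typical uPrint FDM layer height, not stated in the text), but it is consistent with the adjusted nominals reported in Section 4.4.

    def z_quantize(nominal_mm: float, layer_mm: float = 0.254):
        """Z-oriented dimensions are realised as whole numbers of layers;
        return the two admissible multiples bracketing the nominal and
        the residue each leaves. The 0.254 mm default is an assumed,
        typical FDM uPrint layer thickness."""
        n = int(nominal_mm // layer_mm)
        lower, upper = n * layer_mm, (n + 1) * layer_mm
        return ((round(lower, 3), round(nominal_mm - lower, 3)),
                (round(upper, 3), round(upper - nominal_mm, 3)))

    # A 7,5 mm Z-dimension can only be built as 29 or 30 layers:
    print(z_quantize(7.5))   # ((7.366, 0.134), (7.62, 0.12))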
2.3.3 Filling Pattern and Supports
During the solidification or deposition that occurs in
each layer of an RP build, the inner and outer
boundary sections of the part are shaped first.
Further, in order for the final part to be dense and
stable, the inner material side of the part also has to
be solidified or filled with deposited material. At the
same time, for proper support of the build,
supporting material is placed around the part in
critical, support-needing geometries. In many RP
systems, like SLA or FDM, there is the option to
choose between e.g. fully dense, honeycomb-like
and sparse narrow shaping for the inner material, as
well as for the supports. Depending on the desired
material economy, part strength and application of
the prototype, the way parts and supports are built
internally affects their shrinkage and dimensional-
geometrical stability.
2.3.4 Material Behaviour
The different raw materials used in RP machines,
apart from their differences in chemical
composition and thermomechanical properties, also
undergo different processes in the different RP
technologies. Thus, photopolymeric resins undergo
solidification via photopolymerisation in SL and
UV-curing systems, whereas cord-shaped ABS
materials undergo two shifts between the solid and
liquid phases, through thermal melting and
subsequent re-solidification, in the FDM process.
Just after part completion, some technologies and
materials might also require further secondary
treatment or finishing of their parts (SLA, FDM,
UV curing, SLS, LOM etc.). It is obvious that all
these phenomena and process steps, coupled with
the contact of parts with solvents, water and air
humidity, and combined with thermal shrinkage
during cooling, ensure that accuracy and
dimensional stability will also be affected to some
extent.
3. PROPOSED APPROACH FOR
GEOMETRICAL UNCERTAINTIES IN
INTEGRATED RE-RP PROCESSES
The whole range of uncertainty influencing factors
described in the above sections ought to be
investigated by integrated RE-RP practitioners
before they are capable of fully exploiting the
geometrical accuracy potential of the technologies
in their applications. This can be achieved by
performing Design of Experiments (DOE) and
Analysis of Variance (ANOVA) on all the above
factors and their variables, in order to conclude
which ones really affect the whole process and by
how much, as sketched below. Nevertheless, this
analytical approach is usually a major, time-
consuming and considerably effort-intensive task,
difficult to undertake in everyday practice.
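For illustration only, a minimal one-way ANOVA in Python (SciPy) is sketched below on hypothetical deviation data; a real study would apply a full factorial DOE to the CMM measurements of Section 4.

    from scipy import stats

    # Hypothetical deviation measurements (mm) of one feature, grouped
    # by a candidate factor (e.g. the STL quality preset); the values
    # are invented, not taken from the paper's measurement set.
    coarse = [0.28, 0.31, 0.27, 0.30]
    fine   = [0.22, 0.20, 0.23, 0.21]

    f_stat, p_value = stats.f_oneway(coarse, fine)
    if p_value < 0.05:
        print(f"STL quality is a significant factor (p={p_value:.3f})")
    else:
        print(f"Effect not significant (p={p_value:.3f}); factor may be omitted")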
Another approach, as proposed by the authors in
the present work, is for RE-RP practitioners to focus
on pilot parts very similar to their regularly
occurring applications and to perform several
complete or partial test runs of the RE-RP process,
carefully selecting and altering, in between runs,
specific obviously or probably influential
parameters of the process. After assessing and
evaluating each part produced, in terms of accuracy
and geometrical uncertainty, and comparing the
results with those from other parts of the same or
other runs, factors of negligible effect can be
omitted and the remaining strong factors can be
categorized according to their magnitude of
influence and to their relevance - or not - to part
geometries and system parameters. This way, in the
end, systematic and characteristic uncertainty
influencing factors can be compensated in future
parts for specific form features of interest, and the
rest, characterised as erratic and non-manageable,
define the overall accuracy performance of the
integrated process. The proposed approach is tested
and explained in a case study within the following
section of the paper, before conclusions are drawn.
4. CASE STUDY
For the case study, integrated RE-RP was attempted
through the use of a direct computer controlled
Mistral 070705 (Brown & Sharpe-DEA) CMM with
an ISO 10360-2 maximum permissible error of
3,5 (μm) + L(mm)/250, using PC-DMIS v.4.2
(Wilcox Associates) measurement software, and a
Dimension uPrint 3D Modeler, a typical small-scale,
entry-level RP system very likely to be met in
everyday applications.
As target parts for the process, a working couple
of mechanical components was selected, sharing
three different fits between them: a typical
cylindrical peg-hole fit, a circumferential radial fit
over an arc (section of a circle) and a prismatic
(distance) fit, in terms of the arc feature's width.
Apart from the working ones, several other readily
measurable cylindrical features were hosted on each
part, with RE tolerances prescribed to some of
them. Hereafter, these parts will be named
'embolo' and 'folia'.

4.1 RE PROCESS STEPS
Typically, the RE process encompasses several
stages, including component digitization using
contact or non-contact CMMs, refinement of the
acquired cloud of points and 3D-CAD surface/solid
modelling. In the scope of this study, the above-
mentioned critical fits were initially recognized and
the corresponding components' form features were
identified. CMM measurements were performed on
four intact mating pairs of 'embolo' and 'folia'
components. In the context of feature-based RE
(Thompson et al, 1999), the CMM measurements
were focused on the dimensional evaluation of the
critical features. Therefore, RE tasks such as surface
digitization, curve fitting on clouds of points, etc.
were not considered for the case study components.
The establishment of nominal dimensions and ISO
standard fits for the critical features was then
pursued by the application of the TORE (TOlerances
for Reverse Engineering) methodology (Kaisarlis et
al, 2006; 2007), Figure-3 and Figure-4.


Figure 3 ISO-E Views of the 'embolo' component


Figure 4 ISO-E Views of the 'folia' component


Figure 5 Parts 'embolo' & 'folia' working together

4.2 PREPARATION FOR RP
After the completion of the RE steps, full 3D
nominal models of 'embolo' and 'folia' were
available for STL extraction and RP fabrication. 2D
drawings of the parts are given in Figure-3 and
Figure-4. A rendered 3D representation of the
working pair is given in Figure-5.
Within the 3D CAD software Solidworks 2010,
two variations of STL files with different accuracy
levels of approximation were extracted for each
part. They were made with the automatically
regulated values of 'facet deviation' and 'minimum
triangle angle' of the CAD software, for the 'coarse'
and 'fine' approximation presets. Moreover, the
triangle reduction feature of the Magics RP software
by Materialise was used on the 'fine' version of the
STLs, with a setting of 0,2 mm minimum feature
detail and 30° triangle angle, to produce a third,
relaxed version of 'fine' and to check for possible
accuracy declines caused by this interim RP
software step. Details of the three STL versions
used are given in Table-1.
Table 1 - STL variations of 'embolo' and 'folia'

STL file       | Triangles | Kb | Facet Dev (mm) | Min Angle (deg)
emb coarse     | 438       | 22 | 0,035          | 30
emb fine tr rd | 648       | 32 | -              | -
emb fine       | 728       | 36 | 0,013          | 10
fol coarse     | 662       | 33 | 0,040          | 30
fol fine tr rd | 826       | 41 | -              | -
fol fine       | 1144      | 56 | 0,016          | 10

The three STL variations of 'embolo' and 'folia'
are also graphically shown in Figure-6, where their
difference in the meshing density of the cylindrical
features is obvious.


Figure 6 Parts 'embolo' & 'folia' STLs

4.3 RP PROCESS STEPS
The three STL files of 'embolo' and 'folia', before
their build on NTUA's Dimension uPrint 3D
Modeler, were processed with Dimension's
operating software CATALYST v.4.2, in order for
the orientation, placement, fill style and support
material method to be selected. The authors decided
that (i) all cylindrical features to be examined would
be placed parallel to the X-Y plane, (ii) parts would
be oriented for minimum support material, (iii) the
low approximation accuracy 'coarse' STLs would be
built both with 'sparse - low density' and 'solid'
fills to investigate the difference, the triangle-
reduced 'fine' STLs with 'sparse - high density' fill
and the 'fine' STLs with 'solid' fill, and (iv) all
supports would be built with the automated
'SMART' setting, which is the most common
choice. The four different pairs of 'embolo' and
'folia' parts that were actually built in the case
study are summarized in Table-2.
Table 2 - Parts 'embolo' and 'folia' built

Part   | No | Description           | Fill Pattern
embolo | 1  | coarse                | Sparse - Low density
embolo | 2  | coarse                | Solid
embolo | 3  | fine triangle reduced | Sparse - High density
embolo | 4  | fine                  | Solid
folia  | 1  | coarse                | Sparse - Low density
folia  | 2  | coarse                | Solid
folia  | 3  | fine triangle reduced | Sparse - High density
folia  | 4  | fine                  | Solid




Figure 7 Different fill patterns on 'embolo'



A graphic example of the 'sparse - low density',
'sparse - high density' and 'solid' fill patterns, in the
area of the arc cylinder section of 'embolo', is given
in Figure-7. All parts were symmetrically packed
and evenly placed in the build area, and fabrication
was started on NTUA's uPrint machine, running the
latest Version 9.1 Build 3550 System Software
(released May 2011). The parts were completed in
less than two hours.
After removing the build platform, a first set of
CMM measurements on the accessible cylindrical
features was performed on all parts, before the
ultrasonic alkali solution support removal process
step, in order to acquire some intermediate accuracy
results for comparison with the finished parts after
support removal. A picture of the CMM
measurements on the raw parts is provided in
Figure-8. The parts were then marked and placed in
an alkali solution ultrasonic cleaning tank for
approximately 3 hours, for support removal. The
fully cleaned parts were subsequently measured
again analytically with the CMM machine, on all
their critical features and dimensions of interest.


Figure 8 Raw Parts measured with the CMM

4.4 MEASUREMENT RESULTS
The critical features that are associated with the
assembly fits were CMM measured on the finished
parts. They are described and named in Table-3.
Table 3 - Features measured on 'embolo' and 'folia'

Part   | Name | Description                           | Nominal (mm) | Fitted
embolo | A    | External Cylinder                     | 7,5          | -
embolo | B    | External Cylinder                     | 4            | to C
embolo | C    | External Cylinder Arc Sector          | 22           | to D
embolo | D    | Cylinder Arc Sectors Width (external) | 7,5          | to E
folia  | A    | Internal Cylinder                     | 2,7          | -
folia  | B    | Internal Cylinder                     | 5            | -
folia  | C    | Internal Cylinder                     | 4            | to B
folia  | D    | Internal Cylinder Arc Sector          | 22           | to C
folia  | E    | Cylinder Arc Sectors Width (internal) | 7,5          | to D

In all the measured cylindrical features a significant
form error was found (deviation from the form of a
perfect circle), ranging up to 0,2 mm with an
average of approx. 0,1 mm in external features, and
ranging up to 0,45 mm with an average of approx.
0,15 mm in internal ones. In some of them it is even
visible to the naked eye on the RP parts, Figure-9.


Figure 9 Form Error on RP cylindrical features

Therefore, it was decided that, apart from the CMM-
measured deviations of the dimensions of the
cylindrical features, calculated by the least-squares
fitting method, an extra correction value equal to
half the form error of the corresponding feature
would be added by the authors to the measured
deviations, thus leading to the 'worst' deviation
found in each measured feature. In this way, a more
realistic approach to the functional uncertainty of
the features would be obtained, since the test parts
would primarily be tested for fit in accordance with
the maximum material boundary conditions
(Maximum Material bore - Minimum Material
shaft) as standardized by ASME Y14.5M (2009).
The same does not apply to dimensions D and E,
which are linear widths. Nevertheless, for these
width dimensions a Z-orientation quantization of the
dimension applies, which practically alters the
nominal values from 7,5 mm to 7,366 mm for D and
to 7,62 mm for E, as verified with the help of the
CATALYST 4.2 uPrint operating software.
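A plausible reading of this correction rule, sketched below in Python under the stated maximum-material assumption (external features shifted towards larger size, internal towards smaller); the numbers are illustrative, not taken from the measurement set.

    def worst_deviation(ls_dev_mm: float, form_error_mm: float,
                        external: bool) -> float:
        """Least-squares dimensional deviation shifted by half the form
        error towards the maximum-material boundary: outwards (+) for
        external features (shafts), inwards (-) for internal ones (bores)."""
        half = form_error_mm / 2.0
        return ls_dev_mm + half if external else ls_dev_mm - half

    # e.g. an external cylinder measured +0,10 mm over nominal, with a
    # 0,20 mm form error, is treated as deviating +0,20 mm at worst:
    print(worst_deviation(0.10, 0.20, external=True))   # 0.2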
All the 'worst' deviations of cylinders, and the
width deviations measured for the features of
Table-3, are graphically depicted in Figure-10 for
'embolo' and Figure-11 for 'folia'. In the width
deviation graphs, the theoretically calculated value
expected due to the Z-layer quantization is also
depicted.





Figure 10 Worst Deviations on 'embolo'






Figure 11 Worst Deviations on 'folia'

The deviations of dimensions on features that have
a fit on the working couple of 'embolo' and 'folia'
are separately shown in Figure-12, with the graphs
placed according to the parts' assembly hierarchy.




Figure 12 Measurements related to fits between parts

4.5 ASSESSMENT & EVALUATION OF
RESULTS
The analysis of the CMM measurements and the
observation of all the graphs produced from the
measured data have led to some remarkable
findings, which can be summarized as follows:
- The overall accuracy of the parts produced lies at
very respectable levels for a low-cost, entry-level
RP machine, since all the 'worst' deviations
calculated average, as absolute values, very close to
0,25 mm. It must be noted that this would be a
typical accuracy value for most of the larger and
more expensive RP machines a decade ago. Also, if
form errors were not considered for correction, the
overall inaccuracies would average close to
0,15 mm.
- A form error is consistently present on all
cylindrical features, in most cases around 0,1 mm,
but with peak values of about 0,45 mm in some
features. It appears to be RP-systemic, related to the
way FDM heads reach in and out of the deposited
circles.
- Almost all the XY-oriented dimensions measured
on the parts appear some 30 to 40 μm smaller when
the parts originate from the 'coarse' STL files,
compared to the ones made with 'fine' STLs. On
features C and D, though, the deviations were much
higher than theoretically expected from the
tessellation error calculated on them. This
difference is RP-system related, as repeated
measurements excluded CMM uncertainty. The use
of 'fine' STLs is advised.
- External features generally seem to deviate less
than internal ones. There seems to be a shift in their
trend at a dimension value of approx. 5 mm, above
which measured dimensions tend to fall short of
their nominals, while below it they tend to exceed
them. Internal features are mostly smaller than
nominal. Compensation seems applicable to a
certain extent.
- For the Z-axis aligned D and E dimensions,
although at first glance D seems to be smaller than
nominal and E to exceed it, with the adjusted
nominals after quantization, D (external dimension)
values are clearly greater than the expected new
nominals and E (internal dimension) values smaller,
with the exception of the first value of part 'folia'
No 1. This is an indication of part swelling.
- Parts made with the 'coarse' STL files presented
far worse results compared to the ones made with
the 'fine' and 'fine triangle reduced' STLs.
Furthermore, the triangle reduction, with the values
used for it, appeared to have very little to negligible
effect on accuracy.
- As for the fill patterns, it seems that poor STLs
built with 'solid' fill patterns are the worst accuracy
combination, while 'fine' STLs with 'sparse - high
density' fill patterns are a little better than their
respective solid-filled ones and possibly the best
combination. This can probably be explained by the
assumption that a solid-filled part, although
stronger, would nevertheless shrink and expand
more than a sparsely filled one.
- The finishing process of support removal in alkali
solution seems to have an effect of 50 μm at
maximum, reducing most external and increasing
most internal feature dimensions. Since this step
cannot be omitted for the majority of parts, the
difference is just noted as a partial influence.
- Finally, as for the fits examined, although the
measured values alone indicate that 5 out of the 12
fits would be achieved, in practice the complete
failure of a loose fit on all four pairs of fit B-C,
which is the first in the assembly hierarchy, dictates
that a fit of prototypes of the working couple
'embolo'-'folia' would not be possible at all without
a secondary finishing step on the prototypes
produced. This was of course verified on the actual
FDM parts. The graphs also indicate that, in order
for the fits to be attainable right from the start,
external dimensions should be reduced and internal
ones increased by about 0,3 mm each, at CAD level,
prior to the build.
5. CONCLUSIONS
RP equipment, especially with the addition of small,
cheap 3D modellers, combined with RE
methodologies and equipment, indeed vastly
increases the abilities of designers and engineers
and widens the range of their applications. The full
potential of such integrated processes, especially in
terms of accuracy, can only be reached through
experimentation and methodological approaches.
The work presented in this paper is in this direction
and has so far produced encouraging and
exploitable results. It is part of an on-going study
that concerns the overall evaluation of the
geometrical uncertainties during the integrated
utilization of RE and RP technologies for the RM of
mechanical components and assemblies. It
apparently needs to be further extended by future
study on more types of characteristic features and
parts, possibly also made by combinations of
several RE and RP technologies. New runs on the
existing equipment, focused on the grey areas of the
results of the present work, will further highlight the
factors of uncertainty and inaccuracy and their
effect on the integrated RE-RP process, and will
propose specific ways for their handling.
REFERENCES
ASME Y14.5M-2009, Dimensioning and Tolerancing,
The American Society of Mechanical Engineers, New
York, 2009
Chen YH and Ng CT, Integrated reverse engineering
and rapid prototyping, Computers & Industrial
Engineering, Vol. 33, No.3-4, 1997, pp 481-484
ElMaraghy HA, Geometric Design Tolerancing:
Theories, Standards and Applications, Chapman
&Hall, London, 1998
Grenda E, Comparison Chart of All 3D Printer Choices
for Approximately $20,000 or Less, Worldwide
Guide to Rapid Prototyping, 2011, Retrieved: 22 7
2011 < http://www.additive3d.com/3dpr_cht.htm>
Grenda E, Commercial Rapid Prototyping System
Manufacturers and Technology Providers,
Worldwide Guide to Rapid Prototyping, 2010,
Retrieved: 22 7 2011 <
http://www.additive3d.com/com_lks.htm>
Ippolito R, Iuliano L and Gatto A, Benchmarking of
Rapid Prototyping Techniques in Terms of
Dimensional Accuracy and Surface Finish, CIRP
Annals - Manufacturing Technology, Vol. 44, No. 1,
1995, pp. 157-160, ISSN 0007-8506, DOI:
10.1016/S0007-8506(07)62296-3.
Kaisarlis G, Diplaris S and Sfantsikopoulos M, Datum
identification in reverse engineering, Proceedings of
the IFAC-MiM'04 Conference, Elsevier, Amsterdam,
2006
Kaisarlis G, Diplaris S and Sfantsikopoulos M, Position
Tolerancing in Reverse Engineering: The Fixed
Fastener Case, Journal of Engineering Manufacture,
Proceedings of the Institution of Mechanical
Engineers, Part B, Vol. 221, 2007, pp 457-465
Kechagias J, Zannis S and Chryssolouris G, "Surface
Roughness Modelling of the Helisys Laminated
Object Manufacturing (LOM) Process", 8th European
Conference on Rapid Prototyping and Manufacturing,
Nottingham, U.K, 1999, pp. 141-152
Kumar V and Dutta D, An assessment of data formats
for layer manufacturing, Advances in Engineering
Software, Vol. 28, 1997, pp. 151-164.
Pham DT and Dimov SS, Rapid Manufacturing: The
technologies & applications of Rapid Prototyping and
Rapid Tooling, Springer-Verlag, London, 2001.
Polydoras S and Sfantsikopoulos M, On the Accuracy
Performance of the Laminated Object Manufacturing
Technology, Proceedings of the Euro RP2001/ 10th
European Conference on Rapid Prototyping &
Manufacturing, Paris, 2001
Salonitis K, Tsoukantas G, Stavropoulos P and
Stournaras A, "A Critical Review of Stereolithography
Process Modeling", (VRAP 03), 3rd International
Conference on Advanced Research in Virtual and
Rapid Prototyping, Leiria, Portugal, 2003, pp. 377-
384
Shellabear M, Benchmark study of accuracy and surface
quality in RP models, RAPTEC, Task 4.2 Report 2,
1999.
Shi X, Zhou X, Wei Y and Ruanet X, Integrated
technology of reverse engineering and rapid
prototyping in product design, Shanghai Jiaotong
Daxue Xuebao - Journal of Shanghai Jiaotong
University, Vol. 34, No.10, 2000, pp 1378-1381
Thompson W, Owen J, de St. Germain H, Stark S and
Henderson T, Feature-based reverse engineering of
mechanical parts, IEEE Trans Robotics Automation,
Vol. 15, No. 1, 1999, pp 57-66
Wohlers T and Grimm T, The real Cost of RP, Time
Compression Magazine, Vol 10, No 2, March/April
2002.
Yongnian Y, Shengjie L, Renji Z, Feng L, Rendong W,
Qingping L, Zhuo X and Xiaohong W, Rapid
Prototyping and Manufacturing Technology:
Principle, Representative Technics, Applications, and
Development Trends, Tsinghua Science &
Technology, Vol. 14, No.1, 2009, pp 1-12, ISSN 1007-
0214, DOI: 10.1016/S1007-0214(09)70059-8.

KNOWLEDGE CAPITALIZATION INTO FAILURE MODE AND EFFECTS
ANALYSIS
Gabriela Candea
Ropardo SRL
gabriela.candea@ropardo.ro
Ciprian Candea
Ropardo SRL
ciprian.candea@ropardo.ro


Claudiu Zgripcea
Ropardo SRL
claudiu.zgripcea@ropardo.ro


ABSTRACT
To achieve high-quality designs, processes, and services that meet or exceed industry standards, it is
crucial to identify all potential failures within a system and to work to minimize or prevent their
occurrence or effects. This paper presents an innovative usage of a knowledge system in the Failure
Mode and Effects Analysis (FMEA) process. The knowledge system is built to serve the multi-project
work that is nowadays in place at any manufacturing or service provider, where knowledge must be
retained and reused not only at project level, but also at company level. Collaboration is assured
through a web-based GUI that supports multiple users' access at any time.
KEYWORDS
Failure Mode and Effects Analysis (FMEA), Decision Support Systems (DSS), Collaborative Work,
Case Based Reasoning (CBR), Knowledge Engineering, Knowledge Capitalization

1. INTRODUCTION
Preventing process and production problems
before they occur is the purpose of Failure Mode
and Effect Analysis (FMEA). Used in both the
design and manufacturing processes, FMEAs
substantially reduce costs by identifying product and
process improvements early in the development
process, when changes are relatively easy and
inexpensive to make. The result is a more robust
process, because the need for after-the-fact
corrective action and late change crises is reduced
or eliminated (McDermott, Mikulak and
Beauregard, 2008). Product development is the
result of a network-based collaborative process,
because most products require co-operation among
geographically distributed experts with diverse
competences (Mavrikios, Alexopoulos, Xanthakis,
Pappas, Smparounis, Chryssolouris, 2011). The paper
presents an innovative approach to FMEA that uses
a knowledge system to capture and reuse content, a
system developed by Ropardo S.R.L. for supporting
the FMEA processes. The system is designed as a
web collaborative tool that supports integrated
multi-project, multi-team, multi-language work with
a knowledge repository system (Experience
Database). At a higher level, this takes place inside
an integrated system called iPortal (Cândea, Cândea,
2011), which is actually a software suite for
different business-related activities like project
management, document management, decision
support systems or other forms of collaboration.
In generic terms, knowledge is the internal state
of an agent (in this case, the FMEA team members -
experts) that has acquired and processed information
from previous experience. An agent can be a human
being, storing and processing information in his/her
mind, or an abstract machine including devices to
store and process information.


A body of formally represented knowledge is
based on a conceptualization: the objects, concepts,
and other entities that are assumed to exist in some
area of interest and the relationships that hold
among them (Genesereth & Nilsson, 1987). A
conceptualization is an abstract, simplified view of
the world that we wish to represent for our purpose.
Every knowledge base, knowledge-based system, or
knowledge-level agent is committed to some
conceptualization, explicitly or implicitly.
The innovation in our case resides in the utilization
of the Experience Database system as a knowledge
base for the FMEA tool and as an active support
(knowledge reuse) for FMEA team members in a
multi-dimensional space: company - projects - teams.
Access to the entire system is provided via web
interfaces with SSO (Single Sign-On) features, so
that attending the FMEA work sessions is done via
web browsers and is not restricted by location.
Section 2 briefly reviews the existing FMEA
software and processes as we found them in
industry (automotive sector). Section 3 describes
the architecture of the FMEA and Experience
Database software systems, followed by an in-depth
description of the Experience Database and its CBR
engine, while the conclusions are presented at the end.
2. FMEA IN INDUSTRY
2.1 AN OVERVIEW OF BASIC CONCEPTS
Failure Mode and Effects Analysis (FMEA) and
Failure Modes, Effects and Criticality Analysis
(FMECA) are methodologies designed to identify
potential failure modes for a product or process, to
assess the risk associated with those failure modes,
to rank the issues in terms of importance and to
identify and carry out corrective actions to address
the most serious concerns (Figure 1).
Although the purpose, terminology and other
details can vary according to the type (e.g. Process
FMEA - PFMEA, Design FMEA - DFMEA, System
FMEA, Product FMEA, FMECA, etc.), the basic
methodology is similar for all; one common factor
has remained throughout the years: to resolve
potential problems before they occur. For years,
FMEA/FMECA has been an integral part of
engineering design. For the most part, it has been a
necessary tool for industries such as the aerospace
and automotive industries.
There are a number of published guidelines and
standards for the requirements and recommended
reporting format of Failure Mode and Effects
Analyses. Some of the main published standards for
this type of analysis include SAE J1739 (1), AIAG
FMEA-4 and MIL-STD-1629A. In addition, many
industries and companies have developed their own
procedures to meet the specific requirements of their
products/processes.
FMEA/FMECA is a group activity (normally
with 6-10 members), which may be performed in
more than one sitting, if necessary. The process
owner (or project manager) is normally the leader of
the FMEA exercise; however, to obtain the best
results, the process owner is expected to involve
multi-disciplinary representatives from all affected
activities. Team members should include subject
matter experts and advisors as appropriate. Each
Process Owner is also responsible for keeping the
FMEA updated.


Figure 1 - Main steps of FMEA
2.3 THE FMEA/FMECA METHOD
Failure modes are the ways, or modes, in which
something might fail. Failures are any errors or
defects, especially ones that affect the customer, and
they can be potential or actual.
FMEAs are developed in very distinct phases
where actions can be determined (Tague 2004).
FMEA also requires pre-work, in order to ensure
that robustness and past history are included in the
analysis. The FMEA phases are listed below:
1. Identify the functions of your scope.
Usually the scope will break into separate
subsystems, items, parts, assemblies or
process steps; identify the function of each.
2. For each function, identify all the ways
failure could happen. These are potential
failure modes.
3. For each failure mode (potential effects of
failure), identify all the consequences on the
system, related systems, process, related
processes, product, service, customer or
regulations.

1
SAE J1379 is the actual binding standard for utilization of
FMEA by the Big Three of the American automotive industry
(Daimler-Chrysler, Ford and General Motors).


4. Determine how serious each effect is. This
is the gravity rating, or G. Gravity is usually
rated on a scale from 1 to 10, where 1 is
insignificant and 10 is catastrophic. If a
failure mode has more than one effect, write
on the FMEA table only the highest gravity
rating for that failure mode.
5. For each failure mode, determine all
potential root causes. Use tools classified as
cause analysis tools, as well as the best
knowledge and experience of the team. List
all possible causes for each failure mode on
the FMEA form.
6. For each cause, determine the frequency
rating, or F. This rating estimates the
probability of failure occurring for that
reason during the lifetime of your scope.
Frequency is usually rated on a scale from 1
to 10, where 1 is extremely unlikely and 10
is inevitable.
7. For each cause, identify current process
controls that might prevent the cause from
happening, reduce the likelihood of
occurring or detect failure after the cause
has already happened (tests, procedures or
mechanisms that keep failures from
reaching the customer). For each control,
determine the detection rating, or D. This
rating estimates how well the controls can
detect either the cause or its failure mode
after they have happened but before the
customer is affected. Detection is usually
rated on a scale from 1 to 10, where 1
means the control is absolutely certain to
detect the problem and 10 means the control
is certain not to detect the problem (or no
control exists).
8. (Optional for most industries) Is this failure
mode associated with a critical
characteristic? (Critical characteristics are
measurements or indicators that reflect
safety or compliance with government
regulations and need special controls.) If so,
a column labelled Classification receives a
Y or N to show whether special controls are
necessary. Usually, critical characteristics
have a severity of 9 or 10 and occurrence
and detection ratings above 3.
9. Calculate the risk priority number, or RPN.

$RPN = G \times F \times D$

10. Identify recommended actions. These
actions may consider design or process
changes to lower severity or occurrence.
They may be additional controls to improve
detection. Also, who is responsible for the
actions and target completion dates must be
written.
As actions are completed, note the results and the
date on the FMEA form. Also, note the new G, F or
D ratings and the new RPNs (Figure-2).
The RPNs are recalculated after the three possible
action opportunities have occurred. Actions are not
determined based on RPN values alone; RPN
threshold values do not play an important role in
action development, only in action evaluation once
actions are completed.
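As a minimal illustration of steps 4 to 9 above, the following Python sketch (with invented failure modes and ratings) computes RPNs and ranks the failure modes by priority.

    def rpn(gravity: int, frequency: int, detection: int) -> int:
        """Risk Priority Number: RPN = G * F * D, each rated 1-10."""
        return gravity * frequency * detection

    failure_modes = [
        ("seal leak",    9, 3, 4),   # (description, G, F, D) - illustrative
        ("loose bolt",   6, 5, 2),
        ("wrong colour", 2, 4, 3),
    ]
    # Rank by RPN, highest priority first:
    for name, g, f, d in sorted(failure_modes, key=lambda m: -rpn(*m[1:])):
        print(f"{name}: RPN = {rpn(g, f, d)}")
    # seal leak: RPN = 108; loose bolt: RPN = 60; wrong colour: RPN = 24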



Figure 2 FMEA example
2.4 THE FMEA CHALLENGE
The management of a company's knowledge is
made difficult mainly by two issues: relevant
knowledge may often not be found in a clear form
like databases, but in documents (project statements,
client requirements and QM handbooks), and access
to knowledge is hampered by the problem that
different actors use different terms to refer to the
same topic.
Today, FMEA is in widespread use by a
multitude of industries, many of which have begun
imposing FMEA standards. The purpose of the
FMEA is to take actions to eliminate or reduce
failures, starting with the highest-priority ones
(Tague, 2004). Nevertheless, the effort to develop
an FMEA is mostly considered high or very high,
due to the number of persons involved (Stock &
Stone & Tumer, 2003). In addition, the advantages
that result from failure prevention cannot be
perceived immediately. To shorten the process of
FMEA development and to obtain results earlier,
the knowledge included in already developed
FMEAs has to be reused.
One of the most important keys of the FMEA is
the capitalization of knowledge. When workers
leave a company and take valuable job-related
information with them, managers and co-workers
are left to manage the new employees, disregarding
their own responsibilities. Another weakness of the
FMEA is its dependence on the experience of the
FMEA team members: the FMEA is only as good as
the members of the FMEA team.
The reuse of FMEA knowledge suffers from a
major shortcoming, mentioned by Wirth et al
(1996): the FMEA-related information is acquired
in natural language. The analyses are hardly
reusable, because the systematized components,
functions and failure modes are not made explicit.
The meaning depends on the interpretation of the
team or team member who performs the FMEA and
can fluctuate when another team reuses this FMEA,
or even when the same team tries to reuse it on a
later occasion. Because of this lack of reusability,
the FMEA is regularly built from scratch, without
making use of older FMEAs.
Although one person is typically responsible for
coordinating the FMEA process, all FMEAs are
team based. The scope of an FMEA team is to bring
a range of perspectives and experiences into the
project. Because each FMEA is unique in dealing
with different aspects of the product or process,
FMEA teams are formed and dispersed as
needed (2). Another limitation of the FMEA
methodology is set by the unavailability (de-located
teams, overlap of membership between teams) of
team members to attend the FMEA meetings.
3. SOFTWARE SYSTEMS
In this chapter the software system is presented from
concept to architecture. Our system is composed of
two major subsystems: PEA and the Experience
Database.
PEA (Process and Effect Analysis) respects all
FMEA requirements and work processes; it is
web-based software that allows team collaborative
work on FMEA.
The second major system is the Experience
Database (knowledge repository system), which
provides knowledge capitalization for our system.
In the current implementation, the Experience
Database uses a case-based reasoning (CBR)
approach for the capitalization of knowledge.
3.1 PEA - PROCESS AND EFFECT
ANALYSIS

The purpose of the FMEA software by Ropardo
(named PEA) is to prevent process and production
problems before they occur. It is used in both design
and manufacturing processes, and it substantially
reduces costs by identifying product and process
improvements early in the development process,
when changes are relatively inexpensive to
implement. Process and Effect Analysis (based on
FMEA) processes are based on worksheets that
contain important information about the system,
such as the revision date or the names of the
components. On these worksheets, all the items or
functions of the subject should be listed in a logical
manner, based on the block diagram. For each item
or function, the possible failure modes, effects and
causes are listed, and each of them is graded for its
severity/gravity (G), frequency of occurrence (F)
and detection rating (D). Afterwards, the Risk
Priority Number (RPN) is calculated by multiplying
G, F and D. Once this is done, it is easy to determine
the areas of greatest concern. This has to consider
the entire process and/or design, and the items that
have the highest RPN should be given the highest
priority for corrective actions. After these values are
allocated, recommended actions with targets,
responsibility and dates of implementation are noted
on the worksheets, which actually constitute the
output of this software.

(2) http://www.fmeainfocentre.com/, FMEA Info Centre

3.2 EXPERIENCE DATABASE

The Experience Database aims to provide a
component that is easy to use by the knowledge
engineer and by other software modules.
A knowledge management system faces a few
major challenges:
1) Acquisition - The main target here is to get hold
of the information that is around and turn it into
knowledge by making it usable. This might involve
making tacit knowledge explicit, identifying gaps in
the knowledge already held, and acquiring and
integrating knowledge from multiple sources.
2) Modelling - Knowledge model structures must be
able to represent knowledge so that it can be used
for problem-solving. One important knowledge
modelling idea is that of ontologies, which are
specifications of the generic concepts, attributes,
relations and axioms of a knowledge base or
domain. Ontologies can act as placeholders and
organizing structures for acquired knowledge, while
also providing a format for understanding how
knowledge will be used.
3) Retrieval - When a knowledge repository gets
very large, finding a particular piece of knowledge
can become very difficult.
4) Reuse - One problem in using knowledge
management systems is that knowledge databases
are often rebuilt for each end user.
5) Publishing - Can be described as follows:
knowledge, in the right form, in the right place, to
the right person, at the right time.
6) Maintenance - It may involve the regular
updating of content as content changes. In addition,
it may also involve a deeper analysis of the
knowledge content.
The Experience Database component, whose main
architecture is described in Figure-3, uses Case-
Based Reasoning as its computational engine. The
actual design of the Experience Database allows
defining and storing different types of structures for
knowledge representation. These structures can be
defined using an ontology editor, which allows
keeping an organized and easily accessible view of
the database. In our case we use Protégé as the
ontology editor, and the defined ontology is stored
using Protégé's internal storage. We built a mapping
tool that allows exporting a certain structure from
the ontology to the Experience Database, as shown
in Figure-3. Once that structure is exported to the
CBR engine, we operate on two atomic structures:
the original one, defined inside the ontology, and
the other one inside the CBR system. In this way
the ontology can evolve and new case structures can
be created at any time; on the CBR side, however,
once a case is created and populated with data, its
structure is fixed and can be modified only manually
- there is no automatic update process.
Figure 3 Experience Database architecture

The communication system assures the
independence of the module's core processing
model from the communication methods. The
default implementation is direct Java calls: the
client gets a communication object, which exposes
the methods through a Java interface. The methods
are invoked by direct calls, all the data types being
passed without transformation (into/from XML or
similar); other methods are exposed by the
Experience Database and can be used as well.
One important sub-system of the Experience
Database is the input/output (I/O) and validation
sub-system, which is responsible for the translation
of data from external sources into native data types
that can be used by the controller.
The I/O system is split into two subsystems, one
for input and one for output:
The input subsystem achieves the translation of
information from generic formats (XML
structures) into Java formats (POJO - Plain Old
Java Objects). This is done by validating and
parsing the XML input into the corresponding
POJOs. The validation is done against the XSD
and is different from the validation done within
the validation system: it consists only in checking
whether the syntax of the XML is correct.
The output subsystem generates the XML
answers from the Java objects (it is mainly a
serialization of Java objects into the
corresponding XML representation, but
additional transformations may apply).
The validation system checks the incoming data
for inconsistencies and rejects the wrong ones.
Input Data Parser - responsible for the parsing of
the input (request) information. The input data
requests are either for similar cases (a search over
the stored cases using some filtering parameters) or
for a single case (identified by its ID).
Feedback Data Parser - responsible for the
parsing of the feedback data. The feedback consists
in changes to a stored case (different solution, etc.).
Ontology/Mapping Parser - responsible for the
parsing of the domain/case ontology and of the
mapping information that will be used by the
controller to solve the problem. The mapping
information is domain dependent and will be
defined by the knowledge engineer. The default
implementation provides some default mappings,
but others will need to be defined.
Ontology Definition Sender - responsible for
formatting the ontology definition from the internal
format into the XML file. The sender is invoked by
the controller upon a corresponding request being
received. The ontology is fetched from the database
(please see the relevant sequence charts).
Output Data Sender - responsible for formatting
the retrieved cases/answers into the correct XML
structures.
Input Validator - validates the parsed input data
for inconsistencies.
Feedback Validator - validates the feedback
information for inconsistencies.
Ontology/Mapping Validator - validates the
ontology and domain mapping information for
inconsistencies.
All the validation is done in order to lighten the
controller's processing (the controller receives only
good information; wrong input is filtered out
beforehand).
The standard invocation process starts a search in
the experience database - a search done on the
criteria of the similarity functions (detailed in the
next chapter) that are defined for each data
structure. The search is done separately for each
data structure, defined in separate spaces ('case
spaces') in the database, for better case management.
To start a new search, an XML document containing
the case pattern data is sent to the Experience
Database and, as a response, an XML document
with the best n cases is returned. For the feedback
phase, PEA sends an XML document with feedback
data for the specific case pattern, based on which
the Experience Database learns.
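Purely as an illustration of this exchange, the Python fragment below builds such a query document with the standard library; every element and attribute name here is hypothetical, since the actual Experience Database schema is not given in the paper.

    import xml.etree.ElementTree as ET

    # Hypothetical request: a partial case pattern used as search criteria.
    # Element/attribute names are invented; the real schema may differ.
    query = ET.Element("case-query", attrib={"max-results": "5"})
    for name, value in [("PN", "Project-A"), ("PE", "surface scratch")]:
        feature = ET.SubElement(query, "feature", attrib={"name": name})
        feature.text = value

    print(ET.tostring(query, encoding="unicode"))
    # <case-query max-results="5"><feature name="PN">Project-A</feature>
    #   <feature name="PE">surface scratch</feature></case-query>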
5. CBR ON EXPERIENCE DATABASE
In our implementation of the Experience Database,
a Case-Based Reasoning engine is the core
computational engine, which solves problems by
adapting the solutions of older ones.
A CBR system involves reasoning from prior
examples: memorizing previous problems and their
associated solutions, and solving new problems by
reference to that knowledge (Sankar K. Pal, Simon
C.K. Shiu, 2004).
The problem-solving life cycle in our CBR
system consists essentially of the following four
parts, as in Figure 4:

- Retrieving similar previously experienced cases
(e.g., problem-solution-outcome triples) whose
problem is judged similar
- Reusing the cases by copying or integrating the
solutions from the cases retrieved
- Revising or adapting the solution(s) retrieved in an
attempt to solve the new problem
- Retaining the new solution once it has been
confirmed or validated

Figure 4 CBR internal design
5.1 CASE STRUCTURE

We start from a general case structure definition for
our FMEA-process implementation, containing the
following information:
- ID. A unique identification number within the
case base.
- Description. A brief description of the case.
- Meta-data. Case meta-data is maintained for
each case.
- Creator. Name of the person/organization/
project that created the case.
- Creation date/time. Date and time the case
was initially saved in the case base.
- Number of times accessed. Count of the
number of times the case has been retrieved
from the case base by a client.
- Date/time of last access. The date/time of the
last time the case was retrieved.
- Features. A list of case features. A case
feature is synonymous with a case index.
- Data or sub-cases. This is also commonly
referred to as the case solution. The case data
(solution) contains the information that is
returned to the client during case retrieval. If
a case has child cases, no data is associated
with the parent case; for these aggregate
cases, a list of child cases is maintained.



Starting from this general definition, we defined for
FMEA a specific case schema that respects our
scope of knowledge capitalization in multi-project
and multi-user usage.
We start from a specific representation of the
FMEA domain and build the case structure, where
we can find information about the process/product,
as well as the effects and the measures that must be
taken for each case.
As an example, consider the following case
structure:

PEA's Case
  PN - Project Name
  P  - Product / Process
  PE - Effect
  PC - Potential Cause
The solution
  NG - New Gravity
  R  - Remedy

Figure 5 Part of the PEA case structure
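For concreteness, the schema of Figure 5 together with the generic fields above could be rendered as the following Python data structure; the field names are illustrative, not the actual PEA schema.

    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class PEACase:
        """FMEA case as sketched in Figure 5, plus the generic meta-data
        listed above; names are illustrative, not the actual schema."""
        case_id: int
        project_name: str        # PN
        product_process: str     # P
        effect: str              # PE
        potential_cause: str     # PC
        # solution part:
        new_gravity: int         # NG
        remedy: str              # R
        # meta-data:
        creator: str = ""
        created: datetime = field(default_factory=datetime.now)
        times_accessed: int = 0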
5.2 SIMILARITY FUNCTION
Case retrieval is the process of finding, within a
case base, those cases that are the closest to the
current case. For an effective case retrieval, we start
from selection criteria - in our case, a partial case
structure completed in the GUI by the user - that
determine how to compare cases for retrieval.
Starting from the selection criteria, the closest case
is searched for among the cases stored in the
database. The most commonly investigated retrieval
techniques are the k-nearest neighbours (k-NN),
decision trees, and their derivatives. These
techniques involve developing a similarity metric
that allows the closest (i.e., most similar) cases to
be identified.
For example, if we are looking for cases similar
to the query case qc = (PN, P, PE, PC), case
retrieval is the process of finding which case cc is
the closest one to qc.
For each case in the database, the degree of
similarity between qc and cc_i (equation 1) is
calculated, for i = 1 to n, where n is the total number
of cases in the database.

$SM(qc, cc_i) = \frac{common}{common + different}$   (1)

where 'common' represents the number of features
whose value is the same between $qc$ and $cc_i$, and
'different' represents the number of features whose
value differs between $qc$ and $cc_i$.
In the current implementation, the similarity
function is based on the weighted Euclidean
distance, equation (2). The distance is calculated as
the square root of the sum of the squares of the
weighted arithmetical differences between the
corresponding coordinates of two objects (Sankar
K. Pal, Simon C.K. Shiu, 2004).

$d_{pq}^{w} = \sqrt{\sum_{j=1}^{n} w_j^2 \, \delta_j^2 (e_{pj}, e_{qj})}$   (2)

where $w_j \in [0, 1]$ is the weight associated with the
$j$-th feature, indicating the importance of that
feature.
For the distance measure computation, the
following formulas were used:
- $\delta_j(a, b) = |a - b|$, if $a$ and $b$ are real numbers
- $\delta_j(A, B) = \max_{a \in A,\, b \in B} |a - b|$, if $A$ and $B$ are intervals
- $\delta_j(a, b) = 1$ if $a \neq b$ and $0$ if $a = b$, if $a$ and $b$ are symbols
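A compact Python sketch of this retrieval step is given below: the feature-wise distances δ_j and the weighted Euclidean distance of equation (2), applied as a 1-NN search over a toy case base. The feature values and weights are invented for illustration only.

    import math

    def delta(a, b):
        """Feature-wise distance: absolute difference for numbers,
        worst-case difference for intervals (2-tuples), 0/1 for symbols."""
        if isinstance(a, (int, float)) and isinstance(b, (int, float)):
            return abs(a - b)
        if isinstance(a, tuple) and isinstance(b, tuple):
            return max(abs(x - y) for x in a for y in b)
        return 0.0 if a == b else 1.0

    def weighted_distance(query, case, weights):
        """Equation (2): sqrt( sum_j (w_j * delta_j)^2 ), w_j in [0, 1]."""
        return math.sqrt(sum((w * delta(q, c)) ** 2
                             for q, c, w in zip(query, case, weights)))

    # 1-NN retrieval of the closest stored case to a (PN, P, PE, PC) query:
    query = ("ProjX", "welding", "porosity", "moist electrode")
    cases = [("ProjX", "welding", "porosity",  "contaminated gas"),
             ("ProjY", "milling", "tool wear", "high feed rate")]
    weights = (0.2, 1.0, 1.0, 0.8)
    print(min(cases, key=lambda c: weighted_distance(query, c, weights)))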

6. CONCLUSIONS
In the current implementation, we propose a
method to mobilize the professional knowledge of
the professionals involved in the FMEA process.
Nowadays, in the manufacturing sector, decisions
concerning processes and products must be
anticipated by integrating the professional
knowledge and know-how of experts, from the early
stages to monitoring and correction. Different
aspects have been investigated, from artificial
intelligence (Barthès, J.P., 1996) and Case-Based
Reasoning (Haque, B.U. et al, 2000) to the
knowledge management of knowledge
capitalization.
We propose an innovative method that allows
knowledge capitalization for the FMEA process.
Moreover, we designed and built a software system
that, on the one hand, offers a new approach to
standard collaborative FMEA for multi-user, multi-
project work using a web GUI - the PEA software -
and, on the other hand, puts the Experience
Database together with FMEA-specific knowledge
capitalization. As the core computational engine of
the Experience Database we used a Case-Based
Reasoning engine, and as the similarity function we
implemented the weighted Euclidean distance.
The software system presented in this article is
going to be launched in production at the biggest
automotive spare parts supplier in Romania, starting
in Q4 2011.
As future work, we must investigate different
similarity functions, and we are looking to
implement and evaluate a fuzzy approach.
Another task that must be accomplished is the
maintenance of the CBR system, whose current
configuration may become sub-optimal over time;
it is therefore critical to have the ability to optimize
the configuration.

7. ACKNOWLEDGMENTS
The research reported in this paper has received
funding from the European Union Seventh
Framework Programme (FP7/2007-2013) under
grant agreement No: NMP2 2010-228595, Virtual
Factory Framework (VFF).
REFERENCES
Barthès, J.P. (1996). ISMICK and Knowledge Management.
Cândea, C., Cândea, G. (2011). iPortal and Factory
Hub, Digital Factory for Human-oriented Production
Systems: The Integration of International Research
Projects, L. Canetta, C. Redaelli, M. Flores (eds), 1st
printing, Springer, pp. 271-282
Cândea, C., Georgescu, A.V., Cândea, G. (2009). iPortal
Management Framework for Mobile Business.
Proceedings of the International Conference on
Manufacturing Science and Education MSE.
Filip, F.G. (2007). Decision Support Systems. 2nd
edition revised, pp 353 359. Ed. Tehnica, Bucuresti.
Georgescu, A.V., Cândea, C., Constantin, Z.B. (2007).
iGDSS - Software Framework for Group Decision
Support Systems. In Proceedings of The Good, The
Bad and The Unexpected.
Goddard, P.L. (2000). Software FMEA techniques.
Reliability and Maintainability Symposium. pp 118-
123.
Haque, B.U. et al. (2000). Towards the Application of
Case Reasoning to Decision-making in Concurrent
Product Development (Concurrent Engineering),
School of Mechanical, Manufacturing Engineering
and Management, pp. 101112, University of
Nottingham, Nottingham, NG7 2RD, UK.
Huang, G.Q., Shi, J., Mak K.L. (2000). Failure Mode and
Effect Analysis (FMEA) Over the WWW. The
International Journal of Advanced Manufacturing
Technology, Vol. 16 (8), pp. 603-608.
Mavrikios D., Alexopoulos K., Xanthakis V., Pappas M.,
Smparounis K., Chryssolouris G. (2011). A Web-
based Platform for Collaborative Product Design,
Review and Evaluation, Digital Factory for Human-
oriented Production Systems: The Integration of
International Research Projects, L. Canetta, C.
Redaelli, M. Flores (eds), 1st printing, Springer, pp.
35-55
McDermott, R.E., Mikulak, R.J., Beauregard, M.R.
(2008). The basics of FMEA. Productivity Press, USA
Puente, J., Pino, R., Priore, P., Fuente, D. (2002). A
decision support system for applying failure mode and
effects analysis. International Journal of Quality &
Reliability Management. Vol 19 (2), pp 137-150.
Sankar K. Pal, Simon C.K. Shiu (2004). Foundations of
Soft Case-Based Reasoning.
Schreinemakers, J.F. Knowledge Management:
Organization, Competence and Methodology,
Proceedings of ISMICK'96, pp. 9-13, 21-22 Oct,
Rotterdam; Würzburg, Ergon Verlag.
Stamatis, D.H. (2003). Failure mode and effect analysis:
FMEA from theory to execution, William A. Tony,
USA
Tague, N.R. (2004). The Quality Toolbox, ASQ
Quality Press, pp 236-240.
Wirth, R., & Berthold, B., & Krämer, A., & Peter, G.
(1996). Knowledge-Based Support of System Analysis
for Failure Mode and Effects Analysis. Engineering
Applications of Artificial Intelligence, 9, 219-229.
Automotive Industry Action Group, Implementing the
Failure Mode & Effects Analysis (FMEA) Reference
Manual, Retrieved: 8 August 2011,
http://www.aiag.org/scriptcontent/index.cfm,
FMEA Information Centre Team, Retrieved: 8 August
2011, FMEA Info Centre,
http://www.fmeainfocentre.com/
Automotive Quality and Process Improvement
Committee, Potential Failure Mode and Effects
Analysis in Design (Design FMEA), Potential Failure
Mode and Effects Analysis in Manufacturing and
Assembly Processes, Society of Automotive
Engineers, Retrieved: 10 August 2011,
http://standards.sae.org/j1739_200901

ROBUST DESIGN OPTIMIZATION OF ENERGY EFFICIENCY: COLD
ROLL FORMING PROCESS
John Paralikas
Laboratory for Manufacturing Systems and
Automation
Department of Mechanical Engineering &
Aeronautics
University of Patras, Patras, 26500, Greece
jparali@lms.mech.upatras.gr
Konstantinos Salonitis
Laboratory for Manufacturing Systems and
Automation
Department of Mechanical Engineering &
Aeronautics
University of Patras, Patras, 26500, Greece
kosal@lms.mech.upatras.gr

George Chryssolouris
Laboratory for Manufacturing Systems and Automation
Department of Mechanical Engineering & Aeronautics
University of Patras, Patras, 26500, Greece
xrisol@lms.mech.upatras.gr

ABSTRACT
Cold roll forming is an important sheet metal forming process for the mass production of a variety
of complex profiles, coming from a wide spectrum of materials and thicknesses. Energy efficiency
is a major trend nowadays, towards the reduction of energy consumption and the better utilization
of manufacturing resources. The current paper proposes a methodology for the robust design
optimization of the energy efficiency of the cold roll forming process. The energy efficiency indicator
is calculated through an analytical model, and the quality characteristic constraints are checked
through a finite elements model. The robust design optimization algorithm for the process
parameters is implemented, utilizing the analytical model of energy efficiency, so as to provide a
practical approach for determining the optimum set of process parameters, taking into account the
variability of noise factors. The current approach is applied to a U-section profile and is practical,
since it reduces the computational costs and takes into account the uncertainties of a real
manufacturing environment.
KEYWORDS
Cold roll forming process, energy efficiency, optimization, robust design, noise factors
1. INTRODUCTION
Complex structural profiles from a wide spectrum of
material and thicknesses can be mass produced by
the cold roll forming process. Such a process
demonstrates a high material utilization and
productivity rates. With the introduction of high
strength materials, several challenges have emerged,
such as the requirement of increased deformation
work and consequently energy consumption.
Optimizing productivity and reducing energy
consumption, including real manufacturing
environment variability, can provide significant
advantages against other sheet metal forming
processes. Applying robust design techniques to the
cold roll forming process enables finding the
optimum process variables that fulfil the objective
and quality constraint requirements, while these
remain stable when exposed to the uncertain conditions
of noise factors (Jurecka, 2007).
Several studies have been presented towards the
robust design optimization of major sheet metal
forming processes, such as stamping and deep
drawing. Mathematical meta-modeling techniques,
such as Response Surface Methodology (RSM), Dual
RSM and Adaptive RSM, were applied with a
stochastic analysis for estimating the mean and
the variance of the response for sheet metal
forming processes (Hou et al, 2010) (Hu et al,
2008) (Donglai et al, 2008). The stochastic
variation of the noise factors affecting the sheet metal
forming quality was studied, so as to minimize the
impact of variations and achieve reliable process
parameters (Tang and Chen, 2009). RSM was also
applied with the Pareto-based multi-objective
genetic algorithm (MOGA) for the optimization of the
sheet metal stamping process (Wei and Yuying,
2008). Regarding the cold roll forming process,
RSM was applied so as to study the effects of the
main process parameters, namely the bending angle
increment, roll radius, springback angle, and
maximum edge membrane longitudinal strains
(Zeng et al, 2009). Moreover, a semi-empirical
approach, utilizing the Taguchi methods, was also
developed for the optimization of the main roll forming
parameters (Paralikas et al, 2010a).
In the current paper, a methodology for robust
design optimization, utilizing orthogonal arrays, of
energy efficiency of the cold roll forming process is
proposed. Such methodology utilizes an analytical
model for the calculation of the energy efficiency,
and a finite elements model for the monitoring of
the quality characteristic constraints of the most
energy efficient solution. This hybrid optimization
solution provides a significant reduction in the
computational cost, as a finite elements model is
used only for checking the quality characteristic
constraints.
2. ENERGY EFFICIENCY OF COLD ROLL
FORMING PROCESS
Energy efficiency in a manufacturing process is a
generic term and mainly refers to less energy being
used for the production of the same amount or more
useful output, as products (Patterson, 1996). In the
manufacturing sector, the energy efficiency can be
defined as the energy required for producing an
amount of products per unit time. Hence, the energy
efficiency indicator for the cold roll forming process
can be defined as the ratio of the production rate per
hour to the energy required for producing one meter
of product:

\eta_{RF} = \frac{\text{Useful output of process}}{\text{Energy input to process}} = \frac{PRH}{E_{motor,in}} \qquad (1)

where:
\eta_{RF} is the energy efficiency indicator of the roll forming process (m^2/(Joule*hr))
PRH is the production rate of the roll forming mill per hour (products of one meter per hour, i.e. meters of product per hour)
E_{motor,in} is the total specific input energy to the electric motor of the mill from the grid (Joules/m)
The total motor input energy is calculated by an
analytical model, based on the power distribution from
the grid to the material's deformation (Figure 1).


Figure 1 Cold roll forming process layout for energy
motor input calculation
The electric motor and drivetrain efficiencies are
taken into account for the power distribution to the
roll forming mill shafts. An analytical model calculates
the deformation work, the longitudinal stretching
work at the flange and the frictional pulling work.
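As an illustration of Equation (1) and of the power distribution of Figure 1, the following Python sketch computes the indicator from assumed work terms and efficiency values; the function names and the default efficiency figures are illustrative assumptions, not the paper's data.

def motor_input_energy(deformation_w, stretching_w, friction_w,
                       motor_eff=0.90, drivetrain_eff=0.95):
    # Specific energy drawn from the grid (Joules per meter of product):
    # the works at the mill shafts divided by the motor and drivetrain
    # efficiencies (the efficiency values here are placeholders).
    return (deformation_w + stretching_w + friction_w) / (motor_eff * drivetrain_eff)

def energy_efficiency_indicator(prh, e_motor_in):
    # Equation (1): eta_RF = PRH / E_motor,in  (m^2 / (Joule*hr)).
    return prh / e_motor_in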
3. ROBUST DESIGN METHODOLOGY
In a robust design optimization problem, the
variables can be identified as: i) control factors,
design/operational parameters that can be easily
controlled; ii) noise factors that are hard or
expensive to control and introduce uncertainty;
iii) fixed parameters that provide the boundary
conditions; iv) objective responses to be optimized;
and v) quality constraints. The main objective of the
robust design optimization of a manufacturing
process is to calculate the control factors so as
to achieve the best process performance within the
quality constraints, while remaining insensitive to
the uncertainty of the noise factors.




Figure 2 Process robust design optimization using hybrid
modeling approach
The proposed hybrid robust design optimization
methodology is presented in Figure 2. An analytical
energy model is used in the DoE/robust design
algorithm (Figure 3) for the determination of the
optimum mean value of the energy efficiency with the
minimization of its variance. This leads to 144 (16x9)
runs, which would be rather impractical to apply
experimentally or with a computationally expensive
finite elements model. The optimum energy efficient
solution is then investigated through a finite elements
model for the fulfillment of the quality constraints.
In case those quality constraints are not met, the
control variables are altered based on the process
guidelines and the new optimum solution is checked
again.


Figure 3 DoE / robust design algorithm for calculation of
optimum energy efficient solution
4. COLD ROLL FORMING PROCESS
VARIABLES
The fixed parameters, control factors and noise factors
need to be defined in order for the optimization
problem to be formulated. The fixed parameters
mainly refer to the product's geometrical
characteristics that have been set by the
product/customer, as identified in Table 1 for the
U-section demonstration.

Table 1 Fixed parameters for a roll formed U-section
No.  Fixed parameters                    Units
1    Inside bending radii (r_i)          mm
2    Length of the flange (A)            mm
3    Length of the web (w)               mm
4    Total strip width, calculated (L)   mm
5    Final bending angle (A_final)       deg.
6    Material type                       -
7    Strip thickness (nominal)           mm

Control factors are parameters that are easy to
control from the design phase and during processing
(Table 2). Such parameters do not affect the fixed
ones, but influence the objective response and the
quality characteristics that need to be optimized.
The bending angle concept refers to the bending
sequence of the roll formed profile, as the leading
and final angle increments should be 5-10 degrees
(Halmos, 2006). The middle angle increment can
range from 5 to 20 degrees, affecting the bending
sequence and consequently the number of roll
stations required. The bending concepts C5, C10,
C15 and C20 lead to 18, 10, 7 and 6 roll stations
respectively.

Table 2 Control factors for cold roll forming process
No.  Control Factors                      Units                     Range
1    Bending Angle Concept (BAC)          degrees                   C5 - C20
2    Roller diameter (D_L)                mm                        100 - 220
3    Line velocity (V)                    mm/sec                    100 - 310
4    Roll stations inter-distance (L_R)   mm                        480 - 540
5    Rolls gap, clearance (G)             % of thickness (2.0 mm)   1% - 3% (0.02 - 0.06 mm)

Noise factors are parameters that are expensive
or practically impossible to control during
processing (Table 3). As the tooling wears, the rolling
friction coefficient varies randomly and cannot
easily be determined online. Moreover,
there is a practical variation in the material thickness
and material parameters from coil to coil of the
material supplier.



Table 3 Noise factors for cold roll forming process (AHSS DP-600 material parameters)
No.  Noise Factors                                    Units   Mean target
1    Rolling friction coefficient (Crr)               -       0.01196
2    Material parameter: Strength coefficient (K)     MPa     956.65
3    Material parameter: Hardening coefficient (n)    -       0.171
4    Thickness of material (t)                        mm      2.0

Thus, there is a variation in these noise
parameters that cannot be easily controlled (Table
4). A range can be set from statistical values
(Kunitsyn et al, 2011) and material supplier data
(Arcelormittal, 2011), leading to a standard
deviation for each noise factor.

Table 4 Stochastic variation of noise factors
No.  Noise Factor   Mean (μ_i)   Range     Standard deviation (σ_i)
1    Crr            0.01196      0.01072   0.00268
2    K              956.65       191.33    47.8325
3    n              0.171        0.0342    0.00855
4    t              2.0          0.16      0.05
5. QUALITY CHARACTERISTIC
CONSTRAINTS
Quality characteristic constraints are based on
specific failure modes (defects), such as warping,
twist, edge waviness and bending edge cracking,
during the cold roll forming process (Figure 4).
Such failure modes are driven by specific
redundant deformations (Halmos, 2006).


Figure 4 Main defects of roll formed product (Halmos,
2006)
The material's Forming Limit Curve (FLC) can be
used as a metric for monitoring the major (\varepsilon_1) and minor
(\varepsilon_2) strains of the profile. During the cold roll
forming process, the major strains (\varepsilon_1) can surpass the
critical forming limit point (FLC_0). This can lead to
excessive and unequal plane strain and result in
warping and/or cracking. Thus, the quality
constraint can be formulated as:

FLC_0 > \varepsilon_1 \qquad (2)

During cold roll forming the edge travels a greater
length than the web does, resulting in the
development of longitudinal strains at the edge.
Such longitudinal strains alternate from
compressive, as the material reaches the lower
roller, to stretching, as the material passes the
lower roller's centerline. Such longitudinal strains
at the edge, if surpassing the material's elastic limits in
tension (TYS) and buckling (CYS), result in an
edge wave (buckling). Thus, the next quality
constraint should be met:

TYS > \varepsilon_{peak@edge} > CYS \qquad (3)

Strains in the thickness direction can provide a
prediction of the thickness reduction along the
profile's cross section, and also of cracking, if
surpassing the ultimate elongation point strain
(\varepsilon_{UEP}). Therefore, the quality constraint for cracking
can be formulated as:

0 < \varepsilon_{3TS,peak} < \varepsilon_{UEP} \qquad (4)
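A compact way to see how Equations (2)-(4) act together as feasibility checks on a simulation result is the Python sketch below; all arguments are strain values taken from an FEM run, and the function itself is an illustrative assumption rather than part of the paper's toolchain.

def quality_constraints_ok(eps1_max, flc0,
                           eps_edge_peak, tys, cys,
                           eps3_ts_peak, eps_uep):
    # Eq. (2): maximum major strain below the critical forming limit point.
    no_warping_or_cracking = eps1_max < flc0
    # Eq. (3): edge strain peaks inside the elastic limits
    # (cys is the compressive limit, a negative value).
    no_edge_wave = cys < eps_edge_peak < tys
    # Eq. (4): thickness strain peak below the ultimate elongation point.
    no_thickness_cracking = 0.0 < eps3_ts_peak < eps_uep
    return no_warping_or_cracking and no_edge_wave and no_thickness_cracking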


6. COLD ROLL FORMING PROCESS
GUIDELINES
Several guidelines have emerged from previous
studies (Paralikas et al, 2010a) (Paralikas et al,
2010b) (Paralikas et al, 2009) that can be applied
to correct any undesired quality characteristic
measures. Such guidelines can be divided into i)
process and ii) roller design guidelines.
Process guidelines involve the alteration of process
parameters for the relief and correction of such
quality characteristics. An increase in the roll station
inter-distance can decrease the peaks of the elastic
longitudinal strains at the edge. A decrease in the
rolls gap (% of nominal strip thickness) can
decrease the major-minor strains. A decrease in the
bending angle increment between the roll stations
can dramatically reduce the elastic longitudinal
strains, but it will require an additional roll station in
order for the desired final bending angle to be
produced.
Design guidelines involve the alteration of the
roller design and the roll forming line
configuration. The downhill flow can be applied so
as to provide a reduction in strip thickness.


Increasing the roll diameter can have a positive
effect on the elastic longitudinal strains at buckling.
Applying a variable bending radius along the roll
forming line can result in decreased thickness, but
can produce more elastic longitudinal strains.
7. CASE STUDY: U-SECTION PROFILE
A U-section profile (Figure 5) with a total
bending angle of 90 degrees was selected for the
application of the proposed robust design
methodology. Such a profile is commonly used in
many industrial sectors and applications.


Figure 5 U-Section roll formed profile (A=50mm,
w=200mm, L=1000mm, r=4mm)
Based on the fixed parameters from the U-section
profile design, the control and noise factors (as
defined above) the robust design algorithm (Figure
3) was applied for the calculation of the optimum
energy efficient solution, while the quality
constraints were checked through finite elements
modelling.
7.1. ROBUST DESIGN OPTIMIZATION FOR
ENERGY EFFICIENCY
Within the robust design algorithm, the analytical
model for the calculation of the energy efficiency
indicator is used. For the control factors, the L16(4^5)
orthogonal array was selected, as there are five
factors with four levels each, yielding 16 runs.
Regarding the noise factors, the L9(3^4) orthogonal array
was selected, as there are four factors with three
levels each, yielding 9 runs each (Phadke, 1989).
As each row of the control factors orthogonal array
is used as input to the noise factors
orthogonal array, the total number of required
runs is 144 (16x9). From each of the 16 sets of 9
runs, the mean value and the variance can be
calculated. For the S/N ratio of the mean and
variance values (Table 5), the larger-the-better type
problem was selected, as the quality characteristic is
continuous and non-negative and the goal is to be
maximized (Phadke, 1989); it can be calculated as:

S/N = -10\log_{10}\left[\frac{1}{\mu^2}\left(1 + \frac{3\sigma^2}{\mu^2}\right)\right] \qquad (5)
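The following minimal Python sketch evaluates Equation (5); as a sanity check, feeding it the mean and variance of row 1 of Table 5 returns approximately -15.3, in line with the tabulated S/N ratio.

import math

def sn_larger_the_better(mean, variance):
    # Equation (5): S/N = -10*log10( (1/mu^2) * (1 + 3*sigma^2/mu^2) ).
    return -10.0 * math.log10((1.0 / mean ** 2)
                              * (1.0 + 3.0 * variance / mean ** 2))

print(sn_larger_the_better(0.1722, 0.00008484))  # approx. -15.3 (Table 5, row 1)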

Table 5 Mean, variance and S/N ratio for the energy efficiency indicator
Row No.   Mean (μ)   Variance (σ²)   S/N ratio
1 17.22% 0.00008484 -15.314
2 13.95% 0.00014558 -17.203
3 8.40% 0.00007021 -21.640
4 5.00% 0.00002480 -26.150
5 13.56% 0.00022106 -17.506
6 9.69% 0.00007665 -20.377
7 21.12% 0.00005534 -13.523
8 18.53% 0.00003214 -14.655
9 22.78% 0.00011877 -12.878
10 25.94% 0.00011143 -11.742
11 7.05% 0.00002520 -23.103
12 11.88% 0.00003294 -18.532
13 13.75% 0.00003407 -17.257
14 9.12% 0.00001559 -20.825
15 9.48% 0.00001811 -20.493
16 5.06% 0.00000499 -25.936

Based on the calculated S/N ratios, the analysis of
means (ANOM) was implemented (Table 6).
The plot of means for the control factors is
depicted in Figure 6.

Table 6 Analysis of Means for the energy efficiency S/N ratio response
Control Factors              Level 1   Level 2   Level 3   Level 4
A - Bending angle concept    -20.0     -16.5     -16.5     -21.1
B - Roller radius            -15.7     -17.5     -19.6     -21.3
C - Line velocity            -21.1     -18.4     -17.5     -17.1
D - Rolls inter-distance     -17.0     -18.0     -19.2     -19.9
E - Rolls gap                -15.5     -17.3     -19.4     -21.8
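The analysis of means behind Table 6 averages the 16 S/N ratios over the runs where each control factor sits at each of its four levels; a minimal Python sketch of this step is shown below, with the array layout assumed rather than reproduced from the paper.

import numpy as np

def analysis_of_means(sn_ratios, level_assignment, n_levels=4):
    # sn_ratios: NumPy array with the S/N ratio of each of the 16 runs.
    # level_assignment: NumPy array with the level (0..3) of one factor
    # in each run, as prescribed by the L16(4^5) array (assumed layout).
    means = [sn_ratios[level_assignment == lv].mean() for lv in range(n_levels)]
    return means, int(np.argmax(means))  # per-level means, optimum level index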




Figure 6 Plot of means for the control factors for the S/N ratio
of the energy efficiency response
Based on the plot of the factor effects, the level with
the maximum value is the optimum level for each
factor and the level with the minimum value is the
worst. The optimum and worst sets of factor
levels were calculated (Table 7), and the
response characteristics (production rate, total
power input, energy efficiency and number of roll
stations) were calculated for both.

Table 7 Optimum and worst levels based on the summary statistic of the mean for the power response
Control Factors                     Optimum     Worst
A - Bending angle concept           A2 (C10)    A4 (C20)
B - Roller radius (R)               B1 (0.1)    B4 (0.22)
C - Line velocity (U)               C1 (0.31)   C4 (0.1)
D - Rolls inter-distance            D1 (0.48)   D4 (0.54)
E - Rolls gap (% of t)              E1 (1)      E4 (3.1)
Production rate (parts/min)         192.41      84.9
Total power input (W/m)             29.75       43.53
Energy efficiency indicator (%)     34.56       4.6
No. of roll stations                10          6

Moreover, the Analysis of Variance (ANOVA) was
implemented for the control factors, and the
responsibility of each control factor for the energy
efficiency indicator was calculated (Figure 7). The
main factors affecting the energy efficiency of the
cold roll forming process are the rolls gap, the roller
radius and the bending angle increment, with 30.96%,
24.77% and 23.62% responsibility respectively. The roll
forming line velocity and the rolls inter-distance
affect the energy efficiency by 13.79% and
6.86% respectively.


Figure 7 - Responsibility (%) of each factor on energy
efficiency indicator response
7.2. COLD ROLL FORMING PROCESS
QUALITY CONSTRAINTS
Based on the optimum set of process parameter
levels, as calculated and shown in Table 7, the Finite
Elements Model (FEM) was implemented and the
cold roll forming process was simulated. An
explicit dynamics finite element model was used
for the cold roll forming process simulation, as
discussed in (Paralikas et al, 2010a), utilizing
shell elements for the deformable strip and rigid
rolls. The quality constraints were checked utilizing the
FEM of the cold roll forming process with the
optimum set of process parameters (control factors)
and the mean values of the noise factors. Such quality
constraints cover the elastic longitudinal strains at
the edge of the profile along the roll forming
direction, the mapping of the major and minor strains on
the material's FLD diagram of the roll formed strip,
and the prediction of the thickness reduction through the total
strain in the thickness direction.
Regarding the FLC and the major-minor strains, the
FEM results showed that the maximum major
strain is below the critical forming limit point, as
also shown in Figure 8.

\varepsilon_1 = 0.2305 < FLC_0 = 0.235 \qquad (6)


Figure 8 - Major and minor strains of strip after roll
forming processing


Longitudinal strains at the edge of the profile were
also plotted along the roll forming direction (Figure
9). All peaks of the longitudinal strains, in compression
and in tension, are within the material's elastic limits:

0.1715\% > \varepsilon_{peak@edge} > -0.1715\% \qquad (7)


Figure 9 - Elastic longitudinal strains at the edge of the
flange along roll forming direction
Thickness reduction was checked through the
mapping of the strains in the thickness direction along
the cross section of the U-section profile (Figure
10). The strains in the thickness direction are within the
material's limits:

0 < \varepsilon_{3TS,peak} = 17.5\% < \varepsilon_{UEP} = 17.9\% \qquad (8)




Figure 10 - Total strain in thickness direction along cross
section of profile (half profile due to symmetry)
8. CONCLUSIONS
A robust design optimization methodology for the
energy efficiency of the cold roll forming process, utilizing
a hybrid scheme of analytical and finite elements
models, is proposed. An analytical model is used
within the robust design algorithm for the
calculation of the energy efficiency indicator. The
analytical model results for the roll forming mill
motor consumption are verified by the experimental
investigation of the energy consumption of a roll
forming mill by (Lindgren, 2007). The optimum
energy efficiency indicator was calculated based on
the proper selection of the control factors through the
analysis of means. The analysis of variance was
then implemented for the calculation of the
responsibility of the control factors for the energy
efficiency indicator. Based on the optimum solution
for the energy efficiency, the Finite Elements Model
was created and the quality characteristic constraints
were checked. Through this final step, the profile's
feasibility and quality were monitored.
A major innovation of the current study is the
introduction of an analytical model for the calculation
of the energy efficiency of the cold roll forming process
within the robust design optimization,
through orthogonal arrays. The robust design
simulation consists of a total of 144 runs (16 x 9).
Using only FEM would be rather impractical, as an
FEM run is computationally expensive and takes
about 1.5 days to provide a solution.
Utilizing a computationally efficient and inexpensive
analytical model for the energy efficiency,
only one FEM run is required for checking the
feasibility of the optimum solution under the specific
quality characteristic constraints. This implies that
the current methodology is practical and
provides a guide towards the application of the
optimum energy efficient solution for the cold roll
forming process.
A U-section profile was demonstrated for the
calculation of a feasible solution for the optimum
energy efficiency. The optimum energy efficient
solution was calculated through the robust design
algorithm and the control and noise factor
orthogonal arrays. The production rate was 192.41
parts per minute, and the total power input to the
electric motor was 29.75 Watts per meter of the
produced profile, which yielded an energy
efficiency indicator of 34.56%. The number of roll
stations required for the optimum energy efficient
solution was ten. The responsibility of the cold roll
forming process parameters (control factors)
for the energy efficiency was calculated,
with the major parameters being the rolls gap, the roller
radius and the bending concept, with 30.96%,
24.77% and 23.62% responsibility respectively. The
feasibility of the optimum energy efficient solution was
checked through FEM, showing that the
parameter levels provide a solution within the specific
quality constraints for the major-minor strains, the
elastic longitudinal strains at the edge along the roll
forming direction and the thickness reduction.
9. ACKNOWLEDGMENTS
The work reported in this paper was partially
supported by the CEC / FP6 NMP Programme,
"Integration Multi-functional materials and related
production technologies integrated into the
Automotive industry of the future" - FUTURA
(FP6-2004-NMP-NI-4-026621).
REFERENCES
Arcelormittal, Hot rolling according to requirements,
http://www.arcelormittal.com/gent/prg/selfware.pl?id_
sitemap=107&language=EN
Donglai, W., Zhenshan, C., Jun, C., Optimization and
tolerance prediction of sheet metal forming process
using response surface model, Computational
Materials Science, vol. 42, 2008, pp. 228-233,
DOI:10.1016/j.commatsci.2007.07.014
Halmos, G.T., Roll forming handbook, CRC press,
2006
Hou, B., Wang, W., Li, S., Lin, S., Xia, Z. C., Stochastic
analysis and robust optimization for a deck lid inner
panel stamping, Materials and Design, 31, 2010, pp:
1191-1199
Hu, W., Yao, L-G., Hua, Z.-Z., Optimization of sheet
metal forming processes by adaptive response surface
based on intelligent sampling method, Journal of
materials processing technology, vol. 197, 2008, pp:
77-88, DOI:10.1016/j.jmatprotec.2007.06.018
Jurecka, F., Robust Design Optimization Based on
Metamodeling Techniques, PhD thesis in Technical
University of Munich, 2007
Kunitsyn, G. A., Golubchik, E. M., Smirnov, P. N.,
Ongoing control of the transverse thickness
fluctuation of cold-rolled high-strength strip, Steel in
Translation, Vol. 39, 10, 2011, pp: 920-924, DOI:
10.3103/S0967091209100192
Lindgren, M., Experimental investigations of the roll
load and roll torque when high strength steel is roll
formed, Journal of Materials Processing Technology,
vol. 191, 2007, pp. 44-47
Paralikas, J., Salonitis K., Chryssolouris G.,
Optimization of the roll forming process parameters -
A semi empirical approach, International Journal of
Advanced Manufacturing Technology, vol. 47, 2010a,
pp: 1041-1052, DOI 10.1007/s00170-009-2252-z
Paralikas, J., Salonitis K. and Chryssolouris G.,
Investigation of the effect of roll forming pass design
on main redundant deformations on profiles from
AHSS, International Journal of Advanced
Manufacturing Technology, vol. 55, 2010b, DOI:
10.1007/s00170-011-3208-7
Paralikas, J., Salonitis K., Chryssolouris G.,
Investigation of the roll forming process parameters
effect on the quality of an AHSS open profile,
International Journal of Advanced Manufacturing
Technology, 44, 2009, pp: 223-237, DOI
10.1007/s00170-008-1822-9
Patterson, M. G., What is energy efficiency? Concepts,
indicators and methodological issues, Energy Policy.
Vol. 24, No. 5, 1996, pp: 377-390
Phadke, M S., Quality Engineering using Robust
Design, Prentice Hall, Englewood Cliffs, New Jersey,
1989
Tang, Y., Chen, Y., Robust design of sheet metal
forming process based on adaptive importance
sampling, Struct Multidisc Optim, vol.: 39, 2009, pp:
531-544, DOI 10.1007/s00158-008-0343-3
Wei, L., Yuying, Y., Multi-objective optimization of
sheet metal forming process using Pareto-based
genetic algorithm, Journal of materials processing
technology, vol. 208, 2008, pp. 499-506,
DOI:10.1016/j.jmatprotec.2008.01.014
Zeng, G., Li, S. H., Yu, Z. Q., Lai, X. M., Optimization
design of roll profiles for cold roll forming based on
response surface method, Materials and Design, vol.:
30, 2009, pp. 1930-1938,
DOI:10.1016/j.matdes.2008.09.018

AN ANALYSIS OF HUMAN-BASED ASSEMBLY PROCESS FOR IMMERSIVE
AND INTERACTIVE SIMULATION
Loukas Rentzos
Laboratory for Manufacturing Systems and
Automation, Dept. of Mechanical Engineering
and Aeronautics, University of Patras, Greece
rentzos@lms.mech.upatras.gr
George Pintzos
Laboratory for Manufacturing Systems and
Automation, Dept. of Mechanical Engineering
and Aeronautics, University of Patras, Greece
pintzos@lms.mech.upatras.gr


Kosmas Alexopoulos
Laboratory for Manufacturing
Systems and Automation,
Dept. of Mechanical
Engineering and Aeronautics,
University of Patras, Greece
alexokos@lms.mech.upatras.gr
Dimitris Mavrikios
Laboratory for Manufacturing
Systems and Automation,
Dept. of Mechanical
Engineering and Aeronautics,
University of Patras, Greece
mavrik@lms.mech.upatras.gr
George Chryssolouris
Laboratory for Manufacturing
Systems and Automation,
Dept. of Mechanical
Engineering and Aeronautics,
University of Patras, Greece
xrisol@lms.mech.upatras.gr
ABSTRACT
Assembly simulation, with the help of Virtual Reality (VR), has become a very challenging technology
due to its highly interactive context, imposed by a number of functions and the need for realism.
This study focuses on the development of an interactive simulation prototype for use in human-
based assembly operations. The design aims at improving the ease and efficiency of VR use, in the
early stages of a product's lifecycle, by engineers who can exploit its benefits without having
expertise in the VR field. The development of this prototype is tested and evaluated
through its implementation on a use case found in the daily practice of the aerospace industry. The
development follows a platform-independent architecture, in order to facilitate its possible integration
with different VR platforms, and it is based on usability guidelines and a taxonomy-
based classification. Based on the requirements provided by the aerospace industry, a validation
part is compiled for the evaluation of the interactive prototype developed, in relation to human-
centred specifications.
KEYWORDS
Engineering Simulation, Process Simulation, Virtual Assembly

1. INTRODUCTION
The manufacturing industry has turned to the
Virtual Reality (VR) technology, in order for the
time and cost of their PLM activities to be reduced
and particularly, the cycle time and cost, starting
from the conceptual phase of the product or process
development up to its production phase. In general,
VR is described as a computer-generated
environment that simulates the real world besides
imaginary worlds. Virtual Reality (VR) provides the
means by which humans visualize, manipulate, and
interact with computers and extremely complex data
(Chryssolouris, 2006). These computer-generated
environments, called Virtual Environments (VE),
consist of three-dimensional objects. VR users
interact with the VE or its content (e.g. virtual
objects) through 3D Interaction techniques (3DITs)
(Flasar, 2000).


VR is used in many engineering applications
such as product and process design, modelling,
manufacturing, training, testing and digital
validation. Virtual Manufacturing (VM) is a
technology that mimics real manufacturing
processes with models, simulations and artificial
intelligence (Lin et al, 1995). Another type of use of
VR in manufacturing is Virtual Assembly which is
defined as the use of computer tools in assisting
with assembly-related engineering decisions
through analysis, prediction models, visualization,
and presentation of data without creating the
product or the support processes (Mujber et al,
2004). The VR technology is also very popular for
validating through experimentation the ergonomics
of products and processes (Bi, 2010).
Traditionally, the verification of an assembly
process is performed in wooden or high-fidelity
scale model mockups. This can be a time and labour
intensive procedure. With the use of VR, more
realistic computer generated assembly environments
can be created and by using 3D Interaction
Techniques (3DITs), products can be manipulated,
assembled and disassembled as required (Wan et al,
2004). The application of immersive virtual reality
in a virtual model of the assembly environment, can
help design and evaluate manufacturing tasks and
different possible sequences, and choose the best
alternatives (Zhao and Madhavan, 2006). Also, by
immersing a real person into the virtual
environment, which interacts directly with the
elements of a simulated virtual world, ergonomic
data can also be acquired (Chryssolouris, 2006).
Assembly simulation, through the use of VR, is
very challenging due to the fact that the interactions
should be as natural as possible. In other
applications, the intuitiveness, or easiness to
perform the interaction, is usually preferred but of
course that is not the case when the goal is to
simulate a real process. During a VR session, the
user does not only interact with the virtual
environment but also with the system (Zachmann
and Rettig, 2001). The development of natural
interactions is a big challenge, because their
purpose is not just to facilitate the human-computer
interaction but also to ensure that the interaction
imitates the real function or operation, as
realistically as possible. This means that during the
design and development of a 3DIT, the designer
should respect the constraints that apply to the real
world and at the same time make the technique as
robust as possible. In (Chryssolouris et al, 2002), a
virtual experimentation environment was developed
as a planning and training tool for machining
processes. This approach involved the virtual
modelling of machining processes within a Virtual
Machine Shop environment. This environment
enabled an immersive and interactive process
performance and showed the potential of the
approach to provide significant advantages in this
field of applications against current desktop
simulation approaches. A study by (Rubio et al,
2005) presented a methodology of virtual models
being developed for manufacturing simulation.
According to the authors of (Connacher and
Jayaram, 1997), the acceptance and success of a
Virtual Assembly (VA) simulation application is
dependent on five factors. The first factor has to do
with the capacity of the application to enable
engineers to gain perspective on assembly issues.
The second factor implies that the system should
support the engineer's decision-making activities.
The third and fourth factors suggest that the
technology should be applicable in real production
and easy enough to be used on a daily basis.
The fifth and final factor that should be taken into
consideration refers to the accuracy and fidelity of
the information derived from the simulation.
There is a broad spectrum of interaction
metaphors for manipulating objects in VEs but none
of them is considered as the ideal or dominant
3D manipulation technique. The reason for this is
that even if a technique is perfectly suited for a
specific application, it may not necessarily be the
best one for another. In almost all cases, any choice
of a 3DIT is the result of trade-offs, such as realism
over usability and ease of use in respect to the
accuracy of the simulation. Managing these trade-
offs for a more desirable outcome is one of the keys
to a good 3DIT design (Ottosson, 2002). Natural
user interactions play a key role in virtual assembly
because high interactivity is an intrinsic
characteristic of any assembly process and
therefore, in virtual reality, this feature is of the
utmost importance (Wan et al, 2004). In assembly
simulations, the most common techniques are the
ones that have to do with grasping and manipulating
a virtual object. When it comes to grasping, Pitarch
(2010) classifies the actions concerning grasping
objects into pre-grasp, grasp and after-grasp. In
(Jones, 1997) a distinction is made between five
types of grasping; Precision, Cigarette, 3-point
pinch, Power and Gravity grasping. However, using
many techniques can be confusing to the user and
thus prevent realism. In (Wan et al, 2004) the
authors divide the types of grasping into power
grasping and precision grasping. A mechanism was
developed that could recognize grasping actions by
using auxiliary virtual objects (Pappas et al, 2003).
According to the study, this technique resulted in
improving the user interaction realism within the
virtual process environment and the minimization of
the necessary time for a task to be executed. There
are other studies, also focusing on the simulation of
the grasping task, (Holz et al, 2008, Weber et al,
2006), however, all have come to the conclusion


that, no matter how abundant the literature may be,
there is no perfect model for grasping simulation. It is
a very complex task which, in most cases, is
adjusted to the requirements of the application.
There are cases in assembly simulation where
objects should be placed accurately and within
a certain time-frame.
impossible for all natural limitations to be
respected, mainly due to the absence of haptic
feedback and the inherent inaccuracy by which
multimodal devices are controlled, resulting in a
poor performance during the positioning or removal
of virtual objects. On top of that, feedback of the
product's weight and of the collision with other objects
(when the moved object collides with another object) is
still a problem for the VR technology (Ottosson,
2002). Consequently, the parts can hardly be placed
in their exact fitting position. In (Mavrikios et al,
2006) a solution to this problem was proposed
through the development of a function, which
released the virtual object from the user's hand as
soon as a good position had been achieved.
2. INTERACTIVE ASSEMBLY
FEATURES
The classification of user interaction techniques and
a survey on the existing technologies and methods
in that field, are mandatory for the design and
development of advanced interaction metaphors.
The classification, followed in the conceptualization
and design of this study, is based on a common
categorization, provided by technical literature
(Bowman et al, 2004). Referring to this, preselected,
common interaction techniques are divided into
main categories such as travel, selection,
manipulation, system control and symbolic input
techniques.
In this study, the concept of the decomposition
technique (Bowman et al, 2004) has been employed.
In this method, the techniques for a
particular task can be decomposed into sub-
techniques, which are called technique components
(TC).
this study for the development of the interaction
metaphors, is proposed by (Bowman and Hodges,
1999, see Figure 1) and suggests that in order for
the differences in modelling between 3DITs to be
better comprehended, the designer should arrange
them into categories based on various parameters.
The most important advantage of this approach is
the summative evaluation that compares technique
components rather than holistic techniques. After
the decomposition, each subtask can be translated
into functional components of its implementation
code. These components are designed to be
functions that can be easily facilitated by various
VR platforms so as for the techniques to be platform
independent.


Figure 1 Methodology of design, evaluation and
application of the 3DIT proposed by (Bowman and Hodges,
1999)
2.2. DRILLING PROCESS METAPHOR - DP
The Drilling Process interaction metaphor (DP) is a
specific technique used for the drilling process
simulation that allows an immersed user to virtually
drill holes on a virtual work-piece. The user can
modify certain parameters of the drilling process by
using a multimodal device. The device's structure
allows the user to dynamically change the values of
those parameters while the process is being
executed. Ultimately, the user controls the tool's
velocity when it is inserted into the work-piece.
The device used for this technique is the Wii
Nunchuk, which is basically a 3D mouse with two
buttons and a joystick. A tracker is also attached to
the device so that its position and orientation can be
tracked. The two buttons are used for adjusting the
spindle rotational speed, while the joystick on the
for modulating the actual velocity of the drill tip
when it is inserted into the object. The lower value
of this parameter is zero and the maximum is
calculated depending on the spindle rotational
speed. The hole in the model is represented with a
cylinder and is modified according to the input from
the technique. The metaphor can be activated after
the tool is near, in the right position and has the
proper orientation. If the tool stops being in the
proper pose (position and orientation) the Drilling
metaphor is deactivated.
2.2.1. Task decomposition
The drilling task is divided into three sub-tasks. The
first task has to do with the definition of the drilling
spot on the work-piece. The second task has to do
with the testing of the tool's position and the third
one with the modification of the model (in this case
hole-drilling) using various inputs from the user.
The architecture of the metaphors is generated from
the description of the tasks. For the definition of the
working spot, a common raycast interaction
technique (a technique where a ray is cast from
the user's hand or tool) can be used, with an
intersection test incorporated. A position control
technique is used for testing the tool's placement
relative to the spot to be drilled. This is done by
calculating the gradient of the surface at the specific
point. Finally, for the model's geometrical
modification, another technique is created that
modifies the model by translating the input from the
user's peripheral device. The task decomposition of
the drilling process metaphor is graphically depicted
in Figure 2.


Figure 2 Drilling process task decomposition
2.2.2. Implementation
The DP works by translating signals, received by an
input device (representing the virtual drill), into
modifications of the hole geometry. The IM is
initiated when it receives a positive input from the
Magnet Metaphor (see section 2.3), which checks whether or
not the tool is in the right pose to perform the
drilling operation. The device used in the
application comprises two buttons and a small
joystick, as shown in Figure 3.
The upper button of the device is used for
increasing the value of the spindle rotation
parameter and the lower one for decreasing it. The
joystick, when moved forward, increases the value of
the insertion velocity and decreases it when moved
backwards.
For a given type of drilling simulation, the
predefined amplitude of spindle values can be used.
Values inside this amplitude are given to the
parameter, from 0 to a maximum rotational speed.
The spindle rotational speed is controlled by the two
buttons of the device (see Figure 3).



Figure 3 Mapping of functions on the device
The second parameter adjusted by the user is the
actual velocity at which the drill tip is inserted into the
object. The minimum value of this parameter is zero
(u_{min}(t) = 0) and the maximum is calculated as shown
in Equation (1).

u_{max}(t) = F \cdot N(t) \qquad (1)

where:
F (mm/rev): the feed rate, being the velocity at which the tool is fed into the work-piece, expressed in millimeters per revolution of the spindle. The feed rate varies depending on the drill and has a constant value.
u_{max}(t) (mm/min): the maximum velocity at which the drill tip can be inserted into the work-piece, expressed in millimeters per minute.
N(t) (rpm): the spindle rotational speed, expressed in revolutions per minute.

The depth of the hole is calculated as shown in
Equation (2).

d(t_k) = \sum_{i=0}^{k} u(t_i)\, t_{const} \qquad (2)

where:
i = 0, 1, 2, ..., k
d(t_k) (mm): the depth of the hole at a given time t_k.
u(t_i) (mm/s): the velocity at which the drill bit enters the hole at any given time t_i.
t_{const} (ms): the time constant, t_{const} = t_{i+1} - t_i, representing the step followed in order for the calculations to be made.



The geometry of the space in the hole is
represented by a cylinder whose length b(t_k) is
changed as shown in Equation (3).

b(t_k) = 1 - \frac{d(t_k)}{L} \qquad (3)

where:
L (mm): the initial length of the hole geometry.

The geometry of the hole is scaled down along its
main axis according to the values of b(t_k). When
the depth reaches its maximum value and b(t_k)
reaches zero, the geometry stops scaling down and
is deleted.
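Equations (1)-(3) chain together into a simple update loop; the Python sketch below is an illustrative reading of them (a fixed time step in seconds is assumed, and the unit conversions of a real implementation are omitted).

def u_max(feed_rate_mm_per_rev, spindle_rpm):
    # Eq. (1): maximum insertion velocity of the drill tip (mm/min).
    return feed_rate_mm_per_rev * spindle_rpm

def hole_depth(insertion_velocities_mm_per_s, t_const_s):
    # Eq. (2): depth as the sum of the velocity samples times the fixed step.
    return sum(insertion_velocities_mm_per_s) * t_const_s

def cylinder_length_fraction(depth_mm, initial_length_mm):
    # Eq. (3): remaining fraction of the hole-geometry cylinder; the
    # geometry is deleted once this fraction reaches zero.
    return 1.0 - depth_mm / initial_length_mm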


Figure 4 Hole geometry modification

2.3. MAGNET METAPHOR
2.3.1. Task decomposition
The Magnet metaphor is an interaction metaphor
designed to test when a predefined acceptable
position and orientation have been achieved for the
placement of an object in a Virtual Environment.
For example, in the case of hole-drilling, the tool
should be placed in the right position relative to
the drilling spot. A virtual object is moved inside a
virtual environment, based on the tracking data,
translated into coordinates, received by the
corresponding VR peripheral device found in the
hand of the immersed user (e.g. wand). When the
distance between the object and the predefined
position is within a certain threshold, the object's
texture is set to blue so as to inform the user that he
is approaching the magnet spot. When the object's
orientation and that of the test are also close, the
tool's texture is set to green and the interaction
technique places the object in the magnet's position
(see Figure 5). The algorithm of the metaphor
is presented in Figure 6.


Figure 5 Magnet metaphor concept

Figure 6 Magnet metaphor task decomposition
2.3.2. Implementation
The Magnet Metaphor, which is used for the precise
positioning of virtual objects in assembly
applications, works by testing the pose of an object
relative to a predefined one, called the magnet
pose or test pose.
The first test that the metaphor executes is the
distance test, which calculates the distance between
the object and the magnet's position. If the distance
is under a certain threshold, the orientation test is
initiated. Following the logic of the previous test, if
the orientation is under a certain threshold (fixed
vector), the object is placed in the desired position.
This metaphor is most suitable for operations such
as positioning objects (e.g. parts to be
assembled) in virtual mockups. It also has to do
with placing the tool that the user is using (e.g.
screwdriver) in the proper position to perform a
task. Part positioning works when the object is
controlled by the movements of the user's hand.
While the object is being moved, the proximity test
runs, and when all conditions are satisfied (proper
position and orientation), the part is snapped to the
magnet position. In order for the object to be
released, the user needs to perform either a release
gesture, when a virtual hand is used, or to release
the button, when a 3D mouse is used.
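A minimal Python sketch of this two-stage test is given below; the threshold values, the axis-vector representation of orientation and the string return codes are illustrative assumptions, not the implemented system's interface.

import numpy as np

def magnet_test(obj_pos, obj_axis, magnet_pos, magnet_axis,
                dist_threshold=0.05, angle_threshold_deg=10.0):
    # Stage 1: distance test between the object and the magnet position.
    if np.linalg.norm(np.subtract(obj_pos, magnet_pos)) > dist_threshold:
        return "far"                      # no feedback yet
    # Stage 2: orientation test, as the angle between the two axes.
    cos_a = np.dot(obj_axis, magnet_axis) / (
        np.linalg.norm(obj_axis) * np.linalg.norm(magnet_axis))
    angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    # "near" -> blue texture feedback; "snap" -> green texture, place object.
    return "snap" if angle <= angle_threshold_deg else "near"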
2.4. ADAPTIVE FINGER GRASPING
METAPHOR - AFG
2.4.1. Task decomposition
The Adaptive Finger Grasping interaction metaphor
(AFG) simulates the process of grasping real objects
with one's hand. The immersed user, by
giving input through a data-glove, can select and
manipulate objects in the VE.




Figure 7 AFG task decomposition
The AFG requires different conditions for
grabbing small objects (e.g. screws) and bigger ones
(e.g. tools), as it is illustrated in the decomposition
of the AFG algorithm in Figure 7.
2.4.2. Implementation
The geometries that compose the virtual hand are
usually more complex and thus intersection tests
between those components and the virtual objects
would be computationally intensive. Instead of
executing direct collision tests, simpler geometries
can be used to make the process simpler and more
effective. Invisible cylindrical geometries are
attached to the fingers of the user's hand (Figure 8).
The collision detection test takes place between
these geometries and the virtual objects. This also
makes it easier for an object to be selected and helps
overcome the difficulties that data-gloves usually
bear with their signal mapping.


Figure 8 Virtual hand with invisible cylinders
When an intersection occurs between the index
cylinder and one of the objects in the VE (the ones
defined as movable), the technique calculates the
volume of the object's bounding box so as to get a
rough estimation of its size. A value defined by the
user is used as the threshold between small and
bigger objects. If the object is defined as a small one
(volume under the threshold value), the 3DIT will
require one more intersection, from the
thumb cylinder, in order for the selection to be
activated (see Figure 9); if it is defined as a
bigger object, the technique will require intersections
from both the thumb and the middle finger cylinders
(Figure 10). Once the collision requirements have
been met, the object is attached to the virtual
palm and starts following the user's hand. If one or
more of the intersection tests gives a negative output
(thumb cylinder and/or middle finger cylinder), then
the object will stop following the user's hand.
The external cylinders are very useful for
picking up smaller objects, since any contact
between them and the objects of the virtual
environment happens more easily than when the
virtual hand's geometry is used.
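The selection rule just described reduces to a small predicate over the finger-cylinder intersection flags and the bounding-box volume; the Python sketch below is an illustrative rendering of it, with the volume threshold as an assumed placeholder.

def afg_selection(index_hit, thumb_hit, middle_hit, bbox_volume,
                  small_object_threshold=1.0e-4):
    # Index and thumb contact are always required to start a grasp.
    if not (index_hit and thumb_hit):
        return False
    if bbox_volume < small_object_threshold:
        return True        # small object (e.g. screw): two fingers suffice
    return middle_hit      # bigger object (e.g. tool): middle finger too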


Figure 9 AFG for small objects

Figure 10 AFG for medium sized objects
2.5. POOL TO HAND INSTANTIATION
METAPHOR - PHI
2.5.1. Task decomposition
The PHI metaphor is designed for use in cases where
the immersed user needs to select and manipulate
objects that are small in relation to the size of the virtual
hand. In addition, the problem becomes more
complex when the virtual objects, besides their small
size, are also great in number and confined in a condensed
space. In such cases, the intersection mechanism
usually used for the manipulation of virtual objects
does not work properly. For example, in a case where
an assembly operator wants to pick rivets from
virtual bags containing such objects, it is difficult to
execute a realistic manipulation mechanism because
of the great number of intersection tests that take
place when the virtual hand collides with
the volumes of the objects. The Pool To Hand
Instantiation metaphor (PHI) concerns the picking
of a virtual object from a geometry that serves as a
pool of such objects. To pick an object from the
pool, the user simply puts two fingers into the virtual
bag (so as to cause an intersection) and then pulls
them out. When the fingers and the bag stop
intersecting (as the user removes his fingers from
the bag), the object is created on the virtual hand
(see Figure 11 and Figure 12).




Figure 11 Pool to Hand instantiation metaphor
decomposition

Figure 12 Pool to Hand Instantiation Metaphor
2.5.2. Implementation
The Pool To Hand Instantiation metaphor (PHI)
runs an intersection test between the geometry
representing a pool (e.g. a bag) and the index finger
of the virtual hand. When the output of the test is
positive (as the user puts his hand in the virtual bag),
a file from a resource folder is copied to the VE,
which is the geometry of a small object such as a
rivet. In order for the metaphor to finalize the
procedure, the user needs to pull his virtual hand
out, thus providing a negative output to the
intersection test.
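The PHI trigger is essentially an edge detector on the intersection test: the object is instantiated when the finger leaves the pool. A minimal Python sketch, with the spawn callback as an assumed placeholder, could look as follows.

class PoolToHandTrigger:
    def __init__(self, spawn_object):
        # spawn_object: callback that copies the stored geometry
        # (e.g. a rivet) into the VE and attaches it to the virtual hand.
        self._spawn_object = spawn_object
        self._was_inside = False

    def update(self, finger_intersects_pool):
        # Instantiate on the inside -> outside transition only.
        if self._was_inside and not finger_intersects_pool:
            self._spawn_object()
        self._was_inside = finger_intersects_pool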
2.6. 3D ANNOTATIONS
2.6.1. Task decomposition
The 3D Annotations metaphor serves
the user by providing information about various
parameters of the scene, or about the actions of the user,
in the VE. The annotations are pop-up screens that
follow the user's movements and remain stable while
the user is performing an operation. For example, in
a drilling process, the 3D Annotations metaphor is
used to provide the user with information
regarding the process parameters (e.g. depth of the
hole, velocity of the drilling or the spindle speed).
The decomposition of 3D annotations is shown in
Figure 13.
2.6.2. Implementation
The information is shown on small white screens, as
shown in Figure 14. These screens are three-
dimensional and serve as tablets providing titles,
values and, if necessary, units (e.g. mm/sec). The 3D
object that is used for the screen follows the user's
movement and is always perpendicular to the user's
viewing plane. The 3D Annotations are more useful
than normal data communication techniques
when it comes to executing operations immersed
in a VE, because, in contrast to most
techniques, the screen is not always in the user's
point of view, unless the user chooses so, and it is
not stationary but moves as the user does.


Figure 13 3D Annotations task decomposition

Figure 14 3D Annotations during a drilling process
3. AIRCRAFT ASSEMBLY TEST CASE
SIMULATION
The test case scenarios, used for the validation of
the metaphors developed, were provided by the
aerospace industry. The use-cases involved two
very common assembly operations in the everyday
practice of human-based aircraft assembly
manufacturing. These operations are the hole-
drilling and riveting tasks. Both of these assembly


tasks are performed in the junction areas of the
fuselage components of the aircraft. In the drilling
use case the operator needs to drill holes in the
fuselage. It takes about 30 seconds to drill one
single hole and the operator has to respect an
established drilling sequence. Therefore, the virtual
simulation model should provide a suitable
environment for testing both the operation of the
drilling task and the sequence through which it is
executed. In the riveting task, there are two
operators cooperating for the execution of this
assembly process. The point of this task is that
rivets be used for the mechanical fastening of the
fuselage components. In this task, outside the
fuselage, there is an operator who is responsible for
putting the rivet (lock-bolt) into the hole. The
second operator is located inside the fuselage and is
responsible for putting a ring in the rivet-gun for
swaging the rivet, after the latter has been inserted
into the hole by the first operator. In the use case,
there has to be cooperation between the two
operators in order for them to finalize the riveting
process in the holes previously created by the hole-
drilling process. In this use-case, therefore, it is
important that a simulation environment be created
so as for the trainees to be able to exercise their
cooperation abilities, while a process engineer will
be able to validate the sequencing and ergonomics
characteristics of the process.
3.1. HOLE-DRILLING USE-CASE
For the hole-drilling operation simulation, two
interaction techniques were utilized: the Magnet and
the DP interaction metaphors. In the sequence
described, these interaction techniques allowed the
user to drill holes in the fuselage and, by controlling
the drilling process parameters, to have a direct
influence on the time that the process required. The
workflow of the operation, along with the utilized
techniques, can be seen in Figure 17.


Figure 15 Drilling test-case, executed on a virtual aircraft
fuselage.
The user places the tool on to the drilling spot and
performs the operation. Then he/she is able to move
to the next drilling spot or define a new one, upon
completing the previous hole-drilling.


Figure 16 Riveting test-case on the junction section of the
fuselage.

Figure 17 Workflow diagram for the hole-drilling process
task, along with the corresponding IMs.
3.2. RIVETING USE-CASE
During the riveting task, the operators work in pairs:
one operator is located inside the aircraft while the
other one is outside. In the riveting operation the
outside operator uses the PHI, AFG and Magnet
interaction metaphors to perform the rivet
installation and the inside operator uses the PHI,
AFG and Magnet interaction metaphors to finally
swage the rivet. The swaging of the rivet and the
collection of the queue are performed automatically
by pressing a button. When the button is pressed, the
rivet's geometry is modified accordingly. In order
for that to be done, the user should first bring the
tool to the right position, for which he receives
visual feedback from the Magnet metaphor as to
whether the pose of the rivet gun is proper for the


operation to be carried out. The workflow of the
assembly operation can be seen in Figure 18.


Figure 18 Workflow diagram for the riveting process task,
along with the corresponding IMs for both operators.
3.3. EVALUATION AGAINST INDUSTRIAL
REQUIREMENTS
The evaluation, based on the requirements set
by the aerospace industry, is adapted to the
specific applications of the IMs developed and
applied to the specific scenarios. Since the
techniques were developed for carrying out certain
tasks in virtual environments, the results are
qualitative, because the tasks are predefined and
have only one outcome. The following list presents
the industrial requirements and the evaluation
of the respective interaction metaphors used in the
use-case scenarios:
Table 1 Industrial requirements and evaluation of
interactions used in the test case simulation
Place object in proper position/orientation
The Magnet metaphor successfully helps the user place an
object in a predefined position/ orientation.
Grasping object
The requirement of grasping a virtual object is satisfied
through the AFG technique, by adapting the selection
conditions of the technique's algorithm so that the user can
grasp and manipulate objects of different sizes.
Direct manipulation, natural interaction with objects
The PHI simulates with success the process of picking a
small object from a bag/box with many similar objects and
the AFG is also used for grabbing and moving bigger
objects in the environment.
Intuitive transformation of objects
The drilling metaphor successfully helps the user modify
geometry by drilling a hole.
Integration of human body for interaction visualization
All the tasks that are carried out through the use of the
techniques require the use of the users hands. The user
either controls the virtual hand with a data-glove or a tool
through a tracked input device (e.g. wand). Both ways
require the integration of the human body since the tracking
data come from the movement of the user's body.
Interaction metaphors must work for head mounted
displays (HMD) and projection walls
All the techniques can be utilized with the use of HMD or
projection wall displays since none of them depends on the
devices used for visualization purposes.
One hand manipulation
The AFG technique provides one hand manipulation of
objects through selection and movement.
Intelligent interactions with objects
AFG can be characterized as an intelligent technique since it
uses algorithms that recognize object size so as to adapt the
interaction behavior.
Facilitate the execution of complex interactions
The Drilling metaphor helps VR users perform an
immersive simulation of the drilling processes which
requires complex interaction mechanisms.

4. CONCLUSIONS
The interaction metaphors developed in this study
were successfully implemented in use-case
scenarios derived from the everyday practice of
aerospace manufacturing, while certain assembly
processes were simulated for validation purposes.
The techniques were similar or identical to the
real-world interactions, in terms of the natural
behaviour of the interaction and the duration of
the operations.
The techniques were designed and developed
based on the aerospace industry's requirements and
under a formal framework allowing for future use in
different industrial applications. Their design
concept was not task-based but based on modelling
and interpreting the subtasks of each process. For
example, the Adaptive Finger Grasping metaphor is
a technique that can be used in many industrial
applications where the user has to grab and move
objects of varying sizes with one hand.
The main goal of the design and development of
the techniques was to provide realistic interactions
with a VR system in the form of simulating real
world interactions. That goal was accomplished
through following two basic guidelines. The first
one was that the techniques were conceptually
designed by having decomposed each task into sub-
tasks and the second that the techniques were


developed so as to provide realism, in the form of
time and naturalness in interacting with the
environment. Instead of creating techniques for the
major tasks, natural techniques were implemented
that could be used as building blocks for satisfying
the requirements of bigger tasks when combined.
Although the techniques presented in this study
aimed at a realistic representation and simulation of
human-based assembly processes, there is still a lot
of work to be done in order for an exact
representation of the real processes to be
accomplished. Future research should be directed
to the fields of advanced visualization, interaction
techniques and the development of new VR
peripheral devices that provide realistic feedback
without limiting the user's movement. Haptic
feedback is the most realistic, but it still has many
drawbacks, such as the weight of the devices and
their being stationary. Research carried out in the
field of interaction should focus on developing
isomorphic techniques, since they are more suitable
for the simulation of real interactions.
5. ACKNOWLEDGMENTS
This study was partially supported by the project
VISION/AAT-2007-211567, funded by the
European Commission in the context of the 7th
Framework Programme.
REFERENCES
Bi Z.M., Computer integrated reconfigurable
experimental platform for ergonomic study of vehicle
body design, Int J Computer Integrated
Manufacturing, 23/11, 2010, pp. 968-978
Bowman D.A., Hodges L.F., Formalizing the Design,
Evaluation, and application of Interaction Techniques
for immersive Virtual Environments, J. of Vis. Lang.
& Comp., 10, 1999, pp. 37-53
Bowman D.A., Kruijff E., LaViola Jr. J.J., Poupyrev I.,
3D User Interfaces: Theory and Practice, first ed.,
Addison Wesley Longman Publishing Co., Redwood
City, 2004
Chryssolouris G., Manufacturing Systems: Theory and
Practice, 2nd ed., Springer-Verlag, New York, 2006
Chryssolouris G., Mavrikios D., Fragos D., Karabatsou
V., Pistiolis K., A novel virtual experimentation
approach to planning and training for manufacturing
processes-the virtual machine shop, Int. J. of Comp.
Integr. Manuf., 15, 2002, pp. 214 - 221
Connacher H., Jayaram S., Virtual Assembly Using
Virtual Reality Techniques, Comp.- Aided Des., 29,
1997, pp. 575-584
Flasar J., 3D Interaction in Virtual Environment, Proc.
of the 4th Cent. European Semin. on Comp. Graph.,
2000, pp. 21-31
Holz D., Ulrich S., Wolter M., Kuhlen T., Multi-contact
grasp interaction for virtual environments, Journal of
Virtual Reality and Broadcasting, 5/7, 2008
Jones L., Dextrous hands: Human, prosthetic, and
robotic, Dep. of Mech. Eng., Massachusetts Inst. of
Tech., 1997.
Lin E., Minis I., Nau D.S., Regli W.C., Contribution to
Virtual Manufacturing Background Research, Inst.
for Syst. Res., Univ. of Maryland, 1995
Mavrikios D., Karabatsou V., Fragos D., Chryssolouris
G., A prototype virtual reality-based demonstrator for
immersive and interactive simulation of welding
processes, Int. J. of Comp. Integr. Manuf., 19, 2006,
pp. 294 - 300
Mujber T.S., Szecsi T., Hashmi M.S.J., Virtual reality
applications in manufacturing process simulation,
J. of Mat. Process. Technol., 155-156, 2004,
pp. 1834-1838
Ottosson S., Virtual Reality in the product development
process, J. of Eng. Des., 13, 2002, pp. 159 -172
Pappas M., Fragos D., Alexopoulos K., Karabatsou V.,
Development of a three-finger technique on a VR
Glove, Proc. of the 2nd Virtual Conc. Conf., 2003,
pp. 279-283
Pitarch E.P., Virtual Human Hand: Autonomous
grasping strategy, Universitat Politecnica de
Catalunya, 2010
Rubio E.M., Sanz A., Sebastian M.A., Virtual reality
applications for the next-generation manufacturing,
Int J Computer Integrated Manufacturing, 18/7, 2005,
pp. 601-609
Wan H., Luo Y., Gao S., Peng Q., Realistic Virtual
Hand Modeling with Applications for Virtual
Grasping, Proc. of the 2004 ACM SIGGRAPH int.
conf. on Virtual Reality contin. and its appl. in ind.,
2004, pp. 81-87
Wan H., Peng Q., Dai G., Gao S., Zhang F., MIVAS: A
Multi-Modal Immersive Virtual Assembly System,
ASME 2004 Des. Eng. Tech. Conf. and Comp. and
Inf. in Eng. Conf., 4, 2004, pp. 113-122
Weber M., Heumer G., Amor H.B., Jung B., An
Animation System for Imitation of Object Grasping in
Virtual Reality, Advances in Artificial Reality and
Tele-Existence, 16th International Conference on
Artificial Reality and Telexistence, ICAT, 2006, pp.
65-76
Zachmann G., Rettig A., Natural and Robust Interaction
in Virtual Assembly Simulation, 8th ISPE Int. Conf.
on Concur. Eng.: Res. and Appl., 2001
Zhao W., Madhavan V., Virtual Assembly Operations
with Grasp and Verbal Interaction, Proc. of the 2006
ACM int. conf. on Virtual reality appl., 2006,
pp. 245-254

PROTOTYPE DESIGNING WITH THE HELP OF VR TECHNIQUES: THE
CASE OF AIRCRAFT CABIN
Loukas Rentzos
Laboratory for Manufacturing Systems and
Automation, Dept. of Mechanical Engineering
and Aeronautics, University of Patras, Greece
rentzos@lms.mech.upatras.gr
George Pintzos
Laboratory for Manufacturing Systems and
Automation, Dept. of Mechanical Engineering
and Aeronautics, University of Patras, Greece
pintzos@lms.mech.upatras.gr


Kosmas Alexopoulos
Laboratory for Manufacturing
Systems and Automation,
Dept. of Mechanical
Engineering and Aeronautics,
University of Patras, Greece
alexokos@lms.mech.upatras.gr
Dimitris Mavrikios
Laboratory for Manufacturing
Systems and Automation,
Dept. of Mechanical
Engineering and Aeronautics,
University of Patras, Greece
mavrik@lms.mech.upatras.gr
George Chryssolouris
Laboratory for Manufacturing
Systems and Automation,
Dept. of Mechanical
Engineering and Aeronautics,
University of Patras, Greece
xrisol@lms.mech.upatras.gr
ABSTRACT
The main focus of this study is to develop a highly usable immersive environment for virtual
aircraft products. The environment should address the modern usability guidelines regarding the
design of 3D immersive interfaces. The scope is that the interface mechanisms of virtual
environments be approached from an engineering point of view. The development of the
environment involves interaction metaphors that will aid a designer or engineer to immersively
create and test a product prototype, while exploiting the advantages of VR. In addition, it is
important that the user be able to exploit these benefits without being an expert in VR. The
development is made in a platform independent architecture in order for a possible integration with
other VR platforms to be facilitated. Finally, the proposed interfaces are validated and evaluated
based on the aircraft cabin case, using user task scenarios that have been designed for this particular
study.
KEYWORDS
Engineering Simulation, Interactive Prototype, Virtual Design

1. INTRODUCTION
Modern commercial design tools, used in the
process of new product development, enable
engineers to design, test and simulate various
parameters of the project in hand. However, there
are still a lot of areas in the engineering design,
where specific and customized tools are required in
order for productivity to be improved and the cost
of the development to be reduced. The Virtual
Reality (VR) technology has been widely known for
many years; however, it is still not fully utilized
in the everyday practice of product development.
The main reasons for this are the complex
hardware systems required for facilitating the
interaction between the human user and the
computer system hosting the application, along
with the lack of standardization and the increased
cost of development.
inherent advantages of VR that together with the
modern affluence of new interaction devices,
mainly from the field of home games, have


rekindled the interest towards this technology. Some
of these advantages are the high level of flexibility
and reusability of the virtual prototypes, developed
with VR technology (Chryssolouris, 2006). The cost
of developing a virtual prototype application, for
simulating and facilitating engineering tasks and
tests, is substantially smaller than that for building a
real-life mock-up, which, moreover, has no
reusable value. Furthermore, in the virtual
environment, once the application has been built
and set up, the engineer is able to execute limitless
tests and experiments while changing and
modifying the design at every step. Recent research
has also focused on concurrent engineering
applications, such as multi-user collaborative
environments for real-time product design and
review (Chryssolouris et al. 2008, Pappas et al.
2006).
There are a lot of commercial and free tools,
offering possibilities and features of immersive
modeling, having as their target the design and
prototyping of products. In (Weidlich et al, 2007),
several tools are mentioned that offer ways of
product design, using predefined interaction
techniques or simulation components (e.g.
kinematics). The 3DVIA Virtools (Dassault
Systèmes) is a very popular authoring tool for 3D
environments. It is also used as the development
platform for the virtual environment, reported in
this study. Nevertheless, all these tools offer limited
predefined interaction capabilities, especially for
complex product design and review sessions. The
interaction techniques hosted, are mainly limited to
tasks, such as ray intersection or look around
techniques.
The product modeling and simulation
technologies are constantly evolving and new ideas,
which allow the facilitation of such technologies in
many other fields of engineering, are born.
Regarding Virtual Reality though, it is the
availability of such technologies that makes them
the most important factors in supporting
competitive engineering design. The advances made
in these areas are greatly responsible for reducing an
engineering design's degree of reliance on physical
prototypes, in favour of digital ones (Stark et al,
2010). However, there is still left a large margin for
improvement, mainly on the part that is responsible
for the interaction between the human-user and the
virtual environment.
The design and development of interaction
techniques for VR use, is greatly dependent on the
quality of the classification and survey on the
existing technologies and methods in that field.
Referring to this, common interaction techniques
are usually divided into main categories, such as
travel, selection, manipulation, system control and
symbolic input techniques (Bowman et al, 2004).
However, whereas the techniques focusing on
object selection and manipulation, apart from
travelling and way-finding, have already been
adequately researched and tested, it is the
application control techniques that are yet to be
fully considered (Dachselt and Hubner, 2007). On
the other hand, an engineering design environment
has as its key requirement the advanced natural
interaction with the user. For example, in (Moehring
and Froehlich, 2010), it is suggested that natural
interaction be the most important factor for the
virtual validation of functional aspects in
automotive product development.
This study is focusing on the application control
aspect of a virtual environment prototype, along
with symbolic input techniques. In order, however,
for a complete interactive environment to be
provided, some basic techniques have been
developed for accommodating the needs for the
selection and manipulation of objects.
2. INTERACTIVE PROTOTYPE DESIGN
AND REVIEW FEATURES
The interaction metaphors and features, developed
for the prototype design environment, are presented
in this section. Regarding the application control, a
3D immersive interface, which uses a menu
structure, is developed in order to facilitate all the
functionalities of the environment. Its architecture
aims at allowing a platform independent
implementation, while maintaining usability aspects
such as the user's natural task flow. Another three
variants of alphanumeric input metaphors are
designed to accommodate the functionalities of the
3D interface, for tasks that require numeric data
input from the user. Since it is of first priority that
all user tasks be facilitated immersively, these
techniques are designed to easily and intuitively
accommodate the requirements for numeric data
input. In addition, an advanced object position
control metaphor ensures that the engineer be able
to control precisely the objects and parts during the
design sessions. Finally, some basic interaction
metaphors support the usability of the environment.
2.1. 3D IMMERSIVE INTERFACE
The 3D immersive interface template (see Figure
1) is an application control metaphor designed to
control various parameters in the Virtual
Environment (VE). The interface is represented by a
3D geometry that works as a touch screen. The
screen can be naturally manipulated by the user,
with the use of a virtual hand, or through a device
that controls a virtual pen. The main concept of the
3D immersive interface is that certain predefined


functional features be used inside the menus. There
are two main aspects that have been taken into
consideration during the development of the 3D
interface. The first aspect has to do with mapping
the features displayed in the interface and the
second with classifying those features.


Figure 1 - A visualization of the 3D Immersive Interface.
Before continuing with the more detailed
description of these aspects, it is important to
explain that the term "feature" refers to the
functional objects used for interaction, such as
buttons or text fields. Regarding the mapping of the
interaction features and their respective
functionalities, the 3D interface screen area is
divided into a grid with a certain resolution so that it
is much easier for the interaction features to be
correlated with the application behavior
and functionality. Taking also into account the
importance of having a platform independent
architecture, a 3D interface is hierarchically
structured in order to accommodate the various
features. The interaction area of the screen is
divided into two main areas, the tabs and the menu
areas. Using the same notions, the tabs are the
highest hierarchical objects of the 3D interface and
each one of them contains interaction features that
could either lead to another sub-menu or to certain
application behaviors.
Figure 2 shows the generic architecture of the
3D interface. The interaction features
accommodated by the 3D interface developed, are
buttons, numeric or text fields, scroll fields and
texture fields. Buttons are simple rectangular
objects that are linked to a certain sub-menu or
behavior in the application. Their activation and
selection is made by the user with a ray while they
are being immersed in the environment. Numeric
and text fields are used, together with the
alphanumeric metaphors (see section 2.2), for
inputting data in the application. Scroll fields
visualize predefined text data, stored in a repository,
to be used in the application and the texture fields
accommodate predefined textures and images,
which are difficult to be transformed into the
generic interaction features, provided by the
interface. In Figure 3, an example of a menu
implementation, showing the different areas and
interaction features, is depicted.


Figure 2 Generic architecture of the 3D Immersive Interface.

Figure 3 Add light source screen.
The implementation is made based on the
concept of building blocks. Each tab, menu and sub-
menu is a building block, with its output connected
to a certain behavior in the application.
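A minimal Java sketch of this building-block structure is given below; the class names and the wiring of a feature to an application behaviour are illustrative assumptions, not the reported implementation.

import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the tab/menu/feature hierarchy described
// above; names and structure are assumptions, not the actual code.
class InterfaceFeature {
    final String label;          // e.g. a button caption
    final Runnable behaviour;    // application behaviour it triggers

    InterfaceFeature(String label, Runnable behaviour) {
        this.label = label;
        this.behaviour = behaviour;
    }

    void activate() { behaviour.run(); }
}

class Menu {
    final String name;
    final List<InterfaceFeature> features = new ArrayList<>();
    Menu(String name) { this.name = name; }
}

class Tab {
    final String name;
    final List<Menu> menus = new ArrayList<>();
    Tab(String name) { this.name = name; }
}

public class Immersive3DInterface {
    public static void main(String[] args) {
        // Build one tab with one menu and one feature as building blocks
        Tab lightPlanning = new Tab("Light Planning");
        Menu addLight = new Menu("Add Light Source");
        addLight.features.add(new InterfaceFeature("APPLY",
                () -> System.out.println("instantiate light source")));
        lightPlanning.menus.add(addLight);

        // Activating the feature triggers the connected behaviour
        lightPlanning.menus.get(0).features.get(0).activate();
    }
}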
2.2. ALPHANUMERIC INPUT
Alphanumeric input metaphors are used for defining
numbers and text in virtual environments. This
study proposes three different metaphors of
alphanumeric input, called "Number Input Wheel",
"Flower Input" and "Alphanumeric Input Control".
The Number Input Wheel (see Figure 4) is a
numeric input technique that uses three concentric
circular geometries for the visual feedback of the
alphanumeric value. When the wheel is activated,


its center is attached to the position of the user's
input device. By rotating his hand, the user can
increase or decrease the value of the wheel. The
distance from the input device to the center of the
wheel, determines the magnitude of the number
change. For example, if the user moves the stick a
little bit further from the wheel, the magnitude of
the change goes from 1 to 10 and when it is moved
even further away, it goes to 100. When the change
is set to magnitude 1, the small circle turns green
and when it is larger, the other two circles turn
green respectively.
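The mapping from hand distance to change magnitude can be sketched as follows in Java; the circle radii are illustrative assumptions, since the actual values are not reported in the paper.

// Sketch of the distance-to-magnitude mapping of the Number
// Input Wheel; the radii (in metres) are invented for the example.
public class NumberInputWheelSketch {

    /** Returns 1, 10 or 100 depending on hand distance from centre. */
    static int magnitude(double distanceFromCentre) {
        if (distanceFromCentre < 0.05) return 1;    // inner circle
        if (distanceFromCentre < 0.10) return 10;   // middle circle
        return 100;                                 // outer circle
    }

    /** A rotation step adds or subtracts the current magnitude. */
    static int update(int value, int rotationSteps, double distance) {
        return value + rotationSteps * magnitude(distance);
    }

    public static void main(String[] args) {
        int value = 0;
        value = update(value, +3, 0.02);  // three steps of 1
        value = update(value, +2, 0.08);  // two steps of 10
        value = update(value, -1, 0.15);  // one step of -100
        System.out.println(value);        // prints -77
    }
}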


Figure 4 Number Input Wheel Metaphor

Figure 5 Flower numeric input metaphor
The Flower Input Wheel (see Figure 5), helps the
user input numbers with the use of an intuitive
interface resembling a flower's petals. The symbols
on the interface are rotating, while the user hovers
his/her pointing device over them and the respective
geometry is scaled up, towards the user, in order to
provide feedback on the imminent selection. This
technique requires a small movement of the user's
wrist, in order to change the symbol and give more
precision to the selection process.
The alphanumeric input control (see Figure 6) is
an interaction technique that incorporates a low-cost
input device for the selection of the required
symbol. The input device is a Wiimote, a mass-
produced gaming device that is ergonomically
designed. The symbolic input is conducted in cases
where there is a need for inputting values that are long
and complex. In Figure 6, there is a draft
representation of the concept of alphanumeric input
control. The user scrolls through the values and
digits of the input field with the help of the
Wiimote joystick.


Figure 6 Alphanumeric Input Control metaphor
2.3. ADVANCED OBJECT POSITION
CONTROL
The Object Position Control interaction metaphor
(OPC) is used for defining a certain pose
(combination of position and orientation data) of an
object and for feeding this information either to test
the placement of that object or for its accurate
positioning.
It consists of two stages: the user first selects the
point in space at which the position control needs
to be executed and then, the technique calculates
the proper pose with respect to the test that will
take place (see Figure 7). After the pose has been
calculated, the difference between the defined pose
and the object's pose is measured.




Figure 7 - Object Position Control interaction metaphor
task decomposition
In more detail, during the first stage of OPC, the
virtual input device is used as a pointer. A ray is
cast from the device, parallel to the axis of the
tool, and upon intersection with an object, a small
red sphere moves to the respective coordinates,
thus providing feedback as to where the tool is
pointing. The second stage begins when the user
clicks the corresponding button on the input device,
which confirms the selection of the coordinates.
Next, the calculations are stored as the position and
orientation vectors of the point in space.


Figure 8 - Rays cast on the surface of the model by 3D sources


Figure 9 - Calculation of orientation based on secondary
intersection points
The definition of the surface's gradient is essential
for the calculation of the test pose. The calculation
takes place with the intersection points between
the rays cast from certain control sources and the
surface of the model. The control points follow the
initial intersection point (described above) and
sequentially cast new rays, forming new intersection
points, through which their orientation is calculated
by this technique. Four sources are used for ray
casting purposes. The orientation of the test pose is
calculated by the cross product of the vectors
formed by the last intersection points. When the
user presses a button on the device, the position
from the first ray and the orientation from the other
cast rays are stored as the test pose (Figure 8).
The secondary raycast sources move across the
surface according to the calculated orientation, so
as to remain perpendicular to the surface
(Figure 9).
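The orientation calculation can be illustrated with a short Java sketch: the cross product of two vectors formed by the intersection points yields the normal that defines the test pose orientation. The point coordinates below are invented for the example; only the math mirrors the description above.

// Sketch of the cross-product orientation calculation used by the
// OPC metaphor; points are invented, the math follows the text.
public class SurfaceNormal {

    static double[] sub(double[] a, double[] b) {
        return new double[]{a[0] - b[0], a[1] - b[1], a[2] - b[2]};
    }

    static double[] cross(double[] u, double[] v) {
        return new double[]{
            u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]};
    }

    static double[] normalize(double[] v) {
        double n = Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
        return new double[]{v[0] / n, v[1] / n, v[2] / n};
    }

    public static void main(String[] args) {
        double[] p0 = {0, 0, 0};     // primary intersection point
        double[] p1 = {1, 0, 0.1};   // secondary intersection points
        double[] p2 = {0, 1, 0.1};
        double[] normal = normalize(cross(sub(p1, p0), sub(p2, p0)));
        System.out.printf("normal = (%.3f, %.3f, %.3f)%n",
                normal[0], normal[1], normal[2]);
    }
}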
In this way, the object position control metaphor
enables the VR users to define a pose in the VE
with precision, so that they can accurately place
objects or perform the position control test
(Figure 10).


Figure 10 - Positioning of light at a perpendicular angle to the
gradient of the overhead luggage bin's surface.


3. AIRCRAFT CABIN DESIGN AND
REVIEW TEST CASE
The above developments were implemented in a
use-case, which focused on the design of the
lighting configuration of an aircraft cabin and the
review of the effects that this design had. For this
use-case, two (2) user task scenarios were
developed. One scenario was about the cabin
lighting design and the second about the cabin
lighting review.
The cabin lighting design test-case provides the
necessary environment to a lighting engineer in
order to immersively design the setup of the lighting
parameters and conditions of the cabin and
configure the layout of the light sources
accordingly. The 3D User interface (see Figure 11)
in this test-case is used for controlling the light
parameters in the cabin, such as the luminosity of
the light sources and the distance and angle of
certain point sources with respect to various
reference objects in the
scene (e.g. passenger seats). The OPC metaphor is
used in conjunction with the previous, for
accurately defining the position and orientation of
the lights. Furthermore, all the necessary numeric
values are inputted in the system via the three
alphanumeric metaphors.


Figure 11 Cabin design test-case.
In order to carry out the tasks, the user will use an
implementation of the 3D interface. A part of this
implementation is visualized in Figure 12. For
example, the first tab of the 3D interface is
dedicated to "Light Planning". It includes three sub-
menus: "Add Light Source", "Delete Light
Source" and "Replace Light Source". These menus
are responsible for instantiating a new light source
in the scene, deleting an existing source or changing
the type of the light source. The "Add Light
Source" menu hosts five interaction features,
namely one text field for supplying the type of the
source to be instantiated, two numeric fields for
inputting the quantity of and distance between
multiple light sources, in case of simultaneous
instantiation,
and a scroll field that selects between the three axes
of the coordinate system. Finally, a button called
"APPLY" is used for applying the functions. In
total, the implementation of the 3D interface for the
aircraft cabin use-case has five main tabs, which are
"Light Planning", "Light Positioning", "Light
Configuration", "Tools" and "FAP" (Flight
Attendant's Panel). All these tabs host menus that
support functionalities, apart from the ones
described above, namely the positioning of a light
source in certain coordinates or angles, the
modification of color parameters (see Figure 13), or
the luminance properties and measuring of
dimensions.


Figure 12 Indicative implementation of 3D Immersive
interface for cabin use-case.

Figure 13 Modify Colour menu.
There were many steps included in the cabin
design scenario, based on the input from a European
aerospace company. These steps were carefully
selected in order to cover a broad spectrum of the
activities, involved in the everyday practice of a
lighting engineer/designer. The total length of the
VR session, in order for all the tasks to be executed,
was approximately 45 minutes, excluding the
frequent breaks taken for the immersive user to rest,
so as for any
simulation sickness to be avoided. In Figure 14,
there is an indicative part of the tasks involved in
the scenario, together with the respective
development used for the carrying out of the task.
First, the user positions a spot light in the service
unit above the seat and aligns the light beam
towards the seat. Next, the beam is directed
towards the entertainment system and the spot
light is moved 532 millimeters along the y axis.
Then, the light is placed perpendicularly to the
sidewall with a 10 millimeter offset and its color
is modified with the indicated RGB values.


Figure 14 Indicative user task scenario for cabin light
design.
Continuing with the cabin review scenario,
Figure 15 shows a part of the tasks involved. This
scenario is carried out by the engineer/designer in
order to observe the effects of his/her design in the
cabin. It also utilizes the "FAP" tab of the 3D
interface, through which the user is able to simulate
the functionalities existing in a flight attendant's
panel, such as opening or closing a group of lights
located in the grip rail, or individual reading lights
above the seats. Another available functionality is
that the user can open and close the cabin's window
shades. Based on the figure, the user first observes
the lighting conditions over the grip rail and next on
the cabin ceiling. He/she then opens and closes the
window shades and the reading lights above
seats 1B and 1C.


Figure 15 Indicative user task scenario for cabin lighting
conditions and effects review.
Finally, some additional interactions were
facilitated in order for the immersive functionalities
to be ensured in a complete context. The user
controls through his/her input device a virtual pen
inside the environment. This pen is used for
selecting points in space and for navigating in the
menus of the 3D interface. In addition, through the
intersection test, the user is able to grasp and
manipulate directly the 3D objects of the scene.
Another manipulation technique developed for the
environment is the copy-hand technique, through
which the user can remotely change the orientation
of a light source and consequently, its light beam,
by changing the orientation of his/her hand. Lastly,
a measuring tool function is accommodated that
allows the user to select two points in space and
measure the distance between them.
3.1. EVALUATION AGAINST INDUSTRIAL
REQUIREMENTS
The evaluation and validation of the VR
environment, developed in this study, was based on
industrial requirements from the aerospace industry.
The scenarios of the use-cases were developed
based on the "essential user task scenarios"
template, specifically designed for developing
scenarios validating the technologies together with
the human factors requirements. In brief, for every
scenario, this template included the targeted actors,
the plot, goal/objective, the human factor objective
and measurement of the human factor. The
measurement of the human factors was based on
questionnaires that took into account the plot of the
scenario. For example, for the cabin design,
questions like "How easily does the user position
the light source?" were addressed to the immersed
user in order to extract qualitative feedback. The
evaluation process was carried out by both
experienced and novice users in the area of virtual
reality. However, all users were experts in the area
of engineering and design. The immersive
evaluation was carried out on a step-by-step basis.
The person conducting the test was reading to the
immersive user the plot of the scenario and the user
then executed it as he/she thought
appropriate. In Table 1, the evaluation outcome of
certain industrial requirements is presented.
Table 1 Industrial requirements and evaluation of
interactions used in the test case simulation
Place object in proper position/orientation
The OPC satisfies the requirement by helping the user
place an object in reference to another object's surface.
Direct manipulation, natural interaction with objects
The user is able to directly and naturally manipulate light
sources either by grasping the lights through the
intersection of the virtual pen, or using the copy hand
function.


Integration of human body for interaction visualization
All the tasks that are carried out through the use of the
techniques require the use of the user's hands. The user
either controls the virtual hand with a data-glove or a tool
through a tracked input device (e.g. wand). Both ways
require the integration of the human body since the tracking
data come from the movement of the user's body.
Interaction metaphors must work for head mounted
displays (HMD) and projection walls
All the techniques can be utilized with the use of HMD or
projection wall displays since none of them depends on the
devices used for visualization purposes.
One hand manipulation
The techniques provide one hand manipulation of objects
through selection and movement.
Intelligent interactions with objects
The OPC can be characterized as an intelligent technique since
it uses algorithms that recognize the surface gradient.
Facilitate the execution of complex interactions
The 3D interface metaphor helps VR users to perform
complex functions in the environment.

Most of the comments were positive about the
usability and intuitiveness of the interactions.
Besides fulfilling the industrial requirements
though, the users expressed their preferences and
likes and dislikes during a post-evaluation
debriefing. For the 3D interface, the users indicated
that it provided an easy way of controlling all those
complex functionalities in the environment. It was
also very important that the structure followed the
logical flow of the tasks. This helped a lot
especially where the novice users were concerned.
On the other hand, the position of the screen, in
front of the user, was not always very ergonomic.
It needed a better calibration with respect to the
distance between the screen and the
head of the user. The object position control
together with the rest of the manipulation
techniques were easy to apprehend and intuitive to
use. Regarding the three different alphanumeric
input metaphors, the users expressed a preference for
the flower input wheel, because of its usability.
Although the other two metaphors were also very
interesting to use, the flower input gathered most of
the positive comments. As a whole, the environment
developed, provided a satisfactorily complete
platform for prototype design and review, since it
could accommodate many design tasks, related to
the aircraft cabin.

4. CONCLUSIONS
The study shows the potential of new VR
techniques for use in prototype designing and
review. The developed techniques are provided as
tools in support of the complex tasks, related to
design and are validated in an industrial use-case,
derived from the everyday practice of the aerospace
industry. The evaluation demonstrated the potential of
the techniques and the usability and flexibility that
they provide to an engineer/designer. The new
arising technologies reignite the interest around VR
and stress the advantages of virtual prototyping. In
this context, Figure 16 depicts two images of the
same simulation test. The upper image is a
screenshot from the simulation environment
developed for this study, while the lower image is a
real-life photo from the inside of a modern airliner
cabin during the testing of interior lights. The
virtual cabin has been set up to closely resemble
some key light sources from the real cabin in order
for the capabilities of the application to be
demonstrated. The most important aspect of this
comparison is the fact that a light designer would
need only a few minutes to adjust, modify or even
create a new light source in the cabin and observe
the effects in the environment, while it would take
many hours to do so in a real cabin prototype.
The value of the design environment developed is
also reflected in the fact that the application
itself is designed in such a way as to enable
designers and engineers, who do not have great
experience in VR technology, to make use of it.


Figure 16 Comparison between simulated cabin lighting
and real aircraft light testing.


5. ACKNOWLEDGMENTS
This study was partially supported by the project
VISION/ AAT-2007-211567, funded by the
European Commission in the context of the 7th
Framework Programme.
REFERENCES
Bowman D.A., Kruijff E., LaViola Jr. J.J., Poupyrev I.,
3D User Interfaces: Theory and Practice, first ed.,
Addison Wesley Longman Publishing Co., Redwood
City, 2004
Chryssolouris G., Manufacturing Systems: Theory and
Practice, 2nd ed., Springer-Verlag, New York, 2006
Chryssolouris G., Mavrikios D., Pappas M., ''A Web and
Virtual Reality Based Paradigm for Collaborative
Management and Verification of Design Knowledge'',
Methods and Tools for Effective Knowledge Life-
Cycle-Management , A. Bernard, S. Tichkiewitch
(eds), Springer, 2008, Part 1, pp.91-105
Dachselt R., Hubner A., Three-dimensional menus: A
survey and taxonomy, Computers & Graphics, 31,
2007, pp. 53-65
Moehring M., Froehlich B., Enabling Functional
Validation of Virtual Cars Through Natural Interaction
Metaphors, IEEE Virtual Reality 2010, 20 - 24
March, Waltham, Massachusetts, USA
Pappas M., Karabatsou V., Mavrikios D., Chryssolouris
G., "Development of a web-based collaboration
platform for manufacturing product and process
design evaluation using virtual reality techniques",
International Journal of Computer Integrated
Manufacturing, 2006, Volume 19, No. 8, pp. 805-814
Shehab E., Bouin-Porter M., Hole R., Fowler C.,
Enhancing digital design data availability in the
aerospace industry, CIRP Journal of Manufacturing
Science and Technology, 2, 2010, pp. 240-246
Stark R., Krause F.L., Kind C., Rothenburg U., Muller
P., Hayaka H., Stockert H., Competing in
engineering design The role of Virtual Product
Creation, CIRP Journal of Manufacturing Science
and Technology, 3, 2010, pp. 175-184
Weidlich D., Cser L., Polzin T., Cristiano D., Zickner H.,
Virtual Reality Approaches for Immersive Design,
Annals of the CIRP, 56/1, 2007, pp. 139-142






KNOWLEDGE MANAGEMENT FRAMEWORK, SUPPORTING
MANUFACTURING SYSTEM DESIGN
Konstantinos Efthymiou
University of Patras
efthymiu@lms.mech.upatras.gr
Kosmas Alexopoulos
CASP
alexokos@casp.gr


Platon Sipsas
CASP
psipsas@casp.gr
Dimitris Mourtzis
University of Patras
mourtzis@lms.mech.upatras.gr
George Chryssolouris
University of Patras
xrisol@mech.upatras.gr
ABSTRACT
Knowledge, in the contemporary economy, represents a fundamental issue. Information is being
increasingly distributed across individual workers, work teams and organizations. Therefore, the ability
to create, acquire, integrate and deploy knowledge has become a significant organizational factor. In
particular, estimates suggest that 55% to 75% of the engineering design activity comprises the reuse of
previous design knowledge in order for a new design problem to be addressed. The aim of the current
paper is the description of a knowledge management framework, able to support a factory's planning
throughout its lifecycle, from design to dismantling. The main core of the proposed framework is the
ontology stored in the second component of the framework, i.e. the semantics based repository. On top of
that, there is another component, the knowledge association engine, being responsible for performing
similarity measurements and the inference rule definition and execution. Finally, two use cases, focusing
on the design phase, are presented in order to show the applicability of the proposed framework.
KEYWORDS
Knowledge Based Engineering, Ontology, Manufacturing, Key Performance Indicators

1. INTRODUCTION
Knowledge Management (KM) refers to a range of
practices and techniques used by organizations to
identify, represent and distribute information,
knowledge, know-how, expertise and other forms of
knowledge for leverage, utilization, reuse and
transfer of knowledge across the enterprise
[Chryssolouris et al. 2008].
PDM / PLM systems provide an integration of
product data with the use of conventional database
approaches. The ERP systems are less product-
focused and emphasize more on the integration of
the business processes. In recent years, two
modules, namely the CATIA V5 Knowledgeware
and the Siemens Teamcenter v9, have emerged and
are mainly in support of product knowledge
management. Both of the modules use the concept
of the so-called templates or archetypes that focus
on reusing parameterized, modular part designs.
Both the ERP and the PDM/PLM systems are still
considered weak in knowledge management,
especially for the process related knowledge
management; while the integration of knowledge
based tools into PDM, such as the Distributed Open
Intelligent PDM system [Kim et al. 2001] are few
[Gao et al. 2003]. Another similar approach that
may act complementarily to the existing PLM/PDM



Figure 1 Knowledge Framework Components Architecture

systems and may reduce time and cost of the early
design phases has been proposed in [Papakostas et
al. 2010]. In particular, this work introduces a new
theoretical framework for analyzing and classifying
knowledge, related to existing assembly
configuration, in terms of manufacturing attributes
such as time, cost and flexibility.
Product cost estimation during the early design
phase of the product development is a process of
great significance, since it facilitates the cost control
and positively influences the enterprise. A great
obstacle of estimating the product cost, during its
design phase, is the lack of information. Knowledge
based systems mostly following case based
reasoning approaches, attempt to overcome this
problem and provide a first estimation of the
product cost. The DATUM project (Design
Analysis Tool for Unit-cost Modeling [Scanlan et al.
2006] aimed at the development of a knowledge
based system, capable of estimating the cost of an
aircrafts engine and its subcomponents and
generating a process plan for the manufacturing of
the engine. A similar application to the automotive
industry has been developed in the context of the
MyCar project [Makris et al. 2010]. In particular, a
knowledge based system for subassembly cost
estimation has been implemented, following a
hybrid methodology utilizing case based reasoning
and regression analysis [Mourtzis et al. 2011].
The proposed knowledge based framework is
addressing knowledge management through the
factory lifecycle, from design to dismantling phase.
However, this paper places more emphasis on the design
phase of a manufacturing system by providing two
use cases from the ice-cold merchandisers industry.
The remainder of this paper is structured as follows.
Section 2, provides the core elements of the
knowledge based framework, namely, the ontology,
the repository and the association engine. Two use
cases, related to manufacturing systems design, are
presented in Section 3. Section 4, concludes the
paper and proposes future research paths.
2. KNOWLEDGE BASED FRAMEWORK
2.1 ARCHITECTURE
The aim of this work is to model the existing data
with the use of the Semantic Web technology in
order for useful knowledge to be extracted. For this
reason, a Knowledge Repository (Thakker et al,
2010) is developed and its architecture is
explained in the following.
A manufacturing ontology is created in the OWL
language (W3C, 2010) and it is then placed in a
Semantic Repository (SR). The SR is a web based
module used to saving and querying semantic data
(W3C, 2011). Moreover, it has a built in reasoning
engine to support rule inference over the deposited


data. The SR can be accessed through the Semantic
Repository Abstraction Layer (SRAL).
The SRAL is implemented utilizing the
Enterprise Java Bean technology and allows the
decoupling of the Semantic Repository from the
other Knowledge Repository components.
Exploiting the Semantic Web data model and the
reasoning engine, a web application is built that
enables the extraction of knowledge from the SR
data.
This application is named Knowledge
Association Engine (KAE), due to its ability to
operate over semantic data and to extract useful
knowledge from it. Apart from the KAE, there are
several other Virtual Factory (VF) modules that
exchange data with the SR through the Knowledge
Repository Interface (KRI) service; however, these
are beyond the scope of this paper.
Finally, the data in the SR is synchronized with
that from another module, known as the Virtual
Factory Manager. The synchronization is achieved
through the implementation of the Virtual Factory
Manager Interface (VFMI).
The Virtual Factory Manager is an information
exchanging platform to support new and existing
software for manufacturing design and management.
Concerning its data, it provides concurrent data
access and consistency, evolving factory data
management and data safety in cases of
malfunction. In this way, it enhances the data
exchange among different software tools. Moreover,
it has platform independent interfacing capabilities,
acceptable response time and semantic web
functionalities, enabling each software tool to
communicate with this service and handle its data
by using the semantic web technology. To further
enhance its usability, this platform has a versioning
system for its data and provides a set of methods for
versioning management and structured data
retrieval. In the case of the Knowledge Repository,
it is the aforementioned set of methods that is used. The
KR interacts with the VF Manager in order to
retrieve structured data exploiting its versioning
system. In order to synchronize its data with the VF
Manager platform while preserving the versioning
management and the data structures, the KR has its
own software module, namely the VFMI. The
VFMI has been designed to interact with both the
KR application and the KR Semantic Repository
and to perform data synchronization between the
KR Semantic Repository and the VF Manager
platform every time that is required.
2.2. MANUFACTURING ONTOLOGY
The current ontology aims to cover the
manufacturing domain and to model the structure and
the relationships among the primary physical and
virtual entities of a manufacturing system
(Chryssolouris, 2006). The purpose of the ontology
is to represent plant, product, orders, performance
indicators and define their interconnections, in order
to support the modelling and the analysis of
alternative plant configurations, useful for the
different phases of the factory's lifecycle, such as
design and planning. The ontology will answer
questions, concerning the assessment of
manufacturing performance indicators for
alternative plant configurations or alternative task
planning activities, providing this way, a decision
support mechanism. The overall ontology scheme is
illustrated with the help of Figure 2. On the left side,
the product and orders classes are presented, while in
the center of the figure, the plant class is illustrated
and beyond it, the manufacturing attributes structure
is depicted. The ontology's basic scheme is further
analysed in the following paragraphs.
2.2.1. Plant Hierarchy
The factory's structure is represented with the use of
the plant hierarchy. In the current ontology, a
more coherent approach to plant hierarchy,
following a four level hierarchy (Chryssolouris,
2006), is adopted and includes:
Factory level
Job shop level
Work center level
Resource level
The factory is the highest level in the hierarchy
and corresponds to the system as a whole. The
factory further consists of job shops that represent a
group of workcenters. The workcenters are
considered as a set of resources that perform similar
manufacturing processes. A resource is regarded as
a generic entity that can be a machine, a human
worker or a storage area. In the present ontology,
each level is modeled as a class.
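A minimal sketch of how these four levels could be declared as OWL classes with the Jena Framework (which this work uses for the SR) is given below; the namespace and the "consistsOf" decomposition property are illustrative assumptions, not the actual ontology URIs.

import org.apache.jena.ontology.ObjectProperty;
import org.apache.jena.ontology.OntClass;
import org.apache.jena.ontology.OntModel;
import org.apache.jena.ontology.OntModelSpec;
import org.apache.jena.rdf.model.ModelFactory;

// Sketch of the four-level plant hierarchy as OWL classes; the
// namespace and property names are assumed for illustration.
public class PlantOntologySketch {
    public static void main(String[] args) {
        // In-memory OWL model for the plant hierarchy
        OntModel m = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM);
        String ns = "http://example.org/manufacturing#";

        // The four hierarchy levels as OWL classes
        OntClass factory    = m.createClass(ns + "Factory");
        OntClass jobShop    = m.createClass(ns + "JobShop");
        OntClass workCenter = m.createClass(ns + "WorkCenter");
        OntClass resource   = m.createClass(ns + "Resource");

        // Decomposition between consecutive levels (assumed name);
        // analogous properties would link the remaining levels
        ObjectProperty consistsOf = m.createObjectProperty(ns + "consistsOf");
        consistsOf.addDomain(factory);
        consistsOf.addRange(jobShop);

        // Serialize the ontology for inspection
        m.write(System.out, "RDF/XML-ABBREV");
    }
}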
2.2.2. Product Hierarchy
A generic and abstract structure of the product is
proposed for the needs of the ontology whereby the
Product is the main class. This concept represents
the actual finished goods produced by the plant.
2.2.3. Orders Hierarchy
Corresponding to the plant hierarchy there is also
the workload hierarchical breakdown. Orders are
broken down into jobs, which in turn, consist of a
number of tasks. An order corresponds to the overall
production facility and is divided into jobs that,
based on their specifications, can be processed only
by a suitable job shop. A job consists of tasks that
can be released only to one workcenter. Tasks can


be dispatched to more than one of the work center's
parallel resources (Chryssolouris and Lee, 1994).
Based on this concept, the orders hierarchy is
modelled with the following classes.
2.2.4. Performance Indicators Structure
The four most important attributes, used for making
decisions in manufacturing during design, planning,
operation and in general during the entire factory
lifecycle, are cost, time, flexibility and quality
(Chryssolouris, 2006). Consequently, the main
classes of the performance indicators are classified
into time, cost, quality and flexibility. Then the
classes are further analyzed with more specific
subclasses and instances that are associated with the
plant hierarchy and order hierarchy classes with the
use of rules and ontology relationships. In the
current study, flexibility has three subclasses,
namely capacity, operational and product flexibility;
cost subclasses are modeled based on the Activity
Based Costing method, while Time includes
production rate and flowtime. The aim of the
performance indicators hierarchy is to provide a
general scheme for classifying manufacturing
attributes and associating them with plant and order
classes, facilitating the assessment of performance
indicators for different levels of the plant hierarchy.
The cost modelling is based on the Activity
Based Costing (ABC) method. So the main classes
and their structure follow the ABC modelling.
The class of Time is specialized by the subclass
of Production Rate. The scheme can be easily
enhanced with other specializations of the time
related performance indicators, such as lead time,
process time etc.
High flexibility or low sensitivity to a change
provides a manufacturing system with three
principal advantages. It is convenient to think of
these advantages as having arisen from the various
types of flexibility that can be summarized in three
main categories as in (Chryssolouris, 2006), namely
those of product, capacity and operation flexibility.
2.3. SEMANTIC REPOSITORY
The semantic repository that has been built can be
paralleled with a database. Its purpose is to store
data and to allow its external retrieval through
queries. However, the data is stored following the
Semantic Web model, written in the OWL language.
Furthermore, the SR provides a built-in OWL
reasoner. This is a module enabling the application
of rules over the stored data. The inference of the
rules can be realized when querying the SR and
retrieving data from it.
For the implementation of the SR, the Java
technology is used, combined with the Jena
Framework (Jena, 2011). A web application is built
in the Java language using the Jena Framework to
provide the semantic functionalities. The Jena
Framework is a set of Java libraries implementing
basic functionalities for semantic data storage and
querying, following the W3C standards (Mc Bride,
2011). Moreover, it provides a built in rule engine,
which executes rules over the stored semantic data.
The language of the rules is the Jena Rule language,
which is created to support this framework. Among
others, this framework is selected for the
implementation of the SR for the following two
main reasons. First of all, it is an open source and
very flexible framework, providing Java support and
secondly, it is the only open source framework with
built-in support for rules that is freely distributed
for research purposes.

Figure 2 Basic classes of the proposed manufacturing
ontology and their relationships
2.4. KNOWLEDGE REPOSITORY
ABSTRACTION LAYER
The role of this component is to decouple the
Semantic Repository (off the shelf component) from
the rest of the Knowledge Repository components.
In this way, it would be easier to change or switch
the repository module, if need be, in the future.
This is rather important, since this technology is not
yet well established and a change in the selected
semantic repository module is likely to happen.
This module has been implemented as an Enterprise
Java Bean (EJB) application and is embedded in
Apache Tomcat (The Apache Software Foundation,
2011) using the OpenEJB plugin. It was developed
as a stateful session bean, meaning that each client
has one instance in memory. It has direct access to
the SR and moreover, it can directly communicate


with the built-in rule engine in the SR. All other applications communicate with the SR through it.


Figure 3 Retrieval of past plant configuration solution

2.5. VIRTUAL FACTORY MANAGER
INTERFACE (VFMI)
The VFMI is the module responsible for the data
synchronization between the VF Manager and the
SR. It has been developed as an EJB application,
also embedded in Apache Tomcat. The
synchronization mechanism has been implemented
as follows. When a Knowledge Application needs to
access data from the VF Manager, it communicates
with the VFMI, requesting the repository name under
which a specific version of the VF Manager data
exists in the KR. The VFMI checks whether the
required version exists in the SR and replies with
the name of this version's repository. If the version
does not exist, the VFMI requests the data from the
VF Manager and creates a new repository in the SR.
Then, it replies with the name of this new repository.
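The check-then-fetch logic of this mechanism can be sketched as follows in Java; all class and method names are illustrative assumptions, since the actual VFMI code is not listed in the paper.

import java.util.HashMap;
import java.util.Map;

// Sketch of the VFMI check-then-fetch synchronization described
// above; names and placeholder bodies are assumptions only.
public class VfmiSync {
    private final Map<String, String> versionToRepository = new HashMap<>();

    /** Returns the SR repository name holding the requested version,
     *  importing the data from the VF Manager first if necessary. */
    public String repositoryFor(String vfManagerVersion) {
        String repo = versionToRepository.get(vfManagerVersion);
        if (repo == null) {
            // version not yet in the SR: fetch it and create a repository
            String data = fetchFromVfManager(vfManagerVersion);
            repo = "repo-" + vfManagerVersion;
            createRepositoryInSr(repo, data);
            versionToRepository.put(vfManagerVersion, repo);
        }
        return repo;
    }

    private String fetchFromVfManager(String version) {
        return "<rdf:RDF/>";  // placeholder for the real VFM call
    }

    private void createRepositoryInSr(String name, String data) {
        // placeholder for loading the data into the Semantic Repository
    }

    public static void main(String[] args) {
        System.out.println(new VfmiSync().repositoryFor("v1.2"));
    }
}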
2.6. KNOWLEDGE ASSOCIATION ENGINE
The aim of the Knowledge Association Engine
(KAE) is to extract useful knowledge using the
existing data found in the SR. It is developed as a
web application with a server side and a client side
part. The client side part is actually the Graphical
User Interface (GUI) of the application, by which
the users are allowed to have access to the
application services. The server side part is
where all the services of the application have been
implemented (Carroll et al, 2003).
The Java technology, along with some JavaScript
open source libraries, is used for the
implementation of this web application. In this way,
a user's only requirement is a web browser with an
internal JavaScript engine, and all well-known
browsers meet this requirement by default. No
other external software is required for installation.
The entire application is hosted in an Apache Tomcat
server and it has already been tested with Apache
Tomcat v6.0 (The Apache Software
Foundation, 2011).
The Enterprise Java Beans (EJB) technology is
used for communicating with the SR. Through this
communication, the KAE is able to have access to
the data stored in the SR.
2.6.1. Similarity Measurement
Past or alternative plant configurations are
assessed with the use of KPIs. In other words, each
configuration is characterized by a set of KPIs. This
characterization is not restricted only to a plant
level, but it can be performed at a jobshop and
workcenter level as well. So, in the context of the
ontology, each class belonging to a plant hierarchy
is associated with classes of the performance
indicators structure. In particular, the connection
"isMeasuredBy" determines the relationship between
the plant hierarchy classes and the KPIs structure.
During the design of a new plant, a jobshop or a
workcenter, engineers search for past solutions that
have fulfilled requirements similar to those of the
new plant. Requirements are expressed in terms of
KPIs, such as cost, time, quality and flexibility. The
similarity measurement functionality enables a
designer to search efficiently, in terms of time, for
past configuration solutions. Firstly, the designer
defines

the level of the plant hierarchy and the product or
the part that is to be produced. Afterwards, the
engineer determines the KPIs that represent the
requirements of the new plant/jobshop/workcenter.
Finally, the weighting factors for each KPI are also
defined, determining this way, the significance of
each of them in the similarity measurement
procedure. The paragraphs hereafter present the
main algorithm behind the similarity measurement.

Figure 4 Equipment selection based on inference rules
A great number of similarity measures exist,
each applying to specific demands. The
nearest neighbour technique, a widely used
technique in the case-based reasoning retrieval
process, computes the similarity between the past
cases (in our case, plant configurations) stored in
the repository and the new case, i.e. the new plant
configuration, based on weighted features. In the
current approach, the following form of similarity
measurement is used.

\mathrm{Sim}_{Global}(T, S_j) = \sum_{i=1}^{n} w_i \cdot f(x_i^t, x_i^s) \quad (1)

Where:
T: the new (target) case (configuration)
S: the past (source) case (configuration)
j: one of the past cases (configurations)
n: the number of KPIs that characterize each case (configuration)
i: one of the n individual configuration KPIs
w_i: the weighting factor for the i-th KPI
f: the similarity function
x_i^t: the i-th KPI of the target (new) case (configuration)
x_i^s: the i-th KPI of the source (past) case (configuration)

As a similarity function, a slight modification of
the Minkowski measure is employed, providing
normalized values ranging from 0 to 1, where 1
means a total similarity in contrast to 0 meaning
total dissimilarity.

f(T_i, S_i) = 1 - \left( \frac{S_i - T_i}{T_i} \right)^2 \quad (2)

Thus, each one of the past (source) cases
(configurations) is compared with the new (target)
case (configuration). The past case (configuration)
with the greatest similarity value, i.e. the one
nearest to 1, is selected to be the proposed solution,
used as a reference design configuration.
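Equations (1) and (2) map directly onto a few lines of code. The following Java sketch computes the global similarity of one stored configuration against a target; the weights, KPI values and names are invented purely for the example.

// Direct translation of equations (1) and (2); the KPI values
// and weights below are invented for illustration.
public class ConfigurationSimilarity {

    /** Equation (2): local similarity of one KPI, 1 = identical. */
    static double local(double target, double source) {
        double d = (source - target) / target;
        return 1.0 - d * d;
    }

    /** Equation (1): weighted sum of local KPI similarities. */
    static double global(double[] w, double[] target, double[] source) {
        double sim = 0.0;
        for (int i = 0; i < w.length; i++) {
            sim += w[i] * local(target[i], source[i]);
        }
        return sim;
    }

    public static void main(String[] args) {
        double[] weights = {0.5, 0.3, 0.2};      // KPI weighting factors
        double[] target  = {100.0, 8.0, 0.95};   // required KPIs
        double[] past    = {110.0, 7.5, 0.93};   // one stored case
        System.out.printf("similarity = %.4f%n",
                global(weights, target, past));
    }
}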
2.6.2 Rule Editing and Inference
The rule management of the SR is supported by the
KAE application. Rules are statements, which are
applied to the data existing in the SR so as to create
new useful knowledge. Considering that for the
SR's implementation, the Jena Framework has been
used, the rules have to follow this approach. So, in
the KAE application, the Jena Rule Language is


supported for the writing of rules. For the rule
execution and validation, the built-in support of the
Jena Framework is used.
The Jena Framework includes a general purpose
rule-based reasoner. This reasoner supports rule-
based inference over RDF graphs and provides
forward chaining, backward chaining and a hybrid
execution model. To be more exact, there are two
internal rule engines; one forward chaining RETE
engine and one tabled datalog engine - they can be
run separately or the forward engine can be used to
prime the backward engine, which in turn, will be
used for answering to queries.
Rules in the Jena Rule Language have two parts, the body and the head, separated by an arrow with direction from body to head. Each part consists of a list of terms; the body terms are also called premises, while the head terms are also referred to as conclusions. Each term is either a triple pattern, an extended triple pattern or a call to a built-in primitive.
For the execution of rules, there is a module called the rule engine. Rule engines are used to derive additional RDF assertions, which are entailed from some base RDF, together with any optional ontology information and the axioms and rules associated with them. The primary use of this mechanism is to support the use of languages, such as RDFS and OWL, which allow additional facts to be inferred from instance data and class descriptions.
The KAE application allows the user to store rules in the SR and to activate them. For their execution, a rule engine is used. The Jena Framework, used for the SR's implementation, provides a built-in rule engine. Accordingly, in order to derive new data through the execution of these rules, the KAE application sends SPARQL queries (Mc Carthy, 2005) to the SR with a parameter indicating that the rule engine should also be used.
The added value of the KAE application is the combination of the rules with the similarity measurement mechanism. The similarity measurement mechanism internally uses a similarity algorithm, which is applied to the data that have been retrieved from the SR. The kind of data to be retrieved from the SR by the similarity measurement function is controlled by the rules. To better explain how this works, an example is given. Supposing that a user wishes to perform a similarity measurement among all the plants stored in the SR that produce ice cold merchandisers, he needs to use the rule shown in Table 1.

Table 1 - Rule example
[IceColdMerchandiser:
( ?kpi base:measures ?plant )
( ?plant base:produces ?product )
( ?product rdf:type base:Ice_Cold_Merchandiser )
->
( ?kpi base:smMeasures ?product )
( ?product <base#smElements> ?plant )
( ?product base:smType base:Plant ) ]

In the body of this rule, a list of terms exists in a triple format. This list selects the Key Performance Indicators (KPIs) of those plants which produce an ice-cold merchandiser type of product. In the head of the rule, the selected KPIs are connected with this product by the smMeasures predicate; this product is connected with the selected plants by the smElements predicate; and this product is also connected with the Plant semantic URI by the smType predicate. Those three predicates are used by the similarity algorithm and have the following meanings. The smMeasures predicate means that, for the similarity level selected, the object of this triple should be included; the subject of this clause should always be a set of KPIs. The smElements predicate means that, if this similarity level is selected, the data of the similarity algorithm is the object of this triple. Finally, the smType predicate should always connect this similarity level with the semantic class type of the similarity data. In our case, the similarity data belongs to the Plant semantic class.
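To make the mechanism concrete, the following is a minimal sketch of how a rule such as that of Table 1 could be parsed and executed with the Jena general purpose reasoner. It assumes the Jena 2.x package layout that was current at the time; the namespace URI and the file name are illustrative, and the rule is reduced to a single conclusion for brevity.

import java.util.List;
import com.hp.hpl.jena.rdf.model.InfModel;
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.rdf.model.StmtIterator;
import com.hp.hpl.jena.reasoner.rulesys.GenericRuleReasoner;
import com.hp.hpl.jena.reasoner.rulesys.Rule;
import com.hp.hpl.jena.util.PrintUtil;

public class RuleInferenceSketch {
    public static void main(String[] args) {
        // The base model would normally hold the content of the SR;
        // here it is read from an illustrative RDF file.
        Model base = ModelFactory.createDefaultModel();
        base.read("file:plants.rdf");

        // Register the prefix used inside the rule text (illustrative URI).
        PrintUtil.registerPrefix("base", "http://example.org/base#");

        // A rule in the Jena Rule Language: body (premises) -> head.
        String ruleSrc =
            "[IceColdMerchandiser: " +
            " (?kpi base:measures ?plant) " +
            " (?plant base:produces ?product) " +
            " (?product rdf:type base:Ice_Cold_Merchandiser) " +
            " -> (?kpi base:smMeasures ?product) ]";
        List<Rule> rules = Rule.parseRules(ruleSrc);

        // Hybrid mode: the forward RETE engine primes the backward engine.
        GenericRuleReasoner reasoner = new GenericRuleReasoner(rules);
        reasoner.setMode(GenericRuleReasoner.HYBRID);

        // The inference model exposes the base triples plus the entailed
        // ones, so SPARQL queries against it also see derived assertions.
        InfModel inf = ModelFactory.createInfModel(reasoner, base);
        StmtIterator it = inf.listStatements();
        while (it.hasNext()) {
            System.out.println(it.nextStatement());
        }
    }
}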
3. USE CASES
This section describes two use cases from the ice cold merchandisers industry, presenting the applicability of the approach to real industrial problems. The first use case focuses on the design of a new production line. Past production lines were stored in the Knowledge Base and, with the help of the Knowledge Association Engine, those most similar to the requirements of the new production line were retrieved and proposed for the new line. The second use case emphasizes defining, in a systematic way, the knowledge related to the equipment required for the manufacturing of a product, focusing on the problem's technological side without taking into account time and cost constraints. The knowledge stored is later on retrieved for the selection of the equipment to be used for the production of a new product. In the second use case, the rules functionality provided by the Knowledge Association Engine is mainly used.
3.1. PAST SOLUTIONS RETRIEVAL


The specific case study emphasizes knowledge management during the design of a new plant. The ICM plant building projects present a great similarity. This means that new plant designs may be based on an existing plant. Two phases can be defined within the use case: a. plant configuration definition, and b. similar plant configuration retrieval. During the first phase, knowledge engineers update the knowledge repository with ontology instantiations. Each instantiation provides information related to the plant configuration, the KPIs of this configuration and the products that are produced. Plant configuration includes the plant's hierarchy, consisting of the jobshops, the workcenters and the resources. Each plant is characterized by KPIs, indicating the former's performance. In the current use case, the KPIs are investment cost, throughput and YYY. The second phase concerns the retrieval of a similar plant configuration. First, the engineer defines the requirements, i.e. the KPIs, their target values and the relevant weight factors. Then, the plant configuration that is closest to the target KPI values is proposed to the engineer as a draft solution. The entire phase is illustrated within figure 1.
3.2. RESOURCE IDENTIFICATION
The second use case emphasizes the definition and storage of knowledge utilizing inference rules. In particular, two types of inference rules are utilized.



Figure 5 Inference Rules associating product with
required processes
The first type of rules associates a product with processes, while the second one connects processes with resources. In this way, the knowledge is systematically stored in the knowledge repository and is easily accessible by all the company departments all over the world. Consequently, engineers can query the KR for the resources required for a specific process, or for the processes required for a product, and be provided with the determined resources and processes.


Figure 6 Inference rules mapping processes with the
required resources
4. CONCLUSIONS
This paper proposed a knowledge based framework
that aimed to support the planning of a factory
throughout its lifecycle, with special emphasis
having been given to the design phase. The
knowledge framework was mainly based on three
pillars, namely that of ontology, the knowledge
repository and the knowledge association engine.
Ontology is responsible for modelling the
knowledge domain, in our case the factory planning
knowledge. The knowledge repository stores knowledge by utilizing semantic technology, while the knowledge association engine discovers new knowledge with the similarity functionality and defines new knowledge with the inference rules functionality. Finally, two use cases are presented, illustrating the potential and the applicability of the approach. In particular, the use cases show how past knowledge, in terms of past similar problems and inference rules, can be utilized for the design of a new assembly line. The specific use cases are at a preliminary phase and will be further enhanced in the future, demonstrating the capabilities of the knowledge manager to their full extent.
5. ACKNOWLEDGMENTS
This work has been partially supported by the
research project Virtual Factory Framework (VFF)
NMP2 2010-228595 funded by the European Union
Seventh Framework Programme (FP7/2007-2013).
REFERENCES
Chryssolouris, G., Manufacturing Systems: Theory and Practice, 2nd Edition, Springer-Verlag, New York, 2006.
Thakker, D., Osman, T., Gohil, S. and Lakin, P., "A Pragmatic Approach to Semantic Repositories Benchmarking", L. Aroyo et al. (Eds.): ESWC 2010, Part I, LNCS 6088, 2010, pp. 379-393.
Carroll, J.J., Dickinson, I., Dollin, C., Seaborne, A., Wilkinson, K. and Reynolds, D., "Jena: Implementing the Semantic Web Recommendations", Proceedings of the 13th International World Wide Web Conference, 2003, pp. 74-83.
Jena, "Jena - A Semantic Web Framework for Java", SourceForge, 2011, Retrieved: 15.06.2011, <http://www.openjena.org/>
Mc Bride, B., "An Introduction to RDF and the Jena RDF API", 2010, Retrieved: 15.06.2011, <http://jena.sourceforge.net/tutorial/RDF_API/>
Mc Carthy, P., "Search RDF data with SPARQL", IBM developerWorks, 2005, Retrieved: 15.06.2011, <http://www.ibm.com/developerworks/xml/library/j-sparql/>
The Apache Software Foundation, "Apache HTTP Server Project", 2011, Retrieved: 15.06.2011, <http://httpd.apache.org/>
The Apache Software Foundation, "Apache Tomcat 6.0", 2011, Retrieved: 15.06.2011, <http://tomcat.apache.org/tomcat-6.0-doc/index.html>
W3C, "OWL Web Ontology Language - Use Cases and Requirements", W3C, 2004a, Retrieved: 15.06.2011, <http://www.w3.org/TR/webont-req/#onto-def>
W3C, "OWL Web Ontology Language - Reference", W3C, 2004b, Retrieved: 15.06.2011, <http://www.w3.org/TR/owl-ref/>
W3C, "SPARQL Query Language for RDF", W3C, 2008a, Retrieved: 15.06.2011, <http://www.w3.org/TR/rdf-sparql-query/>
W3C, "SPARQL Update", W3C, 2008b, Retrieved: 15.06.2011, <http://www.w3.org/Submission/SPARQL-Update/>
W3C, "Web Services Activity", W3C, 2011, Retrieved: 15.06.2011, <http://www.w3.org/2002/ws/>
W3C, "XML Schema Part 1: Structures Second Edition", W3C, 2004c, Retrieved: 15.06.2011, <http://www.w3.org/TR/xmlschema-1/>
Kim, Y., Kang, S., Lee, S. and Yoo, S., "A distributed, open, intelligent product data management system", International Journal of Computer Integrated Manufacturing, Vol. 14, No. 2, 2001, pp. 224-235.
Gao, J.X., Aziz, H., Maropoulos, P.G. and Cheung, W.M., "Application of product data management technologies for enterprise integration", International Journal of Computer Integrated Manufacturing, Vol. 16, No. 7, 2003, pp. 491-500.
Scanlan, J., Rao, A., Bru, C., Hale, P. and Marsch, R., "DATUM Project: Cost Estimating Environment for Support of Aerospace Design Decision Making", Journal of Aircraft, Vol. 43, No. 4, 2006, pp. 1022-1028.
Papakostas, N., Efthymiou, K., Chryssolouris, G., Stanev, S., Ovtcharova, J., Schafer, K., Conrad, R.P. and Eytan, A., "Assembly Process Templates for the Automotive Industry", 3rd CIRP Conference on Assembly Technologies and Systems (CATS 2010), Trondheim, Norway, 2010, pp. 151-156.
Mourtzis, D., Efthymiou, K. and Papakostas, N., "Product cost estimation during design phase", 44th CIRP International Conference on Manufacturing Systems, Madison, USA, 2011.
Chryssolouris, G., Mourtzis, D., Papakostas, N., Papachatzakis, V. and Xeromeritis, S., "Knowledge Management in Selected Manufacturing Case Studies", Methods and Tools for Effective Knowledge Life-Cycle-Management, A. Bernard, S. Tichkiewitch (eds.), Part 3, Springer, 2008, pp. 521-532.
Makris, S., Michalos, G., Efthymiou, K., Georgoulias, K., Alexopoulos, K., Papakostas, N., Eytan, A., Lai, M. and Chryssolouris, G., "Flexible assembly technology for highly customisable vehicles", International Conference on Competitive and Sustainable Manufacturing, Products and Services (APMS 2010), Como, Italy, 2010.



A MANUFACTURING ONTOLOGY FOLLOWING PERFORMANCE
INDICATORS APPROACH
Konstantinos Efthymiou
University of Patras
efthymiu@lms.mech.upatras.gr
Konstantinos Sipsas
University of Patras
sipsas@lms.mech.upatras.gr


Dimitris Melekos
University of Patras
dmel@lms.mech.upatras.gr
Konstantinos Georgoulias
University of Patras
kgeo@lms.mech.upatras.gr
George Chryssolouris
University of Patras
xrisol@mech.upatras.gr
ABSTRACT
Ontology can be considered as the core of a knowledge management system, since it provides a
formal and explicit description of concepts in a discourse domain. This paper aims at defining a
manufacturing ontology, capable of modelling manufacturing systems, with special emphasis being
given to four performance indicators, namely cost, time, flexibility and quality. The proposed
ontology determines an overall scheme for the description of manufacturing knowledge, including
four sub-schemes for the performance indicators, the product, the orders and the plant. The classes
of each sub-scheme, their relationships and their attributes are presented in detail. Cost and time
assessment rules are defined, enhancing the ontology with reasoning mechanisms, and facilitating
the decision making process.
KEYWORDS
Ontology, Manufacturing Systems, Manufacturing Performance Indicators

1. INTRODUCTION
The demand for a manufacturing systems
knowledge management, throughout the whole
factory lifecycle, from requirements to the
dismantling phase, has been increasing steadily in
the past years. During the design phase, more than
75% of the activities comprise reuse of previous
design knowledge to address a new design problem
(Wildemann, 2003). Currently, engineers have to
rely upon their experience and search for past
relevant solutions in their companies' databases.
Therefore, a framework supporting the knowledge
management systems, within a whole manufacturing
system, would create a great advantage
(Chryssolouris et al, 2009). The basic core of such a
framework is a manufacturing ontology that models
and represents the knowledge domain.
The Process Specification Language (PSL) is a
language capable of describing discrete
manufacturing and construction process data
(Gruninger et al., 2003), based on CYC, a commercial ontology including 200,000 terms (Cycorp, Inc., 2008), (Schlenoff et al., 1999).
In the context of the network-based assembly
design support, the morphological characteristics of
assembly joints are modelled utilizing an ontology
study (Kim et al., 2009). A variety of geometrically
and topologically similar joints, using open standard
technologies, are modelled and the assembly joint
knowledge technology is described in a standard
way using the ontology. In particular, the assembly
hierarchical relationships are modelled so as to
define a set of assembly structures, a set of parts and
a set of form features, while the mere topological
representation of assembly joints is utilized for the
definition of various different assembly joints. In
(Alsafi and Vyatkin, 2010) an ontology is proposed
as the basis for an agent based reconfiguration
mechanism. The agent uses, without human


intervention, manufacturing environment knowledge represented by the ontology. In this way, the overhead costs of the reconfiguration process are minimized, since the procedure has been automated. The main classes cover tools, machines, material resources, the manufacturing operations related to the manufacturing environment, the logistic operation and the controller. The ontology also describes the relationships among the classes and their connections, providing hierarchies in general.
Similarly, in the context of reconfiguration needs, a comprehensive equipment ontology, based on the function-behaviour-structure paradigm, is proposed to facilitate the effective design of reconfigurable assembly systems (Lohse et al., 2006). The specific ontology emphasizes the functional capabilities of the equipment, so that it can be selected and integrated effectively. This ontology covers five main knowledge domains, concerning product, process, equipment, function and behaviour concepts, which are included in three representation levels. The first one, the knowledge representation level, describes the way that the different concepts, attributes, constraints and rules are implemented. At the level of ontology, all the specific domain concepts, attributes, constraints and rules are defined, while the last is the instantiation level. The
equipment structure is modelled with a hierarchy,
and so is the assembly activity function structure.
The described ontology is applied to a simple
assembly scenario that concerns the replacement of
a SCARA-type robot with a new one. Another
research, aiming to provide an equipment ontology,
in particular, a machine tool model that aims to
facilitate manufacturing information and knowledge
management, is provided in (Kjellberg et al., 2009).
The identified ontology concepts, constituting the
core of the ontology, are mapped with the
information models, utilizing different standards.
The mapping prescribes which application objects are to be used and how. Taking into consideration the type of information to be modelled, a variety of standards can be used. The paper presents a mapping example of machine tool kinematics, connecting in this way the ontology with the information standard. In particular, concepts of the tool model ontology, such as travel range and kinematic range, are instantiated with the AP214 model. The proposed ontology is extensible, allowing the addition of new concepts that can later be connected with the existing standards of the user's interest.
In (Lin and Harding, 2007), a manufacturing
ontology is proposed aiming to decrease the
complexity in exchanging manufacturing
information and sharing knowledge among
companies in various different projects. A general
manufacturing system engineering (MSE)
knowledge representation scheme is proposed,
aiming to facilitate the communication and
information exchange in inter-enterprise, multi
disciplinary engineering design teams by utilizing
the standard semantic web language (RDF, RDF
schema and ontology) (Lin and Harding, 2007). The present ontology addresses inter-company issues, related to the requirements of information semantic interoperability for knowledge sharing. The top level classes are the enterprise, the project, the flow, the resource, the process and the strategy, all of which are linked with relationships.
context of the knowledge management system for
process planning an ontology is developed for the
description of the process planner environment
(Denkena et al., 2007). In particular, the instances
and classes of the process-planning ontology,
concerning facility data, such as machines, tools,
raw data and order data such as product geometry,
product dimensions, order quantity and due data, are
organized in facility and order data hierarchies.
Finally, the classes and their relationships have been
implemented in the Protégé platform. In (Chen et
al., 2009), an integration mechanism for the product
lifecycle knowledge, concerning activities such as
coordination, communication and control is
proposed. The developed ontology is structured in
three different layers. The collaborative enterprise
defines a sharable local ontology, following the
local ontology schema as it is described by the
dominant enterprise. This layer concerns the
distributed local ontology. The second layer is
related to the product lifecycle. The dominant
enterprise defines a product lifecycle ontology,
based on the lifecycle phases and activities of the
required product lifecycle. Furthermore, the distributed local ontologies are integrated with the product lifecycle ontology, constituting the integrated global ontology layer. In this way, the
cooperating enterprises can share their knowledge
and can exchange information by utilizing the
developed product lifecycle ontology. The problem
of product knowledge exchange for collaborative
manufacturing is also identified in (Jiang et al.,
2010). In this approach, an ontology based
framework consisting of smaller integrated
ontologies is being proposed. The framework
includes five elements. Domain enterprises define the required knowledge and transform it, not into a product knowledge ontology, but into a local ontology.
The local ontologies of each enterprise are
integrated into the global ontology. Through this
global ontology, the enterprises can share and
exchange product knowledge leading to an
increased knowledge value. The process of
ontology integration consists of the following two
steps, the ontology mapping and the ontology


merging. The first one concerns the similarity measurement, sorting, filtering and linking of the ontology schema, while the merging of concept names composes the second step. The last element has to do with ontology querying. The user, based on his own needs, is able to search the ontology. The knowledge output derives from similarity computations over all the knowledge searched.
In conclusion, the existing enterprise ontologies are too generic to address the knowledge representation needs of manufacturing. In particular, this kind of ontology is restricted to terminological problems. On the other hand, the variety of manufacturing ontologies proposed in the last years emphasize a specific domain of manufacturing systems and do not allow an abstract description of the problems. For instance, a lot of specific ontologies focus on the representation of knowledge issues in assembly processes, in product design or in collaboration management, and do not provide a holistic overview of manufacturing systems. Moreover, ontologies that seem to provide adequate modelling knowledge of the manufacturing domain still do not cover the performance indicators and their connection with manufacturing classes such as processes and resources.
The current ontology attempts to fill in this gap by providing a generic framework that will enable the successful knowledge representation for a factory's lifecycle. The proposed ontology includes a description of the manufacturing attributes domain and associates the attributes with the plant, product and process classes. Apart from the knowledge representation, the association of the manufacturing attributes with the rest of the ontology classes facilitates the development of reasoning mechanisms. The reasoning mechanisms are responsible for deducing the manufacturing attributes' values, depending on the manufacturing system's characteristics and the different production system hierarchies.
The introduced ontology is implemented in the Protégé ontology editor by utilizing the OWL-DL language. The rules that are incorporated into the ontology are developed with the use of the SWRL rules tab of Protégé, while the execution of the rules can be performed by enabling the Jess engine plug-in.
The rest of the paper is organized as follows. Chapter 2 is dedicated to describing the structure of the ontology and to presenting the proposed knowledge representation of the manufacturing domain. Chapter 3 describes the reasoning mechanism used for deducing the manufacturing attributes' values, as well as providing a description of the rules and the associations related to the manufacturing attributes. In chapter 4, an instantiation of the proposed ontology is presented. Chapter 5 concludes with the basic outcomes of the work and suggests future research directions.
2. MANUFACTURING ONTOLOGY
SCHEME
There are three main questions that must be answered before the development of an ontology (Noy and McGuinness, 2002):
What is the domain that the proposed
ontology will cover?
What is the purpose of the ontology?
What are the questions that the ontology
will answer?
The current ontology aims to cover the
manufacturing domain, to model the structure and
the relationships among the primary physical and
virtual entities of a manufacturing system. The
purpose of the ontology is to represent plant,
product, orders, manufacturing attributes and define
their interconnections, in order to support the
modelling and the analysis of alternative plant
configurations useful for the different phases of the
factory lifecycle, such as design and planning. The
ontology will answer questions concerning the
assessment of manufacturing performance indicators
for alternative plant configurations or alternative
task planning activities, providing this way a
decision support mechanism.

Figure 1 Basic classes of the proposed manufacturing
ontology and their relationships
The overall ontology scheme is illustrated with the
help of figure 1. On the left side, the product class
and the orders class are presented while in the center
of the figure, the plant class is illustrated and


beyond it the manufacturing attributes structure is
depicted. The basic scheme of the ontology is
further analysed in the following paragraphs.
2.1. PLANT HIERARCHY
The structure of the factory is represented with the
use of the plant hierarchy. A five-layer hierarchical model of production control, including: (i) facility, (ii) shop, (iii) cell, (iv) workstation and (v) equipment, is proposed in (Jones and McLean, 1986; McLean et al., 1982). A more extended hierarchy, taking into account the production network scale apart from the plant scales, consists of seven levels, namely: (i) production network, (ii) production location/site, (iii) production segment, (iv) production systems, (v) production cells, (vi) workplaces/machines and (vii) processes (Westkämper and Hummel, 2006). In the current ontology, a more coherent approach to the plant hierarchy has been adopted, following a four-level hierarchy (Chryssolouris, 2006), including:
Factory level
Job shop level
Work center level
Resource level


Figure 2 Plant hierarchy scheme
The factory is the highest level in the hierarchy
and corresponds to the system as a whole, while it
also consists of job shops that represent a group of
workcenters. The workcenters are considered as a
set of resources that perform similar manufacturing
processes. A resource is regarded as a generic entity
that can be a machine, a human worker or a storage
area. In the present ontology, each level is modeled
as a class, as it is illustrated in figure 2. The Plant is
connected with the Jobshop using the relationship
consistsOfJobShop. Similarly, the jobshop class is
connected with the workcenter, with the relation
consistsOfWorkCenter, and the workcenter class
with the resource class, with the relation
consistsOfResource. The class Resource includes (i.e. is-a, in UML notation) the classes of Machine, Human and Buffer. The Machine is a specialization of the Resource and can model a robot, a lathe, or any other type of machine in general. Similarly, the Buffer is also a specialization of the Resource and represents the storage area between machines. Finally, the Human models the labourer, as a specialization of the Resource as well. The Tool represents the tools used by a machine or a human in order for a task to be performed. Consequently, the Resource is
associated with tools via the object property
hasTool.
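As an illustration, the plant hierarchy above can be sketched with Jena's ontology API roughly as follows; the namespace URI is an assumption, and the sketch covers only the classes and properties named in this section.

import com.hp.hpl.jena.ontology.ObjectProperty;
import com.hp.hpl.jena.ontology.OntClass;
import com.hp.hpl.jena.ontology.OntModel;
import com.hp.hpl.jena.rdf.model.ModelFactory;

public class PlantHierarchySketch {
    static final String NS = "http://example.org/manufacturing#"; // illustrative

    public static void main(String[] args) {
        OntModel m = ModelFactory.createOntologyModel();

        OntClass plant      = m.createClass(NS + "Plant");
        OntClass jobshop    = m.createClass(NS + "Jobshop");
        OntClass workcenter = m.createClass(NS + "WorkCenter");
        OntClass resource   = m.createClass(NS + "Resource");
        OntClass tool       = m.createClass(NS + "Tool");

        // Machine, Human and Buffer specialize Resource (is-a).
        m.createClass(NS + "Machine").addSuperClass(resource);
        m.createClass(NS + "Human").addSuperClass(resource);
        m.createClass(NS + "Buffer").addSuperClass(resource);

        // Composition relationships between the hierarchy levels.
        link(m, "consistsOfJobShop", plant, jobshop);
        link(m, "consistsOfWorkCenter", jobshop, workcenter);
        link(m, "consistsOfResource", workcenter, resource);
        link(m, "hasTool", resource, tool);

        m.write(System.out, "RDF/XML-ABBREV");
    }

    static void link(OntModel m, String name, OntClass domain, OntClass range) {
        ObjectProperty p = m.createObjectProperty(NS + name);
        p.addDomain(domain);
        p.addRange(range);
    }
}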
2.2. PRODUCT HIERARCHY
A generic and abstract structure of a product is
proposed for the needs of the ontology. The main
class, of course, is the Product. This concept represents the actual finished goods produced by the plant. The ontology object property that connects the product to the model is consistsOfModel, indicating that a product may have more than one model. This object property can be used for associating an instance of the class Product with instances of the class Model. The class Model is connected to the class Variant with the relationship consistsOfVariant. Part is the lowest level in the product hierarchy, specialized by the SinglePart and the Subassembly. The relationships of the subassembly, single part and part are modelled following the composite design pattern (Gamma, 1995).
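A minimal Java sketch of this composite pattern, with illustrative classes mirroring the Part, Subassembly and SinglePart concepts, might look as follows; the countSingleParts operation is an assumed example, not part of the ontology.

import java.util.ArrayList;
import java.util.List;

interface Part {
    int countSingleParts();
}

// Leaf of the composite: a single part counts as one.
class SinglePart implements Part {
    public int countSingleParts() { return 1; }
}

// Composite: a subassembly aggregates over its children.
class Subassembly implements Part {
    private final List<Part> parts = new ArrayList<Part>();

    void hasPart(Part p) { parts.add(p); }  // mirrors the hasPart relation

    public int countSingleParts() {
        int n = 0;
        for (Part p : parts) n += p.countSingleParts();
        return n;
    }
}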

Figure 3 Product hierarchy
2.3. ORDERS HIERARCHY
Corresponding to the plant hierarchy there is also
the workload hierarchical breakdown. Orders are
broken down into jobs, which in turn, consist of a
number of tasks. An order corresponds to the overall
production facility and is divided into jobs that
based on their specifications can be processed only
by a suitable job shop. A job consists of tasks that
can be released to one workcenter only. The tasks
can be dispatched to more than one of the work center's parallel resources (Chryssolouris and Lee, 1994). Based on this concept, the orders hierarchy is modelled with the following classes. The class Order models the actual order that is dispatched to a plant. Each instance of the class Order is associated with a plant, via the object property dispatchedTo. Additionally, each instance is associated with an instance of the class Product, via the object property isOrderFor. Finally, each order is associated with instances of the class Job, via the object property consistsOfJob. The class
Job, represents the jobs that constitute an order.
Each instance of the class Job is associated with an
instance of the class Jobshop and the instances of
the class Task, via the object properties
releasedToJobshop and consistsOfTask respectively.
Finally, the Task class models those tasks performed
by the resources. Each instance of the class Task is
associated with an instance of the class WorkCenter
and an instance of the class Resource via the object
properties releasedToWorkCenter and
dispatchedToResource respectively.
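For illustration, the order hierarchy could be instantiated with the same Jena ontology API roughly as follows; the namespace URI and the instance names are assumptions, and only a subset of the properties named above is shown.

import com.hp.hpl.jena.ontology.Individual;
import com.hp.hpl.jena.ontology.ObjectProperty;
import com.hp.hpl.jena.ontology.OntClass;
import com.hp.hpl.jena.ontology.OntModel;
import com.hp.hpl.jena.rdf.model.ModelFactory;

public class OrderInstantiationSketch {
    static final String NS = "http://example.org/manufacturing#"; // illustrative

    public static void main(String[] args) {
        OntModel m = ModelFactory.createOntologyModel();

        OntClass orderCls = m.createClass(NS + "Order");
        OntClass jobCls   = m.createClass(NS + "Job");
        OntClass taskCls  = m.createClass(NS + "Task");

        ObjectProperty consistsOfJob  = m.createObjectProperty(NS + "consistsOfJob");
        ObjectProperty consistsOfTask = m.createObjectProperty(NS + "consistsOfTask");

        // An order broken down into one job, which consists of one task.
        Individual order = orderCls.createIndividual(NS + "order_001");
        Individual job   = jobCls.createIndividual(NS + "job_001");
        Individual task  = taskCls.createIndividual(NS + "task_001");

        order.addProperty(consistsOfJob, job);
        job.addProperty(consistsOfTask, task);

        m.write(System.out, "N3");
    }
}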
Figure 4 Order hierarchy and its relationship with plant
hierarchy
2.4. MANUFACTURING ATTRIBUTES
HIERARCHY
The four most important attributes used for making a decision in manufacturing during the design, planning, operations and, in general, during the whole factory lifecycle, are cost, time, flexibility and quality (Chryssolouris, 2006). Therefore, the main classes of the performance indicators are classified into time, cost, quality and flexibility. Next, the classes are further analyzed with more specific subclasses and instances that are associated with the plant hierarchy and order hierarchy classes with the use of rules and ontology relationships. In the current study, flexibility has three subclasses, namely capacity, operational and product flexibility; the cost subclasses are modelled based on the Activity Based Costing method, while Time includes production rate and flowtime.
performance indicators hierarchy is to provide a
general scheme for classifying manufacturing
attributes and associating them with the plant and
order classes, thus, facilitating the assessment of the
performance indicators for a different level of the
plant hierarchy.




Figure 5 Performance indicators classification scheme

2.4.1. Cost
The cost modelling is based on the Activity Based
Costing (ABC) method. So, the main classes and
their structure follow the ABC modelling.
The Cost class is specialized into the overhead and operational cost classes. The Overhead class is further specialized by the classes Building, Consumables, Energy, Management and Maintenance. The Operational cost class is further specialized by the labour cost and equipment cost classes. The labour cost class is specialized by the classes overtime, wages and labour consumables. The equipment cost class is specialized by the classes depreciation, setup, equipment consumables and equipment energy.
2.4.2. Time
The class Time is specialized into the subclass of Production Rate. The scheme can be easily
enhanced with other specializations of the time
related performance indicators, such as lead time,
process time etc.
2.4.3. Flexibility
A high flexibility or a low sensitivity to a change
provides a manufacturing system with three
principal advantages. It is convenient to think of
these advantages as arising from the various types of
flexibility that can be summarized in three main
categories as in (Chryssolouris, 2006), namely those
of the product, capacity and operation flexibility.
Product flexibility enables a manufacturing
system to make a variety of part types with the use
of the same equipment. In the short term, this means
that the system has the capability of economically
using small lot sizes to adapt to the changing
demands for various products (this is often referred
to as production-mix flexibility). In the long term,
this means that the systems equipment can be used
across multiple product life cycles, increasing
investment efficiency.
Capacity flexibility allows a manufacturing
system to vary the production volumes of different
products in order to accommodate any changes in
the volume demand, while remaining profitable. It
reflects the ability of the manufacturing system to
contract or expand easily. It has been traditionally
seen as being critical for make-to-order systems, but
is also very important for mass production,
especially for high-value products such as automobiles.
Operation flexibility refers to the ability to
produce a set of products using different machines,
materials, operations and sequences of operations. It
results from the flexibility of individual processes
and machines, that of product designs, as well as the
flexibility of the manufacturing system structure
itself. It provides a breakdown tolerance namely, the
ability to maintain a sufficient production level even
when machines break down or humans are absent.



Figure 6 ICM Plant Structure, detailed breakdown of the forming jobshop

3. REASONING MECHANISMS
The objective of the reasoning mechanisms is the
definition of the rules that will allow for an
estimation of the manufacturing attributes
associated with ontology classes. Rules are divided
into two categories. The first type, is responsible
for aggregating performance indicators that belong
to different plant hierarchy levels. For example, the
investment cost of a workcenter is the aggregation
of the investment cost of the resources belonging to
it. The rules that are responsible for aggregating
performance indicators are not restricted only to
summation. Assessment of the minimum or
maximum values of a performance indicator,
characterizing a class in the plant hierarchy or the
order hierarchy, can be also addressed by a rule of
this type. The second type of rules, are responsible
for defining the relationships among the different
performance indicators. For instance, an operation
cost is considered as the aggregation of the
equipment and labour costs.
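The two rule types can be illustrated, outside of SWRL, with a plain Java sketch of the corresponding aggregations; the hierarchy, the numbers and the method names are illustrative assumptions, not the framework's implementation.

import java.util.Arrays;
import java.util.List;

public class AttributeAggregationSketch {

    // Type 1: aggregate a KPI across a plant hierarchy level, e.g. the
    // investment cost of a workcenter as the sum over its resources.
    static double workcenterInvestmentCost(List<Double> resourceCosts) {
        double sum = 0.0;
        for (double c : resourceCosts) sum += c;
        return sum;
    }

    // Type 1 is not restricted to summation: a production rate
    // aggregates as a minimum over the underlying level.
    static double minProductionRate(List<Double> rates) {
        double min = Double.POSITIVE_INFINITY;
        for (double r : rates) min = Math.min(min, r);
        return min;
    }

    // Type 2: relate different performance indicators to each other,
    // e.g. operational cost as equipment cost plus labour cost.
    static double operationalCost(double equipmentCost, double labourCost) {
        return equipmentCost + labourCost;
    }

    public static void main(String[] args) {
        System.out.println(workcenterInvestmentCost(Arrays.asList(80e3, 120e3, 60e3)));
        System.out.println(minProductionRate(Arrays.asList(12.0, 9.5, 15.0)));
        System.out.println(operationalCost(50e3, 30e3));
    }
}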
In the following sections, two rules for the
assessment of the production rate and the cost are
presented. The production rate assessment rule
belongs to the first type of rules, while the cost
assessment can be considered as a rule of the second
type. Rules in the present approach concern serial production. Initially, the concept of each rule is described, with the classes and the subclasses that are involved, while afterwards, the SWRL human readable syntax of the rules is provided.
3.1. PRODUCTION RATE ASSESSMENT
RULE
The production rate of a production line, whose
machines are in line, is dictated by the machine with
the lowest production rate. Hence, a rule,
identifying the minimum production rate of the
production line and assigning this production rate to the production line, has been defined. The
determination of such a rule allows the automatic
assessment of the production rate of a production
line.
The production rate of a jobshop is the minimum
production rate of the workcenters, belonging to the
jobshop. The SWRL syntax of the rule is provided
hereafter.

ProductionRate(?pr) ∧ WorkCenter(?wc) ∧ consistsOfResource(?wc, ?res) ∧ measuredBy(?res, ?pr) ∧ hasValueAsIs(?pr, ?val) → sqwrl:min(?val)

For every resource (res) that belongs to a
workcenter (wc) and is measured by the production
rate (pr) whose value is (val), the minimum
production rate value is estimated.

Where,
pr: the variable, whose value is the
production rate
res: the variable of the resource
wc: the workcenter variable
val: the value of the production rate
3.2. COST ASSESSMENT RULE
The cost assessment rule has been developed in
order for the cost of a workstation, performing a
series of tasks to be calculated. Based on the cost
hierarchy, as it is defined in section 2.4.1, the cost
classes and subclasses are associated with the task,
resource, workcenter and jobshop classes.
The total cost of a jobshop is considered the sum
of the overhead costs and the operational costs for
the tasks, performed by the resources of the
workcenters belonging to the jobshop. This rule can be represented in the SWRL syntax in the following way.




Figure 7 Part of the ICM bill of materials, detailed breakdown of cabin and bucket

EquipmentCost(?eqc) ∧ hasValueAsIs(?eqc, ?eqcval) ∧ Task(?t) ∧ measuredBy(?t, ?eqc) → sqwrl:sum(?eqcval)

For every task (t) that is measured by the equipment
cost (eqc) whose value is eqcval, the sum of the
eqcval is calculated.

Where,
t: is the variable of task
eqc: the equipment cost variable
eqcval: the value of the equipment cost
4. CASE STUDY
The manufacturing ontology is discussed in a real-life scenario in the ice cold merchandisers industry. An instantiation of the ontology is provided, in order for the applicability of the approach to real industrial problems of knowledge representation to be presented.
A simple subassembly of an ice cold merchandiser, in particular the Activator 700, and a jobshop of the production system are presented. Two main subassemblies of the Activator 700 are presented, i.e. the cabin and the bucket, in Figure 7. The cabin and the bucket consist of 12 and 5 single parts respectively. The plant includes 5 jobshops, and the first jobshop, namely the Cabin forming, includes 5 workcenters. The diagram of figure 6 presents the hierarchy of the plant as it is instantiated with the ontology. The ontology is implemented in the Protégé 4 platform, while for the rules and the reasoning mechanisms the Jess engine has been utilized. Figures 8 and 9 present the instantiation of the product and the plant respectively in the Protégé environment. In the same figures, in particular on the left, the main classes of the ontology can be viewed.
5. CONCLUSIONS
In this work, an ontological approach for structuring a manufacturing reference knowledge model has been presented. The emphasis is given to the modelling of performance indicators. The basic plant and manufacturing attributes hierarchies and the taxonomies developed are described. The reasoning mechanisms, utilizing the SWRL rules that are included in the ontology, are also described. Finally, an industrial use case is presented in order to show, in the context of knowledge representation, the ontology's efficacy in real industrial problems.


6. ACKNOWLEDGMENTS
This work has been partially supported by the
research project Virtual Factory Framework (VFF)
NMP2 2010-228595 funded by the European Union
Seventh Framework Programme (FP7/2007-2013).


Figure 8 Instantiation of product hierarchy classes

Figure 9 Instantiation of plant hierarchy classes
REFERENCES
Wildemann, H., Leitfaden zur Verkürzung der Hochlaufzeit und zur Optimierung der An- und Auslaufphase von Produkten, 1st ed., 29, TCW Transfer-Centrum Verlag, München, 2003.
Chryssolouris, G., Papakostas, N., Mourtzis, D. and Makris, S., "Knowledge Management in Manufacturing Process Modeling - Case Studies in Selected Manufacturing Processes", In: Methods and Tools for Effective Knowledge Life Cycle Management, A. Bernard, S. Tichkiewitch (eds.), Springer, 2009, pp. 507-520.
Gruninger, M., Sriram, R.D., Cheng, J. and Law, K., "Process specification language for project information exchange", International Journal of IT in Architecture, Engineering and Construction, Vol. 1, No. 4, 2003, pp. 307-328.
Kim, K.-Y., Chin, S., Kwon, O. and Ellis, R.D., "Ontology-based modeling and integration of morphological characteristics of assembly joints for network-based collaborative assembly design", Artificial Intelligence for Engineering Design, Analysis and Manufacturing, Vol. 23, No. 1, 2009, pp. 71-88.
Alsafi, Y. and Vyatkin, V., "Ontology-based reconfiguration agent for intelligent mechatronic systems in flexible manufacturing", Robotics and Computer-Integrated Manufacturing, Vol. 26, No. 4, 2010, pp. 381-391.
Lohse, N., Hirani, H. and Ratchev, S., "Equipment ontology for modular reconfigurable assembly systems", International Journal of Flexible Manufacturing Systems, Vol. 17, No. 4, 2005, pp. 301-314.
Kjellberg, T., von Euler-Chelpin, A., Hedlind, M., Lundgren, M., Sivard, G. and Chen, D., "The machine tool model - A core part of the digital factory", CIRP Annals - Manufacturing Technology, Vol. 58, No. 1, 2009, pp. 425-428.
Lin, H.K. and Harding, J.A., "A manufacturing system engineering ontology model on the semantic web for inter-enterprise collaboration", Computers in Industry, Vol. 58, No. 5, 2007, pp. 428-437.
Denkena, B., Shpitalni, M., Kowalski, P., Molcho, G. and Zipori, Y., "Knowledge Management in Process Planning", CIRP Annals - Manufacturing Technology, Vol. 56, No. 1, 2007, pp. 175-180.
Chen, Y.-J., Chen, Y.-M. and Chu, H.-C., "Development of a mechanism for ontology-based product lifecycle knowledge integration", Expert Systems with Applications, Vol. 36, No. 2, 2009, pp. 2759-2779.
Jiang, Y., Peng, G. and Liu, W., "Research on ontology-based integration of product knowledge for collaborative manufacturing", International Journal of Advanced Manufacturing Technology, Vol. 49, No. 9, 2010, pp. 1209-1221.
Schlenoff, C., Ivester, R., Libes, D., Denno, P. and Szykman, S., "An Analysis of Existing Ontological Systems for Applications in Manufacturing and Healthcare", NISTIR 6301, National Institute of Standards and Technology, Gaithersburg, 1999.
Noy, N.F. and McGuinness, D.L., "Ontology Development 101: A Guide to Creating Your First Ontology", Retrieved: 10-05-2011, <http://www.ksl.stanford.edu/people/dlm/papers/ontology-tutorial-noy-mcguinness-abstract.html>
Cycorp Inc., "Ontological Engineer's Handbook", 2008, Retrieved: 24-06-2011, <http://www.cyc.com/doc/handbook/oe/oe-handbook-toc-opencyc.html>
Westkämper, E. and Hummel, V., "The Stuttgart Enterprise Model - Integrated Engineering of Strategic & Operational Functions", In: Manufacturing Systems: Proceedings of the CIRP Seminars on Manufacturing Systems, Vol. 35, No. 1, 2006, pp. 89-93.
Chryssolouris, G. and Lee, M., "An approach to real-time flexible scheduling", The International Journal of Flexible Manufacturing Systems, Vol. 6, No. 3, 1994, pp. 235-253.
Jones, A. and McLean, C., "A proposed hierarchical control model for automated manufacturing systems", Journal of Manufacturing Systems, Vol. 5, No. 1, 1986, pp. 15-25.
McLean, C., Bloom, H. and Hopp, T., "The virtual manufacturing cell", In: Proceedings of the IFAC/IFIP Conference on Information Control Problems in Manufacturing Technology, Gaithersburg, MD, 1982.
Gamma, E., Helm, R., Johnson, R. and Vlissides, J.M., Design Patterns: Elements of Reusable Object-Oriented Software, Addison-Wesley, 1995, p. 395.
Chryssolouris, G., Manufacturing Systems: Theory and Practice, 2nd Edition, Springer-Verlag, New York, 2006.

STRUCTURING AND APPLYING PRODUCTION PERFORMANCE
INDICATORS


George Pintzos
Laboratory for Manufacturing
Systems & Automation,
University of Patras
pintzos@lms.mech.upatras.gr
Kosmas Alexopoulos
Laboratory for Manufacturing
Systems & Automation,
University of Patras
alexokos@lms.mech.upatras.gr
George Chryssolouris
Laboratory for Manufacturing
Systems & Automation,
University of Patras
xrisol@lms.mech.upatras.gr


ABSTRACT
This study presents a methodology for structuring Production Performance Indicators (PPIs) and
their application to different production levels. A number of relevant characteristics, such as
hierarchical levels and relative stakeholders are proposed and the indicators can be structurally
defined through the use of a PPIs template. Two main types of indicators that involve near to
real time metrics and the calculation of future best practices, for various aspects of a
manufacturing system, are investigated, while at application level, the study focuses on PPIs
related to the automotive industry. The successful definition and application of these indicators
improves the transparency and awareness of the current status of different production steps, while
the proposed PPI structure, provides a meaningful comparison of the different manufacturing
processes, within a manufacturing firm and across company borders.
KEYWORDS
Manufacturing, Production Indicator, KPI, Template, Energy Efficiency

1. DEFINITION OF PRODUCTION
PERFORMANCE INDICATORS (PPIS)
Measuring the performance of production systems is essential to every manufacturing enterprise, since in order for an activity to be controlled, its efficiency should be measured. Although the accuracy of mechanistic or physical measurements has advanced over the last years, measuring manufacturing performance is still a complex matter, due to its multi-dimensional nature.
The key performance indicator (KPI) is a number or value, which can be compared against an internal target, or an external benchmark, to give an indication of performance (Ahmad and Dharf, 2002).
2002). Performance indicators for the assessment of
production performance are an essential
requirement. Some of these measures are specific
and are related to particular properties of particular
production processes (Ahmad and Dharf, 2002).
In general, there are four classes of manufacturing
attributes to be measured when a manufacturing
system is being monitored: cost, time, quality and
flexibility. These depend on the particular problem's specific objectives, the goals, and the criteria. An
objective is an attribute to be minimized or
maximized. A goal is a target value or range of
values for an attribute, and a criterion is an attribute
that is evaluated during the process of making a
decision (Chryssolouris, 2006).
Although monitoring manufacturing attributes
was the main issue in the previous decades, recently


special attention has been drawn to Energy
Efficiency. The manufacturing industry is one of
the main consumers of energy with 31% of the
primary energy use and 36% of carbon dioxide
(CO2) emissions (International Energy Agency,
2007). The European Commission (2006) had
estimated that the energy saving potential for the
manufacturing sector was 25% and it had set target
objectives for the annual consumption of the
primary energy to be reduced by 20% by 2020.
One of the definitions of energy efficiency is that it is the goal of efforts to reduce the amount of energy required to provide products and services. Another one is that the same quality and level of some 'end use' of energy be achieved with a lower level of energy input (Ang, 2007). Of course, every
definition also depends on the type or category of
energy that is measured. For example, in the
Embodied Product Energy framework, the energy
consumed by various activities, in a manufacturing
system, is categorised into two groups: The Direct
and the Indirect Energy. Direct Energy is that used
by various processes, required for the
manufacturing of a product, whereas Indirect
Energy is that consumed by activities (e.g. lighting,
heating, ventilation) required for maintaining a
standard environment in the plant (Rahimifard et al,
2010).
Three main motivating factors have been
identified for the integration of an energy efficiency
monitoring and controlling system into
manufacturing companies (Bunse et al, 2010):
1. Rising energy prices: Soaring prices of oil and gas, as well as of other fossil fuels such as coal, due to the scarcity of the specific resources.
2. New environmental regulations with their
associated costs for CO2 emissions.
3. The purchasing behaviour of customers, who prefer more 'green' and energy-efficient products and services.
It is evident that the two main goals, concerning
Energy Efficiency, are the reduction both in energy consumption and in CO2 emissions. There is, however, a question that arises when developing an Energy Efficiency monitoring system: what differentiates these objectives from the minimization of cost (economic efficiency), which already exists, and what are the relationships between these objectives?
Energy Efficiency is usually measured thermodynamically and is therefore considered objective, with a constant value under the same conditions. That is true if, for example, some energy KPI is calculated with the use of a particular thermodynamic formula; this is, however, in contrast to the energy efficiency measures that incorporate economic units, which change as the economic environment changes, and hence so do the fuel prices. However, it could be argued that an indicator of purely economic measurements is not really an indicator of energy efficiency. It can mostly be seen as an economic efficiency indicator, since it is fully enumerated in economic value terms, and is therefore dismissed as a measure of energy efficiency (Patterson, 1996).
It is clear that a measurement system for the assessment of a plant's Energy Efficiency and manufacturing processes would require quantitative indicators of both energy efficiency and manufacturing performance. Therefore, a new type of indicator should be utilized, which could be used for measuring energy consumption, costs etc., as well as production performance, at the same time.
In this document, the term PPI (Production Performance Indicator) is introduced. A PPI is an indicator that uses historical energy and production related data, together with near-real-time monitoring data, for the calculation and prediction of various production related metrics. Some PPIs are used for optimising the production, whilst others for constructing optimisation objectives (Figure 1).

Figure 1-PPI definition
2. DESCRIPTION OF PPIS THROUGH A
COMMON TEMPLATE
To ensure that the PPIs can be used for a meaningful
comparison of different manufacturing processes
within the same manufacturing firm, a common
communication vessel must be established. The
main characteristics of all the PPIs should be
included in their description along with the ways
they are interrelated. Furthermore, the template


should include all the necessary information so that
the PPIs could be used by quantitative tools for
analysing complex systems for optimization,
simulation etc.
The characteristics chosen for the formation of the template can be seen below; a data-structure sketch of the template follows the list:
Index number: By having index numbers
attached to PPIs, it will be easier to describe the
interrelationships between them.
Category: The categories will be Cost, Time,
Quality, Flexibility and Reliability. In the case
of energy related PPIs, they will be described
through cost since most of the industries are
concerned with energy only through cost
deployment.
Sector: In the sector column, the production
sector that the PPI applies to will be specified
(manufacturing, maintenance etc.).
Name: The name of the PPI as a periphrasis
(e.g. Mean Time Between Failure).
Acronym/Symbol: The acronym or symbol of
the PPI (e.g. MTBF).
Formula: The calculation formula of the
indicator with an explanation of the variables
and their units, if required.
Data Source Type: The type of the source that
the data will be acquired from (Sensor, MES
system etc.).
Unit: The units of the indicator. In case of
percentages the per cent sign (%) should be
used.
Relations: The relevant PPIs in terms of input
and output to the described PPI. For the
relations the index number of the PPIs should be
used (e.g. OUT:3, IN: 4,7 which means that the
PPI receives data from PPIs 4 and 7 and gives
data to PPI 3).
Event Type: The type of the event of the PPI
regarding the timestamps of the event. The
values are periodic and non-periodic. For
example in domains such as temperature, where
events happen between standard time frames,
the periodic value is inserted.
Target: The target value of the PPI. When the
target value of a PPI is calculated from other
PPIs, the index number of that PPI will be used
(e.g. IN: 5, meaning that the number 5 PPI is
used for calculating the target value of the
current PPI). The BEST LOW or BEST HIGH phrases will be used to denote that the PPI's value should be either as low as possible or as high as possible, respectively.
Level: The hierarchical level that the PPI
corresponds to.
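As an indication of how these characteristics could be carried by software tools, a minimal sketch of the template as a plain Java data holder is given below; all field names simply mirror the characteristics above and are illustrative assumptions.

public class PpiTemplateEntry {
    int index;              // Index number of the PPI
    String category;        // Cost, Time, Quality, Flexibility or Reliability
    String sector;          // e.g. manufacturing, maintenance
    String name;            // e.g. "Mean Time Between Failure"
    String acronym;         // e.g. "MTBF"
    String formula;         // calculation formula, with variables and units
    String dataSourceType;  // e.g. Sensor, MES system
    String unit;            // e.g. "%", "kWh"
    int[] inputs;           // index numbers of PPIs this PPI receives data from
    int[] outputs;          // index numbers of PPIs this PPI gives data to
    boolean periodic;       // event type: periodic or non-periodic
    String target;          // a value, "IN: n", "BEST LOW" or "BEST HIGH"
    String level;           // hierarchical level the PPI corresponds to
}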
3. IDENTIFICATION OF PRODUCTION
PERFORMANCE INDICATORS
3.1. MANUFACTURING SYSTEM ANALYSIS
METHODS FOR THE IDENTIFICATION OF
KPIS
When implementing a Manufacturing Execution System (MES), diagnosing a disorder concerning material or information flow, or changing a manufacturing process for improving performance, a manufacturing analysis through modelling methods is mandatory in order for decisions to be taken (Vernadat, 2002).
A number of techniques exist to support the
manufacturing systems analysis (IDEF, simulation,
Petri Nets etc.). In (Hernandez-Matias et al, 2008) a
modelling framework was developed, called the Production and Quality Activity Model (PQAM), which integrated hierarchy, a database and performance indicators. The four components that the framework is comprised of are: the reference information model (a reference for structuring and classifying information); the quantitative and qualitative IDEF0 model (which is used for compiling all the information of the manufacturing system); the manufacturing data-warehouse (storage of all the information required for system diagnosis); and the evaluation methods for the support of decision-making issues. The component most valuable for the identification of KPIs is the reference information model, which uses the analytic hierarchy process (AHP) for linking activities. This component is comprised of subsystems, the third of which associates quantitative or qualitative information with the activities. In this subsystem, there are five types of information objects with a specific library of attributes. These are activity data, material input, material output, resource data and improvements data. In (Sénéchal and Tahon, 1998)
the authors present another modelling method that
can be applied to manufacturing systems. This
approach is based on modelling a manufacturing system from two points of view, the functional and the physical. Based on the functional view, the system is described through its processes, while, based on the physical view, it is described through a descending analysis of the resources. The decomposition of the processes of the functional analysis stops when the activities can be associated with the physical elements (Figure 2).
In (Lee et al, 2011) indicators are developed
regarding the performance of multiple
manufacturing partners on the basis of the Supply
Chain Operations Reference (SCOR) model. The


SCOR model provides a reference of supply chain
processes and the metrics, whilst through the
identification of the processes and their metrics the
KPIs can be identified and developed through a table with the following consecutive columns: the process of the relevant SCOR level; the relevant metric; the derived KPI; the KPI definition; and the equation.

Figure 2 Physical-Functional decomposition

Also, in (Cai et al, 2009) the
authors stated that given the complexity of supply
chains, a process-oriented SCOR model was the top
solution for the identification of the basic
performance measures and their KPIs. In practice,
most of the KPIs, in a supply chain, are correlated
and have causeeffect relationships. For that
reason, the KPIs that have high correlations with
each other have to be identified together with the
nature of their relationship. The relationships
among the KPIs are classified by the authors into
three categories: parallel, sequential and coupled.
3.2 PROPOSED METHOD FOR THE
IDENTIFICATION OF KPIS
It has become obvious that, for a thorough analysis of a manufacturing system and the identification of the necessary indicators, a combined methodology should be utilised. The identification of the necessary components of the methodology can be made through the fulfilment of certain requirements regarding the analysis, namely:
Association of manufacturing goals with KPIs.
Link of KPIs to monitored systems
components.
Identification of relations between the KPIs.
KPIs' association with the system's hierarchy levels.
A kind of comparison among the different
methodologies can be seen in the table below:
Table 1 - Modelling methods vs. requirements
Requirem.


Methods
Manuf.
goals to
KPIs
KPIs
to
system
links
KPIs
relations
Hierarchy
Levels
PQAM
(reference
information
model)


X
Physical-
functional
analysis
X X X
SCOR
applied to a
production
system
X

As seen in Table 1, none of the modelling methods
fulfils all the requirements, which means that none
of them can be used as is. Beyond the requirements'
fulfilment, the modelling methods have advantages
and disadvantages, as well as a degree of adaptability,
that are not described in the table. For example,
although the physical-functional analysis only
satisfies one requirement, if its functions are
replaced by KPIs derived from the goals, it comes
closer to the PQAM and meets the first two.
Furthermore, the KPIs relations of the SCOR model
can be integrated into the combined method, so as to
provide a clear view of the relations among the KPIs.
The proposed method for the analysis of
manufacturing systems and the identification of
PPIs should combine all the advantages of the
methods described in the previous section. The
method has four steps, which use different
components to accomplish a thorough and precise
analysis of a system and to identify and define its
PPIs. The steps are described below:
1. Analysis of the physical elements (with respect to
the hierarchical levels of the system) and
description of the respective activities (Figure 3).

Figure 3 - Physical elements analysis



Figure 4 - Goals to PPIs expansion and PPIs relations
The description of the respective activities can
help with the definition of the low-level PPIs.
2. Goals placement and expansion to desired PPIs.
The goals are vertically placed with respect to
the corresponding level and horizontally with
respect to the time frame they refer to
(Historical, Current and Predictive/ Future
information). This way it becomes easier to
perceive which PPI refers to which element or
activity.
3. Identification of the PPIs correlations and
visual representation as shown in Figure 4. The
connections of the PPIs will also reveal any
causal relationships. Arrows (e.g. from
PPI_1.2 to PPI_1) are used to represent computational
relationships, while plain lines represent causal
relationships that do not correspond to
quantitative associations between the PPIs.
4. Organization of PPIs in a PPIs template.
The process described does not necessarily have
a straightforward progression. Since every step
describes the analysis of a system from a different
point of view, gaps that could not be perceived
from a previous step can become apparent at any
time. Therefore, the process becomes iterative, until
all the PPIs have been properly described through the
PPIs template and all four steps have been completed
for the system.

4. CASE STUDY
To demonstrate the new developments in the
method of analysis and the use of the PPIs
template, a case study was carried out with
requirements provided by a European automotive
company.
4.1 REQUIREMENTS GATHERING AND
INTERPRETATION INTO PPIS
The requirements concern a machining line, which
comprises four main stations of machining tools
connected through gantries. The machines are also
connected to a number of necessary support systems,
including those of cutting fluid, compressed air, etc.
and a central HVAC system. The energy use is 50%
for process equipment and 50% for support systems.
The main requirement is the low-level monitoring of
the energy consumption of the main equipment and the
support systems, and its every-day assessment
against best practices. The energy consumption
should also be translated into cost, in order to provide
a full assessment of the factory's energy efficiency.
Through the first step, described in section 3.2, the
physical elements of the line were positioned at three
hierarchical levels (Production Line, Station and
Machine), with the HVAC at the top level, the
stations at the middle level, and the machines and
gantries, together with their activities, at the
bottom level.
Following steps 2 and 3, the requirements were
placed on the top level and expanded to the required
PPIs and metrics (Figure 5). The PPIs highlighted in
green are the ones that require real-time data
acquisition (i.e. sensors); they can be spotted, since
they are (a) in the 'Current' time frame and (b) at
the bottom of the PPIs connections (they do not
receive input from other PPIs). All the other PPIs
are calculated from them or originate from them.
The three main types of PPIs that were defined are:



Electric Energy Consumption of previous days:
this PPI is placed in the historical time frame and is
used to extract the necessary knowledge regarding
the energy consumption of the previous days.
Electric Energy Consumption: this PPI is used
to monitor the energy consumption on a daily
basis.

$$EE_C = \frac{\sum_{i=1}^{k} W_i \, T_C}{n} \quad (\mathrm{kWh}) \qquad (1)$$

where:
- $i$: index of the instance at which a power measurement is taken
- $k$: number of measurements
- $W_i$: power measurement at instance $i$
- $T_C$: constant time interval between measurements, in hours
- $n$: number of products produced that day

Electric Energy Consumption for the remaining
days: the average energy consumption allowed for
each of the remaining days of the year. It can be used
to benchmark the electric energy consumption that
the line should have for the remaining days of the
year, using the Electric Energy Consumption of the
previous days and the energy consumption goal for
that year.
$$E_d = \frac{E_y - \sum_{i=1}^{k} EE_{Ci}}{k_T - k} \quad (\mathrm{kWh}) \qquad (2)$$

where:
- $i$: day number
- $k_T$: total number of days for which production is scheduled in the current year
- $k$: number of days passed in the current year
- $E_y$: energy consumption goal for the current year
- $EE_{Ci}$: energy consumption of day $i$
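To make the two indicator definitions above concrete, the following minimal sketch computes both of them from sample data. It is an illustration only; the method and variable names, such as dailyConsumptionPerUnit, are invented and not taken from the paper.

```java
// Minimal sketch of equations (1) and (2); all names are illustrative only.
public class EnergyPpi {

    // Equation (1): EE_C = (sum_i W_i * T_C) / n, energy per unit produced.
    static double dailyConsumptionPerUnit(double[] powerSamplesKw,
                                          double intervalHours,
                                          int unitsProduced) {
        double energyKwh = 0.0;
        for (double w : powerSamplesKw) {
            energyKwh += w * intervalHours; // energy of one sampling interval
        }
        return energyKwh / unitsProduced;
    }

    // Equation (2): E_d = (E_y - sum_i EE_Ci) / (k_T - k), the average
    // consumption allowed for each remaining production day of the year.
    static double allowedConsumptionPerRemainingDay(double yearlyGoalKwh,
                                                    double[] dailyConsumptionsKwh,
                                                    int scheduledDaysInYear) {
        double consumedSoFar = 0.0;
        for (double c : dailyConsumptionsKwh) {
            consumedSoFar += c;
        }
        int daysPassed = dailyConsumptionsKwh.length; // k
        return (yearlyGoalKwh - consumedSoFar)
                / (scheduledDaysInYear - daysPassed);
    }

    public static void main(String[] args) {
        double[] power = {11.2, 12.5, 10.9, 11.8};    // kW, sampled every 0.25 h
        System.out.println(dailyConsumptionPerUnit(power, 0.25, 6) + " kWh/unit");

        double[] history = {95.0, 102.0, 98.5};       // kWh of days 1..k
        System.out.println(
                allowedConsumptionPerRemainingDay(25000.0, history, 220) + " kWh/day");
    }
}
```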

Figure 5 - Case study's goals to PPIs expansion and PPIs relations

In Figure 5, all the relations between the PPIs can
be set up and transferred to the 'Relations' column
of the template. The PPIs of the same name that
have different hierarchy stamps (machine, station,
etc.) are calculated as the sum of all the
corresponding PPIs below them. For example, PPI 2 is
calculated as the sum of PPI 1 over all the machines,
and PPI 4 is calculated as the sum of the PPI 2
values of all the stations, plus PPI 3, which covers the
support systems. As an example of the way the
indicators are organized in the template, two of them
are presented in Table 2.

Table 2 - Example of PPIs organization into the PPIs template

| No. | Category | Sector | Name | Acronym/Symbol | Formula | Data source type | Unit | Relations | Event type | Target | Level |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | Cost | Manuf. | Electric Energy Consump. per Unit (machine) | $EE_C$ | $EE_C = \sum_{i=1}^{k} W_i T_C / n$ | Sensor | kWh/unit | OUT: 2 | periodic | IN: 9 (BEST LOW) | Machine |
| 9 | Cost | Manuf. | Average Energy Consump. for remaining days | $EE_P$ | $E_d = (E_y - \sum_{i=1}^{k} EE_{Ci}) / (k_T - k)$ | Central data repository | kWh/unit | IN: 4,8; OUT: 1 (target) | periodic | - | Line |
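The hierarchy roll-up just described can be sketched in a few lines; the classes and values below are illustrative only (Java 16+ for the record syntax) and are not the paper's data model.

```java
import java.util.Arrays;
import java.util.List;

// Sketch of the roll-up: PPI 2 (station) sums PPI 1 over the station's
// machines; PPI 4 (line) sums the stations' PPI 2 values plus PPI 3.
public class PpiRollup {

    record Machine(String name, double ppi1) {}      // PPI 1, machine level

    record Station(String name, List<Machine> machines) {
        double ppi2() {                              // PPI 2 = sum of PPI 1
            return machines.stream().mapToDouble(Machine::ppi1).sum();
        }
    }

    public static void main(String[] args) {
        Station s1 = new Station("OP10", Arrays.asList(
                new Machine("M1", 3.2), new Machine("M2", 2.9)));
        Station s2 = new Station("OP20", Arrays.asList(
                new Machine("M3", 4.1)));

        double ppi3 = 5.5;                           // support systems
        double ppi4 = s1.ppi2() + s2.ppi2() + ppi3;  // line level
        System.out.println("PPI 4 (line) = " + ppi4);
    }
}
```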
5. CONCLUSIONS
In this paper, a methodology for identifying and
structuring performance indicators for
manufacturing systems was presented. The
methodology was based on existing methods, which
were connected and expanded sequentially. At the
end of the described process, all the PPIs are stored
into a PPIs template, which is useful for conveying
them into event processing and general monitoring
systems. The methodology was applied to an
automotive industrial use case through the
acquisition of industrial requirements, which
concern most of the firms in the manufacturing
domain.
The effort made in the work prepared and
presented in this paper aimed at establishing a
common knowledge model concerning the
description of production indicators. Further work
will be carried out, culminating in a more descriptive
model of knowledge indicators, which will be
presented in future papers, along with a possible
integration with an indicators framework.
6. ACKNOWLEDGMENTS
This work has been partially supported by the
research project KAP, funded by the CEU. The
authors would like to express their gratitude to Dr
Thomas Lezama and Dr Zhiping Wang from Volvo
Technology Corporation (VTEC) for supporting
this research effort.

REFERENCES
Ahmad M.M. and Dhafr N., Establishing and
improving manufacturing performance measures,
Robotics and Computer-Integrated Manufacturing,
Vol. 18, No. 3, 2002, pp 171-176
Ang B.W., Energy Efficiency: Conceptual and
Measurement Issues, 2007, retrieved 20 September
2011, <http://www.iea.org/work/2007/singapore/20_Ang_efficiency.pdf>
Bunse K., Vodicka M., Schönsleben P., Brülhart M. and
Ernst F.O., Integrating energy efficiency performance
in production management - gap analysis between
industrial needs and scientific literature, Journal of
Cleaner Production, Vol. 19, No. 7, 2010, pp 667-679
Cai J., Liu X., Xiao Z. and Liu J., Improving supply
chain performance management: A systematic
approach to analysing iterative KPI accomplishment,
Decision Support Systems, Vol. 46, No. 2, 2009, pp
512-521
Chryssolouris G., Manufacturing Systems: Theory and
Practice, Third Edition, Springer-Verlag, New York,
2006, p 9
European Commission, Action Plan for Energy Efficiency:
Realising the Potential, Commission of the European
Communities, Brussels, 2006, p 3
Hernandez-Matias J.C., Vizan A., Perez-Garcia J. and
Rios J., An integrated modelling framework to
support manufacturing system diagnosis for continuous
improvement, Robotics and Computer-Integrated
Manufacturing, Vol. 24, No. 2, 2008, pp 187-199
International Energy Agency, Tracking Industrial Energy
Efficiency and CO2 Emissions, OECD/IEA, Paris,
2007, p 19
Lee J., Jung J.W., Kim S.K. and Jung J.Y., Developing
Collaborative Key Performance Indicators for
Manufacturing Collaboration, Proceedings of the
2011 International Conference on Industrial
Engineering and Operations Management, 2011
Patterson M.G., What is energy efficiency?: Concepts,
indicators and methodological issues, Energy Policy,
Vol. 24, No. 5, 1996, pp 377-390
Rahimifard S., Seowa Y. and Childs T., Minimising
Embodied Product Energy to support energy efficient
manufacturing, CIRP Annals-Manufacturing
Technology, Vol. 59, No. 1, 2010, pp 25-28

603

Sénéchal O. and Tahon C., A methodology for
integrating economic criteria in design and production
management decisions, International Journal of
Production Economics, Vol. 56-57, 1998, pp 557-574
Vernadat F., Enterprise modeling and integration
(EMI): current status and research perspectives,
Annual Reviews in Control, No. 2, 2002, pp 15-25




Proceedings of DET2011
7th International Conference on Digital Enterprise Technology
Athens, Greece
28-30 September 2011
A WEB-BASED PLATFORM FOR DISTRIBUTED MASS PRODUCT
CUSTOMIZATION: CONCEPTUAL DESIGN
Dimitris Mourtzis
Department of Mechanical Engineering
and Aeronautics, University of Patras,
Patras, Greece
mourtzis@lms.mech.upatras.gr
Michael Doukas
Department of Mechanical Engineering
and Aeronautics, University of Patras,
Patras, Greece
mdoukas@lms.mech.upatras.gr

George Michalos
Department of Mechanical Engineering
and Aeronautics, University of Patras,
Patras, Greece
mdoukas@lms.mech.upatras.gr
Foivos Psarommatis
Department of Mechanical Engineering
and Aeronautics, University of Patras,
Patras, Greece
psarof@lms.mech.upatras.gr
ABSTRACT
The currently promoted mass customization paradigm generates complexity in manufacturing and
high costs, issues that manufacturers need to address. The work discussed in this paper aims at
bridging the gap between mass production and mass customization, by engaging the customer in the
design of unique products and by enabling the OEMs to efficiently handle the production by
exploiting the benefits of decentralized manufacturing. The conceptual framework presented in this
paper is a web-based collaboration platform that consists of a) the user design system which focuses
on providing user-friendly design tools that will allow the customer to perform unique design
changes, in a controlled way, b) the decentralized manufacturing decision making tool, which
allows the systematic generation, evaluation and selection of cost-efficient and eco-friendly
manufacturing/supply procedures for the manufacturing of the product and c) the network
infrastructure for providing and maintaining the interoperability among the aforementioned systems.
The software implementation tools are presented and the expected results from the exploitation of
the platform are indicated.
KEYWORDS
Mass customization, user adaptation, decentralized manufacturing, web technologies
1. INTRODUCTION
The landscape of the global market has changed
over the last decade and centralized mass
production seems unable to cope with the emerging
production requirements that globalization has
imposed. Manufacturers from around the globe are
adopting new production paradigms in order to
maintain their competitiveness and responsiveness
in a rapidly changing environment. The transition,
though, from the mass production to the mass
customization paradigm presented the original
equipment manufacturers (OEMs) with
unprecedented problems. The dynamic nature of a
mass customization environment creates
disturbances in production, high complexity and
uncertainties that centralized control frameworks
cannot handle. The transition from a Build-To-
Stock (BTS) system, where products are fabricated
according to demand forecasts for an anonymous
mass market, to an assemble-to-order and Build-To-
Order (BTO), where products are only built in
response to an actual customer order, demands a
large number of adjustments in marketing,
production and logistics (Brauer, 2008 and Hu et al.,
2011). Assembling directly to order, on the other
hand, reduces the risk of loss due to obsolescence.

Completed products will become obsolete more
often than many of their components, thus there is a
definite advantage in not assembling before a
customer actually has ordered a product. In an
assemble-to-order environment, delayed
differentiation can allow production of some
standard components while the customer specific
features and components can be added as late as
possible (Hu et al., 2011). Moreover, the
configuration of the supply chain and transportation
logistics has become a much more complex task
than it was in the past. Quality, cost and delivery
have been the most important measures of
manufacturing systems performance, but
performance metrics in terms of environmental
sustainability and social / operator impact are
gaining much attention (Hu et al., 2011).
Furthermore, environmental issues imposed by
legislation, require green manufacturing practices
and effective management of materials and waste.
2. STATE OF THE ART
The manufacturing industry has evolved through
several paradigms since its birth two centuries ago
(Figure 1). The first paradigm that emerged, around
1850, was Craft Production, which focused on
creating exactly the product that the customer
requested, but at a high cost. Afterwards, in the
1910s, Mass Production allowed the low-cost
manufacturing of large volumes of products,
enabled by interchangeability and Dedicated
Manufacturing Systems (DMS). However, the
product variety offered by such production was very
limited, as evidenced by the famous quote from
Henry Ford: "Any customer can have a car painted
any colour that he wants so long as it is black"
(Ford, 2010). In the late 1980s, Mass
Customization (Pine, 1992) emerged as a new
paradigm in response to consumer demands for
higher product variety and manufacturers started to
offer larger numbers of product options or
variants of their standard product. This was partly
achieved cost-effectively by designing a series of
basic product options and allowing the customers to
select the assembly combination that they prefer
most. Such an approach allows the manufacturer to
achieve economy of scale at the component level,
and use reconfigurable assembly systems to create
high variety for the economy of scope of the final
assembly (Hu et al., 2011).
Over the last decades the local economy has
transformed to a global and highly competitive
economy. The globalization of the markets that
came along with technological innovations reshaped
the value added chain in the global manufacturing
network (Feldmann et al., 1996). Industries started
to operate globally, expanding the limits of their
business. The export of finished goods to foreign
markets was the dominant theme in international
trade up to the 1990s and has gained even more
attention over the last decade (Abele et al., 2006).


Figure 1 - Changes in manufacturing paradigms (Hu et al.,
2011)
The growth of the Internet and the software
technologies that flourished at the same time
provided the means for this globalization
(Chryssolouris et al., 2004). Meanwhile, the
transportation costs for the main intercontinental
transport modes, air and sea, have both dropped
significantly, and manufacturers became able to
distribute their production among dispersed plants,
located in places with low labour costs (Abele et al.,
2006). The decentralized manufacturing approaches
that have replaced centralized practices have evolved
further with the help of the Internet, which facilitated
the coordination process in the manufacturing
networks (Zhang et al., 2010).
Traditional manufacturing methods, like centralized
mass production, were no longer capable of
fulfilling the demand of the market, due to their
rigidity and low responsiveness (Leitao, 2009).
Decentralized approaches for manufacturing have
been studied extensively (Abele et al., 2006 and
Brauer, 2008), showing their benefit in delivery
times compared to centralized scenarios.
Chryssolouris stated in 2004: "It is increasingly
evident that the era of mass production is being
replaced by the era of market niches. The key to
creating products that can meet the demands of a
diversified customer base is a short development
cycle, yielding low cost, high quality goods in
sufficient quantity to meet the demand. This makes
flexibility an increasingly important attribute to
manufacturing" (Chryssolouris et al., 2004).
Currently the competitiveness of a company is
mostly dependent on its ability to perform well in
dimensions of cost, quality, delivery, dependability
and speed, innovation and adaptability to demand

variations (Mourtzis et al., 2008). Mass
customization, on the other hand, which came to fill
the apparent gap in the market landscape, presented
manufacturers with unprecedented problems and
imposed new requirements on them.
The increase in manufacturing complexity, the
dynamic production environment, the escalating
cost, the information flow and environmental
legislations/ regulations are some of the issues that
emerge in a mass customization environment
(Zhang et al., 2010). Potential solutions in order to
achieve mass customization, including changes in
the product design and manufacturing process, have
been proposed (Tu et al., 2001, Monostori et al.,
2006). At present, though, most researches are
concerned with the strategic impact of mass
customization and do not address to specific
implementation issues (Helms et al., 2008).
In addition, the advent of the World Wide Web
(WWW) revolutionized the market scene. Over the
last fifteen years, an increase in online purchases has
been recorded. The market today is shifting
towards online purchases, offering to the customers
a wide variety of products. Recent surveys show
that 89% of the buyers prefer to shop online over in-
store shopping (Knight, 2007). Following that,
consumers around the globe expressed the need for
unique products that combine quality, with short
life-cycles, that are also available at low prices, at
the right time (Thirumalai et al., 2011). A large
number of product varieties is offered to the
consumers by many different manufacturers
in virtually every industrial sector. Variety can be
achieved at different stages of product realization:
during design, fabrication and assembly, at the stage
of sales, and through adjustment at the usage phase;
it can also be added during the fabrication process,
e.g. through machining or rapid prototyping (Hu et
al., 2011). In the networked world, firms are
recognizing the power of the Internet as a platform
for co-creating value with customers (Sawhney et
al., 2005). Online customization is becoming
available for every customer, and truly unique
products will be requested every moment by users
around the globe. The customer, therefore, has to be
treated as an individual and not merely as a
market segment (Helms et al., 2008). The needs of
the customer must be identified and sufficiently
satisfied. The end user should be integrated into the
design process, so that, through their involvement,
useful information can be extracted by the
manufacturers (Chryssolouris et al., 2008).
based and e-Commerce systems have been
implemented and have proved to be very effective
in capturing the pulse of the market (Helms et al.,
2008). The online competition among companies
results in a rapid evolution of web technologies.
Web-based toolkits for mass customization
purposes are deployed that aim at providing a set of
user-friendly design tools that allow trial-and-error
experimentation processes and deliver immediate
simulated feedback on the outcome of design ideas.
Once a satisfactory design is found, the product
specifications can be transferred into the firm's
production system, and the custom product is
subsequently produced and delivered to the
customer (Franke et al, 2008).
Web technologies, moreover, have been widely
employed in developing manufacturing systems to
associate various product development activities,
such as marketing, design, process planning,
production, supply network management and
customer service (Yang and Xue, 2010). In this
web-based decentralized manufacturing framework,
in order for the manufacturers to achieve production
capacity and costs that can be compared to those of
mass production, high coordination and efficient
integration must be achieved between the domains
of production (Huang et al., 2008). The integration
and collaboration among different partners of the
product development team improved the product
quality and reduced the product lead-time, thus
providing better global competitiveness for the
companies (Yang and Xue, 2010). Nowadays more
and more design and assembly work is conducted as
collaborative projects across globally distributed
design teams, companies and software modules (Hu
et al., 2011). Proposed approaches included e-
Assembly systems for collaborative assembly
representation (Chen et al., 2004) and web-based
collaboration systems (Janardanan et al., 2008).
The research in this area, however, needs to be
expanded in order to provide tools for assembly
representation for product variant customization.
The reason is that globalized design and
manufacturing often require the variants for local
markets to be generated by regional design teams
that use different assembly software and supply
bases (Hu et al., 2011).
Effective upstream and downstream integration
between all business partners has only recently
become achievable, up to a point, thanks to web
services. The Internet effectively provided the
means to achieve the necessary integration between
every partner in the supply chain. Where real-time
demand information and inventory visibility were
once impossible, web-based technologies offered
the tools for supply chain forecasting, planning,
scheduling and execution (Frohlich and Westbrook,
2002).
2002). Nowadays we see evolving production
networks with a temporary cooperation mostly
dedicated to the product life of a product family
(Wiendahl et al., 2007). Inside these networked
production environments, however, unexpected

changes occurring in the production plan can cause
major disruptions within the manufacturing
organization (Tolio and Urgo, 2007). The
manufacturing environments are to a great extent
characterized by uncertainty, whereas most
production planning approaches assume perfect
information and a static deterministic environment
(Tolio and Urgo, 2007). Real-time schedule
monitoring and filtering approaches based on
statistical throughput control have been described
for recognizing and evaluating the impact of
disturbances and changes in production flow
(Monostori et al., 2007). Agent-based control
systems, based on real-time systems, were developed
later (Monostori et al., 2006); in spite of their
promising perspective, they have not been fully
adopted by the industry. The main barriers to the
adoption of such systems are, firstly, the required
investment for their implementation and, secondly,
the manufacturers' hesitation to embrace new
production processes (Leitao, 2009).
supply network is based on information flow
between autonomous enterprises, which is
asymmetric and in part uncertain. Mainly this is
attributed to the different goals of the stakeholders
and their opportunistic stance (Vncza, Egri and
Monostori, 2008).
3. THE WEB-BASED FRAMEWORK
ARCHITECTURE
In this research work, a Web based platform that
accommodates the supervision, planning and
coordination of the design, production and
distribution phases of highly customized products is
proposed. The conceptual system architecture
presented in this paper (Figure 2) comprises
different modules that are responsible for
performing autonomous tasks. The output of one
module is used as input to another, and vice
versa. The User Adaptation module offers online
customization and ordering features, the
Decentralized Manufacturing module is responsible
for the generation and evaluation of the alternative
production and supply schemes and the Network
Infrastructure module manages and integrates the
previous autonomous modules. The efficient
collaboration between the aforementioned modules
offers high coordination of the different production
stages, from the design of a new product to its
delivery to the customer. Moreover, the deployment
customer delivery runs. Moreover, the deployment
of individualized modules separates the
maintenance concerns in terms of software. The
system proposed is scalable and extendable.
Further to that, the system is capable of
accommodating and cooperating with software such
as Enterprise Resource Planning (ERP) suites,
Product lifecycle Management (PLM) suites,
Computer-Aided Design and manufacturing
(CAD/CAM) software and other custom tools
through data exchange features and specialized
interfaces. The interconnection between the plug-in
tools and software suites, with the developed
platform, is achieved through Web Services and
Service-oriented Architecture (SoA). SoA
encapsulates approaches like Software as a Service
(SaaS) that is a software distribution model in
which applications are hosted by a vendor or service
provider and made available to the customers over a
network, typically over the Web. Therefore, the
external software is treated as an actor to the main
system and communicates with it via customized
Web Services (Sun et al., 2007).
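As a rough illustration of this integration style, the sketch below exposes a single piece of platform functionality as a web service with JAX-WS (bundled with older JDKs and available as a separate library for newer ones). The service name and its one operation are hypothetical, not taken from the paper.

```java
import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.xml.ws.Endpoint;

// Hypothetical endpoint; external tools (ERP, PLM, CAD/CAM suites) could
// consume the WSDL that JAX-WS generates for it.
@WebService
public class OrderStatusService {

    @WebMethod
    public String getOrderStatus(String orderId) {
        // A real implementation would query the central data repository.
        return "IN_PRODUCTION";
    }

    public static void main(String[] args) {
        Endpoint.publish("http://localhost:8080/orderStatus",
                new OrderStatusService());
    }
}
```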
The data exchange between the modules is
channelled through a central hub, namely the
Network Infrastructure module that is responsible
for the interoperability of the various modules, the
data exchange, integrity and storage and for the
management of the authoring privileges for the
various end user groups of the platform. The data
types that are shared by and produced from the
system components vary from 3D graphics and
eXtensible Markup Language (XML) files, to
simple text files and images.

Figure 2 - The conceptual system architecture

3.1. SCENARIO DESCRIPTION
The typical scenario of the system's functionality is
described below and depicted in Figure 3. The end
users (actors) of the system access the web tool via
a web-browser and provide their personal data in
order to register to the system. The registered
users details are stored in the system database that
runs on a remote server. A customer accesses the
online platform in order to purchase a product or to
express his opinion, through a structured
commenting and voting system, for concept
designs, which are previously created and uploaded
to the system by the OEM. The customer performs
customization actions on the product through
specifically tailored Graphical User Interfaces
(GUIs). The customization options are unique for
each product and are performed in terms of
selection of accessory alternatives, different colours
and materials for a specific alternative, and even
geometric modifications, where feasible. The
different products and the customization options
that are available to the customers are defined by
the OEM and inserted to the system via specialized
GUIs. The data of the personalized product, based
on the customer selections, are transferred to the
Decentralized Manufacturing module through the
Network Infrastructure.


Figure 3 - Graphical description of a typical scenario
The Decentralized Manufacturing module is
responsible for the generation of the customized Bill
of Materials (BoM) and Bill of Processes (BoP),
having as an input the customized product.
The output of the procedure is a number of
alternative production and supply schemes. The
supply schemes are generated, taking also into
account the transportation logistics and cost
parameters. The alternative schemes are evaluated
in terms of their environmental impact (CO
2

emission, Global Warming Potential-GWP 100,
etc.), production cost, delivery time and other user-
defined criteria and the best alternative solution is
presented to the OEM. The customer is presented
with simplified data regarding his/her order, like the
order cost, environmental impact of the selections
and delivery time. The OEM and the other
stakeholders are presented with more complex
information about the production processes, the
transportation schemes and the environmental
impact. The selected production scheme is stored in
a central database for future use on similar
customization profiles. In case a new customization
is performed by a customer, the system
automatically compares the manufacturing
requirements of the new order with the already
calculated scenarios that are stored in the database.
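The reuse of already calculated scenarios can be pictured as a simple lookup keyed on the order's manufacturing requirements; the sketch below is hypothetical, since the paper does not specify how orders are matched.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical cache of evaluated production schemes, keyed by a signature
// derived from the customized BoM (e.g. sorted component ids and options).
public class SchemeCache {

    private final Map<String, String> evaluatedSchemes = new HashMap<>();

    void store(String bomSignature, String productionScheme) {
        evaluatedSchemes.put(bomSignature, productionScheme);
    }

    String lookup(String bomSignature) {
        // On a hit the stored scheme is reused; on a miss the Decentralized
        // Manufacturing module would generate and evaluate new alternatives.
        return evaluatedSchemes.get(bomSignature);
    }
}
```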
3.2. USER INVOLVEMENT IN THE PRODUCT
CUSTOMIZATION
The average Web user cannot comprehend the
actual complexity of a Web application by just
looking at its front-end that is rendered through a
Web browser. The HyperText Markup Language
(HTML) that defines the presentation of Web pages
is nothing but the surface of a Web application,
while the actual business logic is running on a
remote server or, in some cases, is even executed in a
distributed manner across a series of remote servers
(Castelyn et al., 2009).
The User Adaptation Design module aims at
enabling the adaptation of a variety of users to the
platform. Different users will have different
privileges and interfaces with which they interact
with the system. The role groups are predefined
and each group is capable of performing a set of
actions. Adding to that, the users acquire a role
with specific authorization rights upon registering.
User-friendly interfaces with simple to use controls
are presented to the customers through the HTML
view rendered with a web-browser or WML
(Wireless Markup Language), whereas fully
featured and stricter interfaces are provided to the
OEMs and other expert users of the system. The
administrator of the site (usually the OEM) as well
as other expert users of the web-platform
(Suppliers, etc.) are presented with more
complicated GUIs using Swing, which is part of the
JFC (Java Foundation Classes), and XML based
communication. Customized interfaces can be
found in enterprise applications and provide support
for different actors with different interfaces (Figure
4). The goal is to effectively involve the customers
in the initial design of a product and to offer the

desired collaboration and communication to the
OEMs, suppliers, sales representatives, etc.


Figure 4 - Multiple interfaces and functionalities support
through MVC pattern
In the pre-production phase, the User Adaptation
system aims at capturing the market's pulse, which
has been proven to be an efficient way of delivering
to the customers products that they truly desire.
The design phase is crucial if a product is to prevail
over the intense competition. By
product is to prevail the immense antagonism. By
engaging the customers in the early phase of a
products lifecycle, a competitive advantage is
given to the OEMs and enables them to deliver
products based on actual market feedback (Helms et
al., 2008). In order to accomplish that feat, original
equipment manufacturers (OEMs) must exploit the
latest tools and technics available. The World Wide
Web has provided the means to create online
communities of customers throughout the globe,
each with individual needs and acceptance criteria.
Web 2.0 ushered in the creation of online communities
and gave the OEMs the capability to extract valuable
knowledge, which they can later exploit to achieve the
desired level of competitiveness in a globalized
market region. Research in human computer
interaction (HCI) is advancing rapidly and aims at
improving the client side application logic.
A crucial aspect for the success of a web
application is its "look and feel". By providing
user-friendly interfaces to the customers, comprised
of product galleries and visualization means, the
OEMs can extract data very quickly and cost-efficiently.
The elaboration, however, of the users' feedback is
a challenging task that needs to be addressed.
Voting mechanisms presented through dynamic
interfaces have been implemented in this proposed
work and offer advanced functionalities, by
exploiting state of the art client-side technologies.
Moreover, modern client-side technologies, such as
HTML5, AJAX (Asynchronous JavaScript and XML)
and Flash, offer state of the art features
(Castelyn et al., 2009). Dynamic content inside the
web browser is a fairly new web technique, and the
most prominent websites nowadays comprise almost
entirely dynamic content (Shi et al., 2003).
In the production phase of a product, the
customer can edit the product features, such as the
colour, the material, the texture and even its 3D
geometry. The system is responsible for rendering
the image in real time. The customization options
are constrained, however, by the OEMs, due to
manufacturability issues. Moreover, the
manipulation of a product is performed using slide-
bars, radio buttons and other straightforward design
tools addressed to inexperienced users. The customer
is able to place an order, once the customization
he has performed fulfills his demands.
3.2.1. VIRTUAL AND AUGMENTED
REALITY FOR USER ADAPTATION
The involvement of the customer in the designing of
a new product can be further enhanced by state of
the art visualization aids like Virtual Reality (VR)
platforms and Augmented Reality (AR) technology.


Figure 5 - 3D visualization plugin for W3C browsers
The customized product can be visualized in an
immersive way at specific sites, e.g. the company's
sales representative site, or from a personal PC with
the use of augmented reality applications. Moreover,
browser plugins are capable of supporting 3D
graphics representation and manipulation (Figure
5). Basic commands like fly-over, pan, rotate and
zoom are provided to the customers and aid the
visualization of the product. Further to that,
Augmented Reality (AR) technology has evolved
and it is now possible to embed AR applications
inside a web browser (Uva et al., 2010).
3.3. DECENTRALIZED MANUFACTURING
PLANNING MODULE

The customer needs, which are expressed through
the customized product, must be fulfilled by the
OEMs in a cost-efficient, fast and environmentally
friendly way. The decentralized production
network should be configured so as to carry out the
production according to the customer's preferences. In this
era of global sourcing, to reduce purchase costs and
attract a larger base of customers, retailers and
OEMs are constantly seeking suppliers with lower
prices. These suppliers however, are located at
greater and greater distances from the OEM sites
and retailer distribution centers (DCs) and stores
(Kim et al., 2011). The production and assembly of
the product and its components is carried out in
remote sites (Figure 6).


Figure 6 - Decentralized Production Network
The decentralized manufacturing module defines
and guides the manufacturing processes in the
remote sites. The customized product acts as an
input to the decentralized manufacturing module,
which is responsible for the generation of the formal
tree structure of the BoM and BoP in an XML
format. Having the generated BoM and BoP of the
particular customization as input data, the
generation of the alternative supply and production
schemes is performed automatically by the system,
which compares the already evaluated scenarios
stored in the database, in order to identify the
commonalities and the differences between them.
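Since the paper gives no schema for the XML tree, the sketch below only illustrates the idea of emitting a customized BoM with the JDK's DOM API; the element and attribute names are invented.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

// Emits a minimal, hypothetical BoM tree for one customer selection.
public class BomXmlWriter {
    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().newDocument();

        Element bom = doc.createElement("BoM");
        doc.appendChild(bom);

        Element component = doc.createElement("Component");
        component.setAttribute("id", "seat-cover");
        component.setAttribute("material", "leather"); // customer selection
        bom.appendChild(component);

        // Serialize the DOM tree to stdout.
        TransformerFactory.newInstance().newTransformer()
                .transform(new DOMSource(doc), new StreamResult(System.out));
    }
}
```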
The optimization of the logistics chain can be a
challenging task because the members that form the
supply chain can be self-interested, so they may be
unwilling to share private planning information
(Putten et al., 2006). Moreover, such problems
consist of a collection of interacting sub-systems,
each one described by local properties and
dynamics, joined together by the need to
accomplish a common task which achieves overall
optimal performance (Androulakis and Reklaitis,
1998). Although firms throughout the globe realize
that the collaboration with their supply chain
partners can improve their profits, the
decentralization of the inventory and the
decentralized decision making is often unrealistic
(Li and Wang, 2007). There is a need to not only
coordinate the activities of the independent partners
but to also align their objectives to achieve a
common goal. In a distributed environment, it is
necessary to link the planning and scheduling
systems of a company with the planning systems of
its suppliers. A company's supply chain comprises
geographically dispersed facilities, where raw
materials, intermediate products, or finished
products are acquired, transformed, stored, or sold,
and transportation links connecting the facilities,
along which the products flow. The agile manufacturing
framework proposed aims at the exploitation of a
large number of suppliers and subcontractors,
distributed in dispersed geographical sites.
Moreover, a decision making tool is deployed for
the evaluation of the alternative production and
supply schemes. The evaluation process will be
based on a multiple criteria decision making process
with differentiated factor weights. The eventual
outcome will indicate the most efficient
manufacturing scheme.
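A weighted-sum utility is one common way to realize such a multiple-criteria evaluation; the criteria, weights and scores below are invented for illustration, as the paper does not prescribe a specific scoring function.

```java
// Sketch of evaluating alternative production/supply schemes with
// differentiated factor weights; the highest utility wins.
public class SchemeEvaluator {

    // Scores are assumed normalized to [0, 1], higher is better (cost and
    // environmental impact would be inverted before normalization).
    static double utility(double[] normalizedScores, double[] weights) {
        double u = 0.0;
        for (int i = 0; i < normalizedScores.length; i++) {
            u += weights[i] * normalizedScores[i];
        }
        return u;
    }

    public static void main(String[] args) {
        double[] weights = {0.4, 0.3, 0.3};   // cost, delivery time, environment
        double[] schemeA = {0.8, 0.6, 0.5};
        double[] schemeB = {0.6, 0.9, 0.7};
        System.out.println(utility(schemeA, weights) >= utility(schemeB, weights)
                ? "Scheme A selected" : "Scheme B selected");
    }
}
```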
3.4. SOFTWARE ARCHITECTURE AND
IMPLEMENTATION
The deployment of different modules, which share and
provide services to a variety of actors and to each
other, based on a Service-oriented Architecture
(SoA), ensures that interoperability and integration
among the systems are achieved, while keeping the
system scalable and open to further enhancement.
Design patterns nowadays offer powerful features for
the deployment of web applications, such as
reusability, maintainability and extensibility (Cooper,
2000). Model-View-Controller (MVC) is an
architectural pattern that divides the software into
Model, View and Controller components, in order to
better control the software quality with respect to
processing and interface design. After the
identification of agents, the adoption of the MVC
paradigm identifies the client- and server-side
components of a web-based expert system (Hasan and
Isaac, 2011).
web based expert system (Hasan and Isaac, 2011).
The Model-View-Controller (MVC) architectural
pattern separates the implementation concerns and
makes the development scalable, easily

maintainable and capable of increased design
complexity. In addition, the Java programming
language framework ensures the development of
domain independent solutions and advanced
functionalities. The structure of the services data
model for the business layer is presented in Figure 7.

Figure 7 - Structure of service layer classes

The structure of the service layer is developed in
accordance with the MVC pattern. The services are
the communication layer between the presentation
layer and the persistence layer. The Hibernate
framework, which is an object-relational mapping
(ORM) library for the Java programming
framework, provides the ability for mapping the
object-oriented domain model to a relational
database.

The Service Base Impl classes implement the
business logic. The Service Impl classes implement
the basic CRUD functions (Create, Read, Update
and Delete) needed for a persistent storage
approach. The Utility classes define a set of
methods for a class that perform common and often
re-used functions. The implementations of the
Interface classes act as a contract between their
corresponding classes and other objects that use
these classes (Figure 7).

Moreover, the AJAX technology is amongst the
most recent trends in client-side scripting.
Traditional web applications put the business logic
on the server side, whereas state of the art web
requirements need a reallocation of the business logic
in a distributed manner between the server and the
client (Castelyn et al., 2009). In AJAX, scripts
which run on the client or on the remote server
retrieve data from the server asynchronously in the
background, without interfering with the display and
behaviour of the existing page (Ravi et al., 2009).

The logic of the proposed platform is contained
in the application server of the system (Figure 8).
The web server and servlet container used in this
research work is the Apache Tomcat v7.0.19, since
it is fully compliant with the latest advances in web
programming and with the JSP and servlet
specifications. The Database Management System
(DBMS) used in the proposed approach is the
Relational Database Management System
(RDBMS) provided by Oracle, and more
specifically Oracle 9i. The development of the
database relational schema will be performed in
Oracle SQL Developer Data Modeller v3.0. The
modelling of the system, based on the requirements
it is envisioned to perform, is depicted with Unified
Modelling Language (UML) diagrams. The UML
diagrams are developed using IBM Rational
Architect v8.0.2, with UML 2.3 standards. Further
to that, the programming of the platform, the testing
and debugging will be accomplished using Eclipse
J2EE v3.6 (Helios).

Figure 8 - Web server and the 3-Tier approach of the proposed framework
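The service-layer split described above can be illustrated with a small sketch; the interface and class names are illustrative rather than the platform's actual classes, and an in-memory map stands in for the Hibernate-backed persistence.

```java
import java.util.HashMap;
import java.util.Map;

// Interface classes act as the contract; an Impl class provides the basic
// CRUD (Create, Read, Update, Delete) operations behind it.
public class ServiceLayerSketch {

    interface CrudService<T> {
        void create(String id, T entity);
        T read(String id);
        void update(String id, T entity);
        void delete(String id);
    }

    static class CrudServiceImpl<T> implements CrudService<T> {
        private final Map<String, T> store = new HashMap<>();
        public void create(String id, T entity) { store.put(id, entity); }
        public T read(String id)                { return store.get(id); }
        public void update(String id, T entity) { store.put(id, entity); }
        public void delete(String id)           { store.remove(id); }
    }

    public static void main(String[] args) {
        CrudService<String> orders = new CrudServiceImpl<>();
        orders.create("42", "customized product order");
        System.out.println(orders.read("42"));
    }
}
```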
The tools, the programming languages and the
frameworks that will be used for the design and
implementation of the platform are depicted in
Figure 9.

Figure 9 - Software Tools for the Design and
Implementation of the platform
4. CONCLUSIONS
The research work described in this paper focuses
on diminishing the aforementioned problems that
currently manufacturers from around the globe face,
taking steps beyond mass customization. The
requirements extraction for the platform
implementation has already been performed and
formalized using Unified Modelling Language
(UML) Annotation diagrams, such as Use case,
Sequence and Class diagrams. The current status of
the implementation procedure has yielded a first
prototype of the framework that is being tested and
debugged. The prototype version of the user
adaptation module comprises user-friendly
interfaces, for product customization, and a VR /
AR tool will be integrated as the development
progresses. The decentralized manufacturing
decision making module is currently under
development, and is expected to provide the means
for synchronizing the various industrial sectors and
minimize lead times, production cost and the work
in progress, providing an effective means for
knowledge management. The presented framework
will be tested on an automotive sector case study.
Expected results of the application of the proposed
framework indicate an increase in the market share,
a reduction of the energy consumption and a
reduction of the time-to-market.
5. ACKNOWLEDGMENTS
The work reported in this paper will be partially
supported by EC FP7 Factories of the FUTURE
Research Project e-CUSTOM "A Web-based
Collaboration System for Mass Customization"
(NMP2-SL-2010-260067).
REFERENCES
Abele E., Elzenheimer J., Liebeck T. and Meyer T.,
Reconfigurable Manufacturing Systems and
Transformable Factories - Globalization and
Decentralization of Manufacturing, 1st Edition,
Springer, New York, 2006, pp. 4-5
Brauer K., Decentralization in The European
Automotive Industry: A Scenario-Based Evaluation,
16th GERPISA international Colloquium, 2008,
http://gerpisa.org/rencontre/16.rencontre/GERPISAJu
ne2008/Colloquium/contributions.html#
Castelyn S., Daniel F., Dolog P. and Matera M.,
Engineering Web Applications, 1st Edition,
Springer-Verlag, Berlin, 2009
Chen L., Song Z. and Feng L., Internet-Enabled Real-
Time Collaborative Assembly Modelling via an E-
Assembly System: Status and Promise, Computer
Aided Design, Vol. 36, No. 9, 2004, pp. 835-847
Chryssolouris G., Manufacturing Systems: Theory and
Practice, 2nd Edition, Springer-Verlag, New York,
2006
Chryssolouris G., Makris S., Xanthakis V. and Mourtzis
D., Towards the Internet-based supply chain
management for the ship repair industry, International
Journal of Computer Integrated Manufacturing, Vol.
17, No. 1, 2004, pp. 4557
Chryssolouris G., Papakostas N. and Mavrikios D., A
perspective on manufacturing strategy: Produce more
with less, CIRP Journal of Manufacturing Science
and Technology, Vol. 1, No. 1, 2008, pp. 45-52
Cooper J. W., Java Design Patterns: A Tutorial,
Addison-Wesley Pub Co, Boston, 2000
Feldmann K., Rottbauer H. and Roth N., Relevance of
Assembly in Global Manufacturing, CIRP Annals,
Vol. 45, No. 2, 1996, pp. 545-552
Franke N., Keinz P. and Schreier M., Complementing
Mass Customization Toolkits with User Communities:
How Peer Input Improves Customer Self-Design,
Journal of Product Innovation Management, Vol. 25,
No. 6, 2008, pp. 546559
Frohlich M. and Westbrook R., Demand chain
management in manufacturing and services: web-
based integration, drivers and performance, Journal
of Operations Management, Vol. 20, No. 6, November
2002, pp. 729-745

Hasan S. S. and Isaac R. K., An integrated approach of
MAS-CommonKADS, ModelViewController and
web application optimization strategies for web-based
expert system development, Expert Systems with
Applications, Vol. 38, No. 1, January 2011, pp. 417-
428
Helms M., Ahmadi M., Jih W. and Ettkin L.,
Technologies in support of mass customization
strategy: Exploring the linkages between e-commerce
and knowledge management, Computers in Industry,
Vol. 59, No. 4, April 2008, pp. 351-363
Hu S. J., Ko J., Weyand L., ElMaraghy H. A., Kien T.
K., Koren Y., Bley H., Chryssolouris G., Nasr N. and
Shpitalni M., Assembly system design and operations
for product variety, CIRP Annals - Manufacturing
Technology, Vol. 60, No. 2, 2011, pp. 715-733
Huang X., Kristal M. and Schroeder R., Linking
learning and effective process implementation to mass
customization capability, Journal of Operations
Management, Vol. 26, No. 6, 2008, pp. 714-729
Janardanan V. K., Adithan M. and Radhakrishnan P.,
Collaborative Product Structure Management for
Assembly Modeling, Computers in Industry, Vol. 59,
No. 8, 2008, pp. 820-832
Kim H., Lu J. C., Kvam P. and Tsao Y. C., Ordering
quantity decisions considering uncertainty in supply-
chain logistics operations, International Journal of
Production Economics, Article in Press, 2011
Knight K., Survey: 89% of consumers prefer online
shopping, Bizreport, 2007,
http://www.bizreport.com/2007/12/survey_89_of_con
sumers_prefer_online_shopping.html
Leitao P., Agent-based distributed manufacturing
control: A state-of-the-art survey, Engineering
Applications of Artificial Intelligence, Vol. 22, No. 7,
October 2009, pp. 979-991
Monostori L., Kádár B., Pfeiffer A. and Karnok D.,
Solution Approaches to Real-time Control of
Customized Mass Production, CIRP Annals
Manufacturing Technology, Vol. 56, No. 1, 2007, pp.
431-434
Monostori L., Vancza J. and Kumara S. R. T., Agent-
based systems for manufacturing, Annals of the
CIRP, Vol. 55, No. 2, 2006, pp. 697720
Mourtzis D., Makris S., Xanthakis V. and Chryssolouris
G., Supply chain modelling and control for producing
highly customized products, CIRP Annals -
Manufacturing Technology, Vol. 57, No. 1, 2008, pp.
451-454
Pine B. J., Mass Customization: The New Frontier in
Business Competition, Harvard Business School
Press, 1992
Ravi J., Yu Z. and Shi W., A survey on dynamic Web
content generation and delivery techniques, Journal
of Network and Computer Applications, Vol. 32, No.
5, September 2009, pp. 943-960
van der Putten S., Robu V., La Poutré H.,
Jorritsma A. and Gal M., Automating Supply Chain
Negotiations using Autonomous Agents: a Case Study
in Transportation Logistics, Proceedings of the 5th
international joint conference on Autonomous agents
and multi-agent systems, AAMAS '06, New York
Sawhney M., Verona G. and Prandelli E., Collaborating
to create: The Internet as a platform for customer
engagement in product innovation, Journal of
Interactive Marketing, Vol. 19, 2005, pp. 4-17
Shi W., Collins E. and Karamcheti V., Modelling object
characteristics of dynamic Web content, Journal of
Parallel and Distributed Computing, Vol. 63, No. 10,
October 2003, pp. 963-980
Sun W., Zhang K., Chen S. K., Zhang X. and Liang H.,
Software as a Service: An Integration Perspective,
Service-oriented Computing, ICSOC 2007, Lecture
Notes in Computer Science, Vol. 4749, 2007, pp. 558-
569
Thirumalai S. and Sinha K., Customization of the online
purchase process in electronic retailing and customer
satisfaction: An online field study, Journal of
Operations Management, Vol. 29, No. 5, July 2011,
pp. 477-487
Tolio T. and Urgo M., A Rolling Horizon Approach to
Plan Outsourcing in Manufacturing-to-Order
Environments Affected by Uncertainty, CIRP Annals
Manufacturing Technology, Vol. 56, No. 1, 2007, pp.
487-490
Tu Q., Vonderembse M. and Ragu-Nathan T.S., The
impact of time-based manufacturing practices on mass
customization and value to customer, Journal of
Operations Management, Vol. 19, No. 2, 2001, pp.
201-217
Uva A. E., Cristiano S., Fiorentino M. and Monno G.,
Distributed design review using tangible augmented
technical drawings, Computer-Aided Design, Vol.
42, No. 5, May 2010, pp. 364-372
Váncza J., Egri P. and Monostori L., A coordination
mechanism for rolling horizon planning in supply
networks, CIRP Annals - Manufacturing Technology,
Vol. 57, No. 1, 2008, pp. 455-458
Wiendahl H.-P., El Maraghy H. A., Nyhuis P., Zäh M. F.,
Wiendahl H.-H., Duffie N. and Brieke M.,
Changeable Manufacturing - Classification, Design
and Operation, CIRP Annals - Manufacturing
Technology, Vol. 56, No. 2, 2007, pp. 783-809
Yang H. and Xue D., Recent research on developing
Web-based manufacturing systems: a review,
International Journal of Production Research, Vol. 41,
No. 15, 2010, pp. 3601 - 3629
Zhang L., Carman K., Lee C. and Xu Q., Towards
product customization: An integrated order fulfilment
system, Computers in Industry, Vol. 61, No. 3, April
2010, pp. 213222
Proceedings of DET2011
7th International Conference on Digital Enterprise Technology
Athens, Greece
28-30 September 2011

MACHINING WITH ROBOTS: A CRITICAL REVIEW
John Pandremenos
Laboratory for Manufacturing Systems &
Automation, University of Patras, Rio, Patras
26500, Greece
jpandrem@lms.mech.upatras.gr
Christos Doukas
Laboratory for Manufacturing Systems &
Automation, University of Patras, Rio, Patras
26500, Greece
cdoukas@lms.mech.upatras.gr


Panagiotis Stavropoulos
Laboratory for Manufacturing
Systems & Automation,
University of Patras, Rio,
Patras 26500, Greece
pstavr@lms.mech.upatras.gr
George Chryssolouris
Laboratory for Manufacturing
Systems & Automation,
University of Patras, Rio,
Patras 26500, Greece
xrisol@lms.mech.upatras.gr
ABSTRACT
Conventional material removal techniques, namely those of CNC milling, have proven to be able to
deal with nearly any machining challenge. On the other hand, the major drawback of using
conventional CNC machines is their restricted working area and their produced shape limitations.
From a conceptual point of view, the industrial robot technology could provide an excellent base for
machining that would be both flexible and cost efficient. However, industrial machining robots lack
absolute positioning accuracy, are unable to reject/absorb disturbances in terms of process
forces, and lack reliable programming and simulation tools that would ensure right-first-time
machining at production start-ups. This paper reviews the penetration of industrial robots into the
challenging field of machining.
KEYWORDS
Robot, Machining, Accuracy, Programming
1. INTRODUCTION
Recent developments in machining and tool design
technology, especially in milling operations, reflect
the requirements for flexibility, in order to adapt to
the changes taking place in the market, in society
and in the global economic environment
(Chryssolouris, 2006), as well as for the reduction in
weight and dimensions and for high surface quality
and accuracy of the produced parts (Fig. 1). These
improvements
result in machine tools of high precision and
accuracy.
Major objectives of manufacturing engineering
today are evolving. The market requirements
indicate that emphasis should be given to low-volume,
large-variety production, even in a high-volume
production industry, in order for global competition
to be dealt with. Flexibility is also required, so that
the same facilities can be used for the minor or major
model changes that occur within the equipment's
effective life span. The basic work-piece can be
modified, as shown in Fig. 2, demanding flexibility
in its manufacturing (Sharma, 2001).
An industrial robot can fulfill the need of today's
and tomorrow's industry for a cost- and time-
efficient, yet flexible, means of material processing.
There are various robotic cells already introduced to
the process of welding and material handling and
excellent results have been achieved. Moreover,
many studies have been conducted on articulated
robot applications in machining processes, such as
polishing (Yamane et al, 1988), (Takeuchi et al.,
1993), grinding and de-burring (Kunieda et al, 1988).


Figure 1 - Dimensional size for the micro-mechanical
machining. Modified Taniguchi curve (Stavropoulos
et al, 2007)
Moreover, statistical data have shown (Fig.3) that
the number of operational robotic cells is constantly
increasing worldwide and future predictions suggest
that the robot market will continue to grow
(Industrial Robot Statistics, 2010).



On the other hand, machining tasks are typically not
carried out by robots; although robots are established
in most of the shops, it is conventional CNC
milling machines that are being used for machining.
The major drawback of using conventional milling
machines is their limited working area, which usually
forces one part to be machined in multiple operations,
or even the part to be split in pieces and
reassembled after completion. In some extreme cases,
modifications of the machine itself take place,
in order to accommodate a bigger part. On the other
hand, robotic machining cells can machine large
parts in single-operation setups, due to working
envelopes reaching up to 7.5 m3 in volume. In
conjunction with the robot's extensive turning range,
the working envelope usually covers more than
20 m3 (Chen &
Song, 2001), enabling the machining of very large
work-pieces. Finally, a machining robot, as a tool-
positioning system with flexible arm kinematics,
is often capable of machining parts with
intricate details and complex shapes, whilst a
conventional CNC machine needs special fixtures
and techniques to produce them.




Figure 2 - Deviations in basic work-pieces with design change (Sharma, 2001).

Figure 3 - Estimated worldwide operational stock of Industrial Robots (Industrial Robot Statistics, 2010)


2. ROBOTS IN AN AUXILIARY ROLE
2.1. WELDING AND FINISHING
In order for industrial robots to accomplish non-structured
and more sophisticated tasks, sensorial
capabilities are required. The automotive industry is
one of the major users of machine vision. One of the
first applications of vision systems in automotive
manufacturing was seam tracking during welding
operations (Michalos et al, 2010). On the other hand,
industrial robots fulfill the automotive industry's
need for flexibility in tooling and fixturing
(Papakostas et al, 2006). In other metal processing
tasks, Asakawa proposed a polishing system, using a
purpose-developed CAD/CAM system, an ultrasonic
vibrational tool with a ceramic tip and a 6-degree-of-freedom
robot, achieving only limited results
(Asakawa et al, 1995). Sanders et al. (Sanders,
Lambert, Jones, & Giles, 2010) proposed the
combination of a robot welding system with an
image processing system, in order to gather data
about the weld characteristics/geometry, generate
robot programs and calibrate the robot path.
2.2. RAPID PRODUCTION
Although machining with robots possesses some
unique advantages over conventional machining
processes, it also shares some of their common
problems. The most common is the machining of
hollow features or deep cavities, where collisions
may occur between the tool holder and the part
surface. One way to overcome this issue is layer-based
machining, making industrial robots excellent
candidates for Rapid Tooling (RT) machining
operations. The accuracy of robots, although lower
than that of a conventional machine tool, is better
than or comparable with that of rapid prototyping
machines, such as SLA and SLS. At the same time,
robots can successfully machine wood, wax, foam
and similar materials, without sacrificing accuracy.
In comparison with conventional rapid prototyping
(RP) machines, the material choice is far wider,
indicating that a standard robot, although limited to
conventional machining cases, could be a good
substitute for RP machines (Song & Chen, 1999).
Y. Song et al. proposed the usage of a conventional
industrial robot (IR) and a CAD package with the
purpose of developing a module to check the
programmed toolpath for collisions and create the
actual robot program. Based on the Denavit-Hartenberg
rules, the model calculates the geometry of the
robotic arm while machining. In order to detect
gouging, Song et al. used a virtual tool 0.1 mm
shorter than the actual one and calculated the
intersection between the tool and the workpiece. In
order for gouging to be avoided, when there was an
intersection, the tool path was redefined by moving
the tool within a cone, whose apex point was the
tool contact point, as shown in Fig. 4.
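The Denavit-Hartenberg convention mentioned above maps the joint angles of the arm to the tool pose. As a rough illustration of the underlying computation only, a minimal Java sketch follows; the DH parameter values of any specific robot are not given in the cited work, so they would have to be supplied:

    // Minimal Denavit-Hartenberg forward-kinematics sketch. The DH parameters
    // (a, alpha, d) are placeholders, not those of any specific robot.
    public class DhForwardKinematics {

        // Homogeneous 4x4 transform of one link, standard DH convention.
        static double[][] dhTransform(double a, double alpha, double d, double theta) {
            double ct = Math.cos(theta), st = Math.sin(theta);
            double ca = Math.cos(alpha), sa = Math.sin(alpha);
            return new double[][] {
                { ct, -st * ca,  st * sa, a * ct },
                { st,  ct * ca, -ct * sa, a * st },
                {  0,       sa,       ca,      d },
                {  0,        0,        0,      1 }
            };
        }

        static double[][] multiply(double[][] m, double[][] n) {
            double[][] r = new double[4][4];
            for (int i = 0; i < 4; i++)
                for (int j = 0; j < 4; j++)
                    for (int k = 0; k < 4; k++)
                        r[i][j] += m[i][k] * n[k][j];
            return r;
        }

        // Chains the per-link transforms; the tool position is the last column.
        static double[][] toolPose(double[][] dhRows, double[] jointAngles) {
            double[][] pose = { {1,0,0,0}, {0,1,0,0}, {0,0,1,0}, {0,0,0,1} };
            for (int i = 0; i < jointAngles.length; i++)
                pose = multiply(pose, dhTransform(
                        dhRows[i][0], dhRows[i][1], dhRows[i][2], jointAngles[i]));
            return pose;
        }
    }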

Figure 4 - Finding a collision-free tool position by moving
the tool along a cone
Huang and Oliver (Huang & Oliver, 1994)
presented a method capable of initial chordal
approximation, true machining-error calculation,
direct gouge elimination, and non-constant
parameter tool pass-interval adjustment.
Lin and Koren (Lin & Koren, 1996) proposed a
non-constant offset method, based on the previous
tool path, in order to avoid redundant machining.
Oliver (Oliver et al, 1993) proposed an approach
which exploited surface normals and the geometry
of the tool to accurately characterize the chordal
deviation resulting from the actual tool motion,
and could detect gouging in areas of large
curvature variation. As a specific implementation,
(Suh & Lee, 1990) experimented with nonmetal
materials and proposed a method to machine a
pocket with a convex or concave free-form surface
bounded by lines, circular arcs and free-form
curves. In order for the machining process to be
improved, in terms of speed and quality, a rough
cutting process was first used to remove the main
volume of the material, followed by a second
finishing operation, with a smaller tool and cutting
depth, to finalize the cutting process. As a result,
without sacrificing machining time, the cutting
error was only slightly larger than the actual
accuracy of the robot.
3. ROBOTS IN MACHINING
3.1. ACCURACY ISSUES
Most industrial robots are constructed on a
cantilever concept. Their arms include motors,
brakes and reduction gears, struggling to achieve a
positioning accuracy of 0.5-2 mm (Vergeest &
Tangelder, 1996). In addition, they are prone to
disturbances from the process forces. K. Young et
al. conducted a trial on modern serial-linkage robots
to assess and compare robot accuracy. Each robot
was measured, in a similar area of its working
envelope, using a laser interferometer system (Young
& Pickin, 2000). The results and conclusions from
this trial have shown that the accuracy is average,
though dependent on a calibration process which is
far from robust.
In Figure 5, the straightness measurement of a
typical IR is presented. The deviation in the X-axis
when travelling in the Y direction is within an error
band of approximately 0.7 mm across the measured
envelope, presenting a significant deterioration of
accuracy in 50% of the working envelope.
The main issue that prevents the usage of
industrial robots in heavy machining applications
(metal milling, etc.) is their proneness to machining
forces, which induce disturbances, and their
inability to reduce or eliminate them. In the course
of the EU-funded research program COMET (COMET
Project - Plug and Produce COMponents and
METhods for adaptive control of industrial robots
enabling cost effective, high precision manufacturing
in factories of the future), the tool acceleration
while machining with a robot was measured
indirectly, using acceleration sensors sampled at
500 kHz. The results showed that there was a
constant deviation in the Z-axis while machining
(Fig. 6) (Euhus & Krain, 2010). This is the result of
the tool's cutting edges going into the material and
producing cutting forces that affect the cutting
quality.

Figure 6 - First micro seconds of milling, tool contact
(sampling freq.: 500 kHz, sensor: Artis VA-2-S)
Matsuoka proposed the usage of an articulated
robot featuring a small-diameter end mill and
high-speed cutting to reduce the cutting force, in
order to compensate for the low stiffness of the
articulated robot (Matsuoka et al, 1999). Veryha
proposed, and verified both theoretically and
practically, a method for joint error mutual
compensation, allowing the localization of special
robot poses with increased accuracy. In order to
exploit the non-uniformity of the robot pose
accuracy characteristics among different robot
configurations, the method of joint error maximum
compensation for redundant robotic manipulators
was developed (Veryha & Kurek, 2003).

3.2. VIBRATIONS AND CHATTERING
Another main issue of robot machining is the effect
of vibrations on the produced surface quality. Due
to the low natural frequency of the articulated robot
body, resonance can occur due to machining
process vibrations. Oki et al., focusing on the
cutting of workpieces from an extruded aluminum
alloy, assessed the effect of vibrations on the
characteristics of the end milling operation. Their
experiments proved that during high-speed cutting,
the generated process frequency is high enough to
avoid resonance issues, thus ensuring stable and
normal end milling with restrained chatter and
vibration of the articulated robot (Oki & Kanitani,
1996).
Oki et al. also proved that the cutting direction
affects the process accuracy, by experimenting with
machining a cylindrical shape in right-hand and
left-hand directions. Due to the change of the
cutting surface's side, the cutting forces differ and
the articulated robot is deformed differently in each
cutting direction; as a result, the accuracy of the
machining process differs between right-hand
(bigger diameter) and left-hand (smaller diameter)
cutting.

Figure 5 - Deviation in the X-axis when travelling in the Y direction

Figure 7 - Relationship between cutting direction and circularity:
(a) right-turn; (b) left-turn
Zengxi Pan investigated chattering due to
machining vibrations, using a force/torque sensor
between the robot wrist and the spindle. Pan
observed that every time chatter occurred, the force
amplitude increased dramatically, even while
machining was taking place at low cutting depths
(2 mm) (Zengxi et al, 2006). Moreover, the chatter
characteristics changed depending on the robot
pose, joint orientation and loading. This was mainly
due to the dramatic difference in stiffness
characteristics: that of industrial robots is less than
1 N/um, while standard CNC machines have a
stiffness greater than 50 N/um. Thus, the maximum
cutting force for robot applications is limited to
around 150 N parallel and 50 N axial to the cutting
direction, in order to maintain reasonable accuracy.
Based on the experimental results and modeling of
the process, the margin within which chatter was
not introduced to the process was calculated to be
in the range of 10 Hz (around 3600 rpm). Tool
breakage and premature failure have been a major
issue in machining applications. Jay Lee proposed
the application of a force/torque sensor to monitor
the thrust force as an indication of tool condition
(Lee).


Figure 8 - Thrust force sensing for drilling process
This proposal has also had applications in
overload cases, such as collisions and robot
overloading.
3.4. CALIBRATION METHODS
In order to calibrate an industrial robot to reach
higher accuracy levels, various types of coordinate
measuring machines (CMMs) and other measuring
systems have been utilized. Chunhe Gong presented
a methodology to identify the geometric errors of
industrial robots, using a laser tracker and an
inverse calibration process. Both geometric and
thermal errors were calibrated. The produced
models were built into the controller and were used
to compensate for quasi-static thermal errors due to
internal and external heat sources. Experimental
results showed that the robot accuracy was
improved by an order of magnitude after calibration
(Gong & Yuan, 2000). Sabri Cetinkunt et al.
developed a strategy for cutting force compensation
using wrist force sensors (Cetinkunt & Tsai, 1990).
The reaction torques at the joints due to the cutting
force are compensated in order to eliminate the
cutting force reaction effects: the movement of the
robot hand carrying the tool is position controlled,
and while cutting, the cutting force reacts on the
joints as known disturbance torque inputs, which
are compensated based on the force sensor
measurements. In other applications, CCD cameras
were also used (Huang & Lin, 2003) for identifying
the parameters to be corrected, in conjunction with
software aspiring to calculate the corrected tool
path.
3.5. PROGRAMMING
As far as the programming side of machining with
industrial robots is concerned, it still has a major
disadvantage against that of conventional CNC
machines. Robots are mainly programmed with the
traditional teach-and-repeat method: the user
manually moves the robot to predefined positions
and the robot saves the coordinates. As a result,
accuracy is not of great importance to this method
and is generally considered low. CNC machines are
programmed offline, usually using dedicated
software, based on known reference coordinates.
Y. H. Chen et al. proposed a programming method
for industrial robot rough machining. Using a grid
array in the XYZ directions, both the actual tool and
the model of the part to be cut are represented (Chen
& Hu, 1995). By changing the grid resolution, the
accuracy of the generated toolpath is also altered
(Hu & Chen, 1999).

Figure 9 - Rectangular grid approximation

Figure 10 - Error analysis
By the same approach, machining errors, such
as gouging, can be detected and avoided (see the
sketch below). On the other hand, besides affecting
the surface quality of the roughing process, the grid
size also affects the actual size of the program,
making it difficult to be handled by some
controllers. Y. N. Hu et al. proposed a finish
machining method, by implementing in conventional
5-axis strategies the advantage of the industrial
robot's 6th-axis motion, called swivel (Hu & Chen,
1999).
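As an illustration of the grid-based rough machining idea above, the sketch below samples a part surface on an XY grid and keeps the tool above the highest surface point under its footprint, plus a stock allowance; all names and values are hypothetical, and the cited papers' actual algorithms are considerably more elaborate:

    import java.util.ArrayList;
    import java.util.List;

    // Grid-based roughing sketch: a coarser grid gives a shorter program but a
    // rougher approximation, mirroring the resolution/accuracy trade-off above.
    public class GridRoughing {

        interface Surface { double heightAt(double x, double y); } // hypothetical part model

        static List<double[]> roughPath(Surface part, double xMax, double yMax,
                                        double grid, double toolRadius, double stock) {
            List<double[]> path = new ArrayList<double[]>();
            for (double y = 0; y <= yMax; y += grid) {
                for (double x = 0; x <= xMax; x += grid) {
                    // Highest surface point under the tool footprint, sampled on the grid.
                    double z = Double.NEGATIVE_INFINITY;
                    for (double dx = -toolRadius; dx <= toolRadius; dx += grid)
                        for (double dy = -toolRadius; dy <= toolRadius; dy += grid)
                            z = Math.max(z, part.heightAt(x + dx, y + dy));
                    // Leaving stock above the surface avoids gouging during roughing.
                    path.add(new double[] { x, y, z + stock });
                }
            }
            return path;
        }
    }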
A dual-robot setup, widely used in assembly and
handling applications, can also be used in
applications where a single-robot setup cannot
reach all points. W. S. Owen et al. proposed a dual-robot
setup, with one robot handling the material
and the second one bearing the tool (Owen, Croft,
& Benhabib, 2004).

Figure 11 - Two manipulator machining system
Due to the redundant DoF, the authors designed
an off-line programming system with an integrated
algorithm to optimize the tool's trajectories, using
the pseudo-inverse method (see the sketch following
this paragraph). The approach monitors the torque
in the robot axes, while it also finds the optimum
configurations/poses to improve the accuracy of the
final part, by decreasing tool deflection and
optimally absorbing the machining forces.
Hsuan-kuan Huang et al. developed a dual
independent robot machining cell, where the
programming was carried out through CAM
software for the generation of cutter location (CL)
data for 5-axis milling, together with a post-processor
for the translation of the CL data into
linear and rotational motions of the robot cell
controller. The implementation of the dual-robot
setup was achieved by dividing the original CL data
into two parts, taking into account the collision
detection between the two robots and the
minimization of the force-generated inaccuracy of
the final geometry. The authors also developed an
off-line programming module, enabling the off-line
programming and simulation of the dual-robot
machining cell (Huang & Lin, 2003).
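Owen et al. resolve the redundancy with the pseudo-inverse of the Jacobian. The sketch below uses the closely related damped least-squares form, dq = J^T (J J^T + lambda^2 I)^(-1) dx, for a planar 3-link arm, so that only a 2x2 matrix has to be inverted; the joint angles, link lengths and damping factor are illustrative assumptions, not values from the cited work:

    // Damped least-squares redundancy resolution for a planar 3-link arm.
    public class RedundantArmIk {

        static double[] jointVelocities(double[] q, double[] len,
                                        double dx, double dy, double lambda) {
            // Cumulative link angles phi_k = q1 + ... + qk.
            double[] phi = new double[3];
            double s = 0;
            for (int k = 0; k < 3; k++) { s += q[k]; phi[k] = s; }

            // 2x3 Jacobian of the end-effector position w.r.t. the joint angles.
            double[][] j = new double[2][3];
            for (int i = 0; i < 3; i++)
                for (int k = i; k < 3; k++) {
                    j[0][i] -= len[k] * Math.sin(phi[k]);
                    j[1][i] += len[k] * Math.cos(phi[k]);
                }

            // A = J J^T + lambda^2 I is a symmetric 2x2 matrix.
            double a11 = lambda * lambda, a12 = 0, a22 = lambda * lambda;
            for (int i = 0; i < 3; i++) {
                a11 += j[0][i] * j[0][i];
                a12 += j[0][i] * j[1][i];
                a22 += j[1][i] * j[1][i];
            }

            // Solve A (u, v)^T = (dx, dy)^T analytically, then dq = J^T (u, v)^T.
            double det = a11 * a22 - a12 * a12;
            double u = ( a22 * dx - a12 * dy) / det;
            double v = (-a12 * dx + a11 * dy) / det;
            double[] dq = new double[3];
            for (int i = 0; i < 3; i++)
                dq[i] = j[0][i] * u + j[1][i] * v;
            return dq;
        }
    }

The damping term keeps the solution bounded near singular poses, which is one reason this variant is often preferred in practice over the plain pseudo-inverse.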


4. CONCLUSION
Industrial robots can considerably contribute to
improving the efficiency of machining operations.
Their high level of flexibility and extended working
space can outperform conventional machine tools.
Due to their extra degrees of freedom, an IR can
machine complicated geometries that would
otherwise need special fixturing elements and
multiple machining operations. In addition, an IR
offers the possibility of a dual machining setup,
with either the spindle or the workpiece on the
robot; in conjunction with the extended payload
range (from 5 kg to more than 400 kg), even the
heavier and larger parts can be machined. Finally,
industrial robots, mainly used in handling and
welding applications, are already well established in
most machine shops.
On the other hand, it is clear that robots have
some serious disadvantages in terms of accuracy,
repeatability and handling machining processes,
when compared with conventional CNC machine
tools. Although there are cases of machining with
robots, they are extremely limited. This is due to
their low absolute positioning accuracy and their
lack of programming and simulation systems
sufficient for the generation of 100% correct robot
path programs.
The major drawbacks of an IR are currently being
addressed through the development of advanced
simulation techniques, intelligent programming
software and improved calibration processes. The
novel IR models feature improved controllers, with
reference to their computing power and precision,
enabling the usage of more complex calibration
algorithms and more detailed NC programs. At the
same time, advanced external compensation
mechanisms and tracking systems are under
development, in order to further improve the IR's
machining accuracy, to the same or higher levels
than those of conventional NC machine tools.
The EU-funded research program COMET is
developing a CAM-based industrial robot
programming system, which incorporates analytical
dynamic and kinematic models of IRs; in
cooperation with a robot tracking system and a
high-dynamic compensation mechanism developed
to this effect, the COMET platform aims at
achieving an accuracy level of less than 50 um in
milling applications. The final result will be a plug-and-produce
platform that can replace ordinary
machine tools without sacrificing accuracy and
machining speed (COMET project results).
REFERENCES
Industrial Robot Statistics. (2010, Sept 30). Retrieved
July 20, 2011, from The International Federation of
Robotics:
http://www.ifr.org/uploads/media/Charts_industrial_robots_2010.pdf
Asakawa, N., Daisuke, O., & Takeuchi, Y. (1995).
Automation of polishing work by an industrial robot
(5th report, 6-axis control polishing with ultrasonic
vibrational tool). Transactions of the Japan Society of
Mechanical Engineers, pp 4049-4054.
Cetinkunt, S., & Tsai, R.-L. (1990). Position error
compensation of robotic contour End-Milling.
International Journal of Machine Tools and
Manufacturing, Vol.30, pp 613-627.
Chen, H., & Hu, Y. (1995). Implementation of a Robot
System for Sculptured Surface Cutting. Part 1. Rough
Machining. International Journal of Advanced
Manufacturing Technology, pp 624-629.
Chen, Y., & Song, Y. (2001). The development of a layer
based machining system. Computer-Aided Design,
Vol. 33, pp 331-342.
Chryssolouris, G. (2006). Manufacturing Systems -
Theory and Practice, 2nd ed. New York: Springer-Verlag.
COMET Project - Plug and Produce COMponents and
METhods for adaptive control of industrial robots
enabling cost effective, high precision manufacturing
in factories of the future. (n.d.). Retrieved June 20,
2011, from http://www.cometproject.eu/
COMET project results. (2011). Retrieved May 20, 2011,
from Plug-and-produce COmponents and METhods
for adaptive control of industrial robots enabling cost
effective, high precision manufacturing in factories of
the future:
http://www.cometproject.eu/publications/Delcam-PSIR.pdf
Euhus, D., & Krain, R. (2010). First metal cut at TEKS.
Retrieved May 20, 2011, from Plug-and-produce
COmponents and METhods for adaptive control of
industrial robots enabling cost effective, high
precision manufacturing in factories of the future:
http://www.cometproject.eu/publications/ARTIS-first-metal-cut-at-TEKS.pdf
Gong, C., & Yuan, J. (2000). Robot thermal error
calibration and modeling with orthogonal modeling.
Japan-USA Symposium on Flexible Automation.
Hu, Y. N., & Chen, Y. H. (1999). Implementation of a
Robot System for Sculptured Surface Cutting. Part 2.
Finish Machining. International Journal of Advanced
Manufacturing Technology, pp 630-639.
Huang, H.-k., & Lin, G. C. (2003). Rapid and flexible
prototyping through a dual-robot workcell. Robotics
and Computer Integrated Manufacturing, Vol. 19, pp
263-272.


Huang, Y., & Oliver, J. H. (1994). Non-constant
parameter NC tool path generation on sculptured
surfaces. International Journal of Advanced
Manufacturing Technology, pp 281-290.
Jinno, M., Ozaki, F., Tatsuno, K., Takahashi, M., Kanda,
M., Tanada, Y., et al. (1995). Development of a force
controlled robot for grinding, Chamfering and
polishing. IEEE International Conference on Robotics
and Automation, pp 1455-1460.
Kunieda, M., Nakagawa, T., & Higuchi, T. (1988). Robot-polishing
of curved surface with magnetically pressed
polishing tool. J. JSPE 54(1), pp 125-131.
Lee, J. (n.d.). Apply Force/Torque Sensors to Robotic
Applications.
Lin, R. S., & Koren, Y. (1996). Efficient tool-path
planning for machining free-form surfaces. ASME
Journal of Engineering for Industry, Vol. 118, pp 20-
28.
Matsuoka, S.-i., Shimizu, K., Yamazaki, N., & Oki, Y.
(1999). High-speed end milling of an articulated
robot and its characteristics. Journal of Materials
Processing Technology, Vol. 95, pp 83-89.
Michalos, G., Makris, S., Papakostas, N., Mourtzis, D., &
Chryssolouris, G. (2010). Automotive assembly
technologies review: Challenges and outlook for a
flexible and adaptive approach. CIRP Journal of
Manufacturing Science and Technology, pp 81-91.
Oki, Y., & Kanitani, K. (1996). Development of robot
machining system for aluminum building materials. J.
JSME 99, pp 78.
Oliver, J. H., Wysocki, D. A., & Goodman, E. D. (1993).
Gouge detection algorithms for sculptured surface NC
generation. ASME Journal of Engineering for
Industry, Vol. 115, pp 139-144.
Owen, W. S., Croft, E. A., & Benhabib, B. (2004, April).
Real-Time Trajectory Resolution for Dual Robot.
Proceedings of the 2004 IEEE International
Conference on Robotics & Automation.
Papakostas, N., Makris, S., Alexopoulos, K., Mavrikios,
D., Stournaras, A., & Chryssolouris, G. (2006).
Modern Automotive Assembly Technologies: Status
and Outlook. The 1st CIRP-International Seminar on
Assembly Systems.
Sanders, D. A., Lambert, G., Jones, J. G., & Giles, E.
(2010). A robotic welding system using image
processing techniques and a CAD model to provide
information to a multi-intelligent decision module.
Assembly Automation, pp 323-332.
Sharma, I. R. (2001). Latest Trends in Machining.
Retrieved July 19, 2011, from Indra Drishtikona:
http://www.drishtikona.com/books/latest-trends-in-
machining/ch-all.pdf
Song, Y., & Chen, Y. H. (1999). Feature-based robot
machining for rapid prototyping. Proceedings of the
Institution of Mechanical Engineers, Part B: Journal
of Engineering Manufacture 1999, pp 451.
Stavropoulos, P., Salonitis, K., Stournaras, A.,
Pandremenos, J., Paralikas, J., & Chryssolouris, G.
(2007). Advances and challenges for tool condition
monitoring in micromilling. IFAC Workshop on
Manufacturing Modelling, Management and Control,
(pp. pp 211-216). Budapest, Hungary.
Suh, Y. S., & Lee, K. (1990). NC milling tool path
generation for arbitrary pockets defined by sculptured
surfaces. Computer-Aided Design, Vol. 22, pp 273-284.
Takeuchi, Y., Asakawa, N., & Dongfang, G. (1993).
Automation of polishing work by an industrial robot.
JSME International Journal, Series C 36, pp 556-561.
Vergeest, J. S., & Tangelder, J. W. (1996). Robot
machines rapid prototype. Industrial Robot, pp 17-20.
Veryha, Y., & Kurek, J. (2003). Application of Joint
Error Mutual Compensation for Robot End-effector
Pose Accuracy Improvement. Journal of Intelligent
and Robotic Systems, Vol. 36, pp 315-329.
Wilson, M. (1999). Vision system in the automotive
industry. Industrial robot: An international journal, pp
354-357.
Yamane, Y., Yin, R., Okamoto, H., Yamada, M.,
Yonezawa, Y., & Narutaki, N. (1988). Application of
multi-joint type industrial robot to scraping of surface
plate. J. JSPE 55, pp 1787-1792.
Young, K., & Pickin, C. G. (2000). Accuracy assessment
of the modern industrial robot. Journal of Industrial
Robot, Vol. 27, pp. 427-436.
Zengxi, P., Zhang, H., Zhu, Z., & Wang, J. (2006).
Chatter analysis of robotic machining process.
Journal of Materials Processing Technology, Vol.
173, pp 301-309.

A PUSHLET-BASED WIRELESS INFORMATION ENVIRONMENT FOR
MOBILE OPERATORS IN HUMAN BASED ASSEMBLY LINES
Sotiris Makris
Laboratory for Manufacturing Systems &
Automation,
Department of Mechanical Engineering &
Aeronautics,
University of Patras, Greece
Email: makris@lms.mech.upatras.gr
George Michalos
Laboratory for Manufacturing Systems &
Automation,
Department of Mechanical Engineering &
Aeronautics,
University of Patras, Greece
*Email: michalos@lms.mech.upatras.gr


George Chryssolouris
(corresponding author)
Laboratory for Manufacturing
Systems & Automation,
Department of Mechanical
Engineering & Aeronautics,
University of Patras, Greece
*Email:
xrisol@lms.mech.upatras.gr

ABSTRACT
This paper deals with the wireless manufacturing research topic and discusses the distribution of
information in real time, using pushlet/comet technology. RFID-based identification techniques are
used to track products and operators on the shop floor in real time. This identification triggers
the automatic transmission of assembly instructions and multimedia material to handheld or
stationary terminals, reducing the time required to retrieve and assimilate the information. The
paper discusses the architecture design and the implementation aspects of the pushlet-based
wireless information environment. The system was validated in a truck assembly line and the
findings indicate that its applicability can be extended to other industrial sectors, such as car
production, shipbuilding, aerospace, etc.
KEYWORDS
Wireless manufacturing, Flexible assembly, Operator support

1. INTRODUCTION
This paper discusses a software system utilising
wireless information and communication
technology, for supporting assembly line operators
in performing their assembly tasks. The
environment has been applied into a pilot case from
the automotive industry.
The pursuit of mass customization has led to an
explosion of product variability (Pil and Holweg
2004), which in cases such as that of the automotive
industry can reach up to 10^25 options for the
customer. It is becoming apparent that the number
of variants can reach such a value, since the
automotive companies are offering numerous
options on their products, in different ways,
including body variants, power-train, paint and trim
combinations and factory-fitted options (Michalos et
al. 2007).


On the other hand, analyses of human-based
assembly systems have pointed out that human
operators, in general, are considered major
flexibility enablers, since they are capable of
quickly adapting to changing products and market
situations (Feldmann and Slama, 2001). A key
objective, nevertheless, is to integrate human
workers into the assembly process in a more flexible
way, by providing them with the right instructions at
the right time and at the right place, allowing them to
perform different tasks.
The accommodation of such a great number of
vehicle options on a single human-operated
assembly line is challenging for the following
reasons. Each option requires assembly operations
that, in general, vary per option. Therefore, the more
options that have to be built on a vehicle, the higher
the number of assembly operation variations that
have to be handled by the assembly line operator.
The ability of the operators to memorize only a finite
number of assembly operations, the monotony
caused by the repetition of similar tasks over long
periods of time and the accumulated fatigue are
considered among the most important factors that
affect final product quality (Michalos et al. 2010a).
As a result, research in the area of human support
systems has indicated that their adoption is a very
promising approach (Feldmann et al. 2003,
Michalos et al. 2007). Several case studies have
been conducted, proving that the enablement of
information mobility, by means of wireless and
wearable devices such as the iPod touch, can
create new communication channels that in turn
create more flexible and faster information flows,
reducing the cognitive requirements on behalf of the
operators (Fasth et al., 2010). The direct benefit
from the enhancement of information presentation
to the operators is deeply related to the final product
quality, as in many cases the number of rejected
parts can be largely reduced, by up to 40%
(Bäckstrand et al., 2008).
Although wireless technologies have been
considered in manufacturing and assembly
environments for some time, their use has mainly
been limited to production monitoring and
production system performance measurement
applications (Zhou et al. 2007, Ng and Ip 2003).
More recently, the installation of wireless
technologies, such as RFID, GSM and 802.11, on
the shop floor has become a new IT application area
in the industrial shop-floor (Egea-Lopez et al. 2005,
Chryssolouris et al., 2009). However, the
integration of wireless IT technologies at shop level
is often prevented by demanding industrial
requisites, such as immunity to interference, security
and high availability. Nevertheless, these
technologies have mainly been used for shop floor
monitoring and control applications, as well as for
performance analysis and performance
improvement of plants. The presented framework,
however, aims at integrating such technologies
towards the enhancement of the information flow
within manufacturing environments, in order to
assist human operators.
As these technologies seem to have reached a
mature level of implementation, more content-rich
functionalities are now emerging. Especially in the
case of hybrid assembly systems, where a mixture of
automation and human workforce coexists, the role
of wireless IT is particularly critical in enhancing the
flexibility and responsiveness of assembly systems.
As far as the automated identification of products
and operators is concerned, bar-code-based
identification has been extensively used over the
last years. Although it has been a proven, low-cost
and reliable technology, several drawbacks, such as
the inability to update in real time, the inability to
track single items and the visual contact
requirement, have allowed RFID technology, which
is starting to become cost efficient, to gain ground
over the last decades (Stankovski et al. 2009).
However, in order to implement all the
aforementioned functionalities, a setup like the one
in Figure 1 is required, involving a considerable
number of RFID antennas (at least 7), which can
yield a significant investment cost for a single line
that may involve multiple stations.

Figure 1: RFID enabled assembly station
This work discusses a framework for pushing the
right amount of information to the right operator at
the right time, exploiting the benefits of wireless
communication through WiFi systems, radio
frequency identification (RFID), web-based
information sharing and pushlet-based technology,
thus reducing the cognitive load on the operator and
achieving a reduction of assembly errors in the
assembly line.
2. WIRELESS COMMUNICATION AND
INTEGRATION MODEL
The basic model of operating a human-operated,
multi-variant assembly line, utilizing wireless
information technology, is discussed in this section.

Figure 2. Operator rotation and information push model logic: trace arriving chassis; trace parts on the shop floor; identify suitable operators; load instructions for assembly operations; create a map of operator, assembly place and assembly instruction; allocate an assembly task and assembly place to an operator; push information to the operator's device.
Assembly operators reside on the shop floor.
The chassis moves along the assembly line and
operators reside on the two sides of the chassis.
Two operators can be assigned to each side, one at
the front and one at the back. The parts to be
assembled on the chassis are located on its two
sides. As soon as the chassis arrives at an assembly
station, an event is generated by the RFID tracing
infrastructure. This triggers a query on which parts
should be assembled on the chassis at that station.
As soon as the right part is identified, the RFID tag
on that part triggers a presence event. Subsequently,
the wireless devices of the suitable operators are
also traced, while the instructions for assembling
the specific part on the chassis are loaded. An
assembly task is assigned to the operator and a
mapping is created that involves a human operator,
the assembly place that he should go to, the part that
he should pick up and the instructions for him to
accomplish the assembly task. The information is
filtered per operator and assembly task; therefore,
each operator receives a notification on which place
he should move to, which parts are required for the
assembly task and instructions on how to assemble
the part on the chassis. This overall job rotation and
information push model is briefly shown in Figure 2.
2.1. CHASSIS TRACING AND EVENTS
GENERATION
The flow of operations and events generation is
shown in Figure 3. The system consists of two
computers, called servers: the "Workflow control
and Pushlet host", which is responsible for
controlling the flow of the operations, and the
"Location identification Server", which is
responsible for generating trace events. The vehicle
chassis is divided into four zones, namely the front
right, front left, back right and back left. An exciter
is installed in each one of the zones, capable of
tracing the operator's mobile device as well as the
arriving parts.

Figure 3. Data/Operation flow at the assembly station
Upon the chassis arrival, the following events take
place:
Step 1: The operator's wireless device negotiates
with the "Workflow control and Pushlet host" web
server.
Step 2: The operator moves to the back left zone
of the assembly station.
Step 3: The exciter identifies the RFID tag of the
operator.
Step 4: The exciter notifies the "Location
identification Server" about the RFID tag.
Step 5: The "Location identification Server"
constructs a SOAP notification XML message and
forwards it to the "Workflow control and Pushlet
host" web server, which is called the SOAP entry
point.
Step 6: The "Workflow control and Pushlet host"
updates the operator's handheld device information
accordingly.
The SOAP message exchange that is called an
entry point serves as the interface between the
"Location identification Server" and the "Workflow
control and Pushlet host" user interface. The entry
point is triggered when a tracking device, namely an
RFID tag, moves to within a defined proximity of a
tracking receiver, that is, an exciter. The SOAP
message includes both the tracking device (RFID
tag) and tracking receiver (exciter) identification
information.
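The paper does not give the schema of this SOAP notification, so the element names in the sketch below are assumptions; it only illustrates a message carrying the date-time stamp and the two identifiers described above:

    // Hypothetical SOAP trace notification; the real schema exchanged between
    // the two servers is not published, so all element names are assumptions.
    public class SoapNotification {

        static String buildTraceNotification(String timestamp, String tagId,
                                             String exciterId) {
            return "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">"
                 + "<soap:Body>"
                 +   "<TraceEvent>"
                 +     "<DateTime>" + timestamp + "</DateTime>"
                 +     "<TrackingDevice type=\"RFIDTag\">" + tagId + "</TrackingDevice>"
                 +     "<TrackingReceiver type=\"Exciter\">" + exciterId + "</TrackingReceiver>"
                 +   "</TraceEvent>"
                 + "</soap:Body>"
                 + "</soap:Envelope>";
        }
    }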
2.2. JOB ROTATION LOGIC
The job rotation function is implemented as a
module of the "Workflow control and Pushlet host"
server. It offers the ability to handle increased
demand and large product variability through multi-skilled
workers. Used in assembly operations, job
rotation has allowed assembly workers to learn the
assembly operations for multiple workstations and
for a great number of product variants. Job rotation
creates a more varied work load, as monotonous
repetitive work has been identified as a major cause
of work-load-related disorders in the work force,
causing costs for absenteeism and replacements.
The job rotation scheduling tool takes into
consideration the accumulated fatigue of each
worker, the skills of each worker, noise exposure,
team compatibility, error probability and cost, in
order to reschedule and allocate the workforce
accordingly. However, these criteria are subject to
change, depending on the applications of this
method. In this context, different criteria can be used
for the operators in the final assembly of a truck,
where tasks are more demanding in terms of
physical strength, and in the case of car final
assembly. In Figure 4, the design of the dynamic job
rotation tool is presented. The objective is to enable
human operators to rotate continuously and
effectively, while at the same time supporting
them with the right amount of information at the
right place, so that they can carry out any assembly
operation without compromising product quality
(Michalos 2010b, Michalos 2011).
The parameters that affect the rotation should be
accessible to the tool at any time. These parameters
are measured or calculated by the human worker
support system and the existing production
monitoring systems. The stations that each operator
is assigned to are stored in the database, along with
the production status, production flow, orders and
due dates. When the job rotation tool is calculating
and determining every possible alternative for the
next rotation, it is informed of the accumulated
fatigue of each worker. Shop floor specifications,
such as noise levels, are also stored in the
monitoring system and serve in the determination of
alternatives in the same manner as explained before.
The skills and competence level of each worker are
stored in the database in the form of a worker pool,
from which the tool chooses and allocates the
workers. With the use of all the previous data, along
with the job rotation criteria, the tool is able to
determine the most efficient job rotation schedule.

Figure 4: Job rotation integrated with the operator support system. [The figure shows the dynamic job rotation scheduling tool drawing the current production status (workforce, workload, due dates, time, cost) from the shop floor monitoring database and the IT/legacy systems, and evaluating alternatives Alt1..Altn against the job rotation criteria (competence, fatigue, distance, cost, repetitiveness) in a decision matrix over the worker pool.]
In order to analyze all the gathered information,
the tool uses a decision matrix. The decision
matrix receives as input all the possible alternatives
and weighs them according to the determined
criteria. The output is the most suitable alternative.
The alternatives are generated by the job rotation
tool, and the criteria, along with their weight factors,
are pre-determined by the user. The tool uses a
worker pool from which it allocates the workers.
The new task assignments are presented, on the
workstation terminal, to the worker, as rotation
instructions. The task assignment and rotation
frequency are defined by the user. For example, the
job rotation frequency could be defined within the
shift period of a worker or a team, with each worker
assigned to spend a certain amount of time at
his/her workstation.
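A weighted-sum decision matrix of the kind described above can be sketched as follows; the weights are user-defined, and criteria where lower raw values are better (e.g. fatigue, cost) are assumed to have been inverted during normalisation:

    // Weighted-sum decision matrix sketch: scores[i][j] rates alternative i
    // against criterion j on a normalised 0..1 scale ("lower is better"
    // criteria such as fatigue or cost are assumed inverted beforehand).
    public class DecisionMatrix {

        static int bestAlternative(double[][] scores, double[] weights) {
            int best = -1;
            double bestValue = Double.NEGATIVE_INFINITY;
            for (int i = 0; i < scores.length; i++) {
                double value = 0;
                for (int j = 0; j < weights.length; j++)
                    value += weights[j] * scores[i][j];
                if (value > bestValue) { bestValue = value; best = i; }
            }
            return best; // index of the most suitable alternative
        }
    }

With, say, three alternatives rated against competence, fatigue and distance, bestAlternative(scores, new double[] {0.5, 0.3, 0.2}) would return the row index of the highest weighted score.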
An example of the operation of the job rotation
tool is discussed in the following paragraphs. The
case focuses on the assembly of two products.
There are 2 workstations (Workstation1 and
Workstation2), on which 8 tasks take place, one in
each workplace. For these 8 tasks, there are 8
workers in the worker pool (see Figure 5). The
workers' current state, in terms of fatigue, cost rate
and competence, is represented as bar charts next to
each operator. These criteria are represented in a
table, from which the decision matrix will calculate
and identify the best alternative.
The use of job rotation within the assembly line,
in order to be successful, requires the existence of
wireless information environments such as the one
discussed in this work. The reason is that the
frequent rotations force operators to seek
information on their new position, thus demanding
mental effort and increased assimilation time on
their side.



Figure 5: The assembly station along with the worker pool
in which the current state of workers is presented
The job rotation tool calculates all the possible
alternatives. In this case, all the workers can carry
out all the tasks; in other cases, not all workers are
able to carry out all the tasks. In Figure 6, possible
alternatives for the first rotation step are presented
indicatively. A rotation step represents part of the
complete alternative. The latter includes all the
operator-to-task assignments for all the vehicles that
will pass through the assembly station.


Figure 6: Possible alternatives
After all the possible alternatives have been
calculated, the tool uses the table with the state of
the workers in order to determine, from the decision
matrix, the most suitable alternative. In the current
case, the most suitable solution is Alternative 2, as
shown in Figure 7.


Figure 7: The decision matrix is used to determine the most
suitable alternative
2.3. INFORMATION FILTERING AND
SYNCHRONISATION
The two servers communicate, and the logical
sequence of information is shown in Figure 8. Each
object in the assembly line is recognized with
respect to time and space; therefore, a common time
clock is used, as well as a physical-to-computer
reference matrix that specifies the RFID topology
with respect to the physical world (e.g. interrogator
5, antenna 1 is placed at the air tank assembly
entrance point).
When an RFID tag that is mounted on a chassis
arrives at the recognition area, the following events
are generated by the RFID interrogator and
collected by the data collection service of the
"Location identification Server":
An "RFID tag arrived" event is generated once
and indicates the first successful read in the
recognition area. It includes the data: date-time
stamp, reader reference, antenna reference, RFID
tag's ID.
An "RFID tag inventory" event is generated
continuously while the RFID tag is in the
recognition area and includes the data: date-time
stamp, reader reference, antenna reference, RFID
tag ID, tag reads.
An "RFID tag departed" event is generated
once, following a time-out after the last successful
tag read, indicating a departure of the RFID tag
from the recognition area. It includes the data: date-time
stamp, reader reference, antenna reference, RFID
tag's ID.
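The three event types carry almost the same payload, so a single record class suffices; the field names below simply mirror the data items listed above and are otherwise assumptions:

    import java.util.Date;

    // One record per RFID read event; fields mirror the data items listed above.
    public class RfidEvent {
        public enum Type { ARRIVED, INVENTORY, DEPARTED }

        public final Type type;
        public final Date timestamp;     // date-time stamp
        public final String readerRef;   // reader reference
        public final String antennaRef;  // antenna reference
        public final String tagId;       // RFID tag ID
        public final int tagReads;       // meaningful only for INVENTORY events

        public RfidEvent(Type type, Date timestamp, String readerRef,
                         String antennaRef, String tagId, int tagReads) {
            this.type = type;
            this.timestamp = timestamp;
            this.readerRef = readerRef;
            this.antennaRef = antennaRef;
            this.tagId = tagId;
            this.tagReads = tagReads;
        }
    }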


A service called the business logic service
runs asynchronously and uses the relationships
already established between the objects and the
RFID tags; the above events are translated to
chassis/tool/human "arrived", "in recognition area"
and "departed" events. The events are referenced to
the defined topology and are of the form:
"Chassis # arrived at Assembly Station #". It
includes the data: date-time stamp, chassis #,
assembly station #.
"Tool # applied at Chassis #". It includes the
data: date-time stamp, chassis #, tool #.
"Operator # requested instructions for
Chassis #". It includes the data: date-time stamp,
chassis #, operator #.

[Figure 8 shows the Location identification Server reading the vehicle ID from the vehicle's RFID tag, and the Workflow control and Pushlet host, interfaced to legacy systems holding the assembly instructions, checklists and CAD files, pushing the filtered information over Ethernet/wireless to the worker's portable device, with worker feedback returned.]

Figure 8: Human based assembly system overall concept
A service called the recognition and event
generation service runs asynchronously and uses the
event definitions, in conjunction with the object
events, in order to recognize assembly line events
and generate real-time triggers when these occur.
Such events may be used for passing information to
other IT systems or alerts to operators:
Assembly Line Information: "Chassis # arrived
at Assembly Station # on date-time", "Operator #
requested instructions at time", "Tool # applied at
time", "Chassis # departed on date-time".
Assembly Line Alert: "Chassis # arrived at
Assembly Station # on date-time", "Chassis #
departed on date-time. ALARM: no tool applied".
This procedure ensures that the proper
information for the right vehicle is filtered and
pushed to the proper operator.
3. WORKFLOW CONTROL AND
PUSHLET HOST SOFTWARE
ARCHITECTURE
The integration of the system has been carried out
under the "Workflow control and pushlet host"
software tool. The architecture of the tool consists
of two basic modules: a) the web server module and
b) the web browser module. The web server
module is responsible for storing, retrieving,
handling and pushing the information to the web
browser, while the latter is used as the end-user
interface. As shown in Figure 9, the discussed
information system adopts the Model-View-Controller
(MVC) architectural pattern (Mourtzis et al 2008,
Makris et al 2011) by utilising the STRUTS
framework (Brown 2001). In the following, the
different layers of the architecture in Figure 9 are
presented in detail, from the bottom to the top:
In the physical data layer, the Model of the
architecture is implemented as an Object-Relational
Mapping (ORM), utilizing the Hibernate framework
(URL Hibernate 2011) for mapping an object-oriented
domain model to a relational database. The
software system uses an Oracle Database Server for
storing application data and an external database to
read the RFID-generated data.
The Controller of the architecture is
implemented in JAVA and JSP and utilizes XML
and Pushlet technologies. The application logic is
used to determine the distribution of information to
the clients and also to perform the desired
manufacturing-related activities, such as operator
planning, ordering of materials, etc. Pushlets are an
open source, servlet-based mechanism where data
is pushed directly from server-side Java objects to
(Dynamic) HTML pages within a client browser,
without using Java applets or plug-ins. This allows a
web page to be periodically updated by the server
(URL Pushlets 2011). Every front user, which in
this particular paper is an operator of the assembly
line, is registered as a pushlet client in the system's
pushlet server, allowing the content that the front
users see to be updated when the system requires it.
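The pushlet library has its own API (URL Pushlets 2011); the sketch below only illustrates the underlying HTTP-streaming mechanism with plain servlet code, keeping the response open and flushing a script fragment whenever the business logic queues an update. All names are hypothetical:

    import java.io.IOException;
    import java.io.PrintWriter;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Bare-bones HTTP-streaming ("pushlet"-style) servlet: the response stays
    // open and each queued update is flushed as a <script> call into the page.
    public class InstructionPushServlet extends HttpServlet {

        private final BlockingQueue<String> updates = new LinkedBlockingQueue<String>();

        // Called by the business logic when filtered information must be pushed.
        public void push(String instructionId) {
            updates.offer(instructionId);
        }

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            resp.setContentType("text/html");
            PrintWriter out = resp.getWriter();
            out.println("<html><body>");
            out.flush();
            try {
                while (true) {
                    String id = updates.take(); // blocks until an update is queued
                    // A handler already loaded in the operator's page redraws the panel.
                    out.println("<script>parent.showInstruction('" + id + "');</script>");
                    out.flush();
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // client gone or shutdown
            }
        }
    }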
Finally, the upper layer of the application is
responsible for presenting the desired information to
the end users, who are the assembly line operators,
through HTML and JavaScript dynamically
generated using JSP technology.



Figure 9. Architecture of the Human Based Assembly
support system
The logical workflow for the operation of the
discussed system can be summarized in the
following steps:
1. The user of the web server module uses the
web interface to store the relevant assembly
instructions in the database.
2. When the vehicle chassis rolls into one of
the assembly stations, the RFID antennas identify
the tag that is attached to it and update the RFID
database with the arrival of the chassis.
3. The application logic uses the event that
was registered in step 2 and filters the information
from step 1, so that only the instructions for the
specific product are handled.
4. The pushlet server receives the information
from step 3 and forwards it automatically to the
terminal installed at the specific workstation. The
terminal can consist of either stationary monitors
(usually floor- or ceiling-mounted) or mobile devices
carried by the operators.
5. The end user (operator) utilizes the web-based
interface to request and retrieve all the available
information and also to perform further functions,
such as requesting consumables and so on.

The "Workflow control and pushlet host" software
system is implemented as a web application, with
interfaces both to legacy systems for tracking
problems in a production line and to newly
introduced software applications, namely the RFID
monitoring of parts inside a production line or of
assembly operators. It also includes an internal
database facilitating the authentication and
authorization of end users, the job rotation module,
as well as the persistent storage of the data
exchanged between the software applications. In the
next paragraphs, the main modules of the system
that have been implemented are discussed.
4. SOFTWARE SYSTEM
IMPLEMENTATION
4.1. MODEL VIEW CONTROLLER
FRAMEWORK
Model-View-Controller (MVC) is an architectural
pattern for separating the data model from the user
interface and the business logic, so that changes to
the user interface will not affect data handling, and
the data can be reorganized without changing
the user interface (Brown 2001).
In the MVC design pattern, the application flow is
mediated by a central Controller. The Controller
delegates requests - in our case, HTTP requests - to
an appropriate module called a handler. The
handlers are tied to a Model, and each handler acts
as an adapter between the request and the Model.
The Model represents, or encapsulates, an
application's business logic or state. Control is
usually then forwarded back through the Controller
to the appropriate View. The forwarding can be
determined by consulting a set of mappings, usually
loaded from a database or configuration file. This
provides a loose coupling between the View and the
Model, which can make applications significantly
easier to create and maintain, as shown in Figure 10.
A commonly used framework adopting the MVC
pattern is Apache Struts (Brown 2001, URL Apache
2011).


[Figure 10 shows the browser sending a request to a Servlet, which instantiates and uses a JavaBean that calls the database; the Servlet then forwards to a JSP, which sends the response back to the browser.]

Figure 10: MVC architecture
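A stripped-down front controller conveying the flow of Figure 10 can be sketched as below; Struts generates this plumbing from its configuration file, and the class and mapping names here are invented for illustration:

    import java.io.IOException;
    import java.util.HashMap;
    import java.util.Map;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Minimal MVC front controller: a handler updates the Model, and the
    // controller forwards to the JSP View named by the handler.
    public class FrontControllerServlet extends HttpServlet {

        interface Handler { String handle(HttpServletRequest req); } // returns a view path

        private final Map<String, Handler> mappings = new HashMap<String, Handler>();

        @Override
        public void init() {
            // In Struts, such mappings are loaded from a configuration file.
            mappings.put("/instructions", new Handler() {
                public String handle(HttpServletRequest req) {
                    req.setAttribute("chassisId", req.getParameter("chassis"));
                    return "/instructions.jsp";
                }
            });
        }

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            Handler handler = mappings.get(req.getPathInfo());
            if (handler == null) {
                resp.sendError(HttpServletResponse.SC_NOT_FOUND);
                return;
            }
            req.getRequestDispatcher(handler.handle(req)).forward(req, resp);
        }
    }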
4.1.1. Authentication/Authorization
To properly manage the operators' access to the
right information, the Java Authentication and
Authorization Service (JAAS) has been used: for
the authentication of users, to reliably and securely
determine who is currently using the software
system, and for the authorization of users, to ensure
they have the access control rights (permissions)
required for the actions performed (URL JAAS
2011).
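A minimal JAAS login sketch follows; "AssemblySupport" is an assumed entry name in the JAAS login configuration file, which the paper does not specify:

    import javax.security.auth.Subject;
    import javax.security.auth.callback.Callback;
    import javax.security.auth.callback.CallbackHandler;
    import javax.security.auth.callback.NameCallback;
    import javax.security.auth.callback.PasswordCallback;
    import javax.security.auth.callback.UnsupportedCallbackException;
    import javax.security.auth.login.LoginContext;
    import javax.security.auth.login.LoginException;

    // Authenticates an operator via JAAS; the configuration entry name and
    // credential handling are assumptions, not the paper's actual setup.
    public class OperatorAuthenticator {

        static Subject authenticate(final String user, final char[] pass)
                throws LoginException {
            LoginContext lc = new LoginContext("AssemblySupport", new CallbackHandler() {
                public void handle(Callback[] callbacks) throws UnsupportedCallbackException {
                    for (Callback cb : callbacks) {
                        if (cb instanceof NameCallback)
                            ((NameCallback) cb).setName(user);
                        else if (cb instanceof PasswordCallback)
                            ((PasswordCallback) cb).setPassword(pass);
                        else
                            throw new UnsupportedCallbackException(cb);
                    }
                }
            });
            lc.login();             // authentication
            return lc.getSubject(); // carries the principals checked for authorization
        }
    }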


4.1.2. Model
The Model of the Human Based Assembly Support
system is the persistence-storage-related code. The
database supports the RFID integration, the
user/role management (authentication/authorization
module), the assembly information display/
management/feedback and the Job Rotation module
(assembly operator support, the dispatching of
operators to assembly tasks, assembly operator
monitoring).
Object-Relational Mapping has been used for data
modelling. It creates a virtual object database which
can be used from within the programming language
(URL Hibernate 2011).


Figure 11: Controller to Persistence Storage Class Diagram
The integration of the storage with the rest of the
application is managed by an implemented class
that handles the transactions and the persistent
storage of objects: the HibernateUtil class, which
is shown in the class diagram of Figure 11.
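The paper names the HibernateUtil class but does not show its internals; the common idiom such a helper follows in Hibernate-based applications of that era is a single application-wide SessionFactory, roughly:

    import org.hibernate.Session;
    import org.hibernate.SessionFactory;
    import org.hibernate.cfg.Configuration;

    // Usual shape of a HibernateUtil helper: one SessionFactory per
    // application, built from hibernate.cfg.xml. The paper's actual internals
    // are not shown, so this is only the standard idiom.
    public final class HibernateUtil {

        private static final SessionFactory SESSION_FACTORY =
                new Configuration().configure().buildSessionFactory();

        private HibernateUtil() { }

        public static Session openSession() {
            return SESSION_FACTORY.openSession();
        }
    }

A typical persistence call then opens a session, runs a transaction and closes the session, e.g. Session s = HibernateUtil.openSession(); s.beginTransaction(); s.save(instruction); s.getTransaction().commit(); s.close();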


Figure 12: Database ERD extract

An extract of the database schema that provides
the support to the previously defined modules is
shown in Figure 12.

4.1.4. View
The application's graphical user interface is split
into two views: the Administration interface, for
manipulating data such as the instruction content
and the Job Rotation module input and output data,
and the assembly instruction panel, which provides
the assembly operator with instructions (visual and
textual).
The Administration interface is implemented in
the form of JSP pages (the View of the MVC
architecture), which are used to enter data or perform
advanced operations, such as generating a schedule
on demand, adding and modifying instructions,
mapping with the RFID infrastructure, etc.
The Assembly Instruction Panel, on the other
hand, is not user driven but automatically displays
information to the operators. This panel implements
a specific design pattern, the Observer-Observable,
in order to automatically identify changes to the
system model and display the related information.
The Observer-Observable design pattern is useful
for maintaining one-way communication between
one object and a set of other objects (Cooper 2000).
In the described design, the Observable object is the
object that represents the vehicle coming to the
production station, which is realized through the
RFID integration.

Figure 13: Operator Panel Class Diagram
The GraphicalInstructionsPanel and the
TextInstractionsPanel implement the presentation of
information to the operator via the display of the
device. The class that implements the
RFIDSubjectInterface, which is the implementation
of the Observable interface, is the RFIDReader class,
which uses the RFIDInterface for reading the values
of the hardware RFID reader. Upon sensing a
change, this class automatically notifies the panel
classes to display the relevant information to the
assembly worker, as shown in Figure 13.
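The same one-way notification can be sketched with java.util's built-in Observer/Observable pair; the paper defines its own RFIDSubjectInterface instead, so this is only an analogous illustration, and the method bodies are assumptions:

    import java.util.Observable;
    import java.util.Observer;

    // Analogous to the paper's RFIDReader/panel design, using java.util's
    // Observer-Observable pair.
    class RFIDReader extends Observable {
        // Would be called by the hardware-facing RFIDInterface on each read.
        void onTagRead(String tagId) {
            setChanged();
            notifyObservers(tagId); // pushes the tag ID to every registered panel
        }
    }

    class GraphicalInstructionsPanel implements Observer {
        public void update(Observable source, Object tagId) {
            // Redraw the panel with the instructions for the sensed vehicle.
            System.out.println("Displaying drawings for vehicle " + tagId);
        }
    }

Registering a panel is then a single call, e.g. reader.addObserver(new GraphicalInstructionsPanel()), after which every tag read propagates to the display automatically.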
5. INDUSTRIAL CASE STUDY
The discussed system was implemented and
validated in a real industrial environment, more
specifically the final assembly line of a European
truck producer's facilities. The study involved two
assembly stations where the air tanks of the vehicle
are installed (similar to Figure 5). The operations
carried out in these stations are considered to be
very complicated, due to the large number of
variations in the number, type, mounting positions
and mounting procedures of the air tanks for each
truck. Assembly of the wrong tank type or mounting
using the wrong holes on the chassis are the main
quality errors observed at this stage; therefore,
support for the operators is required. The case study
involved the assembly of two different vehicles, in
terms of the type and number of air tanks to be
assembled.
The workflow that was followed during the
application of the case study is as follows:
- The job rotation module was used to determine
the most suitable rotation schedule for each
operator.
- RFID identification was used to identify the
chassis upon its arrival at the workstation.
- The system automatically retrieved the related
information for the specific tasks from the
database.
- The four operators entered the assembly area
and were identified, again using the RFID
identification system.
- The system checked whether the operator was at
the intended location, based on the job rotation
schedule, and redirected him if necessary (see
also Figure 15).
- The pushlet-based system forwarded the
retrieved information to the handheld device of
each operator.
- Operators successfully carried out all tasks,
consulting the handheld devices in case of
uncertainty. Images and videos of the parts and
the assembly process were the most used features
of the system.
Figure 15 provides a screenshot of the interface of
the handheld device, which indicates the job rotation
schedules for the operator. The current position
along the line and the remaining tasks are shown at
the top of the screen, while the next rotation steps
that the operator will need to follow are also
presented.


Figure 15: Handheld device interface
The performance of the system proved sufficient
for the needs of the assembly process, in terms of
information retrieval speed and robustness. The
identification of operators was almost real time,
and the transfer of multimedia files through the
local wireless network allowed for the instant
retrieval of assembly instructions. The drastic
reduction of the time and effort for searching and
acquiring assembly information allowed the
operators to concentrate on the actual task and thus
minimize the probability of errors.

6. CONCLUSIONS AND OUTLOOK
In this work, the implementation of a new approach
for real-time information sharing at the shop floor
level has been presented, utilizing mainly free, open
source and widely used technologies that can
guarantee the economic and technical feasibility
of the application.
A prototype version of the system has been
realized and tested in an actual truck assembly
environment. The findings indicate that a
considerable amount of flexibility potential,
previously not exploited, is now available through
the real-time, place-independent information flow
enabled by the utilized wireless technologies.
The synergy effect that can be achieved by the
integration of further functionalities can
significantly improve the performance and safety of
a wide range of manufacturing and assembly
systems. Typical examples include:
RFID technology used for the localization of
operators in large-scale assembly environments (e.g.
shipbuilding) and real-time information provision.
The pushlet/comet technology offers great
potential by forwarding the necessary information
to the operator, without having him search through
extensive lists of data.
Different concepts of operator localization
techniques are being developed, with the most
promising being WiFi-based triangulation. In this
concept, any WiFi-enabled device can be traced
within a 2-dimensional space, based on the received
signal strength of the device at each access point.
Since the mobile devices that the operators carry are
WiFi enabled, their location can conveniently be
retrieved by using this localization concept.
Nevertheless, the accuracy of the application in such
cases needs to be investigated (a trilateration sketch
is given below).
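As a rough illustration of the concept, the sketch below converts received signal strength to distance via a log-distance path-loss model and trilaterates from three access points; the path-loss constants are environment-dependent assumptions, which is exactly why the accuracy needs investigation:

    // RSSI-based 2-D localization sketch. p0 (dBm at 1 m) and the path-loss
    // exponent n are environment assumptions that must be calibrated on site.
    public class WifiTrilateration {

        static double rssiToDistance(double rssi, double p0, double n) {
            return Math.pow(10.0, (p0 - rssi) / (10.0 * n));
        }

        // ap[i] = {x, y} of access point i; d[i] = estimated distance to it.
        static double[] trilaterate(double[][] ap, double[] d) {
            // Subtracting circle 0 from circles 1 and 2 linearizes the problem.
            double a11 = 2 * (ap[1][0] - ap[0][0]), a12 = 2 * (ap[1][1] - ap[0][1]);
            double a21 = 2 * (ap[2][0] - ap[0][0]), a22 = 2 * (ap[2][1] - ap[0][1]);
            double b1 = d[0] * d[0] - d[1] * d[1]
                      + ap[1][0] * ap[1][0] - ap[0][0] * ap[0][0]
                      + ap[1][1] * ap[1][1] - ap[0][1] * ap[0][1];
            double b2 = d[0] * d[0] - d[2] * d[2]
                      + ap[2][0] * ap[2][0] - ap[0][0] * ap[0][0]
                      + ap[2][1] * ap[2][1] - ap[0][1] * ap[0][1];
            double det = a11 * a22 - a12 * a21;
            return new double[] { (b1 * a22 - b2 * a12) / det,
                                  (a11 * b2 - a21 * b1) / det };
        }
    }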

ACKNOWLEDGMENTS
This work has been partially supported by the
research project MyCar, funded by the CEU.

REFERENCES
Bäckstrand G., Thorvald P., Högberg D., De Vin L. and
Case K., "The Impact of Information Presentation on
Work Environment and Product Quality: A Case
Study", Proceedings of NES 2008, Reykjavik, Iceland,
CD-ROM
Brown S., Professional JSP 2nd Edition, Wrox Press
Inc, 2001
Chryssolouris G., Mavrikios D., Papakostas N., Mourtzis
D., Michalos G. and Georgoulias K., "Digital
manufacturing: history, perspectives, and outlook",
Proceedings of the I MECH E Part B Journal of
Engineering Manufacture, Vol. 222, 2009, pp.451-462
Chryssolouris G., Manufacturing Systems -Theory and
Practice, 2nd Edition, Springer-Verlag, New York,
NY, 2006
Chryssolouris G., Makris S., Xanthakis V., Konstantinis
V., An XML based implementation of the Value
Added Chain in Manufacturing: A Ship repair Case
Study, CIRP Journal of Manufacturing Systems, 32
(6), 2003
Cooper J., Java(TM) Design Patterns: A Tutorial,
Addison-Wesley Professional, 2000
Egea-Lopez E., Martinez-Sala A., Vales-Alonso J.,
Garcia-Haro J., Malgosa-Sanahuja J., Wireless
communications deployment in industry: a review of
issues, options and technologies, Computers in
Industry, Vol. 56, No. 1, 2005, pp 29-53
Fasth Å., Stahre J., Fässberg T. and Nordin G., iPod touch - an ICT tool for operators in factories of the future, Proceedings of the 43rd CIRP International Conference on Manufacturing Systems, 2010
Feldmann K. and Junker S., Development of New Concepts and Software Tools for the Optimization of Manual Assembly Systems, CIRP Annals - Manufacturing Technology, Vol. 52, No. 1, 2003, pp. 1-4
Feldmann K., Slama S., Highly Flexible Assembly - Scope and Justification, Annals of the CIRP, Vol. 50, No. 2, 2001, pp. 489-499
Makris S., Zoupas P., Chryssolouris G., Supply chain
control logic for enabling adaptability under
uncertainty, International Journal of Production
Research, Vol. 49, No. 1, 2011, pp. 121-137
Michalos G., Makris S., Papakostas N., Mourtzis D. and
Chryssolouris G., "Automotive assembly technologies
review: challenges and outlook for a flexible and
adaptive approach", CIRP Journal of Manufacturing
Science and Technology, Vol. 2, No. 2, 2010a, pp. 81-
91
Michalos G., Makris S., Rentzos L., and Chryssolouris
G., "Dynamic job rotation for workload balancing in
human based assembly systems", CIRP Journal of
Manufacturing Science and Technology, Vol. 2, No.3,
2010b, pp. 153-160
Michalos G., Makris S., Mourtzis D., A web based tool for dynamic job rotation scheduling using multiple criteria, CIRP Annals - Manufacturing Technology, Vol. 60, No. 1, 2011, pp. 453-456
Michalos G., Papakostas N., Mourtzis D., Makris S.,
Rentzos L. and Chryssolouris G., "Human
Considerations in automotive assembly systems:
Conceptual design", Proceedings of the IFAC
Workshop on Manufacturing Modelling, Management
and Control, Budapest, Hungary, 2007, pp. 175-180
Mourtzis D., Papakostas N., Makris S., Xantakis V.,
Chryssolouris, G., Supply chain modeling and
control for producing highly customized products,
CIRP Annals - Manufacturing Technology, Vol. 57,
No 1, 2008, pp. 451-454
Ng J.K.C., Ip W.H., Web-ERP: the new generation of enterprise resources planning, Journal of Materials Processing Technology, Vol. 138, No. 1, 2003, pp. 590-593
Pil F.K. and Holweg M., Linking Product Variety to
Order-Fulfillment Strategies, Interfaces, Vol. 34, No.
5, 2004, pp. 394-403.
Stankovski S., Lazarevic M., Ostojic G., Cosic I. and Puric R., RFID technology in product/part tracking during the whole life cycle, Assembly Automation, Vol. 29, No. 4, 2009, pp. 364-370
URL Apache, Apache Struts, The Apache Software Foundation, 1 June 2011, http://struts.apache.org/
URL Hibernate, Hibernate, 1 June 2011,
http://www.hibernate.org/
URL JAAS, Java Authentication and Authorization
Service (JAAS), Oracle, 1 June 2011,
http://www.oracle.com/technetwork/java/jaas-
135236.html
URL Pushlets, Pushlets, 1 June 2011, http://www.pushlets.com
Zhou S., Ling W., Peng Z., An RFID-based remote monitoring system for enterprise internal production management, International Journal of Advanced Manufacturing Technology, Vol. 33, No. 7-8, 2007, pp. 837-844

RFID-BASED REAL-TIME SHOP-FLOOR MATERIAL MANAGEMENT:
AN AUTOM SOLUTION AND A CASE STUDY
Ting QU
Department of Industrial and Manufacturing
Systems Engineering,
The University of Hong Kong
quting@hku.hk
George Q HUANG
Department of Industrial and Manufacturing
Systems Engineering,
The University of Hong Kong
gqhuang@hku.hk


Yingfeng ZHANG
Key Laboratory of
Contemporary Design and
Integrated Manufacturing
Technology,
Ministry of Education,
Northwestern Polytechnical
University, Xi'an, P. R. China
zhangyf@nwpu.edu.cn
Xindu CHEN
Faculty of Electromechanical
Engineering, Guangdong
University of Technology,
Guangzhou, P R China
chenxindu@gdut.edu.cn
Hao LUO
Department of Industrial and
Manufacturing Systems
Engineering,
The University of Hong Kong
luohao403@hku.hk
ABSTRACT
Radio Frequency Identification (RFID) technologies provide automatic and accurate object data capturing capability and enable real-time object visibility and traceability. Potential benefits for improving manufacturing shop-floor management have been widely reported. However, reports on how such potential is realized in real-life, daily shop-floor operations are very limited; as a result, skepticism overwhelms enthusiasm. This paper introduces the AUTOM solution, which provides an easy-to-use and simple-to-deploy framework for manufacturers to implement RFID/Auto-ID enabled smart shop-floor manufacturing processes. A real-life case of adopting AUTOM to realize RFID-enabled material distribution at a large air conditioner manufacturer is introduced, aiming to revitalize RFID efforts in the manufacturing industries. It is hoped that the insights and lessons gained can be generalized for future efforts across household electrical appliance manufacturers in particular, and other types of manufacturers in general.
KEYWORDS
Radio Frequency Identification (RFID), air conditioner, manufacturing execution, production
management, material distribution.



1. INTRODUCTION
Radio Frequency Identification (RFID) technologies
offer the capability of automatic and accurate object data
capturing and enable real-time traceability and visibility
(Chalasani and Boppana, 2007). While supply chain
logistics industries have mandated the adoption of the
technologies and initiated substantial research and
development activities (Williams, 2004), manufacturing
industries have also made practical progress (Huang et
al., 2007). Manufacturers deploy RFID devices to shop-
floor objects such as men, machines and materials to
capture data associated with their statuses (Brintrup et al., 2010; Ren et al., 2010). Such RFID-enabled real-
time visibility and traceability substantially improve
shop-floor management in general and Work-In-
Progress (WIP) materials management in particular
(Huang et al., 2008).
Despite widespread enthusiasm, reports on real-life industrial practices, either successful experiences or painful lessons, are very limited. The majority of reports have been based on preliminary industrial experiments rather than on implementations for everyday operations. Skepticism increasingly overshadows the claimed potential benefits. In order to revitalize the effort, this paper introduces the AUTOM solution, whose name stands for using Auto-ID technologies to enable a ubiquitous and smart manufacturing process. It was put forward by the authors' research team and has been successfully applied in several real manufacturing companies for various real-time shop-floor management purposes. In the later part of this paper, we also present one of these real-life case studies, implemented at an air conditioner manufacturer.
The purpose of this paper is threefold. Firstly, the paper shows how RFID-enabled potential benefits come true in a real-life company, in terms of improved visibility and traceability, information accuracy, operational efficiency, reduced costs, increased speed and responsiveness, and better product quality control (Clarke et al., 2006; Henseler et al., 2008). Secondly, the paper demonstrates how a company deals with the technical, social and organizational challenges of adopting RFID solutions for shop-floor management. Finally, the paper generalizes a procedure for how RFID solutions can be applied on manufacturing shop floors for household electrical appliance products with similar characteristics.
The remainder of the paper is arranged following a general implementation procedure (Ngai et al., 2010): (1) analysis of the existing business processes, (2) creation of the RFID-enabled manufacturing shop floor, (3) implementation of the reengineered business process, (4) championing of good practices, and (5) reflection for the future. Section 2 discusses how the AUTOM solution is used for creating an RFID-enabled smart and ubiquitous shop floor. Section 3 introduces a real-life case study to exemplify how shop-floor operational problems can be solved by AUTOM (RFID) applications. Conclusions are given in Section 4.
2. CREATING RFID-ENABLED SHOP FLOOR
WITH AUTOM SOLUTION
The implementation of a shop-floor RFID system is far from simply placing readers everywhere. It is a systematic process involving an appropriate tagging scheme, a reasonable reader arrangement, standard information communication, synchronized process integration, etc. In fact, a series of implementation questions normally perplexes a company during its initial investigation: Which shop-floor objects should be tagged and which should be equipped with RFID readers? Where should RFID readers be placed, and in which form and frequency? How can the basic real-time data be converted into useful visibility and traceability support that facilitates daily shop-floor operations? Although partial solutions can be found for each single question, integrated frameworks for practical shop-floor implementation are few.
The AUTOM solution has been put forward as a holistic framework for implementing shop-floor RFID applications. AUTOM is an umbrella technology for manufacturing solutions enabled by Auto-ID devices such as RFID and barcode, and by other types of wireless devices such as Bluetooth, Wi-Fi, GSM and infrared. It not only defines the overall information infrastructure within a standard shop-floor resource model, but also provides facilities that enable real-time data processing in the model. This section introduces the AUTOM solution and how it can be customized to create a typical RFID-enabled shop floor. It should be mentioned that AUTOM provides an effective and efficient way of implementing RFID, but it is not the only possible solution. However, all the implementation principles and procedures introduced below are general enough to be followed if another solution is used.
2.1 REAL-TIME SHOP-FLOOR INFORMATION
INFRASTRUCTURE
The role of the AUTOM infrastructure is shown in Figure 1. It is designed to be consistent with the standard enterprise hierarchy defined by the ISA-95 enterprise-control system integration standard (http://www.isa.org). An enterprise hosts one or more manufacturing sites or areas (e.g. factories or workshops), each of which consists of several production lines/cells or storage zones (e.g. assembly lines or warehouses). The operation of a production line involves a variety of production units, whose operations concern both manufacturing resources (e.g. materials, equipment and operators) and their logical combinations (e.g. product assembling).

[Figure: the AUTOM infrastructure aligned with the ISA-95 hierarchy (Enterprise; Site/Area; Production Line / Storage Zone; Production Unit / Storage Module, with Resources and Process Segments). Enterprise Information Systems (EISs) sit at the top; below them, the shop-floor application system (an RFID-enabled real-time material distribution system spanning warehouse, buffer, logistics and final assembly); stationary and portable RFID-Gateways connected via 802.11g, Bluetooth, ZigBee, USB and serial port; active smart objects (barcode readers, HF/UHF RFID readers, terminals, Ethernet hubs) at workstations, shelves/storage units, forklifts/trucks and data collection points; and passive smart objects (materials, products/WIP, equipment, pallets/containers, personnel) carrying barcode, HF RFID or UHF RFID tags.]

Figure 1 - AUTOM Infrastructure
Corresponding to the four-level ISA-95 enterprise hierarchy shown in the left part of Figure 1, the AUTOM information infrastructure is also developed to comprise four technical levels, as shown in the right part. The highest level includes the conventional enterprise information systems (EISs), such as ERP, MRP etc., used by enterprise management for making production plans. The three lower levels are RFID-enabled AUTOM facilities, including two hardware facilities, named smart objects and RFID-Gateway, and a shop-floor application system. Direct instantiation of these AUTOM facilities provides an effective and efficient way of creating an RFID-enabled shop floor suitable for real-time plan execution. The following subsections detail this instantiation process.
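To make the four-level containment concrete, the hierarchy can be modelled roughly as follows; class and field names are illustrative assumptions, not part of ISA-95 or AUTOM:

import java.util.ArrayList;
import java.util.List;

// Illustrative containment model of the four levels that AUTOM instantiates.
class Enterprise     { String name; List<SiteArea> areas = new ArrayList<>(); }
class SiteArea       { String name;                       // factory or workshop
                       List<ProductionLine> lines = new ArrayList<>();
                       List<StorageZone> zones = new ArrayList<>(); }
class StorageZone    { String name; }                     // e.g. a warehouse
class ProductionLine { String name; List<ProductionUnit> units = new ArrayList<>(); }
class ProductionUnit {
    // Each unit involves manufacturing resources and their logical
    // combinations (process segments), per the resource model above.
    List<String> resources = new ArrayList<>();       // materials, equipment, operators
    List<String> processSegments = new ArrayList<>(); // e.g. "product assembling"
}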
2.2 RFID-ENABLED SHOP-FLOOR HARDWARE
FACILITIES

Smart Object
A smart object (SO) is a physical manufacturing
resource that is made smart by equipping it with
certain degree of intelligence: Auto-ID, memory,
communicating ability etc. Normally, two typical Auto-
ID techniques, namely RFID and barcode, are used in
combination. SOs could be categorized into passive and
active SOs. Those attached with Auto-ID tags and
readers are called passive SOs and active SOs
respectively.
The creation of a passive smart object involves two steps. The first is to scope the objects to be tracked and the second is to determine which kind of tag will be used. This case focuses on the material distribution process, which is concerned with delivering the right item, by the right operator with the right tools, in the right quantity, at the right time, from the right source to the right destination. Therefore, the objects that need to be tracked include raw materials, final products, storage locations, circulating boxes, pallets and operators. The tagging of these objects follows the closed-loop RFID system application principle advocated by Schmitt et al. (2007): RFID transponders should be attached to those objects which are shipped or moved within a cycle and eventually return to their point of origin. Therefore, internally used or circulated resources, such as operators, storage locations, shipping pallets and circulating boxes, are attached with RFID tags. Since the raw materials and final products in the case company already use barcode labelling, in agreement with other supply chain partners, they are kept unchanged. The WIP in the distribution process is traced via its containers.
Active SOs are manufacturing resources equipped with RFID readers. They are normally installed at the value-adding points of a process where passive SOs are to be tracked. Normally, the influential range of an active SO is limited, e.g. around a workstation or a storage rack. In this case, the active SOs are all integrated in RFID-Gateways (see next section) as built-in active SOs and only control the entrance points of line-level areas. This is for two reasons. First, all the assembly lines operate in a continuous production mode, so one reader placed at the end of a production line can help derive the statuses of all the comprised workstations. Second, for a logistics process combining storing, loading, transporting and buffering, stochastic mobile data reading and processing may happen anywhere; a portable RFID-Gateway is needed for this case.

RFID-Gateway
Instead of exposing RFID devices directly to application systems, the AUTOM solution puts forward an intermediate level, called RFID-Gateway, to form a loosely coupled system architecture (Zhang et al., 2010). An RFID-Gateway has a hardware hub and a suite of management software, which acts as a server hosting all the (active) SOs within a certain working area. By incorporating the various drivers of SOs into a driver library, the RFID-Gateway is able to work in a plug-and-play fashion with newly plugged SOs. Heterogeneous SO drivers are wrapped into standard web service interfaces, enabling upper-level applications to use all the devices in a uniform way. The influential range of an RFID-Gateway is the union of the ranges of all the hosted active SOs, normally a work cell or a production line. Despite the keyword "RFID", the RFID-Gateway is substantially supported by other technologies, e.g. barcodes, Wi-Fi, PCs, PDAs etc.
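A possible shape for such a plug-and-play driver layer is sketched below; the interfaces are illustrative assumptions, since the RFID-Gateway software itself is not published in this form:

import java.util.*;

// Illustrative driver layer: each heterogeneous device driver implements
// one interface, and the gateway exposes reads from any hosted device
// through a single uniform call (which could then be wrapped as a web service).
interface SmartObjectDriver {
    String deviceType();                 // e.g. "UHF-RFID", "HF-RFID", "BARCODE"
    List<String> readIds();              // tag or barcode IDs currently seen
}

class RfidGateway {
    private final Map<String, SmartObjectDriver> drivers = new HashMap<>();

    /** Called when a new active smart object is plugged in. */
    public void register(String deviceId, SmartObjectDriver driver) {
        drivers.put(deviceId, driver);
    }

    /** Uniform read used by upper-level applications. */
    public List<String> read(String deviceId) {
        SmartObjectDriver d = drivers.get(deviceId);
        return d == null ? Collections.emptyList() : d.readIds();
    }
}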


There are stationary, mobile and portable RFID-Gateways. A stationary RFID-Gateway is placed at a fixed location, such as the gate of a warehouse; items are moved to it to be tracked. A mobile RFID-Gateway is installed on a moveable manufacturing resource, such as a forklift truck; tagged items are not only carried but also traced by the movable resource during long-distance transportation. A portable RFID-Gateway is a handheld device responsible for distributed item identification within a certain area or along a certain process. Unlike the previous two types, a portable RFID-Gateway is always moved close to the objects by an operator for tag reading.
[Figure panels: (a) lab version stationary RFID-Gateway; (b) onsite version stationary RFID-Gateway; (c) portable RFID-Gateway]

Figure 2 - Implementation of RFID-Gateway
The case company, which will be introduced in the next section, mainly uses stationary and portable RFID-Gateways. Figures 2(a) and 2(b) show the lab version and onsite version stationary gateways. The former exposes some of the integrated Auto-ID devices (i.e. active smart objects), including Alien ALR-9800 UHF RFID readers, ACS 120 HF RFID readers, and Metrologic MS9535 Bluetooth barcode readers. Figure 2(c) shows the portable gateway, which is implemented on the basis of a Motorola MC9090-G RFID handheld reader.
2.3 RFID-ENABLED APPLICATION SYSTEMS
The shop-floor application system is an RFID-enabled real-time information system providing a two-way information channel between shop-floor execution and decision making (Zhang et al., 2010). From execution to decision, the system collects, via the RFID-Gateways, real-time information on the smart objects involved in a manufacturing process, for adaptive decision making or process control. From decision to execution, the system transfers and interprets shop-floor decisions into executive work orders to be followed by the smart objects.
Such an application system normally includes visibility and traceability modules. A visibility module shows the real-time operational status of a specific manufacturing site, with graphical user interfaces that facilitate easy operation by the operators. The principle followed is "what you see is what you do, and what you do is what you see". Traceability is a back-end control mechanism which integrates the real-time information captured from the different manufacturing stages. Typically, information from different RFID-Gateways can be synchronized to enable coordinated operations, while the history information of a manufacturing object or process can be retrieved at a later time for failure investigation. Normally, the application system is very process-specific, and thus hardly any off-the-shelf system on the market is directly ready for use.
3. A CASE STUDY OF RFID-ENABLED
SHOP-FLOOR MATERIAL DISTRIBUTION
SYSTEM
3.1 ABOUT THE CASE COMPANY
The case company is a large air conditioner
manufacturer which has a huge annual output over 4
million sets. Therefore, it pays much attention on
improving the quality of its material distribution to
secure an efficient and stable production process. In the
air-conditioners whole production process, the case
company only undertakes the last two assembly stages,
i.e. preassembly and final assembly. Over 90% of the
parts and accessories manufacturing are outsourced to
suppliers and shareholding subsidiaries. Currently, the
case company has three assembly workshops responsible
for three different product lines, but uses over 5000
types of materials purchased from nearly 80 suppliers.
All the materials are stored in 49 separate warehouses
and are distributed to workshops by more than 100
logistics staffs. At current stage, we choose one
workshop and one warehouse to implement a small-scale
pilot project. Then, the pilot achievements will be
extended to other workshops and warehouses.
The typical shop-floor material distribution process involves the cooperation of four major functional departments: the Production Plan Department, which makes both production and material requirement orders; the Warehouses, which store both materials and final products; and the Logistics Department, which distributes materials to the Production Department and takes the finished products back to the warehouses. As can be seen in Figure 3, the production and logistics departments are located inside a workshop, while the other two departments are outside. The production department has two preassembly lines and ten final assembly lines positioned in two parallel production areas, with two corresponding buffers located to the left. The buffers and all the logistics operations are managed by the logistics department.



[Figure: layout of the workshop, showing the final assembly lines and preassembly lines of the Production Department, the preassembly and final assembly buffers managed by the Logistics Department, the warehouse, and the Production Plan Department exchanging orders/status with the ERP (SAP) system, together with the flows of raw materials and WIP. The key value-adding points are labeled "a1", "a2", "b", "c", "d" and "e", and the five stages of the distribution process are numbered 1 to 5.]

Figure 3 - Shop-floor layout and material distribution process of the case company
3.3 OPERATIONAL PROBLEMS ON THE SHOP FLOOR PRIOR TO RFID IMPLEMENTATION
A typical shop-floor material distribution process contains five stages, as shown in Figure 3. This section conducts confirmatory cause-and-effect analyses to find out whether and how RFID facilities are required in the material distribution process.
Stage 1 - Production and Material Requirement Plan
The production plan department makes and releases orders in paper-based, multi-copy form to the shop floors. A production order is made either for a final assembly line or for a preassembly line, while the associated material orders for both are released to the logistics department and the warehouse. Several order copies are also circulated to the document office for recording.
Three problems currently exist: (1) paper-based ordering is wasteful, time-consuming, error-prone, and subject to reprinting when an order changes; (2) planned orders are often unachievable due to various operational dynamics, e.g. machine breakdowns or defective materials/products; (3) out-of-stock materials in the warehouse cannot be identified and replenished in time, leading to frequent production delays or order changes.
The causes of the above problems are: a lack of appropriate visibility and traceability functions at key working areas, e.g. order visibility; and a lack of data synchronization between the ERP system and the shop floor's key value-adding points (the locations labeled "a" to "e" in Figure 3).
Stage 2 - Material Preparation and Delivery
Material operators at the warehouses pick up all the materials based on the received material orders and load them onto pallets in the afternoon. Logistics workers collect all the loaded pallets from the warehouses and deliver them to the workshops the next morning. All the finished material orders are manually updated to the ERP system by the warehouse manager when free, normally with a one-day delay.
Three problems currently exist: (1) locating and distinguishing materials manually is time-consuming, especially for the same type of material purchased from multiple suppliers; (2) logistics jams at the warehouse gates happen frequently, because the logistics work takes place within a relatively fixed period; (3) delayed material consumption information normally leads to untimely replenishment and thus to stock-out situations.
The causes of the above problems are: a lack of Auto-ID tags on material packages, circulating boxes and pallets; a lack of data capturing devices at the warehouse gate ("a2"); and a lack of real-time data synchronization between the warehouse storage areas ("a1") and the ERP system. These causes are shared by the following stages.
Stage 3 - Material Buffering
Materials delivered to the workshops are stored in the preassembly or final assembly buffers before being replenished to the corresponding lines. Components (WIP) made by the preassembly lines are also stored in the final assembly buffer.
Two problems currently exist: (1) checking materials in and out of a buffer involves complex manual data transaction work; (2) buffer information is not available in real time, so in case of an order change, buffered materials cannot be handled efficiently, e.g. transferred to another order or re-warehoused.
The causes of the above problems are similar to those discussed for the warehouses in Stage 2, except that the data capturing and synchronization point is "b".
Stage 4 - Preassembly
Logistics workers make their rounds along the preassembly lines periodically and replenish materials from the buffers to the lines when the line-side material inventory falls below a safety level. The finished WIP is sent to the final assembly buffer.
Two problems currently exist: (1) a lot of unnecessary labor is wasted on line-side inventory checking; (2) without real-time WIP information, final assembly may start earlier or later than needed, resulting in either production delay or redundant WIP.
Besides the basic causes, i.e. the lack of Auto-ID tags on materials and of data capturing devices at the preassembly lines ("c"), the lack of synchronization between the preassembly lines ("c") and the final assembly buffer ("d") is the main cause of the above two problems.
Stage 5 - Final Assembly
The way of replenishing line-side inventory is similar to that for the preassembly lines, while the finished final products are sent back to the warehouses instead.
Two problems currently exist: (1) the waste of unnecessary labor on line-side inventory checking still exists; (2) the logistics manager cannot arrange timely warehousing for the final products without real-time output information, resulting in high product inventory at the lines' ends.
Besides the basic causes, i.e. the lack of Auto-ID tags on materials and of data capturing devices at the final assembly lines, the lack of synchronization between the final assembly lines ("e") and the logistics department is the main cause of the above two problems.
3.4 APPLICATION OF AUTOM SOLUTION
3.4.1 AUTOM HARDWARE DEPLOYMENT
All the labeled locations in Figure 3 have been identified as value-adding points and will be equipped with suitable RFID-Gateways. The detailed configurations of these RFID-Gateways are listed in Table 1 for comparison.

Table 1 - Configuration and Deployment of RFID-Gateways

No. a1 | Location: Warehouse Storage Area | User: Material Operators | Active Smart Objects: Barcode, UHF RFID
Major functions: (1) View the pallet loading scheme; (2) bind materials (barcode) with circulating box (UHF); (3) bind circulating boxes (UHF) with pallet (UHF); (4) bind pallets (UHF) with location (UHF).

No. a2 | Location: Warehouse Gate | Users: Logistics Operator, Warehouse Keeper | Active Smart Objects: HF RFID, UHF RFID
Major functions (Logistics Operator): (1) Read staff card (HF) and get associated delivery tasks; (2) locate the pallet to be delivered; (3) check out the pallet.
Major functions (Warehouse Keeper): (1) Check the status of logistics operators; (2) check the validity of the materials and pallets to be checked out.

No. b | Location: Preassembly Buffer Gate | User: Logistics Operator | Active Smart Objects: HF RFID, UHF RFID
Major functions: (1) Check in materials (on pallets) delivered from warehouses; (2) check out buffered materials and send to preassembly lines.

No. c | Location: End of Preassembly Line | User: Loading Operator | Active Smart Objects: UHF RFID
Major functions: (1) Report finished preassembly tasks; (2) bind finished WIP with pallet.

No. d | Location: Final Assembly Buffer Gate | User: Logistics Operator | Active Smart Objects: HF RFID, UHF RFID
Major functions: (1) Check in materials (on pallets) delivered from warehouses; (2) check in WIP (on pallets) delivered from preassembly lines; (3) check out buffered materials and send to final assembly lines.

No. e | Location: End of Final Assembly Line | User: Packing Operators | Active Smart Objects: Barcode, UHF RFID
Major functions: (1) Report finished final assembly tasks; (2) bind finished products with the pallet being loaded.
3.4.2 AUTOM APPLICATIONS DEVELOPMENT
The project team customized an RFID-enabled real-time shop-floor material distribution system (RT-SMDS) for the case company. The system is implemented on a service-oriented architecture (SOA), as shown in Figure 4.
The visibility modules are implemented in the form of a set of web explorers that can be flexibly downloaded to and used on the RFID-Gateways. Both the traceability modules and the data services are implemented as web services. The data services not only enable standard XML-based data exchange between the application system and its own database, but also implement an ERP Adaptor integrating the RT-SMDS with the case company's SAP system. The adaptor has four SAP RFCs (Remote Function Calls): the Production Order RFC, Material Order RFC, BOM RFC, and Order Change RFC. The former two are used to get newly made production and material orders from the ERP; the latter two are used to modify released orders if a production order is changed by the production plan department. Details of the individual visibility modules and traceability services are given in Section 3.5, with a scenario description.
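As an illustration only: assuming the adaptor is built on the SAP Java Connector (JCo), a pull of newly made production orders might look like the sketch below, where the destination name "SAP_PROD" and the remote function Z_GET_PROD_ORDERS with its parameters are hypothetical placeholders, not the case company's actual RFCs:

import com.sap.conn.jco.*;

// Hedged sketch of a "Production Order" pull through an RFC, assuming a
// JCo destination "SAP_PROD" configured on the server.
public class ErpAdaptorSketch {
    public static void fetchNewOrders(String plant) throws JCoException {
        JCoDestination dest = JCoDestinationManager.getDestination("SAP_PROD");
        JCoFunction fn = dest.getRepository().getFunction("Z_GET_PROD_ORDERS");
        if (fn == null) throw new IllegalStateException("RFC not found in SAP");

        fn.getImportParameterList().setValue("IV_PLANT", plant); // assumed parameter
        fn.execute(dest);                                        // synchronous RFC call

        JCoTable orders = fn.getTableParameterList().getTable("ET_ORDERS");
        for (int i = 0; i < orders.getNumRows(); i++) {
            orders.setRow(i);
            System.out.printf("order %s, material %s, qty %s%n",
                    orders.getString("ORDER_NO"),
                    orders.getString("MATERIAL"),
                    orders.getString("QUANTITY"));
        }
    }
}

The Order Change direction would work symmetrically, invoking an update RFC when the RT-SMDS needs to push a modification back to SAP.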
[Figure: SOA architecture of the AUTOM RT-SMDS. The visibility modules (Shop-floor Dynamics Viewer, Pallet Load Scheme (PLS) Module, Warehouse Material Preparation Module, Warehouse Material Check-out Module, Preassembly Buffer Management Module, Final Assembly Buffer Management Module, Preassembly Line Module, Final Assembly Line Module) and the traceability services (Line-side Inventory Traceability Service, Order Traceability Service) sit on top of the data services, which connect the AUTOM database and, through the ERP Adaptor, the SAP proprietary database (Oracle).]

Figure 4 - System Architecture
3.5 OPERATION PROCESS AFTER RFID
IMPLEMENTATION
3.5.1 SCENARIO DESCRIPTION
This section illustrates a complete material distribution process in which RT-SMDS is used to accomplish one single production order. All the steps are arranged according to the logical sequence of the involved operators and activities (see Figure 5). The process is common to the other workshops and warehouses, because their material distribution processes are very similar.
There are ten operational steps in this process. Each step is directly enabled by one of the visibility modules, downloaded to the corresponding RFID-Gateway or PC, while supported by the traceability services running at the back end. The central part of Figure 5 illustrates the operational details of each step, including the venue, while the surrounding part shows the visibility interfaces. The five steps to the right are more logistics related, aiming at delivering materials from the warehouse to the workshops based on the planned material requirements; the five steps to the left are more production related, responsible for replenishing materials to the assembly lines following the actual production tempo. The process can also be divided into plan and execution steps, in the upper and lower parts respectively. The road signs in the central part show this conceptual division.
The operation process is directly supported by a series of visibility modules installed at the onsite RFID-Gateways, but the information interrelationships among the different operation steps are maintained through two major traceability services. The usage of these modules is illustrated below with the scenario.
3.5.2 VISIBILITY-ENABLED OPERATION PROCESS

Step 1 - Production Manager: Make Production and Material Requirement Plans
With the Real-time Shop Floor Visibility Explorer, the manager reviews the real-time shop-floor status, including the progress of current orders, the status of the production lines, and material availability. Such information enables an adaptive planning mode and makes the orders more feasible for execution. The generated orders are then released to the shop floors.

Step 2 - Logistics Manager: Make Pallet Loading Scheme
When a new material order is released, the logistics manager opens the Pallet Loading Scheme Editor to make the so-called pallet loading scheme (PLS). A PLS indicates which and how many materials of which material orders should be loaded on the same pallet, and to which workshop and at what time they should be delivered. Based on the PLS, the subsequent material distribution process can be conducted in batches (based on PLSs instead of individual materials) to enhance handling efficiency.
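As an illustration, a PLS record might carry the following information; this is an assumed shape, and the field names are not from the actual RT-SMDS:

import java.util.*;

// Assumed shape of a pallet loading scheme (PLS): which materials (and
// quantities) from which material orders go on one pallet, plus the
// delivery destination and time.
class PalletLoadingScheme {
    String plsId;
    String destinationWorkshop;
    java.time.LocalDateTime designatedFinishTime;
    // material order id -> (material code -> quantity on this pallet)
    Map<String, Map<String, Integer>> loads = new HashMap<>();
}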

Step 3 - Material Operator: Material Preparation in Warehouse
All the PLSs are sequenced by their designated finish times and shown in the Warehouse Material Preparation Explorer on the material operator's portable RFID-Gateway at the warehouse. The material operator selects the most urgent PLS to prepare. Materials are put into circulating boxes first and then loaded onto pallets. The barcodes of the materials are bound to the RFID tags of the circulating boxes and pallets by the portable RFID-Gateway, according to their inclusion relationships. Finished pallets are placed at the warehouse's shipping dock, where the tags of the pallet and of the specific storage cell are also bound, to record the pallet's current position.
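These binding relationships form a simple containment hierarchy (item, circulating box, pallet, storage location). A minimal in-memory sketch, assuming acyclic parent links (the real system persists bindings through its data services):

import java.util.*;

// Assumed representation of the Step-3 inclusion bindings:
// material barcode -> circulating-box tag -> pallet tag -> location tag.
public class BindingStore {
    private final Map<String, String> parentOf = new HashMap<>();

    public void bind(String childId, String parentId) {
        parentOf.put(childId, parentId);   // e.g. box tag -> pallet tag
    }

    /** Walk upwards to find the box/pallet/location that contains an item. */
    public List<String> containmentChain(String id) {
        List<String> chain = new ArrayList<>();
        for (String cur = id; cur != null; cur = parentOf.get(cur)) {
            chain.add(cur);                // item, box, pallet, location
        }
        return chain;
    }
}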

Step 4 - Logistics Manager: Assign Material Picking Task
Finished PLSs are highlighted in the Pallet Loading Scheme Editor. The logistics manager then selects a specific logistics operator to get the loaded pallet back from the warehouse, i.e. assigns a material picking task. Tasks containing final assembly and preassembly materials are shown on the RFID-Gateways at the corresponding buffer gates. The logistics operator assigned a task then sets off to the warehouse for material picking.

Step 5 - Logistics Operator: Material Picking
By tapping the staff card on the RFID-Gateway at the warehouse gate, a logistics operator may view the assigned material picking tasks in the Shipping Dock Visibility Explorer. With the specific pallet position of each PLS, the logistics operator can easily find the pallet to be picked. When the pallet is checked out of the warehouse, the Warehouse Material Check-out Explorer at the same RFID-Gateway catches the pallet tag and checks whether the PLS associated with this pallet belongs to the operator's logistics tasks.

Step 6 - Warehouse Keeper: Materials Check-out Validation
When a logistics operator tries to check a pallet out of the warehouse, the materials actually loaded on the pallet are automatically retrieved, through the binding relationships created during the material preparation process, and shown in the Warehouse Material Check-out Explorer. These materials are checked against the original PLS. If they match, the logistics operator is allowed to proceed; otherwise, the warehouse keeper requests the logistics operator either to reload the materials or to modify the PLS. All the materials checked out are deducted from the warehouse inventory and updated to the ERP system through the ERP adaptor.
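The validation amounts to comparing the multiset of materials read on the pallet with the PLS. A minimal sketch, with assumed map-based types:

import java.util.*;

// Sketch of the Step-6 check: the materials retrieved through the pallet's
// bindings must match the PLS exactly, per material code and quantity.
public class CheckoutValidator {
    /** Both maps: material code -> quantity. */
    public static boolean matchesPls(Map<String, Integer> loaded,
                                     Map<String, Integer> pls) {
        return loaded.equals(pls);          // same items, same quantities
    }

    /** Discrepancies blocking check-out: + means over-loaded, - means missing. */
    public static Map<String, Integer> discrepancies(
            Map<String, Integer> loaded, Map<String, Integer> pls) {
        Set<String> all = new TreeSet<>(loaded.keySet());
        all.addAll(pls.keySet());
        Map<String, Integer> diff = new TreeMap<>();
        for (String code : all) {
            int delta = loaded.getOrDefault(code, 0) - pls.getOrDefault(code, 0);
            if (delta != 0) diff.put(code, delta);
        }
        return diff;
    }
}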

Step 7 - Logistics Operator: Preassembly Material Buffering
If the pallet picked from the warehouse contains preassembly materials, it is delivered to the preassembly buffer. When the logistics operator checks in the pallet, the RFID pallet tag is captured by the RFID-Gateway at the buffer gate. All the contained materials are automatically shown in the Preassembly Buffer Visibility Explorer, recorded by retrieving the relevant PLS, and marked as "in-buffer", indicating that they are ready for replenishment to the lines. When a preassembly line is about to finish a production order, the explorer informs the relevant logistics operators to replenish materials for the next production order.

Step 8 - Preassembly Operator: Preassembly and WIP Loading
Components made by the preassembly lines are loaded onto pallets by the preassembly operators. Since such WIP is not tagged, the load (quantity) and the affiliated production order are manually bound to the pallet tag. As the type of WIP from a preassembly line is normally fixed, pallets are normally loaded to a default capacity, so the manual binding operation is not time-consuming: the production order, default loading capacity and pallet tag are automatically bound together by the Preassembly Line Visibility Explorer, and manual intervention is required only when the actual load differs from the default value. A loaded pallet is then delivered to the final assembly buffer.

Step 9 - Logistics Operator: Final Assembly Line Replenishing
The final assembly buffer receives both materials from the warehouses and WIP components from the preassembly lines. Their check-in processes are the same (see Step 7), enabled by the Final Assembly Buffer Visibility Explorer. Buffered materials are replenished to the final assembly lines by the logistics operators, based on the real-time status of the line-side inventory shown in the explorer. When materials are checked out of the buffer, whether in circulating boxes or on pallets, their information is automatically captured by the RFID-Gateway by retrieving the tag's current binding relationships, and the quantity of each material is added to the corresponding line-side inventory.

Step 10 - Final Product Assembling, Packaging and Warehousing
All the final products made by the final assembly lines are packaged and tagged with barcodes according to the customers' requirements. Every tagging operation triggers the Line-side Inventory Traceability Service once, and the consumed materials are deducted from the current line-side inventory. Packaged products are loaded onto pallets for warehousing. Similarly to the material preparation in the warehouse, the product barcodes are bound to the pallet RFID tag by the Final Assembly Line Visibility Explorer on a portable RFID-Gateway. Information on all the loaded pallets is sent to the logistics manager's PC, and the logistics manager creates logistics tasks accordingly, by assigning logistics operators to deliver them to the warehouses. This process is very similar to Step 4.

[Figure: the representative operational scenario, spanning the Production Plan Department (production manager), the Logistics Department (logistics manager), the warehouses (material operator, warehouse keeper), the preassembly buffer (buffer manager), the final assembly buffer (logistics operator), the preassembly lines (preassembly operator) and the final assembly lines (packaging operator). The ten numbered steps run from adaptive production and material requirement planning (PC at the Production Plan Department) and pallet loading scheme creation (PC at the Logistics Department's office), through material loading and tag binding (portable RFID-Gateway at the storage area), check-out validation and delivery (RFID-Gateways at the warehouse and buffer gates), to WIP delivery, final assembly replenishment, and final product packaging and loading (RFID-Gateway at the final assembly line). The steps are divided into production vs. logistics and plan vs. execution quadrants.]

Figure 5 - A Representative Operational Scenario
3.5.3 TYPICAL TRACEABILITY SERVICES
The traceability services in this system mainly serve two purposes: tracing the information of another RFID-Gateway, and tracing the history information of an operation process. The former is required in the normal operation of a material distribution process, as with the Line-side Inventory Traceability Service used in Step 10. However, when an order change happens in a production process, another traceability service, called the Order Traceability Service, is needed.

Line-side Inventory Traceability Service
The current system implementation only installs RFID-Gateways at the ends of the assembly lines, instead of equipping every station with active smart objects (e.g. RFID readers). However, the line-side material stocks need to be monitored to enable timely replenishment, and this service fulfils that need. The product outputs captured at the end of an assembly line are used, in connection with the product's BOM structure, to calculate the line's real-time material consumption. The quantities of materials replenished to the line are captured by the RFID-Gateway at the buffer gate. By subtracting the accumulated material consumption from the accumulated material replenishment, the real-time line-side stocks can easily be traced.
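In other words, per material, line-side stock = cumulative replenishment - (products completed x BOM usage per unit). A sketch under these assumptions, with illustrative names:

import java.util.*;

// Sketch of the line-side inventory logic: consumption is derived from
// the product count read at the end of the line times the BOM usage of
// each material; replenishment is read at the buffer gate.
public class LineSideInventory {
    private final Map<String, Double> bomUsagePerUnit;  // material -> qty per product
    private final Map<String, Double> replenished = new HashMap<>();
    private double productsCompleted = 0;

    public LineSideInventory(Map<String, Double> bomUsagePerUnit) {
        this.bomUsagePerUnit = bomUsagePerUnit;
    }

    public void onReplenishment(String material, double qty) {  // buffer-gate read
        replenished.merge(material, qty, Double::sum);
    }

    public void onProductCompleted(int count) {                 // end-of-line read
        productsCompleted += count;
    }

    /** Real-time stock = cumulative replenishment - BOM-derived consumption. */
    public double stockOf(String material) {
        double consumed = productsCompleted
                * bomUsagePerUnit.getOrDefault(material, 0.0);
        return replenished.getOrDefault(material, 0.0) - consumed;
    }
}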

Order Traceability Service
Exceptional processes may be needed if released production orders are changed by the production plan department; this service generates a remedy material distribution process for such order changes. Specifically, inserted production orders may result in new material orders and logistics tasks, while cut or cancelled production orders may result in reversed logistics tasks, i.e. material return tasks. In either case, the tasks' execution process is very similar to the normal material distribution process. In case a new production order is inserted, a new material order is generated by tracing the product's BOM structure via the SAP Adaptor (i.e. the BOM RFC); the processing of this new material order then follows Steps 2 to 10 of the normal scenario. In case a released production order has its quantity reduced or is completely cancelled, on the other hand, this service traces its corresponding material order and all the related logistics tasks. Logistics tasks that have not yet been started are directly cancelled, while those in process or already processed are assigned reversed logistics tasks. For example, a pallet of materials in the preassembly buffer is assigned a material return task requiring it to be sent back to the warehouse from which it was delivered.
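The remedy logic the service applies to a cancellation can be summarized as the following sketch; the task states and fields are assumptions for illustration:

import java.util.*;

// Sketch of the Order Traceability Service's remedy logic: cancel tasks
// not yet started, and issue reversed (material-return) tasks for those
// already in process or processed.
public class OrderChangeHandler {
    enum State { NOT_STARTED, IN_PROCESS, PROCESSED }

    static class LogisticsTask {
        State state;
        String pallet, sourceWarehouse;
    }

    public static List<LogisticsTask> handleCancellation(List<LogisticsTask> tasks) {
        List<LogisticsTask> returnTasks = new ArrayList<>();
        for (LogisticsTask t : tasks) {
            if (t.state == State.NOT_STARTED) {
                continue;   // simply withdrawn; materials never left the warehouse
            }
            // Reversed task: send the pallet back to its source warehouse.
            LogisticsTask back = new LogisticsTask();
            back.state = State.NOT_STARTED;
            back.pallet = t.pallet;
            back.sourceWarehouse = t.sourceWarehouse;
            returnTasks.add(back);
        }
        return returnTasks;
    }
}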
4. CONCLUSION
This paper has presented an innovative shop-floor RFID implementation framework, called AUTOM, which has enabled the project team to realize several successful RFID applications in a systematic way. Typical shop-floor requirements have been summarized and generalized into technical and management needs for RFID technologies, and both the technical and the managerial issues concerning RFID project implementation have been discussed in depth for manufacturing companies.
A case of applying RFID to the management of material distribution on the complex assembly shop floor of a large air conditioner company was chosen to demonstrate the usage and effectiveness of AUTOM. The system has been continuously tested in the pilot workshop for a month. Several rounds of feedback collection from the users have largely improved the usability and efficiency of the system on the one hand, and helped all the operators become familiar with the system on the other. Although positive ratings have generally been reported from both management and frontline operators, the evaluation process also revealed some challenges that deserve more attention in future implementations.
5. ACKNOWLEDGMENTS
The authors are most grateful to the collaborating company for its technical and financial support. The authors would also like to thank the Hong Kong SAR ITF (GHP/042/07LP), RGC GRF HKU 712508E, the National Science Foundation of China (50805116 and 60973132), and the HKU Research Committee Small Project Funding (200907176106) for providing partial financial support.
REFERENCES
1. Chryssolouris, G., et al., Digital manufacturing:
history, perspectives, and outlook. Proceedings of the
Institution of Mechanical Engineers Part B-Journal of
Engineering Manufacture, 2009. 223(5): p. 451-462.
2. Williams, D., The strategic implications of Wal-Mart's RFID mandate. Directions Magazine, 2004. Available on the Internet at http://www.directionsmag.com/article.php?article_id=629&trv=1&PHPSESSID=a942fc54a33502601eb2cbbec3fced74.
3. Huang, G.Q., P.K. Wright, and S.T. Newman,
Wireless manufacturing: a literature review, recent
developments, and case studies. International Journal
of Computer Integrated Manufacturing, 2009. 22(7):
p. 579-594.
4. Mo, J.P.T., et al., Directional discrimination in radio
frequency identification system for materials flow
control in manufacturing and supply chain.
Proceedings of the Institution of Mechanical
Engineers Part B-Journal of Engineering
Manufacture, 2009. 223(7): p. 875-883.
5. Brintrup, A., D. Ranasinghe, and D. McFarlane, RFID
opportunity analysis for leaner manufacturing.
International Journal of Production Research, 2010.
48(9): p. 2745-2764.
6. Ren, Z., C.J. Anumba, and J. Tah, RFID-facilitated
Construction Materials Management (RFID-CMM)-A
Case Study of Water-Supply Project. Advanced
Engineering Informatics, 2010.
doi:10.1016/j.aei.2010.02.002.
7. Huang, G.Q., et al., RFID-enabled real-time Wireless
Manufacturing for adaptive assembly planning and
control. Journal of Intelligent Manufacturing, 2008.
19(6): p. 701-713.
8. De Toni, A.F. and E. Zamolo, From a traditional
replenishment system to vendor-managed inventory:
A case study from the household electrical appliances
sector. International Journal of Production
Economics, 2005. 96(1): p. 63-79.
9. Perona, M. and N. Saccani, Integration techniques in
customer-supplier relationships: An empirical
research in the Italian industry of household
appliances. International Journal of Production
Economics, 2004. 89(2): p. 189-205.
10. Clarke, R.H., et al., Four steps to making RFID work
for you. Harvard Business Review Supply Chain
Strategy Newsletter, 2006(P0602D).
11. Henseler, M., M. Rossberg, and G. Schaefer, Credential Management for Automatic Identification Solutions in Supply Chain Management. IEEE Transactions on Industrial Informatics, 2008. 4(4): p. 303-314.
12. Ngai, E.W.T., et al., RFID systems implementation: a
comprehensive framework and a case study.
International Journal of Production Research, 2010.
48(9): p. 2583-2612.
13. Huang, G.Q., Y.F. Zhang, and P.Y. Jiang, RFID-
based wireless manufacturing for real-time
management of job shop WIP inventories.
International Journal of Advanced Manufacturing
Technology, 2008. 36(7-8): p. 752-764.
14. Zhang, Y.F., et al., Agent-based Smart Gateway for
RFID-enabled Real-time Wireless Manufacturing.
International Journal of Production Research,
2010(Accepted).
15. Schmitt, P., F. Thiesse, and E. Fleisch, Adoption and Diffusion of RFID Technology in the Automotive Industry. Proceedings of the ECIS European Conference on Information Systems, St. Gallen, Switzerland, 2007.
16. Langer, N., et al., Assessing the impact of RFID on
return center logistics. Interfaces, 2007. 37(6): p. 501-
514.
17. Huang, G.Q., et al., RFID-Enabled Real-Time
Services for Automotive Part and Accessory
Manufacturing Alliances. International Journal of
Production Research, 2010. Under Review.
