
Proceedings of ICM 2007

The 6th International Conference on Management August 3-5, 2007 Wuhan, China

National Natural Science Foundation of China

Huazhong University of Science and Technology


Globalization Challenge and Management Transformation


Zhang Jinlong, Zhang Wei, Xia Xinping, Liao Jianqiao (Eds.)

(VOL.I)

Science Press
Beijing, China

ABSTRACT
This book constitutes the proceedings of the 6th International Conference on Management (ICM2007), organized by the Department of Management Science, National Natural Science Foundation of China (NSFC) together with Huazhong University of Science and Technology, and held on August 3-5, 2007, in Wuhan, China. About 300 papers are included in the proceedings. These papers, submitted from 11 countries and regions, cover three key areas of management science: basic theories and methods, business administration, and macroeconomic management. The proceedings are a valuable reference for readers who wish to understand the new trends in and requirements on management science in the foreseeable future, or to familiarize themselves with its current status and development.

Published by Science Press, 16 Donghuangchenggen North Street, Beijing 100717, P.R. China. Copyright 2007 by Science Press. ISBN 978-7-03-019372-8/C.246. All rights reserved. No part of the material protected by this copyright notice may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying, or by any information storage and retrieval system, without written permission from the copyright owners.


Preface
The International Conference on Management (ICM) is a conference series organized by the Department of Management Science, National Natural Science Foundation of China (NSFC) together with a local partner. The first five conferences in this series were successfully held in Beijing (1994), Hong Kong (1996), Shanghai (1998), Xi'an (2001), and Macao (2004). It is commonly recognized that the scope of management science is expanding and developing dramatically with the globalization of the world economy. Hence, the theme of this conference, the 6th ICM held in Wuhan, is Globalization Challenge and Management Transformation. The conference provides an international academic forum for management professionals to share and exchange new ideas, theories, methods, and results on the theory and practice of management studies, and promotes communication and innovation in management science between China and the rest of the world.

About 300 papers will be presented at this conference and are included in the proceedings. These papers, submitted from 11 countries and regions, cover three key areas of management science: basic theories and methods, business administration, and macroeconomic management. I believe that these proceedings are a valuable reference for readers who wish to understand the new trends in and requirements on management science in the foreseeable future, or to familiarize themselves with its status and development in China.

I would like to thank Science Press for the timely and elegant publication of the proceedings. All members of the program committee and organizing committee contributed to the success of the conference. In particular, several faculty members of Huazhong University of Science and Technology and the National Natural Science Foundation volunteered their time and effort in organizing the conference and editing the papers; their help is greatly appreciated. I would also like to express my gratitude to the many organizations, domestic and abroad, for their sponsorship; their names are listed at the front of these proceedings. Finally, I wish the conference success and value to all involved.

Guo Chongqing
Director General, Department of Management Sciences, NSFC August 2007


Proceedings of ICM 2007 The 6th International Conference on Management

Globalization Challenge and Management Transformation


Sponsor
Department of Management Sciences, National Natural Science Foundation of China

Organizer
Huazhong University of Science and Technology

Joint Organizers
Southwest Jiaotong University
Chinese University of Hong Kong

Conference Chairman
Prof. Guo Chongqing (Beijing, China)

Consultant Committee (in alphabetical order)


Prof. Cheng Siwei (Beijing, China)
Prof. Li Jingwen (Beijing, China)
Prof. Li Shantong (Beijing, China)
Prof. Liu Renhuai (Guangzhou, China)
Prof. Liu Yuanzhang (Beijing, China)
Prof. Lu Youmei (Wuhan, China)
Prof. Wang Yingluo (Xi'an, China)
Prof. Wang Zhongtuo (Dalian, China)
Prof. Wu Jiapei (Beijing, China)
Prof. Wu Qidi (Beijing, China)
Prof. Yu Jingyuan (Beijing, China)
Prof. Zhao Chunjun (Beijing, China)


Program Committee
Chairman
Prof. Li Peigeng (Wuhan, China)
Co-Chairmen
Prof. Huang Haijun (Beijing, China)
Prof. Wang Shouyang (Beijing, China)
Prof. Zhang Wei (Beijing, China)
Members (in alphabetical order)
Prof. Chen Guoqing (Beijing, China)
Prof. Chen Jiyong (Wuhan, China)
Prof. Chen Naiji (Macao, China)
Prof. Chen Qijie (Shanghai, China)
Prof. Chen Shou (Changsha, China)
Prof. Chen Rongqiu (Wuhan, China)
Prof. Chen Xiahong (Changsha, China)
Prof. Deng Mingran (Wuhan, China)
Prof. Gao Ziyou (Beijing, China)
Prof. Gu Siming (Taiwan, China)
Prof. Guo Daoyang (Wuhan, China)
Prof. Hong Maowei (Taiwan, China)
Prof. Huang Haijun (Beijing, China)
Prof. Lan Hailing (Guangzhou, China)
Prof. Li Hanling (Taiwan, China)
Prof. Li Rui (Hong Kong, China)
Prof. Li Tiansheng (Hong Kong, China)
Prof. Li Weian (Tianjin, China)
Prof. Li Yijun (Harbin, China)
Prof. Liang Dingpeng (Taiwan, China)
Prof. Liu Jianhua (Hong Kong, China)
Prof. Liu Xing (Chongqing, China)
Prof. Luo Zuwang (Shanghai, China)
Prof. Qi Ershi (Tianjin, China)
Prof. Shen Zhonghua (Taiwan, China)
Prof. Shi Qintai (Taiwan, China)
Prof. Song Kai (Taiwan, China)
Prof. Wang Chongming (Hangzhou, China)
Prof. Wang Fanghua (Shanghai, China)
Prof. Wei Minghai (Guangzhou, China)
Prof. Wei Yiming (Beijing, China)
Prof. Wu Shinong (Xiamen, China)
Prof. Wu Shoushan (Taiwan, China)
Prof. Xi Youmin (Xi'an, China)
Prof. Xiao Baichun (Chengdu, China)
Prof. Xu Erming (Beijing, China)
Prof. Yang Deli (Dalian, China)
Prof. You Jianxin (Shanghai, China)
Prof. Yu Li (Dalian, China)
Prof. Zhang Weiying (Beijing, China)
Prof. Zhang Xinming (Beijing, China)
Prof. Zhang Yishan (Jilin, China)
Prof. Zhao Shuming (Nanjing, China)
Prof. Zheng Zukang (Shanghai, China)
Prof. Zhu Daoli (Shanghai, China)
Prof. Xiuli Chao (North Carolina, USA)
Prof. Ming-Jer Chen (Virginia, USA)
Prof. Ming Huang (New York, USA)
Prof. Wei Huang (Ohio, USA)
Prof. Teck-Hua Ho (California, USA)
Prof. K. K. Lai (Hong Kong, China)
Prof. Lode Li (Connecticut, USA)
Prof. Shouyong Shi (Toronto, Canada)
Prof. Jing-Sheng Song (North Carolina, USA)
Prof. Zhigang Tao (Hong Kong, China)
Prof. Jiang Wang (Boston, USA)
Prof. K. K. Wei (Hong Kong, China)
Prof. Tianqin Xu (West Ontario, Canada)
Prof. David Yao (New York, USA)
Prof. Yingyu Ye (California, USA)
Prof. Shaohui Zheng (Hong Kong, China)


Organizing Committee
Chairman
Prof. Zhang Jinlong (Wuhan, China)

Co-Chairmen
Prof. Chen Xiaotian (Beijing, China)
Prof. Jia Jianmin (Chengdu, China)

Secretary General
Prof. Xia Xinping (Wuhan, China)
Prof. Liao Jianqiao (Wuhan, China)

Members (in alphabetical order)
Prof. Cao Yong (Wuhan, China)
Prof. Chen Pinglu (Wuhan, China)
Prof. Cui Nanfang (Wuhan, China)
Prof. Feng Zhiyan (Beijing, China)
Prof. Gong Pu (Wuhan, China)
Prof. Huang Dengshi (Chengdu, China)
Prof. Jing FengJie (Wuhan, China)
Prof. Li Hao (Wuhan, China)
Prof. Li Jun (Chengdu, China)
Prof. Liu Zuoyi (Beijing, China)
Prof. Long Lirong (Wuhan, China)
Prof. Lu Yaobin (Wuhan, China)
Prof. Ma Shihua (Wuhan, China)
Prof. Meng Erling (Wuhan, China)
Prof. Tian Zhilong (Wuhan, China)
Prof. Wang Zongjun (Wuhan, China)
Prof. Yang Cao (Wuhan, China)
Prof. Yang Liexun (Beijing, China)
Prof. Zhu Xuezhong (Wuhan, China)


CONTENTS
VOL.I
SECTION ONE Management Science
An Agent-Based Knowledge Transfer Model for Implementing and Operating ERP Systems  Dong Jichang, Tan Dapeng, Huo Guoqing  3
An Adaptive Particle Swarm Algorithm for Global Optimization  Guo Chonghui, Li Hong  8
A Study on Dynamic Modeling for Chinese Residents' Consumption and Investment Decision with Housing  Liu Zhixin, Huang Lingling  13
New Formulations for Second-Best Congestion Pricing Problems on a General Transportation Network with Elastic Demands: An Excess-Demand Approach  Liu Nan, Chen Daqiang  19
Fuzzy MPMP Production Planning Problem with Minimum Risk Criteria  Yuan Guoqiang, Liu Yankui  29
Optimizing Designs to Improve the Blockage Behavior of Emergency Networks  Wu Weiwei, Angelika Ning, Ning Xuanxi  37
An Approach to Solve Large Scale Multiple Traveling Salesman Problem with Balanced Assignment  Wang Dazhi, Wang Dingwei  43
Optimization Methods and Models
Supplier Selection for N-Product under Uncertainty: A Simulation-Based Optimization Method  Wu Chunxu, Dong Feifei, Fu Jianbing  46
A Feasible Case Study of Applying Critical Chain Multi-Project Management in Semiconductor Turnkey Services  His-Hsien Yu, Sheng-Kuan Chiu  51
A Fuzzy Multi-Objective Programming for Optimization of Reverse Logistics for Solid Waste  He Bo, Yang Chao, Zhang Hua  59
The FIP Problem with Uncertain Demand in Several Scenarios  Yang Jun, Liu Shuji, Xiong Jing  65
Information Sharing Strategy and Data Processing Model for SMLE Alliance  Qi Ershi, Gao Yinan, Huo Yanfang  71
Solving Capacitated Facility Location Problem Using Tabu Search  Minghe Sun, Eliane Aparecida Ducati, Vincius Amaral Armentano  76
Research on the Hybrid Ant Colony Labor Division Algorithm with Optimization and Its Application  Xiao Renbin, Zhang Qiang, Zhang Xinhui  82
Semicontinuity of the Solution Mapping of -Vector Equilibrium Problem  Kenji Kimura, Yeong-Cheng Liou, Jen-Chih Yao  90


Optimization of Distribution Network Design Based on Retailer Satisfaction Zhang Min 101

Optimizing Linear Optimization Problems under Fuzzy Relational Equations with Max-Star Composition  Yan-Kuen Wu  107

SECTION TWO Operational Management


Virtual Cellular Manufacturing Systems: Emerging Research Issues in Global Manufacturing & Supply Chain Contexts  Nallan C. Suresh  117
A Study on the Performance Characteristics of Closed-loop Asynchronous Automatic Assembly Systems  W.K. Leung, W.C. Ng, Y. Ge  122
Container Slot Allocation Model with Liner Shipping Revenue Management  Bu Xiangzhi, Chen Rongqiu, Li Li  130
Production Order Evaluation Based on Neural Network Model  Qiu Jie, Chen Zhixiang  138
Distribution the TOC Way: Review and a Case Study  Cui Nanfang, Leng Kaijun  145
The Research on Distribution of Large Scale Chains Enterprise with Time Window  Da Qingli, Zhang Heng  152
Multi-Objective Layout Optimization in Dynamic Environments: A Heuristic Approach  Dong Ming, Liu Fei, Hou Forest, Zhang Franky, Chen Feng  159
A Centralized Inventory Policy for Open-Loop Reverse Supply Chain  Gou Qinglong, Liang Liang, Xu Xiping, Xu Chuanyong  165
Research on Development of Modern Logistic Industry in Area along New Eurasian Land Bridge (China's Section) Based on Spatial Economic Relationship  Ji Shouwen, Chen Jiajuan, Xie Fang  175
A Simulation Based Research on Loading-Unloading Strategy in Railway Container Terminal  Li Dong, Wang Dingwei  180
A Fuzzy Evaluation Method of Integrated Logistics Service Networks  Zhao Zhiyan, Li Bo  186
Formulation and Complexity Analysis for 3PL Transportation Problems  Li Kunpeng  191
Study of the Partial Task Management under Constrained Resources  Li Mingyu, Bi Yiming, Li Bin  195
A Study on Guaranteed Delivery Time Based Inventory Model of Component Commonality in Assemble-to-Order Systems  Lin Yong, Chen Kai  200
Outsourcing Decision-Making of Equipment Maintenance in Process Industries  Liu Liping, Ji Jianhua, Fan Tijun, Hu Jiling  212
A Location-Allocation Problem Applying in Scrap Steel Recycling Network Design  Liu Yang, Tang Jiafu  215
A Dynamic Inventory/Allocation Problem Based on Internet Auctions  Liu Shuren, Hu Qiying  221

Reliability of Transportation Network Service Capacity Based on Effective Agile Theory  Song Rui, He Shiwei  226
Information Increment and Information Value in Supply Chains  Wang Jing, Wang Xun, Li Yuxiang  232
Information Sharing in a Supply Chain with Learning Effect  Wu Jianghua, Xin Zhai  239
A Study of the Production Planning and Control System for Remanufacturing  Xie Jiaping, Ren Yi, Zhao Zhong  245
A Production Inventory Model for Intangible Deteriorated Items with Demand-Dependent Deteriorating Rate  Xu Xianhao, Tang Ziyi  254
The Impact of Positive Externality on Returns Policy  Xu Chuanyong, Liang Liang, Gou Qinglong, Zha Yong, Zhou Chuiri  260
Study on a Model of Lean Supplier Management Based on the Lean Production  Xu Zhiduan, Guo Yixun  269
Study on Lead Time Inventory Models Based on Customer Waiting Time  Yan Lei, Chen Rongqiu, Li Li  276
The Method Based on the PSO Algorithm for the Order Planning of the Steel Plant  Zhang Tao, Cheng Haigang, Chu Xiaoxuan, Xie Meiping  281
Positioning Model of Purchasing Based on Kraljic's Purchasing Portfolio Matrix and Factor Analysis  Zhao Zhenfeng, Guo Danxia, Ding Liuming  289
Research on Coordinated Replenishments by Alternative Supply Sources in a Logistics System  Zheng Aihua, Zhao Qiuhong  296
Solving the Joint Replenishment Problem with Warehouse-Space Restrictions Using a Genetic Algorithm  Ming-Jong Yao  302
A GA-based Alternative Approach on the Capacitated Warehouse Allocation of Customers with Uncertain Demands  Zhou Gengui, Ye Feng, Cao Jian, Cao Zhengyu  308
Evolution of Manufacturing Systems: from Product Competitive Advantage towards Collaborative Value Creation  Yongjiang Shi  316

SECTION THREE Decision Theory and Application


Efficiency Improvement with Minimum Amelioration  Zha Yong, Liang Liang  325
A Method of Fuzzy Multi-Attribute Group Decision-Making Problem with Linguistic Variable  Deng Wei, Wu Qizong  333
Research on E-Business Investment Decision Making Based on Option Games  Han Guowen  340
Resolve the Multi-Factor Tradeoff Problem in Bid Evaluation by Using Computer Regression Analysis  He Liang, Zhao Ping, Wu Ming, Kang Hui  345
Monotonic Vector Space Model and Its Partition Algorithms  Hu Jianwen, Hu Xiaofeng, Zu Shuguang, Si Guangya  353
Fitting Analysis of the Airport Passenger Throughput Based on Arima Model  Jia Chuanliang, Sun Ying, Wang Lubin, Ma Yanlin  359

Research on the Knowledge Reorganization Methods of Emergency Plans  Jia Xiaona, Rong Lili  365
Relative Closeness Method for MAGDM with Heterogeneous Information  Li Dengfeng  372
A Maxmin Model for Allocating the Fixed Cost Based upon DEA  Li Yongjun, Liang Liang  378
Empirical Analysis on the International Competitiveness of China Telecommunication Operators  Li Yuanhui, Ding Huiping  384
An Improved Approach to Conjoint Analysis for the Complex Decision-Making  Liu Chengming, Li Chunhao  392
Applying Fuzzy Set and Rough Set to Evaluate Risk Level in IT Project Management  Lu Xinyuan, Zhang Jinlong  399
Research on the Dynamic Choosing Question about Network Product Compatibility and Price  Sheng Yongxiang  405
Military Conflict Decision Modeling Based on Fuzzy Hypergame  Song Yexin, Dai Mingqiang, Cui Yan  410
Research on Inferring ELECTRE-III's Parameters and a Case on Naval Gun Weapon System Integration  Sun Shiyan, Wei Hua, Wang Hangyu, Li Lu  416
A Co-Marginalistic Contribution Value for Set Games on Matroids  Sun Hao, Xu Genjiu, He Hua  425
Dynamic Features Extraction in Soybean Futures Market of China  Meng Jie, Wang Huiwen  431
Extension of the VIKOR for Decision-Making Problems under Fuzzy Environment  Wang Yongchun  437
Extended Models to Non-uniqueness of Cross Efficiency in Cross Efficiency Evaluation  Wu Jie, Liang Liang  443
Evaluating the Comparative Performance of the Regional S&T Competitiveness of China: A DEA Application and Problem  Wu Qiang, Wang Xiaoye  448
Auctioning Total Permitted Pollution Discharge Capacity under a Uniform Price  Zhao Yong, Chen Yang, Wang Qing  454
Application of the Fuzzy Simulation in the Evaluation Investment of Advanced Manufacturing Technology  Zhao Zhenwu, Tang Wansheng  459
Survey on Rough Set Theory Based on Connection Degree  Zhou Xianzhong, Li Huaxiong, Huang Bing, Yang Pei  466
A Study on the Safety Situation and Countermeasures to Chinese Natural Resources  Zhao Guohao  473

SECTION FOUR Information Management and E-Commerce


A Comparison of Domestic and Overseas Current Situation of Web-Based Survey Research  Cheng Du, Shao Peij, Fang Jiaming  481
Factors Influencing Consumers Repeated Online Shopping in China: An Empirical Study  Chang Yaping, Zhu Donghong  487
Risks versus Intendance Policies to Information Systems Engineering in China  Fang Deying, Sun Guorui  493
Research on Core Competence of Enterprises in Electronic Business Management Mode  Hao Chunlu  499
The Xia-Jin Bridge's Impact on Local Tourism of both Sides of the Taiwan Strait from the Customer Value Perspective  Chien Yung-Tsai, Liao Sen-Kuei, Duan WanChun  505
Studying on Customer Demands Information Processing  Lei Yi, Tang Bingyong  512
The Evaluation of Knowledge Management Performance Based on AHP  Liu Peide  517
A Study of Pricing Patterns for Keyword Advertising Auction  Liu Shulin, Rong Wenjin  523
The Model Research of the Knowledge Transfer and Sharing in the ERP Implement  Yunfeng Shi, Lingling Zhang, Xiuyu Zheng  532
A Framework of Identifying IT Application Capabilities Based on IS Lifecycle  Wang Nianxin, Zhong Weijun, Mei Shue, Zhong Yulin  539
Study on Product Lifecycle Management Coordination Management System Model of the Shipbuilding Enterprise  Wang Zhiying, Ge Shilun  546
Design of Link Structure of Electronic Commerce Website  Wu Shaofei, Wei Siying, Zhang Jinlong  552
Mitigating Risks in Software Projects Through Phased Development Process: A Real Options Analysis  Chen Tao, Zhang Jinlong  556
Building on Management Knowledge Platform of Outsourcing  Wu F., Li P.P., Wang Q.  562
EAI (Enterprise Application Integration) Conceptual Architecture Composition in Telecom Industry  Yang Hongbin  569
Online Training Industry Supply Chain System Planning Based on CAS Theory  Zhao Jinshi, Zhao Ying, Zheng Xiaotao  574
A Tool for Risk Mitigation in Public Sector IS/IT Projects: An Evidence-Based Information Systems Project Risk Checklist  Lihong Zhou, Ana Cristina Vasconcelos, Miguel Baptista Nunes  580
Assessing Risks through Information Systems Failure Probability Based on the Life Cycle Theory  Liu Shan, Zhang Jinlong, Chen Tao, Cong Guodong  590
The Research of Project Performance-Oriented of Software Process Improvement Model and the Decision Support System  Yu Benhai, Zhang Jinlong, Chen Tao, Cong Guodong, Zhang Dongfeng, Hu Xianbiao  599
Research of the Collaborative E-Business System Constructed  Yang Limao, Tian Liang  605
Index of First Authors  613


SECTION ONE
MANAGEMENT SCIENCE

An Agent-Based Knowledge Transfer Model for Implementing and Operating ERP Systems
Dong Jichang, Tan Dapeng, Huo Guoqing
GS School of Management of the Chinese Academy of Sciences, Beijing, P. R. China, 100049

Abstract In implementing and operating an Enterprise Resource Planning (ERP) system, knowledge transfer is an important means of building effective Knowledge Management (KM) systems and developing effective transfer strategies that enhance competitive advantage. In this paper, an agent-based knowledge transfer framework is presented for the ERP life cycle, and the relationship between knowledge transfer in the ERP life cycle and ERP effects is discussed. We investigate the use of knowledge transfer to select, implement and use ERP systems, and propose corresponding knowledge agents.
Key words ERP, Knowledge transfer, Agent, ERP effects

1. Introduction
In the last few years, thousands of companies around the world have implemented enterprise resource planning (ERP) systems. Commercially available ERP software packages promise seamless integration of all information flows in the company, such as financial and accounting information, human resource information, supply chain information, and customer information. An ERP system provides two major benefits that do not exist in non-integrated departmental systems: (1) a unified enterprise view of the business that encompasses all functions and departments, and (2) an enterprise database where all business transactions are entered, recorded, processed, monitored and reported. The unified view increases the requirement for, and the extent of, interdepartmental cooperation and coordination. An ERP system enables companies to achieve their objectives of increased communication and responsiveness to all stakeholders (Dillon, 1999)[3].

ERP implementations are accompanied by large data warehouses that are designed to facilitate data access and improve reporting capabilities. Because of the size and cost of the data warehouse, knowledge management efforts may be able to exploit the underlying information and database structure. On the other hand, it has been estimated that virtually all of the Fortune 500 firms have either implemented or are implementing an ERP system, and some small to medium enterprises have adopted ERP systems as well, so the knowledge transfer needs can vary substantially across the different clients of ERP firms.

More often than not, implementation of ERP systems requires consulting firms. During the late 1990s, it was estimated that roughly one-third to one-half of the consulting done by the major consulting firms had to do with selecting, implementing, or using ERP systems (Public Accounting Report, 1998)[10]. Further, additional consulting is often done after the ERP system has been installed, e.g., improving configuration and security. As a result, there is large potential for knowledge transfer through the life cycle, both for consultants and for the companies implementing the software. When the ERP software is developed, there is also knowledge transfer between the consultants and the programmers, who may come from the ERP companies or from specialized software companies (Daniel, 2002)[2].

Furthermore, all too often the knowledge assets created during ERP projects are lost, ignored, or not leveraged to their fullest potential in the implementation (Linda, 2001)[5]. These valuable assets include the relationships and rules embedded in the software, the project design criteria and decision history, testing and training scenarios and scripts, and user experiences. The most successful implementations avoid losing the knowledge created in the selection, implementation and early deployment stages through careful, early planning of KM systems. Organizations can improve their institutional learning curve by establishing the processes and systems needed to capture, manage, access, and replenish knowledge across the entire ERP life cycle.

From the analysis above, we can see that knowledge transfer is emerging as an important tool to support ERP systems. The purpose of this paper is to discuss knowledge transfer across the entire ERP life cycle: selecting, implementing and using the ERP system. Section 2 discusses the relationship between knowledge transfer in the ERP life cycle and ERP effects. Section 3 proposes a multi-agent framework for knowledge transfer in ERP systems. Section 4 presents the knowledge agents across the entire ERP life cycle that help enterprises select, implement and use an ERP system. Finally, Section 5 briefly summarizes the paper.

Supported by SSFC (No. 05CTQ002), China.

2. The Effects of Knowledge Transfer on the ERP Life Cycle


The ERP life cycle is generally characterized by three stages: selecting, implementing and using. These stages differ in their critical needs and goals, but they share a common objective: to improve the competitive power and value of the enterprise. According to Linda (2001)[5], a great deal of codified knowledge is contained in an ERP system, both in the software structures provided by the vendor and in the process knowledge and business rules developed by the enterprise. A corresponding set of auxiliary knowledge, however, is captured only in the form of project documents (formal and informal), training documents, data models, additional business rules and formulas, and in the heads of the project team. We cannot easily access or derive this valuable knowledge from the software or the operational system. In general, the relationships between knowledge transfer in ERP implementation and ERP effects can be summarized as follows:

(1) The knowledge level of the consultants (or experts) who assist software development is positively related to the ERP implementation effect. A great deal of knowledge is embedded in the ERP software. The process of software development is precisely the process by which consultants or experts transfer their management methods and other knowledge into the software. This is the first and most important transfer affecting the implementation effect. Scott (2000) shows that the knowledge contents in ERP software are positively related to their contribution to the functioning of an enterprise[11]. The effectiveness of SAP, PeopleSoft and other ERP firms benefits from the assistance of consulting companies such as PricewaterhouseCoopers.

(2) The gap between the current knowledge of an enterprise and the knowledge needed for ERP implementation is positively related to the risk of ERP implementation. Key leaders in an enterprise will not support the implementation of ERP if they do not understand the value of the ERP software. If education and training are not done well, the gap between the enterprise's current knowledge and the knowledge needed for ERP implementation widens. A culture gap can also be considered a knowledge gap, although this gap is difficult to change. All these knowledge gaps can strongly affect the ERP implementation effect.

(3) When the customization level of the ERP software is high, the more of its special knowledge an enterprise transfers into the ERP software, the better the implementation effect; when the customization level is low, transferring the enterprise's best-practice knowledge into the ERP software is what improves the implementation effect. In general, ERP firms and consulting companies often advise users not to make big modifications to the ERP software, because its standard flows come from the best practices of the relevant industry, and bigger modifications would hurt the implementation effect (Bancroft, 1998; McCaskey, 1999)[1][6]; the enterprise should instead adapt to the ERP software by modifying its own flows. Some scholars argue, however, that ERP firms cannot provide every flow for each industry, because each enterprise has its own special flows and rules, so some redevelopment of the software is necessary (Swan, 1994)[12]. This problem lies in the individual character of ERP software by nature, and such an enterprise should transfer more of its special knowledge into the ERP software.

3. An Agent-based Knowledge Transfer Framework


The level of knowledge in the ERP software depends on management, and on the process by which knowledge from the experts is transferred to a software programmer and finally embedded in the software. For an enterprise, implementing ERP is a process in which internal and external knowledge is transferred into the software and becomes a source of power for the enterprise. These knowledge transfers decide the effect of the implementation of the ERP system. Fig. 1 illustrates the knowledge transfer process in the ERP life cycle.

To gain useful knowledge, a knowledge transfer framework (Nonaka, 1991)[8] should be provided that identifies which information we should retain, who will eventually use that information, and how we will capture that information at each stage of the ERP life cycle. In addition, the knowledge transfer framework provides a model of the knowledge environment over the life cycle of a system and illustrates the relationships and dependencies of people, business processes and events, and knowledge domains. With this framework, we can anticipate and plan for the knowledge requirements of subsequent phases of the ERP project. We can also develop mechanisms for transferring or transforming the knowledge base from one phase to another.

Selecting, implementing and using ERP systems raises a number of issues that can benefit from knowledge transfer. We provide an agent-based framework for knowledge transfer in the ERP system life cycle (shown in Fig. 2). The different knowledge contents in each stage of the ERP life cycle are handled by the knowledge agents in the framework: the Selecting Agent, the Implementing Agent and the Using Agent. Using this framework, both explicit knowledge and tacit knowledge can be transferred among the actors in the process of implementing the ERP system. The following section introduces the knowledge agents in the framework in detail.

Fig. 1 A general knowledge transfer process in ERP life cycle (figure elements: industrial expertise consultant, software programmer, ERP software, ERP company, ERP implementation consultant, ERP implementation enterprise; first knowledge transfer, second knowledge transfer; the efficiency of ERP implementation)

Fig. 2 An agent-based knowledge transfer framework (figure elements: Selecting Agent, Implementing Agent, Using Agent; web browser; user interface)

4. Knowledge Agents
4.1 Selecting Agent
Knowledge transfer can facilitate the choice of an ERP system. In order to facilitate access to information and eliminate information asymmetries, a Selecting Agent designed to capture and make available information regarding ERP systems in a timely manner is a helpful device. Information about each of the ERP systems under consideration can be embedded in this agent. According to the current literature on the selection process, the following knowledge contents can be transferred into this agent from entities external or internal to the enterprise (Langenwalter, 2000; Minahan, 1998; Oden, 1993)[4][7][9]:
(1) Vision. Clearly define the corporate mission, objectives and strategy.
(2) Feature. A proposal that contains the feature and function list should be created. Typically, this proposal describes how the company wants each department or function to operate, and consists of instructions to the supplier, the terms and conditions, supplier response forms, and so on.
(3) Software Candidate. Select only ERP providers that are right for the business of the enterprise. Price, supplier support, ease of implementation, closeness of fit to the company's business, flexibility when the company's business changes, technological risk, and value (total implemented cost versus total value to the company) are the important criteria for selecting an ERP system.
Using this knowledge agent, the enterprise makes a final decision on the implementation. In extreme cases, if necessary, it may reverse the decision to implement ERP, change vendors, or renegotiate the contract. During this stage, the knowledge actors include a team of respected individuals who are familiar with the various software packages, company processes and the industry, business unit managers, the manager of the enterprise, consultants, and so on.

4.2 Implementing Agent
During the implementation stage, combining business and industry-related knowledge with the knowledge structures (processes, data architectures, and so forth) of the ERP system is an important goal. The project managers of the ERP implementation require current, accurate information and support tools in order to accomplish their tasks. Human expert and system knowledge, project management and change request documentation and audit trails, and other information are the main knowledge in this stage. To accomplish these goals, the Implementing Agent is needed. The following knowledge contents can be transferred into this agent:
(1) ERP Model. Capture the organization's business processes and rules in the context of the ERP software and provide models for the final system configuration.
(2) Toolkit. Provide project management, document management, collaboration and messaging, expertise and team-skills management, and a lessons-learned process repository.
(3) IT. IT knowledge components: software code, models, rules, and so on.
Moreover, the Implementing Agent should provide integrated access to external data sources and integrate the information received in the context of the ERP implementation project instance (context, priority, and others); a real-time communication and collaboration environment that includes shared documents, media, and discussion spaces with appropriate security and access controls; and a single point of access to a wide range of relevant knowledge bases (external and internal), regardless of data format or source, including documents, databases, management reports, and performance metrics.
4.3 Using Agent
When the enterprise is living with the ERP system, a Using Agent is needed to ensure the transfer of knowledge to new team members and system users, to support use of the system, and to continue to identify and communicate key issues to all the communities. Developing and delivering training and maintaining training documentation are also important to support the transfer of ERP-system knowledge to the business community begun in the implementing stage. In this stage, we also need to provide complete and accurate documentation for testing, diagnostics, audit trails, and the resolution of potential problems, issues, and errors. Moreover, the organization should anticipate the future requirements of the IT organization as well as those of the business community. To properly evaluate the impact of future upgrades, functions and modules, or the integration of new applications (such as data warehouses, analytic support systems, supply chain management systems, sales force automation, human resource systems, call centers, and customer value management systems), this knowledge must be preserved in a usable and accessible form. Further, in order to create a dynamic and positive long-term learning environment, knowledge needs to be usable and accessible across the value chain for both ERP and non-ERP communities. In addition, improving and sustaining the efficiency of the ERP system by decreasing support and maintenance costs and providing integration with non-ERP sources should be an ongoing goal. All of the above can be provided by the Using Agent. The following major functional knowledge should be included in this agent:
(1) Training and learning documents.
(2) Analytic reporting.
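To make the framework concrete, the following minimal sketch shows how the three knowledge agents could be organized around stage-specific knowledge stores. The paper specifies the framework only conceptually, so all class and method names here are hypothetical illustrations, not the authors' implementation.

from dataclasses import dataclass, field

@dataclass
class KnowledgeAgent:
    """Holds and transfers the knowledge contents of one ERP life-cycle stage."""
    stage: str                                      # "selecting", "implementing" or "using"
    contents: dict = field(default_factory=dict)    # e.g. {"Vision": ..., "Feature": ...}

    def capture(self, topic, knowledge):
        # Transfer knowledge in from internal or external entities of the enterprise.
        self.contents[topic] = knowledge

    def transfer_to(self, other, topic):
        # Pass stage knowledge on to the agent of the next life-cycle stage.
        other.capture(topic, self.contents[topic])

selecting = KnowledgeAgent("selecting")
implementing = KnowledgeAgent("implementing")
selecting.capture("Vision", "corporate mission, objectives and strategy")
selecting.transfer_to(implementing, "Vision")       # knowledge flows across stages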

5. Conclusions
This paper has focused on knowledge transfer for selecting, implementing, and using ERP systems. In particular, an agent-based framework for knowledge transfer in the ERP life cycle is presented, and the relationship between knowledge management in an ERP implementation and ERP success is discussed. Although this paper has addressed some important issues, it is just a beginning; there are a number of other issues in integrating ERP systems and knowledge transfer. Organizational learning is becoming increasingly coupled with knowledge management. As a result, it is important to analyze how ERP systems can facilitate organizational learning and how organizational learning can be built into ERP systems. Further research could focus on devices designed to facilitate knowledge management within or between organizations. For example, a number of virtual communities have been developed to solicit and generate knowledge about specific ERP packages, such as SAP or Oracle.
References
[1] Bancroft N H, Seip H, Sprengle A. Implementing SAP R/3. 2nd Edition. Greenwich: Manning Publications, 1998
[2] Daniel E O. Knowledge management across the enterprise resource planning systems life cycle. International Journal of Accounting Information Systems, 2002, 3: 99-110
[3] Dillon C. Stretching toward enterprise flexibility with ERP. APICS-The Performance Advantage, October 1999, 38-43
[4] Langenwalter G. Enterprise Resources Planning and Beyond: Integrating Your Entire Organization. Boca Raton: St. Lucie Press, 2000
[5] Linda E C. Business impact: a lack of ERP knowledge and commitment undermines the efficacy and progress of an ERP implementation and threatens the realization of intended business benefits. Intelligent Enterprise Magazine, A Stitch in Time 7, 2001
[6] McCaskey D, Okrent M. Catching the ERP second wave. APICS-The Performance Advantage, 1999, 34-38
[7] Minahan T. Enterprise resource planning. Purchasing, 1998, 16: 112-117
[8] Nonaka I. The knowledge-creating company. Harvard Business Review, 1991, 69: 96-104
[9] Oden H, Langenwalter G, Lucier R. Handbook of Material and Capacity Requirements Planning. New York: McGraw-Hill, 1993
[10] Public Accounting Report. Big Six Dominate Systems Integration Market. New York: Aspen Publishers, 1998
[11] Scott J E, Kaindl L. Enhancing functionality in an enterprise software package. Information and Management, 2000, 37: 111-122
[12] Swan J, Newell S, Robertson M. The illusion of best practice in information systems for operations management. European Journal of Information Systems, 1994, 8: 284-293

An Adaptive Particle Swarm Algorithm for Global Optimization*


Guo Chonghui1, Li Hong2
1 Institute of Systems Engineering, Dalian University of Technology, P.R.China, 116024
2 Department of Mathematics, Dalian University of Technology, P.R.China, 116024

Abstract Particle swarm optimization (PSO) is a relatively new category of meta-heuristic global optimization algorithms. It has attracted wide attention because of its feasibility and effectiveness. In this paper, an adaptive particle swarm optimization algorithm is proposed, which introduces two adaptive acceleration factors to improve the convergence speed and global search capability of the PSO algorithm. A novel weight function is introduced, and some particles are updated in a new way when the proposed algorithm is trapped in a local optimum. The proposed algorithm is shown to enhance the convergence speed and global search capability on different benchmark optimization functions.
Key words Particle swarm algorithm, Meta-heuristic, Global optimization

1. Introduction
The particle swarm optimization (PSO) algorithm, originally introduced in terms of social and cognitive behavior by Kennedy and Eberhart in 1995 [1, 2], has proven to be a powerful competitor to other meta-heuristic algorithms for global optimization problems. The PSO models the exploration of a problem space by a population of agents or particles; the success histories of the agents influence both their own search patterns and those of their peers. The search is focused toward promising regions by biasing each particle's velocity vector toward both the particle's own remembered best position and the communicated best-ever swarm location. The relative weights of these two positions are scaled by two factors, aptly called the cognitive and social scaling parameters [3]. These two components are the main governing parameters of swarm behavior (and algorithm efficiency), and have previously been the topic of extensive studies [4, 5].

A newcomer among optimization algorithms, the derivative-free PSO has recently received a lot of attention, with some conferences devoted solely to this topic. The reasons for this interest are numerous, but include the following. The algorithm can easily be parallelized on massively parallel processing machines, since the individual searches of the simulated particles are independent of each other, and communication between particles is only required once all particles have evolved to the same pseudo-time state. Furthermore, the PSO is simpler, both in formulation and in computer implementation, than the genetic algorithm (GA). In addition, the PSO seems to outperform the GA on a number of difficult problem classes, notably bound-constrained global optimization problems [6].

Notwithstanding its recent popularity, the PSO has a number of drawbacks, one of which is the presence of problem-dependent parameters. A further drawback of the original algorithm proposed by Kennedy and Eberhart [1, 2] is that, although the algorithm is known to converge quickly to the approximate region of the global minimum, it does not maintain this efficiency when entering the stage where a refined local search is required to pinpoint the minimum exactly. This has led to a number of variations on the original PSO being proposed to overcome this shortcoming. Some of the most notable of these formulations are the introduction of an inertia term by Shi and Eberhart [3], and more recently the so-called constriction factor by Clerc [7] in his swarm-and-queen approach, which is named CPSO in this paper. Constriction seems superior to the introduction of inertia [8]. In the inertia approach, the inertia term is either kept constant or decreased linearly as the search progresses, with the linear decrease in inertia more efficient than a constant inertia term.

In this paper, an adaptive particle swarm optimization (APSO) algorithm is proposed, which introduces two adaptive acceleration factors and a new weight function to improve the convergence speed and global search capability of the PSO algorithm. The paper is organized as follows. In Section 2, we present the global optimization problem and introduce the standard PSO algorithm. In Section 3, we first introduce the CPSO algorithm, and then present the new adaptive particle swarm optimization (APSO) algorithm. Experimental results are reported in Section 4. Finally, some concluding remarks are given in Section 5.

*This work was supported by the Natural Science Foundation of China under grant No. 10571018.

2. Standard PSO Algorithm


Assume that we need to solve an unconstrained (or bound-constrained) n-dimensional optimization problem by minimizing an objective or fitness function f(x) defined on a set D. From a mathematical point of view, the global optimization problem can be expressed as

$$\min f(x), \quad x = (x_1, x_2, \ldots, x_n) \in D \subseteq \mathbb{R}^n \qquad (1)$$

If the objective function and/or the feasible domain D are non-convex, then there may be many local minima which are not globally optimal. The problem of determining an accurate estimate of the global optimum is mathematically ill-posed, in the sense that very similar objective functions may have global optima very distant from each other. Nevertheless, the practical need to find a relatively low local minimum has led to considerable research over the last decades on algorithms that attempt to find such a low minimum. Many global optimization techniques have emerged as attractive alternative tools. Particle swarm optimization is one such stochastic global optimization tool, and it has been proven to provide successful solutions for such difficult optimization problems.

The PSO algorithm simulates social behavior among particles or individuals flying through a multidimensional search space. Each particle represents a single potential solution. The particles evaluate their positions relative to the fitness at each iteration; companion particles share memories of their best positions, and then use these memories to adjust their own velocities and positions. Let $i$ denote a particle index in a swarm of $m$ particles. Each particle flies through an $n$-dimensional search space. $X_i = (x_{i1}, x_{i2}, \ldots, x_{in})$ and $V_i = (v_{i1}, v_{i2}, \ldots, v_{in})$ represent the current position and velocity of particle $i$, respectively. They are dynamically adjusted according to the particle's own previous best position $P_i = (p_{i1}, p_{i2}, \ldots, p_{in})$ and the best position of the entire swarm $P_g = (p_{g1}, p_{g2}, \ldots, p_{gn})$. The particles interact and move according to the following equations:

$$v_{ij}(t+1) = w(t)\,v_{ij}(t) + c_1 r_{1j}\,(p_{ij}(t) - x_{ij}(t)) + c_2 r_{2j}\,(p_{gj}(t) - x_{ij}(t)) \qquad (2)$$

$$x_{ij}(t+1) = x_{ij}(t) + v_{ij}(t+1), \quad i = 1, 2, \ldots, m; \; j = 1, 2, \ldots, n \qquad (3)$$

where $r_{1j}$, $r_{2j}$ are random numbers between zero and one, $c_1$ is called the cognitive learning rate, and $c_2$ is called the social learning rate. The portion of the velocity adjustment influenced by the particle's previous best position is considered the cognitive component, and the portion influenced by the best position in the swarm is considered the social component. $w(t)$ is the weight function that dynamically adjusts the velocity as the swarm evolves. In order to avoid too rapid movement of particles in the search space, $v_{ij}$ is usually constrained to a range, i.e. $v_{ij} \in [-v_{\max}, v_{\max}]$.
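To make equations (2) and (3) concrete, the following is a minimal sketch of the standard PSO in Python (NumPy). The function name, the velocity-clamp rule and all parameter defaults are illustrative assumptions rather than part of the original formulation.

import numpy as np

def standard_pso(f, dim, m=30, iters=1000, w_start=0.9, w_end=0.4,
                 c1=2.0, c2=2.0, xmin=-100.0, xmax=100.0, seed=0):
    # Minimize f over [xmin, xmax]^dim with the update rules of Eqs. (2)-(3).
    rng = np.random.default_rng(seed)
    vmax = 0.2 * (xmax - xmin)                          # clamp v_ij to [-vmax, vmax]
    x = rng.uniform(xmin, xmax, (m, dim))               # positions X_i
    v = rng.uniform(-vmax, vmax, (m, dim))              # velocities V_i
    p, pf = x.copy(), np.apply_along_axis(f, 1, x)      # personal bests P_i and their fitness
    g = p[pf.argmin()].copy()                           # swarm best P_g
    for t in range(iters):
        w = w_start - (w_start - w_end) * t / iters     # linearly decreasing weight w(t)
        r1, r2 = rng.random((m, dim)), rng.random((m, dim))
        v = w * v + c1 * r1 * (p - x) + c2 * r2 * (g - x)   # Eq. (2)
        v = np.clip(v, -vmax, vmax)
        x = np.clip(x + v, xmin, xmax)                  # Eq. (3)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pf                                # update personal and global bests
        p[better], pf[better] = x[better], fx[better]
        g = p[pf.argmin()].copy()
    return g, pf.min()

best_x, best_f = standard_pso(lambda x: np.sum(x**2), dim=30)   # Sphere function test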

3. CPSO and APSO Algorithms


The cognitive learning rate and the social learning rate determine the relative influence of the cognitive and social components. Clerc introduced a constriction factor to the standard PSO (CPSO) [7]. The evolving equations of CPSO can be described as follows:

$$v_{ij}(t+1) = K\left(v_{ij}(t) + c_1 r_{1j}\,(p_{ij}(t) - x_{ij}(t)) + c_2 r_{2j}\,(p_{gj}(t) - x_{ij}(t))\right) \qquad (4)$$

$$x_{ij}(t+1) = x_{ij}(t) + v_{ij}(t+1), \quad i = 1, 2, \ldots, m; \; j = 1, 2, \ldots, n \qquad (5)$$

where $K = \dfrac{2}{\left|\,2 - C - \sqrt{C^2 - 4C}\,\right|}$ is the constriction factor, with $C = c_1 + c_2$, $C > 4$.
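As a quick numerical check of the constriction formula, the sketch below computes K for the setting c1 = 2.8, c2 = 1.3 used later in Section 4; the parameter values come from the paper, while the code itself is only illustrative.

import math

def constriction(c1, c2):
    # K = 2 / |2 - C - sqrt(C^2 - 4C)| with C = c1 + c2 > 4, as in Eq. (4).
    C = c1 + c2
    assert C > 4, "constriction requires c1 + c2 > 4"
    return 2.0 / abs(2.0 - C - math.sqrt(C * C - 4.0 * C))

print(constriction(2.8, 1.3))   # approximately 0.7298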

Generally, in the standard PSO algorithm, the cognitive learning rate and the social learning rate are both set to the same constant to give each component equal weight. However, each particle's search ability varies as the swarm evolves; furthermore, even at the same generation, different particles in the swarm have different search abilities. A fixed constant setting of $c_1$ and $c_2$ greatly constrains the adjustment of the velocity and direction of particles. In this paper, two adaptive acceleration factors are introduced for the cognitive and social components, respectively:
$$A_{i1} = \ln\!\left(\frac{f(P_i(t)) - f(X_i(t))}{PF} + e - 1\right) \qquad (6)$$

$$A_{i2} = \ln\!\left(\frac{f(P_g(t)) - f(X_i(t))}{GF} + e - 1\right) \qquad (7)$$

where $PF = \dfrac{1}{m}\sum_{i=1}^{m}\bigl(f(P_i(t)) - f(X_i(t))\bigr)$, $GF = \dfrac{1}{m}\sum_{i=1}^{m}\bigl(f(P_g(t)) - f(X_i(t))\bigr)$, $f(x)$ is the objective fitness function, and $e$ is the base of the natural logarithm, i.e. $\ln e = 1$. $PF$ and $GF$ are the criteria for whether a particle needs to be accelerated or not. If $f(P_i(t)) - f(X_i(t)) > PF$ and $f(P_g(t)) - f(X_i(t)) > GF$, then $A_{i1} > 1$, $A_{i2} > 1$, and the particle is accelerated; if $f(P_i(t)) - f(X_i(t)) < PF$ and $f(P_g(t)) - f(X_i(t)) < GF$, then $A_{i1} < 1$, $A_{i2} < 1$, and the particle is decelerated. As shown in the previous section, the balance between the cognitive component and the social component can influence the performance of the algorithm. In this paper, the cognitive learning rate and social learning rate are defined as follows:

$$w_1 = c_1^2/(c_1 + c_2) \qquad (8)$$

$$w_2 = c_2^2/(c_1 + c_2) \qquad (9)$$

where $c_1$ and $c_2$ are the same as in the standard PSO. The weight function $w(t)$ plays the role of balancing the global and local searches. In general, the PSO algorithm uses a linearly decreasing weight function, which lacks sustained search ability and tends to get trapped in a local optimal solution. Therefore, in this paper we define the weight function as follows:

$$w(t) \sim U(w_0, 2w_0) \qquad (10)$$

where $w_0$ is a constant parameter with suggested range [0.3, 0.6], and $U$ represents a uniform probability distribution. In order to improve the particles' global search ability and avoid getting stuck in a local optimal solution, if the algorithm does not find a better solution for 10 consecutive generations, part of the particles are updated as follows: randomly select 30% of the particles in the current population and randomly change their positions within a range of 40% of their current positions; in other words, set each particle's current position $x_{ij}$ to a random value within $[0.8 x_{ij}, 1.2 x_{ij}]$ for each dimension.


Finally, considering all of the above, the evolving equations of the particles can be described as follows:

$$v_{ij}(t+1) = w(t)\,v_{ij}(t) + w_1 r_{1j} A_{i1}\,(p_{ij}(t) - x_{ij}(t)) + w_2 r_{2j} A_{i2}\,(p_{gj}(t) - x_{ij}(t)) \qquad (11)$$

$$x_{ij}(t+1) = x_{ij}(t) + v_{ij}(t+1), \quad i = 1, 2, \ldots, m; \; j = 1, 2, \ldots, n \qquad (12)$$

where $w(t)$ is as shown in (10), $w_1$ and $w_2$ are as shown in (8) and (9), and $A_{i1}$ and $A_{i2}$ are as shown in (6) and (7), respectively. In light of the above considerations, the APSO algorithm can be described as follows:
Step 1: Initialize the swarm with m particles. Each particle gets a random position and velocity in the initial range.
Step 2: Evaluate the desired optimization fitness function for each particle.
Step 3: Compare the evaluated fitness value of each particle with its Pi, the best position in its personal history. If the current value is better than Pi, set the current position as Pi.
Step 4: Compare the evaluated fitness value of each particle with Pg, the best position found by the whole swarm so far. If the current value is better than Pg, set the current position as Pg.
Step 5: Update the velocity and position of each particle according to equations (11) and (12).
Step 6: Update part of the particles if the algorithm does not find a better solution for 10 consecutive generations.
Step 7: Loop to Step 2 until a stop criterion is met, usually a sufficiently good fitness value or a predefined maximum number of generations.
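A minimal sketch of one APSO generation follows. It assumes the personal and global bests are maintained outside the step as in Steps 3-4, and it guards the denominators PF and GF against zero, a detail the paper does not spell out; all names, including the helper for the Step 6 restart rule, are illustrative.

import numpy as np

def apso_step(f, x, v, p, pf, g, gf, w0=0.4, c1=2.8, c2=1.3, rng=None):
    # One generation of the update in Eqs. (6)-(12); the bests p, pf, g, gf
    # are maintained by the caller, as in Steps 3-4 of the algorithm.
    if rng is None:
        rng = np.random.default_rng()
    m, n = x.shape
    eps = 1e-12
    w1 = c1**2 / (c1 + c2)                              # Eq. (8)
    w2 = c2**2 / (c1 + c2)                              # Eq. (9)
    w = rng.uniform(w0, 2.0 * w0)                       # Eq. (10): w(t) ~ U(w0, 2w0)
    fx = np.apply_along_axis(f, 1, x)
    dp = pf - fx                                        # f(P_i(t)) - f(X_i(t))
    dg = gf - fx                                        # f(P_g(t)) - f(X_i(t))
    PF = dp.mean() if abs(dp.mean()) > eps else eps     # guard against division by zero
    GF = dg.mean() if abs(dg.mean()) > eps else eps
    A1 = np.log(dp / PF + np.e - 1.0)                   # Eq. (6)
    A2 = np.log(dg / GF + np.e - 1.0)                   # Eq. (7)
    r1, r2 = rng.random((m, n)), rng.random((m, n))
    v = w * v + w1 * r1 * A1[:, None] * (p - x) + w2 * r2 * A2[:, None] * (g - x)  # Eq. (11)
    return x + v, v                                     # Eq. (12)

def perturb_on_stagnation(x, rng):
    # Step 6 restart rule: move 30% of the particles to random positions
    # within [0.8 x_ij, 1.2 x_ij] in every dimension.
    idx = rng.choice(x.shape[0], size=max(1, int(0.3 * x.shape[0])), replace=False)
    lo = np.minimum(0.8 * x[idx], 1.2 * x[idx])
    hi = np.maximum(0.8 * x[idx], 1.2 * x[idx])
    x[idx] = rng.uniform(lo, hi)
    return x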

4. Experiments and Results


The performance of the proposed APSO is evaluated by comparing its results with the standard PSO and CPSO. We implemented the standard PSO, CPSO and the proposed APSO algorithms in Visual C++ 6.0 on a PC with a Pentium IV 2.0 GHz processor and 256 MB RAM. The DeJong test suite, which is widely used in evolutionary computation and was also used in established works on PSO [8, 9, 10], was considered for the investigation of APSO's performance. The definitions of the test problems, as well as their dimensions, admissible parameter ranges and error goal values, are reported in Tab. 1.
Tab. 1  The DeJong benchmark problems suite

Name        Formula                                                                                            Dim.  Range        Goal
Sphere      $f(x) = \sum_{i=1}^{n} x_i^2$                                                                        30   [-100,100]   0.01
Rosenbrock  $f(x) = \sum_{i=1}^{n-1} \bigl(100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2\bigr)$                           30   [-30,30]     100
Rastrigrin  $f(x) = \sum_{i=1}^{n} \bigl(x_i^2 - 10\cos(2\pi x_i) + 10\bigr)$                                    30   [-5.2,5.2]   100
Griwank     $f(x) = \frac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\bigl(\frac{x_i}{\sqrt{i}}\bigr) + 1$   30   [-600,600]   0.1
Schaffer    $f(x) = 0.5 + \frac{(\sin\sqrt{x_1^2 + x_2^2})^2 - 0.5}{(1.0 + 0.001(x_1^2 + x_2^2))^2}$             2    [-100,100]   0.00001
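For reference, the five benchmarks of Tab. 1 translate directly into code. This NumPy sketch follows the formulas above; dimension and range handling are left to the caller.

import numpy as np

def sphere(x):
    return np.sum(x**2)

def rosenbrock(x):
    return np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (x[:-1] - 1.0)**2)

def rastrigrin(x):
    return np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

def griwank(x):
    i = np.arange(1, x.size + 1)
    return np.sum(x**2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0

def schaffer(x):
    # Two-dimensional, as in Tab. 1.
    s = x[0]**2 + x[1]**2
    return 0.5 + (np.sin(np.sqrt(s))**2 - 0.5) / (1.0 + 0.001 * s)**2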

For each test problem, 50 independent experiments were conducted. The swarm size was set to 30 in all cases, and the swarm was allowed to perform a maximum of 5000 iterations. For the standard PSO algorithm, a linearly decreasing weight function is used that starts at 0.9 and ends at 0.4, with c1 = c2 = 2.0 as recommended by Shi and Eberhart [3]. For the CPSO and APSO algorithms, we set c1 = 2.8, c2 = 1.3 following the suggestion of Clerc [7]. For the APSO, we set w0 = 0.4. The final results reported in Tab. 2 are the mean values over 50 runs for each algorithm.
Tab. 2  A comparative study of APSO, PSO and CPSO

Algorithm  Index                 Sphere    Rosenbrock  Rastrigrin  Griwank   Schaffer
PSO        Success ratio         100%      98%         90%         100%      100%
           Number of iterations  3094.94   3334.76     2470.52     3018.48   1382.86
           Time consumed (s)     1.117     2.646       1.128       1.485     0.124
CPSO       Success ratio         96%       82%         98%         98%       90%
           Number of iterations  301.06    311.17      175.79      258.12    481.48
           Time consumed (s)     0.253     0.818       0.079       0.118     0.021
APSO       Success ratio         100%      100%        98%         100%      100%
           Number of iterations  171.66    209.42      73.87       112.72    114.64
           Time consumed (s)     0.126     0.185       0.051       0.062     0.009

From Tab. 2 we can see clearly that the APSO algorithm performs much better than the standard PSO and CPSO algorithms. For example, the mean number of iterations of PSO and CPSO for the Rastrigrin function is 2471 and 176, respectively, while the mean number of iterations of APSO is only 74. Among the three algorithms, the convergence speed of APSO is the quickest, indicating that APSO has a great advantage in convergence over the standard PSO and CPSO algorithms. For all the functions tested, the running time of APSO is the shortest and the mean results obtained are the best. That is to say, APSO yields better results than PSO and CPSO.

5. Conclusion
The particle swarm optimization algorithm is a recently developed meta-heuristic global optimization algorithm. It has attracted wide attention from researchers because of its feasibility and effectiveness, and how to improve its performance is an important problem. In this paper, an adaptive particle swarm optimization algorithm is proposed, which introduces two adaptive acceleration factors to improve the convergence speed and global search capability of the PSO algorithm. A novel weight function is introduced, and some particles are updated in a new way when the proposed algorithm is trapped in a local optimum. Experimental results show that APSO can overcome premature convergence and offers a marked improvement in performance over the traditional PSO and CPSO.
References
[1] Kennedy J, Eberhart R C. Particle swarm optimization. In: Proceedings of the 1995 IEEE International Conference on Neural Networks, Piscataway, NJ: IEEE Service Center, 1995. 1942-1948
[2] Eberhart R C, Kennedy J. A new optimizer using particle swarm theory. In: Proceedings of the 1995 6th International Symposium on Micro Machine and Human Science, Piscataway, NJ: IEEE Service Center, 1995. 39-43
[3] Shi Y, Eberhart R C. A modified particle swarm optimization. In: Proceedings of the IEEE International Conference on Evolutionary Computation, Piscataway, NJ: IEEE Press, 1998. 69-73
[4] Kennedy J. The particle swarm: social adaptation of knowledge. In: Proceedings of the IEEE International Conference on Evolutionary Computation, Piscataway, NJ: IEEE Service Center, 1997. 303-308
[5] Shi Y, Eberhart R C. Empirical study of particle swarm optimization. In: Angeline P J, Michalewicz Z, Schoenauer M, Yao X, Zalzala A, eds. Proceedings of the Congress of Evolutionary Computation, Vol. 3, IEEE Press, 1999. 1945-1950
[6] Schutte J F, Groenwold A A. A study of global optimization using particle swarms. Journal of Global Optimization, 2005, 31: 93-108
[7] Clerc M. The swarm and the queen: towards a deterministic and adaptive particle swarm optimization. In: Angeline P J, Michalewicz Z, Schoenauer M, Yao X, Zalzala A, eds. Proceedings of the Congress of Evolutionary Computation, Vol. 3, IEEE Press, 1999. 1951-1957
[8] Eberhart R C, Shi Y. Comparing inertia weights and constriction factors in particle swarm optimization. In: Proceedings of the 2000 Congress on Evolutionary Computation, Piscataway, NJ: IEEE Service Center, 2000. 84-88
[9] Clerc M, Kennedy J. The particle swarm: explosion, stability, and convergence in a multidimensional complex space. IEEE Transactions on Evolutionary Computation, 2002, 6(1): 58-73
[10] Trelea I C. The particle swarm optimization algorithm: convergence analysis and parameter selection. Information Processing Letters, 2003, 85: 317-325


A Study on Dynamic Modeling for Chinese Residents' Consumption and Investment Decision with Housing
Liu Zhixin, Huang Lingling
School of Economics & Management, BeiHang University, P.R.China, 100083

Abstract We solve a realistically calibrated life-cycle model of consumption and portfolio decisions for Chinese families with housing and stochastic labor income, and try to identify the important factors in residents' consumption and investment decisions. Our analysis demonstrates that labor income risk and house price risk crowd out stockholdings because of the increase in precautionary saving, while the down payment ratio reduces residents' stockholdings due to illiquid home equity restrictions. Our analysis also indicates that, as house price risk and the down payment ratio increase, non-housing consumption of households is reduced early in the life cycle and increased later in the life cycle.
Key Words Life-cycle, Consumption and investment, Decision, Housing

1. Introduction
The issue of consumption and investment decisions over the life cycle is encountered by every household. A household needs to decide the size of the house to buy, how much to consume of goods other than housing, how much to borrow using the house as collateral, and the portfolio composition among risky stocks, relatively safe assets and saving. To answer these questions, we can construct a dynamic, realistic model of the optimal portfolio and consumption decision of a typical family based on the Life-Cycle Hypothesis, proposed by F. Modigliani in 1963[1]. It states that an individual can smooth his consumption over his whole life, choosing an optimal consumption and investment path in order to maximize his life-cycle utility.

For most Chinese residents, housing is the single most important consumption good from which utility is derived and, at the same time, the dominant asset in the portfolio. For instance, the survey of household assets by the National Bureau of Statistics of China revealed that the ratio of house value to total assets among city residents had reached 47.9 percent in 2002. Although housing is difficult to model because of its simultaneous role as a consumption and investment good, its illiquidity, and its high transaction costs, it plays a crucial role in explaining the consumption and portfolios of households over their whole lives. Therefore, it is quite meaningful to clarify the household consumption and investment decisions of Chinese families under housing restrictions and related factors.

In this paper we develop a calibrated life-cycle model with stochastic labor income, costly stock market participation and risky owner-occupied housing, and try to answer the following questions. First, what characterizes Chinese residents' consumption and investment behavior over the life cycle in the presence of housing? Second, do labor income risk and house price risk lead families to reduce their consumption of other goods and their financial assets? Third, how does an increase in the down payment ratio affect the consumption and portfolio composition of Chinese households?

Several recent studies attempt to reveal the effect of housing on households' life-cycle stock investment. For instance, Cocco (2005)[2] introduces housing into a standard life-cycle consumption and portfolio decision model; Hu (2005)[3] studies portfolio choices for homeowners in the presence of a house rental market in a stylized five-period model; Chen & Fu (2006)[4] construct a life-cycle model to explain the relationship between monetary policy and the consumption and investment behavior of Chinese families. Three key features distinguish our model from the others. First, we introduce housing and related factors into a realistic framework. Second, we describe the character of Chinese families with their particular housing risk, mortgage factors and stochastic wages. Third, we try to forecast the effect of housing policy in China and provide useful suggestions to policy makers.

2. The Model
We model the consumption and investment choices of a resident who lives from time 1 until time T
Supported by program for new century excellent talents in university ( NCET-050184) and Natural Science Foundation of China (70521001)

13

according to the character of Chinese economy. Fig. 1 is the structure of our model.

Fig. 1

Structure of the model

To simplify the problem, we put forward the following hypotheses according to the realistic situation: Hypothesis 1: There are two financial assets: A riskless asset ( B t ) with gross real return r f , and a risky asset( St )
with gross real return rs , t . A investor changes his riskless asset without incurring any cost, and spends a transaction rate on risky stock.

Hypothesis 2:. The resident owns the house he lives in. If he want to buy a new house at time t, he need sell his old house at the same period with a transaction cost equals to a fraction ( ) of the house value.
Hypothesis 3: Short sale of financial assets is prohibited. 2.1 Preference Function The household derives utility from housing services ( H t ) and a numeraire good ( Ct ), and after date T from

bequeathing terminal wealth WT +1 . The residents preference is characterized by the recursive Epstein-Zin utility function, and described by:

U 1 = E1 [ t 1U ( C t , H t )] + T B ( Q t ) ,
t =1

(1) . (2)

Where Where

U (C t , H t ) =

( C t H t1 ) 1 , 1

B (Qt ) =

WT1+ 1 1

is the coefficient of relative risk aversion, is the time discount factor, determines preference

for housing relative to the other consumption good. 2.2 Labor Income Process For simplicity, retirement is assumed to be exogenous and deterministic, with all residents retiring in the period of K. Before retirement, the resident receives a stochastic labor income Lt ,is given by:

Lt = Lu t + L t ,

Lut = bLut 1

for t K

(3)

Where Lut is the Permanent component of labor income which determined by economy situation and individual characteristics of household. L t is an idiosyncratic temporary shock distributed as N ( 0 , l2 ) . Retirement income is modeled as a fixed pension (P).

Lt = P

for tK

(4)

2.3 Financial Assets The riskless asset yields a constant gross return r f , while the return on the risky asset rs , t is given by:

rs , t = rf + + t

(5)

Where is given by:

t distributed as

N (0, 2 ) , denote the risk compensation of stock. No short sale of financial assets

14

St > 0 ,

Bt > 0

for all t

(6)

2.4 Housing and Mortgage Contracts The resident needs to pay at least a proportion (d) of the house value as a down payment at loan initiation, and finance the rest by borrowing a mortgage with a constant rate of interest rd . We assume that the investor is

allowed to renegotiate costlessly the desired level of debt in every period. The mortgage balance at time t denoted by M t needs to satisfy:

0 M t (1 d ) Pt H t

for all t

(7)

Where Pt is the price per unit of housing at time t, and the appreciation rate rH , t follows an i.i.d normal
2 process N ( H , H ) . The shock to house prices is affected by the shock of labor income. We assume that a house

sale is associated with transaction cost equals to a fraction (n) of the house value,. Maintenance costs of each period equal to a proportion (m) of the house value. 2.5 Budget Constraints We denote the residents spendable resources by Qt , which can be written as follows:
S t 1 (1 + rs , t 1 ) + Bt 1 (1 + r f ) + Lt 1 M t 1 (1 + rd ) = Qt Therefore, the budget constraints need to satisfy: Case1: Case2: (8)

Ht = Ht 1 Qt = St + Bt + Ct + mPt H t 1 Dt + St St 1 Ht Ht1, Qt = S t + Bt + Ct + mPt H t 1 Dt (1 n ) Pt H t 1 + Pt H t + S t S t 1

(9)
(10)

2.6 The Optimization Problem The target of household is to maximize his lifetime utility. The complete optimization problem is then Max{S , B , C , H , D }T E (U1 ) , where U1 is given by equations (1) and (2), and is subject to the constraints given by
t t t t t t =1

equations (3) to (10), plus the constraints that consumption and house must be non-negative at all dates.

3. Parameterization and Numerical Solution Method


3.1Model Parameterization We assume that a resident enters the economy at age of 25, retires at age 60, and dies at age 75. Each period of the model is calibrated to one real life year, and the decision frequency is annual. To parameterize the labor income process of Chinese residents, we use the date 899RMB as the base year income at the beginning of Life-cycle according to the disposable income of Chinese citizen in 1987. The incremental rate is 15% according to the average incremental rate of labor income from 1987 to 2004. Retirement pension is set to 90% of the Labor income at time K-1. Our baseline parameter values shown in Tab.1:
Tab.1 Parameter Coefficient of relative risk aversion Time discount factor Housing preference factor Variance of labor income Incremental rate of labor income Risk free rate Risk compensation of stock Variance of stock return Transaction rate of stock Down payment requirement Mortgage rate Maintenance cost of house Transaction cost of house Baseline parameter values symbol value 0.99 0.99 0.3 0.02 0.069 0.02 0.07 0.04 0.005 0.2 0.0551 0.01 0.015

l2

rf

rd
m n

15

3.2 Numerical Solution Method Analytical solutions to this problem do not exist; we thus derive numerical solutions through value function iterations. With the recursive nature of the problem, we can rewrite the consumption and investment problem with Bellman equation as follows:

Vt ( X t ) = max {
{ At }

( C t H t1 )1 + E t [Vt + 1 ( X t + 1 )] + B (Qt )} 1
vector of is endogenous state variables, and

Where

X t = {Lt , PH ,t , H t 1 , Dt 1 , Qt } the

At = {St , Bt , Ct , H t , Dt } is the vector of choice variables. We use Newton iterated algorithm to optimize the
different choices. We iterate backwards, and find a good feasible solution with improving the initial states.

4. Results
4.1 Baseline Case Analysis
a: labor Income,Non-Housing Consumption,Housing Positions
200000 180000 160000 140000 100000 80000 60000 40000 20000 0 25 30 35 40
Age

b: Stock vs. Riskless Asset


25000 20000 15000 10000 5000 0
RMB

120000
RMB

45

50
lt

55

60
Ct

65

70

75

25

30

35

40

45

House position

50 Age

55

60

65
Bt

70
St

75

Fig.2

Simulation analysis on baseline case

Fig.2 shows the residents life-cycle housing positionsconsumption other than housestock and riskless asset choices on baseline case. From this figure we can find that: (1) As labor income and accumulated wealth increasing, the non-housing consumption increases accordingly, and exceeds the labor income after retirement. (2) Our model can generate a hump-shaped life-cycle housing position, which increases before 60, and descends after 65. It means that the household may optimally choose to refinance his mortgage, and cash out a fraction of his house equity to increase his consumption at the end of life. (3) With a large illiquid house equity position, the resident holds less stock before 50, and increases his stockholdings after accumulating enough liquid wealth. After retirement, the resident descends stock level because of risk aversion and supporting his consumption. (4) Riskless asset of resident reaches a peak at the age of 59, and descend rapidly at 60-65. 4.2 The Effect of Labor Income Risk Fig. 3 presents the impact of labor income risk for housing consumption other than house stock and riskless asset of the resident. It indicates that Labor income risk crowds out housing position, and cuts down the non-housing consumption at the early life-cycle because of protective saving. Due to the accumulation of wealth, the growth rate of non-housing consumption with highest level of labor income risk is fastest at latter period of life-cycle. With the increase of labor income risk, the level of stockholdings observed in portfolio composition descends since labor income substitutes for riskless asset holdings and the level of riskless asset increases most of the time during life-cycle.

16

a: Labor Income Risk:Housing and Non-Housing Consumption


350000 300000

b:Labor Income Risk:Stock and Riskless Asset


35000 30000 25000

250000

RMB

150000 100000 50000 0 25 30

RMB
15000 10000 5000 0
35 40 45 Ct(lv=0,02) Ct(lv=0.04) Ct(lv=0.1) 50 55 60 65 70 75 House position(lv=0.02) House position(lv=0.04) House position(lv=0.1)

200000

20000

25

30

35

40

45

50

55

60

65

70

75

Age

Age

Bt(lv=0.02) St(lv=0.04)

St(lv=0.02) Bt(lv=0.1)

Bt(lv=0.04) St(lv=0.1)

Fig.3

The effect of labor income risk

4.3 The Effect of House Price Risk Fig. 4 presents housingconsumptionstock and riskless asset with different levels of house price risk. Our

analysis indicates that the impact of house price risk to non-housing consumption is not the same in different stage of the life-cycle. With the increase of house price risk, the resident reduces the non-housing consumption in the early life-cycle, and increases his consumption in the latter stage of life-cycle. Meanwhile, the optimal household financial asset allocation changes because of house price risk, the level of stockholding descends, and the level of riskless asset increases during life-cycle.
a:House price risk:House and No-Housing Consumption
350000 300000 250000
RMB

b:House Price Risk:Stock and Riskless Asset


25000 20000 15000

200000 150000 100000 50000 0 25 30 Age 35 40 45 50 55 60 65 70 75


Ct(Hv=0) Ct(Hv=0.02) Ct(Hv=0.04) House position(Hv=0) House position(Hv=0.02) House position(Hv=0.04)

10000 5000 0 25 30 35 40 45 50 55 60 65 70 75

RMB

Age

Bt(Hv=0) St(Hv=0.02)

St(Hv=0) Bt(Hv=0.04)

Bt(Hv=0.02) St(Hv=0.04)

Fig.4

The effect of house price risk

4.4 The Effect of Down Payment Ratio Fig.5 presents the consumption and portfolio choice among stock and saving of household with different down payment ratios (d=20%d=30%d=40%). It indicates that the household increases his housing position

during his life-cycle with the increase of down payment ratio, Meanwhile, the level of stockholding descends because of illiquid home equity restrictions. The resident reduces the non-housing consumption in the early life-cycle due to the low financial net-worth, and increases his consumption in the latter stage of life-cycle because of the accumulation of wealth.
a:Downpayment Ratio:Housing and Non-housing consumption
400000 350000 300000 250000 200000 150000 100000 50000 0 25 30 35 40 45 Ct(d=20%) Ct(d=30%) Ct(d=40%) 50 55 60 65 70 75 House position(d=20%) House position(d=30%) House position(d=40%)

RMB

Age

Fig.5

The effect of down payment ratio

17

5. Conclusions
In this paper, we developed a dynamic life-cycle model to study the optimal consumption and investment decision with housing of Chinese resident. It is important to introduce housing into our model, since housing is the single most important asset for many households, and has significant implications for residents consumption and portfolio choice. Our model incorporates many realistic features including stochastic labor income processstock market transaction costhousing restrictions and liquidation cost. The model generates hump-shaped life-cycle stockholdings and house position of Chinese resident. Our analysis indicates that labor income riskhouse price risk and down payment ratio have significant impact for residents consumption and investment choice. Labor income risk and house price risk crowd out the level of stockholding because of protective saving, while down payment cuts down the stockholding due to illiquid home equity restrictions. The impact of house price risk and down payment ratio to non-housing consumption is not the same in different stage of the life-cycle. Specifically, with the increase of house price risk and down payment ratio, young residents reduce their non-housing consumption, while old residents increase their non-housing consumption.
References Ando, A. and F.Modigliani, The Life-Cycle Hypothesis of Saving: Aggregate Implications and Test, American Economic Review, 1963,Vol.53:55-84 [2] Cocco, Joo F, Portfolio Choice in the Presence of Housing, Review of Financial Studies, 2005,Vol.18 issue 2,535-567 [3] XiaoQing Hu, Portfolio choices for homeowners, Journal of Urban Economics, Jul2005,Vol.58 issue 1,114-136 [4] XueBin Chen, DongSheng Fu and ChengJie Ge, The Dynamic Optimization Simulation Analysis on Chinese Residents Consumption and Investment Behavior, Journal of Financial Research, 2006(3),21-35 (in Chinese) [5] Gomes,Francisco, Michaelides and Alexander, Optimal Life-Cycle Asset Allocation: Understanding the Empirical Evidence, Apr2005, Vol.60 issue 2,869-904 [6] Rui Yao and Harold H. Zhang, Optimal life-cycle asset allocation with housing as collateral, The Review of Financial Studies, 2005, vol 18 No. 198-239 [7] Carroll. C. D and W. E. Dunn, Unemployment expectations, jumping triggers, and household balance sheets, NBER Macroeconomics Annual, 1997, 165- 229, MT Press, Cambridge, MA [8] Kullmann Cornelia and Stephan Siegel, Real estate and its role in household portfolio choice, Working paper, University of British Columbia, 2003 [9] McCarthy, David G, A life-cycle model of defined- benefit pension plans, Journal of Pension Economics and Finance, 2003,2(2): 99-126 [10] Flavin and Marjorie, Owner-occupied housing in the presence of adjustment costs: implications for asset pricing and nondurable consumption, 2002, Working Paper, UC San Diego. [1]

18

New Formulations for Second-Best Congestion Pricing Problems on a General Transportation Network with Elastic Demands: An Excess-Demand Approach
Liu Nan, Chen Daqiang
College of Management, Zhejiang University, Hangzhou, P.R.China, 310058

Abstract This paper examines the problem of second-best congestion pricing problem for a general urban highway transportation network with multiple time periods and two modes. The previous second-best congestion pricing models for a simple road network are reviewed, and then extended to a general network with travelers also having modal choice of car and bus. New formulations of the congestion pricing problems are presented with an excess-demand approach for the problems of no-toll, first-best and second-best, and the first-order conditions are derived. The new approach has the advantage of incorporating mode choice into the existing models. Key words Congestion pricing, Traffic equilibrium, Elastic demand, Second-best

1. Introduction
Rapid development of urbanization and growth of car ownership in China have led to serious urban traffic congestion. Recently congestion pricing as a demand-side strategy to tackle congestion has received increasing attention. The issue of road pricing/congestion pricing has been studied by many researchers, the representative results focus on following four directions: (1) congestion pricing with bottlenecks, Arnott et al. (1990)[1][2]analyzed user equilibrium, system optimum, and various pricing regimes for a network of two routes in parallel based on Vickrey (1969)[3], which models roads as bottlenecks; Hai-jun Huang (2000)[4] studied the fares and tolls in a competitive system with transit and highways with two groups of commuters. (2) social welfare effect of congestion pricing, Small (1992)[5] analyzed the congestion pricing impact s on travelers different income levels in San Francisco Bay area and Los Angeles; Paolo Ferrari (2002) [6]present an optimal algorithms for the social welfare by co-share of the driver and authority. (3) general network congestion pricing problem, Liu and Boyce (2002) [7]developed a model of the systemoptimal travel choice problem and efficient congestion tolls for a general transportation network with multiperiods; Hai Yang et al. (2004)[8] studied the multi-criteria or the cost-versus-time network equilibrium and system optimum problem in a network with a discrete set of VOTs for several user classes. (4) second-best congestion pricing problem, Verhoef (1996, 2002)[9][10]analyzed second-best congestion pricing for a simple and a general networks, respectively and for a single period; Yang and Huang (1999)[11] studied carpooling and congestion pricing in a multilane highway with high-occupancy vehicle lanes; Liu and McDonald (1998, 1999)[12][13]used economic theory of the second best and a simulation model to compare first-best, second-best and no-toll solutions for a model with two routes and two time periods (peak and pre-peak). This paper focuses on efficiency issues associated with second-best congestion pricing, i.e., use of congestion tolls to achieve the best utilization of resources for a transportation network with constraints that part of the network cannot be tolled. The organization of the paper is as follows. In Section 2 we first review a second-best congestion pricing model presented in Liu and McDonald (1999) [13]. Then, in Section 3, we extend the previous model to a general network with travelers also having modal choice of car and bus, and present new model formulations of congestion pricing problems consisting of no-toll, first-best and second-best. The final section summarizes the major conclusions and suggests further research topics.

This research has been supported by the National Natural Science Foundation of China (Efficiency and equity problems of multi-period second-best congestion pricing schemes in urban road systems, No: 70471053).

19

2. Previous Formulations of Second-best Congestion Pricing Problems


In this section we review the formulations of congestion pricing models by Liu and McDonald (1999) [13], which serve as a basis for our new approach. The second-best problem aims to maximize the social welfare or net benefits (total benefits B minus total costs C of the transportation system): max W = B C
=
( v1 , v 2 )

( 0,0)

P (v , v
1 1

)dv1 + P2 (v1 , v 2 )dv 2

(1)

[v1t c1t (v1t ) + v1 f c1 f (v1 f )] [v 2t c 2t (v 2t ) + v 2 f c 2 f (v 2 f )]

subject to
P1 (v1 , v 2 ) = c1 f (v1 f ) = c1t (v1t ) + 1t P2 (v1 , v 2 ) = c 2 f (v 2 f ) = c 2t (v 2t ) + 2t v1 = v1t + v1 f , v 2 = v 2t + v 2 f v1t 0, v1 f 0, v 2t 0, v 2 f 0, v1 0, v 2 0 (1a ) (1b) (1c) (1d )

where congestion tolls on the toll route are denoted by 1t and 2t . Eq. ( 1a ) is the constraint on the pricing of

the free route in the peak period. In the peak period the equilibrium price of a trip on either route is equal to the average cost on the free route. The equilibrium price of the trip on the toll route is the average cost plus the congestion toll in the peak period. Similar to the condition for the peak period, Eq. ( 1b ) is the constraint on the pricing of the free route in the pre-peak period. Eq. (1c ) states that the total traffic volume in the peak (pre-peak) period is equal to the sum of the traffic on each route in the peak (pre-peak) period. Eq. ( 1d ) is the nonnegativity
condition for traffic volumes. By solving the model for optimal traffic volume allocation ( v1t , v1 f , v 2 t , v 2 f , v1 , v 2 ), the second-best congestion tolls ( 1t , 2 t ) on the toll route for the peak and pre-peak periods are determined by:

1t = P1 (v1 , v2 ) c1t (v1t ) 2t = P2 (v1 , v 2 ) c 2t (v2t )


For the first-best problem, the pricing constraints Eq. (1a ) and Eq. ( 1b ) do not exist since congestion tolls can be imposed on both routes; and its objective function is the same as Eq. (1) subject to traffic volume constraints Eq. ( 1c ) and nonnegativity condition Eq. ( 1d ). For the no-toll problem, no maximization is involved; therefore, the optimal traffic volume allocation ( v1t , v1 f , v 2 t , v 2 f ) is determined by the following equilibrium conditions:
P (v1 , v2 ) = c1t (v1t ) = c1 f (v1 f ) 1 P2 (v1 , v2 ) = c2t (v2t ) = c2 f (v2 f )

In order to solve the problems of the second-best and first-best, the following theorem is needed (Pressman 1970) [14]: Theorem 1. Suppose Pi (v1 , v2 ) (i=1,2), and Pi / v j (i,j=1,2) are continuous and single valued at every point of a simply connected region. Then, if and only if P1 / v2 = P2 / v1 will the line integral
B=
( v1 ,v2 ) ( 0,0)

P (v , v )dv
1 1 2

+ P2 (v1 , v2 )dv2 :

a. be independent of the path from (0,0) to (v1 , v2 ) ; b. be zero around every closed curve in the region; c. there exists a function B(v1 , v2 ) such that:
dB = P1dv1 + P2 dv2 ; and

d. the following equations are satisfied:


20

B (v1 , v2 ) = P1 (v1 , v2 ) v1

B(v1 , v2 ) = P2 (v1 , v2 ) v2

The above models provide a starting point for evaluating second-best congestion pricing schemes, However they have some shortcomings as they only deal with a simple network that contains two alternative roads (a toll route and a free route) and there is no mode choice. In the next section, we extend the previous models to a general network with travelers also having modal choice of car and bus.

3. Extensions of the Previous Models


3.1 Description of the transportation system 3.1.1. The setting In this section we extend the previous models by adding mode choice to a general transportation network, and formulate the problems with a so-called excess-demand formulation method (Sheffi, 1985) [15]. It is assumed that all trips in the system are divided into two modes (private car and bus). In order to simplify the question we assume only the private car has the excess-demand, and the bus traffic is carried in a special lane which has no interaction with car traffic. The condition can be illustrated in a simple topology shown in Figure 1. In the network, for an origin-destination (OD) paizzr rs, some routes are toll routes and, due to technical and/or political constraints, other routes must remain untolled (free routes). In addition to describe the spatial framework, the study also considers two time periods (peak and off-peak (pre-peak) periods) to describe travelers departure time choice.
excess-demand route(private car) toll route(private car) free route(private car) bus lane

Fig.1 Illustration network for the problem

3.1.2. Travel costs The average cost of two modes on link a in time period i is denoted by ~ ~ x c = c (~ ) c = c (x )
ia ia ia ia ia ia

(2)

where ~ia and x ia are traffic volumes for private car and bus, respectively, on link a in time period i , which is x different from the previous formulations where denoted by vi . The average cost is assumed to be a monotonically increasing function of the traffic volume, and there is no interactive between two modes: ~ ~ dcia dcia dcia dcia (3) > 0, = 0, ~ =0 ~ > 0, dx dx dx dx
ia ia ia ia

Travel cost on route k connecting OD pair rs ~ ~ C = c rs , C


irs , k a k ia a ,k

irs , k

c
a k

ia

arsk ,

k rs , i , rs

3.1.3 Demand characteristics The demand in one period is a function of trip prices in both peak and pre-peak periods. The income effect is assumed to be negligible. The demand functions of total traffic volumes (two modes) on OD pair rs for the peak ( i =1) and pre-peak ( i =2) periods are given by:
q1rs ( P rs , P2 rs ) = q1rs 11,rs P rs + 12,rs P2 rs 1 1 q2 rs ( P rs , P2 rs ) = q2 rs + 21,rs P rs 22,rs P2 rs 1 1

(4)

21

where q irs is the traffic demand between OD pair rs in period i , Pirs is the trip price for OD pair rs in period i , q irs is the total potential fixed-demand according to the excess-demand formulation method. For demand functions Eq.(4), the following assumptions are made regarding dependency: 1). Negative own-price effect: q1rs / P rs < 0 , q2 rs / P2 rs < 0 1 2). Positive cross-price effect: q`rs / P2 rs > 0 ,
q2 rs / P1rs > 0

Then the excess-demand functions of private car are denoted by


e1rs = q1rs q1rs = 11,rs P rs 12,rs P2 rs 1 e2 rs = q2 rs q2 rs = 21,rs P rs + 22,rs P2 rs 1

(5)

From Eq.(5), the inverse excess-demand function for OD pair rs can be derived as a function of the excess-demand traffic volume in both the peak and pre-peak periods, or
E1rs = E 2 rs =

22 ,rs 12 ,rs e1rs + e 11,rs 22 ,rs 12 ,rs 21,rs 11,rs 22 ,rs 12 ,rs 21,rs 2 rs 21,rs 11,rs e1rs + e 11,rs 22 ,rs 12 ,rs 21,rs 11,rs 22 ,rs 12 ,rs 21,rs 2 rs
E1rs = b22,rs e1rs + b12,rs e2 rs

or

E2 rs = b21,rs e1rs + b11,rs e2 rs

(6)

3.1.4. Mode split Mode split or called mode choice is the result of a serial decision. Here we simplify the mode split by a logit choice function. 1 ~ N irs = N irs ~ ( C irs C irs ) (7) 1+ e ~ N irs = N irs N irs ~ where N irs is the total number of persons traveling between OD pair rs in period i , N irs is the number of persons choosing private car between OD pair rs in period i , N irs is the number of persons choosing bus between ~i i OD pair rs in period i , Crs and Crs are the minimum travel costs between OD pair rs in period i for both car and bus, respectively. ~ The number of persons N irs (that is N irs , N irs , N irs ) between OD pair rs in period i for both trip mode i ~ and the related traffic volume q rs (that is qirs , qirs , qirs ) satisfy the following relations
N irs = qirs N irs = qirs qirs + qirs = qirs

(8)

where are and are loading factors (persons per vehicle) for car and bus, respectively.
When the interaction between both modes is ignored, we can treat the bus lane as another excess-demand route. Based on the Eq.(7), we have 1 N 1 N C irs = ln irs 1 + C irs = ln irs + C irs N irs N irs Then the equivalent travel cost of the bus lane can be derived as:
W irs ( q irs ) = q ln ~ ~irs q irs 1

q irs 1 + C irs = ln ~ q q irs irs

+ C irs

(9)

3.2 New formulations of congestion pricing problems 3.2.1 no-toll problem In this problem we consider the model of user equilibrium with variable demand for two periods and two modes:

22

min Z ( x , e, q ) =
i a

xia

cia ( w ) dw +
rs

( e1 rs , e2 rs )

( 0,0 )

E1rs ( w1 , w2 ) dw1 + E 2 rs ( w1 , w2 ) dw2


(10)

+
i rs

qirs

Wirs ( w ) dw rs , k , i

s.t.

k rs

f irs , k + eirs + qirs = qirs

f irs , k 0,
where

qirs 0 , eirs 0, xia 0 ~ ~ = x ia f irs , k arsk ,


rs k rs

1) Equivalent conditions Let the objective function in (10) be decomposed as follows:


min Z ( x , e , q ) = m in [ Z 1 ( x ) + Z 2 ( e ) + Z 3 ( q ) ]
where:

(11)

Z1 ( x) = cia ( w)dw
xia i a 0

Z 2 (e) =
rs

( e1rs , e2 rs )

(0,0) qirs

E1rs ( w1 , w2 )dw1 + E2 rs ( w1 , w2 )dw2

Z 3 (q) = Wirs ( w)dw


i rs 0

The Lagrangian associated with (11) can then be written as


L ( f , e, q , ) = Z ( f , e, q ) +
i

rs

i rs

i i i i q rs f rs , k e rs q rs k rs

(12)

The first-order conditions for this program are:


~ L L ~ 0, f irs , k 0 ~ ~ f irs , k = 0, f irs , k f irs ,k L L q = 0, 0, q irs 0 irs irs q irs q L L e = 0, 0, eirs 0 eirs irs eirs L = 0 irs

(12a ) (12b) (12c) (12d )

According to Theorem 1, we have E1rs = E2 rs ;and Z 2 = E1rs (e1rs , e2 rs ) , Z 2 = E2 rs (e1rs , e2 rs ) . e2 rs e1rs e1rs e2 rs
Z 2 L = 1rs = E1rs (e1rs , e 2 rs ) 1rs e1rs e1rs

Z 2 L 2 rs = E 2 rs (e1rs , e 2 rs ) 2 rs = e 2 rs e 2 rs x L L ~ ~ x = ~ ~ ia = c ia ( ~ia ) arsk irs , ~ f irs , k x ia f irs , k ak L = Wirs (q irs ) irs q irs ~ L = q irs f irs , k e irs q irs irs k rs

(13)

The first-order conditions are thus:

23

~ ~ ~ ~ rs ~ x k c ia ( ~ia ) ars,k irs 0 , f irs ,k 0 c ia ( x ia ) a , k irs f irs , k = 0 , a a k W ( q ) q = 0, irs irs W irs ( q irs ) irs 0 , q irs 0 irs irs E irs ( e1 rs , e 2 rs ) irs e irs = 0 , E irs ( e1 rs , e 2 rs ) irs 0 , e irs 0 ~ q irs f irs , k e irs q irs = 0 k rs

[ [

(14 a ) (14 b ) (14 c ) (14 d )

Eq. (14a) states that if the traffic volume of private cars between OD pair rs in period i is positivethat is ~ f irs ,k > 0 , then cia ( xia ) arsk irs = 0 holds, it means that the travel cost of private cars on the used ,
~ ~ route k connecting OD pair rs is the same as the minimum one ( Cirs = min(Cirs ,k ) = irs ); on the other side, ~ ~ if Cirs ,k > irs holds, then theres no traffic volume between OD pair rs in period i , that is f irs ,k = 0 .
ak

Eq. (14b) states that if the traffic volume of the bus lane between OD pair rs in period i is positive, then Wirs ( qirs ) = irs it means that the travel cost of the bus lane on route k connecting OD pair rs is the same of ~ ~ the minimum travel cost of the private car ( Cirs = min(Cirs ,k ) = irs ). Eq. (14c) states that if the excess-demand of the private car between OD pair rs in period i is positive( eirs > 0 ), then Eirs (e1rs , e2 rs ) = irs holds, it means that the travel cost of the excess-demand route connecting OD pair rs is the same as the minimum travel cost of private car. Eq. (14d ) represents a set of flow conservation constraints. 2) Uniqueness conditions The strict convexity of Z1 , Z 2 and Z 3 is obvious, which implies that program (10) has a unique solution. Based on the above analysis, the equilibrium conditions fulfill: 1) mode split accords with a logit choice function; 2) the private car traffic volume allocation satisfies user equilibriums condition; and 3) cross-price effect of different periods demonstrates by different pricing strategy. ~ Total number of persons traveling by each mode can be calculated by Eq.(8) and , . 3.2.2 First-best problem In this problem we consider the model of system optimal with variable demand for two periods and two modes:
m in Z ( x , e , q ) =

c
i a

ia

( x ia ) x ia +
irs

rs

( e1 rs , e 2 rs ) ( 0 ,0 )

E 1 rs ( w1 , w 2 ) d w1 + E 2 rs ( w1 , w 2 ) dw 2

+ s.t.

W
i rs

( q irs ) q irs rs , k , i e irs 0, x ia 0

(15)

k rs

f irs , k + e irs + q irs = q irs q irs 0 , ~ f irs , k arsk ,

f irs , k 0,

where

~ = x ia

rs k rs

The objective function in (15) can be decomposed as follows:


min Z ( x , e , q ) = min Z 1 ( x ) + Z 2 ( e ) + Z 3 ( q )
where

(16)

Z1 ( x) = cia ( xia ) xia


i a

Z 2 (e ) =
rs i

( e1 rs , e2 rs )

(0,0)

E1rs ( w1 , w2 )dw1 + E2 rs ( w1 , w2 )dw2

Z 3 (q) = Wirs (qirs ) qirs


rs

The Lagrangian associated with (16) can then be written as


L ( f , e, q , ) = Z (e, q , f ) +
i

rs

irs

q irs f irs , k eirs q irs k rs

(17)

24

The first-order conditions for this program are:


L ~ L ~ f irs , k = 0, 0, f irs , k 0 ~ ~ f irs , k f irs , k L L 0, q irs = 0, q irs 0 q irs q irs L L e = 0, 0, e irs 0 irs e e irs irs L =0 irs

(17 a ) (17b) (17c) (17 d )

Here we use the following notation:


dc ( ~ ) x ~ x mc ia ( ~ia ) = c ia ( ~ia ) + ~ia ia~ ia x x d x ia

dc ( ~ ) x ~ MC irs , k = mc ia arsk = C irs , k + ~ia ia~ ia x , d x ia ak ak dW irs ( q irs ) d q irs

(18)

MW irs ( q irs ) = W irs ( q irs ) + q irs

where mcia is the marginal cost on link a in period i ; MCirs ,k is the marginal cost on route k between OD pair rs in period i for private cars; MW is the equivalent marginal equivalent cost on route k between OD pair rs in irs period i for the bus lane. Then the first-order conditions are thus (the same procedure as the no-toll problem):

~ ~ MCirs ,k irs f irs ,k = 0, MCirs ,k irs 0, f irs ,k 0 MWirs (qirs ) irs 0, qirs 0 MWirs ( qirs ) irs qirs = 0, E (e , e ) e = 0, Eirs (e1rs , e2 rs ) irs 0, eirs 0 irs irs irs 1rs 2 rs ~ qirs f irs ,k eirs qirs = 0 krs

( [ [

(19)

The first-order conditions states that the marginal travel cost of private car travel on route k connecting OD pair rs in time period i is equal to the minimum travel cost (i.e., the travel price) of private car between OD pair rs ( min(Cirs ,k + irs ,k ) = irs ); and the equivalent marginal travel cost of the bus lane travel on OD pair rs is equal to the minimum travel cost of the private car travel on all routes connecting OD pair rs ( MWirs (qirs ) = irs ). Then, irs = MWirs (qirs ) = min(Cirs,k + irs,k ) i, rs, k . Here we can calculate the efficient congestion tolls for route k for private cars:
dWirs (qirs ) . dcia ( xia ) , and efficient route congestion tolls for bus lane: irs = qirs dxia dqirs ak The strict convexity of Z1 , Z 2 , Z 3 is obvious, which implies that program (15) has a unique solution.

irs ,k = xia

3.2.3 Second-best problems Based on the analysis process of the above problems, we can find that the no-toll problem and first-best problem are two extreme forms of the second-best problem, this indicates we can construct the model of the second-best problem by modifying the objective functions of no-toll and first-best problems. 1) Second-best problem I In this problem we consider the scheme that charges no-toll for bus lanes, and only charges private cars on all links and in all time periods:

25

m in Z 1 ( x , e , q ) =

c
i a i

ia

( x ia ) x ia +
rs

( e1 rs , e 2 rs ) ( 0 ,0 )

E1 rs ( w1 , w 2 ) dw1 + E 2 rs ( w1 , w 2 ) dw 2

+ s.t.
k rs

rs

q irs 0

W irs ( w ) dw rs , k , i eirs 0, x ia 0

(20)

f irs , k + eirs + q irs = q irs q irs 0 ,

f irs , k 0,

The first-order conditions for this program are (see the procedure of the no-toll problem and first-best problem):
~ ~ MC irs ,k irs f irs ,k = 0, MC irs ,k irs 0, f irs ,k 0 Wirs ( q irs ) irs 0, q irs 0 Wirs ( q irs ) irs q irs = 0, E ( e , e ) e = 0, E irs ( e1rs , e2 rs ) irs 0, eirs 0 irs irs irs 1rs 2 rs ~ q irs f irs ,k eirs q irs = 0 krs

( [ [

(21)

The first-order conditions states that the marginal travel cost of private cars on route k connecting OD pair rs in time period i is equal to the minimum travel cost (i.e., the travel price) of private cars between OD pair rs ( min(Cirs ,k + irs ,k ) = irs ); and the equivalent travel cost of the bus lane travel on OD pair rs is equal to the minimum travel cost between OD pair rs Wirs (qirs ) = irs ). Then, irs = Wirs (qirs ) = min(Cirs,k + irs,k ) i, rs, k , which matches the travel cost characteristics under second-best problem I. 2) Second-best problem II In this problem we consider the scheme that charges no-toll for all bus lanes, only charges private cars on toll routes (or certain links) in all time periods:
min Z ( x, e, q) = cia ( xia ) xia +
i ak kK i ak k K

xia

cia ( w)dw
qirs

+
rs

( e1 rs ,e2 rs )

(0,0)

[ E1rs (w1 , w2 )dw1 + E2 rs (w1 , w2 )dw2 ] + 0


i rs

Wirs ( w)dw

(22)

s.t.

krs

irs , k

+ eirs + qirs = qirs

rs, k , i

firs ,k 0, qirs 0, eirs 0, xia 0

where k denotes a tolled route, K is the set of all toll routes between all OD pairs; k denotes a free route, and K is the set of all free routes between all OD pairs. And the first-order conditions for this program are: ~ (MC ) ~ = 0, f irs ,k MCirs ,k irs 0, f irs ,k 0 k irs ,k irs ~ ~ ~ ~ C f irs ,k 0 k irs f irs ,k = 0, Cirs ,k irs 0, irs ,k (23) Wirs (qirs ) irs 0, qirs 0 Wirs (qirs ) irs qirs = 0, Eirs (e1rs , e2 rs ) irs 0, eirs 0 Eirs (e1rs , e2 rs ) irs eirs = 0, ~ q f eirs qirs = 0 irs krs irs ,k

( [ [

The first-order conditions states that the marginal travel cost of private cars on toll route k connecting OD pair rs in time period i is equal to the minimum travel cost (i.e., the travel price) of private cars between OD ~ ~ pair rs ( min(Cirs,k + irs ,k ) = irs ); the travel cost of private car travel on free route k connecting OD pair rs in ~ time period i is equal to the minimum travel cost of private cars between OD pair rs ( min(Cirs ,k ) = irs ); and the equivalent travel cost of the bus lane between OD pair rs is equal to the minimum travel cost between OD pair rs ( Wirs (qirs ) = irs ). Then, irs = Wirs (qirs ) = min(Cirs ,k + irs ,k ) = min Cirs ,k i, rs, k , k , which matches the travel cost characteristics under Second-best problem II. 3) Second-best problem III
26

In this problem we consider the scheme that charges no-toll for all bus lanes, only charges private cars on toll routes (or certain links) in peak period:
min Z ( x, e, q) = c1a ( x1a ) x1a +
ak kK ak kK x2 a 0

c2 a ( w)dw +
i

ak k K

xia

cia ( w)dw
qirs

+
rs

( e1 rs , e2 rs )

(0,0)

[ E1rs (w1 , w2 )dw1 + E2rs (w1 , w2 )dw2 ] + 0


i rs

Wirs ( w)dw

(24)

s.t.

krs

irs , k

+ eirs + qirs = qirs

rs, k , i

firs ,k 0, qirs 0, eirs 0, xia 0

The first-order conditions for this program are: ~ ~ ~ ~ C1rs ,k 1rs f1rs ,k = 0, C1rs ,k 1rs 0, k f1rs ,k 0 ~ ~ (MC2 rs ,k 2 rs ) f 2 rs ,k = 0, MC2 rs ,k 2 rs 0, k f 2 rs ,k 0 ~ ~ ~ ~ f irs ,k 0 k Cirs ,k irs f irs ,k = 0, Cirs ,k irs 0, Wirs (qirs ) irs 0, qirs 0 Wirs (qirs ) irs qirs = 0, E (e , e ) e = 0, Eirs (e1rs , e2 rs ) irs 0, eirs 0 irs irs irs 1rs ~2 rs qirs f irs ,k eirs qirs = 0 krs

( [ [

(25)

The first-order conditions states that in peak period (i = 1) the marginal travel cost of private cars on toll route k connecting OD pair rs is equal to the minimum travel cost (i.e., the travel price) of private cars between OD pair rs ( min(C1rs ,k + 1rs ,k ) = 1rs ), the travel cost of private cars on free route k connecting OD pair
~ rs is equal to the minimum travel cost of private cars between OD pair rs ( min(C1rs ,k ) = 1rs ); in off-peak period

(i = 2) , the travel cost of private cars on either toll route k or free route k connecting OD pair rs is equal to the minimum travel cost of private cars between OD pair rs ( min C2 rs ,k = min C2 rs ,k = 2 rs ). And the equivalent

travel cost of the bus lane between OD pair rs is equal to the minimum travel cost of private cars between OD pair rs ( Wirs (qirs ) = irs ). Then,
1rs = W1rs (q1rs ) = min(C1rs ,k + 1rs ,k ) = minC1rs ,k 2rs = W2rs (q2rs ) = min C2 rs ,k = minC2rs ,k
i = 1, rs , k , k i = 2, rs , k , k

which matches the travel cost characteristics under Second-best problem III.

4 Conclusions and further research


In this paper we have extended the previous models of congestion pricing models by adding mode choice to a general transportation network. For the new model formulations, we have shown the equivalence conditions and the uniqueness conditions, indicating the validity of the new formulations that studied in this paper. As a consequence, in further research we would extend the models by considering the vehicle operating costs, more time periods, and different groups of travelers with different income levels, e.g.rich and poorbecause the equity issues are more important to realize congestion pricing in practice. Acknowledgments The authors would like to acknowledge The National Natural Science Foundation of China (NSFC) that supports the research. This research was supported by a research grant 70471053 from the NSFC.
References [1] [2] [3] Arnott R., de Palma A., Lindsey R. Economics of a bottleneck. Journal of Urban Economics, 1990, 27: 111-130 Arnott R., de Palma A., Lindsey R. Departure time and route choice for the morning commute. Transportation Research B, 1990, 24: 209-228 Vickrey W. Congestion theory and transportation investment. American Economic Review, Papers and Proceedings, 1969,59:251-261

27

[4] [5] [6] [7] [8] [9] [10] [11] [12] [13] [14] [15]

Huang Haijun. Fares and tolls in a competitive system with transit and highways: the case with two groups of commuters. Transportation Research Part E, 2000, 36: 267-284 Small K. Urban Transportation Economics. Harwood Academic, Chur, Switzerland. 1992 Paolo F. Road network toll pricing and social welfare. Transportation Research Part B,2002,36: 471-483 Liu L.N., Boyce D. Variational inequality formulation of systemoptimal travel choice problem and efficient congestion tolls for a general transportation network and multiple time periods, Regional Science and Urban Economics, 2002, 32:627-650 Yang H, Huang Haijun. The multi-class, multi-criteria traffic network equilibrium and systems optimum problem, Transportation Research, Part B,2004, 38 (1): 1-15 Verhoef E.T. Second-best congestion pricing, the case of an unpriced alternative. Journal of Urban Economics , 1996,40: 279-302 Verhoef E.T. Second-best congestion pricing in general networks, Heuristic algorithms for finding second-best optimal toll levels and toll points. Transportation Research, Part B,2002,36: 707-729 Yang H., Huang Haijun. Carpooling and congestion pricing in a multilane highway with high-occupancy vehicle lanes. Transportation Research Part A, 1999, 33: l39-l55 Liu L.N., McDonald J. Efficient congest tolls in the presence of unpriced congestion: A peak and off-peak simulation model, Journal of Urban Economics, 1998, 44:352-366 Liu L.N., McDonald J. Economic efficiency of second-best congestion pricing schemes in urban highway systems. Transportation Research Part B,1999, 33:157-188 Pressman I., 1970. A mathematical formulation of the peak-load pricing problem. Bell Journal Economics2, 304-324. Sheffi Y. Urban Transportation Network: Equilibrium Analysis with Mathematical Programming Methods. Englewood Cliffs, New Jersey: Prentice-Hall. 1985

28

Fuzzy MPMP Production Planning Problem with Minimum Risk Criteria


Yuan Guoqiang, Liu Yankui
College of Mathematics and Computer Science, Hebei University, Baoding Hebei, P.R.China, 071002

Abstract Based on credibility theory and two-stage fuzzy optimization method, this paper presents a new class of two-stage minimum risk multi-product multi-period (MPMP) production planning model. Then we deal with the approximation of the MPMP production planning problem. After that, a heuristic algorithm, which combines approximation approach, neural network (NN) and simulated annealing, is designed to solve the production planning problem, and a numerical example is given to show the feasibility of the algorithm. Key words Production planning, Credibility theory, Two-stage fuzzy optimization, Approximation approach, Simulated annealing

1. Introduction
Uncertainty plays a key role in manufacturing systems. Thus, the accuracy of production planning is to a large extent dependent on the presence of uncertainties. Because of the perfect forecast of production planning problem, it attracts many researchers interests (Lin and Yao, 2000; Hodges and Moore, 1970; Wu and Chang, 2003; Hsu and Wang, 2001)[1][2][3][4]. In stochastic decision systems, Bakir and Byrne (1998)[5] considered an MPMP production planning problem in stochastic environment. They dealt with a more realistic planning environment with only stochastic demand. Subsequently, they developed a new demand stochastic LP model based mainly on the theoretical presentation of two-stage deterministic equivalents. Since the pioneering work of Zadeh (1978)[6], possibility theory was being perfected and became a strong tool to deal with possibilistic uncertainty (Dubois and Prade, 1988; Nahmias, 1978; Wang, 1982)[7][8][9]. Many researchers applied the theory successfully to fuzzy optimizations (Shih, 1999; Wang and Fang, 2001)[10][11]. Among them, Shih (1999)[10] applied three fuzzy linear programming models to solving transportation planning problem. Wang and Fang (2001)[11] presented a fuzzy linear programming method for solving the aggregate production planning problem with multiple objectives. Although possibility theory is very popular and widely used in fuzzy community, the recent study (Liu and Liu, 2002; Liu and Wang, 2006)[12][13] showed that it is credibility measure instead of possibility measure that plays the role of probability measure in a fuzzy decision system, and an axiomatic approach based on credibility measure, called credibility theory, was developed by the motivation of the fact (Liu, 2004)[14]. Based on credibility theory, Liu (2005)[15] studied a class of two-stage fuzzy programming with recourse problem. More recently, Yuan and Liu (2006)[16] took credibility theory as the theoretical foundation of fuzzy optimization, and developed a two-stage fuzzy production planning model with fuzzy variable coefficients. The purpose of this paper is to apply credibility theory and two-stage fuzzy programming theory to MPMP production planning problem. We assume that the demand and cost are characterized by a fuzzy vector, and adopt minimum risk criteria in the objective. The rest of this paper is organized as follows. We first recall some basic concepts in Section 2, and then propose a new class of two-stage fuzzy minimum risk production planning problem in Section 3. In Section 4, we employ the approximation approach (Liu, 2006)[17] to the credibility objective of the MPMP production planning model, and deal with the convergence of approximation approach. The convergent result facilitates us to incorporate approximation approach, neural network (NN) and simulated annealing to solve the proposed production planning problem in Section 5, and a numerical example is also given in Section 5 to illustrate the feasibility of the algorithm. Section 6 summarizes the main results in this paper.

This research was supported by the National Natural Science Foundation of China (Fuzzy Random Programming with Recourse and Its Applications, No: 70571021).

29

2. Basic concepts
Given a universe , P ( ) is the power set of , and Pos is a set function defined on P ( ) . The set function Pos is said to be a possibility measure (Wang, 1982; Klir, 1999)[9][18] if it satisfies the following conditions: Pos 1) Pos ( ) =0, and Pos ( ) =1; Pos 2) Pos ( iI Ai ) = supiI Pos ( Ai ) for any subclass { Ai i I } of P ( ) , where I is an arbitrary index set. Based on possibility measure, a self-dual set function, called credibility measure Cr, is defined as (Liu and Liu, 2002)[12] Cr ( A ) = (1+Pos ( A ) Pos ( A )) The triplet (, ( ), Cr ) is called a credibility space (Liu, 2006)[19]. Definition 1: Let (, ( ), Cr ) be a credibility space. If = (1 , 2 ,
n

1 2

, n ) is a function from to the

space , then it is called a fuzzy vector. As n = 1 , it is called a fuzzy variable. The possibility distribution of is defined as (Liu, 2006)[19]

(t1 ,
for any (t1 , denoted by 1 of the subset
n
n

t n ) = min{2Cr{ 1 ( ) = t1 ,

n ( ) = t n },1}
, n , and

, t n ) . It is also called the joint possibility distribution of fuzzy variables 1 , (t1 , , tn ) .


, tm )
m

Let be an m ary fuzzy vector. The support of the fuzzy vector , denoted , is defined as the closure
Cr { ( ) } = 1 .

{(t ,
1

(t1 ,

, tm ) > 0

of , it is the smallest closed subset of


m

such that

An m ary fuzzy vector is said to be bounded if its support is a bounded subset of .

3. The formulation of two-stage fuzzy MPMP production planning models


In this section, we will construct the formulation of two-stage fuzzy minimum risk model of an MPMP problem. The characteristic of this manufacturing system can be summarized as follows, which was also considered by Bakir and Byrne (1998)[5] in a stochastic decision system. There are N types of products that are produced in the system, and the decision of production levels to meet market demand with the minimum cost must be taken for T periods. The demand for each product in each period is not known with certainty and is characterized by a fuzzy variable with known possibility distribution. The profit and cost coefficients that are used in the model objective function consists of net production profit, inventory carrying and shortage costs. Some of the cost coefficients are not known exactly and assume to be represented by fuzzy variables. The manufacturing system is assumed to be a flow type but not a pure one. There are K machining centers in the systems, and each center is provided with one machine. All products trace the same route, and may skip some machining centers. A station is not to be visited more than once. The transfer times of parts between stations are negligible. The process times of each machine center are supposed to be deterministic. The setup times for each product in each machine center are included in the process times. Notations: xit : the amount of product i to be produced in period t ;
cit : the unit net production profit or revenue of product i in period t ; I it : the amount of positive inventories of product i at the end of period t ;
+

30

I it : the amount of negative inventories or shortages of product i at the end of period t ;

qit : the unit cost of positive inventories of product i in the period t ; qit : the unit cost of negative inventories or shortages of product i in the period t ; aik : the process time of product i on the machine center k ;

d it : the demand for the i th product in t th period; MCkt : the capacity of machine center k in period t ; W : the recourse identity matrix.

In this paper, we present a new class of fuzzy two-stage production planning problem, in which there are two optimization problems to be solved. The first-stage decision variables xit , which represent the amount of product
i to be produced in period t are fixed before observation of fuzzy event , here refer to the realization of

fuzzy variables coefficients. The second-stage decision variables I it are fixed after observation of fuzzy event

.
According to this scheme, we present a N -product T -period production planning problem. The second-stage problem is formulated by assuming xit and to be fixed, and is as follows
Qt ( xit , ( )) = min qit ( ) I it s.t . WI it = Axit + WI it 1 d it ( )
I it = I it I it
+ + i

qit ( ) = qit ( ) qit ( ) I it 0, I it 0, qit ( ) 0, qit ( ) 0 i = 1,2,


, N , t = 1,2, ,T .
+ +

(1)

and ( ) is obtained by piecing together the fuzzy components of the second-stage problem data qit ( ) and d it ( ) in problem (1). The solution of the model involves deciding what products should be produced and how much of each product to produce. The inventory at the end of period t is available for withdrawal by the next period, and provides the linkage between the periods of the multi-period model. There is no relation between the shortages in periods t and t 1 . That is, backlogging is not allowed. Suppose that the decision variable xit has to satisfy the following deterministic constraints:
aik xit MCkt , xit 0.

Now we want to speak about the solution of production planning problem, it is necessary to introduce additional constraint on xit . Let K be the set of all those xit vectors for which problem (1) has a feasible solution for almost every possibly realized fuzzy event . Then K can be expressed as
K = x | x

NT

, Cr Q ( x, ( )) < = 1 ,

} }

where is the fuzzy vector obtained by piecing together the fuzzy demands and some of fuzzy costs, and the vector x = ( x11 , x12 ,
T

, x NT ) . Qt ( xit , ( )) is the optimal value of

problem (1) at fixed xit and ( ) ,

and is usually called second-stage value function in two-stage fuzzy programming with recourse (Liu, 2005)[14]. 1 1 N The vector Q ( x, ( )) = (Q1 ( x11 , ( )), Q2 ( x12 , ( )), , QT ( xNT , ( ))) . But when the Axit + WI it 1 d it ( ) , the second-stage value function Qt ( xit , ( )) = min qit ( ) I it ; on the other hand, when Axit + WI it 1 < d it ( ) , the second-stage value function Qt ( xit , ( )) = min qit ( ) I it . So the problem (1) has always optimal value at each i NT and t , thus we have K = . Denote z ( x, ( )) = iN 1 T=1[cit xit Qti ( xit , )] . With a given threshold z , the excess credibility functional = t
31
0

QC ( x ) = Cr iN 1 T=1[ cit xit Qti ( xit , ( ))] > z = t

} = Cr { z ( x, ( )) > z } measures the credibility of revenue


0

profit z ( x, ( )) exceeding z . As a consequence, the first-stage of production planning problem is formulated as follows
max QC ( x )

s.t . aik xit MCkt xit 0 i = 1, 2,


, N , t = 1, 2, ,T.

(2)

Combining problems can be built as

(1) and (2), a fuzzy two-stage minimum risk MPMP production planning problem
max QC ( x )

s.t . aik xit MCkt xit 0 i = 1, 2,


, N , t = 1, 2, ,T.

(3)

where QC ( x ) = Cr z ( x, ( )) > z
i

} , and

Qt ( xit , ( )) = min qit ( ) I it s.t . WI it = Axit + WI it 1 d it ( )


I it = I it I it
+ +

qit ( ) = qit ( ) qit ( ) I it 0, I it 0, qit ( ) 0, qit ( ) 0 i = 1,2,


, N , t = 1,2, ,T .
+ +

(4)

Note that credibility measure has self-duality property. If we denote C ( x ) = 1 QC ( x ) , then the above fuzzy two-stage minimum risk MPMP production planning problem can be rewritten as the following equivalent form
min C ( x ) s.t . aik xit MCkt xit 0 i = 1, 2, , N , t = 1, 2, ,T.

(5)

where C ( x ) = Cr z ( x, ( )) z
i

} , and

Qt ( xit , ( )) = min qit ( ) I it s.t . WI it = Axit + WI it 1 d it ( )


I it = I it I it
+ +

qit ( ) = qit ( ) qit ( ) I it 0, I it 0, qit ( ) 0, qit ( ) 0 i = 1,2, , N , t = 1,2, ,T .


+ +

(6)

Because problem (5) and (6) include fuzzy variables coefficients which are often with infinite supports, they are inherently infinite-dimensional optimization problems that can rarely be solved directly. Thus algorithms to solve such optimization problems must rely on intelligent computing and approximation scheme, which results in approximating finite-dimensional optimization problems. The issues of approximation approach and heuristic algorithm will be discussed in the next section.
32

4. An approximation approach to credibility functions


In this section, we first discuss the approximation approach of a fuzzy vector with an infinite support by a sequence of primitive fuzzy vectors with finite supports. In order to solve fuzzy two-stage minimum risk MPMP production planning problem, it is required to evaluate the credibility function
C : x Cr z ( x, ( )) z

(7)

where is the fuzzy vector obtain by piecing together the fuzzy demands and some of fuzzy costs. For any given feasible decision x , we can evaluate the value of the credibility function (7) at x according to the following methods. Suppose that = (1 , 2 ,
, )
m

is a continuous fuzzy vector with the following infinite support

= m=1 [ a j ,b j ] , where m = NT + l , 1 l NT , and [ a j ,b j ] is the support of j . Then we will try to employ j

the approximation approach (Liu, 2006)[17] to approximate the possibility distribution function of by a sequence of possibility distribution functions of discrete fuzzy vectors { s } . The detailed approach can be described as follows. For each t {1, 2, as follows
k k g s ,t (ut ) = sup{ k Z , s.t . ut }, ut [ at , bt ] , s s

For each integer s , the discrete fuzzy vector s = s ,1 , s ,2 ,

, m} , define fuzzy variables s ,t = g s ,t ( t ) for s = 1, 2,

, s ,m

is constructed as follows. , where the function g s ,t is

and Z is the set of integers. From the construction of s ,t , for each , we have t ( ) s ,t ( ) <
1 . Note that and s are s

m ary fuzzy vectors, and t and s ,t are their t th components, respectively. Then we have

s ( ) ( ) = tm 1( s ,t ( )t ( ))2 , , = s
which implies that the sequence { s } of fuzzy vectors converges to fuzzy vector uniformly. In what follows, we refer to the sequence { s } of discrete fuzzy vector as the discretization of the fuzzy vector . The convergence of the approximation approach is ensured by the following theorem. As a consequence, the original credibility function Cr z ( x, ( )) l can be estimated by the approximating credibility function
Cr z ( x , n ( )) z

provided that n is sufficiently large.

Theorem 1: Consider fuzzy two-stage minimum risk MPMP production planning problem (5) and (6). Suppose the fuzzy variables coefficients is a continuous and bounded fuzzy vector, and the sequence { n } of fuzzy vectors is the discretization of , then for every feasible solution x , we have
n

lim Cr{ z ( x, n ( )) z } = Cr{ z ( x, ( )) z }


0

provided that the function Cr{ z ( x, ( )) l} is continuous at l = z . Proof: Since for any feasible solution x and every realization ( ) of fuzzy variables coefficients , z ( x, ( )) is not , which together with the suppositions of the theorem satisfy the conditions of Theorem 1 (Liu, 2006)[17]. As a consequence, the theorem is valid.

5. Heuristic algorithms and numerical examples


In this section, we incorporate approximation approach, neural network (NN) and simulated annealing to
33

produce a heuristic algorithm for solving the two-stage minimum risk MPMP production planning problem. In the preceding we have discussed the computation of the credibility function C ( x ) by approximation approach. It is easy to see that the discretization of fuzzy variables coefficients is a time consuming process since in each iteration of discretization, we have to solve the programming (7) in the second-stage many times. To speed up the solution process, we desire to replace the credibility function C ( x ) by a trained NN, and design a heuristic algorithm, which can be summarized as follows. Algorithm 1: A heuristic algorithm 0 Step 1. Generate a set of input-output data for a credibility function C : x Cr{ z ( x, ( )) z } by the proposed approximation approach; Step 2. Train an NN to approximate the credibility function C ( x ) by the generated data; Step 3. Set the initialized temperature and generate initialized state. Then calculate its corresponding objective value by the trained NN; Step 4. Check whether the outer-circulation terminated rule is satisfied. If yes, then report the optimal solution and the optimal value; otherwise, go to the next step; Step 5. Generate a new state by the state-produced function and calculate its corresponding objective value by the trained NN; Step 6. Check whether the new state is accepted by the state-accepted function. If yes, then update the current state; otherwise, go to the next step; Step 7. Check whether the inner-circulation terminated rule is satisfied. If yes, then go to the next step; otherwise, return to step 5; Step 8. Check whether the outer-circulation terminated rule is satisfied. If yes, then report the optimal solution and the optimal value; otherwise, update the temperature and return to step 7. In order to show the feasibility and effectiveness of the heuristic algorithm, we will consider the following production planning problem with N = T = 3 , K = 4 , and the demands and costs are assumed to be triangular fuzzy variables. The required data set for this manufacturing system is covered in Tab.1. The possibility distributions of fuzzy demands and costs are provided in Tab.1. If a decision maker takes the level z = 20000 , then the fuzzy two-stage minimum risk production planning problem is formulated as follows
$$\begin{array}{ll} \min & C(x) \\ \text{s.t.} & \sum_{i} a_{ik} x_{it} \le MC_{kt} \\ & x_{it} \ge 0, \quad i = 1,2,3,\ t = 1,2,3,\ k = 1,2,3,4, \end{array} \qquad (8)$$

where $C(x) = \mathrm{Cr}\{z(x,\xi(\gamma)) \le 20000\}$, and

$$\begin{array}{ll} Q_t(x_{it}, \xi(\gamma)) = \min & \sum_i \left( q_{it}^{+}(\gamma) I_{it}^{+} + q_{it}^{-}(\gamma) I_{it}^{-} \right) \\ \text{s.t.} & I_{it} = x_{it} + I_{i,t-1} - d_{it}(\gamma) \\ & I_{it} = I_{it}^{+} - I_{it}^{-} \\ & I_{it}^{+} \ge 0,\ I_{it}^{-} \ge 0,\ q_{it}^{+}(\gamma) \ge 0,\ q_{it}^{-}(\gamma) \ge 0, \quad i = 1,2,3,\ t = 1,2,3, \end{array} \qquad (9)$$

where

$$z(x, \xi(\gamma)) = 150x_{11} + 140x_{12} + 130x_{13} + 100x_{21} + 110x_{22} + 120x_{23} + 115x_{31} + 125x_{32} + 135x_{33} - \sum_{i=1}^{3}\sum_{t=1}^{3} Q_t(x_{it}, \xi(\gamma)).$$

In order to solve this problem, for any feasible solution $x$ we generate 3000 sample points via the approximation approach to estimate the value of the credibility function

$$C: x \mapsto \mathrm{Cr}\{z(x,\xi(\gamma)) \le 20000\},$$

i.e., we are required to solve the second-stage programming problem 3000 times to obtain the second-stage values $z(x, \zeta_n(\gamma_s))$ for $s = 1, 2, \ldots, 3000$. Then the value of $C(x)$ at $x$ can be computed by the definition of credibility.
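A minimal sketch of this final step is given below, assuming the discretized samples carry possibility weights produced by the approximation approach; it uses the credibility identity $\mathrm{Cr}\{A\} = \tfrac{1}{2}(\mathrm{Pos}\{A\} + 1 - \mathrm{Pos}\{A^c\})$, and all names and data are illustrative.

```python
import numpy as np

def credibility_leq(z_vals, poss, z0):
    """Estimate Cr{ z <= z0 } from discretized sample values z_vals with
    possibility weights poss, via Cr = (Pos{A} + 1 - Pos{A^c}) / 2."""
    z_vals, poss = np.asarray(z_vals), np.asarray(poss)
    hit = z_vals <= z0
    pos_a = poss[hit].max() if hit.any() else 0.0       # Pos{ z <= z0 }
    pos_ac = poss[~hit].max() if (~hit).any() else 0.0  # Pos{ z >  z0 }
    return 0.5 * (pos_a + 1.0 - pos_ac)

# usage with placeholder second-stage values and possibilities
z_samples = np.random.uniform(15000, 25000, 3000)
weights = np.random.uniform(0.0, 1.0, 3000)
print(credibility_leq(z_samples, weights, 20000.0))
```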
Tab.1 The data set for production planning problem

The demand matrix $(d_{it})$ (rows: products; columns: periods 1-3):
Product 1: (30,35,40)  (36,42,46)  (25,30,38)
Product 2: (33,36,42)  (32,37,40)  (35,39,45)
Product 3: (22,28,33)  (20,26,33)  (36,41,46)

The production profit matrix $(c_{it})$:
Product 1: 150  140  130
Product 2: 100  110  120
Product 3: 115  125  135

The cost matrix $(q_{it}^{+})$:
Product 1: (10,15,20)  (10,16,22)  (10,17,24)
Product 2: 15  16  17
Product 3: 10  11  12

The cost matrix $(q_{it}^{-})$:
Product 1: (85,90,95)  (95,100,105)  (105,110,115)
Product 2: 70  80  90
Product 3: 85  95  105

The machine centre capacities $(MC_{kt})$: 400 for every machine centre MC1-MC4 in every period 1-3.

The machine process times $(a_{ik})$ (rows: machine centres; columns: products 1-3; the MC4 row is not given):
MC1: 1.5  2.5  4
MC2: 3    5    4
MC3: 3    2    2


Tab.2 The optimum production levels

Product:            X11  X12  X13  X21  X22  X23  X31  X32  X33
Production level:    36   32   29   22   33   42   15   17   31

Using the method described above, we generate 3000 input-output data to train an NN to approximate the credibility function $C(x)$. The trained NN is then embedded into a simulated annealing algorithm to produce the heuristic algorithm, as sketched below. With the inner loop repeated 3000 times and the outer loop terminating at a preset final temperature, the temperature being reduced at an exponential rate with parameter $\alpha = 0.999$, the optimum production plan reported in Tab.2 has objective value 0.100738. That is, the minimum credibility of $z(x,\xi(\gamma)) \le 20000$ is 0.100738, and because the credibility measure has the self-duality property, the maximum credibility of $z(x,\xi(\gamma)) > 20000$ is 0.899262.
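A minimal sketch of this NN-surrogate annealing loop follows, assuming scikit-learn for the network. The training pairs stand in for the data produced by the approximation approach, the machine-capacity constraints are omitted for brevity, and all numeric settings besides $\alpha = 0.999$ are illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Steps 1-2: train a surrogate NN on (x, C(x)) pairs; this placeholder data
# stands in for the 3000 samples produced by the approximation approach.
X_train = rng.uniform(0, 50, size=(3000, 9))       # 9 decision variables x_it
y_train = rng.uniform(0, 1, size=3000)             # placeholder credibilities
surrogate = MLPRegressor(hidden_layer_sizes=(20,), max_iter=1000).fit(X_train, y_train)

def C_hat(x):
    """Approximate credibility C(x) via the trained NN instead of simulation."""
    return float(surrogate.predict(x.reshape(1, -1))[0])

# Steps 3-8: simulated annealing with exponential cooling (alpha = 0.999);
# the paper uses 3000 inner repetitions, smaller settings are used here.
x = rng.uniform(0, 50, size=9)
best_x, best_val = x.copy(), C_hat(x)
T, T_end, alpha, inner = 1.0, 0.9, 0.999, 100
while T > T_end:
    for _ in range(inner):
        cand = np.clip(x + rng.normal(0, 1, size=9), 0, None)  # state-producing
        delta = C_hat(cand) - C_hat(x)
        # state-accepting rule: take improvements, worse moves w.p. exp(-delta/T)
        if delta < 0 or rng.random() < np.exp(-delta / T):
            x = cand
            if C_hat(x) < best_val:
                best_x, best_val = x.copy(), C_hat(x)
    T *= alpha                                      # exponential cooling
print(best_x, best_val)
```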

6. Conclusion
In this paper, we have presented a new type of fuzzy two-stage minimum risk MPMP production planning problem. Since the possibility distribution of the fuzzy coefficient vector has an infinite support, the minimum risk MPMP production planning problem is inherently an infinite-dimensional optimization problem that cannot be solved directly. Therefore, a heuristic algorithm combining the approximation approach, a neural network (NN), and simulated annealing was designed to solve the proposed production planning problem, and a numerical example was provided to illustrate the effectiveness of the heuristic algorithm.
References

[1] Lin D.C., Yao J.S. Fuzzy economic production for production inventory. Fuzzy Sets and Systems, 2000, 111: 465-495
[2] Hodges S.D., Moore G. The product-mix problem under stochastic seasonal demand. Management Science, 1970, 17(2): B107-B114
[3] Wu C.C., Chang N.B. Gray input-output analysis and its application for environmental cost allocation. European Journal of Operational Research, 2003, 145: 175-201
[4] Hsu H.M., Wang P.Z. Possibilistic programming in production planning of assemble-to-order environments. Fuzzy Sets and Systems, 2001, 119: 59-70
[5] Bakir M.A., Byrne M.D. Stochastic linear optimisation of an MPMP production planning model. International Journal of Production Economics, 1998, 55: 87-96
[6] Zadeh L.A. Fuzzy sets as a basis for a theory of possibility. Fuzzy Sets and Systems, 1978, 1: 3-28
[7] Dubois D., Prade H. Possibility Theory. New York: Plenum Press, 1988
[8] Nahmias S. Fuzzy variables. Fuzzy Sets and Systems, 1978, 1: 97-101
[9] Wang P. Fuzzy contactability and fuzzy variables. Fuzzy Sets and Systems, 1982, 8: 81-92
[10] Shih L.H. Cement transportation planning via fuzzy linear programming. International Journal of Production Economics, 1999, 58: 277-287
[11] Wang R., Fang H. Aggregate production planning with multiple objectives in a fuzzy environment. European Journal of Operational Research, 2001, 133: 521-536
[12] Liu B., Liu Y.-K. Expected value of fuzzy variable and fuzzy expected value models. IEEE Transactions on Fuzzy Systems, 2002, 10(4): 445-450
[13] Liu Y.-K., Wang S. Theory of Fuzzy Random Optimization. Beijing: China Agricultural University Press, 2006 (in Chinese)
[14] Liu B. Uncertainty Theory: An Introduction to Its Axiomatic Foundations. Berlin: Springer-Verlag, 2004
[15] Liu Y.-K. Fuzzy programming with recourse. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 2005, 13(4): 381-413
[16] Yuan G.Q., Liu Y.-K. Two-stage fuzzy optimization of an MPMP production planning model. In: Proceedings of 2006 International Conference on Machine Learning and Cybernetics, Dalian, China, 13-16 August 2006: 1685-1690
[17] Liu Y.-K. Convergent results about the use of fuzzy simulation in fuzzy optimization problems. IEEE Transactions on Fuzzy Systems, 2006, 14(2): 295-304
[18] Klir G.J. On fuzzy-set interpretation of possibility theory. Fuzzy Sets and Systems, 1999, 108: 263-273
[19] Liu B. A survey of credibility theory. Fuzzy Optimization and Decision Making, 2006, 5(4): 387-408


Optimizing Designs to Improve the Blockage Behavior of Emergency Networks


Wu Weiwei1, Angelika Ning2, Ning Xuanxi2
1 Civil Aviation College, Nanjing University of Aeronautics and Astronautics; 2 College of Economics and Management, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, P.R.China

Abstract For transport networks, communication networks, and especially emergency evacuation networks, moving objects tend not to obey any predetermined rule; each flow is stochastic and uncontrollable. The network may then fail to reach a maximum flow and settle instead at an undesired maximal flow, called a saturated flow, which reflects how inefficiently the network is utilized. It is therefore important to consider the values of the saturated flows when designing the network. In this paper a model for optimizing designs is proposed to improve the blockage behavior of originally designed networks. Three network examples are chosen to illustrate the optimization procedure. Results show the rationality and feasibility of the optimized designs. Key words Emergency network, Saturated flow, Blockage

1. Introduction
The maximum flow problem is often discussed in classical network flow programming [1]. In the corresponding model, the flow distributions on arcs are normally supposed to be controllable; that is, arc flow values can be increased or decreased arbitrarily as long as the capacity restriction and conservation requirements are satisfied. However, this is not consistent with reality. In the real world, networks such as transport networks, communication networks, and especially emergency evacuation networks provide evacuation routes for crowds. In emergency situations, moving objects tend to move randomly. They are easily blocked at some nodes (called structurally-blocking nodes in [3]), and no one is willing to withdraw even a little. In this case the flow is saturated and cannot be increased any more. It is described as a saturated flow or a blocking flow. This flow is often smaller than the maximum flow [2,3]. The smaller the flow, the worse the flow capability of the network. When the flow is the minimum saturated flow, the blockage of the network is at its worst; this flow shows the network in its most inefficient state. Therefore, the minimum saturated flow is as important as the maximum flow in the process of designing communication or transportation networks. Some algorithms for solving the minimum saturated flow are presented in [4-7]. The concept of blocking flows was first presented in the maximum-flow algorithm of Dinitz [8]; blocking flows appear as intermediate values in maximum-flow algorithms [8-11]. Ning [2] first put forward the blocking flow model in which blocking flows are studied as possible flow distributions in emergency evacuation and similar situations. Additionally, some Japanese researchers have also studied blocking flows, calling them uncontrollable flows [12]. Ning [3] presented balanced networks, in which the capacity differences of all nodes except the source and the sink are zero. Balanced networks are similar to proper networks, in which every saturated flow is equal to the maximum flow [13]. In these two kinds of networks, no blockage can occur at any node. Considering that the flow can only be blocked at structurally-blocking nodes, an optimization design to decrease the possibility of blockages in a network is discussed in this paper. M. Iri [12] designed a model to improve the blockage of networks but gave no detailed analysis. In this paper, an optimization design model reducing the blockage in networks is introduced. The goal is to keep the costs as low as possible, subject to the lower and upper capacity restrictions, while changing the arc capacities of the original design so as to eliminate, or at least reduce, the structurally-blocking nodes of the network and thus ensure its efficiency.


2. Preliminaries
For a better understanding of the blocking flow theory, some principles are introduced as follows:

Definition 2.1 The sum of the capacities of all arcs outgoing from the source is defined as the incoming flow of the network.

Definition 2.2 If a path exists in a network from the source to the sink in which each arc is directed forward and unsaturated, it is described as a forward directed augmenting path [3].

Definition 2.3 If there exists no forward directed augmenting path for a feasible flow in a network, this feasible flow is called a saturated flow of the network.

Definition 2.4 A saturated flow is said to be a blocking flow of the network if its flow value is smaller than the incoming flow of the network.

Definition 2.5 A saturated flow is defined as the minimum saturated flow if it is the minimum of all saturated flows of the network.

Definition 2.6 The capacity difference $\Delta_A$ at node $A$ is defined as the difference between the sum of the capacities of all arcs with $A$ as their initial node and that of all arcs with $A$ as their terminal node, i.e.

$$\Delta_A = \sum \left[ c(a) \mid v_i(a) = A \right] - \sum \left[ c(a) \mid v_j(a) = A \right] \qquad (1)$$

where $c(a)$ denotes the capacity of arc $a$, $v_i(a)$ the initial node of arc $a$, and $v_j(a)$ the terminal node of arc $a$.

Definition 2.7 A network is called a normalized network if the following condition is satisfied for all nodes except the source and sink: the capacity of each arc outgoing from node $v_i$ is less than or equal to the total sum of the capacities of all arcs incoming to node $v_i$. (The networks considered in this paper are normalized networks.)

For example, in the network shown in Figure 1 all arc capacities are 1, node s is the source and node t is the sink. The number in parentheses beside each node is the capacity difference of the node, and the numbers $x_{ij}/c_{ij}$ beside the arcs are flow/capacity. In Figure 1(a) the flow is the maximum flow. In Figure 1(b) there exists no forward directed augmenting path from the source to the sink, so the flow is saturated. Its value is 1, obviously smaller than the maximum flow 2. The flow on arc sB is unsaturated, but the arcs outgoing from node B are saturated. If the flow is distributed according to Figure 1(b), it cannot be increased any more. The capacity difference of node B is negative, so it is a structurally-blocking node of the network.
Figure 1 The blockage phenomenon of the network: (a) the maximum flow; (b) the minimum saturated flow

Definition 2.8 A network is called a blocking network if there exists a saturated flow of the network smaller than the maximum flow.

As shown in Figure 1(b), a blockage in a network can only happen when structurally-blocking nodes exist. Eliminating structurally-blocking nodes, or reducing their negative capacity differences as much as possible, is critical to avoid the occurrence, or lower the probability, of blockages. In the process of designing a network, avoiding structurally-blocking nodes is vital in order to decrease the probability of blockages and to enhance the flow capability of the network.
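As a minimal illustration of Definition 2.6, the sketch below computes the capacity difference of every intermediate node from an arc list and flags the structurally-blocking (negative-difference) nodes; the arc data reproduce the Figure 1 example, and the function name is our own.

```python
def blocking_nodes(arcs, source, sink):
    """Return the structurally-blocking nodes (capacity difference < 0).
    arcs: list of (tail, head, capacity) triples, per equation (1)."""
    delta = {}
    for tail, head, c in arcs:
        delta[tail] = delta.get(tail, 0) + c   # outgoing capacities add
        delta[head] = delta.get(head, 0) - c   # incoming capacities subtract
    return {v: d for v, d in delta.items()
            if v not in (source, sink) and d < 0}

# Figure 1 example with unit capacities: node B has capacity difference -1
arcs = [("s", "A", 1), ("s", "B", 1), ("A", "B", 1), ("A", "t", 1), ("B", "t", 1)]
print(blocking_nodes(arcs, "s", "t"))   # {'B': -1}
```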


3. The optimization design for improving the blockage behavior of emergency networks


A network N = (V, E, s, t, c) is considered, with s the source, t the sink, V a finite vertex set and E a finite arc set. For the original design of network N the following are known:
1. the originally designed capacities of the arcs and their upper (lower) capacity limits;
2. the incoming flow of network N, i.e. the possible maximum flow value from the source to the sink;
3. the construction cost per unit capacity of each arc in network N.

Now the originally designed arc capacities are to be changed so that the capacity differences of the nodes (except for the source and the sink) are not negative. The problem of preventing blockages in network N therefore reduces to adjusting the originally designed arc capacities at the lowest cost such that the capacity differences of the nodes (except for the source and the sink) are larger than or equal to zero, i.e. $\tilde{\Delta}_v \ge 0,\ \forall v \in V$. This paper discusses the optimization of the originally designed scenario of a network; increasing or decreasing the arc capacities is always relative to the original design. We assume that the cost of increasing the unit capacity equals the cost of decreasing the unit capacity for the same arc; this cost is denoted $b(e_{ij})$ for arc $e_{ij}$. We introduce the variable

$$\Delta c(e_{ij}) = \tilde{c}(e_{ij}) - c(e_{ij}) \qquad (2)$$

where $c(e_{ij})$ denotes the originally designed capacity of arc $e_{ij}$ and $\tilde{c}(e_{ij})$ the capacity of arc $e_{ij}$ in the

optimized design. The objective function to be minimized is expressed as follows:


$$\min \sum \left\{ b(e_{ij}) \cdot \Delta c(e_{ij}) \mid e_{ij} \in E \right\} \qquad (3)$$

Subject to:

Nodes with negative capacity differences:

$$\tilde{\Delta}_{v_i} = \sum_j \left\{ \Delta c(e_{ij}) \mid e_{ij} \in E \right\} - \sum_j \left\{ \Delta c(e_{ji}) \mid e_{ji} \in E \right\} + \Delta_{v_i} = 0, \quad v_i \in V \qquad (4)$$

Nodes with nonnegative capacity differences:

$$\Delta_{v_i} + \sum_j \left\{ \Delta c(e_{ij}) \mid e_{ij} \in E \right\} - \sum_j \left\{ \Delta c(e_{ji}) \mid e_{ji} \in E \right\} \ge 0, \quad v_i \in V \qquad (5)$$

Restriction on the incoming flow of network N:

$$\sum_j \tilde{c}(e_{sj}) = F, \quad v_j \in V \qquad (6)$$

Restrictions on flow values and arc capacities:

$$x(e_{ij}) \le \tilde{c}(e_{ij}), \quad e_{ij} \in E \qquad (7)$$

$$mc_2(e_{ij}) \le \tilde{c}(e_{ij}) \le mc_1(e_{ij}), \quad e_{ij} \in E \qquad (8)$$

where F is the incoming flow of network N, $x(e_{ij})$ is the flow on arc $e_{ij}$, and $mc_1(e_{ij})$, $mc_2(e_{ij})$ are the upper and lower limits of the capacity of arc $e_{ij}$.

4 Examples for the optimization process


In this section, three examples are given to analyze the optimization design process. The four parameters of each arc are $(c(e_i), mc_1(e_i), mc_2(e_i), b(e_i))$, where $c(e_i)$ is the capacity of arc $e_i$ and $b(e_i)$ is the cost of increasing or decreasing the unit capacity of arc $e_i$. The numbers in parentheses beside each node are the capacity differences of the nodes.
4.1 Optimizing the design of simple network 1
The requested incoming flow of the network is F = 10, as shown in Figure 2.


Figure 2 Optimizing the design of network 1: (a) the original design; (b) the optimized design. Each arc is labeled $(c(e_i), mc_1(e_i), mc_2(e_i), b(e_i))$, and the number in parentheses beside each node is its capacity difference.

To improve the blockage behavior of network 1, we adjust the capacity differences of the nodes at the lowest cost so that, finally, the structurally-blocking node is eliminated from network 1. The optimization model is formulated as follows. The objective function (the cost to be minimized) is

$$\min \sum \left\{ b(e_i) \cdot \Delta c(e_i) \mid e_i \in E \right\} \qquad (9)$$

Subject to:

$$5 + \Delta c(e_3) + \Delta c(e_4) - \Delta c(e_1) \ge 0 \qquad (10)$$

$$\Delta c(e_5) - \Delta c(e_2) - \Delta c(e_3) = 5 \qquad (11)$$

$$\tilde{c}(e_1) + \tilde{c}(e_2) = F, \quad \text{where } e_1 \text{ and } e_2 \text{ are the incoming arcs of the network} \qquad (12)$$

$$mc_2(e_i) \le c(e_i) + \Delta c(e_i) \le mc_1(e_i), \quad e_i \in E \qquad (13)$$

Using the above formulation, we programmed this example with the software Lingo. The target value of this model is -5, i.e. the cost of the optimized design of network 1 is 5 units lower than the cost of the original design. The results are shown in Table 1.
Table 1 Comparing the optimized design with the original design of network 1

                                     Capacity distribution of the arcs (e1,e2,e3,e4,e5)   Maximum flow   Minimum saturated flow   Construction cost
Parameters of the original design    (5,5,5,5,5)                                          10             5                        45
Parameters of the optimized design   (7,3,2,5,5)                                          10             10                       40
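A minimal sketch of this linear program for network 1 is given below, assuming the PuLP library. The unit costs b are read from the partly garbled Figure 2 and should be treated as illustrative assumptions; with these values the sketch reproduces the published optimum.

```python
import pulp

# Network 1 (Figure 2): original capacities c0, bounds [mc2, mc1] = [2, 7],
# incoming flow F = 10; unit adjustment costs b as read from Figure 2.
arcs = ["e1", "e2", "e3", "e4", "e5"]
c0 = {a: 5 for a in arcs}
b = {"e1": 1, "e2": 2, "e3": 1, "e4": 2, "e5": 3}
mc1, mc2 = 7, 2

prob = pulp.LpProblem("network1_redesign", pulp.LpMinimize)
dc = {a: pulp.LpVariable(f"dc_{a}", lowBound=mc2 - c0[a], upBound=mc1 - c0[a])
      for a in arcs}                                    # capacity bounds (13)

prob += pulp.lpSum(b[a] * dc[a] for a in arcs)          # objective (9)
prob += 5 + dc["e3"] + dc["e4"] - dc["e1"] >= 0         # node A, constraint (10)
prob += dc["e5"] - dc["e2"] - dc["e3"] == 5             # node B, constraint (11)
prob += dc["e1"] + dc["e2"] == 0                        # incoming flow stays F (12)

prob.solve()
print({a: c0[a] + dc[a].value() for a in arcs})         # expect e1:7 e2:3 e3:2 e4:5 e5:5
print("cost change:", pulp.value(prob.objective))       # expect -5
```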

4.2 Optimizing the design of network 2
The requested incoming flow of the network is F = 15, as shown in Figure 3. According to formulas (2)-(8), the target value of this model is -32, i.e. the cost of the optimized design of network 2 is 32 units lower than the cost of the original design. The results are shown in Table 2.
Figure 3 Optimizing the design of network 2 (arcs labeled $(c(e_i), mc_1(e_i), mc_2(e_i), b(e_i))$, nodes with their capacity differences)


Table 2 Comparing the optimized design with the original design of network 2

                              Capacity distributions of the arcs (e1,...,e9)   Maximum flow   Minimum saturated flow   Construction cost
Original design parameters    (8,7,5,9,2,9,6,5,10)                             14             9                        163
Optimized design parameters   (10,5,2,8,2,9,2,8,7)                             15             15                       131

4.3 Optimizing the design of network 3
The requested incoming flow of the network is F = 17, as shown in Figure 4. According to formulas (2)-(8), the target value of this model is 26, i.e. the cost of the optimized design of the network is 26 units higher than the cost of the original design. The results are shown in Table 3.
Figure 4 Optimizing the design of network 3 (arcs labeled $(c(e_i), mc_1(e_i), mc_2(e_i), b(e_i))$, nodes with their capacity differences)


Table 3 Comparing the optimized design with the original design of network 3

                              Capacity distributions of the arcs (e1,...,e14)   Maximum flow   Minimum saturated flow   Construction cost
Original design parameters    (6,8,3,7,3,2,1,3,3,2,6,2,6,10)                    6              3                        87
Optimized design parameters   (8,4,5,2,1,10,3,1,3,1,7,9,10,10)                  17             17                       113

Analyzing the three network examples above, we see that the originally designed scenarios can be optimized. In the optimized designs the structurally-blocking nodes are eliminated: the capacity differences of nodes with originally negative capacity differences are raised to zero, while the capacity differences of nodes with originally nonnegative capacity differences are kept nonnegative. In the optimized networks the flow cannot be blocked at any node, and the minimum saturated flow equals the maximum flow.

5 Conclusions
Network design is involved in many fields such as computer and communication systems, transportation systems, etc. Traditionally, the network flow is considered controllable and distributed according to demand, i.e. networks are designed and operated in an optimum way, so the maximum flow is usually studied in classical network programming. In reality, and especially in emergency networks, every moving unit tends to disobey any rule and the flow is stochastic. A blockage therefore easily takes place at a structurally-blocking node, in which case the blocking flow is smaller than the maximum flow of the network; the minimum of all blocking flows is called the minimum saturated flow. In the process of designing a network it is important to avoid a blockage structure (i.e. to ensure there exist no structurally-blocking nodes in the network) so as to improve the blockage behavior of the network. How can the originally designed network be optimized at the lowest cost? In the practical alteration of networks it is critical to modify the arc capacities so that the capacity differences of the structurally-blocking nodes are increased to zero, while the capacity differences of the other nodes are restricted between zero and their originally designed values. Networks optimized in this way have lower utilization than balanced networks: some arcs have redundant capacities when the saturated flow reaches the maximum flow of the network. However, this avoids the possibility of blockage and keeps the construction cost of the network low, which makes the method more applicable. Research on models for improving the blockage behavior of emergency networks has practical significance: it serves as a guide on how to design networks that prevent blockage, and it can also be extended to the reconstruction of existing networks.
References

[1] Ford L.R., Fulkerson D.R. Flows in Networks. NJ: Princeton University Press, 1962
[2] Ning X., Shi Y. The maximum flow problem of a network in the emergency situation. In: Proceedings of the Second International Conference on Systems Science and Systems Engineering. Beijing: International Academic Publishers, 1993: 555-558
[3] Ning Xuanxi, Ning Angelika. The Blocking Flow Theory and Its Application to Hamiltonian Graph Problems. Shaker Verlag, 2006
[4] Ning Xuanxi. Two-way flow-augmenting algorithm for solving the minimum flow problem of a network. Systems Engineering, 1997, 15(1): 50-57 (in Chinese)
[5] Ning Xuanxi. The minimum flow problem in a network and its Branch-and-Bound method. Systems Engineering, 1996, 14(5): 61-66 (in Chinese)
[6] Shigeno M., Takahashi I., Yamamoto Y. Minimum maximal flow problem: an optimization over the efficient set. Journal of Global Optimization, 2003, 25: 425-443
[7] Le Dung Muu. D.C. optimization methods for solving minimum maximal network flow problem. www.mmm.muroran-it.ac.jp/~shi/MuuShi.pdf
[8] Dinic E.A. Algorithm for solution of a problem of maximum flow in networks with power estimation. Soviet Mathematics Doklady, 1970, 11: 1277-1280
[9] Tarjan R.E. Data Structures and Network Algorithms. Philadelphia, PA: Society for Industrial and Applied Mathematics, 1983
[10] Goldberg A.V., Rao S. Beyond the flow decomposition barrier. In: Proc. 38th Annual IEEE Symposium on Foundations of Computer Science (FOCS 1997): 2-11
[11] Atallah M.J. Algorithms and Theory of Computation Handbook. CRC Press LLC, 1999: 7.12-7.14
[12] Iri M. Theory of uncontrollable flows: a new type of network-flow theory as a model for the 21st century of multiple values. Computers Math. Applic., 1998
[13] Lin Yixun, Li Xianglu, Deng Junqiang. The minimum saturated flow problem in emergency networks. OR Transactions, 2001, 5(2)


An Approach to Solve Large Scale Multiple Traveling Salesman Problem with Balanced Assignment
Wang Dazhi, Wang Dingwei
College of Information Science and Engineering, Northeastern University, P.R.China, 110004

Abstract A special case of the multiple traveling salesman problem, which requires the number of cities each traveling salesman visits to be the same and the total tour length to be minimized, is solved by the Lin-Kernighan algorithm with a two-stage procedure. The computational results are satisfactory. Key words TSP, MTSP, Lin-Kernighan algorithm

1. Introduction
The traveling salesman problem (TSP) is a classical problem in the combinatorial optimization field, and the multiple traveling salesman problem (MTSP) is an extension of the TSP. A special case of the MTSP can be defined as follows. Given N cities, M traveling salesmen, and the distance matrix $D = (d_{ij})_{n \times n}$ giving the distance between any two cities, each of the M traveling salesmen must visit some cities and each city may be visited by exactly one salesman, exactly once. The starting city of each route is not fixed and a salesman need not return to his starting city. If $d_{ij} = d_{ji}$ for all $i \ne j$, the case is described as symmetric MTSP, otherwise asymmetric MTSP.

2. Solution method
In general, this problem can be viewed as a two-level optimization problem: one level is how to divide the N cities into M groups, and the other is the route optimization for each group. Compared with the classical TSP, this problem is harder to solve and its solution space is larger. One possible approach is the virtual-node method [1]. For example, assume 3 salesmen and 13 cities, and suppose one possible solution is: the route sequence of the first salesman is (1, 13, 7, 5, 8), the second is (2, 12, 9), and the third is (11, 6, 3, 4, 10). After adding virtual nodes 14, 15, 16, setting the distance between each original node and each virtual node to zero and the distance between any two virtual nodes to infinity, the problem is transformed into a traditional TSP. The corresponding solution is (1, 13, 7, 5, 8, 14, 2, 12, 9, 15, 11, 6, 3, 4, 10, 16), as sketched below.
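A minimal sketch of this virtual-node transformation; the function name and the finite stand-in for "infinite" distance are our own choices.

```python
import numpy as np

def mtsp_to_tsp(d, m, big=1e9):
    """Append m virtual nodes to an n-city distance matrix d: distance 0
    between every original node and every virtual node, 'infinite' distance
    between any two virtual nodes, turning the MTSP into an (n+m)-city TSP."""
    n = d.shape[0]
    ext = np.zeros((n + m, n + m))
    ext[:n, :n] = d                 # original distances unchanged
    ext[n:, n:] = big               # forbid virtual-virtual arcs
    np.fill_diagonal(ext, 0.0)
    return ext

# the 13-city, 3-salesman example above becomes a 16-city TSP
d = np.random.rand(13, 13); d = (d + d.T) / 2; np.fill_diagonal(d, 0.0)
print(mtsp_to_tsp(d, 3).shape)      # (16, 16)
```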

3. Computation algorithm and analysis


Algorithms for solving the TSP are divided into two kinds: exact and approximation algorithms. Considering execution time, approximation algorithms are usually used to solve large scale TSPs. Lin-Kernighan is one of the most efficient local search algorithms for the TSP [2], and many computational instances show that it has high performance and efficiency. In this paper, all computational instances are cited from TSPLIB, and the number of salesmen is 3, 5, and 7 respectively. The computational results are shown in Tab. 1.
Tab. 1 Computation result when M = 3, 5, 7

             Name      Scale   M=3      M=5      M=7
Symmetric    bier127   127     95592    87562    80283
             ts225     225     117960   113562   110656
             rat783    783     8708     8650     8597
Asymmetric   kro124p   100     33655    32247    30915
             ftv170    171     2498     2368     2272
             rbg443    443     2621     2555     2489

Key Project Supported by National Natural Science Foundation of China (70431003); Innovative Research Team Project Supported by National Natural Science Foundation of China (60521003)


The rat783 instance is a problem with 783 cities. When M = 7, the optimized total route length is 8597. The number of cities each salesman traveled is 2273259147198318 respectively. Fig. 1 shows the route sequence of each salesman.

Fig. 1 route sequence of example Rat783 when M=7

Minimizing the total tour length alone often leads to a result in which some salesmen visit very few cities while others visit many; this is an unbalanced result. The objective of minimizing the total tour length often conflicts with the objective of balancing the tasks [3]. In this paper we propose a two-stage method to obtain a balanced solution: the first stage is the optimization procedure and the second is the regroup and re-optimization procedure. The detailed steps are as follows.
First step: use the Lin-Kernighan algorithm to find a solution that minimizes the total tour length, keep the sequence of each salesman's travel route, and then discard the virtual nodes. After this step we have a TSP tour. To obtain a balanced solution, we cut the M edges whose total length is maximal, subject to the constraint that the number of cities between any two adjacent cut edges is N/M. This is the regroup procedure (sketched below); Tab. 2 gives the calculation results for all computational instances.
Second step: after the regroup procedure the original optimized route is damaged, so the new travel route of each salesman is optimized again. Fig. 2 shows the travel route of each salesman after the first step, and Fig. 3 the final optimized routes.
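A minimal sketch of the regroup step under the stated balance constraint, assuming N is divisible by M; the function name is our own.

```python
def balanced_regroup(tour, d, m):
    """Cut m edges of a TSP tour so that each of the m segments contains
    n/m cities, choosing the rotation offset whose m cut edges have maximum
    total length (the regroup rule described above)."""
    n = len(tour)
    seg = n // m                      # cities per salesman (assumes m divides n)
    def cut_length(r):                # total length of the m edges cut at offset r
        return sum(d[tour[(r + k * seg - 1) % n]][tour[(r + k * seg) % n]]
                   for k in range(m))
    best_r = max(range(seg), key=cut_length)
    return [[tour[(best_r + k * seg + i) % n] for i in range(seg)]
            for k in range(m)]

# usage: split a 9-city tour among 3 salesmen (toy distances)
tour = [0, 4, 7, 2, 8, 5, 1, 6, 3]
d = [[abs(i - j) for j in range(9)] for i in range(9)]
print(balanced_regroup(tour, d, 3))
```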

Fig. 2 route sequence of example Rat783 when M=7


Fig. 3 route sequence of example Rat783 when M=7


Tab. 2 Computation result (balanced) when M = 3, 5, 7

             Name      Scale   M=3 balance   M=5 balance   M=7 balance
Symmetric    bier127   127     109272        135931        119947
             ts225     225     132637        138981        150792
             rat783    783     9206          9982          9727
Asymmetric   kro124p   100     37398         39222         37773
             ftv170    171     3008          2875          2813
             rbg443    443     2637          2645          2602

4. Conclusion
For a special case of the MTSP that requires a balanced assignment, we propose a two-stage method based on the Lin-Kernighan algorithm. The computational results show that the method is suitable and efficient. The computational instances are all cited from TSPLIB and include both symmetric and asymmetric problems; the results are satisfactory.
References

[1] Tang L., Liu J., Rong A., Yang Z. A multiple traveling salesman problem model for hot rolling scheduling in Shanghai Baoshan Iron & Steel Complex. European Journal of Operational Research, 2000, 124(2): 267-282
[2] Lin S., Kernighan B.W. An effective heuristic algorithm for the traveling salesman problem. Operations Research, 1973, 21: 498-516
[3] Lu H., Wang H. Divide into equal task of MTSP. System Engineering, 2005, 23(2): 19-21


Optimization Methods and Models

Supplier Selection for N-Product under Uncertainty: A Simulation-Based Optimization Method
Wu Chunxu, Dong Feifei, Fu Jianbing
Management School, University of Science and Technology of China, Hefei, 230026, China chunxuwu@ustc.edu.cn, ffdong@mail.ustc.edu.cn, jpfu@mail.ustc.edu.cn

Abstract The paper considers the supplier selection problem for N products under uncertainty, that is, variable supplier lead times and stochastic demand. The problem is a multi-level problem involving both the strategic and the operational level, and the inventory system of the supply chain (SC), which consists of the costs of ordering and receiving orders, the purchasing cost and the shortage cost, is the key factor in the problem. A simulation-based optimization method is proposed to solve it. More specifically, an inventory model under variable lead time and random demand, such as demand with a normal distribution, is presented, and a genetic algorithm with discrete-event simulation is developed to find satisfactory suppliers from a set of pre-selected candidates and, at the same time, decide the order strategy, including the reorder level and order quantity. A numerical example illustrates the method. Keywords Supplier selection, Genetic algorithm, Simulation-based optimization, Discrete-event simulation

1 Introduction
The issue of single-sourcing versus multiple-sourcing procurement systems has been discussed widely as a result of the success of Japanese purchasing practices. Uncertainty is one of the most important factors that make the decision difficult [1], and supplier selection for N products deserves attention because it is closer to the real situation. A simulation-based optimization method, built on an inventory model under variable lead time and random demand, such as demand with a normal distribution, is presented to solve the problem. The model considers the inventory system of the supply chain, which consists of the cost of setting up orders, the cost of receiving orders, the purchasing cost and the shortage cost. Finally, a genetic algorithm with discrete-event simulation is developed to find satisfactory suppliers from a set of pre-selected candidates; the order strategy is determined at the same time.

2 Problem Description
2.1 Flow
Figure 1 shows a cycle of the inventory level vs. time for one product, say the jth product [2,3,4,5]. At first there is some inventory of the product. When it drops to a certain level, the reorder level, new orders of the product are sent out to the selected suppliers; a cycle starts at this time. After the lead times of the suppliers, the inventory is replenished by each delivery, while demand continually draws the inventory down. If the inventory reaches zero before the next delivery, a shortage occurs and results in lost demand. In a manufacturing firm the assembly line is stopped during the shortage, so the inventory of all other products is unaffected and stays at its current level; this situation lasts until the next delivery arrives. It is assumed that the inventory rises above the reorder level after the last delivery, since otherwise the system cannot form a renewal process and the decomposition analysis based on a cycle does not work. When the inventory drops to the reorder level again, new orders are sent out once more. This is the end of the cycle and the start of a new cycle for the product.


Figure 1: Inventory of one product vs. time (one cycle: reorder level s, deliveries arriving after each supplier's lead time, from the first delivery up to the maximum lead time)

2.2 Assumptions
1) The order policy (s, Q), where $s_j$ is the reorder level of the jth product and $Q_{ij}$ is the order quantity of the jth product from the ith supplier, is adopted for the inventory system.
2) After a cycle of the jth product, the inventory of the jth product is above the reorder level $s_j$; this assumption keeps the inventory system working continually.
3) Lead times are random variables, independent of each other but not identically distributed.
4) The unit-time demand for the finished product is a random variable too, and the demands for the supplied products are pro rata to it (for instance, a stool needs one seat and four legs).
5) There is a capacity limit on each product for every supplier.
2.3 Notations
M: number of suppliers. n: number of products. i: supplier index. j: product index.
$s_j$: reorder level of the jth product.
$\delta_{ij}$: equals 1 if the ith supplier is selected for the jth product, 0 otherwise.
$Q_{ij}$: order quantity of the jth product from the ith supplier in a cycle of the jth product; $Q_{ij} = 0$ if $\delta_{ij} = 0$ and $Q_{ij} > 0$ otherwise.
$Q_j$: order quantity of the jth product in a cycle of the jth product, $Q_j = \sum_{i=1}^{m} Q_{ij}$.
$L_{ij}$: lead time of the ith supplier for the jth product.
d: unit demand of the finished product. $d_j$: unit demand of the jth product, $d_j = \lambda_j d$, where $\lambda_j$ is the pro-rata usage ratio.
$K_{1j}$: cost of sending out orders for the jth product to the selected suppliers with total quantity $Q_j$.
$K_{2ij}$: cost of accepting a delivery from the ith supplier for the jth product.
$P_{ij}$: price offered by the ith supplier for the jth product.
$C_{ij}$: capacity of the ith supplier for the jth product in one year.
$h_j$: rate of inventory holding cost.
$q_{ij}$: perfect rate of the ith supplier for the jth product. $q_{aj}$: minimum accepted perfect rate of the jth product.
$\pi_j$: unit shortage cost of the jth product.

3. Model Formulation
A. The cost of sending out orders and accepting deliveries for the jth product in a cycle is $K_{1j} + \sum_{i=1}^{m} K_{2ij}$.

B. The inventory holding cost for the jth product in a cycle is $h_j \sum_{t=0}^{T} I_j(t)$, where

$$I_j(t) = \left(I_j(t-1) + Q_j(t) - d_j(t)\right)^{+}, \quad \text{if } I_{j'}(t-1) + Q_{j'}(t) > d_{j'}(t),$$

$$I_j(t) = \left(I_j(t-1) + Q_j(t) - \lambda_j \left(I_{j'}(t-1) + Q_{j'}(t)\right)/\lambda_{j'}\right)^{+}, \quad \text{if } I_{j'}(t-1) + Q_{j'}(t) < d_{j'}(t) \text{ and } I_{j'}(t-1) + Q_{j'}(t) > 0,$$

$$I_j(t) = I_j(t-1) + Q_j(t), \quad \text{if } I_{j'}(t-1) + Q_{j'}(t) < 0,$$

where $j'$ denotes a product in shortage.

C. The shortage cost for the jth product in a cycle is $\pi_j \sum_{t=0}^{T} S_j(t)$, where $S_j(t) = \left(d_j(t) - I_j(t-1) - Q_j(t)\right)^{+}$.

D. The purchasing cost for the jth product in a cycle is $\sum_{i=1}^{m} P_{ij} Q_{ij}$.

E. Constraints:
Capacity constraints: $\sum_{t=0}^{Y} Q_{ij}(t) \le C_{ij}$.
Quality constraints: $\sum_{i=1}^{m} q_{ij} Q_{ij} / Q_j \ge q_{aj}$.
Renewal process constraints: $I(T) > s$.
Variable constraints: $s > 0$, $d > 0$, $Q_{ij} \ge 0$, $\delta_{ij} \in \{0, 1\}$.
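Because the cycle cost above has no closed form, it is estimated by simulation. Below is a minimal sketch for a single product under exponential lead times and normal demand, matching the distributional choices of the numerical example; all parameter values in the call are illustrative placeholders rather than the case data.

```python
import numpy as np

rng = np.random.default_rng(1)

def cycle_cost(s, Q, lead_mean, price, K1, K2, h, pi, lam, T=120):
    """Simulate one (s, Q) cycle of one product: each supplier i with Q[i] > 0
    delivers after an exponential lead time; daily demand is normal, scaled by
    the pro-rata ratio lam.  Returns ordering + purchasing + holding +
    shortage cost over the horizon T (deliveries after T are ignored)."""
    m = len(Q)
    arrival = rng.exponential(lead_mean, size=m)            # lead times L_ij
    cost = K1 + sum(K2[i] for i in range(m) if Q[i] > 0)    # order / receive
    cost += sum(price[i] * Q[i] for i in range(m))          # purchasing
    inv = float(s)
    for t in range(T):
        inv += sum(Q[i] for i in range(m) if int(arrival[i]) == t)
        d = max(rng.normal(20, np.sqrt(10)), 0.0) * lam     # demand d_j(t)
        shortage = max(d - inv, 0.0)
        inv = max(inv - d, 0.0)
        cost += h * inv + pi * shortage                     # holding + shortage
    return cost

# illustrative call: reorder level 600, ordering from suppliers 1 and 3
print(cycle_cost(600, [400, 0, 300], 28.0, [0.32, 0.35, 0.29],
                 285, [150, 192, 403], 0.040, 0.35, 1.2))
```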

4 The simulation optimization based genetic algorithm


It is unrealistic to solve this problem by linear programming, since the single-objective program is non-linear. The genetic algorithm provides an optimization approach that improves the non-linear program step by step to find the optimal or a satisfactory solution [6]. Moreover, the random variables in the model are handled by simulation: the lead times and the product demands are simulated according to their probability distributions. Since the values of $Q_{ij}$ are related to $\delta_{ij}$, a mixed optimization method is used for the supplier selection problem, as it reduces the range to be searched: first search over $Q_{ij}$ for a given $\delta_{ij}$ to optimize the objective, then search over $\delta_{ij}$ using the optimized values of $Q_{ij}$. The resulting $\delta_{ij}$ and $Q_{ij}$ constitute the optimal or satisfactory solution.
4.1 Initialization
The first stage of the optimization process is to generate an initial population of solutions, whose number equals the population size in the GA options. The values of these solutions are selected randomly within the range option, in order to find the best solution more efficiently. The next step in the initialization phase is to estimate the fitness.
4.2 Evaluation
Before the crossover and mutation operators are used to create new generations, chromosomes that are too bad need to be excluded, and the problem is how to obtain their fitness values. To estimate the fitness of a chromosome it is necessary to run the simulation model for each of the solutions represented by the chromosomes. Before running the simulation, the database containing all previous simulation results should be checked to see whether that particular solution has already been run; this is important since simulation time is costly. If the parameter configuration is contained in the database, its output can be used to estimate the chromosome fitness; if it is not found, the simulation model has to be run to estimate the output, as sketched below.
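A minimal sketch of this cached fitness evaluation; run_simulation is a placeholder standing in for the discrete-event model, and the dictionary plays the role of the results database.

```python
import json

fitness_cache = {}          # stands in for the database of simulation results

def run_simulation(chromosome):
    # placeholder for the discrete-event simulation (e.g. the cycle costs above)
    return float(sum(chromosome))

def fitness(chromosome):
    """Look the solution up before simulating, since simulation time is costly."""
    key = json.dumps(chromosome, sort_keys=True)
    if key not in fitness_cache:
        fitness_cache[key] = run_simulation(chromosome)
    return fitness_cache[key]

print(fitness([1, 0, 1, 1962, 0, 767]))   # a repeated call would hit the cache
```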
4.3 Search
The third stage of the proposed optimization algorithm is the search phase. Through the stochastic processes of selection, crossover, and mutation the population finds new and improved solutions. Selection is the process of choosing which alternatives will be used to generate the new population. It is based on selective probabilities assigned to each alternative according to its rank within the population, often referred to as ordinal-based selection; this type of selection is often preferred over proportional selection based on the alternatives' relative fitness (Miller, 1997) [7]. Crossover is the process of mixing two chromosomes to create two offspring and is the primary search mechanism for finding new and improved solutions. Mutation is a background operation that ensures the search can recover behavioral information lost from the chromosomes. The search phase continues until the stopping criterion is met.
4.4 Stopping Criteria
The last step in the methodology is to check whether the GA has found a best or satisfactory solution. The first possibility is that the GA has reached total convergence, which happens when all chromosomes in a generation are exactly the same; in this case the process should be stopped, since no further improvement can be reached. This is not the most common case in real-world problems, so other stopping criteria are necessary. The simplest is related to the amount of time or the number of generations: the process is stopped when this constraint is met and the best solution reached so far is selected as the final solution. Other stopping criteria are related to the improvement observed between generations; for example, if no significant improvement is observed over a number of consecutive generations, the GA should be stopped.

5 A numerical example
An illustrative numerical example is given in this section, based on the supply chain of a manufacturer and its supplier selection process. There are three products for assembly, supplied by three suppliers. Table 1 shows the parameters of the model. The lead time for each product is exponentially distributed, with mean value 28 for product 1, 23 for product 2 and 29.5 for product 3, and the demand rate is random under a normal distribution with mean 20 and variance 10; these data are deduced from the firm's historical information. Solving the model with the Gatool of Matlab gives the results shown in Table 2. The main advantage of such an analysis for the purchasing department is that the manager can select the best supplier(s); moreover, the approach provides the order policy (s, Q) at the operational level.
Table 1: Parameters for the model

Per-product parameters:
Product   K1j   hj      qaj     πj     λj
1         285   0.040   0.875   0.35   1.2
2         253   0.037   0.881   0.28   4.6
3         311   0.041   0.877   0.33   7.3

Per-supplier parameters (K2ij, Pij, Cij, qij):
Product 1 - supplier 1: 150, 0.32, 3200, 0.91; supplier 2: 192, 0.35, 2500, 0.87; supplier 3: 403, 0.29, 3500, 0.86
Product 2 - supplier 1: 35, 0.28, 1700, 0.90; supplier 2: 315, 0.32, 3900, 0.89; supplier 3: 12, 0.50, 2000, 0.95
Product 3 - supplier 1: 269, 0.36, 2800, 0.91; supplier 2: 255, 0.45, 2900, 0.92; supplier 3: 134, 0.37, 3100, 0.89

6 Conclusion and future research


This paper has presented a new approach to supplier selection for N products under uncertainty. The approach integrates a simulation model with a stochastic genetic algorithm heuristic and a goal programming model. Future research on its applicability to more situations is needed to continue this work, including the application of the methodology to test-bed cases, the selection of the most appropriate GA settings, and further study of multiple comparison techniques.

Table 2: Solution for the problem

The original table lists, per product and supplier, the selection variables δij, the reorder levels sj and the order quantities Qij, together with the total cost TC = 2.7942e4. The recoverable values are: product 1 - s1 = 1962, order quantities 767 and 1335 from the selected suppliers; product 2 - s2 = 664, order quantity 3679; product 3 - s3 = 650, order quantities 1941 and 2621.

References

[1] Dotoli M., Fanti M.P., Meloni C., Zhou M.C. A decision support system for the supply chain configuration. In: IEEE International Conference on Systems, Man and Cybernetics, Oct. 5-8, 2003, Volume 3: 2667-2672
[2] Pan J.C.-H., Lo M.-C., Hsiao Y.-C. Optimal reorder point inventory models with variable lead time and backorder discount considerations. European Journal of Operational Research, 2004, 158(2): 488-505
[3] Basnet C., Leung J.M.Y. Inventory lot-sizing with supplier selection. Computers & Operations Research, 2005, 32(1): 1-14
[4] Zarandi M.H.F., Saghiri S. A comprehensive fuzzy multi-objective model for supplier selection process. In: The 12th IEEE International Conference on Fuzzy Systems (FUZZ'03), 2003, Volume 1: 368-373
[5] Kawtummachai R., Van Hop N. Order allocation in a multiple-supplier environment. International Journal of Production Economics, 2005, 93-94: 231-238
[6] Hedlund H.E., Mollaghasemi M. A genetic algorithm and an indifference-zone ranking and selection framework for simulation optimization. In: Proceedings of the Winter Simulation Conference, 2001, Volume 1: 417-421
[7] Miller B.L. Noise, Sampling, and Efficient Genetic Algorithms. IlliGAL Report No. 97001. University of Illinois at Urbana-Champaign, Illinois Genetic Algorithms Laboratory, Urbana, IL, 1997


A Feasible Case Study of Applying Critical Chain Multi-Project Management in Semiconductor Turnkey Services
His-Hsien Yu, Sheng-Kuan Chiu
Kunming University of Science and Technology and National Chiao Tung University

Abstract We present a semiconductor supply chain model constructed under the concepts of the Theory of Constraints (TOC) [1] and Critical Chain Multi-Project Management [2] to monitor and control fabless companies' manufacturing orders throughout the entire supply chain. The objective of Critical Chain Multi-Project Management is to assure that projects are delivered on time; whether it is applicable for assuring that orders, viewed as tasks or projects, are delivered on time in semiconductor turnkey services deserves study. There are two major concepts in Critical Chain Multi-Project Management: the project buffer and buffer management in multi-project operation. The former is used to calculate the due date while the latter serves as the monitoring and control mechanism. Because the project buffer strongly influences the due-date performance, it is necessary to verify the feasibility of the estimate of the project buffer proposed by Dr. Goldratt. The concept that Dr. Goldratt arbitrarily divided the project buffer into three equal zones for buffer management [2], which has been criticized by many researchers, needs to be justified too. A case with real-world data is employed in this study, and a simulation model built in eM-Plant is used for the experiments. Results show that the estimate of the project buffer may not be proper, while buffer management works well in this typical case. Key words Critical Chain Multi-Project Management, Theory of Constraints, Supply Chain Management

1. Introduction
Critical Chain Multi-Project Management, a method of multi-project management, is employed in this study to monitor and control fabless companies' manufacturing orders throughout the entire semiconductor supply chain; its objective is to make sure those orders are delivered on time. As the IC design houses release their manufacturing orders to the foundries, we view the orders as tasks or projects. However, whether Critical Chain Multi-Project Management is applicable to semiconductor turnkey services deserves study. There are two major concepts in Critical Chain Multi-Project Management: the project buffer and buffer management. The former can be used to calculate the due date while the latter serves as the monitoring and control mechanism. Because the project buffer strongly influences the due-date performance, verifying the feasibility of the estimate of the project buffer proposed by Dr. Goldratt [2] is necessary. Similarly, his arbitrary division of the project buffer into three equal parts for buffer management [2] (see Fig.1), which has been criticized by many researchers, needs to be justified too.

Fig.1 CCPM mechanism: the remaining project buffer (PB) is divided into three equal zones, OK (zone 3, 100%-67% remaining), Watch & Plan (zone 2, 67%-33%), and Act (zone 1, 33%-0%)


2. LITERATURE REVIEW
In this part we briefly review turnkey services in the semiconductor industry and then focus on Critical Chain Multi-Project Management.
2.1 Turnkey Service
There are many different definitions and types of turnkey services. Wu [3] classified two basic kinds depending on who is in charge: one is controlled by the foundry FAB, the front end, and the other by assembly and final testing, the back end. The turnkey service managed by the FAB is completed at the last stop, final testing. This is the so-called one-stop-shopping turnkey service and is the most common definition in the semiconductor industry. It is also the turnkey service scenario that we use when building the experimental model.
Fig.2 One-stop shopping turnkey service [3]: the IC design house has only one communication channel, to the FAB, which is obliged to pass information along the chain CP, Assembly, FT, and finished goods

2.2 Critical Chain Project Management
Critical Chain Multi-Project Management provides a substantial step in the continuous improvement of the Project Management Body of Knowledge (PMBOK). It derives from applying TOC and statistical theory to the project system, and it provides a conceptually simple and practical process to plan and manage projects by using buffers as an immediate and direct measurement tool to control the project schedule [4]. Fig.3 shows the locations of the buffers. The aggregated project buffer inserted at the end of the critical chain provides for contingencies on all critical activities, while a feeding buffer, placed where a non-critical path feeds into the critical chain, prevents delay of the execution of work on the critical chain when work on a non-critical path is delayed [5]. The main intention of these buffers is to protect projects from strikes of Murphy's Law: if anything can go wrong, it will. In other words, they protect against contingencies and exceptions.
Fig.3 Feeding buffer and project buffer [5]: the feeding buffer follows the non-critical activity where it joins the chain of critical activities, and the project buffer ends the critical chain

2.3 Buffer Management
Buffer management in Critical Chain Multi-Project Management controls the project by flagging: the risk of a delay is measured by the extent to which the buffers have been consumed, so the project buffer and the feeding buffers should be constantly monitored [5]. The decision levels are stated in terms of the buffer size, measured in days: (a) within the first third of the buffer, no action; (b) penetration into the middle third of the buffer, assess the problem and plan for action; and (c) penetration into the last third, initiate action (see Fig.4).

Fig.4 Example of using the buffers [4]: for both the project buffer and the feeding buffers, consumption within the first third calls for no action, within the middle third for planning, and within the last third for action
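A minimal sketch of this three-zone flagging rule; the function name is our own.

```python
def buffer_action(consumed_fraction):
    """Map the consumed share of a (project or feeding) buffer to the
    three CCPM decision zones described above."""
    if consumed_fraction < 1/3:
        return "zone 3 (OK): no action"
    elif consumed_fraction < 2/3:
        return "zone 2 (Watch & Plan): assess the problem and plan for action"
    return "zone 1 (Act): initiate action"

print(buffer_action(0.5))   # -> zone 2 (Watch & Plan)
```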

3. METHODOLOGY
This study treats turnkey services as one of the critical programs of the foundry fabrication plant (FAB). The turnkey service is assumed to be managed by its production control (PC) staff, whom we view as project managers in Critical Chain Multi-Project Management. Our objective is to ensure a high average on-time-delivery percentage (AOTDP) through the use of the concepts of the Theory of Constraints and Critical Chain Multi-Project Management. Critical Chain Multi-Project Management has been proven to outperform the PERT/CPM approach when dealing with constrained resources, but its applicability to semiconductor turnkey services needs to be justified.
3.1 Estimates of Project Buffer and Feeding Buffer
There are two methods to estimate the size of the buffers: (1) the cut-and-paste method and (2) the root-square-error (RSE) method. Detailed illustrations, tutorial information and statistical background can be found at http://www.pdinstitute.com/ [6]. We use the root-square-error method instead of the cut-and-paste method due to its implicitness. The RSE method requires two estimates of task duration from each individual contributor. One is the safe estimate, which makes the estimator feel at ease when making a commitment; in other words, this estimate must contain enough padding. The other is the average estimate, the one without any padding or protection; here the contributor should assume that the task will be worked at a full level of effort, with no interruption by contingencies or external factors. This serves as the expected value (see Fig.5).

Fig.5 The two required estimates of the RSE method [6]: the safe estimate (S) and the average estimate (A)

The difference between these two estimates, denoted D, is treated as the magnitude of the uncertainty in the average estimate: D = safe estimate (S) - average estimate (A). For each task of a project, D is used to calculate the most likely error, or uncertainty, in the project duration; the detailed calculation is shown in Fig.6. The RSE method assumes independence between tasks. Moreover, one major flaw of this method is that the safe estimates are subjective and depend on the contributors' judgment; in other words, the reliability of the data provided for each task is not guaranteed. This does not seem sound and may be criticized.


Fig.6 Illustration of the RSE method [6]: (1) collect the average estimates $A_1, \ldots, A_n$; (2) compute the buffer size as $\sqrt{D_1^2 + D_2^2 + D_3^2 + \cdots + D_n^2}$ (n = 8 in the illustration); (3) insert the project buffer after the last task $A_n$

Thus, we modify the RSE method as follows.
3.2 The revised RSE method
First, we still use the historical data from each plant, collecting the mean and standard deviation of the cycle time. Second, according to the Central Limit Theorem (CLT), the sampling distribution approaches the normal distribution as n approaches infinity; in practice, the normal distribution provides an excellent approximation to the sampling distribution of the mean for n as small as 25 or 30, with hardly any restrictions on the shape of the population [7]. We then take the 80-percent-likely-to-be-done point derived from the resulting probability density function (PDF); see Fig.7 for an illustration.

Fig.7 Normal distribution with the 80-percent point

However, experienced researchers may ask (1) how we can assume that the cycle time per layer (CTPL) in the FAB obeys a normal distribution with i.i.d. layers, and (2) why we choose the 80-percent-likely-to-be-done point instead of the 70% or 90% one. For the former question, we must clarify that the cycle time per layer referred to contains processing time, down time and waiting/queue time. Each layer consists of dozens of steps. Processing time in the FAB is constant when a machine is stable and extremely high when a machine is down; it is the waiting time that is not constant. Because there are many steps within each layer, we apply the CLT and assume the layer times obey a normal distribution. Nonetheless, are the layers independent of one another? We are also skeptical about this, so we leave this controversial issue to future study and only highlight the application of Critical Chain Project Management here. For the latter question, the choice actually depends on the shop floor status: when capacity is approaching its limit we suggest using 90%, while under lower equipment utilization 70% or even 60% may be used. Here we use the 80% point as the initial searching point.
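A minimal sketch of this revised-RSE calculation, assuming SciPy; the stage means and standard deviations in the call are illustrative placeholders, not the case data.

```python
from math import sqrt
from scipy.stats import norm

def revised_rse(means, sds, p=0.8):
    """Sum the per-stage mean cycle times, aggregate standard deviations under
    independence, and read the p-quantile of the CLT-approximated normal
    total-cycle-time distribution as the due date; the excess over the mean
    is the project buffer."""
    mu = sum(means)
    sigma = sqrt(sum(s * s for s in sds))
    due = norm.ppf(p, loc=mu, scale=sigma)
    return due - mu, due              # (project buffer, due date)

# illustrative stages: FAB layers, CP, assembly, FT (days)
buffer_days, due_date = revised_rse([30, 5, 7, 4], [6, 1, 2, 1])
print(buffer_days, due_date)
```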

4. Case Study and Simulation


In this section, the procedure is organized in four steps as follows:

4.1 Step one: define the scenarios
Our turnkey service model is an intricate network under the charge of the FAB headquarters. There are six part types in this study, and each product has a predetermined routing and priority of substitute plants (see Tab.1). An upstream plant will not release all manufacturing orders to a single downstream plant: the product is released in proportion to the top-priority plant, the second one, and so on. In addition, all lots are delivered daily (8 am) from upstream to downstream, except from FAB to CP, which is assumed to be real-time releasing.
Tab.1 Routing and priority of substitute plants for each part type

Part type   From        Fabrication   Chip probing   Assembly      Final test
A           FABLESS_A   FAB_01        CP_01          AS_01,02,03   FT_01
B           FABLESS_A   FAB_01        CP_01          AS_01,02,03   N/A
C           FABLESS_B   FAB_01        CP_02          AS_04,03,02   FT_02
V           FABLESS_C   FAB_02        CP_01          AS_01,02,03   FT_01
X           FABLESS_C   FAB_02        CP_01          AS_01,02,03   FT_01
Y           FABLESS_D   FAB_02                       N/A
4.2 Definition of layer within the FAB
In the experiment, we categorize the process flow of the FAB into layers, following the definition of Shieh: from wafer start to wafer out, the entire process flow can be divided into layers cut by the bottleneck, where layer 0 is defined as the operation stages from wafer start to the first visit to the bottleneck [8].
4.3 Step two: fit historical data into the model and check the feasibility of Critical Chain Project Management
The historical data we acquired are the CTPL in the FAB and the cycle times in the CP, the assembly and the FT. However, we do not have the corresponding variances, which are needed to derive the project buffer; finding them is therefore the first move before calculating the project buffer. We fit the historical data into our model and obtain the variances. In the simulation, each product is released regularly in accordance with the product mix. The simulation period is 540 days; in order to reduce transient effects, the warm-up period is one year, and we collect data from the 366th day to the 540th day, around half a year. The results are shown in Fig.8.
Fig.8 AOTDP under different dispatching rules (FIFO, EDD, CR, CCPM3, CCPM3+CR to CCPM6+CR) with fitted data, for CONWIP levels from 1500/2000 to 1800/2400

According to Fig.8, we conclude the following. (1) Critical Chain Multi-Project Management partially works in this experimental simulation. This may indicate that the estimates of the project buffer and lead time are inappropriate, because under the legacy dispatching rule at a CONWIP level 5 percent higher we only have about a 50 percent probability of completing the lots on time. (Recall that the project buffer strongly affects the length of the due date and therefore the due-date performance.) This phenomenon may result from the imprecision of our model owing to the lack of detailed data, i.e., we model each backend plant only at the aggregate jobs-in/jobs-out level. Or it may result from the sensitiveness of the FAB: when we release a few more lots into the FAB, the cycle time ascends dramatically and degrades the performance badly. (2) The buffer management of Critical Chain Multi-Project Management, called CCPM3 in the figures, also only partially works, since its performance is merely the same as that of EDD. However, buffer management plus CR (called CCPM3+CR here) yields a superior AOTDP and obviously outperforms the other dispatching rules. We therefore asked whether dividing the project buffer into more zones would work even better, and tested the performance again with finer divisions of the project buffer, hoping to stop at the best AOTDP; the figure shows, however, that this makes no difference. (3) Since the project buffer may not be appropriate, we use simulation to designate the due date, from which a new project buffer is calculated, and then re-check the feasibility of Critical Chain Multi-Project Management and the performance under different dispatching rules. Determining the due date itself is not the focus of this study.
4.4 Step three: designate the due date and confirm it
To assign a due date, the first-in-first-out (FIFO) rule is employed as the foundation on which the confirmation and comparisons are based. We find that the throughput ceases increasing at a CONWIP level 5 percent higher, so the mean cycle time of each product (50% OTDP) is assigned as the new due date, and the difference between the sum of the average cycle times per layer and the due date is regarded as the new project buffer. We then conduct another experiment to confirm it; the results are shown in Fig.9. Fig.9 is consistent with Fig.8: CCPM3+CR again prevails over the others and achieves a higher AOTDP. In addition, even when we cut the project buffer into more sections, it does not make a significant difference (see Fig.10).
[Figure: AOTDP vs. CONWIP level for the same dispatching rules.]
Fig.9 AOTDP under different dispatching rules with designated due dates

[Figure: AOTDP for CCPM3+CR through CCPM6+CR.]
Fig.10 AOTDP with different buffer divisions

4.5 Step four: Compare the Performance with the Legacy System
Based on the results of our experimental simulation, CCPM4+CR and finer divisions show no noticeably superior results. Therefore, we stopped searching for better performance and settled on CCPM3+CR, which we use to construct our cycle-time controlling/monitoring mechanism. Before presenting it, however, we need more confidence that CCPM3+CR truly prevails over, or is distinct from, the legacy dispatching rules. Accordingly, we conduct another 30 runs with different seed values for CCPM3+CR and CR, respectively. We chose CR because it yielded better performance than the other legacy rules in this case. We then employ an independent t-test to check their performance and significance (see Tab.2). We conclude that (1) CCPM3+CR prevails over CR in two respects: (a) higher AOTDP and (b) higher throughput; and (2) we now have strong evidence (the p-value approaches zero) that CCPM3+CR is better than the legacy dispatching rule CR.
Tab.2 Main results of the independent t-test

        RULE        N    Mean       Std. Deviation   Std. Error Mean
AOTDP   CCPM3+CR    30   76.72      0.5379           9.82E-02
        CR          30   63.71      0.3223           5.88E-02
TPT     CCPM3+CR    30   11394.80   22.736           4.151
        CR          30   11349.77   10.20            1.862

5. Conclusion and Future Study

We itemize the conclusions and directions for future study as follows: (1) Critical Chain Multi-Project Management only partially works in this experimental simulation; that is, the estimates of the project buffer proposed by Dr. Goldratt may not be proper for turnkey services. This may indicate that the estimates of the project buffer and lead time are inappropriate, because the lots have only about a 50 percent probability of completing on time at a CONWIP level 5 percent above the baseline under the legacy dispatching rules. The key reason, we conjecture, is that those methods are only workable for a single project rather than in multi-project environments. (2) The buffer management of Critical Chain Multi-Project Management works well with the critical ratio (CR) as a tie-break rule for semiconductor turnkey services; this method yields superior AOTDP. In addition, we directly verified that Dr. Goldratt's division of the project buffer into 3 zones is reasonable and workable in this case: even when we divide the project buffer into more zones, there is no significant difference compared to CCPM3+CR. (3) We indirectly searched for the controlling criteria of cycle time, which are used to distinguish the state of each lot or order. The due date is designated based on the average cycle time obtained with first-in-first-out (FIFO), where 50% AOTDP is achieved; CCPM3+CR then yields an AOTDP of 76% and higher throughput (TPT) than CR in this case. (4) The underlying assumption of this study, that cycle time per layer (CTPL) follows an i.i.d. normal distribution, is debatable and remains to be verified. (5) Precise due-date acquisition: although buffer management works, it only shows that we can obtain higher OTDP and throughput. An inaccurate due date will lead to a wrong project buffer, and it will always sacrifice the products with the largest buffers. We believe this should be the main focus of future study.
References

[1] E. M. Goldratt and J. Cox, The Goal: A Process of Ongoing Improvement, New York: The North River Press, 1986.
[2] E. M. Goldratt, Critical Chain, New York: The North River Press, 1997.
[3] W. C. Wu, A Study of Turnkey Service for Semiconductor, Master Thesis, Department of Industrial Engineering and Management, National Chiao Tung University, HsinChu, Taiwan, ROC, 2001.
[4] L. P. Leach, Critical Chain Project Management Improves Project Performance, Project Management Journal, Vol. 30, No. 2, 1999, pp. 39-51.
[5] Herman Steyn, An Investigation into the Fundamentals of Critical Chain Project Scheduling, International Journal of Project Management, Vol. 19, 2000, pp. 363-369.
[6] http://www.pdinstitute.com/, Critical Chain Multi-Project Management Tutorials.
[7] I. Miller and J. E. Freund, Probability and Statistics for Engineers, 2nd Edition, New Jersey: Prentice-Hall, 1977, pp. 169-170.
[8] J. Y. Shieh, A Study of Dispatching Logic for the Photolithography Area and the Furnace Area in a Wafer Fab, Master Thesis, Department of Industrial Engineering and Management, National Chiao Tung University, HsinChu, Taiwan, ROC, 1998.

A Fuzzy Multi-Objective Programming for Optimization of Reverse Logistics for Solid Waste
He Bo, Yang Chao, Zhang Hua
School of Management, Huazhong University of Science & Technology, Wuhan, P.R.China, 430074

Abstract This paper develops a multi-objective mathematical model for locating treatment sites and transfer sites for municipal solid waste. Based on fuzzy satisfaction levels of the objectives, a two-phase fuzzy algorithm is proposed. By solving the model, we can determine the locations and numbers of these sites and how to assign the generation sites to transfer sites; a reverse network for municipal solid waste is thus constructed. Finally, an example demonstrates the effectiveness of our approach.
Key words Reverse logistics, Multi-objective optimization, Solid waste, Fuzzy algorithm

1. Introduction
Interest in reverse logistics research has grown in the last few years. From a logistical perspective, reverse logistics is the opposite of conventional forward logistics and has its own distinct characteristics [1]: re-use activities give rise to an additional flow of goods from consumers back to producers. Existing research mainly concerns product recovery, such as carpet waste [2], copy machines [3], and electronic equipment [4]; little research addresses municipal solid waste from the reverse logistics perspective. In recent years, modern society has experienced increasing public awareness of the need to minimize pollution, to improve the transport and disposal of waste and hazardous materials, and consequently to minimize the risk of accidents. An important planning problem in environmental engineering is locating landfills or treatment facilities for waste and hazardous materials. This problem usually faces conflicting goals, such as cost and risk, and is therefore difficult to formulate and solve. Although these facilities are necessary for society, they usually have a negative impact on property values or on the quality of life because of pollution. The public wants such facilities as far away as possible, so they are called undesirable or obnoxious facilities. If such facilities are located too far from the urban areas that generate most of the solid waste, the cost of transporting the waste tends to increase, so there is also a need to minimize transportation costs, which conflicts with the original objective of locating landfills far from generation regions. This has led to research on mathematical location models for undesirable facilities [5-9]. Uncertainty also frequently plays an important role in planning a reverse logistics network: the randomness of solid waste generation, estimation errors in parameter values, and the vagueness of planning objectives and constraints are all possible sources of uncertainty [10]. Chang applied fuzzy goal programming to several specific issues in integrated solid waste management in Taiwan [11-12]. Few papers focus on simultaneously locating the transfer stations and treatment stations. We therefore propose a fuzzy multi-objective integer programming formulation that incorporates three important goals with fuzzy satisfaction levels: minimization of cost, minimization of the distance between transfer stations and dwelling districts, and maximization of the distance between the treatment station and dwelling districts. Our paper is a pioneering work in this area. The paper is organized as follows. In Section 2 we define the problem and propose a multi-objective integer programming model. The model is transformed into a single unified min-max goal model in Section 3, and in Section 4 a case study based on data collected from a city in China illustrates the application of the method.

2. Model Formulation
With rapid economic development, solid waste is produced in vast quantities in China. The amount produced in China has exceeded that of the United States since 2004, making China the largest waste-generating country in the world. According to an estimate of the World Bank, China will need to build about 1400 landfills in the next twenty years. If a treatment station is in the neighborhood of a dwelling district, it imposes ill effects on the dwellers, so they want it far away from their district. If a transfer station is far from the dwelling district, however, it causes inconvenience, so the dwellers want it nearby. In short, it is very important for decision makers to choose reasonable locations for transfer stations and treatment stations while minimizing the total cost of construction and transportation.
2.1 Assumptions
We make the following assumptions: a) The solid waste generation areas are the dwelling districts; the more people, the more solid waste is generated. b) The transportation cost is linearly proportional to the transportation distance. c) The generation rate of each dwelling district is constant over the planning period. d) We will build one treatment station and several transfer stations.
2.2 Parameters and decision variables
Parameters:
$i \in I$: index for dwelling districts; $j \in J$: index for transfer stations; $k \in K$: index for treatment stations;
$H_i$: size of the population of dwelling district $i$;
$w$: amount of waste generated per day per person;
$G_k$: fixed cost of opening a treatment station at site $k$;
$F_j$: fixed cost of opening a transfer station at site $j$;
$d_{ij}$: Euclidean distance between dwelling district $i$ and transfer station $j$;
$d_{jk}$: Euclidean distance between transfer station $j$ and treatment station $k$;
$d_{ik}$: Euclidean distance between dwelling district $i$ and treatment station $k$;
$\alpha$: transportation cost per unit distance per unit amount;
$D$: maximum coverage of a transfer station.
Decision variables:
$X_j = 1$ if a transfer station is located at site $j$, and $0$ otherwise;
$Y_k = 1$ if a treatment station is located at site $k$, and $0$ otherwise;
$Z_{ij} = 1$ if district $i$ is assigned to transfer station $j$, and $0$ otherwise.

2.3 Model
The model FMOLP is formulated as follows:

$\min f_1 = \sum_{j} F_j X_j + \sum_{k} G_k Y_k + \alpha w \sum_{i}\sum_{j}\sum_{k} H_i Z_{ij} Y_k d_{jk} + \alpha w \sum_{i}\sum_{j} H_i Z_{ij} d_{ij}$  (1)

$\min f_2 = W$  (2)

$\max f_3 = P$  (3)

s.t.
$\sum_{j} Z_{ij} = 1, \quad i \in I$  (4)
$Z_{ij} \le X_j, \quad i \in I, j \in J$  (5)
$\sum_{k} d_{ik} Y_k \ge P, \quad i \in I$  (6)
$W \ge d_{ij} Z_{ij}, \quad i \in I, j \in J$  (7)
$d_{ij} Z_{ij} \le D, \quad i \in I, j \in J$  (8)
$\sum_{k} Y_k = 1$  (9)
$X_j \in \{0,1\}, \quad j \in J$  (10)
$Z_{ij} \in \{0,1\}, \quad i \in I, j \in J$  (11)
$Y_k \in \{0,1\}, \quad k \in K$  (12)

The objective (1) minimizes the total cost, including the fixed cost of opening stations and the transportation cost. Objective (2) minimizes the maximum distance between a dwelling district and its transfer station, which is defined in constraint (7). Objective (3) maximizes the minimum distance between the treatment station and the dwelling districts, which is defined in constraint (6). Constraint (4) ensures that every dwelling district is assigned to one and only one transfer station. Constraint (5) ensures that only an open transfer station can receive waste from the dwelling districts assigned to it, while constraint (8) stipulates that a generation area must not be assigned to a transfer station that cannot cover it. Constraint (9) means only one treatment station will be built. Constraints (10), (11), and (12) enforce the integrality restrictions on the decision variables.
2.4 The membership functions of the objectives
In a minimization problem, such as minimizing the distance from a transfer station to a dwelling district, the objective stated by a decision maker (DM) is that the value should at best be less than an optimistic value $f_k^+$ and should surely be less than a pessimistic value $f_k^-$. This requirement has a fuzzy nature and can be treated by introducing the following linear membership functions [13]. Let $f_1, f_2, f_3$ denote the objectives (1), (2), (3), respectively. The kth membership function is 1 if the objective achieves the optimistic value, 0 if the pessimistic value is not achieved, and linear in between. Such linear membership functions are illustrated in Fig.1.
$\mu_{f_1} = \begin{cases} 1 & \text{if } f_1 \le f_1^+ \\ \dfrac{f_1^- - f_1}{f_1^- - f_1^+} & \text{if } f_1^+ \le f_1 \le f_1^- \\ 0 & \text{if } f_1 \ge f_1^- \end{cases}$

$\mu_{f_2} = \begin{cases} 1 & \text{if } f_2 \le f_2^+ \\ \dfrac{f_2^- - f_2}{f_2^- - f_2^+} & \text{if } f_2^+ \le f_2 \le f_2^- \\ 0 & \text{if } f_2 \ge f_2^- \end{cases}$

$\mu_{f_3} = \begin{cases} 0 & \text{if } f_3 \le f_3^- \\ \dfrac{f_3 - f_3^-}{f_3^+ - f_3^-} & \text{if } f_3^- \le f_3 \le f_3^+ \\ 1 & \text{if } f_3 \ge f_3^+ \end{cases}$

Fig.1 Membership functions of the three objectives
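For concreteness, the three membership functions can be transcribed directly; the function names below are ours.

```python
# A direct transcription of the linear membership functions in Fig.1;
# f_plus is the optimistic value and f_minus the pessimistic one.

def mu_min(f, f_plus, f_minus):
    """Membership for a minimization objective (f1, f2)."""
    if f <= f_plus:
        return 1.0
    if f >= f_minus:
        return 0.0
    return (f_minus - f) / (f_minus - f_plus)

def mu_max(f, f_plus, f_minus):
    """Membership for the maximization objective (f3)."""
    if f >= f_plus:
        return 1.0
    if f <= f_minus:
        return 0.0
    return (f - f_minus) / (f_plus - f_minus)

# With the ideal values from the case study in Section 4:
print(mu_min(800.0, 684.19, 1008.90))  # cost objective satisfaction
print(mu_max(20.0, 42.06, 0.0))        # objective 3 satisfaction
```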

3. Two-phase Fuzzy Algorithm


Zimmermann proposed a fuzzy algorithm to solve the classic multiobjective linear programming problem [13]. However, if the optimal solution to the equivalent problem is not unique, his method cannot ensure that the solution obtained is optimal for the primary problem. The method of Li and Lee, based on Zimmermann's approach, remedies this drawback [14]. The steps of our approach are as follows. Let $u$, composed of $X_j$, $Y_k$, $Z_{ij}$, be a solution to the primary problem, and let the feasible region of the primary problem be denoted by $U$. In the first phase, the model FMOLP is transformed into the equivalent form FMOLP1 as follows:

$\max \; \lambda$  (13)
s.t.
$\lambda \le \dfrac{f_1^- - f_1(u)}{f_1^- - f_1^+}$  (14)
$\lambda \le \dfrac{f_2^- - f_2(u)}{f_2^- - f_2^+}$  (15)
$\lambda \le \dfrac{f_3(u) - f_3^-}{f_3^+ - f_3^-}$  (16)
constraints (4)~(11)  (17)

Solving the model FMOLP1, the optimal value $\lambda^*$ of the satisfaction level of the objective set can be obtained, along with the optimal solution $u^*$ of the equivalent problem, which is a feasible solution to the primary problem. If the optimal solution $u^*$ is unique, then it is the optimal solution to the primary problem; otherwise, a model FMOLP2 that maximizes the average satisfaction level of the objective set is formulated in the second phase as follows:

$\max \; \bar{\lambda} = \tfrac{1}{3}(\lambda_1 + \lambda_2 + \lambda_3)$  (18)
s.t.
$\lambda_1 \le \dfrac{f_1^- - f_1(u)}{f_1^- - f_1^+}$  (19)
$\lambda_2 \le \dfrac{f_2^- - f_2(u)}{f_2^- - f_2^+}$  (20)
$\lambda_3 \le \dfrac{f_3(u) - f_3^-}{f_3^+ - f_3^-}$  (21)
$\lambda_1, \lambda_2, \lambda_3 \in [0,1]$  (22)
constraints (4)~(11)  (23)

Solving this model, we obtain an efficient solution to the primary problem with the maximum average satisfaction level of the objective set.
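The two-phase scheme can be illustrated on a toy linear program; the objectives, bounds, and ideal values below are invented for illustration (the paper's FMOLP is a 0-1 program, but the phase structure is the same).

```python
# A toy illustration of the two-phase scheme with two minimization
# objectives over a small LP feasible region.
from scipy.optimize import linprog

# f1 = 2*x1 + x2, optimistic f1+ = 2, pessimistic f1- = 8
# f2 = x1 + 3*x2, optimistic f2+ = 3, pessimistic f2- = 9
# membership: lambda_k <= (f_k_minus - f_k(x)) / (f_k_minus - f_k_plus)

# Phase 1: max lambda over variables [x1, x2, lam]
res1 = linprog(c=[0, 0, -1],
               A_ub=[[2, 1, 6],     # 6*lam + f1(x) <= 8
                     [1, 3, 6],     # 6*lam + f2(x) <= 9
                     [-1, -1, 0]],  # feasibility: x1 + x2 >= 2
               b_ub=[8, 9, -2],
               bounds=[(0, 3), (0, 3), (0, 1)])
print("phase 1: lambda* =", -res1.fun)

# Phase 2: max average of (lam1, lam2) over [x1, x2, lam1, lam2]
res2 = linprog(c=[0, 0, -0.5, -0.5],
               A_ub=[[2, 1, 6, 0],
                     [1, 3, 0, 6],
                     [-1, -1, 0, 0]],
               b_ub=[8, 9, -2],
               bounds=[(0, 3), (0, 3), (0, 1), (0, 1)])
print("phase 2: x =", res2.x[:2], "lambdas =", res2.x[2:])
```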

4. Case study
To demonstrate how the model works and to verify its usefulness, the model was applied to a real-world problem facing Gongyi city in Henan Province, China. We aggregated the waste generation areas into 30 population centers through clustering. The coordinates of the 30 centers and their population sizes are shown in Tab.2. Through prescreening, 10 candidate sites for opening transfer stations were determined, as shown in Tab.1, and five candidate sites for opening treatment stations are shown in Tab.3. According to World Bank data, the amount of waste generated per day per person in China will be 1.1 kg in 2010, and the transportation cost per kilometer per ton is taken as 1.0. Lingo 8.0, an optimization package that can solve linear, integer, and nonlinear programs, is used to solve the problem. The positive ideal solution is $f_1^+ = 684.19$, $f_2^+ = 10.15$, $f_3^+ = 42.06$; the negative ideal solution is $f_1^- = 1008.90$, $f_2^- = 52.16$, $f_3^- = 0$. The optimal value of model FMOLP1 is $\lambda^* = 0.97$. Since the optimal solution is unique, the solution to model FMOLP1 is the optimal solution to model FMOLP. The results are shown in Tab.4, and Fig.2 gives a graphic representation of the solution.


Tab.1 Locations and costs of potential sites of transfer stations

No   X      Y      cost     No   X      Y      cost
1    20.58  27.25  50       6    25.97  40.89  55
2    32.36  28.59  55       7    43.57  38.65  55
3    9.58   6.51   60       8    25.23  20.25  60
4    37.54  19.31  45       9    43.04  4.97   65
5    10.14  46.21  50       10   16.08  21.71  60

Tab.2 Locations and populations of generation areas

No.  X      Y      Population(10^4)   No.  X      Y      Population(10^4)
1    15.69  3.80   1.2                16   18.61  31.99  1.1
2    18.67  14.28  0.3                17   12.8   41.71  2.6
3    12.6   9.13   1.4                18   7.13   37.98  2.2
4    7.43   11.27  2.1                19   15.78  46.81  3.7
5    5.08   5.43   1.9                20   5.69   37.24  0.5
6    11.14  10.85  1.0                21   33.51  16.72  3.2
7    28.62  20.00  2.7                22   40.38  9.07   1.6
8    24.86  29.39  2.2                23   45.29  7.87   2.2
9    33.42  35.85  1.5                24   39.51  13.92  0.6
10   27.23  21.9   2.9                25   42.82  46.24  1.8
11   37.32  27.23  2.2                26   39.28  42.90  2.9
12   46.37  36.36  2.1                27   40.68  48.38  1.4
13   14.93  32.6   1.1                28   27.30  36.77  1.7
14   18.07  43.38  2.7                29   30.59  38.45  2.1
15   9.77   40.5   1.4                30   28.43  41.33  2.3

Tab.3 Locations and costs of potential sites of treatment stations

No   X      Y      cost
1    8.58   88.25  300
2    50.36  73.59  450
3    30.58  85.51  350
4    72.54  60.31  400
5    75.14  26.21  380

Tab.4 Solutions

Transfer station: generation areas allocated to it
3: 1,3,4,5,6    4: 7,11,21,22,24    5: 15,17,18,19,20    6: 8,14,28,29,30
7: 9,12,25,26,27    9: 23    10: 2,10,13,16
Treatment station to be built: potential site 1
Objective 1: 6,943,000    Objective 2: 11.6    Objective 3: 42.1

5. Conclusions
With the fast development of economy in China, the amount of solid waste is also produced increasingly. In
63

this paper, we proposed a model to design an integrate management system for municipal solid wastes from the reverse logistics perspective. 1. The proposed model and solution procedure point to a number of directions for future work: 2. The model can be expanded to include the element of uncertainty involved in the location problem such as the amount of waste generation. The consideration of what-if scenarios involving changes in parameters values over time may be explored in the future.
[Figure: map of the solution showing generation areas, transfer stations, and the treatment station.]
Fig.2 Graphic display of the best solution

Acknowledgements This study is partially sponsored by the National Natural Science Foundation of China under Grant numbers 70471042 and 70601011.
References

[1] Fleischmann M. Quantitative Models for Reverse Logistics. Berlin: Springer-Verlag, 2001.
[2] Louwers D, Kip BJ, Peters E, Souren F, Flapper SDP. A facility location allocation model for re-using carpet materials. Computers and Industrial Engineering, 1999, 36(4): 1-15.
[3] Thierry M, Salomon M, van Nunen J, Van Wassenhove L. Strategic issues in product recovery management. California Management Review, 1995, 37(2): 114-135.
[4] Jayaraman V, Guide Jr. VDR, Srivastava R. A closed-loop logistics model for remanufacturing. Journal of the Operational Research Society, 1999, 50: 497-508.
[5] Erkut E, Neuman S. Analytical models for locating undesirable facilities. European Journal of Operational Research, 1998, 40: 275-291.
[6] List GF, et al. Modeling and analysis for hazardous materials transportation: risk analysis, routing/scheduling and facility location. Transportation Science, 1991, 25(2): 100-114.
[7] Current JR, Ratick S. A model to assess risk, equity, and efficiency in facility location and transportation of hazardous materials. Location Science, 1995, 3: 198-202.
[8] Daskin MS. Network and Discrete Location: Models, Algorithms and Applications. New York: Wiley Interscience, 1995.
[9] Melachrinoudis E, Min H, Wu X. A multiobjective model for the dynamic location of landfills. Location Science, 1995, 3(3): 143-166.
[10] Chang NB, Wei YL. Siting recycling drop-off stations in urban area by genetic algorithm-based fuzzy multiobjective nonlinear integer programming modeling. Fuzzy Sets and Systems, 2000, 114: 133-149.
[11] Chang NB, Wang SF. Managerial fuzzy optimal planning for solid waste management systems. J. Environ. Eng. ASCE, 1996, 122(7): 649-658.
[12] Chang NB, Wang SF. A fuzzy goal programming approach for the optimal planning of metropolitan solid waste management systems. European J. Oper. Res., 1997, 99(2): 287-303.
[13] Zimmermann HJ. Fuzzy programming and linear programming with several objective functions. Fuzzy Sets and Systems, 1978, 1(1): 45-55.
[14] Li R, Lee ES. Fuzzy multiple objective programming and compromise programming with Pareto optimum. Fuzzy Sets and Systems, 1993, 53: 275-288.

The FIP Problem with Uncertain Demand in Several Scenarios


Yang Jun, Liu Shuji, Xiong Jing
School of Management, Huazhong University of Science and Technology, P.R.China, 430074

Abstract In this paper a special FIP model is formulated to address the issue of locating facilities when there is uncertainty in the demand on the paths. Given several possible scenarios, the planner would like to choose a set of locations that will perform as well as possible over all future scenarios. This paper uses the maxmin approach and the regret approach to formulate two models for the FIP under uncertainty. A heuristic method is developed, and computational experiments are described and compared with solutions obtained by branch-and-bound.
Key words Facilities location, Uncertain demands, Scenarios, Heuristic

1. Introduction
The flow interception problem (FIP) was proposed by Berman [1] and Hodgson [2]. The basic FIP [1] is to locate m facilities so as to intercept as much flow as possible from a given set of pre-existing flows on a network. The FIP model assumes that the flow on each path is deterministic. During the last two decades, location under uncertainty has grown spectacularly and become a hot topic in location science. Existing papers recognize uncertainty in the demand or population at the nodes of the network, or in the travel times between nodes, which depend on the time of day or day of the week. Uncertainty has been considered in the traditional location models (p-median, center, and set-covering problems), which study how to locate facilities to satisfy demand arising at the nodes of a network; see Weaver and Church (1983a,b, 1984) [3]-[5], Mirchandani and Odoni (1979) [6], Daskin and Hagani (1984) [7], and Daskin (1982, 1983, 1987) [8]-[10]. In this paper, we consider the flow interception problem with uncertain demand on the paths. Uncertainty is treated by the classic scenario approach, in which different patterns of demand are realized in different scenarios. These types of uncertainty are especially common in logistics networks. For example, the vehicle flows in a farm-product logistics network are seasonal, which implies that an optimal location of service stations for the vehicles during the spring can be far from acceptable during the summer; both situations have to be considered. Two approaches to the FIP with uncertain demand in several scenarios are considered. First, over a range of possible demand scenarios, facilities are located so as to maximize the minimum flow intercepted over the given scenarios (the MAXMIN approach). Second, over the same range of scenarios, facilities are positioned so as to minimize the maximum regret. A similar approach was used to locate facilities in a competitive environment by Serra and ReVelle (1996) [11]. In the following section, two FIP models with uncertain demand are presented, corresponding to the MAXMIN and REGRET approaches respectively. In Section 3, a heuristic method based on a greedy algorithm is developed. In Section 4, computational experiments are described and compared with the results of branch-and-bound.

2. Problem formulation
2.1 The basic FIP model
The basic form of the FIP (Berman et al., 1992) [1] is to locate m facilities to intercept as much flow as possible from a given set of pre-existing flows in the network; a flow is intercepted if its path passes through at least one facility. The problem can be formulated as the following integer program:
(FIP1)

$\max \sum_{p \in P} f_p y_p$

This research has been supported by the National Natural Science Foundation of China (No. 70601011).


Subject to
$\sum_{j} x_j = m$  (1)
$\sum_{j \in P} x_j \ge y_p, \quad p \in P$  (2)
$y_p, x_j \in \{0,1\}, \quad p \in P, j \in N$  (3)

where $f_p$ is the demand flow along path $p$, and $P$ is the set of nodes capable of capturing the flow on path $p$;
$x_j = 1$ if a facility is located at node $j$, and $0$ otherwise;
$y_p = 1$ if there is at least one facility on path $p$, and $0$ otherwise.

The objective is to maximize the flow captured. The first constraint ensures that m facilities are located. The second set of constraints implies that, for each path in the network, $y_p = 1$ only if there is at least one facility on path $p$; otherwise $y_p = 0$. The last constraints are the binary requirements on the decision variables.
2.2 The MAXMIN FIP model with uncertain demand in k scenarios
We consider the flow interception problem in which the demand flows on each path differ across scenarios. The efficiency of the service now depends not only on the location of the facilities but also on the demand scenario. As mentioned, we use the MAXMIN and REGRET approaches to obtain the final locations. The mathematical formulation of the model using the MAXMIN approach, based on the basic FIP model, is as follows:

$\max \; F$
Subject to
$\sum_{p \in P} f_p^k y_p \ge F, \quad k \in K$  (4)
$\sum_{j} x_j = m$  (5)
$\sum_{j \in P} x_j \ge y_p, \quad p \in P$  (6)
$y_p, x_j \in \{0,1\}, \quad p \in P, j \in N$  (7)
where $K$ is the set of demand scenarios, $k \in K$, and $f_p^k$ is the demand flow along path $p$ in the kth scenario. The first set of constraints (4) is directly related to the objective: the left side of each constraint (one per scenario) represents the total flow intercepted in the corresponding scenario, and the right-hand side $F$ is the same in each constraint. The objective is to maximize $F$; that is, the model tries to find a set of locations that maximizes the minimum flow intercepted over all scenarios. The remaining constraints are the same as in the basic FIP model.
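As an illustration, the MAXMIN FIP can be stated almost verbatim in a modeling language; the sketch below uses PuLP on a tiny made-up network (the nodes, paths, flows, and m are illustrative, not the paper's test data).

```python
# A PuLP sketch of the MAXMIN FIP, constraints (4)-(7), on toy data.
import pulp

nodes = [1, 2, 3, 4, 5]
paths = {"p1": [1, 2], "p2": [2, 3, 4], "p3": [4, 5], "p4": [1, 3, 5]}
flows = {  # f_p^k for scenarios k = A, B
    "A": {"p1": 10, "p2": 20, "p3": 15, "p4": 5},
    "B": {"p1": 5, "p2": 10, "p3": 25, "p4": 20},
}
m = 2  # number of facilities to locate

prob = pulp.LpProblem("maxmin_fip", pulp.LpMaximize)
x = pulp.LpVariable.dicts("x", nodes, cat="Binary")
y = pulp.LpVariable.dicts("y", paths, cat="Binary")
F = pulp.LpVariable("F", lowBound=0)

prob += F  # objective: maximize the worst-scenario intercepted flow
for k, f in flows.items():  # constraint set (4), one per scenario
    prob += pulp.lpSum(f[p] * y[p] for p in paths) >= F
prob += pulp.lpSum(x[j] for j in nodes) == m  # constraint (5)
for p, nodes_on_p in paths.items():  # constraints (6)
    prob += pulp.lpSum(x[j] for j in nodes_on_p) >= y[p]

prob.solve(pulp.PULP_CBC_CMD(msg=0))
print([j for j in nodes if x[j].value() == 1], F.value())
```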
2.3 The REGRET FIP model with uncertain demand in k scenarios
If the REGRET objective is used, the objective and constraint set (4) are replaced by the following:

$\min \; F$
$F_k - \sum_{p \in P} f_p^k y_p \le F, \quad k \in K$  (8)

where $F_k$ is the optimal objective value, a known quantity, obtained when the m facilities are located optimally in scenario k; it is found by applying the basic FIP model to each scenario individually. The unknown variable $F$ in this case represents the largest regret over all scenarios. The new objective is therefore to minimize $F$; that is, we seek to minimize the maximum regret over all scenarios. The objective improves as the largest regret in any scenario is reduced. The regret can also be expressed in relative terms: a regret from a large optimal value may be large in absolute terms but small in relative ones. Therefore, in some cases the relative regret can be used, replacing constraints (8) by the following:
$\dfrac{F_k - \sum_{p \in P} f_p^k y_p}{F_k} \le F, \quad k \in K$  (9)

3. Solution method
Since the basic FIP is NP-hard, to say nothing of the MAXMIN FIP and the REGRET FIP, developing good heuristics is more appropriate than exact methods in terms of solving efficiency, especially for large problems. This is especially true for the formulations proposed in this paper, owing to the large number of variables and constraints involved, and to constraints (4) in the MAXMIN FIP model and constraints (8) in the REGRET FIP model. In this paper, an exchange heuristic based on the Teitz and Bart procedure [12] is developed to solve both formulations. The heuristic involves two phases. In the first, an initial solution is obtained by solving the basic FIP for each scenario. We can use branch-and-bound in EXCEL or LINDO to solve small or medium problems; for large problems, the greedy algorithm proved very efficient and extremely robust for the basic FIP on an authentic urban road network in Hodgson, Rosing and Storrier [13]. In the second phase, one-opt trades are performed to try to improve the initial objective. Phase 1 proceeds as follows. For each scenario, a FIP is solved to find the optimal locations. Once the optimal locations are obtained for each individual scenario, we compute the flows that those locations would intercept in the other scenarios. This can be represented in matrix form, where each row represents a given scenario and each column represents the intercepted flows achieved given the optimal locations obtained in each scenario (row). For example, the matrix element $f_{ij}$ is the flow intercepted in scenario j if the optimal locations for scenario i were implemented. The initial locations for the heuristic are then chosen according to the objective used: for the MAXMIN FIP model, the initial locations correspond to the scenario (row) i whose smallest $f_{ij}$ is largest; for the REGRET FIP model, the initial locations correspond to the scenario i whose largest regret is smallest. Once the initial locations are obtained, the second phase of the heuristic is used to improve the objective. One location is traded per iteration: if the trade improves the objective, the new location set is stored as the best solution; otherwise it is discarded and the previous solution is restored. The one-opt trade is attempted for all candidate nodes. The algorithm for the MAXMIN FIP model is as follows:
Algorithm A:
Step 1: For each scenario, find the optimal locations of the m facilities (a basic FIP is solved for each scenario).
Step 2: Compute, again for each scenario, the flows intercepted by each of the location patterns (one pattern per scenario) found in Step 1.
Step 3: Choose, as the initial locations for the second phase, the pattern whose minimum intercepted flow across all scenarios is the largest.
Step 4: Trade the location of one of the m facilities.
Step 5: Compute the flow intercepted in each scenario. If the minimum intercepted flow is larger than before the trade, keep the solution; if not, restore the old solution. Repeat Steps 4 and 5 until all facilities and nodes have been exchanged in a complete cycle of trades.
Step 6: If the MAXMIN objective has improved after Steps 4 and 5, go to Step 4 and restart the procedure. When no improvement is achieved over a complete set of one-at-a-time trades, stop.
For the REGRET FIP model, the algorithm is changed as follows:

Algorithm B:
Step 1: For each scenario, find the optimal locations of the m facilities (a basic FIP is solved for each scenario).
Step 2: Compute, again for each scenario, the flows intercepted by each of the location patterns (one pattern per scenario) found in Step 1.
Step 3: Find, for each scenario j, the regret of each pattern of optimally located m facilities. The initial solution is the optimal solution of the scenario whose largest regret is smallest.
Step 4: Trade the location of one of the m facilities.
Step 5: Compute the flow intercepted and the regret in each scenario. If the maximum regret is smaller than before the trade, keep the solution; if not, restore the old solution. Repeat Steps 4 and 5 until all facilities and nodes have been exchanged in a complete cycle of trades.
Step 6: If the REGRET objective has improved after Steps 4 and 5, go to Step 4 and restart the procedure. When no improvement is achieved over a complete set of one-at-a-time trades, stop.
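A minimal sketch of the one-opt trade phase (Step 4-6 of Algorithm A) for the MAXMIN objective follows; the helper names and the tiny example data at the end are ours, not the paper's test network.

```python
# A sketch of the one-opt trade for the MAXMIN objective.

def intercepted(locs, paths, flows_k):
    """Flow captured in one scenario: a path counts if it passes
    through at least one located node."""
    return sum(f for p, f in flows_k.items()
               if any(j in locs for j in paths[p]))

def one_opt_maxmin(init_locs, candidates, paths, flows):
    """Swap one located node for one unlocated candidate, keeping a
    swap only if it raises the minimum (worst-scenario) intercepted
    flow; stop after a full cycle with no improvement."""
    locs = set(init_locs)
    best = min(intercepted(locs, paths, f) for f in flows.values())
    improved = True
    while improved:
        improved = False
        for out in list(locs):
            for inn in candidates:
                if inn in locs:
                    continue
                trial = (locs - {out}) | {inn}
                val = min(intercepted(trial, paths, f)
                          for f in flows.values())
                if val > best:
                    locs, best, improved = trial, val, True
                    break
            if improved:
                break
    return locs, best

paths = {"p1": [1, 2], "p2": [2, 3], "p3": [1, 3]}
flows = {"A": {"p1": 5, "p2": 9, "p3": 2},
         "B": {"p1": 7, "p2": 3, "p3": 8}}
print(one_opt_maxmin({1}, [1, 2, 3], paths, flows))  # -> ({3}, 11)
```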

4. Computational Experience

The proposed MAXMIN FIP model and Algorithm A were tested on a small example network. For this network, the first phase, which obtains the initial locations for each scenario, was solved by branch-and-bound in EXCEL; the second phase of the heuristic was programmed in Turbo C. All programs were run on a Pentium personal computer. Consider the small example network with 7 nodes and 10 arcs, used with four different scenarios, shown in Fig.1 [3]. The total number of paths in this network is 12. The nodes and the flow of each scenario on each path are given in Tab.1.
[Figure: the 7-node, 10-arc example network.]
Fig.1 Example network [3]

Tab.1 Nodes and flows of the four scenarios for each path in the example network

path   Nodes on the path   fA   fB   fC   fD
1      1-2                 12   14   8    16
2      1-3-5               15   8    13   8
3      1-7-6               18   15   10   14
4      1-3                 9    15   15   18
5      2-3-4               14   18   8    10
6      3-5-6-7             16   25   12   20
7      3-6-7               20   16   18   10
8      6-5-4               10   12   10   16
9      6-7-1-2             25   18   16   12
10     7-6-5               12   16   18   12
11     7-6-3-2             20   22   15   18
12     5-3-1               16   12   25   14


The solution to the MAXMIN FIP model obtained by branch-and-bound is given in Tab.2; the initial solution obtained by Algorithm A is given in Tab.3.
Tab.2 The optimal solution to the MAXMIN FIP model by branch-and-bound

m   Nodes where facilities locate   Intercepted flows
1   6                               99
2   1 and 6                         158
3   2, 3 and 6                      168

After the exchange heuristic based on the Teitz and Bart procedure of Algorithm A, the solution is the same as the result in Tab.2. For large problems, the greedy algorithm is very efficient and extremely robust for obtaining the initial solution; a satisfactory solution can then be obtained by the exchange heuristic.

5. Conclusion
A formulation has been presented to deal with the flow interception problem with uncertain demand in several scenarios. Two objectives were used: maximize the minimum intercepted flow across scenarios (the MAXMIN objective) and minimize the maximum regret across scenarios (the REGRET objective); the latter can be formulated in absolute or relative terms. An optimal method and a heuristic algorithm were used to solve a small problem. In the models presented, each scenario is equally weighted. In future study, we will consider the FIP with uncertain demand in scenarios with different weights, and the FIP with randomly distributed uncertain demand.
Tab.3 The initial solution to the MAXMIN FIP model by Algorithm A

                                      Optimal for A    Optimal for B    Optimal for C    Optimal for D
m=1  Nodes where facilities locate    6                6                3                6
     Intercepted flow (A/B/C/D)       121/124/99/102   121/124/99/102   110/116/106/98   121/124/99/102
m=2  Nodes where facilities locate    3 and 6          3 and 6          3 and 6          1 and 6
     Intercepted flow (A/B/C/D)       175/177/160/152  175/177/160/152  175/177/160/152  173/173/160/158
m=3  Nodes where facilities locate    1,4,7            2,3,6            1,2,6            1,3,6
     Intercepted flow (A/B/C/D)       187/191/168/168  187/191/168/168  187/191/168/168  187/191/168/168

References

[1] Berman O, Larson RC, Fouska N. Optimal location of discretionary service facilities. Transportation Science, 1992, 26: 201-211.
[2] Hodgson MJ. A flow capturing location-allocation model. Geographical Analysis, 1990, 22: 270-279.
[3] Weaver J, Church R. Computational procedures for location problems on stochastic networks. Transportation Science, 1983a, 17: 168-180.
[4] Weaver J, Church R. A median facility location problem with non closest facility service. Working Paper 83-101, Department of Management Sciences and Statistics, College of Commerce and Business Administration, University of Alabama, 1983b.
[5] Weaver J, Church R. The vast median facility location model. Working Paper 84-02, Department of Management Sciences and Statistics, College of Commerce and Business Administration, University of Alabama, 1984.
[6] Mirchandani P, Odoni A. Locations of medians on stochastic networks. Transportation Science, 1979, 13: 85-97.
[7] Daskin M. Application of an expected covering model to emergency medical service system design. Decision Sciences, 1982, 13(3): 416-439.
[8] Daskin M. A maximum expected covering location model: formulation, properties, and heuristic solution. Transportation Science, 1983, 17: 48-70.
[9] Daskin M. Location, dispatching and routing models for emergency services with stochastic travel times. In: Ghosh A, Rushton G (Eds.), Spatial Analysis and Location-Allocation Models. New York: Van Nostrand Reinhold, 1987, 224-265.
[10] Daskin M, Hagani A. Multiple vehicle routing and dispatching to an emergency scene. Environment and Planning A, 1984, 16: 1349-1359.
[11] Serra D, ReVelle C. The maximum capture problem with uncertainty. Environment and Planning B, 1996, 62: 49-59.
[12] Teitz MB, Bart P. Heuristic methods for estimating the generalized vertex median of a weighted graph. Operations Research, 1968, 16: 955-965.
[13] Hodgson MJ, Rosing KE, Storrier ALY. Testing a bicriterion location-allocation model with real-world network traffic: the case of Edmonton. In: Multicriteria Analysis: Proc. XIth Int. Conf. MCDM, Coimbra, Portugal, 1994.


Information Sharing Strategy and Data Processing Model for SMLE Alliance
Qi Ershi, Gao Yinan, Huo Yanfang
School of Management, Tianjin University, Tianjin, PR. China, 300072

Abstract Small and medium-sized logistics enterprises (SMLEs) in China can establish alliances to break through resource limitations and improve their service quality. This paper mainly discusses what kinds of information should be shared to ensure effective cooperation, and the corresponding information sharing strategy. A model for a collaborative logistics plan-making system is also described.
Key words Small and Medium-sized Logistics Enterprises (SMLEs), collaborative logistics, logistics plan, distributed database

1. Introduction
To achieve the goal of becoming the global manufacturing centre, it is important for China to have an advanced and effective logistics industry. The logistics industries in Western countries and Japan have finished acquisition and re-composition, and large-scale logistics companies there can provide professional services to particular industries. In China, however, small and medium-sized logistics enterprises (SMLEs) make up the majority of third-party logistics (3PL) companies, which remain at a rather primitive stage. Although they have a certain amount of resources, such as warehouses and trucks, they still suffer from small scale and low reputation; at the same time, problems in management, process design, and personnel capability hamper their development. Given this special situation in China, establishing alliances among SMLEs would be a good choice. Through the information sharing and cooperation provided by such alliances, SMLEs can offer more rapid and responsive services to their customers. This paper mainly discusses the strategy for information sharing and processing.

2. Information in SMLEs alliance


Information plays an unparalleled role in the logistics industry. The main kinds of information in an alliance-based SMLE include resource information, partner information, logistics task information, and knowledge information. Resource information describes the capacity and availability of warehouses and transportation. Partner information covers other SMLEs' locations, business coverage, and resources. Logistics task information answers questions such as what is stored, transported, or processed and when the tasks should be finished. Knowledge information is the useful summary extracted from past tasks, including standard operating procedures, anticipation of the future market, and evaluation of partners' cooperative ability. The relationship between the effectiveness of cooperation and the extent of information sharing is shown in Fig 1.

[Figure: effectiveness of cooperation increases with the extent of information sharing.]
Fig 1 Relationship between effectiveness of cooperation and extent of information sharing


The extent of information sharing exerts a direct influence on the effectiveness of cooperation. If knowledge information can be shared within the alliance, smooth services can be provided and high competitiveness achieved.

3. Mechanism of effective information sharing in SMLEs alliance


Having determined what kinds of information should be shared to make an effective SMLE alliance, we now examine the mechanism of information sharing. Many SMLEs in China have their own information systems and database systems, but these systems, developed by different programmers, have no uniform standards and poor expansibility, which makes database coordination difficult. A centralized database has the disadvantage that if the centre breaks down, the whole system is paralyzed; moreover, access control and data updating become complex issues because the data belong to different companies. Considering the SMLEs' situation and the alliance's needs, a distributed database system is the best solution: it has uniform business logic but distributed physical locations, and it coordinates (manages and controls) several logically independent units with different layouts (usually centralized databases) in different physical locations using internet technology. Its main characteristics are physical distribution, unified logic, unit autonomy, transparency of distributed data, a combined mechanism of centralized and autonomous control, a certain degree of data redundancy, and distributed business process management. Independence of each company's database, flexibility in access control, and high efficiency in data inquiry and updating are the three main advantages of this system.

[Figure: the centric node hosts a coordinating server and a logistics data warehouse server; each SMLE node a_i holds its own database, a data transformation interface collection, and its public, cooperative, private, and operating data.]
Fig 2(A) Framework of the distributed database system
Fig 2(B) Framework of the local database system

The layout of the distributed database is shown in Fig 2(A) and Fig 2(B). There are two main servers in the centric node: the logistics data warehouse server and the coordinating server. The logistics data warehouse server stores the information of finished tasks to support the logistics plan-making process, while the coordinating server coordinates the members' databases according to standard controlling and updating strategies. Its main functions include:
3.1 Information transformation
Each node in the distributed database may have a different data format, because a company may already have its own database storing many data before joining the alliance, and this format has proved effective. It is therefore advisable to keep these data formats, preserving the efficiency, low cost, and familiarity they provide. We design the mechanism shown in Fig 3:

[Figure: the data transformation interface collection for a_i holds pairwise interfaces, e.g. between a1's data format and an's data format.]
Fig 3 Data transformation interface

Take, for example, member a1, which has a different data format from member an. Considering both a1's and an's formats, we design a data transformation interface i(a1, an). In the same way, we obtain I = {i(a1,a2), i(a1,a3), i(a1,a4), ..., i(a1,an)}, which is created, stored, and managed in the centric node and transported to node a1.
3.2 Public information update
The extent to which information should be shared is a very important issue, for the alliance exists in an environment that is both cooperative and competitive. Each SMLE has to keep some information secret to maintain an advantageous position in the competition, while at the same time sharing some information to carry out cooperation. The data stored in the distributed database can be put into three classes according to their sharing level: public information, cooperative information, and private information. Public information is the information to which all members of the alliance have access; it includes each member's location, business coverage, and resources, without mention of their amount, availability, or restrictions. Cooperative information is the information required in cooperative plan-making: when several members of the alliance form a dynamic team to complete a certain task, they should share each other's resource information and its restrictions. Private information is the information a company wants to keep secret, including customer data and dynamic resource information; using this information, each partner can make its individual plan. If member a1's public information changes (for example, it may have just set up a qualified warehouse for dangerous chemicals), a1 should inform the coordinating server in the centric node. The coordinating server will then modify the collection of all nodes' public information and pass the new version of the collection to each node by broadcast.
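Before moving on, here is a minimal sketch of the pairwise transformation interface i(a1, an) described in Section 3.1; the field names and unit conversion below are invented for illustration.

```python
# A sketch of a pairwise data-transformation interface; field names
# and formats are illustrative assumptions.

def make_interface(field_map, converters=None):
    """Build a one-way record converter between two members' formats.
    field_map maps source field names to target field names;
    converters optionally maps target fields to value-conversion
    functions (e.g. unit or date-format changes)."""
    converters = converters or {}
    def convert(record):
        out = {}
        for src, dst in field_map.items():
            out[dst] = converters.get(dst, lambda v: v)(record[src])
        return out
    return convert

# Interface i(a1, an): a1 stores weights in kg, an expects tonnes.
i_a1_an = make_interface(
    {"cargo_kg": "cargo_t", "dest": "destination"},
    converters={"cargo_t": lambda kg: kg / 1000.0},
)
print(i_a1_an({"cargo_kg": 2500, "dest": "Tianjin"}))
# -> {'cargo_t': 2.5, 'destination': 'Tianjin'}
```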

4. Processing of information and constitution of collaborative logistics plan


When an SMLE finds it difficult to meet a customer's demand due to its limited resources, it should try to find partners within the alliance and make a collaborative logistics plan to finish the task. This process includes four stages: set up a dynamic team, make the initial plan, make the local plans, and make the final plan. The functions that the three classes of information perform are shown in Fig 4. The following notation is used in the plan-making procedure:
a (Agent): a company in the alliance;
team: a dynamic team set up for a certain task;
res (Resource): a resource, described as res(type, amount), where type represents a certain kind of transportation or storage ability;
rep (Reputation): evaluation of an SMLE's ability to fulfil a plan;
t (Task): a logistics task t(res, ts, te), where ts is the time when the resource occupation starts and te the time when it ends;
c (Commitment): the allocation of a task to an agent, c(a, t);
p (Plan): the logistics plan, i.e., the collection of commitments.
[Figure: public, cooperative, and private class information feed the setup of the dynamic team and the establishment of the initial and local logistics plans; a member selection interface builds the dynamic team from the resources needed and the public class information stored in the local database.]
Fig 4 Function of the different information classes in the different plan-making stages
Fig 5 Member selection for the dynamic team

Stage 1: setup of a dynamic team. When agent a0 finds that its own resources cannot support a customer's need, it starts to search the alliance A = {ai, ..., aj} for agents who have the corresponding resources. The search fields include the resource restrictions and the agents' reputation. A dynamic team is then established with a0 as leader; the procedure is shown in Fig 5.
Stage 2: establishment of the initial logistics plan. A logistics plan falls into several tasks, each described as t(res(type, amount), ts, te), where res represents the resource while ts and te stand for the starting and ending times of the resource occupation. As leader of the team, a0 divides the order into tasks and then allocates each task to members of the team according to an assigning algorithm, the data in the central logistics data warehouse, and its own preferences. Once a task is assigned to an agent, it becomes a commitment. When all tasks have turned into commitments, the initial plan P0 is set up, and the information of all commitments is passed on to the corresponding agents for their affirmation. Fig 6 shows the process.
Stage 3: establishment of the local logistics plans. When receiving a commitment assigned by a0, agent ai decides whether to accept it. If ai decides to join the team, it can make its own local logistics plan using any of many existing algorithms. If the answer is no, a0 has to repeat Stage 2 until all agents accept their commitments.
Stage 4: establishment of the final logistics plan. After receiving all the local plans from the partners, a0 may make some modifications and decides each partner's final commitment to form the final logistics plan.
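The notation above maps naturally onto simple data structures; the sketch below is one possible transcription, with all class and field names simply mirroring the parameters just defined.

```python
# A sketch of the plan-making notation as data structures.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Resource:
    type: str      # kind of transportation or storage ability
    amount: float

@dataclass
class Task:
    res: Resource
    ts: float      # time the resource occupation starts
    te: float      # time the resource occupation ends

@dataclass
class Commitment:
    agent: str     # the SMLE the task is allocated to
    task: Task

@dataclass
class Plan:
    commitments: List[Commitment] = field(default_factory=list)

# An order decomposed into two tasks and allocated to two members:
p0 = Plan([Commitment("a1", Task(Resource("truck", 10), 0, 5)),
           Commitment("a2", Task(Resource("warehouse", 200), 5, 30))])
print(len(p0.commitments))  # -> 2
```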


[Figure: an order passes through the order decomposing interface into tasks 1..n, and through the order allocation interface into commitments.]
Fig. 6 Process of initial logistics plan making

5. Conclusion
An information sharing strategy for an SMLE alliance can help logistics companies cooperate more effectively, and the application of a distributed database can reduce the cost of cooperation and add flexibility to data management and access control.
References

[1] Afsarmanesh H, Camarinha L. Federated information management for cooperative virtual organizations. In: Proceedings of the 8th Int. Conf. on Database and Expert Systems Applications (DEXA'97), September 1997.
[2] Talluri S, Baker RC. A quantitative framework for designing efficient business process alliances. In: International Conference on Engineering Management and Control (IEMC), 1996: 656-661.
[3] Xu Xiaofei, Zhan Dechen, Ye Dan. The establishment of dynamic alliance and its integrated supporting system. Computer Integrated Manufacturing Systems, 1998, 4(1): 9-15.

Solving Capacitated Facility Location Problem Using Tabu Search


Minghe Sun1, Eliane Aparecida Ducati2, Vinícius Amaral Armentano2
1 College of Business, The University of Texas at San Antonio, San Antonio, TX 78249-0634, minghe.sun@utsa.edu
2 Faculdade de Engenharia Elétrica e de Computação, Universidade Estadual de Campinas, Caixa Postal 6101, Campinas - SP, CEP 13083-970, Brazil, {eliane, vinicius}@densis.fee.unicamp.br

Abstract This study proposes a tabu search heuristic procedure for the capacitated facility location problem. The procedure uses recency-based short term memory and frequency-based long term memories performing diversification and intensification functions. Its performance is tested through computational experiments using test problems from the literature; it found optimal solutions for almost all test problems. Compared to a Lagrangian heuristic method, the tabu search heuristic procedure found much better solutions using much less CPU time.
Keywords: Capacitated facility location, Tabu search, Metaheuristics

1 Introduction
Facility location is a large area with a vast literature and numerous applications in the private and public sectors [5][7][19][20]. This study focuses on the capacitated facility location problem (CFLP), which consists of m potential sites where facilities may be located and n clients whose demands may be satisfied from any facility. Each facility i has a capacity ai and each client j has a demand bj. The fixed cost of operating facility i is fi, and the unit transportation cost from facility i to client j is cij. The status of facility i is represented by a binary variable yi such that yi = 1 if it is open and yi = 0 if it is closed. The quantity shipped from facility i to client j is represented by the variable xij. The CFLP is represented by the following mixed integer programming model:

$\min \sum_{i=1}^{m} \sum_{j=1}^{n} c_{ij} x_{ij} + \sum_{i=1}^{m} f_i y_i$  (1)
s.t.
$\sum_{i=1}^{m} x_{ij} = b_j, \quad j = 1, \ldots, n$  (2)
$\sum_{j=1}^{n} x_{ij} \le a_i y_i, \quad i = 1, \ldots, m$  (3)
$x_{ij} \ge 0, \quad i = 1, \ldots, m;\; j = 1, \ldots, n$  (4)
$y_i \in \{0,1\}, \quad i = 1, \ldots, m$  (5)
In the model, the objective function (1) represents the total cost to be minimized. Constraints (2) ensure that the demand of each client is satisfied, and constraints (3) limit the total amount supplied to all clients from each facility i to within its capacity ai. Constraints (4) and (5) define the values the variables can assume. When values are assigned to the binary variables, the resulting primal structure is a transportation problem. Setting bj = 1 in (2) and replacing constraint (3) for each i with xij ≤ yi, j = 1, ..., n, the CFLP model becomes a model for the uncapacitated facility location problem (UFLP). In this study, a tabu search (TS) heuristic procedure is developed to solve the CFLP. The TS procedure uses recency-based short term memory and frequency-based long term memories to implement the main search as well as the diversification and intensification functions. The performance of the TS heuristic procedure is tested using test problems from the OR-Library [2]. The Lagrangian heuristic (LH) method [3] is used as a benchmark to measure the performance of the TS heuristic procedure. The CFLP has been studied extensively; many exact algorithms and heuristic methods have been developed to solve it over the last 40 years.
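For illustration, model (1)-(5) can be written down directly in a modeling language; the sketch below uses PuLP with tiny made-up data (the capacities, demands, and costs are illustrative, not OR-Library instances).

```python
# A PuLP sketch of the CFLP model (1)-(5) on toy data.
import pulp

m_sites, n_clients = 3, 4
a = [50, 60, 40]                                 # capacities a_i
b = [20, 30, 25, 15]                             # demands b_j
f = [100, 120, 90]                               # fixed costs f_i
c = [[2, 3, 4, 5], [4, 2, 3, 3], [5, 4, 2, 2]]   # unit costs c_ij

prob = pulp.LpProblem("cflp", pulp.LpMinimize)
x = [[pulp.LpVariable(f"x_{i}_{j}", lowBound=0)
      for j in range(n_clients)] for i in range(m_sites)]
y = [pulp.LpVariable(f"y_{i}", cat="Binary") for i in range(m_sites)]

# objective (1): transportation plus fixed costs
prob += (pulp.lpSum(c[i][j] * x[i][j]
                    for i in range(m_sites) for j in range(n_clients))
         + pulp.lpSum(f[i] * y[i] for i in range(m_sites)))
for j in range(n_clients):   # constraints (2): demand satisfaction
    prob += pulp.lpSum(x[i][j] for i in range(m_sites)) == b[j]
for i in range(m_sites):     # constraints (3): capacity if open
    prob += pulp.lpSum(x[i][j] for j in range(n_clients)) <= a[i] * y[i]

prob.solve(pulp.PULP_CBC_CMD(msg=0))
print([int(yi.value()) for yi in y], pulp.value(prob.objective))
```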

2 The Tabu Search Heuristic Procedure


The TS metaheuristic [11][12][13] has been successfully applied to a variety of combinatorial optimization problems, but not much has been reported on applying it to the CFLP. To our knowledge, the TS heuristic procedure proposed in [14] is the only reported study applied to the CFLP, with limited computational results. However, TS heuristic procedures have been developed for other types of facility location problems [1][6][8][9][10][15][18][21][22]. TS uses responsive exploration and flexible memory to guide the search in the solution process. By responsive exploration, it determines a search direction based on the properties of the current solution and the search history. By flexible memory, it uses short and long term memory structures to record a selective search history. The short term memory used in this study keeps track of attributes of solutions visited in the recent past; these attributes are used to prevent solutions from being revisited. The long term memory contains a selective history of solutions and their attributes encountered during the search process. In this study, long term memories using residence and transition frequencies implement the intensification and diversification functions. The proposed TS heuristic procedure is composed of search cycles, each comprising the short term memory phase, intensification, and diversification, in that order. The components of the TS heuristic procedure are described in the following.

2.1 Feasible and Infeasible Solutions


Let $I = \{1, \ldots, m\}$ be the set of indices of the facilities. For a given solution, $I$ is partitioned into $I_0 = \{i \in I \mid y_i = 0\}$ and $I_1 = \{i \in I \mid y_i = 1\}$. Accordingly, the numbers of closed and open facilities are denoted by $m_0$ and $m_1$, respectively. The total capacity of all open facilities of a given partition is denoted by $A = \sum_{i \in I_1} a_i$ and the total demand of all clients by $B = \sum_{j=1}^{n} b_j$. The solution corresponding to a partition is feasible if and only if $A \ge B$. A facility of a feasible solution can be closed with the resulting solution still feasible only if $A - a_i \ge B$. However, if the current solution is feasible, the resulting solution is always feasible when a currently closed facility $i \in I_0$ is opened. After the status of any facility $i$ changes, the value of $A$ is updated accordingly, i.e., $A \leftarrow A + a_i$ if $i \in I_0$ or $A \leftarrow A - a_i$ if $i \in I_1$. Starting with all facilities closed, an initial solution is obtained in this study by moving facilities $i \in I_0$ to $I_1$ in order of decreasing values of $a_i \sum_{j=1}^{n} b_j \big/ \big( f_i \sum_{j=1}^{n} c_{ij} b_j \big)$ until the feasibility condition $A \ge B$ is satisfied.

2.2 Neighborhood

A move is defined as the status change of any facility $i$, i.e., $y_i \leftarrow 1 - y_i$. The neighborhood of a feasible solution is the set of the $m$ distinct solutions that can be reached by making one move from the current solution. We use $k$ to count the number of moves or iterations made since the search started. The cost of the best solution found in a single search cycle is denoted by $z_0$ and the iteration at which $z_0$ is found is denoted by $k_0$. The cost of the best solution found since the search started is denoted by $z_{00}$. The total cost of the solution obtained by changing the status of facility $i$ from the current solution at iteration $k$ is denoted by $z_i$. When the variables $y_i$ are fixed to their binary values for a given partition of $I$, the CFLP (1)-(5) reduces to a transportation problem with $m_1$ supply nodes and $n$ demand nodes. In the implementation, these transportation problems are solved using CPLEX (ILOG CPLEX 9.0). After a $y_i$ changes its value, a new transportation problem is extracted and re-optimized by CPLEX starting from the optimal solution of the current transportation problem.

2.3 Visited and Evaluated Solutions

The transportation problem for a given partition of $I$ needs to be solved only once. Once solved, the solution and its corresponding minimum cost are saved. Solutions with known minimum total costs are labeled as visited solutions and evaluated solutions. The current solution at any move is a visited solution. In the solution process, a solution is expected to be visited at most once; if a solution is visited a second time, repeated visiting of a subset of solutions may occur. Before a facility is selected to switch status for the next move, a number of adjacent solutions in the neighborhood are evaluated to find the minimum cost for each. A solution is evaluated if its minimum total cost has been found.

For a problem with $m$ candidate facility sites, there are fewer than $2^m$ feasible solutions. Only a tiny portion of these feasible solutions is evaluated. Hashing is used to store the evaluated solutions in an array with fewer than $2^m$ elements. The vector of binary values $(y_m, y_{m-1}, \ldots, y_2, y_1)$ corresponding to a given partition of $I$ is mapped by a hashing function into a single integer hash value $N$ [4][23],

$$N = \Bigl( \sum_{i=1}^{m} w_i y_i \Bigr) \bmod p \qquad (6)$$

In (6), each $w_i$ is a randomly generated integer in the interval $(0, 2^{15})$ and $p$ assumes the value 99991, as suggested in [16]. The hash value $N$ determines the position where the total cost and the solution associated with the partition of $I$ are stored in the array. Before the solution corresponding to a partition of $I$ is evaluated, $N$ is determined first through (6). If a solution is already stored at position $N$ of the array, the transportation problem was previously solved and the total cost is retrieved. Otherwise, the feasibility of the partition is checked by comparing the total capacity of the open facilities with the total demand. If feasible, the transportation problem is solved to obtain the minimum total cost; otherwise, an infinite minimum total cost is assigned to the partition. The minimum total cost and the solution are then stored at position $N$ of the array.
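A minimal C sketch of the hashing function (6) follows; the weights are assumed to be drawn once at initialization, and everything beyond the formula itself (array layout, collision handling) is left out since the paper specifies only the hash function.

    #include <stdlib.h>

    #define P 99991                        /* table size suggested in [16] */

    /* Hash value (6): N = (sum_i w_i * y_i) mod P. */
    long hash_solution(int m, const int *y, const long *w) {
        long N = 0;
        for (int i = 0; i < m; i++)
            if (y[i]) N = (N + w[i]) % P;
        return N;
    }

    /* Draw each w_i uniformly from (0, 2^15), once at initialization. */
    void init_weights(int m, long *w) {
        for (int i = 0; i < m; i++)
            w[i] = 1 + rand() % ((1 << 15) - 1);
    }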

2.4 The Short Term Memory

The tabu tenure of a move is denoted by $l$. A facility is restricted to stay at its new status for at least $l$ moves after its status change unless the aspiration criterion is satisfied. The value of $l$ is selected from the integers in the interval $[l_{min}, l_{max}]$. Computational experience showed that the values of $l_{min}$ and $l_{max}$ should be related to $m$. A new value for $l$ is selected from the same interval when a new search cycle starts. The tabu condition is implemented through the integer vector $t$. The element $t_i$ of $t$ represents the iteration number at which facility $i$ changed its status the last time. At iteration $k$, a move switching the status of facility $i^*$ is selected such that

$$z_{i^*} = \min\{ z_i \mid i = 1, \ldots, m \} \qquad (7)$$

The following tabu condition is then checked:

$$k - t_{i^*} \le l \qquad (8)$$

The selected move is not tabu if (8) is not satisfied. In this case, the move is made. Otherwise, if (8) is satisfied, the move is tabu and is made only if the following aspiration criterion is satisfied:

$$z_{i^*} < z_0 \qquad (9)$$

If the selected move is tabu and (9) is not satisfied, the TS heuristic procedure sets $z_{i^*} \leftarrow \infty$, selects another move according to (7), checks its tabu condition according to (8), and so on. This process continues until a move that is not tabu, or tabu but satisfying the aspiration criterion, is found and the move is made. The short term memory process stops if $z_0$ is not improved after $\alpha_1 m$ moves, where $\alpha_1 > 0$ is a parameter of the TS procedure.

2.5 The Long Term Memories

The long term memories are used to perform the intensification and diversification functions. Residence frequency and transition frequency memory structures are used for this purpose. An integer vector $\rho$ is used to represent the residence frequency, where the $i$th element $\rho_i$ is the number of moves during which facility $i$ is open. In the implementation, $\rho_i$ is updated only when facility $i$ changes its status from $y_i = 1$ to $y_i = 0$. An integer vector $\tau$ is used to represent the transition frequency, where the $i$th element $\tau_i$ is the number of times that facility $i$ has changed its status. In the implementation, $\tau_i$ is incremented each time facility $i$ changes status. The criterion to select a facility $i^*$ for a move at iteration $k$ in the intensification process follows that proposed in [21], i.e.,

$$z'_{i^*} = \min\{ z'_i \mid i = 1, \ldots, m \} \qquad (10)$$

where $z'_i$ is defined as in (11) and $\bar{f}$ denotes the average fixed cost, i.e., $\bar{f} = \sum_{i=1}^{m} f_i / m$. The intensification process stops if $z_0$ is not improved after $(\alpha_1 + \alpha_2) m$ moves, where $\alpha_2 > 0$ is a parameter of the TS heuristic procedure. Residence and transition frequencies are used alternately to perform the diversification function. A move switching the status of facility $i$ is given a penalty $p_i$, with $p_i = \rho_i / k$ if residence frequency is used or $p_i = \tau_i / k$ if transition frequency is used. A move is then selected to switch the status of facility $i^*$ according to (12), where, with a diversification parameter $d$, the penalized cost $z''_i$ is defined as in (13). The search process terminates after $C$ search cycles have been executed. In the $c$th search cycle, a total of $c$ moves are made in the diversification process, and $c_0$ is used to count the number of moves that have been made in the current diversification process.

2.6 The Tabu Search Procedure

A step-by-step description of the TS heuristic procedure is given in the following. These steps are roughly grouped into different sections according to their functions.

Initialization
Step 1 Find an initial solution with a total cost $z$. Let $z_0 \leftarrow z$ and $z_{00} \leftarrow z_0$. Determine $m_1$. Choose values for $l_{min}$ and $l_{max}$ and select an $l$ such that $l \in [l_{min}, l_{max}]$. Let $t_i \leftarrow -l$ and $\tau_i \leftarrow 0$, $i = 1, \ldots, m$; let $\rho_i \leftarrow 0$ if $y_i = 0$ or $\rho_i \leftarrow l$ if $y_i = 1$. Select values for the parameters $\alpha_1$, $\alpha_2$ and $C$. Let $k \leftarrow 1$, $k_0 \leftarrow 1$, $c_0 \leftarrow 1$, $c \leftarrow 1$.

Short Term Memory
Step 2 Obtain $z_i$, $i = 1, \ldots, m$.
Step 3 Select a move switching the status of facility $i^*$ according to (7). Check the tabu status of the selected move according to (8). If the move is tabu, go to Step 4; otherwise, go to Step 5.
Step 4 Check the aspiration criterion of the selected move according to (9). If (9) is satisfied, go to Step 5; otherwise, set $z_{i^*} \leftarrow \infty$ and go to Step 3.

Move Execution
Step 5 Let $y_{i^*} \leftarrow 1 - y_{i^*}$ and $\tau_{i^*} \leftarrow \tau_{i^*} + 1$. If $y_{i^*} = 1$, let $m_1 \leftarrow m_1 + 1$; otherwise, let $m_1 \leftarrow m_1 - 1$ and $\rho_{i^*} \leftarrow \rho_{i^*} + k - t_{i^*}$. Let $t_{i^*} \leftarrow k$ and $k \leftarrow k + 1$. If $z_{i^*} < z_0$, let $z_0 \leftarrow z_{i^*}$ and $k_0 \leftarrow k$. If $z_0 < z_{00}$, let $z_{00} \leftarrow z_0$.
Step 6 If $k - k_0 \le \alpha_1 m$, go to Step 2. If $k - k_0 \le (\alpha_1 + \alpha_2) m$, go to Step 7. Otherwise, go to Step 10.

Intensification
Step 7 Compute $z'_i$ according to (11), $i = 1, \ldots, m$.
Step 8 Select a move switching the status of facility $i^*$ according to (10). Check the tabu status of the selected move according to (8). If the move is tabu, go to Step 9; otherwise, go to Step 5.
Step 9 Check the aspiration criterion according to (9). If (9) is satisfied, go to Step 5; otherwise, set $z'_{i^*} \leftarrow \infty$ and go to Step 8.

Diversification
Step 10 If $c > C$, stop. If $c_0 > c$, go to Step 13.
Step 11 Compute $z''_i$ according to (13), $i = 1, \ldots, m$.
Step 12 Let $c_0 \leftarrow c_0 + 1$. Select a move to switch the status of facility $i^*$ according to (12) and go to Step 5.
Step 13 Let $c_0 \leftarrow 1$, $c \leftarrow c + 1$, $z_0 \leftarrow z$, $k_0 \leftarrow k$. Reset the value of $l$ such that $l \in [l_{min}, l_{max}]$ and go to Step 2.
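A minimal C sketch of the move selection in Steps 3 and 4, combining rule (7) with the tabu condition (8) and aspiration criterion (9); the data layout is illustrative and not the authors' implementation.

    #include <float.h>

    /* Returns the facility whose status change gives the smallest cost z[i],
       skipping tabu moves (k - t[i] <= l) unless z[i] < z0 (aspiration).
       z[] may be modified: rejected tabu moves are masked with DBL_MAX. */
    int select_move(int m, double *z, const int *t, int k, int l, double z0) {
        for (;;) {
            int best = -1;
            for (int i = 0; i < m; i++)
                if (best < 0 || z[i] < z[best]) best = i;   /* rule (7)       */
            if (z[best] == DBL_MAX) return -1;              /* all masked     */
            if (k - t[best] > l) return best;               /* not tabu (8)   */
            if (z[best] < z0) return best;                  /* aspiration (9) */
            z[best] = DBL_MAX;                              /* mask and retry */
        }
    }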

3 Computational Experiments
The TS heuristic procedure and the LH method [3] were coded in C. The computational experiments were conducted on a desktop computer with a Pentium IV processor (2.8 GHz frequency and 1 GB memory). The values of the parameters of the TS heuristic procedure used in the computational experiments are $l_{min} = [m/6]$, $l_{max} = [m/3]$, $d = 1000$, $\alpha_1 = 0.5$, $\alpha_2 = 0.5$ and $C = 5$ for problems with $m \le 50$ or $C = 7$ for problems with $m = 100$. The 49 test problems with known optimal solutions in the OR-Library [2] are used for the computational experiment. These test problems are divided into four sets. The first set has 13 test problems with $m \times n = 16 \times 50$; the second set has 12 test problems with $m \times n = 25 \times 50$; the third set has 12 test problems with $m \times n = 50 \times 50$; and the fourth set has 12 test problems with $m \times n = 100 \times 1000$. The computational results, i.e., solution quality and CPU time, obtained with the TS heuristic procedure and the LH method [3] are reported in Table 1. The results in the rows labeled TS1 are intermediate results obtained before the long term memories are invoked, and those in the rows labeled TS2 are the final results of the TS heuristic procedure. For each group of test problems, the solution quality of a heuristic method is measured with the mean gap and the percentage of problems for which optimal solutions are obtained by the respective method. The gap in percentage for a test problem is the relative deviation between the total cost of the final solution (i.e., the final value of $z_{00}$ in the TS heuristic procedure) and that of the optimal solution. Using $z_{final}$ to denote the total cost of the final solution obtained with a heuristic method and $z_{opt}$ to denote the total cost of the optimal solution, the gap is defined as $g = (z_{final} - z_{opt}) / z_{opt} \times 100\%$. For each group of test problems, the mean CPU time in seconds taken by each solution method is reported. For the TS heuristic procedure, the mean CPU time in seconds needed to reach the best solution is also reported. The TS heuristic procedure outperforms the LH method for all these problems even before the diversification and intensification functions are invoked. Compared to the LH method, the TS heuristic procedure without the long term memories obtained better solutions but used only a small fraction of the CPU time. With the long term memories, the TS procedure found even better solutions while still using much less CPU time. As the problems become larger, relatively less CPU time is spent by the TS heuristic procedure to run the additional search cycles. Lorena and Senne [17] used 12 of these test problems for their computational experiment. The solutions obtained with the LH method in this study are very similar to those reported in their paper.
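As a trivial helper (not from the paper), the gap measure used above is a one-liner in C:

    /* Relative deviation of a heuristic solution from the optimum, in percent. */
    double gap_percent(double z_final, double z_opt) {
        return (z_final - z_opt) / z_opt * 100.0;
    }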
Table 1. Results of Test Problems from the Literature

Size (m × n)   Heuristic   Mean Gap   Mean Time   Mean Time to     Optimum
               Method      (%)        (seconds)   best (seconds)   (%)
16 × 50        TS1         0.000      0.048       0.033            100.00
               TS2         0.000      0.091       0.053            100.00
               LH          0.038      2.668       -                61.54
25 × 50        TS1         0.090      0.114       0.031            91.67
               TS2         0.000      0.265       0.176            100.00
               LH          0.042      6.426       -                58.33
50 × 50        TS1         0.103      0.813       0.582            75.00
               TS2         0.000      1.241       0.755            100.00
               LH          0.119      22.223      -                50.00
100 × 1000     TS1         0.312      342.307     311.649          58.33
               TS2         0.003      547.261     509.930          91.67
               LH          0.216      1557.013    -                50.00

4 Conclusions
This paper presents a TS heuristic procedure for the CFLP. Computational results show that this TS heuristic procedure is very effective in finding good solutions. Without the use of the long term memories, the TS heuristic procedure is already more effective and efficient than the LH method. The employment of the long term memories performing the intensification and diversification functions further improves the effectiveness of the TS heuristic procedure. Although the long term memories require additional CPU time, the TS heuristic procedure still takes only a small fraction of the CPU time taken by the LH method. The use of the long term memories is especially effective for larger problems.
References

[1] Al-Sultan K.S. and Al-Fawzan M.A. A tabu search approach to the uncapacitated facility location problem, Annals of Operations Research, 1999, 86: 91-103.
[2] Beasley J.E. OR-Library: Distributing test problems by electronic mail, Journal of the Operational Research Society, 1990, 41: 1069-1072.
[3] Beasley J.E. Lagrangian heuristics for location problems, European Journal of Operational Research, 1993, 65: 383-399.
[4] Carlton W.B. and Barnes J.W. A note on hashing functions and tabu search algorithms, European Journal of Operational Research, 1996, 95: 237-239.
[5] Daskin M.S. Network and Discrete Location: Models, Algorithms, and Applications, Wiley, New York, 1995.
[6] Delmaire H.J., Diaz J.A., Fernandez E. and Ortega M. Reactive GRASP and tabu search based heuristics for the single source capacitated plant location problem, Information Systems and Operations Research, 1998, 37: 194-225.
[7] Drezner Z. and Hamacher H.W. Facility Location: Theory and Algorithms, Springer Verlag, 2001.
[8] Ferreira Filho V.J.M. and Galvão R.D. A tabu search heuristic for the concentrator location problem, Location Science, 1998, 6: 189-209.
[9] França P.M., Sosa N.G. and Pureza V.M. An adaptive tabu search approach for solving the capacitated clustering problem, International Transactions in Operational Research, 1999, 6: 665-678.
[10] Ghosh D. Neighborhood search heuristics for the uncapacitated facility location problem, European Journal of Operational Research, 2003, 150: 150-162.
[11] Glover F. Tabu search, Part I, ORSA Journal on Computing, 1989, 1: 190-206.
[12] Glover F. Tabu search, Part II, ORSA Journal on Computing, 1990, 2: 4-32.
[13] Glover F. and Laguna M. Tabu Search, Kluwer Academic Publishers, Hingham, MA, 1997.
[14] Grolimund S. and Ganascia J.G. Driving tabu search with case-based reasoning, European Journal of Operational Research, 1997, 103: 326-338.
[15] Hoefer M. Experimental comparison of heuristic and approximation algorithms for uncapacitated facility location, in K. Jansen et al. (Eds.): WEA 2003, LNCS 2647, 165-178.
[16] Klein R. Project scheduling with time-varying resource constraints, International Journal of Production Research, 2000, 38: 3937-3952.
[17] Lorena L.A.N. and Senne E.L.F. Improving traditional subgradient scheme for Lagrangean relaxation: an application to location problems, International Journal of Mathematical Algorithms, 1999, 1: 133-151.
[18] Michel L. and Van Hentenryck P. A simple tabu search for warehouse location, European Journal of Operational Research, 2004, 157: 576-591.
[19] Mirchandani P.B. and Francis R.L. Discrete Location Theory, Wiley, New York, 1990.
[20] Owen S.H. and Daskin M.S. Strategic facility location: a review, European Journal of Operational Research, 1998, 111: 423-447.
[21] Sun M. Solving uncapacitated facility location problems using tabu search, Computers and Operations Research, 2006, 33(9): 2563-2589.
[22] Tuzun D. and Burke L.I. A two-phase tabu search approach to the location routing problem, European Journal of Operational Research, 1999, 116(1): 87-99.
[23] Woodruff D.L. and Zemel E. Hashing vectors for tabu search, Annals of Operations Research, 1993, 41: 123-137.


Research on the Hybrid Ant Colony Labor Division Algorithm with Optimization and Its Application
Xiao Renbin1,2, Zhang Qiang2 and Zhang Xinhui3
1 School of Management, Huazhong University of Science and Technology, P.R.China, 430074 2 CAD Center, Huazhong University of Science and Technology, P.R.China, 430074 3 Department of Industrial Engineering, Wright State University, Dayton, Ohio, 45435, USA

Abstract In view of the deficiencies of the existing heuristic methods and stochastic methods for the task assignment problem, a hybrid ant colony labor division algorithm with optimization is designed according to the internal similarity between ant colony labor division and task assignment problems. The new algorithm integrates the advantages of the heuristic search of the ant colony optimization algorithm and the stochastic search of ant colony labor division. In the new algorithm, a strategy of reserving the current optimal solution based on the elite ant colony is proposed, the threshold matrix is updated by rewarding and punishing the ant colonies, and the optimal task assignment is obtained by the stochastic search of the labor division algorithm. Experiments on a real-life problem validate that the new algorithm converges stably to satisfactory solutions in a short time.
Key words Ant colony optimization, Ant colony labor division, Task assignment

1. Introduction
The task assignment problem is a classical and widely applicable problem, arising in manufacturing, industrial and agricultural production, transportation and the service industry, for example in production line scheduling, task assignment of workers, and job scheduling in multi-agent systems. Task assignment problems can be classified into balanced and unbalanced problems. For problems with one person per job and one job per person, the Hungarian algorithm is a very effective heuristic algorithm [1]. But this kind of problem is rare in actual applications, since different jobs have to be finished by varying numbers of people, and people working on a job have different efficiencies due to differences between the jobs and in their abilities [2]. Therefore, it is necessary to solve unbalanced task assignment problems with different methods; such problems are typical NP-hard combinatorial optimization problems for which no algorithm of polynomial time complexity is known to find global optimal solutions [3]. The approximate methods for solving this kind of problem fall into two categories: heuristic methods and stochastic search methods [4]. A heuristic method uses heuristic information and extends the search in the most promising region based on experience and domain principles. Heuristic search is more efficient than uninformed search, but it depends heavily on knowledge and experience about the nature of the problem, which makes it suitable only for problems that are not too complicated. The Hungarian algorithm mentioned above is one example. Some researchers have improved the Hungarian algorithm to solve unbalanced task assignment problems, and others have transformed such problems into traditional problems, such as the traditional assignment problem and the transportation problem [5], an approach which also belongs to the heuristic category since it likewise takes advantage of heuristic information about the problem. Although many heuristic algorithms in the literature try to obtain solutions of high quality, the results are still unsatisfactory [6]. A stochastic method, in contrast, does not depend on the nature of the problem. It first chooses multiple solutions from the solution space, checks their feasibility, and then finds the best among the feasible solutions. The advantage of stochastic methods is simplicity, and they can reach a solution in a short time. Many researchers at home and abroad have solved this kind of problem with genetic algorithms, for example the multiprocessor task scheduling problem in [7]; besides, improved genetic algorithms and immune algorithms have also been tried on this kind of problem [1,8]. But all these algorithms depend on the type of problem, which in principle has no relation to the algorithms themselves. On the basis of the internal similarity between ant colony labor division and the task assignment problem [9,10,11], this paper designs a hybrid ant colony labor division algorithm with optimization based on the heuristic search of the ant colony optimization algorithm and the random search of the ant colony labor division algorithm, and applies it to the solution of task assignment problems.
Supported by the National Natural Science Foundation of China under the Grant No. 60474077 and the Program for New Century Excellent Talents in University of China under the Grant No. NCET-05-0653.



2. Ant Colony Labor Division Model


2.1 Bonabeau's basic model

Bonabeau and his colleagues proposed the Fixed Response Threshold Model (FRTM) [12,13] through an in-depth investigation of the task allocation behavior of ant colonies. The model can be described briefly as follows: every ant has a fixed response threshold for each specific task, and the response threshold level of an individual reflects either differences between individuals or differences in the way individuals perceive task stimuli. Each task in the environment emits a stimulus (spur); when the spur of a task exceeds the response threshold of an ant, the ant begins to perform the task. When the ants performing a task exit it (their response thresholds lying above the spur of the task), the demand and spur of the task increase until the spur reaches the response threshold of some individual, thereby stimulating that ant to perform the task. Bonabeau and his colleagues experimented with this model and, comparing the results with existing data, found that it agrees with the actual situation. Some characteristics can be concluded from this model: dynamic adaptability; individual spontaneity and colony self-organization; robustness.

2.2 Extended labor division model

Because the model above is too simple, with single-characteristic tasks and ants, it cannot be used directly to solve complex task assignment problems. So we need to modify it and redesign the characteristics, task actions and learning rules of the ants [14]. First of all, we give the definitions of some variables in the model: $n$ is the number of ants in a colony, $m$ is the number of tasks in the environment, $A_i$ ($i = 1, 2, \ldots, n$) are the individuals in an ant colony, $T_j$ ($j = 1, 2, \ldots, m$) are the tasks to be done by the ant colony, $\theta_{ij}$ is the response threshold of ant $i$ corresponding to task $j$, $s_j$ ($j = 1, 2, \ldots, m$) are the environment spurs formed by the tasks, $\delta_j$ is the increment of the environment spur formed by task $j$, $\alpha_j$ is the efficiency of ants doing task $j$, measuring the decrease of demand caused by the activity of one ant, $S_{ij}(t)$ is the action state of ant $i$ corresponding to task $j$ at time $t$ (1 means active and 0 means inactive), $P(S_{ij} = 0 \to S_{ij} = 1)$ is the probability of ant $i$ taking part in task $j$, $P(S_{ij} = 1 \to S_{ij} = 0)$ is the probability of ant $i$ exiting task $j$, $\xi$ is the learning coefficient of ants (once an ant has learned to do a task, the response threshold for this task decreases), and $\varphi$ is the forgetting coefficient of ants (once an ant takes part in a task, its response thresholds corresponding to other tasks increase). The structure of the extended ant colony labor division model is described as follows.

(1) Environment spur. When a series of tasks $T = \{T_1, T_2, \ldots, T_m\}$ appears in the environment, each task has its own urgency degree; these are the spurs corresponding to the tasks in the original model. The higher the spur, the more likely ants will be attracted to do the task. An ant decides whether to join a task by this spur and its own response threshold. The environment spur increases if the task is not completed, as in Bonabeau's basic model: the urgency level of a sub task increases by a constant $\delta_j$ after each unit interval. According to the number of ants that have joined the task, part of it will be done; this is decided by $n_j^{act}$ and $\alpha_j$. The update rule of the spurs $s_j$ ($j = 1, 2, \ldots, m$) is:
$$s_j(t+1) = s_j(t) + \delta_j - \alpha_j \sum_{i=1}^{n} S_{ij}(t) \qquad (1)$$

$$n_j^{act} = \sum_{i=1}^{n} S_{ij}(t) \qquad (2)$$

(2) The characteristics of ants. Each ant has one or more thresholds corresponding to different tasks. The ability of ant $i$ corresponding to task $T_j$ is calculated from its core competence. Once ant $i$ takes part in sub task $T_j$, the thresholds of ant $i$ are updated by the following equations:

$$\theta_{il}(t+1) = \xi\,\theta_{il}(t), \quad l \in n_j \qquad (3)$$

$$\theta_{ik}(t+1) = \varphi\,\theta_{ik}(t), \quad k \ne j \text{ and } k \notin n_j \qquad (4)$$

In the equations, $n_j$ indicates the set of sub tasks similar to task $j$, and $\xi$ and $\varphi$ are the learning and forgetting coefficients, with $\xi \le 1$ and $\varphi \ge 1$.

(3) Probability of joining and exiting. We use the same rules as in the basic model mentioned above:

$$P(S_{ij} = 0 \to S_{ij} = 1) = \frac{s_j^{\,n}}{s_j^{\,n} + \theta_{ij}^{\,n}} \qquad (5)$$

$$P(S_{ij} = 1 \to S_{ij} = 0) = p \quad (p \text{ is a constant}) \qquad (6)$$

(4) Rules of simulation. In each discrete time period, inactive individuals decide whether to participate in a task according to their own thresholds and the stimulation of the tasks, following the rules above; active individuals decide whether to exit a task according to the state of its execution and their rewards. Ants have learning ability: they adjust their own response thresholds according to their past execution of tasks. Once a task is being executed, the spur formed by it decreases, so the probability that ants choose to perform it decreases. But when a task has had no ant choosing it for a long time, its spur increases to a high level until some ants choose it.
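A minimal C sketch of one simulation step of these rules, assuming the variables defined above stored as flat arrays (s[j], theta[i][j], state S[i][j]); the similarity sets $n_j$ and the threshold updates (3)-(4) are omitted, and the exponent in rule (5) is fixed at 2 to keep the sketch short, so those choices are assumptions.

    #include <stdlib.h>

    /* One discrete time step: inactive ants join task j with probability (5);
       active ants exit with constant probability p per rule (6); spurs then
       follow rule (1). */
    void frtm_step(int n_ants, int m_tasks, double *s, const double *delta,
                   const double *alpha, const double *theta, int *S, double p) {
        for (int i = 0; i < n_ants; i++)
            for (int j = 0; j < m_tasks; j++) {
                double u = (double)rand() / RAND_MAX;
                double sj = s[j], th = theta[i * m_tasks + j];
                if (!S[i * m_tasks + j]) {
                    double pj = sj * sj / (sj * sj + th * th);  /* rule (5) */
                    if (u < pj) S[i * m_tasks + j] = 1;
                } else if (u < p) {                             /* rule (6) */
                    S[i * m_tasks + j] = 0;
                }
            }
        for (int j = 0; j < m_tasks; j++) {                     /* rule (1) */
            int active = 0;
            for (int i = 0; i < n_ants; i++) active += S[i * m_tasks + j];
            s[j] += delta[j] - alpha[j] * active;
            if (s[j] < 0.0) s[j] = 0.0;      /* demand cannot go negative */
        }
    }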

3 Hybrid ant colony labor division algorithm model with optimization


3.1 The basic algorithm design ideas

We define some variables as follows: $l$ is the number of ant colonies, $C$ is the number of iteration cycles of the algorithm, and $\alpha_{ij}$ is the quantity of work that ant $i$ does on task $j$ in a unit of time.

The algorithm initially sets up $l$ ant colonies, with $n$ ants in every colony and $m$ tasks in the environment. Every ant colony thus has an $n \times m$ response threshold matrix for the abilities of its ants and a spur vector $s$ with $m$ elements. The initial threshold matrix and vector $s$ are the same for every colony. Then every ant colony completes the tasks independently according to the equations above. An iteration cycle terminates when all the ant colonies have finished their tasks. At the end of each iteration cycle, the algorithm judges whether the assignments created by every colony meet the constraints and whether the corresponding objective values have improved, and the colonies are rewarded or punished accordingly. If the number of iterations reaches $C$, the algorithm stops. The differences between the proposed algorithm and the labor division model are: (1) the algorithm uses more than one ant colony, although each colony runs independently according to the labor division model; (2) for simplicity, the labor division model assumes that the quantity of work every ant does on a task is equal, which is inappropriate when the algorithm is used to solve task assignment problems, so we extend it to $\alpha_{ij}$; (3) since $\alpha_j$ is changed to $\alpha_{ij}$, the formula that calculates $s$ is also changed to the following:

$$s_j(t+1) = s_j(t) + \delta_j - \sum_{i=1}^{n} \alpha_{ij} S_{ij}(t) \qquad (7)$$

3.2 Elite ant colony

Referring to the elite strategy in GA and the elite ant in ant algorithms, we propose a new concept: the elite colony. An elite colony is a colony that achieves the optimal solution in an iteration cycle. According to whether this optimal solution is the global optimal solution, it is divided into the global elite colony and the local elite colony. The optimal solution achieved by the global elite colony is the best over all previous iterations, while the optimal solution achieved by a local elite colony is only the best within its own iteration. In each iteration cycle, every colony obtains its solution independently. Elite colonies are used to save the best solutions of the previous iterations, covering almost all the optimal solutions. At the same time, the colonies will not all converge to the same solution, which helps avoid local optima. After each iteration cycle, both the local elite colony and the global elite colony are rewarded.

3.3 Reward and punishment strategy

(1) Reward strategy. In each cycle, all the solutions achieved by the ant colonies are compared; the best value is the optimal solution of this cycle, the colony that achieves it is the local elite colony, and that colony is rewarded. If the local optimal solution is the global optimal solution as well, the corresponding colony is the global elite colony and is rewarded a second time. Reward method: traverse all the ants in the colony; if an ant is engaged in a task, update its threshold corresponding to that task according to formula (3).

(2) Punishment strategy. In every cycle, after the trial assignment of every ant colony according to the labor division algorithm, if the trial assignment of a certain colony does not meet all the constraints, that colony is punished. Punishment method: traverse all the ants in the colony; if an ant is engaged in a task, update its threshold corresponding to that task according to formula (4).

3.4 Optimization mechanisms

From the ant colony labor division model above, we can see that formula (5) is the core formula, used to calculate the probability that an ant takes part in a task. In formula (5), when $\theta$ is kept constant, the larger $s$ is, the greater $P$ is: the stronger the environmental stimulation, the greater the probability that an individual responds to the task. However, when $s$ is kept constant, the larger $\theta$ is, the smaller $P$ is: the larger the threshold of an individual, the smaller the probability that it responds to the task. From the above we can also see that the two attributes, the environmental stimulation $s$ and the individual's response threshold $\theta$, are the most crucial variables; all other formulae and attributes are based on them. In an optimization process, a very important feature is that past optimization information can be reflected in the present search so as to guide it. In this algorithm, after several rounds of vertical and horizontal comparison, the past optimization information is saved in the response threshold matrix of every ant in the colonies through the reward and punishment strategy. In turn, these thresholds guide the ants in choosing tasks. The overall optimization process of the algorithm is shown in Fig. 1, where AC represents an ant colony, each horizontal row represents a cycle, the subscript represents the colony number within a cycle, and the superscript represents the cycle number.
[Fig. 1 shows colonies $AC_1^{(c)}, AC_2^{(c)}, \ldots, AC_l^{(c)}$ arranged in rows for Terms $1, 2, \ldots, C$; within each term the colonies undergo a horizontal comparison, across terms a vertical global comparison, and the response threshold matrices are updated between terms.]

Fig.1 Graphical representation of the optimization process of the algorithm

(1) Ant colony information. Each ant colony contains the following information: the environmental stimulation vector, the response threshold matrix of each ant, and the learning and forgetting coefficients. The initial value of the environmental stimulation vector is the same for every colony; both the learning and forgetting coefficients remain unchanged during the entire optimization process.
(2) Response threshold matrix. In the initialization of the first cycle, the response threshold matrix is generated randomly for each ant colony. Then, at the end of each cycle, after the results have been compared, the response threshold matrix is updated automatically according to the rule described in the previous section, pushing the ant colonies in a better direction; the detailed update process is described below.
(3) Horizontal comparison. At the end of each cycle, the results of all ant colonies in the cycle are compared, and each colony is rewarded or punished according to the comparison result.
(4) Vertical comparison. In order to save the optimization information of the colony that has achieved the global optimal value up to the current cycle, after the horizontal comparison the result is compared with the historical optimal value, and each colony is then rewarded or punished with the methods described in Section 3.3.
Fig. 1 depicts the entire optimization process of the several ant colonies. In this process, each ant colony completes its tasks and updates its response threshold matrix according to Fig. 2.
[Fig. 2 shows the cycle: initial threshold matrix -> calculate the allocation result -> reward and punish -> update the matrix.]

Fig.2 Evolution of each ant colony

Each ant colony completes its tasks according to its own characteristics; then the results of all the colonies are compared and the response threshold matrices are updated to accomplish their evolution. (1) Each ant colony creates an initial threshold matrix and updates it after every term. (2) Each ant colony completes all its tasks by the extended ant colony labor division model in every term. (3) Every time a term is completed, the threshold matrices are updated according to the results of the vertical and horizontal comparisons, with the update rules described above. An update example is given below. Assume that the threshold matrix of ant colony $l$ at the beginning of term $t$ is $AC_l^{(t)}$, with 3 ants in the colony and 4 tasks. Suppose the comparison at the end of this term finds that ant 1 on task 1, ant 2 on task 3, and ant 3 on tasks 2 and 4 are to be rewarded, that ant 2 on task 2 and ant 3 on task 1 are to be punished, and that ant 3 on task 2 is part of the global optimal solution. Then the threshold matrix of colony $l$ is updated as follows:

$$AC_l^{(t)} = \begin{pmatrix} \theta_{11} & \theta_{12} & \theta_{13} & \theta_{14} \\ \theta_{21} & \theta_{22} & \theta_{23} & \theta_{24} \\ \theta_{31} & \theta_{32} & \theta_{33} & \theta_{34} \end{pmatrix} \xrightarrow{\text{reward and punish}} AC_l^{(t+1)} = \begin{pmatrix} \theta'_{11} & \theta_{12} & \theta_{13} & \theta_{14} \\ \theta_{21} & \theta'_{22} & \theta'_{23} & \theta_{24} \\ \theta'_{31} & \theta'_{32} & \theta_{33} & \theta'_{34} \end{pmatrix}$$

where the rewarded entries $\theta'_{11}$, $\theta'_{23}$, $\theta'_{32}$ and $\theta'_{34}$ are decreased according to formula (3) ($\theta'_{32}$ twice, since ant 3 on task 2 is also rewarded as part of the global optimal solution), the punished entries $\theta'_{22}$ and $\theta'_{31}$ are increased according to formula (4), and the remaining entries are unchanged.

The flow chart of the hybrid ant colony labor division algorithm with optimization is described in Fig.3.


[Fig. 3 shows the flow: initialization, period := 1; run every ant colony until all tasks are completed; compare the results and decide the rewards and punishments; update the response threshold matrices; period := period + 1; if period > C then end, otherwise repeat.]

Fig.3 Algorithm flow chart
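A minimal C sketch of this outer loop follows. The helper functions run_labor_division(), objective(), feasible(), reward() and punish() are hypothetical names wrapping the colony-level rules of Sections 2 and 3; the sketch only illustrates the cycle structure of Fig. 3, not the authors' implementation.

    /* Hypothetical colony-level helpers (names are assumptions). */
    void run_labor_division(int colony);
    double objective(int colony);
    int feasible(int colony);
    void reward(int colony);   /* rule (3) on the colony's active tasks */
    void punish(int colony);   /* rule (4) on the colony's active tasks */

    /* Outer loop: run l colonies for C cycles, compare horizontally and
       against the global best, and reward/punish threshold matrices. */
    void hybrid_acld(int l, int C) {
        double global_best = -1.0;            /* objective is maximized */
        for (int period = 1; period <= C; period++) {
            int local_elite = -1;
            double local_best = -1.0;
            for (int col = 0; col < l; col++) {
                run_labor_division(col);
                double v = objective(col);
                if (feasible(col) && v > local_best) {
                    local_best = v;
                    local_elite = col;
                }
            }
            for (int col = 0; col < l; col++) {
                if (!feasible(col)) punish(col);
                if (col == local_elite) {
                    reward(col);              /* local elite colony */
                    if (local_best > global_best) {
                        global_best = local_best;
                        reward(col);          /* second reward: global elite */
                    }
                }
            }
        }
    }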

4 A case and some analysis


4.1 Problem description and modeling

Seven small enterprises form a strategic federation through negotiation for the purpose of evading risk. The members of the federation decide to cooperate as far as possible with each other, so that they can bring all their advantages into play and reduce risk maximally whenever an opportunity arises. Now one member accepts an order that it cannot fulfill in time by itself, so it needs to cooperate with other members. Through analysis, the order can be divided into 5 sub tasks according to the enterprises' characteristics. The risk matrix of the five sub tasks for every enterprise, obtained through the analysis and scoring of an expert group, is listed in Tab. 1 below.
Tab.1 Risk matrix of finishing the tasks

Member (m) \ Sub task (n)   1     2     3     4     5
1                           0.2   0.5   1     0.5   1
2                           1     0.3   1     1     0.4
3                           1     0.5   0.5   1     0.5
4                           1     1     0.2   1     0.2
5                           1     1     0.5   0.5   1
6                           0.4   1     1     0.4   1
7                           0.4   0.3   0.4   0.4   1

This is a typical task assignment problem. We introduce decision variables $x_{ij}$ as follows:

$$x_{ij} = \begin{cases} 1, & \text{member } i \text{ does sub task } j \\ 0, & \text{member } i \text{ does not do sub task } j \end{cases} \qquad (8)$$

Analyzing the problem above, we obtain the following constraints: (1) the largest number of members processing the same sub task at the same time is $N_c$; in this case $N_c = 3$. (2) For each sub task, the largest acceptable finishing risk is $P_c$; in this case $P_c = 0.2$. The optimization goal is to minimize the risk of finishing all the sub tasks, that is, to maximize the probability of finishing all the tasks. We shift the minimization goal to a maximization goal for ease of solution. So we can formulate the following model subject to the above conditions.

$$\max f(x) = \prod_{i=1}^{m} \prod_{j=1}^{n} \bigl[ 1 - p(i,j) \bigr]^{x_{ij}}$$

$$\text{s.t.} \quad \begin{cases} x_{ij} \in \{0, 1\} \\ \sum_{i=1}^{m} x_{ij} \le N_c, & j = 1, \ldots, n \\ \prod_{i=1}^{m} p(i,j)^{x_{ij}} \le P_c, & j = 1, \ldots, n \end{cases} \qquad (9)$$
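A minimal C sketch of evaluating a candidate assignment against model (9), using the 7 by 5 dimensions of the case in Tab. 1; the array layout and the sentinel return value are illustrative assumptions.

    #define M 7                      /* member enterprises */
    #define N 5                      /* sub tasks          */

    /* Objective (9): product of success probabilities over all assignments.
       Returns -1.0 if the assignment violates the Nc or Pc constraints
       (an uncovered task keeps risk 1.0 > Pc and is therefore infeasible). */
    double evaluate(const int x[M][N], const double p[M][N],
                    int Nc, double Pc) {
        double f = 1.0;
        for (int j = 0; j < N; j++) {
            int count = 0;
            double risk = 1.0;
            for (int i = 0; i < M; i++)
                if (x[i][j]) { count++; risk *= p[i][j]; f *= 1.0 - p[i][j]; }
            if (count > Nc || risk > Pc) return -1.0;   /* infeasible */
        }
        return f;
    }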

4.2 The results and analysis

Ordinary operations research methods cannot solve this problem, because consecutive parameters are multiplied together in the third constraint. So we use the proposed algorithm to find an approximate solution in a short time. We implemented the computation by programming in the following simulation environment: a Celeron 1.3 GHz CPU with 512 MB of memory, compiled in VC6. Repeating the program 20 times, we obtain an average running time of 0.8 s. Setting the learning coefficient $\xi$ to 1.0 and the forgetting coefficient $\varphi$ to 1.0, 1.05, 1.1 and 1.2 in turn, we obtain the results shown in Fig. 4.

Fig.4 Results of the algorithm for different forgetting coefficients

Fig.5 Results of the algorithm for different learning coefficients

It can be inferred from Fig. 4 that the larger the forgetting coefficient is, the quicker the convergence of the algorithm. It can also be seen that once the coefficient reaches a certain value, the algorithm falls into a local optimum because of its overly quick convergence. Analyzing the theory of the algorithm, we find that the forgetting coefficient influences the transformation of the threshold matrix: the larger the forgetting coefficient, the greater the change of the threshold matrix at each update, and thus the quicker the convergence. But when the coefficient reaches a certain value, the algorithm reaches a local optimum and can hardly escape from it, because the range of each transformation is too large. The situation is the same for the learning coefficient. Next we keep the forgetting coefficient fixed and change the learning coefficient to 1.0, 0.95, 0.9 and 0.8 respectively; the results after computation are shown in Fig. 5. It can be inferred from Figs. 4 and 5 that the changes of both the learning coefficient and the forgetting coefficient in the situations above have some positive influence on convergence. The four curves in Fig. 5 all have approximately the same trends as the curves in Fig. 4, except that the convergence time of each curve moves up, which is caused by the additional learning coefficient in Fig. 5 while the forgetting coefficient is the same as in Fig. 4. Moreover, the local optimum is much less reasonable because of the overly quick convergence in Fig. 5.


5 Conclusions
A class of hybrid ant colony labor division algorithms with optimization has been put forward and applied to the task assignment problem. In the algorithm, the learning coefficient and the forgetting coefficient are added on the basis of the threshold update strategy of the labor division model. The elite ant colony concept is also introduced, based on the merits of the outstanding member strategy of the genetic algorithm and of the ant colony foraging optimization algorithm, which ensures that the algorithm can exploit not only the self-organization characteristic of the ant colony labor division model but also the optimizing characteristic of the ant colony foraging algorithm. Experiments show that the convergence speed of the algorithm is high and a satisfactory solution can be reached in a short time; it is an effective algorithm for solving the task assignment problem.
References

[1] Gan Yingai. Operational Research. Beijing: Tsinghua University Press, 2003. 128-134 (in Chinese)
[2] Su Xiangding, Sun Tong, Ma Lin. Application of difference method in unequally assignment problem. Computer Engineering, 2005, 31(22): 178-180 (in Chinese)
[3] Zhong Yiwen, Yang Jiangang. Immune genetic algorithm for independent tasks assignment problem in heterogeneous environments. Mini-Micro Systems, 2006, 27(8): 1498-1502 (in Chinese)
[4] Man Zailong. Task Assignment and Scheduling Based on Genetic Algorithm. MS Thesis. Qingdao: Qingdao University, 2004 (in Chinese)
[5] Xiao Jixian, Han Runchun, Meng Xiangyun. A class of non-sure assignment problem and solution. Industry Technology Economy, 2005, 24(4): 107-108 (in Chinese)
[6] Braun T, Siegel H, Beck N. A comparison study of static mapping heuristics for a class of meta-tasks on heterogeneous computing systems. In: 8th IEEE Heterogeneous Computing Workshop, 1999. 15-29
[7] Correa R C, Ferreira A, Rebreyend P. Scheduling multiprocessor tasks with genetic algorithms. IEEE Transactions on Parallel and Distributed Systems, 1999, 10(8): 825-837
[8] Jiao Licheng, Wang Lei. A novel genetic algorithm based on immunity. IEEE Transactions on Systems, Man and Cybernetics-Part A, 2000, 30(5): 552-561
[9] Dorigo M, Gambardella L M. Ant colony system: a cooperative learning approach to the traveling salesman problem. IEEE Transactions on Evolutionary Computation, 1997, 1(1): 53-66
[10] Dorigo M, Maniezzo V, Colorni A. Ant system: optimization by a colony of cooperating agents. IEEE Transactions on Systems, Man and Cybernetics-Part B, 1996, 26(1): 29-41
[11] Wu Chunming, Chen Zhi, Jiang Ming. Research on initialization of ant system and configuration of parameters for different TSP problems in ant algorithm. Acta Electronica Sinica, 2006, 34(8): 1530-1533 (in Chinese)
[12] Bonabeau E, Theraulaz G, Deneubourg J-L. Quantitative study of the fixed threshold model for the regulation of division of labour in insect societies. Proc. Roy. Soc. London B, 1996, 263: 1565-1569
[13] Dorigo M, Bonabeau E, Theraulaz G. Ant algorithms and stigmergy. Future Generation Computer Systems, 2000, 16: 851-871
[14] Li Shiyong, Chen Yongqiang, Li Yan. Ant Colony Algorithms with Applications. Harbin: Harbin Institute of Technology Press, 2004. 29 (in Chinese)


Semicontinuity of the Solution Mapping of ε-Vector Equilibrium Problem


Kenji Kimura1, Yeong-Cheng Liou2, Jen-Chih Yao3
1 Department of Applied Mathematics, National Sun Yat-sen University, Kaohsiung, Taiwan 80424, R.O.C. E-mail address: kimura@math.nsysu.edu.tw 2 Department of Information Management, Cheng Shiu University, No.840, Chengcing Rd., Niaosong Township, Kaohsiung County 833, Taiwan, R.O.C. E-mail address: ycliou@csu.edu.tw 3 Corresponding author. Department of Applied Mathematics, National Sun Yat-sen University, Kaohsiung, Taiwan 80424, R.O.C. E-mail address: yaojc@math.nsysu.edu.tw

Abstract In this paper, we study the ε-vector equilibrium problem (ε-VEP). Several existence results for ε-VEP are established. We also investigate continuity properties of the solution mapping of ε-VEP. In particular, a result concerning the lower semicontinuity of the solution mapping of ε-VEP is presented.
Key words ε-Vector equilibrium problem, Vector equilibrium problem, C-quasiconvexity, C-continuity, C-compactness

1. Introduction
The concept of ε-solution is very suitable for cases where the feasible region is a non-convex or non-closed set. In fact, the original problems are special cases of ε-approximate problems; a famous example is Ekeland's variational principle, which is an ε-solution rule for optimization problems. The concept of ε-solution is also the basis of numerical computing, e.g., stability, well-posedness and so on. An interesting example of the ε-equilibrium problem is the generalized game for ε-strategies in economics. The notion of approximate solutions adopted in this paper follows the concept of ε-efficiency originally introduced in multiple objective programming by Loridan [17] in 1984. Two years later, White [27] introduced six alternative definitions of ε-efficient solutions and established the relationships between these concepts. ε-efficiency for more general vector optimization problems is considered in [19, 21]. For the concept of ε-solution for (vector) variational inequality problems, Tammer [22, 23] studied the existence and a generalization of Ekeland's variational principle. Since the vector equilibrium problem is a very general mathematical model covering vector optimization, vector variational inequalities and so on as special cases, the main motivation of this paper is to study the behavior of the solution map of parametric vector equilibrium problems by following the idea of Loridan [17].

Let $X$ be a real Hausdorff topological vector space and $Z$ a real topological vector space. A set $C \subseteq Z$ is said to be a cone if $\lambda x \in C$ for any $\lambda \ge 0$ and any $x \in C$. The cone $C$ is called solid if it has nonempty interior, i.e., $\mathrm{int}\,C \ne \emptyset$. A cone $C$ is said to be pointed if $C \cap (-C) = \{0_Z\}$, where $0_Z$ denotes the zero vector of $Z$. For any set $A \subseteq Z$, we let $\mathrm{bd}\,A$ and $\mathrm{cl}\,A$ denote the boundary and closure of $A$, respectively. Also, we denote by $A^c$ the complement of the set $A$. For any set $A$ of a real vector space, the convex hull of $A$, denoted by $\mathrm{co}\,A$, is the smallest convex set containing $A$.

Let $f : X \times X \to Z$ and $K \subseteq X$. For fixed $\epsilon \in \mathrm{int}\,C$, the ε-vector equilibrium problem (ε-VEP, for short) is to find $x \in K$ such that

$$f(x, y) + \epsilon \notin -\mathrm{int}\,C \quad \text{for all } y \in K.$$

Let $\Phi : \mathrm{int}\,C \to 2^X$ be the set-valued mapping such that $\Phi(\epsilon)$ is the solution set of ε-VEP for $\epsilon \in \mathrm{int}\,C$, i.e.,

$$\Phi(\epsilon) = \{ x \in K : f(x, y) + \epsilon \notin -\mathrm{int}\,C \text{ for all } y \in K \}.$$

We remark that ε-VEP is closely related to the vector equilibrium problem (VEP), which is to find $x \in \mathrm{cl}\,K$ such that

$$f(x, y) \notin -\mathrm{int}\,C \quad \text{for all } y \in \mathrm{cl}\,K.$$


Let $S$ denote the solution set of VEP, i.e., $S = \{ x \in \mathrm{cl}\,K : f(x, y) \notin -\mathrm{int}\,C \text{ for all } y \in \mathrm{cl}\,K \}$. If $K$ is closed, then VEP becomes the ordinary vector equilibrium problem. The classical vector equilibrium problem and its extensions have been extensively studied in the literature; see [1, 2, 3, 6, 7, 8, 13, 20, 28] and the references therein. We may regard solutions of ε-VEP as approximate solutions of the problem VEP. We remark that $S \ne \emptyset$ does not imply $\Phi(\epsilon) \ne \emptyset$ for all $\epsilon \in \mathrm{int}\,C$.

Example 1. Let $X = \mathbb{R}$, $Z = \mathbb{R}^2$, $C = \mathbb{R}^2_+$, and $K = (0, 2)$. Then $f : X \times X \to Z$ can be defined in such a way that $0 \in S$ but $\Phi(\epsilon) = \emptyset$ for each $\epsilon > 0$.

The purpose of this paper is to establish the relationship between the sets $\Phi(\epsilon)$ and $S$ for $\epsilon \in \mathrm{int}\,C$. We also investigate continuity properties of the solution mapping $\Phi : \mathrm{int}\,C \to 2^X$. In particular, a result concerning the lower semicontinuity of $\Phi$ is presented. We observe that the results in this paper can be employed to study the behavior of solution maps of parametric vector optimization problems, parametric vector variational inequalities, and parametric generalized games.
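As a simple scalar illustration of the relationship between $\Phi(\epsilon)$ and $S$ (a worked example of our own, consistent with the definitions above but not taken from the paper), both sets can be computed explicitly:

\[
\begin{aligned}
&\text{Let } X = Z = \mathbb{R},\; C = \mathbb{R}_+,\; K = (0,1),\; f(x,y) = x - y. \\
&\text{VEP: find } x \in \operatorname{cl} K = [0,1] \text{ with } x - y \ge 0 \ \forall\, y \in [0,1]
  \;\Longrightarrow\; S = \{1\}. \\
&\text{$\epsilon$-VEP: find } x \in K \text{ with } x - y + \epsilon \ge 0 \ \forall\, y \in (0,1)
  \;\Longrightarrow\; \Phi(\epsilon) = [\,1-\epsilon,\,1) \cap (0,1).
\end{aligned}
\]

Here $\Phi(\epsilon) \ne \emptyset$ for every $\epsilon \in \operatorname{int} C = (0,\infty)$ even though $1 \notin K$, and $\Phi(\epsilon)$ shrinks toward the VEP solution set $\{1\}$ as $\epsilon \downarrow 0$, illustrating the approximation behavior studied below.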

2. Preliminaries
Definition 2.1 (C-quasiconvexity, [9, 18, 24]). Let $X$ be a vector space, and $Z$ also a vector space with a partial ordering defined by a pointed convex cone $C$. Suppose that $K$ is a convex subset of $X$ and that $f$ is a vector-valued function from $K$ to $Z$. Then $f$ is said to be C-quasiconvex on $K$ if it satisfies one of the following two equivalent conditions:
(i) for each $x_1, x_2 \in K$ and $\lambda \in [0, 1]$, $f(\lambda x_1 + (1 - \lambda) x_2) \in z - C$ for all $z \in C(f(x_1), f(x_2))$, where $C(f(x_1), f(x_2))$ is the set of upper bounds of $f(x_1)$ and $f(x_2)$, i.e., $C(f(x_1), f(x_2)) = (f(x_1) + C) \cap (f(x_2) + C)$;
(ii) for each $z \in Z$, the set $\{ x \in K : f(x) \in z - C \}$ is convex or empty.

Definition 2.2 (C-proper quasiconvexity, [24]). Let $X$ be a vector space, and $Z$ also a vector space with a partial ordering defined by a pointed convex cone $C$. Suppose that $K$ is a convex subset of $X$ and that $f$ is a vector-valued function from $K$ to $Z$. Then $f$ is said to be C-properly quasiconvex on $K$ if for every $x_1, x_2 \in K$ and $\lambda \in [0, 1]$ we have either $f(\lambda x_1 + (1 - \lambda) x_2) \in f(x_1) - C$ or $f(\lambda x_1 + (1 - \lambda) x_2) \in f(x_2) - C$; $f$ is said to be strictly C-properly quasiconvex on $K$ if for every $x_1, x_2 \in K$ with $x_1 \ne x_2$ and $\lambda \in (0, 1)$ we have either $f(\lambda x_1 + (1 - \lambda) x_2) \in f(x_1) - \mathrm{int}\,C$ or $f(\lambda x_1 + (1 - \lambda) x_2) \in f(x_2) - \mathrm{int}\,C$.

Usually we define cone concavity of $f$ by the fact that $-f$ is cone convex. However, the following definition of cone concavity is also natural.

Definition 2.3 (C-quasiconcavity). Let $X$ be a vector space, and $Z$ also a vector space with a partial ordering defined by a pointed convex cone $C$. Suppose that $K$ is a convex subset of $X$ and that $f$ is a vector-valued function from $K$ to $Z$. Then $f$ is said to be C-quasiconcave on $K$ if for each $z \in Z$ the set $\{ x \in K : f(x) \in z + C \}$ is convex or empty; $f$ is said to be strictly C-quasiconcave on $K$ if for each $z \in Z$ the set $\{ x \in K : f(x) \in z + \mathrm{int}\,C \}$ is convex or empty.

Proposition 2.1. Let $X$ be a nonempty compact subset of a real topological vector space and $Z$ a real topological vector space with a proper solid convex cone $C$. Suppose that $f : X \to Z$ is $(-C)$-properly quasiconvex on $X$. Then $f$ is C-quasiconcave on $X$.
Proof. Let $z \in Z$ and $x_1, x_2 \in \{ x \in X : f(x) \in z + C \}$. Since $f$ is $(-C)$-properly quasiconvex on $X$, for each $x_\lambda \in [x_1, x_2]$ we have $f(x_\lambda) \in f(x_1) + C$ or $f(x_\lambda) \in f(x_2) + C$, and hence $f(x_\lambda) \in z + C + C = z + C$. Therefore $\{ x \in X : f(x) \in z + C \}$ is convex, i.e., $f$ is C-quasiconcave on $X$.
Definition 2.4 (C-continuity, [18, 25]). Let $X$ be a topological space, and $Z$ a topological vector space with a partial ordering defined by a solid pointed convex cone $C$. Suppose that $f$ is a vector-valued function from $X$ to $Z$. Then $f$ is said to be C-continuous at $x \in X$ if it satisfies one of the following two equivalent conditions:
(i) for any neighbourhood $V_{f(x)} \subseteq Z$ of $f(x)$, there exists a neighbourhood $U_x \subseteq X$ of $x$ such that $f(u) \in V_{f(x)} + C$ for all $u \in U_x$;
(ii) for any $k \in \mathrm{int}\,C$, there exists a neighbourhood $U_x \subseteq X$ of $x$ such that $f(u) \in f(x) - k + \mathrm{int}\,C$ for all $u \in U_x$.
Moreover, a vector-valued function $f$ is said to be C-continuous on $X$ if $f$ is C-continuous at every $x$ in $X$.

Remark 1. Whenever $Z = \mathbb{R}$ and $C = \mathbb{R}_+$, C-continuity and $(-C)$-continuity are the same as ordinary lower and upper semicontinuity, respectively. In [25, Definition 2.1], a C-continuous function is called a C-lower semicontinuous function, and a $(-C)$-continuous function is called a C-upper semicontinuous function.

Proposition 2.2. [24, Proposition 2.1] Let $X$ be a topological space, and $Z$ a topological vector space with a partial ordering defined by a solid pointed convex cone $C$. Suppose that $f$ is a vector-valued function from $X$ to $Z$. Then $f$ is C-continuous on $X$ if and only if for each $z \in Z$, $f^{-1}(z + \mathrm{int}\,C)$ is an open subset of $X$.

Definition 2.5 (Continuity for set-valued mappings, see also [5]). Let $X$ and $Y$ be two topological spaces, and $T : X \to 2^Y$ a set-valued mapping.
(i) $T$ is said to be upper semicontinuous (u.s.c. for short) at $x \in X$ if for each open set $V$ containing $T(x)$, there is an open set $U$ containing $x$ such that for each $z \in U$, $T(z) \subseteq V$; $T$ is said to be u.s.c. on $X$ if it is u.s.c. at all $x \in X$.
(ii) $T$ is said to be lower semicontinuous (l.s.c. for short) at $x \in X$ if for each open set $V$ with $T(x) \cap V \ne \emptyset$, there is an open set $U$ containing $x$ such that for each $z \in U$, $T(z) \cap V \ne \emptyset$; $T$ is said to be l.s.c. on $X$ if it is l.s.c. at all $x \in X$.
(iii) $T$ is said to be continuous at $x \in X$ if $T$ is both u.s.c. and l.s.c. at $x$; $T$ is said to be continuous on $X$ if it is both u.s.c. and l.s.c. at each $x \in X$.

Lemma 2.1. [12, p.144] Let $X$ and $Y$ be two topological spaces, and $T : X \to 2^Y$ a multivalued mapping.
(i) If for any $x \in X$, $T(x)$ is compact, then $T$ is u.s.c. at $x \in X$ if and only if for any net $\{x_\alpha\} \subseteq X$ such that $x_\alpha \to x$ and for every $y_\alpha \in T(x_\alpha)$, there exist $y \in T(x)$ and a subnet $\{y_\beta\}$ of $\{y_\alpha\}$ such that $y_\beta \to y$.
(ii) $T$ is l.s.c. at $x \in X$ if and only if for any net $\{x_\alpha\} \subseteq X$ with $x_\alpha \to x$ and for any $y \in T(x)$, there is a net $\{y_\alpha\}$ such that $y_\alpha \in T(x_\alpha)$ and $y_\alpha \to y$.
Definition 2.6 (C-compactness, [18]). Let $C$ be a nonempty convex cone in a Hausdorff topological space $Z$. We say $E \subseteq Z$ is C-compact if any cover of $E$ of the form $\{ V_\alpha + C : V_\alpha \text{ open} \}$ admits a finite subcover.

Lemma 2.2. [18, Theorem 7.2] Let $X$ be a nonempty compact convex subset of a real Hausdorff topological vector space. Let $Z$ be a real topological vector space with a solid pointed convex cone $C \subseteq Z$. Suppose that $f$ is a vector-valued function from $X$ to $Z$. If $f$ is C-continuous, then $\bigcup_{x \in X} \{f(x)\}$ is C-compact.

Definition 2.7 (KKM-map). Let $X$ be a topological vector space, and $K$ a nonempty subset of $X$. Suppose that $F$ is a multifunction from $K$ to $2^X$. Then $F$ is said to be a KKM-map if

$$\mathrm{co}\{x_1, \ldots, x_n\} \subseteq \bigcup_{i=1}^{n} F(x_i)$$

for each finite subset $\{x_1, \ldots, x_n\}$ of $K$.

Remark 2. Obviously, if $F$ is a KKM-map, then $x \in F(x)$ for each $x \in K$.

Lemma 2.3 (Fan-KKM; see [14]). Let $X$ be a Hausdorff topological vector space, $K$ a nonempty subset of $X$, and let $G$ be a multifunction from $K$ to $2^X$. Suppose that $G$ is a KKM-map and that $G(x)$ is a closed subset of $X$ for each $x \in K$. If $G(\bar{x})$ is compact for at least one $\bar{x} \in K$, then $\bigcap_{x \in K} G(x) \ne \emptyset$.

Proposition 2.4. Let $Z$ be a real topological vector space, $A$ a subset of $Z$, and $C$ a solid convex cone in $Z$. If $A \cap (-\mathrm{int}\,C) = \emptyset$, then $(A + \mathrm{cl}\,C) \cap (-\mathrm{int}\,C) = \emptyset$.
Proof. Suppose to the contrary that there exists $z \in (A + \mathrm{cl}\,C) \cap (-\mathrm{int}\,C)$. Then there exist $a \in A$ and $c \in \mathrm{cl}\,C$ such that $z = a + c$, and hence $a = z - c \in -\mathrm{int}\,C - \mathrm{cl}\,C \subseteq -\mathrm{int}\,C$, by Proposition 2.3. This contradicts the fact that $A \cap (-\mathrm{int}\,C) = \emptyset$. Hence $(A + \mathrm{cl}\,C) \cap (-\mathrm{int}\,C) = \emptyset$.

Proposition 2.5. Let $Z$ be a real topological vector space and $C$ a solid pointed convex cone in $Z$ with $k \in \mathrm{int}\,C$. Then the following properties hold: (i) for every $z \in Z$ there exists $t \in \mathbb{R}$ such that $z \in t k + \mathrm{int}\,C$; (ii) for every $z \in \mathrm{int}\,C$ there exists $t > 0$ such that $z - t k \in \mathrm{int}\,C$.

Proof. (i) Let $z \in Z$. The set $-k + \mathrm{int}\,C$ is a neighbourhood of the zero vector of $Z$. Since $Z$ is a topological vector space, each neighbourhood of the zero vector is absorbing. Hence there exists $\lambda > 0$ such that $\lambda z \in -k + \mathrm{int}\,C$, i.e., $z \in \lambda^{-1}(-k + \mathrm{int}\,C) = -\lambda^{-1} k + \mathrm{int}\,C$, so (i) holds with $t = -\lambda^{-1}$. (ii) Let $z \in \mathrm{int}\,C$. Then there exists a balanced neighbourhood $U$ of the zero vector such that $z + U \subseteq \mathrm{int}\,C$. Since $Z$ is a topological vector space, there exists $\lambda > 0$ such that $\lambda k \in U$. Hence $z - \lambda k \in z + U \subseteq \mathrm{int}\,C$, so (ii) holds with $t = \lambda$.

3. Main Results
In this section, we establish several results for ε-vector equilibrium problems. First we show that $\Phi(\epsilon)$ is not empty for $\epsilon \in \mathrm{int}\,C$ under suitable conditions.

Theorem 3.1. Let $X$ be a real Hausdorff topological vector space. Let $Z$ be a real topological vector space with a solid pointed convex cone $C \subseteq Z$. Suppose that $K$ is a nonempty subset of $X$ and that $f$ is a vector-valued function from $X \times X$ to $Z$. Also we assume the following conditions:
(i) $S := \{ x \in \mathrm{cl}\,K : f(x, y) \notin -\mathrm{int}\,C \text{ for all } y \in \mathrm{cl}\,K \} \ne \emptyset$;
(ii) $\mathrm{cl}\,K$ is compact;
(iii) $f$ is C-continuous on $X \times X$.
Then ε-VEP has at least one solution for each $\epsilon \in \mathrm{int}\,C$.

Proof. Let $\epsilon \in \mathrm{int}\,C$ and $\bar{x} \in S$. By condition (iii), for each $y \in \mathrm{cl}\,K$ there are neighbourhoods $u_y$ of $\bar{x}$ and $v_y$ of $y$ such that $f(u, v) \in f(\bar{x}, y) - \epsilon + \mathrm{int}\,C$ for all $u \in u_y$ and $v \in v_y$. Since $\bigcup_{y \in \mathrm{cl}\,K} v_y \supseteq \mathrm{cl}\,K$ and $\mathrm{cl}\,K$ is compact, we can choose $y_1, \ldots, y_n \in \mathrm{cl}\,K$ such that $\bigcup_{i=1}^{n} v_{y_i} \supseteq \mathrm{cl}\,K$. Then for $U := \bigcap_{i=1}^{n} u_{y_i}$, we have $f(u, y) \in f(\bar{x}, y_i) - \epsilon + \mathrm{int}\,C$ for all $u \in U$ and $y \in v_{y_i}$, $i = 1, \ldots, n$. Since $\bar{x} \in S$ and $y_1, \ldots, y_n \in \mathrm{cl}\,K$, we have $f(\bar{x}, y_i) \notin -\mathrm{int}\,C$, $i = 1, \ldots, n$, from which it follows that $f(u, y) + \epsilon \notin -\mathrm{int}\,C$ for all $u \in U$ and $y \in \mathrm{cl}\,K$. Moreover, $K \cap U \ne \emptyset$ because $\bar{x} \in \mathrm{cl}\,K$. Let $x \in K \cap U$. Then $f(x, y) + \epsilon \notin -\mathrm{int}\,C$ for all $y \in \mathrm{cl}\,K$. In particular, $x \in \Phi(\epsilon)$. Therefore the problem ε-VEP has at least one solution.

Example 2. Let $f : \mathbb{R} \times \mathbb{R} \to \mathbb{R}^2$, $C = \mathbb{R}^2_+$ and $K = (-1, 1)$, with $f$ chosen so that $1 \in S$, i.e., $S \ne \emptyset$, $\mathrm{cl}\,K$ is compact, and $f$ is C-continuous on $\mathbb{R} \times \mathbb{R}$. Thus ε-VEP has at least one solution for each $\epsilon > 0$ by Theorem 3.1. Actually, $x = 1 - \epsilon/2$ is a solution of ε-VEP.

Example 3. Let $f : \mathbb{R} \times \mathbb{R} \to \mathbb{R}^2$, $K = [-1, 1]$ and $C = \mathbb{R}^2_+$, with $f$ chosen so that $S = \emptyset$ and also $\Phi(\epsilon) = \emptyset$ for each $\epsilon \in (0, 1)$. We observe that not only conditions (ii) and (iii) but also condition (i) is needed in Theorem 3.1.

Corollary 3.1. Let $X$ be a real Hausdorff topological vector space. Let $Z$ be a real topological vector space with a solid pointed convex cone $C \subseteq Z$. Suppose that $K$ is a nonempty subset of $X$ and that $f$ is a vector-valued function from $X \times X$ to $Z$. Also we assume the following conditions:
(i) $S' := \{ x \in \mathrm{cl}\,K : f(x, y) \notin -\mathrm{int}\,C \text{ for all } y \in K \} \ne \emptyset$;
(ii) $\mathrm{cl}\,K$ is compact;
(iii) $f$ is C-continuous on $X \times X$;
(iv) $f(\bar{x}, \cdot)$ is $(-C)$-continuous on $\mathrm{bd}\,K$ for some $\bar{x} \in S'$.
Then ε-VEP has at least one solution for each $\epsilon \in \mathrm{int}\,C$.

Proof. Let $\bar{x} \in S'$ satisfy condition (iv). Then for every $y \in K$,

$$f(\bar{x}, y) \notin -\mathrm{int}\,C. \qquad (1)$$

Suppose to the contrary that there exists $\bar{y} \in \mathrm{bd}\,K$ such that $f(\bar{x}, \bar{y}) \in -\mathrm{int}\,C$. By condition (iv), $f(\bar{x}, \cdot)$ is $(-C)$-continuous at $\bar{y} \in \mathrm{bd}\,K$. Hence there exists a neighbourhood $V$ of $\bar{y}$ such that $f(\bar{x}, y) \in -\mathrm{int}\,C$ for all $y \in V$. Because $\bar{y} \in \mathrm{bd}\,K$, $V \cap K \ne \emptyset$. Hence there exists $y' \in V \cap K$ such that $f(\bar{x}, y') \in -\mathrm{int}\,C$. This contradicts (1). Therefore for each $y \in \mathrm{bd}\,K$, $f(\bar{x}, y) \notin -\mathrm{int}\,C$, i.e., for every $y \in \mathrm{cl}\,K$, $f(\bar{x}, y) \notin -\mathrm{int}\,C$. Hence $\bar{x} \in S$, i.e., $S \ne \emptyset$.

Now the result follows from Theorem 3.1.

Theorem 3.2. Let $X$ be a real Hausdorff topological vector space. Let $Z$ be a real topological vector space with a solid pointed convex cone $C \subseteq Z$. Suppose that $K$ is a nonempty subset of $X$ and that $f$ is a vector-valued function from $X \times X$ to $Z$ with $f(x, x) \notin -\mathrm{int}\,C$ for all $x \in X$. Also we assume the following conditions:
(i) $\mathrm{cl}\,K$ is compact convex;
(ii) $f(x, \cdot)$ is C-quasiconvex on $X$ for each $x \in X$;
(iii) $f(\cdot, y)$ is $(-C)$-continuous on $X$ for each $y \in X$;
(iv) $f$ is C-continuous on $X \times X$.
Then the problem ε-VEP has at least one solution, i.e., $\Phi(\epsilon)$ is nonempty for each $\epsilon \in \mathrm{int}\,C$.

Proof. Let, for each $y \in \mathrm{cl}\,K$,

$$G(y) := \{ x \in \mathrm{cl}\,K : f(x, y) \notin -\mathrm{int}\,C \}.$$

First, we show that $G$ is a KKM-map. Suppose to the contrary that there exist $\lambda_i \in [0, 1]$ and $x_i \in \mathrm{cl}\,K$ ($i = 1, \ldots, n$) with $\sum_{i=1}^{n} \lambda_i = 1$ such that $x := \sum_{i=1}^{n} \lambda_i x_i \notin \bigcup_{i=1}^{n} G(x_i)$. Then $f(x, x_i) \in -\mathrm{int}\,C$, $i = 1, \ldots, n$. Moreover, $x \in \mathrm{cl}\,K$ because of the convexity of $\mathrm{cl}\,K$. Hence by condition (ii), $f(x, x) \in -\mathrm{int}\,C$, which contradicts the fact that $f(x, x) \notin -\mathrm{int}\,C$ for all $x \in X$. Next, by condition (iii) and Proposition 2.2, $A := \{ x \in X : f(x, y) \in -\mathrm{int}\,C \}$ is an open subset of $X$. Then $G(y) = \mathrm{cl}\,K \cap A^c$ is a closed subset of $X$, and hence $G(y)$ is closed for each $y \in \mathrm{cl}\,K$. Also, $\mathrm{cl}\,K$ is compact, so $G(y)$ is compact for each $y \in \mathrm{cl}\,K$. Thus we can apply Lemma 2.3, and so $\bigcap_{y \in \mathrm{cl}\,K} G(y) \ne \emptyset$, i.e., $S \ne \emptyset$. Hence by Theorem 3.1, the problem ε-VEP has at least one solution.

Remark 3. We observe that the condition that $f$ is both C-continuous and $(-C)$-continuous does not imply that $f$ is continuous. See, e.g., [18, Theorem 5.3 and Remark 5.4].

Remark 4. Theorem 3.2 is only one of many variations of Theorem 3.1. Using various existence results for VEP, we may obtain conditions for the nonemptiness of $S$, and then derive other existence results for ε-VEP easily. If we assume closedness of $C$, we may utilize existence results for generalized VEP in the papers [4, 10, 11, 16].

Next, we show that the solution mapping of ε-VEP is upper semicontinuous on $\mathrm{int}\,C$ under some suitable conditions.

Theorem 3.3. Let $X$ be a real Hausdorff topological vector space. Let $Z$ be a real topological vector space with a solid pointed convex cone $C \subseteq Z$. Suppose that $K$ is a nonempty subset of $X$ and that $f$ is a vector-valued function from $X \times X$ to $Z$. Also we assume the following conditions:
(i) $K$ is compact;
(ii) $f(\cdot, y)$ is $(-C)$-continuous on $X$ for each $y \in X$;
(iii) $\Phi(\epsilon)$ is nonempty for each $\epsilon \in \mathrm{int}\,C$.
Then $\Phi$ is u.s.c. on $\mathrm{int}\,C$.

Proof. Let $\epsilon \in \mathrm{int}\,C$ and consider nets $\{\epsilon_\alpha\} \subseteq \mathrm{int}\,C$ with $\epsilon_\alpha \to \epsilon$ and $x_\alpha \in \Phi(\epsilon_\alpha)$. Since $K$ is compact, we can assume, without loss of generality, that $x_\alpha \to x \in K$. Suppose to the contrary that $x \notin \Phi(\epsilon)$. Then there exists $y \in K$ such that $f(x, y) + \epsilon \in -\mathrm{int}\,C$. Since $Z$ is a topological vector space, there exists a neighbourhood $U$ of the zero vector such that $f(x, y) + \epsilon + U + U \subseteq -\mathrm{int}\,C$. By condition (ii), there exists an $\alpha_0$ such that for every $\alpha \ge \alpha_0$, $f(x_\alpha, y) \in f(x, y) + U - C$ and $\epsilon_\alpha \in \epsilon + U$. Then $f(x_\alpha, y) + \epsilon_\alpha \in f(x, y) + \epsilon + U + U - C \subseteq -\mathrm{int}\,C - C \subseteq -\mathrm{int}\,C$. This contradicts the fact that $x_\alpha \in \Phi(\epsilon_\alpha)$. Hence $x \in \Phi(\epsilon)$. Therefore, by Lemma 2.1, $\Phi$ is u.s.c. on $\mathrm{int}\,C$.

Corollary 3.2. Let $X$ be a real Hausdorff topological vector space. Let $Z$ be a real topological vector space with a solid pointed convex cone $C \subseteq Z$. Suppose that $K$ is a nonempty subset of $X$ and that $f$ is a vector-valued function from $X \times X$ to $Z$ with $f(x, x) \notin -\mathrm{int}\,C$ for all $x \in X$. Also we assume the following conditions:
(i) $K$ is compact convex;
(ii) $f(x, \cdot)$ is C-quasiconvex on $X$ for each $x \in X$;
(iii) $f(\cdot, y)$ is $(-C)$-continuous on $X$ for each $y \in X$;
(iv) $f$ is C-continuous on $X \times X$.
Then $\Phi$ is u.s.c. on $\mathrm{int}\,C$.
Proof. The result follows from Theorems 3.2 and 3.3.

We now establish that the solution mapping of ε-VEP is lower semicontinuous on $\mathrm{int}\,C$ under suitable assumptions.

Theorem 3.4. Let $X$ be a real Hausdorff topological vector space. Let $Z$ be a real topological vector space with a solid pointed convex cone $C \subseteq Z$. Suppose that $K$ is a nonempty subset of $X$ and that $f$ is a vector-valued function from $X \times X$ to $Z$. Also we assume the following conditions:
(i) $K$ is compact convex;
(ii) $f(x, \cdot)$ is C-continuous on $X$ for each $x \in X$;
(iii) $f(\cdot, y)$ is strictly C-quasiconcave on $K$ for each $y \in K$;
(iv) $\Phi(\epsilon)$ is nonempty for each $\epsilon \in \mathrm{int}\,C$.
Then $\Phi$ is l.s.c. on $\mathrm{int}\,C$.

Proof. Let $\epsilon \in \mathrm{int}\,C$ and let $V$ be an open set of $X$ with $V \cap \Phi(\epsilon) \ne \emptyset$. Suppose that $x \in V \cap \Phi(\epsilon)$ and that $\hat{x} \in \Phi(\mu\epsilon)$, where $\mu \in (0, 1)$. We choose $\bar{x} \in (x, \hat{x}) \cap V$, where $(a, b)$ denotes the line segment between $a$ and $b$. Obviously $\bar{x} \in \Phi(\epsilon)$, because of condition (iii). Since $-\mathrm{cl}\,C$ is a closed set, for each $v \in X$ there exists a positive number $t_v > 0$ such that $f(\bar{x}, v) + \epsilon - t_v k \notin -\mathrm{cl}\,C$. Because of conditions (i) and (ii), by Lemma 2.2, $\bigcup_{v \in X} \{f(\bar{x}, v)\}$ is C-compact. Clearly $f(\bar{x}, v) - t_v k + \mathrm{int}\,C$ is a neighbourhood of $f(\bar{x}, v)$, and hence there exist $v_1, \ldots, v_n \in X$ such that the sets $f(\bar{x}, v_i) - t_{v_i} k + \mathrm{int}\,C$, $i = 1, \ldots, n$, cover the range of $f(\bar{x}, \cdot)$. Since $f(\bar{x}, v_i) - t_{v_i} k \notin -\epsilon - \mathrm{cl}\,C$, $i = 1, \ldots, n$, there exist corresponding numbers $t_1, \ldots, t_n \in (0, 1)$ for which the same relations hold. Let $\delta = \min\{t_1, \ldots, t_n\}$. Then, by Proposition 2.4, $f(\bar{x}, v) + \epsilon' \notin -\mathrm{int}\,C$ for all $v \in K$ and all $\epsilon' \in (1 - \delta)\epsilon + \mathrm{int}\,C$, i.e., $\bar{x} \in \Phi(\epsilon')$ for all $\epsilon' \in (1 - \delta)\epsilon + \mathrm{int}\,C$. Hence $\Phi$ is l.s.c. on $\mathrm{int}\,C$.

Theorem 3.5. Let X be a real Hausdorff topological vector space. Let Z be a real topological vector space with a solid pointed convex cone C ⊂ Z. Suppose that K is a nonempty subset of X and that f is a vector-valued function from X × X to Z. Also we assume the following conditions:
(i) K is compact convex;
(ii) f(x, ·) is C-continuous on X for each x ∈ X;
(iii) f(·, y) is strictly (−C)-properly quasiconvex on K for each y ∈ K;
(iv) Ξ(ε) is nonempty for each ε ∈ int C.
Then Ξ is l.s.c. on int C.

Proof. Let ε̄ ∈ int C be arbitrary but fixed and let V be an open set with V ∩ Ξ(ε̄) ≠ ∅. Let x̄ ∈ V ∩ Ξ(ε̄). We show that there exist x̃ ∈ V and δ > 0 such that for all ε ∈ (1 − δ)ε̄ + int C we have

f(x̃, y) + ε ∉ −int C for all y ∈ K.

We note that (1 − δ)ε̄ + int C is a neighborhood of ε̄.
First we select x̃ ∈ V in the following way. Let λ ∈ (0,1), x₀ ∈ Ξ(ε̄) and x̃ ∈ (x̄, x₀) ∩ V. Next we find a corresponding δ ∈ (0, 1 − λ). Because of the way of selecting x̃, we have

f(x̃, y) ∈ f(x̄, y) + int C or f(x̃, y) ∈ f(x₀, y) + int C.

Let K̃ = {y ∈ K : f(x̃, y) ∈ f(x₀, y) + int C}. By condition (ii) and Proposition 2.2, A := {y ∈ X : f(x̃, y) ∈ f(x₀, y) + int C} is an open set of X. Then A^c = {y ∈ X : f(x̃, y) ∉ f(x₀, y) + int C} is a closed set of X. Hence K̄ = K ∩ A^c is a closed, i.e., compact, set. Because of Proposition 2.5, we have f(x̃, v) ∈ f(x̄, v) + int C for all v ∈ K̄. Thus, for each v ∈ K̄ there exists δ_v ∈ (0, 1 − λ) such that … Hence …

Putting δ = min{δ_{v₁}, …, δ_{v_n}}, we have … Hence … Because of x̄ ∈ Ξ(ε̄), … Hence, by Proposition 2.4, … Therefore, by (3), …

On the other hand, for each v ∈ K ∩ K̃ we have f(x̃, v) ∈ f(x₀, v) + int C. Since x₀ ∈ Ξ(ε̄), we have f(x₀, v) + ε̄ ∉ −int C; because of δ < 1 − λ, … from which it follows that …

Let U = (1 − δ)ε̄ + int C. Then U is an open set containing ε̄. For every ε ∈ U, … Therefore, by Proposition 2.4, …, from which it follows that f(x̃, v) + ε ∉ −int C for all v ∈ K, i.e., x̃ ∈ Ξ(ε) for all ε ∈ U. Hence Ξ is l.s.c. at ε̄. Since ε̄ is arbitrary, Ξ is l.s.c. on int C.

Corollary 3.3. Let X be a real Hausdorff topological vector space. Let Z be a real topological vector space with a solid pointed convex cone C ⊂ Z. Suppose that K is a nonempty subset of X and that f is a vector-valued function from X × X to Z with f(x, x) ∉ −int C for all x ∈ X. Also we assume the following conditions:
(i) K is compact convex;
(ii) f(x, ·) is C-quasiconvex on X for each x ∈ X;
(iii) f(·, y) is strictly (−C)-properly quasiconvex on K for each y ∈ K;
(iv) f(·, y) is (−C)-continuous on X for each y ∈ X;
(v) f is C-continuous on X × X.
Then Ξ is continuous on int C.

Proof. The result follows from Theorems 3.2 and 3.5.

Corollary 3.4. Let X be a real Hausdorff topological vector space. Let Z be a real topological vector space with a solid pointed convex cone C ⊂ Z. Suppose that K is a nonempty subset of X and that f is a vector-valued function from X × X to Z. Also we assume the following conditions:
(i) K is compact convex;
(ii) f(·, y) is (−C)-continuous on X for each y ∈ X;
(iii) f is C-continuous on X × X;
(iv) f(·, y) is strictly (−C)-properly quasiconvex on K for each y ∈ K;
(v) Ξ(ε) is nonempty for each ε ∈ int C.
Then Ξ is continuous on int C.

Proof. The result follows from Theorems 3.3 and 3.5.

Finally, we show that nonemptiness of the solution sets of ε-VEP implies that there exists a solution to VEP under mild conditions.

Theorem 3.6. Let X be a real Hausdorff topological vector space. Let Z be a real topological vector space with a solid pointed convex cone C ⊂ Z. Suppose that K is a nonempty subset of X and that f is a vector-valued function from X × X to Z. Also we assume the following conditions:
(i) cl K is compact;
(ii) f(·, y) is (−C)-continuous on X for each y ∈ X;
(iii) Ξ(ε) ≠ ∅ for all ε ∈ int C.
Then S is nonempty.

Proof. Let {ε_α} ⊂ int C with ε_α → 0 in Z, and let x_α ∈ Ξ(ε_α) ⊂ cl K. Then by condition (i), without loss of generality, we may assume x_α → x̄ and x̄ ∈ cl K. Suppose to the contrary that f(x̄, y) ∈ −int C for some y ∈ K. Then by condition (ii) there is α₀ such that for all α ≥ α₀,

f(x_α, y) + ε_α ∈ −int C.

This contradicts the fact that x_α ∈ Ξ(ε_α). Hence f(x̄, y) ∉ −int C for all y ∈ K and thus x̄ ∈ S, from which the result follows. We remark that, from the proof of Theorem 3.6, one can see that condition (iii) in Theorem 3.6 can be replaced by the condition that Ξ(ε_α) ≠ ∅ for some net {ε_α} ⊂ int C such that ε_α → 0 in Z.
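As a simple illustration of these results (this example is ours, not part of the original paper): take X = Z = ℝ, C = [0, ∞), K = [0, 1] and f(x, y) = y − x. A point x̄ solves ε-VEP if and only if y − x̄ + ε ≥ 0 for every y ∈ [0, 1], so Ξ(ε) = [0, min{ε, 1}], which is nonempty for each ε ∈ int C = (0, ∞) and varies both upper and lower semicontinuously there. Moreover, for any net ε_α → 0, every selection x_α ∈ Ξ(ε_α) converges to 0, the unique solution of VEP, exactly as Theorem 3.6 asserts.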

4. Acknowledgement
This research was partially supported by the grant NSC 94-2115-M-110-004. The authors thank the referees for their helpful comments and suggestions.
References

[1] Q. H. Ansari, I. V. Konnov and J. C. Yao, Characterizations of solutions for vector equilibrium problems, J. Optim. Theory Appl. 113 (2002), 435-447.
[2] Q. H. Ansari, I. V. Konnov and J. C. Yao, On generalized vector equilibrium problems, Nonlinear Anal. 47 (2001), 543-554.
[3] Q. H. Ansari, S. Schaible and J. C. Yao, System of generalized vector equilibrium problems with applications, J. Global Optim. 22 (2002), 3-16.
[4] Q. H. Ansari and J. C. Yao, An existence result for the generalized vector equilibrium problem, Appl. Math. Lett. 12(8) (1999), 53-56.
[5] C. Berge, Topological Spaces, Oliver & Boyd, Edinburgh and London, 1963.
[6] M. Bianchi, N. Hadjisavvas and S. Schaible, Vector equilibrium problems with generalized monotone bifunctions, J. Optim. Theory Appl. 92(3) (1997), 527-542.
[7] X. P. Ding and J. C. Yao, Maximal element theorems with applications to generalized games and a system of generalized vector quasi-equilibrium problems in G-convex spaces, J. Optim. Theory Appl. 126 (2005), 571-588.
[8] X. P. Ding, J. C. Yao and L. J. Lin, Solutions of system of generalized vector equilibrium problems in locally G-convex uniform spaces, J. Math. Anal. Appl. 298 (2004), 398-410.
[9] F. Ferro, A minimax theorem for vector-valued functions, J. Optim. Theory Appl. 60(1) (1989), 19-31.
[10] P. Gr. Georgiev and T. Tanaka, Vector-valued set-valued variants of Ky Fan's inequality, Nonlinear Convex Anal. 1(3) (2000), 245-254.
[11] P. Gr. Georgiev and T. Tanaka, Fan's inequality for set-valued maps, Nonlinear Anal. 47(1) (2001), 607-618.
[12] F. Giannessi, Vector Variational Inequalities and Vector Equilibria, Mathematical Theories, Kluwer Academic Publishers, Dordrecht, Boston, 2000.
[13] N. J. Huang, J. Li and J. C. Yao, Gap functions and existence of solutions for a system of vector equilibrium problems, J. Optim. Theory Appl. (2006), to appear.
[14] K. Fan, A generalization of Tychonoff's fixed point theorem, Math. Ann. 142 (1961), 305-310.
[15] S. Helbig, On the connectedness of the set of weakly efficient points of a vector optimization problem in locally convex spaces, J. Optim. Theory Appl. 65(2) (1990), 257-270.
[16] E. L. Kalmoun and H. Riahi, Topological KKM theorems and generalized vector equilibria on G-convex spaces with applications, Proc. Amer. Math. Soc. 129 (2001), 1335-1348.
[17] P. Loridan, Epsilon-solutions in vector minimization problems, J. Optim. Theory Appl. 43(2) (1984), 265-276.
[18] D. T. Luc, Theory of Vector Optimization, Lecture Notes in Economics and Mathematical Systems, Springer-Verlag, Berlin, Heidelberg, 1989.
[19] A. B. Nemeth, Between Pareto efficiency and Pareto epsilon-efficiency, Optimization 20(5) (1989), 615-637.
[20] W. Oettli, A remark on vector-valued equilibria and generalized monotonicity, Acta Math. Vietnam. 22(1) (1997), 213-221.
[21] W. D. Rong and Y. N. Wu, Epsilon-weak minimal solutions of vector optimization problems with set-valued maps, J. Optim. Theory Appl. 103(3) (2000), 569-579.
[22] C. Tammer, A generalization of Ekeland's variational principle, Optimization 25(2-3) (1992), 129-141.
[23] C. Tammer, Existence results and necessary conditions for epsilon-efficient elements, in: Multicriteria Decision, Proceedings of the 14th Meeting of the German Working Group "Mehrkriterielle Entscheidung", edited by B. Brosowski et al., Peter Lang Verlag, 1992, 97-110.
[24] T. Tanaka, Cone-quasiconvexity of vector-valued functions, Sci. Rep. Hirosaki Univ. 42 (1995), 157-163.
[25] T. Tanaka, Generalized semicontinuity and existence theorems for cone saddle points, Appl. Math. Optim. 36 (1997), 313-322.
[26] T. Tanaka and D. Kuroiwa, The convexity of A and B assures int A + B = int(A + B), Appl. Math. Lett. 6 (1993), 83-86.
[27] D. J. White, Epsilon efficiency, J. Optim. Theory Appl. 49(2) (1986), 319-337.
[28] L. C. Zeng and J. C. Yao, An existence result for generalized vector equilibrium problems without pseudomonotonicity, Appl. Math. Lett. 19 (2006), 1320-1326.


Optimization of Distribution Network Design Based on Retailer Satisfaction*


Zhang Min
School of Information Management, Wuhan University, P.R.China, 430072

Abstract There has emerged a growing inter-dependency amongst the parties in a supply chain. With this inter-dependency has come a realization that co-operation and partnership are essential pre-requisites for the achievement of long-term mutual benefit. Optimization of distribution network design has thus become one of the most important problems in supply chain management. In this paper, we take the perspective of the core manufacturing enterprise to select distributors. For distributor selection, we use a method combining Network Data Envelopment Analysis (DEA) with the Activity Based Costing (ABC) approach to evaluate the distributors' relative validity and complete the first stage of distributor selection; here, we divide the distribution process into several related sub-processes according to the ABC approach. After that, we study retailer demand and set retailer evaluation principles for the whole supply chain, and choose several primary retailers to mark the importance degree of each principle. A model is built to select the supply chain partnership combination which maximizes retailer satisfaction. At the end of this paper, a numerical example is given. Key words Optimization, Distribution network, Retailer satisfaction

1. Introduction
Throughout the public, private, and non-profit sectors, there is increasing experimentation with the use of partnerships, alliances, and networks to design and deliver goods and services [1]. Supply chain management (SCM) has been considered a competitive strategy for integrating suppliers and customers with the objective of improving the responsiveness and flexibility of manufacturing organizations, and these organizations have actively sought to reduce the number of partnerships, including suppliers and distributors, they do business with. The motivations for this move towards partnership rationalization are based partly upon economics, partly upon the search for continuous quality improvement and innovation, but also on a realization that there is a limit to the extent to which multiple supplier relationships can be effectively managed [2]. As a result of these changes in SCM, the construction of supply chain distribution network partnerships has grown in importance.
A vast literature has been devoted to this field. A large body of research examines different criteria for selecting suppliers; for example, five kinds of criteria, performance, economy, entirety, fitment, and legality, have been proposed (Lehmann and Shaughnessy, 1982) [3]. Another strand of research proposed that when enterprises select suppliers, some soft standards must be considered, such as management compatibility, goal consistency and the supplier's strategic direction, besides such quantitative criteria as cost, quality and delivery date (Ellram, 1990) [4]. Some researchers discuss the criteria, methods and theoretical frameworks of selecting partners (Ma Xin'an, 2000) [5]. The selection process for supply chain partnerships is very complex, and a classic paper reviewed the related literature and concluded that there are 4 main methods (Weber, 1991) [6]. By now, more methods have been applied to select supply chain partnerships, such as the ABC (Activity Based Costing) approach, DEA (Data Envelopment Analysis), GA (Genetic Algorithm) and TOPSIS; combined application of these approaches has also become popular, such as AHP/DEA and AHP/TOPSIS [7].
So far, there is much research on supply chain distribution network design, but few studies approach it from the aspect of retailer satisfaction. In fact, retailers play a very important role in SCM, and they have several significant characteristics: (1) retailers themselves produce no products, they only provide service; (2) retailers create value through logistics processes; (3) retailers are stable, long-term participants in supply chain relationships; (4) retailers provide service for many organizations, so they do not depend on just one enterprise. In the supply chain, retailers are the service centers, information-collection centers and alarm centers. We can say that retailer satisfaction represents customer satisfaction to some degree. The objective of this article is to study an efficient approach to designing a distribution network based on retailer satisfaction. In this paper, we consider selecting distributors with several kinds of evaluation principles according to
This research has been supported by National Natural Science Funds of China (No: 70601011).


their different characteristics. For distributor selection, we use the method combining Network DEA with the ABC approach to evaluate their relative validity and complete the first stage of distributor selection. After that, we study retailer demand and set retailer evaluation principles for the whole supply chain; we then choose several primary retailers to mark the importance degree of each principle. A model is built to select the supply chain partnership combination which maximizes retailer satisfaction. At the end of this paper, a numerical example is given.

2. The model
2.1 Distributor selection
The distribution process in SCM consumes various resources and produces various values. Its performance evaluation involves the transformation efficiency among multiple inputs and multiple outputs. Here, we use a kind of Network DEA model, which has proved fruitful in engineering and operations research applications, among others, to deal with distributor selection. Traditional models for DEA-type performance measurement are based on thinking about production as a black box: inputs enter and outputs exit, with no consideration of the intervening steps. Consequently, it is difficult, if not impossible, to provide individual DMU managers with specific information regarding the sources of inefficiency within their DMUs. So when researchers apply DEA to specific industries or situations, they have often added more structure to the model to better suit the application [8-11]. The Network DEA model is widely used to deal with this kind of situation; it applies to DMUs that consist of a network of sub-DMUs, some of which consume resources produced by other sub-DMUs and some of which produce resources consumed by other sub-DMUs. Network DEA models can be divided into input orientation and output orientation.
In this section, we combine ABC with the Network DEA model and divide the distribution process into several related sub-processes. In [12], the distribution process was divided into a sale sub-process, a distribution sub-process and a service sub-process. In fact, the distribution process consists of many logistics treatments, such as transportation, distribution, storage and packaging, which may lead to many quality problems. Distributors are not responsible for these faulty products and will return them to the manufacturing enterprise, so the product withdrawal process is very important in the distribution process. In this paper, we divide the distribution process into a sale sub-process, a withdrawal sub-process and a service sub-process. Assume that there are n DMUs in the distribution process. The detailed operation processes in DMU_j are shown in Fig. 1.

Fig. 1 Distribution process analysis (sub-processes P1, P2 and P3, with capital inputs x^10_j, x^20_j, x^30_j and information flows y^12_j, y^14_j, y^23_j, y^24_j, y^34_j from input to output)

In the distribution process, sale departments sell products to retailers. At the same time, they pass sale information to service departments and withdrawal departments. After receiving sale information from the sale departments, the service departments distribute products to retailers and transfer faulty-product information to the withdrawal departments; the withdrawal departments then return these faulty products to the manufacturing enterprise [13]. In Fig. 1, P1 stands for the sale sub-process, P2 for the service sub-process and P3 for the withdrawal sub-process. We define x^0_j as the total capital input of DMU_j; it is divided into three parts, x^10_j, x^20_j and x^30_j, according to the three sub-processes. The outputs of DMU_j are its services to retailers. Now, we describe each sub-process in detail:

For P1, x^10_j is the capital input; y^12_j (information to P2) is one output and y^14_j (information to retailers) is another output. For P2, x^20_j is the capital input and y^12_j (information from P1) is another input; y^23_j (information to P3) is one output and y^24_j (information to retailers) is the other output. For P3, x^30_j is the capital input and y^23_j (information from P2) is another input; y^34_j (information to retailers) is the output. Our Network DEA model is described as M1:

min θ
s.t.  x^10_0 + x^20_0 + x^30_0 = x^0_0                                                        (1)
P1:  Σ_{j=1}^n λ_1j x^10_j ≤ θ x^10_0,  Σ_{j=1}^n λ_1j y^12_j ≥ y^12_0,  Σ_{j=1}^n λ_1j y^14_j ≥ y^14_0      (2)
P2:  Σ_{j=1}^n λ_2j x^20_j ≤ θ x^20_0,  Σ_{j=1}^n λ_2j y^12_j ≤ y^12_0,  Σ_{j=1}^n λ_2j y^23_j ≥ y^23_0,  Σ_{j=1}^n λ_2j y^24_j ≥ y^24_0      (3)
P3:  Σ_{j=1}^n λ_3j x^30_j ≤ θ x^30_0,  Σ_{j=1}^n λ_3j y^23_j ≤ y^23_0,  Σ_{j=1}^n λ_3j y^34_j ≥ y^34_0      (4)
y^12_0 ≤ y^14_0                                                                               (5)
y^23_0 ≤ y^24_0                                                                               (6)
y^34_0 + y^14_0 ≥ y^24_0                                                                      (7)
λ_kj ≥ 0,  x^k0_0 ≥ 0  (k = 1, 2, 3; j = 1, 2, …, n)                                          (8)
y^12_0 ≥ 0,  y^23_0 ≥ 0                                                                       (9)

In model M1, suffix j (j = 1, 2, …, n) corresponds to one DMU and suffix 0 stands for the evaluated DMU. λ_kj (k = 1, 2, 3; j = 1, 2, …, n) are the variables of the Network DEA model, and θ stands for the relative validity of the evaluated DMU: θ = 1 means the DMU is relatively effective, and θ < 1 means the DMU is relatively ineffective. Expression (1) describes the quantitative relationship between the sub-process capital inputs and the total capital input. Expressions (2), (3) and (4) describe the constraints in P1, P2 and P3, respectively. Expression (5) states that the information from sale departments to service departments is not more than that to retailers; expression (6) means that the information from service departments to withdrawal departments is not more than that to retailers; expression (7) means that the information from service departments to retailers is not more than that from sale departments and withdrawal departments together. Inputting the data into model M1 and solving it n times, we can get the relative validity of each evaluated DMU. During distributor selection, we can exclude the relatively invalid distributors and retain the distributors with relatively high validity.

2.2 Models of selecting supply chain partnerships based on retailer satisfaction
Assume that after the first stage of partnership selection, the selected suppliers and distributors can assemble into m kinds of supply chain partnership combinations. There are s retailers and each retailer has an expected demand according to its historical demands. In our model, we choose average cost (C), product quality (Q), response time (T) and reliability (R) as our supply chain evaluation principles. The importance level of each evaluation principle is divided into 10 degrees, and retailers mark the evaluation principles according to their own experience. Our objective is to maximize retailer satisfaction, and this objective can be divided into 4 sub-objectives according to the evaluation principles: cost minimization, quality maximization, response time minimization and reliability maximization. The units of the evaluation principles differ, so we need to standardize them. Our model M2 is constructed as follows:

C_i = min Σ_{j=1}^{s} Σ_{k=1}^{t} (c_ij / c_max) w_j p_j^(c)      (10)
Q_i = max Σ_{j=1}^{s} Σ_{k=1}^{t} (q_ij / q_max) w_j p_j^(q)      (11)
T_i = min Σ_{j=1}^{s} Σ_{k=1}^{t} (t_ij / t_max) w_j p_j^(t)      (12)
R_i = max Σ_{j=1}^{s} Σ_{k=1}^{t} (r_ij / r_max) w_j p_j^(r)      (13)

In model M2, w_j stands for the expected demand of retailer j; p_j^(c), p_j^(q), p_j^(t) and p_j^(r) stand for the importance degrees that retailer j attaches to the evaluation principles C, Q, T and R, respectively; c_ij, q_ij, t_ij and r_ij stand for the marks of the evaluation principles C, Q, T and R when retailer j selects supply chain combination i. At the same time,

c_max(i) = max{c_ij, j = 1, 2, …, s},  q_max(i) = max{q_ij, j = 1, 2, …, s},
t_max(i) = max{t_ij, j = 1, 2, …, s}  and  r_max(i) = max{r_ij, j = 1, 2, …, s}.

Then, model M2 can be turned into model M3 as follows:

C'_i = max Σ_{j=1}^{s} Σ_{k=1}^{t} (1 − c_ij / c_max) w_j p_j^(c)      (14)
Q_i = max Σ_{j=1}^{s} Σ_{k=1}^{t} (q_ij / q_max) w_j p_j^(q)          (15)
T'_i = max Σ_{j=1}^{s} Σ_{k=1}^{t} (1 − t_ij / t_max) w_j p_j^(t)      (16)
R_i = max Σ_{j=1}^{s} Σ_{k=1}^{t} (r_ij / r_max) w_j p_j^(r)          (17)

Now all the units have been standardized, so we get model M4:

Z = max_i (C'_i + Q_i + T'_i + R_i)
  = max_i Σ_{j=1}^{s} Σ_{k=1}^{t} w_j [ (1 − c_ik / c_max) p_j^(c) + (q_ik / q_max) p_j^(q) + (1 − t_ik / t_max) p_j^(t) + (r_ik / r_max) p_j^(r) ]      (18)
According to model M4, we can find out the optimal supply chain partnerships combination to maximize retailer satisfaction.
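To make the use of model M4 concrete, the short Python sketch below scores each candidate combination with the bracketed expression in (18) and picks the maximizer, using the retailer marks of Tab. 2 and the combination data of Tab. 3 from the numerical example below. It is only an illustrative reading of the model: the function name and the single-period simplification (t = 1) are ours, and the normalization constants are taken as the column maxima of Tab. 3, so the absolute magnitude of Z may differ from the Z column of Tab. 3. Since combination M-Y2 dominates M-Y5 on every criterion, however, any such monotone scoring selects M-Y2.

# Illustrative scoring of supply chain combinations per model M4 (t = 1 assumed).
def satisfaction_z(w, pc, pq, pt, pr, c, q, t, r):
    """Return Z_i of (18) for one combination with criterion values c, q, t, r."""
    c_max, q_max, t_max, r_max = 860, 0.90, 25, 0.80   # column maxima from Tab. 3
    return sum(wj * ((1 - c / c_max) * pcj      # cost: lower is better
                     + (q / q_max) * pqj        # quality: higher is better
                     + (1 - t / t_max) * ptj    # response time: lower is better
                     + (r / r_max) * prj)       # reliability: higher is better
               for wj, pcj, pqj, ptj, prj in zip(w, pc, pq, pt, pr))

# Retailer data from Tab. 2: expected demand and importance marks.
w  = [10000, 7000, 8500, 6200, 12000]
pc = [7, 9, 8, 7, 9]; pq = [10] * 5; pt = [8, 7, 8, 9, 7]; pr = [9, 8, 6, 9, 8]

# Combination data from Tab. 3: (price, eligible rate, response time, reliability).
combos = {"M-Y2": (790, 0.90, 15, 0.80), "M-Y5": (860, 0.85, 25, 0.75)}
best = max(combos, key=lambda k: satisfaction_z(w, pc, pq, pt, pr, *combos[k]))
print(best)   # -> M-Y2, in line with the conclusion drawn from Tab. 3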

3. Numerical example
3.1 Validity filtration
In this section, we use the models proposed above to analyze distribution partnerships consisting of manufacturing enterprises, distributors and retailers. Tab. 1 shows the details of the candidate distributors.

Tab. 1 Distributor details

             Input   Sale sub-process        Service sub-process              Withdrawal sub-process
Distributor  x^0     x^10   y^12   y^14      x^20   y^12   y^23   y^24        x^30   y^23   y^34
Y1           75      45     100    100       10     100    12     98          20     12     25
Y2           30      20     82     85        5      82     4      102         5      4      22
Y3           60      35     80     82        15     80     11     83          10     11     15
Y4           45      30     65     68        10     65     6      67          5      6      13
Y5           25      18     80     80        4      80     3      96          3      3      19
Y6           35      27     50     53        5      50     7      55          3      7      11

We put the candidate distributor details of Tab. 1 into model M1 and solve it with Lingo; we can easily calculate that distributors Y2 and Y5 are relatively valid. A solver-agnostic sketch of this computation is given below.
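The following Python sketch shows one plausible LP formulation of model M1 for a single evaluated DMU, using scipy.optimize.linprog instead of Lingo. The function name is ours, the capital split and all flows are taken as data from Tab. 1, and the orientation of the intermediate-flow constraints follows our reconstruction of (2)-(4), so this is a template rather than a guaranteed reproduction of the paper's exact Lingo run.

# A plausible LP reading of model M1 (assumptions: min-theta orientation,
# intermediate flows y12/y23 treated as inputs of P2/P3; not the authors' code).
import numpy as np
from scipy.optimize import linprog

def network_dea_theta(k0, x10, y12, y14, x20, y23, y24, x30, y34):
    """Relative validity theta of DMU k0 under our reading of model M1."""
    x10, y12, y14, x20, y23, y24, x30, y34 = (
        np.asarray(a, float) for a in (x10, y12, y14, x20, y23, y24, x30, y34))
    n = len(x10)
    nv = 1 + 3 * n                    # variables: theta, lambda_1, lambda_2, lambda_3

    def row(block, coeffs, theta_coef=0.0):
        r = np.zeros(nv)
        r[0] = theta_coef
        r[1 + block * n: 1 + (block + 1) * n] = coeffs
        return r

    A_ub, b_ub = [], []
    # P1 (2): capital input scaled by theta, outputs at least the evaluated level.
    A_ub += [row(0, x10, -x10[k0]), row(0, -y12), row(0, -y14)]
    b_ub += [0.0, -y12[k0], -y14[k0]]
    # P2 (3): capital input scaled by theta, intermediate y12 as an input.
    A_ub += [row(1, x20, -x20[k0]), row(1, y12), row(1, -y23), row(1, -y24)]
    b_ub += [0.0, y12[k0], -y23[k0], -y24[k0]]
    # P3 (4): capital input scaled by theta, intermediate y23 as an input.
    A_ub += [row(2, x30, -x30[k0]), row(2, y23), row(2, -y34)]
    b_ub += [0.0, y23[k0], -y34[k0]]

    c = np.zeros(nv)
    c[0] = 1.0                        # minimize theta
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub)  # default bounds: all vars >= 0
    return res.fun

# Tab. 1 data, one list per column; theta close to 1 indicates relative validity.
x10 = [45, 20, 35, 30, 18, 27]; y12 = [100, 82, 80, 65, 80, 50]
y14 = [100, 85, 82, 68, 80, 53]; x20 = [10, 5, 15, 10, 4, 5]
y23 = [12, 4, 11, 6, 3, 7]; y24 = [98, 102, 83, 67, 96, 55]
x30 = [20, 5, 10, 5, 3, 3]; y34 = [25, 22, 15, 13, 19, 11]
thetas = [network_dea_theta(k, x10, y12, y14, x20, y23, y24, x30, y34) for k in range(6)]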

3.2 Selecting the supply chain partnership combination
After the first stage of partnership selection, two candidate distributors are selected. They can form 2 kinds of partnership combinations. In this section, 5 primary retailers are chosen to evaluate supply chain performance. Details are shown in Tab. 2.
Tab. 2 Retailer evaluation

Retailer   Expected Demand   Price   Eligible Rate   Response Time   Reliability
1          10000             7       10              8               9
2          7000              9       10              7               8
3          8500              8       10              8               6
4          6200              7       10              9               9
5          12000             9       10              7               8

Tab. 3 Details of supply chain partnership combinations

Combination   Price   Eligible Rate   Response Time   Reliability   Z
M-Y2          790     0.90            15              0.80          927810
M-Y5          860     0.85            25              0.75          744224

We can get the values of these 2 kinds of supply chain partnership combinations according to model M4, as Tab. 3 shows. Comparing the values of Z in Tab. 3, we can draw the conclusion that combination M-Y2 maximizes retailer satisfaction. Therefore, this supply chain alliance partner combination is our optimal combination.

4. Conclusions
In this paper, we present some beneficial research on distribution network partnership selection. We focus on retailer satisfaction and study the simplest combination. However, we only study a single-product system and suppose that retailer demand is known. In practice, retailer demand is often varied, stochastic and dynamic. In future research, we will study supply chain partnership selection with stochastic demand under a dynamic environment. On the other hand, computer-aided decision making will also be applied in our study.
References

[1] Jennifer M. B. Assessing and improving partnership relationships and outcomes: a proposed framework. Evaluation and Program Planning, 2002, 25: 215-231
[2] Martin C., Uta J. Developing strategic partnerships in the supply chain: a practitioner perspective. European Journal of Purchasing & Supply Management, 2000, 6: 117-127
[3] Donald R. L., Shaughnessy O. Decision criteria used for different categories of products. Journal of Purchasing Materials Management, 1994, 6: 9-14
[4] Ellram L. M. The supplier selection decision in strategic partnership. Journal of Purchasing Materials Management, 1990, 26: 8-15
[5] Ma X. A., Zhang L. P., Feng Y. Supply chain partnership and partner selection. Industrial Engineering and Management, 2000, 5(4): 8-15 (in Chinese)
[6] Weber C. A., Current J. R., Benton W. C. Vendor selection criteria and methods. European Journal of Operational Research, 1991, 50: 2-18
[7] Chan F. T. S., Chung S. H. A hybrid genetic algorithm for production and distribution. Omega, 2005, 33: 345-355
[8] Charnes A., Cooper W. W., Rhodes E. Measuring the efficiency of decision making units. European Journal of Operational Research, 1978, 2: 429-444
[9] Mickael L., Magnus T. Productivity and customer satisfaction in Swedish pharmacies: A DEA network model. European Journal of Operational Research, 1999, 115: 449-458
[10] Rolf F., Shawna G. Network DEA. Socio-Economic Planning Sciences, 2000, 34: 35-49
[11] Herbert F. L., Thomas R. S. Network DEA: efficiency analysis of organizations with complex internal structure. Computers & Operations Research, 2004, 31: 1365-1410
[12] Yin M. Y., Wang M. G., Liu S. X. Operational performance measurement in supply chain distribution. Systems Engineering-Theory Methodology Applications, 2004, 13(5): 400-403 (in Chinese)
[13] Jennifer M. B. Assessing and improving partnership relationships and outcomes: a proposed framework. Evaluation and Program Planning, 2002, 25: 215-231


Optimizing Linear Optimization Problems under Fuzzy Relational Equations with Max-Star Composition
Yan-Kuen Wu
Department of Industrial Management, Vanung University, 320, Chung-Li, Taoyuan, Taiwan, R.O.C.

Abstract Max-min and max-product compositions are commonly utilized to optimize a linear objective function subject to fuzzy relational equations. Both are members of the class of max-t-norm compositions. In this study, the max-star composition, which does not belong to the max-t-norm class, is considered for the same optimization model. However, the max-star composition generates some properties of the solution set that are similar to those of the max-product composition. Thanks to these properties, a simple value matrix with rules can be applied to reduce the problem size. Thus, this study proposes an efficient procedure for obtaining optimal solutions without decomposing the problem into two sub-problems or finding all the potential minimal solutions.
Keywords Fuzzy optimization, Fuzzy relational equations, Max-star composition

1 Introduction
In the literature, the matrix-form representation of an optimization problem subject to a system of fuzzy relational equations can be described as

min Z(x) = Σ_{i=1}^{m} c_i x_i,      (1)
s.t.  x ∘ A = b,  x_i ∈ [0, 1],      (2)

where c_i ∈ R is the cost coefficient associated with variable x_i, A = [a_ij] is an m × n nonnegative matrix with a_ij ≤ 1, b = (b_1, …, b_n) is an n-dimensional vector with 0 ≤ b_j ≤ 1, and the operation ∘ in (2) represents one algebraic composition. In this study, the operation ∘ represents the max-star composition (Khorram, Ghodousian, and Molai, 2006). Let I = {1, 2, …, m} and J = {1, 2, …, n} be two index sets. Solving the constraint part of the fuzzy relational equations with max-star composition in (2) is to find a set of solution vectors x = (x_i)_{i∈I} such that

max_{i∈I} (x_i + a_ij − x_i a_ij) = b_j,  j ∈ J.      (3)

The (1)-(2) optimization problem subject to fuzzy relational equations with different algebraic operations has been studied widely. The typical frameworks for the (1)-(2) model assume that the operation ∘ takes either the max-min or the max-product composition. Fang and Li (1999) indicated that the (1)-(2) model with max-min composition can be converted into a 0-1 integer programming problem and solved using the branch-and-bound method. Wu, Guu, and Liu (2002) improved Fang and Li's method by providing an upper bound for the branch-and-bound procedure. As an application, the (1)-(2) model with a max-min composition has been employed for streaming media providers seeking minimum cost while meeting the requirements assumed by a three-tier framework; for a detailed description of the problem, see (Lee and Guu, 2002). In solving the same optimization problem with positive cost coefficients in the objective function, Wu and Guu (2005) proposed a necessary condition for the optimal solution. Based on this necessary condition, three rules have been applied to simplify the work of finding an optimal solution. Lu and Fang (2001) developed a genetic algorithm to solve the same mathematical problem but with a nonlinear objective function. Wang (1995) explored fuzzy relational equations based on the max-min composition with multiple objective linear functions; Wang characterized some properties of efficient points and transformed the problem into a multi-attribute decision problem. Loetamonphong, Fang, and Young (2002) extended the analysis of the multi-objective optimization problem to a different model with nonlinear objective functions; a genetic procedure was employed to obtain the Pareto optimal solutions. Based on different objective function types, when the objective function in (1) becomes Z(x) = max_{i∈I}{min(c_i, x_i)}, Wang et al. called this model the latticized linear programming problem (Wang, Zhang, Sanchez and Lee, 1991). Recently, Yang and Cao (2005) extended the latticized linear programming problem to a situation with a geometric objective function.
Subject to fuzzy relational equations with max-product composition for the (1)-(2) problem, Loetamonphong and Fang (2001) investigated a minimization problem with a linear objective function. They separated this optimization problem into two sub-problems according to the nonnegative and negative coefficients in the objective function. The sub-problem generated by the negative coefficients can be solved easily by the maximum solution. On the other hand, the sub-problem formed by the nonnegative coefficients can be converted into a 0-1 integer programming problem and then solved by the branch-and-bound method. Guu and Wu (2002) identified a necessary condition for an optimal solution in terms of the maximum solution derived from the fuzzy relational equations. This necessary condition was utilized to provide an efficient procedure for solving the minimization problem. Wu and Guu (2004) extended the fuzzy relational constraints with max-product composition to a situation with a max-strict t-norm composition. They indicated that the necessary condition for an optimal solution in terms of the maximum solution can also be applied to the max-strict t-norm situation.
This study is motivated by the work of Khorram, Ghodousian, and Molai (2006), who considered the constraint part of problem (1)-(2) consisting of fuzzy relational equations with max-star composition. Khorram, Ghodousian, and Molai decomposed this problem into two sub-problems that depend on the nonnegative and nonpositive cost coefficients in the objective function. According to their procedure, the sub-problem formed by the nonpositive coefficients is solved by the maximum solution. Then, they removed the variables with nonpositive coefficients from the objective function and proposed an algorithm to solve the sub-problem formed by the remaining variables, those with nonnegative coefficients in the objective function, under the original constraint. Finally, they combined the two solutions obtained from these two sub-problems to yield a solution for the (1)-(2) problem. Although they intended to present an algorithm that obtains an optimal solution without finding all minimal solutions for the (1)-(2) problem, an optimal solution cannot be guaranteed by applying their algorithm. To improve this situation, this study first provides a necessary condition for an optimal solution of the (1)-(2) problem. A simple value matrix is then used to capture the contribution of each variable to the objective function yielded by employing the necessary condition. Furthermore, more rules than in a previous paper (Guu and Wu, 2002) dealing with the max-product composition are proposed to reduce the problem size. This value matrix and these rules facilitate the development of an efficient procedure for finding optimal solutions without decomposing the (1)-(2) problem into two sub-problems.

2. Preliminary properties for max-star composition
This section describes some properties of fuzzy relational equations with a max-star composition. Moreover, to solve the (1)-(2) problem for a general situation, this study pays attention to each component of an optimal solution and how it is either 0 or the corresponding value of the maximum solution. As in Di Nola et al. (1991), the triangular norm (t-norm for short) is a real function mapping from [0, 1] × [0, 1] to [0, 1] such that t(a, 0) = 0, t(a, 1) = a, t(a, b) = t(b, a), t(a, t(b, c)) = t(t(a, b), c), and t(a, b) ≤ t(a, c) if b ≤ c. The common max-min and max-product compositions are specific cases of the max-t-norm composition. Obviously, the max-star composition described in (3) does not belong to the class of max-t-norm compositions, since 0 ∗ a = 0 + a − 0·a = a ≠ 0 if a ≠ 0. However, some characteristics of the solution set obtained using the max-star composition are similar to those of the max-product composition. To be precise, each component of a minimal solution obtained from fuzzy relational equations with a max-product composition is either 0 or the corresponding component value of the maximum solution (Guu and Wu, 2002). This is also true for the max-star composition. Now, this study explores the properties of the solution set X(A, b) to (2) with the max-star composition.
Lemma 1. If in the j-th equation a_ij > b_j holds for some i ∈ I in (2), then the solution set X(A, b) is empty.

Proof. Due to 0 ≤ x_i ≤ 1 and 0 ≤ a_ij ≤ 1, if a_ij > b_j holds for some i ∈ I in (2), then x_i + a_ij − x_i a_ij = a_ij + x_i (1 − a_ij) ≥ a_ij > b_j. This result leads to max_{i∈I}(x_i + a_ij − x_i a_ij) > b_j, and no solution x ∈ X(A, b) can satisfy the j-th equation in (2).
By Lemma 1, one easily sees that if b_j = 0 and a_ij > 0 for some i ∈ I in (2), then the solution set X(A, b) is empty. On the other hand, if b_j = 0 and a_ij = 0 for all i ∈ I, then the j-th equation becomes max_{i∈I} x_i = 0. Since 0 ≤ x_i ≤ 1, x_i has to equal zero for all i ∈ I if any solution exists in X(A, b). This is a trivial solution. Hence, only two cases occur in the solution set X(A, b) for b_j = 0 in (2): it is either empty or the trivial solution. Henceforth, this study assumes b_j > 0 for all j ∈ J and X(A, b) ≠ ∅. From Lemma 1, this study obtains the necessary condition for the existence of a solution: X(A, b) ≠ ∅ in (2) requires a_ij ≤ b_j for all i ∈ I.
Lemma 2. If in the j-th equation b_j = 1 holds for some j ∈ J in (2) and X(A, b) ≠ ∅, then the j-th equation in (2) has no effect.
Proof. Due to X(A, b) ≠ ∅ and 0 ≤ a_ij ≤ 1 for i ∈ I in (2), there is a_ij ≤ b_j for all i ∈ I by the necessary condition. This implies a_ij = 1 for all i ∈ I in the j-th equation with b_j = 1, and the following equation holds:

x_i + a_ij − x_i a_ij = 1 = b_j for any x_i ∈ [0, 1].

This result yields that the variables x_i, i ∈ I, have no effect on the j-th equation. By Lemma 2, henceforth, this study assumes a_ij < 1 for i ∈ I, j ∈ J in (2).
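For instance (our illustration): if a_ij = 0.5 and b_j = 0.8, the term contributed by x_i to the j-th equation is x_i ∗ 0.5 = x_i + 0.5 − 0.5x_i = 0.5 + 0.5x_i, so the equation tolerates x_i up to (0.8 − 0.5)/(1 − 0.5) = 0.6. This per-equation bound is exactly what the maximum-solution formula below takes the minimum of.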
Definition 1. Let x^1 = (x^1_i)_{1×m} and x^2 = (x^2_i)_{1×m} be two vectors. Then x^1 ≤ x^2 if and only if x^1_i ≤ x^2_i for all i ∈ I.
Definition 2. A solution x̄ ∈ X(A, b) is called the maximum solution if x ≤ x̄ for all x ∈ X(A, b). On the other hand, x* ∈ X(A, b) is a minimal solution if x ≤ x* implies x = x* for any x ∈ X(A, b). A solution x* ∈ X(A, b) is optimal for the (1)-(2) problem if Z(x*) ≤ Z(x) for all x ∈ X(A, b).
Based on these definitions, the maximum solution of fuzzy relational equations with max-star composition can be easily derived by applying the following operation:

x̄_i = min_{j∈J} [ (b_j − a_ij) / (1 − a_ij) ],  i ∈ I.      (4)

Definition 3. For any solution x = (x_i)_{i∈I} ∈ X(A, b) in (2), x_i is called a binding variable if x_i + a_ij − a_ij x_i = b_j holds for some j ∈ J. The set J(x_i) := {j | x_i + a_ij − a_ij x_i = b_j, j ∈ J} denotes the binding set of the binding variable x_i.
Note that a solution for (2) is a vector x = (x_i)_{i∈I} that satisfies all equations. By Definition 3, finding a solution for (2) can be considered as selecting binding variables from the binding sets so as to satisfy all equations.
Lemma 3. Let x̄ = (x̄_i)_{i∈I} be the maximum solution and x = (x_i)_{i∈I} be a solution of (2). If x_i is binding in the j-th equation, then x̄_i is also binding there.
Proof. For any solution x = (x_i)_{i∈I} ∈ X(A, b), there is x_i + a_ij − a_ij x_i ≤ b_j for any value of the variable x_i, and likewise x̄_i + a_ij − a_ij x̄_i ≤ b_j for all j ∈ J. Now, if x_i is binding in the j-th equation, then x_i + a_ij − a_ij x_i = b_j for all j ∈ J(x_i). Moreover, x_i ≤ x̄_i holds for any solution x. Therefore, the following inequality holds:

b_j = x_i + a_ij − a_ij x_i ≤ x̄_i + a_ij − a_ij x̄_i ≤ b_j.

This result suggests x̄_i + a_ij − a_ij x̄_i = b_j. Hence x̄_i is also binding in the j-th equation.
Theorem 1. Let x̄ = (x̄_i)_{i∈I} be the maximum solution. For any solution x = (x_i)_{i∈I} ∈ X(A, b), if x_i is a binding variable, then x_i = x̄_i.
Proof. For any solution x = (x_i)_{i∈I} ∈ X(A, b), there is x_i ≤ x̄_i. Since x_i is a binding variable, x_i + a_ij − a_ij x_i = b_j for some j ∈ J. By Lemma 3, x̄_i is also binding in the j-th equation. Hence x̄_i + a_ij − a_ij x̄_i = b_j. Now, suppose that x_i < x̄_i. Since a_ij < 1, this implies

b_j = x_i + a_ij − a_ij x_i < x̄_i + a_ij − a_ij x̄_i = b_j,

which is impossible. Therefore x_i = x̄_i.
Theorem 2. Let x̄ = (x̄_i)_{i∈I} be the maximum solution. If x* = (x*_i)_{i∈I} is a minimal solution, then x*_i = 0 or x*_i = x̄_i for each i ∈ I.
Proof. Each variable x*_i in a minimal solution is either nonbinding or binding. Suppose that x*_i is not a binding variable and x*_i > 0. Then a solution x! can be constructed from x* by letting x!_i = 0 and x!_k = x*_k for all k ∈ I, k ≠ i, respectively. It follows that x! ≤ x* and x! ≠ x*, implying that x* is not a minimal solution. Hence, a nonbinding variable x*_i requires that x*_i = 0. On the other hand, if x*_i is a binding variable, then x*_i = x̄_i by Theorem 1.
Theorem 2 reveals the necessary condition for any minimal solution of fuzzy relational equations with a max-star composition: for any minimal solution x* = (x*_i)_{i∈I}, if x*_i is not a binding variable, then x*_i = 0; if x*_i is binding, then x*_i = x̄_i. The same necessary condition occurs in the literature for fuzzy relational equations with a max-product composition. The primary aim of this study is to present a novel and efficient procedure for minimizing a linear objective function under fuzzy relational equations with a max-star composition. Now, this study proceeds with theoretical results for the (1)-(2) problem.
Theorem 3. Let x̄ = (x̄_i)_{i∈I} be the maximum solution computed by (4). For any optimal solution x* = (x*_i)_{i∈I} ∈ X(A, b) of the (1)-(2) problem, the i-th component of x* is in one of the following situations:
(1) if c_i < 0, then x*_i = x̄_i;
(2) if c_i > 0, then x*_i = 0 when x*_i is not a binding variable, and x*_i = x̄_i when x*_i is a binding variable;
(3) if c_i = 0 and x*_i is a binding variable, then x*_i = x̄_i;
(4) if c_i = 0 and x*_i is not a binding variable, then x*_i can be any value in [0, x̄_i].
Proof. (1) For any optimal solution x* = (x*_i)_{i∈I} ∈ X(A, b), x*_i ≤ x̄_i holds. Suppose x*_i ≠ x̄_i. Due to c_i < 0, this yields c_i x̄_i < c_i x*_i, so replacing x*_i by x̄_i would decrease the objective value while preserving feasibility. This is a contradiction to the optimality of x*. Hence x*_i = x̄_i.
(2) If x*_i is not a binding variable, then x*_i = 0 due to c_i > 0. On the other hand, if x*_i is a binding variable, then x*_i = x̄_i by Theorem 1.
(3) Since x*_i is a binding variable, x*_i = x̄_i by Theorem 1.
(4) For any optimal solution x*, 0 ≤ x*_i ≤ x̄_i holds. Let Z(x*) be the corresponding optimal value of x* for the (1)-(2) problem. Since c_i = 0, it implies c_i x*_i = 0 for any value x*_i in [0, x̄_i], and hence the objective value Z(x*) is unchanged. Therefore, for any optimal solution x*, if x*_i is not a binding variable and c_i = 0, then x*_i can be any value in [0, x̄_i].
For any optimal solution x* = (x*_i)_{i∈I} ∈ X(A, b) of the (1)-(2) problem, Theorem 3-(2) indicates that if c_i > 0 and x*_i = x̄_i, then x*_i must be a binding variable.

3. Rules for reducing the problem
Based on the results obtained in Section 2, this study employs a simple value matrix to solve the general case of the (1)-(2) problem with a max-star composition. Using this simple matrix, some rules are proposed for developing an efficient procedure for finding optimal solutions.
From Theorem 3, each component of an optimal solution x* can be either x*_i = 0 or x*_i = x̄_i for all i ∈ I. Furthermore, Theorem 1 shows that for any solution x = (x_i)_{i∈I} ∈ X(A, b), if x_i is a binding variable, then x_i = x̄_i. Based on these properties, this study focuses on selecting proper binding variables to derive the optimal solution for the (1)-(2) problem. The maximum solution x̄ = (x̄_i)_{i∈I} and the binding sets J(x̄_i) provide useful information when searching for all binding variables. Hence, the search is limited to J(x̄_i), and a value matrix M = (m_ij) is defined with i ∈ I and j ∈ J by

m_ij = c_i x̄_i, if j ∈ J(x̄_i), or if J(x̄_i) = ∅ and c_i < 0;  m_ij = ∞, otherwise.      (5)

The numerical elements in the i-th row of M correspond to the contributions to the objective function made by setting x_i = x̄_i. Notably, if x_i = x̄_i is not a binding variable but has a nonnegative cost coefficient c_i ≥ 0, then the elements in the i-th row of M become m_ij = ∞, j ∈ J. Depending on the value matrix M, this study creates several rules to reduce the problem. The idea underlying the use of these rules is to fix as many components of the optimal decision variables as possible at 0 or x̄_i. To develop a procedure for finding the optimal solution, this study denotes the following index sets for the value matrix M:

J_i(M) := {j ∈ J : m_ij ≠ ∞}, i ∈ I,  and  I_j(M) := {i ∈ I : m_ij ≠ ∞}, j ∈ J.

Essentially, three cases occur in the index set J_i(M) for the variable x_i = x̄_i: J_i(M) = J(x̄_i), J_i(M) = J and J_i(M) = ∅. In the first case, the index set J_i(M) is equivalent to the binding set J(x̄_i) whenever x_i = x̄_i becomes a binding variable. When x_i = x̄_i cannot become a binding variable (i.e., J(x̄_i) = ∅) but has a negative coefficient c_i < 0, then J_i(M) = J, which is the second case. If x_i = x̄_i cannot become a binding variable but has a nonnegative coefficient c_i ≥ 0, then J_i(M) = ∅, which is the third case. The index set I_j(M) represents the variables that may be selected as a binding variable, or as a nonbinding variable with a negative coefficient, in the j-th equation. Now, some rules are proposed to optimize the (1)-(2) problem by employing the value matrix M.
Rule 1. If there is some entry m_ij ≤ 0 in the value matrix M for some i ∈ I, then x̄_i can be assigned to the i-th component of any optimal solution x* for those i ∈ I.
Rule 2. If a singleton I_j(M) = {i} exists for some j ∈ J in the value matrix M, then x̄_i is assigned to the i-th component of any optimal solution.
Rule 3. If I_p(M) ⊆ I_q(M) for some p, q ∈ J in the value matrix M, then the q-th column of M can be deleted.
Rule 4. If J_s(M) ⊆ J_t(M) for some s, t ∈ I with 0 < c_t x̄_t < c_s x̄_s, then there exists an optimal solution x* = (x*_i)_{i∈I} with x*_s = 0.
Rule 5. Let I′ ⊆ I and ∪_{i∈I′} J_i(M) = J. If r ∈ I, r ∉ I′, J_r(M) ⊆ J and Σ_{i∈I′} c_i x̄_i < c_r x̄_r, then there exists an optimal solution x* = (x*_i)_{i∈I} with x*_r = 0.
Rule 6. In the process of finding an optimal solution, if c_k > 0 and J_k(M) = ∅ for some k ∈ I, then the value 0 can be assigned to the k-th component of the optimal solution.
Employing these rules on the value matrix M, this study presents a procedure to find optimal solutions. The idea behind the procedure is to apply Rules 1-6 to fix as many values of the variables as possible, so that some components of optimal solutions can be determined. The problem size is then reduced by eliminating the corresponding rows and columns from matrix M. When the problem size cannot be further reduced, the branch-and-bound method is applied to solve for the remaining undetermined variables. The procedure for finding optimal solutions for the (1)-(2) problem with max-star composition is summarized as follows.
Step 1. Compute the vector x̄ = (x̄_i)_{i∈I} by (4).
Step 2. Check consistency by verifying whether x̄ ∘ A = b. If the system is inconsistent, then stop. (If the problem is consistent, then x̄ = (x̄_i)_{i∈I} is the maximum solution.)
Step 3. Compute the index sets J(x̄_i) for all i ∈ I and generate the value matrix M by (5).
Step 4. Apply Rule 1 first to reduce the problem size, then employ Rules 2-6 as far as possible to determine the values of as many decision variables as possible. Note that the priority of Rule 1 is to handle the components m_ij < 0 before m_ij = 0. Delete the corresponding rows and/or columns in M (hence, the problem size is reduced), and denote the remaining sub-matrix by M again. If all decision variables have been set, go to Step 6.
Step 5. Take the (remaining) value matrix M. Employ the branch-and-bound method to solve for the remaining undetermined decision variables. Obtain the optimal solution and stop the procedure.
Step 6. Generate optimal solutions for the problem and obtain the optimal value.
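The procedure's first three steps, together with the exhaustive baseline implied by Theorems 2 and 3, can be sketched in a few lines of Python. This is our illustration, not the authors' code: formulas (4) and (5) are used as reconstructed above, float("inf") plays the role of the ∞ entries, and the brute-force search is exponential, intended only for validating the rule-based procedure on small instances.

from itertools import product

INF = float("inf")

def lhs(x, A, i_set, j):
    """Left-hand side of the j-th max-star equation in (3)."""
    return max(x[i] + A[i][j] - x[i] * A[i][j] for i in i_set)

def steps_1_to_3(c, A, b, tol=1e-9):
    """Steps 1-3: maximum solution (4), consistency check, value matrix (5)."""
    m, n = len(A), len(A[0])
    I, J = range(m), range(n)
    xbar = [min((b[j] - A[i][j]) / (1 - A[i][j]) for j in J) for i in I]      # (4)
    if any(abs(lhs(xbar, A, I, j) - b[j]) > tol for j in J):
        return None                                   # inconsistent: X(A, b) is empty
    Jx = [{j for j in J if abs(xbar[i] + A[i][j] - xbar[i] * A[i][j] - b[j]) < tol}
          for i in I]                                 # binding sets J(xbar_i)
    M = [[c[i] * xbar[i] if (j in Jx[i]) or (not Jx[i] and c[i] < 0) else INF
          for j in J] for i in I]                     # value matrix (5)
    return xbar, Jx, M

def brute_force_optimum(c, A, b, tol=1e-9):
    """Exhaustive baseline over the candidate set {0, xbar_i}^m (Theorems 2-3)."""
    out = steps_1_to_3(c, A, b, tol)
    if out is None:
        return None
    xbar, _, _ = out
    m, n = len(A), len(A[0])
    feasible = (x for x in product(*[(0.0, xi) for xi in xbar])
                if all(abs(lhs(x, A, range(m), j) - b[j]) < tol for j in range(n)))
    return min(feasible, key=lambda x: sum(ci * xi for ci, xi in zip(c, x)))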

4. A Numerical Example
In this section, an example subject to fuzzy relational equations with max-star composition is presented. The example is taken from Khorram, Ghodousian, and Molai (2006). In solving this example by their procedure, the problem was decomposed into two sub-problems, and the solution obtained had an objective value of 1.23. The procedure proposed in this study obtains an optimal solution without decomposing the problem into two sub-problems and yields an optimal value of 1.43; as the remark below explains, the solution produced by the algorithm of Khorram, Ghodousian, and Molai is not optimal.
Example. Consider the optimization problem subject to fuzzy relational equations with max-star composition given in Khorram, Ghodousian, and Molai (2006). Solving this example according to the proposed procedure step by step, an optimal solution x* can be obtained, with optimal value Z(x*) = 1.43.
Remark. This example, solved by the algorithm developed by Khorram, Ghodousian, and Molai, yields a solution x with corresponding objective value 1.23. Obviously, this solution is not optimal.

5 Conclusion
This work considered an optimization problem involving the minimization of a linear objective function subject to fuzzy relational equations with a max-star composition. Although the max-star composition does not belong to the class of max-t-norm compositions, some characteristics of the solution set obtained using the max-star composition are similar to those of the max-product composition. Precisely, each component of an optimal solution obtained from fuzzy relational equations with max-star composition can be either 0 or the corresponding component value of the maximum solution. This useful property facilitates using a simple value matrix for solving such an optimization problem. Based on this simple value matrix, some rules are proposed to reduce the problem size. Then, an efficient procedure is presented for computing optimal solutions.
References

[1] Khorram E., Ghodousian A., Molai A.A. Solving linear optimization problems with max-star composition equation constraints. Applied Mathematics and Computation, 2006, 179: 654-661.
[2] Fang S.-C., Li G. Solving fuzzy relation equations with a linear objective function. Fuzzy Sets and Systems, 1999, 103: 107-113.
[3] Wu Y.-K., Guu S.-M., Liu J.Y.-C. An accelerated approach for solving fuzzy relation equations with a linear objective function. IEEE Transactions on Fuzzy Systems, 2002, 10(4): 552-558.
[4] Lee H.-C., Guu S.-M. On the optimal three-tier multimedia streaming services. Fuzzy Optimization and Decision Making, 2002, 2(3): 31-39.
[5] Wu Y.-K., Guu S.-M. Minimizing a linear function under a fuzzy max-min relational equation constraint. Fuzzy Sets and Systems, 2005, 150: 147-162.
[6] Lu J., Fang S.-C. Solving nonlinear optimization problems with fuzzy relation equations constraints. Fuzzy Sets and Systems, 2001, 119: 1-20.
[7] Wang H.-F. A multi-objective mathematical programming problem with fuzzy relation constraints. Journal of Multi-Criteria Decision Analysis, 1995, 4: 23-35.
[8] Loetamonphong J., Fang S.-C., Young R.E. Multi-objective optimization problems with fuzzy relation equation constraints. Fuzzy Sets and Systems, 2002, 127: 141-164.
[9] Wang P.Z., Zhang D.Z., Sanchez E., Lee E.S. Latticized linear programming and fuzzy relation inequalities. Journal of Mathematical Analysis and Applications, 1991, 159: 72-87.
[10] Yang J.-H., Cao B.-Y. Geometric programming with fuzzy relation equation constraints. Proceedings of the IEEE International Conference on Fuzzy Systems, 2005, 557-560.
[11] Loetamonphong J., Fang S.-C. Optimization of fuzzy relational equations with max-product composition. Fuzzy Sets and Systems, 2001, 118: 509-517.
[12] Guu S.-M., Wu Y.-K. Minimizing a linear objective function with fuzzy relation equation constraints. Fuzzy Optimization and Decision Making, 2002, 1(4): 347-360.
[13] Wu Y.-K., Guu S.-M. A note on fuzzy relation programming problems with max-strict-t-norm composition. Fuzzy Optimization and Decision Making, 2004, 3(3): 271-278.
[14] Di Nola A., Pedrycz W., Sessa S., Sanchez E. Fuzzy relation equations theory as a basis of fuzzy modelling: an overview. Fuzzy Sets and Systems, 1991, 40: 415-429.


SECTION TWO
Operational Management



Virtual Cellular Manufacturing Systems: Emerging Research Issues in Global Manufacturing & Supply Chain Contexts
Nallan C. Suresh
Professor & Chairman, Dept. of Operations Management & Strategy School of Management, State University of New York, Buffalo, NY 14260, USA. Tel: 1 716 645 3279; Fax: 1 716 645 5078; E-mail: ncsuresh@buffalo.edu

Abstract The design of production systems is driven increasingly by major changes taking place within global supply chain contexts. This paper reviews emerging research pertaining to virtual cellular manufacturing (VCM) systems. A VCM system is a group of resources dedicated to the manufacture of one or more part families, but the grouping may not be reflected in the physical proximity of resources such as machines and labor. Identifying such logical groups through the production planning and control system offers the possibility of achieving the traditional advantages associated with cellular manufacturing, such as lower cost, improved flow, higher efficiency, simplified production control, better quality and rapid response in today's volatile conditions in globally dispersed supply chains, without changes to factory layouts. This paper reviews key findings of emerging streams of research in this area under the two headings of operational and design aspects, and identifies future research needs. Key words Virtual Cellular Manufacturing Systems, Global Manufacturing, Supply Chain

1. Trends in Global Manufacturing & Supply Chains

In recent years, manufacturing organizations have started undergoing major changes due to the restructuring of global supply chains. The reconfiguration of global supply chains has been driven by advances in supply chain management and logistics concepts, advances in information technology enabling better coordination and visibility across national boundaries, the diverse and unique needs of global markets, and increasing levels of turbulence in business conditions.
Among advances in supply chain management (SCM) and logistics concepts, the principles of postponement (or delayed differentiation) have contributed significantly to the reorganization of global supply chains. These include: 1) form postponement, 2) purchasing postponement, 3) capacity postponement, and 4) logistics postponement. Form postponement entails the redesign of products and processes in such a way that generic (primary) manufacturing is carried out at globally cost-effective locations, and later-stage manufacturing and assembly are carried out more locally, nearer the markets. The benefits of form postponement include greater economies of scale in primary manufacturing, with more stable demand patterns, coupled with better responsiveness to demand at the later stages, resulting in lower inventories and shortages (mismatch costs). Significant benefits arise in manufacturing and logistics, resulting in cost reductions and better service levels to end customers. In the case of purchasing postponement, vendors are required to maintain inventories at the sites of downstream manufacturing, so that pay-on-use systems like vendor-managed inventory (VMI) are utilized. Capacity postponement entails reorganizing manufacturing capacity into reactive and non-reactive systems, so that items with stable forecasts are manufactured early on non-reactive capacity, while items with more uncertainty are manufactured on reactive systems, closer to the time of sale, or after receipt of customer orders, based on pull systems. Logistics postponement (time and place postponement) has led to extensive reorganization of global distribution systems based on greater centralization of distribution inventories, especially for items with lower demand and greater demand volatility. Logistics postponement has also significantly reorganized global manufacturing networks. As shown in Figure 1, the overall trend, in response to the above advances in supply chain management concepts, has been to reconfigure global production and logistics networks essentially into non-reactive and reactive systems.


Figure 1. Reconfiguration of Global Manufacturing & Supply Chains. (Two blocks serve global markets: non-reactive capacity systems, i.e., more stable production configurations for generic items, items with stable, consolidated demand, plus minimum runs of items with high uncertainty; and reactive capacity systems, i.e., flexible and agile production systems for items with high demand uncertainty, built on responsiveness, customization and economies-of-scope logic, and based on analysis of the SKU-product-process matrix and demand uncertainty.)

The design of production systems, including cellular manufacturing systems for parts production, has to be viewed within this global context. Along with the above developments, advances in information technology have enabled more virtual configurations. In addition, given the increasing turbulence and volatility in global businesses, risk mitigation and rapid response have been emphasized in recent years to avoid disruptions in global supply chains. The major implication for production systems, therefore, has been to ensure low-cost, high-quality outputs associated with steady-state production, without losing flexibility and resilience. Given the above trends, we consider a relatively new production configuration in the area of cellular manufacturing, specifically, virtual cellular manufacturing systems.

2. Virtual Cellular Manufacturing (VCM) Systems

Virtual cellular manufacturing systems involve the application of group technology (GT) principles, especially part family-oriented manufacturing, within functionally organized production systems. Without changing the layout to cellular form, VCM seeks to exploit the benefits of group technology while retaining the flexibility of the job shop (Kannan & Ghosh 1996a, 1996b). Based on the job mix at a given time, machines across various departments are identified as virtual (i.e., logical) groupings, instead of requiring the machines to be physically adjacent. This has been the principal feature of various definitions offered for virtual cells over the years. One major attraction of VCM is the avoidance of the layout change, which is often a major deterrent to implementing cellular manufacturing. It has also been pointed out that VCM may be appropriate for small and medium-sized enterprises, where physical separation of machines may be constrained by frequent changes in technical and organizational factors (Nandurkar & Subash Babu 1998; Subash Babu et al. 2000).
Perhaps more importantly, in addition to avoiding changes in layout, the application of part family-oriented processing within a functional layout enables retention of the pooling synergies of the job shop (Suresh & Meredith 1994). The conversion to a cellular layout requires the partitioning of several multi-server work centers (pools of machines) of the job shop. This results in adverse queuing effects and performance deterioration, which need to be countered systematically (Suresh 1991; 1992). Thus the requisite flexibility in turbulent environments is retained, with VCM requiring only logical reconfiguration, within the planning and control system, instead of physical reconfiguration.
Being based on the functional layout (FL), virtual cellular manufacturing retains the departmental separations in planning and control. That is, each operation for a customer order may be under the control of a different department supervisor, requiring extensive coordination for each job. The move times between successive operations may also be long, as in the job shop, compared to physical cells, wherein successive machines are physically much closer. Despite these negative effects, the overall performance of virtual cells has been shown to be relatively good in many parameter ranges, compared to functional and traditional cellular layouts (Suresh & Slomp, 2005).
Research on VCM is still in a preliminary stage. Even though there have been a few rigorous experimental studies, further experimentation based on a wider range of parameters is required before one can make generalizable conclusions about VCM. In addition, one major need for extension lies in the fact that studies on VCM so far have been based mostly on single resource constrained (SRC) systems, i.e., purely machine-limited systems, assuming the existence of adequate labor and tooling resources. However, given that labor forms a second major constraining resource, and that many of the advantages associated with cellular manufacturing are known to derive from labor flexibility, it becomes necessary to extend the research to dual resource constrained (DRC) systems, considering many labor-related aspects like cross-training. In the following sections, we first review the emerging literature on operational aspects of VCMs, followed by various methods for the design of VCMs.

3. Operational Issues of VCMs: Preliminary Research Findings

The performance of cellular layouts (CL) has been analyzed extensively since the 1970s, for the most part using computer simulation models. The extensive use of simulation is primarily due to the complexity and analytical intractability of both functional layout (FL) and CL systems. Table 1 outlines this evolution, identifying five major research streams and representative studies in each. One of the first major developments since the 1970s was the emergence of some paradoxical findings. Early simulation studies such as Leonard & Rathmill (1977a, 1977b) and Rathmill & Leonard (1977) indicated that performance deteriorates with the conversion to cellular manufacturing, with increases in flow time and work-in-process (WIP) inventory. This clearly went against conventional wisdom on group technology and cellular manufacturing. Simulation research on CL continued during the 1980s, and studies such as Flynn & Jacobs (1986, 1987), Flynn (1987) and Morris & Tersine (1989, 1990) also demonstrated the superiority of FL over CL.
Table 1  Performance Comparisons of FL and CL (Representative Studies)

1. Early simulation studies
   a. SRC: Ang & Willey (1984); Flynn & Jacobs (1986, 1987); Flynn (1987); Morris & Tersine (1989, 1990)
2. Queuing theory-simulation models
   a. SRC: Suresh (1991, 1992); Suresh & Meredith (1994); Shafer & Charnes (1993)
3. Simulation studies
   a. Single Resource Constrained (SRC): Garza & Smunt (1991); Suresh (1991, 1992); Suresh & Meredith (1994); Burgess et al. (1993); Jensen et al. (1996); Shafer & Charnes (1993, 1995)
   b. Dual Resource Constrained (DRC): Russell et al. (1991); Suresh (1993); Morris & Tersine (1994); Jensen & Malhotra (1997); Eckstein & Rohleder (1998); Jensen (2000); Suresh & Gaalman (2000)
4. Hybrid CL, FL and CL
   a. SRC: Suresh (1991); Burgess et al. (1993); Shambu & Suresh (2000)
5. VCM, FL and CL
   a. SRC: Suresh & Meredith (1994); Kannan & Ghosh (1996a, 1996b); Kannan (1997, 1998); Jensen et al. (1998)
   b. DRC: Suresh & Slomp (2005)

Subsequently, the queuing-theory-based models of Suresh (1991, 1992) showed that these results are primarily due to the loss of routing flexibility (pooling synergy) when job shop work centers are partitioned during the conversion to CL (stream 2 in Table 1). In the functional layout, each department is a multi-server system of similar machines processing a common queue of jobs. After conversion to CL, the machines are partitioned and reassigned to different cells, which causes a loss of routing flexibility. Using queuing theory, it was shown that significant adverse effects arise when such common queues are partitioned. This performance deterioration has to be overcome through setup reduction, beyond a threshold level, before the benefits of CM can be realized. The simulation studies of the 1990s (stream 3a in Table 1) explored various other conditions (e.g., inter-cell movements, operation overlapping within cells) under which CL can outperform FL, despite the loss of routing flexibility in CL. Several studies also extended FL-CL comparisons to dual resource constrained (DRC) systems (stream 3b). Stream 4 comprises studies of other, hybrid types of cellular systems. Stream 5 pertains to the emerging area of virtual cellular manufacturing. First, Suresh & Meredith (1994) explored the impact of using part family-oriented scheduling within a functional layout, given the loss of routing flexibility in physical cells. The VCM system was shown to perform better than CL and FL under moderate setup time reduction and comparable lot sizes; the study used the first-come, first-served (FCFS) scheduling rule. Kannan & Ghosh (1996a, 1996b) and Kannan (1997, 1998) explored several other part-family-oriented scheduling rules. They investigated the performance of FL, CL and two virtual cell systems (VCM1 and VCM2), and showed the performance of VCM to be superior to FL, which in turn was superior to CL; VCM systems fared even better under high shop load, unbalanced demand patterns and low lot sizes. In Kannan & Ghosh (1996b), the relative performance of five types of VCM systems, based on five different family selection rules, was investigated; it was again shown that VCM generally outperforms CL and FL. Jensen et al. (1998) considered a special case (a semiconductor manufacturing system) consisting of a flow shop with occasional skipping of work centers, each department consisting of multiple machines as in a job shop. The utility of family-oriented scheduling was demonstrated when setup times exceed a certain percentage of processing times; however, when family setup times were less than 15% of processing times, family-based scheduling offered no advantage. All the above studies were based on single resource constrained systems. The recent study of Suresh & Slomp (2005) extended the research to DRC systems, considering both machine and labor resources. This preliminary study indicates the following general results for the operational performance of VCM. Figure 2 shows these results for conditions in which the setup reduction possibilities are relatively low, given not very similar parts within the family, a very diverse range of tooling requirements, etc. In this figure, when the multi-functionality (cross-training) level equals one, i.e., when each worker can perform only one function, the VCM system yields better flow time and work-in-process inventory than FL. CL systems are not feasible at this level, and hence are not shown in the figure for this setting, because cellular systems cannot operate without worker cross-training. As the level of cross-training is increased to two, the VCM system is most superior in terms of flow time and WIP, followed by the functional layout; the worst-performing system at this level of cross-training is the traditional cellular system.
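The pooling-synergy argument can be made concrete with the standard Erlang-C formula for M/M/c queues. The sketch below uses invented arrival and service rates, not data from the studies cited, and compares one pooled four-machine department against the same machines partitioned into four single-machine cells.

```python
from math import factorial

def mmc_wq(lam, mu, c):
    """Mean queueing delay Wq of an M/M/c queue (standard Erlang-C formula)."""
    rho = lam / (c * mu)
    assert rho < 1, "queue must be stable"
    a = lam / mu
    erlang_c = (a**c / factorial(c)) / (
        (1 - rho) * sum(a**k / factorial(k) for k in range(c)) + a**c / factorial(c)
    )
    return erlang_c / (c * mu - lam)

# One pooled department of 4 machines vs. the same 4 machines split into
# 4 one-machine cells, each receiving a quarter of the arrivals.
print("pooled Wq:", mmc_wq(lam=3.6, mu=1.0, c=4))   # about 2.0 time units
print("split  Wq:", mmc_wq(lam=0.9, mu=1.0, c=1))   # 9.0 time units
```

With these rates the pooled queue waits roughly 2.0 time units versus 9.0 after partitioning, which is the adverse effect the queuing models quantify.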
[Figure 2. Relative Performance of VCM, FL and CL systems: percentage improvement (0-80%) in flow time and WIP plotted against worker cross-training level (1-6); curves shown for VCM and FL.]


As the cross-training level is increased further, to 3 and beyond, CL systems perform best, followed by VCM, which continues to be superior to the functional layout. Thus the adverse effects of partitioning the multi-server queues of machines and labor are overcome in the traditional CL beyond a threshold level of worker cross-training. It is therefore seen that with low levels of setup-reduction possibility and low levels of cross-training, virtual cellular systems perform best. When high setup-reduction possibilities exist, due to high part-family similarities, greater use of single-minute exchange of dies (SMED) principles, etc., and when cross-training levels are high, CL systems tend to outperform both VCM and FL over the entire range. Additional findings may be seen in the recent study of Suresh & Slomp (2005). The above results represent the current state of research on the operational aspects of VCM. These studies have indicated the general range of applicability of VCM systems, where they may outperform traditional CL and FL systems, but much work remains to be done in comprehensively isolating the parameter ranges in which VCM can be profitably utilized.

4. Design Issues of VCMs: Preliminary Research

Table 2 summarizes the emerging body of studies in the design of virtual cellular manufacturing systems, starting with the early study of Altom (1978). A summary of this stream is provided in the recent work of Nomden et al. (2006).
Table 2  Design of VCMs (from Nomden et al. 2006; representative studies)

Altom (1978); Drolet et al. (1993); Irani et al. (1996); Ko & Egbelu (2003); Kühling (1998); Mak & Wang (2002); McLean et al. (1982); Mertins et al. (1992); Montreuil et al. (1992); Moodie et al. (1994); Prince & Kay (2003); Ratchev (2001); Rheault et al. (1995); Saad et al. (2002); Sarker & Li (2001); Slomp et al. (2004, 2005); Subash Babu et al. (2000); Thomalla (2000); Vakharia et al. (1999).

Each study is coded by research type: C (conceptual), D (design), O (operation), E (empirical), V (various/other); resources considered: M (machines), H (material handling equipment), P (people); implementation level: MPS (master production schedule), MRP, SFC (shop floor control); whether layout is considered explicitly (L); whether group technology is used explicitly (GT); and automation.

Again, as for operational aspects, much work remains to be done in the design of virtual cellular manufacturing systems, driven by the major changes taking place in global supply chains. Currently there appears to be a major need for synthesizing traditional production-system and warehouse design principles with the major drivers of global supply chains. The area of virtual cellular manufacturing presents some initial high-payoff opportunities.
References
[1] The references and an expanded version of the paper can be obtained by e-mailing the author at ncsuresh@buffalo.edu.


A Study on the Performance Characteristics of Closed-loop Asynchronous Automatic Assembly Systems

W.K. Leung1, W.C. Ng2, Y. Ge3
1 Department of Marketing, City University of Hong Kong, Hong Kong 2 Department of Industrial and Manufacturing Systems Engineering, University of Hong Kong, Hong Kong 3 Department of Marketing, City University of Hong Kong, Hong Kong

Abstract An automatic assembly system, a key tool for mass production, is generally composed of a number of workstations and a transport system. While the workstations perform preplanned operations, the transport system moves the assemblies from one station to another. Usually an assembly is moved on a specially designed carrier called a pallet; the device that fixes the position of an assembly on the pallet is called a fixture. A typical example of an automatic assembly system is a disk-washer assembly system consisting of more than forty stations. Analyzing the performance of an automatic assembly system is complex because the overall performance is affected by many factors, including: (i) the number of stations installed; (ii) the number of pallets installed; (iii) the number of buffer units between two stations; (iv) the reliability of the workstations; and (v) the transport delay time. Therefore, one of the major challenges to a design engineer is to find the optimal combination of these factors that maintains the highest productivity at the lowest cost. A simulation model written in C++ was designed specifically to study assembly systems with different configurations. In this study we first use the simulation model to study the performance characteristics of closed-loop assembly systems. The impact of each decision factor on the overall performance is analyzed, and the optimal configurations are identified. Finally, guidelines for optimizing the overall performance of automatic assembly systems are provided. Keywords Simulation, Automatic Assembly System (AAS), Factorial Experiment, Pallet Optimization

1. Introduction
An automatic assembly system is basically composed of a series of workstations performing simple automatic operations and a transport system which transfers the assemblies from one station to another. Usually an assembly is moved on a specially designed carrier called a pallet, and the device that fixes the position of an assembly on the pallet is called a fixture. Automatic assembly systems can be classified into indexing systems and free-transfer systems according to the transfer system installed. An indexing system transfers all assemblies simultaneously; consequently the entire assembly system stops if any one of the workstations is down. Therefore, an indexing assembly system with several workstations can have very high down time if the individual station reliability is not high. In a free-transfer system, on the contrary, stations are separated by buffer units. Hence, each workstation may operate independently, without immediate interference from the malfunction of the other workstations, and the productivity of an assembly system can be increased with a free-transfer configuration [1]. In a closed-loop assembly system (CLAS), a finite number of pallets are installed. If the number of pallets installed is not sufficient, workstations will frequently be starved. On the other hand, if too many pallets are installed, workstations will be blocked once a workstation jams or malfunctions. Moreover, the cost of pallets may be very high. Therefore, installing an optimal number of pallets in a CLAS is one of the most challenging tasks for an engineer.

2. Literature Review
With the advancement of queueing theory, many approximation techniques and results originally developed for studying the behavior of queues have been employed to study assembly systems. In fact, assembly systems with closed-loop structures and finite pallets can be modeled as cyclic queues with a finite number of customers. In 1982, Koenigsberg [2] reviewed the research concerning cyclic queues and closed queueing networks. Other researchers have applied queueing approximation techniques to analyze the performance of manufacturing systems with queue structures [3, 4]. Sanders and Kamath developed RENA [5], an approximation approach for cyclic queues, to estimate the production rates of automatic assembly systems with a closed-loop structure and general station cycle times. While these approximation techniques have provided good insights into the performance of manufacturing systems, most of the models obtained are restricted to assembly systems with simple topological structures and a limited number of workstations. Many researchers [6, 7, 8, 9, 10, 11, 12] have developed approximate models that can analyze the performance of closed-loop systems with finite buffer units and pallets. These models contribute approximate solutions for estimating the overall productivity. However, some of their assumptions, such as negligible transport time, are not realistic and cannot be applied in real-world situations. Due to the complex nature of analyzing the performance characteristics of assembly systems, many researchers have studied this type of problem using simulation [12, 13, 14]. An excellent presentation of techniques for studying the performance characteristics of different types of assembly systems can be found in Boothroyd [1]. Also, Buzacott and Shanthikumar have written an extensive review of modeling techniques for manufacturing systems [3]. Law [15] analyzed the effects of different parameters by applying experimental design techniques to collect and analyze simulation data. Studies of the performance characteristics of parallel stations using simulation have been presented by the authors [16, 17, 18]. Using a simulation approach, unrealistic assumptions can be relaxed and the results generated can reflect the actual operating environment more accurately. In this paper, we first present the performance characteristics of closed-loop assembly systems (CLAS) with finite pallets. Critical factors affecting their performance are identified and their impacts quantified. Finally, guidelines for selecting the right configuration to optimize the overall performance of a CLAS are presented.

3. Model Description
A discrete-event simulation model implemented in C++ was developed to evaluate and compare the performance of CLAS under various assembly system design parameters. These systems consist of one dummy load/unload station and a series of workstations. The mean processing time of each station is set to be equal; that is, the whole system is balanced. Assemblies are mounted on pallets and transferred from station to station on a continuously operating conveyor. A finite number of buffer units are installed between the workstations (Fig. 1; a minimal illustrative sketch of such a model follows the figure). The stochastic behavior of the systems was simulated by appropriate probability distributions for the random variables.

[Figure 1. A Closed Loop Asynchronous Automatic Assembly System: stations around the loop are shown in the states normal, jammed, forced down, and starved; pallets carry normal or defective assemblies.]
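For illustration only, the sketch below mimics such a closed loop as a discrete-event model in Python with SimPy, rather than the authors' C++ model; all parameter values are invented. A fixed population of pallets circulates through stations separated by finite buffers, so a station starves when its input buffer is empty and is forced down when its output buffer is full.

```python
import random
import simpy

N_STATIONS, BUF_SIZE, N_PALLETS = 3, 3, 6          # hypothetical configuration
MEAN_PROC, JAM_PROB, MEAN_REPAIR = 5.0, 0.05, 20.0 # hypothetical parameters
done = {"count": 0}                                # assemblies completed

def station(env, i, inbuf, outbuf):
    while True:
        pallet = yield inbuf.get()                      # starved while empty
        yield env.timeout(random.expovariate(1 / MEAN_PROC))
        if random.random() < JAM_PROB:                  # jam on a defective part
            yield env.timeout(random.expovariate(1 / MEAN_REPAIR))
        if i == N_STATIONS - 1:
            done["count"] += 1
        yield outbuf.put(pallet)                        # forced down while full

random.seed(1)
env = simpy.Environment()
buffers = [simpy.Store(env, capacity=BUF_SIZE) for _ in range(N_STATIONS)]
for p in range(N_PALLETS):
    buffers[p % N_STATIONS].put(p)                      # spread pallets around the loop
for i in range(N_STATIONS):
    env.process(station(env, i, buffers[i], buffers[(i + 1) % N_STATIONS]))
env.run(until=22000)
print("output:", done["count"])
```

SimPy's blocking get/put calls reproduce the starved and forced-down states directly, which keeps the sketch short.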

4. Performance Characteristics of Work Stations

Under normal operation, each station can be in one of the following four states:


Table 1  State Condition(s) of Workstation

State        Conditions for occurrence
Idle         No assembly carried by a pallet is available for immediate processing
Jammed       A defective assembly (component) is encountered
Forced down  The downstream buffer units are full, so a completed assembly cannot be released
Busy         The station is processing a normal (non-defective) assembly

States 1 through 3 are classified as nonproductive, since a workstation cannot operate on any normal assembly while in one of these three states. Therefore, the productivity of the system increases as the time workstations spend in the productive state increases. The assumptions of the model are:
(i) Process characteristics of the workstations. The assembly system is balanced; every workstation shares the same mean processing time and distribution.
(ii) Breakdown characteristics of the workstations. The only cause of breakdown of a workstation is the jamming of a defective assembly, and the probability of encountering a defective assembly is the same for all workstations.
(iii) Repair characteristics of the workstations. The repair time of workstations follows a random distribution.
(iv) Transport time for the pallets. The transport time unit is defined as the time required for a free-running assembly to travel one buffer space or one normal workstation. The transport times for buffer units and workstations are equal and constant.
(v) System capacity. There is sufficient space for the last workstation to release its final product, and assemblies are always available to feed the system. Consequently, the last station is never blocked and the first station is starved only when no pallets are available.

5. The Experiment
5.1 Input Parameters
To study the performance characteristics of the system, a factorial experimental design was used to analyze the experimental output based on the following five control variables:
N  = total number of stations
P  = pallet ratio
B  = buffer size
CV = coefficient of variation of each workstation
D  = transport delay ratio

Total number of stations (N) is defined as the total number of workstations required to finish all the assembly processes. Pallet ratio (P) is defined as the ratio between the total number of pallets and the total number of buffer units. Coefficient of variation of each workstation (CV) is defined as the variation of processing time due to jamming. Transport delay ratio (D) is defined as the ratio between the time for a pallet to travel one buffer unit and the mean processing time. Two levels for each factor were carefully chosen to represent typical system operating conditions; for instance, the buffer size was set equal to one or five. In this way, a total of 32 (2^5) combinations of operating conditions were formed. The chosen levels represent the range of each factor in a typical automatic assembly system; the high and low levels of each factor are summarized in Table 2, and a short snippet enumerating the 32 combinations follows the table.


Table 2  Values of the high/low levels of each factor

        N     P     B     CV     D
Low     3    0.3    1    0.01   0.1
High    7    0.7    5    0.05   0.3
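Enumerating the 2^5 treatment combinations of Table 2 is straightforward; a minimal sketch (a hypothetical helper, not the paper's C++ code):

```python
from itertools import product

# Low/high levels taken from Table 2.
levels = {"N": (3, 7), "P": (0.3, 0.7), "B": (1, 5),
          "CV": (0.01, 0.05), "D": (0.1, 0.3)}
runs = [dict(zip(levels, combo)) for combo in product(*levels.values())]
print(len(runs))   # 32 treatment combinations
print(runs[0])     # {'N': 3, 'P': 0.3, 'B': 1, 'CV': 0.01, 'D': 0.1}
```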

The length of each simulation run was set to 22000 time units. Data collected from the first 2000 time units were discarded to avoid initialization bias. Each of the 32 combinations was replicated 20 times, with different random seeds used for different combinations and replications.
5.2 Output
Since the valid simulation run length is 20000 time units and the processing time is 5 time units, the maximum output equals the valid simulation time divided by (the processing time of station 1 plus one time unit for transferring the pallet out of the station), i.e., 20000/(5+1), or about 3333 assemblies. This theoretical output is used as a baseline against which the performance under the different condition combinations is compared.
5.3 Results
Using the Yates algorithm, the estimated main effects and interaction effects of the factors were obtained and are summarized in Table 3.
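The Yates algorithm computes these effect estimates with k passes of pairwise sums and differences over the responses arranged in standard order. A minimal sketch follows; the response values in the example are placeholders, not the study's simulation data.

```python
def yates(responses):
    """Effects of a 2^k factorial from responses listed in standard (Yates) order."""
    n = len(responses)
    k = n.bit_length() - 1
    assert n == 2 ** k, "need 2^k responses"
    col = list(responses)
    for _ in range(k):        # k passes: pairwise sums, then pairwise differences
        sums = [col[i] + col[i + 1] for i in range(0, n, 2)]
        diffs = [col[i + 1] - col[i] for i in range(0, n, 2)]
        col = sums + diffs
    # col[0]/n is the grand average; the rest, divided by n/2, are the effects.
    return [col[0] / n] + [c / (n / 2) for c in col[1:]]

# Toy 2^2 example with hypothetical productivity values in standard order:
print(yates([0.60, 0.80, 0.75, 0.95]))   # [0.775, 0.20, 0.15, 0.0]
```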
Table 3  Effects of factors within the design region

Average: 77.92%
Main effects: N -7.21%; P 24.08%; B 20.98%; CV -12.07%; D -3.70%
Two-factor interactions: NP 2.29%; NB 1.86%; N-CV -3.24%; ND -0.44%; PB -16.28%; P-CV -1.00%; PD 3.72%; B-CV 0.18%; BD -0.94%; CV-D 0.34%
Three-factor interactions: NPB 0.37%; NP-CV -0.40%; NPD 0.26%; NB-CV 0.52%; NBD -0.63%; N-CV-D 0.11%; PB-CV 2.70%; PBD 1.16%; P-CV-D -0.27%; B-CV-D 0.05%
Four-factor interactions: NPB-CV 1.19%; NPBD 0.82%; NP-CV-D -0.29%; NB-CV-D 0.24%; PB-CV-D 0.07%
Five-factor interaction: NPB-CV-D 0.00%

The statistical results indicate that the main effects of the factors N, P, B, CV and D, within the chosen design region, are significant at the 99 percent confidence level. In other words, all the studied factors have a statistically significant impact on the productivity of the system. In terms of total products produced, the pallet ratio (P) is the most significant of the five factors: increasing the pallet ratio from 0.3 to 0.7 increases total output by more than 24 percent. The second most important factor is the buffer size (B), which improves productivity by about 21% when increased from 1 to 5 units. The coefficient of variation (CV) affects productivity by more than 10% when changed from its low to its high level. Among all the factors, the delay ratio (D) has the least impact, reducing productivity by less than 4% when changed from low to high. One of the main difficulties in analyzing the productivity of an automatic assembly system is the interaction of different factors. Table 3 shows that the P x B interaction significantly affects productivity, while the interactions of the other factors appear to be small. This suggests that P and B are highly interactive factors and should not be interpreted individually.

6. The Impact of Pallets under Different Factors

In the previous sections, we could only conclude whether a factor has a statistically significant impact. To gain further insight into the behavior of the systems, more levels of each factor were chosen within the previous design region for analysis. In particular, we are interested in whether the response (productivity) depends linearly or nonlinearly on the factors, and in the relationships between the pallet ratio and the other factors under investigation. Three levels were chosen for N, B, CV and D, and a finer grid of levels was chosen for P. The levels chosen for each factor are summarized in Table 4. In this section, we focus on the interactive effects between the pallet ratio and the other factors.
Table 4  Studied factors and levels

Factor   Levels (low to high)
N        3, 7, 10
P        0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9
B        1, 3, 6
CV       0.01, 0.03, 0.05
D        0.1, 0.2, 0.3

6.1 Interaction between P and N  When N increases, more buffer units have to be installed to decouple the jamming effects of the workstations, and as a result more pallets have to be installed to maintain the productivity of the system. Fig. 2 shows that productivity increases with the pallet ratio up to a maximum at a pallet ratio of 0.7; when the pallet ratio exceeds 0.7, productivity drops as the pallet ratio increases. Productivity thus depends nonlinearly on the pallet ratio at every level of N, and the optimal pallet ratio appears to be 0.7 for all N levels. It is also noted that productivity decreases as the number of stations increases, because the overall reliability of the system decreases as more stations are added.
[Figure 2. Productivity with Different Number of Workstations (N): productivity (0-1) versus pallet ratio P (0.1-1.0) for N = 3, 7, 10.]

6.2 Interaction between P and B  When the pallet ratio is at the high level of 1, a jam at any station forces the whole system down in a very short time: the buffer units cannot decouple the jamming effect of a station and hence do not improve the overall productivity of the system. However, when the pallet ratio is 0.2, a station can perform normally even when a downstream station is jammed, provided sufficient buffer units are installed to decouple the jamming effect. Under this condition the time a station spends in the normal state increases, so productivity can be improved significantly by adding buffer units. Fig. 3 shows that pallet ratio and buffer size interact nonlinearly: the change in productivity with respect to the pallet ratio decreases as the buffer size increases. In other words, the marginal improvement in productivity from additional pallets decreases as the buffer size increases.

[Figure 3. Productivity with Different Buffer Size (B): productivity versus pallet ratio P (0.1-1.0) for B = 1, 3, 6.]

6.3 Interaction between P and CV  Fig. 4 shows that the rate of change of productivity with respect to the pallet ratio does not change significantly when the pallet ratio is below 0.6. However, when the pallet ratio goes beyond 0.6, the rate of change of productivity decreases as CV increases. The phenomenon can be explained in terms of the decoupling effectiveness of the installed buffer units. When a system is full of pallets, workstations are forced down whenever any workstation is jammed; consequently the decoupling effectiveness of the buffer units is significantly reduced and workstations cannot perform normal operations independently. Hence, the variation of processing time has a stronger impact on productivity in this situation. On the other hand, if there are only a few pallets in the system, the impact of processing-time variation is absorbed by the decoupling effect of the buffer units; hence productivity does not change significantly when the pallet ratio is below 0.6.
[Figure 4. Productivity with Different Coefficient of Variance (CV): productivity versus pallet ratio P (0.1-1.0); legend as printed: CV = 0.1, 0.3, 0.5.]

6.4 Interaction between P and D  Fig. 5 shows that productivity depends nonlinearly on the pallet ratio as the transport delay ratio varies. Productivity can differ significantly across transport delay ratios at P levels lower than 0.6, but the difference becomes insignificant once the pallet ratio reaches 0.8 or above. This phenomenon can be explained by considering the forced-down and starving frequencies. When a system is full of pallets, the starvation time of the system is negligible, but the whole system is forced down whenever a workstation is jammed; productivity is then dominated by the frequency of jams and the time to clear a jam, and delay time does not play an important role. On the other hand, if only a few pallets circulate in the system, workstations are starved quite frequently and the impact of forced-down time is negligible. Under this condition the productivity of the system is determined by starvation rather than by the forced-down effect. Since the starvation duration increases with the transport delay, productivity decreases as the transport delay increases.

[Figure 5. Productivity with Different Transport Delay Ratio (D): productivity versus pallet ratio P (0.1-1.0) for D = 0.1, 0.2, 0.3.]

7. General Guidelines for Designing CLASs

This paper has studied the performance characteristics of closed-loop asynchronous assembly systems and provides insights into selecting the optimal configuration for achieving the highest productivity at the lowest cost. Some general guidelines were generated to help engineers design closed-loop assembly systems; they can be summarized as follows. The main effects of all five factors under investigation, namely N, P, B, CV and D, have a significant impact on productivity within our experimental design region (Table 5), but the degree of impact of each factor varies with the combination of the other factors. While P and B are positively related to overall productivity, CV, D and N are the opposite. The results showed that the pallet ratio (P) is the most significant factor for improving total productivity: increasing the pallet ratio from 0.3 to 0.7 increased total output by more than 24 percent. The second most important factor is the buffer size (B), which improves productivity by close to 21% when increased from 1 to 5 units. In other words, both factors can improve overall productivity by more than 20%. By contrast, overall productivity decreases by between 3.7% and 12% as CV, N and D are switched from their low to their high levels. Therefore, adding pallets up to the optimal pallet ratio should be considered first when one wants to improve overall productivity.
Table 5  Main effects of the five studied factors

Rank  Factor  Estimated effect  Optimal value
1     P       24.08%            0.6-0.8
2     B       20.98%            depends on CV
3     D       -3.70%            0
4     N       -7.21%            minimum*
5     CV      -12.07%           0

The second phenomenon concerns the optimal pallet-ratio setting in a closed-loop AAS. When the number of pallets is set between 0.6 and 0.8 of the total capacity of the system (all buffer units plus all station units), productivity reaches its maximum value: productivity increases as the pallet ratio rises to 0.6, plateaus between 0.6 and 0.8, and drops when the pallet ratio is increased further. A general guideline is therefore that the pallet ratio should be set to at least 60% and should not go beyond 80%. If the number of pallets cannot be changed for economic reasons, changing the buffer size is an alternative way to improve productivity. Although buffer size (B) and pallet ratio (P) are both very significant factors for improving productivity, B relates nonlinearly with P in its effect on productivity.

The simulation results showed that when P is lower than its optimal value, increasing the buffer size between stations significantly increases productivity. When P is equal to or higher than the optimal value the effect is still present, but adding buffer capacity cannot offset the loss of decoupling caused by the excess pallets, so productivity decreases as P continues to increase. This suggests that when P is lower than the optimal value, adding buffer units between stations helps improve productivity; otherwise, adding buffer units has only limited impact. When neither the pallet ratio nor the buffer size can be changed, reducing the variance of the stations is another option for the engineer. The simulation results indicated that when the pallet ratio (P) is larger than 50%, decreasing the coefficient of variation (CV) of the stations improves productivity, and the effect appears to be the same at all P levels above 50%. This suggests that if both the number of pallets and the buffer size are fixed, keeping the jamming probability of all stations at the same low level helps improve total productivity. Both D and N have a negative impact on productivity when changed from their low to their high levels. To optimize overall productivity, the number of stations should be minimized, since more stations may increase the CV, which in turn lowers overall productivity; that is, one should consider minimizing the number of operations where possible. The delay ratio, which has the least impact on productivity within our design region, should be the last factor to be tuned. While this paper has shown the performance characteristics of balanced CLAS, unbalanced CLAS and CLAS with more complicated configurations, such as systems with sub-loops and parallel stations, have not been studied; these related problems are under investigation by the authors.
References

[1] Boothroyd G, Poli C, Murch L E. Automatic Assembly. Marcel Dekker, 1982
[2] Koenigsberg E, Mamer. The Analysis of Production Systems. International Journal of Production Research, 1982, 20(1): 1-14
[3] Buzacott J A, Shanthikumar J G. Stochastic Models of Manufacturing Systems. 1st Edition. Prentice-Hall, 1993
[4] Whitt W. Open and Closed Networks of Queues. AT&T Bell Laboratories Technical Journal, 1984, 63: 1911-1979
[5] Kamath J. Analytical Performance Models for Automatic Assembly Systems. Ph.D. Thesis, University of Wisconsin, Madison, US, 1986
[6] Dallery Y, Gershwin S B. Manufacturing Flow Line Systems: a Review of Models and Analytical Results. Queueing Systems Theory and Applications, 1992, 12: 3-94
[7] Frein Y, Commault C, Dallery Y. Modeling and Analysis of Closed-loop Production Lines with Unreliable Machines and Finite Buffers. IIE Transactions, 1996, 28: 545-554
[8] Onvural R O, Perros H G. Approximate Throughput Analysis of Cyclic Queueing Networks with Finite Buffers. IEEE Transactions on Software Engineering, 1989, 15: 800-808
[9] Dallery Y, Towsley D. Symmetry Property of the Throughput in Closed Tandem Queueing Networks with Finite Capacity. Operations Research Letters, 1991, 10: 541-547
[10] Kim D S. The Equivalence of Two-station Closed and Open Serial Production Systems with Finite Buffers. IIE Transactions, 1998, 30: 101-106
[11] Hand M S, Park D J. Performance Analysis and Optimization of Cyclic Production Lines. IIE Transactions, 2002, 34: 411-422
[12] Okamura K, Yamashina H. Analysis of Effect of Buffer Storage Capacity in Transfer Line Systems. AIIE Transactions, 1977, 9: 127-135
[13] Okamura K, Yamashina H. Analysis of In-Process Buffers for Multi-Stage Transfer Line Systems. International Journal of Production Research, 1983, 21: 183-196
[14] Ovuworie G C. Some Simulation Results of a Model of an Unreliable Series Production System. International Journal of Production Research, 1982, 20: 619-627
[15] Law S S. A Statistical Analysis of System Parameters in Automatic Transfer Lines. International Journal of Production Research, 1981, 19: 709-724
[16] Leung W K, Lai K K. Analysis of Strategies for Installing Parallel Stations in Assembly Systems. Industrial Engineering & Management Systems, 2005, 4(2): 117-122
[17] Leung W K, Lai K K. Performance Analysis of Assembly Systems with Highlift Stations: A Practical Approach. International Journal of Computer and Industrial Engineering, 2004, 45(2)
[18] Leung W K, Lai K K. Analysis of Repair Strategies for Automatic Assembly Systems. International Journal of Quality and Reliability Management, 1996, 15(6): 45-58


Container Slot Allocation Model with Liner Shipping Revenue Management

Bu Xiangzhi, Chen Rongqiu, Li Li
1 School of Management, Huazhong University of Science and Technology, Wuhan 430074, China 2 School of Business, Shantou University, Shantou 515063, China

Abstract Based on revenue management, the container slot allocation problem in the liner shipping industry is studied quantitatively under demand uncertainty. Firstly, a multi-leg capacity allocation model considering empty container transportation and routing choice is proposed, based on an analysis of the characteristics of liner shipping revenue management. Then, to handle the demand uncertainty, a novel robust optimization method is proposed to solve the model. Finally, a case study is carried out via simulation. The results show that the robust optimization model obtains higher revenue than the deterministic linear programming model, and that the model and solution method have application value for the revenue management problems of container liner shipping companies. Key words Liner shipping, Revenue management, Slot allocation, Robust optimization

1. Introduction
With the fast development of the container shipping industry and increasingly intense market competition, the optimization of container transportation flow systems has become a hotspot of great concern to container transportation companies and related organizations. As a representative service industry, container liner shipping has the characteristics required for the application of revenue management (RM)[1,6]. However, because of long voyage times and multiple routes, a liner shipping RM system has to optimize the revenue of the whole transportation network. At the same time, owing to the reuse of containers and the imbalance of trade development, liner shipping companies must consider not only the allocation of loaded containers but also the transportation and stowage of empty ones. Facing multiple routes and uncertain demand, liner companies have to decide how to allocate their limited transportation capacity on each line so as to carry the goods with the maximum marginal contribution to profit and achieve revenue maximization. This is the problem studied in this paper.

The revenue management literature is rich in the airline passenger domain, while studies of the liner shipping industry are relatively scarce. Using actual booking data from trans-Pacific westbound voyages, Ha (1994) studied the capacity control policies for major commodities of a liner shipping company, applying the expected marginal revenue (EMR) model and the threshold curve model; the results of 102 simulation runs showed that the revenue management models effectively increased freight revenue by controlling booking requests. Introducing a dynamic programming model of the booking process, Maragos (1994) studied dynamic capacity allocation and pricing for one-leg and multi-leg liner shipping companies. However, empty container repositioning and routing choice have not been considered in these works. As an important problem in liner shipping operations, empty container repositioning has attracted great attention recently. Shen and Khoong (1995) established a decision support system for the empty container imbalance problem. Lai et al. (1995) studied the container logistics and allocation problem by simulation and heuristic methods, obtaining logistics policies that reduce operation and opportunity costs. However, aiming at cost reduction, these studies did not consider revenue maximization over the whole set of voyages. Considering all the factors mentioned, Chen and Lee (2002) proposed a liner shipping revenue management problem similar to this work; they provided a fuzzy multi-objective programming model in which uncertain demand and customer satisfaction are represented by fuzzy numbers. Considering the stochastic nature of demand and empty container repositioning, Bu et al. (2005) proposed a robust optimization model for the multi-leg slot allocation problem, but they did not consider routing choice. Building on this literature, we establish a stochastic optimization model for the multi-leg slot allocation and routing choice problem, and then propose a novel robust optimization method to solve the model. This paper proceeds as follows: Section 2 formulates the optimal stochastic network capacity allocation models; Section 3 proposes a novel robust optimization method to solve them; Section 4 provides a case study and discussion, followed by concluding remarks and research prospects in Section 5.

2. Model Formulation
The major assumptions and notation for the parameters and variables used in this paper are as follows. The loading and discharging rates, transshipping costs, ocean shipping and land-carriage costs, and the average freight rates of each origin-destination port pair are assumed to have been estimated.

- $L$: set of all shipping lines of the company; each line $l \in L$ has $M$ legs with known calling ports.
- $\Phi = \{(i,j)\}$: set of all loading and discharging port pairs, where $i$ indexes the loading port and $j$ the discharging port.
- $P$: set of all feasible routes composed of the pairs in $\Phi$ over all lines; $p \in P$.
- $r_{ijkp}$: unit revenue of shipping a $k$-type loaded container by route $p$ from port $i$ to port $j$, equal to the freight rate minus transportation costs (inland transportation fares, ocean shipping rates, loading and unloading costs, transshipping fares, etc.).
- $D_{ijk}$: demand for $k$-type loaded containers between the loading and unloading port pair.
- $x_{ijkp}$: decision variable, the number of slots allocated to $k$-type loaded containers shipped by route $p$ from port $i$ to port $j$.
- $w^{f}_{ijk}$: average total weight (tons) of a $k$-type loaded container delivered from port $i$ to port $j$.
- $C^{l}_{ij}$: operational capacity of the vessel between ports $i$ and $j$ in line $l$ (unit: TEU, twenty-foot equivalent units).
- $W^{l}_{ij}$: deadweight tonnage of the vessel (unit: ton).
- $E_{j}$: average repositioning demand of empty containers to be supplied to port $j$ in one period.
- $y_{ijkp}$: number of slots allocated to $k$-type empty containers shipped by route $p$ from port $i$ to port $j$.
- $w^{e}_{k}$: average total weight (tons) of a $k$-type empty container.
- $c_{ijkp}$: corresponding cost of repositioning empty containers.

We also define the indicator function

$$A^{l}_{ijp} = \begin{cases} 1, & \text{if route } p \text{ passes the port pair } (i,j) \text{ in line } l \\ 0, & \text{otherwise.} \end{cases}$$

The multi-leg slot allocation model with routing choice can then be given as (M1):

$$\max \sum_{(i,j)\in\Phi}\sum_{p\in P}\sum_{k=1}^{K}\left(r_{ijkp}\,x_{ijkp}-c_{ijkp}\,y_{ijkp}\right)$$

subject to

$$\sum_{p\in P}\sum_{k=1}^{K} A^{l}_{ijp}\left(x_{ijkp}+y_{ijkp}\right)\le C^{l}_{ij},\quad \forall (i,j)\in\Phi,\; l\in L$$
$$\sum_{p\in P}\sum_{k=1}^{K} A^{l}_{ijp}\left(w^{f}_{ijk}\,x_{ijkp}+w^{e}_{k}\,y_{ijkp}\right)\le W^{l}_{ij},\quad \forall (i,j)\in\Phi,\; l\in L$$
$$\sum_{p\in P} x_{ijkp}\le D_{ijk},\quad \forall (i,j)\in\Phi,\; k=1,\dots,K$$
$$\sum_{i:(i,j)\in\Phi}\;\sum_{p\in P}\sum_{k=1}^{K} y_{ijkp}\ge E_{j},\quad \forall j$$
$$x_{ijkp},\,y_{ijkp}\in\mathbb{N}\cup\{0\},\quad \forall (i,j)\in\Phi,\; p\in P. \qquad \text{(M1)}$$

The objective function of model M1 maximizes the total freight contribution (freight revenue minus repositioning cost) from the shipment. The first constraint is the capacity constraint: the slots allocated to loaded and empty containers cannot exceed the vessel's operational capacity. The second is the deadweight constraint: the total weight of loaded and empty containers cannot exceed the vessel's deadweight tonnage. The third is the demand constraint on the decision variables; owing to the stochastic nature of demand it is uncertain, and the solution method is given later. The fourth is the repositioning constraint, which requires the total number of slots carrying empty containers into port $j$ to be at least the repositioning demand of that port. The last is the integrality constraint on the decision variables.
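To make the structure of (M1) concrete, the sketch below solves a toy deterministic instance with one line calling ports 1 -> 2 -> 3, a single container type, and no empty repositioning. All figures are invented, and PuLP is used purely for illustration (the paper itself solves the models with LINDO).

```python
from pulp import LpProblem, LpVariable, LpMaximize, lpSum, value

legs = [(1, 2), (2, 3)]
cap = {(1, 2): 180, (2, 3): 180}             # C: leg capacity in TEU (invented)
od = {  # port pair -> (legs used by its route, revenue per TEU, demand)
    (1, 2): ([(1, 2)], 500, 120),
    (1, 3): ([(1, 2), (2, 3)], 900, 80),
    (2, 3): ([(2, 3)], 450, 150),
}

prob = LpProblem("slot_allocation", LpMaximize)
x = {k: LpVariable(f"x_{k[0]}_{k[1]}", lowBound=0, upBound=d, cat="Integer")
     for k, (_, _, d) in od.items()}         # upBound enforces the demand constraint
prob += lpSum(r * x[k] for k, (_, r, _) in od.items())              # revenue
for leg in legs:                                                    # capacity per leg
    prob += lpSum(x[k] for k, (route, _, _) in od.items() if leg in route) <= cap[leg]
prob.solve()
print({k: value(v) for k, v in x.items()}, value(prob.objective))
```

Adding empty-container flows, the deadweight rows, and multiple container types follows the same pattern with extra variables and constraints.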

3. Model Solving Based on Robust Optimization

Because of the many stochastic variables in the model above, it is not easy to solve directly. One solution method is to replace the stochastic variables with their expected values, which is called the deterministic linear programming (DLP) method. However, the drawback of this approach is its inability to handle risk aversion or decision-makers' preferences in a direct manner, and the solution is not guaranteed to be feasible [18]. One may then carry out sensitivity analysis (SA) for corrective action, but such an approach is reactive [12]. We believe decision makers would prefer proactive tools for obtaining the solution, and robust optimization (RO) is exactly this kind of approach.
3.1 Model Transformation
According to Mulvey et al. (1995), robust optimization integrates goal programming with a scenario-based description of the unknown data, and is one of the proactive approaches for solving stochastic programming problems [12]. The philosophy of robust optimization is built on the trade-off between solution robustness and model robustness. Following [9] and [12], we transform model (M1) into the robust optimization model (M2) below:
$$\max \sum_{s\in S} p^{s}\xi^{s} \;-\; \lambda \sum_{s\in S} p^{s}\Bigl|\xi^{s}-\sum_{s'\in S}p^{s'}\xi^{s'}\Bigr| \;-\; \sum_{(i,j)\in\Phi}\sum_{k=1}^{K}\omega_{ijk}\sum_{s\in S} p^{s}\Bigl|D^{s}_{ijk}-\sum_{p\in P}x_{ijkp}\Bigr|$$

subject to

$$\sum_{p\in P}\sum_{k=1}^{K} A^{l}_{ijp}\left(x_{ijkp}+y_{ijkp}\right)\le C^{l}_{ij},\quad \forall (i,j)\in\Phi,\; l\in L$$
$$\sum_{p\in P}\sum_{k=1}^{K} A^{l}_{ijp}\left(w^{f}_{ijk}\,x_{ijkp}+w^{e}_{k}\,y_{ijkp}\right)\le W^{l}_{ij},\quad \forall (i,j)\in\Phi,\; l\in L$$
$$\sum_{p\in P} x_{ijkp}\le \max_{s}\,D^{s}_{ijk},\quad \forall (i,j)\in\Phi,\; k=1,\dots,K$$
$$\sum_{i:(i,j)\in\Phi}\;\sum_{p\in P}\sum_{k=1}^{K} y_{ijkp}\ge E_{j},\quad \forall j$$
$$\xi^{s}=\sum_{(i,j)\in\Phi}\sum_{p\in P}\sum_{k=1}^{K}\left(r_{ijkp}\,x_{ijkp}-c_{ijkp}\,y_{ijkp}\right),\quad \forall s$$
$$x_{ijkp},\,y_{ijkp}\in\mathbb{N}\cup\{0\},\quad \forall (i,j)\in\Phi,\; p\in P \qquad \text{(M2)}$$

where $s \in S=\{1,2,\dots,S\}$ indexes the scenarios and $p^{s}$ is the corresponding probability, with $p^{s}\ge 0$ and $\sum_{s} p^{s}=1$. The first term of the objective function is the expected revenue, and the second is the mean absolute deviation of the revenue; together these two terms can be viewed as a measure of the trade-off embodied in solution robustness. The third term, the absolute deviation for violation of the demand constraints, is a measure of model robustness. The parameters $\lambda$ and $\omega_{ijk}$ are nonnegative weighting parameters, where $\lambda$ can
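Once candidate allocations are fixed, the three objective terms can be computed directly from the scenario outcomes. A small numeric sketch follows; all profit and shortfall figures are invented for illustration.

```python
import numpy as np

p = np.array([0.3, 0.6, 0.1])                      # scenario probabilities (sum to 1)
profit = np.array([152000.0, 118000.0, 90000.0])   # xi^s: invented scenario profits
shortfall = np.array([0.0, 40.0, 210.0])           # |D^s - allocated|: invented (TEU)
lam, omega = 0.5, 100.0                            # risk factor and penalty weight

expected = p @ profit                        # first term: expected revenue
mad = p @ np.abs(profit - expected)          # second term: mean absolute deviation
penalty = omega * (p @ shortfall)            # third term: model-robustness penalty
print(expected - lam * mad - penalty)
```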

be regarded as a risk factor for the decision-maker, and the $\omega_{ijk}$ are penalty weights for constraint violations.
3.2 A Novel Robust Optimization Method
The mean absolute deviation terms, however, introduce complexity, owing to the growing number of artificial variables when the model is solved by linear programming. Li (1996) proposed a novel means of solving such a goal programming (GP) problem, described by the following theorem (Theorem 1)[9]. Yu & Li (2000) proved that it is a simple and computationally efficient method [18].
Theorem 1 (Li 1996). The GP problem ($F$ is a feasible set)
$$\min\; Z=\bigl|f(X)-g\bigr|,\qquad \text{s.t. } X\in F,$$
can be linearized using the following form:
$$\min\; Z=f(X)-g+2\delta,\qquad \text{s.t. } \delta+f(X)-g\ge 0,\;\ \delta\ge 0,\; X\in F.$$
The proof of Theorem 1 can be found in [9]. According to Theorem 1, model (M2) can be reformulated as (M3) below:

$$\max \sum_{s\in S} p^{s}\xi^{s} \;-\; \lambda \sum_{s\in S} p^{s}\Bigl[\xi^{s}-\sum_{s'\in S}p^{s'}\xi^{s'}+2\theta^{s}\Bigr] \;-\; \sum_{(i,j)\in\Phi}\sum_{k=1}^{K}\omega_{ijk}\sum_{s\in S} p^{s}\Bigl[D^{s}_{ijk}-\sum_{p\in P}x_{ijkp}+2\delta^{s}_{ijk}\Bigr]$$

subject to

$$\xi^{s}-\sum_{s'\in S}p^{s'}\xi^{s'}+\theta^{s}\ge 0,\quad \forall s$$
$$D^{s}_{ijk}-\sum_{p\in P}x_{ijkp}+\delta^{s}_{ijk}\ge 0,\quad \forall (i,j)\in\Phi,\; k,\; s$$
$$\sum_{p\in P}\sum_{k=1}^{K} A^{l}_{ijp}\left(x_{ijkp}+y_{ijkp}\right)\le C^{l}_{ij},\quad \forall (i,j)\in\Phi,\; l\in L$$
$$\sum_{p\in P}\sum_{k=1}^{K} A^{l}_{ijp}\left(w^{f}_{ijk}\,x_{ijkp}+w^{e}_{k}\,y_{ijkp}\right)\le W^{l}_{ij},\quad \forall (i,j)\in\Phi,\; l\in L$$
$$\sum_{p\in P} x_{ijkp}\le \max_{s}\,D^{s}_{ijk},\quad \forall (i,j)\in\Phi,\; s$$
$$\sum_{i:(i,j)\in\Phi}\;\sum_{p\in P}\sum_{k=1}^{K} y_{ijkp}\ge E_{j},\quad \forall j$$
$$\xi^{s}=\sum_{(i,j)\in\Phi}\sum_{p\in P}\sum_{k=1}^{K}\left(r_{ijkp}\,x_{ijkp}-c_{ijkp}\,y_{ijkp}\right),\quad \forall s$$
$$\theta^{s}\ge 0,\;\ \delta^{s}_{ijk}\ge 0,\quad x_{ijkp},\,y_{ijkp}\in\mathbb{N}\cup\{0\},\;\ \forall (i,j)\in\Phi,\; p\in P \qquad \text{(M3)}$$

Obviously, the above model is a typical integer linear programming model, and it can be solved easily with a linear modeling package (such as LINDO or CPLEX). However, because the decision variables are the slot numbers allocated to each route, the scale of the model grows sharply once lines or calling ports are added, and the model becomes very difficult to solve; this is an NP-hard problem. Although it can be attacked with heuristic or meta-heuristic methods (such as genetic algorithms or simulated annealing), these only find near-optimal solutions. In practice, because of the high loading, unloading and transshipping costs, it is unusual to transship more than twice. So, in this paper we do not search all feasible routes, and consider only the simple cases of direct delivery or a single transshipment.
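A toy check of Theorem 1's linearization, with invented numbers: minimize $|f(X)-g|$ for $f=3x+2y$ and goal $g=12$ over $x+y\le 5$, using the deviational variable of the theorem.

```python
from pulp import LpProblem, LpVariable, LpMinimize, value

prob = LpProblem("li_linearization", LpMinimize)
x = LpVariable("x", lowBound=0)
y = LpVariable("y", lowBound=0)
d = LpVariable("d", lowBound=0)           # deviational variable of Theorem 1
f = 3 * x + 2 * y
prob += (f - 12) + 2 * d                  # linearized |f - 12|
prob += d + f - 12 >= 0                   # d >= g - f
prob += x + y <= 5
prob.solve()
print(value(x), value(y), value(prob.objective))
```

At the optimum, d is driven to max(0, g - f), so the printed objective equals |f - g| (here 0, since the goal of 12 is attainable).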


4. Case Simulation and Discussion

Two Far East/America service routes of a liner company in China are studied as a case. There are five calling ports on the two service lines, three of which lie in the Far East region and the other two in America. The voyage network is shown in Fig. 1. Ignoring near-ocean shipping, there are 12 port pairs along the two lines. Based on historical booking data, there are three scenarios of shipping demand, shown in Table 1; the corresponding probabilities are 0.3, 0.6 and 0.1, and the corresponding average rates appear in the bottom row of Table 1. Voyage one has an operational capacity of 1000 TEU and a deadweight of 7800 tons; voyage two has an operational capacity of 600 TEU and a deadweight of 6900 tons. The parameters such as ocean shipping costs, transshipping costs, and loading and unloading costs for loaded containers are given in Tables 2 and 3 for each line.
[Fig. 1. The voyage network: five port nodes; voyage one covers legs 1-2, 2-4, 4-5, 5-3 and 3-1, and voyage two covers legs 1-5, 5-3, 3-2 and 2-1. Legend: port node, transship node, leg line, loading and unloading line.]
Tab. 1  Demands and rates between calling ports

                 1-3   1-5   2-3   2-5   4-3   4-5   3-1   3-2   3-4   5-1   5-2   5-4
S1 (0.3)         450   518   405   563   360   315   225   240   165   225   270   158
S2 (0.6)         300   345   270   375   240   210   150   160   120   150   180   105
S3 (0.1)         200   230   180   251   161   140   101   107    90   101   120    71
Expectation      335   386   302   419   269   234   168   179   131   168   201   117
Average rate    1500  1300  1400  1300  1200  1100   400   500   500   500   600   600

Tab. 2  Average shipping costs for loaded containers in voyage one

        1     2     4     5     3
1       0    26    39   219   245
2     389     0    13   193   219
4     376   389     0   180   206
5     196   222   235     0    26
3     170   196   209   415     0

Tab. 3  Average shipping costs for loaded containers in voyage two

        1     5     3     2
1       0   183   203   365
5     245     0    20   219
3     196   379     0   170
2      26   209   229     0

Then we solved the model using the optimization software LINDO 6.0. Tables 4, 5 and 6 show the allocation results, comparing the DLP method with the RO method proposed in this paper.
4.1 Delivery routing choice  As shown by the DLP solution in Table 4, almost all the demand between port pairs is delivered by a single voyage. Because the costs of loading, unloading and transshipping a container are much higher than the slot usage costs, the company tries to use direct shipping unless the operational capacity of one voyage cannot fully cover the demand. In the RO solution, by contrast, the loaded containers from port 5 to port 2, and the loaded and empty containers from port 3 to port 1, are delivered by the two voyages at the same time. The reason is that, considering the stochastic nature of the demands, the RO method allocates many more slots to loaded containers than DLP does, so the operational capacity of a single voyage cannot cover the slot demand. Then, although transshipping costs the company more, it still earns a profit on these movements. That is why the transport plan from RO yields more profit than the plan from DLP.
Tab. 4  Routing choice between port pairs for each voyage (slots)

                      DLP                        RO
             voy. 1  voy. 2  total      voy. 1  voy. 2  total
1-3  loaded       0     335    335           0     450    450
     empty        0       0      0           0       0      0
1-5  loaded       0     265    265           0     150    150
     empty        0       0      0           0       0      0
2-3  loaded     302       0    302         257       0    257
     empty        0       0      0           0       0      0
2-5  loaded     365       0    365         335       0    335
     empty        0       0      0           0       0      0
4-3  loaded     269       0    269         240       0    240
     empty        0       0      0           0       0      0
4-5  loaded      64       0     64         168       0    168
     empty        0       0      0           0       0      0
3-1  loaded     168       0    168         149      76    225
     empty      120       0    120          98      22    120
3-2  loaded       0     179    179           0     240    240
     empty        0      90     90           0      90     90
3-4  loaded     131       0    131         165       0    165
     empty       85       0     85          85       0     85
5-1  loaded     168       0    168         225       0    225
     empty        0       0      0           0       0      0
5-2  loaded       0     201    201         120     150    270
     empty        0       0      0           0       0      0
5-4  loaded     117       0    117         158       0    158
     empty        0       0      0           0       0      0
Total loaded   1584     980   2564        1818    1065   2883
      empty     205      90    295         182     113    295

4.2 Slot allocation plan  The optimal slot allocation of the two voyages is shown in Tables 5 and 6. As Table 5 shows, the usage rate of the leg from port 1 to port 2 is very low, while the legs from port 4 to port 5 and from port 5 to port 3 are used much more heavily. Comparing the DLP and RO results, Table 5 shows that slot usage under RO increases markedly, with two of the legs reaching their maximum capacity. The liner company should therefore consider a slot-chartering policy: at reasonable cost, it can develop alliance voyages or charter slots temporarily for the legs that reach their capacity limits.
Tab. 5  The optimal slot allocation for voyage one (slots per leg)

DLP:
            1-2    2-4    4-5    5-3    3-1
2-3           0    302    302    302      0
2-5           0    365    365      0      0
4-3           0      0    269    269      0
4-5           0      0     64      0      0
3-1           0      0      0      0    168
3-4         131      0      0      0    131
5-1           0      0      0    168    168
5-4         117    117      0    117    117
Total       248    784   1000    856    584
Usage     24.8%  78.4%   100%  85.6%  58.4%

RO:
            1-2    2-4    4-5    5-3    3-1
2-3           0    257    257    257      0
2-5           0    335    335      0      0
4-3           0      0    240    240      0
4-5           0      0    168      0      0
3-1           0      0      0      0    150
3-4         165    165      0      0    165
5-1           0      0      0    225    225
5-2         120      0      0    120    120
5-4         158    158      0    158    158
Total       443    915   1000   1000    818
Usage     44.3%  91.5%   100%   100%  81.8%

Table 6 shows that the leg with the lowest usage rate is from port 2 to port 1: the usage rate is zero under DLP and 12.6% under RO. The reason is that we did not consider near-ocean shipping; at the same time, the shipping cost from port 2 to port 4 is relatively low, so most of the shipping load is carried by voyage one. For legs such as 2-1, the company can develop near-ocean transportation or form alliances with other liners to lease out the slots, which would raise the overall usage rate and hence the profit level.
Tab. 6  The optimal slot allocation for voyage two (slots per leg)

DLP:
            1-5    5-3    3-2    2-1
1-3         335    335      0      0
1-5         265      0      0      0
3-2           0      0    179      0
5-2           0    201    201      0
Total       600    536    380      0
Usage      100%  89.3%  63.3%     0%

RO:
            1-5    5-3    3-2    2-1
1-3         450    450      0      0
1-5         150      0      0      0
3-1           0      0     76     76
3-2           0      0    240      0
5-2           0    150    150      0
Total       600    600    466     76
Usage      100%   100%  77.6%  12.6%

5. Conclusions
This paper develops a static, multi-leg, multi-line stochastic programming model for liner shipping revenue management under uncertainty. The model considers empty container repositioning, routing choice and stochastic demand, which reflects the realistic operating conditions of liner companies, and a novel robust optimization approach is applied to solve it. Based on the comparison of the DLP and RO solutions, we draw the following conclusions: (1) the simulation results show that the model based on revenue management has great application potential in the liner shipping industry and can raise the profit of liner shipping companies; (2) the proposed robust optimization method captures the stochastic nature of demand more precisely and achieves higher usage rates and profit than the DLP method; (3) the linearization technique of the RO model has wide application value, since it allows widely available linear modeling packages to be applied directly to the model. The study of liner shipping revenue management is still at an early stage, and it is becoming an increasingly active research area with high potential for new models and procedures that improve revenue and provide decision support to liner shipping companies. Firstly, revenue management relies on accurate demand forecasting, while the three-dimensional character of ocean shipping demand depends on additional factors such as seasonality patterns and the impact of weather; simple, low-tech forecasting techniques are not effective for accurately predicting cargo demand, so new forecasting models or techniques should be developed. Secondly, this paper mainly studies a static network control policy; given the stochastic and dynamic character of the ocean shipping industry, dynamic network control policies may be much more needed. Finally, since the philosophy of revenue management is to balance supply and demand through capacity allocation and pricing, the dynamic pricing problem for liner shipping revenue management also has great prospects.


References

[1] Billings J.S., A.G. Diener and B.B. Yuen. Cargo Revenue Optimisation[J]. Journal of Revenue and Pricing Management, 2003, 2(1):69-79
[2] Bu X.Z., Zhao Q.W., et al. Optimal Capacity Allocation Model of Ocean Shipping Container Revenue Management Considering Empty Container Transportation[J]. Chinese Journal of Management Science, 2005, 13(1):71-75 (in Chinese)
[3] Chen C.Y., Q.A. Lee. Containership Revenue Management — taking the slot allocation as example[R]. Department of Transportation and Communication Management Science, National Cheng Kung University, Hsinchu, Taiwan, Working paper, 2002 (in Chinese)
[4] Chiu M.C. A Network Design Model for the Liner Containership Routing Problem[D]. Department of Transportation and Communication Management Science, National Cheng Kung University, Hsinchu, Taiwan, Doctoral dissertation, 2002 (in Chinese)
[5] Ha D. Capacity Management in the Container Shipping Industry: The Application of Yield Management Techniques[D]. Ph.D. thesis, The University of Tennessee, 1994
[6] Kasilingam R.G. Air cargo revenue management: characteristics and complexities[J]. European Journal of Operational Research, 1996, 96(1):36-44
[7] Kimes S.E. The basics of yield management[J]. Cornell Hotel and Restaurant Administration Quarterly, 1989, 30(2):14-19
[8] Lai K.K., Lam K. and Chan W.K. Shipping container logistics and allocation[J]. Operational Research Society, 1995, 46(6):687-697
[9] Li H.L. An Efficient Method for Solving Linear Goal Programming Problems[J]. Journal of Optimization Theory and Applications, 1996, 90(2):465-469
[10] Maragos S.A. Yield management for the maritime industry (shipping, itineraries)[D]. Ph.D. thesis, Massachusetts Institute of Technology, 1994
[11] McGill J.I. and G.J. van Ryzin. Revenue Management: Research Overview and Prospects[J]. Transportation Science, 1999, 33:233-256
[12] Mulvey J.M., Vanderbei R.J. and S.A. Zenios. Robust Optimization of Large-scale Systems[J]. Operations Research, 1995, 38:673-688
[13] Shen W.S. and C.M. Khoong. A DSS for Empty Container Distribution Planning[J]. Decision Support Systems, 1995, 15(1):75-82
[14] Smith B.C., J.F. Leimkuhler, et al. Yield management at American Airlines[J]. Interfaces, 1992, 22(1):8-31
[15] Talluri K.T. and G. van Ryzin. A randomized linear programming method for computing network bid prices[J]. Transportation Science, 1999, 33(2):207-216
[16] Ting S.C. and G.H. Tzeng. Fuzzy Multi-objective Programming Approach to Allocating Containership Slots for Liner Shipping Revenue Management[A]. Semmering: The 16th International Conference on MCDM, 2002:1-16
[17] Weatherford L.R. and S.E. Bodily. A Taxonomy and Research Overview of Perishable-Asset Revenue Management: Yield Management, Overbooking and Pricing[J]. Operations Research, 1992, 40:831-844
[18] Yu C.S. and H.L. Li. A Robust Optimization Model for Stochastic Logistics Problems[J]. International Journal of Production Economics, 2000, 64:385-397


Production Order Evaluation Based on Neural Network Model


Qiu Jie 1, 2, Chen Zhixiang 2
1 School of Management, Huazhong University of Science and Technology, Wuhan 430074, China 2 School of Business, Sun Yat-sen University, Guangzhou 510275, China

Abstract  For an MTO (make-to-order) manufacturing enterprise, evaluating production orders is the core task in demand management and production planning. A production order is a list containing detailed information about the product required by a customer. Considering factors such as production capacity, payment capacity and the customer relationship, order acceptance is a multi-objective (multi-attribute) decision problem. This paper uses a neural network to establish a decision model and applies the MATLAB neural network toolbox to solve the problem. In the BP neural network learning method the weights are adjusted continually, which reduces the workload and yields more accurate results more quickly. An application case shows the reasonableness and feasibility of the method.
Key words  Production order, BP neural network, Order selection, MATLAB toolbox

1. Introduction
Nowadays more and more manufacturers face a make-to-order market. In this market, the order-selection decision problem is the core of demand management and production planning [1]. A production order of a manufacturing enterprise is a list containing detailed information about the product required by the customer. Considering production capacity, payment capacity, the customer relationship and other factors, order selection is a multi-objective decision problem. In the past, much research on this kind of problem focused on decision-analysis methods such as ELECTRE, TOPSIS and AHP. These methods require the decision maker to possess various kinds of specialized knowledge, for example about the evaluation system for order choosing, the qualifications of the enterprise's operations, the market circumstances, the customers and so on. The decision maker must allocate a weight to each factor of the decision system to obtain a decision matrix, then compute the priority of each production order and finally make the production plan. This process not only requires cumbersome calculation but also involves many subjective factors, and it cannot make use of previous successful cases; all of this can lead to wrong decisions. This paper offers a comprehensive production order evaluation method based on a neural network. The initial values of many samples are input, normalized in the membership layer, and propagated through the hidden-layer neurons to the output layer, where the synthetic value is finally determined so as to establish a reasonable priority sequence for production orders. By training on the samples continually and adjusting the weights between the neurons of different layers, the relation between the evaluation result and the factor system is captured. Meanwhile, by using the MATLAB visual software the neural network is optimized (learning determines the best number of hidden-layer neurons), making the network faster and the training time shorter. This reduces the workload of evaluation, saves evaluation time, and accumulates and reuses expert experience to make the evaluation process automatic [2].

2. Factors analysis of production order evaluation in manufacturing enterprises


Because a production order is a list containing detailed information about the product required by the customer, the order-choosing problem is a multi-attribute decision problem, and many factors need to be considered in the evaluation. Referring to other research, we summarize nine factors to be considered when a manufacturer decides on order acceptance:
(1) The relationship between manufacturer and customer. This relationship generally has an important impact on the acceptance of orders from different customers; a manufacturer is willing to serve customers with a good relationship first.
(2) The unit profit of each ordered product. A manufacturing enterprise considers the unit profit when deciding on acceptance: the higher the unit profit, the higher the degree of acceptance.
(3) The completeness of the information of the ordered product (e.g., quantity, variety, color, type). If the enterprise cannot get accurate information from the customer order, it cannot plan the production successfully.
(4) The required quality eligibility rate of the ordered product. This rate varies from customer to customer, and the enterprise treats their orders differently.
(5) The degree to which the customer order can be satisfied. Each customer has different requirements for their orders; the manufacturer must consider its ability to offer the ordered product, and it is better to accept orders whose requirements can be met completely.
(6) The on-time delivery rate of the order. Delivery requirements vary from customer to customer; the manufacturer pays particular attention to orders with strict delivery requirements.
(7) The reasonableness of the negotiated items of the order (e.g., breach of negotiated items, delay of delivery). There is usually a negotiated item to deal with accidents, and its reasonableness influences the manufacturer's decision.
(8) The reasonableness of the logistics arrangements. Both parties bear responsibility for transporting the product, including the transportation fee, and the reasonableness of these logistics arrangements influences the acceptance decision.
(9) The urgency of order delivery. Different customers require different delivery urgency, and the manufacturer makes the production plan accordingly: the more urgent the delivery, the higher the order's priority.
Considering these factors, the neural network model uses them as input parameters, which pass through the membership layer and the hidden layer to the output, finally yielding the synthetic value of each order. The decision maker then ranks all production orders according to the final output values of the neural network.

This paper is funded by the National Science Foundation of China (70672078).

3. Order evaluation model based on neural network


3.1 The framework of the artificial neural network
Among the nine input parameters above, some are linguistic variables that are qualitative, such as the relationship with the customer and the urgency of delivery, while others are quantitative, such as the on-time delivery rate and the quality eligibility rate. For quantitative parameters, before the order evaluation the actual value of each order must be normalized into a dimensionless range by membership functions. For qualitative parameters the membership-function method is also used; for example, we use 1, 0.75, 0.5, 0.25, 0 to denote best, better, generic, worse, worst respectively [3]. For this purpose we set up a subjective membership function layer in the neural network and use the membership functions to quantify the qualitative parameters, so that all input parameters are uniform and comparable within the system. The neural network in this paper is thus composed of the input layer, the subjective membership function layer, the hidden (connotative) layer and the output layer, as shown in Fig.1.

[Figure: network structure with inputs x1, x2, ..., xn feeding a membership layer, a hidden (connotative) layer and an output layer]

Fig1. Configuration of artificial neural network
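The grade-to-membership mapping used for the qualitative factors can be written as a small MATLAB helper; the function name here is our own invention, but the five values are those given above:

% Map a linguistic grade to its membership value (best/better/generic/worse/worst).
function m = gradeToMembership(grade)
    grades = {'best','better','generic','worse','worst'};
    values = [1 0.75 0.5 0.25 0];
    m = values(strcmp(grade, grades));   % e.g. gradeToMembership('better') returns 0.75
end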


3.2 The basic deductive flow of the BP neural network
In the artificial neural network above, the first layer is the input layer; since there are nine important factors in all, the number of input nodes is also nine, and these nodes carry the input values.
(1) The first layer normalizes all quantitative input values using the following formula:

$net_i^1 = x_i^1, \qquad y_i^1 = f_i^1(net_i^1), \qquad i = 1, 2, \dots, 9$

$f_i^1(net_i^1) = \begin{cases} (net_i^1 - \min_i)/(\max_i - \min_i), & \max_i > net_i^1 > \min_i \\ 1, & net_i^1 \ge \max_i \\ 0, & net_i^1 \le \min_i \end{cases}$    (1)

The superscripts of $x_i^1$ and $y_i^1$ denote the first layer; they represent the $i$-th input variable and output variable respectively, and $\max_i$ and $\min_i$ are the maximum and the minimum of the $i$-th input variable.
(2) The second layer is the subjective membership function layer. Each node expresses a value of a linguistic (qualitative) variable, and its function is to compute the membership value of that variable; all weights of this layer are 1. Quantitative input values that have already been normalized need no further processing and pass through unchanged.

$net_j^2 = -\dfrac{(x_i^2 - \mu_{im})^2}{(\sigma_{im})^2}, \qquad y_j^2 = f_j^2(net_j^2) = \exp(net_j^2), \qquad j = 1, 2, \dots$    (2)

Here $\mu_{im}$ and $\sigma_{im}$ are the mean and standard deviation of the node's Gaussian membership function.
(3) The third layer is the hidden (connotative) layer; all weights $w_{jk}^3$ of this layer are 1.

$net_k^3 = \sum_j w_{jk}^3 x_j^3, \qquad y_k^3 = f_k^3(net_k^3) = net_k^3, \qquad k = 1, 2, \dots$    (3)

(4) The fourth and final layer is the output layer, which has a single output node.

$net_o^4 = \sum_k w_{ko}^4 x_k^4, \qquad y_o^4 = f_o^4(net_o^4) = net_o^4, \qquad o = 1$    (4)

The $w_{ko}^4$ are the weights to be trained.
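As an illustration of the four-layer forward pass (Eqs. (1)-(4)), the following MATLAB sketch propagates one nine-factor input vector through the network; the sample values, Gaussian parameters and random initial weights are assumptions for demonstration only:

x  = [0.75 0.6 0.5 0.80 0.85 0.84 0.75 0.5 1.0]';   % nine normalized inputs (assumed; Eq. (1) already applied)
mu = 0.5*ones(9,1); sg = 0.3*ones(9,1);             % Gaussian membership parameters (assumed)
y2 = exp(-((x - mu).^2) ./ (sg.^2));                % membership layer, Eq. (2)
K  = 12;                                            % hidden-layer size (the value found best in Sec. 4.2)
W3 = ones(9, K);                                    % fixed unit weights, Eq. (3)
y3 = W3' * y2;                                      % hidden-layer outputs
w4 = rand(K, 1);                                    % trainable output weights, Eq. (4)
y4 = w4' * y3                                       % synthetic evaluation value of the order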

3.3 The learning algorithm of the BP neural network
To solve the order-evaluation problem with the neural network, the weights must be adjusted by learning from training samples so as to obtain a reasonable evaluation result. The BP learning algorithm apportions the error to each node, computing the reference error to adjust the weights by transferring the error from the output layer back to the input layer, layer by layer, until the error reaches a minimum [4]. The BP algorithm adopts steepest descent, modifying each weight along the negative gradient of the error function, which is defined as

$E = \tfrac{1}{2}\big(\hat{y}_o - y_o^4\big)^2 = \tfrac{1}{2} e_o^2$    (5)

where $\hat{y}_o$ is the expected output value and $e_o$ the error of the output. The corresponding learning algorithm is as follows.
(1) The error term of the fourth layer is:

$\delta_o^4 = -\dfrac{\partial E}{\partial net_o^4} = e_o$    (6)

(2) Modification of the weights between the third and the fourth layer follows the rule:

$w_{ko}^4(K+1) = w_{ko}^4(K) - \eta \dfrac{\partial E}{\partial w_{ko}^4}, \qquad \dfrac{\partial E}{\partial w_{ko}^4} = \dfrac{\partial E}{\partial net_o^4}\dfrac{\partial net_o^4}{\partial w_{ko}^4} = -e_o x_k^4$    (7)

where $\eta$ is called the learning speed; in this paper it is set to $\eta = 0.15$.
(3) For the third layer the weights are fixed at 1, so only the error term needs to be calculated:

$\delta_k^3 = -\dfrac{\partial E}{\partial net_k^3} = \sum_o \delta_o^4 w_{ko}^4$    (8)

(4) The error of the second layer is:

$\delta_j^2 = -\dfrac{\partial E}{\partial net_j^2} = \sum_k \delta_k^3 w_{jk}^3\, y_j^2$    (9)

using $\partial y_j^2 / \partial net_j^2 = \exp(net_j^2) = y_j^2$. The gradient formulas for the mean and the standard deviation of each Gaussian node are:

$\dfrac{\partial E}{\partial \mu_{im}} = -\delta_j^2 \dfrac{2(x_i - \mu_{im})}{(\sigma_{im})^2}, \qquad \dfrac{\partial E}{\partial \sigma_{im}} = -\delta_j^2 \dfrac{2(x_i - \mu_{im})^2}{(\sigma_{im})^3}$    (10)

(5) The update formulas for the mean and the standard deviation of the membership layer are:

$\mu_{im}(K+1) = \mu_{im}(K) - \eta_{\mu} \dfrac{\partial E}{\partial \mu_{im}}, \qquad \sigma_{im}(K+1) = \sigma_{im}(K) - \eta_{\sigma} \dfrac{\partial E}{\partial \sigma_{im}}$    (11)

where the coefficients $\eta_{\mu}$ and $\eta_{\sigma}$ are the learning speeds for the mean and the standard deviation of the Gaussian function respectively.
3.4 Using the MATLAB neural network toolbox to optimize the BP neural network
According to the feedback algorithm of the BP neural network, the weights are adjusted continually so that the error between the output value and the expected value reaches a minimum, upgrading the precision of the decision. Meanwhile the paper also considers another issue — the speed of the decision — so that the result is obtained not only accurately but also quickly, making the method better than former methods in work efficiency, workload and other respects. In a neural network, the factor that affects the speed of training and learning is the number of hidden-layer neurons, which also determines the ability of the network to identify small changes in the process: if the number is too small, the network cannot identify small changes effectively; if it is too large, the training time is prolonged [5]. There has been considerable research in this field [6-7]. This paper adopts the MATLAB neural network toolbox to train networks with different numbers of hidden-layer neurons and compares the results to ascertain the best number, so that, under the condition that the network converges, the structure of the network is simple and the speed of training is fast.
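Pulling Eqs. (5)-(11) together, one full gradient step can be sketched without the toolbox as follows; the training pair, initial parameters and the two extra learning speeds eta_mu and eta_sg are illustrative assumptions (only eta = 0.15 comes from the paper):

x = rand(9,1); t = 0.8;                     % one training sample and its expected output (assumed)
K = 12; w4 = rand(K,1);                     % hidden-layer size and trainable weights
mu = rand(9,1); sg = ones(9,1);             % Gaussian means and standard deviations
eta = 0.15; eta_mu = 0.05; eta_sg = 0.05;   % learning speeds (eta from the paper, others assumed)
W3 = ones(9,K);                             % fixed unit weights of the hidden layer
y2 = exp(-((x - mu).^2) ./ (sg.^2));        % forward pass, Eq. (2)
y3 = W3' * y2;                              % Eq. (3)
y4 = w4' * y3;                              % Eq. (4)
e  = t - y4;                                % output error, Eq. (5)
d4 = e;                                     % Eq. (6)
d3 = d4 * w4;                               % Eq. (8), computed before w4 is updated
d2 = (W3 * d3) .* y2;                       % Eq. (9)
gmu = d2 .* (2*(x - mu) ./ sg.^2);          % Eq. (10), gradient w.r.t. the means
gsg = d2 .* (2*(x - mu).^2 ./ sg.^3);       % Eq. (10), gradient w.r.t. the std devs
w4 = w4 + eta * d4 * y3;                    % Eq. (7): minus eta times dE/dw = -d4*y3
mu = mu + eta_mu * gmu;                     % Eq. (11)
sg = sg + eta_sg * gsg;                     % Eq. (11)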

4. Application
A textile mill faces an order selection problem: because the quality of its product is very good, many customer orders arrive at the same time, putting it in a dilemma over how to choose production orders when arranging the production plan. Suppose 5 orders are drawn from the database; we evaluate them using the neural network method and give each order a priority, which helps the manufacturer arrange production orders scientifically. Table 1 lists the original order data.
Tab. 1 Original data of the production orders (attribute of each order under the nine decision factors)

Order number   1        2     3        4      5      6      7        8        9
1              best     1.5   generic  0.80   0.85   0.84   better   generic  high
2              better   1.8   best     0.78   0.89   0.80   generic  worse    generic
3              best     2.0   better   0.90   0.82   0.83   worse    best     generic
4              generic  1.5   worse    0.88   0.80   0.87   best     better   high
5              worse    1.6   best     0.85   0.88   0.85   worse    worse    low

In Table 1 we can see that some order attributes are qualitative and some are quantitative; before they are processed they must be transformed using the formulas in Section 3.2.
4.1 MATLAB program
The MATLAB program code is as below (comments use MATLAB's % syntax):

n = 12;                                            % number of hidden-layer nodes (set per experiment: 16, 12, 8)
P=[0.75,1.5,0.75,0.8,0.85,0.9,0.75,0.75,0.8];      % input
T=[0.75,1.45,0.81,0.83,0.82,0.91,0.87,0.78,0.81];  % output
net=newff(minmax(P),[9 n 1],...                    % n hidden-layer nodes
    {'tansig','purelin','purelin'},'trainlm');     % training function
y1=sim(net,P);
net.trainParam.epochs=2000;
net.trainParam.goal=0.005;
net.trainParam.show=200;
net.trainParam.lr=0.05;
net=train(net,P,T);
y2=sim(net,P);
error=y2-T;                                        % error
res=norm(error);
figure;
plot(P,T,'r+');                                    % figure show
hold on
plot(P,y1,'g-');
hold on
plot(P,y2,'b--');
title('the output result of network after training');

In this case, with all other conditions unchanged, changing the number of hidden-layer nodes and optimizing the structure of the neural network accelerates the training and gauges the output result.
4.2 Neural network training experimentation
To train the neural network to fit the requirement, the MATLAB program was run with different numbers of hidden-layer neurons, giving the following results:
(1) K=16

TRAINLM, Epoch 0/2000, MSE 3.86663/0.005, Gradient 177.927/1e-010
TRAINLM, Epoch 1/2000, MSE 0.00338366/0.005, Gradient 1.66352/1e-010
TRAINLM, Performance goal met.
The result is shown in Fig.2.

Fig.2 Neural network training result with K=16

(2) K=12
TRAINLM, Epoch 0/2000, MSE 3.36876/0.005, Gradient 64.2373/1e-010
TRAINLM, Epoch 2/2000, MSE 0.000957722/0.005, Gradient 0.125241/1e-010
TRAINLM, Performance goal met.
The training result is shown in Fig.3.

Fig.3 Neural network training result with K=12

(3) K=8
TRAINLM, Epoch 0/2000, MSE 3.09546/0.005, Gradient 79.1297/1e-010
TRAINLM, Epoch 2/2000, MSE 0.00105795/0.005, Gradient 0.283564/1e-010
TRAINLM, Performance goal met.
The training result is shown in Fig.4.


Fig.4 Neural network training result with K=8

From the training experiment results, the training error is smallest and the training time shortest when the number of hidden-layer neurons is 12. We therefore set 12 as the number of hidden-layer neurons when designing the neural network, which is best for the decision model.
4.3 Production order evaluation result using the neural network method
Using the trained network above to process the original data, the order evaluation result is easy to obtain: the output of the network is the priority value of each order. The computed values are 0.721, 0.701, 0.781, 0.685 and 0.615 respectively. The basic rule is that the larger the value, the higher the priority of the order, so the final sequence of production orders is 3 > 1 > 2 > 4 > 5. Using this sequence, the manufacturer can make a reasonable production plan for the customer orders.
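The final ranking is simply the descending sort of the five output values; in MATLAB, for example:

v = [0.721 0.701 0.781 0.685 0.615];   % network outputs for orders 1..5
[~, order] = sort(v, 'descend')        % order = 3 1 2 4 5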

5. Conclusion
With the drastic competition of the global market, enterprises face a buyer's market that changes quickly and cannot be forecast, and the conventional pattern of production and operation responds to market changes more and more slowly [8]. How to enhance the agility of enterprise operations and respond quickly to customer requirements is becoming more and more important for manufacturers. Based on the BP neural network learning algorithm, production order evaluation is easy to carry out and yields a satisfactory order priority result. The neural-network-based evaluation method can also support sensitivity analysis and eliminate spurious analyses of the problem [9]. The application case shows that the method is reasonable and feasible.
References

[1] Chen Zhixiang. Improved algorithms of ELECTRE for production order evaluation. Group Technology & Production Modernization, 2005, (2):19-21
[2] Zhu Shijing, Chen Ting. A neural network based model for multi-criteria comprehensive assessment. Systems Engineering — Theory & Practice, 1994, (9):74-80
[3] Feng Chenming, Fang Deying. A multi-criteria comprehensive assessment for neural network method. Modern Management Science, 2006, (3):61-62
[4] Zhang Jili. Fuzzy Neural Network Control Theory & Engineering Application. Harbin: Harbin Institute of Technology Press, 2004
[5] He Zhen, Liu Dongsheng. Using neural networks to control auto-correlated process. Industrial Engineering Journal, 2006, 9(6):85-90
[6] Cheng C.S. A multi-layer neural network model for detecting changes in the process mean[J]. Computers & Industrial Engineering, 1995, 28(1):51-61
[7] Cheng C.S. A neural network approach for the analysis of control chart patterns. International Journal of Production Research, 1997, 35(3):667-697
[8] Fan Bo. Research on demand management of productive corporation. Journal of Chongqing University (Social Sciences Edition), 2002, 8(5):33-35


Distribution the TOC Way: Review and a Case Study


Cui Nanfang, Leng Kaijun
School of Management, Huazhong University of Science and Technology, P.R.China, 430074

Abstract  Supply chain management is an essential part of a firm's management, and distribution management is one of the most important links in the chain. Traditional distribution management is push-style, depending completely on the accuracy of market demand forecasts, which are not very reliable; firms need a new way to solve these problems. In this paper we introduce the TOC solution for distribution management and present a European fashion company as an example.
Key words  Supply chain, Distribution, TOC

1. Introduction
Supply chain management has evolved over the latter half of the last century. During that time the business landscape changed considerably, and supply chain management evolved in parallel with these influences. By the new millennium, mass customization, increased customer expectations and fiercely intense competition characterized the marketplace, and chain-versus-chain competition became a significant factor in marketplace success [1]. As organizations shifted toward optimizing the extended enterprise in an increasingly dynamic business environment, supply chain management shifted its focus to inventory visibility in the chain. More readily observable than other parameters, inventory is an important indicator of system performance and of the impact of uncertainties from various sources as organizations continually refine the management of the supply chain. Regardless of the maturity of the approaches developed, managing inventory throughout the chain remains a critical competitive factor, so continual refinement of the strategies used to manage and reduce inventory in the chain is essential. Before our discussion we should first clarify the distribution boundaries, which we define as follows: all activities and functions related to making finished-good products available to the market and raw materials available to production, including subcontracting. The shaded part of Figure 1 marks the boundary of the discussion:

[Figure: supply chain from suppliers through raw materials (RM), operations and finished goods (FG) to customers, spanning raw material, intermediate and finished goods stock management]

Fig.1 Boundary of the discussion

As we need a reference standard for the quantitative comparison of properties, we should be clear about distribution performance measures. The most commonly used performance measures in distribution are due date performance, availability, lead time, inventory turns, operating cost and quality. There are many issues in distribution systems, and many scholars are engaged in research in this area [2][3]; still, we found few studies in China concerned with distribution using the TOC (Theory of Constraints) method. Therefore, the purpose of this paper is to introduce this powerful and effective method as a tool to resolve the problems that exist in distribution systems, and we hope the paper will be helpful to firms in China that want higher performance in supply chain management.

2. Problem statement and methodology


According to our investigation, one of the typical problems in distribution systems can be described as follows. The plant first produces the number of units of product indicated by the initial forecast for every market in every region over a given period (typically a month), then pushes them into the distribution channel — regional warehouses, brand shops, etc. However, the real market demand always differs from the estimate, and demand differs widely between markets and places, so overstock and shortage occur at the same time. This typical phenomenon is clearly shown in Fig.2.

[Figure: forecast-driven push from production (forecast 1600, 1000 pushed) through distribution to end users whose actual demand diverges, leaving an overstock of 500 at some locations and shortages of 500 at others]

Fig.2 Typical problem of distribution

Once we have realized that improvement of the current distribution system is indeed needed, the very first step is to clarify the current situation. People usually have very good intuition: when dealing with a system whose performance we want to improve, we think about what can be done to make it better, and we tend to collect observations or complaints — from our own thoughts or from what others say — that we believe are the major problems of the system. These symptoms or complaints tell us that the system is sick or, put more plainly, performing poorly. In TOC, such negative effects currently experienced by you and other stakeholders (e.g., your customers, shareholders) are called UnDesirable Effects (UDEs) [4]; they are undesirable in relation to the goal and vision of the defined system — here, the distribution system. Dr. Goldratt and Eli Schragenheim have already collected the UDEs of distribution systems, shown in the following table [5]:
Tab.1 Possible UDEs in the distribution system

Distribution possible UDEs:
- Priorities constantly shift
- Too many shortages
- Too many inventory items have too high inventory levels
- Too many inventory items are obsolete
- The level of dead inventory is too high
- There are too many cross-shipments
- There are too many customer returns
- Delivery costs are too high
- Too often there is a need for an urgent delivery

Other functions' possible UDEs:
- Lost sales
- Reduced profit
- Too low return on inventory
- There is constant shift in priorities (in operations)
- Lost customers
- Introduction of new products is often delayed

Actually, no matter how complex a distribution system is, its UDEs fall within the scope of Table 1, as many experienced consultants have confirmed in practice in the Viable Vision projects of the Goldratt Schools. We can then use the TOC Thinking Process to find the core problem of the current distribution system; the core conflict cloud is as follows:

[Figure: core conflict cloud — profitable distribution requires both reducing cost, which implies holding less inventory, and protecting sales, which implies holding more inventory: the dilemma]

Fig.3 The dilemma of distribution

After finding the core problem of the distribution system, we take a logical analysis of the core cloud and try to challenge every assumption behind the cause-effect relationships in the cloud in order to find a proper injection that resolves the problem; these assumptions can point to the direction of the solution. Here we find that "in order to protect sales we need to hold high inventory" rests on three assumptions: (a) replenishment time is too long; (b) suppliers are not always reliable; (c) forecasts are inaccurate. The significant issue, then, is how to operate according to a much more accurate forecast, with much reduced replenishment time and increased re-supply reliability, without introducing a new and more accurate forecasting system, increasing investment in re-supply, or replacing or re-educating vendors/suppliers. Furthermore, we must be aware that the reality of the distribution environment is that sales (or product consumption) take place remote from production, and the buyer's tolerance time is (much) shorter than the time it takes production and shipping to make the product available at the point of sale or consumption.

3. Direction of the solution


Regarding replenishment time, we first need to know what drives it. The answer is batching. In the supply chain, order entry departments often batch because placing an order takes time; operations like to save set-ups, so orders wait until a batch is big enough; and shipping departments too often batch orders to make shipments as cheap as possible. Replenishment lead time thus equals order lead time plus production lead time plus transportation lead time — the total lead time — and a long replenishment lead time requires high inventories. Furthermore, the longer the replenishment time, the slower the system can react and the more its performance depends on forecast accuracy. As for forecast accuracy, forecasting does not work effectively because end-user demand is volatile, further variability arises throughout the supply chain, fluctuations in supply are ruled by Murphy, and long lead times create too many interdependencies. Things then often behave as in Fig.2, with its UDEs: not enough inventory of fast-moving products and too much inventory of slow-moving products. How to deal with this? Aggregation is a statistically effective way to reduce forecast fluctuation, so it is the direction of our solution. Concerning supplier reliability, the direction of our solution is the ability to adapt to changes in consumption: rather than relying on forecasts, we use the information we have about the adequacy of current inventories.
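To see why aggregation dampens forecast error: for n stores with independent demand streams, the standard deviation of total demand grows with the square root of n while the mean grows with n, so relative variability falls by a factor of about sqrt(n). A quick MATLAB check (the store count and demand parameters are arbitrary assumptions):

% Relative demand variability at one store vs. aggregated at a central warehouse.
n = 25;                          % number of stores (assumed)
weeks = 10000;                   % simulated weeks
d = 100 + 30*randn(weeks, n);    % weekly demand per store: mean 100, std 30 (assumed)
cv_store = std(d(:,1)) / mean(d(:,1));   % coefficient of variation, single store
agg = sum(d, 2);                          % central-warehouse (aggregated) view
cv_agg = std(agg) / mean(agg);            % aggregated coefficient of variation
fprintf('CV per store: %.3f, CV aggregated: %.3f (ratio about 1/sqrt(n) = %.3f)\n', ...
        cv_store, cv_agg, 1/sqrt(n));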

4. TOC Solution
By now we know what to change and what to change to; next we need to know how to cause the change. The TOC solution is as follows (a sketch of the zone monitoring in step d is given after Fig.4):
a. Establish the plant (central) warehouse (effect shown in Fig.4);
b. At each place and for each product, establish the inventory target according to the formula;
c. Move to "order daily, replenish periodically";
d. Monitor the inventory targets according to the zones;
e. Re-examine the make-to-stock/make-to-order policies;
f. Educate the sub-systems to monitor execution using dollar-days measurements;
g. Use the buffer management statistics to create the POOGI (process of ongoing improvement).

Fig.4 Effect of aggregation
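As a rough illustration of zone-based monitoring (step d above), the following MATLAB sketch divides a buffer target into red/yellow/green thirds and adjusts the target when on-hand stock lingers too long in the outer zones; the thresholds, adjustment factors and data here are our own assumptions, not values from the paper:

% Dynamic buffer management sketch (assumed rule: adjust after 5 consecutive
% days in the red or green zone, by one third of the target).
target = 90;                                % initial inventory target (assumed)
onhand = [55 40 28 25 22 18 15 29 31 30];   % daily on-hand stock (assumed)
daysRed = 0; daysGreen = 0;
for d = 1:numel(onhand)
    if onhand(d) < target/3                 % red zone: risk of stock-out
        daysRed = daysRed + 1; daysGreen = 0;
    elseif onhand(d) > 2*target/3           % green zone: excess stock
        daysGreen = daysGreen + 1; daysRed = 0;
    else                                    % yellow zone: buffer is healthy
        daysRed = 0; daysGreen = 0;
    end
    if daysRed >= 5                         % too long in red: raise the target
        target = round(target * 4/3); daysRed = 0;
    elseif daysGreen >= 5                   % too long in green: lower the target
        target = round(target * 2/3); daysGreen = 0;
    end
end
fprintf('adjusted inventory target: %d\n', target);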

For factory capacity, the possible injections are: all decisions about accepting substantial increases in demand are evaluated through buffer management; the TOC solution for production — DBR and buffer management — is implemented if not already in place; and all process-improvement efforts are focused through buffer management. We need to understand that each link must subordinate its efforts to the overall success of the chain as a whole, so once implementation starts many paradigm shifts are required: for instance, we must stop attempting to forecast the likely demand for each product, because it wastes time and energy, and we must stop the push and long-run syndromes as well as the economic-batch-size and economic-order-size syndromes. In this way the supply chain (a push system) is changed into a demand chain (a pull system). An example using this method is given in the following section.

5. An example
In this section we use WE, a European fashion retailer that has implemented the TOC methods in its supply chain management, as a case study. WE Europe is a medium-size fashion retailer with its European headquarters in Utrecht, the Netherlands, active in six European countries — the Netherlands, Belgium, Luxembourg, France, Germany and the Czech Republic. It owns 219 stores in these countries, has 2500 employees (1200 FTE), and introduces 7000 new options per year. Its business process is shown in Fig.5. WE used to implement the traditional supply chain management approach as a push system, as shown in Fig.6; naturally they met the same typical problems as many other companies, and distribution performance was low. They summarized the situation as "There is stock, but not in the right location — need for change!", which made WE's executives decide to change the existing system.


Fig.5 WEs business process

Fig.6 Overview of WEs supply chain in past (Simplified)

They then set their objective: improve economic profit by decreasing lead time in the supply chain, so that a faster and more efficient reaction to customer demand can be realized. They defined five sub-projects (which are in fact the TOC solution mentioned above): build (season) collections and themes step by step; realize more flexibility with supplier contracts; introduce rolling budgets; measure KPIs (with focus on cash generation, not cost reduction); and improve distribution from the DC to the shops. The last two sub-projects are today's focus. The objective of the last sub-project is to improve stock availability in the shops without increasing the stock level, thereby reducing stock-outs and markdowns; the starting point is "no forecast, but a responsive supply chain", since forecasting at SKU-location level is extremely difficult with sales of 2-4 pieces per week at style/color level and a life cycle of about 12 weeks. The specific changes of the TOC solution are as follows. In business, WE realized a responsive supply chain process: first, leverage aggregation of stock, which moves the value-offering point upstream and keeps stock upstream in the EDC; second, use integrated measurements, introducing operational KPIs focused on lost-sales reduction and increased inventory turnover; third, reduce lead time — order daily but replenish frequently; and last but not least, eliminate oscillation by retrieving transaction data from the stores to drive true demand. In organization, WE restructured the planning department to create a clear split between pre-season planning by (tactical) planners and in-season planning by (operational) traders. In systems, WE supported the business process with a responsive supply chain solution based on dynamic buffer management and integrated measurements ("Viva cadena"). With these changes applied in business, organization and systems, WE successfully realized a responsive supply chain, shown in simplified form in Fig.7.

Fig.7 Overview of WEs supply chain at present (Simplified)

After the implementation of the TOC solution, WE's performance improved significantly, as WE calculated: lost sales decreased by 20% compared with last year (lost sales are computed as products not available in the store, taking into account a substitution factor of 50%); a 10% turnover increase is directly related to the responsive supply chain; the average stock level in the stores decreased, with additional spin-offs not yet realized; stock turnover increased; and fewer markdowns increased profitability. All in all, WE successfully achieved their goal.

6. Conclusion
TOC is a continually developing method in many fields of management. From our introduction one can recognize a powerful and effective tool for distribution management, and through the case introduced above we may foresee the future of many firms' supply chain management done the TOC way, especially in China; we would like to see many more successful cases like WE appear in the near future through the implementation of TOC. The discussion here has focused on admittedly simple scenarios for the sole purpose of applying the TOC method in supply chain management, especially distribution management. As many scholars have discussed, real supply chains present a significantly more complex and intricate environment [6]-[8], so an obvious area for future research is the development of case studies that detail the application of these TOC principles and analyze the performance of constraint-based methods in the chain environment. There are also many areas that directly impact the supply chain objectives discussed, including: (1) supply chain layout considerations; (2) pipeline design; (3) the impact of control policies (order quantities, reorder points and safety stocks) in a chain employing constraint-based control methods; (4) effective control mechanisms for serial assembly operations in the supply chain; and (5) effective blending of the TOC concepts for enhanced chain performance, comparing the performance of such a system against others based only on the TOC approach. While some of these topics have been addressed to varying degrees in the literature, it would be both interesting and beneficial to research their impact in a chain where constraint-based methods form the basis of overall system control; this would be particularly interesting in a chain that blends the TOC concepts to optimize the chain's performance.
References

[1] Bernstein F., A. Federgruen. Pricing and Replenishment Strategies in a Distribution System with Competing Retailers. Operations Research, 2003, 51:409-426
[2] Bitran G., R. Caldentey. An overview of pricing models for revenue management. Manufacturing & Service Operations Management, 2003, 5:203-229
[3] Suwanruji P. & Enns S.T. Information and logic requirements for replenishment systems. International Journal of Operations and Quantitative Management, 2002, 8:149-164
[4] Blackstone J.H. Theory of constraints — a status report. International Journal of Production Research, 2001, 39:1053-1080
[5] Schragenheim E. & Dettmer H.W. Manufacturing at Warp Speed. CRC Press LLC, 2001:175-207
[6] Layden J. Thoughts on supply-chain management. Manufacturing Systems, 1998, 16(3):80-88
[7] Donovan M. Is manufacturing a weak link in your supply chain? Industrial Management, 1996, 38(6):13
[8] Evans G.N., Towill D.R. and Naim M.M. Business process re-engineering the supply chain. Production Planning & Control, 1997, 6(3):227-237


Research on Distribution of Large-Scale Chain Enterprises with Time Windows
Da Qingli, Zhang Heng
School of Economics and Management, Southeast University, China, 210096

Abstract  A multi-depot heterogeneous-fleet vehicle routing model with time windows and simultaneous delivery and pick-up is presented for large-scale chain enterprises. The goal is to minimize the total cost of performing the pick-up and delivery services of all customers. Five types of costs are considered in the objective function: fixed costs for used vehicles; distance costs and travel time costs along the selected routes; waiting time costs and penalty costs due to time-window and working-time violations; and profit due to pick-up. To solve the model, a three-phase hybrid genetic approach is introduced, and a computational example illustrates the model and the approach.
Key words  Delivery, Pick-up, Large scale, Genetic algorithm, Chain enterprise

1. Introduction
There are two forms of distribution for chain enterprises: some enterprises outsource the work to third-party logistics providers; others have their own distribution centers and vehicles and deliver themselves. Distribution center planning and routing optimization can always enhance competitiveness and create value for the company. In this paper the second form of distribution is discussed. Generally speaking, the number and the location of distribution centers as well as the number of vehicles are given; the planner must specify which vehicle services which customers, and in what sequence, so as to minimize the total cost. In fact, although companies try their best to deliver the right product to the right customer at the right time, there are also returned products, due to product quality or other causes. Considering pick-up when optimizing the distribution routing enhances vehicle utilization and lowers cost, which is the aim of this paper. This class of logistics problems is usually known as the vehicle routing problem (VRP). After Dantzig and Ramser presented it in 1959, several classes of vehicle routing problems were studied in the literature, notably the CVRP (vehicle routing problem with capacity constraints) and the VRPTW (vehicle routing problem with time windows), in which each customer has an associated time window defined by the earliest and the latest time to start the customer service; time windows can be hard or soft. Currently, VRPTW optimization models are useful for a variety of practical applications, yet some practical issues have not been addressed. Dondo and Cerda (2007) present a novel three-phase heuristic/algorithmic approach for the multi-depot routing problem with time windows and heterogeneous vehicles [1], but they do not consider pick-up. Min (1989) first presents the vehicle routing problem with simultaneous deliveries and pick-ups (VRPSDP) [2]. Halse (1992) advances a heuristic method to solve the VRPSDP model [3]. Angelelli and Mansini (2002) present a column generation method for the VRPSDP with time windows [4]. Dethloff (2001) compares the VRPSDP with other vehicle routing problems and develops a heuristic algorithm for it [5]. Ropke and Pisinger (2006) review the existing models and are the first to present a unified heuristic for a large class of vehicle routing problems with backhauls, solved through an improved version of the large neighborhood search heuristic [6]. Bianchessi and Righini (2007) present and compare constructive algorithms, local search algorithms and tabu search algorithms [7]; in particular, they address the issue of applying the tabu search paradigm to algorithms based on complex and variable neighborhoods, and propose several heuristic algorithms for the VRPSPD with indivisible demands, mixed pick-ups and deliveries, and both simple and composite demands. Zhang Jianyong and Li Jun (2006) give a hybrid genetic algorithm for the VRPSDP [8]. Qu Zhiwei et al. (2004) analyze the large-scale delivery/collection problem with a three-phase solution framework: first, the customers are segregated into districts according to the main road grid

This research has been supported by National Natural Science Funds of China (The Study on Reverse Logistics System Structure of Assembling Electronic Products in Network Circumstance,No:70472033)


system; then the customer districts are assigned to vehicles using the vehicle flow formulation model and a combined savings/3-opt algorithm; finally, the vehicle routes are determined as traveling salesman problems [9]. However, they assume a fixed fleet of vehicles of uniform capacity housed in a central depot. Gendreau et al. (1999), Ghaziri and Osman (2003), and Süral and Bookbinder (2003) study the single-vehicle VRPB [10-12]. Based on the practical situation, we consider the multi-depot VRPSDP with time windows and heterogeneous vehicles, and we present a novel three-phase hybrid genetic approach to solve the model. The remainder of the paper is organized as follows. Section 2 presents a comprehensive description of the VRPSDP with time windows (VRPSDPTW) and formulates the model. Section 3 develops a hybrid approach for the large-scale VRPSDPTW. Finally, in Section 4 some experiments are carried out to evaluate the computational efficiency of the proposed approach, and we conclude with a summary.

2. The multi-depot heterogeneous VRPSDP


2.1 Hypotheses
(i) Each route must start and end at the same depot; (ii) each node must be serviced by exactly one vehicle; (iii) the total load assigned to vehicle v must never exceed its cargo capacity $q_v$, and when v leaves a customer its load, including picked-up goods, must also never exceed $q_v$; (iv) the length of time during which a vehicle v stays in service must be shorter than the maximum allowed working time $tv_v^{max}$; (v) the delivery and pick-up service at every customer must start within the time window, since otherwise a penalty cost is charged.
2.2 Definitions
We consider the problem of daily delivery of goods from multiple depots to a number of geographically dispersed customers. The aim is to design the routes for serving customers, i.e. to determine the ordered sequence of locations on each vehicle route and the arrival and departure times at each customer location. Consider a routing network represented by an undirected graph $G = \{I, P, A\}$ connecting customers $I = \{i_1, i_2, \dots, i_n\}$ and distribution centers $P = \{p_1, p_2, \dots, p_l\}$ through a set of undirected edges $A = \{(i,j) \mid i, j \in (I \cup P)\}$. At each customer location $i \in I$, a fixed load $w_i$ is to be delivered and a fixed load $u_i$ is to be picked up. The service must begin within a time window $[a_i, b_i]$, where $a_i$ is the earliest and $b_i$ the latest start time. A fleet of heterogeneous vehicles $V = \{v_1, v_2, \dots, v_m\}$ with different cargo capacities $q_v$ is housed at the distribution centers. Associated with the set of edges $(i,j) \in A$ is a pair of vehicle-dependent matrices $C^v = \{c_{ij}^v\}$ and $T^v = \{t_{ij}^v\}$ denoting the travel cost and the travel time from node i to node j, which satisfy the triangle inequalities $c_{ik} + c_{kj} \ge c_{ij}$ and $t_{ik} + t_{kj} \ge t_{ij}$. The goal is to minimize the total cost of performing the pick-up and delivery services at all customers; the cost components in the objective function are fixed costs for used vehicles, distance costs and travel time costs along the selected routes, waiting time costs, and penalty costs due to time-window and working-time violations [1].
2.3 Parameters and decision variables
To formulate the model we define the following parameters: $C_v^f$ is the fixed cost of using vehicle v; $c_t$ the labor cost per unit time; $\rho_v$ the penalty cost per unit time when v works overtime; $\alpha_i$ the penalty cost per unit waiting time and $\beta_i$ the penalty cost per unit lateness ($\alpha_i < \beta_i$: considering the quality of service customers receive, if the vehicle arrives late, customers may be disappointed because the goods they wanted have sold out, which harms the image of the chain store and even the whole enterprise); $l'_v$ the load of v when it leaves the distribution center; $l_i$ the load after serving customer i; $l''_v$ the load of v when it returns to the distribution center; $a_i$ and $b_i$ the time-window boundaries of customer i; $\Delta T_v$ the overtime of vehicle v; $r_0$ the profit per unit of picked-up load; $C_i$ and $T_i$ the cost and the time associated with traveling from the distribution center to customer i.
The proposed mathematical formulation requires three different sets of variables: (1) the assignment variable $Y_{iv}$, allocating vehicle v to customer site i; (2) the assignment variable $X_{pv}$, allocating vehicle v to distribution center p; (3) the sequencing variable $S_{ij}$, denoting that customer site i is visited before j ($S_{ij} = 1$) or after it ($S_{ij} = 0$); otherwise $S_{ij} = M$, where M denotes a large number.

2.4 Model
Given the above-defined goals and variables, the problem can be formulated as follows:

$\min z = \sum_{v \in V} \Big( C_v^f \sum_{p \in P} X_{pv} + c_t\,TV_v + CV_v - r_0\,l''_v + \rho_v\,\Delta T_v \Big) + \sum_{i \in I} \big( \alpha_i \max(a_i - T_i, 0) + \beta_i \max(T_i - b_i, 0) \big)$    (1)

subject to

$\sum_{v \in V} Y_{iv} = 1, \quad \forall i \in I$    (2)

$\sum_{p \in P} X_{pv} = 1, \quad \forall v \in V$    (3)

$l'_v = \sum_{i \in I} \sum_{j \in I} Y_{jv}\,w_j\,f(S_{ij}), \quad \forall v \in V, \qquad f(x) = \begin{cases} 1, & x \le 1 \\ 0, & x > 1 \end{cases}$    (4)

$q_v \ge l'_v - w_j + u_j - M(1 - Y_{jv}), \quad \forall j \in I$    (5)

$q_v \ge Y_{iv} Y_{jv} (l_i - w_j + u_j) - M\big(1 - g(S_{ij}) f(S_{ij})\big), \quad \forall i, j \in I,\ i \ne j, \qquad g(x) = \begin{cases} 1, & x = 1 \\ 0, & x \ne 1 \end{cases}$    (6)

$l'_v \le q_v, \quad \forall v \in V$    (7)

$l_j \le q_v, \quad \forall j \in I$    (8)

$C_i \ge c_{pi}^v (X_{pv} + Y_{iv} - 1), \quad \forall i \in I,\ p \in P,\ v \in V$    (9)

$C_j \ge C_i + c_{ij}^v - M(1 - S_{ij}) - M(2 - Y_{iv} - Y_{jv}), \quad \forall i, j \in I,\ v \in V,\ i < j$    (10)

$C_i \ge C_j + c_{ji}^v - M S_{ij} - M(2 - Y_{iv} - Y_{jv}), \quad \forall i, j \in I,\ v \in V,\ i < j$    (11)

$CV_v \ge C_i + c_{ip}^v - M(2 - Y_{iv} - X_{pv}), \quad \forall i \in I,\ v \in V,\ p \in P$    (12)

$T_i \ge t_v + t_{pi}^v (X_{pv} + Y_{iv} - 1), \quad \forall i \in I,\ p \in P,\ v \in V$    (13)

$T_j \ge T_i + st_i + t_{ij}^v - M(1 - S_{ij}) - M(2 - Y_{iv} - Y_{jv}), \quad \forall i, j \in I,\ v \in V,\ i < j$    (14)

$T_i \ge T_j + st_j + t_{ji}^v - M S_{ij} - M(2 - Y_{iv} - Y_{jv}), \quad \forall i, j \in I,\ v \in V,\ i < j$    (15)

$TV_v \ge T_i + st_i + t_{ip}^v - M(2 - X_{pv} - Y_{iv}), \quad \forall i \in I,\ v \in V,\ p \in P$    (16)

$a_i - \Delta a_i \le T_i, \quad \forall i \in I$    (17)

$T_i \le b_i + \Delta b_i, \quad \forall i \in I$    (18)

$TV_v \le tv_v^{max} + \Delta T_v, \quad \forall v \in V$    (19)
The objective function (1) aims to minimize the overall service expenses, including fixed vehicle utilization costs $\sum_{v \in V} C_v^f \sum_{p \in P} X_{pv}$, traveling costs $\sum_{v \in V} CV_v$, time costs $\sum_{v \in V} c_t\,TV_v$, pick-up profit $-\sum_{v \in V} r_0\,l''_v$, overtime costs $\sum_{v \in V} \rho_v\,\Delta T_v$ and time-window penalty costs $\sum_{i \in I} \big( \alpha_i \max(a_i - T_i, 0) + \beta_i \max(T_i - b_i, 0) \big)$.

Assignment of nodes to vehicles: Eq. (2) states that every customer node i must be serviced by a single vehicle v; splitting the load to be picked up from a customer site is forbidden. Assignment of vehicles to depots: Constraint (3) states that every used vehicle v is allocated to a single depot p, to which it returns after visiting all its assigned customers. The required fleet size is a problem variable, determined simultaneously with the best set of routes and schedules. Load of vehicle: Eq. (4) states that the load of every used vehicle when it leaves the distribution center equals the sum of the delivery demands of its customers. Load constraints: Constraints (5)-(8) state that the load of every used vehicle after it finishes the service at customer i (delivering goods and picking up returns) must never exceed its cargo capacity $q_v$.
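As a quick numeric illustration of this load logic (all values assumed for illustration): with $q_v = 180$ and departure load $l'_v = 150$, serving a customer with delivery $w_j = 30$ and pick-up $u_j = 40$ leaves an on-board load of $150 - 30 + 40 = 160 \le 180$, so constraint (5) is satisfied; with a pick-up of $u_j = 70$ the load would reach $150 - 30 + 70 = 190 > 180$, and assigning that customer to the vehicle ($Y_{jv} = 1$) would be infeasible.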

Least traveling cost for vehicle v to arrive at node i: Constraint (9) states that the cost of traveling from depot p to node i must be at least $c_{pi}^v$ whenever node i is serviced by vehicle v ($Y_{iv} = 1$) housed at depot p ($X_{pv} = 1$). Relationship between traveling costs up to nodes i, j on the same tour: let $c_{ij}^v$ denote the least travel cost from node i to node j by vehicle v. If both nodes are on the same tour and node i is visited before j ($S_{ij} = 1$), constraint (10) states that the distance-based travel cost from the depot to node j ($C_j$) must exceed $C_i$ by at least $c_{ij}^v$, and constraint (14) states that the service starting time at node j ($T_j$) must exceed $T_i$ by at least the sum of the service time $st_i$ at node i and the traveling time $t_{ij}^v$. If node j is visited earlier ($S_{ij} = 0$), statements (11) and (15) hold instead. If i and j are on different tours, constraints (10), (11), (14) and (15) are redundant. Overall traveling cost along the tour assigned to vehicle v: Constraint (12) states that the overall traveling cost $CV_v$ incurred by vehicle v to complete its tasks must exceed the traveling expense from the depot to any customer i on the tour by at least $c_{ip}^v$. Earliest service starting time at customer i: Constraint (13) states that vehicle v never begins service at its assigned node i before time $t_v + t_{pi}^v$, where $t_v$ denotes the time at which vehicle v leaves the distribution center. Overall traveling time for vehicle v: Constraint (16) indicates that the total time $TV_v$ required by vehicle v to complete the tour is found by adding the service times $st_i$ and the travel times $t_{ij}^v$ along the tour. Time windows can be hard or soft; here they are treated as soft constraints that can be violated at a finite cost, so the vehicle may start service at node i before time $a_i$ or after time $b_i$. In that case the constants $\Delta a_i$ and $\Delta b_i$ bound the allowed violation of the time window (constraints (17)-(18)). Constraint (19) likewise treats the maximum allowed working time $tv_v^{max}$ as a soft constraint that can be violated at some penalty cost.

3. A three-phase hybrid genetic approach for large-scale VRPSDPTW


VRP is NP-hard problem, and becomes more complex for larger scale and more constrains. To solve the problem many approaches are presented by researchers, which can be categorized two classes: (i) exact approaches, such as dynamic programming techniques, Lagrangian relaxation methods, column generation algorithms ;(ii) Heuristic approaches, such as simulated annealing, tabu search, colony optimization and genetic algorithms. There is no doubt that the multi-depot heterogeneous fleet VRPSDPTW is very difficult to solve by a pure optimization approach .So we construct a hybrid genetic approach solving the problem based on cluster-first/route-second philosophy: First, in order to find a good set of clusters, each one comprising several customer sites, without relying on routing information, we expand the heuristic algorithm by Ref.[1]; Second, we allocate the vehicles to distribution center according to the total distance from depot to customers; Finally we introduce a genetic algorithm to optimize the problem. 3.1 Clusters (i) Open a list of customers L and sort them by increasing values of the earliest arrival times ai ; if some customers have the same ai , sort them by increasing values of the latest arrival times bi ;Open a list of available vehicles V and arrange them by decreasing values of the ratio (q v / c vf ) ; set d max = avr (d ij ) ; = max( st i ,0.05t vmax ) , denotes the maximum allowed waiting time . (ii) (Program loop) Open an empty list K n linked to the next cluster Part n . Assign the top entry of list V to cluster Part n and delete it from V, set q v' = q v ; If V is empty, that means each vehicle has been assigned to task, find the nearest

Part i , q v' = q v Yiv wi .


iI

(iii) Pick up the top node i on the list L ,and place it at the bottom of list K n .Initialize the parameters of 155

cluster Part n :
ai aPart n , bi bPart n , wi wPart n , st i stPart n , u i uPart n

Then delete customer i from list L, and make a copy of the current lists L as L1. (iv) Pick up the top node j from list L1, and check wPart n + w j qv' ,if it is true ,go to step (v) else delete j from list L1,repeat step (iv). (v) Compute the distance d ij between customer j and its nearest node i on the list K n .Check d ij d max ; if the expression is true , go to step (vi); else delete customer j from L1 and return to step (iv). v (vi) Verify the constraint: aPart n + stPart n + t ij max(bPart n , b j ) ; if the expression is true, go to step (vii), else return to step (iv). v (vii) Check the expression: aPart n + stPart n + t ij + a j ; if it is true ,go to step (viii), else delete L1and save the list K n and define Part n , return to step (iv). (viii) Place node j at the bottom of list K n and update the parameters for cluster Part n as v follows: wPart n + wi wPart n ; max( aPart n + stPart n + t ij , a j + st j a i ) stPart n ; uPart n + u i uPart n If ( bPart n > b j ) is true, then b j bPart n , delete customer j from lists L and L1, and go to step(ix). (ix) If list L1 is empty, save the list

K n defining the cluster Part n and proceed to step(x), else return to

step (iv). (x) Repeat steps (ii)-(ix) until L is empty. (xi) Calculate the cluster cancroids as well as the time and the distance between any pair of clusters defined by the algorithm. 3.2 Assigning vehicles to distribution centers Calculate average cost avr ( Yiv cip ) for p P , pick-up the smallest, so the vehicle v assign to distribution
iI

center p. In this case, some distribution centers may useless. After phase 3.2,the following tasks have been completed: (i) assignment of customer nodes via clusters to vehicles;(ii) allocation of used vehicles to depots; (iii) discovery of a near-optimal set of cluster-based tours and (iv) the sequence of clusters on the same tour that indirectly provides a partial arrangement of the customer sites visited by the same vehicle. 3.3 Optimizing the single tour-scheduling Our aim in this phase is to order nodes within clusters, schedule the service start times at the customer locations for every tour, and find the better solution. So we introduce an improved genetic algorithm (GA). 3.3.1 Chromosome representation v v I iv denotes the order number of customer i serviced by vehicle v, so ( I xo , I iv ,, I yo )represent the route of v. After the route fixed, we can determine the optimization begin time Tv by enumeration because Tv must lower than max (Yiv bi ), i I . 3.3.2 Fitness function We verify each constraints, if a constraint is dissatisfied, set F =0; else

3.3 Optimizing the single tour-scheduling
Our aim in this phase is to order the nodes within clusters, schedule the service start times at the customer locations for every tour, and find a better solution. For this we introduce an improved genetic algorithm (GA).
3.3.1 Chromosome representation
$I_i^v$ denotes the order number of customer i serviced by vehicle v, so $(I_{x_0}^v, \ldots, I_i^v, \ldots, I_{y_0}^v)$ represents the route of v. After the route is fixed, we can determine the optimal begin time $T_v$ by enumeration, because $T_v$ must be lower than $\max(Y_{iv} b_i),\; i \in I$.
3.3.2 Fitness function
We verify each constraint; if a constraint is violated, we set F = 0, else
$$F = \frac{z_0 (z_{\max} - z)}{z_0 - z_{\min}} + \frac{z_0 z_{\min}}{z_0 - z_{\min}} \qquad (20)$$

where $z_0$ represents the average fitness value, $z_{\min}$ stands for the lowest fitness value, and $z_{\max}$ for the maximum fitness value. If the amount of goods to pick up exceeds $q_v$, we set the reverse amount to $q_v$. This function has the property of improving the selection rate and helping to find superior solutions.
3.3.3 Initial population
After steps 3.3.1-3.3.2 we obtain a feasible solution, which may already be a fairly good one, so we take the order of customers serviced by v as a chromosome. Then we create n-1 different orders of these customers randomly.
3.3.4 Parent selection operator
There are several selection methods, such as tournament selection, rank selection, elitism selection and so on [10]. For this study, we used maximum-preserved roulette wheel selection.
3.3.5 Crossover operator
We apply a special two-point crossover: if A and B are the parents, we first select two points randomly; second, we take the genes before the first point of A as A1 and the genes after the second point of B as B1; finally, we delete the duplicated genes. In this way we obtain new children A2 and B2. This method allows A to be identical to B while still producing different children. A sketch of this operator is given at the end of this section.
3.3.6 Mutation operator
After recombination, some children undergo mutation: we randomly swap the order of two of their genes.
3.3.7 Stopping criteria
This paper has two stopping criteria: (i) a maximum number of generations; (ii) $\frac{F(z'_{\max}) - F(z_{\max})}{F(z_{\max})} < \varepsilon$, where $\varepsilon$ can be set in advance.
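As an illustration of the crossover in Section 3.3.5, here is a minimal Python sketch of one plausible reading of the operator: the child keeps A's genes before the first cut and B's genes after the second cut, duplicates are dropped, and the remaining genes are filled in following A's order; applying it with the roles of A and B swapped yields the second child.

```python
import random

def two_point_crossover(A, B):
    n = len(A)
    i, j = sorted(random.sample(range(1, n), 2))   # two random cut points
    head = A[:i]                                   # genes before cut 1, from A
    tail = [g for g in B[j:] if g not in head]     # genes after cut 2, from B
    middle = [g for g in A if g not in head and g not in tail]
    return head + middle + tail                    # a valid permutation

# Identical parents can still yield different children, because the
# cut points are chosen at random on each call.
child = two_point_crossover([1, 2, 3, 4, 5], [5, 4, 3, 2, 1])
```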

4. Example
To show the computational performance of the proposed three-phase hybrid genetic approach for VRPSDPTW, some examples are tested. Some parameters are generated randomly based on practical situations, and the runtime is analyzed. The runtime consists of two parts: the time of the heuristic algorithm, t1, and the time of the genetic algorithm, t2. We analyze a chain enterprise consisting of five distribution centers located at {(20,10),(15,15),(16,16),(20,18),(0,20)}; there are 9 vehicles, with $q_v$ generated randomly in the range [150,200] and $t_v^{\max}$ in [2000,2500], etc. The reclaim cost equals one tenth of the distribution cost, the early-arrival penalty cost is 1 (per unit time), the late-arrival penalty cost is 3, and the overtime cost is 0.5.


Fig.1 Runtime of heuristic algorithm (time in seconds vs. number of customers, 0-200)

(i) We mainly analyze the runtime of the heuristic algorithm, t1, because t2 depends on the practical data and the stopping criteria. The model is solved first by the heuristic algorithm, and the resulting runtime t1 is shown in Fig.1. It shows that t1 increases slowly with the number of customers, so the algorithm is suitable for solving large-scale problems. (ii) We analyze 3 enterprises consisting of 30, 50 and 100 chain nodes, respectively, with 9 vehicles to fulfill the distribution task. The other parameters are the same as above. The result is shown in Tab.1. It is evident that the hybrid approach can obtain a good result in reasonable time.
Tab.1 The result of the three-phase approach
Number of customers | Number of vehicles | Time of heuristic algorithm (s) | Result of heuristic algorithm | Total runtime (s) | Result after genetic algorithm
30  | 9 | 0.3241 | 5.6097e+004 | 57.1801  | 4.5988e+004
50  | 9 | 0.4763 | 9.3258e+004 | 185.0269 | 6.9186e+004
100 | 9 | 1.8953 | 2.0972e+005 | 397.6718 | 1.7958e+005

5. Conclusion
This paper presents a model of VRPSDPTW and a three-phase hybrid genetic approach to solve it, which obtains good results. As different chain enterprises have different targets because of their different competition strategies, the problem becomes a multi-objective programming problem; how to solve such problems is further work. Additionally, this model still needs to be improved to fully account for the influence of reverse logistics on the objective function.
References

[1] Dondo R., Cerda J. A cluster-based optimization approach for the multi-depot heterogeneous fleet vehicle routing problem with time windows. European Journal of Operational Research, 2007, 176(3): 1478-1507
[2] Min H. The multiple vehicles routing problem with simultaneous delivery and pick-up points. Transportation Research, 1989, 23(4): 377-386
[3] Halse K. Modeling and solving complex vehicle routing problems, PhD thesis. Denmark: Institute of Mathematical Statistics and Operations Research (IMSOR), Technical University of Denmark, 1992
[4] Angelelli E., Mansini R. The vehicle routing problem with time windows and simultaneous pick-up and delivery. In: Klose A., Speranza M.G., Van Wassenhove L.N., Eds. Quantitative Approaches to Distribution Logistics and Supply Chain Management. Springer-Verlag, 2002. 249-267
[5] Dethloff J. Vehicle routing and reverse logistics: The vehicle routing problem with simultaneous delivery and pick-up. OR Spektrum, 2001, 23(1): 79-96
[6] Ropke S., Pisinger D. A unified heuristic for a large class of vehicle routing problems with backhauls. European Journal of Operational Research, 2006, 171(3): 750-775
[7] Bianchessi N., Righini G. Heuristic algorithms for the vehicle routing problem with simultaneous pick-up and delivery. Computers & Operations Research, 2007, 34(2): 578-594
[8] Zhang Jianyong, Li Jun. Hybrid genetic algorithm to vehicle routing problem with simultaneous delivery and pick-up. China Journal of Highway and Transport, 2006, 19(4): 118-122 (in Chinese)
[9] Qu Zhiwei, Cai Linning, Li Chen, Zheng Li. Solution framework for the large scale vehicle delivery/collection problem. Journal of Tsinghua University (Science & Technology), 2004, 44(5): 581-584 (in Chinese)
[10] Gendreau M., Laporte G., Vigo D. Heuristics for the traveling salesman problem with pickup and delivery. Computers & Operations Research, 1999, 26(7): 699-714
[11] Ghaziri H., Osman I.H. A neural network algorithm for the traveling salesman problem with backhauls. Computers & Industrial Engineering, 2003, 44(2): 267-281
[12] Süral H., Bookbinder J.H. The single-vehicle routing problem with unrestricted backhauls. Networks, 2003, 41(3): 127-136


Multi-Objective Layout Optimization in Dynamic Environments: A Heuristic Approach


Dong Ming1, Liu Fei1, Hou Forest2, Zhang Franky2, Chen Feng1
1 Shanghai Jiao Tong University, P.R. China, 200240 2 Intel Products (Shanghai) Ltd., P.R. China, 200131

Abstract This paper discusses the dynamic multi-stage facility layout problem with unequal-area departments. The layout evaluation criteria considered include not only the traditional material handling cost but also the re-layout cost and other costs not considered previously. A heuristic algorithm is proposed to solve this NP-hard problem; its key point is how to account for the functional layout and the re-layout cost across the changing periods. The heuristic algorithm can also be used to reduce the complexity of the solution search procedure. Experimental results show that the proposed algorithm is effective.
Key words Layout design, Dynamic environment, Heuristic algorithm, Multi-objective optimization

1. Introduction
Facility layout design determines how to arrange, locate and distribute the equipment and support services in a manufacturing plant. Efficient layout planning is quite important for a factory due to its complexity and the high re-layout cost. There are several formulations of the facility layout problem (FLP), such as a quadratic assignment problem (QAP), a quadratic set covering problem (QSCP), a linear integer programming problem, and a graph theoretic problem. Mathematical techniques usually involve the identification of one or more goals that the layout should strive to achieve. A widely used goal is the minimization of transportation costs on the site. But there are also other quantitative indices for plant layout evaluation, for example, work in process (WIP) inventory (Benjaafar 2002), product lead time (Meng 1999), area utilization factor, and shape ratio factor (Wang 2005). With changing technology, market requirements vary quickly. Manufacturing facilities must be able to exhibit high levels of flexibility and robustness despite significant changes in their operating requirements, especially in the high-tech industry. The complexity of FLP design increases when it is regarded as a dynamic problem, i.e. the dynamic facility layout problem (DFLP), which focuses on minimizing the total cost over a specified time horizon. Balakrishnan and Cheng (1998) gave a state-of-the-art survey of this problem. The design of semiconductor plant facilities for high volume production has traditionally been dominated by functional or process layout, where work centers consist of groups of similar or identical machines that are capable of performing the same type of unit process. But the manufacturing tools are updated very quickly due to the changing products. When the space of the plant is limited and many machines need to be relocated, how to reduce the re-layout cost and optimize the whole facility over multiple periods is the question we consider in this paper. The remainder of the paper is organized as follows. Section 2 develops a multi-objective optimization model for multi-stage layout decisions. Section 3 describes the proposed algorithms for solving the developed multi-objective optimization problem. Section 4 presents and discusses the findings of this study. Section 5 concludes and offers suggestions for future research.

2. Problem Statement and Modeling


A department is considered as a rectangle, or as the smallest rectangle containing the department (when the department is not of regular shape). Its position is described by the coordinates of its center point and its orientation. There exist several criteria to evaluate the performance of a given layout. The following criteria are

This research has been supported by National Natural Science Funds of China (No: 70571050) and Shanghai Pujiang Program (No. 05PJ14067).


adopted:
2.1 Material handling cost
In the unequal-area department layout problem, most researchers consider only the material flow cost as the objective function. An effective facility layout and material handling design can reduce the operating cost of a plant by 10-30% (Tompkins and White 1996). The single-product material flow cost can be formulated as follows:
Single-product:
$$\sum_{j=1}^{n} \sum_{k=1, k \neq j}^{n} C \, f(j,k) \, d(j,k,\pi) \qquad (1)$$

where C is the transferring cost per unit, f(j,k) is the material flow from department j to k, and $d(j,k,\pi)$ represents the distance between j and k under the layout $\pi$. Considering that many products travel among the departments, the total material handling cost equals the summation of the material handling costs of the individual products:
Multi-products:
$$MHC = \sum_{p \in P} \sum_{j=1}^{n} \sum_{k=1, k \neq j}^{n} C_p \, f_p(j,k) \, d(j,k,\pi) \qquad (2)$$

where $C_p$ represents the transferring cost per unit for product p.
2.2 Re-layout Cost and Shutdown Cost
The factory tries to avoid machine re-layout as much as possible due to its extremely high cost. We detect relocation by checking whether the departments overlap between the old and new locations. If a machine re-layout results in the shutdown of the product line, we additionally count the shutdown cost.
2.3 Area Utilization
Efficient area utilization is another important factor that needs to be considered, because the factory's area is limited and land investment is usually very expensive. In this paper we focus on the fixed-department-shape layout problem, which means that the department shape cannot change but rotation is allowed. The area utilization cost can be interpreted as $C_{area}\big(\sum_i AT[i] + \text{Useless Area}\big)$, where AT[i] means the area for machine i and $C_{area}$ is the cost per unit area. The useless area is defined as the space that cannot hold even the smallest machine.
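For concreteness, a minimal Python sketch of evaluating the multi-product material handling cost of equation (2) for a candidate layout follows; the data structures (nested dictionaries and a distance callback) are illustrative assumptions, not the paper's notation.

```python
def material_handling_cost(flows, cost_per_unit, dist):
    """flows[p][(j, k)]: flow of product p from department j to k;
    cost_per_unit[p]: transfer cost per unit for product p;
    dist(j, k): distance between departments j and k in the layout."""
    total = 0.0
    for p, f in flows.items():
        for (j, k), qty in f.items():
            if j != k:                       # no cost within a department
                total += cost_per_unit[p] * qty * dist(j, k)
    return total
```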

3. Methodology
3.1 Algorithm design
This algorithm is developed based on the two-dimensional bin packing problem and on experience from industry. We extract common practices and form them into rules. By using a structure based on the knapsack problem together with these rules, we deal with this dynamic layout problem more efficiently and reduce the solution space effectively. Furthermore, meta-heuristic algorithms are applied to search for the optimal solution among the large set of possible candidates. The basic concept underlying the proposed procedure is very simple. There are machine tool lists from the forecast, which include two parts: machines to be added and machines to be removed. But when designing a layout, we have three actions: add a new tool, remove an old tool, and re-lay out an old tool. We consider re-laying out an old tool as removing and adding that tool in the same period. Figure 1 shows the detailed structure. The algorithm starts by finding the free spaces in the initial layout. Then, going to the last period, it finds the total remove list over the whole horizon, removes these machines, and places the machines in the add list. Obviously, there are too many possible solutions, so a heuristic put-in rule is developed to select the solution candidates. When we put the departments into the area, there exist situations in which some machines need to be relocated; then the fetching rule is used. After finishing the last period, we check whether the layout is reasonable. If it is reasonable, we try to solve the problem with n-1 periods; otherwise, this solution is abandoned.

The result obtained is not a single optimized solution, but a set of good solutions. That means we have many possible choices. The objective function is then used to judge these choices, and a tabu search algorithm can be used to reduce the complexity.
3.2 Put-in rules
When machines are to be put into given free spaces, we do not need to re-lay out any other machine. This rule contains two main parts: (a) the free spaces are very limited, so all the departments must be fitted onto the floor; (b) when it is easy to put all the departments into these areas, the departments should be arranged as well as possible. The procedure is as follows (a sketch in code is given after the list):
(1) Order all the departments by area or another characteristic.
(2) Find all the suitable areas for department i; rotation is allowed.
(3) If there is no suitable area, combine areas and try again.
(4) If there is still no suitable area, we conclude that there is no solution, and try to use the fetching rule to re-lay out some machines.
(5) Put the department into the area, and cut the remaining area into two new areas.
(6) Repeat this procedure until all departments are placed.
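A minimal Python sketch of the put-in rules, assuming rectangular departments and free areas; the largest-first ordering of rule (1) and the guillotine split of rule (5) follow the list above, while the area-combination of rule (3) is omitted for brevity.

```python
from collections import namedtuple

Rect = namedtuple("Rect", "x y w h")      # a free rectangle on the floor
Dept = namedtuple("Dept", "name w h")     # a department footprint

def put_in(departments, free_areas):
    placement = {}
    # Rule (1): handle the largest departments first.
    for dep in sorted(departments, key=lambda d: d.w * d.h, reverse=True):
        fit, rotated = None, False
        for area in free_areas:           # rule (2): rotation allowed
            if dep.w <= area.w and dep.h <= area.h:
                fit = area
                break
            if dep.h <= area.w and dep.w <= area.h:
                fit, rotated = area, True
                break
        if fit is None:
            return None                   # rules (3)-(4): combine / fetch
        w, h = (dep.h, dep.w) if rotated else (dep.w, dep.h)
        placement[dep] = (fit.x, fit.y, rotated)
        # Rule (5): guillotine cut, splitting the rest into two areas.
        free_areas.remove(fit)
        free_areas.append(Rect(fit.x + w, fit.y, fit.w - w, fit.h))
        free_areas.append(Rect(fit.x, fit.y + h, w, fit.h - h))
    return placement                      # rule (6): everything placed
```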
Fig. 1 Add list and remove list structure

Fig. 2 Finding a suitable area (A: unsuitable; B: suitable; C: unsuitable)

Fig. 3 Selecting a machine in the neighborhood (re-layout cost 50000 vs. 10000; machine 2 is too big for the other free areas)

3.3 Fetching rules
If we try to find a solution without re-laying out any machine, two situations may arise: (a) there is no feasible solution at all; (b) the departments of the same process would end up in separated areas. Then some machines should be fetched, i.e. placed on both the add list and the remove list. Selecting such a machine is governed by the fetching rule.
(1) Start the iteration by re-laying out i machines, then set i = i+1 until i = Max_i.
(2) Find which situation causes the machine to be fetched from the floor.
(3) In case (a): starting from the biggest free area, find the machines in its neighborhood, and among them consider the set of machines smaller than the biggest free area. For machine $A_k$, the remove cost is $C_{A_k}^1$ and the joint-line (adjacency) measure is $C_{A_k}^2$; the weight of each machine $A_k$ can then be obtained as $W_{A_k} = \alpha C_{A_k}^1 + (1-\alpha) C_{A_k}^2$. In Figure 3, department 1 is fetched out. The rule for A is the longest adjacency between department 1 and the free area; the rule for B is the lower re-layout cost between departments 1 and 2; the rule for C is that department 1 is the only choice, because department 2 is too big for the free area, which means that if department 2 were relocated, it would most likely be placed back in the same position.
In case (b): order the machines on the add list by area. For machine i, look for machines of the same type already on the floor, and find the total area.
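A small sketch of ranking fetching candidates by the weight $W_{A_k}$ above; the balance parameter alpha and the normalization of the two cost terms are assumptions, since only the structure of the weighting is given.

```python
def fetch_order(machines, alpha=0.5):
    """machines: objects with c1 (re-layout/remove cost) and c2 (a score
    that is low when adjacency with the free area is long); alpha is an
    assumed tuning parameter in [0, 1]. Lowest weight is fetched first."""
    return sorted(machines, key=lambda m: alpha * m.c1 + (1 - alpha) * m.c2)
```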

4. The industrial case


The company assembles and tests semiconductor-related products. The production processes from A to H involve 37 machines in total. The initial layout is shown in Figure 4, and Table 1 shows the machine quantities over the multiple stages.

Fig. 4 Initial layout


Tab. 1 Machine quantities in different stages
Mach./Stage | Initial | Q1'06 | Q2'06 | Q3'06 | Q4'06
APL     | 6   | 6   | 6   | 7   | 7
CAM     | 5   | 5   | 5   | 5   | 6
Deflux  | 3   | 3   | 3   | 2   | 2
Epoxy 1 | 2   | 1   | 1   | 0   | 0
Epoxy 2 | 4   | 4   | 4   | 5   | 5
IHS     | 4   | 4   | 4   | 3   | 3
SPRG    | 6   | 6   | 5   | 4   | 4
CTL     | 6   | 6   | 6   | 6   | 7
Cure    | 8.5 | 8.5 | 8.5 | 8.5 | 6.5

Applying the above algorithm, the solution for this example can be obtained. Figure 5 gives the real-world layout and Figure 6 shows the layouts from periods 1 to 4. It can be seen that the layout in each period is still functional, and the re-layout cost is relatively low. The real-world layout is almost the same as the computed results; the comparison is given in Table 2.


Fig. 5 Real-world layout

Fig. 6 Layouts from the periods 1 to 4


Tab. 2 Comparisons between the real-world layout and the results from the proposed algorithm
            | Phase  | Material handling cost | Relayout cost | Useful space cost | Total cost (Phase) | Total cost
Algorithm   | Phase2 | 225338 | 10000  | 1909 | 237247 |
            | Phase3 | 227256 | 5000   | 3334 | 235590 |
            | Phase4 | 246767 | 102000 | 7328 | 356095 |
            | Phase5 | 237744 | 112000 | 3293 | 353037 | 1181969
Real layout | Phase2 | 225338 | 10000  | 1909 | 237247 |
            | Phase3 | 227256 | 5000   | 3334 | 235590 |
            | Phase4 | 247895 | 102000 | 7328 | 357223 |
            | Phase5 | 239436 | 112000 | 3293 | 354729 | 1184789

5. Summary and conclusions


This paper presents a new heuristic approach to the multi-stage dynamic layout problem. One of the most important features of this approach is that both functional layout and re-layout with unequal areas are considered. The heuristics are designed based on industrial experience, and the arrangements generated in this way provide reasonable results in most cases. The algorithm makes use of the concepts of combining areas and of how to place a department within an area, an idea widely used in bin packing. The objective function considers the material flow cost, the re-layout cost and the space utilization. The experimental results show that the proposed algorithm is effective and practical for use.
References

[1] Balakrishnan J., Cheng C. Dynamic layout algorithms: a state-of-the-art survey. Omega - International Journal of Management Science, 1998, 26(4): 507-521
[2] Braglia M., Zanoni S., Zavanella L. Layout design in dynamic environments: analytical issues. International Transactions in Operational Research, 2004, 12(1): 1-19
[3] Bryan A., Norman A., Smith E. A continuous approach to considering uncertainty in facility design. Computers & Operations Research, 2006, 33: 1760-1775
[4] Meng G., Heragu S.S., Zijm H. Reconfigurable layout problem. International Journal of Production Research, 2004, 42(22): 4709-4729
[5] Lin L.C., Sharp G.P. Quantitative and qualitative indices for the plant layout evaluation problem. European Journal of Operational Research, 1999, (116): 100-117
[6] Wang Ming-Jaan, Hu M.H., Ku M.Y. A solution to the unequal area facilities layout problem by genetic algorithm. Computers in Industry, 2005, 56(2): 207-220
[7] Rosenblatt M.J., Kropp D.H. The single period stochastic layout problem. IIE Transactions, 1992, (24): 169-176
[8] Benjaafar S. Modeling and analysis of congestion in the design of facility layouts. Management Science, 2002, 48(5): 679-704


A Centralized Inventory Policy for Open-Loop Reverse Supply Chain


Gou Qinglong, Liang Liang, Xu Xiping, Xu Chuanyong
School of Management, University of Science and Technology of China Hefei, Anhui, 230036, P.R. China

Abstract As an extension of Gou et al. (2006)[9], this paper introduces a special inventory policy for the open-loop reverse supply chain, in which the centralized returns center (CRC) is authorized to manage the inventories at the local contact point (LCP) locations. To formulate the model, a special technique is used to translate the CRC's inventory and handling process into an $E_k/M/1/\infty$ queueing system, so that the respective conclusions of queueing theory can be utilized. Our purpose is to determine the LCPs' economical delivery batch and the CRC's handling batch to minimize the long-run average cost. Sensitivity analyses are also presented.
Keywords Reverse supply chain, Inventory management, Queueing system

1. Introduction
Reverse supply chain (RSC) is a series of activities required to retrieve a used product from a customer and either dispose of it or reuse it (Guide and Van Wassenhove, 2002)[12]. It is becoming more and more important for modern business organizations. On one hand, people are confronted with serious environmental problems, and environmental regulations require firms to recycle the products they sell (Anindya, 2003)[1]. On the other hand, reusing components of abandoned products can reduce material cost for manufacturers (Guide et al., 2003)[10]. During the past two decades, RSC has received special attention in the literature. The first theoretical framework of the reverse supply chain was initiated by Lund (1984)[17], for remanufacturing. After that, Thierry et al. (1995)[24] analyzed product recovery management and Toffel (2003)[25] discussed the strategic importance of end-of-life product management. Further, empirical studies, including case studies and survey-based studies, are also a focus of reverse supply chain research. For case studies, readers may see de Brito et al. (2002)[5] for a review; for survey-based studies, see Rogers and Tibben-Lembke (2001)[22], Blumberg (1999)[2], Guide (2000)[11], and Lieb et al. (1993)[15] for examples. Besides the literature mentioned above, inventory management and modeling plays a significant role in reverse supply chain research (Mabini et al., 1992, and Richter, 1996a&b, for deterministic models; Pierskalla and Voelker, 1976, Cohen et al., 1980, Cho and Parlar, 1991, Van der Laan et al., 1996, Inderfurth, 1996, Kiesmüller and Scherer, 2003, Listes and Dekker, 2005, DeCroix and Zipkin, 2005, and Gou et al., 2006, for stochastic models)[3][4][6][9][13][14][16][18][19][20][21][26]. However, most of these works are based on a closed-loop reverse supply chain, where an in-house distribution center combining forward and reverse distribution services is adopted (Fleischmann et al., 1997)[7]. In fact, because reverse logistics is so different from conventional forward practices, the question of whether to handle reverse logistics in a forward distribution center (closed-loop system) or in a separate facility (open-loop system) arises often (Gooley, 2003)[8]. As described by Gooley (2003)[8], the right solution depends on the who, what and where of the reverse logistics, and thus there is no absolute answer. Along this direction, an open reverse supply chain system combining a single centralized returns center (CRC) and several independent local contact points (LCPs) is discussed by Gou et al. (2006)[9]. In the system considered by Gou et al. (2006)[9], each LCP receives a stream of random product returns from consumers and delivers them to the CRC independently once its inventory reaches its economical transportation batch S, and the CRC handles those returned products in batches of Q (where Q = rS, r an integer). Noting that a company would rather set up several LCPs in different districts of a city and a single CRC in an urban place, the LCPs are normally congregated in an adjacent area while the CRC lies in a relatively remote place. Thus, it is reasonable for the CRC to transfer all the LCPs' inventory in one dispatch to save delivery cost. As an extension of Gou et al. (2006)[9], this research introduces a new inventory control and delivery policy for
This work was supported by the National Natural Science Foundation of China (No. 70525001).


the reverse supply chain system. Under a special contract, the CRC is authorized to manage the inventories at the LCP locations. Therefore, the CRC has the liberty of controlling the upstream transportation decision so as to accumulate a larger delivery batch. In detail, once the total inventory of the LCPs accumulates to the economic batch S, the CRC transfers all of it back immediately. As in Gou et al. (2006)[9], the CRC handles the product returns in batches of Q (where Q = rS, r an integer). The major issue is to determine the economical delivery batch S and handling batch Q (equivalently, r) that minimize the long-run average cost. Sensitivity analyses are also presented. This paper is organized as follows. The assumptions and the inventory process for each stage in the reverse supply chain are described in detail in Section 2. The model is developed in Section 3. In Section 4, the effects of each policy parameter on the final solutions and the supply chain costs are analyzed. Concluding remarks are in Section 5.

2. Problem Description and Assumptions


The system considered in this paper consists of a single CRC and m LCPs (see Fig. 1). The LCPs receive returned products from customers, and the CRC is responsible for the delivery of product returns from the LCPs to the CRC and for the handling of those product returns. More details of the system and the inventory policy are given by the following assumptions:

Fig. 1 The Stages in the Reverse Supply Chain System

Assumption 1: Each LCP receives a stream of product returns from customers which follows a Poisson process with rate $\lambda_i$ ($i = 1, \ldots, m$). Further, to simplify the model, $\lambda_i = \lambda_0$ ($i = 1, \ldots, m$) is assumed.
Assumption 2: Once the total inventory of the LCPs reaches the economical transportation batch S, the CRC transfers all of it back immediately. Further, to simplify the model, the delivery time is assumed to be zero.
Assumption 3: The CRC has a single server to handle the product returns; whenever the server is free, it handles the product returns in batches of Q. The handling time L is random and follows an exponential distribution with parameter v.
Assumption 4: The CRC has a maximum busy ratio $\rho_0$, where $\rho_0$ is a given parameter between 0.9 and 1.
The costs involved in the system consist of: (i) the inventory holding costs at the LCP locations; (ii) the delivery cost from the LCPs to the CRC, which includes a main fixed cost for each delivery and a minor cost for each LCP involved in a delivery; (iii) the inventory holding costs at the CRC location; and (iv) the handling cost at the CRC, which includes a fixed cost for each batch and a minor cost per unit product return per unit time. To make these costs clear, we list the related parameters as follows:
$F_D$: main delivery cost for each delivery from the LCPs to the CRC
$c_D$: minor delivery cost for each LCP involved in a delivery
$F_H$: fixed setup handling cost of the CRC for each handling batch
$c_H$: variable handling cost of the CRC per unit product return per unit time
$h_L$: inventory cost of an LCP per unit product return per unit time
$h_C$: inventory cost of the CRC per unit product return per unit time

3. Model Development
3.1 The inventory holding cost at the LCP locations
Viewing all the LCPs as a single entity LCPS, it is easy to see that the product return stream to LCPS is a Poisson process with rate $\lambda = \sum_{i=1}^{m} \lambda_i = m\lambda_0$. Fig. 2 presents an example of LCPS's inventory process, in which $\{X_j, j = 1, 2, \ldots\}$ are the intervals between two adjacent arrivals of product returns.

Fig. 2 The inventory process at LCPS

Define the period between two adjacent deliveries from LCPS to the CRC as a delivery cycle; the average length of each delivery cycle $T_{L,C}$ is
$$T_{L,C} = E(X_1 + \cdots + X_S) = S/\lambda, \qquad (1)$$

and the mean of the inventory cost at the LCP locations for each delivery cycle is
$$H_L = h_L E\Big[\sum_{j=1}^{S} (j-1) X_j\Big] = h_L \frac{S(S-1)}{2\lambda}. \qquad (2)$$

Thus, the average inventory holding cost at the LCP locations, $H_{avL}$, is
$$H_{avL} = \frac{H_L}{T_{L,C}} = \frac{(S-1) h_L}{2}. \qquad (3)$$
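As a quick check, (3) follows directly from (1) and (2):
$$H_{avL} = \frac{H_L}{T_{L,C}} = \frac{h_L\, S(S-1)/(2\lambda)}{S/\lambda} = \frac{(S-1) h_L}{2}.$$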

3.2 The Average Delivery Cost from the LCPs to the CRC
The costs occurring in each delivery cycle include a fixed delivery cost $F_D$ and a variable delivery cost $k c_D$, where k is the random number of LCPs involved in a delivery. That is, the delivery cost for each delivery cycle is
$$D = E[F_D + k c_D] = F_D + c_D E[k]. \qquad (4)$$
Denoting by $\theta_j$ the indicator of whether LCP j is involved in a delivery from LCPS to the CRC, we have
$$E[k] = E\Big[\sum_{j=1}^{m} \theta_j\Big] = m p_j, \qquad (5)$$
where $p_j = P(\theta_j = 1) = 1 - (1 - \frac{1}{m})^S$, since each of the S returned units originates from LCP j with probability 1/m independently. Thus the average delivery cost for each delivery is
$$D = F_D + m c_D p_j, \qquad (6)$$
and the average delivery cost from LCPS to the CRC, $D_{av}$, is
$$D_{av} = \frac{D}{T_{L,C}} = \frac{F_D \lambda}{S} + \frac{m c_D \lambda}{S}\Big\{1 - \Big(1 - \frac{1}{m}\Big)^S\Big\}. \qquad (7)$$

3.3 The Average Inventory Holding Cost at the CRC Location
To formulate the inventory cost of the CRC, we regard all the LCPs as a virtual entity LCPS and divide the CRC into two parts: CRC I and CRC II. In detail, CRC I receives product returns from the LCPs in batches of S and keeps them; however, once the inventory of CRC I accumulates to Q (where Q = rS, r an integer), CRC I transfers all the product returns to CRC II immediately. CRC II handles these product returns in batches of Q. The relationship between the inventory process of the CRC and those of CRC I and CRC II is illustrated by Fig. 3, in which $\{X_j^{(I)}, j = 1, 2, \ldots\}$ is the series of time intervals between deliveries from LCPS to the CRC, i.e., $X_j^{(I)} = \sum_{k=jS+1}^{(j+1)S} X_k$ is a delivery cycle from the LCPs to the CRC.
( j +1) S

Fig. 3 The inventory process of CRC, CRC I and CRC II

Define the period between two adjacent transfers from CRC I to CRC II as a transfer cycle; then the average length of each transfer cycle $T_{I,II}$ is
$$T_{I,II} = E(X_1^{(I)} + \cdots + X_r^{(I)}) = E\Big(\sum_{j=1}^{rS} X_j\Big) = \frac{rS}{\lambda}, \qquad (8)$$

the mean of the holding cost of CRC I for one transfer cycle is
$$H_I = h_C E\Big\{\sum_{j=1}^{r} (j-1) S X_j^{(I)}\Big\} = \frac{h_C r (r-1) S^2}{2\lambda}. \qquad (9)$$

Thus, the average holding cost of CRC I is
$$H_{avI} = \frac{H_I}{T_{I,II}} = \frac{(r-1) h_C S}{2}. \qquad (10)$$

Regarding each batch of product returns as a single customer, it is easy to see that the inventory process of CRC II forms an $E_Q/M/1/\infty$ queueing system, in which the customer arrival intervals are Erlang distributed with parameters $(Q, \lambda)$ and the service time for each customer is exponentially distributed with parameter v. Queueing theory tells us that when $\rho = \lambda/(Qv) < 1$ is satisfied, the average queue length of the $E_Q/M/1/\infty$ system is
$$N_q = \frac{\rho}{Q(z_0 - 1)}, \qquad (11)$$
where $z_0 > 1$ is a root of $\lambda z^{Q+1} - (\lambda + v) z^{Q} + v = 0$ (see Sun R.H. and Lee J.P., 2002, pages 79-86, for a reference)[23]. Thus, the average inventory cost of CRC II is
$$H_{avII} = h_C N_q Q = \frac{h_C \lambda}{Q v (z_0 - 1)}, \qquad (12)$$
where $z_0 > 1$ is a root of $\lambda z^{Q+1} - (\lambda + v) z^{Q} + v = 0$.

Noting that $z_0$ appears in equation (12): queueing theory also proves that $z_0$ exists uniquely (Sun R.H. and Lee J.P., 2002, pages 79-86)[23], and Gou et al. (2006)[9] illustrate that
$$1 < z_0 \le \frac{Q + \lambda + v}{Q + 1} \qquad (13)$$

while $\rho = \lambda/(Qv) < 1$ is satisfied. From equations (10) and (12), we get the average holding cost at the CRC location, $H_{avC}$, as follows:
$$H_{avC} = H_{avI} + H_{avII} = \frac{(r-1) h_C S}{2} + \frac{h_C \lambda}{Q v (z_0 - 1)}, \qquad (14)$$
where $z_0 > 1$ is a root of $\lambda z^{Q+1} - (\lambda + v) z^{Q} + v = 0$.
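For illustration, $z_0$ can be located numerically. The Python sketch below assumes the root equation as reconstructed above; when $\rho = \lambda/(Qv) < 1$, the polynomial is negative just above z = 1 and positive at $z = (\lambda+v)/\lambda$, so plain bisection applies. The function names are illustrative.

```python
def z0_root(lam, v, Q, tol=1e-10):
    """Locate z0 > 1 with lam*z^(Q+1) - (lam+v)*z^Q + v = 0 by bisection."""
    assert lam / (Q * v) < 1.0, "the queue must be stable (rho < 1)"
    f = lambda z: lam * z ** (Q + 1) - (lam + v) * z ** Q + v
    lo, hi = 1.0 + 1e-9, (lam + v) / lam   # f(lo) < 0, f(hi) = v > 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def h_av_II(h_C, lam, v, Q):
    """Average CRC II inventory holding cost, eq. (12)."""
    z0 = z0_root(lam, v, Q)
    return h_C * lam / (Q * v * (z0 - 1))
```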

3.4 CRC average handling cost
All the returned products that arrive at the CRC will be handled in the long run. Therefore, for a long enough period T, assuming that N handling processes occur, the mean of the variable handling cost (VHC) is
$$E(VHC) = c_H Q E(L_1 + \cdots + L_N) = \frac{c_H \lambda T}{v}. \qquad (15)$$
The fixed handling cost (FHC) is
$$E(FHC) = F_H E(N) = \frac{F_H \lambda T}{Q}. \qquad (16)$$
Thus, the average handling cost $H_{avH}$ is
$$H_{avH} = \frac{E(VHC) + E(FHC)}{T} = \frac{c_H \lambda}{v} + \frac{F_H \lambda}{Q}. \qquad (17)$$

3.5 The Total Cost and the Optimization Problem
From the discussions of subsections 3.1 to 3.4, we get the expression of the long-run average cost of the reverse supply chain system as follows:
$$C(r, S) = \frac{h_L (S-1)}{2} + \frac{(r-1) h_C S}{2} + \frac{F_D \lambda}{S} + \frac{m c_D \lambda}{S}\Big\{1 - \Big(1 - \frac{1}{m}\Big)^S\Big\} + \frac{F_H \lambda}{rS} + \frac{c_H \lambda}{v} + \frac{h_C \lambda}{r S v (z_0 - 1)}, \qquad (18)$$
where $\rho = \lambda/(Qv) < 1$ and $z_0 > 1$ is a root of $\lambda z^{Q+1} - (\lambda + v) z^{Q} + v = 0$. Noting the presence of $z_0$ in equation (18), we denote
$$f(r, S, z) = \frac{h_L (S-1)}{2} + \frac{(r-1) h_C S}{2} + \frac{F_D \lambda}{S} + \frac{m c_D \lambda}{S}\Big\{1 - \Big(1 - \frac{1}{m}\Big)^S\Big\} + \frac{F_H \lambda}{rS} + \frac{c_H \lambda}{v} + \frac{h_C \lambda}{r S v (z - 1)}. \qquad (19)$$
Thus, the problem reduces to the following optimization problem:
$$\min f(r, S, z) \qquad (20)$$
subject to
$$r \ge 1, \quad S \ge 1,$$
$$\lambda z^{rS+1} - (\lambda + v) z^{rS} + v = 0, \quad \rho = \lambda/(rSv) \le \rho_0, \quad z \le \frac{rS + \lambda + v}{rS + 1},$$
$$r, S \text{ are integers},$$
where $m, \lambda, v, F_D, F_H, h_C, h_L, c_D, c_H, \rho_0$ are all given constants of the system.
3.6 Algorithm for the optimization problem
Assume that $(r^*, S^*)$ is the optimal solution minimizing $C(r, S)$; then for any solution $(r, S)$,

$C(r^*, S^*) \le C(r, S)$. Noting that each term on the right of equations (18) and (19) is positive, for any solution $(r, S)$,
$$f(r, S, z) \ge \frac{c_H \lambda}{v}, \qquad \frac{F_D \lambda}{f(r, S, z) - c_H \lambda / v} \le S^* \le \frac{2\big(f(r, S, z) - c_H \lambda / v\big)}{h_L} + 1, \qquad (21)$$
and if $S^* = S$, we have
$$r^* \le \frac{2\big\{f(r, S, z) - (S-1) h_L / 2 - F_D \lambda / S - c_H \lambda / v\big\}}{h_C S} + 1. \qquad (22)$$
In fact, equations (21) and (22) represent bounds on $(r^*, S^*)$. Utilizing these results, we state the algorithm for the optimization problem as follows (a sketch in code is given at the end of this section).
Algorithm
Step 1. Let $r = 1$, $S = [2\lambda/v] + 1$; calculate the corresponding cost $f(r, S, z)$ and obtain bounds on $S^*$: $upS = 2(f(r, S, z) - \frac{c_H \lambda}{v})/h_L + 1$, $lowS = F_D \lambda /(f(r, S, z) - \frac{c_H \lambda}{v})$.
Step 2. For each $lowS \le S \le upS$, let $upR = \frac{2\{f(r, S, z) - (S-1) h_L/2 - F_D \lambda/S - c_H \lambda/v\}}{h_C S} + 1$, calculate the corresponding cost $f(r, S, z)$ for each pair $(r, S)$ with $1 \le r \le upR$, and compare them to obtain the optimal solution $(r^*, S^*)$.
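A Python sketch of the bounds-based enumeration, reusing the z0_root solver sketched after equation (14); the loop bounds follow equations (21)-(22), and the function and variable names are illustrative assumptions (rho0 < 1 is assumed so that the root equation stays solvable).

```python
import math

def optimize(m, lam, v, FD, FH, hC, hL, cD, cH, rho0):
    def f(r, S):
        Q = r * S
        if lam / (Q * v) > rho0:          # busy-ratio constraint
            return math.inf
        z0 = z0_root(lam, v, Q)           # from the earlier sketch
        return (hL * (S - 1) / 2 + (r - 1) * hC * S / 2
                + FD * lam / S
                + (m * cD * lam / S) * (1 - (1 - 1 / m) ** S)
                + FH * lam / Q + cH * lam / v
                + hC * lam / (Q * v * (z0 - 1)))

    # Step 1: initial point and bounds on S*, eq. (21).
    S1 = int(2 * lam / v) + 1
    f1 = f(1, S1)
    upS = int(2 * (f1 - cH * lam / v) / hL) + 1
    lowS = max(1, int(FD * lam / (f1 - cH * lam / v)))

    best, best_cost = None, math.inf
    for S in range(lowS, upS + 1):        # Step 2: scan S, bound r by eq. (22)
        upR = int(2 * (f1 - (S - 1) * hL / 2 - FD * lam / S - cH * lam / v)
                  / (hC * S)) + 1
        for r in range(1, max(1, upR) + 1):
            cost = f(r, S)
            if cost < best_cost:
                best, best_cost = (r, S), cost
    return best, best_cost
```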

4. Results and Sensitivity Analyses


In this section, we vary each parameter and analyze its effect on the final solutions.
4.1 The Effect of r and S on the System Cost
Varying r and S over a wide range, we obtain their effect on the system cost shown in Fig. 4. Fig. 4 illustrates that: (i) for any fixed r, as S increases, the system cost first decreases sharply to reach its minimum, and then increases linearly; (ii) the larger r is, the sharper both the decrease and the increase are. In fact, noting that when S is large enough the two terms $(1 - 1/m)^S$ and $h_C \lambda / (rSv(z_0 - 1))$ in equations (18) and (19) are both close to zero, we have
$$f(r, S, z) \approx \frac{(h_L + (r-1) h_C) S}{2} + \frac{(F_D + m c_D + F_H / r)\lambda}{S} + K_0, \qquad (22)$$
Fig. 4 The effect of r and S on the system cost

where $K_0 = c_H \lambda / v - h_L / 2$ is a constant. Let $g(r, S) = f(r, S, z) - K_0$; then $g(r, S)$ has the same trend as f. Differentiating the above equation with respect to S, we have
$$\frac{\partial g}{\partial S} = \frac{h_L + (r-1) h_C}{2} - \frac{(F_D + m c_D + F_H / r)\lambda}{S^2}. \qquad (23)$$

From equation (23) it is easy to see that, for a fixed r, f attains its minimum
$$g_r^* = \sqrt{2\lambda (h_L + (r-1) h_C)(F_D + m c_D + F_H / r)} \qquad (24)$$
at $S = \sqrt{2\lambda (F_D + m c_D + F_H / r)/(h_L + (r-1) h_C)}$. Equation (23) illustrates that: (i) when S is small enough, the system cost decreases at a rate of about $(F_D + m c_D + F_H / r)\lambda / S^2$; (ii) when S is large enough, $(F_D + m c_D + F_H / r)\lambda / S^2$ is close to zero, and thus the cost increases linearly at a rate of $(h_L + (r-1) h_C)/2$. Therefore, the larger r is, the steeper the curve is.
4.2 The Effect of m on the Optimal Solution
To analyze the effect of m on the optimal solution, we change m from 1 to 20 and calculate the optimal solution for each m in two different cases, shown in Fig. 5. Fig. 5 illustrates the following trends: (i) when the number of LCPs m is small enough, a large r is advised; (ii) when the total market is fixed, an increase of m has almost no effect on the total cost and on S (Case 2); (iii) as the total market increases, the delivery batch S and the system cost increase linearly (Case 1).

Fig. 5 The effect of m on the optimal solution

From equation (24), we have
$$\frac{g_r^*}{g_1^*} = \sqrt{\frac{h_L + (r-1) h_C}{h_L}} \cdot \sqrt{\frac{F_D + m c_D + F_H / r}{F_D + m c_D + F_H}}. \qquad (25)$$
Noting that the larger m is, the larger $g_r^* / g_1^*$ is, a smaller r is advised.

4.4 The Effect of v on the Optimal Solution
Changing v from 0.1 to 5, we obtain its effect on the solution as follows (see Fig. 6): (i) when v is small enough, a large r is advised, and as v increases, r decreases rapidly to 1; (ii) when v is small enough, increasing v reduces the system cost sharply; (iii) when v is large enough, increasing v has almost no effect on the (r, S) solution or the system cost.
Fig. 6 The effect of v on the optimal solution
In fact, when the efficiency of the CRC is rather low, more and more product returns accumulate, and thus larger delivery and handling batches are advised, which makes the inventory cost increase sharply. Therefore, when the CRC's efficiency is rather low, it is indeed necessary to improve it.


4.5 The Effect of hL and hC on the Optimal Solution
Fig. 7 The effect of $h_L$ and $h_C$ on the optimal solution
Varying $h_L$ and $h_C$ over a wide range, we obtain their effects on the optimal solution as follows (see Fig. 7): (i) as $h_L$ increases, r increases and S decreases; (ii) as $h_C$ increases, r decreases and S increases. In fact, noting that the larger $h_L$ is (or the smaller $h_C$ is), the smaller $g_r^* / g_1^*$ is, a larger r is more likely to be

advised.
4.6 The Effect of FD and cD on the Optimal Solution
Fig. 8 shows that $F_D$ and $c_D$ have almost the same effects: as $F_D$ and $c_D$ increase, r decreases and S increases. Equation (25) tells us that the larger $F_D$ and $c_D$ are, the larger $g_r^* / g_1^*$ is, so a smaller r is advised. Further, an increase of $F_D$ and $c_D$ means an increase of the delivery cost, so it is reasonable to use a large delivery batch to reduce the delivery cost.
Fig. 8 The effect of $F_D$ and $c_D$ on the optimal solution

4.7 The Effect of FH and cH on the Optimal Solution
To analyze the effects of $F_H$ and $c_H$, we change $F_H$ from 50 to 1000 and $c_H$ from 1 to 20, respectively, and obtain their effects as shown in Fig. 9: (i) as $F_H$ increases, r tends to increase; (ii) with r fixed, S increases as $F_H$ increases; (iii) a change of $c_H$ has no effect on r and S.
Fig. 9 The effect of $F_H$ and $c_H$ on the optimal solution
Noting that $g_r^* / g_1^*$ decreases as $F_H$ increases, a larger r would likely be advised. In particular, as the fixed handling cost increases, it is reasonable to increase the handling batch, so S increases when r is fixed. Further, no matter how high $c_H$ is, all the product returns have to be handled; thus a change of $c_H$ has no effect on the final solution, while the cost increases linearly.

5. Conclusions
In this paper, a centralized inventory policy is introduced for the open-loop reverse supply chain. Under a special contract, the CRC has the liberty of managing the inventories of all the LCPs, just like the VMI management mode in forward supply chains. Utilizing conclusions about the $E_k/M/1/\infty$ queueing system, a stochastic inventory model for a single-CRC, multi-LCP reverse supply chain system has been developed. Further, an algorithm is developed to find the optimal solution, and each parameter's effect on the solution has been discussed. The sensitivity analyses tell us that: (i) the larger $h_L$ or $F_H$ is, the larger the advisable r is; (ii) $h_C$, $F_D$ and $c_D$ have the opposite effects; (iii) a change of $c_H$ has no effect on the (r, S) solution; (iv) when the CRC's efficiency v is rather low, it is necessary to improve it. Some limitations of this study and further research directions include: (i) the assumption that the CRC has only one facility to handle the product returns may not fit reality, so the case of multiple servers should be discussed; (ii) we do not consider transportation constraints, which should be included in future models; (iii) the time erosion of a product's value is not considered in the model, so a penalty cost for each unit product per unit time may be introduced.
References

[1] Anindya R. How efficient is your reverse supply chain. ICFAI Press Effective Executive (Special Issue: Supply Chain Management), January 2003
[2] Blumberg DF. Strategic examination of reverse logistics and repair service requirements, needs, market size, and opportunities. Journal of Business Logistics, 1999, 20(2): 141-159
[3] Cho DI, Parlar M. A survey of maintenance models for multi-unit systems. European Journal of Operational Research, 1991, 51: 1-23
[4] Cohen MA, Nahmias S, Pierskalla WP. A dynamic inventory system with recycling. Naval Research Logistics Quarterly, 1980, 27(2): 289-296
[5] De Brito MP, Flapper SDP, Dekker R. Reverse logistics: a review of case studies. Working Paper: Econometric Institute Report EI 2002-21
[6] DeCroix GA, Zipkin PH. Inventory management for an assembly system with product or component returns. Management Science, 2005, 51(8): 1250-1265
[7] Fleischmann M, Bloemhof-Ruwaard JM, Dekker R, van der Laan E, van Nunen JAEE, Van Wassenhove LN. Quantitative models for reverse logistics: A review. European Journal of Operational Research, 1997, 103(1): 1-17
[8] Gooley TB. The who, what and where of reverse logistics. Logistics Management, 2003, 42(2): 38-42
[9] Gou QL, Liang L, Huang ZM, Xu CY. A joint inventory model for an open-loop reverse supply chain. Unpublished working paper, USTC (submitted to EJOR in Dec 2006), 2006
[10] Guide Jr VDR, Jayaraman V, Linton JD. Building contingency planning for closed-loop supply chains with product recovery. Journal of Operations Management, 2003, 21(3): 259-279
[11] Guide Jr VDR. Production planning and control for remanufacturing: industry practice and research needs. Journal of Operations Management, 2000, 18: 467-483
[12] Guide Jr VDR, Van Wassenhove LN. The reverse supply chain. Harvard Business Review, 2002, 80(2): 25-26
[13] Inderfurth K. Modeling period review control for a stochastic product recovery problem with remanufacturing and procurement lead-times. Working paper 2/96, Faculty of Economics and Management, University of Magdeburg, Germany, 1996
[14] Kiesmüller GP, Scherer CW. Computational issues in a stochastic finite horizon one product recovery inventory model. European Journal of Operational Research, 2003, 146(3): 553-579
[15] Lieb RC, Millen RA, Van Wassenhove LN. Third-party logistics services: a comparison of experienced American and European manufacturers. International Journal of Physical Distribution and Logistics Management, 1993, 23(6): 35-44
[16] Listes O, Dekker R. A stochastic approach to a case study for product recovery network design. European Journal of Operational Research, 2005, 160(1): 268-287
[17] Lund RT. Remanufacturing. Technology Review, 1984, February/March: 19-29
[18] Mabini MC, Pintelon LM, Gelders LF. EOQ type formulations for controlling repairable inventories. International Journal of Production Economics, 1992, 28: 21-33
[19] Pierskalla WP, Voelker JA. A survey of maintenance models: the control and surveillance of deteriorating systems. Naval Research Logistics Quarterly, 1976, 23: 353-388
[20] Richter K. The EOQ repair and waste disposal model with variable setup numbers. European Journal of Operational Research, 1996a, 95: 313-324

173

[21] Richter K. The extended EOQ repair and waste disposal model. International Journal of Production Economics, 1996b, 43(1-3): 443-448
[22] Rogers DS, Tibben-Lembke R. An examination of reverse logistics practices. Journal of Business Logistics, 2001, 22(2): 129-148
[23] Sun R.H., Lee J.P. The Basis of Queueing Theory. Science Press: Beijing, 2002, p.79-86 (in Chinese)
[24] Thierry M, Salomon M, van Nunen J, Van Wassenhove LN. Strategic issues in product recovery management. California Management Review, 1995, 37(2): 114-134
[25] Toffel MW. The growing strategic importance of end-of-life product management. California Management Review, 2003, 45(3): 102-129
[26] Van der Laan E, Dekker R, Salomon M. An (S, Q) inventory model with remanufacturing and disposal. International Journal of Production Economics, 1996, 46-47: 339-350


Research on Development of Modern Logistics Industry in the Area along the New Eurasian Land Bridge (China's Section) Based on Spatial Economic Relationship
Ji Shouwen, Chen Jiajuan, Xie Fang
School of Traffic and Transportation, Beijing Jiaotong University, Beijing, China, 100044

Abstract The New Eurasian Land Bridge (China's section) links China's eastern, central and western regions. Collaborative development of logistics in these regions is of great significance for lowering logistics costs and promoting regional economic coordination. The situation and characteristics of regional logistics along the New Eurasian Land Bridge in China are analyzed. Based on the structure of freight and its spatial distribution, the spatial economic relationship is established. Methods to strengthen the collaborative development of logistics along the New Eurasian Land Bridge (China's section) are then provided.
Key words Spatial economic relationship, Modern logistics, Collaborative development

1. Introduction
The New Eurasian Land Bridge (NELB) starts from Lianyungang in Jiangsu Province, takes the Longhai and Lanzhou-Xinjiang railway lines within China as its skeleton, and runs through Central Asia and European countries to the Dutch port of Rotterdam. The NELB is 4,131 km long within Chinese territory, traversing 10 provinces or regions in China's eastern, central and western areas, including Jiangsu, Shandong, Anhui, Henan, Sanxi, Shanxi, Gansu, Ningxia, Qinghai and Xinjiang, with its radiating area also reaching Hubei, Sichuan, Inner Mongolia, etc. The total area along the NELB is about 3,600,000 km2, taking up 37% of the national total; its population of about 0.435 billion accounts for 34.93% of the country's, which shows that it occupies a very important position in China's socio-economic development. The region along the NELB is among China's rapidly developing areas, and its modern logistics industry has started quickly and is taking shape. Relying on the NELB transportation corridor, a convenient regional logistics network has been built up along the route, which has played an important supporting and safeguarding role in the development of the regional economy. Developing the modern logistics industry in the areas along the route is of great significance for stimulating regional economic development along the bridge and promoting the development of the eastern, central and western regions.

2. Basic Status of Logistic Development in Area along NELB


Because the industrial structure in the area along the NELB is resource-intensive in the western section while manufacturing is concentrated in the eastern section, the logistics industry is of major consequence for the whole economic zone along the NELB. In recent years the logistics industry, relying especially on this continental transportation corridor, has achieved great progress. Developing the modern logistics industry has gradually become an important approach to promoting economic development for most cities along the NELB.
2.1 Analysis of Basic Indicators of the Logistics Industry in the Area
(1) Total Social Logistics Value
From 1995 to 2003, the total social logistics value in the area showed a strongly increasing trend, and over the same period its growth rate exceeded the average annual growth rate of the area's GDP. According to the statistical yearbooks of the provinces along the NELB, the total social logistics value in the area, which reflects the scale of logistics demand, increased from 2,200 billion RMB in 1995 to 5,800 billion RMB in 2003, an increase of 160.26% at an average annual growth rate of 12.70%, 1.4 percentage points higher than the area's GDP growth rate over the same period (11.3%).
(2) Total Social Logistics Cost
From 1995 to 2003, the total social logistics cost rose from 507.4643 billion RMB to 999.7947 billion RMB, but its proportion of GDP showed a slow downtrend, from 26.76% in 1995 to 22.35% in 2003, which shows that the efficiency of the logistics industry is increasing gradually. The proportion of total social logistics cost in GDP in 2003 was 1 percentage point higher than the national level (21%), and 10%-12% higher than that of developed countries or regions such as America, Japan and Europe, which indicates that the overall operating efficiency of the logistics system in the area still needs to be improved.
(3) Logistics Industry Added Value
From 1995 to 2003, the total social logistics added value rose from 259.8 billion RMB to 549.8 billion RMB, an increase of more than 1.12 times at an average annual rate of 9.82%, similar to that of GDP over the same period. The growing scale of the logistics added value relative to GDP indicates that, with economic development, the degree of logistics socialization is increasing continuously.
(4) Freight Volume and Freight Turnover
Since the NELB was opened, the freight volume and freight turnover in the area have risen rather quickly, which lays a solid base for logistics development in the area. In 2003, the freight volume hit 5.405 billion t, taking up 34.6% of the national total, 1.8 times that of 1992. The freight turnover reached 1,275.4 billion t-km, making up 23.7% of the national total, 2.23 times that of 1992.
In conclusion, the sustained and stable economic development in the area along the NELB objectively creates an enormous demand for the logistics industry and so boosts its accelerated development. However, the imbalance in economic development between the eastern and western regions results in an imbalance in the development of the whole logistics industry: different regions show large disparities, with a developed east and an undeveloped west.
2.2 Main Existing Issues and Restricting Factors for the Development of the Logistics Industry
Although some achievements have been made in logistics development in the area, issues such as the imbalance of economic development between east and west and the lack of coordination and cooperation in logistics development are restricting the further development of the logistics industry in the area. The main existing issues are as follows:
(1) Lack of Necessary Coordination and Cooperation in Logistics Development in the Area
(2) Lagging Construction of Logistics Infrastructure
(3) The Policy Environment Needs to Be Improved
(4) Relatively Lagging Information Development in the Area
All the above issues should be resolved by reforming the development mode of the modern logistics industry in the area along the NELB, under the leadership and guidance of the NELB Coordination Group of the State Council and the relevant national and provincial departments.

3. Analysis of Spatial Economic Relationship of the NELB (China's Section)


3.1 Analysis of Spatial Economic Relationship Based on Freight Exchange Volume
The data on freight exchange volumes from 1998 to 2003 are used for comparison and analysis in this paper to reflect the spatial economic relationship. Among them, the 2003 data are shown in Table 1.
Tab. 1 Statistics of freight flows among the provinces along the route (2003). Unit: ten thousand tons
Dispatched\Arrived | Jiangsu | Shandong | Anhui | Henan | Shanxi | Sanxi | Gansu | Ningxia | Qinghai | Xinjiang
Jiangsu  | 1372 | 76   | 310  | 482  | 297  | 111  | 90   | 15  | 46  | 53
Shandong | 1389 | 5293 | 463  | 647  | 984  | 134  | 61   | 13  | 30  | 59
Anhui    | 1891 | 140  | 3498 | 86   | 39   | 19   | 9    | 1   | 7   | 7
Henan    | 1473 | 602  | 573  | 2390 | 193  | 143  | 83   | 22  | 74  | 38
Shanxi   | 2244 | 4917 | 537  | 958  | 2685 | 141  | 63   | 30  | 33  | 13
Sanxi    | 779  | 269  | 65   | 530  | 123  | 1114 | 51   | 9   | 23  | 35
Gansu    | 164  | 41   | 29   | 134  | 45   | 286  | 1660 | 62  | 310 | 118
Ningxia  | 28   | 42   | 9    | 27   | 11   | 138  | 404  | 485 | 101 | 14
Qinghai  | 70   | 29   | 8    | 36   | 32   | 42   | 197  | 8   | 212 | 5
Xinjiang | 95   | 128  | 11   | 251  | 66   | 160  | 1295 | 36  | 85  | 1330

There are two characteristics in the statistical data from 1998 to 2003: (1) the total freight exchange volume of the provinces and regions along the NELB decreases gradually from east to west; (2) freight is mainly transported and sold within the local province. The degree of the economic relationship is identified according to the proportion that the freight dispatched from each province or region to another province or region along the NELB takes in the total freight dispatched by that province or region: if the proportion is lower than 5%, the degree of the economic relationship is low; if higher than 10%, high; and if between 5% and 10%, medium (a sketch of this rule follows). The spatial economic relationship based on the freight exchange volume obtained from this analysis is shown in Table 2.
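The classification rule itself is simple enough to state in code; a direct Python transcription follows, with illustrative argument names.

```python
def relationship_degree(dispatched_to_partner, total_dispatched):
    """Classify the economic-relationship degree by dispatch share:
    below 5% -> low (L), above 10% -> high (H), otherwise medium (M)."""
    share = dispatched_to_partner / total_dispatched
    if share < 0.05:
        return "L"
    if share > 0.10:
        return "H"
    return "M"
```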
Tab. 2 Degree of spatial economic relationship
Dispatched\Arrived | Jiangsu | Shandong | Anhui | Henan | Shanxi | Sanxi | Gansu | Ningxia | Qinghai | Xinjiang
Jiangsu  | H | L | H | H | H | L | L | L | L | L
Shandong | H | H | M | M | H | L | L | L | L | L
Anhui    | H | L | H | L | L | L | L | L | L | L
Henan    | H | H | H | H | L | L | L | L | L | L
Shanxi   | H | H | L | M | H | L | L | L | L | L
Sanxi    | H | M | L | H | L | H | L | L | L | L
Gansu    | M | L | L | L | L | L | H | L | H | L
Ningxia  | L | L | L | L | L | H | H | H | M | L
Qinghai  | H | L | L | M | M | M | H | L | H | L
Xinjiang | L | L | L | M | L | L | H | L | L | H
Notes: H = high, L = low, M = medium.

The statistical results show that the freight exchange volume between the eastern and the western sections is obviously less than that within each section. The economic development level in the eastern section is higher, with closer relationships and larger internal freight exchange; located deep inland, the mid-western section has formed trade regions centered on some big or medium cities (for instance, Xi'an, Lanzhou, Urumqi, Yinchuan, Xining, etc.). This spatial economic situation provides a good opportunity to realize the cooperative development of the modern logistics industry in the area. Each province or region, especially in the eastern and mid-western sections, can exert its economic advantages to strengthen mutual communication and cooperation, so as to optimize the whole economic layout, promote the complete upgrading of the regional economy, and then boost the further development of the modern logistics industry in the area.
3.2 Analysis of Spatial Economic Relationship Based on Logistics Transportation Structure
(1) Freight Structure
With the gradual development of the economy in the area, the logistics transportation structure has evolved from singularity to diversification. From 1995 to 2003, road transportation occupied the larger proportion of freight volume, about 75%, while railway accounted for about 15%. At the same time, railway holds an absolute advantage in freight turnover, stably keeping over 50%.
(2) Average Transportation Distance
From 1996 to 2003, the average transportation distance by road showed a slightly increasing trend, growing about 1% annually. Before 2000, the average railway transportation distance fluctuated considerably, but after 2000 it stabilized at about 650 km (meeting inter-province communication requirements), far longer than that of road (between 60 km and 65 km, mainly within a province). The overall average transportation distance grew from 185.697 km to 224.916 km, an obvious increase at nearly 3% per year. All this indicates that the spatial economic relationships in the area are increasingly close; railway is still the main ligament connecting the economies across provinces, while

road plays an auxiliary role in the spatial economic relationship, being especially suited for short-distance logistics transportation within a province[1].
(3) Transportation Intensity
The calculated results show that the transportation intensity measured by freight volume in the area is decreasing gradually year by year, while the industrial structure and product structure are developing toward higher value; the annual change in the transportation intensity based on freight turnover is relatively small, so the economic relationships between provinces are closer and closer, the mutual material volume is enlarging continuously, and the transportation distance of freight is lengthening ceaselessly. In conclusion, the spatial economic relationship in the area shows a trend of primary products flowing from west to east and finished products from east to west. With the advancement of the economic level, it can be estimated that the provinces along the route will adjust their industrial structure and layout step by step according to their respective advantages in resources and technology; the volume of logistics transportation will increase continuously, with the types of freight becoming higher-value and diversified; and, on the basis of meeting logistics requirements, logistics transportation will gradually adapt to high quality requirements such as speed, convenience, safety and economized packaging, so as to obtain the best socio-economic benefit. The increasing closeness of the economic relationship reflects the continuing enlargement of the economic and logistics demand in the area, and at the same time requires the local governments along the NELB to reinforce cooperation in the modern logistics industry at every level, to provide a powerful impetus for the sustainable development of the logistics industry and expedite the upgrading of its development level, so as to advance the core competitiveness of the whole economy in the area.

4. Methods of Coordinated Development of NELB (China's Section)


4.1 Whole Layout and Pattern of Logistic Development in the Area
Based on the spatial economic relationships in the NELB area and the "point-axis" propelled development pattern, the overall thought of the logistic development layout in the NELB area is: building the five cities of Xuzhou, Zhengzhou, Xi'an, Lanzhou and Urumchi into the cores of five big economic regions facing the NELB (the Huai-Hai economic region centered on Xuzhou-Lianyungang, the central plain economic region centered on Zhengzhou, the Guanzhong economic region centered on Xi'an, the upper Yellow River multi-nationality economic region centered on Lanzhou, and the Xinjiang economic region centered on Urumchi), and connecting the five economic regions with the Longhai-Lanxin railway and the Lianyungang-Huoerguosi expressway, to form a regional logistic development strategic layout of "two lines, six points" promoted from point to axis in the NELB area[2][3]. The whole layout of logistics development in the NELB area (China's section) is shown in Figure 1.
4.2 Development Modes of the Logistic Industry in the NELB
Differences of industrial structure and layout in the NELB area cause different spatial economic relationships between regions. These features determine that logistics along the NELB should be developed according to the conditions of each region. The development modes that could be adopted include:
(1) Constructing a Regional Network Logistic System Based on an Information Platform
(2) Constructing a Rapid NELB Logistic Channel
(3) Inducing Unions of Enterprises and Fostering Alliances of Logistic Enterprises
(4) Supporting "Dragon-Head" Enterprises and Cultivating "One-Stop" Logistic Service Ability


Fig.1 Whole Layout of Modern Logistic Development in NELB Area

References
[1] ZHAO Yi-ru. Developmental Comparison Between the Cities Along Yangtze River and Those Along the New Eurasia Continent Bridge. Areal Research and Development, 2000, 19(3): 21-25
[2] ZHU Ying-min. The Developing Location and Developing Thought of the Continental Bridge Urban Axis. Economic Geography, 2000, 20(6): 65-69
[3] LI Jun-ye. Demonstration Study on Unbalancing Development of Regional Economy. Journal of Transportation Systems Engineering and Information Technology, 2002, 2(2): 14-18


A Simulation-Based Research on Loading-Unloading Strategy in a Railway Container Terminal


Li Dong, Wang Dingwei
School of Information Science and Engineering, Northeastern University, P.R.China, 110004

Abstract With the development of the logistics trade, the number of containers transported by train is increasing rapidly, which makes it increasingly important to improve the operation efficiency of railway container terminals. In this paper, a simulation model of the load-unload process in a railway container terminal is built. By running the model, several strategies are derived. Key Words Simulation, Load-unload strategy, Railway container

1. Introduction
Railway container transportation originated in England and has become an important form of international transportation. In 2004, the railway department of China planned to build 18 railway container terminals, marking a great development of the railway container transportation trade and bringing growing interest in research on how to improve the operation efficiency of railway container terminals. Researchers have developed several approaches to the problem of container transportation. Powell and Carvalho[1] analysed the allotment strategies of transport vehicles in railway container terminals. Bostel and Dejax[2] divided the operation process in a railway container terminal into two stages: one is how to set containers, the other is the scheduling of vehicles. Kim[3] investigated the routing of cranes and applied two heuristic algorithms to the problem of routing yard-side equipment. In the past few years, with the development of simulation techniques, simulation methods have found more and more applications in research on container transportation. Gambardella[4] investigated the problem of resource allocation and scheduling of load-unloading at a quay. Emrullah[5] ascertained the bottleneck in the process of port container transportation by means of simulation and proposed a new optimization strategy. Dahal[6] built a simulation model to optimize the carrying system of a port. There has been much research on port operation, while studies on railway container transportation are fewer. In this paper, we build a simulation model to study the load-unloading strategy in a railway container terminal, and then propose three strategies. The rest of this paper is organized as follows. In the 2nd section, the layout of the container yard, the operation equipment and the load-unloading process are introduced. We describe the simulation model in the 3rd section. In the 4th section, we analyse the results of running the model and propose three strategies. We give the conclusions of this paper in the 5th section.

2. Description of Railway Container Terminal


A railway container terminal is the joint of transportation between port and inland, and between inland and inland.
2.1 Layout of Railway Container Terminal
The layout of a railway container terminal is shown in Fig.1. On one side is the railway, by which container trains enter the terminal. On the other side is the container yard, which is composed of container stacks, operation lanes and traffic lanes. In Fig.1, there are 12 stacks on which containers are piled up. Export containers are set on the first row of stacks, and import containers are set on the others. A container is loaded/unloaded to/from a container truck (CT) by a gantry crane (GC) on the operation lane, which is a one-way road, that is, it only allows CTs to move in the same direction. CTs carry containers into/out of the yard on the traffic lane (the road marked by the broken line in Fig.1).

Key Project Supported by National Natural Science Foundation of China (70431003); Innovative Research Team Project Supported by National Natural Science Foundation of China(60521003)


2.2 Load-unloading Equipment
There are three kinds of equipment in the terminal: forklifts (FL), container trucks (CT) and gantry cranes (GC). A CT carries containers. An FL loads/unloads containers from the train (CT) to a CT (the train). A GC is set at a stack and loads/unloads containers from the stack (CT) to a CT (the stack). It is necessary to emphasize that one stack is served by only one GC.
Fig. 1 Layout of a railway container terminal

2.3 Operating Process of a Railway Container Terminal
There are 3 steps of load-unloading in the terminal.
Step 1. Planning. According to the information of the container train (arriving time, the next station, the types of containers, the consignee of each container), a load-unloading plan is made, which decides which track is used by the train, which FLs, CTs and GCs are assigned to the task, and which stack positions will be occupied by containers.
Step 2. Preparing to deal with the coming containers. This includes checking the equipment (GC, CT, FL, etc.); the GCs, CTs and FLs enter their operation areas.
Step 3. Load/unloading operation. When the container train arrives, import containers are unloaded to stacks and export containers are loaded onto the train.

3. Simulation Model
We build the simulation model with the Object-Oriented programming technique.
3.1 Classes, Objects and Their Attributes
Four classes are defined: the container class, FL class, CT class and GC class. Correspondingly, four objects are defined as class arrays; the size of each array is the number of pieces of that equipment. The attributes of each object are as follows.
Attributes of CT = {isGoing, isLoaded, load, isWaiting, isOperated, workPosition, waitTime, container}
The first five attributes are Boolean variables describing the state of the object; the other three are numeric. Their meanings are: isGoing: whether the CT is moving between yard and train; isLoaded: whether the CT is loaded; load: 0 if the CT is carrying a container from train to stack, 1 otherwise; isWaiting: whether the CT is waiting for a GC or FL (if the GC and FL are busy, the CT has to wait); isOperated: whether the CT is being served by a GC or FL; workPosition: the operation position of the CT; waitTime: the accumulated waiting time; container: the next container to be operated.
Attributes of GC = {isWorking, workTime, freeTime, waitQueue}
isWorking: whether the GC is operating, a Boolean variable; workTime: the operating time, an integer variable; freeTime: the free time of the GC; waitQueue: the number of CTs waiting for the GC.
Attributes of FL = {isWorking, workTime, freeTime, waitQueue, workZone, container}
The meanings of the first four variables are the same as those of the GC. An FL can move freely, so it has two additional variables: workZone: the operating area of the FL; container: the next container to be dealt with

by the FL.
Attributes of container = {isLockLoad, isLockUnload}
isLockLoad: whether the container is locked by a CT to carry it from stack to train; isLockUnload: whether the container is locked by a CT to carry it from train to stack.
3.2 Flow of Simulation
The simulation flow is composed of four sub-flows: 1. carrying containers from train to stack; 2. empty CTs coming back to the train; 3. carrying containers from stack to train; 4. empty CTs coming back to the stack. Here we only illustrate the first sub-flow, shown in Fig.2: the container train arrives at the terminal and all kinds of equipment enter their operation areas; a CT judges whether container i has been locked by other CTs: if not, the CT locks it as the next operated container, otherwise the CT judges container i+1; the CT moves to the container to be operated; the CT judges whether an FL is free: if yes, the FL unloads the container from the train and loads it onto the CT, if not, the CT waits until an FL is free; the CT carries the container to the stack; the CT judges whether the GC is free: if yes, the GC unloads the container from the CT and sets it into the stack, if not, the CT waits until the GC is free; the CT then locks the next container to be operated.
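The class definitions above can be summarized in code. The following is a minimal sketch in Python (the paper does not state its implementation language); the field names follow the attribute lists above, while the dataclass layout and the default values are our own assumptions.

```python
from dataclasses import dataclass

@dataclass
class ContainerTruck:            # CT
    is_going: bool = False       # moving between yard and train
    is_loaded: bool = False      # carrying a container
    load: int = 1                # 0: train-to-stack trip, 1: otherwise
    is_waiting: bool = False     # waiting for a busy GC/FL
    is_operated: bool = False    # currently served by a GC/FL
    work_position: float = 0.0   # operation position
    wait_time: float = 0.0       # accumulated waiting time
    container: int = -1          # index of the next container to handle

@dataclass
class GantryCrane:               # GC (an FL would add work_zone and container)
    is_working: bool = False
    work_time: int = 0
    free_time: float = 0.0
    wait_queue: int = 0          # number of CTs queued at this crane

@dataclass
class Container:
    is_lock_load: bool = False   # locked for a stack-to-train move
    is_lock_unload: bool = False # locked for a train-to-stack move

# One object per piece of equipment, as in the class arrays described above.
cts = [ContainerTruck() for _ in range(20)]
gcs = [GantryCrane() for _ in range(12)]
```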

Fig. 2 Flow of carrying a container from train to stack

3.3 Transformation of State Variables
The transformation of the state variables is the key that decides the direction of the simulation flow; each transformation is triggered by a given event, as shown in Tab. 1. The model is time-driven: as the time steps advance, the given events occur and the state variables

will transform.

4. Simulation and Analysis of Load-unloading Strategies


4.1 Simulation Parameters
Our model is established against the background of a railway container terminal whose layout is the same as Fig. 1 shows. The arriving container train is loaded with 100 containers. There are 3 rows and 4 columns of stacks in the container yard, and every stack holds 3064 containers. The distance from the railway to the container yard is 30 m. The speed of a loaded CT is 5.6 m/s, and the speed of an empty CT is 11.2 m/s. Every stack is equipped with one GC, which takes 70 seconds to load or unload a container. FLs are deployed along the railway; an FL takes 50 seconds to load or unload a container, shorter than the GC. We simulate the process from the train arriving at the terminal to the train leaving it, during which all the import containers on the train are unloaded and carried to the stacks, and all the export containers in the stacks are carried to and loaded on the train. Our objective is to minimize the duration of this process: the shorter it is, the more container trains the terminal can serve.
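These stated parameters translate directly into simulation constants. A minimal sketch follows; the constant names and the derived round-trip time are ours, and the yard distance is assumed to be in meters, consistent with the speeds given in m/s.

```python
# Simulation constants quoted in Section 4.1 (names are our own).
N_CONTAINERS  = 100    # containers on the arriving train
YARD_DISTANCE = 30.0   # m, railway to container yard (assumed meters)
V_LOADED_CT   = 5.6    # m/s
V_EMPTY_CT    = 11.2   # m/s
T_GC_MOVE     = 70     # s per container handled by a gantry crane
T_FL_MOVE     = 50     # s per container handled by a forklift

# One loaded trip plus the empty return between train and yard:
round_trip = YARD_DISTANCE / V_LOADED_CT + YARD_DISTANCE / V_EMPTY_CT
print(f"CT round trip: {round_trip:.1f} s")  # ~8.0 s, so crane times dominate the cycle
```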
Tab. 1 Transformation of State Variables

Object     Variable        Event (0 -> 1)                                            Event (1 -> 0)
CT         isGoing         container is loaded onto or unloaded from the truck       truck arrives at the load/unload point
CT         isLoaded        container is loaded onto the truck                        container is unloaded from the truck
CT         load            all containers on the train have been carried to the      /
                           stacks, or the model assigns the truck to do so
CT         isWaiting       gantry crane or forklift is busy                          gantry crane or forklift becomes free and the truck
                                                                                     has the highest priority among the waiting trucks
CT         isOperated      truck passes from waiting to being loaded/unloaded        gantry crane or forklift finishes loading/unloading
                           by a forklift or gantry crane                             the truck
GC/FL      isWorking       asked by a truck to load or unload                        finishes loading or unloading
container  isLockedLoad    container is locked to be loaded                          /
container  isLockedUnload  container is locked to be unloaded                        /

4.2 Simulation and Its Results
In the simulation we adjust the numbers of FLs and CTs and the plan for setting containers in the yard, and study the resulting load-unloading strategies. The simulation results are listed in Tab. 2. It should be explained that in Tab. 2, when 5 FLs are in operation we list all the simulation results, while when 6, 7 or 8 FLs are employed we only give the optimal results.
4.3 Analysis of Simulation Results and Load-unloading Strategies
4.3.1 Number of CTs and FLs
In Tab. 2, when 5 FLs and 15, 20 or 25 CTs are operating in the terminal, the total operating time decreases as the number of CTs increases. But when the number of CTs is 25, the average waiting time of the CTs becomes long and the reduction of operating time is not evident. This shows that once the number of CTs reaches a certain level, if the number of FLs is not increased, the operating efficiency will not improve evidently, while the waiting time of the CTs grows. So we can draw the conclusion that for 5 FLs, employing 20 CTs is the better choice. Furthermore, even when FLs are added, the number of CTs cannot be increased arbitrarily, because too many CTs will lead to traffic jams in the container yard and overly long waiting queues at the GCs.
4.3.2 Assignment of CTs
Two strategies for assigning CTs are listed in Tab. 2. In the first, all the CTs carry all the import containers to the stacks, and then all the CTs turn to carrying export containers to the train. In the second, some CTs are assigned specially to carry export containers while the other CTs operate as in the first. The total operating time under the second strategy is shorter than under the first. The reason is that import

containers are set in different stacks from export containers, so dealing with them simultaneously means that more GCs take part in the operation; more equipment brings less operating time.
4.3.3 Assignment of Stacks
The results of distributing import containers to 1, 2, ..., 8 stacks are listed in Tab. 2. They show that setting import containers onto several different stacks can reduce the total operating time evidently. But how many stacks should be assigned for storage is related to the numbers of CTs and FLs. When 5 FLs and 15 CTs are in operation, distributing containers to 5 stacks is the optimal scheme: since there are not many CTs and FLs, if the containers are distributed to more than 5 stacks, the total operating time does not decrease much while the free time of the GCs increases. When more FLs and CTs are in operation, according to Tab. 2, more storage stacks bring shorter operating time. So we conclude that distributing import containers to more stacks is a good strategy.

5. Conclusion
We have built a simulation model for a railway container terminal. After analysing the simulation
Tab. 2 Simulation Results

No. of  No. of  How to assign CTs to tasks       How to set     Waiting time  Free time of  Free time of  Finishing
FLs     CTs                                      containers     of CTs (s)    GCs (s)       FLs (s)       time (s)
5       15      all CTs unload, then all load    to 1 stack     906           0             520           4418
5       15      all CTs unload, then all load    to 2 stacks    562           0             452           4316
5       15      all CTs unload, then all load    to 3 stacks    231           0             213           4226
5       15      all CTs unload, then all load    to 4 stacks    95            0             91            4155
5       15      all CTs unload, then all load    to 5 stacks    36            5             89            4087
5       15      all CTs unload, then all load    to 6 stacks    30            60            87            4066
5       15      all CTs unload, then all load    to 7 stacks    27            79            87            4050
5       15      all CTs unload, then all load    to 8 stacks    21            88            90            4041
5       20      all CTs unload, then all load    to 1 stack     972           0             514           4368
5       20      all CTs unload, then all load    to 2 stacks    768           0             437           4106
5       20      all CTs unload, then all load    to 3 stacks    491           0             168           3998
5       20      all CTs unload, then all load    to 4 stacks    289           0             87            3525
5       20      all CTs unload, then all load    to 5 stacks    158           2             65            3039
5       20      all CTs unload, then all load    to 6 stacks    94            11            58            2707
5       20      all CTs unload, then all load    to 7 stacks    67            15            58            2645
5       20      all CTs unload, then all load    to 8 stacks    43            14            51            2608
5       25      all CTs unload, then all load    to 1 stack     1020          0             196           4315
5       25      all CTs unload, then all load    to 2 stacks    876           0             122           4102
5       25      all CTs unload, then all load    to 3 stacks    645           0             78            3967
5       25      all CTs unload, then all load    to 4 stacks    378           0             67            3498
5       25      all CTs unload, then all load    to 5 stacks    199           0             63            3000
5       25      all CTs unload, then all load    to 6 stacks    83            3             53            2601
5       25      all CTs unload, then all load    to 7 stacks    67            5             44            2580
5       25      all CTs unload, then all load    to 8 stacks    65            8             37            2579
5       15      10 CTs unload and 5 CTs load     to 5 stacks    11            17            60            3952
5       20      12 CTs unload and 8 CTs load     to 8 stacks    15            19            36            2499
5       25      14 CTs unload and 11 CTs load    to 8 stacks    37            14            13            2287
6       26      17 CTs unload and 9 CTs load     to 8 stacks    35            7             29            2002
7       30      20 CTs unload and 10 CTs load    to 8 stacks    19            2             21            1810
8       30      20 CTs unload and 10 CTs load    to 8 stacks    17            0             27            1795


results, we get 3 strategies: 1. the number of CTs should match the number of FLs; 2. dealing with import containers and export containers simultaneously reduces the total operating time; 3. distributing import containers to different stacks is a good scheme.
References

[1] W.B. Powell, T.A. Carvalho. Real-time optimization of containers and flatcars for intermodal operations. Transportation Science, 1998, 32(2): 110-126
[2] N. Bostel, P. Dejax. Models and algorithms for container allocation problems on trains in a rapid transshipment shunting yard. Transportation Science, 1998, 32(4): 370-379
[3] Ki Young Kim, Kap Hwan Kim. Heuristic algorithms for routing yard-side equipment for minimizing loading times in container terminals. Naval Research Logistics, 2003, 50
[4] Gambardella L M, Mastrolilli M, Rizzoli A E. An optimization methodology for intermodal terminal management. Journal of Intelligent Manufacturing, 2001, 12(5): 521-534
[5] Emrullah D. Simulation model and analysis of a port investment. Simulation, 2003, 79(2): 94-105
[6] Dahal K, Galloway S, Burt G. A port system simulation facility with an optimization capability. International Journal of Computational Intelligence and Applications, 2003, 3(4): 395-412


A Fuzzy Evaluation Method of Integrated Logistics Service Networks


Zhao Zhiyan, Li Bo
School of Management, Tianjin University, Tianjin, P.R.China, 300072

Abstract This paper presents a simple and effective approach to evaluating integrated logistics service networks. Considering three logistics service networks, fuzzy language variables are introduced to represent the values of the criteria, and the weights of the assessment criteria are obtained by averaging the evaluations of the experts or decision makers. Then the likelihood degree of trapezoidal fuzzy numbers is defined and the fuzzy complementary judgment matrix is constructed. Thus, the rankings of the logistics service networks under each criterion and under the synthesized criteria are both calculated. Finally, a simulation example is given to show the validity and feasibility of the method. Key words Integrated logistics service network, Multi-mode and multi-service, Trapezoidal fuzzy numbers, Evaluation

1. Introduction
After joining the WTO, more and more enterprises in China face the serious competition of globalization. Global operation increases logistics cost and complexity, and at the same time greatly increases uncertainty, which results from longer distances and lead times. These challenges promote the development of efficient and effective global logistics within a supply chain strategy. Of course, the challenges of global logistics vary significantly in different regions of the world. The focus is the design and development of the logistics system, and one of the best and most effective ways is the integration of logistics service systems; for example, the application of multi-mode, multi-service networks is more and more popular. The problem of how to design such a logistics model has been discussed in many papers, and many authors have analyzed the multi-mode, multi-service logistics network. Ahuja et al. (1993)[1] discussed this logistics network using mixed integer programming, but they only considered small network systems; as the network size increases, the problems become more difficult to solve. Crainic (2000)[2] offered detailed, cost-minimizing operating logistics plans, and Armacost (2000)[3] developed solution techniques to route aircraft and express items in the UPS network. These papers, however, only studied transportation at a single service level and did not discuss multiple service levels. Because of the complexity of the multi-mode, multi-service network, the relationship between logistics cost and service level is generally inverse; that is, the logistics cost is reduced by the integration of multi-mode, multi-service logistics, but the service level cannot always be improved and may, on the contrary, decrease. With a shorter planning horizon and an overall objective of minimizing inventory in the value chain, transportation has become a critical factor in the distribution process, so Jonah C. Tyan and Fu-Kwun Wang[4] discussed an evaluation of freight consolidation policies in global third-party logistics. There has been much discussion of how a company gains competitive advantage through integrated management; integrated management is a trend. But what strategy should the enterprises in China that operate a single mode of logistics service take? This paper discusses this problem based on three operation modes: (1) non-integrated networks; (2) facility-integrated networks; (3) fully integrated networks. Firstly, the evaluation attributes are put forward, and then the weights of the assessment attributes and the performance indices of the three kinds of logistics model under each single attribute are obtained. Based on the concept of the possibility degree of trapezoidal fuzzy numbers, the fuzzy complementary judgment matrix is constructed. Then, the rankings of the logistics models under every attribute and overall are both calculated. Finally, a simulation example is given to show the validity and feasibility of the proposed method.

2. The three logistics service networks


The traditional network is single-mode, single-service; Fig.1(a) displays this distribution network, which has separate logistics service routes. Items first travel in delivery vans or trucks from the customers

This research has been supported by National Natural Science Funds of China (No. 70572045).


along local delivery tours to the nearest regional consolidation terminal; these consolidation terminals consolidate cargo within the region to the breakbulk terminal for efficient long-haul transportation. All breakbulk terminals serve as hubs, although for smaller percentages of the total network volume. In non-integrated networks, air and ground network operations are run independently. The scheduling methods for this type of network are relatively simple and easy, but it is costly.

Fig.1 Single mode, single service network (a) and partly integrated networks (b) (facilities: consolidation terminal, breakbulk terminal, main hub; routes: local, access, long-haul)

However, if we adopt the integrated network approach, cost is reduced at the price of higher complexity. When the facilities are integrated while the routing remains independent, we obtain facility-integrated networks: a customer can be served by the same, closer terminal, although the routing (such as ground and air) remains independent, so cost can be reduced by decreasing the number of terminals. If facilities and routes are integrated simultaneously, we obtain fully integrated networks. In this case the pickup and delivery tours shorten as the density of customers increases; see Fig 1(b). As the figure shows, multiple-route tours between breakbulk terminals are removed to avoid an over-capacitated network, and main hubs are introduced to allow the long-haul vehicles to accumulate higher volumes.

3. The Fuzzy Evaluation Method of Integrated Logistics Service Networks


The effective management of logistics operation requires the establishment of a framework for performance assessment. In order to evaluate the above three logistics networks, this paper develops a new approach that compares and analyzes the different logistics modes and levels on the basis of a fuzzy evaluation method. The evaluation process is shown in Fig. 2. It makes the analysis and comparisons among the different alternatives (the different logistics networks Ai (i = 1, 2, ..., m)) under the criteria Cj (j = 1, 2, ..., n); the better alternative is then found through the calculation and evaluation process. Firstly, the evaluators make subjective evaluations in the form of language variables, by questionnaire, to determine the weight vector W = (w1, w2, ..., wj, ..., wn) and the judgment matrix X = {xij, i = 1, 2, ..., m; j = 1, 2, ..., n}. The weight vector represents the levels of importance of the evaluation criteria Cj. The judgment matrix X represents the level of the evaluation criterion Cj attained by each logistics network Ai. Given the weight vector and the judgment matrix, the goal is to determine the ranking vector of the different logistics networks, considering the performance ranking under each single criterion and under the whole set of criteria.

Fig. 2 The fuzzy evaluation structure of integrated logistics service networks (define criteria, determine weights, evaluate with language variables and trapezoidal fuzzy numbers, build the fuzzy complementary judgment matrix, and obtain the ranking vectors under the single criteria and the whole set of criteria)

Firstly, we consider obtaining the ranking vector of the different logistics networks under a single criterion. One concept, the degree of likelihood that trapezoidal fuzzy numbers satisfy a >= b, is introduced in Definition 1 [6].
Definition 1: Assume a = (a1, a2, a3, a4) and b = (b1, b2, b3, b4) are trapezoidal fuzzy numbers. The degree of likelihood of a >= b is

\[
V(a \ge b) = \lambda \max\left\{1-\max\left\{\frac{b_2+b_3-2a_1}{(a_2+a_3-2a_1)+(b_2+b_3-2b_1)},\,0\right\},\,0\right\}
+ (1-\lambda)\max\left\{1-\max\left\{\frac{2b_4-a_2-a_3}{(2a_4-a_2-a_3)+(2b_4-b_2-b_3)},\,0\right\},\,0\right\} \tag{1}
\]

where λ is the measure of the decision maker's attitude toward risk. If λ = 1, the decision maker is an optimist and a risk seeker; if λ = 0, the decision maker is a pessimist and a risk avoider; if 0 < λ < 1, it measures the intermediate risk preference of the decision maker. The elements of column j of the judgment matrix X are the evaluation values of the different logistics networks Ai (i = 1, 2, ..., m) under the criterion Cj. The likelihood-degree matrix is then constructed by comparing the elements of column j pairwise according to Definition 1, as shown in formula (2).

\[
P^j=\begin{pmatrix}
V(x_{1j}\ge x_{1j}) & V(x_{1j}\ge x_{2j}) & \cdots & V(x_{1j}\ge x_{mj})\\
V(x_{2j}\ge x_{1j}) & V(x_{2j}\ge x_{2j}) & \cdots & V(x_{2j}\ge x_{mj})\\
\vdots & \vdots & \ddots & \vdots\\
V(x_{mj}\ge x_{1j}) & V(x_{mj}\ge x_{2j}) & \cdots & V(x_{mj}\ge x_{mj})
\end{pmatrix} \tag{2}
\]

By the definition of the likelihood degree, the matrix P^j constructed in this way must be a fuzzy complementary judgment matrix. From the likelihood-degree matrix, the ranking vector of the performance of the different logistics networks under evaluation criterion j can be developed, as shown in formula (3).

\[
\omega_i^j=\frac{1}{m}\left(\sum_{l=1}^{m}P_{il}^{j}+1-\frac{m}{2}\right),\quad i=1,2,\ldots,m \tag{3}
\]

Therefore, the ranking vector W^j = {ω_i^j (i = 1, 2, ..., m)} (j = 1, 2, ..., n), which gives the ranking of the different logistics networks under the evaluation criterion Cj, can be calculated from formulas (2) and (3).
By the same method, the ranking vectors of the different logistics networks can be obtained for each evaluation

criterion. Then we can easily find the strengths and weaknesses of every logistics network under each single criterion. In order to solve the problem of the final ranking of the different logistics networks, a synthesis ranking vector is computed. The evaluation matrix is obtained by multiplying the weight vector W into the judgment matrix X, as shown in formula (4).

\[
Z=\begin{pmatrix}
w_1x_{11} & w_2x_{12} & \cdots & w_nx_{1n}\\
w_1x_{21} & w_2x_{22} & \cdots & w_nx_{2n}\\
\vdots & \vdots & \ddots & \vdots\\
w_1x_{m1} & w_2x_{m2} & \cdots & w_nx_{mn}
\end{pmatrix} \tag{4}
\]

The synthesis evaluation vector z = {zi (i = 1, 2, ..., m)} is given by the row sums of Z. By applying formulas (2) and (3) to the evaluation vector z, the synthesis ranking vector W^z = {ω_i^z (i = 1, 2, ..., m)} of the different logistics networks can be obtained. Therefore, the best logistics network can be chosen by comparing the components of the synthesis ranking vector.
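To make the procedure of formulas (1)-(3) concrete, the following is a minimal Python sketch of the likelihood degree and the ranking vector; the function names and the λ = 0.5 setting are our own choices, not prescriptions of the paper.

```python
# Sketch of formulas (1)-(3): likelihood degree of trapezoidal fuzzy numbers
# and the priority vector of the resulting fuzzy complementary judgment matrix.

def likelihood(a, b, lam=0.5):
    """V(a >= b) for trapezoids a=(a1,a2,a3,a4), b=(b1,b2,b3,b4); formula (1)."""
    a1, a2, a3, a4 = a
    b1, b2, b3, b4 = b
    left = max(1 - max((b2 + b3 - 2*a1) /
                       ((a2 + a3 - 2*a1) + (b2 + b3 - 2*b1)), 0), 0)
    right = max(1 - max((2*b4 - a2 - a3) /
                        ((2*a4 - a2 - a3) + (2*b4 - b2 - b3)), 0), 0)
    return lam * left + (1 - lam) * right

def ranking(column):
    """Formulas (2)-(3): pairwise likelihood matrix P, then its priority vector."""
    m = len(column)
    P = [[likelihood(a, b) for b in column] for a in column]
    return [(sum(P[i]) + 1 - m / 2) / m for i in range(m)]

# Column C2 (speed) of the judgment matrix from Tab. 2 below:
speed = [(73, 78, 86, 95), (51, 60, 69, 82), (65, 72, 79, 85)]
print([round(w, 4) for w in ranking(speed)])  # network A ranks first under C2
```

The exact numbers depend on λ, which the paper does not report, but the ordering A > C > B under the speed criterion agrees with Table 3.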

4. Simulation Example
In order to prove the effectiveness of the proposed evaluation method, the three logistics networks mentioned above are evaluated. Firstly, we summarize the six criteria of the evaluation: (1) facility utility; (2) speed; (3) cost minimization; (4) service quality; (5) reliability; (6) efficiency. Trapezoidal fuzzy numbers are adopted to express the values of the language variables. In the evaluation process, the language variable sets {Extra Important (EI), Very Important (VI), Important (I), Some Important (SI), Not Important (NI)} and {Very Good (VG), Good (G), Fair (F), Poor (P), Very Poor (VP)} respectively represent the importance levels of the evaluation criteria and the performance levels under each criterion. The evaluations expressed by language values are first transformed into trapezoidal fuzzy numbers[5]. The relevant weights of the criteria are defined by the experts and the decision makers of logistics enterprises through the questionnaire designed for this research; the weight vector W is obtained by averaging their evaluations, as shown in Table 1. Further, the judgment matrix X is computed as the weighted average of the evaluation values, as shown in Table 2.
Tab. 1 The criteria and their weights for the three logistics networks

Criterion  Facility utility  Speed          Cost           Service quality  Reliability    Efficiency
           C1                C2             C3             C4               C5             C6
Weight     (47,53,62,68)     (52,61,67,73)  (75,82,90,97)  (76,87,94,96)    (56,62,71,76)  (66,74,85,91)

Tab. 2 The judgment matrix for the three logistics networks

Network  Facility utility  Speed          Cost           Service quality  Reliability    Efficiency
         C1                C2             C3             C4               C5             C6
A        (64,69,75,82)     (73,78,86,95)  (60,66,72,78)  (64,71,79,83)    (65,75,83,90)  (68,76,81,87)
B        (68,75,86,92)     (51,60,69,82)  (61,65,73,79)  (59,64,71,77)    (75,80,89,93)  (75,80,87,95)
C        (70,79,85,91)     (65,72,79,85)  (58,62,68,77)  (54,61,66,71)    (44,47,52,59)  (60,71,78,82)
The synthesis ranking vector of the three logistics networks under the six criteria, and the single ranking vectors of the three networks under each criterion, are obtained by applying formulas (1) to (3) to the weight vector W and the judgment matrix X from Table 1 and Table 2; they are shown in Table 3.
Tab. 3 The ranking vectors of the three networks under the six criteria (rank in parentheses)

Network  C1         C2         C3         C4         C5         C6         Synthesis
A        0.1971(3)  0.4782(1)  0.3542(2)  0.4755(1)  0.371(2)   0.3284(2)  0.3685(1)
B        0.3876(2)  0.1067(3)  0.3688(1)  0.3198(2)  0.4623(1)  0.4701(1)  0.3606(2)
C        0.4153(1)  0.3351(2)  0.277(3)   0.2047(3)  0.1667(3)  0.2016(3)  0.2709(3)


From Table 3 we can see that A (the fully integrated network) is the best. The difference between B and A at the synthesis level is not large (just 0.0079), but there is a big difference between A, B and C (about 0.19). Analyzing the reason: B has separate routes while A has integrated ones, so B can provide high-quality service but at a high expense, and the total cost of B is higher than that of A. Here we suppose that A can realize the best operation process and has the higher management level. For the logistics enterprises in China that operate in single-mode, single-service fashion, the distance from C to A is long because of the poor management level; but they can first adopt partly integrated networks to increase efficiency and reduce total cost, and, as Table 3 shows, partial integration such as the facility-integrated network can also realize higher profit and create value.

5. Conclusion
The effective management of logistics networks requires the establishment of a framework for performance assessment. Such a framework provides the mechanism to evaluate the system performance of different kinds of logistics networks. This paper presents a simple and effective approach to evaluating integrated logistics networks by applying language-variable criteria and the relevant ranking theory of trapezoidal fuzzy numbers. Finally, a simulation example is given to show the validity and feasibility of the method.
References

[1] Ahuja, R., Magnanti, T., and Orlin, J. Network flows: theory, algorithms and applications. Prentice-Hall, Inc., Englewood Cliffs, N.J. 1993
[2] Crainic, T. Service network design in freight transportation. European Journal of Operational Research, 2000, 122(2): 272-288
[3] Armacost, A. Composite variable formulations for express shipment service network design. Ph.D. dissertation, Massachusetts Institute of Technology. 2000
[4] Jonah C. Tyan, Fu-Kwun Wang. An evaluation of freight consolidation policies in global third party logistics. Omega, 2003, 31(1): 55-62
[5] Chung-Hsing Yeh, Yu-Liang Kuo. Evaluating passenger services of Asia-Pacific international airports. Transportation Research, 2003, (7): 35-48
[6] Xu Zeshui. A method for priorities of triangular fuzzy number complementary judgment matrices. Fuzzy Systems and Mathematics, 2002, 16(1): 47-50 (in Chinese)
[7] Xu Zeshui. A method for priority of fuzzy complementary judgment matrix. Journal of Systems Engineering, 2001, 16(4): 311-314 (in Chinese)


Formulation and Complexity Analysis for 3PL Transportation Problems


Li Kunpeng
School of Management, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, P.R.China

Abstract Nowadays it is popular in many industries to outsource the transportation and distribution of finished products to a third party logistics (3PL) provider. In order to shorten the response time from order receipt to delivery, and to improve on-time delivery accuracy, manufacturing scheduling and transportation scheduling should be considered from a synchronization perspective. Thus, for 3PL transportation decisions, production capacity is an important constraint. In this paper, 3PL transportation problems are classified and a formulation of each sub-problem is presented. Furthermore, the computational complexity of each problem is investigated. Keywords 3PL, Transportation, Synchronization, NP-completeness

1. Introduction
Many companies are enhancing their competitiveness by offering Just-in-Time (JIT) delivery. Costs or penalties are incurred by delivering an order either earlier or later than the customer's due date. Besides, maintaining a short response time from order acceptance to final delivery is one of the key competitive advantages. Thus, many companies deliver products to customers directly after production, without holding finished-product inventory. This is particularly true for industries with short product life cycles, such as consumer electronics manufacturing, ready-mix concrete supplying and the food catering industry (Garcia et al. 2004[1]; Chen and Vairaktarakis, 2005[2]; Li et al., 2005[3]). In order to improve customer service and reduce production and transportation costs, the scheduling of assembly manufacturing and transportation should be synchronized. Nowadays, due to the professional services provided by third party logistics (3PL) providers, it is more efficient to outsource transportation or distribution to a 3PL. There are two types of operations. If the 3PL serves only one customer, the schedule of the 3PL's vehicles is determined by the order completion times in manufacturing; this type of operation is typical of 3PLs that provide road transportation services. On the other hand, if the 3PL provides services to more than one manufacturer, the departure and arrival times of the vehicles are determined by the 3PL rather than by the manufacturer. In addition, the unit transportation cost of each vehicle varies. The manufacturer can book capacity on the available vehicles accordingly; then, a decision on the allocation of orders to the vehicles is made so as to utilize the booked capacity efficiently. The typical case is the air-cargo transportation service provided by cargo airlines. Chen and Vairaktarakis (2005)[2] addressed the integrated production-transportation scheduling problem considering the first type of 3PL operation mentioned above; their objective is to find a joint schedule of production and distribution that optimizes an objective function containing the total distribution cost and the impact of the maximum/average delivery time. The second type of 3PL operation, conversely, is the one considered in this paper. Li et al. (2005, 2006)[3][4] formulated the problem of synchronized scheduling of assembly manufacturing and air transportation to minimize a total cost consisting of delivery earliness/tardiness costs and air transportation costs; the air transportation problems studied there are polynomially solvable by mathematical LP solvers. Motivated by the above applications, this paper studies the second type of 3PL transportation problem in consideration of synchronization with production. In other words, production constraints should be included in the model of the 3PL transportation problem. In this research, the production facility is assumed to be a single machine, so the production constraint to be considered is production capacity. According to whether orders may be split in transportation, two different problems can be formulated. The computational complexity of the problem is then investigated.


2. Assumptions
The 3PL transportation allocation problems formulated in this work are based on the following assumptions:
(1) Transportation allocation decisions are made for the orders accepted in previous planning periods.
(2) All packed products have the same or similar dimensions.
(3) Business processing time and cost, together with loading time and loading cost for each vehicle, are included in the transportation time and transportation cost.
(4) The vehicle departure time is the time at which the 3PL's vehicle sets out from the manufacturer's plant; the vehicle arrival time is the time at which the vehicle reaches the customer.
(5) Orders released into the production facility in a planning period are delivered within the same planning period, i.e., there are no production backlogs.
(6) The assembly flow shop is a single machine, which can process only one part at a time; there are no machine breakdowns or preemptions.
(7) The total manufacturing time of an order is directly proportional to the order's quantity.
(8) Waiting penalties for orders before transportation are order-independent, i.e., they are not determined by any job characteristics.
(9) The starting time of the planning period is set equal to zero.

3. Problem Formulation
The 3PL provides several available vehicles in the planning period. The departure time, arrival time, available capacity and transportation cost of each vehicle are determined at the beginning of the planning period. In this section, the 3PL transportation allocation problem is formulated as an Integer Linear Programming (ILP) model that allocates the right orders to the right vehicles so as to minimize the delivery cost, which consists of delivery earliness penalty costs, delivery tardiness penalty costs and transportation costs. Synchronization is incorporated into the ILP model by the constraint that balances the total production rate with the transportation allocation. The following notation is defined:
i  the order index, i = 1, 2, ..., N
f  the vehicle index, f = 1, 2, ..., F
l  the machine index, l = 1, 2, ..., L
k  the destination index, k = 1, 2, ..., K
Mi  order i's destination
Mf  vehicle f's destination
Df  the departure time of vehicle f at the manufacturing plant
Af  the arrival time of vehicle f at the destination
Cf  the transportation cost per unit of product transported by vehicle f
Hf  the available capacity of vehicle f
Qi  the quantity of order i
αi  the delivery earliness penalty cost (per unit per hour) of order i
βi  the delivery tardiness penalty cost (per unit per hour) of order i
di  the due date of order i
Eif  the per-unit delivery earliness penalty cost for order i when transported by vehicle f:

\[
E_{if}=\max(0,\,d_i-A_f)\cdot\alpha_i \tag{1}
\]

Lif  the per-unit delivery tardiness penalty cost for order i when transported by vehicle f:

\[
L_{if}=\max(0,\,A_f-d_i)\cdot\beta_i \tag{2}
\]

Xif  1 if order i is allocated to vehicle f, 0 otherwise
Rl  the production rate of machine l
B  a large positive number

|B|  the absolute value of B
The ILP model for the 3PL transportation allocation problem is expressed as follows:

\[
\min \;\; \sum_i\sum_f C_f X_{if} Q_i + \sum_i\sum_f X_{if} Q_i\,(E_{if}+L_{if}) \tag{3}
\]

subject to:

\[
\sum_f X_{if} = 1 \quad \text{for all } i \tag{4}
\]
\[
B \cdot X_{if} \cdot |M_i - M_f| < 1 \quad \text{for all } i, f \tag{5}
\]
\[
\sum_i X_{if} Q_i \le H_f \quad \text{for all } f \tag{6}
\]
\[
\sum_{f'=1}^{f}\sum_i X_{if'} Q_i \le D_f \sum_{l=1}^{L} R_l \quad \text{for all } f \tag{7}
\]
\[
X_{if} \in \{0,1\} \quad \text{for all } i, f \tag{8}
\]
The decision variable Xif is binary. The objective is to minimize the overall delivery cost, which consists of the total transportation cost, the total delivery earliness penalty cost and the total delivery tardiness penalty cost. Constraint (4) ensures that each order is allocated to exactly one vehicle. Constraint (5) ensures that if order i and vehicle f have different destinations, order i cannot be allocated to vehicle f. Constraint (6) ensures that the capacity of vehicle f is not exceeded. Constraint (7) ensures that the allocated orders do not exceed the production capacity, i.e., that the allocated quantity can be supplied by the production capacity available before each departure. The output of the ILP model is the allocation of orders to vehicles, which determines each order's transportation departure time. According to the fourth assumption in Section 2, the transportation departure time of an order is taken as the assembly due date of the order.
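For illustration, the model (3)-(8) can be stated almost verbatim in an off-the-shelf modeller. Below is a minimal sketch using Python's PuLP package rather than the Lingo software used in the paper; all data values are hypothetical, and constraint (5) is posted in its equivalent form X_if = 0 whenever destinations differ.

```python
import pulp

# Hypothetical instance: 3 orders, 2 vehicles, one machine (L = 1).
N, F = 3, 2
Q  = [100, 80, 120]            # order quantities
d  = [10.0, 14.0, 20.0]        # due dates (h)
Mi = [0, 0, 1]                 # order destinations
Mf = [0, 1]                    # vehicle destinations
Df = [8.0, 16.0]               # departure times (h)
Af = [10.0, 18.0]              # arrival times (h)
C  = [55.0, 60.0]              # unit transportation costs
H  = [400, 500]                # vehicle capacities
alpha, beta, R = [4.0]*N, [6.0]*N, 40.0   # penalties and production rate

E = [[max(0.0, d[i]-Af[f])*alpha[i] for f in range(F)] for i in range(N)]  # (1)
L = [[max(0.0, Af[f]-d[i])*beta[i]  for f in range(F)] for i in range(N)]  # (2)

prob = pulp.LpProblem("three_pl_allocation", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (range(N), range(F)), cat="Binary")

prob += pulp.lpSum((C[f] + E[i][f] + L[i][f]) * Q[i] * x[i][f]
                   for i in range(N) for f in range(F))          # objective (3)
for i in range(N):
    prob += pulp.lpSum(x[i][f] for f in range(F)) == 1           # (4)
    for f in range(F):
        if Mi[i] != Mf[f]:
            prob += x[i][f] == 0                                 # (5), big-B form
for f in range(F):
    prob += pulp.lpSum(Q[i]*x[i][f] for i in range(N)) <= H[f]   # (6)
    prob += pulp.lpSum(Q[i]*x[i][fp] for i in range(N)
                       for fp in range(f+1)) <= Df[f]*R          # (7)

prob.solve()
print([(i, f) for i in range(N) for f in range(F) if x[i][f].value() == 1])
```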

4. Computational Complexity Analyses


The complexity of the 3PL non-split transportation problem is investigated by computational experiments. Table 1 shows the experiment design. The planning period is set to 24 hours. To simplify the experiments, all orders and vehicles are assumed to have an identical destination, and the transportation time of each vehicle from origin to destination is 2 hours. For each problem, N is set equal to 5F. The problem sizes start from 15j3f.
Tab. 1 Experimental design used in random problem generation

Problem parameter                            No. of classes  Values
Number of orders N                           4               15, 20, 25, 30
Number of vehicles F                         4               3, 4, 5, 6
Order quantity (Qi)                          1               Uniform[50, 200]
Order due date (di)                          1               Uniform[1, 6]*(Pi+Ti)
Order delivery earliness penalty cost (αi)   1               Uniform[3, 5]
Order delivery tardiness penalty cost (βi)   1               Uniform[5, 8]
Vehicle departure time (Df)                  1               FN*24/F
Vehicle capacity (Hf)                        1               Uniform[300, 1000]
Per-unit transportation cost (Cf)            1               Uniform[50, 70]
Production rate (PR)                         1               TQ(a) * Uniform[1.2, 2] / 24
Instances/configuration                      5

(a) TQ is the total quantity, which equals the sum of all the orders' quantities.

The departure time of each vehicle is generated by assigning identical time intervals between the departures of adjacent vehicles. The total number of vehicles is denoted by F. Each vehicle is assigned a vehicle number FN,

which runs from 1 to F. Each vehicle's departure time is then generated by the formula 24*FN/F. Order due dates are drawn from a uniform distribution. The earliest delivery time of an order is the sum of its assembly processing time and the transportation time; therefore, the range for an order's due date is between Pi+Ti and 6(Pi+Ti), where Pi is the order's assembly processing time and Ti is the transportation time. Once the quantity of each order of a given instance is generated, the value of the production rate (per hour) is generated from the discrete uniform distribution PR = TQ * Uniform[1.2, 2]/24, where TQ is the total quantity, i.e., the sum of all the orders' quantities. Five instances are generated for each test problem. Lingo 8.0 software is used to construct the mathematical programming models. All the tested problems are solved on a Pentium 4, 2.4 GHz computer with 256 MB RAM. Table 2 shows the computational time needed to obtain the optimal solution for each instance of the four problems with Lingo 8.0.
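As a hedged illustration of the generation scheme just described, the following sketch draws one random instance. The parameter ranges are those of Table 1, while the function itself and the uniform choice of the processing time Pi are our own assumptions (the paper does not state how Pi is generated).

```python
import random

def generate_instance(F, proc_time=1.0, trans_time=2.0):
    """One random test instance following the scheme described above (N = 5F)."""
    N = 5 * F
    Q  = [random.randint(50, 200) for _ in range(N)]            # order quantities
    # Due dates in [Pi+Ti, 6(Pi+Ti)]; Pi is assumed given per order.
    d  = [random.uniform(1, 6) * (proc_time + trans_time) for _ in range(N)]
    a  = [random.uniform(3, 5) for _ in range(N)]               # earliness penalties
    b  = [random.uniform(5, 8) for _ in range(N)]               # tardiness penalties
    Df = [24.0 * fn / F for fn in range(1, F + 1)]              # evenly spaced departures
    H  = [random.randint(300, 1000) for _ in range(F)]          # vehicle capacities
    C  = [random.uniform(50, 70) for _ in range(F)]             # unit transport costs
    PR = sum(Q) * random.uniform(1.2, 2) / 24                   # production rate per hour
    return Q, d, a, b, Df, H, C, PR

random.seed(0)
print(generate_instance(F=3)[4])  # departure times for a 15j3f instance: [8.0, 16.0, 24.0]
```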
Tab. 2 Computational times for the tested 3PL transportation problems

Problem   Instance 1  Instance 2  Instance 3  Instance 4  Instance 5  Average
15j3f(a)  1s          0(b)        0           0           1s          0.4s
20j4f     0           0           0           0           1s          0.2s
25j5f     1s          6s          2s          2s          2s          2.6s
30j6f     6m46s       9h19m9s     >12h        >12h        >12h        /(c)

(a) 15j3f indicates that the problem consists of 15 jobs and 3 vehicles with one destination.
(b) As shown in the Lingo solver status panel, the solving time is 00:00:00.
(c) The average computational time cannot be obtained, as the computational times of the last three instances are unknown.

It is observed that for small sized problems, i.e., the first three problems, the computational time to obtain the optimal solution is trivial. However, the computational time increases significantly for most of the instances of problem 30j6f. For problem 30j6f, instance 1 has the smallest computational time of 6 minutes and 46 seconds according to Table 2. Then, the computational time increases significantly to 9 hours for instance 2. Furthermore, for each of the three remaining instances, the computational time is more than 12 hours. After 12 hours, the computation of Lingo is terminated due to the unacceptably long computational time, as a production shift is normally less than 12 hours. These results clearly demonstrate the computational intractability of the 3PL transportation problem studied in this paper.

5. Conclusions
This paper studied the 3PL transportation problem in the context of supply chain synchronization. The mathematical formulation of the problem was presented, and a numerical experiment was conducted to investigate its computational complexity. The computational results indicate that this problem is NP-complete.
References

[1] Garcia, J.M., Lozano, S. and Canca, D. Coordinated scheduling of production and delivery from multiple plants. Robotics and Computer-Integrated Manufacturing, 2004, 20(3): 191-198
[2] Chen, Z.L. and Vairaktarakis, G.L. Integrated scheduling of production and distribution operations. Management Science, 2005, 51(4): 614-628
[3] Li, K.P., Ganesan, V.K. and Sivakumar, A.I. Synchronized scheduling of assembly and multi-destination air transportation in consumer electronics supply chain. International Journal of Production Research, 2005, 43(13): 2671-2685
[4] Li, K.P., Ganesan, V.K. and Sivakumar, A.I. Scheduling of single stage assembly with air transportation in a consumer electronics supply chain. Computers & Industrial Engineering, 2006, 51: 264-278


Study of the Partial Task Management under Constrained Resources


Li Mingyu, Bi Yiming, Li Bin
Xi'an Research Inst. of Hi-Tech, HongQing Town, Xi'an, P.R.China, 710025

Abstract This paper presents a partial management technique (PMT) that decides which tasks should be assigned to the same resource without explicitly binding these tasks to a particular resource. Our method simplifies the management and scheduling steps while imposing little or no penalty on the final solution quality. The technique is especially suited to problems with several different resource constraints. Our method does not cluster tasks into a new task, as typical clustering techniques do, but specifies which tasks need to be executed on the same processor. Our experiments have shown that PMT, which may produce nonlinear groups of tasks, gives better results than linear clustering when multi-resource constraints are present. Linear clustering has been proved optimal, compared to all other clusterings, for problems with timing constraints only; in this paper we show that, if used for a multi-resource synthesis problem, linear clustering produces inferior solutions. Keywords Constrained resources, Task management, Scheduling

1. Introduction
In this paper, we present a method called the partial management technique (PMT), which is able to improve the efficiency of management and scheduling as well as the quality of the final results. The problem considered has not only timing constraints but also code and data memory constraints. These constraints define memory requirements for tasks and communications and therefore influence task management and scheduling decisions. The data memory aspect of embedded systems is especially important for signal and image processing applications, which deal with enormous amounts of data. The aim of this work is to develop efficient techniques for an embedded system synthesis tool that accepts a system architecture description and an application specification given as a task graph, and produces constraints that reduce the complexity of management and scheduling. PMT uses this task graph, generated from the original specification; therefore a fine-grain task graph can be used, which gives full optimization potential but possibly long run-times of synthesis tools. Our approach avoids deadlocks, since it does not cluster tasks but constrains task management within each group of tasks.

2. Description of the MATAS system


The MATAS synthesis system [8] performs both task and communication management and scheduling. It considers timing constraints as well as code and data memory constraints. The goal of the system is to find a (near) optimal solution with respect to schedule length while fulfilling all constraints. The synthesis is done within a constraint programming framework, as presented in Algorithm 1. The algorithm tries to use different resources, such as time, code memory and data memory, evenly. Decisions are made based on estimates of the future use of these resources. Both the time and data metrics, which are used to choose the next task to schedule, reflect the usage of those resources in relative terms. Often the next task will increase either the critical path length or the data memory usage, and therefore it is important to know which resource is currently more used and act accordingly. Since the algorithm is constraint-driven, the result of every decision directly propagates to all finite domain variables (FDV) and constraints. This makes the implementation of different search heuristics easier and less error-prone.

Algorithm 1. The general idea of the MATAS algorithm
R <- tasks without predecessors
while R != {} do
    select T_time with minimal mobility
    select T_data with greatest sum_{O in S(T)} O.size() - sum_{I in P(T)} I.size()
        {S(Task) denotes all data produced by the task}
        {P(Task) denotes all data consumed by the task}
        {size() denotes the size of the given data}
    if metrics_data > metrics_time then nextTask = T_data else nextTask = T_time
        {the task which decreases usage of the more utilized resource is chosen}
    for all processors which can execute nextTask do
        find minimum S_nextTask
        compute time usage factor tu for nextTask
        compute code usage factor cu for nextTask
        compute data usage factor du for nextTask
    choose processor P_min which minimizes tu + cu + du
    schedule nextTask on P_min
    schedule all incoming communications and reserve communication buffers
    R <- R \ {nextTask} U {tasks with all input data available}

The presented partial management technique is an extension of the prototype design system MATAS, as presented in Fig.1. Our method introduces new constraints which limit the possible management of the tasks; the idea of these constraints is to guide the MATAS system towards better solutions. The PMT constraints (1) state that all tasks within the same group are assigned to the same processor and that the related communication between these tasks has duration zero.

\[
\forall T_i \in G_n,\ \forall T_j \in G_n:\quad P_{T_i} = P_{T_j},\qquad D_{C_{i,j}} = 0 \tag{1}
\]
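As a small illustration of constraint (1), the sketch below posts equality and zero-duration constraints for one group using the python-constraint solver; the variable names and the tiny two-processor instance are hypothetical, since MATAS itself is built on a finite-domain constraint framework not shown in the paper.

```python
from constraint import Problem, AllEqualConstraint, InSetConstraint

# Hypothetical fragment: tasks T1, T2 form group G1; two processors {0, 1}.
problem = Problem()
problem.addVariable("P_T1", [0, 1])       # processor-assignment FDVs
problem.addVariable("P_T2", [0, 1])
problem.addVariable("DC_1_2", [0, 3, 5])  # communication duration candidates

# Constraint (1): same processor inside a group; local communication takes no time.
problem.addConstraint(AllEqualConstraint(), ["P_T1", "P_T2"])
problem.addConstraint(InSetConstraint({0}), ["DC_1_2"])

print(problem.getSolutions())
# Two solutions remain: both tasks on processor 0 or both on processor 1,
# with DC_1_2 = 0 in either case.
```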

Fig.1 MATAS framework and PMT

We do not create new tasks from old ones but specify which tasks need to be assigned to the same processor. This is the only solution of this type which reduces the complexity of the task management and scheduling problem without changing the application model.

3. Partial management techniques


System synthesis has to take multiple resources into account. In our model, there are currently three types of resources for which parallel tasks compete: time slots, code memory and data memory. We assign tasks to processors' time slots and communication tasks to bus time slots. Each task needs data memory during execution and also produces and consumes data which are stored in data memory. Code memory needs to be reserved for each task so that it can be executed on the selected processor. The complete solution specifies these three assignments. Since the number of decision variables is normally large, the size of the search space can be huge. Our method makes selected management decisions explicit by specifying the management constraints (1). PMT makes partial management decisions based on several closeness measures which reflect the use of resources, such as time, data memory and code memory, between groups of tasks. The final closeness measure is defined as a weighted sum of these closeness measures, as defined below.

\[
\mathrm{closeness}_{g_i,g_j} = w_1\,ct_{g_i,g_j} + w_2\,cc_{g_i,g_j} + w_3\,cd_{g_i,g_j} \tag{2}
\]

where w1, w2 and w3 are weights (in our experiments all weights are equal), ct_{gi,gj} is the closeness measure for time, cc_{gi,gj} is the closeness measure for code memory, and cd_{gi,gj} is the closeness measure for data memory. There are two crucial assumptions when computing the closeness measures. The values in the denominators of (3), (4) and (5) are always computed under the assumption that groups gi and gj execute on different processors; the numerators, on the other hand, are the minimal values under the assumption that both groups execute on the same processor. Each of the metrics may have a value lower than one, which indicates a possible gain if both task groups are executed on the same processor.
time, ccgi,gj is a closeness measure for code memory, cdgi,gj is a closeness measure for data memory. There are two crucial assumptions when computing closeness measures. The values in the dominator in (3), (4), and (5) are always computed under an assumption that groups gi and gj execute on different processors. On the other hand, the numerator is the minimal value under assumption that both groups execute on the same processor. Each of the metrics may have a value lower than one and this will indicate that there is a possible gain if both tasks groups are executed on the same processor.

\[
ct_{g_i,g_j}=\frac{\min_{P_{g_i}=P_{g_j}}\left(D_{g_i}+D_{g_j}\right)}{\min_{P_{g_i}\neq P_{g_j}}\left(D_{g_i}+D_{g_j}+C_{g_i,g_j}\right)} \tag{3}
\]

where Dx denotes the execution time of group x, and Cx,y denotes the communication time between groups x and y. The closeness value becomes smaller as more communication time is required to transfer data between the two groups. A small closeness value ct_{gi,gj} indicates that there will be a gain in schedule length if both groups are executed on the same processor.

\[
cc_{g_i,g_j}=\frac{\min_{P_{g_i}=P_{g_j}}\dfrac{CM_{g_i}+CM_{g_j}}{CM(P_{g_i})}}{\min_{P_{g_i}\neq P_{g_j}}\left(\dfrac{CM_{g_i}}{CM(P_{g_i})}+\dfrac{CM_{g_j}}{CM(P_{g_j})}\right)} \tag{4}
\]

where CMx denotes how much code memory is required by group x, and CM(Px) denotes the amount of code memory available on the processor which executes group x. The numerator represents the minimal relative usage of code memory under the assumption that the groups execute on the same processor; it is divided by the sum of the minimal relative usages of code memory for the groups executing on different processors. The closeness function for code memory thus reflects, in relative terms, how much more code memory would be used if the two groups were grouped together. Since code memory is a resource which is reserved for the whole time, it is possible to use this type of metric. The most difficult resource to take into account is data memory; equation (5) below defines this measure.
\[
cd_{g_i,g_j}=\frac{\min_{P_{g_i}=P_{g_j}}\dfrac{DM_{g_i}+DM_{g_j}}{DM(P_{g_i})}}{\min_{P_{g_i}\neq P_{g_j}}\left(\dfrac{DM_{g_i}}{DM(P_{g_i})}+\dfrac{DM_{g_j}}{DM(P_{g_j})}+\dfrac{DM(C_{g_i,g_j})\,\overline{C}_{g_i,g_j}}{\min\left(DM(P_{g_i}),DM(P_{g_j})\right)}\right)} \tag{5}
\]

\[
DM_{G_x}=\sum_{T_y \in G_x} DM_{T_y}\, D_{T_y} \tag{6}
\]

where DMx denotes how much data memory is needed for group x's temporary data. Each task is annotated with its local data memory size multiplied by its execution time, as expressed in (6). For a given group, all data memory requirements are added and divided by the data memory size of the processor. When two tasks are executed on different processors, an additional cost appears due to the double reservation of data memory buffers for communication; this cost is represented by the third term in the denominator of equation (5). Since the communication time can differ, we take the communication time as the average of the possible communication times. This cost is normalized by the smaller data memory size of the two processors executing the groups. The PMT decisions are difficult, since knowledge of the assignment of tasks to processors is not yet available; it is possible that grouping two tasks will increase the schedule length. Therefore each resource closeness measure aims at reflecting the possible degradation or improvement of resource usage. Our PMT algorithm, presented in Algorithm 2, starts with one task per group. It then computes closeness measures for each pair of groups with a non-local communication. Each PMT iteration merges the two closest groups, thus making at least one communication local. The algorithm stops when the expected reduction of the task graph is reached.
Algorithm 2. PMT algorithm

for all i do
    G_i = {T_i}
tasks-to-constrain = expected reduction of the task graph
while tasks-to-constrain > 0 do
    for all non-local communications C_i do
        G_p = producer(C_i)
        G_c = consumer(C_i)
        clss_i = closeness(G_p, G_c)
    select C_i with smallest clss_i
    merge groups G_p and G_c
    make all communication between merged groups local
    tasks-to-constrain = tasks-to-constrain - 1
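The merging loop can also be expressed compactly in Python. The sketch below is ours (not the authors' implementation) and assumes a closeness() function that combines the measures (3)-(5):

# Hypothetical sketch of the PMT grouping loop. tasks is a list of task
# ids, comms a list of (producer_task, consumer_task) pairs, and
# closeness(group_a, group_b) returns the combined closeness measure.
def pmt(tasks, comms, closeness, tasks_to_constrain):
    groups = {t: {t} for t in tasks}   # initially one task per group
    owner = {t: t for t in tasks}      # task id -> group id
    while tasks_to_constrain > 0:
        nonlocal_comms = [(p, c) for (p, c) in comms if owner[p] != owner[c]]
        if not nonlocal_comms:
            break
        # merge the pair of groups with the smallest closeness value,
        # making at least one communication local
        p, c = min(nonlocal_comms,
                   key=lambda pc: closeness(groups[owner[pc[0]]],
                                            groups[owner[pc[1]]]))
        gp, gc = owner[p], owner[c]
        groups[gp] |= groups[gc]
        for t in groups[gc]:
            owner[t] = gp
        del groups[gc]
        tasks_to_constrain -= 1
    return groups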

4. Real-life examples
Our technique was applied to a real-life example presented in [9]. Their problem is quite simplified from our perspective, since the application is mapped onto a homogeneous multiprocessor architecture consisting of 7 processors and a single bus. The authors also do not take into account code memory and precedence constraints. However, the application consists of a large number of tasks and communications, which makes it a good benchmark example. Our MATAS/PMT system has been applied to this example and produced the results presented in Tab.1.
Tab.1 Experimental results for real-life example

Exp.  Method      Deadline [ms]  Bus Load [ms]  Tasks constrained  % of all tasks
1     MATAS       1992           176            0                  0
2     PMT         1992           176            30                 25
2     Clustering  1992           176            30                 25
3     PMT         2163           56             60                 50
3     Clustering  2013           132            60                 50
4     PMT         2395           2              90                 75
4     Clustering  2394           130            90                 75

The first experiment has no partial management constraints introduced by PMT. Therefore there is no reduction of problem complexity and the full optimization potential is retained. Since a full search has not been performed, the obtained solution is not proved optimal. The lower bound for the deadline of this example, with precedence constraints and a given architecture, is equal to 1975 ms. In the following experiments we compare linear clustering with PMT. Linear clustering uses the same metrics as PMT for making clustering decisions. In the second experiment, 25% of the tasks have been partially assigned by PMT or clustered. In this particular case, both the PMT constraints and linear clustering give the same solution as in the previous case. The problem complexity reduction has been achieved without penalty on the quality of the solution. In experiments three and four, both PMT and clustering simplified the problem at the expense of the achieved deadline. However, the obtained deadline still lies within 25% of the lower bound when the complexity of the management problem was reduced by 50% or 75%. In experiment three, clustering with MATAS obtained a shorter deadline than PMT with MATAS. In this case, however, it was also checked that the constraints imposed by PMT do not exclude the solution found by MATAS with clustering. In this particular case, clustering guided the MATAS system better. This real-life example shows that for problems where only time constraints are imposed, linear clustering will always give as good a result as non-linear methods like PMT. It conforms to the theory presented in [4]. This, however, is not the case for multi-resource problems, as indicated in further experimental results.

The experiment setups show that the reduction of the search space resulted, in some cases, in decreased solution quality: the deadline increased slightly. However, a consistent reduction of the heuristics' runtimes, equal to the percentage of constrained tasks, was achieved. The PMT method does not transform the problem itself; it just adds partial management constraints. Clustering, on the other hand, transforms the problem and therefore influences not only task management but also the scheduling of communications. The clustering itself did help to improve the solution quality compared to the original MATAS approach, but these solutions are usually not as good as PMT can deliver.

5. Conclusions
Our partial management technique, presented in this paper, makes it possible to decrease the complexity of the management and scheduling problem. During the search, the management of all tasks within the same group to a processor is performed only once. This method works for architectures with communication structure. The architecture resources can be of different natures, from simple ones, such as code memory, to more complex ones, such as computation time or data memory. The experimental results indicate that PMT can simplify the problem by removing inferior parts of the search space, as observed in the second experiment setup of the real-life example. Our heuristic is able to improve the quality of scheduling and management, as observed in both random experimental setups. It gives better results compared to clustering as well as to the non-pre-constrained original MATAS approach.
References

[1] R. Dick, D. Rhodes, and W. Wolf. TGFF: task graphs for free. In Sixth International Workshop on Hardware/Software Codesign, pages 97-101, 1998.
[2] M. M. Eshagian and Y. C. Wu. Mapping task graphs onto system graphs. In Computing Workshop, pages 147-160, 1997.
[3] A. Gerasoulis and T. Yang. On the granularity and clustering of directed acyclic task graphs. IEEE Transactions on Parallel and Distributed Systems, 4:686-701, June 1993.
[4] D. Kadamuddi and J. J. Tsai. Clustering algorithm for parallelizing software systems in multiprocessors environment. IEEE Transactions on Software Engineering, 26:340-361, April 2000.
[5] K. Marriott and P. Stuckey. Introduction to . The MIT Press, 1998.
[6] M. Senar, A. Ripoll, A. Cortes, and E. Luque. Clustering and remanagement-based mapping strategy for message-passing architectures. In Parallel Processing Symposium and Symposium on Parallel and Distributed Processing IPPS/SPDP, 1998.
[7] R. Szymanek and K. Kuchcinski. A constructive algorithm for memory-aware task management and scheduling. In Ninth International Symposium on Hardware/Software Codesign, April 2001.


A Study on Guaranteed Delivery Time Based Inventory Model of Component Commonality in Assemble-to-Order Systems
Lin Yong, Chen Kai
College of Management, Huazhong University of Science & Technology, Wuhan 430074, China

Abstract In this paper, we develop a guaranteed delivery time based inventory model of component commonality for the assemble-to-order environment, where components are replenished according to a pull policy and demand is a function of a uniform guaranteed delivery time. We focus on the dynamics between the guaranteed delivery time and its impact on commonality decisions. Assembling products according to priority with a common component is also considered. Based on the mathematical analysis, we conclude that the effect of component commonality on inventory stock cost can be magnified by compression of the guaranteed delivery time, and we set up some rules for inventory stock management with lead time. Numerical examples from a case study with a Chinese automobile corporation are also set up to illustrate the impact of guaranteed delivery time on the role of commonality under different cost structures. Key words Guaranteed delivery time, Component commonality, ATO (Assemble to Order)

1. Introduction
The concepts of commonality and time-based competition are now attracting renewed attention as companies are compelled to provide and manage an increasing product variety and to compete on speed. In many product markets, remaining competitive requires offering a wide variety of products, customized to meet each customer's requirements. In this sort of environment, it may not be practical to stock all of the different products, since there are just too many variations. Instead, manufacturers will stock components and assemble the products to order. One way to save money in an assemble-to-order environment is to reduce the number of components by replacing a variety of unique components with a common component. The use of common components for different products (commonality) is an important methodology for managing product variety and maintaining competitiveness in the age of mass customization and supply chain competition. Typically, in assemble-to-order systems, it has been shown that replacing a number of specific components by a smaller number of general-purpose, common components can reduce required safety stock levels due to the benefits of risk pooling. Meanwhile, in an increasingly intense competitive environment, time-based dimensions of a product are becoming an increasingly important component in assessing strategic advantage. According to a survey conducted by Miller and Roth (1988) and the research work reported in Blackburn (1991), most manufacturers have shifted their manufacturing strategies from cost and quality to speed. This shift is known as time-based competition. In the 90s, Blackburn (1991) provided evidence that many firms compete on response time. Companies use three main strategies to utilize speed to attract customers: (i) to serve customers as fast as possible, (ii) to encourage potential customers to get a delivery time quote prior to ordering, and (iii) to guarantee a uniform delivery lead time for all potential customers (So and Song, 1998). The paper of So and Song (1998) is perhaps the first research paper directly addressing the issue of a uniform guaranteed delivery time. In the case of a delivery time guarantee, firms advertise a uniform delivery time for all customers within which they guarantee to satisfy most customer orders. The length of the delivery time is a decision variable that directly affects overall demand. Since the late 1980s, a large volume of operations management literature has recognized that customer demand increases with lower delivery times (So and Song, 1998; So, 2000). Karmarkar (1993) pointed out that lead times are most probably inversely related to market shares. In this paper, we develop a guaranteed delivery time based inventory model of component commonality for the assemble-to-order environment, where components are replenished according to a pull policy and demand is a function of a uniform guaranteed delivery time. Assembling products according to priority with a common component is considered.
This research has been supported by the National Natural Science Foundation of China (No. 70502015).


This paper focuses on the dynamics between the guaranteed delivery time and its impact on commonality decisions. Results show that the length of the delivery time not only directly affects overall demand and cost, but also has some further impact on commonality decisions.

2. Literature Review
Commonality and postponement are not new concepts. Alderson (1950) was the first to analyze the concept of postponement in the marketing literature, while Dogramaci (1979) provided an early study of component commonality from a risk pooling perspective. The published literature mainly focuses on the following fields.
2.1 Single-period model
Baker (1985) considered the impact of correlated component demands on safety stock requirements. He pointed out that the traditional (safety stock) policy might not be sufficient to provide the desired service level. Baker, Magazine and Nuttle (1986) were the first to present the two-product, two-level, single-period inventory model, which analyzed a problem with two end items with independent and uniformly distributed demands. They drew the following conclusions: (1) the total inventory (in number of units) decreases with commonality; (2) the inventory level of the common component is smaller than the combined inventory levels of the components it replaces; (3) the inventory levels of non-common components increase with commonality. Gerchak, Magazine, and Gample (1988) extended this analysis to consider general demand distributions and any number of products. They considered cost minimization subject to a service level constraint and found that while property (1) (see above) is still true, (2) and (3) may not necessarily hold. The model of Bagchi and Gutierrez (1992) maximizes the service level for a fixed total number of units in stock. For exponential and geometric demand distributions, they found that the marginal cost reduction increases with commonality. However, since neither of these papers considers cost in their models, they do not help answer the question of whether commonality is still worthwhile when the common component is more expensive than the ones that it replaces. Eynan and Rosenblatt (1996) use the two-product, two-level inventory model of Baker et al. (1986), but introduced a cost structure for the components. In particular, they demonstrated that it might not always be desirable to introduce commonality when the common component is more expensive. Several conditions are provided under which commonality will result in lower inventory costs. Eynan and Rosenblatt (1997) then consider the possibility of using both the common component and the separate components, and showed that this can lower overall costs. Eppen (1979) considers the related problem of warehouse consolidation or centralization. Using a single-period model with normal demands, Eppen shows that holding cost and shortage cost are reduced by storing a product in a single location, rather than at several locations. Jönsson and Silver (1989) consider a model with any number of products that are assembled to order from a number of different components. Some components are common to two or more products. The objective is to maximize expected net profit subject to a budget constraint that limits the stocking level of components. Heuristics and bounding procedures are developed. Jönsson et al. (1993) consider the same problem, but utilize a scenario aggregation approach. Cattani (1995) considers a different make-to-stock model where several products are replaced by a single common product (separate components are not considered). Near-optimal stocking levels and costs are derived using a single-period model with normal demands. All of the previously mentioned work uses a single-period inventory model. It is assumed that components are purchased only once, to satisfy all future demand. Unfortunately, very few inventory systems operate for only a single period.
2.2 Multiple-period model
Gerchak and Henig (1989) consider a multiple-period model. They develop a very general model with any number of products and components, and general demand distributions. Hillier (1999) extends the model of Baker

et al. (1986) and Eynan and Rosenblatt (1996) to the multiple-period (or infinite-horizon) case. Components are purchased at the beginning of each time period. The order-up-to levels are chosen to minimize holding costs subject to meeting a service level constraint. Interestingly, the results are drastically different from the single-period model when the common component is more expensive than the unique components it would potentially replace. While there are many situations in the single-period model where incorporating a more expensive common component would be worthwhile, this is rarely the case in the multiple-period model. Hillier (1999) considers a multiple-period, make-to-stock system in which products are stocked at the beginning of each period, according to a forecast. After customer orders are received during the period, they are filled using the stocked items. Demand that cannot be met due to shortages is backlogged to the following period, but incurs a shortage or backlogging cost. It utilizes a very general model, with any number of products and general demand distributions. A dynamic program was developed to minimize discounted purchasing (or production), holding, and shortage costs. By comparing the total costs when using and when not using the common products, one can determine under what scenarios it is beneficial to employ commonality. Hillier (2000) considers a multiple-period, assemble-to-order model similar to that of Gerchak and Henig (1989), and develops heuristics and bounds. It considers two different scenarios. In the first scenario, no component commonality is utilized, and optimal stocking levels and costs are derived. In the second scenario, a common component replaces similar components in each of the products. A heuristic is developed which gives near-optimal solutions (typically with cost within 1 or 2% of optimal). Upper and lower bounds on the cost of the optimal solution are also developed. Using these bounds, along with the optimal costs of the no-commonality scenario, the cost effectiveness of incorporating commonality can be investigated under a variety of circumstances. All these papers develop both heuristics and optimal solutions (with further assumptions) for the restricted problem. Neither of these papers was able to derive expressions for optimal stocking levels or costs. Also, future work in this area seems to consider a more complicated product-tree structure.
2.3 Service level model
Most authors used aggregate service levels in their models, with the exception of Baker (1985). Bagchi and Gutierrez (1992), studying the effects of increasing component commonality, found that, for a given aggregate service level, the aggregate stock requirement decreases at an increasing rate. The general assumption made, explicitly or implicitly, in previous studies is that the cost of each replaced component is equal to the cost of the replacing common component; some studies relax this, allowing the common components to be more expensive than those they replace, and analyze cases in which commonality is still economically justified. Recently, Mirchandani and Mishra (1999) studied the commonality problem in a two-stage assembly system with a product-specific service level (PSL) requirement. They showed that since an aggregate service level (ASL) may provide a higher than necessary service level, the use of PSL leads to additional inventory savings.
2.4 Computational issues model
A number of authors have focused their research on computational issues in commonality models. For example, Hillier (1999) developed bounds on the multi-period cost for Gerchak and Henig's (1986) profit maximization model. Jönsson and Silver (1989), Jönsson et al. (1993), Tayur (1995) and Mirchandani and Mishra (1999b) developed computational approaches to solve large-scale commonality problems.
2.5 Marketing, logistics, and supply chain perspectives models
In a manufacturing-distribution system, a considerable portion of the risk and uncertainty costs is due to differentiation in form, place, and time. Postponement of the point of differentiation is an important means of reducing or eliminating this risk and uncertainty. This has long been recognized and studied in the marketing and logistics literature.
2.6 Models with lead time
For traditional inventory models, lead time is an insignificant factor in inventory modeling and implementation. It is usually assumed that the lead time for an order to arrive is relatively short and constant.

There are some examples in Fotter (1988) explaining that this is true most of the time; however, there are inventory ordering situations for which this is not the case. Liao and Shyu (1991) presented a probabilistic model in which the order quantity is predetermined and lead time is the unique decision variable. Ben-Daya and Raouf (1994) extended the Liao model by considering both lead time and the order quantity as decision variables. For probabilistic demand, Tang (1994) proposed a periodic review inventory model whose deterministic lead time is a decision variable. James et al. (1999) regarded a (Q, r) model with stochastic lead time as a building block in supply chains. Most of the literature on stochastic-lead-time models is either for a finite horizon (Kaplan, 1970; Ehrhardt, 1984), or based on approximations (Friedman, 1984; De Kok, 1993), or does not obtain the parameters of the optimal policy (Zalkind, 1978; Liberatore, 1979; De Kok, 1993). Previous studies have conducted dedicated research in the field of commonality inventory models and in the field of inventory models with lead time, but few works have considered the two fields together. This paper aims to find the relation between component commonality inventory and lead time. This study first sets out some assumptions and notation in Section 3, and then puts forward the models with and without component commonality in Section 4. Section 5 presents a case study with a Chinese automobile corporation, where the results are also analyzed and discussed. Section 6 presents some concluding comments.

3. Notation and assumptions


To develop models which will be proposed, we adopt the following assumptions and notation. Assumptions: The general product structure for our commonality study is a two-level product within ATO where the first level consists of end products and the second consists of components. The product structure used in Baker et al. (1985) and Jnsson and Silver (1989) is assumed in this paper. The basic model consists of two end products, and each end product comprises two different components that are normalized so that one component of each type is needed to make one end product. The structure of the basic model is illustrated in Fig.1a. We call it Model N, a model that does not assume a common component. When a common component, say component 7, is used to replace components 4 and 5, the new product structure is illustrated in Fig.1b. We call it Model C, a model that assumes one common component. When another common component, say component 8, is also used to replace components 3 and 6, the new product structure is illustrated in Fig.1c. We call it Model D, a model that assumes two common components.

[Fig.1 Structure of Model N, C and D. (a) Model N: products 1 and 2 assembled from unique components 3, 4 and 5, 6; (b) Model C: common component 7 replaces components 4 and 5; (c) Model D: common components 7 and 8 replace components 4, 5 and 3, 6.]

Demand for product 1 follows a uniform distribution on the interval [0, B_1(LT)], while demand for product 2 follows an independent uniform distribution on the interval [0, B_2(LT)]. Since customer demand increases with lower delivery times (Blackburn et al., 1992; So and Song, 1998; So, 2000; Ray and Jewkes, 2004), we can assume that the demand parameters depend linearly on LT, i.e.,

B_i(LT) = a_{Ti} - b_i \cdot LT,  i = 1, 2    (1)

where LT denotes the guaranteed delivery time, a_{Ti} represents the constant part of the demand, and b_i represents the guaranteed delivery time sensitivity of the demand. Without loss of generality we shall assume B_1(LT) \geq B_2(LT). Then, we break the guaranteed delivery time down into two parts, i.e.,

LT = L_0 + L    (2)

where L_0 denotes the minimum possible guaranteed delivery time, and L represents the compressible part of the guaranteed delivery time. Combining (1) and (2) we can express B_i in terms of L and the system parameters as

B_i(L) = a_{Ti} - b_i (L_0 + L) = a_i - b_i L    (3)

where a_i = a_{Ti} - b_i L_0. So a_i denotes two times the mean demand rate when L = 0, and b_i still represents the guaranteed delivery time sensitivity of the demand. The linear demand function will help us to obtain qualitative insights without much analytical complexity. It also has the desirable property that the lead time elasticity of demand is higher at higher guaranteed delivery times (refer to Palaka et al., 1998). Each product is assembled to order from various combinations of components, which are supplied with a pull policy. At the beginning of each period, the inventory levels of the separate components reach S_3, S_4, S_5 and S_6 respectively, and the inventory levels of the common components reach S_7 and S_8. We assume that each kind of component can be supplied to a certain level at the beginning of each period, and when shortages occur, sales are lost. It is obvious that ordering cost has nothing to do with the commonality decision under such a pull replenishment policy. Thus, three costs are considered in the model: purchasing costs, holding costs, and shortage costs.
Notations:
L: compressible part of the guaranteed delivery time.
S_i: stock level for component i at the beginning of each period (i = 3, 4, 5, 6, 7, 8).
C_i: purchasing cost for component i (i = 3, 4, 5, 6, 7, 8).
h: holding cost per unit.
x: real demand for product 1 in each period.
y: real demand for product 2 in each period.
C_X: shortage cost for product 1.
C_Y: shortage cost for product 2.
f_x(x): density function for demand during one period for product 1.
f_y(y): density function for demand during one period for product 2.
E_X(L): expected demand for product 1 in one period.
E_Y(L): expected demand for product 2 in one period.

4. Model with or without component commonality


4.1 Model N
Referring to Fig.1a, it is obvious that, in an optimal allocation, the inventory stock levels for components 3 and 4 are the same and those for components 5 and 6 are the same (otherwise we carry extra inventory that does not help reduce product shortage). The optimization formulation for Model N is

TC_N(S, L) = C_{NP} + C_{NH} + C_{NS}    (4)

where TC_N(S, L) represents the expected total cost, and C_{NP}, C_{NH} and C_{NS} represent the expected purchasing costs, expected holding costs, and expected shortage costs respectively, and

C_{NP} = \int_0^{B_1(L)} (C_3 + C_4) x f_x(x) dx + \int_0^{B_2(L)} (C_5 + C_6) y f_y(y) dy = E_X(L)(C_3 + C_4) + E_Y(L)(C_5 + C_6)    (5)


C_{NH} = \int_0^{B_1(L)} (C_3 + C_4) h \left( S_3 - \frac{x}{2} \right) f_x(x) dx + \int_0^{B_2(L)} (C_5 + C_6) h \left( S_6 - \frac{y}{2} \right) f_y(y) dy = \left( S_3 - \frac{E_X(L)}{2} \right)(C_3 + C_4) h + \left( S_6 - \frac{E_Y(L)}{2} \right)(C_5 + C_6) h    (6)

C_{NS} = \int_{S_3}^{B_1(L)} C_X (x - S_3) f_x(x) dx + \int_{S_6}^{B_2(L)} C_Y (y - S_6) f_y(y) dy    (7)

such that

S_3 = S_4 \geq E_X;  S_5 = S_6 \geq E_Y;  L \geq 0    (8)

Generally the service level is no less than 50%, so the stock level of each component is higher than the expected demand. This restriction will help us to minimize the expected total cost and focus on the commonality decision without much analytical complexity. Substituting (5), (6) and (7) into (4) and simplifying gives

TC_N(S, L) = \frac{B_1(L)(C_3 + C_4) + B_2(L)(C_5 + C_6)}{2} + \left( S_3 - \frac{B_1(L)}{4} \right) h (C_3 + C_4) + \left( S_6 - \frac{B_2(L)}{4} \right) h (C_5 + C_6) + \frac{(B_1(L) - S_3)^2 C_X}{2 B_1(L)} + \frac{(B_2(L) - S_6)^2 C_Y}{2 B_2(L)}    (9)

Obviously the expected total cost function is a decreasing function of L; for a given L, the expected total cost function is a polynomial function of S_3 and S_6. Differentiating with respect to S_3 and S_6, the second-order partial derivatives of the objective function (9) are

\frac{\partial^2 TC_N(S, L)}{\partial S_3^2} = \frac{C_X}{B_1(L)},  \frac{\partial^2 TC_N(S, L)}{\partial S_6^2} = \frac{C_Y}{B_2(L)}    (10)

So (9) is jointly convex in S_3 and S_6, and setting the first-order partial derivatives of (9) equal to zero leads to the following. The S_3 and S_6 that minimize TC_N(S, L), denoted by S^*_{N3} and S^*_{N6}, satisfy (11) and (12). This solution is unique and must be optimal due to the convexity shown in (10).

S^*_{N3} = \frac{C_X - C_3 h - C_4 h}{C_X} B_1(L)    (11)

S^*_{N6} = \frac{C_Y - C_5 h - C_6 h}{C_Y} B_2(L)    (12)
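To make the closed-form solution concrete, the following is a minimal Python sketch of ours (not the authors' code; all names are illustrative) that evaluates (11), (12) and the expected total cost (9) for given parameters:

# Sketch: optimal stock levels and expected total cost for Model N,
# with uniform demands on [0, B_i(L)] and B_i(L) = a_i - b_i * L.
def model_n(L, a, b, C, h, CX, CY):
    B1 = a[0] - b[0] * L
    B2 = a[1] - b[1] * L
    C34, C56 = C[0] + C[1], C[2] + C[3]      # C3+C4 and C5+C6
    S3 = (CX - h * C34) / CX * B1            # equation (11)
    S6 = (CY - h * C56) / CY * B2            # equation (12)
    TC = ((B1 * C34 + B2 * C56) / 2          # equation (9)
          + (S3 - B1 / 4) * h * C34 + (S6 - B2 / 4) * h * C56
          + (B1 - S3) ** 2 * CX / (2 * B1)
          + (B2 - S6) ** 2 * CY / (2 * B2))
    return S3, S6, TC

# Example with the Section 5 parameters: B(L) = 400 - 2L, unit component
# costs, h = 0.1, CX = CY = 2.
print(model_n(L=10, a=(400, 400), b=(2, 2), C=(1, 1, 1, 1), h=0.1, CX=2, CY=2))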

4.2 Model C
Referring to Fig.1b, component 7 is the common component. In an optimal allocation, the relationship between the inventory stock levels of components 3, 6 and 7 is S_7 \geq S_3; S_7 \geq S_6; S_3 + S_6 \geq S_7 (see Jönsson and Silver, 1989; Fong, Fu and Li, 2004). The optimization formulation for Model C is

TC_C(S, L) = C_{CP} + C_{CH} + C_{CS}    (13)

where TC_C(S, L) represents the expected total cost, and C_{CP}, C_{CH} and C_{CS} represent the expected purchasing costs, expected holding costs, and expected shortage costs respectively, and

C_{CP} = \int_0^{B_1(L)} C_3 x f_x(x) dx + \int_0^{B_2(L)} C_6 y f_y(y) dy + \int_0^{B_1(L)} C_7 x f_x(x) dx + \int_0^{B_2(L)} C_7 y f_y(y) dy = E_X(L) C_3 + E_Y(L) C_6 + (E_X(L) + E_Y(L)) C_7    (14)


C_{CH} = \int_0^{B_1(L)} C_3 h \left( S_3 - \frac{x}{2} \right) f_x(x) dx + \int_0^{B_2(L)} C_6 h \left( S_6 - \frac{y}{2} \right) f_y(y) dy + \int_0^{B_1(L)} \int_0^{B_2(L)} C_7 h \left( S_7 - \frac{x}{2} - \frac{y}{2} \right) f_x(x) f_y(y) dy dx = \left( S_3 - \frac{E_X(L)}{2} \right) C_3 h + \left( S_6 - \frac{E_Y(L)}{2} \right) C_6 h + \left( S_7 - \frac{E_X(L) + E_Y(L)}{2} \right) C_7 h    (15)

C_{CS} = \int_{S_3}^{B_1(L)} C_X (x - S_3) f_x(x) dx + \int_{S_3}^{B_1(L)} \int_{S_7 - S_3}^{B_2(L)} C_Y (y - S_7 + S_3) f_x(x) f_y(y) dy dx + \int_{S_7 - S_6}^{S_3} \int_{S_7 - x}^{B_2(L)} C_Y (y - S_7 + x) f_x(x) f_y(y) dy dx + \int_0^{S_7 - S_6} \int_{S_6}^{B_2(L)} C_Y (y - S_6) f_x(x) f_y(y) dy dx    (16)

such that

S_7 \geq S_3 \geq E_X;  S_7 \geq S_6 \geq E_Y;  S_3 + S_6 \geq S_7 \geq E_X + E_Y;  L \geq 0    (17)

If the common components cannot fill all the finished products, allocation of the common components must be considered. While many papers consider the shortage of common components, few of them consider such a scenario. Many researchers used aggregate service levels in their models to define the probability of meeting all demand (see Baker, Magazine and Nuttle, 1986; Gerchak, Magazine and Gample, 1988; Eynan and Rosenblatt, 1996), but they did not give the total shortage cost for each product respectively. Thus, when the shortage costs of the end products differ from each other, it is difficult to use a cost structure in their models. Some researchers considered the priority of one product (see Jönsson and Silver, 1989; Fong, Fu and Li, 2004), but they did not present the shortage cost for each end product respectively either. This paper considers such a scenario. Given C_X = 1 and C_Y = 1, formula (16) can be transformed into formula (6) in the paper of Fong, Fu and Li (2004), so their model is a special case of ours. We assume that product 1 has priority: when common component 7 is not enough for all products, we allocate it to assembling product 1 first. Substituting (14), (15) and (16) into (13) and simplifying gives

TC_C(S, L) = \frac{B_1(L) C_3 + B_2(L) C_6 + (B_1(L) + B_2(L)) C_7}{2} + \left( S_3 - \frac{B_1(L)}{4} \right) h C_3 + \left( S_6 - \frac{B_2(L)}{4} \right) h C_6 + \left( S_7 - \frac{B_1(L) + B_2(L)}{4} \right) h C_7 + \frac{(B_1(L) - S_3)^2 C_X}{2 B_1(L)} + \frac{C_Y}{B_1(L) B_2(L)} \left[ \frac{(B_1(L) - S_3)(B_2(L) - S_7 + S_3)^2}{2} + \frac{S_3 + S_6 - S_7}{6} \left( S_3^2 + 3 B_2(L) S_3 - 2 S_7 S_3 - S_6 S_3 - 3 B_2(L) S_7 + 3 B_2(L)^2 + S_7^2 - 3 B_2(L) S_6 + S_6 S_7 + S_6^2 \right) + \frac{(S_7 - S_6)(B_2(L) - S_6)^2}{2} \right]    (18)

The expected total cost function is a decreasing function of L; for a given L, the expected total cost function is a polynomial function of S_3, S_6 and S_7. (18) is jointly convex in S_3, S_6 and S_7, and setting the first-order partial derivatives of (18) equal to zero leads to the following. The S_3, S_6 and S_7 that minimize TC_C(S, L), denoted by S^*_{C3}, S^*_{C6} and S^*_{C7}, satisfy (19), (20) and (21). The Hessian matrix can be used to verify convexity, so this solution is unique and optimal.

\frac{\partial TC_C}{\partial S_3} = h C_3 - C_X + \frac{C_X S_3}{B_1(L)} + \frac{C_Y (B_1(L) - S_3)(B_2(L) - S_7 + S_3)}{B_1(L) B_2(L)} = 0    (19)

\frac{\partial TC_C}{\partial S_6} = \frac{h B_1(L) B_2(L) C_6 - C_Y (S_7 - S_6)(B_2(L) - S_6)}{B_1(L) B_2(L)} = 0    (20)


\frac{\partial TC_C}{\partial S_7} = \frac{h B_1(L) B_2(L) C_7 + C_Y \left( B_2(L) S_7 - B_2(L) S_6 + \frac{S_3^2 + S_6^2 - S_7^2}{2} - B_1(L) S_3 + B_1(L) S_7 - B_1(L) B_2(L) \right)}{B_1(L) B_2(L)} = 0    (21)

4.3 Model D
Referring to Fig.1c, components 7 and 8 are the common components. It is obvious that, in an optimal allocation, the inventory stock levels for common components 7 and 8 are the same (otherwise we carry extra inventory that does not help reduce product shortage). The optimization formulation for Model D is

TC_D(S, L) = C_{DP} + C_{DH} + C_{DS}    (22)

where TC_D(S, L) represents the expected total cost, and C_{DP}, C_{DH} and C_{DS} represent the expected purchasing costs, expected holding costs, and expected shortage costs respectively, and

C_{DP} = \int_0^{B_1(L)} (C_7 + C_8) x f_x(x) dx + \int_0^{B_2(L)} (C_7 + C_8) y f_y(y) dy = (E_X(L) + E_Y(L))(C_7 + C_8)    (23)

C_{DH} = \int_0^{B_1(L)} \int_0^{B_2(L)} (C_7 + C_8) h \left( S_7 - \frac{x}{2} - \frac{y}{2} \right) f_x(x) f_y(y) dy dx = \left( S_7 - \frac{E_X(L)}{2} - \frac{E_Y(L)}{2} \right)(C_7 + C_8) h    (24)

C_{DS} = \int_{S_7 - B_2(L)}^{B_1(L)} \int_{S_7 - x}^{B_2(L)} C_Y (y - S_7 + x) f_x(x) f_y(y) dy dx    (25)

such that

S_7 \geq E_X + E_Y;  L \geq 0    (26)

We assume that product 1 has priority: when common components 7 and 8 are not enough for all products, we allocate them to assembling product 1 first. Substituting (23), (24) and (25) into (22) and simplifying gives

TC_D(S, L) = \frac{(B_1(L) + B_2(L))(C_7 + C_8)}{2} + \frac{(4 S_7 - B_1(L) - B_2(L))(C_7 + C_8) h}{4} + \frac{(B_1(L) + B_2(L) - S_7)^3 C_Y}{6 B_1(L) B_2(L)}    (27)

The expected total cost function is a decreasing function of L; for a given L, the expected total cost function is a polynomial function of S_7. Differentiating with respect to S_7, the second-order partial derivative of the objective function (27) is

\frac{\partial^2 TC_D}{\partial S_7^2} = \frac{(B_1(L) + B_2(L) - S_7) C_Y}{B_1(L) B_2(L)} \geq 0    (28)

So (27) is convex in S_7, and setting the first-order partial derivative of (27) equal to zero leads to the following. The S_7 that minimizes TC_D(S, L), denoted by S^*_{D7}, satisfies (29) and is given by (30). This solution is unique and must be optimal due to the convexity shown in (28).

\frac{\partial TC_D}{\partial S_7} = h (C_7 + C_8) - \frac{(B_1(L) + B_2(L) - S_7)^2 C_Y}{2 B_1(L) B_2(L)} = 0    (29)

S^*_{D7} = B_1(L) + B_2(L) - \sqrt{\frac{2 h B_1(L) B_2(L)(C_7 + C_8)}{C_Y}}    (30)
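Because (30) is in closed form, Model D is straightforward to evaluate directly; a small Python sketch of ours (names illustrative) computing S^*_{D7} and the cost (27):

import math

# Sketch: closed-form Model D stock level (30) and expected cost (27).
def model_d(B1, B2, C7, C8, h, CY):
    S7 = B1 + B2 - math.sqrt(2 * h * B1 * B2 * (C7 + C8) / CY)   # eq. (30)
    TC = ((B1 + B2) * (C7 + C8) / 2                               # eq. (27)
          + (4 * S7 - B1 - B2) * (C7 + C8) * h / 4
          + (B1 + B2 - S7) ** 3 * CY / (6 * B1 * B2))
    return S7, TC

print(model_d(B1=380, B2=380, C7=1, C8=1, h=0.1, CY=2))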

4.4 A comparative study of the models with or without component commonality
The expected total cost functions (4), (13) and (22) are decreasing functions of L. When the guaranteed delivery time decreases by \Delta L, the compressible part also decreases by \Delta L = \Delta LT. Comparing Model C with Model N, the reduction of expected total cost is

\Delta TC_{N-C}(S, L - \Delta L) = TC_N(S^*_{L-\Delta L}, L - \Delta L) - TC_C(S^*_{L-\Delta L}, L - \Delta L)
\Delta TC_{N-C}(S, L) = TC_N(S^*_L, L) - TC_C(S^*_L, L)

Comparing Model D with Model N, the reduction of expected total cost is

\Delta TC_{N-D}(S, L - \Delta L) = TC_N(S^*_{L-\Delta L}, L - \Delta L) - TC_D(S^*_{L-\Delta L}, L - \Delta L)
\Delta TC_{N-D}(S, L) = TC_N(S^*_L, L) - TC_D(S^*_L, L)
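Putting the pieces together, the reduction \Delta TC_{N-D} at a given L can be evaluated numerically by combining the earlier sketches (this assumes the hypothetical model_n and model_d helpers defined above):

# Sketch: cost reduction from commonality (Model N vs. Model D) at a
# given compressible lead time L, reusing model_n and model_d above.
def delta_tc_nd(L, a, b, C, C7, C8, h, CX, CY):
    _, _, tc_n = model_n(L, a, b, C, h, CX, CY)
    B1, B2 = a[0] - b[0] * L, a[1] - b[1] * L
    _, tc_d = model_d(B1, B2, C7, C8, h, CY)
    return tc_n - tc_d

# Comparing the reduction at L = 20 and L = 0 shows how compressing the
# guaranteed delivery time changes the benefit of commonality.
for L in (20, 0):
    print(L, delta_tc_nd(L, a=(400, 400), b=(2, 2), C=(1, 1, 1, 1),
                         C7=1, C8=1, h=0.1, CX=2, CY=2))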

5. Examples
Here we use examples from a case study with a Chinese automobile corporation to explore the impact of guaranteed delivery time on the role of commonality under different cost structures. We are also interested in seeing how the benefits of employing commonality change with the guaranteed delivery time. These examples should give insights into how commonality effects change under various situations. Two cost structures are considered: (i) the cost of the common component is equal to that of the unique components, while the holding cost and shortage cost are varied; and (ii) the cost of the common component is more expensive than that of the unique components, while the holding cost and shortage cost are varied.
5.1 The cost of the common component is equal to that of the unique components
Suppose demands follow an identical uniform distribution on the interval [0, B(L)], and we have B(L) = 400 - 2L. Assuming equal costs (C_3 = C_4 = C_5 = C_6 = C_7 = C_8 = 1), and h = 0.1, 0.15, or 0.2, C_X = C_Y = 2 or 3, there is a cost reduction from using a common component and compressing the guaranteed delivery time in this case. The results of our comparison between Model C and Model N are plotted in Fig.2a. The comparison between Model D and Model N is also studied, and the results are plotted in Fig.2b.

Fig.2a Comparison between Model C and Model N

From Fig.2a, it is seen that a larger reduction of the expected total cost from Model N to Model C is achieved with a shorter guaranteed delivery time. As the lead time becomes shorter, a higher shortage cost leads to more benefits of commonality, and as the holding cost increases the benefits become larger and larger. Thus the shortage cost and holding cost have a compound effect on the benefits of commonality with a shorter guaranteed delivery time.


Fig.2b Comparison between Model D and Model N

From Fig.2b, it is seen that a larger reduction of the expected total cost from Model N to Model D is also achieved with a shorter guaranteed delivery time, and Model D gains greater benefits than Model C. But, unlike Fig.2a, as the lead time becomes shorter, a higher shortage cost leads to fewer benefits of commonality. Though the benefits increase as the holding cost increases, there is no compound effect of shortage cost and holding cost.
5.2 The cost of the common component is more expensive than that of the unique components
Suppose demands follow an identical uniform distribution on the interval [0, B(L)], and we have B(L) = 400 - 2L. To assess the effect of component costs on commonality, we consider cases involving different costs (C_3 = C_4 = C_5 = C_6 = 1; C_7 = C_8 = 1.01), and the results are plotted in Fig.3a and Fig.3b (h = 0.1, 0.15, or 0.2, C_X = C_Y = 2 or 3). Thus there are both decreases and increases in cost from using a common component and compressing the guaranteed delivery time in this case. The results of the comparison between Model C and Model N are plotted in Fig.3a. The comparison between Model D and Model N is also studied, and the results are plotted in Fig.3b. From Fig.3a, it is seen that if there is a reduction of the expected total cost from Model N to Model C with the longest guaranteed delivery time, a larger reduction will be achieved with a shorter guaranteed delivery time; but if there is a negative benefit at the beginning, commonality will cost more as the lead time decreases. The compression of the guaranteed delivery time amplifies the effect of commonality. As the shortage cost or the holding cost increases, a positive effect on the benefits of commonality with lead time compression becomes more likely. The shortage cost has more impact on the benefits than the holding cost, and the shortage cost and holding cost have a compound effect on the benefits. Thus Model C is very sensitive to component cost.

Fig.3a Comparison between Model C and Model N


Fig.3b Comparison between Model D and Model N

From Fig.3b, it is seen that a larger reduction of the expected total cost from Model N to Model D is achieved with a shorter guaranteed delivery time, and Model D gains much more benefit than Model C. But, unlike Model C, Model D is not very sensitive to component cost. Fig.3b has many features similar to Fig.2b. As the lead time becomes shorter, a higher shortage cost leads to fewer benefits of commonality. Though the benefits increase as the holding cost increases, there is no compound effect of shortage cost and holding cost.

6. Conclusions
In this paper, we develop a guaranteed delivery time based inventory model of component commonality for the assemble-to-order environment, where components are replenished according to a pull policy and demand is a function of a uniform guaranteed delivery time. Previous studies have conducted dedicated research in the field of commonality inventory models and in the field of inventory models with lead time, but few works have considered the two fields together. This paper focuses on the dynamics between the guaranteed delivery time and its impact on commonality decisions. The impact of the guaranteed delivery time on the effect of commonality on inventory cost is examined. Besides, we consider the allocation tactic for the common components when they cannot fill all the end products. We consider a priority policy, and the total shortage cost for each product is given respectively. This can be useful for practicing managers when the shortage costs of the end products differ from each other. Beyond the specifics of the model, our analysis creates a framework for thinking about the relationship between guaranteed delivery time and commonality decisions, and it sets up some rules for inventory stock management with guaranteed delivery time. Results show that the length of the delivery time not only directly affects overall demand and cost, but also has some further impact on commonality decisions. The benefits of commonality increase with lower guaranteed delivery time when the common component is as expensive as the components it would replace, but this is shown not to be the case if the common component is more expensive than the components it would replace. When the benefits of risk pooling outweigh the added purchasing costs, a larger reduction will be achieved with a shorter guaranteed delivery time; when the added purchasing costs dominate the benefits of risk pooling, commonality will cost more as the lead time decreases. Based on the mathematical analysis, we conclude that the effect of component commonality on inventory stock cost can be magnified by compression of the guaranteed delivery time. In future research on this problem, it would be interesting to extend this model to incorporate the influence of the component supplier lead time, using a (Q, r) replenishment policy and considering the finished product delivery time and component lead time simultaneously. Another extension of this work may be conducted by considering other allocation tactics for common components when shortage occurs and comparing the impact of

different allocation tactics on the shortage cost and total cost for the model developed in Section 4. Another would be to extend to a model with multiple end products and fairly general demand patterns.
References

[1] Alderson, W., 1950. Market efficiency and the principle of postponement. Cost and Profit Outlook, September 3.
[2] Bagchi, U., Gutierrez, G., 1992. Effect of increasing component commonality on service level and holding cost. Naval Research Logistics 39, 815-832.
[3] Baker, K.R., 1985. Safety stocks and component commonality. Journal of Operations Management 6, 13-22.
[4] Baker, K.R., Magazine, M.J., Nuttle, H.L., 1986. The effect of commonality on safety stock in a simple inventory model. Management Science 32, 982-988.
[5] Blackburn, J., 1991. Time Based Competition. Richard D. Irwin, Homewood, Illinois.
[6] Blackburn, J.D., Elrod, T., Lindsley, W.B., Zahorik, A.J., 1992. The Strategic Value of Response Time and Product Variety. In: Voss, C.A. (Ed.), Manufacturing Strategy: Process and Content. Chapman and Hall, London (Chapter 13).
[7] Dogramaci, A., 1979. Design of common components considering implications of inventory costs and forecasting. AIIE Transactions 11, 129-135.
[8] Ehrhardt, R., Mosier, C., 1984. A revision of the power approximation for computing (s, S) policies. Management Science 30 (5), 618-622.
[9] Eppen, G.D., 1979. Effects of centralization on expected costs in a multi-location newsboy problem. Management Science 25, 498-501.
[10] Eynan, A., Rosenblatt, M.J., 1996. Component commonality effects on inventory costs. IIE Transactions 28, 93-104.
[11] Eynan, A., Rosenblatt, M., 1997. An analysis of purchasing costs as the number of products' components is reduced. Production and Operations Management 6, 388-397.
[12] Fong, D.K.H., Fu, H., Li, Z., 2004. Efficiency in shortage reduction when using a more expensive common component. Computers & Operations Research 31, 123-138.
[13] Gerchak, Y., Henig, M., 1986. An inventory model with part commonality. Operations Research Letters 5, 157-160.
[14] Gerchak, Y., Henig, M., 1989. Component commonality in assemble-to-order systems: Models and properties. Naval Research Logistics 36, 61-68.
[15] Gerchak, Y., Magazine, M.J., Gample, A.B., 1988. Component commonality with service level requirements. Management Science 34, 753-760.
[16] Hillier, M.S., 1999. Product commonality in multiple-period, make-to-stock systems. Naval Research Logistics 46, 737-751.
[17] Hillier, M.S., 2000. Component commonality in multiple-period, assemble-to-order systems. IIE Transactions 32, 755-766.
[18] Jönsson, H., Silver, E.A., 1989. Common component inventory problems with a budget constraint: Heuristics and upper bounds. Engineering Costs and Production Economics 18, 71-81.
[19] Jönsson, H., Jörnsten, K., Silver, E.A., 1993. Application of the scenario aggregation approach to a two-stage, stochastic, common component, inventory problem with a budget constraint. European Journal of Operational Research 68, 196-211.
[20] Kaplan, R.S., 1970. A dynamic inventory model with stochastic lead times. Management Science 16 (7), 491-507.
[21] Karmarkar, U.S., 1993. Manufacturing lead times. In: Graves, S.C., Rinnooy Kan, A.H.G., Zipkin, P.H. (Eds.), Logistics of Production and Inventory. Handbooks in Operations Research and Management Science, vol. 4. North-Holland (Elsevier Science Publishers B.V.), Amsterdam, The Netherlands.
[22] Liao, C.J., Shyu, C.H., 1991. Analytical determination of lead time with normal demand. International Journal of Operations and Production Management 11, 72-78.
[23] Miller, J., Roth, A., 1998. Executive Summary of the 1988 North American Manufacturing Futures Survey. Boston University Roundtable, Manufacturing.
[24] Mirchandani, P., Mishra, A.K., 1999. Component commonality: models with product-specific service constraints. Working paper, Katz Graduate School of Business, University of Pittsburgh.
[25] Palaka, K., Erlebacher, S., Kropp, D.H., 1998. Lead time setting, capacity utilisation, and pricing decisions under lead time dependent demand. IIE Transactions 30, 151-163.
[26] Ray, S., Jewkes, E.M., 2004. Customer lead time management when both demand and price are lead time sensitive. European Journal of Operational Research 153, 769-781.
[27] So, K.C., Song, J.S., 1998. Price, delivery time guarantees and capacity selection. European Journal of Operational Research 111, 28-49.
[28] So, K.C., 2000. Price and time competition for service delivery. Manufacturing and Service Operations Management 2 (4), 392-409.
[29] Tayur, S.R., 1995. Computing optimal stock levels for common components in an assembly system. Working paper, GSIA, CMU.
[30] Zalkind, D., 1978. Order-level inventory systems with independent stochastic leadtimes. Management Science 24 (13), 1384-1392.


Outsourcing Decision-Making of Equipment Maintenance in Process Industries


Liu Liping1,2, Ji Jianhua1, Fan Tijun2, Hu Jiling2
1 Aetna School of Economics and Management, Shanghai Jiao Tong University, P. R. China, 200052 2 School of Business, East China University of Science and Technology, P. R. China, 200237 E-mail: lpliu@ecust.edu.cn

Abstract Proper equipment maintenance can help process industries operate effectively and safely. While equipment maintenance outsourcing can improve efficiency, it may bring about a lot of risks. Rational outsourcing decision-making, however, is an effective risk mitigation strategy. In this paper, we consider the equipment maintenance vendor selection problem in process industries, and develop a 0-1 ILP model to achieve the optimal solution. Finally, we present a numerical example as a demonstration analysis. Key words Process industries, Equipment maintenance, Outsourcing risk, Outsourcing decision-making, 0-1 ILP model

1. Introduction
The production of process industries is continuous, so the successful operation of production equipment is very important for them, and proper equipment maintenance can help process industries operate effectively and safely. With the development of third-party logistics, more and more manufacturers prefer outsourcing equipment maintenance to doing it themselves. Although equipment maintenance outsourcing can improve efficiency, it may bring about a lot of risks; rational outsourcing decision-making, however, is an effective risk mitigation strategy. So it is necessary to study how to make the optimal decision on equipment maintenance outsourcing in process industries. Quite a few papers have taken an interest in this area. Félix C. Gómez de León Hijes et al. present a maintenance strategy based on a multi-criterion classification of equipments [1]. An economic model is presented for the optimization of preventive maintenance in a production process with two quality states in [2]. Pongpech and Murthy consider a periodic preventive maintenance policy which achieves a tradeoff between the penalty and maintenance costs [3]. With the development of third-party logistics, many manufacturers outsource equipment maintenance, and this phenomenon has caught the attention of researchers. Fan Tijun et al. put forward three outsourcing patterns for equipment maintenance [4]. Nine outsourcing strategies of equipment maintenance for enterprises are put forward and analyzed qualitatively in [5]. Cui Nanfang et al. research integrated spare parts management in equipment maintenance outsourcing [6]. Though outsourcing brings the manufacturer efficiency, it also brings about a lot of risks. Hoecht and Trott investigate the innovation-related risks that can arise from strategic outsourcing and adopt a trust, collaboration and network perspective for this analysis [7]. Kweku-Muata Osei-Bryson et al. offer a method and some mathematical models for analyzing risks and constructing incentive contracts for IS outsourcing [8]. Ton G. de Kok researches capacity allocation and outsourcing in a process industry [9]. Though many studies have fully researched equipment maintenance outsourcing, outsourcing risks, and outsourcing in process industries respectively, they seldom consider these fields together (i.e. outsourcing decision-making of equipment maintenance in process industries). In this paper, we study the equipment maintenance outsourcing problem in process industries. This paper is organized as follows. The equipment maintenance outsourcing decision-making problem is described in Section 2 and a 0-1 ILP model is built in Section 3. Then a numerical example is presented in Section 4. Finally the conclusion and future work are presented in Section 5.

This research has been supported by the National Natural Science Foundation of China (Nos. 70472030 and 70472057).


2. The Problem Description


The problem can be defined as a selection problem of equipment maintenance vendors. The manufacturer aims to achieve the maximal profit within particular maintenance cost and risk cost limits. Suppose the number of equipment maintenance vendors is n, and v_j denotes equipment maintenance vendor j. Let r_j denote the profit that v_j brings to the process manufacturer. Let c_j be the equipment maintenance cost that the manufacturer should pay to v_j, and d_j the risk cost that v_j might result in. Let c_T denote the maximum total amount that the manufacturer would like to pay for equipment maintenance, and d_T the maximum total risk cost that the manufacturer can bear. These can be expressed as in Table 1.
Tab.1 Profit-cost Matrix

                  v_1   v_2   ...   v_n   Limit
Profit            r_1   r_2   ...   r_n
Maintenance cost  c_1   c_2   ...   c_n   c_T
Risk cost         d_1   d_2   ...   d_n   d_T
3. The Model
We develop a 0-1 integer linear programming model (see Figure 1) for the problem above. Let x_j indicate whether the process industry manufacturer selects vendor j: x_j = 1 if vendor j is selected, and x_j = 0 otherwise. The objective function states that the manufacturer wants to achieve the maximal profit through equipment maintenance outsourcing. The manufacturer will not pay more than expected for outsourcing, which is expressed as the first constraint. Some risks exist in outsourcing (e.g., leaking of commercial secrets, dishonesty of vendors), so every vendor may bring some risk cost to the manufacturer; the second constraint states that the manufacturer does not want to accept too much risk cost.

max z = \sum_{j=1}^{n} r_j x_j

s.t.  \sum_{j=1}^{n} c_j x_j \leq c_T
      \sum_{j=1}^{n} d_j x_j \leq d_T
      x_j = 0 or 1,  j = 1, 2, ..., n

Figure 1 the Model
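For small n the model can also be checked by exhaustive enumeration. The following is a minimal Python sketch of ours (not part of the original paper; names are illustrative) that enumerates all 2^n vendor subsets:

from itertools import product

# Brute-force solution of the 0-1 ILP: enumerate all vendor subsets and
# keep the feasible selection with maximal total profit.
def select_vendors(r, c, d, cT, dT):
    n = len(r)
    best_z, best_x = None, None
    for x in product((0, 1), repeat=n):
        cost = sum(cj * xj for cj, xj in zip(c, x))
        risk = sum(dj * xj for dj, xj in zip(d, x))
        if cost <= cT and risk <= dT:
            z = sum(rj * xj for rj, xj in zip(r, x))
            if best_z is None or z > best_z:
                best_z, best_x = z, x
    return best_z, best_x

# e.g. with the data of Tab.2 below:
# select_vendors([2, 7, 3, 5, 6], [1, 5, 6, 4, 3], [3, 4, 2, 3, 5], 15, 12)

Enumeration is exponential in n, so a dedicated ILP solver such as LINDO (used in Section 4) is preferable for larger instances.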

4. The Numerical Example


Suppose there is a manufacturer in the process industry that has five vendors able to provide equipment maintenance service. The profit, maintenance cost and risk cost of every vendor are known, as shown in Table 2.
Tab.2 Values of the Profit-cost Matrix

                  1   2   3   4   5   Limit
Profit            2   7   3   5   6
Maintenance cost  1   5   6   4   3   15
Risk cost         3   4   2   3   5   12

How should the proper vendors be selected? We can build the 0-1 ILP model (see Figure 1) and use LINDO 6.1 to solve it (see Figure 2).

max 2x1+7x2+3x3+5x4+6x5
ST
x1+5x2+6x3+4x4+3x5<=15
3x1+4x2+2x3+3x4+5x5<=12
end
INT 5

Figure 2 the LINDO Model

The optimal solution is presented in Figure 3. Thus the manufacturer should select vendor 1, vendor 2 and vendor 5 in this circumstance.

OBJECTIVE FUNCTION VALUE
1)        25.00000

VARIABLE        VALUE
X1              1.000000
X2              1.000000
X3              0.000000
X4              0.000000
X5              1.000000

Figure 3 the Optimal Solution

5. Conclusions and Future Work


In this paper, we consider a vendor selection problem of how the process industry manufacturer makes the optimal decision on equipment maintenance outsourcing. We build a 0-1 ILP model to solve this problem and present a numerical example as a demonstration analysis. Selecting the proper equipment maintenance vendors is an effective mitigation strategy against outsourcing risks. To manage the outsourcing risks of equipment maintenance in process industries comprehensively, it is necessary to find other effective risk mitigation strategies. This can be studied further in future work.
References

[1] Félix C. Gómez de León Hijes, José Javier Ruiz Cartagena. Maintenance strategy based on a multi-criterion classification of equipments. Reliability Engineering & System Safety, April 2006, Vol. 91(4): 444-451
[2] Sofia Panagiotidou, George Tagaras. Optimal preventive maintenance for equipment with two quality states and general failure time distributions. European Journal of Operational Research, July 2007, Vol. 180(1): 329-353
[3] J. Pongpech, D.N.P. Murthy. Optimal periodic preventive maintenance policy for leased equipment. Reliability Engineering & System Safety, July 2006, Vol. 91(7): 772-777
[4] Fan Tijun, Chen Rongqiu, Cui Nanfang. Outsourcing patterns of equipment maintenance. Journal of Huazhong University of Science & Technology (Nature Science Edition), Mar. 2003, Vol. 31(3): 81-83
[5] FAN Tijun, CHEN Rongqiu, CUI Nanfang. Strategic Analysis of Equipment Maintenance Outsourcing. Chinese Journal of Management Science, Aug. 2003, Vol. 11(4): 47-53
[6] CUI Nanfang, QU Kui, YANG Huili. Integrated Spare Parts Management in Equipment Maintenance Outsourcing. Soft Science, 2005, Vol. 19(5): 57-59
[7] A. Hoecht, P. Trott. Innovation risks of strategic outsourcing. Technovation, May-June 2006, Vol. 26(5-6): 672-681
[8] Kweku-Muata Osei-Bryson, Ojelanki K. Ngwenyama. Managing risks in information systems outsourcing: An approach to analyzing outsourcing risks and structuring incentive contracts. European Journal of Operational Research, October 2006, Vol. 174(1): 245-264
[9] Ton G. de Kok. Capacity allocation and outsourcing in a process industry. International Journal of Production Economics, December 2000, Vol. 68(3): 229-239


A Location-Allocation Problem Applying in Scrap Steel Recycling Network Design


Liu Yang1, Tang Jiafu1
1 Dept. of Systems Engineering, College of Information Science & Engineering, Northeastern University, Shenyang, Liaoning, 110004, China

Abstract In recent years, the quantity of steel production in countries all over the world has been increasing fast. However, many developing countries advance their steel production at the expense of consuming masses of iron ore and coke, and of sacrificing the environment. From the standpoint of environmental protection and natural resource saving, many experts appeal that the iron and steel industry should exert itself to utilize less iron ore and more scrap steel instead. Realizing the advantages and essentiality of using scrap steel, it is necessary to find feasible methods to estimate whether and how manufacturers can profit from recycling and what the impact factors are. As a matter of fact, whether recycling scrap steel can increase the profit of a steel-making facility heavily depends on how effectively the reverse logistics operate. In this paper, we establish a closed-loop supply chain aiming at illustrating the whole business in the iron and steel industry, from steel-making to scrap steel recycling. Then, focusing on the reverse chain, we further explain some basic knowledge of scrap steel and analyze the characteristics of its recycling in more detail. Finally, we formulate a location-allocation model, considering only obsolete scrap steel recycling, which is a branch of the reverse chain, under the condition of multiple collection centers and multiple consumer zones. Keywords Reverse Logistics, The iron and steel industry, Scrap steel, Recycling, Location-allocation model

1. Introduction
In recent years, the total global output of crude steel in the iron and steel industry has gone up steadily at an average rate exceeding 2.4 percent. Along with the increasing quantity of steel production, the demand for scrap steel is rising simultaneously. Many developed countries make steel using the short-procedure technology of an electrical furnace fed with scrap steel, with the result that more than 70% of their steel output is made from scrap steel. On the contrary, some developing countries advance their steel production at the expense of consuming masses of iron ore and coke, and of sacrificing the environment. From the standpoint of environmental protection and natural resource saving, many experts appeal that the iron and steel industry should exert itself to utilize less iron ore and more scrap steel instead. Effective utilization of scrap steel can bring many advantages to the iron and steel industry [7]. Firstly, it can help to save energy. Steel-making is a major consumer of energy compared with other industries, and most energy is consumed in the procedures before iron-making, such as mining, mill run, and agglomeration. Using scrap steel to substitute molten iron can skip these procedures and thus contribute to saving a large amount of energy. Under ideal conditions, making steel from scrap needs only about 1/3 of the energy needed when starting from iron ore. Secondly, it can help to save investment. Increasing global competition in the steel market forces the iron and steel industry to cut down its investment, improve product quality, and reduce manufacturing cost to the utmost. Since scrap steel is much cheaper than pig iron, manufacturers can save about 67% of purchasing cost by using it. Thirdly, it can help to reduce environmental pollution. The pollution from steel-making production includes CO2, scoria, steel residue, masses of solid wastes and chemical sewage, and more than 80% of them are created in procedures such as mining, mill run, agglomeration, coking, and steel-making. Using scrap steel can skip some of these procedures; hence pollution can be kept to a minimum. Lastly, it can help to reduce the quantity of transportation. Using scrap steel can reduce the quantity of transportation by 70%. On the one hand, the decreased quantity can reduce general costs and increase productivity; on the other hand, it can relax the pressure on transportation and save transportation capacity. From this standpoint, recycling scrap steel also has some benefits for social development. Realizing the advantages and essentiality of scrap steel recycling, it is necessary to find out some feasible
Fund: National Natural Science Foundation of China (70625001, 70601004); Scientific Research Key Project of MOE (104064); New Century Excellent Talents Support Plan of MOE (NCET-04-280). Authors: Yang LIU (1982~), Tel: 83678349, Email: doris198221@163.com; Jiafu TANG (1965~), Tel: 83678349, Email: jftang@mail.neu.edu.cn


methods to estimate whether and how manufacturers can profit from recycling and what the impact factors are. In fact, whether recycling scrap steel can increase the profit of a steel-making facility depends heavily on how effectively reverse logistics operates. It is therefore essential to integrate proper management methods with the characteristics of the scrap steel recycling network in order to pursue the maximum profit. So far, few people have integrated reverse logistics with the iron and steel industry, which results in a lack of literature in this field. Compared with other countries, more researchers have studied raw material recycling in the iron and steel industry in America and Japan, and the issues they mainly attend to include improvement of raw material recycling procedures, techniques of industrial waste disposal, recycling network design, and market analysis and forecasting. Nakajima (2002) investigates the amounts of steel processing scrap generated by Japanese industry in order to develop a materials flow analyzed by an input-output table; he also analyses the iron and steel scrap flow in Japan for the purpose of understanding the present supply and demand system of scrap and obtaining a detailed and precise picture of the scrap flow. Taki (2005) attempts to give a detailed description of the actual state and features of steel-can recycling; as a result of an LCA (life cycle assessment) of steel cans, based on the fact that steel products are used cyclically in steel products, he proves that steel cans are environment-friendly containers with low CO2 emissions and low energy consumption over their life cycle. J. van den Brink (2002) illuminates a Cycle of Matter model, which covers scrap flows between the three main actors defining the cycle, i.e., scrap users, scrap generators, and scrap dealers, and incorporates the generation and use of three types of scrap, i.e., home scrap, prompt scrap, and obsolete scrap. His paper presents the main results of a detailed study of the national scrap recycling industry of a developing country, Tanzania. The scrap flows in the Cycle of Matter model were estimated on the basis of specially tailored surveys among scrap users, producers, and traders, and on the basis of secondary data and coefficients derived from the literature. The paper also demonstrates that the model is a powerful tool for the analysis of the scrap cycle and reveals that Tanzania possesses large surpluses of scrap. In this paper, we establish a closed-loop supply chain to illustrate the whole business of the iron and steel industry. Focusing on the reverse chain, we then explain some basic knowledge of scrap steel and analyze the characteristics of its recycling in detail. Finally, we formulate a location-allocation model for obsolete scrap steel recycling, a branch of the reverse chain, under the condition of multiple collection centers and multiple consumer zones. We hope this work can give manufacturers in the iron and steel industry some scientific and beneficial advice on scrap steel recycling. The rest of this paper is organized as follows. In Section 2 we establish a closed-loop supply chain for the whole business of steel products and give some analysis and illustration of it. In Section 3 we formulate a location-allocation model focusing on obsolete scrap steel recycling. Finally, we conclude the paper in Section 4.

2. Closed-loop supply chain for the whole business of steel products


2.1 Process of the whole business The iron and steel industry is a supply chain that combines raw material purchasing, smelting, foundry work, manufacturing, and product distribution into an integrated whole. Since it cannot be divorced from service management, transportation, and other logistics, it is nowadays also an enormous, globally covering supply chain. We establish a closed-loop supply chain, starting at crude iron and steel making and ending at scrap steel recycling, as shown in Fig. 1, to describe the whole process of business in the iron and steel industry. At the starting point, crude steel is produced and processed into basic iron and steel products in the iron and steel facility. In most cases, the output of the iron and steel facility cannot satisfy the demand of end users directly, so it must be sold to metal manufacturing facilities for further machining. In the metal manufacturing facilities, different classes of steel-containing commodities are made according to the demands of customers. Finally, all commodities are sold to various customer zones. Up to this stage, the direct supply chain is complete; during this chain, losses occur and scrap steel is generated. The whole process of business is closed by scrap steel recycling.

Fig. 1 Closed-loop supply chain for the whole business of steel products (figure omitted; forward chain: materials, the iron and steel facility, metal manufacturing facilities, products, customer zones, each stage sending wastes to disposal; reverse chain: home scrap steel, prompt scrap steel, and obsolete scrap steel flow, the latter via collection centers, to scrap steel processing facilities, which return eligible scrap steel and send wastes to disposal)

2.2 Recycling of scrap steel J. van den Brink (2002) classified scrap steel into three types, namely home scrap steel, prompt scrap steel, and obsolete scrap steel. From Fig. 1 we can find these three types of scrap steel and their generators. It is easy to classify all sites in Fig. 1 into four roles, namely scrap steel generators, scrap steel collectors, scrap steel dealers, and scrap steel users. The role of the iron and steel facility is twofold: on the one hand, it uses all types of scrap as its major input, hence it is a scrap steel user; on the other hand, it generates home scrap as a by-product of its production process [2], hence it is a scrap steel generator. The other two scrap steel generators are the metal manufacturing facilities and the customer zones. Unlike home and prompt scrap steel, which are sent to the scrap steel processing facility directly by their generators, obsolete scrap steel is first collected by special collection centers that exist as third parties in the steel market; these collection centers play the role of scrap steel collector. The scrap steel dealer is the scrap steel processing facility, which is the central core of the reverse chain and plays an intermediate role between scrap steel generators and scrap steel users. It has some special functions, namely: (1) receive scrap steel from inside and outside the iron and steel industry; (2) sort and pre-process all the scrap steel; (3) supply eligible scrap steel to scrap steel users.

3. A location-allocation model for recycling scrap steel


3.1 Problem formulation From the above, we can see that the scrap steel processing facility is the core department of the reverse chain for scrap steel recycling. It has three suppliers of scrap steel, namely the customer zones, the metal manufacturing facilities, and the iron and steel facility. The last two suppliers send home scrap steel and prompt scrap steel directly, while the customer zones supply obsolete scrap steel through collection centers. It should be noted that, among the materials of the scrap steel processing facility, the proportion of obsolete scrap steel is in practice much larger than that of the other two classes. Based on this, we focus our research on the supply of obsolete scrap steel, which plays the main part in scrap steel recycling. The obsolete scrap steel recycling network can be further organized as shown in Fig. 2. From a strategic point of view, we plan to answer the following questions: What size and how many collection centers should be opened? Where can the collection centers be located? How much obsolete scrap steel should each collection center handle? The problem can be described as follows: we are given the potential locations, capacity limitations, and fixed opening costs of collection centers; the transportation costs between each pair of customer zone and collection center and between each collection center and the scrap steel processing facility; and the locations of the customer zones and the scrap steel processing facility. We assume that a time period is one year and that there is only one scrap steel processing facility.

We know the annual supply and demand of obsolete scrap steel. We then want to find out which collection centers should be opened and how much scrap steel each should handle, with the objective of minimizing the total cost of the network. The model is formulated as a mixed integer program in the following section.
Fig. 2 Obsolete scrap steel recycling network (figure omitted; customer zones 1, ..., I send obsolete scrap steel to collection centers 1, ..., J, which forward it to the scrap steel processing facility)

3.2 Notation
Parameters:
$I$: the number of customer zones;
$J$: the number of potential collection centers;
$0$: index denoting the scrap steel processing facility;
$a$: the transportation cost of obsolete scrap steel per unit of weight and per unit of distance between a customer zone and a collection center;
$b$: the transportation cost of obsolete scrap steel per unit of weight and per unit of distance between a collection center and the scrap steel processing facility;
$c_j$: the opening cost of collection center $j$;
$d_{ij}$: the distance between customer zone $i$ and collection center $j$;
$e_{j0}$: the distance between collection center $j$ and the scrap steel processing facility;
$\alpha_i$: the total quantity of obsolete scrap steel generated by customer zone $i$ in a time period;
$\beta$: the total demand of the scrap steel processing facility for obsolete scrap steel;
$H_j$: the maximal storage capacity of collection center $j$.
Decision variables:
$p_{ij}$: the total quantity of obsolete scrap steel transported from customer zone $i$ to collection center $j$;
$q_{j0}$: the total quantity of obsolete scrap steel transported from collection center $j$ to the scrap steel processing facility;
$x_j = 1$ if collection center $j$ is open, $0$ otherwise;
$y_{ij} = 1$ if customer zone $i$ and collection center $j$ have business, $0$ otherwise;
$z_{j0} = 1$ if collection center $j$ and the scrap steel processing facility have business, $0$ otherwise.

3.3 Model Based on the above, we formulate the location-allocation model as follows:

$$\min \; \sum_{i=1}^{I}\sum_{j=1}^{J} a\, d_{ij}\, p_{ij} y_{ij} + \sum_{j=1}^{J} b\, e_{j0}\, q_{j0} z_{j0} + \sum_{j=1}^{J} c_j x_j \tag{1}$$

subject to

$$\sum_{j=1}^{J} p_{ij} y_{ij} = \alpha_i, \quad i \in \{1,2,\ldots,I\} \tag{2}$$

$$\sum_{j=1}^{J} q_{j0} z_{j0} = \beta \tag{3}$$

$$\sum_{i=1}^{I} p_{ij} y_{ij} = q_{j0} z_{j0}, \quad j \in \{1,2,\ldots,J\} \tag{4}$$

$$\sum_{i=1}^{I} p_{ij} y_{ij} \le H_j x_j, \quad j \in \{1,2,\ldots,J\} \tag{5}$$

$$\sum_{j=1}^{J} x_j \ge 1 \tag{6}$$

$$\sum_{j=1}^{J} y_{ij} \ge 1, \quad i \in \{1,2,\ldots,I\} \tag{7}$$

$$\sum_{j=1}^{J} z_{j0} \ge 1 \tag{8}$$

$$y_{ij} \le x_j, \quad i \in \{1,2,\ldots,I\},\; j \in \{1,2,\ldots,J\} \tag{9}$$

$$z_{j0} \le x_j, \quad j \in \{1,2,\ldots,J\} \tag{10}$$

$$p_{ij} \le M y_{ij}, \quad i \in \{1,2,\ldots,I\},\; j \in \{1,2,\ldots,J\}, \quad M \text{ a big enough number} \tag{11}$$

$$q_{j0} \le M z_{j0}, \quad j \in \{1,2,\ldots,J\}, \quad M \text{ a big enough number} \tag{12}$$

$$p_{ij},\, q_{j0} \ge 0; \qquad x_j,\, y_{ij},\, z_{j0} \in \{0,1\} \tag{13}$$

The objective function expresses the cost structure of the obsolete scrap steel recycling network, which consists of transportation costs and fixed opening costs. Constraint (2) ensures that all obsolete scrap steel generated by each customer zone is collected by the collection centers. Constraint (3) guarantees that the demand of the scrap steel processing facility for obsolete scrap steel is satisfied. Constraints (4) are input-output balance equations: the output of each collection center must equal its input. Constraints (5) are capacity constraints: the storage capacity of each collection center cannot be exceeded. Constraint (6) ensures that at least one collection center exists in the network to collect scrap steel from the customer zones. Constraint (7) guarantees that each customer zone is served by at least one collection center, and constraint (8) guarantees that the scrap steel processing facility has at least one collection center supplying scrap steel to it. Constraints (9)-(12) are logical constraints: any two depots in the network can have business only if both depots are open. The problem is a nonlinear integer program, and many algorithms can be applied to solve it. For instance, we can exploit the structure of the constraints and apply a Lagrangean relaxation algorithm to decompose the problem into several subproblems; by solving all subproblems and iterating the relaxation process, we can finally obtain the optimal solution.
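For concreteness, the sketch below prototypes a linearized version of model (1)-(13) with the PuLP library; the choice of solver library and all data values are assumptions of this example, not part of the paper (which suggests Lagrangean relaxation instead). The linearization rests on constraints (11)-(12): since they force $p_{ij} = 0$ whenever $y_{ij} = 0$ and $q_{j0} = 0$ whenever $z_{j0} = 0$, the bilinear products $p_{ij}y_{ij}$ and $q_{j0}z_{j0}$ can be replaced by $p_{ij}$ and $q_{j0}$, leaving a mixed integer linear program.

```python
# A minimal sketch (not the authors' code) of the linearized location-allocation
# model; all data below are invented placeholders. Requires: pip install pulp
import pulp

I, J = 3, 2                        # customer zones, candidate collection centers
a, b = 1.0, 0.8                    # unit transport costs (zone->center, center->facility)
c = [50.0, 60.0]                   # opening costs c_j
d = [[4, 7], [6, 3], [5, 5]]       # distances d_ij
e = [10.0, 8.0]                    # distances e_j0
alpha = [20.0, 30.0, 25.0]         # scrap generated by each zone
beta = sum(alpha)                  # processing-facility demand
H = [60.0, 60.0]                   # storage capacities H_j
M = beta                           # a big enough number

m = pulp.LpProblem("scrap_recycling", pulp.LpMinimize)
p = pulp.LpVariable.dicts("p", (range(I), range(J)), lowBound=0)
q = pulp.LpVariable.dicts("q", range(J), lowBound=0)
x = pulp.LpVariable.dicts("x", range(J), cat=pulp.LpBinary)
y = pulp.LpVariable.dicts("y", (range(I), range(J)), cat=pulp.LpBinary)
z = pulp.LpVariable.dicts("z", range(J), cat=pulp.LpBinary)

# objective (1): transportation costs plus fixed opening costs
m += (pulp.lpSum(a * d[i][j] * p[i][j] for i in range(I) for j in range(J))
      + pulp.lpSum(b * e[j] * q[j] + c[j] * x[j] for j in range(J)))
for i in range(I):
    m += pulp.lpSum(p[i][j] for j in range(J)) == alpha[i]      # (2)
    m += pulp.lpSum(y[i][j] for j in range(J)) >= 1             # (7)
m += pulp.lpSum(q[j] for j in range(J)) == beta                 # (3)
m += pulp.lpSum(x[j] for j in range(J)) >= 1                    # (6)
m += pulp.lpSum(z[j] for j in range(J)) >= 1                    # (8)
for j in range(J):
    m += pulp.lpSum(p[i][j] for i in range(I)) == q[j]          # (4) flow balance
    m += pulp.lpSum(p[i][j] for i in range(I)) <= H[j] * x[j]   # (5) capacity
    m += q[j] <= M * z[j]                                       # (12)
    m += z[j] <= x[j]                                           # (10)
    for i in range(I):
        m += p[i][j] <= M * y[i][j]                             # (11)
        m += y[i][j] <= x[j]                                    # (9)

m.solve()
print(pulp.LpStatus[m.status], pulp.value(m.objective))
```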

4. Conclusions
This paper applies reverse logistics to the iron and steel industry and formulates a location-allocation model that chooses the proper locations of collection centers from a set of potential locations while simultaneously taking their capacities into consideration. Although the model omits some factors that matter in practice, it can support macro-decisions on the design of scrap steel recycling networks.
References

[1] Moritz Fleischmann. Quantitative Models for Reverse Logistics. New York: Springer-Verlag Berlin Heidelberg, 2001
[2] J. van den Brink. The Tanzanian scrap recycling cycle. Technovation, 2002, 22: 187-197
[3] A. I. Barros, R. Dekker, V. Scholten. A two-level network for recycling sand: A case study. European Journal of Operational Research, 1998, 110: 199-214
[4] Andreas Klose, Andreas Drexl. Facility location models for distribution system design. European Journal of Operational Research, 2005, 162: 4-29
[5] Shams-ur Rahman, David K. Smith. Use of location-allocation models in health service development planning in developing nations. European Journal of Operational Research, 2000, 123: 437-452
[6] HUANG Guang-qiu, WANG Guo-zheng, ZHOU Jing. A Genetic Algorithms Approach to Optimum Locating of Multiple Stations in Logistics Transportation. Microelectronics and Computer, 2006, 23(3): 47-50
[7] Zhang H. Sustainable utilization of scrap iron and steel resource in China. Chinese Journal of China Population, Resources and Environment, 2003, 13(2): 106-110


A Dynamic Inventory/Allocation Problem Based on Internet Auctions


Liu Shuren, Hu Qiying
Department of Mathematics, Shanghai University, Shanghai 200444, P.R. China; College of International Business and Management, Shanghai University, Shanghai 201800, P.R. China

Abstract We study a dynamic inventory/allocation problem in which items are sold through Internet auctions. We address both finite and infinite horizon models with the objective of maximizing total expected discounted profit or long-run average expected profit. We show that the period inventory is managed by the classical (j, J) policy in the finite horizon models, and by a stationary (j, J) policy under both the infinite horizon discounted profit and the average profit criteria. Key words Inventory control, Internet auction, Markov decision process, Optimal (j, J) policy

1. Introduction
In this paper, we study the problem of a seller who uses an online auction mechanism to sell a replenishable product. The number of bidders in each period as well as the individual bidders' valuations are random. The seller purchases his good from an outside supplier at an ordering cost that includes both a fixed ordering cost and a variable cost proportional to the amount ordered. There is a holding cost for inventory and a leadtime for replenishment. The seller must decide how to replenish his stock over time to maximize his profit. Such a problem is clearly important, not only in the retail industry, but also in manufacturing environments in which production/distribution decisions can be complemented with auction/pricing strategies to improve the firm's bottom line. Traditional inventory models focus on effective replenishment strategies and typically assume that a commodity's price is exogenously determined. In recent years, however, a number of industries have used innovative pricing strategies to manage their inventory effectively. These developments call for models that integrate inventory control and pricing strategies. A body of research exists on dynamic combined pricing and inventory strategies (for example, Thomas, 1974; Thowsen, 1975; Federgruen and Heching, 1999; Chen and Simchi-Levi, 2004a, b)[1,2,3,4,5]. Thomas (1974)[1] considered a model that includes a fixed ordering cost and proposed a simple policy called (s, S, p) to control the system. Chen and Simchi-Levi (2004a, b)[4,5] proved that such a policy is indeed optimal when (1) demand is additive in a finite time horizon, and (2) the seller's objective is to either maximize total expected discounted profit or maximize long-run average profit in the infinite-horizon model, but not necessarily so under multiplicative demand uncertainty in a finite time horizon. Thowsen (1975)[2] considered a model with zero fixed ordering cost and showed that a base-stock-list-price policy is optimal and that the optimal price is a decreasing function of the starting inventory. Federgruen and Heching (1999)[3] extended Thowsen's work to a more general setting: they showed that, with general stochastic demands, a base-stock-list-price policy remains optimal when the price can either be changed arbitrarily or decreased only. A parallel stream of research combines optimal auctions and inventory control, which is a relatively new topic. Van Ryzin et al. (2004)[6], based on Vulcano et al. (2002a, b)[7,8], studied optimal auctioning and ordering in an infinite horizon inventory-pricing system. They showed that a base-stock, reserve-price-auction policy is optimal, similar to the base-stock-list-price policies in Thowsen (1975)[2] and Federgruen and Heching (1999)[3]. This policy says that optimal allocation can be achieved by conducting a first-price or second-price auction with a fixed reserve price in every period; the reserve price is related only to the replenishment cost of the good; and the optimal replenishment policy is to order up to a fixed base stock level at the end of each period. An assumption made by van Ryzin et al. is that the quantity of items offered is determined before the auction ends. In Du et al. (2005)[9], an allocation problem is studied for selling a given number of items in a given number of online auctions, where the quantity of items offered to each auction is determined before the auction opens.
Based on this, we study in this paper the problem of a seller who uses an auction mechanism to sell a replenishable
This research has been supported by National Natural Science Funds of China (The studies about some problems in revenue management, No: 70571049)


product with fixed ordering cost. We show that the optimal replenishment policy for this problem is quite simple and similar to the optimal replenishment policies in Thomas (1974)[1] and Chen and Simchi-Levi (2004a, b)[4,5]: the period inventory is managed by the classical (j, J) policy. The remainder of this paper is organized as follows. In Section 2, the dynamic inventory/allocation problem of selling items through Internet auctions is described and a Markov decision process model is presented. In Section 3 we characterize the optimal policy for a general finite planning horizon and extend the results for the finite-horizon model to the infinite-horizon case under both the infinite horizon discounted criterion and the long-run average profit criterion. Finally, Section 4 concludes.

2. The Model
The problem is described as follows. The seller sells items through sequential Internet auctions, running auctions on an Internet website one after another. The duration of each auction is fixed at length $t_0$. At the beginning of each auction, the seller determines how many items on hand are to be auctioned. On the other hand, the seller can order items from his supplier at the beginning of the auction. It is assumed that the leadtime of each order is $t_0$; that is, items ordered at the beginning of a period arrive at the beginning of the next auction. We suppose that customers arrive according to a Poisson process with arrival rate $\lambda$. Each customer is risk-neutral. Moreover, each customer wishes to purchase at most one item and has a valuation for the item. The valuations are private, symmetric, i.i.d. draws from a uniform distribution $F(\cdot)$, which is strictly increasing with a continuous density function $f(\cdot)$ on an interval $[\underline{v}, \bar{v}]$, with $F(\underline{v}) = 0$ and $F(\bar{v}) = 1$. This means that customers have independent private valuations (IPV). Furthermore, it is assumed that each customer bids honestly for the auctioned items, as is the case, for example, when the auction mechanism is the first-price sealed-bid auction for multiple items. Each auction is a multi-item auction with reserve price $v$. In such an auction with $s$ items, each of the $s$ highest bidders wins an item if her bid is not less than the reserve price, and pays the value of her bid for the item. The total profit of the seller from the sequential auctions is the sum of the profits gained from each auction, and the seller's objective is to maximize his total profit. The notation is as follows; an epoch is defined as the beginning of an auction.
$n$: index for horizons;
$i$: state variable, the stock level in a period;
$j$: decision variable, the order quantity in a period;
$s$: decision variable, the number of items offered to the auction in a period, so $0 \le s \le i + j$;
$h$: holding cost per item per unit time (an item is given to the winner at the end of the auction);
$K$: setup cost for one order;
$c$: variable ordering cost per item;
$\beta$: the one-period discount factor.


Then, from Du et al. (2005)[9], we know that the probability that exactly $m$ bidders arrive whose bids are larger than or equal to $v$ is given by

$$q_m = \frac{\left[\rho(\bar{v} - v)\right]^m}{m!}\, e^{-\rho(\bar{v} - v)},\; m \ge 0; \qquad p_s = \sum_{m=s}^{\infty} q_m = 1 - \sum_{m=0}^{s-1} q_m, \tag{1}$$

where $\rho = \dfrac{\lambda t_0}{\bar{v} - \underline{v}}$.

The revenue gained by the seller (not including the holding cost) from an auction in which $s$ items are offered is given by

$$r(s) = E\sum_{k=1}^{s} b_k = s\bar{v} - \frac{s(s+1)}{2\rho} + \sum_{m=0}^{s-1}\left(\frac{s-m+1}{2\rho} - v\right)(s-m)\, q_m, \tag{2}$$

and the transition probability is

$$p_{il} = \begin{cases} q_{i-l}, & i-s < l \le i, \\ p_s, & l = i-s, \\ 0, & l < i-s \text{ or } l > i. \end{cases} \tag{3}$$

About the revenue function $r(s)$, we have the following lemma from Du et al. (2005)[9].
Lemma 1: The revenue function $r(s)$ is strictly concave and increasing in $s$.
Define

$$\delta(j) = \begin{cases} 1 & \text{if } j > 0, \\ 0 & \text{otherwise.} \end{cases}$$

The ordering cost function includes both a fixed cost and a variable cost and is calculated for a period as $K\delta(j) + cj$. In a period, if the initial inventory is $i$, the order quantity is $j$, and the number of items offered to the auction is $s$, then the one-period expected profit is $r(s) - K\delta(j) - cj - hi$.
The objective of the dynamic inventory control based on Internet auctions is to decide on joint ordering and allocation policies so as to maximize the total expected discounted profit over the entire planning horizon, or the long-run average profit in the infinite-horizon case. To find the optimal strategy, let $V_n(i)$ be the maximum total expected discounted profit (discounted relative to period $n$) when $n$ periods remain in the plan and the current inventory level is $i$. Then the optimality equation for the finite horizon is as follows, for $n = 1, 2, \ldots, N$:

$$V_n(i) = \max_{j \ge 0,\; 0 \le s \le i+j}\left\{-K\delta(j) - cj + r(s) + \beta\sum_{m=0}^{\infty} q_m V_{n-1}(i + j - m \wedge s)\right\} - hi, \tag{4}$$

where $m \wedge s = \min(m, s)$,

with boundary condition $V_0(i) = 0$ for $i \ge 0$, which means that the items remaining at the end have no value at all.
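To make the recursion concrete, the sketch below evaluates equation (4) by backward induction on a truncated state space. It is an illustration only, not the authors' implementation: all parameter values (v_lo, v_hi, lam, t0, K, c, h, beta) and the truncation limits are assumptions, and the reserve price is set to the lower support point.

```python
# A minimal value-iteration sketch (illustrative assumptions throughout) of the
# finite-horizon optimality equation (4).
import math

v_lo, v_hi = 1.0, 2.0          # valuation support [v_lo, v_hi]; reserve price = v_lo
lam, t0 = 3.0, 1.0             # Poisson arrival rate and auction duration
rho = lam * t0 / (v_hi - v_lo)
K, c, h, beta = 2.0, 0.5, 0.1, 0.95   # setup cost, unit cost, holding cost, discount
I_MAX, M_MAX, N = 12, 40, 30   # state-space, Poisson and horizon truncations

mean = rho * (v_hi - v_lo)     # mean number of qualified bidders per auction
q = [math.exp(-mean) * mean**m / math.factorial(m) for m in range(M_MAX)]

def r(s):
    # expected auction revenue when s items are offered, equation (2)
    if s == 0:
        return 0.0
    tail = sum(((s - m + 1) / (2 * rho) - v_lo) * (s - m) * q[m] for m in range(s))
    return s * v_hi - s * (s + 1) / (2 * rho) + tail

rv = [r(s) for s in range(I_MAX + 1)]
V = [0.0] * (I_MAX + 1)        # boundary condition V_0(i) = 0
for n in range(1, N + 1):      # backward recursion of equation (4)
    V = [max(-(K if j > 0 else 0.0) - c * j + rv[s]
             + beta * sum(q[m] * V[i + j - min(m, s)] for m in range(M_MAX))
             for j in range(I_MAX + 1 - i) for s in range(i + j + 1)) - h * i
         for i in range(I_MAX + 1)]

print([round(x, 2) for x in V])   # the value function, concave in i
```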

3. Results
In the finite-horizon case, we show by induction that the optimal replenishment policy is quite simple, similar to the optimal replenishment policies in Thomas (1974)[1] and Chen and Simchi-Levi (2004a)[4]. To find the optimal strategy, we have from equation (4) that

$$V_n(i) = \max_{j \ge i,\; 0 \le s \le j}\left\{-K\delta(j-i) - c(j-i) + r(s) + \beta\sum_{m=0}^{\infty} q_m V_{n-1}(j - m \wedge s)\right\} - hi.$$

Let

$$g_n(j, s) = r(s) - cj + \beta\sum_{m=0}^{\infty} q_m V_{n-1}(j - m \wedge s), \qquad s_n(j) = \max\{s \mid \Delta_s g_n(j, s) \ge 0,\; s = 1, 2, \ldots, j\},$$

where $\Delta_s g_n(j, s) := g_n(j, s+1) - g_n(j, s)$. From the induction hypothesis that $V_{n-1}(j)$ is concave, we can prove that $g_n(j, s)$ is concave in $s$, similarly to Proposition 5.3 in Du et al. (2005)[9]. So $s_n(j)$ is the optimal allocation quantity when $j$ items are on hand, and we thus write $g_n(j) = g_n(j, s_n(j))$. Moreover, $g_n(j) = g_n(j, s_n(j))$ is strictly concave in $j$ for each $n \ge 1$, similarly to Theorem 5.1 in Du et al. (2005)[9]. Hence

$$V_n(i) = \max_{j \ge i}\left\{-K\delta(j-i) + g_n(j, s_n(j))\right\} + (c - h)i = \max\left\{g_n(i, s_n(i)),\; -K + \max_{j > i} g_n(j, s_n(j))\right\} + (c - h)i.$$

Since $g_n(j) = g_n(j, s_n(j))$ is strictly concave, it is easy to show by induction that the following theorem holds.
Theorem 1: For any $n$, we have: 1) $V_n(i)$ is concave in $i$; 2) there are two constants $j_n^* < J_n^*$ such that at state $i$ with $n$ periods remaining, it is optimal to order the quantity $J_n^* - i$ if and only if $i \le j_n^*$; otherwise, no order is placed.
Theorem 1 says that the inventory strategy is a $(j, J)$ policy for the finite horizon discounted problem when a seller uses an auction mechanism to sell a replenishable product with fixed ordering cost: if the inventory level at the beginning of period $n$ is no more than the reorder point $j_n^*$, an order is placed to raise the inventory level to the order-up-to level $J_n^*$; otherwise, no order is placed.
In the case of the standard stochastic inventory problem, it is natural to expect that a stationary $(j, J)$ policy is optimal for both the infinite horizon discounted criterion and the long-run average profit criterion, since a $(j, J)$ policy is optimal for the finite horizon counterpart. We consider the infinite-horizon case where all data are stationary. By the method of successive approximation, we show that the optimal values and optimal policies for the infinite horizon discounted criterion are exactly the limits of those for the finite horizon discounted problem; at the same time, we show the convergence of the discounted-profit optimal values and policies to the long-run average-profit optimal values and policies by a vanishing discount approach. Our main result is the following theorem.
Theorem 2: A stationary $(j, J)$ type policy is optimal for either the infinite horizon discounted criterion or the long-run average profit criterion. This is similar to the optimal stationary inventory policy in Chen and Simchi-Levi (2004b)[5].
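As a numerical companion to Theorem 1, the hypothetical helper below (reusing the `q`, `r`, and `V` objects of the earlier value-iteration sketch) evaluates $g_n(j) = \max_s g_n(j, s)$ and reads off an order-up-to level $J^*$ and a reorder point $j^*$. It is a sketch of the construction used in the proof, not code from the paper.

```python
# Hypothetical continuation of the earlier sketch: recover (j*, J*) from one
# backward-induction step, following the argument behind Theorem 1.
def policy_from_value(V, K, c, beta, I_MAX, M_MAX):
    def g(j):                          # g_n(j) = max over s of g_n(j, s)
        best = -float("inf")
        for s in range(j + 1):
            cont = sum(q[m] * V[j - min(m, s)] for m in range(M_MAX))
            best = max(best, r(s) - c * j + beta * cont)
        return best
    gvals = [g(j) for j in range(I_MAX + 1)]
    J_star = max(range(I_MAX + 1), key=lambda j: gvals[j])   # order-up-to level
    # reorder point: largest i below J* at which paying K to order up still wins
    j_star = max((i for i in range(J_star)
                  if -K + gvals[J_star] >= gvals[i]), default=0)
    return j_star, J_star
```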

4. Conclusions
In this paper, we study the problem of a seller who uses an auction mechanism to sell a replenishable product. The number of bidders in each period as well as the individual bidders' valuations are random. The seller purchases his good from an outside supplier at an ordering cost including both a fixed ordering cost and a variable cost proportional to the amount ordered. There is a holding cost for inventory and a leadtime for replenishment. The seller must decide how to replenish and allocate his stock over time to maximize his profit. We model the problem as a Markov decision process. We then address the properties of the optimal inventory policy in both finite and infinite horizon models with the objective of maximizing the total expected discounted profit or its time average value, and show that the period inventory is managed by the classical $(j, J)$ policy in the finite horizon model, while a stationary $(j, J)$ policy is optimal for either the infinite horizon discounted profit or the average profit. Surprisingly, although pricing and Internet auctions are very different mechanisms, we obtain the same replenishment policy under inventory control with innovative pricing strategies and under inventory control based on Internet auctions. Further research may include the inventory control problem that combines auctions and list prices. Often, firms that sell with an auction mechanism also use a regular, fixed-price mechanism in parallel. In the retail setting, this is often achieved by using each mechanism in a different channel; in industrial settings, a firm may have fixed-price demand as a result of long-term contracts while at the same time participating in auctions for spot-purchase customers. In addition, ours is a lost-sales model; one may further consider the inventory control problem when backorders are allowed.

References

[1] Thomas, L. J. Price and production decisions with random demand. Operations Research, 1974, 22: 513-518
[2] Thowsen, G. T. A dynamic, nonstationary inventory problem for a pricing/quantity setting firm. Naval Research Logistics Quarterly, 1975, 22: 461-476
[3] Federgruen, A., A. Heching. Combined pricing and inventory control under uncertainty. Operations Research, 1999, 47(3): 454-475
[4] Chen, X., D. Simchi-Levi. Coordinating inventory control and pricing strategies with random demand and fixed ordering cost: the finite horizon case. Operations Research, 2004a, 52(6): 887-896
[5] Chen, X., D. Simchi-Levi. Coordinating inventory control and pricing strategies with random demand and fixed ordering cost: the infinite horizon case. Mathematics of Operations Research, 2004b, 29(3): 698-723
[6] Van Ryzin, G. J., G. Vulcano. Optimal auctioning and ordering in an infinite horizon inventory-pricing system. Operations Research, 2004, 52(3): 346-367
[7] Vulcano, G., G. J. van Ryzin, C. Maglaras. Optimal dynamic auctions for revenue management. Management Science, 2002a, 48(11): 1388-1407
[8] Vulcano, G., G. J. van Ryzin, C. Maglaras. Optimal dynamic auctions for revenue management. Manufacturing and Service Operations Management, 2002b, 4(1): 7-11
[9] Du, L., Q. Hu, W. Yue. Analysis and evaluation for optimal allocation in sequential Internet auction systems with reserve price. Dynamics of Continuous, Discrete and Impulsive Systems, Series B: Applications and Algorithms, 2005, 12(4): 617-631


Reliability of Transportation Network Service Capacity Based on Effective Agile Theory*


Song Rui, He Shiwei
School of Traffic and Transportation, Beijing Jiaotong University, 100044, Beijing, People's Republic of China

Abstract In this paper, the new concept of effective agility is proposed. The relation between Just-In-Time transportation service and effective agility is shown based on k-shortest-path theory, and a method for evaluating the reliability of transportation network service capacity is given. A traffic assignment model with link capacity constraints built on the concept of effective agility is proposed, and a GA solution is designed. A case study with the developed software shows that the effective agility concept and the calculating method for the reliability of transportation network service capacity are feasible and promising for practical use. Key words Effective agility, Transportation capacity, Railroad network, Reliability, Genetic algorithms

1. Introduction
Today's markets are becoming dynamic and competitive, and a firm's successful management needs to be able to adapt to continuous change and learn from its environment. One of the ways for an enterprise to succeed is to become agile in its production and operations procedures. Agile theory came from agile manufacturing (AM), which has been defined as the capability of surviving and prospering in a competitive environment of continuous and unpredictable change by reacting quickly and effectively to changing markets, driven by customer-designed products and services. Critical to successfully accomplishing agile manufacturing are a few enabling technologies such as the standard for the exchange of products (STEP), concurrent engineering, virtual manufacturing, and information and communication infrastructure. AM can lower manufacturing costs, increase market share, satisfy customer requirements, facilitate the rapid introduction of new products, eliminate non-value-added activities, and increase manufacturing competitiveness [1,2]. Agile manufacturing is complementary to quick response, an initiative initially developed for industries such as textiles, clothing, footwear, and transportation. Quick response manufacturing (QRM) is an expansion of time-based competition (TBC) strategies, which use speed for competitive advantage; essentially, QRM stems from the single principle of reducing lead times[1]. Quick response refers fundamentally to the speed-to-market of products that move rapidly through the production and delivery cycle, from raw material and component suppliers, to manufacturer, to retailer, and finally to end consumers. In recent research, people have found that QRM does not merely mean shortening production time, but producing in time; thus, agility should be effective agility, and quick response really means that manufacturing be done as quickly as the customer requests. However, few works on effective agility and its applications have been openly reported. Evaluating the reliability of transportation network service capacity is one of the important tasks of transportation departments, and the reliability of network service capacity is also an important area of transportation science. In China, research on the reliability of transportation service capacity is mainly limited to lines and stations [3,4]; few works are found in the field of network service capacity reliability. In other countries, Ref. [5] proposed a model and an algorithm for measuring the reliability of road network service capacity, and Ref. [6] gave a method for evaluating the reliability of double-container railroad network capacity in the U.S.A. Since these works are based on the maximum OD output without considering network mileage and OD structure, their results do not satisfactorily reflect network features. Ref. [7] proposed a concept of effective network service capacity and a measuring method that considers the network features; thus, the reliability of network service capacity will be addressed using the concept and measuring method of Ref. [7]. The remainder of the paper is organized as follows: first, the new concept of effective agility and the
* Supported by the National Natural Science Foundation (No. 70371014) and the Doctoral Foundation (20040004012) of China.


reliability concept of transportation network service capacity based on effective agility are proposed. In Section 3, the calculating method for the reliability of network service capacity is proposed. In Section 4, a case study on measuring the reliability of network service capacity is presented. Finally, conclusions are given in Section 5.

2. Concept of the Reliability of Transportation Network Service Capacity Based on Effective Agility


The definition of effective agility is as follows: the capability of surviving and prospering in a competitive environment of continuous and unpredictable change by readjusting operation times effectively to changing markets according to customer demands, especially time demands. The definition of the reliability of transportation network service capacity based on effective agility is as follows: under the conditions of a given network, fixed facilities, moving devices, and a certain level of operations management, within a unit time, the transportation network system has the possibility of successfully transporting the required amount of freight or commodity traffic from all origins to their destinations using its network service capacity at the level of effective-agility service requirements. From the above definition, the service capacity of the transportation network is the key factor determining network reliability. It is the actual usable capacity of the network per unit time under the conditions of fixed devices, human resources, management, and service levels; its output unit is ton-kilometers per unit time (such as month or year). The total effective service capacity of a transportation network is influenced by the use of transport devices, flow routings, and transport demands; flow routes have a large influence on the effective service capacity when the transport devices and forecast demand are fixed. Ref. [7] discussed the concepts of effective service capacity in detail, namely the minimum k-degree effective service capacity of a transportation network (Min-KESCTN), the maximum k-degree effective service capacity (Max-KESCTN), the most economical effective service capacity (EESCTN), and the actual effective service capacity (AESCTN), based on different cost objectives and obtained by adopting k-shortest-path technology. Among these concepts, the definition of Max-KESCTN is given here since it is interconnected with the reliability of railroad network capacity. In the Max-KESCTN case, when war, accidents, or other disasters happen, the maximum forecast OD flows of the planning period select their paths from k possible routes, not necessarily according to the shortest distance. So, by selecting paths from the k possible routes according to the maximum ton-kilometers objective under line capacity constraints, the possible maximum total capacity can be calculated. Max-KESCTN is a parameter reflecting the reliability and flexibility of a railroad network, and it can be used as the upper bound of the effective service capacity. Here, the k-shortest path is connected with the Just-In-Time transportation service level based on effective agility, such as accepted travel time and distance. Other kinds of time costs of transportation operations can also be transformed in a generally formed virtual transportation network; thus, the customers' agile demands on transportation time can be reflected within the k shortest paths. The k-shortest-path concept is also very useful for grasping the service level of transportation systems, such as railroad systems, that do not have an accurate definition of service level as road systems do. The reliability of transportation network service capacity is the probability of accommodating a certain traffic demand under stochastic conditions in which the endogenous operational parameters and exogenous resource parameters of the transportation system are constantly changing.
The influence of operations delays, device failures, and environmental factors such as bad weather, war, or other disasters on transportation network service capacity and reliability is hard to evaluate exactly; usually an experience-based coefficient is used in actual calculations. We will use the above concepts to evaluate the reliability of railroad network service capacity.

3. Method for Evaluating the Reliability of Transportation Service Network


3.1 Equation for Calculating the Reliability of Railroad Network Capacity In daily operations, failures, delays, or interruptions of the transportation system may happen for various uncertain reasons. In such cases, some train paths are sometimes interrupted completely and the freight OD flow

could only be transported through other paths. Suppose the number of paths is set to k when the accepted service level based on effective-agile transport time is given. Then the number of potential feasible paths is less than k when some paths are interrupted. Sometimes the paths are not interrupted completely and their capacity is only partially influenced for some time; the influence on these paths' capacity should be considered as well. Suppose there are X uncertain scenarios, where scenario x represents a set of stochastic factors that happen at the same time, $x \in \{1, 2, \ldots, X\}$. The stochastic factors include natural and man-made aspects such as operations delays, device failures, and environmental influences. Let $P_x$ denote the probability of stochastic scenario x, with $P_x \ge 0$ and $\sum_{x=1}^{X} P_x = 1$, and let the Max-KESCTN be $Z_{\text{max-KESCTN}}$. Since the Max-KESCTN is the maximum output provided by the railroad network system, it reflects the maximum possibility of carrying the OD flows to their destinations over all k shortest possible paths. It is reasonable to use the ratio of the expected value of Max-KESCTN under the various scenarios that might influence the network capacity to the Max-KESCTN under the ideal scenario to reflect the reliability of the network capacity. Let the reliability of network service capacity be R; then

$$R = \sum_{x=1}^{X} P_x\, \frac{Z^x_{\text{max-KESCTN}}}{Z_{\text{max-KESCTN}}} \tag{1}$$

3.2 Method for Calculating Max-KESCTN
(1) Predict the maximum transport demand of each O-D flow for the planning year.
(2) Calculate the k shortest paths for each O-D flow [8] based on the effective agility concept.
(3) Adjust the k shortest paths manually according to the actual car flow routing if some paths are enforced or fixed.
(4) Under the capacity constraints, and according to the maximum ton-kilometers output objective, assign the O-D flows to different routes with the model and GA method shown in Sections 3.3 and 3.4.
(5) The sum over all links of the assigned flow times the line cost (such as line length) is the Max-KESCTN.
3.3 General Freight O-D Flow Distribution Model Based on K-shortest Paths
Definitions: $m$: the $m$th O-D flow, $m \in M$, where $M$ is the index set of all O-D flows; $i$: the $i$th link, $i \in L$, where $L$ is the link set of the network; $p$: the $p$th route, $p \in \Gamma^m$, where $\Gamma^m$ is the set of routes that the $m$th O-D flow may select, obtained with k-shortest-path methods; $L_p^m$: the link set of the $p$th route of the $m$th O-D flow; $N^m$: the demand of the $m$th O-D flow; $N^m_{\max}$: the maximum demand of the $m$th O-D flow in the planning year; $C_i$: the capacity of link $i$; $w_i$: the cost of link $i$; $\theta_p^m$: boolean variable, 1 if the $m$th O-D flow selects route $p$, 0 otherwise; $a_{imp}$: the flow added on link $i$ of route $p$ when it is selected by the $m$th O-D flow, equal to $N^m$ when $\theta_p^m = 1$ and to 0 when $\theta_p^m = 0$.

The freight car flow distribution model based on k-shortest paths is formulated as follows:

$$\max \; Z = \sum_{m \in M}\sum_{p \in \Gamma^m}\sum_{i \in L_p^m} w_i\, a_{imp} \tag{2}$$

subject to

$$a_{imp} = \theta_p^m N^m, \quad m \in M,\; p \in \Gamma^m,\; i \in L_p^m \tag{3}$$

$$\sum_{m \in M}\sum_{p \in \Gamma^m} a_{imp} \le C_i, \quad i \in L \tag{4}$$

$$\sum_{p \in \Gamma^m} \theta_p^m = 1, \quad m \in M \tag{5}$$

$$0 \le N^m \le N^m_{\max}, \quad m \in M \tag{6}$$

$$\theta_p^m \in \{0, 1\}, \quad m \in M,\; p \in \Gamma^m \tag{7}$$

In the model, Equation (3) is the O-D flow conservation constraint; Equation (4) gives the service capacity constraints of the links; Equation (5) states that each O-D flow can select only one route from its route set; Equation (6) states that each O-D flow lies between 0 and its maximum volume in the planning year; and Equation (7) gives the logical constraints. With a maximizing objective, the freight car distribution model can easily evaluate the maximum k-degree effective service capacity of the railroad network.
3.4 Genetic Algorithms (GAs) Solution The above model is a large-scale mixed 0-1 programming model, and GAs are used for its solution; the k shortest paths can be found with an existing method[8]. The GAs are briefly stated as follows:
(1) Chromosome decoding: a 0-1 decoding method is used. Each chromosome represents a potential solution of the route selections of all O-D flows. The length of a chromosome equals the total number of routes potentially selected by all O-D flows. Each gene represents one possible route: 1 denotes that the route is selected, 0 that it is not. By constraint (5), each O-D flow can select only one route at a time. For example, the chromosome 100,010,...,010 denotes that each O-D flow has three possible routes; each O-D flow adopts its maximum volume in the planning year.
(2) Fitness function: let $f_z$ be the total service capacity of the railroad network, $f_k$ the objective value of the $k$th chromosome, $\alpha$ a constant, and $\varphi_k$ the fitness of the $k$th chromosome. For the car flow distribution for Max-KESCTN, $f_k$ = (sum over all links of flow times cost) minus (the penalty $w$ if constraints (4) are not satisfied), where $w$ = penalty coefficient times the overflow volume of each link; the fitness is $\varphi_k = \alpha f_k / f_z$.
(3) Selection strategy: more reproductive chances are given to the fittest populations; the M highest-rated chromosomes are preserved to the next generation.
(4) Genetic operators: the crossover operator adopts a partial matching crossover method, and the matching point must be the separation point of two O-D flows to guarantee the feasibility of the crossover chromosome. The mutation operator adopts a multi-point mutation method: each time, one O-D point is selected for random mutation, and constraint (5) must be satisfied to guarantee the feasibility of the mutated chromosome.
(5) Convergence condition: a predefined maximum number of generations or time limit is reached.
The detailed procedure of GAs can be found in Ref. [9]. If more exact results are needed, the minimum gap between each link capacity and the accumulated flow on that link along one path is used as the flow of the unassigned O-D flow path; the final O-D flow assignment result is then used to calculate the Max-KESCTN. Moreover, the newly added flow, along with the previously assigned flow, can be recalculated in the GAs, and the procedure can be iterated many times until no more new flows are added to the final result.
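The sketch below is a minimal GA of this kind under invented toy data (the routes, N_max, w, C, and the penalty coefficient PEN are all assumptions, not the paper's case data). It encodes one route index per O-D flow, so constraint (5) holds by construction, and it penalizes violations of the capacity constraints (4) in the fitness, as described above.

```python
# A minimal GA sketch (illustrative assumptions throughout) for the route-
# selection model (2)-(7): one gene per O-D flow, holding that flow's route index.
import random

random.seed(1)
# toy data: 3 O-D flows, each with k = 3 candidate routes given as link lists
routes = {0: [[0, 1], [2], [0, 3]],
          1: [[1, 4], [2, 3], [4]],
          2: [[0, 2], [3, 4], [1, 3]]}
N_max = [20, 15, 25]                  # maximum O-D demand in the planning year
w = [5, 8, 6, 7, 4]                   # link costs (e.g., lengths)
C = [30, 30, 30, 30, 30]              # link capacities
PEN = 10.0                            # penalty coefficient for capacity overflow

def fitness(chrom):                   # chrom[m] = index of route chosen by flow m
    load = [0.0] * len(C)
    for m, p in enumerate(chrom):
        for i in routes[m][p]:
            load[i] += N_max[m]       # each flow ships its maximum volume
    obj = sum(w[i] * load[i] for i in range(len(C)))            # ton-kilometers
    overflow = sum(max(0.0, load[i] - C[i]) for i in range(len(C)))
    return obj - PEN * overflow       # penalize violated capacity constraints (4)

pop = [[random.randrange(3) for _ in routes] for _ in range(30)]
for gen in range(200):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                  # preserve the highest-rated chromosomes
    children = []
    while len(children) < 20:
        a, b = random.sample(elite, 2)
        cut = random.randrange(1, len(a))     # crossover at an O-D boundary
        child = a[:cut] + b[cut:]
        m = random.randrange(len(child))      # mutate one O-D flow's route
        child[m] = random.randrange(3)
        children.append(child)
    pop = elite + children
best = max(pop, key=fitness)
print(best, fitness(best))
```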

4. Case Study
The test network is composed of 8 nodes (stations) and 12 links. The network structure and link costs are shown in Fig. 1; the upper bound of each link capacity is shown in Fig. 2; the forecast maximum OD flows are shown in Tab. 1.
Fig. 1 The network structure and link costs (figure omitted). Fig. 2 The upper bound of link capacity (figure omitted).
Note: The data are all in conversion units. The lower bound of capacity is 0.


Tab. 1 The forecasting maximum OD flows
O\D  1  2  3  4  5  6  7  8
 1   0  1  1  1  1  1  1  1
 2   1  0  1  1  1  1  1  1
 3   1  1  0  1  1  1  1  1
 4   1  1  1  0  1  1  1  1
 5   1  1  1  1  0  1  1  1
 6   1  1  1  1  1  0  1  1
 7   1  1  1  1  1  1  0  1
 8   1  1  1  1  1  1  1  0

With the above algorithm, the final results are as follows.
(1) kth-degree Max-KESCTN (K ≤ 5). With a population size of 30, a crossover rate of 0.8, a mutation rate of 0.2, 3000 generation steps, and the constant α = 1, the fitness of the optimal chromosome is 0.0990 and the kth-degree Max-KESCTN is 380. It should be noted that, unlike conventional optimization, which terminates when the minimum value is reached within tolerance, there is no true termination for a genetic algorithm: termination is defined either by the chosen number of generations or after a defined number of consecutive identical fitness solutions, so one may terminate the algorithm prematurely. For this example, the run was set to terminate after 3000 generations; nevertheless, in all cases the objective function converged for the scenario within 2000-2500 generations, and hence the execution time for all scenarios was not impacted by the choice of parameters. Sensitivity analysis was also conducted by changing the genetic parameters, the crossover probability and the mutation probability; however, no appreciable difference was noticed in the problem solutions when the parameters were systematically changed. For example, when the crossover probability was kept at 0.8, 0.7, or 0.6 and the mutation probability at 0.2, 0.3, or 0.4, the objective value did not show any consistent dependence on either probability.
(2) Reliability of railroad network service capacity. Suppose the network capacity is influenced by the following 3 scenarios with uncertain factors. Scenario 1: link 4-5 is interrupted completely and the capacity of link 4-7 is reduced from 10 to 9; the probability of scenario 1 is p1 = 30%. Scenario 2: link 7-5 is interrupted completely; the probability of scenario 2 is p2 = 40%. Scenario 3: link 6-5 is interrupted completely; the probability of scenario 3 is p3 = 30%. The Max-KESCTN for the above 3 scenarios is calculated as follows (here K ≤ 5). Scenario 1: $Z^1_{\text{max-KESCTN}} = 302$. Scenario 2: since all the OD flows are transported to their destinations within the service level, $Z^2_{\text{max-KESCTN}}$ is also 380. Scenario 3: since all the OD flows are transported to their destinations within the service level, $Z^3_{\text{max-KESCTN}}$ is also 380. So, with equation (1), the reliability of railroad network service capacity is

$$R = \sum_{x=1}^{3} P_x\, \frac{Z^x_{\text{max-KESCTN}}}{Z_{\text{max-KESCTN}}} = \frac{302 \times 0.3 + 380 \times 0.4 + 380 \times 0.3}{380} = 0.94$$
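The computation can be replayed in a few lines; the snippet below simply re-evaluates equation (1) with the case-study numbers.

```python
# Quick check of equation (1) with the case-study values: scenario capacities
# 302, 380, 380; probabilities 0.3, 0.4, 0.3; ideal Max-KESCTN = 380.
Z_ideal = 380.0
Z = [302.0, 380.0, 380.0]
P = [0.3, 0.4, 0.3]
R = sum(p * z for p, z in zip(P, Z)) / Z_ideal
print(round(R, 2))  # 0.94
```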

5. Conclusions
The paper proposed the concept of effective agility and of the reliability of transportation network service capacity based on effective agility. K-shortest-path technology is used as a bridge connecting the transport service level and the effective agility concept. Based on the traffic assignment model on k shortest paths and a GA solution, a new method to calculate the reliability of railroad network service capacity is proposed. A real instance with the developed software shows that the concepts and calculating methods of network service capacity are feasible and promising for large-scale practical use. The method has been used to evaluate the reliability of the northeast China railroad network service capacity, and it provides a useful demonstration for further understanding the reliability concept of railroad network service capacity. Future study will address how to exactly evaluate the influence of empty car flows, temporary passenger trains, and station capacity. Also, the distribution regularities of the uncertain factors affecting reliability will be further considered.
References

[1] Gunasekaran, A. Design and implementation of agile manufacturing systems. International Journal of Production Economics, 1999, 62(1): 1-6
[2] Cao, Q., Dowlatshahi, Shad. The impact of alignment between virtual enterprise and information technology on business performance in an agile manufacturing environment. Journal of Operations Management, 2005, 23(5): 531-551
[3] Zhu Xiaoli, Xiamiao Li. The Approach of Improving the Reliability of Timely Railway Freight Transit. China Railway, 2004, (1): 51-54. (in Chinese)
[4] Feng Bingjie. Evaluation on the Reliability of Marshalling Station Operations. Gansu Science and Technology, 2005, 21(11): 133-134. (in Chinese)
[5] Chen, A., Yang, H., Lo, H. K., Tang, W. H. Capacity reliability of a road network: an assessment methodology and numerical result. Transportation Research Part B, 2002, (36): 225-252
[6] Cho, D. J., E. K. Morlok, Z. L. Chen. Efficient Algorithms for Measuring the Reliability of Transportation Network System Capacity. Working Paper, University of Pennsylvania, Philadelphia, 2002
[7] He Shiwei, Rui Song, Zhonglin Lei, Li Zhang. Efficient Methods for Evaluating the Effective Carrying Capacity of Railroad Network System. The 86th Annual Meeting of the Transportation Research Board, Washington, DC, January 21-25, 2007
[8] Rink, K. A., E. Y. Rodin, V. Sundarapandian. A simplification of the double-sweep algorithm to solve the K-shortest path problem. Applied Mathematics Letters, 2000, 13(8): 77-85
[9] Michalewicz, Z. Genetic Algorithms + Data Structures = Evolution Programs. Springer-Verlag, Berlin, 1994


Information Increment and Information Value in Supply Chains


Wang Jing, Wang Xun, Li Yuxiang
School of Economics & Management, Beihang University, Beijing 100083, China

Abstract Information plays the most important role in supply chain management. In this paper, the concepts of information increment (I) and value of information (VOI) are introduced, and the relationship between them is analyzed. Information increment is defined as the change of status cognition measured against the real status. The opinion is then presented that I and VOI are strongly and positively correlated. For supply chain applications, a general analysis framework with a common ordering policy, APIOBPCS, is established to reveal the functioning mechanism of information. Capacity constraints are introduced into the analysis to improve the practicability of the ordering policy models. The proposed opinion is supported through simulation experiments. Key words Information management, Supply chain management, Information increment, Value of information

1. Introduction
Information flow, with the highest frequency, the most complex structure, and the fastest transformation, controls and dominates the material, capital, and value flows in supply chain processes. The whole supply chain can be operated efficiently only by using accurate information. Information plays an indispensable role in supply chain cooperation; its main functions involve transaction, decision-making, strategic planning, and managerial control. The problem of supply chain information management (SCIM) has been widely researched around the world. The theoretical layer, including the measurement, evaluation, and classification of information, is the preliminary work for applied research. The aim of this research is to quantitatively analyze the relationship between information and its value through the definition of information increment, and to demonstrate the positive correlation between them. In economics, game theory and payoff matrices are the primary instruments and framework of VOI analysis. Afriat (1967) examines a situation where only finitely many observations of the prices and consumption bundles of an agent are available[1]; Aumann (1974) models the information structure in a Bayesian game by a partition of the state space into disjoint cells[2]. Since then, almost all VOI research has been carried out within the Bayesian information structure. Gould (1974) defines the Value of Correct Information (VCI)[3]; Gilboa and Lehrer (1991) examine situations with a single decision maker and propose a method to evaluate information[4]. There has been considerable interest in applying VOI theory to weather forecasting, financial investment, transportation management, and animal behavior. Studies in the field of SCIM, however, especially VOI research, are still limited: the quantitative study of information, its characteristics and mechanism, and effective selection methods have received relatively little attention, and most past efforts have been spent on Bullwhip Effect analysis, information risk, and information sharing. Bourland et al. (1996) investigate the value of information sharing when the lead-times of the distributor and the manufacturer differ[5]; through discussions of lead-time, demand forecasting, and the Bullwhip Effect, Chen (1998) demonstrates the necessity of information sharing[6]; Gavirneni (1998) studies the information sharing problem under a two-echelon supply chain model[7]; and Cachon and Fisher (2000) show how information can be effective for the manufacturer when the distributor uses an order batching policy[8]. Among all the works mentioned above, only Chen (1998) discusses the value of demand information sharing among supply chain members. However, information sharing, as a kind of information utilization mechanism, is different from information itself; therefore, the value of information sharing should not be confused with the concept of VOI. Since none of the above studies investigate information and VOI characteristics in supply chains, this study is designed to fill this research gap. In this paper, VOI in supply chains is defined according to supply chain information characteristics. The concept of information increment is used as a key for investigating the direction characteristics of information. Then, with a common ordering policy, APIOBPCS, quantitative analysis of information increment and VOI is
This research has been supported by National Natural Science Funds of China (Research on supply chain information management based on control engineering and information theory, No: 70572014; 70521001).


carried out via simulation experiments. Lastly, the results are discussed and a general framework for supply chain information management studies is proposed.

2. Related concepts
2.1 Information and value of information This paper follows the opinion of Zhong (2002)[9] that information takes effect by changing the cognition of the environment's status. According to this cognition, decision makers adjust themselves in order to maximize their utility, which can be expressed as a function of the real environment status $T^*$ and the decision $D$ that the decision maker makes. The utility function can thus be written as $U(D; T^*)$, and furthermore as $U(T_0; T^*)$, considering the corresponding relationship between decisions and status cognition $T_0$. The value of information can be calculated as the utility difference before and after the decision maker receives the information. Let the anterior cognitive status be $T$, and the posterior cognitive status be $T'$ after information $X$ is received. Then the value of $X$ is

$$V(X) = U(T'; T^*) - U(T; T^*) \tag{1}$$

2.2 Information quantity


Value of information can be considered as a function of the posterior status cognition parameters; in other words, information is evaluated by its effect. This paper suggests that information should be measured by the same method: the information quantity received by the decision maker should be calculated as the distance between cognitions,

$$I(T; T') = d(t, t') \qquad (2)$$

where t is the vector representing status T, and d is a distance measure satisfying non-negativity, symmetry and the triangle inequality. In fact, various information measurement equations can be regarded as special applications of Equation (2). For instance, Shannon mutual information for discrete distributions can be written as[10]

$$I(T; T') = H_1(p_i) - H_1(p_i') = \sum_i p_i' \log_2 p_i' - \sum_i p_i \log_2 p_i \qquad (3)$$

Shannon mutual information for normal distributions can be written as

$$I(T; T') = H_2(\sigma) - H_2(\sigma') = \log\sqrt{2\pi e}\,\sigma - \log\sqrt{2\pi e}\,\sigma' = \log\frac{\sigma}{\sigma'} \qquad (4)$$

where $H_1(p_i) = -\sum_i p_i \log_2 p_i$ and $H_2(\sigma) = \log\sqrt{2\pi e}\,\sigma$ are measures of uncertainty (entropy).

The absolute value of such differences is used for measuring distance. To describe and measure supply chain information accurately, its characteristics should be given full attention. Demands are usually assumed to be normally distributed, which implies that orders, inventories and work-in-process all follow distributions of the same kind when a linear ordering policy is used. To investigate information quantity under various demand and system conditions, this paper defines the information quantity between two normal distributions as

$$I(T; T') = \sqrt{(\mu - \mu')^2 + (\log\sigma - \log\sigma')^2} \qquad (5)$$

where $\mu$ is the mean value, $\sigma$ is the standard deviation, and the Euclidean distance measure is adopted. The vector $(\mu, \log\sigma)$ corresponds one-to-one with the points of the 2-dimensional real plane; the logarithmic standard deviation is used here to extend the range of the vector's second component from the positive half-line to the whole real axis.
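As a minimal illustration of Equation (5), the following Python sketch (with arbitrary example values; not part of the original paper) computes the information quantity between two normal status cognitions:

```python
import math

def info_quantity(mu_a, sigma_a, mu_b, sigma_b):
    # Equation (5): Euclidean distance between the vectors (mu, log sigma)
    return math.sqrt((mu_a - mu_b) ** 2
                     + (math.log(sigma_a) - math.log(sigma_b)) ** 2)

# Distance between an anterior cognition N(0, 0.4) and a posterior N(0, 0.9)
print(info_quantity(0.0, 0.4, 0.0, 0.9))  # about 0.811
```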

2.3 Information increment and direction characteristics of information


The above analysis implies that the real status of the environment influences both the decision maker's utility function and VOI. Yet this fact has long been ignored by past studies intending to establish relationships between information quantity (IQ) and VOI. This paper brings the real status into the consideration of information quantity and proposes the concept of information increment (ΔI). The information increment of information X is defined as
$$E(X) = I(T; T^*) - I(T'; T^*) \qquad (6)$$

The value of ΔI is positive if the decision maker moves his cognition in the relatively correct direction, and negative vice versa. Considering ΔI as a function of the posterior status cognition, the following features can be observed:
(1) E(X) is positive/negative if and only if $I(T; T^*)$ is greater/less than $I(T'; T^*)$;
(2) E(X) = 0 when T' = T;
(3) E(X) reaches its maximum when T' = T*.

This paper proposes that ΔI and VOI are positively correlated.
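The direction characteristics of Equations (5) and (6) can be checked numerically; below is a small sketch under assumed (purely illustrative) values of the anterior, posterior and real standard deviations:

```python
import math

def info_quantity(sigma_a, sigma_b):
    # Equation (5) with equal means: |log sigma_a - log sigma_b|
    return abs(math.log(sigma_a) - math.log(sigma_b))

def info_increment(sigma_prior, sigma_post, sigma_real):
    # Equation (6): positive when the posterior cognition is closer to reality
    return info_quantity(sigma_prior, sigma_real) - info_quantity(sigma_post, sigma_real)

print(info_increment(0.4, 0.8, 1.0))  # > 0: correct information
print(info_increment(0.4, 0.2, 1.0))  # < 0: false information
print(info_increment(0.4, 1.0, 1.0))  # maximum, reached when the estimate is exact
```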

3. Information increment and value of information


In this paper, the VOI of a supply chain manufacturer is investigated. To quantitatively analyze the correlation between ΔI and VOI, a widely used ordering policy is adopted, capacity constraint is brought in as the decision variable, and the cost structure of this manufacturer is set as the system performance measure.

3.1 Ordering policy

An APIOBPCS (Automatic Pipeline, Inventory and Order Based Production Control System) ordering policy is adopted in this paper. In words: let the ordering target be equal to the sum of the demand estimate, a fraction of the inventory offset and a fraction of the WIP offset. This is a widely used policy for eliminating the Bullwhip Effect. The order quantity is determined by

$$O_t = \hat{D}_t + \frac{1}{T_i}(DINV - AINV_{t-1}) + \frac{1}{T_w}(DWIP - AWIP_{t-1}) \qquad (7)$$

where $O_t$ and $\hat{D}_t$ are the order quantity and demand estimate of period t; $AINV_{t-1}$ and $AWIP_{t-1}$ are the actual inventory and work-in-process levels in period t-1; DINV and DWIP are the desired inventory and WIP levels; and $T_i$ and $T_w$ are artificial parameters. Exponential smoothing is used for demand forecasting. For a detailed description of this policy, see Disney et al. (2003, 2000)[11][12].

3.2 Capacity constraints

Capacity is usually assumed infinite in cybernetics-based supply chain research, which greatly reduces the authenticity of supply chain models. Here, a production system is defined as the combination of an ordering system and a capacity constraint, and the capacity $C_a$ is a function of the estimated demand distribution parameters, $C_a = F(\hat{\mu}, \hat{\sigma})$, where $\hat{\mu}$ and $\hat{\sigma}$ are the estimates of the mean and the standard deviation of demand. The capacity

constraint works as an ideal limit filter. Assuming the demand process is Gaussian and no exterior resources are available to the manufacturer, the order process will follow a continuous-discrete hybrid distribution which is Gaussian below the capacity $C_a$, while the probability mass at $C_a$ is

$$p(C_a) = 1 - F(C_a) = \int_{C_a}^{\infty} f(x)\,dx \qquad (8)$$

The above constraint is the only difference between the ordering policy used by the manufacturer and the one discussed in Section 3.1; in all other aspects they are identical (e.g., output and WIP correspond to orders and on-the-way products respectively). The manufacturer satisfies customer needs and minimizes costs by adjusting the capacity constraint according to the demand estimate. As can be seen, the real demand distribution here is the environment's real status discussed in Section 2, and the estimated distribution is the status cognition. The value and function of information in supply chains can be revealed and calculated using this model, shown in Fig.1.

Fig.1 The information effect model of a supply chain manufacturer

The dashed frame in Fig.1, representing a production system, consists of a smoothing filter (the APIOBPCS ordering policy) and a limit filter (the capacity constraint). The decision maker of the production system collects the various kinds of available exterior information concerning the demand status, and accordingly changes its estimate of the demand distribution (upper dashed curves in Fig.1). The capacity constraint is then adjusted according to the new estimate.

3.3 The cost structure of a supply chain member

The production costs considered in this paper, i.e., the costs produced by demand fluctuation, consist of two kinds:

Inventory/backorder cost $C_I = \sum_{t=1}^{T} (v_1 AINV_t + v_2 BO_t)$, brought about by redundancy or deficiency in satisfying customers' demands, where $v_1$/$v_2$ is the unit cost of inventory/backorder and $AINV_t$/$BO_t$ the inventory/backorder level in period t.

Production adjustment cost $C_P = \sum_{t=1}^{T} v_3\,|ORATE_t - ORATE_{t-1}|$, caused by production/ordering adjustments, where $v_3$ is the unit cost of such adjustments and $ORATE_t$ the production/order rate in period t. For descriptive analyses of these two forms of costs, see Dejonckheere et al. (2002)[13].

Besides, fixed cost should be given full consideration since capacity is considered; it can be defined as $C_F = T \cdot v_4 \cdot C_a$, where $v_4$ is the fixed cost per unit capacity and $C_a$ is the current capacity. Accordingly, the total cost of the system is the sum of these three kinds of costs: $C_T = C_I + C_P + C_F$.
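To make the model concrete, the sketch below simulates one scenario of this production system in Python: the APIOBPCS rule of Equation (7), the limit filter of Section 3.2, and the three cost components above. The initialisation and forecasting details are our own assumptions for illustration, not the authors' exact experimental code:

```python
import numpy as np

def total_cost(sigma, capacity, periods=10_000, Tp=3, Ta=4, Ti=1.0, Tw=1.5,
               v1=0.5, v2=1.0, v3=2.0, v4=3.0, seed=0):
    rng = np.random.default_rng(seed)
    alpha = 1.0 / (1.0 + Ta)            # exponential-smoothing constant
    d_hat = ainv = awip = o_prev = 0.0  # forecast, inventory, WIP, last order
    dinv = 0.0                          # desired inventory level (assumed)
    pipeline = [0.0] * Tp               # orders in production (lead time Tp)
    ci = cp = 0.0
    for _ in range(periods):
        d = rng.normal(0.0, sigma)                  # demand realisation
        d_hat += alpha * (d - d_hat)                # demand forecast
        dwip = Tp * d_hat                           # desired WIP
        o = d_hat + (dinv - ainv) / Ti + (dwip - awip) / Tw  # Equation (7)
        o = min(max(o, 0.0), capacity)              # ideal limit filter
        pipeline.append(o)
        completed = pipeline.pop(0)
        awip += o - completed
        ainv += completed - d
        ci += v1 * max(ainv, 0.0) + v2 * max(-ainv, 0.0)  # C_I terms
        cp += v3 * abs(o - o_prev)                        # C_P terms
        o_prev = o
    return ci + cp + periods * v4 * capacity              # C_T = C_I + C_P + C_F

# One cell of the cost matrix of Section 4.2:
print(total_cost(sigma=0.5, capacity=0.25))
```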

4. Simulation experiment
4.1 Model Settings

Disney et al. (2002) provide several sets of optimized APIOBPCS parameters[14]. In this paper, small adjustments are made to these optimized parameters in order to obtain more distinct simulation results: we let Tp=3, Ta=4, Ti=1, Tw=1.5, since slight parameter adjustments affect neither the convexity of the cost curves nor any of the following results. The settings of v1, v2, v3 and v4 are chosen for the same purpose and likewise do not affect the character of the research object; the parameters used are v1=0.5, v2=1, v3=2, v4=3.

4.2 The positive correlation between ΔI and VOI

MS-EXCEL 2003 and SPSS 11.0 are used in this research for the simulation experiments and statistical analysis. The length of each simulation scenario is 10,000 time periods. First, the relationship between cost and capacity is obtained: 151 values of demand deviation from 0.1 to 1.5 and 141 values of capacity from 0 to 1.5, with an increment of 0.01, are used to simulate the various cost performances. In each scenario of deviation and capacity, 100 samples of cost values are generated, whose average is taken as the expected cost of that scenario. Finally a 151×141 cost matrix is generated; the total amount of calculation is on the order of 10^11. By investigating the optimal capacity and minimum cost at each deviation level, Fig.2 is obtained.


Fig.2 Relationship between demand deviation and optimal capacity

Irregular fluctuations are seen in Fig.2 owing to randomness in number generation and calculation, but the trend is clear: higher capacity is necessary for greater demand fluctuation. Through regression analysis in SPSS, the optimal capacity decision rule can be obtained; linear regression is used here for simplicity:
$$C_a^*(\sigma) = 0.177 + 0.1085\,\sigma \qquad (9)$$
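The fitting step itself is routine; a sketch of it in Python (placeholder data stand in for the simulated optimal capacities) is:

```python
import numpy as np

sigmas = np.arange(0.1, 1.51, 0.01)          # the simulated deviation levels
rng = np.random.default_rng(1)
best_ca = 0.177 + 0.1085 * sigmas + rng.normal(0, 0.01, sigmas.size)  # placeholder

slope, intercept = np.polyfit(sigmas, best_ca, 1)  # the linear rule of Equation (9)
print(f"Ca*(sigma) = {intercept:.3f} + {slope:.4f} * sigma")
```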

Assuming that the decision maker is boundedly rational, Equation (9) can be used as his/her decision rule for setting capacity given his/her belief about the real demand distribution. To obtain the relationship between the demand deviation estimate and cost, Equation (9) is substituted into the cost matrix; the resulting relationship between estimated deviation and cost is shown in Fig.3.

Fig.3 Relationship between estimated deviation and cost when demand is N (0, 1) distributed

One explanation about this figure should be made: during the substitution, some data with negative deviation values are inevitably generated, which disobey the mathematical definitions; these data are discarded, without any effect on the final results. Using Fig.3 and the relevant data, the relationship between ΔI and VOI can now be investigated. Let the actual standard deviation of demand be σ*, the anterior deviation estimate σ = 0.4, and the posterior estimate σ'. Then VOI can be calculated as
$$V(X) = C_T(\sigma\,|\,\sigma^*) - C_T(\sigma'\,|\,\sigma^*) \qquad (10)$$

Likewise, based on Equations (5) and (6), the effective information increment in this situation is

$$E(X) = |\log\sigma - \log\sigma^*| - |\log\sigma' - \log\sigma^*| \qquad (11)$$

And their relationship is presented in Fig.4.

236

a. σ₂ < σ*, i.e. σ₁ and σ₂ on the same side of σ*;  b. σ₂ > σ*, i.e. σ₁ and σ₂ on different sides of σ*

Fig.4 Relationship between I and VOI

The Pearson, Kendall and Spearman correlation coefficients between ΔI and VOI are 0.814, 0.932 and 0.955 respectively. This reveals a strong positive correlation between them, which indicates the necessity of treating ΔI as an important reference in information evaluation and selection. Several other findings from Fig.4 are as follows:
(1) ΔI and VOI reach their peak values when the estimated and real parameters of the demand distribution are identical; both have upper bounds, which are reached only when the estimate is totally correct.
(2) When the anterior and posterior estimates are identical, meaning the supply chain member has received no information or has utilized none, ΔI = 0 and VOI = 0.
(3) The supply chain member has received negative or false information if ΔI is negative. Moreover, the more the estimate differs from reality, the more harm false information does to the decision maker.
(4) Fig.4b shows the scenarios in which σ₁ and σ₂ lie on different sides of σ*, where the anterior and posterior estimates cannot be equal. Owing to the discrepancies between the utility and information quantity curves, ΔI and VOI need not be zero simultaneously, which explains why the curve in Fig.4b does not pass through the origin.

5. Conclusion
This paper proposes the concept of information increment in supply chains according to supply chain information characteristics; information increment is defined as the change of status cognition relative to the real status. The relationships among information quantity (IQ), information increment and value of information (VOI) are then analyzed. Capacity and cost settings are introduced into a common ordering policy, APIOBPCS, so that the effect of information on supply chain performance can be revealed. Through simulation experiments, a strong positive correlation between ΔI and VOI is demonstrated. This result is particularly useful for understanding the functioning mechanism of information as well as for information evaluation and selection: it is possible for supply chain members to select information by information increment. By choosing high-increment information and eliminating low- or negative-increment information, supply chain members can effectively convert information into correct cognitions, so as to maximize information value and efficiency. This research concentrates on the two roles a supply chain member plays: 1) transforming demand signals into order signals by an ordering policy (a filter), and 2) adjusting itself according to its optimal decisions based on cognition and rationality (an information subject). Hence, a framework is established in this paper for supply chain information management problems using control engineering theory and information theory. By distinguishing the two roles mentioned above, this methodology shows with more clarity how information affects supply chain members. Meanwhile, the general decision-result pattern, which is popular in the related literature, is expanded by this methodology into cognition-decision-result. In this paper, linear regression is used for constructing decision rules, capacity is the decision variable, and production cost is the performance measure; in fact, various kinds of decision-making and performance-measurement methods can be applied within this framework.


References

[1] S. Afriat. The construction of a utility function from expenditure data. International Economic Review, 1967, 8(1): 67-77.
[2] R.J. Aumann. Subjectivity and correlation in randomizing strategies. Journal of Mathematical Economics, 1974, 1(1): 67-96.
[3] J.P. Gould. Risk, stochastic preferences, and the value of information. Journal of Economic Theory, 1974, 8(1): 64-84.
[4] I. Gilboa, E. Lehrer. The value of information - an axiomatic approach. Journal of Mathematical Economics, 1991, 20(5): 443-459.
[5] K. Bourland, S. Powell, D. Pyke. Exploring timely demand information to reduce inventories. European Journal of Operational Research, 1996, 92(2): 239-253.
[6] F. Chen. Echelon reorder points, installation reorder points and the value of centralized demand information. Management Science, 1998, 44(12): 221-234.
[7] S. Gavirneni. Benefits of co-operation in a production distribution environment. European Journal of Operational Research, 2001, 130(3): 612-622.
[8] G. Cachon, M. Fisher. Supply chain inventory management and the value of shared information. Management Science, 2000, 46(8): 1032-1048.
[9] Zhong Yixin. Information Science Principle. 3rd edition. Beijing: BUPT Press, 2002: 45-67 (in Chinese).
[10] C.E. Shannon. The mathematical theory of communication. Bell System Technical Journal, 1948, 27(7): 379-423, 623-656.
[11] S.M. Disney, D.R. Towill. The effect of vendor managed inventory (VMI) dynamics on the Bullwhip Effect in supply chains. Int. J. Production Economics, 2003, 85(2): 199-215.
[12] S.M. Disney, M.M. Naim, D.R. Towill. Genetic algorithm optimization of a class of inventory control systems. Int. J. Production Economics, 2000, 68(3): 259-278.
[13] J. Dejonckheere, S.M. Disney, M.R. Lambrecht et al. Transfer function analysis of forecasting induced bullwhip in supply chains. Int. J. Production Economics, 2002, 78(2): 133-144.
[14] S.M. Disney, D.R. Towill. A procedure for the optimization of the dynamic response of a Vendor Managed Inventory system. Computers and Industrial Engineering, 2002, 43(1-2): 27-58.


Information Sharing in a Supply Chain with Learning Effect


Wu Jianghua1, Xin Zhai2
1 School of Business, Renmin University of China, Beijing 100872, China
2 Guanghua School of Management, Peking University, Beijing 100871, China
jwu@ruc.edu.cn, xinzhai@gsm.pku.edu.cn

Abstract We study the impact of information sharing across decentralized retailers in a supply chain. The manufacturer supplies similar products to multiple retailers, and each retailer serves its own independent end market. Retailers face one period of demand and can satisfy it by ordering in the first period or by back-ordering some of the demand and satisfying it in the second period. The wholesale price in the second period is decreasing in the total first-period order size across the retailers. We show that retailers have no incentive to share information about their private values when the equilibrium order quantities are interior, i.e., when the order size is between zero and the demand. This result is contrary to the results of oligopoly models of information sharing. In addition, we show that the impact of information sharing diminishes as the number of retailers grows. Key words Supply chain, Learning effect, Information sharing, Bayesian Nash equilibrium

1. Introduction
The development of information technology has enabled companies to transfer and handle information more efficiently, and collaboration between partners or competitors through information sharing has become a common phenomenon in our economy. For example, Covisint.com, an online information platform, was formed by three big automakers (Ford, GM and Daimler-Chrysler) in early 2000; through this e-market, they can share information on product design and production plans with their suppliers. However, companies constantly examine whether information sharing is beneficial, i.e., whether it is in their self-interest to share information. Interestingly, while many companies, such as Renault, Nissan, Mitsubishi and Peugeot, joined Nisteveo later, others have not joined; in fact, Volkswagen created its own e-market, VWgroupsupply.com. Obviously, many factors influence a firm's incentives to share information, including (a) where the firm is located in a supply chain, (b) how the firm competes with others, and (c) what type of information is to be disclosed. Without a specific framework, it is hard to predict the outcome of information sharing. In this paper, we focus on the effects of information sharing in an environment with a supplier learning curve. In many industries, the wholesale price declines as industry production volumes increase. A key factor driving such price decreases is known as the learning curve or "learning-by-doing" effect: a product's unit cost decreases with the accumulated experience of producing and selling the product. In a supply chain setting, the buyer's expectation of the supplier's learning curve will affect his procurement strategy, which in turn affects price reductions and his long-term cost. In the oligopoly literature there is a line of research investigating the incentives for oligopolists to share their private information about uncertain demand conditions or production costs. General assumptions include: (1) the uncertain variable can be a common value (e.g., the intercept of the demand function) or a firm-specific value (e.g., each firm's production cost); (2) firms must reach an agreement on information sharing before they obtain private signals about their true private states; and (3) once an agreement is reached, each firm exchanges information truthfully according to the agreement. Novshek and Sonnenschein (1982)[1], Clarke (1983)[2], and Gal-Or (1985)[3] analyze the Cournot game with uncertain demand conditions, while Gal-Or (1986)[4] and Shapiro (1986)[5] consider the Cournot game with uncertainty about each firm's private production cost. Li (1985)[6] investigates uncertainty in both demand and production cost. Vives (1984)[8] and Gal-Or (1986)[4] compare the equilibria under different modes of competition, Cournot and Bertrand. Raith (1996)[7] proposes a general framework for analyzing information sharing in oligopoly. A general result of these works is that the equilibrium outcome of information sharing reaches one of two extreme points, no information sharing or full information sharing, depending on the competition type (Cournot or Bertrand) and the product type (complement or substitute). Vives
This research has been supported by research grant No. 06XNB033 from Renmin University of China.


(1999)[9] summarizes these results and concludes that firms in Cournot competition with a homogeneous product have no incentives to share their private information about the common value; in addition, information sharing benefits social welfare in Cournot competition with demand uncertainty. In the supply chain literature, research has focused on the value of vertical information sharing and its effect on the bullwhip effect (Lee et al. (1997)[10]). Cachon and Fisher (2000)[11] study the value of sharing demand and inventory information in a two-level supply chain with one supplier, N identical retailers, and stationary stochastic customer demand; they find that the value of reducing lead times and increasing delivery frequency is more significant than that of information sharing. Both Li (2002)[12] and Zhang (2002)[13] consider a supply chain with one manufacturer and multiple competing retailers, and study the incentives for retailers to share their private information with the manufacturer under Cournot and Bertrand competition. More recently, Guo et al. (2006)[14] address supply chain partners' incentives for information sharing from the perspective of designing an information system; they propose a macro prediction market to effectively elicit and aggregate useful information about systematic demand risk. See Chen (2003)[15] for a more detailed review of information sharing in supply chains. In this paper, we focus on a supply chain consisting of independent retailers who share upstream supply. Retailers face one period of demand and can satisfy it by ordering in the first period or by back-ordering some of the demand and satisfying it in the second period. The wholesale price in the second period is decreasing in the total first-period order size across the retailers. This decrease in wholesale price captures the market learning effect of aggregate orders that has been extensively documented in the empirical literature. With this model, we are interested in answering the following questions: (1) Do retailers have incentives to exchange demand information? (2) What is the effect of the number of retailers on their incentives to share information? (3) What are the impacts of different schemes of information sharing on retailers' behavior? The rest of this paper is organized as follows. In Section 2, we present the structure of the model. In Section 3, we analyze both interior and boundary Bayesian Nash equilibria and their effects on the choice of information sharing. Section 4 concludes this work and suggests some possible extensions.

2. The Model
2.1 Information Structure

In the current model, we consider an information structure different from those used in the previous economics literature, which we term an information partition. Suppose there are n players who share the same demand distribution, and define N = {1, …, n}. In this information scheme, a prior probability is assigned to each possible demand state, and the demand states are divided into partitions. When demand information is shared, each retailer reports only the rank of the partition to which his demand belongs. Specifically, we assume that there are 2^M demand states for each retailer and that the size of each partition is 2^M', where M' < M. If K = {1, …, 2^M} denotes the set of demand states, we can divide K into J equal-size subsets K_j, where j ∈ {1, …, J} and J ∈ {1, …, 2^M}; in addition, K_i ∩ K_j = ∅ for i ≠ j. The decision on J is made at the first stage of the game and specifies the extent of information sharing; for example, J = 2^M corresponds to specific (full) demand information, while J = 1 corresponds to no information sharing. At the second stage of the game, retailer i observes his own demand d_i and reports his partition K_i (∈ {K_1, …, K_J}) to the other retailers. For example, if each retailer has 4 demand states, we can divide the state set into two subsets: {1, 2}, which contains the two lowest demand states, and {3, 4}, which contains the two highest. Each retailer reports to the others the partition in which his demand lies after observing his own demand state. Thus, the finer the partition, the more information is shared among retailers; in other words, the size of the partitions measures the extent of information sharing.

2.2 Demand Structure

At the first stage of the game, retailers make an agreement regarding the extent of information sharing. Thus,

the state set K is divided into J equal-sized subsets K_j, where j ∈ {1, …, J}. Then, at the second stage of the game, retailer i observes his own demand d_i and reports his partition K_i (∈ {K_1, …, K_J}) to the other retailers. Given this information, retailer i determines his order quantity q_i to maximize his expected profit:

$$\pi_i(q_i) = r d_i - p_1 q_i - (d_i - q_i)\Big(p_b + p_2 - \big(q_i + \sum_{j \in N\setminus i} E[q_j]\big)\Big), \quad \text{where } E[q_j] = \frac{\sum_{k \in K_j} q_k P_k}{\sum_{k \in K_j} P_k}$$

Given retailer i's profit function, the Cournot Nash equilibrium is a vector of order quantities $q^*$ such that $q_i^* = \arg\max_{q_i} \pi_i(q_1^*, \ldots, q_{i-1}^*, q_i, q_{i+1}^*, \ldots, q_n^*)$. Thus, each retailer maximizes his own profit, holding the decisions of the other retailers fixed.


Fig. 1 Information Partition
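As a concrete illustration of this partition scheme, the following Python sketch (a hypothetical helper, not from the paper) maps a retailer's demand state to the partition rank he would report:

```python
def reported_partition(state, num_states, J):
    # Split states 1..num_states into J equal contiguous partitions and
    # return the rank of the partition containing the given state.
    size = num_states // J        # partition size 2**M'
    return (state - 1) // size + 1

# Four demand states, J = 2: states {1, 2} report rank 1, states {3, 4} rank 2
print([reported_partition(k, 4, 2) for k in (1, 2, 3, 4)])  # [1, 1, 2, 2]
# J = 4 would be full information sharing; J = 1, no sharing at all
```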

3. Structural Results
In this section, we derive some structural results for the interior equilibrium, in which each retailer's equilibrium order quantity is always between zero and his own demand. We shall show that in this case information sharing always worsens performance, so that no sharing is the dominant strategy. We extend the model to a case with n players and four demand states, i.e., N = {1, …, n} and K = {1, …, 4}. For each demand state, d_k = D_k/n, where k ∈ K and D_k is an exogenous number; thus the total market size has a fixed expected value regardless of n. Without loss of generality, we assume d_1 < d_2 < d_3 < d_4. In the following, we compare three schemes of information sharing: no information sharing (NIS), partial information sharing (PIS) and full information sharing (FIS). We assume that market conditions guarantee interior Cournot equilibria.

3.1 Interior Equilibrium

Let q_i denote retailer i's order quantity when he has demand d_i, where i ∈ N = {1, …, n} and d_i ∈ {d_1, d_2, d_3, d_4}. By the first-order condition, we have

$$q_i = \frac{p_b + p_2 - p_1 + \big(d_i - \sum_{j \neq i} E[q_j]\big)}{2}$$

Assume $q_i = a + b d_i$; substituting this back into the above equation gives $a + b d_i = \frac{p_b + p_2 - p_1 + d_i - (n-1)(a + b\sum_k P_k d_k)}{2}$. Therefore, $a = \frac{p_b + p_2 - p_1}{n+1} - \frac{(n-1)\sum_k P_k D_k}{2n(n+1)}$ and $b = \frac{1}{2}$. From the first-order condition, the conditional expected profit is

$$\pi_i = (r - p_1)d_i + (d_i - q_i)^2$$
Thus, in the first stage of the game, the unconditional expected profit under no information sharing is


$$\pi_i^{NIS} = E_{k \in K}\big[(r - p_1)d_k + (d_k - q_i)^2\big] = (r - p_1)\sum_{k \in K} P_k d_k + \sum_{k \in K} P_k \Big(\frac{d_k}{2} - a\Big)^2$$

Similarly, we can derive the profits under PIS and FIS (for a more detailed analysis, please refer to Wu (2005)[16]). Comparing the profits under the three strategies yields the following theorem about information sharing.
Theorem 1 If the interior equilibrium is guaranteed in the second stage of the game, each retailer is worse off by sharing more information, i.e., $\pi_i^{NIS} > \pi_i^{PIS} > \pi_i^{FIS}$. Thus, no information sharing is the dominant strategy in the first stage of the game.
Proof. It can be verified that
$$\pi_i^{NIS} - \pi_i^{PIS} = \frac{(n-1)^2\big((P_1+P_2)A_1 - (P_3+P_4)A_2\big)^2}{4(n+1)^2(P_1+P_2)(P_3+P_4)} > 0$$

$$\pi_i^{PIS} - \pi_i^{FIS} = \frac{(n-1)^2\big((P_1+P_2)B_1 + (P_3+P_4)B_2\big)}{4(n+1)^2(P_1+P_2)(P_3+P_4)} > 0$$

and

$$\pi_i^{NIS} - \pi_i^{FIS} = \frac{(n-1)^2\Big(\sum_{k \in K} P_k d_k^2 - \big(\sum_{k \in K} P_k d_k\big)^2\Big)}{4(n+1)^2}$$

where $A_1 = d_3 P_3 + d_4 P_4$, $A_2 = d_1 P_1 + d_2 P_2$, $B_1 = (d_3 - d_4)^2 P_3 P_4$, and $B_2 = (d_1 - d_2)^2 P_1 P_2$. This also shows that the difference in profits is decreasing in the number of players in the market; in other words, the impact of information sharing diminishes when there are more players with thinner market shares.

3.2 Boundary Equilibrium

The above results show that when the quantity equilibria are interior, full information sharing is always dominated by no sharing. In this section, we show that there exist cases in which some equilibrium quantities lie on the boundary. Let $q_L$ and $q_H$ denote the interior order quantities that first reach the low and high boundaries respectively, i.e., $q_L \in \arg\min\{q_i^{NIS}, q_i^{PIS}, q_i^{FIS} \mid i \in N\}$ and $q_H \in \arg\min\{d_i - q_i^{NIS}, d_i - q_i^{PIS}, d_i - q_i^{FIS} \mid i \in N\}$. Let $I_k$ denote the set of retailers whose demand state is $k \in K$.
Theorem 2 If the quantity equilibrium is interior, the first order quantity to reach the low boundary is $q_L = \{q_i^{FIS} \mid I_1 = \{i\}\}$; the first order quantity to reach the high boundary is $q_H = \{q_i^{FIS} \mid I_4 = \{i\}\}$.
Proof. In the case of no information sharing, it is obvious that the order quantity is lower when a retailer has a low demand, while in the case of full information sharing the minimal order quantity is $q_i$ when retailer i is the only one who has the lowest demand, i.e., $I_1 = \{i\}$. We thus only need to prove that both $a + bd_1$ and $a'_1 + b_L + (n-1)b_H$ are larger than $a_1 + b_1 + (n-1)b_4$. Note that
$$a + bd_1 - (a_1 + b_1 + (n-1)b_4) \geq \frac{(n-1)(d_4 - d_1)}{2n(n+1)} > 0$$

and

$$a'_1 + b_L + (n-1)b_H - (a_1 + b_1 + (n-1)b_4) = \frac{(n-1)\big(2(d_4-d_3)(P_1+P_2)P_3 + (d_2-d_1)(P_3+P_4)P_2\big)}{2(n+1)(P_1+P_2)(P_3+P_4)} > 0$$

Therefore, among all the equilibrium quantities under the three information sharing schemes, $a_1 + b_1 + (n-1)b_4$ is the first to hit the lower bound. Similarly, we can prove that $a_4 + (n-1)b_1 + b_4$ is the first to hit the upper bound. Theorem 2 sheds some light on the properties of boundary equilibria, in which some equilibrium quantities lie on the boundary. In this case it is difficult to obtain structural results on the impact of information sharing.

However, some insights can be obtained through further numerical study.

3.3 Model Extension: Continuous Demand Distribution

Suppose there are one supplier and n retailers/buyers, and each retailer's demand follows a uniform distribution, i.e., $d_i \sim U[d_1, d_2]$. At stage 1, the retailers agree on how precisely they will disclose their private demands, which are observed by each retailer at the beginning of the second stage. Specifically, they decide on a precision index m, and the interval $[d_1, d_2]$ is divided into m segments, indexed by $j \in \{1, \ldots, m\}$. Let $\Delta = (d_2 - d_1)/m$ denote the length of each segment. After retailer i observes his demand $d_i$, he reports his location $k_i$ to the public, i.e., $d_1 + (k_i - 1)\Delta < d_i < d_1 + k_i\Delta$. Obviously, a larger m implies that more precise information is shared: m = 1 corresponds to no information sharing, and m = ∞ to full information sharing. Given m, after demand information is shared, each retailer's location index $K = \{k_i : i = 1, \ldots, n\}$ is public knowledge; thus the information set available to retailer i is $I_i = \{K, d_i\}$. Retailer i's expected cost is

$$C_i(I_i) = q_i(I_i)\,p_1 + (d_i - q_i(I_i))\Big(p_2 + p_b - \big(q_i(I_i) + \sum_{j \neq i} E[q_j(I_j)]\big)\Big)$$

By the first-order condition, we have the optimal solution

$$q_i(I_i) = A_0 + A_1\sum_{j=1}^{n} k_j + A_2 k_i + A_3 d_i$$

where $A_0 = \frac{p_b + p_2 - p_1}{n+1} - \frac{(n-1)\left(d_1 - \frac{\Delta}{2}\right)}{2(n+1)}$, $A_1 = -\frac{\Delta}{n+1}$, $A_2 = \frac{\Delta}{2}$, and $A_3 = \frac{1}{2}$.

This equilibrium has the following property.
Theorem 3 In equilibrium, each retailer orders less in expectation when more precise information is shared, i.e., $\partial E[q_i]/\partial m < 0$.
Proof. Substituting the values of $A_i$ back into $q_i(I_i)$, we have

$$\frac{\partial E[q_i]}{\partial m} = -\frac{d_2 - d_1}{4m^2} < 0$$

Given $I_i$, retailer i's expected profit is

$$\pi_i(I_i) = (r - p_1)d_i + (d_i - q_i(I_i))^2 = (r - p_1)d_i + \Big(d_i - A_0 - A_1\sum_{j=1}^{n} k_j - A_2 k_i - A_3 d_i\Big)^2$$

The expected profit at the first stage is

$$\pi_i = \frac{(r - p_1)(d_1 + d_2)}{2} + \frac{(d_1 - d_2)^2(n-1)^2}{12(n+1)^2 m^2} + \frac{1}{12(n+1)^2}\Big(12(p_2 + p_b - p_1)^2 + 12(d_1 + d_2)\,n\,(p_2 + p_b - p_1) + n\big((d_1 - d_2)^2 + 3(d_1 + d_2)^2 n\big)\Big)$$

The above equilibrium solution leads to the following observation.
Theorem 4 If an interior equilibrium is guaranteed in the second stage of the game, retailers have no incentives to share information, i.e., $\partial \pi_i/\partial m < 0$.
Proof. It can be verified that

$$\frac{\partial \pi_i}{\partial m} = -\frac{(d_1 - d_2)^2(n-1)^2}{6m^3(n+1)^2} < 0$$

Obviously, this is consistent with the result in the previous case with discrete demand states.
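The monotonicity claimed by Theorems 3 and 4 is easy to check numerically from the closed forms above; the parameter values in the sketch below are illustrative assumptions, not data from the paper:

```python
d1, d2, n = 1.0, 3.0, 4
r, p1, p2, pb = 10.0, 4.0, 3.0, 2.0

def first_stage_profit(m):
    # Closed-form first-stage expected profit for precision index m
    return ((r - p1) * (d1 + d2) / 2
            + (d1 - d2) ** 2 * (n - 1) ** 2 / (12 * (n + 1) ** 2 * m ** 2)
            + (12 * (p2 + pb - p1) ** 2
               + 12 * (d1 + d2) * n * (p2 + pb - p1)
               + n * ((d1 - d2) ** 2 + 3 * (d1 + d2) ** 2 * n)) / (12 * (n + 1) ** 2))

for m in (1, 2, 4, 8):
    print(m, first_stage_profit(m))  # profit falls as the shared signal gets finer
```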

4. Concluding Remarks
In this paper, we consider a strategic procurement problem faced by retailers who serve independent demand markets but share supplier capacity. The main contribution of this research is that we employ a new information structure, the information partition, which has not been used in the previous related literature. We establish that the

equilibrium solution to our problem matches the solution of a Cournot game with retail competition. We show that retailers have no incentives to share information about their private values when the equilibrium order quantities are interior, i.e., when the order size is between zero and the demand. This result is contrary to the results of oligopoly models of information sharing. In addition, we show that, under constant market conditions, the number of retailers may affect their incentives to share information: the impact of information sharing diminishes when there are more retailers. It would be worthwhile to conduct further numerical analysis of the multi-player game to investigate whether the number of retailers changes the choice of information sharing. It would also be interesting to explore mixed structures of information sharing, in which, for example, some retailers form an information sharing group while excluding others from joining.
References

[1] Novshek W, Sonnenschein H. Fulfilled expectations Cournot duopoly with information acquisition and release. Bell J. Econ. 1982;13:214-218.
[2] Clarke R. Collusion and incentives for information sharing. Bell J. Econ. 1983;14:383-394.
[3] Gal-Or E. Information sharing in oligopoly. Econometrica 1985;53:329-343.
[4] Gal-Or E. Information transmission: Cournot and Bertrand equilibria. Rev. Econ. Stud. 1986;53:85-92.
[5] Shapiro C. Exchange of cost information in oligopoly. Rev. Econ. Stud. 1986;53:433-446.
[6] Li L. Cournot oligopoly with information sharing. Rand J. Econ. 1985;16:521-536.
[7] Raith M. A general model of information sharing in oligopoly. J. Econ. Theory 1996;71:260-288.
[8] Vives X. Duopoly information equilibrium: Cournot and Bertrand. J. Econ. Theory 1984;34:71-94.
[9] Vives X. Oligopoly Pricing: Old Ideas and New Tools. The MIT Press, Cambridge, 1999.
[10] Lee HL, Padmanabhan P, Whang S. Information distortion in a supply chain: The bullwhip effect. Management Sci. 1997;43:546-558.
[11] Cachon G, Fisher M. Supply chain inventory management and the value of shared information. Management Sci. 2000;46:936-953.
[12] Li L. Information sharing in a supply chain with horizontal competition. Management Sci. 2002;48:1196-1212.
[13] Zhang H. Vertical information exchange in a supply chain with duopoly retailers. Production and Operations Management 2002;11:531-546.
[14] Guo Z, Fang F, Whinston AB. Supply chain information sharing in a macro prediction market. Decision Support Systems 2006;34:427-438.
[15] Chen F. Information sharing and supply chain coordination. In: S.C. Graves and A.G. De Kok, Editors, Handbooks in Operations Research and Management Science: Supply Chain Management: Design, Coordination and Operation, Elsevier Publishing Company (2003) Ch.7.
[16] Wu J. Information Visibility and Its Impact in a Supply Chain. Ph.D. dissertation, Krannert School of Management, Purdue University, West Lafayette, IN, 2005.


A Study of the Production Planning and Control System for Remanufacturing


Xie Jiaping1, 2, Ren Yi2 , Zhao Zhong2
1 Top 500 Enterprises Research Center, Shanghai University of Finance & Economics, Shanghai, 200433 2 School of International Business Administration, Shanghai University of Finance & Economics, Shanghai, 200433

Abstract This paper delves into the basic logic behind production planning in a remanufacturing environment, considering the differences between remanufacturing and traditional manufacturing processes, and lays out a framework for its material requirement planning. We take into account the input priority of materials (remanufactured over first-time) as well as the capacity constraints in disassembly, reassembly, etc., and develop a production planning model for remanufacturing. Specifically, we first forecast the flow of returned cores and suggest that the disassembly probability be dynamically updated, which determines the purchase of first-time parts. Second, we formulate a linear programming optimization of the disassembly planning of cores in each period, including capacity constraints. Third, we discuss the determination of safety stocks of cores and parts in order to reduce the uncertainties. Finally, an example is given to illustrate the model. Key words Remanufacture, MRP, Core, Remanufactured parts

1. Introduction
In the new millennium, environmental protection is no longer a remote notion but has become a social endeavor to save the world in which we live. Producer-responsibility legislation is one measure of this kind, holding manufacturers responsible for the whole life cycle of their products; it requires the proper handling of waste and pollution not only in the production processes but in the remaining product life cycle as well. It is highly necessary to recycle EOL (end-of-life) products in order to re-use resources and protect the environment. Lund (1983) defines remanufacturing as an industrial process in which worn-out products are restored to like-new condition. Through a series of industrial processes in a factory environment, a discarded product is completely disassembled; usable parts are cleaned, refurbished and put into inventory; then a new product is reassembled from the old and, where necessary, new parts, yielding a product fully equivalent, and sometimes superior, in performance and expected lifetime to the original new product. While remanufacturing is environmentally sound, its production planning faces a great degree of uncertainty involving the timing of the recycling of used products (a.k.a. cores), the quality and quantity of cores, the disassembly processes, etc. Traditional production planning and control theories fall short of the requirements of remanufacturing, so how to construct production planning tailored specifically to remanufacturing is a question that awaits a proper answer.

2. Review of Literature
In the research on reverse logistics networks, previous studies are mostly based on the assumption of independent distributions of product returns and demand. Teunter (2001) assumes that both the return and demand processes are independent and stable in volume. Minner & Kleber (2001) assume that demand and return are independent and that both processes are continuous. Bayindir et al. (2003) assume that both the return and demand processes are independent homogeneous Poisson processes. Kiesmuller & Van der Laan (2001), however, assume that demand and return are probabilistically related. Brito & Dekker (2003) assume that if product demands are homogeneous Poisson processes, then the return of cores follows the same kind of distribution, and the two distributions are independent. In the literature on disassembly and product recovery, Thierry et al. (1995) conclude from a close study that there is uncertainty in the timing and number of core returns as well as in the condition of the cores. Guide Jr. (1997) points out three aspects that complicate the remanufacturing process: probabilistic recovery rates of parts from the inducted cores, and the unknown condition of the recovered parts until inspection, leading to stochastic
This research has been supported by the National Natural Science Funds of China (An Economical Analysis & Optimization of Recovery Processing Policies about Discarded Product, No.70472080)


routings and lead times. These aspects have a direct influence on production planning and control in remanufacturing. Guide (1997) suggests the necessity of safety stocks in a remanufacturing system in order to reduce the uncertainties. In the remanufacturing PPC literature, Geraldo Ferrer and D. Clay Whybark (2001) correlate recycling quantity with sales volume and take into account the different disassembly probabilities of parts from cores; they determine the purchase and disassembly planning of cores based on the MRP logic, but do not incorporate capacity constraints or the availability of purchased cores in their model. Clegg et al. (1995) utilize linear programming to solve the optimization problem of material requirement planning in remanufacturing. Karl Inderfurth et al. (2001) suggest separating the disassembly part from the combined reassembly and manufacturing part, treating the parts that are up to standard as a source of input to the reassembly planning; thus the standard MRP approach can be extended to incorporate the flows of returns and the availability of components or products after disassembly and remanufacturing operations in a hybrid system. This paper is a study of the production planning and control (PPC) system for remanufacturing. We first analyze the differences between an MRP tailored for remanufacturing and the standard MRP in view of the complicating characteristics of remanufacturing, and then discuss how the standard MRP can be extended to incorporate remanufacturing. The model consists of two parts: part requirement, and part supply, which includes the returned cores and first-time parts. Disassembly planning of cores and the relevant inventory control decisions are made as well. We also impose capacity constraints on all the main procedures in our model, in order to see, from a more realistic perspective, how the part requirement can be satisfied by the disassembly yields and purchased parts combined.

3. The Model of Remanufacturing Activity Process


The main entity held responsible for remanufacturing is the original equipment manufacturer (OEM), since it possesses the technology to produce the products. Several recovery options are at hand, including part refurbishment, part remanufacturing, material recovery (recycling) and disposal (landfill). A recovery option scheme is mapped out in Fig.1, where solid lines and boxes indicate forward flows of goods while dashed lines and boxes indicate reverse flows of used products.


Fig.1 Product recovery policy in reverse logistics


The recovery process for assembled products usually goes as follows. First comes disassembly, which is essential to a successful recycling program: materials can only be recycled, and usable parts remanufactured, after disassembly. Then comes inspection and sorting. The first class of parts is up to grade and ready for reassembly or repair purposes. The second class is not up to standard but can go to reassembly after some repair or refurbishing. The third class is completely useless as parts, but the material that makes up the parts is salvageable; it constitutes a renewable resource for part manufacturing, sometimes referred to as material recovery.


4. The Characteristics of Production Planning for Remanufacturing


The production planning in a remanufacturing environment is complicated by issues such as the uncertainties in the timing and quantity of cores, in the parts recovered from cores, etc. These characteristics include, but are not limited to, the following.

4.1 The Uncertainty in Returns of Cores

The overall production planning in a remanufacturing facility has to take into account not only the market demand (which includes the short-term market forecast and the orders taken from customers) but the quantity of returned cores as well. The timing and quantity of core returns depend on the condition of a product, the advancement of technologies and the sales volume of products. These uncertainties are much greater than those of lead-time forecasting in a traditional purchasing system, and they make it difficult to forecast the quantity of cores accurately. A small inaccuracy in the forecast of returned cores will lead to unrealistic production planning, which may render all subsequent planning useless.

4.2 The Imbalance between Returns and Demand

Imperfect correlation between the returns of remanufacturable cores and the market demand for remanufactured products is almost inevitable, and it gives rise to two kinds of problems. When returns exceed demand, excess stock of remanufactured products accumulates and holding cost becomes a problem; when demand exceeds returns, it is hard to meet customers' needs in time, which is an opportunity cost for the remanufacturer. The remanufacturer therefore has to balance returns and demand in order to maximize profit, and this calls for flow-of-return planning based on a trade-off between inventory cost and OOS (Out Of Stock) cost.

4.3 The Uncertainties in the Disassembly of Cores

Disassembly of cores is a crucial procedure in the whole remanufacturing process. Cores are often of different qualities because of their different conditions, and this complicates disassembly. Disassembling procedures are far from standardized, owing to these innate uncertainties: cores are disassembled according to the DBOM (Disassembly Bill Of Material) in a stochastic way, and the process may involve destructive operations. So disassembly is not simply the reverse of assembly, and such characteristics must be considered in disassembly planning.

4.4 The Uncertainty of the Remanufacturable Rate

Forecasting the quantity of returned cores is not enough for production planning in remanufacturing; a careful statistical estimate of the remanufacturable rate of usable parts is also necessary. This rate may vary with different conditions of usage. The accuracy of the remanufacturable rate has a direct influence on the Remanufacturing Material Requirement Planning (RMRP): when the estimate is too high, the purchase of parts will be inadequate and fall short of the actual material requirement; when the estimate is too low, the purchase plan will be inflated, yielding unwanted material stock.

4.5 The Uncertainties of Lead Times in Remanufacturing

The timing of remanufacturing is very difficult to predict, because parts of different qualities may go through different remanufacturing procedures. This kind of uncertainty makes it very difficult for the lead-time mechanism in RMRP to work, and it has a further impact on purchase planning and shop floor planning. The lead-time mechanism is the foundation on which MRP operates smoothly, and its accuracy directly influences the accuracy and reliability of production planning. In a remanufacturing environment, all the lead times are hard to measure because of the previously mentioned uncertainties, but they are important: a casual estimate of lead time will render the purchase plan and shop floor plan groundless.

5. The Framework of RMRP Logic


In remanufacturing PPC, it is not enough to consider only the requirements on the reassembly side; the return of cores and their disassembly are important factors as well. The major differences lie in the return of cores, the disassembly capacity constraint, and the inventory control of both cores and parts. The remanufacturing PPC system comprises two parts: one is the requirement planning of remanufactured parts; the other is the supply planning, which includes the parts disassembled from cores as well as the parts made from first-time (purchased) material. The logic is illustrated in Fig.2.
Fig.2 Framework of remanufacturing PPC system

The production planning of remanufacturing starts from forecasting, which leads to the aggregate production plan, which in turn consolidates into the Master Reassembly Schedule (MRS). The MRS, combined with the BOM and the part inventory, determines the part requirement. Since remanufactured parts are the primary resource for satisfying the part requirement, the forecasting of cores must be made ahead of that of first-time parts. After that, the disassembly plan must pass the capacity planning analysis to ensure its feasibility. If it passes, shop floor planning is made according to it; if it fails, the portion of cores exceeding the capacity will not be disassembled, and the resulting shortfall in the part requirement can only be met by first-time parts. The actual shop floor operations involve the inventories of cores and remanufactured parts, which have to be kept at reasonable levels.

6. Part Requirement Planning for the Remanufacturing System


We consider a remanufacturing establishment with a family of homogeneous products P_m (m = 1, 2, …, M). After the products are sold, the remanufacturer expects a continuous flow of trade-ins over the next few periods, which is a steady resource of cores.

6.1 Requirement Planning for Remanufactured Parts

6.1.1 Master Reassembly Schedule

The master reassembly schedule dictates the quantity of each product to be produced in each period. Suppose the demand for P_m in period t follows a homogeneous Poisson distribution with mean λ_mt; then the aggregate demand for the

product in period t is $Q_{mt} = \lambda_{mt}$. The total reassembly capacity also has to be considered. Suppose the standardized reassembly time of P_m is T_m, the machine centre has a total capacity of A, and R_m is the unit revenue of P_m. Then we have:

$$\max \sum_{m=1}^{M} R_m Q_{mt} \quad \text{s.t.} \quad \sum_{m=1}^{M} T_m Q_{mt} \le A, \qquad 0 \le Q_{mt} \le \lambda_{mt} \qquad (1)$$
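A sketch of one period of the program (1) in Python, using scipy's linear programming routine with illustrative numbers (the capacity A and the reassembly times are assumptions, not data from the paper):

```python
from scipy.optimize import linprog

R = [88, 42, 48, 40, 44, 34]         # unit revenues R_m
T = [2, 1, 1, 1, 1, 1]               # standardized reassembly times T_m (assumed)
lam = [200, 100, 300, 250, 50, 100]  # demand means (upper bounds on Q_mt)
A = 900                              # total reassembly capacity (assumed)

res = linprog(c=[-r for r in R],     # linprog minimizes, so negate revenues
              A_ub=[T], b_ub=[A],
              bounds=[(0, l) for l in lam])
print(res.x)                         # a capacity-feasible reassembly schedule
```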
The master reassembly schedule that satisfies the reassembly constraint consolidates into the final schedule, which is followed by part requirement planning.

6.1.2 Gross Part Requirement

The BOM shows which parts, and how many of each, are used in each product; let B(P_m, C_i) denote the number of part C_i used in product P_m. The gross requirement of each part in each period is then given by the product of B(P_m, C_i) and the quantity of P_m to be produced. Let G_t(C_i) denote the gross requirement of C_i in period t:

$$G_t(C_i) = \sum_{m=1}^{M} B(P_m, C_i) \cdot Q_{mt} \qquad (2)$$

6.1.3 Net Part Requirement

The net part requirement is determined by comparing the gross requirement with the part inventory of the previous period. Arrivals from the previous planning horizon also have to be considered, if there are any. So it follows that

$$N_t(C_i) = G_t(C_i) - STP_{t-1}(C_i) - SOP_t(C_i) \qquad (3)$$

where N_t(C_i) is the net requirement of part C_i in period t, STP_{t-1}(C_i) the inventory of part C_i in period t-1, and SOP_t(C_i) the expected arrival of part C_i in period t.

6.2 Core Supply Planning

Material requirement planning has to balance both the demand for and the supply of parts. The supply refers to the inventories of parts and cores; we have to make sure that the part supply meets the requirement.

6.2.1 Expected Return of Cores

In this paper, we assume the return of cores is continuous and that the return of a single core is a homogeneous Poisson process with mean λ, that is, HPP(λ). Then the return of core P_m over period t is a homogeneous Poisson process with mean μ_mt, that is, HPP(μ_mt). Since the single core returns are independent, it follows that the mean return volume R_t(P_m) of core P_m in period t is

$$R_t(P_m) = \mu_{mt} \qquad (4)$$

6.2.2 Disassembly Probability Record (DPR) of Cores

The disassembly yield of cores is not constant, so there is a real need to keep a record of the disassembly probability, i.e., the probability that a certain part C_i can be disassembled from core P_m, denoted Y_t(P_m, C_i). We also suggest that the DPR be dynamically updated; an exponential smoothing definition is introduced:

$$Y_t(P_m, C_i) = \alpha\,Y_{t-1}(P_m, C_i) + (1-\alpha)\,Y_{t-2}(P_m, C_i), \qquad 0 \le \alpha \le 1 \qquad (5)$$
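Equation (5) amounts to a one-line update; a minimal sketch (the smoothing constant is an illustrative assumption):

```python
def update_dpr(y_prev, y_prev2, alpha=0.7):
    # Equation (5): exponentially smoothed disassembly probability
    return alpha * y_prev + (1 - alpha) * y_prev2

print(update_dpr(0.45, 0.35))  # updated Y_t(Pm, Ci) = 0.42
```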

6.2.3 Core Disassembly Planning

Since two inventories are involved in remanufacturing, it is important to balance them when making a disassembly plan, in order to avoid an imbalanced overstock. Suppose the price of part C_i in period t is P_t(C_i), and let ω indicate the significance of the part inventory. A goal programming model is then constructed for disassembly, whose goal is to minimize the capital occupancy of parts and cores:

$$\min W = \omega \sum_{n=t}^{t+n} \sum_{i=1}^{I} P_t(C_i)\,STP_t(C_i) + (1-\omega) \sum_{n=t}^{t+n} \sum_{m=1}^{M} P_t(P_m)\,STC_t(P_m)$$
$$\text{s.t.} \quad 0 \le D(P_m) \le \min\{STC_{t-1}(P_m),\, DC_t(P_m)\} \qquad (6)$$

6.3 Net Part Requirement Deficit

In a remanufacturing facility, parts exist not only in the part inventory but also in cores yet to be disassembled, so the net requirement of parts is determined by the two inventories together. It is necessary to estimate the disassembly yield from cores before the net requirement deficit of parts can be estimated. The DPR is combined with the BOM and the number of cores scheduled for disassembly to calculate the expected number of parts obtainable in a given period, as follows:

$$D_t(C_i) = \sum_{m=1}^{M} D_t(P_m) \cdot B(P_m, C_i) \cdot Y_{t-1}(P_m, C_i) \qquad (7)$$

where D_t(C_i) is the number of part C_i that can be obtained by disassembly in period t, and D_t(P_m) the number of cores P_m scheduled to be disassembled in period t. By comparing the net requirement from (3) with the disassembly yield from (7), we can see whether the requirement can be met. If not, a net requirement deficit ND_t(C_i) arises:

$$ND_t(C_i) = N_t(C_i) - D_t(C_i) \qquad (8)$$

When a net requirement deficit occurs, i.e., when ND_t(C_i) > 0, the part inventory of the last period combined with the disassembly yield of the current period cannot meet the net part requirement, and the shortfall (the deficit) can only be compensated by an additional purchase (or manufacture) of first-time parts. Let PP_t(C_i) denote the purchase quantity of part C_i in period t; then it must satisfy PP_t(C_i) ≥ ND_t(C_i). If there is a lead time of L_i periods, the purchase (or manufacturing) order must be released in period t - L_i:

$$POP_{t-L_i}(C_i) = PP_t(C_i) \qquad (9)$$

where POP_{t-L_i}(C_i) is the purchase order release of part C_i in period t - L_i.

6.4 Inventory Control of Parts and Cores

Disassembly of cores yields both wanted and unwanted parts, which adds complexity to the already complicated inventory. It is highly advisable to set up safety stocks to offset some of the uncertainties in the return stream of cores, in disassembly, etc.

6.4.1 Expected Inventory of Parts

The part inventory depends on many other items: it is driven by the difference between the gross requirement and the sum of the previous period's inventory, the disassembly yield and the part purchases; see Equation (10):

$$STP_t(C_i) = STP_{t-1}(C_i) + D_t(C_i) + PP_t(C_i) - G_t(C_i) \qquad (10)$$
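Equations (3) and (7)-(10) together define a simple per-part recursion; the sketch below implements it (expected arrivals are added to the balance of Equation (10) here so that the example of Section 7 closes, which is an interpretation on our part, and the data are illustrative):

```python
def part_flow(B, Y, D_cores, G, SOP, PP, stp0):
    # B[m]: parts per core m; Y[m]: disassembly probability; D_cores[m][t]:
    # cores scheduled for disassembly; G, SOP, PP: per-period requirement,
    # expected arrivals and purchases; stp0: initial part inventory.
    stp, rows = stp0, []
    for t in range(len(G)):
        d_t = sum(D_cores[m][t] * B[m] * Y[m] for m in range(len(B)))  # Eq. (7)
        n_t = G[t] - stp - SOP[t]                                      # Eq. (3)
        nd_t = max(n_t - d_t, 0)                                       # Eq. (8)
        stp = stp + d_t + PP[t] + SOP[t] - G[t]                        # Eq. (10)
        rows.append((round(d_t), n_t, nd_t, round(stp)))
    return rows

# Two weeks of a single part with one relevant core type:
print(part_flow(B=[2], Y=[0.4], D_cores=[[200, 210]],
                G=[800, 800], SOP=[400, 0], PP=[0, 500], stp0=200))
```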

If the part inventory exceeds a certain level, possible sales should be considered.

6.4.2 Safety Stock of Parts

Safety stock should be set up to offset some of the uncertainties in disassembly. Suppose, according to statistics, that the mean disassembly time of core P_m is LD_m with variance σ²_DLm, that the mean demand for part C_i in one period is G_i with variance σ²_Gi, and let Z_c denote the service-level factor. The safety stock of parts SSP(C_i) is then

$$SSP(C_i) = Z_c\sqrt{LD_m\,\sigma_{G_i}^2 + G_i^2\,\sigma_{DL_m}^2} \qquad (11)$$

6.4.3 Expected Inventory of Cores
The expected inventory of cores in the current period is jointly determined by the inventory of the last period, the disassembly volume and the return volume. (By assumption, returns cannot be disassembled promptly and go directly into inventory.)
$$STC_t(P_m) = STC_{t-1}(P_m) + R_t(P_m) - D_t(P_m) \qquad (12)$$

6.4.4 Safety Stock of Cores
Similarly, a safety stock of cores should be set up, since there is tremendous uncertainty and complexity in forecasting core returns. Suppose, according to statistics, the mean time to take back a core is Lm, with variance σ²_m; the mean quantity disassembled in one period is Dm, with variance σ²_{Dm}; and Zm denotes the safety factor for service level m. The safety stock of cores is then:
$$SSC(P_m) = Z_m \sqrt{L_m\,\sigma_{D_m}^2 + D_m^2\,\sigma_m^2} \qquad (13)$$
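Both (11) and (13) instantiate one safety-stock formula for stochastic lead time and demand. A minimal sketch, with illustrative numbers that are not from the paper:

```python
import math

def safety_stock(z, mean_lead, var_lead, mean_demand, var_demand):
    """Safety stock under stochastic lead time and demand, as in Eqs. (11)/(13):
    z * sqrt(L * sigma_demand^2 + demand^2 * sigma_L^2)."""
    return z * math.sqrt(mean_lead * var_demand + mean_demand**2 * var_lead)

# Illustrative: 95% service level -> z ~= 1.645
print(safety_stock(1.645, mean_lead=2.0, var_lead=0.25,
                   mean_demand=200.0, var_demand=400.0))
```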

7. Example
To illustrate our model, we present a numerical example of the PPC system for remanufacturing. The assumptions are as follows: a company produces six homogeneous remanufacturable products P1–P6, utilizing parts C1–C9. The BOM is listed in Tab.1 below. The prices of the cores of P1–P6 are 88, 42, 48, 40, 44 and 34, respectively; the prices of parts C1–C9 are 30, 20, 10, 10, 9, 8, 2, 2 and 2. There is an inventory of 150 for each kind of core at the beginning of week 1. The initial inventory is 200 for each kind of part. The maximum disassembly capacity for each core in each week is 220. The minimum ordering quantity is 500.
Tab.1 Bill of material (number of each part C1–C9 contained in one unit of each core P1–P6; the individual cell assignments are not recoverable here, the surviving entries being: 2; 1; 2; 2; 2; 4; 2 1 2 1; 2 2 2 2 2 2 4)

The disassembly probability record (DPR) records the probability of recovering each part from each core, which is obtained statistically; it is shown in Tab.2.
Tab.2 Disassembly probability record (probability of recovering each part C1–C9 from each core P1–P6; the individual cell assignments are not recoverable here, the surviving entries being: 0.5; 0.4; 0.3; 0.4; 0.6; 0.5; 0.4 0.5 0.5 0.4; 0.35 0.4 0.3 0.25 0.5 0.6 0.4; 0.5)

Suppose the planning is made weekly and the planning horizon is 4 weeks. The master reassembly schedule is listed in Tab.3.
Tab.3 Master reassembly schedule

Period (in weeks) | 1 | 2 | 3 | 4
P1 | 200 | 150 | 200 | 200
P2 | 100 | 100 | 150 | 100
P3 | 300 | 250 | 250 | 250
P4 | 250 | 250 | 200 | 200
P5 | 50 | 100 | 100 | 100
P6 | 100 | 100 | 150 | 150
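The gross-requirement row of Tab.4 below can be reproduced from Tab.3 in one line. A small sketch, assuming the C3 column of the BOM holds two units per P3 and per P5 and one per P6 (an inference consistent with Tab.4's gross-requirement row, since Tab.1's cells did not survive):

```python
import numpy as np

mrs = np.array([[200, 150, 200, 200],   # P1, weeks 1-4 (Tab.3)
                [100, 100, 150, 100],   # P2
                [300, 250, 250, 250],   # P3
                [250, 250, 200, 200],   # P4
                [ 50, 100, 100, 100],   # P5
                [100, 100, 150, 150]])  # P6
b_C3 = np.array([0, 0, 2, 0, 2, 1])     # assumed C3 column of the BOM
print(b_C3 @ mrs)                        # -> [800 800 850 850], the G row of Tab.4
```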


Material requirement planning is, in this case, part requirement planning. We choose part C3 and explain in detail how its requirement planning is made. Tab.4 shows the requirement planning of C3. The gross requirement in the first line is derived from the MRS in Tab.3 combined with the BOM in Tab.1 (two units of C3 for each P3 and each P5, and one for each P6). The expected arrival in the second line is the parts ordered in the previous planning horizon that arrive in this horizon; the lead time is one week. The net requirement in the third line is the difference between the gross requirement and the sum of the expected arrival and the inventory of the last period.
Tab.4 Material requirement planning of part C3

C3 | 1 | 2 | 3 | 4
Gross Requirement (G) | 800 | 800 | 850 | 850
Expected Arrival (SOP) | 400 | 0 | 0 | 0
Net Requirement (N) | 200 | 670 | 724 | 768
Parts in Cores (ED) | 330 | 296 | 315 | 322
Net Requirement Deficit (ND) | 0 | 374 | 418 | 458
Actual Disassembly Yield (D) | 330 | 296 | 306 | 310
Expected Part Inventory (STP) | 130 | 126 | 82 | 42
Part Purchase (PP) | 0 | 500 | 500 | 500
Purchase Order Release (POP) | 500 | 500 | 500 | 0
Part Sale (SE) | – | – | – | –

The net requirement of parts is met first by disassembly yields from returned cores rather than by purchased parts. The return quantities of the cores containing part C3 are given in Tab.5. It should be noted that the mean returns of the three cores are 231, 81 and 125, respectively, and the occurrences follow Poisson distributions.
Tab.5 Return of cores (partial)

(weeks) | 1 | 2 | 3 | 4
P3 | 211 | 231 | 224 | 221
P5 | 82 | 79 | 83 | 97
P6 | 114 | 127 | 127 | 119
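The Poisson return stream of Tab.5 can be simulated directly. A minimal sketch (the seed and the NumPy generator are incidental choices):

```python
import numpy as np

rng = np.random.default_rng(0)
means = {"P3": 231, "P5": 81, "P6": 125}   # mean weekly returns from the text

# One sample path of weekly core returns over the 4-week horizon
returns = {core: rng.poisson(lam, size=4) for core, lam in means.items()}
print(returns)
```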

The initial inventory of each core is 150, so combined with the returns we know the inventory level at the beginning of each period. (By assumption, returns of the current period cannot be used immediately and go into inventory.) Disassembly planning is done via LP according to the model in (6); the main constraints are to balance the disassembly tasks over the periods and to meet the overall requirement. The final plan is shown in Tab.6.
Tab.6 Disassembly plan of cores (partial)

(weeks) | 1 | 2 | 3 | 4
P3 | 220 | 211 | 220 | 220
P5 | 85 | 82 | 79 | 83
P6 | 120 | 114 | 127 | 127

The returns and the disassembly plan together determine the inventory of cores (see Tab.7), which, together with the BOM from Tab.1 and the DPR from Tab.2, determines the quantity of parts that exist in cores (ED in line 4 of Tab.4). The net requirement can be derived after ED is calculated. The actual disassembly yield in line 6 of Tab.4 is determined by the disassembly plan from Tab.6 and the BOM from Tab.1.


Tab.7 Core inventory at period start (partial)

(weeks) | 1 | 2 | 3 | 4
P3 | 150 | 211 | 231 | 235
P5 | 150 | 82 | 79 | 83
P6 | 150 | 114 | 127 | 127

The rest of Tab.4 is derived in a simple manner. If there is a net requirement, it is first met by the disassembly yield (D); if the yield is insufficient, there is a net requirement deficit (ND) in line 5, which is completely met by part purchase (PP) in line 8. The purchase order is released according to the lead time, giving line 9. The part inventory in line 7 is calculated according to equation (10) and is the last to be settled. If the inventory builds up, possible part sales should be considered.
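A minimal sketch of this week-by-week netting logic for C3, assuming the one-week lead time and the 500-unit minimum order stated above; it reproduces the N, ND, PP and STP rows of Tab.4:

```python
G   = [800, 800, 850, 850]   # gross requirement (MRS x BOM)
SOP = [400, 0, 0, 0]         # expected arrivals from the previous horizon
D   = [330, 296, 306, 310]   # actual disassembly yield (Tab.6 x BOM x DPR)
stp, MIN_ORDER = 200, 500    # initial part inventory and minimum order size

pp = []                      # part purchases PP_t
for t in range(4):
    n  = G[t] - SOP[t] - stp                  # net requirement
    nd = max(0, n - D[t])                     # net requirement deficit, Eq. (8)
    pp.append(MIN_ORDER if nd > 0 else 0)     # deficit triggers a minimum order
    stp = stp + D[t] + pp[t] + SOP[t] - G[t]  # Eq. (10) plus arrivals
    print(t + 1, n, nd, pp[t], stp)
# Purchase orders are released one week ahead: POP_{t-1} = PP_t, as in Eq. (9).
```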

8. Conclusion
Remanufacturing is a new manufacturing mode which takes cores as the primary material resource. The whole process differs from the traditional one in many ways, including recycling uncertainty, disassembly complexity and the varying condition of cores. All of these impose new requirements on, and modifications to, the traditional PPC system. We have incorporated disassembly and reassembly capacity planning into standard MRP, and put forward a remanufacturing MRP with the help of linear programming. However, repeated remanufacturing of a certain part, especially a serial-number-specific part, causes cost-effectiveness problems and asks for adjustments to the system; this is left to future research.
References

[1] Lund, R. Remanufacturing: United States Experience and Implications for Developing Nations. The World Bank, Washington, DC, 1983
[2] Teunter, R. Economic ordering quantities for remanufacturable item inventory systems. Naval Research Logistics, 2001, 48(6): 484-495
[3] Minner, S. and Kleber, R. Optimal control of production and remanufacturing in a simple recovery model with linear cost functions. OR Spectrum, 2001, 23: 3-24
[4] Bayindir et al. A model to evaluate inventory cost in a remanufacturing environment. International Journal of Production Economics, 2003, 81-82: 597-607
[5] Kiesmuller, G.P. and Van der Laan, E. An inventory model with dependent product demands and returns. International Journal of Production Economics, 2001, 72: 73-87
[6] Marisa P. de Brito, Rommert Dekker. A Framework for Reverse Logistics. ERIM Report Series Reference No. ERS-2003-045-LIS, April 2003. 6-16
[7] Thierry, M.C., Salomon, M., Van Nunen, et al. Strategic production and operations management issues in product recovery management. California Management Review, 1995, 37: 114-135
[8] Guide Jr., V.D.R., Kraus, M.E. and Srivastava, R. Scheduling policies for remanufacturing. International Journal of Production Economics, 1997, 48(2): 187-204
[9] Guide Jr., V.D.R. and Srivastava, R. Buffering from material recovery uncertainty in a recoverable manufacturing environment. Journal of the Operational Research Society, 1997, 48: 519-529
[10] Geraldo Ferrer, D. Clay Whybark. Material planning for a remanufacturing facility. Production and Operations Management, Summer 2001, 10(2): 112-124
[11] Clegg, A.J., Williams, D.J. and Uzsoy, R. Production planning for companies with remanufacturing capabilities. Proceedings of the 1995 IEEE Symposium on Electronics and the Environment, IEEE, Orlando FL, 1995. 186-191


A Production Inventory Model for Intangible Deteriorated Items with Demand-Dependent Deteriorating Rate
Xu Xianhao, Tang Ziyi
School of Management, Huazhong University of Science & Technology, P.R.China, 430074

Abstract An inventory model is proposed for intangible deteriorating items with a demand-dependent deteriorating rate under the circumstance in which the demand rate changes exponentially with time. The production rate is constant and shortages are not allowed in the inventory model. This model suits situations in which the deterioration is intangible and the demand rate changes exponentially with time. The total average inventory cost is derived, and optimal expressions are obtained for the production scheduling period, the maximum inventory level, the total average cost and the cycle time of inventory replenishment. Finally, numerical examples are presented and discussed.
Key words Inventory model, Deterioration, Demand-dependent deteriorating rate

1. Introduction
The deterioration of many items during the storage period is a real phenomenon, and in recent years deteriorating inventory models have been widely studied. In the existing literature on inventory models for deteriorating items, however, the deterioration is physical and the deteriorating rate is assumed to be either constant or time-varying. None of these models considers the fact that, in some situations, the deterioration is intangible and the deteriorating rate is related to demand. In practice, when a product's deterioration is intangible, the deteriorating rate may be strongly influenced by the demand; that is, it may go up or down with the demand rate. This situation generally arises for fashionable goods and high-technology products such as fashionable clothes, computers, electronic components and electrical appliances. These products share a common character: a short life cycle in the marketplace, caused by the rapid speed of product innovation or changes in customers' preferences. Ghare and Schrader [1] first proposed an inventory model with a constant rate of deterioration and a constant rate of demand over a finite planning horizon. Dave and Patel [2] studied an inventory model with a deterministic but linearly changing demand rate and a constant deterioration rate over a finite planning horizon, assuming replenishment intervals of equal length. Hollier and Mak [3] developed inventory replenishment policies for deteriorating items with declining demand. Moncer Hariga [4] developed an optimal inventory lot-size model for deteriorating items with general continuous time-varying demand over a finite planning horizon, with the deterioration rate assumed to be a constant fraction of the on-hand inventory. S. Papachristos and K. Skouri [5] studied a continuous-review inventory model over a finite planning horizon with deterministic varying demand and a constant deterioration rate. Chao-ton Su and Chang-wang Lin [6] presented a production-inventory model for deteriorating products in which the demand rate is assumed to decrease exponentially, shortages are allowed and the deteriorating rate is assumed to be constant. The above investigations considered only constant deteriorating rates rather than variable ones. Covert and Philip [7] extended Ghare and Schrader's model, presenting an economic order quantity model for a variable rate of deterioration by assuming a two-parameter Weibull distribution. Shah [8] suggested a generalized model allowing shortages and using a general distribution for the deterioration rate. Mishra [9] formulated an inventory model with a variable rate of deterioration, a finite rate of production and no shortages; however, he solved only a special case of the model under very restrictive assumptions. Deb and Chaudhuri [10] assumed a uniformly finite rate of production and a time-dependent rate of deterioration, allowing backlogging. Goswami and Chaudhuri [11] developed a general inventory model in which a time-proportional deterioration rate was considered. H. Yan and T.C.E. Cheng [12] developed a general production-inventory model in which
This paper is financially supported by the China Natural Science Fund (No. 70472059).


the production rate, the product demand rate and the item deterioration rate are all considered as functions of time. Datta and Pal [13] researched an economic order quantity model with a variable rate of deterioration and a demand rate varying exponentially with time. Balkhi and Benkherouf [14] considered an inventory model in which shortages are allowed, the deteriorating rate is constant, and the demand rate and production rate change with time. These investigations do not consider the fact that the deteriorating rate is influenced by demand when the deterioration is intangible; they do not study inventory models for deteriorating items with a demand-dependent deteriorating rate. In this study, we develop an inventory model for intangible deteriorating items with a demand-dependent deteriorating rate. In the model we also assume that the finite production rate is constant, shortages are not allowed, and the demand changes exponentially with time. The total average cost is derived, and optimal expressions are obtained for the production scheduling period, the maximum inventory level, the total average cost and the cycle time of inventory replenishment. Finally, numerical examples are presented.

2. Assumptions and notation
2.1 Assumptions
The inventory model in this paper is developed on the basis of the following assumptions:
(1) A single item is considered over a prescribed period of T units of time.
(2) The scheduling period in this system is T, where T = t1 + t2, t1 and t2 being the durations of the production period and of the period after production, respectively.
(3) The demand rate is known and varies exponentially with time, that is, $D(t) = \alpha e^{-\lambda t}$, where $\alpha > 0$, $\lambda > 0$.
(4) The production rate P(t) is a constant: at any time t > 0, P(t) = a, where a > 0.
(5) The deteriorating rate $\theta(t)$ at any time depends on the demand: $\theta(t) = \theta_0 / D(t)$, $\theta_0 > 0$.
(6) Deterioration of the units is considered only after they have been received into inventory.
(7) There is no replacement or repair of deteriorated items during a given cycle.
(8) The production rate is greater than the demand rate.
(9) Shortages are not allowed.
2.2 Notation
The following notation is used:
I(t): inventory level at any time t, t > 0;
Im: maximum inventory level;
CS: setup cost for each new cycle;
CD: total cost of deteriorated items in an inventory replenishment cycle;
CH: total inventory carrying cost in an inventory replenishment cycle;
cd: cost of a deteriorated item;
ch: inventory carrying cost per unit per day;
T: cycle time;
K(t1): total average cost in a cycle, a function of the variable t1.

3. Mathematical modelling and analysis
The production inventory level starts at time t = 0, where the initial stock is zero. Then, at time t = t1, production is stopped and the maximum stock level Im is reached. After t1 time units have elapsed, the stock level declines continuously and the inventory level becomes zero at time t = T. The aim of this paper is to find the optimal values of t1, T and Im that minimize K over the time horizon [0, T].

The differential equations governing the stock status during the period [0, T] can be written as follows:
$$\frac{dI(t)}{dt} = P(t) - D(t), \quad 0 \le t \le t_1 \qquad (1)$$
$$\frac{dI(t)}{dt} = -D(t), \quad t_1 \le t \le T \qquad (2)$$

The boundary conditions are listed below:
$$I(t) = 0 \ \text{ at } \ t = 0 \ \text{ and } \ t = T \qquad (3)$$
$$I(t_1) = I_m \qquad (4)$$
Using boundary condition (3), the solutions of equations (1) and (2) are
$$I(t) = at + \frac{\alpha}{\lambda}\left(e^{-\lambda t} - 1\right), \quad 0 \le t \le t_1 \qquad (5)$$
$$I(t) = \frac{\alpha}{\lambda}\left(e^{-\lambda t} - e^{-\lambda(t_1+t_2)}\right), \quad t_1 \le t \le T \qquad (6)$$
Again, I(t1) = Im implies
$$I_m = I(t_1) = at_1 + \frac{\alpha}{\lambda}\left(e^{-\lambda t_1} - 1\right) = \frac{\alpha}{\lambda}\left(e^{-\lambda t_1} - e^{-\lambda(t_1+t_2)}\right) \qquad (7)$$
According to equation (7), the relationship between the variables t1 and T can be expressed as follows:
$$t_2 = -\frac{1}{\lambda}\ln\!\left(1 - \frac{\lambda a}{\alpha}t_1\right) - t_1 \equiv g_1(t_1) \qquad (8)$$
$$T = t_1 + t_2 = -\frac{1}{\lambda}\ln\!\left(1 - \frac{\lambda a}{\alpha}t_1\right) \qquad (9)$$
Furthermore, according to (8) and (9):
$$\frac{dt_2}{dt_1} = \frac{a}{\alpha - a\lambda t_1} - 1 \qquad (10)$$
$$\frac{dT}{dt_1} = \frac{a}{\alpha - a\lambda t_1} \qquad (11)$$
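Spelled out, the step from (7) to (9) is:
$$\frac{\alpha}{\lambda}e^{-\lambda T} = \frac{\alpha}{\lambda}e^{-\lambda t_1} - \left[a t_1 + \frac{\alpha}{\lambda}\left(e^{-\lambda t_1} - 1\right)\right] = \frac{\alpha}{\lambda} - a t_1, \quad \text{hence } e^{-\lambda T} = 1 - \frac{\lambda a}{\alpha} t_1,$$
and taking logarithms gives (9).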

The deterioration cost over the period [0, T] is:
$$C_D = \int_0^{t_1} c_d\,\theta(t) I(t)\,dt + \int_{t_1}^{t_1+t_2} c_d\,\theta(t) I(t)\,dt \qquad (12)$$

In (12),
$$\int_0^{t_1} c_d\,\theta(t)I(t)\,dt = \int_0^{t_1}\left(\frac{a c_d \theta_0}{\alpha}\, t e^{\lambda t} + \frac{c_d \theta_0}{\lambda} - \frac{c_d \theta_0}{\lambda} e^{\lambda t}\right) dt = \frac{a c_d \theta_0}{\alpha\lambda}\, t_1 e^{\lambda t_1} - \frac{a c_d \theta_0}{\alpha\lambda^2}\left(e^{\lambda t_1}-1\right) - \frac{c_d \theta_0}{\lambda^2}\left(e^{\lambda t_1}-1\right) + \frac{c_d \theta_0}{\lambda}\, t_1 \qquad (13)$$

$$\int_{t_1}^{t_1+t_2} c_d\,\theta(t)I(t)\,dt = \int_{t_1}^{t_1+t_2} \frac{c_d\theta_0}{\lambda}\left(1 - e^{\lambda t}\, e^{-\lambda(t_1+t_2)}\right) dt = \frac{c_d\theta_0}{\lambda}\, t_2 - \frac{c_d\theta_0}{\lambda^2} + \frac{c_d\theta_0}{\lambda^2}\, e^{-\lambda t_2} \qquad (14)$$
$$C_D = \frac{a c_d \theta_0}{\alpha\lambda}\, t_1 e^{\lambda t_1} - \frac{a c_d \theta_0}{\alpha\lambda^2}\left(e^{\lambda t_1}-1\right) - \frac{c_d \theta_0}{\lambda^2}\left(e^{\lambda t_1}-1\right) + \frac{c_d\theta_0}{\lambda^2}\, e^{-\lambda t_2} - \frac{c_d\theta_0}{\lambda^2} + \frac{c_d\theta_0}{\lambda}\,(t_1+t_2) \qquad (15)$$

The inventory carrying cost over the period [0, T] is:
$$C_H = c_h\left(\int_0^{t_1} I(t)\,dt + \int_{t_1}^{t_1+t_2} I(t)\,dt\right) \qquad (16)$$
where
$$\int_0^{t_1} I(t)\,dt = \frac{a}{2}t_1^2 - \frac{\alpha}{\lambda}t_1 + \frac{\alpha}{\lambda^2}\left(1 - e^{-\lambda t_1}\right) \qquad (17)$$
$$\int_{t_1}^{t_1+t_2} I(t)\,dt = \frac{\alpha}{\lambda^2}\left(e^{-\lambda t_1} - e^{-\lambda(t_1+t_2)}\right) - \frac{\alpha}{\lambda}\, t_2\, e^{-\lambda(t_1+t_2)} \qquad (18)$$
Therefore
$$C_H = c_h\left[\frac{a}{2}t_1^2 - \frac{\alpha}{\lambda}t_1 + \frac{\alpha}{\lambda^2}\left(1 - e^{-\lambda(t_1+t_2)}\right) - \frac{\alpha}{\lambda}\, t_2\, e^{-\lambda(t_1+t_2)}\right] \qquad (19)$$
Hence the total average cost over the period [0, T] is given by
$$K(t_1) = \frac{C_S + C_D + C_H}{T} \qquad (20)$$

Furthermore,
$$\frac{dK(t_1)}{dt_1} = -\frac{C_S + C_D + C_H}{T^2}\frac{dT}{dt_1} + \frac{1}{T}\left(\frac{dC_D}{dt_1} + \frac{dC_H}{dt_1}\right) \qquad (21)$$
In (21),
$$\frac{dC_D}{dt_1} = \frac{a c_d \theta_0}{\alpha}\, t_1 e^{\lambda t_1} + \frac{c_d\theta_0}{\lambda}\left(1 - e^{\lambda t_1}\right) + \frac{c_d\theta_0}{\lambda}\left(1 - e^{-\lambda t_2}\right)\frac{dt_2}{dt_1} \qquad (22)$$
$$\frac{dC_H}{dt_1} = c_h a t_1 - \frac{c_h\alpha}{\lambda} - \frac{c_h\alpha}{\lambda}\, e^{-\lambda(t_1+t_2)}\frac{dt_2}{dt_1} + \frac{c_h\alpha}{\lambda}\left(1 + \lambda t_2\right) e^{-\lambda(t_1+t_2)}\frac{dT}{dt_1} \qquad (23)$$

The necessary condition for K(t1) to be a minimum is
$$\frac{dK(t_1)}{dt_1} = 0 \qquad (24)$$
provided that $\frac{d^2K(t_1)}{dt_1^2} > 0$ for that value of t1. The second derivative is
$$\frac{d^2K(t_1)}{dt_1^2} = \frac{2(C_S+C_D+C_H)}{T^3}\left(\frac{dT}{dt_1}\right)^2 - \frac{2}{T^2}\left(\frac{dC_D}{dt_1}+\frac{dC_H}{dt_1}\right)\frac{dT}{dt_1} - \frac{C_S+C_D+C_H}{T^2}\frac{d^2T}{dt_1^2} + \frac{1}{T}\left(\frac{d^2C_D}{dt_1^2}+\frac{d^2C_H}{dt_1^2}\right) \qquad (25)$$
In (25),

$$\frac{d^2T}{dt_1^2} = \frac{d^2t_2}{dt_1^2} = \frac{a^2\lambda}{(\alpha - a\lambda t_1)^2} \qquad (26)$$
$$\frac{d^2C_D}{dt_1^2} = \frac{a c_d\theta_0}{\alpha}\, e^{\lambda t_1} + \frac{a c_d\theta_0\lambda}{\alpha}\, t_1 e^{\lambda t_1} - c_d\theta_0\, e^{\lambda t_1} + \frac{c_d\theta_0}{\lambda}\left(1-e^{-\lambda t_2}\right)\frac{d^2t_2}{dt_1^2} + c_d\theta_0\, e^{-\lambda t_2}\left(\frac{dt_2}{dt_1}\right)^2 \qquad (27)$$
$$\frac{d^2C_H}{dt_1^2} = c_h a + \frac{c_h\alpha}{\lambda}\left(1+\lambda t_2\right)\frac{d^2t_2}{dt_1^2}\, e^{-\lambda(t_1+t_2)} - c_h\alpha\lambda\, t_2\left(1+\frac{dt_2}{dt_1}\right)^2 e^{-\lambda(t_1+t_2)} \qquad (28)$$

4. Numerical examples

As the model proposed in this paper is suitable for intangible deteriorating items with a demand-dependent deteriorating rate when the demand rate varies exponentially with time, three numerical cases are presented below to illustrate the practicality of the model. The parameters of the three cases are listed in Tab.1; the optimum values are calculated and the results are shown in Tab.2.
Tab.1 The values of parameters in the model

Parameters | Case 1 | Case 2 | Case 3
θ0 (units) | 12.0 | 12.0 | 10.0
α (units/month) | 170.0 | 170.0 | 180.0
λ | 0.01 | 0.05 | 0.1
a (units) | 300.0 | 300.0 | 300.0
cd ($/unit) | 10.00 | 10.00 | 10.00
ch ($/unit/month) | 1.0 | 1.0 | 1.0
CS ($ for each new cycle) | 500.00 | 500.00 | 500.00

Tab.2 The optimum values of the three cases

 | t1 (month) | T (month) | Im (units) | K ($) | d²K/dt1² | CD ($) | CH ($)
Case 1 | 1.5772 | 2.8228 | 207.1396 | 354.3922 | 146.3690 | 208.8320 | 291.5340
Case 2 | 1.5017 | 2.8428 | 204.5687 | 353.4475 | 177.7957 | 217.8623 | 286.9350
Case 3 | 1.5783 | 3.0523 | 210.6783 | 332.5861 | 171.8932 | 202.8948 | 312.2615
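As a sanity check on Case 1, here is a minimal numerical sketch. It assumes the reconstructed forms D(t) = αe^{−λt} and θ(t) = θ0/D(t) from Section 2, and uses SciPy's bounded minimization in place of the Newton-Raphson method mentioned in the conclusion:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

# Case 1 parameters from Tab.1 (as reconstructed above)
theta0, alpha, lam, a = 12.0, 170.0, 0.01, 300.0
cd, ch, CS = 10.0, 1.0, 500.0

D = lambda t: alpha * np.exp(-lam * t)     # demand rate, assumption (3)
theta = lambda t: theta0 / D(t)            # deteriorating rate, assumption (5)

def K(t1):
    T = -np.log(1.0 - lam * a * t1 / alpha) / lam                 # Eq. (9)
    I1 = lambda t: a*t + (alpha/lam)*(np.exp(-lam*t) - 1.0)       # Eq. (5)
    I2 = lambda t: (alpha/lam)*(np.exp(-lam*t) - np.exp(-lam*T))  # Eq. (6)
    CD = cd * (quad(lambda t: theta(t)*I1(t), 0, t1)[0]
               + quad(lambda t: theta(t)*I2(t), t1, T)[0])        # Eq. (12)
    CH = ch * (quad(I1, 0, t1)[0] + quad(I2, t1, T)[0])           # Eq. (16)
    return (CS + CD + CH) / T                                     # Eq. (20)

res = minimize_scalar(K, bounds=(0.01, alpha/(lam*a) - 1e-6), method='bounded')
print(res.x, K(res.x))   # approx. 1.577 and 354.4, matching Case 1 in Tab.2
```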

5. Conclusion
In practice, some products' deterioration is intangible and the deteriorating rate may be strongly influenced by the demand; however, the existing literature on inventory models for deteriorating items has not considered this fact. In this paper we study the problem and propose a production inventory model for intangible deteriorating items with a demand-dependent deteriorating rate, under the circumstance in which the demand rate changes exponentially with time and the production rate is constant. Formulae for the minimum average inventory cost, the optimal production time, the optimal replenishment cycle time and the maximum inventory level have been derived. The developed model is solved using the Newton-Raphson method. The results presented in the paper provide a valuable reference for decision-makers in planning production and controlling inventory. Three numerical examples have been presented.
References

[1] P.M. Ghare, G.P. Schrader. A model for exponentially decaying inventories. Journal of Industrial Engineering, 1963, 14: 238-243
[2] U. Dave, L.K. Patel. (T, Si) policy inventory model for deteriorating items with time proportional demand. Journal of the Operational Research Society, 1981, 32: 137-142
[3] Hollier, R.H., Mak, K.L. Inventory replenishment policies for deteriorating items in a declining market. International Journal of Production Research, 1983, 21: 813-826
[4] Moncer Hariga. Optimal EOQ models for deteriorating items with time-varying demand. Journal of the Operational Research Society, 1996, 47: 1228-1246
[5] S. Papachristos, K. Skouri. An optimal replenishment policy for deteriorating items with time-varying demand and partial-exponential type-backlogging. Operations Research Letters, 2000, 27(4): 175-184
[6] Chao-ton Su, Chang-wang Lin. A production inventory model which considers the dependence of production rate on demand and inventory level. Production Planning & Control, 2001(12): 69-75
[7] R.P. Covert, G.C. Philip. An EOQ model for items with Weibull distribution deterioration. AIIE Transactions, 1973, 5: 323-326
[8] Y.K. Shah. An order-level lot-size inventory model for deteriorating items. AIIE Transactions, 1977, 9: 108-112
[9] R.B. Misra. Optimum production lot-size model for a system with deteriorating inventory. International Journal of Production Research, 1975, 13: 495-505
[10] M. Deb, K.S. Chaudhuri. An EOQ model for items with finite rate of production and variable rate of deterioration. Opsearch, 1986, 23: 175-181
[11] A. Goswami, K.S. Chaudhuri. Variations of order-level inventory models for deteriorating items. International Journal of Production Economics, 1992, 27(2): 111-117
[12] H. Yan, T.C.E. Cheng. Optimal production stopping and restarting times for an EOQ model with deteriorating items. Journal of the Operational Research Society, 1998, 49: 1288-1295
[13] Datta, T.K., Pal, A.K. Order level inventory system with power demand pattern for items with variable rate of deterioration. Indian Journal of Pure and Applied Mathematics, 1988, 19: 1043-1053
[14] Zaid T. Balkhi, Lakdere Benkherouf. A production lot size inventory model for deteriorating items and arbitrary production and demand rates. European Journal of Operational Research, 1996, 92(2): 302-309


The Impact of Positive Externality on Returns Policy


Xu Chuanyong, Liang Liang, Gou Qinglong, Zha Yong, Zhou Chuiri
School of Management, Univ. of Sci. & Tech. of China, Hefei 230026, P. R. China

Abstract This paper studies the impact of positive externality upon a supply chain in which a returns policy is used to coordinate the channel. We focus on a typical situation where the retailer can internalize the positive externality effect while the manufacturer cannot. We find that the retailer will overstock relative to the channel-optimal quantity. When the externality effect is stronger or the returns compensation is greater, the retailer's order quantity deviates more from the channel-optimal decision. When a returns policy is offered, the positive externality effect benefits the retailer but hurts the manufacturer.
Key words Externality through sales, Returns policy, Newsvendor model, Stochastic model

1. Introduction
In a few industries, manufacturers who rely upon independent retailers to distribute their products adopt returns policies to coordinate the supply chain and improve their profits. A returns policy, also called a buyback contract, is a contract between manufacturer and retailer in which the retailer can return some or all unsold units for a partial or full refund from the manufacturer at the end of the selling season. As mentioned in Emmons and Gilbert (1998)[1], returns policies are often used to encourage larger orders from retailers selling style goods or perishable merchandise, which are characterized by uncertain demand, long production lead times and short selling seasons. Examples are listed in Granot and Yin (2005)[2] and Bose and Anand (2007)[3], including newspapers, books, recordings, fashion wear and clothing, health and beauty products, dairy products, and perishable services such as airline tickets and hotel rooms. Nowadays more and more products have short life cycles. Pasternack (1985)[4] was the first to study channel coordination through returns policies. Returns policies as coordination mechanisms have since been studied in several contexts: Bose and Anand (2007)[3] study the case where the retail price is fixed; Emmons and Gilbert (1998)[1] and Granot and Yin (2005)[2] study the case where the retailer has price-setting power; Lu et al. (2007)[5] extend the problem to a two-period model. The role and effectiveness of returns policies have been studied in several papers, such as Padmanabhan and Png (1995)[6], Emmons and Gilbert (1998)[1], Tsay (2001)[7] and Granot and Yin (2005)[2]. It has been verified that returns policies can coordinate the supply chain and improve mutual benefit in several situations; for a review of coordinating contracts, please refer to Cachon (2003)[8]. However, the advantage of returns policies may be exaggerated, since these papers consider only a single product. In practice the retailer seldom carries only one product, and purchased products are known to affect customers' consequent demand, so the single-product returns policy model may be quite harmful. There are few studies of the multiple-product problem with a positive externality effect, except Netessine and Zhang (2005)[9]; however, they do not consider the impact of positive externality on a coordinated supply chain. We study in this paper a supply chain composed of a manufacturer and a retailer. The manufacturer produces a short life-cycle product and sells it to a retailer with a returns policy. Customers who purchase this product incur additional demand for another product in the same store of the retailer. Such a complementary effect, called positive externality through sales, is common in business, e.g. TV and DVD player, or PC and printer, where the latter product is an auxiliary that enhances the performance of the former; people often search for the former product first and then consider its auxiliary. This paper considers only a one-way relationship. The purpose of this paper is to investigate the impact of positive externality on a supply chain coordinating with a returns policy. The retailer is offered a returns contract while facing an opportunity to gain external benefit from overstocking. We assume the retailer can internalize the positive externality effect while the manufacturer
This research is supported by the National Natural Science Foundation of China, Grant No.70525001. The authors gratefully acknowledge this generous support.


cannot. The research questions we address are: (i) What is the retailer's optimal response to the manufacturer's coordinating returns policy? (ii) How will the retailer's order quantity deviate from the channel-optimal quantity? (iii) How will positive externality affect each party's profit? Our main findings are: (i) the retailer will overstock relative to the channel-optimal quantity; (ii) when the externality effect is stronger or the returns compensation is greater, the retailer's order quantity deviates more from the channel-optimal decision; (iii) when a returns policy is offered, the positive externality effect benefits the retailer but hurts the manufacturer. The remainder of this paper is organized as follows. In section 2, we formally present the model. In section 3, we first analyze the retailer's best response to a given returns policy and then describe the returns policy the manufacturer should offer if he wants to coordinate the channel. In section 4, we study the impact of externality on the channel order quantity and profits. In section 5, a numerical analysis is presented to provide more managerial insights.

2. Model formulation
A manufacturer produces a short life-cycle product, called product A, and sells it to a retailer at a wholesale price w per unit. To encourage larger orders from the retailer, the manufacturer offers a contract that allows the retailer to return unsold units for a credit of b per unit at the end of the selling season. The retailer commits to an order quantity before the selling season and then sells to customers at a fixed retail price. The retailer, however, holds more than one product. The demand for another product held by the retailer, say product B, is indirectly affected by the stocking decision for product A in the following manner: a deterministic portion of the customers who purchase product A will then purchase product B. Thus the demand for product B is composed of two parts: one from the initial allocation, and another incurred by the sales of product A. Such a case occurs when product B is a complement of product A. Similar demand models are used by Netessine and Zhang (2005)[9], Wang and Parlar (1994)[10], Netessine and Rudi (1999)[11] and Boyaci (2002)[12].

Tab. 1 Notations

cA, cB | Production cost per unit of product A, B, respectively
pA, pB | Retail price per unit of product A, B, respectively
w | Wholesale price per unit of product A
b | Buyback price per unit of product A
QA, QB | Order quantity of product A, B, respectively
DA, DB | Initial demand for product A, B, respectively
F(x, y) | Joint distribution of (DA, DB)
vA, vB | Salvage value per unit of product A, B, respectively
gA, gB | Goodwill penalty for unmet demand per unit of product A, B, respectively
πrA, πrB | The retailer's expected profit from product A, B, respectively
πm, πr | Expected profit of the manufacturer and the retailer, respectively

To be clearer, we make the following assumptions. A1: the retail price of product A is imposed by the market, i.e. fixed. A2: the salvage value of product A is the same for both parties. A3: demand information for product A is known to both parties. A4: the externality effect of product A is not known to the manufacturer.

A5: the initial demands are independent of each other. We use FA(x) to denote the distribution of product A's demand. To simplify, we set the goodwill penalty of product B to zero. We also assume the following conditions hold, to avoid unreasonable solutions:
$$v_A < c_A < w_A < p_A, \quad v_A < b_A < w_A < p_A, \quad v_B < c_B < p_B.$$


We assume each unit of product A sold incurs γ units of demand for product B. For a given QA, the effective demand for product B is
$$D_B^{e}(Q_A) = D_B + \gamma\,\min\{D_A, Q_A\} \qquad (1)$$

The retailer's expected profit is composed of two parts: πrA from product A and πrB from product B. Thus
$$\pi_r = \pi_{rA} + \pi_{rB} \qquad (2)$$
where
$$\pi_{rA} = (p_A - w)Q_A - (p_A - b)\,E[Q_A - D_A]^{+} - g_A\,E[D_A - Q_A]^{+} \qquad (3)$$
wherein we define $[x-y]^{+} \equiv \max\{0, x-y\}$. The manufacturer's expected profit is
$$\pi_m = (w - c_A)Q_A + (v_A - b)\,E[Q_A - D_A]^{+} \qquad (4)$$
In the following sections, unless otherwise stated, "channel" refers to the channel of product A. The channel expected profit is $\Pi_A = \pi_{rA} + \pi_m$.

3. The retailer's best response to a given returns policy
In this section, we first examine how the retailer responds to a given order quantity of product A, and how the optimal order quantity of product B varies with that of product A. This helps us understand the retailer's action when the order quantity of product A is fixed in the returns contract. Then we seek the retailer's optimal order quantity of product A when it is not fixed in the returns contract.
3.1 Retailer's optimal order quantity of product B for a given QA
There are cases when the order quantity of product A is fixed in the procuring contract, or when the manufacturer has only limited production capacity and the quantity he can provide is limited to a given amount.
Define $\Phi(x, y) \equiv \Pr\{D_B^{e}(x) > y\}$; then Φ(QA, QB) represents the probability that the last unit of product B will be sold out, given QA and QB. About this function we have the following lemma.
Lemma 1: (i) Φ(QA, QB) is increasing in QA if QA < QB/γ, and remains constant otherwise; (ii) Φ(QA, QB) is decreasing in QB; (iii) for any constant β ∈ (0,1), there exists an η(β) satisfying Φ(η(β), γη(β)) = β.
Proof: see the appendix.
Lemma 1 states that the more product A is ordered, the more likely the last unit of product B will be sold out; conversely, the more product B is ordered, the less likely its last unit will be sold. Ordering product A beyond a threshold, however, no longer helps sell product B. To maximize the expected profit from product B, the following condition must hold:

$$(p_B - c_B)\,\Phi(Q_A, Q_B) = (c_B - v_B)\left(1 - \Phi(Q_A, Q_B)\right)$$
which means the possible revenue and the possible loss from the last unit of product B ordered are equal, so the retailer cannot benefit from stocking more product B. In this way we obtain the following proposition.
Proposition 2: (i) The optimal order quantity of product B, $Q_B^{*}(Q_A)$, is determined by the following condition:
$$\Phi(Q_A, Q_B^{*}) = \frac{c_B - v_B}{p_B - v_B} \qquad (5)$$
Let μ be such that $\Phi(\mu, \gamma\mu) = \frac{c_B - v_B}{p_B - v_B}$; then we have the following properties.
(ii) If QA < μ, $Q_B^{*}(Q_A)$ is increasing in QA; if QA ≥ μ, $Q_B^{*}(Q_A) = \gamma\mu$.
(iii) $Q_B^{*}(Q_A)$ is greater than the original optimal order quantity of product B.
The proof of 2(i) is omitted. Property 2(ii) follows directly from Lemma 1. Property 2(iii) is straightforward, since $Q_B^{*}(Q_A) > Q_B^{*}(0)$ for any QA > 0. Property 2(ii) means it is better to order more product B when more product A is ordered, unless too much product A has already been ordered; in that case it is best to order a specific quantity determined only by the demand distribution and product B's cost and price parameters. Property 2(iii) means the optimal order quantity of product B is greater when product A's effect is considered. These results can be explained as follows: since product B is a complement of product A, the more product A is ordered, the more it will sell and the more demand for product B will be incurred, so the retailer should stock more product B to meet the increased demand. About the retailer's profit from product B, we have
Proposition 3: When $Q_B^{*}(Q_A)$ is adopted from (5), the retailer's expected profit from product B, πrB, is increasing in QA.
Proof. Since $Q_B^{*}(Q_A)$ is adopted from (5), we have $\frac{\partial \pi_{rB}}{\partial Q_B} = 0$, so $\frac{d\pi_{rB}}{dQ_A} = \frac{\partial \pi_{rB}}{\partial Q_A}$. Noting that $\Pr\{Q_B^{*}(Q_A) > D_B^{e}\} = 1 - \Phi(Q_A, Q_B^{*}) = \frac{p_B - c_B}{p_B - v_B}$, we have
$$\frac{\partial \pi_{rB}}{\partial Q_A} = -(p_B - v_B)\,\frac{\partial}{\partial Q_A}\, E[Q_B^{*}(Q_A) - D_B^{e}]^{+} = (p_B - v_B)\Pr\{Q_B^{*}(Q_A) > D_B^{e}\}\Pr\{D_A > Q_A\}\,\gamma = \gamma\,(p_B - c_B)\Pr\{D_A > Q_A\} > 0$$
Here we use the assumption that the initial demands are independent of each other.
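Condition (5) can be checked by Monte Carlo. A minimal sketch, using the demand distributions and product-B parameters from Section 5; γ, the sample size and the seed are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
gamma = 2.0                       # externality intensity (symbol as in Eq. (1))
pB, cB, vB = 50.0, 10.0, 5.0      # product-B parameters from Section 5

# Initial demands as in Section 5: DA ~ U(0, 400], DB ~ U(0, 100]
DA = rng.uniform(0, 400, 200_000)
DB = rng.uniform(0, 100, 200_000)

def QB_star(QA):
    """Solve Phi(QA, QB) = (cB - vB)/(pB - vB) for QB, Eq. (5): QB* is the
    (1 - ratio)-quantile of the effective demand DB + gamma * min(DA, QA)."""
    eff = DB + gamma * np.minimum(DA, QA)   # effective demand, Eq. (1)
    ratio = (cB - vB) / (pB - vB)           # critical overage probability
    return np.quantile(eff, 1.0 - ratio)

for QA in (0, 100, 200, 400):
    print(QA, round(QB_star(QA), 1))        # increasing in QA, as Prop. 2 states
```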


Here we impose an assumption that the initial demand is independent of each other. Proposition 3 states that if the retailer acts wisely enough, his profit from product B is always increasing when ordering more product A, which means the retailer can internalize the positive effect of externality of product A. 3.2 Retailers optimal order quantity of product A It is common that the order quantity of product A is not fixed in the returns contract. In such a situation the retailer will seek to find an optimal order quantity which maximizes his own total expected profit r . After examining the first order condition, we obtain the following optimality condition: Proposition 4: The retailers optimal order quantity of product A, QA is determined by

$$\Pr\{D_A < \tilde{Q}_A\} = \frac{p_A - w + g_A + \gamma(p_B - c_B)}{p_A - b + g_A + \gamma(p_B - c_B)} \qquad (6)$$
Property (i): $\tilde{Q}_A$ is affected by the price and cost parameters of product B, while independent of product B's demand distribution. Property (ii): $\tilde{Q}_A$ is increasing in γ.
Proof: Note that

$$\frac{d\pi_{rA}}{dQ_A} = (p_A - w) - (p_A - b)\frac{dE[Q_A - D_A]^{+}}{dQ_A} - g_A \frac{dE[D_A - Q_A]^{+}}{dQ_A} = (p_A + g_A - w) - (p_A + g_A - b)\Pr\{D_A < Q_A\}$$
Together with $\frac{d\pi_{rB}}{dQ_A} = \gamma(p_B - c_B)\Pr\{D_A > Q_A\}$, we obtain
$$\frac{d\pi_r}{dQ_A} = \left(p_A + g_A - w + \gamma(p_B - c_B)\right) - \left(p_A + g_A - b + \gamma(p_B - c_B)\right)\Pr\{D_A < Q_A\}$$
Examining the second derivative shows that πr is concave; setting the first derivative to zero yields the optimality condition. The two properties follow directly from equation (6).
Corollary 5: The demand information or goodwill-penalty information perceived by the manufacturer is distorted.
By Proposition 4, the retailer's decision deviates from the manufacturer's expectation. This causes the manufacturer to readjust his estimates of the demand and of the retailer's cost and price parameters. While the retail price is easy to observe, the demand and goodwill-penalty information is difficult to observe. Since the manufacturer sets the returns contract well ahead of the selling season, the retailer's forecast of market demand when he orders can differ from the manufacturer's.
3.3 The manufacturer's coordinating returns policy
For the manufacturer, product A is the only concern, since he has no information about product B and no control over it either; his pricing and returns policy therefore aims at maximizing his expected profit while ignoring product B. When no return is allowed, the underage and overage costs perceived by the retailer are pA + gA − w and w − vA per unit, respectively, while the channel underage and overage costs are pA + gA − cA and cA − vA per unit. It has been pointed out in the literature that this leads to channel inefficiency: the retailer stocks less than the channel-optimal quantity. This is intuitive, since the retailer bears more than the total channel overstock risk while receiving only part of the channel revenue. To coordinate the channel, the manufacturer offers a returns contract, which reduces the overstock risk the retailer bears, thereby improving channel efficiency and mutual benefit. There is a large literature on returns policies; to summarize, we have the following proposition.
Proposition 6: The manufacturer's coordinating returns policy (w, b) satisfies
$$\Pr\{D_A < Q_A^{*}\} = \frac{p_A + g_A - c_A}{p_A + g_A - v_A} = \frac{p_A + g_A - w}{p_A + g_A - b} \qquad (7)$$
The proof is omitted here; please refer to the related literature. Note from Proposition 6 that a whole set of contract parameters can coordinate the channel; each pair (w, b) specifies the channel profit allocation between the two parties. Let
$$\rho \equiv \Pr\{D_A < Q_A^{*}\} = \frac{p_A + g_A - c_A}{p_A + g_A - v_A};$$
then we get
$$w = (1-\rho)(p_A + g_A) + \rho b \qquad (8)$$
Proposition 7: In a coordinated channel, πm is increasing in b.


Proof.
$$\pi_m = (w - c_A)Q_A^{*} + (v_A - b)\,E[Q_A^{*} - D_A]^{+} = \left[(1-\rho)(p_A + g_A) + \rho b - c_A\right]Q_A^{*} - (b - v_A)\int_0^{Q_A^{*}} F_A(x)\,dx$$
$$= \left[\rho Q_A^{*} - \int_0^{Q_A^{*}} F_A(x)\,dx\right] b + \left[(1-\rho)(p_A + g_A) - c_A\right]Q_A^{*} + v_A \int_0^{Q_A^{*}} F_A(x)\,dx$$
So we need only prove $\rho Q_A^{*} > \int_0^{Q_A^{*}} F_A(x)\,dx$. This holds since
$$\int_0^{Q_A^{*}} F_A(x)\,dx < \int_0^{Q_A^{*}} F_A(Q_A^{*})\,dx = \rho Q_A^{*}.$$

This proposition tells us that when return compensation increases, the manufacturer actually allocates himself a greater proportion of channel profit.

4. The impact of externality
In this section we study the impact of positive externality on the supply chain coordinated with a returns policy. First we analyze how positive externality shifts the retailer's order quantity relative to the channel-optimal decision; then we analyze the consequent effect on the expected profits of the retailer and the supplier.
4.1 Effects on order quantity
Let $Q_A^{*}$ and $\beta^{*}$ represent the channel-optimal order quantity and service level, respectively, ignoring the externality effect. Let $\tilde{Q}_A$ and $\tilde{\beta}$ represent the retailer's optimal order quantity and service level, respectively, when he faces a coordinating returns policy and considers the externality effect. Here service level refers to the probability that demand for product A can be satisfied. Based on the analysis in the previous section, from equations (6) and (7) we have
$$\beta^{*} = \Pr\{D_A < Q_A^{*}\} = \frac{p_A + g_A - c_A}{p_A + g_A - v_A} \qquad (9)$$
$$\tilde{\beta} = \Pr\{D_A < \tilde{Q}_A\} = \frac{p_A + g_A - w + \gamma(p_B - c_B)}{p_A + g_A - b + \gamma(p_B - c_B)} \qquad (10)$$
From these two equations it can be seen that the optimal channel decisions $Q_A^{*}, \beta^{*}$ are independent of the contract parameters (w, b), while the retailer's optimal decisions internalizing the externality effect depend on (w, b). Since the retailer can internalize the externality effect of product A, his decision may deviate from the channel-optimal decision. Next we compare the two decisions and analyze which factors affect the deviation, and how. We have:
Proposition 8: When the manufacturer offers a coordinating returns policy and the retailer internalizes the positive externality effect,
(i) $\tilde{Q}_A > Q_A^{*}$ and $\tilde{\beta} > \beta^{*}$;
(ii) $\Delta Q \equiv \tilde{Q}_A - Q_A^{*} > 0$, and ΔQ is increasing in γ;
(iii) ΔQ is increasing in b.
Proof: (i)
(iii) Q is increasing in b . Proof: (i)

p A + g A w + ( p B cB ) p A + g A w p A + g A c A > = = * p A + g A b + ( p B cB ) p A + g A b p A + g A v A
*

Here we have used the fact that ( w, b) is a coordinating returns policy. Since demand distribution function is monotonic increasing, it follows that QA > QA . (ii) Since QA is independent of Proposition 4.
265
*

, we need only prove QA is increasing in , which follows from

(iii) Since QA is independent of b , we need only prove QA is increasing in b . Note that a coordinating returns policy satisfies w = (1 )( p A + g A ) + b , so
* *

=
*

* ( p A + g A b ) + ( p B cB ) 1 * = 1 ( p B cB ) p A + g A b + ( p B cB ) 1+

pA + g A b

Since 1 > 0 , it is easy to see

is increasing in b . Since demand distribution function is monotonic

increasing, it follows that QA is increasing in b . Proposition 8 reveals that if the retailer internalizes the externality effect he will order more product A than channel optimal quantity. This deviation is greater when the externality effect is greater or when the returns compensation is greater. When the externality effect is greater, the retailer can obtain more profit from product B by ordering more product A, so he tends to overstock more. When the returns compensation increases, the situation is more complex since the wholesale price increases, too. This can be explained in this way, when return compensation increases, the retailers proportion of channel profit decreases, his deviation from channel optimal decision will have less effect on his profit. 4.2 Effects on each parties expected profits Considering the effect of positive externality the retailers optimal decision QA deviates from channel optimal decision QA . In consequence each partys profit will be affected. Proposition 9: When the manufacturer offers a coordinating returns policy, and the retailer internalizes the positive externality effect, (i) m is decreasing in ; (ii)
* m (QA ) < m (QA ) ; *

(iii) rA is decreasing in Proof. (i) Since

2 d m * * (QA ) = 0 , and d 2m = (b vA ) f A (QA ) < 0 , it follows that when QA > QA , dQA dQA

, rB and r is increasing in .

d m d m d m dQA dQA * < 0 . From Proposition 8(iii), > 0 , and QA > QA , we have = < 0. dQA d d dQA d d m * < 0 and QA > QA . (ii) It follows since dQA
(iii) The proof is omitted here. Proposition 9 states that retailers overstock action hurts the manufacturer. Actually when the retailer overstocks than channel optimal quantity his own profit from product A is reduced, too. But the retailer can obtain more profit from product B to cover the loss of product A. However the manufacturer cannot internalize the effect of externality, his loss can not be compensated.

5. Numerical analysis
To illustrate the impact of externality on the manufacturer's profit and channel efficiency, we use a numerical example. The initial demands of products A and B are uniformly distributed over (0, 400] and (0, 100], respectively. The other parameters are set to the following values: pA = 100, cA = 40, vA = 20, gA = 20, pB = 50, cB = 10, vB = 5. We calculate the ratio of profit loss as
$$\frac{[\text{expected profit without externality}] - [\text{expected profit with externality}]}{[\text{expected profit without externality}]}$$
We plot the ratio of profit loss in Fig. 1.

266

Fig. 1 Ratio of profit loss

It can be seen from Fig. 1 that as the externality effect increases, the ratio of profit loss increases, for both the manufacturer and the channel. As the buyback price increases, the situation worsens; for example, the channel profit drops over 4% when b = 80 and γ = 2. Moreover, the ratio of channel profit loss is always greater than that of the manufacturer in a given situation.
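A minimal analytical sketch of this example, assuming the uniform demand DA ~ U(0, 400]; the choice b = 80, γ = 2 mirrors the case quoted above, and the closed form E[QA − DA]^+ = QA²/(2·400) is specific to uniform demand:

```python
# Parameters from Section 5; gamma and b chosen as in the text's example
pA, cA, vA, gA = 100.0, 40.0, 20.0, 20.0
pB, cB = 50.0, 10.0
gamma, b = 2.0, 80.0
Amax = 400.0                                  # DA ~ U(0, 400]

rho = (pA + gA - cA) / (pA + gA - vA)         # channel service level, Eq. (9)
w = (1 - rho) * (pA + gA) + rho * b           # coordinating wholesale price, Eq. (8)

QA_star = rho * Amax                          # uniform demand: inverse CDF at rho
beta = (pA + gA - w + gamma * (pB - cB)) / (pA + gA - b + gamma * (pB - cB))
QA_tilde = beta * Amax                        # retailer's choice, Eq. (10)

def pi_m(QA):
    # E[QA - DA]^+ = QA^2 / (2 * Amax) for uniform demand on (0, Amax]
    return (w - cA) * QA + (vA - b) * QA**2 / (2 * Amax)

loss = (pi_m(QA_star) - pi_m(QA_tilde)) / pi_m(QA_star)
print(QA_star, QA_tilde, round(100 * loss, 2), "% manufacturer profit loss")
# -> 320.0, 373.3 and a loss of a few percent, consistent with Fig. 1's trend
```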

6. Conclusions
This paper studies the impact of positive externality on a supply chain in which a returns policy is used as the coordination mechanism. We focus on a typical situation where the retailer can internalize the positive externality effect while the manufacturer cannot. We find that the retailer will overstock relative to the channel-optimal quantity, and that when the externality effect is stronger or the returns compensation is greater, the retailer's order quantity deviates more from the channel-optimal decision. When a returns policy is offered, the positive externality effect benefits the retailer but hurts the manufacturer, especially when the returns compensation is greater. Note that when the retailer can set the retail price the problem becomes more complex; this could be studied further.
Appendix
Proof of Lemma 1: (i) When $Q_A < Q_B/\gamma$,

$$\Phi(Q_A, Q_B) = \int_0^{Q_A}\!dx \int_{Q_B - \gamma x}^{\infty} f(x,y)\,dy + \int_{Q_A}^{\infty}\!dx \int_{Q_B - \gamma Q_A}^{\infty} f(x,y)\,dy$$
For $0 \le Q_{A1} < Q_{A2} < Q_B/\gamma$,
$$\Phi(Q_{A2}, Q_B) - \Phi(Q_{A1}, Q_B) = \int_{Q_{A1}}^{Q_{A2}}\!dx \int_{Q_B - \gamma x}^{Q_B - \gamma Q_{A1}} f(x,y)\,dy + \int_{Q_{A2}}^{\infty}\!dx \int_{Q_B - \gamma Q_{A2}}^{Q_B - \gamma Q_{A1}} f(x,y)\,dy > 0$$
since f(x, y) is positive. This leads to $\Phi(Q_{A2}, Q_B) > \Phi(Q_{A1}, Q_B)$. When $Q_A > Q_B/\gamma$, $\Phi(Q_A, Q_B)$ is a constant independent of $Q_A$.
(ii) Since $D_B^{e}(Q_A) = D_B + \gamma\min\{D_A, Q_A\}$ is a random variable independent of $Q_B$, it is obvious that $\Pr\{D_B^{e}(Q_A) > Q_B\}$ is decreasing in $Q_B$.
(iii)
$$\Phi(Q, \gamma Q) = \int_0^{Q}\!dx \int_{\gamma Q - \gamma x}^{\infty} f(x,y)\,dy + \int_{Q}^{\infty}\!dx \int_{0}^{\infty} f(x,y)\,dy$$
It is not difficult to verify that $K(Q) \equiv \Phi(Q, \gamma Q)$ is decreasing in Q, with K(0) = 1 and K(+∞) = 0. Since K is continuous, for any β ∈ (0,1) there must exist an η satisfying $\Phi(\eta, \gamma\eta) = \beta$.
References
[1] Emmons, H. and S. Gilbert. The role of returns policies in pricing and inventory decisions for catalogue goods. Management Science, 1998, 44(2): 276-283
[2] Granot, D. and S. Yin. On the effectiveness of returns policies in the price-dependent newsvendor model. Naval Research Logistics, 2005, 52: 765-779
[3] Bose, I. and P. Anand. On returns policies with exogenous price. European Journal of Operational Research, 2007, 178: 782-788
[4] Pasternack, B.A. Optimal pricing and returns policies for perishable commodities. Marketing Science, 1985, 4(2): 166-176
[5] Lu Xiangwen, Song Jing-Sheng and Amelia Regan. Rebate, returns and price protection policies in channel coordination. IIE Transactions, 2007, 39: 111-124
[6] Padmanabhan, V. and I.P.L. Png. Returns policies: make money by making good. Sloan Management Review, 1995, Fall: 65-72
[7] Tsay, A.A. Managing retail channel overstock: markdown money and return policies. Journal of Retailing, 2001, 77: 457-492
[8] Cachon, G. Supply chain coordination with contracts. In: S. Graves and T. de Kok (eds.), Supply Chain Management: Design, Coordination and Operation, Handbooks in Operations Research and Management Science, Vol. 11, Elsevier, Amsterdam, 2004, Chap. 6
[9] Netessine, S. and F. Zhang. Positive vs. negative externalities in inventory management: implications for supply chain design. Manufacturing & Service Operations Management, 2005, 7(1): 58-73
[10] Wang, Q. and M. Parlar. A three-person game theory model arising in stochastic inventory control theory. European Journal of Operational Research, 1994, 76: 83-97
[11] Netessine, S. and N. Rudi. Centralized and competitive inventory models with demand substitution. Operations Research, 1999, 51: 329-335
[12] Boyaci, T. Inventory competition and coordination in a multiple-channel distribution system. Working paper, McGill University, 2002


Study on a Model of Lean Supplier Management Based on the Lean Production


Xu Zhiduan1, Guo Yixun2
1 MBA Education Center, School of Management, Xiamen University, Xiamen; 2 Statistics Department, College of Economics, Xiamen University, Xiamen, P.R.China, 361005

Abstract Lean production needs lean supply, which puts much stricter requirements on suppliers. The nucleus of lean supply is lean supplier management. In this paper, we present a model of lean supplier management between an OEM and its suppliers, aimed at eliminating waste, reducing cost and improving continuously on the basis of lean production. The model includes supplier selection and categorization, supplier improvement, supplier certification and supplier evaluation. First, the supplier selection process and some basic principles for selection criteria are developed, and suppliers are categorized so that different management measures can be applied effectively. Then we design the Supplier Quality Assessment process, which focuses on comprehensive, continuous improvement of suppliers' quality systems and processes utilizing benchmarked and time-proven techniques. Finally, an index system for the performance evaluation of lean suppliers is given, in order to understand what performance a supplier has achieved over the past period, to identify opportunities for supplier improvement, and to provide evidence for the re-certification of suppliers in the next period.
Key words Lean production, Lean supplier management, Selection and categorization, Quality assessment, Performance evaluation

1. Introduction
With uncertainty in the competitive business environment, OEMs placed in the middle of the supply chain face challenges of product variety, lower cost and better quality. Lean thinking, which aims at eliminating waste, reducing cost and improving continuously, provides a strategic guiding tool for OEMs to gain competitive advantage[1]. The lean production approach pioneered by Toyota is therefore being adopted by many OEMs, especially in the electronics industry, such as DELL and KODAK. The term lean embodies a system that uses less of all inputs to create outputs similar to the mass production system while offering an increased choice to end customers[2]. For full effectiveness, the lean production system must be extended down through the supply chain. The need for minimal inventory, for cost and quality reasons, and for early detection of defects requires a kanban supply arrangement[3]. Suppliers need to deliver frequently, in small quantities, as required, to the point of use, with total quality guaranteed, eliminating the need for incoming inspection. Suppliers are also involved in the design of components with assemblers, organizing the supply base into a tiered hierarchical structure. This means that an OEM adopting the lean production approach and its key suppliers (called lean suppliers) should be locked together in the long term. Suppliers are not only the most direct external element influencing a manufacturer's production and business, but also the key factor guaranteeing the quality, price, delivery and service of the product. This paper focuses on an electronic manufacturer and its suppliers from the OEM's perspective. For convenience, the electronic manufacturer is referred to as the Company throughout the paper. We present a framework for lean supplier management based on lean production, and then focus on how to select lean suppliers, control the quality of lean suppliers and evaluate the performance of lean suppliers.

2. A Framework for Lean Supplier Management


A lean production system cannot be realized without lean supply. A lean supply arrangement should provide a flow of goods, services and technology from suppliers to the Company (with the associated flows of information and other communications in both directions) without waste[4]. The nucleus of lean supply is lean supplier management. Fig.1 shows a framework for lean supplier management, covering all aspects of it: business strategy, supplier categorization, supplier improvement, supplier certification and supplier evaluation.

Fig. 1 The framework for lean supplier management (elements: Market Drivers; Commodity Strategy; Business Strategies; Select Supplier; Categorize Supplier; Improved Supplier Base; Define Metrics and Expectations; Prioritize Gaps and/or Opportunities; Improve (Verify); Certified Suppliers; Supplier Quality Assessment; Performance Evaluation; Productivity Project Center; Supplier Certification)

The purpose of the framework is to manage the Company's portfolio of suppliers, utilizing Total Quality Management techniques, so that increased productivity results from the optimal deployment of resources. Based on this framework, supplier management activities may be divided into two parts: the first part aims at selecting a new supplier; the second part manages those suppliers already in the Company's supplier base. Once a new supplier is selected and verified, it enters the Company's supplier base and can then be managed by the second part's activities. The first part is described in Section 3. The second part includes the following:
(1) Supplier quality assessment: a comprehensive, on-site evaluation of a supplier's quality system and processes. Utilizing benchmarked and time-proven techniques, a supplier is assessed for its ability to meet the Company's quality and cycle-time expectations. This aggressive, evidence-based approach is used to identify the supplier's current capabilities, focus on continuous improvement and assess the ability to meet ever-increasing demands. Section 4 gives the details.
(2) Performance evaluation: an examination of the suppliers in the Company's supplier base over the past period, using indexes such as quality, delivery, cost, responsiveness & support and innovation. The Company thus knows at which level each supplier is placed and can take corresponding effective improvement measures. Section 5 describes the specific evaluation methods.
(3) Productivity project center: productivity here refers to the ability to cash in quality and/or reliability improvements. Suppliers in the Company's top 80% of spend are expected to provide a number of productivity ideas to the Company each year. In fact, the Company can establish an online supplier idea database to which suppliers are encouraged to connect and submit productivity proposals. The productivity project center is a process for managing these proposals and tracking information about their adoption. This information is very helpful for the specific commodity manager in making sourcing decisions.
(4) Supplier certification: certification is the designation/status earned by a supplier who consistently demonstrates excellent levels of quality, productivity and delivery performance. Through certification, suppliers may benefit from systematic improvements which potentially increase their profit margins, first consideration for new business and visible recognition from the Company; the Company may benefit from confidence in the suppliers' quality systems to consistently produce defect-free and reliable products/services, from products/services received when needed, and from continual (year-over-year) productivity improvements. If a degradation of supplier performance occurs at any time and causes a significantly negative impact on the Company's operations, the supplier's certification status will be withdrawn. Although certification is not the prime objective of supplier management, it is the Company's vision to have all key suppliers (top 90% of spend, and critical suppliers) become certified. In this way, the Company may build a competitive, world-class supply base and ensure optimal deployment of valuable resources.
In this framework, the improved supplier base is dynamic: newly selected and verified suppliers are input; a supplier's data is upgraded based on its performance over the past period; and suppliers who do not pass certification are removed.

3. Selection and Categorization of Lean Suppliers


Lean supplier management starts from the selection of lean suppliers. According to the characteristics of the lean production, key lean suppliers may play significant roles as co-producers. So the selection of lean suppliers is one of critical success factors for the lean production[5]. In this section, the supplier selection process will be described first, then some basic principals about criteria for lean suppliers have to be discussed, and finally all suppliers will be categorized so that different management measures can be used effectively. 3.1 The supplier selection process Supplier Selection is the process of developing criteria and representative importance weightings by which potential suppliers will be evaluated. The best candidates proceed to the negotiation phase for final determination of who will be chosen to fulfill the strategy. Because of the risk of bias, the supplier selection process is summarized below: (1)Develop criteria based on market and business needs (2)Weight the criteria based on importance (3)Identify all potential suppliers (4)Reduce the number of suppliers to be considered based on required criteria (5)Visit Supplier and perform Supplier Quality Assessment to further evaluate (6)Perform final evaluation and rating of the supplier finalists (7)Negotiate and Select best supplier or combination of suppliers (8)Feedback learning to selected supplier(s) for subsequent quality planning 3.2 The basic principals of developing criteria for lean suppliers It is important to emphasize on lean supply seamlessly between the Company and suppliers. So the selection criteria for lean suppliers are usually focused on quality, cost, cycle time and delivery. It depends on the Companys specific situation. The selection criteria is not a one size fits all. But there are some basic principals to develop the selection criteria. In order to meet the need for lean production, the potential suppliers may be examined through the followings: quality assurance system; flexibility of production; responsiveness to changeable plans; capability for managing inventories; flexibility of delivery; reputations. In a word, a strategy should have been developed for sourcing that has taken into consideration particular market drivers and business strategies. Suppliers are selected based on this strategy. Then suppliers need to be categorized based on short-term and long-term needs. The categorization determines the extent and responsibility for the metrics definition/expectations, gaps/opportunities, and improvement phases. 3.3 Supplier categorization Categorization or grouping is a way to manage a large base of suppliers, in a way that maximizes results in each category by minimizing both the amount and intensity of resources expended in each category. Categorization will determine the nature and level of resources utilized and what expectations are placed on the supplier. Guidelines for categorizing suppliers include, but are not limited to: Product/Industry growth(declining versus increasing); Market drivers (price versus technology); Length of desired involvement (short-term versus long-term); Criticality (low versus high); Requirements (standard versus custom); Types of projects with suppliers (tactical versus strategic); Switching costs (low versus high). The above is to be used as a guideline only. The business needs and value of opportunities will determine the ultimate decision of the suppliers category. 
Usually, there are four types of suppliers: Type I, low value/low risk and convenience sources; Type II, high value/low risk and multiple sources; Type III, low value/high risk or sole sources; Type IV, high value/high risk and/or single sources[4]. Based on these four types, we divide suppliers into four categories as follows:
(1) Strategic suppliers: Suppliers are designated as strategic when the Company desires a longer-term relationship, possibly due to product or service co-development opportunities. A supplier in this category is a Type I supplier.
(2) Key item suppliers: Key item suppliers are those who require involvement with the Company due to the criticality of the product or service they provide. This involvement is usually a medium-term effort in order to reach a level of assurance prior to a ship-to-use status. These are most likely Type II suppliers, and possibly some Type Is and IIIs.
(3) Manage-by-exception suppliers: Whether caused by a change in the Company's processes, or by a problem or change caused by the supplier, it may become necessary to engage in a short-term project with the supplier. This supplier could come from any of the four types.
(4) Approved suppliers: Any supplier in the Company's supply base who has not been identified as belonging to one of the other categories (strategic, key item, and manage-by-exception) is in the approved category. It is in the Company's best interests to reduce the key supply base (90% of spend) to a manageable number of suppliers. Being competitive within their industry is an expectation common to all approved suppliers. The Company's resources will be minimally utilized for this category; if additional resources are necessary beyond general communication or the setting of expectations, this is an indication that the supplier may not be categorized properly or that the supplier should be removed from the base.
Based on the category chosen for the supplier, subsequent efforts will differ. Tab.1 below provides a summary.
Tab.1 Different efforts on the supplier based on the category
Strategic: define metrics and expectations by considering certification criteria plus client needs; identify gaps/opportunities jointly and prioritize them; improve (verify) by closing gaps and showing results jointly, re-categorizing the supplier if/when needed.
Key Item: define metrics and expectations from client needs; identify gaps/opportunities using some specific standards of the Company to develop a plan; improve (verify) by working the quality plan to completion, then re-categorizing the supplier.
Manage-by-Exception: define metrics and expectations specific to the problem; identify gaps/opportunities by establishing target expectations; improve (verify) by reporting progress until the gap is closed, maintaining the data monitoring system and moving back to approved.
Approved: define metrics and expectations by communicating certification criteria and generic expectations; for gaps/opportunities the supplier is responsible; for improvement the supplier is responsible and provides data to the Commodity Manager, as requested.
Not yet selected: define metrics and expectations through the supplier quality assessment and other needed information; select the best supplier based on assessment results and information obtained; then categorize the supplier and continue as noted.

4. Quality Systems and Assessments of Lean Suppliers


4.1 Quality systems
It is generally believed that quality does not happen by chance, especially over the long term. As the Company desires higher levels of assurance with regard to supplier quality, reliability, serviceability, and delivery, additional attention is placed on the supplier's quality system. Therefore, the following expectations are placed on a supplier:
(1) Have a documented quality system.
(2) Use process controls and stress defect prevention rather than defect detection.
(3) Maintain records that support lot traceability.
(4) Maintain records that support reliability and serviceability performance metrics.
(5) Characterize all processes.
(6) Achieve designs and processes that result in Cp > 2 and Cpk > 1.5.
(7) Strive for continual improvement in quality and reliability in all facets of operations.
Tab.2 below may help to distinguish where quality tools may be best utilized. It is meant to be used as a guideline only.
4.2 The Supplier Quality Assessment
The Supplier Quality Assessment process is a comprehensive, on-site evaluation of a supplier's quality system and processes. Utilizing benchmarked and time-proven techniques, a supplier is assessed for its ability to meet the Company's quality and cycle time expectations. This aggressive, evidence-based approach is used to identify the supplier's current capabilities, its focus on continuous improvement and its ability to meet ever-increasing demands. Elements of the assessment include:
(1) Management of the quality system and business: organization, commitment, measurement and reporting, training, cost analysis, continuous improvement activities and teams, and customer feedback.
Tab.2 Quality tools matrix
Strategic. Define metrics: concurrent product/process design; Voice of the Customer (VOC); Quality Function Deployment (QFD); cycle time methodology; defect measurement/6 sigma; Failure Modes and Effects and Criticality Analysis (FMECA)/Failure Modes and Effects Analysis (FMEA); reliability methods; process capability; data interpretation/presentation; Design of Experiments (DOE); decision and risk analysis. Identify gaps: Value Analysis and Value Engineering (VA/VE); Pugh Concept Selection; process mapping; process capability; descriptive statistics; graphical techniques. Improve (verify): Management-By-Fact (MBF); concurrent product/process design; design for x (DFX); mistake proofing/fail-safing; seven basic quality tools; data interpretation/presentation.
Key Item. Define metrics: VOC; QFD; defect measurement/6 sigma; Item Quality Process; FMECA/FMEA; reliability methods; process capability; data interpretation/presentation; DOE; decision and risk analysis. Identify gaps: VA/VE; Pugh Concept Selection; process mapping; cycle time methodology; tolerancing; Item Quality Process; process capability; descriptive statistics; graphical techniques. Improve (verify): MBF; DFX; Item Quality Process; mistake proofing/fail-safing; seven basic quality tools; data interpretation/presentation.
Manage-By-Exception. Define metrics: defect measurement/6 sigma; FMECA/FMEA; reliability methods; seven basic tools; data interpretation/presentation; decision and risk analysis. Identify gaps: process mapping; graphical techniques. Improve (verify): MBF; DFX; cycle time methodology; mistake proofing/fail-safing; seven basic quality tools; data interpretation/presentation; statistical software.
80% Spend (subset of Approved). Define metrics: defect measurement/6 sigma; data interpretation/presentation. Identify gaps: process mapping; graphical techniques; benchmarking. Improve (verify): seven basic quality tools; data interpretation/presentation; strategic cost analysis.
100% Spend (Approved). Define metrics: defect measurement/6 sigma; data interpretation/presentation. Identify gaps: graphical techniques. Improve (verify): seven basic quality tools; data interpretation/presentation.

(2) Process capability: understanding of customer requirements, specification review, order entry, use of process flow maps, capability studies, identification of key process parameters that affect the ability to meet customer requirements, and process control.
(3) Change control: customer notification of supplier-caused changes, audit trails, and revision management.
(4) Process control: training, data collection and usage, ongoing control criteria, use of statistical and problem-solving tools, and process evaluation and improvement.
(5) Control of purchased and non-conforming materials: how data is defined, tracked, analyzed, and used to improve the purchasing, design, contract, and production processes.
(6) Corrective and preventive action: analysis of problems, implementation of solutions to prevent recurrence of problems, usage of data to identify trends and prevent potential problems, internal auditing, and verification of the effectiveness of solutions.
Although the assessment utilizes several ISO900X concepts, it goes beyond evaluation of ISO documentation into the effectiveness of the processes themselves. ISO registration does not necessarily correlate to a successful Supplier Quality Assessment result.

5. Performance Evaluation on Lean Suppliers


The purposes of performance evaluation on lean suppliers are to understand what performance a supplier has achieved over the past period (usually a year), to identify opportunities for improving a supplier, and to provide evidence for re-certification of suppliers in the next period. Obviously, the objects of performance evaluation are those suppliers who have been approved into the Company's supplier base and have been active over the past period. Based on lean production, the index system of performance evaluation on lean suppliers may be adopted as shown in Fig.2.
Fig.2 The index system of performance evaluation on lean suppliers:
1.0 Quality (weight=6): 1.1 lot acceptance rate; 1.2 sampling inspection defect rate; 1.3 parts scrap rate; 1.4 discrepant parts per million.
2.0 Delivery (weight=5): 2.1 on-time delivery rate; 2.2 delivery cycle time; 2.3 changeable orders acceptance rate.
3.0 Cost (weight=5): 3.1 price level; 3.2 quotation behavior; 3.3 attitude and behavior of cost reduction; 3.4 payment.
4.0 Responsiveness & support (weight=5): 4.1 sensitivity to complaints; 4.2 communication; 4.3 attitude of collaboration; 4.4 co-improvement; 4.5 after-sales service and support.
5.0 Innovation (weight=5): 5.1 attention to new product development; 5.2 intention to use new technology.

In Fig.2, the second level has eighteen indexes, each of which is given a score between 0 and 5; the first level has five indexes, i.e., quality, delivery, cost, responsiveness & support and innovation, where the score of each first-level index equals its weight (shown in Fig.2) multiplied by the average score of its second-level indexes. The scores of the five first-level indexes are summed to give the final score of a supplier; the full score is 100. In general, the evaluators may be those inside the Company who are in daily contact with suppliers, usually from the planning department, the purchasing department, the product development department, the quality management department, the production department and so on.
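The scoring rule above is easy to mechanize. Below is a minimal Python sketch of it; the dictionary layout and function name are illustrative rather than taken from the paper, and since the paper does not spell out how the weighted sum is normalized to the stated full score of 100, the sketch simply returns the raw weighted sum.

```python
# Weights of the five first-level indexes, as given in Fig.2.
FIRST_LEVEL_WEIGHTS = {
    "quality": 6, "delivery": 5, "cost": 5,
    "responsiveness_support": 5, "innovation": 5,
}

def supplier_score(sub_scores):
    """sub_scores maps each first-level index to the list of 0-5 scores of
    its second-level indexes; the result is sum(weight * average)."""
    total = 0.0
    for index, weight in FIRST_LEVEL_WEIGHTS.items():
        scores = sub_scores[index]
        total += weight * (sum(scores) / len(scores))
    return total

# Example: a supplier rated 4 on every second-level index gets
# (6 + 5 + 5 + 5 + 5) * 4 = 104 under this literal reading of the weights.
```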

6. Conclusions
Modern enterprises have begun to realize suppliers' great influence on them and to regard the establishment and development of cooperative relationships with suppliers as part of their business strategy. Lean production needs lean supply, which puts forward much stricter requirements on suppliers. No doubt, lean supplier management has become one of the key success factors for an OEM. In this paper, we describe some practices of lean supplier management, such as supplier selection and categorization, quality control and assessment, and performance evaluation. Of course, lean supplier management covers more than these. The purpose of this paper is to provide some useful tools and methods for those who want to improve their supplier management based on lean production.
References
[1] Lewis, M.A. Lean production and sustainable competitive advantage. International Journal of Production and Operations Management, 2000, 28(8): 959-978
[2] Womack, J.P., Jones, D.T. From lean production to the lean enterprise. Harvard Business Review, 1994, 75: 93-103
[3] Ronan M. Lean supply: the design and cost reduction dimensions. European Journal of Purchasing and Supply Management, 2001, 7: 227-242
[4] Lamming, R. Squaring lean supply with supply chain management. International Journal of Operations and Production Management, 1996, 16(2): 183-196
[5] New, S., Ramsay, J. A critical appraisal of aspects of the lean chain approach. European Journal of Purchasing and Supply Management, 1997, 2(2): 93-102


Study on Lead Time Inventory Models Based on Customer Waiting Time


Yan Lei, Chen Rongqiu, Li Li
School of Management, Huazhong University of Science & Technology, P.R.China, 430074

Abstract Traditional inventory models usually ignore customer selection behavior in the face of stockout, or oversimplify the economic and credit losses caused by customer waiting time. The relationship between the customer waiting time and the order lead time is modeled, a new algorithm is put forward to solve the model, and a numerical example is given to demonstrate the effectiveness of the model. Key words Customer waiting time, Lead time, Customer loss, Inventory

1. Introduction
Motivated by the philosophy of just-in-time (JIT) in the area of production/inventory control, many papers have discussed the issue of changing the givens, as in Silver[1]. Givens such as lead time and set-up time are regarded as fixed parameters in traditional production/inventory models; however, in practical situations, givens may be changed by various efforts[2]. For example, the ultimate goal of JIT is to eliminate waste, and the successful experience of the Japanese has shown that this goal can be achieved through actions such as shortening lead time, reducing set-up cost and improving quality. Also, time-based competition has forced firms to analyze and optimize time as a competitive advantage[3]. In recent years, the issue of shortening lead time has received a lot of interest. A great emphasis in reducing lead time is to lower the safety stock, reduce the loss caused by stockout, increase the service level to the customer and gain competitive advantages in business. Liao and Shyu[4] first presented a probabilistic inventory model in which the order quantity was predetermined and lead time was the unique decision variable; in that paper, lead time is decomposed into n components, each having a different crashing cost for reduced lead time, so the lead time crashing cost is described by a piecewise linear function. Ouyang and Chang[2], Ben-Daya and Raouf[5], Ouyang[6], Moon and Choi[7], Hariga and Ben-Daya[8] and Manas Kumar Maiti and Manoranjan Maiti[9] also developed various analytical inventory models to explore the lead time reduction problem. Among these, Ben-Daya and Raouf[5] developed a model that considers both lead time and order quantity as decision variables. Later, Ouyang et al[2] extended the models to consider shortages, where the total amount of stockout is treated as a mixture of backorders and lost sales; but they assumed a given service level and, therefore, a fixed reorder point. In the above inventory models with stockout loss, the authors assume that lead time demand follows a normal distribution and that the fraction β of the shortage that will be backordered is fixed, so that 1-β is the fraction of lost sales under stockout; they study the lead time problem with the objective of minimizing the expected annual total cost when β equals 0, 0.5, 0.8 and 1. In practical situations, however, the probability of lost sales depends on the comparison between the customer waiting time under stockout and the reach time of products; that is, it depends on the relationship between customer waiting time and lead time. With the above considerations and based on classical production/inventory control strategies, the concept of customer waiting time is first introduced and the quantitative relationship between customer waiting time and lead time is analyzed. Then the inventory model is studied with a mixture of backorders and lost sales, where the relationship between customer waiting time and lead time, the lead time, the lot size and the reorder point are decision variables, and the model is solved by iterative algorithms. Furthermore, a numerical example is given to illustrate the results, and concluding remarks are made.

This research is supported by National Natural Science Funds of China (No: 70332001).


2. The relationship between customer waiting time and lead time


Under the theory of time-based competition, the competitive advantage for firms is instant response to customer demand. Based on traditional inventory strategies and practical situations, we assume customers know that they need to wait under stockout; if the reach time of products exceeds the waiting time, customers cancel the purchase and lost sales occur. Because a particular customer segment of some product is homogeneous with respect to waiting time, and in order to simplify the model, we assume that the waiting time of all customers is the same, denoted by τ, with reference to the treatment of the customer utility threshold under expected utility in Li[10]. The relationship between τ and L under stockout is then studied. Consider the supply chain of a single product consisting of a supplier, a retailer and customers. When the inventory decreases to the reorder point r, the retailer reorders with lot size Q; after a lead time L, products arrive and satisfy customer demand. We assume every customer orders one product, and the lead time demand, denoted by X, follows the normal distribution X ~ N(DL, σ²L), with r = DL + kσ√L and P(X > r) = P(Z > k) = q, where k is the safety factor and q is the customer service level. We assume the retailer determines r and L for a single period based on historical and forecasted demand. Because of the complexity of customer arrival times, we set the time of reordering to 0 and analyze arrival times from time 0. We assume the arrival time of a customer is a random variable, denoted by Y, following a normal distribution Y ~ N(μt, σt²), with distribution function FY(y) and density function fY(y). Under stockout, if L - y ≤ τ, the customer waiting time is no less than the reach time of products, so the customer waits and a backorder occurs; although there is no lost sale, a credit loss occurs to the supply chain. If τ < L - y, the waiting time is smaller than the reach time and a lost sale occurs; then more credit loss occurs to the whole supply chain. In an inventory cycle, let Ew denote the expected customer backorders; then
$$E_w = \int_{L-\tau}^{L} \int_{r}^{\infty} (x-r) f_X(x)\,dx\, f_Y(y)\,dy = E(x-r)^{+} \int_{L-\tau}^{L} f_Y(y)\,dy \qquad (1)$$

Let El denote the expected lost sales; then

$$E_l = \int_{0}^{L-\tau} \int_{r}^{\infty} (x-r) f_X(x)\,dx\, f_Y(y)\,dy = E(x-r)^{+} \int_{0}^{L-\tau} f_Y(y)\,dy \qquad (2)$$
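As a quick illustration, equations (1) and (2) reduce to closed forms in the normal-distribution setting: the inner integral equals E(x-r)+ and the outer integral is a difference of normal distribution function values. The sketch below computes both quantities with SciPy; all parameter names are illustrative, and D is assumed to be expressed in the same time unit as L (e.g. demand per week when L is in weeks).

```python
from math import sqrt
from scipy.stats import norm

def expected_backorders_and_lost_sales(D, sigma, L, r, tau, mu_t, sigma_t):
    # Lead time demand X ~ N(D*L, sigma^2 * L); k is the safety factor.
    k = (r - D * L) / (sigma * sqrt(L))
    # E(x-r)^+ = sigma*sqrt(L)*psi(k), with psi(k) = phi(k) - k*(1 - Phi(k)).
    psi = norm.pdf(k) - k * (1.0 - norm.cdf(k))
    exr = sigma * sqrt(L) * psi
    FY = lambda y: norm.cdf(y, loc=mu_t, scale=sigma_t)  # arrival time cdf
    Ew = exr * (FY(L) - FY(L - tau))    # arrivals in [L-tau, L] wait (eq. 1)
    El = exr * (FY(L - tau) - FY(0.0))  # arrivals in [0, L-tau] are lost (eq. 2)
    return Ew, El
```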

3. Basic model
In this paper, we study the lead time reduction problem in lot size/reorder point inventory models with a mixture of backorders and lost sales, where the relationship between customer waiting time and lead time, the lot size, the reorder point and the lead time are decision variables. Specifically, when lead time demand follows a normal distribution, we establish the following expected annual total cost function:

C(Q, r, L) = set-up cost + holding cost + stockout cost + lead time crashing cost

$$C(Q,r,L) = \frac{AD}{Q} + h\left(\frac{Q}{2} + r - DL + E_l\right) + \frac{\pi D}{Q} E_w + \frac{(\pi + \pi_0) D}{Q} E_l + \frac{D}{Q} R(L) \qquad (3)$$

with $R(L) = c_i (L_{i-1} - L) + \sum_{j=1}^{i-1} c_j (v_j - u_j)$, $L \in [L_i, L_{i-1}]$ ($i = 1, 2, \ldots, n$),
where
Q: lot size,
D: annual demand rate,
A: set-up cost per order,
h: annual inventory holding cost per unit,
π: shortage cost per unit short,
π0: marginal profit per unit,
E(x-r)+: expected demand shortage at the end of a cycle, E(x-r)+ = σ√L·ψ(k), where ψ(k) = φ(k) - k[1 - Φ(k)], and φ and Φ denote the standard normal probability density function and distribution function, respectively,
ui, vi, ci: the ith component of lead time has a minimum duration ui, a normal duration vi, and a crashing cost per unit time ci, i = 1, 2, ..., n; for convenience, let c1 ≤ c2 ≤ ... ≤ cn,
Li: length of lead time with components 1, 2, ..., i crashed to their minimum duration, $L_i = \sum_{j=1}^{n} v_j - \sum_{j=1}^{i} (v_j - u_j)$ ($i = 1, 2, \ldots, n$); also let $L_0 = \sum_{j=1}^{n} v_j$,
R(L): lead time crashing cost per cycle, $R(L) = c_i (L_{i-1} - L) + \sum_{j=1}^{i-1} c_j (v_j - u_j)$ for $L \in [L_i, L_{i-1}]$, and R(L0) = 0.
The optimal (Q, r, L) is determined with the objective of minimizing the expected annual total cost in order to set inventory strategies. Differentiating equation (3) with respect to Q and r in any given interval (Li, Li-1) yields
$$\frac{\partial C(Q,r,L)}{\partial Q} = -\frac{AD}{Q^2} + \frac{h}{2} - \frac{\pi D}{Q^2} E_w - \frac{(\pi + \pi_0) D}{Q^2} E_l - \frac{D}{Q^2} R(L) \qquad (4)$$

$$\frac{\partial C(Q,r,L)}{\partial r} = h + h \frac{\partial E_l}{\partial r} + \frac{\pi D}{Q} \frac{\partial E_w}{\partial r} + \frac{(\pi + \pi_0) D}{Q} \frac{\partial E_l}{\partial r} \qquad (5)$$

where

$$\frac{\partial E_w}{\partial r} = -\int_{r}^{\infty} f_X(x)\,dx \int_{L-\tau}^{L} f_Y(y)\,dy; \qquad \frac{\partial E_l}{\partial r} = -\int_{r}^{\infty} f_X(x)\,dx \int_{0}^{L-\tau} f_Y(y)\,dy.$$

Let $\bar{F}(r) = \int_{r}^{\infty} f_X(x)\,dx$, so

$$\frac{\partial E_w}{\partial r} = -\bar{F}(r) \int_{L-\tau}^{L} f_Y(y)\,dy; \qquad \frac{\partial E_l}{\partial r} = -\bar{F}(r) \int_{0}^{L-\tau} f_Y(y)\,dy.$$

Then equation (5) becomes:

$$\frac{\partial C(Q,r,L)}{\partial r} = h - \left[h + \frac{(\pi + \pi_0) D}{Q}\right] \bar{F}(r) \int_{0}^{L-\tau} f_Y(y)\,dy - \frac{\pi D}{Q} \bar{F}(r) \int_{L-\tau}^{L} f_Y(y)\,dy \qquad (6)$$

From equations (4) and (6), it is easy to see that $\partial^2 C(Q,r,L)/\partial Q^2 > 0$ and $\partial^2 C(Q,r,L)/\partial r^2 > 0$, so equation (3) is convex with respect to Q and r in any given interval (Li, Li-1). Let $\partial C(Q,r,L)/\partial Q = 0$ and $\partial C(Q,r,L)/\partial r = 0$; then:

$$Q = \left\{ \frac{2D\left[A + R(L) + \pi E_w + (\pi + \pi_0) E_l\right]}{h} \right\}^{1/2} \qquad (7)$$

$$\bar{F}(r) = \frac{hQ}{\left[hQ + D(\pi + \pi_0)\right] \int_{0}^{L-\tau} f_Y(y)\,dy + \pi D \int_{L-\tau}^{L} f_Y(y)\,dy} \qquad (8)$$

Differentiating equation (3) with respect to L yields

$$\frac{\partial C(Q,r,L)}{\partial L} = \frac{1}{2} h k \sigma L^{-1/2} + \frac{1}{2}\left(h + \frac{D \pi_0}{Q}\right) \sigma L^{-1/2} \psi(k) \int_{0}^{L-\tau} f_Y(y)\,dy + \frac{\pi D}{2Q} \sigma L^{-1/2} \psi(k) \int_{0}^{L} f_Y(y)\,dy + \left(h + \frac{D \pi_0}{Q}\right) E(x-r)^{+} f_Y(L-\tau) + \frac{\pi D}{Q} E(x-r)^{+} f_Y(L) \qquad (9)$$

Differentiating (9) with respect to L again gives

$$\frac{\partial^2 C(Q,r,L)}{\partial L^2} = -\frac{1}{4} h k \sigma L^{-3/2} - \frac{1}{4} h \sigma L^{-3/2} \psi(k) \int_{0}^{L-\tau} f_Y(y)\,dy - \frac{\sigma D}{4Q} L^{-3/2} \psi(k) \left[ \pi \int_{0}^{L} f_Y(y)\,dy + \pi_0 \int_{0}^{L-\tau} f_Y(y)\,dy \right] \qquad (10)$$

So equation (10) is negative; for given Q and r, C(Q, r, L) is concave with respect to L, L ∈ [Li, Li-1], and the minimum expected annual total cost will occur at the end points of the interval.

4. Algorithm and numerical example


4.1 Algorithm
To search for the minimum expected annual total cost over (Q, r, L), the key is to obtain the optimal (Q, r) pair for a given L by solving equations (7) and (8) iteratively until convergence. The convergence of the procedure can easily be proved by adopting a graphical technique similar to that used in Hadley and Whitin[11]. Thus, we can use the following overall procedure to find the optimal Q, r and L when the customer waiting time is τ.
Step 1: For each Li (i = 1, 2, ..., n) start with

$$Q = \left\{ \frac{2D\left[A + R(L_i)\right]}{h} \right\}^{1/2}$$

and repeat Step 2 and Step 3 until convergence.
Step 2: Find r from equation (8) using a line search.
Step 3: With the r found in Step 2, compute Q from equation (7).
Step 4: For each triple (Qi, ri, Li), compute the corresponding expected annual total cost C(Qi, ri, Li) (i = 1, 2, ..., n). The optimal (Q, r, L) is the one for which the expected annual total cost is minimum.
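Under the stated assumptions, Steps 1-3 can be sketched in Python as follows for a single Li; an outer loop over L1, ..., Ln then keeps the triple with the smallest cost (Step 4). Since lead time demand is normal, the line search of Step 2 reduces to an inverse-CDF evaluation. The function reuses expected_backorders_and_lost_sales from the earlier snippet; every name here is illustrative, D, L and tau are assumed to be in consistent time units, and the parameters are assumed to keep the target value of F̄(r) inside (0, 1).

```python
from math import sqrt
from scipy.stats import norm

def optimize_for_L(D, A, h, pi, pi0, sigma, tau, mu_t, sigma_t, L, RL,
                   tol=1e-6, max_iter=200):
    Q = sqrt(2 * D * (A + RL) / h)      # Step 1 starting value
    r = D * L                           # initial reorder point guess
    for _ in range(max_iter):
        # Step 2, via equation (8): compute the target Fbar(r) and invert it.
        I_lost = norm.cdf(L - tau, mu_t, sigma_t) - norm.cdf(0.0, mu_t, sigma_t)
        I_wait = norm.cdf(L, mu_t, sigma_t) - norm.cdf(L - tau, mu_t, sigma_t)
        fbar = h * Q / ((h * Q + D * (pi + pi0)) * I_lost + pi * D * I_wait)
        fbar = min(max(fbar, 1e-12), 1.0 - 1e-12)   # guard the inverse CDF
        r_new = norm.ppf(1.0 - fbar, loc=D * L, scale=sigma * sqrt(L))
        # Step 3, via equation (7).
        Ew, El = expected_backorders_and_lost_sales(D, sigma, L, r_new,
                                                    tau, mu_t, sigma_t)
        Q_new = sqrt(2 * D * (A + RL + pi * Ew + (pi + pi0) * El) / h)
        converged = abs(Q_new - Q) < tol and abs(r_new - r) < tol
        Q, r = Q_new, r_new
        if converged:
            break
    # Expected annual total cost, equation (3).
    Ew, El = expected_backorders_and_lost_sales(D, sigma, L, r, tau, mu_t, sigma_t)
    cost = (A * D / Q + h * (Q / 2 + r - D * L + El)
            + pi * D / Q * Ew + (pi + pi0) * D / Q * El + D / Q * RL)
    return Q, r, cost
```

Step 4 then amounts to calling optimize_for_L once for each Li with its crashing cost R(Li) and keeping the (Q, r, Li) with the smallest cost.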
4.2 Numerical example
In order to illustrate the above solution procedure, let us consider an inventory system with the following data: D = 600 units/year, A = $200 per order, h = $20 per item per year, π = $50 per unit short, π0 = $150 per unit of lost sales, σ = 7 units/week, μt = 1 week and σt = 1 week.
The lead time has three components; the remaining lead time data are shown in Tab.1. The results of the solution procedure are shown in Tab.2 for τ = 0, 1, 2, and a summary of these results is presented in Tab.3.
Tab.1 Lead time data (per component: normal duration vi (day), minimum duration ui (day), unit crashing cost ci ($/day))
Component 1: vi = 20, ui = 6, ci = 0.4
Component 2: vi = 20, ui = 6, ci = 1.2
Component 3: vi = 16, ui = 9, ci = 5.0

Tab.2 Results of the optimal procedure (Li in weeks)
τ = 0: (Li = 8: r = 133, Q = 117, C = 3060); (Li = 6: r = 105, Q = 117, C = 3060); (Li = 4: r = 75, Q = 121, C = 2991); (Li = 3: r = 59, Q = 129, C = 3069)
τ = 1: (Li = 8: r = 130, Q = 117, C = 3101); (Li = 6: r = 102, Q = 117, C = 3010); (Li = 4: r = 70, Q = 121, C = 2905); (Li = 3: r = 53, Q = 129, C = 2944)
τ = 2: (Li = 8: r = 122, Q = 119, C = 2975); (Li = 6: r = 95, Q = 119, C = 2936); (Li = 4: r = 66, Q = 122, C = 2832); (Li = 3: r = 51, Q = 130, C = 2930)


Tab.3 Summary of the results of the optimal procedure
τ = 0: Li = 4, Q = 121, r = 75, C(Q, r, L) = 2991
τ = 1: Li = 4, Q = 121, r = 70, C(Q, r, L) = 2905
τ = 2: Li = 4, Q = 121, r = 66, C(Q, r, L) = 2832

5. Conclusions
Traditional inventory models usually ignore customer selection behavior in the face of stockout, or oversimplify the economic and credit losses caused by waiting time. This paper studies the relationship between customer waiting time and lead time and establishes inventory models in which lead time demand and customer arrival time both follow normal distributions. A new algorithm is put forward to solve the model, and a numerical example is given to demonstrate its effectiveness. It may be an interesting research topic to investigate lead time demand following general distributions. Moreover, a possible extension of this work may consider more complex customer arrival processes.
References
[1] Silver, E.A., Pyke, D.F., Peterson, R. Inventory Management and Production Planning and Scheduling. New York: Wiley, 1998
[2] Liang-Yuh Ouyang, Hung-Chi Chang. Lot size reorder point inventory model with controllable lead time and set-up cost. International Journal of Systems Science, 2002, 33(8): 632-635
[3] Stalk, G. Jr. Time - the next source of competitive advantage. Harvard Business Review, 1988, 4: 41-51
[4] Liao, C.J., Shyu, C.H. An analytical determination of lead time with normal demand. International Journal of Operations and Production Management, 1991, 11: 72-78
[5] Ben-Daya, M., Raouf, A. Inventory models involving lead time as a decision variable. Journal of the Operational Research Society, 1994, 45: 579-582
[6] Ouyang, L., Yeh, N., Wu, K. Mixture inventory model with backorders and lost sales for variable lead time. Journal of the Operational Research Society, 1996, 47: 829-832
[7] Moon, I., Choi, S. A note on lead time and distributional assumptions in continuous review inventory models. Computers and Operations Research, 1998, 25: 1007-1012
[8] Hariga, M., Ben-Daya, M. Some stochastic inventory models with deterministic variable lead time. European Journal of Operational Research, 1999, 113: 42-51
[9] Manas Kumar Maiti, Manoranjan Maiti. Two-storage inventory model with lot-size dependent fuzzy lead-time under possibility constraints via genetic algorithm. European Journal of Operational Research, 2007, 179: 352-371
[10] Li, L. The role of inventory in delivery-time competition. Management Science, 1992, 38(2): 182-197
[11] Hadley, G., Whitin, T. Analysis of Inventory Systems. New Jersey: Prentice-Hall, 1963
[12] Ouyang, L.Y., Chen, C.K., Chang, H.C. Lead time and ordering cost reductions in continuous review inventory systems with partial backorders. Journal of the Operational Research Society, 1999, 50: 1272-1279
[13] Sven Axsäter. A simple procedure for determining order quantities under a fill rate constraint and normally distributed lead-time demand. European Journal of Operational Research, 2006, 174: 480-491
[14] Wikner, J. Continuous-time dynamic modelling of variable lead times. International Journal of Production Research, 2003, 41: 2787-2798
[15] Sunil Chopra, Gilles Reinhardt, Maqbool Dada. The effect of lead time uncertainty on safety stocks. Decision Sciences, 2004, 34: 12-24
[16] Sarker, B.R., Coates, E.R. Manufacturing setup cost reduction under variable lead times and finite opportunities for investment. International Journal of Production Economics, 1997, 49: 237-247

The Method Based on the PSO Algorithm for the Order Planning of the Steel Plant
Zhang Tao, Cheng Haigang, Chu Xiaoxuan, Xie Meiping
School of Information Management and Engineering, Shanghai University of Finance and Economics, Shanghai, P.R.China, 200433

Abstract This paper constructs an integer programming model for order planning which considers inventory matching and production planning simultaneously and takes as its objective minimizing the total cost of earliness/tardiness penalty, tardiness penalty within the delivery time window, production, inventory matching and order cancellation penalty. The paper adopts the PSO algorithm as the solution strategy and designs heuristic rules to repair infeasible solutions. The numerical analysis shows that the model fits the production process, and the solutions obtained by this method are superior to those obtained by the method that considers inventory matching and production planning in separate phases. So, the model and the algorithm are valid. Key words System engineering; MTO (make-to-order); MTS (make-to-stock); Order planning; Inventory matching; Particle swarm optimization (PSO)

1. Introduction
In recent years, order planning has become a focus in the research field of production management in the iron-steel industry. In order to meet the demand for multi-category, small-lot products, iron-steel enterprises should consider a planning strategy based on MTO (make-to-order). Order planning is quite important in the MTO management system of iron-steel enterprises. Paper [1] presented an integer programming model for order planning in iron-steel enterprises based on the MTO management system, and presented a genetic algorithm as the solution strategy. Besides on-time delivery, iron-steel factories should also consider the utilization ratio of inventory and equipment; paper [2] presented an order planning model with multiple objectives. On the other hand, customer demands are unbalanced. To ensure the utilization ratio of the equipment, factories ought to forecast new orders based on certain rules. Under the MTS (make-to-stock) production method, factories keep the products that are produced against forecast orders as inventory so as to meet future demands, and inventory matching is necessary. Many researchers have studied the inventory matching problem. Kalagnanam researched the matching of finished products and semi-finished products separately with the objectives of maximizing inventory matching and minimizing the loss of remaining materials[3]. Actually, in order to improve the market service level as well as to ensure interior production economy, iron-steel factories usually adopt a production mode that combines the concepts of MTO and MTS[4]. After receiving customer orders, the steel factory first performs inventory matching, that is, it picks the products that satisfy the customer orders from stock and delivers them directly; it then makes a production plan for the orders that cannot be matched with current inventory and decides the processing time of the orders in each process[5]. However, inventory matching and production planning influence each other. Paper [6] considered inventory matching and production planning jointly, treated the entire processing line as a single process, and required every order to be finished in one period from material input to product output. On the basis of paper [6], this paper studies order planning considering both inventory matching and production arrangement simultaneously based on the MTO-MTS management architecture, develops an integer programming model for order planning, and presents a modified PSO algorithm with a heuristic repair strategy for infeasible solutions to solve the model.

This paper is supported by National Natural Science Fund of China (No.70501018, 70602031) and 211 Project of Shanghai University of Finance & Economics.


2. Mathematical model of order planning


2.1 Description of the order planning problem
In iron-steel enterprises, according to the demands of customer orders, inventory matching picks the finished products in stock that have the right qualifications. The steel grade of the inventory product should not be lower than the quality requirement of the order (on the assumption that one order can match with at most one kind of product in stock). Subject to the constraints, production planning assigns the processing period of every process to the orders that are waiting to be produced. Based on the MTO-MTS production model and following the requirements of management integration, this paper works out the order planning taking five days as the period unit, aiming at the orders of two months (eight period units) and taking both inventory matching and production planning into consideration.
Considering that there are N due orders in the planning horizon [1, T], there are three choices when making the order plan, i.e., inventory matching, workshop production and order cancellation. The procedure of inventory matching/production planning is as follows. Suppose that there are N orders, J processes and K kinds of inventory products; the weight of every order, the delivery time, the production route (the processes passed through and the machines available in every process) and the material specifications are all known; every order follows the same processes, and the production capacity of every process is fixed. To meet the order demands, order planning decides whether to use inventory matching or workshop production (and the production period in every process) under the preconditions of satisfying the production capacity constraints and the sequence relations, in order to minimize the total penalties for the orders. (Every order should pass all the processes; in the same period unit, at most two processes of an order can be finished.)
Notations:
N denotes the total quantity of the orders;
K denotes the quantity of the kinds of inventory products;
J denotes the total quantity of the processes;
T denotes the planning horizon;
ωi denotes the demand weight of order i;
Qk denotes the weight of inventory product k;
Ejt denotes the production capacity of process j in period t;
[ai, bi] denotes the delivery time window of order i;
θi denotes the delivery time penalty coefficient per unit weight of order i within the time window;
αi denotes the earliness penalty coefficient per unit weight of order i;
βi denotes the tardiness penalty coefficient per unit weight of order i;
pi denotes the order cancellation penalty coefficient per unit weight of order i;
λ denotes the minimal expected load factor of each process, a real number in (0, 1);
vj denotes the penalty coefficient for insufficient utilization of the production capacity of process j;
Ij0 denotes the original inventory quantity of process j;
Ijmax denotes the maximal inventory capacity of process j.
Decision variables:
Yik = 1 if order i is matched with inventory k, and 0 otherwise;
Xijt = 1 if order i is processed in process j in period t, and 0 otherwise;
where i denotes the index of the order, j denotes the index of the process, and t denotes the period, i = 1, ..., N, j = 1, ..., J, t = 1, ..., T.
The decision-making factors mainly include:
1) Matching cost coming from inventory matching. Assume that the steel grade order i requires is k′ and the actual steel grade of the matching inventory product is k; the cost coming from matching order i with inventory k is Cik, where μik is the material loss penalty caused by cutting material according to the order specification:

Cik = μik, if k and k′ belong to the same steel sequence and the same steel grade;
Cik = μik + hik·gkk′, if k and k′ belong to the same steel sequence but k has a higher steel grade than k′;
Cik = +∞ (such a match is not allowed), if k and k′ do not belong to the same steel sequence.

Here gkk′ denotes the unit cost of using high-quality products as normal products, and hik denotes the matching weight of order i and the inventory product with steel grade k.
2) Cost of workshop production of the orders. In order to obtain production economy, the utilization ratio of the equipment should not be too low. So when the production quantity of process j in period t is less than the minimal production lot, a penalty cost should be considered, which is denoted by the unit penalty coefficient vj.
3) Penalty on delivery time. This paper designs a penalty function to encourage delivery as early as possible within the time window. If the order delivery time is outside the time window but still within the acceptable scope of delivery time, the delivery is tardy and the unit penalty coefficient is βi. If the order delivery time is ahead of the time window, the delivery is early and the unit penalty coefficient is αi.
4) Penalty of order cancellation. If the plant still cannot deliver the products within the acceptable tardy delivery time, order cancellation penalties should also be taken into consideration.
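A minimal Python sketch of the matching-cost rule in factor 1) above follows; float('inf') stands in for the forbidden different-sequence case, and the function and argument names are illustrative rather than the paper's notation.

```python
def matching_cost(mu_ik, h_ik, g, same_sequence, grade_diff):
    """grade_diff is 0 if the inventory grade equals the required grade,
    and positive if the inventory grade is higher than required."""
    if not same_sequence:
        return float('inf')      # different steel sequence: match not allowed
    if grade_diff == 0:
        return mu_ik             # same sequence, same grade
    return mu_ik + h_ik * g      # higher grade used for a lower-grade order
```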
2.2 Mathematical model
The mathematical model is formulated as follows:

$$\min f_1 = \sum_{i=1}^{N} \theta_i \omega_i \min\Big(\max\Big(\sum_{t=1}^{T} t\,X_{iJt} - a_i,\ 0\Big),\ b_i - a_i\Big) \qquad (1)$$

$$\min f_2 = \sum_{i=1}^{N} \Big[\alpha_i \omega_i \max\Big(a_i - \sum_{t=1}^{T} t\,X_{iJt},\ 0\Big) + \beta_i \omega_i \max\Big(\sum_{t=1}^{T} t\,X_{iJt} - b_i,\ 0\Big)\Big] \qquad (2)$$

$$\min f_3 = \sum_{i=1}^{N} \omega_i p_i \Big[1 - \sum_{t=1}^{T} X_{iJt} - \sum_{k=1}^{K} Y_{ik}\Big] \qquad (3)$$

$$\min f_4 = \sum_{i=1}^{N} \sum_{k=1}^{K} C_{ik} Y_{ik} \qquad (4)$$

$$\min f_5 = \sum_{t=1}^{T} \sum_{j=1}^{J} v_j \max\Big(\lambda E_{jt} - \sum_{i=1}^{N} X_{ijt}\,\omega_i,\ 0\Big) \qquad (5)$$

The combined objective function is constructed as below (η1, ..., η5 are the weight coefficients):

$$\min f_6 = \min(\eta_1 f_1 + \eta_2 f_2 + \eta_3 f_3 + \eta_4 f_4 + \eta_5 f_5) \qquad (6)$$

s.t.

$$\sum_{i=1}^{N} Y_{ik} h_{ik} \le Q_k, \quad k = 1, 2, \ldots, K \qquad (7)$$

$$\sum_{t=1}^{T} X_{iJt} + \sum_{k=1}^{K} Y_{ik} \le 1, \quad i = 1, 2, \ldots, N \qquad (8)$$

$$\sum_{i=1}^{N} X_{ijt}\,\omega_i \le E_{jt}, \quad j = 1, 2, \ldots, J,\ t = 1, \ldots, T \qquad (9)$$

$$\sum_{t=1}^{T} t\,X_{ijt} \le \sum_{t=1}^{T} t\,X_{i(j+1)t}, \quad j = 1, \ldots, J-1,\ i = 1, 2, \ldots, N \qquad (10)$$

$$\sum_{j=1}^{J} X_{ijt} \le 2, \quad i = 1, 2, \ldots, N,\ t = 1, \ldots, T \qquad (11)$$

$$I_{j0} + \sum_{i=1}^{N} \Big(\sum_{q=1}^{t} \omega_i X_{ijq} - \sum_{q=1}^{t} \omega_i X_{i(j+1)q}\Big) \le I_{j\max}, \quad j = 1, 2, \ldots, J-1,\ t = 1, \ldots, T \qquad (12)$$
Here, objective function (1) denotes the penalty as the delivery time gets closer to bi within the time window [ai, bi]; J denotes the last process of the order. Objective function (2) denotes the earliness/tardiness penalty. Objective function (3) denotes the order cancellation penalty. Objective function (4) denotes the inventory matching cost. Objective function (5) denotes the penalty for insufficient utilization of production capacity. Constraint (7) is the quantity constraint of inventory: hik denotes the weight of inventory product k that is matched by order i; considering that inventory matching may cause material loss, hik is usually a little larger than the order weight ωi. Constraint (8) ensures that the same order cannot choose both inventory matching and workshop production at the same time; in addition, every order can match at most one kind of inventory product. Constraint (9) is the production capacity limitation of each process, that is, in a unit period the production quantity of one process cannot be larger than its production capacity; besides, in order to keep a certain utilization ratio of the equipment, the production quantity should be larger than a certain proportion of the production capacity, which is encouraged through the penalty in objective (5). Constraint (10) ensures that if order i is processed in one process, it must pass through the following processes, and the production period of the next process must not be earlier than that of the former process. Constraint (11) ensures that in the same period, every order can finish at most two processes. Constraint (12) ensures that in every period and every process, the inventory of goods in process does not exceed the inventory capacity of that process.

3. PSO algorithm for the order planning


The solution of this model is an NP-hard problem, and it is difficult to obtain the optimal solution by an exact algorithm in a reasonable time[7]. This paper adopts the PSO algorithm to solve the order planning model.
3.1 Basic principles of the PSO algorithm
The PSO algorithm was first introduced by James Kennedy and Russell Eberhart in 1995[8]. Its basic principles are based on research into the food-searching activities of bird flocks. Every particle is a vector of n dimensions, which represents a certain solution in an n-dimensional solution space. For example, Xr = (xr1, xr2, ..., xrn) represents the solution of particle r, Vr = (vr1, vr2, ..., vrn) represents the flying speed of particle r, Pr = (pr1, pr2, ..., prn) represents the best position (best solution) that particle r has ever experienced, Pg represents the best position that all particles have ever experienced, and g is the index of the particle at the best position. The evolution formulas of the PSO algorithm can be described as follows[9]:
$$V_r^{s+1} = w(s) V_r^{s} + c_1 r_1^{s} (P_r - X_r^{s}) + c_2 r_2^{s} (P_g - X_r^{s}) \qquad (13)$$

$$X_r^{s+1} = X_r^{s} + V_r^{s+1} \qquad (14)$$

where r = 1, 2, ..., R and R is the size of the particle swarm; s is the iteration counter, and w(s) is the inertia coefficient at iteration s; c1 and c2 are the acceleration constants, which adjust the step lengths of the particle flying toward its own best position and toward the best position of the whole swarm, respectively, and their values are usually in the range [0, 2]; r1s and r2s are random numbers distributed uniformly in [0, 1]. Some studies show that a bigger inertia coefficient is good for global search and a smaller inertia coefficient is good for local search[10]. So this paper sets a relatively big initial inertia coefficient and then attenuates it linearly to improve the local search ability. The linear attenuation formula is as follows:

$$w(s) = 0.9 - \frac{s}{MaxStep} \times 0.5 \qquad (15)$$

In order to ensure global search ability, this paper adopts a method in which several particle swarms fly separately in parallel and exchange outstanding individuals periodically.
3.2 Encoding for the PSO algorithm
The segmented natural number coding method adopted in this paper is shown below: Lr = [cr1, cr2, ..., crN; pr11, pr12, ..., pr1J, ..., prN1, ..., prNJ], where the vector Lr represents the position of particle r in every dimension (a solution of particle r). [cr1, cr2, ..., crN] is the first half of the coding, which describes the inventory matching: cri (0 ≤ r ≤ R, 1 ≤ i ≤ N) means that order i matches with inventory cri. prij (1 ≤ j ≤ J) denotes the production period of order i in process j; if prij = 0, order i will not be produced in process j. [pr11, pr12, ..., pr1J, ..., prN1, ..., prNJ] is the last half of the coding, in which the subpart [pri1, pri2, ..., priJ] denotes the production periods of order i in processes 1, ..., J. If the values of [pri1, pri2, ..., priJ] are all zero, order i is not going to be produced; if the values are all bigger than zero, the order is going to be produced. cri and prij must not both be bigger than zero at the same time, that is, one order cannot be both matched with inventory and produced in the workshops.
3.3 Creation of initial feasible solutions
One order cannot be both matched with inventory and produced in the workshop, so in order to satisfy constraint (8), the last half of the coding is created first. Pick an integer randomly in the range [-T, T] as the value of pr11. If the value falls into [-T, 0], set pr11 = 0 and pr12 = 0, ..., pr1J = 0, which means the order will not be produced. If pr11 > 0, pick an integer randomly in the range [pr11, T] as the value of pr12, and create the remaining values in turn by analogy to obtain pr1j, the production period of the first order in every process. In the same pattern, the production periods of the other orders in every process are obtained, and the last half of the coding is created. Then the first half of the coding is created: if order i is not produced ([pri1, pri2, ..., priJ] are all zero), pick an integer randomly in the range [1, K] as the value of cri, which means order i matches with product cri; if order i will be produced, set cri = 0. In this manner a particle is created. R particles are created in the same way to form a swarm, and L swarms in all are generated. Particles created following the steps above satisfy constraints (8), (10) and (11); the heuristic repair of infeasible solutions then ensures constraints (7), (9) and (12), and L×R initial feasible solutions are obtained.
3.4 Procedure of the algorithm
From the design above, the procedure of the PSO algorithm is as follows:
1) Initialize all the parameters: inertia coefficient w(0) = 0.9, cognitive parameters c1 = c2 = 2.0, maximal iterations MaxStep = 500, particle quantity R = 100, swarm number L = 10, and so on.
2) Every swarm creates R feasible particles randomly, and there are L swarms altogether. Compute the value of objective function (6). Initialize the current solution of each particle as its best solution so far, initialize the best individual of each swarm as the best solution of that swarm, and initialize the best solution over all swarms as the global best solution.
3) For s = 1 to MaxStep
  For l = 1 to L
    For r = 1 to R
      Use formulas (13) and (14) to move the particle;
      Repair infeasible solutions by the infeasible-solution repairing strategy;
    End For
    On the basis of the objective value computed by function (6), update Pg (the best solution of swarm l) and Pr (the historical best solution of particle r in swarm l);
  End For
  Update Pg (the global best solution);
  If s mod 10 = 0, the swarms integrate two by two randomly, pick out 30% of their particles and exchange the Pr values with each other;
  Update the inertia coefficient according to formula (15);
End For
4) End. Output the experimental results, such as Pg.
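The single-swarm core of this procedure, with the inertia decay of formula (15), can be sketched in Python as follows. The objective and repair callbacks stand in for the order planning objective (6) and the heuristic repair strategy of Section 3.3, and the continuous encoding shown here is a simplification of the paper's segmented natural number coding; all names are illustrative.

```python
import numpy as np

def pso(objective, repair, dim, n_particles=100, max_step=500, c1=2.0, c2=2.0,
        rng=np.random.default_rng(0)):
    X = rng.uniform(-1.0, 1.0, (n_particles, dim))    # positions
    V = np.zeros((n_particles, dim))                   # velocities
    P = X.copy()                                       # personal best positions
    Pf = np.array([objective(x) for x in X])           # personal best values
    g = Pf.argmin()                                    # index of global best
    for s in range(max_step):
        w = 0.9 - s / max_step * 0.5                   # formula (15)
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        V = w * V + c1 * r1 * (P - X) + c2 * r2 * (P[g] - X)   # formula (13)
        X = X + V                                              # formula (14)
        X = np.array([repair(x) for x in X])           # keep solutions feasible
        f = np.array([objective(x) for x in X])
        better = f < Pf
        P[better], Pf[better] = X[better], f[better]
        g = Pf.argmin()
    return P[g], Pf[g]
```

For the order planning model, objective would decode a particle into the (Y, X) matrices and evaluate the weighted sum in (6), while repair would apply the heuristic rules of Section 3.3; the multi-swarm exchange of step 3) would wrap several such loops.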

4. Computational experiments
This paper uses VC++ for the programs, run on a Pentium M. The basic data are: planning horizon T = 8 (taking five days as a period unit); J = 3 (steel-making and casting, hot rolling and cold rolling); particle number R = 100; swarm number L = 10; maximal iterations MaxStep = 500. The penalty coefficients in the objective function are set as θi = 0.02, αi = 0.1, βi = 1.0 and pi = 20.0; the unit penalty of utilization ratio is vj = 9; the lowest utilization ratio is λ = 0.7; the matching penalty coefficient is μik = 0.2ωi; and the penalty coefficient of high-quality products matched with orders of low steel grade demand is gkk′ = 0.4. The initial inertia coefficient is w(0) = 0.9 and the cognitive parameters are c1 = c2 = 2.0. Each data group is tested ten times; the best target value BF, average target value AF and worst target value WF are recorded, together with the order planning results corresponding to BF, the computing time and the average convergence iterations (the target value is computed by objective function (6)). Tab.1 shows the computational results of the instances and the order arrangement corresponding to the BF of each instance.
Tab.1 Computational results for the instances
N = 30: BF = 173706, AF = 179283, WF = 189844; time = 49 s; 13 orders matched from inventory, 17 orders for production; 3 earliness orders, 2 tardiness orders, 0 canceled orders
N = 40: BF = 171463, AF = 182915, WF = 190468; time = 59 s; 17 orders matched from inventory, 23 orders for production; 4 earliness orders, 2 tardiness orders, 0 canceled orders
N = 80: BF = 369843, AF = 397696, WF = 435268; time = 78 s; 29 orders matched from inventory, 49 orders for production; 7 earliness orders, 5 tardiness orders, 2 canceled orders
Tab.1 shows that, while satisfying strict constraints such as inventory quantity, intermediate inventory capacity and production capacity, the model in this paper arranges inventory matching and production planning reasonably and ensures that most of the orders can be delivered on time. Fig.1 shows the convergence curve of one run of the algorithm for N = 40, K = 6, Q = 10000: the target value converges quickly within 10 generations and reaches a satisfactory value within 100 generations. This paper also compares the results of first-matching-then-producing (FMTP) with those of joint optimization (taking N = 40 as an example; the program runs 5 times and the best result is taken). The FMTP rule matches the orders first, then makes a production plan for the unmatched orders, and cancels the orders that are not included in any production plan. Tab.2 shows the results.


Fig. 1 Convergence curve of the algorithm for N = 40 (target value versus iterations)


Tab.2 Comparison of the results of first-matching-then-producing (FMTP) and joint optimization
FMTP: matching cost 747, 14 orders matched; production cost 211028, 26 orders produced; cancellation cost 0, 0 orders canceled; total cost 211775
Optimize jointly: matching cost 1551, 17 orders matched; production cost 175688, 23 orders produced; cancellation cost 0, 0 orders canceled; total cost 177239

Tab.2 indicates that the target value of the model that optimizes inventory matching and production planning jointly is about 16% better than that of FMTP.

5. Conclusion
On the basis of the MTO-MTS production model, in order to improve the integrated production management level of iron-steel enterprises and reduce inventory and production cost, this paper brings inventory matching into the process of order planning and constructs a joint optimization model in which inventory matching and workshop production are considered together. According to the character of order delivery, a penalty function is created that encourages the delivery time to be as early as possible within the time window. On the basis of the specifics of the model, this paper adopts the PSO algorithm as the solution strategy and designs heuristic rules to repair infeasible solutions. The computational results show that the model is feasible and the algorithm is valid. Experiments also show that when the number of processes increases, the position update of the PSO algorithm creates many infeasible solutions and the solving capability degrades seriously; and because of the large number of model constraints, keeping the solutions feasible from beginning to end limits the search space. Therefore, designing more efficient algorithms and solution strategies for the order planning model is one of the research directions for the future.
References
[1] Zhang Tao, Wang Meng-guang, Tang Li-xin. The model and algorithm for the order planning of the steel plant. Control Theory and Applications, 2000, 17(5): 711-715 (in Chinese)
[2] Shixin Liu, Jiafu Tang, Jianhai Song. Order-planning model and algorithm for manufacturing steel sheets. International Journal of Production Economics, 2006, 100: 30-43
[3] Kalagnanam J, Dawande M, Trumbo M, et al. Surplus inventory matching problems in the process industry. Operations Research, 2000, 48(4): 505-516
[4] Ivo J.B.F. Adan, Jan van der Wal. Combining make to order and make to stock. OR Spektrum, 1998, 20: 73-81
[5] Hu Kunyuan, Chen Wenming, Wang Dingwei, Zheng Binglin. Research for joint optimization of inventory matching and production planning considering mass factor. Systems Engineering - Theory Methodology Applications, 2004, 13(3): 200-204 (in Chinese)
[6] Hu Kunyuan, Chang Chunguang, Zheng Binglin, Wang Dingwei. The model and algorithm for joint optimization of inventory matching and production planning in steel plant. Information and Control, 2004, 33(2): 177-180 (in Chinese)
[7] Chang S C. Scheduling flexible flow shops with no setup effects. IEEE Trans on Robotics and Automation, 1994, 10(2): 112-122
[8] Kennedy J, Eberhart R C. Particle swarm optimization. In: Proc. IEEE Int'l Conf. on Neural Networks, IV. Piscataway, NJ: IEEE Service Center, 1995: 1942-1948
[9] Zeng Jianchao, Jie Jing, Cui Zhihua. Particle Swarm Optimization Algorithm. Beijing: Science Publishing Company, 2004 (in Chinese)
[10] Ioan Cristian Trelea. The particle swarm optimization algorithm: convergence analysis and parameter selection. Information Processing Letters, 2003, 85: 317-325

Positioning Model of Purchasing Based on Kraljics Purchasing Portfolio Matrix and Factor Analysis
Zhao Zhenfeng, Guo Danxia, Ding Liuming
College of Management, Huazhong University of Science and Technology, P.R.China, 430074

Abstract Kraljic's purchasing portfolio matrix is the most important analytical tool in sourcing and supply management. Considering the abstractness of the matrix's two dimensions and the difficulty of quantifying them, indicator systems for the matrix are established and quantified in this study. The purchased materials are classified along each dimension of the Kraljic matrix by using factor analysis, and the final location of the materials in Kraljic's purchasing portfolio matrix is confirmed from the two classification results. Finally, a case is presented to demonstrate the effectiveness of the model. Key words Positioning model of purchasing, Kraljic's purchasing portfolio matrix, Factor analysis

1 Introduction
Currently, purchasing cost constitutes the majority of the total cost of goods sold (Gadde and Hakansson, 2001)[1]. The purchasing of materials has also become increasingly important because materials account for a large part of the value creation related to the buying firm's products and services. This change has been accompanied by increasing attention to purchasing as a field of strategic interest, from both managers and researchers. Reducing purchasing cost is one of the most efficient strategies to enhance profit; therefore, purchasing management is becoming an essential strategic issue. Given the variety of materials procured, especially in some large enterprises, the number of kinds of materials can reach ten thousand or even more. Consequently, the need for differentiated approaches to purchasing tactics is increasing, which confronts firms with new challenges. The positioning model of purchasing aims at developing distinctive purchasing and supply strategies and improving the efficiency of purchasing management. Classification based on ABC-analysis has been widely used in inventory management, and also in provisioning management. The traditional ABC-analysis method simply classifies the materials into three groups according to a few factors, which is not adequate for the purchasing management of enterprises in practice. For instance, suppose material A has high supply risk but low profit impact, and material B the opposite. Using traditional ABC-analysis and considering only the factor of consumption amount, materials A and B will probably be positioned in the same category and thereby handled with the same purchasing strategy. Obviously, this does not meet the demands of purchasing management, and the classification of A and B needs to be subdivided further. Kraljic (1983)[2] introduced the first comprehensive portfolio approach for purchasing and supply management. Its general idea is to minimize supply risk and make the most of buying power. Kraljic's approach includes the construction of a portfolio matrix that classifies products on the basis of two dimensions: profit impact and supply risk (low and high). Profit impact means the potential contribution to the total profit of the enterprise; supply risk concerns whether the purchased materials can be adequately supplied. The result is a 2x2 matrix and a classification into four categories: strategic, bottleneck, leverage and non-critical items, see Fig.1. Each of the four categories requires a distinctive approach towards suppliers. The Kraljic portfolio approach is generally considered an important breakthrough in the development of theory in the field of purchasing and supply management (Syson, 1992)[3]. Syson concluded that Kraljic's approach represents the most important single diagnostic and prescriptive tool available to purchasing and supply management. Lamming and Harrison (2001)[4] confirmed that Kraljic's matrix remains the foundation of purchasing strategy for many organizations across different sectors. In contrast with the growing acceptance and use of purchasing portfolio models, there are puzzles and unanswered questions: the dimensions of the matrix are too abstract, lack specific and measurable indicators, and are not given detailed methods of quantification.
Foundation item: The National Natural Science Foundation of China (70332001). Biographies: Zhao Zhenfeng (1973-), male, doctor, zzf2003@163.com, tel: 027-62412838


The supplier's side of the buyer-seller relationship is a disregarded element in Kraljic's model; the Kraljic approach does not explicitly take into account the possible strategies and reactions of suppliers (Kamann, 2000)[5]. Some scholars have developed the Kraljic matrix further. Cui et al (2006)[6] provided a positioning model of purchasing based on the threshold of the supplier, and quantified the two dimensions of the Kraljic matrix. They evaluated supply risk by setting valid criteria to determine the number of available vendors, and regarded the risk as inversely related to the number of available suppliers; this appraisal is too simple to reflect reality. Gelderman and Weele (2002)[7] analyzed the application of the Kraljic matrix in enterprises from both practical and theoretical perspectives, and revealed three distinctive measures to determine supply risk: the consensus method, the one-by-one method and the weighted factor score method. However, the consensus method is based on a process of reasoning and discussion, which is subjective; with the one-by-one method it is hard to find an appropriate single indicator, and some threats might be neglected; and the weighted factor score method requires too much data. Dubois and Pedersen (2002)[8] established a three-echelon indicator system and used AHP to quantify supply risk. Considering the advantages of Kraljic's purchasing portfolio matrix, the indicators of the matrix are refined and quantified in this paper. The purchased materials are classified along each dimension of Kraljic's purchasing portfolio matrix by using factor analysis, and the final location of a material in the matrix is confirmed from the two classification results. Finally, a case is presented to demonstrate the effectiveness of the model. The results indicate that classifying purchased materials by using the Kraljic matrix is more accurate and pertinent than the method based on traditional ABC-analysis.

[Fig.1 shows a 2x2 matrix: supply risk (low to high) on the vertical axis and profit impact (low to high) on the horizontal axis; the quadrants are non-critical (low risk, low impact), leverage (low risk, high impact), bottleneck (high risk, low impact) and strategic (high risk, high impact).]

Fig.1. Dimensions and categories in the Kraljic matrix

2 Indicators of positioning model based on Kraljic matrix


Based on the Kraljic matrix, this paper establishes a three-echelon pyramid structure of indicators from the two dimensions of supply risk and profit impact respectively. The supply risk indicators are refined from three aspects: the attributes of the materials, the suppliers' impact, and the effect of the external situation (see Tab.1); the profit impact indicators are given in Tab.2. The indicators involved reflect the actual situation and the relevant aspects of the purchased materials. Organizations should determine their own criteria and their own specific threshold values: the general indicator recommendations provided by Tab.1 and Tab.2 can be elaborated and tailored in view of company-specific circumstances and conditions. Non-quantitative indicators can be quantified using, for example, the Delphi method.


Tab.1 Indicators of supply risk

First level: indicators of supply risk.
Second level: attributes of the materials; the suppliers' impact; the impact of the external situation.
Third level (indicators numbered A1-A24): degree of substitutability (number and function of substitutes); purchasing batch; market attractiveness; depreciation; annual procurement rates; demand stability; number of suppliers; lifecycle of the material; lifecycle of the terminal product; inventory cost; importance (impact on the final product, out-of-stock loss); validity of the supplier; reliability of the supplier (punctuality rate of delivery, product qualification rate, response time of service, lead time); business relationship (cooperation time, valid supplier amount); infrastructure (reliability of the logistics system, transport conditions, logistics information system); product competition (sales volume of the product, advantages of technology, the number of competitors).
Tab.2 Indicators of profit impact

First level: indicators of profit impact.
Second level: internal impact (purchasing amount, purchasing quantity); external impact (terminal product competition, relationship with suppliers).
Third level: percentage of the terminal product; percentage of total cost; purchasing batch; annual consumption rates; technical support; price discount; cooperation time; level of monopoly; competitor number; technical superiority.

3 Positioning model based on Kraljic matrix and factor analysis


3.1 Mathematical model of factor analysis
(1) Data preprocessing
For a given set of $n$ purchased materials and $p$ indicators, we obtain an $n \times p$ raw data matrix $Y = (y_{ij})_{n \times p}$. In order to eliminate the effect of the indicators' different dimensions and ensure comparability, the sample data need to be preprocessed. The standardized sample data matrix $X = (x_{ij})_{n \times p} = (x_1, x_2, \dots, x_p)$ is obtained by the transformation:

$$x_{ij} = \frac{y_{ij} - \bar{y}_j}{s_j}, \quad i = 1, 2, \dots, n \qquad (1)$$

where

$$\bar{y}_j = \frac{1}{n}\sum_{i=1}^{n} y_{ij}, \qquad s_j^2 = \frac{1}{n-1}\sum_{i=1}^{n} (y_{ij} - \bar{y}_j)^2$$

(2) Factor analysis
Factor analysis is a statistical approach that can be used to analyze interrelationships among a large number of variables and to explain these variables in terms of their common underlying dimensions (factors). It involves condensing the information contained in a number of original variables into a smaller set of dimensions with a minimum loss of information. Let $X$ be the random vector of a sample with $p$ variables $X_1, X_2, \dots, X_p$. The factor model postulates that $X$ is linearly dependent on a few unobservable random variables $F_1, F_2, \dots, F_m$ called common factors, and $p$ additional sources of variation $e_1, e_2, \dots, e_p$ called errors. The factor analysis model is:

$$\begin{cases} X_1 = a_{11}f_1 + a_{12}f_2 + \cdots + a_{1m}f_m + e_1 \\ X_2 = a_{21}f_1 + a_{22}f_2 + \cdots + a_{2m}f_m + e_2 \\ \quad \vdots \\ X_p = a_{p1}f_1 + a_{p2}f_2 + \cdots + a_{pm}f_m + e_p \end{cases} \qquad (2)$$

or, in matrix notation,

$$X = AF + E \qquad (3)$$

where $X = (x_1, x_2, \dots, x_p)^T$, $F = (f_1, f_2, \dots, f_m)^T$ and $E = (e_1, e_2, \dots, e_p)^T$. The random vector $E$ is assumed to satisfy $E[e] = 0$, and its effect is neglected here. Therefore we have

$$X = AF \qquad (4)$$

(3) Confirming the weight of each factor
The important factors are selected according to the criterion proposed by Kaiser: the eigenvalue must be larger than one, with a cumulative variance contribution greater than 85%. The factor loading matrix is then rotated using the varimax orthogonal rotation method. We thus capture the important factors of the sample data, $f_1, f_2, \dots, f_m$, and their corresponding variance contribution percentages $b_1, b_2, \dots, b_m$. The weight $w_j$ of each factor is defined as:

$$w_j = \frac{b_j}{\sum_{i=1}^{m} b_i}, \quad j = 1, 2, \dots, m \qquad (5)$$

(4) Weighted factor score
Given the principal-component factor scores $z_{ij}$ of material $i$ on factor $j$, the weighted factor scores $S_1, S_2, \dots, S_n$ are calculated from the weight vector and the factor scores as:

$$S_i = \sum_{j=1}^{m} w_j z_{ij}, \quad i = 1, 2, \dots, n \qquad (6)$$
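To make the computation concrete, the following minimal sketch (Python with numpy; principal-component extraction is used here as a stand-in for the paper's factor analysis, and the varimax rotation step is omitted, so it is an approximation rather than the authors' exact procedure) standardizes the raw data per eq. (1), retains factors by the Kaiser criterion, and returns the weighted factor score of eq. (6) for each material:

import numpy as np

def weighted_factor_scores(Y):
    """Standardize Y (n materials x p indicators), keep components with
    eigenvalue > 1 (Kaiser criterion), and return the weighted factor
    score S_i of eq. (6) for each material."""
    n, p = Y.shape
    X = (Y - Y.mean(axis=0)) / Y.std(axis=0, ddof=1)   # eq. (1)
    eigvals, eigvecs = np.linalg.eigh(np.corrcoef(X, rowvar=False))
    order = np.argsort(eigvals)[::-1]                  # descending eigenvalues
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    m = max(1, int(np.sum(eigvals > 1.0)))             # Kaiser criterion
    b = eigvals[:m] / p                                # variance contributions
    w = b / b.sum()                                    # eq. (5): factor weights
    Z = X @ eigvecs[:, :m]                             # component scores z_ij
    return Z @ w                                       # eq. (6)

# Example: score ten materials on the 24 supply-risk indicators of Tab.3
# scores = weighted_factor_scores(data)   # data: a 10 x 24 numpy array
# order = np.argsort(-scores) + 1         # materials from highest to lowest risk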

3.2 Positioning model of purchasing based on Kraljic matrix

According to the Pareto principle, with the total equated to 100 percent, the vital few items account for a substantial share (80 percent) of the cumulative occurrences, while the useful many occupy only the remaining 20 percent. On the basis of the sequencing results obtained by applying factor analysis along each dimension of Kraljic's purchasing portfolio matrix, we choose the 20-percent point as the division point of the Kraljic matrix; the final location of a material in the matrix is then confirmed by combining the two categorized results. For example, bottleneck items are those ranked in the bottom 80% on profit impact and in the top 20% on supply risk (see Fig.2).

[Fig.2 shows the Kraljic matrix of Fig.1 with the 20% division points marked on the supply-risk and profit-impact axes.]

Fig.2 Positioning model of purchasing based on Kraljic matrix
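A minimal sketch of this division rule (Python with numpy; the rank-based cutoff handling is illustrative, and the case study in Section 4 appears to apply its cutoffs slightly differently) maps the two ranked score lists to the four quadrants:

import numpy as np

def kraljic_positions(risk_scores, profit_scores, top_share=0.2):
    """Label each material: the top 20% on a dimension counts as 'high'."""
    n = len(risk_scores)
    cutoff = max(1, round(top_share * n))        # number of 'high' items
    # rank 1 = highest score on the dimension
    risk_rank = np.argsort(np.argsort(-np.asarray(risk_scores))) + 1
    profit_rank = np.argsort(np.argsort(-np.asarray(profit_scores))) + 1
    labels = []
    for rr, pr in zip(risk_rank, profit_rank):
        if rr <= cutoff and pr <= cutoff:
            labels.append("strategic")
        elif rr <= cutoff:
            labels.append("bottleneck")
        elif pr <= cutoff:
            labels.append("leverage")
        else:
            labels.append("non-critical")
    return labels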
3.3 Positioning results and relevant strategy analysis based on the Kraljic matrix

Each of the four categories requires a distinctive approach towards suppliers.
(1) Strategic items represent considerable value to the organization in terms of a large impact on profit and a high supply risk. Often strategic products can only be purchased from a few suppliers, or even a single source, causing significant supply risk. The general recommendation for supplier management in this quadrant is to develop solid win-win strategic relationships with suppliers and long-term contracts with key suppliers, which should contribute to the competitive advantage of the firm. Even in the case of a strategic partnership, the enterprise should try to restrict or reduce its dependence on the supplier involved. By making the product less complex, alternative solutions come within reach; if necessary, new suppliers are developed.
(2) Bottleneck items are by definition of low value and high risk, and they cause many problems and risks. Volume insurance, vendor control, security of inventories and backup plans are recommended here. The most common alternatives refer either to the product or to the supplier (searching for, managing and developing suppliers, or cross-sourcing). The possibilities within the bottleneck quadrant are: keeping stocks, hedging, Internet buying, broadening the specifications, searching for alternative sources, risk analysis in combination with contingency planning, and standardization and pooling of purchasing requirements.
(3) In general, leverage items can be obtained from various suppliers. These products represent a relatively large share of the end product's cost price in combination with a relatively low supply risk. The buyer has many possibilities and incentives for negotiation, since small percentages of cost savings usually involve large sums of money (Olsen and Ellram, 1997)[9], while the supply risk is minimal. These characteristics justify an aggressive approach to the supply market (e.g. Van Weele, 2000)[10]. Therefore, leverage items allow the buying company to exploit its full purchasing power, for instance through tendering, target pricing and product substitution. The generally preferred leverage position supports rather aggressive supplier management: competitive bidding and short-term contracts are feasible options, and since suppliers and products are interchangeable, there is no need for long-term supply contracts.
(4) Non-critical items usually have a small value per unit. They require efficient processing, product standardization, order volume and inventory optimization, and a reduction of logistic and administrative complexity. Preferably, non-critical items are pooled in large quantities, increasing the buying power of the firm; if necessary, a process of standardization, bundling of purchasing requirements, and efficient processing is pursued. The pooling strategy is executed through a framework agreement with a preferred supplier, systems contracting (Elliott-Shircore and Steele, 1985)[11], a Vendor Managed Inventory system, or an e-procurement solution.


4 Application of model
4.1 Case introduction
A total of ten sample materials were selected at random from a manufacturing enterprise in Wuhan. The case study uses factor analysis to evaluate supply risk; for the profit impact analysis, only the final evaluation results are given. The sample data are shown in Tab.3. The intermediate calculations are omitted and the final positions are given in Tab.4.
Tab.3 Sample data

No.  A1  A2  A3    A4        A5  A6  A7  A8  A9        A10  A11    A12
1    9   1   270   406272    5   1   20  18  13995.45  5    870    1
2    3   3   1150  199017.3  5   3   18  12  37985.12  3    2500   0.77
3    1   1   20    170765    1   2   4   6   3300      1    1200   0.94
4    0   5   2900  101444.4  5   5   48  18  28654     3    15600  0.82
5    1   1   80    97692     3   2   12  24  11523.69  5    820    1
6    0   5   100   714000    5   12  60  12  21420     1    21000  0.74
7    0   1   60    67035     1   3   6   8   18523     5    25300  0.96
8    1   1   400   60546     5   1   12  36  18163.8   3    3600   0.76
9    1   5   340   56760     3   3   15  18  10590.33  1    180    0.88
10   2   1   260   40000     3   2   18  12  11456.98  5    550    0.91

No.  A13   A14  A15  A16  A17  A18  A19  A20  A21  A22     A23  A24
1    0.99  0.1  2    5    30   2    5    3    5    27000   5    18
2    0.87  2    5    5    17   15   3    5    5    650000  1    5
3    0.89  1    40   1    12   12   1    3    3    5600    1    3
4    0.96  7    15   3    1    20   5    3    3    556650  3    2
5    0.92  0.5  3    3    25   3    5    5    5    7800    3    1
6    0.85  7    2    1    4    56   1    3    3    650000  5    26
7    0.97  1    30   5    36   1    3    3    5    17600   5    4
8    0.84  2    10   1    7    3    5    5    3    890000  3    11
9    0.82  4    60   3    16   28   1    3    3    56720   1    2
10   0.99  2    10   5    60   8    3    5    5    21463   3    5

(Notation: A1-A24 is consistent with Tab.1)


Tab.4 The results of the positioning model for the purchasing sample

No.  Supply risk  Profit potential  Positioning   Traditional ABC
     sequencing   sequencing        results       positioning results
1    2            5                 bottleneck    A
2    3            4                 bottleneck    B
3    10           3                 leverage      B
4    1            2                 strategic     B
5    7            7                 non-critical  C
6    4            1                 leverage      A
7    6            9                 non-critical  C
8    8            6                 non-critical  C
9    9            8                 non-critical  C
10   5            10                non-critical  C

4.2 Results analysis
From Tab.4 we can clearly see the difference between the positioning method based on the Kraljic matrix and factor analysis and the method based on traditional ABC analysis. For example, material 4 is classified into category B according to its consumption rate, suggesting that it does not need much attention. But its substitutability is low, and if a supply accident occurs it will be hard to find a substitute in time; furthermore, the life cycle of its terminal product is short, so its purchasing risk is high. By means of factor analysis, material 4 is a high-supply-risk, high-profit-impact material, i.e., a strategic item, and we should attach more importance to it. Materials 1 and 6 are both classified as category A by ABC-analysis, yet factor analysis shows they are obviously different: material 1 has high supply risk and ordinary profit impact, whereas material 6 has ordinary supply risk and high profit impact. The purchasing strategies of materials 1 and 6 should therefore differ.

5 Conclusions
This paper presents the conception of a positioning model of purchasing based on the Kraljic matrix and factor analysis, and then illustrates its application. The positioning results in Tab.4 reveal the main differences between the two positioning methods. Compared with traditional ABC analysis, the positioning model based on the Kraljic matrix and factor analysis considers almost all relevant factors that affect procurement, and it is more accurate and efficient. Furthermore, the positioning model resolves the abstractness of the Kraljic matrix and the difficulty of quantifying its dimensions.
References

[1] Gadde, L.E., Hakansson, H. Supply Network Strategies. Wiley, Chichester, 2001: 45-60
[2] Kraljic, P. Purchasing must become supply management. Harvard Business Review, 1983, 61(5): 109-117
[3] Syson, R. Improve Purchase Performance. Pitman Publishing, London, 1992
[4] Lamming, R.C., Harrison, D. Smaller customers and larger suppliers: the potential for strategic purchasing approach: a case study. Proceedings of the 10th International IPSERA Conference, Jonkoping, Sweden, 2001: 595-610
[5] Kamann, D.J.F. Matrices, cubes and triangles in purchasing. Poster presentation at the 9th International IPSERA Conference, London, Canada, 2000: 16
[6] Cui N.F., Kang Y., Lin S.X. Study of purchasing positioning mode based on suppliers' threshold. Chinese Journal of Management Review, 2006, 18(4): 54-58 (in Chinese)
[7] Gelderman, C.J., Van Weele, A.J. Strategic direction through purchasing portfolio management: a case study. Journal of Supply Chain Management, 2002, 38(2): 30-37
[8] Dubois, A., Pedersen, A.C. Why relationships do not fit into purchasing portfolio models - a comparison between the portfolio and industrial network approaches. European Journal of Purchasing & Supply Management, 2002, 8(1): 35-42
[9] Olsen, R.F., Ellram, L.M. A portfolio approach to supplier relationships. Industrial Marketing Management, 1997, 26(2): 101-113
[10] Van Weele, A.J. Purchasing Management: Analysis, Planning and Practice. Chapman & Hall, London, 2000
[11] Elliott-Shircore, T.I., Steele, P.T. Procurement positioning overview. Purchasing and Supply Management, 1985, 12: 23-26


Research on Coordinated Replenishments by Alternative Supply Sources in a Logistics System


Zheng Aihua, Zhao Qiuhong
School of Economics and Management, BeiHang University, P.R.China, 100083

Abstract This paper sets up two models of a two-level logistics system with two distributors and two retailers. In the first model, each distributor orders for one retailer only; in the second model, a distributor can act as an alternative supply source when the other one is out of stock. It is shown by theoretical and computational analyses that the model with coordinated replenishments by alternative supply sources outperforms the other with regard to reducing the reorder points and the total costs of the logistics system.
Key words Alternative supply sources, Coordinated replenishments, Inventory and transportation costs

1. Introduction
Logistics cost contributes a large percentage of the operating cost in many companies, so reducing logistics costs is vital for obtaining a competitive advantage, and inventory cost is a major consideration among them. Research on inventory problems is abundant, and some of it focuses on coordinated replenishment. Kleywegt and Nori (2002, 2004) proposed inventory models under the Vendor-Managed Inventory (VMI) setting, but they only considered direct delivery due to the complexity of the algorithm. Cheung and Lee (2002) examined two information-based supply chains; specifically, they considered a supplier serving multiple retailers located in close proximity. Minkoff (1993) used a Markov decision process (MDP) model to describe an integrated logistics system with stochastic demand, with the objective of optimizing inventory and transportation. In order to reduce inventory costs, Cetinkaya and Lee (2000) set up a VMI model in which the supplier has the option of delaying delivery in anticipation of orders from other retailers. Bertazzi and Paletta (2005) considered a complex logistics system and studied two different types of VMI policy, and Axsater and Zhang (1999) used a new kind of retailer policy to satisfy customer demand; but in these models there is only one facility to supply the retailers. Different from the above research, Ng, Leon and Chakhlevitch (2001) considered coordinated replenishments by alternative supply sources, but they took all the parameters of the two models to be the same, which is less realistic. As current research seldom considers coordinated replenishments, and the work that does rests on rather simple assumptions, deeper research on this aspect is needed and valuable.

2. The Problem
Consider a two-level logistics system with a supplier S, two distributors D1 and D2, and two retailers r1 and r2. The system distributes one kind of product to customers, and the demands faced by the different retailers are Poisson processes with different means. We assume that the supplier S has unlimited stock and that transportation distances between any two facilities in the logistics system are constant, as are the lead times of the distributors and the retailers. Each facility orders in batches and applies an (R,Q)-type ordering policy with an order quantity depending on the facility level: whenever the inventory position at a facility declines to the reorder point R, an order of size Q units is placed with the upstream facility. The costs considered include transportation costs, holding costs at the distributors, and holding and shortage costs at the retailers. Holding costs are charged for each unit in stock per time unit at distributors and retailers; the retailers also incur shortage costs for each backordered unit per time unit. Denote Di as distributor i and ri as retailer i; the other notations used in the paper are as follows:


$\lambda_i$: demand rate at retailer i, i = 1,2
$R_i$: retailer i's reorder point, i = 1,2
$Q_i$: retailer i's batch size, i = 1,2
$R_{0i}$: distributor i's reorder point (in units of retailer i's batches), i = 1,2
$Q_{0i}$: distributor i's batch size (in units of retailer i's batches), i = 1,2
$L_{0i}$: distributor i's lead time, i = 1,2
$L_{ij}$: retailer j's lead time from distributor i, i = 1,2, j = 1,2
$M_{ij}$: transportation cost per order from distributor i to retailer j, i = 1,2, j = 1,2
$M_{0i}$: transportation cost per order placed by distributor i, i = 1,2
$h_{0i}$: distributor i's holding cost per unit per time unit, i = 1,2
$h_i$: retailer i's holding cost per unit per time unit, i = 1,2
$l_i$: retailer i's backorder cost per unit per time unit, i = 1,2
$I^n_{Di}$: inventory on hand at distributor i in model n (in units of retailer i's batches), i = 1,2, n = 1,2
$I^n_{ri}$: inventory on hand at retailer i in model n, i = 1,2, n = 1,2
$S^n_{ri}$: backorders at retailer i in model n, i = 1,2, n = 1,2

3. The Model
Two models are set up in the considered two-level logistics system. In model 1, the distributor supplies one retailer only, in model 2, the distributors can act as alternative supply sources when the other distributor is out of stock. See Fig.1 and Fig. 2 for reference.

Fig. 1: Model 1

Fig. 2: Model 2

Model 1: In model 1, we assume that distributor i only supplies retailer i, i = 1,2. If retailer i places an order and distributor i is out of stock, the retailer's order is backordered at the distributor. In this situation, the system is a combination of two independent serial systems which can be considered separately. The total average cost of the system is given by:

$$TC^1 = \sum_{i=1,2} C^1_{Di} + \sum_{i=1,2} C^1_{ri} \qquad (1)$$

where $C^1_{Di}$ and $C^1_{ri}$ are the average costs per unit time at distributor i and retailer i, calculated respectively as follows:

$$C^1_{Di} = h_{0i} E(I^1_{Di}) + \frac{\lambda_i M_{0i}}{Q_{0i}}, \quad i = 1,2 \qquad (2)$$

$$C^1_{ri} = h_i E(I^1_{ri}) + l_i E(S^1_{ri}) + \frac{\lambda_i M_{ii}}{Q_i}, \quad i = 1,2 \qquad (3)$$

where $E(I^1_{Di})$ is the mean on-hand inventory at distributor i, and $E(I^1_{ri})$ and $E(S^1_{ri})$ denote respectively the means of the inventory and the backorders at retailer i.

The work is supported by National Natural Science Foundation of China (Project Topic: Integrated Research on location of distribution center, inventory and transportation decisions under uncertain environment. Project No. 70301001).

In the following, we illustrate the procedure for calculating $E(I^1_{Di})$, $E(I^1_{ri})$ and $E(S^1_{ri})$.

Denote by $(R_{0i}, Q_{0i})$ and $(R_i, Q_i)$ the ordering policies followed by distributor $D_i$ and retailer $r_i$, $i = 1,2$. For simplicity, let $W_i$ represent the whole of $r_i$ and $D_i$; then $W_i$ is under an $(R, Q_{0i})$ policy, where $R = R_i + Q_i + R_{0i}$ [3]. Denote by $IP(t)$ the echelon inventory position (on-hand plus on-order inventory minus backorders) at the supplier's level at time $t$, and by $IP$ the random variable with a distribution given by the steady-state distribution of $IP(t)$; it is easy to see that $IP$ is uniformly distributed on $[R+1, \dots, R+Q_{0i}]$ [6]. Then $E(IP_{ri})$, the effective echelon inventory position at the retailers' level (inventory at and in transit to the retailers, minus backorders at the retailers), can be calculated as follows [3]:

$$E(IP_{ri}) = \begin{cases} IP - D(L_{0i}) & \text{if } IP - D(L_{0i}) \le R_i \\ IP - D(L_{0i}) - mQ_i & \text{otherwise} \end{cases} \qquad (4)$$

where $IP - D(L_{0i})$ is the steady-state echelon inventory level of $W_i$ (inventory at the supplier and the retailers plus inventory in transit to the retailers minus backorders at the retailers). So, the effective on-hand inventory level of distributor $D_i$ is as follows:

$$E(I^1_{Di}) = \begin{cases} 0 & \text{if } IP - D(L_{0i}) \le R_i \\ IP - D(L_{0i}) - E(IP_{ri}) = mQ_i & \text{otherwise} \end{cases} \qquad (5)$$

where $D(L_{0i})$ is the customers' demand during lead time $L_{0i}$ and $m$ is the largest integer satisfying $IP - D(L_{0i}) - mQ_i > R_i$ in both (4) and (5). Let

$$E(IL_{ri}) = E(IP_{ri}) - D(L_i) \qquad (6)$$

where $D(L_i)$ is the customers' demand during lead time $L_i$; then we can get the means of the inventory and the backorders at retailer i:

$$E(I^1_{ri}) = \max\left(0, E(IL_{ri})\right), \qquad E(S^1_{ri}) = \max\left(0, -E(IL_{ri})\right) \qquad (7)$$
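The expectations in eqs. (4)-(7) can be estimated by Monte-Carlo sampling of the steady-state echelon position. The following sketch (Python with numpy; it assumes Poisson demand and, for simplicity, takes R0i and Q0i in units rather than in retailer batches, which deviates from the paper's convention) evaluates the model-1 cost of one distributor-retailer pair per eqs. (2)-(3):

import numpy as np

rng = np.random.default_rng(0)

def model1_pair_cost(lam, R_i, Q_i, R_0i, Q_0i, L_0i, L_i,
                     h0, h, l, M0, Mii, n=200_000):
    R = R_i + Q_i + R_0i                          # echelon reorder point of W_i
    IP = rng.integers(R + 1, R + Q_0i + 1, n)     # uniform steady-state position
    IL = IP - rng.poisson(lam * L_0i, n)          # minus demand over L_0i
    # eqs. (4)-(5): m = largest integer with IL - m*Q_i > R_i (0 when IL <= R_i)
    m = np.where(IL > R_i, (IL - R_i - 1) // Q_i, 0)
    E_I_D = np.mean(m * Q_i)                      # eq. (5): distributor on-hand
    E_IL_r = np.mean(IL - m * Q_i) - lam * L_i    # eqs. (4) and (6)
    E_I_r = max(0.0, E_IL_r)                      # eq. (7)
    E_S_r = max(0.0, -E_IL_r)
    C_D = h0 * E_I_D + lam * M0 / Q_0i            # eq. (2)
    C_r = h * E_I_r + l * E_S_r + lam * Mii / Q_i # eq. (3)
    return C_D + C_r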

Model 2: In model 2, the retailers can be coordinately replenished by alternative distributors. That is, when retailer i places an order and distributor i is out of stock, the order is transferred to distributor j if distributor j's on-hand stock is available; otherwise the order is backordered at the distributor. However, an alternative order incurs a higher transportation cost. In this situation, the total average cost of the system is given by:

$$TC^2 = \sum_{i=1,2} \left( C^2_{Di} + C^2_{ri} \right) \qquad (8)$$

and

$$C^2_{Di} = h_{0i} E(I^2_{Di}) + \frac{\left[ P_i \lambda_i + (1 - P_i)\lambda_j \right] M_{0i}}{Q_{0i}}, \quad i = 1,2, \; j = 1,2 \text{ and } i \ne j \qquad (9)$$

$$C^2_{ri} = h_i E(I^2_{ri}) + l_i E(S^2_{ri}) + \frac{\lambda_i \left[ M_{ii} P_i + M_{ij}(1 - P_i) \right]}{Q_i}, \quad i = 1,2, \; j = 1,2 \text{ and } i \ne j \qquad (10)$$

where $P_i$ ($i = 1,2$) is the probability that retailer i orders from distributor i [5]. Observably, when $P_i = 1$, (9) is the same as (2), and likewise (10) the same as (3).

We divide all the states of model 2 into four parts according to the inventory levels of the distributors, that is, State a: $I^2_{D1} > 0, I^2_{D2} > 0$; State b: $I^2_{D1} = 0, I^2_{D2} = 0$; State c: $I^2_{D1} > 0, I^2_{D2} = 0$; and State d: $I^2_{D1} = 0, I^2_{D2} > 0$. Let $P_a, P_b, P_c, P_d$ be the probabilities of the four sub-states, which can be obtained through simulation. Then we get:

$$I^2_{Di} = P_a I^a_{Di} + P_b I^b_{Di} + P_c I^c_{Di} + P_d I^d_{Di}, \quad i = 1,2$$
$$I^2_{ri} = P_a I^a_{ri} + P_b I^b_{ri} + P_c I^c_{ri} + P_d I^d_{ri}, \quad i = 1,2 \qquad (11)$$
$$S^2_{ri} = P_a S^a_{ri} + P_b S^b_{ri} + P_c S^c_{ri} + P_d S^d_{ri}, \quad i = 1,2$$

And we can get $P_i$ (the probability that retailer i orders from distributor i) as $P_1 = P_a + P_c$ and $P_2 = P_a + P_d$, because in States a and c distributor $D_1$ has on-hand inventory and can supply retailer $r_1$'s orders, and similarly States a and d allow $D_2$ to supply retailer $r_2$'s orders.
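Given simulated on-hand inventory traces of the two distributors (hypothetical inputs, e.g. recorded at retailer order epochs), the estimation of these sub-state probabilities can be sketched as follows (Python with numpy):

import numpy as np

def state_probabilities(I_D1, I_D2):
    """Estimate P_a..P_d of eq. (11) and P_1, P_2 from simulated on-hand
    inventory samples of distributors D1 and D2."""
    d1 = np.asarray(I_D1) > 0
    d2 = np.asarray(I_D2) > 0
    P_a = np.mean(d1 & d2)          # both distributors in stock
    P_b = np.mean(~d1 & ~d2)        # both out of stock
    P_c = np.mean(d1 & ~d2)         # only D1 in stock
    P_d = np.mean(~d1 & d2)         # only D2 in stock
    return P_a, P_b, P_c, P_d, P_a + P_c, P_a + P_d   # last two: P_1, P_2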
States a and b are similar to those in model 1, because in these two states no retailer is coordinately replenished. In States c and d, one distributor has available on-hand stock while the other one is out of stock, so coordinated replenishment is delivered. In the following, we illustrate the method for calculating the on-hand inventory levels of the different facilities. Without loss of generality, we take State c as an example, in which $D_1$ supplies $r_1$ and $r_2$. Like model 1, let $W_1$ represent the whole of $D_1$, $r_1$ and $r_2$; then $W_1$ is under an $(R, Q_{01})$ policy, where $R = \sum_{i=1,2} R_i + \sum_{i=1,2} Q_i + R_{01}$. Besides, let $r$ represent the whole of $r_1$ and $r_2$; then $r$ has its own batch size $Q_r$ and reorder point $R_r$. So $E(IP_r)$, the effective echelon inventory position at retailer r's level, can be calculated as follows:

$$E(IP_r) = \begin{cases} IP - D(L_{01}) & \text{if } IP - D(L_{01}) \le R_r \\ IP - D(L_{01}) - mQ_r & \text{otherwise} \end{cases} \qquad (12)$$

where $x_1$ and $x_2$ are the proportions of times that $r_1$ and $r_2$ get their supplies from $D_1$, $Q_r = x_1 Q_1 + x_2 Q_2$, and $R_r = \sum_{i=1,2} R_i + \sum_{i=1,2} Q_i - (x_1 Q_1 + x_2 Q_2)$. The effective on-hand inventory level of distributor $D_1$, $E(I^c_{D1})$, can also be calculated as follows:

$$E(I^c_{D1}) = \begin{cases} 0 & \text{if } IP - D(L_{01}) \le R_r \\ IP - D(L_{01}) - E(IP_r) = m(x_1 Q_1 + x_2 Q_2) & \text{otherwise} \end{cases} \qquad (13)$$

where $D(L_{01})$ is the customers' demand during the distributor lead time $L_{01}$ and $m$ is the largest integer satisfying $IP - D(L_{01}) - mQ_r > R_r$ in both (12) and (13).

Let $T = \sum_{i=1,2} R_i + \sum_{i=1,2} Q_i + R_{01} + Q_{01}$ denote $W_1$'s order-up-to inventory level; then at time $t$, the amount of consumer demand $U(t)$ can be calculated as $U(t) = T - IP(t)$. $U(t)$ is the sum of $U_1$ and $U_2$, the demand amounts of retailers $r_1$ and $r_2$, which can be obtained by random disaggregation. Then we can get

$$E(IL_{ri}) = E(IP_{ri}) - D(L_i) = T_i - U_i - D(L_i) \qquad (14)$$

and

$$E(I^2_{ri}) = \max\left(0, E(IL_{ri})\right), \qquad E(S^2_{ri}) = \max\left(0, -E(IL_{ri})\right), \quad i = 1,2 \qquad (15)$$

where $T_i$ denotes retailer $r_i$'s order-up-to inventory level and $D(L_i)$ is the customers' demand during the retailer lead time $L_i$.


4 The Examples
In this part, we report the results of a simulation study that evaluates the total costs of the two models above. The parameters are given as follows: $\lambda_1 = 3$, $\lambda_2 = 2$, $h_{01} = 6$, $h_{02} = 5$, $h_1 = h_2 = 7$, $l_1 = 10$, $l_2 = 9$, $M_{01} = M_{02} = 80$, $M_{11} = M_{22} = 50$, $M_{12} = M_{21} = 65$, $L_{01} = L_{02} = 5$, $L_{11} = L_{22} = 1$, $L_{12} = L_{21} = 1.5$. The $Q_i$ can be obtained using the EOQ formula [5],

$$Q = \sqrt{\frac{2\lambda M (h + l)}{hl}}$$

which gives $Q_1 = 9$ and $Q_2 = 8$. Because we assume that the distributor's reorder point $R_{0i}$ and batch size $Q_{0i}$ are both multiples of $Q_i$, we choose $R_{01} = 9, 18, 27$; $R_{02} = 8, 16, 24$; $Q_{01} = 9, 18, 27, 36$; $Q_{02} = 8, 16, 24, 32$. Our objective is to evaluate the values of the reorder points $R_i$ which minimize the average total costs of the two models.
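For instance, a short sketch of this batch-size computation (Python; rounding up reproduces the paper's Q1 = 9 and Q2 = 8, though the exact rounding rule is our assumption):

from math import ceil, sqrt

def eoq_batch(lam, M, h, l):
    # EOQ-type formula of [5]: Q = sqrt(2*lam*M*(h+l)/(h*l)), rounded up
    return ceil(sqrt(2 * lam * M * (h + l) / (h * l)))

Q1 = eoq_batch(3, 50, 7, 10)   # lam1=3, M11=50, h1=7, l1=10  ->  9
Q2 = eoq_batch(2, 50, 7, 9)    # lam2=2, M22=50, h2=7, l2=9   ->  8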
Tab.1 Results of the two models

                              Model 1            Model 2
Case  R01  Q01  R02  Q02   R1  R2  TC1        R1  R2  TC2      Ratio
1     9    9    8    8     8   6   307.8      6   6   291.7    0.06
2     9    18   8    16    8   5   318.0      7   4   301.5    0.05
3     9    27   8    24    8   6   360.6      7   4   338.6    0.06
4     9    36   8    32    8   5   402.2      8   6   371.9    0.08
5     18   9    16   8     6   5   387.4      7   5   367.6    0.05
6     18   18   16   16    9   5   410.8      9   5   381.7    0.08
7     18   27   16   24    9   6   442.7      7   6   418.0    0.06
8     18   36   16   32    9   6   491.7      9   6   457.6    0.07
9     27   9    24   8     7   5   479.8      6   4   448.2    0.07
10    27   18   24   16    8   7   510.3      6   7   463.1    0.10
11    27   27   24   24    9   5   548.9      7   6   502.3    0.09
12    27   36   24   32    10  7   582.9      9   6   542.7    0.07

In the results table we use a new parameter, Ratio = (TC1 - TC2)/TC2, to show the advantage of model 2 in total costs. From the table above we can see that in general the reorder points in model 2 do not exceed those in model 1, and all the total costs in model 2 are lower than those in model 1. These are clearly advantages of model 2, owing to coordinated replenishments by alternative distributors.

5 Conclusion
This paper sets up two models of a two-level logistics system with two distributors and two retailers. In the first model, each distributor orders for one retailer only; in the second model, a distributor can act as an alternative supply source when the other one is out of stock. By comparing the differences in the evaluation results, we can see the advantages of model 2, which stem from coordinated replenishment by alternative distributors. There is also a shortcoming in the model evaluation: the cases we chose do not cover all possibilities, because the total number of cases is too large. Above all, it is shown in this paper that the model with coordinated replenishments by alternative supply sources outperforms the other one with regard to reducing the reorder points and the total costs of the logistics system. We will research this aspect more deeply, and we hope that new achievements can be obtained later.
References

[1] Kleywegt A.J., Nori V.S., Savelsbergh M.W.P. The stochastic inventory routing problem with direct deliveries. Transportation Science, 2002, 36: 94-118
[2] Kleywegt A.J., Nori V.S., Savelsbergh M.W.P. Dynamic programming approximations for a stochastic inventory routing problem. Transportation Science, 2004, 38: 42-70
[3] Cheung K.L., Lee H.L. The inventory benefit of shipment coordination and stock rebalancing in a supply chain. Management Science, 2002, 48: 300-306
[4] Minkoff A. A Markov decision model and decomposition heuristic for dynamic vehicle dispatching. Operations Research, 1993, 41: 77-90
[5] Ng C.T., Leon Y.O.L., Chakhlevitch K. Coordinated replenishments with alternative supply sources in two-level supply chains. International Journal of Production Economics, 2001, 73: 227-240
[6] Cetinkaya S., Lee C.Y. Stock replenishment and shipment scheduling for vendor-managed inventory systems. Management Science, 2000, 46: 217-232
[7] Sivazlian B.D. A continuous-review (s, S) inventory system with arbitrary interarrival distribution between unit demands. Operations Research, 1974, 22: 65-71
[8] Axsater S., Zhang W.F. A joint replenishment policy for multi-echelon inventory control. International Journal of Production Economics, 1999, 59: 243-250
[9] Bertazzi L., Paletta G., Speranza M.G. Minimizing the total cost in an integrated vendor-managed inventory system. Journal of Heuristics, 2005, 11: 393-419


Solving the Joint Replenishment Problem with Warehouse-Space Restrictions Using a Genetic Algorithm
Ming-Jong Yao
Department of Industrial Engineering and Enterprise Information, Tunghai University, 180, Sec. 3, Taichung-Kang Road, Taichung City, 40704 Taiwan, R.O.C. Email: myao@thu.edu.tw

Abstract This study is an extension of the Joint Replenishment Problem (JRP) that takes into account warehouse-space restrictions. The focus of this study is to determine the lot size of each product under the power-of-two policy so as to minimize the total cost per unit time and to generate a feasible replenishment schedule of multiple products without exceeding the available warehouse space. In order to solve this problem, we propose a hybrid genetic algorithm (HGA). We utilize the multi-dimensional search ability of the GA to obtain candidates in the solution space, and test the feasibility of each candidate using the proposed heuristics. Through numerical experiments, we demonstrate that the proposed HGA can effectively solve the JRP with warehouse-space restrictions; it could therefore serve as an effective decision-support tool for logistics managers. Keywords: Inventory, joint replenishment problem, scheduling, genetic algorithm, warehouse space

1. Introduction

In this paper, we are interested in obtaining an optimal replenishment strategy for a distribution center with limited warehouse space available for inventory storage. The decision maker is concerned with determining the lot sizes and replenishment schedules of n products in the distribution center. The focus of this study is to optimally coordinate the replenishment of the products so as to minimize the total costs incurred per unit time without violating the warehouse-space restrictions. This study is an extension of a well-known lot sizing and scheduling problem, viz., the Joint Replenishment Problem (JRP). Without considering the warehouse-space restrictions, one could arrive at a replenishment strategy that is infeasible in practice for the distribution center, even if it is an optimal solution of the conventional JRP. We are therefore motivated to study the JRP with warehouse-space restrictions and to propose a solution approach that supports the manager's decision making in the distribution center.

2. Literature Review

The objective of the JRP is to minimize the total cost incurred per unit time. The cost terms considered generally include setup costs and inventory holding costs. Most early studies assume that the replenishment cycle time of product i (denoted by Ti) is equal to a positive integer ki times B, i.e., Ti = kiB, where B is a basic period. Also, the replenishment frequency for the major setup, denoted by k0, is usually set to 1 in the JRP. The JRP has been studied for over thirty years, and extensive research efforts have been directed at efficient heuristics for solving it; one may refer to van Eijs (1993), Viswanathan (1996), Wildeman et al. (1997) and Lee and Yao (2003). However, none of the solution approaches in the literature takes into account warehouse-space restrictions when solving the JRP. In this study, it is assumed that replenishment lots arrive at the warehouse only at the beginning of some basic period. Many researchers have studied the scheduling of cyclic jobs or lots to minimize the peak requirement of production resources; for instance, one may refer to Park and Yun (1985), Yao (2001) and Yao et al. (2003). Recently, Murthy, Benton and Rubin (2003) devoted themselves to solving the lot scheduling problem so as to minimize the maximum warehouse-space requirement given fixed replenishment lot sizes (and/or replenishment cycles) of multiple products. Murthy, Benton and Rubin (2003) commented that the scheduling of an activity in a given basic period does not affect the capacity requirement in basic periods in which this activity is not scheduled. This is unlike the situation in inventory replenishment of multiple products, where inventory left over

at the end of the last basic period becomes the inventory at the beginning of the next basic period. Using the assumption that the replenishment cycle time of each product is an integer multiple of a basic period, Murthy, Benton and Rubin (2003) formulated a mathematical model and proposed a heuristic for solving the lot scheduling problem. A decision maker may first ignore the warehouse-space restrictions and apply the solution approaches for the conventional JRP to solve the problem. However, he/she often finds that the obtained solutions are infeasible for the practice of the distribution center since the maximum warehouse-space required exceeds the available size. Therefore, we are encouraged to study the JRP considering the warehouse-space restrictions and to propose a solution approach to support the managers decision making in the distribution center.

3. The JRP with Warehouse-space Restrictions under Power-of-Two Policy

In this section, we introduce the mathematical model for the JRP with the warehouse-space restrictions under Power-of-Two policy. Table 1 summarizes the notations for formulating the mathematical model for the JRP.
Table 1: The notations for the model formulation of the JRP.

di: the demand rate of product i
hi: the holding cost per unit per unit time for product i
A: the major setup cost incurred in the distribution center
ai: the minor setup cost incurred when product i is replenished
Ti: the length of time between two consecutive minor setups for product i
ki: the replenishment multiplier of product i
B: a basic period in the planning horizon

Note that the author referred to Murthy, Benton and Rubin's (2003) mathematical model for the derivation of the constraints discussed in this subsection. Given a vector of multipliers $(k_1, k_2, \dots, k_n)$, we denote a replenishment schedule for such a vector as $X(k_1, k_2, \dots, k_n) = (x_1(k_1), x_2(k_2), \dots, x_n(k_n))$, where $x_i(k_i)$ is the earliest scheduled replenishment time point of product i. For a value of the basic period B, $I_{it}(x_i(k_i), B)$ is the inventory level of product i at time t (as a function of $x_i(k_i)$ and B, since the replenishment quantity of product i is $q_i = d_i k_i B$). Denote $s_i$ as the warehouse space required for a unit of product i. Define $S_t(X(k_1, \dots, k_n), B)$ as the total warehouse-space requirement at time t; then $S_t(X(k_1, \dots, k_n), B) = \sum_{i=1}^{n} s_i I_{it}(x_i(k_i), B)$. Also, for a replenishment schedule $X(k_1, \dots, k_n)$, we define $S^{max}(X(k_1, \dots, k_n), B) = \max_{0 \le t < \infty} \{ S_t(X(k_1, \dots, k_n), B) \}$ as its maximum warehouse-space requirement. Let $W^{max}$ be the warehouse space available. If $S^{max}(X(k_1, \dots, k_n), B) \le W^{max}$, we have a feasible replenishment schedule for the solution $(k_1, \dots, k_n, B)$. Let lcm(.) be an operator that takes the least common multiple of a set of integers. Now we are ready to summarize the mathematical model for the JRP with warehouse-space restrictions under the PoT policy as follows.

Minimize

$$TC_{PoT}(k_1, k_2, \dots, k_n, B) = \frac{1}{B}\left[ \frac{A}{k_0} + \sum_{i=1}^{n} \frac{a_i}{k_i} \right] + \frac{B}{2} \sum_{i=1}^{n} k_i d_i h_i \qquad (1)$$

Subject to

$$I_{it}(x_i(k_i), B) = d_i k_i B - d_i (t - x_i(k_i)), \quad x_i(k_i) \le t < x_i(k_i) + k_i B, \; i = 1, \dots, n \qquad (2)$$

$$I_{i, t + k_i B}(x_i(k_i), B) = I_{it}(x_i(k_i), B), \quad 0 \le t < k_i B, \; i = 1, \dots, n \qquad (3)$$

$$\sum_{i=1}^{n} s_i I_{it}(x_i(k_i), B) \le W^{max}, \quad 0 \le t < \mathrm{lcm}(k_1, k_2, \dots, k_n) B \qquad (4)$$

$$0 \le x_i(k_i) < k_i B, \quad i = 1, \dots, n \qquad (5)$$

$$k_0 = 1 \text{ and } k_i = 2^{p_i}, \; p_i \ge 0 \text{ integer}, \; \forall i \qquad (6)$$

For each product, we use (2) to represent its inventory level as a function of $x_i(k_i)$ and B. Equations (3) indicate the periodic pattern of the replenishment cycles. The inequalities in (4) define the

maximum warehouse-space restrictions. At last, as shown in (5), the first replenishment time $x_i(k_i)$ of each product must be scheduled within its replenishment cycle time. Note that the number of inequalities in (4) under the general-integer (GI) policy could be enormously large when most of the multipliers $\{k_i\}$ are prime numbers; such a phenomenon makes the feasibility testing, and hence the whole problem, much harder to solve. Under the PoT policy, we enjoy the interesting property that $\mathrm{lcm}(k_1, k_2, \dots, k_n) = \max(k_1, k_2, \dots, k_n)$, which also helps to prevent the number of constraints in the mathematical model from growing too fast.
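As an illustration of constraint (4), the following sketch (Python with numpy; the time discretization and the function names are ours, not the paper's) evaluates the total space requirement of a candidate schedule over one lcm cycle and returns its maximum:

import numpy as np
from math import gcd
from functools import reduce

def lcm_list(ints):
    return reduce(lambda a, b: a * b // gcd(a, b), ints)

def max_space_requirement(d, s, k, x, B, grid=200):
    """Max of sum_i s_i * I_it over 0 <= t < lcm(k)*B for the sawtooth
    inventory of eqs. (2)-(3); d, s, k, x are per-product arrays of demand
    rates, unit space, PoT multipliers and first replenishment times."""
    L = lcm_list(list(k))
    t = np.arange(0.0, L * B, B / grid)       # time grid over one lcm cycle
    total = np.zeros_like(t)
    for di, si, ki, xi in zip(d, s, k, x):
        tau = (t - xi) % (ki * B)             # time since last replenishment
        total += si * di * (ki * B - tau)     # eqs. (2)-(3)
    return total.max()

# the schedule is feasible iff max_space_requirement(...) <= W_max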

4. The Proposed Hybrid Genetic Algorithm

First, we briefly give an overview of our GA. A solution of the JRP with warehouse-space restrictions includes a vector of multipliers $(k_1, k_2, \dots, k_n)$ and the value of the basic period B. Besides the goal of cost minimization, we have to generate a feasible replenishment schedule for the obtained solution. In our study, the GA first ignores the warehouse-space restrictions, searches in the solution space of $(k_1, k_2, \dots, k_n)$, and tries to minimize the objective value. We define $K = (k_1, k_2, \dots, k_n)$ for simpler notation. For a given vector of multipliers K, the objective function is convex with respect to B. By setting the first-order condition equal to zero, i.e., $\partial TC(K, B) / \partial B = 0$, we can easily locate the minimum at $B(K)$ by eq. (7):

$$B(K) = \sqrt{ \frac{2\left[ (A/k_0) + \sum_{i=1}^{n} (a_i / k_i) \right]}{ \sum_{i=1}^{n} d_i h_i k_i } } \qquad (7)$$
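A direct transcription of eqs. (1) and (7) can be sketched as follows (Python with numpy; k0 = 1 as in the model, and the function names are ours):

import numpy as np

def B_star(A, a, d, h, k, k0=1):
    """Cost-minimizing basic period B(K) of eq. (7) for multipliers k."""
    a, d, h, k = map(np.asarray, (a, d, h, k))
    return np.sqrt(2 * (A / k0 + np.sum(a / k)) / np.sum(d * h * k))

def TC_PoT(A, a, d, h, k, B, k0=1):
    """Total cost per unit time, eq. (1)."""
    a, d, h, k = map(np.asarray, (a, d, h, k))
    return (A / k0 + np.sum(a / k)) / B + (B / 2) * np.sum(k * d * h)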

4.1 The application of the genetic algorithm
In this study, we apply a GA to solve the JRP.

4.1.1 The chromosome representation and the initial population
Since our GA searches in the solution space of $(k_1, k_2, \dots, k_n)$, each multiplier $k_i$ is represented as a particular part of a chromosome: the first $u_1$ bits encode the value of $k_1$, the piece of the chromosome from the $(u_1+1)$-th bit to the $(u_1+u_2)$-th bit represents the value of $k_2$, and so on. Under the PoT policy, each $k_i$ is a power-of-two integer, i.e., $k_i = 2^{v_i}$ for some nonnegative integer $v_i$. In our GA, we represent $k_i$ by the (integer) value of $v_i$ encoded in the chromosome. For example, if we use $u_i = 3$ bits to represent all the possible values of $k_i$, then there exist $2^{u_i} = 2^3 = 8$ possible values of $v_i$, namely {0, 1, 2, ..., 7} (corresponding to (0,0,0), (0,0,1), ..., (1,1,1), respectively, in binary coding). In such a case, we may use the binary strings (0,1,0) and (1,0,1) to represent $k_i = 2^2 = 4$ and $k_i = 2^5 = 32$, respectively. We randomly generate the binary strings to obtain the initial population of the GA.

4.1.2 Selection and fitness function evaluation
The selection mechanism in a GA simulates survival of the fittest: the best individuals get more copies, the average stay even, and the worst die off. To apply the selection mechanism, we need to evaluate the fitness of each chromosome in the population. Since there may exist problems associated with using raw fitness values directly, we perform fitness normalization in our GA; specifically, we use linear ranking normalization. In linear ranking normalization, all of the chromosomes in a population are ranked according to their fitness and stored in a temporary list. We denote the size of a population as PS and the index of a chromosome within the temporary list as $i_{temp}$. The best-fit chromosome occupies the first portion of the list and has the highest rank ($i_{temp}$ = PS), whereas the least-fit chromosome occupies the last portion of the list and has the least rank ($i_{temp}$ = 1). Note that the selection pressure (SP) takes values in the range [1.0, 2.0] (one may refer to Marzouk and Moselhi (2003) for reference). The normalized fitness value of chromosome $i_{temp}$ (within the temporary list) is calculated as follows:

$$eval_{i_{temp}} = 2 - SP + \frac{2(SP - 1)(i_{temp} - 1)}{PS - 1} \qquad (8)$$
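A small sketch of the encoding and the ranking normalization (Python with numpy; helper names are ours):

import numpy as np

def decode(chrom, bits_per_item):
    """Read each u_i-bit slice of a binary chromosome as v_i; k_i = 2**v_i."""
    ks, pos = [], 0
    for u in bits_per_item:
        v = int("".join(str(b) for b in chrom[pos:pos + u]), 2)
        ks.append(2 ** v)
        pos += u
    return ks

def ranked_fitness(costs, SP=1.8):
    """Linear ranking normalization of eq. (8): rank PS (the lowest cost)
    gets fitness SP; rank 1 (the highest cost) gets fitness 2 - SP."""
    PS = len(costs)
    ranks = np.empty(PS, dtype=int)
    ranks[np.argsort(costs)[::-1]] = np.arange(1, PS + 1)
    return 2 - SP + 2 * (SP - 1) * (ranks - 1) / (PS - 1)

decode([0, 1, 0, 1, 0, 1], [3, 3])   # -> [4, 32], as in the text's example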

After normalizing fitness, we used a roulette wheel mechanism for selecting chromosomes for reproduction. In our GA, we consider the elitism process to overcome the problem of losing the superior chromosomes in each

population because of the random nature of selection and the effects of crossover and mutation (Hunter, 1998). Following Bean's (1994) strategy, we copy 20% of the chromosomes with the best fitness values directly to the next generation.

4.1.3 The genetic operators
The chromosomes that survive the selection step undergo alteration by two genetic operators, crossover and mutation, to generate the remaining 80% of the chromosomes in the next generation. In this study, we choose uniform crossover for the crossover operations, since uniform crossover, like multi-point crossover, has been claimed to reduce the bias associated with the length of the binary representation and the particular coding for a given parameter set (Pohlheim, 2004). In a uniform crossover, we first create, at random, a crossover mask of the same length as the chromosome structure. The parity of the bits in the mask indicates which parent supplies the offspring with which bits: for each bit, the parent who contributes to the offspring is chosen randomly with equal probability. One offspring is produced by taking the bit from the first parent if the corresponding mask bit is 1, or the bit from the second parent if the corresponding mask bit is 0; the other offspring is created using the inverse of the mask. Next, we apply the mutation operator to the population that has just experienced crossover. The mutation operator randomly chooses genes among all chromosomes in the population with a fixed mutation rate (denoted by MR) and flips the chosen genes.

4.1.4 The parameter setting in our GA
Here we present our heuristic rules for setting the parameters of the GA. First, the termination condition uses the following rule: the evolutionary process stops when the best-on-hand solution shows no improvement during the last 50 generations. Recall that n is the number of items; based on our numerical experiments, we recommend setting the population size as PS = 10n. In our GA, the crossover rate (CR) and the mutation rate (MR) vary linearly during the evolutionary process. In the beginning of the evolution, we set the crossover rate at a higher level (CR = 0.90) while the mutation rate is lower (MR = 0.05), so that the GA can take advantage of the chromosomes' characteristics. During the evolutionary process, the crossover rate decreases by 0.001 per generation and the mutation rate increases by 0.01 after 100 generations. The crossover rate and the mutation rate stop varying when they reach specified levels, i.e., CR = 0.20 and MR = 0.20, respectively. As the crossover rate decreases, the chromosomes may become similar to one another during evolution; we therefore increase the mutation rate (to raise the diversity of the population) at the same time, so that the GA can still explore new regions of the search space. We believe that our GA takes the best advantage of the intrinsic parallelism of genetic algorithms (i.e., a majority of different chromosomes are evaluated at each generation by using a uniform crossover operator) together with this varying parameter setting.
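The two operators of Section 4.1.3 can be sketched as follows (Python with numpy; a minimal illustration of uniform crossover with an inverted-mask second child and bit-flip mutation):

import numpy as np

rng = np.random.default_rng()

def uniform_crossover(p1, p2):
    """A random bit mask decides which parent supplies each gene;
    the second offspring uses the inverse of the mask."""
    mask = rng.integers(0, 2, size=len(p1)).astype(bool)
    return np.where(mask, p1, p2), np.where(mask, p2, p1)

def mutate(chrom, MR):
    """Flip each gene independently with probability MR."""
    flips = rng.random(len(chrom)) < MR
    return np.where(flips, 1 - np.asarray(chrom), chrom)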
4.2.1 A feasibility testing procedure The proposed feasibility testing procedure, namely, Proc FT, attempts to obtain a feasible replenishment schedule. Given a vector of replenishment multipliers K and a value of basic period B, we denote X(K) as a candidate replenishment schedule and L(X(K), B) as the maximum warehouse-space ement (MWSR) when using the replenishment sche(X)K. An overview of the feasibility testing procedure is as follows. Assume that we are given a valf B and a vector of multipliers K = (k1 , k2 , , kn ) and W max . We first use an Initial Schedule Procedure, namely, Proc.IS, to replenent schedule X1(K), and calculate the corresponding MWSR of L(X1(K),B). Determine L* as the best Mobtained to date and X* as its corresponding schedule. (Since X1(K) is the first replenishment schedule then set L* = L(X1(K),B) and X*=X1(K) after Proc.IS is done). Obviously, when L* W max , i.e., the nning ho larger than the available warehouse space W max ,we obtain a feasible replenishment
305

schedule. We define a feasibility indicator in Proc FT that equals 1 if a feasible replenishment schedule is obtained and 0 otherwise. After Proc IS, if no feasible replenishment schedule has been secured (the indicator is 0), we employ a Schedule Smoothing Procedure to improve L* until the indicator becomes 1 or L* can no longer be improved. If L* has not been improved for a given number of consecutive iterations, we stop Proc FT, where the threshold number of iterations is a termination criterion to be defined at the discretion of the analyst. Otherwise, we randomly choose a subset of products for re-optimization, fix the replenishment schedule for the rest of the products, and return to start another run of local search.

4.2.2 A binary-search procedure
If one can employ Proc FT to generate a feasible replenishment schedule for a given vector $K = (k_1, k_2, \dots, k_n)$ and its corresponding local minimum $B(K)$, then one surely secures an optimal objective value for the given vector K. But if there exists no feasible replenishment schedule for $(K, B(K))$, then we need to search for another value of B that admits a feasible replenishment schedule with the minimal objective function value for K. When generating a feasible replenishment schedule, the major concern is the maximum warehouse-space requirement $S^{max}(X(K), B) = \max_{0 \le t < \infty} \{ S_t(X(K), B) \}$; therefore, the easiest way to fix an infeasible replenishment schedule is to reduce B (so as to reduce the maximum warehouse-space requirement). For the given vector K, the objective function is convex with respect to B, since its second derivative is easily shown to be positive. Recall that we use the binary-search procedure after learning that $(K, B(K))$ is infeasible; therefore, for $B \in (0, B(K))$, we should search for the maximum value of B that admits a feasible replenishment schedule, which yields the solution with the minimal objective function value. Having the search range $(0, B(K))$, we are ready to use a binary search to find the maximum value of B with a feasible replenishment schedule using Proc FT. We start the binary search by testing the vector K at $B = B(K)/2$ and continue until the specified error allowance is reached. Based on our numerical experience, the binary search is efficient when the error allowance in B is set to $10^{-2}$.
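The binary search of Section 4.2.2 can be sketched as follows (Python; `is_feasible(K, B)` stands in for a run of Proc FT and is a hypothetical callable, not the paper's code):

def search_feasible_B(K, B_of_K, is_feasible, tol=1e-2):
    """Find the largest B in (0, B(K)) that admits a feasible schedule;
    by convexity of TC in B, this B gives the minimal feasible cost."""
    lo, hi = 0.0, B_of_K
    best = None
    B = B_of_K / 2.0                  # the paper starts testing at B(K)/2
    while hi - lo > tol:
        if is_feasible(K, B):
            best, lo = B, B           # feasible: try a larger B
        else:
            hi = B                    # infeasible: shrink B
        B = (lo + hi) / 2.0
    return best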

5. Numerical Experiments
In this section, using benchmark examples, we show that the proposed hybrid genetic algorithm (HGA) is an effective solution approach for solving the JRP with warehouse-space restrictions. We use the six benchmark examples presented in Fujita's (1978) paper to evaluate the performance of the proposed HGA. Note that the optimal solution of the JRP without warehouse-space restrictions serves as a lower bound on the optimal objective function value of the JRP with warehouse-space restrictions. Also, recall that Lee and Yao's (2003) algorithm obtains an optimal solution for the unconstrained JRP under the PoT policy; we denote its objective function value by $TC_U$. Furthermore, we let $TC_C$ be the objective function value of a solution for the JRP with warehouse-space restrictions. Then we can use $TC_U$ as a benchmark to define a performance index (a measure of relative error), namely RE, as shown in (9):

$$RE = \frac{TC_C - TC_U}{TC_U} \times 100\% \qquad (9)$$

We also use the results from Lee and Yao's (2003) algorithm to set the value of $W^{max}$, i.e., the warehouse space available. Note that Lee and Yao's (2003) algorithm obtains the optimal solution for the JRP by picking the one with the minimum objective function value among many local optima. Let $(K_l, B_l)$ be the l-th local optimum obtained by Lee and Yao's (2003) algorithm. Then we set the warehouse space available by $W^{max} = \max_l \{ S^{max}(X(K_l), B_l) \} \times 65\%$.

We collected the relative error (RE) and the run time after solving each of the six benchmark examples with the proposed HGA for 30 trials, because of the random nature of the genetic algorithm. We summarize our experimental results in Table 2.

Table 2 shows that the RE of the best solution among the 30 trials of the proposed HGA is close to the average RE; the solution quality of the proposed HGA is therefore very stable. We also observe that the average run time of the proposed HGA is less than 130 seconds for these six examples, so the proposed HGA obtains a close-to-optimal solution efficiently. Consequently, the proposed HGA may serve as a decision-support tool for managers facing the JRP with warehouse-space restrictions.

6. Concluding Remarks

This study is an extension of the Joint Replenishment Problem (JRP) that takes into account warehouse-space restrictions. The focus of this study is to determine the lot size of each product under the power-of-two policy so as to minimize the total cost per unit time and to generate a feasible replenishment schedule of multiple products without exceeding the available warehouse space. In order to solve this problem, we propose a hybrid genetic algorithm (HGA). We utilize the multi-dimensional search ability of the GA to obtain candidates in the solution space, and test the feasibility of each candidate using a new heuristic, namely Proc FT. Through numerical experiments, we demonstrate that the proposed HGA (using a uniform crossover operator and varying crossover and mutation rates) can effectively solve the JRP with warehouse-space restrictions. It could therefore serve as an effective decision-support tool for logistics managers.
Table 2: The computational results of the six benchmark examples.

Data Set   Best RE   Average RE   Average run time (s)
1          0.95%     0.98%        77.20
2          0.21%     0.25%        94.80
3          1.05%     1.25%        85.16
4          0.69%     0.75%        62.45
5          0.13%     0.14%        63.72
6          0.12%     0.18%        124.10
Average    0.53%     0.59%        84.57

References

[1] Bean, J.C. Genetic algorithms and random keys for sequencing and optimization. ORSA Journal on Computing, 1994, 6: 154-160
[2] Fujita, S. The application of marginal analysis to the economic lot size scheduling problem. AIIE Transactions, 1978, 10: 354-361
[3] Hunter, A. Crossing over genetic algorithms: the Sugal generalized GA. Journal of Heuristics, 1998, 4: 179-192
[4] Lee, F.C., Yao, M.J. A global optimum search algorithm for the joint replenishment problem under power-of-two policy. Computers and Operations Research, 2003, 30: 1319-1333
[5] Marzouk, M., Moselhi, O. Constraint-based genetic algorithm for earthmoving fleet selection. Canadian Journal of Civil Engineering, 2003, 30: 673-683
[6] Murthy, N.N., Benton, W.C., Rubin, P.A. Offsetting inventory cycles of items sharing storage. European Journal of Operational Research, 2003, 150: 304-319
[7] Park, K.S., Yun, D.K. Optimal scheduling of periodic activities. Operations Research, 1985, 33: 690-695
[8] Pohlheim, H. Evolutionary Algorithms: Principles, Methods, and Algorithms [online]. Available from http://www.geatbx.com/index.html, 2001 [accessed 6 May 2004]
[9] van Eijs, M.J.G. A note on the joint replenishment problem under constant demand. Journal of the Operational Research Society, 1993, 44: 185-191
[10] Viswanathan, S. A new optimal algorithm for the joint replenishment problem. Journal of the Operational Research Society, 1996, 47: 936-944
[11] Wildeman, R.E., Frenk, J.B.G., Dekker, R. An efficient optimal solution method for the joint replenishment problem. European Journal of Operational Research, 1997, 99: 433-444
[12] Yao, M.J. The peak load minimization problem in cyclic production. Computers and Operations Research, 2001, 28: 1441-1460
[13] Yao, M.J., Elmaghraby, S.E., Chen, I.C. On the feasibility testing of the economic lot scheduling problem using the extended basic period approach. Journal of the Chinese Institute of Industrial Engineers, 2003, 20: 435-448


A GA-based Alternative Approach on the Capacitated Warehouse Allocation of Customers with Uncertain Demands
Zhou Gengui, Ye Feng, Cao Jian, Cao Zhengyu
College of Business Administration Zhejiang University of Technology, Hangzhou 310014, P.R.China

Abstract With the increasing importance of seamless supply chain integration to business success, the role of warehouses has become more that of flow-through transshipment facilities intended for timely order fulfillment than of inventory stocking points. The ability to serve all customers in a timely manner has become even more important in today's supply chain networks. One of the most effective ways of enhancing this strategic position of warehouses is to serve customers from an alternative warehouse when an assigned warehouse runs out of stock because of the uncertain demands from customers. In this paper, we consider a capacitated warehouse allocation problem with uncertain demands. First, we deal with this problem via an alternative approach with a primary and a secondary warehouse for each customer. Second, the problem is formulated as an integer programming model and solved using a genetic algorithm. In the GA approach, all candidate solutions are encoded as a permutation of the warehouse labels, and they are evaluated based on the total shipping cost of both the primary warehouses and the nearest secondary warehouses to all customers. Finally, the application of the proposed GA to a real-world problem and numerical analysis show its high effectiveness and efficiency. Key words Supply chain network, Capacitated allocation, Uncertain demands, Genetic algorithm.

1. Introduction
Many systems in warehouse management are characterized by providing goods from warehouse locations to a given set of fixed points or customers. Warehouse location analysis typically assumes that all customer demands can be served by the warehouse to which the customers are assigned. But this is not always the case in practice because of the uncertain demands from customers: it is very common that customer orders cannot be filled entirely from one warehouse's inventory due to stock-outs, because a 100% in-stock policy for all possible demand levels would require too much safety stock to be practical. A well-accepted operational strategy in this situation is to supply a customer with part of the goods from an alternative warehouse. Although this surely results in additional transportation cost, it maintains a high level of stock availability to the customers. As early as 1989, Pirkul investigated this problem by considering facility location with primary and secondary facility requirements (Pirkul, 1989). It is an uncapacitated facility problem that locates uncapacitated facilities among a set of potential sites to minimize the cost of serving a number of demand points, each requiring service from two different facilities. More recently, Meshkat and Ballou considered a capacitated warehouse location problem with uncertain stock availability (Meshkat and Ballou, 1996). Here, uncertain stock availability means that a customer's demand may not definitely be supplied by one warehouse, i.e., each warehouse may serve as both a primary and a secondary warehouse for some customers. However, this problem with uncertain stock availability of warehouses has not received much attention from researchers. In contrast with the well-known location/allocation problem (e.g., Domschke and Drex, 1984; Aikens, 1985; Wilson, 1986; Current et al., 1990; Geoffrion et al., 1995), this facility location problem with primary and secondary facilities is NP-hard, and it is commonly dealt with by conventional techniques like Lagrangian relaxation (Pirkul, 1989) and heuristics (Meshkat and Ballou, 1996), which are usually not efficient or effective for larger-scale problems. On the other hand, the locations of facilities may be fixed in advance because of geographical and practical reasons for some companies in practice. Therefore, considering the allocation of customers to facilities or warehouses with a primary and a secondary warehouse makes the problem simpler and more practical. In this paper, we consider a capacitated warehouse allocation problem with a primary and a secondary warehouse for each customer. This problem is different from the existing related warehouse location problems, but it is still more complicated than the well-known generalized assignment problem (Cattrysse and Van Wassenhove,
The research work was partially supported by the National Natural Science Foundation of China (No. 70671095).


1992). Therefore, it is also NP-hard. We formulate this problem as an Integer Programming model, but we solve it by a genetic algorithm approach. In the GA approach, candidate solutions are encoded as ordered vectors of warehouse labels, and they are evaluated by the total shipping cost of both the primary warehouses and the nearest secondary warehouses over all customers. The application of the proposed GA to a real-world problem and numerical analysis show its high effectiveness and efficiency.

2. Modeling Formulation
To simplify the model formulation, we adopt the same assumption as Meshkat and Ballou (1996): once a primary warehouse location is known, its secondary warehouse location is also known. The secondary warehouse may simply be the warehouse nearest to the primary warehouse, or the one with the lowest shipping cost for shipping the out-of-stock goods to the customers. Fig. 1 illustrates such a supply chain network with three warehouses.

Fig.1 A Supply Chain Network with Three Warehouses

Given m customers and r warehouses, the capacitated allocation problem with a primary and a secondary warehouse for each customer can be formulated as the following integer program:

\min f = \sum_{i=1}^{m} \sum_{j=1}^{r} w_{ij}\,\alpha d_i x_{ij} + \sum_{i=1}^{m} \sum_{j=1}^{r} w'_{ij}\,(1-\alpha) d_i z_{ij}    (1)

subject to:

\sum_{j=1}^{r} x_{ij} = 1, \quad i = 1,2,\ldots,m    (2)

\sum_{j=1}^{r} z_{ij} = 1, \quad i = 1,2,\ldots,m    (3)

\sum_{i=1}^{m} \alpha d_i x_{ij} + \sum_{k=1}^{m} (1-\alpha) d_k z_{kj} \le q_j, \quad j = 1,2,\ldots,r    (4)

x_{ij} + z_{ij} \le 1, \quad i = 1,2,\ldots,m,\ j = 1,2,\ldots,r    (5)

x_{ij}, z_{ij} = 0 \text{ or } 1, \quad i = 1,2,\ldots,m,\ j = 1,2,\ldots,r    (6)

where
i = index for customers;
j = index for warehouses;
d_i = demand of customer i;
α = proportion of each customer's demand that is served by the primary warehouse assigned to it, α ∈ [0, 1];
w_{ij} = unit shipping cost for customer i when its primary warehouse is warehouse j;
w'_{ij} = unit shipping cost for customer i when its secondary warehouse is warehouse j;
q_j = supply capacity of warehouse j;
x_{ij} = 1 if warehouse j serves customer i as the primary warehouse, and 0 otherwise;
z_{ij} = 1 if warehouse j serves customer i as the secondary warehouse, and 0 otherwise.

The objective function minimizes the total shipping cost of satisfying the primary and secondary demands of all customers. Constraint sets (2) and (3) state that every customer is supplied by exactly one primary and exactly one secondary warehouse. Constraint set (4) guarantees that all goods shipped from one warehouse to satisfy the primary and secondary demands of customers are within the supply capacity of that warehouse. Constraint set (5) states that the same warehouse cannot serve both as the primary and as the secondary warehouse of a given customer. Constraint set (6) expresses the integrality condition on all decision variables. Since the above formulation generalizes the generalized assignment problem, it falls within the category of NP-complete combinatorial optimization problems. In the following section, we discuss our proposed GA approach to this NP-hard problem.
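To make formulation (1)-(6) concrete, the following is a minimal sketch of the model in the open-source PuLP modeller. The four-customer, two-warehouse data (costs, demands, capacities, and the value of α) are illustrative assumptions, not the paper's data.

```python
# A sketch of formulation (1)-(6) in PuLP; all data below are assumptions.
import pulp

m, r = 4, 2                                  # customers, warehouses
w  = [[2.9, 3.2], [3.9, 4.0], [3.5, 3.6], [2.5, 2.9]]   # primary costs w_ij
wp = [[3.1, 3.4], [4.1, 4.2], [3.7, 3.8], [2.7, 3.1]]   # secondary costs w'_ij
d  = [100, 80, 120, 90]                      # demands d_i
q  = [250, 250]                              # capacities q_j
alpha = 0.9                                  # primary share of demand

prob = pulp.LpProblem("warehouse_allocation", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (range(m), range(r)), cat="Binary")
z = pulp.LpVariable.dicts("z", (range(m), range(r)), cat="Binary")

# objective (1): primary plus secondary shipping cost
prob += pulp.lpSum(w[i][j] * alpha * d[i] * x[i][j]
                   + wp[i][j] * (1 - alpha) * d[i] * z[i][j]
                   for i in range(m) for j in range(r))
for i in range(m):
    prob += pulp.lpSum(x[i][j] for j in range(r)) == 1          # (2)
    prob += pulp.lpSum(z[i][j] for j in range(r)) == 1          # (3)
    for j in range(r):
        prob += x[i][j] + z[i][j] <= 1                          # (5)
for j in range(r):
    prob += pulp.lpSum(alpha * d[i] * x[i][j]
                       + (1 - alpha) * d[i] * z[i][j]
                       for i in range(m)) <= q[j]               # (4)

prob.solve()                                 # (6) is implied by cat="Binary"
print(pulp.LpStatus[prob.status], pulp.value(prob.objective))
```

An off-the-shelf MIP solver of this kind can then serve as a baseline against the GA of the next section on small instances.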

3. Genetic Algorithm Approach


A genetic algorithm (GA) can be understood as an "intelligent" probabilistic search algorithm which can be applied to a variety of combinatorial optimization problems (Gen and Cheng, 2000). The theoretical foundations of GAs were originally developed by Holland (1975). The idea of GAs is based on the evolutionary process of biological organisms in nature. During the course of evolution, natural populations evolve according to the principles of natural selection and "survival of the fittest". Individuals which are more successful in adapting to their environment have a better chance of surviving and reproducing, whilst individuals which are less fit are eliminated. This means that the genes of the highly fit individuals spread to an increasing number of individuals in each successive generation. The combination of good characteristics from highly adapted ancestors may produce even fitter offspring. In this way, species evolve to become better and better adapted to their environment. A GA simulates these processes by taking an initial population of individuals and applying genetic operators in each reproduction. In optimization terms, each individual in the population is encoded into a string or chromosome which represents a possible solution to a given problem. The fitness of an individual is evaluated with respect to a given objective function. Highly fit individuals or solutions are given opportunities to reproduce by exchanging pieces of their genetic information, in a crossover procedure, with other highly fit individuals. This produces new "offspring" solutions (i.e., children), which share some characteristics taken from both parents. Mutation is often applied after crossover by altering some genes in the strings. The offspring can either replace the whole population (generational approach) or replace less fit individuals (steady-state approach). This evaluation-selection-reproduction cycle is repeated until a satisfactory solution is found.
3.1 Genetic Representation
A good representation scheme is crucial for a GA to solve a given problem. It should not only be meaningful in representing a candidate solution of the problem, but also operational for crossover, mutation and other problem-specific operators, so that minimal computational effort is involved in these procedures. For the above capacitated warehouse allocation problem, we use a concise but efficient representation in which the candidate solution structure is an ordered structure of integer numbers. It is an m-dimensional vector for an allocation problem with m customers and r warehouses; the integer at position i identifies the warehouse assigned to customer i. Fig. 2 illustrates this genetic representation scheme for a two-warehouse allocation with seven customers. This representation ensures that all the equality constraints in (2) are automatically satisfied, since exactly one warehouse (the primary warehouse) is assigned to each customer. In decoding, if we keep the assumption that the secondary warehouse is simply the warehouse nearest to the primary warehouse, or the one with the lowest shipping cost for shipping the out-of-stock goods to the customers, the decision variables x_{ij} and z_{ij} can be obtained simultaneously, and the equality constraints in (3) and (5) are also automatically satisfied. The capacity constraints in (4) can be handled by both the penalty strategy and the repair strategy described in the following subsections.
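A minimal sketch of this decoding step, under the same assumption that the secondary warehouse is the warehouse, other than the primary, with the lowest secondary shipping cost; the helper name `decode` and the example chromosome are ours.

```python
def decode(chrom, wp):
    """Decode a chromosome into (primary, secondary) warehouse assignments.
    chrom[i] is the primary warehouse of customer i (0-based); the secondary
    warehouse is the cheapest other warehouse under the secondary costs wp."""
    primary, secondary = [], []
    for i, j in enumerate(chrom):
        primary.append(j)
        k = min((jj for jj in range(len(wp[i])) if jj != j),   # enforces (5)
                key=lambda jj: wp[i][jj])
        secondary.append(k)
    return primary, secondary

# e.g. decode([1, 0, 1, 0, 0, 1, 0], wp) for a seven-customer, two-warehouse
# instance such as the one in Fig. 2 (the gene values here are illustrative)
```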

Fig.2 A Genetic Representation for Warehouse Allocation

3.2 Evaluation and Penalization
Decoding the ordered representation above yields a candidate solution for the capacitated warehouse allocation of customers with a primary and a secondary warehouse, together with its fitness value according to the objective function in (1). Decoding may, however, produce infeasible solutions that exceed the capacity limits of some warehouses. In this case, we consider the total shipping cost occurring at each warehouse and simultaneously penalize it by its degree of infeasibility, taking the penalized objective value as the fitness value of the infeasible solution. We propose the following fitness function for the k-th individual:

f_k = \sum_{j=1}^{r} f_{kj}    (7)

where

f_{kj} = F_{kj}(x) \cdot P_{kj}(x)    (8)

F_{kj}(x) = \sum_{i=1}^{m} w_{ij}\,\alpha d_i x_{ij} + \sum_{i=1}^{m} w'_{ij}\,(1-\alpha) d_i z_{ij}    (9)

P_{kj}(x) = \begin{cases}
\beta + \Bigl[\sum_{i=1}^{m} \alpha d_i x_{ij} + \sum_{k=1}^{m} (1-\alpha) d_k z_{kj} - q_j\Bigr]/q_j, & \text{if } \sum_{i=1}^{m} \alpha d_i x_{ij} + \sum_{k=1}^{m} (1-\alpha) d_k z_{kj} > q_j \\
1, & \text{if } \sum_{i=1}^{m} \alpha d_i x_{ij} + \sum_{k=1}^{m} (1-\alpha) d_k z_{kj} \le q_j
\end{cases}    (10)
Equation (10) guarantees that every individual violating a capacity constraint is penalized by a positive factor greater than 1, where β is an adjustable parameter for the severity of the penalty, 1 ≤ β ≤ 2. If an individual violates a capacity constraint, it is less likely to be selected for the next generation in the evolutionary process because of the larger increase in its objective value.
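A sketch of the penalized evaluation (7)-(10); the function name and the default value β = 1.3 (inside the 1.2-1.4 range recommended in Section 4) are our choices.

```python
def fitness(primary, secondary, w, wp, d, q, alpha, beta=1.3):
    """Penalized fitness f_k of eqs. (7)-(10) for a decoded individual."""
    total = 0.0
    for j in range(len(q)):
        cost = load = 0.0
        for i, di in enumerate(d):
            if primary[i] == j:                    # x_ij = 1
                cost += w[i][j] * alpha * di
                load += alpha * di
            if secondary[i] == j:                  # z_ij = 1
                cost += wp[i][j] * (1 - alpha) * di
                load += (1 - alpha) * di
        # penalty factor (10): greater than 1 only when capacity q_j is exceeded
        p = beta + (load - q[j]) / q[j] if load > q[j] else 1.0
        total += cost * p                          # f_kj = F_kj * P_kj, eq. (8)
    return total
```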


3.3 Modification
Besides penalizing infeasible individuals generated in the initial population or by the genetic operations (crossover and mutation), the algorithm can be further improved by a problem-specific heuristic operator. Here, we adopt a modification operation similar to a perturbation mutation. Let C_j denote the set of customers assigned to warehouse j in a chromosome. If the total demand of these customers exceeds the capacity of warehouse j, i.e.,

\sum_{i=1}^{m} \alpha d_i x_{ij} + \sum_{k=1}^{m} (1-\alpha) d_k z_{kj} > q_j,

then a single randomly selected customer i is reassigned from warehouse j to the next warehouse (in the cyclic order j+1, \ldots, r, 1, \ldots, j-1) that has sufficient remaining capacity, if one can be found. After reassignment, if x_{i,j+1} + z_{i,j+1} > 1, the secondary warehouse of customer i must be re-allocated. Fig. 3 illustrates this mutation process on an infeasible chromosome. The modification process not only improves the feasibility of individuals by reassigning customers from overloaded warehouses to less-utilized ones, but can also reduce shipping costs by reassigning customers to a warehouse with a lower shipping cost.
3.4 Genetic Algorithm Procedure
The steps of our proposed GA for the capacitated warehouse allocation problem with a primary and a secondary warehouse are as follows (a code sketch follows Step 6):
Step 1. Generate an initial population of N individuals (candidate solutions), where N is the population size. Each initial individual is generated by randomly assigning every customer to a warehouse; all integers in a chromosome lie between 1 and r and are generated with probability 1/r.

Fig.3 An Example of the Problem-Specific Heuristic Operator

Step 2. Generate a pool of child solutions by crossover and mutation operations on randomly selected parents with pre-determined rates (the crossover rate and the mutation rate). A simple one-point crossover operator is adopted, in which a crossover point p (1 < p < m) is selected randomly and the child solution is generated by combining the first p genes of one parent with the remaining m - p genes of the other parent, or vice versa with equal probability. An exchange mutation procedure follows the crossover operation: it exchanges the elements of two randomly selected genes of a randomly selected parent.
Step 3. Improve all individuals that are infeasible, in the sense of not satisfying the capacity constraints, according to the procedure in Subsection 3.3, and replace the infeasible ones.
Step 4. Decode all individuals to obtain the candidate solutions and their fitness values according to the procedure in Subsection 3.2.

Step 5. Select a new population of N individuals for the next evolutionary generation (iteration) by using the (μ + λ)-selection strategy.
Step 6. Repeat Steps 2-5 until a given number of generations is reached. The best candidate solution found so far is taken as the optimal or near-optimal solution of the problem.
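The following sketch ties Steps 1-6 together, reusing the `decode` and `fitness` sketches above. For brevity, the repair operator of Step 3 checks warehouse loads on primary demand only, a simplification of Subsection 3.3; the rates follow the parameter settings reported in Section 4.

```python
import random

def run_ga(w, wp, d, q, alpha, pop_size=100, gens=500,
           p_cx=0.75, p_mut=0.3, p_fix=0.7, seed=0):
    """Sketch of the GA of Steps 1-6 for the allocation problem."""
    rng = random.Random(seed)
    m, r = len(d), len(q)

    def evaluate(ch):
        pri, sec = decode(ch, wp)
        return fitness(pri, sec, w, wp, d, q, alpha)

    def primary_load(ch, j):
        return sum(alpha * d[i] for i in range(m) if ch[i] == j)

    def repair(ch):
        # Subsection 3.3: move one random customer off each overloaded
        # warehouse to the next one (cyclic order) with spare capacity.
        ch = ch[:]
        for j in range(r):
            if primary_load(ch, j) > q[j]:
                i = rng.choice([i for i in range(m) if ch[i] == j])
                for step in range(1, r):
                    jj = (j + step) % r
                    if primary_load(ch, jj) + alpha * d[i] <= q[jj]:
                        ch[i] = jj
                        break
        return ch

    # Step 1: random initial population, genes uniform over the r warehouses
    pop = [[rng.randrange(r) for _ in range(m)] for _ in range(pop_size)]
    for _ in range(gens):
        # Step 2: one-point crossover and exchange mutation
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(pop, 2)
            c = a[:]
            if rng.random() < p_cx:
                p = rng.randrange(1, m)
                c = a[:p] + b[p:]
            if rng.random() < p_mut:
                i, k = rng.sample(range(m), 2)
                c[i], c[k] = c[k], c[i]
            children.append(c)
        # Step 3: heuristic modification of (some) children
        children = [repair(c) if rng.random() < p_fix else c for c in children]
        # Steps 4-5: evaluate and apply (mu + lambda) selection
        pop = sorted(pop + children, key=evaluate)[:pop_size]
    # Step 6: return the best solution found
    return pop[0], evaluate(pop[0])
```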

4. Algorithm Applications
Actual data taken from Min and Melachrinoudis (1999) and Melachrinoudis and Min (2000) are used to test the proposed GA on this problem. To ensure the confidentiality of the data, the real firm facing this problem is referred to as Alpha. Alpha produces and distributes chain-link fences and related hardware items to a total of twenty-one customers (e.g., retailers including Home Depot and Lowes) across the U.S. To avoid overlapping distribution and duplicated delivery efforts with the existing distribution network in the Midwest, South and West, Alpha plans to move its current warehouses from Boston, Massachusetts to a new location in a Mid-Atlantic state. After initial screening, the seven candidate sites considered are: Baltimore in Maryland, Williamsport in Maryland, Wheeling in West Virginia, Pittsburgh in Pennsylvania, Erie in Pennsylvania, Harrisburg in Pennsylvania, and Boston in Massachusetts. Among these seven sites, a decision maker (logistics manager) can choose any number of warehouses. For illustrative purposes, we consider Baltimore in Maryland, Williamsport in Maryland, and Wheeling in West Virginia as the potential warehouses, denoted as warehouses 1, 2, and 3. All shipping cost data between these warehouses and the customers are summarized in Tab. 1.
Tab.1 First Year Unit Shipping Cost (in Dollars) and Demand (in Units)

| Customer i | Warehouse 1 | Warehouse 2 | Warehouse 3 | Demand |
|---|---|---|---|---|
| Wallingford | 2.9 | 3.2 | 3.5 | 113,644 |
| Ankeny | 3.9 | 4.0 | 4.3 | 25,360 |
| Posen | 3.5 | 3.6 | 3.5 | 82,507 |
| W. Chicago | 3.5 | 3.6 | 3.6 | 80,159 |
| Indianapolis | 3.3 | 3.4 | 3.0 | 75,274 |
| Louisville | 3.3 | 3.4 | 3.1 | 116,064 |
| Boston | 3.1 | 3.3 | 3.8 | 329,263 |
| Baltimore | 2.5 | 2.9 | 3.0 | 162,106 |
| Westland | 3.2 | 3.3 | 3.0 | 151,417 |
| Blaine | 4.0 | 4.1 | 4.6 | 40,833 |
| Charlotte | 3.1 | 3.3 | 3.4 | 97,758 |
| Auburn | 3.1 | 3.4 | 4.0 | 63,643 |
| Kenvil | 2.8 | 3.0 | 3.2 | 367,379 |
| Menands | 3.0 | 3.2 | 3.6 | 276,387 |
| Columbus | 3.1 | 3.2 | 2.6 | 85,180 |
| West Chester | 3.2 | 3.3 | 2.8 | 79,662 |
| Philadelphia | 2.7 | 3.0 | 3.1 | 122,560 |
| Pittsburgh | 2.9 | 3.0 | 2.4 | 106,198 |
| Nashville | 3.5 | 3.6 | 3.5 | 57,305 |
| Richmond | 2.7 | 3.0 | 3.2 | 119,524 |
| Milwaukee | 3.6 | 3.7 | 3.8 | 60,096 |

Generally, the evolutionary process of a GA balances exploration and exploitation in searching for a fit structure or optimal solution, and the settings of the crossover and mutation rates play an important role in that balance. After a large number of experiments, the parameters of the proposed GA were set as follows: population size = 100; maximum number of generations = 500; one-point crossover rate = 0.7-0.8; exchange mutation rate = 0.2-0.4; modification mutation rate = 0.6-0.8. As to the penalty coefficient β, a series of simulation experiments indicated that too small a value of β has little influence on infeasible solutions violating the capacity constraints; the best range of β turned out to be around 1.2-1.4.
First, assume that all customers' demands are satisfied 100% by their assigned warehouses, i.e., α = 1.0. Using the above GA approach, the optimal solution for this problem was obtained with probability 100% in all 20 trials. The total shipping cost is $7,980,344.00 and the optimal allocation for all customers is as follows.

If we instead assume that 90% of each customer's demand is supplied by its primary warehouse and the remaining 10% by its secondary warehouse, i.e., α = 0.9, then with the same approach as described above, the new optimal solution for this case is also obtained with probability 100% in all 20 trials. The total shipping cost is $8,021,597.00 and the optimal allocation of customers to a primary and a secondary warehouse is as follows:

Comparing the two results shows that salespersons could seek out a nearby warehouse and have a customer's demand shipped from an alternative warehouse when a stock out occurs at the local warehouse. Although the total shipping cost increases by about 0.5%, this customer allocation strategy guarantees that the warehouse service level reaches 100% (90% of demand is filled from a primary warehouse and 10% from a secondary warehouse). In fact, by varying the value of α, we can easily obtain different alternatives of the warehouse allocation with uncertain stock availability for customers.
Tab.2 The Results of Numerical Experiments

| Problem | Customers | Warehouses | Flexibility | Cost Increase Rate |
|---|---|---|---|---|
| 1 | 30 | 3 | 10% | 2.13% |
| 2 | 40 | 4 | 10% | 2.00% |
| 3 | 50 | 5 | 10% | 1.95% |
| 4 | 60 | 6 | 10% | 1.98% |
| 5 | 70 | 7 | 10% | 1.32% |
| 6 | 80 | 8 | 10% | 1.59% |
| 7 | 90 | 8 | 10% | 1.68% |
| 8 | 100 | 8 | 10% | 1.46% |
| 9 | 120 | 10 | 10% | 1.82% |
| 10 | 150 | 10 | 10% | 1.91% |

To further test the effectiveness of the proposed GA on this warehouse allocation problem, we generated ten larger-scale numerical examples (from 30 to 150 customers), with the unit shipping costs drawn uniformly at random from the range [20, 50] and the customers' demands drawn uniformly at random from the range [1000, 100000] (a generator sketch follows). Using the same GA and parameter settings as above, Tab. 2 summarizes the best results over 20 trials for each problem. In Tab. 2, flexibility is defined as the possibility that 10% of a customer's demand is filled by a secondary warehouse when its assigned warehouse is out of stock. The results show that warehouse allocation in this way can guarantee that 10% of customers' demand is filled under uncertain stock availability at only about a 2% increase in total shipping cost.
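A sketch of the random instance generator just described. The paper does not state how warehouse capacities were set, so sharing 120% of total demand equally among the warehouses is our assumption.

```python
import random

def random_instance(m, r, seed=0):
    """Random test instance: unit shipping costs ~ U[20, 50],
    customer demands ~ U[1000, 100000]; capacities are an assumption."""
    rng = random.Random(seed)
    w  = [[rng.uniform(20, 50) for _ in range(r)] for _ in range(m)]
    wp = [[rng.uniform(20, 50) for _ in range(r)] for _ in range(m)]
    d  = [rng.uniform(1000, 100000) for _ in range(m)]
    q  = [1.2 * sum(d) / r] * r          # assumed capacity rule
    return w, wp, d, q

# e.g. w, wp, d, q = random_instance(30, 3)   # problem 1 in Tab. 2
```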

5. Conclusion
The warehouse allocation problem is not simple in practice; many practical factors must be considered to give logistics managers a satisfactory allocation alternative. In this paper, we considered a warehouse allocation problem with uncertain demands or stock availability. Combined with the shipping cost, we dealt with this by allowing a primary warehouse and a secondary warehouse to fill the demands of all customers. The problem was formulated as an Integer Program, but we solved it by a genetic algorithm approach. The application to a real-world problem showed the effectiveness of the genetic algorithm approach on this problem. Despite the merits of the proposed algorithm, further research is needed to make the model more adaptable to practice. First, warehouses may have varying capacities instead of equal capacities. Second, the demands filled by secondary warehouses should be balanced as far as possible to avoid concentrating the fill-out on one or a few warehouses. Furthermore, future research should extend this warehouse allocation problem to various types of warehouses, such as private, public, and contract warehouses, and to different combinations of transportation carriers.
References

[1] Aikens C H. Facility location models for distribution planning. European Journal of Operational Research, 1985, 22: 263-279
[2] Cattrysse D, Van Wassenhove L N. A survey of algorithms for the generalized assignment problem. European Journal of Operational Research, 1992, 60: 260-272
[3] Current J R, Min H, Schilling D A. Multiobjective analysis of location decisions. European Journal of Operational Research, 1990, 49(12): 295-307
[4] Domschke W, Drexl A. An International Bibliography on Location and Layout Planning. Heidelberg: Springer, 1984
[5] Gen M, Cheng R. Genetic Algorithms and Engineering Optimization. New York: John Wiley & Sons, 2000
[6] Geoffrion A M, Morris J G, Webster S T. Distribution system design. In: Drezner Z, ed. Facility Location: A Survey of Applications and Methods. New York: Springer, 1995. 181-198
[7] Holland J H. Adaptation in Natural and Artificial Systems. Ann Arbor: University of Michigan Press, 1975
[8] Meshkat H, Ballou R H. Warehouse location with uncertain stock availability. Journal of Business Logistics, 1996, 17(2): 197-216
[9] Melachrinoudis E, Min H. Dynamic relocation and phase-out of a two-echelon, hybrid plant and warehousing facility: a multiple objective approach. European Journal of Operational Research, 2000, 123(1): 1-15
[10] Min H, Melachrinoudis E. The relocation of a hybrid manufacturing and distribution facility from supply chain perspectives: a case study. Omega, 1999, 27(1): 75-85
[11] Pirkul H. The uncapacitated facility location problem with primary and secondary facility requirements. IIE Transactions, 1989, 21(4): 337-348
[12] Wilson A G. Industrial location models 1: a review and an integrating framework. Environment and Planning A, 1986, 18: 175-205


Evolution of Manufacturing Systems: from Product Competitive Advantage towards Collaborative Value Creation
Yongjiang Shi
Cambridge University, The United Kingdom

Abstract Driven by the emerging requirements of business growth and technology development, the boundary of the manufacturing system has been extended from the factory towards various types of network relationships, and the missions of the manufacturing system have been transformed and re-defined. This paper, based upon research work in different industrial sectors over the last ten years, introduces an evolutionary picture of manufacturing systems from which four main types of manufacturing system can be identified and analysed in order to understand their system characteristics and the strategic drivers or intentions behind them. The paper suggests that the classical manufacturing strategy theory should be adapted into a contingency strategy in order to respond to the manufacturing evolutions and treat the different systems interdependently. It also suggests a new conceptual framework for further research on manufacturing evolution and contingency strategy. Key words Manufacturing, System Evolution, Network Behavior, Globalisation

1. Introduction
During the last two decades, the boundaries of the manufacturing system have been extended from the factory to the international manufacturing network (Shi and Gregory, 1998)[1], as well as to the supply network (Lamming et al., 2000)[2], the value network (Bovel and Martha, 2000)[3], and the global manufacturing virtual network (Li et al., 2000)[4]. This has been driven by intensified competition, fragmented markets, globalised collaboration, and faster technology innovation. What are the implications of this manufacturing evolution for practitioners and academics? Have the mission of the manufacturing system, its value creation mechanisms, and its business principles changed? What are the new relationships between effectiveness and efficiency in manufacturing industry in the new era of information, communication technology and knowledge? This paper seeks to answer these questions by identifying the evolution of the manufacturing system and some key challenges to the classical manufacturing strategy theory, especially from the perspectives of the networked manufacturing system and value-based manufacturing strategy. The analysis and discussion draw upon eight years of recent case studies conducted by the Centre for International Manufacturing of the University of Cambridge, involving a wide range of industrial sectors. The original purpose of the case studies was to focus on networked manufacturing systems from the perspectives of geographic expansion, vertical externalisation, or both, in order to understand the behaviour and design process of the new manufacturing systems. The frameworks introduced in the paper are preliminary, but they explore a general picture of the evolutionary development of corporate international and collaborative manufacturing systems. The paper provides an integrated understanding of current theories and practices from both international and inter-firm aspects, and seeks to identify the driving forces and development trends of the manufacturing evolutions.

2. Manufacturing System Evolution Matrix


The manufacturing system has evolved from the traditional input-output transformation model into various kinds of network-based relationships. During the last twenty years, multinational corporations (MNCs) have attempted to globalise their geographically dispersed factories by co-ordinating them into a synergetic network (Flaherty, 1986; Ferdows, 1997; Shi and Gregory, 1998)[5][6][1]. This transformation has changed basic manufacturing functions and effectiveness from an orientation towards product-based competitive advantages to an orientation towards network strategic capability development, which drives the manufacturing system beyond the factory wall and the strategy beyond the product focus.
The author's contact is the Centre for International Manufacturing, Cambridge University. E-mail: ys@eng.cam.ac.uk


Besides MNCs' international expansion, it has become more popular for all types of companies to downsize, to outsource their non-core business tasks, and to set up inter-firm collaborations (Lambert et al., 1998; Lamming et al., 2000; Brewer et al., 2001)[7][2][8]. This development has pushed the manufacturing system further into a new relationship beyond the traditional concept of the firm that owns and internally operates its factories. Currently, it is no longer a secret that, although a company may own only a very small portion of a supply chain, it can still strategically co-ordinate or integrate the whole supply chain to deliver a competitive product to its targeted market. It is equally interesting to notice the increasing observations of geographic clustering emerging worldwide (Piore and Sabel, 1984; Porter, 1998)[9][10]. The clusters actually form different supply networks, some of them internally self-sufficient in a region and others virtually integrated with other clusters. The two types of supply networks demonstrate that inter-firm collaborations have emerged as a new type of manufacturing system. Combining both developments, as Figure 1 illustrates, a new type of manufacturing network can be derived with the characteristics of international and inter-firm relationships. The new combination provides a new operational environment for the manufacturing system to access, optimise and operate its strategic resources. The global manufacturing virtual network (GMVN) was suggested to explore this new generation of manufacturing architecture (Li, Shi and Gregory, 2000; Shi and Gregory, 2002)[4][11]. Much other research on global outsourcing and partnership also seeks to develop systems with similar architectures and strategic capabilities, pursuing higher value and innovation (Normann and Ramírez, 1993; Parolini, 1999; Bovel and Martha, 2000)[12][13][3].
[Figure 1: Manufacturing System Evolution Matrix and Key Drivers. The matrix is spanned by a Geographic Extension axis and an Ownership Extension axis. Its cells range from the single factory or plant, through international manufacturing networks (geographic extension) and supply chains or supply networks with outsourcing, make-or-buy decisions, virtual enterprises, extended companies and agile manufacturing (ownership extension), to the global manufacturing virtual network (GMVN) and value constellation, where both extensions are synthesised. Key driving forces of ownership extension: intensified competition forcing every player to focus on its core competence and narrow down its business, together with technology innovation. Key driving forces of geographic extension: moving beyond the factory-focused manufacturing system in pursuit of strategic markets.]

Why does a manufacturing system have to evolve into such a complex relationship? The company has no other choice. In many circumstances, even traditional product-based competitive advantages that achieve the order-winning criteria cannot satisfy new corporate demands. For example, a case study demonstrates that a very successful, order-winning UK aerospace engine company was deeply shocked when it compared its profitability with that of its American competitors and faced huge pressure from its shareholders. The reasons for such unpredicted shocks were not only the cruel combination of product success and financial failure, but also a huge challenge to the company's traditional ideas about the excellence of its product design, technology, engineering, and production. The company had indeed been very successful in gaining more orders and a greater market share than any competitor in the world because of its modularised engine platform, advanced manufacturing technologies, and the best engine performance. However, while the company enjoyed the advantage of more orders and production activity, its competitors had begun to change the rules of the game. The UK company's proud engines were recognised as a commodity

and integrated under its competitors' solution packages. The competitor companies were fully engaged in a new service business, providing total power solutions to airline companies. They had also outsourced a large portion of their manufacturing to Far East suppliers, and redefined the manufacturing system as a value creation system rather than a product-oriented production or transformation system. This was when the UK company realised that the traditionally best manufacturing capabilities were no longer good enough for future competition and value creation, and that the manufacturing system and its strategy would have to be extended to match the changed rules of market competition. The manufacturing system therefore has to be capable not only of providing competitive products but also of finding a proper position in an innovative solution for final customers. From this case study and from the trends of the manufacturing system towards network relationships (Figure 1), the following lessons can be learned:
(1) The traditional manufacturing strategy focusing on a product and its effective factory might not be enough, especially for creating higher business value.
(2) The manufacturing system has been extended into a new operational space (international and inter-firm relationships), mainly because of the new competition game and strategy.
(3) Changes to the manufacturing system boundary imply that the manufacturing strategy also needs to change in terms of its contents and process.
Based on the basic directions in which the manufacturing system is heading in the matrix (Fig. 1), the following sections review these evolutionary directions and analyse their main impacts on the missions of the manufacturing system.

3. Implications to Production and Operations Management (P/OM)


Each of the above four types of manufacturing systems has its own characteristics. They have a deep impact on the further understanding of manufacturing systems and on the further development of the academic discipline of production and operations management (P/OM). They imply that manufacturing strategy should move to a more contingency-based model rather than a universal recipe covering all scenarios. The above discussion also suggests the following general conclusions:
- When manufacturing system contexts change, the system boundaries have to be adapted to reorganise new functions in order to satisfy external requirements.
- When manufacturing boundaries have changed, the system contents or building blocks have also changed, which implies new architectures, dynamic mechanisms, and attributes or behaviours of the system.
- When external requirements and internal attributes have changed, the missions or aims of the system should be re-defined or re-explored; and if a system undergoes such radical changes, its design process or strategy formulation process should also evolve.
To face the emerging challenges to manufacturing strategy, there seem to be two different types of policies in operations management research. One is to develop a more generic strategy seeking to cover a wider range of contents (Slack and Lewis, 2002)[14]. The other is to develop more specific strategies seeking to fit different types of systems under a contingency plan, which is the position supported by this paper. This section discusses the differences between the two policies and suggests that the boundaries of the manufacturing system do need to be extended, and that new manufacturing strategies for different systems do need to be researched and developed.

3.1 Theoretical Limitation of the Manufacturing Strategy in Corporate Strategy


It is interesting to combine Skinner's thoughts on manufacturing effectiveness (1969)[15] with Ansoff's corporate strategy model (1965)[16]. Ansoff's corporate growth model considers strategic development through the relationship between products and markets. The classical manufacturing strategy theory mainly deals with business growth through the penetration of existing products into existing markets, one fourth or one half of the corporate strategy in Ansoff's matrix. This indicates that it would be too small a business space for manufacturing strategy to focus only on the existing product, the existing market, or even both together. In the Ansoff model, manufacturing strategy can actually play a very critical role in every one of the four grids. In new market development, the manufacturing system can provide many new potentials from the perspectives of international dispersion, especially by globalising existing products, and of function similarity, mainly by exploiting a product's core technology or function to penetrate related new market segments. In new product development, manufacturing can make even wider contributions, from global product, platform or modularity development, widening product ranges and strengthening the economies of scope, towards whole supply/value chain operations and the provision of total solutions to existing customers. Even in the diversification of business development, manufacturing strategy can strongly influence corporate strategy and business growth. This power can come not only from manufacturing capability, internal business mechanisms and their replication in other business units, which are the most critical implications of the resource-based view in strategic management, but also from the identification of new business opportunities by building up, scanning and positioning in a widened value network based on the demands of final customers, which might mean a completely new territory for the firm that still has a strong relationship with its current business (Bovet and Martha, 2000)[3]. Therefore, the classical manufacturing strategy will continue to play its critical role in many manufacturing companies, helping them sharpen their products' competitive edge and gain more market share. But for more and more companies seeking higher value creation and innovative products, services or solutions for new customers, the classical manufacturing strategy is not aggressive enough to generate new ideas, capture potential opportunities, move to a better position in a supply/value chain, and cultivate the manufacturing system itself with new strategic and evolutionary capabilities. From this perspective, the paper suggests that scenario-based manufacturing strategies will be more effective for extending the manufacturing function and designing a more effective manufacturing system than the classical or a universal model.
3.2 A Contingency Manufacturing Strategy for Different Manufacturing Systems
Based on Ansoff's business development model and the analysis of the four types of manufacturing systems in Fig. 1, it is worthwhile to link the two and develop a contingency manufacturing strategy, structured from both the corporate strategy and the manufacturing system perspectives. As corporate strategy covers much wider issues than manufacturing systems can independently solve, it might be more effective to build the contingency manufacturing strategy on the characteristics of the manufacturing systems:
- International manufacturing networks can mainly contribute to new market development and partially support new product development in the corporate strategy development matrix;
- Supply networks (inter-firm alliances) can significantly contribute to new product development, especially in defining the company's position in the supply network, and can support diversification;
- Global manufacturing virtual (supply) networks can mainly contribute to the value network or constellation in order to develop hidden business opportunities and run globalised collaborations.
This contingency manufacturing strategy pushes the manufacturing system not only towards network relationships but also towards the business level, linking directly with corporate strategy. There could be four types of manufacturing strategies dealing with different levels of manufacturing systems: from the factory/plant level, to the internationally dispersed factory network, to the inter-firm factory network, and further to the GMVN. It is worth realising that, as the network relationship becomes more complex and a dominating feature, the focal company's role in a network is reduced and the strategy becomes more emergent than planned (Mintzberg et al., 1997)[17].
3.3 How Far the Boundary Should Be Pushed: Effectiveness of Manufacturing System
It is arguable that the extension of the manufacturing boundary confuses the decision hierarchy between the manufacturing function, business, and corporate levels. Many gaps are emerging between these levels as manufacturing environments change dramatically. But without serious debate from both disciplinary and inter-disciplinary perspectives, academic opportunism will not be able to establish a fundamental transformation between paradigms. On the other hand, any ignorance of the emerging challenges will cause further

fragmentation of the body of knowledge, such as a buzz-word brand replacing deep understanding of a phenomenon, damage theoretical integration, and eventually fail to deliver synthesised and implementable knowledge. It is important to note that this paper has no intention of turning manufacturing strategy into whole business strategy covering every business function. Other functions, such as marketing and even new product introduction (NPI), should still play their own roles in the business strategy arena. But the manufacturing system and its strategy, as they coordinate resources to satisfy external requirements, have to, or should, open a wider window engaging with new opportunities for higher value and growth. When Skinner first raised the question of manufacturing effectiveness and argued for the differentiation between efficiency and effectiveness, it was very similar to the challenge of how far the manufacturing system boundary should be extended. Skinner pushed it to the product, the outcome of the manufacturing system, which fundamentally changed people's mindset about manufacturing and operations management. It is very logical to define a system's mission first, then design it, and later operate it. Effectiveness should lead efficiency: do the right thing first, and then do it rightly (Skinner, 1985)[18]. The current debate on the boundaries of the manufacturing system and strategy is still about system effectiveness. Effectiveness represented by the product is no longer enough, mainly because even a very successful product with many orders and a large market share may still fail to generate higher value, as the previous case demonstrates. More fundamentally, the core challenge may be not only the profitability issue but also a life-warning signal indicating that the business course, or the rule of the game, has changed. If a company fails to sense the change and still focuses on its product order-winning criteria, its innovation power will be damaged and its sustainability jeopardised. Especially in the new knowledge economy, manufacturing effectiveness and its determinants have fundamentally evolved from physical product-based competition towards new competition on the identification of potential opportunities and the flexible organisation of global resources. One of the key performance criteria is the value to the stakeholders. The manufacturing system, as the real power engine of manufacturing companies, has to upgrade its mission or effectiveness from order-winning towards value creation as well as value appropriation. This new evolution of the effectiveness of the manufacturing system implies a new definition of manufacturing that represents the full cycle of the business process, from understanding markets through product and process design to operations and distribution, taking into account economic, financial and people issues. Manufacturing so defined is a cycle that starts from a market and ends at it. Nowadays a company rarely owns this full cycle. Manufacturing strategy therefore should decide its position, its collaborations and, more importantly, the dynamics of adapting itself to its environments.
3.4 Relationship between Positioning and Capability in Strategy Development
One of the new contributions of Slack and Lewis's most recent book is their strategic reconciliation, a balanced view between market requirements (or strategic positioning) and operations resources (capability development) (2002)[14]. Their process-oriented strategy demonstrates the strategy formation mechanism. The reconciliation not only resolves a long-term debate between Porter's positioning school (1985)[19] and the resource-based view (Penrose, 1959; Teece et al., 1997)[20][21] in the strategic management area, but also provides an implementable procedure. This is a serious contribution from operations management to strategic management, which illustrates the richness of manufacturing and operations management from both practical and theoretical aspects.
3.5 New Frameworks for Research and Manufacturing Strategy
The new challenges to manufacturing systems and strategy create a new context for manufacturing strategy research. This can be demonstrated in a three-dimensional space that provides a new vision highlighting three key dimensions: supply/value chain, internationalisation, and externalisation or collaboration. The generic three-dimensional strategic environment indicates the key decision areas, and the future manufacturing system should be built with the following four basic building blocks:
- Supply and/or Value Creation Network: the manufacturing system and its tasks must be defined along the supply/demand or value-creation networks by proposition, configuration and optimisation for higher value and competitive advantage;
- Strategic Collaboration (Externalisation): a spectrum of collaboration including intra-firm coordination and inter-firm co-operation has to be evaluated and decided. Within inter-firm collaborations there is a wide span of choices, from strategic alliances for longer-term commitment, towards virtual communities or arm's-length trading relationships for more flexibility;
- Manufacturing Internationalisation: the manufacturing system is no longer a single-site factory. It has to decide not only its geographic expansion or dispersion but also its internationalisation evolution process and cross-cultural integration;
- Strategic Synthesis Process: the above three dimensions cannot work independently in the current global competitive environment. It is therefore essential to synthesise them into an integrated manufacturing system with a systematic strategy process and the most appropriate technology, including a cyber platform.
The above scenarios create four types of manufacturing systems: from the factory, towards the international manufacturing network, the inter-firm supply network, and the global manufacturing virtual network (GMVN). The process of the contingency manufacturing strategy should therefore depend upon each type of manufacturing system and have its own contexts and contents of decision areas. There is no doubt that more research is needed to understand, develop and validate them.

4. Conclusion

This paper is based on eight years of recent case observations undertaken in many multinational corporations in the aerospace, automotive, engineering, garment and electronics industries. The original objective was to examine the evolutionary process of the manufacturing system, beginning with a factory-based system and moving towards a globally collaborative inter-firm network, and to explain why manufacturing companies evolve into relationships that are less and less under their own control. The questions raised by the research are beyond what the classical or existing theory of manufacturing strategy can answer. As manufacturing system contexts change, manufacturing systems with new boundaries, missions, architectures, mechanisms and capabilities have to be adapted to reorganise new functions in order to satisfy external requirements. As the manufacturing system evolves into quite different relationship-oriented networks, a universal manufacturing strategy process is not specific enough to fully support strategic planning and system design; a contingency manufacturing strategy is therefore demanded. In manufacturing industry, taking advantage of and leveraging other companies' existing resources is becoming more important than owning those resources, which implies that various types of inter-firm collaboration are the preferred architecture. Within the collaboration, understanding the dynamics and keeping a balance are more and more critical across the spectrum of collaboration, from the corporate hierarchy at one end, through international strategic alliances, networked virtual enterprises and geographic clustering, to the free market at the other end. These new challenges demand new solutions from industry. They require not only a new architecture of the manufacturing system based more on inter-firm relationships, but also a new strategy process with a context-based contingency framework. To achieve the new tasks, the new conceptual framework with three main dimensions for the networked manufacturing system and the new contingency manufacturing strategy process are needed in order to guide further research, integrate different disciplinary knowledge, and eventually inform manufacturing business practice.
References

[1] Shi, Y. and Gregory, M. (1998). International manufacturing networks to develop global competitive capabilities. Journal of Operations Management, Vol. 16
[2] Lamming, R. et al. (2000). An initial classification of supply networks. International Journal of Operations and Production Management, Vol. 20
[3] Bovel, D. and Martha, J. (2000). From supply chain to value net. Journal of Business Strategy, July-August
[4] Li, X., Shi, Y. and Gregory, M. J. (2000). Global manufacturing virtual network (GMVN) and its position in the spectrum of strategic alliance. In: Dierdonck and Vereecke (eds), Operations Management: Crossing Borders and Boundaries: the Changing Role of Operations, EurOMA 7th International Annual Conference, Ghent, Belgium, June 4-7, 2000, pp. 330-337
[5] Flaherty, M. T. (1986). Coordinating international manufacturing and technology. In: Porter, M. E. (ed), Competition in Global Industries. Harvard Business School Press
[6] Ferdows, K. (1997). Making the most of foreign factories. Harvard Business Review, March-April
[7] Lambert, D. M., Cooper, M. C. and Pagh, J. D. (1998). Supply chain management: implementation issues and research opportunities. International Journal of Logistics Management, 9(2), 1-19
[8] Brewer, A. M. et al. (eds.) (2001). Handbook of Logistics and Supply-Chain Management. Pergamon
[9] Piore, M. and Sabel, C. (1984). The Second Industrial Divide: Possibilities for Prosperity. Basic Books
[10] Porter, M. (1998). Clusters and the new economics of competition. Harvard Business Review, November-December
[11] Shi, Y. and Gregory, M. (2002). Global manufacturing virtual network (GMVN): its dynamic position in the spectrum of manufacturing collaborations. In: Franke, U. J. (ed), Managing Virtual Web Organizations in the 21st Century: Issues and Challenges. Idea Group Publishing
[12] Normann, R. and Ramírez, R. (1993). From value chain to value constellation: designing interactive strategy. Harvard Business Review, July-August
[13] Parolini, C. (1999). The Value Net: A Tool for Competitive Strategy. John Wiley
[14] Slack, N. and Lewis, M. (2002). Operations Strategy. Pearson Education Limited
[15] Skinner, W. (1969). Manufacturing: missing link in corporate strategy. Harvard Business Review, May-June
[16] Ansoff, H. I. (1965). Corporate Strategy. McGraw-Hill
[17] Mintzberg, H. et al. (1997). Strategy Safari. Free Press
[18] Skinner, W. (1985). Manufacturing: The Formidable Competitive Weapon. John Wiley & Sons
[19] Porter, M. (1985). Competitive Advantage: Creating and Sustaining Superior Performance. Free Press
[20] Penrose, E. (1959). The Theory of the Growth of the Firm. Oxford: Oxford University Press
[21] Teece, D., Pisano, G. and Shuen, A. (1997). Dynamic capabilities and strategic management. Strategic Management Journal, 18(7), 509-533


SECTION THREE
DECISION THEORY AND APPLICATION



Efficiency Improvement with Minimum Amelioration


Zha Yong, Liang Liang
School of Management, University of Science and Technology of China, 96 Jinzhai Road, Hefei, Anhui, P.R.China zhabeer@ustc.edu.cn, lliang@ustc.edu.cn

Abstract Decision makers are keen to find a practical and feasible way to improve the efficiency of their firms. Most current DEA research focuses on efficiency measurement, which cannot satisfy this desire. The present study develops an approach to deal with the issue. Differing from the model proposed by Gonzalez (2001)[21], this article provides a modified model that re-allocates the inputs and outputs of inefficient DMUs with minimum amelioration, taking into account the preferences of decision makers and other factors that affect the learning process. It is shown that Gonzalez's model is a special case of the modified model. Further, a heuristic algorithm is proposed to solve the model. Key words DEA, Efficiency improvement, Minimum amelioration, Heuristic algorithm

1 Introduction
Decision makers are keen to seek a practical and feasible way to improve the efficiency of their firms, as illustrated by Edvardsen (2003)[1] and Zheng Jinghai (2000)[2]. Some studies devote their efforts to efficiency improvement within the framework of principal-agent contract theory (e.g. Leibenstein, 1996)[3], while others consider technical inefficiency the result of a lack of knowledge or managerial ability (e.g. Farrell, 1957)[4]. Correspondingly, efficiency improvement may be achieved through incentives or learning processes under various assumptions (e.g. Bogetoft, 1994)[5]. Alternatively, data envelopment analysis (DEA), introduced by Charnes et al.[6], has become popular as a non-parametric model for measuring the efficiency of a decision making unit (DMU) relative to an empirical production possibility boundary. Traditional DEA methods (e.g. Charnes et al., 1978; Banker et al., 1984; Brockett et al., 1997; Cooper et al., 2000)[6],[7],[8],[9] readily produce a range of efficiency scores reflecting different sources of inefficiency, together with reference sets of observed DMUs, which give a clear glimpse of how decision makers can direct their efforts to improve the efficiency of their DMUs. Moreover, much current DEA research concentrates on efficiency measurement (e.g. Bogetoft and Hougaard, 1999; Coelli, 1998; Dervaus et al., 1998; Frei and Harker, 1999)[10],[11],[12],[13], reaching the efficient frontier in a particular way and offering decision makers theoretical guidance on the production process, for instance through distance functions (Shephard, 1953; Fare and Grosskopf, 2000)[14],[15], gradient methods (Peng, 1997; Wang and Wang, 2000)[16],[17], deviation minimization (Wu, 2004)[18] and multi-objective projection (Guan and Ma, 2003)[19]. Unlike previous research, which was concerned more with measuring inefficiency, several researchers have shown great interest in searching for practical benchmarks and best practices for inefficient DMUs to learn from. This idea has been applied in various areas, for instance the banking system (Schaffnit et al., 1997)[20] and Chinese state enterprises (Zheng et al., 2000)[2]. Gonzalez (2001)[21] described a most relevant benchmark for an inefficient DMU to imitate. He argued that the more similar an inefficient firm's inputs are to those of an efficient firm, the shorter the path it must travel to the efficient frontier. A concept of input-specific contractions was brought forward, and the sum of all input contractions coincides with the smallest input-specific contraction under the assumption of convexity of the boundary set. Gonzalez (2004)[22] also focuses on the knowledge component and proposes learning strategies based on the similarity between the inefficient firm and the benchmark firm. However, the technical and managerial circumstances of the benchmark firm cannot be imitated entirely, which may cause the inefficient firm to fail to become efficient. This paper is characterized by the following aspects:
1. While Gonzalez's model considers only the total change of inputs, this paper pays attention to the total change of both inputs and outputs, which provides decision makers with more appropriate and practical information for planning further improvement.

2. A generalized model is provided to depict the problem of efficiency improvement with minimum amelioration. It pays more attention to decision makers' preferences and to the costs of regulating current inputs and outputs concurrently, which can be regarded as the selection of weights in the objective function. As a result, Gonzalez's model is a special case of the model proposed in this paper.
3. The computational effort of solving the model in this paper is lower than that of Gonzalez's model when the total number of inputs used and outputs produced is larger than 3. Since DEA is particularly strong in evaluating efficiency with multiple inputs and multiple outputs, this computational advantage is significant.
The rest of the paper is organized as follows: Section 2 describes a generalized model of minimum amelioration for inefficient DMUs, together with a simplified model that omits decision makers' preferences and the costs of the learning process. A heuristic algorithm for solving the efficiency amelioration model is presented in Section 3. Section 4 compares the model provided in this paper with Gonzalez's model and shows where the new model advances. Conclusions are drawn in the last section.

2 Model of efficiency improvement with minimum amelioration


2.1 Model description
Unlike conventional DEA models, which focus on efficiency measurement, decision makers prefer to improve efficiency with the least possible change to the current levels of inputs and outputs. The model of efficiency improvement with minimum amelioration for an inefficient DMU_0 can be stated as follows:

\min \sum_{i=1}^{m} P_i^I \theta_i + \sum_{r=1}^{s} P_r^O \delta_r

s.t. \sum_{j=1}^{n} \lambda_j x_{ij} = x_{i0} - \theta_i, \quad i = 1,2,\ldots,m

\sum_{j=1}^{n} \lambda_j y_{rj} = y_{r0} + \delta_r, \quad r = 1,2,\ldots,s    (1)

\lambda_j, \delta_r, \theta_i \ge 0, \quad \text{where } (x_{i0} - \theta_i,\ y_{r0} + \delta_r) \text{ is efficient}
The significance of model (1) is that the total amount of the inputs and outputs re-allocated in the process of projecting the inefficient DMU onto the efficient boundary is minimal. P^I and P^O reflect the preferences of the decision maker, or any other factors that may enter the decision procedure, for instance changing costs. In order to guarantee that the new DMU, denoted by the combination (x_{i0} - \theta_{i0},\ y_{r0} + \delta_{r0}), is efficient, the following transformation is provided:

\min \sum_{i=1}^{m} P_i^I \theta_{i0} + \sum_{r=1}^{s} P_r^O \delta_{r0}

s.t. \sum_{r=1}^{s} u_r (y_{r0} + \delta_{r0}) = 1

\sum_{i=1}^{m} w_i (x_{i0} - \theta_{i0}) = 1    (2)

\sum_{i=1}^{m} w_i x_{ij} - \sum_{r=1}^{s} u_r y_{rj} \ge 0, \quad j = 1,2,\ldots,n

x_{i0} \ge \theta_{i0} \ge 0, \quad i = 1,2,\ldots,m

\delta_{r0} \ge 0, \quad r = 1,\ldots,s

u_r, w_i \ge \varepsilon > 0, \quad r = 1,2,\ldots,s;\ i = 1,2,\ldots,m
Now we turn to the solution of model (2), ignoring the weights P^I and P^O, which can be decided by the decision maker. A simplified model is proposed:

\min \sum_{r=1}^{s} \delta_{r0} + \sum_{i=1}^{m} \theta_{i0}

s.t. \sum_{r=1}^{s} u_r (y_{r0} + \delta_{r0}) = 1

\sum_{i=1}^{m} w_i (x_{i0} - \theta_{i0}) = 1    (3)

\sum_{i=1}^{m} w_i x_{ij} - \sum_{r=1}^{s} u_r y_{rj} \ge 0, \quad j = 1,2,\ldots,n

x_{i0} \ge \theta_{i0} \ge 0, \quad i = 1,2,\ldots,m

\delta_{r0} \ge 0, \quad r = 1,\ldots,s

u_r, w_i \ge \varepsilon > 0, \quad r = 1,2,\ldots,s;\ i = 1,2,\ldots,m
If DMU_0 is inefficient, the optimal value of model (3) is not zero (otherwise, DMU_0 is DEA (weakly) efficient). The following proposition holds for model (3).

Proposition 1: \sum_{r=1}^{s} \delta_{r0} = 0 or \sum_{i=1}^{m} \theta_{i0} = 0 when model (3) reaches its optimum.

Proof: Let \sum_{r=1}^{s} u_r \delta_{r0} = \delta_0 and \sum_{i=1}^{m} w_i \theta_{i0} = \theta_0. Model (3) can then be converted to

\min \delta_0 + \theta_0

s.t. \sum_{r=1}^{s} u_r y_{r0} + \delta_0 = 1

\sum_{i=1}^{m} w_i x_{i0} - \theta_0 = 1    (4)

\sum_{i=1}^{m} w_i x_{ij} - \sum_{r=1}^{s} u_r y_{rj} \ge 0, \quad j = 1,2,\ldots,n

u_r, w_i, \delta_0, \theta_0 \ge 0, \quad r = 1,2,\ldots,s;\ i = 1,2,\ldots,m

(A) Model (4) is the same as the following model:
\min (1 + \theta_0)\left(1 - \frac{1 - \delta_0}{1 + \theta_0}\right)

s.t. \frac{1}{1 + \theta_0} \sum_{r=1}^{s} u_r y_{r0} = \frac{1 - \delta_0}{1 + \theta_0}

\frac{1}{1 + \theta_0} \sum_{i=1}^{m} w_i x_{i0} = 1    (5)

\frac{1}{1 + \theta_0} \left( \sum_{i=1}^{m} w_i x_{ij} - \sum_{r=1}^{s} u_r y_{rj} \right) \ge 0, \quad j = 1,2,\ldots,n

\delta_0, \theta_0 \ge 0;\ u_r, w_i \ge 0, \quad r = 1,2,\ldots,s;\ i = 1,2,\ldots,m
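A one-line check (our addition) that the objective of model (5) indeed coincides with that of model (4):

(1+\theta_0)\left(1-\frac{1-\delta_0}{1+\theta_0}\right) = (1+\theta_0)-(1-\delta_0) = \delta_0+\theta_0.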
Let u_r' = \frac{1}{1+\theta_0} u_r,\ w_i' = \frac{1}{1+\theta_0} w_i,\ r = 1,2,\ldots,s;\ i = 1,2,\ldots,m, and \delta_0' = \frac{1-\delta_0}{1+\theta_0}. Model (5) is equivalent to

\min (1 + \theta_0)(1 - \delta_0')

s.t. \sum_{r=1}^{s} u_r' y_{r0} = \delta_0'

\sum_{i=1}^{m} w_i' x_{i0} = 1    (6)

\sum_{i=1}^{m} w_i' x_{ij} - \sum_{r=1}^{s} u_r' y_{rj} \ge 0, \quad j = 1,2,\ldots,n

\theta_0, \delta_0' \ge 0;\ u_r', w_i' \ge 0, \quad r = 1,2,\ldots,s;\ i = 1,2,\ldots,m

Note that \theta_0, one of the components of the objective of model (6), is bounded below by zero and does not appear in the constraints of model (6). Thus \theta_0 equals zero when the objective function is minimized, which results in \sum_{i=1}^{m} w_i \theta_{i0} = 0, and hence \theta_{i0} = 0, i = 1,\ldots,m, under the assumption that w_i > 0, i = 1,\ldots,m. Moreover, the coefficients w_i, which are the only difference between models (3) and (4) when \theta_{i0} = 0, i = 1,\ldots,m, have no effect on the objective function, and the objectives of the two models then depend only on the regulation of outputs. In other words, the optimal value is reached with \sum_{i=1}^{m} \theta_{i0} = 0, which proves the independence between the objective function and the regulation of inputs.

(B) Obviously \delta_0 \le 1 in model (4); moreover, \delta_0 \ne 1, because the objective of model (3) is to minimize the objective function. Model (4) is equal to

Max (1 0 )(1

1+ 0 ) 10

s.t.

1 10 1 10
m

u
r =1 i

yr0 = 1 = 1+ 0 10
(7)

w x
i =1

i0

m s 1 ( wi xij u r y rj ) 0 1 0 i =1 r =1

j = 1,2,..., n

0 , 0 0; u r , wi 0; r = 1,2,..., s; i = 1,2,..., m
Let u r =
''

1 + 0 1 1 u r , wi'' = wi , r = 1,2,..., s; i = 1,2,..., m , 0 = 1 0 10 10

Model (7) is equivalent to 328

Max (1 0 )(1 0 )

s.t.

u
r =1 m i =1

'' r

yr0 = 1 = 0 j = 1,2,..., n

(8)

w x
m s i =1 r =1

'' i i0

wi'' xij u r'' y rj 0


Similarly, we can get the result that

0 , 0 0; u r'' , wi'' 0; r = 1,2,..., s; i = 1,2,..., m

r =1

r0

= 0 , which gives a proof to the independence between objective

function and outputs regulation. Combining part (A) and (B), proposition 1 is proofed. Proposition 1 shows that, either contract current level of the inputs or expand current level of the outputs, is the alternative way to transform inefficient DMUs into efficiency with the exertion of minimum amelioration, while re-allocating the inputs and outputs concurrently is excluded. 2.2 Re-distribution of inputs and outputs * * * * Supposing r , i ; r = 1,, s, i = 1,, m , 0 , 0 represent the unique optimum amelioration of model (4), a linear re-distribution model is provided to dispatch re-distribution model is as follows
* 0 , 0* into each input and output. The output-oriented

Min
s

r =1 r0

r0

s.t.


r =1
* r

* = 0

(9)

r 0 0, r = 1,2,..., s
When re-allocating the inputs of inefficient DMU, an input-oriented re-distribution model is

Min s.t.

i =1

i0

i =1

* i

i 0 = 0*

(10)

xi 0 i 0 0, i = 1,2,..., m
* 0 must be re-distributed in the special output that coincides with the largest r* . If the * * largest r is not unique, 0 can be dispatched freely among the corresponding outputs, which doesnt affect

In model (9),

the optimum amelioration of inefficient DMUs. Moreover, it provides a special space for decision makers to adjust the resources of inputs and outputs as their preferences. A similar analysis is applied in considering redistributing inputs corresponding to model (10).

3 A heuristic algorithm
Obviously, model (3) is a non-linear programming, and can not be solved directly. Comparing model (3) with model (4), (9), (10), we conclude that the optimal solutions of model (4), (9), (10) consist of a feasible solution of 329

model (3), which is also an approximate optimal solution of model (3). With a deep consideration of this point, a heuristic algorithm is suggested * * * * Step 1: solve model (4), and we get the optimal value r , i ; r = 1,, s, i = 1,, m , 0 , 0 ; Step 2: solve model (9) and (10) according to the optimum value of Step 1, and we get the optimal solution:

, r = 1,, s; i*0 , i = 1,, m


* r0

Step 3: compare the optimal solutions of model (9) and (10) and select the less of the two solutions as the approximate optimal solution of model (3).

4 Model comparison of efficiency improvement


4.1 projection comparison In order to make clear the difference among various models of efficiency improvement, we follow a simple data set in table 1, where each DMU use two inputs and produce one output.
Table 1: data DMU A B C D E F

x1
1 2 4 6 8 2.8

x2
5 3 1 1 2 5

y
1 1 1 1 1 1

We solve model (3), (4) and Gonzalez model, the results are illustrated in Fig 1. A, B, C and D are efficient while E, F are inefficient. (E1, E2, E3), (F1F2F3) represent the reference points of E and F onto the best practice frontier according to model (3), (4) and Gonzalez model respectively. Gonzalez model prefers to ameliorate a special input whose level is the maximum of current levels for the reason that it chooses a weighted way as

x
i =1

i0
i0

, While model (3) can alter the weights with regard to the preference of decision makers and any

other factors that influence the improvement process, and model (4) shows a simplified instance of this method.

x2
6 5 4 3 2 1

F1 A F2 B E2 E3 1 2

F3 E E1 C 3 4 5 D 6 7 8

x1

Fig 1: projection comparison of the three models

330

4.2 calculation amount comparison Gonzalez model is optimized by calculating the contraction in each input-specific input and select the smallest one, while model (3) is approximately optimized by the heuristic algorithm which combines the optimal values of model (4), (9), (10). In order to eliminate the difference between Gonzalezs model and model (3), an extension is provided in the former one by considering both inputs contraction and outputs expansion in Gonzalez model. Now consider n observed DMUs, with s outputs produced by m inputs, the calculation amount is equal to m + s and 3 with respect to an inefficient DMU by Gonzalez model and model (3) respectively, moreover, the total calculation amount of the inefficient DMUs is equivalent to n0 (m + s ) ( n0 is the number of inefficient DMUs) and

3 n0 . For instance, the data set in table 1 includes 6 DMUs using 2 inputs to produce an output. The calculation
amount is equal to 3 times for each inefficient DMU E and F and the total calculation amount is equal to 6 according to Gonzalez model, which is the same as model (3) in this paper. If n0 = 5, m = 6, s = 4 , the calculation amount is equal to 10 and 3 times, while the total calculation amount is 50 and 15 respectively, which can show a great advantage of the heuristic algorithm of model (3) over Gonzalez model. That is to say, the (total) calculation amount is growing as a great speed with the growth of the number of inputs and outputs.

5 Conclusion
In this paper, we proposed a generalized DEA model focusing on efficiency improvement to satisfy the desire of decision makers, who are not willing to improve technical efficiency of their firms according to traditional DEA models. Unlike current contexts, we care more about various factors which show a relative influence in learning process and can not be ignored, such as preference of decision makers, re-allocation cost of inputs and outputs, etc. We study a simplified model instead of the generalized DEA model, ignoring the weights in objective function, and suggest a heuristic algorithm to find out an approximate optimal solution for efficiency improvement. We also make the following intriguing observation about the difference among various DEA model based on a numerical study: the calculation amount using the heuristic algorithm is much lower than the one using Gonzalez model, which is another advantage of the paper. It would be an interesting task for future research to investigate efficiency improvement further. Acknowledgements This work was supported by the National Natural Science Foundation of China (Grant No. 70525001).
References Edvardsen D F, Forsund F R. International benchmarking of electricity distribution utilities. Resource and Energy Economics, 2003, 25, 353-371. [2] Zheng Jinghai, Liu Xiaxuan and Bigsten A. Efficiency, technical progress and best practice in Chinese Stage Enterprises. Working papers in Economics no 30, 2000. [3] Leibenstein H. Allocative efficiency versue X-efficiency. American Economic Review, 1966, 56, 392-415. [4] Farrell M J. The measurement of production efficiency. Journal of the Royal Statistics Society A, 1957, 120, 253-281. [5] Bogetoft P. Incentive efficient production frontiers: An agency perspective on DEA. Management Science, 1994, 40(8), 959-968. [6] Charnes A, Cooper W W, Rhodes E. Measuring the efficiency of decision making units. European Journal of Operational Research, 1978, 2, 429-444. [7] Banker R D, Charnes A, Cooper W W. Some models for estimating technical and scale inefficiencies in data envelopment analysis. Management Science, 1984, 30(9), 1078-1092. [8] Brockett P L, Charnes W W, Huang Z M and Sun D B. Data transformations in DEA cone ratio envelopment approaches for monitoring bank performances, European Journal of Operational Research, 1997, 98(2), 250-268. [9] Cooper W W, Seiford L M and Zhu J. A unified additive model approach for evaluating inefficiency and congestion with associated measures in DEA. Socio-Economic Planning Sciences, 2000, 34(1),1-25. [10] Bogetoft P, Hougaard J L. Efficiency evaluations based on potential non-proportional improvements. Journal of Productivity Analysis, 1999, 12, 233-247. [11] Coelli T J. A multi-stage methodology for the solution of orientated DEA models. Operations Research Letters, 1998, 23(3-5), [1]

331

143-149. [12] Dervaus B, Kerstens K, Vanden E P, Radial and nonradial static efficiency decompositions: A focus on congestion measurement. Transportation Research B, 1998, 32(5), 299-312. [13] Frei F X, Harker P T. Projections onto efficient frontiers: Theoretical and computational extensions to DEA. Journal of Production Analysis, 1999, 11, 275-300. [14] Shephard R W. Cost and Production Functions, Princeton University Press, 1953, Princeton. [15] Fare R and Grosskopf S. Theory and application of directional distance functions, Journal of Productivity Analysis, 2000, 13, 93-102. [16] Peng Yu. The gradient of DEA efficiency. System Engineering Theory & Practice. 1997, 11, 51-55(in Chinese) [17] Wang Xinyu, Wang Hongxin. A general algorithm to calculate the gradient of DEA efficiency for inefficient DMUs. Operations Research and Management Science, 2000, 9(2), 12-17(in Chinese) [18] Wu Jianjiang. A method on minimizing the sum of deviations to transform decision making unit into DEA efficient. Mathematics in Practice and Theory, 2004, 34(10), 69-73(in Chinese) [19] Guan Jiancheng, Ma Ning. A multi-objective DEA projection model and its applications. Chinese Journal of Management Science, 2003, 11(1), 66-70(in Chinese) [20] Schaffnit C, Rosen D, Paradi J C. Best practice analysis of bank branches: an application of DEA in a large Canadian bank. European Journal of Operational Research, 1997, 98, 269-289. [21] Gonzalez E, Alvarez A. From efficiency measurement to efficiency improvement: the choice of a relevant benchmark. European Journal of Operational Research, 2001, 133, 512-520. [22] Gonzalez E and Carcaba A. Efficiency improvement through learning. International Journal of Technology Management, 2004, 27(6-7), 628-638.

332

A Method of Fuzzy Multi-Attribute Group Decision-Making Problem with Linguistic Variable


Deng WeiWu Qizong
School of Management and Economics, Beijing Institute of Technology, P.R.China, 100081

Abstract For some fuzzy multi attribute group decision making problem, we prefer to make decision firstly and integrate the experts opinion later. When the opinion expressed by linguistic variable or qualitative indices , the operation will be difficult . In this paper , we present a method to deal with the problem including information of linguistic valuation in fuzzy multi-attribute group decision-making . The method is simple and convenient , an example is given to explain it. Key words Borda function, Fuzzy multi-attribute group decision making , Integrating, linguistic variable

1. Introduction
In the field of engineering and economics, there exist a common decision making problem, such as reservoir location problems, project management, producing planning, etc. In these cases, a group consist of some professional experts is needed to choose and valuate a set of feasible solutions under multi-attributes. For these attributes, some could be represent by a quantitative form, but some are hard to get quantitative number, it depends on the experts familiarity degree of certain field. When an expert has not enough information or confidence to get quantification and appraisal for the unfamiliar field, they may give a qualitative valuation or judgment by natural linguistic variable. Fuzzy mathematic is a very effective method to deal with linguistic variable , it usually can be represented by a fuzzy number. Furthermore, in terms of difference of led by preferences and appraisal among different attributes, decision makers may have not uniform ranking and results for the feasible solutions. Thus, how to integrate opinions of experts and conclude the groups rank and choice is very important. Surveying the research literatures, there are two ways to deal with the problem. One is aggregating the rank first and making decision the latter, that is, first aggregating each decision makers valuation fuzzy number under singular objective to a group valuation number, then aggregating each group valuation number under singular objective to a multi-objective group valuation number , where obtaining a set of group valuation numbers , comparing the numbers . In paper [1,2,3], five feasible aggregating methods have been presented, and their characters have been compared . The other way is making decision first and synthesizing the latter, that is aggregating each decision makers valuation fuzzy number under singular objective to a multi-objective group valuation number first, then comparing the numbers of different decision makers and ranking , aggregating the ranking order at last. In this paper, we present a feasible method to deal with fuzzy multi-attribute group decision-making problem including linguistic variable: experts give their valuations and weights of every attribute, which may be qualitative or quantitative form , depending on experts knowledge ; for the former , decision makers represent the linguistic valuation and weight by a trapezoidal fuzzy number ; for the latter ,deal the quantitative valuation and weight with a dimensionless yield form ; Secondly , decision makers integrate these fuzzy numbers originating form single attribute by a comprehensive number , rank the feasible solutions by Bass-Kwakernaak fuzzy number ranking method , it means a single experts preference ; Lastly , aggregate the solutions rank by Borda function , thus we get the group rank and group choice . The paper is organized as follows: In section 2, we give some preliminary definitions and method of dealing with linguistic variable. In section 3, theory of trapezoidal fuzzy number and Bass-Kwakernaak fuzzy number ranking is introduced, we main concern on Arithmetic of trapezoidal fuzzy number and Bass-Kwakernaaks ranking method. In section 4, fuzzy valuation index value s computation and ranking is explained, it is an important step to integrate valuation index of group decision making ; In section 5, group ranking indexs 333

computation theory is introduced, we take Borda point to rank the projects; In section 6, we present the step of our method, clarify the algorithm step; In section 7, we give an example to explain the method , it is proved to be an effective method.

2. Linguistic Variable and its Fuzzy representation Definition 1 R is real number set, we call fuzzy set A is a fuzzy number in R ,if its membership function A (x) satisfy:
(1) A ( x) = 0, x (, c ] [ d ,+ ) (2) A ( x) = 0 is strictly increasing monotonously in [c, a ] , strictly decreasing monotonously in [b, d ] (3) A ( x) = 1, x [a, b] Fuzzy number have many forms, such as triangle fuzzy number, peak fuzzy number, normal number. In this paper, we take advantage of trapezoidal fuzzy number to represent Linguistic Variable, its membership function A(x) may be:
( x c) (a c) 1 A ( x) = ( x d ) (b d ) 0 cxa a xb bxd otherwise

Usually, it can be easily expressed by a four-dimensional number: A = ( a, b, c, d ) . Trapezoidal fuzzy numbers arithmetic can be achieved by Zadech s fuzzy extension mapping, as explained in section 3. Linguistic variable method is an effective tool to handle the decision-making problem which are difficult to quantificationally measure. Linguistic variables value is a word or sentence expressed by natural or artificial language. There are several kinds of linguistic variable, such as weight coefficient variable, qualitative objective variable, the former can be defined by W ={lower, low, moderate, high, higher}, the latter can be defined by

L ={worse, bad, moderate, good, better}, ect. These linguistic variable can be expressed by fuzzy number. For
example, decision makers valuation of weight coefficient , which is a linguistic variable, can be expressed by a trapezoidal fuzzy number, its membership degree may be shown in Tab. 1.
Tab. 1 decision makers valuation of weight coefficient expressed by a trapezoidal fuzzy number each attribute linguistic variable lower low moderate high higher membership degree

W (low)(0.2,0.4,0.4,0.6) erate ( W (mod ) 0.3,0.5,0.5,0.6) ) ( W (high0.5,0.7,0.7,1)


W (higher )(0.7,0.8,0.8,1)

(lower )(0,0,0,0.3)

For the sake of the compatibility of quantitative objective and qualitative objective expressed by linguistic variable, we make steps as follow: deal with the attribute of quantitative objective properly,then transform the attribute number into a dimensionless index[4]that is a four-dimensional number. Suppose the attribute value of project Ai under the quantitative objective C j is V ij ,while

i = 1,2,

, m, j = n1 + 1,

, n ,its dimensionless value has given by:


for benifit objective V 1 j (V 1 j + + V mj ) h V ij = 1 1 for cos t objective 1 V ij [(V 1 j + + V mj )] i = 1, , m; j = n1 + 1, , n; h = 1, , l. (1)

Qualitative objectives dimensionless value can be expressed by:


h h h h h E ij = ( E ij , E ij , E ij , E ij ), i = 1, , m; j = n1 + 1, , n; h = 1, , l.

334

3.Arithmetic of trapezoidal fuzzy number and Bass-Kwakernaaks ranking method Definition 2 Suppose ( a1 , b1 ; c1 , d 1) and ( a 2 , b2 ; c 2 , d 2) are trapezoidal fuzzy number, then the sum of
them is a trapezoidal fuzzy number too, and

(a1 , b1 ; c1 , d 1) (a 2 , b2 ; c 2 , d 2) (a1 + a 2 , b1 + b2 , c1 + d 1 , c 2 + d 2)
Where means fuzzy sum. Definition 3 Suppose ( a1 , b1 ; c1 , d 1) and ( a 2 , b2 ; c 2 , d 2) are trapezoidal fuzzy number, then the product of them is a trapeziform fuzzy number, Bonissone define the product by[5]:

(a1 , b1 ; c1 , d 1) (a 2 , b2 ; c 2 , d 2) (a1 a 2 , b1 b2 , ; d 2 + a 2 c1 c1 d 2 , b1 c 2 + b2 d 1 d 1 c3)


Base on the stochastic model of multi-attribute decision making problem, Bass and Kwakernaak found the multi-attribute decision making model under fuzzy environment, and give a method of fuzzy number ranking[6], the method usually become other methods referrence and norm. Definition 4 O / X = {i , O / X ( i | x1 , x 2 , , x n )} , i N , xi I , where I means [0,1] , and fuzzy set

O / X s eigenfunction is defined by

O / X (i | x1 , x2 ,

1, xi x j , j N , , x n)} = 0, otherwise.

Where N is natural number set. O / X is a condition set which describes the preference of fuzzy set, that is: A known project Ai belong to preference set if and only if xi x j , j N
Definition 5 O = {i , O (i )} is called a fuzzy preference set, when its membership function is:

O (i ) =

x1, x 2, x n

sup { O / X ( i | x1 , x 2 ,

, x n) min{ A j ( x j )}} =
j

sup
x1, x 2, , x n x i max x j

min { A j ( x j )}
j

(2)

where O (i ) represents the possible degree of fuzzy set Ai being a optimal selective

project.

The fuzzy preference set can be make sure of the ranking of fuzzy sets. When there are only two fuzzy sets, formula (2) can be simplified as follow:

O (i ) = sup min{ A ( x1), A ( x 2)}


x1 x 2
1 2

(3)

According as Bass-Kwakernaaks fuzzy preference relation, the fuzzy set which has largest peak value(the variables value when its membership degree is 1) is selected by a best project.

4.Fuzzy valuation index value s computation and ranking


Suppose decision makers valuation value(arithmetic value)of different project and attribute is expressed by h E ,the weight coefficient is W j , where
h ij

E ij = (cij , d ij ; ij , ij ),
h h h h h

W j = (a j , b j ; j , j ), i = 1,
h h h h h

, m; j = 1,

, n, h = 1,

,l

h h h Note r ij = W h E ij r ij is defined by formula (4): j

r ij = (a j cij , b j d ij ; a j ij + cij j j ij , b j ij + ij j j ij )
h h h h h h h h h h h h h h h h h

(4)

i = 1,
multi-attribute:

, m;

j = 1,

, n; h = 1,

Aggregate singular decision makers valuation value of each attribute to fuzzy valuation index value F ih of
h h h h h F i = n [( E i1 W 1 ) ( E i 2 W 2)

h ( E in W h)], i = 1, n

, m; j = 1,

, l.

(5)

each decision makers should rank the selective projects. F ih is a trapeziform When fuzzy index F ih is given
335

fuzzy number , according to Bass-Kwakernaaks ranking method, the projectsranking can be achieved by the corresponding peak value. When the projects peak value is lager ,the project should be ranked by a preferential place.

5.Group ranking indexs computation


For multi-attribute decision making problem , before coming into being a ranking index for all projects, we must consider how to aggregate each decision makers ranking index. In this paper, the multi-attribute decision making problem we concern about is a sort of group decision making problem has democratic character, such as committee decision making, etc. This kind of group decision making problems main trait is each decision maker in the group has a coequal status, they have a unanimous benefit, and they respect each other. Each decision maker may have different preference of every attribute, or has different valuation value of each qualitative objective, so as to the coherence of different project is not absolute, but considering the character of group decision making as shown above, we can use social choice theory to integrate each members preference, conclude a group (society) preference and achieve the final decision making. We take Borda function method[7] to integrate each decision makers ranking . Each decision maker rank the selective projects, put number l 1, l 2, ,1,0 into the first, second , , worst place. Suppose decision maker h mark pi as his rank number of project i then we can calculate the final number(Borda point f B ( Ai ) ,as shown in formula (6))of all the selective projectsranking,
h

f B ( Ai) = A jA

\ { Ai}, h =1, l

h pi N ( Ai A j )

(6)

where A = { A1 , A2 , , An} is a set of selective projects, N ( x which makers believe project Ai is prior to project A j . project which has a largest Borda point as a optimal project.

y ) represent the decision makers number

According to the value of f B ( Ai ) ,we can get the group preference ranking of all projects and select the

6.Algorithm step
Based on the analysis and discussion above, we conclude the algorithm step of the Fuzzy Multi-attribute Group Decision-making Problem with Linguistic Variable: Step 1 Give selective projects and its valuation objective, classify the objective into two sorts: qualitative objective and quantitative objective; Step 2 Deal each quantitative objective with a dimensionless value according to formula (1), and represent the value by a four-dimensional number ; Step 3 Give linguistic variable of weight coefficient and qualitative objective, find its membership function expressed by fuzzy number; Step 4 Each decision maker give his valuation value of linguistic variable of all weight coefficient and qualitative objective; h Step 5 Get each decision makers fuzzy valuation index r ij based on weight analysis according to formula (4);
Step 6 Get each decision makers fuzzy valuation index F ih of multi-attribute according to formula (5), rank

the projects based on the number of peak value according to Bass-Kwakernaak ranking method; Step 7 Get each projects Borda point f B ( Ai ) according to formula (6), rank the project based on the value of

f B ( Ai ) , select the optimal project which has the largest f B ( Ai ) .

7.Example
Water conservancy project is a great project relating to the national economy and the people's livelihood, strict item evaluation must be argued before progressing. Usually some selective projects have been put forward,
336

it is a very complex system and not been made decision by a singular person, it is a multi-attribute of group decision making problem. Suppose four experts (or four sorts of experts: such as governor, engineer, environmentalist, economist) have been invited to evaluate the item, they review projects of item from their familiar field ,such as investment, time limit, preventing floodwater, generating electricity and environmentalism. Since difference of knowledge and information of experts, they may give an exact computation value for some attribute, but for other attribute, they can only give a linguistic valuation value by the word good or common. For example, governor may have enough knowledge and information about investment and time limit based on finance and budget plan, so for the two attributes, they can give their quantitative evaluation. But for other attributes such as preventing floodwater, generating electricity and environmentalism, professional knowledge must be mastered, so they can hard to give an exact quantitative value. Similarly, for given evaluation of different projects by the experts, weight coefficients of attributes are not a positive appraisal, they incline to be described by linguistic variable. So using method of fuzzy group decision making helps to improve the veracity of decision, debase the risk of decision. Suppose weight coefficients of attribute evaluated by experts are shown by Tab. 2, experts evaluation of projects is shown by Tab. 3.
Tab. 2 experts evaluation of weight coefficients of each attribute Experts Governor Engineer Environmentalist Economist Investment higher moderate lower higher Time Limit higher moderate low high preventing floodwater moderate higher high low each project preventing floodwater moderate high higher 5.0billion 8.0 billion 10 billion 4.0 billion 5.0 billion 8.0 billion higher high higher generating electricity low higher higher moderate generating electricity higher higher lower 10 billion 20 billion 1.0 billion high high low 8.0 billion 10 billion 3.0 billion Environment -alism low high higher low Environment -alism moderate better bad good better bad 5.0 billion 7.0 billion -1.0 billion 7.0 billion 8.0 billion 1.0 billion

Tab. 3 experts evaluation of Experts Governor Projects A1 A2 A3 A1 A2 A3 A1 A2 A3 A1 A2 A3 Investment 1.2billion 2.5 billion 0.8 billion high higher moderate 1.5 billion 3.0 billion 1.0 billion 1.4 billion 2.6 billion 0.65 billion Time Limit 4years 6 years 2.5 years 3.5 years 6.5 years 2 years long longer moderate long long short

Engineer Environ mentalist Economist

Deal each quantitative objective with a dimensionless value give linguistic variable of weight coefficient and qualitative objective, find its membership function expressed by fuzzy number, as shown by Tab. 4.
Experts Governor Tab.4 qualitative and linguistic variable objectives fuzzy number represent preventing generating Projects Investment Time Limit floodwater electricity
A1 A2 A3 A1

Environment -alism
(.3,.5,.5,.6) (.7,.8,.8,1) (0,0,0,.3) (.5,.7,.7,1) (.7,.8,.8,1) (.2,.4,.4,.6) (.42,.42,.42,.42) (.58,.58,.58,.58) (0,0,0,0) (.44,.44,.44,.44) (.5,.5,.5,.5,) (.06,.06,.06,.06)

Engineer Environ -mentalist Economist

A2 A3 A1 A2 A3 A1 A2 A3

(.34,.34,.34,.34) (.16,.16,.16,.16) (.5,.5,.5,.5) (.2,.4,.4,.6) (0,0,0,.3) (.3,.5,.5,.6) (.48,.48,.48,.48) (.17,.17,.17,.17) (.35,.35,.35,.35) (.27,.27,.27,.27) (.15,.15,.15,.15) (.58,.58,.58,.58)

(.31,.31,.31,.31) (.20,.20,.20,.20) (.49,.49,.49,.49) (.30,.30,.30,.30) (.16,.16,.16,.16) (.54,.54,.54,.54) (.2,.4,.4,.6) (0,0,0,.3) (.3,.5,.5,.6) (.2,.4,.4,.6) (.2,.4,.4,.6) (.5,.7,.7,1)

(.3,.5,.5,.6) (.5,.7,.7,1) (.7,.8,.8,1) (.22,.22,.22,.22) (.35,.35,.35,.35) (.43,.43,.43,.43) (.26,.26,.26,.26) (.29,.29,.29,.29,) (.47,.47,.47,.47) (.7,.8,.8,1) (.5,.7,.7,1) (.7,.8,.8,1)

(.7,.8,.8,1) (.7,.8,.8,1) (0,0,0,.3) (.32,.32,.32,.32) (.65,.65,.65,.65) (.03,.03,.03,.03) (.5,.7,.7,1) (.5,.7,.7,1) (.2,.4,.4,.6) (.38,.38,.38,.38) (.48,.48,.48,.48) (.14,.14,.14,.14)

337

Weight coefficients of attribute given by experts are expressed by fuzzy number,as shown by Tab. 5.
Experts Governor Engineer Environ -mentalist Economist Investment (.7,.8,.8,1) (.3,.5,.5,.6) (0,0,0,.3) (.7,.8,.8,1) Tab.5 Weight coefficients of attribute preventing Time Limit floodwater (.7,.8,.8,1) (.3,.5,.5,.6) (.3,.5,.5,.6) (.2,.4,.4,.6) (.5,.7,.7,1) (.7,.8,.8,1) (.5,.7,.7,1) (.2,.4,.4,.6) generating electricity (.2,.4,.4,.6) (.7,.8,.8,1) (.7,.8,.8,1) (.3,.5,.5,.6) Environment -alism (.2,.4,.4,.6) (.5,.7,.7,1) (.7,.8,.8,1) (.2,.4,.4,.6)

h Decision makers fuzzy evaluation index r ij of each project based on weight coefficients analysis, as shown

by Tab. 6. Decision makers fuzzy evaluation index F ih of multi-attribute, as shown by Tab. 7.


Tab. 6 decision makers fuzzy evaluation index

r ij
generating electricity
(.14,.32,.08,.32) (.14,.32,.08,.32) (0,0,-.06,0) (.224,.256,.224,.256) (.445,.52,.445,.52) (.021,.024,.021,.024) (.35,.56,.3,.56) (.35,.56,.3,.56) (.14,.32,.1,.32) (.114,.19,.114,.19) (.144,.24,.144,.24) (.042,.07,.042,.07)

r ij

Projects
A1

Investment
(.238,.272,.238,.272) (.112,.128,.112,.128) (.35,.4,.35,.4) (.06,.2,-.02,.2) (0,0,-.06,0) (.09,.25,.03,.25) (0,0,0,0) (0,0,0,0) (0,0,0,0) (.189,.216,.189,.216) (.105,.12,.105,.12) (.406,.464,.406,.464)

Time Limit
(.217,.248,.217,.248) (.14,.16,.14,.16) (.343,.392,.343,.392) (.09,.15,.09,.15) (.048,.08,.048,.08) (.162,.27,.162,.27) (.04,.16,-.04,.16) (0,0,.06,0) (.06,.2,0,.2) (.1,.28,.02,.28) (.1,.28,.02,.28) (.25,.49,.15,.49)

preventing floodwater
(.09,.25,.03,.25) (.15,.35,.05,.35) (.21,.4,.15,.4) (.154,.176,.154,.176) (.245,.28,.245,.28) (.301,.344,.301,.344) (.13,.182,.13,.182) (.145,.203,.145,.203) (.235,.329,.235,.329) (.14,.32,-.04,.32) (.1,.28,0,.28) (.14,.32,.08,.32)

Environment -alism
(.06,.2,0,.2) (.14,.32,.08,.32) (0,0,-.06,0) (.25,.49,.15,.49) (.35,.56,.29,.56) (.1,.28,.02,.28) (.294,.336,.294,.336) (.406,.464,.406,.464) (0,0,0,0) (.088,.176,.088,.176) (.1,.2,.1,.2,) (.012,.024,.012,.024)

1 ij

A2 A3 A1

2 ij

A2 A3 A1

3 ij

A2 A3 A1

4 ij

A2 A3

Tab. 7 Decision makers fuzzy evaluation index Projects A1

Fi

of multi-attribute

Fi

(.149,.258,.113,.0.258) (.1364,.2556,.0924,.0.2556) (.1806,.2384,.1446,.2384) (.1556,.2544,.1196,.2544) (.2176,.288,.1936,.288) (.1348,.2336,.1068,.2336) (.1628,.2476,.1368,.2476) (.1802,.2454,.1598,.2454) (.087,.1698,.067,.1698) (.1262,.2364,.0742,.2364) (.1098,.224,.0738,.224) (.17,.2736,.138,.2736)

1 i

A2 A3 A1

2 i

A2 A3 A1

3 i

A2 A3 A1

4 i

A2 A3

338

According to Bass-Kwakernaak method, rank the projects though comparing with the peak values, then for Governor, his ranking is A1 A2 A3 ; for Engineer, his ranking is A2 A1 A3 ;for Environ-mentalist, his ranking is A1

A2

A3 ;for Economist, his ranking is

A3

A1

A2 .

Considering each projects Borda point, put number 2, 1, 0 into the first, second, third place project, we get project A1 s Borda point is f B ( A1) 22+12+00 6; project A2 s Borda point is f B ( A2) 21+12+014project A3 s Borda point is f B ( A3) 21+10+032. So the groups ranking is A1 A2 A3 the optimal project is A1 .

8.Conclusion
For a certain of multi-attribute group decision making, consider making decision first and synthetizing later , transform the qualitative objective and linguistic variable of weight coefficient into a trapezoidal fuzzy number, aggregate each decision makers valuation under singular objective to a fuzzy valuation number under multi objective, rank the fuzzy number based on Bass-Kwakernaaks possible density ranking method ,get decision makers ranking of projects under a singular object, integrate the group ranking of projects according to the value of Borda point. The method is feasible and logistic, give a way to solve multi-attribute group decision making problem.
References Andras B, Lucien D, Istvan B. Combination of fuzzy numbers representing expert opinions. Fuzzy Sets and System,1993(57) Buckley J T. Ranking alternative using fuzzy numbers. Fuzzy Sets and Systems,1985(15) Oser P, Dias Jr. Ranking alternatives using numbers. A Computational approach. Fuzzy and Systems,1993(56) Dubois D, Prade H. Fuzzy Sets and Systems. Academic Press, New York,1980 P.P.Bonissone. A pattern recognition approach to the problem of linguistic approximation in system analysis.IEEE 1976 International Conference on Cybernetics and Society,1979 [6] S.M.Bass and H.Kwakernaak. Rating and ranking of multiple aspect alternatives using fuzzy sets.Automatica,1977(3) [7] Yue Chaoyuan. Decision making theory and method. Beijing: Science Press,2003 [8] Fu Wei, Meng Bo. A method of fuzzy multi-objective group decision making. System engineering and electronic technology,1996(12) [9] Song Ricong, Chen Hanlin. Multi-objective fuzzy group expectation method. Transaction of Sichuan normal university, 1997 (9) [10] Li Rongjun. Fuzzy multi-criteria decision making theory and its application. Beijing: Science Press,2002 [1] [2] [3] [4] [5]

339

Research on E-Business Investment Decision Making Based on Option Games


Han Guowen
Economics and Management School, Wuhan University, Wuhan,

430072, P.R.China

Abstract The theory of option games being the combination of two successful theories, namely real options and game theory, has a great potential to applications in many real situations. E-business investment has the character of large scale and high uncertainty. This paper employs the option games model to research the E-bank investment decision-making.The real option method can avoid the pitfall of Net Present Value(NPV) approach for e-business investment decision-making. But it ignores the interaction between competitors. In the light of the problems existing in e-business investment decision- making, this paper employs the option games models to study the decision-making for e-business investment. It can consider the key factors such as uncertainty, flexibility, and timing, the effect of competition for each firm. Keywords E-business, Optimal timing, Option games, Competition environment

1Introduction
The e-business develops so fast since it comes into being. Many firms know the development potential of the e-business, and plan to invest it. However, the customers demands of the e-business change frequently and the competition environment is intense, the e-business investment has the high degree of uncertainty. Under uncertain environment, we always apply real options method to make decision, compare the investment as American option, and think that waiting can make the firm acquire more information and the high value of the investment opportunities. This kind of investment decision method gives the project decision more flexibility comparing with NPV method, but it considers the uncertainty term of the investment only. It doesnt consider the competition environment of the firms. In fact for acquiring preemption in vigorous competition environment, the firms usually rush ahead to invest e-business. That is to say, the problem of e-business investment under uncertainty and competition is demanding a more rigorous framework. Fudenberg and Tirole(1985)[1] early combined real options with strategic interaction of firms, discussed game theory on how to choose the optimal timing of firms investment decision under uncertainty. Dixit and Pindyck(1989)[2] expanded Fudenberg-Tirole models, studied the situation of new market with option method under uncertain environment supposing that the investment revenue followed a geometric Brown motion. Huisman and Kort expanded the new market model to study the technology upgrading of two firms in current market, considered the uncertain demands influence on technological innovation investment making decision. Smets(1993) [3]established continuous-time model of option games that considered rational strategic between two firms, and used the mixed strategy theory to analyze the Nash equilibrium in the duopoly models under uncertainty both symmetrical and asymmetrical. Dixit and Pindyck(1994)[4] studied the case under imperfect competition, analyzed market, perpetual option and imperfect information of two competitors in continuous-time, then gave the leader and follower threshold and value. The first option games textbook appeared with Huisman (2001)[5], focusing important theoretical models of option games in continuous-time mainly for technology applications. Before, Grenadier (2000)[ 7] edited a good selection of option games papers. A nice new addition is the forthcoming textbook of Smit & Trigeorgis (2003) [7]focusing mainly discrete-time option games models, in a light and thorough approach, with many practical examples. As for the field of e-business investment decision making, many researchers considered uncertainty while applying real options to the analysis of investment opportunity. But they didnt consider the factor of competition. In order to solve the problems and shortages existing in current e-business investment evaluation and decision, this paper employs the option games models to make decision for e-business investment. It can consider the key factors such as uncertainty, flexibility, and timing, the effect of competition for each firm.
340

2Option games analysis on e-business investment decision-making under competition


We consider two identical, risk neutral and value maximizing firms that can make a e-business investment expenditure I(>0) to active on a market. They have the symmetrical strategies. This means that the "entry" of a firm affects the current profit flow of the other firm reducing the profit flow because this model considers negative externalities. The firms face a (inverse) demand curve expressed by the profit flow p(t) for given by:
p (t ) = Y (t ) D ( N i , N j )

(1)

Where D( Ni, Nj ) is a deterministic demand parameter for firm i, which depends on the status of firms i and j. Ni=1 means that firm i invested. Ni=0 means that firm i has not invested yet, and the possible values of D( Ni, Nj) are: D(0, 0) means that both firms not invested yet (but there is a profit flow YD(0, 0) because the firms are already active in the market);D(1, 0) means that firm i invested and is the "leader" because the firm j has not invested yet; D(0, 1) means that firm i is the "follower" because only the other firm j has invested becoming the leader; and D(1, 1) means that both firms invested in the market (simultaneous investment). Y(t) is the stochastic demand shock following a geometric Brown motion: dY (t ) = Y (t ) + Y (t ) dz (2) Where is the expected rate of return, is the volatility, dZ is the Wiener incrementIf the operational cost is zero, p(t) is interpreted as profit flow. The deterministic demand parameters have the additional constraint of negative externality given by the inequality: D(1,0)> D(1,1)> D(0,0)> D(0,1). According to real options, there is a typical optimal timing problem and the solution is performed backwards. This means that first we need to estimate the value of the follower (given that the leader entered before), and then the leader value given that the leader knows that the optimal follower entry can happen in the future. 2.1 The follower value and investment threshold Supposed that the leader has entered the market, we use partial differential equation to calculate the follower value F(Y) and investment threshold YF. F(Y) is given by the ordinary differential equation (ODE) below:
1 2 ' '' 2 2 Where r is risk-free rate, F = F / Y F = F / Y YD(0,1) is profit flow. The ODE solution is a
1 2

Y F + YF rF + YD(0,1) = 0

2 2 ''

'

(3)

homogeneous solution of type AY 1 + BY 2 , plus a particular solution.


We get two roots of the equation:


1 2

2 2

2 + ( 0.5 ) r = 0

1, 2 =

2 2 2 ( 0.5 ) + 2 r 2

(4)

Because of boundary conditions and 2 is negative root, constant B=0. According to value matching conditions and the smooth pasting, we can get:
Y D (1,1) F (Y = YF ) = F I r D (1,1) FY (Y = YF ) = r

(5) (6)

From that we can get:


A= 1 YF 1 D (1,1) D (0,1) r

(7)

341

YF =

1 ( r ) I
( 1 1)( D (1,1) D (0,1))

(8)

So the solution for the follower's ODE is given by:


F (Y ) = YD(0,1) r +(

Y ( D (1,1) D (0,1)) ) 1 [ F I] r YF
Y F (Y ) = YD(1,1) r I

if Y<YF

(9)

if YYF

(10)

Assuming the leaders investment time is TL. It is the time that the project value hits the leaders investment threshold. Then the follower investment timing TF can be expressed as:
TF = inf{t TL : Y YF }

(11)

2.2 The leader value and investment threshold When the leader is in monopoly phase, its profit flow is YD(0,0). When t=T*=TF, the follower enters and the profit drops to YD(1,1),and the leaders investment value is equal to the followers. We can follow with the similar steps used for the follower case, in order to calculate the leader value. However, perhaps it is easier another method, namely the differential equation approach for the value of the leader during the monopolistic phase, denoted by V(Y) = L(Y) + I. This value V(Y) needs to match the value of simultaneous investment (follower value) at the boundary point Y = YF. The differential equation of V(Y) is given by:
1 2

Y V + YV rV + YD(1,0) = 0

2 2 ''

'

(12)

Where YD(1,0) is profit flow, represented by the profit flow during the monopolistic phase. The solution is a homogeneous solution plus a particular solution. YD(1,0) (13) V (Y ) = BY 1 + r The constant B is the parameter that remains to be calculated, requiring only one boundary condition for that. The biggest difference compared with the constant A (eq.7) from the follower value, is that the constant B is negative, reflecting in the (expected) leader value, the losses due to the possible future follower investment exercise. This is mathematically showed below. The relevant boundary condition is the value-matching at the point that the follower entry (at YF). The smooth-pasting condition is not applicable here because this point is not an optimal control of the leader. It is derived of one optimization problem but from the other player. This boundary condition is:
Y D (1,1) V (YF ) = F r

(14)

The leader value during the monopolist phase is equal to the simultaneous investment value at YF. Equaling the two last equations, we get the value of the constant B in function of YF.
Y ( D (1,1) D (1,0) B= F YF 1 ( r )

(15)

By substituting equation (15) into equation (13), we can get in the enterprise value V(Y) in monopoly phase. Therefore, we can get the leader value from the equation below:
L (Y ) = YD(1,0) r
+[

Y ( D (1,1) D (1,0)) ] 1 F I r YF
Y

Y < YF

(16)

If YYF, the value of becoming leader is equal to the value of becoming follower, which is equal to the
342

value of simultaneous investment S(Y):


L (Y ) = S (Y ) = YD(1,1) r
I

YYF

(17)

Supposing the leader knows that the optimal investment strategy of follower is investing at the moment when
TF = inf{t 0 : Y YF } .The optimal investment strategy of the leader is investing at the moment when TL = inf{t 0 : L (Y ) = F (Y )} .

2.3 The equilibrium analysis on e-business investment strategy Since two firms are symmetry, there are two kinds of equilibrium investment strategy: preemption equilibrium and simultaneous equilibrium. (1) if the initial state is Y(0)<YF, for every firm, the investment strategy is: If the rival hasnt invested, the optimal timing of investment entry is Y(t)YL; If the rival has invested, the optimal timing of investment entry is Y(t)=YL. At this time the investment equilibrium is preemption equilibrium. Given the strategy above, we can get

the sequential investment strategy equilibrium: When Y(0)<YL, for the two firms, waiting rather than investing immediately in e-business is the optimal equilibrium investment strategy. One will wait until Y(t)=YL, the other will wait until Y(t)=YF. For leader and follower, two firms has no distinguish. When YL<Y(0)<YF, because of preemption, the firm which invests in e-business first will acquire some monopoly profits. It makes L(Y)>F(Y). One of them will invest with probability, but the optimal investment strategy of another firm is waiting to carry out investment option until Y(t)=YF . (2) if the initial state is Y(0)YF, and there isnt any firms to carry out the e-business investment option before, L(Y)= F(Y), therefore the investment strategy is: One firm invests when the other carries out investment option. That is simultaneous investment strategy equilibrium.

3Empirical analysis
Both firm A and B intend to invest in e-business in this field and B is the main rival of A. They are balanced players in games. Assumption = 5% , = 20% , r = 10% , I = 20 , Y (t = 0) = 1 , D (0,1) = 1, D (0,0) = 2, D (1,1) = 2.5, D (1,0) = 4 . D (0,1) = 1, D (0,0) = 2, D (1,1) = 2.5, D (1,0) = 4 According to these parameters, we can get: 1 = 1.6085 , YF = 1.7623 , when Y < 1.7623 , YL = 0.57 .

Figure1 Leader and follower values and entry thresholds

From figure 1 we can know that, when the demand Y < YL = 0.57 (Leader entry), the optimal strategy of firm A is not investing. Meanwhile, firms investment opportunity value is equal to option value. When
343

YL = 0.57 , the value of leader A is higher than followers value and simultaneous investment value. At this time,

the optimal strategy of firm A is entering the e-business market, to acquire higher monopoly profits. When the demand YF = 1.7623 (Follower entry), simultaneous investment value is higher than leader value of A, the optimal strategy of the follower B is entering the market, acquiring simultaneous investment profits with A. So point YF = 1.7623 is the investment threshold of firm B in symmetric duopoly. If using general option making-decision method, we can get the optimal investment timing YM = 2.6434 . Because competition erodes the value of e-business investment flexibility, investment timing with consideration of competition is different from that got from general real options.

4. Conclusion
E-business investment is featured as large scale and highly uncertainty. Using real options can avoid the disadvantage of NPV method. But it ignores the interaction between competitors. This paper applies model of duopoly timing option games, takes into account the uncertainty and competition. There are preemption and simultaneous strategy equilibrium according to different initial time and model parameters. Under the threat of the rival, the firm may preempt e-business investment, or waiting if the rival has invested early. Competition erodes the value of e-business investment flexibility; investment timing with consideration of competition is different from that got from general real options. It has some significance of guidance and practice in e-business investment decision-making of firms.
References [1] [2] [3] [4] [5] [6] [7] Fudenberg D,Tirole J. Pre-emption and rent equalization in the adoption of new technology.Review of Economic Studies, 1985,No.52:383-401. Dixit A ,Entry and Exit Decisions under Uncertainty, Journal of Political Economy.1989, No.6: 620-638 Smets,F, Exporting versus FDI: The effect of uncertainty, strategic interactions, Working paper. 1991,Yale University, New Haven, CT Dixit A K,Pindyck R S.,Investment under uncertainty, 1994,Princeton, NJ: Princeton University Press Huisman K J M.,Technology investment: A game theoretic real options approach,2001,Boston(MAU S A): Kluwer Academic Pub. Grenadier,.S.R.,Option exercise games: An application to the equilibrium investment strategies of firms.Review of Financial Studies,2002,No.15:691-721. Smit,H.T.J,L.Trigeorgis,.Flexibility and commitment in strategic investment, working paper, 2003,Tinbergen institute, Erasmus University, Rotterdam.

344

Resolve the Multi-Factor Tradeoff Problem in Bid Evaluation by Using Computer Regression Analysis
He Liang, Zhao Ping, Wu Ming, Kang Hui
Defense Economics Academy, Wuhan, P.R. China, 430035

Abstract Customarily modern procurement adopts a multi-purpose procurement way, so a tradeoff problem of different indexes in the evaluation system of bids will often be encountered. The purpose of this paper is to provide a new method to resolve the problem by using regression analysis. This method can evaluation all bids with some cheap computer software. By proving the scientificity of this new method, the author thinks that it can avoid the defects of the past evaluation methods efficiently. Key words Regression analysis, Bid evaluation, Tradeoff in different indexes

1. Introduction
Customarily modern procurement adopts a multi-purpose procurement way, so a tradeoff problem of different indexes in the evaluation system of bids will often be encountered. This paper aims at the defects of the past evaluation methods, such as high cost, subjectivity of indexes, and doing with the linear-factors tradeoff only. In this paper, the author proposes a new supporting method of bid evaluation. The old thinking style usually sums all the factorial indexes in certain proportions, which are called Weight Variables, then making an overall estimation, however, the new method this paper proposed goes beyond it and considers one key factor of all estimated factors as the function of others, then performs the regression analysis by using the estimation results of all bidders' indexes as the sample, finally evaluates all the bids through the comparison of the residuals or sorting them by using the equivalent-benefit curves or setting up a indexes-related evaluation system. This method can effectively avoid the affection of subjective factors and the judging errors, furthermore, solve the complex tradeoff problems that other methods can hardly resolved. This method can do the regression analysis by using some cheap computer software, so we can save some costs and enhance the efficiency of our procurement.

2. Backgrounds
Purchasing by invitation to bid of the present commercial and public ministers always bases on multi-aims procurement, which dont take price to be the only considered item, but under the condition of fulfill the budget constraint and other basic requirement to choose from a propriety of lower price, good quality and better service so etc evaluate items to arrange and form the best suppliers. In other words, that means purchasing the most worth commodities, services and projects at an acceptable range of price, which not equals to the lowest price. Which pair of the resultant could be the best? How to measure the competitors advantage equations act on different aspects, with an order to get an integrated result? Normally saying, every suppliers of bidding competitors cannot gain the championships in every item, such as better quality and more functions means for higher cost and price meanwhile; priority in price perhaps stands for less service. Especially in condition of modern times, all sorts of enterprises will emphasis the individualized and characteristics development and extrude several parts advance. Therefore scientifically solve the trade-off between each counting items and figures, which is a necessary question for us to cross over in the way of gaining procurement aim and achieve efficiency optimization. 2.1 Characterization of methods existed Nowadays, aiming at this puzzle, both alien and domestic research and implement raised a lot of methods, such as AHP, Incidence Matrix, Link Relative Method, blurring evaluation method, better range selecting and berange selecting so etc. We held this consideration that, these methods basically follow the same consideration during the problem-solving of multi-factors, which take a lot of specialistic experiences to be foundation, bring some concrete arithmetic to measure each evaluate items weight and so, take every factors to be counted into the
345

last assess result in the form of some firmed weight. Showed in formula (1),
W j = wi sij
i =1 m

(1)

WjIntegrate evaluate of the j-th bid wiWeight of the i-th factor in the assessing system sijmarks for i-th factors of the j-th bid Its normal form shown as follow:
W = wi s i
i =1 m

(2)

The main difference between them are the methods and arithmetic of gaining the weight wi, but basically the same in summarizing these different factors together in some special proportion which in fact decides the character of this formulas science and justice factor. Some of the methods do not compute each bids plus mark but only compare them across, which actually are the same as be based on weights{ wi}. 2.1 Limitation of traditional methods This paper thinks, the methods that are often adopted have several radical disables: 2.1.1 Settled assessing system defaults the only relationship of linear substitution between each factors, but actually associative of most factors are non-linear. Take quota fk and fl randomly, led by function (3)

s k W sl w 2s = = l which is a constant figure 2k = 0 sl W s k wk sl

(3)

So, the relationship between sk and sl are linear. For example, when sk and sl separately stand for price and quality of the product, if the two vendors products are distanced by sl in quality and wl wk sl in price, so they may gain the same mark after the two factors integrated. In another words, purchase better products one should pay much more money in a fixed proportion. In the real economic world, the purchasing factors effect each other, get a familiar relationship, but not surely to be linear. Still take price and quality to be example, if the relationship is linear, we can take a straight line A in Fig.1 to represent. But when it comes to the technologic product to be bided, one should cost a lot and spend a lot of money on renewing and D&R, which cost fringe cost increase, that is 2 sk sl2 > 0 , such as curve B, in the chart. According to matured products, it decreases apparently in fringe cost, 2 sk sl2 < 0 , such as curve C.

Fig.1 Analysis of traditional methods

Fig.2 Methodology of new method

The Line Ssprice in the chart is price-control line (the most acceptable highest price), use traditional methods to assess the bid, if purchasing the high-tech product, we must have take price discrimination toward bidders who improve their products quality in triangle zone I (its essential is discrimination to the high tech); if purchase for low technology products, it means overestimate of bidders price advantages in zone II, make a fault choice, and low purchase effect. 2.1.2 What they take on deciding quota of ultimately reference are from experts experience and experiences in the past, which cannot avoid the subjective deflection. When deciding each factors quotas of each methods, mainly take two kinds of effect: one is make those hard
346

and difficult ones more easier to compute and compare; the other one is to use unique method to acquire experts opinion or take multiple solve to so many experiences in order to eliminate their objectiveness. But actually, for its final quotas are decided by mans subjective judge, we cannot delete every subjective error. 2.1.3 To acquire a integrate evaluation using different quotas, may effect the advantages and disadvantages of product and service, and cause some faults in assessment and even unable to make decision. Make all sort of assessing factors together to a multiple result and which usually make part of the real condition effected by others and not be displayed on the final result. For example, some bids have a prominent advantage in some aspects while faint in other aspects, always can gain a high mark in its only advantage; and some bids are good at some special aspects while others are bad, so in the total mark, good one will balance the bad ones. Actually and normally, participants in the same bid always the same in their abilities, so the methods of multi-factors tradeoff always do not effect and cannot work well to distinguish their goods or bad. Besides that, such as shown on chart 1, the bids in the same curve a get the same marks, we cannot make a proper choice, and should do a new tradeoff between the price and technology again, but traditional method can not do anything but only subjective decision. 2.1.4 Matrix assess system centered by quota-combination has many limitations in areas of application, and gain a high cost and low efficiency in frequently amending its system. To compute the quotas scientifically and avoid the subjective decision, we should collect specialists disposal as more as possible; and, according to purchase of different products and services, the quotas are different, the flexible of assess system is not high, objectively need different purchase need different experts to recalculate the quotas. All of these activities will cost a lot. If we take a single purchase, it is not accountable to establish a one-life purchasing assess system in spending; and to those participate in different purchases, it also costs a lot in continually calculating quotas again and again. 2.1.5 Disable of automatization. Besides, nowadays most methods only provide compute methods when compute the quotas, cannot directly apply in computer mathematic software disposal, they can either work by hands or take software quadric development, which is high cost and low efficiency.

3. Methodology
To solve the problems raised above, we propose an essentially new bid-assessment method, introduced as follows. To trade off k factor indexes s1, s2, s3, ..., sk, set up a target function with one central factor:

s1 = f(s2, s3, ..., sk)    (4)

We take the figures provided in each tender document and the assessment results of each bid as one sample point, add figures obtained from market surveys, and so build up the sample. Statistical regression analysis then yields a regression function (4) at a given significance level. Assessing and ranking the real value s1, the fitted value ŝ1 and the residual (s1 − ŝ1) of each bid provides decision support for the final choice. Normally we take the price, or a price grade, as s1 (mainly because price is the central factor of bid evaluation and directly answers whether the purchase is worthwhile at a given price; of course, other indexes may be chosen when trading off different factors), while s2, s3, ..., sk stand for the indexes and assessment marks of quality, service, technology, function, and so on. The real meaning of function (4) is therefore: a given level of quality, service, technology and function corresponds to a reasonable price for the product or service. This "reasonableness" is decided by the actual market rather than by experts' subjective opinions, and so has an objective character. For simplicity, we illustrate with only s1 as the tender offer and s2 as product quality. Each bidder's reported and measured values of s1 and s2 are plotted in the s1-s2 coordinate system, each point representing one bidder. Regression analysis gives the regression equation s1 = f(s2), whose curve is drawn in the same coordinate system (as in Fig.2). The curve displays, relatively accurately, the tradeoff relationship between price and quality.

The points on the curve are equal to each other in the sense of "worth-for": one can choose a high-price bid when funds are abundant and a low-price one when funds are short. Points in the zone below the curve represent better bids: the farther a point lies below the curve, the higher the product's cost-effectiveness. Points in the zone above the curve are just the opposite. The advantages and disadvantages in the quality-price tradeoff, represented by the points and the curve, can serve as the basis of bid assessment. This kind of assessment therefore breaks through the traditional idea of summing all factors into one composite evaluation. The balance among the factors is determined by all the bidders together, who embody the objective market rules, so subjective thinking is avoided; the method covers the linear tradeoffs solvable by traditional methods and also solves the nonlinear tradeoffs that traditional methods cannot handle, and it is generally applicable to bidding for any products and services. More importantly, since only the computational methods of mathematical statistics are used, inexpensive common mathematical software such as Mathematica, Matlab or SAS can carry out the computation conveniently on a computer.
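The core computation, fitting the equivalent-benefit curve and reading each bid's position relative to it, can be sketched in a few lines. The snippet below is a minimal illustration assuming numpy; the bid figures and the quadratic form of f are invented for the example, not taken from the paper.

    import numpy as np

    # Hypothetical bids: each bidder reports a quality mark s2 and a price s1.
    quality = np.array([25.0, 26.0, 27.5, 29.0, 30.5, 32.0])
    price = np.array([8.6, 9.1, 9.4, 10.2, 10.4, 11.0])

    # Fit the equivalent-benefit curve s1 = f(s2); a quadratic is one plausible form.
    coeffs = np.polyfit(quality, price, deg=2)
    residual = price - np.polyval(coeffs, quality)

    # A negative residual means the bid sits below the curve, i.e. the bidder
    # asks less than the market-implied price for that quality level.
    for i, r in enumerate(residual, start=1):
        zone = "below the curve (better value)" if r < 0 else "on/above the curve"
        print("bidder %d: residual %+.3f -> %s" % (i, r, zone))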

4. Assumptions and Limitations


This brand-new bid-evaluation method rests on three assumptions: a. the data in the bids used as the regression sample are truthful and can represent the objective law of the market; b. the sample for the regression analysis is sufficiently large; c. some regression relation exists between the indexes. The first assumption generally holds in actual bidding: broadly speaking, a supplier participates in order to win the bid, and therefore prepares its bid according to its true production ability and economic power. Since the winner can only be chosen from among the bidders, taking them as the sample represents the realistic market circumstances available to us. The second assumption is needed because only with enough sample points can the regression equation reflect the relationship among the factor indexes under the particular market and technology conditions; the method therefore requires a certain number of bidders. Since a tender generally attracts more than three bidders, a linear analysis of the main factors can at least be carried out; the method then degenerates to the traditional statistical weighting method, but the weight values are at least free from subjective judgment error, and the calculation is easier than in other methods. Where data are short, market research can make up the shortage. In fact, if only three or four suppliers bid, the evaluation is comparatively simple and the problem of trading off different indexes hardly arises. The third assumption says that the method applies only when a regression relation exists between the indexes to be balanced. Generally speaking, procurement takes price as the most important evaluation index. In economics, whether one holds that price relates to the value of the product and service or that price is decided by the balance of supply and demand, either standpoint supports a regression relation between price and the other indexes. Several points deserve note: a. in concrete use, a significance test can check whether the regression equation is tenable; b. in a bidding activity, the bids received constitute the whole market we can choose from (the sample is the population we can use), so the regression equation obtained from the sample need not describe suppliers outside it; c. even where no regression relation holds between certain evaluation indexes, they are usually jointly connected with other indexes — for instance, function and quality do not completely determine each other, but they codetermine the price — so a comprehensive regression analysis can still be carried out; d. the regression relation here concerns all bidders, not a single bidder. For a single bidder, how service is combined with product quality depends only on its own bidding strategy; but across all suppliers, a bidder whose product quality is good need not worry about that quality and is therefore willing to provide after-sales service commitments, and it also has the real strength to honour them.

In fact, each pair of factors has a regression relation; this is also the foundation on which the traditional weight-based method defines its linear relation — without such a connection, bid evaluation would be all but replaced by subjective adjudication. From the analysis of the assumptions we conclude that, although the new method is subject to many restrictions, it can be used in many bidding activities: it can serve as the main bid-evaluation method, and it can also play an auxiliary analytical role when other methods find the multi-factor tradeoff hard to carry on. One more point should be made: by the principles of mathematical statistics, the form of the regression (fitting) equation can be selected with a certain freedom, and different choices yield different equations; the significance level decides which one is suitable for the bid evaluation. But because the regression equation is only responsible for the sample constituted by the data in the bid documents, and is not used for forecasting, and because computer software can compute the significance quickly and accurately, the fitting effect is very clear as long as a proper equation model is chosen. The form of the equation therefore does not affect the final ranking result.

5. Concrete operation of the new method


Using regression analysis to carry out bid evaluation, the author puts forward three concrete approaches: comparison of the residuals, use of the equivalent-benefit curve, and setting up an indexes-related evaluation system.
5.1 Evaluation based on comparison of the residuals
During bid evaluation, the regression equation links the key factors to be traded off, and analysis of the residuals of the bids yields the evaluation result. The steps are as follows: a. Screening and expansion of the sample. Take the combination of all evaluation indexes of the bidders as the sample X = {Xj | Xj = (s1j, s2j, s3j, ..., skj)}, and add the market reference data held by the purchaser when needed. Meanwhile, screen out bidders who provide deceitful data. When the number of bids is large, some clearly inferior bids may be removed after an initial judgment in order to emphasize the difference between the excellent suppliers (for example, a bid whose indexes equal another supplier's but whose price is higher). Doing so, however, artificially lifts the overall performance level that the sample represents, so such samples should be retained when the analytical result is meant to stay close to the objective circumstances of the market. b. Fix one index (usually price) as the dependent variable s1 and one or several indexes as independent variables (s2, s3, ..., sk); apply the principles of mathematical statistics to the sample and use software such as Mathematica, Matlab or SAS to find the regression equation s1 = f(s2, s3, ..., sk).
c. Substitute the j-th bid's indexes (s1j, s2j, s3j, ..., skj) into the equation to obtain the fitted value ŝ1j, the residual (s1j − ŝ1j) and the relative residual (s1j − ŝ1j)/s1j. The residuals necessarily include both negative and positive ones. d. Take the relative residual as the evaluation value: if a larger s1 is better, then a larger (s1j − ŝ1j)/s1j is better; in the contrary case the analogous rule applies.
5.2 Evaluation by using equivalent-benefit curve
As shown above, although the points on the equivalent-benefit curve correspond to different procurement outcomes (buying a high-quality product at a high price, or a lower-quality product at a low price), they are equal in the degree of "worth-for". Considering the meaning of the tradeoff between the indexes, this curve (l1) also divides the whole sample space B (namely all bids) into B1, B2 and B3 (B = B1 ∪ B2 ∪ B3):

B1 = {Xj | (s1j − ŝ1j) > 0, Xj ∈ X}    (5)
B2 = {Xj | (s1j − ŝ1j) = 0, Xj ∈ X}    (6)
B3 = {Xj | (s1j − ŝ1j) < 0, Xj ∈ X}    (7)
If s1 is a smaller-is-better factor (procurement performance is negatively related to it — the quoted price, for example), then from the standpoint of procurement benefit B3 is better than B2, and B2 is better than B1. We can then submit B3 (or the sample within B3 ∪ B2) to the regression analysis again, find another equivalent-benefit curve l2 that stands for a more excellent supplier community, and subdivide B3 (or B3 ∪ B2) further; repeating this until the remaining set Bi can no longer support a regression analysis. This completes the ranking of all bids and provides a further basis for choosing the winner.
5.3 Evaluation with associated-standard system
The regression method can be used on its own to trade off two or more indexes, whether they are linearly or nonlinearly correlated; beyond this, it can be combined with other methods — for example, taking the traditional weight-based method as the main way to evaluate the bid and this method as an aid for analysing the factors that are hard to trade off. The indexes-related evaluation is one such mode. In that scoring method, each index is scored and the weighted values are added to obtain each supplier's grade; but for some of the indexes the scoring standard should be dynamic, built on an associated-standard system. For example, when assessing the price s1j of the j-th bid, we do not apply a single unified standard (comparing all bidders' prices against the same price benchmark to calculate a mark); instead, a price standard is derived from the regression equation using the other related indexes (s2j, s3j, ..., skj) of that bid — its quality, function, capability, after-sales service and so on as stated in the bid documents. That is to say, the bidders' prices are assessed not simply by how high or low the quote is, but by whether the quote is reasonable in comparison with the product's quality, function, after-sales service, etc.
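A rough sketch of the iterative demarcation of section 5.2 follows, assuming numpy; the linear curve form and the sample bids are illustrative assumptions, and each round peels off the group above the curve (B1) and refines the rest (B3 and B2).

    import numpy as np

    def tier_bids(quality, price, min_points=3):
        remaining = np.arange(len(price))
        peeled = []                             # groups peeled off, worst first
        while len(remaining) >= min_points:
            coeffs = np.polyfit(quality[remaining], price[remaining], deg=1)
            resid = price[remaining] - np.polyval(coeffs, quality[remaining])
            above = remaining[resid > 0]        # B1: priced above the curve
            below = remaining[resid <= 0]       # B3 and B2: candidates for refining
            if len(above) == 0 or len(below) == 0:
                break
            peeled.append(above)
            remaining = below
        peeled.append(remaining)                # the most favourable community
        # Tier 1 is the group isolated last, i.e. the best worth-for bids.
        return {int(i): t for t, grp in enumerate(reversed(peeled), 1) for i in grp}

    quality = np.array([25.0, 26.0, 27.5, 28.0, 29.0, 30.5, 31.0, 32.0])
    price = np.array([8.6, 9.1, 9.4, 9.3, 10.2, 10.4, 10.9, 11.0])
    print(tier_bids(quality, price))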

6. A simple example
An enterprise invites bids for a batch of components, aiming to keep the price between $9 and $11. Eleven suppliers quote and provide the main technical indexes in their bid documents, and all meet the requirements. In addition, the purchaser holds three groups of related market data, which for convenience are treated as three conjectured bidders. Here we use only the residuals to trade off two factors. Let s1 be the quoted price and s2 the performance evaluation mark. After preparing the sample, Mathematica 4.0 is used for the regression calculation and tests; the data are given in Tab.1:
Tab.1 Data of the example

Bidder    s1       s2      (s1 − ŝ1)     (s1 − ŝ1)/s1    Rank
1         8.5      25      -0.0239398    -0.00281645      6
2         9.35     26.75   -0.023916     -0.00255786      7
3         9.50     27       0.03299       0.00347263     10
4         9.70     27.25    0.145798      0.0150307      14
5         9.75     28      -0.034307     -0.00351866      5
6         9.85     28.75   -0.125471     -0.0127382       2
7         9.975    29.5    -0.163325     -0.0163734       1
8         10.3     30       0.0635847     0.00617327     11
9         10.45    30.5     0.120202      0.0115026      13
10        10.5     31       0.078377      0.00746448     12
11        10.575   32      -0.0381971    -0.00361202      4
12        10.65    32.5    -0.069246     -0.00650197      3
13        10.8     32.75    0.023786      0.00220241      9
14        10.85    33       0.0136642     0.00125937      8
The Mathematica 4.0 input statements and outputs for the problem are listed in Fig.3, giving

ŝ1 = −117.342 + 12.0945·s2 − 0.387388·s2² + 0.00419969·s2³    (8)
P-value = 4.13064 × 10⁻⁹    (9)

As equation (9) shows, the significance of regression equation (8) is very strong, indicating that the evaluation result is rather accurate. The relative residual is used as the basis of the bid evaluation; because price is a smaller-is-better evaluation factor, negative relative residuals rank first. From Tab.1, bids 7 and 6 are the best in function-price comparison, and the prices offered by bids 5, 1, 2, 11 and 12 are all reasonable. Although some of the other bids have advantages in technical function, their quotes exceed the value of their products. Only two factors are traded off in this example, but general-purpose mathematical software (such as Mathematica 4.0) and data-analysis software (such as SAS) can both handle higher-order multivariate regression with several unknowns, so more than two factors can be traded off in the same way; further examples are unnecessary here.
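For a concrete check, the fit of equation (8) and the ranking of Tab.1 can be reproduced with ordinary least squares. The sketch below assumes numpy; np.polyfit returns a least-squares cubic, so the coefficients and ranks should come out close to those reported, though not necessarily to the last digit.

    import numpy as np

    # The 14 (s2, s1) pairs of Tab.1 (11 real bidders plus 3 conjectured ones).
    s2 = np.array([25, 26.75, 27, 27.25, 28, 28.75, 29.5,
                   30, 30.5, 31, 32, 32.5, 32.75, 33])
    s1 = np.array([8.5, 9.35, 9.50, 9.70, 9.75, 9.85, 9.975,
                   10.3, 10.45, 10.5, 10.575, 10.65, 10.8, 10.85])

    coeffs = np.polyfit(s2, s1, deg=3)          # cubic fit, as in equation (8)
    rel_resid = (s1 - np.polyval(coeffs, s2)) / s1

    # Price is a smaller-is-better index, so the most negative relative residual
    # (a quote furthest below the market-implied price) ranks first.
    for rank, j in enumerate(np.argsort(rel_resid), start=1):
        print("rank %2d: bidder %2d, relative residual %+.5f"
              % (rank, j + 1, rel_resid[j]))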

7. Conclusion
The method put forward in this paper can be used on its own to trade off two or several factors, and it handles indexes whether they are linearly or nonlinearly correlated. It can also be combined with other methods — for example, taking the traditional indexes-and-weights method as the main way to evaluate the bid and this method as an aid for analysing the factors that are hard to trade off. A weighted-indexes evaluation can still be used, but with dynamic standards: when assessing the price index s1j, the standard should be derived from the regression equation using the other related indexes (s2j, s3j, ..., skj). That is to say, the bidders' prices are assessed not simply by how high or low the quote is, but by whether the quote is reasonable in comparison with the product's quality, function, after-sales service, etc.

Fig.3 The inputs and outputs of the example in Mathematica




Monotonic Vector Space Model and Its Partition Algorithms


Hu Jianwen, Hu Xiaofeng, Zu Shuguang, Si Guangya
Strategy Simulation Teaching Department, National Defense University, Beijing 100091, China Correspondence should be addressed to Hu Jianwen (email: hjwc3i@sina.com)

Abstract: This paper presents the monotonic vector space (MVS) model, a special vector space on which certain monotonic functions exist. The model has wide application in image processing, system capability engineering, etc. The paper mainly deals with the requirement-based partition operation in MVS and proposes two algorithms: one for continuous MVS, based on hyperbox approximation, and one for discrete MVS, based on greedy tactics. Both algorithms partition an MVS with high efficiency according to specified requirements.
Key words: Monotonic vector space, Hyperbox approximation, Greedy tactics, Partition algorithm

1.INTRODUCTION
Monotonic vector space (MVS) is a special vector space on which certain monotonic functions exist. Many practical problems can be considered as operations in an MVS. For example, if the vector space is composed of system capability indexes [1], capability index requirement analysis is a partition operation in the MVS. The screening operation in MVS can be applied to sensitivity analysis of system capability indexes [2][3]. The Latin hypercube sampling [4] operation in MVS can greatly reduce variance and is therefore highly valuable for simulation experiments in system capability engineering. The main topic of this paper is the requirement-based partition algorithm, which can be applied to image processing, system capability engineering, etc.

2. MONOTONIC VECTOR SPACE

Definition 1. Monotonic vector space P (MVS). Rⁿ is n-dimensional Euclidean space. Let P ⊆ Rⁿ be an n-dimensional subspace of Rⁿ. There exists a function fu : P → u, where u is a real variable and fu is monotonic. Each dimension may be of continuous or discrete type. In most practical situations the function fu is a black box: a direct explicit expression is very difficult to obtain, but the output can be calculated by numerical methods such as simulation or numerical differential equations. MVS is an abstract model for many practical problems. For example, a monotonic vector space P for a ground-to-air missile system may include the searching-radar detecting capability index dimension P1, the delay-time index dimension P2, and the tracking-radar detecting capability index dimension P3. The equation u = fu(p1, p2, p3) then describes the relation between the three system indexes (P1, P2, P3) and one requirement index (the kill probability u). The function fu is obviously monotonic; that is, the value of u (the kill probability) increases or decreases monotonically as one of the three system indexes changes monotonically while the others are held constant. We call p1 and p3 monotonic increasing indexes and p2 a monotonic decreasing index. The requirement-based partition operation in MVS is defined as follows: for the function fu : P → u and a specified requirement value uk, the operation partitions the MVS into a requirement-meeting zone and a non-meeting zone; we call the requirement-meeting zone the monotonic vector requirement locus (MVRL). In the example above, the MVRL is the requirement-meeting capability vector zone, which is critical for system analysis and design. For simplicity, assume all dimensions are monotonic increasing (a monotonic decreasing dimension can be transformed into a monotonic increasing one); the MVRL is then the point set satisfying fu(p1, p2, ..., pn) ≤ uk, uk ≥ 0, and each dimension pi is transformed to the interval [0, Vi].
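Because fu is monotonic but available only as a black box, most operations on an MVS reduce to binary searches over single coordinates. The following is a minimal sketch of that primitive; the toy function fu and all names are illustrative assumptions, not part of the paper.

    def boundary_on_axis(fu, point, axis, lo, hi, uk, tol=1e-6):
        """Smallest value on `axis` with fu(point) >= uk, other coordinates fixed."""
        p = list(point)
        while hi - lo > tol:
            mid = (lo + hi) / 2.0
            p[axis] = mid
            if fu(p) >= uk:
                hi = mid      # requirement reached: the boundary is at or below mid
            else:
                lo = mid      # requirement missed: the boundary is above mid
        return hi

    # Toy monotonic capability function; expects the boundary near sqrt(3) = 1.732.
    fu = lambda p: p[0] ** 2 + p[1] ** 2
    print(boundary_on_axis(fu, [0.0, 1.0], axis=0, lo=0.0, hi=2.0, uk=4.0))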


3. THE ALGORITHM FOR REQUIREMENT-BASED PARTITION OPERATION IN CONTINUOUS MVS

3.1 Some definitions and theorems
Given fu and the requirement constraints, producing the MVRL means finding the points that meet the requirements. Generally speaking, a direct explicit function is very difficult to obtain, so this paper deals mainly with obtaining the MVRL without one.
Definition 2. The minimal envelope hyperbox (MEH): the cross product of each dimension's interval [lower bound, upper bound]. For simplicity it is set to [0, V1′] × [0, V2′] × ... × [0, Vn′], subject to Vi′ ≤ Vi.

Assuming that x and y are the dimensions of an MVS and the MVRL is the point set satisfying x² + y² ≤ 4 (x ≥ 0, y ≥ 0), the field [0, 2] × [0, 2] is the minimal envelope hyperbox (MEH). A point inside the MEH is not bound to be in the MVRL, but any point outside the MEH is bound not to be in the MVRL.
Definition 3. The maximal equally-divided and included hyperbox (MIH) and its opposite hyperbox. Let the coefficient K ∈ [0, 1] be such that [0, K·V1] × [0, K·V2] × ... × [0, K·Vn] ⊆ Pl, while for any K′ > K the box [0, K′·V1] × [0, K′·V2] × ... × [0, K′·Vn] is not contained in Pl; this box is the maximal equally-divided and included hyperbox, and (K·V1, V1] × (K·V2, V2] × ... × (K·Vn, Vn] is the corresponding opposite hyperbox. "Equally-divided" means using the common K to divide each dimension. In the example above, x² + y² ≤ 4 (x ≥ 0, y ≥ 0): the field [0, 2] × [0, 2] is the MEH, [0, √2] × [0, √2] is the MIH, (√2, 2] × (√2, 2] is the opposite hyperbox, and K = √2/2.
Theorem 1. Any point in the opposite hyperbox of the maximal equally-divided and included hyperbox does not meet the system requirement, i.e., is not in the MVRL.
3.2 The algorithm for the requirement-based partition operation
This algorithm recursively approximates the MVRL by maximal equally-divided hyperboxes. The steps are as follows: 1) Get the minimal envelope hyperbox (MEH). 1.1) k = 1. 1.2) All dimensions except the k-th are set to 0 (the minimal value). Owing to the monotonic property of fu, the binary method is applied to find the upper bound of the k-th dimension; that is, the worst feasible value of this dimension (its upper bound) is obtained while the other dimensions are set to their best values (their lower bounds). 1.3) If k = n, the cross product of the intervals [0, upper bound] of all dimensions is the minimal envelope hyperbox (MEH); go to 2). Else k = k + 1, go to 1.2). 2) Get the MIH. 2.1) Judge whether the volume of the MEH is smaller than the value E (the exit condition). If true, interpolate and exit. If a dimension's interval in the MEH is smaller than a certain value, that dimension need not be divided any more, so the number of dimensions decreases. 2.2) Obviously, by the monotonic property of fu, when K1 ≤ K2 we have fu(K1·V1, K1·V2, ..., K1·Vn) ≤ fu(K2·V1, K2·V2, ..., K2·Vn), so the binary method can be applied to find K (the dividing ratio of the MIH), and the MIH is thereby acquired. 2.3) Every dimension of the MEH is divided into two intervals, one inside the MIH and one outside; the inside interval is indexed by 0 and the outside one by 1. There are thus 2ⁿ interval combinations (n is the total number of dimensions), each element of which is a dimension's interval indexed 0 or 1 as above; the combinations correspond to the binary numbers from 0 to 2ⁿ − 1. Number 0 is the MIH itself (every element indexed 0): any point inside it meets the system requirement, so it is saved. Number 2ⁿ − 1 is the opposite hyperbox of the MIH (every element indexed 1): by Theorem 1 no point inside it meets the requirement, so it is removed. Each of the remaining combinations, numbers 1 to 2ⁿ − 2, is regarded as a new MEH and processed recursively from 2).

The process of the algorithm is illustrated in Fig.1. Assume the MVRL is the point set {(x, y) | x² + y² ≤ r²; x ≥ 0; y ≥ 0}; the rectangle with the dashed line is the MIH. First, using the monotonic property of fu, we search back and forth along the diagonal of the MEH by the binary method for the proper point that yields the MIH (indexed 00 in the figure). Then the opposite hyperbox (indexed 11 in the figure) is discarded, each of the other hyperboxes is regarded as a new MEH, and the process is performed recursively to obtain all the MIHs of which the MVRL is composed.

Fig.1 Process of acquiring the MVRL
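A condensed sketch of the recursive hyperbox approximation follows, assuming a monotonically increasing black-box requirement test; details such as the interpolation at exit and the dimension-dropping rule of step 2.1 are omitted for brevity, and the helper names are invented here.

    import itertools

    def partition(meets, box, E=1e-4, saved=None):
        # box: list of (lo, hi) intervals; meets(p) answers "is p in the MVRL?"
        if saved is None:
            saved = []
        if not meets([lo for lo, hi in box]):
            return saved                  # monotonicity: nothing in the box qualifies
        if meets([hi for lo, hi in box]):
            saved.append(box)             # monotonicity: the whole box qualifies
            return saved
        vol = 1.0
        for lo, hi in box:
            vol *= hi - lo
        if vol < E:
            return saved                  # below resolution E: stop refining
        # Binary search along the diagonal for the dividing ratio K of the MIH.
        klo, khi = 0.0, 1.0
        for _ in range(30):
            k = (klo + khi) / 2.0
            if meets([lo + k * (hi - lo) for lo, hi in box]):
                klo = k
            else:
                khi = k
        cuts = [(lo, lo + klo * (hi - lo), hi) for lo, hi in box]
        saved.append([(lo, mid) for lo, mid, hi in cuts])   # save the MIH (index 0)
        # Recurse on the 2^n - 2 mixed combinations; the all-ones combination
        # (the opposite hyperbox) is discarded by Theorem 1.
        for bits in itertools.product((0, 1), repeat=len(box)):
            if all(b == 0 for b in bits) or all(b == 1 for b in bits):
                continue
            sub = [(lo, mid) if b == 0 else (mid, hi)
                   for b, (lo, mid, hi) in zip(bits, cuts)]
            partition(meets, sub, E, saved)
        return saved

    # Toy MVRL {x^2 + y^2 <= 4, x, y >= 0} with MEH [0,2] x [0,2], as in the text.
    boxes = partition(lambda p: p[0] ** 2 + p[1] ** 2 <= 4.0, [(0.0, 2.0), (0.0, 2.0)])
    print(len(boxes), "hyperboxes approximate the MVRL")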

4. THE ALGORITHM FOR REQUIREMENT-BASED PARTITION OPERATION IN DISCRETE MVS

If every dimension of the MVS is discrete and divided into several segments, an effective partition algorithm can be given that finds the combinations meeting the specified requirements. Assume there are m dimensions in the MVS and every dimension is divided into n segments, so there are nᵐ combinations; the segments of each dimension are labeled in ascending order #1, #2, #3, ..., #n. The algorithm is as follows:
Step 1. For every combination, compute a representative value and place it in a combination table. The representative value is the expected number of combinations eliminated after this combination is evaluated. For example, assume m = 4 and n = 5. If the combination #3#3#3#4 meets the requirement, then 3 × 3 × 3 × 4 = 108 combinations meet the requirement (e.g. #1#1#1#1, #1#2#2#2, #3#3#3#3, etc.: every combination each of whose digits is less than or equal to the corresponding digit of #3#3#3#4 meets the requirement, because the requirement is fu(p1, p2, ..., pn) ≤ uk, uk ≥ 0, and fu is monotonic increasing; all these combinations are called its under-cut set). Similarly, if the combination does not meet the requirement, then 3 × 3 × 3 × 2 = 54 combinations do not meet it, such as #4#4#4#4, #4#4#3#2, #5#5#5#5, etc., which are called its up-cut set. Because it is impossible to judge beforehand whether a combination meets the requirement, the representative value of every combination must synthesize the under-cut and up-cut sets. It is calculated as follows. Let RV be the representative value; NUNC the current number of elements in the under-cut set; NUPC the current number in the up-cut set; INUNC the initial number in the under-cut set; INUPC the initial number in the up-cut set. Then

RV = NUPC × INUNC/(INUNC + INUPC) + NUNC × INUPC/(INUNC + INUPC)

For example, the initial RV of the combination #3#3#3#4 is 108 × 54/(108 + 54) + 54 × 108/(108 + 54) = 72.

When a combination is evaluated, if it meets the requirement, all elements of its under-cut set are cut; otherwise, all elements of its up-cut set are cut.
Step 2. Using greedy tactics, scan the combination table (indexed by a heap structure so that the combination with the maximal RV is found quickly) and select the combination with the maximal representative value; call it #I1#I2#I3#I4. (1) If #I1#I2#I3#I4 meets the requirement, all elements of its under-cut set are cut, so the under-cut set of every combination remaining in the table may shrink. The under-cut count of a combination #J1#J2#J3#J4 is adjusted as follows: let #Ki = min(#Ii, #Ji); the adjusted under-cut count of #J1#J2#J3#J4 equals its old under-cut count minus the under-cut count of #K1#K2#K3#K4. If #K1#K2#K3#K4 is not in the combination table, the under-cut count of #J1#J2#J3#J4 remains unchanged. (2) If #I1#I2#I3#I4 does not meet the requirement, all elements of its up-cut set are cut, so the up-cut set of every combination remaining in the table may shrink. The up-cut count of #J1#J2#J3#J4 is adjusted as follows: let #Ki = max(#Ii, #Ji); the adjusted up-cut count of #J1#J2#J3#J4 equals its old up-cut count minus the up-cut count of #K1#K2#K3#K4; if #K1#K2#K3#K4 is not in the combination table, the count remains unchanged.
Step 3. Adjust the combination table. If the selected combination meets the requirement, delete it and all elements of its under-cut set from the table; otherwise, delete it and its up-cut set. For example, if #3#3#3#4 meets the requirement, all elements of its under-cut set in the table, such as #1#1#1#1, #1#2#2#2, #3#3#3#3, etc., are deleted.
Step 4. With the adjusted RV of every combination, go to Step 2, until the combination table is empty.
An illustrative example is as follows. Assume there are four dimensions x, y, z, u in the MVS, each taking the values 2 (#1), 3 (#2), 4 (#3), 5 (#4), so the total number of combinations is 4⁴ = 256. The requirement condition is x² + y² + z² + u² ≤ 50. The above algorithm is applied to find quickly the combinations that meet the requirement condition.
Step 1: find the combination with the maximal RV in the combination table. Initially #3#3#2#2, with the maximal RV 36, is located.
Step 2: delete all elements of the under-cut set of #3#3#2#2 from the combination table.
Step 3: adjust the up-cut or under-cut sets of the remaining combinations in the table and recalculate their RVs. Taking #4#1#1#1 as an example, the adjustment proceeds as follows. Get the intersection combination of #4#1#1#1 and #3#3#2#2: min(#4#1#1#1, #3#3#2#2) = #3#1#1#1. Judge whether #3#1#1#1 is in the under-cut set; if it is, adjust the under-cut count and RV of #4#1#1#1 as follows: adjusted under-cut count of #4#1#1#1 = under-cut count of #4#1#1#1 − under-cut count of #3#1#1#1 = 4 − 3 = 1. Then recalculate the RV on the basis of the new under-cut count; the adjusted RV is 4.7. The adjusted RVs of the other combinations are listed in the combination table of Fig.3.

Perform the above steps until the combination table is empty. In this example, after 67 iterations all the combinations that meet the requirement, such as #1#2#2#4, #2#1#2#4, ..., #3#3#2#2, etc., are acquired. Approximately only one quarter of all combinations are actually evaluated, so the algorithm is highly efficient.

Combination No   Under-cut set size   Up-cut set size   RV
#1#1#1#1                1                  256           1.99
#2#1#1#1                2                  192           3.96
#3#1#1#1                3                  128           5.86
#3#3#2#2               36                   36          36
#4#1#1#1                4                   64           7.53
#2#4#4#4              128                    3           5.86
#3#4#4#4              192                    2           3.96
#4#4#4#4              256                    1           1.99

Find the combination #3#3#2#2 with the maximal RV in the table, then delete all elements of its under-cut set (e.g. #1#1#1#1, #2#1#1#1, #3#1#1#1, #3#2#1#1, #3#2#2#1, #3#3#1#1, ...) from the table.

Fig.2 Combination table

Combination No   Under-cut set size          Up-cut set size   RV
#4#1#1#1          1 (has been decreased)           64           4.7
#4#2#1#1          2                                48           8.57
#4#3#1#1          3                                32          10.9
...               ...                              ...          ...
#4#4#4#4        220                                 1           1.85

Fig.3 Adjusted combination table
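The greedy discrete partition can be sketched compactly as below. For brevity the representative value is computed from the initial under-cut and up-cut sizes only (the dynamic adjustment of Step 2 and the heap index are omitted), so the evaluation count will differ somewhat from the 67 iterations reported; the function names are illustrative.

    import itertools

    def greedy_partition(meets, m, n):
        """Classify all n**m mark combinations with relatively few calls to `meets`."""
        def under(c):                       # size of the under-cut set of c
            p = 1
            for d in c:
                p *= d
            return p

        def up(c):                          # size of the up-cut set of c
            p = 1
            for d in c:
                p *= n - d + 1
            return p

        table = set(itertools.product(range(1, n + 1), repeat=m))
        good, bad, calls = set(), set(), 0
        while table:
            # Initial-count RV only: 2*under*up/(under+up) equals the paper's RV
            # before any adjustment; the dynamic Step 2 update is omitted here.
            c = max(table, key=lambda c: 2 * under(c) * up(c) / (under(c) + up(c)))
            calls += 1
            if meets(c):
                cut = {x for x in table if all(a <= b for a, b in zip(x, c))}
                good |= cut                 # whole under-cut set meets (monotonicity)
            else:
                cut = {x for x in table if all(a >= b for a, b in zip(x, c))}
                bad |= cut                  # whole up-cut set fails (monotonicity)
            table -= cut
        return good, bad, calls

    # The paper's example: marks #1..#4 stand for values 2..5 in each of the
    # four dimensions, with requirement x^2 + y^2 + z^2 + u^2 <= 50.
    meets = lambda c: sum((d + 1) ** 2 for d in c) <= 50
    good, bad, calls = greedy_partition(meets, m=4, n=4)
    print(len(good), "of 256 combinations meet the requirement;", calls, "evaluations")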


5. CONCLUSION
Monotonic vector space (MVS) is a special vector space on which certain monotonic functions exist. The model has wide application in image processing, system capability engineering, etc. This paper has mainly dealt with the requirement-based partition operation, and two algorithms have been proposed. Future research should include: applying SVM (support vector machines) [5] for high-efficiency partition operations in MVS; and other operations in MVS, such as sampling, screening, set operations and searching, which can be applied to many practical problems.
References

[1] Hu Jianwen, et al. A novel complex-system-view-based method for system effectiveness analysis: monotonic indexes space. Science in China, Series F, 2006(1)
[2] Bettonvil, B., Kleijnen, J.P.C. Searching for important factors in simulation models with many factors: sequential bifurcation. European Journal of Operational Research, 1996, 96(1): 180-194
[3] Cheng, R.C.H. Searching for important factors: sequential bifurcation under uncertainty. Proceedings of the 1997 Winter Simulation Conference, edited by S. Andradottir, K.J. Healy, D.H. Withers and B.L. Nelson, pp. 275-280
[4] McKay, M.D., Beckman, R.J., Conover, W.J. A comparison of three methods for selecting values of input variables in the analysis of output from a computer code. Technometrics, 1979, 21(2): 239-245
[5] Cristianini, N., Shawe-Taylor, J. An Introduction to Support Vector Machines (and other kernel-based learning methods). Cambridge University Press, 2000

Fitting Analysis of the Airport Passenger Throughput Based on ARIMA Model


Jia Chuanliang1, Sun Ying2, Wang Lubin3, Ma Yanlin4
1 School of Management Science and Engineering, Central University of Finance and Economics, P.R.China, 100081 2 Institute of Policy and Management, Chinese Academy of Sciences, P.R.China, 100080 3 School of Distance, Central University of Finance and Economics, P.R.China, 100081 4 School of Information, Central University of Finance and Economics, P.R.China, 100081

Abstract The airport passenger throughput has been increasing rapidly in recent years in China. It is quite important for airline companies and airports to study the throughput and accurately forecast its future trend. In this paper the ARIMA model is used to study it, and the applicability of the model is discussed. A numerical example shows the course of fitting with the ARIMA model and gives the steps of testing. The outcome shows that the ARIMA model successfully describes the trend of the airport passenger throughput.
Key words Airport passenger throughput, ARIMA model, Non-stationary time series

1 Introduction
With the rapid development of the economy in China, the airplane is used more and more as an important means of transport. In 2006 the passenger throughput of the Capital Airport amounted to more than 48 million, about 20% more than in 2005. The rapid increase of the airport passenger throughput brings much income to airline companies and airports, but at the same time poses an enormous challenge to them, because the scale of an airport must satisfy the requirements of the passenger throughput, and the scheduling of flights by airline companies must adapt to its changes. Therefore the status of the airport passenger throughput needs to be analysed and its trend forecasted. The airport passenger throughput has been studied in many articles, and similar research exists on highway passenger transport volume, power generation, etc. Methods such as artificial neural networks [1, 2], Holt-Winters [3], wavelet analysis [4] and the grey method [5] are used in forecasting, but the applicability of these methods to the airport passenger throughput remains under discussion, which may seriously affect their effectiveness. In this paper an ARIMA model is built to describe the changes of the airport passenger throughput, taking the Capital Airport as an example, and the applicability of the model is discussed.

2 ARIMA Model
In 1970 Box and Jenkins proposed the ARIMA (Autoregressive Integrated Moving Average) model [6], which is applicable to non-stationary time series. The principle of the model is as follows. First it transforms the non-stationary time series into a stationary one by differencing several times. Identification and estimation of the parameters of the stationary series follow; the values of p and q are obtained, and the series can then be fitted and forecasted according to the ARIMA(p, d, q) model. In essence the ARIMA model is the same as the ARMA model after the non-stationary series has been made stationary: the non-stationary series Yt ∈ {y1, y2, y3, ...} is transformed by differencing, such as ∇Yt = Yt − Yt−1, ∇²Yt = ∇(∇Yt) = ∇(Yt − Yt−1), and so on, into a stationary series Wt that follows the ARMA(p, q) model:

Wt = φ1·Wt−1 + φ2·Wt−2 + ... + φp·Wt−p + et − θ1·et−1 − θ2·et−2 − ... − θq·et−q    (1)

That is,

Φ(B)·Wt = Θ(B)·et    (2)

This research has been supported by Funds of CUFE, No: 06XY016.


Φ(B) = 1 − φ1·B − φ2·B² − ... − φp·B^p    (3)
Θ(B) = 1 − θ1·B − θ2·B² − ... − θq·B^q    (4)

The moduli of all the roots of Φ(B) = 0 and Θ(B) = 0 exceed 1. φ1, φ2, φ3, ..., φp are the autoregressive parameters and θ1, θ2, θ3, ..., θq are the moving-average parameters; et is a white-noise series, mutually independent and normally distributed. Wt is then an integrated autoregressive moving-average series, i.e. ARIMA(p, d, q). Thus the general formula of the ARIMA model is as follows:

Φ(B)·(1 − B)^d·Yt = Θ(B)·et    (5)


3 Foundation of ARIMA Model


In this course, the non-stationary time series is first transformed into a stationary one to which the ARMA model applies, and the model is then built on it. Here the passenger throughput of an airline company at the Capital Airport is taken as an example; the time span is from January 2004 to June 2004, the throughput is sampled every 3 days in each month, and the samples amount to 60. Fig.1 is drawn from the samples.

Fig.1 Passenger throughput from January 2004 to June 2004

The curve in Fig.1 shows an ascending trend, and some samples differ greatly. Therefore we take the logarithm of the passenger throughput and then the first difference; the outcome is shown in Fig.2.
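The transform just described is a one-liner in numerical software; a minimal sketch assuming numpy follows, with invented throughput figures standing in for the sampled series.

    import numpy as np

    throughput = np.array([21500.0, 23100.0, 22400.0, 26800.0, 30200.0, 29500.0])
    stationary = np.diff(np.log(throughput))   # logarithm, then first difference
    print(stationary)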

Fig.2 The outcome of first difference of passenger throughput


In Fig.2, the outcome shows a smooth trend and on the whole satisfies the hypotheses of the ARMA model. The autocorrelation function and partial autocorrelation function of the stationary time series are shown in the following figures.

Fig.3 Auto-correlation function


Fig.4 Partial auto-correlation function

Fig.3 and Fig.4 make clear that both the autocorrelation function and the partial autocorrelation function tail off, so the ARMA model is applicable. We can also see that the values at lags 1 and 2 lie beyond the confidence interval, so p and q cannot exceed 2. We therefore fit the time series with ARIMA(1,1,1), ARIMA(2,1,1), ARIMA(1,1,2) and ARIMA(2,1,2) separately; the best ARIMA model is chosen and used to forecast the passenger throughput. The usual way of judging the models is to compare the values of AIC (Akaike's Information Criterion) and BIC (Schwarz's Bayesian Criterion): the smaller the value, the better the model fits. Comparing the outcomes of the models, we find that the AIC and BIC of ARIMA(1,1,1) are the smallest, and all its statistics pass the t-test.
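The order-selection step can be sketched with the statsmodels library (an assumption of this example; the paper's output is SPSS-style). A simulated series stands in for the logged throughput, and the Ljung-Box check at the end mirrors the residual whiteness inspection of Section 4.

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA
    from statsmodels.stats.diagnostic import acorr_ljungbox

    rng = np.random.default_rng(0)
    y = np.cumsum(0.02 + rng.normal(scale=0.05, size=60))  # stand-in logged series

    fits = {order: ARIMA(y, order=order).fit()
            for order in [(1, 1, 1), (2, 1, 1), (1, 1, 2), (2, 1, 2)]}
    for order, res in fits.items():
        print(order, "AIC=%.2f  BIC=%.2f" % (res.aic, res.bic))

    best = min(fits, key=lambda o: fits[o].aic)   # smaller AIC/BIC fits better
    print("selected order:", best)
    # Whiteness check on the residuals of the chosen model.
    print(acorr_ljungbox(fits[best].resid, lags=[10]))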


Tab.1 Residual Diagnostics

Number of Residuals                           59
Number of Parameters                           2
Residual df                                   56
Adjusted Residual Sum of Squares           1.403
Residual Sum of Squares                    1.718
Residual Variance                           .024
Model Std. Error                            .156
Log-Likelihood                            26.572
Akaike's Information Criterion (AIC)     -47.144
Schwarz's Bayesian Criterion (BIC)       -40.911

Tab.2 Parameter Estimates

                      Estimates   Std Error     t       Approx Sig
Non-Seasonal   AR1      .328        .150       2.191       .033
Lags           MA1      .962        .118       8.142       .000
Constant                .005        .002       2.515       .015

Melard's algorithm was used for estimation.

4 Model Testing and Analysis


The validity of ARIMA(1,1,1) needs to be tested; the criterion is whether the residual series is white noise. The autocorrelation function and partial autocorrelation function of the residual series of ARIMA(1,1,1) are shown in Fig.5 and Fig.6.


Fig.5 Auto-correlation function of the residual series



Fig.6 Partial auto-correlation function of the residual series

In Fig.5 and Fig.6, the ACF and PACF values of the residual series all lie within the confidence interval, which shows that the residual series is white noise. It indicates that the difference between the fitted values of the model and the original values is not significant. The comparison of the values fitted by ARIMA(1,1,1) with the original values is shown in Fig.7.

Fig.7 Comparison of the fitting value and the original value

Fig.7 shows that the trend of the fitted curve of ARIMA(1,1,1) accords with that of the original values, and most of the points fit well. From the fitted and original values, the mean error is 8.27%; that is to say, the model is quite accurate.

5 Conclusions
By analysing the passenger throughput data of an airline company at the Capital Airport, an ARIMA(1,1,1) model is built in this paper to fit the scale of the passenger throughput. The model is quite simple to use and reflects the facts well, so it can be used by airline companies and airports to schedule flights and costs.
References

[1] Petrowski, A. A pipelined implementation of the back-propagation algorithm on a parallel machine. Artificial Neural Network, 1991
[2] Xiao Bin, Liu Lu, Liao Cheng. Artificial neural networks method for predicting the airport passenger throughput. Aeronautical Computer Technique, 2000, 30(3): 8-11 (in Chinese)
[3] Li Xiaotong. Comparison of Holt-Winters models and X-11 models in prediction. Journal of Zhangjiakou Teachers College, 2006, 16(2): 70-73 (in Chinese)
[4] Liang Qiang, Fan Ying, Wei Yiming. A long-term trend forecasting approach for oil price based on wavelet analysis. Chinese Journal of Management Science, 2005, 13(1): 30-36 (in Chinese)
[5] Jiang Zhihua, Zhu Guobao. Grey model GM(1,1) and its application to predicting transportation volume. Journal of Wuhan University of Technology (Transportation Science & Engineering), 2004, 28(2): 305-307 (in Chinese)
[6] Peter J.B., Richard A.D. Time Series: Theory and Methods (translated by Zheng Tian). Beijing: Higher Education Press, 2001 (in Chinese)
[7] Wei Xue. Methods and Application of Statistical Analysis in SPSS. Beijing: Publishing House of Electronics Industry, 2004
[8] Voor D, Dougherty M, Watson S. Combining Kohonen maps with ARIMA time series models to forecast traffic flow. Transportation Research Part C, 1996, 4(5): 307-318
[9] Jiaxun Ni, Wei Yuan, Danhui Yi, et al. Applied Statistics. Beijing: China Renmin University Press, 1992 (in Chinese)
[10] Cihua Liu. Stochastic Process (2nd edition). Wuhan: Huazhong University of Science and Technology Press, 2001 (in Chinese)


Research on the Knowledge Reorganization Methods of Emergency Plans


Jia Xiaona, Rong Lili
Institute of Systems Engineering, Dalian University of Technology, Dalian, P.R. China, 116024

Abstract When emergencies break out, decision-makers should quickly obtain useful knowledge for effective command and assignment. Emergency plans usually serve as the action plans or guidelines for quick response to emergencies. So, if the emergency plans could be decomposed and reorganized into new knowledge according to the content of the documents, the relevant knowledge pieces could be supplied to decision-makers, which would certainly improve the ability of quick response to emergencies. To solve the problems above, a comprehensive classification of emergency plans is first proposed in this paper. Then we analyse the structure of the text, including its physical structure and logical structure, and obtain the characteristics of the text structure of emergency plans. Finally, we study methods of knowledge reorganization for emergency plans at different levels: the classification of emergency plans, the physical structure of the text and the logical structure of the text. Using the knowledge reorganization methods proposed in this paper, the objective of quickly obtaining the knowledge in emergency plans is achieved.
Key words Comprehensive classification of emergency plans, Physical text structure analysis, Logical text structure analysis, Knowledge reorganization

1. Introduction
The frequent occurrence of emergencies has brought much damage to society in recent years; under such circumstances, working out correct emergency plans is imperative for reducing the losses as far as possible. Research on the classification, constitution and dynamic management of emergency plans began earlier abroad, where the techniques and methods of quick response are comparatively mature (Linet and Ediz, 2004; Girgin and Unlu, 2005) [1][2]. Domestically, research on emergency plans is becoming a new topic of general interest: a survey of the scientific attention tendency on emergency plans in CNKI shows that research on emergency plans rose steadily from the beginning of 2002 to the end of 2005. The research mostly focuses on the classification (Wu Zongzhi and Liu Mao, 2003) [3], constitution and management (Zhong Kaibin and Zhang Jia, 2006; Wang Ting and Huang Chao, 2006; Liu Yongfa, 2005) [4][5][6], and formal description (Li Hongchen and Deng Yunfeng, 2006) [7] of emergency plans, among other topics.

Fig.1 Survey of the scientific attention tendency on emergency plans

When emergencies break out, decision-makers should quickly obtain useful knowledge for effective command and assignment. Emergency plans are the action plans, guidelines and directions for quick response to emergencies: they answer what the event is, how to deal with it, who will execute the response, when it will be done, what kinds of resources will be used, where those resources are, and so on, before, during and after the emergency. In critical conditions, the knowledge needed for decision-making — a document, a definition, a clause, a flow or a technique — should be obtained as soon as possible. So, if the emergency plans

This research has been supported by National Natural Science Funds of China (No: 70571011, 70431001 ).


could be decomposed and reorganized into new knowledge according to the content of the documents, the relevant knowledge pieces could be supplied to decision-makers, which would certainly improve the ability of quick response to emergencies. To solve the problems presented above, a way of comprehensive classification is first presented in this paper by integrating several existing classifications. Secondly, we analyse the text structure of emergency plans, including the physical and the logical structure of the text, to obtain the characteristics of the text structure of emergency plans. Thirdly, we discuss several methods of knowledge reorganization at different levels: the classification of emergency plans, the physical structure and the logical structure of the text. By using these knowledge reorganization methods, the objective of obtaining the knowledge in emergency plans is achieved, and the ability of quick response to emergencies is improved.

2. A comprehensive classification of emergency plans


After the release of the Master State Plan for Rapid Response to Public Emergencies on 8 January 2006, hundreds of emergency plans were released in succession. But many kinds of incidents may break out, such as SARS, terror attacks, avian influenza (bird flu), tsunami, hurricane, earthquake or air crash. If the emergency plans can be reasonably organized and classified — i.e., knowledge reorganization and creation [11] — the problems of isolation, intercrossing and contradiction among emergency plans can be effectively avoided; then, in different potential critical situations, decision-makers can obtain the related emergency plans quickly. This is also the foundation for setting up a base of emergency plans. There are many classifications of emergency plans at present. For instance, according to the classification of emergencies, they fall into four classes: emergency plans for Natural Disasters, Industry Accidents, Public Health Events and Social Security Incidents [6]. According to the system of emergency plans, they are classified into the Master State, the Special State, the Sectional State and the Master Local emergency plans. According to the level of the plan, they are classified into State, Provincial, Municipal, and County-level-and-below emergency plans.

Fig.2 A classification of the special state emergency plans

Analysing the classifications presented above, we find that all emergency plans except the master state and local plans can be classified into Natural Disaster, Industry Accident, Public Health Event and Social Security Incident emergency plans. For example, the Special State emergency plans can also be classified into four kinds, as shown in Fig.2. So we cluster the master emergency plans of all levels, state and local alike, into one separate class. Then, combining the way of classification presented in paper [6], we

classify emergency plans into five classes — Natural Disaster, Industry Accident, Public Health Event, Social Security Incident, and Master emergency plans — as shown in Fig.3. This is a good way to organize the emergency plans reasonably, and at the same time the problems of isolation, intercrossing and contradiction among the plans are avoided to a certain degree.

Fig.3 The comprehensive classification of emergency plans

3. Analysis of the text structure of emergency plans


3.1 Definition of structured text
What is structured text (Wang Jian and Zhou Zhiying, 2003) [8]? It is text that has a clear hierarchical structure. The structure of a text can be divided into the physical structure and the logical structure. The physical structure is the natural hierarchy of the text: for a book, the catalogue embodies the physical structure, consisting of chapters, sections and paragraphs. The logical structure is embodied by the contents and the manner of expression; it refers to the semantic characteristics of the text. Generally the paragraph is the smallest unit constituting the text, a given section consists of one or several paragraphs, and semantic relations exist among them. So, in contrast to the hierarchical division of the physical structure, the hierarchical division of the logical structure is more difficult; the hierarchical division of a text comprises both.
3.2 Characteristics of the text structure of emergency plans
Emergency plans are the action plans and guidelines for quick response to emergencies, and so the carrier of the knowledge supplied to decision-makers. When a certain emergency breaks out, we should help the decision-makers find the useful knowledge, which is a collection of chapters, sections or paragraphs; analysing the structural characteristics of the text of emergency plans is therefore necessary, as the foundation of the hierarchical division of the text. We have collected hundreds of emergency plans through the internet. As study samples we take 21 state-level and 21 province-level emergency plans, 42 plans in all. Analysing the chosen samples, we find that all the emergency plans are standard structured text with the same physical and logical structure. As to the physical structure, all the emergency plans have a fixed organization, namely a clear tree hierarchy of title, chapters, sections, paragraphs and text, the text in turn consisting of sentences and words; the physical structure is embodied in these levels of hierarchy, as shown in Fig.4. As to the logical structure, one subject is usually demonstrated by one or several paragraphs, organized according to the semantic relations among them and finally forming a whole knowledge piece; the knowledge piece is a collection of paragraphs based on analysis of the content of each paragraph. How are the knowledge pieces found? We must first analyse the semantic relations among paragraphs and reorganize the obtained paragraphs according to the semantics. The interrelated paragraphs may be distributed in different parts of

the same document, or in different documents. Taking one section of the State Earthquake Emergency Plan as an example, Fig.5 shows how interrelated paragraphs together demonstrate one subject.

Fig.4 Tree hierarchy structure of emergency plans text

Fig.5 Logical structural characteristics of emergency plans
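The tree of Fig.4 maps naturally onto a small recursive data structure. The sketch below, in Python, shows one way to hold a plan and enumerate its middle-level knowledge units; the plan titles and paragraph texts are invented for illustration.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Node:
        title: str
        paragraphs: List[str] = field(default_factory=list)
        children: List["Node"] = field(default_factory=list)

    # An invented fragment of a plan, organized as title -> chapters -> sections.
    plan = Node("State Earthquake Emergency Plan", children=[
        Node("1 General Rules", paragraphs=["Purpose, basis and scope ..."]),
        Node("2 Emergency Response", children=[
            Node("2.1 Grading", paragraphs=["Events are graded into four levels ..."]),
            Node("2.2 Command Organization", paragraphs=["The headquarters ..."]),
        ]),
    ])

    def knowledge_units(node, path=()):
        """Yield (hierarchy path, paragraph) pairs: the retrievable units."""
        for p in node.paragraphs:
            yield path + (node.title,), p
        for child in node.children:
            yield from knowledge_units(child, path + (node.title,))

    for path, para in knowledge_units(plan):
        print(" > ".join(path), "::", para)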

4. Methods of Knowledge Reorganization Based on Different Levels


The object of this paper is to supply useful knowledge to decision-makers; this is a process of knowledge discovery. What is knowledge discovery (Liu Liyang, 2000; Losiewicz and Oard, 2000) [9][10]? It is the overall process of identifying valid, novel, useful and understandable patterns in a data warehouse. Knowledge reorganization (Xu Shudong, 2002) [11] is an important way of discovering knowledge: it is a process of collecting, ordering and combining existing knowledge objects according to their attributes and categories. The knowledge unit, the smallest unit constituting one specific subject, is the basic object of knowledge reorganization, which is also a process of reorganizing knowledge factors: knowledge units are retrieved and reorganized, and a new knowledge warehouse system is formed. Since the knowledge unit is the foundation of reorganizing knowledge, its definition is very important; knowledge units can be defined at different levels — macro, middle and micro. In the previous sections we discussed the classification and the text-structure characteristics of emergency plans. Based on those conclusions, we now propose different methods of

knowledge reorganization at the different levels of the comprehensive classification, the physical text structure and the logical text structure, thereby providing useful knowledge to decision-makers for quick response to emergencies.
4.1 Knowledge reorganization based on the classification of emergency plans
At the macro level we regard a whole document as the smallest knowledge unit. The classification of emergency plans is itself a method of knowledge reorganization, called clustering reorganization: a mass of dispersed, stochastic, out-of-order knowledge is reorganized into a new knowledge system by classifying and clustering, and the reorganized knowledge can be used orderly and effectively in that system. In this way all the emergency plans are reorganized, and the plans related to a certain emergency can be obtained quickly. But taking a whole document as the smallest knowledge unit is coarse-grained reorganization: in a critical situation the decision-makers need the useful knowledge pieces — a definition, a clause, a flow or a technique — not a whole document. So, to satisfy the knowledge needs of users, knowledge should be reorganized at a deeper level.
4.2 Knowledge reorganization based on the physical structure of emergency plans
We have analysed the physical structure of emergency plans and obtained its characteristics. At the middle level, a chapter, section or paragraph that itself accounts for a clear subject can be seen as a single knowledge unit, and we regard one paragraph as the smallest knowledge unit. Building on the classification of emergency plans, we reorganize the knowledge according to the physical text structure. The decision-makers can then obtain one or more chapters, sections or paragraphs of a document that carry a naturally clear subject, supporting higher-level decision-making for quick response to emergencies. We take the Operational Mechanism part of the Master State Plan for Rapid Response to Public Emergencies as an example of obtaining knowledge nodes at different levels of the hierarchy — a definition, a flow, a technique and so on — as shown in Fig.6.

Fig.6 The natural hierarchy structure of Operational Mechanism in Master State Plan for Rapid Response to Public Emergencies
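To make the middle-level reorganization concrete, the following minimal sketch splits a plan text into paragraph-level knowledge units tagged with their chapter and section context. The heading patterns and the paragraph delimiter are illustrative assumptions, not conventions taken from the plans themselves.

import re

def split_into_units(plan_text):
    # Paragraphs (blank-line separated) are the smallest knowledge units;
    # each unit keeps the chapter/section headings it falls under.
    units = []
    chapter = section = None
    for block in plan_text.split("\n\n"):
        block = block.strip()
        if re.match(r"^Chapter\s+\d+", block):   # assumed chapter heading style
            chapter, section = block, None
        elif re.match(r"^\d+\.\d+\s", block):    # assumed section numbering style
            section = block
        elif block:
            units.append({"chapter": chapter, "section": section, "text": block})
    return units

A retrieval layer can then filter units by chapter or section so a decision-maker receives exactly the fragment asked for instead of the whole document.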

However, this kind of knowledge reorganization has notable defects. The retrieved knowledge consists of sets of chapters, sections, paragraphs and texts within the document, and it lacks consideration of the semantic relations among them. But the decision-makers usually need to know more concrete and detailed knowledge.

4.3 Knowledge Reorganization Based on Logical Structure of Emergency Plans
The text of an emergency plan has its own traits in logical structure. In general, a clear subject is usually explained by one or more paragraphs, which together constitute a knowledge piece according to their semantic relations. The interrelated paragraphs may be distributed in different parts of the same document, or distributed across different documents. At the micro-level, we regard the paragraph as the smallest knowledge unit, based on the comprehensive classification and the physical hierarchy division of emergency plan texts. By analyzing the logical characteristics of the text, the semantic relations among interrelated paragraphs are created. Finally, the knowledge is reorganized at the micro-level.

As we can see, the semantic relation among paragraphs is the most important element in the process of knowledge reorganization by the logical structural traits of the text. So, how do we create the semantic relations? Obviously, what kind of knowledge to provide is driven by the requirements of users. The answer to a user's question is often a set of interrelated paragraphs, i.e., a knowledge piece, whose members have a semantic relation in logic. The questions themselves are extracted from the text of the emergency plans; they come from the content of the text. So we can extract all the potential questions from the emergency plans according to a certain method. In terms of their attributes and categories, the questions are collected, reordered and reorganized. This is a method of reorganizing knowledge at the logical level. Using this method, the documents of emergency plans are transformed into a knowledge system that the decision-makers can use directly at the micro-level (a small sketch follows Fig.7).

Fig.7 Knowledge reorganization based on logical structure of texts
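A minimal sketch of the question-driven linking described above: an index maps each extracted question to the identifiers of the semantically related paragraphs, so a query returns a knowledge piece rather than a whole document. The question strings and paragraph identifiers are purely illustrative.

from collections import defaultdict

index = defaultdict(list)                 # question -> related paragraph ids

def register(question, paragraph_ids):
    # Group paragraphs under the question they jointly answer.
    index[question].extend(paragraph_ids)

register("Who declares a level-I emergency?", ["plan-A:3.2", "plan-B:1.4"])
register("What is the evacuation flow?", ["plan-A:5.1", "plan-A:5.2"])

def answer(question):
    # The returned paragraph set is the reorganized knowledge piece.
    return index.get(question, [])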

5. Conclusion
In this paper, a comprehensive classification of emergency plans is first presented. Then we summarize the characteristics of the text structure by analyzing the physical and logical structures of emergency plans. Finally, we discuss methods of knowledge reorganization for emergency plans at different levels, based on the classification of emergency plans, the physical structure of the text and the logical structure of the text. Using the knowledge reorganization methods proposed in this paper, the objective of quickly obtaining the knowledge in emergency plans is achieved, and the ability of quick response to emergencies is improved.
References

[1] Linet Özdamar, Ediz Ekinci, Beste Küçükyazici. Emergency Logistics Planning in Natural Disasters. Annals of Operations Research, 2004, 129(4): 217-245
[2] S. Girgin, K. Unlu, U. Yetis. Use of GIS as a Supporting Tool for Environmental Risk Assessment and Emergency Response Plans. Comparative Risk Assessment and Environmental Decision Making, 2005(38): 267-274
[3] Wu Zongzhi, Liu Mao. Gradation and Categorization System of Emergency Plan for Major Accidents and Their Main Contents. China Safety Science Journal, 2003(1): 15-18 (in Chinese)
[4] Zhong Kaibin, Zhang Jia. Discussion on Constitution and Management of Emergency Plans. Social Sciences of Gansu, 2006(3): 240-243 (in Chinese)
[5] Wang Ting, Huang Chao. Research on the Constitution of Serious Accident Plans for Police Officers. Journal of Jiangsu Police Officer College, 2006(5): 9-12 (in Chinese)
[6] Liu Yongfa. Emergency Plans and Necessities for Constituting Emergency Plans. Disaster Reduction in China, 2005(19): 33-35 (in Chinese)
[7] Li Hongchen, Deng Yunfeng, Liu Yanjun. Formal Description of Emergency Plans. Journal of Safety Science and Technology, 2006, 2(4): 29-34 (in Chinese)
[8] Wang Jian, Zhou Zhiying, Xiao Huiyong. The Design and Implementation of Structured Text Retrieval System. Engineering and Application of Computer, 2003(19): 133-135 (in Chinese)
[9] Liu Liyang. Data Mining and Knowledge Discovery in Database. Henan Radio Television University, 2000(3): 42-43 (in Chinese)
[10] Losiewicz P., Oard D. W., Kostoff R. N. Textual Data Mining to Support Science and Technology Management. Journal of Intelligent Information Systems, 2000(15): 99-119
[11] Xu Shudong. Knowledge Creation and Knowledge Reorganization. Journal of Information, 2002(6): 24-25 (in Chinese)


Relative Closeness Method for MAGDM with Heterogeneous Information


Li Dengfeng
Department Five, Dalian Naval Academy, Dalian, Liaoning 116018, P. R. China

Abstract The aim of this paper is to develop a new methodology for fuzzy multi-attribute group decision making (FMAGDM) with heterogeneous information. In this paper, different distances are used to measure the difference between an alternative and the ideal solution (IS) as well as the negative ideal solution (NIS). A new relative closeness (RC) method for FMAGDM is developed by introducing a multi-attribute ranking index based on the particular measure of closeness to the IS. The RC method determines a compromise solution, providing a maximum group utility for the majority and a minimum individual regret for the opponent. The effectiveness of the proposed method is illustrated with a real example of whole-alternative evaluation of an equipment system. Key words Multi-attribute group decision making, Decision analysis, Linguistic variable, Fuzzy set

1. Introduction
Fuzzy multi-attribute group decision making (FMAGDM) problems are widespread in real-life decision situations[1][2][3]. The evaluation of different alternatives often involves multiple decision makers (DMs) from different areas with distinct knowledge, and multiple attributes need to be taken into account. These attributes may be of different natures, either quantitative or qualitative, and the assessment information provided by DMs can be vague or uncertain[4][5]. Many aspects of uncertainty in this type of problem have non-probabilistic characteristics since they are related to imprecision and vagueness in meaning. Therefore, linguistic descriptors, such as "likely" and "impossible", are used by DMs to describe an event. In such cases, the fuzzy linguistic approach provides a systematic way to represent linguistic variables in a natural decision-making procedure[6][7][8]. It does not require a DM to provide precise values of an alternative on qualitative attributes such as risk and reliability. So it can be used as a complementary tool to classical methods to deal with uncertainty; linguistic information is especially suitable for representing attributes in situations which are too complex or too ill-defined to be reasonably described in conventional quantitative expressions. Therefore, it is not uncommon that decision frameworks for modeling these problems are heterogeneous, namely, the assessments provided by DMs may be measured in different formats, such as real numbers and intervals for quantitative attributes, and linguistic terms and linguistic labels for qualitative ones, according to DMs' knowledge areas and the nature of the evaluated attribute. There exist several commonly used methods, such as the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) developed by Hwang and Yoon[2]. However, these methods can only deal with decision problems with homogeneous information. In this paper, a new relative closeness (RC) methodology for solving FMAGDM problems with heterogeneous information is proposed. In this methodology, different distances are used to measure the difference between an alternative and the ideal solution (IS) as well as the negative ideal solution (NIS). A multi-attribute ranking index is introduced based on the particular measure of closeness to the IS, which is developed from the weighted Minkowski distance used as an aggregating function in a compromise programming method.

2. FMAGDM problem with heterogeneous information

Assume that there exists a group of $K$ DMs $p_k$ ($k = 1,2,\ldots,K$), denoted by $P = \{p_1, p_2, \ldots, p_K\}$. The group has to choose one of, or rank, $n$ feasible alternatives $x_j$ ($j = 1,2,\ldots,n$) based on $m$ attributes $o_i$ ($i = 1,2,\ldots,m$), both quantitative and qualitative. Denote the alternative set by $X = \{x_1, x_2, \ldots, x_n\}$ and the attribute set by $O = \{o_1, o_2, \ldots, o_m\}$. Usually, $O$ is divided into four subsets $O_t$

This research has been supported by National Natural Science Funds of China (Differential Game Theoretical Models of Fire Allocation for Warship Formation and Decision System, No: 70571086).


($t = 1,2,3,4$), where $O_t$ is the attribute subset whose attribute values are linguistic terms, fuzzy numbers, interval values and real numbers, respectively.

Let $z_{ij}^k$ be the rating of $x_j$ ($j = 1,2,\ldots,n$) on $o_i$ ($i = 1,2,\ldots,m$) given by $p_k$ ($k = 1,2,\ldots,K$). Namely, $z_{ij}^k$ equals $a_{ij}^k \in S^k$ when $o_i \in O_1$, or $(a_{ij}^k, b_{ij}^k, c_{ij}^k, d_{ij}^k)$ when $o_i \in O_2$, or $[b_{ij}^k, c_{ij}^k]$ when $o_i \in O_3$, or $b_{ij}^k \in R^+$ when $o_i \in O_4$, where $S^k = \{s_0^k, s_1^k, \ldots, s_{g^k}^k\}$ is a linguistic term set given by $p_k$, and the semantics of each term $s_h^k$ ($h = 1,2,\ldots,g^k$) is given by a trapezoidal fuzzy number $\tilde{s}_h^k = (a_h^k, b_h^k, c_h^k, d_h^k)$ defined on the $[0,1]$ interval. Then the above FMAGDM problem can be concisely expressed as $Y^k = (z_{ij}^k)_{m\times n}$ ($k = 1,2,\ldots,K$), which is referred to as a fuzzy decision matrix and is usually used to represent the FMAGDM problem. Let $z_j^k = (z_{1j}^k, z_{2j}^k, \ldots, z_{mj}^k)^T$, which is sometimes regarded as alternative $x_j$.

Since the physical dimensions and measurements of the $m$ attributes are different, the $Y^k$ ($k = 1,2,\ldots,K$) need to be normalized using the following normalization method.

For ratings $z_{ij}^k = a_{ij}^k \in S^k$ ($o_i \in O_1$), multi-granular information needs to be unified, i.e., experts' preferences must be transformed into a single linguistic term set that we call the basic linguistic term set (BLTS), denoted by $S_0 = \{c_0, c_1, \ldots, c_T\}$. To do this, the granularity of the BLTS has to be as high as possible. Once the BLTS $S_0$ has been selected, the following multi-granular transformation function is applied to transform every linguistic value into a fuzzy set defined on $S_0$:

$$\tau: S^k \to F(S_0), \quad \tau(s_t^k) = \{\gamma_h / c_h \mid \gamma_h = \max_y \min\{\mu_{s_t^k}(y), \mu_{c_h}(y)\},\ h \in \{0,1,\ldots,T\}\} \quad (s_t^k \in S^k)$$

where $S^k = \{s_0^k, s_1^k, \ldots, s_{g^k}^k\}$ and $S_0 = \{c_0, c_1, \ldots, c_T\}$ with $T \ge g^k$, $F(S_0)$ is the set of fuzzy sets defined on $S_0$, and $\mu_{s_t^k}(y)$ and $\mu_{c_h}(y)$ are the membership functions of the fuzzy sets associated with $s_t^k \in S^k$ and $c_h \in S_0$, respectively. Then $z_{ij}^k = a_{ij}^k \in S^k$ is transformed into a fuzzy set

$$\tau(a_{ij}^k) = \{\gamma_{ijh}^k / c_h \mid \gamma_{ijh}^k = \max_y \min\{\mu_{a_{ij}^k}(y), \mu_{c_h}(y)\},\ h \in \{0,1,\ldots,T\}\} \quad (1)$$

Usually $\tau(a_{ij}^k)$ is concisely expressed as a vector $\tau(a_{ij}^k) = (\gamma_{ij0}^k, \gamma_{ij1}^k, \ldots, \gamma_{ijT}^k)$, still denoted as $r_{ij}^k$.

For a rating $z_{ij}^k = (a_{ij}^k, b_{ij}^k, c_{ij}^k, d_{ij}^k)$ ($o_i \in O_2$), it is normalized as follows according to profit attributes and cost attributes:

$$r_{ij}^k = \begin{cases} (a_{ij}^k / d_i^{\max},\ b_{ij}^k / d_i^{\max},\ c_{ij}^k / d_i^{\max},\ d_{ij}^k / d_i^{\max}) & \text{if } o_i \in O_2^p \\ (1 - d_{ij}^k / d_i^{\max},\ 1 - c_{ij}^k / d_i^{\max},\ 1 - b_{ij}^k / d_i^{\max},\ 1 - a_{ij}^k / d_i^{\max}) & \text{if } o_i \in O_2^b \end{cases} \quad (2)$$

where

$$d_i^{\max} = \max\{d_{ij}^k \mid z_{ij}^k = (a_{ij}^k, b_{ij}^k, c_{ij}^k, d_{ij}^k),\ j = 1,2,\ldots,n;\ k = 1,2,\ldots,K\} \quad (o_i \in O_2)$$

and

$$a_i^{\min} = \min\{a_{ij}^k \mid z_{ij}^k = (a_{ij}^k, b_{ij}^k, c_{ij}^k, d_{ij}^k),\ j = 1,2,\ldots,n;\ k = 1,2,\ldots,K\} \quad (o_i \in O_2)$$

In a similar way, a rating $z_{ij}^k = [b_{ij}^k, c_{ij}^k]$ ($o_i \in O_3$) is normalized as follows:

$$r_{ij}^k = \begin{cases} [b_{ij}^k / c_i^{\max},\ c_{ij}^k / c_i^{\max}] & \text{if } o_i \in O_3^p \\ [1 - c_{ij}^k / c_i^{\max},\ 1 - b_{ij}^k / c_i^{\max}] & \text{if } o_i \in O_3^b \end{cases} \quad (3)$$

where

$$c_i^{\max} = \max\{c_{ij}^k \mid z_{ij}^k = [b_{ij}^k, c_{ij}^k],\ j = 1,2,\ldots,n;\ k = 1,2,\ldots,K\} \quad (o_i \in O_3)$$

and

$$b_i^{\min} = \min\{b_{ij}^k \mid z_{ij}^k = [b_{ij}^k, c_{ij}^k],\ j = 1,2,\ldots,n;\ k = 1,2,\ldots,K\} \quad (o_i \in O_3)$$

A rating $z_{ij}^k = b_{ij}^k$ ($o_i \in O_4$) is normalized as follows:

$$r_{ij}^k = \begin{cases} b_{ij}^k / b_i^{\max} & \text{if } o_i \in O_4^p \\ 1 - b_{ij}^k / b_i^{\max} & \text{if } o_i \in O_4^b \end{cases} \quad (4)$$

where $b_i^{\max} = \max\{b_{ij}^k \mid j = 1,2,\ldots,n;\ k = 1,2,\ldots,K\}$ and $b_i^{\min} = \min\{b_{ij}^k \mid j = 1,2,\ldots,n;\ k = 1,2,\ldots,K\}$ ($o_i \in O_4$).

Thus, $Y^k$ ($k = 1,2,\ldots,K$) is transformed into a normalized matrix, which can be concisely expressed as $R^k = (r_{ij}^k)_{m\times n}$. Denote $r_j^k = (r_{1j}^k, r_{2j}^k, \ldots, r_{mj}^k)^T$, which is sometimes regarded as $x_j$.
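As a minimal sketch of the normalization step, the following Python fragment implements the profit-attribute branches of Eqs. (2)-(4); the array layouts and function names are our own illustrative choices, not the author's code.

import numpy as np

def normalize_trapezoid(fuzzy_ratings):
    # fuzzy_ratings: array of shape (n, 4); rows are (a, b, c, d) trapezoids
    d_max = fuzzy_ratings[:, 3].max()     # d_i^max over all DMs and alternatives
    return fuzzy_ratings / d_max          # Eq. (2), profit attribute

def normalize_interval(intervals):
    # intervals: array of shape (n, 2); rows are [b, c]
    c_max = intervals[:, 1].max()         # c_i^max
    return intervals / c_max              # Eq. (3), profit attribute

def normalize_real(values):
    b_max = values.max()                  # b_i^max
    return values / b_max                 # Eq. (4), profit attribute

The cost-attribute branches follow the same pattern with the complement 1 - (value / maximum) applied component-wise.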
3. Relative closeness method for FMAGDM problem

The main principle of the RC method is that the FMAGDM problem with heterogeneous information is transformed into a fuzzy group decision problem, which is solved using a MADM method with each DM regarded as an attribute. The main method and process are summarized as follows.

Step 1: Construct the IS $x^{k+}$ and the NIS $x^{k-}$ for each DM. For $o_i \in O_1$, $\max\{a_{ij}^k \mid j = 1,2,\ldots,n\}$ is computed by the definition of the linguistic label set [5] and denoted by $z_i^{k+} = a_i^{k+}$. For $o_i \in O_2$, $\max\{(a_{ij}^k, b_{ij}^k, c_{ij}^k, d_{ij}^k) \mid j = 1,2,\ldots,n\}$ ($o_i \in O_2^p$) or $\min\{(a_{ij}^k, b_{ij}^k, c_{ij}^k, d_{ij}^k) \mid j = 1,2,\ldots,n\}$ ($o_i \in O_2^b$) is computed using some fuzzy ranking method according to profit and cost attributes, respectively, and denoted by $z_i^{k+} = (a_i^{k+}, b_i^{k+}, c_i^{k+}, d_i^{k+})$. In a similar way, $\max\{[b_{ij}^k, c_{ij}^k] \mid j = 1,2,\ldots,n\}$ ($o_i \in O_3^p$) or $\min\{[b_{ij}^k, c_{ij}^k] \mid j = 1,2,\ldots,n\}$ ($o_i \in O_3^b$) is computed using some interval number ranking method, and denoted by $z_i^{k+} = [b_i^{k+}, c_i^{k+}]$. Finally, $\max\{b_{ij}^k \mid j = 1,2,\ldots,n\}$ ($o_i \in O_4^p$) or $\min\{b_{ij}^k \mid j = 1,2,\ldots,n\}$ ($o_i \in O_4^b$) is computed and denoted by $z_i^{k+} = b_i^{k+}$. Let $z^{k+} = (z_1^{k+}, z_2^{k+}, \ldots, z_m^{k+})^T$ be the attribute value vector of the IS $x^{k+}$. In a similar way, let $z^{k-} = (z_1^{k-}, z_2^{k-}, \ldots, z_m^{k-})^T$ be the attribute value vector of the NIS $x^{k-}$.

Step 2: Normalize each fuzzy decision matrix. $Y^k$, $z^{k+} = (z_1^{k+}, z_2^{k+}, \ldots, z_m^{k+})^T$ and $z^{k-} = (z_1^{k-}, z_2^{k-}, \ldots, z_m^{k-})^T$ ($k = 1,2,\ldots,K$) are normalized using Eqs. (1)-(4); denote the normalized matrices and vectors as $R^k$, $r^{k+} = (r_1^{k+}, r_2^{k+}, \ldots, r_m^{k+})^T$ and $r^{k-} = (r_1^{k-}, r_2^{k-}, \ldots, r_m^{k-})^T$.

Step 3: Determine the group attribute weight vector. Let $\omega_i^k$ be the weight of attribute $o_i$ ($i = 1,2,\ldots,m$) given by $p_k$ ($k = 1,2,\ldots,K$). Then the group weight $\omega_i^G$ ($i = 1,2,\ldots,m$) is a solution of the following nonlinear programming problem:

$$\min\Big\{ z = \sum_{k=1}^{K} \sum_{i=1}^{m} (\omega_i^k - \omega_i^G)^2 \ \Big|\ \sum_{i=1}^{m} \omega_i^G = 1,\ \omega_i^G \ge 0\ (i = 1,2,\ldots,m) \Big\} \quad (5)$$

Using the Lagrange function method, the optimal solution of Eq. (5) is derived as follows:

$$\omega_i^G = \frac{1}{K} \sum_{k=1}^{K} \omega_i^k \quad (i = 1,2,\ldots,m) \quad (6)$$

Step 4: Compute the distance between each alternative and the IS as well as the NIS for each DM. The difference between the normalized values $r_{ij}^k$ of $x_j$ and $r_i^{k+}$ of $x^{k+}$ on $o_i \in O_1$ is described as a distance using a similarity measurement:

$$\delta_1(r_{ij}^k, r_i^{k+}) = |t_{ij}^k - t_i^{k+}| / T \quad (7)$$

where

$$t_{ij}^k = \sum_{h=0}^{T} h\,\gamma_{ijh}^k \Big/ \sum_{h=0}^{T} \gamma_{ijh}^k, \qquad t_i^{k+} = \sum_{h=0}^{T} h\,\gamma_{ih}^{k+} \Big/ \sum_{h=0}^{T} \gamma_{ih}^{k+} \quad (8)$$

Obviously, $0 \le \delta_1(r_{ij}^k, r_i^{k+}) \le 1$. The distance ($p$-power of the Minkowski distance) between the normalized values of $x_j$ and $x^{k+}$ on all $o_i \in O_1$ is defined as follows according to Eq. (7):

$$d_{1p}(r_{O_1 j}^k, r_{O_1}^{k+}) = \sum_{o_i \in O_1} \big(\omega_i^G\,\delta_1(r_{ij}^k, r_i^{k+})\big)^p \quad (9)$$

where $p$ is a distance parameter.

The difference between the normalized values of $x_j$ and $x^{k+}$ on $o_i \in O_2$ is described as a distance:

$$\delta_2(r_{ij}^k, r_i^{k+}) = \sqrt[p]{\big[(a_i^{k+} - a_{ij}^k)^p + 2(b_i^{k+} - b_{ij}^k)^p + 2(c_i^{k+} - c_{ij}^k)^p + (d_i^{k+} - d_{ij}^k)^p\big] / 6} \quad (10)$$

Obviously, $0 \le \delta_2(r_{ij}^k, r_i^{k+}) \le 1$. The distance between the normalized values of $x_j$ and $x^{k+}$ on all $o_i \in O_2$ is defined as follows according to Eq. (10):

$$d_{2p}(r_{O_2 j}^k, r_{O_2}^{k+}) = \sum_{o_i \in O_2} \big(\omega_i^G\,\delta_2(r_{ij}^k, r_i^{k+})\big)^p \quad (11)$$

The difference between the normalized values of $x_j$ and $x^{k+}$ on $o_i \in O_3$ is described as a distance:

$$\delta_3(r_{ij}^k, r_i^{k+}) = \sqrt[p]{\big[(b_i^{k+} - b_{ij}^k)^p + (c_i^{k+} - c_{ij}^k)^p\big] / 2} \quad (12)$$

Obviously, $0 \le \delta_3(r_{ij}^k, r_i^{k+}) \le 1$. The distance between the normalized values of $x_j$ and $x^{k+}$ on all $o_i \in O_3$ is defined as follows according to Eq. (12):

$$d_{3p}(r_{O_3 j}^k, r_{O_3}^{k+}) = \sum_{o_i \in O_3} \big(\omega_i^G\,\delta_3(r_{ij}^k, r_i^{k+})\big)^p \quad (13)$$

The difference between the normalized values of $x_j$ and $x^{k+}$ on $o_i \in O_4$ is described as a distance:

$$\delta_4(r_{ij}^k, r_i^{k+}) = |b_i^{k+} - b_{ij}^k| \quad (14)$$

Obviously, $0 \le \delta_4(r_{ij}^k, r_i^{k+}) \le 1$. The distance between the normalized values of $x_j$ and $x^{k+}$ on all $o_i \in O_4$ is defined as follows according to Eq. (14):

$$d_{4p}(r_{O_4 j}^k, r_{O_4}^{k+}) = \sum_{o_i \in O_4} \big(\omega_i^G\,\delta_4(r_{ij}^k, r_i^{k+})\big)^p \quad (15)$$

Using Eqs. (9), (11), (13) and (15), the distance between $x_j$ and $x^{k+}$ for $p_k$ is defined as follows:

$$d_p(r_j^k, r^{k+}) = \sqrt[p]{d_{1p}(r_{O_1 j}^k, r_{O_1}^{k+}) + d_{2p}(r_{O_2 j}^k, r_{O_2}^{k+}) + d_{3p}(r_{O_3 j}^k, r_{O_3}^{k+}) + d_{4p}(r_{O_4 j}^k, r_{O_4}^{k+})} \quad (16)$$

In a similar way, the distance between the normalized values of $x_j$ and $x^{k-}$ on all $o_i \in O_1$ is defined as follows:

$$d_{1p}(r_{O_1 j}^k, r_{O_1}^{k-}) = \sum_{o_i \in O_1} \big(\omega_i^G\,\delta_1(r_{ij}^k, r_i^{k-})\big)^p \quad (17)$$

where

$$\delta_1(r_{ij}^k, r_i^{k-}) = |t_{ij}^k - t_i^{k-}| / T, \qquad t_i^{k-} = \sum_{h=0}^{T} h\,\gamma_{ih}^{k-} \Big/ \sum_{h=0}^{T} \gamma_{ih}^{k-} \quad (18)$$

The distance between the normalized values of $x_j$ and $x^{k-}$ on all $o_i \in O_2$ is defined as follows:

$$d_{2p}(r_{O_2 j}^k, r_{O_2}^{k-}) = \sum_{o_i \in O_2} \big(\omega_i^G\,\delta_2(r_{ij}^k, r_i^{k-})\big)^p \quad (19)$$

where

$$\delta_2(r_{ij}^k, r_i^{k-}) = \sqrt[p]{\big[(a_{ij}^k - a_i^{k-})^p + 2(b_{ij}^k - b_i^{k-})^p + 2(c_{ij}^k - c_i^{k-})^p + (d_{ij}^k - d_i^{k-})^p\big] / 6} \quad (20)$$

The distance between the normalized values of $x_j$ and $x^{k-}$ on all $o_i \in O_3$ is defined as follows:

$$d_{3p}(r_{O_3 j}^k, r_{O_3}^{k-}) = \sum_{o_i \in O_3} \big(\omega_i^G\,\delta_3(r_{ij}^k, r_i^{k-})\big)^p \quad (21)$$

where

$$\delta_3(r_{ij}^k, r_i^{k-}) = \sqrt[p]{\big[(b_{ij}^k - b_i^{k-})^p + (c_{ij}^k - c_i^{k-})^p\big] / 2} \quad (22)$$

The distance between the normalized values of $x_j$ and $x^{k-}$ on all $o_i \in O_4$ is defined as follows:

$$d_{4p}(r_{O_4 j}^k, r_{O_4}^{k-}) = \sum_{o_i \in O_4} \big(\omega_i^G\,\delta_4(r_{ij}^k, r_i^{k-})\big)^p \quad (23)$$

where

$$\delta_4(r_{ij}^k, r_i^{k-}) = |b_{ij}^k - b_i^{k-}| \quad (24)$$

Using Eqs. (17), (19), (21) and (23), the distance between $x_j$ and $x^{k-}$ for $p_k$ is defined as follows:

$$d_p(r_j^k, r^{k-}) = \sqrt[p]{d_{1p}(r_{O_1 j}^k, r_{O_1}^{k-}) + d_{2p}(r_{O_2 j}^k, r_{O_2}^{k-}) + d_{3p}(r_{O_3 j}^k, r_{O_3}^{k-}) + d_{4p}(r_{O_4 j}^k, r_{O_4}^{k-})}$$
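The Step 4 distances can be sketched as follows, assuming the normalized ratings are already available per attribute type and that the ideal values dominate component-wise (so the differences in Eqs. (10) and (12) are non-negative); names and array layouts are illustrative.

import numpy as np

def delta_linguistic(r_ij, r_plus, T):
    # r_ij, r_plus: membership vectors (gamma_0, ..., gamma_T) on the BLTS; Eqs. (7)-(8)
    h = np.arange(T + 1)
    t_ij = (h * r_ij).sum() / r_ij.sum()
    t_plus = (h * r_plus).sum() / r_plus.sum()
    return abs(t_ij - t_plus) / T

def delta_trapezoid(r_ij, r_plus, p):
    # r_ij, r_plus: normalized trapezoids (a, b, c, d); Eq. (10)
    da, db, dc, dd = r_plus - r_ij
    return ((da**p + 2 * db**p + 2 * dc**p + dd**p) / 6) ** (1.0 / p)

def delta_interval(r_ij, r_plus, p):
    # r_ij, r_plus: normalized intervals (b, c); Eq. (12)
    db, dc = r_plus - r_ij
    return ((db**p + dc**p) / 2) ** (1.0 / p)

def delta_real(r_ij, r_plus):
    return abs(r_plus - r_ij)                 # Eq. (14)

def distance_to_ideal(deltas, weights, p):
    # deltas, weights: arrays over the m attributes; Eqs. (9), (11), (13), (15)-(16)
    return (((weights * deltas) ** p).sum()) ** (1.0 / p)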


Step 5: Compute the relative closeness degree of each alternative for each DM. Let

$$d_p^{\max}(r^{k+}) = \max_{1 \le j \le n}\{d_p(r_j^k, r^{k+})\}, \qquad d_p^{\min}(r^{k+}) = \min_{1 \le j \le n}\{d_p(r_j^k, r^{k+})\} \quad (25)$$

$$d_p^{\max}(r^{k-}) = \max_{1 \le j \le n}\{d_p(r_j^k, r^{k-})\}, \qquad d_p^{\min}(r^{k-}) = \min_{1 \le j \le n}\{d_p(r_j^k, r^{k-})\} \quad (26)$$

To reflect the differences between $x_j$ and the IS $x^{k+}$ as well as the NIS $x^{k-}$, define

$$\zeta_p(r_j^k) = \lambda^k\,\frac{d_p(r_j^k, r^{k-}) - d_p^{\min}(r^{k-})}{d_p^{\max}(r^{k-}) - d_p^{\min}(r^{k-})} + (1 - \lambda^k)\,\frac{d_p^{\max}(r^{k+}) - d_p(r_j^k, r^{k+})}{d_p^{\max}(r^{k+}) - d_p^{\min}(r^{k+})} \quad (27)$$

where $\lambda^k \in [0,1]$ is a compromise coefficient, which may be regarded as a weight of the decision strategy of closing to the IS $x^{k+}$. Obviously, $0 \le \zeta_p(r_j^k) \le 1$, and the larger $\zeta_p(r_j^k)$, the better the alternative $x_j$ (corresponding to $r_j^k$) for $p_k$. Let $\zeta_p(r_j^k)$ be denoted by $\zeta_{pj}^k$. Then the relative closeness degrees of each $x_j$ for all $p_k$ can be expressed concisely in matrix format as $\zeta_p = (\zeta_{pj}^k)_{K \times n}$.

Step 6: Compute the weight of each DM. Using the weights $\omega_i^k$ of $o_i$ ($i = 1,2,\ldots,m$) given by each $p_k$ ($k = 1,2,\ldots,K$) and the group attribute weights $\omega_i^G$ obtained in Step 3, the weight $w_k$ of each $p_k$ can be derived from the following nonlinear programming problem:

$$\min\Big\{ Q = \sum_{k=1}^{K} \sum_{i=1}^{m} \big[w_k(\omega_i^k - \omega_i^G)\big]^2 \ \Big|\ \sum_{k=1}^{K} w_k = 1,\ w_k \ge 0\ (k = 1,2,\ldots,K) \Big\} \quad (28)$$

Using the Lagrange function method, the optimal solution of Eq. (28) is derived as follows:

$$w_k = \Bigg\{ \sum_{t=1}^{K} \Bigg[ \sum_{i=1}^{m} (\omega_i^k - \omega_i^G)^2 \Big/ \sum_{i=1}^{m} (\omega_i^t - \omega_i^G)^2 \Bigg] \Bigg\}^{-1} \quad (29)$$

Step 7: Compute the relative closeness degree of each alternative. Let $\zeta_p^+ = (\zeta_{p1}^+, \zeta_{p2}^+, \ldots, \zeta_{pK}^+)^T$ and $\zeta_p^- = (\zeta_{p1}^-, \zeta_{p2}^-, \ldots, \zeta_{pK}^-)^T$ be the relative closeness degree vectors of the IS $x^+$ and the NIS $x^-$, respectively, where

$\zeta_{pk}^+ = \max\{\zeta_{pj}^k \mid j = 1,2,\ldots,n\}$ and $\zeta_{pk}^- = \min\{\zeta_{pj}^k \mid j = 1,2,\ldots,n\}$ ($k = 1,2,\ldots,K$).

The distance between $x_j$ ($j = 1,2,\ldots,n$) and $x^+$ as well as $x^-$ is defined as follows:

$$D_p(x_j, x^+) = \sqrt[p]{\sum_{k=1}^{K} \big[w_k(\zeta_{pk}^+ - \zeta_{pj}^k)\big]^p}, \qquad D_p(x_j, x^-) = \sqrt[p]{\sum_{k=1}^{K} \big[w_k(\zeta_{pj}^k - \zeta_{pk}^-)\big]^p} \quad (30)$$

Let

$$D_p^{\max}(x^+) = \max_{1 \le j \le n}\{D_p(x_j, x^+)\}, \qquad D_p^{\min}(x^+) = \min_{1 \le j \le n}\{D_p(x_j, x^+)\}$$

$$D_p^{\max}(x^-) = \max_{1 \le j \le n}\{D_p(x_j, x^-)\}, \qquad D_p^{\min}(x^-) = \min_{1 \le j \le n}\{D_p(x_j, x^-)\} \quad (31)$$

In a similar way to Eq. (27), the relative closeness degree of alternative $x_j$ is defined as follows:

$$\zeta_p(x_j) = \lambda\,\frac{D_p(x_j, x^-) - D_p^{\min}(x^-)}{D_p^{\max}(x^-) - D_p^{\min}(x^-)} + (1 - \lambda)\,\frac{D_p^{\max}(x^+) - D_p(x_j, x^+)}{D_p^{\max}(x^+) - D_p^{\min}(x^+)} \quad (32)$$

where $\lambda \in [0,1]$ is a compromise coefficient. Obviously, $0 \le \zeta_p(x_j) \le 1$, and the larger $\zeta_p(x_j)$, the better $x_j$. The ranking order of all $x_j$ ($j = 1,2,\ldots,n$) is generated according to the decreasing order of $\zeta_p(x_j)$, and the optimal alternative of the group is obtained.
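A minimal sketch of this closeness-and-ranking step in the style of Eqs. (25)-(27) and (31)-(32), assuming the distance vectors to the IS and NIS over the n alternatives have already been computed and are not all equal; the compromise coefficient defaults to 0.5 as an illustrative choice.

import numpy as np

def relative_closeness(d_plus, d_minus, lam=0.5):
    # d_plus, d_minus: distances of each alternative to the IS and NIS
    term_minus = (d_minus - d_minus.min()) / (d_minus.max() - d_minus.min())
    term_plus = (d_plus.max() - d_plus) / (d_plus.max() - d_plus.min())
    return lam * term_minus + (1 - lam) * term_plus

# Larger closeness is better, so rank by decreasing closeness:
# order = np.argsort(-relative_closeness(d_plus, d_minus))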

4. A Real Example
Consider a real problem of whole-alternative evaluation of an equipment system. In this problem, there exists a group of three experts (or DMs) which has to choose the best of three alternatives to be demonstrated on equipment systems, based on a two-level attribute system. In this attribute system, there exist 14 attributes in the first level, such as airspace, target, maneuverability, C3I and development risk, and 39 attributes in the second level, such as maximum and minimum combat height, radar echo area, velocity, working environment and technology risk. Data information is omitted. Using the RC method, alternative $x_2$ is selected as the best one for the group, and the ranking order of the three alternatives is $x_2 \succ x_1 \succ x_3$. Detailed computation is omitted.

5. Conclusion
There exist lots of FMAGDM problems with heterogeneous information, which may be expressed in different formats such as real numbers, intervals, linguistic terms and linguistic labels according to DMs' knowledge areas and the nature of the evaluated attributes. Such problems are solved using the new RC methodology of compromise ranking developed by introducing a multi-attribute ranking index based on the particular measure of closeness to the IS. The RC method determines a compromise solution, providing a maximum group utility for the majority and a minimum individual regret for the opponent. The implementation process, effectiveness and feasibility of the RC method proposed in this paper are illustrated with a real example of whole-alternative evaluation of an equipment system. It is expected that the RC method can be applied to many fields such as management and the military.
References

[1] Chen C. T. Extensions of the TOPSIS for group decision-making under fuzzy environment. Fuzzy Sets and Systems, 2000, 114: 1-9
[2] Hwang C. L., Yoon K. Multiple Attributes Decision Making: Methods and Applications. Berlin Heidelberg: Springer, 1981
[3] Li Dengfeng, Yang Jianbo. Fuzzy linear programming technique for multiattribute group decision making in fuzzy environments. Information Sciences, 2004, 158: 263-275
[4] Li Dengfeng. An approach to fuzzy multiattribute decision making under uncertainty. Information Sciences, 2005, 169(1-2): 97-112
[5] Martínez L., Liu Jun, Yang Jianbo, Herrera F. A multi-granular hierarchical linguistic model for design evaluation based on safety and cost analysis. International Journal of Intelligent Systems, 2005, 20(12): 1161-1194
[6] Opricovic S., Tzeng G.-H. Compromise solution by MCDM methods. European Journal of Operational Research, 2004, 156(2): 445-455
[7] Wang R.-C., Chuu S.-J. Group decision-making using a fuzzy linguistic approach for evaluating the flexibility in a manufacturing system. European Journal of Operational Research, 2004, 154(3): 563-572
[8] Zadeh L. A. Toward a generalized theory of uncertainty (GTU): An outline. Information Sciences, 2005, 172: 1-40


A Maxmin Model for Allocating the Fixed Cost Based upon DEA
Li Yongjun, Liang Liang
lionli@mail.ustc.edu.cn School of Management, University of Science and Technology of China, Hefei, 230052, China

Abstract Based on DEA (Data Envelopment Analysis), this paper studies how to allocate a fixed cost among decision making units (DMUs) in a reasonable way. It is proven that all DMUs can be DEA efficient if the allocated cost is treated as an additional input in the efficiency measurement. Based upon this conclusion, a maxmin allocation model is proposed from a global perspective, which uses a set of common weights to search for the DMU with the minimum allocated cost step by step until there is no flexibility left in allocating the fixed cost for any DMU. A comparison with traditional approaches follows. Key words DEA, Cost, Allocation, Maxmin

1. Introduction
Data envelopment analysis (DEA) has been proven an effective tool for performance evaluation and benchmarking since it was first introduced by Charnes et al. (1978). Recently, one of the most important applications of the DEA technique is to allocate a fixed cost among peer decision making units (DMUs). Cook and Kress (1999) first attempted to solve the problem within the DEA framework. In their approach, they presume that the allocated cost can be seen as a new input measure for all DMUs. Then, according to two principles, efficiency invariance and Pareto-minimality, an equitable allocation is achieved by solving several linear programming problems. However, in the general case, their approach relies on finding a single efficient DMU, e.g. via a cone-ratio approach (Charnes et al. 1990). Based upon Cook and Kress's theoretical foundation, Cook and Zhu (2005) extend the approach to other DEA models (CCR, BCC) with input and output orientations, and give a feasible (but not optimal) cost allocation. With the same assumption of treating the allocated cost as a new input, Beasley (2003) provides an alternative DEA-based cost allocation approach by maximizing the average efficiency across all DMUs and adding additional constraints and models to obtain a unique cost allocation. However, two shortcomings exist. One is that the proposed model, a non-linear program, causes computational difficulties for decision makers. The other is that the allocation may be difficult to operate, since there is a huge gap between the minimum allocated cost and the maximum one. The remainder of this paper is organized as follows. The next section develops models for characterizing and measuring the efficiency of each DMU taking into account its allocated cost, followed by the proof that all DMUs can be DEA efficient. Then, based upon this conclusion, a maxmin model is proposed in Section 3. In Section 4, the obtained allocation is compared with traditional approaches from the perspectives of rationality and allocation operation. Conclusions and directions for future research are given in the last section.

2. Efficiency evaluation

Suppose we have $n$ independent homogeneous decision making units, where each $DMU_j$ ($j = 1,2,\ldots,n$) consumes $m$ inputs $x_{ij}$ ($i = 1,2,\ldots,m$) to generate $s$ outputs $y_{rj}$ ($r = 1,2,\ldots,s$). A cost $R$ is to be distributed among the $n$ DMUs, and the allocated cost to $DMU_j$ is denoted as $R_j$, with $\sum_{j=1}^{n} R_j = R$. Considering the allocated cost, we use the following model to evaluate the relative efficiency of $DMU_0$:

$$\max\ E_0 = \frac{\sum_{r=1}^{s} u_r y_{r0}}{\sum_{i=1}^{m} v_i x_{i0} + R_0}$$

This research was supported by the National Natural Science Foundation for Distinguished Young Scholars of China under Grant 70525001.


$$\text{s.t.}\quad \frac{\sum_{r=1}^{s} u_r y_{rj}}{\sum_{i=1}^{m} v_i x_{ij} + R_j} \le 1\ (j = 1,2,\ldots,n), \qquad \sum_{j=1}^{n} R_j = R, \qquad u_r, v_i, R_j \ge 0,\ \forall r, i, j \quad (1)$$

where $E_0$ denotes the relative efficiency of $DMU_0$. In this definition of relative efficiency, following Beasley (2003), the weight factor of the allocated cost is explicitly set to one for simplicity.

Theorem 1: Every DMU in model (1) can be DEA efficient.

Proof: Consider the following equations:

$$\frac{\sum_{r=1}^{s} u_r y_{rj}}{\sum_{i=1}^{m} v_i x_{ij} + R_j} = 1\ (j = 1,2,\ldots,n), \qquad \sum_{j=1}^{n} R_j = R, \qquad u_r, v_i, R_j \ge 0,\ \forall r, i, j \quad (2)$$
Apparently, feasible solutions to equations (2) are not only feasible but also optimal for model (1). Therefore, if equations (2) have feasible solutions, then all DMUs can be DEA efficient. Let

$$v_i = 0\ (\forall i); \qquad u_l = R \Big/ \sum_{j=1}^{n} y_{lj}\ \text{for some}\ l \in \{1,2,\ldots,s\},\ u_r = 0\ (r \ne l); \qquad R_j = R\,y_{lj} \Big/ \sum_{j=1}^{n} y_{lj}\ (\forall j)$$

Then

$$E_j = \frac{\sum_{r=1}^{s} u_r y_{rj}}{\sum_{i=1}^{m} v_i x_{ij} + R_j} = \frac{u_l y_{lj}}{R_j} = \frac{y_{lj}\,R \big/ \sum_{j=1}^{n} y_{lj}}{R\,y_{lj} \big/ \sum_{j=1}^{n} y_{lj}} = 1 \quad (j = 1,2,\ldots,n)$$

and

$$\sum_{j=1}^{n} R_j = \sum_{j=1}^{n} R\,y_{lj} \Big/ \sum_{j=1}^{n} y_{lj} = R, \qquad u_r, v_i, R_j \ge 0,\ \forall r, i, j$$

Thus, $v_i = 0$ ($\forall i$), $u_l = R / \sum_{j=1}^{n} y_{lj}$ ($l \in \{1,2,\ldots,s\}$), $u_r = 0$ ($r \ne l$), $R_j = R\,y_{lj} / \sum_{j=1}^{n} y_{lj}$ ($\forall j$) is a feasible solution to equations (2), which means all DMUs in model (1) can be DEA efficient.

solution to equations (2), which means all DMUs from model (1) can be DEA efficient.

3. Maxmin-allocation model

Consider the following model:

$$\max_{u,v}\ \min_{1 \le j \le n} R_j \quad \text{s.t.}\quad R_j = \sum_{r=1}^{s} u_r y_{rj} - \sum_{i=1}^{m} v_i x_{ij}\ (j = 1,2,\ldots,n), \quad \sum_{j=1}^{n} R_j = R, \quad u_r, v_i, R_j \ge 0,\ \forall r, i, j \quad (3)$$

From equations (2), the allocated cost of $DMU_j$ can also be expressed as $R_j = \sum_{r=1}^{s} u_r y_{rj} - \sum_{i=1}^{m} v_i x_{ij}$ ($j = 1,2,\ldots,n$). Model (3) is a max-min problem with multiple objective functions. However, letting $\min_{1 \le j \le n} R_j = \delta$, model (3) can be translated into the following model:

$$\max_{u,v}\ \delta \quad \text{s.t.}\quad R_j = \sum_{r=1}^{s} u_r y_{rj} - \sum_{i=1}^{m} v_i x_{ij}\ (j = 1,2,\ldots,n), \quad \sum_{j=1}^{n} R_j = R, \quad R_j \ge \delta\ (j = 1,2,\ldots,n), \quad u_r, v_i, R_j \ge 0,\ \forall r, i, j \quad (4)$$
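Model (4) is an ordinary linear program in $(u, v, \delta)$ once each $R_j$ is substituted by its expression in $u$ and $v$. The following sketch formulates it with scipy.optimize.linprog; the variable ordering and helper name are our own illustrative choices, not the authors' implementation.

import numpy as np
from scipy.optimize import linprog

def solve_model_4(X, Y, R):
    # X: m x n inputs, Y: s x n outputs; variables ordered u_1..u_s, v_1..v_m, delta
    s, n = Y.shape
    m = X.shape[0]
    c = np.zeros(s + m + 1)
    c[-1] = -1.0                                      # linprog minimizes, so maximize delta
    # R_j >= delta  <=>  -u.y_j + v.x_j + delta <= 0
    A_ub = np.hstack([-Y.T, X.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    # sum_j R_j = R  <=>  u.(sum_j y_j) - v.(sum_j x_j) = R
    A_eq = np.hstack([Y.sum(axis=1), -X.sum(axis=1), [0.0]]).reshape(1, -1)
    b_eq = [R]
    bounds = [(0, None)] * (s + m) + [(0, None)]      # delta >= 0 also keeps R_j >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    u, v, delta = res.x[:s], res.x[s:s + m], res.x[-1]
    R_j = u @ Y - v @ X                               # allocated costs
    return delta, u, v, R_j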
Algorithm for obtaining the maxmin allocation:

Step 1: Let $l = 1$ and solve model (4) to get the optimal solutions $\delta_1^*, u_{1r}^*, v_{1i}^*, R_{1j}^*$ ($\forall r, i, j$). All DMUs can be divided into two groups: $J_1 = \{j \mid R_{1j}^* = \delta_1^*,\ j \in J\}$ and $J_2 = \{j \mid R_{1j}^* > \delta_1^*,\ j \in J\}$. Go to Step 2.

Step 2: Set $l = l + 1$ and solve the following general model:

$$\max_{u,v}\ \delta \quad \text{s.t.}\quad R_j = \sum_{r=1}^{s} u_r y_{rj} - \sum_{i=1}^{m} v_i x_{ij}\ (j \in J_{2l-2}), \quad R_j = \sum_{r=1}^{s} u_r y_{rj} - \sum_{i=1}^{m} v_i x_{ij} = \delta_t^*\ (j \in J_{2t-1},\ t = 1,\ldots,l-1), \quad R_j \ge \delta\ (j \in J_{2l-2}), \quad \sum_{j=1}^{n} R_j = R, \quad u_r, v_i, R_j \ge 0,\ \forall r, i, j \quad (5)$$

Denote the optimal solutions to model (5) as $\delta_l^*, u_{lr}^*, v_{li}^*, R_{lj}^*$ ($\forall r, i, j$). Then $J_{2l-2}$ can be divided into two subsets:

$$J_{2l-1} = \{j \mid R_{lj}^* = \delta_l^*,\ j \in J_{2l-2}\}, \qquad J_{2l} = \{j \mid R_{lj}^* > \delta_l^*,\ j \in J_{2l-2}\} = J_{2l-2} \setminus J_{2l-1} \quad (6)$$

If $J_{2l} \ne \emptyset$, go to Step 2 again; else go to Step 3.

Step 3: Denote the total number of model evaluations as $k$ ($k \le n$) and the corresponding optimal solutions as $\delta_k^*, u_{kr}^*, v_{ki}^*$ ($\forall r, i$). The procedure stops.

Therefore, the whole DMU set $J = \{1,2,\ldots,n\}$ can be divided into $k$ groups as $J = \bigcup_{l=1}^{k} J_{2l-1}$, and the unique optimal allocation $(R_1^*, R_2^*, \ldots, R_n^*)$ is

$$R_j^* = \begin{cases} \delta_1^* & j \in J_1 \\ \delta_2^* & j \in J_3 \\ \ \vdots & \ \vdots \\ \delta_k^* & j \in J_{2k-1} \end{cases} \qquad \text{where}\ \delta_k^* > \delta_{k-1}^* > \cdots > \delta_1^*.$$

Deduction 1: $\delta_k^* > \delta_{k-1}^* > \cdots > \delta_1^*$, and $u_{kr}^*, v_{ki}^*, R_{kj}^*$ ($\forall r, i, j$) are also optimal solutions to model (4).

Deduction 2: The optimal allocation is unique (from the computing procedure, this is easy to see).

The common weights can be denoted as $\mu_r^* = u_{kr}^*$, $\nu_i^* = v_{ki}^*$ ($\forall r, i$); then the optimal allocated cost can be expressed as $R_j^* = \sum_{r=1}^{s} \mu_r^* y_{rj} - \sum_{i=1}^{m} \nu_i^* x_{ij}$ ($j \in J$). The proposed approach has the following properties:
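The whole iterative procedure can be sketched as a loop that repeatedly solves the restricted LP and freezes the DMUs attaining the current minimum. Here solve_restricted is a hypothetical helper, assumed to augment the model-(4) formulation above with the equality constraints $R_j = \delta_t^*$ for the groups already fixed.

def maxmin_allocation(X, Y, R, tol=1e-6):
    n = Y.shape[1]
    free = set(range(n))
    fixed = {}                              # j -> finalized allocated cost
    while free:
        # hypothetical helper: model (4) plus R_j = delta_t for fixed groups
        delta, R_j = solve_restricted(X, Y, R, fixed, free)
        hit = {j for j in free if R_j[j] <= delta + tol}
        for j in hit:
            fixed[j] = delta                # J_{2l-1}: allocation finalized
        free -= hit                         # J_{2l}: still flexible
    return fixed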

Property 1. For any two DMUs, $DMU_l$ and $DMU_k$ ($l, k \in \{1,2,\ldots,n\}$), if $x_{il} = x_{ik}$ ($i = 1,2,\ldots,m$) and $y_{rl} \ge y_{rk}$ ($r = 1,2,\ldots,s$), then $R_l^* \ge R_k^*$.

Proof: From the condition $x_{il} = x_{ik}$ ($i = 1,2,\ldots,m$) and $y_{rl} \ge y_{rk}$ ($r = 1,2,\ldots,s$), we can conclude $\sum_{i=1}^{m} \nu_i^* x_{il} = \sum_{i=1}^{m} \nu_i^* x_{ik}$ and $\sum_{r=1}^{s} \mu_r^* y_{rl} \ge \sum_{r=1}^{s} \mu_r^* y_{rk}$; therefore,

$$R_l^* = \sum_{r=1}^{s} \mu_r^* y_{rl} - \sum_{i=1}^{m} \nu_i^* x_{il} \ge \sum_{r=1}^{s} \mu_r^* y_{rk} - \sum_{i=1}^{m} \nu_i^* x_{ik} = R_k^*$$

This property shows that, with the same input profile, the DMU with the higher output profile will afford more.

Property 2. For any two DMUs, $DMU_l$ and $DMU_k$ ($l, k \in \{1,2,\ldots,n\}$), if $x_{il} \le x_{ik}$ ($i = 1,2,\ldots,m$) and $y_{rl} = y_{rk}$ ($r = 1,2,\ldots,s$), then $R_l^* \ge R_k^*$.

The proof is similar to that of Property 1. It shows that, with the same output profile, the DMU with the lower input profile will afford more.

4. Allocation Comparison
In this section, we illustrate the proposed method for the cost allocation problem using the same data set in Table 1 as was given in Cook and Kress (1999). It involves 12 DMUs, 2 outputs and 3 inputs with R=100.
Table 1 Data sample
DMU    Input 1   Input 2   Input 3   Output 1   Output 2
1      9         39        350       67         751
2      8         26        298       73         611
3      7         31        422       75         584
4      9         16        281       70         665
5      6         16        301       75         445
6      17        29        360       83         1070
7      10        18        540       72         457
8      5         33        276       78         590
9      5         25        323       75         1047
10     6         64        444       74         1072
11     5         25        323       25         350
12     6         64        444       104        1199
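For convenience, the Table 1 data can be loaded as arrays and fed to a solver for model (4), for example the solve_model_4 sketch given in Section 3 (an illustrative helper, not the authors' code).

import numpy as np

# Rows of X are the three inputs, rows of Y the two outputs; columns are DMUs 1-12.
X = np.array([
    [9, 8, 7, 9, 6, 17, 10, 5, 5, 6, 5, 6],
    [39, 26, 31, 16, 16, 29, 18, 33, 25, 64, 25, 64],
    [350, 298, 422, 281, 301, 360, 540, 276, 323, 444, 323, 444],
], dtype=float)
Y = np.array([
    [67, 73, 75, 70, 75, 83, 72, 78, 75, 74, 25, 104],
    [751, 611, 584, 665, 445, 1070, 457, 590, 1047, 1072, 350, 1199],
], dtype=float)
R = 100.0
# delta, u, v, R_j = solve_model_4(X, Y, R)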

Following the computational procedure given above, we have:

Step 1: Solve model (4) and get the optimal solution $\delta_1^* = 3.9588$, with $J_1 = \{11\}$ and $J_2 = J \setminus \{11\} = \{1,2,\ldots,10,12\}$.

Step 2: Run model (5) and get the optimal solution $\delta_2^* = 5.0334$, with $J_3 = \{5\}$ and $J_4 = J_2 \setminus J_3 = \{1,2,3,4,6,7,8,9,10,12\}$.

Step 3: Let $l = 3$, run model (5), and so on, until Step 12: $J_{2 \cdot 12 - 1} = \{12\}$ and $J_{2 \cdot 12} = \emptyset$, so the computing procedure stops. The final optimal solution is $\delta_{12}^* = 13.562$.

The allocated cost to each $DMU_j$ is denoted as $R_j^*$, as given in the second column of Table 2. In order to compare with traditional approaches, for simplicity we denote the allocation results based upon Cook and Kress (1999) and Beasley (2003) as $R_j^{CK}$ and $R_j^B$, respectively.

Table 2 Allocations comparison
DMU    $R_j^*$    $R_j^{CK}$   $R_j^B$
1      8.4945     14.52        6.78
2      6.911      6.74         7.21
3      6.6056     9.32         6.83
4      7.5218     5.6          8.47
5      5.0334     5.79         7.08
6      12.103     8.15         10.06
7      5.1691     8.86         5.09
8      6.6735     6.26         7.74
9      11.843     7.31         15.11
10     12.125     10.08        10.08
11     3.9588     7.31         1.58
12     13.562     10.08        13.97

From the Cook and Kress (1999) model, $R_9^{CK} = R_{11}^{CK} = 7.31$. This arises since, from Table 1, $DMU_9$ and $DMU_{11}$ have identical input profiles ($X_9 = X_{11}$). However, this DMU pair has different output profiles ($Y_9 > Y_{11}$); the reason is that $DMU_9$ has utilized the common platform more than $DMU_{11}$, either in times of use or in frequency. It is therefore reasonable to claim that $DMU_9$ should afford more of the fixed cost than $DMU_{11}$. In fact, the results from our approach are $R_9^* = 11.843 > R_{11}^* = 3.9588$. Furthermore, another DMU pair, $DMU_{10}$ and $DMU_{12}$, has a similar relationship to that of $DMU_9$ and $DMU_{11}$, for $X_{10} = X_{12}$ and $Y_{10} < Y_{12}$. The results of Cook and Kress (1999) give $R_{10}^{CK} = R_{12}^{CK} = 10.08$, while from our approach $R_{10}^* = 12.125 < R_{12}^* = 13.562$. Therefore, our approach has this property (Property 1), while that of Cook and Kress (1999) does not.

Since Beasley's results show $R_9^B = 15.11 > R_{11}^B = 1.58$ and $R_{10}^B = 10.08 < R_{12}^B = 13.97$, his approach is similar to ours with respect to that property. However, the ratios between the allocated costs of DMU pairs are different. For example, from Beasley's results, $R_9^B / R_{11}^B = 15.11/1.58 = 9.5633$, while from our model, $R_9^* / R_{11}^* = 11.843/3.9588 = 2.9916$. Apparently, the ratio based upon Beasley's approach is much bigger than ours. Looking at the ratios of the output profiles of this DMU pair, $Y_9 / Y_{11} = 75/25 \approx 1047/350 \approx 3$. Therefore, our approach has some advantage in fixed cost allocation operation compared to Beasley's.

5. Conclusion
In this paper, we put forward a DEA-based approach for the fixed cost allocation problem. The comparison on a numerical example shows that the proposed method is reasonable and operable compared with the traditional approaches. Possible further research is to take into account the competition among the various DMUs; using an integration of DEA and game theory to solve such problems will be a worthwhile direction for further study.
382

References
[1] Charnes A., Cooper W.W., Rhodes E. Measuring the efficiency of decision making units. European Journal of Operational Research, 1978, 2: 429-444
[2] Cook W.D., Kress M. Characterizing an equitable allocation of shared costs: A DEA approach. European Journal of Operational Research, 1999, 119: 652-661
[3] Charnes A., Cooper W.W., Sun D.B., Huang Z.M. Polyhedral cone-ratio DEA models with an illustrative application to large commercial banks. Journal of Econometrics, 1990, 46: 73-91
[4] Cook W.D., Zhu J. Allocation of shared costs among decision making units: a DEA approach. Computers & Operations Research, 2005, 32: 2171-2178
[5] Beasley J.E. Allocating fixed costs and resources via data envelopment analysis. European Journal of Operational Research, 2003, 147: 198-216


Empirical Analysis on the International Competitiveness of China Telecommunication Operators


Li Yuanhui, Ding Huiping
School of Economics and Management, Beijing JiaoTong University, P.R. China, 100044

Abstract This paper aims at setting up an evaluation indicator system which is objective and quantifiable from the viewpoint of financial report information disclosure, for further analysis and utilization. Furthermore, the international competitiveness of selected typical operators is analyzed in order to evaluate the competitiveness of China's major operators and to find the gaps and disadvantages. The bottlenecks of competitiveness are also described so as to dig out the root causes and the way to improvement. Key Words Telecommunication Operators, International Competitiveness, Competitiveness Analysis

1. Introduction
In 2002, the split of the original China Telecom into northern and southern parts was a milestone indicating that the China telecommunication market had stepped into a new scenario of competition among six major operators: China Telecom (CT), China Mobile (CMCC), China Netcom (CNC), China Unicom (CU), China SATCOM (CSC) and China Railcom (CRC). According to the WTO agreements, this market will be opened to the outside world step by step. Foreign capital will enter the value-added telecommunication market and even the telecom infrastructure. The huge potential of the Chinese market means China will definitely be a destination for the major telecommunication operators all over the world. Foreign operators with abundant capital will first enter the most profitable cities, and most likely they will build up partnerships with local operators, which will create a heavier competitive environment for the other local operators. So the competitiveness analysis of local operators is a high priority for management teams as well as for the telecom industry surveillance department.

2. Literature review
Among the research on competitiveness analysis, the most influential is the International Competitiveness Analysis (WEF and IMD, 1980). Based on competitiveness theory research, this analysis has built up a fully quantitative appraisal index system. Since 1990, some scholars have started to research the reform of the China telecommunication industry, focusing on how to break the exclusive monopoly situation, bring about a competitive environment, and split the original China Telecom in order to construct new operators. Nowadays, the development of the China telecommunication industry under the market competition environment has become the new focus of scholars, and the research follows two perspectives. First, from the industry point of view, the research focuses on introducing the reform experience of western developed countries and on solutions for China based on China's specific situation. Second, from the view of the enterprise, scholars focus on marketing, competitiveness and management style. Besides these, there is very little research on the competency indicator systems of telecommunication enterprises. In recent years, some scholars have turned their attention to competitiveness research. Summing up their views, the indicators for positioning an operator normally include the following items: market share and leadership of the market, influence on the market, development and transformation of new technology, effective distribution of telecom resources including external and internal resources, positioning of HR input and output, capital operation capability, integrated international
This research has been supported by National Natural Science Funds of China (Capability of Logistic Enterprises Facing Supply Chain Competition, No: 70472002).


competency represented by the company, service acceptance level and internal governance integrity. Based on the above summary, we can say that current studies have never set up an analysis system for the international competitiveness of telecommunication operators from the viewpoint of financial report utilization. This paper focuses on this and, based on abundant data sources, sets up a system which is objective and quantifiable from the viewpoint of financial report information disclosure.

3. Analysis indicator system construction


The analysis of enterprise competitiveness should be a comprehensive, systematic and scientific process, which involves a comprehensive and systematic indicator system, strict data acquisition, systematic weighting and statistical processing, etc. But in reality, most of the original data cannot be found in public databases, so we cannot follow the ideal procedure and have to simplify the indicator system. This paper considers the characteristics of telecommunication operators, which include end-to-end networks, proactive technical development, fast technical updates, invisible products and the natural monopoly of telecom itself. The simplified indicator system is shown in Tab.1.
Tab.1 The simplified indicator system on the international competitiveness analysis of China telecommunication operators
External Competitiveness
  Enterprise Scale: Operating Income / Total Asset / Total Fixed Asset
  Profitability: Net Profit / Return on Total Sales / Return on Total Asset / EBITDA Rate
  Operation Ability: ARPU / Total Subscribers
Potential Competitiveness
  Management Level: Operating Expenditure Percentage / Total Asset Turnover / Labor Productivity
  Technology Level: Technology-intensive

4. The international competitiveness calculation and analysis of China telecommunication operators


4.1 Original data sample of telecommunication operators
This paper takes CT, CMCC and CU as the targets of the international competitiveness analysis of China telecommunication operators, since they are the most typical operators in China. Among them, CT is the largest fixed-line operator while CMCC is the largest mobile operator of China. CU is the only integrated telecommunication operator in China holding all kinds of service licenses. Several international operators were taken into account for comparison, namely Verizon, Vodafone, BT and NTT. The following calculation is based on the companies' annual financial reports of 2004 and 2005.
4.2 The comparison of the international competitiveness of operators by each indicator
Since it is very difficult to collect the original data and the reporting currency of each company is not the same, this paper uses million US$ as the standard unit for calculation.
4.2.1 The comparison of operator scale is shown in Fig.1, Fig.2 and Fig.3.


Fig. 1 Operating Income Comparison (M US$)
Fig. 2 Total Asset Comparison (M US$)
Fig. 3 Total Fixed Asset Comparison (M US$)
Fig. 4 Net Profit Comparison (M US$)

For the comparison of company scale, Vodafone is several times the size of CMCC, and the scale of the other foreign operators is far beyond the China operators except BT. The operating income of the domestic enterprises is far behind the foreign operators. The fixed assets of BT are on the same level as the China operators, but those of Verizon and NTT are several times larger. In short, the scale of the domestic operators is still very small compared to the foreign operators; even the total scale of all three domestic operators is less than half of NTT. One of the reasons is that China's modern telecommunications has only a 50-year history while the US operators have more than 100 years of history. Another reason is that foreign operators expand fast through acquisitions and capital operations. Vodafone was not recognized by the world until the US$120B merger with Mannesmann. International operation is also one major reason for this kind of huge scale; for instance, Vodafone has subsidiaries in 29 countries.
4.2.2 The comparison of enterprise profitability is shown in Fig.4, Fig.5, Fig.6 and Fig.7.


Fig. 5 Return on Total Sales Comparison
Fig. 6 Return on Total Asset Comparison
Fig. 7 EBITDA Comparison
Fig. 8 Total Subscriber Comparison

From the data we can see that CT and CMCC have higher net profit, return on total assets and EBITDA rates than the four foreign operators. The return on total assets of China Mobile in 2005 was 10 times that of NTT. The return on total sales is also far beyond many of the foreign operators, except for CU, where the gap is not very wide. However, on careful study, the reasons for the high profit margins of the domestic enterprises are not so encouraging. One very important reason is that China does not yet have a fully open and competitive market: all the operators are still large state-owned enterprises under a uniform fee system. Profitability therefore cannot directly reflect overall competitiveness. In contrast, facing the increasing pressure from foreign operators, domestic enterprises should improve their internal management and competitive edge, in addition to cultivating advantages of their own other than cheap human capital. This can improve their overall competitiveness.
4.2.3 The comparison of operation ability is shown in Fig.8 and Fig.9.


TT

B od T af on V e er izo n N TT

CT

ARPU 2004($)
450.00 400.00 350.00 300.00 250.00 200.00 150.00 100.00 50.00 0.00

ARPU 2005($)

140.00% 120.00% 100.00% 80.00% 60.00% 40.00% 20.00% 0.00%

Operating Expenditure Percentage 2004 Operating Expenditure Percentage 2005

CU

CC

fo ne

izo n

BT

CT

CM

od a

er

TT

Fig. 9 ARPU Comparison

Fig.10 Operating Expenditure Percentage Comparison

The total number of subscribers of the three China operators is far beyond the foreign operators, with only CU slightly lower than Vodafone. The China operators have large-scale user bases, and the total number of subscribers of the three companies reached 100 million in 2005. CMCC has the most subscribers among the seven, mainly because China is a country with a large population and owns the world's largest telecommunications market. But the indicator reflecting ARPU is not encouraging by comparison. The ARPU of the China operators is at the lowest level compared with the foreign operators; China Telecom's ARPU was only US$4.8 in 2005, less than one-tenth of the foreign enterprises'. One of the reasons is that China is a low-income country with very low GDP per capita, but the other main reason is the lack of commercial users.
4.2.4 The comparison of management level is shown in Fig.10, Fig.11 and Fig.12.
Fig. 11 Total Asset Turnover Comparison
Fig. 12 Labor Productivity Comparison (M US$ per capita)

The data show that the first two indicators differ little from the foreign operators, but labor productivity is relatively far behind. In recent years, most of the large telecommunications companies started to reduce costs by reducing the number of employees. By 2004, British Telecom (BT) had reached a total of 13,000 layoffs, and nearly one-third of the company's call centers were closed. Before its merger, AT&T Wireless announced in 2003 a cost reduction program and planned to cut about 1,000 staff, equivalent to 3% of the total employees.


Tab.2 Labor cost comparison (M US$)
Year    NTT      CMCC    CU
2002    20585    5325    3005
2003    17964    7501    4029
2004    17828    7700    5527

As shown in Tab.2, from 2002 to 2004 the labor costs of the communications giant NTT show a visible downward trend, in contrast to CMCC and CU, whose labor expenditures are rising. However, their per capita labor costs are still low compared to the foreign operators. This is the major reason why the China operators can survive with such low productivity. With economic globalization, all countries will gradually converge to similar per capita labor costs; China will then no longer have this advantage, so increasing labor productivity is critical for the sustainable development of the China operators.
4.2.5 The comparison of technology level is shown in Fig.13. As shown in Fig.13, CMCC and CU have a slightly lower technology intensity than the foreign operators, but CT shows a larger gap. This is not because the China operators do not attach importance to the development of technology, but because of low input-output efficiency. The China operators attach great importance to the pursuit of new technologies and focus on the expansion of scale, while for the foreign operators the focus of technology is the stability and integration of the network. In this regard our operators should strengthen mutual cooperation, avoid wasteful duplication of resources, and increase input-output efficiency.

5. The comprehensive calculation of international competitiveness


5.1 Methodology
The entropy weighting method is an objective method which determines the index weights according to the relationships within the original data, avoiding the defects of subjective weighting methods. The entropy weighting method is transparent in principle and simple in calculation, so it is practical. This paper selects the entropy weighting method to calculate the Comprehensive Competency Indicator of the sample enterprises.
Fig. 13 Technology-intensive Comparison (M US$ per capita)

5.2 Calculation process
5.2.1 Data standardization. The entropy weighting method uses a logarithmic function during the calculation process and requires that all data be greater than zero. So we use a modified linear transformation method to standardize the original data, i.e., non-dimensional treatment.


We assume the original data matrix is

$$X = (X_{ij})_{m \times n} \quad (1)$$

During the analysis, $m = 7$, representing the number of sample enterprises, and $n = 13$, representing the number of indicators. After treatment, we assume the data matrix is still

$$X = (X_{ij})_{m \times n} \quad (2)$$

5.2.2 Calculate the proportion of enterprise $i$'s indicator value under indicator $j$:

$$P_{ij} = X_{ij} \Big/ \sum_{i=1}^{m} X_{ij} \quad (3)$$

5.2.3 Calculate the weighting of index $j$. The entropy value of index $j$ is

$$e_j = -\frac{1}{\ln m} \sum_{i=1}^{m} P_{ij} \ln P_{ij} \quad (4)$$

The difference coefficient of index $j$ is

$$g_j = 1 - e_j \quad (5)$$

The weighting of index $j$ is

$$w_j = g_j \Big/ \sum_{j=1}^{n} g_j \quad (6)$$

5.2.4 Calculate the enterprise Comprehensive International Competitiveness Indicator value under the entropy weighting method:

$$v_i = \sum_{j=1}^{n} w_j P_{ij} \quad (7)$$
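A compact sketch of Eqs. (3)-(7), assuming the m x n data matrix has already been standardized to strictly positive values; the function names are illustrative.

import numpy as np

def entropy_weights(X):
    P = X / X.sum(axis=0)                                   # Eq. (3): column proportions
    e = -(P * np.log(P)).sum(axis=0) / np.log(X.shape[0])   # Eq. (4): entropy of each index
    g = 1.0 - e                                             # Eq. (5): difference coefficients
    return g / g.sum()                                      # Eq. (6): normalized weights

def comprehensive_scores(X):
    P = X / X.sum(axis=0)
    return P @ entropy_weights(X)                           # Eq. (7): v_i = sum_j w_j P_ij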

5.3 Results
The calculation results are shown in Tab.3. It can be seen that there is no big change in the ranking of the overall capacity of the seven sample operators from 2004 to 2005. Vodafone was ranked first in both years, scoring well above the domestic operators. British Telecom declined from 3rd to 6th. In general, the domestic operators are ranked lower than the foreign operators, with China Unicom ranked last. Of the top four, except NTT which is a full-service operator, the other three are all mobile operators, indicating that mobile communication operators are more competitive than fixed-line operators.
Tab.3 Enterprise comprehensive international competitiveness indicator value and ranking
Enterprise   Indicator value 2004   Ranking 2004   Indicator value 2005   Ranking 2005
CT           0.115019               6              0.134294               5
CMCC         0.125699               5              0.143029               4
CU           0.111491               7              0.126420               7
BT           0.138356               3              0.123327               6
Vodafone     0.233974               1              0.169057               1
Verizon      0.133429               4              0.153576               2
NTT          0.142033               2              0.150297               3

6. Conclusion
In general, telecommunications is still a weak industry in China. The opening of the market brings us opportunities as well as threats. The limited opening is very beneficial to laying the foundation of effective competition, and this is the key to the deep reform of the China telecommunication industry through building up a modern enterprise system. At the same time, the introduction of modern management philosophy will accelerate the internationalization of the China operators. On the other hand, we must be ready for potential threats such as loss of market share, brain drain, etc. There are other issues to take into account: the protective policy from the government will decline and finally be cancelled, which means the domestic operators have to increase their own competitiveness for the upcoming threats and competition. Based on the above analysis, plus the development of the world telecommunication industry, the competitive environment for the China operators is very tough. With the gradual decrease or disappearance of state support policies, China telecommunication operators must take precautions: through studying foreign enterprises' experience in personnel, capital and operating management, they should take effective measures to enhance their competitiveness and ensure that they have sufficient strength to meet the challenge of the foreign operators.

References
[1] Cang Jigang. The research on China telecommunication operator competitiveness. Wuhan: Wuhan University Press, 2005
[2] Zhou Yonghe. The research on China mobile telecommunication operators competitiveness. Heilongjiang: Heilongjiang University Press, 2005
[3] Li Guang. The research on competitiveness evaluation indicator system of the 21st century enterprise. Productivity Research, 2000, 6: 137-138
[4] He Chao, Feng Zongxian. The analysis on international competitiveness of China telecom business. Journal of Shanxi Business School, 2002, 15(2): 24-27
[5] Liu Zhongmin. The research on evaluation and application of enterprise competitiveness. MS Thesis. Liaoning: Liaoning Engineering and Technology University, 2004
[6] Gao Yinzi. The construction of telecom enterprise competition model and quantitative indicator system. MS Thesis. Beijing: Beijing University of Posts and Telecommunications, 2003
[7] China telecom industry analysis report. http://www.cei.gov.cn/, 2005
[8] China telecom industry analysis report. http://www.cei.gov.cn/, 2006


An Improved Approach to Conjoint Analysis for the Complex Decision-Making


Liu Chengming, Li Chunhao
School of Management, Jilin University, P.R. China, 130025

Abstract The classical conjoint analysis (CA) has three drawbacks in computing the preference values of evaluated objects. Firstly, the three estimation methods of profile utilities may be invalid. Secondly, the three methods do not well reflect the inherent interactive relations of complex systems. Thirdly, CA does not consider the inaccurate characteristics of evaluators' judgments. Therefore, conclusions made by the classical CA are probably incorrect. To overcome these three drawbacks, according to the thought of meta-synthesis from qualitative analysis to quantitative analysis in complex system theory, and based on the technique of fuzzy neural networks, an improved approach to CA for complex decision-making is presented. The numerical demonstration verifies that the developed approach is able to obtain a ranking of evaluated objects much closer to the real one, and proves to be more reasonable than the classical CA. Key words Complex system, Conjoint analysis, Fuzzy neural network, Evaluated object

1. Introduction
Conjoint analysis (CA), originally called conjoint measurement, is an approach for evaluating system problems by using multivariate statistical analysis (Luce and Tukey, 1964)[1]. It is mainly used to infer valuators' profile utilities and attribute utilities of evaluated objects (EOs). CA has been widely used in various fields, such as marketing analysis, R&D of new products, and project selection (Lee, Cho and Lee, 2006; Rohae, 2003; Probert, Dawson and Cockrill, 2005; Pullman, Moore and Wardell, 2002; Sethuraman, Kerin and Cron, 2005)[2][3][4][5][6]. Least squares (Shen and Ke, 1998)[7], LINMAP (Misha and Umesh, 2005)[8] and MONANOVA (Noguchi and Ishii, 2000)[9] are currently the primary estimation methods of profile utilities (EMPUs). Viewed from the principles of estimating profile utilities, the theoretical foundations of least squares, LINMAP and MONANOVA are linear regression, linear programming and variance analysis, respectively. The three methods assume that profile utilities satisfy presumptive formulae; the preference scores given by evaluators are then used to fit the formulae (least squares, LINMAP), or the formula and scores are used directly to compute (MONANOVA), so that profile utilities of EOs are generated. It should be highlighted that the formulae are employed to approximately reflect the mechanisms of systems. However, although the approach of fitting formulae is based on statistical theory, it may happen that the presumptive formulae cannot pass statistical tests and the EMPUs are invalid. Directly computing profile utilities by the formula will result in unreliable conclusions because of unreasonable presumptive formulae. The reason for these issues is that the problems CA deals with are mostly decision-making problems of complex systems, which possess non-linearity, emergence and semi-structured characteristics (Gentil and Montain, 2004)[10]. Since the mechanisms of complex systems are difficult to identify, it is generally hard for valuators to presume formulae of a form similar to the systems' inherent rules. In addition, preference scores given on the basis of subjective judgments are inaccurate; yet in light of CA's principle, valuators must present determinate numerical judgments, which obviously contradicts their really subjective judgments. To overcome these issues and validly calculate the profile utilities of EOs, by drawing on complex system theory's thought of meta-synthesis from qualitative analysis to quantitative analysis (Gu and Tang, 2005)[11] and introducing the technique of fuzzy neural networks (FNN), an improved approach to CA for complex decision-making is developed in this paper. The subsequent structure of the paper is as follows. In Section 2, CA is briefly introduced and its drawbacks are analyzed. Section 3 develops the improved approach to CA for complex decision-making. Then, the

This research has been supported by National Natural Science Funds of China (Research on a Semi-Empirical and Semi-Theoretical Analytic Hierarchy Approach of Exceeding AHP/ANP, No: 70471015) and Innovation Foundation of Jilin University (No: 2004gl1).


numerical analysis is conducted in section 4 to demonstrate that the developed approach is effective and scientific. The paper concludes with final remarks in section 5.

2. The brief introduction and drawbacks of CA


2.1 The brief introduction of CA
According to the two data collection methods, CA can mainly be classified into two forms, namely trade-off and full-profile (Shen and Ke, 1998)[7]. Of the two, full-profile is used more widely (Yoo and Ohta, 1995)[12], because full-profile simultaneously takes the mutual functions among attributes into account and can reasonably evaluate EOs from a holistic view, which trade-off cannot do (Wittink and Cattin, 1989)[13]. CA of full-profile form primarily includes the following six steps (Dahan and Srinivasan)[14]: (1) Aiming at the existing EOs (EEOs), $P$ suitable attributes of the EOs are chosen and suitable levels are presented. Let $z_{pq_p}$ denote the $q_p$th level of the $p$th attribute, where $p = 1, 2, \ldots, P$. (2) Constructing combinations of different attribute-levels as virtual EOs (VEOs). (3) Inviting valuators to evaluate the VEOs and present their preference scores by a sequencing method or a scoring method; the scoring method is more popular, including the Likert scale and the 1-to-100 likelihood-of-acquisition scale. (4) Selecting one of the EMPUs. (5) Calculating profile utilities of the EOs. Least squares is the EMPU in most common use (Dahan and Srinivasan)[14]; its principle for calculating profile utilities is the additive model, namely
$$U = \sum_{p=1}^{P} v_{pq_p} x_{pq_p} \qquad (1)$$

The attribute utilities $v_{pq_p}$ can be obtained by fitting formula (2):

$$Y = a + \sum_{p=1}^{P} v_{pq_p} x_{pq_p} + \varepsilon \qquad (2)$$

In formulae (1) and (2), $Y$ denotes the preference score of an EO and $a$ denotes the intercept of the fitting function. $x_{pq_p}$ stands for the dummy variable of an attribute-level: $x_{pq_p} = 1$ when $z_{pq_p}$ appears, otherwise $x_{pq_p} = 0$. $v_{pq_p}$ is a fitting parameter, which means the attribute utility of the $p$th attribute at the $q_p$th level. Let $\varepsilon$ be the stochastic error of the fitting function.

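To make formula (2) concrete, here is a minimal sketch of estimating the attribute utilities by ordinary least squares on dummy-coded levels. The profiles, level counts and scores below are hypothetical illustrations, not data from the paper.

```python
# Least-squares fit of formula (2): dummy-coded attribute levels against
# preference scores Y. All data below are illustrative.
import numpy as np

# Hypothetical: 6 profiles over 2 attributes with 2 and 3 levels respectively;
# each tuple gives the chosen level index per attribute.
profiles = [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2)]
scores = np.array([3.0, 4.5, 6.0, 4.0, 5.5, 7.0])   # valuators' scores Y

levels = (2, 3)                       # number of levels per attribute
X = np.zeros((len(profiles), 1 + sum(levels)))
X[:, 0] = 1.0                         # intercept column a
for row, combo in enumerate(profiles):
    offset = 1
    for attr, lev in enumerate(combo):
        X[row, offset + lev] = 1.0    # x_pq = 1 when level z_pq appears
        offset += levels[attr]

# Minimum-norm least squares (the full dummy coding makes X rank-deficient).
coef, *_ = np.linalg.lstsq(X, scores, rcond=None)
print("intercept a:", coef[0])
print("attribute utilities v_pq:", coef[1:])
# A profile utility U (formula (1)) is then the sum of its levels' utilities.
```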
2.2 Drawbacks of CA
The drawbacks of CA are as follows. (1) EMPUs may be invalid under some conditions. Analysis of the EMPUs shows that the premise for drawing reasonable conclusions from them is that valuators can identify the inherent mechanisms of systems, or at least that the presumptive formulae can pass statistical tests. This can hardly be guaranteed when valuators confront complex system problems: the interrelated mechanisms of system elements are hard to recognize, and it is difficult to present reasonably approximate hypotheses about system mechanisms because of the non-linearity, emergence and semi-structured characteristics of complex systems. For this reason, the presumptive formulae may not pass statistical tests, and the EMPUs will be invalid. (2) EMPUs do not well reflect the inherent interactive relations of complex systems. Regarding an EO as a whole, complex relations such as substitution, supplement and match may exist among its attributes. These interactive relations influence the profile utility of an EO in two ways. One is that the substitution relation exists among some attributes while the supplement and match relations exist among others when attributes affect profile utilities. The other is that the complex relations among attributes show the effect of level dependence, namely one attribute has different relations with other attributes when they take values in different intervals. However, a profile utility is computed either as the sum of attribute utilities (least squares, MONANOVA) or as a weighted sum of attribute utilities (LINMAP). The two computational patterns only

reflect the substitution relation, but do not capture the supplement and match relations among attributes. (3) The traditional CA does not consider the inaccurate character of valuators' judgments. According to the principle of CA, the preference scores under study are given subjectively by valuators. Both social psychology and cognitive psychology hold that people's judgments are not accurate. Inaccurate judgments do not imply that objective mechanisms are distorted by people; on the contrary, they reflect people's cognitive limitations (Utkin and Augustin, 2007)[15]. It is therefore natural for the judgments and evaluations presented by valuators to be inexact, and adopting fuzzy numbers to describe valuators' analysis and judgments is reasonable. CA, however, requires valuators to present determinate numerical judgments, a demand that contradicts the subjective nature of their judgments.

3. The theoretical foundation and steps of the improved CA approach


3.1 The theoretical foundation of the approach
The EMPUs of the classical CA have very limited ability to identify system mechanisms because of the non-linearity, emergence and semi-structured characteristics of complex systems. Consequently, how to enhance the power of recognizing complex relations is the key to improving CA and reinforcing the reliability and reasonableness of tackling real problems. In view of this, the paper adopts the neural network (NN) technique, which has the capacity of fitting non-linearity, to identify system mechanisms. As one kind of NN, the BP (Back Propagation)-FNN can implement a non-linear mapping between input and output parameters depending only on training data. In fact, the process of implementing the mapping is the process by which the BP-FNN extracts features of the training data and recognizes system mechanisms. In addition, the BP-FNN has not only strong abilities of self-adaptive learning and non-linear approximation, but also the functions of expressing rule-based human knowledge, processing fuzzy information and performing fuzzy inference (Rong and Wang, 2003)[16]. Thus, the BP-FNN can be seen as an excellent technique for identifying the inherent mechanisms of systems, and this paper applies it to CA. The resulting BP-FNN based approach to CA for complex decision-making is called BP-CA for short.
3.2 Steps of BP-CA
Applications of CA can be classified into two kinds, namely evaluating EEOs and evaluating VEOs. The operation steps of the two kinds are similar, and the main steps are exactly the same. For convenience, the following part only introduces the primary steps of using CA to evaluate VEOs. VEOs can be constructed according to steps (1) and (2) of section 2.1. Suppose that the $i$th attribute of the VEOs is $F_i$ and its $j$th level is $L_{ij}$, where $1 \le i \le I$, $1 \le j \le J$; $I$ and $J$ denote the total number of attributes and of levels of the corresponding attribute, respectively. Without loss of generality, assume that $K$ VEOs are developed. Let $O_k = (L^{(k)}_{1j_1}, L^{(k)}_{2j_2}, \ldots, L^{(k)}_{Ij_I})$ be the $k$th VEO, where $L^{(k)}_{1j_1}, L^{(k)}_{2j_2}, \ldots, L^{(k)}_{Ij_I}$ respectively stand for the attribute-level values of the 1st, 2nd, ..., $I$th attribute, $1 \le k \le K$. The linguistic evaluation presented for the $k$th VEO is expressed as $T_k$, where $1 \le k \le K$. BP-CA includes the following five basic steps:
Step 1: Constructing the structure of the BP-FNN. Fig. 1 shows the structure of the BP-FNN. The fuzzy input layer and the fuzzy output layer are obtained by fuzzifying the inputs and the output according to some category of membership functions. The triangle membership function is simple and intelligible and can well express every fuzzy variable, so it is used to fuzzify the input and output variables. The triangle membership functions of the nine linguistic preference evaluations are shown in Fig. 2. In Fig. 1, since the fuzzy layers are obtained by the triangle membership function, three nodes are linked to each node of the input layer and of the output layer. The number of hidden layers, the number of hidden nodes in each hidden layer, the learning rate, the allowable error scope and the training step are determined in the actual training process.
Step 2: Defining learning samples. Let $L^{(k)}_{1j_1}, L^{(k)}_{2j_2}, \ldots, L^{(k)}_{Ij_I}$ be $O_k$'s attribute-levels of the 1st, 2nd, ..., $I$th attribute, where $1 \le k \le K$. Then there are $K$ groups of input-output learning

samples, and the $k$th sample can be expressed as $((L^{(k)}_{1j_1}, L^{(k)}_{2j_2}, \ldots, L^{(k)}_{Ij_I}), T_k)$.

[Figure omitted: the network maps the inputs $L^{(k)}_{1j_1}, \ldots, L^{(k)}_{Ij_I}$ through an input layer, a fuzzy input layer, hidden layers 1 to $n$, a fuzzy output layer and an output layer to the evaluation $T_k$.]
Fig. 1 The Structure of BP-FNN for BP-CA

Step 3: Training the BP-FNN. The BP-FNN is trained with the $K$ learning samples by the BP algorithm; by controlling the learning error and adjusting the number of hidden layers and the number of hidden nodes in each hidden layer, the optimal neural network can be obtained.
[Figure omitted: nine overlapping triangular membership functions $\mu(x)$ for the linguistic evaluations worst, worse, bad, less than bad, moderate, less than good, good, better and best.]
Fig. 2 The Triangle Membership Function of Linguistic Evaluations
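As an illustration of the fuzzification in Step 1, the following minimal sketch builds triangular membership functions for the nine linguistic labels. The centers and widths are assumed (evenly spaced on [0, 1]); the paper does not give its exact membership parameters.

```python
# Sketch of the triangular fuzzification used at the fuzzy input/output layers.
# The nine labels are assumed evenly spaced on [0, 1]; these parameters are
# illustrative, not the paper's.
import numpy as np

LABELS = ["worst", "worse", "bad", "less than bad", "moderate",
          "less than good", "good", "better", "best"]
CENTERS = np.linspace(0.0, 1.0, len(LABELS))   # peak of each triangle
HALF_WIDTH = CENTERS[1] - CENTERS[0]           # triangles overlap neighbours

def triangle(x, center, half_width):
    """Triangular membership degree of x for a label peaking at `center`."""
    return max(0.0, 1.0 - abs(x - center) / half_width)

def fuzzify(x):
    """Return the membership degree of x in every linguistic label."""
    return {lab: triangle(x, c, HALF_WIDTH) for lab, c in zip(LABELS, CENTERS)}

print(fuzzify(0.30))   # mostly "bad", partly "less than bad"
```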


Step 4: Computing the VEOs' fuzzy numbers of profile utilities (FNPUs). Input $L^{(k)}_{1j_1}, L^{(k)}_{2j_2}, \ldots, L^{(k)}_{Ij_I}$ of $O_k$ into the trained BP network; the FNPU $\tilde{U}_k = [a_k, b_k, c_k]$ is then derived at the fuzzy output layer.
Step 5: Ranking the VEOs. At present, ranking methods for fuzzy numbers mainly include three kinds, i.e., the intuition method, the $\alpha$-cut method and the calculation method, but all three kinds use only partial information of the fuzzy numbers. Hence the paper adopts the comprehensive ranking method of fuzzy numbers, which makes full use of the information of the fuzzy numbers, to sort $\tilde{U}_k$ ($1 \le k \le K$) (Hu and Jiang, 2001)[17]. Sorting $\tilde{U}_k$ involves three steps. Firstly, determine the number interval $[c, d]$ to which the fuzzy numbers belong. Secondly, compute the fuzzy standard deviations $\sigma_k = \sqrt{(a_k^2 + b_k^2 + c_k^2 - a_k b_k - a_k c_k - b_k c_k)/18}$, the fuzzy means $\bar{X}_k = (a_k + b_k + c_k)/3$, and the fuzzy information quantities $E_k = 1 - 2(c_k - a_k)/[3(d - c)]$ of $\tilde{U}_k$. Finally, compute the restraining indices $\theta_k = E_k \bar{X}_k + (E_k - 1)\sigma_k$ of the fuzzy numbers, and rank the VEOs according to $\theta_k$: the bigger $\theta_k$ is, the more excellent the $k$th VEO.
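The sketch below implements Step 5's comprehensive ranking following the formulas as reconstructed above; the FNPU values and the interval [c, d] are hypothetical.

```python
# Sketch of Step 5: comprehensive ranking of triangular FNPUs [a, b, c]
# via the restraining index theta = E*X + (E - 1)*sigma.
import math

def restraining_index(fnpu, c_low, d_high):
    a, b, c = fnpu
    sigma = math.sqrt((a*a + b*b + c*c - a*b - a*c - b*c) / 18.0)  # fuzzy std dev
    x_bar = (a + b + c) / 3.0                                      # fuzzy mean
    e_k = 1.0 - 2.0 * (c - a) / (3.0 * (d_high - c_low))           # information quantity
    return e_k * x_bar + (e_k - 1.0) * sigma                       # restraining index

# Hypothetical FNPUs on the interval [0, 9]; a larger index means a better VEO.
fnpus = {"O1": (1.0, 2.0, 3.5), "O2": (3.0, 5.0, 7.0), "O3": (4.0, 6.0, 7.9)}
ranked = sorted(fnpus, key=lambda k: restraining_index(fnpus[k], 0.0, 9.0),
                reverse=True)
print(ranked)   # ['O3', 'O2', 'O1']
```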

4. Numerical demonstration
This section conducts an experimental comparative analysis between BP-CA and the classical CA through a numerical example, with the aim of proving the validity of BP-CA.
4.1 The numerical example and assumptions
In the numerical example, ten valuators evaluate twelve EEOs A1–A12, which are composed of two attributes $r_1, r_2$, each having seven attribute-levels. Suppose that the nonlinear formula (3) relating $r_1$, $r_2$ and $y$ holds and represents a real nonlinear system, where $y$ denotes the real profile utility of an EO:

$$y = r_1^2 r_2 + r_1 r_2^2 + 10(r_1 + r_2) + 100 \qquad (3)$$

According to the attributes, the levels and the classical CA's requirement on the number of EOs (Shen and Ke, 1998)[7], 24 EOs, including the twelve EEOs A1–A12 and the twelve VEOs B1–B12, are generated by orthogonal arrays. They are listed in Tab. 1.


Tab. 1 Attributes and corresponding levels of EOs
EOs   r1   r2      EOs   r1   r2
A1    2    4       B1    1    4
A2    3    4       B2    6    4
A3    5    4       B3    1    10
A4    4    10      B4    3    13
A5    5    10      B5    2    22
A6    6    13      B6    4    22
A7    7    13      B7    3    25
A8    1    22      B8    7    25
A9    1    25      B9    2    32
A10   6    32      B10   5    32
A11   7    32      B11   4    37
A12   3    37      B12   6    37

Given the inaccuracy of subjective judgments, there must be some error between the profile utilities judged by the valuators and the real utilities of the EOs. To simulate the inaccurate judgments, suppose that the profile utilities $y'$ of each EO evaluated by the ten valuators satisfy a normal distribution. Following the method of generating normally distributed data in reference [8] (Mishra and Umesh, 2005), the mean of the normal distribution is the real profile utility of the EO and the standard deviation is 1/5 of the real profile utility. Linguistic variables are applied to represent the preference evaluations $y''$ given by the valuators, and the evaluations are assumed to concern the superiority and inferiority of the EOs. Let the minimum and the maximum $y'$ of the 24 EOs be the lowest bound and the highest bound respectively, namely 66.73 is the lowest bound and 4332.20 is the highest bound, and divide the range into nine intervals. The approach to simulating the valuators' preference evaluations is: if $y'$ of an EO falls into the interval [66.73, 565.67], [515.67, 1039.61], [989.61, 1513.55], [1463.55, 1987.49], [1937.49, 2461.44], [2411.44, 2935.38], [2885.38, 3409.32], [3359.32, 3883.26] or [3833.26, 4332.20], then the linguistic evaluation of the EO is worst, worse, bad, less than bad, moderate, less than good, good, better or best, respectively. If an EO's $y'$ falls into two intervals simultaneously, one interval is chosen at random, and the linguistic evaluation corresponding to the chosen interval is the preference evaluation of the EO.
4.2 Computing FNPUs of EEOs based on BP-CA
According to the steps of BP-CA, the structure of the BP-FNN is constructed by letting $r_1, r_2$ of the EOs be the inputs and $y''$ be the output of the network. With the learning rate of the BP algorithm set to 0.1 and the learning error to 0.0001, the FNPUs of A1–A12 listed in Tab. 2 are computed.
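A rough sketch of the simulation of 4.1 and the training of 4.2 follows. A plain scikit-learn MLPRegressor stands in for the paper's BP-FNN (the fuzzy layers are omitted), the overlapping intervals are simplified to equal-width bins, and the linguistic labels are encoded as 1–9; this is an illustration, not a reproduction of the paper's experiment.

```python
# Simulate noisy valuator judgments per 4.1 and fit a plain BP network
# as a stand-in for the BP-FNN of 4.2. All simplifications noted above.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
levels = [(2,4),(3,4),(5,4),(4,10),(5,10),(6,13),(7,13),(1,22),(1,25),
          (6,32),(7,32),(3,37),(1,4),(6,4),(1,10),(3,13),(2,22),(4,22),
          (3,25),(7,25),(2,32),(5,32),(4,37),(6,37)]          # Tab. 1 EOs

def real_utility(r1, r2):                 # formula (3)
    return r1**2 * r2 + r1 * r2**2 + 10 * (r1 + r2) + 100

# Ten valuators: y' ~ N(y, y/5), then binned into nine equal-width intervals.
y = np.array([real_utility(r1, r2) for r1, r2 in levels])
y_noisy = rng.normal(loc=np.repeat(y, 10), scale=np.repeat(y, 10) / 5.0)
edges = np.linspace(y_noisy.min(), y_noisy.max(), 10)
labels = np.clip(np.digitize(y_noisy, edges), 1, 9)           # 1=worst .. 9=best

X = np.repeat(np.array(levels, dtype=float), 10, axis=0)
net = MLPRegressor(hidden_layer_sizes=(10, 10), learning_rate_init=0.1,
                   tol=1e-4, max_iter=5000, random_state=0)
net.fit(X, labels.astype(float))
print(net.predict([[7.0, 32.0], [2.0, 4.0]]))  # high- vs. low-utility EOs
```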
Tab. 2 EEOs' FNPUs computed by BP-CA
EEOs  FNPUs                    EEOs  FNPUs                    EEOs  FNPUs
A1    [0.998, 1.000, 3.000]    A5    [0.999, 1.000, 3.000]    A9    [1.000, 1.899, 3.900]
A2    [1.000, 1.000, 3.000]    A6    [1.000, 1.001, 3.001]    A10   [3.501, 5.498, 7.500]
A3    [1.000, 1.000, 2.995]    A7    [0.999, 1.001, 2.999]    A11   [4.000, 6.001, 7.900]
A4    [1.000, 1.000, 3.001]    A8    [1.000, 1.601, 3.600]    A12   [3.098, 5.104, 7.100]

4.3 Computing profile utilities of EEOs based on the classical CA
Using the nine-point Likert scale, the linguistic evaluations worst, worse, bad, less than bad, moderate, less than good, good, better and best are turned into quantitative evaluations $y_T$ ranging from 1 to 9. On the basis of the 24 EOs' levels of attributes $r_1, r_2$ and the $y_T$ given by the ten valuators, SPSS 13.0 for Windows is employed to perform the regression analysis. The regression equation passes the R-test, F-test and t-test, so it is feasible to use least squares to calculate both the attribute utilities and the profile utilities of the EEOs. The computed attribute utilities are listed in Tab. 3.
Tab. 3 Attribute utilities of EEOs by the classical CA
Levels of attribute r1          1        2        3*       4       5       6       7
Attribute utilities of r1    -0.283   -0.526     0        0.321   0.161   0.655   0.874
Levels of attribute r2          4*       10       13       22      25      32      37
Attribute utilities of r2       0      -0.065   -0.508    1.198   1.504   3.685   5.010
*Note: The independent variables corresponding to these two levels are eliminated, so their attribute utilities are equal to 0.


4.4 Comparative analysis between the two computed orders and the real order of EEOs
Let $S_T$ and $S_F$ be the orders of A1–A12 computed by the classical CA and by BP-CA respectively, and let $S_R$ be the real order of A1–A12. According to the attribute utilities, the FNPUs and the real profile utilities of the EEOs, $S_T$, $S_F$ and $S_R$ are obtained and listed in Tab. 4.


Tab. 4 ST, SF and SR of the A1–A12 EEOs
EEOs  A1  A2  A3  A4  A5  A6  A7  A8  A9  A10  A11  A12
ST    12  11  8   7   10  9   6   5   4   3    2    1
SF    9   10  12  7   8   6   11  5   4   2    1    3
SR    10  11  12  7   9   6   8   5   4   2    1    3

To verify the validity of BP-CA, $S_T$ and $S_F$ are each compared to $S_R$, and the comparative results are shown in Tab. 5. Let $Prop(S_F, S_R, N)$ denote the proportion of EOs whose ranks in $S_F$ and $S_R$ differ by $N$ positions ($N = 0, 1, 2, 3, 4$) to the total number of EEOs; the meaning of $Prop(S_T, S_R, N)$ follows analogously. From Tab. 5 it can be found that $Prop(S_F, S_R, 0)$ is obviously larger than $Prop(S_T, S_R, 0)$, and that $Prop(S_F, S_R, 1)$ and $Prop(S_F, S_R, 2)$ are respectively smaller than $Prop(S_T, S_R, 1)$ and $Prop(S_T, S_R, 2)$. This suggests that the computational precision of BP-CA is evidently higher than that of the traditional CA. In addition, BP-CA correctly identifies the three EEOs A10, A11 and A12 that occupy the first three positions in $S_R$, whereas the traditional CA does not recognize the exact positions of A10, A11 and A12 in $S_R$ and deems the EEO A12, which actually ranks third, to be the best. Thus, in view of both the computational precision and the ability to identify the best EEO, BP-CA is superior to the classical CA. The results of the numerical analysis verify that the order obtained by BP-CA is closer to the real order of the EEOs and that its computational effects are superior to those of the classical CA. Therefore BP-CA is of practical value for solving real problems.
Tab. 5 ST and SF respectively compared with SR
Difference between two orders       0 seq.   1 seq.   2 seq.   3 seq.   4 seq.   Total
ST vs SR   Number                   4        4        2        1        1        12
           Proportion (%)           33.33    33.33    16.67    8.33     8.33     100.00*
           Cumulative prop. (%)     33.33    66.66    83.33    91.66    100.00*  -
SF vs SR   Number                   9        1        0        1        1        12
           Proportion (%)           75.00    8.33     0        8.33     8.33     100.00*
           Cumulative prop. (%)     75.00    83.33    83.33    91.66    100.00*  -
*Note: The true sum of the proportions is 100%, but the displayed value is 99.99% because of rounding errors.

5. Conclusions
This paper has pointed out that the three estimation methods of profile utilities may be invalid and do not well reflect the inherent interactive relations of complex systems when they are employed to compute profile utilities of evaluated objects, and that CA does not consider the inaccurate character of valuators' judgments. To overcome these shortcomings, based on the theory of CA, referring to the thought of meta-synthesis from qualitative analysis to quantitative analysis in complex system theory, and introducing the technique of fuzzy neural networks, an improved approach to CA for complex decision-making (i.e., the BP-FNN based approach, BP-CA) is presented in this paper. The distinguishing advantages of the presented approach are that it can effectively and approximately capture the inherent laws of complex systems even when the systems' non-linear mechanisms are unclear, and that it is a general approach applicable to a variety of linear and non-linear system problems. The results of the numerical demonstration show that the order

obtained by BP-CA is closer to the real order of the evaluated objects, and its computational effects are superior to those of the classical CA. Therefore the improved approach to CA for complex decision-making can scientifically solve real decision problems, and its wide application to more real-world problems can be expected.
References
[1] Luce R D, Tukey J W. Simultaneous conjoint measurement: A new type of fundamental measurement. Journal of Mathematical Psychology, 1964, 1(1): 1-27
[2] Lee J, Cho Y S, Lee J J, Lee C Y. Forecasting future demand for large-screen television sets using conjoint analysis with diffusion model. Technological Forecasting and Social Change, 2006, 73(4): 362-376
[3] Rohae M. Conjoint analysis as a new methodology for Korean typography guideline in Web environment. International Journal of Industrial Ergonomics, 2003, 32(5): 341-348
[4] Probert E J, Dawson G F, Cockrill A. Evaluating preference within the composting industry in Wales using a conjoint analysis approach. Resources Conservation & Recycling, 2005, 45(2): 128-141
[5] Pullman M E, Moore W L, Wardell D G. A comparison of quality function deployment and conjoint analysis in new product design. The Journal of Product Innovation Management, 2002, 19(5): 354-364
[6] Sethuraman R, Kerin R A, Cron W L. A field study comparing online and offline data collection methods for identifying product attribute preferences using conjoint analysis. Journal of Business Research, 2005, 58(5): 602-610
[7] Shen Hao, Ke Huixin. Principle and application of conjoint analysis. Application of Statistics and Management, 1998, 17(4): 39-45 (in Chinese)
[8] Mishra S, Umesh U N. Determining the quality of conjoint analysis results using violation of a priori signs. Journal of Business Research, 2005, 58(3): 301-311
[9] Noguchi H, Ishii H. Methods for determining the statistical part worth value of factors in conjoint analysis. Mathematical and Computer Modeling, 2000, 31(10-12): 261-271
[10] Gentil S, Montmain J. Hierarchical representation of complex systems for supporting human decision making. Advanced Engineering Informatics, 2004, 18: 143-159
[11] Gu J, Tang X. Meta-synthesis approach to complex system modeling. European Journal of Operational Research, 2005, 166(3): 597-614
[12] Yoo D, Ohta H. Optimal pricing and product-planning for new multiattribute products based on conjoint analysis. International Journal of Production Economics, 1995, 38(2-3): 245-253
[13] Wittink D R, Cattin P. Commercial use of conjoint analysis: an update. Journal of Marketing, 1989, 53(3): 91-96
[14] Dahan E, Srinivasan V. The predictive power of Internet-based product concept testing using visual depiction and animation. The Journal of Product Innovation Management, 2000, 17(2): 99-109
[15] Utkin L V, Augustin T. Decision making under incomplete data using the imprecise Dirichlet model. International Journal of Approximate Reasoning, 2007, 44(3): 322-338
[16] Rong Lili, Wang Zhongtuo. Learning evaluation system based on knowledge and fuzzy neural networks. Journal of Management Sciences in China, 2003, 6(3): 1-7 (in Chinese)
[17] Hu Weiwen, Jiang Liping. Comprehensive ranking of fuzzy numbers and its application in the decision making of naval vessel systems. Journal of Huazhong University of Science and Technology, 2001, 29(1): 76-78 (in Chinese)


Applying Fuzzy Set and Rough Set to Evaluate Risk Level in IT Project Management
Lu Xinyuan1 , Zhang Jinlong2
1 Department of Information Management, Huazhong Normal University, Wuhan, 430079, China. 2 School of Management, Huazhong University of Science & Technology, Wuhan, 430074, China.

Abstract Risk evaluation is an important process in IT project management. Aiming at the problems arising in IT project risk management, this paper uses the methods of fuzzy set and rough set, integrating the advantages of the two theories, constructs a risk level evaluation index system and an evaluation model for IT projects, and exemplifies the evaluation system and model. Key words Fuzzy, Rough set, Risk evaluation, Evaluation model, IT project

1. Introduction
The evaluation of risk level in IT project management is difficult because of a wide range of unique characteristics: IT projects often involve a great investment and rather complicated evaluation indexes, demanding management ability, a great degree of correlation and low transparency of information, and highly dynamic communication; moreover, many indexes cannot be quantitatively measured. The evaluation is therefore a non-structured problem, in which the experience, knowledge and preference of the decision-maker play an important role. Given the complexity of the decision process, it is believed that this problem can hardly be resolved perfectly by one theory alone. This paper proposes an evaluation model of risk level based on fuzzy set and rough set, which can handle risk levels with multilayer indexes. The model integrates the advantages of rough set and fuzzy set. First, it calculates the significance of each attribute through rough set (RS) theory to build the first-class index weight set. RS theory does not require any additional empirical information about the data, such as probabilities in statistics, and it can judge, using only the known information, whether all the indexes have the same significance under the given condition; thus the weight of each attribute can be evaluated according to its significance. The weights are gained by analyzing data of similar past projects and are therefore more objective than weights based on hypotheses or merely on experts' experience. Attribute reduction then helps to reduce the complexity of computation. Finally, a fuzzy comprehensive evaluation method integrates all the experts' opinions effectively and gives a reasonable evaluation result.

2. The risk evaluation model of IT project


2.1 The literature review of risk management in IT projects
Boehm suggested an IT project risk management process consisting of several main phases: risk identification, analysis, assessment, prioritization and risk control, with various methods or tools used in each phase. In recent years, great attention has been paid to risk identification and monitoring, especially methods of risk level assessment. The majority of the existing literature on the assessment and analysis of risks tends to focus on the assessment of decisions within the project, from which the level of risk is determined in terms of cost or time overrun. This tends to be achieved by utilizing project management techniques such as Critical Path Analysis (CPA). Within this assessment of the project, variations of time duration or cost are allocated to a particular event, and simulation techniques, such as Monte Carlo, can then be used to calculate the spread in the attributes required to complete the project. Moreover, other methods have been introduced, such as the Delphi method, the combination of subjective and objective methods, the grey relevancy method, the eigenvector method, the analytic
This research was supported by the National Natural Science Foundation of China under Grant 70571025, and also supported by the DanGui Project of Huazhong Normal University.


hierarchy process (AHP), the information entropy method, and the least squares method based on Frank-Wolfe. In particular, fuzzy set and rough set theory, introduced to forecast risks and determine the weights of risk factors, have become prevalent in recent years.
2.2 Knowledge denotation of IT projects
According to rough set theory, the data being processed can be expressed by a knowledge denotation system, which mainly describes the essential information of the objects. Rough set theory regards knowledge as the ability to classify objects, and it uses a decision table to describe a knowledge system.
Definition 1: A knowledge denotation system can be written as $S = \langle U, C \cup D, V, f \rangle$, where $U = \{x_1, x_2, \ldots, x_n\}$ is a nonempty finite set of objects; $A = C \cup D$ is the attribute set, divided into two subsets, the condition attributes $C$ and the decision attributes $D$; $V = \bigcup_{a \in A} V_a$ is the range of the attribute values, where $V_a$ expresses the value field of attribute $a$; and $f: U \times A \to V$ is an information function which assigns a value to every object under every attribute, i.e., for any $a \in A$ and $x \in U$, $f(x, a) \in V_a$.
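The paper weights indexes by attribute significance computed from rough set theory. As a minimal sketch under the standard rough set definitions (not spelled out in the paper), the dependency degree $\gamma_C(D) = |POS_C(D)|/|U|$ and the significance of attribute $a$ as $\gamma_C(D) - \gamma_{C \setminus \{a\}}(D)$ can be computed as follows; the decision table is hypothetical.

```python
# Minimal rough-set attribute significance: dependency degree
# gamma_C(D) = |POS_C(D)| / |U| and significance of attribute a as
# gamma_C(D) - gamma_{C-{a}}(D). The decision table is hypothetical.
from collections import defaultdict

# Each row: (condition attribute values C1..C3, decision attribute D).
table = [((1, 0, 1), 1), ((1, 1, 1), 1), ((0, 0, 1), 0),
         ((0, 1, 0), 0), ((1, 0, 0), 1), ((0, 0, 1), 1)]

def dependency(table, attrs):
    """|POS_C(D)| / |U| for the condition attribute subset `attrs`."""
    blocks = defaultdict(set)                    # indiscernibility classes
    for cond, dec in table:
        blocks[tuple(cond[a] for a in attrs)].add(dec)
    pos = sum(1 for cond, _ in table
              if len(blocks[tuple(cond[a] for a in attrs)]) == 1)
    return pos / len(table)

full = dependency(table, [0, 1, 2])
for a in range(3):
    rest = [i for i in range(3) if i != a]
    print(f"significance of C{a+1}:", full - dependency(table, rest))
```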

2.3 The risk factor analysis of IT projects
There are many kinds of IT projects, and each kind differs in its characteristics. Referring to article [8], this paper gives a general risk evaluation index system, which is shown in Tab. 1. The system considers both the interior and the exterior factors of an IT project and is divided into three classes. The first class describes the index kinds and contains 5 indexes; each index is divided into 2 or 3 sub-indexes, which form the second class of index factors; the third class represents the index items. For an actual project case, some indexes can be chosen from this general index system for the evaluation, according to the category of the project and the evaluation demands.
Tab. 1 The main risk factors of IT projects
Index                        Explanation                      Symbol
Project complexity (D1)      Technology difficulty            C1
                             Sections involved                C2
                             Influence on operation           C3
Design level (D2)            Innovation                       C4
                             Practicability                   C5
                             Security                         C6
Team risk (D3)               HR structure                     C7
                             Personal quality                 C8
Project experience (D4)      Number of similar projects       C9
                             Percent of successful projects   C10
                             Quality control                  C11
Maintenance capability (D5)  Risk management                  C12
                             Maintenance plan                 C13
                             Maintenance quality              C14

2.4 Construction of the risk evaluation model
This section adopts fuzzy theory to analyze the importance of the evaluation indexes, in other words, to investigate which factors are more important in determining the risk level of an IT project. It comprises data collection, statistical calculation, confirmation of the weight of each index, and computation of the comprehensive evaluation score. The procedure for evaluating the index importance is as follows.
Definition 2: Given a mapping $f: X \to F(Y)$, $x \mapsto f(x) = B \in F(Y)$, and the set transformation $T: F(X) \to F(Y)$, $A \mapsto T(A) = B$, extend the two transformations to fuzzy sets; then $\tilde{f}: X \to F(Y)$, $x \mapsto \tilde{f}(x) = \tilde{B}$ is called a fuzzy mapping from $X$ to $Y$.
There are four main steps in constructing the risk evaluation model, as shown below.
Step 1: Let $X = \{x_1, x_2, \ldots, x_n\}$ and $Y = \{y_1, y_2, \ldots, y_m\}$. For the fuzzy mapping

$$\tilde{f}: X \to F(Y), \quad x_i \mapsto \tilde{f}(x_i) = \tilde{B}_i = \frac{r_{i1}}{y_1} + \frac{r_{i2}}{y_2} + \cdots + \frac{r_{im}}{y_m} = (r_{i1}, r_{i2}, \ldots, r_{im}) \in F(Y), \quad i = 1, 2, \ldots, n,$$

the fuzzy matrix can be constructed with rows $(r_{i1}, r_{i2}, \ldots, r_{im})$, and the fuzzy relation is uniquely determined as

$$\tilde{R}_f = \begin{pmatrix} r_{11} & r_{12} & \cdots & r_{1m} \\ r_{21} & r_{22} & \cdots & r_{2m} \\ \vdots & \vdots & & \vdots \\ r_{n1} & r_{n2} & \cdots & r_{nm} \end{pmatrix} \qquad (1)$$

where $R_f(x_i, y_j) = r_{ij} = f(x_i)(y_j)$.
Step 2: Conversely, suppose a fuzzy matrix is given as

$$R = \begin{pmatrix} r_{11} & r_{12} & \cdots & r_{1m} \\ r_{21} & r_{22} & \cdots & r_{2m} \\ \vdots & \vdots & & \vdots \\ r_{n1} & r_{n2} & \cdots & r_{nm} \end{pmatrix} \qquad (2)$$

Let $\tilde{f}_R: X \to F(Y)$, $x_i \mapsto \tilde{f}_R(x_i) = (r_{i1}, r_{i2}, \ldots, r_{im}) \in F(Y)$, where $\tilde{f}_R(x_i)(y_j) = r_{ij} = R(x_i, y_j)$, $i = 1, 2, \ldots, n$, $j = 1, 2, \ldots, m$. Then $\tilde{f}_R$ is the fuzzy mapping from $X$ to $Y$, likewise uniquely determined.
Step 3: Weighting factors using AHP. In constructing the judgment matrix, inconsistency of judgment in the decision-maker's mind will cause inconsistency of the judgment matrix. Only when the judgment matrix is completely consistent does the largest eigenvalue of $A$ equal $n$, i.e., $\lambda_{max} = n$; if the judgment matrix is inconsistent, then $\lambda_{max} > n$, and in this situation $\lambda_{max} - n$ can be used to test the degree of consistency of the judgment matrix. The Consistency Index (CI) is calculated as follows:

$$CI = \frac{\lambda_{max} - n}{n - 1} \qquad (3)$$

The smaller the value of CI, the higher the consistency of the judgment matrix; when $CI = 0$, the judgment matrix is completely consistent. Since deviation from consistency may be caused by random events, when checking whether the judgment matrix is consistent it is necessary to compare CI with RI (Random Index) to obtain CR (Consistency Ratio).

$$CR = CI / RI \qquad (4)$$

RI is related to the order of the matrix: the larger the order, the larger the probability of random deviation. The relation between RI and the order of the judgment matrix is shown in Tab. 2:

Tab. 2 RI of the judgment matrix (from the first-order to the ninth-order matrix)
Order   1     2     3      4     5      6      7      8      9
RI      0     0     0.58   0.9   1.12   1.24   1.32   1.41   1.45

Generally, for a judgment matrix whose order is not smaller than 3 ($n \ge 3$), if $CR \le 0.1$, the estimates $a_{ij}$ in matrix $A$ are consistent on the whole; if $CR > 0.1$, the deviation from consistency is too large,

therefore some entries must be adjusted until a satisfactory degree of consistency is obtained. Additionally, based on $CR = 0.1$ and the values of RI in Tab. 2, it is easy to calculate the unique critical eigenvalue $\lambda^*_{max}$ corresponding to a specific order of matrix $n$. The formula is as follows:

$$\lambda^*_{max} = CR \cdot RI \cdot (n - 1) + n \qquad (5)$$

The resulting $\lambda^*_{max}$ are shown in Tab. 3.

Tab. 3 $\lambda^*_{max}$ of the judgment matrix (from the first-order to the ninth-order matrix)
Order              1     2     3       4      5      6      7      8      9
RI                 0     0     0.58    0.9    1.12   1.24   1.32   1.41   1.45
$\lambda^*_{max}$  0     0     3.116   4.07   5.45   6.62   7.79   8.99   11.34

If $\lambda_{max} > \lambda^*_{max}$, it indicates that the $a_{ij}$ in $A$ are inconsistent, so the $a_{ij}$ must be adjusted; after adjusting the $a_{ij}$, calculate again until $\lambda_{max} < \lambda^*_{max}$. When $\lambda_{max} < \lambda^*_{max}$, the ANC method (normalization of the columns) can be used to calculate the weights of the indexes: firstly, normalize each column of the judgment matrix; secondly, calculate the sum of each row to obtain a vector; finally, normalize that vector, and the result is the weight of the indexes.

$$\bar{a}_{ij} = a_{ij} \Big/ \sum_{i=1}^{n} a_{ij} \qquad (6)$$

The weight from the judgment matrix $B_i$-$C$ is

$$W_{B_i C} = \sum_{j=1}^{n} \bar{a}_{ij} \Big/ \sum_{i=1}^{n} \sum_{j=1}^{n} \bar{a}_{ij} \qquad (7)$$

Thereby the weight values of the indexes on each level are obtained. Assume that $W_{AB}$ is the weight vector of judgment matrix $A$-$B$; then the global weight of each index can be expressed as

$$W_{A C_i} = W_{A B_i} \cdot W_{B_i C_i} \qquad (8)$$

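A minimal sketch of Step 3's consistency test and ANC weighting follows; the judgment matrix is hypothetical.

```python
# AHP consistency test: lambda_max, CI = (lambda_max - n)/(n - 1),
# CR = CI / RI, accepted when CR <= 0.1; then ANC weights.
import numpy as np

RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}          # Tab. 2

A = np.array([[1.0,   3.0, 5.0],
              [1/3.0, 1.0, 2.0],
              [1/5.0, 1/2.0, 1.0]])                 # hypothetical judgments

n = A.shape[0]
lam_max = max(np.linalg.eigvals(A).real)            # largest eigenvalue
ci = (lam_max - n) / (n - 1)
cr = ci / RI[n]
print(f"lambda_max={lam_max:.4f}, CI={ci:.4f}, CR={cr:.4f}, ok={cr <= 0.1}")

# ANC: normalize each column, sum the rows, normalize the result.
col_norm = A / A.sum(axis=0)
weights = col_norm.sum(axis=1)
print("weights:", weights / weights.sum())
```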
Step 4: Statistical calculation. To resolve the difficulty of integrating many indexes with different dimensions, after data collection the original data need to be adapted so that all data share the same dimension; this is the dimensionless processing. After eliminating the effect of different dimensions, the weight values can be used to indicate the importance of the indexes. Assume that the evaluation index system $A$ has $m$ first-level indexes and $n$ second-level indexes, and that the $t$th first-level index contains $n_t$ second-level indexes. Equation (9) expresses this relation:

$$n = \sum_{t=1}^{m} n_t \qquad (9)$$

For the first-level index $B_i$ ($i = 1, \ldots, m$), the dimensionless processing of the indexes is

$$w_i = D_i \Big/ \sum_{i=1}^{m} D_i \qquad (10)$$

where $w_i$ is the value after dimensionless processing and $D_i$ is the value before it. Therefore, the risk factor weight set $W = (w_1, w_2, \ldots, w_n)$ can be constructed. With the weight set $W$, the integrated evaluation vector can then be calculated by the composition

$$B = W \circ R \qquad (11)$$

3. An example
Suppose there are some IT projects to be evaluated, and the risk factor set is $D = \{D_1, D_2, D_3, D_4, D_5\}$, where $D_1$ means project complexity, $D_2$ design level, $D_3$ team risk, $D_4$ project experience, and $D_5$ maintenance capability. The evaluation set is defined as $V = \{v_1, v_2, v_3, v_4, v_5\}$,

where $v_1$ means high, $v_2$ rather high, $v_3$ normal, $v_4$ rather weak, and $v_5$ weak. The single-factor evaluation matrix can be constructed by the experts in each evaluation team. Suppose there are 10 experts; the results are listed in Tab. 4.

Tab. 4 The weight of risk factors in an IT project
Factor set D                    v1       v2       v3       v4       v5
D1 Project complexity           5(c11)   3(c12)   2(c13)   0(c14)   0(c15)
D2 Design level                 3(c21)   4(c22)   2(c23)   1(c24)   0(c25)
D3 Team risk                    2(c31)   3(c32)   2(c33)   3(c34)   0(c35)
D4 Project experience           0(c41)   4(c42)   2(c43)   2(c44)   2(c45)
D5 Maintenance capability       3(c51)   2(c52)   1(c53)   3(c54)   1(c55)

Let $c_{ij}$ ($i = 1, 2, \ldots, 5$; $j = 1, 2, \ldots, 5$) be the number of experts who assign factor $D_i$ the evaluation $v_j$, and let

$$r_{ij} = c_{ij} \Big/ \sum_{j=1}^{5} c_{ij} \quad (i = 1, 2, \ldots, 5)$$

where $\sum_{j=1}^{5} c_{ij} = 10$ is the number of experts. So the single-factor evaluation matrix is as follows:


0.5 0.3 R = 0.2 0 0.3 0.3 0.4 0.3 0.4 0.2 0.2 0.2 0.2 0.2 0.1 0 0

0.1 0 0.3 0 0.2 0.2 0.3 0.1

The integrative evaluation of W1 = (0.2, 0.3, 0.2, 0.2, 0.1) then the matrix can be calculated according to

M (, ) :
B1 = W1 R = (0.26, 0.34, 0.19, 0.16, 0.05)

In order to give the accurate evaluation result, we should do the unitary operation , and all the risk be measured with 1. So the evaluation set can be express with V = (1, 0.8, 0.7, 0.6, 0.5) T , then :
1 0.8 P = B1 V = (0.26, 0.34, 0.19, 0.16, 0.05) 0.7 = 0.786 1 0.6 0.5

rank can

By the same waythe risk level of the other IT project can be calculated , so the risk level order are as follows: P2 > P > P4 > P3 . 1
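A short sketch reproducing this worked example follows; it computes $B_1$ with the weighted-average composition (which matches the figures above) and also shows a max-min variant for comparison.

```python
# Fuzzy comprehensive evaluation of the example: R from Tab. 4, weights W1,
# composition B1 = W1 . R, then the crisp score P1 = B1 . V.
import numpy as np

R = np.array([[0.5, 0.3, 0.2, 0.0, 0.0],
              [0.3, 0.4, 0.2, 0.1, 0.0],
              [0.2, 0.3, 0.2, 0.3, 0.0],
              [0.0, 0.4, 0.2, 0.2, 0.2],
              [0.3, 0.2, 0.1, 0.3, 0.1]])
W1 = np.array([0.2, 0.3, 0.2, 0.2, 0.1])
V = np.array([1.0, 0.8, 0.7, 0.6, 0.5])

B1 = W1 @ R                 # weighted-average composition M(., +)
P1 = B1 @ V                 # crisp risk score
print(B1)                   # [0.26 0.34 0.19 0.16 0.05]
print(round(P1, 3))         # 0.786

# A max-min composition variant, for comparison:
B_maxmin = np.array([np.max(np.minimum(W1, R[:, j])) for j in range(R.shape[1])])
print(B_maxmin)
```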

4. Conclusion
Risk evaluation has been a focus of attention for IT projects in recent years. This paper introduces an integrative evaluation model of IT projects based on fuzzy set and rough set, which gives a simple and objective way to solve risk decision problems with multilayer indexes. This method can help managers estimate the risk rank according to the analysis results, so that they can adjust the development strategy and ensure, to the fullest extent, that the project will be successful.


References
[1] Qing. Rough set and its illation [M]. Beijing: Science Press, 2001
[2] James Jiang, Gary Klein. Software development risks to project effectiveness [J]. The Journal of Systems and Software, 2000, 52: 3-10
[3] Boehm B W. Software risk management: principles and practices [J]. IEEE Software, 1991, 8(1): 32-41
[4] Lu Xinyuan, Zhang Jinlong, Huang Xinfeng. A bidding risk decision model of IT project based on rough set and fuzzy set [C]. International Conference on Service Systems and Service Management, 2006: 1044-1049
[5] Fan Zhiping, Zhang Quan. An integrated approach to determining weights in multiple [J]. Journal of Management Sciences in China, 1998, 9: 50-53
[6] Bogdan Rebiasz. Fuzziness and randomness in investment project risk appraisal [J]. Computers and Operations Research, 2007, 34: 199-210
[7] Chen Degang, Zhang Wenxiu, Daniel Yeung. Rough approximations on a complete completely distributive lattice with applications to generalized rough sets [J]. Information Sciences, 2006, 176: 1829-1848
[8] V. Carr, J. H. M. Tah. A fuzzy approach to construction project risk assessment and analysis: construction project risk management system [J]. Advances in Engineering Software, 2001, 32: 847-857
[9] Marcio Oliveird. Supporting risks in software project management [J]. The Journal of Systems and Software, 2004, 70: 21-35
[10] Linda Wallace, Mark Keil, Arun Rai. Understanding software project risk: a cluster analysis [J]. Information & Management, 2004, 42: 115-125
[11] Salwa Ammar, Ronald Wright. Applying fuzzy-set theory to performance evaluation [J]. Socio-Economic Planning Sciences, 2000, 34: 285-302
[12] Rajen B. Bhatt, M. Gopal. On fuzzy-rough sets approach to feature selection [J]. Pattern Recognition Letters, 2005, 26: 965-975


Research on the Dynamic Choosing Question about Network Product Compatibility and Price
Sheng Yongxiang
School of Economics and Management, Jiangsu University of Science and Technology, Mengxi Road 2, Zhenjiang, P.R.China, 212003

Abstract The compatibility and the price of network products are important aspects of the management decision-making of network product enterprises. Starting from demand, this paper derives the demand function, considering the network effect, the compatibility and the price of the product. According to the actual situation of network product markets, the paper establishes three forms of dynamic models for choosing product compatibility and price, carries out the computation by backward induction, and analyzes the third form with an example. Key Words Network products, Compatibility, Prices, Choice

1. Introduction
otherwise the machine is not compatible. Network users regard product compatibility as a very desirable property, and when deciding what type of network product to buy, they consider compatibility and price major factors. Domestic scholars have mainly studied product differentiation and firms' compatibility choices, network externality, and standards competition using game theory [1][2][3][4]. Foreign scholars have mainly studied compatibility in the computer industry and network externality problems [5][6][7][8][9][10]. However, the price and compatibility factors have not been considered jointly in this game-theoretic research. Starting from consumer demand, this paper derives demand functions that consider the network effect, compatibility and price. Considering the actual market situation of network products, three forms of dynamic models for the choice of product compatibility and price are established; backward induction is used for the calculation, and the third form is analyzed with an example.
2. The basic assumptions of the product compatibility model

Before establishing the product compatibility model in game theory, we first put forward the following assumptions on both the supply of and the demand for the products.
2.1 Supply market
(1) Two-fold product differentiation. Assume that there are only two similar products of the same quality in the market. The product of each enterprise is differentiated by two parameters: one is a parameter describing the location value of the product feature level, and the other is a compatibility parameter, the degree of product compatibility $s_{ij}$, where $i, j = 1, 2$.
(2) Maximal difference in the level of product characteristics. Assume that the feature levels of the enterprises' products have the biggest difference; the location values are situated at the two ends of a line, that is, $a_1 = 0$, $a_2 = 1$.
(3) Diversification of product compatibility. $s_{ij} \in [0, 1]$ represents the compatibility coefficient of product $j$ with product $i$: $s_{ij} = 1$ represents complete compatibility of $j$ with $i$, $s_{ij} = 0$ represents complete incompatibility, and $s_{ij} \in (0, 1)$ represents partial compatibility.

2.2 Demand market
(1) Each consumer demands one unit of the product. The consumer preference $h$ obeys the uniform distribution on [0, 1], and $t$ is the transfer cost rate. A consumer with preference $h$ bears the transfer cost $th^2$ in addition to the price $p_1$ when

This Project was supported by the National Natural Science Foundation of China under Grant 70472005.


buying product one, and bears the transfer cost $t(1-h)^2$ in addition to the price $p_2$ when buying product two.

(2) Network effect. When the value of a product to the consumer increases with the installed base of the product or of compatible products, there is a network externality. It is an integral part of consumer income, for which the consumer is willing to make the necessary expenditure. Let $e > 0$ represent the strength of the network effect in the market; in media markets the value of $e$ is large, whereas in the furniture market it is small. Let $n_i$ ($i = 1, 2$) express the network size of product $i$. It depends on the enterprise's and its opponent's past sales $x^b$ and future sales $x^p$:

$$n_i = x_i^b + x_i^p + s_{ij}(x_j^b + x_j^p) = (x_i^b + s_{ij} x_j^b) + (x_i^p + s_{ij} x_j^p)$$

and $e n_i$ is called the network value of enterprise $i$.
(3) Assume that consumers buying these products are fully aware of the prices, the compatibility and the feature-level difference of the two products.
(4) Consumers' purchasing decisions follow the utility maximization principle. The utility of buying product one is $u_1 = \nu - (p_1 + th^2 - e n_1)$, and the utility of buying product two is $u_2 = \nu - [p_2 + t(1-h)^2 - e n_2]$, where $\nu$ expresses the consumer's basic willingness to pay.

The condition for a consumer to buy product one is $u_1 \ge u_2$, that is,

$$(p_2 - p_1) + t[(1-h)^2 - h^2] + e\Delta n \ge 0,$$

where $\Delta n = n_1 - n_2$. Similarly, the condition for buying product two is $u_1 \le u_2$. The preference $h^*$ satisfying $u_1 = u_2$ is called the indifferent consumer preference. Thus, the demand function for product one is

$$x_1(s_{12}, s_{21}, p_1, p_2) = h^* = \frac{1}{2} + \frac{1}{2t}\big[(p_2 - p_1) + e\Delta n\big]$$

and the demand function for product two is

$$x_2(s_{12}, s_{21}, p_1, p_2) = 1 - h^* = \frac{1}{2} - \frac{1}{2t}\big[(p_2 - p_1) + e\Delta n\big].$$

To determine the demand decided by the market, we need to know not only the past sales of the enterprise and its opponent but also their future sales. A realistic assumption is $x_1^p = x_1$, $x_2^p = x_2$. After calculation, the demand functions of the two enterprises are:

$$x_1(s_{12}, s_{21}, p_1, p_2) = \frac{1}{2} + \lambda\Big[(p_2 - p_1) + \frac{1}{2}e\big(2(x_1^b + s_{12}x_2^b - x_2^b - s_{21}x_1^b) + \Delta s\big)\Big]$$

$$x_2(s_{12}, s_{21}, p_1, p_2) = \frac{1}{2} - \lambda\Big[(p_2 - p_1) + \frac{1}{2}e\big(2(x_1^b + s_{12}x_2^b - x_2^b - s_{21}x_1^b) + \Delta s\big)\Big]$$

where $\lambda = \dfrac{1}{2t - e(2 - s_{12} - s_{21})}$ and $\Delta s = s_{12} - s_{21}$.

3. The dynamic game models of product compatibility
According to the actual market situation, the dynamic game models of product compatibility can be divided into the following forms.
3.1 The two enterprises determine product compatibility sequentially in the first two stages, then determine their prices in the third stage
The two enterprises determine their product compatibilities $s_{12}$ and $s_{21}$ in order in the first two stages, then simultaneously determine their competitive prices in the third stage, as shown in Figure 1.

[Figure omitted: stage 1, enterprise 1 chooses $s_{12}$; stage 2, enterprise 2 chooses $s_{21}$; stage 3, both enterprises choose $p_1, p_2$.]
Figure 1 The sequential chart of the enterprises choosing compatibility in order and then determining prices

The profit functions of the two enterprises are:

$$\pi_1(s_{12}, s_{21}, p_1, p_2) = (p_1 - c)\Big\{\frac{1}{2} + \lambda\Big[(p_2 - p_1) + \frac{1}{2}e\big(2(x_1^b + s_{12}x_2^b - x_2^b - s_{21}x_1^b) + \Delta s\big)\Big]\Big\} \qquad (1)$$

$$\pi_2(s_{12}, s_{21}, p_1, p_2) = (p_2 - c)\Big\{\frac{1}{2} - \lambda\Big[(p_2 - p_1) + \frac{1}{2}e\big(2(x_1^b + s_{12}x_2^b - x_2^b - s_{21}x_1^b) + \Delta s\big)\Big]\Big\} \qquad (2)$$

Among them, $c$ is the marginal cost. To solve for the subgame perfect Nash equilibrium of the model, we compute by backward induction starting from the third stage. In the third stage of the game, for fixed degrees of compatibility $s_{12}$, $s_{21}$, enterprise $i$ regards the opponent's price as fixed and solves for the $p_i$ that maximizes its own profit function $\pi_i(s_{12}, s_{21}, p_1, p_2)$. Using the first-order conditions $\partial \pi_i / \partial p_i = 0$ ($i = 1, 2$), we obtain the two price response functions of the enterprises:

$$p_1^R(p_2) = \frac{1}{2}\Big[p_2 + c + \frac{1}{2\lambda} + \frac{1}{2}e\big(2(x_1^b + s_{12}x_2^b - x_2^b - s_{21}x_1^b) + \Delta s\big)\Big] \qquad (3)$$

$$p_2^R(p_1) = \frac{1}{2}\Big[p_1 + c + \frac{1}{2\lambda} - \frac{1}{2}e\big(2(x_1^b + s_{12}x_2^b - x_2^b - s_{21}x_1^b) + \Delta s\big)\Big] \qquad (4)$$
The price reaction functions show that the prices of the two enterprises are complements: the higher the price of the rival's product, the higher the enterprise's own price. The intersection of the two functions gives the equilibrium prices:

$$p_1^B(s_{12}, s_{21}) = c + \frac{1}{2\lambda} + \frac{1}{6}e\big(2(x_1^b + s_{12}x_2^b - x_2^b - s_{21}x_1^b) + \Delta s\big) \qquad (5)$$

$$p_2^B(s_{12}, s_{21}) = c + \frac{1}{2\lambda} - \frac{1}{6}e\big(2(x_1^b + s_{12}x_2^b - x_2^b - s_{21}x_1^b) + \Delta s\big) \qquad (6)$$

In equilibrium, the prices increase with the marginal cost $c$ and with the relative compatibility $\Delta s$. We can further deduce the equilibrium output functions and equilibrium profit functions of the two enterprises:

$$x_1^B(s_{12}, s_{21}) = \frac{1}{2} + \frac{\lambda e}{6}\big(2(x_1^b + s_{12}x_2^b - x_2^b - s_{21}x_1^b) + \Delta s\big) \qquad (7)$$

$$x_2^B(s_{12}, s_{21}) = \frac{1}{2} - \frac{\lambda e}{6}\big(2(x_1^b + s_{12}x_2^b - x_2^b - s_{21}x_1^b) + \Delta s\big) \qquad (8)$$

$$\pi_1^B(s_{12}, s_{21}) = \frac{\lambda}{36}\Big(\frac{3}{\lambda} + e\big[2(x_1^b + s_{12}x_2^b - x_2^b - s_{21}x_1^b) + \Delta s\big]\Big)^2 \qquad (9)$$

$$\pi_2^B(s_{12}, s_{21}) = \frac{\lambda}{36}\Big(\frac{3}{\lambda} - e\big[2(x_1^b + s_{12}x_2^b - x_2^b - s_{21}x_1^b) + \Delta s\big]\Big)^2 \qquad (10)$$

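As a numeric illustration of the reconstructed formulas (5)-(10), the following sketch evaluates the equilibrium prices, outputs and profits for hypothetical parameter values and checks that each profit equals (price − cost) × demand.

```python
# Numeric sketch of equilibrium formulas (5)-(10) under hypothetical values
# of t, e, c, past sales and compatibilities.
t, e, c = 1.0, 0.2, 1.0          # transfer cost rate, network strength, cost
x1b, x2b = 0.6, 0.4              # past sales of the two enterprises
s12, s21 = 0.8, 0.5              # compatibility coefficients

lam = 1.0 / (2*t - e*(2 - s12 - s21))
delta_s = s12 - s21
E = 2*(x1b + s12*x2b - x2b - s21*x1b) + delta_s   # recurring network term

p1 = c + 1/(2*lam) + e*E/6        # formula (5)
p2 = c + 1/(2*lam) - e*E/6        # formula (6)
x1 = 0.5 + lam*e*E/6              # formula (7)
x2 = 0.5 - lam*e*E/6              # formula (8)
pi1 = lam/36 * (3/lam + e*E)**2   # formula (9)
pi2 = lam/36 * (3/lam - e*E)**2   # formula (10)

print(f"p1={p1:.4f}, p2={p2:.4f}")
print(f"x1={x1:.4f}, x2={x2:.4f}")
print(f"pi1={pi1:.4f}, pi2={pi2:.4f}")
# Sanity check: each profit should equal (price - cost) * demand.
assert abs(pi1 - (p1 - c)*x1) < 1e-9 and abs(pi2 - (p2 - c)*x2) < 1e-9
```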
In the second stage of the game, enterprise 2 chooses the compatibility $s_{21}$ to maximize its expected equilibrium profit, using the first-order condition

$$\frac{\partial \pi_2^B(s_{12}, s_{21})}{\partial s_{21}} = 0 \qquad (11)$$

We get $s_{21} = s_{21}^R(s_{12})$, substitute it into $\pi_1^B(s_{12}, s_{21})$ and obtain $\pi_1^B(s_{12}, s_{21}^R(s_{12}))$. In the first stage of the game, enterprise 1 chooses the compatibility $s_{12}$ to maximize its expected profit, using the first-order condition

$$\frac{d \pi_1^B(s_{12}, s_{21}^R(s_{12}))}{d s_{12}} = 0 \qquad (12)$$

We get $s_{12} = s_{12}^*$, then substitute $s_{12}^*$ back into the above formulae and obtain $s_{21}^*$, $p_1^*$, $p_2^*$.
3.2 The two enterprises simultaneously determine product compatibility, then determine their prices in order
The two enterprises simultaneously determine their product compatibilities $s_{12}$, $s_{21}$ in the first stage, then determine their competitive prices sequentially in the second and third stages, as shown in Figure 2.
[Figure omitted: stage 1, both enterprises choose $s_{12}, s_{21}$; stage 2, enterprise 1 chooses $p_1$; stage 3, enterprise 2 chooses $p_2$; payoffs $\pi_1, \pi_2$.]
Figure 2 The sequential chart of the two enterprises simultaneously determining product compatibility and then determining prices successively


We proceed on the same principle. The profit functions of the two enterprises are:

$$\pi_1(s_{12}, s_{21}, p_1, p_2) = (p_1 - c)\Big\{\frac{1}{2} + \lambda\Big[(p_2 - p_1) + \frac{1}{2}e\big(2(x_1^b + s_{12}x_2^b - x_2^b - s_{21}x_1^b) + \Delta s\big)\Big]\Big\} \qquad (1)$$

$$\pi_2(s_{12}, s_{21}, p_1, p_2) = (p_2 - c)\Big\{\frac{1}{2} - \lambda\Big[(p_2 - p_1) + \frac{1}{2}e\big(2(x_1^b + s_{12}x_2^b - x_2^b - s_{21}x_1^b) + \Delta s\big)\Big]\Big\} \qquad (2)$$

To solve for the subgame perfect Nash equilibrium of the model, we compute by backward induction starting from the third stage. In the third stage of the game, for the fixed compatibilities $s_{12}$, $s_{21}$ and price $p_1$, enterprise 2 maximizes its own profit function $\pi_2(s_{12}, s_{21}, p_1, p_2)$; using the first-order condition $\partial \pi_2 / \partial p_2 = 0$, we get the price response function of enterprise 2, $p_2^R(s_{12}, s_{21}, p_1)$. In the second stage of the game, for the fixed compatibilities $s_{12}$ and $s_{21}$, we substitute $p_2^R(s_{12}, s_{21}, p_1)$ into the profit function of enterprise 1 and maximize $\pi_1(s_{12}, s_{21}, p_1, p_2^R)$; using the first-order condition $\partial \pi_1 / \partial p_1 = 0$, we get the price reaction function of enterprise 1, $p_1^R(s_{12}, s_{21})$. In the first stage of the game, we substitute the obtained $p_1^R(s_{12}, s_{21})$ and $p_2^R(s_{12}, s_{21}, p_1^R(s_{12}, s_{21}))$ into the profit functions of the two enterprises, and use the first-order conditions $\partial \pi_1 / \partial s_{12} = 0$ and $\partial \pi_2 / \partial s_{21} = 0$ simultaneously to get the two enterprises' compatibility values $s_{12} = s_{12}^*$, $s_{21} = s_{21}^*$. We then substitute backward to get $p_1^*$, $p_2^*$.

3.3 The two enterprises first simultaneously determine product compatibility, then simultaneously determine their prices
The two enterprises simultaneously determine their product compatibilities $s_{12}$, $s_{21}$ in the first stage, then simultaneously determine their competitive prices in the second stage, as shown in Figure 3.

[Figure omitted: stage 1, both enterprises choose $s_{12}, s_{21}$; stage 2, both enterprises choose $p_1, p_2$; payoffs $\pi_1, \pi_2$.]
Figure 3 The sequential chart of the two enterprises simultaneously determining product compatibility and simultaneously determining prices

The profit functions of the two enterprises are respectively:

$$\pi_1(s_{12}, s_{21}, p_1, p_2) = (p_1 - c)\Big\{\frac{1}{2} + \lambda\Big[(p_2 - p_1) + \frac{1}{2}e\big(2(x_1^b + s_{12}x_2^b - x_2^b - s_{21}x_1^b) + \Delta s\big)\Big]\Big\} \qquad (1)$$

$$\pi_2(s_{12}, s_{21}, p_1, p_2) = (p_2 - c)\Big\{\frac{1}{2} - \lambda\Big[(p_2 - p_1) + \frac{1}{2}e\big(2(x_1^b + s_{12}x_2^b - x_2^b - s_{21}x_1^b) + \Delta s\big)\Big]\Big\} \qquad (2)$$

Among them, $c$ is the marginal cost. To solve for the subgame perfect Nash equilibrium of the model, we compute by backward induction starting from the second stage. In the second stage of the game, for the fixed compatibilities $s_{12}$, $s_{21}$, enterprise $i$ regards the opponent's price as fixed and solves for the $p_i$ that maximizes its own profit function $\pi_i(s_{12}, s_{21}, p_1, p_2)$. Using the first-order conditions $\partial \pi_i / \partial p_i = 0$ ($i = 1, 2$), we get the two price response functions $p_1^R(s_{12}, s_{21})$, $p_2^R(s_{12}, s_{21})$, which are then substituted into the profit functions of the two enterprises.

In the first stage of the game, we simultaneously use the first-order conditions $\partial \pi_1 / \partial s_{12} = 0$ and $\partial \pi_2 / \partial s_{21} = 0$, solve for the compatibility values of the two enterprises, and finally substitute them backward to get $p_1^*$, $p_2^*$.

4. Conclusions
In this paper, we consider the network scale factor, the unit production cost of the enterprises, the compatibility factor, the transfer cost factor and the number of consumers, establish three models of price and compatibility choice, and compute how enterprise 1 and enterprise 2 choose compatibility and price. Through the calculation of the three models, we reach the following conclusions: (1) The compatibility directions of the two manufacturers can be asymmetric, but the asymmetry will affect profits. (2) For the enterprises, the order of choosing compatibility and price affects their profits. (3) For symmetric enterprises with standardized products, prices and profits are the same.
References
[1] Lu Wenlong, Chen Hongming. Product differentiation and compatibility options [J]. Journal of Huazhong University of Science and Technology, 2003, 12: 69-71
[2] Lu Wenlong, Chen Hongming. China's third-generation mobile network and external communications standards competition [J]. Journal of Management Engineering, 2004, 4: 113-115
[3] Liu Rongjiao. Compatible competition game analysis [J]. China Industrial Economy, 2003, 2: 83-89
[4] Wang Jiao. Information industry standard selection problem [J]. Information Science, 2006, 1: 130-134
[5] Bresnahan T, Greenstein S. Technological competition and the structure of the computer industry [J]. Journal of Industrial Economics, 1999, (47): 1-40
[6] Brynjolfsson E, Kemerer C. Network externalities in microcomputer software: an econometric analysis of the spreadsheet market [J]. Management Science, 1996, (42): 1627-1647
[7] Cabral L, Salant D. Monopoly pricing with network externalities [J]. International Journal of Industrial Organization, 1999, (17): 199-214
[8] Farrell J, Saloner G. Installed base and compatibility: innovation, product preannouncements, and predation [J]. American Economic Review, 1986, (76): 940-955
[9] Katz M, Shapiro C. Systems competition and network effects [J]. Journal of Economic Perspectives, 1994, (8): 93-115
[10] Matutes C, Regibeau P. Mix and match: product compatibility without network externalities [J]. Rand Journal of Economics, 1988, (19): 221-234


Military Conflict Decision Modeling Based on Fuzzy Hypergame


Song Yexin, Dai Mingqiang, Cui Yan
College of Science, Naval University of Engineering, Wuhan, P.R.China, 430033

Abstract In real military conflicts, the army commander of each side will maximally utilize tactical deception to produce a wrong impression on the opponent in order to realize his own objectives, so the options and the outcome preference order of the opposite side perceived by a commander are imperfect and uncertain. In this paper, the military conflict situation is modeled as a first-level hypergame with fuzzy linguistic preference perceptions. A fuzzy number ranking method is used to determine the crisp preference perception vectors. The process of hypergame stability analysis is given for obtaining the hypergame-preserving equilibria or hypergame-destroying equilibria. According to these hypergame equilibria, the most possible outcomes of military conflicts can be efficiently forecasted. A simulation example is provided to illustrate the method. Keywords Military conflict, Hypergame analysis, Equilibrium outcomes, Fuzzy preference perceptions

1. Introduction
Military conflict decision-making is the process of making an action plan based on the conflict situation perceived by the commander. In real military conflicts, the army commander of each side will maximally utilize tactical deception to produce a wrong impression on the opponent in order to realize his own objectives, so the information about the opposite side is imperfect and uncertain, and the options and outcome preference order of the opposite side perceived by the commander may be wrong. Therefore, the military conflict situation cannot be modeled by a simple game, and traditional conflict analysis [1, 2], a powerful decision analysis method for forecasting the most possible outcomes of a conflict, is not suitable for such a military conflict environment with wrong perception information. By contrast, the hypergame [3-8] is an efficient framework for dealing with such conflict situations. In a hypergame, the players involved may have incorrect subjective perceptions of the other players' options, strategies or preferences, or may even be unaware of some of the players in the game. Fraser, Wang and Hipel [4, 5] analyze the Cuban missile crisis and the Falkland Islands crisis using hypergame theory, respectively. Wang and Hipel [6] model the Persian Gulf war between Iraq and the U.S.-led Allied forces as a hypergame, and use hypergame analysis to explain how misunderstandings affected the behavior of each decision maker as well as the possible conflict resolutions. In these papers, the players in the hypergame all express their perceptions of the opposing players' preference information in crisp form. However, because of the ambiguity of the available information as well as the essential fuzziness of human judgement, the commander in a complex military conflict can express his preference perceptions only in uncertain forms, such as linguistic variables [7, 8]. In this paper, we suppose that each commander perceives the opposing player's outcome preferences over the outcome space using linguistic values or labels, subjectively represented by trapezoidal fuzzy numbers, and we model the military conflict situation as a fuzzy first-level hypergame. Faced with the fuzzy linguistic preference information, a fuzzy number ranking method [9] is used to determine the crisp outcome preference perception vectors. The process of hypergame stability analysis is given for obtaining the hypergame-preserving equilibria or hypergame-destroying equilibria, from which the most possible outcomes of military conflicts can be efficiently forecasted. Finally, a simulation example verifies the feasibility and effectiveness of the proposed method.

2. A fuzzy hypergame model for military conflict


A game model consists of players, strategies, outcomes and payoff functions. In a military conflict, the players are the commanders of both sides. Denote the set of players by $N = \{1, 2\}$, the set of strategies of player $i$ by $S_i$, $i \in N$, and the payoff functions by $v_i$, $i \in N$, which reflect player $i$'s preferences over the outcome space $O = S_1 \times S_2$. If all the outcomes are ranked in order according to player $i$'s payoffs, writing the

This work is supported by National Natural Science Foundation of China (No: 70471031) and Scientific Research Foundation of Naval University of Engineering (No: HGDJJ06004).


most preferred outcome on the left and the least preferred on the right, then a preference vector (PV) is formed for player $i$ and denoted by $V_i$. In a game with complete information there are no misperceptions, each player is represented by only one PV, and all the players see the same set of PVs. Consequently, a two-person game can be formed by a set of PVs:

$$G = \{V_1, V_2\}. \quad (1)$$

In real military conflicts, the commanders' interpretations of the preference vectors can differ from one another because of misperceptions. Denote by $V_{ij}$ player $i$'s PV as interpreted by player $j$. $V_{ij}$ is often different from $V_i$, so the game played by player $j$ will differ from the one played by player $i$. Since each player's game is formed by his own PV and the perceived PV of the opponent, player $j$'s game $H_j^0$, $j = 1, 2$, can be expressed as

$$H_1^0 = \{V_1, V_{21}\}, \quad H_2^0 = \{V_{12}, V_2\}. \quad (2)$$

Then the conflict can be modeled as a first-level hypergame:

$$H^1 = \{H_1^0, H_2^0\} = \left\{ \binom{V_1}{V_{21}},\ \binom{V_{12}}{V_2} \right\}. \quad (3)$$

In the first-level hypergame model (3), a key difficulty is determining the PV of the opponent. Because of the ambiguity of the available information as well as the essential fuzziness of human judgement, each army commander (player) cannot perceive the opponent's preference information precisely. We suppose that each commander perceives the opponent's outcome preferences over the outcome space using linguistic values, subjectively represented by trapezoidal fuzzy numbers. Denote by $(o^1, o^2, \ldots, o^K)$ the $K$ outcomes in the outcome space $O = S_1 \times S_2$, and let the trapezoidal fuzzy number $\tilde a_{ij}^k = (\alpha_{ij}^k, \beta_{ij}^k, \gamma_{ij}^k, \delta_{ij}^k)$, $0 \le \alpha_{ij}^k \le \beta_{ij}^k \le \gamma_{ij}^k \le \delta_{ij}^k \le 1$, $k = 1, 2, \ldots, K$, be player $i$'s preference for outcome $o^k$ as perceived by player $j$. Then the military conflict with fuzzy preference perceptions can be modeled as

$$H^1 = \{H_1^0, H_2^0\} = \left\{ \binom{V_1}{\{\tilde a_{21}^1, \tilde a_{21}^2, \ldots, \tilde a_{21}^K\}},\ \binom{\{\tilde a_{12}^1, \tilde a_{12}^2, \ldots, \tilde a_{12}^K\}}{V_2} \right\}. \quad (4)$$

3. Hypergame stability analysis


In order to determine the equilibrium outcomes of the fuzzy first-level hypergame, a defuzzification function is used before the hypergame stability analysis to rank the fuzzy outcome preference perceptions and obtain the crisp outcome preference perception vectors.

Definition 1: The membership function of the trapezoidal fuzzy number $\tilde a = (\alpha, \beta, \gamma, \delta)$ on $\mathbb{R}$ satisfies

$$\mu_{\tilde a}(x) = \begin{cases} (x - \alpha)/(\beta - \alpha), & \alpha \le x \le \beta, \\ 1, & \beta \le x \le \gamma, \\ (\delta - x)/(\delta - \gamma), & \gamma \le x \le \delta, \\ 0, & \text{otherwise.} \end{cases} \quad (5)$$

For a fuzzy number $\tilde a$ and $\lambda \in [0, 1]$, the $\lambda$-cut of $\tilde a$, written $\tilde a_\lambda$, is a closed interval on $\mathbb{R}$. For $\lambda \in (0, 1]$, let $\tilde a_\lambda = \{x : \mu_{\tilde a}(x) \ge \lambda, x \in \mathbb{R}\} = [a_\lambda^L, a_\lambda^R]$, where $a_\lambda^L$ and $a_\lambda^R$ are its left and right endpoints, respectively. For $\lambda = 0$, let $\tilde a_0 = \mathrm{cls}\{x : \mu_{\tilde a}(x) > 0, x \in \mathbb{R}\} = [a_0^L, a_0^R]$, where $\mathrm{cls}\{A\}$ denotes the closure of the set $A$. Thus, if $\tilde a = (\alpha, \beta, \gamma, \delta)$, then for $\lambda \in [0, 1]$, $a_\lambda^L = \alpha + \lambda(\beta - \alpha)$ and $a_\lambda^R = \delta - \lambda(\delta - \gamma)$.

Definition 2 [9]: The function $D : S \to \mathbb{R}$ is a mapping from the set of fuzzy numbers $S$ to the set of real numbers $\mathbb{R}$. If for any $\tilde a \in S$,

$$D(\tilde a) = \frac{1}{2} \int_0^1 (a_\lambda^L + a_\lambda^R)\, d\lambda, \quad (6)$$

then $D$ is a defuzzification function, and $D(\tilde a)$ is the defuzzification value of the fuzzy number $\tilde a$.

This function is the mean of the two areas defined respectively by the left and the right slopes of the fuzzy number and the vertical axis. Based on the defuzzification function $D$, we can define a crisp total ordering for fuzzy numbers.

Definition 3: Suppose that $S$ is a set of fuzzy numbers. For any distinct $\tilde a_i, \tilde a_j \in S$, an ordering of fuzzy numbers is defined as: 1) if $D(\tilde a_i) < D(\tilde a_j)$, then $\tilde a_i < \tilde a_j$; 2) if $D(\tilde a_i) = D(\tilde a_j)$, then $\tilde a_i = \tilde a_j$; 3) if $D(\tilde a_i) > D(\tilde a_j)$, then $\tilde a_i > \tilde a_j$.

For the fuzzy outcome preference perceptions $\tilde a_{ij}^k = (\alpha_{ij}^k, \beta_{ij}^k, \gamma_{ij}^k, \delta_{ij}^k)$, $i, j = 1, 2$; $k = 1, 2, \ldots, K$, since $(\tilde a_{ij}^k)_\lambda = [\alpha_{ij}^k + \lambda(\beta_{ij}^k - \alpha_{ij}^k),\ \delta_{ij}^k - \lambda(\delta_{ij}^k - \gamma_{ij}^k)]$, using (6) the defuzzification value of $\tilde a_{ij}^k$ is

$$D(\tilde a_{ij}^k) = \frac{1}{2} \int_0^1 \big[\alpha_{ij}^k + \lambda(\beta_{ij}^k - \alpha_{ij}^k) + \delta_{ij}^k - \lambda(\delta_{ij}^k - \gamma_{ij}^k)\big]\, d\lambda = \frac{1}{4}(\alpha_{ij}^k + \beta_{ij}^k + \gamma_{ij}^k + \delta_{ij}^k). \quad (7)$$

According to Definition 3, the bigger $D(\tilde a_{ij}^k)$ is, the more preferred the outcome $o^k$ is. So we can determine the crisp outcome preference perception vector $V_{ij}$ according to the defuzzification values $D(\tilde a_{ij}^k)$, $k = 1, 2, \ldots, K$.
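As a quick illustration of (7) and Definition 3 (a sketch, not part of the original paper; function names are hypothetical), the crisp ranking can be computed in a few lines. The data below reuses three of the perception values from the simulation example of Section 4.

```python
# Defuzzify trapezoidal fuzzy preference perceptions with
# D(a) = (alpha + beta + gamma + delta) / 4 and rank the outcomes.

def defuzzify(a):
    """Defuzzification value of a trapezoidal fuzzy number (alpha, beta, gamma, delta)."""
    alpha, beta, gamma, delta = a
    return (alpha + beta + gamma + delta) / 4.0

def crisp_preference_vector(fuzzy_perceptions):
    """Order outcome numbers from most to least preferred by defuzzified value."""
    scores = {k: defuzzify(a) for k, a in fuzzy_perceptions.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Player 2's preferences as perceived by player 1 (first three outcomes
# of the Section 4 example): D-values are 0.95, 0.85 and 0.60.
a21 = {1: (0.90, 0.95, 0.95, 1.00),
       2: (0.75, 0.85, 0.85, 0.95),
       3: (0.40, 0.50, 0.60, 0.90)}
print(crisp_preference_vector(a21))  # -> [1, 2, 3]
```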

Based on the crisp outcome preference vectors, the hypergame stability analysis is carried out in a two-step procedure [4, 5]. The first step is the individual stability analysis, which is performed for the individual players in their own games using the solution concept of Fraser and Hipel [1]. As a result, the resolutions to the conflict are predicted from each player's viewpoint. In the second step, the overall stability is calculated based on the individual strategy selections.

The individual stability analysis of the game $H_j^0$ for player $j$ is performed according to the PV $V_j$ of player $j$ himself and the perceived PV $V_{ij}$ of the opponent. In this stage, when a player can unilaterally change his situation from an outcome to a more preferred outcome by changing his strategy, given that the strategy selection of his counterpart remains the same, the more preferred outcome is called a unilateral improvement (UI) from the original outcome for the player. Each outcome is classified for a given player into one of three types: rational (r), sequentially sanctioned (s) and unstable (u). An outcome is rational for a player if he has no UI from it. An outcome is sequentially sanctioned for a player if, after he invokes a UI, his counterpart can make a sequential improvement and bring about an outcome less preferred than the original one for the player. An outcome is unstable for a player if at least one of his UIs is unsanctioned. An outcome is stable for a player if, due to the other player's sequential sanctioning, he has no UI which can improve his situation to a more preferred outcome. After examining the stability of each outcome for each player, the stability of outcomes across players is analyzed.

Definition 4: An outcome is an equilibrium perceived by player $j$ if it is stable for all players according to the individual stability analysis of the game $H_j^0$. According to Definition 4, if the set of stable outcomes for player $i$ in the game $H_j^0$ is denoted $O_j^{(i)}$, the set of equilibrium outcomes perceived by player $j$ can be expressed as

$$E_j = \{o^k \mid o^k \in O_j^{(i)},\ i = 1, 2\} = O_j^{(1)} \cap O_j^{(2)}. \quad (8)$$
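The individual stability analysis just described can be sketched in a few lines; the helper names and data layout below are hypothetical, not from the paper. Outcomes are keyed by number, `strategies[o]` holds the pair of strategies producing outcome `o`, and a PV is a list of outcome numbers from most to least preferred.

```python
def rank(pv):
    """Map each outcome to its position in the PV (smaller = more preferred)."""
    return {o: i for i, o in enumerate(pv)}

def uis(outcome, strategies, pv_rank, mover):
    """Unilateral improvements for player `mover` (0 or 1): outcomes reached
    by changing only the mover's strategy that the mover strictly prefers."""
    own, other = strategies[outcome][mover], strategies[outcome][1 - mover]
    reachable = [o for o, s in strategies.items()
                 if s[1 - mover] == other and s[mover] != own]
    return [o for o in reachable if pv_rank[o] < pv_rank[outcome]]

def stability(outcome, strategies, my_rank, opp_rank, me):
    """Classify an outcome as rational (r), sequentially sanctioned (s) or
    unstable (u) for player `me`, with sanctioning via the opponent's PV."""
    my_uis = uis(outcome, strategies, my_rank, me)
    if not my_uis:
        return "r"
    for ui in my_uis:
        counters = uis(ui, strategies, opp_rank, 1 - me)
        if not any(my_rank[c] > my_rank[outcome] for c in counters):
            return "u"  # at least one UI is unsanctioned
    return "s"
```

An outcome is then a perceived equilibrium in $H_j^0$ when `stability` returns r or s for both players; intersecting the two stable sets reproduces $E_j$ in (8).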

Based on the set of equilibrium outcomes perceived by player $j$, the stable strategy $s_q^*$ and the set $S_j^*$ of stable strategies for player $j$ can be determined as

$$S_j^* = \{s_q^* \in S_j \mid o^k = (s_p, s_q^*),\ o^k \in E_j\}, \quad j = 1, 2, \quad (9)$$

where $s_p \in S_i$, $i \ne j$, and the outcome $o^k$, which is the combination of strategy $s_p$ of player $i$ and strategy $s_q^*$ of player $j$, is an equilibrium perceived by player $j$. The overall equilibria of the first-level hypergame are formed by combining each player's stable strategies in the individual games. Denote

$$E = S_1^* \times S_2^*. \quad (10)$$

Definition 5: $E$ is called the set of overall equilibria of the first-level hypergame. If an outcome $e = (s_1^*, s_2^*) \in E$ and $e \in E_1 \cap E_2$, then $e$ is called a hypergame-preserving equilibrium. If $e = (s_1^*, s_2^*) \in E$ and $e \notin E_1 \cap E_2$, then $e$ is called a hypergame-destroying equilibrium.


According to Definition 5, a hypergame-preserving equilibrium is an equilibrium perceived as a solution by all players in the hypergame, while a hypergame-destroying equilibrium is an equilibrium that at least one of the players believes is not a solution. Based on these hypergame equilibria, the most possible outcomes of military conflicts can be efficiently forecasted.

4. Military conflict simulation analysis


A war is about to break out between country A (player 1) and country B (player 2). The actual options available to A are A-1: attack from the air, A-2: attack from the sea, and A-3: negotiate. The actual options available to B are B-1: defend the air, B-2: defend the sea, and B-3: negotiate. Suppose that both sides correctly perceive the options and strategies available to each country. The options and possible outcomes are listed in Tab.1. Each outcome is represented by a column of 1s and 0s, where 1 means that the corresponding option is taken by the country and 0 indicates that it is rejected. Each outcome is referred to by a number for convenience.
Tab.1 The options and possible outcomes

Outcome number:  1  2  3  4  5  6  7  8  9 10 11 12 13
A   A-1:         0  1  1  1  1  0  0  0  0  1  1  1  1
    A-2:         0  0  0  0  0  1  1  1  1  1  1  1  1
    A-3:         1  0  0  0  0  0  0  0  0  0  0  0  0
B   B-1:         0  1  0  0  1  1  0  0  1  1  0  0  1
    B-2:         0  0  1  0  1  0  1  0  1  0  1  0  1
    B-3:         1  0  0  1  0  0  0  1  0  0  0  1  0

Tab.2 The true PV of each country

Symbols                   Preference vectors (PVs)
V1 (the true PV of A):    4, 8, 12, 3, 6, 11, 10, 5, 9, 2, 1, 7, 13
V2 (the true PV of B):    1, 7, 2, 9, 5, 13, 6, 10, 3, 11, 8, 4, 12

Tab.2 gives the outcome preference vector of each country. Due to incomplete information and misperceptions, neither country can perceive the opponent country's outcome preferences accurately and clearly. They use linguistic values, subjectively represented by trapezoidal fuzzy numbers, to express their perceptions of the opponent player's outcome preferences as follows:

$\tilde a_{21}^1 = (0.90, 0.95, 0.95, 1.00)$, $\tilde a_{21}^2 = (0.75, 0.85, 0.85, 0.95)$, $\tilde a_{21}^3 = (0.40, 0.50, 0.60, 0.90)$,
$\tilde a_{21}^4 = (0.10, 0.30, 0.40, 0.70)$, $\tilde a_{21}^5 = (0.60, 0.70, 0.75, 0.75)$, $\tilde a_{21}^6 = (0.50, 0.70, 0.80, 0.90)$,
$\tilde a_{21}^7 = (0.80, 0.85, 0.95, 1.00)$, $\tilde a_{21}^8 = (0.30, 0.45, 0.55, 0.60)$, $\tilde a_{21}^9 = (0.65, 0.75, 0.85, 0.95)$,
$\tilde a_{21}^{10} = (0.55, 0.65, 0.70, 0.75)$, $\tilde a_{21}^{11} = (0.20, 0.50, 0.50, 0.80)$, $\tilde a_{21}^{12} = (0.10, 0.20, 0.30, 0.40)$,
$\tilde a_{21}^{13} = (0.40, 0.60, 0.70, 0.80)$;

$\tilde a_{12}^1 = (0.25, 0.35, 0.45, 0.55)$, $\tilde a_{12}^2 = (0.30, 0.35, 0.40, 0.45)$, $\tilde a_{12}^3 = (0.65, 0.75, 0.85, 0.95)$,
$\tilde a_{12}^4 = (0.85, 0.90, 0.95, 1.00)$, $\tilde a_{12}^5 = (0.35, 0.50, 0.50, 0.65)$, $\tilde a_{12}^6 = (0.50, 0.70, 0.70, 0.90)$,
$\tilde a_{12}^7 = (0.10, 0.20, 0.30, 0.40)$, $\tilde a_{12}^8 = (0.70, 0.80, 0.90, 1.00)$, $\tilde a_{12}^9 = (0.20, 0.40, 0.50, 0.70)$,
$\tilde a_{12}^{10} = (0.60, 0.65, 0.70, 0.75)$, $\tilde a_{12}^{11} = (0.30, 0.60, 0.60, 0.90)$, $\tilde a_{12}^{12} = (0.80, 0.95, 1.00, 1.00)$,
$\tilde a_{12}^{13} = (0.30, 0.40, 0.45, 0.60)$.
Using (7), the defuzzification values $D(\tilde a_{ij}^k)$, $k = 1, 2, \ldots, 13$, are calculated as follows:

$D(\tilde a_{21}^1) = 0.950$, $D(\tilde a_{21}^2) = 0.850$, $D(\tilde a_{21}^3) = 0.600$, $D(\tilde a_{21}^4) = 0.375$, $D(\tilde a_{21}^5) = 0.700$, $D(\tilde a_{21}^6) = 0.725$, $D(\tilde a_{21}^7) = 0.900$, $D(\tilde a_{21}^8) = 0.475$, $D(\tilde a_{21}^9) = 0.800$, $D(\tilde a_{21}^{10}) = 0.663$, $D(\tilde a_{21}^{11}) = 0.500$, $D(\tilde a_{21}^{12}) = 0.250$, $D(\tilde a_{21}^{13}) = 0.625$;

$D(\tilde a_{12}^1) = 0.400$, $D(\tilde a_{12}^2) = 0.375$, $D(\tilde a_{12}^3) = 0.800$, $D(\tilde a_{12}^4) = 0.925$, $D(\tilde a_{12}^5) = 0.500$, $D(\tilde a_{12}^6) = 0.700$, $D(\tilde a_{12}^7) = 0.250$, $D(\tilde a_{12}^8) = 0.850$, $D(\tilde a_{12}^9) = 0.450$, $D(\tilde a_{12}^{10}) = 0.675$, $D(\tilde a_{12}^{11}) = 0.600$, $D(\tilde a_{12}^{12}) = 0.938$, $D(\tilde a_{12}^{13}) = 0.438$.

According to the defuzzification values $D(\tilde a_{ij}^k)$ and Definition 3, the crisp outcome preference perception vectors $V_{21}$ and $V_{12}$ can be determined; they are listed in Tab.3.



Since $V_{21} \ne V_2$ and $V_{12} \ne V_1$ in the overall first-level hypergame $H^1$, country A and country B are playing separate games. Hence,

$$H^1 = \{H_1^0, H_2^0\} = \left\{ \binom{V_1}{V_{21}},\ \binom{V_{12}}{V_2} \right\}.$$

Tab.3 The perceived PV of each country

Symbols                               Preference vectors (PVs)
V21 (the PV of B perceived by A):     1, 7, 2, 9, 6, 5, 10, 13, 3, 11, 8, 4, 12
V12 (the PV of A perceived by B):     12, 4, 8, 3, 6, 10, 11, 5, 9, 13, 1, 2, 7

Tab.4 and Tab.5 give the individual stability analysis of games $H_1^0$ and $H_2^0$, respectively. In Tab.4 and Tab.5, when a player has more than one UI from a given outcome, the UIs are listed under the outcome, with the more preferred UIs first and the less preferred ones last. All outcomes are marked r, s or u according to their type of stability. If an outcome is a perceived equilibrium it is marked with an E; otherwise it is not an equilibrium and is marked with an X.
Tab.4 The individual stability analysis of game $H_1^0$

Country A (ordered by $V_1$):
  Stability:  r    s    s      r    r    s    s    r    s    u      u        u      u
  Outcome:    4    8    12     3    6    11   10   5    9    2      1        7      13
  UIs:             4    4,8              3    6         5    6,10   4,8,12   3,11   5,9

Country B (ordered by $V_{21}$):
  Stability:  r    r    r    s    s     u    r    u    u     u       u       u       u
  Outcome:    1    7    2    9    6     5    10   13   3     11      8       4       12
  UIs:                       7    7,9   2         10   2,5   10,13   7,9,6   2,5,3   10,13,11

Equilibrium perceived by A: E at outcomes 6, 9, 10; X at all other outcomes.

Tab.5 The individual stability analysis of game $H_2^0$

Country A (ordered by $V_{12}$):
  Stability:  r    s    s      r    r    s    s    r    s    s     u        u      u
  Outcome:    12   4    8      3    6    10   11   5    9    13    1        2      7
  UIs:             12   12,4             6    3         5    5,9   12,4,8   6,10   3,11

Country B (ordered by $V_2$):
  Stability:  r    r    r    s    s    r    u     u    u     u       u       u       u
  Outcome:    1    7    2    9    5    13   6     10   3     11      8       4       12
  UIs:                       7    2         7,9   13   2,5   13,10   7,9,6   2,5,3   13,10,11

Equilibrium perceived by B: E at outcomes 5, 9, 13; X at all other outcomes.

From Tab.4, in $H_1^0$, the stable outcomes for country A are 3, 4, 5, 6, 8, 9, 10, 11 and 12, while 1, 2, 6, 7, 9 and 10 are stable for country B. The outcomes 6, 9 and 10 are stable for both players and are the equilibria perceived by country A. From Tab.5, the outcomes 3, 4, 5, 6, 8, 9, 10, 11, 12 and 13 are stable for country A and 1, 2, 5, 7, 9 and 13 are stable for country B, so the outcomes 5, 9 and 13 are the equilibria perceived by country B. Therefore, $E_1 = \{6, 9, 10\}$ and $E_2 = \{5, 9, 13\}$. According to the above sets of equilibrium outcomes perceived by the two countries, the stable strategies for

country A are A-2 (attack from the sea) and (A-1, A-2) (attack from the air and the sea), and the stable strategy for country B is (B-1, B-2) (defend the air and the sea). That is,

$$S_1^* = \{\text{A-2},\ (\text{A-1}, \text{A-2})\}, \quad S_2^* = \{(\text{B-1}, \text{B-2})\}.$$

So the set of overall equilibria of the first-level hypergame $H^1$ is

$$E = S_1^* \times S_2^* = \{9, 13\}.$$

Since $9 \in E_1 \cap E_2$ while $13 \notin E_1 \cap E_2$, outcome 9 is a hypergame-preserving equilibrium and 13 is a hypergame-destroying equilibrium. From the overall stability analysis, the likely decision is that country A will attack from the sea while country B defends the air and the sea simultaneously.

5. Conclusion
The military conflict situation is modeled as a fuzzy first-level hypergame in this paper. A defuzzification function is first used to rank the fuzzy outcome preference perceptions and determine the crisp outcome preference perception vectors. The process of hypergame stability analysis is then given for obtaining the hypergame-preserving equilibria and hypergame-destroying equilibria. According to these hypergame equilibria, the most possible outcomes of military conflicts can be efficiently forecasted.
References
[1] Fraser N M, Hipel K W. Conflict Analysis: Models and Resolutions. New York: Elsevier, 1984
[2] Li Ming. Military decision modeling with conflict analysis. Proceedings of the 1996 IEEE International Conference on Systems, Man and Cybernetics, 1996: 2552-2557
[3] Wang M, Hipel K W, Fraser N M. Solution concepts in hypergames. Applied Mathematics and Computation, 1989, 34: 147-171
[4] Fraser N M, Wang M, Hipel K W. Hypergame theory in two-person conflicts with application to the Cuban missile crisis. Information and Decision Technologies, 1990, 16: 301-319
[5] Hipel K W, Wang M, Fraser N M. Hypergame analysis of the Falkland Islands crisis. International Studies Quarterly, 1988, 32: 335-358
[6] Wang M, Hipel K W. Modeling misperceptions in the Persian Gulf crisis. Proceedings of the 1991 IEEE International Conference on Systems, Man and Cybernetics, 1991: 1989-1995
[7] Song Yexin, Wang Qian, Li Zhijun. A group decision making method for integrating outcome preferences in hypergame situations. In: Wang L, Jin Y (eds.): Fuzzy Systems and Knowledge Discovery. Lecture Notes in Artificial Intelligence, vol. 3613. Berlin Heidelberg New York: Springer-Verlag, 2005: 676-683
[8] Song Yexin, Li Zhijun, Chen Yongqiang. Fuzzy information fusion for hypergame outcome preference perception. Intelligent Control and Automation, 2006, 344: 882-887
[9] Liou T S, Wang M J. Ranking fuzzy numbers with integral value. Fuzzy Sets and Systems, 1992, 50: 247-255


Research on Inferring ELECTRE-III's Parameters and a Case Study on Naval Gun Weapon System Integration
Sun Shiyan, Wei Hua, Wang Hangyu, Li Lu
Institute of Electronic Engineering, Wuhan Navy University of Engineering, P.R.China, 430033

Abstract ELECTRE-III is one of the important multiple attribute decision making (MADM) methods for system integration. Setting the parameters in ELECTRE-III is a vital and difficult step. In this paper, a method of inferring ELECTRE-III's parameters with incomplete information, based on robustness analysis, is presented. First, ELECTRE-III is transformed into a continuous smooth function of each parameter vector. Then an analysis structure based on maximizing the robustness margin is provided, and several parameter inferring algorithms are discussed within this structure. Finally, a naval gun weapon system integration problem is put forward and analyzed with the method above.
Key words Robustness analysis, Multiple attribute decision making, ELECTRE-III, Naval gun weapon system

1 Introduction
Classical parameter (such as weights, cut or threshold) setting methods in MADM usually rest on the decision maker's common sense or intuition, and the result is often rigid. Due to the uncertainty of human experience, this way is more or less arbitrary, which may diminish the credibility of the decision making. There are two approaches that acknowledge this difficulty and try to remedy it: sensitivity analysis and robustness analysis. Sensitivity analysis is a way of estimating how much the parameters may vary while still leading to a constant result, or of finding which parameter is the most sensitive and which action is a substitute for the optimal one. The studies on sensitivity analysis are numerous. But sensitivity analysis has the disadvantage of requiring an estimated center value for each parameter and of considering only the corresponding solution. It is also often performed on a single parameter at a time, ignoring possible interdependencies between parameters. Furthermore, sensitivity analysis is theoretical rather than practical. Robustness analysis is a hotspot of recent years which has won great academic attention, and is supposed to be an effective replacement for sensitivity analysis in processing imperfect information in MADM. Compared to sensitivity analysis, robustness analysis considers parameter combinations rather than a single parameter at a time. It asks whether the conclusion changes (or not) when the parameter values vary, rather than how much the parameters may vary without changing the conclusion. Moreover, it has a more extensive scope, including environment parameters, method models and model parameters. There are some representative contributions, including the studies of Roy, Vincke, Kouvelis and Yu, Dias, etc., which have expanded the scope of robustness analysis. Vincke proposed a formalism to define the concepts of robust solution and robust method; his concept of robustness is that no solution contradicts the first one [1][2][3]. Kouvelis and Yu define a robust solution as a rank with the best worst-case behavior in the context of discrete optimization problems [4]. Dias defines a robust solution as a scenario combination that holds the rank unchanged under incomplete information [5], which is also the background concept of this paper. In sum, robustness analysis is a new direction in MADM, which emphasizes that uncertain factors should be considered and that the optimal solution must also be robust and neutral. ELECTRE-III is one kind of important MADM method based on a valued outranking relation, and how to set its parameters is an important step. In the second section, the problem to be discussed is formulated, including the ELECTRE-III method, its transformation and the denotation of incomplete information. In the third section, a frame of robustness analysis based on classical optimization theory is presented, which is applied to inferring ELECTRE-III's parameters in the fourth section. First, the constraints are transformed into continuous smooth functions of the parameter vectors. Then several parameter inferring algorithms are provided by maximizing the robustness margin based on mathematical programming. The shipborne weapon system has become larger, more complex and more expensive in order to cope with increasing surface and air threats, which conflicts with relatively tight military budgets and limited loading capability. As a subsystem of the warship combat system, the naval gun weapon system plays an important role and takes on many missions.

How to obtain an optimal system integration plan, one that brings the efficiency, cost, rate of progress, life force, reliability, maintainability and percentage of tonnage to a generally suitable level, is always a pivotal problem for weapon system analysis experts and warship design departments. MADM methods have been applied in the integration of many weapon systems. The fifth section provides an illustrative example of naval gun weapon system integration as a case study of the method of inferring ELECTRE-III's parameters. Furthermore, the last section discusses how this method can also be applied to other MADM methods.

2 Formulation

2.1 The ELECTRE-III method
The outranking relation is constructed on the alternatives set $A = \{a_1, a_2, \ldots, a_n\}$. To obtain the outranking relation between alternatives $a_i$ and $a_k$ ($a_i, a_k \in A$), the concordance test and discordance test proceed as follows:

1) Basic information
Let $J = \{j : j = 1, 2, \ldots, m\}$ denote the set of attribute indices. $y_j(a_i)$ denotes the evaluation of an alternative $a_i \in A$ on attribute $j$. All attributes are assumed to be of benefit type, namely the higher the value, the better. Let $w = \{w_1, w_2, \ldots, w_m\}$ denote the set of weights proposed by the decision makers. $q_j$ is the indifference threshold between alternatives on the $j$-th attribute, $p_j$ denotes the strict preference threshold, and $v_j$ denotes the veto threshold; namely, when the difference $y_j(a_k) - y_j(a_i)$ exceeds $v_j$, the judgment that alternative $a_i$ globally outranks $a_k$ is no longer accepted.

2) Define the concordance indices and discordance indices
$C(a_i, a_k)$ is defined as the concordance index of the pair of alternatives $(a_i, a_k) \in A^2$ in the ELECTRE-III method:

$$C(a_i, a_k) = \frac{\sum_{j=1}^m w_j c_j(a_i, a_k)}{\sum_{j=1}^m w_j}, \quad (1)$$

where

$$c_j(a_i, a_k) = \begin{cases} 0, & \text{if } y_j^{a_i} - y_j^{a_k} \le -p_j, \\ 1, & \text{if } y_j^{a_i} - y_j^{a_k} \ge -q_j, \\ \dfrac{y_j^{a_i} - y_j^{a_k} + p_j}{p_j - q_j}, & \text{otherwise.} \end{cases} \quad (2)$$

The index $c_j(a_i, a_k)$ denotes the degree of preference of $a_i$ over $a_k$ on the $j$-th attribute, and $C(a_i, a_k)$ denotes the measure supporting the judgment that $a_i$ outranks $a_k$. The discordance index $d_j(a_i, a_k)$ is

$$d_j(a_i, a_k) = \begin{cases} 0, & \text{if } y_j^{a_k} - y_j^{a_i} \le p_j, \\ 1, & \text{if } y_j^{a_k} - y_j^{a_i} \ge v_j, \\ \dfrac{y_j^{a_k} - y_j^{a_i} - p_j}{v_j - p_j}, & \text{otherwise.} \end{cases} \quad (3)$$

$d_j(a_i, a_k)$ denotes the measure rejecting the judgment that $a_i$ outranks $a_k$.

Fig. 1 Definition of the $c_j(a_i, a_k)$ and $d_j(a_i, a_k)$ functions


In Fig. 1, the horizontal axis denotes $y_j(a_k)$ and the vertical axis denotes the function values of $c_j(a_i, a_k)$ and $d_j(a_i, a_k)$; the broken line is the $c_j(a_i, a_k)$ function and the dash-dotted line is the $d_j(a_i, a_k)$ function.
d j (ai , a k ) , in which broken line is c j (ai , ak ) function, and dashdotted line is d j (ai , a k ) function.

Suppose

$$J^+ = \{j \in J \mid d_j(a_i, a_k) \le C(a_i, a_k)\}, \quad J^- = \{j \in J \mid d_j(a_i, a_k) > C(a_i, a_k)\}, \quad (4)$$

so that $J^+(a_i, a_k)$ denotes the subset of $J$ on which $d_j(a_i, a_k) \le C(a_i, a_k)$, and $J^-(a_i, a_k)$ the subset on which $d_j(a_i, a_k) > C(a_i, a_k)$.

3) Define the valued outranking relation
Define the valued outranking relation between the pair of alternatives $(a_i, a_k)$ as $S(a_i, a_k)$:

$$S(a_i, a_k) = \begin{cases} C(a_i, a_k), & \text{if } J^- = \emptyset, \\ C(a_i, a_k) \displaystyle\prod_{j \in J^-(a_i, a_k)} \frac{1 - d_j(a_i, a_k)}{1 - C(a_i, a_k)}, & \text{otherwise.} \end{cases} \quad (5)$$

$S(a_i, a_k)$ denotes the measure supporting the judgment that $a_i$ globally outranks $a_k$. Suppose

$$D_j(a_i, a_k) = \begin{cases} 1, & \text{if } j \in J^+, \\ \dfrac{1 - d_j(a_i, a_k)}{1 - C(a_i, a_k)}, & \text{if } j \in J^-. \end{cases} \quad (6)$$

Then

$$D(a_i, a_k) = \prod_{j \in J} D_j(a_i, a_k), \quad (7)$$

and

$$S(a_i, a_k) = C(a_i, a_k) \cdot D(a_i, a_k). \quad (8)$$
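As a quick sketch of how these indices combine (a minimal illustration, not the authors' code), the following computes $c_j$, $d_j$, $C$ and the credibility $S$ for one pair of alternatives, assuming benefit-type evaluations and array-valued thresholds:

```python
import numpy as np

def c_j(yi, yk, q, p):
    """Per-attribute concordance of 'a_i outranks a_k' (eq. 2)."""
    diff = yk - yi  # how much a_k exceeds a_i on each attribute
    return np.clip((p - diff) / (p - q), 0.0, 1.0)

def d_j(yi, yk, p, v):
    """Per-attribute discordance (eq. 3)."""
    diff = yk - yi
    return np.clip((diff - p) / (v - p), 0.0, 1.0)

def credibility(yi, yk, w, q, p, v):
    """Valued outranking degree S(a_i, a_k) (eqs. 1 and 5-8)."""
    c = c_j(yi, yk, q, p)
    d = d_j(yi, yk, p, v)
    C = np.dot(w, c) / w.sum()                   # eq. (1)
    mask = d > C                                 # the set J^-
    if C < 1.0:
        D = np.prod((1.0 - d[mask]) / (1.0 - C))  # eqs. (6)-(7)
    else:
        D = 0.0 if mask.any() else 1.0
    return C * D                                 # eq. (8)
```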

By the valued outranking relation $S(a_i, a_k)$, it is possible to define a family of nested crisp outranking relations $S_\lambda$:

$$S_\lambda = \{(a_i, a_k) \in A \times A : S(a_i, a_k) \ge \lambda\}, \quad \lambda \in [0.5, 1], \quad (9)$$

where $\lambda$ is the belief level at which $S(a_i, a_k)$ satisfies $S_\lambda$.

2.2 Transformed ELECTRE-III

From formulas (6)-(8), there is a nonlinear relationship between $S(a_i, a_k)$ and $w$, which is difficult to handle. Furthermore, the subset of attributes $J^-$ cannot be integrated into the programming function. Ref. [6] studied the weights and veto thresholds in $S(a_i, a_k)$, and also considers that $S(a_i, a_k)$ is a continuous, non-differentiable, nonlinear concave function of $w$. To make classical optimization methods easily applicable, $S(a_i, a_k)$ is transformed as follows. The transformed discordance index $d'_j(a_i, a_k)$ is

$$d'_j(a_i, a_k) = \begin{cases} 0, & \text{if } y_j^{a_k} - y_j^{a_i} \le u_j, \\ 1, & \text{if } y_j^{a_k} - y_j^{a_i} \ge v_j, \\ \dfrac{y_j^{a_k} - y_j^{a_i} - u_j}{v_j - u_j}, & \text{otherwise,} \end{cases} \quad (10)$$

where $u_j$ is the veto indifference threshold, $p_j \le u_j \le v_j$. The transformed $d'_j(a_i, a_k)$ is shown in Fig. 2. The transformed $D'_j(a_i, a_k)$ is

$$D'_j(a_i, a_k) = \begin{cases} 1, & \text{if } j \in J^+, \\ 1 - d'_j(a_i, a_k), & \text{if } j \in J^-. \end{cases} \quad (11)$$


Fig. 2 The functions $c_j(a_i, a_k)$ and $d'_j(a_i, a_k)$

Then

$$D'_j(a_i, a_k) = \min\{1,\ 1 - d'_j(a_i, a_k)\}, \quad (12)$$

$$D'(a_i, a_k) = \prod_{j \in J} D'_j(a_i, a_k). \quad (13)$$

The transformed $S(a_i, a_k)$ is

$$S'(a_i, a_k) = C(a_i, a_k) \cdot D'(a_i, a_k), \quad (14)$$

$$S'_\lambda = \{(a_i, a_k) \in A \times A : S'(a_i, a_k) \ge \lambda\}. \quad (15)$$

2.3 Denotation of incomplete information
Usually it is difficult for the decision maker to provide exact parameters, but he may easily present some linear constraints, from which an incomplete information space can be constructed. It is formed as follows:

1) Weights space: (a) $w_j^l \le w_j \le w_j^u$; (b) $\lambda_1 w_1 + \lambda_2 w_2 + \cdots + \lambda_m w_m \ge 0$; (c) $\lambda_0 + \lambda_1 w_1 + \lambda_2 w_2 + \cdots + \lambda_m w_m = 0$. All these constraints are denoted by the space $W$: $w = (w_1, w_2, \ldots, w_m) \in W \subseteq \mathbb{R}^m$.

2) Cutting level space: $\lambda^l \le \lambda \le \lambda^u$, denoted as $\lambda \in \Lambda \subseteq \mathbb{R}$.

3) Preference threshold space: $p_j^l \le p_j \le p_j^u$, $j \in J$, denoted as $p = (p_1, p_2, \ldots, p_m) \in P \subseteq \mathbb{R}^m$.

4) Indifference threshold space: $q_j^l \le q_j \le q_j^u$, $j \in J$, denoted as $q = (q_1, q_2, \ldots, q_m) \in Q \subseteq \mathbb{R}^m$.

5) Veto threshold space: (a) $v_j^l \le v_j \le v_j^u$, $j \in J$; (b) $v_j \ge p_j \ge q_j$; (c) $v_j = p_j + \mu_j w_j$. Denoted as $v = (v_1, v_2, \ldots, v_m) \in V \subseteq \mathbb{R}^m$.

6) Veto indifference threshold space: (a) $u_j^l \le u_j \le u_j^u$, $j \in J$; (b) $v_j \ge u_j \ge p_j$; (c) $u_j = p_j + \eta_j (v_j - p_j)$. Denoted as $u = (u_1, u_2, \ldots, u_m) \in U \subseteq \mathbb{R}^m$.

Suppose $\theta = \{w, \lambda, p, q, v, u\}$. Then the parameter space is $\Theta = W \times \Lambda \times P \times Q \times V \times U$.

3 A frame of robustness analysis


A formalized definition of robustness is provided in Ref. [7]:

Definition 1: A conclusion $C_r$ is said to be robust with respect to a domain $\Omega$ of possible values for the preference and technical parameters if there is no particular set of parameters $\omega \in \Omega$ which invalidates the conclusion $C_r$.

The problem of inferring the parameters of ELECTRE-III is solved by maximizing the robustness margin based on mathematical programming. Let

$$S^+ = \{(a_i, a_k) \mid S'(a_i, a_k) \ge \lambda\}, \quad S^- = \{(a_i, a_k) \mid S'(a_i, a_k) + \varepsilon < \lambda\}, \quad (16)$$

where $\varepsilon$ is a tiny positive number. To keep the rank constant, the sets $S^+$ and $S^-$ must be unchanged. So the optimal parameter vectors can be obtained by maximizing the robustness margin under constraints defined by $S^+$ and $S^-$:

$$\begin{aligned} \max_{\theta}\ & \delta(\theta) \\ \text{s.t.}\ & S'(a_i, a_k) \ge \lambda + \delta, \quad (a_i, a_k) \in S^+, \\ & S'(a_i, a_k) \le \lambda - \delta, \quad (a_i, a_k) \in S^-, \\ & \theta \in \Theta,\ j \in J, \end{aligned} \tag{MP}$$

where $\theta$ denotes the parameters of Section 2.3 and $\Theta$ denotes the constrained domain.

Definition 2: Let $\delta = |S'(a_i, a_k) - \lambda|$; $\delta$ is the robustness margin of alternative $a_i$ outranking $a_k$, $0 \le \delta \le 1$. The rank is more robust when $\delta$ is larger.

Definition 3: If the programming (MP) has a solution, the maximum of $\delta$ ($\delta_{\max}$) can be obtained, which is called the maximum robustness margin, and the solution is the optimal parameter vector $\theta_{opt}$.

ELECTRE-III's parameter inferring algorithm:
1) Problem formulation: define the alternatives set $A$ and the attributes set $J$; experts assign the evaluation values $y_j(a_i)$.
2) Set initial parameters: set the initial parameters $\theta^0$, the parameter constraints, and the other ELECTRE-III parameters.
3) $S^+$ and $S^-$: obtain $S^+$ and $S^-$ by ELECTRE-III.
4) $\delta_{\max}$ and $\theta_{opt}$: solve the programming (MP) to obtain the maximum robustness margin $\delta_{\max}$ and the optimal parameter vector $\theta_{opt}$.
5) $S'(a_i, a_k)$: obtain the indices $S'(a_i, a_k)$ by ELECTRE-III with $\theta_{opt}$, $a_i, a_k \in A$.
6) Rank: obtain the rank of the set $A$ by $S'(a_i, a_k)$.

4 Inferring ELECTRE-III's parameters


The parameter structure of ELECTRE-III is shown in Fig.3. The parameters to be inferred based on robustness analysis include local parameters and global parameters. When the local parameters are studied, the others are fixed. In this paper, the weights, the cutting level, the thresholds and other local parameters are studied respectively (the parameters to be discussed are in the left half of Fig. 3). It is more difficult to infer the global parameters, which will be discussed in later papers.

Fig. 3 Structure of the parameters to be inferred: local parameters (weights and cutting level, preference parameters, veto parameters) and global parameters

4.1 Inferring the weights and cutting level
When the weights and the cutting level are analyzed, the other parameters are fixed. The constraints of (MP) can be embodied as follows:

$$\begin{aligned} \max_{w, \lambda}\ & \delta(w, \lambda) \\ \text{s.t.}\ & \sum_{j=1}^m w_j c_j(a_i, a_k) \cdot D'(a_i, a_k) \ge (\lambda + \delta) \sum_{j=1}^m w_j, \quad (a_i, a_k) \in S^+, \\ & \sum_{j=1}^m w_j c_j(a_i, a_k) \cdot D'(a_i, a_k) \le (\lambda - \delta) \sum_{j=1}^m w_j, \quad (a_i, a_k) \in S^-, \\ & \sum_{j=1}^m w_j = 1,\ w_j \ge \varepsilon,\ w \in W,\ \lambda \in \Lambda,\ j \in J. \end{aligned} \tag{MP1}$$

The programming (MP1) is a linear program in the weights $w$ and the cutting level $\lambda$. The optimal estimates of $w$ and $\lambda$ can be obtained by the simplex method.
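Because (MP1) is linear once $\sum_j w_j = 1$ is imposed, it can be handed to any LP solver. The following sketch (hypothetical data layout, not the authors' code) uses scipy.optimize.linprog; `cD_plus` and `cD_minus` hold the vectors $c_j(a_i, a_k) \cdot D'(a_i, a_k)$ for the pairs in $S^+$ and $S^-$:

```python
import numpy as np
from scipy.optimize import linprog

def infer_weights(cD_plus, cD_minus, w_bounds, lam_bounds):
    """Maximize the robustness margin delta over (w, lam) as in (MP1)."""
    m = len(w_bounds)
    # decision vector x = (w_1..w_m, lam, delta); linprog minimizes,
    # so minimize -delta to maximize the robustness margin.
    cost = np.zeros(m + 2); cost[-1] = -1.0
    A_ub, b_ub = [], []
    for cD in cD_plus:    # S+: sum_j w_j cD_j - lam - delta >= 0
        A_ub.append(np.concatenate([-cD, [1.0, 1.0]])); b_ub.append(0.0)
    for cD in cD_minus:   # S-: sum_j w_j cD_j - lam + delta <= 0
        A_ub.append(np.concatenate([cD, [-1.0, 1.0]])); b_ub.append(0.0)
    A_eq = [np.concatenate([np.ones(m), [0.0, 0.0]])]  # sum_j w_j = 1
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=list(w_bounds) + [lam_bounds, (0.0, 1.0)])
    return res.x[:m], res.x[m], res.x[m + 1]  # w_opt, lam_opt, delta_max
```

Additional linear constraints on $w$ (the space $W$ of Section 2.3) would simply be appended as further rows of `A_ub` or `A_eq`.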
4.2 Inferring the preference parameters

1) Inferring the preference threshold $p_j$ and indifference threshold $q_j$ of a single attribute $j$

When inferring the preference threshold $p_j$ and indifference threshold $q_j$ of a single attribute $j$, the other parameters are fixed. The programming (MP) is then piecewise and concave and is difficult to solve. A compromise is to smooth the piecewise function $c_j(a_i, a_k)$:

$$\bar c_j(a_i, a_k) = \begin{cases} 0, & \text{if } \Delta_j(a_i, a_k) \le -p_j^u, \\ 1, & \text{if } \Delta_j(a_i, a_k) \ge -q_j^l, \\ \dfrac{\Delta_j(a_i, a_k) + p_j}{p_j - q_j}, & \text{otherwise,} \end{cases} \quad (17)$$

where $\Delta_j(a_i, a_k) = y_j(a_i) - y_j(a_k)$. Then the constraints of (MP) can be embodied as follows:

$$\begin{aligned} \max_{p_j, q_j}\ & \delta(p_j, q_j) \\ \text{s.t.}\ & \Big(\frac{\lambda + \delta}{D'} - C_j\Big) \sum_{l=1}^m w_l \Big/ w_j \le \bar c_j(a_i, a_k), \quad (a_i, a_k) \in S^+, \\ & \Big(\frac{\lambda - \delta}{D'} - C_j\Big) \sum_{l=1}^m w_l \Big/ w_j \ge \bar c_j(a_i, a_k), \quad (a_i, a_k) \in S^-, \\ & p_j^l \le p_j \le p_j^u,\ q_j^l \le q_j \le q_j^u,\ j \in J, \end{aligned} \tag{MP2}$$

where $C_j(a_i, a_k) = C(a_i, a_k) - w_j c_j(a_i, a_k) \big/ \sum_{l=1}^m w_l$. If the programming (MP2) has a solution, the optimal preference threshold $p_j^{opt}$ and the optimal indifference threshold $q_j^{opt}$ can be obtained.

2) Inferring the preference threshold vector $p$ and indifference threshold vector $q$

The piecewise function $c_j(a_i, a_k)$ is smoothed in the same way as above. If the programming problem has a solution, the optimal preference threshold vector $p^{opt}$ and the optimal indifference threshold vector $q^{opt}$ can be obtained.

4.3 Inferring the veto parameters

1) Inferring the veto threshold $v_j$ and veto indifference threshold $u_j$ of a single attribute $j$

The other parameters are fixed when the veto threshold $v_j$ and veto indifference threshold $u_j$ of a single attribute $j$ are inferred. The programming (MP) is also piecewise and concave. The smoothed function $\bar d'_j(a_i, a_k)$ is

$$\bar d'_j(a_i, a_k) = \begin{cases} 0, & \text{if } \Delta_j(a_k, a_i) \le u_j^l, \\ 1, & \text{if } \Delta_j(a_k, a_i) \ge v_j^u, \\ \dfrac{\Delta_j(a_k, a_i) - u_j}{v_j - u_j}, & \text{otherwise.} \end{cases} \quad (18)$$

Then:

$$\begin{aligned} \max_{u_j, v_j}\ & \delta(u_j, v_j) \\ \text{s.t.}\ & \frac{\lambda + \delta}{C \cdot D'_{-j}} \le 1 - \bar d'_j(a_i, a_k), \quad (a_i, a_k) \in S^+, \\ & \frac{\lambda - \delta}{C \cdot D'_{-j}} \ge 1 - \bar d'_j(a_i, a_k), \quad (a_i, a_k) \in S^-, \\ & u_j^l \le u_j \le u_j^u,\ v_j^l \le v_j \le v_j^u,\ j \in J, \end{aligned} \tag{MP3}$$

where $D'_{-j} = D'(a_i, a_k) \big/ D'_j(a_i, a_k)$. If the programming (MP3) has a solution, the optimal veto threshold $v_j^{opt}$ and the optimal veto indifference threshold $u_j^{opt}$ can be obtained.

2) Inferring the veto threshold vector $v$ and veto indifference threshold vector $u$

The piecewise function $d'_j(a_i, a_k)$ is smoothed in the same way as above. If the programming problem has a solution, the optimal veto threshold vector $v^{opt}$ and the optimal veto indifference threshold vector $u^{opt}$ can be obtained.

5 A case study of naval gun weapon system integration


As the first step of the process of obtaining an optimal integration plan for a naval gun weapon system, some systems satisfying the basic sea battle needs and other constraint conditions, namely feasible integration alternatives, are designed. The classical system structure of a feasible integration alternative is expressed as: (main sensor) + (spare sensor) + (director) + (naval gun firepower system). The main sensor is a tracking radar; the spare sensor is a photoelectric device; the director includes some fire control equipment; the naval gun firepower system includes some types of naval guns such as the AK630, 37F and AK76, or their combinations. Three feasible configuration alternatives for a certain type of warship are listed in Tab.1.
Tab. 1 Three feasible integration alternatives

      Sensor       Spare sensor   Director   Naval gun firepower system
a1    344 radar    GD-2           ZPJ-2B     1 AK76 + 1 AK630
a2    347G radar   JM-83          ZPJ-2B     3 37F
a3    TR47 radar   OFD-b30        CC24       2 37F + 1 AK76

Decision makers attach different importance to each attribute in a specific integration problem, and the weights reflect the decision makers' preferences. Suppose the attribute set of the naval gun weapon system is
{efficiency, cost, risk, applicability, compatibility, life force}.

According to the algorithm of Section 3, the process of naval gun weapon system integration is as follows:

1) The standardized attribute values of the three feasible integration alternatives are provided in Tab.2, and the initial weight vector $w^0$ is listed in Tab.3. The parameter constraints are set as follows, which are simple but general: $0.3 \le w_1 \le 0.5$, $0.1 \le w_2 \le 0.2$, $w_1 \ge 2w_2$, $0.1 \le w_3 \le 0.2$, $0.1 \le w_4 \le 0.2$, $w_5 \le 0.1$, $0.1 \le w_6 \le 0.2$, $\varepsilon = 0.001$, $0.4 \le \lambda \le 0.6$, $0.5 \le \mu_j \le 0.7$, $0.5 \le p_j \le 0.7$, $0 \le q_j \le 0.2$, $0.5 \le \eta_j \le 0.7$.
Tab. 2 Standardized attribute values

      efficiency   cost     risk     applicability   compatibility   life force
a1    0.4103       0.3529   0.1389   0.6038          0.0153          0.9318
a2    0.8936       0.8132   0.2028   0.2722          0.7468          0.4660
a3    0.0579       0.0099   0.1987   0.1988          0.4451          0.4186

2) The result of ELECTRE-III with the initial parameters: $a_2 \succ a_1$, $a_2 \succ a_3$.

3) Solve the programming (MP1). The optimal weight vector $w^{opt}$ is listed in Tab.3, with $\lambda^{opt} = 0.4$.
Tab. 3 Initial and optimal weight vectors

          w1     w2     w3     w4     w5      w6
w^0       0.4    0.14   0.1    0.13   0.08    0.15
w^opt     0.40   0.15   0.10   0.20   0.001   0.149

4) Solve the programmings (MP2) and (MP3). The optimal threshold vectors are listed in Tab.4.

Tab. 4 The optimal threshold vectors

Attribute           1            2            3            4            5            6
(p^opt, q^opt)   (0.7, 0.2)   (0.5, 0.2)   (0.5, 0.2)   (0.6, 0.1)   (0.7, 0.2)   (0.7, 0.2)
(u^opt, v^opt)   (0.7, 0.2)   (0.6, 0.1)   (0.6, 0.1)   (0.6, 0.1)   (0.7, 0.2)   (0.7, 0.2)

5) The final result of ELECTRE-III with the optimal parameters: $a_2 \succ a_3 \succ a_1$.

6 Summary and conclusions


In this paper, a robustness analysis method for inferring ELECTRE-III's parameters is presented. The programming (MP1) is linear and convex and can be solved easily by the simplex method, but the programmings (MP2) and (MP3) are nonlinear and concave and difficult to solve, which needs further study to find a more proper way. On the other hand, this method can also be applied to infer the parameters of other MADM methods. For example, for the AHP or TOPSIS methods, which are based on utility theory, $S(a_i, a_k)$ can be defined as

$$S(a_i, a_k) = \phi(a_i) - \phi(a_k), \quad (19)$$

where $\phi(a_i)$ and $\phi(a_k)$ are the evaluations of alternatives $a_i$ and $a_k$, respectively. The optimal parameters can then also be obtained by solving the programming (MP).
References [1] [2] [3] Roy, B., A Missing Link in OR-DA: Robustness Analysis, Foundations of Computing and Decision Sciences, 1998, 23(1): 141-160. Vincke, P., Robust Solutions And Methods in Decision Aid, Journal of Multi-Criteria Decision Analysis 1999, 8(1):181-187. Vincke, P., Robust and Neutral Methods for Aggregating Preferences into an Outranking Relation, European Journal of Operational Research, 1999, 112(2): 405-412.

423

[4] [5] [6] [7]

Vincke, P., About the Application of MCDM to Some Robustness Problems, European Journal of Operational Research, 2006. 176(3): 645-658. Kouvelis P., Yu G., Robust Discrete Optimization and Its Application, New York: Kluwer Academic Publishers, 1997. Luis C. D., Joao N. C., On Computing ELECTREs Credibility Indices under Partial Information , Journal of Multi-Criteria Decision Analysis 1999, 8(1):74-92. Figueira J.(Ed.), Multiple Criteria Decision Analysis: State of the Art Surveys, New York: Kluwer Academic Publishers, 2005: 346


A Co-Marginalistic Contribution Value for Set Games on Matroids


Sun Hao, Xu Genjiu, He Hua
Department of Applied Mathematics, Northwestern Polytechnical University, P.R.China, 710072

Abstract The purpose of this paper is to explore the possibility of characterizing a co-marginalistic contribution value on the space of set games restricted by matroids. The axiomatic characterization includes global efficiency, the equal treatment property, and one kind of co-marginalistic contribution monotonicity axiom, which is a modification of the axiom of monotonicity.
Keywords CMC-value, Axiomatic characterizations, Matroid

1. Introduction

A cooperative TU-game is described as a triple $(N, v, \mathbb{R})$, where $N$ is a finite player set, $\mathbb{R}$ is the set of real numbers, and $v$ is a mapping from $2^N$ to $\mathbb{R}$. In TU-games the gain of a coalition is considered to be a real number. But if the result of cooperation is expressed by a set instead of a real number, the new model, first studied by Hoede [1], is called a set game: a triple $(N, v, u)$ where $N$ is a finite player set, $u$ denotes an abstract set called the universe, and $v$ is a mapping $v : 2^N \to 2^u$ with $v(\emptyset) := \emptyset$, so the worth $v(S)$ of a coalition $S$ is a subset of the universe $u$. If no confusion arises, we call $(N, v)$ a set game. Let $G(u)$ denote the space of set games and $n$ the cardinality of $N$. A solution concept $f(N, v)$ is a mapping $f : G(u) \to (2^u)^n$, i.e., for any $(N, v) \in G(u)$, $f(N, v) = (f_i(N, v))_{i \in N} \in (2^u)^n$, which is called a value if $f(N, v)$ is a singleton for any set game $(N, v) \in G(u)$.

Aarts et al. [2] have discussed some values characterized by additivity, analogous to that for TU-games, and other standard axioms. The individually marginalistic value for monotonic set games ($v(S) \subseteq v(T)$ whenever $S \subseteq T \subseteq N$) was proposed by Aarts et al. [3]. A coalitional power value for set games, which is equivalent to the individually marginalistic value for monotonic set games, was discussed by Sun et al. [4]. Sun et al. [5] have characterized a co-marginalistic contribution value for set games, which can be considered the analog of the solidarity value for $n$-person TU-games in [6]. One type of values, called semi-marginalistic values for set games, was presented by Sun et al. [7]; the root of their characterization idea is from Young [8]. The Shapley value for games on matroids was discussed by Bilbao et al. [9][10]. Two types of values for set games on matroids were characterized by Sun et al. [11][12]. Our purpose in this paper is to explore the possibility of characterizing the co-marginalistic contribution value on the space of set games restricted by matroids.

The organization of this paper is as follows: Section 2 recalls some results on matroids. Section 3 introduces set games on matroids and the co-marginalistic contribution value, characterized by three axioms for set games on matroids.

2. Matroids
Assume that the following two rules of cooperation between players hold. First, if a coalition can form, then any of its sub-coalitions is feasible: in general, the players that take part in the formation of a coalition have common interests, and any subset of these players has at least the same common interests. Secondly, if there are two feasible coalitions whose cardinalities differ by one element, a player of the larger one can join the smaller one, making a feasible coalition. For this reason, we define the feasible coalitions by using the combinatorial geometries called matroids. In the following we discuss the co-marginalistic contribution value for set games on matroids.

Definition 2.1. A matroid is a pair $(N, M)$ consisting of a finite set $N$ and a collection $M$ of subsets of $N$ satisfying the following properties:

(i) $\emptyset \in M$;
(ii) if $S \in M$ and $T \subseteq S$, then $T \in M$;
(iii) if $T, S \in M$ with $|S| = |T| + 1$, then there exists $i \in S \setminus T$ such that $T \cup \{i\} \in M$.

This research has been supported by National Natural Science Funds of China (No: 70571065).

Example 2.2. Let $E$ be the set of edges of a graph $G$ and let $M = M(G)$ be the family consisting of those subsets of $E$ that do not contain a cycle of $G$. The matroid $(E, M)$ is called a graphic matroid or the forest matroid of $G$.

If $M = 2^N$, then it is called the trivial matroid (consisting of all subsets). Instead of the formal approach mentioned above, the following characteristics of a matroid will be of importance throughout the remainder of this paper. Elements of a given matroid are called independent sets. Furthermore, for a given set system $(N, M)$ and $S \subseteq N$, a subset $B \subseteq S$, $B \in M$, is called a basis of $S$ if $B \cup \{i\} \notin M$ for all $i \in S \setminus B$. That is, $B$ is a basis of $S$ when $B$ is a maximal feasible subset of $S$. For every $S \subseteq N$, all bases of $S$ have the same cardinality. We call the bases of $N$ the bases of the matroid $(N, M)$. Property (ii) in Definition 2.1 implies that the subsets of any basis are independent sets and thus $2^B \subseteq M$ for every basis $B \in B(M)$, where $B(M)$ is the family of bases of $N$. We suppose that $\bigcup_{S \in M} \{i : i \in S\} = N$.
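As an illustration of Example 2.2 (a sketch, not part of the original paper), independence in the graphic (forest) matroid can be tested with a union-find cycle check:

```python
def is_independent(edges):
    """True iff `edges` (pairs of vertices) is acyclic, i.e. a forest."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra == rb:          # the edge would close a cycle
            return False
        parent[ra] = rb       # merge the two components
    return True

print(is_independent([(1, 2), (2, 3), (3, 4)]))  # True  (a path)
print(is_independent([(1, 2), (2, 3), (3, 1)]))  # False (a triangle)
```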

3. Axiomatization of the co-marginalistic contribution value for set games on matroids

Definition 3.1. A set game on a matroid $M$ is a quadruple $(N, v, M, u)$, where $N$ is a finite player set, $v$ is a mapping $v : M \to 2^u$ with $v(\emptyset) = \emptyset$, $M$ is a matroid, and $u$, named the universe, is an abstract finite set. For any basis $B \in B(M)$, a set game $(B, v_B)$ is obtained by restricting $v$ to the power set of $B$, i.e., for any $S \subseteq B$, $v_B(S) = v(S)$.

Let $G^M(u)$ be the collection of all set games on the matroid $M$, and let $(N, v, M)$ denote a set game on the space $G^M(u)$. A value $f$ on $G^M(u)$ is a mapping $f : G^M(u) \to (2^u)^n$ which associates with any set game $(N, v, M) \in G^M(u)$ a set-valued vector $f(N, v, M) = (f_i(N, v, M))_{i \in N} \in (2^u)^n$, shortly $f(N, v, M)$. In the sequel, for any $S, T \subseteq N$, $S \subsetneq T$ refers to a proper subset $S$ of $T$.

Definition 3.2. The co-marginalistic contribution value, or shortly the CMC-value, on the set game space $G^M(u)$ is the member of the family of set game-theoretic values of the following form: for all $i \in N$,

$$CMC_i(N, v, M) = \bigcup_{\substack{T \in M \\ i \in T}} \Big( v(T) \setminus \bigcup_{j \in T} v(T \setminus \{j\}) \Big) = \bigcup_{\substack{T \in M \\ i \in T}} CMC_T^v = \bigcup_{B \in B(M)} \bigcup_{\substack{T \subseteq B \\ i \in T}} CMC_T^v = \bigcup_{B \in B(M)} CMC_i(B, v_B), \quad (1)$$

where the co-marginalistic contribution $CMC_T^v = v(T) \setminus \bigcup_{j \in T} v(T \setminus \{j\})$.

Definition 3.3. Let $f$ be a value on the set game space $G^M(u)$. We say that the value $f$ has

(i) the global efficiency, if the solution allocates all the attainable items to the players, that is f i ( N , v, M ) = v ( S ) , for all ( N , v, M ) M ( u ) . (2) (ii) the equal treatment property, if f i ( N , v, M ) = f j ( N , v, M ) for any pair i N , j N , i j , of
426
iN

SM

substitutes

in

the

set

game

( N , v, M ) M ( u )

(i.e., v S {i} = v S { j}

S M with S N \ {i, j} . In words, two substitutes in a set game


(iii) the co-marginalistic contribution monotonicity, if for

( N , v, M ) are allocated the same items. M all ( N , v, M ) , ( N , w, M ) ( u ) and all


(3)

for

all

iN fi ( N , v, M ) f i ( N , w, M ) ,
satisfying
jT v v CMCT CMCTw for all T M , where the co-marginalistic contribution CMCT =

v (T ) \ v (T { j} )
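To make Definition 3.2 concrete, the following is a minimal sketch (toy data and hypothetical function names, not from the paper) that computes the CMC-value of equation (1) for a set game given as an explicit family of feasible coalitions:

```python
from itertools import chain, combinations

def cmc_contribution(T, v):
    """CMC_T^v = v(T) minus the union of v(T \\ {j}) over j in T."""
    covered = set().union(*(v[T - {j}] for j in T)) if T else set()
    return set(v[T]) - covered

def cmc_value(N, M, v):
    """CMC_i = union of CMC_T^v over feasible T containing i (eq. (1))."""
    return {i: set().union(*(cmc_contribution(T, v) for T in M if i in T))
            for i in N}

# Toy example: N = {1, 2, 3}, trivial matroid M = 2^N, universe {a, b}.
N = {1, 2, 3}
M = [frozenset(s) for s in chain.from_iterable(
        combinations(N, r) for r in range(len(N) + 1))]
v = {T: set() for T in M}
v[frozenset({1, 2})] = {"a"}
v[frozenset({1, 2, 3})] = {"a", "b"}
print(cmc_value(N, M, v))  # players 1 and 2 get {'a', 'b'}, player 3 gets {'b'}
```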

Lemma 3.4. If a value $f$ on $G^M(u)$ has the global efficiency and co-marginalistic contribution monotonicity properties, then $f_i(N, v, M) \subseteq CMC_i(N, v, M)$ holds for all $(N, v, M)$ and all $i \in N$.

Proof. Suppose a value $f$ on $G^M(u)$ has the global efficiency and co-marginalistic contribution monotonicity properties. Let $(N, v, M) \in G^M(u)$ be a set game and $i \in N$. In order to show the inclusion $f_i(N, v, M) \subseteq CMC_i(N, v, M)$, assume, on the contrary, that there exists $x \in f_i(N, v, M)$ but $x \notin CMC_i(N, v, M)$. Define a new set game $(N, w, M)$ as follows:

$$w(S) = \begin{cases} v(S) \setminus \{x\}, & \text{for all } S \in M \text{ with } x \in v(S); \\ v(S), & \text{for all } S \in M \text{ with } x \in u \setminus v(S). \end{cases} \quad (4)$$

Notice that $x \notin w(S)$ for all $S \in M$. From this observation, together with the global efficiency of $f$ applied to the set game $(N, w, M)$, we derive the following chain of inclusions:

$$f_i(N, w, M) \subseteq \bigcup_{j \in N} f_j(N, w, M) = \bigcup_{S \in M} w(S) \subseteq u \setminus \{x\}.$$

In particular, $x \notin f_i(N, w, M)$.

Next we claim that $CMC_S^w = CMC_S^v$ for $S \in M$. Consequently, $f_i(N, w, M) = f_i(N, v, M)$ by the co-marginalistic contribution monotonicity of $f$, but this equality contradicts the facts that $x \in f_i(N, v, M)$ and $x \notin f_i(N, w, M)$. This contradiction completes the proof, provided we establish the claim mentioned above.

Let $S \in M$ with $i \in S$. We distinguish two cases. If $x \notin v(S)$, then $w(S) = v(S)$ and it holds that

$$CMC_S^w = w(S) \setminus \bigcup_{j \in S} w(S \setminus \{j\}) = v(S) \setminus \bigcup_{j \in S} v(S \setminus \{j\}) = CMC_S^v. \quad (5)$$

If $x \in v(S)$, then $w(S) = v(S) \setminus \{x\}$ as well as $x \in \bigcup_{j \in S} v(S \setminus \{j\})$ (because of the assumption $x \notin CMC_i(N, v, M)$), and thus it holds that

$$CMC_S^w = w(S) \setminus \bigcup_{j \in S} w(S \setminus \{j\}) = \big(v(S) \setminus \{x\}\big) \setminus \bigcup_{j \in S} w(S \setminus \{j\}) = v(S) \setminus \bigcup_{j \in S} v(S \setminus \{j\}) = CMC_S^v. \quad (6)$$

Theorem 3.5. (cf. Sun et al. [5]) For any basis $B \in B(M)$, there exists a unique value $f$ on the space of set games $G(u)$ ($B$ being the player set) having the global efficiency, the equal treatment property and the co-marginalistic contribution monotonicity. The value $f$ is described as follows: for any $i \in B$,

$$f_i(B, v_B) = \bigcup_{\substack{S \subseteq B \\ i \in S}} \Big( v(S) \setminus \bigcup_{j \in S} v(S \setminus \{j\}) \Big) = CMC_i(B, v_B). \quad (7)$$

Theorem 3.6. There exists a unique value $f$ on the set game space $G^M(u)$ satisfying the global efficiency, the equal treatment property and the co-marginalistic contribution monotonicity. The value $f$ is described as follows: for any set game $(N, v, M) \in G^M(u)$ and for any $i \in N$,

$$f_i(N, v, M) = CMC_i(N, v, M). \quad (8)$$

Proof. It is easily proved that the co-marginalistic contribution value $CMC(N, v, M)$ has the global efficiency, the equal treatment property and the co-marginalistic contribution monotonicity. In order to show the uniqueness part of Theorem 3.6, we only have to prove the following chain: for any $i \in N$,

$$\bigcup_{B \in B(M)} CMC_i(B, v_B) = CMC_i(N, v, M) \supseteq f_i(N, v, M) \supseteq \bigcup_{B \in B(M)} f_i(B, v_B) = \bigcup_{B \in B(M)} CMC_i(B, v_B). \quad (9)$$

By Lemma 3.4 and Theorem 3.5, the remainder of the proof is to show that, for any $i \in N$, the conclusion $f_i(N, v, M) \supseteq \bigcup_{B \in B(M)} f_i(B, v_B)$ holds. It is sufficient to verify that $f_i(N, v, M) \supseteq f_i(B, v_B)$ for any $i \in B$ and any basis $B \in B(M)$ of the matroid $M$. Let us proceed in three steps for the above claim.

The elementary set game $E_T$, for any $T \subseteq N$ with $T \ne \emptyset$, is defined by $E_T(S) = u$ whenever $T \subseteq S$, and $E_T(S) = \emptyset$ otherwise. For any $S \in M$ with $i \in S$,

$$\Big(\bigcup_{T \in M} (v(T) \cap E_T)\Big)(S) \setminus \bigcup_{j \in S} \Big(\bigcup_{T \in M} (v(T) \cap E_T)\Big)(S \setminus \{j\}) \supseteq \Big(\bigcup_{T \subseteq B} (v(T) \cap E_T)\Big)(S) \setminus \bigcup_{j \in S} \Big(\bigcup_{T \subseteq B} (v(T) \cap E_T)\Big)(S \setminus \{j\}),$$

since, if $S \subseteq B$ with $i \in B$, then

$$\Big(\bigcup_{T \in M} (v(T) \cap E_T)\Big)(S) \setminus \bigcup_{j \in S} \Big(\bigcup_{T \in M} (v(T) \cap E_T)\Big)(S \setminus \{j\}) = v(S) \setminus \bigcup_{j \in S} v(S \setminus \{j\}) = \Big(\bigcup_{T \subseteq B} (v(T) \cap E_T)\Big)(S) \setminus \bigcup_{j \in S} \Big(\bigcup_{T \subseteq B} (v(T) \cap E_T)\Big)(S \setminus \{j\}),$$

and otherwise $\big(\bigcup_{T \subseteq B} (v(T) \cap E_T)\big)(S) \setminus \bigcup_{j \in S} \big(\bigcup_{T \subseteq B} (v(T) \cap E_T)\big)(S \setminus \{j\}) = \emptyset$. By the co-marginalistic contribution monotonicity of the value $f$, we have that

$$f_i(N, v, M) = f_i\Big(N, \bigcup_{T \in M} (v(T) \cap E_T), M\Big) \supseteq f_i\Big(N, \bigcup_{T \subseteq B} (v(T) \cap E_T), M\Big). \quad (10)$$

Next we claim that, for any $i \in N$ and any $i \in B$,

$$f_i\Big(N, \bigcup_{T \subseteq B} (v(T) \cap E_T), M\Big) = f_i\Big(N, \bigcup_{T \subseteq B} \Big(v(T) \setminus \bigcup_{j \in T} v(T \setminus \{j\})\Big) \cap E_T, M\Big). \quad (11)$$

The reason is that, for any $S \in M$ with $i \in S$, the conclusion

$$\Big(\bigcup_{T \subseteq B} (v(T) \cap E_T)\Big)(S) \setminus \bigcup_{j \in S} \Big(\bigcup_{T \subseteq B} (v(T) \cap E_T)\Big)(S \setminus \{j\}) = \Big(\bigcup_{T \subseteq B} \Big(v(T) \setminus \bigcup_{j \in T} v(T \setminus \{j\})\Big) \cap E_T\Big)(S) \setminus \bigcup_{j \in S} \Big(\bigcup_{T \subseteq B} \Big(v(T) \setminus \bigcup_{j \in T} v(T \setminus \{j\})\Big) \cap E_T\Big)(S \setminus \{j\})$$

holds. We also have that, for any $T \subseteq B$,

$$f_i\Big(N, \bigcup_{T \subseteq B} \Big(v(T) \setminus \bigcup_{j \in T} v(T \setminus \{j\})\Big) \cap E_T, M\Big) \supseteq f_i\Big(N, \Big(v(T) \setminus \bigcup_{j \in T} v(T \setminus \{j\})\Big) \cap E_T, M\Big).$$

Because the value $f$ has the global efficiency, the equal treatment property and the co-marginalistic contribution monotonicity, we have that, for $i \in T$,

$$f_i\Big(N, \Big(v(T) \setminus \bigcup_{j \in T} v(T \setminus \{j\})\Big) \cap E_T, M\Big) = v(T) \setminus \bigcup_{j \in T} v(T \setminus \{j\}). \quad (12)$$

Furthermore, we have

$$\bigcup_{\substack{T \subseteq B \\ i \in T}} f_i\Big(N, \Big(v(T) \setminus \bigcup_{j \in T} v(T \setminus \{j\})\Big) \cap E_T, M\Big) = \bigcup_{\substack{T \subseteq B \\ i \in T}} \Big(v(T) \setminus \bigcup_{j \in T} v(T \setminus \{j\})\Big) = f_i(B, v_B),$$

the last equality following from Theorem 3.5. Thus we know that

$$f_i\Big(N, \bigcup_{T \subseteq B} \Big(v(T) \setminus \bigcup_{j \in T} v(T \setminus \{j\})\Big) \cap E_T, M\Big) \supseteq f_i(B, v_B). \quad (13)$$

Combining (10), (11) and (13), we see that, for any $i \in B$ and any $B \in B(M)$,

$$f_i(N, v, M) \supseteq f_i(B, v_B), \quad (14)$$

and

$$f_i(N, v, M) \supseteq \bigcup_{B \in B(M)} f_i(B, v_B). \quad (15)$$

By (9) and (15), that completes the proof of Theorem 3.6.
References
[1] Hoede C. Graphs and games. Memorandum 1065, Faculty of Applied Mathematics, University of Twente, Enschede, The Netherlands, 1992
[2] Aarts H, Funaki Y, Hoede C. Set games. Homo Oeconomicus XVII (1/2), 137-154 (ACCEDO Verlagsgesellschaft, München, 2000). In: Power Indices and Coalition Formation (Holler M J, Owen G, eds.), Kluwer Academic Publishers, Dordrecht, The Netherlands, 2000
[3] Aarts H, Funaki Y, Hoede C. A marginalistic value for monotonic set games. International Journal of Game Theory, 1997, 26: 97-111
[4] Sun H, Zhang S, Li X. A coalitional power value for set games. Acta Mathematicae Applicatae Sinica, English Series, 2003, 19: 417-424
[5] Sun H, Zhang S, Li X, Driessen T, Hoede C. A co-marginalistic contribution value for set games. International Game Theory Review, 2001, 3: 351-362
[6] Nowak A, Radzik T. A solidarity value for n-person transferable utility games. International Journal of Game Theory, 1994, 23: 43-48
[7] Sun H, Driessen T. Semi-marginalistic values for set games. International Journal of Game Theory, 2006, 34: 241-258
[8] Young P. Monotonic solutions of cooperative games. International Journal of Game Theory, 1985, 14: 65-72
[9] Bilbao J, Driessen T, Jiménez Losada A, Lebrón E. The Shapley value for games on matroids: the static model. Math. Meth. Oper. Res., 2001, 53: 333-348
[10] Bilbao J, Driessen T, Jiménez Losada A, Lebrón E. The Shapley value for games on matroids: the dynamic model. Math. Meth. Oper. Res., 2002, 56: 287-301
[11] Sun H, Driessen T. An individually marginalistic value for set games on matroids. 2nd Cologne Twente Workshop on Graphs and Combinatorial Optimization, University of Twente, The Netherlands, 116-120 (14th-16th May 2003)
[12] Sun H, He H. A semi-marginalistic value for set games on matroids. Proceedings of the Eighth National Conference of the Operations Research Society of China, 2006, 610-615


Dynamic Features Extraction in Soybean Futures Market of China


Meng Jie, Wang Huiwen
School of Economics and Management, Beihang University, Beijing, P.R.China, 100083

Abstract By applying symbolic data analysis (SDA), this paper investigates the dynamic features of the soybean futures market of the Dalian Commodity Exchange (DCE) from 2002 to 2004. First, interval data are created by classifying the mass of futures contracts by year and by residual time to deadline; then the DIV clustering method is applied to these interval data, which produces a further simplified cubic time series table of interval symbolic data and greatly reduces the dimension of the sample space. Based on that, factor analysis of interval data is adopted to extract the dynamic principal characteristics of the soybean futures, which reduces the dimension of the variable space. The results of the case study, which coincide well with the realities, verify the value of SDA in analyzing massive, dynamic and complex data.
Keywords Mass data, Soybean futures, Cubic time series table, Interval data, Factor analysis

1. Introduction
In the analysis of large-scale data sets, the high dimension of both the sample and variable spaces leads to complex computation, and it is also difficult to grasp the integral structure of the data set. To solve this problem, E. Diday (1988) proposed a brand-new way of data analysis, symbolic data analysis (SDA), a kind of multivariate statistical analysis technique oriented to large-scale database retrieval and capable of multilevel analysis. Extending traditional data, a cell of a data table in SDA can be not only quantitative or qualitative but also a concept, a multivalued variable, an interval or a distribution. Because of these advantages, SDA is especially effective for knowledge discovery in data sets of large scale and complex types. In traditional multivariate statistical analysis, principal component analysis provides an efficient way of data reduction. In the field of SDA, Cazes (1997) presented principal component analysis (PCA) on interval data, and Lauro, Verde and Palumbo proposed a factor discriminant analysis method on symbolic data. Based on global PCA on cubic time series data tables, Wang and Hu (2003) introduced global PCA on cubic time series tables of interval data, which was successfully applied to feature extraction in the stock market of China. There are three advantages: 1) SDA realizes dimension reduction in the sample space; 2) PCA performs dimension reduction in the variable space; 3) analysis of the cubic time series data set explores the dynamic features of the complex system. This paper applies global PCA on the cubic time series table of interval data to dynamic feature extraction in the soybean futures market of China. In Section 2, the modeling method of global PCA on a cubic time series table of interval data is introduced. Section 3 then adopts the method to analyze the principal factors of the soybean futures market of China. Finally, we conclude the paper in Section 4.

2. Global PCA on Cubic Time Series Table of Interval Data


The main idea of global PCA on interval data is: first, transform the cubic time series table of interval data into a numerical matrix; then apply classical global PCA on the transformed numerical data table; finally, construct the interval principal components from the numerical principal components. The procedure of the algorithm is summarized as follows:

(1) Cubic time series table of interval data and its transformation

Denote by $Z$ a cubic time series table of interval data composed of $T$ periods of plane interval data tables $Z_t$ ($t = 1, \ldots, T$), that is,

$$Z = \begin{pmatrix} Z_1 \\ \vdots \\ Z_T \end{pmatrix}, \quad Z_t = \begin{pmatrix} x_1^t \\ \vdots \\ x_n^t \end{pmatrix} = \begin{pmatrix} [\underline{x}_{11}^t, \overline{x}_{11}^t] & \cdots & [\underline{x}_{1p}^t, \overline{x}_{1p}^t] \\ \vdots & & \vdots \\ [\underline{x}_{n1}^t, \overline{x}_{n1}^t] & \cdots & [\underline{x}_{np}^t, \overline{x}_{np}^t] \end{pmatrix}, \quad t = 1, \ldots, T,$$

where each observation $x_i^t$ is an interval object.

$x_i^t$, a hyperrectangle in the $p$-dimensional space, can be described by a matrix with $2^p$ rows and $p$ columns in which each row contains the coordinates of one vertex of the hyperrectangle in $\mathbb{R}^p$:

$$V_i^t = \begin{pmatrix} \underline{x}_{i1}^t & \underline{x}_{i2}^t & \cdots & \underline{x}_{ip}^t \\ \underline{x}_{i1}^t & \underline{x}_{i2}^t & \cdots & \overline{x}_{ip}^t \\ \vdots & \vdots & & \vdots \\ \overline{x}_{i1}^t & \overline{x}_{i2}^t & \cdots & \overline{x}_{ip}^t \end{pmatrix}_{2^p \times p}.$$

Compile all the transformed numerical matrices $V_i^t$ ($i = 1, \ldots, n$; $t = 1, \ldots, T$) as

$$V = \begin{pmatrix} V^1 \\ \vdots \\ V^T \end{pmatrix}, \quad V^t = \begin{pmatrix} V_1^t \\ \vdots \\ V_n^t \end{pmatrix}_{(n \cdot 2^p) \times p}, \quad t = 1, \ldots, T,$$

where $V^t$ is the transformed numerical matrix at time $t$.

This research has been supported by National Natural Science Funds of China (No: 70531010, 70521001, 70371007), and Beijing Natural Science Funds (No: 9052006).
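The vertex unfolding of step (1) can be sketched as follows (a minimal illustration, not the authors' code):

```python
import numpy as np
from itertools import product

def vertex_matrix(lower, upper):
    """All 2^p vertices of the hyperrectangle [lower_j, upper_j], j = 1..p."""
    bounds = np.column_stack([lower, upper])  # p x 2 matrix of interval ends
    return np.array([[bounds[j, choice[j]] for j in range(len(lower))]
                     for choice in product((0, 1), repeat=len(lower))])

# One interval object with p = 2 variables, e.g. [1, 2] x [10, 20]:
print(vertex_matrix([1, 10], [2, 20]))
# [[ 1 10]
#  [ 1 20]
#  [ 2 10]
#  [ 2 20]]
```

Stacking these blocks for all $n$ objects and $T$ periods yields the $(n \cdot 2^p \cdot T) \times p$ matrix $V$ above.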

(2) Global PCA on the transformed numerical matrix $V$

Apply classical global PCA on the transformed numerical matrix $V$ and obtain the first $m$ numerical principal components $F_1, \ldots, F_m$, denoted as

$$F_j = \begin{pmatrix} F_j^1 \\ \vdots \\ F_j^T \end{pmatrix}, \quad j = 1, 2, \ldots, m, \quad \text{where} \quad F_j^t = \begin{pmatrix} f_{1j}^t \\ \vdots \\ f_{nj}^t \end{pmatrix}, \quad f_{ij}^t = \begin{pmatrix} f_{ij}^{(t,1)} \\ \vdots \\ f_{ij}^{(t,2^p)} \end{pmatrix} \quad (i = 1, \ldots, n;\ j = 1, \ldots, m;\ t = 1, \ldots, T).$$

(3) Construct the interval principal components of $Z$

Let $\underline{f}_{ij}^t = \min\{f_{ij}^{(t,1)}, \ldots, f_{ij}^{(t,2^p)}\}$ and $\overline{f}_{ij}^t = \max\{f_{ij}^{(t,1)}, \ldots, f_{ij}^{(t,2^p)}\}$; then $\tilde f_{ij}^t = [\underline{f}_{ij}^t, \overline{f}_{ij}^t]$ is the interval value of the $i$-th interval object on the $j$-th principal component. The interval principal components of $Z$, denoted $\tilde F_1, \tilde F_2, \ldots, \tilde F_m$, are constructed by

$$\tilde F_j = \begin{pmatrix} \tilde F_j^1 \\ \vdots \\ \tilde F_j^T \end{pmatrix}, \quad \tilde F_j^t = \begin{pmatrix} [\underline{f}_{1j}^t, \overline{f}_{1j}^t] \\ \vdots \\ [\underline{f}_{nj}^t, \overline{f}_{nj}^t] \end{pmatrix}, \quad j = 1, \ldots, m;\ t = 1, \ldots, T.$$
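Steps (2) and (3) can likewise be sketched for a single period with plain numpy (assumed shapes, not the authors' code): run PCA on the stacked vertex matrix via the SVD, then fold the vertex scores of each interval object back into interval principal components.

```python
import numpy as np

def interval_pca(V, n_objects, m):
    """V: (n_objects * 2**p) x p stacked vertex matrix for one period.
    Returns the lower and upper bounds of the first m interval PCs."""
    Vc = V - V.mean(axis=0)                     # center the vertex cloud
    _, _, Wt = np.linalg.svd(Vc, full_matrices=False)
    scores = Vc @ Wt[:m].T                      # numeric PC scores per vertex
    per_obj = scores.reshape(n_objects, -1, m)  # group the 2**p vertices
    return per_obj.min(axis=1), per_obj.max(axis=1)
```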

Finally, according to the loading plot and the rectangle projections of the interval objects on the factorial plane, we can uncover the dynamic features of the original complex system in a highly integrated and simplified form.

3. Factorial Analysis on Soybean Futures Market


In this section, global PCA on the cubic time series table of interval data is applied to the soybean futures of DCE to analyze the dynamic marketing features of different contracts at different times. We select eight exchange indexes (open price, maximum price, minimum price, closing price, balance price, trading volume, turnover and open interest) from the contracts' daily records from 2002 to 2004.

3.1 Classification of the futures contracts

In the futures market, futures contracts have particular characteristics: there is more than one contract of every kind at the same time, and every contract has a valid trading period. Therefore, from the static point of view, contracts with different deadlines need to be considered at the same time; from the dynamic point of view, a single contract cannot form a continuous time series. These are the main problems in the research of futures markets. The solution proposed in this paper is to tag every contract with its residual time to deadline at every point of time, yielding 19 classes for each year from 2002 to 2004, and then to construct the interval data table by selecting the minimum and maximum values of every index in each class. This is a kind of dynamic classification: the samples in each class change as time goes on, and every contract passes in and out of every class during its whole period of validity. As a result, thousands of daily contract records are transformed into 19 interval objects per year, which greatly reduces the complexity of the research; the 19 classes cover the whole trading period without missing information; each class of contracts forms a continuous time series since new contracts keep passing in and out; and class features can be easily explored and compared in a much more integrated and efficient way. To further summarize and simplify the data set, the DIV method is applied to cluster the 19 interval objects. The variables trading volume and open interest are selected as the clustering criterion, and the results are listed in Tab.1.
Tab.1 DIV clustering results

Class                                   Class I     Class II    Class III   Class IV      Class V
Contracts (residual time to deadline)   1~3 months  4~6 months  7~9 months  10~12 months  13~19 months
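A rough sketch of the interval-table construction referenced above is shown below, assuming a pandas DataFrame of daily contract records; the column names 'year' and 'months_to_deadline' and the helper name are hypothetical.

```python
# Hypothetical sketch: build the yearly interval data table by tagging each
# daily record with its residual time to deadline and taking min/max per class.
import pandas as pd

def build_interval_table(records: pd.DataFrame, index_cols):
    # 'months_to_deadline' plays the role of the 19 residual-time classes
    grouped = records.groupby(['year', 'months_to_deadline'])[index_cols]
    # every (year, class) cell becomes an interval [min, max] for each index
    return grouped.agg(['min', 'max'])
```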

To clearly illustrate the different features of the classes, the mid-values of the 19 interval objects in trading volume and open interest are plotted together in Fig.1, which shows that the DIV clustering discriminates the classes well.

Fig.1 Change tendencies of the contracts in trading volume and open interest


3.2 Dynamic factor analysis of the five classes of contracts

Global PCA is applied to the cubic time series interval data of the five classes of contracts from 2002 to 2004. The cumulative contribution of the first two principal components is 77%, and the loading plot shown in Fig.2 exposes the relationships between the first two principal components and the original variables.

Fig.2 Loading plot (***price represents the five price-related variables, which are overlapped together)

Two points are clear in Fig.2: 1) the price variables (open price, maximum price, minimum price, closing price, balance price), which are highly correlated with each other, reflect the most notable feature of the soybean futures market of DCE; indeed, the balance prices of the five classes of soybean futures contracts rose greatly from 2002 to 2004 (see Fig.3); 2) the three trading variables (trading volume, turnover, open interest) are also highly correlated and represent the second feature of the market.

Fig.3 Price trendlines of the five classes of contracts

Furthermore, the interval principal components of the five classes of contracts in 2002~2004 are plotted in Fig.4, which illustrates the features and change tendencies of the soybean futures market in the price (component 1) and trading (component 2) aspects.

Fig.4 Interval principal components of the five classes of contracts

From the direction of the first component in Fig.4, the following price features of the futures market can be found: 1) the contracts are clearly separated by year, which implies a yearly rise of prices in the futures market; 2) the contract intervals in 2003 span a wider range, which implies higher price fluctuations in 2003; 3) the disparities among the average prices of the different classes widened in 2004, with classes I, II and III higher than classes IV and V. From the direction of the second component in Fig.4, we can see that: 1) contract trading in 2003 was more active than in 2002 and 2004; 2) in 2002~2003 the trading activity of classes II and III was higher than that of classes IV and V and the trading disparities between classes tended to expand, while in 2004 the trading of classes III and IV increased somewhat, which implies that the trading time of speculators in the soybean futures market generally moved forward.

3.3 Radar-graphs of the five classes of contracts in 2002~2004

To compare with the above factor-analysis results, radar-graphs of the five classes of contracts in 2002~2004 are listed below. In Tab.2, each row compares the five classes of contracts in balance price, trading volume and open interest for one year, while each column reflects the dynamic change tendencies of the five classes in one variable.
Tab.2 Comparisons of the five classes of contracts in 2002~2004


The picture is consistent with the PCA results above: within the same year there is no great difference among the five classes in price, but a large disparity in trading volume and open interest, where classes II and III are higher than classes IV and V; moreover, contract prices kept increasing from 2002 to 2004, while trading activity peaked in 2003.

4. Conclusion
This paper introduces and applies the SDA technique to overcome the difficulties that traditional modeling methods face on large-scale data sets. Following the idea of data packaging, it reduces both the sample and variable spaces without destroying the internal logical relationships of the data set, which efficiently resolves the contradiction between the easy collection and storage of mass data and the difficulty of managing it. In this paper, SDA is applied to simplification and dynamic factor extraction in the soybean futures market of DCE. By classifying and clustering the mass of futures contracts by year and by residual time to deadline, a cubic time series table of interval symbolic data is constructed, which greatly reduces the scale of the data set. On this basis, global PCA on interval data is adopted to extract the dynamic principal characteristics of soybean futures. The results of the case study agree with the actual market status, which verifies the validity and rationality of the modeling method for integrating and extracting information from a multidimensional and dynamic complex system.


Extension of the VIKOR for Decision-Making Problems under Fuzzy Environment


Wang Yongchun
Doctoral Student Squadron, Dalian Naval Academy, Liaoning, China, 116018

Abstract The multiple attribute decision making (MADM) problem with fuzzy data is investigated in this paper, motivated by the desire to use a fuzzy decision-making tool for evaluating the configurations of C3I systems in service. In some cases, determining the exact values of the attributes is difficult and, as a result, the values are treated as fuzzy data, which can be described by linguistic values or triangular fuzzy numbers. The aim of this paper is therefore to extend the compromise method VIKOR to decision-making problems under a fuzzy environment. An algorithm is presented to determine the most preferable choice among all possible choices when the data are fuzzy numbers. The procedure of the proposed algorithm is illustrated with a real example. Key words MADM, VIKOR, Triangular fuzzy number, Compromise, Majority rule

1. Introduction
This research was initiated by the desire to use a fuzzy decision-making tool for evaluating the configurations of C3I systems in service for their technical maintenance. Attributes such as information accuracy, information consistency, system availability and picture completeness are considered. From a methodological point of view, the equipment configuration evaluation problem is a fuzzy multiple attribute decision-making problem in which fuzzy assessments are involved. MADM problems arise in many real-world situations (Chen and Hwang, 1992; Chu and Lin, 2003; Hwang and Yoon, 1981; Ma et al., 1999; Malakooti and Zhou, 1994)[4,8,13,18,19] and are important in a variety of fields including engineering, economics, etc. For a review of the various MADM methods the reader is referred to, for example, Hwang and Yoon (1981), Chen and Hwang (1992), Stewart (1992) and Yoon and Hwang (1995). Liang and Ding classified the attributes of MADM problems into objective and subjective categories (Ding and Liang, 2005)[12]. Chu and Lin extended the TOPSIS method to the fuzzy environment for robot selection (Chu and Lin, 2003)[8]. Wang and Parkan solved an MADM problem based on fuzzy preference information on alternatives (Wang and Parkan, 2005)[22]. In the equipment configuration evaluation problem, where ranking and selection are required, MADM situations are characterized by the following interrelated problems: the problems involve fuzziness, and the decision maker faces the difficult task of choosing the best alternative among many. The imprecision comes from a variety of sources, such as (i) unquantifiable information, (ii) incomplete information, and (iii) unobtainable information (Chen and Hwang)[6]. In many cases the decision maker has inexact information about the alternatives with respect to an attribute. The classical MADM methods, based on deterministic or random processes, cannot effectively handle problems with such imprecise information and tend to be less effective in conveying imprecision and fuzziness. This led to the development of fuzzy set theory (FST) by Zadeh (Buckley, 1985; Chen, 2000; Chen et al., in press; Dimova et al., 2006; Zadeh, 1965; Zadeh, 1975(a))[1-3,11,23,24], who proposed that the key elements in human thinking are not numbers but labels of fuzzy sets. FST is a powerful tool for handling imprecise data and fuzzy expressions that are more natural for humans than rigid mathematical rules and equations; much knowledge in the real world is obviously fuzzy rather than precise. In C3I system ranking/selection problems, the decision data of MADM problems may have structures such as bounded data, ordinal data, interval data, and fuzzy data. Without loss of generality, we assume that the decision data are expressed by fuzzy numbers. From a systematic and practical point of view, the equipment configuration evaluation problem is a fuzzy multiple attribute decision-making problem in which the relativities and conflicts among multiple attributes are

This research has been supported by the National Natural Science Foundation of China (Differential Game Theoretical Models of Fire Allocation for Warship Formation and Decision System, No. 70571086)


considered (Chen et al., 2005; Chiu et al., 2004; Chu, 2002)[5,7,9]. The VIKOR method was developed for the multiple attribute optimization of complex systems. It determines the compromise ranking list, the compromise solution, and the weight stability intervals for preference stability of the compromise solution obtained with the initial (given) weights. The method focuses on ranking and selecting from a set of alternatives in the presence of conflicting attributes, and it introduces a multiple attribute ranking index based on a particular measure of closeness to the ideal solution (Lin et al., 2005; Opricovic and Tzeng, 2004)[17,20]. In some cases, determining the exact values of the attributes is difficult, and as a result the values are treated as fuzzy data, which can be described by linguistic values or triangular fuzzy numbers. In this paper, therefore, VIKOR is extended into a methodology for solving MADM problems under a fuzzy environment. The rest of the paper is organized as follows: the next section presents the MADM problem under a fuzzy environment; Section 3 presents an algorithm extending VIKOR to deal with the MADM problem with triangular fuzzy numbers; in Section 4 the proposed algorithmic method is illustrated with an example; the final section gives short conclusions.

2. Fuzzy MADM problems


In C3I system ranking/selection problems, the following assumptions and notation are used to represent the MADM problem:

Alternatives are known: let $X = \{x_1, x_2, \ldots, x_n\}$ denote a set of $n\ (\geq 2)$ possible alternatives.

Attributes are known: let $F = \{f_1, f_2, \ldots, f_m\}$ denote a set of $m\ (\geq 2)$ attributes.

Weights of attributes are known: let $W = (w_1, w_2, \ldots, w_m)^T$ be the vector of weights, where $w_i \geq 0$ ($i = 1, 2, \ldots, m$), $\sum_{i=1}^{m} w_i = 1$, and $w_i$ denotes the weight of attribute $f_i$.

The decision matrix is known: let $A = (a_{ij})_{m \times n}$ denote the decision matrix, where $a_{ij}\ (\geq 0)$ is a fuzzy set describing the consequence of measuring the performance of candidate C3I system $x_j$ with respect to attribute $f_i$, $i = 1, 2, \ldots, m$, $j = 1, 2, \ldots, n$. Without loss of generality, assume that $a_{ij}$ is expressed by a triangular fuzzy number. The value of the attribute function for $a_{ij}$ can be expressed by the minimum $a_{ij}^l$, mean $a_{ij}^m$ and maximum $a_{ij}^u$ of the sampled attribute values, so that $a_{ij}$ takes the triangular fuzzy number format

$$a_{ij} = (a_{ij}^l, a_{ij}^m, a_{ij}^u), \quad i = 1, 2, \ldots, m,\ j = 1, 2, \ldots, n,$$

where $a_{ij}^l \leq a_{ij}^m \leq a_{ij}^u$.

Given any two triangular fuzzy numbers $\tilde{M} = (l, m, u)$ and $\tilde{N} = (t, n, r)$, shown in Fig.1, the basic definitions and notation of fuzzy sets follow (Li, 2003(a); Zadeh, 1975(b))[14,24]. According to the extension principle of fuzzy sets (Li, 2003(b); Opricovic and Tzeng, 2003)[14,21], ranking operations on $\tilde{M}$ and $\tilde{N}$ can be derived as follows.

$R^1(\tilde{M}, \tilde{N})$ is the barycenter ranking method of the triangular fuzzy numbers $\tilde{M}$ and $\tilde{N}$:

$$R^1(\tilde{M}, \tilde{N}) = \begin{cases} \tilde{M} > \tilde{N}, & \text{if } c(\tilde{M}) > c(\tilde{N}) \\ \tilde{M} > \tilde{N}, & \text{if } c(\tilde{M}) = c(\tilde{N}) \text{ and } \sigma(\tilde{M}) < \sigma(\tilde{N}) \\ \tilde{M} = \tilde{N}, & \text{if } c(\tilde{M}) = c(\tilde{N}) \text{ and } \sigma(\tilde{M}) = \sigma(\tilde{N}) \\ \tilde{M} < \tilde{N}, & \text{if } c(\tilde{M}) = c(\tilde{N}) \text{ and } \sigma(\tilde{M}) > \sigma(\tilde{N}) \\ \tilde{M} < \tilde{N}, & \text{if } c(\tilde{M}) < c(\tilde{N}) \end{cases} \tag{1}$$

where $c(\tilde{M})$ denotes the barycenter abscissa of the triangular fuzzy number $\tilde{M}$ (shown in Fig.2) and $c(\tilde{N})$ that of $\tilde{N}$:

$$c(\tilde{M}) = (l + m + u)/3 \tag{2}$$
$$c(\tilde{N}) = (t + n + r)/3 \tag{3}$$

$\sigma(\tilde{M})$ and $\sigma(\tilde{N})$ denote the degrees of dispersion of $\tilde{M}$ and $\tilde{N}$ about their barycenters:

$$\sigma(\tilde{M}) = [(l^2 + m^2 + u^2 - lm - lu - mu)/18]^{1/2} \tag{4}$$
$$\sigma(\tilde{N}) = [(t^2 + n^2 + r^2 - tn - tr - nr)/18]^{1/2} \tag{5}$$

$R^2(\tilde{M}, \tilde{N})$ is the mean area ranking method of $\tilde{M}$ and $\tilde{N}$:

$$R^2(\tilde{M}, \tilde{N}) = \begin{cases} \tilde{M} > \tilde{N}, & \text{if } s(\tilde{M}) > s(\tilde{N}) \\ \tilde{M} = \tilde{N}, & \text{if } s(\tilde{M}) = s(\tilde{N}) \\ \tilde{M} < \tilde{N}, & \text{if } s(\tilde{M}) < s(\tilde{N}) \end{cases} \tag{6}$$

where $s(\tilde{M})$ denotes the mean area index of $\tilde{M}$ (shown in Fig.2) and $s(\tilde{N})$ that of $\tilde{N}$:

$$s(\tilde{M}) = (l + 2m + u)/4, \qquad s(\tilde{N}) = (t + 2n + r)/4 \tag{7}$$

And $d(\tilde{M}, \tilde{N})$ is the distance between $\tilde{M}$ and $\tilde{N}$:

$$d(\tilde{M}, \tilde{N}) = \left[ \tfrac{1}{3}\left( |l - t|^2 + |m - n|^2 + |u - r|^2 \right) \right]^{1/2} \tag{8}$$
Fig.1 The triangular fuzzy numbers $\tilde{M}$ and $\tilde{N}$

Fig.2 The barycenter index $c(\tilde{M})$ and mean area index $s(\tilde{M})$ of the triangular fuzzy number $\tilde{M}$
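As a small illustration of the barycenter ranking $R^1$ of Eqs.(1)-(5), consider the following Python sketch; it implements the reconstructed formulas above and is not the author's code.

```python
# Comparing two triangular fuzzy numbers (l, m, u) by barycenter, then by
# dispersion; smaller dispersion ranks higher when barycenters tie.
def barycenter(tfn):
    l, m, u = tfn
    return (l + m + u) / 3.0                    # Eq.(2)

def spread(tfn):
    l, m, u = tfn
    return ((l**2 + m**2 + u**2 - l*m - l*u - m*u) / 18.0) ** 0.5   # Eq.(4)

def r1_compare(M, N):
    """Return 1 if M > N, -1 if M < N, 0 if equal under R1."""
    cM, cN = barycenter(M), barycenter(N)
    if cM != cN:
        return 1 if cM > cN else -1
    sM, sN = spread(M), spread(N)
    return 0 if sM == sN else (1 if sM < sN else -1)

# e.g. r1_compare((1, 2, 3), (1, 2, 4)) == -1, since the second number
# has the larger barycenter 7/3 > 2.
```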

3. The fuzzy VIKOR method


Considering that in some cases the exact values of the attributes are fuzzy data, we now extend VIKOR to the fuzzy environment. In sum, an algorithm to determine the most preferable choice among all possible choices, when the data are (without loss of generality) triangular fuzzy numbers, is given in the following.

Step 1: Establish the system evaluation attributes that relate system capabilities to goals, and develop alternative systems for attaining the goals (Li, 2005; Li and Yang, 2004)[15,16]. An MADM problem with triangular fuzzy numbers can be concisely expressed in matrix format as

$$U = \begin{pmatrix} \tilde{\xi}_{11} & \tilde{\xi}_{12} & \cdots & \tilde{\xi}_{1n} \\ \tilde{\xi}_{21} & \tilde{\xi}_{22} & \cdots & \tilde{\xi}_{2n} \\ \vdots & \vdots & & \vdots \\ \tilde{\xi}_{m1} & \tilde{\xi}_{m2} & \cdots & \tilde{\xi}_{mn} \end{pmatrix} \tag{9}$$

with rows corresponding to the attributes $f_1, \ldots, f_m$ and columns to the alternatives $x_1, \ldots, x_n$, together with $W = (w_1, w_2, \ldots, w_m)^T$, where $\tilde{\xi}_{ij} = (\xi_{ij}^l, \xi_{ij}^m, \xi_{ij}^u)$ is a triangular fuzzy number and $w_i$ is the weight of attribute $f_i$.

Step 2: Construct the fuzzy normalized decision matrix. Linear normalization can be used to calculate $\tilde{f}_{ij}$:


$$\tilde{f}_{ij} = (f_{ij}^l, f_{ij}^m, f_{ij}^u) = \left( \frac{\xi_{ij}^l}{\xi_i^*}, \frac{\xi_{ij}^m}{\xi_i^*}, \frac{\xi_{ij}^u}{\xi_i^*} \right), \quad i = 1, \ldots, m,\ j = 1, \ldots, n, \text{ for benefit attributes} \tag{10}$$

$$\tilde{f}_{ij} = (f_{ij}^l, f_{ij}^m, f_{ij}^u) = \left( 1 - \frac{\xi_{ij}^u}{\xi_i^*},\ 1 - \frac{\xi_{ij}^m}{\xi_i^*},\ 1 - \frac{\xi_{ij}^l}{\xi_i^*} \right), \quad i = 1, \ldots, m,\ j = 1, \ldots, n, \text{ for cost attributes} \tag{11}$$

where $\xi_i^* = \max_j \xi_{ij}^u$ and $\xi_{ij} \geq 0$, $i = 1, \ldots, m$, $j = 1, \ldots, n$.

Step 3: Determine the ideal and negative-ideal solutions according to Eqs.(1)-(7):

$$\tilde{F}^{*k} = \{\tilde{f}_1^{*k}, \tilde{f}_2^{*k}, \ldots, \tilde{f}_m^{*k}\} = \{\max_j (R^k(\tilde{f}_{ij})) \mid i = 1, \ldots, m\}, \quad k = 1 \text{ or } 2 \tag{12}$$

$$\tilde{F}^{-k} = \{\tilde{f}_1^{-k}, \tilde{f}_2^{-k}, \ldots, \tilde{f}_m^{-k}\} = \{\min_j (R^k(\tilde{f}_{ij})) \mid i = 1, \ldots, m\}, \quad k = 1 \text{ or } 2 \tag{13}$$

Step 4: Calculate the group utility indexes $S_j$, the individual regret indexes $R_j$ and the compromise indexes $Q_j$.

In the VIKOR method, the multiple attribute measure for compromise ranking is developed from the $L_p$-metric used as an aggregating function in compromise programming. With triangular fuzzy numbers, the $L_p$-metric can be calculated from Eq.(8) as

$$L_{p,j} = \left\{ \sum_{i=1}^{m} \left[ w_i\, d(\tilde{f}_i^*, \tilde{f}_{ij}) \big/ d(\tilde{f}_i^*, \tilde{f}_i^-) \right]^p \right\}^{1/p}, \quad 1 \le p \le \infty,\ j = 1, 2, \ldots, n \tag{14}$$

Then

$$S_j = L_{1,j} = \sum_{i=1}^{m} w_i\, d(\tilde{f}_i^*, \tilde{f}_{ij}) \big/ d(\tilde{f}_i^*, \tilde{f}_i^-), \quad j = 1, 2, \ldots, n \tag{15}$$

$$R_j = L_{\infty,j} = \max_i \left[ w_i\, d(\tilde{f}_i^*, \tilde{f}_{ij}) \big/ d(\tilde{f}_i^*, \tilde{f}_i^-) \right], \quad j = 1, 2, \ldots, n \tag{16}$$

$$Q_j = v\,\frac{S_j - S^*}{S^- - S^*} + (1 - v)\,\frac{R_j - R^*}{R^- - R^*} \tag{17}$$

where $S^* = \min_j S_j$, $S^- = \max_j S_j$, $R^* = \min_j R_j$, $R^- = \max_j R_j$. $S^*$ corresponds to a maximum group utility (majority rule), and $R^*$ to a minimum individual regret of the opponent. The compromise solution is stable within a decision-making process when it is reached by voting by majority rule ($v > 0.5$), by consensus ($v \approx 0.5$), or with veto ($v < 0.5$); here $v$ ($0 \le v \le 1$) is the weight of the strategy of the majority of attributes (the maximum group utility).

Step 5: Rank all alternatives by the values of $Q_j$, $S_j$ and $R_j$; the results are three ranking lists. Propose as a compromise solution the alternative $a'$ that is ranked best by the measure $Q$ (minimum) if the following two conditions are satisfied:

C1. Acceptable advantage:

$$Q(a'') - Q(a') \ge DQ \tag{18}$$

where $a''$ is the alternative in the second position of the ranking list by $Q$, $DQ = 1/(n-1)$, and $n$ is the number of alternatives.

C2. Acceptable stability in decision making: alternative $a'$ must also be the best ranked by $S$ and/or $R$.

If one of the conditions is not satisfied, a set of compromise solutions is proposed, consisting of: alternatives $a'$ and $a''$ if only condition C2 is not satisfied, or alternatives $a', a'', \ldots, a^{(N)}$ if condition C1 is not satisfied, where $a^{(N)}$ is determined by the relation $Q(a^{(N)}) - Q(a') < DQ$ for the maximum $N$ (the positions of these alternatives are close to one another).
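A compact sketch of Steps 2-5 for an already-normalized fuzzy decision matrix is given below (Python/numpy), assuming the vertex distance of Eq.(8) and, for simplicity, the barycenter abscissa of Eq.(2) to select the ideal and negative-ideal values; the array layout and function name are our own illustrative assumptions.

```python
# Fuzzy VIKOR indexes for triangular fuzzy data; F holds normalized numbers.
import numpy as np

def fuzzy_vikor(F, w, v=0.5):
    """F: array (m, n, 3) of normalized triangular numbers f_ij = (l, m, u);
    w: attribute weights summing to one. Returns S, R, Q for n alternatives."""
    def dist(a, b):                      # Eq.(8): vertex distance of two TFNs
        return np.sqrt(((a - b) ** 2).sum(axis=-1) / 3.0)

    def center(x):                       # Eq.(2): barycenter abscissa
        return x.sum(axis=-1) / 3.0

    rows = np.arange(F.shape[0])
    best = F[rows, center(F).argmax(axis=1)]     # f_i* per attribute, (m, 3)
    worst = F[rows, center(F).argmin(axis=1)]    # f_i- per attribute, (m, 3)
    # weighted, normalized distances from the ideal solution, shape (m, n)
    ratio = dist(best[:, None, :], F) / dist(best, worst)[:, None]
    S = (w[:, None] * ratio).sum(axis=0)                              # Eq.(15)
    R = (w[:, None] * ratio).max(axis=0)                              # Eq.(16)
    Q = (v * (S - S.min()) / (S.max() - S.min())
         + (1 - v) * (R - R.min()) / (R.max() - R.min()))             # Eq.(17)
    return S, R, Q
```

With the normalized matrix of Tab.2 below, weights $W = (0.125, 0.315, 0.5, 0.06)$ and $v = 0.5$, indexes of the kind reported in Tab.3 can then be computed.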

4. Numerical example
In this section we work out a numerical example to illustrate the VIKOR method for decision-making problems with triangular fuzzy numbers. A case study comparing five C3I systems ($x_1, x_2, x_3, x_4, x_5$) in service was conducted to examine the applicability of the method. Four technical and tactical capabilities, namely information accuracy $F_1$, information consistency $F_2$, system availability $F_3$ and picture completeness $F_4$, are identified as the evaluation attributes for these systems.
Step 1: The triangular fuzzy number decision matrix is shown in Tab.1. Suppose that the vector of attribute weights is $W = (0.125, 0.315, 0.5, 0.06)^T$.
Tab.1 The triangular fuzzy number decision matrix of 5 alternatives

Alternative  F1                  F2             F3                F4
x1           (50.1,73.5,96.1)    (269,284,312)  (2636,3345,3825)  (9.7,37.2,69.6)
x2           (55.8,87.6,120.5)   (144,152,165)  (1805,1814,1836)  (17.6,27.8,37.1)
x3           (24.4,34.3,49.5)    (641,678,696)  (1141,1856,2410)  (10.7,16.5,22.8)
x4           (14.4,24.1,29.2)    (119,134,156)  (1494,1564,1749)  (28.2,35.8,47.1)
x5           (73.7,104.2,141.7)  (453,469,481)  (2719,2862,2955)  (37.5,46.1,55.9)

Step 2: The triangular fuzzy number normalized decision matrix is calculated according to Eqs.(10) and (11) and shown in Tab.2. Step 3: The ideal and negative-ideal solutions are calculated according to Eqs.(12) and (13), here with $k = 1$, and shown at the bottom of Tab.2.
Tab.2 The triangular fuzzy number normalized decision matrix

Alternative  F1                F2                F3                F4
x1           (0.35,0.52,0.69)  (0.39,0.41,0.45)  (0.69,0.87,1.00)  (0.14,0.53,1.00)
x2           (0.39,0.62,0.85)  (0.21,0.22,0.24)  (0.47,0.47,0.48)  (0.25,0.40,0.53)
x3           (0.17,0.24,0.35)  (0.92,0.97,1.00)  (0.30,0.49,0.63)  (0.15,0.24,0.33)
x4           (0.10,0.17,0.21)  (0.17,0.19,0.22)  (0.39,0.41,0.46)  (0.41,0.51,0.68)
x5           (0.52,0.74,1.00)  (0.65,0.67,0.69)  (0.71,0.75,0.77)  (0.54,0.66,0.80)
f~*          (0.52,0.74,1.00)  (0.92,0.97,1.00)  (0.71,0.87,1.00)  (0.54,0.66,1.00)
f~-          (0.10,0.17,0.21)  (0.17,0.19,0.22)  (0.30,0.41,0.46)  (0.14,0.24,0.33)

Tab.3 The ranking indexes and ranks

Alternative  Sj     Rank  Rj     Rank  Qj     Rank
x1           0.303  2     0.230  2     0.172  2
x2           0.817  4     0.440  3     0.768  4
x3           0.623  3     0.440  3     0.682  3
x4           0.963  5     0.500  4     1.000  5
x5           0.248  1     0.127  1     0      1

Step 4: The group utility indexes ($S_j$), individual regret indexes ($R_j$) and compromise indexes ($Q_j$) are calculated according to Eqs.(14)-(17), here with $v = 0.5$, and shown in Tab.3, together with the three corresponding ranking lists. Step 5: According to $Q_j$, $S_j$, $R_j$ and the acceptance conditions C1 and C2, the compromise ranking list of the five candidates is $x_5, x_1, x_3, x_2, x_4$. Since $Q(x_1) - Q(x_5) = 0.172 < DQ = 1/(5-1) = 0.25$, condition C1 is not satisfied for $x_5$ alone, so the satisfactory selection is the compromise set of candidates $x_5$ and $x_1$.


5. Conclusion
VIKOR is a helpful tool in multiple attribute decision making, particularly in situations where the decision maker is unable, or does not know how, to express his or her preference at the beginning of system design. The obtained compromise solution can be accepted by the decision makers because it provides a maximum group utility and a minimum individual regret of the opponent. Considering that in some cases determining the exact values of the attributes is difficult, so that the values are treated as fuzzy data, this paper has extended VIKOR to the fuzzy environment and presented an algorithm to determine the most preferable choice among all possible choices when the data are triangular fuzzy numbers. The main ranking result is the compromise ranking list of alternatives, together with the compromise solution and its advantage rate.
References
[1] Buckley J.J., Fuzzy hierarchical analysis, Fuzzy Sets and Systems 17 (1985) 233-247.
[2] Chen C.-T., Extensions of the TOPSIS for group decision-making under fuzzy environment, Fuzzy Sets and Systems 114 (2000) 1-9.
[3] Chen C.-T., Lin C.-T., Sue-Fn Huang, A fuzzy approach for supplier evaluation and selection in supply chain management, International Journal of Production Economics, in press.
[4] Chen S.-J., Hwang C.-L., Fuzzy Multiple Attribute Decision Making: Methods and Applications, Springer, New York, 1992.
[5] Chen M.-F., Tzeng G.-H., Michael Tang, Fuzzy MADM Approach for Evaluation of Expatriate Assignments, International Journal of Information Technology and Decision Making, 4 (2) (2005) 1-20.
[6] Chen S.J., Hwang C.L., Fuzzy Multiple Attribute Decision-Making: Methods and Applications, Springer-Verlag, New York, 1992.
[7] Chiu Y.-C., Shyu J.Z., Tzeng G.-H., Fuzzy MADM for Evaluating the E-commerce Strategy, International Journal of Computer Applications in Technology, 19 (1) (2004) 12-22.
[8] Chu T.-C., Lin Y.-C., A Fuzzy TOPSIS Method for Robot Selection, Springer-Verlag, London, 2003.
[9] Chu T.-C., Selecting Plant Location via a Fuzzy TOPSIS Approach, Int J Adv Manuf Technol, 20 (2002) 859-864.
[10] Cook W.D., Kress M., A multiple-criteria composite index model for quantitative and qualitative data, European J. Oper. Res. 78 (1994) 367-379.
[11] Dimova L., Sevastianov P., Sevastianov D., MADM in a fuzzy setting: Investment projects assessment application, International Journal of Production Economics, 100 (1) (2006) 10-29.
[12] Ding J.-F., Liang G.-S., Using fuzzy MADM to select partners of strategic alliances for liner shipping, Information Sciences, 173 (1-3) (2005) 197-225.
[13] Hwang C.-L., Yoon K., Multiple Attribute Decision Making: Methods and Applications, Springer, Berlin, 1981.
[14] Li D.-F., Fuzzy Multiobjective Many-Person Decision Makings and Games, National Defence Industry Press, Beijing, 2003, 2-7 (I), 39-57 (II).
[15] Li D.-F., An approach to fuzzy multicriterion decision making under uncertainty, 169 (1-2) (2005) 97-112.
[16] Li D.-F., Yang J.-B., Fuzzy linear programming technique for multicriterion group decision making in fuzzy environments, 158 (1-4) (2004) 263-275.
[17] Lin J.-H., Tzeng G.-H., Jen W., Utilizing VIKOR to make ERP system supplier selection decision, Agriculture and Economics, 34 (2005) 69-90.
[18] Ma J., Fan Z.-P., Huang L.-H., A subjective and objective integrated approach to determine attribute weights, European J. Oper. Res. 112 (1999) 397-404.
[19] Malakooti B., Zhou Y.Q., Feedforward artificial neural networks for solving discrete multiple criteria decision making problems, Management Sci. 40 (1994) 1542-1561.
[20] Opricovic S., Tzeng G.-H., Compromise solution by MADM methods: A comparative analysis of VIKOR and TOPSIS, European Journal of Operational Research, 156 (2) (2004) 445-455.
[21] Opricovic S., Tzeng G.-H., Defuzzification for a Fuzzy Multicriterion Decision Model, International Journal of Uncertainty, Fuzziness and Knowledge-based Systems, 11 (5) (2003) 635-652.
[22] Wang Y.-M., Parkan C., Multiple attribute decision making based on fuzzy preference information on alternatives: Ranking and weighting, Fuzzy Sets and Systems 153 (2005) 331-346.
[23] Zadeh L.A., Fuzzy sets, Inform. and Control 8 (1965) 338-353.
[24] Zadeh L.A., The concept of a linguistic variable and its application to approximate reasoning, Inform. Sci. 8 (1975) 301-357(a), 199-249(b).


Extended Models to Non-uniqueness of Cross Efficiency in Cross Efficiency Evaluation


Wu Jie, Liang Liang
School of Management, University of Science and Technology of China, P. R. China, 230026

Abstract Cross efficiency evaluation has been considered a powerful extension of Data Envelopment Analysis: it provides a unique ordering among the Decision Making Units (DMUs) and eliminates unrealistic weighting schemes without requiring the elicitation of weight restrictions from application area experts. A factor that possibly reduces the usefulness of the cross evaluation method, however, is that the ultimate cross efficiency may not be unique, because the weights that maximize an evaluated DMU's simple efficiency may not be unique. This paper extends the model of Doyle and Green (1994), in which the ultimate cross efficiency of every DMU is obtained by introducing a secondary objective function. Different secondary objective functions are used to determine the ultimate cross efficiency, and the proposed models can be applied under different circumstances with different meanings. Key words Data envelopment analysis (DEA), Cross efficiency

1. Introduction
Data envelopment analysis (DEA) is concerned with the comparative assessment of the efficiency of decision making units (DMUs). While it has proven an effective approach for estimating empirical efficient frontiers and measuring the relative efficiency of peer units, its flexibility in weighting multiple inputs and outputs and its nature of self-evaluation have been criticized. The cross-evaluation method is a DEA extension that can be used to identify good overall performers and rank DMUs; since it was proposed by Sexton et al. (1986), its main idea has been to use DEA in a peer evaluation instead of a self-evaluation. The cross-evaluation method has two principal advantages: (1) it provides a unique ordering among the DMUs; and (2) it eliminates unrealistic weight schemes without requiring the elicitation of weight restrictions from application area experts (Anderson, Hollingsworth, and Inman, 2002). Cross efficiency evaluation has been used in various applications, e.g., efficiency evaluations of nursing homes (Sexton et al., 1986), R&D project selection (Oral, Kettani and Lang, 1991), preference voting (Green, Doyle and Cook, 1996), and others. However, as noted in Doyle and Green (1994), the non-uniqueness of the DEA optimal weights possibly reduces the usefulness of cross efficiency. Specifically, the cross efficiency scores obtained from the original DEA are generally not unique, and depending on which of the alternate optimal solutions to the DEA linear programs is used, it may be possible to improve a DMU's (cross efficiency) performance rating, though generally only by worsening the ratings of others. Sexton et al. (1986) and Doyle and Green (1994) propose the use of secondary goals to deal with the non-uniqueness issue and present aggressive and benevolent model formulations. In the benevolent model, for example, the idea is to identify optimal weights that not only maximize the efficiency of the particular DMU under evaluation but, at the same time, maximize the average efficiency of the other DMUs; in the aggressive model, one seeks weights that minimize the average efficiency of those other units. The purpose of the current paper is to extend the model of Doyle and Green (1994) by introducing various secondary objective functions. Each new secondary objective function represents an efficiency evaluation criterion. With these new models, one can compare the efficiency scores and get a better picture of cross efficiency stability with respect to multiple DEA weights. The rest of this paper is organized as follows. Section 2 presents the cross efficiency evaluation approach. Extended models are introduced in Section 3. Conclusions are given in Section 4.

The research has been supported by Chinese National Science Fund for Distinguished Young Scholars (No: 70525001) and Special Fund for Graduates of Chinese Academy of Sciences for Science and Social Work (Innovation Groups).


2. Cross Efficiency Evaluation


Suppose we have a set of n DMUs, where each DMU$_j$ produces s different outputs from m different inputs. The ith input and rth output of DMU$_j$ ($j = 1, 2, \ldots, n$) are denoted by $x_{ij}$ ($i = 1, \ldots, m$) and $y_{rj}$ ($r = 1, \ldots, s$), respectively. Cross efficiency is calculated in a two-phase process. The first phase uses a standard DEA model, e.g., the CCR model of Charnes, Cooper and Rhodes (1978). Specifically, for any DMU$_d$ under evaluation, the efficiency score $E_{dd}^*$ under the CCR model is given by the following optimization problem:

$$\begin{aligned} \max\ & E_{dd} = \sum_{r=1}^{s} u_{rd}\, y_{rd} \\ \text{s.t.}\ & \sum_{i=1}^{m} v_{id}\, x_{id} = 1 \\ & \sum_{r=1}^{s} u_{rd}\, y_{rj} - \sum_{i=1}^{m} v_{id}\, x_{ij} \leq 0, \quad j = 1, \ldots, n \\ & u_{rd},\ v_{id} \geq 0 \end{aligned} \tag{1}$$
where $v_{id}$ and $u_{rd}$ represent the $i$th input and $r$th output weights for DMU$_d$. The cross efficiency of DMU$_j$, using the weights that DMU$_d$ has chosen in model (1), is then

$$E_{dj} = \frac{\sum_{r=1}^{s} u_{rd}^{*}\, y_{rj}}{\sum_{i=1}^{m} v_{id}^{*}\, x_{ij}}, \quad d, j = 1, 2, \ldots, n \tag{2}$$

where (*) denotes optimal values in model (1). For DMU$_j$ ($j = 1, 2, \ldots, n$), the average of all $E_{dj}$ ($d = 1, 2, \ldots, n$),

$$\bar{E}_j = \frac{1}{n} \sum_{d=1}^{n} E_{dj},$$

is referred to as the cross efficiency score for DMU$_j$.


We point out that model (1) can also be expressed equivalently in the following deviation-variable form:

$$\begin{aligned} \min\ & \alpha_d \\ \text{s.t.}\ & \sum_{i=1}^{m} v_{id}\, x_{id} = 1, \\ & \sum_{r=1}^{s} u_{rd}\, y_{rj} - \sum_{i=1}^{m} v_{id}\, x_{ij} + \alpha_j = 0, \quad j = 1, \ldots, n, \\ & u_{rd},\ v_{id},\ \alpha_j \geq 0 \ \text{for all } r, i, j, \end{aligned} \tag{3}$$

where $\alpha_d$ is the deviation variable for DMU$_d$ and $\alpha_j$ the deviation variable for the $j$th DMU. Under this model, DMU$_d$ is efficient if and only if $\alpha_d^* = 0$. If DMU$_d$ is not efficient, then its efficiency score is $1 - \alpha_d^*$ ($\alpha_d^*$ can be regarded as a measure of inefficiency). We refer to the deviation variable $\alpha_j$ as the d-inefficiency of DMU$_j$.

Note that the optimal weights obtained from model (1) (or model (3)) are usually not unique. As a result, the cross efficiency defined in (2) is arbitrarily generated, depending on the optimal solution arising from the particular software in use (Despotis, 2002). To resolve this ambiguity, a secondary goal in cross efficiency evaluation is introduced. As discussed above, Doyle and Green (1994) present benevolent and aggressive model

formulations that seek to identify optimal weights that not only maximize the efficiency of a particular DMU under evaluation, but also minimize (maximize) the average efficiency of other DMUs. One form of the benevolent model focuses on finding a multiplier bundle that maximizes the ratio of outputs to inputs for the composite DMU made up of the n-1 peer units. (The composite DMU is created by aggregating the outputs and inputs for all n-1 peer units). The aggressive form of this would involve minimizing the ratio for the composite unit. Let us now examine various forms of secondary goals for aiding in cross evaluation. For purposes of presentation, it is convenient to use model (3) as a basis for this discussion.
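Before examining the secondary goals, the two-phase procedure of model (1) and Eq.(2) can be sketched with scipy's linear programming routine as follows; the helper is an illustrative assumption of ours (inputs x of shape m by n, outputs y of shape s by n), not code from the paper, and it simply returns one arbitrary optimal weight set per DMU, which is exactly the ambiguity the extended models address.

```python
# Two-phase cross efficiency: solve the CCR multiplier model (1) for each
# DMU d, then apply its weights to every DMU j as in Eq.(2).
import numpy as np
from scipy.optimize import linprog

def cross_efficiency(x, y):
    m, n = x.shape
    s = y.shape[0]
    E = np.zeros((n, n))
    for d in range(n):
        # variables z = (u_1..u_s, v_1..v_m); linprog minimizes, so negate
        c = np.concatenate([-y[:, d], np.zeros(m)])          # max sum u_r y_rd
        A_ub = np.hstack([y.T, -x.T])                        # u'y_j - v'x_j <= 0
        A_eq = np.concatenate([np.zeros(s), x[:, d]])[None, :]   # v'x_d = 1
        res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0])
        u, v = res.x[:s], res.x[s:]                          # default bounds: >= 0
        E[d] = (u @ y) / (v @ x)                             # Eq.(2), row d
    return E, E.mean(axis=0)                                 # matrix and averages
```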

3. Extended Models
Let the (CCR) inefficiency of DMU$_d$ be $\alpha_d^*$. We now consider the following models, which share the same restrictions but have different objective functions, to determine the optimal weights with which DMU$_d$ of model (3) evaluates the other DMUs.
$$\begin{aligned} \min\ & \sum_{j=1}^{n} \alpha_j' \\ \text{s.t.}\ & \sum_{r=1}^{s} u_{r}^{d}\, y_{rj} - \sum_{i=1}^{m} v_{i}^{d}\, x_{ij} + \alpha_j' = 0, \quad j = 1, \ldots, n, \\ & \sum_{i=1}^{m} v_{i}^{d}\, x_{id} = 1, \\ & \sum_{r=1}^{s} u_{r}^{d}\, y_{rd} = 1 - \alpha_d^*, \\ & u_{r}^{d},\ v_{i}^{d},\ \alpha_j' \geq 0 \ \text{for all } i, r, j. \end{aligned} \tag{4}$$
$$\begin{aligned} \min\ & \max_j\ \alpha_j' \\ \text{s.t.}\ & \sum_{r=1}^{s} u_{r}^{d}\, y_{rj} - \sum_{i=1}^{m} v_{i}^{d}\, x_{ij} + \alpha_j' = 0, \quad j = 1, \ldots, n, \\ & \sum_{i=1}^{m} v_{i}^{d}\, x_{id} = 1, \\ & \sum_{r=1}^{s} u_{r}^{d}\, y_{rd} = 1 - \alpha_d^*, \\ & u_{r}^{d},\ v_{i}^{d},\ \alpha_j' \geq 0 \ \text{for all } i, r, j. \end{aligned} \tag{5}$$

Model (5) can be expressed equivalently in the following form:

$$\begin{aligned} \min\ & \delta \\ \text{s.t.}\ & \sum_{r=1}^{s} u_{r}^{d}\, y_{rj} - \sum_{i=1}^{m} v_{i}^{d}\, x_{ij} + \alpha_j' = 0, \quad j = 1, \ldots, n, \\ & \sum_{i=1}^{m} v_{i}^{d}\, x_{id} = 1, \\ & \sum_{r=1}^{s} u_{r}^{d}\, y_{rd} = 1 - \alpha_d^*, \\ & \delta - \alpha_j' \geq 0, \quad j = 1, \ldots, n, \\ & u_{r}^{d},\ v_{i}^{d},\ \alpha_j',\ \delta \geq 0 \ \text{for all } i, r, j. \end{aligned} \tag{5'}$$

The effect of the added constraints $\delta - \alpha_j' \geq 0$ ($j = 1, \ldots, n$) is to make $\delta$ the maximum deviation; they do not change the feasible region of the decision variables.

$$\begin{aligned} \min\ & \frac{1}{n} \sum_{j=1}^{n} \left| \alpha_j' - \bar{\alpha}' \right| \\ \text{s.t.}\ & \sum_{r=1}^{s} u_{r}^{d}\, y_{rj} - \sum_{i=1}^{m} v_{i}^{d}\, x_{ij} + \alpha_j' = 0, \quad j = 1, \ldots, n, \\ & \sum_{i=1}^{m} v_{i}^{d}\, x_{id} = 1, \\ & \sum_{r=1}^{s} u_{r}^{d}\, y_{rd} = 1 - \alpha_d^*, \\ & u_{r}^{d},\ v_{i}^{d},\ \alpha_j' \geq 0 \ \text{for all } i, r, j, \end{aligned} \tag{6}$$

where $\bar{\alpha}' = \frac{1}{n} \sum_{j=1}^{n} \alpha_j'$. Letting $a_j' = \frac{1}{2}\left( |\alpha_j' - \bar{\alpha}'| + (\alpha_j' - \bar{\alpha}') \right)$ and $b_j' = \frac{1}{2}\left( |\alpha_j' - \bar{\alpha}'| - (\alpha_j' - \bar{\alpha}') \right)$, we obtain the following linear program:

$$\begin{aligned} \min\ & \frac{1}{n} \sum_{j=1}^{n} (a_j' + b_j') \\ \text{s.t.}\ & \sum_{r=1}^{s} u_{r}^{d}\, y_{rj} - \sum_{i=1}^{m} v_{i}^{d}\, x_{ij} + \alpha_j' = 0, \quad j = 1, \ldots, n, \\ & \sum_{i=1}^{m} v_{i}^{d}\, x_{id} = 1, \\ & \sum_{r=1}^{s} u_{r}^{d}\, y_{rd} = 1 - \alpha_d^*, \\ & a_j' - b_j' = \alpha_j' - \frac{1}{n} \sum_{k=1}^{n} \alpha_k', \quad j = 1, \ldots, n, \\ & u_{r}^{d},\ v_{i}^{d},\ a_j',\ b_j',\ \alpha_j' \geq 0 \ \text{for all } i, r, j. \end{aligned} \tag{6'}$$
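For illustration, model (4) can be solved directly as a single linear program once $\alpha_d^*$ is known from model (3); the scipy-based sketch below is our own assumption for exposition, with eff_d standing for $1 - \alpha_d^*$.

```python
# Secondary-goal model (4): with DMU d's own efficiency pinned at eff_d,
# minimize the total d-inefficiency of all units.
import numpy as np
from scipy.optimize import linprog

def secondary_goal_weights(x, y, d, eff_d):
    m, n = x.shape
    s = y.shape[0]
    # variables z = (u_1..u_s, v_1..v_m, alpha'_1..alpha'_n)
    c = np.concatenate([np.zeros(s + m), np.ones(n)])   # min sum_j alpha'_j
    # u'y_j - v'x_j + alpha'_j = 0 for every j
    A_eq = np.hstack([y.T, -x.T, np.eye(n)])
    b_eq = np.zeros(n)
    # v'x_d = 1  and  u'y_d = eff_d  (i.e. 1 - alpha*_d)
    A_eq = np.vstack([A_eq,
                      np.concatenate([np.zeros(s), x[:, d], np.zeros(n)]),
                      np.concatenate([y[:, d], np.zeros(m + n)])])
    b_eq = np.concatenate([b_eq, [1.0, eff_d]])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq)              # default bounds: >= 0
    return res.x[:s], res.x[s:s + m]                    # optimal (u, v) for DMU d
```

Models (5) and (6') admit the same treatment with an extra variable for the maximum deviation or with the pairs $(a_j', b_j')$, respectively.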


The proposed models (4), (5) and (6) share the same restrictions, meaning that DMU$_d$ chooses the weight combinations that keep its own efficiency equal to $1 - \alpha_d^*$; their different objective functions can be applied under different circumstances with different meanings.

Remark 1: In model (4), minimizing the sum of the inefficiencies $\alpha_j'$ is equivalent to maximizing the sum of the efficiency values given to the n DMUs, aiming at whole-system optimization. This criterion is intuitively appealing: if all DMUs attempt to maximize their performance ratios, the sum of these ratios should tend to be large within the given constraints. The model is especially applicable to a system seeking overall optimization, for example a supply chain consisting of a set of business entities involved in the design, development, manufacture and distribution of products.

Remark 2: In model (5), minimizing the maximal inefficiency $\alpha_j'$ is equivalent to maximizing the minimal efficiency among the n efficiencies, and thus tends to take the side of the weak DMUs. This principle has shown good performance in Troutt (1997), e.g., in identifying the same site as optimal in the superconducting supercollider (SSC) problem and in an application to a full data set of educational programs.

Remark 3: The objective function in model (6) computes the mean absolute deviation (MAD) of a set of data,

namely the average of the absolute deviations of the data points from their mean; minimizing this objective therefore reduces the efficiency differences among the DMUs, which to some extent reflects an egalitarian principle.

In the above models, when the DMU$_d$ under evaluation changes (i.e., $x_{id}$, $i = 1, \ldots, m$, $y_{rd}$, $r = 1, \ldots, s$, and $\alpha_d^*$ change in the constraints), different optimal solutions for $v_i^d$ and $u_r^d$ are obtained. We thus obtain n optimal weight vectors $W_d^* = (v_1^{d*}, \ldots, v_m^{d*}, u_1^{d*}, \ldots, u_s^{d*})$, $d = 1, \ldots, n$. Using $W_d^*$, the cross efficiency for any DMU$_j$ ($j = 1, 2, \ldots, n$) is calculated as

$$E_j(W_d^*) = \frac{\sum_{r=1}^{s} u_r^{d*}\, y_{rj}}{\sum_{i=1}^{m} v_i^{d*}\, x_{ij}}, \quad d, j = 1, 2, \ldots, n. \tag{7}$$

For DMU$_j$ ($j = 1, 2, \ldots, n$), the average of all $E_j(W_d^*)$, $d = 1, \ldots, n$, namely

$$\bar{E}_j = \frac{1}{n} \sum_{d=1}^{n} E_j(W_d^*), \quad j = 1, 2, \ldots, n, \tag{8}$$

is our new cross efficiency score for DMU$_j$.

4. Conclusions
Because DEA weights are generally not unique, the related cross efficiency may not be unique either. It is this non-uniqueness phenomenon that can undermine the usefulness of the cross evaluation method. This paper seeks to extend the model of Doyle and Green (1994), in which the ultimate cross efficiency of every DMU is achieved by introducing a secondary objective function. In this paper, different secondary objective functions are used to determine the ultimate cross efficiency, and the proposed models with their different objective functions can be applied under different circumstances.
References
[1] Anderson T R, Hollingsworth K B, Inman L B. The fixed weighting nature of a cross-evaluation model. Journal of Productivity Analysis, 2002, 18 (1): 249-255
[2] Charnes A, Cooper W W, Rhodes E. Measuring the efficiency of decision making units. European Journal of Operational Research, 1978, 2(1): 429-444
[3] Charnes A, Cooper W W. Preface to topics in data envelopment analysis. Annals of Operations Research, 1985, 2(3): 59-94
[4] Despotis D K. Improving the discriminating power of DEA: focus on globally efficient units. Journal of the Operational Research Society, 2002, 53(6): 314-323
[5] Doyle J, Green R. Efficiency and cross efficiency in DEA: Derivations, meanings and uses. Journal of the Operational Research Society, 1994, 45 (5): 567-578
[6] Green R, Doyle J, Cook W D. Preference voting and project ranking using DEA and cross-evaluation. European Journal of Operational Research, 1996, 90(4): 461-472
[7] Troutt M D. Derivation of the Maximin Efficiency Ratio model from the maximum decisional efficiency principle. Annals of Operations Research, 1997, 73(6): 323-338
[8] Oral M, Kettani O, Lang P. A methodology for collective evaluation and selection of industrial R&D projects. Management Science, 1991, 37(7): 871-883
[9] Sexton T R, Silkman R H, Hogan A J. Measuring Efficiency: An Assessment of Data Envelopment Analysis. San Francisco: Jossey-Bass, 1986. 32


Evaluating the Comparative Performance of the Regional S&T Competitiveness of China: A DEA Application and Problem
Wu Qiang, Wang Xiaoye
School of Management, University of Science and Technology of China, P.R.China, 230026

Abstract The evaluation and study of S&T competitiveness is not yet mature and remains under continuous research. Based on a systematic analysis of the indicator system of regional S&T competitiveness, we use the TOPSIS method to empirically examine the gross, structural and comprehensive S&T competitiveness of 31 provinces of China, and then apply data envelopment analysis (DEA) to evaluate the comparative performance of the regional S&T input and output factors of China. The DEA evaluation shows that 18 of the 31 provinces are relatively efficient (Jilin, Heilongjiang, Anhui, Fujian, Hunan, Hainan, Chongqing, Yunnan, Tibet, Gansu, Xinjiang, Beijing, Shanghai, Shaanxi, Tianjin, Zhejiang, Hubei and Guangdong), accounting for 58.1 percent of all provinces; the other 13 provinces (41.9 percent) are inefficient. Contrasting the TOPSIS ordering with the DEA result, we discover a perplexing problem: some provinces with obviously weak S&T competitiveness are fully DEA-efficient in S&T production and even serve as benchmarks for provinces with very strong S&T competitiveness. The cause of this problem is possibly that the convexity and cone assumptions of DEA are not fulfilled when the 31 provinces are evaluated together. Key words Scientific and technical competitiveness, Data envelopment analysis, TOPSIS

1. Introduction
Scientific and technical (S&T) competitiveness has become a key link in sustainable industrial development and in displaying a country's or region's competitiveness. Research on the evaluation of S&T competitiveness has developed both at home and abroad; comparatively famous evaluations include The World Competitiveness Yearbook of IMD (2006)[1], the Global Competitiveness Report of WEF (2006)[2], the relevant portion of the Human Development Report of UNDP (2001)[3], and relatively systematic studies in China such as the Annual Report of Regional Innovation Capability of China (Research Group on Development and Strategy of S&T of China, 2006)[4], the China Sustainable Development Strategy Report (Research Group of Sustainable Development Strategy of CAS, 2006)[5], and the Statistical and Monitorial Result of S&T Change of China (Science and Technology Information Statistics Center of MOST of China, 2007)[6]. Other evaluations that adopt one or several indicators also have value for thorough research on S&T competitiveness. Nelson et al. (1992) study American technological leadership with many indicators, such as Scientists and Engineers Engaged in R&D per 10,000 Workers, Expenditures for R&D as a Percentage of GNP, and Country Shares of World High-Technology Exports[7]. Roessner et al. (1996) develop four "input" indicators of a nation's future capacity to compete in international markets for high-technology products and three "output" indicators of a nation's current international competitiveness[8]. Hicks et al. (2001) use patent indicators to examine the changing composition of innovative activity in the US across areas and organization types[9]. McAleer et al. (2005) introduce the patent success ratio to measure innovative activity in the US over the period 1915-2001[10]. To analyze technical innovation and flows of knowledge from and to cities, Maspons et al. (2004) examine the patents of 14 European cities during 1991-1995 and 1996-2001[11]. Through patent analysis, Bhattacharya (2004) compares inventive activity and technological change between India and China during 1996-2001[12]. However, no uniform and complete indicator system and computational model has yet been presented, because of the complexity of the S&T system; the evaluation and study of S&T competitiveness is thus not yet mature and remains under continuous research. Taking a comprehensive view of the monographs and papers at home and abroad, it is not difficult to see that S&T competitiveness is a concept with a definite intuitive meaning that cannot be exactly defined, and that regional S&T competitiveness calls for deeper, more systematic and more extensive study in both theory and practice. Data envelopment analysis (DEA), used here to estimate the efficiency of S&T competitiveness, is one of the important approaches, and it deserves careful discussion, especially when aimed at the S&T productive efficiencies of the regions (the provinces) of China.

This research was supported by the soft science project of public bidding of Hefei (2006-303).


Fig.1 Evaluation indicator system of regional S&T competitiveness
(Fig.1 organizes thirty-two concrete indicators into five modules: gross S&T input, gross S&T output, structural S&T input, structural S&T output, and S&T foundation. Gross S&T input and output form gross S&T ability and, with the S&T foundation module, gross S&T competitiveness; structural S&T input and output form structural S&T ability and, with the S&T foundation module, structural S&T competitiveness; the two converge into comprehensive S&T competitiveness.)

2. Establishment of evaluation indicator system


Factors involved in comparing and evaluating the S&T competitiveness of every province are large in quantity and intricate in relation. The provinces in this paper, unless otherwise noted, are 22 provinces, 4 municipalities and 5 autonomous regions, excluding Hong Kong, Macao and Taiwan. Systematic analysis and evaluation of their S&T competitiveness is not easy. First we need to collect as many indicators as possible, establish a rather complete evaluation indicator system, and construct a comprehensive analysis framework. We then filter purposively according to the correlations among factors, remove less important indicators, and gradually obtain indicators of proper quantity and definite meaning. The construction of such an intricate indicator system rests on certain principles: the established system is not a simple combination of indicators but must be a unified whole with organic internal relations, and must balance maturity and independence, scientific soundness and feasibility, systematism and hierarchy, activeness and stability, as well as universality and regionality. Building the indicator system is a dynamic process that must combine complexity with simplicity, commonness with emphasis, the microcosmic with the macroscopic, and the short term with the long term.

A rather reasonable evaluation indicator system of the regional S&T competitiveness of China is established on the basis of the authors' extensive reference to the research of many experts and scholars (see Fig.1). The aim of this indicator system is to judge exactly every province's true comprehensive S&T competitiveness in a given period. The comprehensive S&T competitiveness can be regarded as the convergence of two types of competitiveness, gross S&T competitiveness and structural S&T competitiveness. The gross S&T competitiveness is composed of three modules, gross S&T input, gross S&T output and S&T foundation, among which the combination of gross S&T input and gross S&T output can be named gross S&T ability. The structural S&T competitiveness is likewise composed of three modules, structural S&T input, structural S&T output and S&T foundation, among which the combination of structural S&T input and structural S&T output can be named structural S&T ability. Concrete indicators are set under every module: the gross S&T input, gross S&T output, structural S&T input and structural S&T output modules each contain seven concrete indicators, and the S&T foundation module contains four. The relationship between the concrete indicators and the modules is presented in Fig.1. The modules of this evaluation system are both independent and interrelated: each has its own content and characteristics and forms its own indicator type, yet they correlate tightly, restrict each other, and together constitute a multi-hierarchy, multi-dimensional evaluation system of comprehensive S&T competitiveness.

3. Evaluation methods
3.1 TOPSIS method

TOPSIS (Technique for Order Preference by Similarity to Ideal Solution), initially presented by Hwang and Yoon (1981)[13], is an effective method for solving multi-criteria decision-making problems, with which the S&T competitiveness of every province can be studied and ranked according to the relative closeness. The TOPSIS method consists of several steps, of which the key is to calculate the relative closeness to the ideal solution:

$$F_i^* = \frac{E_i^-}{E_i^+ + E_i^-}, \quad i = 1, 2, \ldots, n \tag{1}$$

where $E_i^+$ is the separation distance of each province from the ideal solution and $E_i^-$ is the separation distance of each province from the negative-ideal solution.
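A minimal Python sketch of Eq.(1) follows, assuming a weighted and normalized decision matrix in which every indicator is benefit-type; it is illustrative rather than the authors' implementation.

```python
# TOPSIS relative closeness for a matrix z of provinces (rows) by indicators.
import numpy as np

def topsis_closeness(z):
    ideal, anti = z.max(axis=0), z.min(axis=0)     # ideal / negative-ideal points
    e_plus = np.linalg.norm(z - ideal, axis=1)     # distance to ideal solution
    e_minus = np.linalg.norm(z - anti, axis=1)     # distance to negative ideal
    return e_minus / (e_plus + e_minus)            # F_i*: larger is better

# provinces can then be ranked with np.argsort(-topsis_closeness(z))
```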

3.2 DEA

DEA is an effective method for estimating productive efficiency and has many concrete models; this paper selects the CCR model (developed by the famous operational researchers Charnes, Cooper and Rhodes in 1978)[14], which is also the most basic model and is mainly used to estimate the gross efficiency, made up of scale efficiency and technical efficiency, of multi-output multi-input DMUs. Suppose that n DMUs (Decision Making Units) are considered and each DMU has p inputs and q outputs; the DMUs in this paper are the provinces to be measured. If DMU$_1$ is evaluated, the CCR model can be constructed as follows:

$$\begin{aligned} \min\ & \theta \\ \text{s.t.}\ & \sum_{i=1}^{n} \lambda_i X_i + s^- = \theta X_1, \\ & \sum_{i=1}^{n} \lambda_i Y_i - s^+ = Y_1, \\ & \lambda_i \geq 0,\ i = 1, 2, \ldots, n, \quad s^+ \geq 0,\ s^- \geq 0, \end{aligned} \tag{2}$$

where $X_i$ is the input vector of DMU$_i$, $Y_i$ is the output vector of DMU$_i$, and $\theta$ is the relative comprehensive efficiency score.
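For illustration, model (2) can be solved as a linear program, one province at a time; the scipy-based sketch below uses hypothetical argument names (X of shape p by n, Y of shape q by n, k the index of the evaluated DMU) and is only an assumption of how the computation might be organized.

```python
# Input-oriented CCR envelopment model (2) for one evaluated DMU.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    p, n = X.shape
    q = Y.shape[0]
    # variables z = (theta, lambda_1..lambda_n, s_minus (p), s_plus (q))
    c = np.zeros(1 + n + p + q); c[0] = 1.0            # minimize theta
    # X lambda + s_minus = theta X_k  and  Y lambda - s_plus = Y_k
    A_in = np.hstack([-X[:, [k]], X, np.eye(p), np.zeros((p, q))])
    A_out = np.hstack([np.zeros((q, 1)), Y, np.zeros((q, p)), -np.eye(q)])
    res = linprog(c, A_eq=np.vstack([A_in, A_out]),
                  b_eq=np.concatenate([np.zeros(p), Y[:, k]]))
    return res.x[0]                                    # efficiency score theta*
```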

4. Result and problem of evaluation


To begin with, we rank the gross, structural and comprehensive S&T competitiveness of the 31 provinces with the TOPSIS method according to the indicators devised in Fig.1; the result is shown in Tab.1. It is not difficult to see from the table that the top five in gross S&T competitiveness are Beijing, Guangdong, Shanghai, Jiangsu and Shandong; the top five in structural S&T competitiveness are Beijing, Shanghai, Tianjin, Shaanxi and Guangdong; and the top five in comprehensive S&T competitiveness are Beijing, Shanghai, Guangdong, Jiangsu and Shandong. Seven provinces are involved altogether; they are the best in interprovincial S&T competitiveness and representative of the whole S&T competitiveness of China. The last five in gross S&T competitiveness are Guizhou, Ningxia, Hainan, Qinghai and Tibet; in structural S&T competitiveness, Jiangxi, Guizhou, Inner Mongolia, Qinghai and Tibet; and in comprehensive S&T competitiveness, Hainan, Guizhou, Inner Mongolia, Qinghai and Tibet. Eight provinces are involved altogether, all with relatively very weak S&T competitiveness.

Next we analyze the S&T productive efficiency of the 31 provinces. Seven indicators are used for S&T input, namely the concrete indicators under the gross S&T input module in Fig.1: Persons Engaged in S&T Activities (10,000 persons), Scientists and Engineers (10,000 persons), Technical Personnel in State-owned Enterprises and Institutions at Year-end (10,000 persons), Expenditures for S&T Activities (100 million yuan), R&D Expenditures (100 million yuan), Region Expenditure on S&T Promotion Funds and Operating Expenses for Science (100 million yuan), and Expenditure in Large and Medium-sized Industrial Enterprises Spending for New Products (100 million yuan). Seven indicators are also selected for S&T output, namely the concrete indicators under the gross S&T output module in Fig.1: Three Kinds of Applications for Patents Received (piece), Three Kinds of Patents Granted (piece), Scientific Papers Published in Chinese S&T Periodicals (piece), Chinese Scientific Papers Taken by Major Foreign Referencing Systems (piece), Transaction Value in the Technical Market (100 million yuan), Exports of High-tech Products (USD 100 million), and Revenue from the Sale of New Products in Large and Medium-sized Industrial Enterprises (100 million yuan). The results of the DEA evaluation, based on the average data of these indicators from 1998 to 2002 and the input-based CCR model, are shown in Tab.1.

From Tab.1 we can see that 18 of the 31 provinces are relatively efficient (Jilin, Heilongjiang, Anhui, Fujian, Hunan, Hainan, Chongqing, Yunnan, Tibet, Gansu, Xinjiang, Beijing, Shanghai, Shaanxi, Tianjin, Zhejiang, Hubei and Guangdong), accounting for 58.1 percent of all provinces; the other 13 provinces (41.9 percent) are inefficient and are ranked as follows by efficiency score (in brackets): Guangxi (0.996), Jiangsu (0.952), Inner Mongolia (0.911), Guizhou (0.910), Shandong (0.870), Jiangxi (0.864), Hebei (0.829), Henan (0.810), Liaoning (0.800), Sichuan (0.779), Ningxia (0.677), Shanxi (0.655) and Qinghai (0.586). The last column in Tab.1 presents the total

frequency with which each DMU is used to construct the DEA-efficient surfaces of the other provinces. Generally speaking, the DMUs on the efficient surface are comparatively ideal relative to the DMU being evaluated. Contrasting the TOPSIS ordering with the DEA result, we discover a perplexing problem: some provinces with obviously weak S&T competitiveness are fully efficient in S&T production and even serve as benchmarks for provinces with very strong S&T competitiveness. For example, Hainan, a province with very weak S&T competitiveness (ranked number 27), becomes a powerful competitor of 10 other provinces in terms of S&T productive efficiency, and can even become the learning goal of such S&T giants as Jiangsu (ranked number 4) and Shandong (ranked number 5). The cause of this problem is possibly that the convexity and cone assumptions of DEA are not fulfilled when the 31 provinces are evaluated together. A new evaluation manner must therefore be applied, which first sorts the DMUs into classes and then applies DEA separately to the different classes, supposing each class satisfies the corresponding assumptions.
Tab.1 The S&T competitiveness of provinces of China and the DEA evaluation results

Serial  Province        Gross rank  Structural rank  Comprehensive rank  Efficiency score  Efficient DMUs constructing the surface  Frequency in efficient surfaces
1       Hebei           14          24               19                  0.829             24,12,13,28                              0
2       Shanxi          21          20               22                  0.655             21,10,13,28,18                           0
3       Inner Mongolia  26          29               29                  0.911             21,28,12                                 0
4       Jilin           17          11               14                  1.000             4                                        0
5       Heilongjiang    15          17               17                  1.000             5                                        2
6       Anhui           18          15               18                  1.000             6                                        0
7       Fujian          16          12               12                  1.000             7                                        0
8       Jiangxi         24          27               24                  0.864             13,12,22,28                              0
9       Henan           12          26               16                  0.810             28,12,24,13,30                           0
10      Hunan           13          16               15                  1.000             10                                       2
11      Guangxi         23          19               21                  0.996             12,13                                    0
12      Hainan          29          23               27                  1.000             12                                       10
13      Chongqing       19          10               13                  1.000             13                                       11
14      Sichuan         9           8                9                   0.779             23,22,28                                 0
15      Guizhou         27          28               28                  0.910             5,13,12                                  0
16      Yunnan          20          25               23                  1.000             16                                       0
17      Tibet           31          31               31                  1.000             17                                       0
18      Gansu           25          18               20                  1.000             18                                       1
19      Qinghai         30          30               30                  0.586             21,10,13                                 0
20      Ningxia         28          21               26                  0.677             21,12,13,28                              0
21      Xinjiang        22          22               25                  1.000             21                                       4
22      Beijing         1           1                1                   1.000             22                                       4
23      Shanghai        3           2                2                   1.000             23                                       3
24      Shaanxi         11          5                10                  1.000             24                                       2
25      Tianjin         10          3                6                   1.000             25                                       1
26      Liaoning        7           9                8                   0.800             28,12,13,22,5,25                         0
27      Jiangsu         4           6                4                   0.952             28,23,31,12,13,22                        0
28      Zhejiang        6           7                7                   1.000             28                                       10
29      Shandong        5           13               5                   0.870             28,23,13,12                              0
30      Hubei           8           14               11                  1.000             30                                       1
31      Guangdong       2           4                3                   1.000             31                                       1

Note: The ranks are by the TOPSIS method. The data source is obtained by computing the data of the China Statistical Yearbook, the China Statistical Yearbook on Science and Technology and the WanFang Data Web from 1999 to 2003.

5. Conclusion
Based on the authors' extensive reference to the research of many experts and scholars, a rather reasonable evaluation indicator system of regional S&T competitiveness with thirty-two indicators is established. We argue that comprehensive S&T competitiveness can be regarded as the convergence of two types of competitiveness, gross S&T competitiveness and structural S&T competitiveness. Each kind of

competitiveness is formed from different modules. Each module possesses its own content and characteristics and forms its own kind of indicators, and together they establish the foundation for a comprehensive appraisal of the regional S&T competitiveness of China. We use the TOPSIS method to empirically examine the gross, structural and comprehensive S&T competitiveness of the 31 provinces of China, and then adopt DEA to evaluate the comparative performance of the regional S&T input and output factors. The DEA results indicate that 18 of the 31 provinces are relatively efficient (Jilin, Heilongjiang, Anhui, Fujian, Hunan, Hainan, Chongqing, Yunnan, Tibet, Gansu, Xinjiang, Beijing, Shanghai, Shaanxi, Tianjin, Zhejiang, Hubei and Guangdong), accounting for 58.1 percent of all provinces, while the other 13 provinces (41.9 percent) are inefficient. These conclusions are credible to a certain degree, but one problem cannot be overlooked. The DEA evaluation based on the input-based CCR model, which treats all provinces equally on the assumption that the convexity and cone conditions are fulfilled simultaneously, is not perfect: the case analysis shows that some provinces with very weak S&T competitiveness become powerful competitors of many provinces with strong S&T competitiveness, and such a result, having certain problems, is to some extent unreliable. Hence, the provinces should be classified first, and those with very strong S&T competitiveness may then be evaluated and judged on the basis of that classification.

References
[1] IMD. The World Competitiveness Yearbook 2006. Lausanne: IMD, 2006.
[2] WEF. Global Competitiveness Report 2005-2006. Geneva, Switzerland: World Economic Forum, 2006.
[3] United Nations Development Programme. Human Development Report 2001: Making New Technologies Work for Human Development. Beijing: China Financial & Economics Publishing House, 2001 (in Chinese).
[4] Research Group on Development and Strategy of Science and Technology of China. Annual Report of Regional Innovation Capability of China 2005~2006. Beijing: Science Press, 2006 (in Chinese).
[5] Research Group of Sustainable Development Strategy of CAS. China Sustainable Development Strategy Report 2006. Beijing: Science Press, 2006 (in Chinese).
[6] Science and Technology Information Statistics Center of MOST of China. Statistical and Monitorial Result of S&T Change of China 2006. www.sts.org.cn, 2007 (in Chinese).
[7] Nelson R.R., Wright G. The rise and fall of American technological leadership: the postwar era in historical perspective. Journal of Economic Literature, 1992, 30(4): 1931-1964.
[8] Roessner J.D., Porter A.L., Newman N., Cauffiel D. Anticipating the future high-tech competitiveness of nations: indicators for twenty-eight countries. Technological Forecasting and Social Change, 1996, 51(2): 133-149.
[9] Hicks D., Breitzman T., Olivastro D., Hamilton K. The changing composition of innovative activity in the US: a portrait based on patent analysis. Research Policy, 2001, 30(4): 681-703.
[10] McAleer M., Slottje D. A new measure of innovation: the patent success ratio. Scientometrics, 2005, 63(3): 421-429.
[11] Maspons R., Escorsa P. Flows of knowledge from and to cities: an analysis for Barcelona using patent statistics. Research Evaluation, 2004, 13(2): 104-117.
[12] Bhattacharya S. Mapping inventive activity and technological change through patent analysis: A case study of India and China. Scientometrics, 2004, 61(3): 361-381.
[13] Hwang C.L., Yoon K. Multiple Attribute Decision Making: Methods and Applications. New York: Springer-Verlag, 1981.
[14] Charnes A., Cooper W., Rhodes E. Measuring the efficiency of decision making units. European Journal of Operational Research, 1978, 2(6): 428-444.


Auctioning Total Permitted Pollution Discharge Capacity under a Uniform Price


Zhao Yong, Chen Yang, Wang Qing
Systems Engineering Institute, Huazhong University of Science and Technology, 1047 Luoyu Road, Wuhan, Hubei, 430074, P R China

Abstract A divisible goods auction is examined as a competitive allocation method for total permitted pollution discharge capacity (TPPDC), in which the marginal cost of pollutant treatment is regarded as a polluter's private information. An important and interesting equilibrium result is deduced for the auction under a uniform price with a general and continuous marginal cost function, which helps to improve the credibility and validity of pollutant gross control and environmental planning.
Key words Divisible Goods Auction, Uniform Price, Total Permitted Pollution Discharge Capacity

1. Introduction
As a strategy for environmental management, Pollutant Gross Control (PGC) may prevent the eco-system from worsening within certain limits and help achieve sustainable development. As a limited resource, total permitted pollution discharge capacity (TPPDC) relates directly to every participant (each polluter, each enterprise and the government) and has an important impact on the implementation of PGC and the improvement of social conditions. A reasonable allocation policy for TPPDC should lead to decreased pollutant discharges and a reduction in overall control costs. TPPDC may be classed as a divisible good, whose allocation usually involves complicated private information; its value depends on location, enterprise and pollutant discharge period. Recently, considerable attention has been given to the auction of divisible goods under a uniform price. Back and Zender[6,7] demonstrate how such a uniform-price auction can yield sensible results, consider the strategic difference between unit-demand and divisible goods auctions, and compare uniform-price and discriminatory auctions. This research has motivated further theoretical analysis of the divisible goods auction. Wang and Zender[9] derive an equilibrium bidding strategy for a divisible goods auction involving asymmetrically informed risk-neutral and risk-averse bidders when there is random non-competitive demand. Kremer and Nyborg[10] use a model of fixed-supply divisible-good auctions to study the effect of different rationing rules on the set of equilibrium prices. Damianov[8] then shows that a low-price equilibrium cannot exist in a uniform price auction with endogenous supply if the seller employs a proportional rationing rule and is consistent when selecting among profit-maximizing quantities. At the time of writing, much research[3,5,11] has been undertaken on TPPDC, but little published work is available on allocation by means of a divisible goods auction based on a declared cost function or a quoted price function. The present paper proposes an allocation model that utilizes a uniform-price divisible goods auction aimed at improving the social credibility and validity of TPPDC allocation, in which the marginal cost is regarded as private information belonging solely to each polluter.

2. Problem Statement
Let $G_i$ and $q_i \in [0, \infty)$ $(i = 1, 2, \dots, n)$ denote the actual pollutant discharge capacity and the permitted pollution discharge capacity, respectively, of the $i$th polluter, where $n$ is the total number of polluters. The allocation of the TPPDC, $Q_0$, may be described by

$$\sum_i q_i = Q_0, \qquad 0 \le q_i \le G_i, \quad i = 1, 2, \dots, n.$$

The cost of treating the surplus pollutant $x = G_i - q_i$ of the $i$th polluter is given by the cost function $\theta_i(x)$. Let $g_i(x) = d\theta_i/dx \ge 0$ denote the marginal cost of pollution treatment. Then $dg_i/dq_i \le 0$, which means that the marginal cost of pollution treatment falls as the permitted pollution discharge capacity rises.

For simplicity, it is supposed that the government allocates the TPPDC $Q_0$ under a uniform price $p = g_i(G_i - q_i)$ $(i = 1, 2, \dots, n)$, in which the marginal cost $g_i(x)$ is declared by the $i$th polluter. This polluter must pay $pq_i$ in order to be allocated the permitted pollution discharge capacity $q_i$. The government's decision goal is $\max_p pQ_0$. Further, let $v_i(x)$ denote the actual marginal treatment cost of the $i$th polluter. Then a specific allocation can be described as follows:

$$\text{(D)} \quad \max_p \; pQ_0 \quad \text{s.t.} \quad \sum_i q_i = Q_0; \quad g_i(G_i - q_i) = p, \quad 0 \le q_i \le G_i, \quad i = 1, 2, \dots, n.$$

(This work was supported by the National Natural Science Foundation of China under Grant No. 70471077.)
Proposition 1. Suppose the marginal treatment costs declared by the $n$ polluters in Model D are the same, $p = g(x)$. If the equilibrium price is $p^*$ and the equilibrium permitted pollution discharge capacity of the $i$th polluter is $q_i^*$, then $p^*$ tends to zero as $\left.\frac{dg(x)}{dx}\right|_{x = x^*}$ increases.

Proof. Since the declared marginal treatment costs are the same, $x_i = x$ at any price $p$, and so the allocated capacity of the $i$th polluter is $q_i = G_i - x$. Suppose the $j$th polluter deviates from the equilibrium point (by improving $g_j(x)$ in order to gain an increase in $q_j$), leading to a new equilibrium price $p = g(x) > p^*$. Then $j$ and the other polluters ($i \ne j$) will be allocated $q_j(x) = Q_0 - \sum_{i \ne j}(G_i - x)$ and $G_i - x$, respectively. Thus the income of the $j$th polluter will be

$$U_j = \int_{G_j - q_j(x)}^{G_j} \big(v_j(y) - g(x)\big)\,dy.$$

Differentiating,

$$\frac{dU_j}{dx} = \big(v_j(G_j - q_j(x)) - g(x)\big)\frac{dq_j(x)}{dx} - \frac{dg(x)}{dx}\,q_j(x).$$

Setting $\left.\frac{dU_j}{dx}\right|_{x = x^*} = 0$, and noting that $dq_j/dx = n - 1$ and $q_j(x^*) = G_j - x^*$, we obtain

$$(n - 1)\big(v_j(x^*) - p^*\big) - \left.\frac{dg(x)}{dx}\right|_{x = x^*}(G_j - x^*) = 0,$$

that is,

$$\left.\frac{dg(x)}{dx}\right|_{x = x^*} = (n - 1)\,\frac{v_j(x^*) - p^*}{G_j - x^*}. \qquad (2)$$

Equation (2) shows that the $j$th polluter's optimal deviation is to return to its status quo. The equation also indicates that when $\left.\frac{dg(x)}{dx}\right|_{x = x^*}$ is sufficiently large at the equilibrium point of the marginal cost curve $p = g(x)$, the equilibrium price $p^*$ will be sufficiently small, provided $x^*$ remains unchanged (namely $q_j^*$ is constant) and $G_j$, $v_j(x^*)$ and $n$ are all fixed. Furthermore, if $\left.\frac{dg(x)}{dx}\right|_{x = x^*} \ge \max_j (n - 1)\,\frac{v_j(x^*) - p^*}{G_j - x^*}$, then the equilibrium price $p^*$ could be so small as to be zero.

Proposition 1 shows that the government would be unwise to use Model D, because it does not guarantee sufficient fairness and validity of the TPPDC allocation. The main reason is that the equality $\sum_i q_i = Q_0$ precludes any decision by the government; in fact, $\max_p pQ_0$ in Model D is completely determined by the polluters. So the lower the benefit to the government, the larger the polluters' income. Consequently, to improve the government's decisions and limit the scale of false declarations by polluters, the equality restriction should be relaxed to $\sum_i q_i \le Q_0$. To achieve this, the government should announce an upper limit $Q_0$ and determine the actual capacity $Q$ according to the principle of maximizing government income, based on all polluters' specifically declared marginal costs or prices. Not only will the government's decisions be significantly improved, but the credibility and validity of the TPPDC allocation will also be enhanced.

3. An Auction Model
To improve allocation efficiency and obtain the actual cost information from each polluter, we suppose that the allocation problem (N, q, d) satisfies the criteria listed below:
1) The marginal treatment cost $g_i(x)$ declared by the $i$th polluter is invariably less than the actual marginal treatment cost $v_i(x)$, namely $g_i(x) < v_i(x)$. This qualification is required only near the equilibrium point.
2) For each $i$, $dg_i(x)/dx \ge 0$ and $dv_i(x)/dx \ge 0$. In other words, the marginal costs increase with the pollution treatment quantity $x$; for instance $g_i(x) = a_i x + b_i$, where $a_i \ge 0$ and $b_i \ge 0$ represent the variable and the fixed cost coefficient, respectively. This corresponds to sewage treatment, in which a higher level of pollution treatment entails a larger marginal treatment cost or greater difficulty in achieving the treatment target.
3) The government's goal is to maximize its income $\sum_i pq_i = pQ$ raised from the TPPDC allocation by choosing a specific total capacity $Q \le Q_0$ and a uniform price $p > 0$. Hence,

$$\text{(E)} \quad \max \; pQ \quad \text{s.t.} \quad \sum_i q_i = Q \le Q_0; \quad g_i(G_i - q_i) = p, \quad 0 \le q_i \le G_i, \quad i = 1, 2, \dots, n.$$

4) The goal of the $i$th polluter is to maximize its income $\int_{G_i - q_i}^{G_i} v_i(x)\,dx - pq_i$ by declaring a smart marginal cost or price $g_i(x)$.
5) There is no co-operation among the polluters.
Conditions 1)-5) may be taken to describe an auction model of completely divisible goods under a uniform price; a numerical sketch of the market clearing in Model E is given below.
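The following minimal Python sketch illustrates how Model E clears under the linear declared marginal costs $g_i(x) = a_i x + b_i$ of condition 2). The parameter values, the price grid, and the $Q = \min(\sum_i q_i, Q_0)$ shortcut are our own illustrative assumptions, not part of the paper.

```python
# Minimal sketch of market clearing in Model E, assuming linear declared
# marginal costs g_i(x) = a_i*x + b_i; all numbers are illustrative only.
import numpy as np

a = np.array([2.0, 1.5, 3.0])    # declared variable cost coefficients a_i
b = np.array([1.0, 0.5, 2.0])    # declared fixed cost coefficients b_i
G = np.array([10.0, 12.0, 8.0])  # pollutant discharge capacities G_i
Q0 = 20.0                        # announced upper limit on total capacity

def allocation(p):
    """q_i demanded at uniform price p: solve g_i(G_i - q_i) = p,
    i.e. q_i = G_i - (p - b_i)/a_i, truncated to [0, G_i]."""
    return np.clip(G - (p - b) / a, 0.0, G)

# Government: choose p > 0 maximizing p*Q with Q = min(sum_i q_i(p), Q0).
prices = np.linspace(0.01, float((a * G + b).max()), 2000)
incomes = [p * min(float(allocation(p).sum()), Q0) for p in prices]
p_star = prices[int(np.argmax(incomes))]
print("p* = %.3f, Q = %.3f" % (p_star, min(float(allocation(p_star).sum()), Q0)))
```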
The following conclusions may be drawn regarding Model E.

Proposition 2. In Model E, the government's optimal decision will be $Q^* = Q_0$.

Proof. Suppose the equilibrium price is $p^*$ and the equilibrium total capacity chosen by the government is $Q^*$ with $Q^* < Q_0$. If there exists a polluter $i$ who can profit from a deviation from the equilibrium while the deviation also yields a profit for the government, $(p^* - \varepsilon)(Q^* + \delta) \ge p^* Q^*$ (where $\varepsilon > 0$, $\delta > 0$), then the players (comprising the government and the polluters) can reach their equilibrium only when $Q^* = Q_0$.

[Fig.1 Sketch map of the neighborhood $O(G_i - q_i^*, p^*)$: near the point $(G_i - q_i^*, p^*)$ the declared marginal cost curve $g$ lies below the actual marginal cost curve $v_i$.]

Assume the actual marginal cost of the $i$th polluter satisfies $v_i(G_i - q_i^*) > g_i(G_i - q_i^*) = p^*$ at the equilibrium point. Then there must exist a neighborhood $O(G_i - q_i^*, p^*)$ of that point in which $y < v_i(x)$ for $(x, y) \in O(G_i - q_i^*, p^*)$, as illustrated in Fig.1.

The $i$th polluter makes a deviation $g_i'$ such that $g_i'(G_i - q_i^*) = g_i(G_i - q_i^*) - \varepsilon = p^* - \varepsilon$, where $\varepsilon > 0$ is sufficiently small and keeps the point $(G_i - q_i^*, p^* - \varepsilon)$ within the neighborhood $O$. Owing to the existence of the other polluters, if the government chooses the total capacity $Q^* + \delta$, then the price $p$ will satisfy $p \ge p^* - \varepsilon$. So the government's income satisfies $p(Q^* + \delta) \ge (p^* - \varepsilon)(Q^* + \delta) \ge p^* Q^*$; in other words, the government makes a profit. Meanwhile, the capacity $q_i$ allocated to the $i$th polluter satisfies $q_i \ge q_i^*$ under $g_i'$ because $p \le p^*$. So its income will be

$$\int_{G_i - q_i}^{G_i} v_i(x)\,dx - pq_i = \Big[\int_{G_i - q_i}^{G_i - q_i^*} v_i(x)\,dx - p(q_i - q_i^*)\Big] + \Big[\int_{G_i - q_i^*}^{G_i} v_i(x)\,dx - pq_i^*\Big] \ge \int_{G_i - q_i^*}^{G_i} v_i(x)\,dx - pq_i^* \ge \int_{G_i - q_i^*}^{G_i} v_i(x)\,dx - p^* q_i^*, \qquad (3)$$

which means the $i$th polluter also profits by its smart deviation. Thus, by accumulating the players' profits, the former equilibrium capacity $Q^*$ chosen by the government must increase until $Q^* = Q_0$.

Proposition 2 shows that although the government maximizes its income by choosing $Q$ according to the polluters' specifically declared costs or prices, the optimal decision of the government is still $Q^* = Q_0$ because of the income-maximization principle. Compared with Model D, the measure of not fixing the total actual allocated capacity $Q$ can be seen as a governmental threat strategy towards the polluters, which can stop the reverse competition of quoted prices among polluters in Model D, as proved by Proposition 3.
Proposition 3. In the final equilibrium of Model E, the optimal price $p^*$ obtained by the government satisfies

$$\frac{1}{n}\Big(\sum_i v_i(x_i^*) - \max_k v_k(x_k^*)\Big) \;\le\; p^* \;\le\; \frac{1}{n}\Big(\sum_i v_i(x_i^*) - \min_k v_k(x_k^*)\Big).$$

Proof. The allocated capacities $q_1, q_2, \dots, q_n$ may be regarded as functions of the uniform price $p$, where $q_i = Q - \sum_{j \ne i} q_j$ and $dq_i/dp \le 0$ $(i = 1, 2, \dots, n)$.

First, according to the first-order optimality condition and Proposition 2, the government's goal $\max_p pQ$ must satisfy at $p^*$:

$$\left.\frac{dQ}{dp}\right|_{p = p^*} = -\frac{Q_0}{p^*}. \qquad (4)$$

Then the $i$th polluter's goal $\max_p \int_{G_i - Q + \sum_{j \ne i} q_j}^{G_i} \big(v_i(x) - p\big)\,dx$ must also satisfy at $p^*$:

$$-q_i^* - \big(v_i(x_i^*) - p^*\big)\sum_{j \ne i}\left.\frac{dq_j}{dp}\right|_{p = p^*} = 0.$$

Namely, using $\sum_{j \ne i} dq_j/dp = dQ/dp - dq_i/dp$ together with $dq_i/dp \le 0$ and $v_i(x_i^*) - p^* \le \max_k\big(v_k(x_k^*) - p^*\big)$,

$$q_i^* = -\big(v_i(x_i^*) - p^*\big)\left.\frac{dQ}{dp}\right|_{p = p^*} + \big(v_i(x_i^*) - p^*\big)\left.\frac{dq_i}{dp}\right|_{p = p^*} \;\ge\; -\big(v_i(x_i^*) - p^*\big)\left.\frac{dQ}{dp}\right|_{p = p^*} + \max_k\big(v_k(x_k^*) - p^*\big)\left.\frac{dq_i}{dp}\right|_{p = p^*}. \qquad (5)$$

Taking the sum of both sides of (5) for $i$ from 1 to $n$, and noting that $\sum_i dq_i/dp = dQ/dp$ and $\sum_i q_i^* = Q_0$,

$$Q_0 \;\ge\; -\Big(\sum_i \big(v_i(x_i^*) - p^*\big) - \max_k\big(v_k(x_k^*) - p^*\big)\Big)\left.\frac{dQ}{dp}\right|_{p = p^*}. \qquad (6)$$

From Equations (4) and (6),

$$Q_0 \;\ge\; \Big(\sum_i \big(v_i(x_i^*) - p^*\big) - \max_k\big(v_k(x_k^*) - p^*\big)\Big)\frac{Q_0}{p^*},$$

which yields $p^* \ge \frac{1}{n}\big(\sum_i v_i(x_i^*) - \max_k v_k(x_k^*)\big)$. Similarly, $p^* \le \frac{1}{n}\big(\sum_i v_i(x_i^*) - \min_k v_k(x_k^*)\big)$ follows when $v_i(x_i^*) - p^*$ is bounded below by $\min_k\big(v_k(x_k^*) - p^*\big)$ in Equation (5).

Proposition 3 shows that Model E can guarantee a base price of the allocation or auction for the government, and that the government can obtain almost the actual average marginal treatment cost of the polluters, provided $n$ is sufficiently large. In contrast, Model D may induce very low quoted prices.

4. Conclusions
This paper examines a method for modeling a divisible goods auction under a uniform price based on a continuous marginal cost function, which may be applied to the allocation of total permitted pollution discharge capacity. By assuming that the polluters are symmetric, such that their marginal treatment cost functions $v(x)$ and pollutant discharge capacities $G$ are identical, it has been shown that the government obtains an equilibrium price $p^* = \frac{n-1}{n}\,v(x^*)$. This price is in accordance with, but more universal in applicability than, the equilibrium price derived by Back[7] and Damianov[8], who used deductions from quoted price lists to determine only the lower bound of the equilibrium price $p^*$ for symmetric bidders. The equilibrium price obtained herein may be applied to the practical allocation and auction of many divisible goods or resources, such as network bandwidth, land, and electric power.
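As a consistency check (our own worked step under the symmetry assumption stated above, not taken verbatim from the paper), substituting identical actual marginal costs $v_i(x_i^*) = v(x^*)$ for all $i$ into the bounds of Proposition 3 collapses both sides to a single value:

$$\frac{1}{n}\Big(\sum_{i=1}^{n} v_i(x_i^*) - \max_k v_k(x_k^*)\Big) = \frac{n\,v(x^*) - v(x^*)}{n} = \frac{n-1}{n}\,v(x^*) = \frac{1}{n}\Big(\sum_{i=1}^{n} v_i(x_i^*) - \min_k v_k(x_k^*)\Big),$$

hence $p^* = \frac{n-1}{n}\,v(x^*)$, in agreement with the equilibrium price quoted above.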

References
[1] Zhang W. Game Theory and Communication Economics. Shanghai: Sanlian Publishing Company, 1996.
[2] Ma Z., Dudek D. Pollutant Gross Control and Pollutant Discharge Right Trade. Beijing: Chinese Environment Science Publishing Company, 1999.
[3] Wang X., Xiao W., Hu Z. Two Methods and Efficiencies Comparing of Discharge Right Initial Allocating. Natural Science Development, 2004, No.1.
[4] Demougin D., Fluet C. Monitoring Versus Incentives. European Economic Review, 2001, Vol.45: 1741-1764.
[5] ReVelle C. Research Challenges in Environmental Management. European Journal of Operational Research, 2000, Vol.121: 218-232.
[6] Back K., Zender J.F. Auctions of Divisible Goods: on the Rationale for the Treasury Experiment. Review of Financial Studies, 1993, Vol.6: 733-764.
[7] Back K., Zender J.F. Auctions of Divisible Goods with Endogenous Supply. Economics Letters, 2001, Vol.73: 29-34.
[8] Damianov D.S. The Uniform Price Auction with Endogenous Supply. Economics Letters, 2005, Vol.77: 101-112.
[9] Wang J.J.D., Zender J.F. Auctioning Divisible Goods. Economic Theory, 2002, Vol.19: 673-705.
[10] Kremer I., Nyborg K. Divisible-Good Auctions: the Role of Allocation Rules. RAND Journal of Economics, 2004, Vol.35: 147-159.


Application of the Fuzzy Simulation in the Evaluation Investment of Advanced Manufacturing Technology
Zhao Zhenwu1, Tang Wansheng2
1 School of safety, Civil Aviation University of China, Tianjin 300300, P.R.China 2 School of Management, Tianjin University, Tianjin 300072, P.R.China

Abstract In this paper, a model for the evaluation of investments in advanced manufacturing technology is developed. The model combines three dimensions: financial criteria, strategic criteria and risk criteria. The financial criterion is addressed using fuzzy net present value analysis, and fuzzy variables are used to replace the uncertain variables in the strategic and risk criteria. The analytic hierarchy process and fuzzy set theory are applied to build the model, and fuzzy simulation is then employed to solve the evaluation model. The three resulting values are compared with the efficient plane method. In this way, the uncertain variables arising in the evaluation process can be handled well, and the result is more rational. Finally, the model is utilized to assess three advanced manufacturing technology investment projects.
Key words Advanced manufacturing technology, Fuzzy variable, Fuzzy simulation, Analytic hierarchy process

1 Introduction
Advanced Manufacturing Technology (AMT) is a modern method of production incorporating highly automated and sophisticated computerized design and operational systems[1]. AMT aims at manufacturing high quality products at low cost within the shortest delivery time, and is typically reflected in high-precision and sophisticated automated manufacturing operations. The cost of an AMT investment project is therefore high, and such a project carries a big risk and a long payback period. Appraising the investment project correctly and effectively and making a scientific investment decision are the prerequisite and key to obtaining a good investment benefit, so the method for evaluating investment in AMT is very important. Many researchers, e.g., Lefley[2,3] and Abdel-Kader and Dugdale[4], have put forward different factors for the evaluation of investment in AMT, but the use of precise values does not reflect the qualitative and subjective nature of many factors. There are many fuzzy factors in an AMT project, such as production flexibility, product quality, management level and product demand fluctuation[5]. The model is designed to permit estimated fuzzy values and to provide a consistent method of accounting for these factors. Many researchers have employed the analytic hierarchy process (AHP) to make AMT investment decisions[6-8], but traditional AHP expresses uncertain variables with precise values, which does not fit reality[9]. Wilhelm and Parsaei[10] developed fuzzy AHP to deal with fuzzy factors, but the deconvolution of fuzzy numbers arises when fuzzy subtraction and division operations are used in this kind of method. To overcome this shortcoming, an AMT investment model based on the fuzzy simulation method and hierarchical structure analysis is developed here. The model is designed to permit fuzzy variables and is handled with fuzzy simulation technology, so the evaluation result for investment in AMT is more objective. The remainder of this paper is organized as follows. The next section provides some basic concepts about fuzzy variables and the fuzzy simulation technique. This is followed by the proposed model based on fuzzy simulation. The proposed method is then employed to evaluate investment projects, illustrating that the designed method is easily implemented and effective. Finally, some conclusions are offered.

2 Fuzzy variable and fuzzy simulation


In this section, we first review some basic concepts and results on fuzzy variables, and then introduce the fuzzy simulation technique.
2.1 Fuzzy Variable
Let $\Theta$ be a nonempty set, $P(\Theta)$ the power set of $\Theta$, and Pos a function from $P(\Theta)$ to the set of real numbers.

Supported by National Natural Science Foundation of China(No. 70171004) and Scientific Research Fund of CAUC(No. 06qd01s)


Nahmias [11] and Liu [12] provided the following axioms:
Axiom 1. $\mathrm{Pos}\{\Theta\} = 1$.
Axiom 2. $\mathrm{Pos}\{\emptyset\} = 0$.
Axiom 3. $\mathrm{Pos}\{\bigcup_i A_i\} = \sup_i \mathrm{Pos}\{A_i\}$, for any collection $\{A_i\}$ in $P(\Theta)$.
Axiom 4. Let $\Theta_i$ be nonempty sets on which $\mathrm{Pos}_i\{\cdot\}$ $(i = 1, 2, \dots, n)$ satisfy the first three axioms, respectively, and let $\Theta = \Theta_1 \times \Theta_2 \times \dots \times \Theta_n$. Then

$$\mathrm{Pos}\{A\} = \sup_{(\theta_1, \theta_2, \dots, \theta_n) \in A} \mathrm{Pos}_1\{\theta_1\} \wedge \mathrm{Pos}_2\{\theta_2\} \wedge \dots \wedge \mathrm{Pos}_n\{\theta_n\}$$

for each $A \in P(\Theta)$. In that case, we write $\mathrm{Pos} = \mathrm{Pos}_1 \wedge \mathrm{Pos}_2 \wedge \dots \wedge \mathrm{Pos}_n$.

Definition 1 (Liu and Liu [13]). Let $\Theta$ be a nonempty set and $P(\Theta)$ the power set of $\Theta$. Then Pos is called a possibility measure if it satisfies the first three axioms. Furthermore, the triplet $(\Theta, P(\Theta), \mathrm{Pos})$ is called a possibility space.

Definition 2 (Nahmias [11]). A fuzzy variable is defined as a function from the possibility space $(\Theta, P(\Theta), \mathrm{Pos})$ to $R$. Let $\xi$ be a fuzzy variable on the possibility space $(\Theta, P(\Theta), \mathrm{Pos})$. Then its membership function may be derived from the possibility measure by $\mu(r) = \mathrm{Pos}\{\theta \in \Theta \mid \xi(\theta) = r\}$, $r \in R$.

Definition 3 (Liu and Liu [13]). Let $\xi$ be a fuzzy variable on the possibility space $(\Theta, P(\Theta), \mathrm{Pos})$. Then the necessity measure and the credibility measure of $\{\xi \ge r\}$ are defined respectively by

$$\mathrm{Nec}\{\xi \ge r\} = 1 - \sup_{u < r} \mu(u), \qquad Cr\{\xi \ge r\} = \frac{1}{2}\big[\mathrm{Pos}\{\xi \ge r\} + \mathrm{Nec}\{\xi \ge r\}\big],$$

where $\mu(u)$ is the membership function of $\xi$.

Definition 4 (Liu and Liu [13]). Let $\xi$ be a fuzzy variable on the possibility space $(\Theta, P(\Theta), \mathrm{Pos})$. Then the expected value $E[\xi]$ is defined as

$$E[\xi] = \int_0^{+\infty} Cr\{\xi \ge r\}\,dr - \int_{-\infty}^{0} Cr\{\xi \le r\}\,dr,$$

provided that at least one of the two integrals is finite.

Proposition 1 (Liu [14]). The vector $\xi = (\xi_1, \xi_2, \dots, \xi_n)$ is a fuzzy vector if and only if $\xi_1, \xi_2, \dots, \xi_n$ are fuzzy variables.

Definition 5 (Liu [14]). Let $f: R^n \to R$ be a function, and let $\xi_1, \xi_2, \dots, \xi_n$ be fuzzy variables on the possibility space $(\Theta, P(\Theta), \mathrm{Pos})$. Then $\xi = f(\xi_1, \xi_2, \dots, \xi_n)$ is a fuzzy variable defined as $\xi(\theta) = f(\xi_1(\theta), \xi_2(\theta), \dots, \xi_n(\theta))$ for any $\theta \in \Theta$.
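As a concrete special case (a standard consequence of Definition 4, stated here for orientation; it is not derived explicitly in this paper), the expected value of a triangular fuzzy variable $\xi = (r_1, r_2, r_3)$ has the closed form

$$E[\xi] = \frac{r_1 + 2r_2 + r_3}{4}.$$

For example, the fuzzy discount rate $(0.07, 0.08, 0.09)$ used in Section 4 has expected value $(0.07 + 2 \times 0.08 + 0.09)/4 = 0.08$.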

2.2 Fuzzy Simulation
Generally, it is difficult to calculate the expected values of fuzzy variables with generic membership functions. In what follows, we introduce a fuzzy simulation technique for estimating the expected values of fuzzy variables (see [13], [14]). Let $f: R^n \to R$ be a function, and let $\xi = (\xi_1, \xi_2, \dots, \xi_n)$ be a fuzzy vector defined on the possibility space $(\Theta, P(\Theta), \mathrm{Pos})$. Then $f(\xi)$ is also a fuzzy variable, whose expected value is

$$E[f(\xi)] = \int_0^{+\infty} Cr\{f(\xi) \ge r\}\,dr - \int_{-\infty}^{0} Cr\{f(\xi) \le r\}\,dr. \qquad (1)$$

The fuzzy simulation method for estimating $E[f(\xi)]$ is described as follows:
1) Set $e = 0$.
2) Randomly generate $\theta_k$ from $\Theta$ such that $\mathrm{Pos}\{\theta_k\} \ge \varepsilon$ for $k = 1, 2, \dots, N$, where $\varepsilon$ is a sufficiently small number.
3) Set $v_k = \mathrm{Pos}\{\theta_k\}$.
4) Set $a = f(\xi(\theta_1)) \wedge \dots \wedge f(\xi(\theta_N))$ and $b = f(\xi(\theta_1)) \vee \dots \vee f(\xi(\theta_N))$.
5) Randomly generate $r$ from $[a, b]$.
6) If $r \ge 0$, then $e \leftarrow e + Cr\{f(\xi) \ge r\}$, where $Cr\{f(\xi) \ge r\} = \frac{1}{2}\big(\max_{1 \le k \le N}\{v_k \mid f(\xi(\theta_k)) \ge r\} + \min_{1 \le k \le N}\{1 - v_k \mid f(\xi(\theta_k)) < r\}\big)$.
7) If $r < 0$, then $e \leftarrow e - Cr\{f(\xi) \le r\}$, where $Cr\{f(\xi) \le r\} = \frac{1}{2}\big(\max_{1 \le k \le N}\{v_k \mid f(\xi(\theta_k)) \le r\} + \min_{1 \le k \le N}\{1 - v_k \mid f(\xi(\theta_k)) > r\}\big)$.
8) Repeat steps 5) to 7) $N$ times.
9) Compute $E[f(\xi)] = a \vee 0 + b \wedge 0 + e \cdot (b - a)/N$.
10) Return $E[f(\xi)]$.
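To make the procedure concrete, here is a minimal Python sketch of steps 1)-10) for triangular fuzzy variables. The sampling scheme, the acceptance threshold and the example at the end are our own illustrative assumptions, not prescriptions from the paper.

```python
# Minimal sketch of the fuzzy simulation for E[f(xi)], assuming each xi_i is a
# triangular fuzzy variable (l, m, u); sampling choices are illustrative only.
import random

def tri_membership(x, l, m, u):
    """Membership of x in the triangular fuzzy number (l, m, u)."""
    if l <= x <= m:
        return 1.0 if m == l else (x - l) / (m - l)
    if m < x <= u:
        return 1.0 if u == m else (u - x) / (u - m)
    return 0.0

def fuzzy_expected_value(f, tris, N=5000, eps=1e-4):
    samples, vs = [], []
    while len(samples) < N:                       # steps 2)-3)
        pt = [random.uniform(l, u) for (l, m, u) in tris]
        v = min(tri_membership(x, *t) for x, t in zip(pt, tris))  # min-norm Pos
        if v >= eps:
            samples.append(f(pt)); vs.append(v)
    a, b = min(samples), max(samples)             # step 4)
    e = 0.0
    for _ in range(N):                            # steps 5)-8)
        r = random.uniform(a, b)
        if r >= 0:
            pos = max((v for s, v in zip(samples, vs) if s >= r), default=0.0)
            nec = 1 - max((v for s, v in zip(samples, vs) if s < r), default=0.0)
            e += 0.5 * (pos + nec)
        else:
            pos = max((v for s, v in zip(samples, vs) if s <= r), default=0.0)
            nec = 1 - max((v for s, v in zip(samples, vs) if s > r), default=0.0)
            e -= 0.5 * (pos + nec)
    return max(a, 0) + min(b, 0) + e * (b - a) / N  # step 9)

# Example: expected sum of two triangular fuzzy cash flows.
print(fuzzy_expected_value(lambda x: x[0] + x[1],
                           [(0.21, 0.23, 0.25), (0.25, 0.26, 0.29)]))
```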

3 Assessment model for AMT investment decision


Many researchers, e.g., Lefley, Abdel-Kader and Dugdale, have proposed assessment factors for AMT investment decision making. In this paper these factors are grouped into three respects: financial criteria, strategic criteria and risk criteria. Fig.1 shows the framework.
[Fig.1 A model for AMT investment decision making. AMT investment projects are assessed along three dimensions: financial criteria (NPV, IRR); strategic criteria (product quality, process flexibility, customer requirements, product research, management level, manufacture period); and risk criteria (volatility of demand, life of market, capability of technology, reliability of supplier, experience in the company).]

3.1 Financial criteria
Suppose NPV denotes the financial criterion. The traditional NPV formula for evaluating a project is

$$NPV = \sum_{t=1}^{n} \frac{x_t}{(1+r)^t} - I_0,$$

where $x_t$ is the net cash flow arising at the end of year $t$, $I_0$ is the initial investment at time 0, $r$ is the discount rate, and $n$ is the project's life. These variables are crisp numbers. In an AMT investment project, however, it is more rational to represent these variables by triangular fuzzy numbers. The fuzzy net present value ($FNPV$) is then calculated as follows:

$$FNPV = \sum_{t=1}^{n} \frac{x_t}{(1+r)^t} - I_0, \qquad (2)$$

where $x_t$ is the fuzzy net cash flow arising at the end of year $t$, $I_0$ is the fuzzy initial investment at time 0, and $r$ is the fuzzy discount rate. Since $x_t$, $I_0$ and $r$ are fuzzy variables, the expected value of $FNPV$ is

$$E[FNPV] = E\left[\sum_{t=1}^{n} \frac{x_t}{(1+r)^t} - I_0\right]. \qquad (3)$$

As fuzzy variables appear in both the numerator and the denominator, the expected value $E[FNPV]$ is estimated by the fuzzy simulation technique introduced in formula (1).

3.2 Strategic criteria
The strategic criteria include product quality, production flexibility, customer requirements, product research, management level and manufacture period. The weights and performance measures of these factors are qualitative variables, expressed with linguistic variables. Accordingly, the fuzzy weights of the linguistic variables are

$$W = \{VI, I, MI, MU, U\} = \{(0.7,1.0,1.0), (0.5,0.7,1.0), (0.2,0.5,0.8), (0,0.3,0.5), (0,0,0.3)\},$$

as shown in Fig.2, and the fuzzy performances of the linguistic variables are

$$Ls = \{VG, G, F, P, VP\} = \{(0.8,1.0,1.0), (0.6,0.8,1.0), (0.3,0.5,0.7), (0,0.2,0.4), (0,0,0.2)\},$$

as shown in Fig.3.



Fig.2 Membership function of the weight

Fig.3 Membership function of the performance

The importance weight of the $i$th factor in the strategic criteria is denoted $W_i$, which belongs to $W$. Suppose $A_k$ denotes alternative $k$ $(k = 1, 2, \dots, m)$ and $Lsa_{ki}$ denotes the fuzzy linguistic rating of alternative $A_k$ with respect to the $i$th strategic factor; $Lsa_{ki}$ belongs to $Ls$. Then the fuzzy strategic criteria measure of alternative $A_k$, denoted $Fsma_k$, can be computed as follows:

$$Fsma_k = \frac{1}{n}\sum_{i=1}^{n} W_i \, Lsa_{ki}. \qquad (4)$$

Since $W_i$ and $Lsa_{ki}$ are fuzzy variables, the expected value of $Fsma_k$ is

$$E[Fsma_k] = E\left[\frac{1}{n}\sum_{i=1}^{n} W_i \, Lsa_{ki}\right]. \qquad (5)$$

As fuzzy variables appear in the multiplication, the expected value $E[Fsma_k]$ is estimated by the fuzzy simulation technique introduced in formula (1).

3.3 Risk criteria
The risk criteria include volatility of demand, life of market, capability of technology, reliability of supplier and experience in the company. The weights and performance measures of these factors are qualitative variables, so they are expressed with linguistic variables. The importance weight of the $j$th factor in the risk criteria is denoted $Wr_j$, which belongs to $W$. Suppose $A_k$ denotes alternative $k$ $(k = 1, 2, \dots, m)$ and $Ra_{kj}$ denotes the fuzzy linguistic rating of alternative $A_k$ with respect to the $j$th risk factor; $Ra_{kj}$ belongs to $Ls$. Then the fuzzy risk criteria measure of alternative $A_k$, denoted $Frma_k$, can be computed as follows:

$$Frma_k = \frac{1}{s}\sum_{j=1}^{s} Wr_j \, Ra_{kj}. \qquad (6)$$

Since $Wr_j$ and $Ra_{kj}$ are fuzzy variables, the expected value of $Frma_k$ is

$$E[Frma_k] = E\left[\frac{1}{s}\sum_{j=1}^{s} Wr_j \, Ra_{kj}\right]. \qquad (7)$$

As fuzzy variables appear in the multiplication, the expected value $E[Frma_k]$ is estimated by the fuzzy simulation technique introduced in formula (1).

3.4 Selecting the best project
Using the above steps, three values are produced for each AMT project. Higher values of the financial criterion or the strategic criterion are better; for the risk criterion, a lower value is better, as it means less associated risk. In principle it would be possible to combine the three major dimensions of financial, strategic and risk criteria into a single project score; however, this combination would mean a significant loss of information[15]. Thus, the efficient plane suggested by Accola (1994) is used. The selection of an alternative from the set of alternatives on the efficient plane depends on the relative preferences of the decision maker(s) for the financial, strategic and risk criteria. Any alternative located on the efficient plane dominates the alternatives below the plane. As shown in Fig.4, the alternatives on the efficient plane bounded by the four alternatives A, B, C and D are better than alternatives E, F and G. (A minimal sketch of the dominance check is given after Fig.4.)

Fig.4 The efficient plane for selecting AMT investment projects
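The dominance rule on the efficient plane can be checked mechanically. Below is a minimal Python sketch we add for illustration; the tuples are (financial, strategic, risk) expected values, and the project names and numbers are invented, not the paper's alternatives.

```python
# Dominance on the three criteria: higher financial, higher strategic,
# lower risk is better; values below are illustrative only.
def dominates(p, q):
    """p dominates q: no worse on all criteria (risk reversed) and p != q."""
    no_worse = p[0] >= q[0] and p[1] >= q[1] and p[2] <= q[2]
    return no_worse and p != q

candidates = {"P1": (0.45, 0.50, 0.20), "P2": (0.30, 0.40, 0.25),
              "P3": (0.35, 0.35, 0.30)}
nondominated = [k for k, v in candidates.items()
                if not any(dominates(w, v) for w in candidates.values())]
print(nondominated)  # ['P1'] here: P1 dominates both other projects
```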

4. An illustrative example
In this section, the fuzzy simulation technique presented in the previous section is demonstrated via a numerical example. Suppose that three AMT alternatives are evaluated. The investment period ($t$) is 6 years and the fuzzy discount rate ($r$) is $(0.07, 0.08, 0.09)$. The initial investments ($I_0$) and net cash flows ($x_t$) at the end of each year $t$ up to the planning horizon are given in Tab.1.

Tab.1 Fuzzy values of I_0 and x_t (10^6 yuan)

      Alternative 1       Alternative 2       Alternative 3
I0    (0.6,0.8,0.9)       (0.7,0.9,1.1)       (0.7,0.8,0.9)
x1    (0.21,0.23,0.25)    (0.19,0.21,0.23)    (0.20,0.23,0.24)
x2    (0.25,0.26,0.29)    (0.24,0.25,0.28)    (0.24,0.26,0.27)
x3    (0.31,0.35,0.38)    (0.33,0.34,0.36)    (0.32,0.33,0.34)
x4    (0.27,0.28,0.29)    (0.23,0.24,0.26)    (0.25,0.26,0.27)
x5    (0.23,0.24,0.25)    (0.19,0.20,0.22)    (0.20,0.21,0.22)
x6    (0.22,0.23,0.24)    (0.18,0.19,0.20)    (0.17,0.18,0.21)

According to (3), the expected values of $FNPV$ are computed by the fuzzy simulation technique introduced in formula (1), with $N = 5000$ runs. The program is written in Microsoft Visual C++ 6.0 and run on a PC. The results, sorted by expected value in descending order, are shown in Tab.2.
Tab.2 Expected values of FNPV

alternative    expected value / 10^6 yuan
…              0.459
…              0.232
…              0.333

The weight of the importance of each strategic factor is determined qualitatively, and its equivalent triangular fuzzy number is shown in Tab.3, together with the performance of each alternative with respect to each strategic factor.

Tab.3 Weight and performance of each strategic factor

factor                   weight           Alternative 1    Alternative 2    Alternative 3
product quality          (0.5,0.7,1.0)    (0.7,1.0,1.0)    (0.5,0.7,1.0)    (0.3,0.5,0.7)
production flexibility   (0.6,0.8,1.0)    (0.8,1.0,1.0)    (0.6,0.8,1.0)    (0.8,1.0,1.0)
customer requirements    (0.6,0.8,1.0)    (0.6,0.8,1.0)    (0.3,0.5,0.7)    (0.3,0.5,0.7)
product research         (0.2,0.5,0.8)    (0.2,0.5,0.8)    (0.2,0.5,0.8)    (0.6,0.8,1.0)
management level         (0.3,0.5,0.7)    (0.6,0.8,1.0)    (0.3,0.5,0.7)    (0,0.2,0.4)
manufacture period       (0.8,1.0,1.0)    (0.3,0.5,0.7)    (0.3,0.5,0.7)    (0.6,0.8,1.0)


According to (5), the expected values of $Fsm$ are computed by the fuzzy simulation technique, again with $N = 5000$ runs. The results, sorted by expected value in descending order, are shown in Tab.4.

Tab.4 Expected values of Fsm

alternative    expected value
…              0.492
…              0.448
…              0.372

The weight of the importance of each risk factor is determined qualitatively, and its equivalent triangular fuzzy number is shown in Tab.5, together with the performance of each alternative with respect to each risk factor.

Tab.5 Weight and performance of each risk factor

factor                      weight           Alternative 1    Alternative 2    Alternative 3
volatility of demand        (0.5,0.7,1.0)    (0,0.2,0.4)      (0.3,0.5,0.7)    (0.6,0.8,1.0)
life of market              (0.2,0.5,0.8)    (0.3,0.5,0.7)    (0,0.2,0.4)      (0,0.2,0.4)
capability of technology    (0.5,0.7,1.0)    (0,0,0.2)        (0.3,0.5,0.7)    (0.3,0.5,0.7)
reliability of supplier     (0.2,0.5,0.8)    (0.3,0.5,0.7)    (0,0.2,0.4)      (0.3,0.5,0.7)
experience in the company   (0.2,0.5,0.8)    (0.3,0.5,0.7)    (0.3,0.5,0.7)    (0,0.2,0.4)

According to (7), the expected values of $Frm$ are computed by the fuzzy simulation technique, again with $N = 5000$ runs. The results, sorted by expected value in ascending order (lower risk first), are shown in Tab.6.

Tab.6 Expected values of Frm

alternative    expected value
…              0.211
…              0.256
…              0.301

Then the expected values of each criterion are used to develop the efficient plane shown in Fig.4; the efficient plane bounded by the four alternatives A, B, C and D depends on the decision maker's preferences. It should be noted that one project dominates the other two alternatives: it provides the highest expected value of the financial criterion, the largest expected value of the strategic criterion and the lowest risk. That project is therefore the one worth investing in.

Fig.4 Three-dimension value of three investment projects


5. Conclusion
In this paper a model for AMT investment decision making is developed. The model is built with the analytic hierarchy process based on fuzzy simulation, in which the weights of the criteria and the performance measures of the alternatives are all represented by fuzzy variables. In this way, the evaluation process accords better with actual conditions. An example shows that the proposed method obtains sound evaluation results and can be implemented easily.

References
[1] Hunt V.D. Dictionary of Advanced Manufacturing Technology. New York: Elsevier, 1987.
[2] Lefley F. Capital investment appraisal of advanced manufacturing technology. International Journal of Production Research, 1994, 32(12): 2751-2776.
[3] Lefley F. Strategic methodologies of investment appraisal of AMT projects: a review and synthesis. The Engineering Economist, 1996, 41(4): 345-363.
[4] Abdel-Kader M.G., Dugdale D. Evaluating investments in advanced manufacturing technology: a fuzzy set theory approach. British Accounting Review, 2001, 33(4): 455-489.
[5] Wilhelm M., Parsaei H. A fuzzy linguistic approach to implementing a strategy for computer integrated manufacturing. Fuzzy Sets and Systems, 1991, 42(2): 191-204.
[6] Accola W. Assessing risk and uncertainty in new technology investments. Accounting Horizons, 1994, 8(3): 19-35.
[7] Parsaei H., Wilhelm M. A justification methodology for automated manufacturing technologies. Computers and Industrial Engineering, 1989, 16(3): 363-373.
[8] Boucher T., MacStravic E. Multiattribute evaluation within a present value framework and its relation to the analytic hierarchy process. The Engineering Economist, 1991, 37(1): 1-32.
[9] Cheng C.H., Yang K.L., Hwang C.L. Evaluating attack helicopters by AHP based on linguistic variable weight. European Journal of Operational Research, 1999, 116(2): 423-435.
[10] Wilhelm M.R., Parsaei H.R. A fuzzy linguistic approach to implementing a strategy for computer integrated manufacturing. Fuzzy Sets and Systems, 1991, 42(2): 191-204.
[11] Nahmias S. Fuzzy variables. Fuzzy Sets and Systems, 1978, 1: 97-101.
[12] Liu B. Uncertainty Theory: An Introduction to its Axiomatic Foundations. Berlin: Springer-Verlag, 2004.
[13] Liu B., Liu Y. Expected value of fuzzy variable and fuzzy expected value models. IEEE Transactions on Fuzzy Systems, 2002, 10(4): 445-450.
[14] Liu B. Theory and Practice of Uncertain Programming. Heidelberg: Physica-Verlag, 2002.
[15] Accola W. Assessing risk and uncertainty in new technology investments. Accounting Horizons, 1994, 8(3): 19-35.


Survey on Rough Set Theory Based on Connection Degree


Zhou Xianzhong, Li Huaxiong, Huang Bing, Yang Pei
School of Management and Engineering, Nanjing University, Nanjing, P.R.China, 210093

Abstract Rough set theory is a relatively new soft computing tool to deal with vagueness and uncertainty, and it has received much attention from researchers around the world. In recent years, rough set theory based on connection degree has become a focus subject in rough set research. The basic concepts and models of rough set theory are introduced, and the extended rough set model based on connection degree is analyzed in detail. In the end, a method to determine the value of the connection degree is proposed.
Key words Rough Set, Connection Degree, Set Pair, Variable Precision, Fuzzy Set Pair

1 Introduction
Rough set theory was proposed by Zdzislaw Pawlak in the early 1980s[1,2]. It is now widely conceived as a mathematical tool to deal with vague and uncertain information systems, and it has become an available method in decision theory and applied research. Unlike other theories proposed to deal with vagueness and uncertainty, such as fuzzy theory[3,4], Dempster-Shafer theory[5,6] and Bayesian theory, rough set theory does not need any preliminary or additional information about the data, i.e., rough set theory is objective[7]. Therefore, in recent years, rough set theory has become a focus subject, has proved very useful in practice, and appears to be of fundamental importance to artificial intelligence, cognitive science and data mining[8]. Classical rough set theory mainly concerns complete systems, in which all the attribute values of each object are known; but most information systems in practice are incomplete, i.e., attribute values of objects may be unknown (missing, null). To deal with incomplete information systems, Kryszkiewicz, Stefanowski and Tsoukias proposed new rough set models in which the equivalence relation is extended to a tolerance relation and a similarity relation[9~11], and Wang proposed a new rough set model based on a limited tolerance relation[12]. Relevant methods for incomplete information systems have subsequently been developed. Based on these researches, Huang Bin and Zhou Xianzhong proposed a new rough set model based on connection degree (set pair)[13~15], a notion introduced by Zhao Keqin[16,17]. Researches on rough sets based on connection degree have since flourished[18~26], and this has become a focus field of rough set theory. In this paper, we take an overview of research on rough set theory based on connection degree.

2 Basic Concept of Rough Set


First, let us introduce the basic concepts of rough sets. The starting point of rough-set-based data analysis is a data set called an information system. Denote the information system as $S = \langle U, A, V, f\rangle$, where $U$ and $A$ are finite, nonempty sets called the universe and the set of attributes, respectively. With every attribute $a \in A$ we associate a set $V_a$ of its values, called the domain of $a$, and $V = \bigcup_a V_a$; in addition, $f$ denotes the mapping $U \times A \to V$. Any subset $B$ of $A$ determines a binary relation $R_B$ on $U$, called an indiscernibility relation and defined as follows: $(x, y) \in R_B$ iff $a(x) = a(y)$ for every $a \in B$, where $a(x)$ denotes the value of attribute $a$ for element $x$. Obviously $R_B$ is an equivalence relation. The family of all equivalence classes of $R_B$, i.e., the partition determined by $B$, will be denoted by $U/R_B$, or simply $U/B$. An equivalence class of $R_B$, i.e., the block of the partition $U/B$ containing $x$, will be denoted by $R_B(x)$ or simply $B(x)$. If $(x, y) \in R_B$, we will say that $x$ and $y$ are $B$-indiscernible (indiscernible with respect to $B$). If we distinguish in an information system two disjoint classes of attributes, called condition and decision attributes respectively, then the system will be called a decision system, denoted by $S = \langle U, A, C \cup D, f\rangle$,

(This research has been supported by National Natural Science Funds of China (No. 70571032), China Postdoctoral Science Foundation (No. 20060390916), and Jiangsu Planned Projects for Postdoctoral Research Funds (No. 0601019C).)


where $C$ and $D$ are disjoint sets of condition and decision attributes, respectively.

Suppose we are given an information system $S = \langle U, A, V, f\rangle$, $X \subseteq U$, and $B \subseteq A$. The task is to describe the set $X$ in terms of attribute values from $B$. To this purpose, two operations assign to every $X \subseteq U$ two sets $\underline{B}(X)$ and $\overline{B}(X)$, called the $B$-lower and $B$-upper approximations of $X$, respectively, defined as follows:

$$\underline{B}(X) = \{x \in U \mid R_B(x) \subseteq X\}, \qquad \overline{B}(X) = \{x \in U \mid R_B(x) \cap X \ne \emptyset\}. \qquad (1)$$

Hence, the $B$-lower approximation of a set $X$ is the union of all elements whose classes are included in $X$, whereas its $B$-upper approximation is the union of all elements whose classes have a nonempty intersection with $X$. The set

$$BN_B(X) = \overline{B}(X) - \underline{B}(X) \qquad (2)$$

will be referred to as the $B$-boundary region of $X$. If the boundary region of $X$ is the empty set, i.e., $BN_B(X) = \emptyset$, then $X$ is exact with respect to $B$; in the opposite case, i.e., if $BN_B(X) \ne \emptyset$, then $X$ is rough (inexact) with respect to $B$.
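For illustration (our own toy example, not part of the original survey), the following Python sketch computes the lower and upper approximations of equations (1) and (2) for a small complete information table:

```python
# Minimal sketch of the classical lower/upper approximations of Eqs. (1)-(2)
# on a toy complete information table; the data values are illustrative only.
table = {"x1": (1, 0), "x2": (1, 0), "x3": (0, 1), "x4": (1, 1), "x5": (0, 1)}
X = {"x1", "x3", "x4"}  # the target concept, a subset of the universe

def R_B(x):
    """Indiscernibility class R_B(x): objects agreeing with x on all attributes in B."""
    return {y for y in table if table[y] == table[x]}

lower = {x for x in table if R_B(x) <= X}   # B-lower approximation
upper = {x for x in table if R_B(x) & X}    # B-upper approximation
boundary = upper - lower                    # B-boundary region
print(sorted(lower), sorted(upper), sorted(boundary))
```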

3 Extension of Rough Set Model under Incomplete Information Systems


Classical rough set theory mainly concerns complete systems. In practice, however, most information systems are incomplete, i.e., attribute values of objects may be unknown (missing, null), and classical rough set theory fails in this case. To deal with incomplete information systems, several new rough set models have been proposed.
3.1 Rough set model based on tolerance relation
The rough set model based on a tolerance relation was proposed by Kryszkiewicz[9,10]. In this model, the equivalence relation $R_B$ is relaxed to a tolerance relation $T_B$, defined as follows:

$$T_B(x) = \{y \in U \mid \forall b \in B,\ b(x) = b(y) \text{ or } b(x) = * \text{ or } b(y) = *\}, \qquad (3)$$

where $*$ denotes missing or null values. Based on this tolerance relation, the lower and upper approximations can be defined as follows:

$$\underline{T}_B(X) = \{x \mid x \in U,\ T_B(x) \subseteq X\}, \qquad \overline{T}_B(X) = \{x \mid x \in U,\ T_B(x) \cap X \ne \emptyset\}. \qquad (4)$$

According to the definition of the tolerance relation, a null value is treated as equal to every possible value, which may classify two objects with different attribute values into one class; the tolerance relation is therefore too relaxed. To solve this problem, Stefanowski proposed a dissymmetrical similarity relation.
3.2 Rough set model based on dissymmetrical similarity relation
The rough set model based on a dissymmetrical similarity relation was proposed by Stefanowski[11]. In this model, a dissymmetrical similarity relation $S_B$ is introduced, defined as follows:

$$S_B(x) = \{y \mid \forall b \in B,\ b(x) = * \text{ or } b(x) = b(y)\}. \qquad (5)$$

Based on this dissymmetrical similarity relation, the similarity sets $R_B(x)$ and $R_B^{-1}(x)$ are defined by $R_B(x) = \{y \mid y \in U \text{ and } x \in S_B(y)\}$ and $R_B^{-1}(x) = \{y \mid y \in U \text{ and } y \in S_B(x)\}$; in general $R_B(x) \ne R_B^{-1}(x)$. According to these definitions, the lower approximation $\underline{X}_B^S$ and the upper approximation $\overline{X}_B^S$ can be defined by:

$$\underline{X}_B^S = \{x \mid x \in U \text{ and } R_B^{-1}(x) \subseteq X\}, \qquad (6)$$
$$\overline{X}_B^S = \{y \mid y \in R_B(x),\ x \in X\}. \qquad (7)$$

By comparing the dissymmetrical similarity relation with the tolerance relation, Stefanowski proved that the lower approximation $\underline{X}_B^S$ and the upper approximation $\overline{X}_B^S$ based on the dissymmetrical similarity relation improve on the lower approximation $\underline{T}_B(X)$ and the upper approximation $\overline{T}_B(X)$ based on the tolerance relation, i.e., $\underline{T}_B(X) \subseteq \underline{X}_B^S$ and $\overline{X}_B^S \subseteq \overline{T}_B(X)$.

Owing to the dissymmetry of the similarity relation, two objects that belong to the same class may be classified into different classes; that is to say, the dissymmetrical similarity relation is too rigorous. To overcome this defect, Wang Guoyin proposed a new rough set model based on a limited tolerance relation.
3.3 Rough set model based on limited tolerance relation
Since the dissymmetrical similarity relation is too rigorous, a new rough set model based on a limited tolerance relation was proposed by Wang Guoyin[12]. In this model, the limited tolerance relation is defined as follows. Suppose $S = \langle U, A, V, f\rangle$, $B \subseteq A$, and let $P_B(x) = \{b \mid b \in B \text{ and } b(x) \ne *\}$; define

$$L_B(x) = \{y \mid \forall b \in B,\ (b(x) = b(y) = *) \text{ or } ((P_B(x) \cap P_B(y) \ne \emptyset) \text{ and } \forall b\,((b(x) \ne * \text{ and } b(y) \ne *) \Rightarrow b(x) = b(y)))\}. \qquad (8)$$

According to the definition of $L_B(x)$, the lower approximation $\underline{L}_B(X)$ and the upper approximation $\overline{L}_B(X)$ can be defined by:

$$\underline{L}_B(X) = \{x \mid x \in U,\ L_B(x) \subseteq X\}, \qquad (9)$$
$$\overline{L}_B(X) = \{x \mid x \in U,\ L_B(x) \cap X \ne \emptyset\}. \qquad (10)$$

It can be proved that the rough set model based on the limited tolerance relation improves on the rough set models based on the tolerance relation and on the dissymmetrical similarity relation[12], i.e., $\underline{T}_B(X) \subseteq \underline{L}_B(X) \subseteq \underline{X}_B^S$ and $\overline{X}_B^S \subseteq \overline{L}_B(X) \subseteq \overline{T}_B(X)$.

4 Rough Set Model Based on Connection Degree


The rough set model based on connection degree is a new rough set model proposed by Huang and Zhou in recent years. In this model, thresholds are introduced so that the tolerance class of $x$ can be adjusted by the value of a threshold, and the lower and upper approximations can be adjusted accordingly; one can thus control the results to some extent through the threshold, which is consistent with the notion of human-machine interaction. The main idea of this rough set model is based on set pair theory, which was proposed by Zhao. The basic concept of set pair theory can be interpreted as follows.
4.1 Set pair theory
The theory of set pair analysis, proposed by the Chinese scholar Keqin Zhao in 1989, is an extension of set theory for the study of intelligent systems characterized by uncertain and incomplete information. Briefly, it is used to study the correlation between two sets; it connects the objective certainty relation with the objective uncertainty relation and analyzes them together as one uncertain system. The basic concept of set pair theory can be interpreted as follows[26]. Let $A$ and $B$ be two given sets, and denote the set pair made up of the two sets by $H = (A, B)$. Under some specific background $W$, the set pair $H$ has $n$ features, of which $s$ features are mutual for $A$ and $B$, $p$ features are opposite for $A$ and $B$, and $f$ features are neither mutual nor opposite for $A$ and $B$. Define the following ratios:
$s/n$: the identity degree of $A$ and $B$ under background $W$;
$f/n$: the discrepancy degree of $A$ and $B$ under background $W$;

$p/n$: the contrary degree of $A$ and $B$ under background $W$.
Let

$$u_W(A, B) = \frac{s}{n} + \frac{f}{n}\,i + \frac{p}{n}\,j$$

represent the relation of $A$ and $B$, where $i$ is the coefficient of the discrepancy degree and $j$ is specified as $-1$; $u$ is called the connection degree of $A$ and $B$ under the background $W$. Simply denote

$$u(A, B) = a + bi + cj, \qquad (11)$$

where $a = s/n$, $b = f/n$, $c = p/n$. Obviously $0 \le a, b, c \le 1$ and $a + b + c = 1$.

4.2 Rough set model based on connection degree
In the rough set model based on connection degree, the set pair is introduced into the tolerance relation. The main concept of this idea can be interpreted as follows[13~15]. For an incomplete information system $S = \langle U, A, V, f\rangle$ and $B \subseteq A$, suppose $|B| = n$; for $x \in U$, let

$$m(x) = |\{a \in B \mid a(x) \ne *\}|, \qquad U' = \{x \in U \mid m(x) \ge n\lambda_1\},$$

where $0 \le \lambda_1 \le 1$. $U'$ is the set of objects whose number of known attribute values is no less than $n\lambda_1$, so we obtain a new information system $S' = \langle U', B, V, f\rangle$. Let $0 < \lambda_2 < 1$; for the information system $S'$, define a set-valued function $P_B^{\lambda_2}: U' \to P(U')$ as:

$$P_B^{\lambda_2}(x) = \{y \in U' \mid u(x, y) = a + bi,\ a + b = 1,\ a \ge \lambda_2\}, \qquad (12)$$

where $a$ and $b$ denote the identity degree and the discrepancy degree of $x$ and $y$, respectively. Obviously, $P_B^{\lambda_2}(x)$ defines a binary relation from $U'$ to $U'$ by setting

$$R^{\lambda_2} = \{(x, y) \in U' \times U' \mid y \in P_B^{\lambda_2}(x)\}. \qquad (13)$$

If $(x, y) \in R^{\lambda_2}$, it means that the ratio of the mutual attributes of $x$ and $y$ to all attributes is no less than $\lambda_2$ and that there are no contrary attribute values between $x$ and $y$. Accordingly, the lower approximation $\underline{R}^{\lambda_2}(X)$ and the upper approximation $\overline{R}^{\lambda_2}(X)$ can be defined by:

$$\underline{R}^{\lambda_2}(X) = \{x \mid x \in U',\ R^{\lambda_2}(x) \subseteq X\}, \qquad (14)$$
$$\overline{R}^{\lambda_2}(X) = \{x \mid x \in U',\ R^{\lambda_2}(x) \cap X \ne \emptyset\}. \qquad (15)$$

It can be proved that the rough set model based on connection degree is an improvement of the rough set model based on the limited tolerance relation, i.e., $\underline{L}_B(X) \subseteq \underline{R}^{\lambda_2}(X)$ and $\overline{R}^{\lambda_2}(X) \subseteq \overline{L}_B(X)$. Moreover, one can control the tolerance class to some extent through the thresholds $\lambda_1$ and $\lambda_2$, which is consistent with the notion of human-machine interaction; a small sketch of the connection degree computation follows.
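To illustrate equations (11) and (12) concretely, here is a minimal Python sketch (our own illustration; the toy table, the use of None for the unknown value '*', and the threshold value are assumptions, not taken from the cited papers):

```python
# Minimal sketch of the connection degree u(x,y) = a + bi + cj of Eq. (11)
# and the class P_B^{lambda2}(x) of Eq. (12); data are illustrative only.
STAR = None  # stands for '*' (unknown value)

table = {  # objects -> attribute values over B, with missing entries
    "x1": (1, 0, 2), "x2": (1, STAR, 2), "x3": (0, 1, STAR), "x4": (1, 0, 1),
}

def connection_degree(x, y):
    """Return (a, b, c): identity, discrepancy and contrary degrees."""
    vx, vy = table[x], table[y]
    n = len(vx)
    s = sum(u == v and u is not STAR for u, v in zip(vx, vy))  # identical
    f = sum(u is STAR or v is STAR for u, v in zip(vx, vy))    # unknown
    p = n - s - f                                              # contrary
    return s / n, f / n, p / n

def P(x, lam2):
    """P_B^{lambda2}(x): objects with no contrary values (c = 0) and a >= lambda2."""
    out = set()
    for y in table:
        a, b, c = connection_degree(x, y)
        if c == 0 and a >= lam2:
            out.add(y)
    return out

print(connection_degree("x1", "x2"))  # (2/3, 1/3, 0)
print(P("x1", lam2=0.6))              # {'x1', 'x2'} under these assumptions
```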

4.3 Rough set model based on α-identical degree similarity relation
In the rough set model based on the α-identical degree similarity relation[23], the identical degree is introduced so that two objects are classified into one element set only if they satisfy the common similarity relation and their identical degree exceeds (or equals) the threshold α. The α-identical degree similarity relation in this rough set model is defined as follows. Suppose $S = \langle U, A, V, f\rangle$, $B \subseteq A$; define the α-identical degree similarity relation $SIM_\alpha(B)$ as:

$$SIM_\alpha(B) = \{(x, y) \in U \times U \mid \forall a \in B,\ a(x) = a(y) \text{ or } a(x) = * \text{ or } a(y) = *,\ \text{and } T_B(x, y)/|B| \ge \alpha\}, \qquad (16)$$

where

$$T_B(x, y) = \begin{cases} |\{a \in B \mid a(x) = a(y),\ a(x) \ne *\}|, & x \ne y, \\ |B|, & x = y, \end{cases}$$

and denote the element set of $x$ as:

$$B_\alpha(x) = \{y \in U \mid (x, y) \in SIM_\alpha(B)\}. \qquad (17)$$

According to the definition of the element set $B_\alpha(x)$, the lower approximation $\underline{apr}_\alpha(X)$ and the upper approximation $\overline{apr}_\alpha(X)$ can be defined respectively by:

$$\underline{apr}_\alpha(X) = \{x \in U \mid B_\alpha(x) \subseteq X\}, \qquad (18)$$
$$\overline{apr}_\alpha(X) = \{x \in U \mid B_\alpha(x) \cap X \ne \emptyset\}. \qquad (19)$$

4.4 Variable precision rough set model based on set pair
The variable precision rough set model is an extension of the classic rough set model; it introduces a precision threshold $\beta$ by which a certain extent of classification error is permitted. By combining the variable precision rough set model with set pair theory, Liu Fuchun proposed a new rough set model[20,21]. In this rough set model, a concept of relative classification error rate is introduced, so that the element set of $x$ can be adjusted by both the connection degree $\lambda$ and the precision degree $\beta$. The relation in this model is defined similarly to that of the rough set model based on connection degree. Suppose $S = \langle U, A, V, f\rangle$, $B \subseteq A$; define the element set of $x$ as:

$$S_B^\lambda(x) = \{y \in U \mid u(x, y) = a + bi + cj,\ a + b \ge \lambda\}, \qquad (20)$$

and the lower approximation $\underline{R}_\beta(X)$ and the upper approximation $\overline{R}_\beta(X)$ respectively by:

$$\underline{R}_\beta(X) = \{x \in U \mid c(S_B^\lambda(x), X) \le \beta\}, \qquad (21)$$
$$\overline{R}_\beta(X) = \{x \in U \mid c(S_B^\lambda(x), X) < 1 - \beta\}, \qquad (22)$$

where

$$c(S_B^\lambda(x), X) = \begin{cases} 0, & \text{if } |S_B^\lambda(x)| = 0, \\ 1 - |S_B^\lambda(x) \cap X|\,/\,|S_B^\lambda(x)|, & \text{if } |S_B^\lambda(x)| \ne 0. \end{cases} \qquad (23)$$

The variable precision rough set model based on set pair is an improvement of the rough set model based on connection degree, which may make it possible to find relationships between seemingly irrelevant data.
4.5 Fuzzy set pair rough set model
According to the notions of truth-membership grade and false-membership grade in vague sets[27], the fuzzy set pair rough set model is proposed by combining fuzzy theory with the rough set model based on connection degree. The main idea of this rough set model can be interpreted as follows[26]. Let $U$ and $W$ be two finite nonempty universes and $R$ a binary fuzzy relation from $U$ to $W$. Define the mapping $F: U \to F(W)$ by $F(x)(y) = R(x, y)$, $(x, y) \in U \times W$. Denote $u_F(\{x\}, \{y\}) = a + bi + cj$, where $a$ represents the truth-membership grade of $y$ to $F(x)$, $b$ the uncertainty-membership grade of $y$ to $F(x)$, and $c$ the false-membership grade of $y$ to $F(x)$. Define the $F_\lambda$ relation of $x$ as follows:

$$S_F^\lambda(x) = \{y \mid u_F(x, y) = a + bi + cj,\ y \in W,\ a + b \ge \lambda,\ 0 \le \lambda \le 1\}. \qquad (24)$$

It represents the set of elements in $W$ whose false-membership grade to $F(x)$ is not more than $1 - \lambda$. Based on the definition of $S_F^\lambda(x)$, the lower approximation $\underline{R}_F(X)$ and the upper approximation $\overline{R}_F(X)$ are defined respectively by $\underline{R}_F(X) = \{x \mid S_F^\lambda(x) \subseteq X\}$ and $\overline{R}_F(X) = \{x \mid S_F^\lambda(x) \cap X \ne \emptyset\}$, and accordingly the lower and upper fuzzy operators $\underline{R}_F(A)$ and $\overline{R}_F(A)$ can be defined by:

$$\underline{R}_F(A)(x) = \bigwedge_{y \in S_F^\lambda(x)} A(y), \qquad (25)$$
$$\overline{R}_F(A)(x) = \bigvee_{y \in S_F^\lambda(x)} A(y). \qquad (26)$$

As fuzzy set theory is introduced, this rough set model can be made more consistent with human subjective thought, which is significant in practice.

5 A Method to Determine λ

In the rough set models introduced above, it is necessary to determine the value of λ, which requires prior knowledge; in practice, how to set the value of λ is a difficult and important problem. To solve this problem, we propose here a tentative method based on the relative positive region to determine the value of the identity degree λ. The main process of this method can be described as follows. Firstly, compute the relative positive region $POS_C^{\lambda}(D)$ for each λ, and find the relative positive region $POS_C^{\lambda^*}(D)$ that includes the maximum number of objects. As one knows, $POS_C^{\lambda}(D)$ contains the objects that can be classified into $U/D$ based on the relation $C$; thus $POS_C^{\lambda^*}(D)$ includes the maximum number of objects that can be classified into $U/D$ based on $C$, and therefore the certain rules generated by $POS_C^{\lambda^*}(D)$ are the most extensive. Obviously, there may be more than one value of λ whose corresponding relative positive region includes the maximum number of objects; we denote the set of all such λ by $O^*$. Secondly, within $O^*$, we find the λ for which the neighborhoods of objects based on λ are largest: as is known, larger neighborhoods of objects yield more representative rules. This is a tentative method to determine the connection degree λ (a sketch is given below), and it may be a representative subject for subsequent research.
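As a sketch of the two-step selection just described (our own abstract rendering; pos_region and neighborhood are assumed helpers supplied by whichever rough set model is in use, and are not defined in this paper):

```python
# Tentative two-step lambda selection; helper functions are assumptions:
# pos_region(lam) returns the relative positive region POS_C^lam(D), and
# neighborhood(x, lam) returns the lam-based neighborhood of object x.
def choose_lambda(lambdas, pos_region, neighborhood, objects):
    sizes = {lam: len(pos_region(lam)) for lam in lambdas}
    best = max(sizes.values())
    o_star = [lam for lam in lambdas if sizes[lam] == best]  # step 1: max |POS|
    # step 2: among O*, prefer the lambda giving the largest neighborhoods
    return max(o_star,
               key=lambda lam: sum(len(neighborhood(x, lam)) for x in objects))
```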

6 Conclusion
Classic rough set theory mainly deals with complete information systems, in which all the attribute values of each object are known; in practice, most information systems are incomplete, i.e., attribute values of objects may be unknown (missing, null). To deal with incomplete information systems, many new rough set models have been proposed. Among these new models, the rough set model based on connection degree has received much attention from researchers. In this paper, we discussed a series of rough set models based on connection degree and, in the end, proposed a tentative method to determine the value of λ.

References
[1] Pawlak Z. Rough sets. International Journal of Computer and Information Sciences, 1982, 11: 341-356.
[2] Pawlak Z. Rough Sets: Theoretical Aspects of Reasoning about Data. Kluwer Academic Publishers, 1992.
[3] Zadeh L.A. Fuzzy sets. Information and Control, 1965, 8(3): 338-353.
[4] Zadeh L.A. Fuzzy sets as a basis for a theory of possibility. Fuzzy Sets and Systems, 1978, 1: 3-28.
[5] Dempster A.P. A generalization of Bayesian inference. Journal of the Royal Statistical Society, 1968, 30: 205-247.
[6] Skowron A., Grzymala-Busse J.W. From rough set theory to evidence theory. In: Yager R.R., Fedrizzi M., Kacprzyk J. (eds.) Advances in the Dempster-Shafer Theory of Evidence. New York: Wiley, 1994, 193-236.
[7] Pawlak Z., Grzymala-Busse J., Slowinski R., et al. Rough sets. Communications of the ACM, 1995, 38(11): 89-95.
[8] Liu Q. Rough Sets and Rough Reasoning. Beijing: Science Press, 2001.
[9] Kryszkiewicz M. Rough set approach to incomplete information systems. Information Sciences, 1998, 112: 39-49.
[10] Kryszkiewicz M. Rules in incomplete information systems. Information Sciences, 1999, 113: 271-292.
[11] Stefanowski J., Tsoukias A. On the extension of rough sets under incomplete information. International Journal of Intelligent Systems, 1999, 16(1): 29-38.
[12] Wang G.Y. Extension of rough set under incomplete information systems. Journal of Computer Research and Development, 2002, 39: 1238-1243.
[13] Huang B., Zhou X.Z. Rough set model based on set pair analysis in incomplete information system. Computer Science, 2002, 29: 1-3 (in Chinese).
[14] Huang B., Zhou X.Z. Extension of rough set model based on connection degree under incomplete information systems. Systems Engineering - Theory & Practice, 2004, 1: 88-92 (in Chinese).
[15] Huang B., Zhong B., Zhou X.Z. Improved rough set model based on set pair. Computer Engineering and Applications, 2004, 2: 82-84 (in Chinese).
[16] Zhao K.Q. Set Pair Analysis and Its Preliminary Applications. Hangzhou: Zhejiang Science and Technology Press, 2000 (in Chinese).
[17] Huang D.C., Zhao K.Q., Lu Y.Z., et al. The fundamental operation of arithmetic on connection number a+bi+cj and its applications. Mechanical & Electrical Engineering Magazine, 2000, 17: 81-84 (in Chinese).
[18] Lv D., Wu M.D. The combination of set pair analysis and rough set. Computer Engineering and Applications, 2005, 33: 176-178 (in Chinese).
[19] Zhao X., Liu T.M., Xiang Y.D. Extension of rough set model based on weighted connection degree in incomplete information systems. Computer Applications, 2005, 25(4): 824-826 (in Chinese).
[20] Liu F.C. Variable precision rough set model based on set pair analysis. Computer Engineering and Applications, 2005, 10: 74-76 (in Chinese).
[21] Liu F.C. An algorithm for attributes reduction in variable precision rough set model based on set pair analysis. Computer Engineering and Applications, 2006, 5: 8-10 (in Chinese).
[22] Zhang C.Y., Liu B.X. One direction transfer rough sets model of incomplete information system based on SPA. Computer Engineering, 2006, 32(14): 33-34 (in Chinese).
[23] Zhou H., Wang Q.Y., Fei Y., Yuan F. Rough set models based on α-identical degree similarity relation. Computer Applications, 2006, 26(3): 666-667 (in Chinese).
[24] Chen S.Q., Tang Z.H., Xiao J.H. Algorithm of data mining based on rough sets pair analysis and its application. Computer Applications, 2004, 24(6): 74-77 (in Chinese).
[25] Zhang C.Y., Xu G.L., Liu B.X. The method of set pair analysis based on rough set theory. Journal of Hebei Institute of Technology, 2006, 28(1): 97-100 (in Chinese).
[26] Zhou L., Shu L. Rough set model based on new set pair analysis. Fuzzy Systems and Mathematics, 2006, 20(4): 111-116 (in Chinese).
[27] Gau W.L., Buehrer D.J. Vague sets. IEEE Transactions on Systems, Man and Cybernetics, 1993, 23: 610-614.

472

A Study on the Safety Situation of Chinese Natural Resources and Countermeasures


Zhao Guohao
Shanxi University of Finance and Economics, P.R.China, 030006

Abstract Natural resources safety is an important part of national security, influencing a nation's politics, economy, national defense, society, etc. The overall construction of a well-off society in China puts forward higher requirements for natural resources safety. A key problem of Chinese natural resources safety lies in the deficiency and absence of the management system. The way to deal with the problem is to establish a scientific development view of natural resources, to perfect the natural resources management system, to build up a diversified structure of resources ownership, to set up a scientific and reasonable natural resources price-setting mechanism, to participate actively in the global strategy of natural resources, and thereby to realize the sustainable development of the Chinese social economy.
Key words Natural resources, Resources management, Countermeasure

1. An Analysis of the Safety Situation of Chinese Natural Resources


1.1 The meaning of natural resources safety
Natural resources refer to the main factors and important items of the natural environment that, at a certain time and place, can produce economic value and raise present and future welfare. Natural resources safety refers to the situation in which a country or an area acquires natural resources continually, steadily, sufficiently and economically while keeping the natural resources base and the ecological environment in good condition. The target of natural resources safety is to realize the validity of natural resources exploitation and the equity of natural resources consumption. The validity of exploitation means that the harmonious relationship between the natural resources and their ecological basis ensures valid and continual exploitation, and that no element has a bad impact on exploitation. The equity of natural resources consumption includes equality between generations and equality within the same generation: equality between individuals, between groups and between districts. The basic contradiction is that between the limited supply of natural resources and the ever-increasing material demand of human society.
1.2 The safety situation of Chinese natural resources
In the process of Chinese economic development, there are real or latent safety problems in the supply and use of natural resources. Objectively, the basic situation in China is that the absolute amount of natural resources is large but the per capita amount is small. For the reserves of many important minerals such as iron, copper and aluminum, whether in absolute or relative amount, China no longer ranks among the leading countries; it depends largely on imports and has relatively poor competitive ability in the global mineral resources market. The energy structure is unreasonable, and the gap between supply and demand for high-quality resources such as petroleum and natural gas is increasing year by year. Per capita farmland is small: the average farmland per Chinese was 1.41 mu in 2004, equal to 40% of the world level; per capita water resources are only 2,040 cubic meters, about 1/4 of the world level, and China is one of the 13 countries most short of water; per capita forest area is 1/6 of the world level; per capita grassland area is 1/3 of the world average. The pressure of demand for natural resources increases day by day, and resources consumption in China will keep growing for a certain period in the future.
1.3 The challenge that Chinese natural resources safety faces
The economic development of China's 28 years of reform has gone through three stages: the stage of shortage economy; the stage of surplus economy with emerging resources limitation; and the stage of resources limitation, in which the Chinese economy now finds itself.
This research was supported by the Shanxi Natural Science Foundation, China (Study on Theory and Application of Resources Management Systems Engineering, No. 200611042)


The Chinese economic scale and strength are still strengthening unceasingly; production, consumption and imports are at high levels in the world. But the rapid growth of the economy depends mainly on the over-consumption of water, soil, minerals and other scarce resources and on the destruction of the environment, while the lack of water, soil and mineral resources is the long-term bottleneck restricting the development of the Chinese national economy. The impact of the yearly 8% growth target on resources shows in the data: coal consumption per billion Yuan of GDP is 480,000 tons; oil consumption per billion Yuan of GDP is 55,000 tons; waste water per billion Yuan of GDP is about 2 million tons; unhandled water resource spoilage per billion Yuan of GDP is about 80 million Yuan; total energy consumption per 10,000 Yuan of GDP is 3 times the world average; energy consumption per ton of steel in some steel enterprises is 40% higher than the international level; coal consumption for thermal power generation is 30% higher than the international level; water consumption per 10,000 Yuan of GDP is 5 times the world average; and energy consumption per 10,000 dollars of GDP is 3 times the world average. How can such high resources consumption and such large demand prop up socio-economic growth? The waste and low efficiency in natural resources usage aggravate the Chinese natural resources safety crisis.

2. An Analysis of the Factors that Influence the Chinese Natural Resources Safety
The reasons causing the resources limitation mainly lie in: the rapid growth of population, slow technical progress, problems between macroscopic policy and microscopic mechanism, changes in the world resources market, and inharmonious global resources management. Undoubtedly, as far as natural resources safety is concerned, political, ethical and technological factors play big roles; but the deficiency and vacancy of the system play the decisive role, and this is the deep root of the worsening of Chinese natural resources safety.
2.1 The imperfection of natural resources management
China's current state-owned natural resources management system is based on the traditional economic system and operating mechanism and is characterized by non-ownership management of natural resources; this is not suitable for the development of the socialist market economy. There are many shortcomings. First, too much attention is paid to technical management and relatively little to ownership management. Natural resources management should include both ownership management and technical management; however, judging from the regulation of Chinese state-owned natural resources management, whether in laws or in management practice, much more attention is paid to technical management while ownership management is neglected. Second, decentralized management is emphasized and centralized management is lacking. Different natural resources have different values. Because of the requirement for classified management of state-owned natural resources, many administrative departments have been constituted. This, to some extent, neglects the unity of resources ownership and the value of scarce economic resources, and actually turns national ownership into departmental and local ownership. Third, administrative methods serve as the major management method, with economic methods as auxiliary. Under state ownership of natural resources and administrative management, the government's major method of managing natural resources is administrative, namely the examination and approval system for resources development and the planned rationing system for resources use. Economic methods such as the exploitation fee have been introduced into the reform of the state-owned natural resources management system, but the economic methods remain weak and the economic levers insufficient or absurd. Fourth, the monopoly of ownership is stressed while the fluidity of ownership is neglected. The monopoly of state-owned natural resources prevents state-owned natural resources ownership from participating in the process of merchandise circulation, which has a bad effect on the optimized distribution of state-owned natural resources.

2.2 The flaw in the resources ownership system
The relationship of Chinese natural resources ownership is not clear, and the transaction of resources ownership is not smooth. Insecurity of resources ownership affects the expectations of users and influences the value of the natural resources to the owner and the form of transaction. According to Chinese law, natural resources belong to the state. Because state ownership of natural resources is exercised by the State Council and the departments concerned, resources ownership manifests the national interest and the benefit of the central authorities; yet the basic situation is that the ownership relationship remains unclear. Various departments, areas, associations and individuals carry out plundering mining to capture natural resource exploitation rights, which causes over-consumption of natural resources and serious waste. Meanwhile, the transaction of Chinese resources ownership is underdeveloped: at present only the laws concerning land and mineral resources provide that these resources can be traded conditionally, while the transaction of other resources remains a blank. Moreover, mineral resource transactions are conditioned on the prohibition of profit-seeking transactions; such transaction is only another form of government administrative arrangement or assignment, and real ownership transaction does not exist.
2.3 The imperfection of the natural resources pricing system
Because of the misleading traditional view of natural resources values and the inertia of the long-term planned economy system, there are many flaws in the Chinese natural resources price formation mechanism. First, at the level of the natural resources management system, there is no reasonable institutional foundation for forming natural resources prices. Second, although some Chinese natural resources prices have been liberalized, other natural resources are still priced by the state, which leaves prices inflexible and not reflective of the supply-demand relation of the market. Third, the Chinese natural resources spot market is not very standard and bears a strong planning color, and the natural resources stock market is at the beginning stage and not very mature. Fourth, in the pricing process there is a gap between market price-setting and government price-setting, and the government's regulation of natural resource prices is neither clear nor continuous. Fifth, as far as concrete price-setting methods are concerned, the planning color is strong, and scientific and rational price-setting methods are lacking.
2.4 The lack of a transaction system responding to natural resources globalization
The uneven distribution of world natural resources and the quick development of economic globalization make the globalization of natural resources an irresistible current. While using the international natural resources market to seek development, China's rich natural resources and frail resource industry are also placed on an international natural resources market that is full of risk and hardship.
The character of China's natural resources trade strategy shows as follows: depending passively on imports of natural resources to satisfy needs; depending too much on native natural resources, which leads to losing good opportunities to import; exporting natural resources in the form of low-priced raw materials; and the increase of native demand forcing the government to adjust the trade strategy, which becomes the inevitable outcome of the government's active adjustment of the import and export relationship of natural resources inside and outside the nation.

3. The Countermeasures for Ensuring Chinese Natural Resources Safety


Sustainable economic development cannot neglect environmental limitations and cannot be based on the destruction of natural resources. Natural resources safety is a key factor of sustainable development, which needs a sound system to regulate actual deeds. We should therefore provide adequate encouragement and proper guidance for natural resources exploitation through system innovation, thus keeping the exploitation of natural resources in an orderly and steadily safe situation. The scientific viewpoint of natural resources safety is based on the scientific development viewpoint; its objective lies in rational and healthy development. As far as natural resource development and exploitation is
concerned, its basic requirement lies in the optimization of natural resource exploitation, the effective strengthening of natural resource carrying capacity, and the building of a foundation for sustainable exploitation of natural resources. This kind of exploitation is different from the depredation and usage of natural resources in the name of the "conquest doctrine", and also different from the mere protection of natural resources in the name of the "pessimism doctrine".
3.1 The perfection of the natural resources management system
In actively reforming the current natural resources management system, we should draw lessons from countries with mature market economies that manage natural resources effectively, while taking into consideration China's concrete circumstances and the process and objects of China's market economy system reform, so as to find a natural resources management system suitable for Chinese market economy development. Measures include: setting up a relatively independent resources management department, taking over the supervising and managing functions presently undertaken by various resources enterprises and implemented by resources management departments, and gradually devolving the national resources management function; on this foundation, perfecting the legal system of the resources business as soon as possible; perfecting natural resources research and protection mechanisms suitable for the socialist market economy; building an information system needed for natural resources management, protection and reasonable use; drawing up technical standards for natural resources exploitation; practicing an examination and approval system for natural resources exploitation; adopting positive policies to ensure natural resources safety; building a deposit system for natural resources safety; cultivating and standardizing natural resources exploration and development; and perfecting the optimized distribution of natural resources by combining government management with market operation.
3.2 Building up a diversified natural resources ownership structure
To perfect the resources ownership system is to clarify the relationship and exclusiveness of resources ownership and to make the transaction of resources ownership possible. First, the sole natural resources ownership should be transformed into a diversified resources ownership system. Only by deeply reforming the ownership system and forming many natural resources market corpora with clear ownership can the basic function of the market be fully displayed, the ownership limitation mechanism strengthened, and a weighing mechanism for state ownership established. Second, realizing ownership reorganization. By bringing in foreign capital and joint investment, a resources enterprise brings in not only funds and technology, but is also impelled to transform its management mechanism, thus realizing enterprise ownership reorganization. Third, carrying out the marketization of ownership transactions. For example, in the land market, except transfer under the law, other land should be supplied through sale, renting, tendering and auction.
Fourth, carrying out stock system reform, allowing state-owned resources enterprises and other kinds of resources enterprises to appear, thus solving a series of problems such as funds deficiency, unclear ownership, low concern of officers and workers, and absence of local benefit. Fifth, implementing a resources ownership responsibility system: under the new ownership system, sticking to the principle of combining rights and responsibility, making clear the levels of ownership and the corresponding rights and responsibilities, and signing contracts based on law.
3.3 Building up a reasonable natural resources price-setting mechanism
A reasonable natural resources price is the bridge connecting the domestic and international natural resources markets. If natural resources prices are not reasonable, the Chinese natural resources market will not be truly in line with the international market. The natural resources price should fully reflect the relationship of supply and demand formed under government supervision. The price formation of natural resources in developed countries already has this characteristic. For example, in Canada and the US, the market mechanism plays the leading role in the formation of natural resources prices, and the natural resources price is often decided by the major international market, especially for petroleum and coal. The American petroleum price is based on
the West Texas oil price on the New York commodity exchange. The coal prices of those countries are completely decided by demanders and suppliers according to the international market coal price, without government interference. Although natural resources prices in Canada and America are formed through the market mechanism, the governments also exercise a certain supervision over them: during the two petroleum crises, the governments of both countries supervised and controlled the petroleum price, which lasted until the beginning of the 1980s. The lessons we draw are: first, set up a market-based resources price formation mechanism; second, since the market-based natural resources price cannot completely reflect the social cost of natural resources exploitation, the government must exercise the supervision function, and for scarce natural resources government price-setting should be carried out; third, build up and perfect a natural resources price supervision organization to carry out the supervision, regulation and management of natural resources prices.
3.4 Building up a reasonable international cooperation order for natural resources
China should join international organizations actively, participate in the formulation of globalized rules through negotiation, change the disadvantageous position of China and other countries in natural resources globalization, and promote the reconstruction of a fair and reasonable international cooperation order for natural resources. It should participate, on the basis of mutual benefit, in multilateral and international cooperation on natural resources, get rid of the negative influence of the "China threat theory", guarantee Chinese natural resources safety, perfect the foreign investment law system, and suppress the resources plundering and pollution transfer of developed countries. It should raise environmental standards, perfect the legislation and enforcement of environmental protection and of laws such as the foreign trade law, dodge the impact of international trade on natural resources safety, promote the international trade of natural resources, and encourage local enterprises to participate in overseas exploitation. It should actively implement the trade strategy of natural resources choice to solve the problem of different structures of natural resources, and resolve the contradiction between supply and demand of natural resources in domestic economic growth by handling the import and export relation of domestic and foreign natural resources.


SECTION FOUR
INFORMATION MANAGEMENT AND E-COMMERCE


A Comparison of the Current Domestic and Overseas Situation of Web-Based Survey Research


Cheng Du, Shao Peij, Fang Jiaming
School of Management, University of Electronic Science and Technology of China, P.R.China, 610054

Abstract During the past twenty years, web-based surveys have played an influential role in business enterprises and other organizations. Web-based surveys, adopting emerging information technology, attract extensive attention in both academia and industry, but also encounter many problems. This paper reviews the relevant literature on web-based surveys in China and abroad using the method of bibliometric analysis. Comparing the current domestic and overseas situation of web-based survey research, we classify current web-based survey research into categories by content and identify each category's advantages and disadvantages. Finally, we construct a model of web-based survey research in order to give a clear perspective on the field. Following the model, we obtain a table of classification of contents in web-based survey research and point out the current situation in each class.
Key words Web-based survey, Comparison research, Bibliometric analysis

1. Introduction
Simon Chadwick, formerly president of and now a partner and chief market research supervisor at Cambiar, has conducted a study of the global market research industry covering 50 global market research companies and 43 large-scale market research service users. The results: 25-30% of all survey research will be conducted online by 2010, and today's $1.3 billion online research market will triple to $4 billion by 2008. This increase will come mainly from research companies that possess ready-made customer crowds, from companies that have recently entered the web-based survey field, and from various new web-based survey methods. Meanwhile, the proportion of the general market research budget taken by web-based survey expenses will ascend from 28% in 2004 to 33% in 2006. Evidently web-based surveys have been more and more approved by market research companies. Compared with web-based survey practice, research in this field lags behind, especially in China. The purpose of this paper lies in a comparison of the contents and methods of domestic and overseas web-based survey research, in order to make suggestions that promote domestic web-based survey research.

2. Method
This paper uses bibliometric and comparative research methods. Data are from domestic and overseas databases, including CNKI, VIP Information Database, ELSEVIER, ABI, EBSCO and INSPEC. We classify the information by research methods and contents, and analyze the advantages and disadvantages of present research in order to give some advice.

3. Comparison of the present web-based survey research between domestic and overseas
This part introduces the current situation of foreign and domestic web-based surveys. In view of the fact that related domestic research is comparatively scarce, the point of this part is to introduce the achievements of foreign research. In the next part we compare this information in order to construct the model of web-based survey research and give suggestions for web-based survey research in China.
3.1 Overseas present situation and application of web-based surveys
From the management aspect, overseas mainstream research on web-based surveys may be divided approximately into the following two kinds:
1) Research on the advantages and disadvantages of web-based surveys. This kind of research is the key issue of web-based survey research, and foreign scholars often use an empirical approach (Coderre, 2004; Griffis, 2003; McDonald, 2003; Ilieva, 2002; Cobanoglu, 2001; Elfrink, 2000; Dommeyer, 2000; Couper, 1997; Mehta, 1995; Schuldt, 1994; etc.) [1][2][3][4][5][6][7][8][9][10]. Through carrying out

statistical analysis of the data obtained by survey, they compared aspects of web-based surveys with telephone and mail surveys, such as response rates, response time, response content, response quality and survey cost. The shortcomings of this kind of research: the analysis is carried out merely for web-based surveys in specific fields; the research yields no theory, so research with the same purpose is repetitive and plenty of results are identical; none of the above contributes much to web-based survey theory, and the results hardly give actual guidance on how to implement web-based surveys effectively; and the research is so superficial that it cannot uncover the true reasons that influence response rates.
2) Research on web-based survey response rates. This kind of research is mainly concerned with how to take corresponding measures to raise the response rates of web-based surveys. Some foreign scholars (Deutskens, 2004; Dillman, 2000; Church, 1993; Yammarino, 1991; etc.) [11][12][13][14] put forward four factors that influence web-based survey response rates: incentives, follow-up letters, the length of the questionnaire, and the way questions are presented. Foreign experts in this field (Deutskens, 2004; etc.) have made some empirical studies. This kind of research may be divided into the two following subclasses:
(1) Research on incentives in web-based surveys. In this aspect, the proper application of incentives and the correct time to use them are the key problems that researchers try to solve. The major content of this research is clear enough to be summed up in Fig.1. In Fig.1, M stands for money incentive, O for non-money incentive, R for rear (post-response) incentive, and I for initiative (up-front) incentive. Many studies compare the differences of these four incentive methods in order to find a favorable method or method combination to improve the response rate and the data quality of the survey.
[Fig.1: a 2x2 frame crossing incentive type (M: money; O: non-money) with incentive timing (I: initiative, before response; R: rear, after response), giving the four combinations MI, MR, OI and OR.]
Fig.1 The study frame of the incentive study in web-based survey

The disadvantages of this subclass are: researchers focus on the incentive effect only from the interviewer's perspective; excessive stress is put on response rates rather than response quality; no practical model is constructed; research with identical purposes is repetitive; and how to use incentives correctly at the right time remains an unsolved problem.
(2) Research on factors such as follow-up letters, question expression and questionnaire length, which influence response rates in web-based surveys. The roles of these factors in web-based survey response rates are usually examined within the same research (Bachmann, 2000; Dillman, 1998; etc.) [15][16]. This part of the research adopts a completely empirical approach. The results show that a follow-up letter (5-7 days after the first questionnaire) and a brief questionnaire help to raise response rates. Comley (2000) [17] has even suggested a formula for the response rate according to the number of questions on a survey's first page: response rate = 40% - (8% × the number of questions on the first page); for example, three questions on the first page would give 40% - 24% = 16%. A structured, multi-media questionnaire type also helps to raise response rates. Additionally, foreign scholars think that web-based surveys are more suitable for traditional semi-structured questions, but an empirical study carried out by our team before ("The Effect of Incentive and Sponsorship in Web-Based Surveys: An Empirical Study") has not supported this viewpoint. The problems of this kind of research: experts study only the several factors that in their own opinion could influence web-based surveys, but no one integrates other scholars' achievements to identify the crucial factors for carrying out web-based surveys effectively. A former paper of our team ("An Exploratory Method on Evaluating Feasibility of Web-Based Survey") presents a method of evaluating web-based survey feasibility; 20 variables that influence the success of a web-based survey are confirmed, among which the factor of incentives is not contained. Therefore we need to study further the factors that influence web-based surveys. The numbers in Table 1 are collected from some foreign mainstream databases (keywords: web-based survey / internet-based survey) and list the number of papers on web-based surveys; most of these papers are empirical and quantitative studies. However, many studies share an identical purpose, differing only in research field, and few papers play an actual guidance role for web-based survey study.
Table 1 Paper publication numbers associated with web-based survey in foreign periodical databases
Database     Paper number
ELSEVIER     155
ABI          282
EBSCO        463
INSPEC       176

3.2 Domestic present situation of related research on web-based surveys
Entering the keyword "web-based survey" in CNKI, 46 related papers can be found; in the same way, 8 more related papers can be found in the VIP Information database. We classify these 54 papers into 5 classes in Table 2.
Table 2 Key issues in the web-based survey field*
Research content                                                        Number
Characteristics of web-based survey (advantages and disadvantages)     15
Methods of web-based survey                                             7
Enhancement and improvement of web-based survey (statistical)           12
Developing web-based survey systems (technical)                         4
The application of web-based survey                                     8
* Each paper's content is not single; therefore the total number in Table 2 is larger than 54.

(1) The characteristics of web-based surveys: one kind of paper describes the characteristics of web-based surveys, mostly in comparison with traditional surveys. The merits of web-based surveys concentrate in low cost, speed, objectivity, reliability and freedom from time and region limits; the main limitations are that the representativeness of the sample is hard to guarantee and the background of the sample is hard to know.
(2) Methods of web-based surveys: methods include sending e-mail, using websites, discussing on the net, and tracing the whole process. Domestic scholars have already started to study how to improve the traditional web-based survey, such as the e-mail questionnaire survey method based on network interpersonal relations.
(3) Enhancement and improvement of web-based surveys: enhancement and improvement concentrate on statistical methods, such as reducing sampling error, measurement error and response error; scholars have put forward some improvement measures.
(4) Developing web-based survey systems: the key issue of system design is technology, such as using ASP to develop web-based survey systems.
(5) The application of web-based surveys: the literature has put forward some available plans to suit different businesses, most of which concentrate on the application of web-based surveys in education.
Domestic web-based survey research comes mainly from statistics, studying the choice of web-based survey samples and the control of survey deviation, together with various related researches. Generally speaking, these papers lack scientific study approaches and viewpoints that can be strongly supported by reliable data.

On the same problems, such as the advantages and shortcomings of web-based surveys, there is plenty of repetitive research and little original study. Empirical and quantitative study approaches are blank spots in domestic web-based survey research. Therefore, few papers on web-based survey research can be found in high-level periodicals.

4. Analyses
According to the process of implementing a web-based survey, we may draw the process of web-based survey as Fig.2; through Fig.2 we may also aim web-based survey research more specifically. The explanation of Fig.2 is as follows:
Client: the organization that needs to carry out the web-based survey. Since Client and Investigator may be the same organization, Client is shown with a dotted line. Even if Client and Investigator are the same, how the project division and the operation division communicate effectively is still a worthwhile issue to study.
Investigator: the organization actually operating the web-based survey.
Questionnaire: the survey style used in the web-based survey, such as e-survey, w-survey, pop-up and int-survey.
Respondent: the persons who will answer the Questionnaire.
Feed-forward: the activities that the Investigator takes before getting responses, such as incentives and follow-up letters.
Feed-backward: the activities that the Investigator takes after getting responses.
External factors: the objects not included in the web-based survey, such as other survey approaches.

External factors
Feed-forward

Client

Investigator

Questionnaire

Respondent

Feed-backward

Fig.2 The model of web-based survey research
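One way to make the model concrete is to encode its entities as a small data model. The sketch below is illustrative only; the class and field names are our own assumptions, not part of the original model:

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Questionnaire:
    style: str                      # e.g. "e-survey", "w-survey", "pop up", "int-survey"
    questions: List[str] = field(default_factory=list)

@dataclass
class Investigator:
    name: str
    feed_forward: List[str] = field(default_factory=list)   # e.g. incentives, follow-up letters
    feed_backward: List[str] = field(default_factory=list)  # e.g. extracting information from responses

@dataclass
class WebSurvey:
    client: Optional[str]           # dotted box: may coincide with the investigator
    investigator: Investigator
    questionnaire: Questionnaire
    respondents: List[str] = field(default_factory=list)
    external_factors: List[str] = field(default_factory=list)  # e.g. competing survey modes

# Example instantiation of the model in Fig.2; a None client marks the case
# where Client and Investigator are the same organization.
survey = WebSurvey(
    client=None,
    investigator=Investigator("research team", feed_forward=["incentive", "follow-up letter"]),
    questionnaire=Questionnaire(style="w-survey"),
)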

Based on the web-based survey model above, we classify research according to survey content. This structural thought helps to show which aspect of web-based survey research should be started first, as well as which parts of domestic research have shortcomings that need strengthening. Following this way of thinking, we put forward Table 3.
Table 3 Classification of contents in web-based survey research*
No.  Study field                        Study content (examples)
1    Client and Investigator            How the project division and the operation division communicate effectively
2    Investigator and Questionnaire     How the Investigator transforms the plan into the Questionnaire accurately
3    Questionnaire and Respondent       What style of Questionnaire Respondents like taking part in
4    Investigator and Feed-forward      How to raise response rates
5    Feed-forward and Respondent        What kind of Feed-forward can make Respondents willing to respond
6    Feed-backward and Respondent       How to get useful information from responses
7    Web-based survey and Environment   The advantages and disadvantages of web-based surveys compared to other surveys
8    Investigator                       Subjective factors (such as personality and preference) and objective factors (such as budget, technology and time limit) of the Investigator
9    Client                             Subjective factors (such as personality and preference) and objective factors (such as technology conditions and leisure time limits) of the Client
10   Questionnaire                      The style of the Questionnaire
* Each study field is also rated for overseas and domestic research volume: the number of stars denotes the quantity of papers aiming at this kind of problem published in domestic and overseas periodicals, with 5 stars the highest standard; NR stands for a classification that has not been concerned, or for which the number of published papers is very small.

5. Conclusion
To sum up, although web-based surveys have developed very quickly in the last few years and there is much research about them, the research field is very narrow, and high-level papers, original papers, and research that can have an effect on web-based survey practice are very few. For a method whose applied value is so strong, research achievements could play a great guidance role for practice; therefore, it is urgent to reinforce study in this field. At present there is little or no web-based survey research on problems such as: evaluation of web-based survey performance, the right time to use a web-based survey, verification of the role of incentives used in web-based surveys, assessment of the availability of using web-based surveys in specific projects, and theoretical models of applying web-based surveys. There is no doubt, however, that such research is extremely important for the effective development and application of web-based surveys. These directions need further research, and such studies are both possible and necessary.
References
[1] Coderre F, Mathieu A, St-Laurent N. Comparison of the quality of qualitative data obtained through telephone, postal and email surveys. International Journal of Market Research, 2004, 46(3): 347-358.
[2] Griffis S, Goldsby J T, Cooper M. Web-based and mail surveys: A comparison of research, data, and cost. Journal of Business Logistics, 2003, 24(2): 237-255.
[3] McDonald H, Stewart A. A comparison of online and postal data collection methods in marketing research. Marketing Intelligence & Planning, 2003, 21(2): 85-95.
[4] Ilieva J, Baron S, Healey M N. Online surveys in marketing research: pros and cons. International Journal of Market Research, 2002, 44(3): 361-376.
[5] Cobanoglu C, Warde B, Moreo P J. A comparison of mail, fax and web-based survey methods. International Journal of Market Research, 2001, 43(4): 441-455.
[6] Bachmann D P, Elfrink J, Vazzana G. E-mail and snailmail face off in rematch. Marketing Research, 2000, 11(4): 11-15.
[7] Dommeyer C J, Moriarty E. Comparing two forms of an e-mail survey: Embedded vs. attached. Journal of the Market Research Society, 2000, 42(1): 39-53.
[8] Couper M P. Survey introductions and data quality. Public Opinion Quarterly, 1997, 61(2): 317-339.
[9] Mehta R, Sivadas E. Comparing response rates and response content in mail versus electronic surveys. Journal of the Market Research Society, 1995, 37(4): 429-439.
[10] Schuldt B A, Totten J W. Electronic mail vs. mail survey response rates. Marketing Research, 1994, 6(1): 36-40.
[11] Deutskens E, de Ruyter K, Wetzels M, Oosterveld P. Response rate and response quality of Internet-based surveys: An experimental study. Marketing Letters, 2004, 15(1): 21-37.
[12] Schaefer D R, Dillman D A. Development of a standard e-mail methodology: Results of an experiment. Public Opinion Quarterly, 1998, 62(3): 378-398.
[13] Church A H. Estimating the effects of incentives on mail response rates: A meta-analysis. Public Opinion Quarterly, 1993, 57: 62-79.


[14] Yammarino F J, Skinner S J, Childers T L. Understanding mail survey response behavior: A meta-analysis. Public Opinion Quarterly, 1991, 55(4): 613-640.
[15] Bachmann D P, Elfrink J, Vazzana G. E-mail and snailmail face off in rematch. Marketing Research, 2000, 11(4): 11-15.
[16] Schaefer D R, Dillman D A. Development of a standard e-mail methodology: Results of an experiment. Public Opinion Quarterly, 1998, 62(3): 378-398.
[17] Comley P. Communications: Moderated e-mail groups. Journal of the Market Research Society, 2000, 42(1): 111-113.


Factors Influencing Consumers' Repeated Online Shopping in China: An Empirical Study



Chang Yaping 1, Zhu Donghong2


School of Management, Huazhong University of Science and Technology, P.R.China, 430074

Abstract Those who have experience with online shopping do not repeat the behavior much, and overcoming consumers' barriers to repeated online shopping is very helpful in enlarging the volume of e-commerce. This paper investigates the factors that influence Chinese consumers' repeated online shopping by empirical method. The results show that trade reliability, personal information security, credit standing of the website, payment and delivery, marketing combination, shopping interface and so on are indicators, among which trade reliability, personal information security and credit standing of the website are of greater importance.
Key words E-commerce, Online shopping, Repeat

1. Introduction
According to an investigation by ACNielsen, 63 percent of online people have experience with online shopping (2005) [1]. However, they do not often repeat the behavior: the investigations of CNNIC show that only a quarter of online people often shop online (2006) [2]. During the survey, many consumers talked about their worries about online shopping, which are partly different from those of people who never shop online. In theory, it costs less and is easier to maintain old consumers than to attract new buyers (Reichheld and Schefter, 2000) [3]. Therefore, it is of great significance to find out why those with online shopping experience do not repeat the behavior if we want to enlarge the volume of e-trade. In this paper, we try to answer the following two questions through our survey: 1. What factors influence consumers' repeated online shopping? 2. What is the cognitive sequence of significance of these factors?

2. Literature Review
In recent years, the topic of online behavior has been studied from different angles. On the whole, these studies concentrate mainly on the intention to shop online and the adoption of online shopping, and quite few researches on consumers' repeated online shopping have been found. Bhattacherjee's study (2001a) [4] is one of the very first attempts to explain consumer online repurchasing behavior. His proposed model was formulated on the basis of expectation-confirmation theory (ECT) and postulated satisfaction, confirmation, and loyalty incentives as salient factors affecting consumer online repurchasing. Prior research on consumer online repurchase placed more emphasis on the impact of psychological factors. For instance, considerable attention has been given to the study of trust (Warrington et al., 2000; Lee and Turban, 2000) [5][6] and satisfaction formation (Bhattacherjee, 2001b; Khalifa and Liu, 2001) [7][8] in the context of consumer-based electronic commerce. A few studies, however, have attempted to investigate the impact of product/service characteristics, medium characteristics, and online shop and intermediary characteristics on consumer online repurchasing. For instance, Liang and Lai (2002) [9] explored the impact of web page design. Gefen and Devine (2001) [10] and Dai Lei (2006) [11] investigated the effect of service quality on consumer online purchase continuance. Similarly, Pingjun and Bert (2005) [12] investigated the effect of price perceptions on customers' intention to return.

3. Method
3.1 Sample collection and respondents' characteristics
Online people were the research objects of the study, and all samples were randomly selected from them. We adopted the form of rewarded response, distributing 4,000 questionnaires in all. Finally, 3,218 copies were returned, among which 2,615 were effective, including 920 responded by those having online shopping experience. The study's respondents consisted of more males (55%) than females (45%), with a wide variety of age groups, mostly 18-30 (66%). Accordingly, 66% of the sample were unmarried, and 84% of the subjects had at least some college education.
3.2 Survey variables
A questionnaire was made to collect data from consumers. Before the formal survey, we ascertained the primary influencing factors based on the relevant literature review; apart from this, we also conducted two brainstorming meetings and 12 deep interviews with 12 samples. Finally, we designed the questionnaire based on a pretest covering 120 samples. The questionnaire described 33 factors concerning the online shopping environment and online shops (Tab.1); the former contains variables 1-18 and the latter variables 19-33. All items adopted a 7-point Likert scale, in which 1 means strongly unimportant, 4 means average, and 7 means strongly important. The respondents were asked to judge the significance of these factors according to their own cognizance of online shopping.
Tab.1 Survey variables
Code  Survey variable
X1    Leak risk of bank card information
X2    Leak risk of individual privacy
X3    Worrying that the deal is a cheat
X4    Difficult to judge the product quality
X5    Visual differences between picture and goods
X6    Risk of late delivery
X7    No after-sale service or inefficient after-sale service
X8    No abundant sort of goods as in traditional shopping
X9    Slower speed compared with traditional shopping
X10   Higher cost compared with traditional shopping
X11   Not content with the shopping experience
X12   Not content with the glory feeling of traditional shopping
X13   Difficult to learn how to shop online
X14   Cannot find and obtain the goods fleetly
X15   Complex process of order/payment
X16   Unsafe internet resources
X17   Unfamiliar with internet use
X18   Dislike using computers and the internet
X19   Cannot find satisfactory goods
X20   Slow responses from the sellers
X21   Unsatisfactory price
X22   No attractive promotion
X23   No desirable payment way
X24   No satisfactory delivery
X25   High delivery cost
X26   No beautiful shopping interface
X27   Inconvenient shopping interface
X28   Incomplete information about goods
X29   Unclear information about goods
X30   Slow speed of the website
X31   Uncertain credit standing and popularity of website
X32   No authoritative security identification
X33   Complex balancing procedure

4. Results
4.1 Tests on the Online Shopping Environment
4.1.1 Data test
The survey adopted the Cronbach's Alpha coefficient as the testing standard to observe the internal consistency of each item in the questionnaire. The Reliability Analysis procedure in the Scale module of SPSS 13.0 was used to calculate the Cronbach's Alpha coefficient, which was 0.758, indicating the questionnaire was reliable. The exploratory factor analysis of SPSS 13.0 was used to test validity; the outcome showed that all communalities of the survey variables were above 0.4, so the questionnaire's construct validity met the requirements.
4.1.2 Factor analysis
In order to express the structure of the original questionnaire with fewer variables while maintaining most of the information provided by the original data, we used SPSS 13.0 to analyze the 18 variables by principal components factor analysis. Bartlett's Test of Sphericity: Sig. = 0.000; KMO Measure of Sampling Adequacy = 0.729. The result revealed five factors with eigenvalues of one or higher that explained 60.749% of the cumulative variation. According to the included information, we named these five factors as follows.
Convenience factor mainly refers to whether it is easy to find and use the internet or learn to shop online, and whether the online shopping process is more convenient than traditional shopping. It includes six variables: X13 (difficult to learn how to shop online), X14 (cannot find and obtain the goods fleetly), X15 (complex process of order/payment), X16 (unsafe internet resources), X17 (unfamiliar with the internet), X18 (dislike using computers and the internet).
Trade reliability factor mainly refers to whether the whole online shopping process, such as goods information, quality, exchange, delivery and after-sale service, is reliable. It includes five variables: X3 (worrying that the deal is a cheat), X4 (difficult to judge product quality), X5 (visual differences between picture and goods), X6 (risk of late delivery), X7 (no after-sale service or inefficient after-sale service).
Usefulness factor mainly refers to whether online shopping can satisfy consumers' needs, such as sort of goods, shopping efficiency, cost and shopping feeling, compared with traditional shopping. It includes three variables: X8 (no abundant sort of goods as in traditional shopping), X9 (slower speed compared with traditional shopping), X10 (higher cost compared with traditional shopping).
Shopping feeling factor mainly refers to whether online shopping can satisfy consumers in the aspects of shopping experience and glory feeling compared with traditional shopping; for instance, consumers are content with the shopping experience in a shopping mall, but are they content with online shopping? It includes two variables: X11 (not content with the shopping experience), X12 (not content with the glory feeling of traditional shopping).
Individual information security factor mainly refers to whether it is secure to offer bank card information, individual phone number and address on the net. It includes two variables: X1 (leak risk of bank card information) and X2 (leak risk of individual privacy).
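Both reliability and factor-retention statistics quoted above can be reproduced outside SPSS. The sketch below, using randomly generated stand-in data rather than the survey responses, computes Cronbach's Alpha from its definition and applies the eigenvalue-greater-than-one (Kaiser) rule to the items' correlation matrix:

import numpy as np

def cronbach_alpha(items):
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

def kaiser_retained_factors(items):
    # Eigenvalues of the item correlation matrix, in descending order;
    # factors with eigenvalue > 1 are retained (Kaiser criterion).
    corr = np.corrcoef(np.asarray(items, dtype=float), rowvar=False)
    eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]
    cumulative = eigenvalues.cumsum() / eigenvalues.sum()
    return (eigenvalues > 1).sum(), cumulative

# Stand-in data only: 920 respondents rating 18 items on the 7-point scale.
rng = np.random.default_rng(0)
responses = rng.integers(1, 8, size=(920, 18))
print(cronbach_alpha(responses))
n_factors, cumulative = kaiser_retained_factors(responses)
print(n_factors, cumulative[n_factors - 1])  # factors kept, variation explained

With purely random ratings the alpha comes out near zero; the reported values of 0.758 (shopping environment) and 0.857 (online shops) therefore indicate genuine internal consistency.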
4.1.3 Sequence of influencing degrees of the factors
[Fig.1: a chart, on the 1-7 mark scale, of the mean values of the five factors: individual information security, trade reliability, convenience, usefulness and shopping feeling.]
Fig.1 Mean values of factors

The above five factors can basically explain why many consumers do not repeat their online shopping. However, what is the influencing degree of each factor on consumers? We need to calculate their mean values to sort the factors.

About their mean values, the trade reliability factor ranks first (5.36 points) and personal information security second (5.19 points), with a slight distinction between them. The third, fourth and fifth are respectively usefulness (3.28 points), convenience (2.91 points) and shopping feeling (2.90 points); these three mean values drop below 4 points, and there is only a slight distinction between the convenience and shopping feeling factors. Fig.1 shows the mean value of each factor. The above sequence resulted from sample data, and we used the 2 Related Samples Tests procedure in SPSS 13.0 to further test the rationality of deducing the collectivity order from the sample order. Through the test, the Asymp. Sig. (2-tailed) between trade reliability and personal information security was greater than 0.05 at the 95% confidence level, but those between trade reliability, individual information security and usefulness were both less than 0.05 (Tab.2). The Asymp. Sig. (2-tailed) between shopping feeling and convenience was greater than 0.05, and those between the other factors were all less than 0.05. This means that in influencing degree no difference exists between trade reliability and individual information security, or between shopping feeling and convenience, and the rest of the order is objective.
Tab.2 2 Related Samples Tests on online shopping environment factors
Pair                                                  Asymp. Sig. (2-tailed)
Trade reliability - Individual information security   0.440
Individual information security - Usefulness          0.000
Trade reliability - Usefulness                        0.000
Usefulness - Shopping feeling                         0.002
Shopping feeling - Convenience                        0.788
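The SPSS "2 Related Samples Tests" procedure applied here is, by default, the Wilcoxon signed-rank test. A minimal SciPy equivalent is sketched below; the simulated score arrays stand in for the real per-respondent factor scores, which are not reproduced here:

import numpy as np
from scipy import stats

# Illustrative paired factor scores for 920 respondents; the real study
# compares each respondent's scores on the two factors being tested.
rng = np.random.default_rng(1)
trade_reliability = rng.normal(5.36, 1.0, size=920)
information_security = rng.normal(5.19, 1.0, size=920)

# Paired nonparametric comparison; a p-value above 0.05 (as for the 0.440
# entry in Tab.2) would mean the two factors' influencing degrees cannot
# be distinguished.
statistic, p_value = stats.wilcoxon(trade_reliability, information_security)
print(statistic, p_value)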

4.2 Tests on Online Shops
4.2.1 Data test
Through the calculation, the Cronbach's Alpha coefficient was 0.857, indicating the questionnaire was reliable. The outcome of the exploratory factor analysis of SPSS 13.0 showed that all communalities of the survey variables were above 0.4, so the questionnaire's construct validity met the requirements.
4.2.2 Factor analysis
We analyzed the 15 variables and obtained principal components through factor analysis. Bartlett's Test of Sphericity: Sig. = 0.000; KMO Measure of Sampling Adequacy = 0.793. The result revealed four factors that explained 63.326% of the cumulative variation. According to the included information, we named these four factors as follows.
Shopping interface factor mainly refers to whether the interface is attractive to consumers, whether consumers can download material from the website, and whether the information provided by the online shops is useful. It includes five variables: X26 (no beautiful shopping interface), X27 (inconvenient shopping interface), X28 (incomplete information about goods), X29 (unclear information about goods, difficult to read and understand), X30 (slow speed of the website).
Marketing combination factor mainly refers to whether the online goods, the responding speed, the process, the price and the promotion can satisfy consumers. It includes five variables: X19 (cannot find satisfactory goods), X20 (slow responses from the sellers), X21 (unsatisfactory price), X22 (no attractive promotion), X33 (complex balancing procedure).
Payment and delivery factor refers to whether the way of payment and delivery can satisfy consumers. It includes three variables: X23 (no desirable payment way), X24 (no satisfactory delivery), X25 (high delivery cost).
[Fig.2: a chart, on the 1-7 mark scale, of the mean values of the four factors: credit standing of website, payment and delivery, marketing combination and shopping interface.]
Fig.2 Mean values of factors


Credit standing of website factor mainly refers to whether the credit standing, popularity, security identification and so on can reassure consumers. It includes two variables: X31 (uncertain credit standing and popularity of website), X32 (no authoritative security identification).
4.2.3 Sequence of influencing degrees of the factors
About their mean values, the credit standing of website factor ranks first (5.47 points) and payment and delivery second (4.40 points). The third and fourth are respectively marketing combination (4.31 points) and shopping interface (4.03 points). All their mean values exceed 4 points (4.08), and there is only a slight distinction between payment and delivery and marketing combination. Fig.2 shows the mean value of each factor. Through the test, the Asymp. Sig. (2-tailed) between marketing combination and payment and delivery was greater than 0.05 at the 95% confidence level, but those between marketing combination, payment and delivery and shopping interface were all less than 0.05 (Tab.3). This means that in influencing degree no difference exists between payment and delivery and marketing combination, and the rest of the order is objective.
Tab.3 2 Related Samples Tests on online shop factors
Pair                                                Asymp. Sig. (2-tailed)
Credit standing of Website - Payment and delivery   0.000
Payment and delivery - Marketing combination        0.971
Marketing combination - Shopping interface          0.001
Payment and delivery - Shopping interface           0.002

5. Conclusions and Discussion


According to the above empirical study, we can draw the following two conclusions:
1. The factors that influence consumers' repeated online shopping can be summarized in 9 indicators: convenience, trade reliability, usefulness, shopping feeling, personal information security, shopping interface, marketing combination, payment and delivery, and credit standing of online shops. From the mean values of the factors, consumers basically think that convenience, usefulness and shopping feeling are not barriers to repeated online shopping; shopping interface, payment and delivery, and marketing combination are small barriers; but trade reliability, personal information security and credit standing of website are big barriers. The possible reason is that these consumers have shopped online and experienced the convenience and usefulness; besides, having adopted online shopping indicates that the shopping feeling of traditional shopping is not very important to them, so shopping feeling cannot influence their shopping behavior.
2. The cognitive significance sequence of online shopping environment factors is: trade reliability and personal information security; usefulness; convenience and shopping feeling. The cognitive significance sequence of online shop factors is: credit standing of website; payment and delivery and marketing combination; shopping interface. Among these indicators, consumers pay more attention to goods information and trade reliability, credit standing of website and personal information security, all of which are correlated with online shopping security. On the one hand, there are often negative reports of bank accounts being stolen and people being cheated when trading online, which makes consumers worry about the security of online shopping and influences their purchasing behavior. On the other hand, because of the characteristics of online shopping, some consumers worry a lot about trade reliability, maybe because they have had relevant experiences. Trade reliability includes the following variables: X3 (worrying that the deal is a cheat, 5.61 points), X4 (difficult to judge the product quality, 5.83 points), X5 (visual differences between picture and goods, 5.48 points), X6 (risk of late delivery, 4.55 points), X7 (no after-sale service or inefficient after-sale service, 5.33 points). All these variables get high scores, which is basically the result of consumers' own experiences.


6. Marketing Suggestions
6.1 Disseminate Online Shopping from the Security Angle
Those who have experience with online shopping still worry about its security; it is therefore necessary to publicize online shopping from the security angle, alleviate the negative effect on consumers of media reports, and strengthen consumers' confidence in security.
6.2 Enhance the Security of Online Shopping, e.g. through Third-Party Authentication
After all, online shops are virtual, and their quitting cost is lower compared with traditional real shops; besides, it is more difficult to obtain evidence when trade disputes arise. All this increases consumers' worries about the credit standing of online shops. Work on third-party authentication can reduce these worries.
6.3 Online Shops Should Perfect Their Supporting Services
The goods and services provided by online shops still have some defects; online shops can perfect their supporting services by providing more flexible payment and delivery methods, better descriptions of goods information, better marketing combinations, and so on.


Risks versus Intendance Policies in Information Systems Engineering in China*


Fang Deying1, Sun Guorui2
1. Business College of Beijing Union University (http://www.bcbuu.edu.cn/), Beijing, P.R.China, 100025
2. Economics and Management School, Henan University of Science and Technology, Luoyang, P.R.China, 471003

Abstract In this paper, the inherent risks of the two-party (duality) mechanism in information system development are first illustrated. Secondly, the functions of intendance are established on the basis of the risk preferences of the three participants, and the necessary and sufficient conditions for these functions to take effect are pointed out. On this basis, the ongoing intendance policies of China are analyzed, and their positive functions, gaps and overreach are demonstrated.
Key words Information system, Engineering, Risk, Third-party intendance, Intendance policy analysis

1. Introduction
Today, information management is the most prevalent application field of IT, if we divide IT applications into three categories: scientific computation, automation, and information management. In China, an information systems engineering project, whose main content is information management, is usually the largest single investment an enterprise makes when upgrading itself with high technology. However, building an information system successfully depends not only, or not mainly, on IT itself, but on how the development process is organized and managed; in this respect it is quite unlike other application fields of IT. Unfortunately, the Chinese market today has the following characteristics: a. owners are generally short of IT knowledge; b. credit standing in the contractor market is poor; c. the state's system of technical standards is incomplete; d. as time goes on, IT becomes more complex, the cost of information systems grows larger, and information engineering plays a more and more important role. From these factors we know that: a. information is far from complete; b. information among participants is asymmetric, as are their abilities to process information; c. market competition in China is insufficient; d. technical risks tend to increase. In view of these conditions, and since the software crisis persists, it is clear that adverse-selection risks and moral hazards exist in information systems engineering. Introducing third-party intendance is an effective way to reduce these risks, and state intendance policies are an important part of the intendance mechanism. The Ministry of Information Industry of China announced The Temporary Provisions to Information Systems Engineering Intendance (TPISEI for short), The Management Methods for the Qualification of Information Systems Engineering Intendance Firms, and The Management Methods for the Qualification of Information Systems Engineering Intendance Engineers (the Methods for short), which came into force on December 15, 2002 and April 1, 2003, respectively. Based on the risks that information systems engineering in a traditional organizational structure usually meets, this paper analyzes how the intendance mechanism inhibits risks, what conditions its functions require, and the environmental adaptability of the state intendance policies.

Funded by the National Natural Science Foundation of China, No. 70371046, and by the Beijing top-notch talent program, 2007


2. Risks in engineering and intendance system


The intendance system is a set of scientific management institutions. It comprises a systematic mechanism, a sound organizational system, adequate technical means, and strict, canonical methods and procedures, and it accomplishes its mission through the organic operation of every part. Specifically, the intendance system turns the two-party system into a three-party system of owner, supervisor and contractor. According to the TPISEI, information systems engineering intendance is defined as the set of actions performed by an intendance firm which holds the corresponding qualification and is commissioned by the owner to supervise and manage an information systems engineering project according to relevant legal provisions, technical standards and the engineering contract.
2.1 Risks in the binary mechanism
First, there are only two participants in the traditional information systems engineering process: the owner and the contractor. This is a binary mechanism. In this mechanism, risks of adverse selection always exist, whatever the state of market competition and the tendency of the owner, because of intensely asymmetric information and the imperfect credit system. That is, an agent (the contractor) may hide his type (information) before an agreement is made, so the principal (the owner) does not know the type of the agent. The principal-agent contract can then only be made on the basis of the average expected value, and agents of superior type, whose products have high quality and high prices, are forced to withdraw from the market. Consequently the market loses its efficacy, and in the long run the benefits of all participants in information systems engineering suffer great loss. Second, if the contractor's attitude is risk-averse, the owner faces moral hazard. That is, the principal cannot supervise the agent's performance but can only observe the results of the contract; moreover, whether the results are good is determined by two factors, the behavior of the agent and natural conditions that cannot be supervised. So an agent who wants to evade risks can ascribe a low return to unfavorable natural conditions and thereby escape censure by the principal. From the above analysis, the risks of an information systems engineering project involve technical risk, natural-environment risk and organizational risk. The risks caused by the binary mechanism belong to organizational risk, which is why a third-party intendance system is needed.
2.2 Risk analysis of the intendance mechanism
Given the market characteristics assumed above, we can derive the sufficient conditions for the intendance mechanism from the three participants' attitudes to risk. According to utility theory, people's attitudes to risk fall into three types: risk-favoring, risk-neutral and risk-averse. Taking u(x) as an individual utility function, if u is convex the individual is risk-favoring; if u is concave, the individual is risk-averse; and if u is linear, the individual is risk-neutral. That is, the marginal utility of a risk-favoring individual is increasing, that of a risk-averse individual is decreasing, and that of a risk-neutral individual is constant. The risk-favoring attitude cannot exist in isolation: in a transaction, a rational person does not pursue return alone.
This attitude is often related to the individual's strategic profile, so next we analyze only the last two attitudes for the three participants. As Tab.1 shows, there are 8 combinations of the three participants' attitudes to risk. In the first group of conditions, the contractor, who is dominant in information, is risk-neutral. By principal-agent theory his behavior can then be supervised without any cost, so there are only technical and environmental risks and no organizational risk. In the conditions of this group where the intendant is also risk-neutral, the intendant's behavior can likewise be supervised without cost, so there is no use in introducing an intendance organization at all. Where the intendant instead tries to avoid risks, his behavior is unconstrained and he becomes the only winner among the three participants; still more unfortunately, in one of these conditions the owner pays the intendance fee without any reward. So in these four conditions, if we disregard the technical effects of intendance (when there is enough competition in the market, those effects are not important), it is not suitable to introduce the intendance mechanism.
Tab.1 The combinations of the attitudes to risk of the three participants
Contractor    Intendant    Owner
neutral       neutral      neutral
neutral       neutral      averse
neutral       averse       neutral
neutral       averse       averse
averse        neutral      neutral
averse        neutral      averse
averse        averse       neutral
averse        averse       averse
(eight conditions in all)
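The risk-attitude definitions above can be verified numerically: by Jensen's inequality, a concave u gives E[u(X)] < u(E[X]) (risk aversion), a convex u reverses the inequality, and a linear u gives equality. A small Python sketch with an invented lottery:

```python
import numpy as np

# A lottery paying 100 or 0 with equal probability (invented example).
payoffs = np.array([100.0, 0.0])
probs = np.array([0.5, 0.5])

u_averse = lambda x: np.sqrt(x)   # concave: diminishing marginal utility
u_neutral = lambda x: x           # linear: constant marginal utility
u_favored = lambda x: x ** 2      # convex: increasing marginal utility

for name, u in [("risk-averse", u_averse),
                ("risk-neutral", u_neutral),
                ("risk-favored", u_favored)]:
    eu = np.dot(probs, u(payoffs))   # expected utility E[u(X)]
    ue = u(np.dot(probs, payoffs))   # utility of the expectation u(E[X])
    print(f"{name:12s}  E[u(X)]={eu:10.2f}  u(E[X])={ue:10.2f}")
```

For the risk-averse agent the certain payoff of 50 is worth more than the gamble, which is why, in the analysis that follows, a risk-averse contractor wants the owner to share risks.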

The remaining four conditions, in which the contractor tries to avoid risk, are more realistic. In the two of them where the intendant is risk-neutral, he is always faithful to the owner. Because of asymmetric information, the owner will try to build an incentive mechanism to share risks with the contractor. Introducing an intendant then not only makes up for the asymmetric information to some extent, but also reduces the possibility of the contractor being punished for faults not his own; under these circumstances, the introduction of the intendance mechanism benefits all three participants. In the conditions where the intendant is risk-averse, the owner has to design an incentive mechanism not only for the contractor but also for the intendant. We therefore cannot say outright whether introducing an intendant is right; we must consider whether the arrangement of the incentive mechanism is reasonable. Because it is more difficult to observe the behavior of the supervisor, the owner has to build the incentive mechanism according to team theory, where the main problem is how to avoid free-riding. It can be proved that when the original fortune of the contractor and intendant, or of the owner, is high enough, introducing the intendance mechanism yields a Pareto improvement. From the above, the condition in which all three participants are risk-neutral is the ideal one: there is no need to introduce an intendant, social resources are used most efficiently, and social welfare is maximized. In the last four conditions it is more effective to introduce the third-party intendance mechanism, especially in the two where the intendant is risk-neutral, because there it benefits all three participants. The two conditions with a risk-averse intendant are more common; all three participants can still benefit from intendance if the intendance contract is appropriate. It must be recognized that reality is more complicated than the above analysis. On one hand, the three participants' attitudes to risk are not public knowledge, or can only be learned at a cost; on the other hand, their attitudes may not be constant and may change with time and occasion. From the analysis of the basic combinations we know that the more complicated the condition, the less the function of intendance can be denied, and the higher the requirements on the arrangement of the intendance mechanism.
2.3 Analysis of the environmental adaptability of the intendance mechanism
Different intendance mechanisms are needed in different social and economic environments, and the same intendance mechanism plays different roles in different environments. Taking mainly the realistic conditions above as examples, and on the basis of the Chinese market characteristics, we next analyze the environmental requirements for the intendance mechanism to function. 1. Basically speaking, the mechanism arrangement is meant to find a proper sharing of the results and risks of an information engineering project; the lower the uncertainty of the outside environment, the higher the interest the three participants and society will obtain, and vice versa. An open, fair and impartial outside environment is therefore essential for the intendance mechanism to play its role. 2. If harder work by the contractor brings more profit to the owner, the function of intendance will be more remarkable, and vice versa; this is an important factor for the owner to consider when choosing a project to be supervised and pricing an intendance contract. 3. The original intention of introducing intendance is to obtain observed values of the contractor's behavior, so the business ability of the intendant is very important. The owner has no way to know the supervisor's ability clearly, so the public policies of the government become a key issue. 4. To prevent the intendant from free-riding, an appropriate intendance mechanism is needed: the entrance threshold of the intendancy market should be raised, and residual claim rights should be given to intendants. 5. To avoid the contractor's resentment of the intendant, or disputes between the two, perfect laws and regulations and improved technical standards are needed. 6. To prevent a principal-agent relationship from emerging between contractor and intendant, corresponding laws and regulations and a board of arbitration are needed.

3. Analysis of the present policies


To regulate information engineering projects and reduce their risks, we can only begin from two aspects: first, reduce the probability of risks arising from technique, environment and organization; second, restrain the loss once a hazard occurs. In the first respect, if policies could change the market into the ideal state, it would be unnecessary to introduce the intendance mechanism at all; in the favorable conditions identified above, policies help the market develop on the basis of a meaningful intendance. But the present social, political, economic and technical environment is not good, so policies must also be enforced in accordance with the second aspect. Generally speaking, the two Methods correspond to the requirements for the intendance function in the realistic conditions, and the TPISEI mainly solves the problems arising there; but not enough policies yet exist to fit the remaining condition. Specifically, the gratifying achievements of the provisions are as follows. The fifth item of the TPISEI reduces hidden risks to the owner by dividing intendance firms into three classes. The ninth item defines the main content of intendance, which goes beyond the Western consulting system and fits the level of our present social conditions. The eighteenth item eliminates improper relationships between contractor and intendant. The fourth item of the first Methods sets the threshold for firms to enter the intendancy market, which prevents adverse selection by owners to some extent. The twelfth item characterizes the difficulties of supervision and roughly allocates the supervisor's responsibility. The thirteenth item of the second Methods requires intendants to undergo further education and hence keeps intending ability up to date. But some points in the Methods deserve discussion. The range of intendance is fixed by the investment in a project in the eighth item; though this meets a real requirement, it does not give intendance enough room to play its role. For instance, a large-scale, high-risk project whose investment is not on the fixed list need not be supervised. In addition, whether to list projects involving state security must be considered carefully, because intendants would come to possess the details of such projects; if state security is an important performance target, the advantages of introducing intendance are outweighed by the disadvantages. The fourth item of the second Methods says that an intendant must be a university graduate with two years' experience or a junior-college graduate with four years' experience. This level equals that of an assistant engineer, without any further classification. Such intendants may be theoretically competent, but under the present outside circumstances, raising the level of the intendance team would help restrain or avoid the resentment of contractors. The fifth item of this Methods requires training in designated training institutes, which is perhaps beyond the proper range of policy.


4. Conclusions and proposals


From the above, introducing the intendance mechanism into information engineering is not an end but a means. It meets the requirements of the present social and technical environment of China, and though intendance is not all-purpose, the intendance mechanism of China needs to be redesigned and improved. Through the analysis of the functions of intendance we obtain the correct direction for the organizational environment and the corresponding policy support; through the study of the environment of the intendance mechanism, the positive functions, gaps and overreach of present policies are shown clearly. We therefore make the following proposals concerning the TPISEI and the Methods. 1. Improve and carry out standards for the software development process, and promote the visibility of supervision and the operability of intendance. A set of intendance standards for the information industry should be promulgated as soon as possible; its content should be arranged along the intendance process and include tools for measurement, assessment and forecasting, in order to reduce technical risks and improve the organizational criteria of intendance. 2. Found and perfect a credit database of intendance firms and contractors, following today's classification of intendance firms; the database should be published regularly in the name of the government to reduce hidden risks and avoid malignant competition. 3. Both technical ability and organizational ability should be considered when classifying the qualifications of intendants. If intendants are introduced technically and artfully, they will advance the development of information engineering projects and will not be refused by the other two participants; intendants' qualifications should therefore be fixed at no less than five years' experience. It is surprising that we find no report of regulation of information engineering intendance in Western countries, where informatization is at a high level. Many information and technique consulting firms there, such as EDS, David, Gartner and Q/P, serve with humanness: they enter the informatization process of their clients as the third party and serve them perennially and professionally, from analyzing informatization trends to making information plans and selecting application systems, cooperating with their clients in many ways on the basis of a long and steady relationship instead of just doing certain projects. Practice over the last ten years has proved that this consulting mode plays an active role. That is to say, mechanism arrangement is systems engineering, and there is no best mode of mechanism, only a proper one. The Western mode rests on a perfect social credit system and complete technical standards, which for us is still only an ideal condition; economics tells us that under such circumstances administrative means should be decreased and organizational means enhanced. Furthermore, the humane manner of the Western firms shows that they respect contractors and consider it positive to introduce the intendance mechanism into the whole process of enterprise informatization. Meanwhile, the arrangement of the intendance mechanism is a dynamic process and must change with the social and economic environment; this remains a problem for further research.
References
[1] Zuo M Y. The analysis of intendance mechanism in information systems project with management view. Applications of the Computer Systems, 2000, 15(5): 7-9 (in Chinese)
[2] Fang D Y, Li M Q. The analysis of intendance mechanism in information systems project with economics view. Journal of Industrial Engineering and Engineering Management, 2003, 17(2): 98-102 (in Chinese)
[3] Zhang W Y. Game Theory and Information Economics. Shanghai: SDX Joint Publishing Company of Shanghai and Shanghai People's Publishing House, 1996: 420, 508 (in Chinese)
[4] The Ministry of Information Industry of China (MII). The Temporary Provisions to Information Systems Engineering Intendance. Files of the MII in information series, 2002, No.570 (in Chinese)
[5] The Ministry of Information Industry of China (MII). The Management Methods for the Qualification of Information Systems Engineering Intendance Firms and The Management Methods for the Qualification of Information Systems Engineering Intendance Engineers. Files of the MII in information series, 2002, No.142 (in Chinese)
[6] Freimut B, Sascha U. Management of explicit and implicit knowledge in consulting companies. Proceedings of the International Conference on Information Technology: Coding and Computing, ITCC'02
[7] Eliot R, Peter D. Models for understanding the dynamics of organizational knowledge in consulting firms. Proceedings of the 34th Hawaii International Conference on System Sciences, 2001


Research on Core Competence of Enterprises in Electronic Business Management Mode


Hao Chunlu
On-the-job doctoral candidate, Wuhan University of Technology; Deputy Director General of the First Cadres Department of the Organization Department of the Liaoning Provincial Committee of the Communist Party of China, P.R. China, 110006

Abstract In the 21st century of the knowledge economy, the development of enterprises has shifted from resource-oriented to market-oriented, and enterprises have entered a new phase of international competition. An enterprise's core competence is expressed not only by the advantage of its own inner resources, but even more by the recombination of its inner and outer resources, that is, by a strategic alliance in which core enterprises play the leading role. E-business, as a new type of business mode, plays an important role in this transformation of the core competence of enterprises. From the perspective of supply chain management, this paper studies the core competence of enterprises under the e-business mode with the aid of the BCG Growth-Share Matrix.
Keywords E-business, Supply chain management, Core competence, Non-zero-sum competition

1. Introduction
Since the 1990s, computer network technology has been developing quickly; information processing and transmission are breaking through the restrictions of time and space, and computer networks and the globalization of the economy are irreversible trends. E-business is used here in its narrow sense: effective business realized through computer network and network communication techniques. Since Professor Porter made his systematic exposition and analysis of competitive strategy, it has taken deep root. The implementation of e-business has helped to promote a strategic transformation from the traditional antagonistic win-lose mode to a cooperative win-win mode that aims at the benefit of both parties. The application of network techniques in business has changed the way the market functions and its rules. The Internet, as a channel of information transmission and two-way communication, is free, open, fair and almost costless; it has broken the restrictions of time and space, enabling all enterprises, wherever located and whatever their size, to communicate with one another fairly and freely. B2B is now widely applied, and the rule of competition is of the cooperative type that pursues win-win: it stresses winning great development opportunities through cooperation and knowledge sharing, developing business opportunities together and bearing risks together. Taking e-business as a restrictive condition, this paper studies the specific form of core competence with the aid of the BCG Growth-Share Matrix and supply chain management (SCM), on the basis of the theory of non-zero-sum competition, which embodies a spirit of cooperation.

2. SCM in E-business
2.1 The definition of supply chain
A supply chain includes a production supply chain and a logistics supply chain. Traditional theory holds that the supply chain embraces the purchasing and development department, the production department, the warehouse, the delivery department, and the other departments involved in production and distribution; there, the supply chain is considered a process inside the manufacturing enterprise. In this paper, however, the supply chain means the combination of the enterprise's inside supply chain and the outside supply chain: the realignment of the four flows (information flow, distribution flow, business flow and fund flow) to form a functional network of supplier, producer, distributor, tradesman and final consumer along the process in which raw material is processed into semi-products and final products and distributed to customers. It is not only a simple chain of logistics, information and funds linking supplier and consumer, but also a value-added chain: raw material gains value during production, packing and transportation, and benefit accrues to each party engaged in the activity.
Table 1 The supply chain: sources of supply (suppliers and suppliers' suppliers) feed the core enterprise (design, manufacture, assemblage), which serves the sources of demand (distribution, retail, consumers and consumers' consumers); the parties are linked by logistics and/or information flows and by fund flow.
2.2 E-SCM
Supply chain management (SCM) is defined as the following process: based on understanding and grasping the inherent laws and interrelations of each supply chain link, enterprises regulate the physical distribution, information flow, fund flow, value flow and service flow involved in the various links between production and circulation by virtue of management functions such as planning, organization, direction, coordination and control, so as to achieve maximum efficiency and provide the greatest service to consumers at the minimum cost. In a typical supply chain there is one enterprise that plays the key role, called the core enterprise. The core enterprise is the coordination center of information flow and physical distribution: downstream it relates to sellers and users, and upstream it is linked to suppliers and suppliers' suppliers. As the information center, the core enterprise gathers information from the chain and, after combining and processing it, passes it on to the upstream enterprises. Besides, the core enterprise functions as the physical distribution center: parts suppliers deliver their spare parts to the core enterprise and, after assembly and other processing, the products are passed to users through downstream enterprises. Information flow and physical distribution must be coordinated organically; only in this way can the supply chain obtain competitive power. The basic advantage of electronic supply chain management (e-SCM) lies in collecting and processing large amounts of supply chain information through network techniques. Through the application of e-business, we can manage supply chain information resources effectively and improve the efficiency of the whole supply chain. E-SCM can provide service systems such as automatic information handling, customer order fulfillment, purchase management, inventory control and logistics distribution, raising the efficiency of the flow of goods and services in the supply chain. The main path by which e-business supports SCM is to use e-business techniques such as EDI (Electronic Data Interchange), EFT (Electronic Funds Transfer) and ECR (Evaluated Cash Receipt), together with the two typical patterns of QR (Quick Response) and ECR (Efficient Consumer Response), to promote the extensive application of integrated SCM.


Table 2 E-SCM: the core enterprise trades with suppliers via EDI bargains and with clients via Web bargains; a bank handles payment, and logistics firms carry the transportation and storage between supply and delivery.
3. SCM and the enterprise's core competence


3.1 The connotation of the enterprise's core competence
The enterprise's core competence is the unique ability, formed in the course of production and service supply, that can bring excess profit and cannot easily be imitated by competitors. It is the capital power and management strength formed in production, new-product development, after-sale service and so on, determined by technology, culture or institutions of particular advantage. Core competence mainly includes two parts: core technical ability and organizing capacity. This paper focuses on the second part, organizing capacity, which mainly includes: the ability to make strategic decisions; the ability to gather, organize, allocate and recombine resources effectively; the ability to construct, adapt to and reform the environment; and the ability to learn and perfect itself.
3.2 Applying supply chain management to cultivate the enterprise's core competence
3.2.1 The enterprise's core competence under traditional management
Before SCM appeared, enterprises usually adopted the management style of "vertical integration". Companies under this mode usually take the form of "big and whole" or "small and whole", keeping marketing, product development, production planning, manufacturing, financial accounting, human resources, information administration, equipment maintenance and so on inside the enterprise, which makes managers spend too much time, energy and funds on work that is secondary and accessory. Under this management style, the company's core competence is formed by collecting its own core resources and cultivating them internally: by integrating its inner resources the company obtains advanced technique, lowers product cost and improves product quality, and finally wins its core competitive advantage in the market through the cost and quality advantages brought by technique. Under this condition, the enterprise's core competence is expressed by its technical ability.
3.2.2 The enterprise's core competence under E-SCM
Global information sharing has come into being along with international economic integration and the deepening international division of labor, especially with the swift development of information technique since the 1990s. In this situation, an enterprise employing the traditional management style confronts problems such as technique leakage, high cost and comparatively long production cycles, which badly block the improvement of its core competence. Under E-SCM, the enterprise's core competence is expressed as follows: the enterprise integrates its internal and external resources effectively and, relying on e-business technique, finally achieves its core competence through the supply chain mode. Under this restriction, the enterprise's core competence is embodied not only in its own internal resource advantage, but even more in the whole resources and whole competitiveness of the supply chain. The core competence is mainly acquired by centralizing, organizing, allocating and regrouping the supply chain focused on the enterprise itself. Under E-SCM, the enterprise makes effective use of EDI, EFT and so on, and employs the superior resources of other enterprises to make up its own shortages while using its advantages to carry on its core business. Once its core competence is established, the SCM mode lets the enterprise pass significant but non-core business to other enterprises at the supply chain's nodes. The core enterprises (manufacturers, sellers) carrying on this outsourcing can thus build strategic partner relations of cooperative total win with the cooperating enterprises (suppliers). The enterprise's core competence is then embodied not only in its own internal advantage, but even more in the supply chain on which the partner members share information and ideas and jointly organize and concentrate resources to build up the core competitive advantage.
3.2.3 Case analysis
The Cisco Corporation makes use of electronic channels, exerting its core competence by reforming its supply chain, a reform it started in 1992. It passes the majority of its production to cooperative manufacturers and concentrates on the last stage, the debugging and design of products; Cisco then maintains the products' supply chain together with the cooperative manufacturers, intercommunicating and cooperating mutually. In its relations with both ends of the supply chain, Cisco keeps communicating freely with its external partners and enhances its cooperation-competition advantage. Cisco has organized a great whole-industry group: it collects suppliers around itself via the Internet and forms a group advantage with itself at the center. The group takes the network as its information agency and connects all the vertical suppliers engaged in producing network equipment on the same industry chain. As network information is fast, Cisco and its cooperators can give immediate feedback to the market. In introducing products to the market, Cisco greatly shortened the introduction period by reforming the supply chain: Cisco's own study shows that introducing each product takes on average about five to ten weeks, during which the enterprise must spend vast manpower and material resources collecting and releasing information. The Technique Department therefore brought in an automatic data collecting system at the prototype stage, which lowered this time from one day to fifteen minutes. Cisco also moved to direct supply while shaping this supply chain. In the past it delivered all goods to its clients itself, so each supply of goods had to pass two courses, from suppliers to Cisco and then from Cisco to clients, costing about six days in all. Since 1997, however, Cisco has been the first in the world to supply goods directly, and some clients can acquire goods from the suppliers directly; at present the bargain amount achieved this way exceeds 50 percent of the total.

4. Analyzing the enterprise's core competence through the BCG Growth-Share Matrix
Under e-business management, by adopting a B2B business strategy, the enterprise effectively fits together resources of different types in different enterprises, thus forming a strategic alliance of enterprises that combines different business types. Through valid organization and harmonization inside the alliance, the enterprise's core competence can be realized. Using the classification of an enterprise's businesses given by the BCG Growth-Share Matrix, we can study how the enterprise carries out its core competence through different strategies on this foundation. The BCG Growth-Share Matrix divides the operations of an enterprise into four types: the problem, the star, the cash cow and the thin dog.


Table 3 The BCG Growth-Share Matrix: the vertical axis is the market growth rate (from 0 to 22%) and the horizontal axis is the relative market share (from 10X down to 0.1X); the four cells are the star, the problem, the cash cow and the thin dog.

4.1 Analysis of the four types divided by the BCG Growth-Share Matrix
Problem businesses have a high market growth rate and a relatively low market quota; they are usually an enterprise's new businesses. To develop a problem business, the enterprise must build factories and add equipment and personnel so as to keep up with the quickly developing market and overtake its rivals, all of which means a great deal of capital investment. Star businesses have a high market growth rate and a relatively high market quota. A star can be regarded as a leader of a high-speed market and will become the enterprise's cash cow business; however, this does not mean a star necessarily brings rolling money, because the market is still growing at high speed and the enterprise must keep investing in order to grow as fast as the market and beat off rivals. Cash cow businesses have a low market growth rate and a relatively high market quota; they are leaders in a mature market and the enterprise's source of cash. Because the market is already mature, the enterprise need not pump in capital to expand the market scale; being the market leader, the business enjoys scale economies and high profit margins, bringing a great deal of money to the enterprise. With cash cow businesses the enterprise usually pays its accounts and supports the other three kinds of business that need a great deal of cash. Thin dog businesses have a low market growth rate and a relatively low market quota. In general such a business earns tiny profits or even runs at a loss; it often continues to exist more for reasons of affection, like a dog kept for many years that the host is unwilling to throw away, although it usually ties up many resources, such as funds and management time, and in most cases loses more than it gains.
4.2 The BCG Growth-Share Matrix and the enterprise's core competence
According to the classification of the enterprise's operations given by the BCG Growth-Share Matrix and non-zero-sum competition theory, we analyze what management strategies the enterprise adopts to form strategic alliances in the process of competition, and thereby keep and raise its core competence. Based on non-zero-sum competition theory, an enterprise forms a strategic alliance mainly according to two factors: whether the business belongs to the enterprise's core business or merely its edge business, and the position the enterprise occupies in the business realm, leader or follower. Here we treat star and cash cow businesses as the enterprise's core business, and problem and thin dog businesses as its edge business. Under these restrictions, the enterprise forms four kinds of strategic alliance in competition and cooperation, and the enterprise's core competence is expressed in these four kinds of strategic alliance, as the sketch and table below illustrate.
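Returning to the classification in 4.1, the quadrant test reduces to two thresholds, one on the market growth rate and one on the relative market share. The 10% growth and 1X share cut-offs in the Python sketch below are the conventional BCG dividing lines and are assumed here rather than taken from the matrix above; the business units are invented.

```python
def bcg_type(growth_rate: float, relative_share: float) -> str:
    """Classify a business by the BCG Growth-Share Matrix."""
    high_growth = growth_rate >= 0.10    # conventional dividing line
    high_share = relative_share >= 1.0   # at least the largest rival's share
    if high_growth and high_share:
        return "star"
    if high_growth:
        return "problem"
    if high_share:
        return "cash cow"
    return "thin dog"

# Hypothetical business units of one enterprise:
for unit, g, s in [("routers", 0.18, 4.0), ("modems", 0.04, 2.5),
                   ("handsets", 0.20, 0.4), ("cables", 0.03, 0.2)]:
    print(f"{unit:9s} -> {bcg_type(g, s)}")
```

Mapping each unit into a quadrant in this way is the first step; the second factor of 4.2, leader or follower, then selects the defense, chase, keep or reorganization game of Table 4.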

Table 4 The strategic alliance games of the enterprise
Type of the enterprise's business        Leading position    Chasing position
Core business (star, cash cow)           defense game        chase game
Edge business (problem, thin dog)        keep game           reorganization game

For the star and cash cow core businesses of the enterprise: if the enterprise is in the leading position in this business, it takes part in the strategic alliance from a defensive position, constituting the defense game. The enterprise participating in the alliance mainly has two purposes: one is to acquire market and technique so as to keep its existing market and technique advantages; the other is to ensure the supply of resources. Through this game the enterprise attains the purpose of keeping its core competence. If the enterprise is in the chasing position in the industry, it participates in the alliance from a chasing position, constituting the chase game. Its main purpose is, through the alliance, to learn advanced technique or management methods from the leading company, or to acquire new sales outlets, and thus to improve its core competence and raise its competitive position in the industry. For the problem and thin dog edge businesses of the enterprise: if the enterprise is in the leading position in the industry, it participates in the alliance from a keeping position, constituting the keep game. Its main purpose is to use its favorable position to obtain the biggest benefits: extending its sales share further through the strategic alliance, then using the market advantage to preserve its core competence and, through the best exploitation of the edge business, to raise its whole core competence. If the enterprise is in the chasing position in the industry, it participates in the alliance from a reorganizing position, constituting the reorganization game. The purpose of this alliance is to create certain advantages and value so that the enterprise can finally sell off that business at a reasonable price; by reorganizing, through the strategic alliance, the business it is about to give up, the enterprise acquires the biggest income, in order to maintain and raise its whole core competence.


The Xia-Jin Bridge's Impact on Local Tourism on Both Sides of the Taiwan Strait from the Customer Value Perspective
Chien Yung-Tsai1, Liao Sen-Kuei2, Duan WanChun3
1 Faculty of Management and Economics, Kunming University of Science and Technology, P.R.China 650093
2 Graduate Institute of Commerce Automation and Management, Taipei University of Technology, Taiwan 106
3 Faculty of Management and Economics, Kunming University of Science and Technology, P.R.China 650093

Abstract This study simulates the impact of the construction of the Xia-Jin (Xiamen-Jinmen) Bridge on local tourism as seen from the customer value perspective, treating the governments on both sides of the Taiwan Strait as corporations, Taiwan residents as customers, and the construction of the Xia-Jin Bridge as a form of product or service in our analysis of customer value. The aim of this research is to determine how functional value and emotional value contribute towards customer satisfaction and behavioral intentions in the customer value model. From our LISREL analysis of the simulated data, we found that positive relationships exist both in the effect of emotional value on customer satisfaction and in that of customer satisfaction on behavioral intentions.
Key words Customer value, Customer satisfaction, Behavioral intentions

1. Introduction
Discovering and fulfilling customer value has become a fundamental goal in modern corporate management, since every customer possesses a unique perception of value. Albrecht (1994) believed that customer value is a marketing concept based on the specific needs of customers, as well as a strategy whose goal is to satisfy those needs. Kotler (1997) also pointed out that customer value is customers' overall perception of how products are able to satisfy their needs. For this reason, a growing number of firms are adopting the customer-centered approach, which starts by treating each customer as a unique entity and developing products or services that satisfy the needs and expectations of each customer. Hence customer-value-based concepts and theories, such as customer orientation, relationship management, creation of customer value, value marketing, and the modeling and analysis of customer value, are often adopted in marketing. The aim of this research is to evaluate and analyze the impact pattern of customer value by simulation: we designate the construction of the Xia-Jin Bridge as a product or service, the governments of both sides of the Taiwan Strait as corporations, Taiwanese residents as customers, and tourism as our subject industry. Previous professional studies of the Xia-Jin Bridge were mostly feasibility analyses, models of advanced construction methods, or evaluations of the effectiveness of new bridge-building techniques. Few were aimed at determining its interactions with local tourism from the customer value perspective, what the expected impacts are, and how they influence tourists' consumption patterns (customers' behavioral intentions); these are the main questions that we seek to answer. Based on the above motivations, this research aims to determine the impact of the Xia-Jin Bridge on local tourism as seen from a customer value perspective, which should be a helpful guide in the construction of the bridge.

2. Literature Review
2.1 Customer Value
Zeithaml (1995) defined customer value as consumers' evaluation of overall product effectiveness based on the benefits and costs associated with acquiring products. Monroe (1990) stated that historical theories of consumer behavior were established on the premise that consumers act rationally under adequate information; in reality, however, we live in a world of imbalanced information, where consumers develop preferences and choices based on their own product evaluations. Hence Monroe held that consumers' recognition of value is an exchange between recognized benefits and recognized sacrifices, denoted as: recognition of value = recognized benefits / recognized sacrifices. Zeithaml likewise took recognition of value to be consumers' evaluation of overall product effectiveness based on the benefits and costs associated with acquiring the product. Based on Zeithaml's definition of the recognition of product value, Bolton and Drew (1991) defined service value as consumers' evaluation of overall service satisfaction based on the effectiveness, benefits and costs associated with acquiring services. Cronin Jr., Brady, Brand and Shemwell (1997) consolidated previous studies of service value and redefined it as a mathematical function of the quality and costs of service. Bolton and Drew (1991), Monroe (1990), Morton and Rys (1987), and Williams and Soutar (2000) conducted extensive research on customer value, that is, customers' overall evaluation of products and services. This perception develops over the course of product or service purchases in relation to the benefits received and the costs (monetary or non-monetary) incurred, and it is highly personalized at the conscious level. Consolidating the above studies, we have identified two major aspects for determining customer value: functional value and emotional value, which are the main focus of the rest of this research.
2.1.1 Related Models of Customer Value
Cronin Jr., Brady and Hult (2000) consolidated previous studies of service value; after analyzing six service sectors, they concluded that service value influences customers' satisfaction as well as their behavioral intentions. Model 1 concluded, from the relevant studies on service value, that a direct relationship exists between value and performance results. Model 2 concluded, from the relevant studies on satisfaction, that a direct relationship exists between satisfaction and performance results. Model 3 concluded, from previous research on the relationships among service quality, satisfaction and behavioral intentions, that service quality ultimately affects behavioral intentions through value and satisfaction. Model 4 concluded that behavioral intentions are affected not only by service value and satisfaction; service quality also needs to be taken into consideration.
2.2 Customer Satisfaction
Kotler held that customer satisfaction is how purchased products or services differ from customers' expectations; in other words, it is a reaction to the difference between actual and expected outcomes. Consolidating the viewpoints of Churchill and Surprenant (1982) and Engel, Blackwell and Miniard (1995), the level of customer satisfaction is determined by the relation between consumers' expectations and the actual performance of the products or services; the difference between them indicates whether the products or services are satisfactory. If the actual benefit derived from using the product matches or even exceeds expectations, the customer becomes satisfied or very satisfied; otherwise the customer becomes unsatisfied or very unsatisfied. Customer satisfaction is an evaluation process in which customers compare the actual benefits against their expectations after taking into account all costs associated with the purchase; the process can generate either favorable or unfavorable feelings and emotions.
2.3 Behavioral Intentions
Boulding, Kalra, Staelin and Zeithaml (1993) held that customers' awareness of service quality impacts their satisfaction with the overall service, which subsequently influences their behavioral intentions.
According to research conducted by Zeithaml, Berry and Parasuraman (1996), behavioral intentions can be either beneficial or detrimental. Beneficial behaviors strengthen the relationship between the customer and the company, and may include compliments and preferences towards the company, increased purchases of the company's products or vouchers, etc.; detrimental behaviors may include avoidance of any association with the company or reduced purchases of the company's products. Behavioral intentions can therefore be used as an indicator of customer loyalty. Oliver (1980) held that consumers' perceptions of a certain product or service derive from personal experience, and that consumers develop their willingness to purchase based on these perceptions.

3. Methodology
3.1 Conceptual Framework
Figure 1 depicts the conceptual framework of the study.

Figure 1 Conceptual framework: functional value and emotional value each feed into customer satisfaction, which in turn drives behavioral intentions
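The paper estimates this framework with LISREL on the full measurement model. As a rough stand-in for the structural part only, the two path equations (functional and emotional value to customer satisfaction, and customer satisfaction to behavioral intentions) can be sketched with ordinary least squares on made-up composite scores; the variable names, coefficients and data below are assumptions for illustration, not the authors' results.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 160  # matches the reported number of effective responses

# Made-up composite scores on a 5-point Likert scale.
fv = rng.normal(4.1, 0.6, n)                      # functional value
ev = rng.normal(3.7, 0.7, n)                      # emotional value
cs = 0.3 * fv + 0.5 * ev + rng.normal(0, 0.5, n)  # customer satisfaction
bi = 0.7 * cs + rng.normal(0, 0.5, n)             # behavioral intentions

def ols(y, *xs):
    """Least-squares coefficients, intercept first."""
    X = np.column_stack([np.ones(len(y)), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

b0, b_fv, b_ev = ols(cs, fv, ev)  # path 1: CS ~ FV + EV
c0, c_cs = ols(bi, cs)            # path 2: BI ~ CS
print(f"CS = {b0:.2f} + {b_fv:.2f}*FV + {b_ev:.2f}*EV")
print(f"BI = {c0:.2f} + {c_cs:.2f}*CS")
```

Positive estimated path coefficients would correspond to support for H1 through H3 below; LISREL additionally models measurement error, which plain OLS ignores.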

3.2 Hypotheses
Based on the structures of research conducted previously by scholars, our hypotheses are established as follows. H1: There is a positive relationship between functional value and customer satisfaction. Williams and Soutar (2000) discovered in their research that consumers' perceptions of functionality, such as the quality of tour design, package price, the comfort of tour buses and hotels, and the professionalism of tour guides, may influence consumers' final attitude, which subsequently affects customer satisfaction. H2: There is a positive relationship between emotional value and customer satisfaction. In the same research, Williams and Soutar also pointed out that emotional value is consumers' perception after experiencing the tour: throughout the tour, consumers may experience different emotions and feelings that contribute toward the final ratings of their tours, which subsequently affect customer satisfaction. H3: There is a positive relationship between customer satisfaction and behavioral intentions. Boulding, Kalra, Staelin and Zeithaml (1993) believe that customers' awareness of service quality will affect their overall satisfaction, and that satisfaction with service quality will subsequently change customers' behavioral intentions.
3.3 Quantifying the research variables
The variables were quantified using a 5-point Likert scale. Based on previous relevant definitions, this research defines functional value as "customers' subjective perception of the overall functionality of products and services during the course of their experiences", and emotional value as "customers' overall perception, on the emotional level, during the course of experiencing products and services". We used qualitative scales in our questionnaire to quantify both functional and emotional value, amended according to the specific industry characteristics, as developed by Sheth et al. (1991). Based on previous relevant definitions, this research defines customer satisfaction as "the evaluation and perception developed after consumption"; to quantify it, we designed an integrated survey as recommended by Fornell (1992) and Bolton (1991). We defined customers' behavioral intentions as "the likelihood that customers may change their attitudes, intentions, and recommendations to others based on their evaluations and feelings derived from an experience", and we integrated a number of approaches, amending the quantifying method as suggested by Fornell and by Gronholdt et al. (2000).
3.4 Research subject and samples
This research aims to determine the impact of the Xia-Jin Bridge on local tourism as seen from the customer value perspective, by studying a sample of consumers in Taiwan. The rationale behind this was that

consumption in both Xiamen and Jinmen was almost entirely contributed by non-local tourists, so this sample was appropriate for the industry of our research. In addition, tourism in both regions was highly competitive, vital and representative of the local economies, and its development is heavily associated with and dependent upon customer value. Gerbing and Anderson (1988) recommended a sample size of at least 150 for LISREL to achieve convergence and good fit. Hair et al. (1998) held that when estimating parameters using the maximum likelihood method, a sample size above 100 is a minimum requirement; too small a sample results in inadequate convergence or fit. The subjects of this survey spread across a wide range of sectors including tourism, schools, hospitals, etc.; a total of 160 effective responses were collected, which satisfied the minimum sample size for LISREL. 49.1% of subjects were male and 50.9% female; consumers between ages 21 and 40 comprised 87.9% of the sample; in terms of educational level, college was the most common, accounting for 57% of the entire sample; subjects earning NTD 20-35 thousand and 35-50 thousand per month formed the majority of the sample, accounting for 34.5% and 19.4%, respectively.
3.5 Reliability and validity
Reliability is the primary standard for evaluating a measurement tool. It indicates the accuracy and precision of a measurement procedure, through which we are able to determine whether the measured results are consistent and stable. In this research, we used Cronbach's α to evaluate the reliability of the factors and the clustering effect between variables. As shown in Table 1 below, the reliability of our research variables ranged between 0.72 and 0.83; according to previous studies, a Cronbach's α above 0.7 indicates high reliability. In Table 1, the means and standard deviations of each variable are displayed in columns 2 and 3, and the numbers along the diagonal are the Cronbach's α values produced by the reliability analysis. To better understand the level of correlation between functional value, emotional value, satisfaction and behavioral intentions, we used Pearson correlations to determine whether the relationships between the variables were significant. The numbers below the diagonal are the correlation coefficients between pairs of variables; the greater the coefficient, the stronger the correlation. As Table 1 shows, positive correlations exist between all variables in our research.
Table 1  Mean, Standard Deviation, and Correlation Coefficients

Variable                 M      SD    Functional value  Emotional value  Customer satisfaction  Behavioral intentions
Functional value         4.13   .56   (.78)
Emotional value          3.72   .69   .53**             (.72)
Customer satisfaction    3.64   .76   .51**             .61**            (.82)
Behavioral intentions    3.73   .70   .41**             .66              .68**                  (.83)

Note. * p < .05, ** p < .01; Cronbach's α values are shown in parentheses along the diagonal.
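As a minimal illustration of the reliability analysis described above (not the authors' actual procedure or data), the following sketch computes Cronbach's α and the Pearson correlation matrix for one simulated block of five-point Likert items; the item names and values are assumptions for demonstration only.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for one block of Likert items (rows = respondents)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Simulated 5-point responses for four hypothetical functional-value items
rng = np.random.default_rng(0)
latent = rng.integers(1, 6, size=(160, 1))                  # shared attitude
items = pd.DataFrame(
    np.clip(latent + rng.integers(-1, 2, size=(160, 4)), 1, 5),
    columns=["fv1", "fv2", "fv3", "fv4"],
)
print(round(cronbach_alpha(items), 2))      # illustrative value only
print(items.corr().round(2))                # Pearson correlations, as in Table 1
```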

Validity refers to appropriateness: a measurement tool must correctly capture the observations required for the research and achieve the purpose of its measurement for the test to be valid. In our research, validity is the appropriateness of the measurement tool: if the measure (questionnaire) covers all of the structures and content that this research intends to discuss, we can conclude that it has excellent content validity. In other words, the questionnaire is considered highly valid if it asks for the answers we need. Content validity cannot be determined statistically. Our questionnaires were designed based on theory, with reference to similar content and questionnaire items used in previous studies, amended according to the specific industry characteristics of our research, and reviewed and tested by scholars, specialists, consumers and other relevant personnel; this research should therefore satisfy the validity requirement. Factor analysis is a statistical tool used to simplify variables, analyze groups that exist between variables, or search for common factors behind each variable. Two types are available: exploratory factor analysis (EFA) and confirmatory factor analysis (CFA). Since this is a confirmatory research study, we chose the

confirmatory approach for our factor analysis. The variance explained is listed in Table 2 below, after conducting a factor analysis on each block using SPSS 10.0. Under principal component analysis, the KMO test (Kaiser-Meyer-Olkin measure of sampling adequacy) and the test for sphericity both reached significance, so we continued with the extraction of principal factors. We extracted four factors using varimax rotation: functional value, emotional value, customer satisfaction and behavioral intentions.
Table 2  Factor Analysis

Variable                 KMO    Variance explained
Functional value         0.70   71.22%
Emotional value          0.71   67.02%
Customer satisfaction    0.72   84.54%
Behavioral intentions    0.80   71.17%

Of these, customer satisfaction explained the highest proportion of variance (84.54%), and the variances explained by the other factors were all above 67%, indicating a good factor analysis result.
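To give a feel for the "variance explained" column of Table 2, the sketch below estimates the variance explained by a single principal component for one hypothetical item block. It uses scikit-learn rather than the SPSS 10.0 workflow the authors describe, and the simulated data are assumptions, so the printed percentage is illustrative only.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
factor = rng.normal(size=(160, 1))                    # one latent factor
items = factor + 0.6 * rng.normal(size=(160, 4))      # four hypothetical items

pca = PCA(n_components=1).fit(items)
print(f"variance explained: {pca.explained_variance_ratio_[0]:.1%}")
```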

4. Discussion
From the entire model, we believe that customer value affects consumers' behavioral intentions through the perception of satisfaction. The GFI of this model was 0.91, exceeding the recommended threshold of 0.90; scholars have also suggested that other indicators can be used for testing, and the CFI of 0.96 and the NFI of 0.94 likewise exceeded their recommended thresholds, hence this model is considered acceptable. Based on the estimated parameters provided by LISREL, we tested whether the parameter of each path fell within our prescribed range to determine whether the hypotheses were valid. The estimated parameters of hypotheses H2 and H3 were found to be significantly positive; however, the estimated parameter of hypothesis H1 did not reach significance, so we were unable to conclude that our findings supported that hypothesis.

5. Conclusion and Recommendation


5.1 Conclusion
This research assumed that positive functional and emotional values contribute to higher customer value, which subsequently leads to higher customer satisfaction; a corporation that adopts a customer value-oriented marketing approach intends to influence customers' perception of satisfaction. This is consistent with the findings of Patterson and Spreng (1997), Andreassen and Lindestad (1998), Ennew and Binks (1999), Chenet, Tynan and Money (1999), and Cronin, Brady and Hult (2000), whose research showed that positive customer value leads to customer satisfaction. Hypothesis H2 (emotional value is positively correlated with customer satisfaction) and hypothesis H3 (customer satisfaction is positively correlated with behavioral intentions) were significantly supported. However, hypothesis H1 (functional value is positively related to customer satisfaction) was not significantly supported, which is consistent with the research of Liao Sen-Kuei (2003). The reason is likely that transportation, including ferries and air traffic, was already available between Xiamen and Jinmen, so consumers did not have great expectations of a new bridge offering cheaper and more convenient land transportation. In addition, the project was under the influence of the unstable political issues that exist between the governments on both sides of the Taiwan Strait. This research showed that the adoption of customer value affects customers' behavioral intentions through an intermediate variable, customer satisfaction, which subsequently affects customers' loyalty and willingness to recommend to others. The more effective the customer value approach, the more likely customers and corporations are to continue their business relationship, which leads to a stronger competitive advantage. This result is consistent with Kenichi's (1983) belief that corporate strategies are established so that corporations can

effectively and continuously surpass competitors' marginal benefits.
5.2 Historical Studies
Looking through relevant previous studies, there has not yet been a Xia-Jin Bridge related study intended to determine the impact of the bridge on local industries; the majority focused on the feasibility of applying new construction techniques, or on the benefit and feasibility of improving bridge-building efficiency through new technologies. This research was aimed at determining the impact of the Xia-Jin Bridge on tourism in Xiamen and Jinmen from the customer value perspective, taking functional value and emotional value as variables for evaluating customer value; we discovered that emotional value is positively related to customer satisfaction, and customer satisfaction is positively related to behavioral intentions. The research focused on Taiwan residents as survey subjects. The survey found that people believe the completion of the Xia-Jin Bridge not only strengthens the relationship between both governments but also provides more convenient transportation, and is monumental with respect to history as well as to the development of local society. It also found that people supported the construction of the Xia-Jin Bridge and were willing to recommend including the bridge in tour plans; it was favorably rated for its potential to develop differentiated tourism.
5.3 Implications for management
This research explored the impacts of functional value and emotional value on customer satisfaction and behavioral intentions, using the impacts of the Xia-Jin Bridge on local tourism development as an example. It shows that corporations or organizations must take many aspects of functional and emotional value into account when undertaking customer value oriented management, and cannot rely on single or partial factors to make decisions.
5.4 Future research
In order to derive real benefits for local tourism from the construction of the Xia-Jin Bridge, we intend to inquire further, based on this research, into the following three areas: 1. Widen applicability: determine whether the conclusions of this research can be applied to other industries. 2. This research determined the impact of the Xia-Jin Bridge on local tourism through two factors, functional and emotional value, for Taiwan residents only; we recommend that subsequent studies take the residents of both China and Taiwan as research subjects, which should enhance the reliability of the findings. 3. This research looked at the functional and emotional values of the Xia-Jin Bridge only; we recommend that subsequent studies incorporate financial or economic factors to better estimate the overall impact and raise both governments' willingness with respect to the construction of the Xia-Jin Bridge.
References
[1] Albrecht, K. Customer Value. Executive Excellence, 1994, pp. 14-15.
[2] Andreassen, T. W. and B. Lindestad. Customer Loyalty and Complex Services: The Impact of Corporate Image on Quality, Customer Satisfaction and Loyalty for Customers with Varying Degrees of Service Expertise. International Journal of Service Industry Management, Vol. 9(1), 1998, pp. 178-194.
[3] Bolton, R. N. and J. H. Drew. A Multistage Model of Consumers' Assessments of Service Quality and Value. Journal of Consumer Research, Vol. 17, 1991, pp. 375-384.
[4] Boulding, W., A. Kalra, R. Staelin and V. A. Zeithaml. A Dynamic Process Model of Service Quality: From Expectations to Behavioral Intentions. Journal of Marketing Research, 1993, pp. 7-27.
[5] Chenet, P., C. Tynan, and A. Money. Service Performance Gap: Re-Evaluation and Redevelopment. Journal of Business Research, Vol. 46, 1999, pp. 133-147.
[6] Churchill, G. A. and C. Surprenant. An Investigation into the Determinants of Customer Satisfaction. Journal of Marketing Research, Vol. 19, 1982, pp. 491-504.
[7] Cronin, J. J. Jr., M. K. Brady, R. R. Brand, R. H. Jr., and D. J. Shemwell. A Cross-Sectional Test of the Effect and Conceptualization of Service Value. The Journal of Services Marketing, Vol. 11, Iss. 6, 1997, pp. 375-391.
[8] Cronin, J. J. Jr., M. K. Brady, and G. T. Hult. Assessing the Effects of Quality, Value, and Customer Satisfaction on Consumer Behavioral Intentions in Service Environments. Journal of Retailing, Vol. 76, Iss. 2, 2000, pp. 193-218.
[9] Engel, J. F., R. D. Blackwell, and P. W. Miniard. Consumer Behavior. 8th ed., Dryden Press, Texas, 1995.
[10] Ennew, C. T. and M. R. Binks. Impact of Participative Service Relationships on Quality, Satisfaction, and Retention: An Exploratory Study. Journal of Business Research, Vol. 46, 1999, pp. 121-132.
[11] Fornell, C. A National Customer Satisfaction Barometer: The Swedish Experience. Journal of Marketing, Vol. 55, 1992, pp. 1-21.
[12] Gerbing, D. W. and J. C. Anderson. An Updated Paradigm for Scale Development Incorporating Unidimensionality and Its Assessment. Journal of Marketing Research, Vol. 25(2), 1988, pp. 186-192.
[13] Gronholdt, L., A. Martensen, and K. Kristensen. The Relationship Between Customer Satisfaction and Loyalty: Cross-Industry Differences. Total Quality Management, Vol. 11, 2000, pp. 509-515.
[14] Hair, J. F. Jr., R. E. Anderson, R. L. Tatham, and W. C. Black. Multivariate Data Analysis. 4th ed., Prentice-Hall, Englewood Cliffs, New Jersey, 1998.
[15] Kenichi, O. The Mind of the Strategist. Harmondsworth: Penguin Books, 1983.
[16] Kotler, P. Marketing Management: Analysis, Planning, Implementation, and Control. 9th ed., New Jersey: David Borkowsky, 1997.
[17] Monroe, K. B. Pricing: Making Profitable Decisions. 2nd ed., McGraw-Hill, New York, 1990.
[18] Morton, J. and M. E. Rys. Price Elasticity Prediction: New Research Tool for the Competitive '80s. Marketing News, Vol. 21, 1987, p. 18.
[19] Oliver, R. L. A Cognitive Model of the Antecedents and Consequences of Satisfaction Decisions. Journal of Marketing Research, Vol. 17, Iss. 4, 1980, p. 460.
[20] Patterson, P. G. and R. A. Spreng. Modeling the Relationship Between Perceived Value, Satisfaction and Repurchase Intentions in a Business-to-Business, Services Context: An Empirical Examination. The International Journal of Service Industry Management, Vol. 8(5), 1997, pp. 415-432.
[21] Sheth, J. N., B. I. Newman, and B. L. Gross. Consumption Values and Market Choices: Theory and Applications. Cincinnati, OH: South-Western Publishing Co., 1991.
[22] Williams, P. and G. N. Soutar. Dimensions of Customer Value and the Tourism Experience: An Exploratory Study. ANZMAC 2000 Visionary Marketing for the 21st Century: Facing the Challenge, 2000, pp. 1415-1421.
[23] Wilson, T. D. An Integrated Model of the Buyer-Seller Relationship. Journal of the Academy of Marketing Science, Vol. 23, No. 4, 1995.
[24] Zeithaml, V. A., L. L. Berry, and A. Parasuraman. The Behavioral Consequences of Service Quality. Journal of Marketing, Vol. 60 (April), 1996, pp. 31-46.
[25] Liao Sen-Kuei and Yang Su-Lan. Research on the Relationship Between Environment, Value, and Customer Satisfaction. National Taipei University of Technology, Conference of Knowledge and Management, November 2003, pp. 319-236.


Studying on Customer Demands Information Processing


Lei Yi, Tang Bingyong

Glorious Sun School of Business and Management, Donghua University, Shanghai, P.R. China, 200051

Abstract  With the development of the economy and society, customers are no longer satisfied with stereotyped products, and the voice of individuation has come up. As customers usually describe their demands in natural language, this presents challenges: the demands often conflict with each other and are often imprecise. This paper studies the operational process of handling customer demands. Two methods are proposed in this research: the first classifies customer demands using natural language processing techniques in order to obtain demand units; the second determines the priority of customer demands. These methods of processing customer demands will help manufacturers adjust their product design and reduce the risk of customer dissatisfaction.
Key words  Customer demand, Semantic decomposition, Priority of customer demands


1. Introduction
The findings in Kalakota and Robinson show that developing a new customer costs more than six times the effort of keeping a present one [1]. Hence, any demand of an existing customer needs to be treated all the more positively. In most situations, however, due to insufficient technical knowledge and lack of awareness about specifications, customers may present demands with inexact meanings. Many service managers report that customer demands are often received as verbally vague descriptions that are hard to handle; such fuzzy customer demands lead to great difficulty in product design. For product design, two important issues need to be addressed: first, existing customers' demands on the features of current products; second, customers' expectations of new product features. However, the voice of the customer is generally expressed in layman's language and not explicitly in terms of product features. In order to process such expressions, a method is needed to decompose and classify them so that the new demands can be explored.

2. Literature Review
2.1 QFD Review
QFD finds extensive use in a variety of industrial applications; thus, QFD is a critical research issue in the fields of design, production and quality engineering [2]. Cristiano et al. have highlighted that the literature in the United States associated with QFD can be categorized into three groups: (i) introductory materials; (ii) surveys and case studies that illustrate the application of QFD; and (iii) extensions and improvements to the QFD methodologies [3]. The extension and improvement of QFD to elicit customer needs has been addressed by Fung and by ReVelle. Fung et al. developed a hybrid method for customer needs elicitation and analysis, preparing quantitative variables for functional requirements [4]. They also present a quantitative mapping between customer attributes and product characteristics [5]. A CRIS system that employs fuzzy sets is also presented to enable the effective interpretation of qualitative customer attributes into quantitative form, so that the mapping between customer attributes and product characteristics can be performed more effectively.
2.2 Semantic analysis in natural language processing
Customer demands are normally expressed in natural language, mostly in layman's terms. For example, customers may require "easy to handle" or "good performance" for a new car. When collecting information on product features from a large group of customers, computerization of the data processing is needed in order to help improve the understanding of customer demands and reduce the processing time. However, the use of computers can result in difficulties in directly extracting information from the customers' expressions. Such difficulties have been frequently discussed, and many applications and techniques have been developed to solve this problem. Among them, Natural Language Processing (NLP) provides a feasible solution [6]. In the field of Artificial Intelligence (AI), the focus of NLP is on the knowledge necessary to understand natural language; AI deals with language as a phenomenon of knowledge representation and use. The objective of NLP can be achieved

by using syntactically driven parsing and semantic grammar [7].
2.3 Ranking or prioritizing customer demands
Two methods are most used to rank customer demands: the potential gain in customer value index and the analytic hierarchy process. Hom proposed the Potential Gain in Customer Value (PGCV) index, which is an extension of a common marketing analysis method [8]. In such an analysis, a survey is designed to obtain customer ratings on the importance of certain product features and on their performance. Through the survey, the priority order of product features is measured along two essential dimensions: (i) the customers' perception of the importance of the product features; and (ii) the performance of the individual features. The two dimensions form an Importance/Performance (IP) chart, and the location in the four quadrants (A, B, C and D) of the IP chart denotes the strategic implication of each product feature. An AHP framework has been used to determine the priority of customer requirements in an industrialized house construction scenario [9]. The customer requirements are decomposed into a hierarchical structure; the relative importance of the customer requirements at each level is determined by pair-wise comparison; the relative importance of each requirement at the lower level is multiplied by the priority weight of its upper level; and the final priorities of the customer requirements are calculated using this bottom-up procedure.

3. Processing the customer demand


As we can see, customer demands have four characteristics:
(1) Fuzziness. The demands are usually fuzzy and indeterminate: customers' demands about the product are ambiguous and unspecific, and customers often use words like "less than" or "in some sort" to express their demands.
(2) Dynamics. The demands change all the time. On one side, customer demands run through the whole product life cycle and take different forms in different phases; on the other side, most customers' demands themselves keep changing over time.
(3) Multiformity. The multiformity of customer demands lies in two aspects: (a) coverage - customer demands include demands on product design, manufacturing, management, performance, etc.; (b) means of expression - customer demands are expressed not only in natural language but sometimes also with figures, tables and symbols.
(4) Priority. The demands rank differently among each other: the importance and satisfaction of customer demands differ, some basic function demands being extremely important and some technology parameters only slightly important.
According to these characteristics of customer demands, we use the following methods to process the demands.
3.1 Demand unit analysis tree
As discussed above, customer demands are fuzzy, inexact and sometimes contradictory, so we should decompose the demands into sub-demands. The decomposition will help to set up the demand model, define the customer demand information and make it easy to understand. Here we define the demand unit as the smallest information unit which is indivisible and can describe the customer demand unambiguously [10]. Customer demands have a hierarchy, and the customer demand model can be constructed by decomposing the demands according to the hierarchy relations [11]. In this paper we use an analysis tree to express the connections in the demand hierarchy structure. We define the demand root, demand unit and sub-demand as follows:
Demand Root (DR): the original customer demand.

$(\forall d)\; DR(d) \Leftrightarrow \neg(\exists d')\, decomposition(d, d')$, where $decomposition(d, d')$ means that demand $d'$ can be decomposed into demand $d$.
Demand Unit (DU): the leaf node in the demand analysis tree, the smallest information unit of customer demands.

$(\forall d)\; DU(d) \Leftrightarrow \neg(\exists d')\, decomposition(d', d)$


513

Sub-demand: the demands between demand root and demand unit.

$(\forall d)\; SubD(d) \Leftrightarrow (\exists d')\, decomposition(d, d') \wedge (\exists d'')\, decomposition(d'', d)$


[Figure 1 shows the analysis tree: the customer demand at the root decomposes into sub-demands, which decompose further until the leaf nodes, the demand units, are reached.]

Fig. 1  Demand unit analysis tree
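The hierarchy of Fig. 1 maps naturally onto a tree data structure. The following is a minimal sketch, with illustrative class and demand names that are not taken from the paper, in which the leaves are the demand units:

```python
class Demand:
    """A node in the demand analysis tree of Fig. 1."""
    def __init__(self, text, children=()):
        self.text = text
        self.children = list(children)

    def demand_units(self):
        """Collect the leaf nodes, i.e. the indivisible demand units."""
        if not self.children:               # no children -> demand unit
            return [self]
        return [u for child in self.children for u in child.demand_units()]

root = Demand("camera demand",
              [Demand("price", [Demand("cheap")]),
               Demand("usability", [Demand("easy to use")])])
print([u.text for u in root.demand_units()])   # ['cheap', 'easy to use']
```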

During the decomposition, the original customer demand should be protected and keep its original meaning. If $d_1, d_2, \ldots, d_n$ are the demand units and $DE(d)$ is the demand information expressed by demand $d$, then:
$DE(d_1) \cup DE(d_2) \cup \cdots \cup DE(d_n) = DE(d)$.
3.2 Demand decomposition
We should decompose the original fuzzy and ambiguous customer demand in order to model the analysis tree. There are two important questions: how to control the decomposition granularity, i.e. when to stop decomposing; and how to decompose the demands. The process of demand decomposition can be figured as follows:
[Figure 2: demands in the customer's mind are expressed as fuzzy demands, refined by semantic analysis into exact and clear demands, and decomposed by the decomposition rules into demand units.]

Fig. 2  Process of demand decomposition

If $d_1, d_2, \ldots, d_n$ are the demand units, there must be no repetition between them. The rules of decomposition can be expressed as:


$DE(d) = DE(d_1) \cup DE(d_2) \cup \cdots \cup DE(d_n)$, and $(\forall d_i, d_j)\; DE(d_i) \cap DE(d_j) = \emptyset \;\; (i \neq j)$.
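The two rules above - complete coverage and pairwise disjointness - can be checked mechanically if the demand information is modeled as sets. A toy sketch, with hypothetical demand contents:

```python
def valid_decomposition(DE_d: set, units: list) -> bool:
    """Check coverage (union of units equals DE(d)) and pairwise disjointness."""
    covers = set().union(*units) == DE_d
    disjoint = all(not (units[i] & units[j])
                   for i in range(len(units))
                   for j in range(i + 1, len(units)))
    return covers and disjoint

demand = {"cheap", "easy to use"}
print(valid_decomposition(demand, [{"cheap"}, {"easy to use"}]))           # True
print(valid_decomposition(demand, [{"cheap"}, {"cheap", "easy to use"}]))  # False: overlap
```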
Customers usually use natural language to describe their expectations. These descriptions contain the customers' demands for the products, and we can mine the demands from the words through semantic rules. For example, consider the customer demands for a camera:

[Figure 3: the expression "The camera should be a bargain and easy to use" is semantically segmented into customer demand information such as "cheap" and "good quality".]

Fig. 3  Demand semantic segmentation

3.3 Customer demand priority rating
To determine the priority of demand units, the fuzzy IF-THEN rules should be customer driven: the action part of the rules should reflect customer needs for a specific demand unit. Demand units with high importance ratings should have higher priorities than those with low importance ratings, and priority should be discussed together with customer satisfaction. We can represent the relationship between importance, satisfaction and priority as follows:
Tab. 1  The relationship between importance, satisfaction and priority

                        Satisfaction
Importance      Low               Medium       High
High            Extremely high    Very high    Slightly high
Medium          High              Medium       Slightly low
Low             Low               Very low     Extremely low

According to the table, the following rules are developed:
1. IF importance is high AND degree of satisfaction is low, THEN priority is extremely high.
2. IF importance is high AND degree of satisfaction is medium, THEN priority is very high.
3. IF importance is high AND degree of satisfaction is high, THEN priority is slightly high.
4. IF importance is medium AND degree of satisfaction is low, THEN priority is high.
5. IF importance is medium AND degree of satisfaction is medium, THEN priority is medium.
6. IF importance is medium AND degree of satisfaction is high, THEN priority is slightly low.
7. IF importance is low AND degree of satisfaction is low, THEN priority is low.
8. IF importance is low AND degree of satisfaction is medium, THEN priority is very low.
9. IF importance is low AND degree of satisfaction is high, THEN priority is extremely low.
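Because importance and satisfaction each take one of three crisp labels here, the nine rules reduce to a lookup table. A minimal sketch (a full fuzzy treatment would add membership functions, which the paper does not specify):

```python
PRIORITY = {
    ("high",   "low"):    "extremely high",
    ("high",   "medium"): "very high",
    ("high",   "high"):   "slightly high",
    ("medium", "low"):    "high",
    ("medium", "medium"): "medium",
    ("medium", "high"):   "slightly low",
    ("low",    "low"):    "low",
    ("low",    "medium"): "very low",
    ("low",    "high"):   "extremely low",
}

def priority(importance: str, satisfaction: str) -> str:
    """Rules 1-9: map (importance, satisfaction) to a priority label."""
    return PRIORITY[(importance, satisfaction)]

print(priority("high", "low"))      # extremely high (rule 1)
```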


4. Conclusions
This research mainly focused on the processing of customer demands. First, we defined the demand unit and modeled the demand unit analysis tree; second, we used semantic analysis to decompose the demands; third, we determined the priority of demand units. The results show that although customer demands are usually described in layman's terms, they can still be processed using semantic analysis; and given the ratings of importance and degree of satisfaction for customer demands, the fuzzy method can be used to determine their priority. Based on the priority of demands, manufacturers can adjust their product design and reduce the risk of customer dissatisfaction.
References
[1] Kalakota, R. and Robinson, M. E-business: Roadmap for Success. New Jersey: Addison-Wesley, 1999.
[2] Cohen, L. Quality Function Deployment: How to Make QFD Work for You. Addison-Wesley, Reading, MA, 1995.
[3] Cristiano, J.J., Liker, J.K. and White, C.C. Key factors in the successful application of quality function deployment. IEEE Transactions on Engineering Management, 2001, 48(1), 81-95.
[4] Fung, R.Y.K., Popplewell, K. and Xie, J. An intelligent hybrid system for customer requirements analysis and product attribute targets determination. International Journal of Production Research, 1998, 36(1), 13-34.
[5] ReVelle, J.B., Moran, J.W. and Cox, C.A. The QFD Handbook. Wiley, New York, NY, 1998.
[6] Obermeier, K.K. Natural Language Processing Technologies in Artificial Intelligence. Ellis Horwood, Chichester, West Sussex, England, 1987.
[7] Carbonell, J.G. Natural language understanding. In: Shapiro, S.C. (ed.), Encyclopedia of Artificial Intelligence, Wiley, New York, NY, 1992, 997-1015.
[8] Hom, W.C. Make customer service analyses a little easier with the PGCV index. Quality Progress, 1997, 30(3), 89-93.
[9] Armacost, R.L., Componation, P.J., Mullens, M.A. and Swart, W.W. AHP framework for prioritizing customer requirements in QFD: an industrialized housing application. 2004, 26(4), 72-79.
[10] Magro, D. and Torasso, P. Description and configuration of complex technical products in a virtual store. In: ECAI 2000 Workshop on Configuration, Berlin, 2000.
[11] O'Sullivan, Barry. Constraint-based Product Structuring for Configuration. In: Proceedings of the 15th European Conference on Artificial Intelligence (ECAI 2002), Amsterdam: IOS Press, 2002, 41-46.


The Evaluation of Knowledge Management Performance Based on AHP


Liu Peide1,2
1 Economic Management School, Beijing Jiaotong University, Beijing, China, 100044
2 Information Management School, Shandong Economic University, Jinan, China, 250014

Abstract  The evaluation of knowledge management performance is not only an important way to know the level of an enterprise's knowledge management but also a necessary component of it. This paper first describes the current situation of knowledge management performance evaluation at home and abroad; an indicator system for knowledge management performance evaluation and an AHP-based model are then presented on the basis of the related literature, together with an example study. The example demonstrates that the analytic hierarchy process method performs well in this evaluation.
Key words  Knowledge-based systems, AHP, Performance evaluation

1 Introduction
Following the appearance of the knowledge economy, knowledge, as organizational intellectual capital, has become the first strategic resource in economic development. Moreover, knowledge management is an important instrument for improving organizational competitiveness and driving the development of the social economy. As Peter F. Drucker, the master of management, said in Knowledge Management: if scientific management, at the beginning of the industrial economy, was the first revolution of enterprise management, then as human beings head for the 21st century, the management of enterprises all over the world will welcome the second revolution - knowledge management - with the coming of the knowledge economy [1]. Carl Frappaolo, executive vice-president of the American Delphi Group and an enterprise knowledge management consultant, holds that knowledge management is the application of collective wisdom to improve overall responsiveness and innovation, a newly provided approach to let enterprises realize the sharing of explicit and implicit knowledge [2]. In enterprises, whoever masters the most up-to-date knowledge, invents and creates new knowledge, and produces use value with more knowledge will obtain the predominant position in the coming competition [3]. In short, knowledge management puts emphasis on changing the management pattern to improve competitiveness through knowledge and the capital of talented people; it applies itself to changing employees' work attitudes and behaviors through modern instruments and to creating an open and trustful interior environment in the enterprise, which makes the employees collaborate and share knowledge resources, accomplish more difficult tasks, bring more benefit and achieve higher goals. A system is needed to measure and evaluate the effect of enterprise knowledge management. Such a system lets managers see the shortcomings of management, know where the key factors lie, and provides theoretical evidence for the organization to improve its knowledge management; so the evaluation of the effect of knowledge management is an essential part of the knowledge management process. Knowledge management performance evaluation, which can reflect an organization's knowledge management status and its future developing trend, is an important way for the organization to know the level of its knowledge management: it lets the enterprise compare knowledge levels between earlier and later periods, or with other related enterprises, and find experiences and lessons; it helps make better use of the existing advantages of knowledge management, find problems in knowledge acquisition, sharing, innovation, utilization, etc., and give corrective measures aimed at these problems; it provides decision-making evidence for giving the right direction to the development of enterprise knowledge management, achieving the purpose of further improving the knowledge management level and enhancing the enterprise's competitiveness; meanwhile, it can also validate the regular patterns brought forward by knowledge management research, find the new problems to be solved in enterprise knowledge management, and

impel the development of enterprise knowledge management science.

2 The current situation of knowledge management performance evaluation


Abroad, Edvinsson built a model that can evaluate intellectual capital, based on analyzing and evaluating an insurance company in Sweden; Sveiby adopted dynamic indicators to evaluate the value of knowledge capital; and Verna Allee, a famous scholar, proposed a suite of methods for evaluating intellectual capital [4]. Allee's method specifies 20 questions; experts choose yes or no according to the actual conditions of the evaluated company, and the more "no" choices there are, the more the evaluated company should concentrate its energy on the management of intellectual capital. Szala Marek proposed an indicator system, covering organizational capital, manpower capital, technological capital and market capital, that can measure knowledge management performance [5]. At home, Wei Jiang describes the key ability of the enterprise in terms of knowledge and proposes a fuzzy appraisal model of the enterprise's key synthetic ability [6]; the model is built on the inherent combination of the enterprise's knowledge and ability and on the external manifestation of the comprehensive result. Wei Jiang also presents an appraisal and measuring model of the enterprise's technological ability [7], holding that the key to an enterprise's ability is the knowledge attached to the organization and its individuals. Doctor An Jin, of the Institute of Industrial Economics of CASS, tested and evaluated enterprises' competitiveness by statistical methods. Huang Lijun carried out appraisal research on enterprise knowledge management systems using indicators of project management, communication ability, intellectual agency function, maintainability, security, practicability, technological simplicity, file library function and so on [8]. Zhu Qihong uses five indicators - the proportion of R&D expenditure to income from product sales, the number of new products developed, the number of patents, the proportion of new products and new crafts produced, and the proportion of training expense to product sales income - and adopts a BP neural network model to appraise enterprise knowledge management [9]. Liu Xisong et al. build the indicator system for appraising the implementation effect of knowledge management from five factors: engineering level, labor productivity, enterprise innovation ability, customer satisfaction, and competitive position [10]. Jiang Ronghua proposed 26 second-class indicators based on a first-class indicator system of the enterprise's organizational capital, manpower capital, technological capital and market capital [11].

3 The indicator system of knowledge managements performance evaluation


The restricting factors of knowledge management performance form a multilayer dynamic system; the factors involved are many and the structure is rather complex, so, in order to reflect the performance correctly, we should design the indicator system from diverse angles and layers. This paper, based on the literature [11] and according to the practice of famous knowledge management enterprises at home and abroad, such as Intel, IBM, Haier, Lenovo and so on, proposes the indicator system shown in Table 1.
Table 1  Evaluation indicator system of knowledge management performance


4 The analytic hierarchy process method for knowledge management evaluation


4.1 The ascertainment of evaluating factors
The set of evaluating factors (Table 1) is the collection of the knowledge management system's evaluating indicators.
4.2 Computing the weighted set of evaluating factors
The analytic hierarchy process, AHP for short, expresses a complex decision-making problem as an ordered, step-up hierarchy structure, computes the relative importance of the various decision-making behaviors, schemes and objects under each criterion and the overall criterion, and then ranks them according to this measure, providing decision-making evidence for decision-makers [12]. The steps for solving real problems using the AHP method are as follows:
(1) Establishing the problem's step-up hierarchy structure. According to an elementary analysis, divide the factors into several groups, each group presenting one hierarchy level, and rank them in sequence: the top layer, several relative middle layers and the bottom layer. The top layer presents the purpose of solving the problem, at which AHP aims; the middle layers are the intermediate links involved in reaching the purpose, namely the tactic layer, restriction layer, criterion layer, etc.; the bottom layer displays the measures or policies used to solve the problem.
(2) Determining the comparative judgment matrix. The judgment matrix presents the comparative importance of this layer's factors with respect to a factor of the upper layer. Supposing that factor Ak of layer A is related to the factors B1, B2, ..., Bn of the next layer, the judgment matrix is constituted as in Fig. 1, where bij presents the relative importance of Bi compared to Bj with respect to factor Ak. It is crucial to determine these weights; two methods are usually adopted: expert decision and individual subjective decision [12]. Expert decision invites relatively specialized experts, considering the content of the evaluation problem, to make pairwise comparisons between factors according to a form of expert suggestions designed in advance; the judgment matrix is constituted by filling in the results of the comparisons, and the experts' judgment matrices are then synthetically analyzed and computed to obtain the problem's ordered weight values. The individual subjective decision constitutes the judgment matrix from the comparisons made according to one person's own cognition and understanding. This paper adopts the first method and lets the experts give their determination of the mutual importance degree of each layer of the indicator system.

Fig. 1. Judgment matrix

AHP adopts the 1-9 marking method, proposed by Saaty, to constitute the judgment matrix. Obviously, for the judgment matrix we have:

$b_{ij} = 1/b_{ji}, \quad b_{ii} = 1. \qquad (1)$

(3) The single hierarchy sort. The single hierarchy sort computes the weight values of this layer's factors with respect to a factor of the upper layer. It comes down to computing the eigenvector and eigenvalue of the judgment matrix B, that is, the eigenvector and eigenvalue satisfying formula (2):

$BW = \lambda_{\max} W \qquad (2)$

where $\lambda_{\max}$ is the maximum eigenvalue of B and W is the normalized eigenvector corresponding to $\lambda_{\max}$. Adopting the square root method, compute it as:



$W_i = \dfrac{\left(\prod_{j=1}^{n} b_{ij}\right)^{1/n}}{\sum_{i=1}^{n}\left(\prod_{j=1}^{n} b_{ij}\right)^{1/n}}, \quad i, j = 1, 2, \ldots, n \qquad (3)$

so that $W = (W_1, W_2, \ldots, W_n)$ is the eigenvector we are after, and

$\lambda_{\max} = \dfrac{1}{n} \sum_{i=1}^{n} \dfrac{(BW)_i}{W_i} \qquad (4)$

where $(BW)_i$ denotes the i-th component of $BW$.

(4) The test of consistency. Each judgment can hardly reach complete consistency because of the complexity of objective things and the diversity of individuals' subjective judgments. In order to make the result of the AHP method basically reasonable, we need to test the consistency of each judgment matrix using formula (5):

$CR = CI / RI, \quad CI = (\lambda_{\max} - n)/(n - 1). \qquad (5)$

Here CR is the random consistency ratio of the judgment matrix and RI is the average random consistency indicator of the judgment matrix. The RI values for matrices of order 1-10 are given in Table 2:
Table 2  The average random consistency indicator RI for judgment matrices of order 1-10

Here n is the order of the judgment matrix. When CR < 0.10, we consider the judgment matrix to have satisfactory consistency; otherwise, we should adjust it to obtain satisfactory consistency.
(5) The whole hierarchy sort. The whole hierarchy sort computes the weight values of all factors of this layer with respect to the upper layer, by taking advantage of all the single hierarchy sort results in the same layer. For the top layer, the single hierarchy sort is just the whole hierarchy sort.

Fig. 2  The matrix of the whole hierarchy sort
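The single hierarchy sort and consistency test of formulas (3)-(5) can be sketched in a few lines of code. The RI values below are the commonly tabulated Saaty values, supplied here as an assumption because the body of Table 2 is not reproduced above; the 3x3 judgment matrix is hypothetical:

```python
import numpy as np

# Saaty's average random consistency indices for orders 1..10 - supplied as
# commonly tabulated values (an assumption; Table 2's body is not reproduced).
RI = [0.0, 0.0, 0.58, 0.90, 1.12, 1.24, 1.32, 1.41, 1.45, 1.49]

def ahp_weights(B: np.ndarray):
    """Square root method: weights (3), lambda_max (4) and consistency (5)."""
    n = B.shape[0]
    w = np.prod(B, axis=1) ** (1.0 / n)     # n-th root of each row product
    w /= w.sum()                            # normalize -> formula (3)
    lam_max = np.mean((B @ w) / w)          # formula (4)
    CI = (lam_max - n) / (n - 1)            # formula (5)
    CR = CI / RI[n - 1] if RI[n - 1] > 0 else 0.0
    return w, lam_max, CR

# Hypothetical 3x3 judgment matrix on Saaty's 1-9 scale
B = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w, lam, CR = ahp_weights(B)
print(w.round(3), round(lam, 3), round(CR, 3))   # CR < 0.10 -> acceptable
```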

4.3 Establishing the object evaluating formula
The object evaluating formula is a linear function over the indicators of the layer:

$Y = I_1 C_1 + I_2 C_2 + \cdots + I_n C_n \qquad (6)$

where $I_j$ is the weight resulting from the whole hierarchy sort of layer B, and $C_j$ is the average value given by experts corresponding to indicator $B_j$.


5 The performance appraisal of knowledge management and demonstration research


5.1 Constructing the judgment matrices
Via the investigation of 15 experts, the judgment matrices were constructed and their eigenvectors computed and consistency examined: (1) judgment matrix A-A1; (2) judgment matrix A1-B; (3) judgment matrix A2-B; (4) judgment matrix A3-B; (5) judgment matrix A4-B; (6) judgment matrix A5-B.

By computing, the above judgment matrixes all satisfy consistency and every right has no logical mistake. We can get a clearly chief table that indicate the appraising knowledge of enterprises information management guideline system, in other words is arrangement compositor, after confirming overall target, decomposing target, building appraisal standard system and finishing every righted target(table 3), the numbers in the right bracket is the right of itself higher-up target, the overall right express the right of the relatively second class target. 5.2 Demonstration analysis Appraising some electrical information enterprises knowledge management performance indicators by 10 experts, the average is as follows: Bi Ci 76 1 80 2 57 3 70 4 78 5 73 6 65 7 75 8 88 9 92 10 94 11 96 12 74 13 74 14 75 15 68 16 75 17 87 18

We can get the objective appraisal by substituting $I_i$ and $C_i$ into equation (6): the overall knowledge management score is 78.
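The score of 78 can be reproduced directly from formula (6), using the overall weights Ij of Table 3 and the expert scores above:

```python
import numpy as np

# Overall weights I_j from Table 3 and average expert scores C_j from section 5.2
I = np.array([0.05837, 0.14678, 0.03157, 0.01225, 0.03919, 0.10269,
              0.02243, 0.02034, 0.06718, 0.03698, 0.02927, 0.01851,
              0.00521, 0.00521, 0.19183, 0.06587, 0.04332, 0.10309])
C = np.array([76, 80, 57, 70, 78, 73, 65, 75, 88,
              92, 94, 96, 74, 74, 75, 68, 75, 87])
print(round(I @ C))     # Y = sum(I_j * C_j) from formula (6) -> 78
```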

6 Conclusion
Knowledge management performance evaluation is a complex systems project whose aim is to find the weaknesses of the enterprise and remedy them, thereby accelerating the improvement of organizational competitiveness. Based on the literature [11], this paper briefly appraises the indicator system, builds a hierarchical evaluation mathematics model, and establishes an overall appraisal system for knowledge management performance. The example confirms that the analytic hierarchy process method achieves the desired effect in this evaluation.


Table 3  The overall ranking of the indicator system for appraising enterprise knowledge management performance

Indicator 1 (weight)                 Indicator 2 (weight)                               Overall weight Ij
A1 Organizational capital (0.2490)   B1 Management structure (0.2344)                   0.05837
                                     B2 Enterprise system (0.5895)                      0.14678
                                     B3 Enterprise culture (0.1268)                     0.03157
                                     B4 Staff satisfaction (0.0492)                     0.01225
A2 Human capital (0.1643)            B5 Knowledge structure (0.2385)                    0.03919
                                     B6 Updating knowledge capacity (0.6250)            0.10269
                                     B7 Professional skills (0.1365)                    0.02243
A3 Technology capital (0.1245)       B8 Technologically advanced level (0.1634)         0.02034
                                     B9 New technological achievements (0.5396)         0.06718
                                     B10 R&D input level (0.2970)                       0.03698
A4 Market capital (0.0582)           B11 Market share (0.5030)                          0.02927
                                     B12 Sales growth (0.3181)                          0.01851
                                     B13 Customer satisfaction (0.0895)                 0.00521
                                     B14 Customer relations (0.0895)                    0.00521
A5 Knowledge systems (0.4041)        B15 Information systems (0.4747)                   0.19183
                                     B16 Information access (0.1630)                    0.06587
                                     B17 Knowledge overt transformation (0.1072)        0.04332
                                     B18 Utilization of knowledge resources (0.2551)    0.10309

References

[1] Peter Drucker. Knowledge Management. Beijing: China Renmin University Press, 1999.
[2] Xing Mian et al. Application in the Management of Knowledge Evaluated of Fuzzy Multistage Appraisal. Operations Research and Management Science, 2004, 3: 86-89.
[3] Wei Jing. Nature of Knowledge and Firm's Knowledge Management. Science Research Management, 2000, 21.
[4] Verna Allee. Knowledge Evolution. 1998.
[5] Szala Marek. Two-level Pattern Recognition in a Class of Knowledge-based Systems. Knowledge-Based Systems, 2002, Vol. 15(1): 95-101.
[6] Wei Jing. New Strategic Thinking on Enterprise Acquisitions: Enterprise Acquisitions and Integration Management Model Based on Core Competence. Science Press, 2002.
[7] Wei Jing. Technological Capability of Enterprise. Science Press, 2002.
[8] Huang Lijun. Methods of Evaluation of Enterprise Knowledge Management System. Information Studies: Theory & Application, 2002(4): 273-275.
[9] Zhu Qihong. An Evaluation Model on the Knowledge Management of Enterprise Based on the Artificial Networks. Science of Science and Management of S.&T., 2003(8): 32-34.
[10] Liu Xisong, et al. The Appraisal Model of Knowledge-based Management. Commercial Research, 2004(1): 1-2.
[11] Jiang Ronghua. Research on the Evaluation of Knowledge Management Performance. Central South University, 2004.
[12] Zhang Zhifeng. Research on Library Website of University Based on Quantitative Analysis. Wuhan University, 2004: 11-12.
[13] Li Enke, Xu Guohua. Comprehensive Evaluation of Information Systems Using the Analytic Hierarchy Process. Journal of the China Society for Scientific and Technical Information, 1998(6).


A Study of Pricing Patterns for Keyword Advertising Auction


Liu Shulin, Rong Wenjin
School of International Trade and Economics, University of International Business and Economics, P.R. China, 100029

Abstract  Which pricing pattern for keyword advertising is the most profitable: pay-per-impression, pay-per-click, pay-per-call, or pay-per-sale? In this paper, we address this problem. We set about this work from the two processes of keyword advertising: purchasing and publishing. We argue that behavior in the purchasing process is based on how the advertising will be used, i.e. published; hence, we formally define keyword advertising by investigating its publishing process. To compare the various pricing patterns, we set up a direct mechanism, and we use the revenue equivalence theorem (RET) and other routine methods of auction theory to solve the above problem. One of our main conclusions, based on reasonable assumptions, is that the expected revenues of the search engine are the same under pay-per-call and pay-per-sale if payment happens after click-through.
Key words  keyword advertising, RET, pay-per-click, pay-per-call, pay-per-sale, affiliated signals

Introduction

There is a relatively universal phenomenon: a new thing often has more than one name. So it is with keyword advertising: both "sponsored search" and "keyword(s) advertising (auction)" are prevalent, some researchers employ the term "search engine advertising" [3], and "paid search advertising" appears in a few papers, such as [7]. The phenomenon implies that keyword search marketing is still in its infancy and that more innovation in the field will continue to emerge; this is what attracts us to study it. We can divide these names into two categories according to their last word: one ends in the word "advertising", the other in "auction". In fact, from the perspective of the advertisers, the auction is the purchasing process for the rights to publish ads, and advertising is the process of exercising those rights. Paying attention to the different processes leads to two kinds of perspectives and methods for investigating the new economic mechanism, though the two are not completely opposed. Much of the previous research on the issue has focused on its auction features. For example, [5] investigates the generalized second price auction (GSP), which is used by Google and other search engines, and [1] presents a truthful auction for keyword auctions. Few researchers have studied the keyword auction from the perspective of advertising; only [8] sets up a model involving some concepts from advertising. In this paper, we hold that the advertisers' behaviors in the auction depend on how the auctioned object is used, and hence our analysis begins with the process of publishing ads; see section 2. From the start, the search engines have been concerned with how to set prices [6]. Search engines have already created various pricing models since the appearance of Internet advertising; in fact, a brief history of Internet advertising is one of rapid evolution of pricing models. When advertising was first integrated into the web, the pricing model of online ads was the same as for offline ones: web sites charged a fixed fee for either a given period (for example, one year, just as for outdoor ads) or a number of showings (typically one thousand, just as for print ads) on a site. This pricing pattern is called pay-per-impression in this paper. In 1997, Overture introduced auctions to sell search advertising, which moved online advertising into an important stage - keyword advertising. The basic idea behind the mechanism is that advertisers bid their willingness-to-pay for the right to have their advertising link displayed on the result page associated with a specific search term (for instance, "digital cameras") and then pay only when a user actually clicks on the ad's link (hence, pay-per-click). Although Google redesigned the payment rule and adopted GSP in 2002, it does not go beyond this idea. The idea is great: it creates a billions-of-dollars market for search engines and makes keyword advertising the fastest-growing part of the advertising industry [4]. However, the pay-per-click mechanism cannot avoid click fraud, in which rivals or even the search engines
Keyword advertising is a form of targeted online advertising provided by search engines, in which ads are displayed in order on the result page associated with a specific keyword. For example, when users input "mp3" in the search box, the search engine sends the users to the result page, which includes ads for selling mp3 players along with the natural search results. Most search engines employ auctions to allocate the positions to advertisers; hence it is also referred to as keyword auction.


employ software-powered websites to generate lots of ineffective clicks for advertisers. A natural extension is pay-per-call advertising. Ingenio pioneered this pricing pattern in 1999, and Findwhat and AOL signed up as Ingenio's partners in 2004 and 2005, respectively. We use Findwhat as an example to explain this pattern. The order of the advertising displayed on the result page for a specific keyword still depends on the bids the advertisers put in. But when a user clicks on the ad's link, she is sent not to the website of the advertiser but to a page managed by Findwhat, which includes a brief depiction of the advertiser and its offers. Along with the ad's link, there is a free phone call button on the result page, and Findwhat charges the advertisers only when the users click on the button and talk live to the advertisers. Obviously, this pattern can solve the problem of click fraud to a large extent. In addition, the users who use the free phone button have a stronger desire to purchase products than the users who merely click on the ad's link; therefore, the advertisers will prefer this pattern and put in higher bids under it than under the pay-per-click pattern. Even so, pay-per-call is not the ultimate end-result: the holy grail of advertising should be pay-per-sale. This is not only a theoretical design; in the real world, Bill Gross, who invented the pay-per-click model in 1997, founded a search engine, SNAP, which has started to provide this service [4]. For instance, United Airlines places ad links on SNAP's search pages, but it pays nothing when somebody clicks or calls - only when somebody actually buys a ticket. After all, both pay-per-call and pay-per-sale are far from being as mature as pay-per-click in the real world, which gives us plenty of room to study them. A basic and important problem is which pricing pattern is best for the search engine or the advertisers. We address this question in sections 3 and 4: in particular, we set up a direct mechanism to depict a kind of keyword advertising in section 3 and solve it in section 4. The conclusions are arranged in section 5.
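A back-of-envelope way to see why the per-event price must rise as the charged event moves deeper into the funnel: under the same traffic, each successive pattern charges for ever fewer events. The conversion rates below are hypothetical illustrations, not estimates from the paper:

```python
impressions = 1000
p_click, p_call, p_sale = 0.05, 0.20, 0.30   # each conditional on the previous action

charged_events = {
    "pay-per-impression": impressions,
    "pay-per-click":      impressions * p_click,
    "pay-per-call":       impressions * p_click * p_call,
    "pay-per-sale":       impressions * p_click * p_call * p_sale,
}
for pattern, n in charged_events.items():
    print(f"{pattern}: {n:g} charged events per {impressions} impressions")
```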

What is keyword advertising?

In this section, we investigate the publishing process of keyword advertising, that is, the interactive process between the audience and the ads. Advertising can stimulate audiences to take a sequence of actions and eventually to buy the products or services; conversely, these actions can be regarded as states of an ad reached by the audience. This idea is formally presented as follows. To understand the information provided by keyword advertising, the audience of keyword advertising often takes a sequence of actions, such as browsing the ad's link text, clicking the ad's link, and so on. From the advertiser's perspective, the sequence of these actions forms the publishing process of the advertising. We use the action set, $a = (a_1, a_2, \ldots, a_l)$, to denote the sequence of these actions, where l is the number of all potential actions. For example, under the pay-per-call model, the action set may be

$a = (a_1, a_2, a_3, a_4, a_5) = (searching, browsing, clicking, calling, buying) \qquad (1)$

Among all the actions the users take, two kinds of actions are central. One is click-through, namely "clicking (the ad link)" in expression (1), denoted by $a_c$, $1 \le c \le l$. Click-through is one of the important features of Internet advertising, and the two key assumptions in this paper involve this action; we discuss the two assumptions shortly. The other central action is called the typical action, $a_t$, $1 \le t \le l$: once a user takes this action on an ad, the advertiser pays a price to the search engine. Thus, which action the search engine chooses as the typical action is the central problem for it; this is the main issue in this paper.
i we say the advertising i lies in the state ak for the audience. In other words, the state of advertising indicates

how far the advertiser and the audience communicate. For example, when a user clicks on ads link (one action) under pay-per-call model adopted by Findwhat, the state of the exposed advertising changes from brief ads link text to detailed introduction of firm and offers. Apparently, the latter state is far deeper than the predecessor for

Since one advertiser is allowed to publish just one ad, we use the same letter, i, to denote both the advertiser and its ad in this paper. This does not lead to confusion.


interactive communication. Generally, when an audience member takes action $a_k$ on keyword advertising i, the state of advertising i changes from $a_{k-1}^i$ to $a_k^i$. Therefore the sequence of states, $a^i = (a_1^i, a_2^i, \ldots, a_l^i)$, is called the state set for ad i. Either the action set or the state set can depict the interactive process of a given keyword advertisement. Henceforth, to avoid cumbersome notation, we will use $a = (a_1, a_2, \ldots, a_l)$ to denote the state set or the action set. For any two states $a_r, a_s$ with $1 \le r < s \le l$, we define the transition probability $\pi(r, s)$ as the probability that the state of an ad changes from $a_r$ to $a_s$. Mathematically, the transition probability can be expressed through the conditional probability of the state $a_s$ given the state $a_r$, that is, $\pi(r, s) = \Pr(a_s \mid a_r)$. For $1 \le r < s < t \le l$, we have $\pi(r, t) = \pi(r, s) \cdot \pi(s, t)$. It is important to put some constraints on $\pi(r, s)$; in this paper, there are two basic assumptions.
Assumption 1. When $1 \le r < s \le c$, $\pi(r, s)$ depends on the position of the ad slot on the result page and thus on the bidders' bids. Thus, we rewrite it as $\pi_j(r, s)$, $1 \le j \le J$, where J denotes the total number of advertising positions at auction.
Assumption 2. When $c < r < s \le l$, $\pi(r, s)$ does not depend on the position of the ad slot on the result page and is irrelevant to the bidders' bids.
Note that $a_c$ denotes click-through throughout this paper. The two assumptions imply that, for an ad i, the predecessor states of $a_c^i$ are related to the bids the advertisers (bidders) put in, while the successor states of $a_c^i$ are related to the other private information the advertisers possess apart from valuation. Intuitively, the order of ad slots on the result page can only affect the probability that users choose ads; once a user fixes on a particular ad and clicks it, the initial order cannot affect her subsequent behavior on the ad. Combining these two components - the action set and the transition probabilities - we can characterize keyword advertising from the viewpoint of the publishing process.
Definition 2.1 (keyword advertising) Keyword advertising $K = (a, \pi)$ has two components: a set of possible actions $a = (a_1, a_2, \ldots, a_l)$ for all users and a transition probability vector $\pi(r, r+1) = \Pr(a_{r+1} \mid a_r)$.
According to this definition, $a_l$ denotes the last action the user takes, that is, buying the product or sending a verification message to the search engine, just as on most auction websites.
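Definition 2.1 can be sketched as a small data structure: by the compounding property $\pi(r, t) = \pi(r, s)\,\pi(s, t)$, any transition probability is a product of one-step probabilities. The action names follow expression (1); the probability values are hypothetical:

```python
import math
from dataclasses import dataclass

@dataclass
class KeywordAd:
    """Keyword advertising K = (a, pi): actions plus one-step transitions."""
    actions: tuple       # (a_1, ..., a_l)
    step: tuple          # step[k] = pi(k+1, k+2) = Pr(a_{k+2} | a_{k+1})

    def pi(self, r: int, s: int) -> float:
        """pi(r, s): product of one-step probabilities from a_r to a_s."""
        return math.prod(self.step[r - 1:s - 1])

ad = KeywordAd(actions=("searching", "browsing", "clicking", "calling", "buying"),
               step=(0.6, 0.1, 0.2, 0.5))
print(ad.pi(1, 3))   # Pr(clicking | searching) = 0.6 * 0.1 = 0.06
print(ad.pi(3, 5))   # Pr(buying | clicking)    = 0.2 * 0.5 = 0.10
```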

3 A Direct Mechanism

Now we investigate the purchasing process of keyword advertising. Generally, a selling mechanism has three main components: a set of valuation messages for each bidder, an allocation rule, and a payment rule. In this section we depict the keyword auction as a mechanism with these components; some important results are deduced from it in the next section.

For a given term, suppose there are $J$ advertising positions for auction and $I$ advertisers (bidders). We index the positions by $j = 1, 2, \ldots, J$ and the bidders by $i = 1, 2, \ldots, I$. We ignore the reserve price, so any participant gains exactly one position allotted by the auctioneer according to the allocation rule; we thus assume $I \le J$. In addition, we only consider the case in which the advertisers pay their bids when the typical action appears, that is, generalized first pricing.

What is the object sold in the keyword auction? This is a basic but difficult question. In most of the literature, advertising positions are regarded as the objects for sale, so the keyword auction becomes an example of a multi-object auction. Here, however, a bidder submits just one price even though the advertising positions differ, and a bidder is allocated only one position; these features make a big difference between the keyword auction and multi-object auctions. In this paper the answer is one occurrence of the typical action taken by a potential buyer (i.e., a user of the search engine, or an audience of the advertising), so we view the keyword auction as a single-object auction. This view is also natural since payment happens exactly when the user takes the typical action; the bidders therefore reckon the value of the typical action for themselves.

3.1 The Valuation of Typical Action

Now we investigate how to value the typical action. From the perspective of an advertiser $i$, the valuation of the typical action, $x_i$, can be divided into two parts: the marginal benefit of one effective showing, $v_i$ (an effective showing of keyword advertising implies that the audience of the ad eventually becomes a consumer), and the probability, $\rho$, that the state of $i$'s ad transits from $a_t$ to $a_l$. Hence the valuation of the typical action can be written as

$$x_i = x(v_i, t) = v_i\,\rho \qquad (2)$$

When it is not necessary to be explicit about identity subscripts, we suppress them.

Although bidder $i$ knows her marginal benefit $v_i$ and the value of $t$, she does not know the exact value of the typical action, since $\rho$ is unknown to her: $\rho$ depends on the order of her ad on the result page and hence on all bidders' private information. The valuation structure (2) is therefore interdependent, or affiliated. Bidder $i$'s private information $v_i$ is summarized as the realization of a random variable $V$ and is called $i$'s signal; $\rho$ therefore involves the signals of the other bidders. To compute the affiliated valuation, we assume that all bidders are risk-neutral, so bidder $i$ evaluates $\rho$ by its expectation, as shown below:

$$E(\rho) = \sum_{j=1}^{J} P_j(v_i)\,\rho_j(t,l), \quad 1 \le t \le l \qquad (3)$$

where $P_j(v_i)$ is the probability that the bidder with signal $v_i$ gets the $j$th position on the result page, and $\rho_j(t,l)$ denotes the conditional probability $\Pr(a_l \mid a_t)$ when her ad lies in the $j$th position. So we can rewrite the valuation function of bidder $i$ with signal $v_i$ as

$$x_i = x(v_i, t) = v_i \sum_{j=1}^{J} P_j(v_i)\,\rho_j(t,l), \quad 1 \le t \le l \qquad (4)$$

Suppose the function $x$ is increasing and continuous in all its arguments. Each bidder's signal $v_i$ is private information but is drawn independently from a common distribution $F(v)$, $v \in [\underline{v}, \bar{v}]$. In accordance with the assumptions in most of the auction-theory literature, we assume $F(v)$ is twice differentiable and that its density $f(v)$ is positive everywhere on $[\underline{v}, \bar{v}]$. By (4), bidder $i$'s valuation $x_i$ is a function of her signal $v_i$; thus $x_i$ is also the realization of a random variable $X$, whose distribution and density functions are denoted by $G(x)$ and $g(x) = G'(x)$, respectively, both with domain $[\underline{x}, \bar{x}]$. We show the relationship between $F(v)$ and $G(x)$ in (19) of the appendix.

Next, let us compute the probability $P_j(v_i)$ in (4). Let $b = \beta(x)$ denote the equilibrium bidding function of the bidder with valuation $x$; note that the bid is a function of the valuation $x$. Since the bidders are risk-neutral, we have $\beta(0) = 0$. Suppose $\beta(x)$ is strictly increasing (we verify this later), with inverse bidding function $x = \beta^{-1}(b)$. If all other bidders bid according to the same function $\beta(x)$, bidder $i$'s probability of winning the $j$th position by bidding $b$ is

$$p_j(b) = \binom{I-1}{I-j}\, G\big(\beta^{-1}(b)\big)^{I-j}\, \Big(1 - G\big(\beta^{-1}(b)\big)\Big)^{j-1}, \quad j = 1, \ldots, J \qquad (5)$$

Since in equilibrium a bidder with valuation $x$ bids $b = \beta(x)$, her equilibrium probability of winning the $j$th position is

$$P_j(x) = p_j\big(\beta(x)\big) = \binom{I-1}{I-j}\, G(x)^{I-j}\, \big(1 - G(x)\big)^{j-1}, \quad j = 1, \ldots, J \qquad (6)$$

Furthermore, the probability that the bidder with signal $v$ wins the $j$th position, $P_j(v)$, can be written as

$$P_j(v) = P_j\big(x(v)\big) = \binom{I-1}{I-j}\, F(v)^{I-j}\, \big(1 - F(v)\big)^{j-1}, \quad j = 1, \ldots, J \qquad (7)$$

The derivation of (7) can be found in the appendix.

3.2 Allocation and Payment Rule

Let $q_i = q(x_i, t)$ denote the probability that bidder $i$'s ad enters the state $a_t^i$ when she reports her value to be $x_i$ and all other bidders report their values truthfully. In other words, $q(x_i, t)$ is just the probability that bidder $i$ gets the object, one typical action taken by an audience of the advertising, in the mechanism. In particular,

$$q_i = q(x_i, t) = \sum_{j=1}^{J} P_j(x_i)\,\rho_j(1,t), \quad i = 1, 2, \ldots, I, \; 1 \le t \le l \qquad (8)$$

where $\rho_j(1,t)$ denotes the probability that ad $i$ enters state $a_t^i$ when it lies in position $j$.

Since we consider only generalized first-price pricing, the bidders pay their bids. Let $m_i = m(x_i, t)$ denote the expected payment of bidder $i$ when her report is $x_i$ and all other bidders tell the truth; we have

$$m_i = m(x_i, t) = \beta(x_i)\,q(x_i, t) \qquad (9)$$

In particular, $m(0, t) = \beta(0)\,q(0, t) = 0$.

3.3 The Direct Mechanism

Equations (4), (8) and (9) form a mechanism $\{x_i,\; q(x_i, t),\; m(x_i, t)\}$, $i = 1, 2, \ldots, I$, $1 \le t \le l$, where $x_i$ is a set of valuation messages for each bidder, $q(x_i, t)$ is an allocation rule and $m(x_i, t)$ is a payment rule. This is a direct mechanism which depicts the keyword auction with typical action $a_t$ under the generalized first-price pattern.

Let $H_{r,s}(x_i) = \sum_{j=1}^{J} P_j(x_i)\,\rho_j(r,s)$ denote the expected probability that bidder $i$'s ad enters state $a_s$ from state $a_r$ when she reports her valuation $x_i$. Then for all $i = 1, 2, \ldots, I$ and $1 \le t \le l$ we have

$$x_i = x(v_i, t) = v_i\,H_{t,l}(v_i), \quad q(x_i, t) = H_{1,t}(x_i), \quad m(x_i, t) = \beta(x_i)\,H_{1,t}(x_i) \qquad (10)$$

and the mechanism above can be rewritten as

$$\{x_i,\; H_{1,t}(x_i),\; \beta(x_i)\,H_{1,t}(x_i)\}, \quad i = 1, 2, \ldots, I, \; 1 \le t \le l \qquad (11)$$

For convenience, we assume $\rho(r,s)$ is a continuously differentiable function with $\partial\rho(r,s)/\partial r > 0$ for $1 \le r < s \le l$; thus the function $H_{r,s}(\cdot)$ is also continuously differentiable. To close this section, we give a property of $H_{r,s}(\cdot)$ in the following lemma.

Lemma 3.1  If $\rho_1(r,s) \ge \rho_2(r,s) \ge \cdots \ge \rho_J(r,s)$, then $H'_{r,s}(x) > 0$.

Proof  See [2].
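
As a numerical illustration of (7), (8) and Lemma 3.1, the following Python sketch computes $P_j(v)$ and $H_{r,s}$ under assumed toy inputs (a uniform signal distribution and per-position transition probabilities ordered as in the lemma) and checks that the allocation rule is strictly increasing; none of the concrete values comes from the paper.

import numpy as np
from math import comb

I, J = 6, 3                            # bidders and positions (assumed)

def P(j, v):
    # Equilibrium probability of winning position j, eq. (7), with F uniform
    # on [0, 1]: C(I-1, I-j) F(v)^(I-j) (1 - F(v))^(j-1).
    return comb(I - 1, I - j) * v ** (I - j) * (1.0 - v) ** (j - 1)

# Per-position transition probabilities rho_j(r, s) for a fixed pair (r, s),
# ordered as in Lemma 3.1: rho_1 >= rho_2 >= ... >= rho_J (assumed values).
rho_j = [0.30, 0.20, 0.10]

def H(v):
    # Expected transition probability H_{r,s}(v) = sum_j P_j(v) rho_j(r, s).
    return sum(P(j, v) * rho_j[j - 1] for j in range(1, J + 1))

# Lemma 3.1: with the rho_j ordered, H is strictly increasing in the signal.
vs = np.linspace(0.01, 0.99, 99)
hs = np.array([H(v) for v in vs])
assert np.all(np.diff(hs) > 0), "H should be strictly increasing"
print("H(0.2) =", H(0.2), " H(0.8) =", H(0.8))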

4 Main Results
In the last section we characterized the keyword auction as a simple auction mechanism; we can therefore employ ready results from classical auction theory to solve many problems arising in the keyword auction. The search engine (auctioneer) faces the problem of how to choose the typical action $a_t$ so as to maximize its revenue, while the advertisers (bidders) want to know how to choose their bids so as to optimize their payoffs. We use the revenue equivalence theorem (RET) to answer these two problems. The RET implies that the expected payments in any two incentive compatible mechanisms with the same allocation rule are equivalent up to a constant; an incentive compatible mechanism is a direct revelation mechanism in which truth telling is an optimal strategy for each agent (formal definitions of these concepts and the standard propositions can be found in most of the literature on auction theory or microeconomic theory). First, we have the following proposition.

Proposition 4.1  The mechanism (11) is incentive compatible.

Proof  We only need to show that the allocation rule $q(x_i, t) = H_{1,t}(x_i)$ is strictly increasing, and this follows immediately from Lemma 3.1.

Proposition 4.1 implies that the revenue equivalence theorem holds; that is, the expected payment of a bidder with signal $v$ and valuation $x = x(v)$ is

$$m(x, t) = q(x, t)\,x - \int_{\underline{x}}^{x} q(u, t)\,du \qquad (12)$$

Proposition 4.2  The equilibrium bidding function is

$$\beta(x) = x - \frac{1}{q(x, t)} \int_{\underline{x}}^{x} q(u, t)\,du \qquad (13)$$

Moreover, (a) $\beta(x)$ is increasing with regard to $x$; (b) when $t > c$, $\beta(x)$ is independent of $t$; (c) when $t \le c$, the necessary condition for maximizing $\beta(x)$ is

$$\frac{\int_{\underline{x}}^{x} \big(\partial H_{1,t}(u)/\partial t\big)\,du}{\partial H_{1,t}(x)/\partial t} = \frac{\int_{\underline{x}}^{x} H_{1,t}(u)\,du}{H_{1,t}(x)} \qquad (14)$$

Proof  See the appendix.

Part (a) of Proposition 4.2 is natural and supports our earlier argument about the bidding function, but parts (b) and (c) are surprising. Intuitively, a bigger value of $t$ should greatly benefit the advertisers, because more definite potential buyers reduce advertising costs, so the advertisers would seem to have a strong motivation to bid a higher price. Why does a typical action after click-through not affect the advertisers' bids? Note that a key condition of Proposition 4.2 is that the users' actions after click-through have nothing to do with the rank of the ad link on the result page; the typical action is thus independent of the bidding behavior. Since the keyword auction is still very young, we lack sufficient statistical data to support our conclusions in practice.

The expected revenue of the search engine is the sum of the expected payments from all bidders, that is,

$$\Pi = I\,E_x\big(m(x, t)\big) \qquad (15)$$

Intuitively, by (10) and $\partial H_{1,t}(x_i)/\partial t < 0$, one might conjecture that the expected revenue of the search engine depends on $t$ when $t > c$. But this is not the case. We have another important proposition, shown below.
Proposition 4.3  The expected revenue of the search engine is

$$\Pi = I \int_{\underline{x}}^{\bar{x}} \left( x - \frac{1 - G(x)}{g(x)} \right) q(x, t)\,g(x)\,dx \qquad (16)$$

Moreover, (a) when $t > c$,

$$\Pi = I\,\rho(c,l) \int_{\underline{v}}^{\bar{v}} \left( v - \frac{1 - F(v)}{f(v)} \right) H_{1,c}(v)\,f(v)\,dv \qquad (17)$$

and $\Pi$ is independent of $t$; (b) when $t \le c$, the necessary condition for maximizing $\Pi$ is

$$\frac{\partial H_{t,c}(v)/\partial t}{H_{t,c}(v)} = -\frac{\partial H'_{1,t}(v)/\partial t}{H'_{1,t}(v)} \qquad (18)$$

where $H_{1,t}(v) = H_{1,t}(x(v))$ and $H_{1,c}(v) = H_{1,c}(x(v))$.

Proof  See the appendix.

Proposition 4.3 implies that the expected revenue of the search engine is the same under the pay-per-call pattern and under the pay-per-sale pattern.
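
The revenue-equivalence logic behind Proposition 4.3(a) can be checked numerically. The sketch below (Python; the signal distribution, the transition probabilities, and the two choices of $t$ are all assumed toy values) computes each bidder's expected payment from the RET form (12) for two typical actions beyond click-through and verifies that the search engine's expected revenue is the same.

import numpy as np
from math import comb

I, J = 6, 3                       # bidders and positions (assumed)
rho_j_1c = [0.30, 0.20, 0.10]     # rho_j(1, c) per position (assumed)
rho_c_l = 0.06                    # rho(c, l) (assumed)

def P(j, v):
    # Winning probability for position j, eq. (7), with F uniform on [0, 1].
    return comb(I - 1, I - j) * v ** (I - j) * (1.0 - v) ** (j - 1)

def H1c(v):
    # H_{1,c} written as a function of the signal v, as in (8) and (25).
    return sum(P(j, v) * rho_j_1c[j - 1] for j in range(1, J + 1))

def revenue(rho_t_l):
    # Expected revenue I * E[m(x, t)] for a typical action with t > c, where
    # rho(c, t) * rho(t, l) = rho(c, l). The payment uses the RET form (12):
    # m = q(x, t) x - int_0^x q(u, t) du, with x(v) = v rho(t, l) and
    # q = H_{1,c}(v) rho(c, t) by (22)-(23).
    rho_c_t = rho_c_l / rho_t_l
    v = np.linspace(0.0, 1.0, 2001)
    q = H1c(v) * rho_c_t
    x = v * rho_t_l
    # Cumulative trapezoid for int_0^{x(v)} q du, substituting u = rho(t, l) w:
    cum = np.concatenate(([0.0], np.cumsum((q[1:] + q[:-1]) / 2.0 * np.diff(v))))
    m = q * x - rho_t_l * cum
    # Expectation over the uniform density f(v) = 1, by the trapezoid rule:
    return I * np.sum((m[1:] + m[:-1]) / 2.0 * np.diff(v))

# Proposition 4.3(a): the choice of t (> c) does not change the revenue.
print(revenue(0.5), revenue(0.2))
assert abs(revenue(0.5) - revenue(0.2)) < 1e-9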

5 Conclusion
The result of Proposition 4.3 is unusual. It suggests that the search engine's expected revenue under the pay-per-call pattern is no more than that under the pay-per-sale pattern, and that pay-per-click is not a pricing pattern under which the search engine can optimize its expected revenue, although pay-per-click may be the easiest to operate. Another implication is that advertisers do not bid a higher price under the pay-per-sale pattern than under the pay-per-call pattern. This does not completely agree with the real world, for two reasons. One is that the patterns in the real world are not exactly the same as the one in our model, for example in the assumption of risk neutrality. The other is that an ineffective showing of the advertising, i.e., one that does not increase sales, is also useful to advertisers: increasing sales is not the only objective of advertisers, and brand awareness may also lie in their intentions. This is a natural extension of this research.


References
[1] Aggarwal G, Goel A, Motwani R. Truthful auctions for pricing search keywords. In: Proceedings of the 2006 ACM Conference on Electronic Commerce, Ann Arbor, MI. New York: ACM Press, 2006. 1-7
[2] Chen Jianqing, Liu De, Whinston A B. Designing share structure in auctions of divisible goods. In: First Workshop on Sponsored Search Auctions (SSA'05), 2005
[3] Douzet Alexandre. An investigation of pay per click search engine advertising: modeling the PPC paradigm to lower cost per action. In: Second Workshop on Sponsored Search Auctions (SSA'06), in conjunction with the ACM Conference on Electronic Commerce (EC'06), Ann Arbor, Michigan, June 11, 2006
[4] Economist Group. Online advertising: pay per sale. The Economist, October 1, 2005
[5] Edelman B, Ostrovsky M, Schwarz M. Internet advertising and the generalized second price auction: selling billions of dollars worth of keywords. Stanford Research Paper No. 1917, 2005
[6] Fain D C, Pedersen J O. Sponsored search: a brief history. In: Second Workshop on Sponsored Search Auctions (SSA'06), in conjunction with the ACM Conference on Electronic Commerce (EC'06), Ann Arbor, Michigan, June 11, 2006
[7] Jansen B J, Resnick M. Examining searcher perceptions of and interactions with sponsored results. In: Second Workshop on Sponsored Search Auctions (SSA'06), in conjunction with the ACM Conference on Electronic Commerce (EC'06), Ann Arbor, Michigan, June 11, 2006
[8] Liu Shulin, Rong Wenjin. Bidding strategy in keyword auctions: a new economic viewpoint. In: The Third Pan-Pacific Game Theory Conference, Beijing, October 20, 2006

Appendix

Derivation of equation (7)  Let $X = w(V)$. We suppose $w(\cdot)$ is an increasing function, so the inverse function $V = w^{-1}(X)$ exists; both $X$ and $V$ are random variables. Thus we have

$$G(x) = \Pr(X \le x) = \Pr\big(w(V) \le x\big) = \Pr\big(V \le w^{-1}(x)\big) = F\big(w^{-1}(x)\big) = F(v) \qquad (19)$$

Replacing $G(x)$ with $F(v)$ in (6), we obtain (7). Note that this argument has nothing to do with the particular form of $w(\cdot)$ as long as $w(\cdot)$ is increasing, and we can easily verify that the valuation function (4) is increasing.

Proof of Proposition 4.2  Combining (9) and (12), we immediately obtain the equilibrium bidding function (13). It remains to show parts (a), (b), and (c).

(a) Substituting $q(x, t) = H_{1,t}(x)$ (by (10)) into (13), we have

$$\beta(x) = x - \frac{1}{H_{1,t}(x)} \int_{\underline{x}}^{x} H_{1,t}(u)\,du \qquad (20)$$

Now we show $\beta'(x) > 0$. Differentiating $\beta(x)$ yields

$$\beta'(x) = 1 - \frac{H_{1,t}^2(x) - H'_{1,t}(x)\int_{\underline{x}}^{x} H_{1,t}(u)\,du}{H_{1,t}^2(x)} = \frac{H'_{1,t}(x)\int_{\underline{x}}^{x} H_{1,t}(u)\,du}{H_{1,t}^2(x)} \qquad (21)$$

By Lemma 3.1, $H'_{1,t}(x) > 0$, so $\beta'(x) > 0$; that is, $\beta(x)$ is increasing with regard to $x$.

(b) When $t > c$, according to Assumptions 1 and 2 we have

$$x = v \sum_{j=1}^{J} P_j(v)\,\rho_j(t,l) = v\,\rho(t,l) \sum_{j=1}^{J} P_j(v) = v\,\rho(t,l) \qquad (22)$$

and

$$q(x, t) = \sum_{j=1}^{J} P_j(x)\,\rho_j(1,t) = \sum_{j=1}^{J} P_j(x)\,\rho_j(1,c)\,\rho(c,t) = H_{1,c}(x)\,\rho(c,t) \qquad (23)$$

Substituting (23) into (13), we have

$$\beta(x) = x - \frac{1}{H_{1,c}(x)} \int_{\underline{x}}^{x} H_{1,c}(u)\,du \qquad (24)$$

From (22) and the independence of $\rho(t,l)$ and $v$, we know $G(x) = F\big(x/\rho(t,l)\big)$. Let

$$H_{1,c}(v) = H_{1,c}\big(x(v)\big) = \sum_{j=1}^{J} \binom{I-1}{I-j} F(v)^{I-j}\big(1 - F(v)\big)^{j-1}\rho_j(1,c) \qquad (25)$$

which shows that $H_{1,c}(v)$ is unrelated to $t$. Let $\tilde\beta(v, t) = \beta\big(x(v)\big)$; thus we have

$$\tilde\beta(v, t) = \beta\big(x(v)\big) = v\,\rho(t,l) - \frac{1}{H_{1,c}(v)} \int_{\underline{x}}^{v\rho(t,l)} H_{1,c}(u)\,du \qquad (26)$$

Differentiating with regard to $t$, we have

$$\frac{\partial\tilde\beta(v, t)}{\partial t} = v\,\rho_t(t,l) - \frac{1}{H_{1,c}(v)}\,H_{1,c}(v)\,v\,\rho_t(t,l) = 0 \qquad (27)$$

According to the envelope theorem, we arrive at

$$\frac{\partial\beta(x)}{\partial t} = \frac{\partial\tilde\beta(v, t)}{\partial v}\frac{\partial v}{\partial t} + \frac{\partial\tilde\beta(v, t)}{\partial t} = 0 \qquad (28)$$

This means that $\beta(x)$ is independent of $t$.

(c) When $t \le c$, according to Assumptions 1 and 2 we have

$$x = v \sum_{j=1}^{J} P_j(v)\,\rho_j(t,l) = v\,\rho(c,l) \sum_{j=1}^{J} P_j(v)\,\rho_j(t,c) = v\,\rho(c,l)\,H_{t,c}(v) \qquad (29)$$

and

$$q(x, t) = \sum_{j=1}^{J} P_j(x)\,\rho_j(1,t) = H_{1,t}(x) \qquad (30)$$

Substituting (30) into (13), we have

$$\beta(x) = x - \frac{1}{H_{1,t}(x)} \int_{\underline{x}}^{x} H_{1,t}(u)\,du \qquad (31)$$

Let $H_{1,t}(v) = H_{1,t}\big(x(v)\big)$; then from (7) we have

$$H_{1,t}(v) = H_{1,t}\big(x(v)\big) = \sum_{j=1}^{J} \binom{I-1}{I-j} F(v)^{I-j}\big(1 - F(v)\big)^{j-1}\rho_j(1,t) \qquad (32)$$

Let $\tilde\beta(v, t) = \beta\big(x(v)\big)$; thus we have

$$\tilde\beta(v, t) = \beta\big(x(v)\big) = v\,\rho(c,l)\,H_{t,c}(v) - \frac{1}{H_{1,t}(v)} \int_{\underline{x}}^{v\rho(c,l)H_{t,c}(v)} H_{1,t}(u)\,du \qquad (33)$$

Differentiating with regard to $t$, with $x = x(v)$ held fixed, we have

$$\frac{\partial\tilde\beta(v, t)}{\partial t} = \frac{1}{H_{1,t}^2(v)} \left\{ \frac{\partial H_{1,t}(v)}{\partial t} \int_{\underline{x}}^{x} H_{1,t}(u)\,du - H_{1,t}(v) \int_{\underline{x}}^{x} \frac{\partial H_{1,t}(u)}{\partial t}\,du \right\} \qquad (34)$$

That is,

$$\frac{\partial\beta(x)}{\partial t} = \left[ \frac{\partial\tilde\beta(v, t)}{\partial v}\frac{\partial v}{\partial t} + \frac{\partial\tilde\beta(v, t)}{\partial t} \right]_{x = x(v)} = \left. \frac{\partial\tilde\beta(v, t)}{\partial t} \right|_{x = x(v)} = \frac{1}{H_{1,t}^2(x)} \left\{ \frac{\partial H_{1,t}(x)}{\partial t} \int_{\underline{x}}^{x} H_{1,t}(u)\,du - H_{1,t}(x) \int_{\underline{x}}^{x} \frac{\partial H_{1,t}(u)}{\partial t}\,du \right\} \qquad (35)$$

Setting $\partial\beta(x)/\partial t = 0$, we immediately get (14).

Proof of Proposition 4.3  From (9), (13), and (15), we have

$$\Pi(t) = I\,E_x\big(m(x, t)\big) = I\,E_x\big(\beta(x)\,q(x, t)\big) = I\,E_x\!\left( x\,q(x, t) - \int_{\underline{x}}^{x} q(u, t)\,du \right) \qquad (36)$$

Expanding the right-hand side of (36) by the definition of expectation, and via some routine transformations, we have

$$\Pi = I \int_{\underline{x}}^{\bar{x}} \left( x\,q(x, t) - \int_{\underline{x}}^{x} q(u, t)\,du \right) g(x)\,dx = I \int_{\underline{x}}^{\bar{x}} x\,q(x, t)\,g(x)\,dx - I \int_{\underline{x}}^{\bar{x}} \left( \int_{\underline{x}}^{x} q(u, t)\,du \right) dG(x)$$

Using integration by parts on the second term, and since $G(\underline{x}) = 0$ and $G(\bar{x}) = 1$, we have

$$\Pi = I \int_{\underline{x}}^{\bar{x}} x\,q(x, t)\,g(x)\,dx - I \int_{\underline{x}}^{\bar{x}} q(x, t)\,dx + I \int_{\underline{x}}^{\bar{x}} G(x)\,q(x, t)\,dx = I \int_{\underline{x}}^{\bar{x}} \big( x\,g(x) + G(x) - 1 \big)\,q(x, t)\,dx = I \int_{\underline{x}}^{\bar{x}} \left( x - \frac{1 - G(x)}{g(x)} \right) q(x, t)\,g(x)\,dx$$

which is (16).

(a) When $t > c$, (22) and (23) hold. Replacing the variable of integration $x$ with $v$ in (16), and noting $G(x) = F(v)$ and $g(x) = f(v)/\rho(t,l)$, we have

$$\Pi = I \int_{\underline{v}}^{\bar{v}} \left( v\,\rho(t,l) - \frac{1 - F(v)}{f(v)}\,\rho(t,l) \right) H_{1,c}(v)\,\rho(c,t)\,f(v)\,dv = I\,\rho(c,l) \int_{\underline{v}}^{\bar{v}} \left( v - \frac{1 - F(v)}{f(v)} \right) H_{1,c}(v)\,f(v)\,dv \qquad (37)$$

using $\rho(c,t)\,\rho(t,l) = \rho(c,l)$. By (37), $\Pi$ is independent of $t$.

(b) When $t \le c$, (29) and (30) hold. Let $x = w(v) = v\,\rho(c,l)\,H_{t,c}(v)$; $w(v)$ is monotonically increasing, so $G(x) = F(v)$ and $g(x) = f(v)/w'(v)$. Similarly replacing the variable of integration $x$ with $v$ in (16), we have

$$\Pi = I \int_{\underline{v}}^{\bar{v}} \left( v\,\rho(c,l)\,H_{t,c}(v) - \frac{1 - F(v)}{f(v)}\,w'(v) \right) H_{1,t}(v)\,f(v)\,dv$$

Substituting $w'(v) = \rho(c,l)\big[ H_{t,c}(v) + v\,H'_{t,c}(v) \big]$ into the above equation, we have

$$\Pi = I\,\rho(c,l) \left[ \int_{\underline{v}}^{\bar{v}} \left( v - \frac{1 - F(v)}{f(v)} \right) H_{t,c}(v)\,H_{1,t}(v)\,f(v)\,dv - \int_{\underline{v}}^{\bar{v}} v\,\big(1 - F(v)\big)\,H'_{t,c}(v)\,H_{1,t}(v)\,dv \right] \qquad (38)$$

The second integral in the square bracket of (38) equals

$$\int_{\underline{v}}^{\bar{v}} v\,\big(1 - F(v)\big)\,H'_{t,c}(v)\,H_{1,t}(v)\,dv = \int_{\underline{v}}^{\bar{v}} v\,\big(1 - F(v)\big)\,H_{1,t}(v)\,dH_{t,c}(v)$$
$$= -\int_{\underline{v}}^{\bar{v}} \big(1 - F(v)\big)\,H_{t,c}(v)\,H_{1,t}(v)\,dv + \int_{\underline{v}}^{\bar{v}} v\,f(v)\,H_{t,c}(v)\,H_{1,t}(v)\,dv - \int_{\underline{v}}^{\bar{v}} v\,\big(1 - F(v)\big)\,H_{t,c}(v)\,H'_{1,t}(v)\,dv$$
$$= \int_{\underline{v}}^{\bar{v}} \left( v - \frac{1 - F(v)}{f(v)} \right) H_{t,c}(v)\,H_{1,t}(v)\,f(v)\,dv - \int_{\underline{v}}^{\bar{v}} v\,\big(1 - F(v)\big)\,H_{t,c}(v)\,H'_{1,t}(v)\,dv \qquad (39)$$

Substituting (39) into (38), we have

$$\Pi = I\,\rho(c,l) \int_{\underline{v}}^{\bar{v}} v\,\big(1 - F(v)\big)\,H_{t,c}(v)\,H'_{1,t}(v)\,dv \qquad (40)$$

Differentiating with regard to $t$, we have

$$\frac{d\Pi}{dt} = I\,\rho(c,l) \int_{\underline{v}}^{\bar{v}} v\,\big(1 - F(v)\big) \left[ \frac{\partial H_{t,c}(v)}{\partial t}\,H'_{1,t}(v) + \frac{\partial H'_{1,t}(v)}{\partial t}\,H_{t,c}(v) \right] dv \qquad (41)$$

Thus the necessary condition for maximizing $\Pi$ is

$$\frac{\partial H_{t,c}(v)/\partial t}{H_{t,c}(v)} = -\frac{\partial H'_{1,t}(v)/\partial t}{H'_{1,t}(v)}$$

which is (18).

Research on the Model of Knowledge Transfer and Sharing in ERP Implementation

Yunfeng Shi¹, Lingling Zhang¹,², Xiuyu Zheng¹

¹ School of Management, Graduate University of Chinese Academy of Sciences, Beijing (100080), P.R.China
² Chinese Academy of Sciences Research Center on Data Technology and Knowledge Economy, Beijing (100080), P.R.China

Abstract  This paper studies ERP implementation from the angle of knowledge transfer. After analyzing the process of knowledge transfer and sharing, it examines the obstacles and existing problems of knowledge transfer and sharing in ERP implementation, and applies the Wuli-Shili-Renli (WSR) systems approach to build a model of knowledge transfer and sharing in ERP implementation.
Key words  ERP implementation, knowledge transfer, Wuli-Shili-Renli (WSR)

1. Introduction
ERP has become a widely adopted information system in the manufacturing industry, but the results of ERP implementation are not satisfactory: according to statistics, only 10%-20% of all MRPII/ERP implementation projects in China succeed. Recently, some scholars have begun to study the ERP implementation process from the angle of knowledge management. They hold that establishing a knowledge management strategy in the ERP implementation process and creating effective knowledge transfer environments and paths are conducive to acquiring and applying ERP-related knowledge; such measures play an important part in transferring key knowledge to the enterprise during ERP implementation and are thus conducive to implementation success (Linda Argote and Paul Ingram, 2000)[1]. ERP implementation generally adopts a model in which a third-party consulting company participates. Concretely, the formation of the ERP software, the consulting process of the consultants during implementation, the application of the ERP software in the enterprise, the training of personnel, and the customization of the ERP software constitute a knowledge transfer process involving many parties and spanning a long time. This paper attempts a theoretical exploration of ERP implementation from the angle of knowledge transfer. The contents studied mainly include the process of knowledge transfer and sharing in ERP implementation, the contents transferred and shared among the parties, the existing problems of knowledge transfer and sharing, and the countermeasures for raising the efficiency of knowledge transfer and sharing.

2. Knowledge and its categories in ERP implementation

Studying knowledge transfer and sharing in ERP implementation requires comprehending the definition of knowledge, and many scholars have put forward their own definitions. Knowledge is justified true belief (Nonaka, 1994)[2]; Sabherwal and Becerra-Fernandez hold that knowledge improves the ability of an organization to obtain real results; for an organization, the strength of knowledge lies in choosing, learning and judging according to values, beliefs, information, logic, etc. (Davenport and Prusak, 1998)[3]. Ikujiro Nonaka put forward a comprehensive theory of organizational knowledge creation and conversion, in which knowledge is divided into explicit knowledge and tacit knowledge (Nonaka, 1994)[2]. The Organization for Economic Cooperation and Development (OECD) divides knowledge in the broad sense into four kinds: know-what, know-why, know-how and know-who. Generally speaking, the first two kinds can be expressed and belong to explicit knowledge, while the latter two are very difficult to express completely in writing and belong to tacit knowledge.

This research has been partially supported by grants from the National Natural Science Foundation of China (No. 70501030 and 70531040; Innovation Group 70621001) and the Beijing Natural Science Foundation (No. 9073020).


According to the above definitions of knowledge and the characteristics of enterprise ERP implementation, the knowledge involved in ERP implementation can be classified as follows:
(1) Explicit knowledge: crafts, standards, norms, manuals and guides, regulation systems, reports, templates, records, etc. belong to explicit knowledge and are organizational knowledge. For example, craft processes belong to know-what knowledge, technical specifications to know-why knowledge, guides and templates to know-how knowledge, and a human-resource manager's familiarity with professional personnel to know-who knowledge. When an implementation team is established for ERP, the project director's command of know-who knowledge contributes to setting up a reasonable team and is advantageous to resolving the problems of the ERP implementation.
(2) Tacit knowledge: personal techniques, trade secrets, operational behaviors, the judgments of professional experts on design documents in business process reengineering, the customary operating practices of departments and the enterprise, etc. Such knowledge is not expressed in the form of documents, but it plays an important part in the result of ERP implementation.

3. The process of knowledge transfer and sharing in ERP implementation

3.1 The parties to knowledge transfer in ERP implementation
Currently, ERP implementation generally adopts a model in which a third-party consulting company participates, so the parties to knowledge transfer and sharing include the ERP software vendor, the implementing enterprise, and the consultants. Combining Meiyun Zuo's six types of knowledge transfer among the parties to enterprise informatization (Meiyun Zuo, 2004)[4], we expand Nancy M. Dixon's five types of knowledge transfer modes between organizations (Nancy M. Dixon, 2002)[5]. The types and modes of knowledge transfer and sharing among the parties to ERP implementation are shown in Tab.1.
Tab.1 Knowledge transfer and sharing types and modes among the parties to ERP implementation
(1) ERP software vendor → implementing enterprise (mainly from the vendor to the enterprise): mainly explicit knowledge; contract-type transfer; strategy transfer mode.
(2) Consultants ↔ implementing enterprise (two-way): explicit and tacit knowledge; instruction-type, stipulation-type and orientation-type transfer; experts transfer mode.
(3) ERP software vendor ↔ consultants (two-way): mainly explicit knowledge; contract-type transfer; far transfer mode.

3.2 Knowledge transfer and conversion in each stage of ERP implementation
The consulting process of the consultants during ERP implementation, the application of the software in the enterprise, the training of personnel, and the customization of the ERP software are processes in which different knowledge is transferred and converted among the implementation parties; together they constitute an organizational learning process. Ikujiro Nonaka put forward the theory of knowledge creation (Nonaka and Takeuchi, 1995; Nonaka and Konno, 1998; Nonaka, Toyama and Konno, 2000; Nonaka, Reinmoeller and Senoo, 1998)[6][7][8][9], in which knowledge creation occurs through the interaction of the epistemological and ontological dimensions of knowledge. The whole ERP implementation process likewise takes place in such interaction between knowledge and the implementation parties. We apply Nonaka's four modes of knowledge conversion to ERP implementation to show the knowledge transfer and conversion in the process of knowledge transfer and sharing, as Tab.2 shows.


Tab.2 Knowledge transfer and conversion in each stage of ERP implementation
Selection stage (early): Socialization (tacit→tacit): the personnel of each party influence one another. Externalization (tacit→explicit): the organization turns aims and system targets into concepts of various kinds. Combination (explicit→explicit): each party synthesizes information from various sources. Internalization (explicit→tacit): comprehension of and feedback on the other party's information.
Design and implementation stage: Socialization: mutual influence among the implementation team members. Externalization: the implementation requirements of ERP are expressed and conceptualized. Combination: analysis of the enterprise's information, data and business processes. Internalization: the team persists in the implementation aim, target and strategy.
Continuous improvement stage: Socialization: mutual influence in improvement activities among the implementation team members. Externalization: expression in forms such as theories, concepts and cause-and-effect relations. Combination: information analysis for problem comprehension and diagnosis. Internalization: application and control in improved ways.

(1) Knowledge transfer and conversion in the early selection stage of ERP implementation: when the implementing enterprise selects the ERP software, the exchange, transfer and sharing of information and knowledge between the demand and supply sides are realized through the vendors' software recommendations, software bidding, consulting bidding and similar forms, as Fig.1 shows.
Fig.1 Knowledge transfer in the selection stage of ERP implementation (requirement analysis and investigation by the implementing enterprise; software recommendation and bidding by ERP software vendors; consultation, recommendation and bidding by consulting companies; tacit knowledge, such as ERP knowledge and advanced management principles, and explicit knowledge, such as software demos, invitation-to-bid documents and solutions, are exchanged)

(2) Knowledge transfer and conversion in the design and implementation stage of ERP implementation: the consultants and the implementing enterprise draw managers and IT personnel from each department to constitute an ERP implementation team. The consulting process, the application of the software in the enterprise, the training process, and the customization of the ERP software mainly take place at this stage, which is an important stage of knowledge transfer and conversion between the implementation parties. In ERP implementation the consultants provide the enterprise with an informatization solution, IT project implementation methodology, requirement-analysis knowledge, and knowledge of risk, quality and change management, among the various knowledge the implementation requires. The documents and data the enterprise must provide according to the contract, and the deliverable documents the consultants hand over, constitute explicit knowledge transfer; more important is the transfer and sharing of tacit knowledge. The consultants' tacit knowledge, such as industry implementation experience, the ERP way of thinking about work, and ERP software usage techniques, is transferred and shared to the enterprise personnel; the enterprise personnel's tacit knowledge, such as corporate cultural traditions, management practices, and the enterprise's particular process design techniques, is transferred to the consultants. Business process reengineering and the customization of the ERP software for the enterprise are completed precisely in this mutual knowledge transfer and sharing.

Fig.2 Knowledge transfer in the design and implementation stage of ERP implementation (the implementing enterprise provides related documents, business processes and foundation data; the ERP software vendor and the consulting company carry out their contracts through investigation, interviews, training, discussion and analysis; tacit knowledge, such as project management and process reengineering knowledge, and explicit knowledge, such as analysis reports, implementation plans, software and deliverable documents, are transferred)

(3) Knowledge transfer and conversion in the improvement stage of ERP implementation: after the ERP system goes live, it must be adjusted and optimized according to actual operation, and even redeveloped, until it reaches a comparatively stable state. This process is dominated by the enterprise's implementation personnel. The process in which the enterprise's users and implementation personnel discover problems, diagnose and analyze them, and solve them is a process of continually communicating, applying, and verifying expected experience and knowledge; through this improvement activity the comprehension and understanding of ERP knowledge are adjusted, revised and deepened. For example, the enterprise may consult external advisers or the vendor, so the scope of knowledge transfer and sharing expands to personnel outside the enterprise at this stage. The consultants or the software vendor generally obtain feedback on the implementation results from after-sales service.
Fig.3 Knowledge transfer and conversion in the improvement stage of ERP implementation (the implementing enterprise discovers problems, diagnoses and analyzes them, and continues to communicate, apply and validate expected knowledge and experience; the ERP software vendor and the consulting company provide after-sales service and obtain customer feedback; tacit knowledge, such as software usage techniques and experience, and explicit knowledge, such as problem solutions, are exchanged)


The knowledge of the three parties, namely the ERP software vendor, the implementing enterprise and the consultants, is transferred and shared throughout the implementation process; the parties promote and influence one another, forming a circulating chain of knowledge transfer and sharing that jointly advances the ERP implementation.

4. Problems and obstacles of knowledge transfer and sharing in ERP implementation

ERP implementation is a change of enterprise management: the responsibilities, powers and benefits of each department face reallocation. It involves not only strategic problems but also the concrete tactical problems that must be solved to attain the implementation's purpose; not only technical problems but also problems of background, habit and the humanities. So ERP implementation is a synthetic systems engineering, and we can use systems methodology to consider and resolve its problems. The problems and obstacles of knowledge transfer and sharing in ERP implementation are mainly embodied in the following aspects:
(1) The knowledge characteristics in ERP implementation (wuli). Speaking from the knowledge itself, the main factors baffling knowledge transfer and sharing are the form, ambiguity, complexity and volume of the knowledge and its professional specificity. The implementation parties lack understanding of the related knowledge and its relations; the enterprise and the consulting side may undervalue, or blindly follow, the other party's knowledge; and tacit knowledge such as experience and technique is insufficiently valued and excavated.
(2) The methods and tools in ERP implementation (shili). The methods and tools of knowledge transfer and sharing are insufficiently applied and the related training is lacking; the management measures, technical support, and physical spaces or platforms that safeguard knowledge are lacking, so the knowledge produced in the implementation activities cannot be kept and managed effectively, and knowledge transfer and sharing lack convenient channels.
(3) The people in ERP implementation (renli). A mechanism and cultural atmosphere for knowledge transfer and sharing are lacking; knowledge owners are not given enough incentive, nor knowledge sharers enough prospect of knowledge gains, so the knowledge transfer and sharing activities in the implementation cannot be launched well.

5. A model and countermeasures for raising the efficiency of knowledge transfer and sharing in ERP implementation

Professor Jifa Gu and others put forward the Wuli-Shili-Renli (WSR) systems approach, a new method that tries to solve management problems by synthesizing qualitative and quantitative, multi-layered, ordered and image thinking with the structure of human rationality (Jifa Gu and Fei Gao, 1998)[10]. In the WSR systems methodology, wuli refers to the mechanisms of the material world and usually answers "what it is"; shili refers to the ways of doing things and usually answers "how to do"; renli refers to dealing with people and answers "how one should do and how best to do" (Jifa Gu and Xipu Tang, 2006)[11]. Knowledge transfer and sharing in ERP implementation means that the various knowledge the implementation needs is transferred and shared among the implementation parties in certain ways and through certain channels; the knowledge, the methods, and the people constitute the wuli, shili and renli of knowledge transfer and sharing. In ERP implementation, as the knowledge sender transfers knowledge to the knowledge receiver, the characteristics of the knowledge, the sender and the receiver all cause obstacles to knowledge transfer and sharing. These obstacles can be overcome by adopting corresponding measures in the three WSR aspects and creating the conditions for knowledge transfer and sharing, as Fig.4 shows.
(1) Wuli: define the knowledge objects and transfer objects clearly, making clear what knowledge is to be transferred and shared with which people. Explicit knowledge transfers comparatively simply, and its quality can be specified and controlled, so first of all the transfer and sharing of explicit knowledge between the parties must be guaranteed. Tacit knowledge, because of its tacit character, is not easily known and needs deliberate excavation; for example, the expectations of enterprise leaders and key users belong to ERP implementation target knowledge, as does the consultants' software implementation and usage experience. Moreover, attention must be paid to the fusion of knowledge between the parties: the enterprise personnel's knowledge and the consultants' knowledge are mutually complementary and mutually communicated, and neither side can simply be taken as the master.
Fig.4 The model of knowledge sharing in ERP implementation (knowledge sender, sharing channel and knowledge receiver, supported by the wuli, shili and renli of knowledge sharing, running through the beginning, implementing, adjusting and conforming stages)

(2) Shili: transferring different knowledge between different objects requires different methods and tools, together with the related training in their application. Guaranteeing knowledge transfer requires management measures, technical support, and physical spaces or platforms, such as knowledge maps, knowledge bases, and knowledge management platforms. Successful knowledge transfer and sharing also needs "Ba", the set-up scene that lets the knowledge sender and receiver participate in knowledge transfer and sharing more conveniently and makes them willing, in perception and in will, to transfer and receive knowledge.
(3) Renli: the will and intention of the transferring parties are obstacles in the transfer process, for knowledge is a personal weapon of competition. Effective incentives should therefore be provided for the knowledge sender, and a sufficient prospect of knowledge gains for the knowledge sharer, so that both parties have full motivation to carry out knowledge sharing. In ERP implementation the key is to adopt effective incentives to promote tacit knowledge sharing and to drive it with benefits.
Wuli, shili and renli are three aspects that knowledge transfer and sharing need to consider synthetically. Facing a concrete problem, we can analyze the concrete environment of the problem at that time and decide which of the three aspects to emphasize and select.

6. Conclusions
Carrying on a theoretical exploration of the results of ERP implementation from the angle of knowledge transfer provides an entirely new perspective for theoretical research on ERP implementation. It not only contributes to the development and perfection of ERP implementation theory but also enriches the application of knowledge transfer theory. The paper analyzes the process of knowledge transfer and sharing in ERP implementation and the knowledge contents transferred and shared between the parties, and puts forward the Wuli-Shili-Renli (WSR) systems approach to solve the problems of knowledge transfer and sharing in ERP implementation.


References
[1] Linda Argote, Paul Ingram. Knowledge transfer: a basis for competitive advantage in firms. Organizational Behavior and Human Decision Processes, 2000, 82(1): 150-169
[2] Ikujiro Nonaka. A dynamic theory of organizational knowledge creation. Organization Science, 1994, 5(1): 14-37
[3] Davenport T H, Prusak L. Working Knowledge: How Organizations Manage What They Know. Boston: Harvard Business School Press, 1998. 65
[4] Meiyun Zuo. Six knowledge transfers among the parties to enterprise informatization. Computer Systems & Applications, 2004, 8: 72-74
[5] Nancy M. Dixon. Common Knowledge: How Companies Thrive by Sharing What They Know. Shugui Wang, Qunhong Shen, trans. Beijing: People's Posts and Telecommunications Press, 2002. 30-160
[6] Nonaka I, Takeuchi H. The Knowledge-Creating Company: How Japanese Companies Create the Dynamics of Innovation. New York: Oxford University Press, 1995. 38-45
[7] Nonaka I, Konno N. The concept of "Ba": building a foundation for knowledge creation. California Management Review, 1998, 40(3): 40-54
[8] Nonaka I, Toyama R, Konno N. SECI, Ba, and leadership: a unifying model of dynamic knowledge creation. In: Teece D J, Nonaka I, eds. New Perspectives on Knowledge-Based Firm and Organization. New York: Oxford University Press, 2000. 69-75
[9] Nonaka I, Reinmoeller P, Senoo D. The ART of knowledge. European Management Journal, 1998, 16(6): 673-684
[10] Jifa Gu, Fei Gao. To see the Wuli-Shili-Renli systems approach from the view of management science. Systems Engineering Theory & Practice, 1998, 8: 1-4
[11] Jifa Gu, Xipu Tang. Wuli-Shili-Renli Systems Approach: Theory and Application. Shanghai: Shanghai Scientific & Technological Education Publishing House, 2006. 7-21


A Framework for Identifying IT Application Capabilities Based on the IS Lifecycle


Wang Nianxin, Zhong Weijun, Mei Shue, Zhong Yulin
School of Economics and Management, Southeast University, 210096, P.R.China

Abstract  Information technology (IT) application capabilities are necessary conditions for information systems (IS) success and for improving firm competitiveness. To better measure and develop IT application capabilities, their dimensions should be identified in detail. In this paper we draw on the process-oriented perspective to develop a framework for identifying IT application capabilities, named the IT application capability matrix. The vertical dimension of the matrix is the five steps of the IS lifecycle; the horizontal dimension is the IT-related application capabilities, which consist of IT infrastructure capability, technical IT application capability, managerial IT application capability, IT integration capability, IT-related transformational capability and IT-related dynamic capability. The framework for identifying IT application capabilities based on the IS lifecycle provides a basis for the evaluation and development of IT application capabilities, which can help improve the success ratio of IS and firm competitiveness.
Key words  Information technology, IT application capability, Resource-based view, IT business value

1 Introduction
In a fierce and dynamic competitive environment, many firms have been using IT to decrease product or service costs, increase operational efficiency, improve management and decision making, and enhance firm competitiveness. Theoretical and empirical studies have shown that IT capabilities are strongly positively related to firm competitiveness[1-5], but there is little research on the evaluation of IT application capability and on how to develop it. In order to measure and enhance IT application capabilities, a method of identifying them is needed first. In this paper we develop a framework for identifying IT application capability based on the IS lifecycle. The framework comprises two steps. The first decomposes the IS lifecycle into five periods: strategic planning, system analysis, system design, system implementation, and system operation management, maintenance and evaluation. The second identifies the IT application capabilities in every period of the IS lifecycle, where IT capabilities include IT infrastructure capability, technical IT application capability, managerial IT application capability, IT integration capability, IT-related transformational capability and IT-related dynamic capability. After these two steps, an IT application capability matrix can be brought forward.

2 Literature Review
During the past two decades, both business managers and academic researchers have shown considerable interest in understanding how IT helps to improve firm competitiveness. In sequential order, this research can be decomposed into three phases: the first concerns the relationship between IT investment and productivity, the second the relationship between IT resources and competitive advantage, and the third the relationship between IT capabilities and firm competitiveness.

2.1 IT investment and productivity
In 1987, Robert M. Solow, a Nobel laureate in economics, published a short piece in the New York Times Book Review in which he first brought forward the "productivity paradox" of IT investment[6]. From then on, academic researchers have examined the relationship between IT investment and productivity or other value outputs at the macro, industry, firm and process levels, but the results are as varied as the findings of the studies that generated the debate: direct positive[7-10], indirect positive[11-12], and no relationship[6,13] between IT investment and productivity have all been reported. The suggested reasons for the productivity paradox are measurement errors, time lags, inappropriate management and productivity redistribution[14]. In fact, IT investment is just a necessary condition of productivity: IT investment can transform into IT resources only under appropriate management, which increases the IT conversion effectiveness[15] and the alignment among technology, business and the competitive environment.

2.2 IT resources and competitive advantage
Building on the assumptions that strategic resources are heterogeneously distributed across firms and that these differences are stable over time, the resource-based view argues that the source of competitive advantage is resources that are valuable, rare, hard to imitate and hard to substitute[16-17]. Firms' IT resources include IT infrastructure, technical IT resources, and human IT resources. A quality IT infrastructure can provide firms with information sharing across different departments, innovation, the exploitation of business opportunities, and responsiveness to changes in business strategy[18]; an integrated and flexible IT infrastructure is one important kind of organizational capability, so it may be one source of competitive advantage[19-20]. Technical IT resources are the business applications deployed on the IT infrastructure, for example CAD, CAPP, ERP and EC; these applications can be used to increase operational efficiency, improve information quality, and reinforce the relationships between firms and their stakeholders, so technical IT resources aligned with business operations may be one source of competitive advantage[21-22]. Human IT resources refer to skills and knowledge, comprising technical skills and specialties and managerial skills and specialties. Technical skills and specialties include programming, system analysis and design, and so on; managerial skills and specialties include IS project management experience and communication skills with business personnel. Human IT resources can lead to effective integration between business planning and IT planning, to developing at low cost the applications that support business operations effectively, and to communicating with business departments and forecasting requirements, so human IT resources may be one source of competitive advantage[1,4].

Based on the above propositions, many studies have empirically examined the relationship between IT resources and competitive advantage, but the results are inconsistent. These paradoxical results arise because IT resources cannot entirely meet the conditions of value, rareness, inimitability and non-substitutability; thus IT resources are not a source of sustained competitive advantage (SCA) and can result in short-term competitive advantage at most. In the article "IT Doesn't Matter", Carr argues that IT is ubiquitous, increasingly inexpensive, and accessible to all firms; as such, it cannot provide a differential advantage to anyone, because it is scarcity (not ubiquity) that creates the ability to generate supernormal rents. He notes that IT is following the pattern of railroads and telegraphs: as a mainly replicable, standardized infrastructural technology, its benefits are accessible to all and cannot create advantage.

2.3 IT capabilities and firm competitiveness
Mixed empirical results are always an invitation to seek better theory[23]. In seeking the sources of SCA and firm competitiveness, researchers have drawn on capability theory, core competency theory, and dynamic capability theory to find the IT-related sources of firm competitiveness. The IT capability elements of firm competitiveness identified in the prior literature are listed in Tab.1.

Foundation item: Project supported by the National Natural Science Foundation of China (Grant No. 70671024)
Tab.1 IT capability-related elements of firm competitiveness
Mata and Fuerst [24] (theoretical): IT management skills.
Powell and Dent-Micallef [12] (theoretical): flexible culture, strategic planning-IT integration, and supplier relationships.
Marchand et al [25] (theoretical): capabilities of collecting, organizing and maintaining information; the right behaviors and values for working with information.
Bharadwaj [1] (empirical): IT capability.
Dehning and Stratopoulos [26] (empirical): managerial IT skills.
Santhanam and Hartono [2] (empirical): IT capability.
Ravichandran et al [4] (empirical): IS planning sophistication, system development capability, IS support maturity, IS operation capability.
Bhatt and Grover [27] (empirical): IT business experience, relationship infrastructure.


From the findings of prior research, we can conclude that the source of firm competitiveness is IT capabilities, not IT resources. As Santhanam and Hartono state, "firms rated as having superior IT capability were found to have better profit and cost ratios compared to the industry average".

3 Typologies of IT Capability
In order to identify IT application capability, typologies of IT capability are studied to shed some light on its classification. Capabilities are the complex routines that determine the efficiency of transforming inputs into outputs[28]; IT capabilities are the routines by which firms use IT resources to support business operations. Earlier literature, like Bharadwaj, treated IT capability as a single dimension strongly positively related to firm competitiveness; researchers now increasingly argue that IT capability is a multidimensional concept[2]. Many typologies have been brought forward from about seven perspectives: the impact perspective, resources perspective, evolvement perspective, centre perspective, effect perspective, function perspective, and carrier perspective. Tab.2 summarizes the major tenets and limitations of the seven perspectives.

Another related typology of IT capability is the nine core IT capabilities of Feeny and Willcocks[31-32], who suggest that all these capabilities are necessary for firms to meet the three enduring challenges of (a) uniting business and IT vision, (b) delivering IT services, and (c) designing an IT architecture. The nine capabilities are:
(1) Leadership: integrating IS/IT effort with business purpose and activity.
(2) Business systems thinking: envisioning the business process that technology makes possible.
(3) Relationship building: getting the business constructively engaged in IS/IT issues.
(4) Architecture planning: creating the coherent blueprint for a technical platform that responds to current and future business needs.
(5) Making technology work: rapidly achieving technical progress by one means or another.
(6) Informed buying: managing the IS/IT sourcing strategy that meets the interests of the business.
(7) Contract facilitation: ensuring the success of existing contracts for IS/IT services.
(8) Contract monitoring: protecting the business's current and future contractual position.
(9) Vendor development: identifying the potential added value of IS/IT service suppliers.
Tab.2 Seven typologies of IT capability
(1) Impacts perspective (Peppard et al, 2004; Gregor et al, 2006). Focus: impacts on business. Classification: impact on operation, impact on management, impact on strategy. Advantages: good hierarchy. Disadvantages: no essence of IT capability; does not consider environment impacts.
(2) Resources perspective (Bharadwaj, 2004; Dehning et al, 2003). Focus: IT resources. Classification: IT infrastructure, technical IT resources, managerial IT resources, IT-enabled intangibles. Advantages: good hierarchy. Disadvantages: does not consider environment impacts; no essence of IT capability.
(3) Evolvement perspective (Zhang Song et al, 2003). Focus: properties of change. Classification: static IT capability, dynamic IT capability. Advantages: alignment with environment. Disadvantages: too general; bad hierarchy.
(4) Center perspective (Wade et al, 2004). Focus: center of location. Classification: inside-out capability, outside-in capability, spanning capability. Advantages: alignment with business. Disadvantages: does not consider environment impacts.
(5) Effects perspective (Bhatt et al, 2005). Focus: effects on firm competitiveness. Classification: value capability, competitive capability, dynamic capability. Advantages: good hierarchy; alignment with environment. Disadvantages: too general; no essence of IT capability.
(6) Functions perspective (Ravichandran, 2005). Focus: IT/IS function. Classification: IS planning sophistication, system development, IS support maturity, IS operation capability. Advantages: good guidance for IT application. Disadvantages: does not consider environment impacts; no essence of IT capability.
(7) Carrier perspective (Wu Xiaobo et al, 2006). Focus: storage medium. Classification: document resources, management systems, human resources, technology resources. Advantages: good guidance for capability collection, storage, use, and transfer. Disadvantages: bad guidance for IT application.


As with other capabilities, some firms' capabilities are superior to others', partly because of factors outside the firms' control and partly because of the wisdom and skills of firm management. Besides leading to a deeper understanding of IT application capability, these typologies also provide a basis for empirical study; but they are too general for empirical work. We need a more detailed and operational classification of IT application capability, so a new framework is needed.

4 Framework for Identifying IT Application Capability Based on the IS Lifecycle


Since the central objective of analysis in this paper is identifying IT application capabilities in detail, it is important to describe the IS lifecycle and IT application capabilities in detail.

4.1 The lifecycle of IS
The IS lifecycle, the application process of IT in firms, is composed of the whole process including problem presentation, team building, strategic planning, system analysis, system design, system implementation, system operation management, and system maintenance and evaluation. Generally speaking, the lifecycle of IS can be divided into five steps: strategic planning, system analysis, system design, system implementation, and system operation management, maintenance and evaluation. Every step has its own specific assignment and brings forward standard documentation that it consigns to the next step; the next step in turn carries the development process forward on the basis of this documentation. Fig.1 below provides an illustration of the IS lifecycle model.

Fig.1 IS Lifecycle Model

4.2 IT application capabilities Since one of the main analysis objectives in this paper is IT application capabilities, it is important to classify IT application capabilities. IT application capabilities are the routines and process of improving firm competitiveness through the alignment between technology and operation, however, sustain of firm competitiveness is the dynamic alignment and adjustment process between firm and competitive environment. So IT application capabilities are the organizational routines to achieve the alignment among technology, operation, and competitive environment. IT application capabilities are made up of thereinafter dimensions. (1)IT Infrastructure. A firms overall IT infrastructure comprises the computer and communication technologies and the shareable technical platforms and database. A quality IT infrastructure can provide firms with the ability to share information across different functions, innovate, and exploit business opportunities, and the flexibility to respond to changes in business strategy. (2)Technical IT application capability. Technical IT application capability refers to the specific knowledge about the development and operation of information technology applications, such as programming, system analysis and design, and competencies in emerging technologies. Technical IT application capability is the necessary ability to develop information systems. (3)Managerial IT application Capability. Managerial IT application Capability, which includes team forming, HR development, project management, refers to the management ability of conceiving, developing, and operating 542

IT applications. Managerial IT application capability, complementary to technical IT capability, ensures that information systems are developed efficiently and at low cost.
(4) IT integration capability. This refers to the ability to integrate information technology with the firm's business. It includes the communication between IT staff and business personnel, ensures that the IS required by the business system is developed, and achieves alignment between the technology system and the business system.
(5) IT-related transformational capability. This refers to the ability to change the organizational system, structure, and business processes so as to fully reap the benefits brought by IT applications.
(6) IT-related dynamic capability. This refers to the ability to reconstruct the organizational capabilities so as to realize alignment between the firm and its dynamic competitive environment.
The different dimensions of IT application capabilities vary in their impacts on firm competitiveness. A quality IT infrastructure is a necessary condition for participating in competition, but it cannot by itself provide competitiveness; it provides only competitive parity, although in many industries a firm cannot survive without information technology. Technical and managerial IT application capabilities concern the ability to develop information systems efficiently and at low cost; they ensure that IT investment is transformed into IT resources. IT integration capability and IT-related transformational capability are necessary to realize the alignment between information technology and business through organizational transformation and business process reengineering, which allows the benefits to be reaped to the greatest extent. Because the competitive environment is dynamic, it can render IT valueless or misaligned with the business system, so technical IT application capability, managerial IT application capability, IT integration capability, and IT-related transformational capability can result only in temporary firm competitiveness. IT-related dynamic capability ensures alignment between the firm and the competitive environment by continuously reconstructing the above IT application capabilities, and is therefore the only source of sustained firm competitiveness.
Tab.3 IT Application Capabilities and their Impacts on Firm Competitiveness

IT Capabilities                          | Lead to                     | Impact
IT-related Dynamic Capability            | Firm strategy               | Sustained competitiveness
IT-related Transformational Capability   | Business processes          | Temporal competitiveness
IT Integration Capability                | Business capabilities       | Temporal competitiveness
Managerial IT application Capability     | IT applications development | Temporal competitiveness
Technical IT application capability      | IT applications development | Temporal competitiveness
IT Infrastructure Capability             | Technical platform          | Competitive parity

4.3 Framework for identifying IT application capability based on the IS lifecycle

At every step of the IS lifecycle, firms should possess the corresponding IT application capabilities to fulfill that step's specific requirements. How to evaluate and develop these capabilities is very important for improving the success rate of IS applications and firm competitiveness. The framework for identifying IT application capabilities based on the IS lifecycle provides the basis for such evaluation and development. Tab.4 shows the framework and a reference result. The framework is constituted by two dimensions: one is the steps of the IS lifecycle, and the other is the dimensions of IT application capabilities, namely IT infrastructure, technical IT application capability, managerial IT application capability, IT integration capability, IT-related transformational capability, and IT-related dynamic capability. The IT application capabilities identified by the framework compose a matrix, named the IT application capability matrix.


Tab.4 Framework for Identifying IT Application Capability Based on the IS Lifecycle (rows: the five IS lifecycle steps — strategic planning; system analysis; system design; system implementation; system operation, evaluation and maintenance. Columns: the six capability dimensions — infrastructure, technical, managerial, integration, transformation, dynamic. At the strategic planning step the managerial entries include team building and top management commitment; in every step, the dynamic-capability entries are tech tracking, environment scanning, and environment alignment.)
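As one possible, purely illustrative encoding (not part of the paper), the capability matrix can be represented as a nested mapping from lifecycle steps to capability dimensions; the entries filled in below are the recoverable items from Tab.4, and the remaining cells are left empty for the evaluator to complete.

STEPS = ["strategic planning", "system analysis", "system design",
         "system implementation", "operation, evaluation and maintenance"]
DIMENSIONS = ["infrastructure", "technical", "managerial",
              "integration", "transformation", "dynamic"]

# capability_matrix[step][dimension] -> list of required capabilities
capability_matrix = {s: {d: [] for d in DIMENSIONS} for s in STEPS}
capability_matrix["strategic planning"]["managerial"] = [
    "team building", "top management commitment"]
for s in STEPS:  # the dynamic column is the same for every lifecycle step
    capability_matrix[s]["dynamic"] = [
        "tech tracking", "environment scanning", "environment alignment"]

def required(step):
    # list the capabilities a firm should evaluate at a given lifecycle step
    return {d: caps for d, caps in capability_matrix[step].items() if caps}

print(required("strategic planning"))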

5 Conclusions
In this paper, we put forward a framework for identifying IT application capabilities based on the IS lifecycle. This framework not only takes into account the integration between IT and the business system, but also focuses on the alignment between the firm and its competitive environment, so it can lead to integration and alignment among IT, business, and competitiveness. The framework can be used for the evaluation and development of IT application capabilities, and for the enhancement of the success rate of IS applications and of firm competitiveness.
References
[1] Bharadwaj A. A resource-based perspective on information technology capability and firm performance: an empirical investigation. MIS Quarterly, 2000, 24(1): 169-196.
[2] Santhanam R., Hartono E. Issues in linking information technology capability to firm performance. MIS Quarterly, 2003, 27(1): 125-157.
[3] Peppard J., Ward J. Beyond strategic information systems: towards an IS capability. Journal of Strategic Information Systems, 2004, 14: 167-194.
[4] Ravichandran T., Lertwongsatien C. Effect of information systems resources and capability on firm performance: a resource-based perspective. Journal of Management Information Systems, 2005, 21(4): 237-276.
[5] Pavlou P., El Sawy O. From IT leveraging competence to competitive advantage in turbulent environments: the case of new product development. Information Systems Research, 2006, 17(3): 198-227.
[6] Solow R.M. We'd better watch out. New York Times Book Review, 1987, July 12: 36.
[7] Porter M., Millar V. How information gives you a competitive advantage. Harvard Business Review, 1985, 63(4): 149-160.
[8] Wiseman C. Strategic Information Systems. Homewood, Illinois: Irwin, 1988.
[9] Davis L., Dehning B., Stratopoulos T. Does the market recognize IT-enabled competitive advantage? Information & Management, 2003, 40: 705-716.
[10] Teo T., Wong P., Chia E. Information technology (IT) investment and the role of a firm: an exploratory study. International Journal of Information Management, 2000, 20: 269-286.
[11] Clemons E., Row M. Sustaining IT advantage: the role of structural differences. MIS Quarterly, 1991, 15(3): 275-292.
[12] Powell T.C., Dent-Micallef A. Information technology as competitive advantage: the role of human, business and technology resources. Strategic Management Journal, 1997, 18(5): 375-405.
[13] Carr N.G. IT doesn't matter. Harvard Business Review, 2003, 81(5): 41-49.
[14] Brynjolfsson E. Beyond the productivity paradox. Communications of the ACM, 1998, 41(8): 49-55.
[15] Weill P. The relationship between investment in information technology and firm performance. Information Systems Research, 1992, 3(4): 307-333.
[16] Wernerfelt B. A resource-based view of the firm. Strategic Management Journal, 1984, 5: 171-180.
[17] Barney J. Firm resources and sustained competitive advantage. Journal of Management, 1991, 17(1): 99-120.
[18] Weill P., Subramani M., Broadbent M. Building IT infrastructure for strategic agility. Sloan Management Review, 2002, 44(1): 57-65.
[19] Broadbent M., Weill P., St. Clair D. The implications of information technology infrastructure for business process redesign. MIS Quarterly, 1999, 23(2): 159-182.
[20] Ross J., Beath C., Goodhue D. Develop long-term competitiveness through IT assets. Sloan Management Review, 1996, 38(1): 43-55.
[21] Melville N., Kraemer K., Gurbaxani V. Information technology and organizational performance: an integrative model of IT business value. MIS Quarterly, 2004, 28(2): 283-322.
[22] Wade M., Hulland J. The resource-based view and information systems research: review, extension, and suggestions for future research. MIS Quarterly, 2004, 28(1): 107-142.
[23] Soh C., Markus M. How IT creates business value: a process theory synthesis. Proceedings of the 16th Annual International Conference on Information Systems, Amsterdam, The Netherlands, December 1995: 29-41.
[24] Mata F., Fuerst W., Barney J. Information technology and sustained competitive advantage: a resource-based analysis. MIS Quarterly, 1995, 19: 487-505.
[25] Marchand D., Kettinger W., Rollins J. Information orientation: people, technology and the bottom line. Sloan Management Review, 2000, Summer: 69-80.
[26] Dehning B., Stratopoulos T. Determinants of a sustainable competitive advantage due to an IT-enabled strategy. Journal of Strategic Information Systems, 2003, 12: 7-28.
[27] Bhatt G., Grover V. Types of information technology capabilities and their role in competitive advantage: an empirical study. Journal of Management Information Systems, 2005, 22(2): 253-277.
[28] Collis D. How valuable are organizational capabilities? Strategic Management Journal, 1994, 15: 143-152.
[29] Zhang S., Huang L. An analysis of IT capability of enterprises based on the consideration of resources. Tongji University Journal of Social Science Section, 2003, 14(4): 52-56 (in Chinese).
[30] Wu Xiao-bo, Hu Bao-liang, Cai Quan. The framework and paths of acquiring competitive advantage through information technology capability. Science Research Management, 2006, 27(5): 53-58 (in Chinese).
[31] Feeny D., Willcocks L. Re-designing the IS function around core capabilities. Long Range Planning, 1998, 31(3): 354-367.
[32] Willcocks L., Feeny D., Olson N. Implementing core IS capabilities: Feeny-Willcocks IT governance and management framework revisited. European Management Journal, 2006, 24(1): 28-37.


Study on Product Lifecycle Management Coordination Management System Model of the Shipbuilding Enterprise
Wang Zhiying, Ge Shilun
Institute of Economy and Management, Jiangsu University of Science and Technology, Zhenjiang, Jiangsu, 212003, China

Abstract This paper takes the product lifecycle management (PLM) of ships as its object of study, analyzes the market pattern of the world shipbuilding industry, and presents a PLM coordination management system model for the shipbuilding enterprise according to the present situation and development trend of China's shipbuilding industry. It describes the solution scheme of each layer: the resource layer based on a relational database, the function layer based on ontology and workflow, the network layer based on J2EE and XML, the agent layer based on multi-agent technology, and the user layer based on comprehensive coordination. The system model conforms to the actual informationization demand of the shipbuilding enterprise and the development trend of advanced manufacturing techniques, and has both reference value and practical application value.
Key words Shipbuilding, PLM, Coordination management, Ontology, Web Service and XML

1.Introduction
Expanding the shipbuilding industry was proposed for the first time in the national Eleventh Five-Year development plan: the Chinese shipbuilding industry must grow not only bigger but also stronger. A broad market prospect and an advantageous historical opportunity have been provided for the development of China's shipbuilding industry, as the shipbuilding center moves eastward, the international marine transportation industry recovers, and world demand for ships grows massively [1]. China is formulating corresponding industrial policies, taking shipbuilding capacity, structure, and layout as the main line, and promoting the reorganization of shipbuilding enterprises in order to cultivate large shipbuilding groups with international competitive power. In the first half of 2006, the completed shipbuilding tonnage, the new ship orders, and the holding of ship orders occupied 15.3%, 27.1%, and 20.3% of the world market respectively [2]. The proportion of the global market occupied by the Chinese shipbuilding industry is rising obviously, and the industry faces a huge development opportunity. It is urgent to speed up the construction of the innovation system, promote comprehensive scientific research and development, and build international brands with proprietary intellectual property rights and worldwide influence. Behind the high-speed growth, however, there are still plenty of shortcomings in the shipbuilding industry that cannot be ignored. Facing a keenly competitive environment, a shipbuilding enterprise can enhance its comprehensive competitive ability only by using information techniques, synthesizing advanced manufacturing techniques and modern management patterns, and establishing an integrated information system. At present, the information systems of domestic shipbuilding enterprises are mostly set up on the foundation of the traditional partitioned manufacturing process, organized by workshop and specialization; each system is often an isolated information island, the heterogeneous systems cannot realize efficient data transmission and transformation, and the coordination of design, manufacture, and management is poor. The development trend of digitized shipbuilding is to take comprehensive digitization, comprehensive modularization, and a network platform as the technical supports in order to realize digitized design, manufacture, and management in an integrated system, and then establish a dynamic shipbuilding virtual enterprise alliance.

This research has been supported by the National Natural Science Foundation of China (Study on Data Model of Large-piece One-of-a-kind Manufacturing Enterprise, No. 70472005), by the Shipping Advanced Design and Manufacture Technique Key Laboratory of Jiangsu Province, China (Construction of Shipbuilding Virtual Enterprise for Agile Manufacturing, No. CJ0605), and by the Colleges and Universities Natural Science Foundation of Jiangsu Province, China (Study on Key Technologies of Shipbuilding Virtual Enterprise Information Integration Based on Web Service, No. 06KJD120062).


2. Overall structure of the PLM coordination management system model of the shipbuilding enterprise
From the digitization angle, the shipbuilding enterprise needs to integrate all information related to PLM. PLM is the integrated platform of enterprise informationization; it integrates with external processes and systems, forming a product knowledge and circulation architecture that can be fully used upstream and downstream of the product lifecycle [3]. Since PLM was proposed at the end of the 20th century, it has developed extremely rapidly and has become a focus of attention of the global manufacturing industry. PLM is the integrated platform of CAD/CAPP/CAM, supports concurrent engineering, and is the integration frame of CIMS. PLM is a solution that takes the product as its core, stretches across the entire enterprise and the supply chain in space, and covers the entire lifecycle, from the product concept phase continuously to the end of the product, in time. It thus enables the enterprise to adjust its management methods and means effectively in the digital economy era and give free rein to an unprecedented competitive advantage. From the human demand for the product to the product's elimination, in the value chain that penetrates the entire product lifecycle, the departments of the enterprise form an integrated, organic whole whose parts closely coordinate with each other. Judging from the present domestic and foreign research on PLM, it is a new idea; whether viewed from fundamental research or from actual application, it is still at the starting stage. Therefore, it is of wide significance to carry on fundamental research and practical application of PLM.
Fig.1 PLM coordination management system model of the shipbuilding enterprise (five layers: a user layer of coordination client and server ends for design, process and manufacture personnel; an agent layer of multi-agents wrapping CAD, CAE, CAM, CAPP, CRM, SCM and ERP; a network layer of Internet/intranet based on Web Services, J2EE and XML; a function layer covering market management, material management, ship design, shipbuilding, ship sale, ship maintenance and ship recuperation, supported by the corresponding management systems and a workflow management system, with UML as the modeling language; and a resource layer of database, knowledge library, model library and method library)

PLM of the shipbuilding enterprise is the advanced management technique by which the shipbuilding enterprise rapidly reorganizes its organizational structure, business processes, and resource disposition over the entire ship product lifecycle, facing the ship owner and the ship market, to realize overall benefit maximization. It is the extension and development of CAX techniques. PLM, together with ERP, CRM, SCM and so on, constitutes the main foundation of IT application in the shipbuilding enterprise. The PLM coordination management system of the shipbuilding enterprise refers to the computer hardware and software system through which the enterprise applies PLM techniques. Its key role is to provide a unified management platform that helps the enterprise comprehensively manage the ship owner, the ship market, the brand, sales, service, decision-making, execution, organization, teams, achievements, knowledge, innovation and so on, so as to enhance operational efficiency, reduce operational cost, and promote the overall benefit. It is the information platform and the link connecting the various business departments of the shipbuilding enterprise. Fig.1 shows the PLM coordination management system model of the shipbuilding enterprise.

3. Solutions for each layer of the PLM coordination management system model of the shipbuilding enterprise

3.1 Resource layer

The resource layer expresses the origin of system resources, including the interior and exterior resources of the shipbuilding enterprise. It consists of a unified knowledge library, model library, database, and method library, and is responsible for responding to retrieval requests, storage, safety control and so on. The resource layer implements a sharing mechanism and a process-monitoring coordination mechanism based on the relational database and the knowledge library; through this mechanism the data and processes of the ship product are shared, and through the extension of the functions of the various members of the shipbuilding enterprise, mutual cooperation is realized. A popular general commercial relational database management system is SQL Server; the relational database provides the most basic functions of data management. The database modeling language adopted is UML [4], which describes the static structure with classes; it not only defines the static structure but also indicates relations such as association, dependency, aggregation and so on.

3.2 Function layer

3.2.1 Knowledge expression of the shipbuilding domain based on ontology

The concept of ontology originates from philosophy. In information systems and knowledge systems, ontology is a method of knowledge expression that, compared with other means, stresses the semantic stratification of knowledge description, so it is appropriate for expressing domain knowledge in many research areas [5]. This paper therefore uses ontology to express the knowledge of the shipbuilding domain. An ontology is an explicit standard specification of a shared conceptualization; it can define a public vocabulary for a domain in order to share knowledge. On the basis of massive research, many ontology description languages have been born, including RDF, RDF-S, OIL, DAML, CycL, Ontolingua, OWL, KIF, and PSL [7]. PSL is an international language that reorganizes the different application processes of the manufacturing lifecycle; its goal is to provide a neutral standard language of process description, in order to integrate the related applications across the manufacturing lifecycle. Therefore, this paper chooses PSL to describe the knowledge of the shipbuilding domain. The structure of PSL includes three parts: the PSL Core, the Outer Core, and PSL Extensions. The PSL Core includes 12 relations, 2 functions, 2 constants, 17 axioms, and 5 supporting definitions. PSL makes formalized stipulations for the description of behavior, the occurrence of behavior, the objects involved in behavior, the order in which behaviors occur, and so on, as applied in the manufacturing domain.

3.2.2 System management based on workflow

The PLM system of the shipbuilding enterprise needs to manage many kinds of information and data, including material information, production information, product information and so on. The product information includes the documents, the blueprints, the components, the craft, as well as the various connections among them.

Only when this information participates in the entire flow of shipbuilding can the goal of PLM coordination management be fully achieved. Workflow technique [8] is a new technology that began to take shape internationally at the beginning of the 1990s. A workflow includes a group of activities, the ordinal relations among them, the process, the conditions of start and termination, and their descriptions. Workflow resolves business activities into predefined tasks, roles, rules, and processes whose execution can be completed and monitored, in order to raise the level of production organization and working efficiency. The question workflow studies is how, in order to achieve an anticipated target, a business process can be transmitted among many participants according to a group of predefined rules, thereby providing an advanced method for the production management of the enterprise. Fig.2 shows the workflow management model of the PLM system of the shipbuilding enterprise.
Fig.2 Workflow management model (a workflow engine, driven by user requests, coordinates the market management, material management, blueprint management, technology management, project management, ship maintenance, and ship recuperation workflow systems on the basis of the shipbuilding domain model, the shipbuilding ontology library, and the workflow, resource, organization, and procedure models)
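As a minimal illustration of this idea (a sketch with assumed task names and roles, not the authors' system), a workflow can be encoded as predefined tasks, roles and routing rules that a tiny engine executes:

# hypothetical blueprint-approval workflow: tasks, roles, and routing rules
blueprint_approval = {
    "tasks": {
        "submit":  {"role": "design personnel", "next": "review"},
        "review":  {"role": "chief engineer",   "next": "archive"},
        "archive": {"role": "document manager", "next": None},
    },
    "start": "submit",
}

def run(workflow, do_task):
    # tiny workflow engine: route each task to its role until completion
    name = workflow["start"]
    while name is not None:
        task = workflow["tasks"][name]
        do_task(name, task["role"])   # dispatch the task to the participant
        name = task["next"]           # follow the predefined routing rule

run(blueprint_approval, lambda n, r: print(f"{n} handled by {r}"))

A real engine would add branching conditions, monitoring, and organization and resource models, as Fig.2 indicates, but the decomposition into tasks, roles, and rules is the same.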

3.3 Network layer

The network layer is realized using XML and Web Services. XML is a group of standards issued by the W3C in 1998. Its goals are to satisfy the growing demands of network applications, to unify data exchange standards, to surmount HTML as a more powerful and more extensible web architecture, and at the same time to realize cross-platform data exchange and operation. The PLM system of the ship product manages all related information and processes of the entire lifecycle, which involves massive processes of describing, demonstrating, and depositing information. Therefore, XML is used to process the information in the PLM system so that it can be utilized and shared to the greatest extent [9]. Web Service is an innovative technique based on XML; it can invoke self-described, self-contained modular applications through standard network protocols. XML is the foundation of the Web Service architecture and the information model format of SOAP, WSDL, and UDDI. XML possesses a modular development model and the merits of the web; it is one of the foundations of Web Services and permeates every layer of a Web Service [10]. XML is the basic form in which data is expressed on the Web Service platform. Besides being easy to establish and easy to analyze, the main merits of XML lie in being both platform-independent and vendor-independent. The PLM system of shipbuilding uses XML and Web Services to bring about the transmission of messages and data streams between the service provider and the user: SOAP is used to exchange data between application procedures on a peer-to-peer basis; WSDL is used to define an XML-based modular description mechanism and to describe the essential details so that the service requester can use a specific service; and UDDI is used to provide a Web-based standard mechanism for the distributed registration, publication, and discovery of Web Services. The development tool of the network layer is J2EE. J2EE is a standard architecture; it provides a multi-layered, module-based application architecture with the application server at its core, together with feasibility, extensibility, manageability, and security for the system. The communication mechanism with the database is JDBC-ODBC. Fig.3 shows the network layer model of the PLM system of the shipbuilding enterprise.
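For illustration, the sketch below (with assumed element names, not a published schema) shows how a ship-product message might be serialized to platform-neutral XML by one member system and parsed by another; Python's standard library is used here purely as an example of an XML toolkit.

import xml.etree.ElementTree as ET

# build a hypothetical ship-order message on the sending side
order = ET.Element("shipOrder", id="HULL-2007-01")
ET.SubElement(order, "owner").text = "Example Shipping Co."
ET.SubElement(order, "deadweight", unit="t").text = "57000"
message = ET.tostring(order, encoding="unicode")  # platform-neutral payload

# any member system (design, ERP, supplier) can parse the same message
parsed = ET.fromstring(message)
print(parsed.get("id"), parsed.findtext("deadweight"))

In the architecture described above, such a payload would travel inside a SOAP envelope to a service whose interface is described in WSDL and discovered via UDDI.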
Fig.3 Network layer model (web browsers and application clients reach JSP/Servlet and EJB containers over HTTP/SOAP; the containers use J2EE services such as JNDI, JDBC, JAXP, JMS, JAAS, JTA, JavaMail, JAF, JCA and RMI-IIOP to connect to the database, the XML information library, and the enterprise information system)

3.4 Agent layer

The agent layer is the virtual agent between the Web Service layer and the application layer. This paper uses multi-agent technology: a multi-agent system is composed of multiple interacting computational elements called agents. Multi-agent theory provides an abstract analysis method for distributed systems, and the shipbuilding enterprise is a representative distributed system. The function modules of the PLM of the shipbuilding enterprise are viewed as a network of cooperating intelligent agents; every agent has a certain function and carries out combined efforts with the other agents. Fig.4 shows the multi-agent model of the PLM of the shipbuilding enterprise.
Fig.4 Multi-agent model (an interaction layer of the interface agent and client agents such as the ship owner, ship manufacturer, ship material supplier, and other user agents; a business layer of CAD, CAE, CAM, CAPP, CRM, SCM and ERP agents; and a resource layer of agents using and transforming direct and indirect resources)
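The sketch below is a toy rendering of the three-layer structure in Fig.4 under assumed names: an interface agent passes a user request inward to a business agent, which cooperates with a resource agent to fulfil it.

class Agent:
    def __init__(self, name):
        self.name = name
    def handle(self, msg):
        raise NotImplementedError

class ResourceAgent(Agent):
    def handle(self, msg):
        # performs an operation on system resources
        return f"{self.name}: fetched '{msg}' from the resource layer"

class BusinessAgent(Agent):
    def __init__(self, name, resource):
        super().__init__(name)
        self.resource = resource
    def handle(self, msg):
        # cooperates with the resource agent to carry out the business task
        return f"{self.name}: {self.resource.handle(msg)}"

class InterfaceAgent(Agent):
    def __init__(self, name, business):
        super().__init__(name)
        self.business = business
    def handle(self, msg):
        return self.business.handle(msg)  # pass the user request inward

cad = InterfaceAgent("UI", BusinessAgent("CADAgent", ResourceAgent("DBAgent")))
print(cad.handle("hull section drawing #12"))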


The interaction layer is composed of the interface agent and client agents, mainly accomplishing the interaction between the system and people and gathering information from outside the system. The business layer is composed of many agents, such as the CAD agent from the design stage, the CAE agent from the engineering stage, the CAM agent from the manufacturing stage, and so on; the key is how to realize dynamic reorganization of business processing and make use of available resources to produce intelligent decisions of strategic importance. The resource layer is composed of agents using and transforming resources, mainly completing various operations on system resources.

3.5 User layer

An amicable human-computer interaction interface must be provided for different users to operate the PLM system on different computers. Different enterprises may also request different man-machine interfaces according to their respective operational objectives. The system model brought forward in this paper provides a unified cooperation management platform and business gateway for all personnel of the shipbuilding enterprise, including design personnel, process personnel, etc. The various enterprise businesses and cooperations can be carried out within this unified environment. The cooperation management platform fully realizes the seamless integration and coordinated cooperation of the enterprise. Every business module of the PLM of the shipbuilding enterprise is integrated by the cooperation management platform, forming a rapidly and tightly connected organic whole, fundamentally removing isolated information, and gaining the benefits of coordination, interaction, and integration.

4. Conclusion

A PLM system is a typical management technique applicable to the shipbuilding enterprise. This paper brings forward a PLM system model on the basis of the entire-lifecycle theory of the ship product: an information integration system model of coordination management based on the PLM of the ship product, together with the solution scheme of every layer. The model is based on the current situation of the world shipbuilding industry and on the rapid development and implicit needs of China's shipbuilding enterprises. It fully combines the actual informatization needs of current manufacturing enterprises with the development trend of advanced manufacturing techniques, and provides a reference frame for the further study and practical application of PLM in the shipbuilding enterprise.
References
[1] Zhang Yaoguang. Layout characteristic and future of the world shipbuilding industry. Economic Geography, 2002, 22(6): 716-719.
[2] China Industrial Association of the Ship Industry. Economy operation analysis of the whole ship industry in the first half of 2006. www.shipbuilding.com.cn, 2006-9-8.
[3] Kazuo Muto, Hisashi Kubota. Trends and verification of 3D CAD/CAE/CAM/CAT/NET work systems and PLM systems in advanced automotive manufacturing technology. Review of Automotive Engineering, 2006(27): 337-344.
[4] Costa C.A., Harding J.A., Young R.I.M. The application of UML and an open distributed process framework to information system design. Computers in Industry, 2001, 46(1): 33-48.
[5] Smirnov A., Pashkin M., Levashova T., Chilov N. Ontology-based support for semantic interoperability between SCM and PLM. Int. J. Product Lifecycle Management, 2006, 1(3): 289-295.
[6] Samoylov V., Gorodetsky V. Ontology issue in multi-agent distributed learning. In: Gorodetsky V., Liu T., Skormin V.A. (eds.), AIS-ADM 2005, LNAI 3505, 2005: 215-230.
[7] Schlenoff C., Gruninger M., Tissot F., et al. The Process Specification Language (PSL) version 1.0. USA National Institute of Standards and Technology, 2000.
[8] van der Aalst W., van Hee K. Workflow Management: Models, Methods and Systems. Cambridge: MIT Press, 2002: 101-120.
[9] Lai R. J2EE Platform Web Services. Zhou Bin, et al., trans. Beijing: Electronic Industry Press, 2005: 210-240.
[10] Joshi S. Web service integration. EAI Journal, 2003, 8: 30-35.


Design of Link Structure of Electronic Commerce Website


Wu Shaofei 1, Wei Siying 2, Zhang Jinlong 3
1 School of Computer Science & Engineering, Wuhan Institute of Technology, Wuhan, Hubei, P.R. China, 430074
2, 3 School of Management, Huazhong University of Science & Technology

Abstract Websites are a good communication medium between enterprise and customer in the electronic commerce environment. Since network bandwidth is still very limited, the quality of commerce is strongly influenced by the link structure of the website. It is therefore theoretically and practically important to research the design and optimization strategy of website structure. This paper focuses on the design of the basic link structure of a website. The average visiting time of customers is estimated by analyzing the downloading time and selecting time of each web page. From the standpoint of shortening the average visiting time of pages with greater importance, this paper proposes an approach for designing a website's basic link structure. This approach provides site designers with a useful practical tool.
Key words Electronic commerce, Website design, Tabu search, Link structure

1. Introduction

The basic link structure is the hierarchy of a website, expressed as a tree structure, and it is the first thing considered by the designer of the website structure. Generally, designers design the basic structure according to the importance of the pages. But because the quantity of pages contained in an electronic commerce website is huge, completing the design of the basic structure by hand is difficult. The nearer a commodity page is to the homepage, the lower its level, the greater the possibility that it will be visited, and the more opportunities the commodity has to be purchased by customers [1-2]. Because the number of commodities that can be displayed by each index page is not infinite, it is impossible to put all pages of the website at the top layer; therefore the design of the website structure should be optimized. This paper presents an automatic method for designing the basic website structure: the designer need only input the initial basic link structure of the website and the relative importance of each page to obtain an optimized basic link structure. The work of this paper provides a new method for designing the basic link structure of large-scale electronic commerce websites quickly and reasonably.

2. Description of the problem and establishment of the model

Generally speaking, commodity information is displayed by one or a group of commodity pages. For a simple commodity, a single homepage is usually enough, but a complex commodity may need more pages to introduce each quality separately. In this case, the commodity pages can be put into one-to-one correspondence with the commodities sold on the website by adding virtual commodity pages [3-4].
Fig.1 Converting of website structure (left: pages 5, 7 and 8 are separate nodes under the index pages; right: they are abstracted into a single commodity page)

For example, in Fig.1, pages 5, 7 and 8 combine to introduce one kind of commodity (left graph); if we abstract these 3 pages as one commodity page (right graph), the commodity pages correspond with the commodities one by one. Each commodity homepage may therefore belong to a series of index pages. For example, in the right graph of Fig.1, page 5 may belong to index pages 2 and 0, and page 4 may belong to index pages 1 and 0. Obviously, each commodity page has a lowest-level index page. In the design stage, the designer needs to determine which index page each commodity page should belong to; this is the goal of the design. Suppose the website has $m$ index pages and $n$ commodity pages, with suffix sets $I_m$ and $I_n$ respectively. We use the matrix $A=\{a_{i,j}\}$ ($i, j \in I_m$) to represent the connection structure between the index pages; it is relatively stable. We use the matrix $X=\{x_{i,j}\}$ ($i \in I_m$, $j \in I_n$) to represent the link structure between the index pages and the commodity pages, with $a_{i,j}, x_{i,j} \in \{0,1\}$: if $a_{i,j}$ (or $x_{i,j}$) equals 1, the link from page $i$ to page $j$ exists; otherwise it does not. The link count of index page $i$ can then be calculated as

$$l_i = \sum_{j \in I_m} a_{i,j} + \sum_{j \in I_n} x_{i,j}, \quad i \in I_m \qquad (1)$$

Now we estimate the time from when a user opens page $i$ until he leaves it. The longer this time, the more difficult the page is to visit; otherwise it is easier. The time is composed of two parts:
(1) Average downloading time $D_i$: it is mainly decided by the transmission speed $v$ of the network and the page size $V_i$, and can be calculated by (2):

$$D_i = V_i / v, \quad i \in I_m \cup I_n \qquad (2)$$

(2) Average selecting time $S_i$: it is related to the link count of the page. Because different people care about a page to different degrees and have dissimilar browsing speeds and habits, the time each browser takes to reach the same page differs. But for a given website structure, in order to reach the same page, different browsers must all pass through the same path and index pages. Considering that browsers generally choose the next page by gradual filtration, we can use a method of simulating the visiting process to determine $S_i$. We therefore hypothesize:
Hypothesis 1: The user chooses by circularly browsing the links. After each circulation he only cares about the remaining $l_i/a$ ($a>1$) links; when only one link remains, the circulation is over.
Hypothesis 2: The average time the user spends examining each link is $t$.
The selecting time can then be calculated by (3); obviously when $a=2$, $S_i = (2l_i-1)t$:

$$S_i = \left(\sum_{k=0}^{\log_a(l_i)} a^k\right) t = \frac{a\,l_i - 1}{a-1}\, t, \quad i \in I_m \qquad (3)$$

Considering only basic links, in order to reach a commodity page the customer has to pass through all of its upper index pages. Suppose $\Phi_j$ ($j \in I_n$) is the set of upper index pages of commodity page $j$; then the downloading time, selecting time, and total time of commodity page $j$ are determined by (4)-(6):

$$Td_j = \sum_{i \in \Phi_j} D_i + D_j, \quad j \in I_n \qquad (4)$$

$$Ts_j = \sum_{i \in \Phi_j} S_i, \quad j \in I_n \qquad (5)$$

$$T_j = Td_j + Ts_j, \quad j \in I_n \qquad (6)$$
After completing the design of the pages of the electronic commerce website, the designer determines the basic link structure according to the importance of each page. In other words, important pages should be put at low levels of the website, and unimportant pages at high levels. This paper uses $H_j$ ($j \in I_n$) to represent the importance of each page; these data are given by the designer. The problem of designing the initial basic link structure of an electronic commerce website can then be formally stated as follows: given an initial basic structure graph of the website described by a tree structure, design the basic structure according to the designer's judgment of the importance of each page, so that the average time customers spend visiting the highly important commodity pages is small. In order to keep the information content of each page from being too great, the link count of each index page is limited. We must also control the average downloading time and selecting time of commodity pages, to prevent losing customers because the waiting time is excessively long or looking for the commodity information is too difficult.
Define the decision variable $X = [x_{i,j}]$ ($i \in I_m$, $j \in I_n$):

$$x_{i,j} = \begin{cases} 1, & \text{the link between page } i \text{ and page } j \text{ exists} \\ 0, & \text{the link between page } i \text{ and page } j \text{ does not exist} \end{cases}$$

We can then establish the following mathematical model:

$$\min f(X) = \sum_{j \in I_n} \left[ T_j(X) \cdot H_j \right] \qquad (7)$$

$$\text{s.t.} \quad \sum_{i \in I_m} x_{i,j} = 1, \quad j \in I_n \qquad (8)$$

$$\sum_{j \in I_n} x_{i,j} \le L_i, \quad i \in I_m \qquad (9)$$

$$\frac{1}{n} \sum_{j \in I_n} Td_j(X) \le \overline{Td} \qquad (10)$$

$$\frac{1}{n} \sum_{j \in I_n} Ts_j(X) \le \overline{Ts} \qquad (11)$$

Equation (7) is the objective function; (8) means each commodity page links to exactly one index page; (9) is the link count constraint of each index page; (10) is the constraint on the greatest average downloading time of commodity pages; and (11) is the constraint on the greatest average selecting time of commodity pages. $\overline{Td}$, $\overline{Ts}$, $L_i$ ($i \in I_m$) and $H_j$ ($j \in I_n$) in the model are constants; $Td_j(X)$, $Ts_j(X)$, $T_j(X)$, $j \in I_n$, are intermediate variables determined by (1)-(6).
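To make the model concrete, the sketch below evaluates equations (1)-(7) for a toy tree-shaped site; the page names, sizes, speeds and importance weights are invented example data, not values from the paper.

# assumed data: network speed (KB/s), per-link scan time (s), filter factor a
V_SPEED, T_LINK, A = 50.0, 0.5, 2
parent = {1: 0, 2: 0}                # index-page tree; page 0 is the homepage
assign = {"p4": 1, "p5": 2}          # commodity page -> its index page (X)
size = {0: 40, 1: 30, 2: 30, "p4": 60, "p5": 80}  # page sizes V_i in KB
importance = {"p4": 0.7, "p5": 0.3}  # H_j, given by the designer

def links(i):                        # l_i, equation (1)
    return sum(1 for c in parent if parent[c] == i) + \
           sum(1 for j in assign if assign[j] == i)

def download(p):                     # D_i, equation (2)
    return size[p] / V_SPEED

def select(i):                       # S_i, equation (3): (2*l_i - 1)*t for a = 2
    return (A * links(i) - 1) / (A - 1) * T_LINK

def upper_index_pages(j):            # Phi_j: all index pages above commodity j
    path, i = [], assign[j]
    while i != 0:
        path.append(i)
        i = parent[i]
    return path + [0]

def total_time(j):                   # T_j = Td_j + Ts_j, equations (4)-(6)
    phi = upper_index_pages(j)
    return (sum(download(i) for i in phi) + download(j)
            + sum(select(i) for i in phi))

print(sum(total_time(j) * importance[j] for j in assign))  # objective (7)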

3. Algorithm design
Because the set of upper index pages $\Phi_j$ ($j \in I_n$) of commodity page $j$ is difficult to express in analytic form, the above model is clearly a nonlinear program, and classical algorithms cannot be applied directly. We can, however, use a preferential assignment algorithm, described simply as follows: it assigns the important commodity pages, and as many commodity pages as possible, to the index pages in turn, from high level to low. The preferential assignment algorithm is simple and easy to realize, but when the feasible domain is small it may not find a feasible solution. To ensure a feasible solution, we must seek a better algorithm. Genetic algorithms, tabu search, simulated annealing, and other intelligent optimization algorithms can solve the above problem [5]. But because an electronic commerce website has a great number of pages and the scale of the structure optimization is large, the efficiency of the algorithm is the key factor to consider. In practical applications, tabu search (TS) has high search efficiency, so this paper selects TS as the solution algorithm. The primary elements of the TS algorithm designed for the above problem are:
<1> Initial solution: (8) makes each commodity page have exactly one direct index page, so we can use a duplicable natural number code. The initial solution makes each commodity page belong to the lowest-level index page to which it can belong.
<2> Neighborhood structure: the neighborhood of the current solution is the set of solutions obtained by making one commodity page j belong to the next index page to which it can belong. The objective value of an infeasible solution in the neighborhood is increased by a big positive number, to guarantee that it is larger than the objective value of any feasible solution.
<3> Tabu list and long-term table: the tabu list records the last TabuSize moves; in this paper TabuSize = 9. The long-term table records how many times each commodity page has moved and exerts a frequency penalty: f(X) = f(X) + w·penalty(j).
<4> Aspiration level function: when the objective value of a solution reached by a tabu move surpasses the best solution in history, the taboo is canceled.
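A compact sketch of the TS loop described in <1>-<4> is given below. The objective and neighborhood functions are passed in as parameters (an infeasible neighbor is assumed to already have a big positive number added to its objective value, as in <2>), and the toy problem at the end is invented purely to make the sketch runnable.

def tabu_search(x0, objective, neighbors, tabu_size=9, max_iters=500,
                penalty_weight=0.1):
    # minimize objective(X); a move is (commodity page j, new index page i)
    best, best_val = x0, objective(x0)
    current, tabu, freq = x0, [], {}   # short-term list and long-term table
    for _ in range(max_iters):
        candidates = []
        for move, x in neighbors(current):
            val = objective(x) + penalty_weight * freq.get(move, 0)
            # aspiration: a tabu move is allowed only if it beats the best
            if move in tabu and val >= best_val:
                continue
            candidates.append((val, move, x))
        if not candidates:
            break
        val, move, current = min(candidates, key=lambda c: c[0])
        tabu.append(move)
        if len(tabu) > tabu_size:      # keep only the last tabu_size moves
            tabu.pop(0)
        freq[move] = freq.get(move, 0) + 1
        if objective(current) < best_val:
            best, best_val = current, objective(current)
    return best, best_val

# toy demo: reassign two commodity pages among three index pages (0, 1, 2)
demo_obj = lambda x: abs(x["p1"] - 2) + abs(x["p2"] - 1)
def demo_neighbors(x):
    for j in x:
        for i in range(3):
            if i != x[j]:
                y = dict(x); y[j] = i
                yield (j, i), y

print(tabu_search({"p1": 0, "p2": 0}, demo_obj, demo_neighbors))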

4. Conclusion

This paper has established a mathematical model for designing the link structure of an electronic commerce website and proposed a preferential assignment algorithm and a TS algorithm. Both algorithms can achieve the optimization goal. The preferential assignment algorithm is easy and feasible, but its result is not ideal, and it may fail to find a feasible solution. The TS algorithm needs more algorithm design, but it can obtain an optimized result. The optimization method proposed in this paper has theoretical significance for the design of the structure of electronic commerce websites.
References
[1] Nakayama T., Kato H., Yamane Y. Discovering the gap between web site designers' expectations and users' behavior. Computer Networks, 2000, 33: 823-835.
[2] Garofalakis J., Kappos P., Mourloukos D. Web site optimization using page popularity. IEEE Internet Computing, 1999, Jul.-Aug.: 22-29.
[3] Wang Y.W., Wang D.W. Design strategy of web page for e-supermarket. In: Jiang Pingyu et al. (eds.), International Conference on eCommerce Engineering. Xi'an: China Machine Press, 2001: 101-107.
[4] Yen B.P., Fu K. Accessibility on web navigation. In: Jiang Pingyu et al. (eds.), International Conference on eCommerce Engineering. Xi'an: China Machine Press, 2001: 30-37.
[5] Kim J., Yoo B. Toward the optimal link structure of the cyber shopping mall. Int. J. Human-Computer Studies, 2000, 52: 531-551.


Mitigating Risks in Software Projects Through Phased Development Process: A Real Options Analysis
Chen Tao, Zhang Jinlong
School of Management, Huazhong University of Science and Technology, Wuhan, 430074, China chentaohust@yahoo.com.cn

Abstract Managing the risks of software projects is a challenging task due to the rapidly changing environment. This paper introduces the concept of the staging strategy from the finance literature into software project risk management. Real options theory is used to justify a phased software development process in an uncertain environment. It is found that a phased development process provides additional value for software projects and is a good tool for mitigating risks. The approach we present can help to understand and improve the flexibility of the software process produced by a multi-phase development scheme.
Key words Software project, Risk management, Real options, Phased development process

1 Introduction
Information technology is seen by organizations as a key source of competitive advantage, through effects such as reducing inventory costs, improving production efficiency, and facilitating better customer relationship management. Unfortunately, the productivity gains from IT investments may be neutral or even negative due to the high risks that characterize most software projects [1]. Unsuccessful management of software project risk can lead to a variety of problems, such as cost and schedule overruns, unmet user requirements, and failure to deliver the business value of software systems. Most research related to software risk management has been limited to risk reduction, which aims to reduce the probability of software risks to the lowest point. For example, a project manager might decide to spend more time on elaborate system design, or employ an experienced consultant, to mitigate the risk of technical complexity. Unfortunately, not all software project risks can be eliminated by such actions; some of them need to be hedged. Risk hedging strategies are well accepted in the financial risk area, but have not received enough attention in software risk management. There are few tools available to help project managers develop effective strategies to hedge risks in software projects, and the related theory base is also insufficient. In this paper, we present an analysis of the value of staging the software development process to hedge software project risks, based on real options theory. As discussed by Kumar [2], the theory of real options is a useful framework for understanding risk hedging in software projects. Many software project management decisions can be conceptualized as real options, and the qualitative insights provided by real options theory are mostly consistent with decision makers' intuition. The approach we present can help to understand and justify the strategy of a multi-phase scheme in software project management. The article is organized as follows. In the next section, we explain the value of applying a staged development strategy in managing software project risks. We then present a scenario involving the development of an ERP system, where growth and compound options are adopted to demonstrate the effect of the staging decision on risk hedging. Finally, section 4 provides some conclusions and discusses possible extensions for future research.

2 Hedging Risks in Software Projects Using Staging Strategy


While a large portion of the prior literature is mainly concerned with risk reduction strategies, Kumar proposed a useful framework for understanding risk hedging in software projects based on the theory of real options; the framework provides a systematic approach to risk management for software project managers. Another significant work is Benaroch's [3] four-step option-based approach to managing IT investment risk, which facilitates a more comprehensive identification of option configurations.

This research has been supported by the National Natural Science Foundation of China (No. 70571025)


Different from prior work, the focus of this paper is specifically on the application of the staging strategy to hedging software risks. Multi-staged financing is a well-accepted risk hedging strategy for venture capital firms in the finance literature. Researchers advocate that a multistage strategy helps the venture capitalist minimize private information costs by becoming an insider and by imposing management changes to protect his investment. Hsu [4] used Geske's compound option approach to value the options inherent in the multi-staging scheme. Dubil [5] developed a simulation methodology for comparing the risk of a multistage venture financing strategy with an up-front financing plan; he found that the risk mitigation due to multi-staging was significant in itself, irrespective of any agency issues. In fact, the staging strategy in software development has already been covered in the software engineering literature. The spiral model developed by Boehm [6] is, in essence, a phased project structure. At the beginning of each phase, investments are made to develop and evaluate variant approaches for meeting the objectives of the phase; the project is re-evaluated at the end of each phase, and plans for the next phase are then developed. Sullivan et al. [7] interpreted in terms of options why the spiral model is much more effective than the waterfall model: by developing and evaluating various approaches in each phase, an option is created to choose the best solution identified. More importantly, the investment in each phase creates an option to decide whether or not to invest in the next phase. This can be seen as a compound option, which increases value in an uncertain environment. Panayi and Trigeorgis [8] employed the multi-stage option concept in the valuation of two actual case-study applications, showing that options valuation can justify investment in multi-stage projects even when a project has a negative static NPV considered in isolation. To the best of our knowledge, few papers have discussed the risk hedging effect of a staging scheme for software development in terms of real options theory. One exception is Erdogmus [9], who developed a methodology to value and plan commercial software development under multiple sources of risk; the methodology combines real options analysis with earned-value based estimation. Building on these prior works, this paper attempts to introduce the multi-staging strategy into software risk management and to illustrate its value for risk hedging in the real options framework. To reduce risk, project managers must be allowed to change their decisions as new information about uncertainty factors becomes available. This is the core concept of managerial flexibility in real options theory. A staging decision in project management is equivalent to exercising a defer option, and thus acquires this kind of flexibility. To further justify and quantify the value of the staging decision, the next section presents a case illustration and uses real option models to quantify the extra benefits obtained through staging.

3 Valuating the Phased Software Process with Real Options Analysis


National Construction and Installation Engineering company (NCIE) was established in Chongqing in 1954. In the early 1970s, the company transferred to Hubei province; it is now headquartered in Wuhan and has dozens of branches scattered all over the country. The company possesses nearly 1,200 large and medium-sized construction equipments, 100 million RMB of fixed assets, and more than 2,260 staff. NCIE's major business involves civil engineering, installation, construction of power plants, roads and bridges, and some real estate business. In recent years, the company has experienced rapid development, with a 10% increase in annual revenue on average. Starting in 2005, NCIE planned to develop an Enterprise Resource Planning (ERP) system to support the integration of the various business processes within the company. The system was expected to support most daily activities, including purchasing, inventory, marketing, human resources, facilities and cost management, etc. According to the initial arrangement, the ERP development would be completed in 3 years at a total cost estimated at about US$250,000; if carried out successfully, it would bring about US$300,000 in revenues. To evaluate this ERP project, we first adopt the traditional NPV analysis approach, assuming a risk-free rate of return of 12%. The net present value would be US$34,884 ($(300{,}000 - 250{,}000)e^{-0.12 \times 3}$).

When traditional NPV analysis is used, it is implicitly assumed that the total cost of the project is incurred all at once or not at all. In practice, however, the development of this ERP project was divided into two stages. The inventory management system was an important part of the entire project and was highly related to all logistics activities in the firm; developing the inventory management system first could help in understanding the procurement, sales, and manufacturing processes that would be integrated in the ERP system. Therefore, the management decided to adopt the staging strategy to mitigate potential risks. One year was to be spent building the inventory management subsystem; if it was successfully implemented, the project team would proceed to the second phase, developing the whole ERP system over an expected period of 2 years. The cost of the inventory management subsystem was assumed to be US$50,000, and the following development phase needed approximately US$200,000. We now adopt real option models to examine the value of this staging decision. The simple growth option model and the Geske (1979) compound option model are used separately as option calculators. Although the growth option suffices to value the potential profit, the compound option model captures the value produced by the staging decision more accurately. The investment in developing the inventory management system may be viewed as a European growth option on a futures contract where the futures price equals the expected revenue V, with exercise price M. The theoretical worth of the option today, $C_0$, may be determined using the standard Black-Scholes growth option formula [10]:

$$C_0 = V N(d_1) - M e^{-rT} N(d_2) \qquad (1)$$

where

$$d_1 = \frac{\ln(V/M) + (r + \sigma^2/2)T}{\sigma\sqrt{T}}, \qquad d_2 = d_1 - \sigma\sqrt{T} \qquad (2)$$
M = the cost of application design and development, σ = the volatility of the expected revenue, r = the risk-free rate of interest, and N(·) = the cumulative normal distribution function. Using the inputs mentioned above, the value of the growth option is $C_0$ = US$162,559. Taking the cost of phase 1 into account, the value of the project is $C_0 - 50{,}000e^{-0.12 \times 1}$ = US$118,213. The project can also be viewed as a two-phase investment scenario: when the firm invests in Phase 1, it acquires the option to invest in Phase 2. These two phased investments may be valued using the compound options framework developed by Geske, which captures the value of the compound option more accurately [11]:
$$C_0 = V N_2\!\left(h_1 + \sigma\sqrt{\tau_1},\; h_2 + \sigma\sqrt{\tau_2};\; \sqrt{\tau_1/\tau_2}\right) - M e^{-r\tau_2} N_2\!\left(h_1, h_2;\; \sqrt{\tau_1/\tau_2}\right) - K e^{-r\tau_1} N(h_1) \qquad (3)$$

where

$$h_1 = \frac{\ln(V/\bar{V}) + (r - \sigma^2/2)\,\tau_1}{\sigma\sqrt{\tau_1}}, \qquad h_2 = \frac{\ln(V/M) + (r - \sigma^2/2)\,\tau_2}{\sigma\sqrt{\tau_2}} \qquad (4)$$

and $\bar{V}$ is the value of $V$ such that

$$\bar{V} N\!\left(h_2 + \sigma\sqrt{\tau_2 - \tau_1}\right) - M e^{-r(\tau_2 - \tau_1)} N(h_2) = K \qquad (5)$$

K = the cost of developing the inventory management system, r = the risk-free interest rate, $\tau_1 \equiv T_1 - t$, $\tau_2 \equiv T_2 - t$, $T_2 \ge T_1$, N(·) = the cumulative normal distribution function, and $N_2(a, b; \rho)$ = the bivariate cumulative normal distribution function with a and b as upper integral limits and ρ as the correlation coefficient. Using the inputs V = US$300,000, K = US$50,000, M = US$200,000, $T_1$ = 1 year, $T_2$ = 3 years, r = 12%, and σ = 50% in Equation (3), the value of the compound option is $C_0$ = US$134,659. Comparing the compound option value with the traditional NPV, the option component of this project is identified to be worth an additional US$99,775. That is to say, the compound option model has identified that the value
associated with the staging strategy is worth US$99,775 more than under the initial schedule. The extra value calculated by the real options approach can be interpreted in several ways:
Software projects are usually characterized by their high-risk nature, and the benefit streams are uncertain and affected by many risk factors. It has been shown that traditional NPV analysis tends to underestimate investments in uncertain environments, sometimes by as much as a factor of two [12]. Real options analysis captures the value of the managerial flexibility ignored by the existing net cash flow method, and therefore gives a more effective valuation for risky software projects.
The development of the ERP system is divided into two stages, giving the managers a waiting option. If the outcome of the first round of development turns out to be favorable, the remaining development tasks can be performed to accomplish the whole system; in the contrary case, the implementation can be shut down temporarily or even abandoned to hedge serious risks.
One of the most common risks embedded in software projects is misunderstanding the requirements or unclear requirements definition. The development effort in building the inventory management system helps in learning the actual needs of end users. Because the inventory management module is highly related to the other parts of the ERP system, a more robust and acceptable requirements definition can be achieved in the subsequent development phase.
Certain software projects, especially large-scale projects such as ERP systems, take on high complexity. If new technology is used, or if there are a large number of required links to external systems, the development task becomes even more difficult. A pilot project provides the opportunity for team members to learn the specialized skills required by the project, or to develop software tools that reduce the risk of complexity.
Sensitivity analysis can identify the inputs most influential on the value of the options, which helps decision makers make optimal use of a phased strategy for planning the software process. In what follows, we discuss the relationship between the expected payoffs, the variance, the costs of each phase, and the extra value of the phased strategy.
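Before turning to the sensitivity figures, the sketch below makes the two calculators concrete by implementing equations (1)-(5) in Python; it is an illustration under stated assumptions, not the authors' code. The standard normal CDF comes from the standard library and the bivariate normal CDF needed by Geske's formula from SciPy; the critical value in condition (5) is found by bisection. With the stated inputs the compound value should come out near the reported US$134,659 (small differences can arise from the numerical CDF); the volatility behind the reported growth-option figure of US$162,559 is not stated in the text, so that figure is not reproduced here.

from math import exp, log, sqrt
from statistics import NormalDist
from scipy.stats import multivariate_normal

N = NormalDist().cdf  # standard normal CDF

def growth_option(V, M, T, r, sigma):
    # European growth (call) option, equations (1)-(2)
    d1 = (log(V / M) + (r + sigma ** 2 / 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return V * N(d1) - M * exp(-r * T) * N(d2)

def N2(a, b, rho):
    # bivariate standard normal CDF with correlation rho
    return float(multivariate_normal(mean=[0.0, 0.0],
                                     cov=[[1.0, rho], [rho, 1.0]]).cdf([a, b]))

def geske_compound(V, K, M, T1, T2, r, sigma):
    # Geske compound call, equations (3)-(5)
    lo, hi = 1e-6, 100.0 * M       # bracket for the critical value V_bar
    for _ in range(100):           # bisection on condition (5)
        mid = (lo + hi) / 2
        if growth_option(mid, M, T2 - T1, r, sigma) > K:
            hi = mid
        else:
            lo = mid
    V_bar = (lo + hi) / 2
    h1 = (log(V / V_bar) + (r - sigma ** 2 / 2) * T1) / (sigma * sqrt(T1))
    h2 = (log(V / M) + (r - sigma ** 2 / 2) * T2) / (sigma * sqrt(T2))
    rho = sqrt(T1 / T2)
    return (V * N2(h1 + sigma * sqrt(T1), h2 + sigma * sqrt(T2), rho)
            - M * exp(-r * T2) * N2(h1, h2, rho)
            - K * exp(-r * T1) * N(h1))

# static NPV of the lump-sum plan: (300,000 - 250,000) e^(-0.12*3) ~ 34,884
print(round((300_000 - 250_000) * exp(-0.12 * 3)))
# compound option with the stated inputs; the paper reports US$134,659
print(round(geske_compound(300_000, 50_000, 200_000, 1.0, 3.0, 0.12, 0.5)))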

Fig. 1. Expected payoffs impact on phased process (vertical axis: extra value; horizontal axis: expected payoffs, 250 to 900 thousand dollars)

Fig. 2. Project risks impact on phased process (vertical axis: extra value; horizontal axis: volatility of payoffs, 20% to 150%)

Figure 1 shows that, as the project's expected profit increases, the extra value of the phased development process rises slightly at first and then drops sharply. This indicates that when a project has a high yield, decision-makers tend to invest in the software project in a lump sum to acquire returns as soon as possible, and the phased strategy is less effective. In the option pricing formula, the risk of an investment project is measured by the volatility of its expected payoffs: the further the payoffs fluctuate from the expectation, the greater the risk. The sensitivity to volatility is shown in Figure 2. Obviously, when facing greater software project risks, it is sensible to stage the development process to alleviate them, and the greater the risk, the more obvious the benefits of the phased strategy.
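The volatility sweep of Fig. 2 can be scripted by reusing the geske_compound_call helper from the earlier sketch (assumed to be in scope); the "extra value" plotted in the figure subtracts the project's static NPV, which is not restated on this page, so only the option value is printed here.

```python
# Volatility sweep mirroring Fig. 2, reusing geske_compound_call from the
# earlier sketch. The extra value of staging equals this compound option
# value minus the project's static NPV (left symbolic here).
for sigma in (0.20, 0.30, 0.50, 0.70, 0.90, 1.20, 1.50):
    c = geske_compound_call(V=300_000, K=50_000, M=200_000,
                            T1=1, T2=3, r=0.12, sigma=sigma)
    print(f"volatility {sigma:.0%}: compound option value = US$ {c:,.0f}")
```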
Fig. 3. Cost structures impact on phased process (vertical axis: extra value; horizontal axis: cost ratio of the second phase, 10% to 90%)

Figure 3 shows that the cost structure of the software process has an impact on the effectiveness of the phased strategy. As can be seen, the greater the cost of the second phase (that is, the smaller the first-phase development cost), the more noticeable the benefits arising from the phased process. This is mainly due to the change in the value of the waiting option embedded in the phased process: with a smaller first-round cost, more capital is postponed to the second round, which produces a more valuable waiting option and a more effective phased process.

4 Summary and Conclusion


Managerial flexibility, including the ability of decision makers to delay, suspend, or abandon a software project under unfavorable conditions, is of significant importance to software project risk management. The staging strategy structures the project as a sequence of managerial decisions over time, giving managers a kind of managerial flexibility to cope with the high risks that characterize most software projects. In this context, each stage of software project development can be viewed as a joint acquisition of valuable options to acquire more revenues at a later stage. Based on real options theory, this paper introduces the multi-staging strategy well accepted in the financial risk literature into software risk management. We present a scenario of the development of an ERP system and use the real option model to illustrate the extra benefits obtained through the staging strategy. The conclusion from the findings is essentially that phased development processes, such as spiral and agile processes, create economic value in the form of the flexibility to respond as uncertainties are resolved, and thus are good tools for hedging risks. This finding itself is not very surprising, but it is underpinned by a well-formulated theory that gives rise to further results.

However, several problems deserve further discussion. The most likely argument concerns the use of the Black-Scholes approach to option valuation, a debate that has gone on for a long time. Not only the Black-Scholes formula but also any of a large family of similar approaches based on no-arbitrage assumptions makes strong statistical assumptions about the nature of the underlying uncertainty. These option pricing models generally assume that the expected payoffs follow certain probability distributions, for instance geometric Brownian motion with a drift. This assumption may be defended for financial options, for which there is an efficient market with numerous players and numerous stocks for trading; the law of large numbers could apply to such a complete market, thus justifying the use of probability theory [13]. Nevertheless, the situation for real options is rather different, especially for software product and process design or management. For software project investments, the number of players producing the consequences is usually quite small. Moreover, decision makers cannot obtain historical data on past revenues and costs to formulate the distribution of expected payoffs. Thus, the assumption of a particular stochastic motion is not well substantiated for software project valuation. By contrast, Benaroch and Kauffman [14] examine the applicability of option pricing models and the Black-Scholes model in IT capital budgeting. Their work provides a formal theoretical grounding for the validity of the Black-Scholes option pricing model in the context of the spectrum of capital budgeting methods that might be employed to assess IT investments. Therefore, as most researchers in the real options literature do, we adopt the Black-Scholes growth option and the Geske compound option formula to demonstrate the extra value of the phased process in this paper. The assumption of lognormality of the perceived value of the software project may not be valid in real-world cases, and this remains a difficulty for an accurate quantitative valuation, but that is not our emphasis. On the whole, it is the logic of real options that shows how to handle getting the timing right, scaling up, or even abandoning, as the organization learns about its business environment and mitigates risks with the passage of time.

Besides, in a financial market one can hedge risk with a "replicating portfolio" of market-priced stocks and bonds; the mechanism of hedging risk using a phased development process is totally different. The goal is to keep investments small until risks are resolved. Such a structure provides the ability to manage risk by embedding options in a project to change course or even to abandon it as uncertainties are resolved over time. Finally, it should be noticed that a staging decision may also have an opportunity cost due to lost revenue, increased cost, etc. Situations with an opportunity cost of waiting can be modeled as options with possible leakage in value. Managers would therefore need to examine each phase of the project in terms of the benefits of the staging scheme versus the opportunity cost, which is also a further direction of research.
References
[1] Salerno, L. M. What happened to the computer revolution? Harvard Business Review, 1985, 63(6): 129-138.
[2] Kumar, R. L. Managing risk in IT project: an options perspective. Information and Management, 2002, 40: 63-74.
[3] Benaroch, M. Managing Information Technology Investment Risk: A Real Options Perspective. Journal of Management Information Systems, 2002, 19(2): 43-84.
[4] Hsu, Y. Staging of Venture Capital Investment: A Real Options Analysis. Financial Management Association European Meetings, 2002: London.
[5] Dubil, R. The Optimality of Multistage Venture Capital Financing: An Option-Theoretic Approach. Financial Management Association European Meetings, 2005: Siena, Italy.
[6] Boehm, B. W. A Spiral Model of Software Development and Enhancement. Computer, 1988, 21(5): 61-72.
[7] Sullivan, K. J., Chalasani, P., Jha, S., and Sazawal, V. Software Design as an Investment Activity: A Real Options Perspective. In Trigeorgis, L. (Ed.), Real Options and Business Strategy. 1999: London, 215-262.
[8] Panayi, S. and Trigeorgis, L. Multi-Stage Real Options: The Cases of Information Technology Infrastructure and International Bank Expansion. Quarterly Review of Economics and Finance, 1998, 38(Special Issue): 675-692.
[9] Erdogmus, H. Valuation of learning options in software development under private and market risk. The Engineering Economist, 2002, 47(3): 308-353.
[10] Black, F. and Scholes, M. The Pricing of Options and Corporate Liabilities. Journal of Political Economy, 1973, 81: 637-659.
[11] Geske, R. The Valuation of Compound Options. Journal of Financial Economics, 1979, 7: 63-81.
[12] Tallon, P., Kauffman, R., Whinston, A., and Zhu, K. Using Real Options Analysis for Evaluating Uncertain Investments in Information Technology: Insights From The ICIS 2001 Debate. Communications of the Association for Information Systems, 2002, 9: 136-167.
[13] Miller, L., Choi, S. H., and Park, C. S. Using an options approach to evaluate Korean information technology infrastructure. The Engineering Economist, 2004, 49(3): 199-219.
[14] Benaroch, M. and Kauffman, R. J. A Case for Using Real Options Pricing Analysis to Evaluate Information Technology Project Investments. Information Systems Research, 1999, 10(1): 70-86.



Building on Management Knowledge Platform of Outsourcing


Wu F., Li P.P., Wang Q.
School of Management, Xi'an Jiaotong University, Xi'an, 710049, P.R.China

Abstract A management knowledge platform for outsourcing decisions is built. According to the conditions of knowledge interaction during outsourcing, three levels of information platform (EDI, client/server networking, and intelligent agents) are used to construct a decision-making model of the outsourcing knowledge management and information management platform. The construction of the outsourcing management knowledge platform covers the basic infrastructure framework of software and hardware, remedying the previous lack of an outsourcing decision platform. Key words Outsourcing, Platform design, Decision-making, Knowledge management

1. Introduction
Outsourcing management involves not only the multitudinous factors of outsourcing decisions, such as the core degree of product manufacturing specifications (MS), the losing degree, production cost, suppliers' manufacturing capacity, operational specifications, and the corresponding external environment, but also the interrelations between supply and production operation, supply and sale, and supply and finance. Complex products such as automobiles and motors, whose quantity of parts and components is between 200 and 300, often have from 300 to 500 suppliers [1][2]. The subjects of their outsourcing decisions relate to various departments such as corporate planning, R&D, finance, information management, and purchasing. Therefore, it is necessary to build an outsourcing management knowledge platform to collect extensive information to support outsourcing decisions. This need appears even more obvious in virtual manufacturing organizations and decentralized manufacturing modes against the background of rapid response and agile manufacturing [3][4]. The outsourcing management platform, aiming at the goal of enterprises' long-term performance, is not only an information system platform but also a knowledge management platform. Therefore, besides selecting an appropriate infrastructure and the relevant software and hardware solutions, an effective mechanism of knowledge management and control is requisite [5].

2. The demand of outsourcing decision process for knowledge management platform


Outsourcing decision-making relates to personnel from multitudinous departments as well as to parts and components information and suppliers' information and knowledge. For complex products, it is difficult to finish this work in a face-to-face mode. An outsourcing management platform therefore enables the work to be completed smoothly and improves the efficiency and effect of outsourcing decisions. The basic requirements of the outsourcing management knowledge platform are shown in Fig. 1. Through the man-machine interface, by means of the hardware infrastructure and corresponding software, clients can judge different products and parts, conduct outsourcing decision-making, determine the supplier type, and choose suppliers. Moreover, based on the outsourced manufacturing, the appropriate outsourcing decision information platform is determined by the type of communication between manufacturers and suppliers. The alternative platforms may be simple data transmission, e.g., EDI, the enterprise Internet/Intranet/Extranet, or intelligent agents. From the viewpoint of knowledge management, every type of outsourcing requires a corresponding information platform. The concrete choices of platform are analyzed as follows.

This research has been supported by National Natural Science Funds of China (70572038), and the foundation of the Key Lab of Information Management & Information Economics of the Education Ministry, PRC (0607-39).


Fig.1 Outsourcing management knowledge platform. (Diagram elements: user; manufacturing specification type of product; supplier type; supplier selection; communication type between manufacturer and supplier; dedicated EDI mode; client/server mode over Internet/Intranet/Extranet; intelligent agent mode; hardware infrastructure.)

3. The type of outsourcing communication


In the process by which enterprises use exterior resources to complete manufacturing, the degree and media of cooperation between manufacturers and suppliers differ from one process to another. Some cases are just simple parts outsourcing, where the interactive information relates to the type of parts, the quantity, and the date of delivery, with simple cooperative manufacturing specifications exchanged at the first time and no technological interaction. Conversely, other cases need extensive and frequent information interaction after the relationship between manufacturer and supplier is established; the former cannot compare with these in depth, breadth, or frequency. The communication type between manufacturers and suppliers during the outsourcing process is determined by this interactive mode. Fig. 2 shows the factors influencing knowledge interaction between manufacturers and suppliers.
Fig.2 Knowledge interaction of outsourcing. (Dimensions: layers of supplier, divided into single layer and multilayer, the latter covering cross-layer cooperation and multi-layer integration fusion; and scope of knowledge interaction, covering order data, manufacturing specification, and expertise.)

The knowledge interaction can be analyzed through the depth and width of manufacturers' and suppliers' knowledge and information interaction. The depth refers to the hierarchy of manufacturer-supplier interaction, which may be mono-layer or multi-layer. Mono-layer means that manufacturers and suppliers interact directly. Multi-layer means that manufacturers interact not only with subsystem or module suppliers but also with the suppliers of their suppliers; moreover, multi-layer cooperation might be cross-layer in series, or an integrating, fusing mode in a virtual network. The width refers to the scope of knowledge and information interaction during the cooperative process of manufacturers and suppliers; this interaction includes data interaction, manufacturing specification interaction, and expertise interaction. According to the target of knowledge interaction, outsourcing can be divided into three types: arm's length purchasing, integrated supply chain, and virtual integration (see Fig. 3).

Fig. 3 Three types of outsourcing (arm's length, integrated supply chain, and virtual integration, ordered from low to high degree of knowledge interaction)

Arm's length purchasing: the parts of the product are manufactured separately in a dispersed way, without much relationship among them; they just need simple combination according to some kind of specification. This kind of outsourcing covers modular, highly standardized products, for example personal computer hardware manufacturing. Its character is low knowledge interaction between manufacturers and suppliers together with high product modularity. Usually, mass-customized products can be finished by this approach.

Integrated supply chain: during the outsourcing process, the final integration of the various parts, which have complex boundaries among them, needs close cooperation. Because the decentralized manufacturing process takes the subsystem or the part as the unit of outsourcing, a certain degree of knowledge and information interaction is required, and long-term, fixed cooperation has to be kept between manufacturers and suppliers. A typical case is Just-In-Time manufacturing in the automobile industry.

Virtual integration: during the outsourcing process, the various outsourced businesses, with highly complex boundaries and a low degree of modularity, need very frequent and thorough information sharing and interaction. This kind of business often involves job-shop production and operation, small batches, and customized products or services. Such cooperation, usually provisional in character, requires intelligent agents to accomplish the manufacturing process, as in agile manufacturing and intelligent manufacturing modes. Besides, the recently popularized IS/IT (information system/information technology) outsourcing belongs to this kind. Considering that the IT industry renews quickly and is highly professionalized, it is difficult for an enterprise to informatize completely on its own strength; hence ASP corporations have emerged as the times require, with customized services provided by specialized IT companies. However, in view of the different business processes of different enterprises, it is impossible to adopt arm's length purchasing with ready-made solution schemes. During the outsourcing cooperation process, the supplier's work includes not only describing the structure of the manufacturer's business processes but also communicating directly with the relevant department personnel through the information network. The ASP provider and the manufacturing enterprise are thus closely unified.

4. The type of information technology platform


Enabling technology for knowledge management, i.e., the knowledge management technology infrastructure, is the precondition for implementing the various levels of outsourcing. According to the technical level of knowledge exchange and sharing among cooperating outsourcing enterprises, the enterprises' knowledge management infrastructure is divided into three layers.

Firstly, the primary technical facility level: dedicated knowledge interaction facilities such as EDI. EDI is used to transfer mass files in international trade. A typical data exchange is an order transferred to a supplier, containing document number, date of availability, content description, dealer, product quantity, unit price, and so on. Thus it can be seen that the information transmitted by EDI is so limited that it cannot meet the requirements for data processing, information statistics, management methods, and analysis models in more complex cooperation such as concurrent engineering and cross-country manufacturing. By means of this kind of technology, cooperating enterprises conduct the most fundamental and necessary production, technology, or engineering communication to ensure that the manufacturing process executes continuously.

Secondly, the Internet/Intranet/Extranet network level. As the outsourcing scope and cooperation depth increase, cooperation over a wider domain is demanded so that valuable, satisfactory products can be delivered to the ultimate users. For instance, the cooperative scope is no longer restricted to the manufacturer and the direct part supplier, but extends from the first-level system-capability partner to the second level and even to more preliminary suppliers, whose quantity is far more than at the first level. In functional scope, this kind of cooperation covers the entire supply chain from supply to manufacture, retail allocation, sale, and post-sale service. The knowledge management infrastructure that meets the management demands of this level of manufacturing process includes the Internet, Intranet, and Extranet.

Thirdly, the intelligent multi-agent level. With the enhancement of the knowledge interaction level, the huge system composed of the above supply-chain systems cannot work highly efficiently without a set of highly intelligent agent systems. The intelligent multi-agent level is therefore an upgrade of the second level aimed at knowledge management, and the manufacturing mode is transformed into an integrated holonic intelligent manufacturing system. The coordination of the multi-agent system is the essential core of an intelligent knowledge management system, and it faces three unavoidable difficulties. The first is a coordination mechanism based on agent cooperative manufacturing: designing a mechanism that can coordinate knowledge and information sharing with the respective benefits of the various partners. The second is designing an auction coordination mechanism that takes agents as media, which must solve the following issues: How can subjects with divergent interests be induced to coordinate in an auction mediated by agents? What are the basic principles for encouraging cooperative agent behavior? How can agents be trained to learn cooperation rather than having cooperation limited by procedure? What credit principles should be built into the agent system? Finally, in integrating multi-agent enterprise modeling, performance and human factors should be taken into consideration, e.g., how to coordinate the relationship between the system and humans. Fig. 4 shows the knowledge management model of a multi-agent intelligent enterprise [6] (fine lines are information and knowledge flows, heavy lines are material flows).
There are four kinds of agent systems in the model: the execution organization, which seeks opportunities and the entire enterprise's knowledge management strategy; the production organization, which seeks and controls production knowledge; the supply-chain organization, which is in charge of distribution knowledge; and the market module, which is responsible for market knowledge. Multi-agent-based knowledge enterprises require a knowledge management platform to conduct these activities, and different enterprises need the right platform to support their business; therefore, platform selection is required.


Fig. 4 Knowledge management model of multi-agent intelligent enterprise. (Within the knowledge enterprise: an execution module with price, investment, and other agents; a production module (the real factory) with cost, operation, and other agents; an e-market module with sale and buyer agents trading with other enterprises; and a supply chain module with distribution, warehouse, retailer, and other agents.)

5. The selection of outsourcing management knowledge platform


According to the two indices of outsourcing level and information/knowledge management infrastructure level, a two-dimensional system can be composed. When the outsourcing level is divided into three levels (arm's length purchase, integrated supply chain, and virtual integration) and the knowledge management infrastructure is divided into three levels (dedicated knowledge management system, Internet/Intranet/Extranet network management system, and intelligent software agent system), a decision matrix is formed (see Fig. 5). Two issues should be taken into consideration in building the model: one is the requirement of knowledge management, the other is the possibility that technology meets this requirement. The definition of knowledge management is to provide the right person with the right knowledge. In outsourcing cooperation, two strategies are taken into consideration, as follows.
Fig. 5 Decision-making model of outsourcing knowledge management platform. (A 3×3 matrix of cells (1)-(9), crossing the outsourcing levels (virtual integration, integrated supply chain, arm's length) with the platform levels (dedicated knowledge interaction technology such as EDI; Internet/Intranet/Extranet (ERP); intelligent agent). Named cells include the intelligent virtual integration organization (virtual integration with intelligent agents), the integration-module supply chain (integrated supply chain with Internet/Intranet/Extranet), and basic dedicated communication component outsourcing (arm's length with EDI).)

First, during virtual integration outsourcing, for the sake of completing the cooperative mission, it is necessary to share the different organizations' knowledge. Therefore, the goals of knowledge management are breaking barriers, promoting knowledge fusion, and accelerating knowledge transfer. The object of the knowledge management technological infrastructure is to supply the enabling technology that meets these requirements; the precondition of this process is correctly appraising the knowledge to be shared. Second, outsourcing involves the participation of personnel from different organizations, and the number of participating organizations may be immense. Therefore, manufacturers must have another mechanism to protect their knowledge: with a highly intelligent knowledge management system, sharing must be limited reasonably lest corporate competence be disclosed. There are two main strategies used to solve this problem. One is the technological approach: establishing firewalls and granting authority to the different participants according to practical work demands, as well as managing it dynamically and promptly. The other is the governance structure approach: according to different knowledge management demands, different structural relationships are adopted to control knowledge. The former is short-term, realistic, and concrete knowledge management; the latter is long-term, virtual, and abstract knowledge management.
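To make the matrix of Fig. 5 concrete, it can be read as a simple lookup from (outsourcing level, infrastructure level) to a platform configuration. The sketch below is a toy illustration; only the three cells that are explicitly labeled in the figure are filled in, and all names are assumptions for demonstration.

```python
# Toy lookup over the Fig. 5 decision matrix; only the three labeled cells
# are included, the remaining cells are left open as assumptions.
PLATFORM_MATRIX = {
    ("virtual integration", "intelligent agent"): "intelligent virtual integration organization",
    ("integrated supply chain", "internet/intranet/extranet"): "integration-module supply chain",
    ("arm's length", "edi"): "basic dedicated communication component outsourcing",
}

def recommend(outsourcing_level, infra_level):
    """Return the named configuration for a cell, if Fig. 5 labels one."""
    return PLATFORM_MATRIX.get((outsourcing_level.lower(), infra_level.lower()),
                               "no named configuration in Fig. 5")

print(recommend("arm's length", "EDI"))
```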

6. Design of outsourcing management knowledge platform structure


The outsourcing management knowledge platform has two basic functions: outsourcing decision and outsourcing operation. The former is the main content of this article; the latter is correlated directly with the enterprise's core operation system (for example, an ERP system). Through an appropriate type of platform, such as EDI, the Internet, or an intelligent system, users interact with the knowledge management system. The structure of the outsourcing management knowledge platform is shown in Fig. 6.
Fig. 6 The structure of the outsourcing management knowledge platform. (Users reach, via a platform type of EDI, client-server, or multi-agent: make-or-outsource decision-making based on the losing degree and core degree of the MS; supplier-type identification based on canonical operation versus original innovation; and supplier selection based on cost, quality, and delivery. These draw on a data warehouse with databases for the product MS, component cost, component losing degree, component core degree, suppliers' canonical operation, suppliers' original innovation, and suppliers' supply capacity (cost/quality/delivery), connected to a global database and to the enterprise core operation system (ERP module).)

The main functions completed by the platform are in-house-or-outsourcing decision-making and supplier selection. The main activities include identifying the losing degree and core degree of parts and putting forward the right manufacturing strategies for the parts according to the relative models; identifying the suitable type of supplier in light of its canonical operation and original innovation; and identifying suppliers in view of their cost, quality, and date of delivery. Besides, the platform can also conduct supply management, which is not covered in this article. The above decision knowledge and information are acquired from the data warehouse and the related databases, which include the product manufacturing specification database, the manufacturing specification losing degree database, the suppliers' original innovation database, the suppliers' canonical operation database, and the suppliers' supply ability database. The data warehouse is connected to the enterprise core operation system (ERP module) and, meanwhile, to the global database.

7. Conclusion
On the basis of the suppliers' and manufacturers' knowledge interaction conditions during the outsourcing process, outsourcing is divided into three levels (arm's length purchase, integrated supply chain, and virtual integration), and its information technology is likewise divided into three levels. In addition, a two-dimensional structure matrix composed of these two indices (outsourcing level and information platform) has been built, and furthermore a decision-making model of the outsourcing knowledge management and information management platform has been established.
References
[1] Tayles M. and Colin D. Moving from make/buy to strategic sourcing: the outsourcing decision process. Long Range Planning, 2001, 34(5): 605-622
[2] Barthelemy J. IT outsourcing: evidence from France and Germany. European Management Journal, 2001, 19(2): 195-202
[3] Wu F., Li H.Z., Chu L.K. and Sculli D. An outsourcing decision model for sustaining long-term performance. International Journal of Production Research, 2005, 43(12): 2513-2535
[4] Wu F., Li H.Z., Chu L.K. and Sculli D. Supplier selection of outsourcing: A manufacturing knowledge protection perspective. In Mak, K.L. (ed.), Logistics Strategies and Technologies for Global Business, Proceedings of ICLSCM 2006, Hong Kong, 2006: 45
[5] Jesper M. Framework for outsourcing manufacturing: strategic and operational implications. Computers in Industry, 2003, 49(1): 59-75
[6] Wu D.J. Software agents for knowledge management. Expert Systems with Applications, 2001, 20(1): 51-64



EAI (Enterprise Application Integration) Conceptual Architecture Composition in Telecom Industry


Yang Hongbin
School of Economics and Management, Beijing University of Posts and Telecommunications, P.R.China, 100876

Abstract In this article, we provide an approach to the creation of a Conceptual level System Component Model specific to Enterprise Application Integration (EAI) in telecom. As a byproduct, EAI specific Architectural Decisions, and an EAI specific Architecture Overview Diagram will also be generated. Key words EAI, Conceptual Architecture, Telecom supporting system, Web Services, Integration

1. Foreword
The telecommunications industry has seen rapid and continuous change in the last few years. Standards have changed, new technologies have emerged, customer preferences have evolved, and markets have matured. Telecom carriers have over a number of years created a complex web of interconnected applications, each with its own means of communicating with other systems. As new systems are introduced, they too must be interfaced with each of these legacy applications, making every step more difficult to perform and more costly. This is a particular problem for today's telecommunications industry, and a well-implemented EAI solution can address it. EAI seamlessly joins business-to-business applications so that systems can talk to one another without frontiers and without separate P2P interfaces. For telecommunications firms, that can mean creating end-to-end revenue stream stability that not only addresses revenue/cost leakage but also helps to recover money more quickly.

2. Research technique
2.1 Overview
Our reference architecture covers two major elements: Business Rules and EAI Workflow. More fundamentally, EAI solutions deliver savings in development costs and can reduce operational costs in comparison to P2P solutions. This paper focuses on the preparation of the conceptual EAI Component Model to facilitate product selection. Figure 1 below depicts the placement of this technique within the scope of total system development, and with respect to the related EAI techniques.
Fig.1 - EAI conceptual architecture composition is one of the EAI specific techniques. (The figure maps the total-system activities of the CAD engagement model (Solution Outline: outline solution requirements, application model, and architecture model; Macro Design: refine requirements, application model, and architecture model; Micro Design: detail requirements and define the physical application design) to the EAI techniques: EAI attributes blueprint composition, which identifies and organizes the attributes that drive EAI system definition; EAI conceptual architecture composition, which defines the EAI conceptual component model; EAI product selection, which follows specification of the EAI component model; and EAI system design (brokering, adapter, process management, and common services design), which completes the component and operational model specifications following product selection. EAI is dependent on and constrained by the total system design, so these core techniques are not applied until sufficient definition of the total system design is complete.)


Note that prior to initiating product selection, the component model will need to be elaborated to a specification level. Both product selection and component model specification are already well documented in the Method, and those techniques are not replicated here. The only EAI-unique aspect of specifying the component model is the use of the EAI Attributes Blueprint as the consolidated source of parameters for the model. The EAI Conceptual Architecture is expressed through a number of work products, tabulated below. With the exception of those indicated, composition of an EAI architecture is dictated by the existing techniques for these work products.
Tab.1 - Conceptual architecture work products. The work products involved in the conceptual architecture are: ARC 100 Architectural Decisions; ARC 101 Architecture Overview Diagram; ARC 118 Change Cases; ARC 108 Component Model; ARC 102 Reference Architecture Fit/Gap Analysis; APP 011 System Context; ARC 107 Architectural Template; ARC 301 Current IT Environment; ARC 111 Deployment Units; ARC 119 Nonfunctional Requirements; ARC 310 Standards; ARC 117 Viability Assessment; APP 303 Detailed Gap Analysis; ORG 006 Future Organization Design. (In the original table, an 'X' marks the work products that are the focus of this technique.)

EAI conceptual component model composition is achieved in five basic steps, as depicted in Fig.2. These steps are:
- Make initial EAI architectural decisions
- Identify required EAI services
- Partition the EAI subsystem into smaller subsystems, where required
- Identify the EAI technology categories present, to align components with available product configurations
- Apply integration architectural patterns to coalesce and confirm the required components
Throughout these steps, architectural decisions are being made and confirmed.
Fig.2 - EAI conceptual architecture composition steps. (Make initial architectural decisions; identify EAI services; partition the system; identify EAI categories; apply architectural patterns; make remaining architectural decisions; then create or update the work products. The standard steps for creating or updating the work products are not covered in this technique paper.)


2.2 Make Initial Architectural Decisions
Tab.2 indicates the set of EAI Architectural Decisions that should be made up front for architecture composition. These decisions will ultimately be documented in the Architectural Decisions work product. To form these initial decisions, perform a systematic inspection of the attributes of each EAI-facilitated interaction tabulated in the EAI Attributes Blueprint. If additional information is needed to make a decision, consult the client's architects and solution experts. It is best if the client's appointed integration architect is directly involved throughout this process; when that is not possible, review the recommended initial decisions with the client for approval. The execution notes for framing and making these decisions are contained in Section 8, Confirm Architectural Decisions. Tab.2 contains links to these execution notes, as well as background information, for each decision to be considered.
Tab.2 - Initial EAI architectural decisions to make. The decision topics (each with a link to execution notes and to background information) are: utilization of the EAI infrastructure; EAI functionality within business applications; business application insulation; process-intensive versus data-oriented situations; transactional interactions; failure and exception handling; end point insulation; event-triggered versus batch interaction styles; object or component platform standards; routing; data transformation; message-oriented versus remote procedure call/object-oriented; business object documents; adapter functionality; security; audit capabilities; recovery; general EAI model (centralized versus end point services); functionality/performance trade-offs.
2.3 EAI Architecture Overview Diagram Work Product
The EAI-focused architecture overview diagram for the telecom information system depicts the partitioning of B2B interaction functionality from internal integration functionality, and a similar partitioning of operational data store / data mart integration functionality.


Fig. 3 - Architecture overview diagram: telecom EAI conceptual. (Business applications (order entry on WebSphere Commerce Suite, CRM on Siebel eBusiness, ERP on SAP R/3 FI/CO, HR and S/D, a billing system, and a management dashboard on a web application server) connect through internal adapters to an EAI hub containing queues, a process manager, an integration broker, and a cross-reference DB. A separate B2B integration broker, behind a firewall/proxy/VPN server, serves external customers (small and large), external networks (Internet, value-added network), internal users, call centers, suppliers (Personal Robotics, Majestic Motion Controllers), and external services (credit agency, public key authority). Adapters and ETT feed the operational data store and data mart.)

2.4 EAI Component Model Example Work Product
To support the application integration needs, a logical EAI broker-based system will be introduced, with most of the functionality executed and administered via the hub. This model provides a good match for the firm's limited integration support team resources, and the concentration will minimize the complexity and maintenance associated with individual application adapters. A separate, dedicated logical broker will be used to manage the B2B interactions with suppliers. Similarly, separate ETT technologies will be applied to the operational data store / data mart integration requirement.

Fig.4 - EAI component model (EAI category level)
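As a toy illustration of the hub-centred broker idea described above (each application registers a single adapter with the hub rather than maintaining point-to-point links), consider the following sketch; all class names, application names, and the message format are illustrative assumptions, not part of any EAI product.

```python
# Toy hub-and-spoke sketch: applications register one adapter each with the
# hub, and routing (plus trivial transformation) happens centrally instead of
# through pairwise P2P interfaces. All names are illustrative.
class EAIHub:
    def __init__(self):
        self.adapters = {}                       # application name -> handler

    def register(self, app, handler):
        self.adapters[app] = handler             # one adapter per application

    def publish(self, source, target, message):
        # The hub routes and transforms the message for the target application.
        self.adapters[target](f"[from {source}] {message}")

hub = EAIHub()
hub.register("billing", lambda m: print("billing received:", m))
hub.register("crm", lambda m: print("crm received:", m))
hub.publish("order-entry", "billing", "new order #42")
```

Adding an n-th application to this structure costs one adapter, whereas a P2P design would require up to n-1 new interfaces, which is the development- and operations-cost saving claimed for the broker model.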


3. Research conclusion
The potential benefits of EAI are evident to the telecom business, but many firms struggle to achieve them when EAI solutions are implemented without the guidance of a coherent and well-defined conceptual architecture, or when they are hindered by other obstacles such as:
1. the issues involved in the complex world of governance: EAI touches all systems and all parts of the organization, making it an extremely sensitive operation and one for which wide acceptance is difficult to obtain; again, an internal sponsor alleviates these difficulties;
2. an implementation of the EAI solution that sticks to a standard template, which rarely fits the firm's existing systems and infrastructure or answers its needs.


Online Training Industry Supply Chain System Planning Based on CAS Theory
Zhao Jinshi1 Zhao Ying2 Zheng Xiaotao3
1 Aetna College of Economics and Management, Shanghai Jiaotong University, Shanghai, 200030 2 School of Tourism, Changchun University, Changchun, P.R.China, 130022 3 Finance College, Shanghai Normal University, 200030

Abstract This paper applies Complex Adaptive System theory to online training industry supply chain system planning and provides a multi-agent simulation method for it. The paper shows that the online training industry supply chain system is a kind of complex adaptive system, provides a model for planning the supply chain system with CAS theory, and finally gives a simulation process using the Swarm software. Key words System planning, Online training, Supply chain, CAS, Simulation

1. Introduction
The online training industry is undergoing rapid development in China; it includes the development of training content, the development of e-learning platforms, and the transmission of training through the Internet for learning. Research on the supply chain system of online training helps us understand the trades of the online training system more clearly, and the players in the industry can formulate business strategy better. The supply chain includes suppliers, manufacturers, distributors, retailers, users, and other entities in the supply network, as well as enterprise activities across various functional departments. This paper analyzes the online training industry supply chain system in terms of the characteristics of complex adaptive systems. As a complex adaptive system, the online training industry chain cannot be planned completely by an independent enterprise. In the supply chain planning process, we must consider the optimization of the supply chain system as a whole while also considering the optimization of each individual in the supply chain. Since the system includes a large number of individuals, considering each of them is difficult; applying CAS theory to supply chain planning can solve this problem to some extent.

2. The development of online training


As a type of distance education, the online training market has now basically formed a relatively complete industrial chain, which includes developers, service providers, distributors, retailers, and customers. While the online training industry chain is not yet fully mature and sound, the supply and demand relationships along the chain have basically formed. Content providers supply content such as teaching videos, courseware, and papers. Platform developers provide learning management for content providers, including user management, learning management, discussion groups, and online testing functions. Most online training institutions pay more attention to the development of content than to the teaching platform, mainly because most current online training has developed from traditional training institutions, which are based on content development. With the increasing demand for online training, the teaching process will demand more and more management functions, and the teaching platform will gradually be given due attention. Integration operators integrate the training content into the online learning platform and make it public through the Internet, enabling users to learn online. Integrating many content providers and platform providers into one or more learning platforms is the critical link in the supply chain for an integration operator.


Figure 1: Online training supply chain system. (Content providers and platform providers feed the integration operator, which reaches learners through direct channels, agents, distributors, and retailers.)

The sale channels of online training can be divided into direct marketing channels, agents, and distributors. Online teaching is essentially an online service rather than a physical article. Nowadays bank transfer, pay-by-phone, online payment, electronic commerce, and other means of payment and logistics are constantly improving, all of which have made direct marketing an important channel for online training. Direct marketing can reduce the cost of sales, but it cannot, after all, form a national network of access channels by itself; therefore agents and distributors are also important forms of marketing channel. Online training products can be made in the low-cost form of study cards of little size and minimal weight. Their logistics costs are very low, so there is no need to establish a multi-tier agent system: with flat sales implemented, only a small number of agents need be established in the major regions. The profit of agents in the industrial chain depends on the operation of the integrated marketing strategy. Card retailers are also a part of the sales channels, acting as sales terminals. The characteristics of online education make the channel flat, and developers promoting their products are also likely to give retailers larger profit margins; the retail proportion of total online training sales depends on the developers' price policy. Retailers of online training cards are mainly IT education centers and product sales outlets, such as stores specializing in software, computer markets, and other training institutions. Learners are the ultimate buyers of the online teaching provided by the developers. Having bought learning cards through various channels, they visit the network of content and service providers through the Internet, operate the platform, and acquire knowledge and skills, thereby realizing the value of online training. Online learners are the key element in achieving revenue in the whole industry value chain [1].

3. Complex Adaptive System (CAS) theory


Complex Adaptive System (CAS) theory was proposed by Holland in his article "Complex Adaptive Systems" in 1992. All complex adaptive systems involve a large number of parts undergoing a kaleidoscopic array of simultaneous interactions, and they all seem to share three characteristics: evolution, aggregate behavior, and anticipation [2]. A complex adaptive system is a system that includes many adaptive agents. These agents have their own targets, inner structures, and will to survive. The complexity of a CAS comes from the agents' behavior in adapting to the environment and from their interaction. In the process of adapting to the environment and interacting, agents change their environment and, meanwhile, change their own condition; thereby the system continues its course of development. Complex adaptive systems have the following common characteristics.

First, a complex adaptive system is a network-structured system involving a large number of agents that affect each other; for example, the agents can be people in a society, or companies in an economic system. Each agent has its own behavioral role and bias and makes its decisions according to other agents' behavior. This adaptive mechanism shows the system's complexity. Second, CAS theory holds that the interaction of agents is the basis of the system, in contrast with the traditional viewpoint. Because of this interaction, a complex system usually shows that the whole is greater than the accumulation of its parts; the interaction is the power of system development. Third, CAS can connect the macro and the micro. Since the interaction of agents is the basis of the system, the agents' interactions can be considered as a whole. This goes further than the traditional viewpoint that statistics is the only way from the micro to the macro; CAS theory holds that statistical methods cannot describe the agents, because of their adaptive and subjective characteristics. Fourth, CAS theory introduces stochastic factors, which improves its descriptive capability. The basic idea can be summarized as follows: stochastic factors affect not only the system's condition but also its organizational structure and behavior patterns. The agents in a complex adaptive system can learn from experience and memorize it, so CAS theory goes beyond general stochastic methods [2]. Fifth, a CAS shows emergence and self-organization features. Something that is not in the system's schedule may sometimes happen; this is named emergence. And no single program of an agent completely determines the system's behavior, in spite of the fact that each one of the agents holds common heterogeneous schemas [2]; this is the self-organization feature of CAS.

4. Online training supply chain system planning based on CAS theory


4.1 System planning
From the systems engineering viewpoint, the basic contents of system engineering involve system analysis, system planning, and system implementation. System analysis is about system research and belongs to thinking; system implementation belongs to practice; system planning is the bridge between thinking and practice [3]. According to the discussion above, the online training supply chain system is a complex adaptive system. Therefore we should consider the characteristics of CAS when planning a certain enterprise's supply chain. The companies in a supply chain have different benefits, and an enterprise should take this into account comprehensively when planning its supply chain: maximizing their benefits is the reason they come together into a supply chain. The online training supply chain planning is thus the planning of an all-win mechanism. Because the supply chain is an integrated body, we must also consider the emergence of the supply chain system [4].
4.2 Main content of online training supply chain system planning
4.2.1 Integration operators should consider the following factors when selecting content providers: the price of online training products, and updating and upgrading quality and capacity. In this selection process there is competition among content providers, but there is also competition among integration operators; therefore, this is a complex competitive selection process.
4.2.2 Integration operators should consider the following factors when selecting distributors: marketing capability, credibility, and information feedback capability. Similarly, the twofold competition between integration operators and between distributors makes this a competitive selection process as well.
4.2.3 Online training supply chain collective evaluation index. Minimizing the supply chain system cost is one evaluation method. To harmonize the supply chain under uncertainty, we can adjust the number of supply chain members, the output, and the sale quantity. The model is as follows:

min Cost = Σ_{j=1}^{M} F(Y_j) + C_1·D + (C_2 − C_1)·[ D − Σ_{i=1}^{N} min( Σ_{j∈Market_i} Y_j , D_i ) ]    (1)

where M is the number of members in the supply chain, N the number of markets, Cost the cost of production and sale, Y_j the output of member j, F the cost function, C_1 the cost of transportation and sale within one market, C_2 the cost of transportation and sale across different markets, D_i the demand of market i, and D the total demand [5].

For international enterprises, the total profit of the supply chain enterprises can be adopted as the evaluation index:

v_t(e_{k,t}, o'_{t−1}) = max_{o_t ∈ Ω_t} { p_t(e_{k,t}, o'_{t−1}, o_t) + δ·Σ_{k'} λ_{k,k'}·v_{t−1}(e_{k',t−1}, o_t) }    (2)

where k is the exchange-rate vector, e_{k,t} the state when the exchange rate equals k at term t, o and Ω the strategy and the collection of potential strategies, p the profit-after-tax function, λ_{k,k'} the probability that exchange-rate vector k diverts to k', v the maximum profit after tax of the members under the present exchange rate, and δ the discount rate [5].

4.2.4 Adjustment parameters of the supply chain system: the numbers of suppliers and distributors, the supply price, and the distribution price can serve as adjustment parameters. These parameters can be adjusted in the simulation process, and the adjustments impact the total profit.
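To illustrate how Formula (1) and the adjustment parameters of 4.2.4 can be evaluated inside a simulation loop, here is a minimal Python sketch; the member-to-market assignment, the cost function, and all numbers are illustrative assumptions, not data from the paper.

```python
# Minimal sketch of the supply-chain cost index in Formula (1).
# All inputs below are illustrative assumptions.
def supply_chain_cost(F, Y, markets, D_i, C1, C2):
    """F: cost function of a member's output; Y: output per member;
    markets: market index of each member; D_i: demand per market;
    C1/C2: within-market and cross-market transportation-and-sale costs."""
    D = sum(D_i)                                  # total demand
    production = sum(F(y) for y in Y)             # sum_j F(Y_j)
    # Quantity that can be served inside each market by its own members
    in_market = sum(
        min(sum(y for y, m in zip(Y, markets) if m == i), d)
        for i, d in enumerate(D_i)
    )
    return production + C1 * D + (C2 - C1) * (D - in_market)

# Example: 3 members serving 2 markets, quadratic production cost.
print(supply_chain_cost(F=lambda y: 0.01 * y**2, Y=[120, 80, 100],
                        markets=[0, 0, 1], D_i=[150, 130], C1=2.0, C2=5.0))
```

Sweeping the adjustment parameters of 4.2.4 (member counts, outputs, prices) through such a function is one way a simulation can search for lower-cost supply chain configurations.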

5. Online training supply chain system simulation planning


5.1 Online training supply chain system simulation model
An online training supply chain system simulation model based on CAS theory can use multi-agent technology for simulation and checking. Multi-agent simulation is a bottom-up simulation method based on CAS theory: it designs the agents' biases, behaviors, and interaction mechanisms, and then runs the simulation system. From the runs we can find the complex features of the supply chain system and understand its rules, which provides guidance for practical supply chain system planning. Along with the development of the economy, supply chain systems will become more complex, and multi-agent simulation technology will find more applications in supply chain system planning. A supply chain is a network connecting enterprises whose links are supply-demand relationships. The mechanism of multi-agent modeling based on CAS theory can be described as follows:
5.1.1 Supply chain structure model. Figure 2 describes the basic model of the supply chain, including supplier agents, operator agents, and learner agents, with a great number of relationships among them covering material, fund, and information flows.
5.1.2 Define agents. The three kinds of agents occupy different positions in the supply chain and have different roles, biases, and behaviors; defining the agents expresses these differences.
5.1.3 Define interaction relationships. The relationships among these kinds of agents include negotiation, coming to an agreement, executing, and reacting processes. These relationships are the trigger mechanism of agent behavior.
Figure 2: Supply chain structure model. (Supplier agents, operator agents, and learner agents connected by supply-demand relationships.)
5.2 Supply chain system planning simulation
As a kind of CAS, a supply chain can be simulated with the Swarm software. Swarm, developed by the Santa Fe Institute, is a multi-agent simulation system. It adopts an object-oriented design method and realizes program development with the features of inheritance and encapsulation. Swarm defines a series of classes to sustain multi-agent simulation modeling: the ModelSwarm and ObserverSwarm classes construct the main framework of a simulation program, and the SwarmObject class defines the agents. A Swarm simulation program is constructed in the following steps:
5.2.1 Define SupplychainModelSwarm. Using @interface SupplychainModelSwarm: Swarm defines a class as a subclass of Swarm.
5.2.2 Define agents. Using @interface Supplychain: SwarmObject defines a subclass of SwarmObject, which inherits the basic attributes of SwarmObject; then define its biases and behaviors.
5.2.3 Create agents. In the ModelSwarm, the buildObjects method creates the simulation agents.
5.2.4 Arrange the SupplychainModelSwarm schedule. This step includes confirming the model actions and the model schedule.
5.2.5 Create SupplychainObserverSwarm. It can be defined as a subclass of GUISwarm using @interface SupplychainObserverSwarm: GUISwarm.
5.2.6 Create the data diagram, which expresses the running status of the system.
5.2.7 Edit the main function, which controls the other functions.
Through the above supply chain planning process, we can incorporate the following contents into the simulation model: selection of suppliers, selection of distributors, the supply chain evaluation index, and the other parameters of the supply chain. The results of the model runs guide us to a better plan for the supply chain in question.
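Since Swarm itself is written in Objective-C, the following is only a structural analogue in Python of steps 5.2.1 to 5.2.4 (define the model, define agents, create them, schedule the runs, observe the result); all class names, parameters, and the agent behavior rule are illustrative assumptions.

```python
# Structural analogue of the Swarm construction steps in plain Python;
# Swarm itself is Objective-C, so this only mirrors the model/agent/schedule
# split with an illustrative pricing-adjustment rule.
import random

class SupplierAgent:                         # plays the role of a SwarmObject
    def __init__(self):
        self.price = random.uniform(8, 12)   # agent bias: initial supply price

    def step(self, demand):                  # agent behavior rule
        self.price *= 1.02 if demand > 100 else 0.98

class SupplyChainModel:                      # plays the role of the ModelSwarm
    def __init__(self, n_suppliers=5):
        # buildObjects: create the simulation agents
        self.suppliers = [SupplierAgent() for _ in range(n_suppliers)]

    def schedule(self, steps=50):            # plays the role of the Swarm schedule
        for _ in range(steps):
            demand = random.gauss(100, 20)   # stochastic factor (Section 3)
            for s in self.suppliers:
                s.step(demand)

model = SupplyChainModel()
model.schedule()
print([round(s.price, 2) for s in model.suppliers])  # observer output
```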

6 Discussion
The online training supply chain is a kind of complex adaptive system, so its planning cannot be carried out completely by any independent enterprise. The online training system is formed by the business strategies that the various enterprises adopt according to their own situations and by the decision-making environment. Thus, in the supply chain planning process, we must consider the optimization of the supply chain system as a whole while also considering the optimization of each individual within it. Because the system includes a large number of individuals, accounting for each of them is difficult; applying CAS theory to supply chain planning can solve this problem to some extent. Supply chain planning is the design of a mechanism in which all parties win: since the supply chain is an integrated body, we must consider the emergence of the supply chain system. Supply chain system planning offers a good approach to planning online training systems. This paper analyzed the online training industry supply chain as a multi-agent system, applied CAS theory to its planning, provided a supply chain planning model within the CAS framework, and described simulation models and simulation steps using the Swarm toolkit. Follow-up work will concentrate on defining the specific types of agents, the coordination mechanisms among agents and the related parameters, and on realizing the simulation of the supply chain planning system so that it can serve as a reference guide.

A Tool for Risk Mitigation in Public Sector IS/IT Projects: An Evidence-Based Information Systems Project Risk Checklist
Lihong Zhou, Ana Cristina Vasconcelos, Miguel Baptista Nunes
Department of Information Studies, University of Sheffield, Regent Court, Sheffield, S1 4DP, UK

Abstract The design, development and implementation of Information Systems (IS) in the Public Sector are perceived to have been plagued by failure and ongoing problems. This is reflected in the number of such case-studies currently used in teaching and training situations. This visibility is due to the duty of accountability that forces the sector to disclose any emerging failure. Consequently, the sector is more vulnerable to criticism and public inspection, but it is also an ideal research field for risk management practices. This paper presents a study of Information Systems project risk management aimed at identifying a risk ontology and checklist that will enable decision making and mitigation strategy planning in IS development. The study is based on a qualitative approach anchored in a critical literature review followed by a thorough case-study survey. The final ontology is divided into five main categories: Pre-Project, Customer, Project Management, Technological Issues, and Development Methodology. It is designed to fit into real-life systems development cycles and is aimed at supporting risk assessment and control. Key words Information Systems, Risk Management, Risk Assessment, Risk Identification, Risk Checklist

1. Introduction
Risk assessment is a vital process in any effective Information System (IS) development. In fact, risks are intrinsic to any project, and risk taking is a necessary component of any process of decision-making (Nunes and Annansingh, 2002)[1]. Poor risk management of IS projects often leads to failure, a situation not uncommon in both the public and corporate community. Failures have been linked to incorrect market positioning, inadequate business and risk strategies, and poorly informed decision-making based on insufficient information and without due authorisation from senior management (Nunes and Annansingh, 2002)[1]. The situation is often exacerbated by the absence of clearly defined risk limits, deliberately misleading reports, inadequate intra-organisational communication concerning risk vulnerability, superficial or unrealistic risk control, poor knowledge of the business environment and lack of timely decision-making. As a result, various interested parties such as shareholders and other corporate entities are deprived of valuable information which could lead to the formulation of more comprehensive and reliable risk systems, particularly as they relate to information systems. However, as proposed by Keil et al. (1998)[2], before we can develop meaningful risk management strategies, we must identify these risks. As Drucker (1975)[3] proposed: "While it is futile to try and eliminate risk, and questionable to try and minimise it, it is essential that the risks taken are the right risks." So if it is not practicable to eliminate risks altogether, it must certainly be possible to manage projects in a way that recognises the existence of the risks and prepares, in advance, methods of dealing with them if they occur (Cadle and Yeates, 2001)[4]. This entails two major activities: risk identification and risk management. Risk identification and assessment are therefore the fundamental basis for the entire risk management process (Nunes and Annansingh, 2002)[1]. This paper proposes a risk identification ontology in the form of a checklist that supports risk assessment, decision making concerning risk control and the planning of risk mitigation strategies. The ontology was constructed through an evidence-based approach closely linked to the reality of development and an analysis of failure emerging from real-life case-studies.

2. Research Questions and Design


The research presented in this paper was driven by the general aim of helping project managers and practitioners in their risk thinking, assessment and decision making. The literature in the field offers a rich variety of risk management frameworks and models. After an indicative literature review, it was noted that it is in the risk identification and assessment processes that contributions for practitioners are most in need, namely clear checklists that can be used at the planning phase and as the basis for risk assessment. Consequently, the following overarching research questions were formulated: What constitutes a good IS project risk identification checklist? How can a risk identification checklist be created? What should be the content of such a checklist?

In attempting to respond to the above research questions and aims, this research project employed an inductive qualitative research methodology through a combination of critical literature review and a case-study survey. Specifically, the research was performed as a desk study, exclusively using secondary sources. The strategy is to adopt an inductive argument in order to explore failure and problems as experienced by different organisations, through the analysis of published and publicly available case-studies; that is, the study surveys non-theoretical secondary sources based on applied research. The methodological framework in Fig.1, adapted from Bhandari et al. (2005)[5] and based on the framework proposed by Yin (1984)[6], was selected as the overarching research design. It encompasses four inductive steps: performing a critical literature review on IS risk management and risk assessment, in order to provide a theoretical background to the study and to establish an initial proposition of main categories of risk in IS development for further exploration and critical analysis; establishing an appropriate set of case-studies, selected on the basis of validity, descriptive value and reliability (in this case, 10 public sector case-studies following an Anglo-Saxon tradition); performing an analysis of the individual case-studies, using the key set of categories and theoretical knowledge as guides; and producing a synthesis of the different case-studies, to provide a response to the research questions and to establish the risk identification ontology.
Fig.1 - Framework of a Case-Study Survey Inductive Approach, adapted from Bhandari et al. (2005)[5] (stages: research question, critical literature review, establishing key risk categories, case-study selection, individual case-study critical analysis, critical review and synthesis, and theory extension)

After careful consideration, it was decided that, in order to identify risks in IS projects, the natural strategy would be to study and analyse cases of failure of this type of project: past failure causes and events can be interpreted as risks in future projects. Failure has had different degrees of visibility. In particular, the failure of IS projects in the Public Sector has been the media delight (Harrin, 2007)[7] for a number of years now, due to the requirements for transparency and accountability in the sector. In no other sector are IS/IT projects perceived to have been so plagued by failure and ongoing problems. This is reflected in the number of such case-studies currently used in teaching and training situations, and in the growing public concern about this type of project. In the UK, this is clearly illustrated by a report put together by the Parliamentary Office of Science and Technology in 2003, entitled Government IT Projects, that aims to improve the success rate of public sector IT projects and highlight common pitfalls. This visibility is due to the duty of accountability that forces the sector to disclose any emerging failure and submit to scrutiny from political and social institutions. This certainly makes IT projects in the sector more vulnerable to criticism and public inspection, but also makes these projects an ideal research instrument in risk management practices. Therefore, the research team behind this study took the deliberate decision to select 10 case-studies from Anglo-Saxon tradition Public Sectors, as shown in Appendix 2, specifically from the UK, US and New Zealand. This choice is rooted in the very high levels of transparency, detail, trustworthiness and credibility of the information disclosed about these failures. Nevertheless, although these case-studies are all from the Public Sector, they represent different areas of application and different national contexts, albeit in an Anglo-Saxon environment. The intention was, within the same sector, to cover a diversity of areas of application and of organisational contexts.

3. Establishing Key Risk Categories from a Critical Literature Review


There is a vast and rich body of both professional and academic publications addressing IS design and development and their associated risks. The urgent necessity of risk management is recognised to be not only obvious and inevitable, but also complex and difficult to implement. From a distillation of this literature, it emerged that most project management authors focus on procedural aspects of the management process such as estimation, planning, monitoring, team building and change management (Chapman and Ward, 1997; Kliem and Ludin, 2000; Mantel et al., 2001; Pritchard, 2004)[8][9][10][11]. Conversely, most SW engineering and computer science authors focus on technical problems of the design and development process, that is, requirements specification, abstract representation of human activity systems and information environments, programming, testing and installation (Drori, 1997; Jalote, 2002; Taylor, 2003; Tsui, 2004)[12][13][14][15]. However, practitioners require a more integrative and holistic approach in order to be able to think about risk in context and take decisions on the avoidance and mitigation of these risks (Brown, 2000)[16]. Therefore, this study developed such a holistic conceptual model, presented in Fig.2, based on the five main dimensions of an IS project: Pre-Project, Customer, Project Management, Technological Issues, and Development Methodology.

Fig.2 - Holistic Conceptual Risk Model (five interrelated dimensions: Pre-Project, Customer, Project Management, Technological Issues and Development Methodology)

This holistic conceptual model aimed at establishing a manageable set of key risk categories in order to proceed with the analysis of the case-studies, as proposed in the framework in Fig.1. The conceptual model itself resulted from the critical literature review and a synthesis of a number of existing holistic models. In particular, it was strongly influenced by the propositions of Hughes and Cotterell (2002)[17] and Cadle and Yeates (2001)[4].

4. Research Findings
4.1. Pre-Project
Pre-project preparations and contracting are critical to the success of any type of project (Dvir et al., 1998)[18]. Before the project kicks off, both the project team and the customer need to have a good and agreed understanding of the requirement specifications, contractual relationships, project scope and constraints (e.g. budget, technology in use, interfaces with other systems both internal and external, etc.), the organisational environment, and the business environment. Cadle and Yeates (2001)[4] proposed that the contract is the most serious critical risk factor in IS projects, since the contract is the negotiation and enforcement tool used by all parties to convey their needs, concerns and relationships. The risks are particularly significant if the project scope is ill-defined or not firmly agreed between parties. Furthermore, ill-defined or ambiguous requirement specifications are equally dangerous (Shull et al., 2000)[19], and likely to originate problems of usefulness and deviations from both timelines and budgets. These are well known risks, identified even in the earlier days of computing, as discussed by Bostrom and Heinen (1977)[20], and they were found to be a prevalent cause of failure in the analysis of the case-studies. In particular, these problems became apparent in the Integrated National Crime Information System (INCIS) project developed for the New Zealand Police. In this case the results were catastrophic, and the final information system product was hopelessly inadequate:

The scope of INCIS has never been satisfactorily addressed in the documentation. In the initial Information Systems Planning exercise the scope was defined as intelligence within the Police. At no time were the boundaries set, or the role of INCIS defined and set in context within the Police. Subsequently, INCIS became an information rather than an intelligence system, radically affecting the scope of the project. There seems to be a great deal of confusion at even this broad-brush level of definition. INCIS - Small (2000:33-34)[21]

Cadle and Yeates (2001)[4] claim that ambiguous roles of partners in project planning and scoping, as well as unclear relationships between these parties, should be considered important risks. This research found clear evidence of these risks in most of the case-studies surveyed. This was particularly evident in the London Ambulance Service (LAS) case-study:

The intention with the award of the contract to SO was for them as the lead contractor to take on the overall project management responsibility although there is no specific reference to this in the contract. This role later became ambiguous as SO struggled to manage their own input to the project and LAS became more responsible by default for project management. The suppliers are clear that it was in reality LAS, through the Director of Support Services and the contract analyst, who were providing project management. LAS - Finkelstein (1995)[22]

These problems can be further compounded by internal political difficulties in the customer's organisation, as discussed in the next section. In terms of pre-project, some of these internal issues were crucial in all of the case-studies analysed. Lack of understanding of organisational politics, culture and internal relationships was found to create close to insurmountable problems. This confirms the findings of other researchers, such as Nah et al. (2001)[23], who suggest that top management support is needed throughout the implementation and that top management needs to publicly and explicitly identify the project as a top priority. Finally, in terms of pre-project, it is in the early project planning process that catastrophic risks are often ignored and not taken into account. Planning needs to account for adequate resources (funds, staff, equipment, etc.) and should precede the actual contract whenever possible. An important part of this early planning must also encompass the substitution of current systems and, whenever necessary, the interfacing with other systems. This was identified as a crucial risk factor in a number of case-studies, namely in the US Navy ERP project:

DOD's [US Department of Defense] past history of not implementing systems on time and within budget. The project faces numerous significant challenges and risks that must be dealt with as the project moves forward. For example, 44 system interfaces with other Navy and DOD systems must be developed and implemented. Long-standing problems regarding the lack of integrated systems and use of nonstandard data within DOD pose significant challenges and risks to a successful Navy ERP interface with these systems. US Navy ERP - GAO (2005)[24]

In fact, the majority of failure causes identified in the case-studies could have been avoided if careful planning had been done before the contract. The contract needs to be suitable for the specific project, with a clearly defined payment schedule and backup plans in case of delays and other unexpected emergencies. Additionally, a clear business plan and vision is required to steer the direction of the project and enable efficient monitoring throughout its life-cycle. These findings confirm the widely accepted view in the literature that a good plan should follow a sound business case based on a clear understanding of both long-term strategic and short-term tangible benefits. In accordance with this business case, planning should provide for efficient use of resources, monitoring of costs, assessment and mitigation of risks and adherence to sound quality standards.

4.2. Customer
Customers are not always familiar with modern Information and Communication Technology (ICT) and its inherent affordances and risks. This may result in undue optimism and overambitious expectations by customers, which in turn may result in overambitious requirement specifications and unwillingness to accept the final system after development. This was a recurrent theme in all the case-studies surveyed, and a reality even in organisations with a long tradition of in-house IS design and development. Furthermore, the internal constitution and politics of the customer may represent another important subset of risks. IS projects require full cooperation from all involved parties. Conflicts between user departments and internal political difficulties can bring conflicts of interest to the surface and create great difficulties for IS functional analysis, design and testing, as well as final acceptance. Risks emerging from these internal realities were apparent throughout the analysis, particularly in highly complex organisations such as the US Department of Defense:

Until DOD develops and implements an effective strategy for overcoming resistance, parochialism, and stovepiped operations, transformation efforts, as envisioned by the 1995 task force report, will not be successful and the department will be faced with the continued proliferation of numerous business systems that are nonintegrated, duplicative, and waste limited resources. DOD DTS - GAO (2006)[25]

Furthermore, projects that may lead to a significant change in organisational structure and culture may result in strong user resistance and cause a series of risks. Managers on the customer side therefore have a very crucial role in mediating and negotiating this change, as well as in preparing the organisation to accept the new system.

4.3. Project Management
Poor project management is universally accepted as a major cause of risk and failure in IS projects. The literature in the field is very rich and exhaustive, and the results of this analysis seem to confirm the vast majority of theoretical assertions by authors in the field. This study identified risks around three main areas: human resources, project planning, and project monitoring and reporting. Deficiencies in project planning and team building are well known risk factors (Kasser, 1998)[26]. Cadle and Yeates (2001)[4] suggest that a full and complete project plan may not necessarily be presented before the contract, but a comprehensive and proper project management strategy needs to be initiated as soon as possible. Furthermore, a well balanced team, including both well experienced and less experienced analysts and SW developers, needs to be built.
Finally, efficient communication channels linking project managers, the project team, customer managers and end-users are essential to ensure flows of information and feedback. These communication channels are viewed as the key to final success in IS development and implementation (Cadle and Yeates, 2001)[4]. The plan should also include formally defined and agreed milestones and deliverables (Holland and Light, 1999)[27]. These milestones and timelines enable appropriate project monitoring and control, as well as timely mitigation decisions whenever risk events emerge. These deadlines need to be met in order for the project to stay within agreed schedules and budgets and to maintain project team credibility. More interestingly, this research identified that it was in reporting and documenting project progress that the majority of public sector managers failed. This may have been due to prevalent public sector cultures, or simply to a lack of specific training for IT project management roles. Project management remains, however, an umbrella term for many elements involved in systems failure; therefore, technical, technological and methodological problems are often misunderstood or confused with project management.

4.4. Technological Issues
As with project management issues, technological causes of failure have been debated in the field of IS since its early beginnings. Also in this case, most of this study's findings are in accordance with the very exhaustive and extensive literature on technology risk factors. The study identified both the stability and the compatibility of hardware and software platforms as a major cause of problems. Furthermore, the survey showed that unproven or unfamiliar technologies may cause disappointment and lead to under-performance or conflicts with unrealistic expectations of technology. However, the study also confirmed that risks lie with the technological development infrastructure as well, namely with unknown or unfamiliar programming languages, development tools and even development methods.

4.5. Development Methodology
Methodological problems have often been confused with project management; in fact, these are very separate issues. Different design and development methodologies may result in very different project structures and risks. There are well known differences between agile and structured methodologies, with defenders of both approaches engaging in fierce discussions and theoretical arguments. For the practitioner, however, these differences are more than theoretical and hypothetical. As identified in this study, choosing a methodology out of fashion or positioning in the field may bring severe risks to the project and increase the probability of delays and budget overruns. One of the persistent oversights of IT practitioners and programmers is the systems analysis stage of IS projects. Concentration on the technical concerns of design and programming has systematically led to reductionist interpretations of requirements, functional specifications, end-user needs and organisational constraints. This technologically induced blindness is still prevalent in the SW sector today. In fact, the lack of user consultation and of a holistic awareness of organisational needs was found to be one of the critical failure factors in most of the case-studies surveyed. Similarly, system design, as the next step, is often carried out on the assumption that a full comprehension of the organisation's background and requirements has been achieved. This stage requires constant communication within the project team as well as between the project team and the customer. Designs ranging from the overcomplicated to the reductionist may have profound implications for the success of IS projects. In order to ensure that the project team is producing what the customer requires, it is a good strategy to obtain the customer's agreement on designs and prototypes before starting the heavy processes of programming. Again, the lack of consultation with the customer may lead to significant deviations from actual requirements and decrease the chances of acceptance of the final system. System development and testing is probably the most critical phase of any IS project. Adequate programming and testing methods and techniques need to be adopted. The use of unstable and sometimes incompatible SW and HW platforms may represent a significant risk for the project. More importantly, the use of emergent fashions and sometimes unproven methods and tools may result in easily predictable risks and slippages in timelines and budgets. Finally, it was found in almost all cases that the lack of a thorough and inclusive testing strategy is an unacceptable risk and one of the major causes of disaster.

5. Conclusions
The checklist presented in this paper aims at supporting both practitioners and researchers in their risk thinking and assessment. For practitioners, the checklist is an important decision making support tool, aimed at helping in risk identification and assessment activities. For researchers, on the other hand, the checklist provides a first attempt at establishing a risk ontology and a point of departure for further research. Future research in this area should aim at completing this first proposal, but also at linking these risk factors to both causes and consequences. A major conclusion of the case-study survey is that a considerable number of risk factors are clearly incurred even before the start of the formal project. All these factors, identified in the pre-project dimension, severely pre-determine the future of the project and create very predictable risks that could be avoided if given due consideration. In fact, this research found evidence that risk thinking should start very early, as part of pre-project, and not, as most modern design and development methodologies propose, solely as part of the development process itself. For instance, DSDM proposes risk thinking as part of the functional model iteration, RUP proposes risk analysis and assessment as part of the inception phase, and even XP, a proclaimed risk-driven approach (Li et al., 2006)[28], only really formally advocates risk thinking during release planning. It is clear from the findings of this study that risk thinking must start long before this, in fact, long before contracts are established.
References
[1] Nunes M, Annansingh F. The Risk Factor. The Journal of the Institute for the Management of Information Systems, 2001, 12 (6): 10-12.
[2] Keil M, Cule P, Lyytinen K, Schmidt R. A Framework for Identifying Software Project Risks. Communications of the ACM, 1998, 41 (11): 76-83.
[3] Drucker P. Management: Tasks, Responsibilities, Practices. London: W. Heinemann Ltd, 1975.
[4] Cadle J, Yeates D. Project Management for Information Systems. Harlow: Financial Times/Prentice Hall, 2001.
[5] Bhandari P, Nunes M, Annansingh F. Analysing the Penetration of Knowledge Management Practices in Organisations through a Survey of Case Studies. In: Proceedings of the 4th European Conference on Research Methodology for Business and Management Studies (ECRM 2005), 21/22 April 2005, Université Paris Dauphine, Paris, France, 2005. 37-45.
[6] Yin R. Case Study Research: Design and Methods. Beverly Hills, CA: Sage Publishing, 1984.
[7] Harrin E. Between a Rock and a Hard Place. IT Now, 2007, 49 (1): 6-8.
[8] Chapman C, Ward S. Project Risk Management: Processes, Techniques and Insights. New York: John Wiley & Sons, 1997.
[9] Kliem R, Ludin I. Reducing Project Risk. Aldershot: Gower Publishing Limited, 2000.
[10] Mantel S, Meredith J, Shafer S, Sutton M. Project Management in Practice. New York: John Wiley & Sons, 2001.
[11] Pritchard C. The Project Management Communications Toolkit. London: Artech House, 2004.
[12] Drori O. From Theory to Practice or How Not to Fail in Developing Information Systems. Software Engineering Notes, 1997, 22 (1): 85-87.
[13] Jalote P. Software Project Management in Practice. Boston: Addison-Wesley Professional, 2002.
[14] Taylor J. Managing Information Technology Projects: Applying Project Management Strategies to Software, Hardware and Integration Initiatives. New York: American Management Association, 2003.
[15] Tsui F. Managing Software Projects. Sudbury: Jones and Bartlett Publishers, 2004.
[16] Brown M. Mitigating the Risk of Information Technology Initiatives: Best Practices and Points of Failure for the Public Sector. In: Garson G. (ed.) Handbook of Public Information Systems. New York: Marcel Dekker, 2000. 153-164.
[17] Hughes B, Cotterell M. Software Project Management. London: McGraw-Hill, 2002.
[18] Dvir D, Lipovetsky S, Shenhar A, Tishler A. In Search of Project Classification: A Non-Universal Approach to Project Success Factors. Research Policy, 1998, 27 (9): 915-935.
[19] Shull F, Rus I, Basili V. How Perspective-Based Reading Can Improve Requirements Inspections. Computer, 2000, 33 (7): 73-79.
[20] Bostrom R, Heinen J. MIS Problems and Failures: A Socio-Technical Perspective. Part 2: The Application of Socio-Technical Theory. MIS Quarterly, 1977, 1 (4): 11-28.
[21] Small F. Ministerial Inquiry into INCIS, 2000. Available online at: http://www.justice.govt.nz/pubs/reports/2000/incis_rpt/INCIS%20inquiry.pdf. [Accessed 14/03/2007]
[22] Finkelstein A. Report of the Inquiry into the London Ambulance Service, 1995. Available online at: http://www.cs.ucl.ac.uk/staff/A.Finkelstein/las/lascase0.9.pdf. [Accessed 14/03/2007]
[23] Nah F, Lau J, Kuang J. Critical Factors for Successful Implementation of Enterprise Systems. Business Process Management Journal, 2001, 7 (3): 285-296.
[24] GAO. DOD Business Systems Modernization: Navy ERP Adherence to Best Business Practices Critical to Avoid Past Failures, 2005. Available online at: http://www.gao.gov/new.items/d05858.pdf. [Accessed 14/03/2007]
[25] GAO. DOD Business Transformation: Defense Travel System Continues to Face Implementation Challenges, 2006. Available online at: http://www.gao.gov/new.items/d0618.pdf. [Accessed 14/03/2007]
[26] Kasser J. What Do You Mean You Can't Tell Me if My Project is in Trouble? In: Proceedings of the First European Conference on Software Metrics (FESMA 98), 1998, Antwerp, Belgium.
[27] Holland C, Light B. A Critical Success Factors Model for ERP Implementation. IEEE Software, 1999, 16 (3): 30-36.
[28] Li M, Huang M, Shu F, Li J. A Risk-Driven Method for eXtreme Programming Release Planning. In: Proceedings of the 28th International Conference on Software Engineering, May 20-28, 2006, Shanghai, China.


Appendix 1 - A Proposition of an Information Systems Project Risk Checklist

Pre-Project
Requirement Specification and Project Scoping:
1. Requirement specifications are ill-defined.
2. Requirement specifications are ambiguous.
3. Project scope and objectives are inappropriately defined.
4. Requirement specifications are incomplete.
5. Lack of early negotiation with customer and users.
Contractual Relationships:
6. Complex and unclear relationships between partners, customers and suppliers.
7. Ambiguous roles of partners in project planning and scoping.
8. Disagreement between involved partners.
9. Unclear payment schedule or a fixed-price contract.
10. Inappropriate selection of suppliers due to ambiguous selection criteria.
11. Uncertain long-term partnership between the customer and the supplier after the project.
Project Planning:
12. Deficient planning and resource allocation.
13. Lack of previous experience by the customer.
14. Lack of clear definition of development methodologies or/and technological infrastructures.
15. Lack of planning for replacement of current systems or/and interfacing with current systems.
16. Lack of backup plan for delays or/and under-performance of the new system.
17. Lack of a quality control system before the project.
Organisational Environment:
18. Inadequate current business processes for IS implementation.
19. Significant need for re-engineering of current business processes.
20. Inappropriate business plan and IS vision.
21. Lack of senior management support or/and internal political resistance.
22. Lack of understanding of the customer's organizational culture.
23. Requirement for widespread and persistent organizational culture change.
24. Potential for end-user resistance.
25. Lack of capability to identify and/or absorb both external and internal uncertainties.

Customer
Internal & External Environment:
1. Conflicts between user departments.
2. Constant external pressure and uncertainty on how to manage it.
3. Inefficient communication between all involved parties.
4. Internal political difficulties.
5. Lack of confidence in the project by the internal users.
6. Mistrust between management and staff.
7. Difficulties in harmonizing different and sometimes conflicting internal users' perspectives on the project.
End User:
8. Target users are unfamiliar with the technology and require additional training.
9. Lack of end user support.
10. End user reluctance in changing or even accepting the new system.
Management:
11. Lack of understanding of technical issues and functional scope by management.
12. Lack of information and IT skills by management.
13. Internal resources and access are not adequately provided to the project team.


Project Management
Human Resource:
1. Reluctance by the customer to attend project meetings.
2. Inappropriate staffing and/or personnel shortfalls.
3. Inappropriate project team structure.
4. Inexperienced team members in core business or technology project components.
5. Lack of clear processes of accountability and responsibility.
6. Lack of commitment to the project by team members.
7. Inadequate balance of junior and senior staff in the project team.
Project Planning:
8. Lack of effective processes of estimation.
9. Lack of effective quality control and assurance according to agreed standards.
10. Ill-definition of milestones and related deliverables.
11. Ineffective risk identification and assessment.
12. Ineffective planning for risk mitigation and/or avoidance.
13. Lack of clear project management structure and methodology.
Project Monitoring and Reporting:
14. Inappropriate project reporting.
15. Unrealistic monitoring of timeliness and budgets.
16. Ineffective risk monitoring according to contract, requirement specifications, time boxes and prioritization of features.
17. Inappropriate human resource management.
18. Lack of leadership and/or motivating attitudes by project managers.

Technological Issues
IS Infrastructure and Base Technologies:
1. Emerging or unproven technologies.
2. Incompatible technologies with project constraints and/or requirements.
3. Erroneous, ambiguous or incomplete technology feasibility studies.
4. Unfamiliar technologies to the design and development team.
5. Unstable or incompatible HW infrastructures and platforms.
Development Technologies:
6. Emerging or unproven programming and debugging technologies.
7. Overly complex or time demanding programming and debugging technologies.
8. Unfamiliar development environment to the project team.

Development Methodology
General Methodological Issues:
1. Use of emerging, unproven and often misunderstood methodologies.
2. Use of inadequate or reductionist methodologies.
3. Adoption of technologically centric design and development approaches.
4. Adoption of non-comprehensive (not fully covering the entire process) methodologies.
5. Project managers that do not fully understand the technical requirements of an IS project.
6. Inadequate planning for socio-technical systems design and development.
Systems Analysis:
7. Inadequate comprehension of the current system and current situation of the organization.
8. Misunderstanding of user requirements.
9. Interpretation of end-user requirements from a reductionist technological perspective.
10. Lack of an integrative holistic perspective of organisational needs.
11. Inadequate prioritisation and assessment of requirements, functionalities and features.
12. Poor dialogue, negotiation and communication with the end-users and the organisation in general.
System Design:
13. Poor dialogue between designers and analysts.
14. Poor dialogue between designers and end-users.
15. Overcomplicated designs that may result in extremely heavy and complex systems.
16. Non-use of prototyping to negotiate design solutions with the customer.
17. Non-compliance with prioritization and specifications agreed in previous stages.
18. Designs emerging out of fashion or current trends rather than explicit needs.
System Development and Testing:
19. Poor dialogue between designers and programmers.
20. Attempt to produce a complete and perfect system in one go.
21. Unstable and/or incompatible software and hardware system platforms.
22. Initiating programming before design is fully agreed and/or complete.
23. System testing without final user involvement.
24. Inadequate testing of the final integrated system before final implementation.
25. Programming without agreed communication channels between programmers.
26. Programming without agreed information sharing channels between programmers.
27. Programming without agreed processes for sharing and re-use of common code between programmers.
28. Code emerging out of fashion or current trends rather than explicit needs.
29. Programmer-oriented coding instead of user-oriented.
System Installation:
30. Lack of planned and agreed systems installation and cutover processes.
31. Lack of a user training plan and insufficient user training before installation.
32. Initiating training without complete testing.
System Maintenance:
33. Lack of a clear and agreed maintenance plan.
34. Assignment of unqualified staff for systems administration and maintenance.
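The checklist is intended as a working assessment instrument. Purely as an illustration (not part of the original paper), the sketch below encodes a fragment of the ontology as a data structure and flags the factors a hypothetical assessor has scored above a threshold; the scores, threshold and function names are invented for the example, while the dimension and factor wording comes from the checklist above.

# A fragment of the Appendix 1 risk ontology encoded as nested dictionaries.
CHECKLIST = {
    "Pre-Project": {
        "Requirement Specification and Project Scoping": [
            "Requirement specifications are ill-defined.",
            "Requirement specifications are ambiguous.",
            "Project scope and objectives are inappropriately defined.",
        ],
        "Contractual Relationships": [
            "Ambiguous roles of partners in project planning and scoping.",
        ],
    },
    "Customer": {
        "End User": ["Lack of end user support."],
    },
}

def flag_risks(scores, threshold=0.5):
    """Return checklist items whose assessed likelihood exceeds the threshold.

    `scores` maps a risk factor string to a 0-1 likelihood judged by the
    assessor (invented values below; the paper does not prescribe a
    scoring scale for the checklist).
    """
    flagged = []
    for dimension, subdims in CHECKLIST.items():
        for subdim, factors in subdims.items():
            for factor in factors:
                if scores.get(factor, 0.0) > threshold:
                    flagged.append((dimension, subdim, factor))
    return flagged

# Example assessment of a hypothetical project.
scores = {
    "Requirement specifications are ambiguous.": 0.8,
    "Lack of end user support.": 0.3,
}
for dim, sub, factor in flag_risks(scores):
    print(f"[{dim} / {sub}] {factor}")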

Appendix 2 - List of Case Studies

1. Enterprise Resource Planning in Public School District. Organization: The San Diego Public School District. Reporting organization: Kellogg School of Management, Northwestern University. Year: 2002. URL: http://www.kellogg.northwestern.edu/faculty/jeffery/htm/cases/SDSU%20Case%20wm.pdf
2. Integrated Financial Management Program (IFMP). Organization: NASA. Reporting organization: United States General Accounting Office. Year: 2003. URL: http://www.gao.gov/new.items/d03507.pdf
3. The National Program for IT in the NHS. Organization: Department of Health, UK. Reporting organization: National Audit Office, UK. Year: 2006. URL: http://www.nao.org.uk/publications/nao_reports/05-06/05061173.pdf
4. Navy ERP. Organization: Department of Defense, US. Reporting organization: United States General Accounting Office. Year: 2005. URL: http://www.gao.gov/new.items/d05858.pdf
5. Defense Travel System (DTS). Organization: Department of Defense, US. Reporting organization: United States General Accounting Office. Year: 2006. URL: http://www.gao.gov/new.items/d0618.pdf
6. IT Investment Management (ITIM). Organization: Bureau of Land Management. Reporting organization: United States General Accounting Office. Year: 2003. URL: http://www.gao.gov/new.items/d031025.pdf
7. Information Technology Management. Organization: Small Business Administration. Reporting organization: United States General Accounting Office. Year: 2000. URL: http://www.gao.gov/archive/2000/ai00170.pdf
8. Identity and Passport Service: Introduction of ePassports. Organization: The Identity and Passport Service, UK. Reporting organization: National Audit Office, UK. Year: 2007. URL: http://www.nao.org.uk/publications/nao_reports/06-07/0607152.pdf
9. Integrated National Crime Information System (INCIS). Organization: New Zealand Police. Reporting organization: Ministry of Justice. Year: 2000. URL: http://www.justice.govt.nz/pubs/reports/2000/incis_rpt/INCIS%20inquiry.pdf
10. London Ambulance Service (LAS). Organization: National Health Service (NHS). Reporting organization: South West Thames Regional Health Authority. Year: 1995. URL: http://www.cs.ucl.ac.uk/staff/A.Finkelstein/las/lascase0.9.pdf

Assessing Risks through Information Systems Failure Probability Based on the Life Cycle Theory
Liu Shan, Zhang Jinlong, Chen Tao, Cong Guodong
School of Management, Huazhong University of Science and Technology, P.R.China, 430074

Abstract Many information system (IS) failures result from inadequate and ineffective assessment of project risk. Since the risk of an IS development project varies dynamically during the life cycle, consistent evaluation, coordination and avoidance are needed. Moreover, little is known about the integrated effect of all kinds of risk factors in the course of IS project development. A model of IS development project risk assessment and failure probability during the life cycle is established, combining risk, life cycle theory, dynamic characteristics and IS failure probability. Risk factors in the life cycle are identified and assessed by risk exposure analysis. The definition and measurement of IS failure probability are first presented, on the one hand predicting the success of IS project implementation and on the other hand analyzing the integrated influence of risk factors. Finally, a risk reducing strategy is introduced based on the assessment results, so as to reduce the high risks identified and improve the probability of success in IS development. Key words Information Systems development, life cycle, IS failure probability, risk reducing strategy

1. Introduction
With the development of Information Technology (IT) and the improvement of IS development project management, implementations of IS are growing rapidly in China. Many tools such as prototyping, data modeling and structured design are used in IS and software development. However, too many IS development projects end in failure. It is reported that fully 25 percent of all IS and software projects are cancelled outright[1]. A survey from the Standish Group in 1995 indicated that 52.7% of IS projects in the US run over their budgets and schedules, 31.1% are fully cancelled and only 16.2% are successfully finished within schedule and budget[2]. The probability of success of IS development projects is thus not high in the course of informationization. Therefore, how to avoid IS failure and reduce the risk of IS development receives great attention. IS researchers propose to identify and assess the risk factors of IS projects and to make risk reducing plans in order to execute risk tracking and control, reducing the IS development failure probability and making IS projects valuable. To avoid the high probability of system failure, IS researchers have attempted to identify factors threatening successful IS development. Risk assessment has been fully researched and has resulted in plenty of estimating instruments. Boehm summarized some software risk assessment tools[3]: typical risk identification techniques include checklists and the examination of decision drivers; risk analysis techniques include performance models, cost models, network analysis and quality factor analysis; risk prioritization techniques include risk exposure analysis and risk reduction leverage analysis. Risk varies across the phases of the IS development life cycle, and researchers have presented some valuable risk-oriented system development approaches to accommodate this variation. Boehm built the spiral model on the waterfall and evolutionary development models, integrating life cycle theory, risk and prototypes, and provided a kind of risk-driven software development approach[4]. In this model, the project goals, constraints, methods, risk factors, risk resolutions in each round and the plan for the next round are analyzed. The model not only allows transformation between the two development models but also has many other advantages: it pays attention to the reuse of current systems, provides a mechanism for incorporating software quality objectives into software product development, and focuses on eliminating errors and unattractive alternatives early. Boehm later upgraded the model and presented the win-win spiral model, which integrated win-win theory and Theory W[5]. In 2001, the IEEE Standard for Software Life Cycle Processes presented a risk management process model and made it a standard[6]. Gang et al. established a risk avoidance model for the software bidding phase based on the life cycle theory, combining the life cycle and risk avoidance[7]. However, too many models, approaches and theories recognize and analyze IS development risks only
This research has been supported by the National Natural Science Foundation of China (No. 70571025).


from a project failure perspective: they estimate risks, prioritize them and avoid them. In other words, researchers have only analyzed the factors which influence IS project success. To what extent does an IS project fail? How does the risk vary across the phases, and how can this dynamic variance be recognized? To what extent does the integrated effect of all risk factors influence the project? These problems have seldom been studied. Therefore, this paper puts forward the definition of IS failure probability and establishes a risk management model of IS development projects based on the life cycle theory, attempting to address two problems: (1) How to identify and assess the variance of risk factors in all phases of the life cycle? (2) How to measure IS failure probability from the result of risk assessment so as to effectively estimate, control and avoid IS failure?

2. Risk model of IS development based on the life cycle theory


Based on the waterfall model of software development, the life cycle of IS development is divided into 8 phases: feasibility analysis, requirements analysis, product design, detailed design, code and unit test, system integration, implementation and system test, and operation and maintenance. IS project management involves 4 phases: the conception phase, the development phase, the implementation phase and the ending phase. Feasibility and requirements analysis belong to the conception phase. The development phase ranges from product design to system integration. System implementation and test constitute the implementation phase, while operation and maintenance constitutes the ending phase. Turning from one phase to another represents a significant milestone.

Fig.1 IS development risk model based on the life cycle theory (phases P1 feasibility analysis through P8 operation and maintenance, grouped into conception, development, implementation and ending; each phase transition passes through risk identification, risk assessment and risk planning, with a review, adjustment and resolution loop whenever the probability of IS failure is high)

The risk management model of IS development based on the life cycle theory is shown in Fig.1. This model integrates risk analysis, risk planning and IS failure probability in each phase of the life cycle to decide what will be done in the next phase. When moving from one phase to the next, risk identification and assessment are needed. If the result of the assessment shows a low probability of IS failure, a risk plan is made before moving to the next phase. If it shows a high probability of IS failure, the project stays in the current phase and adjusts, making resolutions until the probability of IS success is high. This judgment is executed through all 8 phases and ends when the project is finished.
2.1 Risk identification
When moving from one phase to another, the risks to be identified and assessed need to be categorized, because of their complexity, in order to grasp the essence of the risk factors. An overly full categorization bothers managers, while an incomplete categorization lets decision makers ignore important risks. Risk factors can be identified by two approaches. One method is to describe all risks in a fixed structure drawn from theories and models, which is complete but keeps some distance from actual situations. The other method is to conclude risk factors empirically, presuming that IS risks are statistically distributed, which comes closer to the practice of IS management. Tab.1 integrates the two approaches and summarizes IS development risk factors and risk variables from the literature.
2.2 Risk assessment
In practice, project managers can not only find risk factors by referring to Tab.1, but can also identify them by questionnaires and statistical methods. The assessment of the risk factors is executed after the identification. While lots of risk assessment methods exist, risk exposure analysis is the most widely used, combining the estimated probability and loss of an unsatisfactory outcome. Risks are prioritized by calculating risk exposure in order to find the factors that most influence IS success and to make risk resolutions. The calculation of risk exposure can be represented as follows:
RE = P(UO) * L(UO)    (1)
where RE is risk exposure, P(UO) is the probability of an unsatisfactory outcome and L(UO) is the loss caused by the unsatisfactory outcome. Tab.2 shows the result of a risk exposure analysis of an IS project in a Chinese construction company during the code and unit test phase. The loss and the probability were estimated by 20 users and development members of the project using expert scoring in the course of IS development. The loss ranges from 0 to 10 while the probability ranges from 0 to 1. The scores of probability and loss were calculated by weighted average and confirmed by all the members. The risk exposure of each risk factor is derived by weighted average from the risk exposures of its risk variables.
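As a small illustration of formula (1) and of the prioritization it supports (a sketch only, not from the paper), the snippet below computes risk exposure for a handful of the risk variables of Tab.1, using the probability and loss scores that Tab.2 reports for the code and unit test phase, and ranks them:

# Risk exposure RE = P(UO) * L(UO) for a few risk variables; the
# (probability, loss) pairs are the Tab.2 estimates for this phase.
risks = {
    "V1: Application size": (0.9, 9),
    "V19: Personnel shortfalls": (0.8, 10),
    "V29: Inadequate estimation of project schedule": (0.9, 8),
    "V14: Low user responsibility": (0.1, 3),
}

exposures = {name: p * l for name, (p, l) in risks.items()}

# Prioritize: highest exposure first, as in the paper's top-ten ranking.
for name, re in sorted(exposures.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: RE = {re:.1f}")

The ranking this produces (application size and personnel shortfalls at the top) matches the top risks identified in the analysis below.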
Tab.1 IS development risk factors and variables (bracketed numbers give the source literature)

F1: Project size
V1: Application size [8]
V2: Multiple implementers [9]
V3: Many external suppliers [13]
F2: Organization environment
V4: Lack of top management support [11]
V5: Change in organizational management [11]
V6: Strategies and politics conflict [9]
V7: Resources shifted away from the project [9]
F3: Requirements
V8: Unclear or incorrect system requirements [11]
V9: Continually changing system requirements [11]
V10: Undefined project success criteria [9]
F4: User
V11: Lack of user support [11]
V12: Users resistant to change [11]
V13: Conflict between users [9]
V14: Low user responsibility [9]
F5: Technology
V15: Technological newness [11]
V16: High level of technical complexity [9]
V17: Immature technology [9]
V18: Large number of links to other systems required [9]
F6: Team
V19: Personnel shortfalls [3]
V20: Frequent conflicts between development team members [3]
V21: Inadequately trained development team members [9]
V22: Team members not familiar with the tasks being automated [9]
V23: Lack of expertise [3]
F7: Planning and control
V24: Lack of an effective project management methodology [9]
V25: Project progress not monitored closely enough [9]
V26: Poor project planning [3]
V27: Inexperienced project manager [11]
V28: Inadequate estimation of budgets [9]
V29: Inadequate estimation of project schedule [9]
F8: Market and competitions
V30: Change of market reducing project value [12]
V31: New substituted product, service and technology [10]
V32: Opposite behaviors from external organizations [12]

The top ten risks in this IS development are: project size, personnel shortfalls, inadequate estimation of project schedule, change in organizational management, continually changing system requirements, poor project planning, undefined project success criteria, lack of an effective project management methodology, inexperienced project manager, and unclear or incorrect system requirements. The risks of project size and personnel shortfalls are very high. The large project size leads to complex requirements analysis, and the fact that requirements are still changing in the code phase is quite negative. Furthermore, the project team had no detailed plans and no strict control, and was especially weak in terms of organization and team. In fact, this project was ended in the implementation phase. The risk exposure analysis of each phase is shown in Tab.3 (in the original table, a light color marks a high level of risk, a deep color marks a very high level of risk, and "." means the project has been cancelled). Obviously, application size and changing requirements are the two risk variables that most influence IS success during the life cycle. In the late period of the project, change in organizational management, personnel shortfalls and inadequate control doomed the project to failure.
2.3 IS failure probability analysis
Most researchers have tended to identify risk factors from a failure perspective; however, they do not describe to what extent the IS project will fail. In fact, IS failure probability can be measured from the result of risk assessment. Webster defines failure as "a failing to perform a duty or expected action", while the Standish study defined project failure as either a project that has been cancelled or a project that does not meet its budget, delivery, and business objectives. Researchers have viewed IS failure as two situations from the software developer's perspective[14]: if the project is completed, IS failure means developing a product that causes customer discontent; if the project is cancelled, IS failure means not learning anything that can be applied to the next project. We define IS failure probability, from a risk perspective, as the probability of discontent outcomes resulting from the risk variables. Two aspects are involved in discontent outcomes: one is the probability of unsatisfactory outcomes and the other is the loss caused by unsatisfactory outcomes. Therefore, IS failure probability is an integration of all risk factors. IS failure probability can be measured by the following formula:

R(F) = Σi=1..n {Pi(UO) · [Pi(UO) · Li(UO)]} / Σi=1..n [Pi(UO) · Li(UO)]    (2)

R(F) represents the IS project failure probability, Pi(UO) is the probability of an unsatisfactory outcome, Li(UO) is the loss caused by the unsatisfactory outcome, i indexes the risk variables and n is the total number of risk variables. In other words, R(F) is the average of the Pi(UO) weighted by the risk exposures REi = Pi(UO) · Li(UO).
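As a numerical check (a sketch, not from the paper), the following snippet applies formula (2) to the 32 probability and loss scores listed in Tab.2 for the code and unit test phase; it reproduces the R(F) value reported for phase P5 in Tab.4.

# P(UO) and L(UO) scores for risk variables V1..V32 of the
# code-and-unit-test phase, taken from Tab.2.
P = [0.9, 1.0, 0.7, 0.5, 0.8, 0.6, 0.6, 0.7, 0.8, 0.9, 0.3, 0.2, 0.4, 0.1,
     0.2, 0.1, 0.1, 0.2, 0.8, 0.1, 0.5, 0.6, 0.7, 0.9, 0.6, 0.8, 0.9, 0.6, 0.9,
     0.2, 0.3, 0.1]
L = [9, 2, 3, 8, 8, 6, 7, 8, 8, 7, 8, 6, 5, 3,
     6, 5, 6, 6, 10, 6, 7, 5, 7, 7, 6, 8, 7, 8, 8,
     6, 6, 7]

# Risk exposure of each variable: RE_i = P_i(UO) * L_i(UO)   (formula (1))
RE = [p * l for p, l in zip(P, L)]

# IS failure probability: exposure-weighted average of P_i(UO)  (formula (2))
R_F = sum(p * re for p, re in zip(P, RE)) / sum(RE)
print(f"R(F) = {R_F:.3f}")   # prints R(F) = 0.704, matching Tab.4 for P5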
Tab.2 Risk exposure of risk factors and variables (code and unit test phase; L = loss caused by unsatisfactory outcome, P = probability of unsatisfactory outcome, RE = risk exposure)

F1 (factor RE = 4.07): V1: L=9, P=0.9, RE=8.1 | V2: L=2, P=1.0, RE=2.0 | V3: L=3, P=0.7, RE=2.1
F2 (factor RE = 4.55): V4: L=8, P=0.5, RE=4.0 | V5: L=8, P=0.8, RE=6.4 | V6: L=6, P=0.6, RE=3.6 | V7: L=7, P=0.6, RE=4.2
F3 (factor RE = 6.10): V8: L=8, P=0.7, RE=5.6 | V9: L=8, P=0.8, RE=6.4 | V10: L=7, P=0.9, RE=6.3
F4 (factor RE = 1.48): V11: L=8, P=0.3, RE=2.4 | V12: L=6, P=0.2, RE=1.2 | V13: L=5, P=0.4, RE=2.0 | V14: L=3, P=0.1, RE=0.3
F5 (factor RE = 0.88): V15: L=6, P=0.2, RE=1.2 | V16: L=5, P=0.1, RE=0.5 | V17: L=6, P=0.1, RE=0.6 | V18: L=6, P=0.2, RE=1.2
F6 (factor RE = 4.00): V19: L=10, P=0.8, RE=8.0 | V20: L=6, P=0.1, RE=0.6 | V21: L=7, P=0.5, RE=3.5 | V22: L=5, P=0.6, RE=3.0 | V23: L=7, P=0.7, RE=4.9
F7 (factor RE = 5.77): V24: L=7, P=0.9, RE=6.3 | V25: L=6, P=0.6, RE=3.6 | V26: L=8, P=0.8, RE=6.4 | V27: L=7, P=0.9, RE=6.3 | V28: L=8, P=0.6, RE=4.8 | V29: L=8, P=0.9, RE=7.2
F8 (factor RE = 1.23): V30: L=6, P=0.2, RE=1.2 | V31: L=6, P=0.3, RE=1.8 | V32: L=7, P=0.1, RE=0.7

In this case, the IS project failure probability in the Chinese construction company above can be measured. The result, shown in Tab.4, indicates that the IS failure probability of this company was already high in the early period of the project: in the requirements analysis phase it exceeds 50%. The IS failure probability keeps increasing as the project evolves, and the project is cancelled in the end.
Tab.3 Risk exposure analysis in each phase of IS development
(Conception: P1 to P3; Development: P4 to P6; Implementation: P7; Ending: P8)

Risk factor  Variable  P1   P2   P3   P4   P5   P6   P7   P8
F1           V1        6.4  7.2  8.1  8.1  8.1  8.1  8.1  -
             V2        2    2    2    2    2    2    2    -
             V3        2.1  2.1  2.1  2.1  2.1  2    2    -
F2           V4        0.8  0.8  1.6  2.4  4    7    8    -
             V5        0.9  0.9  1.8  2.7  6.4  9    10   -
             V6        0.7  0.7  2.1  3.5  3.6  4    7.2  -
             V7        0.8  0.8  2.4  3.6  4.2  4.9  6.4  -
F3           V8        3.6  4.8  5.6  5.6  5.6  6.4  6.4  -
             V9        3.2  6.4  6.4  6.4  6.4  6.4  6.4  -
             V10       0.7  2.1  4.2  4.2  6.3  7.2  8.1  -
F4           V11       2.4  1.6  1.6  1.6  2.4  4.8  5.6  -
             V12       3.2  2.4  1.6  1.2  1.2  3    6.4  -
             V13       2    4.2  4.9  3.6  2    6.4  6.4  -
             V14       0.6  0.3  0.3  0.3  0.3  3    6.4  -
F5           V15       0.6  0.6  1.2  1.2  1.2  3    3.6  -
             V16       0.5  1.5  2.5  3.6  0.5  7    8    -
             V17       0.6  0.6  2.4  2.4  0.6  1.8  6.4  -
             V18       1.2  1.2  1.2  1.2  1.2  7.2  8    -
F6           V19       3.6  3.6  5.6  6.4  8    9    10   -
             V20       0.6  0.6  0.6  0.6  0.6  2.4  5.6  -
             V21       2.4  4.2  3.5  4.2  3.5  4.9  6.4  -
             V22       3.6  3.6  3.5  3.5  3    3.5  4.2  -
             V23       2.1  3.5  4.2  4.9  4.9  5.6  6.3  -
F7           V24       2.7  2.7  3.6  4.2  6.3  7.2  7.2  -
             V25       0.8  2.1  2.1  2.4  3.6  5.6  6.4  -
             V26       0.6  1.8  2.4  4.2  6.4  6.4  6.4  -
             V27       1.2  1.6  4.2  5.6  6.3  6.3  6.3  -
             V28       3.6  4.2  4.8  4.8  4.8  6.4  6.4  -
             V29       2.4  4.2  6.3  6.4  7.2  7.2  8.1  -
F8           V30       0.7  1.2  1.2  1.2  1.2  4.8  5.6  -
             V31       0.6  1.2  1.2  1.2  1.8  2.4  4.8  -
             V32       0.6  0.7  0.7  0.7  0.7  3.5  6.4  -
Note: the P8 column is empty because the project was cancelled.

Tab.4 IS failure probability in each phase of the project during the life cycle

       Conception           Development          Implementation  Ending
       P1     P2     P3     P4     P5     P6     P7              P8
R(F)   0.408  0.519  0.582  0.618  0.704  0.782  0.846           1

2.4 Risk planning and resolution
According to the result of risk assessment, when the IS failure probability is less than 50%, the project proceeds to the next phase and a risk plan should be drawn up. Risk planning includes the budget for the next phase, schedule control, the tools to be used, and decisions on whether to use prototypes or evolutionary deliveries. If the IS failure probability is more than 50%, risk resolution must be carried out and the risks should be reassessed. Researchers have suggested two types of risk-reducing strategies, defined as conscious courses of action undertaken to minimize the effects of risk factors inherent in system implementation situations: the inhibiting strategy and the compensating strategy[15]. An inhibiting strategy is initiated before a problem occurs, in an attempt to prevent it; a compensating strategy is initiated after the problem arises, in an attempt to repair the damage. Tab.5 lists the risk-reducing strategies for the eight risk factors. In the code phase of the project above, inhibiting strategies such as using prototypes, using an evolutionary approach, using a modular approach, avoiding change, and obtaining user participation and commitment should be taken; compensating strategies such as keeping the system simple, hiding complexity and obtaining management support should also be considered.
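A minimal sketch of this phase-gate rule, using R(F) values from Tab.4 as input; the wording of the two actions paraphrases the text above, and the 50% threshold is the one stated there.

    def phase_gate(r_f, threshold=0.5):
        # below the threshold: proceed and draw up the risk plan for the next phase
        if r_f < threshold:
            return "proceed; make risk plan (budget, schedule control, tools, prototypes)"
        # at or above the threshold: resolve risks, then reassess before proceeding
        return "apply risk resolution (inhibiting/compensating strategies) and reassess"

    for phase, r_f in [("P1", 0.408), ("P2", 0.519), ("P5", 0.704)]:
        print(phase, "->", phase_gate(r_f))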


Tab.5 System development risks and risk resolution
Risk resolution strategies:
S1: Use prototypes
S2: Use evolutionary approach
S3: Use modular approach
S4: Keep the system simple
S5: Hide complexity
S6: Avoid change
S7: Obtain user participation
S8: Obtain user commitment
S9: Obtain management support
S10: Sell the system
S11: Provide training programs
S12: Provide ongoing assistance
S13: Insist on mandatory use
S14: Permit voluntary use
S15: Rely on diffusion and exposure
S16: Tailor the system to people's capabilities
Note: I = Inhibiting Strategy, C = Compensating Strategy; in the original table each strategy is marked I or C against the risk factors F1 to F8.

3. Result analysis
The variation of the risk factors and of the IS failure probability in each phase of the project life cycle can be described from the data above. The risk exposure analysis and the change of IS failure probability for the construction company's project are shown in Tab.6, where R1(F) is the IS failure probability after risk resolution has been taken.
Tab.6 Variance of risk exposure and IS failure probability during the life cycle
(Conception: P1 to P3; Development: P4 to P6; Implementation: P7; Ending: P8)

         P1     P2     P3     P4     P5     P6     P7     P8
RE(F1)   3.5    3.77   4.07   4.07   4.07   4.03   4.03   1
RE(F2)   0.8    0.8    1.98   3.05   4.68   6.23   7.9    1
RE(F3)   2.5    4.43   5.4    5.4    6.1    6.67   6.97   1
RE(F4)   2.05   2.13   2.1    1.68   1.48   4.3    6.2    1
RE(F5)   0.73   0.98   1.83   2.1    0.88   4.75   6.5    1
RE(F6)   2.46   3.1    3.48   3.92   4      5.08   6.5    1
RE(F7)   1.88   2.77   3.9    4.6    5.77   6.57   6.8    1
RE(F8)   0.63   1.03   1.03   1.03   1.23   3.57   5.6    1
R(F)     0.408  0.519  0.582  0.618  0.704  0.782  0.846  1
R1(F)    0.356  0.468  0.492  0.533  0.678  0.765  0.831  1


Fig.2 The trend of IS failure probability in each phase of the life cycle
[Line chart: IS failure rate (0 to 1) versus phases P1 to P8 of the IS development life cycle, with one curve before risk resolution and one after risk resolution]

The project size remained at a high level, while the organization risk factor changed greatly after the code phase. Moreover, the requirements were considered high risk after the requirements analysis phase as a result of the large application size, and the requirements of the subsystems were uncertain. The project kept obtaining users' support until implementation, because the system could make users' work much easier; at the end of the project, however, users were not capable of doing anything to save the system. There was no technical difficulty at the beginning of development, but when the project moved into the system integration phase, a large number of external links between subsystems appeared; more and more technical problems needed to be solved, and the problems got even worse. The team operated well at the beginning, but some major team members were shifted away for various reasons, which significantly affected IS success. The project went out of control in the code phase: the schedule overran, and the project leaders lacked an efficient methodology, which resulted in imprecise planning. Market and competition risk was not high at first, but the project lasted too long and external organizations were bothered, which made this risk higher and higher. In a word, the large project size and unclear requirements are the main reasons for this IS failure, and team turnover and organizational changes increased the distance from success. The trend of IS failure probability in each phase of the life cycle is shown in Fig.2: IS failure probability is not high in the feasibility phase, shows a tendency toward failure in the requirements phase, and after the detailed design phase grows steadily until the project is cancelled at last. From the relationship between the risk factors and the IS failure probability, no single risk factor determines IS failure, because the IS failure probability is high while each individual risk factor exposure is not too high; it is the integrated effect of all the risk factors that leads to IS failure. Risk resolution plays an important role in the early four phases; however, the inhibiting strategy has less and less influence after the code phase, and only the compensating strategy can still reduce the risks. On the other hand, the risks of change in organizational management, personnel shortfalls and large application size could not be appropriately avoided.

4. Conclusion
This study constructs a risk assessment model for information systems development by introducing the concept of IS failure probability, which transforms the risks identified and assessed into an IS failure probability, not only predicting the future of IS development but also indicating the integrated influence of each risk factor on the IS project. The result of the risk exposure analysis can be used to make the risk plan for the next step. What is more, if the risk exposure or the IS failure probability exceeds the threshold, risk-mitigating strategies can be taken to reduce the probability of IS failure. Managers can thus better plan and monitor the project in each phase of the life cycle and make the right decisions prior to the next phase. It has also been demonstrated that the model can be successfully applied in practice. Risk assessment and the trend of IS failure probability could be investigated in greater depth with techniques such as regression analysis and factor analysis, and the model can be applied to more and more IS projects as it comes into wider use.


References
[1] Gibbs W W. Software's chronic crisis. Scientific American, 1994, 271(3): 86-95
[2] The Standish Group. Chaos Report. http://www.standish-group.com/visitor/chaos.htm, 1995
[3] Boehm B W. Software risk management: principles and practices. IEEE Software, 1991, 8(1): 32-41
[4] Boehm B W. A spiral model of software development and enhancement. Computer, 1988, 21(5): 61-72
[5] Boehm B, Egyed A, Kwan J, et al. Using the WinWin spiral model: a case study. Computer, 1998, 31(7): 33-44
[6] The Institute of Electrical and Electronics Engineers. IEEE Standard for Software Life Cycle Processes - Risk Management. IEEE Std 1540-2001, New York, 2001
[7] Gang X, Jinlong Z, Lai K K. Risk avoidance in bidding for software projects based on life cycle management theory. International Journal of Project Management, 2006, 24(6): 516-521
[8] McFarlan F W. Portfolio approach to information systems. Harvard Business Review, 1981, 59(5): 142-150
[9] Wallace L, Keil M, Rai A. Understanding software project risk: a cluster analysis. Information & Management, 2004, 42: 115-125
[10] Karolak D W. Software Engineering Risk Management. Los Alamitos (CA): IEEE Computer Society Press, 1996
[11] Schmidt R, Lyytinen K, Keil M, et al. Identifying software project risks: an international Delphi study. Journal of Management Information Systems, 2001, 17(4): 5-37
[12] Benaroch M. Managing information technology investment risk: a real options perspective. Journal of Management Information Systems, 2002, 19(2): 43-84
[13] Kemerer C F, Sosa G L. System development challenges in strategic use of information systems. Information and Software Technology, 1991, 33(3): 212-223
[14] Linberg K R. Software developer perceptions about software project failure: a case study. The Journal of Systems and Software, 1999, 49: 177-192
[15] Alter S, Ginzberg M. Managing uncertainty in MIS implementation. Sloan Management Review, 1978, 20(1): 23-31


Research on a Project-Performance-Oriented Software Process Improvement Model and Decision Support System
Yu Benhai1, Zhang Jinlong2, Chen Tao2, Cong Goudong2, Zhang Dongfeng2, Hu Xianbiao3
1. Shandong Institute of Business and Technology  2. School of Management, Huazhong University of Science and Technology, Wuhan 430074, China  3. Shenyang Ligong University, Shenyang 110168, China

Abstract Starting from current research on the relationship between project performance and process improvement, this paper analyzes the insufficiency of traditional model-driven software process improvement. A theoretical framework for project-performance-driven process improvement is discussed; to realize this framework, the logical model of a project-performance-oriented decision support system is established, and software project development strategies of continuous performance evaluation and software process improvement are put forward.
Key words Software Project Performance, Software Process Improvement, Decision Support System

1. Introduction
The software crisis has baffled global informatization for many years, and many experts at home and abroad have tried to develop theories and methods to resolve this problem from various aspects. From the management perspective, software development projects typically suffer from insufficient planning, understanding and control of the whole development process; in addition, projects often lack a well-defined management framework (Ibbs CW, 2000). Management, rather than technology, is the most essential problem of software development; technology affects only parts of the project (Mark C Paulk, 1989). Because the success of a software project does not depend on technology alone, those who made great efforts through narrowly technical means obtained little effect and had to turn to management techniques for help (Fang Deying, 2003). In 1987, Watts S. Humphrey and his partners at the Software Engineering Institute of Carnegie Mellon University (SEI) advanced the study of process consciousness and capability in software organizations; process improvement includes three aspects: process definition, process measurement and process improvement. Since the software development process is not visible, and is a highly dynamic process influenced by technical development, personnel turnover, market change and other factors, its complexity is increased. Traditional process measurement only proposed basic methods, steps and principles for establishing measurements; it neglected formal support for establishing measurement and algorithmic support for the measurement process, so it could not achieve repeatable measurement and self-optimizing improvement (Wang Qing, 2005). The existing process improvement models consider objective factors excessively; they are insufficient in studying the integrated objective of software project performance, and at the same time they neglect the project's own characteristics, the differing situations of project organizations, and the continuity and evolution of process improvement, so process improvement seldom achieves the expected effect. This paper first specifies the meaning of software project performance, analyzes the relationship between software project performance and process improvement, and establishes a performance-driven software project process improvement model. Second, according to the state of the software project organization, the characteristics of the project and the corresponding algorithms, we discuss project performance forecasting and the mechanism of the process improvement model based on instances similar to completed projects. We also create the logical model of a process improvement decision support system based on performance evaluation, and propose a process improvement strategy of continuous improvement, in order to provide a useful attempt at improving software project performance.
This work is supported by the National Natural Science Foundation of China (No. 70571025) and the Shandong Province science and technology key project (No. 2006GG2301002).


2. The meaning of software project performance


The meaning of software project performance derives from enterprise performance; however, because the project has different stakeholders, the meaning differs among them. Linda Wallace (2004) proposed that software project performance should include product performance and process performance. Product performance indicates whether the final software product of project development is successful, measured by seven items: software reliability, ease of maintenance, the fit between system functions and user requirements, the degree of user satisfaction, response time, ease of use, and the integrated quality level. Process performance describes whether the development process itself is successful, estimated by two targets: time and budget. She deemed that if an organization could complete software of the desired quality within the prearranged time and budget, the level of project process performance was high. Kwan-Sik Na (2006) divided software project performance into subjective performance and objective performance. Subjective performance contains product performance and process performance: product performance describes the quality, function and other aspects of the software project completed by the organization, and belongs to the product's own characteristics; process performance describes the process of project development, mainly including the knowledge obtained by the organization, the cooperative ability of the team, and the control achieved over the project development process. Objective performance contains costs and time. Subjective performance reflects the individual viewpoint of the person who performs the evaluation; it can be influenced by that person's subjective judgment and discriminating ability, so it is difficult to standardize. Objective performance data, in contrast, are easy to collect and evaluate. Na's definition includes the whole content of Wallace's definition; additionally, he proposed that the development experience obtained by the organization, the knowledge obtained by its members, and the improvement of project development ability after completing a project all belong to process performance. Na's definition of software project performance is more accurate than Wallace's both in the subject and in the object of software project evaluation, and it describes more completely how the project development process influences the organization and project performance. We analyze performance from the perspective of the project stakeholders: the user pays attention to the usability of the software; the developer pays attention to the performance of the project, in other words, whether the project can be completed within the prearranged time and budget with a satisfactory product quality; the manager of the software organization pays more attention to control of progress, communication within the team, and the capability of the organization to acquire knowledge (James J. Jiang et al., 2004). Accordingly, the components of software project performance are described in Tab.1.
Tab.1 The components of software project performance

Performance style  Second-level style           Contents of performance                                        Measurable  Stakeholder
Subjective         Project product performance  Quality, function, usability and flexibility of the software   Cannot      User
                   Project process performance  Knowledge gained by the organization, cooperative ability of   Cannot      Software organization
                                                the team, control of the development process
Objective          -                            Time and costs                                                 Can         Both of the above

3. Performance-oriented software project process improvement model


A process is an aggregate of closely related activities completed by the organization using corresponding technologies and management methods. By organizational structure, software process improvement is divided into three levels: organizational process improvement, team process improvement and individual software process improvement; from the management perspective, it can also be divided into three parts: management process improvement, engineering process improvement and support process improvement. The organizational process includes the management process, the improvement process and the training process; the software engineering process includes the definition process, the development process, the maintenance process and so on; the support process includes the documentation process, the configuration management process, the quality assurance process, the verification process, the estimation process and so on. Organizational process improvement models mainly include IDEAL, PDCA, ICASE and other models; team process and individual software process improvement are typified by SEI's TSP and PSP. These improvement models provide a whole set of patterns and methods for the development of the organization, the team and the individual. Currently, process improvement in software organizations is generally model-driven: the organization first chooses a quality standard (such as CMMI, ISO15504 or ISO9126) and an improvement model (such as IDEAL, PDCA or ICASE), then implements process improvement by the steps of the model. Software organizations merely copy the improvement models and quality standards mechanically, believing they will benefit if the improvement follows the steps of the models. However, model-driven improvement only provides the method of process improvement; the software organization cannot deal with the details, such as: What are the concrete objectives of process improvement? Which links should be improved? What is the critical success factor of the process? How and when should it be improved? Many organizations devote a great deal of energy to process improvement with hardly any effect, and cannot obtain the benefits initially desired. Process improvement obviously improves project performance (Mullaly, 2006). The objective of process improvement is to standardize the software process so that the organization effectively produces software products satisfying user demands within limited time and cost, while the organization acquires knowledge and technology as well. Therefore, according to the above definition of project performance, we hold that the final objective of process improvement is to improve the holistic level of project performance, so process improvement should take project performance improvement as its core: performance drives software project process improvement. In this mode, process improvement proceeds as follows. First, the expected performance of the project is settled through communication between the users and the software organization. Second, the project performance is forecast using mathematical models, according to the current state of the software organization and the characteristics of the project itself (simply called the forecast performance). Third, the matching degree between the expected performance and the forecast performance is measured: if the forecast performance can reach the expected performance, the current state of the software organization satisfies the project's requests and it is unnecessary to spend excessive cost on process improvement; if it cannot, regular project process improvement strategies should be generated through the process improvement decision support system, and process implementation improved according to SPI theories. Fourth, after each phase has been completed, the decision support system evaluates the current phase and estimates the performance of the next phase, generating new improvement strategies so as to avoid improvement deviation.
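The four steps can be summarized as a loop. The sketch below is a hypothetical illustration: the performance scale, the forecast rule and the assumed effect of an improvement are invented for demonstration and are not the paper's actual models.

    def forecast_performance(org_level, complexity):
        # toy forecast: a stronger organization and a simpler project score higher
        return max(0.0, min(1.0, org_level - 0.3 * complexity))

    def improvement_loop(expected, org_level, complexity, phases):
        for phase in phases:
            forecast = forecast_performance(org_level, complexity)   # step 2
            if forecast >= expected:                                 # step 3: match test
                print(phase, ": forecast meets expectation; no extra SPI spending")
            else:
                print(phase, ": generate an SPI strategy via the DSS and improve")
                org_level += 0.1       # assumed effect of the improvement
            # step 4: the end-of-phase evaluation feeds the next phase's forecast

    improvement_loop(expected=0.8, org_level=0.6, complexity=0.5,
                     phases=["requirements", "design", "code", "test"])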
The performance-driven process improvement model we proposed in this paper is shown in Fig.1.
Fig.1 Process improvement model based on performance evaluation
[Diagram: the user and the software organization settle the project's desired performance; the software organization and the software project produce a forecast performance spanning objective performance, process performance and product performance; the project process improvement decision support system compares the desired and forecast performance and drives management, engineering and support process improvement]


4. Creating a process improvement decision support system based on project performance evaluation
Software project process improvement is a semi-structured decision problem. The time and costs that describe objective project performance are measurable and easily computed, so that part is well structured; subjective performance, which is obtained through certain mathematical models, a set of subjective indexes and expert evaluation, is an unstructured problem, and the generation of process improvement strategies is unstructured as well. Therefore, it is feasible to establish a performance-evaluation-oriented software project process improvement decision support system; its logical model is shown in Fig.2.
Fig.2 Logic model of the decision support system
[Diagram: the project's end user and the state of the software project organization determine the level of the project organization and the expected performance of the project; a performance forecast model, a performance assessment model and an ANN model obtain data from three index systems (the state of the software organization, the project's own characteristics, and project performance) and produce the performance evaluation of the project; an algorithm base (AB) with its management system (ABMS), an assessment model base of instance-inference models (MB) with its management system (MBMS), and a database of completed projects serving as an instance base (DB) with its management system (DBMS) supply and explain algorithms, models and matched instances through a human-computer dialogue unit]

4.1 Creating the database system
4.1.1 Creating the index systems for software organization state, project characteristics and project performance
Project performance is strongly related to the state of the software organization and to the characteristics of the project: a software organization at a high state level can develop software with high-level performance, and the difficulty and complexity of the software project are likewise critical factors for project performance. On the other hand, different organizations achieve different performance, and even organizations at the same level achieve different performance results when they develop different projects (S. Pennypacker, 2006); certainly, the more projects an organization has developed, the more experience it has, and the higher its performance level will be. Since CMMI is a model professionally used for software project process improvement, this paper takes the CMMI model as the quality forecast index system. The index system for the project's own characteristics should be identified and verified project by project, because the characteristics of different projects differ more or less; both the software organization and the users of the project should understand the characteristics of the project sufficiently, so that the project performance forecast is more reasonable. The software project performance forecast index system can be verified through literature study and expert interviews.
4.1.2 Creating the completed-projects database
The essence of software process improvement is to analyze and improve the current development project's process using the experience of completed projects. Previous studies paid little attention to collecting real-time data; most process improvement depended on subjective perceptual knowledge, and the resulting deviation was serious. We hold that, while a project is being developed, the real data on the current organization state, the project characteristics and the performance achieved should be analyzed, recorded and arranged in time in each phase of project development, especially the data collected while the project process is being improved, and a database of completed software projects should be created in a standard manner; this provides reference for the process improvement of new, similar software projects, so that the holistic performance of software projects is improved.
4.2 Creating the model-base system
The design of the model-base system mainly covers the organizational structure of the model base, the functions of the model-base management system, the language for creating the model base and so on. A concrete model can generally be characterized by three parts: input data, output data and the model's algorithms. The model base is the core component of a decision support system, and its quality directly affects the performance of the system.
4.2.1 Choosing the project performance forecast model
Accurate forecasting of software project performance is very important for the organization to make accurate time and cost plans and to improve the process (L. Mitchell, Victoria, 2006). Forecast performance involves the software organization, the project's own characteristics and other factors, so many methods can be used to evaluate project performance, such as the Analytic Hierarchy Process (AHP), artificial neural networks, fuzzy mathematics, Data Envelopment Analysis (DEA), Gray Correlation Degree Analysis (GCDM), genetic algorithms, and combinations of one or more of these methods. Because each method has merits and shortcomings, the results of project performance evaluation are not completely consistent. Decision-makers choose models depending on the characteristics of the software project; sometimes they choose several models, so that the results reinforce one another and correct the faults of each individual evaluation result.
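As a hypothetical illustration of combining several evaluation models so that they reinforce and correct one another, the sketch below averages the scores of three toy models with decision-maker-chosen weights; the models, features and weights are assumptions for demonstration only, not the paper's actual forecasting methods.

    def model_a(f): return 0.5 * f["org_level"] + 0.2      # stand-in for an AHP-style score
    def model_b(f): return 0.7 - 0.3 * f["complexity"]     # stand-in for a regression model
    def model_c(f): return 0.6                             # stand-in for a case-based prior

    def combined_forecast(features, models, weights):
        # weighted combination: each model corrects the others' individual faults
        return sum(w * m(features) for m, w in zip(models, weights)) / sum(weights)

    features = {"org_level": 0.8, "complexity": 0.4}
    print(round(combined_forecast(features, [model_a, model_b, model_c], [2, 1, 1]), 3))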
4.2.2 Searching for instances similar to completed projects
Models for retrieving similar instances generally fall into two types. One is the nearest-neighbor method with the feature weights ascertained by expert evaluation, the probability analysis method, or the Analytic Hierarchy Process; this type introduces experts' subjective suggestions into the assessment, but its main problem is that it depends excessively on expert knowledge, so different expert groups produce different results. The other type trains the feature weights by artificial intelligence, optimizing them directly on the data of completed projects so as to obtain the weights automatically. All of the above models can be included in the model base of the decision support system. According to the comparison between the project's forecast performance and its expected performance, the organization state and the project characteristics, we search the database of completed software projects for instances similar to the current project, using a single model or an integration of several models; we also give full consideration to the differing results of the different models, so that existing experience is used accurately and regular process improvement strategies are generated. At the same time, the real state of the current project's process improvement and the accuracy of the evaluation results given by the different models should be recorded correctly in the completed-projects database, so that an evaluation of each model is gradually formed; this is the foundation for perfecting the system and putting it into wide use.

5. Conclusion
After analyzing the relationship between software project performance and process improvement, we have proposed a new performance-based process improvement framework and decision support system, and have provided concrete steps for building and executing this model. Compared with previous, passive model-driven process improvement methods, active performance-driven process improvement focuses on the central role of performance in organizational process improvement and conveniently achieves the agreement and participation of the stakeholders. The strategy based on multi-model performance evaluation of completed projects and process improvement lets the different models verify and reinforce one another, and presents the holistic level of project performance more completely and objectively; this process improvement method accords better with the real situation of software project development, avoiding the shortcoming that a single model cannot present the whole picture. Continuous performance evaluation and the per-phase strategy solve the problem of the dynamic changes of the organization and the project, and finally achieve the objective of improving project performance and the rate of successful software development.
References
[1] Ibbs C W, Kwak Y H. Assessing project management maturity. Project Management Journal, 2000, 31(1): 32-43
[2] Jiang J J, Klein G, Hwang H-G, Huang J, Hung S-Y. An exploration of the relationship between software development process maturity and project performance. Information & Management, 2004(41): 279-288
[3] Na K-S, Simpson J T, Li X, Singh T, Kim K-Y. Software development risk and project performance measurement: evidence in Korea. The Journal of Systems and Software, 2006(2): 1-10
[4] Mitchell V L. Knowledge integration and information technology project performance. MIS Quarterly, 2006(4): 919-939
[5] Wallace L, Keil M, Rai A. Understanding software project risk: a cluster analysis. Information & Management, 2004(42): 115-125
[6] Paulk M C, Curtis B. Capability Maturity Model Version 1.1. IEEE, 1989(4): 118-128
[7] Mullaly M. Longitudinal analysis of project management maturity. Project Management Journal, 2006(3): 62-73
[8] Grant K P, Pennypacker J S. Project management maturity: an assessment of project management capabilities among and between selected industries. IEEE Transactions on Engineering Management, 2006(1): 59-68
[9] Fang D-Y, Li M-Q. Economic analysis of information system project overseeing mechanism. Journal of Management Engineering, 2003(2): 98-102
[10] Wang Q, Li M-S, Liu X. An initiative evaluation model supporting software process control and improvement. Journal of Software, 2005, 16(3): 407-419


Research on the Construction of a Collaborative E-Business System


Yang Limao, Tian Liang
School of Business, Hubei University, Wuhan 430062, P.R. China

Abstract: The concepts of collaboration and the collaborative effect are introduced at the beginning of this paper, after which we give the meaning of collaborative E-Business and present the functions it should possess. On this basis, we point out through analysis the deficiencies of the BPEL4WS, ebXML and RosettaNet e-business standards, which can realize collaboration among corporations to a certain degree. Finally, we put forward a collaborative E-Business model intended to realize inter-enterprise collaboration.
Key Words: Collaborative Effect, Intangible Resource, Collaborative E-Business, Inter-Enterprise Collaboration

1. Introduction
In the current context of global economic and market integration, competition among enterprises is gradually being replaced by competition among supply chains whose cores are the main enterprise groups. The perspective of enterprise management has gradually expanded from the optimization of individual companies to the whole supply chain: managers not only need to consider the optimization and synergy of internal resources, but must also think about how to meet customer demand efficiently at low cost from the point of view of the whole supply chain, which raises the problem of collaborative management spanning companies[1]. Therefore, the realization of collaborative management spanning enterprises will be the development goal of e-business in the next stage.

2. Collaboration and Collaborative Effect


H. Igor Ansoff was the first to put forward the concept of synergy. In Corporate Strategy he divides strategy into four components: product-market scope, development direction, competitive advantage and synergy. He defines synergy as the potential opportunity of obtaining tangible and intangible benefits, together with the close relationship between that opportunity and the abilities of the company. He believes the value of the overall company is greater than the sum of the values of its parts because of the synergy mode: one part can be attributed to the benefits of economies of scale, and the other results from sharing intangible assets among the parts, such as technical expertise, corporate image and so on. Hiroyuki Itami, a Japanese strategic management expert, gave a strict definition of synergy. He divides the resources of the enterprise into tangible and intangible resources: tangible resources are assets such as production equipment and factories; intangible resources are intangible assets such as trademarks, technical expertise or a unique enterprise culture. He notes that synergy includes both a complementary effect and a collaborative effect. The complementary effect takes place when tangible resources are fully exploited by extending the range of their application; the collaborative effect, on the other hand, results from sharing intangible resources with partner companies, which enhances the efficiency of utilizing the tangible resources. His definition of synergy covers only the intangible resources, for they are the strategic resources of the company that cannot be copied but can bring lasting competitive advantages[2].

3. Collaborative E-Business: The Technological Force for Realizing Collaboration among Enterprises


Nowadays, many Internet- or Intranet-based information technology applications have been employed in the business processes of the enterprises along the supply chain, such as Enterprise Resource Planning systems, Customer Relationship Management systems and Supply Chain Management systems. Indeed, these applications raise the efficiency of processing business and bring about an integration effect for the enterprises on the supply chain to some degree, but they are, after all, just software that enhances the efficiency of handling business among these enterprises by computerizing every link of their business cooperation[3]. By fully leveraging the tangible resources of the enterprise, a complementary effect can be produced; accordingly, we could call them complementary e-business. These solutions, based simply on computerization, cannot bring enduring competitive advantages for the company group, as it is easy for competitors to acquire this software, and the operational knowledge about it, from the marketplace.

3.1 The Definition of Collaborative E-Business


Collaboration is the fundamental driving force for a company group to create sustainable competitive advantage, but collaboration is difficult to reach, because the internal structure of the traditional enterprise cannot meet the need for intangible resources to flow fluently and be shared conveniently among the partners. To overcome the obstacles to achieving collaboration across enterprises, we should change the organizational structure, business processes and other internal architecture of the traditional enterprise that tend to obstruct intangible resources flowing among them. In the era of information technology, we can use IT to build new types of organizations that favor information sharing and knowledge creation among enterprises. This will activate the intangible resources of the integrated enterprises and lead the accumulated knowledge of the enterprises to flow not only among the enterprises but also among their business units and departments. The efficiency of the individual tangible resources will then be enhanced, and a collaborative effect will appear as the enterprises finally integrate. Certainly, all this rests on having fully exploited the tangible resources. The ultimate goal is to enable the enterprise group as a whole to achieve sustained competitive advantage, and this is Collaborative E-Business. Collaborative E-Business is built entirely on e-commerce, and it is the goal of companies' e-business development.

3.2 The Functions of the Collaborative E-Business


(1) Dynamically integrating the business process. Dynamically integrating the business processes among companies is the basis of realizing synergy across enterprises. Business processes are the mechanisms that bring the efficacy of the functional units of the enterprise to bear on the supply chain. Integrating the business processes of each company through information technology will enhance the execution efficiency and the market sensitivity of the whole supply chain, while substantially reducing the cost of transactions among enterprises. Moreover, changes in the external and internal environment of the companies will lead to changes in organizational structure, human resources and work-flow patterns; Collaborative E-Business can meet the demands of these changes without re-designing the system, for it has good scalability and powerful user-defined functions.
(2) Resource synergy.
Sharing knowledge and information. Collaborative E-Business can integrate the knowledge and information owned by the enterprises, making it convenient for users to get the knowledge and information they desire through information portals; using an information portal, Collaborative E-Business can customize personalized information and services according to the demands of each user. Through Collaborative E-Business platforms, the employees of the company can create, accumulate and share knowledge and information, and the customers, suppliers and partners of the company can achieve the goals of creating and sharing knowledge and information as well.
Customer synergy. The customer relationship management of Collaborative E-Business not only realizes one-way customer management but also allows customers to participate, achieving comprehensive tracking and customer interaction. With the benefit of Collaborative E-Business, the enterprise can get customer information and requirements immediately; customers can also update their relevant information, learn about the products and services they are interested in, and even complete purchases, service requests and other business by working with the enterprise.
Partner synergy. A synergy relationship is built up between the company and its partners by Collaborative E-Business. Through it, partners can receive the demands of customers and the reaction of the market in time; at the same time, sharing information and knowledge allows the company to find the best methods to purchase, produce and market. All this ensures the high efficiency and lower cost of the supply chain.
(3) The integration and synergy of strategy. The strategic management perspective of an enterprise on the supply chain should not be limited to the enterprise itself, but should take the point of view of the strategic management of the whole supply chain. Common commercial interests are the basis for cooperation between enterprises, so a win-win outcome is the most fundamental protection ensuring the realization of collaboration between the partners on the supply chain. Therefore, the strategy synergy and integration mechanism of Collaborative E-Business is the basis for sustaining the relationships between the partner companies on the supply chain. For the sake of long-term interests, the relationships of the partner companies must be changed and coordinated strategically when the enterprises on the supply chain face changes in the internal and external environment, and this will affect the alteration of business process integration and resource interaction. When the strategy of a company changes, the pattern of business process integration may be adjusted, and the content and extent of the resource exchanges corresponding to the business processes must be altered as well[4]. For these reasons, Collaborative E-Business should not only play the role of realizing strategy synergy and integration, but also adjust the integration of business processes and the interaction of resources among the companies operating under the Collaborative E-Business framework according to the alteration of the enterprises' strategies.

4. Studies on BPEL4WS, ebXML and RosettaNet


Since the Gartner Group proposed the definition of Collaborative E-Business in 2000, companies and the IT sector have paid great attention to it. BPEL4WS (Business Process Execution Language for Web Services), ebXML (Electronic Business using eXtensible Markup Language) and RosettaNet are three e-business standards currently in use, and they represent functional parts of Collaborative E-Business. These three collaborative e-business specifications currently attract the most attention, but they exert different effects on the collaboration of corporations: because of the distinct objectives they pursue, they work on different kinds of information. Collaborative e-business realizes collaboration among cooperating enterprises by affecting the information flow among them, so the structure of the information flow among these enterprises has an essential effect on the results of their collaboration. From Figure 1 we can see that the information flow among enterprises can be divided into five layers (Langenwalter, 2000).
Figure 1. The structure of the information flow
[Five layers, from top to bottom:
- Information about strategy integration (such as information about the market)
- Information about strategy harmony (such as product development information)
- Information about commerce collaboration (such as distribution plan information)
- Information about long-term commerce (such as historical commerce data)
- Fundamental information about short-term commerce (such as order information)]

We know that knowledge can be divided into generic knowledge and specific knowledge (Jensen and Meckling, 1988), and specific knowledge is the basis of a corporation's competitive advantage, for it is impractical for competitors to copy due to the high cost of transference (Wernerfelt, 1984; Barney, 1991). From this structure we can see that the higher the layer an item of information belongs to, the closer its content is to the specific knowledge of the enterprise. For instance, order information can easily be handled by employees who merely need to be familiar with the business process; but information about the distribution plan is connected to the specific knowledge of the manager, and similarly, information about new product development has a close relationship with the specific knowledge of the developers. Therefore, when e-business standards work on different layers of the information structure, they exert different effects on collaboration and produce different competitive advantages.
(1) Although BPEL4WS can in theory transport information of any layer, it mainly achieves the communication and integration, from the business process point of view, of the independent systems working inside the enterprise. It can actively deal with information from different systems and execute the relevant operations, but it does nothing automatically for information from layers higher than short-term commerce. Because BPEL4WS has not yet provided a dictionary specification of business terms for cooperation among corporations, it can only support collaboration at the layer of fundamental short-term commerce information, by integrating the information systems of these enterprises.
(2) ebXML provides a particular business process specification for enterprise cooperation in its Business Process Specification Schema (BPSS), and the business processes of enterprises working under the ebXML framework must meet the requirements of BPSS. In its Core Components, ebXML defines neutral commercial terms, which make it possible for firms to do e-business easily across different industries. From the functions of ebXML we can see that it realizes the transport of fundamental short-term commerce information and supports one-way or two-way transmission of long-term commerce information, but it can neither handle data about commerce collaboration nor support the formation and transfer of strategic decisions. So it is important to realize that the collaboration of enterprises using ebXML stops below the layer of commerce collaboration.
(3) Compared with BPEL4WS and ebXML, the integration of enterprises operating under the RosettaNet framework is located at a higher level of collaboration. The PIPs (Partner Interface Processes) of RosettaNet not only allow integrating business such as order management and releasing new product information, but also support and integrate the ERP systems of each corporation in order to meet the need for integration and collaboration across the organizations throughout the supply chain; in particular, they enable discerning and exchanging, among the business partners and from their ERP systems, all kinds of forecast information, production plans, marketing plans and inventory information. RosettaNet is even able to support Vendor Managed Inventory, a kind of high-level integration for the enterprises on the supply chain. In this situation, the PIPs and the RosettaNet Dictionary pay attention not only to fundamental transaction information but also to the transport of real-time inventory information and all kinds of planning information. For this reason, compared with BPEL4WS and ebXML, which can only exert effects on the information layers below commerce collaboration, RosettaNet is able to actively support transferring information about decisions and strategy collaboration. However, like BPEL4WS and ebXML, the RosettaNet standard does not reach the deeper levels of strategy and process.
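Our reading of this comparison can be summarized programmatically. In the sketch below the layer numbers follow Figure 1 counted from the bottom, and the support sets encode the analysis above (RosettaNet's partial reach into planning and strategy information is noted in a comment but not encoded); this is an illustration, not part of any of the three standards.

    LAYERS = {1: "fundamental short-term commerce", 2: "long-term commerce",
              3: "commerce collaboration", 4: "strategy harmony", 5: "strategy integration"}

    SUPPORT = {"BPEL4WS": {1},          # integrates enterprise systems at the order level
               "ebXML": {1, 2},         # adds one- or two-way long-term commerce data
               "RosettaNet": {1, 2, 3}} # also carries plans, forecasts and inventory data,
                                        # partially reaching decision and strategy information

    def standards_for(layer):
        # which of the three standards support a given information layer
        return [s for s, layers in SUPPORT.items() if layer in layers]

    print(standards_for(3))             # ['RosettaNet']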

5. Constructing the Collaborative E-Business System Model


From the perspective of the functions of Collaborative E-Business, BPEL4WS, ebXML and RosettaNet have realized dynamic integration of cross-enterprise business processes, and have even achieved resource interaction to some degree, but these standards alone are not enough to generate a collaborative effect and bring sustainable competitive advantage to the whole supply chain. We therefore propose a model of Collaborative E-Business based on BPEL4WS, ebXML and RosettaNet. This model is divided into five function layers, as shown in Figure 2; the function modules included in each layer and the relationships between the layers can be seen in Figure 3.
(1) The infrastructure layer of Collaborative E-Business supplies a secure, stable and efficient physical network environment, together with storage space for data, knowledge and information, to the applications of the higher layers. This layer is mainly made up of physical equipment such as the network, servers, databases and so on; it is also the information platform of the enterprise. The implementation of Collaborative E-Business should be conducted in phases: the first step is to integrate the internal resources of the enterprise and the company group by information technology, after which the framework of Collaborative E-Business can be built on this information platform.
Figure 2. The model of function layers of the Collaborative E-Business System
[Five layers, from top to bottom: the layer for plans and control; the layer for operation; the layer for service; the layer for business process integration; the layer for infrastructure]

(2) The business process integration layer of Collaborative E-Business is the basis on which the companies in the supply chain deploy their core abilities. Business processes are the mechanisms that bring the efficacy of the functional units of the enterprise to bear on the supply chain, so integration of internal business processes and of the business processes between companies is the foundation of cross-enterprise synergy. The layer for business process integration can integrate the business processes efficiently and dynamically. BPEL4WS, ebXML and RosettaNet all supply business process integration mechanisms, so these standards can be used to support business process integration directly.
Figure 3. The model of the Collaborative E-Business System
[Diagram, from top to bottom: strategy integration and synergy; the operation modules for synergy design, synergy manufacture and synergy sale; the service modules for sharing information and knowledge, partner synergy and customer synergy; the business process integration standards BPEL4WS, ebXML and RosettaNet; and the infrastructure of network, server and database]
(3) The service layer of Collaborative E-Business realizes the interaction and synergy of resources between companies. In order to obtain the collaborative effect between enterprises, we should activate the intangible resources of the integrated enterprises and lead the accumulated knowledge to flow not only among the enterprises but also among their business units and departments; the efficiency of the individual tangible resources will then be enhanced, and the collaborative effect will appear as the enterprises finally integrate. Through the service layer we can bring about synergy functions such as sharing knowledge and information between companies, customer synergy, partner synergy and so on.
(4) The operation layer of Collaborative E-Business reflects the roles and the synergy links of the companies on the supply chain. Synergy design and synergy manufacture can make the design of the product meet the requests of the customers; they can take place in the business between producers and vendors, or among suppliers, producers and vendors. In order to remove the information obstacle between producers and customers, synergy sale emphasizes the sharing of order and price information between producers and channel members. At the same time, the synergy relationship differs according to the different roles of the companies; for instance, stock companies will pay more attention to financial synergy with their partners.
(5) The plans and control layer of Collaborative E-Business plays the role of strategy coordination and synergy over the layers for operation, service and business process integration. Strategy embodies the expectation of the long-term profit of the company, and the formulation of short- and long-term plans is based on the strategy, so a change of strategy must trigger the adjustment of the day-to-day actions of the company, as well as the alteration of the fundamental services that support those actions. The formulation of the strategy should suit not only the interests of the company itself but also the interests of the other companies in the supply chain. In order to achieve win-win outcomes between companies, the strategy of one company should be in keeping with the needs of the whole supply chain; otherwise the synergy relationship will be broken. So this layer is the coordination and control layer for the whole framework of Collaborative E-Business. The formulation of the strategy relies on information about the environment, the company itself and its partners; therefore, the layers for operation, service and business process integration supply enough information for the managers of the company to form and alter the strategy.
All the companies in the supply chain work within the framework of Collaborative E-Business. First, a company works out its strategy in accordance with the strategy of the supply chain, relying on the strategy synergy module; then the company chooses its synergy partners according to the strategy and builds synergy relationships with them through the layers for operation, service and business process integration; finally, each company acts according to its role under the framework of Collaborative E-Business. The management of the companies may alter their strategies using the information from the layers for operation, service and business process integration. In this way, each company working under the framework of Collaborative E-Business can break the obstacles that block the sharing and flow of intangible resources between companies, activate the intangible resources of the company group and enhance the efficiency of utilization of the tangible resources, allowing the collaborative effect to be realized among the participating companies.

6. Conclusion
To acquire lasting competitive advantages in an environment of intense competition, corporations have looked to integrating their resources with their partners in order to obtain better performance from the division of labor based on specialization. In this paper, we have pointed out that companies can break the obstacles that block the transport and sharing of intangible resources between companies through Collaborative E-Business. Although there are several collaborative e-business standards, such as BPEL4WS, ebXML and RosettaNet, they can only support collaboration based on lower-level information; we must realize the higher-level synergy and integration of strategy and resources if we want to obtain a collaborative effect. Therefore, we have proposed a Collaborative E-Business model based on BPEL4WS, ebXML and RosettaNet to bring about synergy between partner companies in the supply chain. We have indicated that enterprises can realize the transport and sharing of intangible resources among companies through collaborative e-business. However, some problems remain worth studying: how to guarantee that collaborative partners neither reveal these intangible resources and the information about them to competitors nor use these intangible resources to compete with the corporation itself. At the same time, the relationships among the firms become closer as the level of integration among them rises, so an enterprise will face a higher cost when it separates itself from the collaborative relationship because of environmental change. Therefore, companies need to consider carefully how to balance the cost of breaking away from the relationship against the opportunity cost of maintaining the collaboration, and how to resolve the conflict between dynamism and collaboration under the framework of collaborative e-business.


References
[1] Barney J B. Firm resources and sustainable competitive advantage. Journal of Management, 1991, 17(1): 99-120
[2] Itami H. Mobilizing Invisible Assets. Cambridge, MA: Harvard University Press, 1987: 62-67
[3] Jensen M C. Specific and general knowledge and organizational structure. Journal of Applied Corporate Finance, 1995, 7(2): 38-49
[4] Langenwalter G A. Enterprise Resource Planning and Beyond: Integrating Your Entire Organization. St. Lucie Press, 2000: 192



Index of First Authors

B
Bu Xiangzhi 130

C
Chang Yaping 487
Chen Tao 556
Cheng Du 481
Chien Yung-Tsai 505
Cui Nanfang 145

D
Da Qingli 152
Deng Wei 333
Dong Jichang 3
Dong Ming 159

F
Fang Deying 493

G
Gou Qinglong 165
Guo Chonghui 8

H
Han Guowen 340
Hao Chunlu 499
He Bo 59
He Liang 345
His-Hsien Yu 51
Hu Jianwen 353

J
Ji Shouwen 175
Jia Chuanliang 359
Jia Xiaona 365

K
Kenji Kimura 90

L
Lei Yi 512
Li Dengfeng 372
Li Dong 180
Li Kunpeng 191
Li Mingyu 195
Li Yongjun 378
Li Yuanhui 384
Lihong Zhou 580
Lin Yong 200
Liu Chengming 392
Liu Liping 212
Liu Nan 19
Liu Peide 517
Liu Shan 590
Liu Shulin 523
Liu Shuren 221
Liu Yang 215
Liu Zhixin 13
Lu Xinyuan 399

M
Meng Jie 431
Minghe Sun 76
Ming-Jong Yao 302

N
Nallan 117

Q
Qi Ershi 71
Qiu Jie 138

S
Sheng Yongxiang 405
Song Rui 226
Song Yexin 410
Sun Hao 425
Sun Shiyan 416

W
W.K. Leung 122
Wang Dazhi 43
Wang Jing 232
Wang Nianxin 539
Wang Yongchun 437
Wang Zhiying 546
Wu Chunxu 46
Wu F. 562
Wu Jianghua 239
Wu Jie 443
Wu Qiang 448
Wu Shaofei 552
Wu Weiwei 37

X
Xiao Renbin 82
Xie Jiaping 245
Xu Chuanyong 260
Xu Xianhao 254
Xu Zhiduan 269

Y
Yan Lei 276
Yang Hongbin 569
Yang Jun 65
Yang Limao 605
Yan-Kuen Wu 107
Yongjiang Shi 316
Yu Benhai 599
Yuan Guo-qiang 29
Yunfeng Shi 532

Z
Zha Yong 325
Zhang Min 101
Zhang Tao 281
Zhao Guohao 473
Zhao Jinshi 574
Zhao Yong 454
Zhao Zhenfeng 289
Zhao Zhenwu 459
Zhao Zhiyan 186
Zheng Aihua 296
Zhou Gengui 308
Zhou Xianzhong 466
