
Proceeding of Industrial Engineering and Service Science, 2011, September 20-21

Copyright 2011 IESS.


Reliability, Maintenance and Its Management: The
Current State of Play
Kym Fraser

School of Advanced Manufacturing and Mechanical Engineering, University of South Australia, Adelaide, Australia
kym.fraser@unisa.edu.au
ABSTRACT
Maintenance and its management is now of strategic importance for most organisations around the world. Problems
which surround the current maintenance literature include the identification of maintenance management models and
use of these models in real world applications. It has been argued that the gap between theory and practice is wider in the maintenance field than in any other research discipline. In this study 37 different maintenance management models are identified and, from these, three were found to clearly dominate the literature: Total Productive Maintenance (TPM), Condition-Based Maintenance (CBM), and Reliability-Centred Maintenance (RCM). A comprehensive review
of these three models was undertaken to establish links to empirical real world applications, determining model
popularity and details of study methods, sector, industries, author and country. Further investigation of three leading
journals in maintenance found that 401 published articles on these popular models produced 48 articles with links to
practice, giving an empirical evidence rate of 12% when compared to the overall number of papers published. While
this paper, importantly, examines links between maintenance theory and practice, a clear picture emerges on the lack of
empirical research undertaken by academics in the area of maintenance and its management.

Keywords: Maintenance, management models, literature review, empirical evidence

1. Introduction
According to a US National Research Council Report in 1990, one of the research priorities of US manufacturing is
equipment reliability and maintainability [1]. Historically, maintenance activities have been regarded as a necessary evil by the various management functions in an organisation [2,3]. However, over the past 15 to 20 years, this attitude has increasingly been replaced by one which recognises maintenance as a strategic issue in the organisation. In 2006 Carnero [4] summed up the situation by stating that "the setting up of a predictive maintenance programme is a strategic decision that until now has lacked analysis of questions related to its setting up, management and control" (p.945). The role of maintenance in maintaining and improving the availability of plant and equipment, product quality, safety requirements and plant cost-effectiveness means that maintenance constitutes a significant part of the operating budget of manufacturing firms [5].
According to [6], between 15 and 40 percent (average 28 percent) of the total production cost is attributed to maintenance activity in the factory. Ten years later, [7] goes further by suggesting that maintenance department costs represent from 15 to 70 percent of total production costs. [8] explained that, next to energy costs, maintenance spending can be the largest part of the operational budget. [9] discussed how the cost of maintenance for a selected group of companies increased from US$200 billion in 1979 to US$600 billion in 1989, a three-fold increase in just 10 years. With the advent of more automation, robotics and computer-aided devices, maintenance costs are likely to be even higher in the future [10].
Therefore, the effective integration of the maintenance function with engineering and other manufacturing functions in the organisation can help to save huge amounts of time, money and other resources in dealing with reliability, availability, maintainability and performance issues [11]. For most organisations it is now imperative that they take opportunities via maintenance management programs to optimise their productivity while maximising overall equipment effectiveness. With increasing focus on just-in-time, quality and lean manufacturing, the reliability and availability of plant are crucial. Poor machine performance, downtime, and ineffective plant maintenance lead to the loss of production, loss of market opportunities, increased costs and decreasing profit [12]. This has provided the impetus for many organisations worldwide to seek and adopt effective and efficient maintenance strategies over the traditional firefighting, reactive maintenance approaches [13,14].
The problem which currently exists is, firstly, the limited number of maintenance papers providing reviews of maintenance management strategies and, secondly, that papers exploring and combining the various maintenance strategies and their links to real world applications (empirical evidence) are non-existent. It would seem that the gap between theory and practice in regards to maintenance is greater than in other research fields. In 1996, Dekker [8] argued that mathematical analysis and techniques, rather than solutions to real problems, have been central in many papers on maintenance models. He goes on to say "It is astonishing how little attention is paid either to make results worthwhile or understandable to practitioners, or to justify models on real problems" (p.235). In 1998 Rausand [15] supported Dekker by claiming "there is more isolation between practitioners of maintenance and the researchers than in any other professional activity" (p.130). In 2002, [16] stated "Since the late seventies, examples of models assessing corrective and preventive maintenance policies over an equipment life cycle exist in the literature. However, there are not too many contributions regarding real implementation of these models in industry" (p.367). In a recent discussion on the problems and challenges of reliability engineering, [17] states that the maintenance literature is strongly biased towards new computational developments.
Therefore, the key objective of this paper is to provide links between literature and practice by, firstly, reviewing the maintenance literature and determining the various maintenance management models/strategies discussed within it. Secondly, while the number of maintenance related papers in the literature is high (numbering in the thousands), only papers providing empirical evidence will be further analysed to determine the maintenance management strategies popular in practice today, identifying the country, sector and industry in which these models are being employed around the world. Articles of a purely mathematical nature, theoretically derived, or of a conceptual basis were not analysed. The outcomes will provide practitioners and researchers with practical insight into a business process which now holds significant strategic implications for nearly every organisation.
2. Identification of Maintenance Management Models in Literature
This section will establish the various maintenance management models found within the literature. The review in-
volved all peer reviewed journals and textbooks available on the University of South Australia library databases. This
source included well respected databases such as Business Source Complete (EbscoHost), Emerald fulltext, ScienceDi-
rect, Wiley InterScience, SAGE full-text collection and Compendex. These databases represent the major publishers in
the maintenance field such as Elsevier, Emerald and Taylor & Francis. To keep findings as contemporary as possible
the search for empirical evidence linking popular maintenance models to practice was restricted to articles published
within the last 15 years (1995-2009).
Table 1. Model Description and Categorisation
Model | Main Focus | Benefits/Requirements | Practical application (Holistic/Singular) | Literature evidence (Empirical/Theoretical)
Advanced terotechnological model | Moves focus from life-cycle cost (LCC) to life-cycle profit (LCP) | Integrates TQM/terotechnology/LCP. Requires integrated IT system | Holistic | Theoretical
Age-based Maintenance | An extension of RCM | Allows better management of items that fail due to wear and/or related to age | Holistic | Theoretical
Availability-based Maintenance | An extension to both RCM and TPM | Needs to be integrated with manufacturing resource planning (MRP) system | Holistic | Theoretical
Basic terotechnology model | Focus on maintaining systems life cycle | Establishes information feedbacks to maintain systems life cycle | Holistic | Theoretical
Breakdown Maintenance | Action is taken once the item/equipment has failed | Applied quickly with limited resources and information. A high risk and commercially expensive strategy | Holistic or singular | Theoretical
Campaign Maintenance | Similar to shutdown maintenance. Used when non-maintenance restraints take priority, e.g. military operations | Replaces regular maintenance program but completion time-frames are limited | Holistic or singular | Empirical
Computerised Maintenance Management System | Provides capabilities to store, retrieve and analyse information | Deals with computer-aided integration of maintenance in an enterprise. Used in conjunction with a maintenance management system, e.g. TPM | Holistic | Empirical and theoretical
Condition-based Maintenance (CBM) | Based on the monitoring and detection of equipment to determine vital warnings of impending failure | CBM allows a reliable, accurate assessment of service life while reducing reliance on maintenance personnel | Holistic | Empirical and theoretical
Condition Monitoring (CM) | Similar to CBM, where condition monitoring of selected equipment is undertaken to detect potential failures | CM is commonly applied to individually selected equipment. Should be integrated with other maintenance programs | Singular | Empirical and theoretical
Corrective Maintenance | Unplanned activities undertaken to return the equipment to its operating condition | Requires management processes to identify defects and eliminate root causes | Holistic | Empirical
Effectiveness-centred Maintenance | Based on doing the right things instead of doing things right | Encompasses the concepts of TQMain and features of TPM and RCM to provide a more effective maintenance system | Holistic | Empirical
E-maintenance | Integrates existing telemaintenance principles with Web services and modern e-collaboration principles | Used in conjunction with CBM. Ideal for military and commercial aircraft operators to reduce aircraft downtime | Holistic | Theoretical
Equipment Asset Management | An optimum combination of best practice, technology, organisation, and administration | Maximises lifetime value from process, production, and manufacturing equipment | Holistic | Theoretical
Kelly's philosophy | Control of reliability through the physical control of engineering systems | Develops links between quality and maintenance. Mixture of elements from TPM, RCM and terotechnology | Holistic | Theoretical
Maintenance Management Metric | Maintenance management is the allocation of value added resources | Systematically improves overall equipment effectiveness, while optimizing the cost per unit of production | Holistic | Theoretical
Operating maintenance training and administration | An organisation-wide approach which considers all aspects of the supporting infrastructure | Operations, maintenance, training, and administration are integral parts of the whole system | Holistic | Theoretical
Outsourcing | Transfer to outsiders with the goal of getting higher quality maintenance at faster, safer and lower costs | While the firm can concentrate on core competencies, the maintenance service contract still requires management | Holistic or selected areas/items | Theoretical
Planned Maintenance | Maintenance functions performed on a pre-planned basis | Firms able to determine optimal intervals for various machines and failure types | Holistic or singular | Empirical
Preactive Maintenance | Defines equipment maintenance requirements before the process, line or individual machine commences operation or before major expansion | Provides early evaluation of maintenance costs and man hours | Holistic or singular | Theoretical
Predictive Condition Monitoring | The application of multiple technologies to monitor the condition of machines for pending failure | Technology is combined with various analysis techniques through computerised applications | Singular | Theoretical
Predictive Maintenance | Consists in deciding whether or not to maintain a system according to its state | Recommended to be used in conjunction with traditional periodic preventive maintenance programs | Holistic or singular | Theoretical
Pre-planned Maintenance (PPM) | Divides the working calendar into discrete separate elements and assigns PPM jobs to the various elements | Able to determine optimal intervals for various machines and failure types. PPM can attract criticism for over-servicing | Holistic or singular | Empirical
Preventive Maintenance (PM) | A series of tasks performed at a frequency dictated by time, amount of production and machine condition | PM can either extend the life of an asset or detect that an asset has critical wear and is going to fail or break down | Holistic or singular | Empirical
Proactive Maintenance | Advanced maintenance approach that focuses on reducing total maintenance required and maximizing life of machinery | Individual maintenance activities are re-engineered to enable preventive/predictive maintenance practices | Holistic | Theoretical
Productive Reliability | Based on TPM with the purpose of reducing costs and improving capacity through continuous maintenance improvement | Needs to utilise failure mode and effect analysis techniques | Holistic | Theoretical
Profit Center Maintenance | The maintenance of machinery, equipment or fixed assets is considered a profit activity | Assets are optimised for maximum value rather than least cost | Holistic | Theoretical
Reliability-centred Maintenance (RCM) | An asset maintenance management system oriented towards maintenance-critical industries such as airlines and power plants | Analyses each physical asset in its operating context and assesses what must be done to ensure it fulfils its function | Holistic | Empirical and theoretical
Risk Based Maintenance | Focus is on the dual objectives of minimisation of hazards caused by unexpected failure of equipment and a cost effective strategy | While minimising the probability of system failure, risk analysis also evaluates other consequences such as safety, economic and environmental | Holistic | Theoretical
Run-to-destruction | Reactive approach. Equipment is used normally until it fails, then discarded or replaced | Normally confined to carefully selected equipment, with the consequences of failure known and accepted in advance | Singular | Theoretical
Run-to-failure | Reactive approach. Equipment is used normally until it fails, then discarded or replaced | Requires very little ongoing and routine maintenance. Suitable for small, non-critical, low cost equipment | Singular | Theoretical
Scheduled Maintenance | Periodic replacement of parts based on their age | Firms able to determine optimal timing of maintenance | Holistic or singular | Theoretical
Strategic Maintenance Management | An overall business perspective which further builds on TPM and RCM | Integrates technical, commercial and operational aspects of the business with the maintenance program | Holistic | Empirical and theoretical
Time-based Maintenance | Maintenance activity based on a time period | Economically beneficial when dispersion of the item lifetime is small | Holistic or singular | Theoretical
The Eindhoven University of Technology (EUT) | Developed to fill gaps in terotechnology models | Lists 14 sub-functions of maintenance but no links to IT system | Holistic | Theoretical
Total Productive Maintenance (TPM) | An asset maintenance methodology that combines the effort of plant operators, safety, energy, materials, and quality with the planning and maintenance efforts | Designed to be integrated with JIT, TQM, employee involvement and environmental/organisational factors | Holistic | Empirical and theoretical
Total Quality Maintenance (TQMain) | Converts a singular platform (CM) into a holistic model | Recommends production schedules should incorporate time for maintenance | Holistic | Theoretical
Value-driven Plan Maintenance | Enhancement of RCM with company, plant and maintenance objectives being integrated | Relies on the utilisation of knowledge and expertise within the plant | Holistic | Theoretical
While 37 differently named models were identified (Table 1), an analysis of these models indicates a number of similarities. Approximately half (18 models) share a similar focus and/or homogeneous benefits/requirements. Models identified as offering only minor and subtle variations included: Basic / Advanced terotechnology; Age-based / Time-based / Scheduled maintenance; Availability-based / Campaign maintenance; Breakdown / Corrective maintenance; Condition Monitoring / Predictive Condition Monitoring; Effectiveness-centred / Total Quality maintenance; Planned / Pre-planned / Preactive / Scheduled maintenance; and Run-to-destruction / Run-to-failure. In regards to similarities, it could be argued that the model name Preventive Maintenance (PM) has a broad, generic meaning for maintenance. [18] described preventive maintenance as a practice which encompasses all planned, scheduled and corrective actions before the equipment fails. Another point of similarity is the fact that many models are a direct extension of, or based on the platform of, the three most popular models found in the maintenance literature, being TPM, RCM and CBM. A common theme to emerge from a majority of models was the need for the maintenance system to be integrated with the organisation's information and data systems.
When analysing the 37 identified models, an important consideration, especially for this paper, is the level of empirical evidence found in the literature. While theoretical examples and descriptions can be found for most of the models, documented practical (real world) evidence was found for only 12 models (32%). When the four popular models (TPM, RCM, CBM, and CM) are removed, less than a quarter of the remaining 33 models have any empirical evidence on which the model can be practically evaluated. Adding to the empirical limitations of these remaining models is the fact that only one or two papers exist on each of the models, and a number of the models are based on the four popular models. For practitioners, the point on empirical evidence is important because it allows the model to be evaluated in a real world environment. For them, developing an understanding of issues surrounding the implementation and success of the maintenance system is a key point. Having limited practical evidence on the various models is problematic and not desirable.
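The proportions quoted above follow directly from the counts of models and empirical papers reported in this section; the short Python sketch below simply reproduces that arithmetic (the variable names are illustrative only).

# Arithmetic behind the empirical-evidence proportions quoted above.
total_models = 37
models_with_evidence = 12      # models with documented real world evidence (32%)
popular_models = 4             # TPM, RCM, CBM and CM, all with empirical evidence

print(f"All models: {models_with_evidence}/{total_models} "
      f"= {models_with_evidence / total_models:.0%}")                    # ~32%

remaining = total_models - popular_models                                # 33
remaining_with_evidence = models_with_evidence - popular_models          # 8
print(f"Excluding the popular models: {remaining_with_evidence}/{remaining} "
      f"= {remaining_with_evidence / remaining:.0%}")                    # ~24%, under a quarter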
3. Empirical Examples of Popular Models Analysed
A final list of 76 articles (the three models were represented 87 times) was extracted from the many hundreds of papers reviewed, and these were examined to establish model type, empirical evidence, author origin, study country, field of study, and the research industry. The empirical evidence of each article involved methods such as surveys, interviews, case studies and anecdotal experience. To clarify "anecdotal experience": papers classified as anecdotal were personal accounts of the authors' experiences working and researching in the field. These articles, while providing empirical evidence, must be viewed with caution as no empirical data was presented, only a personal view; hence the term "anecdotal". With the removal of hundreds of papers, due to their conceptual/theoretical nature, a clear picture emerged of the real world examples of the three popular models in practice today.
On the surface it would seem that the rate of empirical research output over the 15 year period of the reviewed literature (an average of 5.07 articles per year) has remained reasonably consistent, with peak years occurring in 2000 (12 publications), 2002 (7) and 2006 (8). A closer analysis of the figures tends to indicate that there has been a decline in empirical research on the three most popular maintenance models. The overall output in five of the last six years has been below the yearly average of 5.07. Between 1995 and 2000 (6 years) there were 8 articles on CBM, but since 2000 (the last 9 years) only 3 empirical studies have been published, with two of the three being in 2006. In regards to RCM, 23 articles were published between 1995 and 2004 (10 years), and only 3 articles have been published in the five years since 2004.
Analysis of study sector and study industries shows that the Manufacturing sector (55%) clearly dominates, followed by a General classification (18%) and Energy (13%). When narrowing the fields into specific industries, power
plants were clearly identified as being popular for maintenance research with eight papers, followed by steel mills and
the semiconductor industries with four each, part suppliers with three papers, and the automotive industry with two.
Interestingly, out of 76 empirical papers only two have direct practical links to the automotive industry. This industry is
a massive global influence on manufacturing around the world and has provided researchers with many practical exam-
ples of modern improvement philosophies such as just-in-time (JIT), total quality management (TQM), lean manufac-
turing (LM), flexible manufacturing systems (FMS), and world class manufacturing (WCM). While authors from Hong
Kong and Taiwan produced five and one publications respectively, Asian powerhouses such as Japan and mainland China produced only three combined. With TPM being developed in Japan and also being the dominant maintenance model in the literature (66%), it is therefore interesting that only two studies were conducted in Japan.

Model Popularity      Study Sector                      Study Industry
TPM 66% [50]          Manufacturing 55% (42)            Power plants 8
RCM 34% [26]          General classification 18% (14)   Semiconductor 4
CBM 15% [11]          Energy 13% (10)                   Steel mills 4
                      Construction 4% (3)               Part suppliers 3
                                                        Automotive 2
[ ] No. of studies; ( ) No. of papers

Author Origin         Study Country          Study Methods
UK 24% (18)           India 21% (12)         Case study 50% (38)
India 17% (13)        UK 19% (11)            Anecdotal 24% (18)
USA 12% (9)           USA 9% (5)             Survey 14% (11)
Sweden 9% (7)         Sweden 7% (4)          Descriptive 6% (5)
HK/Taiwan 8% (6)      Japan/China 7% (4)     Comparison 3% (2)
Canada 5% (4)         Canada 5% (3)          Pilot study 3% (2)
Spain 4% (3)          Spain 5% (3)
Italy 4% (3)          Italy 5% (3)
Japan/China 4% (3)    HK/Taiwan 5% (3)

In summary, it is worth pointing out that caution should be taken when trying to make comparisons between these
three popular maintenance models. It is clear that the applicability of TPM, RCM, and CBM are situation specific.
While very popular in the manufacturing sector, TPM is more suitable as an integrated holistic improvement system for
the organisation as a whole. RCM and CBM are more equipment specific for critical, complex, high tech applications
like gas compression systems in the offshore oil industry, boiler and turbine auxiliaries in the nuclear industry, and ro-
bots in automobile manufacturing. RCM is often used in more safety-focused sectors, such as the nuclear and aircraft
industries, where maintenance management is usually extensive due to safety regulations.
4. The Need for Greater Empirical/Practical Focus
While a total of 76 empirical articles were analysed in Section 3, this figure seems somewhat small given that it represents 15 years of academic research and considering the growing importance of maintenance management to most organisations around the world. In an attempt to quantify and present an accurate picture of the current situation, further analysis was undertaken of maintenance related journals.
Table 2. Published articles on popular maintenance management models: Comparison between total papers published and papers with empirical evidence (1995-2009)
Leading Maintenance Journals                                 TPM    CBM    RCM    Total
Journal of Quality in Maintenance Engineering                 71     81     46     198
Reliability Engineering & System Safety                       14     74     73     161
International Journal of Quality & Reliability Management     23      9     10      42
Total                                                        108    164    129     401
Published papers with empirical evidence                      22      8     18      48
Percentage of papers with empirical evidence                 20%     5%    14%     12%
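The percentages in Table 2 are simple ratios of empirical papers to total papers; the Python sketch below (counts copied from the table, variable names illustrative) recomputes them as a check.

# Recompute the empirical-evidence rates in Table 2 from the published counts (1995-2009).
published = {                              # journal -> (TPM, CBM, RCM) papers
    "Journal of Quality in Maintenance Engineering": (71, 81, 46),
    "Reliability Engineering & System Safety": (14, 74, 73),
    "International Journal of Quality & Reliability Management": (23, 9, 10),
}
empirical = {"TPM": 22, "CBM": 8, "RCM": 18}   # empirical papers across the three journals

totals = [sum(counts[i] for counts in published.values()) for i in range(3)]   # [108, 164, 129]
for model, total in zip(("TPM", "CBM", "RCM"), totals):
    print(f"{model}: {empirical[model]}/{total} = {empirical[model] / total:.0%}")
print(f"Overall: {sum(empirical.values())}/{sum(totals)} "
      f"= {sum(empirical.values()) / sum(totals):.0%}")                        # 48/401 = 12%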

In this study it was found that three journals (Journal of Quality in Maintenance Engineering, Reliability Engineering & System Safety, and the International Journal of Quality & Reliability Management) provided over 50% of the articles referenced. Table 2 provides a comparison between the total papers published and papers with empirical evidence. As can be seen, a total of 401 articles were published between 1995 and 2009, and 48 of these articles made links to real world applications. This gives a rate of 12% of published papers providing empirical evidence. While this percentage of empirical evidence would seem low, further research would be needed to establish how these figures compare with other research areas outside of the field of maintenance.
5. Conclusions
A comprehensive review of the maintenance management literature was undertaken with 37 models being identified.
From this group three models were found to dominate the published literature, namely: Total Productive Maintenance
(TPM), Reliability-Centred Maintenance (RCM) and Condition-Based Maintenance (CBM). Of the many hundreds of
articles reviewed for these popular models only 76 papers were found to contain empirical evidence or real world
examples. Of the remaining 34 maintenance management models identified (excluding Condition Monitoring) very
little theoretical or practical support was found in the literature. It was also shown that, over the last five or so years, the overall publication output on the maintenance models reviewed is trending lower, and this decline is even more pronounced in regards to CBM and RCM. As maintenance and its management has increasingly become an important and strategic issue for nearly every organisation in the world, it could easily be argued that empirically based publications should be increasing, not trending lower. The findings of this paper support the view that maintenance theory, in many respects, is de-coupled from practical applications.
6. References
[1] V. Ebrahimipour and K. Suzuki, A synergetic approach for assessing and improving equipment performance in offshore in-
dustry based on dependability, Reliability Engineering and System Safety,Vol. 91, 2006, pp.10-19.
[2] F. L. Cooke, Plant maintenance strategy: evidence from four British manufacturing firms, Journal of Quality in Maintenance
Engineering, Vol.9, No.3, 2003, pp.239-249.
[3] S. Apeland and T. Aven, Risk based maintenance optimization: foundational issues, Reliability Engineering and System
Safety,Vol. 67, 2000, pp.285-292.
[4] M. Carnero, An evaluation system of the setting up of predictive maintenance programmes, Reliability Engineering and Sys-
tem Safety,Vol. 91, 2006, pp.945-963.
[5] B. Al-Najjar and I. Alsyouf, Selecting the most efficient maintenance approach using fuzzy multiple criteria decision making,
International Journal of Production Economics, Vol. 84, 2003, pp.85-100.
[6] R. Mobley,An Introduction to Predictive Maintenance, Van Nostrand Reinhold, New York, 1990.
[7] M. Bevilacqua and M. Braglia, The analytic hierarchy process applied to maintenance strategy selection, Reliability Engi-
neering and System Safety, Vol. 70, 2000, pp.71-83.
[8] R. Dekker, Applications of maintenance optimization models: a review and analysis, Reliability Engineering and System
Safety, Vol. 51, 1996, pp.229-240.
[9] T. Wireman, World Class Maintenance Management, Industrial Press Inc., New York, 1990.
[10] S. Blanchard, An enhanced approach for implementing total productive maintenance in the manufacturing environment,
Journal of Quality in Maintenance Engineering, Vol.3, No.2, 1997, pp.69-80.
[11] J. Moubray, Twenty-first century maintenance organization: Part 1 the asset management model, Maintenance Technology,
Applied Technology Publications, Barrington, IL, 2003.
[12] C. Cholasuke, R. Bhardwa and J. Antong, The status of maintenance management in UK manufacturing organisations: results
from a pilot survey, Journal of Quality in Maintenance Engineering, Vol.10, No.1, 2004, pp.5-15.
[13] K. Fraser, Maintenance management is now of strategic importance: So what strategies are your competitors using?, Proceedings of the 6th International Strategic Management Conference, St. Petersburg, Russia, July 8-10, 2010, pp. 139-152.
[14] I. Ahuja and J. Khamba, An evaluation of TPM implementation initiatives in an Indian manufacturing enterprise, Journal of
Quality in Maintenance Engineering, Vol. 13 No. 4, 2007, pp.338-352.
[15] M. Rausand, Reliability centered maintenance, Reliability Engineering and System Safety,Vol. 60, 1998, pp.121-132.
[16] A. Marquez and A. Heguedas, Models for maintenance optimization: a study for repairable systems and finite time periods,
Reliability Engineering and System Safety, Vol. 75, 2002, pp.367-377.
[17] E. Zio, Reliability engineering: Old problems and new challenges, Reliability Engineering and System Safety, Vol. 94, 2009,
pp.125-141.
[18] S. I. Mostafa, Implementation of proactive maintenance in the Egyptian Glass Company, Journal of Quality in Maintenance
Engineering, Vol.10, No.2, 2004, pp.107-122.


Proceeding of Industrial Engineering and Service Science, 2011, September 20-21
Copyright 2011 IESS.
Human Factors in Manufacturing
Kym Fraser

School of Advanced Manufacturing and Mechanical Engineering, University of South Australia, Adelaide, Australia
kym.fraser@unisa.edu.au
ABSTRACT
In today's competitive environment, cellular manufacturing (CM) is a process which offers global manufacturers im-
proved performance and helps them meet their strategic commitments through product and volume flexibility, lower
costs and improved customer response times. CM is a well-known strategy in removing many of the inefficiencies ex-
perienced in functional batch-type manufacturing environments. Evidence now indicates that some organisations have
achieved results that are less than anticipated and that firms which struggle to achieve the full benefit from CM may in
fact be experiencing problems with the human factors associated with manufacturing cells. As a socio-technical proc-
ess, cellular manufacturing requires careful attention to both its technical and human aspects. Academics and practi-
tioners alike have focused on the technical factors such as cell layout, machine order, family part grouping, and work-
flow balancing. The adoption of CM changes the social relationship and interactions among employees and their su-
pervisors. Given the potential impact on employees' attitudes, motivation, and retention, these social changes call for
effective management in a number of areas including HRM, employment relations and industrial structure. This study
presents a review of the various human factors involved in manufacturing cells and tests the importance of each. A sur-
vey of managers, team leaders and operators working within CM systems helps to distinguish between technical and
human aspects and identifies the importance of human factors such as training, communication and teamwork.

Keywords: Human factors, cellular manufacturing, empirical study

1. Introduction
In today's competitive environment many companies are endeavouring to improve their manufacturing performance. It
is now widely accepted that cellular manufacturing (CM) is one such method that manufacturers can use to help meet
their strategic commitments, through product and volume flexibility, lower costs and improved customer response
times. CM is based on operators processing part families, or collections of similar parts, in cells, or clusters of dedicated
machines that may be dissimilar in function [1]. The benefits of CM include reduction in setup times, material handling,
work-in-process, cycle time, and tooling requirements [2],[3],[4]. Furthermore, the implementation of CM has been
shown to achieve significant improvements in product quality, space utilization, control of operations, scheduling, and
worker productivity [5],[6],[7]. While there is no doubt about the increasing popularity of CM (studies show that cells
are now adopted by between 43 and 53% of firms in the United States and the United Kingdom [8]) there is also evi-
dence that CM has not been successful in some organisations. Companies converting to CM often struggle with imple-
mentation and achieve results that are less than anticipated [8],[9],[10]. Evidence now indicates that firms who struggle
to achieve the full benefit from CM may in fact be experiencing problems with the human factors associated with CM
[7],[11],[12].
While much of the CM research work has focused on technical issues (machine order, family part grouping,
workflow balancing), it is now accepted that the implementation and ongoing success of CM involves the consid-
eration of both technical and human aspects. [13] found that both technical and social changes take place when a company adopts advanced manufacturing systems such as cellular manufacturing. They point out that, if an organisation focuses solely on the technical side at the expense of human factors, its performance will be less favourable
than if it pays attention to both sets of issues. Under traditional batch-type functional manufacturing conditions em-
ployees have well-defined responsibilities for a single operation or machine. The very nature of cells requires that a
pool of individually skilled machine operators be grouped together to share work in the cell.
It is now accepted that a number of fundamental social changes do occur when companies convert from functional manufacturing layouts to manufacturing cells. Given the potential impact on employees' attitudes, motivation, and retention, these social changes call for effective management in a number of areas including supervision, HRM, employment relations and industrial structure. This study aims to provide answers to two areas which have not been adequately addressed in the literature. The first part of the study seeks to determine the level of influence that technical and human aspects may play within CM systems; secondly, a list of human factors associated with CM is tested to determine which are the most important factors within CM systems.
2. Technical vs Human Aspects of Cellular Manufacturing
There exists a significant and growing body of academic research exploring various technical facets of cell formation
and design [14],[15],[16],[17]. These include areas such as machine sequence, workflow balancing, machine-part fami-
lies, and cell capacity using mathematical or simulation methodologies. [11] explain that most of this technically focused research adopts a micro-level focus, investigating one or a few issues within this large and complex process, and giving only limited attention to the significant human dimensions. This has led to the situation where we know a great deal
about certain steps in the technical design of cells, but lack a well-developed and broadly-focused theory of cell design
and its human consequences.
[18] argue that a major contributing factor in why the full benefits of CM have not been achieved is that the research literature on cellular manufacturing over the last 15 years has, to an overwhelming degree, focused on the development of technical procedures to solve the cell formation problem (machine order/layout, family part grouping, work flow sequence). While many of the decisions inherent in cell system design are technical in nature (e.g., how work should be scheduled through the cell), there are significant human dimensions to cell design (e.g., how cell operators will be selected, trained and rewarded). [11] conclude that many of the problems and failures in cellular manufacturing systems occur at the interface between the technical and social subsystems.
[19] state that manufacturing companies can establish a strategic competitive advantage by placing greater importance on the human elements early in the design and implementation process. The authors explain that the vast majority of the cell formation literature places primary emphasis on grouping similar parts and machines. Once the cells are designed, secondary consideration is given to the assignment of workers to the cells. At this stage, the human element has typically considered workers only in terms of their labour capacity and/or technical skills. [19] argue that, in this setting, human skills such as communication, problem solving, teamwork, leadership, and conflict resolution can become just as important as technical skills, such as mechanics, mathematics, machining and inspection.
In a study of implementation experiences, [6] concluded by making the following point: "the picture that emerges from this study is clear: restructuring the factory to adopt cellular manufacturing should not be viewed merely as a technical, engineering-dominated problem but as a change process where the people element dominates". [20] found that a number of fundamental social changes occur when companies convert from a functional (batch or job shop) manufacturing layout to manufacturing cells. In CM, employees are moved from segregated work groups (e.g. all press operators working in the same department, all lathe operators in the same department) into cells that combine jobs and workers from several specialized skill areas. Cell team members have to work together, though each may originally have been under a different pay or reward system, or possess different levels of training, skills, and experience. In essence, conversion to CM changes the social interactions among employees and their supervisors. These social changes require careful attention because of their potential impact on employee attitudes, motivation, and retention.
[21] conducted a comprehensive evaluation of the various human factors associated with CM by reviewing both the CM and the advanced manufacturing technologies (AMT) literature. Adding support to the lack of research in this field, [21] state that "while cellular manufacturing is a popular research area, there is a singular absence of articles that deal with the human elements in cellular manufacturing". The results of their review identify eight broad areas of human issues in CM: worker assignment strategies, skill identification, training, communication, autonomy, reward/compensation system, teamwork, and conflict management.
Where socio-technical systems such as cellular manufacturing are involved, both aspects need to be considered to maximise success. The very nature of manufacturing cells dictates that individuals will be required to work together to maximise the benefits that cells can provide manufacturers. What is not clear is the level of influence that either aspect has on CM systems. Is one aspect considered more important, or does it have a greater influence on the on-going success of manufacturing cells? This study will endeavour to provide a better understanding of this unanswered question.
3. Methodology
The data used for this study were collected via a questionnaire survey designed to provide information about the importance of human issues in cellular manufacturing. A sample of 175 participants involved in cellular manufacturing took part in the survey. Survey participants included three sub-groups: managers, team leaders, and operators. A brief summary of the four medium to large organisations involved in the study is as follows:
Human Factors in Manufacturing

Copyright 2011 IESS. 9
- Company 1 (sites 1 & 2) Electrical accessories manufacturer (Australian) (2300 employees)
- Company 2 Sanitary ware manufacturer (Australian) (2000 employees)
- Company 3 Automotive components manufacturer (Australian) (800 employees)
- Company 4 Electrical accessories manufacturer (Switzerland) (380 employees)
The aim of this study is to provide answers to two research questions. Firstly, in an attempt to determine the level of influence that technical or human factors may have on CM, participants were asked to distinguish between technical and human aspects of cellular manufacturing and determine the level of influence either aspect may have on CM systems. The second objective was to test the importance of a list of human factors associated with CM and establish which factors are considered the most important within a CM system. The list of human factors used in this study was identified by [21], and the list encompasses most of the social issues presented in the literature. The data collected to answer each research question were independent of the other.
4. Findings
Participants were asked to rate which problems (technical or human) they had encountered most often while working in cells. A list of technical and human problems was provided to help participants understand the difference between each issue. The scale used to rate this question was as follows: 1 = mostly technical, 2 = more technical, 3 = the same, 4 = more human, 5 = mostly human.
Table 1 Source of problem: Technical or Human (Company)

Company              N      Mean   SD
Company 1 - site 1   42     2.05   0.909
Company 1 - site 2   23     2.91   0.793
Company 2             6     2.33   0.816
Company 3            40     2.15   1.027
Company 4            61     2.66   1.124
Total               172*    2.41   1.042
*Of the 175 participants, 3 operators failed to answer this question (N=172)

The results showed that the problems being experienced within manufacturing cells are skewed toward technical issues (mean 2.41) (see Table 1). The mean for each of the four companies and five sites surveyed fell between 2 (more technical) and 3 (the same amount of problems for both issues). Of the 172 participants, 52% indicated that they had experienced either more technical or mostly technical problems involving CM systems. Of the remaining 48% of participants, 34% indicated that they had experienced the same amount of problems for both aspects, leaving only 14% to indicate that they had experienced more human or mostly human problems. It is worth noting that the maximum and minimum mean values for the survey occurred between the two sites of the same company, 2.05 and 2.91.
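The figures in Tables 1 and 2, and the 52%/34%/14% split above, are ordinary descriptive statistics over the 1-5 responses. The Python sketch below shows that computation on a handful of made-up responses; it is illustrative only and does not reproduce the study's raw data.

# Descriptive statistics for 1-5 responses (1 = mostly technical ... 5 = mostly human).
# The responses below are invented for illustration; the survey's raw data are not reproduced.
from statistics import mean, stdev

responses_by_group = {
    "Group A": [2, 1, 3, 2, 2],
    "Group B": [3, 2, 4, 2, 3],
}

for group, responses in responses_by_group.items():
    print(f"{group}: N={len(responses)}, mean={mean(responses):.2f}, SD={stdev(responses):.3f}")

# Share of respondents in each band (technical = 1-2, same = 3, human = 4-5).
all_responses = [r for group in responses_by_group.values() for r in group]
n = len(all_responses)
technical = sum(r <= 2 for r in all_responses) / n
same = sum(r == 3 for r in all_responses) / n
human = sum(r >= 4 for r in all_responses) / n
print(f"technical {technical:.0%}, same {same:.0%}, human {human:.0%}")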
Table 2 Source of problem: Technical or Human (Position)

Position      N      Mean   SD
Manager       10     3.00   1.054
Team Leader   23     2.65   1.027
Operator     139     2.33   1.031
Total        172*    2.41   1.042
*Of the 175 participants, 3 operators failed to answer this question (N=172)

When comparing the data for the different positions held within the companies (see Table 2), operators indicated that they experienced more technical problems than human issues within manufacturing cells, with a mean of 2.33. For team leaders the mean increases to 2.65 (indicating fewer technical and more human issues than operators) and for managers the mean value is 3.00, indicating that managers experience the same amount of technical and human problems. The results indicate that people in leadership or management positions within a cellular environment experience increased human issues as compared to operators of the cells.
In regards to the second research objective, participants were asked to rank the eight (8) human factors from 1 = most important to 8 = least important. Each factor was accompanied by a short description to help participants understand the various factors being ranked.
The overall rankings of human factors for the various companies/sites (see Table 3) indicate three sub-groupings of the eight human issues listed. The three factors ranked most important (means between 2.79 and 3.47) were communication, teamwork, and training. The next sub-group (means between 4.38 and 4.83) comprised skill identification and worker assignment strategies. The final group (means between 5.34 and 5.75) comprised conflict management, autonomy, and reward/compensation.
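An overall ordering such as the "All Companies" column in Table 3 can be obtained by averaging each factor's rank across respondents and sorting by that mean (a lower mean rank indicates greater importance). The Python sketch below illustrates this with invented rankings for three respondents; only the factor names are taken from the study.

# Aggregate individual 1-8 rankings into an overall ordering by mean rank (1 = most important).
# The three respondents' rankings below are invented purely to illustrate the computation.
from statistics import mean

rankings = {  # factor -> rank given by each respondent
    "Communication": [1, 2, 1],
    "Teamwork": [2, 1, 3],
    "Training": [3, 3, 2],
    "Reward/Compensation": [8, 7, 8],
}

overall = sorted(rankings.items(), key=lambda item: mean(item[1]))
for position, (factor, ranks) in enumerate(overall, start=1):
    print(f"{position}. {factor} (mean rank {mean(ranks):.2f})")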
When comparing the individual companies and sites to the overall rankings, the following differences are observed. Of the three top ranked factors, the biggest difference occurred with the European company (Company 4), which ranked training as the 7th most important factor, while the Australian companies/sites ranked training either 1st or 2nd. Another notable difference was Company 2 ranking teamwork as the 5th most important factor. While teamwork ranked highly overall, one possible reason for this lower ranking may be that the cells in this company were only 2-3 person cells, compared to the survey's overall average of 6 people per cell. The low number of participants (N=6) for Company 2 also makes it difficult to draw meaningful comparisons at an individual level.
Table 3 Human Factor Ranking: Companies

Rank | Company 1 - Site 1 | Company 1 - Site 2 | Company 2 | Company 3 | Company 4 | All Companies
1 | Training | Training | Training | Communication | Communication | Communication
2 | Teamwork | Teamwork | Communication | Training | Teamwork | Teamwork
3 | Communication | Communication | Skill Identification | Teamwork | Autonomy | Training
4 | Skill Identification | Skill Identification | Worker Assignment Strategies | Skill Identification | Worker Assignment Strategies | Skill Identification
5 | Reward/Compensation | Conflict Management | Teamwork | Worker Assignment Strategies | Conflict Management | Worker Assignment Strategies
6 | Worker Assignment Strategies | Worker Assignment Strategies | Reward/Compensation | Reward/Compensation | Skill Identification | Conflict Management
7 | Conflict Management | Autonomy | Autonomy | Conflict Management | Training | Autonomy
8 | Autonomy | Reward/Compensation | Conflict Management | Autonomy | Reward/Compensation | Reward/Compensation

The overall ranking of skill identification and worker assignment strategies at 4th and 5th respectively seems a true reflection, as all individual companies/sites ranked both between 3rd and 6th. When comparing differences within the three least important factors, the notable difference again occurred in the European company (Company 4), which ranked autonomy as the 3rd most important while the other (Australian) companies/sites ranked autonomy either 7th or 8th most important.
When comparing the three respondent categories (managers, team leaders, operators; see Table 4), the overall results are strongly influenced by the high proportion of operators (81% of the total participants). The rankings given to the eight human factors by team leaders and operators were the same. While the rankings were the same, team leaders rated each factor as more important than operators did, except for the reward/compensation system. The biggest differences between these means occurred for communication (2.13 versus 2.85) and skill identification (3.70 versus 4.51). The notable difference in the rankings occurred between managers and the other two positions: managers considered teamwork, training and communication as the top three factors, while team leaders and operators ranked communication, teamwork and training as their top three.
Table 4 Human Factor Ranking: Job Position

Rank | Managers | Team Leaders | Operators | All Groups
1 | Teamwork | Communication | Communication | Communication
2 | Training | Teamwork | Teamwork | Teamwork
3 | Communication | Training | Training | Training
4 | Skill Identification | Skill Identification | Skill Identification | Skill Identification
5 | Autonomy | Worker Assignment Strategies | Worker Assignment Strategies | Worker Assignment Strategies
6 | Worker Assignment Strategies | Conflict Management | Conflict Management | Conflict Management
7 | Conflict Management | Autonomy | Autonomy | Autonomy
8 | Reward/Compensation | Reward/Compensation | Reward/Compensation | Reward/Compensation
5. Discussion
In regards to the first research question, the biggest difference occurred between the two sites of the same company (Company 1). This significant difference may in some way be explained by each site's attitude to training. At Site 1 the training records (both past and future needs) of each operator were displayed in a strategic position within the plant for all employees to observe. While no such method was evident at Site 2, many employees at Site 2 openly complained about the lack of coordinated training within the plant. Support for this issue was also evident in Question 2, where participants at Site 2 ranked training (mean value of 2.04) higher than any other company or site. When questioned about employee training at Site 2, management stressed that adequate training had been provided. Another notable difference between the two sites of the same company occurred with the ranking of the reward/compensation system. While Site 2 operators were unhappy about the lack of training, this was not the case in regards to reward/payment: Site 2 participants clearly ranked the issue of payment as the least important (mean 7.09). At Site 1 it was ranked the 5th most important, with a mean of 4.58 indicating considerably greater importance.
When comparing the three respondent categories (job positions), there were two notable differences. Firstly, managers ranked autonomy the 5th most important while the other two positions put it at 7th. Secondly, while all three positions ranked the reward/compensation system as the least important factor, the managers' mean of 7.40 indicated the lowest importance recorded in the survey, underlining the very low priority given to reward/payment issues by managers.
When looking for notable differences between countries, the two factors to stand out were the low ranking of training (ranked 7th) and the high ranking of autonomy (ranked 3rd) in the European results. Without further research it would be difficult to state the reasons for these differences, but the following point can be made. The surveys distributed to participants in the Swiss company were converted to the common language of the factory, Swiss-German. It was observed that many operators were experiencing some problems understanding the Swiss-German language being used, even with the help of a senior, long-time employee of the company to explain the survey questions. While many of these workers were not native to Switzerland, it was interesting to note that so many workers (including younger people) would experience such a problem with the native/common language. It would seem that some literacy training would be beneficial to the company, considering that operators need to work within a team environment in manufacturing cells. A second point which may have some influence on the high autonomy ranking is that operators may feel more comfortable working in smaller groups, or even alone, due to this literacy issue. It could also be argued that cultural differences may affect some of the outcomes, both within the Swiss company and between the two countries.
The results of this survey clearly indicate that human factors play a significant role in the overall success of cellu-
lar manufacturing. When analysing which individual human factors are important to CM, it was shown that communication, teamwork and training ranked the highest while reward/compensation ranked the lowest. In determining the importance
of human factors such as skill identification, worker assignment, training, reward etc. in different companies and coun-
tries, it must be noted that these issues are rarely neutral in nature, and their interpretation will be shaped by the indus-
trial context of the firm and/or country. It is therefore acknowledged that some of the differences in the results between
the four companies may well be shaped by the industrial context in which they operate. The non-testing of this issue in
this research provides the opportunity for further research in this area and in human factors in general.
6. Conclusions
Cellular manufacturing has a lot to offer global manufacturers by reducing both costs and inefficiencies within their
manufacturing processes. While the focus of research has been on the technical side of this socio-technical process, it is
now clear that greater effort must be placed on the human aspects to improve the benefits and success of this form of
manufacturing. This study found that while technical issues still play a major role in the on-going problems experienced
in cellular manufacturing, human issues account for a significant proportion of problems within cells. The study goes on
to identified the various human factors associated with CM and tests the importance of each. While communication,
teamwork and training were ranked as the most important factors, it is hoped that these findings will better inform prac-
titioners on the human aspects of CM and provide future direction in areas such as employment and industrial relations.
7. References
[1] H. Harris and K. Fraser, Towards virtual manufacturing: An implementation framework from feasibility to product develop-
ment, International Journal of Product Development, Vol. 11, No. 1/2, 2010, pp. 136-162.
[2] V. L. Huber and N. Hyer, The human factor in cellular manufacturing, Journal of Operations Management, Vol.5, No.2,
1985, pp.213-228.
[3] F. Olorunniwo, A framework for measuring success of cellular manufacturing implementation, International Journal of
Human Factors in Manufacturing

12
Production Research, Vol.35, No.11, 1997, pp.3043-3061.
[4] A. Gunasekaran, R. McNeil, R. McGaughey and T. Ajasa, Experiences of a small to medium size enterprise in the design and
implementation of manufacturing cells, International Journal of Computer Integrated Manufacturing, Vol.14, No.2, 2001,
pp.212-223.
[5] V. L. Huber, and K.A. Brown, Human resource issues in cellular manufacturing: A sociotechnical analysis, Journal of Op-
erations Management, Vol. 10, No.1, 1991, pp.138-159.
[6] U. Wemmerlov and D. J. Johnson, Cellular manufacturing at 46 user plants: implementation experiences and performance
improvements, International Journal of Production Research, Vol. 35, No.1, 1997, pp.29-49.
[7] K. S. Park and S. W. Han, Performance Obstacles in Cellular Manufacturing Implementation Empirical Investigation, Hu-
man Factors and Ergonomics in Manufacturing, Vol.12, No.1, 2002, pp.17-29.
[8] D. J. Johnson and U. Wemmerlov, Why does cell implementation stop? Factors influencing cell penetration in manufacturing
plants, Production and Operations Management, Vol.13, No.3, 2004, pp. 272-289.
[9] C. A. Yauch, Moving towards cellular manufacturing: The impact of organisational culture for small businesses, PhD Thesis,
University of Wisconsin, USA, 2000.
[10] K. Fraser, H. Harris and L. Luong, Improving the implementation effectiveness of cellular manufacturing: A comprehensive
framework for practitioners, International Journal of Production Research, Vol. 45, No 24, 2007, pp. 5835-5856.
[11] N. L. Hyer, K. A. Brown and S. Zimmerman, A socio-technical systems approach to cell design: case study and analysis,
Journal of Operations Management, Vol.17, 1999, pp.179-203.
[12] K. Fraser, Labour flexibility: Impact of functional and localised strategies on team-based product manufacturing, CoDesign,
Vol. 5, No. 3, 2009, pp. 143-158.
[13] G. G. Udo and A. Ebiefung, Human Factors affecting the Success of Advanced Manufacturing Systems, Computers & In-
dustrial Engineering, Vol.37, 1999, pp.297-300.
[14] N. Singh, Design of Cellular Manufacturing Systems: An Invited Review, European Journal of Operational Research, Vol.
69, No.3, 1993, pp. 248-291.
[15] M. Kazerooni, An integrated methodology for cellular manufacturing system design, PhD Thesis, University of South Aus-
tralia, Adelaide, 1997.
[16] G. Shambu and N. C. Suresh, Performance of hybrid cellular manufacturing systems: A computer simulation investigation,
European Journal of Operational Research, Vol.120, 2000, pp.436-458.
[17] Z. Albadawi, H. Bashir and M. Chen, A mathematical approach for the formation of manufacturing cells, Computers & In-
dustrial Engineering, Vol. 48, 2005, pp.3-21.
[18] U. Wemmerlov and D. J. Johnson, Empirical findings on manufacturing cell design, International Journal of Production
Research, Vol. 38, No.3, 2000, pp.481-507.
[19] B. A. Norman, W. Tharmmaphornphilas, K. L. Needy, B. Bidanda and R. C. Warner, Worker assignment in cellular manu-
facturing considering technical and human skills, International Journal of Production Research, Vol.40, No.6, 2002,
pp.1479-1492.
[20] F. Olorunniwo and G. Udo, The impact of management and employees on cellular manufacturing implementation, Interna-
tional Journal of Production Economics, Vol.76, 2002, pp.27-38.
[21] B. Bidanda, P. Ariyawongrat, K. L. Needy, B. Norman and W. Tharmmaphornphilas, Human related issues in manufacturing
cell design, implementation, and operation: a review and survey, Computers & Industrial Engineering, Vol.48, 2005,
pp.507-523.

Proceeding of Industrial Engineering and Service Science , 2011, September 20-21
Copyright 2011 IESS.
Developing A Model For Measuring
Organizational Knowledge: A Case Study of
PT.Telekomunikasi Indonesia, Tbk.
Fransiscus Rian Pratikto

Department of Industrial Engineering, Parahyangan Catholic University, Bandung, Indonesia
frianp@unpara.id, frianp@yahoo.com
ABSTRACT
Knowledge management (KM) has become a main focus of today's business organizations due to its important role in developing and sustaining their competitive advantage. Many organizations have benefited from KM initiatives. As an old management proverb goes, "you cannot manage what you cannot measure", so knowledge measurement is one of the essences of KM. The importance of knowledge measurement, however, is not matched by the availability of an appropriate framework for such effective measurement. Boudreau [3] proposed a framework for measuring knowledge in an organization that consists of three components, i.e. stock, flow, and enabler. Stocks are the existing level of knowledge at a point in time; flows are the movement of knowledge between entities, including individuals, organizations, and organization levels; and enablers are investments, processes, structures, and activities established by organizations aimed at changing or maintaining knowledge stocks, or influencing knowledge flows. This research aims to contribute to the design of a measurement model based on Boudreau's framework by exploring and operationalizing the framework, and finally empirically testing the model in a case study. In this research, a model of knowledge measurement based on Boudreau's framework is developed. The model identifies several constructs that theoretically affect (either directly or indirectly) the flow of knowledge in an organization, i.e.: network attributes, external sources and knowledge complementarity, potential absorptive capacity, realized absorptive capacity, nature of knowledge (tacitness), knowledge integration, and social mechanism integration. An empirical study in the largest telecommunication company in Indonesia was conducted. The survey yielded the significance and magnitude (in standardized values) of the total effect of each of the above constructs on the flow of knowledge as follows: network attributes (0.51), external sources and knowledge complementarity (0.15), potential absorptive capacity (0.22), realized absorptive capacity (0.62), nature of knowledge (tacitness) (0.25), knowledge integration (0.30), and social mechanism integration (0.22).

Keywords: knowledge management, stock, flow, enabler.

1. The Importance of Knowledge Measurement
Knowledge management (KM) has become a main focus of today's business organizations due to its important role in developing and sustaining their competitive advantage. Many organizations have benefited from KM initiatives. Best practices from many organizations show that the return on investment (ROI) of KM initiative implementations varies between 2.5:1 and 10:1 [1].
Based on research by Davenport et al. [2], which studied 31 KM projects in 23 companies, it was found that the KM implementation projects that effectively increase an organization's efficiency and effectiveness are those that focus on (i) creating knowledge repositories, (ii) improving access to knowledge, (iii) enhancing a culture that supports knowledge usage in the organization, and (iv) managing knowledge as an asset. Managing knowledge as an asset requires good and proper knowledge measurement.
2. Boudreau's Framework of Knowledge Measurement
Boudreau [3] offered a quite comprehensive framework for measuring the knowledge of an organization. Boudreau's framework consists of three components, i.e. stock, flow, and enabler. Stocks are the level of knowledge at a point in time; flows are the movement of knowledge between entities, including individuals, organizations, and organization levels; and enablers are investments, processes, structures, and activities established by organizations aimed at changing or maintaining knowledge stocks, or influencing knowledge flows.
According to Boudreau, knowledge measures that are categorized as stocks include accounting for intangibles, fi-
nancial statement augmentation, patents, publications and citations, organization experience and rivalry patterns, learning curves, and unit-level competencies, education and experience. Measures of knowledge flow are categorized into two groups: one group of measures focuses on business units and alliance partners, and another group focuses on groups and teams. Enabler measures comprise geographical and political proximity, international and domestic organizational and alliance design, research & development (R&D) expenditures, absorptive capacity, network attributes, and tacitness.
This research aims to contribute to the design of a knowledge measurement model based on Boudreau's framework. This measurement model can be used to portray the condition of knowledge in an organization, and focuses on knowledge measures that are measurable at the individual level (see Table 1).
Table 1: Knowledge measures of Boudreau's framework and corresponding level of measurement (individual or organization)
Stocks: accounting for intangible assets; financial statement augmentation; patents or publications and their citation patterns; organization experience and competitive rivalry; learning curves; unit-level education, experience, and job requirements.
Flows: measures that focus on the flow of knowledge between business units and alliance partners; measures that focus on the flow of knowledge between colleagues and teams.
Enablers: geographical and physical proximity; international and domestic organizational and alliance design; research and development expenditures; absorptive capacity; network attributes; tacitness.
3. The Knowledge Measurement Model
The following enabler measures are chosen as the model's components:
- Absorptive capacity
Reference [4] defines absorptive capacity as a dynamic capability embedded in a firm's routines and processes, making it possible to analyze the stocks and flows of a firm's knowledge and relate these variables to the creation and sustainability of competitive advantage.
Absorptive capacity has two components, i.e. potential absorptive capacity (PACAP) and realized absorptive capacity (RACAP). PACAP comprises the ability to acquire and assimilate knowledge, whereas RACAP comprises the ability to convert and exploit knowledge. Several factors significantly affect these two capabilities [4], e.g. knowledge integration, sources of external knowledge, knowledge complementarity, and social integration mechanisms.
- Network attributes
Individual and organizational network attributes are key enablers that determine the flow of knowledge. These networks can be networks between individuals, or between the organization and suppliers, buyers, financial institutions, etc. [3].
- Tacitness
Tacitness reflects the effort required to move the knowledge [5]. Tacitness is an enabler because it determines the ease of the knowledge transfer process. Tacitness can be harmful when it restricts desired knowledge flows between groups, but it is also valuable in making knowledge difficult for competitors to copy (Teece, Pisano, & Shuen, 1997; Barney, 1991 in [3]).
The complete model consists of eight constructs, i.e.:
- Network attributes, consists of three variables: size, coverage, and strength of network.
- Sources and complementarity of external knowledge, consists of four variables: the coverage and strength of relationships with external parties in terms of acquisition, purchasing through licensing and contracts, inter-organization relationships including research & development, alliances, and joint ventures, and the complementarity between the knowledge of individuals in the organization and that of the contacts in their networks [6].
- Social integration mechanism, consists of variables that indicate the form and level of social mechanisms in the or-
ganization, either formal or informal [3].
- Tacitness, consists of five variables: type of knowledge, codifiability, teachability, complexity, and system dependence [6].
- Knowledge integration, consists of variables that indicate the availability and level of integration of knowledge in the organization, in either formal or informal form [7].
- Potential absorptive capacity, consists of variables that indicate the ability to acquire and assimilate external knowledge [7], [4].
- Realized absorptive capacity, consists of variables that indicate the ability to convert and exploit knowledge [7], [4].
- Knowledge flow, consists of variables that indicate the flow of procedures, tools and ideas, including patents [3].
The relationships between the above constructs result in the following directional hypotheses (an illustrative specification sketch follows the list):
- Hypothesis 1: The better the network attributes of an organization, the greater the flow of knowledge into the organization.
- Hypothesis 2: The better the network attributes of an organization, the higher the external knowledge sources and complementarity of the organization.
- Hypothesis 3: The higher the external knowledge sources and complementarity of an organization, the higher the potential absorptive capacity of the organization.
- Hypothesis 4: The better the social mechanism integration of an organization, the higher the realized absorptive capacity of the organization.
- Hypothesis 5: The higher the level of knowledge integration in an organization, the higher the potential absorptive capacity in the organization.
- Hypothesis 6: The higher the level of knowledge integration in an organization, the higher the realized absorptive capacity in the organization.
- Hypothesis 7: The higher the level of knowledge tacitness in an organization, the more difficult it is to attain a degree of knowledge integration in the organization.
- Hypothesis 8: The higher the level of potential absorptive capacity in an organization, the higher the level of realized absorptive capacity in the organization.
- Hypothesis 9: The higher the level of realized absorptive capacity in an organization, the higher the knowledge flow in the organization.
- Hypothesis 10: The higher the level of knowledge tacitness in an organization, the more difficult it is to attain a degree of knowledge flow in the organization.
- Hypothesis 11: The higher the social integration mechanism in an organization, the higher the level of knowledge integration in the organization.
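To make the structural part of these hypotheses concrete, the sketch below writes H1-H11 as regression-style structural equations in lavaan-like syntax for the Python package semopy. This is only an illustration: the authors do not report which SEM software they used, and the construct names here are placeholders for composite scores rather than the actual questionnaire variables.

```python
# Hypothetical sketch (not the authors' code): the structural relations H1-H11
# written in lavaan-style syntax for the semopy package. Construct names are
# placeholders for composite scores computed from the questionnaire items.
import pandas as pd
import semopy

MODEL_DESC = """
ExternalSources ~ NetworkAttributes
PACAP ~ ExternalSources + KnowledgeIntegration
RACAP ~ SocialIntegration + KnowledgeIntegration + PACAP
KnowledgeIntegration ~ Tacitness + SocialIntegration
KnowledgeFlow ~ NetworkAttributes + RACAP + Tacitness
"""

def fit_structural_model(scores: pd.DataFrame) -> pd.DataFrame:
    """Fit the hypothesized structural model and return the parameter estimates."""
    model = semopy.Model(MODEL_DESC)
    model.fit(scores)        # `scores`: one column per construct, one row per respondent
    return model.inspect()   # parameter estimates, standard errors, p-values
```

Each right-hand-side term maps to one hypothesis (for example, KnowledgeFlow ~ NetworkAttributes corresponds to H1, and RACAP ~ PACAP to H8), so the fitted regression coefficients play the role of the standardized path coefficients reported later in the paper.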
4. Case Study
The model was implemented in PT. Telekomunikasi Indonesia, Tbk. (TELKOM), Indonesia's largest state-owned telecommunication company. TELKOM is a full-service telecommunication and network provider. TELKOM provides fixed wireline services, fixed wireless services, mobile/cellular services, data & internet services, and interconnection & network services. As of 31 December 2006, TELKOM had about 48.5 million customers, consisting of 8.7 million fixed wireline service customers, 4.2 million fixed wireless customers, and 35.6 million mobile service customers [8]. In line with its vision "To become a leading InfoComm player in the region", TELKOM has been making a continual effort to stay at the top position among telco operators in Indonesia.
A knowledge management initiative has been formally implemented in TELKOM and is managed by an AVP (Assistant Vice President). In 2007, TELKOM was also awarded a Top-5 position in the Most Admired Knowledge Enterprise (MAKE) 2007 study in Indonesia.
An online survey through TELKOM's intranet was conducted in April 2007; 1,269 samples of TELKOM's employees were collected, of which 133 completed questionnaires were considered invalid, leaving 1,136 samples for further analysis. The reliability of the questionnaire was tested using Cronbach's alpha and resulted in an alpha coefficient of 0.922, leading to the conclusion that the instrument is reliable.
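For reference, the reliability coefficient reported above follows the standard Cronbach's alpha formula; the small sketch below shows that computation in Python with illustrative data, not the actual questionnaire responses.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of scores.

    alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Illustrative data only (5 respondents, 4 Likert items):
scores = np.array([[4, 5, 4, 4], [3, 3, 4, 3], [5, 5, 5, 4], [2, 3, 2, 3], [4, 4, 5, 4]])
print(round(cronbach_alpha(scores), 3))
```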
The majority of respondents are 41-45 years old (39.96%) and 46-50 years old (22.36%), with a mean age of 43.88 years. Most of them hold a Sarjana (bachelor's) degree (44.63%) or a Diploma degree (37.15%). About 56.16% of the respondents have been working at Telkom for more than 20 years, and 20.95% of them for 16-20 years. About 53.17% of the respondents have been no more than one year in their current position, and on average the respondents have been in their position for 24.58 months.
5. Model Parameterization and Analysis
The model was parameterized using Structural Equation Modeling (SEM) with the maximum likelihood estimation method. Two trimming processes were conducted to ensure that all constructs, variables, and relationships are significant.
The parameterization results, in standardized values, are depicted in Figure 1.
The final model was then tested for fitness and the results are shown in Table 2, in which 5 of the 9 criteria indicate that the model is still at an acceptable level of fit.
Table 2: Final model fitness
Model Fit Criteria | Value | Acceptable Level [9] | Interpretation
Chi-square (χ²) | 8005.58 | < 1022.82 | Not fit
Goodness-of-Fit Index (GFI) | 0.69 | 0 (not fit) to 1 (perfect fit) | Partially fit
Adjusted GFI (AGFI) | 0.65 | 0 (not fit) to 1 (perfect fit) | Partially fit
Root-mean-square residual (RMR) | 0.16 | ≤ 0.05 | Not fit
Root-mean-square error of approximation (RMSEA) | 0.10 | < 0.05 | Not fit
Tucker-Lewis Index (TLI) | 0.69 | 0 (not fit) to 1 (perfect fit) | Partially fit
Normed fit index (NFI) | 0.69 | 0 (not fit) to 1 (perfect fit) | Partially fit
Normed chi-square (χ²/DF) | 11.59 | 1.00 ≤ χ²/DF ≤ 5.00 | Not fit
Parsimonious fit index (PGFI) | 0.61 | 0 (not fit) to 1 (perfect fit) | Partially fit


The impact of each construct on flow of knowledge is depicted in Table 3.
Table 3: Impact of each construct on flow of knowledge
Constructs Direct Effect Indirect Effect Total Effect
Realized Absorptive Capacity 0.62 0.00 0.62
Network Attributes 0.39 0.12 0.51
Knowledge Integration 0.00 0.30 0.30
Tacitness 0.10 0.15 0.25
Potential Absorptive Capacity 0.00 0.22 0.22
Social Integration Mechanism 0.00 0.22 0.22
External Sources and Knowledge Complementarity 0.00 0.15 0.15
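To illustrate how the indirect effects in Table 3 arise, the indirect effect of network attributes on the flow of knowledge can be traced along its single indirect path through external sources, potential absorptive capacity, and realized absorptive capacity, using the standardized coefficients reported in Section 6; this is simply a reading of the reported numbers, not an additional result.

```latex
\[
\text{indirect}_{\text{NA}\to\text{Flow}} = 0.81 \times 0.70 \times 0.35 \times 0.62 \approx 0.12,
\qquad
\text{total}_{\text{NA}\to\text{Flow}} = 0.39 + 0.12 = 0.51 .
\]
```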


Compared to the initial model, the final model contains the same constructs, but some variables were eliminated due to insignificance, i.e. variables J23, J24, J33, J34, J35, J36, and J45. Variables J23, J24, J33, J34, J35, and J36 were trimmed due to insignificant factor loadings (<0.30).
Variable J23 measures the complexity of the knowledge required in a certain job, such that an individual involved in the job needs to be a specialist. The insignificant factor loading may be due to the respondents' lack of competence in judging whether or not a job requires a specialist. The question might be more appropriate for an expert in job analysis and competency.
Variable J24 measures the system-dependence characteristic of knowledge and its dependence on the knowledge of other individuals in the organization. The insignificant factor loading may be caused by insufficient respondent knowledge to make a proper judgement about whether their knowledge is dependent on others'.
Variable J33 measures the individual ability to recognize changes that occur in the market, especially changes in regulation, competition, and technology. Variable J34 measures the individual ability to analyze and interpret those changes. These two variables measure the second aspect of Potential Absorptive Capacity (PACAP), i.e. assimilation ability.
The insignificant factor loadings of these two variables are due to the fact that not all respondents' jobs require them to recognize, analyze, and interpret changes in regulation, competition, and technology. The questions are more appropriate for employees whose jobs are related to market intelligence, business intelligence, or technology watch.
Variable J35 measures the individual ability to recognize the relationship between new external knowledge and current internal knowledge, while variable J36 measures the individual ability to capture an opportunity from new knowledge. These two variables actually measure the first aspect of Realized Absorptive Capacity (RACAP), i.e. conversion ability. The insignificant factor loadings of these two variables may be caused by the fact that not all respondents' jobs require them to recognize the relationship between external knowledge and current internal knowledge, or to recognize opportunities arising from new knowledge. These questions are more appropriate for employees whose jobs are related to product development or business development, or for one who acts as a gatekeeper in the knowledge management initiative im-
plementation.
Variable J45 was trimmed because its regression coefficient was not significant. This variable measures the extent to which an individual knows about knowledge outside the organization. Eliminating this variable does not have any significant impact on the construct because the construct is still represented by six other variables.

[Figure 1: Final model after the second trimming, in standardized values. The path diagram links the constructs Network Attributes, Sources and Complementarity of External Knowledge, Social Integration Mechanism, Tacitness, Knowledge Integration, Potential Absorptive Capacity, Realized Absorptive Capacity, and Flow of Knowledge, together with their indicator variables J1-J46 and error terms.]
6. Conclusions
Based on the parameterization and analysis, the following conclusions are drawn:
The flow of knowledge in Telkom is affected by several enablers, i.e. network attributes, sources and complementarity of external knowledge, potential absorptive capacity, realized absorptive capacity, tacitness, knowledge integra-
tion, and social integration mechanism of the organization.
The significant impacts among enablers, and of enablers on the flow of knowledge, in Telkom (in standardized values) are: the positive impact of network attributes on flow of knowledge (0.39); the positive impact of
network attributes on sources and complementarity of external knowledge (0.81); the positive impact of sources
and complementarity of external knowledge on potential absorptive capacity (0.70); the positive impact of potential
absorptive capacity on realized absorptive capacity (0.35); the positive impact of realized absorptive capacity on
flow of knowledge (0.62); the positive impact of knowledge explicitness (or negative impact of knowledge
tacitness) on flow of knowledge (0.10); the positive impact of knowledge explicitness (or negative impact of
knowledge tacitness) on knowledge integration in the organization (0.49); the positive impact of knowledge
integration in the organization on potential absorptive capacity (0.39); the positive impact of knowledge integration
in the organization on realized absorptive capacity (0.36); the positive impact of social integration mechanism on
realized absorptive capacity (0.13); and the positive impact of social integration mechanism on knowledge
integration in the organization (0.44).
The overall measures of organizational knowledge in Telkom consist of the following constructs and variables:
- Network attributes, which has 6 measures: number of internal contacts; number of external contacts; strength of
internal network; strength of external network; scope of internal network; and scope of external network.
- Sources and complementarity of external knowledge, which has 6 measures: the extent to which individuals in
the organization have opportunities to be involved in relationships with external parties in the form of
acquisition, purchasing through licensing & contract, research & development, alliance, and joint venture;
individual access to external sources of knowledge; the extent to which individuals in the organization access and
use knowledge from external sources; the extent to which external contacts are willing to share their knowledge;
and complementarity of knowledge from external sources with internal knowledge.
- Tacitness, which has 4 measures: knowledge codifiability; knowledge teachability; knowledge complexity; and
knowledge system dependence.
- Social Integration Mechanism, which has 4 measures: intensity of problem solving through formal teams; intensity of problem solving through informal mechanisms; effectiveness of problem solving through formal teams; and effectiveness of problem solving through informal mechanisms.
- Knowledge integration, which has 5 measures: knowledge usefulness; ease of access to the knowledge repository; knowledge repository usefulness for knowledge sharing; ease of communication between individuals in the organization; and effectiveness of communication for knowledge sharing.
- Potential absorptive capacity, which has 3 measures: frequency of interaction between individuals in the
organization to acquire new knowledge; frequency of interaction between individuals in the organization and
customer or individuals outside the organization to acquire new knowledge; and intensity of knowledge
acquisition through informal mechanisms.
- Realized absorptive capacity, which has 3 measures: the extent to which individuals record new knowledge for further reference; the extent to which individuals understand how to do their jobs and responsibilities; and the extent to which individuals look for better ways of doing their jobs.
- Flow of knowledge, which has 6 measures: flow of procedures and tools between individuals in the organization;
flow of ideas between individuals in the organization; flow of ideas between individuals in the organization and
individuals outside the organization; individuals awareness about knowledge in other individuals or units;
intensity of knowledge exchange between individuals in the organization; and intensity of knowledge exchange
between individuals in the organization and those outside the organization.
7. References
[1] W. Vestal, Measuring Knowledge Management, American Productivity and Quality Center, 2002.
[2] M. R. Trent, Assessing Organization Culture Readiness for Knowledge Management Implementation: The Case of Aeronauti-
cal Systems Center Directorate of Contracting, Thesis, Air-Force Institute of Technology, Wright-Patterson Air Force Base,
Ohio, USA, 2003.
[3] J. W. Boudreau, Strategic Knowledge Measurement and Management, Working Paper, Center for Advanced Human Resources Studies, School of Industrial and Labor Relations, Cornell University, 2002.
[4] S. A. Zahra and G. George, Absorptive Capacity: A Review, Reconceptualization, and Extension, Academy of Management Review, 27 (2), 2002, pp. 185-203.
[5] P. Almeida and B. Kogut, Localization of Knowledge and the Mobility of Engineers in Regional Networks, Management Science, 45 (7), 1999, pp. 905-917.
[6] U. Zander and B. Kogut, Knowledge and the Speed of the Transfer and Imitation of Organization Capabilities: An Empirical
Test, Organization Science, 6 (1), 1995, pp. 76-92.
[7] J. J. P. Jansen, F. A. Bosch, and H. W. Volberda, Managing Potential and Realized Absorptive Capacity: How Do Organiza-
tional Antecedents Matter?, Accepted to be published in The Academy of Management Journal, 2005.
[8] PT. Telekomunikasi Indonesia, Tbk., Menjadi Model Korporasi Terbaik Indonesia, Annual Report 2006, 2007.
[9] R. E. Schumacker and R. G. Lomax, A Beginner's Guide to Structural Equation Modeling, Second Edition, Lawrence Erl-
baum Associates, Publishers, 2004.




Proceeding of Industrial Engineering and Service Science , 2011, September 20-21
Copyright 2011 IESS.
Large Scale Optimization Based on Self-Directed
Local Search
Eman Hasan; Daryl Essam; Ruhul Sarker

University of New South Wales at Australian Defense Force Academy, Canberra, Australia
e.hasan@student.adfa.edu.au, d.essam@adfa.edu.au, r.sarker@adfa.edu.au
ABSTRACT
In this paper we propose a Memetic Algorithm (MA) that is based on a self-directed Local Search (MA-sd-LS) for solving large scale problems using a random grouping decomposition technique. To overcome the difficulties of large dimensionality, the large scale problem is decomposed into smaller subproblems. As the correct subproblem size is hard to determine, the motivation of this work is to investigate the effect that the subproblem size has on the optimization process for problems with different types of structure. This work also considers the role that self-directed Local Search has in guiding the search to the most promising solutions when the MA is applied in the large scale domain. MA-sd-LS has achieved significantly higher performance than one of the Evolutionary Algorithms (EA) in the literature (DECC-CG [1]) in most of the benchmark problems defined in the Special Session on Large Scale Optimization at the IEEE Congress on Evolutionary Computation in 2010.

Keywords: Memetic Algorithms, Evolutionary Algorithms, Local Search, Large Scale Optimization, Problem Decomposition.

1. Introduction
Over the last few decades, solving large scale optimization problems has become a challenging area in the fields of
computer science, operations research, and industry. This is mainly due to both the increased need for high quality
decision making for large scale problems in practice, and also the availability of increased computational power. In
general terms, solving small scale nonlinear optimization problems is not easy. The high dimensionality of the large
scale problems, along with their complex structure and the interdependencies of variables, make them even more difficult to solve when using the same approaches which are used for small scale problems. The optimization of a large scale problem usually consumes a massive amount of computational effort that might go beyond the capabilities of the available computing resources. This has led to the development of optimization algorithms for large scale problems which use a decomposition technique to divide the total computational task of the large scale problem into several smaller subproblems. Moreover, a mechanism is needed to keep the same value for an interdependent variable which has more than one instance in the decomposed subproblems, and the solutions of the subproblems must also later be merged back to generate the complete solution of the problem.
One of the attempts to improve the performance of optimization algorithms for large scale optimization problems is the divide and conquer strategy, which was first introduced by Potter and De Jong [2] and has since then been referred to as Cooperative Coevolution (CC). In this approach, a large optimization problem is decom-
posed into smaller scale subproblems which can then be optimized separately using any optimization algorithm.
Although CC seems a promising framework for large scale optimization problems, its performance varies depend-
ing on the separability of the considered optimization problem [3]. For example, CC is inefficient in solving non-
separable optimization problems [4]. This is due to the fact that CC does not have a systematic way to group the
interdependent variables of a nonseparable problem. When these interdependent variables are optimized in different
subproblems, there will be a major decline in the overall performance of the optimization algorithm [5]. This indi-
cates the importance of grouping interdependent variables in one subproblem.
To overcome the CC limitation, it is necessary to use an appropriate technique to decompose large scale optimization problems, and to also have an efficient mechanism of information exchange for the interdependent variables when they are optimized in one subproblem and have instances in other subproblems. The early efforts for
decomposing large scale problems used a decomposition approach that divided the problem into one variable for
each subproblem (one-dimension based), or into two equal size subproblems (splitting-in-half strategy) [6-7]. Later,
some other approaches were introduced to decompose the large scale problem into many subproblems of a certain
size [8-10]. However, specifying a subproblem size is a compromise between complexity and the algorithm's per-
formance [5, 11]. The smaller the subproblem size in the separable problems, the easier the optimization, but the larger the number of subproblems that must be evaluated. Also, for nonseparable problems, the larger the subproblem size, the better the algorithm performance, but the more complex the optimization [11].
An optimization algorithm can compensate for the complexity of specifying the appropriate subproblem size if
it is powerful and efficient. Over the last few decades, Evolutionary Algorithms (EAs) have successfully proven themselves as efficient optimization techniques [12]. EAs are a set of techniques which have the common feature of being inspired by the natural evolution of species. They have achieved great success on many numerical and combinatorial optimization problems [13], and they can deal with multimodality, discontinuities, and noisy functions [3]. Moreover, EAs have the advantages of being widely applicable as a simple and flexible approach, and of having a robust response to dynamic optimization problems [14]. In spite of these advantages, EAs' performance declines when dealing with large scale optimization problems. However, because EAs are flexible algorithms, they can be hybridized with Local Search (LS) techniques. It is known that hybridizing EAs with other techniques can improve the performance of the optimization process [15]. EAs that have been hybridized with LS are often called Memetic
Algorithms (MA). LS is a technique that iteratively improves its estimate of better solutions by searching in the
local neighborhood of the current solution [16]. Combining EAs with LS to form a MA, has been proven to refine
the search mechanism of optimization algorithms when they are applied to large scale problems [17].
In this work, we present an MA algorithm with a self-directed LS to optimize large scale problems. To investigate its performance, we have applied this algorithm to the specific test suite proposed in the Special Session on Large Scale Continuous Global Optimization at the 2010 IEEE Congress on Evolutionary Computation. The results of the MA were analyzed against DECC-CG [1], and were shown to be comparable in most of the benchmark
problems. This shows the role of the self-directed LS in achieving better performance. When applying the model
with variant subproblem sizes, the algorithm achieves high performance with a large subproblem size in the prob-
lems that contain many interdependent variables and with a small subproblem size for other problems. This reveals
the relationship between the subproblem size and the interdependencies among variables.
The rest of this paper is organized as follows: Section 2 presents our proposed methodology. The experiments are described in Section 3. Section 4 presents the results and analysis. Finally, Section 5 concludes this paper.
2. Proposed Methodology
In this work we propose an MA algorithm with a self-directed LS for solving large scale problems (MA-sd-LS). The proposed model has been applied to 20 benchmark problems [18], where the large problems are decomposed into smaller subproblems so as to investigate the effect of the subproblem size on the algorithm's performance. MA-sd-LS achieves results that are comparable to, or better than, other algorithms in the literature. The decomposition technique used in MA-sd-LS is Random Grouping (RG). RG is one of the techniques that showed significant improvement over the original CC for large scale optimization problems [11]. RG decomposes the large problem by grouping the variables randomly into smaller size subproblems. In this approach, RG increases the probability of grouping dependent variables in the same subproblem, which is recommended for nonseparable problems. Each subproblem is optimized separately, where any EA can be used. In this proposed methodology we use a Genetic Algorithm (GA).
In the GA, Simulated Binary Crossover (SBX) is used to generate offspring y_1 and y_2 from two parents x_1 and x_2 as in (1) and (2), where β is generated from (3) and η is a constant value (η = 2 is used by most practitioners):

y_1 = 0.5 [(1 + β) x_1 + (1 − β) x_2]    (1)
y_2 = 0.5 [(1 − β) x_1 + (1 + β) x_2]    (2)
β = (2u)^(1/(η+1)) if u ≤ 0.5;  β = (1/(2(1 − u)))^(1/(η+1)) otherwise    (3)

where u is a uniform random number in [0, 1]. The mutation operator of the GA is non-uniform mutation. As in non-uniform mutation the step size decreases as the generations increase, the search is uniform in the initial space and becomes more local as the algorithm proceeds [19]. Offspring x_i^t = (x_{i,1}^t, x_{i,2}^t, ..., x_{i,n}^t) are created according to (4) and (5):

x'_{i,j} = x_{i,j} + Δ_{i,j}    (4)
Δ_{i,j} = (UB_j − x_{i,j}) (1 − r^((1 − t/T)^b)) if u ≤ 0.5;  Δ_{i,j} = −(x_{i,j} − LB_j) (1 − r^((1 − t/T)^b)) otherwise    (5)

where r and u are uniform random numbers in [0, 1], t is the current generation, and T is the maximum number of generations.
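For illustration, the sketch below implements the two operators in Python under their usual textbook definitions (this is not the authors' code; the distribution index eta, the shape parameter b, the mutation probability, and the bound handling are assumptions).

```python
import numpy as np

def sbx_crossover(x1, x2, eta=2.0, rng=np.random.default_rng()):
    """Simulated Binary Crossover: returns two offspring of parents x1, x2 (cf. eqs. 1-3)."""
    u = rng.random(len(x1))
    beta = np.where(u <= 0.5,
                    (2.0 * u) ** (1.0 / (eta + 1.0)),
                    (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0)))
    y1 = 0.5 * ((1.0 + beta) * x1 + (1.0 - beta) * x2)
    y2 = 0.5 * ((1.0 - beta) * x1 + (1.0 + beta) * x2)
    return y1, y2

def nonuniform_mutation(x, t, T, lb, ub, b=5.0, p_m=0.1, rng=np.random.default_rng()):
    """Non-uniform mutation: the step shrinks as generation t approaches T (cf. eqs. 4-5)."""
    x = np.array(x, dtype=float)
    for j in range(len(x)):
        if rng.random() < p_m:
            delta = 1.0 - rng.random() ** ((1.0 - t / T) ** b)
            if rng.random() <= 0.5:
                x[j] += (ub[j] - x[j]) * delta      # move towards the upper bound
            else:
                x[j] -= (x[j] - lb[j]) * delta      # move towards the lower bound
    return x
```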
In MAs, LS is the component which is most directly affected by dimensionality. LS is used to explore the neighborhood
around the current solution, and so high dimensionality increases that region and the overall domain. As LS is a com-
putationally expensive tool, for it to be scalable in a large search domain, the number of search iterations made by LS
should be minimized, and each executed iteration should achieve effective improvements to the solution. This is what
motivates the development of the self-directed LS in our proposed algorithm.
In this paper, MA-sd-LS uses an adaptive search step (d) in the LS which is directed towards the good solutions.
This makes the LS scalable so that it can be applied in large search domains, as it only steps forward if it enhances the
solution. The d value is expanded as in (6) after each successful search iteration. If the enlarged d fails, this is an indica-
tion of a local optimum, or that the global optimum is located in the previously searched domain. In this case, the d
value is doubled a few times to push the solution out of the local optimum and to thus expand the search area so that it
might contain the global optimum. If these new d values fail to move the solution forward, then the new d should take a
small step from the last successful d as in (7), to thus extensively concentrate the search in the current search space.
Regardless of the added cost of the adaptation of d, it helps to overcome the difficulty that is imposed by the large di-
mensionality.
d = d + 1/e^d    (6)
d = d + e^(1/d)    (7)
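A minimal sketch of this self-directed step adaptation is given below, assuming update rules of the form shown in (6) and (7); the function signature, the number of doublings, and the improve-only acceptance rule (minimization) are illustrative assumptions rather than the authors' implementation.

```python
import math

def self_directed_step(fitness, x, j, d0=1.0, iters=20, doublings=3):
    """Adapt a search step d for variable j of solution x, keeping only improving moves."""
    d, best = d0, fitness(x)
    for _ in range(iters):
        improved = False
        for step in (+d, -d):                      # try both directions (step 4b)
            trial = x.copy(); trial[j] += step
            f = fitness(trial)
            if f < best:                           # assuming minimization
                x, best, improved = trial, f, True
                break
        if improved:
            d = d + 1.0 / math.exp(d)              # expand d after success, as in (6)
        else:
            escaped = False
            for _ in range(doublings):             # push out of a possible local optimum
                d *= 2.0
                trial = x.copy(); trial[j] += d
                f = fitness(trial)
                if f < best:
                    x, best, escaped = trial, f, True
                    break
            if not escaped:
                d = d + math.exp(1.0 / d)          # corrective step, as in (7)
    return x, best
```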
The present decomposition techniques cannot detect the variables' interdependencies. This means that the decomposed subproblems may include variables which are interdependent with other variables in a different subproblem. Hence, because the subproblems are optimized separately, the different instances of all variables should be maintained during the optimization process to represent the latest value of the optimized interdependent variable. The migration of the interdependent variables to the other subproblems is controlled and taken into consideration in our proposed algorithm. All the variables are collected and the complete solution is updated throughout the optimization process.
Based on the previous discussion, the detailed steps of the proposed methodology are summarized as follows (an illustrative sketch follows the steps):
1. Generate the initial population with size NP for the dimension D variables.
2. Decompose the large scale problem randomly into subproblems sub_k, where k = [1, m] and D = k*m.
3. Optimize each sub_k.
4. Apply LS to one random variable of the few best x_{i,j} of sub_k:
   a. Create d that changes with LS_iter.
   b. Add, and also subtract, d and select the direction that enhances fitness.
   c. Repeat until l = LS_iter:
      i. if fitness increases, then use (6) to enlarge d;
      ii. else, use (7) to decrease d.
5. Copy the value of the optimized variables into all other sub_k.
6. While k <= m go to step 3.
7. If FE < max_FE go to step 2.
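The skeleton below shows how these steps could fit together, reusing the operator and local-search sketches above. It is a simplified reading of the published steps, not the authors' implementation: the per-subproblem GA step is elided, the evaluation bookkeeping is rough, and the dimension is assumed to be a multiple of the subproblem size.

```python
import numpy as np

def ma_sd_ls(fitness, dim, lb, ub, sub_size=100, pop_size=50, max_fe=3_000_000,
             rng=np.random.default_rng()):
    """Simplified MA-sd-LS loop: random grouping, per-subproblem optimization, then LS."""
    pop = rng.uniform(lb, ub, size=(pop_size, dim))       # step 1: initial population
    best = min(pop, key=fitness).copy()
    fe = pop_size
    while fe < max_fe:                                    # step 7: stop on evaluation budget
        # step 2: random grouping (assumes dim is a multiple of sub_size)
        groups = rng.permutation(dim).reshape(-1, sub_size)
        for group in groups:                              # steps 3-6: each subproblem in turn
            # (A GA step on the variables in `group` would go here, e.g. tournament
            #  selection + sbx_crossover + nonuniform_mutation from the sketch above.)
            j = int(rng.choice(group))                    # step 4: LS on one random variable
            best, _ = self_directed_step(fitness, best, j)
            fe += 40                                      # rough book-keeping of LS evaluations
    return best
```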
3. Experiments
MA-sd-LS has been tested on 20 benchmark large scale optimization problems [18]. These problems are designed in four categories: f1 to f3 are separable; f4 to f8 are partially-separable, such that a small number of variables (m=50) are dependent while all the remaining ones are independent; f9 to f18 are partially-separable functions that consist of multiple independent groups, each of which is m-nonseparable; and f19 to f20 are fully nonseparable. The experiments are implemented with subproblem sizes of 5, 50, and 100, and the results are analyzed to investigate the effect of subproblem size on the algorithm's performance. In these experiments, subproblems are optimized sequentially for a certain number of generations. An interdependent variable is optimized in a subproblem and its final value is copied into the other subproblems which contain the dependent variables. After finishing the optimization of all subproblems - which will be referred to as a new cycle - the algorithm starts again with a different random grouping. The subproblems are repeatedly optimized until reaching the stopping criterion, which is the maximum number of fitness evaluations (max_FE) of 3e+6 in these experiments. Parameter settings: tournament selection of size two, Simulated Binary Crossover (SBX), and non-uniform mutation are used. The mutation probability changes adaptively through the generations from 0.15 to 0.1. The LS is self-directed by the performance as in (6) and (7).


4. Results
To analyze the differences directly, we present the results obtained by each algorithm in Table 1. The mean and median values of the best algorithm are marked in bold.
We can conclude the following from Table 1:
- MA-sd-LS achieved the best results in 16 functions out of 20, where 12 of them are partially-separable, two are separable, and two are fully nonseparable.
- MA-sd-LS achieved the best results in all the Rosenbrock functions (f8, f13, f18, and f20).
- MA-sd-LS achieved higher and relatively close results in the Rastrigin functions (f10, f15) and the best results in the Schwefel functions (f12 and f17) only when the variables are partially-separable and consist of multiple independent groups, each of which is m-nonseparable.
- For the separable and fully-nonseparable categories, MA-sd-LS obtains the best results for the multimodal functions (f2, f3, and f20).
- The results of most of the partially-separable problems and one of the nonseparable problems in [18], with subproblem size 100, achieved higher performance than the results obtained by DECC-CG [1].
- As the differences between mean and median are small in most of the functions, we can conclude that the proposed MA-sd-LS algorithm is robust and stable.
Table 1: Results of MA-sd-LS and DECC-CG, subproblem size=100 and FE=3e+06
f1 f2 f3 f4 f5 f6 f7 f8 f9 f10
Mean 3.42e+08 1.85e+02 2.85e-05 2.79e-03 5.61e+11 1.67e+06 4.88e+08 4.15e+07 5.18e-04 7.54e+03
Median 3.38e+08 1.86e+02 2.92e-05 2.87e-03 5.85e+11 1.67e+06 4.87e+08 3.00e+07 5.05e-04 7.61e+03
MA-sd-LS Std. 1.26e+07 4.39e+00 2.40e-06 2.13e-04 1.39e+11 4.72e-02 3.18e+08 3.12e+07 7.73e-05 1.81e+03
Best 3.28e+08 1.81e+02 2.45e-05 2.49e-03 3.66e+11 1.67e+06 1.61e+08 2.21e+06 4.15e-04 7.25e+03
Worst 3.61e+08 1.91e+02 3.04e-05 3.03e-03 6.92e+11 1.67e+06 8.21e+08 7.41e+07 6.23e-04 7.71e+03
Mean 2.93e-07 1.31e+03 1.39e+00 1.70e+13 2.63e+08 4.96e+06 1.63e+08 6.44e+07 3.21e+08 1.06e+04
Median 2.86e-07 1.31e+03 1.39e+00 1.51e+13 2.38e+08 4.80e+06 1.07e+08 6.70e+07 3.18e+08 1.07e+04
DECC-CG Std. 8.62e-08 3.26e+01 9.73e-02 5.37e+12 8.44e+07 8.02e+05 1.37e+08 2.89e+07 3.38e+07 2.95e+02
Best 1.63e-07 1.25e+03 1.20e+00 7.78e+12 1.50e+08 3.89e+06 4.26e+07 6.37e+06 2.66e+08 1.03e+04
Worst 4.84e-07 1.40e+03 1.68e+00 2.65e+13 4.12e+08 7.73e+06 6.23e+08 9.22e+07 3.87e+08 1.17e+04
f11 f12 f13 f14 f15 f16 f17 f18 f19 f20
Mean 1.77e+01 8.01e+03 7.27e+02 2.54e-03 1.52e+04 3.33e+01 1.39e+04 5.10e+03 4.24e+05 1.83e+01
Median 1.77e+01 7.38e+03 6.27e+02 2.58e-03 1.52e+04 3.33e+01 1.32e+04 4.66e+03 4.34e+05 9.44e+00
MA-sd-LS Std. 6.49e-06 1.38e+03 4.96e+02 2.68e-04 3.59e+02 5.47e-06 3.32e+03 2.77e+03 5.60e+04 1.81e+01
Best 1.77e+01 6.76e+03 3.06e+02 2.13e-03 1.47e+04 3.33e+01 1.11e+04 2.49e+03 3.29e+05 1.85e+00
Worst 1.77e+01 1.02e+04 1.58e+03 2.86e-03 1.61e+04 3.33e+01 1.95e+04 9.15e+03 4.71e+05 3.82e+01
Mean 2.34e+01 8.93e+04 5.12e+03 8.08e+08 1.22e+04 7.66e+01 2.87e+05 2.46e+04 1.11e+06 4.06e+03
Median 2.33e+01 8.87e+04 3.00e+03 8.07e+08 1.18e+04 7.51e+01 2.89e+05 2.30e+04 1.11e+06 3.98e+03
DECC-CG Std. 1.78e+00 6.87e+03 3.95e+03 6.07e+07 8.97e+02 8.14e+00 1.98e+04 1.05e+04 5.15e+04 3.66e+02
Best 2.06e+01 7.78e+04 1.78e+03 6.96e+08 1.09e+04 5.97e+01 2.50e+05 5.61e+03 1.02e+06 3.59e+03
Worst 2.79e+01 1.07e+05 1.66e+04 9.06e+08 1.39e+04 9.24e+01 3.26e+05 4.71e+04 1.20e+06 5.32e+03
After analyzing the results and the observations concluded from Table 1, it is obvious that our algorithm is successful in solving different types of functions, whether separable or nonseparable, and all the partially-separable functions except for Rastrigin's (which is originally separable) and Schwefel's (which is originally nonseparable) when a small number of variables are dependent and all the remaining ones are independent (f5, f7, and f15). However, MA-sd-LS has been proven to be successful in solving Rastrigin's and Schwefel's in their original structure (f2 and f19). Although DECC-CG is better on the Ackley function in [20], our proposed algorithm achieved higher performance on all of the Ackley-based functions (f3, f6, f11, and f16) and is comparable to DECC-CG.
The mean results obtained by MA-sd-LS are compared with those obtained by DECC-CG in Table 2 using the Wilcoxon test described in [21]. This test shows that, although there is no significant difference, MA-sd-LS achieves a higher rank than DECC-CG (a sketch of such a paired test follows Table 2).
Table 2: MA-sd-LS versus DECC-CG (Wilcoxon's test with p-value=0.05)
Algorithm R+ R- Sig. difference?
DECC-CG 134 56 No
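The paired comparison reported in Table 2 can be reproduced in outline with SciPy's Wilcoxon signed-rank test; the sketch below uses synthetic placeholder values rather than the actual per-function means.

```python
import numpy as np
from scipy.stats import wilcoxon

# Synthetic placeholder values (NOT the reported results): 20 per-function means each.
rng = np.random.default_rng(0)
decc_cg_means = rng.lognormal(mean=5.0, sigma=2.0, size=20)
ma_sd_ls_means = decc_cg_means * rng.uniform(0.2, 1.5, size=20)

# Two-sided Wilcoxon signed-rank test on the paired differences (alpha = 0.05),
# following the non-parametric comparison procedure described in [21].
stat, p_value = wilcoxon(ma_sd_ls_means, decc_cg_means)
print(f"W = {stat:.1f}, p = {p_value:.4f}, significant at 0.05: {p_value < 0.05}")
```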
We have provided the convergence curves of the average values of problems f2 (shifted Rastrigin's), f3 (shifted Ackley's), f5 (rotated Rastrigin's), f9 (rotated Elliptic), f12 (Schwefel), and f18 (rotated Rosenbrock's) as representative samples for a subproblem size of 100 in Figures 1-6. We observe that f2 and f9 converge quickly over the first fitness evaluations and that the convergence then continues, but more slowly. From the convergence curve of f3, we observe that it achieves fast improvement at the beginning, and then the fitness value is almost stabilized from 1e+06 evaluations to the end. We observe that the curve of f5 has regions of different convergence rates, and from evaluation 2e+06 until 3e+06 the improvement is very slow. The convergence curves of f12 and f18 have a steep slope and continue in a horizontal convergence after 6e+05 evaluations.
[Figures 1-6: Convergence curves (mean results) for f2, f3, f5, f9, f18, and f12, respectively, with a subproblem size of 100.]
The results in Table 3 show the relationship between the performance and the subproblem size. It is clear that the separable problems (such as f1, f2, and f3) achieve better results when they are decomposed into smaller subproblems. Even for the partially-separable problems where the separability is represented by Rastrigin's separable function (such as f5, f10, and f15), we can notice that the smaller the subproblem, the higher the performance. The partially-separable and fully-nonseparable functions (such as f4, f5, f7, f8, f9, f12, f13, f14, f17, f18, f19 and f20) achieve higher performance when they are decomposed into larger subproblems. We can observe from Table 3 that the partially-separable Ackley functions f6, f11, and f16 obtain the same results when decomposed into different subproblem sizes. These remarks from Table 3 indicate how large scale problem decomposition is highly related to the problem's separability.
Table 3: Different subproblem sizes using MA-sd-LS with a dimension of 1000
Size f1 f2 f3 f4 f5 f6 f7 f8 f9 f10
Mean 3.42e+08 1.85e+02 2.85e-05 2.79e-03 5.61e+11 1.67e+06 4.88e+08 4.15e+07 5.18e-04 7.54e+03
Median 3.38e+08 1.86e+02 2.92e-05 2.87e-03 5.85e+11 1.67e+06 4.87e+08 3.00e+07 5.05e-04 7.61e+03
S=100 Std. 1.26e+07 4.39e+00 2.40e-06 2.13e-04 1.39e+11 4.72e-02 3.18e+08 3.12e+07 7.73e-05 1.81e+03
Best 3.28e+08 1.81e+02 2.45e-05 2.49e-03 3.66e+11 1.67e+06 1.61e+08 2.21e+06 4.15e-04 7.25e+03
Worst 3.61e+08 1.91e+02 3.04e-05 3.03e-03 6.92e+11 1.67e+06 8.21e+08 7.41e+07 6.23e-04 7.71e+03
Mean 4.40e+07 1.99e+02 2.09e-05 1.02e-02 5.89e+11 1.67e+06 6.89e+09 5.11e+07 6.07e-04 7.34e+03
Median 4.67e+07 1.95e+02 2.06e-05 1.02e-02 5.32e+11 1.67e+06 7.02e+09 3.78e+07 6.04e-04 7.39e+03
S=50 Std. 6.91e+06 1.94e+01 5.33e-07 1.02e-03 1.40e+10 1.63e-01 1.58e+09 2.42e+07 6.87e-05 3.58e+02
Best 3.22e+07 1.73e+02 2.04e-05 9.20e-03 4.92e+11 1.67e+06 4.97e+09 2.70e+07 5.44e-04 6.78e+03

Worst 4.92e+07 2.21e+02 2.16e-05 1.16e-02 8.37e+11 1.67e+06 9.06e+09 7.89e+07 7.20e-04 7.70e+03
Mean 3.08e+07 5.38e+01 8.46e-06 4.14e-01 5.23e+11 1.67e+06 6.23e+10 6.92e+07 1.08e-02 7.22e+03
Median 2.74e+07 5.50e+01 8.49e-06 2.50e-01 5.22e+11 1.67e+06 5.76e+10 6.22e+07 9.31e-03 7.15e+03
S=5 Std. 2.54e+07 3.93e+00 3.41e-07 3.06e-01 3.17e+10 0.00e+00 1.75e+10 1.28e+07 4.19e-03 2.49e+02
Best 3.03e+05 4.76e+01 7.96e-06 1.39e-01 4.76e+11 1.67e+06 4.23e+10 5.59e+07 5.72e-03 6.96e+03
Worst 6.92e+07 5.76e+01 8.85e-06 8.31e-01 5.63e+11 1.67e+06 8.95e+10 8.32e+07 1.56e-02 7.59e+03
f11 f12 f13 f14 f15 f16 f17 f18 f19 f20
Mean 1.77e+01 8.01e+03 7.27e+02 2.54e-03 1.52e+04 3.33e+01 1.39e+04 5.10e+03 4.24e+05 1.83e+01
Median 1.77e+01 7.38e+03 6.27e+02 2.58e-03 1.52e+04 3.33e+01 1.32e+04 4.66e+03 4.34e+05 9.44e+00
S=100 Std. 6.49e-06 1.38e+03 4.96e+02 2.68e-04 3.59e+02 5.47e-06 3.32e+03 2.77e+03 5.60e+04 1.81e+01
Best 1.77e+01 6.76e+03 3.06e+02 2.13e-03 1.47e+04 3.33e+01 1.11e+04 2.49e+03 3.29e+05 1.85e+00
Worst 1.77e+01 1.02e+04 1.58e+03 2.86e-03 1.61e+04 3.33e+01 1.95e+04 9.15e+03 4.71e+05 3.82e+01
Mean 1.77e+01 1.32e+05 1.36e+03 1.37e-02 1.53e+04 3.33e+01 2.64e+05 2.18e+04 9.97e+05 1.51e+02
Median 1.77e+01 1.35e+05 6.88e+02 1.36e-02 1.55e+04 3.33e+01 2.67e+05 2.32e+04 9.92e+05 1.45e+02
S=50 Std. 9.03e-06 1.49e+04 1.49e+03 1.68e-02 3.70e+02 8.68e-05 1.77e+04 1.02e+04 5.10e+04 1.54e+01
Best 1.77e+01 1.13e+05 6.28e+02 1.20e-02 1.48e+04 3.33e+01 2.56e+05 1.15e+04 9.26e+05 1.33e+02
Worst 1.77e+01 1.52e+05 4.03e+03 1.56e-02 1.58e+04 3.33e+01 2.90e+05 3.56e+04 1.06e+06 1.71e+02
Mean 1.77e+01 6.24e+05 8.09e+02 5.68e-01 1.45e+04 3.33e+01 1.17e+06 2.38e+04 2.57e+06 2.20e+02
Median 1.77e+01 6.16e+05 8.52e+02 5.95e-01 1.44e+04 3.33e+01 1.15e+06 2.21e+04 2.51e+06 2.24e+02
S=5 Std. 5.57e-06 2.86e+04 9.63e+01 9.57e-02 2.09e+02 1.14e-05 4.07e+04 7.86e+03 2.10e+05 1.22e+01
Best 1.77e+01 5.91e+05 6.96e+02 4.25e-01 1.42e+04 3.33e+01 1.14e+06 1.66e+04 2.36e+06 1.99e+02
Worst 1.77e+01 6.63e+05 9.06e+02 6.76e-01 1.48e+04 3.33e+01 1.23e+06 3.38e+04 2.80e+06 2.30e+02
5. Conclusion
In this paper, we have proposed a Memetic Algorithm MA-sd-LS that is based on self-directed Local Search. In it, large
scale problems are decomposed into smaller subproblems which are optimized separately. MA-sd-LS has obtained good
results comparable to the DECC-CG algorithm in most of the separable, partially-separable, and all nonseparable prob-
lems proposed by the organizers of the Special Session of Large Scale Global Optimization, in the IEEE Congress on
Evolutionary Computation 2010 [18]. This emphasises the advantage of using the self-directed LS to guide the search to
the most promising solutions. We have carried out empirical studies to analyze how the subproblem size affects the
performance of the optimization algorithm, following the benchmark problems [18]. Experiments have shown that there
is a relationship between the performance and the subproblem size. The separable problems achieve better results when they are decomposed into smaller subproblems, and this also applies to some of the partially-separable problems. However, the partially-separable problems that are built from nonseparable functions, as well as the fully-nonseparable functions, achieve higher performance when they are decomposed into larger subproblems. To get better results, the problem separability should therefore be known ahead of starting the optimisation. This conclusion indicates the importance of a systematic approach that can determine the problem structure that suits a certain subproblem size. Our future work will focus more on problem identification before decomposition, in order to make more accurate groupings of the interdependent variables and to specify the most appropriate subproblem size. As the decomposition of subproblems is applicable for optimization in a parallel computing environment, this will also be implemented in future research.
6. References
[1] Yang, Z., K. Tang, and X. Yao, Large scale evolutionary optimization using cooperative coevolution. Information Sciences,
2008. 178: p. 2986-2999.
[2] M. A. Potter and K. A. De Jong. A Cooperative Coevolutionary Approach to Function Optimization. in The Third Parallel Problem Solving From Nature. 1994. Berlin, Germany: Springer-Verlag.
[3] T. Ray and X. Yao. A Cooperative Coevolutionary Algorithm with Correlation Based Adaptive Variable Partitioning. in IEEE
Congress on Evolutionary Computation. 2009.
[4] M. N. Omidvar, X. Li, Z. Yang, and X. Yao. Cooperative co-evolution for large scale optimization through more frequent
random grouping. 2010.
[5] M. Omidvar, X. Li, and X. Yao. Cooperative Co-evolution with Delta Grouping for Large Scale Non-separable Function Op-
timization. in 2010 IEEE World Congress on Computational Intelligence. 2010. Barcelona, Spain.
[6] Potter, M. and K. De Jong, A cooperative coevolutionary approach to function optimization, in Parallel Problem Solving from
Nature PPSN III, Y. Davidor, H.-P. Schwefel, and R. Männer, Editors. 1994, Springer Berlin / Heidelberg. p. 249-257.
[7] Potter, M.A. and K.A. De Jong, Cooperative Coevolution: An Architecture for Evolving Coadapted Subcomponents. Evolutionary Computation, 2000. 8(1): p. 1-29.
[8] Y. Liu, X. Yao, Q. Zhao, and T. Higuchi. Scaling Up Fast Evolutionary Programming with Cooperative Coevolution. in Congress on Evolutionary Computation. 2001.
[9] X. Li and X. Yao. Tackling high dimensional nonseparable optimization problems by cooperatively coevolving particle
swarms. in IEEE Congress on Evolutionary Computation (CEC) 2009. 2009. Trondheim, Norway IEEE.
[10] Z. Yang, K. Tang, and X. Yao. Differential Evolution for High-Dimensional Function Optimization. in 2007 IEEE Congress on
Evolutionary Computation. 2007.
[11] Z. Yang, J. Zhang, K. Tang, X. Yao, and A. Sanderson. An adaptive coevolutionary differential evolution algorithm for
large-scale optimization. in Proceedings of the Eleventh conference on Congress on Evolutionary Computation. 2009. Trond-
heim, Norway: IEEE Press.
[12] Sareni, B., L. Krahenbuhl, and A. Nicolas, Efficient genetic algorithms for solving hard constrained optimization problems.
Magnetics, IEEE Transactions on, 2000. 36(4): p. 1027-1030.
[13] R. Sarker, M. Mohammadian, and X. Yao, Evolutionary Optimization. 2002, MA, USA: Kluwer Academic Publishers Nor-
well.
[14] Lozano, M. and C. García-Martínez, Hybrid metaheuristics with evolutionary algorithms specializing in intensification and diversification: Overview and progress report. Computers & Operations Research, 2010. 37(3): p. 481-497.
[15] Davis., L., Handbook of Genetic Algorithms. 1991, New York: Van Nostrand Reinhold.
[16] Hart, W.E., Adaptive Global Optimization with Local Search. 1994, University of California at San Diego, La Jolla, CA, USA.
[17] Zhao, S.Z., J.J. Liang, P.N. Suganthan, and M.F. Tasgetiren. Dynamic multi-swarm particle swarm optimizer with local search
for Large Scale Global Optimization. in Evolutionary Computation, 2008. CEC 2008. (IEEE World Congress on Computa-
tional Intelligence). IEEE Congress on. 2008.
[18] Tang, K., X. Li, P.N. Suganthan, Z. Yang, and T. Weise, Benchmark Functions for the CEC 2010 Special Session and Compe-
tition on Large Scale Global Optimization. Technical report, in Nature Inspired Computation and Applications Laboratory.
2009: USTC, China.
[19] Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs. 1992, New York: Springer-Verlag.
[20] Molina, D., M. Lozano, A. Sánchez, and F. Herrera, Memetic algorithms based on local search chains for large scale continuous optimisation problems: MA-SSW-Chains. Soft Computing - A Fusion of Foundations, Methodologies and Applications, 2010: p. 1-20.
2010: p. 1-20.
[21] S. García, D. Molina, M. Lozano, and F. Herrera, A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: A case study on the CEC2005 special session on real parameter optimization. Journal of Heuristics, 2009. 15: p. 617-644.



Proceeding of Industrial Engineering and Service Science, 2011, September 20-21
Copyright 2011 IESS.
A Recovery Model for a Production-Inventory
System with Transportation Disruption
Hawa Hishamuddin, Ruhul Sarker, Daryl Essam

School of Engineering and IT, University of New South Wales, ADFA Campus, Northcott Drive, Canberra 2600, Australia
h.hishamuddin@student.adfa.edu.au, r.sarker@adfa.edu.au, d.essam@adfa.edu.au
ABSTRACT
Supply chains (SC) are becoming increasingly competitive and complex in order to effectively meet customer demands. The nature and complexity of SCs make them vulnerable to various risks, including disruptions due to interruptions in supply, transportation and many other sources. In the presence of a disruption, managers are required to make quick and reliable decisions to recover from the unexpected event at minimal cost. In this study, a recovery
model is proposed for a two stage production and inventory system that experiences a transportation disruption. The
model is capable of determining the optimal ordering and production quantities during the recovery window such that
the total relevant costs are minimized, while seeking to recover the original schedule. Such tools are useful to assist
managers in effective decision making in response to disruptions, in particular when determining the optimal recovery
strategy for the longevity and sustainability of their businesses.

Keywords: Transportation disruption, recovery model, two stage inventory-production system, supply chain

1. Introduction
SC disruption is defined as an event that interrupts the normal course of operations of the affected SC entities. Disrup-
tions can be caused by internal or external sources to the SC, including machine breakdowns, transportation failure,
natural disasters, labor dispute, terrorism, war, and political instability. In recent years, we have come to see many dis-
ruption occurrences that have severely affected SCs [1]. Transportation disruption is slightly different from other forms
of SC disruptions, in that it only stops the flow of goods, whereas other disruptions may stop the production of goods as
well. It is distinctive in that the goods in transit have halted, even though the other operations of the SC are intact [2].
It is crucial that managers take appropriate preparatory measures of response, such as mitigation or contingency
strategies, to reduce the negative effects of these disruptions [3]. One of the goals of Disruption Management is to im-
plement the correct strategies that will enable a SC to quickly return to its original state, while minimizing the relevant
costs associated with recovery from the disruption [4].
In the literature on supply uncertainty or supply-disruption, where the supplier is not always available, numerous
studies have been performed for inventory models under the continuous review [5], [6] and the periodic review frame-
works [7], [8]. Although SC disruption in general has recently gained the interest of many researchers, the study on
transportation disruption in particular has received much less attention. Wilson [2] investigates the effect of transporta-
tion disruption on SC performance using system dynamics. The work concluded that the most severe impact is experi-
enced when transportation disruption exists between the tier 1 supplier and the warehouse. Studies of transportation
disruption can also be found in the literature on Emergency Logistics Scheduling, which is the integration of machine
scheduling and job distribution to customers with the consideration of disruption events [9]. Another study has been
conducted in the area of the Vehicle Routing Problem (VRP) by Sun et al.[10], who presented a hybrid knowledge rep-
resentation framework for disruption management problems in urban distribution decisions.
The model proposed in this paper studies a real-time rescheduling mechanism for an economic lot sizing problem
of a two stage SC system subject to transportation disruption. The recovery model is different from the works men-
tioned earlier in a number of ways. Our problem differs from Xia et al.'s model [11], in that to make our model more
realistic, disruption is in the form of a transportation disruption, which is not known a priori. Additionally, we have
considered penalty costs, as well as stock-out costs consisting of both backorder and lost sales costs. Unlike [6], [7], we
do not assume that the inter-arrival time of supply disruptions and the duration of the disruptions are exponentially
distributed; rather, these two parameters are treated as random variables in our model. As the
key contribution, we introduce a novel approach that determines the optimal recovery plan for a two stage produc-
tion-inventory system, subject to the system's costs and constraints.
The contents of the paper are organized as follows. Section 2 discusses the model development. This section in-
cludes derivation of the cost functions. Section 3 deals with the solution approach for the model. Section 4 addresses the
related computational results and analysis. Lastly, section 5 summarizes our research findings and offers potential di-
rections for future research.
2. Model Formulation
2.1. System Description
In this paper, we consider a two stage production and inventory system consisting of a manufacturer and a retailer. The
manufacturer has production and inventory, and thus follows the economic production quantity model, while the retailer
only has inventory and follows the economic order quantity model. The notations used in developing the cost function
are as follows:
A_1    setup cost for the first stage ($/setup)
A_2    ordering cost for the second stage ($/order)
D    demand rate for the system (units/year)
H_1, H_2    annual inventory cost for stage 1 and 2 ($/unit/year)
P    production rate (units/year)
Q_1    production lot size for stage 1 in the original schedule (units)
Q_2    ordering lot size for stage 2 in the original schedule (units)
X_i    production lot size of cycle i in the recovery schedule for stage 1 (units)
S_i    order lot size of cycle i in the recovery schedule for stage 2 (units)
Bq    back order quantity for stage 2
Lq    lost sales quantity for stage 2
T_d    disruption period
    production up time for a normal cycle (Q/P)
u    production down time for a normal cycle
t_e    start of recovery time window
t_f    end of recovery time window
T    production cycle time for a normal cycle (Q/D)
B_1, B_2    unit back order cost per unit time for stage 1 and 2 ($/unit/time)
L_1, L_2    unit lost sales cost for stage 1 and 2 ($/unit)
C_T    unit transportation cost for each delivery ($/shipment)
W    warehouse capacity for stage 2 (units)
T_1i    production time for cycle i in the recovery window for stage 1
T_2i    consumption time for cycle i in the recovery window for stage 2
n    number of cycles in the recovery window
m    number of lots in the recovery window
z    number of optimal production lots in the recovery window
I_i    inventory level at the end of cycle i in the recovery window
f_1    the penalty function for the delay in recovering the original schedule in the first stage
f_2    the penalty function for the delay in recovering the original schedule of the second stage handled by stage 1
f_3    the penalty function for the delay in recovering the original schedule in stage 2
It is assumed that the demand rate is less than the production rate, i.e. D < P. As a preliminary study, we have chosen
the lot-for-lot policy to be applied to the model. For this particular type of shipment policy, the manufacturing lot
size for the first stage is equal to the ordering lot size of the second stage (Q_1 = Q_2 = Q) under ideal conditions, due to
coordination of the two stage system. The current production-inventory system is a modified version of the model pro-
posed by Banerjee [12], where the optimal production lot size (Q) is:

Q = \sqrt{ \frac{2 (A_1 + A_2) P D}{H_1 D + H_2 P} }    (1)
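As a quick numerical illustration of equation (1) as reconstructed above, the short Python sketch below computes Q using the cost parameters of test instance 1 in Table 1; the production and demand rates P and D are assumed values chosen only for this example.

import math

# Cost parameters taken from test instance 1 in Table 1; P and D are assumed
# here purely for illustration (the paper does not list them).
A1, A2 = 200.0, 20.0      # setup cost (stage 1), ordering cost (stage 2)
H1, H2 = 1.2, 1.8         # annual holding costs for stages 1 and 2
P, D = 12000.0, 9000.0    # production and demand rates (units/year), D < P

# Equation (1): optimal production lot size of the coordinated two-stage system.
Q = math.sqrt(2.0 * (A1 + A2) * P * D / (H1 * D + H2 * P))
print("Optimal lot size Q =", round(Q, 1))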
However, our model assumes that a truck experiences a disruption that prevents the goods from being dispatched to the
retailer as scheduled. The disruption may be caused by a major accident involving the truck or it can be due to natural
disasters, such as floods, earthquakes, or snow blizzards, which disrupt the truck from operating normally. In addition,
the disruption may or may not cause damage to the finished goods being transported. This paper investigates the dam-
aged lot case.
In the case where the goods are damaged, the manufacturer has to reproduce the damaged lot in order to satisfy
demand. Therefore, there will be changes in the original production schedule at the manufacturer's side. The retailer
only receives the goods after the production of the damaged lot is completed, and this delay will result in shortages to
the retailer. Furthermore, no shortages are allowed during the subsequent cycles following the disruption. Moreover, our
model assumes that the transportation has unlimited capacity. It is assumed that the retailer owns a limited amount of
warehouse capacity (W). In addition, it is assumed that the second stage follows the zero-order inventory policy, where
an order is made only when the on-hand inventory reaches zero. Therefore, there is no inventory at the end of each cy-
cle. The first stage, on the other hand, can have left over inventory at the end of the cycles in the recovery window.
The objective of the problem is to determine the new recovery plan, consisting of the optimal order quantities for
the retailer and the production quantities for the manufacturer, so as to minimize the total recovery cost for the system.
Additionally, the aim is to return back to the original manufacturing and ordering schedule as soon as possible. This has
been the common purpose of various models in the disruption management literature and our model has the same aim.
Note that recovery is achieved when both stages are back to their original schedule. The duration in which the schedule
is allowed to have changes to achieve recovery is defined as the recovery time window [11], [13], which will be n cycle
times from the start of disruption. Extra costs are incurred in order to recover the system from the disruption, including
backorder (B_1, B_2), lost sales (L_1, L_2) and penalty costs (f_1, f_2, f_3) for both the manufacturer and the retailer. Similar to
our previous work [13], our model assumes that the pre-disruption period in the disruptive cycle is zero.
2.2. Mathematical Representation
Let z be the optimal number of production lots in the recovery window, n be the number of cycles in the recovery win-
dow, m be the number of lots i.e. the demand to be satisfied and y be a binary parameter to represent the state of goods.
The relationship between n, m and y can be stated as follows:
m = n + y, \quad \text{where } y = \begin{cases} 1 & \text{if the lot is damaged} \\ 0 & \text{if the lot is undamaged} \end{cases}    (2)
Figure 1: Production Inventory Curve for a two stage SC for the damaged lot case

Figure 1 depicts the inventory lines for stage 1 (manufacturer) and stage 2 (retailer). The dotted lines represent the
original non-disruption schedule, whereas the solid lines represent the new recovery schedule with the presence of dis-
ruption. The stripe-shaded triangle shows the amount of shortages, consisting of backorders (Bq) and lost sales (Lq),
incurred by the retailer during the disruption period, T_d. In this figure, we have n = 3 recovery cycles and z = 4 produc-
tion lots for the recovery time window (t_e to t_f). We define the decision variable X_i as the production quantity for cycle i
in the recovery time window for the first stage (manufacturer) and T_1i as its respective production time, where i = 1, 2, ..., n.
The second decision variable, S_i, is the ordering quantity for cycle i in the recovery window for the second stage (re-
tailer) and T_2i is its respective consumption time. After a disruption of T_d occurs, recovery takes place by utilizing the
production idle times in the original schedule. The time horizon is finite, such that only the costs in the recovery
window are considered. The total cost function considered is the sum of the average setup, inventory, transportation and
penalty costs per unit time, plus the total costs for shortages (backorders and lost sales). Given that shortages only occur
during the first cycle, we find that it is better to record them as a total and not as a time average.
2.3. Damaged Goods
For this particular case, we assume that the goods being transported are damaged during the disruption (y = 1). The total
costs for the second stage will first be formulated, as this will ease determining the costs for the first stage later, since
the production schedule for the first stage is dependent on the order schedule of the second stage. The ordering cost
for the second stage is rather straightforward and can be obtained by:

A_2 (z - 1)    (3)
The inventory holding cost is derived as the unit inventory holding cost, H, multiplied by the total inventory dur-
ing the recovery time, which is equivalent to the area under the curve. This is calculated as:

\frac{H_2}{2} \left[ (S_1 - Bq) T_{21} + S_2 T_{22} + S_3 T_{23} + \cdots \right] = \frac{H_2}{2D} \left[ (S_1 - Bq)^2 + \sum_{i=2}^{z-1} S_i^2 \right]    (4)
The backorder cost formulation for the second stage can be derived as:
\frac{B_2 T_d}{2} \left( T_d D - Lq \right)    (5)
Finally, the lost sales cost is obtained as:
L_2 \left( nQ - \sum_{i=1}^{z-1} S_i \right)    (6)

The penalty function derived in this model is based on the assumption that the longer it takes to recover the original
schedule, the higher the associated penalties. These penalties represent the extra costs incurred by the system when
there are changes in the original plan. Here we have derived it as a function of the number of recovery cycles, as indi-
cated below:

f_3 (n^2)    (7)

The sum of all the cost components above gives the total relevant costs of the recovery plan for the second stage, as
presented below:
TC_2(S_i, z) = \frac{1}{nT} \left[ A_2 (z-1) + \frac{H_2}{2D} \left( (S_1 - Bq)^2 + \sum_{i=2}^{z-1} S_i^2 \right) + f_3(n^2) \right] + \frac{B_2 T_d}{2} \left( T_d D - Lq \right) + L_2 \left( nQ - \sum_{i=1}^{z-1} S_i \right)    (8)
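Written out as code, the second-stage cost of equation (8), in the form reconstructed above, is a direct translation of equations (3)-(7). The sketch below is illustrative only: it assumes the order schedule S is passed as a list of z values with the last entry fixed at Q, and the penalty function f3 is left as a user-supplied callable because its exact form is not specified beyond f_3(n^2).

def total_cost_stage2(S, z, n, Q, T, Td, A2, H2, B2, L2, D, Bq, Lq,
                      f3=lambda v: 0.0):
    """Second-stage (retailer) recovery cost, equation (8) as reconstructed:
    ordering, holding and penalty costs averaged over the recovery window nT,
    plus total backorder and lost-sales costs."""
    ordering = A2 * (z - 1)                                           # eq. (3)
    holding = (H2 / (2.0 * D)) * ((S[0] - Bq) ** 2
                                  + sum(s ** 2 for s in S[1:z - 1]))  # eq. (4)
    penalty = f3(n ** 2)                                              # eq. (7)
    backorder = (B2 * Td / 2.0) * (Td * D - Lq)                       # eq. (5)
    lost_sales = L2 * (n * Q - sum(S[:z - 1]))                        # eq. (6)
    return (ordering + holding + penalty) / (n * T) + backorder + lost_sales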

Next, the total relevant costs for recovery for the first stage will be calculated. The setup cost is given by:

A_1 z    (9)

Let us define I_i as the inventory level at the end of cycle i in the recovery window, where

I_i = I_{i-1} + X_i - S_i \quad \text{for } i = 1, 2, \ldots, z    (10)
The inventory cost for the first stage is:

H_1 \left[ I_0 T_{11} + \tfrac{1}{2} X_1 T_{11} + I_1 T_{12} + \tfrac{1}{2} X_2 T_{12} + I_2 T_{13} + \tfrac{1}{2} X_3 T_{13} + I_3 T_{14} + \tfrac{1}{2} X_4 T_{14} + \cdots \right] = H_1 \sum_{i=1}^{z} \left( I_{i-1} + \frac{X_i}{2} \right) \frac{X_i}{P}    (11)

For this model, it is assumed that the manufacturer incurs a penalty for the backorders and lost sales of the retailer.
In other words, the manufacturer incurs a cost whenever a customer is unable to purchase the manufacturer's product
from the retailer. The backorder cost and the lost sales cost for the manufacturer follow the concept of Cachon and
Zipkin [14] and are given by equations (12) and (13), respectively.
\frac{B_1 T_d}{2} \left( T_d D - Lq \right)    (12)

L_1 \left( Q y + nQ - \sum_{i=1}^{z-1} S_i \right)    (13)

Instead of having a parameter that constitutes the fraction of shortages that are backordered or lost, like most mod-
els do [15], our model determines this by way of optimization to ensure the overall cost of the system is minimized.
Notice that for the lost sales cost formulation, we have considered the damaged lot as lost sales, which is given by Q(y).
The transportation cost for each delivery can be formulated as:

C_T (z - 1)    (14)

Lastly, the penalty for delay in recovery is given as f_1(n^2) + f_2(n^2). Thus, the first stage's total relevant cost for the
recovery plan is represented as follows:
recovery plan is represented as follows:
TC_1(X_i, z) = \frac{1}{nT} \left[ A_1 z + H_1 \sum_{i=1}^{z} \left( I_{i-1} + \frac{X_i}{2} \right) \frac{X_i}{P} + C_T (z-1) + f_1(n^2) + f_2(n^2) \right] + \frac{B_1 T_d}{2} \left( T_d D - Lq \right) + L_1 \left( Q y + nQ - \sum_{i=1}^{z-1} S_i \right)    (15)
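The first-stage cost of equation (15) can be sketched in the same spirit; the end-of-cycle inventories are built from equation (10) with I_0 = 0, and the penalty functions f1 and f2 are again left as user-supplied callables. This is an illustration of the reconstructed formulas, not the authors' implementation.

def total_cost_stage1(X, S, z, n, Q, T, Td, A1, H1, B1, L1, CT, P, D, Lq, y,
                      f1=lambda v: 0.0, f2=lambda v: 0.0):
    """First-stage (manufacturer) recovery cost, equation (15) as reconstructed.
    X and S each hold z values; inventories follow equation (10) with I_0 = 0."""
    I = [0.0]
    for i in range(z):
        I.append(I[-1] + X[i] - S[i])                                 # eq. (10)
    setup = A1 * z                                                    # eq. (9)
    holding = H1 * sum((I[i] + X[i] / 2.0) * (X[i] / P)
                       for i in range(z))                             # eq. (11)
    transport = CT * (z - 1)                                          # eq. (14)
    penalty = f1(n ** 2) + f2(n ** 2)
    backorder = (B1 * Td / 2.0) * (Td * D - Lq)                       # eq. (12)
    lost_sales = L1 * (Q * y + n * Q - sum(S[:z - 1]))                # eq. (13)
    return (setup + holding + transport + penalty) / (n * T) + backorder + lost_sales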

The optimal recovery plan for the damaged lot case is obtained by solving the following mathematical problem,
which is minimizing the total cost of recovery for the two-stage system:
\min \left[ TC_1(X_i,\ i = 1, \ldots, z) + TC_2(S_i,\ i = 1, \ldots, z) \right]    (16)

subject to the following constraints (17)-(22):

S_i \le W \quad \text{for } i = 1, 2, \ldots, z-1    (17)

\sum_{i=1}^{z} X_i \le nPT    (18)

\sum_{i=1}^{z} X_i \ge mTD - Lq    (19)

\sum_{j=1}^{i} \frac{X_j}{P} \le \sum_{j=1}^{i-1} \frac{S_j}{D} - \frac{Bq}{D} \quad \text{for } i = 2, \ldots, z    (20)

I_0 = I_z = 0    (21)

S_z = Q    (22)
The objective function (16) comprises the two total cost components of the first stage (15) and the second stage
(8). Equation (17) ensures that the inventory storage at the retailer side does not exceed its warehouse capacity. Equa-
tion (18) represents the production capacity constraint, whereas (19) ensures that the total demand during the recovery
period is accounted for. Equation (20) ensures that the retailer receives each of its shipments on time and never runs out
of stock. Equation (21) states that there is zero inventory at the start and end of the recovery window, and (22) guaran-
tees recovery of the original schedule after m cycles. The above model can be categorized as a constrained integer
nonlinear programming model.
By solving the above model (16) for X_i, S_i and z subject to the constraints (17)-(22), one can obtain the optimal
recovery plan for the two stage SC system under disruption. Without disruption, this model will reduce to the original
model as in (1) that was presented earlier.
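Before evaluating costs, a candidate plan can be screened against constraints (17)-(22). The sketch below uses the constraint forms reconstructed above (constraint (20), in particular, is a reconstruction from a garbled source) and is only a feasibility filter, not the authors' LINGO or MATLAB code.

def feasible(X, S, z, n, m, Q, P, D, T, W, Bq, Lq, tol=1e-6):
    """Check a candidate recovery plan (X, S), each of length z, against the
    reconstructed constraints (17)-(22); I_0 is taken as zero."""
    if any(S[i] > W + tol for i in range(z - 1)):          # (17) warehouse capacity
        return False
    if sum(X) > n * P * T + tol:                           # (18) production capacity
        return False
    if sum(X) < m * T * D - Lq - tol:                      # (19) demand coverage
        return False
    for i in range(2, z + 1):                              # (20) on-time shipments
        if sum(X[:i]) / P > sum(S[:i - 1]) / D - Bq / D + tol:
            return False
    if abs(sum(X) - sum(S)) > tol:                         # (21) I_0 = I_z = 0
        return False
    return abs(S[z - 1] - Q) <= tol                        # (22) schedule recovered

A simple enumeration over the integer z, combined with any continuous optimizer for X and S, could pair this filter with the two cost functions sketched earlier.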
3. Solution Approach
This section presents several numerical examples to demonstrate the applicability of the proposed model in practice,
particularly in determining the new ordering and production schedule in the presence of a transportation disruption.
Two optimization methods have been used to solve the model, namely LINGO 10.0 and the (μ, λ) evolution strategy (ES)
[16] with stochastic ranking [17]. The mathematical model presented in this paper was used to solve five different test
problems. The test problems were generated by arbitrarily changing the cost parameters as well as the disruption dura-
tion. Under the ES method, 30 independent runs were performed based on a (30, 200)-ES with a total of 1750 genera-
tions before termination. The value of P_f used was 0.45. As used in [17], P_f is the probability of using only the objective
function for comparisons in ranking. The solution procedure was coded in MATLAB and executed on an Intel Core
Duo processor with 1.99 GB RAM and a 2.66 GHz CPU.
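For reference, the stochastic ranking procedure of [17] that drives the constraint handling inside the ES can be sketched as follows; this is a generic, textbook-style illustration using P_f = 0.45, not the authors' MATLAB code.

import random

def stochastic_rank(f, phi, pf=0.45, sweeps=None):
    """Rank a population by stochastic ranking [17]: adjacent individuals are
    compared by objective value f when both are feasible (phi == 0) or with
    probability pf, and by constraint violation phi otherwise.
    Returns population indices ordered from best to worst."""
    n = len(f)
    idx = list(range(n))
    sweeps = n if sweeps is None else sweeps
    for _ in range(sweeps):
        swapped = False
        for j in range(n - 1):
            a, b = idx[j], idx[j + 1]
            if (phi[a] == 0 and phi[b] == 0) or random.random() < pf:
                keep = f[a] <= f[b]          # compare by objective value
            else:
                keep = phi[a] <= phi[b]      # compare by constraint violation
            if not keep:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                swapped = True
        if not swapped:
            break
    return idx

In a (30, 200)-ES, the 200 offspring would be ranked this way each generation and the best 30 retained as parents.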
The same test problems were solved using LINGO 10.0 to judge the quality of the solutions. Table 1 summarizes
the results of the experiment, which shows the optimal ordering quantities for the retailer, the optimal production quan-
tities for the manufacturer, the optimal number of recovery cycles, and the best objective values found by the two ap-
proaches. It can be seen that the ES method gives near optimal solutions as compared to the optimal solutions given by
LINGO. Moreover, the computed error of the ES method is exceptionally low (0.0% to 0.578%).
Analysis of the results shows that the solution for the model is highly dependent on the relationship between the
shortage cost parameters. When the backorder cost is lower than the lost sales cost and the rest of the parameters are
fixed, it can be observed that having backorders is more attractive. On the contrary, when the lost sales cost is lower
than the backorder cost, it is preferable to have lost sales in the recovery schedule. It is worth highlighting that the
number of recovery cycles for the latter case will be shorter than for the former case (see test instances 1 and 2). The
extent of the disruption has an effect on the production quantities and z as well. For a large T_d, X_1 is found to be larger,
which in turn yields a lower z value. This finding can be seen when comparing test instances 1 and 3.
Table 1. Parameters for 5 Test Problems

Test Instance | A_1 | A_2 | H_1 | H_2 | B_1 | B_2 | L_1 | L_2 | T_d | z | n | TC (LINGO) | TC (ES) | Error
1 | 200 | 20 | 1.2 | 1.8 | 1 | 1 | 15 | 15 | 0.003 | 4 | 4 | 531144.3 | 534212.3 | 0.578%
2 | 200 | 20 | 1.2 | 1.8 | 1000 | 1000 | 1 | 1 | 0.003 | 2 | 2 | 175017.0 | 175013.9 | 0.002%
3 | 200 | 20 | 1.2 | 1.8 | 1 | 1 | 15 | 15 | 0.03 | 5 | 2 | 569185.1 | 569185.1 | 0.000%
4 | 400 | 25 | 4 | 5 | 2 | 2 | 20 | 20 | 0.03 | 6 | 2 | 958554.5 | 958554.5 | 0.000%
5 | 400 | 25 | 4 | 5 | 1 | 1 | 2 | 2 | 0.008 | 2 | 2 | 338239.4 | 338240.6 | 0.000%

5. Conclusion
In this study, a disruption recovery model for a two stage production and inventory system subject to transportation
disruption was analyzed. The cost structure for the above model was developed for the case where the goods are dam-
aged while being transported. The objective of the study was to determine the optimal ordering and production quanti-
ties for the recovery schedule that yields the minimum relevant costs of the system. To implement the solution proce-
dure, two methods, namely, LINGO and ES with stochastic ranking, were used to obtain optimal solutions for the pro-
posed model. Numerical examples were provided to demonstrate the applicability of the model to real life problems.
Analysis of the results shows that the optimal recovery schedule is highly dependent on the cost parameters and the
length of the disruption. The presented model can assist decision makers who take a pro-active approach in maintaining
business continuity in the event of a transportation disruption in their SC system. Future work will focus on developing
a heuristic as an alternative method to solve the presented model.
6. References
[1] Y. Sheffi, "The resilient enterprise: Overcoming vulnerability for competitive advantage", The MIT Press, Cambridge,
Massachusetts, 2005.
[2] M. C. Wilson, "The impact of transportation disruptions on supply chain performance," Transportation Research Part E:
Logistics and Transportation Review, Vol. 43, No. 4, 2007, pp. 295-320.
[3] B. Tomlin, "On the value of mitigation and contingency strategies for managing supply chain disruption risks," Management
Science, Vol. 52, No. 5, 2006, pp. 639-657.
[4] X. Qi, J. F. Bard, and G. Yu, "Supply chain coordination with demand disruptions," Omega, Vol. 32, No. 4, 2004, pp. 301-312.
[5] M. Parlar and D. Berkin, "Future supply uncertainty in eoq models," Naval Research Logistics, Vol. 38, 1991, pp. 107-121.
[6] M. Parlar and D. Perry, "Analysis of a (q, r, t) inventory policy with deterministic and random yields when future supply is
uncertain," European Journal of Operational Research, Vol. 84, 1995, pp. 431-443.
[7] A. Arreola-Risa and G. A. DeCroix, "Inventory management under random supply disruptions and partial backorders," Naval
Research Logistics, Vol. 45, 1998, pp. 687-703.
[8] S. Chopra, G. Reinhardt, and U. Mohan, "The importance of decoupling recurrent and disruption risks in a supply chain,"
Naval Research Logistics, Vol. 54, No. 5, 2007, pp. 544-555.
[9] F. Ke-Jun, H. Xiang-Pei, and W. Xu-Ping, "Research on emergency logistics scheduling model based on disruptions,"
Proceedings of the International Conference on Management Science and Engineering, 2006.
[10] L. Sun, X. Hu, and Y. Fang, "Knowledge representation for disruption management problems in urban distribution decisions,"
Proceedings of the 3rd International Conference on Innovative Computing Information and Control, 2008.
[11] Y. Xia, M.-H. Yang, B. Golany, S. M. Gilbert, and G. Yu, "Real-time disruption management in a two-stage production and
inventory system," IIE Transactions, Vol. 36, 2004, pp. 111-125.
[12] A. Banerjee, "A joint economic-lot-size model for purchaser and vendor," Decision Sciences, Vol. 17, 1986, pp. 292-311.
[13] H. Hishamuddin, R. A. Sarker, and D. Essam, "A recovery model for an economic production quantity problem with
disruption," Proceedings of the International Conference of Industrial Engineering and Engineering Management, Macau,
2010.
[14] G. P. Cachon and P. H. Zipkin, "Competitive and cooperative inventory policies in a two-stage supply chain," Management
Science, Vol. 45, No. 7, 1999, pp. 936-53.
[15] K. S. Park, "Inventory model with partial backorders," International Journal of Systems Science, Vol. 13, No. 12, 1982, pp.
1313-1317.
[16] H.-P. Schwefel, "Evolution and optimum seeking", Wiley, New York, 1995.
[17] T. P. Runarsson and X. Yao, "Stochastic ranking for constrained evolutionary optimization," IEEE Transactions on
Evolutionary Computation, Vol. 4, No. 3, 2000.
A Measurement Framework and Obstacles to
Align Educational System Output with
Employment Demand in Indonesia
Effi Latiffianti*; Yudha Prasetyawan**

Department of Industrial Engineering, Institut Teknologi Sepuluh Nopember (ITS), Surabaya, Indonesia
latiffianti@gmail.com *, yudhaprase@ie.its.ac.id **
ABSTRACT
Currently the Ministry of National Education, Republic of Indonesia, is working on several important programs related
to the educational system, including the program of alignment between the educational system and the employment market. The
program aims to match the output of the educational system to that required in the employment market. In this research,
the relationship between the two is described by the same principle that works between demand and supply, where the
educational system acts as the supply side and the employment market as the demand side. This paper proposes a
measurement framework, the so-called Alignment Index (AI) model, as an index to measure the level of alignment between
the educational system and the employment market in a certain region. This index includes the four dimensions suggested in the
program: quantity, quality, time, and location. Furthermore, challenges to achieving a perfectly aligned system are also
discussed.

Keywords: Alignment Index, Educational System, Employment, Indonesia

1.Introduction
Rapid changes in almost all aspects of life have been under way across the globe. Higher demand for a better quality of
life has forced the emergence of a wide variety of improvement methods. To cope with market changes, millions of
manufacturing and service systems across nations have changed themselves by applying selected improvement alternatives.
Moving forward with the changes certainly needs to be balanced with the improvement of human resources. In the past,
a secondary education might have been sufficient to guarantee economic success, but today the economic health of devel-
oped and developing nations has increasingly come to depend on higher levels of education and more specialized voca-
tional training [1]. However, this is not always the case. For instance, in Indonesia we found that more than 29% of total
unemployment in 2009 had graduated from general and vocational secondary education, while almost 27% were
post-secondary graduates [2]. This fact shows us that a higher level of education may not be a sufficient solution in all
cases, and we believe that there must be an alignment between the outputs produced by the educational system and the hu-
man resource requirements of the employment market.
Most countries in the world are continuously reforming their educational systems to better capitalize on their natural,
social and economic resources [3], and so does Indonesia. The unemployment rate at a certain education level indicates a
mismatch between the educational system and the employment market. This relationship can be described by the same principle
that works between supply and demand: higher availability on one side makes its value to the other side lower, and at
the same cost higher quality products will be preferred over lesser ones. This paper reports only a part of a bigger
research scope and critically examines the alignment level between the educational system as a supply side and the employ-
ment market as a demand side of a region, which later on may be associated with a city/town, a province, or a country. We aim
to provide a general framework to measure the alignment of the region based on four aspects: quality, quantity, time, and
location. The result of this paper is then used for further research involving all captured variables and their interactions in
the system, as well as uncertain behavior, which in turn will affect the average performance of the examined system.
The research is expected to provide important points for policy makers as considerations in policy making.
2. Alignment Index (AI) Model
In the context of the Indonesian educational system, alignment is defined as the effort to match the educational system, as the
supplier of human resources, with the employment market, which requires human resources with a certain grade of competencies
and its variance. This requirement is continuously changing, so the educational system must respond to it as needed. When
a mismatch exists, problems may arise, such as a rising number of unemployed and decreasing productivity due to the
assignment of unsuitable people to certain job positions. Mismatch in this case may occur in terms of quality, quantity,
time, and location [4]. Thus, the alignment index for each education level and field of study should include those four as-
pects in the measurement, as follows:
AI = a(AI_Qt) + b(AI_Q) + c(AI_T) + d(AI_L), with a + b + c + d = 1    (1)

where a, b, c, and d are the desired weights for the alignment index in terms of quantity (AI_Qt), quality (AI_Q), time (AI_T), and
location (AI_L), respectively. Because Indonesia has been implementing a 9-year basic education program, or 12 years
for specific towns/cities due to local policy, the education levels and fields of study to be measured should start from
the secondary school level and others that can be considered equal to secondary education. This includes, but is not lim-
ited to, those mentioned in Table 1. The alignment index of a region can be obtained by aggregating the align-
ment indexes of all levels and fields of study in the related region. In the same way, we can also calculate the alignment
index for the country.
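Equation (1) is a plain weighted aggregation, illustrated by the short sketch below; the sub-index values and the equal weights are purely illustrative (in practice a, b, c and d would be set by the policy maker).

def alignment_index(ai_qt, ai_q, ai_t, ai_l, a=0.25, b=0.25, c=0.25, d=0.25):
    """Equation (1): AI = a*AI_Qt + b*AI_Q + c*AI_T + d*AI_L with a+b+c+d = 1."""
    assert abs(a + b + c + d - 1.0) < 1e-9, "weights must sum to 1"
    return a * ai_qt + b * ai_q + c * ai_t + d * ai_l

# Illustrative sub-index values only (not measured data).
print(alignment_index(ai_qt=0.80, ai_q=0.70, ai_t=0.65, ai_l=0.90))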
Table 1. Education level and fields of study in Indonesia

Levels | Fields of study
General Secondary School | General
Special school for disabled |
Vocational Secondary School | Engineering and technology, Information and communication technology, Healthcare services, Arts, crafts and tourism, Agribusiness and agro-industry, Business and management
Other vocational programs (non school) | Language and technical skills
Vocational post-secondary (Diploma) | Engineering, healthcare services, special education for teacher (Keguruan), language study, social science, business school, economy, language studies
Post-secondary (professions) | Medical school and health sciences, Pharmacy, special education for teacher, and others
Post-secondary non professions | Engineering, business and management, law and social science, science, arts, language studies, and others
Post-graduate non professions |

Source: [5]

2.1. The Quantity Alignment Index (AI_Qt)
The quantity alignment index (AI_Qt) describes the level of alignment between the educational system and employment in terms
of quantity. Ideally the educational system should produce an amount of human resources equal to that required in the
employment market. The closer the AI_Qt value is to 100%, the better the system performance. A value above 100% is
possible, but that is a very rare case and usually only happens in a very specific or narrow area of expertise.

Figure 1. The quantity alignment index measurement model

Basically, the value of AI_Qt can be obtained by simply calculating the ratio between the total number of educated
human resources available in a certain region in year (i) and the number of available employments in the same year and
region. This index should be measured for each level and field of education, which are all then aggregated to obtain
the AI_Qt. The number of available employments should be identified in all possible sectors, including the public sector (govern-
ment employees), manufacturing, farming, services, entrepreneurship, and others. Figure 1 describes how AI_Qt can be meas-
ured.
Having information on available employment in the near future allows the supply side to arrange how many
students to produce in a certain semester or year, by adjusting the number of intake and considering the proportion of stu-
dents graduating from each intake. The interpretation of the index, whether it is good, average, or bad, can be set according to the
achievement target of the measured system.
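A minimal sketch of the AI_Qt calculation described above is given below, assuming counts of graduates and available jobs per (level, field of study) pair are known; aggregating the per-pair ratios as a demand-weighted average is an assumption of this illustration, since the text does not fix the aggregation rule.

def quantity_alignment_index(graduates, jobs):
    """AI_Qt: ratio of available educated human resources to available jobs,
    computed per (level, field) pair and aggregated here as a demand-weighted
    average (aggregation rule assumed)."""
    total_jobs = sum(jobs.values())
    ai = 0.0
    for key, demand in jobs.items():
        ratio = graduates.get(key, 0) / demand if demand else 0.0
        ai += (demand / total_jobs) * ratio
    return ai

# Illustrative counts only.
grads = {("vocational secondary", "engineering"): 900,
         ("post-secondary", "healthcare"): 300}
jobs = {("vocational secondary", "engineering"): 1000,
        ("post-secondary", "healthcare"): 400}
print(round(quantity_alignment_index(grads, jobs), 3))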
2.2. The Quality Alignment Index (AI_Q)
The quality alignment index (AI_Q) measures how well the educational system satisfies the market requirement in terms
of human resource quality. Quality in this case is associated with the competencies of human resources. A good AI_Q is
expected to reduce unbalanced supply-demand conditions; for example, when nursing school capacity is much higher than
the actual number of nurses required, the number of unemployed nursing school graduates will increase.
In this model, competencies should be assessed in two aspects: hard skills and soft skills. While the soft skills score
(SS) may be simply measured as the percentage of an individual's soft skills in comparison with the total soft skills required
in the related employment, the hard skills score (HS) needs to be measured based on the level of study appropriateness, the field
of study appropriateness, and the required hard skills that are successfully fulfilled by the assessed individual (equation 2).

HS = C · B · HP    (2)
AI_Q = x(HS) + y(SS), where x + y = 1    (3)

In equation 2, C denotes the level of study appropriateness (C = 1 if the individual's level of study is equal to the required
criteria, C = 0.71 otherwise), B is the field of study appropriateness (B = 1 if the individual's education back-
ground/field of study is equal to the required criteria, B = 0.71 otherwise), and HP is the average proportion of
all the individual's hard skills to the total required hard skills. It can be calculated as follows:
Table 2. Hard skills assessment

Required competencies | Individual assessment (0-100%)
Competency 1 | _______ %
Competency 2 | _______ %
... | ...
Competency n | _______ %
Individual HP = average value of assessed competencies

The quality alignment index AI_Q can be obtained using equation (3), where the scores of hard skills and soft skills may be
weighted by x and y respectively.
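Equations (2) and (3) can be applied per assessed individual as in the sketch below; the competency percentages and the weights x and y are illustrative, while the 1 / 0.71 appropriateness factors follow the values given in the text.

def hard_skill_score(level_match, field_match, competencies):
    """Equation (2): HS = C * B * HP. C and B are 1 when the individual's level
    and field of study match the job requirement and 0.71 otherwise; HP is the
    average fulfilment of the required hard skills (Table 2)."""
    C = 1.0 if level_match else 0.71
    B = 1.0 if field_match else 0.71
    HP = sum(competencies) / len(competencies)
    return C * B * HP

def quality_alignment_index(HS, SS, x=0.6, y=0.4):
    """Equation (3): AI_Q = x*HS + y*SS with x + y = 1 (weights illustrative)."""
    return x * HS + y * SS

# Illustrative assessment: level matches, field does not, three required skills.
hs = hard_skill_score(True, False, [0.9, 0.8, 0.7])
print(round(quality_alignment_index(hs, SS=0.75), 3))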
2.3. The Location Alignment Index (AI_L)
In terms of location, alignment efforts are made with the purpose of maintaining the fulfillment of human resource requirements in
a specific region. A 100% aligned region should be able to produce graduates that would fill 100% of the avail-
able employment. To measure the location alignment index we use scores for the demand and supply sides as shown in
Table 3.
Table 3. Graduates and Employments Scoring

Score | Graduates in the assessed region (city/town) | Score | Employment in the assessed region (city/town)
1 | Working in the assessed region (G1) | 1 | Filled out by graduates from the city (E1)
1.25 | Working outside the region, but in the same state/province (G2) | 0.75 | Filled out by graduates from other regions, but in the same state/province (E2)
1.5 | Working outside the state/province (G3) | 0.5 | Filled out by graduates from other states/provinces (E3)
1.75 | Working outside the country (G4) | 0.25 | Filled out by graduates from other countries (E4)

Scores should be calculated on both the demand side (equation 5) and the supply side (equation 6), for which the maximum
scores are 100%. The location alignment index is the score ratio between demand and supply, as shown in equation (7).

(5)
(6)
(7)

where AIS_L(i), AID_L(i), and AI_L(i) are the alignment index of the supply side, the alignment index of the demand side, and the location
alignment index, respectively, and G_i and E_i for i = 1, 2, 3, 4 are the numbers for each assessed criterion explained in Table 3. In
this model, the numerator's maximum score is 1, while the denominator will be equal to or greater than 1. Thus, the location
alignment index model will not produce any value larger than 1. It should be underlined that the number of individuals
assessed on the supply side and the demand side may differ, because this index does not attempt to measure alignment in
quantity.
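Because equations (5)-(7) are not legible in the source, the sketch below implements one plausible reading of the location index that is consistent with Table 3 and with the statement that the numerator is at most 1 and the denominator at least 1: each side is scored as a weighted average of its counts, and AI_L is the ratio of the demand-side score to the supply-side score. This interpretation is an assumption of the illustration, not the authors' stated formula.

def location_alignment_index(G, E):
    """Assumed form of equations (5)-(7). G = (G1, G2, G3, G4) counts graduates
    by where they work; E = (E1, E2, E3, E4) counts local jobs by where their
    holders graduated (Table 3)."""
    supply_w = (1.0, 1.25, 1.5, 1.75)
    demand_w = (1.0, 0.75, 0.5, 0.25)
    ais = sum(w * g for w, g in zip(supply_w, G)) / sum(G)   # supply side, >= 1
    aid = sum(w * e for w, e in zip(demand_w, E)) / sum(E)   # demand side, <= 1
    return aid / ais                                         # AI_L, never above 1

# Illustrative counts only.
print(round(location_alignment_index(G=(700, 200, 80, 20),
                                     E=(650, 250, 90, 10)), 3))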
2.4. The Time Alignment Index (AI_T)
The alignment index in terms of time can be measured using several indicators. The average waiting time before a graduate finds
a job is probably the closest indicator giving information about the level of alignment in terms of time. If a system were
well aligned, graduates would normally get hired soon after graduation. A shorter waiting time should indicate that
graduates are produced at (nearly) the right time, when they are needed. While companies may perform the recruitment
process whenever there are job positions to be filled, each educational system has standard lengths of period where
the schedule may only commence at specific times during the year, for example spring and fall. Thus, a time gap will
always exist between the point where students graduate and the point when they find employment. With such a
limitation, it may be better to measure the system using equation (4).

(4)

In equation 4, AI_T is the time alignment index, p_ij is the proportion of graduates hired in less than the sub-period
of the educational system in level i and field of study j, and m and n are the total numbers of education levels and fields of study
observed (see Table 1 as an example). This proportion should be measured for each education sub-period (quarter, se-
mester, or year). For example, in an educational system where graduation occurs each semester, or twice a year, meas-
urement should be made every 6-month period and aggregated for the assessed year, and p_i is the proportion of gradu-
ates that successfully find a job in less than 6 months after graduating in the same year.
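The exact form of equation (4) is likewise not legible in the source; the sketch below assumes the straightforward reading suggested by the description, namely averaging the proportions p_ij over the m education levels and n fields of study observed.

def time_alignment_index(p):
    """Assumed form of equation (4): AI_T as the average of p[i][j], the
    proportion of graduates of level i and field j hired within one education
    sub-period of graduating."""
    m, n = len(p), len(p[0])
    return sum(p[i][j] for i in range(m) for j in range(n)) / (m * n)

# Illustrative proportions only: 2 levels x 3 fields of study.
print(round(time_alignment_index([[0.60, 0.70, 0.50],
                                  [0.80, 0.65, 0.70]]), 3))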
3. Discussion
In the model development, we put more stress on the educational system side rather than the employment side. The
main reason is that, from our perspective, the educational system is more controllable. In practice, government in-
volvement in the form of policies is easier to implement in the educational system than in industries.
Ideally, a perfectly aligned system should be able to produce the exact number of educated human resources with
the required quality of competencies at the time and place they are needed. However, this condition may be difficult to
achieve due to the nature of the population, which is normally found to be larger than the number of available em-
ployments. Furthermore, there are several reasons that would potentially obstruct the realization of a 100% aligned sys-
tem, including:
1. Economies of scale. When the number of human resources needed in a specific field is small, there is no reason to
establish an institution for a single purpose only. For example, suppose 5 fresh graduates of shipbuilding engineering per
year are needed in a certain region in which there is no university or institution of engineering to accommodate the
need. A perfectly aligned system should provide an educational process to produce the required engineers. How-
ever, in this case, it may not be financially feasible to establish a shipbuilding engineering school with
a capacity of 5 per year.
2. Centralized industry. In Indonesia, and probably in most countries in the world, it is often found that regulation
prevents the establishment of industrial facilities in just any place. For environmental, safety, and other reasons,
sometimes the government only allows industries to operate in a specifically dedicated industrial area, and
not all regions (towns or cities) have this kind of industrial park. In this case, it is difficult to prevent
people from migrating from one place to another for employment purposes. Also, it may not be wise to accept the ab-
sence of schools in a certain region just because there is no available employment there.
3. Errors in forecast. Each education level of a specific or general field of study requires a certain length of time to
produce graduates. Thus, forecasts of the number of required employees in the future should be available in
order to prepare the human resources. For example, the decision regarding how many students should be taken
this year into an intake of a 4-year education level should be made based on the number of graduates required 4 years
from now. Of course it is not easy to identify this number, especially because it is very closely related to the strategic
planning of firms. In fact, only a small number of industries (firms) are really involved in establishing and maintaining
direct connections with the educational system (institutions), especially in Indonesia.
4. Human migration. Migration has been a popular issue of discussion over the past decades, not only because the
phenomenon has been occurring in most parts of the world but also due to the interesting facts of the various causes
and impacts related to migration. While employment and economic reasons have been reported as major factors
causing migration, as found in China [6] and the United States [7,8,9], other factors such as educational opportu-
nities, culture, and family also play an important but secondary role [9]. This is probably also the case in Indonesia. Ad-
ditionally, disparities in the development level among regions in Indonesia are suspected to be the root cause of
most of the suggested reasons [10]. For example, the fact that universities in Indonesia are commonly found in relatively
more developed regions would somehow make migration for education reasons unavoidable. Thus, a
well-aligned system can only be expected to reduce migration for employment, education, and economic reasons, but
not migration for other reasons. The occurrence of migration for whatever reason would make it difficult to achieve a
perfect location alignment index (AI_L).
5. Culture and local value. Changes in all aspects of life have been affecting society and its culture. However, some
cultural values remain. Tensions between culture/religion on one side and women's individual human rights
on the other side were identified recently [11], which proves that these values somehow continue to exist. Although
Raden Ajeng Kartini, one of Indonesia's national heroines, pioneered the emancipation of women in Indonesia long
ago, some cultural and religious values related to women have not changed significantly in several cases.
This is especially true for less developed regions where society tends to be conservative. For instance, in some
parts of Indonesia, women are expected to stay within the home and family. In such a society, a woman would proba-
bly choose to stay at home after completing her study rather than be employed, and that would cause the align-
ment index to be lower. In a different case, members of very rich families may choose not to work because they find it not
necessary to do so. This may have nothing to do with the educational system, but it somehow affects the
index value for the assessed system.
6. Close collaboration requirement and organizational purpose. Aligning the educational system and the employment
market requires close collaboration on both sides, which is sometimes hard to maintain due to differences in or-
ganizational purpose. In order to consistently produce the human resources needed by the employment
market, industries are expected to actively communicate any changes they expect from the educational system. On the other
hand, educational system members should also put effort into keeping the system informed about any changes that may
occur in the market. Furthermore, the harder and more important part is how institutions within the educational
system share and play their roles. For example, suppose that in a specific region 100 fresh graduates of electrical engineering are
needed every year while there are three universities offering this subject. To keep the system well aligned, those
three universities are expected to take 100 students in total. This can be very difficult to implement, especially
when the universities (or other educational institutions) act as profit-oriented organizations, as found
in most cases. Hence, government policies should play a part to make it possible.
7. Knowledge and technology limitation. Knowledge and technology advancement often comes from other parts of the
world, especially in developing countries. Knowledge can be transferred by moving people, specific tools, and
technologies, as well as networks that combine people, tools, and routines [12]. Thus, the presence of people as
knowledge owners is sometimes unavoidable when it comes to knowledge transfer efforts. It may not be the first
priority to be avoided, but in the context of alignment, importing people from other regions will significantly re-
duce the location alignment index value.
8. Globalization and the spread of industries across nations. Globalization has made parent companies, facility
locations, and markets borderless. In order to deliver products more quickly and cheaply, agile supply networks are
required. Determining the optimum number and location of factories and distribution centers is crucial to
successfully setting up the network. In doing so, many firms have opened new facilities across nations. To closely su-
pervise the subsidiary companies and transfer knowledge, as well as to copy the success of the parent com-
panies, employees from the parent companies are often placed overseas. The impact of this phenomenon is a lower lo-
cation alignment index value for the regions where the new facilities are located.
While alignment is thought of as a solution to educational and unemployment problems, some parties would probably
disagree with the concept of alignment. Practically, when a system is well aligned, several undesired phenomena
might emerge, such as limited human resource alternatives. In a well-aligned system, the number of required human
resources and the number available will be nearly the same. It means employers will have a smaller
number of candidates to select from. For the most sought-after employers, this may not be a problem, but less
sought-after firms may have to be satisfied with an even smaller number of available candidates to select from.
This condition will also make the decision-making process of hiring and firing more difficult. For example, because the number
of available human resources is small, a firm might rather keep an underperforming employee than go once again
through a selection process in which only a very small number of candidates will participate.
4. Conclusion
The proposed alignment index model can be implemented to measure the alignment level of the educational system and
the employment market in certain regions. This index can be used as an indicator to compare the performance of a system
in a certain region with others. The model also gives information on which aspects within the four assessed dimensions
(quantity, quality, time, and location) require further improvement. However, measurement may not be easy in
this case due to data availability. Thus, a well-structured data gathering method is required to implement the model.
Although the index value suggests that 100% alignment is an ideal condition, it may not be the final purpose of the
alignment program. In fact, how to increase the index is the more important concern in our case. As we all know, abundant
supply in this case is unavoidable, and it involves many variables that interact with each other, some of which are probably not di-
rectly related to the educational system and employment. Further analysis on what impacts may result from an ideal
condition has not been performed yet. In addition, achieving a 100% index is a difficult and challenging task for many
reasons, including: economies of scale, centralized industry, errors in employment forecasts, human migration, culture
and local value, technology limitations, different organizational purposes, globalization and the spread of industries across
nations, as well as the close collaboration required among the parties involved in the system.
Finally, this alignment model can only be seen as a soft approach intended to help evaluate the performance of the
national educational system in its role in the supply-demand relationship with employment, rather than to reach 100%
alignment. The evaluation is then expected to help the government narrow the gap between demand and supply.
5. References
[1] Organization for Economic Cooperation and Development (OECD), Education Policy Analysis 2006-2005, OECD, Paris,
2006.
[2] Badan Pusat Statistik, 2009.
[3] Metzger et al, A Comparative Perspective on the Secondary and Post-Secondary Education Systems in Six Nations: Hong
Kong, Japan, Switzerland, South Korea, Thailand and the United States, Procedia Social and Behavioral Sciences, Vol. 2,
2010, pp. 1511-1519.
[4] Tim Penyelarasan Kementrian Pendidikan Nasional, Kerangka Kerja Penyelarasan Pendidikan dengan Dunia Kerja, 2010.
[5] Directorate of Vocational Secondary School Development Ministry of Indonesia National Education, Data Pokok SMK,
2009, http://datapokok.ditpsmk.net/index.php?prop=&kab=&status=&kk=&bk=&pk=
[6] Z. Liu, Human capital externalities and rural-urban migration: Evidence from rural China, China Economic Review, Vol. 19,
2008, pp. 521-535.
[7] M. J. Greenwood, Research on Internal Migration in the United States: A Survey, Journal of Economic Literature, Vol. 13,
1975, pp. 397-433.
[8] C. C. Roseman, Labor Force Migration, Non-Labor Force Migration, and Non-Employment Reasons for Migration,
Socio-Econ Plan Sci, 1983, Vol. 17, No. 5-6, pp 303-312.
[9] T. Kontuly, K. R. Smith, and T. B. Heaton, Culture as a Determinant of Reasons for Migration, The Social Science Journal,
1995, Vol. 32, No. 2, pp. 179-193.
[10] Fathurrohman, Kerjasama Antar Daerah dalam Penanganan Migrasi dan Persebaran Penduduk, Dialogue JIAKP, Vol. 2, No. 2,
May 2005, pp. 726-734.
[11] B. Winter, Religion, culture and women's human rights: Some general political and theoretical considerations, Women's
Studies International Forum, 2006, Vol. 29, pp. 381-393.
[12] A. C. Inkpen, Knowledge Transfer and International Joint Ventures: the case of NUMMI and General Motors, Strategic
Management Journal, John Wiley & Sons Ltd., 2008, Vol. 29, pp. 447-45.
Government Intervention and Performance:
Evidence from Indonesian State-Owned
Enterprises
Bin Nahadi/Graduate School of Asia Pacific Studies Doctoral Program,

Ritsumeikan Asia Pacific University, Oita, Japan.
binnahadi@gmail.com.
ABSTRACT
In this study, the impact of government interventions on the performance of Indonesian state-owned enterprises is in-
vestigated, using 114 of the total 141 enterprises from the year 2006 to 2009 (456 observations) as a sample. The study is
cross-sectional and estimates how issues of intellectual property, soft budget constraint and political embeddedness affect
the economic performance of enterprises. The form of SOE, the share of state ownership, government loans, capital injec-
tions, the number of government officers/politicians seated on the board of commissioners, as well as government assignments
are used as government intervention proxies. On the other hand, firm performance is represented by ROA and
ROE values. The results show that government ownership, government loans and government assignments have an
adverse impact on SOE performance, while the number of government officers on the supervisory board is the only
variable with a favorable impact on SOEs. The impacts of the remaining government actions are unclear and need to be
investigated further. Finally, possible explanations of each empirical finding are elaborated.

Keywords: Government Intervention, Performance, Indonesia, State-Owned Enterprise

1.Introduction
The role of government in transition economies is undeniably critical, and one of the common channels is through
state-owned enterprises (SOEs). It is widely known that SOEs have been suspected of being ill-governed business entities sig-
nified by high levels of corruption, lack of transparency, as well as severe inefficiency. Many market-based economists be-
lieve that the main reason for such weaknesses is overwhelming government intervention. Therefore, they actively
promote liberalization through the privatization of SOEs. However, this may not be true in all cases.
This paper aims to examine the relationship between the level of government intervention and the performance of
state-owned enterprises. The paper unfolds as follows. In Section 2, the theoretical review is described. Variables and
hypotheses are developed in Section 3; meanwhile, Section 4 depicts the data and methodology. Section 5 presents the results and
findings, and then the discussion is developed in Section 6. The final section concludes the paper.
2. Literature Review in Government Intervention
There are three main issues of government intervention which are elaborated in this paper: the intellectual property
aspect through control and ownership, the budget constraint aspect, and the political embeddedness issue. Each aspect is de-
scribed in the following paragraphs.
2.1. Intellectual Property Aspect
SOEs are business institutions which belong to society as a whole through the proxy of the state. The problem is that if everyone
owns them, then no one actually owns them; as a result, no one has an incentive to utilize the resources effectively and
efficiently. Therefore, many economists suggest assigning property rights by lowering government control and
ownership [1].
The problem believed to be related to ownership is the principal-agent problem that arises when managers do not act
in shareholders' best interest. The deviating management goal often hinders the shareholders' goal of maximizing their
share value. A previous study reveals that the efficient information and incentive structure resulting from the existence of
private ownership is believed to be able to reduce agency problems [2]. Also, it is argued in [3] that another reason why
fully or partially privatized SOEs are said to have fewer agency problems is that those firms have better external and
internal governance mechanisms. Furthermore, the agency problem in the SOE sector is worse than among their peers in the
private sector, since there are two layers of agency problems: owners-to-politicians and politicians-to-managers [4].
2.2.Budget Constraint
SOEs are frequently exploited by governments in emerging economies to produce public necessities, with all costs
incurred being shouldered in turn by the government via loan policies or subsidies [5]. This leads to the situation of the
so-called soft budget constraint. A comprehensive illustration is described in [6]:
The softening of the budget constraint appears when the strict relationship between expenditure and earnings has
been relaxed, because excess expenditures over earnings will be paid by some other institution, typically by the state. A
further condition of softening is that the decision maker expects such external financial assistance with high probability
and this probability is built into his behavior.
From several previous studies, the causes of the soft budget constraint can be categorized as decentralization [7],
paternalism [5], public ownership in socialist economies [8], monopolistic markets [9], and policy burden [10]. In the
context of Indonesian SOEs, the two latter causes are relevant. Some particular industries, such as seaports, airports, and
the defense industry, are still monopolized by SOEs. This is not because of the competitiveness of SOEs but because
either the government has not liberalized the market yet or those industries are not lucrative enough to attract private firms.
It is said in [11] that a soft budget constraint will cause the firm to become less responsive to prices, technological changes,
and unfavorable external conditions, which leads to the rise of organizational slack. In addition, SOEs may not be efficient in
utilizing their financial resources since the capital market cannot discipline SOEs.
2.3 Political Embeddedness
The role of the state as the regulator as well as the owner of SOEs at the same time causes the situation of so-called po-
litical embeddedness, which refers to technical, bureaucratic, or emotional ties to the state and its actors. It includes
wide-ranging and intricate associations; official and unofficial, personal and organizational ties to the state [12]. Given
the existence of the principal-agent problem mentioned earlier, one way utilized by the shareholder to ensure that man-
agement works toward owner-based interests is through a supervisory board. However, it has been quite common that
the members of the supervisory boards of most SOEs have been selected from among bureaucrats from associated departments
or politicians from political parties. As a result, SOEs might be an ideal place for rent-seeking activities by members
of the board of commissioners.
In addition, as mentioned in the issue of the soft budget constraint above, SOEs are often utilized as a vehicle for exe-
cuting the governmental agenda, such as delivering some government assignments. As a result, SOEs will be charged
with multiple tasks, not only as a business entity but also as a government body at the same time.
3. Variables and Hypothesis
To address the issue of property rights assignment and ownership control, two variables are employed: the form of the SOE (FORM) and the percentage of government ownership (OWNERS). Some economists argue that a source of inefficiency in SOEs is the high degree of state control over the firms: the government is more likely to divert the firm's resources to attain its own political or socioeconomic goals [13]. In addition, government control over enterprises is also suspected to be associated with the absence of incentives and the lack of monitoring that would push managers to perform better [14]. Moreover, different forms of state ownership are associated with different levels of involvement of government officials in corporate governance and are therefore likely to lead to different performance [15]. Form transformation and privatization can be regarded as ways of defining property rights, and property rights theory suggests that the more clearly property rights are defined, the better the assets will be utilized (governed) [16]. Based on these arguments the following hypotheses are proposed:
H1: Higher government control over SOEs, represented by a more bureaucratic firm form, will have a negative impact on SOE performance;
H2: Higher government control over SOEs, represented by a higher percentage of state ownership, will have a negative impact on SOE performance.
With regard to the soft budget constraint, this study employs two independent variables, namely capital injection (CAPINJ) and government loan (GOVLOAN). In most cases, if an SOE faces severe financial hardship the state will interfere as a lender of last resort, either by providing a loan or by injecting capital. In contrast to a commercial bank loan, which carries rigid requirements and a market interest rate, the
government frequently relaxes many requirements so that the SOE can obtain a loan more easily and at a subsidized interest rate. Such government loans give SOEs financial benefits, mainly a lower interest rate, no collateral requirement, and lower transaction costs. In the case of a capital injection the advantages enjoyed by SOEs are even bigger. Nonetheless, both types of government action can create a disincentive for managers to govern the firm properly and efficiently, including in finding the financial resources the firm needs, and may also hinder the sound development of the capital and financial markets. Therefore the following hypotheses are set:
H3: Government loans will have a negative impact on SOE performance;
H4: Capital injections will have a negative impact on SOE performance.
The issue of political embeddedness is examined with two variables: government assignments through public service obligations (PSO) and the number of government officials/politicians sitting on the board of commissioners (OFFBOC). A PSO is a government program to provide basic needs such as electricity, food, medicine, fuel, and transportation. Carrying out a PSO brings the SOE both benefits and costs. The appointed SOE benefits financially from captive revenue plus a certain percentage of profit on each government assignment. Nevertheless, it also implicitly imposes costs. SOEs that rely heavily on government assignments as their main source of revenue are likely to have unproven competitiveness compared with their privately owned peers, which will also harm them financially in the long run. Moreover, too many business transactions with the government and its bureaucrats may induce political rent-seeking activities that undermine the SOE's competitiveness.
Furthermore, the government has appointed active or retired officials from associated ministries and politicians from ruling parties as members of the Board of Commissioners (BOC) in most SOEs. This, too, brings both benefits and costs. The presence of an official on the board can be a source of legitimacy and a facilitator, both in passing government policy to the SOE and in delivering messages from the SOE aimed at influencing policymakers in ways that ultimately benefit the SOE [17]. It can also give the SOE access to resources (such as government projects) controlled by the department or ministry in which the official works.
On the other hand, public choice theory states that politicians maximize their own interest in gaining votes, so firms with less political intervention are more likely to search for better governance [18]. In addition, as representatives of the government, acting officials will usually act on the basis of government interests, which may not be in line with the firm's objectives. Moreover, as argued in [19], the presence of politicians exacerbates the agency problem. This means that the presence of officials on the BOC may impose significant costs on the firm. Given these arguments the following hypotheses are proposed:
H5: Government assignments through public service obligations will have a negative impact on SOE performance;
H6: The number of active/retired officials and politicians on the BOC will have a negative impact on SOE performance.
As dependent variables, this study employs Return on Assets (ROA) and Return on Equity (ROE) as performance measures. Thanks to their simplicity of calculation and their explanatory power, both measures have been used in numerous previous studies, including for Indonesian SOEs [20]. As control variables, equity (EQUITY) and the firm's core business (CORE) are selected to represent the size of the SOE and the industry in which the firm operates, respectively.
4. Data and Methodology
Financial data were collected from the annual reports of 114 SOEs (out of 141 SOEs in total) for the years 2006-2009 (456 observations). This sample covers almost 97% of the population in terms of both assets and sales.
The independent variables are scored as follows (a coding sketch in Python is given after the list):
a. FORM: the SOE is scored 1, 2, or 3 if its form is a public agency, a limited company, or a listed limited company, respectively;
b. Ownership (OWNERS) is the percentage of state ownership, ranging from 0% to 100%;
c. Capital injection (CAPINJ) and government loan (GOVLOAN) are dummy variables. CAPINJ is 0 if the SOE did NOT receive any additional capital injection within the last five years and 1 otherwise; GOVLOAN is 0 if there is NO government long-term loan balance on the SOE's balance sheet and 1 otherwise;
d. The number of officials or politicians sitting on the board of commissioners (OFFBOC) is expressed as a count;
e. PSO is also a dummy variable: an SOE that carries out a government assignment is valued 1 and 0 otherwise;
f. Equity is transformed into its natural logarithm to reduce the possibility of multicollinearity;
g. The type of industry in which the SOE operates is also a dummy variable: 0 for goods production/manufacturing and 1 for service provision.
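To make the coding scheme above concrete, the sketch below shows one way it could be implemented in Python with pandas. The raw column names (legal_form, state_share, and so on) are hypothetical and depend on how the annual-report data were captured; this is an illustration, not the authors' actual data pipeline.

```python
import numpy as np
import pandas as pd

def code_variables(raw: pd.DataFrame) -> pd.DataFrame:
    """Apply the scoring rules (a)-(g) to a raw annual-report DataFrame (hypothetical columns)."""
    df = pd.DataFrame(index=raw.index)
    # a. FORM: 1 = public agency, 2 = limited company, 3 = listed limited company
    df["FORM"] = raw["legal_form"].map({"public_agency": 1, "limited": 2, "listed": 3})
    # b. OWNERS: state ownership as a fraction between 0 and 1
    df["OWNERS"] = raw["state_share"]
    # c. CAPINJ / GOVLOAN dummies
    df["CAPINJ"] = raw["capital_injection_last_5y"].astype(int)
    df["GOVLOAN"] = raw["gov_longterm_loan_balance"].gt(0).astype(int)
    # d. OFFBOC: number of officials/politicians on the board of commissioners
    df["OFFBOC"] = raw["officials_on_boc"]
    # e. PSO dummy
    df["PSO"] = raw["has_pso"].astype(int)
    # f. natural log of equity
    df["ln_EQUITY"] = np.log(raw["equity"])
    # g. CORE: 0 = goods production/manufacturing, 1 = services
    df["CORE"] = (raw["industry"] == "service").astype(int)
    return df
```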
Once all data had been collected and entered, the independent variables were regressed on each dependent variable using the ordinary least squares (OLS) method. The regression equations are written as follows:
ROE = β_0 + β_1 FORM + β_2 GOVLOAN + β_3 OFFBOC + β_4 PSO + β_5 CAPINJ + β_6 OWNERS + β_7 ln.EQUITY + β_8 CORE   (1)

ROA = β_0 + β_1 FORM + β_2 GOVLOAN + β_3 OFFBOC + β_4 PSO + β_5 CAPINJ + β_6 OWNERS + β_7 ln.EQUITY + β_8 CORE   (2)
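As an illustration of how equations (1) and (2) could be estimated, the following sketch uses the OLS routine in statsmodels. It assumes a DataFrame df containing the coded regressors from the earlier sketch together with ROE and ROA columns; it is not the authors' original estimation code.

```python
import statsmodels.api as sm

# df is assumed to hold the coded variables (FORM, ..., CORE) plus ROE and ROA.
X = sm.add_constant(df[["FORM", "GOVLOAN", "OFFBOC", "PSO",
                        "CAPINJ", "OWNERS", "ln_EQUITY", "CORE"]])
roe_model = sm.OLS(df["ROE"], X, missing="drop").fit()   # equation (1)
roa_model = sm.OLS(df["ROA"], X, missing="drop").fit()   # equation (2)
print(roe_model.summary())
print(roa_model.summary())
```

The printed summaries contain coefficients, t statistics, significance levels, and R-squared values of the kind reported in Tables 2 and 4 below.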
5. Results and Findings
Table 1 shows the descriptive statistics and correlations. The average ROE of Indonesian SOEs, 0.085, is relatively low compared with their private competitors. The average number of government officials and politicians on the board of commissioners is 3.32. Mean state ownership is 92%, partly because this study does not include SOEs with minority state ownership (less than 50%), but mainly because the majority of SOEs are still wholly owned by the state. With respect to form, most SOEs are limited corporations. In terms of core business, more SOEs operate in the service industry than in manufacturing. The remaining variables are dummies, so their means simply show the relative proportions over the observations; for instance, the mean of PSO is 0.12, meaning that around 12% of the SOEs in the sample execute a special government program.
Table 1: Descriptive Statistics and Correlations with ROE as Dependent Variable

                 Mean     SD      1       2       3       4       5       6       7       8       9
1 ROE            0.085    0.092
2 FORM           2.070    0.481   0.299
3 GOVLOAN        0.410    0.492  -0.143   0.067
4 OFFBOC         3.320    1.274   0.289  -0.071  -0.039
5 PSO            0.120    0.322   0.096   0.014   0.089   0.244
6 CAPINJ         0.235    0.424  -0.147  -0.283   0.042   0.110   0.300
7 OWNERS         0.926    0.178  -0.247  -0.450   0.053   0.152   0.023   0.182
8 ln.EQUITY     12.044    4.182   0.711   0.250  -0.095   0.364   0.242  -0.072  -0.132
9 CORE           0.630    0.483   0.136  -0.037  -0.412  -0.007   0.068   0.024  -0.004   0.003

Table 2: Coefficients, t Statistics, and Collinearity with ROE as Dependent Variable

             Unstandardized       Standardized                     Correlations                      Collinearity
             B        Std. Error  Beta        t        Sig.   Zero-order  Partial   Part     Tolerance  VIF
(Constant)  -0.093    0.034                   -2.714   0.007
FORM         0.015    0.009        0.080       1.764   0.079   0.299       0.102     0.067   0.704      1.420
GOVLOAN     -0.002    0.008       -0.009      -0.204   0.839  -0.143      -0.012    -0.008   0.786      1.272
OFFBOC       0.007    0.003        0.102       2.397   0.017   0.289       0.138     0.091   0.794      1.259
PSO         -0.023    0.012       -0.080      -1.883   0.061   0.096      -0.108    -0.072   0.799      1.251
CAPINJ      -0.010    0.009       -0.045      -1.059   0.290  -0.147      -0.061    -0.040   0.817      1.224
OWNERS      -0.067    0.022       -0.129      -2.971   0.003  -0.247      -0.170    -0.113   0.770      1.299
ln.EQUITY    0.014    0.001        0.651      14.733   0.000   0.711       0.649     0.561   0.743      1.345
CORE         0.027    0.008        0.140       3.312   0.001   0.136       0.188     0.126   0.813      1.229
N = 307; F = 48.838; R Square = 0.567

Table 2 shows that roughly 57% of the variability of the dependent variable can be explained by the combined independent variables. Based on 307 observations (after omitting some outliers), this is considered high. The Variance Inflation Factor (VIF) and Tolerance statistics remain well clear of the critical values that would signal a multicollinearity problem [21].
Looking at significance, all independent variables except GOVLOAN and CAPINJ have a statistically significant effect on ROE. Although FORM and PSO are not significant at the 5% confidence level, both are significant at the 10% level, and they are therefore still treated as significant in this paper.
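The collinearity diagnostics reported in Tables 2 and 4 (Tolerance and VIF) could be reproduced along the following lines, reusing the design matrix X from the estimation sketch above; this is an illustration of the standard statsmodels routine rather than the authors' procedure.

```python
from statsmodels.stats.outliers_influence import variance_inflation_factor

# VIF and Tolerance (= 1 / VIF) for each regressor, constant excluded from the report.
vifs = {col: variance_inflation_factor(X.values, i)
        for i, col in enumerate(X.columns) if col != "const"}
for name, vif in vifs.items():
    print(f"{name}: VIF = {vif:.3f}, Tolerance = {1 / vif:.3f}")
```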
The second equation differs from the first only in that ROA is used as the dependent variable instead of ROE. The results displayed in Table 3 show a broadly similar picture. The average ROA is considerably low, at 3.2%. What differs slightly is the number of valid observations after removing outliers. With regard to correlation, there are no sharp correlations among the variables, which supports the argument that multicollinearity is negligible. Table 4 shows a moderately high R-square of 0.450 for the regression; a couple of outliers were removed, leaving 270 valid observations. Considering the F score and the Tolerance and VIF values, the model is judged statistically fit. Among the independent variables, only CORE is not significant.
Table 3: Descriptive Statistics and Correlations with ROA as Dependent Variable

                 Mean     SD      1       2       3       4       5       6       7       8       9
1 ROA            0.032    0.025
2 FORM           2.100    0.469  -0.139
3 GOVLOAN        0.440    0.498  -0.292   0.041
4 OFFBOC         3.250    1.205   0.384  -0.065   0.057
5 PSO            0.120    0.328  -0.107   0.062   0.167   0.271
6 CAPINJ         0.181    0.386   0.288  -0.187  -0.054   0.127   0.323
7 OWNERS         0.923    0.176  -0.082  -0.480   0.049   0.141   0.017   0.129
8 ln.EQUITY     12.119    4.080   0.270   0.210   0.003   0.339   0.269   0.052  -0.046
9 CORE           0.670    0.470   0.042   0.002  -0.412  -0.054   0.042   0.020  -0.020  -0.109

Table 4: Coefficients, t Statistics, and Collinearity with ROA as Dependent Variable

             Unstandardized       Standardized                     Correlations                      Collinearity
             B        Std. Error  Beta        t        Sig.   Zero-order  Partial   Part     Tolerance  VIF
(Constant)   0.046    0.012                    3.930   0.000
FORM        -0.010    0.003       -0.192      -3.505   0.001  -0.139      -0.212    -0.161   0.703      1.423
GOVLOAN     -0.011    0.003       -0.225      -4.291   0.000  -0.292      -0.257    -0.197   0.769      1.301
OFFBOC       0.008    0.001        0.385       7.597   0.000   0.384       0.426     0.349   0.820      1.219
PSO         -0.025    0.004       -0.324      -6.102   0.000  -0.107      -0.353    -0.280   0.748      1.336
CAPINJ       0.020    0.003        0.315       6.275   0.000   0.288       0.362     0.288   0.839      1.192
OWNERS      -0.034    0.007       -0.241      -4.559   0.000  -0.082      -0.272    -0.209   0.752      1.330
ln.EQUITY    0.001    0.000        0.240       4.621   0.000   0.270       0.275     0.212   0.782      1.279
CORE         0.000    0.003       -0.001      -0.025   0.980   0.042      -0.002    -0.001   0.793      1.261
N = 270; F = 26.680; R Square = 0.450

From the two regressions discussed above, the impact of each aspect of government intervention can be summarized as follows. Overall, as shown by Table 5, the comparison of the two tests provides strong support for Hypotheses H2, H3, and H5 and rejects Hypothesis H6. The hypotheses regarding SOE form and capital injection, however, are left without a clear answer owing to indecisive results.
6. Discussion
From the findings described above, FORM has a positive impact on ROE, meaning that reducing government control, signalled by the transformation of the SOE's form, is likely to improve SOE performance. However, the opposite result is found in the second equation, with ROA as the dependent variable. A possible reason is that SOEs with less government control have more flexibility in raising capital, either through equity (for instance an initial public offering) or through debt. SOEs with less government control appear to finance their projects with debt rather than equity; keeping equity low pushes ROE up. Interestingly, when performance is measured by ROA the opposite result prevails. This paper argues that SOEs with less government control become less conservative in selecting projects, in the sense that funds obtained from debt have been invested in projects with low returns.
Table 5: The Impact of Each Independent Variable on ROE and ROA

Independent Variable   ROE                        ROA                        Overall Impact
FORM                   Positive                   Negative                   Indecisive
OWNERS                 Negative                   Negative                   Negative
CAPINJ                 Negative (insignificant)   Positive                   Indecisive
GOVLOAN                Negative (insignificant)   Negative                   Negative
OFFBOC                 Positive                   Positive                   Positive
PSO                    Negative                   Negative                   Negative
Ln.EQUITY              Positive                   Positive                   Positive
CORE                   Positive                   Negative (insignificant)   Indecisive

Not surprisingly, both equations show consistent results regarding the impact of ownership on performance: a higher percentage of government ownership leads to poorer performance. The presence of shareholders other than the government is expected to enhance the governance of the firm through better monitoring, transparency, responsibility, and so on. This is especially true for privatized Indonesian SOEs, as found in previous research [22]. Listed SOEs may have better governance owing to the presence of both internal and external governance mechanisms, as argued in [3].
With respect to capital injection, the result is mixed. The variable is statistically significant in relation to ROA but insignificant for ROE, and the direction of the impact differs. This finding needs to be investigated further, either with other performance variables or with a qualitative approach. Similarly, the impact of government loans on performance is not fully decisive, because only one test, with ROA as the dependent variable, is significant; however, both tests show the same negative sign for this kind of government interference. It can be concluded that the cost of obtaining and using a government loan exceeds the financial benefit that can be reaped; even the possible benefit of low interest and low transaction costs may be offset by illegal transfers paid to rent seekers in the bureaucracy. This finding reinforces the earlier conclusion that a soft budget constraint creates a conducive environment for spoiled managerial behavior [11]: such managers have no incentive to run the firm efficiently and are reluctant to compete fairly, which severely harms the firm's competitiveness in the long run.
Interestingly, the findings on the number of government officials occupying seats on the board of commissioners differ from the common belief that their presence worsens the situation and the firm's performance. The presence of officials on the board seems to help SOEs access resources that can boost performance. It may also be that officials on the supervisory board act as an effective tool for checks and balances among the related ministries, so that the SOE can operate productively.
A similar explanation is relevant for the case of PSO: the financial benefit grasped by the SOE, in the form of captive revenue plus a certain percentage of normal profit, is outweighed by the sum of the rents transferred to officials to obtain the assignment and the potential cost of inefficiency arising from managerial moral hazard.
7. Conclusions
This paper provides empirical evidence that government intervention in the form of state ownership, government loans, and government assignments has a negative impact on corporate performance as measured by ROA and ROE. Underlying theories such as property rights, the soft budget constraint, and political embeddedness have explanatory power for the findings related to these government actions, and the results appear consistent with previous studies. Surprisingly, the result for the number of government officials sitting on the board of commissioners differs from common belief: the study shows that the number of officials on the supervisory board has a positive impact on firm performance. The impacts of SOE form and capital injection, however, remain unclear; further tests with other quantitative performance measures, such as efficiency, or a qualitative approach are needed to clarify their effect.
In addition, the net impact of each government intervention is the resultant of all the benefits that can be reaped and all the potential costs that may arise from such actions, including rents transferred to the authorities as a cost of interference and the potential cost of inefficiency due to managers' moral hazard. Building on the empirical findings of this paper, future research can examine how institutional structures and incentive systems can make each government intervention favorable not only for SOEs but also for society as a whole.
8. Acknowledgements
I am grateful for the insightful comments and encouragement provided by my PhD supervisor, Professor Yasushi Suzuki, as well as my colleagues at Ritsumeikan Asia Pacific University, especially Suminto, Ali Abidin, and Joshua Yadon, for their fruitful discussion and suggestions on this research.
9. References
[1] R. H. Coase, "The Problem of Social Cost," Journal of Law and Economics, Vol. 3, No. 1, 1960, pp. 1-44.
[2] E. F. Fama and M. C. Jensen, "Separation of Ownership and Control," Journal of Law & Economics, Vol. 26, 1983, pp. 301-325.
[3] A. Shleifer and R. W. Vishny, "A Survey of Corporate Governance," Journal of Finance, Vol. LII, 1997, pp. 737-783.
[4] A. Cuervo and B. Villalonga, "Explaining the Variance in the Performance Effects of Privatization," Academy of Management Review, Vol. 25, 2000, pp. 581-590.
[5] J. Y. Lin and G. Tan, "Policy Burdens, Accountability and Soft Budget Constraint," American Economic Review, Vol. 89, No. 2, 1999, pp. 426-431.
[6] J. Kornai, Vision and Reality: Market and State, Routledge, New York, 1990.
[7] M. Dewatripont and E. Maskin, "Credit and Efficiency in Centralized and Decentralized Economies," Review of Economic Studies, Vol. 62, No. 4, 1995, pp. 541-555.
[8] D. Li, "Public Ownership as the Cause of a Soft Budget Constraint," Mimeo, Harvard University, 1992.
[9] I. R. Segal, "Monopoly and Soft Budget Constraint," RAND Journal of Economics, Vol. 29, No. 3, 1998, pp. 596-609.
[10] J. Y. Lin, F. Cai, and Z. Li, "Competition, Policy Burdens, and State-Owned Enterprise Reform," American Economic Review, Vol. 88, No. 2, 1998, pp. 422-427.
[11] B. Jalan, India's Economic Crisis: The Way Ahead, Oxford University Press, New Delhi, 1991.
[12] E. Michelson, "Lawyers, Political Embeddedness, and Institutional Continuity in China's Transition from Socialism," American Journal of Sociology, Vol. 113, 2007, pp. 352-414.
[13] M. Boycko, A. Shleifer, and R. Vishny, "A Theory of Privatization," Economic Journal, Vol. 106, 1996, pp. 309-319.
[14] Y. Aharoni, "The Performance of State-Owned Enterprises," in P. A. Toninelli (Ed.), The Rise and Fall of State-Owned Enterprise in the Western World, Cambridge University Press, New York, 2000, pp. 49-72.
[15] I. Okhmatovskiy, "Performance Implications of Ties to the Government and SOEs: A Political Embeddedness Perspective," Journal of Management Studies, Vol. 47, 2010.
[16] L. D. Alessi, "The Economics of Property Rights: A Review of the Evidence," Research in Law and Economics, Vol. 2, 1980, pp. 1-47.
[17] C. R. Xin and J. L. Pearce, "Guanxi: Connections as Substitutes for Formal Institutional Support," Academy of Management Journal, Vol. 39, 1996, pp. 1641-1658.
[18] J. M. Buchanan, Theory of Public Choice, University of Michigan Press, 1972.
[19] A. Cuervo and B. Villalonga, "Explaining the Variance in the Performance Effects of Privatization," Academy of Management Review, Vol. 25, 2000, pp. 581-590.
[20] Viverita and M. Ariff, "Corporate Performance of Indonesian Private and Public Sector Firms: Financial and Production Efficiency," University of Queensland, Brisbane, 2004.
[21] J. F. Hair, R. E. Anderson, R. L. Tatham, and W. C. Black, Multivariate Data Analysis, Prentice-Hall, Englewood Cliffs, NJ, 1998.
[22] E. Yonnedi, "Privatization, Organizational Change and Performance: Evidence from Indonesia," Journal of Organizational Change Management, Vol. 23, No. 5, 2010, pp. 537-563.
[23] J. E. Stiglitz, Whither Socialism?, MIT Press, 1996, p. 79.



Proceeding of Industrial Engineering and Service Science, 2011, September 20-21
Copyright 2011 IESS.
Promoting Collaboration among Stakeholders in
Citarum River Basin Problem
Utomo Sarjono Putro*; Dhanan Sarwo Utomo; Pri Hermawan

School of Business and Management, Institut Teknologi Bandung, Bandung, Indonesia
*utomo@sbm-itb.ac.id
ABSTRACT
This research develops an agent-based simulation model of the dynamics of negotiation based on interaction among autonomous agents who have different interests and act on their emotions. Agents in the model are equipped with emotion and the ability to learn, and negotiate with each other within a drama theory framework. To illustrate the simulation model, an environmental conflict in the Citarum river basin is discussed. A qualitative study is used to gather information on the agents' historical options, positions, and preferences, from which the historical dynamics of the common reference frame in the real world are obtained. The simulation model is then tested and validated by comparison with the historical dynamics of the real conflict in the Citarum river basin. Using this simulation, it is possible to describe possible outcomes of the conflict's evolution and to suggest policies that reduce dilemmas and encourage collaboration among agents in the real world.

Keywords: agent based simulation, drama theory, dilemma, collaboration

1. Introduction
Conflict, from mere differences of opinion to deadly confrontations, is unavoidable in daily life, and negotiation as an effort to resolve conflict is a very common everyday process. This is why the negotiation process is studied in many scientific fields, such as economics, political science, psychology, organizational behavior, decision sciences, operations research, and mathematics [1].
One real-world conflict is the one occurring in the Citarum River Basin. The Citarum is the longest river in West Java province, and many people depend on it, making it one of the most strategic rivers in Indonesia. Unfortunately, the condition of the Citarum River has now changed completely. Since industrialization in the 1980s the river has turned into an industrial dumping ground: at present around 500 textile factories dispose of their waste into the Citarum, much of it without proper waste treatment. The river's condition is worsened by the population explosion in the upstream area, which has increased illegal logging and the disposal of household waste. As a result, floods occur every rainy season owing to sedimentation in the downstream areas and the growing amount of barren land. The Citarum River Basin problem involves many stakeholders: based on a literature study and focus group discussions, there are at least 33 stakeholders in the conflict. These stakeholders have conflicting interests, which makes efforts to restore the condition of the Citarum River more and more difficult.
Negotiation in the real world, such as in the Citarum River Basin conflict, possesses several characteristics: 1) it is decentralized [1], i.e., the parties in a negotiation have different frames and strategies in seeking a resolution of the conflict; 2) it involves communication among parties [1]; 3) the decisions of negotiators are interlinked through communication processes that involve many different levels [2]; 4) it involves incomplete information [1]; for example, a party cannot know the other parties' utilities for certain; 5) it involves repeated interaction with no well-structured sequence [1]; 6) emotion is an important device in structuring goals, values, and preferences [3] and affects communication [2].
The negotiation process reflects the characteristics of a complex system since: 1) the elements involved in a negotiation process are heterogeneous and autonomous agents (parties); 2) the agents involved are boundedly rational, so they may have biased information and misperceptions of the other agents; 3) the communication process in a negotiation involves the transmission of knowledge that influences the behavior of its recipient; 4) negotiation is an iterative process involving feedback loops that allow an agent to learn and revise his or her strategy, so the system (here, the condition during the negotiation process) evolves over time [4]; 5) in general, interactions in a negotiation process are non-linear, in the sense that an action can produce many possible outcomes and an outcome can be caused by many possible actions.
(This paper is based on research sponsored by the Air Force Research Laboratory under agreement number FA2386-10-1-4091. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon.)
The objective of this study is to construct an agent-based simulation of the dynamics of negotiation based on the drama theory framework, with agents equipped with emotions and the ability to learn. Agent-based simulation is chosen because it minimizes the number of simplifications needed, thanks to its ability to fully represent individuals and to model boundedly rational behavior, while drama theory is chosen because it proposes an episodic model of how situations unfold. Using the constructed model, this study proposes strategies that can promote collaboration among stakeholders in the Citarum River Basin conflict.
2. Proposed Mechanism
2.1. Model of Agents' Options, Positions and Threats
In drama theory there are a number of agents who have options, positions, preferences, and threats. Interaction among agents occurs under the common reference frame, that is, the joint perception of the conflict. In this simulation an agent is represented as a column in the common reference frame, and each agent i has a number of options O_ik, represented as rows. At each iteration t, agent i has a position, to accept or to reject, on each of its own options. Agent i's position toward its own options generates a payoff Vo_ik^t for agent i. This payoff has two dimensions, an accept dimension and a reject dimension. If agent i's position is to accept option O_ik, then agent i's payoff in the accept dimension is assigned as x (a real number between 51 and 100) and its payoff in the reject dimension is assigned as (100 - x). The opposite rule applies if agent i's position is to reject option O_ik.
Each agent j (j ≠ i) has a position to accept, reject, or be indifferent toward option O_ik of agent i. Agent j's position toward agent i's options generates a payoff Vpo_ikj^t for agent j, which likewise has an accept dimension and a reject dimension. If agent j's position is to accept option O_ik, then agent j's payoff in the accept dimension is assigned as x (a real number between 51 and 100) and its payoff in the reject dimension is assigned as (100 - x); the opposite rule applies if agent j's position is to reject option O_ik. If agent j is indifferent toward option O_ik, then agent j's payoff in both dimensions is assigned as 50.
The total real payoff obtained by each agent by adopting its own positions in each iteration t is calculated as follows:

Vp_i^t = Σ_{k=1}^{m} p_i^t ( Vo_ik^t + Vpo_ikj^t )   (1)

where m is the number of options, i ≠ j, and p_i^t denotes the positions of agent i in iteration t.
The total payoff obtained by agent i by adopting agent j's positions in each iteration t is calculated as follows:

Vpp_i^t = Σ_{k=1}^{m} p_j^t ( Vo_ik^t + Vpo_ikj^t )   (2)

where m is the number of options, i ≠ j, and p_j^t denotes the positions of agent j in iteration t.
Both payoffs are stored in the real payoff matrix. The columns of this matrix represent agent i and the rows represent agent j; the elements on the diagonal represent the payoff each agent obtains by adopting its own position.
For all options, a set of threats is defined. The total payoff obtained by agent i by adopting the threatened future in each iteration t is calculated as follows:

Vpt_i^t = Σ_{k=1}^{m} T ( Vo_ik^t + Vpo_ikj^t )   (3)

where m is the number of options, i ≠ j, and T denotes the threat.
Each agent i has an estimate of the payoff that the other agents will obtain for each of their positions. Agent i's estimate of agent j's payoff also has two dimensions, an accept dimension and a reject dimension. If agent j is accepting option O_ik, then agent i estimates that agent j will obtain a payoff equal to x (a random number from 51 to 100) in the accept dimension and (100 - x) in the reject dimension; the opposite rule applies if agent j is rejecting option O_ik. If agent j is indifferent toward option O_ik, then agent i estimates that agent j's payoff in both dimensions is equal to 50. Each agent stores its estimates of the other agents' payoffs in an estimated accepting payoff matrix and an estimated rejecting payoff matrix, whose columns represent agent j and whose rows represent the options O_ik.
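A minimal sketch of how the payoff totals in equations (1)-(3) might be computed is given below. It reads a position as simply selecting the accept or reject dimension of each option's payoff; the function and variable names, and the sample numbers, are illustrative assumptions rather than code from the published model.

```python
def total_payoff(positions, vo, vpo):
    """Sum over all options of Vo_ik + Vpo_ikj, taking the accept dimension (index 0)
    when the corresponding position is accept (1) and the reject dimension (index 1)
    when it is reject (0)."""
    total = 0.0
    for k, pos in enumerate(positions):
        dim = 0 if pos == 1 else 1
        total += vo[k][dim] + vpo[k][dim]
    return total

# Illustrative two-option example: (accept payoff, reject payoff) per option.
vo  = [(80.0, 20.0), (60.0, 40.0)]    # Vo_ik values for agent i
vpo = [(55.0, 45.0), (50.0, 50.0)]    # Vpo_ikj values
vp  = total_payoff([1, 1], vo, vpo)   # eq. (1): agent i's own positions p_i
vpp = total_payoff([0, 1], vo, vpo)   # eq. (2): agent j's positions p_j
vpt = total_payoff([0, 0], vo, vpo)   # eq. (3): the threatened future T
```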
2.2. Modeling Agents' Dilemmas
In each iteration, if agent i and agent j have incompatible positions (e.g. agent i accepts option O_ik while agent j rejects it), then confrontation dilemmas emerge. Agent i's dilemmas toward agent j are determined by the payoff that agent i will obtain. Two kinds of confrontation dilemma are considered in this research, the rejection dilemma and the persuasion dilemma [5], defined as follows:
- If agent i's payoff from adopting agent j's position is greater than or equal to agent i's payoff from adopting its own threat, then agent i has a rejection dilemma toward agent j.
- If agent i's payoff from adopting agent j's position is less than or equal to agent i's payoff from adopting its own threat, then agent j has a persuasion dilemma toward agent i.
Even if there are no incompatible positions among agents, collaboration dilemmas may still occur. The collaboration dilemma considered in this research is the trust dilemma: agent i, who has a compatible position with agent j, has a trust dilemma toward agent j if agent i's estimate of agent j's payoff is not in accordance with agent j's position. For example, agent i has a trust dilemma toward agent j if both agents accept option O_ik but agent i estimates that agent j would obtain a greater payoff by rejecting it.
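One way to operationalize these dilemma rules is sketched below in Python; the function names and the encoding of positions are assumptions made for illustration, not part of the published model.

```python
def confrontation_dilemmas(vpp_i, vpt_i):
    """Rejection/persuasion dilemmas when agents i and j hold incompatible positions.
    vpp_i: agent i's payoff from adopting j's position; vpt_i: payoff from its own threat."""
    found = []
    if vpp_i >= vpt_i:
        found.append("agent i has a rejection dilemma toward agent j")
    if vpp_i <= vpt_i:
        found.append("agent j has a persuasion dilemma toward agent i")
    return found

def has_trust_dilemma(position_j, estimated_accept, estimated_reject):
    """Trust dilemma: i's estimate of j's payoff contradicts j's stated (compatible) position."""
    if position_j == "accept":
        return estimated_reject > estimated_accept
    return estimated_accept > estimated_reject
```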
2.3. Negotiation Protocols
In this negotiation protocol, each agent is equipped with an emotion modeled using the PAD temperament model [6], in which an emotional state is constructed from three independent dimensions: Pleasure, Arousal, and Dominance. Following [7], agent i's emotional state toward agent j is formulated as a function of these dimensions:

Se_ij = Se_ij(r_p, r_a, r_d)   (4)

where r_p, r_a, and r_d are the pleasure, arousal, and dominance components (the full formulation follows [7]).
During the simulation, an agent negotiates with a partner over the options on which they have incompatible positions (e.g. agent i accepts an option while agent j rejects it). The negotiation protocol is built on a rational negotiation framework in which agent i offers a certain amount of its payoff (st_i) to agent j in order to influence agent j to move its position closer to agent i's. The potency of agent i's offer to shift agent j's position (Ov_ij) is affected by agent i's emotional state toward agent j (Se_ij), and agent j's perception of agent i's offer (Ov_ji) is affected by agent j's emotion toward agent i (Se_ji):

Ov_ij = st_i + Se_ij st_i   (5)

Ov_ji = Ov_ij + Se_ji Ov_ij   (6)
Suppose agent i's position is to accept option O_ik and agent j's position is to reject it. Agent i's payoff in the accept dimension is then reduced by Ov_ij and its payoff in the reject dimension is increased by Ov_ij, while agent i's estimate of agent j's payoff in the reject dimension is reduced by Ov_ij and its estimate of agent j's payoff in the accept dimension is increased by Ov_ij. On the other hand, agent j's payoff in the accept dimension is increased by Ov_ji and its payoff in the reject dimension is reduced by Ov_ji, while agent j's estimate of agent i's payoff in the reject dimension is increased by Ov_ij and its estimate of agent i's payoff in the accept dimension is reduced by Ov_ij. Similar rules apply in the opposite condition.
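The offer and perception rules in equations (5)-(6), together with the payoff shifts just described, could be expressed as in the following sketch; the helper names are illustrative, and the shift helper shows only the accept-to-reject direction of the case above.

```python
def offer_potency(st_i, se_ij):
    """Equation (5): Ov_ij = st_i + Se_ij * st_i, i's offer scaled by its emotion toward j."""
    return st_i + se_ij * st_i

def perceived_potency(ov_ij, se_ji):
    """Equation (6): Ov_ji = Ov_ij + Se_ji * Ov_ij, j's perception scaled by its emotion toward i."""
    return ov_ij + se_ji * ov_ij

def shift_accept_to_reject(accept, reject, amount):
    """Move `amount` of payoff from the accept dimension to the reject dimension
    (the case where agent i accepts the option and agent j rejects it)."""
    return accept - amount, reject + amount
```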
At each iteration, an offer from agent i is perceived by agent j, and agent i compares agent j's response with agent j's response in the previous iteration. Agent i's emotional state toward agent j then changes according to the Flow Model of Emotion [8], mapped into the PAD dimensions as shown in Table 1.

Table 1. Change in agent i's emotional state toward agent j

Agent i's offer (vs. previous iteration)   Agent j's perception (vs. previous iteration)   r_p   r_a   r_d
Higher                                     Higher                                          +     +     +
Higher                                     Lower                                           -     +     +
Lower                                      Higher                                          +     +     -
Lower                                      Lower                                           -     -     -
Through the negotiation process, agents learn to identify the emotional state that produces the biggest shift in the positions of other agents (the best emotional state). The learning mechanism built in this study assumes that each agent revises its emotional state according to its experience in previous iterations: each time agent i makes an offer to agent j, it records the emotional state it used and the resulting shift in agent j's position. If, in the current iteration, the shift in agent j's position is greater than or equal to the shift in the previous iteration, then agent i revises its best emotional state to the emotional state used in the current iteration [9].
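Table 1 and the learning rule just described could be coded roughly as follows; the step size delta and the representation of the PAD state as a plain tuple are assumptions made for illustration only.

```python
def update_emotion(pad, offer_higher, perception_higher, delta=0.1):
    """Table 1: pleasure follows the partner's perception, dominance follows the agent's
    own offer, and arousal falls only when both are lower. `delta` is an assumed step size."""
    p, a, d = pad
    p += delta if perception_higher else -delta
    a += delta if (offer_higher or perception_higher) else -delta
    d += delta if offer_higher else -delta
    return (p, a, d)

def update_best_emotion(best_pad, current_pad, shift_now, shift_prev):
    """Learning rule: keep the emotional state that produced an equal or larger position shift."""
    return current_pad if shift_now >= shift_prev else best_pad
```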
3. Case Study: Citarum River Basin Conflict
The simulation model in this study is constructed using NetLogo version 4.1.2, with the common reference frame of the Citarum River Basin conflict as the simulation input. This common reference frame was identified through observation and focus group discussions with the stakeholders in the conflict. Through this qualitative study, five agents were identified: Government (G), Public Enterprise (PE), Green (GR), Community Alliance (CA), and Enterprise (E). The agents' options, positions, and threats are described in Table 2.
Table 2. Common Reference Frame in Citarum River Basin Conflict


During the simulation process, three scenarios are tested. In the first scenario, agents negotiate with negative emotions toward the other agents: the pleasure, arousal, and dominance values of each agent toward the others are set randomly between -1 and 0. In the second scenario, agents negotiate with neutral emotions: the pleasure, arousal, and dominance values are set to zero. In the third scenario, agents negotiate with positive emotions: the values are set randomly between 0 and 1. The emotional dimensions are assigned randomly because empirical measurement is not feasible given the many stakeholders in the real world.


Figure 1. Simulation Interface

Each scenario is run thirty times. In every run we observe the number of iterations needed to eliminate the confrontation dilemmas and the number of collaboration dilemmas that remain once the positions of all agents have become compatible. Treating each run as a sample, the simulation results can be tabulated and tested using ANOVA to examine the differences among scenarios. The comparison among scenarios is shown in Table 3.
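The scenario comparison described above (thirty runs per scenario, compared with one-way ANOVA) could be carried out with scipy as sketched below; the sample values are placeholders, not the study's simulation output.

```python
from scipy import stats

# Each list would hold one observation per run (e.g. iterations needed to
# eliminate confrontation dilemmas) for a scenario; values are placeholders.
negative = [42, 39, 47, 51, 38, 44]
neutral  = [25, 28, 22, 30, 27, 26]
positive = [18, 21, 17, 23, 19, 20]
f_stat, p_value = stats.f_oneway(negative, neutral, positive)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```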
Table 3. Comparison among scenarios


The comparison shows that if agents use negative emotions toward the other agents during the negotiation process, then on average the time required to eliminate the confrontation dilemmas is longer than when they use neutral or positive emotions. In addition, the number of collaboration dilemmas that arise when agents use negative emotions is significantly higher than with neutral or positive emotions.
4. Conclusions
Through this study, an agent-based simulation of the dynamics of negotiation using the drama theory framework has been constructed. The simulation model incorporates the agents' emotions and learning ability into the negotiation protocol. The model is able to show the evolution of the common reference frame and the effect of the agents' emotional states on the number of dilemmas produced in a given common reference frame, the time required to eliminate confrontation dilemmas, and the collaboration dilemmas that may remain after all agents reach compatible positions.
The proposed model is applied to analyze the conflict in the Citarum River Basin. Based on the simulation results, it can be concluded that if agents use negative emotions toward the other agents during the negotiation process, the time required to eliminate the confrontation dilemmas is longer than with neutral or positive emotions. The results also show that the number of collaboration dilemmas arising under negative emotions is significantly higher than under neutral or positive emotions. In the real world, positive emotions can take several forms, for example a willingness to compromise, empathy toward others, and persuasion; agents with positive emotions will not threaten their partners, impose their will, or resort to anarchic protest.
In the future, the model needs to be improved by integrating other dilemmas, such as the threat dilemma and the positioning dilemma. The feasibility and accuracy of the simulation in representing the evolution of the real-world conflict also remain to be investigated.
5. References
[1] K. Sycara and T. Dai, "Agent Reasoning in Negotiation," in D. M. Kilgour and C. Eden (Eds.), Advances in Group Decision and Negotiation 4: Handbook of Group Decision and Negotiation, Springer, New York, 2010, pp. 437-451.
[2] S. T. Koeszegi and R. Vetschera, "Analysis of Negotiation Processes," in D. M. Kilgour and C. Eden (Eds.), Advances in Group Decision and Negotiation 4: Handbook of Group Decision and Negotiation, Springer, New York, 2010, pp. 121-138.
[3] B. Martinovski, "Emotion in Negotiation," in D. M. Kilgour and C. Eden (Eds.), Advances in Group Decision and Negotiation 4: Handbook of Group Decision and Negotiation, Springer, New York, 2010, pp. 65-86.
[4] E. R. Smith and F. R. Conrey, "Agent-Based Modeling: A New Approach for Theory Building in Social Psychology," Personality and Social Psychology Review, Vol. 11, 2007, pp. 87-104.
[5] U. S. Putro, M. Siallagan, and S. Novani, "Agent-Based Simulation of Negotiation Process Using Drama Theory," Proceedings of the 51st Annual Meeting of the International Society for the Systems Sciences, Tokyo, 2007.
[6] A. Mehrabian, "Pleasure-Arousal-Dominance: A General Framework for Describing and Measuring Individual Differences in Temperament," Current Psychology: Developmental, Learning, Personality, Social, Vol. 14, No. 4, 1996, pp. 261-292.
[7] H. Jiang, "From Rational to Emotional Agents," PhD Thesis, University of South Carolina, Department of Computer Science and Engineering, 2007.
[8] L. Morgado and G. Gaspar, "Emotion in Intelligent Virtual Agents: The Flow Model of Emotion," Proceedings of Intelligent Virtual Agents: 4th International Workshop, 2003.
[9] U. S. Putro, P. Hermawan, M. Siallagan, S. Novani, and D. S. Utomo, "Agent-Based Simulation of Negotiation Process in Citarum River Basin Conflict," Proceedings of the PAN-PACIFIC Conference XXVII, Bali, 2010.
Proceeding of Industrial Engineering and Service Science, 2011, September 20-21
Copyright 2011 IESS.
Supplier Selection Model Based On Tolerance
Allocation To Minimize Purchasing Cost And
Quality Loss
Noviasari Sabatini(1), Wakhid Ahmad Jauhari(2), Cucuk Nur Rosyidi(3)

Production System Laboratory, Industrial Engineering, Sebelas Maret University, Surakarta, Indonesia
E-mail: 1) nov.sabatini@gmail.com 2) wakhidjauhari@uns.ac.id 3) cucuk@uns.ac.id
ABSTRACT
Manufacturing companies do not produce some, or even all, of the components that make up their final products; these components are obtained by outsourcing to suppliers. Outsourcing brings benefits such as reducing manufacturing cost, doubling before-tax income, increasing the company's performance, and helping the company focus on its core business. However, selecting suppliers is a critical, difficult, and time-consuming activity: done carelessly, it can cause the company tangible and intangible losses. The main problem related to outsourcing is the quality and variability of the outsourced materials and components used in the assembly process. In selecting suppliers, a manufacturing company has to consider quality and purchasing cost. Variation in the components affects the final assembly of the product, while the company has its own specification for the final product; the tolerance of the final product has to be allocated to the component tolerances such that the accumulated component tolerances do not exceed the product tolerance. Quality loss and purchasing cost are important criteria because they trade off against each other: a tighter tolerance results in a higher purchasing cost but a lower quality loss. In this paper we develop a supplier selection model that considers quality, demand, and supplier capacity based on tolerance allocation. The objective of the model is to minimize purchasing cost and quality loss. A numerical example is provided using a linear tolerance chain, for a product consisting of three components where each component can be supplied by different suppliers and each supplier can supply more than one component. The numerical example shows that the quality loss coefficient and the capacity of the suppliers affect the selection of suppliers and tolerances.

Keywords: supplier selection, tolerance allocation, purchasing cost, quality loss.

1. Introduction
In today's manufacturing environment, manufacturing companies do not produce some, or even all, of the components that make up their final products; the components are obtained by outsourcing to suppliers. According to [1], 30% of a company's savings come from 50% lower procurement costs achieved through outsourcing. Outsourcing brings benefits such as reducing manufacturing cost, doubling before-tax income, increasing the company's performance, and helping the company focus on its core business [2]. However, selecting suppliers is a critical, difficult, and time-consuming activity: done carelessly, it can cause the company tangible and intangible losses.
The main problem related to outsourcing is the quality and variability of the outsourced materials and components used in the assembly process [3]. In fact, more than 50% of the manufacturing cost of non-conforming products comes from outsourced material, including the costs of rework and scrap, which are the tangible costs of quality loss [4]. Furthermore, there is an intangible quality loss cost that is more difficult to measure; it occurs when the product has already been received by the customer and is known as the loss to society [5]. This kind of quality loss has various impacts, from losing customers to damaging the company's reputation [6].
One of the critical quality indicators of a product is its tolerance. There are two approaches in tolerance design: tolerance analysis and tolerance synthesis (tolerance allocation). In tolerance analysis, the designer determines the component tolerances first and checks whether they exceed the assembly tolerance; if they do, the designer must redefine the component tolerances. In tolerance allocation, the designer determines the assembly tolerance first and then allocates it to the tolerances of the components. When the tolerance of the assembled product does not conform to the specification, quality loss occurs. In selecting suppliers, a manufacturing company therefore has to consider purchasing cost and quality loss, since the two trade off against each other: a tighter tolerance results in a higher purchasing cost and a lower quality loss.
Many studies have addressed the supplier selection problem with various criteria and constraints. Reference [3] developed a supplier selection model to minimize purchasing cost and quality loss, with two constraints: the tolerance of the assembled product, and the requirement that exactly one supplier be selected for each component (binary integer). Another study was conducted in [7], where the supplier selection criterion is the maximization of weighted preference using the Analytic Hierarchy Process (AHP) and three constraints are considered: (1) the minimum number of suppliers required for each product, (2) the maximum permissible number of products allocated to each supplier, and (3) the total number of supplier assignments. Reference [5] developed a supplier selection model based on the Taguchi loss function, with five elements of loss in the objective function (loss of quality, loss of speed, loss of flexibility, loss of dependability, and the cost of manufacture if parts are made in-house or the cost of purchase if parts are outsourced) and three constraints (the company's demand for parts, the company's production capacity, and the suppliers' capacity).
Reference [3] did not consider the technological capacity of suppliers to produce various components, while Reference [7] considered capacity but did not include quality loss among the criteria. Reference [5] considered both quality loss and technological capacity but without an assembly tolerance constraint. In this research, we develop a mathematical model for selecting suppliers to minimize purchasing cost and quality loss, considering the tolerance of the assembled product and the technological capacity of suppliers in producing various components, and allowing more than one supplier to be selected.
2. Model Development
2.1. Objective Function
There are two elements in the objective function: purchasing cost and quality loss. The objective function can be expressed as in (1), taken from [3]. In the equation, c_ij denotes the purchasing cost of component i from supplier j, Q denotes the quality loss, and x_ij is the binary decision variable. The quality loss can be expressed as in (2), where A denotes the failure cost, T_k denotes the k-th assembly tolerance, and s_ij is the standard deviation of component i from supplier j.

f(x_ij) = Σ_{i=1}^{I} Σ_{j=1}^{J} ( c_ij x_ij + Q x_ij )   (1)

where

Q x_ij = Σ_{k=1}^{K} ( A / T_k² ) s_ij²   (2)
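A small sketch of how the objective in (1)-(2) might be evaluated for a candidate assignment of components to suppliers is given below; the function names and arguments are illustrative assumptions, not code from the referenced model.

```python
def quality_loss(A, T_k, s_ij):
    """Quality loss term of equation (2) for one component-supplier pair and one
    assembly tolerance T_k: (A / T_k**2) * s_ij**2."""
    return A / T_k ** 2 * s_ij ** 2

def objective(selected_pairs, c, s, A, T):
    """Equation (1): purchasing cost plus quality loss, summed over the selected
    (component i, supplier j) pairs and over all assembly tolerances in T."""
    return sum(c[i][j] + sum(quality_loss(A, T_k, s[i][j]) for T_k in T)
               for i, j in selected_pairs)
```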

2.2. Constraints
We consider the following constraints:
1. The tolerance specification of the assembled product, taken from [3]. The accumulated component tolerances must not exceed the assembly tolerance; this is the quality requirement of the product and can be expressed as in (3). The variances can be expressed in terms of tolerances using (4) and (5) for the components and the assembly, respectively.

Σ_{i=1}^{I_k} Σ_{j=1}^{J_i} s_ij² x_ij ≤ σ_k²   for each assembly tolerance k   (3)

s_ij² = ( t_ij / (3 C_pk) )²   (4)

σ_k² = ( T_k / (3 C_p) )²   (5)

2. The minimum number of suppliers for component i, stated in (6), ensures that each component has at least N_i suppliers.

Σ_{j=1}^{J_i} x_ij ≥ N_i   (6)

3. The maximum permissible number of components supplied by one supplier. Equation (7) represents the technological capacity of supplier j in producing various components, allowing one supplier to supply more than one component.

Σ_{i=1}^{I} x_ij ≤ O_j   (7)

4. Binary decision variables for supplier selection: x_ij = 1 if supplier j is selected to supply component i and 0 otherwise.

x_ij ∈ {0, 1}   for all (i, j)   (8)
2.3. Complete Model
The complete model developed in this research can be expressed as follows:

Minimize

f(x_ij) = Σ_{i=1}^{I} Σ_{j=1}^{J} [ c_ij + Σ_{k=1}^{K} ( A / T_k² ) ( t_ij / (3 C_pk) )² ] x_ij

Subject to

Σ_{i=1}^{I_k} Σ_{j=1}^{J_i} ( t_ij / (3 C_pk) )² x_ij ≤ ( T_k / (3 C_p) )²   for each assembly tolerance k

Σ_{j=1}^{J_i} x_ij ≥ N_i   for each component i

Σ_{i=1}^{I} x_ij ≤ O_j   for each supplier j

x_ij ∈ {0, 1}   for all (i, j)
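Because the numerical example below involves only 3 components and 4 suppliers, the complete model can be explored by brute-force enumeration, as sketched here for the single-supplier-per-component case (N_i = 1). Only the prices and tolerances in the usage lines follow Table 1 of the numerical example; the assembly tolerance T_k and the process-capability indices C_pk and C_p are assumed values for illustration.

```python
from itertools import product

def solve(price, tol, T_k, A, Cpk, Cp, capacity):
    """Enumerate supplier choices (one supplier per component) and return the
    cheapest feasible assignment under the complete model above."""
    n_comp, n_sup = len(price), len(price[0])
    best, best_cost = None, float("inf")
    for choice in product(range(n_sup), repeat=n_comp):   # choice[i] = supplier j for component i
        # Capacity constraint (7): at most O_j components per supplier.
        if any(choice.count(j) > capacity[j] for j in range(n_sup)):
            continue
        # Tolerance stack-up constraint (3)-(5): accumulated variance within assembly variance.
        if sum((tol[i][j] / (3 * Cpk)) ** 2 for i, j in enumerate(choice)) > (T_k / (3 * Cp)) ** 2:
            continue
        # Objective: purchasing cost plus quality loss for the selected pairs.
        cost = sum(price[i][j] + A / T_k ** 2 * (tol[i][j] / (3 * Cpk)) ** 2
                   for i, j in enumerate(choice))
        if cost < best_cost:
            best, best_cost = choice, cost
    return best, best_cost

# Usage with the prices (IDR) and tolerances (mm) of Table 1; other parameters assumed.
price = [[61000, 30500, 17400, 13000],
         [82700, 56600, 34800, 26100],
         [69700, 39200, 21700, 17400]]
tol = [[0.0020, 0.0025, 0.0030, 0.0035]] * 3
print(solve(price, tol, T_k=0.0065, A=0.0, Cpk=1.0, Cp=1.0, capacity=[3, 3, 3, 3]))
```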
3. Numerical Example and Analysis
A numerical example is given to illustrate the implementation of the model. As in [3], we consider an assembly consisting of 3 components with the linear assembly function shown in (9):

y = Σ x_i = x_1 + x_2 + x_3   (9)
The components x_1, x_2, and x_3 are assumed to be normally distributed with means μ_1 = 10.0000 mm, μ_2 = 30.0000 mm, and μ_3 = 20.0000 mm. Each component can be supplied by more than one supplier, but different suppliers offer different tolerances and prices, as shown in Table 1.
Table 1. Prices and tolerances for each component from each supplier

                              Supplier 1   Supplier 2   Supplier 3   Supplier 4
Component 1  Tolerance (mm)   0.0020       0.0025       0.0030       0.0035
             Price (IDR)      61,000       30,500       17,400       13,000
Component 2  Tolerance (mm)   0.0020       0.0025       0.0030       0.0035
             Price (IDR)      82,700       56,600       34,800       26,100
Component 3  Tolerance (mm)   0.0020       0.0025       0.0030       0.0035
             Price (IDR)      69,700       39,200       21,700       17,400

The optimization results are shown in Table 2, from which we can make the following observations. The quality loss coefficient A and the capacity of the suppliers affect the selected suppliers and the allocation of tolerances. When A = 0, the model considers only purchasing cost and therefore selects the cheapest suppliers; supplier 4 is selected in this case because it offers the cheapest price for every component. When the capacity of supplier 4 is reduced, the model moves to the second-cheapest supplier, which is supplier 3.
Table 2. Optimization results for the numerical example

Capacity of supplier     Suppliers selected (A = 0)     Suppliers selected (A = IDR 2,134,000)
S1  S2  S3  S4           Comp.1  Comp.2  Comp.3         Comp.1  Comp.2  Comp.3
3   3   3   3            S4      S4      S4             S2      S3      S3
3   3   3   2            S4      S4      S3             S2      S3      S3
3   3   3   1            S3      S4      S3             S2      S3      S3
3   3   1   3            S4      S4      S4             S2      S3      S2
3   1   1   3            S4      S4      S4             S2      S4      S3

The price gaps between suppliers for each component determine which supplier is selected when supplier capacity is reduced. For example, when the capacity of supplier 4 is reduced from 3 to 2, the selected supplier for component 3 switches from supplier 4 to supplier 3, because the price gap between suppliers 4 and 3 for this component, IDR 4,300, is the smallest among the three components. When the capacity of supplier 4 is reduced again from 2 to 1, the switch occurs for component 1, whose price gap between suppliers 4 and 3, IDR 4,400, is the second smallest (see Table 3).
Table 3. Price gap between supplier 4 and supplier 3

Component   Supplier 4 (IDR)   Supplier 3 (IDR)   Gap (IDR)
1           13,000             17,400             4,400
2           26,100             34,800             8,700
3           17,400             21,700             4,300


When A = 2,134,000, which is ten times the total purchasing cost from supplier 1 for the three components, the selection of suppliers and tolerances changes. If every supplier can supply all components, supplier 3 is selected for two of the components and supplier 2 for the other. Changes in supplier capacity, except for the two selected suppliers, do not affect the chosen supplier for each component, and capacity reductions still follow the price-gap rule observed when A = 0.
4. Conclusions
This paper has presented a model for selecting suppliers so as to minimize purchasing cost and quality loss. Two main constraints are considered: tolerance allocation and the technological capacity of suppliers in producing various components. From this research we conclude that the quality loss coefficient A and the capacity of the suppliers affect the selection of tolerances and suppliers, following the rule of the price gaps between suppliers. Future research will involve the allocation of components to each selected supplier and the extension of the model to the make-or-buy analysis problem, which is currently under investigation.
5. Acknowledgements
Part of this work is supported by BPI from the Faculty of Engineering, Sebelas Maret University.
6. References
[1] Accenture Consulting, Achieving High Performance through Outsourcing and Procurement Mastery (Podcast), Accenture, 2008.
[2] J. Barthelemy, "The Seven Deadly Sins of Outsourcing," Academy of Management Executive, Vol. 17, No. 2, 2003, pp. 87-98.
[3] C. X. Feng, J. Wang, and J. S. Wang, "An Optimization Model for Concurrent Selection of Tolerances and Suppliers," Computers & Industrial Engineering, Vol. 40, 2001, pp. 15-33.
[4] R. Plante, "Allocation of Variance Reduction Targets Under the Influence of Supplier Interaction," International Journal of Production Research, Vol. 38, No. 12, 2000, pp. 2815-2827.
[5] J. Teeravaraprug, "Outsourcing and Vendor Selection Model Based on Taguchi Loss Function," Songklanakarin Journal of Science and Technology, Vol. 30, No. 4, July-August 2008, pp. 523-530.
[6] R. S. Kumar, N. Alagumurthi, and R. Ramesh, "Calculation of Total Cost, Tolerance Based on Taguchi's Asymmetric Quality Loss Function Approach," American Journal of Engineering and Applied Sciences, Vol. 2, No. 4, 2009, pp. 628-634.
[7] A. J. Rajan, K. Ganesh, and K. V. Narayanan, "Application of Integer Linear Programming Model for Vendor Selection in a Two Stage Supply Chain," International Conference on Industrial Engineering and Operations Management, Dhaka, 9-10 January 2010.




Proceeding of Industrial Engineering and Service Science, 2011, September 20-21
Copyright 2011 IESS.
Relationship of Entrepreneurial Traits, Eagerness
to Start a Business, And Firm Performances: An
Exploratory Study in Small and Medium
Enterprises In Indonesia
Henry Pribadi and Kazuyori Kanai

Graduate School of Economics, Osaka University, Osaka, Japan
Email: sf_pchan@yahoo.com
ABSTRACT
This paper briefly reports the progress of our research on the relationship between entrepreneurial traits, firm creation, and firm performance, especially in small family businesses in Indonesia. Two major entrepreneurial trait models are used: the entrepreneurial intention model, for examining future potential business owners, and entrepreneurial orientation, for examining the relationship between entrepreneurial traits, the environment, and firm performance. The research proceeds in two phases: the first phase examines the relationship between entrepreneurial intention and the eagerness to start or continue a business through an empirical study, and the second phase examines the relationship between entrepreneurial orientation and firm performance through empirical and qualitative studies. Several important findings are obtained, including the factors that determine eagerness to conduct a business, a strong relationship between entrepreneurial intention and higher firm performance, and the ways in which aspects of entrepreneurial orientation strongly leverage firm performance.

Keywords:Entrepreneurial intention, entrepreneurial orientation, small family business, Indonesia Small and Medium Enterprises

1.Introduction
Small and Medium Enterprises (SMEs) and entrepreneurship will always be important factors in building the economy of a developing country such as Indonesia. The Indonesian Ministry of Small and Medium Enterprises announced that, up to 2009, there were almost 53 million SME units in Indonesia, providing jobs to almost 100 million Indonesian citizens [1]. Reference [2] noted that Indonesian SMEs comprise almost 90% of all business units founded in Indonesia. These figures reflect how much Indonesia depends on SME growth, and entrepreneurship will be a key factor in developing the Indonesian economy. On the other hand, researchers show that even though SMEs are vital for promoting economic growth in developing countries, ensuring the sustainability of an SME business is not an easy feat. Reference [3] pointed out that entrepreneurs in Laos face numerous hurdles in their struggle to keep their businesses intact: technological barriers, a lack of good human resources, a lack of focus, and harsh treatment under unfair government policy clearly slow business development in Laos. Reference [4] pointed out a similar situation in Uganda, where SME survivability is very low in the first year after founding, focusing on problems in supply chains and performance. These findings clearly show that good SME business performance is vital for survival, and that more attention is needed to understand how to increase SME firm performance through entrepreneurial action and conduct.
We believe that in order to understand more about SME firm performance, one should examine the relationship between the entrepreneurship factors of firms and successful firm performance. The sustainability and survivability of a firm depend largely on how well the owner of the firm can harness entrepreneurial factors and integrate them into firm strategy and action [5]. By examining firm owners' entrepreneurial conduct more closely, we hope to gain more insight into how a firm operates and which factors are really important in building and operating a business, especially a small business. Through this paper, we present a brief report of our findings and actions across our research on entrepreneurial factors in small businesses in Indonesia. We present our findings through two research phases. The first phase concerns the very basis of entrepreneurship, entrepreneurial intention: here we conducted research on future entrepreneurs who are still of college age and tried to
find the significant factors that contribute to the success of founding a business or succeeding to a business. The second phase of this paper concerns our research on the entrepreneurial orientation (EO) of small family businesses. We conducted qualitative research on several small firms to explore and examine how the owners' entrepreneurial orientation, entrepreneurial intention, and relationship with the previous generation affect firm performance. We believe that there are linkages between entrepreneurial intention, when one is still in the preparation stage of conducting a business, and entrepreneurial orientation in the operating stage of conducting a business. Our findings reveal some interesting facts about this relationship. Future steps and our next plan are also briefly discussed after the report.
2. Entrepreneurial intention among university students
2.1.Entrepreneurial intention
There is a large literature on entrepreneurship that has attempted to define the characteristics of entrepreneurs. One of the earlier mainstreams of entrepreneurial research that focused on the characteristics of entrepreneurs is called the trait approach. This approach was introduced by McClelland [6], who tried to relate entrepreneurship to psychology. In the trait approach, sometimes called the personal-characteristics approach, there is an implicit assumption that the entrepreneur is a key actor: an individual who identifies opportunities, develops strategies, assembles resources and takes action. McClelland's study [6] found that most laid-off workers stayed at home for a while before finding similar jobs. Yet a small number of workers behaved differently; they tried to find a better job or started their own businesses. McClelland [6] proposed the theory of the need for achievement, finding that the need for achievement was a crucial factor in personal career decisions. He further mentioned the role of family education in shaping the entrepreneur's character traits. McClelland [6] also postulated that the propensity of individual motivation to go into business is a force of entrepreneurship. Accordingly, competitiveness was found to be the most important variable in Lynn's [7] study of the relationship between national culture and economic growth. A high valuation of money was the second most important variable in Lynn's [7] study, although the prospect of making money typically ranks low in entrepreneurs' stated motivations. On the contrary, the need to be one's own boss, or to have independence, is the most significant factor [8].
Self-efficacy has been linked theoretically and empirically with many managerial and entrepreneurial phenomena. Self-efficacy is linked to initiation of and persistence in behavior under uncertainty, the setting of higher goals, and the reduction of threat-rigidity and learned helplessness. This is important because opportunity recognition depends on situational perceptions of controllability and self-efficacy [9]. Over the decades, the trait approach has been challenged by the environmental approach, which studies the most influential factors outside entrepreneurs that contribute to entrepreneurs' success. A number of hypotheses have also been proposed about the influence of entrepreneurs' families on their willingness to start their own businesses. Previous results concerning the relationship between education and entrepreneurship are very mixed. In the US, Reynolds [10] indicates that groups with lower education showed less interest in an entrepreneurial career. In the case of universities, some evidence shows that highly intelligent students prefer to pursue careers in education or research, which hinders entrepreneurial intention among highly intelligent students.
2.2. Research finding
This research is based on a survey carried out in 2007 on students of the Faculty of Industrial Technology at Petra Christian University, Surabaya, Indonesia. A random sample of students completed the questionnaire. With the approval and cooperation of the lecturers, the questionnaire was distributed during class sessions, and most students completed and returned it during those sessions. Participation was voluntary; 140 students completed and submitted the questionnaire, resulting in a response rate of over 60%. The survey consisted of a two-page structured questionnaire. The students answered items that addressed their entrepreneurial intentions, the perceived feasibility of starting a business, personal characteristics and the effect of entrepreneurship education. Response options included five-point Likert scales and appropriate categorical and dichotomous scales. The information obtained was analyzed using the statistical software package STATA, with OLS regression as the analytical tool. The post-regression evaluation concerns the existence of multicollinearity among the independent variables. To check this, the variance inflation factor (VIF), the reciprocal of tolerance, was used: as the VIF increases, so does the variance of the regression coefficients, making the estimates unstable, so large VIFs indicate the presence of multicollinearity. The VIFs found in the estimates ranged from 1.24 to 1.58, meaning that no multicollinearity problems occurred.
Our regression results in Table 1 show that self-efficacy, inspiration from a role model, and difficulty with government bureaucracy had a positive and strong effect on one's decision and intention to start a business. Lack of self-confidence, uncertainty about the external environment, and job offers from prestigious companies had a significant negative effect on starting a business.
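As a rough illustration of the analysis described above, the following sketch runs an OLS regression and computes VIFs with statsmodels on synthetic Likert-style data; the variable names, coefficients and generated values are stand-ins, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Synthetic stand-in for the survey (140 respondents, 5-point Likert predictors).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "self_efficacy": rng.integers(1, 6, 140),
    "role_model": rng.integers(1, 6, 140),
    "lack_confidence": rng.integers(1, 6, 140),
})
df["intention"] = (0.21 * df["self_efficacy"] + 0.17 * df["role_model"]
                   - 0.26 * df["lack_confidence"] + rng.normal(0, 1, 140))

X = sm.add_constant(df[["self_efficacy", "role_model", "lack_confidence"]])
print(sm.OLS(df["intention"], X).fit().summary())

# VIF is the reciprocal of tolerance; values near 1 (as in the study, 1.24-1.58)
# indicate no serious multicollinearity.
for i, name in enumerate(X.columns[1:], start=1):
    print(name, round(variance_inflation_factor(X.values, i), 2))
```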
Our findings show strong evidence that subjects with a strong family background in business draw significant motivation from it for building or succeeding to a business in the future; the significant negative effect of lack of self-confidence also signals a real difference between subjects with and without a family business background. In short, our findings suggest examining the entrepreneurial factors of family-business subjects more deeply and giving some thought to how to create a good entrepreneurship curriculum in universities in general [11].
Table 1. Regression result (a)
Factor | Result
Group | 0.092
Sex | -0.013
GPA | -0.022
Self efficacy | 0.212**
Family ranking | 0.022
Has business experience | 0.069
Inspiration from role model | 0.172**
Motivation to be independent | -0.066
Personal achievement and talents | -0.034
Money-related motivation | -0.062
Market-related motivation | 0.046
Uncertainty on politics and economic growth | -0.234**
Difficulty with government bureaucracy | 0.144**
Lack of guidelines on starting a new venture | -0.002
Personal reasons (e.g. marriage, pursuing a higher degree) | 0.132*
Received job offer from big companies | -0.175**
Lack of initial investment | 0.090
Lack of family support | 0.001
Lack of university support | 0.061
Lack of self confidence | -0.261**
Uncertainty on market and tight competition | 0.043
F | 11.01
Significance of F (Prob < F) | 0.000
R² | 0.6819
Adjusted R² | 0.62
(a) *** significant at 1%, ** significant at 5%, * significant at 10%
3. Small family business: Relationship of Entrepreneurial orientation and firm performances
The results of our first-phase research clearly showed that entrepreneurial intention among future business owners rests on strong family ties to business and on self-efficacy. These findings suggested continuing our research on entrepreneurial activity in the family business field. Kanai [12] pointed out that entrepreneurship consists of entrepreneurial intention, the ability to conceptualize, and the power to mobilize various resources, and stressed the importance of network effects on entrepreneurship. The next step was therefore to examine how business owners conceptualize their business and how they harness various resources during their business activity. The entrepreneurial orientation concept is a good model for examining these kinds of factors [13]. Thus we directed our second-phase research at examining the relationships among small family businesses, entrepreneurial orientation, and firm succession across generations, in order to understand more about entrepreneurial activity in Indonesia.
Entrepreneurial Orientation (EO) refers to a firm's strategic orientation, capturing specific entrepreneurial aspects
of decision-making styles, methods, and practices. As such, it reflects how a firm operates rather than what it does
([14]; [15]). Reference [16] summarizes the characteristics of an entrepreneurial firm as one that engages in product
market innovation, undertakes somewhat risky ventures, and is first to come up with proactive innovations, beating
competitors to the punch. Based on this, several researchers have agreed that EO is a combination of the three dimen-
sions: innovativeness, proactiveness, and risk taking. Thus, EO involves a willingness to innovate to rejuvenate market
offerings, take risks to try out new and uncertain products, services, and markets, and be more proactive than competi-
tors toward new marketplace opportunities.
Numerous studies of EO generally agree that EO has a positive effect on small business performance, but this should be taken with a grain of salt. Lumpkin and Dess [15] proposed that EO can help a small firm leverage its performance if certain internal and external conditions are met. Wiklund and Shepherd [14] argued that particular configurations of EO, networks, and the external environment play an important role in explaining the variance in small business performance.
In line with the previous paragraph, we examined the general condition of Indonesian small business in our previous research [17]. We used an integrated model of the configuration of a firm's internal assets (EO), external factors, and firm strategy to explain the variance of performance in Indonesian small business. We defined firm strategy as the distinct way a firm conducts its business, described using Porter's positioning strategies [18]; following [19], we defined firm strategy in terms of a cost leadership strategy and a differentiation strategy. Firm performance was measured by how a firm performed in terms of profit and market growth over the last three years, with the definition and measurement of firm performance based on the work of Spanos and Lioukas [20]. We conducted an empirical study with 256 samples of small firms in East Java and analyzed the data through structural equation modeling. We found a positive and significant relationship in which EO, external factors, and firm strategy together explain the variance in firm performance. Comparing our results in Table 2 with those of Wiklund and Shepherd [14], we are confident that our research confirms conclusions similar to those of previous studies.
Therefore, based on our findings and previous studies, we conducted qualitative research on several small firms in Indonesia to examine the relationship between EO and firm performance. We also included questions about entrepreneurial intention, the relationship with the previous generation, and the business model to enrich our results and findings. So far, we have acquired three samples with different levels of firm performance (high, normal, and low) to be compared with each other.
Our results in Table 3 show interesting findings on the relationship among firm performance, entrepreneurial intention and entrepreneurial orientation. The firm with high performance exhibits good owner traits, such as high self-efficacy, high confidence, and high entrepreneurial orientation. This finding supports our previous studies on entrepreneurial intention and the relationship of EO with firm performance. The higher-performing firm also shows a stronger relationship with education level and knowledge acquisition, which supports previous work on the relationship between firm performance and knowledge assets [21]. In terms of family relationships and family succession, our findings show that a franchise-like succession model reduces agency problems inside a family business, and that a good relationship within the family is one of the characteristics of a high-performance small family business.
Table 2. Structural model results
Examined path | Standardized path coefficient | P value | Result
1. External factors → Firm strategy | 0.639 | 0.09 | Supported at 10% level
2. External factors → Firm performance | -0.068 | 0.459 | Not supported
3. Firm strategy → Firm performance | 0.192 | 0.03 | Supported at 5% level
4. EO → Firm performance | 0.105 | 0.093 | Supported at 10% level
5. EO → Firm strategy | 0.219 | 0.05 | Supported at 5% level
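The paths in Table 2 were estimated with structural equation modeling; the sketch below is a simplified, OLS-based path analysis on synthetic data, shown only to illustrate how the EO → strategy → performance paths can be estimated in Python. The variable names and generated coefficients are placeholders, not the study's data or its estimation method.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-ins for 256 small firms: EO, external factors, strategy, performance.
rng = np.random.default_rng(3)
n = 256
eo = rng.normal(size=n)
external = rng.normal(size=n)
strategy = 0.22 * eo + 0.6 * external + rng.normal(scale=0.8, size=n)
performance = 0.10 * eo + 0.19 * strategy - 0.07 * external + rng.normal(scale=0.9, size=n)
df = pd.DataFrame({"EO": eo, "External": external,
                   "Strategy": strategy, "Performance": performance})

# Path model estimated as two regressions:
# (1) antecedents of firm strategy, (2) antecedents of firm performance.
m1 = sm.OLS(df["Strategy"], sm.add_constant(df[["EO", "External"]])).fit()
m2 = sm.OLS(df["Performance"], sm.add_constant(df[["EO", "Strategy", "External"]])).fit()
print(m1.params, m2.params, sep="\n")
```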
Table 3. Results of sample interviews
Factor | Firm A | Firm B | Firm C
Business | Watches and bags | Glassware and kitchen utensils | Gold and silver jewelry
Founded | 1980s | 1980s | 1980s
Generation | Second | Second | Second
Firm performance | Good | Normal | Worst
Succession model | Franchise-like | Normal succession | Exit/spin-off
Education level | University | University | Drop out
Business-education relation | High | Normal | Low
Previous-generation relation | Good | Good | Bad
External condition | Rough | Rough | Rough
Cultural understanding | Good | Good | Bad
Self efficacy | High | Normal | Low
Self confidence | High | Normal | Low
Agency problem | None | Middle | High
Innovativeness | High | Low | Low
Proactiveness | High | High | Low
Risk taking | High | Low | High
4. Closing remarks and future plan
Our research examines the basic building blocks of small and medium enterprise: the entrepreneurial factors consisting of entrepreneurial intention, orientation, external factors, and firm strategy. We have succeeded in capturing important aspects and findings about entrepreneurial intention and the factors that drive variation in firm performance. We find that, with regard to entrepreneurial intention, self-efficacy and role models leverage the eagerness to start a business, while job offers from big companies and uncertainty in the external environment degrade that intention. Those positive factors still hold true when we examine business owners: self-efficacy and a good relationship with the previous owner/family, together with the factors of entrepreneurial orientation, help the firm achieve higher performance. Nevertheless, our research is still in its early stages and there is much work to be done. Our next step will be to reaffirm our research and our findings against previous studies. While we have obtained some interesting findings, we believe it is still premature to wrap up our research with a conclusion. More data are desirable, especially qualitative data, to confirm our findings and to establish that the observed patterns hold across a larger population. When more data and more literature studies have been gathered, we are confident that our findings will make a good contribution to the business research field, giving a better understanding of the condition of Indonesian small and medium enterprises.
5. Acknowledgements
We wish to express our deepest gratitude to the Monbukagakusho Scholarship of the Japanese Government and to Osaka University, which funded our long-term research and academic opportunity. We also thank Petra Christian University in Surabaya, Indonesia, which provided valuable data for our research, and our fellow colleagues and professors whose generous input helped us conduct this research.
6. References
[1] Kementrian Koperasi dan Usaha Kecil dan Menengah Republik Indonesia, Indonesian SME development 2005-2009. http://www.depkop.go.id
[2] A. G. Brata, Distribusi Spasial UKM di masa krisis ekonomi. Jurnal Ekonomi Rakyat, Vol. 2, No. 8, 2003.
[3] N. Southiseng and J. Walsh, Competition and Management Issues of SME Entrepreneurs in Laos: Evidence from Empirical Studies in Vientiane Municipality, Savannaketh, and Luang Prabang. Asian Journal of Business Management, Vol. 2, No. 3, 2010, pp. 57-72.
[4] S. Eyaa and J. M. Ntayi, Procurement Practices and Supply Chain Performances of SMEs in Kampala. Asian Journal of Busi-
ness Management, Vol. 2, No. 4, 2010, pp. 82-88.
[5] G. J. Avlonitis and H. E. Salavou, Entrepreneurial orientation of SMEs, product innovativeness, and performance. Journal of
Business Research, Vol. 60, 2007, pp. 566-575.
[6] D. C. McClelland, Achievement and Entrepreneurship : A longitudinal Study, Journal of Personality and Social Psychology,
Vol. 1, No. 4, 1965, pp 389-393
[7] R. Lynn, The Secret of the Miracle Economy: Different National Attitudes to Competitiveness and Money. 1991, London: The
Social Affairs Unit.
[8] D. J. Storey, Understanding the Small Business Sector. 1994, London: Routledge.
[9] N. Krueger and D. V. Brazael, Entrepreneurial Potential and Potential Entrepreneurs, Entrepreneurship Theory & Practice,
Spring 1994, pp. 91-104.
[10] P.D. Reynolds, Who Starts New Firms? Linear Additive versus Interaction Based Models, paper presented at the 15th Babson
College Entrepreneurship Research Conference, London, April 1995, pp. 13-15.
[11] D. Soetanto, H. Pribadi, I. G. A. Widyadana, Determinant factors of entrepreneurial intention among university student. The
IUP Journal of Entrepreneurship Development, Vol. 7, Nos. 1 & 2, March & June 2010, pp. 23-37.
[12] K. Kanai, The mechanism for promoting entrepreneurship. In K. Gonda, F. Sakauchi, T. Higgins. Regionalization of Science
and Technology Resources in the Context of Globalization, Tokyo 1994.
[13] J. C. Casillas, A. M. Moreno, and J. L. Barbero, Entrepreneurial orientation of family firms: Family and environmental dimen-
sions. Journal of Family Business Strategy, Vol. 2, 2010, pp. 90-100.
[14] J. Wiklund and D. Shepherd, Entrepreneurial orientation and small business performance: a configurational approach. Journal of Business Venturing, Vol. 20, 2005, pp. 71-91.
[15] G. Lumpkin, and G. G. Dess, Clarifying the entrepreneurial orientation construct and linking it to performance. Academy
Management Review, Vol. 21, No. 1, 1996, pp. 135-172.
[16] D. Miller, The correlates of entrepreneurship in three types of firm. Management Science, Vol. 29, 1983, pp. 770-791.
[17] H. Pribadi and K. Kanai. Examining and Exploring Indonesia Small and Medium Enterprise Performance: An Empirical
Study. Asian Journal of Business Management, Vol. 3, No. 2, 2011, pp. 98-107.
[18] M. E. Porter, Competitive Advantage. 1985. The Free Press.
[19] M. Acquaah and M. Yasai-Ardekani, Does the implementation of a combination competitive strategy yield incremental per-
formance benefit? A new perspective from a transition economy in Sub-Saharan Africa. Journal of Business Research, Vol. 61,
No. 4, 2008, pp. 346-354.
[20] Y. E. Spanos and S. Lioukas, An examination into the causal logic of rent generation: Contrasting Porter's competitive strategy framework and the resource-based perspective. Strategic Management Journal, Vol. 22, 2001, pp. 907-934.
[21] Y. H. Li., J. W. Huang, and M. T. Tsai, Entrepreneurial orientation and firm performance: The role of knowledge creation
process. Industrial Marketing Management, Vol. 38, 2009, pp. 440-449.
Proceeding of Industrial Engineering and Service Science, 2011, September 20-21
Copyright 2011 IESS
A Framework For Assessment and Validation of
Construction Project Management Performance
Din, Sabariyah* and Abd-Hamid, Zahidy**

UTM Razak School of Engineering and Advanced Technology, Universiti Teknologi Malaysia, International Campus,
Kuala Lumpur, Malaysia. Email: *(saba@ic.utm.my) ** ( ir.zahidy@gmail.com).
ABSTRACT
There have been few frameworks that construction companies could confidently apply to draw conclusions on the relationship between ISO 9000 Quality Management System (QMS) certification and construction project performance. In view of this limitation, this paper offers a project management framework on which a model called the Project Management Performance Assessment for Contractors (PMPAC) was built. Based on the framework, a questionnaire, after being piloted, was mailed to project managers to gather sample data. The sample was drawn from Grade G7 ISO-certified and non-ISO-certified Malaysian construction companies. The data were analyzed to test three hypotheses. The study revealed a significant difference in Project Management Practices, no significant difference in Project Success, and a significant difference in Financial Returns between the two data sets, encompassing ISO 9000 certified and non-certified construction companies. The framework was validated by practitioners, who testified that the PMPAC Model is an effective tool for assessing construction project management performance.

Keywords: Construction, Project Management Performance Framework, ISO 9000

1. Introduction
The construction industry is a huge industry, accounting for around 10% of the world's gross domestic product (GDP), 7% of employment and up to 40% of energy usage [19]. Nevertheless, it is often criticized for being inefficient [3] and for failing to meet clients' requirements [9]. The Malaysian construction industry is no exception: a number of large projects have been abandoned, mainly due to financial problems [1]. In an effort to eliminate this negative reputation, the Construction Industry Development Board Malaysia (CIDB) introduced a compulsory measure for Grade G7 contractors to be certified to the ISO 9000 QMS (by 1 January 2009) before they could undertake any business operations in Malaysia.
This paper seeks to explore the relationship between ISO 9000 certification and construction project performance. It first details the formulation of the conceptual framework, then the application of the framework and a brief research methodology, followed by the research findings, discussion and conclusion.
2. Previous Studies of ISO 9000 Certification and Project Management Performance
A number of past studies have focused on the motives for gaining ISO 9000 QMS certification. The most frequently mentioned were to enhance the image of the organization, to improve business performance [5], and to capture project-related benefits [12; 21] from ISO 9000 QMS certification through internal changes in operational functions. Later, Benner and Veloso [4] highlighted improvements in revenue through wider access to new customers after adopting the ISO 9000 QMS. Lo and Humphreys [14] suggested that project management techniques could be used in developing a project network and in resource loading profiles to ensure effective and efficient implementation of the QMS.
Orwig and Brennan [17] noticed that many elements of quality management systems were applied to key business processes involving repetitive, steady-state and standardized manufacturing operations. Serpell [20], while recognising that the QMS has its origins in manufacturing, argued that the concept could be effectively applied to construction project environments. Construction is, however, unique in that no two projects are exactly the same; construction projects are characterized by their complexity and by the evolving, non-standardized nature of their management processes. Due to the fundamental differences between the two sectors, Kazaz and Birgonul [13] viewed that the manufacturing-oriented quality concept cannot be applied directly to the construction industry.
With regard to Project Success (PS), Heerkens [10] suggested that PS could be measured at four levels: meeting project targets; project management efficiency; user utility; and organizational improvement. PS can also be measured in
the form of lessons learnt from prior failures or successes [8]. Lock [15] noticed that the construction industry has a long record of adopting project management methods effectively, and that the success of projects is said to depend more on the people who manage them than on the specialized equipment applied to effect standardization.
In a study of project financial returns (FR), Beatham [3] noted that traditionally a company's performance was measured solely in financial terms, profit and turnover. Manoochehri [16], however, found that traditional financial measures based on accounting concepts and practices are often inappropriate and insufficient. Since FR are categorized as lagging indicators, they are considered poor predictors of tomorrow's performance [18]. From a survey of 114 project managers, Cook [7] concluded that financial returns (FR) had a positive impact on Project Success.
3. Conceptual Framework, Model Development, Hypotheses and Methodology
3.1 Conceptual Framework
It is proposed that the effects of ISO 9000 certification efforts on project management (PM) performance can be evaluated by obtaining measurements on three broad components: Project Management Practices (PMP), Project Success (PS), and Project Financial Returns (FR). These three components are treated as dependent variables, with certification as the independent variable. The hypothesis (see Section 3.3) for each component is marked in Figure 1 as H1, H2, and H3 respectively.














Figure 1: The Framework of Project Management (PM) Performance.
3.2 Model Development
The Project Management Performance Assessment (PMPA) Model of Bryde [6] was referred. The Model conforms
with the framework of the European Foundation for Quality Management (EFQM) Business Excellence. Hillman [11]
noted that the EFQM Model provides a tried and tested framework. Figure 2 represents the PMPAs Model, showing
the enablers (inputs) such as PM Leadership on the left, and output such as PM KPI on the right.


Figure 2: The PMPA model from Bryde [6].
Some variables related to QMS certification are not measured in the PMPA model above. It was therefore extended, and is now called the Project Management Performance Assessment for Contractors, or the PMPAC Model. Figure 3 introduces some additional business performance indicators: on the left are the enablers (inputs), led by PM Leadership, and on the right are the results, Project Success (PS) and Financial Returns (FR), to be assessed using the PMPAC Model.
[Figure 2 shows the enablers (PM Leadership, PM Staff, PM Policy & Strategy, PM Partnerships & Resources, Project Life Cycle Management Process) leading to the results (PM Key Performance Indicators). Figure 1 maps ISO-certified and non-certified construction companies to PM Practices (PMP), Project Success (PS) and Project Financial Returns (FR) via hypotheses H1, H2 and H3, comparing performance management outcomes between ISO-certified and non-certified companies.]

Figure 3: The PMPAC Model
3.3 Hypotheses on Project Management (PM) Performance
i. PM Practices: Given that the ISO 9000 QMS is used to assure the quality of management processes derived from the PM Practices of construction projects, it is expected that ISO 9000 certification will result in enhanced PM Practices. The first hypothesis is:

H1₀: There is no difference in PM Practices (PMP) between ISO-certified and non-ISO-certified construction companies.

ii. Project Success: The suggestions put forward by Heerkens [10] and Forsberg [8] led to the second hypothesis:

H2₀: There is no difference in Project Success (PS) between ISO-certified and non-certified construction companies.

iii. Financial Returns: Given the evidence reported in Manoochehri [16], Parker [18], Beatham [3] and Cook [7], this hypothesis is postulated:

H3₀: There is no difference in Financial Returns (FR) between ISO-certified and non-certified construction companies.
3.4 Methodology
The PMPAC Model was applied to measure PM Performance. A questionnaire was designed containing the following enablers. Under PMP, questions were arranged in these categories: PM Leadership (5 questions); PM Staff (2 questions); PM Policy and Strategy (3 questions); PM Partnerships and Resources (2 questions); Project Life Cycle Management Process (4 questions); and PM Key Performance Indicators (4 questions). In addition there were Financial Returns (6 questions) and Project Success (10 questions). Data were collected on PM Practices, PS and FR using a structured questionnaire divided into four parts. Part 1: descriptive data on the respondent's organization. Part 2: enablers (PM Practices), arranged in the categories PM Leadership, PM Staff, PM Policy and Strategy, and PM Partnerships and Resources. Part 3: perceptions of PS and facts on FR. Part 4: demographic data of respondents. A five-point Likert scale (1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, 5 = strongly agree) was used to measure PMP. The questionnaire was piloted with 20 project managers, 10 from ISO-9000-certified and 10 from non-certified construction companies. Tests of the questionnaire's reliability confirmed the appropriateness of the data, without any items being deleted, and checks of internal and external validity showed that further refinement of the survey instrument was not needed.
The sample was drawn from approximately 130,000 companies listed in the CIDB Directory 2006-2007 using a table of random numbers. The population of ISO-certified companies was limited to those which had been certified since 2004 and had completed at least one project since first obtaining certification. A total of 151 ISO-certified construction companies and 3,437 non-ISO-certified companies matching the criteria were identified. Approximately 20% of the non-ISO-certified companies were taken as the sample, resulting in 806 companies being systematically selected. Table 1 below shows the distribution of completed questionnaires from which the response data were tabulated. The assumptions for the multivariate MANOVA test suggested by Tabachnick and Fidell [22] were evaluated, including unequal sample sizes, multivariate normality, linearity, outliers, homogeneity of variance-covariance matrices, reliability of covariates, and multicollinearity and singularity. The normality test confirmed that the original data set was approximately normal, and a non-response bias test showed no significant difference.
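A minimal sketch of how such a between-groups multivariate comparison can be run in Python with statsmodels is shown below; the generated scores only mimic the reported group means and standard deviations and are not the study's data, and the statsmodels call is a standard illustration rather than the package the authors used.

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Synthetic scores mimicking the reported group means/SDs for PMP, PS and FR.
rng = np.random.default_rng(1)
n_iso, n_non = 71, 245
df = pd.DataFrame({
    "group": ["ISO"] * n_iso + ["NonISO"] * n_non,
    "PMP": np.r_[rng.normal(3.81, 0.39, n_iso), rng.normal(3.54, 0.49, n_non)],
    "PS":  np.r_[rng.normal(4.20, 0.56, n_iso), rng.normal(4.05, 0.51, n_non)],
    "FR":  np.r_[rng.normal(3.74, 0.50, n_iso), rng.normal(3.45, 0.60, n_non)],
})

# One-way MANOVA: multivariate tests (Pillai's trace, Wilks' lambda, ...) of the
# group effect on the three dependent measures.
print(MANOVA.from_formula("PMP + PS + FR ~ group", data=df).mv_test())
```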
[Figure 3 shows the PMPAC Model: the enablers PM Leadership, PM Staff, PM Policy & Strategy, PM Partnerships & Resources, Project Life Cycle Management Process and PM Key Performance Indicators (KPIs) on the left, leading to the results PM Project Success and PM Financial Returns on the right.]



Table 1: Number of Completed Questionnaires
Type of sample | Sample | Responses (rate) | Incomplete | Complete (rate)
ISO companies | 151 | 73 (48.3%) | 2 | 71 (47.0%)
Non-ISO companies | 806 | 263 (32.6%) | 18 | 245 (30.4%)
Total responses: 336
4. Data Analysis and Model Validation
4.1 Descriptive
There were 57 (80.3%) male and 14 (19.7%) female respondents from ISO-certified companies, and 184 (75.1%) male and 61 (24.9%) female respondents from non-ISO companies. Most had more than 15 years of experience in PM (n = 27, 38.0% for ISO-certified companies and n = 86, 35.1% for non-certified companies). A t-test and MANOVA were run to check for respondent bias with respect to sex or level of experience. The t-test on mean PM Practices scores showed no significant difference between genders, and there was no significant difference across levels of experience in the PM Practices, Financial Returns and Project Success mean scores [Pillai's Trace (0.377) > alpha (0.01)]. The proportion holding ISO 9000 certification was higher among larger organizations than among smaller ones.
4.2 Hypotheses Testing
Results from the F test tabulated in Table 2(a) suggest that there is a significant difference in PM Practices between the ISO-certified and non-certified construction companies. In Table 2(b), the MANOVA results show a significant difference between the two groups for each of PM Policy and Strategy, Project Life Cycle Management Process, and KPIs. The other three factors (Leadership, Staff, and Partnerships and Resources) show no significant difference between the two groups of companies at p < 0.01.
Table 2(a) Summary of MANOVA Test for Project Management Practices
Variable | ISO-certified (N = 71) Mean | SD | Non-certified (N = 245) Mean | SD | MS | F | Sig.
PM Practices | 3.8127 | 0.39113 | 3.5431 | 0.48501 | 4.002 | 18.449 | 0.000

Table 2(b) Summary of MANOVA Test on Factors of Project Management Practices
Factor | ISO-certified (N = 71) Mean | SD | Non-certified (N = 245) Mean | SD | MS | F | Sig.
Leadership | 3.6845 | 0.49067 | 3.5469 | 0.43074 | 1.042 | 5.266 | 0.022
Staff | 3.9577 | 0.56535 | 3.7449 | 0.74742 | 2.494 | 4.935 | 0.027
Policy and strategy | 4.0516 | 0.46683 | 3.7918 | 0.57950 | 3.716 | 12.004 | 0.001
Partnerships and resources | 3.7394 | 0.60273 | 3.5796 | 0.82158 | 1.406 | 2.323 | 0.128
Project life cycle management process | 3.8697 | 0.55719 | 3.4633 | 0.69959 | 9.094 | 20.230 | 0.000
Key performance indicators | 3.7007 | 0.56001 | 3.3122 | 0.75235 | 8.307 | 16.295 | 0.000

Table 3(a) shows that the F test yielded a p-value of 0.038. With an alpha of 0.01, H2₀ is not rejected, so one might conclude that there was no significant difference in overall Project Success between the ISO-certified and non-certified construction companies. However, as shown in Table 3(b), the MANOVA results reveal that four factors of Project Success, namely Within Budget, Efficient Management, Benefit to Intended User, and Impact on Company's Business, show a significant difference between the two groups.

Table 3(a) Summary of MANOVA Test for Project Success
Variable | ISO-certified (N = 71) Mean | SD | Non-certified (N = 245) Mean | SD | MS | F | Sig.
Project Success | 4.1958 | 0.56327 | 4.0482 | 0.51435 | 1.199 | 4.341 | 0.038
Table 3(b) Summary of MANOVA Test on Factors of Project Success
Factor | ISO-certified (N = 71) Mean | SD | Non-certified (N = 245) Mean | SD | MS | F | Sig.
Within schedule | 3.96 | 0.977 | 3.96 | 0.879 | 0.002 | 0.002 | 0.000
Within budget | 4.00 | 0.956 | 3.83 | 0.884 | 1.618 | 1.994 | 0.006
Efficient management | 4.01 | 0.853 | 3.90 | 0.783 | 0.691 | 1.082 | 0.003
Within quality | 4.23 | 0.721 | 4.03 | 0.636 | 2.132 | 4.951 | 0.016
Works accordingly | 4.21 | 0.607 | 4.00 | 0.668 | 2.363 | 5.503 | 0.017
Use by intended user | 4.27 | 0.585 | 4.02 | 0.689 | 3.476 | 7.804 | 0.024
Benefit to intended user | 4.35 | 0.588 | 4.04 | 0.664 | 5.334 | 12.710 | 0.004
Impact on client's performance | 4.23 | 0.741 | 4.01 | 0.698 | 2.500 | 4.989 | 0.016
Impact on company's business results | 3.31 | 0.600 | 4.29 | 0.660 | 0.022 | 0.053 | 0.000
Lessons learned | 4.39 | 0.686 | 4.40 | 0.582 | 0.000 | 0.000 | 0.000

The F test in Table 4(a) led to rejecting H3₀ and concluding that there was a difference in Financial Returns between the certified and non-certified construction companies. As shown in Table 4(b), the MANOVA results revealed that three factors of Financial Returns, namely Financial Calculation Procedure, Financial Contingency Plan, and Effect of Price Escalation, show significant differences between the two groups.
Table 4(a) Summary of MANOVA Test for Financial Returns
Variable | ISO-certified (N = 71) Mean | SD | Non-certified (N = 245) Mean | SD | MS | F | Sig.
Financial Returns | 3.7441 | 0.49828 | 3.4476 | 0.60043 | 4.840 | 14.426 | 0.000
Table 4(b) Summary of MANOVA Test on Factors of Financial Returns
Factor | ISO-certified (N = 71) Mean | SD | Non-certified (N = 245) Mean | SD | MS | F | Sig.
Financial calculation procedure | 4.10 | 0.539 | 3.69 | 0.764 | 9.199 | 17.750 | 0.000
Financial contingency plan | 3.96 | 0.685 | 3.59 | 0.823 | 7.370 | 11.685 | 0.001
Amount loan used | 3.59 | 0.785 | 3.34 | 0.917 | 3.405 | 4.304 | 0.039
Inflation allowance and price escalation | 3.61 | 0.746 | 3.39 | 0.893 | 2.516 | 3.386 | 0.067
Effect of price escalation | 3.70 | 0.782 | 3.41 | 0.853 | 4.693 | 6.693 | 0.010
Availability of positive financial returns | 3.51 | 0.826 | 3.26 | 0.968 | 3.438 | 3.903 | 0.049
The PMPAC Model was validated to determine whether the Model, built on the framework shown in Figure 1, could serve as an effective tool for assessing PM Performance in the construction industry. The validation was carried out in two phases using an interview method. In Phase 1, twenty-two (22) randomly selected construction companies of various grades and three (3) developers were involved, and the total score of each respondent was calculated. The results indicate that 6 (24.0%) of the respondents scored less than 50% on the PMPAC and were thus below average in project management performance. In Phase 2, the Public Works Department (PWD) of the State of Pahang (a client of construction contractors) was consulted. The District Engineer, representing the PWD, was asked to rank the corresponding contractors' project management performance on a Likert scale: 1 =
Very Good, 2 = Good, 3 = Fair, 4 = Poor and 5 = Very Poor. The rank given by the District Engineer was compared against the level of project management performance reported by the corresponding contractor from the same district, using the five most recently completed projects from each district. The validation test results shown in Table 5 indicated that there was no significant difference (t = 2.679, p > 0.01) in PM Performance mean scores (at a 1 percent level of significance) between the contractors and the client (the District Engineer). This suggests that the validated PMPAC Model can be used for assessing PM performance in the Malaysian construction industry.
Table 5 Summary of t-Test Analysis (Second Phase of PMPAC Model Validation Test)
Group | N | Mean | Standard deviation | df | t | Sig. (2-tailed)
Contractor | 17 | 77.0235 | 6.28499 | 32 | 2.679 | 0.012*
PWD | 17 | 66.1765 | 15.46462 | | |
* p > 0.01; not significant at the 0.01 level
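For the Phase 2 comparison, an independent two-sample t-test of the kind summarized in Table 5 can be reproduced with scipy as in the sketch below; the generated scores only mimic the reported group means and standard deviations and are not the study's data.

```python
import numpy as np
from scipy import stats

# Synthetic PM-performance scores mimicking Table 5 (17 projects per group).
rng = np.random.default_rng(2)
contractor = rng.normal(77.0, 6.3, 17)    # contractors' self-reported scores
pwd = rng.normal(66.2, 15.5, 17)          # client (PWD District Engineer) ratings

t_stat, p_value = stats.ttest_ind(contractor, pwd)   # df = 17 + 17 - 2 = 32
print(f"t = {t_stat:.3f}, two-tailed p = {p_value:.3f}")
print("not significant at alpha = 0.01" if p_value > 0.01 else "significant at alpha = 0.01")
```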
5. Discussions and Conclusion
Results from the analysis indicated that ISO 9000 QMS certification had a positive effect on PM Practices and Financial Returns (FR), but not on Project Success (PS). The findings on PM Practices seem consistent with the views put forward by Brown [5], Serpell [20], Lo and Humphreys [14], Karim [12] and Benner and Veloso [4]. The findings also showed that the ISO 9000 QMS did not enhance PM Partnerships and Resources. The absence of a significant difference between ISO-certified and non-certified construction companies in Project Success involved enablers such as Use by its Intended User and Lessons Learned from the Completed Project. These results might be related to the current quality assurance practices adopted by the companies. Au and Yu [2] added that the contract was good enough to control the quality procedures adopted by construction companies. In Malaysia, however, the quality of construction work relies entirely on the supervision of the project manager and site supervisor, while quality assurance activities in most construction companies are oriented towards meeting the technical specifications of the final product, often verified through the owner's inspection.
The findings reported in this paper suggest that the PMPAC Model can be used as an effective tool for assessing construction project management performance. The framework sheds light on a symbiotic relationship between ISO 9000 QMS certification efforts and project management practices in the construction industry. Some limitations may be raised, since the data analyzed in this study were based on what project managers could recall from their experience of managing their most recently completed project. Nevertheless, this rich experience from construction projects has been utilized in designing a project management performance framework which, if applied, could counter the negative reputation of the construction industry. It is suggested that a wider range of project stakeholders, such as contractors of various grades and their clients, should be included in order to enhance the present findings.
In synthesis, improvements in the QMS may have to be tailored to the industry to warrant successful application of the system. Companies should also focus on systematic project management activities, applying quality management systems as a catalyst to achieve better project performance, financial returns and project success.
6. References
[1] Alaghbari, W., Kadir, M. R. A., Salim, A., and Ernawati (2007). The significant factors causing delay of building construc-
tion projects in Malaysia. Engineering, Construction and Architectural Management, 14(2), 192-206.
[2] Au, J. C. W., and Yu, W. W. M. (1999). Quality management for an infrastructure construction project in Hong Kong. Lo-
gistics Information Management, 12(4), 309-314.
[3] Beatham, S., Anumba, C., Thorpe, T., and Hedges, I. (2004). KPIs: a critical appraisal of their use in construction. Bench-
marking: An International Journal, 11(1), 93-117.
[4] Benner, M. J., and Veloso, F. M. (2008). ISO 9000 practices and financial performance: a technology coherence perspective.
Journal of Operations Management, 26, 611-629.
[5] Brown, A., Van der Wiele, and Loughton, K. (1998). Smaller enterprises experiences with ISO 9000. International Journal of
Quality & Reliability Management, 15(3), 273-285.
[6] Bryde, D.J., 2003. Modelling project management performance. Int. J. of Qual. & Rel. Mgt. 20, 2, 225-229.
[7] Cook, B. W. (2004). Measuring the value of success in project management organizations. Argosy University-Orange
County, USA: DBA Dissertation.
[8] Forsberg, K., Mooz, H., and Cotterman, H. (2000). A model for business and technical success, second edition. John Wiley &
Sons, Inc. N.Y., USA.
[9] Giles, R. (1997). ISO 9000 perspective for the construction industry in the UK. Training For Quality 5(4), 178-181.
[10] Heerkens, G. R. (2002). Project management. McGraw-Hill, N.Y., USA.
[11] Hillman, G. P. (1994). Making self-assessment successful. The TQM Magazine, 6(3), 29-31.
[12] Karim, K., Marosszeky, M., and Davis, S. (2006). Managing subcontractor supply chain for quality in construction. Engi-
neering, Construction and Architectural Management, 13(1), 27-42.
[13] Kazaz, A., and Birgonul, M. T. (2005). The evidence of poor quality in high rise and medium rise housing units: a case study
of mass housing projects in Turkey. Building and Environment, 40, 1548-1556.
[14] Lo, V., and Humphreys, P. (2000). Project management benchmarks for SMEs implementing ISO 9000. Benchmarking: An
International Journal, 7(4), 247-259.
[15] Lock, D. (2004). Project management in construction. Gower Publishing Limited, Hants, UK.
[16] Manoochehri, G. (1999). Overcoming obstacles to developing effective performance measures. Work Study, 48(6),223-229.
[17] Orwig, R. A., and Brennan, L. L. (2000). An integrated view of project and quality management for project-based organiza-
tion. International Journal of Quality & Reliability Management, 17 (4/5), 351-363.
[18] Parker, C. (2000). Performance measurement. Work Study, 49(2), 63-66.
[19] PricewaterhouseCoopers (2008). Engineering & construction industry sector. PricewaterhouseCoopers International Limited
[20] Serpell, A. (1999). Integrating quality systems in construction projects: the Chilean Case. International Journal of Project
Management, 17(5), 317-322.
[21] Singels, J., Ruel, G., and Van de Water, H. (2001). ISO 9000 series - certification and performance. International Journal of Quality & Reliability Management, 18(1), 62-75.
[22] Tabachnick, B. G., and Fidell, L. S. (2007). Using Multivariate Statistics, 5th Edition. Boston: Pearson Education, Inc.




Proceeding of Industrial Engineering and Service Science , 2011, September 20-21
Copyright 2011 IESS.
Hybrid Neural Network-Genetic Algorithms
Approach for Fault Diagnosis of Bearing System
L.A. Wulandhari¹, A. Wibowo², M.I. Desa³

Faculty of Computer Science and Information System, Universiti Teknologi Malaysia, 81310 UTM Skudai, Johor Bahru, Malaysia
¹awlili2@live.utm.my, ²antoni@utm.my, ³mishak@utm.my
ABSTRACT
Fault diagnosis of critical systems such as bearing systems requires particular attention, since the unexpected breakdown of one component can induce failure of the whole system. With effective diagnosis, faults can be detected much earlier and the unacceptable consequences of total system failure can be avoided. In this paper, we present the development of fault diagnosis techniques for bearing systems based on time-series vibration data, using a hybrid of back-propagation neural networks (BPNN) and genetic algorithms (GAs), called BPNN-GAs. The GAs are used within the BPNN to increase the performance of condition classification in the bearing system. Here, we consider a bearing system that consists of two bearings, the Drive End bearing (DE) and the Fan End bearing (FE). Three accelerometers attached to the DE, the FE, and the Baseline (BA) capture the vibration data, which must be analyzed to obtain information on the condition of the bearing system. We extract ten features from the vibration data as input and use sixteen classes as the target output. The results of the standard BPNN and BPNN-GAs are compared, clearly showing that BPNN-GAs give better classification accuracy in less CPU time and fewer iterations.
Keywords: Back-Propagation Neural Network, Genetic Algorithms, Fault Diagnosis, Bearing System

1. Introduction
Bearings are machine parts used to support rotating shafts. Appropriate bearing design can minimize friction, and bearing failure may cause expensive losses of production [1]. Unfortunately, bearings are among the machine parts with the highest percentage of defects compared to other components such as stator windings and rotors [2]. Therefore, early and effective fault diagnosis of bearings is an essential task.
Fault diagnosis can be carried out by examining and analyzing the vibration signal of the bearing. Vibration signal data can be represented in the frequency, time, or time-frequency domain [3]; in this paper, we use the time-frequency domain to diagnose bearing faults. However, it is not easy to identify the condition of the bearing system directly from the vibration signal, especially when more than one bearing is involved in a system. Artificial Intelligence is one of the techniques that can provide an automated procedure for fault diagnosis [4]; previous researchers have used fuzzy neural networks [5], radial basis function (RBF) networks [4], and genetic-based neural networks (GNNs) [6] for this purpose.
This paper presents a hybrid technique that combines back-propagation neural networks (BPNN) and genetic algorithms (GAs) to identify the condition of the bearing system. GAs are applied in the BPNN to obtain acceptable weights for BPNN training. In this paper, we improve the representation of bearing fault data over previous work by combining and modifying the available vibration signal data to obtain a more specific condition diagnosis. Ten features are extracted from these vibration signals and used as the input of BPNN training: standard deviation, skewness, kurtosis, maximum peak value, absolute mean value, root mean square value, crest factor, shape factor, impulse factor and clearance factor [7]. These non-dimensional features are effective and practical in fault diagnosis due to their relative sensitivity to early faults and robustness to various loads and speeds [4]. The target outputs are the sixteen conditions of the bearing system. In the result section we compare the performance of BPNN and hybrid BPNN-GAs; detailed steps for fault diagnosis of the bearing system are presented in the next section.
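A minimal sketch of how the ten time-domain features named above can be computed from one accelerometer channel is shown below (Python with numpy/scipy). The exact definitions used in [7] may differ slightly; the formulas here are the commonly used ones, and the test signal is synthetic.

```python
import numpy as np
from scipy import stats

def vibration_features(x):
    """Ten common time-domain features for one vibration channel."""
    x = np.asarray(x, dtype=float)
    abs_mean = np.mean(np.abs(x))
    rms = np.sqrt(np.mean(x ** 2))
    peak = np.max(np.abs(x))
    sra = np.mean(np.sqrt(np.abs(x))) ** 2          # square-root amplitude
    return {
        "std": np.std(x),
        "skewness": stats.skew(x),
        "kurtosis": stats.kurtosis(x),
        "max_peak": peak,
        "abs_mean": abs_mean,
        "rms": rms,
        "crest_factor": peak / rms,
        "shape_factor": rms / abs_mean,
        "impulse_factor": peak / abs_mean,
        "clearance_factor": peak / sra,
    }

# One synthetic channel; the DE, FE and BA channels together give 30 BPNN inputs.
rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 20 * np.pi, 2048)) + 0.1 * rng.normal(size=2048)
print(vibration_features(signal))
```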
2. Bearing Data Structure
In this paper, vibration signal data are captured from a bearing system consisting of a Drive End bearing (DE) and a Fan End bearing (FE). Three accelerometers, attached to the two bearings and to the baseline respectively, record the vibration signals of the bearings. The structure of the bearings and accelerometers is shown in Figure 1.







Figure 1. Bearing and Accelerometer Structure
Bearing vibration data were collected under seven different conditions: (1) FE and DE normal, (2) FE normal and DE inner race fault (IRF), (3) FE normal and DE ball fault (BF), (4) FE normal and DE outer race fault (ORF), (5) FE IRF and DE normal, (6) FE BF and DE normal and (7) FE ORF and DE normal. Based on the available data alone we would have seven condition classes of the bearing system as the output of the diagnosis; however, we combine and modify the data to extend the bearing conditions to sixteen classes, which are presented in Table 1.
Table 1. Sixteen conditions of bearing
No. | Condition | No. | Condition | No. | Condition | No. | Condition
C1 | FE and DE Normal | C5 | FE IRF and DE Normal | C9 | FE IRF and DE ORF | C13 | FE BF and DE BF
C2 | FE Normal and DE IRF | C6 | FE ORF and DE Normal | C10 | FE IRF and DE BF | C14 | FE ORF and DE IRF
C3 | FE Normal and DE ORF | C7 | FE BF and DE Normal | C11 | FE BF and DE IRF | C15 | FE ORF and DE ORF
C4 | FE Normal and DE BF | C8 | FE IRF and DE IRF | C12 | FE BF and DE ORF | C16 | FE ORF and DE BF
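The sixteen classes are simply all combinations of four FE states and four DE states, as the short sketch below illustrates; the numbering produced here is illustrative and does not reproduce the ordering of Table 1 (where C1-C7 are the seven measured conditions).

```python
from itertools import product

# Cartesian product of four FE states and four DE states gives the 16 classes.
states = ["Normal", "IRF", "BF", "ORF"]
classes = [f"FE {fe} and DE {de}" for fe, de in product(states, states)]
print(len(classes))   # 16
for label in classes:
    print(label)
```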
A total of 320 samples of time-series data are used in the BPNN and BPNN-GAs, split into two sets: 240 samples for training and 80 samples for testing. The BPNN uses 30 input neurons, composed of the ten features extracted from each of the three accelerometers. The topology of the BPNN and BPNN-GAs is explained in the next section.
3. Hybrid BPNN-GAs
Standard BPNN is one of the supervised training algorithms widely used in defect diagnosis. However, BPNN suffers from a conflict between overfitting and generalization, which leads to low training speed and a tendency to converge to a local optimum of the network [8-9]. This problem can be tackled by applying GAs to the standard BPNN. GAs are global search methods based on principles such as selection, crossover and mutation [10]. In this paper, GAs are applied to find acceptable weights, which are then used in BPNN training. By using these acceptable weights, a minimum mean square error (MSE) can be obtained in fewer iterations. In the next subsections we briefly introduce the standard BPNN, GAs and BPNN-GAs.
3.1. Back Propagation Neural Network (BPNN)
Here, we assume the BPNN has a training input vector $x = (x_1, x_2, \ldots, x_n)$ and target $t = (t_1, t_2, \ldots, t_m)$, which implies that the input and output layers of the BPNN consist of $n$ and $m$ neurons, respectively. The input layer and the output layer are related through hidden layers whose sizes are set in advance. The $i$-th input and the $j$-th output neuron are connected by a weight $w_{ij}$, and the output satisfies

$y_{in,j} = \sum_{i=1}^{n} x_i w_{ij}$,   (1)

$y_j = \dfrac{1}{1 + \exp(-y_{in,j})}$.   (2)

The error between the output and the target is calculated using the mean square error (MSE) formulation

$E = \dfrac{1}{2} \sum_{j=1}^{m} (t_j - y_j)^2$,   (3)

and BPNN updates the weights to obtain the desired MSE value.
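A minimal NumPy sketch of Eqs. (1)-(3) is given below, assuming one hidden layer (as in the 30-l-16 topology used later) and omitting bias terms for brevity; the function names are illustrative.

```python
import numpy as np

def sigmoid(z):
    # Eq. (2): logistic activation
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, w_ih, w_ho):
    """Forward pass of a 30-l-16 BPNN (biases omitted).

    x: input vector (n,), w_ih: (n, l) input-to-hidden weights,
    w_ho: (l, m) hidden-to-output weights.
    """
    h = sigmoid(x @ w_ih)    # Eqs. (1)-(2) applied at the hidden layer
    y = sigmoid(h @ w_ho)    # and again at the output layer
    return y

def mse(y, t):
    # Eq. (3): error between network output y and target t
    return 0.5 * np.sum((t - y) ** 2)
```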
3.2. Genetic Algorithms (GAs)
GAs are adaptive search and optimization algorithms that are easy to operate because they rely on natural genetic principles, have minimal requirements and take a global perspective. GAs are good at finding acceptable solutions; however, they do not guarantee the global optimum [11]. The GAs are performed by the following steps [12]:
1. Generate an initial population of chromosomes randomly.
2. Calculate the fitness value of each chromosome in the population using

$F_i = \dfrac{1}{E_i}$.   (4)

3. Form a mating pool containing the best chromosomes, selected using the roulette selection method.
4. Select parent pairs from the mating pool.
5. Combine each pair of parents using the crossover operator to obtain offspring.
6. Create a new population of chromosomes by combining the selected parents and their offspring.
7. Evaluate the fitness values of the new population. If the fitness values converge, stop and return the best solution in the current population; otherwise, go to Step 3 with the new population.
3.3. BPNN-GAs
The hybrid of BPNN-GAs is conducted in the following steps:
1. Assume the BPNN has $n \cdot l + l \cdot m$ weights, where $n$ is the number of input neurons, $m$ is the number of output neurons, $l$ is the number of hidden-layer neurons, and each weight (gene) is a real number.
2. Generate an initial population of chromosomes, each consisting of 1380 genes ($n$ = 30, $m$ = 16, $l$ = 30), from random BPNN weights.
3. Calculate the fitness values of the $p$ chromosomes using equation (4).
4. Generate the mating pool by selecting the best chromosomes using the roulette selection method.
5. Select parent pairs from the mating pool for the crossover mechanism.
6. Create a new population by combining the selected parents and their offspring.
7. Evaluate the fitness values of the new population. If the fitness values converge, stop and return the best solution in the current population; otherwise, go to Step 3 with the new population.
8. Apply the best solution in the current population as the initial weights for BPNN training.
The scheme of hybrid BPNN-GAs is shown in Figure 2.
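The GA stage of the hybrid can be sketched as below, assuming the fitness of Eq. (4) with $E_i$ the training MSE of the decoded weight vector, roulette-wheel selection and single-point crossover; mutation and the exact chromosome encoding are not detailed in the paper, so the operators shown here are illustrative rather than the authors' implementation.

```python
import numpy as np

def ga_initial_weights(eval_mse, n_genes, pop_size=100, crossover_rate=0.6,
                       max_gen=200, rng=np.random.default_rng(0)):
    """Evolve a flat weight vector to seed BPNN training (Steps 1-8).

    eval_mse: callable mapping a flat weight vector to its training MSE.
    """
    pop = rng.uniform(-1.0, 1.0, size=(pop_size, n_genes))             # Steps 1-2
    for _ in range(max_gen):
        fitness = np.array([1.0 / (eval_mse(ind) + 1e-12) for ind in pop])  # Eq. (4)
        if fitness.std() < 1e-9:                                        # crude convergence test (Step 7)
            break
        probs = fitness / fitness.sum()
        pool = pop[rng.choice(pop_size, size=pop_size, p=probs)]        # roulette selection (Step 4)
        children = pool.copy()
        for i in range(0, pop_size - 1, 2):                             # crossover (Steps 5-6)
            if rng.random() < crossover_rate:
                cut = rng.integers(1, n_genes)
                children[i, cut:] = pool[i + 1, cut:]
                children[i + 1, cut:] = pool[i, cut:]
        pop = children
    best = np.argmax([1.0 / (eval_mse(ind) + 1e-12) for ind in pop])
    return pop[best]                                                    # Step 8: initial BPNN weights
```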
Figure 2. The Scheme of Hybrid BPNN-GAs Algorithm
4. Results and Analysis
We conducted experiments with the standard BPNN and the BPNN-GAs to obtain the fault diagnosis of the bearing system. In
the BPNN-GAs approach, we used a BPNN with the topology of 30 input neurons, 30 neurons in one hidden layer and 16
output neurons. The GA parameters were set as follows: 100 chromosomes per population, 1380 genes per chromosome,
a crossover rate of 0.6 and a maximum of 200 generations.
For the standard BPNN we use three topologies: (1) 30 input neurons, 30 neurons in the first hidden layer and 16 output
neurons; (2) 30 input neurons, 30 neurons in the first hidden layer, 30 neurons in the second hidden layer and 16 output
neurons; and (3) 30 input neurons, 30 neurons in each of the first, second and third hidden layers and 16 output neurons.
We refer to $m\text{-}l_1\text{-}l_2\text{-}l_3\text{-}n$ as a BPNN with $m$ input neurons, $l_1$ neurons in the first hidden layer, $l_2$ neurons in the second hidden layer,
$l_3$ neurons in the third hidden layer and $n$ output neurons.
BPNN and BPNN-GAs are implemented in MATLAB to obtain the desired classification of bearing system condi-
tion and performed on a computer with Intel Core 2 Quad processor Q8200, 2.33 GHz and 1.96 GHz and RAM 3.46
GB. The accuracy of classification is calculated using the following equation:
$\text{classification accuracy} = \dfrac{\text{total true output class}}{\text{total output}} \times 100\%$   (5)
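A one-line sketch of Eq. (5), assuming the confusion matrix is available as a square array with the correctly classified samples on the diagonal:

```python
import numpy as np

def classification_accuracy(cm):
    # Eq. (5): correctly classified samples (diagonal) over all samples, in percent
    cm = np.asarray(cm)
    return 100.0 * np.trace(cm) / cm.sum()
```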
The classification accuracy is shown in a confusion matrix, whose diagonal represents agreement between the true output
class and the target class. Figure 3 presents the confusion matrices of the best results of the standard BPNN and the hybrid
BPNN-GAs. Both approaches achieve 93.3% classification accuracy; however, the standard BPNN needs
100,000 iterations whereas the BPNN-GAs needs only 15,000 iterations. The comparison between the standard BPNN and
the BPNN-GAs is given in Table 2:
Table 2. Comparison of performance between the standard BPNN and the hybrid BPNN-GAs approach in fault diagnosis

Method                Iteration   Training                          Testing
                                  Class. accuracy   CPU time (s)    Class. accuracy   CPU time (s)
BPNN 30-30-16         100000      85.8%             6289.6          60.3%             0.053
BPNN 30-30-30-16      100000      93.3%             7261.8          71.6%             0.047
BPNN 30-30-30-30-16   61542       88.8%             3991.8          72.2%             0.066
BPNN-GA               2000        85.8%             657.9           80.0%             0.034
BPNN-GA               5000        89.6%             645.5           87.5%             0.034
BPNN-GA               15000       93.3%             765.2           86.3%             0.034
Figure 3. Confusion Matrix of BPNN-GAs (A) and Standard BPNN (B)
5. Conclusion
This paper presented a hybrid BPNN-GAs approach to diagnosing the condition of a bearing system. The results show that
the BPNN-GAs gives better results than the standard BPNN in bearing system diagnosis: the hybrid BPNN-GAs
achieves higher classification accuracy in fewer iterations and with shorter CPU time than the standard BPNN.
6. Acknowledgements
The authors thank Universiti Teknologi Malaysia (UTM) and the Ministry of Higher Education (MOHE) for Research
University Grant (RUG) Vote No. Q.J. 130000.7128.00J96 and the Research Management Center (RMC) - UTM for
supporting this research project. The first author sincerely thanks UTM for awarding the International Doctoral Fellowship (IDF).
7. References
[1] A. Harnoy, Bearing Design in Machinery : Engineering Tribology and Lubrication. 2003: Marcel Dekker, Inc.
[2] P.V. J. Rodriguez and A. Arkkio, Detection of Stator Winding Fault in Induction Motor Using Fuzzy Logic. Applied Soft
Computing 8. 2008: p. 1112-1120.
[3] N. Saravanan, V. N. S. K. Siddabattuni, and K. I. Ramachandran, Fault Diagnosis of Spur Bevel Gear Box Using Artificial
Neural Network (ANN), and Proximal Support Vector Machine (PSVM). Applied Soft Computing 10. 2010: p. 344-360.
[4] Y. Lei, Z. He, and Y. Zi, Application of An Intelligent Classification Method to Mechanical Fault Diagnosis. Expert Systems
with Applications 36. 2009: p. 9941-9948.
[5] H. Wang and P. Chen, Fault Diagnosis for A Rolling Bearing Used in A Reciprocating Machine by Adaptive Filtering
Technique and Fuzzy Neural Network. WSEAS TRANSACTIONS on SYSTEMS Issue 1, Vol. 7. 2008: p. 1-6.
[6] Y. C. Huang, C. M. Huang, H. C. Sun, and L. S. Liao, Fault Diagnosis Using Hybrid Artificial Intelligent Methods. 5th IEEE
Conference on Industrial Electronics and Applications. 2010: p. 41-44.
[7] W. Li, T. Shi, G. Liao, and S. Yang, Feature Extraction and Classification of Gear Faults Using Principal Component Analysis.
Journal of Quality in Maintenance Engineering Vol. 9 No.2. 2003: p. 132-143.
[8] J. Tetteh, E. Metcalfe, and S. L. Howells, Optimisation of radial basis and backpropagation neural networks for modelling
auto-ignition temperature by quantitative-structure property relationship. Chemometrics and intelligent laboratory systems 32.
1996: p. 177-191.
[9] J. Rafiee, F. Arvani, A. Harifi, and M. H. Sadeghi, Intelligent condition monitoring of a gearbox using artificial neural network.
Mechanical systems and signal processing 21. 2007: p. 1746-1754.
[10] J. L. Tang, Q. R. Cai, and Y. J. Liu, Gear Fault Diagnosis with Neural Network based on Niche Genetic Algorithm
International Conference on Machine Vision and Human-machine Interface. 2010: p. 596-599.
[11] S. Rajasekaran and G. A.V. Pai, Neural networks, fuzzy logic and genetic algorithms: synthesis and applications. 2007: New
Delhi, II : Prentice-Hall of India.
[12] Y.J. Cao and Q. H. Wu, Teaching Genetic Algorithm Using MATLAB. Int. J. Elect. Enging. Educ., Vol.36. 1999: p. 139-153.



Proceeding of Industrial Engineering and Service Science, 2011, September, 20-21
Copyright 2011 IESS.
The Effects of Trade Location: The Case of Dual
Listing Telkom in NYSE and IDX
SitiArfiah Arifin*, Deddy P. Koesrindartoto**

School of Business and Management, Institut Teknologi Bandung, Indonesia
*siti.arfiah@sbm-itb.ac.id, **deddy.pri@sbm-itb.ac.id
ABSTRACT
Although some stocks in some sectors are affected in the short term by irrational behavior, the stock market as a whole follows
fundamental laws grounded in economic growth and returns on investment. This is in line with the classical finance
paradigm, which predicts that an asset's price is unaffected by its location of trade and other such factors. In the second half of the
1990s, the S&P 500 Index more than tripled in value to an all-time high of almost 1,500. Stocks such as Amazon and
AOL became stock market superstars. Then the market crashed, many stars flickered out, and people began to
question whether classical finance theories could really explain such dramatic swings in share prices. The basic
point of view of this essay is that the relative price of a stock is correlated with the relative stock market indexes of the countries
where the stock is traded most actively. The essay hypothesizes that the single stock of Telkom that is most intensively
traded on a given market will co-move excessively with that market's return and currency. The co-movement of the stock
price is measured through a regression of the stock log return differential on the Indonesian and U.S. market index log
returns plus the relevant log currency changes. Finally, the essay provides an example in which the location of trade and
ownership appears to influence prices; a similar sort of phenomenon occurs with closed-end country funds.

Key words: Telkom, Trade Location, Market Index, Currency Change

1. Introduction
The classical finance paradigm predicts that an asset's price is unaffected by its location of trade. This holds when
international financial markets are perfectly integrated: a given set of risky cash flows then has the same value and risk
characteristics however its trade is redistributed across markets and investors [2].
However, when markets crash and many stock-market stars flicker out, people begin to question whether
classical finance theories can really explain such dramatic moves in stock prices [1]. Some would even assert that
stock markets lead lives of their own, detached from the basics of economic growth and business profitability. Although
some stocks in some sectors are affected in the short term by irrational behavior, the stock market as a whole follows
fundamental laws, grounded in economic growth and returns on investment [3].
This essay provides an example in which the location of trade and ownership appears to influence prices. It shows that
the stock price of the biggest state telecommunication company in Indonesia, Telkom, is influenced by the location factor.
This appears to be a general condition, since it has also been shown for the stock prices of three of the world's largest and most
liquid multinational companies [2]. Furthermore, the main contribution of the essay is to show that the relative price of the
stock is correlated with the relative stock market indexes of the countries where the stock is traded most actively.
Specifically, it tests whether location matters by examining a single company stock whose charter fixes the division of past and
current equity cash flow.
The stock of Telkom provides a clear example of co-movement for several reasons. First, the single stock examined
in this paper is one of the pioneers traded abroad as well as in Indonesia. Second, Telkom stock is
majority-owned by the government of Indonesia, which can influence the national market index directly. Third, the stock
is traded on world stock exchanges, and many investors can purchase it locally.
The objective of this essay is to show that the relative price of the stock is correlated with the relative stock market
indexes of the countries where the stock is traded most actively. For example, when the Indonesian market moves relative
to the U.S. market, the price of Telkom (which trades relatively more in Jakarta) tends to move relative to the price in
New York. Similarly, when the rupiah appreciates against the dollar, the price of Telkom in Jakarta tends to move relative
to the price in New York. A similar sort of phenomenon occurs with closed-end country funds: closed-end fund share
prices co-move most strongly with the stock market on which they trade, while net asset values co-move most
strongly with their local stock markets [2].
Finally, the rest of the essay is organized as follows. Section 2 describes the company profile of Telkom
and its stock. Section 3 presents the empirical hypotheses and tests of the single stock price. Section 4 discusses the data
sources. Section 5 shows the findings and results on co-movement due to the trade location. Section 6 offers the
conclusions of the paper.
2. Company Profile
PT Telekomunikasi Indonesia, Tbk, or Telkom, as an operator of TIME (Telecommunication, Information, Multimedia, and
Edutainment) businesses, provides telecommunication services in the form of telephone (fixed wire line, fixed
wireless, and cellular), data and internet, network services and interconnection, and content/applications. As of December 31,
2009, the number of subscribers had grown by 21.2% compared to the previous year, to 105.1 million. For telephone services alone,
TELKOM serves 8.4 million fixed wire line subscribers, 15.1 million fixed wireless subscribers, and 81.6 million
cellular phone subscribers [8].
As of December 31, 2009, the Indonesian Government owned 52.4% of TELKOM's common shares and public shareholders
owned the rest (47.53%). TELKOM shares are traded on the Indonesia Stock Exchange (IDX) and the New York Stock Exchange
(NYSE), and on the London Stock Exchange (LSE) and Tokyo Stock Exchange (TSE), in both cases in the form of a Public
Offering Without Listing (POWL) [8].
Figure 1. Sequence plot of the log returns of Telkom traded on the NYSE and the IDX
3. Empirical Hypotheses and Tests
This essay hypothesizes that a stock that is most intensively traded on a given market will co-move excessively with that
market's return and currency. The null hypothesis is that the relative stock price should be uncorrelated with everything;
the alternative hypothesis is that markets are segmented, so that relative market shocks explain movements in the
price differential.
The co-movement of the stock price is measured by regressing the stock log return differential on the Indonesian and U.S.
market index log returns plus the relevant log currency changes:

$r_{Telkom(NYSE-IDX),t} = \alpha + \sum_{i=-1}^{1}\beta_i\, r_{NYSE,t+i} + \sum_{j=-1}^{1}\gamma_j\, r_{S\&P500,t+j} + \sum_{k=-1}^{1}\delta_k\, r_{IDX,t+k} + \sum_{l=-1}^{1}\lambda_l\, g_{\$/Rp,t+l} + \varepsilon_t$   (1)

Because of the cross-border aspects of the markets, currency changes are included alongside the local-currency stock
returns as market factors in Eq. (1). Under the null hypothesis all the slope coefficients are zero; under the
alternative hypothesis, the more a stock trades on a given market, the higher its estimated slope.
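A sketch of how Eq. (1) could be estimated by ordinary least squares in Python is given below. The data frame and its column names are assumptions for illustration only; the essay does not specify an implementation.

```python
import pandas as pd
import statsmodels.api as sm

# df is assumed to hold monthly log returns with hypothetical columns:
# 'tlk' (Telkom on NYSE), 'tlkm' (Telkom on IDX), 'nyse', 'sp500', 'idx',
# and 'usd_idr' (log change of the dollar/rupiah exchange rate).
def estimate_eq1(df):
    y = df['tlk'] - df['tlkm']                      # return differential r_Telkom(NYSE-IDX),t
    X = pd.concat(
        {f'{col}_t{lag:+d}': df[col].shift(-lag)    # leads/lags i, j, k, l = -1, 0, +1
         for col in ['nyse', 'sp500', 'idx', 'usd_idr'] for lag in (-1, 0, 1)},
        axis=1)
    data = pd.concat([y.rename('diff'), X], axis=1).dropna()
    model = sm.OLS(data['diff'], sm.add_constant(data.drop(columns='diff'))).fit()
    return model    # model.summary() reports R^2, F and the coefficient p-values
```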
4. Data
Indonesian stock prices for Telkom (TLKM) are taken from the Jakarta Stock Exchange (IDX) [5], while the U.S. stock
prices for Telkom (TLK) are taken from the New York Stock Exchange (NYSE) [6]. In addition, the S&P 500 index is
used for the U.S. market return [7], together with the change of the US dollar/rupiah exchange rate [4]. The sample is
monthly, from January 1, 2005 to December 1, 2010, and all returns are expressed in log form.
Another important consideration is where returns are measured. The essay estimates the relative return on the stock
price by taking the difference of the log returns in the markets where the stock trades most actively, i.e. the returns of
Telkom in New York and in Jakarta. A final issue concerns the currency denomination of returns: Eq. (1) measures all
return variables in local currencies and then adds exchange rate changes as separate independent variables on the
right-hand side of the regression.
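A minimal sketch of the log-return transformation, assuming the monthly closing prices are held in a hypothetical pandas Series:

```python
import numpy as np
import pandas as pd

def monthly_log_returns(prices: pd.Series) -> pd.Series:
    """Log returns of a monthly price series (prices indexed by month)."""
    return np.log(prices).diff().dropna()
```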
5. Findings and Results
The first regression estimates Eq. (1) for the Telkom stock, regressing the log return differential on the Indonesian and
U.S. market index log returns plus the relevant log currency changes at time t. The resulting $R^2$ of 0.440 shows that
about 44% of the variation in the log return differential Tlk-Tlkm can be explained by the independent variables. The
Standard Error of the Estimate (SEE) is 0.013; this small value indicates that the regression model is worth using to
predict the log return Tlk-Tlkm. Moreover, the F statistic is 12.980 with significance 0.000, which indicates that the
regression model can be used to determine the log return Tlk-Tlkm, i.e. all of the independent variables together can
predict it. In the significance column (Sig.), the variables log return NYSE, log return S&P 500 and log return IDX all
have values above 0.05, so individually they do not affect the log return Tlk-Tlkm; only the variable log return
USD/IDR affects it, with a Sig. value of 0.000.
Table 1. Result of the regression at period times t-1, t, and t+1

Specification  Return period   R^2     SEE     F        Sig.    Sig. NYSE   Sig. S&P 500   Sig. IDX   Sig. g $/Rp
2005-2010      t-1             0.438   0.014   13.055   .000a   .204        .703           .193       .000
2005-2010      t               0.440   0.013   12.980   .000a   .177        .694           .215       .000
2005-2010      t+1             0.580   0.013   6.456    .000a   .489        .403           .588       .467

Because the individual results in Table 1 are not very significant, another variable is added to Eq. (1) in order to
obtain a better result, and the regression is run again. The second multiple regression gives an $R^2$ of 0.580, so the
correlation between the log return Tlk-Tlkm and the other variables is strong. The F test (6.456) and its significance
(.000) show that the joint exposure to the independent variables is significant. Moreover, the variables log return
USD/IDR at time t (Sig. .000) and log return USD/IDR at time t-1 (Sig. .000) affect the log return Tlk-Tlkm significantly.
The results show that the log return USD/IDR at time t and at time t-1 are the variables that most affect the log return
Tlk-Tlkm. The new estimation of the log return differential Tlk-Tlkm on the other variables is:

$r_{Telkom(NYSE-IDX),t} = \alpha + \beta\, r_{NYSE,t} + \gamma\, r_{S\&P500,t} + \delta\, r_{IDX,t} + \lambda_1\, g_{\$/Rp,t-1} + \lambda_2\, g_{\$/Rp,t} + \varepsilon_{Telkom(NYSE-IDX),t}$   (2)

Table 2. Result of the next regression at period times t-1 and t

Specification  Return period   R^2     DW     F        Sig.    B (β) NYSE   B (β) S&P 500   B (β) IDX   B (β) g $/Rp
2005-2010      t-1             0.438   2.80   13.055   .000a   .140         .001            -.096       .623
2005-2010      t               0.440   2.78   12.980   .000a   .152         .001            -.092       .614
The results in Table 1 and Table 2 reject the perfect-integration hypothesis of the classical finance paradigm. The signs of
virtually all coefficients line up with the alternative hypothesis, and all are significantly different from zero at the 1 percent
level. In Table 2, for example, the period-t regression of the log return Tlk-Tlkm differential yields coefficients of
about 0.152 on the NYSE, 0.001 on the S&P 500 and -0.092 on the Indonesian index (IDX). The coefficients on the
exchange rate changes are also large: 0.623 at return period t-1 and 0.614 at period t for the dollar/rupiah rate. A 1
percent appreciation of the dollar against the rupiah therefore influences the relative price of Telkom stock by about 60
basis points. The $R^2$ values in Table 2 are quite high, about 44 percent for the returns of period t-1 and period t.
6. Conclusions
This essay presents evidence that the stock price of Telkom is affected by the location of trade, especially through changes
in the currency of the place where the stock is traded: the NYSE in the United States of America and the IDX in the
Republic of Indonesia. The location of trade therefore appears to matter for pricing. Co-movement between the return
price differentials and the market indexes is present as well. A similar result holds for twin stocks, which have nearly
identical cash flows yet move more in line with the markets where they trade most intensively than they should; the
co-movements between the price differentials and market indexes are present at both short and long horizons [2].
Moreover, this essay discusses possible sources of this result. The first explanation is noise: irrational traders create
market-wide noise shocks, which affect locally traded stocks more than foreign-traded stocks; the main problem with
this explanation is that the source of noise or persistent irrationality is difficult to identify. The second possible source is
institutional inefficiencies, because the stock might be classified as a domestic stock; such classification is needed in
practice and could also help resolve informational asymmetries and agency problems in the investment process. Third,
the change could stem from tax-induced investor heterogeneity, although this remains an incomplete explanation of
investor behavior. Finally, future research on other local stocks from the Republic of Indonesia that trade abroad is
important, to establish whether there are anomalies in how these stocks trade, especially whether they are correlated with
the relative stock market indexes of the countries where they are traded most actively and with the currency changes
between the countries.
7. References
[1] Chopra, N. et al, 1993. Yes, discounts on closed-end funds are a sentiment index, Journal of Finance, Vol.48, pp 801-8.
[2] Froot, K.A., E. Dabora, 199., How are stock prices affected by the location of trade. Working Paper no. 6572. National Bureau
of Economic Research, Cambridge, MA.
[3] Hardouvelis, G., R. La Porta, T. Wizman, 1995. What moves the discount on country equity funds. Working Paper no. 4571.
National Bureau of Economic Research, Cambridge, MA.
[4] Historical Exchange Rates, 2011. http://www.oanda.com/currency/historical-rates/
[5] Index Jakarta, 2011. http://finance.yahoo.com/q?s=^jkse&ql=1
[6] Index New York, 2011. http://finance.yahoo.com/q?s=^NYA&ql=0
[7] Index S&P 500, 2011. http://finance.yahoo.com/q?s=^GSPC&ql=0
[8] Info Perusahaan, 2011. http://www.telkom.co.id/info-perusahaan/
Proceeding of Industrial Engineering and Service Science, 2011, September 20-21
Copyright 2011 IESS.
A Scheduling Model for Production System
Considering Material Handling Operations
Dwi Kurniawan*, Rispianda, Isa Setiasyah Toha

Industrial Engineering Department, Institut Teknologi Nasional, Bandung, Indonesia
*E-mail: dwikur77@gmail.com
ABSTRACT
In most techniques, production scheduling considers only production machines as resources. These techniques generally
do not consider material handling operations, assuming that the material handling equipment is always available
and that the handling time can be ignored. In systems with significant material handling time, ignoring the material
handling operations may lead to a material handling equipment allocation that does not fit the need for part transportation.
In this study, a scheduling model that considers material handling equipment as a resource is developed. The
model starts from the priority dispatching technique and adds the steps necessary to consider the material handling
equipment.

Keywords: Scheduling, Material Handling, Priority Dispatching Technique

1. Background
Scheduling is the time-based allocation of resources to perform a set of tasks. To determine this allocation, various
techniques have been developed using optimization and heuristic approaches. In the techniques currently available,
scheduling considers only production machinery as a resource [1]. These techniques do not consider the material handling
process, assuming that it can be done using whatever material handling equipment is available and that the handling
time can be ignored.
In certain production systems, such as the production system in Balai Yasa Jembatan Kereta Api (railway bridge
workshop), Bandung, the time required to perform material handling is a significant proportion of the total
processing time, so the material handling time in this type of production system cannot be ignored. The common reason
is the size or mass of the material, which forces most or all of the material handling to use special
equipment, so the material handling time cannot be ignored.
In production systems with a significant proportion of material handling time, the material handling equipment
should be regarded as a resource, like the production machinery, and should be considered in scheduling. Without
scheduling the material handling equipment, it is possible that at one moment work-in-process parts are waiting to be
transported from one station to another because all material handling equipment is in use, while at another moment all
material handling equipment is idle. Therefore, in this type of production system, scheduling needs to consider the
material handling time and equipment.
This research aims to:
1. Create a flow shop production scheduling model for systems with significant proportion of material handling time.
2. Apply the model to solve problems that occur in Balai Yasa Jembatan Kereta Api, Bandung.
2. Literature Review
There are some recent developments in scheduling considering material handling operations. Lei and Wang [2] consid-
ered the problem of cyclic scheduling of two hoists. Bilge and Ulusoy [3] exploited the interactions between the ma-
chine scheduling and the scheduling of the material handling system in an FMS by addressing them simultaneously.
Das and Spasovic [4] presented a straddle scheduling procedure that can be used by a terminal scheduler to control the
movement of straddle carriers. Khayat et al. [5] proposed an integrated formulation of the combined production and
material handling scheduling problems. Babiceanu et al. [6] presented a solution for scheduling material handling de-
vices in the cellular manufacturing environment using the holonic control approach. Finally, Anwar and Nagi [7] con-
sidered the simultaneous scheduling of material handling transporters (such as automatic guided vehicles or AGVs) and
manufacturing equipment (such as machines and work centers) in the production of complex assembled product.
This paper develops a scheduling model that considers material handling operations. The model is developed from
Baker's job shop scheduling [1], and the material handling considerations refer to the system designed by Apple [8].
3. Model Development
3.1 Problem Modelling
Problem of scheduling production machines and material handling equipments will be developed gradually. The
problem is developed from simple to complex in several stages:
1. Scheduling one production machine and one material handling equipment.
2. Scheduling m production machines and one material handling equipment.
3. Scheduling m production machines and h independent material handling equipments.
4. Scheduling m production machines and h dependent material handling equipments.
A. Scheduling one production machine and one material handling equipment
An example of scheduling one production machine and one material handling equipment is shown in Figure 1. All
products are processed by one machine and supported by one material handling equipment, but each has a different
number of repetitions and different operation times. Problems like this have the general form shown in Figure 2. The
routing of this problem is shown in Table 1.
Handling Boring Handling Boring Handling Boring Handling Boring Handling

Figure 1: An example of scheduling one material handling equipment and one machine










Figure 2: An example of scheduling one machine and one material handling
Table 1. Routing of problem in Figure 2

Job (i)    Operation (j)
           1    2    3    ...   2p   2p+1
1          H    M    H    ...   M    H
...        H    M    H    ...   M    H
n          H    M    H    ...   M    H
M: machine; H: material handling
B. Scheduling m production machines and one material handling equipment
Examples of scheduling m production machines and one material handling equipment can be viewed in Figure 3. All
jobs are processed by some machines and transported by one material handling equipment, with different sequence and
operation time. The problem has a common model as shown in Figure 4. The routing of this problem is shown in Table
2.
Handling Boring Handling Lathe Handling Sewing Handling Painting Handling

Figure 3: Examples of scheduling m machines and one material handling

Figure 4: General form of scheduling m machines and one material handling equipment
Table 2. Routing of problem in Figure 4

Job (i)    Operation (j)
           1    2     3    ...   2k    2k+1
1          H    Mij   H    ...   Mij   H
...        H    Mij   H    ...   Mij   H
n          H    Mij   H    ...   Mij   H
Mij: machine used in job i operation j; H: material handling
C. Scheduling m production machines and h independent material handling equipments
Examples of scheduling m production machines and h independent material handling equipments can be viewed in
Figure 5. All jobs are processed by some machines and transported by some material handling equipments, with
different sequence and operation time. The independent term means that the material handling equipments transport
the jobs without any dependence or collaborative action with other. The problem has a common model as shown in
Figure 6. The routing of this problem is shown in Table 3.
Forklift Boring Crane Lathe Crane Sewing Crane Painting Forklift


Figure 5: Examples of scheduling m machines and h independent material handling















Figure 6: General form of scheduling m machines and h independent material handling equipments
Table 3. Routing of problem in Figure 6

Job (i)    Operation (j)
           1     2     3     ...   2k    2k+1
1          Hij   Mij   Hij   ...   Mij   Hij
...        Hij   Mij   Hij   ...   Mij   Hij
n          Hij   Mij   Hij   ...   Mij   Hij
Mij: machine used in job i operation j; Hij: material handling equipment used in job i operation j
D. Scheduling m production machines and h dependent material handling equipments
In the previous section, the material handling equipments were assumed to work independently and not to affect each other.
In a real system, several material handling equipments may depend on each other.
Interdependence between material handling equipments can occur under several conditions:
1. Selection of material handling equipment.
Generally, a machining or material handling operation has a designated machine or piece of material handling equipment.
However, for certain material handling operations the equipment to be used is not determined in advance
(several equipments are possible). If a material handling operation can be handled by more than one material handling
equipment, the equipment used is the one that can complete the operation earliest. The earliest completion of the operation depends on:
the ready time of each material handling equipment to be used;
the arrival time of each material handling equipment at the starting location of the move.
2. Simultaneous use of resources.
Generally, each operation, whether machining or material handling, uses only one resource. However, certain operations
may require more than one resource simultaneously. Examples are:
the simultaneous use of several material handling equipments: to lift a locomotive body weighing over 70 tons
from its wheels, two 36-ton overhead cranes are used simultaneously;
the simultaneous use of material handling equipment and a production machine: to drill a bar of a bridge
component, a drill machine is used, assisted by an overhead crane that holds the bar.
3. Shared path.
Material handling equipments may use a path simultaneously (or alternately). In a double-girder overhead crane system,
one crane path (axis of movement) is shared by two cranes; therefore, when a crane is to move from one
location to another, the other crane must not be in a location that will be passed through. For transport vehicles such
as forklifts and tractors, path dependencies can also occur on paths that are shared by several equipments. If
a piece of equipment is to be used but the required path is occupied by other equipment, it must wait until the path
it needs becomes available.
The general form and the routing of scheduling m production machines and h dependent material handling
equipments are the same as in the independent case, as shown in Figure 6 and Table 3.
3.2 Scheduling Algorithm
An algorithm is developed to solve the scheduling problems described in Section 3.1. The algorithm addresses the
problem of scheduling m production machines and h dependent material handling equipments, because this is the most
complex of the four problem types; the algorithm is therefore also applicable to the three simpler problems.
The scheduling process is based on the notations and steps of the Priority Dispatching Technique [1]. The notations
used in the model development are:
$PS_t$ = partial schedule consisting of t scheduled operations;
$S_t$ = operations ready to be scheduled at stage t;
$r_j$ = earliest time at which operation $j \in S_t$ can be started;
$c_j$ = earliest time at which operation $j \in S_t$ can be completed;
$D_{ij}$ = arrival time of job i in the j-th material handling operation;
$t_{ij}$ = total time of job i in the j-th material handling operation (including the arrival time).
The algorithm is described as follows.
Step 1. Set t = 0, $PS_0 = \emptyset$ and $S_0$ = the set of operations without predecessors.
Step 2. If there are material handling operations in $S_t$:
a. If more than one material handling equipment can be used, select the material handling equipment capable of completing the operation earliest.
b. Determine $r_j$, the time at which all required material handling equipments can be used, taking into account the use of paths by the equipments.
Step 3. For the material handling operations in $S_t$, determine the arrival time $D_{ij}$ based on the last location of the required material handling equipment. If several resources are used, the arrival time is the latest arrival time of all resources. Then determine $t_{ij}$ by adding $D_{ij}$ to the material handling time specified in the routing.
Step 4. Considering $r_j$, determine $c^* = \min_{j \in S_t} \{c_j\}$ and the resource $r^*$ on which $c^*$ would be realized.
Step 5. For each operation $j \in S_t$ that requires the resource $r^*$ and has $r_j < c^*$, select a priority using the following stages:
a. Prioritize operations using fewer resources.
b. Select an operation using a chosen priority rule.
Add the selected operation to $PS_t$ to obtain the partial schedule $PS_{t+1}$.
Step 6. For the partial schedule $PS_{t+1}$ obtained from Step 5, update the following data:
a. Remove the scheduled operation j from $S_t$.
b. Create $S_{t+1}$ by adding the operations that succeed the scheduled operation j.
c. Increase t by one.
Step 7. Return to Step 2 to review $PS_{t+1}$ and continue until all jobs are scheduled ($S_t = \{\}$).
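A schematic Python sketch of this dispatching loop is given below. It condenses Steps 2-5 into two user-supplied callbacks (ready_time and priority_key), so it is an outline of the technique under stated assumptions rather than the authors' full algorithm; the Operation record and its fields are illustrative.

```python
from dataclasses import dataclass

@dataclass(eq=False)   # identity-based hashing so operations can live in sets
class Operation:
    job: int
    index: int                   # position in the job's routing (odd = material handling)
    resources: tuple             # machines and/or material handling equipment required
    duration: float
    predecessor: "Operation | None" = None
    start: float = 0.0

def dispatch(operations, ready_time, priority_key):
    """Greedy priority-dispatching loop (an outline of Steps 1-7).

    ready_time(op, resource_free) must return the earliest start r_j of op,
    encapsulating the equipment choice, arrival times D_ij and shared-path
    checks of Steps 2-3. priority_key(op) is the tie-breaking rule of Step 5b.
    """
    resource_free = {r: 0.0 for op in operations for r in op.resources}
    ready = {op for op in operations if op.predecessor is None}         # Step 1
    schedule = []                                                       # partial schedule PS_t
    while ready:
        # Steps 4-5, condensed: pick the ready operation with the earliest
        # completion, preferring fewer resources, then the priority rule.
        op = min(ready, key=lambda o: (ready_time(o, resource_free) + o.duration,
                                       len(o.resources), priority_key(o)))
        op.start = ready_time(op, resource_free)
        finish = op.start + op.duration
        for r in op.resources:                                          # occupy all resources
            resource_free[r] = finish
        schedule.append(op)
        ready.remove(op)                                                # Step 6a
        ready |= {o for o in operations if o.predecessor is op}         # Step 6b
    return schedule                                                     # Step 7: loop until S_t is empty
```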
4. Model Application
An example problem is given below. Tables 4 and 5 show the original routing and processing times of four jobs,
without the material handling operations.
Table 4. Original routing

Job    Operation 1   Operation 2   Operation 3
1      1             2             3
2      2             1             3
3      3             2             1
4      2             1             3

Table 5. Original processing time

Job    Operation 1   Operation 2   Operation 3
1      4             3             2
2      1             4             4
3      3             2             3
4      3             3             1

Material handling operations in this system are performed as follows.
The material handling equipment consists of two forklifts (numbered 4_1 and 4_2) and two overhead cranes (numbered 5_1 and 5_2).
Transportation between two machines uses a crane, and transportation between storage and machines uses a forklift.
Operations on Machine 3 need the assistance of one crane, and the material handling operations of Job 4 require two
material handling equipments (forklifts or cranes) simultaneously.
Aisles in the plant are wide enough for two forklifts to pass each other. Meanwhile, the two cranes run on a single line, so
the position of each crane must be considered in the scheduling.
After including material handling operations, the job routing is updated as shown in Table 6. Further, Table 7
shows the transportation time between machines and storages in the plant.
Table 6. New routing

Job    Op. 1   Op. 2   Op. 3   Op. 4   Op. 5   Op. 6    Op. 7
1      4       1       5       2       5       3+5      4
2      4       2       5       1       5       3+5      4
3      4       3+5     5       2       5       1        4
4      4_12    2       5_12    1       5_12    3+5_12   4_12
Notes:
Shaded cells in the original are material handling operations (operations 1, 3, 5 and 7).
4 or 5: one material handling equipment is required.
4_12 or 5_12: two forklifts or two cranes are required.
3+5: Machine 3 is required, assisted by a crane.
3+5_12: Machine 3 is required, assisted by two cranes.

Table 7. Transportation time

From \ To              R. mat. storage   Machine 1   Machine 2   Machine 3   End pr. storage
Raw material storage   -                 3           3           4           2
Machine 1              3                 -           1           3           4
Machine 2              3                 1           -           2           3
Machine 3              4                 3           2           -           2
End product storage    2                 4           3           2           -
Forklift park          2                 3           4           5           2
Crane 1 park           -                 1           1           2           -
Crane 2 park           -                 2           1           1           -

Considering Table 6 and Table 7, and the machining time in Table 5, an updated processing time for both machin-
ing and material handling operations is then summarized in Table 8.
Table 8. Updated processing time
Job Operation
1 2 3 4 5 6 7
1 9 4 5 3 9 2 5
2 7 1 6 4 13 4 6
3 8 3 6 2 9 3 8
4 7 3 5 3 13 1 7
Note: Shaded cells are material handling operations

Applying the algorithm developed in Section 3, a schedule of machinery and material handling equipments is
shown in Figure 7.

Figure 7: Final schedule for the given problem
The Gantt chart in Figure 7 is important to analyze in order to understand how the model works. For example, consider Job 2
with the operation sequence 214, 222, 235, 241, 255, 26(35) and 274; recall that the odd-numbered operations are
material handling operations. The operation sequence of Job 2 therefore alternates between machining operations (lower
region) and material handling operations (upper region).
Furthermore, a two-resource operation can be seen in the notation 26(35) in the Gantt chart. The operation 26(35)
appears on Machine 3 and Crane 2, because the operation requires both resources. The same appearance occurs in
operations 32(35) and 16(35). Other two-resource operations with a different notation are 414_12, 435_12 and 455_12. Finally, a
three-resource operation occurs in 46(35_12), as can be seen in the Gantt chart.
5. Concluding Remarks
The model developed in this paper works properly and is applicable to production systems with a significant
proportion of material handling operations. The model works by combining machining operations and material
handling operations in one routing table and one processing time table. It can accommodate special conditions of
material handling operations, such as multi-resource operations, the locations of material handling equipment, the
simultaneous use of machinery and material handling equipment, and the routes of material handling equipment
movements. The model can also be extended to production systems using material handling equipment
such as Automatic Guided Vehicles (AGVs) and Robotic Guided Vehicles (RGVs).
6. References
[1] K. R. Baker, Introduction to Sequencing and Scheduling, John Wiley & Sons Ltd., 1974.
[2] L. Lei and T. Wang, The Minimum Common-Cycle Algorithm for Cyclic Scheduling of Two Material Handling Hoists with
Time Window Constraints, Management Science, Vol. 37, Issue 12, 1991, pp. 1629-1639.
[3] U. Bilge and G. Ulusoy, A Time Window Approach to Simultaneous Scheduling of Machines and Material Handling System
in an FMS, Operations Research, Vol. 43, Issue 6, 1995, pp. 1058-1070
[4] S. K. Das and L. Spasovic, Scheduling Material Handling Vehicles in A Container Terminal, Production Planning & Con-
trol: The Management of Operations, Vol. 14, Issue 7, 2003, pp. 623-633.
[5] G. E. Khayat, A. Langevin and D. Riope, Integrated Production and Material Handling Scheduling Using Mathematical Pro-
gramming and Constraint Programming, European Journal of Operational Research, Vol. 175, Issue 3, 2006, pp. 1818-1832.
[6] R. F. Babiceanu, F. F. Chen and R. H. Sturges, Real-Time Holonic Scheduling of Material Handling Operations in A Dynamic
Manufacturing Environment, Robotics and Computer-Integrated Manufacturing, Vol. 21, Issues 4-5, 2005, pp. 328-337.
[7] M. F. Anwar and R. Nagi, Integrated Scheduling of Material Handling and Manufacturing Activities For Just-In-Time Pro-
duction of Complex Assemblies, International Journal of Production Research, Vol. 36, Issue 3, 1998, pp. 653- 681.
[8] J. M. Apple, Material Handling Systems Design, The Ronald Press Company, New York, 1972.
Proceeding of Industrial Engineering and Service Science, 2011, September 20-21
Copyright 2011 IESS.
An Improved Fuzzy Number Ranking Method
Based on the Centroid-index
Shuo-Yan Chou, Vincent F. Yu
*
, Luu Quoc Dat

Department of Industrial Management, National Taiwan University of Science and Technology,43, Section 4, Keelung Road, Taipei
10607, Taiwan. E-mail: vincent@mail.ntust.edu.tw (V.F. Yu); Tel: +886-2-2737-6333; Fax: +886-2-2737-6344
ABSTRACT
Ranking fuzzy numbers plays a very important role in decision processes, data analysis and applications. Ranking
indices based on the centroids of fuzzy numbers are commonly used approaches for ranking fuzzy numbers. However,
there are some weaknesses associated with these indices. This paper reviews several fuzzy number ranking methods
based on centroid indices and proposes a new centroid-index ranking method which is capable of ranking various
types of fuzzy numbers effectively. A comparative example is presented to demonstrate the usage and advantages of the
proposed centroid-index ranking method for fuzzy numbers.

Keywords: Fuzzy Numbers, centroid index, centroid of fuzzy numbers, ranking.

1. Introduction
Ranking fuzzy numbers plays a very important role in decision making, optimization, and other usages. Ever since
Yager [1], who presented the centroid concept in the ranking method, numerous ranking techniques using the centroid
concept have been proposed and investigated [2-19]. Some of them have been compared and contrasted in Wang and
Lee [16] and more recently in Ramli and Mohamad [11].
Cheng [6] in 1998 used a centroid-based distance method to rank fuzzy numbers. For a trapezoidal fuzzy number
$A = (a, b, c, d; w)$, the distance index is defined as $R(A) = \sqrt{\bar{x}_A^2 + \bar{y}_A^2}$, with

$\bar{x}_A = \dfrac{\int_a^b x f_A^L(x)\,dx + \int_b^c x\,dx + \int_c^d x f_A^R(x)\,dx}{\int_a^b f_A^L(x)\,dx + \int_b^c dx + \int_c^d f_A^R(x)\,dx}$,

$\bar{y}_A = \dfrac{\int_0^1 y\, g_A^L(y)\,dy + \int_0^1 y\, g_A^R(y)\,dy}{\int_0^1 g_A^L(y)\,dy + \int_0^1 g_A^R(y)\,dy}$,

where $f_A^R$ and $f_A^L$ are the right and left membership functions of $A$, respectively, and $g_A^R$ and $g_A^L$ are the inverses of $f_A^R$ and $f_A^L$, respectively. The larger the value of $R(A)$, the better the ranking of $A$. Cheng [6] further proposed a coefficient of variation (CV) index that improves the concept of ranking fuzzy numbers, using the fuzzy mean and fuzzy spread as presented by Lee and Li [8].
Chu and Tsao [7] in 2002 found that the distance method and CV index proposed by Cheng [6] still have some
shortcomings. To overcome these problems, Chu and Tsao [7] proposed a new ranking index function $S(A) = \bar{x}_A\, \bar{y}_A$, where $\bar{x}_A$ is the same as in Cheng [6] and

$\bar{y}_A = \dfrac{\int_0^w y\, g_A^L(y)\,dy + \int_0^w y\, g_A^R(y)\,dy}{\int_0^w g_A^L(y)\,dy + \int_0^w g_A^R(y)\,dy}$.

The larger the value of $S(A)$, the better the ranking of $A$.

In some special cases, the method proposed by Chu and Tsao has the same shortcomings as Cheng's method [6]. The
shortcomings of Cheng's and Chu and Tsao's centroid indices are as follows. For fuzzy numbers $A, B, C$ and their images
$-A, -B, -C$, Cheng's centroid index $R = \sqrt{\bar{x}^2 + \bar{y}^2}$ yields the same results; that is, if $A < B < C$, then also
$-A < -B < -C$, which is clearly inconsistent with mathematical logic. For Chu and Tsao's centroid index $S = \bar{x}\,\bar{y}$,
if $\bar{x} = 0$ then the value of $S$ is a constant zero; in other words, fuzzy numbers with centroids $(0, y_1)$ and $(0, y_2)$,
$y_1 \neq y_2$, are considered the same, which is also obviously unreasonable.
Wang, Yang, Xu, and Chin [18] found that the centroid formulae proposed by Cheng [6] and Chu and Tsao [7] are
incorrect. Therefore, to avoid further misapplication, Wang, Yang, Xu, and Chin [18] presented the correct centroid
formulae as

$\bar{x}_A = \dfrac{\int_a^b x f_A^L(x)\,dx + \int_b^c x w\,dx + \int_c^d x f_A^R(x)\,dx}{\int_a^b f_A^L(x)\,dx + \int_b^c w\,dx + \int_c^d f_A^R(x)\,dx}$,

and

$\bar{y}_A = \dfrac{\int_0^w y\,[g_A^R(y) - g_A^L(y)]\,dy}{\int_0^w [g_A^R(y) - g_A^L(y)]\,dy}$.
The correct formula proposed by Wang, Yang, Xu, and Chin [18] is limited to trapezoidal fuzzy numbers
with invertible membership functions [11]. Shieh [13] presented a correct centroid formula that caters to both
invertible and non-invertible fuzzy numbers. The formula for the horizontal coordinate is similar to that of Wang, Yang, Xu, and
Chin [18], while the vertical coordinate is defined as

$\bar{y}_A = \dfrac{\int_0^w \alpha\, |A_\alpha|\,d\alpha}{\int_0^w |A_\alpha|\,d\alpha}$,

where $|A_\alpha|$ is the length of the $\alpha$-cut $A_\alpha$. In particular, for a trapezoidal fuzzy number $A = (a, b, c, d; w)$, the value of

$\bar{y}(A) = \dfrac{w}{3}\left[1 + \dfrac{c - b}{(d + c) - (a + b)}\right]$,

which coincides with the formula of Wang, Yang, Xu, and Chin [18].
To overcome the shortcomings of these existing fuzzy number ranking methods, this paper proposes a new
centroid-index ranking method based upon the centroid formulae of Wang, Yang, Xu, and Chin [18] and Shieh [13]. The paper
further presents a comparative example demonstrating the efficiency and advantages of the proposed centroid index.
2. Fuzzy numbers
There are various ways of defining fuzzy numbers. This paper defines the concept of fuzzy numbers as follows.
Definition 1. A real fuzzy number $A$ is described as any fuzzy subset of the real line $R$ with membership function $\mu_A(x)$ that can generally be defined as [20]:

$\mu_A(x) = \begin{cases} \mu_A^L(x), & a \le x \le b, \\ w, & b \le x \le c, \\ \mu_A^R(x), & c \le x \le d, \\ 0, & \text{otherwise}, \end{cases}$   (1)

where $a, b, c$ and $d$ are real numbers. Unless specified otherwise, it is assumed that $A$ is convex and bounded (i.e. $-\infty < a$, $d < \infty$), $\mu_A^L : [a, b] \to [0, w]$ is a monotonically increasing, right-continuous function, and $\mu_A^R : (c, d] \to [0, w]$ is a monotonically decreasing, left-continuous function. If $w = 1$, then $A$ is a normal fuzzy number; otherwise, it is said to be a non-normal fuzzy number. If the membership function $\mu_A(x)$ is piecewise linear and continuous, then $A$ is referred to as a trapezoidal fuzzy number and is usually denoted by $A = (a, b, c, d; w)$, or simply $A = (a, b, c, d)$ if $w = 1$.

Figure 1 is an illustration of the trapezoidal fuzzy number $A = (a, b, c, d; w)$. In this case,

$\mu_A^L(x) = \dfrac{w(x - a)}{b - a}, \; a \le x < b$, and $\mu_A^R(x) = \dfrac{w(x - d)}{c - d}, \; c < x \le d$.

In particular, when $b = c$, the trapezoidal fuzzy number reduces to a triangular fuzzy number and can be denoted by $A = (a, b, d; w)$, or $A = (a, b, d)$ if $w = 1$. Thus, triangular fuzzy numbers are special cases of trapezoidal fuzzy numbers.
Figure 1. Trapezoidal fuzzy number.

Definition 2. The $\alpha$-cut of a fuzzy number $A$ can be defined as [21]

$A_\alpha = \{x \,|\, \mu_A(x) \ge \alpha\}$, where $x \in R$ and $\alpha \in [0, 1]$.

The symbol $A_\alpha$ represents a non-empty bounded closed interval contained in $R$; its lower and upper bounds are denoted by $A_\alpha^l$ and $A_\alpha^u$, respectively.
3. Improved Ranking Method Based on the Centroid-index of Fuzzy Numbers
In this section the centroid point of a fuzzy number corresponds to an $\bar{x}$ value on the horizontal axis and a $\bar{y}$ value on the vertical axis. The centroid point $(\bar{x}, \bar{y})$ of a fuzzy number $A$ as in Definition 1 is defined as [13]:

$\bar{x}(A) = \dfrac{\int_{-\infty}^{\infty} x\, \mu_A(x)\,dx}{\int_{-\infty}^{\infty} \mu_A(x)\,dx}$,   (2)

$\bar{y}(A) = \dfrac{\int_0^w \alpha\, |A_\alpha|\,d\alpha}{\int_0^w |A_\alpha|\,d\alpha}$,   (3)

where $A$ is a fuzzy number with $w = \sup_{x \in R} \mu_A(x)$, $|A_\alpha|$ is the length of the $\alpha$-cut $A_\alpha$, $0 < \alpha \le w$, and $|A_\alpha| = A_\alpha^u - A_\alpha^l$. If $A$ is a crisp set with $\mu_A(x_0) = w$ and $\mu_A(x) = 0$ for $x \neq x_0$, then its centroid is taken at $x = x_0$.

For a trapezoidal fuzzy number $A = (a, b, c, d; w)$, the centroid point $(\bar{x}_A, \bar{y}_A)$ is defined as follows [13, 18]:

$\bar{x}_0(A) = \dfrac{1}{3}\left[a + b + c + d - \dfrac{dc - ab}{(d + c) - (a + b)}\right]$,   (4)

$\bar{y}_0(A) = \dfrac{w}{3}\left[1 + \dfrac{c - b}{(d + c) - (a + b)}\right]$.   (5)
Remark. It is clear that $\dfrac{w}{3} \le \bar{y}_0(A) < \dfrac{w}{2}$.

Proof.

$\bar{y}_0(A) = \dfrac{w}{3}\left[1 + \dfrac{c-b}{(d+c)-(a+b)}\right] \ge \dfrac{w}{3} \iff \dfrac{c-b}{(d+c)-(a+b)} \ge 0 \iff c \ge b$.   (6)

In the case of a triangular fuzzy number, $b = c$, so $\bar{y}_0(A) = w/3$.

$\bar{y}_0(A) = \dfrac{w}{3}\left[1 + \dfrac{c-b}{(d+c)-(a+b)}\right] < \dfrac{w}{2} \iff \dfrac{2(c-b)}{(d+c)-(a+b)} < 1 \iff \dfrac{(c-d)+(a-b)}{(d+c)-(a+b)} < 0 \iff c + a < b + d$.   (7)

Because $c + a \le c + b \le b + d$, (7) is satisfied.
The new centroid index is now proposed as follows. Suppose $A_1, A_2, \ldots, A_n$ are fuzzy numbers. First, we calculate the centroid points of all fuzzy numbers, $(\bar{x}_{A_i}, \bar{y}_{A_i})$, $i = 1, 2, \ldots, n$. We then define the minimum point $G = (x_{\min}, y_{\min})$, where

$x_{\min} = \inf S$, $S = \bigcup_{i=1}^{n} S_i$, $S_i = \{x \,|\, \mu_{A_i}(x) > 0\}$,

$y_{\min} = \inf Y$, $Y = \{w_1, w_2, \ldots, w_n\}$, $w_i = \sup_{x \in R} \mu_{A_i}(x)$,

i.e. $x_{\min}$ is the infimum of the union of the supports and $y_{\min}$ is the smallest of the heights of the fuzzy numbers. The distance between the centroid point $(\bar{x}_{A_i}, \bar{y}_{A_i})$, $i = 1, 2, \ldots, n$, and the minimum point $G = (x_{\min}, y_{\min})$ is then

$D(A_i, G) = \sqrt{(\bar{x}_{A_i} - x_{\min})^2 + (\bar{y}_{A_i} - y_{\min})^2}$.   (8)

If $A_i$ and $A_j$ are two fuzzy numbers, then the ranking is done as follows:
(1) $A_i < A_j \iff D(A_i, G) < D(A_j, G)$;
(2) $A_i > A_j \iff D(A_i, G) > D(A_j, G)$;
(3) $A_i \sim A_j \iff D(A_i, G) = D(A_j, G)$.
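A minimal Python sketch of Eqs. (4), (5) and (8) for trapezoidal (and, with b = c, triangular) fuzzy numbers is given below. The reading of $x_{\min}$ as the infimum of the supports and $y_{\min}$ as the smallest height follows the worked example of Section 4, so the helper functions below are illustrative assumptions rather than the authors' code.

```python
import math

def centroid(a, b, c, d, w=1.0):
    """Centroid of a trapezoidal fuzzy number (a, b, c, d; w), Eqs. (4)-(5).
    A triangular number (a, b, d; w) is the special case b == c."""
    denom = (d + c) - (a + b)
    x0 = (a + b + c + d - (d * c - a * b) / denom) / 3.0
    y0 = (w / 3.0) * (1.0 + (c - b) / denom)
    return x0, y0

def rank(fuzzy_numbers):
    """Rank fuzzy numbers (5-tuples (a, b, c, d, w)) by the proposed index, Eq. (8)."""
    pts = [centroid(*fn) for fn in fuzzy_numbers]
    x_min = min(fn[0] for fn in fuzzy_numbers)            # infimum of the supports
    y_min = min(fn[4] for fn in fuzzy_numbers)            # smallest height w_i
    dist = [math.hypot(x - x_min, y - y_min) for x, y in pts]
    return sorted(range(len(fuzzy_numbers)), key=lambda i: dist[i])   # ascending order

# Example fuzzy numbers of Section 4 (triangular ones written with b == c):
# A1 = (-2, -1, -1, 3, 1.0), A2 = (-2, -1, -1, 3, 0.8), A3 = (-3, -2, -1, 0, 1.0)
# rank([A1, A2, A3]) lists the indices from smallest to largest D(A_i, G).
```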
4. Numerical Example
This section uses a numerical example to illustrate the validity and advantages of the proposed centroid-index ranking
method, and demonstrates that it can rank a mix of fuzzy numbers.
Example. Consider a mix of normal and non-normal fuzzy numbers ranked with the proposed centroid index. The
normal triangular fuzzy number is $A_1 = (-2, -1, 3; 1)$, the non-normal triangular fuzzy number is $A_2 = (-2, -1, 3; 0.8)$,
and the non-normal trapezoidal fuzzy number is $A_3 = (-3, -2, -1, 0; 1)$. Figure 2 shows the three fuzzy numbers.
Table 1 presents the results obtained by applying Cheng's [6] centroid index, Chu and Tsao's [7] centroid index, and the
proposed centroid index (8). The final ranking obtained using (8) is $A_3 < A_2 < A_1$. It is worth mentioning that Chu and
Tsao's centroid index [7] cannot differentiate between $A_1$ and $A_2$; their rankings are always the same. On the other hand,
Cheng's [6] centroid index leads to the incorrect ranking order $A_2 < A_1 < A_3$. This example demonstrates one of the
advantages of the proposed centroid-index ranking method: it effectively ranks a mix of various types of fuzzy numbers.
Figure 2. Fuzzy numbers $A_1$, $A_2$ and $A_3$.
Table 1. Comparison between fuzzy numbers $A_1$, $A_2$, and $A_3$.

Fuzzy number   Centroid point ($\bar{x}_{A_i}$, $\bar{y}_{A_i}$)   Cheng's index $R$   Chu and Tsao's index $S$   Minimum point $G$ ($x_{\min}$, $y_{\min}$)   $D(A_i, G)$ by (8)
$A_1$          (0, 1/3)                                            0.3333              0                          (-3, 0.8)                                    3.0091
$A_2$          (0, 4/15)                                           0.2667              0                          (-3, 0.8)                                    3.0005
$A_3$          (-3/2, 7/18)                                        1.9                 1.5496                     (-3, 0.8)                                    1.5049
5. Conclusion
This paper proposes a new centroid-index method for ranking fuzzy numbers. The proposed formulae are simple and
have consistent expressions on the horizontal and vertical axes. Because the proposed centroid-index ranking
method is based on the centroid formulae of Wang, Yang, Xu, and Chin [18] and Shieh [13], it can be used to rank both
invertible and non-invertible fuzzy numbers. The paper presented a comparative example to illustrate the validity and
advantages of the proposed centroid-index ranking method. It shows that the ranking order obtained by the proposed
method is more consistent with human intuition than those obtained by existing methods. Furthermore, the proposed
ranking method can effectively rank a mix of various types of fuzzy numbers (invertible and non-invertible, normal,
non-normal, triangular, and trapezoidal), which is another advantage of the proposed method over other existing ranking
approaches.
References
[1] R. R. Yager, On a general class of fuzzy connectives, Fuzzy Sets and Systems, Vol. 4, No. 6, 1980, pp. 235-242.
[2] L. Abdullah and N. J. Jamal, Centroid-point of ranking fuzzy numbers and its application to health related quality of life indi-
cators, International Journal on Computer Science and Engineering, Vol. 02, No. 08, 2010, pp. 2773-2777.
[3] S. M. Chen and J. H. Chen, Fuzzy risk analysis based on ranking generalized fuzzy numbers with different heights and differ-
ent spreads, Expert Systems with Application, Vol. 36, 2009, pp. 6833-6842.
[4] S. J. Chen and S.M. Chen, A new method for handling multi-criteria fuzzy decision making problems using FN-IOWA opera-
tors, Cybernetics and Systems, Vol. 34, 2003, pp. 109-137.
[5] S. J. Chen and S. M. Chen, Fuzzy risk analysis based on the ranking of generalized trapezoidal fuzzy numbers, Applied Intel-
ligence, Vol. 26, 2007, pp. 1-11.
[6] C. H. Cheng, A new approach for ranking fuzzy numbers by distance method, Fuzzy Sets and Systems, Vol. 95, 1998, pp.
307-317.
[7] T. C. Chu and C. T. Tsao, Ranking fuzzy numbers with an area between the centroid point and original point, Computers &
Mathematics with Application, Vol. 43, 2002, pp. 111-117.
[8] E. S. Lee and R. L. Li, A method for ranking fuzzy numbers and its application to decision making, IEEE Transactions on
Fuzzy Systems, Vol. 7, No. 6, 1988, pp. 677-685.
[9] E. Mehdizadeh, Ranking of customer requirements using the fuzzy centroid-based method, International Journal of Quality
& Reliability Management, Vol. 27, No. 2, 2010, pp. 201-216.
[10] S. Murakami, H. Maeda, and S. Imamura, Fuzzy decision analysis on the development of centralized regional energy control
system, Proceedings of the IFAC Symposium, Marseille, 1983, pp. 363-368.
[11] N. Ramli and D. Mohamad, A comparative analysis of centroid methods in ranking fuzzy numbers, European Journal of
Science Research, Vol. 28, No. 3, 2009a, pp. 492-501.
[12] N. Ramli and D. Mohamad, A centroid-based performance evaluation using aggregated fuzzy numbers, Applied Mathemati-
cal Science, Vol. 3, No. 48, 2009b, pp. 2369-2381.
[13] B. S. Shieh, An approach to centroids of fuzzy numbers, International Journal of Fuzzy Systems, Vol. 9, No. 1, 2007, pp.
51-54.
[14] A. H. Vencheh and M. Allame, On the relation between a fuzzy number and its centroid, Computers and Mathematics with
Applications, Vol. 59, 2010, pp. 3578-3582.
[15] A. H. Vencheh and M. N. Mokhtarian, A new fuzzy MCDM approach based on centroid of fuzzy numbers, Expert Systems
with Application, Vol. 38, 2011, pp. 5226-5230.
[16] Y. J. Wang and H. S. Lee, The revised method of ranking fuzzy numbers with an area between the centroid and original
points, Computers and Mathematics with Applications, Vol. 55, No. 9, 2008, pp. 2033-2042.
[17] Y. M. Wang, Centroid defuzzification and the maximizing set and minimizing set ranking based on alpha level sets, Com-
puters & Industrial Engineering, Vol. 57, 2009, pp. 228-236.
[18] Y. M. Wang, J. B. Yang, D. L. Xu, and K. S. Chin, On centroids of fuzzy numbers, Fuzzy Sets and Systems, Vol. 157, 2006, pp.
919-926.
[19] Z. X. Wang, J. Li, S. L. Gao, The method for ranking fuzzy numbers based on the centroid index and the fuzziness degree,
Fuzzy Information and Engineering, Vol. 2, 2009, pp. 1335-1342.
[20] G. J. Klir and B. Yuan, Fuzzy Sets and Fuzzy Logic: Theory and Applications, Prentice Hall PTR, 1995.
[21] A. Kaufmann and M. M. Gupta, Introduction to Fuzzy Arithmetic: Theory and Application, Van Nostrand Reinhold, New York, 1991.






Proceeding of Industrial Engineering and Service Science, 2011, September 20-21
Copyright 2011 IESS.
Development of an Automatic Cruise Control
Simulator
Hendro Nurhadi

Mechanical Eng. Dept., Faculty of Industrial Technology, Institut Teknologi Sepuluh Nopember (ITS), Surabaya, INDONESIA
E-mail: hdnurhadi@me.its.ac.id
ABSTRACT
Intelligent transportation systems (ITS) address the need for automated vehicles. The automatic cruise
control (ACC) is a driver-assisting device, used to control the headway with respect to the vehicle in front, according to
a given control law. There are various kinds of ACC systems of different complexity. This paper presents a systematical
approach to develop an automatic cruise control simulator in order to assist design engineers in obtaining the suitable
control parameters for desired performance. In the proposed approach, the standard unified modelling language
(UML) is adopted to design the system. Simulation results show that the developed system successfully helps us to de-
sign the automatic cruise controller.

Keywords: UML, ACC, simulator design, ITS, driver-assisting device

1. Introduction
Intelligent transportation systems (ITS), formerly called intelligent vehicle-highway systems (IVHS), aim to improve
the efficiency of current transportation systems by applying modern technology. One part of the ITS program, the
automated highway system (AHS), promises to reduce traffic congestion and increase the safety, efficiency and capac-
ity of highway systems without building additional highways [1], [2]. It does this by adding intelligence to both the ve-
hicle and the roadside. In the automated vehicle, the automatic cruise control (ACC) is a driver-assisting device, used to
control the headway with respect to the vehicle in front, according to a given control law. There are various kinds of
ACC systems of different complexity [3]-[7]. In general, it should provide the basic functionality for keeping a constant
speed, typically during a long journey of highway driving. It allows the driver to use the default speed (e.g. 100 km/hr
in this paper) or accelerate to a desired speed, and then activate the system so as to make the car maintain this cruising
speed without driver interventions.
This paper develops an automatic cruise control simulator so that the design engineer can further use it to evaluate
and compare various designed ACC systems and further modify the control parameters to achieve the desired perform-
ance, such as the transient overshoot, response time, and steady state error. The unified modeling language (UML) is a
language for specifying, constructing, visualizing, and documenting the artifacts of a software-intensive system [8]. It
defines the notation and semantics for modeling systems using object-oriented concepts. In this paper, we design the
ACC simulator based on the object-oriented technology with UML.
Generally, the UML consists of nine main diagrams corresponding to standard static and dynamic aspects. The de-
signer can freely choose a subset of the diagrams and their order is not constrained in UML. Although UML does not
define the development process and how to do object-oriented analysis and design, a use-case driven, architec-
ture-centric, iterative, and incremental development process [8] is recommended by using UML, as shown in Fig. 1.
First, the use-case diagram in UML is modeled to capture the requirements in the functional analysis stage. Then, in the
static structural design stage, the class diagram is used to describe the static relationship of the system. Subsequently,
the state chart is constructed according to the above models to describe the dynamic behaviors. Finally, implementation of
the above models is performed by using the Java language. Each constructed model in Fig. 1 may be modified in an
iterative fashion, through a repeated cycle of analysis, design, and implementation, and then back to the beginning of
the cycle again (i.e. so-called round-trip engineering). In this paper, the ACC simulator is developed through this de-
velopment procedure.
The main goal of the system modeling, analysis, and design in previous stages is to provide standard models for
system implementation. Although the UML modeling is not restricted to any particular language for implementation,
we prefer Java as the target language due to its object-orientation, portability, safety, and built-in support for network-
ing and concurrency [9]. Java also possesses several features for real-time development [10]-[12]. During the imple-
mentation, this involves translating information from the multiple UML models and the Petri net into the code and database structure. The translation is not straightforward; however, there is a close correspondence between Java and UML, and a standard mapping between UML and Java is described in [13].
Figure 1. Systematic development procedure (round-trip engineering)
2. Development of the Automatic Cruise Control (ACC) simulator
In this section, the UML models will be used to design the ACC simulator. Then, Java language is adopted to imple-
ment the system. Note that the models described later in the remaining paper have been simplified for illustration pur-
poses.
2.1. System specification
The vehicle specification used in this paper is obtained from the Formosa Magnus, a domestic car produced by the Formosa Plastics Group in Taiwan. The cruise controller is based on the PID control scheme shown in Fig. 2. The major parameters of the vehicle dynamic model and the designed controller are listed in Table 1. The maximum adjustable output is limited in order to prevent rapid acceleration, avoid engine damage, and preserve passenger safety and comfort (the passenger encounters a bounded force of 0.44 G at the maximum adjustable output). The purpose of the simulation is to obtain suitable control parameters (P, I and D) that give the system acceptable performance.


Figure 2. Block diagram of PID control scheme
Table 1. System specification for simulated ACC system
Parameter                                   Value
For Vehicle
  Vehicle mass (including passengers)       1800 kg
  Maximum engine output                     2500 kg.m/s²
  Aerodynamic drag coef.                    0.5 kg/m
  Mechanical drag                           4 kg.m/s²
For Control
  Maximum cruising speed                    180 km/hr
  Maximum adjustable output                 800 kg.m/s² (0.44 G)
  Control param. 1: P                       N/A (kg/sec)
  Control param. 2: I                       N/A (kg/sec²)
  Control param. 3: D                       N/A (kg)
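As a companion to Fig. 2 and Table 1, the following minimal sketch (in Python, not the paper's Java implementation) shows one way to close the PID loop around a simple longitudinal model. The plant equation (mass times acceleration equals engine force minus quadratic aerodynamic drag minus a constant mechanical drag), the clamp at the 2500 kg.m/s² engine limit, the Euler time step, and the simple anti-windup rule are all assumptions made for illustration; the 800 kg.m/s² comfort bound from Table 1 is not modelled here.

```python
# Minimal sketch of the Fig. 2 loop under the assumptions stated above.
M = 1800.0       # vehicle mass including passengers [kg]   (Table 1)
C_AERO = 0.5     # aerodynamic drag coefficient [kg/m]      (Table 1)
F_MECH = 4.0     # mechanical drag, taken here as a constant force (assumption)
F_MAX = 2500.0   # maximum engine output [kg.m/s^2]         (Table 1)

def simulate(kp, ki, kd, v_set_kmh=100.0, t_end=60.0, dt=0.01):
    """Simulate PID speed control; returns lists of time [s] and speed [km/hr]."""
    v_set = v_set_kmh / 3.6            # set point in m/s
    v, integ = 0.0, 0.0
    prev_err = v_set - v
    ts, vs, t = [], [], 0.0
    while t <= t_end:
        err = v_set - v
        deriv = (err - prev_err) / dt
        force_unsat = kp * err + ki * integ + kd * deriv
        force = max(0.0, min(F_MAX, force_unsat))     # clamp to the engine limit
        if force == force_unsat:                      # simple anti-windup (assumption)
            integ += err * dt
        # assumed longitudinal model: m*dv/dt = F - c_aero*v^2 - F_mech
        v += (force - C_AERO * v * v - F_MECH) / M * dt
        prev_err = err
        ts.append(t)
        vs.append(v * 3.6)
        t += dt
    return ts, vs

if __name__ == "__main__":
    # gains as later reported in Table 2 (P = 5200, I = 4200, D = 1)
    t, v = simulate(5200.0, 4200.0, 1.0)
    print(f"speed after {t[-1]:.0f} s: {v[-1]:.1f} km/hr")
```

Running the sketch with the gains later reported in Table 2 produces a speed trace that can be inspected for overshoot and steady-state error, in the same spirit as the paper's simulator.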
2.2. Functional analysis with the Use-Case diagram
A use-case diagram is used to capture the basic functional requirements of the system. It consists of actors and use
cases. Actors, drawn as stick figures, represent users and other external systems that interact with the described system.
Use cases, drawn as ellipses, represent the scenarios of the system. A scenario is a sequence of steps describing an in-
teraction between a user and a system. Fig. 3 shows use cases for the automatic cruise control system, in which there are
2 actors and 8 use cases.
The actor, Driver, can perform use cases: Power ON/OFF, Pause/Resume, Stop, Set Speed, Default, Dial Up, and
Dial Down to manipulate the ACC system. The ACC status of the performed use cases will be displayed by using the
Display Status. After the driver powers on the system, he/she can use the Set Speed to set the cruising speed and further
use the extended use cases: Default, Dial Up, and Dial Down to adjust the cruising speed. In the cruising mode, the
driver may re-engage the manual control by using the Pause (in Pause/Resume) or Stop use cases. Then, the driver can
resume a previously set speed by using Resume (in Pause/Resume) after performing the Pause, or can set a new cruising
speed after performing the Stop.


Figure 3. Functional analysis with the use-case diagram
2.3. Static structural design with the class diagram
The class diagram is the main static structural analysis and design model for a system. It is developed through informa-
tion collected in the use-case diagram. A class diagram describes the types of objects in the system and the various
kinds of static relationships that exist among them. It also shows the attributes and operations of a class and the con-
straints that apply to the way objects are connected.
Fig. 4 represents the static structure and object relations of the ACC system. The Vehicle has the composition rela-
tion (represented as a black diamond) with the ThrottleActuator, VehicleDynamic, and SpeedSensor classes. The com-
position relation indicates that the composite is explicitly responsible for the creation and destruction of the contained
objects. Other relations in the diagram are associations, indicating loosely coupled classes that send messages to each
other in order to collaborate. The Driver can manually control the Vehicle directly through the ThrottleActuator or may
automatically control it by using the AutoCruiseCtrl with the ThrottleCtrl class through the ThrottleActuator. The SpeedSensor detects the speed of the VehicleDynamic, feeds it back to the ThrottleCtrl, and exports it to the AutoCruiseCtrl for display on the UserInterface.
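To make the composition and association relations of Fig. 4 concrete, the toy sketch below mirrors the class names in Python; the method names and bodies are illustrative assumptions, since the paper's actual implementation is a Java Applet.

```python
# Toy rendition of the Fig. 4 relations (class names from the paper; bodies assumed).
class ThrottleActuator:
    def apply(self, force: float) -> float:
        return force                      # forwards the commanded force

class VehicleDynamic:
    def __init__(self) -> None:
        self.speed = 0.0                  # current speed [m/s]

class SpeedSensor:
    def __init__(self, dynamic: VehicleDynamic) -> None:
        self._dynamic = dynamic
    def read(self) -> float:
        return self._dynamic.speed        # feedback for ThrottleCtrl / AutoCruiseCtrl

class Vehicle:
    """Composition (black diamonds): the Vehicle creates and owns its parts."""
    def __init__(self) -> None:
        self.actuator = ThrottleActuator()
        self.dynamic = VehicleDynamic()
        self.sensor = SpeedSensor(self.dynamic)

class ThrottleCtrl:
    def __init__(self, sensor: SpeedSensor, actuator: ThrottleActuator) -> None:
        self.sensor, self.actuator = sensor, actuator

class AutoCruiseCtrl:
    """Association: collaborates with a Vehicle it does not own."""
    def __init__(self, vehicle: Vehicle) -> None:
        self.ctrl = ThrottleCtrl(vehicle.sensor, vehicle.actuator)
```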
2.4. Dynamic behavioral analysis with statechart
The statechart in the UML is the main dynamic behavior analysis model for a system. Fig. 5 shows the simplified
statechart of the ACC system. The ACC system has two major states: ON (a super-state) and OFF. When the driver turns on the system power, the system enters the ON state, in which it displays the status continuously. Two main threads are processed concurrently in the ON state.
On one hand, the system initially stays in the Waiting state. When the driver sets the cruising speed, it transfers to the Keeping Speed state to maintain the vehicle speed as closely as possible to the cruising speed. If the driver then stops the ACC, it transfers back to the Waiting state; if he pauses the ACC, it changes to the Pausing state and awaits the resume command. On the other hand, the ACC stays in the Setting Cruise Speed state and processes the default-set, dial-up and dial-down commands to change the cruising speed.
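The first thread of this behaviour can be captured as a small transition table. The event names below are taken from the prose; the concurrent Setting Cruise Speed thread and the Display Status activity are omitted, so this is only an illustrative sketch, not the paper's statechart implementation.

```python
# Transition table for the Waiting / Keeping Speed / Pausing thread (assumed event names).
TRANSITIONS = {
    ("Waiting", "set_speed"): "KeepingSpeed",
    ("KeepingSpeed", "stop"): "Waiting",
    ("KeepingSpeed", "pause"): "Pausing",
    ("Pausing", "resume"): "KeepingSpeed",
}

def next_state(state: str, event: str) -> str:
    # events with no matching transition leave the state unchanged
    return TRANSITIONS.get((state, event), state)

assert next_state("Waiting", "set_speed") == "KeepingSpeed"
assert next_state("KeepingSpeed", "pause") == "Pausing"
assert next_state("Pausing", "resume") == "KeepingSpeed"
```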


Figure 4. Static structural design with the class diagram

Figure 5. Dynamic behavioral analysis with the statechart
2.5. Implementation with Java language
The system modeling and design developed in previous stages provide ACC models for implementation. The developed
graphical human/machine interface (HMI), shown in Fig. 6, is designed with a Java Applet. The human user can push
the buttons to issue commands and interact with the ACC system. Also, the status feedback is displayed on the HMI.
3. Simulation result
After simulating the ACC system with various control parameters (P, I and D), the suitable control parameters are ob-
tained, as shown in Table 2. The corresponding performance indices of such PID controller are also shown in the table.
It takes less than 26 seconds to accelerate from 0 km/hr to 100 km/hr without negatively impacting passenger safety and
comfort by using the ACC system. Furthermore, the resulting PID controller has good transient and steady-state responses, with less than 1.7% overshoot and 0.2% steady-state error, respectively.
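The performance indices quoted above can be computed from any simulated speed trace along the following lines; the precise definitions used by the paper (for example, the settling window for the steady-state error) are not stated, so those below are assumptions.

```python
# Sketch of how the Table 2 indices could be extracted from a speed trace (assumed definitions).
def performance(ts, vs_kmh, v_set_kmh=100.0, tail=5.0):
    """Return (response_time_s, overshoot_pct, steady_state_error_pct)."""
    # response time: first instant at which the set speed is reached (assumption)
    response = next((t for t, v in zip(ts, vs_kmh) if v >= v_set_kmh), None)
    overshoot = max(0.0, (max(vs_kmh) - v_set_kmh) / v_set_kmh * 100.0)
    # steady-state error: average of the last `tail` seconds of the trace (assumption)
    dt = ts[1] - ts[0]
    n_tail = max(1, int(tail / dt))
    sse = abs(sum(vs_kmh[-n_tail:]) / n_tail - v_set_kmh) / v_set_kmh * 100.0
    return response, overshoot, sse
```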
4. Discussion
The present work leads to the following discussions:
1. The design procedure in Fig. 1 is a round-trip engineering, in which all models may be developed in an itera-
tive and incremental way through a repeated cycle of analysis, design, implementation and test. This approach
admits the possibility of making some alterations, such as changing the requirements or discovering a flaw in
the original design.
Table 2. Simulation result
Item                        Value
Controller param. 1: P      5200 kg/sec
Controller param. 2: I      4200 kg/sec²
Controller param. 3: D      1 kg
Response time               < 26 sec (0-100 km/hr)
Transient overshoot         < 1.7 %
Steady-state error          < 0.2 %
2. The plug-in feature of the object-orientation helps achieve better modularity and further evaluate various ACC
systems. For example, after we set up the scenario with vehicles, ACC, and other components, we may want to
run the same simulation with different ACC models (or with other vehicle models) in order to make comparisons. From the class diagram in Fig. 4, we can easily change and plug in a new AutoCruiseCtrl object (or Vehicle object) to run other simulations without significantly changing the other components.

Figure 6. The implemented ACC system
3. Since the UML is based on object-oriented concepts, reusable elements of the resulting models can be grouped into a design library so as to save time on similar designs.
5. Conclusion
This paper presents an object-oriented approach to systematically design and implement the ACC by using the UML
and Java. First, the use-case diagram is adopted to describe the functionalities of the system. Then, the class diagram is
used to model the static structures, and the state chart is further applied to describe the dynamic behaviors of the system.
Finally, the implementation has been accomplished using the Java language with a Java Applet. The developed system has proven useful for obtaining the control parameters of the ACC system through simulation.
6. References
[1] P. Varaiya, Smart cars on smart roads: Problems of control, IEEE Trans. Automat. Contr., vol. 38, no. 2, pp. 195-207, 1993.
[2] J. S. Lee and P. L. Hsu, Statechart-based representation of hybrid controllers for vehicle automation, IEE Proc. Intelligent
Transport Systems, vol. 153, no. 4, pp. 253-258, Dec. 2006.
[3] Y. Zhang, E. B. Kosmatopoulos, P. A. Ioannou, and C. C. Chien, Autonomous intelligent cruise control using front and back
information for tight vehicle following maneuvers, IEEE Trans. Veh. Tech., vol. 48, no. 1, pp. 319-328, 1999.
[4] P. Li, L. Alvarez, and R. Horowitz, AHS safe control laws for platoon leaders, IEEE Trans. Contr. Syst. Tech., vol. 5, no. 6,
pp. 614-628, 1997.
[5] D. N. Godbole and J. Lygeros, Longitudinal control of the lead car of a platoon, IEEE Trans. Veh. Tech., vol. 43, no. 4, pp.
1125-1135, 1994.
[6] P. A. Ioannou, and C. C. Chien, Autonomous intelligent cruise control, IEEE Trans. Veh. Tech., vol. 42, no. 4, pp. 657-672,
1993.
[7] S. E. Shladover, Longitudinal control of automotive vehicles in close-formation platoons, ASME J. Dyn. Syst., Meas., Contr.,
vol. 113, pp. 231-241, 1991.
[8] G. Booch, J. Rumbaugh, and I. Jacobson, The Unified Modeling Language User Guide. Reading, MA: Addison-Wesley, 1999.
[9] E. Bertolissi and C. Preece, Java in real-time applications, IEEE Trans. Nuclear Science, vol. 45, no. 4, pp 1965-1972, 1998.
[10] Sun Microsystems, The Java Tutorials, December 2010. [Online]. Available: http://java.sun.com/docs/books/tutorial/
[11] K. Nilsen, Real-time programming with Java technologies, in Proc. IEEE Int. Symp. On Object-Oriented Real-Time Distri.
Comput., 2001, pp. 5-12.
[12] R. F. Mello and C. E. Moron, A Java real-time kernel, in Proc. IEEE Int. Conf. on Indu. Elec., vol. 2, 1999, pp. 728-734.
[13] J. Greenfield, Unified Modeling Language/Enterprise JavaBeans (UML/EJB) Mapping Specification, Rational Software
Corporation Document, May, 2001.


Proceeding of Industrial Engineering and Service Science, 2011, September 20-21
Copyright 2011 IESS.
Risks Analysis on Yield Curve of Indonesian
Sharia Mortgage Financing Versus Conventional
Home Loans: Utilizing Vasicek Approach
Sudarso Kaderi Wiryono¹; Barli Suryanta²; Oktofa Yudha Sudrajad³; Aulia Nurul Huda⁴; Ana Noveria⁵

Sub Interest Group of Business Risk and Finance, School of Business and Management ITB, Bandung, Indonesia
sudarso_kw@sbm-itb.ac.id¹, barli.suryanta@sbm-itb.ac.id², oktofa@sbm-itb.ac.id³, aulia.nurul@sbm-itb.ac.id⁴, ana.noveria@sbm-itb.ac.id⁵
ABSTRACT
Murabaha is one of the popular sharia banking instruments for mortgage financing, usually with a 15-year maximum duration. The ultimate competitor of sharia mortgage financing is conventional home loans. The concept of sharia mortgage financing differs from that of conventional home loans, including how crucial risks are managed and how yields are determined. Therefore, this study compares the yield curve of Indonesian sharia mortgage financing with that of conventional home loans. The comparison is conducted from a risk-analysis perspective. To construct the yield curves, this study utilizes the Vasicek approach to forecast the yield of each. The data are obtained from real sharia mortgage financing and conventional home loans with 15-year maturity. The contribution of this study is to show the substantive differences between Indonesian sharia mortgage financing and conventional home loans in terms of how risks are managed, as reflected in the yield curves.

Keywords: Murabaha, Vasicek model, Indonesian sharia mortgage financing yield curve, Conventional home loans
yield curve

1. Introduction
Islam prohibits Muslims from being involved with interest (riba), defined as any predetermined or fixed return on financial transactions, including both deposits and loans, regardless of the purpose for which such loans are made or how low the rate of interest charged is [1]. Meanwhile, debt financing is a trade-based financing that engages the related parties in the buying and selling of goods under sharia principles [2]. Murabaha is one scheme of trade-based financing, in which the bank buys what the merchant wants and then sells it to the customer later at an agreed price [3].
Ismail [2] noted that, as a trade-based contract, the total payment of a Murabaha contract is treated as an opportunity-cost concept related to present and future value, both of which are calculated using a rate of return. In sharia banks the rate of return is called an equivalent rate. An equivalent rate follows the distribution of profit sharing between the sharia bank and its depositors. In the Indonesian case, an eclectic range of equivalent rates is utilized in mortgage financing under the Murabaha scheme. The portion paid by customers at the specified maturities of the Murabaha contract can be referred to as the customer equivalent rate of mortgage financing.
Beyond the equivalent rate, there is another term that sharia banks always report in their revenue, namely the yield. The yield in sharia mortgage financing is the spread between the equivalent rate charged to the mortgage customer and the sharia deposit funding rate. With this notion of yield, this study seeks to compare the yield curves of Indonesian sharia mortgage financing and conventional home loans. This is important for observing both phenomena, especially within a risk-analysis framework.
2. Methodology
2.1. Vasicek Approach
Vasicek [4] proposed a model that avoids the certainty of negative yields and eliminates the need for a potentially infinitely large extension factor. Yields in sharia mortgage financing are usually positive, to avert loss, and large extension factors are minimized through prudent codes of practice. Sharia mortgage financing therefore conforms closely to the assumptions of the Vasicek approach. The original Vasicek approach can be written as [5]:

$dr = a(b - r)\,dt + \sigma\,dz$    (1)

where $r$ is the instantaneous short rate of interest, $a$ is the speed of mean reversion, $b$ is the long-term expected value of $r$, $\sigma$ is the instantaneous standard deviation of $r$, and $a$, $b$ and $\sigma$ are constants. Vasicek elaborates (1) into the price at time t of a zero-coupon bond that pays $1 at time T:

$P(t,T) = A(t,T)\, e^{-B(t,T)\, r(t)}$    (2)

In this equation r(t) is the value of r at time t,

$B(t,T) = \dfrac{1 - e^{-a(T-t)}}{a}$    (3)

Then,

$A(t,T) = \exp\!\left[ \dfrac{\bigl(B(t,T) - T + t\bigr)\bigl(a^{2}b - \sigma^{2}/2\bigr)}{a^{2}} - \dfrac{\sigma^{2}\, B(t,T)^{2}}{4a} \right]$    (4)

And the formula for the yield is

$R(t,T) = -\dfrac{1}{T-t}\ln A(t,T) + \dfrac{1}{T-t}\, B(t,T)\, r(t)$    (5)
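A small sketch of formulae (2)-(5) may help make the yield calculation concrete; the parameter values used in the example are illustrative only and are not the estimates used in this study.

```python
# Minimal sketch of the Vasicek zero-coupon yield R(t, t+tau) from formulae (3)-(5).
import math

def vasicek_yield(r0: float, tau: float, a: float, b: float, sigma: float) -> float:
    """Continuously compounded yield for maturity tau (years) under the Vasicek model."""
    B = (1.0 - math.exp(-a * tau)) / a
    A = math.exp((B - tau) * (a**2 * b - sigma**2 / 2.0) / a**2
                 - sigma**2 * B**2 / (4.0 * a))
    return -math.log(A) / tau + B * r0 / tau

if __name__ == "__main__":
    # illustrative parameters: a 6% short rate reverting towards an 8% long-run level
    for tau in range(1, 16):                       # maturities of 1..15 years
        y = vasicek_yield(r0=0.06, tau=tau, a=0.15, b=0.08, sigma=0.01)
        print(f"{tau:2d}y  {y:.4%}")
```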

Vasicek also expressed that r(t) is a stochastic process, subject to two requirements: first, r(t) is a continuous function of time, that is, it does not change value by instantaneous jumps; second, it is assumed that r(t) follows a Markov process. Under this assumption, the future development of the spot rate, given its present value, is independent of the past development that has led to the present level. One very critical assumption of the Vasicek model is thus made: the Markov property implies that the spot rate process is characterized by a single state variable, namely its current value. The probability distribution of the segment {r(τ), τ ≥ t} is thus completely determined by the value of r(t). In other words, the Vasicek model can be utilized to forecast the forward rate by knowing the spot rate, or current value.
Given the role of the Markov process in the Vasicek approach, this study adopts the following procedures:
- For sharia mortgage financing: first, use the Markov-process assumption to forecast the sharia banks' deposit rates out to a 15-year maturity; second, to do so, utilize the real sharia deposit rates for the 1-month, 2-month, 6-month and 12-month tenors; third, to obtain the yield calculation, apply the real sharia mortgage financing data to construct the sharia mortgage financing yield curve.
- For conventional home loans: first, as with sharia mortgage financing, use the Markov-process assumption to forecast the conventional banks' deposit rates out to a 15-year maturity; second, to conduct an appropriate forecast, take the real conventional banks' deposit rates for the 1-month, 2-month, 6-month and 12-month tenors; third, employ the real conventional home loan data to obtain the conventional home loans yield curve.
Note that the customers in both cases take a 15-year maturity, and the data were collected during March and April 2011 from the three largest state-owned Indonesian sharia banks and conventional banks by assets, i.e. Bank of Sharia BNI, Bank of Sharia Mandiri, Bank of Sharia BRI, Bank of BNI, Bank of Mandiri, and Bank of BRI.
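The paper does not detail how the Vasicek parameters are obtained from the observed 1-, 2-, 6- and 12-month deposit rates. The sketch below therefore simply fits r0, a, b and σ to those tenors by least squares and then extrapolates the deposit curve to 15 years, reading the financing yield as the spread over it; the rate values, the bounds, and the fitting method are all placeholders rather than the study's actual procedure.

```python
# Hedged sketch of one possible calibration-and-extrapolation workflow (not the paper's).
import numpy as np
from scipy.optimize import minimize

OBS_TAUS = np.array([1, 2, 6, 12]) / 12.0            # observed tenors in years
OBS_RATES = np.array([0.055, 0.056, 0.058, 0.060])   # placeholder deposit equivalent rates

def model_yields(params, taus):
    r0, a, b, sigma = params
    B = (1 - np.exp(-a * taus)) / a
    A = np.exp((B - taus) * (a**2 * b - sigma**2 / 2) / a**2 - sigma**2 * B**2 / (4 * a))
    return -np.log(A) / taus + B * r0 / taus

def sse(params):
    return float(np.sum((model_yields(params, OBS_TAUS) - OBS_RATES) ** 2))

fit = minimize(sse, x0=[0.055, 0.2, 0.08, 0.01], method="L-BFGS-B",
               bounds=[(0.0, 0.2), (1e-3, 2.0), (0.0, 0.3), (1e-4, 0.1)])

maturities = np.arange(1, 16, dtype=float)            # 1..15 years
deposit_curve = model_yields(fit.x, maturities)       # forecast deposit rates
financing_rate = 0.12                                 # placeholder equivalent financing rate
yield_spread = financing_rate - deposit_curve         # "yield" as defined in this paper
```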
3. Risk Analysis on Yield Curve: Sharia Mortgage Financing Versus Conventional Home
Loans


Figure 1. Yield curve of Sharia BNI mortgage financing by utilizing Vasicek approach

Figure 2. Yield curve of conventional BNI home loans by utilizing Vasicek approach

Figure 3. Yield curve of Sharia Mandiri mortgage financing by utilizing Vasicek approach
(Each of Figures 1-6 plots yield to maturity (YTM) against maturity from 0 to 15 years. The sharia figures compare the bank's equivalent mortgage-financing rate with its sharia deposit equivalent rate, while the conventional figures compare the bank's deposit rate with its home-loan rate.)
Figure 4. Yield curve of conventional Mandiri home loans by utilizing Vasicek approach

Figure 5. Yield curve of Sharia BRI mortgage financing by utilizing Vasicek approach

Figure 6. Yield curve of conventional BRI home loans by utilizing Vasicek approach
Figures 1, 3 and 5 show that Sharia BNI, Sharia Mandiri, and Sharia BRI mortgage financing adopt a positive yield curve, i.e. one in which long rates are substantially greater than short rates [6]. This contrasts with the conventional home loans of BNI and BRI, which display a normal yield curve that slopes gently upwards as maturity increases, all the way to the longest maturity. Because interest is prohibited by sharia principles, the yield curve of sharia banks is influenced by inflation, maturity, and operational factors. Inflation risk reflects the fact that sharia banks in this context expect inflationary pressures in the future. To anticipate this, sharia banks set a higher yield than conventional home loans in order to obtain an optimum return from mortgage financing. Maturity risk in sharia banks is linked to mark-up risk, because a 15-year sharia mortgage financing under the Murabaha scheme cannot be re-priced and swaps cannot be used to transfer the risk [7]. Regarding liquidity risk, even though the value of the mortgaged property may increase over time, as long as the contract has not been terminated the customer cannot sell to obtain an arbitrage gain on the mortgage unless all obligations to the sharia bank have been settled. This implies that the sharia mortgage asset is not liquid. As for operational risk, sharia banks still need to be institutionally developed [7], and the number of their operational branches is currently below that of conventional banks. The more operational branches a bank has, the more customers it can reach with loans and the more efficient its loan rates become. At present, sharia banks in Indonesia are relatively new and have few branches, so they need a high yield to compensate for their lower operational efficiency. Sharia banks do, however, carry low credit and market risk: from mortgage financing they enjoy a certain income, and the sharia mortgage asset can be treated as trusted collateral [7].
Figures 2 and 6 display normal yield curves for the conventional home loans of BNI and BRI. Compared with a positive yield curve, a normal yield curve provides yields at average levels, sloping gently upwards as maturity increases, all the way to the longest maturity [6]. Only conventional Mandiri shows a positive yield curve. In their business processes, conventional home loans depend heavily on the dynamics of interest rates. This study finds that the rates of BNI and BRI home loans are more efficient than those of sharia mortgage financing. To determine their home loan rates, conventional BNI and BRI weigh the key factors prudently, i.e. the deposit rates they offer on a routine schedule, the risk-free rate announced by the central bank, and expected future inflation. By attending to these they earn a comfortable credit spread; otherwise they would make a loss. This position is supported by a large number of operational branches in nearly all major cities in Indonesia, an established institutional base, and sound risk management, implying that their operational risk is low. Conventional home loans can also be regarded as liquid assets: a customer who already owns the house can sell it to a third party at any time when its price increases, as long as payments to the bank are running well. Another advantage of home loans is that customers can take out a second home loan using their first house as trusted collateral, and so on. BNI and BRI can easily bundle their mortgage-backed assets into marketable bonds and issue them to the primary market in order to receive significant funding. To minimize the credit and market risk of 15-year home loan maturities, both conventional BNI and BRI utilize credit default swap instruments. They can transfer credit and market risk through derivative instruments, through an efficient portfolio scheme, or through a combination of the two to tackle interest rate risk. It is clear that the normal yield curve of conventional BNI and BRI takes advantage of their business process with the customer: because their rates are competitive and efficient, both banks can deliver significantly more home loans to suitable customers.
The positively sloping yield curve of conventional Mandiri (see Figure 4) has a different character from those of BNI and BRI. This is interesting because of the similarity between Sharia Mandiri and conventional Mandiri. Mandiri's positive yield curve indicates that short-term interest rates are expected to rise, so longer yields should be higher than shorter ones [6]. As one of the leading banks in Indonesia, Mandiri has a wide and stable branch network throughout the country. Given the large size of this network, Mandiri can easily minimize the operational risk of its housing loans. To attain an optimum credit spread, Mandiri takes the country's future outlook into account by applying a positive yield curve in order to compensate for inflationary risk and maturity risk. Inflationary risk is usually stimulated by the constructive growth of a country. To counter inflationary risk plus maturity risk, Mandiri requires significant yields. Mandiri also uses derivative instruments and a portfolio strategy as a credit default swap mechanism to secure its home loans against the probability of default. Mandiri benefits from the liquidity of its mortgage bonds to collect substantial funding. Not only Mandiri but also its customers enjoy liquidity, just like the customers of BNI and BRI discussed above.
4. Conclusions
Sharia BNI, Sharia Mandiri, and Sharia BRI apply a positive yield curve to hedge inflation, maturity and operational risk. Sharia mortgage financing is not liquid, either for the sharia banks or for their customers, because of the Murabaha agreement. Mark-up risk is correlated with maturity: the longer the maturity, the greater the mark-up risk. Sharia banks do not provide alternatives for hedging mark-up risk because they do not allow credit default swap mechanisms, in particular established derivative instruments. However, they have low credit and market risk: sharia banks earn a certain income, and the sharia mortgage asset can be treated as trusted collateral. Conventional BNI and BRI exhibit a normal yield curve. They regard efficient rates as their weapon for inviting potential customers to take out home loans with them. They can cover operational risk easily thanks to their wide branch networks throughout Indonesia. Their housing loans are liquid and very beneficial both for the banks and for their customers. To minimize credit and market risk, they use derivative instruments to transfer risk (credit default swaps) and implement a portfolio strategy to manage interest rate risk. Conventional Mandiri differs slightly by adopting a positive yield curve, but its business process actually follows the same pattern as conventional BRI and BNI.
5. References
[1] J.C.Y. How, M.A.Karim, and P. Verhoeven, Islamic Financing and Bank Risks: The Case of Malaysia, Thunderbird Interna-
tional Business Review, vol. 47(1) 75-94, Wiley periodicals, Inc, 2005.
[2] R. Ismail, Assessing Moral Hazard Problem in Murabaha Financing, Journal of Islamic Economics, Banking and Finance, Vol. 5, No. 2, pp. 102-112, year of publication unknown.
[3] F. F. Ghannadian, Developing economy banking: The case of Islamic banks, International Journal of Social Economics, Vol. 31, No. 8, Emerald Group Publishing, 2004, pp. 740-752.
[4] O. Vasicek, An Equilibrium Characterization of the Term Structure, Journal of Financial Economics 5, North-Holland Publishing Company, 1977, pp. 177-188.
[5] J. C. Hull, Options, Futures, and Other Derivatives, Fourth Edition, Prentice Hall, Upper Saddle River, NJ 07458, 2000, p. 567.
[6] F. J. Fabozzi, Interest Rate, Term Structure, and Valuation Modeling, John Wiley and Sons, Inc., 2002, p. 74.
[7] T.K.B. Ahmad, Risk Management An Analysis of Issues in Islamic Financial Industry, IDB Islamic Research and Training
Institute, Occasional Paper No.5, Jeddah, 2001.

Proceeding of Industrial Engineering and Service Science, 2011, September 20-21
Copyright 2011 IESS.
Understanding the Potential Effects of Queue
Information on Visitors Behavior and the Factors
that Influence Their Decisions: Case Study at
Dufan Theme Park
I Putu Wisnu Saputra¹; Yos Sunitiyoso²

School of Business and Management, Institut Teknologi Bandung, Indonesia
putu.wisnu@sbm-itb.ac.id¹, yos.sunitiyoso@sbm-itb.ac.id²

ABSTRACT
In a theme park, information about queues and the availability of rides is expected to benefit visitors and thus increase the park's level of service. The information gives visitors options to choose which available rides in the theme park have shorter queues. To understand the potential effects of such information on visitors' choice behavior, a study was conducted at Dufan, the biggest theme park in South East Asia. The study uses a questionnaire survey involving over 200 respondents who visited Dufan during its peak days. The questionnaire data are analyzed using descriptive and statistical analysis. The study found that nearly half of respondents would move to another line when given information about the queue, and that information in the form of a digital board is the most preferred. Furthermore, the most influential factor affecting a visitor's decision of whether to stay or to move to another ride when having to queue at the current location is the distance between rides. Level of favorite is also an important factor: the more a visitor favors a ride, the more he or she wants to go to it. The study implies the need for such queue information, and its potential positive effect shows that it would add to customer satisfaction. People are given the choice to move or stay rather than just wondering when their turn to play will come. They can weigh the waiting times to find the best option: staying in the current line, moving to another ride, or leaving the queue and waiting outside while eating in a restaurant. Giving people various options would let them enjoy the amusement park more instead of just waiting in line.

Keywords: theme park, queue information, visitors behavior.

1. Introduction
People react differently to the information they receive. Their behavior is studied in this research in order to understand the factors that influence people's ride choice, or their movement from one ride to another. At the rides in Dufan there is no information that would answer customers' questions such as (a) how many people are standing in line with them, (b) how many minutes they have to wait to get onto the ride, and (c) whether there are other rides with shorter queues than the one they are standing in now. The answers to these questions would be very helpful to all visitors of Dufan in choosing the shortest queue to stand in. In this study, this information is named Queue Information.
Dufan has 25 rides in total, separated into eight different theme areas (see Figure 1). These rides have different characteristics, and whether a ride is exciting, adventurous or fun may drive people to choose it or not. By compiling the characteristics of a ride and matching them to each visitor of Dufan, we obtain that visitor's Level of Favorite for the ride. Dufan covers a 9.5-hectare area. Walking at an average speed of 1 m/s (metre per second), a visitor would require 26.4 hours, more than one day, to cover the whole area of Dufan just by walking, without playing any rides. Distance is therefore another important factor that influences people's movement from one ride to another. The origin of the visitor is also hypothesized to be a factor in ride choice: people from Jakarta would only choose the rides they intended to visit, while people from outside Jakarta would like to try all the rides because they cannot visit Dufan often. People come to Dufan individually or in groups. Those who come in groups can be divided into two categories: small groups (fewer than 25 people) and large groups (more than 25 people). People who come in a group may follow wherever their group goes, but people who come individually or in a relatively small group are free to choose the
ride they want. In this study, both people who come in a small group and those who come individually are considered individual visitors. These five factors (queue information, level of favorite, distance, origin, and group) are the focus of this study and are investigated to understand their influence on visitors' behavior.


Figure 1. Dufan Rides and Facilities
Differences in queue length from one ride to another and the absence of information about the queues mean that people concentrate on lining up for particular rides, especially their favorites, which can leave other rides with only a few visitors. People who wanted to visit Dufan for fun can instead find their time stuck in queues. In Dufan there are many queue spots, such as the Simulator Theatre, Kora Kora, Bianglala, Arum Jeram, Halilintar, Istana Boneka, Niagara-gara, Baku Toki, Tornado and Hysteria (see Figure 1 for the locations of these rides). The longest queue is at the Simulator Theatre. In contrast, the rides with only a small number of people standing in line are Burung Tempur, Rajawali, Pontang Pontang, Ombang Ombang, Poci Poci and Ubanga Banga.
An important cause of the queue problems in Dufan is that there is no information about the queues at other rides. This makes those who stand in line captive: they simply do not know the situation at other rides and have no choice other than staying in the line until they are served. When they receive information, they may change their choice, either moving to another ride with a shorter queue or deciding to continue waiting for their turn. The medium of the information and the queue time are also expected to be important in influencing their choices. This study seeks to understand the potential effects of such information and the factors that influence visitors' decisions.
2. Underlying Theories
Maister [1] studied how people feel while they are in a queue. He stated that if managers are to concern themselves with how long their customers or clients wait in line for service, then they must pay attention not only to the actual wait times but also to how these are perceived. Managers should also try to put themselves in the customers' position to understand their feelings while in line. Knowing the characteristics of people in a queue is important to this study, because the author needs to understand the feelings of those people in order for the questionnaire process to succeed.
Hudson [2] stated that there are factors that influence people's choices when buying a product or service in tourism and hospitality, so understanding how these factors influence customers' decisions is very important in this industry. Several factors may influence consumer behavior: motivation, culture, age and gender, lifestyle, life cycle, and reference groups [2]. In this study, some of these factors are hypothesized to influence the behavior of Dufan customers, including the level of favorite of rides, the distance between rides, the effect of the respondents' origin on choosing rides, and the effect of coming to Dufan in a group or individually. The level of favorite can be influenced by Dufan visitors' culture, age and gender: women and children usually like fun rides such as Istana Boneka and Balada Kera, while young men usually like challenging rides such as Kora Kora, Halilintar, Tornado, Arung Jeram and Hysteria. Motivation can drive people to go to a ride even though the distance between rides is large. The relative location of a visitor's home to Dufan may influence people to choose to enjoy only the rides they wanted (their favourite ones) or to try all the rides while they are in Dufan, given their limited chance to visit Dufan frequently. Reference groups may also influence people who come in groups, for example by following their group's choice of rides to play.
3. Research Methodology
The way to obtain a business solution to the queue problem at Dufan is by understanding visitors' preferences and behavior through a survey at Dufan. Starting in November 2010, a questionnaire survey using face-to-face interviews was conducted with 228 respondents visiting Dufan during its peak periods, which are weekends (Saturdays and Sundays). The data were then checked, and 200 questionnaires were found to be complete and usable in the analysis. The questionnaire consists of 25 questions in total, divided into three sections: (a) a first-impression-of-the-queue section, which asks respondents about their experience when visiting Dufan and encountering a queue; (b) a queue-information section, which contains questions on the effect of information on visitors' choices; and (c) a respondent profile, which covers age, gender, origin, educational background, and occupation.
The research focuses only on rides and does not include shows (an example of a show is Balada Kera). The difference between a show and a ride is that a show has a schedule for visitors who want to watch it, while a ride cannot offer one because the playing time of some rides varies depending on the length of the queue. The number of people standing in the queue is the main factor preventing a ride from keeping an exact schedule for every run: if the queue line is short, the ride operators extend the ride's playing time, and if the queue line is long, they cut it. The questionnaire was administered only on weekends, not weekdays, when the theme park is at its peak. Weekends were chosen to reflect the real feelings of visitors standing in queues, which usually occur on weekends. The research focuses only on the queue problem at Dufan and does not benchmark against other theme parks.
4. Results and Analysis
From the first section of the questionnaire, 69% of respondents chose to stand in the current queue, 18% chose to move to another ride, and the remaining 14% chose to follow friends or family or to use FastTrax. When completing this section, the respondents had not yet been informed about the questions in the second part.
In the second part of the questionnaire, respondents were asked about their responses to information about the queue. Respondents were given information about another ride with a shorter queue than the one they were currently in. There are six types of movement that can be chosen by respondents:
1. To move from Favorite ride to Favorite ride that is located near to current location.
2. To move from Favorite ride to Favorite ride that is located far from current location.
3. To move from Favorite ride to Non-Favorite ride that is located near to current location.
4. To move from Non-Favorite ride to Favorite ride that is located near to current location.
5. To move from Non-Favorite ride to Favorite ride that is located far from current location.
6. To move from Non-Favorite ride to Non-Favorite ride that is located near to current location.
These questions are used to identify the effects of ride preference and of the distance between rides on people's decision to move from one ride to another. Figure 2 shows the preferred actions of respondents after they were provided with queue information.


Figure 2. Preferred actions after being given information
Figure 3. Value placed on the queue information
In Figure 2, the highest percentage is for moving from a non-favorite ride to a favorite ride located near the current queue, chosen by 79% of respondents. The lowest percentages are for Favorite to Favorite (Far) and Favorite to Non-Favorite (Near), where only 27% of respondents would move to the other ride. It can be concluded that people who have been standing in the line of a favorite ride are reluctant to move to another ride if that ride is far from where they are standing or is not as favored as the one they are currently waiting for. From the figure, the average percentage of moving decisions is 45%, which means that almost half of the respondents are willing to move once they get information about another ride. This is supported by their answers on the value of the queue information in Figure 3: around 94% of respondents consider that queue information would be a valuable or useful feature of Dufan.
The next proposition relates to the digital information board. More than 50% of respondents chose a digital board as the medium for queue information. The digital board was chosen because it can be seen by all visitors at the same time, and it is not interrupted by the various noises produced by people, announcements over loudspeakers, or the music in Dufan.
The questionnaire also asked how many minutes of standing in line are acceptable to the respondents. The two largest shares of acceptable queue time are less than 10 minutes (<10 min) with 46% and between 10 and 20 minutes (10-20 min) with 37%, together accounting for 83% of respondents. This means that most people accept a queue time of less than 20 minutes and do not want to wait for a long time. In reality, they may have to wait much longer than that.
To study the factors that influence visitors decisions to move to another ride or to stay in the queue, a statistical
analysis is conducted. The data is reorganized following these rules: level of favorite is assigned into four numbers
based on hypothesized preference of visitors on their movement from a ride to another ride, 0 as moving from favorite
ride to non-favorite ride, 1 as moving from non-favorite ride to non-favorite ride, 2 as moving from favorite ride to
favorite ride, and 3 as moving from non-favorite ride to favorite ride; distance is assigned into two numbers, 1 as
near and 0 as far; origin is assigned into two numbers, 1 as from Jakarta and 0 as from outside Jakarta; group is
assigned into two numbers, 1 as coming to Dufan as group and 0 as individual. The categorized data is then exported
to SPSS and analyzed using logistic regression. The method is a specialized form of regression formulated to predict and explain a binary categorical variable [3]. The factors above become the independent variables, and the decision variable, coded as 0 for staying in the queue and 1 for moving to another ride, becomes the dependent variable. The objective of logistic regression is to predict the value of the dependent variable, which is binary (0 or 1), using independent variables whose values are already known [4]. The result is shown in Table 1.
Table 1. Variables in the Equation
                      B       S.E.     Wald   df   Sig.  Exp(B)
Step 1ᵃ  Favorite    .787    .070   127.594    1   .000   2.196
         Distance   1.328    .150    78.119    1   .000   3.773
         Origin      .336    .131     6.638    1   .010   1.400
         Group      -.245    .129     3.606    1   .058    .783
         Constant  -2.596    .235   121.976    1   .000    .075
a. Variable(s) entered on step 1: Favorite, Distance, Origin, Group.

It can be seen that three factors are significant at α = 0.05: favorite, distance and origin. Group is not significant in influencing people to move to another ride. The negative value of the constant (-2.596) shows that, in the absence of queue information, customers tend to stay in the queue. Among the three significant factors, it is interesting to find that Distance has the largest coefficient (1.328), which means that Distance is the most influential factor in making people move to another ride. Level of Favorite comes second with 0.787, and Origin is last with 0.336. Given this result, Dufan should consider featuring rides that are located not far from the queue information board. Showing information about favorite rides on the board would also encourage visitors to move from their current location. It is also concluded from the result above that people from Jakarta are more eager than people from outside Jakarta to move to another ride when the queue is too long to wait in. In summary, providing queue information would make people move, and three factors are significant in influencing visitors' decisions of whether to stay or to move from a queue, namely level of favorite, distance and origin, with distance being the most influential.
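The fitted equation in Table 1 can be turned into a move probability directly. The sketch below reconstructs the model from the reported B coefficients using the coding rules described above; it is not the SPSS output itself, and the example inputs are hypothetical.

```python
import math

# B coefficients reported in Table 1 of the fitted logistic regression.
B = {"const": -2.596, "favorite": 0.787, "distance": 1.328, "origin": 0.336, "group": -0.245}

def p_move(favorite: int, distance: int, origin: int, group: int) -> float:
    """Probability of 'move to another ride' (decision = 1) under the fitted model.

    favorite: 0..3 coding of the move type (3 = non-favorite -> favorite);
    distance: 1 = near, 0 = far; origin: 1 = Jakarta, 0 = outside; group: 1 = group, 0 = individual.
    """
    z = (B["const"] + B["favorite"] * favorite + B["distance"] * distance
         + B["origin"] * origin + B["group"] * group)
    return 1.0 / (1.0 + math.exp(-z))

# e.g. a Jakarta visitor travelling individually, offered a nearby favorite ride
# while queueing at a non-favorite one (favorite = 3, distance = 1, origin = 1):
print(f"{p_move(3, 1, 1, 0):.2f}")   # about 0.81, of the same order as the 79% in Figure 2
```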
These factors have been tested for multicollinearity, as shown by their VIF (variance inflation factor) values and the Durbin-Watson statistic. The VIF values are: Level of Favorite = 1.242; Distance = 1.242; Group = 1.066; Origin = 1.066; and the Durbin-Watson value is 1.785. A VIF of less than 10 is a strong indication that multicollinearity is not affecting the regression coefficients, so they are well estimated. Furthermore, a Durbin-Watson value close to 2 indicates that there is no correlation between the factors used in the analysis.
From the classification result, the overall percentage of correct predictions is 66.8%. This means that the regression model gives the correct answer 66.8% of the time, which is higher than the 50% that would be correct by chance. This percentage is sufficient to show that the regression model can be used to answer the research question: the importance of queue information and how it can influence people.
5. Conclusion and Future Study
From the analysis, it is concluded that respondents' decisions change significantly when queue information is provided to them. The percentage of respondents who would move to another ride when the queue is too long, before being given any information about the queue at another ride, is 18%, while the average percentage who would move after being given the queue information is 45%. This significant increase means that it is important to provide queue information for Dufan visitors.
It is also concluded that the most influential factor is the distance between rides, followed by the level of favorite and then the origin of the visitors. Group is not a significant factor in influencing people to move to another ride. The statistical analysis also shows that the regression model's constant is negative, meaning that if customers have no information about another ride's queue, they prefer not to move. Most respondents agree that a queue time below 20 minutes is acceptable. A digital board is the best medium to choose because it is not affected by noise and can be seen by everyone.
The study also found some potential implications for Dufan. Firstly, information about the number of people standing in line for a ride is required so that visitors can make an informed decision about whether to move or to stay when they are in a queue; they will then choose the ride with the shorter queue. However, some people may only be affected if they have been in a line for less than their acceptable waiting time, because people who have stood in line for a long time and are deep in the queue do not want to move to another ride after having already spent so much time waiting.
Secondly, regarding the medium of the information, the majority of respondents prefer the use of a digital board to inform them about the queue. A digital board is the best option because it can easily be seen by everyone and is not affected by the noise produced by the huge number of people gathered in the theme park. It is preferred over a booking system such as FastTrax in Dufan or FastPass in Disney, Lo-Q, Multi Motions, and Alton Towers, as such systems add a problem of fairness: people who have more money can book tickets so that they do not have to stand in line, which would leave people who cannot afford the tickets feeling disappointed. Lutz stated that theme park customers who do not use the virtual queue system and wait in the general line see the intrusion as negative [5].
The respondents indicate that it is best to place the information medium at every ride in Dufan, giving everyone the same chance to access this important information. Placing a digital board at every ride would require investment from the theme park management, but from the customer's side it adds benefit, because visitors do not have to walk to a kiosk to get the information and, in particular, do not have to face a secondary queue. Furthermore, the digital board should be installed in front of every ride entrance, or within a range of about 20 minutes of waiting time from the entrance (the acceptable waiting time according to the questionnaire results). This would prevent people who want to go to another ride because of the information given to them from getting too deep into the queue; based on the author's experience, it is very hard for people to move out of a crowded queue. Dufan should also show the distance between rides on every information board at each ride. For a more effective result, Dufan may limit the queue information to rides within the same theme area. For example, in the Hysteria queue, people would be given queue information only about rides located in the same theme area, Greeks. This would also prevent people who are standing in queues in other theme areas from moving between areas: if everyone in other areas knew that Hysteria in Greeks had a very short queue, they would go directly to it, and because the distance between theme areas is quite large, by the time they reached Hysteria it would quite possibly be full. Such an incident would add to their dissatisfaction, so selectively providing the queue information only for the rides in one theme area would minimize this problem.
Finally, the study implies that the need for information, and its potential positive effect, would add to customer satisfaction. People are given a choice to move or to stay rather than simply wondering when their turn to ride will come. They can estimate the waiting time and pick the best option: stay in the current line, move to another ride, or leave the queue and wait outside, for example while eating in a restaurant. Giving people these options lets them enjoy the amusement park more than just waiting in line. Lith mentioned that having information about
visitors' interests would benefit them [6]. He also said that this valuable information could stimulate cooperation with sponsors seeking to attract the public to their products.
Several future studies can be conducted to follow up on the outcomes of this study, namely a simulation study and a field experiment. The first follow-up is a simulation study to model visitor movement in the theme park. This study found that the information benefits the respondents; however, a simulation is required to estimate the benefit of the information in reducing queues. Visitor behaviour would be modelled based on the outcomes of this study. A field experiment is the next step after the simulation, aiming to demonstrate the benefit of providing queue information to visitors. It could be conducted, for example, by placing a digital board at one of Dufan's rides and then studying visitors' responses to the information. The field experiment would confirm whether the information shown on the digital boards is useful in the way visitors require, and it would also gather feedback from Dufan's visitors.
6. References
[1] Maister, D. H. (1985). The Psychology of Waiting Lines. The Service Encounter, 113-123.
[2] Hudson. (2007). Tourism and Hospitality Marketing. University of Calgary, Canada : Sage Publication Ltd.
[3] Hair, J., Black, W., Babin, B., Anderson, R., & Tatham, R. (2006). Multivariate Data Analysis, Sixth Edition. New Jersey:
Pearson Prentice Hall.
[4] Santoso, S. (2010). Statistik Multivariat. Jakarta: Elex Media Komputindo.
[5] Lutz, H. (2008). The Impact of Virtual Queues for Amusement Parks. Proceedings of Decision Sciences Institutes (DSI)
39th Annual Meeting, Baltimore.
[6] Lith, P. v. (2002). Queue Management. Retrieved from http://multimotions.websystems.nl/eng/index.html.



Proceeding of Industrial Engineering and Service Science, 2011, September 20-21
Copyright 2011 IESS.
Developing Model to Identify Significant Human
Factors in Aviation Maintenance
Mohd Noor Said 1, Nooh Abu Bakar 2, Ahmad Zahir Mokhtar 3

1 Universiti Kuala Lumpur, Lot 2891, Jalan Jenderam Hulu, Jenderam Hulu, Dengkil, Selangor, Malaysia
2 School of Graduate Studies, UTM International Campus, Jln Semarak, Kuala Lumpur, Malaysia
3 Universiti Kuala Lumpur, Lot 2891, Jalan Jenderam Hulu, Jenderam Hulu, Dengkil, Selangor, Malaysia
mdnoorsaid@miat.unikl.edu.my 1, noohab@citycampus.utm.my 2, azmokhtar@miat.unikl.edu.my 3

ABSTRACT
Elimination of aviation accidents is one of the primary goals of the aviation maintenance industry. A leading cause of aviation accidents is a lack of oversight of various human factors issues and of organizations' maintenance operation performance. The technologies used in the industry generate multiple risks, mostly from three domains: systems, hardware and people. Analysis of existing aviation maintenance data is a crucial step in meeting the aviation industry's need to improve aviation safety. This paper sets out to assess the significant human factors that contribute to human error in aviation maintenance. We conducted a study of the Malaysian aviation maintenance industry to determine these significant human factors and to illustrate how empirical analysis can integrate aircraft maintenance personnel's opinions about their relative importance. We developed a model, based on a modified SHELL model, to categorize the human factors derived from the literature review and from the opinions of aviation personnel involved in maintenance. The results show that there are significant human factors impacting human error, and Structural Equation Modeling (SEM) was used to verify the hypotheses in the path analysis model. The model helped to determine the significant human factors underlying aviation maintenance errors, ultimately helping aviation personnel to manage human error and safety issues in aviation maintenance.

Keywords:Developing Model, Significant Human Factors, Impacting Human Error, Aviation Maintenance

1.Introduction
Aviation maintenance personnel work on extremely sophisticated aircraft with complex integrated systems which are continuously upgraded and improved. Technological changes, such as digital computer systems and the introduction of new materials, require maintenance personnel to be trained to analyze, repair, inspect and certify these systems in accordance with the quality standards defined by the aircraft manufacturers and aviation authorities. Aircraft maintenance is an essential component of the global aviation industry. It involves a complex organization in which each maintenance person performs varied tasks with limited time, minimal feedback, and sometimes difficult ambient conditions [1]. Maintenance in this context is essentially about keeping aircraft operational within a strict time schedule. The main role of aviation maintenance personnel is to categorize and judge the importance of problems that could threaten the airworthiness of the aircraft [2]. Aircraft contain many rapidly developing advanced technologies, such as composite material structures, glass cockpits, highly automated systems, and built-in diagnostic and test equipment; the need to simultaneously maintain new and old fleets therefore requires aviation maintenance personnel to be more knowledgeable and adept in their work than in previous years [3]. However, the complexity of such operations naturally presents new possibilities for human error and subsequent breakdowns in the system's safety net [4].
In recent years, the aviation industry has gradually begun to make use of risk management and risk incident analysis [5], [6], [7], [8]. Many accident reports now include risk factors in their conclusions. For example, on May 25, 2002, a B747-200 China Airlines passenger aircraft departing Taiwan for Hong Kong broke up in flight; all 225 people on board were killed. The accident report by the Aviation Safety Council (ASC) in Taiwan found that the occurrence involved many items related to maintenance risks that had the potential to degrade aviation safety [9]. The most important step in aviation risk management is risk identification: if a risk cannot be accurately identified, it cannot be analyzed or evaluated. Once actual and potential hazards are identified, an assessment should be made of the causes and contributing factors and a decision should be made as to whether action is required [10]. We aimed to evaluate the significant human factors in the aviation maintenance industry. The objective is to help aviation companies better understand their major operational and managerial weaknesses in order to improve the management of aviation maintenance operations. A questionnaire study of the Malaysian aviation industry was conducted to determine these significant human factors and to illustrate how an empirical evaluation approach integrates expert opinion about their relative importance.
1.1 The human factors model
Human factors practitioners typically concentrate on the interfaces between people and the other elements of the system. The important point of this systems view is that humans cannot be isolated from the other system components. The view is similar to that of an ecologist, i.e. that all elements in nature interact; we cannot change one aspect of the system without being concerned about its effects on the other parts [11]. According to Edwards [12], all aviation accidents involve four classes of factor, known as the SHEL model: software (e.g. maintenance procedures, maintenance manuals, checklists), hardware (e.g. tools, test equipment, the physical structure of the aircraft, and instruments), environment (the physical environment such as conditions in the hangar, the work environment such as work patterns, and management structures), and liveware (the person or people at the center of the model, including maintenance engineers, supervisors, managers, etc.) [13]. The model identifies three kinds of interactive resources and indicates that the sources of all aviation accidents can be categorized as one element (liveware) or as a combination of three major relationships (Liveware-Software, Liveware-Hardware, and Liveware-Environment).
Hawkins [14] modified Edwards' model to include the interactive nature of the person-to-person relationship (Liveware-Liveware) and called it SHELL. Hawkins used the relationships between liveware and software, liveware and hardware, liveware and environment, and liveware and liveware to describe the situations that people encounter, or what happens to them, in the working environment. The model does not cover the interfaces that lie outside the human factors domain (Hardware-Hardware, Hardware-Environment and Software-Hardware) and is intended only as a basic aid to understanding human factors [15].
1.2 The modified model for categorizing the human factors in aviation.
We are in the era of organizational accidents [16]. In recent years, there has been a shift in emphasis within the safety literature away from the individual-level factors that might be responsible for accidents and incidents, and towards organizational and organization-related factors [17], [18], [10], [8], [19]. When people are at the center of aviation safety, the quality, capacity, attitude, perception, and training of personnel are important and therefore highlighted. The organizational culture, organizational climate, managerial model, decision-making patterns and aviation safety culture also affect the individual [20], [10], [21]. Accidents are usually organizational or managerial issues composed of a series of errors that are sometimes difficult for aviation personnel to recognize and control. In practice, the International Civil Aviation Organization's (ICAO) Human Factors Training Manual [22] emphasizes the organizational issues of airline maintenance operations. Furthermore, the International Air Transport Association [23] uses five categories in its accident classification system: human, technical, environmental, organizational, and insufficient data.
1.3 The extended SHELL model and research hypotheses
To examine the importance of the organizational aspect of the aviation maintenance system, we extended the SHELL model to explicitly include organization as a mediating factor. This extension enables the role played by the organizational aspect of the aviation maintenance system to be examined through its interaction with the aviation maintenance personnel. With the extended SHELL model, an aviation maintenance system is described in terms of human factors interfaces in which the aviation maintenance person (liveware) interacts with other human factors, including other people (liveware), physical resources (hardware), non-physical resources (software), physical settings (environment), and non-physical settings (organization). In aviation accident analysis, organizational errors in relation to resource management, organizational climate, and operational processes have been highlighted in order to better understand and manage human error. These latent organizational failures can directly affect supervisory practices, as well as the conditions and actions of operators [24]. In aviation maintenance, the efficiency and reliability of human performance are influenced by working conditions, which stem from the overall organizational process [25]. Organizational and management decisions made about technical support, policies, workforce, finance and safety have a significant impact on the type of human error that can appear.
As such, an effective liveware (aviation maintenance personnel) interface with fewer organizational deficiencies would better help reduce the human errors created by the other human performance interfaces of the system. In addition, an effective liveware interface derived from a positive and innovative organizational climate will help an organization operating in a high-risk environment, such as an aviation maintenance system, to better manage and more easily adapt to ongoing changes [21]. Mismatches at the above human performance interfaces have been regarded as sources of human error in which the aviation maintenance personnel (liveware) play a vital role. To examine whether this ideal situation has been achieved, it is hypothesized that:
H1: There is a positive and direct relationship between human factors and human error in aviation maintenance.
H2: There is a positive and direct relationship between human factors and organization in aviation maintenance.
H3: There is a positive and direct relationship between organization and human error in aviation maintenance.
H4: The impact of human factors on human error in aviation maintenance increases with the mediating role of organization effort in aviation maintenance.
2. Research Method
2.1 Survey instrument
The survey items on the questionnaire for measuring the three constructs of the research model in Fig. 1 were obtained from the existing literature, including the SHELL model [26], [13], [10], [8], [23]. The wording of the items was adjusted, where appropriate, to the context of aviation maintenance. A total of 84 survey items was considered for measuring the three constructs (Human Factors, Organization, and Human Error). A pre-test was performed with three aviation maintenance companies on 58 of the survey items to improve their content and appearance. The respondents were asked to complete the questionnaire, and they indicated that all statements were appropriate.
2.2 Data collection
A survey questionnaire containing the measurement items was distributed to aviation maintenance personnel at all levels, including supervisors, instructors, licensed aircraft engineers and technicians, in 30 aviation maintenance companies. A total of 315 effective responses were received.
3. Result and Discussion
Structural Equation Modeling (SEM) was used to test and analyze the hypothesized relationships of the research model in Fig. 1. SEM examines the inter-related relationships among a set of posited constructs simultaneously, each construct being measured by one or more observed items (measures). SEM involves the analysis of two models: a measurement (factor analysis) model and a structural model [27]. The measurement model specifies the relationships between the observed measures and their underlying constructs, with the constructs allowed to inter-correlate. The structural model specifies the posited causal relationships among the constructs.
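As an illustration only, the two-part model described above could be specified as follows. The sketch assumes the open-source semopy package and a file of item scores; the lavaan-style syntax, the file name and the construct abbreviations (taken from Tables 1 and 2) are ours, not the authors' actual code.

```python
# Illustrative sketch only (not the authors' code), assuming the semopy package
# and a pandas DataFrame whose columns are the observed indicators.
import pandas as pd
from semopy import Model

model_desc = """
# measurement model: each latent construct is measured by its observed indicators
HF  =~ SW + HW + ENV + LW_I + LW_O
ORG =~ QS + CP + WF + FS + SC
HE  =~ INST + SR

# structural model: the hypothesized paths H1-H3
ORG ~ HF
HE  ~ HF + ORG
"""

survey = pd.read_csv("survey_responses.csv")   # hypothetical file of 315 responses
model = Model(model_desc)
model.fit(survey)                              # maximum-likelihood estimation
print(model.inspect())                         # loadings, path estimates and p-values
```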
3.1 The measurement model with reliability analysis
A reliability analysis was first carried out on the survey data to ensure the internal consistency of the constructs. For exploratory research, Cronbach's alpha should be at least 0.70 for a set of items to be considered an adequate scale [28]. Exploratory and confirmatory factor analyses were then conducted on single and multiple constructs to extract the factors from the items retained after the reliability analysis. The retained items are good indicators of their underlying extracted factors, which are used as the observed variables (indicators) in the measurement model for measuring their corresponding constructs.
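The following is a minimal sketch (ours) of the internal-consistency check: Cronbach's alpha for one construct computed from an array of item scores. The simulated scores are purely illustrative; only the 0.70 cut-off comes from the text.

```python
# Minimal sketch: Cronbach's alpha from an (n_respondents x n_items) score matrix.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    k = items.shape[1]                              # number of items in the scale
    item_var = items.var(axis=0, ddof=1).sum()      # sum of the item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of the summed scale
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=(315, 1))                       # one underlying construct
scores = latent + rng.normal(scale=0.7, size=(315, 6))   # six correlated items
print(f"alpha = {cronbach_alpha(scores):.2f}  (retain the scale if alpha >= 0.70)")
```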
3.2 The structural model

Figure 1: The research model showing input and output variables with regression weights.
The structural model, with the path diagram shown in Fig. 1 and the measurement model in Tables 1 and 2, was constructed. Ovals represent the constructs (latent variables), and rectangles represent the factors (observed variables or indicators). Single-headed arrows represent causal relationships between variables. A goodness-of-fit test was conducted on the survey data to examine the adequacy of the structural model. The chi-square of the structural model was significant (χ2 = 126.530, df = 50, p = 0.000), with χ2/df = 2.531, which is smaller than 3 and indicates an acceptable fit [29]. The large chi-square value was not surprising, since the chi-square statistic is directly related to sample size. To assess the overall model fit without it being affected by sample size, alternative stand-alone fit indices that are less sensitive to sample size were used. These indices included the goodness-of-fit index (GFI), the adjusted goodness-of-fit index (AGFI), the comparative fit index (CFI), and the root mean square error of approximation (RMSEA) [5]. For a good model fit, GFI should be close to 0.90, AGFI more than 0.80, CFI more than 0.90, and RMSEA less than 0.10 [30]. An assessment of the structural model suggested an acceptable model fit (GFI = 0.939; AGFI = 0.905; CFI = 0.936; RMSEA = 0.070).
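The cited cut-offs can be gathered into a simple check; the function below is ours (not from the paper) and merely applies the thresholds quoted above to the reported values.

```python
# Small sketch applying the fit criteria quoted above to the reported fit indices.
def fit_checks(chi2, df, gfi, agfi, cfi, rmsea):
    return {
        "chi2/df < 3":  chi2 / df < 3,
        "GFI >= 0.90":  gfi >= 0.90,
        "AGFI > 0.80":  agfi > 0.80,
        "CFI > 0.90":   cfi > 0.90,
        "RMSEA < 0.10": rmsea < 0.10,
    }

print(fit_checks(chi2=126.530, df=50, gfi=0.939, agfi=0.905, cfi=0.936, rmsea=0.070))
# all five criteria evaluate to True, consistent with the acceptable fit reported above
```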
3.3 Significant Human Factors Impacting Human Error in Aviation Maintenance
The purpose of this research is to determine whether there are significant human factors impacting human error in aviation maintenance. The findings reveal that there are significant human factors impacting human error in the Malaysian aviation maintenance industry. This supports earlier findings that human factors have a positive impact on aircraft maintenance technicians [3]. The results also agree with the view that human errors are caused by the failure of one or several components among Software, Hardware, Environment and Liveware in a system [14]. From the path analysis, we observe that both human factors and organization were significant with respect to the dependent variable, human error. Significance refers to the 95% confidence level with p-value < 0.001. In terms of relative importance, the independent organization construct (0.405) was more significant than the independent human factors construct (0.324), referring to the standardized estimates in Table 2.
Based on the weights and rankings, the order of significance of the five dimensions is as presented in Table 1. With human factors as the dependent construct and software, hardware, environment, liveware (I) and liveware (O) as indicators, all five were significant at the 95% confidence level (p-value ***), with software taken as the reference group. In this model, hardware (0.871) is the most significant factor, followed by liveware (I) (0.818), liveware (O) (0.768), software (0.754) and environment (0.714). Furthermore, when quality support was taken as the reference factor, quality support, company policy, workforce, finance strategy and safety culture were all found to be significant at the 95% confidence level (p-value ***). Finance strategy (0.883) is the most significant factor influencing organization, followed by company policy (0.836), workforce (0.818), quality support (0.803) and safety culture (0.741).
Table 1: Regression weights of the factors in the path model.

Path             Unstandardized Estimate   p-value   Standardized Estimate
SW    <--- HF            1.000               ***            .754
HW    <--- HF            1.214               ***            .871
ENV   <--- HF            1.069               ***            .714
LW(I) <--- HF            1.110               ***            .818
LW(O) <--- HF            1.120               ***            .768
QS    <--- ORG           1.000               ***            .803
CP    <--- ORG            .920               ***            .836
WF    <--- ORG            .908               ***            .818
FS    <--- ORG            .878               ***            .883
SC    <--- ORG            .867               ***            .741
INST  <--- HE            1.000               ***            .781
SR    <--- HE             .910               ***            .848
3.4 Hypotheses Testing
In SEM analysis, the relationships among the independent and dependent variables (constructs) are assessed simultaneously via covariance analysis. Maximum Likelihood (ML) estimation was used to estimate the model parameters, with the covariance matrix as data input. The ML estimation method has been described as well suited to theory testing and development [27], [30]. Two sets of independent and dependent variables are used to test research hypotheses H1-H4. The first set has the human factors construct and organization as independent variables, and human error as the dependent variable. Fig. 1 shows the result of the structural model. The values associated with each path (hypothesized relationship) are standardized path coefficients; they represent the amount of change in the dependent variable for each unit of change in the independent variable. For example, the coefficient of 0.324 means that an increase of one unit in the human factors construct is associated with an increase of 0.324 units in the human error construct. Solid lines indicate supported relationships.
Table 2: Regression weights between the constructs

Path             Unstandardized Estimate   p-value   Standardized Estimate
HE   <--- HF             .569                ***            .324
ORG  <--- HF             .501                .008           .252
HE   <--- ORG            .632                ***            .405

The standardized regression weights and p-values for the structural relationships are shown in Table 2. The standardized regression weight for H1 was found to be 0.324 (p-value < 0.001); this supports H1, i.e. that HF has a direct and strong impact on HE. Table 2 also presents the relationship between HF and ORG efforts. The standardized regression weight for the hypothesized relationship between HF and ORG was positive (0.252) but not significant at the chosen level (p-value > 0.001), so the result does not support H2, i.e. that HF has a direct and strong impact on ORG effort. The standardized regression weight for the direct relationship between ORG effort and HE was positive (0.405) and significant (p-value < 0.001), confirming H3, i.e. that ORG has a strong, positive, direct impact on HE.
Empirical support for the mediating role of organization efforts in the relationship between human factors and human error is hard to find in the literature. In the aviation industry, organization may mediate between HF and HE; this motivates H4: the impact of the significant human factors on human error in aviation maintenance increases with the mediating role of organization effort in the Malaysian aviation maintenance industry. These theoretical considerations and the proposed hypothesized relationships are illustrated in Fig. 1. Human error is also indirectly affected by HF through ORG efforts. To test whether ORG efforts are an important mediator of the HF-HE relationship, the following rule of thumb is applied [30], [31]:
i. IE < 0.085 => no mediation
ii. IE > 0.085 and IE ~ DE => partial mediator (the direct HF -> HE path remains significant, p < 0.05)
iii. IE > 0.085 and IE > DE => total mediator (the direct HF -> HE path is not significant, p > 0.05)
The standardized indirect effect (IE) of HF on HE is 0.103, which is greater than 0.085 (Table 3); thus, ORG efforts mediate the relationship between HF and HE. Since the p-value for the direct effect (DE) of HF on HE is less than 0.05, ORG efforts are a partial mediator. In conclusion, this finding supports hypothesis H4: the impact of HF on HE increases with the mediating role of ORG efforts in the Malaysian aviation maintenance industry.
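A short sketch (ours) of this rule of thumb applied to the reported coefficients follows; the indirect effect is the product of the two paths through the mediator, and the numbers come from Tables 2 and 3.

```python
# Sketch of the mediation rule of thumb, using the standardized coefficients from
# Table 2: HF->ORG = 0.252, ORG->HE = 0.405, direct HF->HE = 0.324 (p < 0.001).
def mediation_verdict(path_hf_org, path_org_he, direct_p_value, threshold=0.085):
    indirect = path_hf_org * path_org_he
    if indirect < threshold:
        return indirect, "no mediation"
    if direct_p_value < 0.05:
        return indirect, "partial mediation"   # direct HF->HE path still significant
    return indirect, "total mediation"

ie, verdict = mediation_verdict(0.252, 0.405, direct_p_value=0.001)
print(f"IE = {ie:.3f} -> {verdict}")   # IE = 0.102 (Table 3 reports 0.103) -> partial mediation
```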
Table 3: Direct effect (DE) and indirect effect (IE) analysis for the Malaysian aviation maintenance industry

         Std. Total Effect        Std. Direct Effect       Std. Indirect Effect
         HF     ORG    HE         HF     ORG    HE         HF     ORG    HE
ORG     0.324  0.000  0.000      0.324  0.000  0.000      0.000  0.000  0.000
HE      0.423  0.405  0.000      0.324  0.405  0.000      0.103  0.000  0.000

Note: Std. Total Effect = Std. Direct Effect (DE) + Std. Indirect Effect (IE)
4. Conclusion
The empirical findings of a questionnaire survey of 315 aviation personnel in Malaysia show that the model and approach are both strategically effective and practically acceptable for categorizing the significant human factors. The results suggest that aviation maintenance companies may want to adopt management strategies related to the significant factors in order to minimize human error. Our findings also suggest that the Civil Aviation Authority may consider asking management-level groups in aviation companies, such as human resources and maintenance departments, to focus on the significant factors to improve aircraft maintenance performance and reduce error. Specifically, these significant factors are related to hardware, liveware (I), environment, and liveware (O). Aviation maintenance companies also have to focus on the significant organizational factors, such as financial strategy, policies, workforce and safety culture. When employee professionalism is protected and individual staff members have the company's attention, safety incidents and the cost of human error should be reduced.
5. References
[1] Latorella, K.A., Prabhu, P.V., 2000. A review of human error in aviation maintenance and inspection. International Journal of Industrial Ergonomics 26 (2), 133-161.
[2] Pettersen, K.A., Aase, K., 2008. Explaining safe work practices in aviation line maintenance. Safety Science 46, 510-519.
[3] Chang, Y.-H., Wang, Y.-C., 2010. Significant human factors in aircraft maintenance technicians. Safety Science 48, 54-62.
[4] CAA, 2002a. CAP 715: An Introduction to Aircraft Maintenance Engineering Human Factors for JAR 66. UK Civil Aviation Authority. http://www.caa.co.uk/docs/33/CAP715.PDF (accessed 15 May 2009).
[5] Janic, M., 2000. An assessment of risk and safety in civil aviation. Journal of Air Transport Management 6, 43-50.
[6] Lee, W.K., 2006. Risk assessment modeling in aviation safety management. Journal of Air Transport Management 12 (5), 267-273.
[7] Wong, D.K.Y., Pitfield, D.E., Caves, R.E., Appleyard, A.J., 2006. Quantifying and characterizing aviation accident risk factors. Journal of Air Transport Management 12, 352-357.
[8] CAA, 2007. Aircraft Maintenance Incident Analysis. UK Civil Aviation Authority. http://www.caa.co.uk/docs/33/Paper2007_04.pdf (accessed 15 May 2009).
[9] ASC, 2005. Aviation Occurrence Report I: In-flight breakup over the Taiwan Strait, northeast of Makung, Penghu Island, China Airlines Flight CI611, Boeing 747-200, B-18255, May 25, 2002 (ASC-AOR-05-02-001). Aviation Safety Council, Taipei, Taiwan.
[10] CAA, 2003. CAP 716: Aviation Maintenance Human Factors. UK Civil Aviation Authority. http://www.caa.co.uk/docs/33/CAP71PDF (accessed 15 May 2009).
[11] Endsley, M.R., Robertson, M.M., 1996. Team situation awareness in aviation maintenance. In: Proceedings of the Human Factors and Ergonomics Society 40th Annual Meeting. Human Factors and Ergonomics Society, Santa Monica, CA, pp. 1077-108.
[12] Edwards, E., 1995. Aviation ergonomics: whence and whither? Ergonomics 38 (3), 565-569.
[13] CAA, 2002b. CAP 718: Human Factors in Aircraft Maintenance and Inspection. UK Civil Aviation Authority. http://www.caa.co.uk/docs/33/CAP718.PDF (accessed 15 May 2009).
[14] Hawkins, F.H., 1993. Human Factors in Flight. Ashgate, Aldershot, England.
[15] ICAO, 2003. Human Factors Guidelines for Aircraft Maintenance Manual, first ed. International Civil Aviation Organization, Doc. 9824-AN/450. http://www2.hf.faa.gov/opsManual/assets/pdfs/ICAOHF.pdf (accessed 15 May 2009).
[16] Reason, J., 1990. Human Error. Cambridge University Press, Cambridge, England.
[17] Westrum, R., 1996. Human factors experts beginning to focus on organizational factors in safety. ICAO Journal (October).
[18] Neal, A., Griffin, M.A., Hart, P.M., 2000. The impact of organizational climate on safety climate and individual behavior. Safety Science 34, 99-109.

[19] Parker, D., Lawrie, M., Hudson, P., 2006. A framework for understanding the development of organizational safety culture. Safety Science 44, 551-562.
[20] McDonald, N., Corrigan, S., Daly, C., Cromie, S., 2000. Safety management systems and safety culture in aircraft maintenance organizations. Safety Science 34 (1-3), 151-176.
[21] Arvidsson, M., Johansson, C.R., Ek, A., Akselsson, R., 2006. Organizational climate in air traffic control: innovative preparedness for implementation of new technology and organizational development in a rule governed organization. Applied Ergonomics 37, 119-129.
[22] ICAO, 1998. Human Factors Training Manual, first ed. International Civil Aviation Organization, Doc. 9683-AN/950.
[23] IATA, 2006. Safety Report. International Air Transport Association, Geneva, Switzerland/Montreal, Canada.
[24] Wiegmann, D.A., Shappell, S.A., 2003. A Human Error Approach to Aviation Accident Analysis: The Human Factors Analysis and Classification System. Ashgate, Burlington, VT.
[25] Isaac, R., Ruitenberg, B., 1999. Air Traffic Control: Human Performance Factors. Ashgate, Aldershot, England.
[26] ICAO, 2003. Human Factors Guidelines for Aircraft Maintenance Manual, first ed. International Civil Aviation Organization, Doc. 9824-AN/450. http://www2.hf.faa.gov/opsManual/assets/pdfs/ICAOHF.pdf (accessed 15 May 2009).
[27] Anderson, J.C., Gerbing, D.W., 1988. Structural equation modeling in practice: a review and recommended two-step approach. Psychological Bulletin 103 (3), 411-423.
[28] Nunnally, J.C., 1978. Psychometric Theory, second ed. McGraw-Hill, New York.
[29] Bentler, P.M., 1988. Theory and Implementation of EQS: A Structural Equations Program. Sage, Newbury Park, CA.
[30] Hair Jr., J.F., Anderson, R.E., Tatham, R.L., Black, W.C., 1998. Multivariate Data Analysis with Readings, fourth ed. Prentice Hall, New Jersey.
[31] Zakuan, N.M., 2009. Structural Analysis of Total Quality Management, ISO/TS16949 and Organization Performance in Malaysia and Thailand Automotive Industry. Unpublished thesis, Universiti Teknologi Malaysia.



Proceeding of Industrial Engineering and Service Science, 2011, September 20-21
Copyright 2011 IESS.
Improving Process Performance Through Quality
Engineering Using TAGUCHI Robust Design
Arum Sari*, Halida Intan

Industrial Engineering, Pasundan University, Bandung, Indonesia
*aarum310355@yahoo.com
ABSTRACT
Design of experiments is a powerful technique for understanding a process and studying the impact of the potential variables affecting it. Robust design is a methodology for making process performance insensitive to variation in manufacturing conditions and the environment. The Taguchi method of robust design has been widely used for optimizing process parameter settings. Although it is widely accepted for reducing variability in manufacturing processes, research shows that very little has been done on the application of this methodology in practice. This paper investigates how a robust design of the parameter settings can improve the quality performance of a product. The research shows that the proposed robust parameter settings significantly reduce the probability of producing nonconforming units. The paper also compares the quality performance of the robust design with that of a non-robust design; the robust design produces a lower probability of nonconforming units. The benefit of the proposed parameter settings for cost reduction is also investigated.

Keywords:Taguchi, DOE, Robust Design, Parameter setting, TQM, Quality Improvement, S/N Ratio

1. Introduction
The quality of a process is technically measured by its variance: high variance tends to produce a high number of nonconforming units. Setting the process parameters is one way of reducing this variation, and the tool for achieving parameter design is the design of experiments. Effort should be directed toward determining the best design at the least cost. Reducing variability without adding cost can be achieved by defining a strategy that minimizes the effect of the causes of variation. The Taguchi experiment is one way to carry out an effective experiment, and robust design is the methodology within the Taguchi approach for making product performance insensitive to variation. Although it is widely accepted for tackling variability problems in manufacturing processes, research shows that very little has been done in the manufacturing sector in Indonesia. This paper explains how to improve quality performance through a robust design of the parameter settings, using PT. Mitrametal Perkasa as a case study.
2. Problem Statement
PT. Mitrametal Perkasa produces motor vehicle components. The brake hose is its main product and has very poor quality performance: most units are out of specification and must be reworked to meet it. The quality improvement program has so far focused on quality control, while the parameter settings have been determined by trial and error with no significant result. Redesigning the parameter settings is an alternative that could provide a significant improvement. The parameter settings will be designed robustly based on a Taguchi experiment. The research questions are how to design a robust parameter setting, how large the resulting improvement is, and whether the robust design outperforms the existing one. The research methodology is described in Section 3, planning and designing the experiment in Section 4, and conducting the experiment in Section 5. Analysis of the experiment is offered in Section 6, and analysis of the robust design in Section 7. The conclusion is presented at the end of the paper.
3. Research Methodology
The experiment is performed in four distinct phases. The first phase is planning the experiment. It includes forming the team, determining the objective, identifying quality characteristics, determining measurement methods, selecting factors and factor levels, specifying variable settings, and identifying potential interactions. The second phase is designing the experiment. It involves calculating the degrees of freedom, selecting the orthogonal array(s), constructing the experiment layout, and assigning the factors and interactions of interest to the array(s). The third phase is conducting the experiment, i.e. executing the experiment as developed in the planning and design phases; this includes developing the test plan and performing the experimental runs. The last phase is analyzing the experiment. It involves converting the raw data to S/N ratios in order to obtain a robust design of the parameters. The analysis includes determining the most important factors and selecting the optimal levels for those factors using tabular and graphical techniques. The last step in this phase is conducting a confirmation run at the optimal settings to check reproducibility and to quantify the improvement achieved by the program.
4. Planning And Designing The Experiment
After the appropriate team had been selected, it worked together to improve quality. The objective of the study is to minimize the percentage of defective product in order to reduce reworked units. Pareto diagrams of historical percent-defective data are used to select the department, the product, the process and the quality characteristic to be considered. There are six departments in the plant, namely (1) Casting, (2) Machining, (3) Painting, (4) Brake Hose, (5) Lining Bonding, and (6) Stamping. The Pareto diagram shows that the Brake Hose department has the largest percentage of defectives (38%), so it is selected. Four types of product are produced in the Brake Hose department, namely (1) 2W Front Brake Hose All Type, (2) 2W Rear Brake Hose All Type, (3) 4W Front Brake Hose All Type and (4) 4W Rear Brake Hose All Type. The Pareto diagram shows that the product 2W Front Brake Hose All Type has the largest percentage of defectives (29%), so it is selected. Based on the flow process diagram of the 2W Front Brake Hose, the Pareto diagram indicates that the Crimping 2 process is selected. There are five quality characteristics of the Crimping 2 process; the Pareto diagram for these characteristics shows that three of them contribute 86% of the defects: (1) crimping diameter, (2) skirt width, and (3) side line out. Another important consideration in determining the objective of the experiment is not to try to solve the whole problem in one experiment (Peace, 1993). For this reason, the crimping diameter, which is categorized as a nominal-is-best characteristic, is given first priority. The objective of the experiment is therefore to reduce crimping diameter defects in the Crimping 2 process for the product 2W Front Brake Hose All Type in the Brake Hose department, as a quality improvement program at PT. Mitrametal Perkasa.
A cause-and-effect diagram is used to select the factors and factor levels, and a check sheet is used to define the root causes of the problem. Analysis of the root causes shows that four groups of causes have to be considered: (1) heat input, (2) heating rate, (3) pressure, and (4) humidity. The heat input parameters are electrical current and voltage, and the heating rate parameter is heating time. The root cause analysis therefore identifies five factors to be studied: (A) voltage, (B) electrical current, (C) heating time, (D) pressure and (E) humidity. Factors A, B, C and D are classified as control factors, while factor E is a noise factor. Each factor has two levels. The interactions AxB, AxC, and BxC are also of interest. A summary of the factors, factor levels and the associated settings is presented in Table 1.
The selected factors and levels result in 7 degrees of freedom for the control factors, so the appropriate orthogonal array is L8. The control factors are placed in an inner array and the noise factor in an outer array; the layout of the design is shown in Table 2. The noise factor E, which has two levels, is treated as a replication and placed in the outer array. L8 means that the experiment consists of 8 runs, each repeated twice according to the two levels of the noise factor, giving a total of 16 experiments. For each setting of the design factors, the mean of the experimental responses is calculated. The assignment of the factors and interactions to the columns of the orthogonal array is done using the standard linear graph of L8: factor A, factor B, interaction AxB, factor C, interaction AxC, interaction BxC and factor D are placed consecutively in columns 1 to 7, as shown in Table 2. A sketch of this design step is given below.
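The sketch below (ours, for illustration) lists the standard L8(2^7) orthogonal array with the column assignment just described; the rows reproduce the inner array of Table 2, and the level settings are attached from Table 1 (interaction columns carry no physical setting of their own).

```python
# Sketch of the design step: the standard L8(2^7) array with A, B, AxB, C, AxC,
# BxC and D assigned to columns 1-7, reproducing the inner array of Table 2.
L8 = [
    [1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 2, 2, 2, 2],
    [1, 2, 2, 1, 1, 2, 2],
    [1, 2, 2, 2, 2, 1, 1],
    [2, 1, 2, 1, 2, 1, 2],
    [2, 1, 2, 2, 1, 2, 1],
    [2, 2, 1, 1, 2, 2, 1],
    [2, 2, 1, 2, 1, 1, 2],
]
columns = ["A", "B", "AxB", "C", "AxC", "BxC", "D"]
levels = {                               # physical settings from Table 1
    "A": {1: "12 Volt", 2: "9 Volt"},    # voltage
    "B": {1: "60 A",    2: "90 A"},      # electric current
    "C": {1: "5 sec",   2: "3 sec"},     # heating time
    "D": {1: "50 MPa",  2: "40 MPa"},    # pressure
}

for run, row in enumerate(L8, start=1):
    setting = {c: levels[c][v] for c, v in zip(columns, row) if c in levels}
    print(f"run {run}: {setting}")       # each run is repeated at both humidity levels (E)
```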
Table 1. Factors and factor levels

Type             Factor             Symbol    1st Level    2nd Level
Control factor   Voltage            A         12 Volt      9 Volt
Control factor   Electric current   B         60 A         90 A
Control factor   Heating time       C         5 sec        3 sec
Control factor   Pressure           D         50 MPa       40 MPa
Noise factor     Humidity           E         Low          High
5. Conducting The Experiment
The experiment is conducted in 8 experimental runs. For each run, i.e. each combination of control factors, two repetitions are performed: the first at the first level of the noise factor and the second at the second level. Since there are two repetitions per run, there are two responses for each run, as presented in columns 9 and 10 of Table 2; the mean response for each run is then calculated and presented in column 11. Because the objective of the experiment is to design the parameter settings robustly, the mean responses (column 11) must be converted into S/N ratios. Since the crimping diameter is a nominal-the-best characteristic, the corresponding S/N ratio is calculated and presented in column 12 of Table 2.
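For reference, one common form of the nominal-the-best S/N ratio is S/N = 10 * log10(ybar^2 / s^2), where ybar and s^2 are the mean and variance of the repetitions of one run. The short sketch below (ours) reproduces the S/N values of Table 2 from the raw responses.

```python
# Sketch of the S/N conversion for a nominal-the-best characteristic.
import math

def sn_nominal_the_best(responses):
    n = len(responses)
    ybar = sum(responses) / n
    s2 = sum((y - ybar) ** 2 for y in responses) / (n - 1)   # sample variance
    return 10 * math.log10(ybar ** 2 / s2)

print(round(sn_nominal_the_best([10.40, 10.30]), 2))   # run 1 of Table 2 -> 43.31
print(round(sn_nominal_the_best([10.45, 10.50]), 2))   # run 2 of Table 2 -> 49.43
```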

Table 2. Responses of the experiment

Run   Inner array (control factors)      Outer array (noise factor E)   Mean       S/N
      A   B   AxB  C   AxC  BxC  D       Resp. 1        Resp. 2         response   ratio
1     1   1   1    1   1    1    1       10.40          10.30           10.350     43.31
2     1   1   1    2   2    2    2       10.45          10.50           10.475     49.43
3     1   2   2    1   1    2    2       10.55          10.45           10.500     43.43
4     1   2   2    2   2    1    1       10.65          10.40           10.525     35.50
5     2   1   2    1   2    1    2       10.55          10.60           10.575     49.53
6     2   1   2    2   1    2    1       10.50          10.45           10.475     49.43
7     2   2   1    1   2    2    1       10.45          10.35           10.400     43.35
8     2   2   1    2   1    1    2       10.55          10.65           10.600     43.52
To determine the effect of each factor at each level, a response table is developed. This is done by grouping the S/N ratios by factor level for each column of the array, summing them and dividing by the number of responses. For example, the effect of factor A at level 1 (A1) is the average of the S/N ratios from runs 1, 2, 3 and 4, while the effect of factor B at level 2 (B2) is the average of the S/N ratios from runs 3, 4, 7 and 8. The overall result is presented in Table 3. Based on the average response computed for each factor and interaction, a mean response graph is constructed (Figure 1). The absolute difference between the two level averages is the effect of the factor or interaction. Based on Table 3, the proposed robust settings of the Crimping 2 process parameters are: electric current at the 1st level (60 A), pressure at the 2nd level (40 MPa), voltage at the 2nd level (9 Volt), and heating time at either the 1st level (5 sec) or the 2nd level (3 sec). A sketch of the response-table calculation is given below.
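The sketch below (ours) reproduces the response-table construction from the S/N ratios of Table 2; small differences from Table 3 come from rounding of the tabulated S/N values.

```python
# Average the S/N ratios of the runs at each level of every column, then rank the
# columns by the absolute difference between the two level averages.
sn = [43.31, 49.43, 43.43, 35.50, 49.53, 49.43, 43.35, 43.52]      # S/N per run (Table 2)
L8 = [[1,1,1,1,1,1,1], [1,1,1,2,2,2,2], [1,2,2,1,1,2,2], [1,2,2,2,2,1,1],
      [2,1,2,1,2,1,2], [2,1,2,2,1,2,1], [2,2,1,1,2,2,1], [2,2,1,2,1,1,2]]
columns = ["A", "B", "AxB", "C", "AxC", "BxC", "D"]

response_table = {}
for j, name in enumerate(columns):
    lvl1 = [s for s, row in zip(sn, L8) if row[j] == 1]
    lvl2 = [s for s, row in zip(sn, L8) if row[j] == 2]
    m1, m2 = sum(lvl1) / len(lvl1), sum(lvl2) / len(lvl2)
    response_table[name] = (m1, m2, abs(m1 - m2))

for rank, (name, (m1, m2, eff)) in enumerate(
        sorted(response_table.items(), key=lambda kv: -kv[1][2]), start=1):
    print(f"rank {rank}: {name:4s} level1={m1:7.3f} level2={m2:7.3f} effect={eff:6.3f}")
# factor B has the largest effect, followed by D, A and BxC, matching the ranks in Table 3
```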
Table 3. Response table of the factor effects

Level     A        B        AxB      C        AxC      BxC      D
Level 1   42.918   47.923   44.903   44.903   44.923   42.959   42.897
Level 2   46.454   41.449   44.470   44.470   44.449   46.413   46.475
Rank      3        1        7        6        5        4        2






Figure 1. Factor effect plots based on the S/N ratio: (a) factor A, (b) factor B, (c) interaction AxB, (d) factor C, (e) interaction AxC, (f) interaction BxC, (g) factor D (S/N ratio versus factor level).
6. Analysis Of Variance
Analysis of variance is conducted to examine the effect of the factors under study. It begins with the calculation of the sums of squares and the mean squares of the S/N ratios. A pooling-up approach is used to identify the significant factors: pooling starts from the factor or interaction with the smallest sum of squares. Column 3 of Table 4 shows that the smallest sum of squares is 0.37 and belongs to the interaction AxB, so the sum of squares of AxB is combined with the error sum of squares. The sums of squares, mean squares and F ratios are then recalculated for all factors except those already pooled (first the AxB interaction). For this case, pooling up is performed three times, and the results are summarized in Table 4. The sum-of-squares calculation is sketched below.
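The sketch below (ours) shows the sum-of-squares calculation behind Table 4, using the usual two-level orthogonal-array formula SS = (T1 - T2)^2 / N, where T1 and T2 are the totals of the S/N ratios at levels 1 and 2 of a column and N = 8 runs; values agree with Table 4 up to rounding of the tabulated S/N ratios.

```python
# Sum of squares per column of the L8 array, followed by pooling of the smallest terms.
sn = [43.31, 49.43, 43.43, 35.50, 49.53, 49.43, 43.35, 43.52]      # S/N per run (Table 2)
L8 = [[1,1,1,1,1,1,1], [1,1,1,2,2,2,2], [1,2,2,1,1,2,2], [1,2,2,2,2,1,1],
      [2,1,2,1,2,1,2], [2,1,2,2,1,2,1], [2,2,1,1,2,2,1], [2,2,1,2,1,1,2]]
columns = ["A", "B", "AxB", "C", "AxC", "BxC", "D"]

ss = {}
for j, name in enumerate(columns):
    t1 = sum(s for s, row in zip(sn, L8) if row[j] == 1)
    t2 = sum(s for s, row in zip(sn, L8) if row[j] == 2)
    ss[name] = (t1 - t2) ** 2 / len(sn)

for name, value in ss.items():
    print(f"SS_{name} = {value:6.2f}")                  # close to column 3 of Table 4

pooled_error = sum(v for v in ss.values() if v < 1.0)   # AxB, C and AxC get pooled
print(f"pooled error SS after the third pooling = {pooled_error:.2f}")   # about 1.2
```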
Table 4. Analysis of variance of the S/N ratios

                             Pooling I                 Pooling II                Pooling III
Factor   v   SS      MS      SS      MS      F         SS      MS      F         SS      MS      F
A        1   25.01   25.01   25.01   25.01   66.82     25.01   25.01   66.82     25.01   25.01   62.61
B        1   83.83   83.83   83.83   83.83   223.9     83.83   83.83   223.9     83.83   83.83   209.86
AxB      1   0.37    0.37    (pooled in Pooling I)
C        1   0.37    0.37    0.37    0.37    1.00      (pooled in Pooling II)
AxC      1   0.45    0.45    0.45    0.45    1.20      0.45    0.45    1.20      (pooled in Pooling III)
BxC      1   23.86   23.86   23.86   23.86   63.75     23.86   23.86   63.75     23.86   23.86   59.73
D        1   25.60   25.60   25.60   25.60   68.40     25.60   25.60   68.40     25.60   25.60   64.90
Error    0   0.37    0.37    0.37    0.37               0.75    0.37               1.20    0.40
Total    7   159.5           159.5                      159.5                     159.5

Using a 90% confidence level, the value of the F table, F(0.10; 1, 1), is 39.9. This means that, after pooling the AxB interaction, factor A has a significant influence on the crimping diameter. The process is repeated to assess the influence of all factors. The end result for each pooling step is summarized in Table 5, while the percentage contribution of each factor and interaction is presented in Table 6. It is concluded that factors A, B, D and the interaction BxC have a significant effect. The proposed parameter settings are as follows: electric current is set at the first level (60 A), pressure at the second level (40 MPa), voltage at the second level (9 V) and heating time at the first level (5 sec). A number of units built according to the recommended settings should be tested to confirm the result. For that purpose, 25 observations with a subgroup size of four were taken. The same calculation performed on the new responses gives a mean of 10.548, a variance of 0.016 and a probability of nonconforming units of almost zero. The confirmation results are compared against the initial condition, which has a mean of 10.477, a standard deviation of 0.051 and a probability of nonconforming units of 96.4%. This means that the proposed parameter settings have significantly improved the quality performance.
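For illustration only: the fraction of nonconforming units follows from the process mean, standard deviation and specification limits under a normality assumption. The paper does not state the crimping-diameter specification limits, so the limits and process states in the sketch below are hypothetical and do not reproduce the 96.4% and near-zero figures quoted above.

```python
# Sketch of a nonconforming-probability calculation with hypothetical spec limits.
from statistics import NormalDist

def prob_nonconforming(mean, std, lsl, usl):
    """Two-sided fraction outside [lsl, usl] for a normally distributed response."""
    dist = NormalDist(mean, std)
    return 1.0 - (dist.cdf(usl) - dist.cdf(lsl))

LSL, USL = 10.40, 10.60   # hypothetical spec of 10.50 +/- 0.10 mm
print(f"{prob_nonconforming(10.48, 0.050, LSL, USL):.2%}")   # off-centre, wide spread
print(f"{prob_nonconforming(10.50, 0.013, LSL, USL):.4%}")   # centred, tight -> near zero
```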
Table 5. Tests of hypotheses on the factors

             Pooling 1                 Pooling 2                 Pooling 3
Factor   F-table   F-calc   Dec.   F-table   F-calc   Dec.   F-table   F-calc   Dec.
A        39.9      66.8     Rej.   8.5       66.8     Rej.   8.5       62.6     Rej.
B                  224      Rej.             224      Rej.             209.9    Rej.
AxB      (pooled in Pooling I)
C                  1.0      Acc.   (pooled in Pooling II)
AxC                1.2      Acc.             1.2      Acc.   (pooled in Pooling III)
BxC                63.7     Rej.             63.7     Rej.             59.7     Rej.
D                  68.4     Rej.             68.4     Rej.             64.90    Rej.
Table 6. Percent contribution (r) of the significant factors

Factor   v   SS       MS      SS'     r (%)
A        1   25.01    25.01   24.61   15
B        1   83.83    83.83   83.43   52
BxC      1   23.86    23.86   23.46   15
D        1   25.60    25.60   25.20   16
Error    3   1.20     0.40
Total    7   159.50
7. Analysis of Robust Design
To confirm that the robust design performs better than a non-robust design, the same calculation is repeated, but based on the mean responses listed in column 11 of Table 2 instead of the S/N ratios. The proposed parameter settings for the non-robust design turn out to be slightly different, as shown in Table 7.
Table 7. Confirmation results of the robust design and the non-robust design

                      Control factors                            Confirmation result     Prob. of rejection (%)
                      A         B         C         D            Mean       Sigma
Robust design         Level 2   Level 1   Level 1   Level 2      10.55      0.033        0.0002
                      9 V       60 A      5 sec     40 MPa
Non-robust design     Level 2   Level 2   Level 2   Level 2      10.59      0.052        27.000
                      9 V       90 A      3 sec     40 MPa

It is clear that the robust design outperforms the non-robust design in quality performance: the probability of nonconforming units for the non-robust design is about 27 percentage points higher than for the robust design. Assuming a daily throughput of 700 units, the non-robust parameter settings would produce 189 nonconforming units, while the robust design would produce only one. Assuming that the cost of a scrapped unit is 210,000 and of a reworked unit 3,000, the daily saving from the proposed robust parameter settings is 24,300,000, or 680,400,000 per month.
After the improvement of the Crimping 2 process is completed, the next process to be improved, based on the Pareto diagram, is the Cutting process. After all processes of the product 2W Front Brake Hose All Type have been improved, the next product to be improved, based on the Pareto diagram, is the 2W Rear Brake Hose All Type, before moving on to the Machining, Casting and Stamping departments.
8. Conclusion
It is concluded that the proposed robust design of the parameter settings has improved quality significantly: the probability of nonconforming units was reduced from 96.4% to 0.0002%. The robust design also produced better quality performance than the non-robust design.
9. Reference

[1] Fowlkes, W.Y., Creveling, C.M., 1995. Engineering Methods for Robust Design Using Taguchi Methods in Technology and Product Development. Addison-Wesley Publishing Company.
[2] Mitra, A., 1998. Fundamentals of Quality Control and Improvement, 2nd edition. Prentice-Hall Inc.
[3] Peace, G.S., 1993. Taguchi Methods: A Hands-On Approach. Addison-Wesley Publishing Company.
[4] Ross, J.E., 1994. Total Quality Management: Text, Cases and Readings, 2nd edition. Kogan Page Limited, London.
[5] Ross, P.J., 1996. Taguchi Techniques for Quality Engineering, 2nd edition. McGraw-Hill.
[6] Taguchi, G., Elsayed, E.A., Hsiang, T.C., 1989. Quality Engineering in Production Systems. McGraw-Hill.

