
Hybrid Simulation Models

of Production Networks

Vassilis S. Kouikoglou
and

Yannis A. Phillis
Technical University of Crete
Chania, Greece

Springer Science+Business Media, LLC


Library of Congress Cataloging-in-Publication Data

Kouikoglou, Vassilis S., 1961-


Hybrid simulation models of production networks / Vassilis S. Kouikoglou and Yannis
A. Phillis.
p. cm.
Includes bibliographical references and index.
ISBN 978-1-4419-3363-8 ISBN 978-1-4757-5438-4 (eBook)
DOI 10.1007/978-1-4757-5438-4
1. Manufacturing processes-Mathematical models. I. Phillis, Yannis A., 1950- II.
Title.

TS183 .K69 2001


658.5-dc21
2001029439

ISBN 978-1-4419-3363-8

2001 Springer Science+Business Media New York


Originally published by Kluwer Academic/Plenum Publishers in 2001
Softcover reprint of the hardcover 1st edition 2001

http://www.wkap.nl/

10 9 8 7 6 5 4 3 2 1

A C.I.P. record for this book is available from the Library of Congress

All rights reserved

No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any
means, electronic, mechanical, photocopying, microfilming, recording, or otherwise, without written
permission from the Publisher.
PREFACE

Industrial production is one of the most basic human activities, indispensable to
economic activity. Owing to its complexity, production is not as well understood and
modeled as traditional fields of inquiry such as physics. This book aims at
enhancing rigorous understanding of a particular area of production, that of analysis and
optimization of production lines and networks using discrete event models and simulation.
To our knowledge, this is the first book treating this subject from the point of view
mentioned above. We have arrived at the realization that discrete event models and
simulation provide perhaps the best tools to model production lines and networks for a
number of reasons. Exact analysis is precise but demands enormous computational
resources, usually unavailable in practical situations. Brute-force simulation is also precise
but slow when quick decisions are to be made. Approximate analytical models are fast but often
unreliable as far as accuracy is concerned. The approach of the book, on the other hand,
combines speed and accuracy to an exceptional degree in most practical applications.
The book is mainly intended for graduate students or advanced seniors as well as
practitioners in industrial engineering and operations research. Researchers and academics
working in the field of production engineering may find useful ideas in the book. A senior or
graduate level course in simulation as well as basic probability would be a useful prerequisite.
We have taught part of the material of the book in undergraduate and graduate courses
in simulation and production networks at the Technical University of Crete.
Chapter 1 provides an overview of the field. Chapter 2 gives a brief exposure to
discrete event models and simulation needed in the subsequent development. Chapters 3
through 5 deal with detailed models for production lines and networks. Chapter 6 intro-
duces optimization issues such as repair and buffer allocation with the aid of the models of
the previous three chapters. Chapter 7 concludes.
We would like to express our gratitude to Nili Phillis who read the manuscript and
made a number of constructive comments.

Vassilis S. Kouikoglou
Yannis A. Phillis

CONTENTS

1. INTRODUCTION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1. Analytical Models of Production Networks . . . . . . . . . . . . . . . . . . . . . . . 1
1.2. Types of Production Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.2.1. Job Shops . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.2.2. Production Lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.2.3. Production Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.2.4. Buffers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.2.5. Material Handling Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.2.6. Discrete and Continuous Production . . . . . . . . . . . . . . . . . . . . . 9
1.3. Problems and Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.3.1. Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.3.2. Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.3.3. Information-Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.3.4. Performance Measures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.A1. Appendix: A Review of Probability Theory, Statistical Estimation, and
Stochastic Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.A1.1. Axioms of Probability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.A1.2. Conditional and Independent Events . . . . . . . . . . . . . . . . . . . . 12
1.A1.3. Random Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.A1.4. Expectation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.A1.5. Some Commonly Used Random Variables . . . . . . . . . . . . . . . . 20
1.A1.6. Estimation of Mean and Variance . . . . . . . . . . . . . . . . . . . . . . . 25
1.A1.7. Limit Theorems and Confidence Intervals for the Mean . . . . . 27
1.A1.8. Introduction to Stochastic Processes . . . . . . . . . . . . . . . . . . . . . 29
1.A1.9. Markov Chains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
1.A1.10. Discrete Time Markov Chains . . . . . . . . . . . . . . . . . . . . . . . . . 32
1.A1.11. Continuous Time Markov Chains . . . . . . . . . . . . . . . . . . . . . . . 34

2. FUNDAMENTALS OF SIMULATION MODELING . . . . . . . . . . . . . . . . . . 43


2.1. Systems Described by Differential or Difference Equations . . . . . . . . . . 44
2.2. Discrete Event Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
2.2.1. Conventional Simulation Models . . . . . . . . . . . . . . . . . . . . . . . 45
2.2.2. Hybrid Discrete Event Models . . . . . . . . . . . . . . . . . . . . . . . . . 49


2.3. Modeling Random Phenomena . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54


2.3.1. Random Number Generators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
2.3.2. Inverse Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
2.3.3. Acceptance-Rejection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
2.4. Determining the Number of Simulations . . . . . . . . . . . . . . . . . . . . . . . . . 62
2.5. Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63

3. TWO-MACHINE SYSTEMS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
3.1. System Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
3.2. Conventional Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
3.2.1. Discrete Event Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
3.2.2. Estimation of Performance Measures . . . . . . . . . . . . . . . . . . . . . . 71
3.3. Hybrid Model for Continuous Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
3.3.1. Comparison of Discrete Traffic and Continuous Flow . . . . . . . . . 73
3.3.2. Continuous Flow Model for Two Machines and One Buffer . . . . 76
3.4. Hybrid Model for Discrete Traffic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
3.4.1. Machine Event Scheduling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
3.4.2. Scheduling a Buffer-Full Event . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
3.4.3. Scheduling a Buffer-Empty Event . . . . . . . . . . . . . . . . . . . . . . . . . 86
3.4.4. Update Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
3.4.5. Event Driven State Adjustments . . . . . . . . . . . . . . . . . . . . . . . . . . 92
3.5. Numerical Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
3.6. Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97

4. PRODUCTION LINES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
4.1. Continuous Flow Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
4.2. Discrete Part Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
4.2.1. State Variables and Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
4.2.2. Event Scheduling of Starved-and-Blocked Machines . . . . . . . . . . 105
4.2.3. Simulation Model Logic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
4.3. Numerical Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
4.4. Extensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
4.4.1. Series-Parallel Configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
4.4.2. Variable Processing Times . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
4.4.3. Random Processing Times . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
4.5. Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
4.Al. Appendix: FORTRAN Code for Simulating Continuous Flow
Production Lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121

5. PRODUCTION NETWORKS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137


5.1. Acyclic Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
5.1.1. System Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
5.1.2. Continuous Flow Approximation . . . . . . . . . . . . . . . . . . . . . . . . . . 141
5.1.3. Continuous Flow Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144

5.2. State Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146


5.2.1. Update Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
5.2.2. Instantaneous Adjustment of State Variables . . . . . . . . . . . . . . . . . . 147
5.2.3. Scheduling of Next Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
5.3. Numerical Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
5.4. Algorithmic Deadlocks in Non-Acyclic Networks . . . . . . . . . . . . . . . . . . . 157
5.5. Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158

6. OPTIMIZATION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
6.1. Optimal Assignment of Repairmen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
6.2. Lot Scheduling Policies and Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
6.2.1. System and Control Policy Description . . . . . . . . . . . . . . . . . . . . . . 162
6.2.2. Hybrid Model and Performance Evaluation . . . . . . . . . . . . . . . . . . . 164
6.3. Perturbation Analysis and System Design . . . . . . . . . . . . . . . . . . . . . . . . . 167
6.3.1. Optimization with Equality Constraints . . . . . . . . . . . . . . . . . . . . . . 168
6.3.2. Allocation of Buffer Space and Repair Effort . . . . . . . . . . . . . . . . . 170
6.3.3. Infinitesimal Perturbation Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . 172
6.3.4. Numerical Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
6.4. Designing with Concave Costs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
6.4.1. Formulation of Optimization Problems . . . . . . . . . . . . . . . . . . . . . . 178
6.4.2. Solution Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
6.4.3. Numerical Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
6.5. Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186

7. CLOSURE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187

REFERENCES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189

APPENDIX A: STATISTICAL TABLES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191

INDEX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
1
INTRODUCTION

Production of goods is a process of transforming matter from one form into another.
This process together with the production of services is the basis of the economic activ-
ity. Production is a complex activity subject, among others, to the laws of physics and
human decisions. The latter determine to a good degree the efficacy of production.
A production system for this book comprises a number of machines interconnected
in arbitrary but given ways, performing certain operations according to a well-defined
protocol. The manager of such a system is interested in knowing its capacity, expected
throughput, possible bottlenecks, the effects of decisions such as repair and buffer alloca-
tion, and possibilities to improve performance. Such knowledge is not easy to acquire
technically or cheap economically, even for rather simple production systems. The devel-
opment of effective methodologies that aid decisions of the type how, when, where,
and what to produce is becoming a pressing necessity in an era of stiff competition.

1.1. ANALYTICAL MODELS OF PRODUCTION NETWORKS

The models we develop in this book deal with production networks. The simplest
element to be modeled is a machine M shown in Fig. 1.1. Raw parts enter M and after
being processed, they exit the system as finished products.

raw parts → [ M ] → products

Figure 1.1. One machine.

We assume that the processing times are deterministic and equal to 1/RM time units.
The quantity RM is called the nominal production rate of M. The machine is unreliable.
Upon completion of one part, the machine may fail and stop working with probability p


or survive and load the next part with probability 1 − p. If a breakdown occurs, M under-
goes repair during a period of 1/RM time units. At the end of this period, the machine is
repaired with probability r or it remains under repair for the next period, with the com-
plementary probability. Thus all the activities of interest start and end at times tk = k/RM,
k = 0, 1, .... We shall refer to p and r as the failure and repair probabilities, respectively.
An appropriate measure of performance for this simple production system is the ex-
pected throughput. This quantity is defined as the mean number of parts produced by the
system during one time unit. To find the expected throughput of M, one may model the
system as a Markov chain. Appendices 1.A1.9 through 1.A1.11 contain a review of Markov
chains. The problem is straightforward as the following example shows.

Example 1.1. The system of Fig. 1.1 can be modeled by a Markov chain with two
states, 1 = operational and 0 = under repair. The diagram of Fig. 1.2 depicts the transi-
tions between states.

Figure 1.2. Markov chain of machine M.

We then derive the equations describing the dynamics of the state probabilities over the
time points tk, k = 0, 1, ...,

Pk+1(0) = (1 − r) Pk(0) + p Pk(1)
Pk+1(1) = r Pk(0) + (1 − p) Pk(1)

where Pk(y), y = 0, 1, is the probability that the machine is in state y at time tk. The previ-
ous equations are written compactly

Pk+1 = A Pk

where

Pk ≜ [Pk(0)  Pk(1)]^T,    A ≜ | 1 − r    p   |
                              |   r    1 − p |

The solution to this linear equation is

Pk = A^k P0


It is known from the theory of Markov chains that as k → ∞ we reach a steady state
where Pk → P and P is the eigenvector of A corresponding to the eigenvalue 1, that is

P = A P

From the above and the fact that P(0) + P(1) = 1 (since the machine must be either up or
down), we obtain

P(0) = p / (r + p),    P(1) = r / (r + p)

During a long time t the machine will be up for a total of t P(1) time units. Since the
processing time of M is 1/RM time units, it will produce a total of t P(1) RM items. There-
fore, the expected throughput TH is given by

TH = t P(1) RM / t = RM r / (r + p)
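The closed-form result TH = RM r/(r + p) can be checked numerically. The following sketch, with illustrative values of p, r, and RM (not taken from the text), compares the formula against a direct simulation of the two-state chain:

```python
import random

# Illustrative parameters: failure prob., repair prob., nominal rate (parts/time unit)
p, r, RM = 0.1, 0.5, 2.0

# Steady-state probability of being operational and the expected throughput
P1 = r / (r + p)
TH = RM * P1                 # TH = RM r/(r + p) from Example 1.1

# Cross-check by simulating the chain over many cycles of length 1/RM
random.seed(0)
state, up_cycles, N = 1, 0, 200_000
for _ in range(N):
    up_cycles += state
    if state == 1:
        state = 0 if random.random() < p else 1   # fail with probability p
    else:
        state = 1 if random.random() < r else 0   # repaired with probability r
TH_sim = RM * up_cycles / N  # one part per up cycle; N cycles span N/RM time units

assert abs(TH_sim - TH) < 0.05 * TH
```

The simulated estimate agrees with the formula to within statistical error, a useful sanity check before trusting longer derivations.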

The Markov chain approach is very efficient in analyzing systems with a small num-
ber of states. However, as the complexity of the production system increases, the number
of states explodes and this approach quickly becomes inefficient. In order to illustrate the
explosion of the number of states, we consider the production line of Fig. 1.3, which has
two machines M1 and M2 connected serially and one intermediate buffer B with finite
storage space. The function of the buffer is to decouple the machines by providing empty
space for M1 and supplying M2 with parts. Let BC denote the maximum number of semi-
finished parts that can be stored between M1 and M2. Since each machine can hold one
piece at a time, BC equals the size of B plus two. In this section, BC, for simplicity, will
be referred to as the capacity of buffer B. We assume that an operational machine may
break down at the end of a cycle only if it is neither starved nor blocked. Thus, failures
are operation-dependent. Finally, we assume that the machines have fixed and equal cy-
cle times, a property that is typical of the class of systems known as synchronous produc-
tion systems.

raw parts → [ M1 ] → ( B ) → [ M2 ] → products

Figure 1.3. Two machines, one buffer.

Example 1.2. Consider a synchronous, two-stage production system with BC = 2.
The state of the system in equilibrium is represented by the triplet (y1, z, y2), where yi is
the state of Mi, i = 1, 2, and z is the number of intermediate parts that have already been
processed by M1 and wait for or are being processed by M2. Hence, a part that is blocked
in M1 is included in z whereas a part that has just entered M1 is not. Specifically, when
z = 0 machine M2 is starved, when z = BC = 2 machine M1 is blocked, and when z = 1 M1
is not blocked and M2 is not starved. Obviously, for any fixed z we have 4 states for the
machines. The states of the system are

(0, 0, 0)  (1, 0, 0)  (0, 0, 1)  (1, 0, 1)

(0, 1, 0)  (1, 1, 0)  (0, 1, 1)  (1, 1, 1)

(0, 2, 0)  (1, 2, 0)  (0, 2, 1)  (1, 2, 1)

Let pi and ri be the failure and repair probabilities, respectively, of machine Mi, i = 1, 2,
over one production cycle. Since the complete state diagram of the corresponding
Markov chain is very complex, it is divided into three simpler diagrams that are presented
in Figs. 1.4 through 1.6.

Figure 1.4. Transitions from states (y1, 1, y2).


Figure 1.5. Transitions from states (y1, 0, y2).

Figure 1.6. Transitions from states (y1, 2, y2).

Figure 1.4 depicts the transitions from states with z = 1, that is, states of the form
(y1, 1, y2), where y1 = 0, 1 and y2 = 0, 1. In these states the machines are neither starved

nor blocked. Let yi,k denote the state of Mi and zk the number of intermediate parts at time
tk. The transitions among various states are determined by the following rules:
The number of intermediate parts at time tk+1 is given by

zk+1 = zk + y1,k − y2,k

Machine Mi does not change its state at time tk+1 with probability 1 − pi if Mi is
up or 1 − ri if it is down. This can be written compactly as

P(yi,k+1 = yi,k | zk = 1) = (1 − pi)^yi,k (1 − ri)^(1 − yi,k)

The state of Mi changes at time tk+1 with probability pi if Mi is up or ri if it is
down. Thus,

P(yi,k+1 = 1 − yi,k | zk = 1) = pi^yi,k ri^(1 − yi,k)

State transitions of M1 and M2 are independent of each other and are determined
by the last two rules. For example, the probability that both machines remain up
at tk+1 is given by

P(y1,k+1 = 1, y2,k+1 = 1 | y1,k = 1, zk = 1, y2,k = 1) = (1 − p1)(1 − p2)

The reader can easily verify that the transitions of Fig. 1.4 are in accordance with the
above rules.
Figures 1.5 and 1.6 correspond to the boundary states z = 0 and z = 2. Figure 1.5
shows the transitions from states (y1, 0, y2). For these states we apply the rules of case
z = 1, except that machine M2 is now starved and, by the assumption of operation-
dependent failures, it cannot break down. Finally, Fig. 1.6 depicts the transitions from
states (y1, 2, y2) in which machine M1 is blocked and, therefore, it cannot break down.
By defining p̄i ≜ 1 − pi and r̄i ≜ 1 − ri, the equations for the state probabilities become

Pk+1 = A Pk

where

Pk ≜ [Pk(0, 0, 0)  Pk(1, 0, 0)  Pk(0, 0, 1)  Pk(1, 0, 1)  Pk(0, 1, 0)  ...  Pk(1, 2, 1)]^T

(T denotes matrix transpose) and



      | r̄1r̄2   0      0    0    0      0      r̄1p2   0      0      0    0      0   |
      | r1r̄2   0      0    0    0      0      r1p2   0      0      0    0      0   |
      | r̄1r2   0      r̄1   0    0      0      r̄1p̄2   0      0      0    0      0   |
      | r1r2   0      r1   0    0      0      r1p̄2   0      0      0    0      0   |
      | 0      p1r̄2   0    0    r̄1r̄2   0      0      p1p2   0      0    r̄1p2   0   |
A ≜   | 0      p̄1r̄2   0    0    r1r̄2   0      0      p̄1p2   0      0    r1p2   p2  |
      | 0      p1r2   0    p1   r̄1r2   0      0      p1p̄2   0      0    r̄1p̄2   0   |
      | 0      p̄1r2   0    p̄1   r1r2   0      0      p̄1p̄2   0      0    r1p̄2   p̄2  |
      | 0      0      0    0    0      p1r̄2   0      0      r̄1r̄2   0    0      0   |
      | 0      0      0    0    0      p̄1r̄2   0      0      r1r̄2   r̄2   0      0   |
      | 0      0      0    0    0      p1r2   0      0      r̄1r2   0    0      0   |
      | 0      0      0    0    0      p̄1r2   0      0      r1r2   r2   0      0   |

From Examples 1.1 and 1.2, we see that the dimension of the matrix A increases fast
even for the simple system of Fig. 1.3. If the buffer capacity is BC, the states of the line
are

(0, 0, 0)  (1, 0, 0)  (0, 0, 1)  (1, 0, 1)

...  (0, z, 0)  (1, z, 0)  (0, z, 1)  (1, z, 1)  ...

(0, BC, 0)  (1, BC, 0)  (0, BC, 1)  (1, BC, 1)

and the dimension of the matrix A is 4×(BC + 1).
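Rather than writing the matrix by hand, A can be generated directly from the three transition rules. The sketch below (with illustrative, hypothetical values for pi and ri; the helper name `machine_step` is also an assumption) builds A for Example 1.2 and extracts the equilibrium distribution as the eigenvector for eigenvalue 1:

```python
import numpy as np

# Illustrative failure/repair probabilities (not taken from the text)
p = [0.1, 0.2]   # p1, p2
r = [0.5, 0.4]   # r1, r2
BC = 2

# States (y1, z, y2) in the ordering used for Pk above
states = [(y1, z, y2) for z in range(BC + 1) for y2 in (0, 1) for y1 in (0, 1)]
idx = {s: k for k, s in enumerate(states)}

def machine_step(y, working, pf, pr):
    """Next-state distribution of one machine; failures are operation-dependent."""
    if y == 1:
        return [(0, pf), (1, 1 - pf)] if working else [(1, 1.0)]
    return [(1, pr), (0, 1 - pr)]

A = np.zeros((len(states), len(states)))
for (y1, z, y2) in states:
    w1 = int(y1 == 1 and z < BC)   # M1 processes only if up and not blocked
    w2 = int(y2 == 1 and z > 0)    # M2 processes only if up and not starved
    zn = z + w1 - w2               # buffer update rule
    for n1, q1 in machine_step(y1, w1, p[0], r[0]):
        for n2, q2 in machine_step(y2, w2, p[1], r[1]):
            A[idx[(n1, zn, n2)], idx[(y1, z, y2)]] += q1 * q2

assert A.shape == (12, 12)
assert np.allclose(A.sum(axis=0), 1.0)   # each column is a probability distribution

# Equilibrium distribution: eigenvector of A for eigenvalue 1, normalized to sum 1
w, v = np.linalg.eig(A)
P = np.real(v[:, np.argmin(np.abs(w - 1))])
P = P / P.sum()
assert np.allclose(A @ P, P)
```

The same loop works for any BC, which makes the state-space growth below easy to observe directly.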


The generalization of the above is straightforward. In a serial system with n ma-
chines, each with two states, 1 and 0, and n − 1 intermediate buffers each with capacity
BCj, j = 1, 2, ..., n − 1, the number of states is

2^n ∏(j=1..n−1) (BCj + 1)

For a realistic system of n = 20 machines and 19 buffers with capacity BCj = 20, j = 1, 2,
..., 19, the number of states is about 1.39 × 10^31. The computational requirements for
calculating the equilibrium probabilities of this system are beyond the capabilities of to-
day's computers. This suggests that Markov chains are good only for problems of very
small dimensions.
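The state count is easy to evaluate; a short sketch (the helper `num_states` is a hypothetical name):

```python
from math import prod

def num_states(n, buffer_caps):
    """States of a serial line: 2 states per machine times (BCj + 1) buffer levels."""
    return 2 ** n * prod(bc + 1 for bc in buffer_caps)

assert num_states(2, [2]) == 12                 # the 12 states of Example 1.2
print(f"{num_states(20, [20] * 19):.2e}")       # about 1.39e+31, matching the text
```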
Simulation is another alternative but also costly since it cannot avoid examining all
states. The approach of this book avoids the inefficiencies of Markov chains or simula-
tion by examining a small number of states, essential to solving the problem. Even then,
we are occasionally obliged to make certain approximating assumptions, to save compu-
tational time at a minute cost to precision.

1.2. TYPES OF PRODUCTION SYSTEMS

In this section, we present a classification of the most common types of production


systems and describe their basic components.

1.2.1. Job Shops

A job shop is a production system with a given number of machines and workers ca-
pable of performing operations on different jobs with possibly different processing time
requirements. Hence, in job shops production can be asynchronous. The variety of jobs is
rather large whereas the production volume is usually small. A machine shop or a con-
struction process may be modeled as a job shop.

1.2.2. Production Lines

A production line produces high volumes of a small number of products. All jobs
follow the same serial route from machine to machine. The material handling system is
quite simple given the inflexibility of the line. Such a line is also called a flow line.
If all activities of interest, namely, the processing of items and the repair of failed
machines, start and end at the same times for all machines, then the flow line is synchro-
nous. Unlike job shops, in which the processing times at each machine vary from one
operation to another, the machines of a synchronous flow line are usually unmanned and
the operations are performed by automatic equipment. Examples 1.1 and 1.2 concern
synchronous flow lines. Furthermore, if the movement of workparts is synchronized so
that all machines begin operation at the same time and a stoppage of any machine forces
down all other machines, then the line is called a transfer line.
The control or design parameters of a production line are limited. Yet, in an era of
increasing competition any improvement in productivity is not only desirable but also
necessary. Thus, questions about the distribution of storage space between machines or
the distribution of repair resources need rational answers in order to enhance productivity
given the constraints of each individual situation. Such questions are difficult to answer
due to analytical complexities. We shall examine these problems in detail in the follow-
ing chapters.

1.2.3. Production Networks

A production network consists of machines interconnected in arbitrary but given


ways. Parts visit the machines according to known protocols. A number of production
streams may converge at the entrance of a machine or diverge from its exit, where as-
sembly or disassembly may also occur. Parts may return to segments of the network, thus
forming feedback loops.

1.2.4. Buffers

In a production network without storage space, when a machine breaks down, it can-
not produce and the other machines are also forced to stop production either immediately,
as in transfer lines, or after a few production cycles. This happens because the machines
that follow the one that is down (also called the downstream machines) do not have parts
to work on and become starved, whereas the preceding or upstream machines cannot re-
lease their parts to the failed machine and become blocked.
To avoid shocks or delays of production, storage spaces are introduced between ma-
chines. These spaces are called buffers and hold semi-finished parts. They provide space
for products by operational upstream machines when the downstream ones are not opera-
tional and, dually, they operate as sources of parts to downstream machines when the
upstream ones are not operating. Buffers also absorb shocks when production rates of
adjacent machines differ. Since buffers act as inventories, they impose costs on produc-
tion and the question to be answered is what the optimum storage size should be to
maximize profit.

1.2.5. Material Handling Systems

Transportation of parts from machine to machine as well as storage of unfinished or


finished items is performed by the material handling system. This system may consist of
conveyor belts, carts, or pallets together with a robotic mechanism for the movement of
the parts.

1.2.6. Discrete and Continuous Production

The most obvious type of production involves discrete parts. Continuous production,
however, is not at all uncommon. Examples abound as in the case of refineries, beverage
or chemical industries. As we shall see, continuous production models offer excellent
approximations of discrete production under certain conditions.

1.3. PROBLEMS AND ISSUES

Questions of the type "how much is produced," "how much buffer space," or "which
repair resources," which were answered in the past by trial and error or just intuitively,
nowadays ought to be answered systematically in order to enhance performance.

1.3.1. Analysis

Analysis of a production network entails the computation of a number of quantities


related to the performance of the network. Such quantities are the average throughput,
cycle time, and buffer levels, among others. Analysis may acquire a more general scope
by observing the traffic of parts at certain nodes, possible bottlenecks and so on.
The main weakness of Markov chains or simulation is that they examine a tremen-
dous number of states, spending a lot of time on unimportant computations. The central
idea of this book is that this effort can be reduced enormously by visiting only a limited
number of states. These states are: machine up or down, and buffer full, empty, or par-
tially full. During operation between two states the system runs deterministically. When a
new event occurs, the network is updated and adjusted to the new situation.
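To make this idea concrete, the following deliberately simplified sketch simulates a two-machine continuous-flow line by jumping from event to event: between events the buffer level changes at a constant net rate, and only machine failures/repairs and buffer-full/empty events are visited. All parameters and the exponential up/down times are illustrative assumptions, not the book's detailed models (which are developed in Chapters 3 through 5):

```python
import random

random.seed(1)
R = [2.0, 1.5]                     # nominal rates of M1, M2 (illustrative)
C = 10.0                           # buffer capacity (illustrative)
lam = [[0.05, 0.5], [0.04, 0.5]]   # [failure rate, repair rate] of each machine

t, T_end, x, produced = 0.0, 100_000.0, 0.0, 0.0
up = [True, True]
nxt = [random.expovariate(lam[i][0]) for i in range(2)]  # next up/down toggles

while t < T_end:
    r1 = R[0] if up[0] else 0.0
    r2 = R[1] if up[1] else 0.0
    if x >= C and r1 > r2:
        r1 = r2                    # M1 blocked: its flow slows to M2's rate
    if x <= 0.0 and r2 > r1:
        r2 = r1                    # M2 starved: its flow slows to M1's rate
    net = r1 - r2
    # Time until the buffer hits full or empty at the current net rate
    t_buf = (C - x) / net if net > 0 else (x / -net if net < 0 else float('inf'))
    t_next = min(nxt[0], nxt[1], t + t_buf, T_end)
    x += net * (t_next - t)        # piecewise-deterministic evolution
    produced += r2 * (t_next - t)
    t = t_next
    x = min(max(x, 0.0), C)        # guard against round-off at the boundaries
    for i in range(2):             # machine event: toggle state, schedule the next
        if nxt[i] <= t:
            up[i] = not up[i]
            nxt[i] = t + random.expovariate(lam[i][0 if up[i] else 1])

TH = produced / T_end              # long-run throughput estimate
```

Only a handful of events is processed per unit time, regardless of how finely parts could be counted, which is the source of the speedup claimed above.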

When the production rates are random, a piecewise deterministic approximation is


used which is quite effective in most practical cases.

1.3.2. Design

Buffer space or repair allocations are two of the most important problems when one
designs a production network. The analytical tools we develop here are powerful when
the optimal design is sought. Central to the computation of this design is the estimation of
gradients of a given performance measure with respect to the design parameters.
The models presented in this book are suited to work together with mathematical
programming methods and perturbation analysis. The former are optimization procedures
whereas the latter is a method of estimating vector gradients from one simulation run.
With them, any practical network can be analyzed and designed on a modern PC in just a
few seconds or minutes. Thus the tools of the book are of considerable practical value.

1.3.3. Information - Data

Any model is at most as good as its data. The information needed to use the models
we develop in the following chapters is mostly of the statistical type. Knowledge of the
statistics of customer arrivals, processing times, breakdowns, and repairs is assumed.
Such statistics may be obtained in a straightforward manner, but in practice the produc-
tion environment is susceptible to changes and quick adaptations according to need. An
effort, therefore, should be made to obtain reliable data. Such data ought to be updated as
more and more information about the production network is gathered.

1.3.4. Performance Measures

The general philosophy of performance is that of maximum profit or minimum cost


given the constraints at hand. A number of operational indices are computed, which are
directly or indirectly related to profit or cost. These are: throughput, mean level and vari-
ance of buffers, mean time in the system (also known as the cycle time), utilization of
machines and so on.

1.A1. APPENDIX: A REVIEW OF PROBABILITY THEORY, STATISTICAL


ESTIMATION, AND STOCHASTIC PROCESSES

1.A1.1. Axioms of Probability

Probability theory deals with models of physical phenomena, experiments, games,


and processes whose outcomes depend on chance. The starting point of probability theory
is a random experiment and all its simple (irreducible) outcomes forming a set called the
sample space Ω. Examples of random experiments include tossing a coin, rolling a die,
recording the times between failures of a machine, counting the number of items a ma-
chine produces during a given period, etc. An event A is a subset of the sample space. For

the die example, the sample space is Ω = {1, 2, 3, 4, 5, 6} and A could be the event "the
outcome is odd," which corresponds to the set {1, 3, 5}. We say that event A occurs when
the outcome of the experiment belongs to A.
Many physical phenomena and applications often involve several experiments occurring simultaneously or sequentially. In order to model such phenomena, one defines a combined experiment whose sample space is the Cartesian product of the sample spaces that correspond to the simple experiments. For example, consider the experiment of rolling a pair of dice. Then the sample space is

Ω × Ω = {(1, 1), (1, 2), ..., (6, 6)}

where Ω is the sample space of the simple die.
For any event A we define the event A^c, called the complement of A, to be the set of all outcomes of the sample space Ω that are not in A. The empty set, ∅, is defined to be the complement of Ω. This event cannot occur because it does not contain any outcomes.
By the difference A − B between any two events A and B, in that order, we mean the event comprising all outcomes of A that are not in B.
The set A ∪ B is the union of A and B and is the event that occurs whenever either A or B or both events occur.
Dual to the union is the intersection A ∩ B of A and B, which is the event that occurs whenever both events occur.
If A ∩ B = ∅ then A and B cannot occur simultaneously and they are said to be mutually exclusive. If ∪_{i=1}^{n} A_i = Ω then the events A_1, A_2, ..., A_n are said to be exhaustive.
Not every subset of Ω is in general an interesting event. We may not have information about some events, we may not be interested in some of them, or we may not be able to assign probabilities to all of them. The events of interest form a σ-field or Borel field ℱ of subsets of Ω, namely a non-empty collection of sets such that:
(1) If A ∈ ℱ then A^c ∈ ℱ
(2) For any countable collection of events A_1, A_2, ... ∈ ℱ, the event ∪_{i=1}^{∞} A_i belongs to ℱ

For the die example, the class ℱ_1 = {∅, Ω, {1, 3, 5}, {2, 4, 6}} is a field, but the class ℱ_2 = ℱ_1 ∪ {{1}} = {∅, Ω, {1, 3, 5}, {2, 4, 6}, {1}} is not a field because the set {1, 2, 4, 6}, which is the union of {1} and {2, 4, 6}, does not belong to ℱ_2.
A probability space is a triple {Ω, ℱ, P} where P is a set function, called a probability measure on ℱ, that maps ℱ into [0, 1] according to the following axioms.

Axioms of probability:
(1) For any event A ∈ ℱ, P(A) ≥ 0
(2) P(Ω) = 1
(3) For any countable collection of mutually exclusive events A_1, A_2, ..., in ℱ,

P(∪_{i=1}^{∞} A_i) = ∑_{i=1}^{∞} P(A_i)

These axioms combined with properties of the set operations defined previously can be used to prove several results about probabilities. For example, any two events A and B can be represented as the following unions of mutually exclusive events

A = (A − B) ∪ (A ∩ B)

B = (B − A) ∪ (A ∩ B)

which, in view of Axiom 3, yield

P(A − B) = P(A) − P(A ∩ B)

P(B − A) = P(B) − P(A ∩ B)

Similarly, A ∪ B can be written in the form

A ∪ B = (A − B) ∪ (B − A) ∪ (A ∩ B)

from which we obtain

P(A ∪ B) = P(A − B) + P(B − A) + P(A ∩ B)

= P(A) + P(B) − P(A ∩ B)

Another useful formula can be derived by observing that the events A and A^c are mutually exclusive and exhaustive. Then from Axioms 2 and 3 we obtain

P(A) + P(A^c) = P(Ω) = 1,   that is,   P(A^c) = 1 − P(A)

Finally, combining Axiom 1 and the previous equation yields

P(A) ≤ 1
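These identities are easy to verify by direct enumeration. The following sketch (in Python; the only assumption is a fair die, so that every outcome of Ω has probability 1/6) checks P(A ∪ B) = P(A) + P(B) − P(A ∩ B) and P(A^c) = 1 − P(A) on the die sample space:

```python
from fractions import Fraction

# Sample space of a fair die; every outcome is assigned probability 1/6.
omega = {1, 2, 3, 4, 5, 6}

def P(event):
    """Probability measure for equally likely outcomes."""
    return Fraction(len(event), len(omega))

A = {1, 3, 5}   # "the outcome is odd"
B = {1, 2}      # "the outcome is smaller than 3"

p_union = P(A | B)                     # P(A U B)
p_incl_excl = P(A) + P(B) - P(A & B)   # P(A) + P(B) - P(A n B)
p_complement = P(omega - A)            # P(A^c)

assert p_union == p_incl_excl
assert p_complement == 1 - P(A)
```

Exact rational arithmetic is used so that the set identities hold with equality rather than to floating-point tolerance.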

1.A1.2. Conditional and Independent Events

Suppose that for some random trial we are informed that event B has occurred. What is the probability that event A has also occurred? For the die example, B could be "the outcome is odd," which corresponds to the set {1, 3, 5}, and A could be the event "the outcome is smaller than 3," which corresponds to the set {1, 2}. Of course, since B has occurred we must have P(B) > 0. This conditional event is referred to as "A given that B has occurred" and it is denoted by A | B. The corresponding conditional probability of A is defined by

P(A | B) = P(A ∩ B) / P(B)

for P(B) > 0.
In the above example, we have A ∩ B = {1}, P(A ∩ B) = 1/6, and P(B) = 3/6, and thus P(A | B) = 1/3.
In general, the probability of the unconditional event A differs from the probability of the conditional event A given that B has occurred. However, if

P(A ∩ B) = P(A) P(B)

then by the definition of P(A | B) we have

P(A | B) = P(A)

This implies that A is independent of B. Also, for P(A) > 0, it immediately follows that

P(B | A) = P(B)

which implies that B is also independent of A.
Hence if P(A ∩ B) = P(A) P(B), then A and B are independent.
Three events A, B, and C are said to be independent if

P(A ∩ B ∩ C) = P(A) P(B) P(C)

and in addition they are pairwise independent.


Consider n mutually exclusive and exhaustive events A_j such that P(A_j) > 0, j = 1, 2, ..., n, and an arbitrary event B. The next two theorems facilitate computations with conditional probabilities.

Theorem of total probability:

P(B) = ∑_{i=1}^{n} P(B ∩ A_i) = ∑_{i=1}^{n} P(B | A_i) P(A_i)

Bayes Theorem: For every event A_j, j = 1, 2, ..., n,

P(A_j | B) = P(B | A_j) P(A_j) / ∑_{i=1}^{n} P(B | A_i) P(A_i)
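The two theorems can be applied numerically. The sketch below uses a hypothetical two-machine shop (the production shares and defect rates are invented for the example): the A_j are the mutually exclusive and exhaustive events "the part came from machine j," and B is "the part is defective."

```python
# Hypothetical shop: machine A1 makes 60% of the parts, A2 the remaining 40%;
# the defect rates P(B | A_j) are assumed values for illustration.
prior = {"A1": 0.6, "A2": 0.4}             # P(A_j)
p_defect_given = {"A1": 0.02, "A2": 0.05}  # P(B | A_j)

# Theorem of total probability: P(B) = sum_j P(B | A_j) P(A_j)
p_defect = sum(p_defect_given[a] * prior[a] for a in prior)

# Bayes Theorem: P(A_j | B) = P(B | A_j) P(A_j) / P(B)
posterior = {a: p_defect_given[a] * prior[a] / p_defect for a in prior}

# The posterior probabilities of an exhaustive family sum to one.
assert abs(sum(posterior.values()) - 1.0) < 1e-12
```

Here a defective part most likely came from A2 even though A2 makes fewer parts, because its defect rate is higher.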

1.A1.3. Random Variables

Given a probability space {Ω, ℱ, P}, a real random variable or simply random variable is a function X(ω) mapping Ω into the real line ℝ such that
(1) X(ω) is a measurable function relative to the σ-field ℱ, that is, for any x ∈ ℝ, the set {ω: X(ω) ≤ x} is an event.
(2) P[X(ω) = ∞] = P[X(ω) = −∞] = 0.
The distribution function F(x) of a random variable X(ω) is the probability of the event {ω: X(ω) ≤ x}, for any x ∈ ℝ. For simplicity, we omit ω and denote the random variable by X and the event {ω: X(ω) ≤ x} by {X ≤ x}; thus

F(x) = P(X ≤ x)

The distribution function satisfies F(−∞) = 0, F(∞) = 1. In addition, it is nondecreasing and continuous from the right, that is, for ε > 0, lim_{ε→0} F(x + ε) = F(x). If F is a distribution function and p ∈ (0, 1), the inverse of F, which is defined by

F⁻¹(p) = inf {x: F(x) ≥ p}

is called the pth quantile of F.
In general, a condition is said to hold with probability one or almost everywhere if there is a set E ∈ ℱ of probability measure 0 such that the condition holds for each outcome ω outside E, that is, ω ∈ Ω − E.
A random variable X is called continuous if its distribution function is continuous everywhere. The derivative

f(x) = dF(x)/dx

whenever it exists, is called the probability density function of X. Since F(x) is nondecreasing, f(x) is nonnegative. A random variable X is called absolutely continuous if a nonnegative function f(x) exists such that

F(x) = ∫_{−∞}^{x} f(t) dt   ∀ x ∈ ℝ

The distribution function of an absolutely continuous random variable is continuous everywhere and differentiable almost everywhere. Furthermore

∫_{−∞}^{∞} f(x) dx = 1

P(a ≤ X ≤ b) = F(b) − F(a) = ∫_{a}^{b} f(x) dx   ∀ a, b ∈ ℝ, a ≤ b

A random variable X is discrete if the set of possible values of X is finite or countably infinite, that is, X ∈ {x_1, x_2, ...}. Let p(x_i) be the probability that the discrete random variable X assumes the value x_i, that is

p(x_i) = P(X = x_i)

The function p(x) is called the probability mass function. Since the events X = x_i, i = 1, 2, ..., are mutually exclusive and exhaustive we must have

∑_{i=1}^{∞} p(x_i) = 1

The distribution function for the discrete random variable X is given by

F(x) = ∑_{x_i ≤ x} p(x_i)

As shown in Fig. 1.A1, F(x) has a staircase form with discontinuities at the points x_i.

Figure 1.A1. Distribution function of a discrete random variable.
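A minimal sketch of this staircase behavior, for an assumed mass function p(x_i):

```python
# Assumed probability mass function of a discrete random variable X.
xs = [-1.0, 0.5, 2.0]     # support points x_i
ps = [0.25, 0.5, 0.25]    # probabilities p(x_i), summing to one

def F(x):
    """Staircase distribution function F(x) = sum of p(x_i) over all x_i <= x."""
    return sum(p for xi, p in zip(xs, ps) if xi <= x)

# F is flat between support points and jumps by p(x_i) at each x_i.
values = [F(-2.0), F(-1.0), F(1.0), F(2.0), F(5.0)]
assert values == [0, 0.25, 0.75, 1.0, 1.0]
```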

In simulation we are often interested in studying several random variables X_1, X_2, ..., X_n simultaneously. In such cases, the joint distribution function F(x_1, x_2, ..., x_n) and the joint probability density function f(x_1, x_2, ..., x_n) or the joint probability mass function p(x_1, x_2, ..., x_n), whichever applies, of X_j, j = 1, 2, ..., n, are defined by:

F(x_1, x_2, ..., x_n) = P(X_1 ≤ x_1, X_2 ≤ x_2, ..., X_n ≤ x_n)

f(x_1, x_2, ..., x_n) = ∂ⁿF(x_1, x_2, ..., x_n) / ∂x_1 ∂x_2 ⋯ ∂x_n

p(x_1, x_2, ..., x_n) = P(X_1 = x_1, X_2 = x_2, ..., X_n = x_n)

If these random variables are independent then for any subset X_{j1}, X_{j2}, ..., of them

F(x_{j1}, x_{j2}, ...) = F_{j1}(x_{j1}) F_{j2}(x_{j2}) ⋯

f(x_{j1}, x_{j2}, ...) = f_{j1}(x_{j1}) f_{j2}(x_{j2}) ⋯

p(x_{j1}, x_{j2}, ...) = p_{j1}(x_{j1}) p_{j2}(x_{j2}) ⋯

where F_j(x), f_j(x), and p_j(x) are the distribution, probability density, and probability mass functions of X_j.

1.A1.4. Expectation

The expected value or mean of a random variable X is denoted by E(X) or μ and is defined by

μ = E(X) = ∫_{−∞}^{∞} x f(x) dx   if X is absolutely continuous

μ = E(X) = ∑_{i=1}^{∞} x_i p_i   if X is discrete

For a discrete random variable X we can define the distribution and probability density functions in the same way as we did for the absolutely continuous random variables by using two functions: the unit step function

U(x) = 0 if x < 0,   U(x) = 1 if x ≥ 0

and its "derivative," a generalized function called the delta or impulse function δ(x). The impulse function is defined by

∫_{a}^{b} φ(x) δ(x − x_i) dx = φ(x_i)

which holds for every real function φ(x) continuous at x_i and every a, b such that x_i ∈ (a, b). It should be noted that generalized functions are operators defined accordingly, but for our purposes this exposition suffices although it is not rigorous. The distribution function of X is

F(x) = P(X ≤ x) = ∑_{i=1}^{∞} p(x_i) U(x − x_i)

and the probability density function is

f(x) = F′(x) = ∑_{i=1}^{∞} p(x_i) δ(x − x_i)

Recall that for any discrete random variable the mean is defined by

μ = ∑_{i=1}^{∞} x_i p(x_i)

Let us examine each term of the above summation separately. First, observe that for a_1 = −∞ and a_i = (x_i + x_{i−1})/2, i = 2, 3, ..., the point x_i lies in the interval (a_i, a_{i+1}]. Therefore, setting φ(x) = x p(x_i) and applying the property of the impulse function yields

x_i p(x_i) = ∫_{a_i}^{a_{i+1}} x p(x_i) δ(x − x_i) dx

Since δ(x − x_j) is zero in the interval (a_i, a_{i+1}] for x_j ≠ x_i, the above becomes

x_i p(x_i) = ∫_{a_i}^{a_{i+1}} ∑_{j=1}^{∞} x p(x_j) δ(x − x_j) dx = ∫_{a_i}^{a_{i+1}} x f(x) dx

By observing that the intervals (a_i, a_{i+1}], i = 1, 2, ..., are disjoint and exhaustive (i.e., their union is ℝ) we may write the mean of X as follows:

μ = ∑_{i=1}^{∞} ∫_{a_i}^{a_{i+1}} x f(x) dx = ∫_{−∞}^{∞} x f(x) dx

where the last expression is the same as the mean of an absolutely continuous random variable. Therefore, for every random variable X we have

μ = ∫_{−∞}^{∞} x f(x) dx

We have already seen that a random variable X is a measurable function of the outcome of some random experiment. Therefore, any function g(X) can be considered a random variable itself provided it is measurable, that is, {g(X) ≤ y} is an event, and P[g(X) = ∞] = P[g(X) = −∞] = 0. The mean of g(X) is defined as follows:

E[g(X)] = ∫_{−∞}^{∞} g(x) f(x) dx

In an analogous manner, we can compute the mean of a measurable function of several random variables X_j, j = 1, 2, ..., n:

E[g(X_1, ..., X_n)] = ∫_{−∞}^{∞} ⋯ ∫_{−∞}^{∞} g(x_1, ..., x_n) f(x_1, ..., x_n) dx_1 ⋯ dx_n

From the definition of the expected value the following properties can easily be verified:
(1) Any constant a can be regarded as a degenerate random variable with probability density function δ(x − a). Therefore, E(a) = a.
(2) The operator E(·) is linear, that is, given two finite sequences of real numbers a_j and random variables X_j, j = 1, 2, ..., n,

E(∑_{j=1}^{n} a_j X_j) = ∑_{j=1}^{n} a_j E(X_j)

(3) If X_j, j = 1, 2, ..., n, are independent random variables then

E(X_1 X_2 ⋯ X_n) = E(X_1) E(X_2) ⋯ E(X_n)

Knowledge of the expected value of a random variable X provides only partial information about its statistical properties. A more complete specification of X is possible if one knows the quantities

E(X^k) = ∫_{−∞}^{∞} x^k f(x) dx   k = 0, 1, ...

which are called the kth moments of X. The quantities

E[(X − μ)^k] = ∫_{−∞}^{∞} (x − μ)^k f(x) dx   k = 0, 1, ...

are called the kth central moments of X. The parameter Var(X) or σ², which is defined by

σ² = E[(X − μ)²]

is called the variance of the random variable X. An equivalent expression for the variance is obtained as follows

σ² = E(X² − 2μX + μ²)

= E(X²) − 2μ² + μ²

= E(X²) − μ²

The positive square root σ of the variance is called the standard deviation. The standard deviation measures the average distance of X from its mean. If σ = 0, then we must have

∫_{−∞}^{∞} (x − μ)² f(x) dx = 0

Since both (x − μ)² and f(x) are nonnegative, the above is valid only when either x = μ or f(x) = 0 for every x ∈ ℝ. Therefore we must have X(ω) = μ almost everywhere, that is, for every outcome ω ∈ Ω except, possibly, on a set of zero probability measure.
For two real random variables X and Y with means μ_X and μ_Y, respectively, the quantity E[(X − μ_X)(Y − μ_Y)] is called the covariance of X and Y and it is denoted by Cov(X, Y).
The variance and covariance have the following properties:
(1) For any numbers a, b and any random variable X with mean μ,

Var(a + bX) = E[(a + bX − a − bμ)²] = E[b²(X − μ)²] = b² Var(X)

(2) If X, Y are independent random variables then

Cov(X, Y) = E[(X − μ_X)] E[(Y − μ_Y)] = 0

(3) Given any finite sequences of random variables X_j and real numbers a_j, j = 1, 2, ..., n,

Var(∑_{j=1}^{n} a_j X_j) = ∑_{j=1}^{n} a_j² Var(X_j) + 2 ∑_{j<k} a_j a_k Cov(X_j, X_k)

The mean, variance, and moments of a random variable are referred to as the statistical parameters or, simply, parameters of the random variable. In the next section, we describe random variables that are commonly used in simulation.

1.A1.5. Some Commonly Used Random Variables

Bernoulli. An experiment with two possible outcomes, 1 or 0, whose probabilities are q and 1 − q, respectively, can be represented as a random variable of the Bernoulli type, where q ∈ [0, 1] is a given parameter. The probability mass and distribution functions of the Bernoulli distribution are given by

p_k = 1 − q if k = 0,   p_k = q if k = 1

F(x) = (1 − q) U(x) + q U(x − 1)

where U(x) is the unit step function. The mean and variance of X are given by

μ = q

σ² = q(1 − q)

Geometric. A geometric random variable describes the number of successive occurrences of 0 before 1 occurs in a sequence of Bernoulli trials. Here we have

p_k = q(1 − q)^k   k = 0, 1, ...

F(x) = q ∑_{k=0}^{∞} (1 − q)^k U(x − k)   x ∈ ℝ

or

F(x) = 1 − (1 − q)^{x+1}   x = 0, 1, ...

where q ∈ [0, 1] is the parameter of the geometric distribution. The mean and variance of the geometric distribution are given by

μ = (1 − q)/q

σ² = (1 − q)/q²

A geometric random variable can be used to approximate (a) the number of items a machine produces between successive breakdowns (number of parts-to-failure), (b) the number of breakdowns during the production of a single item, (c) the number of conforming items produced until the production of a defective one, etc. The parameter q is defined respectively as (a) the probability that the machine fails before completing an item (failure probability), (b) the complement of (a), that is, the probability of completing an item before failing (survival probability), or (c) the probability of producing a defective item.
The geometric distribution has the so-called memoryless or Markov property, according to which the remaining number of 0's until the occurrence of 1 is independent of past trials.
For example, suppose that a machine whose failure probability is q has already survived the production of c parts, for some nonnegative integer c. What is the probability that the machine will break down after producing k more items?
Let the random variable Y denote the remaining parts-to-failure of the machine. This probability is written as follows

P(Y = k | machine has already survived the production of c items)

and the memoryless property suggests that this probability is p_k = q(1 − q)^k. To see this, let the random variable X denote the total number of parts the machine produces before failure. Clearly X = Y + c. Moreover, by definition, X has a geometric distribution. The required probability may be written as follows

P(Y = k | the machine has already survived the production of c items)

= P(X = k + c | X ≥ c)

= P[{X = k + c} ∩ {X ≥ c}] / P(X ≥ c)

Since the event {X ≥ c} contains the event {X = k + c}, their intersection is

{X = k + c} ∩ {X ≥ c} = {X = k + c}

Therefore,

P(X = k + c | X ≥ c) = P(X = k + c) / P(X ≥ c)

= p_{k+c} / [1 − P(X ≤ c − 1)] = p_{k+c} / [1 − F(c − 1)]

= q(1 − q)^{k+c} / (1 − q)^c = q(1 − q)^k

which proves the memoryless property of the geometric distribution.
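The memoryless property also lends itself to a quick Monte Carlo check. The sketch below (the failure probability q = 0.2 and the values c = 3, k = 2 are arbitrary choices for the experiment) estimates the conditional probability P(X = k + c | X ≥ c) from simulated parts-to-failure counts and compares it with the unconditional p_k = q(1 − q)^k:

```python
import random

random.seed(1)
q, c, k = 0.2, 3, 2
n = 200_000

def parts_to_failure():
    """Number of parts completed before the machine fails (geometric)."""
    count = 0
    while random.random() >= q:   # one more part survives with prob. 1 - q
        count += 1
    return count

samples = [parts_to_failure() for _ in range(n)]

# Conditional frequency P(X = k + c | X >= c) vs. unconditional p_k.
survived = [x for x in samples if x >= c]
p_cond = sum(x == k + c for x in survived) / len(survived)
p_uncond = q * (1 - q) ** k

assert abs(p_cond - p_uncond) < 0.01
```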



Poisson. A discrete random variable X has a Poisson distribution with parameter Λ, Λ > 0, if its probability mass and distribution functions are

p_k = e^{−Λ} Λ^k / k!   k = 0, 1, ...

F(x) = P(X ≤ x) = ∑_{k ≤ x} p_k = e^{−Λ} ∑_{k=0}^{∞} (Λ^k / k!) U(x − k)

The mean and the variance of X are given by

μ = σ² = Λ

The Poisson distribution is commonly used to model the number of events that occur during a time interval when the only available information is their mean rate of occurrence. For instance, consider a worker who produces, on the average, λ parts per hour and works x hours a day. Then the daily production of the worker can be approximated by a Poisson random variable with parameter Λ = λx.
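A sketch of the worker example (the rate λ = 4 parts per hour and the 8-hour shift are assumed values): it evaluates the mass function p_k = e^{−Λ}Λ^k/k! and confirms numerically that the probabilities sum to one and that the mean equals Λ = λx.

```python
from math import exp, factorial

lam, hours = 4.0, 8          # assumed rate (parts/hour) and shift length
A = lam * hours              # Poisson parameter, capital-lambda = lam * x

def poisson_pmf(k, a):
    """p_k = e^{-a} a^k / k!"""
    return exp(-a) * a ** k / factorial(k)

# Truncated sums; the tail beyond k = 100 is negligible for a = 32.
total = sum(poisson_pmf(k, A) for k in range(100))
mean = sum(k * poisson_pmf(k, A) for k in range(100))

assert abs(total - 1.0) < 1e-9   # probabilities sum to one
assert abs(mean - A) < 1e-9      # mean equals the parameter
```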

Uniform. An absolutely continuous random variable X with uniform distribution on [a, b] has the following density function:

f(x) = 1/(b − a) if a ≤ x ≤ b,   f(x) = 0 otherwise

We write X ~ U(a, b). The distribution function of X is given by

F(x) = 0 if x < a,   F(x) = (x − a)/(b − a) if a ≤ x ≤ b,   F(x) = 1 if x > b

and its mean and variance are

μ = (a + b)/2

σ² = (b − a)²/12
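The uniform distribution is the basic ingredient for sampling other distributions in simulation: if U ~ U(0, 1) and F⁻¹ is the quantile function defined earlier, then F⁻¹(U) has distribution F. A sketch for the exponential case, where F⁻¹(p) = −ln(1 − p)/λ (the rate λ = 0.5 is an assumed value):

```python
import math
import random

random.seed(2)
lam = 0.5
n = 100_000

# Inverse-transform sampling: X = F^{-1}(U) = -ln(1 - U) / lam,
# with U uniform on [0, 1), has an exponential distribution with rate lam.
xs = [-math.log(1 - random.random()) / lam for _ in range(n)]
sample_mean = sum(xs) / n

# The exponential mean is 1/lam (= 2 here), recovered to Monte Carlo accuracy.
assert abs(sample_mean - 1 / lam) < 0.05
```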

Gamma, exponential, Erlang. An absolutely continuous, nonnegative random variable X has a gamma(α, 1/λ) distribution with parameters α and 1/λ, if its density function is given by

f(x) = [λ^α x^{α−1} e^{−λx} / Γ(α)] U(x)

where Γ(α) is the gamma function, defined by

Γ(α) = ∫_{0}^{∞} y^{α−1} e^{−y} dy

for any α > 0. In general, the distribution function of X does not have a closed form. The mean and variance of the gamma distribution are

μ = α/λ

σ² = α/λ²

The gamma distribution is used to model the time to perform some task, such as the time spent by a worker to produce one item, the uptimes and downtimes of a machine subject to failures and repairs, etc.
In particular, when α = 1 the gamma distribution is called the exponential distribution with rate λ, whose statistics are

f(x) = λ e^{−λx} U(x)

F(x) = (1 − e^{−λx}) U(x)

μ = 1/λ

σ² = 1/λ²
The exponential distribution describes the times between successive occurrences of events whose frequency has a Poisson distribution with parameter Λ = λx. Indeed, for the Poisson distribution, the probability that no events occur during the time interval [0, x], that is

p_0 = e^{−λx}

is equal to the probability that the exponential random variable is greater than x.
It can be shown that the exponential distribution enjoys the memoryless property. For example, consider a machine whose uptimes are exponentially distributed with mean 1/λ. Suppose that at some time we are informed that the machine has already been functional for c time units, for some nonnegative real c. Then the residual lifetime of the machine is independent of c and has an exponential distribution with rate λ. The proof of this property is similar to the proof for the geometric case and will be omitted. Hence, the exponential distribution is the continuous analog of the geometric distribution.
Finally, the sum of n independent exponential random variables each with rate λ is a random variable with gamma(n, 1/λ) distribution. This distribution is also known as the Erlang(1/λ) distribution with n degrees of freedom and is denoted by n-Erlang(1/λ).
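This construction of the Erlang distribution is also a convenient way to sample it. The sketch below (rate λ = 2 and n = 3 stages are assumed values) sums independent exponential variates and checks the gamma mean α/λ = n/λ by Monte Carlo:

```python
import random

random.seed(7)
lam, n_stages = 2.0, 3        # assumed rate and number of stages
n_samples = 100_000

def erlang_sample():
    """Sum of n independent exponential(lam) variates -> n-Erlang(1/lam)."""
    return sum(random.expovariate(lam) for _ in range(n_stages))

xs = [erlang_sample() for _ in range(n_samples)]
sample_mean = sum(xs) / n_samples

# gamma(n, 1/lam) has mean n/lam (= 1.5 here).
assert abs(sample_mean - n_stages / lam) < 0.02
```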

Normal, chi-square, t. The normal or Gaussian distribution is the most frequently used distribution. For a normal random variable X with mean μ and variance σ², the probability density function is given by

f(x) = [1/√(2πσ²)] e^{−(x−μ)²/(2σ²)}   x ∈ ℝ

We write X ~ N(μ, σ²). Notice that f(x) is symmetric about μ. The distribution function of X does not have a closed form.
The normal distribution has the following properties:
(1) Suppose X_j, j = 1, 2, ..., n, are independent N(μ_j, σ_j²) random variables and a_i ∈ ℝ, i = 0, 1, ..., n. Then the random variable

X = a_0 + ∑_{j=1}^{n} a_j X_j

has a normal distribution with mean

μ = a_0 + ∑_{j=1}^{n} a_j μ_j

and variance

σ² = ∑_{j=1}^{n} a_j² σ_j²

(2) In particular, if X is N(μ, σ²) then the random variable Z defined by

Z = (X − μ)/σ

has an N(0, 1) distribution, called the standard normal distribution. Its distribution function is denoted by Φ(z), that is,

Φ(z) = P(Z ≤ z)

Let z_α, α ∈ (0, 1), denote the critical point of the standard normal distribution; that is

P(Z > z_α) = α

The parameter α is the probability mass of the right tail of the normal distribution. Table A1 of Appendix A gives the critical points z_α for various values of α between 0 and 0.5. It follows from the symmetry of the normal density that z_α = −z_{1−α}, from which we can determine z_α for α ∈ (0.5, 1).
(3) If Z_j are independent, standard normal random variables, j = 1, 2, ..., n, then the random variable

Y = ∑_{j=1}^{n} Z_j²

has a gamma distribution with parameters α = n/2 and λ = 0.5, which is also known as the chi-square distribution with n degrees of freedom.
(4) Given two independent random variables Z and Y, where Z has a standard normal distribution and Y has a chi-square distribution with n degrees of freedom, the random variable

t_n = Z / √(Y/n)

has a t distribution with n degrees of freedom. Table A2 in Appendix A gives the critical points t_{n,α}, i.e. such that P(t_n ≥ t_{n,α}) = α, for various degrees of freedom and tail probabilities α.
The normal and the t distributions are used for constructing confidence intervals of quantities of interest associated with a particular stochastic system. This issue is discussed in the next two sections.

1.A1.6. Estimation of Mean and Variance

A basic problem in statistical analysis is that of estimating the parameters of a random variable X based on a number of observations (realizations) of X. In simulation, parameter estimation is necessary at the early stage of determining the statistics of the random inputs that affect the evolution of some system or process of interest. In addition, parameter estimation is used to estimate the performance of the system from a number of output data, which are obtained through several simulation runs.
The only parameters we consider here are the mean and the variance because they are the most important in practical applications. First, the mean and the variance are sufficient for determining the parameters of the most commonly used distributions we described in the previous section. Second, the performance measures of any stochastic system are expressed as expected values of random variables that are related to the performance of these systems. Common performance measures of a production network are: throughput, which is defined as the ratio of the mean production of the system over a given period of operation to the length of that period; mean cycle time, which is the mean time to convert raw parts into a finished item; mean buffer levels, etc.
Suppose that the random variable X has finite mean μ and finite variance σ². Let X_1, X_2, ..., X_n be the available observations of the random variable. These observations could be the outputs of n replications of a simulation experiment. The problem of estimating a parameter of X is equivalent to that of determining a function on ℝⁿ, called the estimator or point estimate of the parameter, such that the value of the function at the point (X_1, X_2, ..., X_n) approximates the parameter with good accuracy. The average or sample mean is defined by

X̄(n) = (1/n) ∑_{i=1}^{n} X_i

and the sample variance by

S²(n) = [1/(n − 1)] ∑_{i=1}^{n} [X_i − X̄(n)]²

By using the properties of expectation, it can easily be verified that the sample mean is an unbiased estimator of μ, that is

E[X̄(n)] = μ

Furthermore, if the X_i's are independent, then S²(n) is also an unbiased estimator of σ².
Intuitively, we wish the estimator to be as close as possible to the corresponding statistical parameter. Unbiasedness is a desirable property but it does not say anything about the estimation error. For example, since E(X_1) = μ, why should one use the sample mean rather than a single measurement, X_1? Recall that the variance of a random variable provides a measure of the distance of the random variable from its mean. Then, assuming independence of the X_i's, we obtain

Var[X̄(n)] = (1/n²) ∑_{i=1}^{n} Var(X_i) = σ²/n

while

Var(X_1) = σ²

from which it becomes clear that the sample mean is more accurate than a single measurement. It is noteworthy that as the sample size n goes to infinity the variance of X̄(n) goes to zero and, thus, X̄(n) tends to μ. This is another desirable property of the sample mean, which is called consistency. This fact, however, implies that, in order to make a secure prediction, one needs an infinite number of simulations. To get around this inefficiency we shall formulate the estimation problem differently.
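The two estimators can be sketched directly. Below, U(0, 1) observations stand in for simulation outputs, so that μ = 0.5 and σ² = 1/12 are known and the estimates can be checked:

```python
import random

random.seed(3)

def sample_mean(xs):
    """X(n) = (1/n) * sum of the observations."""
    return sum(xs) / len(xs)

def sample_variance(xs):
    """Unbiased estimator S^2(n), note the 1/(n - 1) factor."""
    m = sample_mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# n observations of a U(0, 1) random variable (mu = 0.5, sigma^2 = 1/12).
n = 50_000
xs = [random.random() for _ in range(n)]

mean_hat = sample_mean(xs)
var_hat = sample_variance(xs)

# Var[X(n)] = sigma^2 / n, so the estimates tighten as n grows.
assert abs(mean_hat - 0.5) < 0.01
assert abs(var_hat - 1 / 12) < 0.01
```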

1.A1.7. Limit Theorems and Confidence Intervals for the Mean

In this section, we consider a more general estimation problem. Since the sample mean is a measurable function of random variables, it is a random variable itself. Then what is its distribution? Obviously, finding the distribution function of some random variable is more difficult than estimating its mean value. However, for large sample sizes, sample means behave as normal random variables. This remarkable result, known as the central limit theorem, permits the construction of confidence intervals for the estimation of the mean.
Before stating the central limit theorem, we briefly review a few aspects of convergence and present another useful limit theorem without proof.

Definition 1.A1. A sequence of functions g_n(ω), n = 1, 2, ..., defined on a set Ω is said to converge in the ordinary sense or, simply, converge to a function g(ω) as n tends to infinity, if for every ω ∈ Ω and any ε > 0, there exists a positive integer N, possibly depending on ω and ε, such that |g_n(ω) − g(ω)| < ε for all n > N. We write

lim_{n→∞} g_n(ω) = g(ω)

We now extend this definition to sequences of random variables. In Appendix 1.A1.1, we have seen that one can combine several experiments, each one with its own sample space, to form a composite experiment with a common sample space. In the same way, we can embed several random variables on a common probability space.

Definition 1.A2. A sequence of random variables Y_n(ω), n = 1, 2, ..., defined on a common probability space {Ω, ℱ, P} converges to the random variable Y(ω) almost everywhere if there is a set E ∈ ℱ of probability measure 0 such that for every outcome ω ∈ Ω − E,

lim_{n→∞} Y_n(ω) = Y(ω)

where the above limit is in the ordinary sense.

The next theorem is one of the most popular limit theorems in probability theory and characterizes the asymptotic behavior of the sample mean. We state the simplest version of this result (Capinski and Kopp, 1999) without proof.

Theorem 1.A1. Strong Law of Large Numbers: Let X_1, X_2, ..., X_n be a sequence of independent, identically distributed random variables with finite mean μ. Then

lim_{n→∞} X̄(n) = μ   almost everywhere

In statistical estimation, estimators converging almost everywhere are referred to as strongly consistent. As we have pointed out in the concluding paragraph of the previous section, consistency of the sample mean is not sufficient for obtaining reliable estimates of μ.
We now consider a weaker type of convergence.

Definition 1.A3. A sequence of random variables Y_n(ω) with distribution functions F_n(x) converges weakly or in distribution to the random variable Y(ω) with distribution function F(x), if the sequence F_n(x) converges to F(x) in the ordinary sense, for every x ∈ ℝ.

Notice that the outcomes ω are no longer needed in the above definition. It can be shown (e.g. see Capinski and Kopp, 1999) that convergence almost everywhere implies convergence in distribution. The converse is not true. Weak convergence gives rise to central limit theorems, the most important tools for statistical estimation. The simplest central limit theorem, due to Lindeberg and Levy, is as follows.

Theorem 1.A2. Central Limit Theorem: Let X_1, X_2, ..., X_n be a sequence of independent, identically distributed random variables with finite mean μ and finite variance σ². Let also Z_n be a random variable defined by

Z_n = [X̄(n) − μ] / (σ/√n)

Then

lim_{n→∞} P(Z_n ≤ z) = Φ(z)

where Φ(z) is the standard normal distribution function, that is

Φ(z) = (1/√(2π)) ∫_{−∞}^{z} e^{−x²/2} dx

It follows from the above theorem that for large n,

P(−z_{α/2} ≤ Z_n ≤ z_{α/2}) ≈ 1 − α

From the above, we have that μ belongs in the interval

X̄(n) ± z_{α/2} σ/√n

with probability approximately 1 − α, ∀ α ∈ (0, 1), where z_{α/2} is the critical point of the standard normal distribution (see Appendix A at the back of this book). If the variance is unknown, then we replace the standard deviation with the sample standard deviation S(n), and an approximate confidence interval is

X̄(n) ± z_{α/2} S(n)/√n

If the X_i's are independent normal random variables then the quantity

t_{n−1} = [X̄(n) − μ] / [S(n)/√n]

has a t distribution with n − 1 degrees of freedom and an exact confidence interval for μ is given by

X̄(n) ± t_{n−1,α/2} S(n)/√n

where t_{n−1,α/2} is the critical point of the t distribution with n − 1 degrees of freedom (see Appendix A).
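A sketch of the approximate normal-theory interval (z_{0.025} = 1.96 is the familiar critical point from the normal table; the exponential outputs with mean 2 are a stand-in for replications of a simulation experiment). Repeating the construction many times shows that roughly 95% of the intervals cover the true mean:

```python
import math
import random

random.seed(11)

def confidence_interval(xs, z=1.96):
    """Approximate 95% CI: X(n) +/- z_{a/2} S(n)/sqrt(n), with z_{0.025} = 1.96."""
    n = len(xs)
    mean = sum(xs) / n
    s2 = sum((x - mean) ** 2 for x in xs) / (n - 1)
    half = z * math.sqrt(s2 / n)
    return mean - half, mean + half

# Coverage check: each replication simulates n = 50 outputs of a hypothetical
# experiment (exponential with true mean 2) and asks whether the interval
# contains the true mean; roughly 95% of the intervals should.
true_mean, reps, covered = 2.0, 2000, 0
for _ in range(reps):
    xs = [random.expovariate(1 / true_mean) for _ in range(50)]
    lo, hi = confidence_interval(xs)
    covered += (lo <= true_mean <= hi)
coverage = covered / reps

assert 0.87 < coverage < 0.99
```

The slight undercoverage observed for skewed data is why, for small normal samples, the exact t-based interval is preferred.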

1.A1.8. Introduction to Stochastic Processes

As we have already seen, a random variable X is a function that assigns a number X(ω) to every outcome of some random experiment. Many systems involve random phenomena that evolve in time t, t ∈ T. In the remaining sections of this appendix we review the basic tools for studying such phenomena.

Definition 1.A4. A stochastic process {X_t, t ∈ T} is a family of random variables defined on some common probability space {Ω, ℱ, P} parameterized by the real variable t.

If T is an interval of the real axis, then {X_t, t ∈ T} is a continuous time process. If T is a countable set of real numbers, e.g. T = {..., −1, 0, 1, ...}, then {X_t, t ∈ T} is a discrete time stochastic process. The set S of all possible values of a stochastic process {X_t, t ∈ T} is called the state space of the process. If S is countable then the stochastic process is called a chain and, without loss of generality, we can set S = {0, 1, ...} and t ≥ 0.

For simplicity, we denote a stochastic process by X_t. To distinguish between the stochastic process X_t and its value for fixed ω ∈ Ω and t ∈ T we occasionally denote the latter by X_t(ω).
Sampling the stochastic process X_t at some time instant yields a random variable. Hence, for each fixed time t, we can define the probability distribution function, the probability density function, and the statistical parameters of X_t in the usual way. In general, the distribution function depends on the sampling instant. If the distribution function is independent of t then the stochastic process is strictly stationary.
Suppose that we sample X_t at the time instants t_1, t_2, ..., to obtain a sequence of random variables X_{t_1}, X_{t_2}, .... The sequence X_{t_1}, X_{t_2}, ... is called a sample path of X_t. In this case we may also define the joint distribution function and joint statistics of the sample path. For example, the joint distribution function of X_{t_1} and X_{t_2} is defined by

F(x_1, x_2) = P(X_{t_1} ≤ x_1, X_{t_2} ≤ x_2)

We can also define the joint distribution function and joint statistics for two or more stochastic processes defined on a common probability space. For example, if the stochastic processes X_t and Y_t are sampled at times t_1, t_2, ..., t_n, where t_1 < t_2 < ... < t_n, then the joint distribution function of the resulting random variables is defined by

F(x_1, ..., x_n, y_1, ..., y_n) = P(X_{t_1} ≤ x_1, ..., X_{t_n} ≤ x_n, Y_{t_1} ≤ y_1, ..., Y_{t_n} ≤ y_n)

Furthermore, if

F(x_1, ..., x_n, y_1, ..., y_n) = P(X_{t_1} ≤ x_1, ..., X_{t_n} ≤ x_n) P(Y_{t_1} ≤ y_1, ..., Y_{t_n} ≤ y_n)

for every sequence t_1, t_2, ..., t_n, then the stochastic processes X_t and Y_t are independent.
A common problem in simulation is that of estimating the mean value of some bounded function g(X_t) of a stationary stochastic process X_t from a single sample path {X_t(ω), t ∈ [0, t_max]}, ω ∈ Ω. For the purposes of this book, X_t may be the state of a given production system and g(X_t) the profit rate of the system at time t. Then X_t(ω) and g[X_t(ω)] are the corresponding outputs of some simulation experiment, designated by the outcome ω ∈ Ω.
A point estimate of E[g(X_t)], the mean profit rate, can be obtained by choosing t_max large and computing the time average of g(X_t), which is defined by

ḡ(t_max, ω) = (1/t_max) ∫_{0}^{t_max} g[X_t(ω)] dt

For simplicity in the exposition we assume that the above integral is a Riemann integral. Note that stationarity of X_t ensures that the random variables g(X_t), t ∈ [0, t_max], are identically distributed. However, stationarity does not ensure convergence of the time average ḡ(t_max, ω) to the mean, as the following example shows.

Example 1.A1. Suppose that g(x) = x. Furthermore, assume that X_t(ω) = X_0(ω) for
every time t and every outcome ω and that X_0 has two possible values, 0 or 1, with prob-
abilities 0.5. It is easily checked that E[g(X_t)] = E(X_0) = 0.5 but ḡ(t_max, ω) = X_0(ω) ≠ 0.5
for every t_max and ω∈Ω.

The reason why the above construction fails to converge to E[g(X_t)] is that the ran-
dom variables X_t, t∈[0, t_max], are not independent. Recall that independence is a necessary
condition for applying the strong law of large numbers. We now introduce the concept of
ergodicity of stationary processes.

Definition 1.A5. A stationary stochastic process X_t is ergodic if, for every function
g(X_t) such that E[g(X_t)] exists, the time average ḡ(t_max, ω) tends to E[g(X_t)] as t_max → ∞
almost everywhere.

We remark that, by stationarity, E[g(X_t)] does not depend on t.


In general, it is difficult to prove that a given stochastic process is ergodic. In addi-
tion, since production systems operate in a highly dynamic environment where new tech-
nologies emerge over the years, the assumptions of stationarity and ergodicity may not be
realistic. For example, when a new machine is installed in a workstation comprising a
number of parallel machines, the workstation's capacity is increased and, therefore, the
stochastic process describing cumulative production is altered. In this book, emphasis is
placed on the efficient generation of sample paths using simulation and the derivation of
their time averages.

1.A1.9. Markov Chains

A special class of stochastic processes with many practical applications is the class
of Markov chains.

Definition 1.A6. A stochastic process of the chain type is called Markov if, for every
partition t_1 < t_2 < … < t_n,

    P(X_tn = x_n | X_t(n-1) = x_(n-1), …, X_t1 = x_1) = P(X_tn = x_n | X_t(n-1) = x_(n-1))

In words, the state at some future time depends on the current state but it is independent
of past states.

From this definition, it is obvious that Markov chains are characterized by the memo-
ryless or Markov property, which is a fundamental property of geometric and exponential
random variables.
Suppose that a Markov chain X_t starts from state x(0). We shall denote by P_t(i) the
conditional probability P[X_t = i | X_0 = x(0)]. Let t and τ be two time instants such that
t < τ. By applying the theorem of total probability and the Markov property we obtain

    P_τ(i) = Σ_{j∈S} P[X_τ = i, X_t = j | X_0 = x(0)]

          = Σ_{j∈S} P[X_τ = i | X_t = j, X_0 = x(0)] P[X_t = j | X_0 = x(0)]

          = Σ_{j∈S} P(X_τ = i | X_t = j) P_t(j)                              (1.A1)

which is the Chapman-Kolmogorov equation.


Since the chain must be in some state at time t, we must have

    Σ_{i∈S} P_t(i) = 1                                                      (1.A2)

for every t ≥ 0. This equation is called the normalization equation. Any vector with non-
negative elements satisfying the above equation is called a probability vector.

1.A1.10. Discrete Time Markov Chains

Consider a discrete time, stationary Markov chain X_t. The one-step transition prob-
abilities are denoted by

    a_ij = P(X_(t+1) = i | X_t = j)    t = 0, 1, …

for every i, j ∈ S. Note that by the stationarity assumption, the one-step transition prob-
abilities do not depend upon t.
Suppose that at time t the chain is in state j. Let T_j denote the number of successive
periods the process rests in state j before it jumps to another state. From the definition of
one-step transition probabilities, we have that the probability that the process will leave
state j in period t + 1 is P(T_j = 1) = 1 - a_jj. Since a_jj is independent of t, successive at-
tempts of the process to leave state j form a sequence of Bernoulli trials. The number of
these attempts is T_j. By definition, T_j has a geometric distribution on {1, 2, …} with
probability mass function

    P(T_j = n) = a_jj^(n-1) (1 - a_jj)    n = 1, 2, …

This result justifies the fact that Markov chains and geometric random variables share the
memoryless property.
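As a small numerical aside of ours (not part of the text), the geometric sojourn law can be checked directly: the probabilities should sum to one and the mean sojourn time should equal 1/(1 - a_jj). The self-transition probability below is a hypothetical value.

```python
# Sketch: numerical check of the geometric sojourn-time distribution
# P(T_j = n) = a_jj**(n-1) * (1 - a_jj), n = 1, 2, ...
# Truncating the infinite sums at a large n suffices for the check.

def sojourn_pmf(a_jj, n):
    """Probability that the chain rests in state j for exactly n periods."""
    return a_jj ** (n - 1) * (1 - a_jj)

a_jj = 0.8                      # hypothetical self-transition probability
total = sum(sojourn_pmf(a_jj, n) for n in range(1, 500))
mean = sum(n * sojourn_pmf(a_jj, n) for n in range(1, 500))
# total should be close to 1 and mean close to 1/(1 - 0.8) = 5.
```

The mean sojourn time 1/(1 - a_jj) grows as a_jj → 1, which matches the intuition that a "sticky" state is left rarely.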
Using the one-step transition probabilities, we can describe the evolution of the
Markov chain explicitly. By invoking the Chapman-Kolmogorov equation with τ = t + 1
we obtain

    P_(t+1)(i) = Σ_{j∈S} a_ij P_t(j)

Let P_t denote the column vector [P_t(1) P_t(2) …]^T of probabilities of the various states
after t time steps and A = [a_ij] the matrix of transition probabilities.

The previous Chapman-Kolmogorov equation can be written compactly as

    P_(t+1) = A P_t                                                         (1.A3)

Given the initial state X_0 = k, the vector of state probabilities after one step is ob-
tained by multiplying A with the column vector P_0 = [δ_1k, δ_2k, …]^T, where δ_ik is defined
by

    δ_ik = 1 if i = k,  δ_ik = 0 if i ≠ k

To compute P_2, we invoke Eq. (1.A3) twice:

    P_2 = A P_1 = A(A P_0) = A² P_0

In a similar way, the vector of state probabilities after t steps is

    P_t = A P_(t-1) = … = A^t P_0.
Quite often, we are interested in the steady state behavior of a Markov chain. This
requires computing the limit of P_t as t → ∞, which is called the equilibrium probability
vector. However, not every Markov chain has a unique steady state. For a unique equilib-
rium probability vector to exist, the Markov chain must be
    irreducible, that is, it is possible to go from any state to any other state in a finite
    number of steps;
    aperiodic, that is, it is possible to return to a previously visited state after any
    number of steps (e.g. by resting at that state indefinitely);
    positive recurrent, that is, the mean number of steps required to return to a pre-
    viously visited state is finite.
The first two properties guarantee that the limit is unique. If the chain is periodic,
that is, it may return to a previously visited state only after a multiple of d > 1 time steps,
then the limit exists only when t is a multiple of d. The third condition ensures that all
equilibrium probabilities are positive. When S is infinite, positive recurrence is required
to rule out the possibility of drifting toward higher states, in which case all the
probabilities would wind up equal to zero. When S is finite, irreducibility implies
positive recurrence.
Under the above conditions, the limiting distribution of the various states is inde-
pendent of P_0. Hence, taking the limits of both sides of Eq. (1.A3) yields

    P = A P                                                                 (1.A4)

where P = [P(1) P(2) …]^T is the vector of equilibrium probabilities. We remark here that
P is the eigenvector of A corresponding to the eigenvalue 1. In addition, the limit of A^t as
t → ∞ exists and each column of A^∞ is equal to P, that is

    A^∞ = [P | P | P | …].

Equation (1.A4) has an infinite number of solutions (to see this, substitute P by aP,
a∈R, to get another valid equality). To obtain a complete solution for P we also employ
the normalization equation. For Markov chains with state spaces of moderate size, the
solution to the linear Eqs. (1.A2) and (1.A4) can be obtained in reasonable computational
time by applying standard algorithms.
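As a numerical illustration of ours (not from the book), the iteration P_(t+1) = A P_t of Eq. (1.A3) can itself be used to approximate the equilibrium vector of a finite, irreducible, aperiodic chain. The sketch below does this in plain Python for a hypothetical two-state chain; all parameter values are made up.

```python
# Sketch: approximating the equilibrium probability vector of a
# discrete time Markov chain by iterating P_{t+1} = A P_t (Eq. 1.A3).
# Column j of A holds the probabilities a_ij of jumping from state j
# to state i, so each column sums to 1.

def step(A, P):
    """One application of Eq. (1.A3): returns the product A P."""
    n = len(P)
    return [sum(A[i][j] * P[j] for j in range(n)) for i in range(n)]

def equilibrium(A, P0, tol=1e-12, max_steps=100000):
    """Iterate P_{t+1} = A P_t until successive vectors agree within tol."""
    P = P0
    for _ in range(max_steps):
        P_next = step(A, P)
        if max(abs(a - b) for a, b in zip(P, P_next)) < tol:
            return P_next
        P = P_next
    return P

A = [[0.9, 0.2],   # a_11, a_12
     [0.1, 0.8]]   # a_21, a_22
P = equilibrium(A, [1.0, 0.0])   # start deterministically in state 1
```

For this chain the balance equation 0.1 P(1) = 0.2 P(2) together with normalization gives P = [2/3, 1/3], which the iteration should reproduce.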

1.A1.11. Continuous Time Markov Chains

In a discrete time Markov chain, the state may incur no more than one jump per
time step. Suppose now that X_t is a continuous time, stationary Markov chain. In analogy
with the discrete time case, we make the following assumptions:
A1) The probability that the chain will be in state i at time t + w given that it was in
state j, j ≠ i, at time t is given by

    P(X_(t+w) = i | X_t = j) = λ_ij w + o(w)                                (1.A5)

for some bounded positive number λ_ij, where o(w), the "little oh" function, de-
notes a function with the property

    lim_{w→0} o(w)/w = 0

A2) There are no instantaneous states, that is, the probability that the chain will incur
two or more transitions in the interval (t, t + w) is o(w).
It follows from Eq. (1.A5) that the probability that the chain will not be in state j at
time t + w given that it was in state j at time t is

    1 - P(X_(t+w) = j | X_t = j) = P(X_(t+w) ≠ j | X_t = j) = λ_jj w + o(w)    (1.A6)

where

    λ_jj ≜ Σ_{i∈S, i≠j} λ_ij

Next, we consider the opposite event, whose probability is

    P[X_τ = j, τ∈(t, t + w] | X_t = j]

This event can be decomposed into the following mutually exclusive events:
    either no transition occurs during the interval (t, t + w] and the process rests in
    state j,
    or at least two transitions occur, the first one being from j to some other state
    and the last one being from some state back to j.
By assumption A2, the probability of the second event is o(w). Therefore, the probability
that the process rests in state j during w time units is

    P[X_τ = j, τ∈(t, t + w] | X_t = j] = P(X_(t+w) = j | X_t = j) - o(w)
                                       = 1 - λ_jj w + o(w)                  (1.A7)

The parameters λ_jj and λ_ij, i, j ∈ S, are the transition rates associated with state j.
Next we show that the transition rates completely characterize the behavior of the
Markov chain. Specifically, we consider the following questions.
1. How long does the Markov chain spend in some state before it moves to the next
state?
2. How does the process "decide" which will be its next state?
3. How do state probabilities evolve in time?
The first important result concerns the distribution of the intertransition times. As-
sume that the Markov chain visits state j at time t. Let the random variable T_j denote the
total time the process rests in that state before jumping to another state. Then the distri-
bution function of T_j can be written

    F_j(w) = P(T_j ≤ w | X_t = j) = 1 - P[X_τ = j, τ∈(t, t + w] | X_t = j]

The last term of the above equation is the probability that the process will rest in state j
for at least w time units. This event can be decomposed into two simpler ones, namely

    {X_τ = j, τ∈(t, t + u] | X_t = j}

and

    {X_τ = j, τ∈(t + u, t + w] | X_(t+u) = j}

where u < w. Hence,

    1 - F_j(w) = P[X_τ = j, τ∈(t, t + u], and X_τ = j, τ∈(t + u, t + w] | X_t = j]

and, using the Markov property,

    1 - F_j(w) = P[X_τ = j, τ∈(t + u, t + w] | X_(t+u) = j] P[X_τ = j, τ∈(t, t + u] | X_t = j]

              = P[X_τ = j, τ∈(t + u, t + w] | X_(t+u) = j] [1 - F_j(u)]

Rearranging terms in the above yields

    P[X_τ = j, τ∈(t + u, t + w] | X_(t+u) = j] = (1 - F_j(w)) / (1 - F_j(u))

By Eq. (1.A7), P[X_τ = j, τ∈(t + u, t + w] | X_(t+u) = j] = 1 - λ_jj (w - u) + o(w - u). Hence,

    -λ_jj (w - u) + o(w - u) = (1 - F_j(w)) / (1 - F_j(u)) - 1

                             = [F_j(u) - F_j(w)] · 1/(1 - F_j(u))

Dividing by (w - u) and taking the limit u → w on both sides of the above yields

    -λ_jj = d ln[1 - F_j(w)] / dw

from which we obtain

    F_j(w) = 1 - e^(-λ_jj w)

This implies that the intertransition times of a Markov chain are exponentially distrib-
uted. Again, this result justifies the fact that continuous time Markov chains and expo-
nential random variables share the memoryless property.
Consider now the sequence of the various states the process visits successively. Let t_k
be the transition epoch and Y_k the corresponding state visited upon the kth transition, k =
0, 1, …. The time t_(k+1) - t_k between the kth and the (k + 1)th transitions is an exponential
random variable. Hence, Y_k is a chain with irregular transition epochs. It can easily be
verified that Y_k is a Markov chain. For this process, the one-step transition probabilities
can be computed as follows:

    P(Y_(k+1) = i | Y_k = j) = lim_{w→0} P(X_t(k+1) = i | X_t(k+1) ≠ j and X_(t(k+1)-w) = j)
                                                        (by the Markov property)

                             = lim_{w→0} (λ_ij w + o(w)) / (λ_jj w + o(w)) = λ_ij / λ_jj

Therefore, the probability of a conditional transition to state i given that a transition oc-
curs is independent of the intertransition time. The discrete time Markov chain Y_k is
called the embedded chain of X_t.
Next, we derive equations that describe the behavior of continuous time Markov
chains, both during transient times and in equilibrium. Setting τ = t + w in Eq. (1.A1)
yields

    P_(t+w)(i) = Σ_{j∈S} P(X_(t+w) = i | X_t = j) P_t(j)

              = P(X_(t+w) = i | X_t = i) P_t(i) + Σ_{j≠i} P(X_(t+w) = i | X_t = j) P_t(j)

Subtracting P_t(i) and dividing by w both sides of the above equation yields

    [P_(t+w)(i) - P_t(i)]/w = -[1 - P(X_(t+w) = i | X_t = i)]/w · P_t(i)
                              + (1/w) Σ_{j≠i} P(X_(t+w) = i | X_t = j) P_t(j)

Taking the limit w → 0 on both sides of the above and applying Eqs. (1.A5) and (1.A6)
yields

    dP_t(i)/dt = -λ_ii P_t(i) + Σ_{j≠i} λ_ij P_t(j)                         (1.A8)

which is known as the forward Chapman-Kolmogorov equation of continuous time
Markov chains. As in the discrete time case, the Chapman-Kolmogorov equations can be
expressed in vector form, thus

    dP_t/dt = A P_t                                                         (1.A9)

where P_t is the column vector of state probabilities at time t and A is defined by

    A = [ -λ_11   λ_12   λ_13  …
           λ_21  -λ_22   λ_23  …
            ⋮      ⋮       ⋮      ]

The vector of equilibrium probabilities, assuming they exist, is defined by

    P = lim_{t→∞} P_t

By definition, P is independent of time; hence,

    dP/dt = 0

and Eq. (1.A9) becomes

    A P = 0                                                                 (1.A10)

which is the algebraic Chapman-Kolmogorov equation. A sufficient condition for a
steady state to exist is that the embedded Markov chain Y_k be irreducible and consist of
positive recurrent states (Heyman and Sobel, 1982).
The vector P is computed using Eq. (1.A10) and the normalization equation. As in
the discrete time case, the possibility of solving this system of equations in reasonable
computational time depends on the number of states of the corresponding Markov chain.
Finally, we remark that a positive recurrent Markov chain X_t with equilibrium prob-
abilities P(i) is ergodic, that is, for any function g(x) such that Σ_i |g(i)| P(i) < ∞,

    lim_{t→∞} (1/t) ∫₀^t g(X_τ) dτ = Σ_{i∈S} g(i) P(i)    almost everywhere    (1.A11)

This result (for a proof see e.g. Ross, 1970) is very useful because it permits the estima-
tion of a performance measure of a given system using the sample path obtained by a
single simulation run of the system.
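To make the ergodic estimate of Eq. (1.A11) concrete, here is a sketch of ours (not from the book) for a hypothetical two-state machine that fails at rate λ and is repaired at rate μ; holding times are exponential, as derived above, and the time average of the indicator g(X_t) = 1{X_t = up} estimates the equilibrium availability μ/(λ + μ).

```python
# Sketch: estimating an equilibrium quantity of a continuous time Markov
# chain from a single sample path, as permitted by Eq. (1.A11).
# A machine alternates between state 1 (up) and state 0 (down); the
# failure rate lam and repair rate mu below are illustrative values.
import random

def availability_estimate(lam, mu, n_transitions, seed=0):
    """Time average of g(X_t) = 1{X_t = 1} over one simulated path."""
    rng = random.Random(seed)
    state, total_time, up_time = 1, 0.0, 0.0
    for _ in range(n_transitions):
        rate = lam if state == 1 else mu      # lambda_jj of current state
        holding = rng.expovariate(rate)       # exponential sojourn time
        if state == 1:
            up_time += holding
        total_time += holding
        state = 1 - state                     # two states: jump to the other
    return up_time / total_time

est = availability_estimate(lam=1.0, mu=2.0, n_transitions=100000)
# Solving A P = 0 with normalization gives P(1) = mu/(lam + mu) = 2/3,
# so est should settle near 2/3 for a long path.
```

The same single-run recipe is used throughout the book to estimate performance measures of production systems.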
Markov chain models are commonly used to analyze queueing systems. A queueing
system is a network of servers where customers arrive, wait for service and, after being
served, go to another server or leave the system. The simplest queueing system has a sin-
gle queue, shown in Fig. 1.A2.
Single-stage queueing systems are described using the notation A/B/m/K/N/Z, where
    A and B specify the distributions of the interarrival and service times, respec-
    tively: M (memoryless) denotes the exponential distribution, E_n stands for the n-
    Erlang distribution, G is used for an arbitrary distribution, D for deterministic
    times, etc.;
    m denotes the number of parallel, identical servers serving the queue;
    K is the maximum number of customers in the queue and in service (an arrival is
    rejected if it finds K customers in the system);
    N is the size of the customer population;
    Z is the queue discipline: FIFO (first in, first out), LIFO (last in, first out), etc.
If any of the last three descriptors is missing then we assume that K = ∞, N = ∞, and
Z = FIFO.
[Figure 1.A2 depicts a finite customer population generating arrivals to a single queue
with a server; rejected customers leave immediately, served customers depart.]

Figure 1.A2. Single-stage queueing system.

Example 1.A2. M/M/1 queue: As an application, we consider an M/M/1 queue with
mean interarrival time 1/λ and mean service time 1/μ. We assume that service times and
interarrival times are independent random variables. The number of customers in the sys-
tem (queue plus service) at time t is denoted by n_t. This quantity is a nonnegative integer.
An arrival causes a transition from state n_t to state n_t + 1, whereas upon a service comple-
tion the system moves from a state n_t > 0 to n_t - 1. We call n_t a birth-death process. Fur-
thermore, due to the lack of memory of the arrival and service distributions, n_t is a con-
tinuous time Markov chain with transition rates

    λ_(n+1,n) = λ        n = 0, 1, …

    λ_(n-1,n) = μ        n = 1, 2, …

    λ_(n,n) = λ + μ      n = 1, 2, …

    λ_(0,0) = λ

All other rates are zero. The equilibrium probabilities, assuming they exist, satisfy the
algebraic Chapman-Kolmogorov equations

    λ P(0) = μ P(1)

    (λ + μ) P(n) = λ P(n - 1) + μ P(n + 1)    n = 1, 2, …

The first equation yields P(1) = ρ P(0), where ρ ≜ λ/μ. From the other equations we ob-
tain P(n) = ρ^n P(0), n = 0, 1, …. Finally, P(0) is determined from the normalization equa-
tion (1 + ρ + ρ² + …) P(0) = 1, which, for ρ < 1, yields P(0) = 1 - ρ. Hence, the equilib-
rium probabilities are written as

    P(n) = ρ^n (1 - ρ)    n = 0, 1, …



which imply that the steady state of the M/M/1 system has a geometric distribution. The
constant ρ is called the traffic intensity or utilization factor of the system because it ex-
presses the mean time required by the server to serve all the customers arriving during
one time unit. The condition ρ < 1 ensures that as n → ∞, P(n) tends to 0, and the num-
ber of customers in the system is bounded almost everywhere. This property is referred to
as stability. The inequality

    ρ < 1

is the stability condition of the M/M/1 queue.
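As a quick numerical check of ours (not part of the text), the balance equations above can be iterated directly: starting from P(0) = 1 and P(1) = ρ, the recursion P(n+1) = [(λ + μ)P(n) - λP(n-1)]/μ reproduces, after normalization, the geometric distribution ρ^n(1 - ρ) on a truncated state space. The rates below are illustrative.

```python
# Sketch: solving the algebraic Chapman-Kolmogorov equations of the
# M/M/1 queue numerically on a truncated state space {0, 1, ..., N}
# and comparing with the closed form P(n) = rho^n (1 - rho).

def mm1_equilibrium(lam, mu, N):
    """Unnormalized balance recursion, then normalization (Eq. 1.A2)."""
    P = [1.0, lam / mu]                     # P(0) = 1, P(1) = rho P(0)
    for n in range(1, N):
        # (lam + mu) P(n) = lam P(n-1) + mu P(n+1): solve for P(n+1)
        P.append(((lam + mu) * P[n] - lam * P[n - 1]) / mu)
    total = sum(P)
    return [p / total for p in P]

lam, mu, N = 1.0, 2.0, 60                   # rho = 0.5, truncation at n = 60
P = mm1_equilibrium(lam, mu, N)
rho = lam / mu
closed_form = [(1 - rho) * rho**n for n in range(N + 1)]
```

For ρ = 0.5 the truncation error is of order ρ^61 and thus negligible, so the numerical and closed-form vectors agree to machine precision.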

Next we consider a stable queueing system and define the following stochastic proc-
esses:
    A_t  number of arrivals up until time t
    D_t  number of departures up until time t
    n_t  number of customers in the system at time t.
We also denote by w_k the waiting time in the system of the kth customer. Since the sys-
tem is stable, w_k and n_t are bounded almost everywhere. Furthermore, we have that
D_t = A_t - n_t.
Using the above quantities, we may define the following performance measures,
provided the limits below exist and are finite almost everywhere:
throughput:

    TH = lim_{t→∞} D_t/t = lim_{t→∞} (A_t - n_t)/t

       = lim_{t→∞} A_t/t    (because n_t is bounded)

mean number of customers in the system:

    N̄ = lim_{t→∞} (1/t) ∫₀^t n_τ dτ

mean time in the system:

    W = lim_{t→∞} (1/D_t) Σ_{k=1}^{D_t} w_k

The above quantities satisfy Little's formula (Ross, 1970; Kleinrock, 1975; Buzacott and
Shanthikumar, 1993),

    N̄ = TH · W

and, therefore, it suffices to compute two of them.

Example 1.A3. We compute the performance measures of the M/M/1 system. Let a_k
denote the time of arrival of the kth customer. Since A_t arrivals will occur up until time t,
it follows that

    a_(A_t) ≤ t < a_(A_t + 1)

Dividing by A_t yields

    a_(A_t)/A_t ≤ t/A_t < a_(A_t + 1)/A_t                                   (1.A12)

Since A_t → ∞ almost everywhere as t → ∞ and the interarrival times are independent,
applying the strong law of large numbers yields

    lim_{t→∞} a_(A_t)/A_t = lim_{k→∞} a_k/k = 1/λ

where a_0 ≜ 0 and the last equality follows from the fact that all the arrivals are accepted.
Similarly, we see that

    lim_{t→∞} a_(A_t + 1)/A_t = lim_{t→∞} [a_(A_t + 1)/(A_t + 1)] [(A_t + 1)/A_t] = 1/λ

By inverting the terms of inequalities (1.A12) we obtain

    A_t/a_(A_t) ≥ A_t/t > A_t/a_(A_t + 1)

Like previously, A_t/a_(A_t) and A_t/a_(A_t + 1) tend to λ as t → ∞ and therefore

    TH = lim_{t→∞} A_t/t = λ

To compute the mean number of customers in the system, we apply Eq. (1.A11) with
g(x) = x. This yields

    N̄ = lim_{t→∞} (1/t) ∫₀^t n_τ dτ = Σ_{n=0}^∞ n P(n) = ρ/(1 - ρ)

Finally, by applying Little's formula we obtain

    W = 1/(μ(1 - ρ))
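These three measures can be checked against one another numerically. The sketch below (an illustration of ours, with made-up rates) evaluates N̄ by truncated summation of n·P(n) over the geometric equilibrium distribution and confirms both the closed form ρ/(1 - ρ) and Little's formula N̄ = TH·W.

```python
# Sketch: numerical check of the M/M/1 performance measures and of
# Little's formula N = TH * W, using the equilibrium distribution
# P(n) = rho^n (1 - rho) derived in Example 1.A2.

def mm1_measures(lam, mu, n_max=2000):
    """Return (TH, N, W) for a stable M/M/1 queue (rho = lam/mu < 1)."""
    rho = lam / mu
    assert rho < 1, "stability condition of the M/M/1 queue"
    TH = lam                                        # throughput equals lambda
    # mean number in system: truncated sum of n * P(n), cf. Eq. (1.A11)
    N = sum(n * rho**n * (1 - rho) for n in range(n_max))
    W = 1 / (mu * (1 - rho))                        # mean time in system
    return TH, N, W

TH, N, W = mm1_measures(lam=3.0, mu=4.0)
# Here rho = 0.75, so N should be close to 0.75/0.25 = 3 and N = TH * W.
```

The truncation at n_max is harmless because the terms n·ρ^n decay geometrically.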
2
FUNDAMENTALS OF SIMULATION MODELING

Production systems are dynamic systems, into which raw parts enter and, after being
processed by machines, exit as finished products. A dynamic system is a collection of
entities that evolve over time according to certain laws. The variables that describe the
system at each time instant constitute the state of the system and are called the state vari-
ables. Evolution can be conceptualized as a sequence of states that the system visits dur-
ing a period of observation. We shall refer to such a sequence as the trajectory or sample
path of the system. A model of a dynamic system is a symbolic expression of the under-
lying laws that relate the current state to past system states.
A dynamic model is classified as deterministic or uncertain depending on whether its
parameters are completely or partially/imprecisely known. There are several frameworks
to represent knowledge under uncertainty, such as probability theory, possibility (fuzzy
set) theory, the theory of evidence, etc. In this book, we adopt the first framework, as-
suming that randomness is the only culprit of incomplete or imprecise knowledge.
Henceforth, uncertain models will be referred to as stochastic models and their parame-
ters will be expressed as random variables drawn from known distributions. The next
example illustrates the above concepts.

Example 2.1. The synchronous, two-stage production line of Example 1.2 is a sto-
chastic dynamic system. The entities of the system are the machines M_1 and M_2 and the
buffer B. The state of the system during the kth operation cycle is described by the triplet
(y_(1,k), z_k, y_(2,k)), where y_(i,k) is the state of M_i, i = 1, 2, and z_k is the number of parts between
M_1 and M_2 in the beginning of that cycle. In order to estimate the expected throughput, we
use two more state variables, the cumulative production of machines M_1 and M_2. The
system evolves according to the following rules:
    There is an infinite source of raw parts before machine M_1 and an infinite sink
    for finished items after M_2. Machine M_1 supplies M_2 with parts and a machine
    can produce at most one part during one cycle. At any time instant, the number
    of semi-finished parts is a nonnegative integer less than or equal to BC.
    If M_i is operational and neither starved nor blocked, then it produces one piece
    during the current cycle; it may break down at the end of the cycle with prob-
    ability p_i or survive with probability 1 - p_i.
    If M_i happens to be starved or blocked in the beginning of one cycle, then it can-
    not break down at the end of that cycle.
    If M_i is down in the beginning of a cycle then it is repaired at the end of the cy-
    cle with probability r_i or it remains down for one more cycle with the comple-
    mentary probability.
The above rules constitute a model for the stochastic system herein.

This chapter is an introduction to the simulation of production systems. Simulation is
a modeling technique that uses the computer to generate a possible sample path of a dy-
namic system of interest.
In the first two sections, we present two approaches to simulation which have a
common point of departure: a discrete event representation of the system's dynamics. The
state of the system consists of the cumulative production of each machine and the number
of items that are stored in each buffer at all time instants. In the first approach, the state is
updated whenever a workpart completes processing on a machine and proceeds to an-
other machine for the next operation. This is a standard method for simulating production
systems. The second approach observes the system at infrequent time points, when groups
of parts complete some operation at a machine, and uses analysis to keep track of the sys-
tem in the interim.
The last two sections present methods for generating random variables with known
distributions and statistical methods for analyzing the outputs of simulation experiments.
These methods can be found in several texts on simulation modeling (e.g. Banks and Car-
son, 1984; Law and Kelton, 1991; Ross, 1990) and are included here for the convenience
of the reader.

2.1. SYSTEMS DESCRIBED BY DIFFERENTIAL OR DIFFERENCE EQUATIONS

Consider a general deterministic system evolving in the time interval [0, t_max]. Let
x(t) be the vector of state variables at time t, t∈[0, t_max]. For example, the state of a pro-
duction system comprises the cumulative production of machines and buffer levels.
Given the state at time 0, we can simulate the evolution of the system by employing a
model that transforms x(0) into a sample path, that is, a sequence of states in which the
system will be at subsequent instants.
A class of continuous time processes arising in many practical situations is given by
the solutions of differential equations expressed compactly as

    dx(t)/dt = F_C[x(t), t]                                                 (2.1)

with x and F_C being vectors of suitable dimension. Obviously, this equation cannot be
numerically exploited to simulate the evolution of the system since deriving x(t) from
x(0) requires an uncountable number of evaluations. It is, however, possible to approxi-
mate this differential equation by some difference equation. This requires first that we
discretize the continuous interval 0 ≤ t ≤ t_max into a finite discrete set of times, t_0 = 0,
t_1 = τ, …, t_k = kτ, …, t_K = t_max, where K is a sufficiently large integer and τ is a step of size
t_max/K.
By considering the Taylor series expansion for x(t_(k+1)) = x(t_k + τ) and using Eq. (2.1)
we obtain the first-order approximation (Hildebrand, 1974)

    x(t_(k+1)) ≈ x(t_k) + τ F_C[x(t_k), t_k]

The above can be written compactly as follows. First, we replace the argument t_k in F_C
by t_(k+1) - τ and then we define the function F_D[x(t_k), t_(k+1)] ≜ x(t_k) + τ F_C[x(t_k), t_(k+1) - τ]. This
yields

    x(t_(k+1)) = F_D[x(t_k), t_(k+1)]                                       (2.2)

Equation (2.2) describes the evolution of a discrete time system, which can be simu-
lated as follows:

Algorithm 2.1. Discrete time model

(a) Initialize. Set t = 0, x = x(0).
(b) Find next time. Set t := t + τ.
(c) Test. If t > t_max, then stop.
(d) Adjust state. Set x := F_D(x, t). Go to (b).
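As a concrete illustration of ours (not part of the text), Algorithm 2.1 applied to the scalar equation dx/dt = -x, whose exact solution is e^(-t), reproduces the exponential decay to first-order accuracy in τ:

```python
# Sketch: Algorithm 2.1 for the scalar system dx/dt = -x.
# Here F_D(x, t) = x + tau * F_C(x, t - tau), per Eq. (2.2).
import math

def F_C(x, t):
    """Right-hand side of Eq. (2.1) for this example."""
    return -x

def simulate(x0, tau, t_max):
    """Discrete time model of Algorithm 2.1; returns the final state."""
    t, x = 0.0, x0                        # (a) initialize
    while True:
        t += tau                          # (b) find next time
        if t > t_max + 1e-9:              # (c) test (tolerance for float steps)
            return x
        x = x + tau * F_C(x, t - tau)     # (d) adjust state, Eq. (2.2)

x_final = simulate(x0=1.0, tau=0.001, t_max=1.0)
# For small tau, x_final should be close to exp(-1) ≈ 0.368.
```

Halving τ roughly halves the discretization error, as expected from the first-order Taylor approximation.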

In a discrete time system, the state changes at regular epochs and remains constant in
the intervals [t, t + τ). Discrete time models arise naturally in the study of a host of physi-
cal phenomena that can be described or approximated by difference equations. A typical
example of such systems in manufacturing is the synchronous production line that we
described in Examples 1.1, 1.2, and 2.1. Yet, discrete time systems cannot describe the
flow of parts in production systems in which the machines produce asynchronously and,
therefore, state transitions occur at irregular epochs. Asynchronous production systems
belong to a more general class of systems, called discrete event systems.

2.2. DISCRETE EVENT SYSTEMS

2.2.1. Conventional Simulation Models

A discrete event system is a system that moves randomly from state to state at ran-
dom discrete points in time, while in between such points the state is constant. State tran-
sitions are triggered by occurrences of discrete events belonging to a finite set E. At each
occupied state, all the events in E compete for causing the next state transition. Each of
these events has a clock indicating the time at which the event is scheduled to occur.
Let t_k be the kth transition epoch, e_k the kth event, e_k ∈ E, and x(t_k) the corresponding
state of the system. Let T_(k,e) be the clock reading corresponding to event e∈E upon the kth
transition. The smallest clock reading determines the next event and the next transition
epoch of the system, i.e.,

    t_(k+1) = min {T_(k,e): e∈E}

    e_(k+1) = argmin {T_(k,e): e∈E}

The following example illustrates the above definitions and introduces the funda-
mental concepts of discrete event simulation.

Example 2.2. Figure 2.1 depicts a production line with two machines M_1 and M_2 and
an intermediate buffer B of infinite capacity. Parts are loaded on the first machine from
an infinite source and when they complete service at that machine they move to the
buffer. Then they proceed to machine M_2, from which they are sent to an infinite sink of
finished products. The processing times of machines are fixed constants. For simplicity,
we assume that B has infinite capacity and the processing times of M_1 are shorter than the
processing times of M_2. In this situation, the buffer level keeps growing without bound
but machine M_1 is never blocked and M_2 is never starved. Suppose that we want to keep
track of the buffer level in the interval [0, t_max]. A discrete event model of the system can
be developed as follows.

[Figure 2.1 shows the line: source → M_1 → buffer B → M_2 → products.]

Figure 2.1. Two-machine production line.

Let τ_1 and τ_2 be the constant processing times of parts at M_1 and M_2, respectively,
where τ_1 < τ_2. Let also z(t) denote the number of semi-finished parts in the system, that is,
the number of parts in B plus the one that is being processed by M_2 at time t. The number
of parts is increased when a part departs from M_1 and enters the buffer and it is decreased
when a part departs from M_2. Hence, the departures of parts from the machines are the
events of interest for the system. Let us call these events arrival and departure, denoting
them as 1 and 2, respectively. We consider the system at time t_k when event e_k takes
place. Upon the occurrence of this event, the state z(t) is increased or decreased, accord-
ingly, using the following state adjusting equation

    z(t_k) = z(t_(k-1)) + 1 if e_k = 1,  z(t_k) = z(t_(k-1)) - 1 if e_k = 2

Let T_(k,1) denote the time of next arrival at buffer B right after the occurrence of the kth
event. In a dual fashion, let T_(k,2) denote the time of next departure from B right after t_k. If
e_k = 1, that is, an arrival at the buffer is observed at time t_k, then M_1 will initiate a new
production cycle immediately and we must compute a new time of next arrival T_(k,1). The
time of next departure remains unchanged. Likewise, if a departure occurs then the time
of next departure T_(k,2) must be computed, while the time of next arrival remains un-
changed. Hence, depending on the type of the current event e_k, the clock reading of
event e, e = 1, 2, is modified using the following event scheduling equation:

    T_(k,e) = t_k + τ_e     if event e is observed at time t_k, that is, e = e_k
    T_(k,e) = T_(k-1,e)     if e is not the current event

The next event of the system will occur at time

    t_(k+1) = min {T_(k,e): e = 1, 2}

and, therefore, its type is determined from

    e_(k+1) = argmin {T_(k,e): e = 1, 2}

We then follow the same procedure to simulate the next event, and so forth until time
t_max.

The above example contains the most important ideas needed to develop a discrete
event model. For expository convenience, we have ignored a host of events, such as ma-
chine failures, starvation, and blocking, which will be considered in the next chapters.
In view of the above example, we can now formulate the state equations for a general
deterministic discrete event system. Let T_k be a vector whose eth element T_(k,e) is the
scheduled time of the occurrence of event e, right after the occurrence of the kth event.
Generalizing Example 2.2, the evolution of a discrete event system can be described by
three sets of equations:
a set of event scheduling equations

    T_(k,e) = G_e(T_(k-1), e_k, t_k)

for suitable event scheduling functions G_e, e∈E, or, using the vector of the
event-times,

    T_k = G(T_(k-1), e_k, t_k)                                              (2.3)

two equations for determining the type and the time of occurrence of the next
(global) event in the system

    t_(k+1) = min {T_(k,e): e∈E}
                                                                            (2.4)
    e_(k+1) = argmin {T_(k,e): e∈E}

and a set of state adjusting equations

    x(t_(k+1)) = F[x(t_k), e_(k+1)]                                         (2.5)

for a suitable state transition function F.

To simulate a deterministic discrete event system we proceed iteratively as follows:
First, we use Eq. (2.3) to initialize the clock settings for all event types. Next we use Eq.
(2.4) to determine the time of occurrence and the type of the next (most imminent) event
of the system. Then we invoke Eq. (2.5) to adjust the state of the system according to this
event and, finally, we use Eq. (2.3) to schedule a new time of occurrence for each event
type based on the adjusted state. The following algorithm describes a deterministic dis-
crete event model.

Algorithm 2.2. Conventional discrete event model

(a) Initialize. Set t = 0, e = 0 (the null event), T = 0, where 0 ≜ [0 0 … 0] is the zero
vector, and x = x(0); schedule times of next events: T = G(0, 0, 0).
(b) Find most imminent event. Set t = min {T_j: j∈E}, e = argmin {T_j: j∈E}.
(c) Termination condition. If t > t_max, then stop.
(d) Execute event e. Adjust the state, x := F(x, e); schedule times of next events, T :=
G(T, e, t); go to (b).
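To make Algorithm 2.2 concrete, here is a sketch of ours (a simplified rendering of Example 2.2, not the book's own code) that simulates the two-machine line with constant processing times, scheduling arrival and departure events through their clock readings. With τ_1 = 1, τ_2 = 2 and t_max = 10, machine M_1 releases 10 parts while M_2 completes 4, leaving 6 semi-finished parts in the system.

```python
# Sketch: conventional discrete event simulation (Algorithm 2.2) of the
# two-machine line of Example 2.2.  Event 1 = departure from M1 (arrival
# at buffer B), event 2 = departure from M2.  Buffer capacity is infinite.
INF = float("inf")

def simulate_line(tau1, tau2, t_max):
    t = 0.0
    z = 0                          # parts in B plus the part on M2
    arrivals = departures = 0
    T = {1: tau1, 2: INF}          # clock readings; M2 is idle at time 0
    while True:
        e = min(T, key=T.get)      # most imminent event, Eq. (2.4)
        t = T[e]
        if t > t_max:              # termination condition
            break
        if e == 1:                 # part leaves M1 and enters B
            z += 1
            arrivals += 1
            T[1] = t + tau1        # M1 starts a new part immediately
            if z == 1:             # M2 was idle: schedule its departure
                T[2] = t + tau2
        else:                      # part leaves M2
            z -= 1
            departures += 1
            T[2] = t + tau2 if z > 0 else INF
    return z, arrivals, departures

z, arrivals, departures = simulate_line(tau1=1.0, tau2=2.0, t_max=10.0)
```

Because τ_1 < τ_2, the buffer level z grows roughly linearly with time, which is exactly the unbounded growth noted in Example 2.2.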

This algorithm will be referred to as the conventional discrete event algorithm. Its
computational requirements are proportional to the total number of events that are exe-
cuted in t_max time units. In production systems, a state transition occurs when a workpart
is released from a machine. Obtaining accurate estimates of system performance often
requires running the simulator for a production volume of, say, 100,000 parts. If the sys-
tem is small, then this task requires reasonable CPU times. However, in designing a new
system where a number of alternative decisions have to be evaluated, or in analyzing a
large network of machines, it may take hours or even days to obtain accurate perform-
ance estimates. In the next section, we shall derive conditions under which it is possible
to reduce the number of computations.
In stochastic discrete event systems, state transitions and scheduled event-times are
driven by stochastic phenomena such as random demand, random machine breakdowns,
or random processing times. In particular, stochastic discrete event systems with a countable state space are known as generalized semi-Markov processes (GSMP's). A GSMP
can be modeled as a deterministic discrete event system with the only difference being
that the state adjusting function F and the event scheduling function G are also functions
of appropriate random variables or stochastic processes with known distributions. For the
two-machine example, suppose that parts are inspected after each stage and may be sent
FUNDAMENTALS OF SIMULATION MODELING 49

back for reprocessing or pass to the next stage with given probabilities. The state adjusting equation becomes

z(tk) = z(t_{k−1}) + 1   if ek = 1 and the part passes the inspection at M1
      = z(t_{k−1}) − 1   if ek = 2 and the part passes the inspection at M2
      = z(t_{k−1})       otherwise

This equation is reminiscent of the one-step transition probabilities of Markov chains (see
Appendix 1.A1.10). Furthermore, if the processing times are random variables, then the
event scheduling equations become

T_{k+1, e} = tk + τ_{k, e}   if event e is observed at time tk, that is, e = ek
           = T_{k, e}        if e is not the current event

where τ_{k, e} denotes the processing time at machine M_e, e = 1, 2, of the part that causes the
kth event.
To obtain legitimate sequences of the processing times and the inspection outcomes
for each machine, we invoke appropriate functions called random variate generators.
This issue will be discussed in Section 2.3.

2.2.2. Hybrid Discrete Event Models

Speed-up of simulation can be achieved by reducing the number of state transitions


that are executed by the simulator. This is possible if the following hold:

Decomposability Conditions. The sample path of the system can be partitioned into
consecutive subsequences of event epochs (tq, t_{q+1}, ..., t_{m−1}, tm), tq ≤ t_{q+1} ≤ ... ≤ tm, such
that
(i) the event em with its corresponding time tm and
(ii) all the intermediate states x(t), t ∈ (tq, tm)
can be derived directly from x(tq) using analysis, rather than simulating the system explicitly.

Next we describe a system for which the decomposability conditions hold.

Example 2.3. Consider the two-machine production line of Example 2.2, where now
machine M2 undergoes maintenance service every τ3 time units of operation. The service
time is τ4 time units. Let y(t) denote the state of M2, where y(t) = 1 means that the machine is up and y(t) = 0 means that it is down at time t. The variables z(t) and y(t) constitute the state of the system. Again, we assume that the intermediate buffer B has infinite
capacity and that M1 is faster than M2 (τ1 < τ2), which imply that M1 is never blocked and
M2 is never starved. A conventional model of this system would observe the following
events: (1) departure from M1, (2) departure from M2, (3) stoppage of M2, (4) service
completion. Next we shall see that for this system, the decomposability conditions are satisfied. To show this, we develop a different model. This model observes only event 3 but
uses two more state variables,
a(t) remaining time-to-next arrival at buffer B
d(t) remaining time-to-next departure from B
which will be referred to as the transient times of B. At time 0 the transient times are
a(0) = τ1 and d(0) = τ2 (if the buffer is empty, then M2 is starved during the first τ1 time
units and we set d(0) = τ1 + τ2). Let tk denote the event-times of the conventional model.
Suppose that at time tq, machine M2 is stopped for maintenance. Let a(tq−) and d(tq−) be the
transient times, and z(tq−) the number of parts in the system right before the occurrence of
this event. Since maintenance service lasts τ4 time units, the transient time right after the
beginning of the maintenance is adjusted as follows

d(tq) = d(tq−) + τ4

However, the remaining time-to-next arrival and the number of parts in the system are
not affected by the stoppage of M2. Hence,

a(tq) = a(tq−),   z(tq) = z(tq−)

We now confirm the decomposability condition for this system. Let tm denote the time of
the next stoppage of machine M2. Then, since M2 was restored at time tq + τ4, the time tm
is given by

tm = tq + τ4 + τ3

and condition (i) is in effect. Next, we show that the state of the system during (tq, tm) can
be derived from the state at time tq. At any time t ∈ [tq, tm), the state variables of the system are given by

y(t) = 0   if M2 is under service, that is, t < tq + τ4
     = 1   otherwise

and

z(t) = z(tq) + (number of arrivals in B during [tq, t]) − (number of departures from B during [tq, t])

Since the first arrival occurs at time tq + a(tq), the second one at tq + a(tq) + τ1, the third at
tq + a(tq) + 2τ1, and so forth, we have that

(number of arrivals in B during [tq, t]) = 1 + ⌊(t − tq − a(tq)) / τ1⌋   if t ≥ tq + a(tq)
                                         = 0                             otherwise

where ⌊x⌋ is the largest integer less than or equal to x. The number of departures from B
can be computed similarly. From the above we see that the state of the system at any time
t < tm can be derived directly from the state at time tq. To be able to repeat the above procedure at time tm, we must also compute the transient times a(tm−) and d(tm−). The first of
these times is computed as follows

a(tm−) = (time of next arrival in B after time tm) − tm
       = tq + a(tq) + (number of arrivals in B during [tq, tm]) · τ1 − tm

The transient time d(tm−) can be computed similarly. Therefore, the state variables at time
tm− can be derived directly from the variables at tq and the system is decomposable.
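The closed-form arrival count used above can be cross-checked against a brute-force enumeration of the arrival epochs tq + a(tq), tq + a(tq) + τ1, .... A small sketch (the helper names are ours, not the book's):

```python
import math

def arrivals_closed_form(t_q, a_q, tau1, t):
    """Number of arrivals in B during [t_q, t] via the floor formula."""
    if t >= t_q + a_q:
        return 1 + math.floor((t - t_q - a_q) / tau1)
    return 0

def arrivals_by_enumeration(t_q, a_q, tau1, t):
    """The same count, obtained by listing the arrival epochs explicitly."""
    count, epoch = 0, t_q + a_q
    while epoch <= t:
        count += 1
        epoch += tau1
    return count
```

Both routines agree, which is precisely what lets the hybrid model skip the individual arrival events.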

In the above example the state of the system is decomposed into fast and slowly
varying states. The former are the transient times and the level z(t) of buffer B and the
latter is the state y(t) of machine M2. The set of slowly varying states constitutes the class
of macroscopic states and the remaining ones are the microscopic states, which can be
traced using analysis. In an analogous manner, we define macroscopic and microscopic
events according to the type of state the system visits at the corresponding state transitions. For example, the events eq and em are macroscopic, whereas e_{q+1}, e_{q+2}, ..., e_{m−1}
are microscopic events.
If such decomposition is possible, then we can construct a hybrid simulation/analytic
model, which is equivalent to the conventional, discrete event model (2.3)-(2.5). The hy-
brid model will observe state transitions triggered by occurrences of macroscopic events
belonging to a finite set EM, which is a subset of E. In Example 2.3, the set EM contains
only one event, namely, the stoppage of M2.
Let en be the nth macroscopic event in the sample path of the system and tn the time
when this event is observed. Upon the occurrence of en all the events in EM compete to
cause the next transition. Let Tn be the vector of the clocks corresponding to macroscopic
events updated right after the occurrence of the nth event; that is, T_{n, e} is the scheduled
time of next occurrence of event e ∈ EM after time tn. Decomposability condition (i) requires that there exists a function Gs that keeps track of future event-times on the basis of
the current state x(tn), i.e.,

Tn = Gs[x(tn), T_{n−1}, en, tn]    (2.6)

This equation is the analog to Eq. (2.3).


The time and the type of the (n + 1)th macroscopic event to occur in the system are
computed from

t_{n+1} = min {T_{n, e} : e ∈ EM}
                                        (2.7)
e_{n+1} = argmin {T_{n, e} : e ∈ EM}

Now consider the interval [tn, t_{n+1}) between successive macroscopic events. By condition (ii), there must be a function Fu such that

x(t) = Fu[x(tn), tn, t]    (2.8)

for every t ∈ [tn, t_{n+1}). Using this equation we can keep track of the transitions to microscopic states in the interval between two successive macroscopic events. This equation is
invoked to update the state of the system right before the occurrence of the next macroscopic event, i.e.,

x(t_{n+1}−) = Fu[x(tn), tn, t_{n+1}]
In the conventional algorithm we used Eq. (2.5),

x(t_{k+1}) = F[x(tk), e_{k+1}]

to adjust the state of the system at time t_{k+1}. To adjust the hybrid model at time t_{n+1}, we
can use a similar equation in which t_{k+1} and e_{k+1} are replaced by t_{n+1} and e_{n+1}. Still,
some care needs to be taken of the fact that x(tk) in the above equation is the state the system occupies upon the occurrence of the most imminent microscopic event prior to e_{n+1}.
But since the state of the system is piecewise constant in the interval [tk, t_{n+1}), it immediately follows that x(tk) = x(t_{n+1}−) and Eq. (2.5) can be expressed equivalently as

x(t_{n+1}) = F[x(t_{n+1}−), e_{n+1}]
The hybrid simulator can be implemented as follows:

Algorithm 2.3. Hybrid discrete event model


(a) Initialize. Set t = 0, e = 0 (the null event), T = 0 (zero vector), x = x(0); schedule
times of next events: T = Gs(x, 0, 0, 0).
(b) Find Most Imminent Event. Store the current time r = t; set t = min {Tj : j ∈ EM},
e = argmin {Tj : j ∈ EM}.
(c) Termination Condition. If t > tmax, then set t = tmax, invoke (d1) to update the
state, and stop.
(d) Execute Event e
(d1) Update. Use analysis to update the state of the system right before the occurrence of e on the basis of the most recent state x,

x := Fu(x, r, t)

(d2) Adjust. Find the new macroscopic state occupied right after event e,

x := F(x, e)

(d3) Schedule. Update the vector of next event-times

T := Gs(x, T, e, t)

(d4) Go to (b).

The key condition under which a hybrid model is more efficient than a conventional
one is that the cost of determining and executing the next macroscopic event (i.e., updating and adjusting the state and scheduling future events) at steps (b)-(d) of the above algorithm be lower than the cost of computing the microscopic states x(t_{q+1}), x(t_{q+2}), ...,
x(t_{m−1}) by successively executing steps (b)-(d) of Algorithm 2.2. This implies that

(frequency of macroscopic events) × (computational cost of Eqs. (2.5)-(2.8)) ≤ (frequency of microscopic events) × (computational cost of Eqs. (2.3)-(2.5))

We remark that both models use Eq. (2.5). In most cases, the time spent in simulating a
macroscopic event is longer than that for a microscopic event by, at least, one order of
magnitude. Then the efficiency of the hybrid model depends crucially on the rarity of the
macroscopic events.
A generalization of the above is possible for hybrid discrete event systems whose
states involve some variables that incur jumps at discrete points in time and others that
vary continuously. The continuous states are the microscopic variables whereas the dis-
crete states are the macroscopic variables.

Example 2.4. Consider a tank of infinite capacity into which a liquid flows at a constant rate R1 and out of which the liquid flows at a rate R2(t). The level z(t) of the tank at
time t is the microscopic state of the system and it is determined as follows:

z(t) = z(0) + R1 t − ∫_0^t R2(y) dy

Suppose that R2 alternates between 0 and R, R > R1, every τ time units. Hence, R2 is a
macroscopic state variable whereas R1 is a fixed system parameter. In the beginning, the
level of the tank changes continuously at a rate R1 − R2. If R2 > R1, then the tank will become empty and its output rate will be reduced to the inflow rate. We consider two macroscopic events, namely, "tank empties" and "R2 changes", denoted by 1 and 2, respectively. Let e denote the generic next macroscopic event. A discrete event algorithm for
this example can be implemented as follows:

Algorithm 2.4. Model of a hybrid system


(a) Initialize. Input z, tmax, R1, R, and R2; set t = 0. Find times of next events: set
T2 = τ; if R2 > R1, the tank will become empty at time

T1 = z / (R2 − R1)

else set T1 = ∞ to exclude T1 from the set of candidate next-event times. In the
computer, ∞ is represented by the maximum allowable real constant, or by a
number greater than the specified simulation period tmax, say tmax + 1.
(b) Find Most Imminent Event. Store the current time r = t; determine t = min {Tj :
j = 1, 2} and e = argmin {Tj : j = 1, 2}.
(c) Termination Condition. If t > tmax, then set t = tmax, invoke (d1) to update the tank
level, and stop.
(d) Execute Event e
(d1) Update. Set z := z + (R1 − R2)(t − r).
(d2) Adjust. If e = 1 then set R2 = R1; else set R2 := R − R2 (switch to the other
outflow rate).
(d3) Schedule. If R2 > R1 then set

T1 = t + z / (R2 − R1)

else T1 = ∞. If e = 2 then set T2 = t + τ; otherwise T2 is not altered.


(d4) Go to (b).
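Algorithm 2.4 translates almost line by line into code. The sketch below assumes the outflow starts in its closed phase (R2 = 0) and returns the tank level at tmax; the function and variable names are our own.

```python
def simulate_tank(z0, R1, R, tau, t_max):
    """Hybrid simulation of the tank of Example 2.4 (Algorithm 2.4).
    Inflow R1 is constant; the outflow R2 alternates between 0 and R every
    tau time units and drops to R1 whenever the tank empties."""
    INF = t_max + 1.0              # represents "infinity", as suggested in (a)
    t, z, R2 = 0.0, z0, 0.0        # assumption: outflow starts closed
    T = {1: INF, 2: tau}           # event 1: "tank empties", 2: "R2 changes"
    if R2 > R1:
        T[1] = z / (R2 - R1)
    while True:
        r = t                      # (b) store the current time
        e = min(T, key=T.get)
        t = T[e]
        if t > t_max:              # (c) final update of the level, then stop
            return z + (R1 - R2) * (t_max - r)
        z += (R1 - R2) * (t - r)   # (d1) update the level analytically
        if e == 1:
            R2 = R1                # (d2) tank empty: outflow = inflow
        else:
            R2 = R - R2            # (d2) switch to the other outflow rate
            T[2] = t + tau         # (d3) schedule the next switch
        T[1] = t + z / (R2 - R1) if R2 > R1 else INF
```

Note that the level between events is obtained analytically; the simulator touches only the two macroscopic events.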

In Section 3.3 we shall extend the above algorithm to model a two-stage production
system in which the machines may be up or down for random periods of time drawn from
known distributions. In the next section we show how a computer can be utilized to gen-
erate random samples from these distributions.

2.3. MODELING RANDOM PHENOMENA

All models considered so far were completely deterministic. A deterministic system


has a known set of inputs to Eqs. (2.1)-(2.8), which generate a unique sample path. On
the other hand, the evolution of a stochastic system cannot be predicted precisely. In production systems, this uncertainty may be due to random demand, random processing
times of workparts or random failure and repair times of machines.
In simulation, it is possible to take into account this uncertainty by invoking the so
called random variate generators. Specifically, part of the simulation effort goes to gen-
erating sequences of random variates, which represent a possible realization of the ran-
dom parameters. Each sequence corresponds to the values assumed by a random pa-
rameter during the simulation period. Then, since all parameters are known, we proceed
with the solution of equations as if the system were deterministic. Some useful random
variate generators are presented in the next two sections.

2.3.1. Random Number Generators

By uniform random numbers or simply random numbers we usually mean random


variables U that are uniformly distributed on the interval [0, 1]. Random number generators are recurrence relations of the form

U_{n+1} = g(Un),   n = 0, 1, ...

which yield sequences of numbers that appear to be independent of each other and cover
the interval [0, 1] in a uniform arrangement.
The most widely used random number generators are the linear congruential generators (Law and Kelton, 1991). Such generators start with an initial integer value Z0,
called the seed, and yield successive random numbers U_{n+1} by computing

Z_{n+1} = (a Zn + b) mod c
                                        (2.9)
U_{n+1} = Z_{n+1} / c

where "x mod y" denotes the remainder of the division of x by y, and a, b, and Z0 are
positive integers, all smaller than c. Since Zn uniquely determines the next random number and c is an integer, Eqs. (2.9) generate at most c different random numbers ranging from
0/c to (c − 1)/c before returning to the value Z0 from which the sequence started.
The above generator should use integers whose word lengths (including the sign) are
at most 32 bits, to be implementable on any personal computer. Also, it is computationally more efficient to use a modulus that is close to a power of 2. These two conditions
are satisfied by choosing c = 2³¹ − 1. A good random number generator should have a
long period and a uniform coverage of the interval (0, 1). Note that the choice c = 2³¹ − 1
alone does not guarantee a long period for the Zn's (for example, a = 1 and b = 0 yield Zn =
Z0 for all n). Hence a and b remain to be specified.
Choosing b = 0 yields the so called multiplicative congruential generator, which is
more efficient than the linear one because it saves one addition. The period of a multiplicative congruential generator is c − 1 if c is prime and the smallest integer k for which
a^k − 1 is divisible by c is k = c − 1 (Knuth, 1981).
The generator

Z_{n+1} = (630,360,016 × Zn) mod (2³¹ − 1)

satisfies the above conditions and its period is 2³¹ − 2.
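A minimal Python sketch of this multiplicative generator follows; the multiplier and modulus come from the text, while the closure-based interface is our own.

```python
M = 2**31 - 1        # modulus c, a Mersenne prime
A = 630_360_016      # multiplier from the text

def make_generator(seed):
    """Multiplicative congruential generator: Z_{n+1} = (A Z_n) mod M,
    U_{n+1} = Z_{n+1} / M (Eqs. (2.9) with b = 0)."""
    z = seed
    def rand():
        nonlocal z
        z = (A * z) % M
        return z / M
    return rand
```

Two generators built from the same seed reproduce the same stream, which is how common or distinct random number streams for independent replications are arranged in Section 2.4.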



2.3.2. Inverse Transform

In this section we describe a method for generating random variables drawn from
general distributions. The corresponding sampled values will be referred to as random
variates to distinguish them from (uniform) random numbers.
Suppose that a stochastic system is driven by some random variable X ∈ ℝ with
known distribution function F(x). To simulate this system we must develop a generator to
obtain legitimate values for X. Since we know how to generate random numbers U and
since any measurable function of U is also a random variable (see Appendix 1.A1.3), it is
natural to ask whether a function g: [0, 1] → ℝ exists, such that the distribution of g(U)
coincides with the distribution of X. If this is the case for the stochastic system of interest,
then the underlying probability law is not altered if we replace X by g(U). Then the random variables X and g(U) are said to be stochastically equivalent in the broad sense.
This relation is denoted by X ~ g(U).
Since F(x) is an increasing function, it follows that F⁻¹(U) is also increasing in U.
Hence

F⁻¹(U) ≤ x   iff   U ≤ F(x)

From the above we have

P[F⁻¹(U) ≤ x] = P[U ≤ F(x)]
              = F(x)

because U is uniform. This shows that the function sought is F⁻¹(·), that is, the random
variable F⁻¹(U) is stochastically equivalent to X in the broad sense. Hence,

X ~ F⁻¹(U)    (2.10)

The function F⁻¹(U) is known as the inverse transform of U.


To develop a random variate generator for X we consider two distinct cases:
If X is an absolutely continuous random variable and F(x) is strictly increasing,
then, for any random number u, equation F(x) = u has a unique solution x, which
is a legitimate random variate for X.
If the distribution function contains one or more flat segments, points of discontinuity, or even both (e.g. when X is discrete), then the inverse of the distribution
function is defined as

F⁻¹(u) = inf {t : F(t) ≥ u}

for every u ∈ [0, 1]. In this case x is the smallest value of X such that

F(x) ≥ u

Example 2.5. a. If X is exponentially distributed with F(x) = 1 − e^{−μx}, then the solution to F(x) = u is

x = − ln(1 − u) / μ

that is, x is given by a function of the random number 1 − u. Stochastic equivalence ensures that since 1 − U and U have the same distribution, using u instead of 1 − u will produce a legitimate sample value for X. This saves a subtraction.
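In code, the exponential generator is one line plus a guard against the zero-probability draw u = 0 (Python's random() covers [0, 1)); the function name is ours.

```python
import math
import random

def exponential_variate(mu, u=None):
    """Inverse transform for F(x) = 1 - exp(-mu x): x = -ln(u)/mu,
    using u in place of 1 - u as discussed in the text."""
    if u is None:
        u = random.random()
        while u == 0.0:       # avoid log(0); occurs with probability ~2**-53
            u = random.random()
    return -math.log(u) / mu
```

The sample mean over many draws approaches 1/μ, as expected for the exponential distribution.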


Figure 2.2. Geometric distribution (a) and density (b) functions.

b. Suppose that X has a geometric distribution with probability mass function
P(X = k) = q(1 − q)^k, k = 0, 1, .... Since X is a discrete random variable, its probability
density function consists of impulses and its distribution function is a staircase function.
These functions are plotted in Fig. 2.2. The distribution function is given by

F(x) = q Σ_{k=0}^{∞} (1 − q)^k U(x − k),   x ∈ ℝ

where U(x) denotes the unit step function. From the above we have

F(x) = 1 − (1 − q)^{x+1},   x = 0, 1, ...

The inverse transform returns the smallest integer x such that F(x) ≥ u. Since F(x) is increasing, this is only possible iff F(x) ≥ u > F(x − 1). From these inequalities we obtain

x = ⌊ ln(1 − u) / ln(1 − q) ⌋

where ⌊t⌋ is the largest integer such that ⌊t⌋ < t. Stochastic equivalence of the random
variables U and 1 − U gives rise to a slightly different generator,

x = ⌊ ln u / ln(1 − q) ⌋

which saves a subtraction.
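A sketch of this geometric generator; note that math.floor matches the text's "largest integer strictly less than" convention except at exact integer arguments, a probability-zero case handled explicitly below. The function name is ours.

```python
import math
import random

def geometric_variate(q, u=None):
    """Inverse transform for P(X = k) = q (1 - q)^k, k = 0, 1, ...:
    x = floor(ln u / ln(1 - q)) with the strict-floor convention."""
    if u is None:
        u = random.random()
        while u == 0.0:                    # avoid log(0)
            u = random.random()
    t = math.log(u) / math.log(1.0 - q)
    x = math.floor(t)
    return x if x < t else x - 1           # strict floor at integer t
```

The sample mean over many draws approaches (1 − q)/q, the mean of this geometric distribution.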


Figure 2.3. Probability distribution (a) and density (b) functions of Example 2.5c.

c. Let a, b, and c be points on the real axis such that a < b < c. Suppose that a random
variable X attains the value c with probability 0.5 or it is uniformly distributed in the interval [a, b] with probability 0.5. The probability distribution and density functions of X
are shown in Fig. 2.3. The distribution function of X is given by

F(x) = 0.5 (x − a)/(b − a)   x ∈ [a, b]
     = 0.5                   x ∈ (b, c)
     = 1                     x ∈ [c, ∞)

In the interval [a, b], F(x) is strictly increasing and the solution to equation F(x) = u is

x = a + 2(b − a) u

Note that in this case u ∈ [0, 0.5]. Intuitively, for u > 0.5 we must have x = c. This can be
verified by observing that the inequality F(t) ≥ u > 0.5 is satisfied for every t ∈ [c, ∞).
Hence, F⁻¹(u) = inf {t : F(t) ≥ u} = min {t : t ∈ [c, ∞)} = c. To summarize, the inverse transform of F(x) yields

x = a + 2(b − a) u   if u ∈ [0, 0.5]
  = c                if u ∈ (0.5, 1]
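The resulting generator for this mixed distribution is a two-branch function (the name is ours):

```python
def mixed_variate(a, b, c, u):
    """Inverse transform for Example 2.5c: uniform on [a, b] with
    probability 0.5, equal to c with probability 0.5 (a < b < c)."""
    if u <= 0.5:
        return a + 2.0 * (b - a) * u
    return c
```

Feeding it a uniform random number u produces either a point of [a, b] or the atom at c, in the right proportions.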

From the above examples we see that the inverse-transform method requires that we
express the distribution function in closed form and solve F(x) ≥ u or F(x) = u, whichever
applies, for x. Should either task be impossible or computationally inefficient, the so
called acceptance-rejection method provides a valuable alternative.

2.3.3. Acceptance-Rejection

Let X be a random variable with probability density function f(x). The acceptance-rejection method presumes the existence of a function g(x) such that g(x) ≥ f(x) and the
equation

∫_{−∞}^{x} g(y) dy = u

can be solved for x. Note that g(x) is not a probability density function since

K ≜ ∫_{−∞}^{∞} g(x) dx ≥ ∫_{−∞}^{∞} f(x) dx = 1

Let Y be another random variable with probability density function h(x) ≜ g(x)/K. The
acceptance-rejection method is summarized by the following algorithm.

Algorithm 2.5. Acceptance-rejection method


(a) Generate a sample value x of Y (e.g. by using the inverse-transform method).
(b) Generate a random number U.

(c) If U ≤ f(x)/g(x), then set X = x (accept) and stop; otherwise go to (a) (reject).
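A generic sketch of Algorithm 2.5, with f, g, and the candidate sampler passed in as functions (the interface is our own). In the test below, a deliberately loose constant envelope (K = 2 over a uniform target) merely exercises the retry loop.

```python
import random

def acceptance_rejection(f, g, sample_y):
    """Algorithm 2.5: draw candidates from the density h = g/K via
    sample_y() and accept each with probability f(x)/g(x)."""
    while True:
        x = sample_y()              # step (a): candidate from h
        u = random.random()         # step (b): uniform random number
        if u <= f(x) / g(x):        # step (c): accept, else retry
            return x
```

Any valid envelope works, but the looser it is, the more candidates are discarded, as quantified next.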

Let Xa denote the random variable generated by the algorithm. We shall prove that
the probability density function of Xa is f(x), that is, Xa is stochastically equivalent to X.
For the event {Xa ∈ (x, x + dx)} we have two possibilities: either {Y ∈ (x, x + dx)} and acceptance is declared during the first pass of the algorithm, or the value of Y is rejected and
the algorithm yields Xa ∈ (x, x + dx) after two or more loops. This can be expressed as

P[Xa ∈ (x, x + dx)] = h(x) dx · f(x)/g(x) + [ ∫_{−∞}^{∞} (1 − f(y)/g(y)) h(y) dy ] P[Xa ∈ (x, x + dx)]    (2.11)

The first term on the right side of this equality is the probability of the joint event
{Y ∈ (x, x + dx)} and {x is accepted at step (c)}. The second term arises from rejection,
whatever the outcome Y may be, in which case the algorithm restarts. Solving Eq. (2.11)
yields

P[Xa ∈ (x, x + dx)] = h(x) dx · f(x)/g(x) / [ 1 − ∫_{−∞}^{∞} (1 − f(y)/g(y)) h(y) dy ]

and substituting K h(x) for g(x)

P[Xa ∈ (x, x + dx)] = (f(x) dx / K) / [ 1 − ∫_{−∞}^{∞} h(y) dy + (1/K) ∫_{−∞}^{∞} f(y) dy ]

                    = f(x) dx

which implies that Xa and X are stochastically equivalent.


The probability q of acceptance at step (c) is

q = ∫_{−∞}^{∞} h(x) f(x)/g(x) dx = 1/K

from which it follows that the number of iterations until the algorithm yields a random
variate is geometrically distributed on {1, 2, ...} with mean

n̄ = Σ_{n=1}^{∞} n q (1 − q)^{n−1} = q ∂[ Σ_{n=0}^{∞} (1 − q)^n ] / ∂(1 − q) = 1/q = K

Since the area under g(·) equals, by definition, K and K ≥ 1, it follows that the closer g(x)
is to f(x), the closer K is to 1 and the fewer iterations are performed.
For a discrete random variable X, the acceptance-rejection algorithm is exactly the
same as for the continuous case with f(x) and h(x) replaced by the probability mass functions f_x ≜ P(X = x) and h_x ≜ P(Y = x).

Example 2.6. Consider a nonnegative random variable X drawn from an n-Erlang(1)
distribution, n > 1, with density function

f(x) = x^{n−1} e^{−x} / (n − 1)!  U(x)

and mean n (see Appendix 1.A1.5). To apply the acceptance-rejection method we try an
exponential random variable Y with the same mean and density function

h(x) = (1/n) e^{−x/n} U(x)

Next we look at functions g(x) of the form

g(x) = K h(x)

For g(x) to be an eligible maximizing function, the following must hold

K ≥ f(x)/h(x) = n x^{n−1} e^{−x(n−1)/n} / (n − 1)!

for every x ≥ 0. Since f(x) and h(x) are density functions they are nonnegative and they
carry the same total probability mass. Furthermore, since these functions are not identically equal, K must be greater than 1. It follows from the previous discussion that the
most efficient choice for K (the closest to 1) is the maximum value of the above ratio,
provided this quantity is finite. It can be verified by differentiation that the above ratio is
increasing in the interval [0, n) and decreasing in (n, ∞). Hence for x = n the ratio assumes its maximum value

K = n^n e^{1−n} / (n − 1)!

from which we obtain

g(x) = K h(x) = (n^n e^{1−n} / n!) e^{−x/n}

The functions f(x), h(x), and g(x) are plotted in Fig. 2.4 for the case n = 5.

Figure 2.4. Density functions: exponential, h(x), 5-Erlang, f(x), and their maximizer g(x).

Example 2.6 shows how to develop a valid, though not necessarily efficient, algorithm. The computational requirements of the method are proportional to K which, in
turn, depends on the choice of h(x). For a more detailed treatment of the acceptance-rejection method, including a number of efficient choices of h(x) for various distributions,
the reader is referred to Fishman (1978) or Law and Kelton (1991).
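Putting Example 2.6 into code, with the exponential candidate drawn by its own inverse transform (the function name is ours; for n = 5, K ≈ 2.4, so roughly one candidate in 2.4 is accepted):

```python
import math
import random

def erlang_ar(n):
    """Acceptance-rejection sampler for the n-Erlang(1) density of
    Example 2.6, using an exponential candidate with the same mean n."""
    fact = math.factorial(n - 1)
    K = n**n * math.exp(1 - n) / fact        # maximum of f(x)/h(x)
    def f(x):                                # n-Erlang(1) density
        return x**(n - 1) * math.exp(-x) / fact
    def g(x):                                # envelope g = K h
        return K * math.exp(-x / n) / n
    while True:
        x = -n * math.log(1.0 - random.random())   # candidate from h
        if random.random() <= f(x) / g(x):
            return x
```

The sample mean over many draws approaches n, the mean of the n-Erlang(1) distribution.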

2.4. DETERMINING THE NUMBER OF SIMULATIONS

The purpose of simulation is to estimate the value of some quantity, say μ, related to
the performance of a production network. In Appendix 1.A1.7, we have seen how one
can construct a confidence interval for μ using the outputs of n independent replications
of the simulation experiment. Here we consider the inverse problem; that is, we want to
find the number of simulations required so that the absolute estimation error for μ be less
than ε with probability 1 − α, for given ε > 0 and α ∈ (0, 1).
Suppose that X1, X2, ... are the outputs of n simulations of a stochastic system. Obviously X1, X2, ... are random variables. We assume that these variables are independent
and that E(Xi) = μ < ∞, Var(Xi) = σ² < ∞, i = 1, 2, .... Obtaining independent simulation
outputs can be achieved by using different streams of random numbers in each simulation. For n large enough, applying the central limit theorem yields a 100(1 − α) percent
confidence interval for μ,

X̄(n) ± z_{α/2} σ/√n

where z_{α/2} is the critical point of the normal distribution (see Table A1 of Appendix A at
the back of this book). Since we require that the error be less than ε, n should be the
smallest integer satisfying

z_{α/2} σ/√n ≤ ε

or, equivalently,

n ≥ (z_{α/2} σ/ε)²    (2.12)

The unknown variance σ² can be approximated by the sample variance S²(K), computed from the outputs of K initial simulation experiments (the fixed-sample-size procedure). Alternatively, one can proceed according to the following sequential procedure:

Algorithm 2.6. Sequential estimation ofthe number of simulations


(a) Make K ≥ 5 simulation runs. Set n = K.
(b) Compute the sample mean X̄(n) and the sample variance S²(n).
(c) Replace σ² by S²(n). If inequality (2.12) is valid, then stop; otherwise, replace n
by n + 1, make an additional simulation run, and go to (b).
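A sketch of the sequential procedure, parameterized by a function that performs one independent replication; the names and the normal critical point for α = 0.05 are our own choices.

```python
import math
import statistics

Z_975 = 1.959964                 # z_{alpha/2} for alpha = 0.05

def sequential_runs(simulate, eps, z=Z_975, k0=5):
    """Algorithm 2.6: add replications until z S(n)/sqrt(n) <= eps.
    Returns the final sample mean and the number of runs used."""
    xs = [simulate() for _ in range(k0)]     # (a) initial K runs
    while True:
        n = len(xs)
        s = statistics.stdev(xs)             # (b) sample std deviation S(n)
        if z * s / math.sqrt(n) <= eps:      # (c) stopping inequality
            return statistics.mean(xs), n
        xs.append(simulate())                # one more replication
```

For outputs with standard deviation σ the loop typically stops near n ≈ (z σ/ε)², in agreement with the fixed-sample-size bound.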

In many applications, the hypothesis that the outputs Xi of the simulation experiments follow a normal distribution is valid with good accuracy. This, for example, is the
case when simulations are terminated a long time after the system has reached its steady
state. In such cases, all the above procedures are valid provided we replace z_{α/2} by the
critical point t_{n−1, α/2} of the t distribution with n − 1 degrees of freedom (see Table A2 of
Appendix A).

2.5. SUMMARY

Production networks can be viewed as discrete event stochastic dynamical systems.


In contrast to ordinary dynamical systems that can be modeled by differential or difference equations, the evolution of a discrete event system is governed by the occurrences of
discrete events whereupon the state of the system changes. In production systems, these
events are, among others, completion of operations, random or scheduled machine stoppages, buffer overflow and exhaustion. In this chapter, we have given a brief introduction to
discrete event models and simulation. We have described two approaches to discrete
event simulation, namely, the conventional and the hybrid simulation/analytic methods.
The former observes all the events during a simulation period. The latter observes only a
limited number of events, the macroscopic events, which induce large perturbations to the
system. In the time between successive macroscopic events, the evolution of the system
and the occurrences of the other (microscopic) events are determined using analysis. In
the next chapters, we discuss in detail hybrid models of production systems with varying
degrees of complexity.
3
TWO-MACHINE SYSTEMS

In this chapter we study a simple production line with two unreliable machines and
an intermediate buffer to illustrate the logic of conventional and hybrid simulation mod-
els. More complex system topologies are examined in Chapters 4 and 5.

3.1. SYSTEM DESCRIPTION

We consider a production line with two unreliable machines M1 and M2 and an intermediate buffer B1. Parts are loaded on the first machine from an infinite source B0 and
when they complete service at that machine they move to the buffer. Then they proceed
to machine M2 from which they are sent to an infinite sink B2 of finished products. We
assume that the time to transport an item from one machine to the next is negligible. The
system is depicted in Fig. 3.1.

Figure 3.1. A two-machine production line.

Items are identical and processing times are constant for each machine. The inverse
of the processing time of machine Mi, i = 1, 2, will be referred to as the nominal production rate RMi. Buffer B1 can accommodate up to BC1 items, which have already been released from M1 and wait to be processed by M2. Machine M1 becomes blocked when it is
ready to release an item and the buffer is full. In a dual fashion, M2 becomes starved
when it is ready to receive an item and the buffer is empty. However, the first machine is
never starved and the last one is never blocked. Blockage and starvation phenomena force
the faster machine to produce at a slower rate. In practice, if a machine becomes starved


it waits until a part is available. Then it produces at its nominal rate and releases its part
into the downstream buffer, but then waits for the next part to come in and so on. Its mac-
roscopic behavior resembles production at a slower rate.
In addition to blockage and starvation, machines may be forced down temporarily
due to power supply failures, machine breakdowns, tool changes, and preventive mainte-
nance. Power supply failures occur at random epochs and are known as time-dependent
failures. The other events are operation-dependent. Stoppages due to machine breakdown
and tool changes are caused by machine and tool deterioration and occur after a random
number of items have been produced. Preventive maintenance is usually scheduled after a
specific amount of production. All these phenomena can easily be taken into account dur-
ing simulation by introducing appropriate state variables and events.
To keep the model simple, we examine only operation-dependent failures. Furthermore, and without loss of generality, we assume that the probability of production of one
workpiece over a production cycle, 1 − fi, is constant for each machine Mi. The complement of this probability is the probability of failure fi over the same cycle. Let Fi denote
the number of parts-to-failure of Mi. Then

P(Fi = n) = (1 − fi)^n fi    (3.1)

which is the geometric distribution with parameter fi, 0 ≤ fi < 1.


In Example 2.5(b) we found the inverse transform of the geometric distribution. A
random variate generator for the number of parts-to-failure can be obtained using the
following algorithm:
(a) Generate a uniform random number u ∈ (0, 1).
(b) Set

number of parts-to-failure = ⌊ ln u / ln(1 − fi) ⌋    (3.2)

where ⌊x⌋ is the largest integer such that ⌊x⌋ < x.


Alternatively, one may wish to generate the number of failures during the production
cycle of a single item. The machine survives the production of a single item with probability 1 − fi or it incurs at least one failure during the production cycle with the complementary probability. Then

P(number of failures = n) = fi^n (1 − fi)    (3.3)

and the corresponding generator is

number of failures = ⌊ ln u / ln fi ⌋    (3.4)

Finally, we assume that the time-to-repair TTR_i of M_i is an exponential random variable with mean 1/r_i, which means that the density function f_i(t) of TTR_i is given by

f_i(t) = r_i e^{-r_i t}

The parameter r_i will be referred to as the repair rate of M_i and denotes the mean number of repairs that can be completed in one time unit. Again by applying the inverse-transform method we obtain the generator for the duration of one repair

time to repair = -ln u / r_i    (3.5)
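The three variate generators of Eqs. (3.2), (3.4), and (3.5) can be transcribed directly; the following Python sketch uses illustrative function names and takes the uniform sample u as an argument (in practice u would come from a pseudorandom source such as random.random):

```python
import math

def parts_to_failure(u, f):
    """Eq. (3.2): geometric number of parts produced before the next failure."""
    return math.floor(math.log(u) / math.log(1.0 - f))

def failures_per_cycle(u, f):
    """Eq. (3.4): number of failures during one production cycle."""
    return math.floor(math.log(u) / math.log(f))

def repair_time(u, r):
    """Eq. (3.5): exponential repair duration with mean 1/r."""
    return -math.log(u) / r
```

For example, with u = 0.5 and f_i = 0.1, Eq. (3.2) yields ⌊ln 0.5 / ln 0.9⌋ = 6 parts-to-failure.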

3.2. CONVENTIONAL MODEL

3.2.1. Discrete Event Algorithm

Now we develop a conventional model to simulate the operation of the production system in the interval [0, t_max]. Let t denote the time of the simulation clock. We begin with the variables that describe the state of the system at any time t during a simulation run: BL_1(t) is the level of buffer B_1 at time t, P_i(t) is the total production of machine M_i by time t, and s_i(t) is the state of M_i, where

s_i(t) = 0, if M_i is starved
       = 1, if M_i is neither starved nor blocked
       = 2, if M_i is blocked

The model uses two events, the arrival and departure events. The arrival event occurs when M_1 finishes a production cycle and releases its workpart into B_1. The departure event occurs when M_2 produces an item and removes a workpart from B_1 to begin a new production cycle. We shall occasionally refer to these events as event 1 and event 2, respectively. Therefore, event i corresponds to the departure of some item from M_i.
Fundamental in the development of the model is the sequence of times when items complete service at a machine and are ready to proceed to the downstream buffer. Let t be the time at which machine M_i starts processing a part (clearly, t may be the time at which M_i has released the previous part into buffer B_i and loads this part from B_{i-1}, or the time at which this part is released from M_{i-1}, if M_i happens to be starved). The duration of the production cycle for this part is equal to the sum of the net processing time 1/RM_i and the total downtime, if one or more failures occur. Equation (3.4) gives the number of failures and (3.5) gives the duration of a single repair period. A realization of the total repair time is
68 HYBRID SIMULATION MODELS OF PRODUCTION NETWORKS

total downtime during one production cycle = Σ_{n=1}^{number of failures} (time to repair the nth failure)    (3.6)

Therefore, the time at which the workpart of interest completes service and is ready to depart from M_i is

TM_i = t + (total downtime during one production cycle) + 1/RM_i    (3.7)

At each time instant every machine has its own time of future event. The simulator keeps track of the system's evolution by advancing the simulation clock t to the time of occurrence of the most imminent event, that is,

t = min_i TM_i    (3.8)

Equations (3.6)-(3.8) are the event scheduling equations of the discrete event model.
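A compact sketch of the scheduling equations (3.6)-(3.7) in Python (function and parameter names are illustrative): given the start time t of a production cycle, it samples the number of failures via Eq. (3.4), sums the corresponding repair times via Eqs. (3.5)-(3.6), and returns the completion time of Eq. (3.7).

```python
import math
import random

def completion_time(t, RM, f, r, rng=random.random):
    """Eqs. (3.6)-(3.7): time at which a part started at time t is ready
    to depart, i.e. t + total downtime of the cycle + net processing time."""
    # Eq. (3.4): number of failures during this production cycle
    n_failures = math.floor(math.log(rng()) / math.log(f))
    # Eqs. (3.5)-(3.6): total downtime is the sum of individual repair times
    downtime = sum(-math.log(rng()) / r for _ in range(n_failures))
    # Eq. (3.7)
    return t + downtime + 1.0 / RM
```

With a degenerate random source that always returns 0.5, f_i = 0.1 gives zero failures, so the completion time is simply t + 1/RM_i.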
Next, we describe how the events affect the state of the system. We examine a generic machine M_i of the system with an upstream buffer B_{i-1} and a downstream buffer B_i. This will permit the description of longer production lines of the form M_1 → B_1 → ... → B_{i-1} → M_i → B_i → ....
Suppose event i occurs at time t = TM_i, that is, machine M_i finishes a workpart and is ready to release it to the downstream buffer B_i. We consider the following cases for event i:
(A) the downstream buffer is full and the machine becomes blocked;
(B) the part is released into B_i, or it is sent directly to M_{i+1} if that machine has been starved by time t;
(C) M_i is ready to commence a new production cycle, but it becomes starved because there is no part to process;
(D) M_i is not starved and it removes one part from B_{i-1} at time t; if the upstream machine M_{i-1} happens to be blocked, then it becomes unblocked immediately.
Cases A-D are discussed in detail below.
(A) M_i becomes blocked: If the downstream buffer B_i is full, then the machine becomes blocked immediately. From this condition, however, we must rule out the special case where the capacity of B_i is zero, the downstream machine M_{i+1} is starved, and the released item goes to M_{i+1} directly. Hence, the condition for blocking is {BL_i = BC_i} and {s_{i+1} ≠ 0}. When this condition is in effect, M_i is suspended until the time TM_{i+1} when M_{i+1} releases its workpart and removes one part from the intermediate buffer or from M_i directly (if BC_i = 0). Since this time may not be known in advance (due to the possibility of M_{i+1} becoming blocked as well), we can safely set TM_i = ∞ to exclude TM_i from the set of candidate next-event times in Eq. (3.8). Then, Case D ensures that the algorithm will execute an event i immediately after the execution of event (i + 1). To summarize, blocking of M_i is expressed as follows:

if {BL_i = BC_i} and {s_{i+1} ≠ 0}
then {s_i = 2} and {TM_i = ∞}

In the computer, ∞ is represented by the maximum allowable real constant, or by a number greater than the specified simulation period t_max, say t_max + 1.
(B) Workpart is released: If M_i is not blocked, then its total production P_i will be increased by one. The released item will enter B_i, whose level will be increased by one, unless the downstream machine is starved, i.e., s_{i+1} = 0. In the latter case, the item will be sent to M_{i+1}, which will start a new production cycle immediately. The corresponding time TM_{i+1} of occurrence of the next event is computed from Eq. (3.7).
(C) M_i becomes starved: After the item is released, machine M_i is ready to remove another item from B_{i-1} and start a new production cycle. If B_{i-1} is empty and M_{i-1} is not blocked, then M_i will become starved. This phenomenon is the dual of blocking and, reasoning as in Case A, we express it as follows:

if {BL_{i-1} = 0} and {s_{i-1} ≠ 2}
then {s_i = 0} and {TM_i = ∞}

(D) M_i commences a new production cycle: If M_i is not starved, then the level of B_{i-1} is decreased by one, and TM_i is computed from Eq. (3.7). If M_{i-1} happens to be blocked, it is unblocked immediately since now there is one unit of space available in B_{i-1} for the blocking part. Thus M_{i-1} is released immediately and the algorithm executes event (i - 1) at time t. In a production line with more than two machines, event i may cause a sequence of similar events upstream of M_i in order to reactivate the chain of machines M_{i-1}, M_{i-2}, ..., which had been blocked by M_i.
From the above, it is clear that the occurrence of an event in M_i may trigger secondary events affecting the upstream and downstream machines. The flowchart of Fig. 3.2 illustrates the above cases. To summarize, the conventional model proceeds as follows:

Algorithm 3.1. Conventional model of a discrete part production line

(a) Initialize. Input machine parameters, buffer capacities, and total simulation time t_max. Set t = 0 and compute next-event times for each machine from Eq. (3.7).
(b) Advance Clock. Record the time of occurrence of the most recent event, τ = t. This time is required to update the performance measures of the system (see Section 3.2.2). Find the machine, say M_i, with the most imminent event to occur and advance the clock t as in Eq. (3.8). If t > t_max then terminate the simulation.
(c) Execute Event i.
(d) Return to Step (b).

Footnote: The blocking and starvation conditions are not mutually exclusive: if there is no intermediate buffer between M_{i-1} and M_i, then BL_{i-1} = 0 always and M_{i-1} is often blocked by M_i.

Figure 3.2. Event-i routine of the conventional discrete event model.



3.2.2. Estimation of Performance Measures

The aim of simulation is to assist managers in deciding how to design, expand, and manage production systems. Managers assess the consequences of these decisions by comparing net profit, return on investment, and cash flow during a period of interest of, say, t_max time units. These economic indices are closely related to a number of operational indices, so-called performance measures, associated with a production system. Typical performance measures include expected throughput, inventory levels, machine utilizations, and cycle times. In simulation, these quantities are estimated as averages of functions of the system state over the period of interest.
Let f[x(t)] be a function of the state x(t) whose expected value is to be estimated. In the discrete event model presented previously, the state x(t) of the system is altered only at the times t_0 = 0, t_1, t_2, ..., t_K = t_max when events occur. The time average of f[x(t)] over the interval [0, t_max] is computed from

f̄ = (1/t_max) ∫_0^{t_max} f[x(t)] dt

Since f[x(t)] is piecewise constant, we have

f̄ = (1/t_max) Σ_{k=0}^{K-1} (t_{k+1} - t_k) f[x(t_k)]    (3.9)
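Equation (3.9) can be transcribed as a short Python function (names are illustrative), assuming f[x(t)] holds its value from each event epoch to the next:

```python
def time_average(event_times, values, t_max):
    """Eq. (3.9): time average of a piecewise-constant signal.
    values[k] is the value of f[x(t)] on [event_times[k], event_times[k+1]),
    with the last interval extending to t_max."""
    total = 0.0
    for k in range(len(event_times)):
        t_next = event_times[k + 1] if k + 1 < len(event_times) else t_max
        total += (t_next - event_times[k]) * values[k]
    return total / t_max
```

For instance, a signal equal to 2 on [0, 3) and 5 on [3, 5) has time average (3·2 + 2·5)/5 = 3.2.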

We now give estimates of the most common performance measures.

Throughput: The throughput TH of a production line is approximated by the average production rate of the last machine. Thus

TH = (production of the last machine during [0, t_max]) / t_max

Mean level of buffer B_i: The average level, which is an estimate of the mean buffer level, is defined by

B̄_i = (1/t_max) ∫_0^{t_max} BL_i(t) dt

In a discrete part system, the level BL_i(t) of B_i is piecewise constant in the interval [0, t_max]. Applying Eq. (3.9) gives

B̄_i = (1/t_max) Σ_{k=0}^{K-1} (t_{k+1} - t_k) BL_i(t_k)

Note that we are considering production lines with several buffers and machines and, therefore, BL_i(t) is altered only at a subset of the times t_0, t_1, ..., t_K when parts are transferred into or out of B_i. Now consider all the intervals [τ, t) in which the level of B_i is constant, with τ and t being two event epochs when BL_i incurs successive jumps. Then the above summation can be replaced by

B̄_i = (1/t_max) Σ_{all B_i-related intervals [τ, t)} (t - τ) BL_i(τ)

The simulation algorithm keeps track of the most recent buffer-related event time τ and updates the summation whenever an item is transferred into or out of B_i. In the algorithm, these calculations are performed during the execution of events (i - 1) and i (see Fig. 3.2).
Variance of the level of buffer B_i: The sample variance of BL_i(t) is defined by

σ²_Bi = {(1/t_max) ∫_0^{t_max} [BL_i(t)]² dt} - B̄_i²

Arguing as above, we can write

σ²_Bi = {(1/t_max) Σ_{all B_i-related intervals [τ, t)} (t - τ) [BL_i(τ)]²} - B̄_i²

Mean time in the system: The total time a part spends in the system, known also as the cycle time, is the difference between the time at which the part exits from the last machine as a finished product and the time when it enters the first machine. Let n be the total production of the last machine by time t_max. The average time in the system is computed by

W = (1/n) Σ_{j=1}^n (lead time of the jth part)
  = (1/n) Σ_{j=1}^n (time of jth departure from last machine) - (1/n) Σ_{j=1}^n (time of jth arrival at first machine)

Utilization of machine M_i: The utilization of M_i is the proportion of time the machine is processing workparts. Since the processing time of a single part is 1/RM_i, we have

UM_i = (total busy time of M_i / t_max) × 100%
     = [(total production of M_i by time t_max) × (1/RM_i) / t_max] × 100%

Other performance measures of interest (e.g. the proportions of blocked, starved, and
down periods) can be calculated similarly. In the beginning, the values of these quantities
are zero and in the process of simulation they are updated whenever an event takes place
that affects the corresponding state variables.
Since the state x(t) is piecewise constant, the terms on the right side of Eq. (3.9) are
the areas of rectangular regions arranged sequentially on the time axis. In the following
section we will consider systems that have a piecewise linear behavior. The correspond-
ing performance measures will then involve trapezoidal regions and all the calculations
will be carried out in a similar manner.

3.3. HYBRID MODEL FOR CONTINUOUS FLOW

We now present a hybrid model, which uses simulation and elementary analysis to
model a two-stage production system where the flow is assumed continuous rather than
discrete.

3.3.1. Comparison of Discrete Traffic and Continuous Flow

We can think of the production line as a continuous flow system sketched in Fig. 3.3, in which a fluid flows into and out of a reservoir B_1 through pumps M_1 and M_2.

Figure 3.3. A continuous flow system.



The reason for studying such a system is twofold.
First, many production systems process fluids, e.g., the chemical and food industries. Continuous flow can be viewed as the processing of discrete infinitesimal parts. To analyze such systems one could employ a conventional (piece-by-piece) simulator by quantizing the liquid into arbitrarily small product units δ and appropriately rescaling the machine rates and buffer capacities. For example, if the maximum flow rate through pump M_i, i = 1, 2, is RM_i liters per second and the capacity of the reservoir is BC_1 liters, then, in the discrete part model, the capacity of the buffer will be BC_1/δ parts and the nominal rate of machine M_i will be RM_i/δ parts per second. Hence, the number of events during a conventional simulation is proportional to 1/δ. In the limit δ → 0 the discrete part model approximates the continuous flow system, but any finite-length simulation will involve an infinite sequence of event executions. Consequently, an alternative discrete event model is needed to speed up simulation.
The second reason for studying continuous flow is that there is a wide spectrum of
system topologies and parameter ranges where it is possible to approximate discrete traf-
fic by a fast continuous flow model. Continuous flow can be modeled efficiently using
linear equations of the form

(flow rate) x (time)= (total flow)

In Section 2.2.2 we have presented such a model (Algorithm 2.4) for a two-stage system
with infinite storage capacity. A model of unreliable machines and finite intermediate
buffers will observe only changes in the flow rates caused by machine failures and repairs
and buffer overflows and run-downs. Then a set of linear flow equations can be em-
ployed to keep track of total production and the number of parts in each buffer. Thus, piece-by-piece computation is avoided.
Now consider a discrete part system and both a continuous flow model and its discrete counterpart. If flow rates do not vary as frequently as the parts of the discrete system are transferred, then the first model will observe fewer events and deliver superior computational efficiency compared to the second. A natural question then arises as to whether the performance of production systems can be predicted by approximating discrete traffic by a continuous flow. The following examples provide evidence of the accuracy of this approximation.

Example 3.1. Suppose that the processing time of M_1 is 1, the capacity of B_1 is 3, and the processing time of M_2 is 1.5. In the beginning, the buffer is empty. Figure 3.4 depicts the parts produced by each machine (denoted by arrows) and the level of the buffer during the first 15 time units. In the discrete system, M_1 starts immediately and finishes the first part at t = 1, whereas M_2 remains idle until this time. From that time on, M_1 delivers parts to B_1 faster than M_2 removes parts from that buffer and both machines are busy. At t = 8, B_1 becomes full and M_1 starts a new production cycle. At times 8.5 and 9, the level of the buffer is altered by -1 and +1, respectively, whereas at t = 10, these changes occur simultaneously, rendering the buffer full. At t = 11, M_1 attempts to release an item that has just been finished, but becomes blocked. The machine stays blocked until time 11.5 when M_2 removes an item from the buffer. Then, both machines start processing new items but M_1 finishes its part earlier and becomes blocked again. This alternation of M_1 between busy and idle states continues until the simulation terminates. Note that at time 11.5, M_1 has produced 11 parts and M_2 7, the difference being equal to the capacity of the buffer plus one more, which reflects the fact that there is always a part occupying a unit space in M_2.

Figure 3.4. Evolution of the discrete part system.


Figure 3.5. Plots of continuous and discrete buffer levels.

Example 3.2. We consider a continuous flow system with the same machine rates and buffer capacity. Thus, RM_1 = 1, RM_2 = 1/1.5, and BC_1 = 3. In the beginning, M_1 will produce an infinitesimal quantity, which goes to M_2 instantly. Therefore, both machines are busy and the net flow rate into B_1 is RM_1 - RM_2 = 0.333. The level of B_1 (depicted by a heavy line in Fig. 3.5) will grow linearly until it reaches the value 3, i.e., the capacity of the reservoir. This event occurs at time

t = [(capacity) - (initial level)] / (net inflow rate) = 3/0.333 = 9 time units

Then, M_1 slows down to the rate of the slower machine and remains slowed down until the end of the simulation period. Using the flow rates we compute the cumulative flow through M_1 at t = 11.5

Σ (times) × (flow rates) = 9 × 1 + (11.5 - 9) × (1/1.5) = 10.667 parts

and the cumulative flow through M_2

11.5 × (1/1.5) = 7.667 parts

These quantities differ by 3, which is the capacity of B_1.
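The arithmetic of Example 3.2 can be checked in a few lines of Python (variable names are illustrative):

```python
RM1, RM2, BC1 = 1.0, 1.0 / 1.5, 3.0

# Time for the reservoir to fill from empty at net rate RM1 - RM2
t_full = (BC1 - 0.0) / (RM1 - RM2)          # 9 time units

t_end = 11.5
# Cumulative flow through M1: full rate until t_full, then slowed to RM2
flow_M1 = t_full * RM1 + (t_end - t_full) * RM2
# Cumulative flow through M2: constant rate throughout
flow_M2 = t_end * RM2

print(t_full, flow_M1, flow_M2, flow_M1 - flow_M2)
```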

Using the above examples, we investigate the error of a continuous flow approximation to a discrete traffic system by comparing the estimates of the most important performance measures: productivity and average buffer level. At any time instant, the total productions of the two systems differ by less than 0.667 parts. If we let t_max → ∞, the average production rates will become equal. Also from Fig. 3.5 we see that the lines of the discrete and continuous buffer levels eventually coincide. Dividing the areas under these lines by t_max yields the average buffer levels. Again, for a large t_max the discrepancy of the continuous flow model becomes negligible.
Similar remarks can be made for the case when the inflow rate into B_1 is less than the outflow rate and the buffer becomes empty.
Intuitively, when buffers do not fill or empty frequently and the production volume is large over a horizon of, say, one day or week, the approximation ought to be good. Fractional production of a few parts incurs a small error in the production of hundreds or thousands of pieces. If, however, the production volume and buffer capacities are small, such errors become significant, but then one does not need a fast analysis tool since a conventional simulator can do a better job. It is large production volumes and complex networks where conventional models fail the test of efficiency.

3.3.2. Continuous Flow Model for Two Machines and One Buffer

The hybrid model observes changes in the flow rates, which are caused by the following events:
(a) a machine fails,
(b) a machine is repaired,
(c) the buffer becomes full,
(d) the buffer becomes empty.
At time t, the buffer can be full, partially full, or empty and a machine can be up or down. The continuous flow approximation has a number of implications:
1. At any time t, machine production and buffer levels may be fractional.
2. Let R_i denote the current production rate of M_i, i = 1, 2. If M_i is neither starved nor blocked, then it produces at a maximum rate RM_i; if M_i is under repair, then R_i = 0. When the buffer fills, the production rate of M_1 is reduced instantly to the rate of M_2, i.e., R_1 = R_2. In a dual fashion, when the buffer empties, the rate of M_2 is reduced instantly, i.e., R_2 = R_1.
3. If M_1 is blocked and M_2 is down, then R_1 = R_2 = 0. When M_2 is repaired its rate assumes the maximum value, R_2 = RM_2. Then, M_1 will start producing at the maximum allowable rate, which is the minimum of the nominal rate of M_1 and the production rate of M_2, i.e., R_1 = min{RM_1, R_2}. In a dual fashion, when M_1 is repaired and M_2 is starved we set R_1 = RM_1 and R_2 = min{RM_2, R_1}. All these rate changes occur instantly. When the traffic is discrete, the repaired machine requires some time to finish the part it had started before the failure occurred. As a result, the blocked (or starved) machine will operate after a transient period, which the continuous flow model ignores.
An event occurs when a microscopic state variable reaches a threshold value. The model uses the following microscopic variables: the level of buffer B_1, BL_1(t); the total production of machine M_i, P_i(t); and the number of remaining parts-to-failure of M_i, F_i(t). As discussed in the previous section, F_i is a geometric random variable. Right after M_i is repaired, a sample production volume until the next failure is computed from Eq. (3.2), which is the random variate generator of the geometric distribution. The model, however, admits any distribution different from geometric.
At each event epoch t, a next event and a corresponding event-time is assigned to every component (machine or buffer) of the system. Let TM_i denote the time of next event at machine M_i and TB_1 the time of next event at buffer B_1. The event scheduling equations are discussed below:
(1) If M_1 is faster than M_2, B_1 will fill at time

TB_1 = t + (BC_1 - BL_1)/(R_1 - R_2)    (3.10)

(2) If M_1 is slower than M_2, the buffer will empty at time

TB_1 = t + BL_1/(R_2 - R_1)    (3.11)

(3) If both machines produce at the same rate, the buffer will stay at the same level. We then schedule a fictitious event occurrence at time TB_1 = ∞.
(4) The time of failure of an operational machine M_i, i = 1, 2, is computed using the parts-to-failure from

TM_i = t + F_i/R_i    (3.12)

(5) By assumption, repair times are exponential random variables. The time of repair of a failed machine is given by

TM_i = t - (ln u)/r_i    (3.13)

where r_i is the mean repair rate of M_i and u is a random number in (0, 1). The model admits any other distribution of downtimes.
The performance measures and the microscopic variables are updated right before the occurrence of events. Consider two successive event epochs τ and t, τ ≤ t. Let primed quantities denote the values of the microscopic variables at time τ. The update equations are:

P_i = P_i' + R_i'(t - τ)
F_i = F_i' - R_i'(t - τ)    (3.14)
BL_1 = BL_1' + (R_1' - R_2')(t - τ)

The performance estimates are represented as time averages over the operation period [0, t_max].
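The buffer scheduling rules and the update equations (3.14) can be sketched in Python as two small functions (names are illustrative; state is passed in plain variables):

```python
INF = float("inf")

def schedule_buffer_event(t, BL1, BC1, R1, R2):
    """Next buffer-full or buffer-empty time TB1, per the rules above."""
    if R1 > R2:                      # buffer fills
        return t + (BC1 - BL1) / (R1 - R2)
    if R1 < R2:                      # buffer empties
        return t + BL1 / (R2 - R1)
    return INF                       # equal rates: fictitious event

def advance(tau, t, P, F, BL1, R1, R2):
    """Eq. (3.14): update production, parts-to-failure, and buffer level
    over the interval [tau, t) with constant rates R1, R2."""
    dt = t - tau
    P = [P[0] + R1 * dt, P[1] + R2 * dt]
    F = [F[0] - R1 * dt, F[1] - R2 * dt]
    BL1 = BL1 + (R1 - R2) * dt
    return P, F, BL1
```

With the data of Example 3.2 (R_1 = 1, R_2 = 1/1.5, BC_1 = 3, empty buffer), the buffer-full event is scheduled at t = 9 and advancing to that epoch gives productions of 9 and 6 parts.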
Throughput: For a production line, the throughput is determined by the cumulative production of the last machine. Hence,

TH = (production of the last machine during [0, t_max]) / t_max

Mean level of buffer B_1: Since the buffer level varies linearly between events, an estimate of the mean buffer level can be computed from

B̄_1 = [Σ_{all event occurrences} (1/2)(BL_1 + BL_1')(t - τ)] / t_max

Variance of the level of buffer B_1: The sample variance is defined by

σ²_B1 = (1/t_max) ∫_0^{t_max} [BL_1(s)]² ds - B̄_1²

Since [BL_1(s)]² is piecewise continuous on the interval [0, t_max], we have

σ²_B1 = (1/t_max) Σ_{all event occurrences} ∫_τ^t [BL_1(s)]² ds - B̄_1²

Next we set BL_1' = BL_1(τ) and observe that BL_1(s) = BL_1' + (R_1' - R_2')(s - τ) for every s ∈ [τ, t). From this equation we obtain

σ²_B1 = (1/t_max) Σ_{all event occurrences} [ (BL_1')²(t - τ) + BL_1'(R_1' - R_2')(t - τ)² + (1/3)(R_1' - R_2')²(t - τ)³ ] - B̄_1²

Percent downtime of machine M_i: This quantity is computed from

DM_i = (sum of repair times of M_i / t_max) × 100%

Utilization of machine M_i: The utilization of M_i can be computed as in the discrete part case by

UM_i = [(total production of M_i by time t_max) × (1/RM_i) / t_max] × 100%

or, alternatively, by

UM_i = [Σ_{all operational intervals [τ, t)} (R_i'/RM_i)(t - τ) / t_max] × 100%

To derive the last expression, observe that R_i'/RM_i is the fraction of time in which M_i is utilized during [τ, t). In the remaining time (t - τ)(1 - R_i'/RM_i) the machine is idle. Hence the fraction of time the machine is blocked (starved) is

[Σ_{all blocked (starved) intervals [τ, t)} (1 - R_i'/RM_i)(t - τ) / t_max] × 100%

Mean time in the system: The total time a part spends in the system is the difference between the time at which the part exits from the last machine as a finished product and the time when it enters the first machine. The mean time in the system W taken over all produced items is computed using Little's formula,

W = N̄ / TH

where N̄ is the mean number of items in the system. The latter is the sum of the mean buffer level and the mean number of parts in each machine. Each machine is occupied by one item provided it is not starved. Since M_1 is never starved, it is always occupied by one item. Since M_2 is never blocked, it can be starved, operating (utilized), or failed. Hence, the probability that M_2 is occupied by one item is UM_2 + DM_2. From these observations, it follows directly that

N̄ = B̄_1 + 1 + UM_2 + DM_2

In the beginning of a simulation run all the above quantities are zero and in the process of simulation they are updated within the corresponding event routines. The steps of the hybrid continuous flow simulator are outlined below.

Algorithm 3.2. Hybrid model of a continuous flow, two-stage system

(a) Initialize. Input machine parameters, buffer capacity and initial level, and total simulation time t_max. Set t = 0, R_i = RM_i, i = 1, 2, and schedule the next event for each component (machine or buffer).
(b) Advance Clock. Record the time of occurrence of the most recent event, τ = t. Find the component with the most imminent event to occur and advance the clock to the corresponding event-time

t = min{TM_1, TM_2, TB_1}

If t > t_max, then set t = t_max, execute step (c1), and terminate the simulation.
(c) Execute Event Routine.
(c1) Update total production, number of remaining parts-to-failure, buffer level, and performance measures of the affected components.
(c2) Adjust production rates of the affected machines.
(c3) Compute next event of each component.
(d) Go to (b).
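To illustrate the event-driven structure of Algorithm 3.2, the following minimal Python sketch handles the failure-free case (both machines always up), so only buffer-full and buffer-empty events occur; it reproduces the trajectory of Example 3.2. Names and the small tolerance EPS are illustrative choices, not part of the original algorithm.

```python
def simulate_reliable(RM1, RM2, BC1, t_max):
    """Failure-free continuous flow line: two machines, one buffer."""
    EPS = 1e-9
    t, BL1 = 0.0, 0.0
    R1, R2 = RM1, RM2
    P2 = 0.0                # cumulative production of the last machine
    area = 0.0              # integral of BL1, for the mean buffer level
    while t < t_max:
        # schedule the next buffer event (fill, empty, or fictitious)
        if R1 > R2:
            t_next = t + (BC1 - BL1) / (R1 - R2)    # buffer fills
        elif R1 < R2:
            t_next = t + BL1 / (R2 - R1)            # buffer empties
        else:
            t_next = float("inf")                   # equal rates
        t_next = min(t_next, t_max)
        # update state over [t, t_next); the level varies linearly,
        # so its integral is a trapezoid
        dt = t_next - t
        BL1_new = BL1 + (R1 - R2) * dt
        area += 0.5 * (BL1 + BL1_new) * dt
        P2 += R2 * dt
        BL1, t = BL1_new, t_next
        # adjust rates: a full buffer slows M1, an empty one slows M2
        if BL1 >= BC1 - EPS:
            R1 = R2
        elif BL1 <= EPS:
            R2 = R1
    return P2 / t_max, area / t_max   # throughput, mean buffer level
```

For RM_1 = 1, RM_2 = 1/1.5, BC_1 = 3, and t_max = 11.5, the loop executes only two event intervals (buffer fills at t = 9, then end of run), versus 18 part transfers in the conventional model of Example 3.1.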

3.4. HYBRID MODEL FOR DISCRETE TRAFFIC

We now develop a hybrid (computational-analytic) model for discrete traffic. From Examples 3.1 and 3.2 and the discussion of the previous section it is clear that the buffer-full and buffer-empty events act as "link" messages directing the fast machines to keep pace with their neighbors. Workparts serve as the conveyors of such messages. When the flow is continuous, items are infinitely small and therefore the flow rates are reduced immediately after the occurrence of an event. In discrete part systems, these events are realized after a transient period that corresponds to the interval between the time the buffer becomes unavailable and the time the machine requests a unit space or a part. To analyze such phenomena we define the following microscopic variables:

a = remaining time-to-next arrival at buffer B_1
d = remaining time-to-next departure from B_1

These quantities will be referred to as the transient times of the machines. The model we shall develop next is exact since it captures all transient phenomena, and it is faster than conventional simulation because it observes a small number of events.
The model observes only three types of events, namely,
(a) a machine fails,
(b) buffer-full,
(c) buffer-empty.
In the sequel, we use the term "buffer-full event" to designate the beginning of a sequence of busy periods separated by blocked intervals. We also use the term "buffer-empty event" to designate the beginning of a sequence of busy periods separated by idle intervals, due to starvation.
Note that the model does not use repair events. Elimination of the repair event is achieved as follows: When M_1 breaks down, the transient time a is increased by the amount of time required to complete the repair. This time is given by Eq. (3.6):

total downtime during one cycle = Σ_{n=1}^{number of failures in one cycle} (time to repair the nth failure)

When M_2 breaks down, d is modified accordingly. Repair periods are thus incorporated into the transient times a and d. As a result, the machines are assumed to be always "up". Upon occurrence of a failure at M_i, i = 1, 2, its production rate R_i is reset to the nominal value, i.e., R_i = RM_i. This value may change, if a buffer-full or a buffer-empty event occurs before the next failure of M_i.
Discrete traffic can be simulated using Algorithm 3.2 of the previous section but invoking different equations for updating and adjusting the states and scheduling the next events. We begin with the equations for scheduling the next events. Then we derive the expressions for updating the microscopic variables right before the occurrence of an event and for adjusting the state immediately after.

3.4.1. Machine Event Scheduling

We want to compute the time of next failure of a machine at (an arbitrary) time t. By the assumption of operation-dependent failures, it turns out that the time between failures of a machine depends on the number of parts being processed. When a machine M_i fails, the model generates a random number of parts-to-next failure. Suppose that at time t the number of parts-to-next failure is F_i. Then an estimate of the time of failure is

TM_i = t + a + (F_i - 1)/R_i    (3.15)

where a is the remaining time-to-release the next part, F_i is the number of parts-to-failure, and R_i the production rate of M_i. The time of failure is re-estimated whenever blockage and starvation alter the production rate of the machine.
Next we consider the buffer-full and buffer-empty events.

3.4.2. Scheduling a Buffer-Full Event

First, we examine the mechanism of machine blockages. Machine M_1 will become blocked if it attempts to release an item into a full buffer. Figure 3.6 shows a possible sample path leading to the blockage of M_1. Suppose that at time t, the transient times a and d, the production rates, and the buffer level BL_1 are known. We define the following quantities:
T     time at which M_1 attempts to release a part downstream and becomes blocked
TB_1  time at which the previous item was released by M_1, TB_1 < T
N_i   number of parts that will be completed by M_i, i = 1, 2, in the interval (t, TB_1].
Specifically, M_1 will release N_1 parts into the buffer by time TB_1. At that time, the buffer will be full. The next part will be completed at time T, but it will stay at M_1 until time TB_1 + d' when M_2 releases a part downstream, loads another one from B_1, and frees a unit space for the blocked part. The primed quantities a' and d' denote the transient times immediately after TB_1.

Figure 3.6. Blockage occurs after transient period d (N2 > 0).

In the hybrid model, TB_1 is considered to be an event-time. Later we shall see that, apart from the fact that at this time the buffer is full, TB_1 satisfies a stronger condition which justifies its choice as the event-time. It can be verified by inspection of Fig. 3.6 that TB_1 is given by

TB_1 = t + a + (N_1 - 1)/R_1    (3.16)

Furthermore, since parts are not lost,

N_1 = (BC_1 - BL_1) + N_2    (3.17)

As we have pointed out, at time TB1 the buffer is full. However, the buffer may have
been full at other time instants prior to this time. In Fig. 3.6, τ is such a time instant. To
see this we start from TB1 going back in time. We observe that right after M2 produces its
N2th item, it removes one part from the buffer and, therefore, its level must be BC1 - 1.
This implies that at time τ, at which M1 produces its (N1 - 1)th item, the level of the
buffer should be BC1. Hence B1 is full at both times τ and TB1. The difference between τ
and TB1 is that, whereas after time τ we have a departure from B1 and the level drops to
BC1 - 1, at time T, that is, after time TB1, machine M1 attempts to release one more item
into the buffer and becomes blocked. Therefore, after time TB1, M1 is coupled with M2
and so their transient times a' and d' are equal. To summarize, the condition for a block-
age of M1 is that B1 be full and M1 completes two successive items (the N1th and the next
one) within the transient time d' of M2. This can be written as

1/R1 < a' = d'

In the remainder of this section we derive the event scheduling equation, that is, an
expression for TB1 based on the state of the system at time t. To this end, it suffices to
compute N1 and substitute it into Eq. (3.16). We examine two distinct cases.
Case A: N2 > 0. Blockage occurs later than time t + d, when each machine has pro-
duced at least one part. A realization of this situation is depicted in Fig. 3.6, where M1
finishes two successive parts within a production cycle of M2. The first part, which is the
N1th part produced after t, fills the excess capacity of B1. The next part is produced earlier
than the part in M2 and M1 becomes blocked. Therefore, a necessary condition for this
event is that M1 be faster than M2, that is, R1 > R2. An expression for N1 is obtained as
follows. In Fig. 3.6, the segment a' = d' represents the transient time between the N1th and
the (N1 + 1)th arrival at B1. From Eq. (3.16) we have

TB1 = t + a + (N1 - 1)/R1

and by inspection of Fig. 3.6,

TB1 = t + d + N2/R2 - d'

Substituting the above into Eq. (3.17) yields

a + (N1 - 1)/R1 = d + (N1 - BC1 + BL1)/R2 - d'

Solving the above for d' (= a') yields

d' = (d - a) - (N1 - 1)/R1 + (N1 - BC1 + BL1)/R2
Next, combining the blocking condition

1/R1 < a' = d'

and the fact that d' can be at most equal to the length of a production cycle of M2 yields

1/R1 < a' = d' ≤ 1/R2

By inserting the expression we derived for a' = d' into the above we obtain

1/R1 < (d - a) - (N1 - 1)/R1 + (N1 - BC1 + BL1)/R2 ≤ 1/R2

and, after rearranging terms,

[(BC1 - BL1)/R2 - (d - a)] / (1/R2 - 1/R1) < N1 ≤ [(BC1 - BL1)/R2 - (d - a)] / (1/R2 - 1/R1) + 1

Since N1 is an integer, it follows that

N1 = ⌈ [(BC1 - BL1)/R2 - (d - a)] / (1/R2 - 1/R1) ⌉        (3.18)

where ⌈x⌉ is the smallest integer such that ⌈x⌉ > x.


Case B: N2 = 0. Blockage occurs before M2 produces an item, that is, TB1 < t + d. This
event occurs when either the production cycles of M2 are very long or M2 incurs a long
repair period. Note that, in the latter case, M2 need not be slower than M1. A typical reali-
zation of this situation is depicted in Fig. 3.7, where M1 produces its first item at time
t + a, the second one at t + a + (1/R1), etc. The N1th item is completed at time TB1 and fills
the excess capacity of B1. The next one is ready before time t + d, while M2 is still proc-
essing an item. Hence a necessary and sufficient condition for a blockage within the
transient period d is

R1(d - a) > BC1 - BL1

Figure 3.7. Blockage occurs within transient period d (N2 = 0).

Since N2 = 0, Eq. (3.17) implies N1 = BC1 - BL1, which, upon substitution into Eq. (3.16),
yields

TB1 = t + a + (BC1 - BL1 - 1)/R1

Case C: N2 = 0 and BC1 = BL1. This case must be handled in a different way from the
previous one. Indeed, for BC1 = BL1 the above scheduling equation yields

TB1 = t + a - 1/R1 < t

which implies that at time t, the next event to occur must be executed earlier than t! To
get around this inconsistency, the model schedules the buffer-full event to be executed
instantly, i.e. TB1 = t.
The next proposition summarizes Cases A, B, and C.

Proposition 3.1. The time of a buffer-full event is scheduled as follows:

        t                                 if BL1 = BC1 and d - a > 0

TB1 =   t + a + (BC1 - BL1 - 1)/R1        if R1(d - a) > BC1 - BL1 > 0        (3.19)

        t + a + (N1 - 1)/R1               if R1 > R2 and none of the above holds

where N1 is given by Eq. (3.18).

The model avoids frequent piece-by-piece calculations since, in general, the times t
and TB1 can be several production cycles apart.
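Proposition 3.1 lends itself to a compact scheduling routine. The sketch below is an illustrative Python transcription (the book's simulators are written in FORTRAN 77, and every name here is ours); the smallest integer strictly greater than x is computed as floor(x) + 1.

```python
import math

def schedule_buffer_full(t, a, d, BL1, BC1, R1, R2):
    """Next buffer-full event time at B1 per Proposition 3.1 (Eq. 3.19).
    t: current time; a, d: transient times to next arrival/departure;
    BL1, BC1: buffer level and capacity; R1, R2: current production rates.
    Returns None when no buffer-full event can be scheduled."""
    # Case C: buffer already full and M1 would finish before M2 frees a slot.
    if BL1 == BC1 and d - a > 0:
        return t
    # Case B: B1 fills up before M2 completes its current part (TB1 < t + d).
    if R1 * (d - a) > BC1 - BL1 > 0:
        return t + a + (BC1 - BL1 - 1) / R1
    # Case A: blockage after both machines have produced; requires R1 > R2.
    if R1 > R2:
        x = ((BC1 - BL1) / R2 - (d - a)) / (1.0 / R2 - 1.0 / R1)
        N1 = math.floor(x) + 1  # smallest integer strictly greater than x (Eq. 3.18)
        return t + a + (N1 - 1) / R1
    return None  # the buffer can never fill at the current rates
```

For example, with t = 0, a = 0.3, d = 1, BL1 = 0, BC1 = 2, R1 = 2, and R2 = 1, a piece-by-piece trace of arrivals (0.3, 0.8, 1.3, ...) and departures (1, 2, ...) shows that the arrival at time 1.3 is the one after which M1 completes its next part inside a production cycle of M2, and Case A of the routine returns exactly this time.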

3.4.3. Scheduling a Buffer-Empty Event

In an analogous manner we derive expressions for the time when M2 becomes
starved (see Fig. 3.8). Assume that at time t, the transient times a and d are known, the
buffer contains BL1 parts and one more part is located in M2. Let TB1 be the time when M2
becomes starved and Ni the number of parts that are produced by Mi, i = 1, 2, in the inter-
val (t, TB1]. Then

N2 = BL1 + N1 + 1        (3.20)

The time TB1 is computed from

TB1 = t + d + (N2 - 1)/R2        (3.21)

Equation (3.21) is the analog to Eq. (3.16) for the buffer-empty event.

Figure 3.8. Starvation occurs within the transient period a (N1 = 0).

Let a' and d' be the transient times immediately after TB1. At time TB1 + a', one part
will be released from M1 into the buffer, from where it will move to M2 without delay.
Here we have the situation where an arrival and a departure from B1 occur simultane-
ously. Thus, in the hybrid model, d' is set equal to a'.
Next, we derive the event scheduling for two distinct cases, both leading to a buffer-
empty event.
Case D: N1 = 0. Starvation occurs before M1 produces an item, that is, TB1 < t + a.
This happens when either the production cycles of M1 are very long or M1 incurs a long
repair period. Note that, in the latter case, M1 need not be slower than M2. A realization of
this situation is depicted in Fig. 3.8, where M2 produces its first item at time t + d, the
second one at t + d + (1/R2), etc. At time TB1, M2 has consumed all the pieces that were
initially in B1 and becomes starved immediately. Hence, a condition for this event is

R2(a - d) > BL1

By Eq. (3.20) and the fact that N1 = 0,

N2 = BL1 + 1

Inserting the above into Eq. (3.21) yields

TB1 = t + d + BL1/R2
Case E: N1 > 0. Starvation occurs later than time t + a, when each machine has pro-
duced at least one part. A realization of this situation is depicted in Fig. 3.9, where M2
attempts to remove two successive parts from B1 within a production cycle of M1. The
first part is loaded on M2 at time T and buffer B1 is left empty. This part is going to be the
N2th item produced by M2 after t. At time TB1, machine M2 is ready to load one more part
but the buffer is still empty because M1 is in the middle of a production cycle. It can be
verified by inspection of Fig. 3.9 that the condition for a buffer-empty event is that B1 is
empty at time TB1 and the segment (1/R1) - a' is greater than or equal to 1/R2. From this
condition and the fact that the length of the segment d' = a' is positive (otherwise M2 would
not become starved) we obtain

0 < d' = a' ≤ 1/R1 - 1/R2

Figure 3.9. Starvation occurs after the transient period a (N1 > 0).

An expression for N2 is obtained as follows. In Fig. 3.9, the segment d' = a' represents the
transient time before the (N2 + 1)th departure from B1. From Eq. (3.21) we have

TB1 = t + d + (N2 - 1)/R2

and by inspection of Fig. 3.9,

TB1 = t + a + N1/R1 - a'

Substituting the above into Eq. (3.20) yields

d + (N2 - 1)/R2 = a + (N2 - BL1 - 1)/R1 - a'

from which we obtain

d' = a' = (a - d) + (N2 - 1)(1/R1 - 1/R2) - BL1/R1
Inserting the above into the condition of the buffer-empty event, that is,

0 < d' = a' ≤ 1/R1 - 1/R2

we obtain

0 < (a - d) + (N2 - 1)(1/R1 - 1/R2) - BL1/R1 ≤ 1/R1 - 1/R2

and, after rearranging terms,

1 + [(d - a) + BL1/R1] / (1/R1 - 1/R2) < N2 ≤ 2 + [(d - a) + BL1/R1] / (1/R1 - 1/R2)

Since N2 is an integer, it follows that

N2 = ⌈ [(d - a) + BL1/R1] / (1/R1 - 1/R2) ⌉ + 1        (3.22)

The next proposition summarizes Cases D and E.

Proposition 3.2. The time of a buffer-empty event is scheduled as follows:

        t + d + BL1/R2             if R2(a - d) > BL1
TB1 =                                                          (3.23)
        t + d + (N2 - 1)/R2        if R2 > R1 and the above does not hold

where N2 is given by Eq. (3.22).
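In the same spirit, Proposition 3.2 can be sketched as a dual routine; again this is an illustrative Python fragment with names of our choosing, not the book's code.

```python
import math

def schedule_buffer_empty(t, a, d, BL1, R1, R2):
    """Next buffer-empty event time at B1 per Proposition 3.2 (Eq. 3.23).
    t: current time; a, d: transient times to next arrival/departure;
    BL1: buffer level; R1, R2: current production rates.
    Returns None when no buffer-empty event can be scheduled."""
    # Case D: M2 drains the BL1 buffered parts before M1 completes a part.
    if R2 * (a - d) > BL1:
        return t + d + BL1 / R2
    # Case E: starvation after both machines have produced; requires R2 > R1.
    if R2 > R1:
        x = ((d - a) + BL1 / R1) / (1.0 / R1 - 1.0 / R2)
        N2 = math.floor(x) + 2  # Eq. (3.22): 1 + smallest integer > x
        return t + d + (N2 - 1) / R2
    return None  # the buffer can never empty at the current rates
```

For instance, with t = 0, a = 0.4, d = 0.3, BL1 = 2, R1 = 1, and R2 = 2, tracing arrivals (1.4, 2.4, ...) wait, arrivals occur at 0.4, 1.4, 2.4, ... and M2 completions at 0.3, 0.8, 1.3, 1.8, 2.3, ...; the level reaches zero and M2 fails to load right after its fifth completion, at time 2.3, which is what Case E returns with N2 = 5.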

3.4.4. Update Equations

In this section, we shall describe the evolution of the microscopic states of the sys-
tem, i.e., transient times, buffer level, cumulative production, remaining parts-to-failure,
and related performance measures, in the interval between two successive events. The
problem is to predict the state of the system at any time instant t on the basis of the state
at the time τ of occurrence of the most recent event.


Figure 3.10. System evolution between two successive event epochs.

Let N1 be the number of arrivals at B1 in the interval [τ, t) and N2 the number of de-
partures from B1 during the same period. By definition, N1 counts the arrivals up to, but
not including, time t. Similarly, N2 denotes the departures from the buffer during the in-
terval [τ, t). By counting arrows in Fig. 3.10 we obtain

N1 = { 1 + ⌊(t - τ - a(τ)) R1⌋   if t ≥ τ + a(τ)
     { 0                         otherwise
                                                          (3.24)
N2 = { 1 + ⌊(t - τ - d(τ)) R2⌋   if t ≥ τ + d(τ)
     { 0                         otherwise

where ⌊x⌋ is the largest integer less than x. At time t⁻, the material balance in B1 is

BL1(t⁻) = BL1(τ) + N1 - N2

The "minus" superscript in t⁻ is used because the model updates the state variables right
before the execution of events.
Since M1 will produce N1 parts, cumulative production and number of parts-to-failure
are updated from

P1(t) = P1(τ) + N1
                                                          (3.25)
F1(t) = F1(τ) - N1
We now compute the new transient time a(t) of M1. From Fig. 3.10 we have

a(t) = τ + a(τ) + N1/R1 - t        (3.26)

If the interval [τ, t) happens to be shorter than the initial transient period a(τ) (this case is
different from Fig. 3.10), then N1 = 0 and, therefore, Eq. (3.26) is still valid.
By replacing a(τ) with d(τ) and subscript 1 by 2 in Eqs. (3.25) and (3.26), we obtain
the update equations for the microscopic variables of M2 at time t.
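Eqs. (3.24) and (3.26) can be exercised with a short sketch; the Python below is illustrative (the book's code is FORTRAN 77, and the function names are ours), and the strict floor follows the convention stated above.

```python
import math

def strict_floor(x):
    """Largest integer strictly less than x (the convention used in Eq. 3.24)."""
    f = math.floor(x)
    return f - 1 if f == x else f

def parts_completed(t, tau, trans, rate):
    """Eq. (3.24): parts released in [tau, t) by a machine whose next
    release after the last event at tau occurs at time tau + trans."""
    if t >= tau + trans:
        return 1 + strict_floor((t - tau - trans) * rate)
    return 0

def new_transient(t, tau, trans, rate, N):
    """Eq. (3.26): remaining time at t until the next release, given that
    N parts were released during [tau, t)."""
    return tau + trans + N / rate - t
```

With tau = 0, trans = 0.25, and rate = 2, releases occur at 0.25, 0.75, 1.25, ...; parts_completed(1.25, 0, 0.25, 2) returns 2 because a release exactly at t is excluded from [tau, t), and new_transient then gives the time left to the release at 1.25.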
We now derive the update equation for the mean level of buffer B1. Let n1(s) and
n2(s) denote the numbers of arrivals and departures in the interval [τ, s), s ≤ t. Then

B̄1 = (1/tmax) ∫[0, tmax] BL1(s) ds = (1/tmax) Σ over all event occurrences of ∫[τ, t] BL1(s) ds

   = (1/tmax) Σ over all event occurrences of ∫[τ, t] [BL1(τ) + n1(s) - n2(s)] ds        (3.27)

The functions n1(s) and n2(s) have a staircase pattern and are given by Eqs. (3.24) by sub-
stituting s for t.
Consider the function n1(s), depicted in Fig. 3.11, and the area of the region under
n1(s) over [τ, t).

This region consists of N1 rectangles with heights 1, 2, ..., N1 (recall that N1 denotes the
total production in the interval [τ, t); hence, N1 = n1(t)). The base lengths of the first N1 - 1
rectangles are 1/R1 and the last one is (1/R1) - a(t⁻). Hence

∫[τ, t] n1(s) ds = (1/R1)[1 + 2 + ... + (N1 - 1)] + [(1/R1) - a(t⁻)] N1

               = (1/R1) N1(N1 + 1)/2 - a(t⁻) N1


Figure 3.11. Evolution of n 1(s).

Similarly, the area under n2(s) is given by

∫[τ, t] n2(s) ds = (1/R2) N2(N2 + 1)/2 - d(t⁻) N2

Inserting the above into Eq. (3.27) yields the estimate for the mean buffer level.
Utilization of Mi is computed from (see Section 3.3.2)

UMi = (total busy time of Mi / tmax) × 100% ≈ [(total production of Mi by time tmax) × (1/RMi) / tmax] × 100%

where the total production of Mi is updated using Eq. (3.25).
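As a sanity check on the closed form for the area under the staircase, the same quantity can be accumulated rectangle by rectangle; the following illustrative Python fragment compares the two (names are ours):

```python
def staircase_area(N, rate, trans_end):
    """Closed form derived above: N(N+1)/(2R) - a(t-) N, where trans_end
    plays the role of the transient time a(t-) left at the end of [tau, t)."""
    return N * (N + 1) / (2 * rate) - trans_end * N

def staircase_area_by_rectangles(N, rate, trans_end):
    """Direct sum: N - 1 rectangles of base 1/rate with heights 1, ..., N-1,
    plus a final rectangle of height N and base 1/rate - a(t-)."""
    area = sum(k / rate for k in range(1, N))
    return area + (1 / rate - trans_end) * N
```

Both functions return the same area for any admissible arguments, which confirms the algebraic simplification 1 + 2 + ... + (N - 1) + N = N(N + 1)/2 used above.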


Microscopic states and performance measures are updated sequentially, right before
the execution of events. Next, we derive the state adjusting equations, which are invoked
upon the occurrence of events.

3.4.5. Event Driven State Adjustments

When a buffer-full event occurs, the transient times-to-next arrival and departure are
equal. Moreover, if M1 is faster than M2 (blocking can also occur if M2 is faster than M1
but incurs a long down period), then it is forced to produce at a slower rate. Hence we set

a = d and R1 = R2        (3.28)

The operation of M2 is not influenced by this event. The next event to occur at M1 is
scheduled according to Section 3.4.1. Finally, any future changes in the state of B1, e.g.
due to the occurrence of a failure before the end of simulation, need not be considered at
present. Therefore, since transient times and production rates are equal, the clocks associ-
ated with the buffer-related events are frozen and we set TB1 = ∞.
In a dual fashion we proceed with the buffer-empty event setting

d = a and R2 = R1        (3.29)

Then we set TB1 = ∞ and we schedule the next event for M2.


As will be discussed later in this section, the model assumes that a machine may
break down only at the beginning of a production cycle. This assumption is made for
convenience and, as we shall see, it does not affect the times at which items depart from
the machine. When a machine fails, its transient time (a or d) is prolonged by the total
downtime, which is computed from Eq. (3.6),

(total downtime during one production cycle) = Σ from n = 1 to (number of failures) of (time to repair the nth failure)

Suppose, for example, that M1 produces at a constant rate R1 in the interval [τ, t) and that
it breaks down at time t, when it starts a new production cycle. At time t⁻, a part is loaded
on M1 and the transient time is equal to the production cycle, i.e.

a(t⁻) = 1/R1

Note that we use the actual rate R1 instead of the nominal RM1 because the machine may
be blocked during [τ, t). At time t, the model observes a failure of M1, redefines the tran-
sient time as the remaining time to complete one part, and resets the production rate to
the nominal value. Thus

a(t) = (total downtime during one production cycle) + 1/RM1   and   R1 = RM1        (3.30)

Then the model computes a new value for the number of parts-to-next failure. This quan-
tity is a positive integer, since each repair period to be encountered while processing the
current workpart has already been incorporated into the transient time. If the machine was
blocked during [τ, t), then the state of the buffer is switched to "partially full". Finally,
the model schedules next events at M1, B1, and M2.
Since the algorithm schedules machine failures and all buffer-related events upon
departures or arrivals of workparts to the buffer, the probability of encountering two si-
multaneous events conditioned on a single departure or arrival is not negligible. We ex-
amine two phenomena.
The first happens when the time at which the model will observe a failure of ma-
chine M2 happens to be equal to the actual time TB1 at which M2 will become starved.
However, since failures are operation-dependent phenomena, they cannot take place
during idle (starved) periods. What actually happens is that the failure occurs after the
machine loads one part. This implies a dominance relation between starvation and failure
phenomena whereby, if a machine becomes starved and breaks down simultaneously,
then the first event will be executed before the second one.

Figure 3.12. Starvation dominates breakdown; shifting of repair periods to the left does not affect release times.

This situation is depicted in Fig. 3.12. This figure also illustrates how the model han-
dles the occurrence of consecutive breakdowns. In such cases, it is computationally more
efficient to incorporate the starvation period, all the repair periods, and the time to proc-
ess the current workpart into the transient time, rather than executing each failure event
separately. It is clear from Fig. 3.12 that this approach does not affect the parts' release
times and so the sample paths of the model and the system coincide.
The second phenomenon happens when the time TB1 at which M1 will become
blocked and the time at which M1 will break down are equal. Note that the quantity TB1,
defined in Section 3.4.2, is actually the time when the machine begins processing the part
that is going to be blocked. In this case, the failure event must be executed first. This is so
because the repair time may be long enough so that blockage is eventually cancelled.
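The two dominance relations amount to a fixed execution priority among simultaneous events; one possible encoding (ours, not the book's) is:

```python
# Starvation dominates a coincident failure (failures are operation-dependent
# and can only occur after the machine loads a part), and a failure dominates
# a coincident blockage (the repair may cancel the blockage).
PRIORITY = {"starvation": 0, "failure": 1, "blockage": 2}

def execution_order(simultaneous_events):
    """Order a list of simultaneous event names for execution according to
    the dominance rules of Section 3.4.5."""
    return sorted(simultaneous_events, key=PRIORITY.get)
```

A simulator that always drains coincident events in this order reproduces the sample paths argued for above without examining each pair of events case by case.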

3.5. NUMERICAL RESULTS

This section investigates the issues of accuracy and computational efficiency of the
discrete event models described in the previous sections. The following simulators were
developed in FORTRAN 77 code: a piece-by-piece (PP) simulator, based on Algo-
rithm 3.1 and the flowchart of Fig. 3.2, and two others corresponding to the continuous
flow (CF) and discrete part (DP) hybrid models. The algorithms of the hybrid models are
based on Algorithm 3.2 but, as we discussed in Sections 3.3 and 3.4, they use different
equations for updating and adjusting the states and scheduling the next events.
In order to compare the models under the same experimental conditions, we use the
common random numbers technique (e.g. see Law and Kelton, 1991). That is, for a cer-
tain type of event and a given machine, the three simulators use a common sequence of
random numbers. The use of common random numbers permits fair comparisons of dif-
ferent models to be made based on short simulation runs.
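One common way to realize this technique is to reserve a separately seeded random stream for each machine and event type, so that every simulator consumes identical sequences for identical purposes. The fragment below is a generic Python illustration, not the book's FORTRAN implementation, and the exponential repair distribution is only an assumption for the example:

```python
import math
import random

def make_streams(machines, base_seed=1234):
    """One independent, reproducibly seeded stream per (machine, event type)."""
    streams = {}
    for i, m in enumerate(machines):
        for j, ev in enumerate(("failure", "repair")):
            streams[(m, ev)] = random.Random(base_seed + 10 * i + j)
    return streams

def draw_repair_time(streams, machine, mean=1.0):
    """Repair time drawn from the stream reserved for this machine's repairs
    (exponential distribution assumed here, by inversion of the CDF)."""
    u = streams[(machine, "repair")].random()
    return -mean * math.log(1.0 - u)
```

Because two simulators built on the same streams draw the same numbers in the same order, any difference in their throughput estimates reflects modeling differences rather than sampling noise, which is what makes short comparison runs meaningful.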
The CPU time required to execute a simulation run depends on the number of events
that occur during the simulation. For the PP model, the number of events is the number of
times machines M1 and M2 produce items. If the capacity of the intermediate buffer is
finite, then, over a long period, the total productions of the machines are approximately
equal. Therefore the CPU time is proportional to the total production of the system aug-
mented by the time needed to execute blockage and starvation phenomena, which require
some extra computations. On the contrary, the number of events of the hybrid models
does not depend on the total production directly, but on the number of failure, buffer-full,
and buffer-empty events. We performed several experiments to investigate the various
factors affecting the computational requirements of the three models. In each experiment,
we alter one parameter of a two-stage system and observe its effect on the CPU time re-
quired to simulate the production of 1,000,000 parts (CPU time per million parts). The
standard values of the parameters are the following:
nominal rate RMi = 10 parts/time unit, i = 1, 2
failure probability fi = 0.0099
mean time-to-repair 1/ri = 1 time unit
buffer capacity BC1 = 10
Figure 3.13 shows the throughput estimates TH of the PP model and the CPU times
per million parts as functions of the nominal production rate of machine M1. We observe
that for large values of RM1, the throughput and the CPU times remain approximately
constant. This is justified as follows. If the rate of M1 is considerably larger than that of
M2, M1 remains blocked almost always and M2 is seldom starved. Thus the throughput of
the line is determined by the efficiency of the slowest machine. This machine is M2 and
its efficiency η2 is computed as follows:

η2 = 1 / (mean time-to-produce one part)

   = 1 / [1/RM2 + (mean number of failures during the production of one part) × (mean time-to-repair)]

   = 1 / [1/RM2 + (f2/(1 - f2))(1/r2)] ≈ 9.0909 parts/time unit

For RM1 >> RM2, the number of events the hybrid models observe per million parts is pro-
portional to the number of failures, which, by the assumption of operation-dependent
failures, depends on TH and is independent of RM1. Finally, the computational require-
ments of the PP model are independent of RM1 because M1 and M2 remain in the same
states (blocked and not starved, respectively).
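The efficiency computation above is easy to reproduce; the helper below (an illustrative Python fragment using the book's geometric failure model, with failure probability f per part) evaluates the same expression:

```python
def isolated_efficiency(nominal_rate, fail_prob, mean_repair):
    """Mean production rate of a machine in isolation: each part needs
    1/RM time units plus, on average, f/(1-f) repairs of mean duration 1/r."""
    mean_cycle = 1.0 / nominal_rate + fail_prob / (1.0 - fail_prob) * mean_repair
    return 1.0 / mean_cycle
```

With the standard parameters RM = 10, f = 0.0099, and 1/r = 1, the function returns approximately 9.091 parts per time unit, matching the value 9.0909 quoted above.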


Figure 3.13. CPU time and throughput versus RM1.

From the same figure it appears that the CPU time has a local minimum at RM1 ≈ 10
for the hybrid as well as the PP models. This is so because when the nominal rates are
equal, the machines do not become starved and blocked frequently.
Since all simulation experiments use common random numbers, the DP and the PP
models yield the same estimates of the throughput TH. The errors of the CF model due to
the continuous flow assumption were less than 1% in all cases.
Figure 3.14 shows the dependence of the throughput and CPU times on the capacity
BC1 of the intermediate buffer. As expected, when the capacity increases, the fre-
quency of blockages and, therefore, the computational requirements decrease. Again, as
BC1 goes to infinity, the throughput tends to 9.0909, which is the efficiency of M1 or that
of M2 (recall that the standard parameters of the two machines are equal).


Figure 3.14. CPU time and throughput versus BC1.


Figure 3.15. CPU time and throughput versus f1 and f2.

Figure 3.15 summarizes the simulation results for a wide range of failure probabili-
ties f1 and f2, where f1 = f2. As the failure probabilities increase, the frequency of failures
increases and, therefore, the frequencies of blockage and starvation phenomena also in-
crease. Thus the CPU times of the discrete event models are increasing functions of the
failure probabilities. However, the PP model does not suffer any severe degradation in its
performance. From this figure we see that when the failure probabilities are smaller than
some critical value, the hybrid models are faster than the PP benchmark. For the particu-
lar experiment herein the critical value for the failure probabilities is fi ≈ 0.15, which im-
plies that each machine breaks down after the production of (1 - fi)/fi ≈ 5.67 parts, on the
average. Such frequent failures are rarely the case in actual production lines. If the line is
completely reliable, then, after a transient period, the machines will produce 10 items per
time unit, the buffer will be empty, and the hybrid models will not execute any events
until the end of the simulation. In this case, the hybrid models are infinitely faster than
the PP simulator.

3.6. SUMMARY

In this chapter, we presented two hybrid discrete event models of production lines
with two machines and a finite intermediate buffer. The models avoid piece-by-piece
processing of entities by observing the occurrence of major events, i.e., machine failure or
repair and buffer overflow or depletion, and using elementary analysis to keep track of
machine production and buffer levels in the interim. The speed of the models for rela-
tively reliable systems was verified through a large number of experiments. These models
can be used as building blocks for the description of long production lines and complex
networks.
4
PRODUCTION LINES

Production lines are among the most common types of production systems used in
industry. In spite of their simple structure, we can identify many varieties of produc-
tion lines such as manual or automated, synchronous, asynchronous, or stochastic, con-
trolled or uncontrolled, continuous or discrete, single-part, batch, or mixed, buffered or
unbuffered, etc. In this chapter we study a class of single-part, open production lines with
finite interstage buffers and deterministic production rates, which includes the two-stage
system as a special case. We develop the hybrid models of continuous and discrete traffic
using the event-based formalism discussed in the previous chapter. Other operating disci-
plines and particularities of the type described above can be incorporated into the models
at minimum effort. Some extensions are discussed in Section 4.4.
A number of experimental results are reported to compare the hybrid models,
continuous (CF) and discrete (DP), with conventional piece-by-piece simulation. We
use two criteria for the evaluation of the models, accuracy and computational require-
ments. In general, the continuous hybrid model delivers exceptional computational per-
formance over the others and it is exact for analyzing continuous flow systems. Moreover
it appears quite accurate for a wide range of discrete part lines. Hence it is a powerful tool
for optimization problems, where different system designs and operating policies must be
evaluated.
The CF model of lines with deterministic processing times was proposed by
D'Angelo et al. (1988) and the model for random processing times by Kouikoglou and
Phillis (1994). The DP model was developed by Kouikoglou and Phillis (1991).

4.1. CONTINUOUS FLOW MODEL

A production line is a serial arrangement of n machines M1, M2, ..., Mn, with n - 1
intermediate buffers B1, ..., Bn-1, as shown in Fig. 4.1. Workpieces enter each machine in
sequence and finally exit the last machine as finished products. The production line may
be open or closed. In open systems there is an infinite source of raw parts in front of the
first machine and an infinite sink for products at the end of the line. If the system is
closed, then Mn and M1 are connected through a buffer.


Figure 4.1. An open production line with n machines.

The storage capacity of Bi is BCi < ∞. Machine Mi requires 1/RMi time units to com-
plete each part. Hence, the nominal production rate of Mi is RMi. The flow is assumed to
be continuous, that is, machine production and buffer levels are fractional quantities. A
hybrid model for this system is obtained by extending the algorithm for two-stage sys-
tems developed in Section 3.3. The model observes changes of flow rates caused by ma-
chine failures, repairs, blockage and starvation, and utilizes elementary analysis to calcu-
late machine production and buffer levels in the intervals between successive changes.
In the two-stage system, there may be only one blocked machine (M1) or one starved
machine (M2), producing at a reduced rate. In longer lines, changes in production rates
propagate instantly to the beginning and the end of the production system through chains
of consecutive blocked and starved machines. Specifically, we have the following cases.
(1) When buffer Bi becomes full, the rate of Mi+1 is not altered, but machine Mi be-
comes blocked instantly and is forced to run at a slower rate Ri = Ri+1. If there is a chain
of blocked machines Mi-1, Mi-2, ..., Mi-k, the algorithm is repeated upstream by setting

Ri-m = Ri-m+1

for m = 1, 2, ..., k, until a non-full buffer Bi-k-1 is reached. Here we have a block of ma-
chines collapsed into one with rate Ri-m = Ri+1. Since the rate of a blocked machine is
less than its nominal production rate, the above recursion can be written equivalently as

Ri-m = min(RMi-m, Ri-m+1)        (4.1)

for m = 1, 2, ..., k.
(2) When buffer Bi becomes empty, in a dual fashion, Ri+1 = Ri and Mi+1 becomes
starved. If there is a chain of starved machines Mi+2, Mi+3, ..., downstream of Mi+1,
these machines are forced to run at a slower rate:

Ri+m = min(RMi+m, Ri+m-1)        (4.2)

for m = 2, 3, ..., k, until a non-empty buffer Bi+k is reached; again, these adjustments are
equivalent to setting Ri+m = Ri.
(3) Recursions (4.1) and (4.2) are invoked in the beginning of the simulation in order
to initialize the production rates given the initial buffer levels.
(4) When Mi breaks down we set Ri = 0. If there is a chain of blocked (starved) ma-
chines upstream (downstream) of Mi, then their rates become 0 immediately according to
Eq. (4.1) or (4.2).
(5) When Mi is repaired, its rate is restored to the maximum value Ri = RMi. The new
rate propagates instantly upstream and downstream according to Eqs. (4.1) and (4.2).
That is, if Mi-1, Mi-2, ... (Mi+1, Mi+2, ...) had been forced down by Mi, then they begin
processing again at their rated capacities or at the rates of their immediately succeeding
(preceding) machines.
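Cases (1) through (5) reduce to two sweeps over the line, one downstream for starvation and one upstream for blockage. The sketch below is our Python paraphrase of recursions (4.1) and (4.2) for an open line; all names are illustrative:

```python
def propagate_rates(nominal, failed, levels, capacities):
    """One sweep of recursions (4.1)-(4.2) over an open line.
    nominal[i]: rate RM_i; failed[i]: True if M_i is down;
    levels[i], capacities[i]: level and capacity of buffer B_i (i = 0..n-2)."""
    n = len(nominal)
    rates = [0.0 if failed[i] else nominal[i] for i in range(n)]
    # Downstream pass, Eq. (4.2): a starved machine cannot outrun its feeder.
    for i in range(1, n):
        if levels[i - 1] == 0:
            rates[i] = min(rates[i], rates[i - 1])
    # Upstream pass, Eq. (4.1): a blocked machine cannot outrun its successor.
    for i in range(n - 2, -1, -1):
        if levels[i] == capacities[i]:
            rates[i] = min(rates[i], rates[i + 1])
    return rates
```

In this sketch a chain of full buffers collapses a whole block of machines onto the slowest downstream rate, and a chain of empty buffers propagates the feeder's rate forward, which is exactly the behavior described in cases (1) and (2).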
With the above considerations and the discussion of Section 3.3.2, the hybrid model
works as follows:

Algorithm 4.1. Hybrid model for continuous flow production lines

(a) Initialize. Specify machine parameters, buffer capacities and initial levels, and
    total simulation time tmax.
    (a1) Set t = 0, Ri = RMi, i = 1, ..., n.
    (a2) Using the rates from step (a1), trace the line downstream and compute new
         rates for the starved machines using Eq. (4.2).
    (a3) Using the rates from step (a2), trace the line upstream and compute new
         rates for the blocked machines using Eq. (4.1).
    (a4) Compute the time of next event for each component.
(b) Advance Simulation Clock. Store the current time τ = t. Find the component with
    the most imminent event and advance the clock to the corresponding event-time

    t = min over all Mi, Bi of {TMi, TBi}

    If t > tmax, then set t = tmax, update all state variables, and terminate the simu-
    lation.
(c) Execute Event Routine.
    (c1) Identify the chains of blocked and/or starved machines to be affected by the
         event. Using Eqs. (3.14), update the cumulative production, parts-to-failure,
         and buffer levels of every machine and buffer in the affected chains. Statis-
         tics of machine utilization and buffer levels are updated as in Section 3.3.2.
    (c2) Adjust production rates of the affected machines.
    (c3) Compute the next event of each affected component according to Eqs. (3.10)-
         (3.13).
    (c4) Go to (b).

Alternatively, the algorithm can stop when the last machine Mn completes a specified
production volume. The source code of the hybrid continuous flow algorithm, written in
FORTRAN 77, is presented in Appendix 4.A1.

4.2. DISCRETE PART MODEL

In this section we extend the hybrid model of two-stage discrete systems to analyze
production lines with several machines and intermediate buffers. This task requires a
more elaborate analysis than in the case of continuous flow. In the continuous case, at
any time instant an event can be either active (on) or disabled (off). Whenever two com-
petitive events are in effect simultaneously, one of them dominates while the other is dis-
abled immediately. For example, if Mi works faster than Mi-1 then Bi-1 will become
empty and the rate of Mi will be reduced to the value Ri-1. If Ri-1 happens to be larger
than the rate Ri+1 of the downstream machine Mi+1, then a buffer-full event will occur at
Bi after an elapsed time. Then the rate of Mi will decrease further to Ri+1 and, as a result,
the level of Bi-1 will start to increase. Here we have the situation where blockage of Mi
changes the state of Bi-1 from empty to not empty (partially full) instantly.
From the discussion in Section 3.4, it turns out that when the traffic is discrete,
events alter the production rates after some transient time. Therefore, at any time t, an
event can be off, imminent, or on. For example, blockage of a machine is:
off, if the downstream buffer is partially full,
imminent, if the downstream buffer is full but the machine has not finished its
part, or
on, if the downstream buffer is full and the machine is blocked.
Similarly, we can identify three different states for starvation. The problem arises when-
ever blockage and starvation are imminent, that is, the buffer which is upstream from a
machine is empty while the downstream one is full. This situation has not been consid-
ered for the two-stage line, because the first machine cannot be starved and the second
cannot be blocked. We examine this phenomenon in detail in Section 4.2.2. In the next
section we define the state variables and the event types that determine the evolution of
the system.

4.2.1. State Variables and Events

There are three buffer states: empty, intermediate (partially full), and full. There are
two machine states: up and down (under repair). The model uses the following events:
(a) a machine fails
(b) a machine is repaired
(c) a buffer becomes full
(d) a buffer becomes empty
(e) a buffer becomes not full
(f) a buffer becomes not empty
Events (e) and (f) change the state of a buffer from full and empty, respectively, to
partially full. In the continuous flow model, these events occur simultaneously with the
occurrence of a failure, or buffer-full and buffer-empty events. Hence they need not be
considered separately. For instance, when a buffer is full and the upstream machine be-
comes starved, the state of the buffer switches to partially full and the machine becomes
not blocked immediately. In discrete part systems, these events are realized after a tran-
sient time has elapsed, as we shall see later in this section.
We now introduce the state variables of the system. Transient times were defined in
Section 3.4. For longer production lines we define
ai   remaining time-to-next arrival at buffer Bi
di   remaining time-to-next departure from Bi.
By definition, ai is the remaining time-to-next departure from Mi and di the time-to-next
arrival at Mi+1. The other state variables used in the model (production rates, cumulative
production, parts-to-failure, buffer levels, and their statistics) and the next-event times of
machines and buffers are as in Section 3.4.
When an event takes place, the model updates and adjusts the state variables, and
schedules next events at the affected components. Then the simulation clock is advanced
to the time of the most imminent event and the above procedure is repeated until the end
of simulation. The equations for updating state variables and scheduling events (a)-(d) are
the same as for the two-stage system, derived in Section 3.4. Next we examine the not-
full and not-empty events in detail.
A not-full event takes place when blockage is canceled. There are two possibilities
for this event. The first one is illustrated in Fig. 4.2. Machine M_{i+1} is faster than M_i, but it
is under repair for a sufficiently long period such that the intermediate buffer B_i becomes
full. At time t, a buffer-full event takes place and M_i is forced to stay idle throughout the
remaining repair period of M_{i+1}. Blockage is canceled after an elapsed time d_i, when M_{i+1}
releases the next item, since, from that time on, it works faster than M_i. The condition for
blockage cancellation is

    {B_i = full, a_i = d_i, and R_i < R_{i+1}}    (4.3)

The corresponding event-time is computed at time t using

    T_{B_i} = t + d_i    (4.4)

Several not-full events take place simultaneously if there is a chain of blocked machines
upstream of M_i.

Figure 4.2. Buffer B_i becomes not full.

The second possibility is that of blockage being canceled by starvation of M_i; this
case is discussed in the next section.
A not-empty event takes place when starvation is canceled (see Fig. 4.3). Again we
have two possibilities. Suppose M_i is faster than M_{i+1}, but it is under repair for a suffi-
ciently long period such that the intermediate buffer B_i becomes empty. Then, M_{i+1} is
forced to stay idle throughout the remaining repair period of M_i. Starvation is canceled
immediately after M_i releases the next item, since, from that time on, it works faster than
M_{i+1}. The condition for this event is

    {B_i = empty, a_i = d_i, and R_{i+1} < R_i}    (4.5)

and the time of its occurrence is computed from

    T_{B_i} = t + a_i    (4.6)

This event may generate additional not-empty events that propagate through a chain of
starved machines downstream of M_{i+1}. For example, if B_{i+1} happens to be empty and
R_{i+2} < R_{i+1}, then it will become not empty at time T_{B_{i+1}}, when M_{i+1} completes the first
item after the transient period of starvation has elapsed. Therefore, unlike the not-full
event, which propagates to all the upstream buffers simultaneously, the not-empty events
occur at a sequence of distinct times T_{B_i}, T_{B_{i+1}}, ....
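As a sketch, conditions (4.3)-(4.6) can be checked with a pair of small scheduling functions. The state encoding below (strings for buffer states, an infinite sentinel meaning "no event yet") is illustrative and not part of the model; condition (4.5) is written here as the dual of (4.3):

```python
INF = float("inf")

def schedule_not_full(t, buffer_state, a_i, d_i, R_i, R_ip1):
    # Condition (4.3): B_i full, arrivals and departures synchronized,
    # and the downstream machine faster than the upstream one.
    if buffer_state == "full" and a_i == d_i and R_i < R_ip1:
        return t + d_i        # Eq. (4.4): M_{i+1} releases the next item
    return INF                # blockage cannot be canceled yet

def schedule_not_empty(t, buffer_state, a_i, d_i, R_i, R_ip1):
    # Dual condition (4.5): B_i empty and the upstream machine faster.
    if buffer_state == "empty" and a_i == d_i and R_ip1 < R_i:
        return t + a_i        # Eq. (4.6): M_i releases the next item
    return INF
```

A returned value of infinity simply means the corresponding event is not scheduled at the current event time.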

Figure 4.3. Buffer B_i becomes not empty.

The other possibility for a not-empty event is when starvation of M_{i+1} is canceled by
blockage; it is discussed in the next section.
Since the machine failures and all the buffer-related events occur upon departures or
arrivals of workparts to buffers, two simultaneous events could be conditioned on a single
departure or arrival. The problem then arises as to which event to execute first. The dy-
namics of simultaneous buffer-related events are discussed in the next section. Here we
examine three additional possibilities, which will be referred to as the event priority
rules:
(1) A buffer is exhausted and the downstream machine is scheduled to fail while
    operating on the next workpart. Since the machine cannot break down during an
    idle interval, the buffer-empty event must be executed first. This situation has
    been discussed in Section 3.4.5.
(2) A machine breaks down on a workpart, which in turn is going to be blocked. As
    discussed in Section 3.4.5, the failure event must be executed first.
(3) Machine M_i is scheduled to be starved (B_{i-1} empties) and not blocked (B_i be-
    comes not full) simultaneously. Then, the buffer-empty event is executed first
    and the transient times of M_i are prolonged.

Rule 3 suggests that every possible delay to the production cycle and transient time of a
machine must be taken into account before a change of the buffer state from full to inter-
mediate occurs.
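One way to encode the event priority rules is to rank the event types and break ties on equal event times with that rank. The rank values below are hypothetical, chosen only so that rules (1)-(3) hold (buffer-empty before failure, failure before buffer-full, buffer-empty before not-full):

```python
# Hypothetical priority ranks; smaller rank = executed first among
# simultaneous events, per rules (1)-(3) above.
PRIORITY = {"buffer_empty": 0, "failure": 1, "buffer_full": 2,
            "not_full": 3, "not_empty": 3, "repair": 3}

def most_imminent(events):
    """events: iterable of (time, kind) pairs; returns the one to execute."""
    return min(events, key=lambda e: (e[0], PRIORITY[e[1]]))
```

Sorting on the pair (event time, rank) selects the smallest event time first and applies the priority rules only when times coincide.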

4.2.2. Event Scheduling of Starved-and-Blocked Machines

Consider a segment of the production line that consists of three machines and two in-
termediate buffers (see Fig. 4.4). Assume that B_{i-1} is empty and B_i is full. This situation
occurs when M_i is faster than its adjacent machines. Then, M_i is forced to wait until a part
arrives from the upstream buffer and, upon completion of processing, the part is blocked
until an empty space is available in the downstream buffer. The machine alternates be-
tween starved and blocked states periodically, until either a not-empty event cancels star-
vation or a not-full event cancels blockage.

Figure 4.4. Segment containing a starved-and-blocked machine M_i.

During a starved-and-blocked period, the production of M_i is dictated by the up-
stream and downstream machines, and the levels of the adjacent buffers assume extreme
values, BL_{i-1} = 0 and BL_i = BC_i. Therefore, if we can compute the length of this period
then we will have complete knowledge of the dynamics of the segment, thus avoiding
piece-by-piece simulation.
Let t be the time when the machine enters a starved-and-blocked state. We want to
predict the time of the next event in the segment. We distinguish the following situations:
1. A starved-and-blocked state is canceled immediately after its occurrence, be-
   cause either B_i becomes not full or B_{i-1} becomes not empty (see Cases A and B
   below).
2. A starved-and-blocked state is canceled after a number of parts have been pro-
   duced, again because either B_i becomes not full or B_{i-1} not empty (Cases C and
   D).
3. The rates of M_{i-1} and M_{i+1} are equal and machine M_i remains starved and
   blocked (Case E).
Case A. Figure 4.5 depicts an immediate cancellation of a buffer-full event due to
starvation. Machine M_i is blocked and its rate has been set equal to the rate of M_{i+1}.
However, M_{i-1} produces at a slower rate and buffer B_{i-1} is exhausted at time t. A unit
space will be available in B_i after an elapsed time d_i. However, M_i will request this space
after an elapsed time which equals the sum of the transient time a_{i-1} for the arrival of a
new part and the processing time of that part at M_i. Thus, M_i is no longer blocked. From
Fig. 4.5 we see that

    d_i ≤ a_{i-1} + 1/R_{M_i}    (4.7)

Here the empty event cancels the blockage directly, and thus a not-full event for B_i occurs
immediately, i.e.

    T_{B_i} = t    (4.8)

Figure 4.5. Cancellation of the starved-and-blocked state by an early not-full event.

Figure 4.6. Cancellation of the starved-and-blocked state by an early not-empty event.

Case B. A starved machine M_i fills its downstream buffer B_i because it is faster than
M_{i+1} (see Fig. 4.6). Machine M_i is then blocked and releases the next workpart after time
d_i, when a unit space is available in B_i. In the meantime, however, M_{i-1} has completed a
workpart and M_i is no longer starved. The condition now is

    d_i > a_{i-1} + 1/R_{M_{i-1}}    (4.9)

A not-empty event for B_{i-1} is scheduled at time

    T_{B_{i-1}} = t + a_{i-1} + 1/R_{M_{i-1}}    (4.10)

Figure 4.7. Cancellation of the starved-and-blocked state by a not-full event.

Case C. Suppose M_i has slowed down due to blockage but M_{i-1} is slower than M_{i+1}.
As a result, B_{i-1} empties at time t. The machine then remains starved and blocked for
several, say N_i, production cycles, before blockage is canceled. This situation is depicted
in Fig. 4.7. The not-full event occurs upon the departure of the last blocked part from M_i.
We then have the following:

Proposition 4.1. If M_i alternates between starved and blocked states for several pro-
duction cycles and M_{i-1} is slower than M_{i+1}, then a not-full event will take place after
the machine produces a total of

    N_i = 1 + ⌊(d_i − a_{i-1} − 1/R_{M_i}) / (1/R_{i-1} − 1/R_{i+1})⌋    (4.11)

parts.
Proof. By assumption, M_i and M_{i+1} continue producing synchronously after time t.
The departure time of the N_ith part (the last blocked part) from M_i is given by

    T_{B_i} = t + d_i + (N_i − 1)/R_{i+1}    (4.12)

Since this part is blocked, it must have been completed by a time T_1 no later than T_{B_i}.
Hence, by inspection of Fig. 4.7 we see that T_1 ≤ T_{B_i}, which can be written as

    t + d_i + (N_i − 1)/R_{i+1} ≥ t + a_{i-1} + (N_i − 1)/R_{i-1} + 1/R_{M_i}    (4.13)

Blockage is canceled at time T_{B_i}, and so the time at which M_i completes the (1 + N_i)th
part is greater than the time at which a single space for this part is available in B_i. From
Fig. 4.7 this implies

    t + d_i + N_i/R_{i+1} < t + a_{i-1} + N_i/R_{i-1} + 1/R_{M_i}    (4.14)

Upon combining inequalities (4.13) and (4.14), we obtain Eq. (4.11).

Using Eqs. (4.11) and (4.12) we obtain the time of the next event T_{B_i} at buffer B_i.
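Proposition 4.1 translates directly into code. The sketch below evaluates Eqs. (4.11) and (4.12); the function and argument names are ours, and R_prev < R_next is assumed, as required by the proposition:

```python
import math

def not_full_event(t, a_prev, d_i, RM_i, R_prev, R_next):
    """Number of parts N_i produced while starved-and-blocked, Eq. (4.11),
    and the time T_Bi of the resulting not-full event, Eq. (4.12)."""
    N_i = 1 + math.floor((d_i - a_prev - 1.0 / RM_i)
                         / (1.0 / R_prev - 1.0 / R_next))
    T_Bi = t + d_i + (N_i - 1) / R_next     # departure of the N_i-th part
    return N_i, T_Bi
```

For instance, with t = 0, a_{i-1} = 0.5, d_i = 2.0, R_{M_i} = 10, R_{i-1} = 2, and R_{i+1} = 4, the machine produces N_i = 6 parts before the not-full event occurs.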

Figure 4.8. Cancellation of the starved-and-blocked state by a not-empty event.

Case D. This case is depicted in Fig. 4.8 and is the dual of Case C. Now M_{i-1}
works faster than M_{i+1} and M_i is faster than M_{i-1}. Suppose that buffer B_{i-1} empties, forc-
ing M_i to slow down, and at some time t later B_i becomes full. Machine M_i then remains
starved and blocked for several production cycles before starvation is canceled. In the
figure, N_{i-1} is the number of items passed from M_{i-1} to M_i during the starved-and-
blocked period. The not-empty event occurs upon the arrival of the N_{i-1}th part at M_i. We
then have the following:

Proposition 4.2. If M_i alternates between starved and blocked states for several pro-
duction cycles and M_{i+1} is slower than M_{i-1}, then a not-empty event will take place after
machine M_{i-1} produces a total of

    N_{i-1} = 1 + ⌊(a_{i-1} − d_i − 1/R_{i+1}) / (1/R_{i+1} − 1/R_{i-1})⌋    (4.15)

parts.
Proof. Since the N_{i-1}th part is the one that cancels starvation, from time

    T_{B_{i-1}} = t + a_{i-1} + (N_{i-1} − 1)/R_{i-1}    (4.16)

on, machine M_i will not be starved, i.e., T_{B_{i-1}} is the time of the not-empty event. The pre-
vious part must have departed from M_i by a time T_1 no later than T_{B_{i-1}} (otherwise the ma-
chine would not be starved prior to T_{B_{i-1}}). Hence, by inspection of Fig. 4.8 we see that
T_1 ≤ T_{B_{i-1}}, which can be written as

    t + a_{i-1} + (N_{i-1} − 1)/R_{i-1} ≥ t + d_i + (N_{i-1} − 2)/R_{i+1}    (4.17)

Furthermore, since starvation is canceled at time T_{B_{i-1}}, by the time τ_2 at which M_i sends
the N_{i-1}th part downstream, another part must have been loaded into B_{i-1} from M_{i-1}.
From Fig. 4.8 this implies

    t + a_{i-1} + N_{i-1}/R_{i-1} < t + d_i + (N_{i-1} − 1)/R_{i+1}    (4.18)

Upon combining inequalities (4.17) and (4.18) we obtain Eq. (4.15).

Using Eqs. (4.15) and (4.16) we obtain the time of the next event T_{B_{i-1}} at buffer B_{i-1}.
Case E. If M_i is starved and blocked simultaneously and the rates of M_{i-1} and M_{i+1}
are equal, then buffer B_{i-1} will remain empty and B_i will remain full until the end of the
simulation, unless a disturbance is observed earlier, e.g., any of the three machines
breaks down, M_{i-1} becomes starved, or M_{i+1} becomes blocked. Hence we set

    T_{B_{i-1}} = T_{B_i} = ∞    (4.19)
4.2.3. Simulation Model Logic

Now we present the building blocks of the simulation model in detail. There are
many variables describing the state of the system and the model would be computation-
ally inefficient if, upon every event, we had to update and adjust the whole state vector.
This is not the case here, since the system is decomposable. Indeed, as discussed in the
previous sections, events cause local perturbations in the system by altering the transient
times and production rates of the adjacent machines. Therefore, when an event takes
place, only the states of the adjacent components need to be updated and adjusted. These
perturbations are transferred upstream and downstream along the production line through
a sequence of secondary events, which are observed and executed in series. The steps of
the discrete event algorithm are as follows:

Algorithm 4.2. Hybrid model for discrete part production lines


(a) Initialize the line. Set: total simulation time t_max; buffer capacities and initial lev-
    els; nominal production rates and transient times; machine mean times-to-failure
    and repair; length n of the production line. Trace the line downstream and
    schedule the next events.
(b) Determine the next event. Record the time of the most recent event, τ = t. Find
    the events with the smallest event-time and select one that complies with the
    priority rules of Section 4.2.1. If the time-of-next-event t exceeds t_max, go to step
    (d); otherwise go to (c).
(c) Execute the appropriate event routine (see below) and go to step (b).
(d) Terminate the simulation. Trace the line downstream and update all system vari-
    ables. Stop.
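Steps (a)-(d) amount to a standard next-event loop over a priority queue. A minimal sketch, with the event routines abstracted into a user-supplied `execute` function that returns any newly scheduled events (all names here are illustrative, not the book's implementation):

```python
import heapq
import itertools

def run_simulation(t_max, initial_events, execute, priority):
    """Next-event loop of Algorithm 4.2: pop the most imminent event,
    break ties on event time with the priority rules of Section 4.2.1,
    run its routine, push any new events, and stop past t_max."""
    tick = itertools.count()                  # stable tie-breaker
    heap = [(t, priority[k], next(tick), k, d) for t, k, d in initial_events]
    heapq.heapify(heap)
    clock = 0.0
    while heap:
        t, _, _, kind, data = heapq.heappop(heap)
        if t > t_max:                         # step (b): end of simulation
            break
        clock = t                             # advance the simulation clock
        for nt, nkind, ndata in execute(t, kind, data):   # step (c)
            heapq.heappush(heap, (nt, priority[nkind], next(tick), nkind, ndata))
    return clock
```

The counter entry keeps heap elements comparable when both time and priority coincide, a common idiom with `heapq`.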

Let S_i denote the segment M_i-B_i, i = 1, ..., n. The relevant event routines are:
1. Machine i fails
   Update segments S_i and S_{i-1} using the update equations of Section 3.4.4.
   Adjust the state variables of S_i and S_{i-1}. In particular, increase the transient
   times d_{i-1} and a_i by the amount of time required to repair M_i and complete
   the next item.
   Schedule next events of S_i and S_{i-1}.
2. Machine i is repaired
   Compute the new parts-to-failure using Eq. (2.10) and schedule the next
   time of failure as in Section 3.4.1.
3. Buffer i fills
   Update S_i, S_{i-1}.
   Adjust R_i to min{R_i, R_{i+1}}.
   If B_{i-1} happens to be empty, schedule next events in S_i and S_{i-1} as in Sec-
   tion 4.2.2; otherwise synchronize the transient times a_i and d_{i-1} of M_i with
   the transient time d_i, and schedule next events in S_i and S_{i-1} as in Section
   3.4.
4. Buffer i empties
   In a dual fashion:
   Update S_i, S_{i+1}.
   Adjust R_{i+1} to min{R_i, R_{i+1}}.
   If B_{i+1} happens to be full, schedule next events in S_i and S_{i+1} as in Section
   4.2.2; otherwise set d_i = a_i and a_{i+1} = a_i + 1/R_{M_{i+1}}, and schedule next events
   in S_i and S_{i+1} as in Section 3.4.
5. Buffer i becomes not full
   Update S_i, S_{i-1}.
   If B_{i-1} is empty, then set R_i = min{R_{M_i}, R_{i-1}}; otherwise, restore the rate of
   M_i to its nominal value. Decouple M_i from M_{i+1} by setting a_i = d_{i-1}.
   Schedule next events in S_i, S_{i-1} as in Section 3.4.
6. Buffer i becomes not empty
   In a dual fashion:
   Update S_i, S_{i+1}.
   If B_{i+1} is full, then set R_{i+1} = min{R_{M_{i+1}}, R_{i+2}}; otherwise, restore the rate
   of M_{i+1} to its nominal value. Decouple M_{i+1} from M_i by setting d_i = a_{i+1}.
   Schedule next events in S_i, S_{i+1} as in Section 3.4.

4.3. NUMERICAL RESULTS

This section reports on the computer implementation and efficiency of the hybrid
models. The following simulators were developed in FORTRAN 77: a piece-by-piece
(PP) simulator, based on Algorithm 3.1 and the flowchart of Section 3.2.1, and two
others corresponding to the continuous flow (CF) and discrete part (DP) hybrid models.
As in the previous chapter, in order to compare the models under the same experi-
mental conditions, we use common sequences of random numbers, each one dedicated to
a certain type of event and a given machine. The issues investigated are those of accuracy
and computational efficiency. The relative speed, RS, of the hybrid models with respect
to the PP model is measured by

    RS = (CPU time of PP model) / (CPU time of hybrid model)

and the relative estimation error, RE, for the various performance measures by

    RE = [(hybrid model estimate − PP model estimate) / (PP model estimate)] × 100%
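Both measures are simple ratios; as a sketch (function names are ours):

```python
def relative_speed(cpu_pp, cpu_hybrid):
    # RS > 1 means the hybrid model runs faster than the PP benchmark.
    return cpu_pp / cpu_hybrid

def relative_error(hybrid_estimate, pp_estimate):
    # Signed percentage; a negative RE means the hybrid underestimates.
    return (hybrid_estimate - pp_estimate) / pp_estimate * 100.0
```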

We consider two production lines, L1 and L2. In L1, the first five machines have
nominal rates R_M1 = 5, R_M2 = 4, R_M3 = 6, R_M4 = 7, R_M5 = 8. The sixth machine is identical
to M1, the seventh machine is identical to M2, and so on. The line is completed by a peri-
odic connection of blocks of five machines with their corresponding buffers. Failures are
operation-dependent. We assume that a fully utilized machine breaks down every 20 time
units on the average, or, equivalently, the mean failure rate of machine M_i is p_i = 0.05.
Furthermore, the number of parts M_i produces between two successive failures is a ran-
dom variable drawn from an exponential distribution with mean 20 R_{M_i}. The times-to-
repair of the machines are also exponentially distributed with mean 2 time units. Thus the
repair rate of M_i is r_i = 0.5.
Production line L2 consists of a series of identical five-machine segments with pa-
rameters R_M1 = 10, R_M2 = 8, R_M3 = 12, R_M4 = 15, R_M5 = 16, p_i = 0.1, and r_i = 1.0.
Four different experiments are conducted to determine the effects of transient phe-
nomena, buffer capacities, failure frequencies, and line length on the performance of the
hybrid models.
In actual manufacturing systems the start-up periods occupy a significant part of the
daily operation. When the system is empty, it takes some time to produce the first items.
Since the CF model assumes that material is produced continuously, it cannot take into
account this initial delay. This further suggests that in the CF model the first failures are
observed earlier than the actual times. Figure 4.9* displays the throughput error and the
number of buffers whose errors are larger than 10% for line L1 with n = 100 machines
and buffer capacities BC_i = 2.

Figure 4.9. Errors of the CF model during the initial transient period. (The plot shows the
throughput error, in percent, and the number of buffers with RE > 10%, versus simulation time.)

* © 1991 IEEE. Reprinted, with permission, from IEEE T. Automat. Contr. 36:515.

It should be stressed that since the DP (hybrid, discrete part) model is exact, it does
not exhibit any errors in throughput rates or mean buffer levels when compared with a
piece-by-piece simulator. The greatest disadvantage of the CF model appears in the com-
putation of mean buffer levels, whereas the throughput estimates are close to those of the
piece-by-piece simulator. The relative speeds of the CF and DP models are 6.02 and 4.15,
respectively, for a simulation period of 900 time units. Lines with larger buffers have
longer transient periods, but the continuous flow approximation is more accurate than in
those with small capacities.
In the second experiment we examine the effects of buffer capacities on the perform-
ance of the hybrid models. We simulate line L2 with n = 40 machines and various buffer
capacities until 10,000 items are produced. Initially the line is empty. The results are
given in Table 4.1. For small capacities (BC_i ≤ 1) the throughput error of the CF model is
in excess of 10%. The relative speeds of the hybrid models are very close. However, the
CF model is superior in speed for BC_i ≥ 2, and its accuracy in estimating mean through-
put rate improves as buffer capacities increase, reaching negligible error levels for
BC_i ≥ 10.

Table 4.1. Accuracy and speed for various buffer capacities.

             DP model              CF model
    BC_i     RS      RE (%)        RS      RE (%)
    0        2.04     0.0          2.07    -60.7
    1        2.69     0.0          3.65    -11.7
    2        3.39     0.0          4.85     -6.8
    3        3.86     0.0          5.62     -3.8
    4        4.36     0.0          6.34     -3.4
    5        4.76     0.0          6.69     -2.2
    10       6.46     0.0          9.79     -0.6

The key condition under which the hybrid algorithms deliver superior computa-
tional efficiency is that events occur at a frequency an order of magnitude smaller than
the machine production rates. The latter determine the frequency of events for the PP
simulator. Indeed, when buffer capacities are large, blockings occur rarely and the effi-
ciency of the hybrid model is remarkable. To further support the above conjecture we
investigate the effect of increasing failure rates of line L2 with 40 machines and space for
10 items in each buffer. As we observed in the previous chapter, the results, shown in
Fig. 4.10, suggest that there is a critical level of machine vulnerability, below which the
DP model is faster than the PP benchmark. For the particular experiment herein the criti-
cal failure rate is p_i = 1.0, which implies that the fastest machines (max R_{M_i} = 16) break
down once every 16 production cycles, on the average, whereas the slowest ones fail
once every 8 cycles. However, such frequent failures are rarely the case in actual produc-
tion lines. If the line is completely reliable, after a transient period, all machines are
slowed down to the rate of the slowest machine and no event takes place thereafter. In
this case, the model is infinitely faster than the PP simulator. Therefore, relatively reliable
systems can be efficiently analyzed by the exact hybrid model.
Figure 4.10. Relative speed of DP model versus failure rates. (Failure rate on a logarithmic
axis from 0.01 to 10; the curve marks the regions where the DP or the PP model is faster.)

Figure 4.11. Relative speed of hybrid models versus system size. (CF and DP curves, for
lines of 10 to 100 machines.)

In the last simulation experiment, all parameters of L2 are the same as in the standard
case, the buffer capacities are 10, and the line length varies from 10 to 100 machines.
From Fig. 4.11 it appears that the relative speeds of the hybrid models increase with in-
creasing system size, a property that allows large systems to be analyzed at minimum
computational cost. In addition, the accuracy of the CF model is exceptional, with its
throughput errors being less than 0.6% in all cases.

4.4. EXTENSIONS

This section discusses extensions of hybrid models to more complex production
lines, batch production, and random processing times. In Chapter 6, these models are
further enhanced to take into account several optimization and control issues such as
buffer space allocation, maintenance, lot sizing, and sequencing problems.

4.4.1. Series-Parallel Configurations

A series-parallel production system is a serial arrangement of workstations W_1, W_2,
..., W_n. Station W_i is a block of k_i parallel machines M_{i,j}, which perform the same type of
operation at nominal rates R_{M_{i,j}}, j = 1, 2, ..., k_i. Figure 4.12 illustrates a segment of a
series-parallel production system.

Figure 4.12. Three workstations connected in series.

Throughout this section we assume that the flow of items is continuous rather than
discrete. A hybrid model for this system is the same as that of the production lines (Algo-
rithm 4.1, Section 4.1), except for a few extensions we discuss next.
The production rate R_i of workstation W_i is the sum of the production rates R_{i,j}, j = 1,
2, ..., k_i, of its machines; its capacity or nominal rate, R_{M_i}, is the sum of the nominal pro-
duction rates of the operational machines. When B_i becomes full, R_i is reduced to the rate
of the downstream workstation W_{i+1}, and similar rate reductions are realized in the chain
of blocked workstations W_{i-1}, W_{i-2}, ..., until a non-full buffer is encountered. When B_i
becomes empty, the model traces the downstream chain of starved workstations, if any
exists, and reduces their rates to the rate of W_i.
If a workstation is slowed down and one of its machines breaks down or is repaired,
then the rates of the other operating machines in the same workstation are adjusted in-
stantly according to a given work allocation discipline. Decisions concerning workload
allocation to a given block of operational machines depend on the state of the production
line and the control policy set by the production management.
One of the most popular disciplines for load balancing is the FIFO rule, which pri-
oritizes machines that have spent a longer time awaiting a new part. This discipline at-
tempts to equalize the busy or, equivalently, idle times of all the machines. For continu-
ous flow systems, this is equivalent to adjusting the machine rates in proportion to their
maximum rates. For example, if W_i has two operating machines M_{i,1} and M_{i,2} and it is
slowed down to a rate R_i, then machine M_{i,j} will produce at rate

    R_{i,j} = R_{M_{i,j}} (R_i / R_{M_i}),    j = 1, 2    (4.20)

where R_{M_i} = R_{M_{i,1}} + R_{M_{i,2}} is workstation W_i's maximum production rate. Furthermore, by
Eq. (4.20), R_{i,j}/R_{M_{i,j}} = R_i/R_{M_i} for every j, which implies that all machines are equally util-
ized.
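Eq. (4.20) extends to any number of operating machines; a sketch (names are ours):

```python
def fifo_rates(R_i, nominal_rates):
    """Proportional allocation of the workstation rate R_i among its
    operating machines: each runs at the common fraction R_i / R_Mi of
    its nominal rate, so all utilizations are equal."""
    R_Mi = sum(nominal_rates)            # workstation capacity
    return [r * R_i / R_Mi for r in nominal_rates]
```

For example, a workstation slowed to R_i = 6 with nominal rates 10 and 20 assigns rates 2 and 4, and both machines run at 20% of capacity.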
Another strategy for maintaining a certain feasible level R_i of the flow rate through
workstation W_i is to utilize only a subset of machines at any given time; the remainder of
the parallel machines are on standby and are used only in the event of failure. According
to this discipline, each machine is assigned a priority index based on its operating char-
acteristics, such as speed, reliability, operational cost, etc. Assume that the machines of
W_i are indexed according to descending priority. If machine M_{i,1} is operational, then it
attempts to process the total workload R_i parts per time unit. If R_i exceeds the nominal
production rate of M_{i,1}, the excess workload XS = R_i − R_{M_{i,1}} is dispatched to M_{i,2}, and so
on. The process stops when the workload is exhausted, since the total rate R_i is feasible,
i.e., it is less than or equal to the nominal production rate of W_i. Thus the rate allocation
algorithm is:
(a) Initialize: set R_{i,m} = 0, for m = 1, 2, ..., k_i, XS = R_i, and j = 0.
(b) Check: if XS = 0, then stop.
(c) Load next machine: replace j by j + 1;
    set R_{i,j} = min{R_{M_{i,j}}, XS};
    replace XS by XS − R_{i,j};
    go to step (b).
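Steps (a)-(c) can be sketched as a single loop over the machines in priority order (the machine list and function name are ours; R_i is assumed feasible, as the text requires):

```python
def priority_rates(R_i, nominal_rates):
    """Standby allocation: load machines in descending priority order
    (the order of nominal_rates) until the workload R_i is exhausted."""
    rates = [0.0] * len(nominal_rates)   # step (a): all machines idle
    xs = R_i                             # excess workload to assign
    for j, r_max in enumerate(nominal_rates):
        if xs == 0:                      # step (b): workload exhausted
            break
        rates[j] = min(r_max, xs)        # step (c): load next machine
        xs -= rates[j]
    return rates
```

For example, a workload of 13 over machines with nominal rates 10, 8, and 5 loads the first machine fully, assigns 3 to the second, and leaves the third on standby.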
The above extensions can easily be incorporated into Algorithm 4.1 of Section 4.1.
To compare the continuous flow and the piece-by-piece models we consider a pro-
duction line with identical workstations. Each workstation consists of three identical ma-
chines, whose production, failure, and repair rates are 10.0, 0.1, and 1.0, respectively. Be-
tween successive workstations there are buffers, each with a capacity of 20 parts. The pro-
duction rates of the operating machines of a workstation are determined according to the
FIFO rule. The continuous flow and piece-by-piece models are run for a production pe-
riod of 3,000 time units using common random numbers.

Figure 4.13. Relative speed of the CF model versus system size.

Figure 4.13 shows the relative speed (RS) of the continuous flow model as a func-
tion of the line length. The errors of the CF model in estimating throughput and mean
buffer levels are less than 0.8% and 2%, respectively. Again, as in the previous section,
the relative speed of the model is an increasing function of the number of workstations,
ranging between the values 12 and 23.
From the above experiments, it appears that the CF model is an accurate and efficient
tool for the analysis of series-parallel configurations in steady state.

4.4.2. Variable Processing Times

The key condition for the efficiency of the hybrid models is that the machine produc-
tion rates change due to event occurrence at a frequency that is considerably smaller than
their own order of magnitude. Central to this condition is the assumption of constant
processing times. Indeed, the hybrid models would be inefficient if we were to adjust the
production rate each time a machine begins processing a new part. Although the assump-
tion of constant processing times is valid for single-product transfer lines where machine
operations are numerically controlled, it cannot capture the effects of variations in the
processing times induced by variable human performance in repetitive tasks and the di-
versity within the family of products.
In this section, we examine a production line that produces several types of products
in batches. At time zero there are J jobs available for processing before the first machine,
where J ≥ 0. Job j, j = 1, 2, ..., J, represents a production order (backorder) of a given
number Q_j of items, and its processing time requirements differ from those of the other
jobs. Thus, for a given machine, the nominal production rate changes over time according
to the type of job being processed. Such changes may require that some amount of setup
time be spent for retooling, reprogramming, cleansing, etc., depending on the dissimilar-
ity between the operations of the preceding and succeeding jobs. The list of backordered
jobs is updated each time a new order arrives in the system. We assume that the order
quantities and interarrival times are random variables drawn from known distributions.
All the above can be handled well by the hybrid models by introducing three addi-
tional events, namely, job arrival, setup commencement, and setup completion. We now
illustrate in detail how the continuous flow model records these events.
At time zero there is a list of backordered jobs awaiting entrance to the first machine.
A new job is scheduled to arrive after a random interarrival time computed from a ran-
dom variate generator. Now assume that machine M_1 begins a setup for job j on the list.
During a setup period, a machine cannot process parts; hence we set R_1 = 0. When the
machine completes its setup, it begins producing parts j at its rated speed R_{M_1,j}. Hence, a
setup completion can be handled exactly as a machine repair. At this point the number of
parts-to-next-setup for M_1 is q_1 = Q_j. A new setup is scheduled to occur when q_1 runs
down to zero. All these events can be taken into account as follows:
1. A new job arrives at time t. Find the corresponding order quantity q; if this quan-
   tity is random, invoke a random variate generator. Insert the order into the back-
   log list; that is,
   replace J by J + 1;
   set Q_J = q;
   identify the types of operations required from each machine and the corre-
   sponding nominal production rates R_{M_i,J}, i = 1, 2, ..., n.
   By invoking a random variate generator find the time-to-next-arrival t_a and
   schedule the arrival of the next job (J + 1) at time T_A = t + t_a. If M_1 is starved,
   then commence a setup for job J immediately.

* © 1995 IIE. Reprinted, with permission, from IIE Transactions, 27:32.
2. M_i commences a setup for job j at time t. Update M_i, the upstream buffer B_{i-1},
   and the downstream buffer B_i; set R_i = 0 and schedule next events for B_{i-1} and
   B_i. Let j' be the most recent job that has been processed by M_i. Schedule a setup
   completion at time t + S_{i,j',j}, where S_{i,j',j} is the time required to set up M_i for job j
   immediately after job j'.
3. M_i completes a setup for job j at time t. Update machine M_i, the upstream buffer
   B_{i-1}, and the downstream buffer B_i; set q_i = Q_j and R_i = R_{M_i,j}; schedule next
   events for B_{i-1} and B_i. The machine will finish job j at time t + q_i/R_i. Schedule a
   setup of M_i at time t + q_i/R_i. Using the updated number of parts-to-failure of M_i,
   invoke Eq. (3.10) to compute the time of failure of M_i.
4. Effects of blockage or starvation occurring at time t. Identify the chain of buff-
   ers and machines to be affected by the event. Update the corresponding numbers
   of parts-to-next-setup q_i. Reduce the production rate R_i of the affected machine
   M_i using recursion (4.1) or (4.2). Using the new rate, recalculate the time of next
   setup and the time of failure of M_i at time t + q_i/R_i.
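Steps 2 and 3 reduce to two small event handlers; only the event-time formulas t + S_{i,j',j} and t + q_i/R_i come from the text, while the function signatures below are our own sketch:

```python
def on_setup_commenced(t, setup_time):
    """Step 2: during a setup the machine produces nothing; the setup
    completes after the job-pair-dependent time S_{i,j',j}."""
    R_i = 0.0
    completion_time = t + setup_time       # t + S_{i,j',j}
    return R_i, completion_time

def on_setup_completed(t, Q_j, RM_ij):
    """Step 3: resume at the job's nominal rate; the lot of Q_j parts
    finishes, and the next setup is scheduled, at time t + q_i / R_i."""
    q_i, R_i = Q_j, RM_ij
    next_setup_time = t + q_i / R_i
    return q_i, R_i, next_setup_time
```

For example, a setup of length 1.5 commencing at t = 5 completes at t = 6.5; a lot of 20 parts at rate 8 then triggers the next setup at t = 9.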
In the above algorithm, the machines and buffers are updated using Eqs. (3.14). The
number of parts-to-next-setup for M_i at time t is updated similarly from

    q_i = q_i' − R_i' (t − τ)

where τ is the time of the previous event at M_i; q_i' denotes the parts-to-next-setup at time
τ and R_i' is the rate of M_i in the interval [τ, t). The event scheduling equations are given
by Eqs. (3.10)-(3.13).
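Consistent with the statement that the count "runs down to zero" as the lot is produced, the update can be sketched as follows (the decreasing sign convention is inferred from that statement):

```python
def parts_to_next_setup(q_prev, R_prev, t, tau):
    """Remaining parts before the next setup of M_i at time t, given the
    count q_prev recorded at the previous event time tau and the constant
    rate R_prev over the interval [tau, t)."""
    return q_prev - R_prev * (t - tau)
```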
For systems like the one described above, production control is based only on the ac-
tual occurrences of demand. That is, raw parts are released in front of the first machine
only when a new order is placed. Such systems are known as produce-to-order systems.
Produce-to-order operation reduces the inventory carrying costs, because items are pro-
duced only when there is demand for them and no product is stored, but may lead to long
delays in filling customer orders.
On the contrary, in produce-to-stock systems, decisions regarding acceptance or re-
jection of an incoming order and whether to release raw material and semi-finished items
into the various production stages are based on the inventory/backlog status of the system
(Buzacott and Shanthikumar, 1993; Dallery and Liberopoulos, 2000). That is, an incom-
ing order is rejected if the current backlog J or the total unfinished work
PRODUCTION LINES 119

has reached a specified upper limit. Also a machine stops producing, even if it may have
raw parts, when a subset of the downstream buffers or just the inventory of finished items
has reached a target level. These decisions affect the throughput, mean buffer levels, and
delays in filling customer orders. Therefore, they are directly related to operational costs
and sales revenues.
Since the continuous flow model can update the inventory/backlog status of the sys-
tem at any time instant, it can easily be modified to handle this kind of production con-
trol. Furthermore, the hybrid model could be used for testing different values of the in-
ventory and backlog target levels in order to balance the inventory, setup, and backlog
costs against sales revenues and maximize the profit rate of the system. We shall discuss
this issue in Section 6.2.
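The produce-to-stock decisions just described can be sketched as two predicates (illustrative Python with hypothetical names and limits; the actual decision variables are the inventory/backlog target levels discussed in Section 6.2):

```python
def accept_order(backlog, unfinished_work, backlog_limit, work_limit):
    # Reject an incoming order once the backlog or the total
    # unfinished work has reached its specified upper limit.
    return backlog < backlog_limit and unfinished_work < work_limit

def may_release(downstream_levels, target_levels):
    # A machine keeps producing only while some monitored downstream
    # inventory is still below its target level.
    return any(x < s for x, s in zip(downstream_levels, target_levels))

print(accept_order(3, 12, 5, 20))     # True: both below their limits
print(accept_order(5, 12, 5, 20))     # False: backlog at its limit
print(may_release([4, 10], [4, 10]))  # False: all targets reached
```

Because the continuous flow model knows the inventory/backlog status at every instant, such predicates can be evaluated at each event without extra bookkeeping.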

4.4.3. Random Processing Times

We now examine production lines with random processing times or, equivalently,
random processing rates. Again, as in the previous section, the problem here is that it is
not efficient to generate the processing rate from a random variate generator each time a
machine begins processing a new part. To overcome this inefficiency, we develop a hy-
brid CF model in which we approximate the discrete and random traffic by a piecewise
deterministic fluid flow.
The fluid approximation we propose assumes that the processing rates are piecewise
deterministic. Specifically, in the CF model, M_i begins producing a lot of q items, 1 < q <
∞, at a constant rate. The model uses an appropriate random variate generator to compute
the lot's net processing time T. For example, if the processing times are exponentially
distributed then T has a gamma distribution. For q ≥ 10, generating a gamma variate re-
quires considerably less computation time than generating q exponential variates (Law
and Kelton, 1991). The model can handle more general distributions. For example, if the
processing times are Erlang or gamma, then we invoke the same generator since their
sum also has a gamma distribution.
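The lot-time generation just described might look as follows (an illustrative Python sketch using the standard library's gamma generator; it is not the book's implementation, and the names are hypothetical):

```python
import random

def lot_processing_time(q, mean_time, rng):
    # The sum of q i.i.d. exponential processing times with the given
    # mean is gamma (Erlang) with shape q and scale mean_time, so one
    # gamma draw replaces q exponential draws.
    return rng.gammavariate(q, mean_time)

rng = random.Random(7)
q, mean_time = 30, 0.1
T = lot_processing_time(q, mean_time, rng)
max_rate = q / T  # current maximum processing rate RM = q/T
```

Over many lots T averages q times the mean processing time, so the piecewise constant rate fluctuates around the nominal rate 1/mean_time.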

[Figure: cumulative production, approximated by a piecewise linear function, plotted over a train of processing times]

Figure 4.14. Approximation of random processing times (q = 5).



In the interval [0, T] the cumulative production is approximated by a linearly in-
creasing function, shown in Fig. 4.14. The CF model calculates the quantity RM_i = q/T,
which is the current maximum processing rate of M_i. When M_i completes these q items,
the maximum processing rate is adjusted similarly by generating another total processing
time T' for the next q items.
The accuracy and speed of the CF model depend crucially on the size q of the lot.
To see this, pick a single item at random from those produced by M_i during a production
period. Let σ_i^2 be the variance of its processing time. In the CF model, this item belongs
to a lot of size q. The sum T of the processing times of all the items in the lot is a random
variable whose variance is q σ_i^2. By the piecewise deterministic approximation of proc-
essing times, each item of the lot will be processed in a constant amount of time T/q.
Hence the variance of the processing time of the selected item will be (q σ_i^2)/q^2, that is,
smaller than the true variance σ_i^2 by a factor of q. By the same argument one can show
that the mth central moments of the approximate processing times are smaller than the
true ones by a factor of q^(m-1).
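This variance argument is easy to check by Monte Carlo (illustrative Python, not from the book): with exponential item times of mean 1, the approximate per-item time T/q should have variance close to 1/q.

```python
import random

def approx_item_variance(q, mean, n_lots, seed=1):
    # Under the piecewise deterministic approximation every item of a
    # lot takes T/q time units, where T ~ gamma(shape=q, scale=mean)
    # is the lot's total processing time.
    rng = random.Random(seed)
    xs = [rng.gammavariate(q, mean) / q for _ in range(n_lots)]
    m = sum(xs) / n_lots
    return sum((x - m) ** 2 for x in xs) / (n_lots - 1)

true_var = 1.0  # variance of an exponential item time with mean 1
v = approx_item_variance(q=5, mean=1.0, n_lots=100_000)
# v is close to true_var / 5 = 0.2, i.e. smaller by a factor of q
```

The sample variance of T/q concentrates near σ^2/q, illustrating why a large q smooths away the randomness of the processing times.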
Clearly, when q = 1, a machine processes successive parts at different rates, and
when q = ∞ processing rates are constant. One should therefore seek a sizeable value for
q to avoid computing a new rate for each part, because this is computationally inefficient.
At the same time, this value of q should not be too large, because then the randomness of
the processing times is ignored and the results are inexact. The obvious tradeoff will be
decided experimentally.
The accuracy of the approximation and its efficiency over a conventional piece-by-
piece (PP) simulator have been verified for various system configurations and parameter
values. Here we present numerical results from the study of serial systems with 2, 5, 10,
15, 20 and 30 workstations. Each workstation consists of three machines. The first ma-
chine has deterministic processing times, the second exponential, and the third Erlang
with 6 stages. The mean processing times are 0.1 time units and the probability that a
machine breaks down during a processing cycle is 0.01. Repair times are exponentially
distributed with mean 1 time unit, buffer capacities are 20 and the simulation horizon
spans 5,000 time units. The parameter q of the CF model assumed the values 20 and 30.

[Figure: CPU time (sec, logarithmic scale) vs. number of stations, for the PP simulator and for the CF model with q = 20 and q = 30]

Figure 4.15. CPU time vs. line length.



From Fig. 4.15 we see that the logarithms of the CPU times differ by a constant fac-
tor. Hence, the relative speed of the CF algorithm is independent of the system size. For q
= 30 the CF model is about 15.5 times faster than the PP one and the corresponding errors
of the estimates of the throughput rates are less than 1.43% in all cases. Compared to the
deterministic series-parallel system discussed in the previous section, the throughput er-
rors of the CF model are slightly larger because now we have approximated the random
processing times by piecewise deterministic processing times.
Using q = 20 results in a marginal improvement of the accuracy (errors < 1.32%), but
the relative speed drops to 11.5. Clearly the choice q = 30 is satisfactory for this system.

4.5. SUMMARY

In this chapter, we developed hybrid discrete event models of long production lines
in which the machines have constant, time-varying, or random processing times and the
flow of material is discrete or continuous. These models decompose the production line
into fast and slow dynamic systems, which are analyzed using the models presented in
Chapter 3. The flow of material through each buffer is analyzed separately, when the
buffer is partially full, or it is linked with the flows of its adjacent ones, when the buffer
fills or empties. We presented the equations that describe the flow of material through
chains of full or empty buffers and machines that are simultaneously starved and blocked.
When the processing times are random, discrete traffic is approximated by a piecewise
deterministic fluid flow. The accuracy of the hybrid models and their computational effi-
ciency were verified through a large number of experiments. In Section 6, we shall see
how these models can be used to optimize production lines. In Appendix 4.A1 we give
the computer program of the continuous flow model.

4.A1. APPENDIX: FORTRAN CODE FOR SIMULATING CONTINUOUS FLOW
PRODUCTION LINES

Here we present the FORTRAN code of the hybrid CF model for production lines.
The line consists of NM unreliable machines with deterministic processing times and
NM - 1 intermediate buffers with finite capacities. Machines and buffers are indexed by
N, N = 1, 2, .... Buffer N is located between machines N and N+1 and can store up to
BC(N) units of material.
Machine N requires 1/RATE(N) time units to process one unit of material. Hence
RATE(N) is the nominal production rate of machine N. Failures are operation-dependent.
In Section 3.1, where we presented the discrete part model of a two-stage production line,
we assumed that the numbers of parts-to-failure have geometric distributions on
{0, 1, ... }. However, since the flow is now continuous, we assume that the production
volume between two successive failures of machine N has an exponential distribution,
which is the continuous analog of the geometric distribution. Since every machine re-
quires a specified time to process one unit of material, it turns out that the operating time
between successive failures of machine N is also exponentially distributed. We assume
that the mean operating time between failures of machine N is 1/FR(N). Hence, the mean
number of parts-to-failure is RATE(N)/FR(N). The downtimes (times-to-repair) of ma-
chine N are assumed to be also exponential random variables with mean 1/RR(N). The
failure rate of machine N is FR(N) and the repair rate is RR(N). Table 4.A1 shows the
variables used in the model.

Table 4.A1. Definitions of variables used.

Variable Description
NM Number of machines (maximum value= 100)
N Machine index, N = 1, 2, ... , NM
SIMTIM Simulation period
INFINITY Infinity(= SIMTIM + 1)
TIME Present time (simulation clock)
RATE(N) Nominal production rate of machine N
FR(N) Failure rate of machine N
RR(N) Repair rate of machine N
R(N) Production rate of machine N
PROD(N) Cumulative production of machine N
PTF(N) Number of remaining parts-to-failure of machine N
DOWN(N) Percent downtime of machine N
NEM(N) Type of next event of machine N (failure= 0; repair= 1)
TEM(N) Time of next event of machine N
TPEM(N) Time of previous event of machine N
STATEM(N) State of machine N (down = 0; up = 1)
BC(N) Capacity of buffer N, N = 1, ..., NM - 1
BL(N) Level of buffer N
MBL(N) Mean level of buffer N
NEB(N) Type of next event of buffer N (empties = 0; fills = 2)
TEB(N) Time of next event of buffer N
TPEB(N) Time of previous event of buffer N
STATEB(N) State of buffer N (empty = 0; partially full = 1; full = 2)
Z Integer Z_n used by the random number generator (see Eqs. 2.9)
ICRN(N,J) Current value of Z used by the random number generator to
generate the uptime (J = 1) or downtime (J = 2) of machine N
RAND(Z) Multiplicative congruential generator:
Z_n = (630360016 Z_{n-1}) (mod 2^31 - 1);
computes a new integer value for Z and returns the next ran-
dom number Z / (2^31 - 1)
INISEED(POSITION) Function: returns an appropriate element from a list of 500
seeds (starting values Z_0) of the random number generator;
POSITION refers to the position of the seed in the list
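The statement PTF(N)=-ALOG(RAND(Z))*RATE(N)/FR(N), which appears in the listings below, is the inverse-transform method for the exponential production volume between failures. A Python rendering (illustrative only, with hypothetical names):

```python
import math, random

def parts_to_failure(rate, fr, u):
    # Inverse transform: if U is uniform on (0,1), -log(U)*RATE/FR is
    # exponential with mean RATE/FR parts, i.e. mean uptime 1/FR.
    return -math.log(u) * rate / fr

rng = random.Random(99)
sample = parts_to_failure(10.0, 0.01, rng.random())
# Sanity check: at u = exp(-1) the sample equals its mean RATE/FR.
print(abs(parts_to_failure(10.0, 0.01, math.exp(-1.0)) - 1000.0) < 1e-6)  # True
```

Dividing this production volume by the current rate R(N) then gives the operating time to the next failure, exactly as TEM(N)=TIME+PTF(N)/R(N) does in the code.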

The FORTRAN code is given in Figs. 4.A1 through 4.A8. Figure 4.A1 contains the
main program, called FLOWLIN. The main program reads the parameters of machines
and buffers from the input file FLOWLIN.INP. The variable POSITION is an integer
between 1 and 500 and its meaning will become clear later in this section.

The loops labeled 70 and 80 compute the integer variables ICRN(N, J), which are the
seeds of the random number generators for the uptimes and downtimes of machine N.
Specifically, ICRN(1,1) and ICRN(1,2) are used to generate the first uptime and the first
downtime of machine 1; the next two, namely ICRN(2,1) and ICRN(2,2), are reserved for
the first failure and the first repair of machine 2, etc. Using common seeds in the random
numbers of the continuous flow and discrete part algorithms we can compare these mod-
els under the same experimental conditions. The invoked function INISEED, shown in
Fig. 4.A8, yields successive seeds that are 100,000 random numbers apart. That is,
for the multiplicative congruential generator (see Section 2.3.1)

Z_{n+1} = (aZ_n) (mod c)

if the seed for the uptimes of machine 1 is Z_0 then the seed for the repair times of the
same machine will be Z_100000 and the corresponding seeds for machine 2 will be Z_200000
and Z_300000, respectively. This ensures that the sequence of the uptimes of machine 1 will
be independent from the sequence of downtimes of the same machine during the first
100,000 failures.
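Seeds spaced 100,000 draws apart can also be computed directly, without generating the intermediate numbers, since Z_k = a^k Z_0 mod m. A sketch (illustrative Python; the book instead stores a precomputed list of 500 seeds in INISEED):

```python
M = 2**31 - 1      # modulus of the generator
A = 630360016      # multiplier (see Section 2.3.1)

def step(z):
    # One draw of the multiplicative congruential generator.
    return (A * z) % M

def jump(z0, k):
    # Z_k = A**k * Z_0 mod M via fast modular exponentiation,
    # without producing the k intermediate seeds.
    return (pow(A, k, M) * z0) % M

z = 1234567890
for _ in range(10):
    z = step(z)
print(z == jump(1234567890, 10))  # True
```

With k = 100000 this reproduces the spacing between consecutive entries of the seed list.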
Subroutine INITIAL(TIME) sets the simulation clock TIME to zero, initializes the
state variables of the system, and schedules the next event of each buffer and each ma-
chine. Subroutine NEXTEVT(TIME, N, MAORBUF) finds the most imminent event;
TIME records the time of occurrence of this event, N denotes the index of the component
at which the event will take place, and MAORBUF specifies its type (machine or buffer).
Whenever an event occurs, the corresponding event routine is invoked. The event rou-
tines are
EMPTIES(N, TIME): buffer N empties at time TIME
FILLS(N, TIME): buffer N fills at time TIME
FAILS(N, TIME): machine N breaks down at time TIME
REPAIRED(N, TIME): machine N is repaired at time TIME
If the time of next event exceeds the simulation period SIMTIM then the program up-
dates the buffer levels and cumulative production of each machine (see the loop labeled
130), writes a number of output variables to file FLOWLIN.OUT, and terminates the
simulation.

PROGRAM FLOWLIN
REAL*8 INFINITY,SIMTIM,TEM,TEB,TPEM,TPEB,TIME,MBL
INTEGER*2 STATEM,STATEB,NEM,NEB,POSITION
INTEGER*4 Z,ICRN
COMMON/BLOCKO/SIMTIM,INFINITY
COMMON/BLOCK1/FR(100),RR(100),DOWN(100),ICRN(100,2)
COMMON/BLOCK2/NM,STATEM(100),STATEB(100),NEM(100),NEB(100),
&TEM(100),TEB(100),TPEM(100),TPEB(100),
&R(100),RATE(100),PROD(100),PTF(100),BC(100),BL(100),MBL(100)

Figure 4.A1. FORTRAN code of the main program FLOWLIN.



OPEN(5,FILE='FLOWLIN.INP',STATUS='OLD')
C read line parameters from file "FLOWLIN.INP"
READ(5,*) POSITION,NM,SIMTIM
C input parameter "POSITION" must be an integer between 1 and 500
IF (POSITION.GT.500) POSITION=500
IF (POSITION.LT.1) POSITION=1
DO 10 N=1,NM
READ(5,*) BC(N),BL(N),RATE(N),FR(N),RR(N)
10 CONTINUE
OPEN(6,FILE='FLOWLIN.OUT',STATUS='NEW')
C write results to file "FLOWLIN.OUT"
WRITE(6,*)' *************************************'
WRITE(6,*)' *    CONTINUOUS FLOW MODEL OF       *'
WRITE(6,*)' *   UNRELIABLE PRODUCTION LINES     *'
WRITE(6,*)' *************************************'
WRITE(6,20) POSITION,NM,SIMTIM
20 FORMAT(11H Seed no.=,I3/21H Number of machines=,I3/
&20H Simulation period=,F10.1,12H time units)
WRITE(6,30)
30 FORMAT(//7X,'PARAMETERS OF MACHINES AND BUFFERS'/
&'----|----M A C H I N E S: ----|- B U F F E R S: -'/
&'    |      R A T E S          | INITIAL|         '/
&' N. |PRODUCTION|FAILURE| REPAIR|CAPACITY|  LEVEL '/
&'----|----------+-------+-------|--------+--------')
DO 40 N=1,NM-1
WRITE(6,50)N,RATE(N),FR(N),RR(N),BC(N),BL(N)
BL(N)=AMIN1(BL(N),BC(N))
BL(N)=AMAX1(BL(N),0.)
40 CONTINUE
WRITE(6,60) NM,RATE(NM),FR(NM),RR(NM)
50 FORMAT(I3,1X,'|',2X,F6.2,2X,'|',2(1X,F6.2,'|'),
&2X,F5.0,1X,'|',2X,F5.0,1X)
60 FORMAT(I3,1X,'|',2X,F6.2,2X,'|',2(1X,F6.2,'|'))
INFINITY=SIMTIM+1.
DO 70 N=1,NM
DO 80 J=1,2
ICRN(N,J)=INISEED(POSITION)
POSITION=POSITION+1
IF (POSITION.GT.500) POSITION=1
80 CONTINUE
70 CONTINUE
CALL INITIAL(TIME)
100 CALL NEXTEVT(TIME,N,MAORBUF)
IF (TIME.GE.SIMTIM) GOTO 110
IF (MAORBUF.EQ.1) THEN
IF (NEM(N).EQ.0) THEN
CALL FAILS(N,TIME)
ELSEIF (NEM(N).EQ.1) THEN
CALL REPAIRED(N,TIME)
END IF
ELSEIF (NEB(N).EQ.0) THEN
CALL EMPTIES(N,TIME)
ELSEIF (NEB(N).EQ.2) THEN
CALL FILLS(N,TIME)
ENDIF
GOTO 100

Figure 4.A1. (continued). FORTRAN code of the main program FLOWLIN.



110 WRITE(6,120)
120 FORMAT(//7X,'S I M U L A T I O N   R E S U L T S'/
&'----|--M A C H I N E S --|-- B U F F E R S ---'/
&'    |      TOTAL|  PERCENT|   FINAL|    AVERAGE'/
&'   N| PRODUCTION| DOWNTIME|   LEVEL|      LEVEL'/
&'----|-----------|---------|--------|-----------')
DO 130 N=1,NM
CALL MACELAP(N,TIME)
C rectify the total downtime, DOWN, if the machine happens to be down
IF (STATEM(N).EQ.0) DOWN(N)=DOWN(N)-SNGL(TEM(N)-TIME)
DOWN(N)=(DOWN(N)/TIME)*100
IF (N.LT.NM) THEN
CALL BUFELAP(N,TIME)
MBL(N)=MBL(N)/TIME
WRITE(6,140)N,PROD(N),DOWN(N),BL(N),MBL(N)
ELSE
WRITE(6,150)NM,PROD(NM),DOWN(NM)
END IF
130 CONTINUE
140 FORMAT(I4,'|',1X,F10.1,'|',2X,F7.2,'|',1X,F7.1,'|',F10.4)
150 FORMAT(I4,'|',1X,F10.1,'|',2X,F7.2,'|')
WRITE(6,'(/7X,33(1H-)/11X,12H THROUGHPUT=,F12.6/7X,33(1H-))')
&SNGL(PROD(NM)/TIME)
STOP
END

Figure 4.A1. (continued). FORTRAN code of the main program FLOWLIN.

Subroutine INITIAL(TIME), shown in Fig. 4.A2, encodes Steps (a1)-(a4) of Algo-
rithm 4.1. Specifically, loops 160 and 170 set the rates of starved or blocked machines to
the maximum permissible values, while loop 180 initializes the state variables associated
with each component (machine or buffer) of the system and computes the time of occur-
rence of each event in the system.

SUBROUTINE INITIAL(TIME)
REAL*8 INFINITY,SIMTIM,TEM,TEB,TPEM,TPEB,TIME,MBL
INTEGER*2 STATEM,STATEB,NEM,NEB
INTEGER*4 Z,ICRN
COMMON/BLOCKO/SIMTIM,INFINITY
COMMON/BLOCK1/FR(100),RR(100),DOWN(100),ICRN(100,2)
COMMON/BLOCK2/NM,STATEM(100),STATEB(100),NEM(100),NEB(100),
&TEM(100),TEB(100),TPEM(100),TPEB(100),
&R(100),RATE(100),PROD(100),PTF(100),BC(100),BL(100),MBL(100)
TIME=0.D0
C trace the line downstream and upstream to compute the maximum
C permissible production rates
R(1)=RATE(1)
DO 160 N=1,NM-1
IF (BL(N).EQ.0.) THEN
R(N+1)=AMIN1(RATE(N+1),R(N))
ELSE

Figure 4.A2. Subroutine INITIAL.



R(N+1)=RATE(N+1)
END IF
160 CONTINUE
DO 170 N=NM-1,1,-1
IF (BL(N) .EQ.BC(N)) R(N)=AMIN1(RATE(N),R(N+1))
170 CONTINUE
DO 180 N=1,NM
C initialize each machine N, compute the number of parts-to-failure,
C and schedule the next event
PROD(N)=0.
STATEM(N)=1
Z=ICRN(N,1)
PTF(N)=-ALOG(RAND(Z))*RATE(N)/FR(N)
ICRN(N,1)=Z
TPEM(N)=0.D0
TEM(N)=PTF(N)/R(N)
NEM(N)=0
IF (N.LT.NM) THEN
C initialize each buffer N and schedule the next event
MBL(N)=0.D0
TPEB(N)=0.D0
IF ((BL(N).EQ.0.).AND.(R(N).EQ.R(N+1))) THEN
STATEB(N)=0
TEB(N)=INFINITY
ELSEIF ((BL(N).EQ.BC(N)).AND.(R(N).EQ.R(N+1))) THEN
STATEB(N)=2
TEB(N)=INFINITY
ELSE
STATEB(N)=1
CALL BUFEVEN(N,TIME)
END IF
END IF
180 CONTINUE
RETURN
END

Figure 4.A2. (continued). Subroutine INITIAL.

Subroutine NEXTEVT, shown in Fig. 4.A3, finds the minimum event-time and iden-
tifies the component at which the next event will occur.

SUBROUTINE NEXTEVT(TIME,N,MAORBUF)
REAL*8 INFINITY,SIMTIM,TEM,TEB,TPEM,TPEB,TIME,MBL
INTEGER*2 STATEM,STATEB,NEM,NEB
COMMON/BLOCKO/SIMTIM,INFINITY
COMMON/BLOCK2/NM,STATEM(100),STATEB(100),NEM(100),NEB(100),
&TEM(100),TEB(100),TPEM(100),TPEB(100),
&R(100),RATE(100),PROD(100),PTF(100),BC(100),BL(100),MBL(100)
TIME=SIMTIM
C trace the line downstream, to identify the component with the
C smallest event-time

Figure 4.A3. Subroutine NEXTEVT.



DO 190 J=1,NM
IF (TEM(J) .LT.TIME) THEN
N=J
MAORBUF=1
TIME=TEM (J)
END IF
IF (J.EQ.NM) GOTO 190
IF (TEB(J) .LT.TIME) THEN
N=J
MAORBUF=2
TIME=TEB(J)
END IF
190 CONTINUE
RETURN
END

Figure 4.A3. (continued). Subroutine NEXTEVT.

Subroutines FAILS, REPAIRED, FILLS, and EMPTIES are given in Fig. 4.A4.
Each of these subroutines updates and adjusts the state variables of the components that
are affected by the next event of the system and schedules the times of next events for
these components. Specifically, when a machine breaks down or becomes starved or
blocked, its rate is reduced and so are the rates of adjacent machines that happen to be
starved or blocked by the first machine. When the machine is repaired, the rates of these
machines are restored to the maximum permissible values.
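The cascading rate reduction performed by these routines can be pictured with a small sketch (illustrative Python with hypothetical names; the FORTRAN subroutines operate on the COMMON arrays instead):

```python
def propagate_starvation(rates, empty, n):
    # When buffer n is empty, machine n+1 cannot outrun the machine
    # feeding it: the reduction cascades down the chain of empty
    # buffers, mirroring R(J+1)=AMIN1(R(J),RATE(J+1)) in the listing.
    j = n
    while j < len(rates) - 1 and empty[j]:
        rates[j + 1] = min(rates[j + 1], rates[j])
        j += 1
    return rates

# Machine 1 fails (rate 0) while buffers 1 and 2 are empty: machines
# 2 and 3 are starved and their rates drop to 0; machine 4 still runs.
print(propagate_starvation([0.0, 10.0, 8.0, 9.0],
                           [True, True, False], 0))
# [0.0, 0.0, 0.0, 9.0]
```

The symmetric loop for blockage walks upstream over the chain of full buffers and bounds each rate by that of the machine downstream.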

SUBROUTINE FAILS(N,TIME)
REAL*8 TEM,TEB,TPEM,TPEB,TIME,MBL
INTEGER*2 STATEM,STATEB,NEM,NEB
INTEGER*4 Z,ICRN
COMMON/BLOCK1/FR(100),RR(100),DOWN(100),ICRN(100,2)
COMMON/BLOCK2/NM,STATEM(100),STATEB(100),NEM(100),NEB(100),
&TEM(100),TEB(100),TPEM(100),TPEB(100),
&R(100),RATE(100),PROD(100),PTF(100),BC(100),BL(100),MBL(100)
C update state variables of:
C machine N, blocked segment upstream from N, and
C starved segment downstream from N
PROD(N)=PROD(N)+PTF(N)
IF (N-1.GT.0) CALL FELAP(N-1,TIME)
IF (N.LT.NM) CALL EELAP(N,TIME)
C adjust state variables and schedule the repair of machine N
STATEM(N)=O
R(N)=0.
TPEM(N)=TIME
Z=ICRN(N,2)
DOWNTIME=-ALOG(RAND(Z))/RR(N)
ICRN(N,2)=Z
TEM(N)=TIME+DOWNTIME
NEM(N)=1
C whenever a machine fails, DOWN is increased by the repair time
DOWN(N)=DOWN(N)+DOWNTIME

Figure 4.A4. Subroutines FAILS, REPAIRED, FILLS, and EMPTIES.



C compute a new number of parts-to-failure


Z=ICRN(N,l)
PTF(N)=-ALOG(RAND(Z))*RATE(N)/FR(N)
ICRN(N,l)=Z
C adjust state variables and schedule next events at:
C blocked segment upstream from machine N and
C starved segment downstream from N
CALL FEVEN(N-1,TIME)
CALL EEVEN(N,TIME)
RETURN
END

SUBROUTINE REPAIRED(N,TIME)
REAL*8 TEM,TEB,TPEM,TPEB,TIME,MBL
INTEGER*2 STATEM,STATEB,NEM,NEB
COMMON/BLOCK2/NM,STATEM(100),STATEB(100),NEM(100),NEB(100),
&TEM(100),TEB(100),TPEM(100),TPEB(100),
&R(100),RATE(100),PROD(100),PTF(100),BC(100),BL(100),MBL(100)
C update state variables of:
C blocked segment upstream from machine N and
C starved segment downstream from N
IF (N-1.GT.0) CALL FELAP(N-1,TIME)
IF (N.LT.NM) CALL EELAP(N,TIME)
C adjust state variables and schedule the next failure of machine N
TEM(N)=TIME+PTF(N)/RATE(N)
NEM(N)=O
TPEM(N)=TIME
R(N)=RATE(N)
STATEM(N)=1
C adjust state variables and schedule next events at:
C blocked segment upstream from machine N and
C starved segment downstream from N
CALL FEVEN(N-1,TIME)
CALL EEVEN(N,TIME)
RETURN
END

SUBROUTINE FILLS(N,TIME)
REAL*8 INFINITY,SIMTIM,TEM,TEB,TPEM,TPEB,TIME,MBL
INTEGER*2 STATEM,STATEB,NEM,NEB
COMMON/BLOCK2/NM,STATEM(100),STATEB(100),NEM(100),NEB(100),
&TEM(100),TEB(100),TPEM(100),TPEB(100),
&R(100),RATE(100),PROD(100),PTF(100),BC(100),BL(100),MBL(100)
C update state variables of machine N
C and blocked segment upstream from N
CALL MACELAP(N,TIME)
CALL BUFELAP(N,TIME)
IF (N-1.GT.0) CALL FELAP(N-1,TIME)
C adjust state variables and schedule next events at
C machine N and blocked segment upstream from N
STATEB(N)=2
BL(N)=BC(N)
R(N)=AMIN1(R(N),R(N+1))
CALL FEVEN(N,TIME)
RETURN
END

Figure 4.A4. (continued). Subroutines FAILS, REPAIRED, FILLS, and EMPTIES.



SUBROUTINE EMPTIES(N,TIME)
REAL*8 INFINITY,SIMTIM,TEM,TEB,TPEM,TPEB,TIME,MBL
INTEGER*2 STATEM,STATEB,NEM,NEB
COMMON/BLOCK2/NM,STATEM(100),STATEB(100),NEM(100),NEB(100),
&TEM(100),TEB(100),TPEM(100),TPEB(100),
&R(100),RATE(100),PROD(100),PTF(100),BC(100),BL(100),MBL(100)
C update state variables of buffer N, machine N+l
C and starved segment downstream from N+l
CALL BUFELAP(N,TIME)
CALL MACELAP(N+1,TIME)
IF (N+1.LT.NM) CALL EELAP(N+1,TIME)
C adjust state variables and schedule next events at
C machine N and starved segment downstream from N
STATEB(N)=0
BL(N)=0.
R(N+1)=AMIN1(R(N+1),R(N))
CALL EEVEN(N,TIME)
RETURN
END

Figure 4.A4. (continued). Subroutines FAILS, REPAIRED, FILLS, and EMPTIES.

Figure 4.A5 shows subroutines EELAP, FELAP, MACELAP, and BUFELAP. Sub-
routine MACELAP(N,TIME) is invoked to update the cumulative production and the
remaining number of parts-to-failure of machine N. Subroutine BUFELAP(N,TIME)
updates the level of buffer N and its time average. Subroutine EELAP(N,TIME) locates
and updates the chain of empty buffers and starved machines (if any) downstream from
buffer N. Subroutine FELAP(N,TIME) locates and updates the chain of full buffers and
blocked machines (if any) upstream of buffer N.

SUBROUTINE EELAP(N,TIME)
REAL*8 TEM,TEB,TPEM,TPEB,TIME,MBL
INTEGER*2 STATEM,STATEB,NEM,NEB
COMMON/BLOCK2/NM,STATEM(100),STATEB(100),NEM(100),NEB(100),
&TEM(100),TEB(100),TPEM(100),TPEB(100),
&R(100),RATE(100),PROD(100),PTF(100),BC(100),BL(100),MBL(100)
C trace the line downstream until a non-empty buffer is found
J=N
200 CALL BUFELAP(J,TIME)
IF (STATEB(J).NE.0) RETURN
J=J+1
CALL MACELAP(J,TIME)
IF (J.LT.NM) GOTO 200
RETURN
END

SUBROUTINE FELAP(N,TIME)
REAL*8 TEM,TEB,TPEM,TPEB,TIME,MBL
INTEGER*2 STATEM,STATEB,NEM,NEB

Figure 4.A5. Subroutines EELAP, FELAP, MACELAP, and BUFELAP.



COMMON/BLOCK2/NM,STATEM(100),STATEB(100),NEM(100),NEB(100),
&TEM(100),TEB(100),TPEM(100),TPEB(100),
&R(100),RATE(100),PROD(100),PTF(100),BC(100),BL(100),MBL(100)
C trace the line upstream until a non-full buffer is found
J=N
210 CALL BUFELAP(J,TIME)
IF (STATEB(J).NE.2) RETURN
CALL MACELAP(J,TIME)
J=J-1
IF (J.GT.0) GOTO 210
RETURN
END

SUBROUTINE MACELAP(N,TIME)
REAL*8 TEM,TEB,TPEM,TPEB,TIME,MBL
INTEGER*2 STATEM,STATEB,NEM,NEB
COMMON/BLOCK2/NM,STATEM(100),STATEB(100),NEM(100),NEB(100),
&TEM(100),TEB(100),TPEM(100),TPEB(100),
&R(100),RATE(100),PROD(100),PTF(100),BC(100),BL(100),MBL(100)
IF (R(N).EQ.0.) GOTO 220
DURATION=SNGL(TIME-TPEM(N))
AN1=DURATION*R(N)
PTF(N)=PTF(N)-AN1
PROD(N)=PROD(N)+AN1
220 TPEM(N)=TIME
RETURN
END

SUBROUTINE BUFELAP(N,TIME)
REAL*8 TEM,TEB,TPEM,TPEB,TIME,MBL
INTEGER*2 STATEM,STATEB,NEM,NEB
COMMON/BLOCK2/NM,STATEM(100),STATEB(100),NEM(100),NEB(100),
&TEM(100),TEB(100),TPEM(100),TPEB(100),
&R(100),RATE(100),PROD(100),PTF(100),BC(100),BL(100),MBL(100)
DURATION=SNGL(TIME-TPEB(N))
IF (DURATION.EQ.0.) GOTO 230
BPREVIOUS=BL(N)
IF (STATEB(N).EQ.1) THEN
BL(N)=BPREVIOUS+DURATION*(R(N)-R(N+1))
END IF
MBL(N)=MBL(N)+(BPREVIOUS+BL(N))*DURATION*.5D0
230 TPEB(N)=TIME
RETURN
END

Figure 4.A5. (continued). Subroutines EELAP, FELAP, MACELAP, and BUFELAP.

Figure 4.A6 shows the event scheduling subroutines EEVEN, FEVEN, MACEVEN,
and BUFEVEN. Subroutines MACEVEN(N,TIME) and BUFEVEN(N,TIME) schedule
next events at machine N and buffer N, respectively. Subroutine EEVEN(N,TIME) finds
the chain of empty buffers and starved machines (if any) downstream from buffer N and
schedules next events at each one of these components. Subroutine FEVEN(N,TIME)
schedules next events in the chain of full buffers and blocked machines (if any) upstream
of buffer N.

SUBROUTINE EEVEN(N,TIME)
REAL*8 TEM,TEB,TPEM,TPEB,TIME,MBL
INTEGER*2 STATEM,STATEB,NEM,NEB
COMMON/BLOCK2/NM,STATEM(100),STATEB(100),NEM(100),NEB(100),
&TEM(100),TEB(100),TPEM(100),TPEB(100),
&R(100),RATE(100),PROD(100),PTF(100),BC(100),BL(100),MBL(100)
C trace the line downstream until a non-empty buffer is found
J=N
240 IF (J.GE.NM) RETURN
IF (STATEB(J).NE.0) GOTO 250
IF (STATEM(J+1).EQ.1) R(J+1)=AMIN1(R(J),RATE(J+1))
CALL BUFEVEN(J,TIME)
CALL MACEVEN(J+1,TIME)
J=J+1
GOTO 240
250 CALL BUFEVEN(J,TIME)
RETURN
END

SUBROUTINE FEVEN(N,TIME)
REAL*8 TEM,TEB,TPEM,TPEB,TIME,MBL
INTEGER*2 STATEM,STATEB,NEM,NEB
COMMON/BLOCK2/NM,STATEM(100),STATEB(100),NEM(100),NEB(100),
&TEM(100),TEB(100),TPEM(100),TPEB(100),
&R(100),RATE(100),PROD(100),PTF(100),BC(100),BL(100),MBL(100)
C trace the line upstream until a non-full buffer is found
J=N
260 IF (J.LE.0) RETURN
IF (STATEB(J).NE.2) GOTO 270
IF (STATEM(J).EQ.1) R(J)=AMIN1(R(J+1),RATE(J))
CALL BUFEVEN(J,TIME)
CALL MACEVEN(J,TIME)
J=J-1
GOTO 260
270 CALL BUFEVEN(J,TIME)
RETURN
END

SUBROUTINE MACEVEN(N,TIME)
REAL*8 INFINITY,SIMTIM,TEM,TEB,TPEM,TPEB,TIME,MBL
INTEGER*2 STATEM,STATEB,NEM,NEB
COMMON/BLOCKO/SIMTIM,INFINITY
COMMON/BLOCK2/NM,STATEM(100),STATEB(100),NEM(100),NEB(100),
&TEM(100),TEB(100),TPEM(100),TPEB(100),
&R(100),RATE(100),PROD(100),PTF(100),BC(100),BL(100),MBL(100)
IF (R(N).GT.0.) THEN
TEM(N)=TIME+PTF(N)/R(N)
ELSEIF (STATEM(N).EQ.1) THEN
TEM(N)=INFINITY
ENDIF
RETURN
END

SUBROUTINE BUFEVEN(N,TIME)
REAL*8 INFINITY,SIMTIM,TEM,TEB,TPEM,TPEB,TIME,MBL
INTEGER*2 STATEM,STATEB,NEM,NEB

Figure 4.A6. Subroutines EEVEN, FEVEN, MACEVEN, and BUFEVEN.



COMMON/BLOCKO/SIMTIM,INFINITY
COMMON/BLOCK2/NM,STATEM(100),STATEB(100),NEM(100),NEB(100),
&TEM(100),TEB(100),TPEM(100),TPEB(100),
&R(100),RATE(100),PROD(100),PTF(100),BC(100),BL(100),MBL(100)
BREST=BC(N)-BL(N)
IF (R(N+1).GT.R(N)) THEN
IF (BL(N).GT.0.) STATEB(N)=1
TEB(N)=TIME+BL(N)/(R(N+1)-R(N))
NEB(N)=0
ELSEIF (R(N).GT.R(N+1)) THEN
IF (BL(N).LT.BC(N)) STATEB(N)=1
TEB(N)=TIME+BREST/(R(N)-R(N+1))
NEB(N)=2
ELSE
TEB(N)=INFINITY
END IF
RETURN
END

Figure 4.A6. (continued). Subroutines EEVEN, FEVEN, MACEVEN, and BUFEVEN.

The random number generator is encoded by function RAND shown in Fig. 4.A7.
This code, developed by Marse and Roberts (1983), computes the remainder of divisions
involving integers that are longer than 32 bits, using 32-bit (including the sign bit) words.

FUNCTION RAND(Z)
C********************************************************************C
C this function updates the seed Z using recursion                   C
C     Z(n) = [630360016 Z(n-1)] [mod (2**31-1)]                      C
C and returns the uniform (0,1) random number RAND(Z) = Z/(2**31-1)  C
C********************************************************************C
INTEGER*4 Z,A1,A2,P,B15,B16,XHI,XALO,LEFTLO,FHI,K
DATA B15/32768/,B16/65536/,P/2147483647/
DATA A1/24112/,A2/26143/
XHI=Z/B16
XALO=(Z-XHI*B16)*A1
LEFTLO=XALO/B16
FHI=XHI*A1+LEFTLO
K=FHI/B15
Z=(((XALO-LEFTLO*B16)-P)+(FHI-K*B15)*B16)+K
IF(Z.LT.0)Z=Z+P
XHI=Z/B16
XALO=(Z-XHI*B16)*A2
LEFTLO=XALO/B16
FHI=XHI*A2+LEFTLO
K=FHI/B15
Z=(((XALO-LEFTLO*B16)-P)+(FHI-K*B15)*B16)+K
IF(Z.LT.0)Z=Z+P
RAND=(2*(Z/256)+1)/16777216.
RETURN
END

Figure 4.A7. Function RAND.
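A note on the arithmetic: A1*A2 = 24112 * 26143 = 630360016, so applying the two small multipliers in sequence is equivalent to one step of the full recursion, while each partial product stays small enough for the 32-bit split arithmetic above. A Python check of this identity (illustrative only):

```python
M = 2**31 - 1

def two_stage(z, a1=24112, a2=26143):
    # Apply the two small multipliers in sequence, as RAND does.
    z = (a1 * z) % M
    return (a2 * z) % M

# The composition equals one step with the full multiplier, because
# A1*A2 = 630360016 and modular multiplication composes.
print(24112 * 26143)                            # 630360016
z0 = 1234567890
print(two_stage(z0) == (630360016 * z0) % M)    # True
```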



The function INISEED has been described earlier in this section and is shown in
Fig. 4.A8. This function returns the seed that is stored in position POSITION of list IN,
which contains 500 seeds. For programming convenience this list is broken down into 5
lists named IN1, IN2, IN3, IN4, and IN5.

FUNCTION INISEED(POSITION)
C********************************************************************C
C this function selects the seed for each random number generator   C
C from a collection of 500 different values                          C
C********************************************************************C
INTEGER*4 IN1(100),IN2(100),IN3(100),IN4(100),IN5(100)
INTEGER*2 POSITION
C FORTRAN 77 permits a maximum of twenty continuation lines
C to be used for each statement
DATA IN1/1234567890,1752205679,1258429365,1663790076, 27905570,
& 1455818825,1297964256, 694324539,2039267695, 525692763,
& 1800276977,2102317462,1237476626, 791770709, 798774600,
& 1538550641, 214316813, 502876500, 577663741,2119388284,
& 883109084, 771742969, 55190594, 746588200,1762468134,
& 40018958, 851474358,1948210216, 766299773, 230673240,
& 339741794, 82732929,1082503233,1526515231, 355253912,
& 1746470560,1573711842,1370423786, 114309065, 341524981,
& 1100280813,1136458425, 151189606,1282275591, 161947956,
& 1081794842, 47206157,1632554495,1710995669,1309487190,
& 582900062, 118132451,1541321172, 889387009,1184536711,
& 1627443680, 856585451,2008488542, 868788208,1849541778,
& 1592770014,1440662249, 219919258, 654000386,1064479093,
& 1260121314, 421777124, 81098033, 22548643,1168028438,
& 957201740, 81687946,1801171158,1291328368,1513298968,
& 1124074772,1906874802,1017874552, 635812814, 910698321,
& 2125824248, 907611588,1160197548, 273959974,2102275133,
& 365468273, 473924061,1690100028, 185336444, 660653309,
& 264947697, 915018048,1323715104,1320577038,1936693103,
& 749606720,1329350997, 521921131,2018383983,1338122674/
DATA IN2/ 527544017,1574625288, 512088289, 908745540, 561858491,
& 928263664,2048584554,1687062195,1915281912, 216978796,
& 688723922,1548347285, 844193176, 569098473,1037540107,
& 844363264,1141166719, 332376381,1950548848, 998969877,
& 297909965,2115014035,1453812528,1232821469,1461269256,
& 693930950, 786516026,1890076359,1026209608, 710252396,
& 1750042719, 122834232, 536637116,1281114769, 130189036,
& 1026193166,1360628730,1097078524,1717344091,1648568164,
& 1043446791, 234512696,1982712005, 8489010,1869309712,
& 739986511, 336087771, 73777272,1348454067, 89105159,
& 153838255, 671439448,1468254119, 680445492, 623446734,
& 1566118489,1070744987, 243999849, 501273389,2128824519,
& 1991760789,1148161028,1391879247,1458885583,1465504721,
& 724481076,1875423417, 302885246, 574717534, 303882964,
& 770897679, 574804247,1167803979,1591775013, 311817332,
& 775904750, 441726923,1398688911,1846882047, 767453939,
& 1725234416, 301327474,1954368722,1277396979,1100826546,
& 1907837378,1162977612, 831074323,1848718805,2075706281,
& 797502263,2015501153, 105130264, 297446964,1683506621,
& 67110101,1521200160,1440914985,1170968770, 11524465/

Figure 4.A8. Subroutine INISEED.


134 HYBRID SIMULATION MODELS OF PRODUCTION NETWORKS

DATA IN3/1911239311, 584808652,2034936078,1597242654, 540533314,
& 1201526332, 808384740,1569932381, 299551135, 918541811,
& 1237574258,1966167205,2053933254, 308742343,1193861187,
& 1006403363,1325676526, 216322346,1128253818, 620720553,
& 1768010654,2091851433,1675288126,1276159520,2058553806,
& 1715714994,1033175261, 386157122, 832964590, 766622468,
& 80234398, 263425196, 61505463,1547465214,2101400294,
& 198475528,2105150184,1155420998, 887098746, 841565779,
& 2022591920, 687020410,2063898114, 848358263, 796912891,
& 112819780, 282427961, 389082980, 848512129, 587532595,
& 2108080337,2111258322, 677046335,1954354856,1444236234,
& 628790518,1454037590,1007758828,1617789882,1262229711,
& 1319490547, 813100178, 470152514, 769324610,1790510846,
& 687605560, 52294113,1599990007,1116284686,2064276007,
& 512323811,1131169258,1242968693, 845530092,1597805629,
& 1311673603, 28032924,2034706074,1617863312,1075634436,
& 1750128367,1866089937,1230193034,2017521985, 842390424,
& 965708101, 187135606,1373082725, 81240482,1728649634,
& 721613057, 572478235, 990508296, 613567296, 62801097,
& 1293385050,1051646951, 633901089, 70046354, 455015365/
DATA IN4/1869382643,1731637054, 247326851,1601077005, 853384805,
& 1696580164,1425329498,1327168601, 663726299,1416841682,
& 589250635,1373121172,1073368353,1715484248,1041351233,
& 1207605162, 886155930,1722477639, 227392208, 896558409,
& 1984268874,1733621946, 231310509,2055932221, 54143302,
& 367259097,1335928373, 255511211, 971406474, 772107845,
& 334362089, 20692553, 577449200, 130312177, 677742650,
& 2102396207,1209483930,1706588860,1189391229, 811206275,
& 1711224429,1459018461,2131251105,1922550776, 121843262,
& 777781647,1647542114,1767083727, 611332652, 908536342,
& 150004152, 345382270,1317409827,2087148111, 917617129,
& 373544762,2133224043,1694467438, 472375989,1825864965,
& 836425346,1203963752, 41048985, 756010392, 129230939,
& 1326666461, 73586300,1091785304,2011812947,1925035423,
& 1576316277, 637393780,1361355628, 785606838,1430618490,
& 1569566906,1654229161,1748254875,1145721829, 149756901,
& 1570137783, 740353518,2051784483, 262012226, 952412589,
& 1218607524,1096798525, 702653476,1409090522,1081151363,
& 1068950287,1493413461,1266809492,2100543225,1319328440,
& 1868286111,1064781832, 890751464,1534814825, 118561341/
DATA IN5/1927466510,1380003402, 143124129, 733805325,1794682267,
& 1392108048,1614892864,2135566882, 863176901, 455784973,
& 28349271,1772011391, 558817105,1730364033,1349525661,
& 1373822330, 781682693,2071814122,2038833160,2128682160,
& 90364830, 818933503,1419930673, 759223441, 776504170,
& 631712655, 619959484,1448489027, 567930506,1239036879,
& 327773601,1779360107, 610541145, 495276565,1961986206,
& 1565034097, 69181584,1249904106,1716034656,1451222629,
& 1538196151, 161432669, 195907051, 975417322,1255141963,
& 533546420,1470366517,1581590921, 708610485,1834297560,
& 1284907189,2095646403,1686527478, 3125534,1332759940,
& 3438931, 551730790,1223142003, 12476650, 886467564,
& 662980059, 558811560,1116411418,1654497397, 800207126,
& 541505688,1846684832, 886513088,1044467989,1258622456,
& 1521870891,1842893390,1380148849,1118077659,1746766166,
& 1125227202,1346392140, 717477099,1226818850, 745745762,


& 1056474563, 124677153,1404147779,1737442839, 827823102,


& 567020255,1606851953,2062785021, 708327851,1765223737,
& 609428655, 764756785,1169463417,1604675325, 234883484,
& 838698305, 491464386, 18224869,1016642110, 919408368/
IF (POSITION.LE.100) THEN
INISEED=IN1(POSITION)
ELSEIF (POSITION.LE.200) THEN
INISEED=IN2(POSITION-100)
ELSEIF (POSITION.LE.300) THEN
INISEED=IN3(POSITION-200)
ELSEIF (POSITION.LE.400) THEN
INISEED=IN4(POSITION-300)
ELSE
INISEED=IN5(POSITION-400)
END IF
RETURN
END

Figure 4.A8. (continued). Subroutine INISEED.


5
PRODUCTION NETWORKS

Production networks belong to a general class of systems known as queueing networks
where commodities move through finite-capacity buffers and compete for resources
in front of multiserver nodes or workstations. A particular class of such systems
is that of Markovian queueing networks, in which the time that a commodity binds a
resource has an exponential or geometric distribution.
As we have discussed in Appendix 1.A1.5, these distributions have the memoryless
property, that is, if a commodity binds a resource by time t, the distribution of the remain-
ing sojourn time does not depend on t. Because of this property, the system can be com-
pletely described by the number of commodities in each buffer at any time instant. The
equilibrium probabilities can be uniquely determined by solving the Chapman-
Kolmogorov equations (Appendices 1.A1.9-1.A1.11).
The memoryless assumption has been extensively employed in the analysis of pro-
duction, communication, computer, and urban service systems. Markov models can de-
scribe unreliable production systems in which the machines alternate between up and
down states with exponentially distributed intertransition times. However, in Chapter 1
we saw that these models are seldom useful because of the large number of states re-
quired to model even a modest production line.
The analysis of multiple-product networks with finite queues usually requires vast
computing resources, thus ruling out the possibility of an analytical solution. Apart from
a few cases, including the so-called Jackson networks and some extensions in which
queueing space is unlimited (Jackson, 1957; Gordon and Newell, 1967; Baskett et al.,
1975), closed form analytical results do not exist in the literature. For general queueing
systems, simulation is an obvious alternative.
In this chapter, we present a generalization of the hybrid simulation method to acy-
clic production networks with multitask machines, random processing times, assembly
operations, and probabilistic routing of parts through finite-capacity buffers. First, we
approximate random processing times by piecewise deterministic variables as in Section
4.4.3. Second, we approximate discrete traffic by a continuous flow. An immediate im-
plication of the second approximation is that when a buffer becomes full or empty, its
inflow or outflow rate is reduced instantly whereas the flow rate increases (also instantly)
when the buffer becomes not full or not empty. When a machine breaks down, the flow


rates of buffers connected to this machine are reduced, and when it resumes operation,
the flow rates are increased with no delay. Since processing times are actually not piece-
wise deterministic and transient phenomena occur instead of instantaneous rate changes,
the model is approximate.
Hybrid models for systems with deterministic and random processing times were de-
veloped by Kouikoglou and Phillis (1995) and Phillis and Kouikoglou (1996). These
models have been extended for systems in which machines can produce several types of
parts (Kouikoglou and Phillis, 1997).

5.1. ACYCLIC NETWORKS

5.1.1. System Description

The production network under examination consists of a number of workstations.


Each workstation is a parallel configuration of unreliable machines producing several
types of parts. Parts move from one workstation to another through buffers of finite ca-
pacities. Each buffer carries identical parts and connects at most two workstations. There
are buffers without upstream workstations. These buffers are assumed to be infinite
sources of raw parts. There are buffers without downstream workstations. These buffers
are assumed to be infinite sinks where finished products are stored. Delays in the supply
of raw parts and intermittent demand can be taken into account by placing fictitious ma-
chines at the entrance and the exit of the system, respectively. The statistical behavior of
these machines due to random breakdowns, repairs and, possibly, fluctuations of produc-
tion rates, may simulate the fluctuations of demand and supply.
Each machine may receive different parts from several buffers which are processed
individually, assembled, disassembled, or undergo a combination of these operations. The
result of these operations may be a composite part or a set of different items, which are
then stored in the downstream buffers.
We assume that the production network is acyclic, that is, a part does not return to a
workstation where it previously received service. In Section 6.2, this assumption is re-
laxed.
The system consists of a set of workstations connected by intermediate buffers of fi-
nite capacity. Workstation n consists of a set Mn of parallel machines that produce a fam-
ily Jn of composite part types. Each machine m, m &#8712; Mn, processes one composite part at a
time. Processing times are constant or random variables with known statistics and ma-
chines are unreliable.
Figure 5.1 depicts the flow of parts j through workstation n. Next, we introduce a
rich set of parameters that can be used to describe the flow of parts in the network.
When a machine of workstation n is ready to process a new part from the set Jn of
parts, a decision has to be made as to which type to select. We assume that a part j is se-
lected with probability pj, j &#8712; Jn. Equivalently, pj represents the proportion of parts j that
are produced by workstation n. In practice, part-selection decisions may depend on time-
varying factors, such as the state of the production system, the demand, and the objec-
tives of managers. All these factors can be taken into account during simulation by treat-
ing the part-selection decisions as discrete events whereupon part-mix proportions are ap-
propriately adjusted.

Figure 5.1. Flow of composite parts j through workstation n.

Now assume that a part j is going to be processed. We consider the following opera-
tions:
1. Merging. There may be several buffers feeding workstation n with the same part
type. Hence, for each incoming part type there is a group of supply buffers. In
the figure, buffers &#945; and &#945;&#8243; belong to the group Ug carrying parts g. The other
group Ug&#8242; consists of a single buffer &#945;&#8242;. Parts are dispatched according to given
probabilities called the merging parameters. When workstation n needs one unit
of part g, it removes one from buffer &#945; with probability m&#945;, or from buffer &#945;&#8243;
with the complementary probability m&#945;&#8243;.
2. Assembly. One part of type j, j &#8712; Jn, is produced from a family Gj of different
parts arriving at workstation n. In the figure, the parts g and g&#8242; are assembled
into a composite part j. The number of parts g required to assemble one part j is
&#948;g, a given positive integer, which will be referred to as the assembly parameter
of parts g.
3. Splitting (disassembly). Upon departure from workstation n, one part j splits into
sj subproducts, where sj is a given positive integer that will be referred to as the
splitting parameter of parts j.
4. Routing. Let Dj be the set of downstream buffers in which the subproducts j are
stored. Routing of the subproducts is performed according to specified prob-
abilities called the routing parameters. In the figure, a part j is sent to buffer &#946;
with probability r&#946; or to buffer &#946;&#8242; with the complementary probability r&#946;&#8242;.
Since, in general, each machine processes different parts, the mechanisms of ma-
chine failures may be more complex than the ones we examined in the previous chapters.
We discuss two probabilistic models of operation-dependent failures.

The first model assumes that a machine m, m &#8712; Mn, is subject to multiple types of
failures, depending on the part in production. For simplicity, we assume that the number
of parts j, j &#8712; Jn, machine m produces between successive type-j failures has a geometric
distribution on {0, 1, ...} with parameter hj,m. In addition, each failure is independent of
the other types of failures. The parameter hj,m is the probability that machine m fails dur-
ing the production of a single part j. Suppose machine m begins processing a part j at time
t and the number of remaining parts machine m will produce until its next type-j failure is
Fj,m. Let also x be the net processing time for this part and z the duration of the produc-
tion cycle (net processing time plus downtime). The piece-by-piece simulation model
handles failures of the type described above, hereafter referred to as independent failures,
as follows:
(a) Compute the net processing time x of machine m. Assume for the moment that
the machine will operate without failures during the next x time units. Set z = x.
(b) Reduce Fj,m by one.
(c) If the new value is positive, then the machine will survive the production of the
part; go to (e).
(d) If, however, the new value is zero, then the machine incurs a type-j failure.
Invoke an appropriate random variate generator to compute the downtime of
machine m, that is, the duration of the repair. This time is a random variable
whose statistics may depend on the type of failure that occurred.
Increase z by the time required to repair machine m.
Compute a new value for Fj,m using a geometric random variate generator
(see Example 2.5b). Go to step (c).
(e) Schedule a departure from machine m at time t + z.
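Steps (a)-(e) can be sketched as follows in Python. The data layout (a dict of parts-to-failure counters keyed by failure type, and a `repair_time` sampler) is ours, not the book's; the geometric variate uses the standard inverse-transform method of Example 2.5b.

```python
import math
import random

def production_cycle(t, x, parts_to_failure, h, repair_time):
    """One production cycle of a machine under independent failures.
    x: net processing time of the current part; parts_to_failure[j]: the
    counter F_{j,m}; h[j]: per-part failure probability (the geometric
    parameter); repair_time(j): samples a repair duration.
    Returns the scheduled departure time t + z."""
    z = x                                    # (a) assume no failure occurs
    for j in parts_to_failure:
        parts_to_failure[j] -= 1             # (b) one part consumed
        while parts_to_failure[j] <= 0:      # (d) type-j failure
            z += repair_time(j)              # add the repair downtime
            # new geometric count on {0, 1, ...} with parameter h[j]
            u = random.random()
            parts_to_failure[j] = int(math.log(u) / math.log(1.0 - h[j]))
    return t + z                             # (e) departure at t + z
```

A freshly drawn counter may itself be zero, in which case the machine fails again during the same part; the `while` loop reproduces the "go to step (c)" branch.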
In certain cases, it is more realistic to introduce a single type of failure whose time of
occurrence depends on a physical property of the machine, which we call wear. There are
many models of probabilistic wear (see e.g. Cox, 1962, Section 10.4). We shall consider
a simple case.
First, we assume that wear is some aggregate measure of accumulated deterioration,
which can be expressed as a linear combination of the cumulative processing times of all
part types since the last recovery of machine m from an earlier failure. Specifically, to
each part j we assign a linear multiplier fj,m which expresses the amount of wear added to
machine m during one time unit of processing that part. For simplicity, we assume that
the wear machine m can tolerate before it breaks down, that is, the remaining wear until
the next failure, is a random variable drawn from an exponential distribution with pa-
rameter fm.
We now show how the piece-by-piece model handles failures of the above type,
hereafter referred to as additive failures. Suppose that at time t, machine m begins proc-
essing one part j while the remaining wear until its next failure is Fm. Let x be the net
processing time for this part and z the duration of the production cycle. We have the fol-
lowing algorithm:
(a) Compute the net processing time x of machine m. Assume for the moment that
the machine will operate without failures during the next x time units. Set z = x.
(b) The amount of wear to be added to machine m during x time units is x fj,m.

(c) If Fm &#8805; x fj,m, then the machine will survive the production of the part; replace Fm
by Fm &#8722; x fj,m and go to (e).
(d) If, however, Fm &lt; x fj,m, then the machine incurs a failure when Fm crosses zero,
that is, at time t + y, where

y = Fm / fj,m &#8804; x

Increase z by the time required to repair machine m.
Reduce x by y. The resulting x is the remaining time-to-complete the part
right after machine m is repaired. Compute a new value for Fm by invoking
an exponential random variate generator with parameter fm (see Example
2.5a). Go to step (b).
(e) Schedule a departure from machine m at time t + z.
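A minimal sketch of the additive-failure cycle, under the same illustrative conventions as before (the argument names and the `repair_time` sampler are ours):

```python
import math
import random

def wear_cycle(x, F_m, wear_rate, f_m, repair_time):
    """One production cycle under additive (wear-dependent) failures.
    x: net processing time of the current part j; F_m: remaining wear the
    machine can tolerate; wear_rate: wear added per time unit of
    processing this part (f_{j,m} in the text); f_m: parameter of the
    exponential wear distribution; repair_time(): samples a repair.
    Returns (z, F_m): the cycle duration and the updated wear budget."""
    z = x                                     # (a) assume no failure occurs
    while F_m < x * wear_rate:                # (d) wear crosses zero early
        z += repair_time()                    # downtime of the repair
        x -= F_m / wear_rate                  # reduce x by y = F_m / f_{j,m}
        F_m = -math.log(random.random()) / f_m  # new exponential wear budget
    F_m -= x * wear_rate                      # (c) survive the rest of the part
    return z, F_m
```

The loop may iterate several times if the redrawn wear budget is again smaller than the wear remaining for the part, mirroring the "go to step (b)" branch.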
Observe that in both models the quantities Fj,m and Fm, which measure the wear pro-
gress, have memoryless distributions. This suggests that the two models have similar
properties, although the latter has an extra parameter fm and, therefore, is expected to
fit actual failure data better than the former. However, in the sequel we shall use the
model of independent failures, which describes failures of many types, as is often the
case when machines perform more than a single type of operation.

5.1.2. Continuous Flow Approximation

The model we propose approximates discrete traffic by a continuous flow. Continu-
ous flow can be viewed as the limit of discrete traffic, as the unit of product becomes
infinitesimally small. Hence, in the CF model the merging and routing parameters and the
part selection probabilities represent fractions of production rather than probabilities.
To prove this, we construct a sequence of discrete part systems as follows. The first
system of the sequence is the original system in which parts are not divided. Referring to
Fig. 5.1, when a part is processed in workstation n, it is sent to buffer &#946; with probability
r&#946;. In the kth system of the sequence, each part is divided to yield k identical items. As-
sume that k items are produced sequentially by the workstation and are sent to the down-
stream buffers according to the original routing probabilities. Let Xi, i = 1, 2, ..., k, be
random variables defined as

Xi = 1 if the ith item is sent to buffer &#946;, and Xi = 0 otherwise

Then, by assumption, we have that P(Xi = 1) = r&#946;, P(Xi = 0) = 1 &#8722; r&#946;, and E(Xi) = r&#946;. The
random variable

X(k) = (X1 + X2 + &#183;&#183;&#183; + Xk)/k

is the fraction of items that are sent to buffer &#946;. The strong law of large numbers (Theo-
rem 1.A1 in Appendix 1.A1.7) asserts that as k &#8594; &#8734;, X(k) converges to E(Xi) = r&#946; almost
everywhere. But then, the items become infinitesimally small and the traffic converges to
a continuous flow. Using the same arguments, one can show that part selection and merg-
ing probabilities wind up as fractions of the flow in the fluid limit.
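This fluid limit is easy to check numerically: if each of k items is routed to buffer &#946; with probability r&#946;, the realized fraction settles at r&#946; as k grows. The sketch below (our own names, not the book's) makes that concrete:

```python
import random

def routed_fraction(k, r_beta, rng):
    """Fraction of k Bernoulli-routed items that go to buffer beta."""
    return sum(rng.random() < r_beta for _ in range(k)) / k

rng = random.Random(1)
for k in (10, 1000, 100000):
    # the realized fraction approaches the routing probability 0.3
    print(k, routed_fraction(k, 0.3, rng))
```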
In Section 3.3.1 we have already seen why the CF approximation works well for sys-
tems with deterministic processing times. Random processing times are adjusted in a lot-
by-lot manner, using the piecewise deterministic approximation method of Section 4.4.3.
In either case we assume that at any time, the maximum processing rates of all part types
are known. Let RMj,m(&#964;) be the maximum processing rate of machine m for parts j at time
&#964;. The quantity 1/RMj,m(&#964;) is the processing time for a single item.
In the remainder of this section we introduce quantities that describe the flow of
parts in the network.
First, we examine the flows through the workstations. Consider workstation n, which
produces the part family Jn. Assume that at time &#964;, there is a sufficient supply of parts to
assemble any composite part of Jn and enough space to receive the production. In this
case we say that the workstation is in isolation. For every j &#8712; Jn and m &#8712; Mn we denote by
Cj,m(&#964;) the number of parts j that can be produced by machine m per unit of time. This
quantity will be referred to as the processing capacity of machine m and it is computed
using the following proposition.

Proposition 5.1. The processing capacity of an operational machine m for parts j is
given by

Cj,m(&#964;) = pj / &#931;k&#8712;Jn [pk / RMk,m(&#964;)]        (5.1)

Proof. Since the machine processes several part types, production is allocated among
them according to the part-mix parameters pj, j &#8712; Jn. Specifically, Cj,m(&#964;) is equal to the
fraction pj of the total volume produced in one time unit, thus

Cj,m(&#964;) = pj &#931;k&#8712;Jn Ck,m(&#964;)        (5.2)

Summing up all the processing times of the part family must yield one time unit. There-
fore

&#931;j&#8712;Jn Cj,m(&#964;) &#215; (time to process one unit of part j) = &#931;j&#8712;Jn Cj,m(&#964;) / RMj,m(&#964;) = 1
which, in view of Eq. (5.2), becomes

&#931;j&#8712;Jn [ (pj / RMj,m(&#964;)) &#931;k&#8712;Jn Ck,m(&#964;) ] = 1

Solving for the summation term inside the brackets yields

&#931;k&#8712;Jn Ck,m(&#964;) = 1 / &#931;k&#8712;Jn [pk / RMk,m(&#964;)]

and the result follows by substituting the solution in Eq. (5.2).

The sum of the processing capacities of all operational machines is the total capacity
of workstation n for parts j, which will be denoted as TCj,n(&#964;). Hence,

TCj,n(&#964;) = &#931; Cj,m(&#964;)  (sum over all operational machines m, m &#8712; Mn)

In the model, this quantity is adjusted right after the occurrence of a failure, repair, and, if
the processing times are random, a change of a maximum processing rate of a machine.
If an operational machine m is in isolation, that is, neither starved nor blocked, then
the production rate Rj,m(&#964;) is equal to the processing capacity Cj,m(&#964;) of m. When the
flow of a part type is slowed down due to blockage or starvation, the saved operational
time is allotted to the other part types. As a result, the production rates of these parts in-
crease. An iterative algorithm for the allocation of the production rates will be presented
in Section 5.2.2. The algorithm computes the production rates Rj,m(&#964;) of all machines of
the workstation, and their sum TRj,n(&#964;), called the total flow rate through workstation n
at time &#964;, for every part type j &#8712; Jn.
Next, we use the operational parameters introduced in the previous section to find
expressions relating the total flow rate of a composite part through a workstation to the
inflow and outflow rates of its adjacent buffers. As before, we assume that all the buffers
of Fig. 5.1 are partially full, that is, they can meet the instantaneous demand for both
parts and storage space. Let O&#945;(&#964;) be the outflow rate from buffer &#945; and I&#946;(&#964;) the inflow
rate into buffer &#946; at time &#964;.
By definition of the assembly operations, the outflow rate of parts g from the group
Ug is &#948;g TRj,n(&#964;). By definition of the merging parameters, buffer &#945; provides a fraction m&#945;
of the outflow rate of Ug. Hence, the outflow rate of buffer &#945; is

O&#945;(&#964;) = m&#945; &#948;g TRj,n(&#964;)        (5.3)

Furthermore, splitting of parts j yields a total downstream flow rate of sj TRj,n(&#964;),
which is dispatched to buffers &#946; and &#946;&#8242; according to the routing protocol discussed in the
previous section. By the fluid approximation, the inflow rate into &#946; is
I&#946;(&#964;) = r&#946; sj TRj,n(&#964;)        (5.4)
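In code, the two relations of Eqs. (5.3)-(5.4) are one-liners. The scalar arguments below simply stand in for the parameters defined above (the function name and layout are ours):

```python
def buffer_flow_rates(TR_jn, m_alpha, delta_g, s_j, r_beta):
    """Outflow of supply buffer alpha, Eq. (5.3), and inflow of
    downstream buffer beta, Eq. (5.4), for a given total flow rate
    TR_jn of composite parts j through workstation n."""
    O_alpha = m_alpha * delta_g * TR_jn  # buffer alpha's share of the group demand
    I_beta = r_beta * s_j * TR_jn        # buffer beta's share of the split output
    return O_alpha, I_beta
```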

5.1.3. Continuous Flow Model

The CF model is event driven. Machines may be operational or under repair, and
buffers may be full, partially full or empty. An event takes place when either a buffer or a
machine changes state. The model observes the following events:
&#8226; a machine breaks down,
&#8226; a machine is repaired,
&#8226; a machine's maximum processing rate changes,
&#8226; a buffer becomes full,
&#8226; a buffer becomes not-full,
&#8226; a buffer becomes empty,
&#8226; a buffer becomes not-empty.
Part of the state of the system consists of the states of machines and buffers and the flow
rates of parts, which were defined in the previous section. The remaining state variables
are
BL&#945;(&#964;): number of parts waiting in buffer &#945;; 0 &#8804; BL&#945;(&#964;) &#8804; BC&#945;, where BC&#945; is the ca-
pacity of buffer &#945;
Pj,m(&#964;): cumulative flow (total production) of parts j through machine m
Fj,m(&#964;): number of remaining parts j until the next failure of m (independent fail-
ures)
Qj,m(&#964;): number of remaining parts j until the maximum processing rate of parts j in
m changes; q &#8805; Qj,m(&#964;) &#8805; 0, where q is the size of the lot used in the piecewise
deterministic approximation (see Section 4.4.3).
By viewing the state variables as functions of time, the state of the system can be
partitioned into the following vectors:
&#8226; a vector xd of discrete states, comprising all variables that change only when an
event takes place (current and maximum flow rates, processing capacities, states
of buffers and machines),
&#8226; a vector xc of continuous states, whose elements are continuous, piecewise linear
functions of &#964; (e.g., cumulative flows and buffer levels), and
&#8226; a vector xh of hybrid states, which are linear and decreasing in the intervals be-
tween successive events and incur jumps at the times of event occurrences (e.g.,
parts-to-failure and parts-to-rate-change).
Figure 5.2 shows the plots of three representative state variables used in the CF
model. The first one (discrete state) represents the production rate of machine m during
four alternating repair and operational intervals. We assume that no other events (e.g.,
blocking, starving, or rate changes) take place during the observation period. The second
(hybrid state) is the corresponding number of type j parts-to-failure. The third (con-
tinuous state) is the level of a buffer that feeds machine m with parts j. Observe that when
the machine is operational, its rate is constant whereas the parts-to-failure and (for this
particular example) the buffer level are decreasing in &#964;. When the machine breaks down,
the rate of m becomes zero and the model invokes a random variate generator to compute
a new number of parts-to-failure. Therefore the discrete and hybrid state variables have
discontinuities at the failure times. Finally, during the repair period, the buffer level in-
creases, whereas the parts-to-failure and the production rate are constant.

Figure 5.2. Plots of typical discrete, continuous, and hybrid state variables: the production
rate of machine m (discrete), the level of the buffer that feeds machine m (continuous), and
the number of parts-to-failure of machine m (hybrid).

An event takes place when a continuous or a hybrid state assumes a boundary value.
For example, when BL&#945;(&#964;) = BC&#945; buffer &#945; becomes full, or when Fj,m(&#964;) = 0 machine m
breaks down, etc. Upon occurrence of an event, xd and xh incur an instantaneous change,
which, in turn, affects the future evolution of xc.
Let &#964; be the time when an event takes place and TMm (resp. TB&#945;) denote the next-event
time scheduled to occur at machine m (resp. buffer &#945;) after time &#964;. The steps of the con-
tinuous flow simulator are as follows:

Algorithm 5.1. Hybrid model of a continuous flow production network


(a) At time &#964; determine the next event of the system. This event is the one with the
smallest time of occurrence, t = min {TMm, TB&#945;} over all machines m and buffers &#945;.

(b) Right before time t, update the continuous and hybrid states of machines and
buffers that are to be affected by the next event. The corresponding update equa-
tions are of the form

xc(t) = Fc(xc(&#964;), xd(&#964;), t &#8722; &#964;)        (5.5)

xh(t&#8722;) = Fh&#8722;(xh(&#964;), xd(&#964;), t &#8722; &#964;)        (5.6)

where Fc and Fh&#8722; are functions to be derived in the next section.


(c) Upon occurrence of the event, adjust the hybrid and discrete states of the af-
fected components. Let &#958; be the vector of random numbers that represent all
random disturbances in the system. The corresponding adjusting equations are


of the form

xh(t) = xh(t&#8722;) for a buffer-related event
xh(t) = Fh(xh(t&#8722;), &#958;) for a machine-related event        (5.7)

xd(t) = Fd(xd(&#964;), xh(t), &#958;)        (5.8)

(d) Schedule next events for the affected components. The event scheduling equa-
tions are of the form

TMm = Fm(xd(t), xh(t), &#958;)        (5.9)

TB&#945; = Fa(xc(t), xd(t))        (5.10)

Go to (a).

The functions Fc, Fh&#8722;, Fh, Fd, Fm and Fa will be derived in the next section. The
simulation terminates when a specified stopping time is reached.

5.2. STATE EQUATIONS

5.2.1. Update Equations

The following equations are immediate consequences of the conservation of flow in
every buffer and machine.
The level of parts in buffer &#945; is

BL&#945;(t) = BL&#945;(&#964;) + [I&#945;(&#964;) &#8722; O&#945;(&#964;)](t &#8722; &#964;)        (5.11)

the cumulative flow of part type j through machine m (cumulative production of m)

Pj,m(t) = Pj,m(&#964;) + Rj,m(&#964;)(t &#8722; &#964;)        (5.12)

the number of remaining parts j until the next failure of machine m subject to independent
failures

Fj,m(t) = Fj,m(&#964;) &#8722; Rj,m(&#964;)(t &#8722; &#964;)        (5.13)

and the number of remaining parts j until the processing time of m changes

Qj,m(t) = Qj,m(&#964;) &#8722; Rj,m(&#964;)(t &#8722; &#964;)        (5.14)
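Between events the updates (5.11)-(5.14) are linear in the elapsed time. A sketch for a single buffer-machine pair (the dict layout is ours, and the equation bodies follow the conservation-of-flow description in the text):

```python
def advance(state, dt):
    """Advance continuous and hybrid states over an event-free interval
    of length dt = t - tau, per Eqs. (5.11)-(5.14)."""
    s = dict(state)
    s["BL"] += (s["I"] - s["O"]) * dt   # (5.11) buffer level
    s["P"] += s["R"] * dt               # (5.12) cumulative production
    s["F"] -= s["R"] * dt               # (5.13) parts-to-failure
    s["Q"] -= s["R"] * dt               # (5.14) parts-to-rate-change
    return s
```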

Equations (5.11) and (5.12) update the continuous state variables xc of the system
and they have been written compactly as Eq. (5.5). The other equations correspond to Eq.
(5.6) and update the hybrid state variables xh. If at time &#964; the hybrid states are known,
then (5.13) and (5.14) yield the state variables at time t, right before the occurrence of
the next event. The effects of this event on the hybrid as well as the discrete states are
discussed in the next section.

5.2.2. Instantaneous Adjustment of State Variables

Equation (5.7) represents the adjustment of hybrid states upon the occurrence of an
event at time t. When machine m's maximum processing rate for type-k parts changes, we
reset the corresponding number of parts-to-rate-change to q. Then we apply the piecewise
deterministic approximation of Section 4.4.3 and compute a new maximum processing
rate RMk,m(t). When the machine breaks down, we compute the number of parts-to-failure
from a suitable random variate generator.
Next, we develop three algorithms corresponding to Eq. (5.8) for adjusting the flow
rates of system components (workstations, machines, buffers) that are affected by the
event at time t.
The first algorithm computes the instantaneous rates of workstation n and its ma-
chines at time t. Obviously, the flow rate of part type j through workstation n cannot ex-
ceed the maximum supply rate of subassemblies from the upstream buffers, the maxi-
mum flow rate that can be absorbed by the downstream buffers, or the processing ca-
pacity of the workstation itself.
Since one part j requires &#948;g parts g, g &#8712; Gj, from the group Ug of upstream buffers, the
maximum supply rate for parts j is given by

SRj,n(t) = &#8734;, if n is well supplied with every subassembly of j;
SRj,n(t) = min over g of (1/&#948;g) &#931;&#945;&#8712;Ug I&#945;(&#964;), if all buffers carrying type g parts, g &#8712; Gj, are empty.

Similarly, since one part j splits into sj subproducts that are sent to the downstream buff-
ers &#946; &#8712; Dj, the maximum departure rate from workstation n is

DRj,n(t) = &#8734;, if there is at least one non-full buffer in Dj;
DRj,n(t) = (1/sj) &#931;&#946;&#8712;Dj O&#946;(&#964;), if all downstream buffers &#946;, &#946; &#8712; Dj, are full.

Finally, the total processing capacity of workstation n is

TCj,n(t) = &#931; Cj,m(t)  (sum over all operational machines m, m &#8712; Mn)

We define the available work rate for part type j, AWj,n(t), as the total flow through
workstation n during one unit of time if the workstation had infinite capacity. This
quantity is computed from

AWj,n(t) = min {SRj,n(t), DRj,n(t)}

To allocate flow rates to each machine, it suffices to calculate the cumulative flow of
each part type during one unit of time, that is, in the time interval [t, t + 1]. We examine
two distinct cases.
First, if TCj,n(t) &#8804; AWj,n(t) for all j &#8712; Jn, then workstation n is neither starved nor
blocked. Hence the processing rates of the workstation are Rj,m(t) = Cj,m(t) for every
j &#8712; Jn and operational machine m &#8712; Mn.

Beyond time t, the levels of the upstream buffers and the empty space in the downstream
buffers are nondecreasing. Therefore, the workstation and its machines will produce at
their rated capacities.
Second, if there is at least one part type i, i &#8712; Jn, for which TCi,n(t) &gt; AWi,n(t), then
the workstation will produce AWi,n(t) parts within a fraction of one unit of time, say &#949;.
Hence [t, t + &#949;] is the subinterval of [t, t + 1] in which all parts are produced. During the
remaining interval (t + &#949;, t + 1], of length (1 &#8722; &#949;), the workstation will not be able to pro-
duce parts i. Therefore in the remaining (1 &#8722; &#949;) time units, the workstation will be proc-
essing the other part types j, j &#8800; i, at rates that will be larger than TCj,n(t) because the num-
ber of competing parts decreases. The rate allocation algorithm proceeds by finding an-
other part type, if any exists, for which all available work is completed at time t + &#949; + &#949;&#8242;
for some &#949;&#8242; &gt; 0 such that t + &#949; + &#949;&#8242; &lt; t + 1. Then it divides the remaining time interval
(t + &#949;, t + 1] in two subintervals, namely (t + &#949;, t + &#949; + &#949;&#8242;] and (t + &#949; + &#949;&#8242;, t + 1], and,
again, decreases the number of competing parts by one. The process is repeated until ei-
ther all the available work is completed before time t + 1 or there is some unfinished
work for some part types at time t + 1.
The following algorithm describes the rate allocation procedure in detail.

Algorithm 5.2: Allocation of production rates to parallel machines

(a) Initialize:
M = set of operational machines of workstation n
remaining operating time until time t + 1, T = 1
set of remaining parts-to-allocate, J = Jn
amount of work available, AWj = T &#215; AWj,n(t) = AWj,n(t), &#8704; j &#8712; Jn
estimates of cumulative production in the interval [t, t + 1], Rj,m = 0, &#8704; j &#8712; Jn,
m &#8712; M.

(b) Using Eq. (5.1), compute the maximum number of parts that can be produced in
T time units:

Cj,m = T pj / &#931;k&#8712;J [pk / RMk,m(t)] = maximum production of machine m, m &#8712; M

TCj,n = &#931;m&#8712;M Cj,m = maximum production of workstation n

for every part type j &#8712; J.


(c) Let T_j denote the time required to complete the remaining work for parts j; that
is,

    T_j = T AW_j / TC_j,n

If T_j ≥ T, then AW_j is enough to keep the workstation busy until time t + 1; otherwise
the available work for parts j will be completed before time t + 1. Find
the part i that has the earliest completion time, i.e.,

    i = argmin_{j∈J} T_j

The quantity T_i is the period during which the machines will produce all parts in
J at their maximum rates and according to the part selection probabilities.
(d) Set ε = min{T_i, T}. Clearly, if ε = T then the workstation will be busy until, at
least, time t + 1; if ε = T_i then all the available work for part i will be finished before
time t + 1.
(e) For every part type j ∈ J (including i) and operational machine m,
compute the production volume of m during ε time units

    x_j,m = (ε / T) C_j,m

replace R_j,m by R_j,m + x_j,m, and replace AW_j by AW_j − x_j,m.
Then replace T by T − ε and remove i from the set J.
(f) If J ≠ ∅ and T > 0, go to (b); otherwise, compute the flow rates of all parts j,
j ∈ J_n, through workstation n

    TR_j,n(t) = Σ_{m∈M} R_j,m

and stop.
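The loop in steps (b)-(f) can be condensed into a few lines of code. The following Python sketch is an illustration only; the dictionary-based inputs (selection probabilities p_j, rated rates RM_k,m, available work AW_j) and the function name are assumptions made here, not the book's implementation:

```python
# Illustrative sketch of Algorithm 5.2 (rate allocation to parallel machines).
def allocate_rates(AW, p, RM, T=1.0):
    """Allocate production over [t, t+T] at one workstation.

    AW[j]    -- available work (parts) for part type j
    p[j]     -- part selection probability of type j
    RM[m][j] -- rated production rate of machine m for part type j
    Returns TR[j], the cumulative production of each part type.
    """
    J = set(AW)                        # remaining part types to allocate
    M = list(RM)                       # operational machines
    AW = dict(AW)                      # local copy, decremented below
    R = {j: {m: 0.0 for m in M} for j in AW}
    while J and T > 1e-12:
        # Step (b): maximum production C_j,m in T time units, per Eq. (5.1)
        C = {j: {m: T * p[j] / sum(p[k] / RM[m][k] for k in J) for m in M}
             for j in J}
        TC = {j: sum(C[j].values()) for j in J}
        # Step (c): part type with the earliest completion time
        Tj = {j: T * AW[j] / TC[j] for j in J}
        i = min(J, key=Tj.get)
        # Step (d): length of the current subinterval
        eps = min(Tj[i], T)
        # Step (e): accumulate production volumes, then shrink T
        for j in J:
            for m in M:
                x = (eps / T) * C[j][m]
                R[j][m] += x
                AW[j] -= x
        T -= eps
        J.remove(i)
    # Step (f): flow rates through the workstation
    return {j: sum(R[j].values()) for j in R}
```

For a single machine with rate 2 and one part type, the sketch returns the available work when it is scarce (e.g. one part) and the rated capacity when work is plentiful, as the text requires.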

Observe that, in a continuous flow system, the production rates are allocated simultaneously
at time t rather than sequentially, that is, at times t + ε, t + ε + ε′, etc. One thus
might argue that Algorithm 5.2 does not reflect what actually happens when the flow is
continuous. However, it can easily be verified that the rate allocation returned by the algorithm
will not change if, instead of the interval [t, t + 1], we consider smaller intervals
of the form [t, t + 1/k], for any real number k > 1, and adjust the initial work to
AW_j,n = AW_j,n(t)/k for each j ∈ J_n. As k → ∞, the intervals become infinitely small and,
therefore, all rates are determined simultaneously at time t.
After determining the processing rates, we compute the flows of the upstream and
downstream buffers of workstation n. We do this by analyzing the assembly and splitting
operations.
First, we examine a group of buffers U_g carrying parts g that combine with parts
from other groups preceding workstation n to yield a composite part type j. Workstation n
requires δ_g parts from U_g to produce one unit of j. According to Eq. (5.3), at time t, the
workstation requests O_a(t) = m_a δ_g TR_j,n(t) parts per time unit from buffer a of group U_g,
where m_a is the merging parameter of a. Since the merging parameters express proportions
of items requested from the buffers of group U_g, we have that

    Σ_{b∈U_g} m_b = 1

For convenience, we write Eq. (5.3) as

    O_a(t) = [m_a / Σ_{b∈U_g} m_b] δ_g TR_j,n(t)

If, however, buffer a is empty during [τ, t) and its input rate I_a(τ) cannot satisfy the demand
at time t, then its outflow rate is given by O_a(t) = I_a(τ). The resulting shortage will
be covered by the non-empty buffers of U_g, whose output rates will be increased. The rate
allocation procedure is as follows.

Algorithm 5.3. Adjustment of the outflow rates of the upstream buffers


(a) Let U denote the subset of non-empty buffers of group U_g. In the beginning, set
U = U_g.
(b) Trace all the elements b of U by computing

    O_b(t) = [m_b / Σ_{b′∈U} m_b′] [δ_g TR_j,n(t) − Σ_{a′∈(U_g−U)} O_a′(t)]    (5.15)

where the second sum runs over all empty buffers a′ removed from U, until an
empty buffer a is encountered with O_a(t) ≥ I_a(τ).
(c) If such a buffer is not found, then stop; otherwise, remove a from U, set
O_a(t) = I_a(τ), and go to step (b).

The term in brackets in Eq. (5.15) is the number of parts j workstation n attempts to
load from its upstream, non-empty buffers during one time unit. The fraction
m_a / Σ_{b∈U} m_b is the proportion of the items requested from buffer a. The algorithm ends
when U empties or it is left with non-empty buffers and empty buffers with adequate supply, i.e.,
I_a(τ) > O_a(t). The latter were empty during [τ, t) but now their states become "partially
full".
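The recursion of Algorithm 5.3 can be sketched compactly. In the following Python illustration, the tuple layout per buffer (merge parameter, empty flag, input rate) and the function name are assumptions made here for demonstration:

```python
# Illustrative sketch of Algorithm 5.3 (upstream outflow adjustment).
def adjust_upstream_outflows(demand, buffers):
    """demand  -- delta_g * TR_j,n(t), parts requested per time unit
    buffers -- dict a -> (m_a, is_empty, I_a) for the buffers of group U_g
    Returns the outflow rate O_a(t) of every buffer in the group.
    """
    U = set(buffers)                 # buffers still served proportionally
    O = {}                           # rates fixed at I_a for removed buffers
    while True:
        shortfall = demand - sum(O.values())   # work left for buffers in U
        m_sum = sum(buffers[b][0] for b in U)
        rates = {b: buffers[b][0] / m_sum * shortfall for b in U}
        # step (b): look for an empty buffer whose share exceeds its input
        starved = next((a for a in U
                        if buffers[a][1] and rates[a] >= buffers[a][2]), None)
        if starved is None:
            return {**rates, **O}
        # step (c): freeze its outflow at O_a(t) = I_a(tau) and recurse
        U.remove(starved)
        O[starved] = buffers[starved][2]
        if not U:
            return O
```

With one empty buffer (merge parameter 0.5, input rate 0.2) and one non-empty buffer, a demand of 1.0 is split so that the empty buffer passes through only its input rate and the non-empty buffer covers the shortage, so the rates still sum to the demand.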
In a dual fashion, we allocate input rates to downstream buffers. One composite part
j splits into s_j parts that are sent to the downstream buffers β, β ∈ D_j. According to Eq. (5.4), at
time t, the workstation sends I_β(t) = r_β s_j TR_j,n(t) parts per time unit to buffer β, where r_β
is the routing parameter of β. Since the routing parameters express proportions of items
routed to the various downstream buffers, we have that

    Σ_{b∈D_j} r_b = 1

For convenience, we write Eq. (5.4) as

    I_β(t) = [r_β / Σ_{b∈D_j} r_b] s_j TR_j,n(t)

If, however, buffer β was full during [τ, t) and its output rate O_β(τ) is less than the attempted
input, then its inflow rate is given by I_β(t) = O_β(τ). The resulting overflow will
be rerouted to the non-full buffers of D_j according to the following:

Algorithm 5.4. Adjustment of the inflow rates of the downstream buffers


(a) Let D denote the subset of non-full buffers of set D_j. In the beginning, set
D = D_j.
(b) Carry out the recursion

    I_β(t) = [r_β / Σ_{b∈D} r_b] [s_j TR_j,n(t) − Σ_{β′∈(D_j−D)} I_β′(t)]    (5.16)

where the sum runs over all full buffers β′ removed from D, for all buffers β ∈ D,
until a full buffer β is encountered with I_β(t) ≥ O_β(τ).
(c) If such a buffer is not found, then stop; otherwise, remove β from D, set
I_β(t) = O_β(τ), and go to step (b).

The term in brackets in Eq. (5.16) represents the number of items workstation n
attempts to transfer to the downstream, non-full buffers during one time unit. The fraction
r_β / Σ_{b∈D} r_b is the proportion of items forwarded to buffer β of set D. The algorithm ends
when D empties or it is left with non-full buffers and full buffers for which I_β(t) < O_β(τ). Upon
termination we switch the states of the full buffers, if any, remaining in D to "partially full".
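Algorithm 5.4 mirrors Algorithm 5.3 with the roles of inflow and outflow exchanged. A Python sketch of the dual recursion, again with an assumed tuple layout (routing parameter, full flag, output rate):

```python
# Illustrative sketch of Algorithm 5.4 (downstream inflow adjustment).
def adjust_downstream_inflows(supply, buffers):
    """supply  -- s_j * TR_j,n(t), parts sent downstream per time unit
    buffers -- dict b -> (r_b, is_full, O_b) for the buffers of set D_j
    Returns the inflow rate I_b(t) of every downstream buffer.
    """
    D = set(buffers)                 # buffers still fed proportionally
    I = {}                           # rates fixed at O_b for removed buffers
    while True:
        overflow_free = supply - sum(I.values())   # flow left for D
        r_sum = sum(buffers[b][0] for b in D)
        rates = {b: buffers[b][0] / r_sum * overflow_free for b in D}
        # step (b): look for a full buffer whose share exceeds its output
        blocked = next((b for b in D
                        if buffers[b][1] and rates[b] >= buffers[b][2]), None)
        if blocked is None:
            return {**rates, **I}
        # step (c): freeze its inflow at I_b(t) = O_b(tau) and recurse
        D.remove(blocked)
        I[blocked] = buffers[blocked][2]
        if not D:
            return I
```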
Algorithms 5.2-5.4 correspond to Eq. (5.8) of the CF algorithm and determine the
new flow rates of buffers and workstations that are affected by the occurrence of an
event. We now derive the event scheduling equations to find the type of the next event in the
system and the time of its occurrence.

5.2.3. Scheduling of Next Events

Buffers. For a partially full buffer a we have the following possibilities:

(i) If O_a(t) > I_a(t), the buffer will become empty at time

    T_Ba = t + BL_a(t) / [O_a(t) − I_a(t)]

(ii) If I_a(t) > O_a(t), the buffer will become full at time

    T_Ba = t + [BC_a − BL_a(t)] / [I_a(t) − O_a(t)]

(iii) If I_a(t) = O_a(t), the buffer will not change its state and for computational purposes
we schedule a fictitious event at time T_Ba = ∞.
For an empty buffer a we have:
(i) If I_a(t) > O_a(t), a not-empty event is scheduled immediately, i.e. T_Ba = t.
(ii) If I_a(t) = O_a(t), we schedule a fictitious event at time T_Ba = ∞.
For a full buffer a:
(i) If O_a(t) > I_a(t), a not-full event is scheduled immediately, i.e. T_Ba = t.
(ii) If I_a(t) = O_a(t), we schedule a fictitious event at time T_Ba = ∞.
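The buffer scheduling cases above reduce to a short case analysis. The Python helper below is a sketch under assumed conventions (state strings and a time increment rather than an absolute time):

```python
import math

# Illustrative sketch of the buffer event-scheduling rules of Section 5.2.3.
def next_buffer_event(state, BL, BC, I, O):
    """Time increment until the next event at buffer a.
    state -- 'partial', 'empty', or 'full'; BL -- current level BL_a(t);
    BC -- capacity BC_a; I, O -- current inflow and outflow rates."""
    if state == 'partial':
        if O > I:
            return BL / (O - I)            # buffer will become empty
        if I > O:
            return (BC - BL) / (I - O)     # buffer will become full
        return math.inf                    # fictitious event
    if state == 'empty':
        return 0.0 if I > O else math.inf  # not-empty event now
    # full buffer
    return 0.0 if O > I else math.inf      # not-full event now
```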
Machines. If a machine m breaks down at time t, the time-of-next-event is given by

    T_Mm = t + (repair time drawn from a known distribution)

If the machine is operational, it requires 1/R_j,m(t) time units to output 1 unit of part type j.
Hence the time-of-next-change of its maximum processing rate is

    t + Q_j,m(t) / R_j,m(t)

Since the machine is subject to independent failures, the time of the next failure for part
type j is

    t + F_j,m(t) / R_j,m(t)

and, by considering the whole family of parts produced by machine m, the next event in
m occurs at

    T_Mm = t + min_{j∈J_n} min{F_j,m(t), Q_j,m(t)} / R_j,m(t)
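With F_j,m and Q_j,m interpreted as the parts-to-failure and parts-to-a-rate-change counters, the last formula is a one-liner; the function name and dictionary inputs below are illustrative assumptions:

```python
# Illustrative sketch of the machine event-scheduling rule.
def next_machine_event(t, F, Q, R):
    """Time of the next event at operational machine m.
    F[j] -- parts-to-failure counter, Q[j] -- parts-to-a-rate-change
    counter, R[j] -- current processing rate, for each part type j."""
    return t + min(min(F[j], Q[j]) / R[j] for j in F)
```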

5.3. NUMERICAL RESULTS

This section presents a number of experiments conducted to verify the accuracy of
the continuous flow (CF) model and its efficiency over a conventional piece-by-piece
(PP) model. The models are tested under the same experimental conditions by using
common streams of random numbers for the parts-to-failure and repair times. The relative
speed of the CF model is computed by the ratio

    RS = (run time of the PP simulator) / (run time of the CF simulator)

For a given network that produces J final parts, let PROD(j) denote the cumulative flow
of part type j during a finite observation interval. The relative estimation error of the CF
model for the throughput is computed by

    RE = (1/J) Σ_{j=1}^{J} [ |PROD_PP(j) − PROD_CF(j)| / PROD_PP(j) ] × 100%

and the maximum error by

    ME = max_j [ |PROD_PP(j) − PROD_CF(j)| / PROD_PP(j) ] × 100%
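These two error measures are straightforward to compute from the paired throughput series; the small helper below, with names chosen here for illustration, returns both at once:

```python
# Illustrative helper for the RE and ME throughput error measures.
def throughput_errors(prod_pp, prod_cf):
    """Relative (RE) and maximum (ME) throughput errors, in percent.
    prod_pp, prod_cf -- cumulative flows of the J final part types
    under the piece-by-piece and continuous flow models."""
    rel = [abs(pp - cf) / pp * 100.0 for pp, cf in zip(prod_pp, prod_cf)]
    return sum(rel) / len(rel), max(rel)
```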

We have developed an algorithm that generates tree-like networks at random. This
algorithm was used to generate a total of 150 networks, each with 6, 9, or 12 workstations.
Each workstation consists of two machines that produce at most two types of parts
with equal selection probabilities. The first machine has exponential processing times and
the second has Erlang with 6 stages. The mean processing times are 0.1 and the mean
repair times 5.0 time units. Failures are assumed independent in the sense of the discussion
in Section 5.1.1. The failure probabilities are 0.005 for all part types and machines.
The assembly factors of composite parts are 1 and the routing probabilities are all equal.
Throughput estimates were collected for a simulation period of 10,000 time units.

[Figure 5.3 consists of two panels: (a) relative speed RS versus buffer capacity, for q = 200, 100, and 70; (b) relative error RE and maximum error ME (%) versus buffer capacity, for the same values of q.]

Figure 5.3. Relative speed (a) and throughput errors (b) versus buffer capacities.

Figure 5.3* illustrates the dependence of the model's performance on the frequency of
blocking events for various values of the parameter q of parts-to-a-rate-change. The efficiency
of the CF model increases with increasing storage space because the buffers do
not become full frequently. Throughput errors are large when buffers become full frequently
because the CF model performs instantaneous adjustments of flow rates and ignores
a number of transient phenomena associated with discrete traffic. We also remark
on the relation between the speed of the model and the parameter q. For q = 200 the CF
model is faster than PP by a factor of 25 or more. By choosing smaller values for q the
accuracy improves slightly but the speed drops dramatically, because the number of
flow rate changes in the sample path is proportional to 1/q.

* © 1997 Taylor & Francis. Reprinted, with permission, from Int. J. Prod. Res. 35:381.
From the above results, it turns out that when the system is relatively reliable, a
tradeoff between efficiency and accuracy is attained by setting q = 200. However, the
optimal value for q depends on the geometry of the production network and the parameters
of its components (machines and buffers). When the system topology is known, the
problem of specifying the optimal value for q requires one run of the conventional simulator
to obtain precise results and then a few runs of the CF model with different values
of q. The computational requirements for this task are considerably smaller than those for
determining the length of the simulation horizon or specifying the number of replications
to obtain reliable estimates, using conventional simulators.
Figure 5.4 shows the dependence of the efficiency of the model on the number of
workstations. For each system size, relative speed and throughput error estimates are
based on the outputs of ten simulation runs. The relative speed decreases smoothly with
increasing system size but it is always larger than 12.

[Figure 5.4 plots relative speed RS versus the number of workstations, for q = 200, 100, and 70.]

Figure 5.4. Relative speed versus system size.

The last series of experiments concerns the network of Fig. 5.5. The buffer capacities
are 20. Each workstation consists of three machines with mean repair times 10 and mean
processing times 0.5. The first machine has deterministic processing times, the second
has exponential, and the third has Erlang with 6 stages. Workstations 1, 2, 3, 5, 6, 7, 8
produce two types of parts with probabilities p_1 = 0.3, p_2 = 0.7 for n = 1, 2, 3, 5, 6, and
p_1 = p_2 = 0.5 for n = 7, 8. The routing probabilities of part type 2 from workstation 6 are
equal to 0.5. The simulation period spans 100,000 time units and the system produces
more than 190,000 type 1 parts and 85,000 type 2 parts.

[Network diagram omitted; not recoverable from the source.]

Figure 5.5. A network with two final products.

Figure 5.6 illustrates the dependence of RS on the failure probabilities. The parameter
q of parts-to-a-rate-change assumes the values 20 and 30. The estimation errors for the
throughputs of products 1 and 2 were 2.6% and 2.8%, respectively.

[Figure 5.6 plots relative speed RS versus failure probability (0.00 to 0.04), for q = 30 and q = 20.]

Figure 5.6. Relative speed versus failure probability.

From Fig. 5.6 we see that for q = 30 the CF model is quite efficient, especially when
failure probabilities are small. This is because the CF algorithm saves computations when
the frequencies of failures and rate changes are small. Therefore, the choice q = 30 is satisfactory
for this system. For failure probabilities of 0.04, the machines remain under repair
for as long as 45% of the total simulation period. Even in this case the CF model is still
about 3 times faster than the PP simulator.

On the other hand, PP simulators have a clear advantage over the CF model when the
machines alternate frequently between up and down states and buffers often become full or
empty, since they provide accurate estimates in less CPU time and they are
easier to develop. From the above experiments, however, it appears that there is a wide
range of system topologies for which the CF model is more efficient than a conventional
simulator.

5.4. ALGORITHMIC DEADLOCKS IN NON-ACYCLIC NETWORKS

As we have already discussed, hybrid simulation models perform better than conven-
tional simulators when the frequency of flow rate adjustments is smaller than the produc-
tion rates of the machines. This implies that buffers do not alternate between full and
empty states frequently and machines produce several parts before they break down. In
acyclic systems (e.g. production lines, assembly, disassembly, and tree-like networks),
this condition is fulfilled under a wide range of machine operational characteristics and
buffer capacities. The discussion in this section reveals some cases in which the hybrid
algorithm can be trapped, thus executing an endless sequence of simultaneous events.
The continuous flow model adjusts the flow rates based on the current event. The
new flow rates may immediately induce additional events at adjacent buffers and work-
stations. Continuing in the same spirit, we argue that a perturbation in the flow can be
propagated immediately to a wide neighborhood around its origin by means of local in-
teractions.

Figure 5.7. An acyclic network.

Consider the acyclic network of Fig. 5.7. Assume that an event reduces the rate of
machine M3. This change affects the outflow and inflow rates of the buffers upstream and
downstream from M3. If the upstream buffer α (β) happens to be full, then the rate of M1
(M2) will be reduced. If the downstream buffer γ (δ) happens to be empty, then the rate of
the downstream machine M5 (M6) will be reduced. Finally, assume that M5 is slowed
down while the buffer between M4 and M5 is full. Since the parts from M3 and M4 are assembled into a composite
part at M5, machine M4 will slow down too.
In the above example, a perturbation in the flow rate through a single machine
propagates to the source (sink) nodes of the network via full (empty) buffers. By marking
all perturbed buffers starting from the component that hosts the original event, we obtain

a graph that contains no cycles. The graph spans part of or the whole production network
and its arcs show the direction of perturbations rather than part flows.
Therefore, in acyclic systems the number of secondary events is bounded from above
by the total number of buffers. The proof of this proposition is based on the fact that local
perturbations are unidirectional, that is, when a perturbation is passed from one compo-
nent of the system to another it does not feed back. In the case of production lines, which
are the simplest acyclic systems, a perturbation graph is just a chain of full and/or empty
buffers upstream and downstream from the original event (see Section 4.1).
Now let us consider the system pictured in Fig. 5.8. Raw parts enter M1 for the first
operation and then are sent to M2. After completing the second operation, the parts are
sent back to M1 for the final operation.

[Diagram omitted: raw parts flow from M1 through buffer α to M2, and back through buffer β to M1.]

Figure 5.8. A non-acyclic network.

When an event occurs, the continuous flow model invokes Algorithms 5.2-5.4 of
Section 5.2.2 to adjust the flow rates of the affected machines and buffers. But since these
adjustments act locally, the model may start executing an infinite sequence of simultaneous
events and flow rate adjustments along the circuit M1 - α - M2 - β - M1. Here we
have a situation in which an event immediately feeds back on itself. This phenomenon
will be referred to as an algorithmic deadlock and it is an immediate consequence of the
continuous flow assumption.
When the traffic is discrete, events occur only when parts are transferred from one
component of the system to another. Since the processing times of parts and the parts
themselves are not infinitesimal, an event may generate secondary events only after an
elapsed time. Consequently, piece-by-piece simulators are never trapped into algorithmic
deadlocks. A hybrid, deadlock-free model of non-acyclic production systems will be de-
veloped in Section 6.2.2 of the next chapter.

5.5. SUMMARY

In this chapter, we have developed a hybrid discrete event model for assem-
bly/disassembly production networks in which the machines can produce different parts
and are subject to multiple types of failures. The model approximates random processing
times and discrete traffic by a piecewise deterministic fluid flow. The accuracy of this
approximation and its computational efficiency have been verified through a large num-
ber of experiments.
6
OPTIMIZATION

In this final chapter, we introduce the topic of integrating hybrid simulation models
with optimization algorithms to support decisions about manufacturing systems.
Three kinds of manufacturing decisions can be distinguished: strategic, tactical,
and operational, depending on the length of time over which they affect a system
(Buzacott and Shanthikumar, 1993). Strategic decisions have a long time horizon and
relate to the size and location of plants, technology, degree of automation, and product
diversity. Tactical decisions are made every month or every season and involve, for ex-
ample, workforce and production planning, buffer capacity allocation, etc. Operational
decisions go down to a weekly or daily horizon, providing detailed scheduling and con-
trol actions based on a continuously updated shop-floor status.
In this chapter, we describe analytical and heuristic methods for the solution of some
optimization problems encountered at the tactical and operational levels. In Section 6.1
we consider the problem of assigning a limited number of repairmen to an unreliable
production line. We introduce a number of repair control policies that are based on the
machines' operational characteristics and use simulation to evaluate their performance. In
Section 6.2 we work similarly to determine optimal lot scheduling policies in a non-
acyclic production system. In Sections 6.3 and 6.4 we present a mathematical program-
ming formulation and two methods of solving the problem of the design of production
lines.

6.1. OPTIMAL ASSIGNMENT OF REPAIRMEN

In this section, we study production lines maintained by a limited number of repair-


men. Once a repairman completes the repair of a machine, he selects another one from
the set of failed machines, if any, and starts repairing it immediately. The objective is to
assign the repairmen to failed machines in a manner that optimizes system performance.
This problem has been solved analytically only for some special cases, including systems
with two machines and a finite buffer and production lines with infinite or zero buffer
capacities (Smith, 1978; Li, 1987).


The idea here is to use the hybrid, discrete traffic model for testing various policies
according to which a repairman is always sent to the failed machine with the highest pri-
ority among the failed ones. Specifically, we consider the following priority rules:
FIFO: first in, first out (the machine that failed first)
SERT: shortest expected repair time (min 1/r_i)
LERT: longest expected repair time (max 1/r_i)
SEUT: shortest expected uptime (min 1/p_i)
LEUT: longest expected uptime (max 1/p_i)
SENP: smallest expected number of parts-to-failure (min RM_i/p_i)
GENP: greatest expected number of parts-to-failure (max RM_i/p_i)
SEI: smallest efficiency in isolation (min η_i)
GEI: greatest efficiency in isolation (max η_i).
A machine is said to be in isolation if it is neither starved nor blocked. Given the mean
production rate RM_i, the mean uptime 1/p_i, and the mean downtime 1/r_i of machine M_i,
and assuming it is in isolation, the fraction of time it is operational is given by

    (mean uptime) / [(mean uptime) + (mean downtime)] = (1/p_i) / (1/r_i + 1/p_i) = r_i / (r_i + p_i)

and its efficiency η_i, defined as the mean production rate, by

    η_i = RM_i r_i / (r_i + p_i)
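Each priority rule amounts to ranking the failed machines by a scalar key. The following Python sketch, using the attribute names of Table 6.1, is an illustration of how such a dispatcher might be organized; the data layout is an assumption made here:

```python
# Illustrative sketch of the repair priority rules of Section 6.1.
def isolation_efficiency(m):
    # eta_i = RM_i * r_i / (r_i + p_i)
    return m['RM'] * m['r'] / (m['r'] + m['p'])

PRIORITY_RULES = {
    'FIFO': lambda m: m['failed_at'],        # earliest failure first
    'SERT': lambda m: 1.0 / m['r'],          # shortest mean repair time
    'LERT': lambda m: -1.0 / m['r'],
    'SEUT': lambda m: 1.0 / m['p'],          # shortest mean uptime
    'LEUT': lambda m: -1.0 / m['p'],
    'SENP': lambda m: m['RM'] / m['p'],      # fewest parts-to-failure
    'GENP': lambda m: -m['RM'] / m['p'],
    'SEI':  lambda m: isolation_efficiency(m),
    'GEI':  lambda m: -isolation_efficiency(m),
}

def select_machine(failed, rule):
    """Pick the failed machine a free repairman should serve next."""
    return min(failed, key=PRIORITY_RULES[rule])
```

Using the parameters of machines 1 and 5 from Table 6.1, the SERT and SEI rules both select machine 5 (longer mean repair, lower efficiency in isolation), while GENP selects machine 1.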

Repair actions are assumed to be non-preemptive. Therefore, a busy repairman is not
allowed to interrupt his activity in order to switch to another failed machine.
The hybrid, discrete part model of Section 4.2 is modified appropriately as follows.
If machine M_i breaks down and all repairmen are busy, then the values of the
transient times d_{i−1} and a_i of the adjacent buffers and the time of repair T_Mi of
that machine are myopically set to ∞. These adjustments are made because, as
long as all the repairmen are busy, machine M_i will not send any parts to buffer
B_i and will block all incoming parts in buffer B_{i−1}.
When a repairman completes a repair, he is sent to a failed machine, if one exists,
according to a given repair policy. The transient times and the time of repair
of that machine are adjusted appropriately.
As an application, we examine two production lines, S1 and S2, each with ten machines
and nine buffers whose capacities are all equal to 10. The other parameters of S1
are given in Table 6.1. The failure rates of S2 are two times the corresponding failure
rates of line S1. All other parameters of S2 are those of S1.
The simulation horizon spans 100,000 time units and is divided into 20 periods of
equal length. The first period is considered to be a warmup period. For each repair policy,
a 95% confidence interval (see Appendix 1.A1.7) was constructed for the expected
throughput using the throughputs of the remaining 19 periods. The estimation errors for the
throughput (half-length of the intervals) were less than 1% in all cases.

Table 6.1. Machine parameters of line S1.

i      1     2     3     4     5     6     7     8     9     10
RM_i   20    18    16    14    10    11    15    17    19    21
p_i    0.10  0.06  0.16  0.20  0.12  0.14  0.08  0.12  0.18  0.22
r_i    0.65  0.61  1.02  1.31  1.29  1.45  0.90  0.91  0.95  0.80

Table 6.2. Average throughput for various repair policies.

         Number of repairmen in S1     Number of repairmen in S2
Policy   1      2      10              1      2      3      10
SEI      7.16   8.13   8.20            4.90   6.57   6.81   6.84
SERT     7.15   8.13   8.20            4.89   6.57   6.81   6.84
SENP     7.13   8.13   8.20            4.87   6.56   6.81   6.84
LEUT     7.08   8.13   8.20            4.81   6.54   6.81   6.84
FIFO     7.07   8.13   8.20            4.79   6.55   6.82   6.84
SEUT     7.05   8.13   8.20            4.77   6.54   6.81   6.84
GENP     7.00   8.12   8.20            4.71   6.54   6.82   6.84
LERT     7.00   8.13   8.20            4.69   6.53   6.82   6.84
GEI      6.98   8.12   8.20            4.67   6.54   6.81   6.84

Table 6.2 summarizes the results for various numbers of repairmen. The average
production rates of the two systems maintained by 10 repairmen are 8.20 and 6.84, respectively.
It is obvious that with two repairmen for line S1 and three for S2 the systems
perform as well as with more. For most cases considered, the SEI and SERT policies
show the best performance. For the two configurations that apparently violate this rule, namely,
line S1 with two repairmen and S2 with three, the deviations between the maximum
throughput and the throughputs achieved by the SEI and SERT policies are less than the
estimation error of 1%. Thus, with approximately 95% confidence, we can say that these two
policies are superior for every number of repairmen. However, experiments with different
machine parameters and buffer capacities and theoretical results (Smith, 1978; Li, 1987)
suggest that there is no unique optimal control policy and each line has its own idiosyncrasies
concerning repair allocation. At any rate, for any type of production line the hybrid
model can easily be used to provide an optimal policy.

6.2. LOT SCHEDULING POLICIES AND STABILITY

Control of production systems often involves lot sizing and scheduling of several
part types on each machine. This situation typically arises when a machine must select a
number of parts to process next among two or more lots of different orders competing for
service at the same time. One important goal of production control is to minimize the
manufacturing cycle time, which is the average amount of time an order spends in the
system (mean time in the system).
Due to the discrete nature of the decision problem, it is often impossible to find optimal
policies even for systems with two machines. Consequently, policies used in practice,
rather than striving for optimality, aim at achieving stability, namely, boundedness
of buffer levels and cycle times. This means that all customer orders are satisfied in finite
time.
A particular class of control policies is that of decentralized policies in which each
machine makes scheduling decisions based only on knowledge of its own buffer levels.
Our objective here is to develop a hybrid model of a controlled production network and
to illustrate its use in evaluating various decentralized policies.

6.2.1. System and Control Policy Description

We consider systems of a particular structure shown in Fig. 6.1, which produce one
product and in which a part may visit a machine at several (not necessarily consecutive)
stages of its production process. Such systems are called reentrant flow lines and they are
frequently encountered in semiconductor manufacturing.
Parts at the ith processing stage are stored in buffer B_i. In the system of the figure,
parts require twelve operations and visit each machine four times. The system operates
according to a produce-to-order policy (see also Section 4.4.2), that is, when a customer
request arrives, it authorizes the release of a new raw part into buffer B_1.

[Figure 6.1 shows three machines (MACHINE 1, MACHINE 2, MACHINE 3); parts visit each machine four times over twelve processing stages.]

Figure 6.1. A reentrant flow line.

We assume that the machines are unreliable and incur setup delays when they switch
processing parts from one buffer to another. When a machine is under repair or setup it
cannot produce. Consequently, setups should not occur too frequently, in order to keep
machine utilization as high as possible, but also not too rarely, because then the buffer
levels grow and the mean cycle time increases. In the sequel, we use the term "production
run" to denote the period of time in which a machine serves one buffer exclusively.
Let λ be the mean demand rate and τ_i the mean production time of a part at the ith
processing stage. The mean production time is the inverse of the efficiency in isolation
(see Section 6.1) of machine m if all stages except the ith are suppressed. Due to machine
breakdowns, the mean production time τ_i at that stage is longer than the net processing
time. Hence, assuming for simplicity that the net processing times are deterministic, we
have that

    τ_i = (net processing time of a part at stage i) + (mean number of failures of machine m during the production of that part) × (mean time-to-repair)

We will now describe a scheduling policy, known as the universally stabilizing supervising
mechanism (USSM), that provides a balance between the frequency of setups and the
length of production runs. This policy has been proposed and analyzed by Kumar and
Seidman (1991). To implement the USSM, one must specify two kinds of control parameters:
a positive number γ_m, hereafter called the truncation parameter, for each machine
m, and a nonnegative number BC_i, called the target level, for each buffer B_i. The
operation of the USSM is governed by the following rules:
(1) Each machine serves a list of buffers based on a first-in-first-out protocol. Buffer
B_i enters the tail of the list whenever it is not being processed or set up and its level is
greater than or equal to BC_i.
(2) When machine m is available, the buffer at the head of the list, say B_i, is set up
for processing.
(3) When the setup is complete, the machine begins processing parts from B_i
throughout a production run spanning Δ_i = γ_m λ τ_i time units, or until the buffer
empties, whichever occurs first. Buffer B_i is then removed from the list and the
machine commences a setup to switch to another operation.
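Rules (1)-(3) can be sketched as a small dispatcher per machine. The class below is an illustration only, under assumed names and a simplified interface (the event-driven machinery of the full hybrid model is omitted):

```python
from collections import deque

# Illustrative sketch of the USSM rules (1)-(3) for one machine m.
class USSMMachine:
    def __init__(self, gamma, lam, tau, targets):
        self.gamma, self.lam = gamma, lam   # gamma_m and demand rate lambda
        self.tau = tau          # tau[i]: mean production time at stage i
        self.targets = targets  # targets[i]: target level BC_i
        self.queue = deque()    # FIFO list of buffers awaiting service

    def enlist(self, i, level):
        # Rule (1): buffer B_i joins the tail of the list once its
        # level reaches the target level BC_i
        if i not in self.queue and level >= self.targets[i]:
            self.queue.append(i)

    def next_run(self):
        # Rules (2)-(3): set up the buffer at the head of the list and
        # serve it for at most Delta_i = gamma_m * lambda * tau_i time units
        if not self.queue:
            return None         # machine idles until some buffer enlists
        i = self.queue.popleft()
        return i, self.gamma * self.lam * self.tau[i]
```

With γ_m = 10, λ = 0.5, and τ values 0.4 and 0.6, the run lengths Δ_i come out as 2.0 and 3.0 time units, and buffers are served strictly in the order they reached their targets.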
The above policy is distributed because decisions at each machine are made on the
basis of the levels of the adjacent buffers, regardless of the buffer levels at other machines.
There is a large amount of literature concerning distributed production control
policies (see, e.g., Perkins and Kumar, 1989; Chase and Ramadge, 1992; Sharifnia, 1994;
Humes, 1994; Kumar and Kumar, 1994; Kumar and Meyn, 1995; and the references
therein).
For multiproduct manufacturing systems of general geometry with reliable machines
and deterministic processing times, Kumar and Seidman (1991) have shown that the
USSM is stable whenever

    γ_m > [ Σ_{all buffers B_i of machine m} (maximum time to set up machine m for buffer B_i) ] / [ 1 − Σ_{all buffers B_i of machine m} (λ τ_i) ]    (6.1)

and the stability condition

    Σ_{all buffers B_i of machine m} (λ τ_i) < 1

are satisfied for each machine m.


Condition (6.1) imposes a lower bound on the truncation parameter Ym and, therefore,
on the lengths .1i = rmA.ri of production runs of machine m. The stability condition ensures
that each machine can process all the parts of arriving customer requests during one time
unit within less than one time unit, provided the machine is not being set up.
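The right-hand side of condition (6.1) is a direct computation once the setup times and the loads λτ_i of a machine are known; a minimal helper, with illustrative names:

```python
# Illustrative computation of the lower bound on gamma_m from Eq. (6.1).
def gamma_lower_bound(setups, lam, taus):
    """setups -- maximum setup times for the buffers of machine m
    lam    -- mean demand rate lambda
    taus   -- mean production times tau_i of those buffers."""
    load = sum(lam * tau for tau in taus)
    if load >= 1.0:
        raise ValueError("stability condition sum(lambda*tau_i) < 1 violated")
    return sum(setups) / (1.0 - load)
```

For example, with two buffers, setup times 0.5 each, λ = 0.5, and τ values 0.4 and 0.6, the machine load is 0.5 and any γ_m above 2.0 satisfies (6.1).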
The mean cycle time achieved by this policy depends on the selected control parameters.
The following remarks give some insight into the effects of these parameters on system
performance.
When the target levels are large, a machine incurs long idle periods waiting for
an upstream buffer to reach its target level, even though there may be work to
do. However, insertion of idle times may be beneficial to reduce the frequency
of setups (see, e.g., Chase and Ramadge, 1992; Federgruen and Katalan, 1996;
and references therein). This feature is referred to as idling.
When the truncation parameters are large, the production runs Δ_i are long, the
machines are rarely set up and, therefore, they have full utilization and the system
is stable. On the other hand, by using small truncation parameters, we avoid
long production runs and thus all buffer levels tend to be small. However, since
the USSM is stable for deterministic systems, using the lower bound of γ_m, that
is, the term on the right side of condition (6.1), may render the system unstable if
the demand rate or the production times vary over time.
To compute the optimal values of the control parameters we must solve an optimization
problem that is both combinatorial and stochastic in nature. As an alternative, we
can simply select the best among different combinations using simulation. Given the
speed and accuracy of the hybrid models, such tests can be performed quickly and reliably
in real time on the shop floor. This topic is discussed in the next section.

6.2.2. Hybrid Model and Performance Evaluation

Here we develop a hybrid continuous flow model to test various combinations of BC_i
and γ_m. The model observes changes in the inflow and outflow rates of the buffers which
result from the occurrence of the following events:
(a) a machine begins a new setup

(b) a machine fails


(c) a buffer enters the list
(d) a buffer empties
(e) a machine resumes operation.
The last event corresponds to the end of a setup period or a repair period.
To complete the model we must derive the update and scheduling equations and de-
velop the event routines. The update and scheduling equations are as in Section 3.3.2,
with the following convention: R_i denotes the outflow rate of B_i and also the inflow rate
of B_{i+1}. The events are treated as follows.

Algorithm 6.1. Event routines of a controlled production network


(a) Machine m begins a new setup. We update all upstream and downstream buffers
whose rates are affected by this event. Let Bi be the buffer that was served by the
machine during the previous production run. We set the outflow rate of Bi to 0.
If the list is empty, the machine becomes idle and we set TM_m = ∞; otherwise, we
select the buffer at the head of the list and schedule a type-e event after a setup
time. The machine rate is zero. We schedule next events in the affected buffers.
(b) Machine m fails. We update all upstream and downstream buffers whose rates
are affected by this event. Then we schedule a type-e event after a repair time.
The machine rate and the flow rates into and from the adjacent buffers are all
zero. Finally, we schedule next events in the affected buffers.
(c) A buffer enters the list. If the list is empty, we execute a type-a event.
(d) Buffer Bi empties. We execute a type-a event.
(e) Machine m resumes operation. We update all upstream and downstream buffers
whose rates are affected by this event. The machine starts producing at maximum
speed. We adjust the outflow rate of the upstream buffer that is currently
being served to the maximum rate. Finally, we schedule next events in the af-
fected upstream and downstream buffers. The next event at machine m is a fail-
ure or a new setup, whichever occurs first.
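The event logic of Algorithm 6.1 can be organized around a standard event list. The skeleton below is only illustrative (the class, the setup and repair durations, and the event names are made up); the handlers merely record the event sequence where the full model would update buffer levels and flow rates:

```python
import heapq

# Minimal event-scheduling skeleton in the spirit of Algorithm 6.1.
class EventList:
    def __init__(self):
        self.queue = []                       # heap of (time, kind, data)

    def schedule(self, time, kind, data=None):
        heapq.heappush(self.queue, (time, kind, data))

    def run(self, t_max):
        trace = []
        while self.queue and self.queue[0][0] <= t_max:
            time, kind, data = heapq.heappop(self.queue)
            trace.append((time, kind))
            if kind == "setup_begins":        # routine (a): assumed setup time 1.0
                self.schedule(time + 1.0, "resume")   # type-e event
            elif kind == "failure":           # routine (b): assumed repair time 0.5
                self.schedule(time + 0.5, "resume")   # type-e event
            # routines (c) and (d) would trigger a type-a event here
        return trace

ev = EventList()
ev.schedule(0.0, "failure")
ev.schedule(2.0, "setup_begins")
print(ev.run(10.0))
# [(0.0, 'failure'), (0.5, 'resume'), (2.0, 'setup_begins'), (3.0, 'resume')]
```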

The model observes all events in the system until a specified production period is
reached. It uses Little's formula, N = λW, to compute the mean cycle time W from the
mean number N of items in the system, which is also equal to the mean number of back-
logged orders. This quantity is approximated by the average number of items in the sys-
tem during the simulation period [0, t_max].
Let us consider a system into which raw parts enter at a constant rate λ, they are suc-
cessively stored in n buffers B_1, B_2, ..., B_n, and finally exit from the last machine as fin-
ished products. At any time t, the number of items in the system is

BL(t) = BL(0) + P_0(t) - P_n(t),

where BL(t) is the inventory, that is, the sum of all buffer levels, at time t, P_0(t) is the
number of arriving orders at the system and P_n(t) the number of departures from B_n in the
interval [0, t]. Since P_0(t) = λt, the average number of items in the system can be ex-
pressed as follows:

N = (1/t_max) ∫_0^{t_max} BL(t) dt = BL(0) + λt_max/2 - (1/t_max) ∫_0^{t_max} P_n(t) dt

To compute the integral we observe that P_n(t) is linear and increasing whenever the last
buffer is served; otherwise it is constant. Suppose that the outflow rate of B_n changes at
times t_0, t_1, ..., t_K, where t_0 = 0 ≤ t_1 ≤ ... ≤ t_K = t_max. Figure 6.2 shows a possible evolution
of P_n(t). The region between the plot of P_n(t) and the time axis consists of alternating
rectangles and trapezoids delimited by the times t_0, t_1, ..., t_K. Hence

(1/t_max) ∫_0^{t_max} BL(t) dt = BL(0) + λt_max/2 - (1/t_max) Σ_{k=0}^{K-1} ∫_{t_k}^{t_{k+1}} P_n(t) dt

Figure 6.2. Number of departures from B_n.
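Because P_n(t) is piecewise linear, its time integral reduces to sums of trapezoid (and rectangle) areas between consecutive breakpoints. A minimal sketch of this computation, with hypothetical breakpoints and arrival rate:

```python
def time_average(breakpoints, t_max):
    """Time average of a piecewise-linear P_n(t), given its breakpoints
    [(t_0, P_n(t_0)), ..., (t_K, P_n(t_K))] with t_0 = 0 and t_K = t_max."""
    area = 0.0
    for (t0, p0), (t1, p1) in zip(breakpoints, breakpoints[1:]):
        area += 0.5 * (p0 + p1) * (t1 - t0)   # trapezoid; rectangle if p0 == p1
    return area / t_max

# Hypothetical run: the last buffer is served at rate 1 on [0, 2], idle on [2, 4].
bp = [(0.0, 0.0), (2.0, 2.0), (4.0, 2.0)]
lam, BL0, t_max = 1.0, 0.0, 4.0
N = BL0 + lam * t_max / 2 - time_average(bp, t_max)
print(N)  # 0 + 2 - 6/4 = 0.5
```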

Algorithm 6.1 can be extended to describe more complex production networks that
produce several types of products with random processing times.
From several experiments we found that when BC_i = 0, the performance of the de-
centralized policy can be improved considerably if we permit idling, that is, whenever B_i
becomes empty (as a result of R_{i-1} being less than R_i) we continue processing parts at the
rate R_{i-1} at which they arrive. To implement this policy we modify routine (d) as follows:
(d) Buffer B_i empties. If BC_i > 0 or R_{i-1} = 0, then execute a type-a event for another
buffer. Otherwise, continue processing parts from B_i at a reduced speed R_i = R_{i-1}
until the cumulative production run reaches a total of Δ_i time units.
In order to evaluate the performance of the scheduling policies, we simulate the sys-
tem of Fig. 6.1 for 1,000,000 time units. We use the following data: the arrival rate is 1.0,
the net processing times are 0.2 for all parts, setup times are 1.0, failure probabilities are
0.01 and the downtimes are exponential random variables with mean 1.0. Since the num-
ber of failures during the production of one part has a geometric distribution on {0, 1, ...}
with parameter 0.01, its mean is 0.01/(1 - 0.01) and the mean processing times

τ_i = (net processing time) + (mean number of failures of machine m during the production of that part) × (mean downtime)

    = 0.2 + [0.01/(1 - 0.01)] × 1.0 ≈ 0.2101

satisfy the stability condition 4τ_i < 1.
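A quick numeric check of this computation with the data above:

```python
# Mean effective processing time per part with the data of the example.
p, mttr, net = 0.01, 1.0, 0.2      # failure prob. per part, mean downtime, net time
mean_failures = p / (1 - p)        # mean of the geometric distribution on {0, 1, ...}
tau = net + mean_failures * mttr
print(round(tau, 4))               # 0.2101
assert 4 * tau < 1                 # stability condition with arrival rate 1.0
```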


We have tested a large number of combinations of target levels and truncation pa-
rameters. In each experiment, all target levels assumed the same value BC, and also all
truncation parameters assumed the same value γ. Figure 6.3 shows the minimum mean
cycle time W as a function of the target level. This quantity is the minimum of the mean
cycle times achieved with various values of γ for a given BC. From the figure it is clear
that the selection BC = 0 achieves the smallest delays. Note that BC can be fractional by
the continuous flow assumption.

Figure 6.3. Mean cycle time versus BC.

Finally, another important measure of performance is the mean number of unfinished
orders, N, in the system. If we are interested in minimizing N, then by Little's formula,
N = λW, the selection BC = 0 is again optimal.

6.3. PERTURBATION ANALYSIS AND SYSTEM DESIGN

Optimization formulations are often used in the design of manufacturing systems.


The objective function in these formulations usually represents economic goals such as
capital or operating costs. Constraints, on the other hand, capture underlying physical
laws governing system behavior as well as budgetary limitations and operation require-
ments. In this section, we develop an algorithm for the optimal repair and buffer alloca-
tion of discrete part, unreliable production lines with constant processing times. Next we
review the standard Lagrangian approach for solving constrained optimization problems.

© 1997 Taylor & Francis. Reprinted, with permission, from Int. J. Prod. Res. 35:381.

6.3.1. Optimization with Equality Constraints

First we introduce the concepts of convex sets and convex functions.

Definition 6.1. A subset X of R^n is convex if the line segment joining any two points
of X also belongs to X; that is,

x, y ∈ X ⇒ θx + (1 - θ)y ∈ X   ∀ θ ∈ [0, 1]

For example, if a ∈ R then the set

X ≜ {r : r ∈ R^n, Σ_{i=1}^n r_i = a}

is convex because, for every r ≜ (r_1, r_2, ..., r_n) and p ≜ (p_1, p_2, ..., p_n) ∈ X, we have that

Σ_{i=1}^n [θr_i + (1 - θ)p_i] = θa + (1 - θ)a = a

Also, it can easily be verified that the intersection of convex sets is a convex set.

Definition 6.2. A scalar function f: R^n → R defined on a convex set X is convex if

x, y ∈ X ⇒ f[θx + (1 - θ)y] ≤ θf(x) + (1 - θ)f(y)

for every θ ∈ [0, 1]; the function is concave if -f(x) is convex.

A property we shall use in later sections is that if f(x) is concave and positive, then
1/f(x) is convex. To prove this property, first observe that, if f(x) is concave, then

x, y ∈ X ⇒ f[θx + (1 - θ)y] ≥ θf(x) + (1 - θ)f(y)

and since f(x) is positive, the above yields

1 / f[θx + (1 - θ)y] ≤ 1 / [θf(x) + (1 - θ)f(y)]

Using the inequality [f(x) - f(y)]² ≥ 0 we obtain after a little algebra

1 / [θf(x) + (1 - θ)f(y)] ≤ θ/f(x) + (1 - θ)/f(y)

From the above inequalities we obtain

1 / f[θx + (1 - θ)y] ≤ θ/f(x) + (1 - θ)/f(y)

which proves that 1/f(x) is convex.
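A numeric spot-check of this property (not a proof) for the concave, positive function f(x) = √x:

```python
import math, random

# Check the convexity inequality for g = 1/f at many random points,
# where f(x) = sqrt(x) is concave and positive on (0, infinity).
random.seed(0)
f = math.sqrt
for _ in range(1000):
    x, y = random.uniform(0.1, 10), random.uniform(0.1, 10)
    th = random.random()
    lhs = 1 / f(th * x + (1 - th) * y)
    rhs = th / f(x) + (1 - th) / f(y)
    assert lhs <= rhs + 1e-12
print("1/f convexity inequality holds at all sampled points")
```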


Suppose f: R^n → R and let x denote the vector [x_1 x_2 ... x_n]. Hence, f(x) can be writ-
ten as f(x_1, x_2, ..., x_n). Consider the problem of finding a point x* ≜ [x_1* x_2* ... x_n*] ∈ R^n
such that

f(x*) ≥ f(x)   ∀ x ∈ R^n

This problem is an optimization or mathematical programming problem of the uncon-
strained type, which is written symbolically as

max_{x ∈ R^n} f(x)

The function f(x) is called the objective function and the coordinates x_1, x_2, ..., x_n of point
x are the decision variables of the optimization problem. The point x* is called the global
maximum of f(x) or, equivalently, the global optimum of the mathematical programming
problem. The next theorem gives a sufficient condition for the optimality of x*.

Theorem 6.1. Let f: R^n → R be a concave function with continuous partial derivatives
and x* such that

∂f/∂x_i |_{x=x*} = 0,   i = 1, 2, ..., n    (6.2)

Then, x* is a global maximum of f.

See Bazaraa et al. (1993) for a proof of a similar theorem (Theorem 4.1.5 therein).
Solving Eqs. (6.2) yields the optimal values x_i*, i = 1, 2, ..., n, of the decision variables.
Next we consider constrained optimization problems of the form

max_{x ∈ R^n} f(x)

subject to g_m(x) = 0,   m = 1, 2, ...



where f is concave and gm are convex and differentiable functions. Equations gm(x) = 0,
m = 1, 2, ... , express physical, economical, or other constraints that bind the decision
variables. A point x ∈ R^n satisfying the constraints is called a feasible point. The subset of
Rn comprising all feasible points is called the feasible region of the optimization problem.
Since the decision variables are not independent, it is no longer true that Eqs. (6.2)
will yield a feasible point. A procedure to solve this problem would be to use the con-
straints to express some decision variables as functions of the remaining (independent)
ones, and then substitute the resulting expressions into Eqs. (6.2) of the independent deci-
sion variables. It can be shown (see e.g. Hadley and Whitin, 1963) that this procedure is
equivalent to the method of Lagrange multipliers, which is outlined next.
With each constraint m we associate a nonnegative number λ_m called the Lagrange
multiplier, and form the function

f_a(x, λ) = f(x) - Σ_m λ_m g_m(x)

where λ ≜ [λ_1 λ_2 ... λ_m ...]. We consider the problem of maximizing f_a(x, λ) with respect
to x and λ. Since g_m(x) is convex and λ_m nonnegative, the function f_a(x, λ) is jointly con-
cave in x and λ. Hence, by Theorem 6.1, if there exist vectors x* and λ* such that

∂f_a/∂x_i = ∂f/∂x_i - Σ_m λ_m ∂g_m/∂x_i = 0,   i = 1, 2, ..., n

∂f_a/∂λ_m = -g_m(x) = 0,   m = 1, 2, ...

then the point (x*, λ*) is the global maximum of f_a, whereas x* is also the global optimum
of the original constrained optimization problem. If the partial derivatives of f cannot be
evaluated analytically, then the above equations can be solved using standard numerical
methods. One such method, known as the method of steepest ascent, will be presented in
the next section.
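As a small worked instance of these conditions (illustrative, not from the book), consider maximizing f(x_1, x_2) = -x_1² - x_2² subject to g(x) = 1 - x_1 - x_2 = 0:

```python
# Lagrangian: f_a(x, lam) = f(x) - lam * g(x)
#   with f(x1, x2) = -x1**2 - x2**2 and g(x) = 1 - x1 - x2.
# Stationarity: df_a/dx_i = -2*x_i + lam = 0  =>  x_i = lam / 2,
# and df_a/dlam = -g(x) = 0 gives 1 - lam = 0, so lam = 1, x1 = x2 = 1/2.
lam = 1.0
x1 = x2 = lam / 2

# Verify the first-order conditions numerically.
assert abs(-2 * x1 + lam) < 1e-12       # df_a/dx1 = 0
assert abs(1 - x1 - x2) < 1e-12         # constraint g(x) = 0 satisfied
print(x1, x2, lam)  # 0.5 0.5 1.0
```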

6.3.2. Allocation of Buffer Space and Repair Effort

We consider the problem of maximizing the throughput TH of a production line


when the total budget for storage (pallets, conveyors, etc.) and repair effort is limited.
Specifically, we assume that the total amount of repair rate a and storage space b are lim-
ited. The line consists of n machines and n - 1 intermediate buffers. Let r = [r_1 r_2 ... r_n]
be the vector of repair rates and BC = [BC_1 BC_2 ... BC_{n-1}] the vector of buffer capacities
to be determined. The problem is formulated as follows (see also Hillier and Boling,
1966; Ho and Cao, 1983):

max TH(r, BC)



subject to   Σ_{i=1}^n r_i = a

             Σ_{j=1}^{n-1} BC_j = b

The above problem is a constrained optimization problem for which the objective
function does not have a closed form. To solve this problem we proceed sequentially as
follows:

Algorithm 6.2. Steepest ascent procedure


(a) Find a feasible set of design parameters r^(0), BC^(0). Set k = 0.
(b) Evaluate the throughput and its gradients with respect to r_i^(k) and BC_j^(k), denoted
by S_{r_i}^(k) and S_{B_j}^(k) respectively. If k > 0 and TH^(k) - TH^(k-1) < ε, a small number,
then stop; otherwise go to (c).
(c) Update the parameters:

r_i^(k+1) = r_i^(k) + δ_1^(k) S_{r_i}^(k),    BC_j^(k+1) = BC_j^(k) + δ_2^(k) S_{B_j}^(k)

where δ_1^(k) and δ_2^(k) are step sizes determined empirically. Modify the up-
dated values so that the buffer capacities assume nonnegative integer values that
sum up to b. Replace k by k + 1 and go to (b).
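Algorithm 6.2 can be sketched on a toy concave surrogate of the throughput. In practice TH and its gradients would come from simulation and perturbation analysis; here the objective, its optimum c, and the projection step (shifting all components equally back onto Σr_i = a) are illustrative assumptions:

```python
# Toy surrogate: TH(r) = -sum((r_i - c_i)^2), maximized over sum(r_i) = a.
def toy_TH(r, c):
    return -sum((ri - ci) ** 2 for ri, ci in zip(r, c))

def steepest_ascent(r, c, a, step=0.1, eps=1e-9, max_iter=1000):
    th = toy_TH(r, c)
    for _ in range(max_iter):
        grad = [-2 * (ri - ci) for ri, ci in zip(r, c)]   # gradients S_r
        r = [ri + step * gi for ri, gi in zip(r, grad)]   # step (c): update
        shift = (a - sum(r)) / len(r)
        r = [ri + shift for ri in r]                      # restore sum(r_i) = a
        th_new = toy_TH(r, c)
        if abs(th_new - th) < eps:                        # stopping rule of step (b)
            break
        th = th_new
    return r, th

r_opt, th = steepest_ascent([1.0, 1.0, 1.0, 1.0], c=[0.8, 1.4, 1.0, 0.8], a=4.0)
print([round(ri, 3) for ri in r_opt])  # converges to c = [0.8, 1.4, 1.0, 0.8]
```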

As we discussed in the previous section, a sufficient condition for convergence to the


global optimum is that the throughput be jointly concave in r and BC. Two relevant re-
sults have appeared in the literature, namely, the throughput of reliable, assem-
bly/disassembly systems is concave in BC (Dallery et al., 1994) and the reciprocal
throughput of unreliable production lines is convex in r (Kouikoglou and Phillis, 1994).
Although analytic solutions of small systems (Altiok and Stidham, 1983) and simulation
experiments with longer production lines do not indicate the existence of multiple local
optima, concavity of the throughput of unreliable systems remains an open question.
Another important issue is the estimation of the gradients of the throughput with re-
spect to buffer capacities and repair rates. Since the buffer capacities are discrete, a pos-
sible approach is to define the corresponding gradients as finite differences

S_{B_j} ≜ [TH(r, BC + 1_j) - TH(r, BC)] / [(BC_j + 1) - BC_j]

      = TH(r, BC + 1_j) - TH(r, BC)


where 1_j is the vector whose jth element is 1 and the others are 0. This approach requires
a total of n simulation runs, that is, one simulation run to estimate the throughput corre-
sponding to nominal parameter values r and BC, and n - 1 additional runs to estimate
TH(r, BC + 1_j) for each buffer j, j = 1, 2, ..., n - 1.
In the next section, we present an efficient method for extracting gradient informa-
tion with respect to every repair rate without having to perform additional simulations.
This method, known as infinitesimal perturbation analysis, was proposed by Ho and his
colleagues in their pioneering work reported in Ho et al. (1979, 1983).

6.3.3. Infinitesimal Perturbation Analysis

The basis of infinitesimal perturbation analysis (IPA) is a set of simple rules describ-
ing the effect of a change in a system parameter on the average throughput rate. We de-
rive these rules using the following paradigm.
Consider L items, k = 1, 2, ... , L, processed sequentially by a production line with n
machines and n - 1 intermediate buffers. Suppose that machine M_i has constant process-
ing times 1/RM_i and buffer B_i has finite capacity BC_i. We define the following quantities:
τ_{i,k}      total production time (net processing time plus downtime) of the kth part at
            machine M_i
F_{i,k}      time at which M_i completes processing the kth part
D_{i,k}      departure time of the kth part from M_i
TH(L)       average throughput of the system after L parts are produced
S_{r_i}(L)   partial derivative of TH(L) with respect to r_i.
Since the items in each buffer are processed in a FIFO manner, the dynamics of the
production line can be described as follows:

(1) At time D_{i,k-1} the (k - 1)th item departs from M_i and enters the downstream
buffer B_i. If at that time B_{i-1} is not empty, M_i begins processing the next item;
otherwise, the machine begins processing at time F_{i-1,k} at which the kth item
completes processing at M_{i-1}. In the latter case we have F_{i-1,k} = D_{i-1,k}. Com-
bining these two cases we compute the completion time of the kth part by

F_{i,k} = τ_{i,k} + { D_{i,k-1}   if M_i is not starved
                   { D_{i-1,k}   if M_i is starved              (6.3)

(2) At time F_{i,k}, the machine attempts to send the kth item to B_i. If, however, buffer
B_i happens to be full, that is, occupied by items k - 1, k - 2, ..., k - BC_i, then
machine M_i remains blocked until the time at which the (k - BC_i - 1)th item de-
parts from M_{i+1}. From this observation we get

D_{i,k} = { F_{i,k}               if M_i is not blocked
          { D_{i+1,k-BC_i-1}      if M_i is blocked             (6.4)

(3) Combining Eqs. (6.3) and (6.4) yields

D_{i,k} = { D_{i-1,k} + τ_{i,k}    if M_i is starved but not blocked
          { D_{i,k-1} + τ_{i,k}    if M_i is neither starved nor blocked    (6.5)
          { D_{i+1,k-BC_i-1}       if M_i is blocked

(4) Finally, the expected throughput of the system TH is approximated by the aver-
age throughput TH(L). The latter is determined from the departure times of the
last machine. Hence,

TH(L) = L / D_{n,L}     (6.6)
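The recursion (6.3)-(6.5) and the estimate (6.6) can be coded directly. The sketch below assumes a reliable line (τ_{i,k} = 1/RM_i) and illustrative rates and capacities:

```python
def departure_times(rates, caps, L):
    """rates: RM_i for each machine; caps: BC_i for buffers B_1..B_{n-1};
    returns D[i][k], the departure time of part k from machine i."""
    n = len(rates)
    D = [[0.0] * (L + 1) for _ in range(n + 1)]   # D[0][k] = 0: raw parts always available
    for k in range(1, L + 1):
        for i in range(1, n + 1):
            tau = 1.0 / rates[i - 1]
            start = max(D[i][k - 1], D[i - 1][k])          # Eq. (6.3): starvation
            finish = start + tau
            if i < n and k - caps[i - 1] - 1 >= 1:         # Eq. (6.4): blocking
                finish = max(finish, D[i + 1][k - caps[i - 1] - 1])
            D[i][k] = finish
    return D

L = 1000
D = departure_times([10.0, 8.0, 12.0], caps=[2, 2], L=L)
TH = L / D[3][L]          # Eq. (6.6)
print(round(TH, 3))       # close to 8.0, the rate of the slowest machine
```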

Now assume that the production times can be expressed as functions of a decision
variable, say x. For example, if M_i has exponential repair times and x is the mean repair
rate then the production time of the kth part is computed from

τ_{i,k} = (net processing time) + (total downtime during the production of the kth part)

      = 1/RM_i + Σ_{q=1}^{number of failures} (-ln u_q / x)

where

-ln u_q / x

is the random variate and u_q ∈ (0, 1) the random number corresponding to the qth down-
time incurred by M_i while processing the kth part.
Differentiating Eq. (6.5) with respect to x we get

∂D_{i,k}/∂x = { ∂D_{i-1,k}/∂x + dτ_{i,k}/dx    if M_i is starved but not blocked
              { ∂D_{i,k-1}/∂x + dτ_{i,k}/dx    if M_i is neither starved nor blocked    (6.7)
              { ∂D_{i+1,k-BC_i-1}/∂x           if M_i is blocked

provided the above derivatives exist almost everywhere. Equation (6.7) holds almost eve-
rywhere for production lines (Hu, 1992) if
A1) every τ_{i,k} is a continuously differentiable and convex function of x almost
everywhere;

A2) the probability that a machine is marginally starved or marginally blocked (see
the discussion that follows) is zero.
Now we verify condition A1 for production lines with unreliable machines and de-
terministic processing times. If x ≠ r_i, then, since x is not the repair rate of M_i, the produc-
tion time τ_{i,k} is independent of x and, therefore, A1 holds. If x = r_i then τ_{i,k} is given by

τ_{i,k} = 1/RM_i + Σ_{q=1}^{number of failures} (-ln u_q / x)

The function (-ln u_q)/x is continuously differentiable at x for every u_q ∈ (0, 1). Further-
more,

d^2 τ_{i,k}/dx^2 = -2 Σ_{q=1}^{number of failures} (ln u_q)/x^3 ≥ 0

for every u_q ∈ (0, 1); hence, τ_{i,k} is convex almost everywhere and condition A1 holds.
We introduce the terminology of marginal blockage or starvation. A machine is mar-
ginally blocked if production of a part fills the downstream buffer exactly at the time an-
other part departs from the same buffer. This machine is instantaneously unblocked.
Marginal starvation or marginal blockage and starvation are defined similarly.
We now examine the validity of condition A2. It turns out that this is not so. Indeed,
the derivative of D_{i,k} may not be well defined when at least two conditions in Eq. (6.5)
are simultaneously true. For example, if B_{i-1} is empty and machines M_{i-1} and M_i com-
plete their parts simultaneously, then the part that is completed by M_{i-1} will be trans-
ferred to M_i without delay. Thus M_i is marginally starved. In this case, D_{i,k-1} = D_{i-1,k}
and the first two expressions in Eq. (6.5) yield the same D_{i,k}. This may not be the case for
Eq. (6.7) since, in general, the partial derivatives of D_{i,k-1} and D_{i-1,k} are not equal. By
the same reasoning, differentiability of D_{i,k} may not hold when M_i is marginally blocked,
that is, buffer B_i is full, and machine M_i completes a part at the same time machine M_{i+1}
transfers its part downstream (or, simply, completes its part, if M_{i+1} is not blocked).
The probabilities of marginally starved or blocked states would be zero if the produc-
tion times were absolutely continuous random variables. This, however, is not the case
here, because of the assumption of constant processing times. For example, suppose that
the machines have equal production rates and each buffer contains one item at time zero.
Then every machine M_i, i = 2, 3, ..., n, will be marginally starved during some initial
transient period. In a dual fashion, if all buffers are full at time zero, then machine M_i,
i = 1, 2, ..., n - 1, will be marginally blocked during some initial transient period. This
demonstrates our assertion of the invalidity of A2.
Although A2 may not hold during an initial transient period, we now demonstrate
that the departure times are differentiable functions of the repair rates. The transient peri-
ods differ from one machine to another, depending on the position and the times at which
the first failures of the system are observed and how these failures affect a specific ma-
chine. Since during these periods the machines do not incur any delays due to failure, the
partial derivatives of the production times and departure times with respect to any repair
rate x are zero, in which case all the expressions of Eq. (6.7) are zero and it does not mat-
ter which one is selected. Therefore Eq. (6.7) is valid during these periods. Next we ob-
serve that after the initial transient period, the departure times from any machine depend
on at least one downtime, which is an absolutely continuous random variable. This im-
plies that the only possibility for M_i to be marginally starved after the initial period is that
the departure times from M_{i-1} and M_i be affected by the same downtimes. But this again
implies that the partial derivatives of the departure times of the two machines are equal.
Similarly M_i cannot be marginally blocked unless the partial derivatives of the departure
times from M_i and M_{i+1} are equal. This proves that the departure times D_{i,k} of unreliable
production lines with deterministic processing times are differentiable.
Equation (6.7) is equivalent to the following three IPA rules:
• When M_i completes the kth item, it gains a perturbation equal to dτ_{i,k}/dx.
• When a machine M_i is starved, it copies the perturbation accumulated at M_{i-1}.
• In a dual fashion, if M_i is blocked, it copies the perturbation accumulated at
M_{i+1}.
The first rule is known as the perturbation generation rule. We examine two distinct
cases. If x is the mean repair rate of M_j, j ≠ i, then dτ_{i,k}/dx = 0. If x is the mean repair
rate of M_i, then, according to this rule, when M_i completes a part the derivative of the
departure time from that machine gains an incremental perturbation dτ_{i,k}/dx.
For computational convenience, the above can be written as

dτ_{i,k}/dx = Σ_{q=1}^{number of failures} (ln u_q)/x^2
           = - [ Σ_{q=1}^{number of failures} (duration of the qth repair) ] / x

Therefore, if during the kth production cycle M_i is neither starved nor blocked, then upon
completion of the kth item, its derivative of the departure time becomes

∂D_{i,k}/∂x = ∂D_{i,k-1}/∂x - [ Σ_{q=1}^{number of failures} (duration of the qth repair) ] / x    (6.8)

The other two rules, known as perturbation propagation rules, imply that, if for any
reason a variation in x has caused a delay in the departures from machine M_{i-1} or M_{i+1},
then this delay is passed to M_i whenever it is starved or blocked, respectively. By the
same rule, it is possible for a perturbation generated at some intermediate machine to
propagate to the beginning and to the end of the production line by means of blocking
and starvation phenomena, respectively.
We now summarize the algorithm for obtaining partial derivatives of the throughput
with respect to each repair rate. We define the gradients S_{j,r_i} by

S_{j,r_i} ≜ ∂D_{j,k}/∂r_i

where k is the index of the part that is currently produced by machine M_j.

Algorithm 6.3. Infinitesimal Perturbation Analysis ofProduction Lines


(a) Initialize the line and set S_{j,r_i} = 0, for i = 1, 2, ..., n and j = 1, 2, ..., n.
(b) Simulate the system using any discrete event model. When an event occurs, exe-
cute one of the following:
(b1) When M_j is repaired, replace S_{j,r_j} by

S_{j,r_j} - (duration of the repair)/r_j

(b2) If B_j becomes full, set S_{j,r_i} = S_{j+1,r_i}, i = 1, 2, ..., n.
(b3) If B_{j-1} becomes empty, set S_{j,r_i} = S_{j-1,r_i}, i = 1, 2, ..., n.
(c) When L parts are produced, terminate the simulation and calculate the gradients
of Eq. (6.6) from

S_{r_i}(L) ≜ ∂TH(L)/∂r_i = ∂(L/D_{n,L})/∂r_i = -(L/D_{n,L}^2) ∂D_{n,L}/∂r_i
          = -(TH(L)/D_{n,L}) S_{n,r_i}

The above algorithm applies the perturbation generation rule at step (b1) and the pertur-
bation propagation rules at steps (b2) and (b3).
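The sketch below applies the three IPA rules to a two-machine line in which only M_2 fails (illustrative parameters and code, not the book's implementation). Reusing the same random numbers, the IPA gradient is compared with a finite difference:

```python
import math, random

# Two machines, one buffer; M1 is reliable, M2 has exponential repairs with
# rate x. S[i][k] carries dD[i][k]/dx via the generation/propagation rules.
def simulate(x, downs, rm=(10.0, 8.0), cap=2):
    L = len(downs) - 1
    D = [[0.0] * (L + 1) for _ in range(3)]   # D[0][k] = 0: source never starves M1
    S = [[0.0] * (L + 1) for _ in range(3)]
    for k in range(1, L + 1):
        for i in (1, 2):
            tau, dtau = 1.0 / rm[i - 1], 0.0
            if i == 2:
                for u in downs[k]:                 # downtimes of part k at M2
                    tau += -math.log(u) / x        # exponential repair time
                    dtau += math.log(u) / x ** 2   # generation: -(repair)/x
            if D[i][k - 1] >= D[i - 1][k]:         # not starved
                start, dstart = D[i][k - 1], S[i][k - 1]
            else:                                  # starved: copy upstream (b3)
                start, dstart = D[i - 1][k], S[i - 1][k]
            D[i][k], S[i][k] = start + tau, dstart + dtau
            b = k - cap - 1
            if i == 1 and b >= 1 and D[2][b] > D[1][k]:
                D[1][k], S[1][k] = D[2][b], S[2][b]   # blocked: copy downstream (b2)
    return D[2][L], S[2][L]

random.seed(1)
L, x = 500, 1.0
downs = [[]] + [([random.random()] if random.random() < 0.2 else []) for _ in range(L)]
DnL, SnL = simulate(x, downs)
TH = L / DnL
grad_ipa = -TH / DnL * SnL                     # step (c) of Algorithm 6.3
h = 1e-5
grad_fd = (L / simulate(x + h, downs)[0] - TH) / h
print(grad_ipa, grad_fd)                       # the two estimates nearly coincide
```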
From a theoretical point of view, two fundamental questions arise in the study of
IPA, namely, unbiasedness and strong consistency of the gradient estimators S_{r_i}. As we
have discussed in Appendix 1.A1.6, S_{r_i} is unbiased if the following equality holds:

E[S_{r_i}(L)] ≜ E[∂TH(L)/∂r_i] = ∂E[TH(L)]/∂r_i

for every i = 1, 2, ..., n and L = 1, 2, ..., that is, the operators ∂/∂r_i and E are interchange-
able. Strong consistency refers to the limiting behavior of S_{r_i} as L goes to infinity, that is

lim_{L→∞} S_{r_i}(L) = ∂/∂r_i { lim_{L→∞} E[TH(L)] }   almost everywhere

for every i = 1, 2, ..., n. These issues have been first addressed by Cao (1985) and later
by several authors (see e.g. Glasserman, 1991; Hu, 1992; Cheng, 1994; and the references
therein) who derived explicit conditions under which IPA is applicable to production sys-
tems.
Finally we remark that Balduzzi and Menga (1998) have developed algorithms for
perturbation analysis and optimization of complex, continuous flow production networks
by extending the continuous flow model of Chapter 5.

6.3.4. Numerical Results

As an application, we consider a discrete part production line with four machines


having deterministic processing times and exponential uptimes and downtimes. The pro-
duction rates are RM_1 = 10, RM_2 = 8, RM_3 = 12, RM_4 = 15 and all failure rates are equal to
0.1. We want to maximize the system throughput subject to a = 4 and b = 15. The total
production is set to 50,000 items and the convergence parameter ε in Algorithm 6.2 as-
sumes the value 0.001.
The results are summarized in Table 6.3*. Observe that the total inventory space is
allocated to the buffers surrounding the slowest machine M_2. This is done in order to
eliminate blockage or starvation phenomena. In addition, the repair rate of M_2 should be
the greatest of all. This is possible in practice by assigning the repairman of M_4 to the
slowest machine on a part-time basis.

Table 6.3. Five iterations of the gradient algorithm.

  k   BC_1   BC_2   BC_3   r_1    r_2    r_3    r_4    TH
  0    5      5      5     1.00   1.00   1.00   1.00   6.690
  1    6      6      3     1.01   1.05   1.00   0.94   6.751
  2    6      8      1     1.01   1.08   1.00   0.91   6.790
  3    6      9      0     1.02   1.10   0.99   0.89   6.806
  4    6      9      0     1.02   1.13   0.98   0.87   6.811
  5    6      9      0     1.03   1.15   0.97   0.85   6.812

These results agree with intuition since they suggest that machine M_2, which has the
smallest nominal production rate and, therefore, the smallest efficiency in isolation at step
k = 0, should be favored. For longer lines, however, this rule may not always lead to the

best design. Indeed, Hillier and Boling (1966) have discovered that the optimal allocation
of machine speed is unbalanced, with the middle machines favored over the extreme
ones, a property known as the bowl phenomenon.

© 1991 Academic Press. Reprinted, with permission, from Control and Dynamic Systems, 47:1.

6.4. DESIGNING WITH CONCAVE COSTS

In this section we take a second look at the problem of the optimal repair and buffer
allocation of production lines. Here the goal is to minimize the operating cost of the sys-
tem, expressed in terms of the profit from the throughput and the cost of repair effort al-
located to machines, when the total budget for storage (pallets, conveyors, etc.) is limited.
Quantification of these entities necessitates the use of mathematical functions that are
often nonconvex.
Problems involving nonconvex constraints and/or objectives may have several local
minima. Standard optimization algorithms, such as the one presented in Section 6.3.2
and stochastic approximation methods (e.g., Liu and Sanders, 1988; Tandiono and Gem-
mill, 1994), can, at best, guarantee the local optimality of obtained solutions for these
problems. In some cases these algorithms produce solutions that are not even locally op-
timal (such examples are presented by Bagajewicz and Manousiouthakis, 1991). So-
called global optimization algorithms must be employed if one is to guarantee that ob-
tained solutions are globally optimal.
Here we consider a particular class of nonconvex optimization problems, in which
the objective and constraint functions can be expressed as sums of one convex function in
several variables and a number of concave functions in one variable. These problems can
be transformed into a convex problem with a single reverse convex constraint. The solu-
tion procedure we propose is based on a branch and bound method (Falk and Soland,
1969) to generate a sequence of convex subproblems and a sequence of solutions that
converge to the global minimum. For each subproblem a continuous flow simulator is
involved which uses infinitesimal perturbation analysis to obtain the gradient informa-
tion.
In the next sections we formulate the optimization problem and present the solution
methodology (Phillis et al., 1997). Finally, we discuss some experimental results obtained
by applying the method to an optimization problem.

6.4.1. Formulation of Optimization Problems

The general design problem consists of finding a vector of parameters that mini-
mize the expected cost of a production line, subject to limited budget. We shall consider
here a special class of problems that can be stated as

(P0)   min F[x, TH(x)] = min { F_0[x, TH(x)] + Σ_{i=1}^m F_i(x_i) }

subject to   G_j[x, TH(x)] = G_{j0}[x, TH(x)] + Σ_{i=1}^m G_{ji}(x_i) ≤ 0,   j = 1, 2, ..., k

where x = [x_1 x_2 ... x_m] is the vector of the design parameters; TH(x) is the expected
throughput rate and F is the expected operation cost per time unit; G_j, j = 1, 2, ..., k, are
constraint functions which express economic, operational, or physical limitations. The
total budget for storage space, the requirement that TH(x) be larger than a given demand
value, and the nonnegativity of the system's parameters, are typical constraints. The func-
tions F_0, F_i, G_{j0}, and G_{ji} are discussed below.
The steepest descent algorithm (the minimizing version of the steepest ascent algo-
rithm we presented in Section 6.3.2) for solving (P0) requires that F and G_j be continuous
and convex in x. We now relax this assumption, requiring F_0 and G_{j0} to be convex in x
and F_i, G_{ji} concave in x_i. Employing the transformations y_i = F_i(x_i) and z_{ji} = G_{ji}(x_i) for
i = 1, 2, ..., m and j = 1, 2, ..., k, we obtain the equivalent problem

(P1)   min F[x, TH(x)] = min { F_0[x, TH(x)] + Σ_{i=1}^m y_i }

subject to   G_{j0}[x, TH(x)] + Σ_{i=1}^m z_{ji} ≤ 0,   j = 1, 2, ..., k

             y_i - F_i(x_i) ≤ 0,   i = 1, 2, ..., m

             z_{ji} - G_{ji}(x_i) ≤ 0,   i = 1, 2, ..., m,   j = 1, 2, ..., k

             Σ_{i=1}^m [ -y_i + F_i(x_i) + Σ_{j=1}^k ( -z_{ji} + G_{ji}(x_i) ) ] ≤ 0

This is a convex optimization problem with a single reverse convex constraint (the last
one), which is separable since F_i and G_{ji} are functions in one variable.

6.4.2. Solution Methodology

For the above problem we assume that the throughput and its gradients can be esti-
mated from finite-length simulation runs. The effects of estimation errors will be dis-
cussed in the next section. The global solution of (P1) will be pursued through a branch
and bound algorithm that can identify a feasible point which lies arbitrarily close to the
global optimum. This algorithm assumes the existence of a rectangular region Π where x
lies: Π = {x ∈ R^m : l_i ≤ x_i ≤ u_i, i = 1, 2, ..., m}.
A simple bound estimation method employs existing bounds on some of the parame-
ters of the problem to obtain bounds on other problem parameters. For example, positiv-
ity of optimization variables such as buffer capacities and repair rates can be employed to
obtain lower bounds on these variables and upper bounds on other variables by exploiting
the constraints.
Let Un[F_i(x_i)] denote the linear underestimator of F_i(x_i) over Π, defined as:
180 HYBRID SIMULATION MODELS OF PRODUCTION NETWORKS

Un[F_i(x_i)] = F_i(l_i) + [F_i(u_i) − F_i(l_i)] / (u_i − l_i) · (x_i − l_i)
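For a concave cost such as the BC^0.8 used later in this section, the underestimator is simply the chord through the interval endpoints. The following is a small numerical sketch (illustrative code, not from the book) confirming the property stated below: splitting the interval tightens the underestimator.

```python
def secant(f, l, u):
    """Chord of f through (l, f(l)) and (u, f(u)); for a concave f this is
    exactly the linear underestimator Un[f] over [l, u]."""
    slope = (f(u) - f(l)) / (u - l)
    return lambda x: f(l) + slope * (x - l)

def max_gap(f, l, u, n=1000):
    """Largest distance f - Un[f] over a grid of [l, u]."""
    un = secant(f, l, u)
    pts = [l + (u - l) * k / n for k in range(n + 1)]
    return max(f(x) - un(x) for x in pts)

cost = lambda x: x ** 0.8            # the concave buffer cost of Section 6.4.3
whole = max_gap(cost, 0.0, 76.11)    # gap over the initial interval
half = max(max_gap(cost, 0.0, 38.06), max_gap(cost, 38.06, 76.11))
assert half < whole                  # shrinking the rectangle tightens Un
```

The same secant construction applies coordinate by coordinate in the m-dimensional rectangle Π.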

The linear underestimator Un[G_{ji}(x_i)] of G_{ji}(x_i) over Π can be obtained in an analo-
gous manner. Introduction of these underestimators in place of the reverse convex func-
tions in (P1) leads to the following convex optimization problem

(P2)  min F_0[x, TH(x)] + Σ_{i=1}^{m} y_i

subject to  G_{j0}[x, TH(x)] + Σ_{i=1}^{m} z_{ji} ≤ 0,   j = 1, 2, ..., k

y_i − F_i(x_i) ≤ 0,   i = 1, 2, ..., m

z_{ji} − G_{ji}(x_i) ≤ 0,   i = 1, 2, ..., m,  j = 1, 2, ..., k

Σ_{i=1}^{m} [ −y_i + Un[F_i(x_i)] + Σ_{j=1}^{k} ( −z_{ji} + Un[G_{ji}(x_i)] ) ] ≤ 0
Since the feasible region of (P2) contains the feasible region of (P1), the value of (P2) is
smaller than the value of (P1). Furthermore, the value of (P2) over the intersection of its
feasible region with any rectangle is always smaller than the value of (P1) over the inter-
section of its feasible region with the same rectangle. The distance between the values of the
two problems depends on the distance between the underestimator and the corresponding
concave function over the particular rectangle. The tighter the underestimator, the closer
the value of (P2) is to the value of (P1). In general, tighter underestimation is achieved
when the rectangle is shrinking. The resulting branch and bound algorithm is outlined
below.

Algorithm 6.4 Branch and bound method


(a) Initialization: A convergence parameter ε > 0 is defined. The initial rectangle Π
is identified by setting upper and lower bounds for the variables x_i, i = 1, 2, ...,
m. Then the underestimators of the concave constraint over Π are constructed.
(b) Iteration 1: The convex underestimating problem over Π, similar to (P2), is for-
mulated and solved. Then the information about the rectangle solution is re-
corded and stored as the first element in a list. Rectangle information includes
the upper and lower bounds of the variables x_i, i = 1, 2, ..., m.
(c) Iteration k - Bounding: The rectangle at the top of the list is selected and re-
moved from the list. The underestimating problem value for this rectangle is the
new lower bound on the global optimum of problem (P1). The feasibility of the
corresponding underestimating solution for the original problem is then checked.
This amounts to a simple evaluation of the left side of the reverse convex con-
straint in (P1), since the other constraints are common to (P1) and (P2)
and, thus, automatically satisfied. If the value of the reverse convex constraint
does not exceed the tolerance ε, convergence is declared and the algorithm ter-
minates.
(d) Iteration k - Branching: The selected rectangle Π_k is split in two smaller rec-
tangles, Π_k1 and Π_k2, according to the so-called weak refining rule (Soland,
1971):
First, the errors Δ_i, i = 1, 2, ..., m, of the convex underestimating problem
for Π_k at the solution [x_1 x_2 ... x_m] are calculated:

Δ_i = F_i(x_i) − Un[F_i(x_i)] + Σ_{j=1}^{k} ( G_{ji}(x_i) − Un[G_{ji}(x_i)] )

Then the coordinate i* such that Δ_{i*} = max_i Δ_i is selected and the corre-
sponding interval [l_{i*}, u_{i*}] in Π_k is split into two subintervals, [l_{i*}, x_{i*}] and
[x_{i*}, u_{i*}].
This interval division gives rise to the rectangles Π_k1 and Π_k2.
(e) Iteration k - Ranking: The convex underestimating problems that correspond to
Π_k1 and Π_k2 are formulated and solved. Then the corresponding solution and
rectangle information are entered into the previously mentioned list. Finally, the
list of rectangles is ordered in increasing order of optimum value and the algo-
rithm proceeds with step (c).

The sequence of lower bounds identified in step (c) is non-decreasing and converges
to the global optimum of (P1). The speed of convergence of the branch and bound algo-
rithm depends on the problem relaxation introduced when (P2) is considered instead of
the original problem. The magnitude of the relaxation is directly associated with the size
of the rectangle over which underestimation is performed.
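To make the mechanics of Algorithm 6.4 concrete, the sketch below applies the same bound-prune-branch loop to a one-dimensional toy problem, min −x subject to the reverse convex constraint x^0.8 − a ≤ 0 over a box. All names and the toy data are ours, not the book's; the actual implementation in this section solved the intermediate convex subproblems with the ellipsoid algorithm, whereas here each secant-relaxed subproblem has a closed-form solution.

```python
def solve_reverse_convex(a=20.0, lo=0.0, hi=100.0, tol=1e-4, max_iter=200):
    """Branch and bound for:  min -x  s.t.  x**0.8 - a <= 0,  lo <= x <= hi.

    x**0.8 is concave, so the constraint is reverse convex.  Each node
    replaces it by its secant (linear) underestimator over the node's
    interval, exactly as (P2) replaces the last constraint of (P1)."""
    g = lambda x: x ** 0.8
    nodes = [(-hi, lo, hi)]            # (lower bound, l, u); start with the box
    for _ in range(max_iter):
        if not nodes:
            return None
        nodes.sort()                   # bounding: pick the smallest lower bound
        _, l, u = nodes.pop(0)
        if g(l) > a + tol:             # secant >= g(l) > a on [l, u]: prune
            continue
        slope = (g(u) - g(l)) / (u - l)
        # relaxed node problem: min -x  s.t.  g(l) + slope*(x - l) <= a
        x = u if slope <= 0 else min(u, l + (a - g(l)) / slope)
        x = max(x, l)
        if g(x) - a <= tol:            # feasible for the original constraint:
            return x                   # convergence declared, as in step (c)
        nodes.append((-x, l, x))       # branching at the relaxed solution
        if x < u:
            nodes.append((-u, x, u))
    return None
```

For a = 20 the loop converges to x = 20^(1/0.8) = 20^1.25 ≈ 42.295, with each right-hand child pruned immediately because its secant already violates the constraint.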

6.4.3. Numerical Results

Consider a three-stage production line whose parameters are given in Table 6.4.
Uptimes and downtimes of machines are exponentially distributed. Failures are operation
dependent. Machine M_1 has Erlang processing times with 6 stages, M_2 exponential, and
M_3 deterministic.
We wish to design the system given that

budget for buffer storage = 32
fabrication cost of a buffer = BC^0.8, where BC is the buffer capacity
profit from one unit of product = 1
cost of employing additional technicians to increase repair speed by one per time unit = 0.1.

Table 6.4. Machine parameters.

Machine   Erlang stages       Mean production rate   Mean failure rate
M_1       6                   20                     0.4
M_2       1 (exponential)     30                     0.6
M_3       ∞ (deterministic)   10                     0.2

In this problem, the cost of buffer space is a concave function of buffer capacities.
This situation is often encountered in practice. For example, the cost of pressure vessels,
columns, and reactors for the chemical industry is proportional to H^0.82 (Guthrie, 1969),
where H is the height. Let r_i denote the mean repair rate of M_i. The problem is stated as

(P2)  min F(x) = −TH(x) + 0.1 (r_1 + r_2 + r_3)

subject to  −r_i ≤ 0,   i = 1, 2, 3

−BC_i ≤ 0,   i = 1, 2

BC_1^0.8 + BC_2^0.8 − 32 ≤ 0

where x = [r_1 r_2 r_3 BC_1 BC_2]. Using the last constraint and the fact that BC_1 and BC_2 are
nonnegative, the lower and upper bounds on both variables are identified as 0 and 76.11,
respectively.
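The bound 76.11 is just the storage budget raised to the power 1/0.8, i.e., the largest capacity one buffer can take when it consumes the whole budget. A two-line check (our illustration) also reproduces the 181.02 used in the four-buffer problem later in this section:

```python
# Upper bound on a single buffer capacity when the whole storage budget
# is spent on it: BC**0.8 <= budget  =>  BC <= budget**(1/0.8).
budget = 32.0
print(round(budget ** (1.0 / 0.8), 2))   # → 76.11

# Same reasoning for the four-buffer allocation with budget 64:
print(round(64.0 ** 1.25, 2))            # → 181.02
```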
The particular implementation of the branch and bound algorithm employs the ellip-
soid algorithm (Ecker and Kupferschmid, 1983) for the solution of the intermediate con-
vex optimization problems. The algorithm uses gradient information obtained from a hy-
brid simulation algorithm, which is based on the fluid flow approximation of random
processing times, as discussed in Section 4.4.3. Throughput gradients with respect to re-
pair rates are obtained using IPA. Table 6.5 shows the parameter estimates obtained at
intermediate iterations of the branch and bound algorithm, which converged after 18 it-
erations.

Table 6.5. Iterations of the branch and bound algorithm.

Iteration   1      4      7      10     13     16     17     18 (final)   ∂TH/∂x_i
BC_1        10.6   10.6   3.3    4.9    6.3    4.5    4.6    4.5          0.0025
BC_2        65.4   57.6   68.6   65.6   63.4   66.4   66.2   66.4         0.0016
r_1         1.05   1.05   1.10   1.12   1.12   1.11   1.12   1.11         0.1005
r_2         1.12   1.21   1.18   1.19   1.20   1.19   1.20   1.19         0.1090
r_3         4.23   4.24   4.24   4.24   4.22   4.23   4.24   4.24         0.0996

© 1997 Taylor & Francis. Reprinted, with permission, from Int. J. Prod. Res. 35:753.

The final throughput rate is 9.534. Since the final point is interior to the region of the
first three constraints {x : −r_i ≤ 0, i = 1, 2, 3} and satisfies BC_1^0.8 + BC_2^0.8 = 32, the follow-
ing conditions must hold:

∂F/∂r_i = 0  ⇒  ∂TH/∂r_i = 0.1,   i = 1, 2, 3        (6.9)

∂L/∂BC_1 = −∂TH/∂BC_1 + 0.8 λ BC_1^{−0.2} = 0        (6.10)

∂L/∂BC_2 = −∂TH/∂BC_2 + 0.8 λ BC_2^{−0.2} = 0        (6.11)

where F(x) = −TH(x) + 0.1(r_1 + r_2 + r_3), L is the Lagrangian of the problem, and λ the
multiplier of the active buffer-cost constraint. Evaluation of the partial derivatives using the
values of Table 6.5 yields

∂TH/∂r_1 = 0.1005    ∂TH/∂r_2 = 0.1090    ∂TH/∂r_3 = 0.0996

∂L/∂BC_1 = 0.0002    ∂L/∂BC_2 = 0.0001

Since the conditions (6.9)-(6.11) are satisfied with good accuracy, the solution is quite
close to an extremal point.
The computational savings of IPA over a finite-difference scheme for gradient esti-
mation are proportional to the number of decision variables. The use of IPA to calculate
gradients with respect to repair rates saves half of the simulation runs. Also, in a number of experi-
ments reported in Section 4.4.3, the continuous flow simulator appears 10 times faster
than a conventional simulator. It is therefore clear that the combination of IPA and discrete
event simulation reduces the computational requirements by a factor of 20 or more.
Two important issues with regard to the proposed algorithm are the ef-
fect of errors in the simulation estimates and the possible nonconvexity of −TH. In our
model, simulation terminates when a specified number of items are produced. As this
number increases, the estimation errors become smaller but the execution time increases.
Since it is not possible to eliminate these errors, an investigation of their importance has
to be carried out.
We examine a system with five identical machines. Processing times are Erlang-4
random variables with mean 0.1, uptimes and downtimes are exponential with mean rates
p_i = 0.001 and r_i = 2, respectively, and the buffer capacities are pairwise equal, that is,
BC_1 = BC_4 and BC_2 = BC_3. This system exhibits the reversibility property, namely, re-
versing the flow of workpieces and replacing the initial buffer levels BL_i(0) by BC_i −
BL_i(0) yields an identical system. Hence in steady state, the throughput and its partial
derivatives are symmetric in (BC_1, BC_4) and (BC_2, BC_3).
Clearly, in steady state we should have

∂TH/∂BC_1 = ∂TH/∂BC_4   and   ∂TH/∂BC_2 = ∂TH/∂BC_3
We shall use this property to investigate the accuracy of the simulation estimates for
various production volumes. We performed a number of test runs for this system with
BC_i = 32. From Table 6.6 we see that the throughput estimate TH does not fluctuate,
whereas its derivatives converge very slowly. The lack of symmetry in the gradient estimates
results from using distinct sequences of random numbers to generate uptimes and down-
times for every machine. Since these sequences are finite, it is obvious that symmetric
machines cannot be treated evenly by the simulator.
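One standard remedy for this loss of symmetry is the method of common random numbers, which appears earlier in the book; a minimal sketch of the distinct-streams issue with Python's random module (our illustration, not the book's FORTRAN generator):

```python
import random

def uptimes(seed, n=5, failure_rate=0.001):
    """Exponential uptime samples for one machine, from its own stream."""
    rng = random.Random(seed)
    return [rng.expovariate(failure_rate) for _ in range(n)]

# Distinct streams per machine (as in the text): mirror-image machines
# see different samples, so their gradient estimates lose symmetry.
assert uptimes(1) != uptimes(2)

# Common random numbers: driving paired machines with the same stream
# makes their sample paths, and hence their estimates, agree exactly.
assert uptimes(7) == uptimes(7)
```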

Table 6.6. Test runs for various production volumes.

Total production (×10^6)   20      30      40      60      100
TH                         9.9579  9.9578  9.9579  9.9579  9.9579
∂TH/∂BC_1 (×10^−6)         80      76      76      78      80
∂TH/∂BC_2 (×10^−6)         111     114     114     115     115
∂TH/∂BC_3 (×10^−6)         115     113     110     110     112
∂TH/∂BC_4 (×10^−6)         81      79      78      78      80
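Using the 20-million-item column of Table 6.6, the asymmetry between gradient estimates that reversibility says should coincide can be quantified with a small illustrative computation:

```python
# Gradient estimates (in units of 1e-6) from the first column of Table 6.6
grad = {"BC1": 80, "BC2": 111, "BC3": 115, "BC4": 81}

def asymmetry(a, b):
    """Relative difference between two estimates that should be equal
    in steady state by the reversibility argument."""
    return abs(a - b) / ((a + b) / 2)

outer = asymmetry(grad["BC1"], grad["BC4"])   # BC_1 vs BC_4
inner = asymmetry(grad["BC2"], grad["BC3"])   # BC_2 vs BC_3
print(f"{outer:.1%} {inner:.1%}")             # → 1.2% 3.5%
```

The roughly 3% figure for the inner pair matches the error magnitude quoted after Table 6.7.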

As discussed in Section 6.3.2, so far it has not been possible to prove whether −TH is
convex in both parameters r_i and BC_i. To assess the effects of estimation errors and non-
convexities during optimization, we set the production volume equal to 20,000,000 items.
We consider the problem of buffer space allocation of the previous line subject to

Σ_{i=1}^{4} BC_i^0.8 = 64

To employ the branch and bound algorithm, we replace the above equality by the following two
inequalities:

−Σ_{i=1}^{4} BC_i^0.8 + 64 ≤ 0

Σ_{i=1}^{4} BC_i^0.8 − 64 ≤ 0

where the last inequality is the reverse convex constraint. The lower and upper bounds on
BC_i, i = 1, 2, 3, 4, are 0 and 181.02, respectively. Table 6.7 gives the resulting design
after 42 iterations of the branch and bound algorithm.

Table 6.7. Best design after the 42nd iteration.

i    BC_i     ∂TH/∂BC_i (×10^−6)
1    22.847   123
2    43.788   110
3    42.777   112
4    20.253   135

Tables 6.6 and 6.7 suggest that errors in gradient estimates of the order of 3% may
result in deviations of the order of 2-5% from the optimal values satisfying BC_1 = BC_4
and BC_2 = BC_3. As a test of convergence of the algorithm, we performed a gradient
search starting from the final point. We applied the following steepest ascent procedure:

BC_i(δ) = BC_i + δ [ ∂TH/∂BC_i − λ dBC_i^0.8/dBC_i ]

where λ > 0 is a Lagrange multiplier and δ is a step size, which is considered a search
parameter. From Table 6.8 we deduce that there is no improvement in the system's
throughput for δ ≠ 0.

Table 6.8. A single-parameter gradient search.

δ          5000       1000       100        0          −100
BC_1(δ)    22.884     22.855     22.848     22.847     22.847
BC_2(δ)    43.816     43.792     43.787     43.788     43.786
BC_3(δ)    42.720     42.765     42.775     42.777     42.778
BC_4(δ)    20.242     20.250     20.253     20.253     20.253
TH(δ)      9.9583133  9.9583135  9.9583136  9.9583136  9.9583117
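The search direction above can be sketched numerically from the Table 6.7 values. One detail is an assumption on our part: the text treats λ simply as a given multiplier, and here we fix it by projecting the throughput gradient onto the constraint surface Σ BC_i^0.8 = 64 (a least-squares choice), which makes the step budget-preserving to first order.

```python
bc   = [22.847, 43.788, 42.777, 20.253]      # design from Table 6.7
grad = [123e-6, 110e-6, 112e-6, 135e-6]      # dTH/dBC_i from Table 6.7

h = [0.8 * b ** -0.2 for b in bc]            # d(BC_i**0.8)/dBC_i
# Least-squares multiplier: projects grad onto the constraint surface
lam = sum(g * hi for g, hi in zip(grad, h)) / sum(hi * hi for hi in h)

def step(delta):
    """BC_i(delta) = BC_i + delta*(dTH/dBC_i - lam * d(BC_i**0.8)/dBC_i)."""
    return [b + delta * (g - lam * hi) for b, g, hi in zip(bc, grad, h)]

# The projected direction leaves the buffer-cost budget unchanged to
# first order, so small steps stay (approximately) on the constraint.
drift = sum(hi * (g - lam * hi) for g, hi in zip(grad, h))
assert abs(drift) < 1e-9
```

With this λ the direction components are tiny (order 10^−6), consistent with the flat TH(δ) row of Table 6.8.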

The required number of branch and bound iterations determines the solvability of
nonconvex optimization problems, and the results here are encouraging. The main performance
bottleneck in this implementation can be traced to the use of a primitive version of the
ellipsoid algorithm. Due to this fact, significant time is required for the solution of each
convex optimization problem. Use of a more sophisticated optimization algorithm should result in
a significant reduction in computation time.

6.5. SUMMARY

In this chapter, we have developed hybrid models for the design and control of pro-
duction networks. The problems examined are the allocation of buffer space and repair
effort between machine centers and the evaluation of alternative maintenance and lot
scheduling policies for several different models of production networks.
The proposed models are more suitable for large production systems for which ana-
lytical methods and traditional simulation are computationally inefficient. Managers in
practice often want immediate answers to questions related to changes of the production
floor, alternative scheduling disciplines, etc., in order to make rational decisions. Such
answers in complicated systems are obtained hours or days later using analytical models
or traditional simulators. The use of hybrid simulation can reduce this time to just a few
minutes, thus allowing rapid decisions to be made on the production floor.
7
CLOSURE

Manufacturing has become a very active field of research. Problems extend over a
large area of disciplines. This book has developed in detail a number of techniques to
solve a specific problem: that of analysis of complex manufacturing networks under
fairly broad and realistic assumptions. Whenever possible, these techniques have been
used to design and control networks.
The problem of manufacturing network analysis is hard mainly because of its dimen-
sionality. The approach of this book is a novel one, based on the idea of separating fast
and slow dynamics, thus disregarding a large number of states. The states essential to the
analysis are just a few and they incur negligible computational times.
Wherever we made approximations, such as fluid traffic of parts or piecewise deter-
ministic representations of random processing times, these proved extremely good for
realistic networks. The speeds of all our models are very high. Where a traditional simu-
lator spends hours, the hybrid models run in a few minutes. The combination of accuracy
and speed is unmatched by any model in most practical situations.
The approach of the book is both computational and analytic. We do believe that the
problems of this work have neither computational nor analytical solutions that can in
general be classified as efficient and accurate. The algorithms of the book fill this gap.
Indeed these algorithms respond to needs of academics and practitioners of the field.
They can solve and actually have solved real problems on the factory floor.
We have provided the code for a specific type of network but we believe that an ex-
perienced programmer can provide any code related to our algorithms. The payoff is re-
warding: extremely fast and quite accurate analysis results that are hard to come by using
classical or decomposition methods. Such results are very useful to managers when they
want to make quick decisions concerning allocation of resources, adoption of scheduling
policies, acceptance of production orders and so on. Even if the decisions are made on a
trial-and-error basis, our models are useful since they provide accurate answers in just
minutes.
Of course, the book focuses on a narrow class of manufacturing problems. The world
is moving on and we are moving to new fields of research. What we have recorded here
is the research effort of one group in the early 80's at Boston University, where one of us
(Phillis) was a faculty member, and a decade of research of both of us at the Technical
University of Crete covering the mid 80's and 90's.
The field is neither closed nor exhausted. There are a number of open problems one
might consider. The most ubiquitous are those of control and scheduling. We have only
made a few hints towards solving such problems. On the other hand, we hope that,
through this book, we have provided a new vista for the interested researcher and practi-
tioner to attack these and other problems. If this opinion is shared by you, the reader, then
we feel that we have fulfilled our modest goals.
REFERENCES

Altiok, T., and Stidham, S., 1983, The allocation of interstage buffer capacities in production lines, IIE Trans.
15:292.
D'Angelo, H., Caramanis, M., Finger, S., Mavretic, A., Phillis, Y. A., and Ramsden, E., 1988, Event-driven
model of unreliable production lines with storage, Int. J. Prod. Res. 26:1173.
Bagajewicz, M. J., and Manousiouthakis, V., 1991, On the generalized Benders decomposition, Comput. Chem.
Eng. 15:691.
Balduzzi, F., and Menga, G., 1998, A state variable model for the fluid approximation of flexible manufacturing
systems, Proc. IEEE Internat. Conf. Robotic. Autom., Leuven, Belgium, pp. 1172-1178.
Banks, J., and Carson, J. S., 1984, Discrete-Event System Simulation, Prentice-Hall, Englewood Cliffs.
Baskett, F., Chandy, K. M., Muntz, R.R., and Palacios, F., 1975, Open, closed and mixed networks of queues
with different classes of customers, J. Assoc. Comput. Mach. 22:248.
Bazaraa, M.S., Sherali, H. D., and Shetty, C. M., 1993, Nonlinear Programming, Wiley, New York.
Buzacott, J. A., and Shanthikumar, J. G., 1993, Stochastic Models of Manufacturing Systems, Prentice-Hall,
Englewood Cliffs.
Cao, X. R., 1985, Convergence of parameter sensitivity estimates in a stochastic experiment, IEEE T. Automat.
Contr. 30:845.
Cao, X. R., and Ho, Y. C., 1987, Sensitivity analysis and optimization of throughput in a production line with
blocking, IEEE T. Automat. Contr. 32:959.
Capinski, M., and Kopp, E., 1999, Measure, Integral and Probability, Springer, London.
Chase, C., and Ramadge, P. J., 1992, On real-time scheduling policies for flexible manufacturing systems, IEEE
T. Automat. Contr. 37:491.
Cheng, D. W., 1994, On the design of a tandem queue with blocking: modeling, analysis, and gradient estima-
tion, Nav. Res. Log. 41:759.
Cox, D. R., 1962, Renewal Theory, Chapman-Hall, London.
Dallery, Y., Liu, Z., and Towsley, D., 1994, Equivalence, reversibility, symmetry and concavity properties of
fork-join networks with blocking, J. Assoc. Comput. Mach. 41:903.
Dallery, Y., and Liberopoulos, G., 2000, Extended kanban control system: combining kanban and base stock,
IIE Trans. 32:369.
Ecker, J. G., and Kupferschmid, M., 1983, An ellipsoid algorithm for nonlinear programming, Math. Program.
27:83.
Falk, J. E., and Soland, R. M., 1969, An algorithm for separable nonconvex programming problems, Manage.
Sci. 15:550.
Federgruen, A., and Katalan, Z., 1996, The stochastic economic lot scheduling problem: cyclical base-stock
policies with idle times, Manage. Sci. 42:783.
Fishman, G. S., 1978, Principles of Discrete Event Simulation, Wiley, New York.
Glasserman, P., 1991, Structural conditions for perturbation analysis of queuing systems, J. Assoc. Comput.
Mach. 38:1005.
Gordon, W., and Newell, G., 1967, Closed queueing systems with exponential machines, Oper. Res. 15:254.
Guthrie, K. M., 1969, Capital cost estimating, Chem. Eng. 76:114.
Hadley, G., and Whitin, T. M., 1963, Analysis of Inventory Systems, Prentice-Hall, Englewood Cliffs.


Heyman, D. P., and Sobel, M. J., 1982, Stochastic Models in Operations Research, Vol. I, McGraw-Hill, New
York.
Hildebrand, F. B., 1974, Introduction to Numerical Analysis, Dover, New York.
Hillier, F. S., and Boling, R. W., 1966, The effect of some design factors on the efficiency of production lines
with variable operation times, J. Ind. Eng. 17:651.
Ho, Y. C., and Cao, X. R., 1983, Perturbation analysis and optimization of queueing networks, J. Optimiz. The-
ory App. 40:559.
Ho, Y. C., and Cao, X. R., 1991, Perturbation Analysis of Discrete Event Dynamic Systems, Kluwer, Boston.
Ho, Y. C., Cao, X. R., and Cassandras, C. G., 1983, Infinitesimal and finite perturbation analysis for queueing
networks, Automatica, 19:439.
Ho, Y. C., Eyler, M. A., and Chien, T. T., 1979, A gradient technique for general buffer storage design in a
production line, Int. J. Prod. Res. 17:557.
Ho, Y. C., Eyler, M. A., and Chien, T. T., 1983, A new approach to determine parameter sensitivities of trans-
fer lines, Manage. Sci. 29:700.
Hu, J.-Q., 1992, Convexity of sample path performance and strong consistency of infinitesimal perturbation
analysis estimates, IEEE T. Automat. Contr. 37:258.
Humes, Jr., C., 1994, A regulator stabilization technique: Kumar-Seidman revisited, IEEE T. Automat. Contr.
39:191.
Jackson, J., 1957, Networks of waiting lines, Oper. Res. 5:518.
Kleinrock, L., 1975, Queueing Systems, Vol. I, Wiley, New York.
Knuth, D. E., 1981, The Art of Computer Programming, Vol. 2, Addison-Wesley, Reading.
Kouikoglou, V. S., and Phillis, Y. A., 1991, An exact discrete-event model and control policies for production
lines with buffers, IEEE T. Automat. Contr. 36:515.
Kouikoglou, V. S., and Phillis, Y. A., 1994, Discrete event modeling and optimization of production lines with
random rates, IEEE T. Robotic. Autom. 10:153.
Kouikoglou, V. S., and Phillis, Y. A., 1995, An efficient discrete-event model for production networks of gen-
eral geometry, IIE Trans. 27:32.
Kouikoglou, V. S., and Phillis, Y. A., 1997, A continuous-flow model for production networks with finite buff-
ers, unreliable machines, and multiple products, Int. J. Prod. Res. 35:381.
Kumar, P. R., and Meyn, S. P., 1995, Stability of queueing networks and scheduling policies, IEEE T. Automat.
Contr. 40:251.
Kumar, P. R., and Seidman, T. I., 1990, Dynamic instabilities and stabilization methods in distributed real-time
scheduling of manufacturing systems, IEEE T. Automat. Contr. 35:289.
Kumar, S., and Kumar, P. R., 1994, Performance bounds for queueing networks and scheduling policies, IEEE
T. Automat. Contr. 39:1600.
Law, A. M., and Kelton, D. W., 1991, Simulation Modeling and Analysis, McGraw-Hill, New York.
Li, K. F., 1987, Serial production lines with unreliable machines and limited repair, Nav. Res. Log. 34:101.
Liu, C. M., and Sanders, J. L., 1988, Stochastic design optimization of asynchronous flexible assembly systems,
Annals of Oper. Res. 15:131.
Lu, S. H., and Kumar, P. R., 1991, Distributed scheduling based on due dates and buffer priorities, IEEE T.
Automat. Contr. 36:1406.
Marse, K., and Roberts, S. D., 1983, Implementing a portable FORTRAN uniform (0,1) generator, Simulation,
41:135.
Perkins, J. R., and Kumar, P. R., 1989, Stable, distributed, real-time scheduling of flexible manufactur-
ing/assembly/disassembly systems, IEEE T. Automat. Contr. 34:139.
Phillis, Y. A., and Kouikoglou, V. S., 1991, Techniques in modeling and control policies for production net-
works, Contr. Dyn. Sys., C. T. Leondes, ed., 47:1.
Phillis, Y. A., and Kouikoglou, V. S., 1996, A continuous-flow model for unreliable production networks of the
finite queue type, IEEE T. Robotic. Autom. 11:505.
Phillis, Y. A., Kouikoglou, V. S., Sourlas, D., and Manousiouthakis, V., 1997, Design of serial production sys-
tems using discrete event simulation and nonconvex programming techniques, Int. J. Prod. Res. 35:753.
Ross, S.M., 1970, Applied Probability Models with Optimization Applications, Holden-Day, San Francisco.
Ross, S. M., 1990, A Course in Simulation, Macmillan, New York.
Sharifnia, A., 1994, Stability and performance of distributed production control methods based on continuous-
flow models, IEEE T. Automat. Contr. 39:725.
Smith, D. R., 1978, Optimal repairman allocation-asymptotic results, Manage. Sci. 24:665.
Soland, R. M., 1971, An algorithm for separable nonconvex programming problems II: nonconvex constraints,
Manage. Sci. 17:759.
Tandiono, E., and Gemmill, D. D., 1994, Stochastic optimization of the cost of automatic assembly systems,
Eur. J. Oper. Res. 77:303.
APPENDIX A
STATISTICAL TABLES


Table A1. Critical points z_a for the standard normal distribution, where a = P(Z ≥ z_a)
and Z is a standard normal random variable. The row label gives the first two digits of
z_a and the column label the second decimal.

z_a   -.-0   -.-1   -.-2   -.-3   -.-4   -.-5   -.-6   -.-7   -.-8   -.-9
0.0- 0.5000 0.4960 0.4920 0.4880 0.4840 0.4801 0.4761 0.4721 0.4681 0.4641
0.1- 0.4602 0.4562 0.4522 0.4483 0.4443 0.4404 0.4364 0.4325 0.4286 0.4247
0.2- 0.4207 0.4168 0.4129 0.4090 0.4052 0.4013 0.3974 0.3936 0.3897 0.3859
0.3- 0.3821 0.3783 0.3745 0.3707 0.3669 0.3632 0.3594 0.3557 0.3520 0.3483
0.4- 0.3446 0.3409 0.3372 0.3336 0.3300 0.3264 0.3228 0.3192 0.3156 0.3121
0.5- 0.3085 0.3050 0.3015 0.2981 0.2946 0.2912 0.2877 0.2843 0.2810 0.2776
0.6- 0.2743 0.2709 0.2676 0.2643 0.2611 0.2578 0.2546 0.2514 0.2483 0.2451
0.7- 0.2420 0.2389 0.2358 0.2327 0.2296 0.2266 0.2236 0.2206 0.2177 0.2148
0.8- 0.2119 0.2090 0.2061 0.2033 0.2005 0.1977 0.1949 0.1922 0.1894 0.1867
0.9- 0.1841 0.1814 0.1788 0.1762 0.1736 0.1711 0.1685 0.1660 0.1635 0.1611
1.0- 0.1587 0.1562 0.1539 0.1515 0.1492 0.1469 0.1446 0.1423 0.1401 0.1379
1.1- 0.1357 0.1335 0.1314 0.1292 0.1271 0.1251 0.1230 0.1210 0.1190 0.1170
1.2- 0.1151 0.1131 0.1112 0.1093 0.1075 0.1056 0.1038 0.1020 0.1003 0.0985
1.3- 0.0968 0.0951 0.0934 0.0918 0.0901 0.0885 0.0869 0.0853 0.0838 0.0823
1.4- 0.0808 0.0793 0.0778 0.0764 0.0749 0.0735 0.0721 0.0708 0.0694 0.0681
1.5- 0.0668 0.0655 0.0643 0.0630 0.0618 0.0606 0.0594 0.0582 0.0571 0.0559
1.6- 0.0548 0.0537 0.0526 0.0516 0.0505 0.0495 0.0485 0.0475 0.0465 0.0455
1.7- 0.0446 0.0436 0.0427 0.0418 0.0409 0.0401 0.0392 0.0384 0.0375 0.0367
1.8- 0.0359 0.0351 0.0344 0.0336 0.0329 0.0322 0.0314 0.0307 0.0301 0.0294
1.9- 0.0287 0.0281 0.0274 0.0268 0.0262 0.0256 0.0250 0.0244 0.0239 0.0233
2.0- 0.0228 0.0222 0.0217 0.0212 0.0207 0.0202 0.0197 0.0192 0.0188 0.0183
2.1- 0.0179 0.0174 0.0170 0.0166 0.0162 0.0158 0.0154 0.0150 0.0146 0.0143
2.2- 0.0139 0.0136 0.0132 0.0129 0.0125 0.0122 0.0119 0.0116 0.0113 0.0110
2.3- 0.0107 0.0104 0.0102 0.0099 0.0096 0.0094 0.0091 0.0089 0.0087 0.0084
2.4- 0.0082 0.0080 0.0078 0.0075 0.0073 0.0071 0.0069 0.0068 0.0066 0.0064
2.5- 0.0062 0.0060 0.0059 0.0057 0.0055 0.0054 0.0052 0.0051 0.0049 0.0048
2.6- 0.0047 0.0045 0.0044 0.0043 0.0041 0.0040 0.0039 0.0038 0.0037 0.0036
2.7- 0.0035 0.0034 0.0033 0.0032 0.0031 0.0030 0.0029 0.0028 0.0027 0.0026
2.8- 0.0026 0.0025 0.0024 0.0023 0.0023 0.0022 0.0021 0.0021 0.0020 0.0019
2.9- 0.0019 0.0018 0.0018 0.0017 0.0016 0.0016 0.0015 0.0015 0.0014 0.0014
3.0- 0.0013 0.0013 0.0013 0.0012 0.0012 0.0011 0.0011 0.0011 0.0010 0.0010
3.1- 0.0010 0.0009 0.0009 0.0009 0.0008 0.0008 0.0008 0.0008 0.0007 0.0007
3.2- 0.0007 0.0007 0.0006 0.0006 0.0006 0.0006 0.0006 0.0005 0.0005 0.0005
4.0 0.000032
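The entries of Table A1 can be reproduced from the complementary error function available in the Python standard library; a quick check of two entries (our illustration, not part of the original appendix):

```python
import math

def upper_tail(z):
    """a = P(Z >= z) for a standard normal Z, using the identity
    P(Z >= z) = erfc(z / sqrt(2)) / 2."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

print(round(upper_tail(1.0), 4))    # row 1.0-, column -.-0 → 0.1587
print(round(upper_tail(4.0), 6))    # last row of the table → 3.2e-05
```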

Table A2. Critical points t_{n,a} for the t distribution, where a = P(T_n ≥ t_{n,a}) and T_n is
a random variable drawn from the t distribution with n degrees of freedom.

n    0.2500  0.1000  0.0500  0.0250  0.0100  0.0050  0.0025  0.0005


1 1.0000 3.0777 6.3137 12.706 31.821 63.656 127.32 636.58
2 0.8165 1.8856 2.9200 4.3027 6.9645 9.9250 14.089 31.600
3 0.7649 1.6377 2.3534 3.1824 4.5407 5.8408 7.4532 12.924
4 0.7407 1.5332 2.1318 2.7765 3.7469 4.6041 5.5975 8.6101
5 0.7267 1.4759 2.0150 2.5706 3.3649 4.0321 4.7733 6.8685
6 0.7176 1.4398 1.9432 2.4469 3.1427 3.7074 4.3168 5.9587
7 0.7111 1.4149 1.8946 2.3646 2.9979 3.4995 4.0294 5.4081
8 0.7064 1.3968 1.8595 2.3060 2.8965 3.3554 3.8325 5.0414
9 0.7027 1.3830 1.8331 2.2622 2.8214 3.2498 3.6896 4.7809
10 0.6998 1.3722 1.8125 2.2281 2.7638 3.1693 3.5814 4.5868
11 0.6974 1.3634 1.7959 2.2010 2.7181 3.1058 3.4966 4.4369
12 0.6955 1.3562 1.7823 2.1788 2.6810 3.0545 3.4284 4.3178
13 0.6938 1.3502 1.7709 2.1604 2.6503 3.0123 3.3725 4.2209
14 0.6924 1.3450 1.7613 2.1448 2.6245 2.9768 3.3257 4.1403
15 0.6912 1.3406 1.7531 2.1315 2.6025 2.9467 3.2860 4.0728
16 0.6901 1.3368 1.7459 2.1199 2.5835 2.9208 3.2520 4.0149
17 0.6892 1.3334 1.7396 2.1098 2.5669 2.8982 3.2224 3.9651
18 0.6884 1.3304 1.7341 2.1009 2.5524 2.8784 3.1966 3.9217
19 0.6876 1.3277 1.7291 2.0930 2.5395 2.8609 3.1737 3.8833
20 0.6870 1.3253 1.7247 2.0860 2.5280 2.8453 3.1534 3.8496
21 0.6864 1.3232 1.7207 2.0796 2.5176 2.8314 3.1352 3.8193
22 0.6858 1.3212 1.7171 2.0739 2.5083 2.8188 3.1188 3.7922
23 0.6853 1.3195 1.7139 2.0687 2.4999 2.8073 3.1040 3.7676
24 0.6848 1.3178 1.7109 2.0639 2.4922 2.7970 3.0905 3.7454
25 0.6844 1.3163 1.7081 2.0595 2.4851 2.7874 3.0782 3.7251
26 0.6840 1.3150 1.7056 2.0555 2.4786 2.7787 3.0669 3.7067
27 0.6837 1.3137 1.7033 2.0518 2.4727 2.7707 3.0565 3.6895
28 0.6834 1.3125 1.7011 2.0484 2.4671 2.7633 3.0470 3.6739
29 0.6830 1.3114 1.6991 2.0452 2.4620 2.7564 3.0380 3.6595
30 0.6828 1.3104 1.6973 2.0423 2.4573 2.7500 3.0298 3.6460
35 0.6816 1.3062 1.6896 2.0301 2.4377 2.7238 2.9961 3.5911
40 0.6807 1.3031 1.6839 2.0211 2.4233 2.7045 2.9712 3.5510
45 0.6800 1.3007 1.6794 2.0141 2.4121 2.6896 2.9521 3.5203
50 0.6794 1.2987 1.6759 2.0086 2.4033 2.6778 2.9370 3.4960
60 0.6786 1.2958 1.6706 2.0003 2.3901 2.6603 2.9146 3.4602
70 0.6780 1.2938 1.6669 1.9944 2.3808 2.6479 2.8987 3.4350
80 0.6776 1.2922 1.6641 1.9901 2.3739 2.6387 2.8870 3.4164
90 0.6772 1.2910 1.6620 1.9867 2.3685 2.6316 2.8779 3.4019
100 0.6770 1.2901 1.6602 1.9840 2.3642 2.6259 2.8707 3.3905
200 0.6757 1.2858 1.6525 1.9719 2.3451 2.6006 2.8385 3.3398
1000 0.6747 1.2824 1.6464 1.9623 2.3301 2.5807 2.8133 3.3002
INDEX

Acceptance-rejection method, 59
Acyclic network, 138
Algorithm
  acceptance-rejection method, 59
  allocation of rates to parallel machines, 148
  branch and bound method, 180
  continuous flow production lines, 101
  continuous flow, two-stage system, 80
  conventional discrete event model, 48
  conventional model of a discrete part production line, 69
  discrete time model, 45
  estimation of the number of simulations, 63
  hybrid discrete event model, 52
  hybrid model for discrete part production lines, 110
  hybrid model of a continuous flow production network, 145
  infinitesimal perturbation analysis, 176
  input rates of buffers in networks, 151
  model of a hybrid system, 53
  output rates of buffers in networks, 150
  simulation of production control policies, 165
  steepest ascent procedure, 171
Assembly operation, 139
Birth-death process, 39
Bowl phenomenon, 178
Buffers, 9
Chapman-Kolmogorov equation, 32
  continuous time, 37
  discrete time, 32
Common random numbers, 94
Concave function, 168
Confidence interval, 28
Continuous time system, 44
Conventional discrete event model, 48
Convex function, 168
Convex set, 168
Cycle time. See Mean time in the system
Decentralized control policies, 162
Decomposition
  decomposability conditions, 49
Disassembly operation, 139
Discrete event system, 45
Discrete events, 45
Discrete time system, 45
Flow line, 8
  reentrant, 162
Fluid approximation of random rates, 119
FORTRAN code, 121
Generalized semi-Markov process, 48
Hybrid discrete event model, 51
Infinitesimal perturbation analysis, 172
Inverse transform method, 56
Jackson networks, 137
Job shop, 8
Lagrange multipliers, 170
Little's formula, 40, 79, 165, 167
Macroscopic event, 51
Macroscopic state, 51
Markov chains, 31
  continuous time, 34
  discrete time, 32
  embedded chain, 37
Mean buffer level
  continuous flow model, 78
  conventional model, 71
  discrete traffic model, 90
Mean number of items in the system
  continuous flow model, 165
  queueing systems, 40
Mean time in the system, 10, 40, 162
  continuous flow model, 79
  conventional model, 72
Memoryless property
  exponential distribution, 23
  geometric distribution, 21
  Markov chains, 31
Microscopic event, 51
Microscopic state, 51
Nominal production rate, 1, 65
Non-acyclic network, 158
Nonconvex programming, 178
Nonlinear programming, 168
Percent downtime, 79
Performance measures, 10
  queueing system, 40
Perturbation generation rule, 175
Perturbation propagation rules, 175
Produce-to-order systems, 118, 162
Produce-to-stock systems, 118
Production line, 8
Queueing systems, 38
Random number generators, 55
  linear congruential generator, 55
  multiplicative congruential generator, 55
  seed of, 55
Random processing times, 119
Random variate generators, 56
Reversibility of production lines, 183
Series-parallel production system, 115
Setup time, 162
Simulation, 44
Stability
  condition for, 40, 164
  definition, 162
Starved-and-blocked state, 105
Steepest ascent algorithm, 171
Stochastic equivalence, 56
Synchronous production system, 3
  transfer line, 8
Throughput, 2, 40
  continuous flow model, 78
  conventional model, 71
Transfer line, 8
Transient times, 81
Transition probabilities, 32
Transition rate, 35
Universally stabilizing supervising mechanism, 163
Utilization
  continuous flow model, 79
  conventional model, 73
  discrete traffic model, 91
  factor, 40
Variance of buffer level
  continuous flow model, 78
  conventional model, 72

También podría gustarte