This chapter deals with multi-operation models that are different from the flow shop models discussed in the previous chapter. In a flow shop all jobs follow the same route. When the routes are fixed, but not necessarily the same for each job, the model is called a job shop. If a job in a job shop has to visit certain machines more than once, the job is said to recirculate. Recirculation is a common phenomenon in the real world. For example, in semiconductor manufacturing jobs have to recirculate several times before they complete all their processing.
The first section focuses on representations and formulations of the classical job shop problem with the makespan objective and without recirculation. It also describes a branch-and-bound procedure that is designed to find the optimal solution. The second section describes a popular heuristic for job shops with the makespan objective and without recirculation. This heuristic is typically referred to as the shifting bottleneck heuristic. The third section focuses on a more elaborate version of the shifting bottleneck heuristic that is designed specifically for the total weighted tardiness objective. The fourth section describes an application of a constraint programming procedure for the minimization of the makespan. The last section discusses possible extensions.
7.1 Disjunctive Programming and Branch-and-Bound
The conjunctive (solid) arcs A represent the routes of the jobs. If the arc (i, j) → (k, j) is part of A, then job j has to be processed on machine i before it is processed on machine k, i.e., operation (i, j) precedes operation (k, j). Two operations that belong to two different jobs and that have to be processed on the same machine are connected to one another by two so-called disjunctive (broken) arcs that go in opposite directions. The disjunctive arcs B form m cliques of double arcs, one clique for each machine. (A clique is a term in graph theory that refers to a graph in which any two nodes are connected to one another; in this case, each connection within a clique consists of a pair of disjunctive arcs.) All operations (nodes) in the same clique have to be done on the same machine. All arcs emanating from a node, conjunctive as well as disjunctive, have as length the processing time of the operation that is represented by that node. In addition there is a source U and a sink V, which are dummy nodes. The source node U has n conjunctive arcs emanating to the first operations of the n jobs, and the sink node V has n conjunctive arcs coming in from the last operations of the n jobs. The arcs emanating from the source have length zero (see Figure 7.1). This graph is denoted by G = (N, A, B).

Fig. 7.1 Directed graph for the job shop with makespan as objective
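Once a complete (acyclic) selection of disjunctive arcs has been fixed, one direction per pair, the makespan of the corresponding schedule is the length of the longest path from U to V, a standard property of the disjunctive graph model. The following is a minimal sketch of that longest-path computation; the node naming and the tiny instance at the bottom are illustrative and are not taken from the text.

```python
from collections import defaultdict, deque

def longest_path_makespan(nodes, arcs, proc_time):
    """Longest U -> V path length in an acyclic directed graph.

    nodes     : iterable of operation nodes plus the dummy nodes 'U' and 'V'
    arcs      : (u, v) pairs: conjunctive arcs plus the *selected* disjunctive arcs
    proc_time : dict mapping an operation node to its processing time
                ('U' and 'V' have length 0)
    """
    succ = defaultdict(list)
    indeg = defaultdict(int)
    for u, v in arcs:
        succ[u].append(v)
        indeg[v] += 1
    dist = {v: 0 for v in nodes}                        # longest distance from 'U'
    queue = deque(v for v in nodes if indeg[v] == 0)    # topological order
    while queue:
        u = queue.popleft()
        for v in succ[u]:
            # arc length = processing time of the operation the arc emanates from
            dist[v] = max(dist[v], dist[u] + proc_time.get(u, 0))
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return dist['V']

# Tiny illustrative instance (2 machines, 2 jobs), not the one in the text.
ops = [(1, 1), (2, 1), (2, 2), (1, 2)]
p = {(1, 1): 3, (2, 1): 2, (2, 2): 4, (1, 2): 3}
conjunctive = [('U', (1, 1)), ((1, 1), (2, 1)), ((2, 1), 'V'),
               ('U', (2, 2)), ((2, 2), (1, 2)), ((1, 2), 'V')]
selected_disjunctive = [((1, 1), (1, 2)), ((2, 2), (2, 1))]
print(longest_path_makespan(ops + ['U', 'V'],
                            conjunctive + selected_disjunctive, p))   # 7
```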
yij − yil ≥ pil   or   yil − yij ≥ pij   for all (i, l) and (i, j), i = 1,...,m
yij ≥ 0   for all (i, j) ∈ N
The second group consists of ten constraints, one for each operation.
and let i* denote the machine on which the minimum is achieved.
Step 3. (Branching)
Let Ω denote the set of all operations (i*, j) on machine i* such that

for operation (i, j), that is, dij is equal to LB(V) minus the length of the

min(0 + 10, 0 + 8, 0 + 4) = 4,   i* = 1,
So there are two nodes of interest at level 1, one corresponding to operation (1,1) being processed first on machine 1 and the other to operation (1,3) being processed first on machine 1.
If operation (1,1) is scheduled first, then the two disjunctive arcs depicted in Figure 7.3.b are added to the graph. The node is characterized by the two disjunctive arcs

(1,1) → (1,2),
(1,1) → (1,3).
jobs 1  2  3
p1j  10 3  4
r1j  0  10 10
d1j  10 13 14
The sequence that minimizes Lmax is 1,2,3 with Lmax = 3. This implies that a lower bound for the makespan at the corresponding node is 24 + 3 = 27.
jobs 1  2  3
p2j  8  8  7
r2j  10 0  14
d2j  20 10 21
The optimal sequence is 2,1,3 with Lmax = 4. This yields a better lower bound for the makespan at the node that corresponds to operation (1,1) being scheduled first, namely 24 + 4 = 28. Analyzing machines 3 and 4 in the same way does not yield a better lower bound.
The second node at level 1 corresponds to operation (1,3) being scheduled first. If (1,3) is scheduled to go first, two different disjunctive arcs are added to the original graph, yielding a lower bound of 26. The associated instance of the maximum lateness problem for machine 1 has optimal sequence 3,1,2 with Lmax = 2. This implies that the lower bound for the makespan at this node, which corresponds to operation (1,3) being scheduled first, is also equal to 28. Analyzing machines 2, 3, and 4 does not result in a better lower bound.
The next step is to branch from node (1,1) at level 1 and generate the nodes at the next level.

[Branching tree: level 0 - no disjunctive arcs (Figure 7.3a); level 1 - (1,1) scheduled first on machine 1 (LB = 28) and (1,3) scheduled first on machine 1 (LB = 28); level 2 - (2,2) scheduled first on machine 2 (LB = 28)]
There is one node of interest in this part of level 2, the node corresponding to operation (2,2) being processed first on machine 2 (see Figure 7.4). Two disjunctive arcs are added to the graph, namely (2,2) → (2,1) and (2,2) → (2,3). So this node is characterized by a total of four disjunctive arcs:

(1,1) → (1,2),
(1,1) → (1,3),
(2,2) → (2,1),
(2,2) → (2,3).
jobs 1  2  3
p1j  10 3  4
r1j  0  10 10
d1j  14 17 18
7.2 The Shifting Bottleneck Heuristic and the Makespan
Sequence machine k according to the sequence obtained in Step 2 for that machine.
Insert all the corresponding disjunctive arcs into graph G. Insert machine k into M0.
Step 4. (Resequencing of all previously scheduled machines)
Do the following for each machine i ∈ M0 − {k}:
delete from G the disjunctive arcs corresponding to machine i;
formulate a single machine subproblem for machine i with release dates and due dates of the operations determined by longest path computations in G.
Find the sequence that minimizes Lmax(i) and insert the corresponding disjunctive arcs into graph G.
Step 5. (Stopping criterion)
If M0 = M then STOP; otherwise go to Step 2. ||
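The following is a rough Python skeleton of the iteration just summarized (Steps 2 to 5). The graph object and the two helper routines are assumptions: an exact solver for the 1 | rj | Lmax subproblems and the longest-path computations are only stubbed here, not implemented.

```python
def shifting_bottleneck(machines, graph, solve_1_rj_Lmax, longest_paths):
    """Skeleton of the shifting bottleneck iteration described above.

    machines        : the set M of all machines
    graph           : disjunctive graph G (conjunctive arcs only at the start);
                      the insert/remove methods used below are assumed to exist
    solve_1_rj_Lmax : assumed helper; given release and due dates of one machine's
                      operations it returns (sequence, Lmax) for 1 | rj | Lmax
    longest_paths   : assumed helper; computes release dates (longest path from the
                      source) and due dates (Cmax minus longest path to the sink)
                      of every operation of a machine in the current graph
    """
    M0 = set()                                    # machines already scheduled
    while M0 != set(machines):
        # Step 2: solve a 1 | rj | Lmax subproblem for every unscheduled machine.
        results = {}
        for i in set(machines) - M0:
            r, d = longest_paths(graph, i)
            results[i] = solve_1_rj_Lmax(r, d)
        # Step 3: the bottleneck is the machine with the largest Lmax.
        k = max(results, key=lambda i: results[i][1])
        graph.insert_disjunctive_arcs(k, results[k][0])
        M0.add(k)
        # Step 4: resequence every previously scheduled machine.
        for i in M0 - {k}:
            graph.remove_disjunctive_arcs(i)
            r, d = longest_paths(graph, i)
            seq, _ = solve_1_rj_Lmax(r, d)
            graph.insert_disjunctive_arcs(i, seq)
    # Step 5: all machines scheduled.
    return graph
```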
7.1.1 and 7.1.4. The routes of the jobs, i.e., the machine sequences, and the processing times are given in the following table:
jobs 1  2  3
p1j  10 3  4
r1j  0  8  0
d1j  10 11 12
The optimal sequence turns out to be 1,2,3 with Lmax(1) = 5.
jobs 1  2  3
p2j  8  8  7
r2j  10 0  4
d2j  18 8  19
The optimal sequence for this problem is 2,3,1 with Lmax(2) = 5. Similarly, it can be shown that Lmax(3) = 4 and Lmax(4) = 0.
jobs 1  2  3
p2j  8  8  7
r2j  10 0  17
d2j  23 10 24
The optimal sequence is 2,1,3 and the resulting Lmax(2) = 1. The data for the subproblem corresponding to machine 3 are:
jobs 1  2
p3j  4  6
r3j  18 18
d3j  27 27
may have to be processed a given amount of time apart from one another. That is, between the processing of two operations that are subject to these precedence constraints a certain minimum amount of time (i.e., a delay) may have to elapse.
The lengths of the delays are also determined by the sequences of the operations on the machines already scheduled. These precedence constraints are therefore referred to as delayed precedence constraints.
The following example illustrates the possible need for delayed precedence constraints in the single machine subproblem. Without these constraints the shifting bottleneck heuristic may end up in a situation where there is a cycle in the disjunctive graph and the corresponding schedule is infeasible. The following example illustrates how the sequences on the machines already scheduled (machines in M0) impose constraints on the machines still to be scheduled (machines in M − M0).
jobs 1 2          jobs 1 2
p1j  1 1          p2j  1 1
r1j  0 1          r2j  1 0
d1j  7 8          d2j  8 7
The optimal solutions for machines 1 and 2 both have Lmax = −6.

d2j  8  5

Any schedule for machine 2 yields an Lmax ≤ 0.
above with a priority rule called the Apparent Tardiness Cost first (ATC) rule.
Fig. 7.8 Directed graph for the job shop with the total weighted tardiness objective
Assume that a number of machines have already been scheduled (i.e., all disjunctive arcs have been selected for these machines), and in this iteration it has to be decided which machine should be scheduled next and how it should be scheduled. Each of the remaining machines has to be analyzed separately and for each of these machines a measure of criticality has to be computed. The steps to be done within an iteration can be described as follows.
In the disjunctive graph representation all disjunctive arcs belonging to the machines still to be scheduled are deleted, and all disjunctive arcs selected for the machines already scheduled (set M0) are kept in place. Given this directed graph, the completion times of all n jobs can be computed easily. Let Ck denote the completion time of job k. Consider now a machine i that still has to be scheduled (machine i is an element of the set M − M0). To avoid an increase in the completion time Ck, operation (i, j), j = 1,...,n, must be completed on machine i by some local due date dkij. If there are no paths from node (i, j) to the sink corresponding to job k, i.e., Vk, then the local due date dkij is infinite. If there are one or more paths from node (i, j) to Vk, with the longest path having length L((i, j), Vk), then the local due date dkij is obtained from the length of this longest path.
hij(Cij) = Σ_{k=1}^{n} wk Tijk.

Fig. 7.9 Cost function hij of operation (i, j) in a single machine subproblem
a function of the time t at which the machine became free, as well as of the pj, the wj, and the dj of the remaining jobs. The index is defined as

Ij(t) = (wj / pj) exp( − max(dj − pj − t, 0) / (K p̄) ),

where K is a scaling parameter and p̄ is the average of the processing times of the remaining jobs. Summing over all the jobs, i.e.,
Σ_{k=1}^{n} wk (Ck − dk)^+
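As an illustration, here is a small sketch of the ATC selection step. The index formula follows the standard definition of the rule as reconstructed above; the value of K and the job data in the call are assumptions.

```python
import math

def atc_pick(t, jobs, K=2.0):
    """Select the next job by the Apparent Tardiness Cost (ATC) rule.

    t    : time at which the machine becomes free
    jobs : list of (p_j, w_j, d_j) triples for the jobs still to be scheduled
    K    : scaling parameter (an assumption; a typical value)
    """
    p_bar = sum(p for p, _, _ in jobs) / len(jobs)   # average remaining processing time
    def index(job):
        p, w, d = job
        # I_j(t) = (w_j / p_j) * exp( -max(d_j - p_j - t, 0) / (K * p_bar) )
        return (w / p) * math.exp(-max(d - p - t, 0.0) / (K * p_bar))
    return max(jobs, key=index)

# Example: at t = 0 the rule prefers the short, heavy, tight job (3, 2, 6).
print(atc_pick(0.0, [(5, 1, 10), (3, 2, 6), (4, 1, 20)]))
```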
jobs 1  2 3
p1j  5  5 7
r1j  5  4 8

jobs 1  2 3
p2j  10 6 3
r2j  10 9 5

jobs 1  2 3
p3j  4  4 5
r3j  20 0 0

jobs 1  2  3
p2j  10 6  3
r2j  10 15 5

jobs 1  2 3
p3j  4  4 5
r3j  20 0 0
[Gantt chart: machine 1 processes (1,1), (1,2), (1,3); machine 2 processes (2,3), (2,1), (2,2); machine 3 processes (3,3), (3,2), (3,1); time axis 0 to 25]
jobs 1  2 3
p3j  4  4 5
r3j  20 0 0
7.4 Constraint Programming and the Makespan

σ(i,j)→(i,k) = Cik − Sij − pij − pik

or

σ(i,j)→(i,k) = dik − rij − pij − pik.
If σ(i,j)→(i,k) < 0, then scheduling operation (i, j) before operation (i, k) is not feasible. Four cases can be distinguished.
Case 1:
If σ(i,j)→(i,k) ≥ 0 and σ(i,k)→(i,j) < 0, then the precedence constraint (i, j) → (i, k) has to be imposed.
Case 2:
If σ(i,k)→(i,j) ≥ 0 and σ(i,j)→(i,k) < 0, then the precedence constraint (i, k) → (i, j) has to be imposed.
Case 3:
If σ(i,j)→(i,k) < 0 and σ(i,k)→(i,j) < 0, then there is no schedule that satisfies the precedence constraints already in place.
Case 4:
If σ(i,j)→(i,k) ≥ 0 and σ(i,k)→(i,j) ≥ 0, then either ordering between the two operations is still possible.
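A small sketch of this classification, using the second of the two slack expressions given above; all data in the call are illustrative, not from the text.

```python
def slack(d_k, r_j, p_j, p_k):
    """Slack if operation j is processed before operation k on the same machine:
    sigma(j -> k) = d_k - r_j - p_j - p_k  (the second form given above)."""
    return d_k - r_j - p_j - p_k

def classify(r_j, d_j, p_j, r_k, d_k, p_k):
    """Return which of the four cases applies to the pair (j, k)."""
    s_jk = slack(d_k, r_j, p_j, p_k)   # j before k
    s_kj = slack(d_j, r_k, p_k, p_j)   # k before j
    if s_jk >= 0 and s_kj < 0:
        return "Case 1: impose j -> k", s_jk, s_kj
    if s_kj >= 0 and s_jk < 0:
        return "Case 2: impose k -> j", s_jk, s_kj
    if s_jk < 0 and s_kj < 0:
        return "Case 3: infeasible, backtrack", s_jk, s_kj
    return "Case 4: either ordering still possible", s_jk, s_kj

# Illustrative data: two operations on the same machine.
print(classify(r_j=0, d_j=8, p_j=4, r_k=2, d_k=14, p_k=5))   # Case 1, slacks 5 and -3
```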
In one of the steps of the algorithm described in this section a pair of operations has to be selected that satisfies Case 4, i.e., either ordering between the operations is still possible. At this step of the algorithm many pairs of operations may still satisfy Case 4. If there is more than one pair of operations that satisfies Case 4, then a search control heuristic has to be applied. The selection of a pair is based on the sequencing flexibility that this pair still provides. The pair with the lowest flexibility is selected. The reasoning behind this approach is straightforward: if a pair with low flexibility is not ordered early on in the process, then it may be the case that later on in the process this pair cannot be ordered at all. So it makes sense to give priority to pairs with a low flexibility and to postpone pairs with a high flexibility. Clearly, the flexibility depends on the amounts of slack under the two orderings. A simple estimate of the sequencing flexibility of a pair of operations, φ((i, j)(i, k)), is the minimum of the two slacks, i.e.,
φ((i, j)(i, k)) = min( σ(i,j)→(i,k), σ(i,k)→(i,j) ).
However, relying on this minimum may lead to problems. For example, suppose one pair of operations has slack values 3 and 100, while another pair has slack values 4 and 4. In this case there may be only limited possibilities for ordering the second pair, and postponing a decision with regard to the second pair may well eliminate them; a feasible ordering with respect to the first pair is not really in jeopardy. Instead of using φ((i, j)(i, k)) as defined above, the following measure of sequencing flexibility has proven to be more effective:

φ((i, j)(i, k)) = sqrt( min( σ(i,j)→(i,k), σ(i,k)→(i,j) ) × max( σ(i,j)→(i,k), σ(i,k)→(i,j) ) ).
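A sketch of this flexibility measure and of why it is preferred to the plain minimum, using the slack values (3, 100) and (4, 4) from the discussion above; the square root follows the reconstruction of the formula, which is consistent with the value 8.36 tabulated later for slacks 5 and 14.

```python
import math

def flexibility(s1, s2):
    """Sequencing flexibility of a Case 4 pair: sqrt(min * max) of its two slacks."""
    return math.sqrt(min(s1, s2) * max(s1, s2))

# The two pairs discussed above: slacks (3, 100) versus (4, 4).
pairs = {"pair with slacks 3 and 100": (3, 100), "pair with slacks 4 and 4": (4, 4)}
by_min  = min(pairs, key=lambda p: min(pairs[p]))            # plain minimum slack
by_sqrt = min(pairs, key=lambda p: flexibility(*pairs[p]))   # sqrt(min * max)
print(by_min)    # picks the (3, 100) pair; postponing the tight (4, 4) pair is risky
print(by_sqrt)   # picks the (4, 4) pair, so the critical pair is ordered first
print(round(flexibility(5, 14), 2))   # about 8.37, cf. the value 8.36 tabulated later
```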
Backtracking means that some of the orderings made in earlier iterations have to be undone (i.e., precedence constraints that had been imposed earlier have to be removed). Or it may imply that no feasible solution exists for the problem as it has been presented and formulated, and that some of the original constraints of the problem have to be relaxed.
The constraint guided heuristic search procedure can be summarized as follows.
Step 2.
Check the dominance conditions and classify the remaining ordering decisions.
If any ordering decision is of Case 1 or Case 2, go to Step 3. If any ordering decision is of Case 3, then backtrack; otherwise go to Step 4.
Step 3.
Insert a new precedence constraint and go to Step 1.
Step 4.
If no remaining ordering decision is of Case 4, then a solution has been found. STOP. Otherwise go to Step 5.
Step 5.
Compute φ((i, j)(i, k)) for each pair of operations not yet ordered.
Select the pair with the minimum φ((i, j)(i, k)).
If σ(i,j)→(i,k) ≥ σ(i,k)→(i,j), then operation (i, k) must follow operation (i, j); otherwise operation (i, j) must follow operation (i, k). Go to Step 3.
||
In order to apply the constraint guided heuristic search procedure to Jm || Cmax, it has to be embedded in the following framework. First, an upper bound du and a lower bound dl have to be found for the makespan.
Step 2.
If Cmax < d, set du = d.
If Cmax > d, set dl = d.
Step 3.
If du − dl > 1, return to Step 1.
Otherwise STOP.
||
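A sketch of this outer loop as a binary search on a common due date d, assuming a routine feasible(d) that runs the constraint guided search with every job's due date set to d and reports whether a schedule meeting the due dates was found; the bounds and the toy oracle in the call are illustrative.

```python
def bisection_makespan(dl, du, feasible):
    """Binary search on the common due date d for Jm || Cmax.

    dl, du   : lower and upper bounds on the optimal makespan (assumed given)
    feasible : assumed helper; feasible(d) runs the constraint guided search with
               every due date set to d and returns True if it finds a schedule
    Returns the smallest tested d for which a feasible schedule was found.
    """
    while du - dl > 1:
        d = (dl + du) // 2
        if feasible(d):
            du = d        # a schedule with Cmax <= d exists
        else:
            dl = d        # no schedule found; the makespan must exceed d
    return du

# Illustrative use with a made-up feasibility oracle (true optimum 28 in this toy call).
print(bisection_makespan(24, 36, lambda d: d >= 28))   # 28
```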
The following example illustrates the use of the constraint satisfaction technique.
Example 7.4.3 (Application of Constraint Programming to the Job Shop). Consider the instance of the job shop problem described in Example 7.1.1.
The slack σ(2,3)→(2,2) turns out to be −1, which implies that the ordering (2,3) → (2,2) is not feasible. So the disjunctive arc (2,2) → (2,3) has to be inserted. In the same way, it can be shown that the disjunctive arcs (2,2) → (2,1) and (1,1) → (1,2) have to be inserted as well.
ordered pairs of operations on each machine the factor φ((i, j)(i, k)) (see Table 7.2(c)).
The pair with the least flexibility is (1,1)(1,3), and the precedence constraint (1,1) → (1,3) has to be inserted.
Inserting this last precedence constraint enforces one more constraint, namely (2,1) → (2,3). Now only one unordered pair of operations remains, namely the pair (1,3)(1,2). These two operations can be ordered in either way without violating any due date. A feasible ordering is (1,3) → (1,2). The resulting schedule, with a makespan of 32, is depicted in Figure 7.13. This schedule meets the due date set originally, but is not optimal.
(a) (b)
(1,1) 0 20 (1,1) 0 18
(2,1) 10 28 (2,1) 10 28
(3,1) 18 32 (3,1) 18 32
(2,2) 0 18 (2,2) 0 18
(1,2) 8 21 (1,2) 10 21
(4,2) 11 26 (4,2) 13 26
(3,2) 16 32 (3,2) 18 32
(1,3) 0 22 (1,3) 0 22
(2,3) 4 29 (2,3) 8 29
(4,3) 11 32 (4,3) 15 32
Table 7.1 (a) Local release dates and due dates. (b) Local release dates and due dates after the update. (c) Computing φ((i, j)(i, k)).
When the pair (3,1)(3,2) had to be ordered the first time, it could have been ordered in either direction because the two slack values were equal. Suppose that at that point the opposite ordering had been selected, i.e., (3,1) → (3,2). Restarting the process at that point yields the release dates and due dates shown in Table 7.3(a).
These release dates and due dates enforce a precedence constraint on the pair of operations (2,1)(2,3), and the constraint is (2,1) → (2,3). This additional constraint changes the release dates and due dates (see Table 7.3(b)).
These new release dates and due dates have an effect on the pair (4,2)(4,3), and the arc (4,2) → (4,3) has to be included. This additional arc does not cause any further changes in the release dates and due dates. At this point only two pairs of operations remain unordered, namely the pair (1,1)(1,3) and the pair (1,2)(1,3) (see Table 7.3(c)).
(a) (b)
(1,1) 0 14 (1,1) 0 14
(2,1) 10 28 (2,1) 10 28
(3,1) 24 32 (3,1) 24 32
(2,2) 0 14 (2,2) 0 14
(1,2) 10 17 (1,2) 10 17
(4,2) 13 22 (4,2) 13 22
(3,2) 18 28 (3,2) 18 28
(1,3) 0 22 (1,3) 0 22
(2,3) 8 29 (2,3) 8 29
(4,3) 15 32 (4,3) 18 32
Table 7.2 (a) Local release dates and due dates. (b) Local release dates and due dates after the update. (c) Computing φ((i, j)(i, k)).
So the pair (1,1)(1,3) is more critical and has to be ordered (1,1) → (1,3).
It turns out that the last pair to be ordered, (1,2)(1,3), can be ordered in either way.
The resulting schedule turns out to be optimal and has a makespan of 28. ||
7.5 Discussion
The disjunctive graph formulation for Jm || Cmax extends to Jm | rcrc | Cmax.
The set of disjunctive arcs for a machine may now not be a clique. If two
operations of the same job have to be performed on the same machine, a
precedence relationship is given. These two operations are not connected by a
pair of disjunctive arcs, since they are already connected by conjunctive arcs.
(1,1) 0 14 (1,1) 0 14
(2,1) 10 22 (2,1) 10 22
(3,1) 18 26 (3,1) 18 26
(2,2) 0 18 (2,2) 0 18
(1,2) 10 21 (1,2) 10 21
(4,2) 13 26 (4,2) 13 26
(3,2) 18 32 (3,2) 22 32
(1,3) 0 22 (1,3) 0 22
(2,3) 8 29 (2,3) 18 29
(4,3) 15 32 (4,3) 25 32
(1,2)(1,3) 5 14 = 8.36
Table 7.3 (a) Local release and due dates. (b) Local release and due dates after update. (c)
Computing φ((i,j)(i,k)).
It is clear from this chapter that there are a number of completely different
techniques for dealing with job shops, namely disjunctive programming,
shifting bottleneck, constraint programming and also local search techniques.
Future research on job shop scheduling may focus on the development of
hybrid techniques incorporating two or more of these techniques in a single
framework that can be adapted easily to any given job shop instance.
Exercises (Computational)
7.1. Consider the following heuristic for Jm || Cmax. Each time a machine is
freed, select the job (among the ones immediately available for processing on
the machine) with the largest total remaining processing (including the
processing on the machine freed). If at any point in time more than one
machine is freed, consider first the machine with the largest remaining
workload. Apply this heuristic to the instance in Example 7.1.1.
7.4. Apply the heuristic described in Exercise 7.1 to the instance in Exercise 7.3.
7.5. Consider the instance in Exercise 7.2.
(a) Apply the Shifting Bottleneck heuristic to this instance (doing the
computation by hand).
(b) Compare your result with the result of the shifting bottleneck routine in the LEKIN system.
7.6. Consider again the instance in Exercise 7.2.
(a) Apply the branch-and-bound algorithm to this instance of the job shop problem.
(b) Compare your result with the result of the local search routine in the LEKIN system.
7.7. Consider the following instance of the two-machine flow shop with the
makespan as objective (i.e., an instance of F2 || Cmax, which is a special case of
J2 || Cmax).
jobs 1 2 3 4 5 6 7 8 9 10 11
p1j  3 6 4 3 4 2 7 5 5 6  12
p2j  4 5 5 2 3 3 6 6 4 7  2
(a) Apply the heuristic described in Exercise 7.1 to this two-machine flow
shop.
(b) Apply the shifting bottleneck heuristic to this two-machine flow shop.
(c) Construct a schedule using Johnson’s rule (see Chapter 6).
(d) Compare the schedules found under (a), (b), and (c).
7.8. Consider the instance of the job shop with the total weighted tardiness
objective described in Example 7.3.1. Apply the Shifting Bottleneck
heuristic again, but now use as scaling parameter K = 5. Compare the
resulting schedule with the schedule obtained in Example 7.3.1.
jobs 1  2  3  4  5
p1j  12 4  6  8  2
p2j  10 5  4  6  3
dj   12 32 21 14 28
wj   3  2  4  3  2
Apply the shifting bottleneck heuristic to minimize the total weighted tardiness.
Exercises (Theory)
7.11. Design a branching scheme for a branch-and-bound approach that is
based on the insertion of disjunctive arcs. The root node of the tree corresponds
to a disjunctive graph without any disjunctive arcs. Each node in the branching
tree corresponds to a particular selection of a subset of the disjunctive arcs.
That is, for any particular node in the tree a subset of the disjunctive arcs has
been fixed in certain directions, while the remaining set of disjunctive arcs has
not been fixed yet. From every node there are two arcs emanating to two nodes
at the next level. One of the two nodes at the next level corresponds to an
additional disjunctive arc being fixed in a given direction while the other node
corresponds to the reverse arc being selected. Develop an algorithm that
generates the nodes of such a branching tree and show that your algorithm
generates every possible schedule.
7.12. Determine an upper and a lower bound for the makespan in an m machine
job shop when preemptions are not allowed. The processing time of job j on
machine i is pij (i.e., no restrictions on the processing times).
7.13. Show that when preemptions are allowed there always exists an optimal
schedule for the job shop that is non-delay.
Chapter 8
This chapter deals with multi-operation models that are different from the job shop models considered
in the previous chapter. In a job shop each job has a fixed route that is predetermined. In practice, it
often occurs that the route of the job is immaterial and up to the scheduler to decide. When the routes
of the jobs are open, the model is referred to as an open shop.
The first section covers non-preemptive open shop models with the makespan as objective. The
second section deals with preemptive open shop models with the makespan as objective. The third and fourth sections focus on non-preemptive and preemptive models with the maximum lateness as
objective. The fifth section considers non-preemptive models with the number of tardy jobs as
objective.
8.1 The Makespan without Preemptions

For the two machine case O2 || Cmax, clearly

Cmax ≥ max( Σ_{j=1}^{n} p1j , Σ_{j=1}^{n} p2j ),

since the makespan cannot be less than the workload on either machine. One would typically expect the makespan to be equal to the RHS of the inequality;
Fig. 8.1 Idle periods in two-machine open shops: (a) the idle period causes an unnecessary increase in makespan; (b) the idle period does not cause an unnecessary increase in makespan
only in very special cases would one expect the makespan to be larger than the RHS. It is worthwhile to investigate the special cases where the makespan is strictly greater than the maximum of the two workloads.
This section considers only non-delay schedules. That is, if there is a job waiting for processing when
a machine is free, then that machine is not allowed to remain idle. It immediately follows that an idle
period can occur on a machine if and only if one job remains to be processed on that machine and,
when that machine is available, this last job is just then being processed on the other machine. It can be
shown that at most one such idle period can occur on at most one of the two machines (see Figure 8.1).
Such an idle period may cause an unnecessary increase in the makespan; if this last job turns out to be
the very last job to complete all its processing, then the idle period does cause an increase in the
makespan (see Figure 8.1.a). If this last job, after having completed its processing on the machine that
was idle, is not the very last job to leave the system, then the makespan is still equal to the maximum of
the two workloads (see Figure 8.1.b).
Consider the following rule: whenever a machine is freed, start processing among the jobs that have
not yet received processing on either machine the one with the longest processing time on the other
machine. This rule is in what follows referred to as the Longest Alternate Processing Time first (LAPT)
rule. At time zero, when both machines are idle, it may occur that the same job qualifies to be first on
both machines. If that is the case, then it does not matter on which machine this job is processed first.
According to this LAPT rule, whenever a machine is freed, jobs that already have completed their
processing on the other machine have the lowest, that is, zero, priority on the machine just freed. There
is therefore no distinction between the priorities of two jobs that both already have been processed on
the other machine.
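A minimal non-delay simulation of the LAPT rule as just described; the instance in the call at the bottom is illustrative and not from the text.

```python
def lapt_two_machine(p1, p2):
    """Non-delay LAPT simulation for O2 || Cmax.

    p1, p2 : lists of processing times of jobs 1..n on machines 1 and 2
    Returns (makespan, schedule) where schedule lists (machine, job, start, finish).
    """
    n = len(p1)
    remaining = [list(p1), list(p2)]     # remaining[m][j] > 0: job j still needs machine m
    free_at = [0, 0]                     # time each machine becomes free
    job_busy_until = [0] * n             # time each job finishes its current operation
    schedule = []
    while any(remaining[m][j] > 0 for m in (0, 1) for j in range(n)):
        m = 0 if free_at[0] <= free_at[1] else 1
        other, t = 1 - m, free_at[m]
        waiting = [j for j in range(n) if remaining[m][j] > 0]
        if not waiting:                  # machine m has nothing left to do
            free_at[m] = float('inf')
            continue
        available = [j for j in waiting if job_busy_until[j] <= t]
        if not available:                # forced idleness: wait for the other machine
            free_at[m] = min(job_busy_until[j] for j in waiting)
            continue
        # LAPT priority: longest remaining processing time on the *other* machine;
        # jobs already finished on the other machine automatically get priority 0.
        j = max(available, key=lambda jj: remaining[other][jj])
        start, finish = t, t + remaining[m][j]
        schedule.append((m + 1, j + 1, start, finish))
        free_at[m], job_busy_until[j], remaining[m][j] = finish, finish, 0
    makespan = max((f for (_, _, _, f) in schedule), default=0)
    return makespan, schedule

# Small illustrative instance; the makespan 11 equals the machine-2 workload.
print(lapt_two_machine([3, 5, 2], [4, 1, 6]))
```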
Theorem 8.1.1. The LAPT rule results in an optimal schedule for O2 || Cmax with makespan

Cmax = max( max_{j∈{1,...,n}} (p1j + p2j), Σ_{j=1}^{n} p1j, Σ_{j=1}^{n} p2j ).
Proof. Actually, a more general (and less restrictive) scheduling rule already guarantees a minimum
makespan. This more general rule may result in many different schedules that are all optimal. This
class of optimal schedules includes the LAPT schedule. This general rule also assumes that unforced
idleness is not allowed.
Assume, without loss of generality, that the longest processing time among the 2n processing times
belongs to operation (1,k), that is, p1k ≥ pij for i = 1, 2 and j = 1,...,n.
The more general rule can be described as follows. If operation (1,k) is the longest operation, then
job k must be started at time 0 on machine 2. After job k has completed its processing on machine 2, its
operation (1,k) has the lowest possible priority with regard to processing on machine 1. Since its
priority is then at all times lower than the priority of any other operation available for processing on
machine 1, the processing of operation (1,k) will be postponed as much as possible. It can only be
processed on machine 1 if no other job is available for processing on machine 1 (this can happen either
if it is the last operation to be done on machine 1 or if it is the second last operation and the last
operation is not available because it is just then being processed on machine 2). The 2(n − 1)
operations of the remaining n − 1 jobs can be processed on the two machines in any order; however,
unforced idleness is not allowed.
That this rule results in a schedule with a minimum makespan can be shown as follows. If the
resulting schedule has no idle period on either machine, then, of course, it is optimal. However, an idle
period may occur either on machine 1 or on machine 2. So two cases have to be considered.
Case 1: Suppose an idle period occurs on machine 2. If this is the case, then only one more operation
needs processing on machine 2 but this operation still has to complete its processing on machine 1.
Assume this operation belongs to job l. When job l starts on machine 2, job k starts on machine 1 and
p1k > p2l. So the makespan is determined by the completion of job k on machine 1 and no idle period
has occurred on machine 1. So the schedule is optimal.
Case 2: Suppose an idle period occurs on machine 1. An idle period on machine 1 can occur only
when machine 1 is freed after completing all its operations with the exception of operation (1,k) and
operation (2,k) of job k is at that point still being processed on machine 2. In this case, the makespan is
equal to p2k + p1k and the schedule is optimal. Another rule that may seem appealing at first sight is
the rule that gives, whenever a machine is freed, the highest priority to the job with the largest total
remaining processing time on both machines. It turns out that there are instances, even with two
machines, when this rule results in a schedule that is not optimal (see Exercise 8.12). The fact that the
priority level of a job on one machine depends only on the amount of processing remaining to be done
on the other machine is key.
The LAPT rule described above may be regarded as a special case of a more general rule that can be
applied to open shops with more than two machines. This more general rule may be referred to as the
Longest Total Remaining Processing on Other Machines first rule. According to this rule, again, the
processing required on the machine currently available does not affect the priority level of a job.
However, this rule does not always result in an optimal schedule since the Om || Cmax problem is NP-
hard when m ≥ 3.
Proof. The proof is based on a reduction of PARTITION to O3 || Cmax. The PARTITION problem can be formulated as follows: given positive integers a1,...,at and b with

Σ_{j=1}^{t} aj = 2b,

does there exist a subset S ⊂ {1,...,t} such that Σ_{j∈S} aj = b?
The reduction is based on the following transformation. Consider 3t + 1 jobs. Of these 3t+1 jobs there
are 3t jobs that have only one non-zero operation and one job that has to be processed on each one of
the three machines.
p1j = aj, p2j = p3j = 0, for 1 ≤ j ≤ t,
p2j = aj, p1j = p3j = 0, for t + 1 ≤ j ≤ 2t,
p3j = aj, p1j = p2j = 0, for 2t + 1 ≤ j ≤ 3t,
p1,3t+1 = p2,3t+1 = p3,3t+1 = b,
where
Σ_{j=1}^{t} aj = Σ_{j=t+1}^{2t} aj = Σ_{j=2t+1}^{3t} aj = 2b
and z = 3b. The open shop problem now has a schedule with a makespan equal to z if and only if there
exists a partition. It is clear that to have a makespan equal to 3b job 3t + 1 has to be processed on the
three machines without interruption. Consider the machine on which job 3t + 1 is processed
second, that is, during the interval (b,2b). Without loss of generality it may be assumed that this is
machine 1. Jobs 1,...,t have to be processed only on machine 1. If there exists a partition of these t jobs
in such a way that one set can be processed during the interval (0,b) and the other set can be processed
during the interval (2b,3b), then the makespan is 3b (see Figure 8.2). If there does not exist such a
partition, then the makespan has to be larger than 3b.
The LAPT rule for O2 || Cmax is one of the few polynomial time algorithms for non-preemptive open
shop problems. Most of the more general open shop models within the framework of Chapter 2 are NP-
hard, for example, O2 | rj | Cmax. However, the problem Om | rj,pij = 1 | Cmax can be solved in
polynomial time. This problem is discussed in a more general setting in Section 8.3.
8.2 The Makespan with Preemptions

From the fact that the value of the makespan under LAPT is a lower bound for the makespan with two machines even when preemptions are allowed, it follows that the non-preemptive LAPT rule is also optimal for O2 | prmp | Cmax.
It is easy to establish a lower bound for the makespan with m (m ≥ 3) machines when preemptions are allowed:

Cmax ≥ max( max_{i=1,...,m} Σ_{j=1}^{n} pij , max_{j=1,...,n} Σ_{i=1}^{m} pij ).
That is, the makespan is at least as large as the maximum workload on each of the m machines and at
least as large as the total amount of processing to be done on each of the n jobs. It turns out that it is
rather easy to obtain a schedule with a makespan that is equal to this lower bound.
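A one-line check of this lower bound; the matrix used is the one of Example 8.2.1 below, for which the bound equals 11.

```python
def preemptive_makespan_lower_bound(P):
    """Lower bound for Om | prmp | Cmax: the larger of the maximum machine workload
    and the maximum total processing requirement of any job.
    P is an m x n matrix (list of rows) of processing times p_ij."""
    machine_loads = [sum(row) for row in P]                          # workload per machine
    job_loads = [sum(row[j] for row in P) for j in range(len(P[0]))] # total work per job
    return max(max(machine_loads), max(job_loads))

P = [[3, 4, 0, 4],
     [4, 0, 6, 0],
     [4, 0, 0, 6]]
print(preemptive_makespan_lower_bound(P))   # 11
```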
In order to see how the algorithm works, consider the m×n matrix P of the processing times pij. Row
i or column j is called tight if its sum is equal to the lower bound and slack otherwise. Suppose it is
possible to find in this matrix a subset of non-zero entries with exactly one entry in each tight row and
one entry in each tight column and at most one entry in each slack row and slack column. Such a subset
would be called a decrementing set. This subset is used to construct a partial schedule of length Δ, for
some appropriately chosen Δ. In this partial schedule machine i works on job j for an amount of time
that is equal to min(pij,Δ) for each element pij in the decrementing set. In the original matrix P the
entries corresponding to the decrementing set are then reduced to max(0,pij − Δ) and the resulting
matrix is then called P′. If Δ is chosen appropriately, the makespan C′max that corresponds to the new matrix P′ is equal to Cmax − Δ. This value of Δ has to be chosen carefully. First, it is clear that Δ may not be larger than any pij in the decrementing set that is in a tight row or column; otherwise there will be a row or column sum in P′ that is strictly larger than C′max. For the same reason, if pij is an element of the decrementing set in a slack row, say row i, it is necessary that

Δ ≤ pij + ( Cmax − Σ_{k=1}^{n} pik ),

where Cmax − Σ_{k=1}^{n} pik is the amount of slack in row i. Similarly, if pij is an entry of the decrementing set in a slack column j, then

Δ ≤ pij + ( Cmax − Σ_{k=1}^{m} pkj ),

where Cmax − Σ_{k=1}^{m} pkj is the amount of slack in column j. If a slack row i or a slack column j does not contain an element of the decrementing set, then it is necessary that

Δ ≤ Cmax − Σ_{j=1}^{n} pij

or

Δ ≤ Cmax − Σ_{i=1}^{m} pij.
If Δ is chosen to be as large as possible subject to these conditions, then either P′ will contain at least one less strictly positive element than P, or P′ will contain at least one more tight row or column than P. It is then clear that there cannot be more than r + m + n iterations, where r is the number of strictly positive elements in the original matrix.
It turns out that it is always possible to find a decrementing set for a nonnegative matrix P. This property is the result of a basic theorem due to Birkhoff and von Neumann regarding doubly stochastic matrices and permutation matrices.
However, the proof of this theorem is beyond the scope of this book.
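To make the update step concrete, here is a small sketch of one iteration of the matrix reduction, given a decrementing set and a value of Δ; how the decrementing set itself is found (the matching argument behind the Birkhoff-von Neumann result) is not shown. The data is the first step of Example 8.2.1 below.

```python
def reduce_step(P, dec_set, delta):
    """One reduction step of the preemptive open shop algorithm described above.

    P       : m x n matrix of (remaining) processing times
    dec_set : decrementing set, given as a list of (machine, job) index pairs
              (finding such a set is assumed here and not shown)
    delta   : the length of the partial schedule built in this step
    Returns (assignments, P_new) where assignments lists (machine, job, amount).
    """
    assignments = [(i, j, min(P[i][j], delta)) for (i, j) in dec_set]
    P_new = [row[:] for row in P]
    for i, j in dec_set:
        P_new[i][j] = max(0, P[i][j] - delta)   # entry reduced by at most delta
    return assignments, P_new

# First step of Example 8.2.1: decrementing set {p12, p21, p34}, delta = 4.
P = [[3, 4, 0, 4],
     [4, 0, 6, 0],
     [4, 0, 0, 6]]
assignments, P1 = reduce_step(P, [(0, 1), (1, 0), (2, 3)], 4)
print(assignments)   # machine 1 works on job 2, machine 2 on job 1, machine 3 on job 4
print(P1)            # the updated matrix P' used in the next iteration
```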
Fig. 8.3 Optimal schedule for O3 | prmp | Cmax with four jobs (Example 8.2.1)
Example 8.2.1 (Minimizing Makespan with Preemptions). Consider 3 machines and 4 jobs with the
processing times being the entries in the matrix
P =  3 4 0 4
     4 0 6 0
     4 0 0 6
It is easily verified that Cmax = 11 and that the first row and first column are tight. A possible
decrementing set comprises the processing times p12 = 4, p21 = 4 and p34 = 6. If Δ is set equal to 4,
then Cmax = 7. A partial schedule is constructed by scheduling job 2 on machine 1 for 4 time units; job
1 on machine 2 for 4 time units and job 4 on machine 3 for 4 time units. The matrix is now
P′ =  3 0 0 4
      0 0 6 0
      4 0 0 2
Again, the first row and the first column are tight. A decrementing set is obtained with the processing
times p11 = 3, p23 = 6 and p34 = 2. Choosing Δ = 3, the partial schedule can be augmented by assigning
job 1 to machine 1 for 3 time units, job 3 to machine 2 for 3 time units and job 4 again to machine 3 but
now only for 2 time units. The matrix is
P′′ =  0 0 0 4
       0 0 3 0
       4 0 0 0
The last decrementing set is obtained with the remaining three positive processing times. The final
schedule is obtained by augmenting the partial schedule by assigning job 4 on machine 1 for 4 time
units, job 3 to machine 2 for 3 time units, and job 1 to machine 3 for 4 time units (see Figure 8.3). ||
8.3 The Maximum Lateness without Preemptions
The Om || Lmax problem is a generalization of the Om || Cmax problem and is therefore at least as hard.
Theorem 8.3.1. The problem O2 || Lmax is strongly NP-Hard.
Proof. The proof is done by reducing 3-PARTITION to O2 || Lmax. The 3-PARTITION problem is
formulated as follows. Given positive integers a1,...,a3t and b, such that
b/4 < aj < b/2
and
Σ_{j=1}^{3t} aj = tb,
do there exist t pairwise disjoint three element subsets Si ⊂ {1,...,3t} such that
aj = b
j∈Si
for i = 1,...,t ?
The following instance of O2 || Lmax can be constructed. The number of jobs, n, is equal to 4t and
There exists a schedule with Lmax ≤ 0 if and only if jobs 1,...,3t can be divided into t groups, each
containing 3 jobs and requiring b units of processing time on machine 2, i.e., if and only if 3-PARTITION
has a solution.
It can be shown that O2 || Lmax is equivalent to O2 | rj | Cmax. Consider the O2 || Lmax problem with deadlines d̄j rather than due dates dj. Let

d̄max = max(d̄1,...,d̄n).

Apply a time reversal to O2 || Lmax. Finding a feasible schedule with Lmax = 0 is now equivalent to finding a schedule for the reversed problem, with release dates d̄max − d̄j, that has a makespan that is less than d̄max. The O2 | rj | Cmax problem is therefore also strongly NP-hard.
Consider now the special case Om | rj, pij = 1 | Lmax. The fact that all processing times are equal to 1 makes the problem considerably easier. The polynomial time solution procedure consists of three phases, namely a parametrization, a network flow computation, and a graph coloring procedure.
The first phase of the procedure involves a parametrization. Let L be a free parameter and assume
that each job has a deadline dj + L. The objective is to find a schedule in which each job is completed
before or at its deadline, ensuring that Lmax ≤ L. Let tmax = max(d1,...,dn) + L,
that is, no job should receive any processing after time tmax.
The second phase focuses on the following network flow problem: There is a source node U that has
n arcs emanating to nodes 1,...,n. Node j corresponds to job j. The arc from the source node U to node j
has capacity m (equal to the number of machines and to the number of operations of each job). There is
a second set of tmax nodes, each node corresponding to one time unit. Node t, t = 1,...,tmax, corresponds
to the time slot [t−1,t]. Node j has arcs emanating to nodes rj + 1,rj + 2,...,dj + L. Each one of these arcs
has unit capacity. Each node of the set of tmax nodes has an arc with capacity m going to sink V (see
Figure 8.4). The capacity limit on each one of these arcs is necessary to ensure that no more than m
operations are processed in any given time period. The solution of this network flow problem indicates
in which time slots the m operations of job j are to be processed.
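A sketch of this phase 2 construction using the networkx library (an assumed external dependency); the node names are illustrative. The data in the call is that of Example 8.3.2 below with L = 1.

```python
import networkx as nx

def phase2_network_flow(m, r, d, L):
    """Build and solve the max-flow problem described above.

    m    : number of machines (= number of unit operations per job)
    r, d : lists of release dates and due dates of the jobs
    L    : trial value of Lmax (deadlines are d_j + L)
    Returns (flow value, assignment of unit operations to time slots).
    """
    G = nx.DiGraph()
    t_max = max(d) + L
    for j in range(len(r)):
        G.add_edge('U', ('job', j), capacity=m)          # m unit operations per job
        for t in range(r[j] + 1, d[j] + L + 1):          # slots [t-1, t] the job may use
            G.add_edge(('job', j), ('slot', t), capacity=1)
    for t in range(1, t_max + 1):
        G.add_edge(('slot', t), 'V', capacity=m)         # at most m operations per slot
    value, flow = nx.maximum_flow(G, 'U', 'V')
    assignment = {(j, t): f for j in range(len(r))
                  for (kind, t), f in flow[('job', j)].items() if f > 0}
    return value, assignment

r = [0, 1, 2, 2, 3, 4, 5]
d = [5, 5, 5, 6, 6, 8, 8]
value, assignment = phase2_network_flow(3, r, d, 1)
print(value)   # 21, i.e. three unit operations for each of the 7 jobs (feasible for L = 1)
```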
However, the network flow solution cannot be translated immediately into a feasible schedule for
the open shop, because in the network flow formulation no distinction is made between the different
machines (i.e., in this solution it may be possible that two different operations of the same job are
processed in two different time slots on the same machine). However, it turns out that the assignment
of operations to time slots prescribed by the network flow solution can be transformed into a feasible
schedule in such a way that each operation of job j is processed on a different machine.
The third phase of the algorithm generates a feasible schedule. Consider a graph coloring problem
with a bipartite graph that consists of two sets of nodes N1 and N2 and a set of undirected arcs. Set N1
has n nodes and set N2 has tmax nodes. Each node in N1 is connected to m nodes in N2; a node in N1 is
connected to those m nodes in N2 that correspond to the time slots in which its operations are
supposed to be processed (according to the solution of the network flow problem in the second phase).
So each one of the nodes in N1 is connected to exactly m nodes in N2, while each node in N2 is
connected to at most m nodes in N1. A result in graph theory states that if each node in a bipartite
graph has at most m arcs, then the arcs can be colored with m different colors in such a way that no
node has two arcs of the same color. Each color then corresponds to a given machine.
The coloring algorithm that achieves this can be described as follows. Let gj, j = 1,...,n denote the
degree of a node from set N1, and let ht, t = 1,...,tmax denote the degree of a node from set N2. Let
Δ = max(g1,...,gn,h1,...,htmax)
In order to describe the algorithm that yields a coloring with Δ colors, let ajt = 1 if node j from N1 is
connected to node t from N2, and let ajt = 0 otherwise. The ajt are elements of a matrix with n rows and
tmax columns.
Clearly,
Σ_{j=1}^{n} ajt ≤ Δ,   t = 1,...,tmax,

and

Σ_{t=1}^{tmax} ajt ≤ Δ,   j = 1,...,n.
The entries (j,t) in the matrix with ajt = 1 are referred to as occupied cells. Each occupied cell in the
matrix has to be assigned one of the Δ colors in such a way that in no row or column the same color is
assigned twice.
The assignment of colors to occupied cells is done by visiting the occupied cells of the matrix row by
row from left to right. When visiting occupied cell (j,t) a color c, not yet assigned in column t, is selected.
If c is already assigned to another cell in row j, say (j,t∗), then there exists a color c′ not yet assigned in row j that can be used to replace the assignment of c to (j,t∗). If another cell (j∗,t∗) in column t∗ already has the assignment c′, then this assignment is replaced by c. This conflict
resolution process stops when there is no remaining conflict. If the partial assignment before coloring
(j,t) was feasible, then the conflict resolution procedure yields a feasible coloring in at most n steps.
Example 8.3.2 (Minimizing the Maximum Lateness without Preemptions). Consider the following instance
of O3 | rj,pij = 1 | Lmax with 3 machines and 7 jobs.
jobs 1 2 3 4 5 6 7
rj   0 1 2 2 3 4 5
dj   5 5 5 6 6 8 8
Assume that L = 1. Each job has a deadline d¯j = dj +1. So tmax = 9. Phase 2 results in the network
flow problem described in Figure 8.5. On the left there are 7 nodes that correspond to the 7 jobs and on
the right there are 9 nodes that correspond to the 9 time units.
The result of the network flow problem is that the jobs are processed during the time units given in
the table below.
jobs       1      2      3      4      5      6      7
time slots 1,2,3  2,3,4  4,5,6  4,5,6  5,6,7  7,8,9  7,8,9
It can be verified easily that at no point in time more than three jobs are processed simultaneously.
Phase 3 leads to the graph coloring problem. The graph is depicted in Figure 8.6 and the matrix with
the appropriate coloring is
111000000
011100000
000111000
000111000
000011100
000000111
000000111
It is easy to find a red (r), blue (b), and white (w) coloring that corresponds to a
feasible schedule.
Since there is a feasible schedule for L = 1, it has to be verified at this point whether or not there is a
feasible schedule for L = 0. It can be shown easily that there does not exist a schedule in which every
job is completed on time. ||
8.4 The Maximum Lateness with Preemptions

Consider the two machine problem O2 | prmp | Lmax and assume that the jobs are indexed in increasing order of their due dates, i.e., d1 ≤ d2 ≤ ··· ≤ dn. Let

Ak = Σ_{j=1}^{k} p1j

and

Bk = Σ_{j=1}^{k} p2j.
The procedure to minimize the maximum lateness first considers the due dates as absolute
deadlines and then tries to generate a feasible solution. The jobs are scheduled in increasing order of
their deadlines, i.e., first job 1, then job 2, and so on. Suppose that jobs 1,...,j−1 have been scheduled
successfully and that job j has to be scheduled next. Let xj (yj) denote the total amount of time prior to
dj that machine 1 (2) is idle while machine 2 (1) is busy. Let zj denote the total amount of time prior to
dj that machines 1 and 2 are idle simultaneously. Note that xj, yj, and zj are not independent, since
xj+ zj = dj − Aj−1
and
yj+ zj = dj − Bj−1.
The minimum amount of processing that must be done on operation (1,j) while both machines are
available is max(0,p1j − xj) and the minimum amount of processing on operation (2,j) while both
machines are available is max(0, p2j − yj). It follows that job j can be scheduled successfully if and only if

max(0, p1j − xj) + max(0, p2j − yj) ≤ zj.

So job j can be scheduled successfully if and only if each one of the following three feasibility conditions holds:
Aj≤ dj
Bj≤ dj
Aj+ Bj ≤ 2dj − zj
These inequalities indicate that in order to obtain a feasible schedule an attempt has to be made in each iteration to minimize the value of zj. The smallest possible values of z1,...,zn are defined recursively by

z1 = d1,
zj = dj − dj−1 + max(0, zj−1 − p1,j−1 − p2,j−1),   j = 2,...,n.
In order to verify the existence of a feasible schedule, the values of z1,...,zn have to be computed
recursively and for each zj it has to be checked whether it satisfies the third one of the feasibility
conditions. There exists a feasible schedule if all the zj satisfy the conditions.
In order to minimize Lmax a parametrized version of the preceding computation has to be done.
Replace each dj by dj + L, where L is a free parameter. The smallest value of L for which there exists a
feasible schedule is equal to the minimum value of Lmax that can be achieved with the original due
dates dj.
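A small sketch of the feasibility test and of the parametrized search, assuming the jobs are already indexed in increasing order of their due dates; the data in the call and the search bounds are illustrative, not from the text.

```python
def feasible_two_machine(p1, p2, d):
    """Feasibility check for O2 | prmp with deadlines d, using the recursion above.
    Jobs are assumed to be indexed in increasing order of their deadlines."""
    n = len(d)
    A = B = 0
    z = d[0]                                   # z_1 = d_1
    for j in range(n):
        if j > 0:
            z = d[j] - d[j - 1] + max(0, z - p1[j - 1] - p2[j - 1])
        A += p1[j]                             # A_j
        B += p2[j]                             # B_j
        if A > d[j] or B > d[j] or A + B > 2 * d[j] - z:
            return False
    return True

def min_Lmax(p1, p2, d, lo=-100, hi=100):
    """Smallest integer L for which the deadlines d_j + L are feasible (binary search);
    the initial bounds lo and hi are assumptions chosen wide enough for the instance."""
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible_two_machine(p1, p2, [dj + mid for dj in d]):
            hi = mid
        else:
            lo = mid + 1
    return lo

# Illustrative data; jobs already sorted by due date.  Prints 1.
print(min_Lmax([2, 3, 4], [3, 2, 2], [4, 7, 10]))
```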
It turns out that there exists also a polynomial time algorithm for the more general open shop with
m machines, even when the jobs have different release dates, that is, Om | rj,prmp | Lmax. Again, as in
the case with 2 machines, the due dates dj are considered deadlines d¯j, and an attempt is made to find
a feasible schedule where each job is completed before or at its due date. Let
a1 < a2 < ··· < ap+1 denote the ordered collection of all distinct release dates rj and deadlines d̄j. So there are p intervals [ak, ak+1]. Let Ik denote the length of interval k, that is, Ik = ak+1 − ak.
Let the decision variable xijk denote the amount of time that operation (i,j) is processed during interval k. Consider the following linear program:

max Σ_{k=1}^{p} Σ_{i=1}^{m} Σ_{j=1}^{n} xijk
subject to
The constraints on xijk ensure that no job is assigned to a machine either before its release date or after
its due date. An initial feasible solution for this problem is clearly xijk = 0. However, since the objective
is to maximize the sum of the xijk, the third inequality is tight under the optimal solution assuming
there exists a feasible solution for the scheduling problem.
If there exists a feasible solution for the linear program, then there exists a schedule with all jobs
completed on time. However, the solution of the linear program only gives the optimal values for the
decision variables xijk. It does not specify how the operations should be scheduled within the interval
[ak,ak+1]. This scheduling problem within each interval can be solved as follows: consider interval k as
an independent open shop problem with the processing time of operation (i,j) being the value xijk that
came out of the linear program. The objective for the open shop scheduling problem for interval k is
equivalent to the minimization of the makespan, i.e., Om | prmp | Cmax. The polynomial algorithm
described in the previous section can then be applied to each interval separately.
If the outcome of the linear program indicates that no feasible solution exists, then (similar to the m
= 2 case) a parametrized version of the entire procedure has to be carried out. Replace each d¯j by d¯j +
L, where L is a free parameter. The smallest value of L for which there exists a feasible schedule is equal
to the minimum value of Lmax that can be achieved with the original due dates dj.
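Here is a sketch of how this linear program could be set up with the PuLP modeling library (an assumed external dependency). The exact constraint set is an interpretation of the description above: a job is on at most one machine per interval, a machine processes at most one job per interval, no operation exceeds its processing requirement, and variables exist only for intervals inside the job's release/deadline window. The data is that of Example 8.4.1 below with the deadlines dj + 2, which turn out to be feasible.

```python
import pulp

def build_lp(p, r, dbar, breakpoints):
    """Sketch of the interval LP for Om | rj, prmp | Lmax feasibility.

    p           : m x n matrix of processing times p_ij
    r, dbar     : release dates and deadlines of the n jobs
    breakpoints : sorted list a_1 < ... < a_{p+1} of all distinct r_j and dbar_j
    """
    m, n = len(p), len(p[0])
    K = len(breakpoints) - 1
    I = [breakpoints[k + 1] - breakpoints[k] for k in range(K)]   # interval lengths
    allowed = [[k for k in range(K)                                # intervals inside the
                if r[j] <= breakpoints[k] and breakpoints[k + 1] <= dbar[j]]
               for j in range(n)]                                  # job's time window
    x = {(i, j, k): pulp.LpVariable(f"x_{i}_{j}_{k}", lowBound=0)
         for i in range(m) for j in range(n) for k in allowed[j]}
    lp = pulp.LpProblem("open_shop_preemptive_Lmax", pulp.LpMaximize)
    lp += pulp.lpSum(x.values())                                   # maximize assigned time
    for j in range(n):                                             # one machine at a time
        for k in allowed[j]:
            lp += pulp.lpSum(x[i, j, k] for i in range(m)) <= I[k]
    for i in range(m):                                             # one job at a time
        for k in range(K):
            vars_ik = [x[i, j, k] for j in range(n) if k in allowed[j]]
            if vars_ik:
                lp += pulp.lpSum(vars_ik) <= I[k]
    for i in range(m):                                             # do not exceed p_ij
        for j in range(n):
            lp += pulp.lpSum(x[i, j, k] for k in allowed[j]) <= p[i][j]
    return lp, x

p = [[1, 2, 2, 2, 3], [3, 1, 2, 2, 1], [2, 1, 1, 2, 1]]
lp, x = build_lp(p, r=[1, 1, 3, 3, 3], dbar=[11, 9, 8, 9, 11],
                 breakpoints=[1, 3, 8, 9, 11])
lp.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.value(lp.objective))   # 26 = sum of all p_ij, so a feasible schedule exists
```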
Example 8.4.1 (Minimizing Maximum Lateness with Preemptions). Consider the following instance of O3 |
rj,prmp | Lmax with 3 machines and 5 jobs.
jobs 1 2 3 4 5
p1j 12223
p2j 31221
p3j 21121
rj 11333
dj 97679
There are 4 intervals that are determined by a1 = 1, a2 = 3, a3 = 6, a4 = 7, a5 = 9. The lengths of the four
intervals are I1 = 2, I2 = 3, I3 = 1, and I4 = 2. There are 4 × 3 × 5 = 60 decision variables xijk.
The first set of constraints of the linear program has 20 constraints. The first one of this set, i.e., j = 1, k = 1, is

x111 + x211 + x311 ≤ 2.
The second set of constraints has 12 constraints. The first one of this set, i.e., i = 1,k = 1, is x111 + x121
+ x131 + x141 + x151 = 2.
The third set of constraints has 15 constraints. The first one of this set, i.e., i = 1,j = 1, is
x111 + x112 + x113 + x114 + x115 = 1.
It turns out that this linear program has no feasible solution. Replacing dj by dj +1 yields another
linear program that also does not have a feasible solution. Replacing the original dj by dj + 2 results in
the following data set:
jobs 1 2 3 4 5
jobs 1  2 3 4 5
p1j  1  2 2 2 3
p2j  3  1 2 2 1
p3j  2  1 1 2 1
rj   1  1 3 3 3
d̄j   11 9 8 9 11
There are 4 intervals that are determined by a1 = 1, a2 = 3, a3 = 8, a4 = 9, a5 = 11. The lengths of the
four intervals are I1 = 2, I2 = 5, I3 = 1, and I4 = 2. The resulting linear program has feasible solutions
and the optimal solution is the following:
x111 = 1 x211 = 0 x311 = 1
x121 = 1 x221 = 1 x321 = 0
x131 = 0 x231 = 0 x331 = 0
x141 = 0 x241 = 0 x341 = 0
x151 = 0 x251 = 0 x351 = 0
x112 = 0 x212 = 0 x312 = 1
x122 = 1 x222 = 0 x322 = 0
x132 = 2 x232 = 2 x332 = 1
x142 = 1 x242 = 2 x342 = 2
x152 = 1 x252 = 1 x352 = 1
x113 = 0 x213 = 1 x313 = 0
x123 = 0 x223 = 0 x323 = 1
x133 = 0 x233 = 0 x333 = 0
x143 = 1 x243 = 0 x343 = 0
x153 = 0 x253 = 0 x353 = 0
x114 = 0 x214 = 2 x314 = 0
x124 = 0 x224 = 0 x324 = 0
x134 = 0 x234 = 0 x334 = 0
x144 = 0 x244 = 0 x344 = 0
x154 = 2 x254 = 0 x354 = 0
Each one of the four intervals has to be analyzed now as a separate O3 | prmp | Cmax problem. Consider, for example, the second interval [3,8], i.e., the xij2. The O3 | prmp | Cmax problem for this interval contains the following data (read off from the xij2 values above):

jobs      1 2 3 4 5
machine 1 0 1 2 1 1
machine 2 0 0 2 2 1
machine 3 1 0 1 2 1
Applying the algorithm described in Section 8.2 results in the schedule presented in Figure 8.7
(which turns out to be non-preemptive). The schedules in the other three intervals can be determined
very easily. ||
[Fig. 8.7 Schedule for the interval [3,8]: machine 1 processes jobs 4, 2, 3, 5; machine 2 processes jobs 3, 5, 4; machine 3 processes jobs 1, 4, 5, 3]
8.5 The Number of Tardy Jobs

It can be shown easily that the set of jobs that are completed on time in an optimal schedule belongs to a set k∗, k∗ + 1,...,n. So the search for an optimal schedule has two aspects. First, it has to be determined what the optimal value of k∗ is, and second, given k∗, a schedule has to be constructed in which each job of this set finishes on time.
completed on time, a schedule can be generated as follows: Consider the problem Om | rj,pij = 1 | Cmax,
which is a special case of the Om | rj,pij = 1 | Lmax problem that is solvable by the polynomial time
algorithm described in Section 8.3. Set rj in this corresponding problem equal to dmax−dj in the
original problem. In essence, the Om | rj,pij = 1 | Cmax problem is a time reversed version of the
original Om | pij = 1 | ΣUj problem. If for the makespan minimization problem a schedule can be found with a makespan less than dmax, then the reversed schedule is applicable to the Om | pij = 1 | ΣUj problem
with all jobs completing their processing on time.
8.6 Discussion
This chapter, as several other chapters in this book, focuses mainly on models that are polynomial time
solvable. Most open shop models tend to be NP-hard. For example, very little can be said about the total
completion time objective.
The Om || ΣCj problem is strongly NP-hard when m ≥ 2. The Om | prmp | ΣCj
In the same way that a flow shop can be generalized to a flexible flow shop, an open shop can be
generalized to a flexible open shop. The fact that the flexible flow shop allows for few structural results
gives already an indication that it may be hard to obtain results for the flexible open shop. Even the
proportionate cases, i.e., pij = pj for all i or pij = pi for all j, are hard to analyze.
Another class of models that is closely related to open shops has recently received a considerable amount of attention in the literature. This class of models is typically referred to as concurrent open shops or open shops with job overlap. In these open shops the processing times of any given job on the different machines are allowed to overlap in time (in contrast to the conventional open shops, where they are not allowed to overlap). This class of models is at times also referred to as the class of order
scheduling models. The motivation is based on the following: Consider a facility with m different
machines in parallel and each machine being able to produce a specific type of product. A customer
places an order requesting a certain quantity of each product type. After all the items for a given
customer have been produced, the entire order can be shipped to the customer.
Exercises (Computational)
8.1. Consider the following instance of O2 || Cmax and determine the number of optimal schedules that
are non-delay.
jobs 1 2  3  4
p1j  9 7  5  13
p2j  5 10 11 7
8.2. Consider the following instance of O5 || Cmax with 6 jobs and all processing times either 0 or 1.
Find an optimal schedule.
jobs 1 2 3 4 5 6
p1j  1 0 0 1 1 1
p2j  1 1 1 0 1 0
p3j  0 1 0 1 1 1
p4j  1 1 1 0 0 1
p5j  1 1 1 1 0 0
8.3. Consider the proportionate open shop O4 | pij = pj | Cmax with 6 jobs. Compute the makespan
under the optimal schedule.
jobs 1 2 3 4 5 6
pj   3 5 6 6 8 9
8.4. Consider the problem O4 || Cmax and consider the Longest Total Remaining Processing on Other
Machines (LTRPOM) rule. Every time a machine is freed the job with the longest total remaining
processing time on all other machines, among available jobs, is selected for processing. Unforced
idleness is not allowed. Consider the following processing times.
jobs 1  2 3  4
p1j  5  5 13 0
p2j  5  7 3  8
p3j  12 5 7  0
p4j  0  5 0  15
(a) Apply the LTRPOM rule. Consider at time 0 first machine 1, then machine 2, followed by machines 3 and 4. Compute the makespan.
(b) Apply the LTRPOM rule. Consider at time 0 first machine 4, then machine 3, followed by machines 2 and 1. Compute the makespan.
(c) Find the optimal schedule and the minimum makespan.
8.5. Find an optimal schedule for the instance of O4 | prmp | Cmax with 4 jobs and with the same processing times as in Exercise 8.4.
8.6. Consider the following instance of O4 | rj,pij = 1 | Lmax with 4 machines and 7 jobs.
jobs 1 2 3 4 5 6 7
rj   0 1 2 2 3 4 5
dj   6 6 6 7 7 9 9
8.9. Consider the Linear Programming formulation of the instance in Exercise 8.7. Write out the
objective function. How many constraints are there?
8.10. Consider the following instance of Om | pij = 1 | ΣUj with 3 machines and 8 jobs.

jobs 1 2 3 4 5 6 7 8
dj   3 3 4 4 4 4 5 5
Find the optimal schedule and the maximum number of jobs completed on time.
Exercises (Theory)
8.11. Show that non-delay schedules for Om || Cmax have at most m − 1 idle times on one machine.
Show also that if there are m − 1 idle times on one machine there can be at most m − 2 idle times on any
other machine.
8.12. Consider the following rule for O2 || Cmax. Whenever a machine is freed, start processing the job
with the largest sum of remaining processing times on the two machines. Show, through a
counterexample, that this rule does not necessarily minimize the makespan.
8.13. Give an example of Om || Cmax where the optimal schedule is not nondelay.
8.14. Consider O2 || ΣCj. Show that the rule which always gives priority to the job with the smallest total remaining processing time is not necessarily optimal.
8.15. Consider O2 | prmp | ΣCj. Show that the rule which always gives preemptive priority to the job
with the smallest total remaining processing time is not necessarily optimal.
8.16. Consider Om || Cmax. The processing time of job j on machine i is either 0 or 1. Consider the
following rule: At each point in time select, from the machines that have not been assigned a job yet, the
machine that still has the largest number of jobs to do. Assign to that machine the job that still has to
undergo processing on the largest number of machines (ties may be broken arbitrarily). Show through
a counterexample that this rule does not necessarily minimize the makespan.
8.17. Consider a flexible open shop with two workcenters. Workcenter 1 consists of a single machine
and workcenter 2 consists of two identical machines. Determine whether or not LAPT minimizes the
makespan.
8.18. Consider the proportionate open shop Om | pij = pj | Cmax. Find the optimal schedule and prove
its optimality.
8.19. Consider the proportionate open shop Om | prmp, pij = pj | ΣCj. Find the optimal schedule and prove its optimality.
8.20. Consider the following two-machine hybrid of an open shop and a job shop. Job j has processing
time p1j on machine 1 and p2j on machine 2. Some jobs have to be processed first on machine 1 and
then on machine 2. Other jobs have to be processed first on machine 2 and then on machine 1. The
routing of the remaining jobs may be determined by the scheduler. Describe a schedule that minimizes
the makespan.
8.21. Find an upper and a lower bound for the makespan in an m machine open shop when
preemptions are not allowed. The processing time of job j on machine i is pij (i.e., no restrictions on the
processing times).
8.22. Compare Om | pj = 1 | γ with Pm | pj = 1,chains | γ in which there are n chains consisting of m jobs
each. Let Z1 denote the value of the objective function in the open shop problem and let Z2 denote the
value of the objective function in the parallel machine problem. Find conditions under which Z1 = Z2
and give examples where Z1 > Z2.