Markov Chains
• If m = 0
The Chapman-Kolmogorov equations
• This is a generalization of the result obtained earlier
• Proof
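The equations themselves did not survive extraction; in standard notation they read:

```latex
p_{ij}^{(n+m)} \;=\; \sum_{k} p_{ik}^{(n)}\, p_{kj}^{(m)}
\qquad\Longleftrightarrow\qquad
P^{(n+m)} = P^{(n)} P^{(m)}
```

With m = 0 the sum collapses to p_ij^(n), since p_kj^(0) = δ_kj; this is the special case mentioned above.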
n-step transition matrix
Transition diagrams
• Suppose that when tossing a coin, the outcome depended on the previous toss
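The coin example can be made concrete with a small numerical sketch. The persistence probability 0.7 below is an illustrative assumption, not a value from the slides:

```python
# Two-state chain for a "sticky" coin: the next toss tends to repeat
# the previous outcome. States: 0 = heads, 1 = tails.
# The 0.7/0.3 persistence probabilities are illustrative assumptions.
P = [[0.7, 0.3],
     [0.3, 0.7]]

def mat_mul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def n_step(P, n):
    """n-step transition matrix P^n (n >= 1)."""
    M = P
    for _ in range(n - 1):
        M = mat_mul(M, P)
    return M

P2 = n_step(P, 2)    # two-step transition probabilities
P10 = n_step(P, 10)  # after many tosses the first toss is nearly forgotten
```

Raising P to higher powers shows the influence of the initial toss fading: for this chain P^n approaches a matrix with identical rows.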
Classes of states
• Reachable. A state j is reachable from some state i iff p_ij^(n) > 0 for some n ≥ 0
Note that the matrix gives us reachability information between states
• Two states communicate if they are reachable from each other
• The notion of communication partitions the state space into classes
• Two states that communicate belong to the same class
• All states in the same class communicate with one another
• We say that a class is closed if no state outside the class can be reached from any state inside the class
Irreducible Markov chains
• These are Markov chains in which all states communicate
• This implies that the states form a single class
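The class structure can be computed mechanically from the transition matrix. The 4-state matrix below is an illustrative assumption (states 0 and 1 communicate; states 2 and 3 form a closed class):

```python
# Partition the state space of a chain into communicating classes.
# Illustrative 4-state transition matrix (not from the slides).
P = [[0.5, 0.5, 0.0, 0.0],
     [0.4, 0.4, 0.2, 0.0],
     [0.0, 0.0, 0.3, 0.7],
     [0.0, 0.0, 0.6, 0.4]]

def reachable(P):
    """reach[i][j] is True iff state j is reachable from state i."""
    n = len(P)
    reach = [[i == j or P[i][j] > 0 for j in range(n)] for i in range(n)]
    for k in range(n):                     # Warshall's transitive closure
        for i in range(n):
            for j in range(n):
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    return reach

def communicating_classes(P):
    """Group together states i, j that are mutually reachable."""
    reach = reachable(P)
    n = len(P)
    classes, seen = [], set()
    for i in range(n):
        if i in seen:
            continue
        cls = {j for j in range(n) if reach[i][j] and reach[j][i]}
        seen |= cls
        classes.append(sorted(cls))
    return classes

classes = communicating_classes(P)
```

Here the class {2, 3} is closed: it is reachable from {0, 1}, but no state outside it can be reached from inside it, so the chain is not irreducible.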
First passage probability
• Let f_ij(n) be the conditional probability that, given that the process is currently in state i, the first arrival at state j occurs in exactly n transitions
(First passage probability from state i to state j in n transitions)
• We know that
• And if the limiting-state probabilities exist and do not depend on the initial state, then
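The formulas referred to above are not legible in the extraction; in standard notation (f for first passage, π for limiting probabilities) they are:

```latex
f_{ij}^{(1)} = p_{ij}, \qquad
p_{ij}^{(n)} \;=\; \sum_{k=1}^{n} f_{ij}^{(k)}\, p_{jj}^{(n-k)}, \qquad
\sum_{n=1}^{\infty} f_{ij}^{(n)} \le 1, \qquad
\pi_j \;=\; \lim_{n \to \infty} p_{ij}^{(n)}
```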
Obtaining the limiting-state probabilities
• Defining the vector of limiting-state probabilities as π
From the first equation
Substituting into the second
Compute
• Proof
Substituting
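One simple way to obtain the limiting-state probabilities numerically is power iteration on π = πP. The 3-state matrix below is the same one used later in the occupancy-time example:

```python
# Limiting-state probabilities: iterate pi <- pi P until it stops changing.
P = [[0.4, 0.5, 0.1],
     [0.3, 0.3, 0.4],
     [0.3, 0.2, 0.5]]

def limiting_probabilities(P, tol=1e-12):
    """Fixed point of pi = pi P with sum(pi) = 1, by power iteration.
    Converges for a regular (irreducible, aperiodic) chain."""
    n = len(P)
    pi = [1.0] + [0.0] * (n - 1)        # start deterministically in state 1
    while True:
        new = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        if max(abs(a - b) for a, b in zip(new, pi)) < tol:
            return new
        pi = new

pi = limiting_probabilities(P)
```

For this particular matrix every column also sums to 1 (it is doubly stochastic), so the limiting distribution is uniform: π = (1/3, 1/3, 1/3).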
Example
Expected sojourn time in a state
• Let T_i be the number of time units that the process remains in state i before leaving it
Let the Z-transform of the distribution of T_i be
In matrix form:
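The sojourn time in state i is geometric; a plausible reconstruction of the missing formulas (standard results, with T_i as above):

```latex
P[T_i = n] = p_{ii}^{\,n-1}\,(1 - p_{ii}), \qquad
E[T_i] = \frac{1}{1 - p_{ii}}, \qquad
G_{T_i}(z) = \sum_{n=1}^{\infty} z^n\, P[T_i = n] = \frac{z\,(1 - p_{ii})}{1 - z\, p_{ii}}
```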
Transient analysis of discrete-time Markov chains
We obtain the solution through the inverse of G(z), which yields two components: a constant term and a transient term
The constant term has the property that all of its rows are identical
And its elements are the limiting-state probabilities
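In symbols, the standard form of this decomposition (reconstructed here, since the slide's equations are missing) is:

```latex
G(z) = \sum_{n=0}^{\infty} P^n z^n = [\,I - zP\,]^{-1}, \qquad
P^n = C + T(n)
```

where C is the constant matrix whose identical rows are the limiting-probability vector π, and the transient term T(n) decays to zero geometrically.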
Example
Obtain, for a Markov chain with the following state-transition matrix
Example
Since the time the process spends in each state is 1, this equation says that the mean time is the time spent in state i plus the mean first-passage time from state k to j, given that the state following i is k
Whence:
Example
Determine
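The recursion described in words above is, in symbols (with m_ij the mean first passage time from state i to state j):

```latex
m_{ij} \;=\; 1 + \sum_{k \neq j} p_{ik}\, m_{kj}
```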
Occupancy times
(Source: Markov Processes for Stochastic Modeling.)
       ⎡ φ11(n)  φ12(n)  φ13(n)  ...  φ1N(n) ⎤
       ⎢ φ21(n)  φ22(n)  φ23(n)  ...  φ2N(n) ⎥
Φ(n) = ⎢ φ31(n)  φ32(n)  φ33(n)  ...  φ3N(n) ⎥
       ⎢   ...     ...     ...   ...    ...  ⎥
       ⎣ φN1(n)  φN2(n)  φN3(n)  ...  φNN(n) ⎦
Then we have that

Φ(n) = Σ_{r=0}^{n} P^r

Example 3.8  Consider the transition probability matrix associated with Example 3.6. We would like to obtain the mean occupancy time φ13(5).

    ⎡ 0.4  0.5  0.1 ⎤
P = ⎢ 0.3  0.3  0.4 ⎥
    ⎣ 0.3  0.2  0.5 ⎦

Example
• Determine the mean occupancy time for the Markov process whose transition matrix is the P given above

Solution: The matrix Φ(5) is given by

Φ(5) = Σ_{r=0}^{5} P^r
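The sum Φ(5) = Σ_{r=0}^{5} P^r can be checked numerically; plain-Python matrix products keep the sketch dependency-free:

```python
# Occupancy-time matrix Phi(n) = sum_{r=0}^{n} P^r for the example's
# 3-state transition matrix; phi13(5) is entry (1, 3) of Phi(5).
P = [[0.4, 0.5, 0.1],
     [0.3, 0.3, 0.4],
     [0.3, 0.2, 0.5]]

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def occupancy(P, n):
    """Phi(n) = I + P + P^2 + ... + P^n."""
    size = len(P)
    phi = [[float(i == j) for j in range(size)] for i in range(size)]    # P^0 = I
    power = [[float(i == j) for j in range(size)] for i in range(size)]
    for _ in range(n):
        power = mat_mul(power, P)
        phi = [[phi[i][j] + power[i][j] for j in range(size)]
               for i in range(size)]
    return phi

phi5 = occupancy(P, 5)
phi13 = phi5[0][2]  # mean visits to state 3 during steps 0..5, starting from 1
```

Each P^r has unit row sums, so every row of Φ(5) sums to 6; the entry φ13(5) works out to about 1.3827.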
Continuing the example: the fundamental matrix

With the states ordered so that the absorbing states come first,

    ⎡ I  0 ⎤           ⎡ I                         0   ⎤
P = ⎣ R  Q ⎦     P^n = ⎣ (I + Q + ··· + Q^(n−1))R  Q^n ⎦

• A primary parameter of interest is the quantity N_ij, which is the mean number of times the process is in transient state j before hitting an absorbing state, given that it starts in transient state i
• Note that the emphasis is on both state i and state j being transient states. If, for example, state i is an absorbing state and i ≠ j, the quantity is zero. Similarly, if state j is an absorbing state and i ≠ j, the quantity is infinity if state j is accessible from state i
• The following theorem, which is proved in Grinstead and Snell (1997), establishes the relationship between the matrices N = [N_ij] and Q:

N = Σ_{k=0}^{∞} Q^k = [I − Q]⁻¹
Time to absorption
• The mean time to absorption can also be computed directly from the fundamental matrix. The following theorem, which is proved in Grinstead and Snell, defines how to compute the mean times to absorption for the different states
• Theorem 3.3  Let µi denote the mean number of transitions until the process hits an absorbing state, given that the chain starts in state i, and let M be the column vector whose ith entry is µi. Then

M = N1 = [I − Q]⁻¹ 1
• where M is a column vector whose ith element is µi, and 1 is a column vector of all 1's

Example 3.9  Consider the Markov chain whose state-transition diagram is shown in Figure 3.7. Find µ3.

Example
• For the following Markov chain, suppose that A = {4} and T = {1, 2, 3}

Figure 3.7. State-transition diagram for Example 3.9. (Read off the diagram: p11 = 1/3, p12 = 2/3, p23 = 4/5, p24 = 1/5, p31 = 1; state 4 is absorbing.)

• Solution: Using the recurrence directly, we obtain the system of equations

µ3 = 1 + p31 µ1 + p32 µ2 = 1 + µ1
µ2 = 1 + p21 µ1 + p23 µ3 = 1 + (4/5) µ3
µ1 = 1 + p11 µ1 + p12 µ2 = 1 + (1/3) µ1 + (2/3) µ2

whence:

µ1 = 16.5        µ2 = 15.0        µ3 = 17.5
Finally, the entries of the fundamental matrix give the same result:

• Thus,

         ⎡ 15/2  5  4 ⎤ ⎡ 1 ⎤   ⎡ 33/2 ⎤   ⎡ 16.5 ⎤
M = N1 = ⎢  6    5  4 ⎥ ⎢ 1 ⎥ = ⎢  15  ⎥ = ⎢ 15.0 ⎥
         ⎣ 15/2  5  5 ⎦ ⎣ 1 ⎦   ⎣ 35/2 ⎦   ⎣ 17.5 ⎦
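The system of equations above can be solved exactly by substitution; the sketch below carries that out with rational arithmetic:

```python
from fractions import Fraction as F

# Mean time to absorption for Example 3.9: T = {1, 2, 3}, A = {4},
# with p11 = 1/3, p12 = 2/3, p23 = 4/5, p24 = 1/5, p31 = 1
# (read off the state-transition diagram).
#
# mu3 = 1 + mu1
# mu2 = 1 + (4/5) mu3 = 9/5 + (4/5) mu1
# mu1 = 1 + (1/3) mu1 + (2/3) mu2
#     = 1 + (2/3)(9/5) + [(1/3) + (2/3)(4/5)] mu1
lhs = 1 - F(1, 3) - F(2, 3) * F(4, 5)    # coefficient of mu1 -> 2/15
rhs = 1 + F(2, 3) * (1 + F(4, 5))        # constant term      -> 11/5
mu1 = rhs / lhs
mu2 = 1 + F(4, 5) * (1 + mu1)
mu3 = 1 + mu1
```

This reproduces µ1 = 33/2 = 16.5, µ2 = 15, µ3 = 35/2 = 17.5, matching the fundamental-matrix computation.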
Another example

Example 3.10  Consider the Markov chain whose state-transition diagram is shown in Figure 3.8. Find the µi for i = 2, 3, 4.

• For the following Markov chain, determine µi for i = 2, 3, 4

Figure 3.8. State-transition diagram for Example 3.10. (States 1-5 in a row; states 1 and 5 are absorbing; p21 = 1/2, p23 = 1/2, p32 = 1/3, p34 = 2/3, p43 = 1/4, p45 = 3/4.)
Solution: Because the transient states are T = {2, 3, 4} and the absorbing states are A = {1, 5}, the P, Q, and R matrices are as follows (states ordered 1, 5, 2, 3, 4):

    ⎡  1    0    0    0    0  ⎤
    ⎢  0    1    0    0    0  ⎥
P = ⎢ 1/2   0    0   1/2   0  ⎥
    ⎢  0    0   1/3   0   2/3 ⎥
    ⎣  0   3/4   0   1/4   0  ⎦

    ⎡  0   1/2   0  ⎤        ⎡ 1/2   0  ⎤
Q = ⎢ 1/3   0   2/3 ⎥    R = ⎢  0    0  ⎥
    ⎣  0   1/4   0  ⎦        ⎣  0   3/4 ⎦
Example (continued)

Thus,

        ⎡  1    −1/2    0   ⎤
I − Q = ⎢ −1/3    1    −2/3 ⎥        |I − Q| = 2/3
        ⎣  0    −1/4    1   ⎦

                     ⎡ 5/6   1/2   1/3 ⎤   ⎡ 5/4  3/4  1/2 ⎤
N = [I − Q]⁻¹ = (3/2)⎢ 1/3    1    2/3 ⎥ = ⎢ 1/2  3/2   1  ⎥
                     ⎣ 1/12  1/4   5/6 ⎦   ⎣ 1/8  3/8  5/4 ⎦
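The inverse can be verified exactly with rational arithmetic; the Gauss-Jordan routine below is a generic sketch, not taken from the source:

```python
from fractions import Fraction as F

# Exact fundamental matrix N = (I - Q)^(-1) for Example 3.10.
Q = [[F(0), F(1, 2), F(0)],
     [F(1, 3), F(0), F(2, 3)],
     [F(0), F(1, 4), F(0)]]

def inverse(A):
    """Invert a square matrix of Fractions by Gauss-Jordan elimination."""
    n = len(A)
    # Augment A with the identity matrix.
    M = [row[:] + [F(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                factor = M[r][col]
                M[r] = [x - factor * y for x, y in zip(M[r], M[col])]
    return [row[n:] for row in M]

size = len(Q)
I_minus_Q = [[F(int(i == j)) - Q[i][j] for j in range(size)]
             for i in range(size)]
N = inverse(I_minus_Q)
M_times = [sum(row) for row in N]   # mean absorption times mu2, mu3, mu4
```

The row sums of N give the mean times to absorption directly: µ2 = 5/2, µ3 = 3, µ4 = 7/4.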
From this we obtain

         ⎡ 5/4  3/4  1/2 ⎤ ⎡ 1 ⎤   ⎡ 5/2 ⎤   ⎡ 2.5  ⎤
M = N1 = ⎢ 1/2  3/2   1  ⎥ ⎢ 1 ⎥ = ⎢  3  ⎥ = ⎢ 3.0  ⎥
         ⎣ 1/8  3/8  5/4 ⎦ ⎣ 1 ⎦   ⎣ 7/4 ⎦   ⎣ 1.75 ⎦

That is, µ2 = 2.5, µ3 = 3, µ4 = 1.75.

3.10.2 Absorption Probabilities
Absorption probability

• For an absorbing Markov chain, we denote by bij the probability that a chain that starts in transient state i is absorbed into state j
• Let B be the m × k matrix whose entries are the bij. Then B is given by

B = [I − Q]⁻¹ R = NR

where N is the fundamental matrix and R is the m × k matrix whose entries are the transition probabilities from the transient states to the absorbing states.

Example 3.11  For the Markov chain whose state-transition diagram is shown in Figure 3.8, find the absorption probabilities bij for i = 2, 3, 4 and j = 1, 5.
Example

Solution: The matrix B is given by

         ⎡ 5/4  3/4  1/2 ⎤ ⎡ 1/2   0  ⎤   ⎡  5/8    3/8  ⎤
B = NR = ⎢ 1/2  3/2   1  ⎥ ⎢  0    0  ⎥ = ⎢  1/4    3/4  ⎥
         ⎣ 1/8  3/8  5/4 ⎦ ⎣  0   3/4 ⎦   ⎣ 1/16  15/16  ⎦

That is, b21 = 5/8, b25 = 3/8, b31 = 1/4, b35 = 3/4, b41 = 1/16, b45 = 15/16.
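The product B = NR can be checked exactly; N and R below are the example's matrices restated so the sketch is self-contained:

```python
from fractions import Fraction as F

# Absorption probabilities B = N R for Example 3.11 (the chain of
# Figure 3.8), with N the fundamental matrix computed earlier.
N = [[F(5, 4), F(3, 4), F(1, 2)],
     [F(1, 2), F(3, 2), F(1)],
     [F(1, 8), F(3, 8), F(5, 4)]]
R = [[F(1, 2), F(0)],
     [F(0), F(0)],
     [F(0), F(3, 4)]]

# B[i][j]: probability of absorption into absorbing state j (columns:
# states 1 and 5), starting from transient state i (rows: states 2, 3, 4).
B = [[sum(N[i][k] * R[k][j] for k in range(3)) for j in range(2)]
     for i in range(3)]
```

Each row of B sums to 1, as it must: starting from any transient state, absorption into state 1 or state 5 is certain.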
Reversible Markov chains

3.11 Reversible Markov Chains

• A Markov chain {Xn} is defined to be a reversible Markov chain if the sequence of states ..., Xn+1, Xn, Xn−1, ... has the same probabilistic structure as the sequence ..., Xn−1, Xn, Xn+1, ...
• That is, the sequence of states looked at backward in time has the same probabilistic structure as the sequence running forward in time
• Consider a Markov chain {Xn} with limiting state probabilities {π1, π2, π3, ...} and transition probabilities pij. Suppose that, starting at time n, we
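The chunk ends mid-sentence; the construction it is introducing (the reversed chain, standard in this setting) is:

```latex
p_{ij}^{*} \;=\; P[X_n = j \mid X_{n+1} = i] \;=\; \frac{\pi_j\, p_{ji}}{\pi_i},
\qquad
\{X_n\} \text{ reversible} \iff \pi_i\, p_{ij} = \pi_j\, p_{ji} \ \ \forall\, i, j
```

The condition on the right is the detailed balance equations: the chain is reversible exactly when the reversed transition probabilities coincide with the forward ones.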