Markov chains
If m = 0
The Chapman-Kolmogorov equations
This is a generalization of the result obtained earlier.
Proof
The n-step transition matrix
Transition diagrams
Suppose that, when tossing a coin, each outcome depended on the previous toss.
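The n-step transition probabilities for such a chain can be computed by raising the one-step matrix to the n-th power, which is exactly what the Chapman-Kolmogorov equations justify. A minimal sketch; the 0.7/0.3 "sticky coin" probabilities are made-up values for illustration:

```python
import numpy as np

# Hypothetical "sticky" coin: the next toss repeats the previous
# outcome with probability 0.7 (states: 0 = heads, 1 = tails).
P = np.array([[0.7, 0.3],
              [0.3, 0.7]])

# Chapman-Kolmogorov: the n-step transition matrix is P^n.
P5 = np.linalg.matrix_power(P, 5)
print(P5)  # entry (i, j) = probability of being in state j after 5 tosses from i
```

Each row of P^n still sums to 1, and as n grows the rows converge toward the limiting distribution.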
Classes of states
Reachable. A state j is reachable from a state i iff p_ij^(n) > 0 for some n ≥ 0.
Note that the transition matrix gives us information about reachability between states.
Two states communicate if each is reachable from the other.
The communication relation partitions the state space into classes:
Two states that communicate belong to the same class.
All states in a class communicate with one another.
We say that a class is closed if no state outside the class can be reached from any state within it.
Irreducible Markov chains
These are Markov chains in which all states communicate.
This implies that the states form a single class.
First passage probability
Let f_ij^(n) be the conditional probability that, given that the process is currently in state i, it reaches state j for the first time in exactly n transitions.
(This is the first passage probability from state i to state j in n transitions.)
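The first passage probabilities satisfy the standard recursion f_ij^(1) = p_ij and f_ij^(n) = Σ_{k≠j} p_ik f_kj^(n−1). A small sketch of that recursion; the 2-state matrix is an illustrative stand-in:

```python
import numpy as np

def first_passage(P, i, j, n):
    """f_ij(n): probability that, starting from i, the chain first
    reaches j in exactly n transitions.
    f(1) = p_ij; for n > 1, f(n) = sum over k != j of p_ik * f_kj(n-1)."""
    if n == 1:
        return P[i, j]
    return sum(P[i, k] * first_passage(P, k, j, n - 1)
               for k in range(P.shape[0]) if k != j)

P = np.array([[0.7, 0.3],   # toy 2-state chain (illustrative numbers)
              [0.3, 0.7]])
print(first_passage(P, 0, 1, 3))  # first reach state 1 at exactly step 3
```

For this chain, reaching state 1 first at step 3 means staying in state 0 twice and then leaving: 0.7 × 0.7 × 0.3 = 0.147.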
We know that the n-step transition probabilities are the entries of P^n, and if the limiting state probabilities exist and do not depend on the initial state, then π_j = lim_{n→∞} p_ij^(n).
Obtaining the limiting state probabilities
Defining the vector of limiting state probabilities as π = (π_1, π_2, ..., π_N), it must satisfy π = πP together with the normalization condition Σ_j π_j = 1.
From the first equation we express the π_j in terms of one another.
Substituting into the second (the normalization condition) yields the solution.
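The substitution above amounts to replacing one redundant equation of π = πP by the normalization row and solving the resulting linear system. A minimal sketch, using the 3-state transition matrix from the occupancy-time example in this handout:

```python
import numpy as np

def limiting_probabilities(P):
    """Solve pi = pi P together with sum(pi) = 1 as a linear system."""
    n = P.shape[0]
    A = P.T - np.eye(n)   # (P^T - I) pi = 0
    A[-1, :] = 1.0        # replace one (redundant) equation by sum(pi) = 1
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

P = np.array([[0.4, 0.5, 0.1],
              [0.3, 0.3, 0.4],
              [0.3, 0.2, 0.5]])
pi = limiting_probabilities(P)
print(pi)  # satisfies pi @ P == pi up to round-off
```

For this particular matrix the limiting distribution happens to be uniform, π = (1/3, 1/3, 1/3).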
Compute
Proof
Substituting
Example
Expected sojourn time in a state
Let T_i denote the number of time units that a process remains in state i before leaving it.
Consider the Z-transform of this quantity. In matrix form:
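The distribution behind the Z-transform is geometric: at each step the process stays in state i with probability p_ii and leaves with probability 1 − p_ii, so the mean sojourn time is 1/(1 − p_ii). A quick sketch with a Monte Carlo check; p_ii = 0.75 is an illustrative value:

```python
import random

def mean_sojourn(p_ii):
    """Mean sojourn time of a state with self-transition probability p_ii."""
    return 1.0 / (1.0 - p_ii)

# Monte Carlo check: simulate geometric holding times.
random.seed(1)
p = 0.75
samples = []
for _ in range(100_000):
    t = 1
    while random.random() < p:   # stay another step with probability p_ii
        t += 1
    samples.append(t)
print(mean_sojourn(p), sum(samples) / len(samples))  # both close to 4.0
```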
Transient analysis of discrete-time Markov chains
We obtain P^n by inverting G(z), obtaining two components: a constant term and a transient term.
The constant term has the property that all of its rows are identical,
and its elements are the limiting state probabilities.
Example
Obtain this decomposition for a Markov chain with the following state-transition matrix.
Since the time the process spends in each state per visit is 1, this equation says that the mean first passage time is the time spent in state i plus the mean first passage time from state k to state j, given that the state that follows i is k. From which:

m_ij = 1 + Σ_{k≠j} p_ik m_kj
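For a fixed target state j, the recursion m_ij = 1 + Σ_{k≠j} p_ik m_kj is a linear system in the unknowns m_kj, k ≠ j. A minimal sketch using the 3-state matrix from the occupancy-time example in this handout:

```python
import numpy as np

def mean_first_passage_to(P, j):
    """Mean first passage time m_ij to target j, for every start state i != j.

    Deleting row/column j from P leaves the transitions that avoid j,
    and the recursion becomes (I - Q) m = 1."""
    n = P.shape[0]
    idx = [k for k in range(n) if k != j]    # unknowns: m_kj for k != j
    Q = P[np.ix_(idx, idx)]                  # transitions that avoid state j
    m = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
    out = np.zeros(n)                        # out[j] = 0 (recurrence time is separate)
    out[idx] = m
    return out

P = np.array([[0.4, 0.5, 0.1],
              [0.3, 0.3, 0.4],
              [0.3, 0.2, 0.5]])
print(mean_first_passage_to(P, 2))  # mean steps to first reach state 3 (index 2)
```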
Example
Determine
Occupancy times

Let Φ(n) = [φ_ij(n)] be the N × N matrix of occupancy times, where φ_ij(n) is the expected number of times the process visits state j during the first n transitions, given that it starts in state i:

Φ(n) = | φ11(n)  φ12(n)  φ13(n)  ...  φ1N(n) |
       | φ21(n)  φ22(n)  φ23(n)  ...  φ2N(n) |
       | φ31(n)  φ32(n)  φ33(n)  ...  φ3N(n) |
       |   ...     ...     ...   ...    ...  |
       | φN1(n)  φN2(n)  φN3(n)  ...  φNN(n) |

Then we have that

Φ(n) = Σ_{r=0}^{n} P^r

Example 3.8 (Markov Processes for Stochastic Modeling): Consider the transition probability matrix associated with Example 3.6. We would like to obtain φ13(5).

P = | 0.4  0.5  0.1 |
    | 0.3  0.3  0.4 |
    | 0.3  0.2  0.5 |

Solution: The matrix Φ(5) is given by

Φ(5) = Σ_{r=0}^{5} P^r

Example
Determine the mean occupancy times for the Markov process with the transition probability matrix P above.
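The formula Φ(n) = Σ_{r=0}^{n} P^r can be evaluated directly. A minimal sketch for the Φ(5) computation of the example:

```python
import numpy as np

def occupancy(P, n):
    """Phi(n) = sum of P^r for r = 0..n: phi_ij(n) is the expected number
    of visits to state j in the first n transitions (counting time 0),
    starting from state i."""
    return sum(np.linalg.matrix_power(P, r) for r in range(n + 1))

P = np.array([[0.4, 0.5, 0.1],
              [0.3, 0.3, 0.4],
              [0.3, 0.2, 0.5]])
Phi5 = occupancy(P, 5)
print(Phi5[0, 2])  # phi_13(5): occupancy of state 3 starting from state 1
```

As a sanity check, each row of Φ(n) sums to n + 1, since every one of the n + 1 time points is spent in some state.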
Example (continued)
The fundamental matrix

Writing P in canonical form with the absorbing states first,

P = | I  0 |        P^n = | I                          0   |
    | R  Q |              | (I + Q + ... + Q^{n-1})R   Q^n |

A primary parameter of interest is the quantity N_ij, which is the mean number of times the process is in transient state j before hitting an absorbing state, given that it starts in transient state i. Note that the emphasis is on both state i and state j being transient states. If, for example, state i is an absorbing state and i ≠ j, the quantity is zero. Similarly, if state j is an absorbing state and i ≠ j, then the quantity is infinity if state j is accessible from state i.

The following theorem, which is proved in Grinstead and Snell (1997), establishes the relationship between the matrices N = [N_ij] and Q.

Theorem 3.2:

N = Σ_{k=0}^{∞} Q^k = [I − Q]^{-1}
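Theorem 3.2 is straightforward to apply numerically. A minimal sketch, using the Q block that appears later in Example 3.10 (transient states 2, 3, 4):

```python
import numpy as np

def fundamental_matrix(Q):
    """N = (I - Q)^{-1}: N[i, j] is the mean number of visits to transient
    state j before absorption, starting from transient state i."""
    return np.linalg.inv(np.eye(Q.shape[0]) - Q)

# Transient-to-transient block Q from Example 3.10.
Q = np.array([[0.0, 1/2, 0.0],
              [1/3, 0.0, 2/3],
              [0.0, 1/4, 0.0]])
N = fundamental_matrix(Q)
print(N)
```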
Time to absorption

Grinstead and Snell showed that the times to absorption can be determined directly from the fundamental matrix, as follows.

Theorem 3.3: Let μ_i denote the number of transitions until the process hits an absorbing state, given that the chain starts in transient state i, and let M be the column vector whose ith entry is μ_i. Then

M = N1 = [I − Q]^{-1} 1

where M is a column vector whose ith element is μ_i and 1 is a column vector all of whose entries are 1.

Example 3.9: Consider the Markov chain whose state-transition diagram is shown in Figure 3.7. Find μ3.

Example
For the following Markov chain, suppose that A = {4} and T = {1, 2, 3}.

[Figure 3.7: state-transition diagram for Example 3.9, with transition probabilities 1/3, 2/3, 1/5, 4/5, and 1.]

Solution: If we use the direct recurrence, we obtain the following system of equations:

μ3 = 1 + p31 μ1 + p32 μ2 = 1 + μ1
μ2 = 1 + p21 μ1 + p23 μ3 = 1 + (4/5) μ3
μ1 = 1 + p11 μ1 + p12 μ2 = 1 + (1/3) μ1 + (2/3) μ2

From which:

μ1 = 16.5
μ2 = 15.0
μ3 = 17.5

Finally, using the fundamental matrix:

M = N1 = | 15/2  5  4 | | 1 |   | 33/2 |   | 16.5 |
         |   6   5  4 | | 1 | = |  15  | = | 15.0 |
         | 15/2  5  5 | | 1 |   | 35/2 |   | 17.5 |
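The same numbers can be reproduced by solving (I − Q)M = 1 directly. A sketch; since the figure itself is not reproduced here, the Q block below is inferred from the example's own equations (p11 = 1/3, p12 = 2/3, p23 = 4/5, p31 = 1, with the remaining 1/5 from state 2 going to the absorbing state 4):

```python
import numpy as np

# Transient block Q for Example 3.9 (T = {1, 2, 3}, A = {4}),
# reconstructed from the recurrence equations in the example.
Q = np.array([[1/3, 2/3, 0.0],
              [0.0, 0.0, 4/5],
              [1.0, 0.0, 0.0]])

# Mean times to absorption: solve (I - Q) M = 1 instead of inverting.
M = np.linalg.solve(np.eye(3) - Q, np.ones(3))
print(M)  # [16.5, 15.0, 17.5]
```

Using `solve` rather than forming N explicitly is the usual numerical choice when only M is needed.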
Another example

Example 3.10: Consider the Markov chain whose state-transition diagram is shown in Figure 3.8. Find μ_i for i = 2, 3, 4.

For the following Markov chain, determine μ_i for i = 2, 3, 4.

[Figure 3.8: state-transition diagram for Example 3.10; states 1 through 5, with transition probabilities 1, 1/2, 1/2, 1/3, 2/3, 1/4, 3/4, and 1.]

Solution: Because the transient states are T = {2, 3, 4} and the absorbing states are A = {1, 5}, the P, Q, and R matrices are as follows (states ordered 1, 5, 2, 3, 4):

P = | 1    0    0    0    0   |
    | 0    1    0    0    0   |
    | 1/2  0    0    1/2  0   |
    | 0    0    1/3  0    2/3 |
    | 0    3/4  0    1/4  0   |

Q = | 0    1/2  0   |        R = | 1/2  0   |
    | 1/3  0    2/3 |            | 0    0   |
    | 0    1/4  0   |            | 0    3/4 |
Example (continued)

Thus,

I − Q = |  1    −1/2   0   |
        | −1/3   1    −2/3 |
        |  0    −1/4   1   |

|I − Q| = 2/3

Then

N = [I − Q]^{-1} = (3/2) | 5/6   1/2  1/3 |   | 5/4  3/4  1/2 |
                         | 1/3   1    2/3 | = | 1/2  3/2  1   |
                         | 1/12  1/4  5/6 |   | 1/8  3/8  5/4 |

From this we obtain

M = N1 = | 5/4  3/4  1/2 | | 1 |   | 5/2 |
         | 1/2  3/2  1   | | 1 | = |  3  |
         | 1/8  3/8  5/4 | | 1 |   | 7/4 |

That is, μ2 = 2.5, μ3 = 3, μ4 = 1.75.
3.10.2 Absorption Probabilities

Absorption probability

For an absorbing Markov chain, we denote by b_ij the probability that the chain that starts in transient state i will be absorbed in state j. Let B be the m × k matrix whose entries are b_ij. Then B is given by

B = [I − Q]^{-1} R = NR

where N is the fundamental matrix and R is the m × k matrix whose entries are the transition probabilities from the transient states to the absorbing states.

Example 3.11: For the Markov chain whose state-transition diagram is shown in Figure 3.8, find the absorption probabilities b_ij for i = 2, 3, 4 and j = 1, 5.
Example

Solution:

B = NR = | 5/4  3/4  1/2 | | 1/2  0   |   | 5/8   3/8   |
         | 1/2  3/2  1   | | 0    0   | = | 1/4   3/4   |
         | 1/8  3/8  5/4 | | 0    3/4 |   | 1/16  15/16 |

That is, b21 = 5/8, b25 = 3/8, b31 = 1/4, b35 = 3/4, b41 = 1/16, and b45 = 15/16.
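The B = NR computation for Example 3.10's chain can be sketched as follows, again solving a linear system rather than inverting:

```python
import numpy as np

# Q and R from Example 3.10: transient states {2, 3, 4}, absorbing {1, 5}.
Q = np.array([[0.0, 1/2, 0.0],
              [1/3, 0.0, 2/3],
              [0.0, 1/4, 0.0]])
R = np.array([[1/2, 0.0],
              [0.0, 0.0],
              [0.0, 3/4]])

# Absorption probabilities: B = N R = (I - Q)^{-1} R.
B = np.linalg.solve(np.eye(3) - Q, R)
print(B)  # rows give (b_i1, b_i5) for i = 2, 3, 4; each row sums to 1
```

Every row of B sums to 1 because an absorbing chain is eventually absorbed somewhere with probability 1.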
bij for
ties 3.11 i = 2, 3, 4 and j
Reversible Markov Chains
= 1, 5. !
3.11 Reve
1/8 3/8 5/4 0
3/4 1/1
5/4 3 1/2 1/2 0
That is, b21 = 5/8, b25 = 3/8, b31= 1/4, b35
B = 1/2 3/2 1 0 0
Cadenas de Markov reversibles
b45 = 15/16.
1/8 3/8 5/4 0 3/4
Reversible Markov Chains
That is, b21 = 5/8, b25 = 3/8, b31 = 1
En una cadena de Markov
= 15/16. reversible una
3.11 b45Reversible Markov Chains
chain {X secuencia de estados
n } is defined to be a reversible Markov chain if the seq
chain {Xn } is defined to be a reversible Markov c
Xn+1 , Xn , Xn1 , .Astates
. .Markov
has. . .the same probabilistic structure
, Xn+1 , Xn , Xn1 , . . . has the same probabilistic str
as the s
Xn , Xn+1 , . . . . That 3.11 Reversible Markov Chains
. . . , Xn1 , Xn , Xn+1 , . . . . That is, the sequence ofatstates
is, the sequence of states looked back l
he same Tiene la misma estructura probabilstica que la
structuretime as Athe has thesequence running
same structure as theforward in
sequence running time. forw
secuencia de estados
a Markov Markov
chainchain{Xn } {X
with n } limiting
is defined to be
state a reversible
probabilities {M
chain {Xn } with limitingsitionstates
state probabilities
. . . , Xn+1
probabilities pij,.XSuppose
n , Xn1 ,that
{ 1 ,
. . . starting 2 ,
has the same 3 , . .
at timeprobab
.}
n we
a
1
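The excerpt breaks off here. The standard characterization that this setup leads to (a well-known result, not stated in the fragment above) is that a stationary chain is reversible iff the detailed-balance equations π_i p_ij = π_j p_ji hold for all pairs of states. A sketch of that check; the symmetric 3-state matrix is an illustrative example:

```python
import numpy as np

def is_reversible(P, pi, tol=1e-12):
    """Detailed balance: pi_i * p_ij == pi_j * p_ji for all i, j.
    F[i, j] = pi_i * p_ij is the stationary probability flow i -> j;
    reversibility means the flow matrix is symmetric."""
    F = pi[:, None] * P
    return bool(np.allclose(F, F.T, atol=tol))

# Symmetric P is doubly stochastic, so the uniform pi is stationary,
# and the flow matrix pi_i * p_ij = P/3 is symmetric.
P = np.array([[0.5, 0.3, 0.2],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])
pi = np.array([1/3, 1/3, 1/3])
print(is_reversible(P, pi))  # True
```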