Lecture 12 : Continuous Time Markov Chains
Transcription
Parimal Parag

1 Continuous Time Markov Chains

Consider a continuous time stochastic process $\{X(t), t \ge 0\}$ taking values in the set of non-negative integers. The process $\{X(t), t \ge 0\}$ is a continuous time Markov chain if for all $s, t \ge 0$ and $i, j \in \mathbb{N}_0$,
$$P(X(t+s) = j \mid X(u), u \in [0, s]) = P(X(t+s) = j \mid X(s)).$$
If this probability is independent of $s$, the continuous time Markov chain (CTMC) has homogeneous transitions and we denote the probability by $P_{ij}(t)$.

Suppose $X(0) = i$ and $X(u) = i$ for all $u \in [0, s]$. We are interested in probabilities of the form $P(X(v) = i, v \in [s, s+t] \mid X(u) = i, u \in [0, s])$. To that end, define the sojourn time $\tau_i \triangleq \inf\{t \ge 0 : X(t) \ne i \mid X(0) = i\}$. We could sample the process at these transition instants, construct a DTMC out of it, and study that chain. Observe that
$$P(\tau_i \ge s + t \mid \tau_i > s) = P(X(v) = i, v \in [s, s+t] \mid X(u) = i, u \in [0, s]) = P(\tau_i \ge t \mid X(0) = i).$$
Hence $\tau_i$ is memoryless and therefore exponentially distributed.

1.1 Alternative way of constructing a CTMC

A CTMC is a stochastic process such that each time it enters state $i$,

1. the sojourn time satisfies $\tau_i \sim \exp(\nu_i)$,
2. the probability of entering state $j$ next, $P(\text{entering state } j \mid \text{state } i) \equiv P_{ij}$, satisfies $\sum_{j \ne i} P_{ij} = 1$.

Note that $P_{ij}$ and $\tau_i$ are independent. $\nu_i$ is called the rate of state $i$, and we require $\nu_i < \infty$; otherwise the state is called instantaneous. In short, a CTMC is a DTMC with exponential sojourn times in each state.

Definition: A CTMC is called "regular" if $P(\text{number of transitions in } [0, t] \text{ is finite}) = 1$ for all $t < \infty$.

Consider the following example of a non-regular CTMC.

Example: $P_{i,i+1} = 1$, $\nu_i = i^2$. Show that this chain is not regular. (Hint: $\sum_i E[\tau_i] = \sum_i 1/i^2 < \infty$, so infinitely many transitions occur in finite time.)

Let $Q$ be a matrix such that

1. $q_{ij} = \nu_i P_{ij}$ for all $i \ne j$,
2. $q_{ii} = -\nu_i$.

Properties of $Q$:

1. $0 \le -q_{ii} < \infty$ for all $i$,
2. $q_{ij} \ge 0$ for all $i \ne j$,
3. $\sum_j q_{ij} = 0$ for all $i$.

From the $Q$ matrix we can construct the whole CTMC. For a DTMC we had the result $P^{(n)}(i, j) = (P^n)_{ij}$. We can generalize this notion to the CTMC case as follows: $P = e^Q \triangleq \sum_{k \in \mathbb{N}_0} \frac{Q^k}{k!}$. Observe that $e^{Q_1 + Q_2} = e^{Q_1} e^{Q_2}$ when $Q_1$ and $Q_2$ commute, and in particular $e^{nQ} = (e^Q)^n = P^n$.

Theorem 1.1. Let $Q$ be a finite sized matrix and let $P(t) = e^{tQ}$. Then $\{P(t), t \ge 0\}$ has the following properties:

1. $P(s+t) = P(s)P(t)$ for all $s, t$ (semigroup property).
2. $\{P(t), t \ge 0\}$ is the unique solution to the forward equation $\frac{dP(t)}{dt} = P(t)Q$, $P(0) = I$.
3. It is also the unique solution to the backward equation $\frac{dP(t)}{dt} = QP(t)$, $P(0) = I$.
4. For all $k \in \mathbb{N}$, $\frac{d^k P(t)}{dt^k}\Big|_{t=0} = Q^k$.

Proof. Let $M(t)$ be any matrix satisfying the forward equation. Then $\frac{d}{dt}\left[M(t)e^{-tQ}\right] = 0$, so $M(t)e^{-tQ}$ is constant, equal to $M(0) = I$, and hence $M(t) = e^{tQ}$.

Theorem 1.2. A finite matrix $Q$ is a Q-matrix if and only if $P(t) = e^{tQ}$ is a stochastic matrix for all $t \ge 0$.

Proof. Write $P(t) = I + tQ + O(t^2)$ (here $f(t) = O(t)$ means $f(t)/t \le c$ for small $t$, with $c < \infty$). Hence $q_{ij} \ge 0$ if and only if $P_{ij}(t) \ge 0$ for all $i \ne j$ and all sufficiently small $t \ge 0$; for general $t$, use $P(t) = P(t/n)^n$. Note that if $Q$ has zero row sums, then $Q^n$ also has zero row sums:
$$\sum_j [Q^n]_{ij} = \sum_j \sum_k [Q^{n-1}]_{ik} Q_{kj} = \sum_k [Q^{n-1}]_{ik} \sum_j Q_{kj} = 0.$$
Therefore
$$\sum_j P_{ij}(t) = 1 + \sum_{n \in \mathbb{N}} \frac{t^n}{n!} \sum_j [Q^n]_{ij} = 1 + 0 = 1.$$
Conversely, if $\sum_j P_{ij}(t) = 1$ for all $t \ge 0$, then $\sum_j Q_{ij} = \sum_j \frac{dP_{ij}(t)}{dt}\Big|_{t=0} = 0$.

1.2 Kolmogorov Differential Equations

Lemma 1.3.

1. $\lim_{t \to 0} \frac{1 - P_{ii}(t)}{t} = \nu_i$.
2. $\lim_{t \to 0} \frac{P_{ij}(t)}{t} = q_{ij}$ for $i \ne j$.

Lemma 1.4. For all $s, t \ge 0$, $P_{ij}(t+s) = \sum_{k \in \mathbb{N}_0} P_{ik}(t) P_{kj}(s)$.
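To make the construction of $Q$ from the rates $\nu_i$ and jump probabilities $P_{ij}$ concrete, here is a minimal numerical sketch (not part of the original lecture). The three-state chain, the specific values of `nu` and `P_jump`, and the use of `scipy.linalg.expm` are illustrative assumptions only; the script checks that $P(t) = e^{tQ}$ is stochastic (Theorem 1.2) and satisfies the semigroup property of Theorem 1.1, which is the Chapman-Kolmogorov relation of Lemma 1.4.

```python
# Illustrative sketch (assumed example, not from the lecture): build Q from
# rates nu_i and jump probabilities P_ij as in Section 1.1, then check the
# properties of P(t) = exp(tQ) stated in Theorems 1.1-1.2 and Lemma 1.4.
import numpy as np
from scipy.linalg import expm

# Hypothetical 3-state chain: rates nu_i and jump matrix (zero diagonal,
# rows summing to 1), chosen only for illustration.
nu = np.array([1.0, 2.0, 0.5])
P_jump = np.array([[0.0, 0.7, 0.3],
                   [0.4, 0.0, 0.6],
                   [0.5, 0.5, 0.0]])

# q_ij = nu_i * P_ij for i != j and q_ii = -nu_i, so every row of Q sums to 0.
Q = nu[:, None] * P_jump
np.fill_diagonal(Q, -nu)

s, t = 0.3, 1.1
P_s, P_t, P_st = expm(s * Q), expm(t * Q), expm((s + t) * Q)

print(np.allclose(P_t.sum(axis=1), 1.0))   # stochastic matrix (Theorem 1.2)
print(np.allclose(P_st, P_s @ P_t))        # semigroup / Chapman-Kolmogorov
```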
1.3 Chapman Kolmogorov Equation for CTMC

Theorem 1.5 (Kolmogorov Backward Equation). For all $i, j$ and $t \ge 0$,
$$P'_{ij}(t) = \sum_{k \ne i} q_{ik} P_{kj}(t) - \nu_i P_{ij}(t), \quad \text{i.e.} \quad \frac{dP(t)}{dt} = QP(t).$$

Proof. By Lemma 1.4, $P_{ij}(t+h) = \sum_k P_{ik}(h) P_{kj}(t)$, so
$$P_{ij}(t+h) - P_{ij}(t) = \sum_{k \ne i} P_{ik}(h) P_{kj}(t) - (1 - P_{ii}(h)) P_{ij}(t).$$
Dividing by $h$ and letting $h \to 0$ gives $P'_{ij}(t) = \lim_{h \to 0} \sum_{k \ne i} \frac{P_{ik}(h)}{h} P_{kj}(t) - \nu_i P_{ij}(t)$. The exchange of limit and summation has to be justified. For any finite $N$,
$$\liminf_{h \to 0} \sum_{k \ne i} \frac{P_{ik}(h)}{h} P_{kj}(t) \ge \liminf_{h \to 0} \sum_{k \ne i, k < N} \frac{P_{ik}(h)}{h} P_{kj}(t) = \sum_{k \ne i, k < N} q_{ik} P_{kj}(t).$$
This is true for any finite $N$; taking the supremum over all $N$, we get
$$\liminf_{h \to 0} \sum_{k \ne i} \frac{P_{ik}(h)}{h} P_{kj}(t) \ge \sum_{k \ne i} q_{ik} P_{kj}(t).$$
It therefore suffices to show that $\limsup_{h \to 0} \sum_{k \ne i} \frac{P_{ik}(h)}{h} P_{kj}(t) \le \sum_{k \ne i} q_{ik} P_{kj}(t)$. To that end, since $P_{kj}(t) \le 1$,
$$\limsup_{h \to 0} \sum_{k \ne i} \frac{P_{ik}(h)}{h} P_{kj}(t) \le \limsup_{h \to 0} \Big[ \sum_{k \ne i, k < N} \frac{P_{ik}(h)}{h} P_{kj}(t) + \sum_{k \ne i, k \ge N} \frac{P_{ik}(h)}{h} \Big]$$
$$= \limsup_{h \to 0} \Big[ \sum_{k \ne i, k < N} \frac{P_{ik}(h)}{h} P_{kj}(t) + \frac{1 - P_{ii}(h)}{h} - \sum_{k \ne i, k < N} \frac{P_{ik}(h)}{h} \Big]$$
$$= \sum_{k \ne i, k < N} q_{ik} P_{kj}(t) + \nu_i - \sum_{k \ne i, k < N} q_{ik}.$$
Letting $N \to \infty$ and using $\sum_{k \ne i} q_{ik} = \nu_i$, the right-hand side converges to $\sum_{k \ne i} q_{ik} P_{kj}(t)$, which completes the proof.

Theorem 1.6 (Kolmogorov Forward Equation). Under suitable regularity conditions,
$$P'_{ij}(t) = \sum_{k \ne j} P_{ik}(t) q_{kj} - P_{ij}(t) \nu_j, \quad \text{i.e.} \quad \frac{dP(t)}{dt} = P(t)Q.$$
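As a complementary sketch (again illustrative, with a hypothetical generator `Q` chosen only for this example), the backward and forward equations can be checked numerically for a finite state space, where both hold without further regularity conditions: compare a central finite-difference approximation of $\frac{dP(t)}{dt}$ for $P(t) = e^{tQ}$ against $QP(t)$ and $P(t)Q$.

```python
# Illustrative check of the Kolmogorov backward and forward equations
# dP/dt = Q P(t) = P(t) Q for a small, assumed generator matrix Q.
import numpy as np
from scipy.linalg import expm

# Hypothetical generator (off-diagonal entries nonnegative, rows sum to zero).
Q = np.array([[-1.0,  0.7,  0.3],
              [ 0.8, -2.0,  1.2],
              [ 0.25, 0.25, -0.5]])

t, h = 0.8, 1e-6
P_t = expm(t * Q)
dP_num = (expm((t + h) * Q) - expm((t - h) * Q)) / (2 * h)  # central difference

print(np.allclose(dP_num, Q @ P_t, atol=1e-5))   # backward equation (Theorem 1.5)
print(np.allclose(dP_num, P_t @ Q, atol=1e-5))   # forward equation (Theorem 1.6)
```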