STATS 325/721 Assignment 2
1. Consider a Markov chain X = (X_n : n = 0, 1, 2, . . .) on the set of vertices {A, B, C, D, E}
of the 3D object with six triangular faces represented in the following diagram:
[diagram not reproduced]
Let P_i and E_i represent probability and expectation, respectively, conditional on
X_0 = i. The first hitting time of state i is defined by
T_i := inf{n > 0 : X_n = i}.
(a) Prove that
P_B(T_A < T_D) = 3/7,
where you should justify your steps. [4]
(b) Calculate P_E(T_B < T_A; T_B < T_D), and deduce P_C(T_B < T_D < T_A). [4]
(c) Find the average time, E_A(T_B), it takes to reach state B starting initially in
state A. [4]
(d) Deduce the average time to return to state B starting from state B. Deduce
the long-term proportion of time spent in each state. [★]
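
A possible numerical check for Question 1 is sketched below in Python. Because the diagram is not reproduced above, the transition matrix P here is only a placeholder (a symmetric random walk on a triangular bipyramid with apexes A and D), so the printed numbers will not match the assignment's chain; only the first-step-analysis method behind parts (a)-(d) carries over.

```python
import numpy as np

# Placeholder chain on {A, B, C, D, E}.  The assignment's diagram is not
# reproduced above, so this P is only an illustrative stochastic matrix
# (symmetric random walk on a triangular bipyramid with apexes A and D);
# substitute the real transition probabilities to reproduce parts (a)-(d).
states = ["A", "B", "C", "D", "E"]
idx = {s: i for i, s in enumerate(states)}
P = np.array([
    [0,   1/3, 1/3, 0,   1/3],   # A -> B, C, E
    [1/4, 0,   1/4, 1/4, 1/4],   # B -> A, C, D, E
    [1/4, 1/4, 0,   1/4, 1/4],   # C -> A, B, D, E
    [0,   1/3, 1/3, 0,   1/3],   # D -> B, C, E
    [1/4, 1/4, 1/4, 1/4, 0  ],   # E -> A, B, C, D
])
A, B, D = idx["A"], idx["B"], idx["D"]

# (a)/(b)  h_i = P_i(T_A < T_D): h_A = 1, h_D = 0 and, by first-step analysis,
# h_i = sum_j P[i, j] h_j for every other state i.
free = [i for i in range(5) if i not in (A, D)]
h = np.zeros(5)
h[A] = 1.0
h[free] = np.linalg.solve(np.eye(3) - P[np.ix_(free, free)], P[free, A])
print("P_i(T_A < T_D):", dict(zip(states, h.round(4))))

# (c)  m_i = E_i(T_B): m_B = 0 and m_i = 1 + sum_j P[i, j] m_j for i != B.
free_b = [i for i in range(5) if i != B]
m = np.zeros(5)
m[free_b] = np.linalg.solve(np.eye(4) - P[np.ix_(free_b, free_b)], np.ones(4))
print("E_A(T_B) =", round(m[A], 4))

# (d)  Mean return time E_B(T_B) = 1 + sum_j P[B, j] m_j, and the long-run
# proportion of time in each state is the invariant pmf pi, which for an
# irreducible positive-recurrent chain also satisfies E_i(T_i) = 1 / pi_i.
pi = np.linalg.solve(np.vstack([(P.T - np.eye(5))[:-1], np.ones(5)]),
                     np.append(np.zeros(4), 1.0))
print("E_B(T_B) =", round(1 + P[B] @ m, 4), "  1/pi_B =", round(1 / pi[B], 4))
print("long-run proportions:", dict(zip(states, pi.round(4))))
```

With the actual probabilities from the diagram substituted into P, the same three linear systems give P_B(T_A < T_D), E_A(T_B), E_B(T_B) and the long-run proportions, so the code can be used to check hand calculations.
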
2. Consider the Markov chain X = (X_n)_{n∈N} with state space I = {A, B, C, D, E, F}
and one-step transition probabilities given in the following diagram:
[diagram not reproduced]
(a) Decompose the state space into its communicating classes and state the period
of each class. Hence, identify the set of transient states T and a communicating
class of recurrent states R. [3]
(b) Write down the one-step transition matrix P for the discrete parameter Markov
chain Y with state space R, that is, the restriction of the Markov chain X to
the recurrent class R ⊆ I. [3]
(c) What conditions does an invariant probability mass function π for a discrete
time Markov chain satisfy? Find π for the Markov chain Y. [3]
(d) Stating any general results that you appeal to, deduce the following:
i. Y is positive recurrent, [1]
ii. the distribution for the position of Y after the chain has been running for
a very long time, [1]
iii. the long-term proportion of time spent in each of the states, [1]
iv. the average time to return to each state, E_i(T_i), [1]
v. the average number of visits made to A before returning to the starting
position at C. [★]
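
Since the diagram for Question 2 is also missing, the sketch below uses a made-up chain on {A, . . . , F}, chosen so that {E, F} is a transient class and R = {A, B, C, D} a closed class, purely to illustrate the steps: class decomposition and periods, restriction to R, the invariant pmf π, and the standard identities used in part (d) (long-run proportions equal π, E_i(T_i) = 1/π_i, and the mean number of visits to A between successive visits to C is π_A/π_C).

```python
import numpy as np
from math import gcd
from functools import reduce
from scipy.sparse.csgraph import connected_components

# Placeholder chain on {A,...,F}: {E, F} is transient and R = {A, B, C, D} is
# a closed (recurrent) class.  This is NOT the assignment's chain, whose
# diagram is not reproduced above.
states = ["A", "B", "C", "D", "E", "F"]
P = np.array([
    [0.0, 1.0, 0.0, 0.0, 0.0, 0.0],   # A -> B
    [0.0, 0.0, 0.5, 0.5, 0.0, 0.0],   # B -> C or D
    [1.0, 0.0, 0.0, 0.0, 0.0, 0.0],   # C -> A
    [0.5, 0.5, 0.0, 0.0, 0.0, 0.0],   # D -> A or B
    [0.5, 0.0, 0.0, 0.0, 0.0, 0.5],   # E -> A or F   (leaves {E, F})
    [0.0, 0.0, 0.0, 0.0, 1.0, 0.0],   # F -> E
])

# (a) Communicating classes = strongly connected components of the digraph
# with an edge i -> j whenever P[i, j] > 0.
n_comp, labels = connected_components(P > 0, directed=True, connection="strong")
print("classes:", [[states[i] for i in range(6) if labels[i] == c]
                   for c in range(n_comp)])

# Rough period check: gcd of the return times seen among the first few matrix
# powers (enough for a chain this small; not a general proof).
diags, Q = [], np.eye(6)
for n in range(1, 13):
    Q = Q @ P
    diags.append(Q.diagonal() > 1e-12)
for i, s in enumerate(states):
    times = [n + 1 for n, d in enumerate(diags) if d[i]]
    print(f"period of {s}:", reduce(gcd, times) if times else "no return seen")

# (b) Restriction Y of X to the (assumed) recurrent class R = {A, B, C, D}.
R = [0, 1, 2, 3]
PY = P[np.ix_(R, R)]

# (c) Invariant pmf: pi = pi P_Y with pi >= 0 and sum(pi) = 1.
M = np.vstack([(PY.T - np.eye(4))[:-1], np.ones(4)])
pi = np.linalg.solve(M, np.array([0.0, 0.0, 0.0, 1.0]))
print("invariant pi on R:", dict(zip("ABCD", pi.round(4))))

# (d) For an irreducible positive-recurrent chain: long-run proportions are pi,
# E_i(T_i) = 1 / pi_i, and the mean number of visits to A between successive
# visits to C is pi_A / pi_C.
print("mean return times:", dict(zip("ABCD", (1 / pi).round(4))))
print("mean visits to A per excursion from C:", round(pi[0] / pi[2], 4))
```
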
3. Consider the random walk W = (W_n)_{n∈N} with state space Z such that
W_n := W_0 + X_1 + · · · + X_n,
where X_1, X_2, . . . are independent, identically distributed random variables with
P(X_n = -1) = 2/5, P(X_n = 1) = 1/5, P(X_n = 2) = 2/5.
(a) For k ≥ 1, let x_k be the probability that the random walk ever visits the origin
given that it starts at position k, that is,
x_k := P_k(hit 0) := P(W_n = 0 for some n ≥ 0 | W_0 = k).
i. By splitting according to the first move, show that
x_1 = 2/5 + (1/5)x_2 + (2/5)x_3,
and explain why x_k = (x_1)^k for k ≥ 1. [5]
ii. Show that P_k(hit 0) = 2^{-k} for k ≥ 1. [5]
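
Assuming the step distribution as read above (P(X_n = -1) = 2/5, P(X_n = 1) = 1/5, P(X_n = 2) = 2/5), part (a) can be sanity-checked numerically: combining the first-step equation with x_k = (x_1)^k gives a cubic whose minimal non-negative root is 1/2, and a short simulation agrees with P_k(hit 0) = 2^{-k}. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
# Step distribution as read above: P(X=-1)=2/5, P(X=+1)=1/5, P(X=+2)=2/5.
steps, probs = np.array([-1, 1, 2]), np.array([0.4, 0.2, 0.4])

# Combining x_1 = 2/5 + (1/5) x_2 + (2/5) x_3 with x_k = (x_1)^k gives the
# cubic 2 x^3 + x^2 - 5 x + 2 = 0; its roots are -2, 1 and 1/2, and the
# hitting probability is the minimal non-negative root, x_1 = 1/2.
print("roots of 2x^3 + x^2 - 5x + 2:", np.roots([2, 1, -5, 2]))

# Monte Carlo check of P_k(hit 0).  The walk only steps down by 1, so it can
# only reach 0 exactly, and the positive drift (+3/5 per step) means a walk
# that has not hit 0 within a few hundred steps almost surely never will.
def hits_zero(k, horizon=400):
    path = k + np.cumsum(rng.choice(steps, size=horizon, p=probs))
    return bool(np.any(path == 0))

for k in (1, 2, 3):
    est = np.mean([hits_zero(k) for _ in range(20_000)])
    print(f"P_{k}(hit 0): simulated {est:.3f}  vs  2^-{k} = {2.0 ** -k:.3f}")
```
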
(b) For k ∈ Z, let y_k be the probability that the random walk ever visits k given
that it starts at 0, that is,
y_k := P_0(hit k) := P(W_n = k for some n ≥ 0 | W_0 = 0).
i. Write down the values of y_{-1} and y_0. [2]
ii. For k ≥ 1, briefly explain why
y_k = (2/5) y_{k+1} + (1/5) y_{k-1} + (2/5) y_{k-2},    (∗)
iii. Find all solutions to (∗) of the form y_k = m^k and write down the general
solution of the recurrence relation (∗). Deduce P_0(hit k) for k ∈ Z. [★]
iv. If the random walk starts at the origin and n > 0 is a very large integer, deduce
that the probability that position n is never visited is approximately 1/6. [★]
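
Assuming the recurrence (∗) as reconstructed in part (b)(ii), trying y_k = m^k gives the characteristic equation 2m^3 - 5m^2 + m + 2 = 0 with roots 2, 1 and -1/2; boundedness discards the 2^k term, and the boundary values y_0 = 1 and y_{-1} = 1/2 (part (b)(i)) pin down y_k = 5/6 + (1/6)(-1/2)^k for k ≥ 0, so a distant site is missed with probability about 1/6. The sketch below checks this against a simulation.

```python
import numpy as np

rng = np.random.default_rng(1)
# Step distribution as read above: P(X=-1)=2/5, P(X=+1)=1/5, P(X=+2)=2/5.
steps, probs = np.array([-1, 1, 2]), np.array([0.4, 0.2, 0.4])

# Characteristic equation of the recurrence (*) when y_k = m^k:
# 2 m^3 - 5 m^2 + m + 2 = 0, with roots 2, 1 and -1/2.
print("characteristic roots:", np.roots([2, -5, 1, 2]))

# Bounded solution fitted to y_0 = 1 and y_{-1} = 1/2 (part (b)(i)):
def y(k):
    return 5 / 6 + (1 / 6) * (-0.5) ** k   # -> 5/6, so "missed" prob -> 1/6

# Monte Carlo check of P_0(hit k): the drift is +3/5 per step, so after a few
# hundred steps the walk sits far above any small k and later visits to k are
# negligibly likely; a fixed horizon therefore gives a good approximation.
def ever_visits(k, horizon=400):
    path = np.cumsum(rng.choice(steps, size=horizon, p=probs))
    return bool(np.any(path == k))

for k in (1, 2, 3, 10):
    est = np.mean([ever_visits(k) for _ in range(20_000)])
    print(f"P_0(hit {k}): simulated {est:.3f}  vs  formula {y(k):.3f}")
```
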
Stats325: mark is out of 40, including max 5 bonus from the ★-starred questions
Stats721: mark is out of 50, please attempt all questions
Due noon, Monday, September 2nd 2019 (Science SRC)
