5.6 Existence and uniqueness of stationary measures

In this section we give an explicit general formula for a stationary measure of a Markov chain, provided that the chain has a recurrent state. Moreover, under an obviously necessary irreducibility condition, we show that this measure is the unique stationary measure. The construction of this measure is very natural: its value at \(y\) is simply the expected number of visits to \(y\) during the first excursion from some fixed reference state \(x\) back to \(x.\)

Proposition 5.37

Suppose that \(x \in S\) is recurrent. Then the measure \[\nu_x(y) :=\mathbb{E}_x \Biggl[\sum_{k = 0}^{H_x - 1} \mathbf 1_{X_k = y}\Biggr]\] is stationary. Moreover, \(\nu_x(y) > 0\) if and only if \(x\) and \(y\) communicate.

Proof. The main idea of the proof is to condition on the value \(z\) just before the chain visits \(y.\) Since \(X_0 = X_{H_x} = x,\) we find \[\begin{aligned} \nu_x(y) &= \mathbb{E}_x \Biggl[\sum_{k = 1}^{H_x} \mathbf 1_{X_k = y}\Biggr] \\ &= \sum_{z \in S} \mathbb{E}_x \Biggl[\sum_{k = 1}^{H_x} \mathbf 1_{X_{k-1} = z} \mathbf 1_{X_k = y}\Biggr] \\ &= \sum_{z \in S} \sum_{k \in \mathbb{N}^*} \mathbb{E}_x \bigl[\mathbf 1_{k \leqslant H_x} \mathbf 1_{X_{k-1} = z} \mathbf 1_{X_k = y}\bigr] \\ &= \sum_{z \in S} \sum_{k \in \mathbb{N}^*} \mathbb{P}_x \bigl(k \leqslant H_x, X_{k-1} = z, X_k = y\bigr) \\ &= \sum_{z \in S} \sum_{k \in \mathbb{N}^*} \mathbb{P}_x \bigl(X_k = y \mid k \leqslant H_x, X_{k-1} = z\bigr) \, \mathbb{P}_x(k \leqslant H_x, X_{k-1} = z) \\ &= \sum_{z \in S} \sum_{k \in \mathbb{N}^*} Q(z,y) \, \mathbb{E}_x\bigl[\mathbf 1_{k \leqslant H_x} \mathbf 1_{X_{k-1} = z}\bigr] \\ &= \sum_{z \in S}Q(z,y) \, \mathbb{E}_x\Biggl[ \sum_{k = 1}^{H_x} \mathbf 1_{X_{k-1} = z}\Biggr] \\ &= \sum_{z \in S}Q(z,y) \, \nu_x(z)\,, \end{aligned}\] where in the sixth step we used the Markov property from Proposition 5.15 combined with the remark that the event \[\{k \leqslant H_x\} = \{X_1 \neq x, \dots, X_{k-1} \neq x\}\] depends only on the values of \(X\) up to time \(k-1.\) We have shown that \(\nu_x Q = \nu_x,\) i.e. that \(\nu_x\) is stationary.

To show the final assertion, note first that if \(x\) and \(y\) do not communicate then \(\mathbb{E}_x[N_y] = U(x,y) = 0,\) and hence \(\nu_x(y) = 0.\)

Next, suppose that \(x\) and \(y\) communicate, and let us show that \(0 < \nu_x(y) < \infty.\) First, since there exists \(n \in \mathbb{N}\) such that \(Q^n(y,x) > 0,\) we conclude that \[1 = \nu_x(x) = \sum_{z \in S} \nu_x(z) Q^n(z,x) \geqslant\nu_x(y) Q^n(y,x)\,,\] from which we deduce that \(\nu_x(y) < \infty.\) Second, since there exists \(m \in \mathbb{N}\) such that \(Q^m(x,y) > 0,\) we conclude that \[\nu_x(y) = \sum_{z \in S} \nu_x(z) Q^m(z, y) \geqslant\nu_x(x) Q^m(x,y) = Q^m(x,y)\,,\] from which we deduce that \(\nu_x(y) > 0.\)
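The excursion formula for \(\nu_x\) lends itself to a numerical sanity check. The sketch below uses a hypothetical irreducible 3-state chain (the matrix \(Q\) is an arbitrary choice, not from the text): it averages visit counts over many excursions from \(x = 0\) back to \(0\) and verifies that the estimated measure satisfies \(\nu_x(x) = 1\) and, up to Monte Carlo error, \(\nu_x Q = \nu_x.\)

```python
import random

# A toy irreducible 3-state transition matrix (hypothetical example).
Q = [[0.5, 0.3, 0.2],
     [0.2, 0.6, 0.2],
     [0.4, 0.1, 0.5]]

def estimate_nu(x, n_excursions=100_000, seed=0):
    """Monte Carlo estimate of nu_x(y) = E_x[# visits to y at times 0,...,H_x - 1]."""
    rng = random.Random(seed)
    visits = [0.0] * 3
    for _ in range(n_excursions):
        state = x
        while True:
            visits[state] += 1                      # visits at times 0, ..., H_x - 1
            state = rng.choices(range(3), weights=Q[state])[0]
            if state == x:                          # time H_x: excursion is over
                break
    return [v / n_excursions for v in visits]

nu = estimate_nu(0)
# Stationarity check: (nu Q)(y) should agree with nu(y) for every y.
nuQ = [sum(nu[z] * Q[z][y] for z in range(3)) for y in range(3)]
print(nu[0])   # → 1.0  (the reference state is visited exactly once per excursion)
```

The value \(\nu_x(x) = 1\) comes out exactly, since each excursion visits \(x\) only at time \(0.\)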

Is the stationary measure \(\nu_x\) unique, i.e. is it independent of the choice of \(x\)? In general, the answer is clearly no. There are two reasons for this, both of which turn out to be “silly”. First, if the Markov chain is not irreducible, as in the example preceding Definition 5.34, choosing \(x\) in \(S_1\) or in \(S_2\) will clearly yield two measures that are supported on disjoint sets. Let us therefore assume that our chain is irreducible. Even then, because of the obvious constraint \(\nu_x(x) = 1\) for all \(x,\) for each \(x\) we in general get a different measure. However, it turns out that this dependence on \(x\) is only via a multiplicative positive constant. Up to this constant, the measure \(\nu_x\) is indeed unique for any irreducible recurrent chain.

Proposition 5.38

If the Markov chain is irreducible and recurrent, then it has (up to a multiplicative constant in \((0,\infty)\)) a unique stationary measure.

Proof. Suppose that \(\mu\) is a stationary measure. We shall show, by induction on \(p \in \mathbb{N},\) that for all \(x,y \in S\) we have \[\tag{5.12} \mu(y) \geqslant\mu(x) \, \mathbb{E}_x \Biggl[\sum_{k = 0}^{p \wedge (H_x - 1)} \mathbf 1_{X_k = y}\Biggr]\,.\] Note first that if \(x = y\) then (5.12) is trivially true (with an equality) for all \(p.\) Suppose therefore that \(x \neq y\) and let us show (5.12) by induction on \(p.\) The base case \(p = 0\) is immediate: since \(X_0 = x \neq y\) under \(\mathbb{P}_x,\) the right-hand side of (5.12) vanishes. For the induction step, by stationarity of \(\mu\) and the induction assumption (5.12) for \(p,\) we get \[\begin{aligned} \mu(y) &= \sum_{z \in S} \mu(z) Q(z,y) \\ &\geqslant\mu(x) \sum_{z \in S} \mathbb{E}_x \Biggl[\sum_{k = 0}^{p \wedge (H_x - 1)} \mathbf 1_{X_k = z}\Biggr] Q(z,y) \\ &= \mu(x)\sum_{z \in S} \sum_{k = 0}^p \mathbb{E}_x \bigl[\mathbf 1_{k \leqslant H_x - 1} \mathbf 1_{X_k = z}\bigr] Q(z,y) \\ &= \mu(x)\sum_{z \in S} \sum_{k = 0}^p \mathbb{P}_x \bigl(k \leqslant H_x - 1,X_k = z\bigr) Q(z,y) \\ &=\mu(x) \sum_{z \in S} \sum_{k = 0}^p \mathbb{P}_x \bigl(k \leqslant H_x - 1,X_k = z\bigr) \mathbb{P}_x \bigl(X_{k+1} = y \mid k \leqslant H_x - 1,X_k = z\bigr) \\ &= \mu(x)\sum_{z \in S} \sum_{k = 0}^p \mathbb{P}_x \bigl(X_{k+1} = y, k \leqslant H_x - 1,X_k = z\bigr) \\ &= \mu(x)\sum_{z \in S} \sum_{k = 0}^p \mathbb{E}_x \bigl[\mathbf 1_{X_{k+1} = y} \mathbf 1_{ k \leqslant H_x - 1} \mathbf 1_{X_k = z}\bigr] \\ &= \mu(x) \, \mathbb{E}_x \Biggl[\sum_{k = 0}^{p \wedge (H_x - 1)} \mathbf 1_{X_{k+1} = y}\Biggr] \\ &= \mu(x) \, \mathbb{E}_x \Biggl[\sum_{k = 1}^{(p+1) \wedge H_x} \mathbf 1_{X_{k} = y}\Biggr] \\ &= \mu(x) \, \mathbb{E}_x \Biggl[\sum_{k = 0}^{(p+1) \wedge (H_x-1)} \mathbf 1_{X_{k} = y}\Biggr]\,, \end{aligned}\] where in the fifth step we used the Markov property from Proposition 5.15 combined with \[\{k \leqslant H_x - 1\} = \{X_1 \neq x, \dots, X_k \neq x\}\,,\] and in the last step we used that \(x \neq y.\) We have therefore shown (5.12) for \(p+1,\) and the proof of (5.12) is hence complete.

Taking \(p \to \infty\) in (5.12), we get by monotone convergence \[\mu(y) \geqslant\mu(x) \, \mathbb{E}_x \Biggl[\sum_{k = 0}^{H_x - 1} \mathbf 1_{X_k = y}\Biggr] = \mu(x) \nu_x(y)\,.\] By stationarity of \(\mu\) and of \(\nu_x\) (see Proposition 5.37), we therefore find for any \(n \in \mathbb{N}^*\) \[\mu(x) = \sum_{z \in S} \mu(z) Q^n(z,x) \geqslant\sum_{z \in S} \mu(x) \nu_x(z) Q^n(z,x) = \mu(x) \nu_x(x) = \mu(x)\,.\] The inequality is therefore an equality, and we have \[\sum_{z \in S} \mu(z) Q^n(z,x) = \sum_{z \in S} \mu(x) \nu_x(z) Q^n(z,x)\,.\] Since \(\mu(z) \geqslant\mu(x) \nu_x(z)\) we conclude that \(\mu(z) = \mu(x) \nu_x(z)\) whenever \(Q^n(z,x) > 0.\) By irreducibility, for any \(x\) and \(z\) there exists \(n \in \mathbb{N}^*\) such that \(Q^n(z,x) > 0.\) We conclude that \[\mu = \mu(x) \, \nu_x\] for any \(x.\)
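The uniqueness statement can be illustrated on the simplest nontrivial example. In the sketch below we take a hypothetical two-state chain with \(Q(0,1) = a\) and \(Q(1,0) = b\) (the parameters are arbitrary choices): detailed balance gives a stationary measure proportional to \((b, a),\) so by Proposition 5.38 the excursion measure must be \(\nu_0 = (1, a/b),\) which a Monte Carlo estimate confirms.

```python
import random

# Two-state chain with Q(0,1) = a and Q(1,0) = b (hypothetical parameters).
# Detailed balance gives mu proportional to (b, a); uniqueness up to a constant
# then forces nu_0 = mu / mu(0) = (1, a/b).
a, b = 0.3, 0.2
rng = random.Random(1)

def step(state):
    u = rng.random()
    if state == 0:
        return 1 if u < a else 0
    return 0 if u < b else 1

n_exc = 100_000
visits_to_1 = 0
for _ in range(n_exc):
    state = 0
    while True:
        if state == 1:
            visits_to_1 += 1        # visits to 1 during times 0, ..., H_0 - 1
        state = step(state)
        if state == 0:              # return to 0: excursion over
            break

nu0_of_1 = visits_to_1 / n_exc      # should be close to a / b = 1.5
```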

5.7 Positive and null recurrence

It is now easy to show the following remarkable result about recurrent Markov chains. Recall that recurrent means that \(H_x < \infty\) a.s. under \(\mathbb{P}_x.\) In this case, the expectation \(\mathbb{E}_x[H_x]\) may be finite or infinite, which leads to an important refinement of the notion of recurrence.

Proposition 5.39

Consider an irreducible and recurrent Markov chain.

  1. Either there exists a stationary probability measure \(\mu,\) in which case \(\mathbb{E}_x[H_x] = \frac{1}{\mu(x)}\) for all \(x\);

  2. or any stationary measure has infinite total mass, in which case we have \(\mathbb{E}_x[H_x] = \infty\) for all \(x.\)

Definition 5.40

In case (i) we say that the chain is positive recurrent and in case (ii) we say that it is null recurrent.

Proof of Proposition 5.39. By Proposition 5.38, the stationary measure \(\mu\) is unique up to a constant, and we can choose it to be either a probability measure (case (i)) or a measure of infinite total mass (case (ii)). Either way, for any \(x \in S\) we can write \(\mu = C \nu_x\) for some constant \(C\) depending on \(x.\)

In case (i), we have \[1 = \mu(S) = C \nu_x(S)\,,\] which implies \(C = \frac{1}{\nu_x(S)}\) and hence \[\mu(x) = \frac{\nu_x(x)}{\nu_x(S)} = \frac{1}{\nu_x(S)}\,.\] On the other hand, by Fubini’s theorem, \[\tag{5.13} \nu_x(S) = \sum_{y \in S} \mathbb{E}_x \Biggl[\sum_{k = 0}^{H_x - 1} \mathbf 1_{X_k = y}\Biggr] = \mathbb{E}_x \Biggl[\sum_{k = 0}^{H_x - 1} 1\Biggr] = \mathbb{E}_x[H_x]\,,\] as claimed.

In case (ii), \(\nu_x(S) = \infty,\) so that (5.13) implies \(\mathbb{E}_x[H_x] = \infty.\)
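The identity \(\mathbb{E}_x[H_x] = \frac{1}{\mu(x)}\) from case (i) is easy to test numerically. The following sketch, on a hypothetical irreducible 3-state chain, obtains the stationary probability measure by power iteration \(\mu \mapsto \mu Q\) and compares \(1/\mu(0)\) with the empirical mean return time to \(0.\)

```python
import random

# Hypothetical 3-state chain; we check E_x[H_x] = 1/mu(x) at x = 0.
Q = [[0.5, 0.3, 0.2],
     [0.2, 0.6, 0.2],
     [0.4, 0.1, 0.5]]

# Stationary probability measure via power iteration mu <- mu Q.
mu = [1 / 3] * 3
for _ in range(2_000):
    mu = [sum(mu[z] * Q[z][y] for z in range(3)) for y in range(3)]

rng = random.Random(2)

def return_time(x):
    """Sample H_x under P_x: number of steps until the first return to x."""
    state, t = x, 0
    while True:
        state = rng.choices(range(3), weights=Q[state])[0]
        t += 1
        if state == x:
            return t

n = 100_000
avg = sum(return_time(0) for _ in range(n)) / n   # empirical E_0[H_0]
```

For this particular matrix one can solve \(\mu Q = \mu\) by hand and find \(\mu(0) = \frac{18}{49},\) so the empirical mean should be close to \(\frac{49}{18} \approx 2.72.\)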

Clearly, if \(S\) is finite then any recurrent chain is always positive recurrent. Thus, null recurrent chains can only occur on an infinite state space.

Example 5.41 • Examples 5.11 and 5.21 continued

In Example 5.21, we saw that the simple random walk on \(\mathbb{Z}^d\) is recurrent for \(d \leqslant 2.\) Moreover, in Example 5.23, we saw that the counting measure on \(\mathbb{Z}^d\) is a stationary measure. Since the total mass of the counting measure on \(\mathbb{Z}^d\) is infinite, we conclude that the simple random walk on \(\mathbb{Z}^d\) is null recurrent for \(d \leqslant 2.\)

Let us suppose that an irreducible Markov chain has a stationary probability measure. Proposition 5.39 tells us that if the chain is recurrent then it is in fact positive recurrent. What if we do not a priori know that it is recurrent? It turns out that this assumption is in fact not needed.

Proposition 5.42

If an irreducible Markov chain has a stationary probability measure, then it is recurrent (and hence positive recurrent).

Proof. Let \(\mu\) be a stationary probability measure and let \(y \in S\) satisfy \(\mu(y) > 0.\) Then, by Proposition 5.32, \[\begin{aligned} \mu(S) U(y,y) &= \sum_{x \in S} \mu(x) U(y,y) \\ &\geqslant\sum_{x \in S} \mu(x) U(x,y) \\ &= \sum_{n \in \mathbb{N}} \sum_{x \in S} \mu(x) Q^n(x,y) \\ &= \sum_{n \in \mathbb{N}} \mu(y) \\ &= \infty\,. \end{aligned}\] Since \(\mu(S) = 1,\) we conclude that \(U(y,y) = \infty,\) so that \(y\) is recurrent. The claim now follows from Remark 5.35.

Note that the existence of a stationary measure with infinite mass does not imply anything about the recurrence of the chain. For instance, in Example 5.23 we saw that the simple random walk on \(\mathbb{Z}^d\) has the counting measure as a stationary measure, but it is recurrent for \(d \leqslant 2\) (see Example 5.21) and transient for \(d \geqslant 3.\)

Now that we have worked hard in deriving the theory behind stationary measures, let us see some applications.

Propositions 5.39 and 5.42 give a very powerful tool for computing \(\mathbb{E}_x[H_x]\) whenever it is finite. Indeed, it suffices to find a stationary probability measure \(\mu,\) in which case we know that \(\mathbb{E}_x[H_x] = \frac{1}{\mu(x)}.\)

Example 5.43 • Random chess

A rook moves randomly on a chessboard: at each step, it makes uniformly at random any legal move (motion along rows or columns). How many moves on average does it take to return to its initial square?

This problem is a random walk on a finite graph in disguise (Examples 5.13 and 5.28). The vertex set is the set of squares on the chessboard, \(S = \{1, \dots, 8\}^2.\) There is an edge between \(x = (x_1, x_2)\) and \(y = (y_1, y_2)\) if and only if \(x_1 = y_1\) or \(x_2 = y_2,\) under the additional constraint \(x \neq y.\) Clearly, the chain is irreducible, and we already worked out the stationary measure in Example 5.28: \(\mu(x) = C D_x,\) where \(C > 0\) is a normalization constant that ensures that \(\mu\) is a probability measure. Since a rook can move from any square to any of \(14\) squares, we find that \(D_x = 14\) for all \(x \in S.\) We have the condition \[1 = \sum_{x \in S} \mu(x) = C \cdot 14 \cdot 64\,,\] from which we deduce that \(C = \frac{1}{14 \cdot 64},\) and therefore \[\mathbb{E}_x[H_x] = \frac{1}{\mu(x)} = \frac{1}{C D_x} = \frac{14 \cdot 64}{14} = 64\,.\]
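The degree count can be verified mechanically. The sketch below (with the board indexed as \(\{0,\dots,7\}^2,\) an implementation choice) recomputes \(D_x\) from the rook's move rule and recovers \(\mathbb{E}_x[H_x] = 64\) via \(\mu(x) = C D_x.\)

```python
# Recompute the rook's degrees on the 8x8 board; with mu(x) = C D_x the mean
# return time is E_x[H_x] = 1/mu(x) = (total degree) / D_x.
squares = [(i, j) for i in range(8) for j in range(8)]

def rook_degree(x):
    # legal rook moves: any other square sharing a row or a column with x
    return sum(1 for y in squares
               if y != x and (y[0] == x[0] or y[1] == x[1]))

total_degree = sum(rook_degree(x) for x in squares)   # 14 * 64 = 896
mean_return = total_degree / rook_degree((0, 0))
print(mean_return)   # → 64.0
```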

Suppose now that instead of a rook we play with a king, which can move to any of the eight squares sharing an edge or a corner with the original square. In this case, \(D_x\) depends on \(x.\) We consider three types of squares:

  • corner (c): \(D_x = 3\)

  • edge but not corner (e): \(D_x = 5\)

  • interior, i.e. neither edge nor corner (i): \(D_x = 8.\)

There are \(4\) squares of kind (c), \(24\) squares of kind (e), and \(36\) squares of kind (i). Thus we find \[1 = \sum_{x \in S} \mu(x) = C (4 \cdot 3 + 24 \cdot 5 + 36 \cdot 8) = C \cdot 420.\] We conclude that \(\mathbb{E}_x[H_x]\) is \(\frac{420}{3} = 140\) for \(x\) of kind (c), \(\frac{420}{5} = 84\) for \(x\) of kind (e), and \(\frac{420}{8} = 52.5\) for \(x\) of kind (i).
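The same mechanical check works for the king (again with the board indexed as \(\{0,\dots,7\}^2,\) an implementation choice): counting neighbours recovers the degrees \(3,\) \(5,\) \(8\) and the total degree \(420.\)

```python
# Degrees of the king's-move graph on the 8x8 board; E_x[H_x] = 420 / D_x.
def king_degree(x):
    i, j = x
    # king moves: the up-to-eight squares sharing an edge or corner with x
    return sum(1 for di in (-1, 0, 1) for dj in (-1, 0, 1)
               if (di, dj) != (0, 0) and 0 <= i + di < 8 and 0 <= j + dj < 8)

squares = [(i, j) for i in range(8) for j in range(8)]
total_degree = sum(king_degree(x) for x in squares)
print(total_degree)                       # → 420
print(420 / king_degree((0, 0)))          # corner:   → 140.0
print(420 / king_degree((0, 3)))          # edge:     → 84.0
print(420 / king_degree((3, 3)))          # interior: → 52.5
```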

Example 5.44

(Asymmetric random walk on \(\mathbb{N}\)). Let us consider a random walk on \(\mathbb{N}.\) It is asymmetric in the sense that the probability \(p\) of taking a step to the right may be different from the probability \(1 - p\) of taking a step to the left. Unlike the random walk on \(\mathbb{Z}\) studied in Examples 5.11 and 5.21, this walk has a reflecting barrier at \(0.\)

The precise definition is as follows. Let \(0 < p < 1.\) For \(x \in \mathbb{N}^*\) set \[Q(x,x+1) :=p \,, \qquad Q(x,x-1) :=1 - p\,,\] and moreover \(Q(0,1) = 1.\) (All other entries of \(Q\) vanish.) It is clear that \(Q\) is a stochastic matrix. It describes a \(p\)-asymmetric random walk on \(\mathbb{N},\) which bounces off \(0\) back to the right whenever it hits it.

This chain is clearly irreducible. Let us look for a stationary measure. As usual, it is much easier to look for a reversible measure. The detailed balance equations from Definition 5.24 read \[\begin{aligned} \mu(x) p &= \mu(x+1) (1 - p) \qquad \text{for } x \geqslant 1 \\ \mu(0) &= \mu(1) (1 - p)\,, \end{aligned}\] which can be easily solved by induction to yield \[\tag{5.14} \mu(x) = C \begin{cases} 1 - p & \text{if } x = 0 \\ \bigl(\frac{p}{1 - p}\bigr)^{x-1} & \text{if } x \geqslant 1\,, \end{cases}\] where \(C\) is a normalization constant. For \(p<\frac{1}{2}\) the measure \(\mu\) is finite (and hence can be chosen to be a probability measure). By Proposition 5.42, we conclude that for \(p<\frac{1}{2}\) the chain is positive recurrent.
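For \(p < \frac{1}{2}\) the measure (5.14) is summable, with total mass \(C\bigl((1-p) + \frac{1}{1-r}\bigr)\) where \(r = \frac{p}{1-p}.\) The sketch below, with the arbitrary choice \(p = 0.3,\) normalizes (5.14) and checks stationarity \(\mu(y) = \sum_x \mu(x) Q(x,y)\) site by site; for this \(p\) one also gets \(\mathbb{E}_0[H_0] = 1/\mu(0) = 3.5.\)

```python
# Verify stationarity of (5.14) for the reflected walk, with p = 0.3 (assumed).
p = 0.3
r = p / (1 - p)                      # ratio in (5.14); r < 1 since p < 1/2
C = 1 / ((1 - p) + 1 / (1 - r))      # normalization: mass = (1-p) + sum_{k>=0} r^k

def mu(x):
    return C * (1 - p) if x == 0 else C * r ** (x - 1)

def Q(x, y):
    if x == 0:
        return 1.0 if y == 1 else 0.0    # reflecting barrier: 0 -> 1 surely
    if y == x + 1:
        return p
    if y == x - 1:
        return 1 - p
    return 0.0

# Stationarity mu(y) = sum_x mu(x) Q(x, y), checked at the first few sites.
errs = [abs(mu(y) - sum(mu(x) * Q(x, y) for x in range(60))) for y in range(6)]
print(max(errs) < 1e-12)   # → True
print(1 / mu(0))           # mean return time to 0; equals 3.5 for p = 0.3
```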

What about \(p \geqslant\frac{1}{2}\)? In that case the stationary measure is infinite, and we cannot conclude anything about recurrence or transience from it (all that we can say is that the chain is not positive recurrent).

Instead, we shall use a coupling argument that relates, or couples, the chain \(X_n\) to a suitable random walk on \(\mathbb{Z},\) which can be more easily analysed. We consider the cases \(p = \frac{1}{2}\) and \(p > \frac{1}{2}\) separately.

  • Consider first the case \(p = \frac{1}{2}.\) Let \((Y_n)\) be the simple random walk on \(\mathbb{Z}\) (see Example 5.11). Then we claim that \(X_n :=\lvert Y_n \rvert\) is a simple random walk on \(\mathbb{N}\) with transition matrix \(Q.\) To show this, let \(y, x_0, \dots, x_n \in S\) and abbreviate \[B :=\{\lvert Y_0 \rvert = x_0, \dots, \lvert Y_{n-1} \rvert = x_{n-1}\}\,.\] If \(x_n = 0,\) then, by the Markov property from Proposition 5.15, \[\begin{aligned} &\mspace{-10mu} \mathbb{P}(\lvert Y_{n+1} \rvert = y \mid \lvert Y_n \rvert = x_n, \dots, \lvert Y_0 \rvert = x_0) \\ &= \mathbb{P}(\lvert Y_{n+1} \rvert = y \mid Y_n = 0, B) \\ &= \mathbf 1_{y = 1} \\ &= Q(0,y)\,, \end{aligned}\] as desired.

    Next, if \(x_n \neq 0\) we get \[\begin{aligned} &\mspace{-10mu} \mathbb{P}(\lvert Y_{n+1} \rvert = y \mid \lvert Y_n \rvert = x_n, \dots, \lvert Y_0 \rvert = x_0) \\ &= \mathbb{P}(\lvert Y_{n+1} \rvert = y \mid \lvert Y_n \rvert = x_n, B) \\ &= \mathbb{P}(\lvert Y_{n+1} \rvert = y \mid Y_n = x_n, B) \frac{\mathbb{P}(Y_n = x_n, B)}{ \mathbb{P}(\lvert Y_n \rvert = x_n, B)} \\ &\qquad + \mathbb{P}(\lvert Y_{n+1} \rvert = y \mid Y_n = -x_n, B) \frac{\mathbb{P}(Y_n = -x_n, B)}{ \mathbb{P}(\lvert Y_n \rvert = x_n, B)}\,. \end{aligned}\] The first factor of each term on the right-hand side is \(\frac{1}{2} \mathbf 1_{\lvert y - x_n \rvert = 1} = Q(x_n, y)\) by Proposition 5.15 and the definition of \(Y_n\) (note that this is only correct because \(p = \frac{1}{2}\)). Thus we conclude that \[\begin{gathered} \mathbb{P}(\lvert Y_{n+1} \rvert = y \mid \lvert Y_n \rvert = x_n, \dots, \lvert Y_0 \rvert = x_0) \\ = Q(x_n, y) \Biggl(\frac{\mathbb{P}(Y_n = x_n, B)}{ \mathbb{P}(\lvert Y_n \rvert = x_n, B)} + \frac{\mathbb{P}(Y_n = -x_n, B)}{ \mathbb{P}(\lvert Y_n \rvert = x_n, B)}\Biggr) = Q(x_n, y)\,, \end{gathered}\] as desired.

    We conclude that \[\mathbb{E}_0 \Biggl[\sum_{n \in \mathbb{N}} \mathbf 1_{X_n = 0}\Biggr] = \mathbb{E}_0 \Biggl[\sum_{n \in \mathbb{N}} \mathbf 1_{\lvert Y_n \rvert = 0}\Biggr] = \mathbb{E}_0 \Biggl[\sum_{n \in \mathbb{N}} \mathbf 1_{Y_n = 0}\Biggr] = \infty\,,\] where the last step follows from the recurrence of the simple random walk on \(\mathbb{Z}\) (Example 5.21). Hence, for \(p = \frac{1}{2}\) the chain \((X_n)\) is recurrent. Since it has a stationary measure (5.14) of infinite total mass, from Proposition 5.39 we conclude that it is null recurrent.

  • For \(p > \frac{1}{2},\) the previous coupling argument does not work because of the lack of symmetry. However, a somewhat different coupling to the asymmetric random walk on \(\mathbb{Z}\) does work. Let \((Y_n)\) be the asymmetric random walk on \(\mathbb{Z}\) starting from \(0\) from Example 5.27. It has the transition matrix \[\mathbb{P}(Y_{n+1} = y \mid Y_n = x) = p \mathbf 1_{y = x+1} + (1 - p) \mathbf 1_{y = x - 1}\,.\] The idea of the argument is to define a walk \(X_n\) on \(\mathbb{N}\) in terms of \(Y_n\) by imposing that \(X_n\) takes a step to the right whenever \(Y_n\) does, and \(X_n\) takes a step to the left whenever \(Y_n\) does, unless \(X_n = 0,\) in which case \(X_n\) takes a step to the right even if \(Y_n\) takes a step to the left.

    More formally, \(X_0 :=0\) and \[\tag{5.15} X_{n+1} :=X_{n} + \begin{cases} Y_{n+1} - Y_n &\text{if } X_n > 0 \\ 1 &\text{if } X_n = 0\,. \end{cases}\] From the definition and a simple induction argument, we get that \[\tag{5.16} X_n \geqslant Y_n\,, \qquad \forall n \in \mathbb{N}\,.\] Moreover, we claim that \((X_n)\) thus defined is a Markov chain with transition matrix \(Q.\) To show this, let \(y, x_0, \dots, x_n \in S\) and abbreviate \[B :=\{X_0 = x_0, \dots, X_{n} = x_{n}\}\,.\] By the definition (5.15), the vector \((X_0, \dots, X_{n})\) is a deterministic function of the vector \((Y_0, \dots, Y_{n}),\) and hence we can also write \(B = \{(Y_0, \dots, Y_{n}) \in A\}\) for some set \(A \subset \mathbb{Z}^{n+1}.\) Now if \(x_n = 0\) we have \[\mathbb{P}(X_{n+1} = y \mid X_n = x_n, \dots, X_0 = x_0) = \mathbf 1_{y = 1} = Q(0,y)\,,\] as desired. On the other hand, if \(x_n > 0,\) we get \[\begin{aligned} &\mspace{-10mu} \mathbb{P}(X_{n+1} = y \mid X_n = x_n, \dots, X_0 = x_0) \\ &= \mathbb{P}(X_{n+1} = y \mid B) \\ &= \mathbb{P}(Y_{n+1} = Y_n + y - x_n \mid B) \\ &= \sum_{z \in \mathbb{Z}} \mathbb{P}(Y_{n+1} = z + y - x_n \mid Y_n = z, B) \, \frac{\mathbb{P}(Y_n = z, B)}{\mathbb{P}(B)} \\ &= \sum_{z \in \mathbb{Z}} Q(x_n, y) \, \frac{\mathbb{P}(Y_n = z, B)}{\mathbb{P}(B)} \\ &= Q(x_n, y)\,, \end{aligned}\] where in the fourth step we used the Markov property from Proposition 5.15 for the Markov chain \((Y_n).\) We conclude that \((X_n)\) is indeed a Markov chain with transition matrix \(Q.\)

    To conclude the analysis, we note that the strong law of large numbers from Proposition 3.27 implies that \[\lim_{n \to \infty} \frac{Y_n}{n} = 2p - 1\] almost surely, because each step of the random walk \((Y_n)\) has expectation \(\mathbb{E}[Y_1 - Y_0] = 2p - 1.\) Since \(2p - 1 > 0,\) we conclude that, almost surely, \(Y_n \to + \infty\) as \(n \to \infty.\) From (5.16) we deduce that almost surely \(X_n \to + \infty\) as \(n \to \infty,\) and therefore \((X_n)\) is transient.
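The coupling (5.15) is straightforward to implement. The sketch below (with the arbitrary choice \(p = 0.7\)) drives \(X\) by the increments of \(Y,\) checks the domination (5.16) along the whole trajectory, and observes the drift \(X_n \geqslant Y_n \approx (2p-1)\,n \to \infty.\)

```python
import random

# Coupling (5.15): X follows the increments of the asymmetric walk Y on Z,
# except that X is forced one step to the right whenever it sits at 0.
p = 0.7                                # p > 1/2: transient regime (assumed)
rng = random.Random(3)

X, Y = 0, 0
domination_holds = True
n_steps = 100_000
for _ in range(n_steps):
    inc = 1 if rng.random() < p else -1
    Y += inc
    X = X + inc if X > 0 else X + 1    # (5.15): reflection at 0
    if X < Y:                          # (5.16) should never fail
        domination_holds = False

print(domination_holds)                # → True
print(X > 0.2 * n_steps)               # drift ~ (2p - 1) n = 0.4 n  → True
```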

We summarise: the random walk on \(\mathbb{N}\) is

  • positive recurrent for \(p < \frac{1}{2},\)

  • null recurrent for \(p = \frac{1}{2},\)

  • transient for \(p > \frac{1}{2}.\)
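As a final check of this trichotomy, in the positive recurrent regime the empirical mean return time to \(0\) can be compared with \(1/\mu(0)\) from (5.14); with the arbitrary choice \(p = 0.3\) one finds \(\mathbb{E}_0[H_0] = 1/\mu(0) = 3.5.\)

```python
import random

# Empirical mean return time to 0 for the reflected walk with p = 0.3,
# to be compared with 1/mu(0) = 3.5 computed from (5.14).
p = 0.3
rng = random.Random(4)

def return_time_to_0():
    x, t = 1, 1                        # the first step 0 -> 1 is forced
    while x != 0:
        x += 1 if rng.random() < p else -1
        t += 1
    return t

n = 100_000
avg = sum(return_time_to_0() for _ in range(n)) / n   # should be close to 3.5
```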
