Symmetric random walk passes through 1
I am working on the following problem (which I couldn't find on the website so far):
Show that a symmetric random walk ($1$ dimensional) starting from the origin visits the point $1$ with probability $1$.
My attempt so far is an adaptation of the recurrence proof from Probability: An Introduction, 2nd edition, by Grimmett and Welsh (p. 170).
Let $S_n$ be the random variable representing where (on the $x$-axis) the random walk is at time $n$, and let $X_n$ be the random variable representing the move made at time $n$. Of course, $X_i=\pm1$ and
$$
P(X_i=1) = P(X_i=-1)=\frac12 \quad\text{as it is symmetric.}
$$
Hence, we can write $S_n = X_1 + \dots + X_n$.
In order to be at point $1$ we would need to have made an odd number of moves ($m+1$ to the right and $m$ to the left), so we cannot be at point $1$ after an even number of moves. Hence,
$$
P(S_{2m}=1) = 0, \quad m\geq0 \tag{1}
$$
and
$$
P(S_{2m+1} = 1) = \binom{2m+1}{m}\left(\frac12\right)^{2m+1}, \quad m\geq0 \tag{2}
$$
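As a quick sanity check of equation $(2)$ (my own, not from the book), we can enumerate all equally likely step sequences of odd length for small $m$ and count those ending at $1$:

```python
from itertools import product
from math import comb

# Check equation (2): P(S_{2m+1} = 1) = C(2m+1, m) / 2^(2m+1),
# by enumerating all 2^(2m+1) equally likely step sequences for small m.
for m in range(4):
    n = 2 * m + 1
    hits = sum(1 for steps in product((-1, 1), repeat=n) if sum(steps) == 1)
    assert hits == comb(n, m)  # counts of favourable paths agree
    print(m, hits / 2**n)
```

A path ends at $1$ exactly when it has $m+1$ up-steps out of $2m+1$, and $\binom{2m+1}{m+1}=\binom{2m+1}{m}$, so the counts match.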
Let, now, $A_n = \{S_n = 1\}$ be the event that the walk visits point $1$ at time $n$, and:
$$
B_n = \{S_n = 1,\ S_k\neq1 \text{ for } 1\leq k\leq n-1\}
$$
for the event that the first visit of the walk to point $1$ occurs at time $n$. If $A_n$ occurs, then exactly one of $B_1, \dots, B_n$ occurs, giving that:
$$
P(A_n) = \sum_{k=1}^n P(A_n\cap B_k).
$$
Now, $A_ncap B_k$ is the event that the walk goes through $1$ for the first time at time $k$ and then again in another $n-k$ steps. Hence:
$$
P(A_n\cap B_k) = P(B_k)P(A_{n-k}), \quad\text{for } 2\leq k \leq n \tag{3}
$$
(I have doubts that the bounds in the equation above are correct; in particular I am not sure about $n\geq 2$.) This factorization holds since transitions in disjoint intervals of time are independent of each other. We write $f_n = P(B_n)$ and $u_n = P(A_n)$. Hence, from the above equations, we get that:
$$
u_n = \sum_{k=2}^n f_k u_{n-k}, \quad\text{for } n=1,2,\dots
$$
We know the $u_i$'s from $(1)$ and $(2)$ and we want to find the $f_k$'s. Given that the summation we got above is a convolution, we can use probability generating functions:
$$
U(s) = \sum_{n=0}^\infty u_n s^n
$$
$$
F(s) = \sum_{n=0}^\infty f_n s^n
$$
Noting that $u_0 = 0$ and $f_0 = 0$, we have, from $(3)$, that:
$$
\sum_{n=2}^\infty u_n s^n = F(s)U(s).
$$
Hence $U(s) - \frac12 s = F(s)U(s)$, and so
$$
F(s) = 1 - \frac{s}{2\,U(s)}.
$$
And we can find out (not so easily), that:
$$
U(s) = \frac{1-\sqrt{1-s^2}}{s\sqrt{1-s^2}}, \quad |s|<1.
$$
Hence,
$$
F(s) = 1-\frac{s^2\sqrt{1-s^2}}{2-2\sqrt{1-s^2}}, \quad |s|<1.
$$
We get the probability we are interested in by taking the limit in the above equation as $s\to 1$, which yields $1$.
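Two numerical sanity checks (my own, not part of the book's argument): the closed form for $U(s)$ can be compared against its power series using the $u_n$ from $(2)$, and the conclusion $\sum_n f_n = 1$ can be checked independently by propagating the walk's distribution restricted to paths that have not yet hit $1$:

```python
from collections import defaultdict
from math import comb, sqrt

# 1) The closed form U(s) = (1 - sqrt(1-s^2)) / (s*sqrt(1-s^2)) matches the
#    series sum_n u_n s^n, with u_{2m+1} = C(2m+1, m) / 2^(2m+1) from (2).
s = 0.5
series = sum(comb(2*m + 1, m) / 2**(2*m + 1) * s**(2*m + 1) for m in range(200))
closed = (1 - sqrt(1 - s*s)) / (s * sqrt(1 - s*s))
assert abs(series - closed) < 1e-12

# 2) The first-passage probabilities f_n to point 1 sum to 1: at each step,
#    split the surviving mass left/right and remove the mass that first
#    reaches 1, accumulating it.
probs = {0: 1.0}          # distribution of S_n over paths avoiding 1 so far
mass_hit_one = 0.0
for n in range(1, 2001):
    nxt = defaultdict(float)
    for pos, p in probs.items():
        nxt[pos - 1] += p / 2
        nxt[pos + 1] += p / 2
    mass_hit_one += nxt.pop(1, 0.0)   # first visits to 1 at time n
    probs = nxt
print(mass_hit_one)  # creeps toward 1; the remaining mass decays like 1/sqrt(n)
```

The slow convergence in the second check reflects that the expected first-passage time to $1$ is infinite, even though the passage happens with probability $1$.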
I am not sure if I manipulated the (summation) indexes correctly. I would appreciate comments on this proof as well as an alternative proof if anybody knows one.
probability proof-verification stochastic-processes random-walk
An alternative proof would be to prove that the time of first return to $0$ is finite. That proves that $0$ is a recurrent state (when viewing the random walk as a Markov chain). Since $0$ communicates with $1$, $1$ is recurrent as well. This little reasoning shows that even if you start at $2^{10000}$, you will reach $1$ someday with probability $1$.
– nicomezi
Aug 14 at 12:13
this might be useful. math.stackexchange.com/questions/536/…
– pointguard0
Aug 14 at 13:56
asked Aug 14 at 12:10
Andrei Crisan
1 Answer
Here is a much simpler martingale argument. Note that by continuity from below,
\begin{align*}
\{T_1< \infty\} = \bigcup_{n=1}^\infty \{T_1 < T_{-n}\} \;\Rightarrow\; P(T_1 < \infty) = \lim_n P(T_1 < T_{-n}),
\end{align*}
where $T_{-n} \doteq \inf\{m : S_m = -n\}$. Letting $T = T_1 \wedge T_{-n}$, we have by the Optional Sampling Theorem that
\begin{align*}
0 = E S_0 = E S_{T \wedge m}.
\end{align*}
Also note that $P(T < \infty) = 1$, since $P(T \geq m(1+n)) \leq (1-2^{-(1+n)})^m$; i.e., every sequence of $n+1$ flips has probability $2^{-(n+1)}$ of being all heads, in which case the walk will escape the interval $(-n,1)$. If the time of escape is greater than $m(n+1)$, then we must have failed to obtain $n+1$ heads in a row, $m$ times in a row. We now have
\begin{align*}
E\left\lfloor\frac{T}{n+1}\right\rfloor = \sum_{m=1}^\infty P(T \geq m(n+1)) < \infty,
\end{align*}
since this is dominated by a geometric series. Hence $ET < \infty$, so $T$ is finite with probability $1$, so that $S_{T \wedge m} \to S_T$ almost surely. Since $S_{T \wedge m}$ is bounded between $-n$ and $1$, we have by the Dominated Convergence Theorem that
\begin{align*}
0 = \lim_{m \to\infty} E S_{T \wedge m} = E S_T = P(T_1 < T_{-n})\cdot 1 + (1-P(T_1 < T_{-n})) \cdot (-n).
\end{align*}
Rearranging gives
\begin{align*}
P(T_1 < T_{-n}) = \frac{n}{n+1}.
\end{align*}
Hence, $P(T_1 < \infty) = \lim_{n\to\infty} P(T_1 < T_{-n}) = 1$.
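Not part of the original answer: the gambler's-ruin identity $P(T_1 < T_{-n}) = \frac{n}{n+1}$ derived above can be sketched with a quick Monte Carlo check (`hits_one_before` is a hypothetical helper name for illustration):

```python
import random

# Estimate P(T_1 < T_{-n}) for the symmetric walk started at 0 by simulating
# until the walk exits the interval (-n, 1); compare with n / (n + 1).
def hits_one_before(n, rng):
    pos = 0
    while -n < pos < 1:
        pos += rng.choice((-1, 1))
    return pos == 1

rng = random.Random(0)
for n in (1, 4, 9):
    trials = 20000
    est = sum(hits_one_before(n, rng) for _ in range(trials)) / trials
    print(n, est, n / (n + 1))  # estimates track n/(n+1)
```

Each run terminates almost surely (the expected exit time of $(-n,1)$ is $n \cdot 1 = n$), so the loop is safe despite the walk being unbounded in principle.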
answered Aug 14 at 15:51
Daniel Xiang