$\pi(x) P_x(\tau_y < \tau_x^+) = \pi(y) P_y(\tau_x < \tau_y^+)$?
$P$ is the transition probability of a finite, irreducible and reversible Markov chain (i.e. a walk on a finite network $G$). $\pi$ is the stationary distribution. For $y \in G$, $\tau_y$ is the hitting time of $y$, and $\tau_y^+ \doteq \min\{n \ge 1 : X_n = y\}$ is the first hitting time from time $1$ onward.
Is it true that $\pi(x) P_x(\tau_y < \tau_x^+) = \pi(y) P_y(\tau_x < \tau_y^+)$?
This seems to be a necessary fact in an exercise I'm working on. (But perhaps it is false?)
I tried writing out:
$\pi(x) P_x(\tau_y < \tau_x^+) = \sum_w \pi(x) P(x,w) P_w(\tau_y < \tau_x) = \sum_w \pi(w) P(w,x) P_w(\tau_y < \tau_x)$, since I thought the fact that $P_w(\tau_y < \tau_x)$ is harmonic in $w$ away from $x,y$ could be useful, but this doesn't seem to be helpful.
A hint would be really appreciated!
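As a sanity check, here is a minimal numerical sketch (the small weighted graph below is made up for illustration); it computes $P_x(\tau_y < \tau_x^+)$ by solving the harmonic equations for $P_w(\tau_y < \tau_x)$, and the two sides of the identity agree:

```python
import numpy as np

# Made-up symmetric conductances on a small connected graph; the induced
# walk P(x, w) = W[x, w] / sum_w W[x, w] is reversible with stationary
# distribution pi(x) proportional to the row sums of W.
W = np.array([[0, 2, 1, 0],
              [2, 0, 3, 1],
              [1, 3, 0, 2],
              [0, 1, 2, 0]], dtype=float)
P = W / W.sum(axis=1, keepdims=True)
pi = W.sum(axis=1) / W.sum()

def escape_prob(P, x, y):
    """P_x(tau_y < tau_x^+): solve for h(w) = P_w(tau_y < tau_x), which is
    harmonic away from x, y with h(x) = 0, h(y) = 1, then take one step."""
    n = P.shape[0]
    A = np.eye(n) - P
    A[[x, y]] = 0.0
    A[x, x] = A[y, y] = 1.0
    b = np.zeros(n)
    b[y] = 1.0
    h = np.linalg.solve(A, b)
    return P[x] @ h

x, y = 0, 3
print(pi[x] * escape_prob(P, x, y))  # the two printed values coincide
print(pi[y] * escape_prob(P, y, x))
```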
markov-chains
asked Aug 7 at 15:09
Lorenzo
1 Answer
Note first that since the probability measures start at $x$ and $y$ on the left and right hand sides, respectively, we can replace $\tau$ by $\tau^+$.
Now observe that if the Markov chain is reversible, then there exists a positive weight function $w\colon \mathcal{X} \to \mathbb{R}^+$ satisfying the detailed balance conditions, i.e.
\begin{align*}
w(x) p(x,y) = w(y) p(y,x).
\end{align*}
Since $\mathcal{X}$ is finite, $\sum_{x \in \mathcal{X}} w(x) \doteq A < \infty$, and since any scalar multiple of a weight function is also a weight function, we must have that $\pi = w/A$ is a stationary distribution (sum both sides of the detailed balance equation over $x$). Since the chain is irreducible and a stationary distribution exists, we must have $\pi(x) > 0$ for all $x \in \mathcal{X}$ (exercise!). Hence the chain is recurrent (which implies that stationary measures are unique up to constant multiples).
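Explicitly, the summation step above reads
\begin{align*}
\sum_x \pi(x) p(x,y) = \pi(y) \sum_x p(y,x) = \pi(y),
\end{align*}
i.e. $\pi P = \pi$.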
One way to get the result is to view the Markov chain as an electrical network with conductance function $C(x) = w(x)$, and then invoke the result
\begin{align*}
P_x(\tau^+_y < \tau^+_x) = \frac{1}{C(x) R_{\text{eff}}},
\end{align*}
where $R_{\text{eff}}$ is the effective resistance between $x$ and $y$. Dividing this by the analogous equation for $y$, it follows immediately that
\begin{align*}
\frac{P_x(\tau^+_y < \tau_x^+)}{P_y(\tau^+_x < \tau_y^+)} = \frac{C(y)}{C(x)} = \frac{\pi(y)}{\pi(x)},
\end{align*}
as desired.
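For illustration, a numerical sketch of this formula on the same made-up network as in the question's sanity check; the effective resistance is computed from the pseudoinverse of the weighted graph Laplacian (a standard identity):

```python
import numpy as np

# Same made-up conductances as in the question's sanity check.
W = np.array([[0, 2, 1, 0],
              [2, 0, 3, 1],
              [1, 3, 0, 2],
              [0, 1, 2, 0]], dtype=float)
P = W / W.sum(axis=1, keepdims=True)
C = W.sum(axis=1)            # C(x) = w(x): total conductance at x

def escape_prob(P, x, y):    # P_x(tau_y^+ < tau_x^+), as in the earlier sketch
    n = P.shape[0]
    A = np.eye(n) - P
    A[[x, y]] = 0.0
    A[x, x] = A[y, y] = 1.0
    b = np.zeros(n)
    b[y] = 1.0
    return P[x] @ np.linalg.solve(A, b)

# Effective resistance via the Laplacian pseudoinverse:
# R_eff(x, y) = (e_x - e_y)^T L^+ (e_x - e_y).
L = np.diag(C) - W
Lplus = np.linalg.pinv(L)

def r_eff(x, y):
    e = np.zeros(len(C))
    e[x], e[y] = 1.0, -1.0
    return e @ Lplus @ e

x, y = 0, 3
print(escape_prob(P, x, y))      # these two numbers agree:
print(1 / (C[x] * r_eff(x, y)))  # P_x(tau_y^+ < tau_x^+) = 1/(C(x) R_eff)
```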
To obtain the result without using electrical network theory, let $x$ be a recurrent state (all states are recurrent by the above discussion) and define the measure
\begin{align*}
\mu_x(y) \doteq E_x \sum_{n=0}^{\tau_x^+ - 1} \mathbf{1}_{\{X_n = y\}},
\end{align*}
i.e. the expected number of visits to $y$, starting at $x$, before returning to $x$. This defines a stationary measure, and is thus a scalar multiple of $\pi$ (stationary measures are unique up to constant multiples for irreducible, recurrent chains). Note that by definition, $\mu_x(x) = 1$. Hence,
\begin{align*}
\frac{\pi(y)}{\pi(x)} = \frac{\mu_x(y)}{\mu_x(x)} = \mu_x(y).
\end{align*}
To compute this expectation, denote $N_x(y) \doteq \sum_{n=0}^{\tau_x^+ - 1} \mathbf{1}_{\{X_n = y\}}$, so it suffices to show that
\begin{align*}
E_x(N_x(y)) = \frac{P_x(\tau_y^+ < \tau_x^+)}{P_y(\tau_x^+ < \tau_y^+)}.
\end{align*}
Indeed,
\begin{align*}
E_x(N_x(y)) &= E_x\bigl(N_x(y) \mathbf{1}_{\{\tau_y^+ < \tau_x^+\}} + N_x(y) \mathbf{1}_{\{\tau_y^+ > \tau_x^+\}}\bigr) \\
&= E_x\bigl(N_x(y) \mathbf{1}_{\{\tau_y^+ < \tau_x^+\}}\bigr) \\
&= E_x\bigl(N_x(y) \mid \tau_y^+ < \tau_x^+\bigr) P_x(\tau_y^+ < \tau_x^+),
\end{align*}
where the second equality follows from the fact that the number of times we visit $y$ before $x$ is $0$ on the set where we hit $x$ before $y$. Now note that conditioned on $\tau_y^+ < \tau_x^+$, $N_x(y) \in \{1,2,3,\dots\}$ is distributed as a geometric random variable with success parameter $P_y(\tau_x^+ < \tau_y^+)$ (think of breaking time up into chunks separated by visits to either $x$ or $y$; then hitting $x$ to end the process is like flipping a coin with bias $P_y(\tau_x^+ < \tau_y^+)$). Hence the expectation is $\frac{1}{P_y(\tau_x^+ < \tau_y^+)}$, which yields
\begin{align*}
\frac{\pi(y)}{\pi(x)} = E_x(N_x(y)) = \frac{P_x(\tau_y^+ < \tau_x^+)}{P_y(\tau_x^+ < \tau_y^+)},
\end{align*}
as desired. Notice that we didn't use the hypothesis of reversibility in the second argument, in contrast to the first, where it was used to define the conductance function for the electrical network (which made the proof much quicker).
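To see this second argument numerically, here is a minimal Monte Carlo sketch (again on the made-up chain from the question's sanity check); the estimate of $E_x(N_x(y))$ should be close to $\pi(y)/\pi(x)$:

```python
import numpy as np

# Same made-up chain; Monte Carlo estimate of E_x[N_x(y)], the number of
# visits to y during one excursion from x back to x.
W = np.array([[0, 2, 1, 0],
              [2, 0, 3, 1],
              [1, 3, 0, 2],
              [0, 1, 2, 0]], dtype=float)
P = W / W.sum(axis=1, keepdims=True)
pi = W.sum(axis=1) / W.sum()
rng = np.random.default_rng(0)

def mean_visits(P, x, y, n_excursions=50_000):
    n, total = P.shape[0], 0
    for _ in range(n_excursions):
        state = rng.choice(n, p=P[x])        # step away from x at time 1
        while state != x:                    # count visits to y until the return
            total += (state == y)
            state = rng.choice(n, p=P[state])
    return total / n_excursions

x, y = 0, 3
print(mean_visits(P, x, y))  # close to the exact ratio below
print(pi[y] / pi[x])
```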
answered Aug 7 at 15:42
Daniel Xiang
Thank you! Do you think you could add a bit on how to see that $\mu_x(y)$ is a stationary measure? (This seems intuitive, but I'm not sure how to actually write down a proof...)
– Lorenzo, Aug 7 at 16:12
The proof is nontrivial but can be found in Durrett's Probability Theory and Examples (Theorem 6.5.2, the book is online and free).
– Daniel Xiang, Aug 7 at 16:16
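For completeness, a sketch of the computation behind that theorem (the standard argument; measure-theoretic details as in Durrett): for any state $z$,
\begin{align*}
\sum_y \mu_x(y) p(y,z) = \sum_{n=0}^{\infty} P_x(X_{n+1} = z,\ \tau_x^+ \ge n+1) = \sum_{m=1}^{\infty} P_x(X_m = z,\ \tau_x^+ \ge m),
\end{align*}
using the Markov property and the fact that $\{\tau_x^+ > n\}$ is determined by $X_0, \dots, X_n$. For $z \neq x$ the events $\{X_m = z,\ \tau_x^+ \ge m\}$ and $\{X_m = z,\ \tau_x^+ > m\}$ coincide, so the sum equals $\mu_x(z) - P_x(X_0 = z) = \mu_x(z)$; for $z = x$ it equals $\sum_{m \ge 1} P_x(\tau_x^+ = m) = 1 = \mu_x(x)$ by recurrence. Hence $\mu_x P = \mu_x$.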