## Sampling conditioned Markov chains, and diffusions

In many situations it is useful to know how to sample a Markov chain (or a diffusion process) ${X_k}$ between times ${0}$ and ${T}$, conditioned on the events ${X_0=a}$ and ${X_T=b}$. This conditioned process ${X^{\star}}$ is still a Markov chain, but in general it is not time homogeneous. Moreover, it is generally very difficult to compute its transition probabilities, since they depend on the transition probabilities ${p(t,x,y) = \mathop{\mathbb P}(X_t=y | X_0=x)}$ of the unconditioned Markov chain, which are usually not available: this was discussed in a previous post on Doob h-transforms. Perhaps surprisingly, this article by Michael Sorensen and Mogens Bladt shows that it is sometimes quite easy to sample good approximations of such a conditioned Markov chain, or diffusion.

1. Reversible Markov chains

Recall that a Markov chain ${X_k}$ on the state space ${S}$ with transition kernel ${p(x,y) = \mathop{\mathbb P}(X_{k+1}=y \,|\, X_k=x)}$ is reversible with respect to the probability distribution ${\pi}$ if for any ${x,y \in S}$ $\displaystyle \pi(x) p(x,y) = p(y,x) \pi(y). \ \ \ \ \ (1)$

In words, this means that looking at a trajectory of this Markov chain, it is impossible to tell whether time is running forward or backward: indeed, the probability of observing ${y_1, y_2, \ldots, y_n}$ is equal to ${\pi(y_1)p(y_1, y_2) \ldots p(y_{n-1}, y_n)}$, which is also equal to ${\pi(y_n)p(y_n,y_{n-1}) \ldots p(y_2, y_1)}$ if the chain is reversible.
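As a quick sanity check, this time-reversal identity is easy to verify numerically. The sketch below uses a random walk on a small weighted graph, a standard example of a reversible chain with ${\pi(x)}$ proportional to the total edge weight at ${x}$; the weights are made up for the illustration.

```python
# Sketch: for a reversible chain, the probability of a trajectory equals
# the probability of the reversed trajectory.
# Chain: random walk on {0,1,2} driven by symmetric edge weights w
# (a hypothetical example); it is reversible w.r.t. pi(x) ~ sum_y w[x][y].
w = [[0, 2, 1], [2, 0, 3], [1, 3, 0]]  # symmetric "conductances"
deg = [sum(row) for row in w]
Z = sum(deg)
pi = [d / Z for d in deg]
p = [[w[x][y] / deg[x] for y in range(3)] for x in range(3)]

def path_prob(path):
    """pi(y1) p(y1,y2) ... p(y_{n-1},y_n) for a given trajectory."""
    pr = pi[path[0]]
    for x, y in zip(path, path[1:]):
        pr *= p[x][y]
    return pr

traj = [0, 1, 2, 1, 0, 2]
assert abs(path_prob(traj) - path_prob(traj[::-1])) < 1e-12
print("forward and backward path probabilities agree")
```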

It is precisely this invariance under time reversal that makes it possible to sample from a conditioned reversible Markov chain. Since, under mild conditions, a one dimensional diffusion is also reversible, this shows that (ergodic) one dimensional conditioned diffusions are sometimes quite easy to sample from!

2. One dimensional diffusions are reversible!

The other day someone told me that almost any one dimensional diffusion is reversible! I did not know that, and I must admit that I still find this result rather surprising. Indeed, it is not true for multidimensional diffusions, for which it is very easy to construct counter-examples. What makes the result work for real diffusions is that there is only one way to go from ${a}$ to ${b}$, namely through the segment ${[a,b]}$; the situation is completely different in higher dimensions.

First, let us remark that any Markov chain on ${\mathbb{Z}}$ that has ${\pi}$ as invariant distribution and that can only make jumps of size ${+1}$ or ${-1}$ is reversible: such a Markov chain is usually called skip-free in the literature. This is extremely easy to prove, and since skip-free Markov chains have been studied a lot, I am sure that this result appears somewhere in the literature (any reference for that?). To show the result, it suffices to show that ${u_k-d_{k+1}=0}$ for any ${k \in \mathbb{Z}}$, where ${u_k = \pi(k) p(k,k+1)}$ is the upward flux at level ${k}$ and ${d_k = \pi(k)p(k,k-1)}$ is the downward flux. Because ${\pi}$ is invariant it satisfies the usual balance equations, $\displaystyle \begin{array}{rcl} u_k+d_k &=& \pi(k) = \pi(k-1)p(k-1,k)+\pi(k+1)p(k+1,k)\\ &=& u_{k-1} + d_{k+1} \end{array}$

so that ${u_{k}-d_{k+1} = u_{k-1}-d_{k}}$. Iterating, we get that for any ${n \in \mathbb{Z}}$ we have ${u_k-d_{k+1} = u_{k-n}-d_{k-n+1}}$: the conclusion follows since ${\lim_{m \rightarrow \pm \infty} u_m = \lim_{m \rightarrow \pm \infty} d_m = 0}$.
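The balance argument above can also be checked numerically on a finite skip-free chain. The sketch below uses a reflected ${\pm 1}$ walk on ${\{0,\ldots,N\}}$ (the reflection rule at the boundaries is one arbitrary choice) and a crude power iteration to find ${\pi}$, then verifies the detailed balance relation (1) level by level.

```python
# Numerical check (a sketch) that a skip-free chain satisfies detailed
# balance: +1/-1 walk on {0,...,N} with up-probability p, reflected by
# staying put when a jump would leave the state space.
N, p = 10, 0.4
P = [[0.0] * (N + 1) for _ in range(N + 1)]
for k in range(N + 1):
    P[k][min(k + 1, N)] += p       # jump +1 (stay at N)
    P[k][max(k - 1, 0)] += 1 - p   # jump -1 (stay at 0)

# crude power iteration to approximate the invariant distribution pi
pi = [1.0 / (N + 1)] * (N + 1)
for _ in range(5000):
    pi = [sum(pi[x] * P[x][y] for x in range(N + 1)) for y in range(N + 1)]

# detailed balance (1): pi(k) p(k,k+1) == pi(k+1) p(k+1,k)
for k in range(N):
    assert abs(pi[k] * P[k][k + 1] - pi[k + 1] * P[k + 1][k]) < 1e-9
print("detailed balance holds for the skip-free chain")
```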

This simple result on skip-free Markov chains also gives the result for many one dimensional diffusions ${dX_t = \alpha(X_t) \, dt + \sigma(X_t) \, dW_t}$, since under regularity assumptions on ${\alpha}$ and ${\sigma}$ they can be seen as limits of skip-free one dimensional Markov chains on ${\epsilon \mathbb{Z}}$. I guess that the usual proof of this result goes through introducing the scale function and the speed measure of the diffusion, but I would be very glad if anyone had a more pedestrian approach that gives more intuition.
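To make the approximation concrete, here is a sketch of a skip-free chain on ${\epsilon \mathbb{Z}}$ approximating the Ornstein-Uhlenbeck diffusion ${dX_t = -X_t \, dt + dW_t}$ (my own toy choice of ${\alpha}$ and ${\sigma}$): jumps of size ${\pm \epsilon}$ over time steps of length ${\epsilon^2}$, with the up-probability tilted by the drift so that the mean increment per unit time is ${-x}$ and the variance per unit time is ${1}$.

```python
import random

# Sketch: the OU diffusion dX = -X dt + dW as a limit of skip-free chains.
# On the grid eps*Z with time step eps^2, jump +eps w.p. (1 - x*eps)/2
# and -eps otherwise: E[dX] = -x*eps^2 and Var[dX] = eps^2 (to leading
# order), matching drift -x and unit diffusivity per unit time.
eps = 0.1
rng = random.Random(1)
x, xs = 0.0, []
for _ in range(200_000):            # total time = 200000 * eps^2 = 2000
    p_up = min(max(0.5 * (1.0 - x * eps), 0.0), 1.0)
    x += eps if rng.random() < p_up else -eps
    xs.append(x)

mean = sum(xs) / len(xs)
var = sum(v * v for v in xs) / len(xs) - mean * mean
print(round(var, 2))  # should be close to the stationary variance 1/2
```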

3. How to sample conditioned reversible Markov chains

From what has been said before, it should now be quite clear how a conditioned reversible Markov chain can be sampled. Suppose that ${X}$ is a reversible Markov chain and that we would like to sample a path conditioned on the events ${X_0=a}$ and ${X_T=b}$:

1. sample the path ${X^{(a)}}$ between ${t=0}$ and ${t=T}$, starting from ${X^{(a)}_0=a}$
2. sample the path ${X^{(b)}}$ between ${t=0}$ and ${t=T}$, starting from ${X^{(b)}_0=b}$
3. if there exists ${t \in [0,T]}$ such that ${X^{(a)}_t = X^{(b)}_{T-t}}$ then define the path ${X^{\star}}$ by $\displaystyle X^{\star}_s = \left\{ \begin{array}{ll} X^{(a)}_s & \text{ for } s \in [0,t]\\ X^{(b)}_{T-s} & \text{ for } s \in [t,T], \end{array} \right. \ \ \ \ \ (2)$and otherwise go back to step ${1}$.

Indeed, the resulting path ${X^{\star}}$ is an approximation of a realisation of the conditioned Markov chain: this is not hard to prove by playing around with the definition of time reversibility. It is not hard at all to adapt this idea to reversible diffusions, though the result is again only an approximation. The interesting question is how good this approximation is (see the paper by Michael Sorensen and Mogens Bladt).

For example, here is a sample from a birth-death process, conditioned on the events ${X_0=0}$ and ${X_{1000}=50}$, with transition probabilities ${\mathop{\mathbb P}(X_{t+1}=k+1 | X_t=k) = 0.4 = 1-\mathop{\mathbb P}(X_{t+1}=k-1 | X_t=k)}$.
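The three-step procedure is short to implement. Here is a minimal Python sketch for this birth-death example, assuming (since the downward drift ${0.4 < 0.5}$ requires it for positive recurrence) that the chain is reflected at ${0}$; the reflection rule below, staying at ${0}$ on a down-move, is one possible choice that keeps the chain reversible.

```python
import random

def bd_path(start, T, p, rng):
    """T steps of the birth-death chain reflected at 0:
    up with prob p, down with prob 1-p (at 0 a down-move stays put)."""
    x = [start]
    for _ in range(T):
        if rng.random() < p:
            x.append(x[-1] + 1)
        else:
            x.append(max(x[-1] - 1, 0))
    return x

def conditioned_sample(a, b, T, p, rng, max_tries=10_000):
    """Rejection sampler for the bridge X_0 = a, X_T = b (steps 1-3)."""
    for _ in range(max_tries):
        xa = bd_path(a, T, p, rng)          # step 1: path from a
        xb = bd_path(b, T, p, rng)          # step 2: path from b
        for t in range(T + 1):
            if xa[t] == xb[T - t]:          # step 3: crossing found,
                # glue the forward piece of xa with time-reversed xb
                return xa[:t + 1] + [xb[T - s] for s in range(t + 1, T + 1)]
    raise RuntimeError("no crossing found")  # otherwise retry (step 1)

rng = random.Random(0)
path = conditioned_sample(0, 50, 1000, 0.4, rng)
print(path[0], path[-1])  # prints: 0 50
```

By construction the returned path starts at ${a}$ and, since ${X^{\star}_T = X^{(b)}_0 = b}$, ends exactly at ${b}$; only the interior of the path is approximate.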

4. Final remark

It might be interesting to notice that this method is especially inefficient for multidimensional processes: the probability of finding an instant ${t}$ such that ${X^{(a)}_t = X^{(b)}_{T-t}}$ is extremely small, and for diffusions it is in many cases equal to ${0}$! The method works pretty well for one dimensional diffusions thanks to the continuity of the paths and the intermediate value property. Nevertheless, even for one dimensional diffusions this method does not work well at all when trying to sample conditioned paths between two meta-stable positions: this is precisely the situation of interest in many physics problems, for example when studying the evolution of a particle in a double well potential. In short, sampling conditioned (multidimensional) diffusions remains a very difficult problem.

1. #### TheBridge said,

February 5, 2014 at 7:55 am

Hi, a look here might raise your interest :

Best regards

TheBridge
