In many situations it is useful to know how to sample a Markov chain (or a diffusion process) between times $0$ and $T$, conditioned on the knowledge that $X_0 = a$ and $X_T = b$. This conditioned Markov chain is still a Markov chain, but in general it is not time homogeneous. Moreover, it is generally very difficult to compute the transition probabilities of this conditioned Markov chain since they depend on the transition probabilities of the unconditioned Markov chain, which are usually not available: this has been discussed in this previous post on Doob h-transforms. Perhaps surprisingly, this article by Michael Sorensen and Mogens Bladt shows that it is sometimes quite easy to sample good **approximations** of such a conditioned Markov chain, or diffusion.

**1. Reversible Markov chains**

Remember that a Markov chain on the state space $S$ with transition operator $p(\cdot,\cdot)$ is reversible with respect to the probability $\pi$ if for any $x, y \in S$,

$$\pi(x)\, p(x,y) = \pi(y)\, p(y,x).$$

In words, this means that when looking at a trajectory of this Markov chain, it is impossible to say whether time is running forward or backward: indeed, the probability of observing $x_0, x_1, \ldots, x_n$ is equal to $\pi(x_0)\, p(x_0,x_1) \cdots p(x_{n-1},x_n)$, which is also equal to $\pi(x_n)\, p(x_n,x_{n-1}) \cdots p(x_1,x_0)$, the probability of the reversed trajectory, if the chain is reversible.
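As a quick numerical sanity check, here is a minimal sketch with a made-up 3-state kernel whose entries were chosen by hand to satisfy detailed balance (the numbers are illustrative assumptions, not from the paper):

```python
import numpy as np

# Illustrative 3-state chain: the kernel p satisfies detailed balance
# pi(x) p(x, y) = pi(y) p(y, x) with respect to pi (numbers chosen by hand).
pi = np.array([0.2, 0.3, 0.5])
p = np.array([[0.50, 0.30, 0.20],
              [0.20, 0.50, 0.30],
              [0.08, 0.18, 0.74]])

def path_probability(path):
    """Probability of observing this trajectory when started from pi."""
    prob = pi[path[0]]
    for x, y in zip(path[:-1], path[1:]):
        prob *= p[x, y]
    return prob

path = [0, 1, 2, 2, 1, 0, 2]
forward = path_probability(path)
backward = path_probability(path[::-1])
print(forward, backward)  # equal up to floating point: reversal is undetectable
```

Detailed balance makes the forward and time-reversed path probabilities agree for every trajectory, which is exactly the "cannot tell which way time runs" statement above.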

It is precisely this invariance under time reversal that allows one to sample from a conditioned reversible Markov chain. Since under mild conditions a one-dimensional diffusion is also reversible, this also shows that (ergodic) one-dimensional conditioned diffusions are sometimes quite easy to sample from!

**2. One-dimensional diffusions are reversible!**

The other day someone told me that almost any one-dimensional diffusion is reversible! I did not know that, and I must admit that I still find this result rather surprising. It is not true for multidimensional diffusions, and it is very easy to construct counter-examples. What makes the result work for real diffusions is that there is only one way to go from $x$ to $y$, namely through the segment $[x,y]$; the situation is completely different in higher dimensions.

First, let us remark that any Markov chain on $\mathbb{Z}$ that has $\pi$ as invariant distribution and that can only make jumps of size $-1$, $0$ or $+1$ is reversible: such a Markov chain is usually called skip-free in the literature. This is extremely easy to prove, and since skip-free Markov chains have been studied a lot, I am sure that this result is somewhere in the literature (any reference for that?). To show the result, it suffices to show that $\Phi^+(k) = \Phi^-(k)$ for any $k$, where $\Phi^+(k) = \pi(k)\, p(k,k+1)$ is the upward flux at level $k$ and $\Phi^-(k) = \pi(k+1)\, p(k+1,k)$ is the downward flux. Because $\pi$ is invariant it satisfies the usual balance equations,

$$\pi(k) = \pi(k-1)\, p(k-1,k) + \pi(k)\, p(k,k) + \pi(k+1)\, p(k+1,k),$$

so that, using $p(k,k-1) + p(k,k) + p(k,k+1) = 1$, we get $\Phi^+(k) - \Phi^-(k) = \Phi^+(k-1) - \Phi^-(k-1)$. Iterating, the difference $\Phi^+(k) - \Phi^-(k)$ is the same for every $k$: the conclusion follows since $\Phi^{\pm}(k) \to 0$ as $k \to -\infty$.

This simple result on skip-free Markov chains also gives the result for many one-dimensional diffusions $dX_t = \mu(X_t)\,dt + \sigma(X_t)\,dW_t$, since under regularity assumptions on $\mu$ and $\sigma$ they can be seen as limits of skip-free one-dimensional Markov chains on the rescaled lattice $\epsilon \mathbb{Z}$. I guess that the usual proof of this result goes through introducing the scale function and the speed measure of the diffusion, but I would be very glad if anyone had a more pedestrian approach that gives more intuition.

**3. How to sample conditioned reversible Markov chains**

From what has been said before, it should now be quite clear how a conditioned reversible Markov chain can be sampled. Suppose that $X$ is a reversible Markov chain and that we would like to sample a path conditioned on the event $X_0 = a$ and $X_T = b$:

- sample a path $(X_t)_{0 \le t \le T}$ between $0$ and $T$, starting from $X_0 = a$
- sample a path $(Y_t)_{0 \le t \le T}$ between $0$ and $T$, starting from $Y_0 = b$
- if there exists $t$ such that $X_t = Y_{T-t}$ then define the path $(Z_s)_{0 \le s \le T}$ by $Z_s = X_s$ for $s \le t$ and $Z_s = Y_{T-s}$ for $t \le s \le T$, and otherwise go back to step $1$.
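The rejection loop above can be sketched as follows for a discrete chain; the lazy-random-walk kernel, the state space and all parameter values below are illustrative choices, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_path(P, start, T):
    """Unconditioned trajectory X_0, ..., X_T of a chain with kernel P."""
    path = [start]
    for _ in range(T):
        path.append(rng.choice(len(P), p=P[path[-1]]))
    return path

def sample_bridge(P, a, b, T, max_tries=10_000):
    """Approximate sample conditioned on X_0 = a and X_T = b, assuming the
    kernel P is reversible.  Rejection loop: simulate from both endpoints,
    reverse the second path in time, and glue at a meeting point."""
    for _ in range(max_tries):
        x = sample_path(P, a, T)          # forward path from a
        y = sample_path(P, b, T)          # forward path from b
        for t in range(T + 1):
            if x[t] == y[T - t]:          # the paths meet: X_t = Y_{T-t}
                # Z_s = X_s for s <= t and Z_s = Y_{T-s} for s > t.
                return x[:t + 1] + [y[T - s] for s in range(t + 1, T + 1)]
    raise RuntimeError("no crossing found, increase max_tries")

# Illustrative reversible kernel: lazy random walk on {0, ..., 5}.
N = 5
P = np.zeros((N + 1, N + 1))
for k in range(N + 1):
    if k < N:
        P[k, k + 1] = 0.25
    if k > 0:
        P[k, k - 1] = 0.25
    P[k, k] = 1.0 - P[k].sum()

z = sample_bridge(P, a=0, b=5, T=30)
print(z[0], z[-1])  # 0 5
```

The glued path automatically starts at $a$ and ends at $b$; reversibility is what makes the time-reversed second half (approximately) statistically indistinguishable from a forward path.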

Indeed, the resulting path is an **approximation** of a realisation of the conditioned Markov chain: this is not hard to prove, playing around with the definition of time reversibility. It is not hard at all to adapt this idea to reversible diffusions, though the result is again only an approximation. The interesting question is how good this approximation is (see the paper by Michael Sorensen and Mogens Bladt).

For example, here is a sample from a Birth-Death process conditioned on the event $X_0 = a$ and $X_T = b$.
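In the same spirit, here is a hedged sketch of the diffusion version, using an Ornstein-Uhlenbeck process as an illustrative ergodic one-dimensional diffusion and an Euler-Maruyama discretisation; the drift, noise level, endpoints and step counts are arbitrary choices, and the glue point carries a small discretisation error:

```python
import numpy as np

rng = np.random.default_rng(1)

def euler_path(x0, T, n, drift, sigma):
    """Euler-Maruyama discretisation of dX = drift(X) dt + sigma dW."""
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        x[i + 1] = x[i] + drift(x[i]) * dt + sigma * np.sqrt(dt) * rng.normal()
    return x

def diffusion_bridge(a, b, T, n, drift, sigma, max_tries=10_000):
    """Approximate bridge from a to b: simulate from both endpoints, reverse
    the second path, and glue at the first crossing; by continuity and the
    intermediate value property a crossing exists whenever the ranges overlap."""
    for _ in range(max_tries):
        x = euler_path(a, T, n, drift, sigma)
        y = euler_path(b, T, n, drift, sigma)[::-1]       # time-reversed path
        cross = np.nonzero(np.diff(np.sign(x - y)))[0]    # sign changes of x - y
        if cross.size:
            t = cross[0]
            # Glue: small jump at index t, of the order of the step size.
            return np.concatenate([x[:t + 1], y[t + 1:]])
    raise RuntimeError("no crossing found, increase max_tries")

# Ornstein-Uhlenbeck drift: an ergodic, hence reversible, 1d diffusion.
bridge = diffusion_bridge(a=-1.0, b=1.0, T=5.0, n=500,
                          drift=lambda z: -z, sigma=1.0)
print(bridge[0], bridge[-1])  # -1.0 1.0
```

On the discretised grid the two paths rarely coincide exactly, so the sketch glues at the first sign change of $x - y$ instead; the resulting mismatch vanishes as the step size goes to zero.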

**4. Final remark**

It might be interesting to notice that this method is especially inefficient for multidimensional processes: the probability of finding an instant $t$ such that $X_t = Y_{T-t}$ is extremely small, and in many cases equal to $0$ for diffusions! The method works pretty well for one-dimensional diffusions thanks to the continuity of the paths and the intermediate value property. Nevertheless, even for one-dimensional diffusions this method does not work well at all when trying to sample conditioned paths between two meta-stable positions: it is precisely this situation that is interesting in many physical settings, for example when one wants to study the evolution of a particle in a double well potential. In short, sampling conditioned (multidimensional) diffusions is still a very difficult problem.

## TheBridge said,

February 5, 2014 at 7:55 am

Hi, a look here might raise your interest :

Best regards

TheBridge
