I read today about Doob h-transforms in Rogers–Williams … It is done quite quickly in the book, so I decided to practice on some simple examples to see how this works.
So we have a Markov process $X$ living in a state space $S$, and we want to see what this process looks like if we condition on the event $\{X_T \in A\}$, where $A$ is a subset of the state space. To fix the notations we define the transition kernels $p_{s,t}(x,\mathrm{d}y) = \mathbb{P}(X_t \in \mathrm{d}y \mid X_s = x)$ and $h(t,x) = \mathbb{P}(X_T \in A \mid X_t = x)$. The conditioned semigroup $\hat{p}$ is quite easily computed from $p$ and $h$. Indeed,

$$\hat{p}_{s,t}(x,\mathrm{d}y) = \mathbb{P}(X_t \in \mathrm{d}y \mid X_s = x,\, X_T \in A) = \frac{\mathbb{P}(X_t \in \mathrm{d}y,\, X_T \in A \mid X_s = x)}{\mathbb{P}(X_T \in A \mid X_s = x)},$$

and by the Markov property this also equals

$$\hat{p}_{s,t}(x,\mathrm{d}y) = p_{s,t}(x,\mathrm{d}y)\,\frac{h(t,y)}{h(s,x)}.$$
Notice also that $\hat{p}$ is indeed a Markov kernel in the sense that $\int_S \hat{p}_{s,t}(x,\mathrm{d}y) = 1$: the only property needed for that is

$$h(s,x) = \int_S p_{s,t}(x,\mathrm{d}y)\, h(t,y),$$

i.e. $h$ is space-time harmonic for the process.
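As a sanity check, the normalization works out in one line (a sketch, writing $\hat{p}_{s,t}(x,\mathrm{d}y) = p_{s,t}(x,\mathrm{d}y)\,h(t,y)/h(s,x)$ for the conditioned kernel):

```latex
\int_S \hat{p}_{s,t}(x,\mathrm{d}y)
  = \frac{1}{h(s,x)} \int_S p_{s,t}(x,\mathrm{d}y)\, h(t,y)
  = \frac{h(s,x)}{h(s,x)}
  = 1.
```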
In fact, we could take any positive function $h$ that satisfies this equality, define a new Markovian kernel $\hat{p}_{s,t}(x,\mathrm{d}y) = p_{s,t}(x,\mathrm{d}y)\,h(t,y)/h(s,x)$, and study the associated Markov process. That's what people usually do, by the way.
Remark 1. We almost never know the quantity $h(t,x) = \mathbb{P}(X_T \in A \mid X_t = x)$ explicitly, except in some extremely simple cases!
Before trying these ideas on some simple examples, let us see what this says about the generator of the conditioned process:
- continuous-time Markov chains, finite state space: let us suppose that the intensity matrix is $Q = (q_{ij})$ and that we want to know the dynamics on $[0,T]$ of this Markov chain conditioned on the event $\{X_T \in A\}$. Indeed,

$$\mathbb{P}(X_{t+\delta} = j \mid X_t = i,\, X_T \in A) = p_{t,t+\delta}(i,j)\,\frac{h(t+\delta,j)}{h(t,i)} = \big(q_{ij}\,\delta + o(\delta)\big)\,\frac{h(t+\delta,j)}{h(t,i)},$$

so that in the limit $\delta \to 0$ we see that at time $t$, the intensity of the jump from $i$ to $j \neq i$ of the conditioned Markov chain is

$$\hat{q}_{ij}(t) = q_{ij}\,\frac{h(t,j)}{h(t,i)}.$$
Notice how this behaves as $t \to T$: if at a time $t$ close to $T$ the Markov chain is in a state $i \notin A$, the intensity of a jump from $i$ to a state $j \in A$ is equivalent to $q_{ij}\,h(t,j)/h(t,i)$, which blows up since $h(t,i) \to 0$: the conditioning forces the chain into $A$.
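To see these conditioned intensities concretely, here is a minimal numerical sketch (the three-state matrix `Q`, the horizon `T`, and the target set `A` are made up for illustration): it computes $h(t,i) = \mathbb{P}(X_T \in A \mid X_t = i)$ from the matrix exponential and then forms the ratios $q_{ij}\,h(t,j)/h(t,i)$.

```python
import numpy as np

def expm(M, terms=60):
    """Matrix exponential via its power series (fine for small matrices)."""
    out = np.eye(len(M))
    term = np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# Toy 3-state chain with intensity matrix Q (rows sum to zero).
Q = np.array([[-1.0, 1.0, 0.0],
              [0.5, -1.0, 0.5],
              [0.0, 1.0, -1.0]])
T, A = 1.0, [2]          # condition on the event {X_T in A}

def h(t):
    """h(t, i) = P(X_T in A | X_t = i)."""
    return expm(Q * (T - t))[:, A].sum(axis=1)

def q_hat(t):
    """Conditioned intensities q_ij * h(t, j) / h(t, i) off the diagonal."""
    ht = h(t)
    qh = Q * ht[None, :] / ht[:, None]
    np.fill_diagonal(qh, 0.0)
    np.fill_diagonal(qh, -qh.sum(axis=1))
    return qh

qh_half = q_hat(0.5)
```

The conditioned matrix is still a valid intensity matrix: its off-diagonal entries are non-negative and its rows sum to zero.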
- diffusion processes: this time, consider a $d$-dimensional diffusion on $[0,T]$ conditioned on the event $\{X_T \in A\}$, and define as before $h(t,x) = \mathbb{P}(X_T \in A \mid X_t = x)$. The generator $\hat{\mathcal{L}}_t$ of the (non-homogeneous) conditioned diffusion is defined at time $t$ by

$$\hat{\mathcal{L}}_t f(x) = \lim_{\delta \to 0} \frac{\mathbb{E}\big[f(X_{t+\delta}) \mid X_t = x,\, X_T \in A\big] - f(x)}{\delta},$$
so that if $\mathcal{L}$ is the generator of the original diffusion we get

$$\hat{\mathcal{L}}_t f = \frac{\mathcal{L}(h f) - f\,\mathcal{L} h}{h}.$$
Because $\mathcal{L} f = \sum_i b_i\,\partial_i f + \frac{1}{2}\sum_{i,j} a_{ij}\,\partial_{ij} f$ with $a = \sigma\sigma^{t}$, this also reads

$$\hat{\mathcal{L}}_t f = \mathcal{L} f + \big(a\,\nabla_x \ln h\big) \cdot \nabla_x f.$$
This means that the conditioned diffusion follows the SDE:

$$\mathrm{d}X_t = \Big(b(X_t) + a(X_t)\,\nabla_x \ln h(t,X_t)\Big)\,\mathrm{d}t + \sigma(X_t)\,\mathrm{d}W_t.$$
The volatility function $\sigma$ remains the same, while an additional drift term $a\,\nabla_x \ln h$ shows up.
We will try these ideas on some examples where the transition probability densities are extremely simple. Notice that in the case of diffusions, if we take $A = \{z\}$ to be a single point, the function $h$ is identically equal to $0$ (except in degenerate cases): to condition on the event $\{X_T = z\}$ we need instead to take $h(t,x)$ to be the transition probability density $p_{T-t}(x,z)$. This follows from the approximation $\mathbb{P}(X_T \in \mathrm{d}z \mid X_t = x) \approx p_{T-t}(x,z)\,\mathrm{d}z$. Let's do it:
- Brownian Bridge on $[0,1]$: in this case, conditioning a Brownian motion on $\{X_1 = 0\}$, we have $h(t,x) = p_{1-t}(x,0) \propto \exp\!\big(-\tfrac{x^2}{2(1-t)}\big)$, so that the additional drift reads $\partial_x \ln h(t,x) = -\tfrac{x}{1-t}$: a Brownian bridge follows the SDE

$$\mathrm{d}X_t = -\frac{X_t}{1-t}\,\mathrm{d}t + \mathrm{d}W_t.$$
This might not be the best way to simulate a Brownian bridge though!
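Still, since the SDE is explicit, a naive Euler scheme is easy to write down (a sketch for the bridge pinned at $0$ at time $1$; because the drift blows up near $t = 1$, the last step is pinned by hand):

```python
import numpy as np

rng = np.random.default_rng(0)

def brownian_bridge(n_steps=1000, T=1.0):
    """Euler scheme for dX_t = -X_t/(1-t) dt + dW_t, pinned at X_1 = 0."""
    dt = T / n_steps
    x = np.zeros(n_steps + 1)
    for k in range(n_steps - 1):   # stop one step early: the drift blows up at t = T
        t = k * dt
        x[k + 1] = x[k] - x[k] / (T - t) * dt + np.sqrt(dt) * rng.normal()
    x[-1] = 0.0                    # the bridge ends at 0 by construction
    return x

path = brownian_bridge()
```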
- Poisson Bridge on $[0,1]$: we condition a Poisson process of rate $\lambda$ on the event $\{X_1 = N\}$. The intensity matrix is simply $q_{k,k+1} = \lambda$ and $q_{k,j} = 0$ everywhere else (off the diagonal), while the transition probabilities are given by $p_t(k, k+r) = e^{-\lambda t}\tfrac{(\lambda t)^r}{r!}$. This is why at time $t$, the intensity of a jump from $k$ to $k+1$ is given by

$$\hat{q}_{k,k+1}(t) = \lambda\,\frac{p_{1-t}(k+1,N)}{p_{1-t}(k,N)} = \lambda\,\frac{(\lambda(1-t))^{N-k-1}/(N-k-1)!}{(\lambda(1-t))^{N-k}/(N-k)!} = \frac{N-k}{1-t}.$$
Again, that might not be the most efficient way to simulate a Poisson bridge! Notice how the intensity $\lambda$ has disappeared …
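For what it is worth, a crude discretization of this conditioned intensity is straightforward (a sketch; the values of `N` and `n_steps` are arbitrary, and note that no rate $\lambda$ appears anywhere):

```python
import numpy as np

rng = np.random.default_rng(1)

def poisson_bridge_path(N=10, n_steps=10_000, T=1.0):
    """Jump with probability ~ (N - x) / (T - t) * dt: the conditioned intensity."""
    dt = T / n_steps
    path = [0]
    for k in range(n_steps):
        t = k * dt
        x = path[-1]
        jump = x < N and rng.random() < (N - x) / (T - t) * dt
        path.append(x + 1 if jump else x)
    return path

path = poisson_bridge_path()
```

With this many steps the path ends at $N$ with overwhelming probability, since the intensity diverges as $t \to 1$.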
- Ornstein-Uhlenbeck Bridge: let's consider the usual OU process given by the dynamics $\mathrm{d}X_t = -X_t\,\mathrm{d}t + \sqrt{2}\,\mathrm{d}W_t$: the invariant probability is the usual centred Gaussian distribution $\mathcal{N}(0,1)$. Say that we want to know how such an OU process behaves if we condition on the event $\{X_T = z\}$. Because the transition density is Gaussian,

$$p_{T-t}(x,z) \propto \exp\left(-\frac{\big(z - x\,e^{-(T-t)}\big)^2}{2\big(1 - e^{-2(T-t)}\big)}\right),$$

we find that the conditioned O-U process follows the SDE

$$\mathrm{d}X_t = \left(-X_t + \frac{2\,e^{-(T-t)}\big(z - X_t\,e^{-(T-t)}\big)}{1 - e^{-2(T-t)}}\right)\mathrm{d}t + \sqrt{2}\,\mathrm{d}W_t.$$
If we Taylor expand the additional drift near the terminal time $T$, it can be seen that this term is equivalent to $\frac{z - X_t}{T-t}$, where $z$ is the conditioning endpoint: it behaves exactly as in the case of the Brownian bridge. Below is a plot of an O-U process conditioned on the event $\{X_T = z\}$, starting from $X_0 = x_0$.
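A plot like this can be reproduced with the same naive Euler approach (a sketch, assuming the normalization $\mathrm{d}X_t = -X_t\,\mathrm{d}t + \sqrt{2}\,\mathrm{d}W_t$ used above; the endpoint `z`, the starting point `x0`, and the horizon `T` are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)

def ou_bridge(z=1.0, x0=-2.0, T=5.0, n_steps=5000):
    """Euler scheme for dX = -X dt + sqrt(2) dW conditioned on X_T = z."""
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps - 1):   # stop one step early: the drift blows up at t = T
        t = k * dt
        e = np.exp(-(T - t))
        drift = -x[k] + 2.0 * e * (z - x[k] * e) / (1.0 - e * e)
        x[k + 1] = x[k] + drift * dt + np.sqrt(2.0 * dt) * rng.normal()
    x[-1] = z                      # pinned endpoint
    return x

path = ou_bridge()
```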
TheBridge said,
August 6, 2010 at 8:45 am
Hi Alekk,
A brilliant article indeed, thanks Alekk,
About the diffusion case, I think it would be interesting to connect things with the Girsanov theorem (to check which kind of change of measure corresponds to the h-transform).
Another point is that you can check that for a Brownian bridge with the same start and end point on the time interval [0,1], we have some kind of symmetry for the law of the paths around the point t=1/2; this is in some way a kind of reflection principle (in the time coordinate). And I was wondering how general this fact is (which might be used for simulation purposes).
Best Regards
Nick said,
September 25, 2010 at 7:18 am
Nice and intuitive notes. Thank you
sampling conditioned Markov chain, and diffusions « Journey into Randomness said,
November 7, 2010 at 3:45 pm
[…] Markov chain, which are usually not available: this has been discussed in this previous post on Doob h-transforms. Perhaps surprisingly, this article by Michael Sorensen and Mogens Bladt shows […]