A simple model of interacting particles
The mean field Potts model is extremely simple: there are $N$ interacting particles, and each of them can be in one of $q$ different states $\{1, 2, \ldots, q\}$. Define the Hamiltonian
$$H(\sigma) = -\frac{1}{N} \sum_{i<j} \delta(\sigma_i, \sigma_j)$$
where $\sigma = (\sigma_1, \ldots, \sigma_N) \in \{1, \ldots, q\}^N$ and $\delta$ is the Kronecker symbol. The normalization $\frac{1}{N}$ ensures that the energy is an extensive quantity, so that the mean energy per particle does not degenerate to $0$ or $\pm\infty$ for large values of $N$. The minus sign is here to favour configurations that have a lot of particles in the same state. The Boltzmann distribution at inverse temperature $\beta$ on $\{1, \ldots, q\}^N$ is given by
$$\pi_\beta(\sigma) = \frac{1}{Z_\beta} \, e^{-\beta H(\sigma)}$$
where $Z_\beta$ is a normalization constant. Notice that if we choose a configuration uniformly at random in $\{1, \ldots, q\}^N$, with overwhelming probability the ratio of particles in state $k$ will be close to $\frac{1}{q}$. Also, if we define the ratio vector
$$R(\sigma) = \big(R_1(\sigma), \ldots, R_q(\sigma)\big) \quad \text{with} \quad R_k(\sigma) = \frac{1}{N} \, \#\{i : \sigma_i = k\},$$
then $R(\sigma)$ will be close to $\big(\frac{1}{q}, \ldots, \frac{1}{q}\big)$ for a configuration taken uniformly at random. Stirling's formula even says that the probability that $R(\sigma)$ is close to $r = (r_1, \ldots, r_q)$ is close to $e^{-N I(r)}$ where
$$I(r) = \ln q + \sum_{k=1}^q r_k \ln r_k.$$
Indeed, there are $\frac{N!}{(Nr_1)! \cdots (Nr_q)!} \approx e^{-N \sum_k r_k \ln r_k}$ configurations with ratio vector $r$, each of probability $q^{-N}$. The situation is quite different under the Boltzmann distribution since it favours the configurations that have a lot of particles in the same state: this is because the Hamiltonian is minimized by configurations that have all the particles in the same state. In short, there is a competition between the entropy (there are a lot of configurations with ratio close to $\big(\frac{1}{q}, \ldots, \frac{1}{q}\big)$) and the energy, which favours the configurations where all the particles are in the same state.
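Assuming the rate function is $I(r) = \ln q + \sum_k r_k \ln r_k$ (the standard large-deviation rate function for the uniform measure), this is easy to check numerically: the exact probability of observing ratio $r$ under the uniform distribution is the multinomial count $\frac{N!}{(Nr_1)! \cdots (Nr_q)!}$ times $q^{-N}$, and $-\frac{1}{N}$ times its logarithm should approach $I(r)$ as $N$ grows. A quick sketch (the function names are mine):

```python
import math

def rate_function(r):
    """I(r) = ln q + sum_k r_k ln r_k, with the convention 0 ln 0 = 0."""
    q = len(r)
    return math.log(q) + sum(x * math.log(x) for x in r if x > 0)

def log_prob_ratio(counts):
    """Exact log-probability that a uniform configuration of N = sum(counts)
    particles has exactly counts[k] particles in state k."""
    n = sum(counts)
    q = len(counts)
    log_multinomial = math.lgamma(n + 1) - sum(math.lgamma(c + 1) for c in counts)
    return log_multinomial - n * math.log(q)

# As N grows, -log P / N converges to I(r), here for r = (1/2, 1/4, 1/4).
for n in (40, 400, 4000):
    counts = (n // 2, n // 4, n // 4)
    print(n, -log_prob_ratio(counts) / n, rate_function((0.5, 0.25, 0.25)))
```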
With a little more work, one can show that there is a critical inverse temperature $\beta_c$ such that:
- for $\beta < \beta_c$ the entropy wins the battle: the most probable configurations have ratio close to $\big(\frac{1}{q}, \ldots, \frac{1}{q}\big)$
- for $\beta > \beta_c$ the energy effect shows up: the most probable configurations are the permutations of the ratio vector $(a_\beta, b_\beta, \ldots, b_\beta)$, where $a_\beta$ and $b_\beta$ are computable quantities.
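These quantities can be computed numerically. Since the number of same-state pairs is $\sum_k \binom{N R_k}{2} \approx \frac{N}{2} \sum_k R_k^2$, the Hamiltonian is approximately $-\frac{N}{2} \sum_k R_k^2$, and the most probable ratio under the Boltzmann distribution minimizes the free energy $\phi_\beta(r) = \sum_k r_k \ln r_k - \frac{\beta}{2} \sum_k r_k^2$. Restricting to vectors of the form $(a, b, \ldots, b)$ with $b = \frac{1-a}{q-1}$ makes this a one-dimensional problem; here is a brute-force sketch for $q = 3$ (the grid search and the name `most_probable_a` are mine, not the original computation):

```python
import numpy as np

def most_probable_a(beta, q=3, grid=20001):
    """Grid search for the minimizer of phi_beta over ratio vectors
    r = (a, b, ..., b) with b = (1 - a) / (q - 1), a in [1/q, 1)."""
    a = np.linspace(1.0 / q, 0.999, grid)
    b = (1.0 - a) / (q - 1)
    phi = a * np.log(a) + (q - 1) * b * np.log(b) \
        - 0.5 * beta * (a**2 + (q - 1) * b**2)
    return a[np.argmin(phi)]

print(most_probable_a(1.0))  # low beta: a = 1/q, symmetric phase
print(most_probable_a(4.0))  # high beta: a > 1/q, ordered phase
```

For $q = 3$ the critical value of this functional is $\beta_c = 4 \ln 2 \approx 2.77$: below it the search returns $a_\beta = \frac{1}{3}$, above it a strictly larger value.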
The point is that above $\beta_c$ the system has more than one stable equilibrium point. Maybe more importantly, if we compute the energy per particle $e(\beta)$ of these most probable states, then the function $\beta \mapsto e(\beta)$ has a discontinuity at $\beta_c$. I will try to show in the weeks to come how this behaviour can dramatically slow down the usual Monte Carlo approaches to the study of these kinds of models.
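The jump can be exhibited numerically. Under the same mean field approximation $H(\sigma) \approx -\frac{N}{2} \sum_k R_k^2$, the energy per particle at ratio $r$ is $-\frac{1}{2} \sum_k r_k^2$, evaluated at the ratio minimizing $\phi_\beta(r) = \sum_k r_k \ln r_k - \frac{\beta}{2} \sum_k r_k^2$. Scanning $\beta$ across the critical value with a grid search (a sketch of mine, not the author's computation) shows the discontinuity for $q = 3$:

```python
import numpy as np

def energy_per_particle(beta, q=3, grid=20001):
    """Energy per particle -0.5 * sum(r_k^2) at the most probable ratio
    r = (a, b, ..., b), found by minimizing phi_beta on a grid."""
    a = np.linspace(1.0 / q, 0.999, grid)
    b = (1.0 - a) / (q - 1)
    phi = a * np.log(a) + (q - 1) * b * np.log(b) \
        - 0.5 * beta * (a**2 + (q - 1) * b**2)
    a_star = a[np.argmin(phi)]
    b_star = (1.0 - a_star) / (q - 1)
    return -0.5 * (a_star**2 + (q - 1) * b_star**2)

# For q = 3 the critical value of this functional is beta_c = 4 ln 2 ~ 2.773:
# the energy per particle jumps from -1/6 just below it to about -1/4 above.
for beta in (2.7, 2.75, 2.8, 2.85):
    print(beta, energy_per_particle(beta))
```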
Hugo Touchette has a very nice review of statistical physics that I like a lot and a good survey of the Potts model. Also, T. Tao has a very nice exposition of related models. The blog of Georg von Hippel is dedicated to similar models on lattices, which are far more complex than the mean field approximation presented here.
It is extremely easy to simulate this mean field Potts model since we only need to keep track of the ratio vector $R(\sigma)$ to have an accurate picture of the system. For example, a typical Markov chain Monte Carlo approach would run as follows:
- choose a particle $i$ uniformly at random in $\{1, \ldots, N\}$
- try to switch its value $\sigma_i$ to a state chosen uniformly in $\{1, \ldots, q\}$
- compute the Metropolis ratio $\min\big(1, e^{-\beta (H(\sigma') - H(\sigma))}\big)$
- update accordingly.
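The steps above can be sketched as follows, keeping track only of the counts $(n_1, \ldots, n_q)$. Moving one particle from state $s$ to state $t$ changes the Hamiltonian by $\Delta H = -\frac{n_t - n_s + 1}{N}$, so each step costs $O(1)$. This is a minimal sketch (variable names and structure are mine, not the original implementation):

```python
import math
import random

def hamiltonian(counts, n):
    """H as a function of the counts: -(1/n) * sum_k C(n_k, 2)."""
    return -sum(c * (c - 1) // 2 for c in counts) / n

def mcmc_potts(n_particles, q, beta, n_steps, seed=0):
    """Metropolis chain for the mean field Potts model, tracking counts."""
    rng = random.Random(seed)
    # start with all particles in the same state, as in the simulation below
    sigma = [0] * n_particles
    counts = [0] * q
    counts[0] = n_particles
    for _ in range(n_steps):
        i = rng.randrange(n_particles)     # pick a particle uniformly
        s, t = sigma[i], rng.randrange(q)  # propose a uniform new state
        if s == t:
            continue
        # energy change when moving one particle from state s to state t
        delta_h = -(counts[t] - counts[s] + 1) / n_particles
        # accept with Metropolis probability min(1, exp(-beta * delta_h))
        if delta_h <= 0 or rng.random() < math.exp(-beta * delta_h):
            sigma[i] = t
            counts[s] -= 1
            counts[t] += 1
    return [c / n_particles for c in counts]  # the ratio vector R(sigma)
```

Recording the ratio vector every few steps along the way gives the trajectory that is plotted below.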
If we do that a large number of times for $q = 3$ states at an inverse temperature $\beta > \beta_c$ and for a large number $N$ of particles (which is fine since we only need to keep track of the $q$-dimensional ratio vector) and plot the result in barycentric coordinates, we get a picture that looks like this:

[figure: trajectory of the ratio vector in barycentric coordinates]
Here I started with a configuration where all the particles were in the same state, i.e. a ratio vector equal to $(1, 0, 0)$. We can see that even after many steps, the algorithm struggles to go from one most probable position to the other two and, in this simulation, one of the most probable states has not even been visited! Indeed, this approach was extremely naive, and it is quite interesting to try to come up with better algorithms. By the way, Christian Robert's blog has tons of interesting material related to MCMC and how to improve on the naive approach presented here.