Here is a simulation of a simple, one-dimensional adaptive MCMC algorithm: (Updated: see the new scalable version.)

[Oops, your browser will not display Java applets. Instead try this JavaScript version and press 'ex' twice.]

See the chain run! An explanation is below.

The applet accepts the following keyboard inputs. (You may need to "click" on the applet first.)

• Use the number keys '0' through '9' to set the animation speed level.
• Use 'r' to restart the simulation, or 'z' to just zero the empirical count, or 's' to toggle whether or not to show the (black) empirical distribution.
• Use '>' and '<' to increase/decrease the target probability of state 2 (and restart the simulation).
• Use '+' and '-' to increase/decrease the number of states (and restart the simulation).
• Use 'y' to always adapt (default), or 'n' to never adapt, or 'd' to adapt with probability 1/iteration, or 'o' to fix gamma=1, or 't' to fix gamma=2, or 'F' to fix gamma=50.
• Use 'p' and 'm' to increase/decrease the current value of gamma.
• Use 'g' to toggle between the default "counter-example" target distribution, and a randomly-generated more "general-looking" target distribution.
• At fast animation speed levels, you can press any other key (e.g. 'space') at any time to get an instantaneous snapshot of the iteration in progress.

#### Explanation:

This applet runs an adaptive Metropolis algorithm for the target probability distribution (graphed with blue bars). The chain's current state is indicated by the black disk.

Proposal: The proposal distribution is uniform on the white disks, i.e. on the integers from x-gamma to x+gamma (but excluding x itself). The yellow disk then shows the actual proposal state.
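In code, one way to sample this proposal (a minimal Python sketch, not the applet's own source) is:

```python
import random

def propose(x, gamma):
    """Uniform draw from {x-gamma, ..., x+gamma} excluding x itself."""
    y = random.randint(x - gamma, x + gamma - 1)  # 2*gamma candidates
    if y >= x:
        y += 1  # shift past x, so x itself is never proposed
    return y
```

Drawing from 2*gamma slots and shifting past x gives each of the 2*gamma allowed neighbours equal probability without a rejection loop.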

Accept/Reject: The yellow disk turns green if the proposal is accepted, or red if it is rejected. The (black) current state is updated accordingly.

Adaptation: The algorithm adapts by increasing gamma by 1 if the previous proposal was accepted, or decreasing gamma by 1 (to a minimum of 1) if it was rejected.
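Putting the proposal, accept/reject step, and adaptation rule together, a single iteration can be sketched as follows (a Python sketch under the assumption that the states are numbered 1..K and proposals outside the state space are always rejected):

```python
import random

def adaptive_step(x, gamma, pi):
    """One iteration of the adaptive Metropolis chain on states 1..K."""
    K = len(pi)
    y = random.randint(x - gamma, x + gamma - 1)
    if y >= x:
        y += 1  # uniform on {x-gamma,...,x+gamma} minus x
    pi_y = pi[y - 1] if 1 <= y <= K else 0.0  # off-space proposals have mass 0
    if random.random() < pi_y / pi[x - 1]:    # Metropolis acceptance (symmetric proposal)
        x, gamma = y, gamma + 1               # accepted: increase gamma by 1
    else:
        gamma = max(1, gamma - 1)             # rejected: decrease gamma, floor at 1
    return x, gamma
```

Since the proposal is symmetric, the acceptance probability reduces to min(1, pi(y)/pi(x)); the adaptation then feeds the accept/reject outcome back into gamma.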

Empirical distribution: The empirically estimated distribution is graphed with black bars. If the simulation correctly preserved stationarity of the target distribution, then the black and blue bars should converge in height.

Comparison of means: The small vertical blue line at the top shows the target mean, while the small vertical black line shows the current empirical mean. If the simulation correctly preserved stationarity, then the two lines should converge.
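Computing the two means is straightforward (a Python sketch; `counts[k]` is assumed to hold the number of visits to state k+1):

```python
def means(pi, counts):
    """Target mean vs. empirical mean over states 1..K."""
    target = sum((k + 1) * p for k, p in enumerate(pi))
    total = sum(counts)
    empirical = sum((k + 1) * c for k, c in enumerate(counts)) / total
    return target, empirical
```

If stationarity is preserved, the empirical mean converges to the target mean by the law of large numbers; if not, the gap persists no matter how long the chain runs.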

Conclusion: In fact, once the (original, always-adapting) chain reaches state 1 with gamma=1, it gets stuck there for a very long time, causing the empirical distribution to significantly overweight state 1. This shows that, counter-intuitively, this adaptive algorithm does not preserve stationarity of the target distribution. However, if we instead select diminishing adaptation probabilities (with 'd'), or no adaptation (with 'n'), then stationarity is preserved.
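This effect can be checked numerically. The sketch below (plain Python; the target weights are made up for illustration, with state 2 given small probability in the spirit of the counter-example) runs the chain under always-adapt and under diminishing 1/n adaptation, and compares the empirical frequency of state 1 against its target probability:

```python
import random

def run(n_iters, adapt_prob, pi, seed=0):
    """Simulate the adaptive chain; adapt_prob(n) is the chance of adapting at iteration n."""
    rng = random.Random(seed)
    K = len(pi)                      # states are 1..K
    x, gamma = 1, 1
    counts = [0] * K
    for n in range(1, n_iters + 1):
        y = rng.randint(x - gamma, x + gamma - 1)
        if y >= x:
            y += 1                   # uniform proposal on {x-gamma,...,x+gamma} minus x
        pi_y = pi[y - 1] if 1 <= y <= K else 0.0
        accepted = rng.random() < pi_y / pi[x - 1]
        if accepted:
            x = y
        if rng.random() < adapt_prob(n):
            gamma = gamma + 1 if accepted else max(1, gamma - 1)
        counts[x - 1] += 1
    return [c / n_iters for c in counts]

# Made-up target in the spirit of the counter-example: state 2 has small probability.
pi = [0.30, 0.02, 0.17, 0.17, 0.17, 0.17]
always = run(200_000, lambda n: 1.0, pi)      # 'y': always adapt
dimin = run(200_000, lambda n: 1.0 / n, pi)   # 'd': adapt with probability 1/iteration
# The always-adapting chain should overweight state 1; the 1/n chain stays near its target.
print(always[0], dimin[0])
```

With gamma=1 at state 1, half the proposals fall off the state space and the rest target the low-probability state 2, so the always-adapting chain escapes only rarely while the adaptation keeps gamma pinned at 1; the diminishing-adaptation chain eventually behaves like a fixed (valid) Metropolis chain.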

Final remark: The example presented here is on a discrete state space, but this is not essential. Indeed, if the above target and proposal distributions are each convolved with a Normal(0, 0.000001) distribution, this produces an example on a continuous state space (with continuous, everywhere-positive densities) which has virtually identical behaviour, and similarly fails to converge.


Applet by Jeffrey S. Rosenthal (contact me).