Here is a simulation of a simple one-dimensional Metropolis (MCMC) algorithm, including an adaptive option. (If you have trouble running the applet, see these notes.)

[Oops, your browser will not display java applets. Instead, you may wish to try the jar file (which can be run using the JRE), or the related JavaScript version.]

See the chain run! An explanation is below.

The applet accepts the following keyboard inputs. (You may need to "click" on the applet first.)

• Use the numbers '0' through '9' to set the animation speed level higher or lower.
• Use 'r' to restart the simulation, or 'z' to just zero the empirical count, or 's' to toggle whether or not to show the (black) empirical distribution.
• Use 'g' to cycle the target distribution among certain preset targets, randomly-generated targets, and a special "counter-example" target.
• Use '+' and '-' to increase/decrease the number of states (and restart the simulation).
• Use 'n' to never adapt (default), or 'y' to always adapt, or 'd' to adapt with probability 1/iteration, or 'o' to fix gamma=1, or 't' to fix gamma=2, or 'F' to fix gamma=50.
• Use 'p' and 'm' to increase/decrease the current value of gamma.
• Use '>' and '<' to increase/decrease the target probability of state 2 (and restart the simulation) for the counter-example target.
• At fast animation speed levels, you can press any other key (e.g. 'space') at any time to get an instantaneous snapshot of the iteration in progress.

#### Explanation:

This algorithm runs a random-walk Metropolis (RWM) algorithm, for the target probability distribution graphed with blue bars. The algorithm's current state is indicated by the black disk.

Proposal: The proposal distribution is uniform on the white disks, from x-gamma to x+gamma (but excluding x itself). The yellow disk then shows the actual proposal state.

Accept/Reject: The yellow disk turns green if the proposal is accepted, or red if it is rejected. The (black) current state is updated accordingly.
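The proposal and accept/reject steps above can be sketched in Python (a hedged illustration, not the applet's own code; the function and variable names are my own):

```python
import random

def metropolis_step(x, gamma, target, rng):
    """One random-walk Metropolis step on the states 1..len(target).

    The proposal is uniform on {x-gamma, ..., x+gamma} excluding x itself;
    a proposal outside the state space has target probability 0 and so is
    always rejected.  Returns (new_state, proposal, accepted).
    """
    K = len(target)
    # Propose y uniformly from the 2*gamma candidates around x, excluding x.
    candidates = [x + d for d in range(-gamma, gamma + 1) if d != 0]
    y = rng.choice(candidates)
    # Metropolis rule: accept with probability min(1, pi(y)/pi(x)).
    pi_y = target[y - 1] if 1 <= y <= K else 0.0
    accepted = rng.random() < pi_y / target[x - 1]
    return (y if accepted else x), y, accepted
```

For a uniform target, any in-range proposal has acceptance ratio 1 and is always accepted; the proposal is always within distance gamma of the current state.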

Empirical distribution: The empirically estimated distribution is graphed with black bars. If the simulation correctly preserves stationarity of the target distribution, then the black bars should converge to the heights of the blue bars.

Comparison of means: The small vertical blue line at the top shows the target mean, while the small vertical black line shows the current empirical mean. If the simulation correctly preserved stationarity, then the two lines should converge.
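Putting the pieces together, a minimal fixed-gamma (non-adaptive) run lets one check both convergence claims numerically. This is an illustrative Python sketch with a made-up five-state target, not the applet's source:

```python
import random

def run_chain(target, gamma, n_iters, seed=0):
    """Run a fixed-gamma random-walk Metropolis chain on states 1..K;
    return the empirical distribution and the empirical mean."""
    rng = random.Random(seed)
    K = len(target)
    x = 1
    counts = [0] * K
    for _ in range(n_iters):
        # Propose uniformly from {x-gamma, ..., x+gamma} excluding x.
        y = rng.choice([x + d for d in range(-gamma, gamma + 1) if d != 0])
        # Off-grid proposals have target probability 0, hence are rejected.
        pi_y = target[y - 1] if 1 <= y <= K else 0.0
        if rng.random() < pi_y / target[x - 1]:
            x = y
        counts[x - 1] += 1
    emp = [c / n_iters for c in counts]
    emp_mean = sum((i + 1) * p for i, p in enumerate(emp))
    return emp, emp_mean

target = [0.10, 0.30, 0.20, 0.25, 0.15]   # arbitrary example target
emp, emp_mean = run_chain(target, gamma=2, n_iters=200_000)
target_mean = sum((i + 1) * p for i, p in enumerate(target))
```

After many iterations the empirical distribution `emp` should match `target` bar-by-bar, and `emp_mean` should be close to `target_mean`, mirroring the black/blue comparison in the applet.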

*If* adaptation is turned on (with 'y'), the algorithm "adapts" by increasing gamma by 1 if the previous proposal was accepted, or decreasing gamma by 1 (to a minimum of 1) if it was rejected.
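This adaptation rule is a one-line update (the function name is illustrative):

```python
def adapt_gamma(gamma, accepted):
    # Naive adaptation: grow gamma by 1 after an acceptance,
    # shrink it by 1 after a rejection, but never below 1.
    return gamma + 1 if accepted else max(1, gamma - 1)
```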

Conclusion: With the adapt option turned on (with 'y'), once the chain reaches state 1 with gamma=1, it tends to get stuck there for a very long time, causing the empirical distribution to significantly overweight state 1. This shows that, counter-intuitively, this adaptive algorithm does not preserve stationarity of the target distribution. However, if we instead use diminishing adaptation probabilities (with 'd'), or no adaptation at all (with 'n'), then convergence is preserved.
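The failure can be reproduced in a few lines of Python. This is a sketch: the target below merely mimics the "counter-example" by giving state 2 a very small probability, so that from state 1 with gamma=1 the only in-range proposal (state 2) is almost always rejected, and gamma stays pinned at 1:

```python
import random

def run_adaptive(target, n_iters, adapt, gamma0=1, seed=1):
    """Random-walk Metropolis on states 1..K with the naive adaptation rule:
    gamma += 1 on acceptance, gamma = max(1, gamma - 1) on rejection."""
    rng = random.Random(seed)
    K = len(target)
    x, gamma = 1, gamma0
    counts = [0] * K
    for _ in range(n_iters):
        y = rng.choice([x + d for d in range(-gamma, gamma + 1) if d != 0])
        pi_y = target[y - 1] if 1 <= y <= K else 0.0
        accepted = rng.random() < pi_y / target[x - 1]
        if accepted:
            x = y
        if adapt:
            gamma = gamma + 1 if accepted else max(1, gamma - 1)
        counts[x - 1] += 1
    return [c / n_iters for c in counts]

# Counter-example-style target: state 2 has very small probability.
target = [0.30, 0.01, 0.23, 0.23, 0.23]
emp_adapt = run_adaptive(target, 200_000, adapt=True)
emp_fixed = run_adaptive(target, 200_000, adapt=False, gamma0=2)
```

With adaptation on, the empirical weight of state 1 ends up far above its target value of 0.30, while the fixed-gamma chain converges to the correct distribution.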

Remark: The example presented here is on a discrete state space, but this is not essential. Indeed, if the above target and proposal distributions are each convolved with a Normal(0, 0.000001) distribution, this produces an example on a continuous state space (with continuous, everywhere-positive densities) which has virtually identical behaviour, and similarly fails to converge.

For further discussion of adaptive MCMC algorithms and related examples, see e.g.:

Applet by Jeffrey S. Rosenthal (contact me).