Here is a simulation of a simple one-dimensional Metropolis (MCMC) algorithm, including an adaptive option. (See also a related JavaScript version.)

[Oops, your browser will not display Java applets. Instead, you may wish to try the jar file (which can be run using the JRE), or the related JavaScript version.]

See the chain run! An explanation is below.

The applet accepts keyboard inputs. (You may need to "click" on the applet first.) In particular, as explained below, 'y' turns adaption on, 'd' selects diminishing adaption probabilities, and 'n' turns adaption off.


Explanation:

This applet runs a random-walk Metropolis (RWM) algorithm for the target probability distribution graphed with blue bars. The algorithm's current state is indicated by the black disk.

Proposal: The proposal distribution is uniform on the white disks, from x-gamma to x+gamma (but excluding x itself). The yellow disk then shows the actual proposal state.

Accept/Reject: The yellow disk turns green if the proposal is accepted, or red if it is rejected. The (black) current state is updated accordingly.
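
To make the proposal and accept/reject mechanics concrete, here is a minimal Java sketch of a single iteration. The class name, method name, and target weights are hypothetical stand-ins, not the applet's actual source; any array of positive weights would do.

    import java.util.Random;

    public class MetropolisSketch {
        // Hypothetical unnormalized target weights over states 0..K-1;
        // in the applet, these are whatever the blue bars display.
        static final double[] TARGET = {1, 2, 4, 8, 4, 2, 1};
        static final Random RNG = new Random();

        // One random-walk Metropolis iteration: propose uniformly on
        // {x-gamma, ..., x+gamma} excluding x, then accept or reject.
        static int step(int x, int gamma) {
            int offset = RNG.nextInt(2 * gamma) + 1;      // uniform on 1..2*gamma
            if (offset > gamma) offset = gamma - offset;  // fold upper half to -1..-gamma
            int y = x + offset;                           // the yellow-disk proposal

            // Proposals outside the state space get weight 0, hence always rejected.
            double py = (y >= 0 && y < TARGET.length) ? TARGET[y] : 0.0;

            // Accept with probability min(1, pi(y)/pi(x)); the proposal is
            // symmetric, so no Hastings correction is needed.
            return (RNG.nextDouble() < py / TARGET[x]) ? y : x;
        }
    }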

Empirical distribution: The empirically estimated distribution is graphed with black bars. If the simulation correctly preserves stationarity of the target distribution, then the black bars should converge to the heights of the blue bars.

Comparison of means: The small vertical blue line at the top shows the target mean, while the small vertical black line shows the current empirical mean. If the simulation correctly preserves stationarity, then the black line should converge to the blue line.
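
The bookkeeping behind the black bars and the black mean line is just visit counting and averaging. A sketch of a main method for the hypothetical MetropolisSketch class above (the starting state, gamma, and run length are arbitrary choices, not the applet's):

    public static void main(String[] args) {
        long[] visits = new long[TARGET.length];  // heights of the black bars
        double sum = 0;
        int x = 3, gamma = 2;                     // arbitrary initial values
        long n = 1_000_000;
        for (long i = 0; i < n; i++) {
            x = step(x, gamma);
            visits[x]++;
            sum += x;
        }
        // Normalized visit counts estimate the target probabilities,
        // and the empirical mean should approach the target mean.
        for (int s = 0; s < TARGET.length; s++)
            System.out.printf("state %d: %.4f%n", s, visits[s] / (double) n);
        System.out.printf("empirical mean: %.4f%n", sum / n);
    }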


Adaption:

If adaption is turned on (with 'y'), the algorithm "adapts" by increasing gamma by 1 if the previous proposal was accepted, or decreasing gamma by 1 (to a minimum of 1) if it was rejected.
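
In code, the adaption rule is one line. Since a proposal never equals the current state, acceptance can be detected by comparing the state before and after a step. A sketch, again extending the hypothetical class above:

    // Grow gamma after an acceptance; shrink it (never below 1) after a rejection.
    static int adapt(int gamma, boolean accepted) {
        return accepted ? gamma + 1 : Math.max(1, gamma - 1);
    }

    // Inside the main loop:
    //     int xNew = step(x, gamma);
    //     gamma = adapt(gamma, xNew != x);  // xNew != x exactly when accepted
    //     x = xNew;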

Conclusion: With the adapt option turned on (with 'y'), once the chain reaches state 1 with gamma=1, it tends to get stuck there for a very long time, causing the empirical distribution to significantly overweight state 1. This shows that, counter-intuitively, this adaptive algorithm does not preserve stationarity of the target distribution. However, if we instead select diminishing adaption probabilities (with 'd'), or no adaption at all (with 'n'), then convergence is preserved.
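
Diminishing adaption can be sketched by applying the adaptive update only with a probability that tends to zero as the iteration count n grows. The schedule 1/n below is an illustrative assumption, not necessarily the applet's actual schedule:

    // Inside the main loop, at iteration n (n = 1, 2, ...): adapt only
    // with probability 1/n, so the adaption diminishes over time.
    if (RNG.nextDouble() < 1.0 / n) {
        gamma = adapt(gamma, xNew != x);
    }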

Remark: The example presented here is on a discrete state space, but this is not essential. Indeed, if the above target and proposal distributions are each convolved with a Normal(0, 0.000001) distribution, this produces an example on a continuous state space (with continuous, everywhere-positive densities) which has virtually identical behaviour, and similarly fails to converge.
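
As an illustration, here is a sketch of that convolved target density, evaluated as a mixture of Gaussian bumps centered at the discrete states. It reads 0.000001 as the variance, and drops normalizing constants since they cancel in the Metropolis ratio:

    static final double SIGMA = Math.sqrt(0.000001);

    // Continuous, everywhere-positive density obtained by convolving the
    // discrete target with a Normal(0, SIGMA^2) kernel.
    static double continuousTarget(double x) {
        double f = 0;
        for (int i = 0; i < TARGET.length; i++) {
            double z = (x - i) / SIGMA;
            f += TARGET[i] * Math.exp(-0.5 * z * z);  // Gaussian bump at state i
        }
        return f;
    }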

For further discussion of adaptive MCMC algorithms and related examples, see e.g.:

G.O. Roberts and J.S. Rosenthal, Coupling and ergodicity of adaptive Markov chain Monte Carlo algorithms. Journal of Applied Probability 44 (2007), 458-475.

G.O. Roberts and J.S. Rosenthal, Examples of adaptive MCMC. Journal of Computational and Graphical Statistics 18 (2009), 349-367.

Applet by Jeffrey S. Rosenthal (contact me).
