See also the Markov chain applets below, my cartoon series, some recent notes about running applets, and the once-impressive hit counts.
My tennismatch applet lets you interactively play tennis against a computer opponent. Depending on your monitor and pixel size, you may prefer the large size, medium size, or small size. Also available is a (large-sized) tennis practice applet, where you can hit the ball against a wall to improve your control.
My spacetag applet (my favourite, including a cash prize!) is a space game, where you fly around in a green ship and come across planets, space stations, bad guys, etc. It is available in large size, medium size, or small size. See also the instructions.
My manymoons applet simulates a (randomly initialized) collection of N moons, circling each other under the influence of gravity. (Here is the corresponding source code.) Also available are a (periodic) two-body version and a (rather unstable) three-body version.
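The underlying N-body dynamics can be sketched in a few lines of Python (a rough re-creation of the idea, not the applet's actual Java source; the constants, softening term, and 2-D setup are illustrative choices):

```python
import random

G = 1.0      # gravitational constant (arbitrary units)
DT = 0.001   # time step
N = 3        # number of "moons"

random.seed(0)
# random initial positions and velocities in the plane
pos = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(N)]
vel = [[random.uniform(-0.1, 0.1), random.uniform(-0.1, 0.1)] for _ in range(N)]
mass = [1.0] * N

def accelerations(pos):
    """Pairwise gravitational accelerations, with a small softening term
    so that close encounters do not blow up numerically."""
    acc = [[0.0, 0.0] for _ in range(N)]
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r2 = dx * dx + dy * dy + 1e-3
            f = G * mass[j] / (r2 * r2 ** 0.5)
            acc[i][0] += f * dx
            acc[i][1] += f * dy
    return acc

# velocity-Verlet (leapfrog) integration: half kick, drift, half kick
for step in range(1000):
    acc = accelerations(pos)
    for i in range(N):
        for d in range(2):
            vel[i][d] += 0.5 * DT * acc[i][d]
            pos[i][d] += DT * vel[i][d]
    acc = accelerations(pos)
    for i in range(N):
        for d in range(2):
            vel[i][d] += 0.5 * DT * acc[i][d]
```

The leapfrog scheme is chosen here because it conserves energy far better than naive Euler stepping, which matters over long orbital simulations.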
My gambler's ruin applet illustrates the famous gambler's ruin problem of classical probability.
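The problem is easy to simulate and to check against the classical answer: a gambler starting with $a$, playing a fair game toward a goal of $N$, wins with probability $a/N$. A minimal Python sketch (parameter values are illustrative):

```python
import random

random.seed(42)

def ruin_prob(start, goal, p, trials=20000):
    """Estimate the probability that the gambler reaches `goal`
    before going broke, betting one unit per round with win prob p."""
    wins = 0
    for _ in range(trials):
        x = start
        while 0 < x < goal:
            x += 1 if random.random() < p else -1
        wins += (x == goal)
    return wins / trials

est = ruin_prob(start=3, goal=10, p=0.5)
exact = 3 / 10   # fair game: P(win) = start / goal
```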
My buckets applet illustrates the pouring of water into a triangular array of buckets -- sort of like Pascal's Triangle, but trickier.
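One natural set of rules can be sketched as follows, assuming each bucket holds one unit and any overflow splits equally between the two buckets beneath it (the applet's actual rules may differ, which is part of what makes it trickier than Pascal's Triangle):

```python
def pour(total, rows):
    """Pour `total` units of water into the top of a triangular stack of
    unit-capacity buckets; overflow splits equally to the two below."""
    fill = [[0.0] * (r + 1) for r in range(rows)]
    fill[0][0] = total
    for r in range(rows - 1):
        for c in range(r + 1):
            excess = fill[r][c] - 1.0
            if excess > 0:
                fill[r][c] = 1.0
                fill[r + 1][c] += excess / 2
                fill[r + 1][c + 1] += excess / 2
    return fill

levels = pour(total=6.0, rows=4)
```

Unlike Pascal's Triangle, a bucket stops passing water down once its neighbours absorb the flow, so the fill pattern is not simply binomial: here row three ends up as [0, 0.25, 0.25, 0].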
My frogwalk applet simulates a random walk on a discrete circle, where at each step the walker moves one site left, stays put, or moves one site right, each with probability 1/3.
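This walk is a simple Markov chain whose stationary distribution is uniform on the circle, which a short Python sketch can verify empirically (circle size and run length are illustrative):

```python
import random

random.seed(1)
n = 10                       # sites on the circle
pos = 0
visits = [0] * n
for _ in range(300000):
    pos = (pos + random.choice([-1, 0, 1])) % n   # left, stay, or right
    visits[pos] += 1
freqs = [v / 300000 for v in visits]
# by symmetry the stationary distribution is uniform: each freq near 1/n
```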
My uncunx applet illustrates the generalised quincunx device described in this paper.
My poisson applet illustrates that even if dots are placed uniformly at random, various "patterns" will seem to appear ("Poisson clumping").
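The effect is easy to demonstrate numerically: even for points placed independently and uniformly, surprisingly many "suspiciously close" pairs appear. A Python sketch (the point count and distance threshold are illustrative):

```python
import random

random.seed(7)
n = 200
pts = [(random.random(), random.random()) for _ in range(n)]

# count pairs of points within distance r of each other: such pairs
# look like deliberate "clumps", yet arise from pure randomness
r = 0.02
close_pairs = sum(
    1
    for i in range(n)
    for j in range(i + 1, n)
    if (pts[i][0] - pts[j][0]) ** 2 + (pts[i][1] - pts[j][1]) ** 2 < r * r
)
# roughly C(n,2) * pi * r^2, about 25 here (ignoring edge effects)
```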
A second Markov chain applet is "unif". It simulates a one-dimensional Metropolis sampler Markov chain with exponential target distribution and uniform proposal distributions. Would you trust this sampler's results?
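The sampler's idea can be sketched in Python, assuming the Exp(1) target density f(x) = e^{-x} on x ≥ 0 and a symmetric Uniform(x−δ, x+δ) proposal (a re-creation of the idea, not the applet's code):

```python
import random, math

random.seed(3)

def metropolis_exp(steps=100000, delta=1.0):
    """Metropolis chain targeting Exp(1) with a uniform proposal."""
    x = 1.0
    samples = []
    for _ in range(steps):
        y = x + random.uniform(-delta, delta)
        # accept with prob min(1, f(y)/f(x)); density is zero for y < 0
        if y >= 0 and random.random() < math.exp(x - y):
            x = y
        samples.append(x)
    return samples

samples = metropolis_exp()
mean = sum(samples) / len(samples)   # Exp(1) has mean 1
```

One way to judge whether to trust such a sampler is to compare long-run averages like this one against known moments of the target.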
A third Markov chain applet, "slice", simulates a one-dimensional slice sampler. See how the chain's convergence properties depend on the nature of the target distribution.
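For a target like Exp(1) the slice sampler takes a particularly clean form, since the slice {y : f(y) > u} is exactly the interval [0, −log u) and no stepping-out procedure is needed. A hedged Python sketch of that special case:

```python
import random, math

random.seed(5)

def slice_sample_exp(steps=50000):
    """One-dimensional slice sampler for the Exp(1) target f(x) = e^{-x}."""
    x = 1.0
    samples = []
    for _ in range(steps):
        u = random.uniform(0, math.exp(-x))   # vertical: uniform under f(x)
        x = random.uniform(0, -math.log(u))   # horizontal: uniform on slice
        samples.append(x)
    return samples

samples = slice_sample_exp()
mean = sum(samples) / len(samples)   # should be near the Exp(1) mean of 1
```

For less tractable targets the horizontal step requires a stepping-out and shrinkage procedure, and convergence can be much slower, which is the behaviour the applet lets you explore.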
A fourth Markov chain applet, "cftp", simulates a "coupling from the past" algorithm. See how to obtain an exact sample from a distribution, using only a Markov chain for which the distribution is stationary.
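The algorithm can be sketched for a simple monotone chain, a lazy random walk on {0, ..., m}: run coupled chains from the top and bottom states backwards in time, reusing the same randomness, and doubling the look-back until they coalesce (my illustrative example, not the applet's specific chain):

```python
import random

random.seed(11)

def update(x, u, m=10, p=0.5):
    """Monotone update for a lazy random walk on {0,...,m}."""
    if u < p:
        return min(x + 1, m)
    return max(x - 1, 0)

def cftp(m=10):
    """Coupling from the past: an exact draw from the stationary
    distribution, here uniform on {0,...,m} by detailed balance."""
    us = []   # reused randomness; us[t] drives the step from time -(t+1)
    T = 1
    while True:
        while len(us) < T:
            us.append(random.random())      # extend further into the past
        top, bottom = m, 0
        for t in range(T - 1, -1, -1):      # run from time -T up to time 0
            top = update(top, us[t], m)
            bottom = update(bottom, us[t], m)
        if top == bottom:                   # all start states have coalesced
            return top
        T *= 2

draws = [cftp() for _ in range(2000)]
mean = sum(draws) / len(draws)   # uniform on {0,...,10} has mean 5
```

The crucial point is that the random numbers for each past time step are fixed once and reused on every restart; fresh randomness would bias the output.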
A fifth Markov chain applet, "rwm", shows a very simple random-walk Metropolis MCMC algorithm.
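The algorithm really is very simple; here is a Python sketch targeting a standard normal (target and proposal scale are my illustrative choices):

```python
import random, math

random.seed(2)

def rwm(steps=100000, sigma=2.5):
    """Random-walk Metropolis targeting the standard normal density."""
    x = 0.0
    out = []
    for _ in range(steps):
        y = x + random.gauss(0, sigma)
        # accept with prob min(1, exp(-y^2/2) / exp(-x^2/2))
        if random.random() < math.exp(min(0.0, (x * x - y * y) / 2)):
            x = y
        out.append(x)
    return out

chain = rwm()
mean = sum(chain) / len(chain)
var = sum(c * c for c in chain) / len(chain) - mean * mean
```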
Another Markov chain applet, "adapt", illustrates the perils of naive use of adaptive MCMC algorithms.
Another Markov chain applet, "pointproc", runs a Metropolis-within-Gibbs algorithm on a spatial point process.
See also my finance-related applet "option", which uses a Monte Carlo algorithm to estimate the maximum of a stock price over a time interval.
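The Monte Carlo idea can be sketched by simulating discretized geometric Brownian motion paths and averaging each path's running maximum (all parameter values here are illustrative assumptions, not the applet's):

```python
import random, math

random.seed(4)

def mc_max_price(s0=100.0, mu=0.05, vol=0.2, T=1.0, steps=252, paths=5000):
    """Estimate E[ max over [0,T] of S_t ] for geometric Brownian motion
    dS = mu*S dt + vol*S dW, via simple path simulation."""
    dt = T / steps
    total = 0.0
    for _ in range(paths):
        s, m = s0, s0
        for _ in range(steps):
            z = random.gauss(0, 1)
            s *= math.exp((mu - 0.5 * vol * vol) * dt
                          + vol * math.sqrt(dt) * z)
            m = max(m, s)
        total += m
    return total / paths

est = mc_max_price()
```

Discretization slightly underestimates the true continuous-time maximum, since the path can peak between grid points; finer time steps reduce that bias at extra cost.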
There is also a Java applet of my Galactic Peace interactive fiction game.