## Stochastic ant colony model

Previously on the Physics of Risk website we presented Kirman's ant colony agent-based model [1], where each ant was represented as an agent. In this article we move from the agent-based model framework to the stochastic differential equation framework, thus showing that for simple agent-based models a full transition to the stochastic framework is possible. This transition is important because the stochastic framework is very popular and well developed in quantitative finance. The problem is that the stochastic framework mainly gives a macroscopic insight into the modeled system, while the microscopic behavior is currently also of great interest.

### Derivation of stochastic differential equation

In this section of the article we will follow the derivation of a stochastic differential equation, analogous to the previously discussed agent-based model, done by Alfarano and Lux in [2]. The authors of [2] build their derivation on the underlying ideas of birth-death processes, or one-step processes, an overview of which is given in most handbooks on stochastic calculus. Thus if you want to become more familiar with the ideas below, you should see [3] or other similar works.

Alfarano and Lux start by simplifying the notation, used in the previous agent-based model, of the system state, defined as the number of ants using one of the food sources, $X$, and of the transition probabilities,

$$ \pi^{+}(X) = \frac{p(X \rightarrow X+1)}{\Delta t} = (N - X)(\sigma_1 + h X) , \qquad \pi^{-}(X) = \frac{p(X \rightarrow X-1)}{\Delta t} = X (\sigma_2 + h (N - X)) , $$

where $\pi^{+}(X)$ stands for the per-unit-time probability of the transition $X \rightarrow X+1$ and $\pi^{-}(X)$ for that of $X \rightarrow X-1$. In such a case the Master equation, for very short times $\Delta t$, can be expressed as

$$ P(X, t + \Delta t) - P(X, t) = \left[ \pi^{+}(X-1) P(X-1, t) + \pi^{-}(X+1) P(X+1, t) - \left( \pi^{+}(X) + \pi^{-}(X) \right) P(X, t) \right] \Delta t , $$

here $P(X, t)$ is the probability for the system to be in the state described by the agent number $X$, or in other words the probability of $X$ ants at a given time to be using one of the two food sources.

It is convenient for the further derivation to introduce, from the Master equation above, the total probability flux, $Q(X, t)$, between the states $X$ and $X+1$. The latter can be expressed as

$$ Q(X, t) = \pi^{+}(X) P(X, t) - \pi^{-}(X+1) P(X+1, t) , $$

here the first term describes transitions from $X$ to $X+1$, while the second term describes transitions in the opposite direction, from $X+1$ to $X$. Thus if the flux, $Q(X, t)$, is positive, the system state with larger $X$ becomes more probable than the system state with smaller $X$. Now by using the defined probability fluxes and the Master equation above one can obtain a discrete continuity equation

$$ P(X, t + \Delta t) - P(X, t) = - \left[ Q(X, t) - Q(X-1, t) \right] \Delta t . $$

The interpretation of this equation is generally the same as, for example, in the case of the electric current continuity equation: if the net flux out of the current system state (or out of a differential volume in the case of electric current) is positive, the probability (analogous to charge) density of this system state will decrease. This idea stands behind the notion of local continuity. If the probability flux vanishes at the boundaries, say $Q(-1, t) = Q(N, t) = 0$, then one can show that $\sum_X P(X, t) = 1$ holds at every moment of time. The last result stands behind the notion of global continuity.
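The global continuity property can be illustrated numerically. Below is a minimal sketch (plain Python, with illustrative parameter values of my own choosing) that iterates a one-step Master equation written in the continuity form and checks that, with vanishing boundary fluxes, the total probability stays equal to one:

```python
# Minimal sketch of the discrete continuity equation; the parameter
# values (N, sigma1, sigma2, h, dt) are illustrative assumptions.
N, sigma1, sigma2, h = 20, 0.2, 5.0, 1.0
dt = 1e-4  # time step small enough for the short-time Master equation

def pi_plus(X):   # per-unit-time probability of the transition X -> X + 1
    return (N - X) * (sigma1 + h * X)

def pi_minus(X):  # per-unit-time probability of the transition X -> X - 1
    return X * (sigma2 + h * (N - X))

def flux(P, X):   # total probability flux Q(X) between states X and X + 1
    if X < 0 or X >= N:
        return 0.0  # boundary fluxes vanish: no probability leaves [0, N]
    return pi_plus(X) * P[X] - pi_minus(X + 1) * P[X + 1]

P = [1.0 / (N + 1)] * (N + 1)  # uniform initial distribution over X = 0..N

# Each state loses probability through its "right wall" and gains it
# through its "left wall"; the wall fluxes telescope, so the sum over
# all states is conserved.
for _ in range(1000):
    P = [P[X] - (flux(P, X) - flux(P, X - 1)) * dt for X in range(N + 1)]

print(abs(sum(P) - 1.0) < 1e-9)
```

Because every internal flux appears with opposite signs in two neighboring states, only the (vanishing) boundary fluxes could change the total probability, which is exactly the global continuity argument.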

Now let's move on from the discrete case of $X$ to the continuous case of $x \in [0, 1]$, applying the transformation $x = \frac{X}{N}$, where $\Delta x = \frac{1}{N}$. One can re-express the probability density of the continuous system state through the discrete system state as

$$ p(x, t) = N P(X, t) , $$

and the total probability flux as

$$ q\!\left(x + \tfrac{\Delta x}{2}, t\right) = Q(X, t) . $$

The reasoning behind the offset, $\frac{\Delta x}{2}$, in the latter equation lies in the fact that the flux denoted by $Q(X, t)$ connects the two discrete states $X$ and $X+1$, thus it should be located in the middle of that interval. This offset also helps to avoid tedious mathematics in the further derivation. Alfarano and Lux also mention that this offset in the flux is widely used in discretizations of Maxwell's equations and in gauge theories on discrete lattices (see the references in [2]).

One can rewrite the above discrete continuity equation in continuous terms by taking the analogy with the electric current continuity equation, or by expanding $q\!\left(x \pm \tfrac{\Delta x}{2}, t\right)$ using a Taylor series expansion (dropping the second order, $(\Delta x)^2$, and higher terms). Either way one obtains

$$ \partial_t p(x, t) = - \partial_x q(x, t) , $$

the continuity equation for continuous time (which is introduced by assuming that $\Delta t \rightarrow 0$) and space.

Now let's recall the definition of the total probability flux in discrete terms and rewrite it in continuous terms. In the process it becomes

$$ q(x, t) = \frac{1}{N} \left[ \pi^{+}\!\left(x - \tfrac{\Delta x}{2}\right) p\!\left(x - \tfrac{\Delta x}{2}, t\right) - \pi^{-}\!\left(x + \tfrac{\Delta x}{2}\right) p\!\left(x + \tfrac{\Delta x}{2}, t\right) \right] . $$

In the equation above $\pi^{\pm}$ was additionally shifted by $\mp \frac{\Delta x}{2}$.

When $\Delta x \rightarrow 0$, we can also expand $\pi^{\pm}\!\left(x \mp \tfrac{\Delta x}{2}\right) p\!\left(x \mp \tfrac{\Delta x}{2}, t\right)$, using a Taylor series expansion (up to second order), as

$$ \pi^{\pm}\!\left(x \mp \tfrac{\Delta x}{2}\right) p\!\left(x \mp \tfrac{\Delta x}{2}, t\right) \approx \pi^{\pm}(x) p(x, t) \mp \frac{\Delta x}{2} \partial_x \left[ \pi^{\pm}(x) p(x, t) \right] . $$

And thus we finally obtain

$$ q(x, t) = \frac{1}{N} \left\{ \left[ \pi^{+}(x) - \pi^{-}(x) \right] p(x, t) - \frac{\Delta x}{2} \partial_x \left[ \left( \pi^{+}(x) + \pi^{-}(x) \right) p(x, t) \right] \right\} . $$

Now one should put the definitions of $\pi^{+}(x)$ and $\pi^{-}(x)$ into the equation above to make one more step in the derivation, but before this it is convenient to define two custom functions

$$ A(x) = \frac{1}{N} \left[ \pi^{+}(x) - \pi^{-}(x) \right] , \qquad D(x) = \frac{\Delta x}{N} \left[ \pi^{+}(x) + \pi^{-}(x) \right] . $$

Then, after putting in the definitions of the transition probabilities, $\pi^{+}(x) = N (1 - x)(\sigma_1 + h N x)$ and $\pi^{-}(x) = N x (\sigma_2 + h N (1 - x))$, and dropping the terms of second order and above one obtains

$$ q(x, t) = A(x) p(x, t) - \frac{1}{2} \partial_x \left[ D(x) p(x, t) \right] , \qquad A(x) = \sigma_1 (1 - x) - \sigma_2 x , \qquad D(x) = 2 h x (1 - x) + \frac{\sigma_1 (1 - x) + \sigma_2 x}{N} . $$

And then from the continuity equation one can obtain the Fokker-Planck equation

$$ \partial_t p(x, t) = - \partial_x \left[ A(x) p(x, t) \right] + \frac{1}{2} \partial_x^2 \left[ D(x) p(x, t) \right] , $$

which produces the same dynamics as the agent-based model. Note that the custom functions, which were introduced before, have a special meaning: $A(x)$ describes the drift of the system state and $D(x)$ describes its diffusion.
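As a quick sanity check of the drift and diffusion roles, here is a minimal sketch assuming the Kirman-model drift $\sigma_1 (1 - x) - \sigma_2 x$ and the large-$N$ diffusion $2 h x (1 - x)$ of Alfarano and Lux [2] (the parameter values are illustrative): the drift vanishes at its balance point $x = \sigma_1 / (\sigma_1 + \sigma_2)$, and the diffusion vanishes at both ends of the interval, so noise cannot push the system out of $[0, 1]$.

```python
# Drift and large-N diffusion of the population fraction x; the
# parameter values below are illustrative assumptions.
sigma1, sigma2, h = 0.2, 5.0, 1.0

def A(x):  # drift of the system state
    return sigma1 * (1 - x) - sigma2 * x

def D(x):  # diffusion of the system state (limit of large N)
    return 2 * h * x * (1 - x)

x_star = sigma1 / (sigma1 + sigma2)  # point where the drift balances out
print(abs(A(x_star)) < 1e-12, D(0.0) == 0.0, D(1.0) == 0.0)
```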

The Fokker-Planck equation above can be alternatively modeled using a Langevin stochastic differential equation (for general details on the conversion see [3])

$$ \mathrm{d} x = A(x) \, \mathrm{d} t + \sqrt{D(x)} \, \mathrm{d} W = \left[ \sigma_1 (1 - x) - \sigma_2 x \right] \mathrm{d} t + \sqrt{2 h x (1 - x) + \frac{\sigma_1 (1 - x) + \sigma_2 x}{N}} \, \mathrm{d} W , $$

and in the limit of high $N$

$$ \mathrm{d} x = \left[ \sigma_1 (1 - x) - \sigma_2 x \right] \mathrm{d} t + \sqrt{2 h x (1 - x)} \, \mathrm{d} W , $$

here $W$ is a Wiener (Brownian motion) process. This final equation is solved in the Java applet below.

### Observed population fraction dynamics

The only thing which has changed since the previous implementation of Kirman's ant colony model is the modeling framework: in the section above we have derived the Langevin equation for Kirman's ant colony. Thus the observations discussed in the previous article also apply to this model. This time we limit ourselves to simply showing that the Langevin equation and the agent-based model produce the same results with the same parameter values (see Fig 1).

Fig 1. Comparison of probability density function (a) and power spectral density (b) of external observable, x, time series, which were produced by agent based model (points) and stochastic model (lines). Parameters are set as follows: h=1 (same in all cases), σ1=0.2 (red points, blue lines), σ1=16 (magenta points, cyan lines), σ2=5 (same in all cases).

### Population fraction SDE Applet

In the applet below we solve the aforementioned Langevin equation using the simple Euler-Maruyama method (see [4]). Using this method we transform the stochastic differential equation into the difference equation

$$ x_{i+1} = x_i + \left[ \sigma_1 (1 - x_i) - \sigma_2 x_i \right] \Delta t + \sqrt{2 h x_i (1 - x_i) \Delta t} \, \zeta_i , $$

where $\zeta_i$ is a Gaussian random variable with zero mean and unit variance. As $x$ has meaning only in the interval $[0, 1]$, we also enforce boundary conditions to restrict its values. And in order for the numerical solution to be stable we require that the time step $\Delta t$ be small compared to the characteristic time scales set by $\sigma_1$, $\sigma_2$ and $h$.
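The scheme can be sketched in plain Python as follows, assuming the large-$N$ Kirman-model drift $\sigma_1 (1 - x) - \sigma_2 x$ and diffusion $2 h x (1 - x)$; the step size, iteration count, and random seed are illustrative assumptions, while the model parameters follow one of the cases in Fig 1:

```python
import math
import random

# Euler-Maruyama integration of the Langevin equation for the
# population fraction x; parameters as in Fig 1 (red/blue case).
sigma1, sigma2, h = 0.2, 5.0, 1.0
dt = 1e-3  # assumed step size, small compared to 1/sigma2 and 1/h
random.seed(1)

x = 0.5  # initial population fraction
trajectory = [x]
for _ in range(10_000):
    drift = sigma1 * (1 - x) - sigma2 * x
    noise = math.sqrt(2 * h * x * (1 - x) * dt) * random.gauss(0.0, 1.0)
    x += drift * dt + noise
    x = min(max(x, 0.0), 1.0)  # enforce the boundaries of [0, 1]
    trajectory.append(x)

print(all(0.0 <= v <= 1.0 for v in trajectory))
```

Clamping $x$ to $[0, 1]$ after each step plays the role of the boundary conditions mentioned above and also keeps the argument of the square root non-negative.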


### References

• Kirman, A. Ants, rationality and recruitment. Quarterly Journal of Economics 108, 1993, pp. 137-156.
• Alfarano, S., Lux, T., Wagner, F. Estimation of Agent-Based Models: The Case of an Asymmetric Herding Model. Computational Economics 26 (1), 2005, pp. 19-49.
• Gardiner, C. W. Handbook of stochastic methods. Springer, Berlin, 2009.
• Kloeden, P. E., Platen, E. Numerical Solution of Stochastic Differential Equations. Springer, Berlin, 1999.