Experiments with ant species (Iridomyrmex humilis, Linepithema humile and Lasius niger) showed that there is indirect communication between individuals through pheromones. On their way from the nest to the food source and back, the ants deposit pheromone on the ground, forming a trail that the rest of the colony can smell.
The greater the pheromone concentration on a path, the greater the probability that an ant will follow that path. To test this, double-bridge experiments were designed, varying the ratio 𝑟 = 𝑙𝑙/𝑙𝑠 between the lengths of the branches, where 𝑙𝑙 and 𝑙𝑠 are the lengths of the long and short branches, respectively.
In the first experiment the nest was connected with the food source using two branches of same length, while in the second experiment the two branches were of different length.
Fig.1 Bridge experiments: a) Branches of same length, r=1. b) Branches of different length, r= 2.
In the first case the results showed that the ants initially take either branch indifferently, but eventually more pheromone accumulates on the most traveled path, which is why the ants choose it with greater probability. After several repetitions of the experiment, it was observed that each branch ended up being the chosen one in roughly half of the trials.
In the second case, two branches of different length were used, and after letting the colony move freely it was observed that eventually the ants converged on a single branch. In most trials, this was the shortest one.
This phenomenon is explained by the fact that the shorter path allows the ants to travel more times between the nest and the source and, therefore, more pheromone is deposited.
Ant System (AS)
ACO is a metaheuristic approach inspired by Ant System (AS) initially proposed by Marco Dorigo in 1992 in his doctoral thesis.
This algorithm consists of four main components:
▪ Ant:
Ants are the agents of the algorithm that explore and exploit the search space.
▪ Pheromone:
The agents (ants) drop pheromone in the search space, and the quantity of pheromone on a path indicates the probability that other agents will choose it. The intensity of the pheromone on the paths can be considered the memory of the system.
▪ Daemon action:
It is used to gather global information about the system that no single ant can obtain. If convergence is slow, the daemon action adds extra pheromone to accelerate the process.
▪ Decentralized control:
The decentralized control operated by the algorithm makes it robust and flexible.
To find the best solution the agents choose the next node using the transition rule, which gives the probability for ant 𝑘 to go from city 𝑖 to city 𝑗 on the 𝑡th tour, also known as the random proportional transition rule:

p_ij^k(t) = [τ_ij(t)]^α · [η_ij]^β / Σ_{l ∈ J_i^k} [τ_il(t)]^α · [η_il]^β   if j ∈ J_i^k, and 0 otherwise,

where J_i^k is the set of cities not yet visited by ant 𝑘 currently at city 𝑖.
In the equation, 𝜏𝑖𝑗 represents the pheromone trail and 𝜂𝑖𝑗 the visibility between the two cities, while 𝛼 and 𝛽 are adjustable parameters that control the relative weight of trail intensity and visibility.
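As a minimal Python sketch of the random proportional rule above (the matrix names `tau` and `eta` and the default values of `alpha` and `beta` are illustrative, not from the text):

```python
import random

def transition_probabilities(i, unvisited, tau, eta, alpha=1.0, beta=2.0):
    """Random proportional rule: probability of moving from city i to each
    city j in `unvisited`, weighting pheromone tau[i][j] by alpha and
    visibility eta[i][j] (typically 1/distance) by beta."""
    weights = {j: (tau[i][j] ** alpha) * (eta[i][j] ** beta) for j in unvisited}
    total = sum(weights.values())
    return {j: w / total for j, w in weights.items()}

def choose_next_city(i, unvisited, tau, eta, alpha=1.0, beta=2.0):
    """Sample the next city according to the transition probabilities."""
    probs = transition_probabilities(i, unvisited, tau, eta, alpha, beta)
    cities = list(probs)
    return random.choices(cities, weights=[probs[j] for j in cities])[0]
```

With uniform pheromone and visibility, every unvisited city is equally likely, matching the symmetric first bridge experiment.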
Pheromones are a very important component of the algorithm, since they constitute the memory of the system. Agents leave a pheromone trail that makes their paths more likely to be chosen by subsequent agents.
Another very important component is the pheromone evaporation rate, or decay factor. Too high an evaporation rate makes the agents explore more, since there is less pheromone in the search space, while a low evaporation rate leads to exploitative behavior.
Pheromone evaporation has the advantage of avoiding convergence to a locally optimal solution. If there were no evaporation the paths chosen by the first ants would tend to be excessively attractive to the following ants, and hence the exploration of space and solutions would be limited. The influence of pheromone evaporation on real ant systems is unclear, but it is very important in artificial systems.
Pheromone decay is implemented through a decay coefficient 𝜌 (0 ≤ 𝜌 < 1), and the pheromone update rule applied to all edges is as follows:

τ_ij(t+1) = (1 − ρ) · τ_ij(t) + Δτ_ij(t)

where

Δτ_ij(t) = Σ_{k=1}^{m} Δτ_ij^k(t),   Δτ_ij^k(t) = 1/L_k if ant 𝑘 used edge (𝑖, 𝑗) in its tour, and 0 otherwise,

being 𝑚 the number of ants and L_k the length of ant 𝑘's tour.
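The evaporation-plus-deposit update can be sketched as follows (tour representation as a closed list of cities and the symmetric-TSP assumption are mine, not from the text):

```python
def update_pheromone(tau, tours, lengths, rho=0.5):
    """AS global update: evaporate pheromone on every edge by factor (1 - rho),
    then let each of the m ants deposit 1/L_k on the edges of its tour.
    tours[k] is a closed tour given as a list of city indices."""
    n = len(tau)
    for i in range(n):
        for j in range(n):
            tau[i][j] *= (1.0 - rho)
    for tour, length in zip(tours, lengths):
        deposit = 1.0 / length
        for a, b in zip(tour, tour[1:] + tour[:1]):  # consecutive edges, closing the tour
            tau[a][b] += deposit
            tau[b][a] += deposit                     # symmetric TSP assumed
    return tau
```

Without the evaporation step, early deposits would only ever grow, which is exactly the stagnation the decay factor is meant to prevent.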
Another important parameter is the number of ants (𝑚), which remains constant over time but has to be chosen carefully: too many ants lead to suboptimal paths, whereas too few ants cannot cooperate with each other because of pheromone decay.
This algorithm has been used to solve TSP, the quadratic assignment problem, routing and image processing, among others.
Ant Colony Optimization modifications
Some modifications have been created in order to improve AS performance:
- Elitist ant system (EAS)
This variant was proposed by Dorigo and is based on an elitist strategy, in which the best solution found so far deposits an additional quantity of pheromone in each iteration. This is controlled by a parameter (𝑒) which defines the weight given to that solution. Computational results showed that the elitist strategy finds better solutions in fewer iterations than AS.
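The extra elitist deposit can be sketched as a small addition on top of the AS update (the function name and the default value of the weight `e` are illustrative):

```python
def elitist_deposit(tau, best_tour, best_length, e=5.0):
    """EAS extra deposit: after the normal AS update, the best-so-far tour
    receives an additional e / L_best of pheromone on each of its edges."""
    bonus = e / best_length
    for a, b in zip(best_tour, best_tour[1:] + best_tour[:1]):  # closed tour
        tau[a][b] += bonus
        tau[b][a] += bonus  # symmetric TSP assumed
    return tau
```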
- Rank-based ant system
Rank-based AS was proposed by Bullnheimer, Hartl and Strauss. The proposal is to let only the ants with the best solutions of each iteration deposit pheromone. Before the pheromone update, the 𝑟 best ants are sorted by performance in a list, and each deposits an amount of pheromone weighted by its rank (w − r); the best solution found so far receives the largest weight (w) and deposits the most, even if it does not belong to the current iteration.
- Max-Min Ant System
The approach was introduced by Stützle and Hoos in 2000 and limits pheromone values within a range. Several aspects regarding the original algorithm are modified:
- At the beginning, pheromone trails are set to the maximum value to encourage exploration of the environment.
- Pheromone values are limited to one interval to avoid stagnation.
- Only one ant is allowed to add pheromone, to favor the exploitation of the best solutions found during the execution of the algorithm.
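The three modifications above can be sketched in one update step (the parameter defaults and bound values are illustrative, not from the text):

```python
def mmas_update(tau, best_tour, best_length, rho=0.02, tau_min=0.01, tau_max=5.0):
    """Max-Min AS update: evaporate all trails, let only the best ant
    deposit 1/L_best on its edges, then clamp every trail into
    [tau_min, tau_max] to avoid stagnation."""
    n = len(tau)
    for i in range(n):
        for j in range(n):
            tau[i][j] *= (1.0 - rho)          # evaporation
    deposit = 1.0 / best_length
    for a, b in zip(best_tour, best_tour[1:] + best_tour[:1]):
        tau[a][b] += deposit                  # only the best ant deposits
        tau[b][a] += deposit
    for i in range(n):
        for j in range(n):
            tau[i][j] = min(max(tau[i][j], tau_min), tau_max)  # clamp
    return tau
```

Initializing `tau` to `tau_max` everywhere reproduces the first modification: all edges start maximally attractive, so early tours are close to uniform exploration.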
- Ant Colony System (ACS)
This algorithm was introduced by Dorigo and Gambardella to improve the performance of the Ant System algorithm. It uses a centralized (global) pheromone update that concentrates the search close to the best solution found so far, in order to improve convergence time.
ACS is based on four modifications of the AS:
1. Different transition rule
The transition rule is modified to control exploration explicitly. By tuning a parameter, exploration can be cut back, making the algorithm concentrate on the best solutions found so far instead of exploring constantly.
This is done following the rule:

j = argmax_{l ∈ J_i^k} { τ_il(t) · [η_il]^β }   if q ≤ q0 (exploitation),   and j = J otherwise (biased exploration),

where 𝑞 is a random variable uniformly distributed over [0, 1], 𝑞0 is an adjustable parameter (0 ≤ 𝑞0 ≤ 1), and 𝐽 ∈ 𝐽𝑖𝑘 is a city randomly chosen according to the probability:

p_iJ^k(t) = τ_iJ(t) · [η_iJ]^β / Σ_{l ∈ J_i^k} τ_il(t) · [η_il]^β
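A minimal Python sketch of this pseudo-random proportional rule (parameter defaults are illustrative):

```python
import random

def acs_choose_next(i, unvisited, tau, eta, beta=2.0, q0=0.9):
    """ACS pseudo-random proportional rule: with probability q0, greedily
    take the edge maximizing tau * eta^beta (exploitation); otherwise
    sample a city like the AS random proportional rule (biased exploration)."""
    scores = {j: tau[i][j] * (eta[i][j] ** beta) for j in unvisited}
    if random.random() <= q0:
        return max(scores, key=scores.get)   # exploitation
    total = sum(scores.values())             # biased exploration
    cities = list(scores)
    return random.choices(cities, weights=[scores[j] / total for j in cities])[0]
```

Setting `q0` close to 1 makes the search almost greedy around the best-known edges; `q0 = 0` recovers AS-style exploration (with α fixed to 1).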
2. Different pheromone trail update rule
Only the agent (ant) that generated the best tour is allowed to globally update the pheromone concentration. The update is applied only to the edges of the best tour found so far. In this way, the agents are encouraged to explore paths in the vicinity of the best tour found so far.
τ_ij(t) ← (1 − ρ) · τ_ij(t) + ρ · Δτ_ij(t)

where (𝑖, 𝑗) are the edges of the best tour found so far (𝑇+), and Δ𝜏𝑖𝑗(𝑡) = 1/𝐿+, with 𝐿+ the length of 𝑇+.
3. Use of local updates of pheromone trail to favor exploration
In order to avoid stagnation, local updates are performed. The local update rule makes the pheromone level diminish on the edges visited, so those edges become less attractive and the agents are encouraged to explore other paths.
The local update: when ant 𝑘 is in city 𝑖 and moves to city 𝑗, the local pheromone concentration is updated following the formula:

τ_ij(t) ← (1 − ρ) · τ_ij(t) + ρ · τ0

where τ0 is the initial pheromone level.
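This local update is a one-liner in practice (default values for `rho` and `tau0` are illustrative):

```python
def acs_local_update(tau, i, j, rho=0.1, tau0=0.001):
    """ACS local update: each time an ant crosses edge (i, j), its pheromone
    shrinks toward the initial level tau0, making the edge less attractive
    to the ants that follow within the same iteration."""
    tau[i][j] = (1.0 - rho) * tau[i][j] + rho * tau0
    tau[j][i] = tau[i][j]  # symmetric TSP assumed
    return tau
```

Since `tau0` is normally much smaller than the current trail level, repeated traversals of the same edge steadily erode its attractiveness, which is exactly the anti-stagnation effect described above.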
4. Use of candidate list to restrict the choice of the next city to visit
Cities in the candidate list are ordered by increasing distance, and the list is scanned sequentially. The agents first consider the cities on the list, and turn to the remaining cities only when all the cities on the list have already been visited.
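A candidate list can be precomputed once from the distance matrix; the following sketch shows both the construction and the restricted choice (function names and the list length `cl` are illustrative):

```python
def build_candidate_lists(dist, cl=5):
    """For each city, keep the cl nearest other cities, ordered by
    increasing distance, as its candidate list."""
    n = len(dist)
    return [sorted((j for j in range(n) if j != i), key=lambda j: dist[i][j])[:cl]
            for i in range(n)]

def restricted_unvisited(i, visited, candidates, n):
    """Cities to consider from city i: unvisited candidates first; only if
    every candidate has been visited, fall back to all unvisited cities."""
    pool = [j for j in candidates[i] if j not in visited]
    if pool:
        return pool
    return [j for j in range(n) if j != i and j not in visited]
```

Restricting the transition rule to this pool cuts the per-step cost from O(n) to O(cl), which is why candidate lists matter on large TSP instances.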
Bibliography:
Eric Bonabeau, Marco Dorigo & Guy Theraulaz, Swarm Intelligence: From Natural to Artificial Systems, Oxford University Press, New York, 1999.