
Ant Colony Optimization (part 1)

Experiments with ant species (Linepithema humile, formerly classified as Iridomyrmex humilis, and Lasius niger) showed that there is an indirect communication between individuals through pheromones. On their way from the nest to the food source and back, the ants deposit pheromones on the ground, forming a trail that the rest of the colony is able to smell.

The greater the pheromone concentration on a path, the greater the probability that an ant will follow that path. To test this, double-bridge experiments were designed in which the ratio (𝑟 = 𝑙𝑙/𝑙𝑠) between the lengths of the two branches was varied, where 𝑙𝑙 and 𝑙𝑠 are the lengths of the long and the short branch, respectively.

In the first experiment the nest was connected to the food source by two branches of the same length, while in the second experiment the two branches had different lengths.
Fig. 1 Bridge experiments: a) branches of the same length, r = 1; b) branches of different lengths, r = 2.


In the first case the results showed that the ants initially chose either branch indifferently, but eventually more pheromone accumulated on the most traveled branch, which is why the ants then chose it with greater probability. After several repetitions of the experiment it was observed that each branch ended up being selected in roughly half of the trials.
In the second case the two branches had different lengths and, after letting the colony move freely, it was observed that the ants eventually converged on a single branch; in most trials they chose the shorter one.
This phenomenon is explained by the fact that the shorter path allows the ants to complete more trips between the nest and the food source and, therefore, more pheromone is deposited on it.

Ant System (AS)
ACO is a metaheuristic approach inspired by Ant System (AS) initially proposed by Marco Dorigo in 1992 in his doctoral thesis.

This algorithm consists of four main components:
▪ Ant:
Ants are the agents of the algorithm; they explore and exploit the search space.

▪ Pheromone:
The agents (ants) drop pheromone in the search space, and the amount of pheromone on a path indicates the probability of that path being chosen by other agents. The intensity of the pheromone on the paths can be regarded as the memory of the system.

▪ Daemon action:
It is used to gather global information about the system that cannot be collected by a single ant. If convergence is slow, the daemon action deposits extra pheromone to accelerate the process.

▪ Decentralized control:
The decentralized control operated by the algorithm makes it robust and flexible.


To find the best solution, the agents choose the next node using the equation known as the transition rule, which represents the probability for ant 𝑘 to go from city 𝑖 to city 𝑗 on the 𝑡th tour, also known as the random proportional transition rule:

𝑝𝑖𝑗𝑘(𝑡) = [𝜏𝑖𝑗(𝑡)]^𝛼 · [𝜂𝑖𝑗]^𝛽 / Σ𝑙∈𝐽𝑖𝑘 [𝜏𝑖𝑙(𝑡)]^𝛼 · [𝜂𝑖𝑙]^𝛽   if 𝑗 ∈ 𝐽𝑖𝑘, and 𝑝𝑖𝑗𝑘(𝑡) = 0 otherwise

In the equation, 𝜏𝑖𝑗 represents the pheromone trail and 𝜂𝑖𝑗 the visibility between the two cities (usually the inverse of the distance between them), 𝐽𝑖𝑘 is the set of cities that ant 𝑘 may still visit from city 𝑖, and 𝛼 and 𝛽 are adjustable parameters that control the relative weight of trail intensity and visibility.
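As an illustration, here is a minimal Python sketch of this random proportional rule; the distance matrix, the uniform initial pheromone and the parameter values are made up for the example.

import numpy as np

def transition_probabilities(i, visited, tau, dist, alpha=1.0, beta=2.0):
    # Random proportional rule: p_ij = (tau_ij^alpha * eta_ij^beta) / sum over the
    # cities the ant may still visit, with visibility eta_ij = 1 / d_ij and
    # p_ij = 0 for cities already visited.
    n = len(dist)
    weights = np.zeros(n)
    for j in range(n):
        if j not in visited:
            weights[j] = (tau[i, j] ** alpha) * ((1.0 / dist[i, j]) ** beta)
    return weights / weights.sum()

# Toy example: 4 cities, uniform initial pheromone, ant currently at city 0.
dist = np.array([[0., 2., 3., 4.],
                 [2., 0., 5., 1.],
                 [3., 5., 0., 2.],
                 [4., 1., 2., 0.]])
tau = np.ones((4, 4))
p = transition_probabilities(0, visited={0}, tau=tau, dist=dist)
next_city = np.random.choice(len(dist), p=p)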



Pheromones are a very important component of the algorithm, since they constitute the memory of the system. Agents leave a trail of pheromone that makes a path more likely to be chosen by the agents that follow.



Another very important component is the pheromone evaporation rate, or decay factor. A very high evaporation rate makes the agents explore more, since less pheromone remains in the search space, while a low evaporation rate leads to exploitation behavior.
Pheromone evaporation has the advantage of avoiding convergence to a locally optimal solution. If there were no evaporation, the paths chosen by the first ants would become excessively attractive to the following ants, and the exploration of the solution space would be limited. The influence of pheromone evaporation in real ant colonies is unclear, but it is very important in artificial systems.



The pheromone decay factor is implemented by using a decay coefficient 𝜌 (0 ≤ 𝜌 < 1), and the pheromone update rule that is applied to all edges is as follows:

𝜏𝑖𝑗(𝑡+1) = (1 − 𝜌) · 𝜏𝑖𝑗(𝑡) + Δ𝜏𝑖𝑗(𝑡)

where

Δ𝜏𝑖𝑗(𝑡) = Σ𝑘=1..𝑚 Δ𝜏𝑖𝑗𝑘(𝑡),  with Δ𝜏𝑖𝑗𝑘(𝑡) = 1/𝐿𝑘 if ant 𝑘 used edge (𝑖,𝑗) in its tour of length 𝐿𝑘, and 0 otherwise,

being 𝑚 the number of ants.
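A minimal sketch of this update in Python, assuming each ant's tour is given as an ordered list of cities and tours are closed; the tour, its length and the value of 𝜌 are illustrative.

import numpy as np

def update_pheromone(tau, tours, tour_lengths, rho=0.5):
    # Evaporation on every edge: tau_ij <- (1 - rho) * tau_ij
    tau = (1.0 - rho) * tau
    # Deposit: each ant k adds 1 / L_k on every edge of its (closed) tour.
    for tour, length in zip(tours, tour_lengths):
        for a, b in zip(tour, tour[1:] + tour[:1]):
            tau[a, b] += 1.0 / length
            tau[b, a] += 1.0 / length  # symmetric problem
    return tau

# One ant that followed the closed tour 0 -> 1 -> 3 -> 2 -> 0 with tour length 8.
tau = update_pheromone(np.ones((4, 4)), tours=[[0, 1, 3, 2]], tour_lengths=[8.0])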


Another important parameter is the number of ants (𝑚). It is kept constant over time, but it has to be chosen carefully, since too many ants quickly reinforce suboptimal paths, whereas too few ants are not able to cooperate with each other because of pheromone decay.



This algorithm has been used to solve the travelling salesman problem (TSP), the quadratic assignment problem, routing problems and image processing tasks, among others.



Ant Colony Optimization modifications

Several modifications have been proposed in order to improve the performance of AS:


- Elitist ant system (EAS)

This variant was proposed by Dorigo and is based on an elitist strategy, in which the best solution found so far deposits an additional amount of pheromone at every iteration. This extra deposit is controlled by a parameter (𝑒) that defines the weight given to the best solution. Computational results showed that the elitist strategy finds better solutions in a smaller number of iterations than AS.
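A possible sketch of the elitist deposit, applied after the standard AS update; the weight e, the best-so-far tour and its length are placeholders for the example.

def elitist_deposit(tau, best_tour, best_length, e=5.0):
    # The best-so-far tour deposits an extra e / L_best on each of its edges.
    for a, b in zip(best_tour, best_tour[1:] + best_tour[:1]):
        tau[a, b] += e / best_length
        tau[b, a] += e / best_length  # symmetric problem
    return tau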


- Rank-based ant system
The rank-based AS was proposed by Bullnheimer, Hartl and Strauss. The idea is that only the ants with the best solutions of each iteration deposit pheromone. Before the pheromone update, the ants are sorted by the quality of their solutions, and each of the 𝑤−1 best-ranked ants deposits an amount of pheromone weighted by its rank 𝑟 (the weight is 𝑤 − 𝑟), while the best solution found so far deposits with the largest weight 𝑤, even if it does not belong to the current iteration.
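Under these assumptions, a sketch of the rank-based deposit; the weight w, the tours and the lengths are illustrative, and ants are ranked by tour length.

def rank_based_deposit(tau, tours, lengths, best_tour, best_length, w=6):
    # Rank the iteration's ants by tour length; only the w - 1 best deposit
    # pheromone, each weighted by (w - r) where r is its rank (r = 1 is best).
    ranked = sorted(zip(lengths, tours))[: w - 1]
    for r, (length, tour) in enumerate(ranked, start=1):
        for a, b in zip(tour, tour[1:] + tour[:1]):
            tau[a, b] += (w - r) / length
            tau[b, a] += (w - r) / length
    # The best-so-far tour deposits with the largest weight w.
    for a, b in zip(best_tour, best_tour[1:] + best_tour[:1]):
        tau[a, b] += w / best_length
        tau[b, a] += w / best_length
    return tau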


- Max-Min Ant System 


The approach was introduced by Stützle and Hoos in 2000 and limits pheromone values within a range. Several aspects regarding the original algorithm are modified:

  • At the beginning, pheromone trails are set to the maximum value to encourage exploration of the environment.
  • Pheromone values are bounded to an interval [𝜏min, 𝜏max] to avoid stagnation (see the sketch after this list).
  • Only one ant (the iteration-best or the best-so-far ant) is allowed to add pheromone, to favor the exploitation of the best solutions found during the execution of the algorithm.
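A minimal sketch of the bounding step, assuming the limits 𝜏min and 𝜏max have already been chosen; the values below are illustrative.

import numpy as np

def clamp_pheromone(tau, tau_min, tau_max):
    # MMAS keeps every pheromone value inside [tau_min, tau_max].
    return np.clip(tau, tau_min, tau_max)

# Typical use, right after the pheromone update of an iteration:
tau = clamp_pheromone(np.ones((4, 4)) * 2.0, tau_min=0.01, tau_max=1.0)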


- Ant Colony System (ACS)


This algorithm was introduced by Dorigo and Gambardella to improve the performance of the Ant System algorithm. It uses a centralized (global) pheromone update that concentrates the search close to the best solution found so far, in order to improve the convergence time.



ACS is based on four modifications of the AS:

1. Different transition rule

The transition rule is modified to explicitly allow for exploration. A tunable parameter controls how much exploration takes place, which lets the algorithm concentrate on the best solutions found so far instead of exploring constantly.
This is done following the rule:

𝑗 = arg max𝑙∈𝐽𝑖𝑘 { 𝜏𝑖𝑙(𝑡) · [𝜂𝑖𝑙]^𝛽 }  if 𝑞 ≤ 𝑞0 (exploitation),  and 𝑗 = 𝐽 otherwise (biased exploration)

where 𝑞 is a random variable uniformly distributed over [0,1], 𝑞0 is an adjustable parameter (0 ≤ 𝑞0 ≤ 1), and 𝐽 ∈ 𝐽𝑖𝑘 is a city randomly chosen according to the probability:

𝑝𝑖𝐽𝑘(𝑡) = 𝜏𝑖𝐽(𝑡) · [𝜂𝑖𝐽]^𝛽 / Σ𝑙∈𝐽𝑖𝑘 𝜏𝑖𝑙(𝑡) · [𝜂𝑖𝑙]^𝛽
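A sketch of this pseudo-random proportional rule in Python; q0, β and the matrices are illustrative, the visibility is taken as 1/d, and α is set to 1.

import numpy as np

def acs_next_city(i, visited, tau, dist, q0=0.9, beta=2.0):
    # With probability q0 exploit: pick the city maximizing tau_il * eta_il^beta.
    # Otherwise explore: draw a city with the random proportional rule (alpha = 1).
    n = len(dist)
    weights = np.zeros(n)
    for j in range(n):
        if j not in visited:
            weights[j] = tau[i, j] * (1.0 / dist[i, j]) ** beta
    if np.random.rand() <= q0:
        return int(np.argmax(weights))                              # exploitation
    return int(np.random.choice(n, p=weights / weights.sum()))      # biased exploration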

2. Different pheromone trail update rule

Only the agent (ant) that generated the best tour is allowed to globally update the pheromone concentration. The update is applied only to the edges of the best tour found so far, so the agents are encouraged to search in the vicinity of that tour:

𝜏𝑖𝑗(𝑡) ← (1 − 𝜌) · 𝜏𝑖𝑗(𝑡) + 𝜌 · Δ𝜏𝑖𝑗(𝑡),   (𝑖,𝑗) ∈ 𝑇+

where (𝑖,𝑗) represents the edges of the best tour found so far (𝑇+), and Δ𝜏𝑖𝑗(𝑡) = 1/𝐿+, where 𝐿+ is the length of 𝑇+.
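A minimal sketch of this global update; 𝜌 and the best tour are illustrative.

def acs_global_update(tau, best_tour, best_length, rho=0.1):
    # Evaporate and deposit only on the edges of the best-so-far tour T+.
    for a, b in zip(best_tour, best_tour[1:] + best_tour[:1]):
        tau[a, b] = (1.0 - rho) * tau[a, b] + rho * (1.0 / best_length)
        tau[b, a] = tau[a, b]  # symmetric problem
    return tau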


3. Use of local updates of pheromone trail to favor exploration

In order to avoid stagnation, local updates are performed. The local update rule diminishes the pheromone level on the edges as they are visited, making the visited edges less attractive and allowing the agents to explore more.
The local update: when ant 𝑘 is in city 𝑖 and moves to city 𝑗, the local pheromone concentration is updated following the formula:

𝜏𝑖𝑗(𝑡) ← (1 − 𝜌) · 𝜏𝑖𝑗(𝑡) + 𝜌 · 𝜏0

where 𝜏0 is the initial pheromone level.
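A minimal sketch of the local update; 𝜌 and 𝜏0 are illustrative values.

def acs_local_update(tau, i, j, rho=0.1, tau0=0.01):
    # Applied every time an ant crosses edge (i, j): the pheromone is pulled
    # back towards the initial level tau0, making the edge less attractive.
    tau[i, j] = (1.0 - rho) * tau[i, j] + rho * tau0
    tau[j, i] = tau[i, j]  # symmetric problem
    return tau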


4. Use of candidate list to restrict the choice of the next city to visit

Cities in the candidate list are ordered by increasing distance, and the list is scanned sequentially. The agents first explore the cities on the list, and consider the other cities only if the ones in the list have already been visited.
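A sketch of how such a candidate list could be built and used; the list size cl is illustrative.

import numpy as np

def build_candidate_lists(dist, cl=5):
    # For each city, keep the cl nearest cities ordered by increasing distance
    # (index 0 of the sorted order is the city itself, so it is skipped).
    n = len(dist)
    return [list(np.argsort(dist[i])[1 : cl + 1]) for i in range(n)]

def candidate_choices(i, visited, candidate_lists, n):
    # Prefer unvisited cities in the candidate list; fall back to all remaining
    # cities only when every city in the list has already been visited.
    in_list = [j for j in candidate_lists[i] if j not in visited]
    if in_list:
        return in_list
    return [j for j in range(n) if j not in visited]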

Bibliography:
Eric Bonabeau, Marco Dorigo & Guy Theraulaz, Swarm Intelligence: From Natural to Artificial Systems. Oxford University Press, New York, 1999.

The Travelling Salesman Problem (TSP) is one of the most famous problems in computer science for studying optimization, the objective is to find a complete route that connects all the nodes of a network, visiting them only once and returning to the starting point while minimizing the total distance of the route. The problem of the traveling agent has an important variation, and this depends on whether the distances between one node and another are symmetric or not, that is, that the distance between A and B is equal to the distance between B and A, since in practice is very unlikely to be so. The number of possible routes in a network is determined by the equation: (𝒏−𝟏)! This means that in a network of 5 nodes the number of probable routes is equal to (5-1)! = 24, and as the number of nodes increases, the number of possible routes grows factorially. In the case that the problem is symmetrical the number of possible routes is reduced to half: ( (𝒏−𝟏)! ) / 𝟐 The complexity o