Shortest Path Finding Algorithm Using Ant Colony Optimization

DOI : 10.17577/IJERTV2IS60197


Er. Sarbjeet Kaur

Assistant Professor, Dept. of Electronics and Communication Engineering, Rayat College, Hoshiarpur

Abstract: Finding the shortest path in a road network is a well-known problem. Various proven static algorithms, such as Dijkstra's, have been extensively evaluated and implemented. When confronted with dynamic costs, such as link travel time predictions, alternative route planning algorithms have to be applied. This paper applies Ant Colony Optimization to find the shortest path. A flowchart and an algorithm are developed that describe how ants search for food from source to destination by depositing pheromones along the shortest path. In addition, we survey in more detail the most important present-day metaheuristics, since ant colony optimization is considered one of these new metaheuristics, and we outline the components and concepts used in the different metaheuristics in order to determine the percentage of ants able to find the shortest path.

Keywords: Ant colony optimization, Metaheuristic, Pheromones, Stigmergy

1. Introduction:

In this paper, I present a new dynamic algorithm based on the Ant Colony Optimization (ACO) algorithm. The basic idea of the ACO meta-heuristic is taken from the food-searching behavior of real ants. While walking, ants deposit pheromone, which marks the route taken as they move from a food source to their nest, and foragers follow such pheromone trails. The concentration of pheromone on a certain path is an indication of its usage. These pheromone trails are used as a simple, indirect form of communication. The process by which global information emerges from the local actions of small, independent agents that do not communicate with each other directly is called stigmergy [2]. This behavior of ants can be used to find the shortest path in networks. In particular, the dynamic component of this method allows a high degree of adaptation to changes in mobile ad hoc network topology, since in these networks the existence of links is not guaranteed and link changes occur very often. The simple ant colony optimization meta-heuristic illustrates why this kind of algorithm can perform well in mobile ad hoc networks for several reasons. The main reason is that the ACO meta-heuristic is based on agent systems and works with individual ants, which allows a high degree of adaptation to the current topology of the network [3].

2. ANT COLONY OPTIMIZATION

Ants are creatures of limited intelligence, yet in nature they manage amazing feats such as building nests and finding food [8]. They do this through an organized, collaborative behavior that exploits the intelligence of the swarm of individuals in the ant colony [9]. The French entomologist Pierre-Paul Grassé investigated the social behavior of insects and discovered that ants are capable of reacting to what he referred to as significant stimuli, signals that activate a genetically encoded reaction. He observed that the effects of these reactions can act as new significant stimuli both for the insect that produced them and for the other insects in the colony. Grassé coined the term stigmergy to describe this particular type of indirect communication, in which workers are stimulated by the performance they have achieved. Stigmergy is defined as a method of indirect communication in a self-organizing emergent system whose individual parts communicate with one another by modifying their local environment [3].

Ant colony optimization (ACO) is an optimization technique inspired by the exploratory behavior of ants while finding food. Ants start from their nest and find different paths to the food. In this context, the local information available to an ant is the path that it took to the destination; a single ant is not aware of the complete topology of the environment. Ants therefore communicate with each other by depositing traces of pheromone as they walk along their paths. Subsequent ants that arrive in search of food base their decision of which path to take on the pheromone traces left in that locality by previous ants. This form of communication is indirect: one ant releases pheromone information into the environment, and another ant senses that pheromone information from the environment, just as Grassé had described. As more ants travel over a particular path, the concentration of pheromone along that path increases. Pheromones along a path also gradually evaporate, decreasing their concentration on that path. The pheromone acts as a significant stimulus, since ants are able to sense the pheromones deposited by each other, and they generally take the path of maximum pheromone concentration. This is how the ants progressively converge on a single optimum path between their nest and the food [1].
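The essence of this indirect communication is that the information lives in the environment rather than being passed between ants directly. The short Python sketch below illustrates that idea only; the class name, methods and parameter values are illustrative assumptions and are not part of this paper. Artificial ants read from and write to a shared pheromone table, and an evaporation step slowly erases unused markings.

import random

class PheromoneField:
    """Shared environment that artificial ants read from and write to.
    The ants never exchange messages directly (stigmergy)."""

    def __init__(self, paths, initial_level=1.0):
        self.level = {p: initial_level for p in paths}

    def deposit(self, path, amount=1.0):
        # An ant marks the path it has just used.
        self.level[path] += amount

    def evaporate(self, rate=0.1):
        # Pheromone decays over time, so rarely used paths are forgotten.
        for p in self.level:
            self.level[p] *= (1.0 - rate)

    def choose_path(self):
        # A newly arriving ant picks a path with probability proportional
        # to the pheromone concentration it senses.
        paths = list(self.level)
        weights = [self.level[p] for p in paths]
        return random.choices(paths, weights=weights, k=1)[0]

Repeatedly calling choose_path, deposit and evaporate reproduces the convergence described above: paths that are used more often accumulate more pheromone and are therefore chosen more often.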

2.1 THE ANT COLONY META-HEURISTIC

Recently, many researchers have focused their attention on a new class of algorithms called metaheuristics. A metaheuristic is a set of algorithmic concepts that can be used to define heuristic methods applicable to a wide set of different problems. The use of metaheuristics has significantly increased the ability to find very high-quality solutions to hard, practically relevant combinatorial optimization problems in a reasonable time. A particularly successful metaheuristic is inspired by the behavior of real ants. Starting with Ant System, a number of algorithmic approaches based on the very same ideas were developed and applied with considerable success to a variety of combinatorial optimization problems from academic as well as real-world applications. In this paper we use ant colony optimization, a metaheuristic framework which covers the algorithmic approaches mentioned above. The ACO metaheuristic has been proposed as a common framework for the existing applications and algorithmic variants of a variety of ant algorithms; algorithms that fit into this framework are called ACO algorithms in the following. Metaheuristic algorithms are algorithms which, in order to escape from local optima, drive some basic heuristic: either a constructive heuristic, starting from a null solution and adding elements to build a good complete one, or a local search heuristic, starting from a complete solution and iteratively modifying some of its elements in order to achieve a better one. The metaheuristic layer permits the low-level heuristic to obtain solutions better than those it could have achieved alone, even if iterated [4].

SPFDA: Shortest Path Finding Dynamic Algorithm

To build an SPFDA algorithm, the following points must be defined:

      • An appropriate representation of the problem, which allows the ants to incrementally construct/modify solutions through the use of a probabilistic transition rule based on the amount of pheromone in the trail and on a local heuristic;

      • A heuristic function that measures the quality of items that can be added to the current partial solution;

      • A method to enforce the construction of valid solutions, i.e. solutions that are legal in the real-world situation corresponding to the problem definition;

• A rule for pheromone updating, which specifies how the pheromone trail is modified;

• A probabilistic transition rule based on the value of the heuristic function and on the contents of the pheromone trail.

        An ACO algorithm iteratively performs a loop containing two basic procedures, namely:

      • A procedure specifying how the ants construct/modify solutions of the problem to be solved

      • A procedure to update the pheromone trails.

        The construction/modification of a solution is performed in a probabilistic way. The probability of adding a new item to the current partial solution is given by a function that depends on a problem-dependent heuristic and on the amount of pheromone deposited by ants on the trail in the past. The updates in the pheromone trail are implemented as a function that depends on the rate of pheromone evaporation and on the quality of the produced solution.
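As a concrete illustration, the formulas below are the standard Ant System rules (see, e.g., [12]) rather than anything specific to this paper. The probability that an ant located at node $i$ moves to a feasible neighbour $j$ is

$$p_{ij} = \frac{[\tau_{ij}]^{\alpha}\,[\eta_{ij}]^{\beta}}{\sum_{l \in N_i} [\tau_{il}]^{\alpha}\,[\eta_{il}]^{\beta}},$$

where $\tau_{ij}$ is the pheromone level on edge $(i,j)$, $\eta_{ij}$ is the problem-dependent heuristic (for shortest-path problems, typically the inverse of the edge cost), $N_i$ is the set of feasible next nodes, and $\alpha$, $\beta$ weight pheromone against heuristic information. After all ants have built a solution, the pheromone is first evaporated and then reinforced:

$$\tau_{ij} \leftarrow (1-\rho)\,\tau_{ij} + \sum_{k} \Delta\tau_{ij}^{k}, \qquad \Delta\tau_{ij}^{k} = \begin{cases} Q/L_k, & \text{if ant } k \text{ used edge } (i,j),\\ 0, & \text{otherwise,} \end{cases}$$

where $\rho \in (0,1]$ is the evaporation rate, $L_k$ is the cost of ant $k$'s solution, and $Q$ is a constant.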

The generic ant algorithm proceeds in the following steps:

      • The first step consists mainly of the pheromone trail initialization.

      • The second step is the iteration step, where each ant constructs a complete solution to the problem according to a probabilistic state transition rule. The state transition rule depends mainly on the state of the pheromone.

        Once all ants generate a solution, a global pheromone updating rule is applied in two phases:

      • An evaporation phase where a fraction of the pheromone evaporates,

• A reinforcement phase where each ant deposits an amount of pheromone that is proportional to the fitness of its solution.

This process is iterated until a stopping criterion is reached. Figure 1 shows a generalized flowchart of the ant algorithm, and the pseudocode below summarizes the main loop.

Require: parameters
while iterations not complete do
    Construct Solutions
    Update Pheromones
end while
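To make the loop above more tangible, here is a minimal Python sketch of the same two procedures applied to shortest-path search on a small weighted graph. The function name, graph representation and parameter values are illustrative assumptions, not the exact algorithm of this paper.

import random

def aco_shortest_path(graph, source, destination,
                      n_ants=20, n_iterations=100,
                      alpha=1.0, beta=2.0, rho=0.1, Q=1.0):
    """Generic ACO loop: each iteration constructs solutions with all ants,
    then updates the pheromone trails. `graph` maps node -> {neighbour: cost}."""
    # Step 1: pheromone trail initialization.
    tau = {(u, v): 1.0 for u in graph for v in graph[u]}
    best_path, best_cost = None, float("inf")

    for _ in range(n_iterations):
        solutions = []
        # Construct Solutions: each ant builds a path probabilistically,
        # biased by pheromone (tau) and by the heuristic 1 / edge_cost.
        for _ in range(n_ants):
            path, node, visited = [source], source, {source}
            while node != destination:
                choices = [v for v in graph[node] if v not in visited]
                if not choices:                      # dead end: abandon this ant
                    path = None
                    break
                weights = [(tau[(node, v)] ** alpha) *
                           ((1.0 / graph[node][v]) ** beta) for v in choices]
                node = random.choices(choices, weights=weights, k=1)[0]
                path.append(node)
                visited.add(node)
            if path is not None:
                cost = sum(graph[u][v] for u, v in zip(path, path[1:]))
                solutions.append((path, cost))
                if cost < best_cost:
                    best_path, best_cost = path, cost

        # Update Pheromones: evaporation phase, then reinforcement phase.
        for edge in tau:
            tau[edge] *= (1.0 - rho)
        for path, cost in solutions:
            for u, v in zip(path, path[1:]):
                tau[(u, v)] += Q / cost              # shorter paths get more pheromone

    return best_path, best_cost

# Illustrative example: A -> B -> C -> D should emerge as the cheapest route.
graph = {"A": {"B": 2, "C": 5}, "B": {"C": 1, "D": 4}, "C": {"D": 1}, "D": {}}
print(aco_shortest_path(graph, "A", "D"))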

Figure 1: Flowchart of ant colony optimization. (The flowchart steps are: set input activities and resources; initialize parameters; construct solution; update pheromone; check the terminating condition; if it is met, print the best solution and terminate, otherwise assign a new solution and repeat.)

2.2 THE DOUBLE BRIDGE EXPERIMENT

The ants begin by walking randomly. They cannot survey the ground and have only a very limited view of what is around them. Therefore, if the ground has not been explored yet, they simply wander and take a random decision at each crossroads. After a while, all the places around the nest will have been explored. The ants learn this from the marking left by previous ants: they leave pheromones behind them, informing the other ants that the way has already been explored. The concentration of pheromones depends on the number of ants that took the way; the more ants take a path, the more pheromone it carries [5].

Figure 2: Double bridge experiment [5]

At the start, all ants are in the nest and they are left free to move. The experimental apparatus is built in such a way that the only way for the ants to reach the food is by using one of the bridge branches. In the initial phase, the ants move randomly and choose between the shorter and the longer branch with equal probability. While walking, ants deposit a pheromone trail on the ground. The ants choosing the shortest branch are the first to find the food and return to the nest, so the pheromone trail on the shortest branch grows faster. Forthcoming ants then choose with higher probability the directions marked by a stronger pheromone concentration. This autocatalytic (positive-feedback) process is at the heart of the self-organizing behavior that very quickly leads all the ants to choose the shortest branch [11]. A similar mechanism can be used by suitably defined artificial ants to find minimum-cost paths on graphs.
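One way to see this positive feedback numerically is the stochastic choice model proposed for the bridge experiments by Deneubourg and co-workers, in which the probability of choosing a branch grows with the pheromone already deposited on it. The following Python sketch simulates that model under illustrative assumptions: the parameter values k, h and the length ratio r, as well as the function name, are ours and are not taken from this paper.

import random

def double_bridge_simulation(n_steps=500, ants_per_step=1, r=2, k=20.0, h=2.0):
    """Illustrative simulation of the double bridge: the long branch is r times
    longer, so ants that take it deposit their pheromone r steps later.
    Branch choice follows a Deneubourg-style rule:
        P(short) = (k + phi_s)^h / ((k + phi_s)^h + (k + phi_l)^h)."""
    phi_short = phi_long = 0.0      # pheromone accumulated on each branch
    in_transit = []                 # (arrival_step, branch) for ants still walking
    took_short = total = 0

    for t in range(n_steps):
        # Ants that finish crossing now deposit pheromone on their branch.
        for arrival, branch in [a for a in in_transit if a[0] == t]:
            if branch == "short":
                phi_short += 1.0
            else:
                phi_long += 1.0
        in_transit = [a for a in in_transit if a[0] > t]

        # New ants leave the nest and choose a branch probabilistically.
        for _ in range(ants_per_step):
            p_short = ((k + phi_short) ** h /
                       ((k + phi_short) ** h + (k + phi_long) ** h))
            if random.random() < p_short:
                in_transit.append((t + 1, "short"))
                took_short += 1
            else:
                in_transit.append((t + r, "long"))
            total += 1

    return took_short / total       # fraction of ants that chose the short branch

print(double_bridge_simulation())   # typically well above 0.5 once the feedback sets in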

Consider, for example, another experimental setting, shown in Figure 3. There is a path along which ants are walking, for example from nest A to food source E and vice versa (Figure 3a). Suddenly, an obstacle appears and the path is cut off. At position B, the ants walking from A to E (or at position D, those walking in the opposite direction) have to decide whether to turn right or left (Figure 3b).

Figure 3. Ants facing an obstacle: (a) ants follow a path between points A and E; (b) an obstacle is interposed on the path, and the ants can choose to go around it following two different paths with equal probability; (c) on the shorter path more pheromone is laid down, increasing the probability of that route being preferred by following ants.

The choice is influenced by the intensity of the pheromone trails left by preceding ants: a higher level of pheromone on the right or left path gives ants a stronger stimulus and thus a higher probability of turning right or left. The first ant reaching point B (or D) has the same probability of turning right or left, since there is no previous pheromone on the two alternative paths. Because path BCD is shorter than BHD, the first ant following it will reach D before the first ant following BHD (Figure 3c). The result is that an ant returning from E to D will find a stronger trail on path DCB, caused by the half of all the ants that by chance decided to approach the obstacle via DCBA and by the ants that have already arrived via BCD; it will therefore prefer, with higher probability, path DCB over path DHB. As a consequence, the number of ants following path BCD per unit of time will be higher than the number following BHD. This causes the quantity of pheromone on the shorter path to grow faster than on the longer one, and therefore the probability with which any single ant chooses the path to follow is quickly biased toward the shorter one [12].

The final result is that all ants will quickly choose the shortest path, as shown in Figure 4.

Figure 4: Distribution of the percentage of ants that selected the shorter path.

CONCLUSION:

This paper presents an Ant Colony Optimization based algorithmic approach to find the shortest path in a network using a dynamic system. The algorithm presented in this paper no longer operates in well-defined iterations. The consequence of this modification is that applying global updates to pheromone levels becomes infeasible; instead, the algorithm uses only local updates made by exploration or primer ants. This restriction makes the algorithm more easily deployable in distributed environments. With the help of the double bridge experiment, the pheromone concentration is analyzed at different times. Finally, we find that the quantity of pheromone on the shorter path grows faster than on the longer one, and therefore the probability with which any single ant chooses the path to follow is quickly biased toward the shorter one.

REFERENCES:

  1. D. Corne, M. Dorigo, and F. Glover (Eds.), New ideas in optimization, Maidenhead, UK: McGraw-Hill, 1999.

  2. C. E. Perkins. Ad Hoc Networking. Addison-Wesley, ISBN 0-201-30976-9, 2001.

  3. J. Gomez, A. T. Campbell, M. Naghshineh, C. Bisdikian. Conserving Transmission Power in Wireless Ad-Hoc Networks. IEEE 9th International Conference on Network Protocols, 2001.

  4. R. Schoonderwoerd, O. Holland, J. Bruten, and L. Rothkrantz, Ants for Load Balancing in Telecommunications Networks, Adaptive Behavior, Vol. 5, No. 2, 1997, pp. 169-207.

  5. M. Heissenbüttel, and T. Braun, Ants-Based Routing in Large Scale Mobile Ad-Hoc Networks, Kommunikation in Verteilten Systemen (KiVS), 2003.

  6. J.S. Baras, H. Mehta. A Probabilistic Emergent Routing Algorithm for Mobile Ad Hoc Networks. WiOpt03 Proc. of Modeling and Optimization in Mobile, Ad Hoc and Wireless Networks, 2003.

7. Y. Lü, G. Zhao, F. Su, and X. Li. Adaptive Swarm-Based Routing in Communication Networks. Journal of Zhejiang University Science (JZUS), Vol. 5(7), pages 867-872, 2004.

8. D. Sivakumar and R. S. Bhuvaneswaran. Proposal on Multi-agent Ants-based Routing Algorithms for Mobile Ad-hoc Networks. IJCSNS International Journal of Computer Science and Network Security, Vol. 7, No. 6, pages 261-268, July 2007.

9. I. Chlamtac, M. Conti, and J. J.-N. Liu. Mobile Ad Hoc Networking: Imperatives and Challenges. Ad Hoc Networks (Elsevier), Vol. 1, pages 13-64, 2003.

  10. M. Günes. Routing und Adressierung in Mobilen Multi-Hop Ad-Hoc-Netzen, Ph.D. thesis, Rheinisch-Westfälische Technische Hochschule Aachen, 2004.

  11. M. Dorigo and T. Stützle. The Ant Colony Optimization Metaheuristic: Algorithms, Applications and Advances. In F. Glover and G. Kochenberger, (Eds.), Handbook of Metaheuristics, Kluwer Academic Publishers, pages 251-285, 2003.

  12. M. Dorigo. Optimization, Learning and Natural Algorithms (in Italian). PhD thesis, Dipartimento di Elettronica, Politecnico di Milano, Italy, 1992.
