Stochastic Algorithms : Part 2 – Clever Algorithms in Python

This is a multi-part series on implementing Clever Algorithms by Jason Brownlee in Python. See the overview and Part 1.

In Part 2 we look at the following stochastic algorithms: Iterated Local Search, Guided Local Search, Variable Neighborhood Search, Greedy Randomized Adaptive Search (GRASP), Tabu Search and Reactive Tabu Search. This completes the Stochastic Algorithms chapter.

We apply the above algorithms to the Berlin52 instance of the Travelling Salesman Problem (TSP). The goal is to find a permutation of the cities (called a tour) that minimizes the total distance travelled. The optimal tour length for Berlin52 is 7542 units.

Download

Download the source code. Unzip and import the attached folder into your Eclipse workspace. It includes a test suite for exercising the algorithms.

Iterated Local Search

We use the stochastic 2-opt move for the local search and the double bridge move for creating a ‘perturbation’. The idea is to escape local minima by jumping to neighborhoods beyond the reach of the local search method. Advanced usage of ILS takes the search history into account for the perturbation and acceptance criteria. I tried to keep the code as close to the pseudo code as possible; this should make it easy to understand the algorithm and to incorporate search history. Search history can be used to ‘intensify’ or ‘diversify’ (or a blend of both) the candidates in the search space. Here I intensify the search using a search history of length one.
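The two moves can be sketched as follows (a minimal sketch; the function names and cut-point choices are mine, not lifted from the book's listings):

```python
import random

def stochastic_two_opt(tour):
    """Reverse a randomly chosen segment of the tour (stochastic 2-opt move)."""
    result = tour[:]
    i, j = sorted(random.sample(range(len(result)), 2))  # two distinct cut points
    result[i:j] = reversed(result[i:j])
    return result

def double_bridge(tour):
    """Cut the tour into four parts A, B, C, D and reconnect them as A, C, B, D."""
    n = len(tour)
    p1 = 1 + random.randrange(n // 4)
    p2 = p1 + 1 + random.randrange(n // 4)
    p3 = p2 + 1 + random.randrange(n // 4)
    return tour[:p1] + tour[p2:p3] + tour[p1:p2] + tour[p3:]
```

Both moves return a new tour containing exactly the same cities, so the perturbation can never produce an invalid permutation.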

Variable Neighborhood Search

Variable Neighborhood Search explores larger and larger neighborhoods of a given local optimum until an improvement is found; it then repeats the search over expanding neighborhoods from the improved solution. The strategy rests on the following three principles:

A local minimum for one neighborhood may not be a local minimum for a different neighborhood.

A global minimum is a local minimum for all neighborhoods.

Local minima are assumed to be clustered for many problem domains.

One can see these three principles in action: we repeatedly apply the stochastic 2-opt move within a given neighborhood, apply it at a higher frequency in the embedded local search heuristic, and expand the search to other neighborhoods when we reach a local optimum.
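A stripped-down sketch of that loop, where the k-th neighborhood is simulated by applying k stochastic 2-opt moves (the parameter values are mine, chosen only for illustration):

```python
import random

def stochastic_two_opt(tour):
    """Reverse a random segment of the tour."""
    i, j = sorted(random.sample(range(len(tour)), 2))
    return tour[:i] + tour[i:j][::-1] + tour[j:]

def local_search(tour, cost, max_no_improve):
    """Apply 2-opt moves until max_no_improve moves in a row fail to improve."""
    count = 0
    while count < max_no_improve:
        candidate = stochastic_two_opt(tour)
        if cost(candidate) < cost(tour):
            tour, count = candidate, 0
        else:
            count += 1
    return tour

def vns(tour, cost, neighborhoods=range(1, 6), max_no_improve=20, iterations=30):
    best = tour
    for _ in range(iterations):
        for k in neighborhoods:
            candidate = best
            for _ in range(k):                  # k moves = the k-th neighborhood
                candidate = stochastic_two_opt(candidate)
            candidate = local_search(candidate, cost, max_no_improve)
            if cost(candidate) < cost(best):
                best = candidate
                break                           # improvement found: restart at k = 1
    return best
```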

Greedy Randomized Adaptive Search

GRASP is related to ILS in that it identifies candidate solutions greedily. This is achieved by constructing a Restricted Candidate List (RCL) from which solution features are drawn. We start by constructing a greedy candidate solution and then refine the result with an embedded local search. The construction uses a factor ‘alpha’ in [0,1] to control how greedy the solution should be: values near zero are more greedy, while larger values make the construction more random. A ‘feature’ (here, the cost of adding a city to the tour) is admitted to the RCL if its cost is within a certain distance of the minimum cost. The code listing that is provided contains extensive comments, which explain this in more detail.
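The greedy randomized construction can be sketched like this, assuming Euclidean city coordinates (the helper names are mine):

```python
import math
import random

def euclidean(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_randomized_tour(cities, alpha):
    """Build a tour city by city, choosing each next city at random from the RCL."""
    remaining = list(range(len(cities)))
    tour = [remaining.pop(random.randrange(len(remaining)))]  # random start city
    while remaining:
        last = cities[tour[-1]]
        costs = [euclidean(last, cities[c]) for c in remaining]
        c_min, c_max = min(costs), max(costs)
        # RCL: cities whose insertion cost is within alpha of the cheapest option
        threshold = c_min + alpha * (c_max - c_min)
        rcl = [c for c, d in zip(remaining, costs) if d <= threshold]
        pick = random.choice(rcl)
        remaining.remove(pick)
        tour.append(pick)
    return tour
```

With alpha = 0 only the minimum-cost city qualifies, so the construction reduces to the nearest-neighbour heuristic; with alpha = 1 every remaining city qualifies and the construction is uniformly random.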

Guided Local Search

GLS uses a scaling factor to calculate an ‘augmented cost function’, which is used to restart the search. The augmented cost function is specific to the embedded local search; the main search procedure is oblivious to it. The aim is to diversify the search so that we can escape local optima. See the code listing for more details.
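The augmented cost idea can be sketched as follows, assuming `penalties` is a `defaultdict(int)` keyed by sorted city-index pairs; the utility formula cost / (1 + penalty) follows the standard GLS formulation, and the function names are mine:

```python
import math
from collections import defaultdict

def euclidean(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def augmented_cost(tour, penalties, cities, lam):
    """Return (real cost, augmented cost); the latter adds lam * penalty per edge."""
    distance, augmented = 0.0, 0.0
    for i, c1 in enumerate(tour):
        c2 = tour[(i + 1) % len(tour)]
        a, b = sorted((c1, c2))                 # undirected edge key
        d = euclidean(cities[a], cities[b])
        distance += d
        augmented += d + lam * penalties[(a, b)]
    return distance, augmented

def update_penalties(tour, penalties, cities):
    """Penalize the tour edge(s) with maximum utility = cost / (1 + penalty)."""
    utilities = {}
    for i, c1 in enumerate(tour):
        c2 = tour[(i + 1) % len(tour)]
        a, b = sorted((c1, c2))
        utilities[(a, b)] = euclidean(cities[a], cities[b]) / (1.0 + penalties[(a, b)])
    max_u = max(utilities.values())
    for edge, u in utilities.items():
        if u == max_u:
            penalties[edge] += 1
```

The embedded local search minimizes the augmented cost, while the best-so-far tour is still judged on the real distance.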

Tabu Search

Tabu Search maintains a tabu list of the edges that have been rewired by the stochastic 2-opt move. The list prevents candidate generation from revisiting already-visited paths (called cycles). By preventing cycles we search a broader candidate space, thereby converging to the global optimum faster.
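A minimal sketch of the mechanism (the sample size, tabu tenure and function names are my own choices): each 2-opt move reports the edges it deletes, and any sampled neighbour that would delete a tabu edge is discarded.

```python
import random
from collections import deque

def two_opt_with_edges(tour):
    """Stochastic 2-opt move that also reports the two edges it deletes."""
    n = len(tour)
    i, j = sorted(random.sample(range(n), 2))
    new = tour[:i] + tour[i:j][::-1] + tour[j:]
    deleted = [(tour[i - 1], tour[i]), (tour[j - 1], tour[j % n])]
    return new, deleted

def is_tabu(edges, tabu):
    return any(frozenset(e) in tabu for e in edges)

def tabu_search(tour, cost, iterations=100, tabu_size=15, samples=20):
    tabu = deque(maxlen=tabu_size)          # fixed-length list of forbidden edges
    current, best = tour, tour
    for _ in range(iterations):
        # sample neighbours and keep the best one that uses no tabu edge
        moves = [two_opt_with_edges(current) for _ in range(samples)]
        moves = [(cand, edges) for cand, edges in moves if not is_tabu(edges, tabu)]
        if not moves:
            continue
        current, edges = min(moves, key=lambda m: cost(m[0]))
        for e in edges:
            tabu.append(frozenset(e))       # forbid re-deleting these edges for a while
        if cost(current) < cost(best):
            best = current
    return best
```

Note that the best admissible neighbour is accepted even when it is worse than the current tour; that, plus the tabu list, is what lets the search climb out of local optima without immediately cycling back.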

Reactive Tabu Search

Reactive Tabu Search is an extension of Tabu Search: we monitor the search and react to the occurrence of cycles by increasing or decreasing the candidate space. This algorithm is the most involved in its use of heuristics for the prohibition period, repetition interval and so on. The intent is to diversify the search once a threshold of cycle repetitions is reached; in other words, to ‘react’ to the search direction and change it.
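One way to sketch the ‘reaction’ itself (the growth/decay constants and the dictionary of last-visit times are illustrative choices of mine, not the book's values):

```python
def react(period, last_visit, tour, iteration,
          increase=1.1, decrease=0.9, smooth=50):
    """Adjust the tabu prohibition period: lengthen it when a tour repeats
    (a cycle), shorten it after a long repetition-free stretch."""
    key = tuple(tour)
    if key in last_visit:
        period = period * increase + 1          # cycle detected: diversify
    elif last_visit and all(iteration - t > smooth for t in last_visit.values()):
        period = max(period * decrease, 1)      # quiet phase: intensify
    last_visit[key] = iteration
    return period
```

The surrounding tabu search would then use the current `period` as its tabu tenure, so the candidate space shrinks and grows in response to the repetitions the monitor observes.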

 

Note: The descriptions are purposely kept short to encourage delving into the code itself. I recommend reading the pseudo code provided in Clever Algorithms and following along with my code; I went out of my way to keep my code similar to the pseudo code.

Visualization of Stochastic 2-opt and Double Bridge Moves

Stochastic 2-opt move

Stochastic 2-opt Move : Original tour on the left and resulting tour on the right

Schematic Representation of Double Bridge Move

Double Bridge Move : The four dotted edges are removed and the remaining parts A, B, C, D are reconnected by the dashed edges.
