In part 2 we look at the following Stochastic Algorithms: Iterated Local Search, Guided Local Search, Variable Neighborhood Search, Greedy Randomized Adaptive Search, Tabu Search and Reactive Tabu Search. This completes Stochastic Algorithms.
We apply the above algorithms to the Berlin52 instance of the Travelling Salesman Problem. Each algorithm attempts to find a permutation of the cities (called a tour) that minimizes the total distance travelled. The optimal tour length for Berlin52 is 7542 units.
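All of the algorithms score a candidate tour the same way: sum the Euclidean lengths of consecutive edges, including the closing edge back to the start. A minimal sketch (the class and method names are mine, and the coordinates are a toy instance, not the Berlin52 data):

```java
public class TourCost {
    static double dist(double[] a, double[] b) {
        double dx = a[0] - b[0], dy = a[1] - b[1];
        return Math.sqrt(dx * dx + dy * dy);
    }

    // Total distance of the tour, including the closing edge back to the start.
    static double tourCost(int[] tour, double[][] cities) {
        double sum = 0.0;
        for (int i = 0; i < tour.length; i++) {
            int next = tour[(i + 1) % tour.length];
            sum += dist(cities[tour[i]], cities[next]);
        }
        return sum;
    }

    public static void main(String[] args) {
        double[][] cities = { {0, 0}, {0, 3}, {4, 3}, {4, 0} };
        int[] tour = {0, 1, 2, 3};
        System.out.println(tourCost(tour, cities)); // → 14.0, the 3x4 rectangle's perimeter
    }
}
```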
Download the source code, unzip it, and import the attached folder into your Eclipse workspace. It includes a test suite for exercising the algorithms.
Iterated Local Search
We use the stochastic 2-opt move for the local search and the double bridge move to create a ‘perturbation’. The idea is to escape local minima by jumping to neighborhoods beyond the reach of the local search method. Advanced usage of ILS takes the search history into account for both the perturbation and the acceptance criterion. I tried to keep the code as close to the pseudo code as possible, which should make it easier to understand the algorithm and to incorporate search history. Search history can be used to ‘intensify’ or ‘diversify’ (or a blend of both) the candidates in the search space. Here I intensify the search using a search history of size ONE (the current best solution).
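The loop structure can be sketched as follows. This is my own condensed version on a toy instance, not the article's source: a stochastic 2-opt local search, a double bridge perturbation, and a greedy acceptance criterion (keep the candidate only if it improves the best).

```java
import java.util.Arrays;
import java.util.Random;

public class IteratedLocalSearch {
    static final Random RNG = new Random(42);

    static double cost(int[] t, double[][] c) {
        double s = 0;
        for (int i = 0; i < t.length; i++) {
            double dx = c[t[i]][0] - c[t[(i + 1) % t.length]][0];
            double dy = c[t[i]][1] - c[t[(i + 1) % t.length]][1];
            s += Math.sqrt(dx * dx + dy * dy);
        }
        return s;
    }

    // Stochastic 2-opt: reverse a randomly chosen segment of the tour.
    static int[] twoOpt(int[] tour) {
        int[] t = tour.clone();
        int i = RNG.nextInt(t.length), j = RNG.nextInt(t.length);
        if (i > j) { int tmp = i; i = j; j = tmp; }
        while (i < j) { int tmp = t[i]; t[i] = t[j]; t[j] = tmp; i++; j--; }
        return t;
    }

    // Local search: repeat stochastic 2-opt until maxNoImprove consecutive failures.
    static int[] localSearch(int[] tour, double[][] c, int maxNoImprove) {
        for (int fails = 0; fails < maxNoImprove; ) {
            int[] cand = twoOpt(tour);
            if (cost(cand, c) < cost(tour, c)) { tour = cand; fails = 0; }
            else fails++;
        }
        return tour;
    }

    // Double bridge: cut the tour into four parts and reconnect them in a
    // different order -- a 4-opt move that a single 2-opt cannot undo.
    static int[] doubleBridge(int[] tour) {
        int n = tour.length;
        int p1 = 1 + RNG.nextInt(n / 4);
        int p2 = p1 + 1 + RNG.nextInt(n / 4);
        int p3 = p2 + 1 + RNG.nextInt(n / 4);
        int[] out = new int[n];
        int k = 0;
        for (int i = 0; i < p1; i++) out[k++] = tour[i];
        for (int i = p3; i < n; i++) out[k++] = tour[i];
        for (int i = p2; i < p3; i++) out[k++] = tour[i];
        for (int i = p1; i < p2; i++) out[k++] = tour[i];
        return out;
    }

    static int[] search(double[][] c, int iterations) {
        int[] best = new int[c.length];
        for (int i = 0; i < best.length; i++) best[i] = i;
        for (int i = best.length - 1; i > 0; i--) { // random starting tour
            int j = RNG.nextInt(i + 1);
            int tmp = best[i]; best[i] = best[j]; best[j] = tmp;
        }
        best = localSearch(best, c, 50);
        for (int it = 0; it < iterations; it++) {
            int[] cand = localSearch(doubleBridge(best), c, 50);
            if (cost(cand, c) < cost(best, c)) best = cand; // acceptance criterion
        }
        return best;
    }

    public static void main(String[] args) {
        double[][] cities = new double[12][2];
        for (int i = 0; i < 12; i++) { // points on a circle; the hull order is optimal
            cities[i][0] = Math.cos(2 * Math.PI * i / 12);
            cities[i][1] = Math.sin(2 * Math.PI * i / 12);
        }
        int[] best = search(cities, 100);
        System.out.println(Arrays.toString(best) + " cost=" + cost(best, cities));
    }
}
```

Replacing the `if` in `search` with a history-aware acceptance rule is where the advanced ILS variants mentioned above would plug in.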
Variable Neighborhood Search
Variable Neighborhood Search explores larger and larger neighborhoods of a given local optimum until an improvement is found; once one is found, the search restarts from the smallest neighborhood and the neighborhoods are expanded again as needed. This strategy is based on the following three principles:
A local minimum for one neighborhood may not be a local minimum for a different neighborhood.
A global minimum is a local minimum for all neighborhoods.
Local minima are assumed to be clustered for many problem domains.
One can see these three principles in action by repeatedly applying the stochastic 2-opt move within a given neighborhood as the embedded local search heuristic, and expanding the search to other neighborhoods whenever a local optimum is reached.
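A sketch of that structure, under my own assumptions (not the article's source): neighborhood k means "k random 2-opt reversals away" from the incumbent; shaking jumps into neighborhood k, a 2-opt local search follows, and the search returns to k = 1 on improvement, otherwise grows k.

```java
import java.util.Random;

public class VariableNeighborhoodSearch {
    static final Random RNG = new Random(1);

    static double cost(int[] t, double[][] c) {
        double s = 0;
        for (int i = 0; i < t.length; i++) {
            double dx = c[t[i]][0] - c[t[(i + 1) % t.length]][0];
            double dy = c[t[i]][1] - c[t[(i + 1) % t.length]][1];
            s += Math.sqrt(dx * dx + dy * dy);
        }
        return s;
    }

    // One stochastic 2-opt move: reverse a random segment.
    static int[] twoOpt(int[] tour) {
        int[] t = tour.clone();
        int i = RNG.nextInt(t.length), j = RNG.nextInt(t.length);
        if (i > j) { int tmp = i; i = j; j = tmp; }
        while (i < j) { int tmp = t[i]; t[i] = t[j]; t[j] = tmp; i++; j--; }
        return t;
    }

    // Shaking: jump to a random point of neighborhood k (k chained reversals).
    static int[] shake(int[] tour, int k) {
        for (int m = 0; m < k; m++) tour = twoOpt(tour);
        return tour;
    }

    static int[] localSearch(int[] tour, double[][] c, int maxNoImprove) {
        for (int fails = 0; fails < maxNoImprove; ) {
            int[] cand = twoOpt(tour);
            if (cost(cand, c) < cost(tour, c)) { tour = cand; fails = 0; }
            else fails++;
        }
        return tour;
    }

    static int[] search(double[][] c, int maxK, int restarts) {
        int[] best = new int[c.length];
        for (int i = 0; i < best.length; i++) best[i] = i;
        for (int r = 0; r < restarts; r++) {
            for (int k = 1; k <= maxK; ) {
                int[] cand = localSearch(shake(best, k), c, 30);
                if (cost(cand, c) < cost(best, c)) { best = cand; k = 1; } // restart small
                else k++;                                                  // expand
            }
        }
        return best;
    }

    public static void main(String[] args) {
        double[][] cities = new double[10][2];
        for (int i = 0; i < 10; i++) { // points on a circle
            cities[i][0] = Math.cos(2 * Math.PI * i / 10);
            cities[i][1] = Math.sin(2 * Math.PI * i / 10);
        }
        System.out.println(cost(search(cities, 4, 5), cities));
    }
}
```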
Greedy Randomized Adaptive Search
GRASP is related to ILS in that it alternates between producing a candidate solution and refining it, but its candidates are constructed ‘greedily’. Construction draws from a Restricted Candidate List (RCL), which limits the features a step may choose from. We start by building a greedy candidate solution and then use an embedded local search to refine the result. A factor ‘alpha’ in [0,1] controls how greedy the construction is: values near zero are more greedy, while increasing alpha makes the construction more randomized. At each step a ‘feature’ (here, the cost of adding a city to the tour) is eligible for the RCL if it lies within a certain distance of the minimum cost. The code listing that is provided contains extensive comments, which explain this in more detail.
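The construction step can be sketched as below. This is my own illustration, not the article's listing: each step computes the min and max insertion cost over the remaining cities, admits every city within `min + alpha * (max - min)` into the RCL, and picks one at random.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Random;

public class GraspConstruction {
    static final Random RNG = new Random(7);

    static double dist(double[] a, double[] b) {
        return Math.hypot(a[0] - b[0], a[1] - b[1]);
    }

    // alpha = 0 -> purely greedy (nearest neighbour); alpha = 1 -> purely random.
    static int[] construct(double[][] cities, double alpha) {
        int n = cities.length;
        List<Integer> remaining = new ArrayList<>();
        for (int i = 1; i < n; i++) remaining.add(i);
        int[] tour = new int[n];
        tour[0] = 0;
        for (int step = 1; step < n; step++) {
            double[] last = cities[tour[step - 1]];
            double min = Double.MAX_VALUE, max = -Double.MAX_VALUE;
            for (int c : remaining) {
                double d = dist(last, cities[c]);
                min = Math.min(min, d);
                max = Math.max(max, d);
            }
            // RCL: every remaining city whose cost is within alpha of the best choice.
            double threshold = min + alpha * (max - min);
            List<Integer> rcl = new ArrayList<>();
            for (int c : remaining) {
                if (dist(last, cities[c]) <= threshold) rcl.add(c);
            }
            int chosen = rcl.get(RNG.nextInt(rcl.size()));
            tour[step] = chosen;
            remaining.remove(Integer.valueOf(chosen));
        }
        return tour; // an embedded local search (e.g. 2-opt) would refine this tour
    }

    public static void main(String[] args) {
        double[][] cities = { {0, 0}, {1, 0}, {3, 0}, {6, 0} };
        // → [0, 1, 2, 3]: pure nearest-neighbour order when alpha = 0
        System.out.println(Arrays.toString(construct(cities, 0.0)));
    }
}
```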
Guided Local Search
GLS uses a scaling factor to calculate an ‘augmented cost function’, which is used to restart the search. The augmented cost function is seen only by the embedded local search; the main search procedure is oblivious to it. The aim is to diversify the search so that it can escape local optima. See the code listing for more details.
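The bookkeeping behind that can be sketched as follows, using the usual GLS formulation for the TSP (the names and toy instance are mine): edge penalties feed an augmented cost that the embedded local search minimizes, and when the search stalls, the edges of maximal utility get penalized.

```java
public class GuidedLocalSearch {
    // augmented(tour) = cost(tour) + lambda * sum of penalties on the tour's edges
    static double augmentedCost(int[] tour, double[][] d, int[][] penalty, double lambda) {
        double s = 0;
        for (int i = 0; i < tour.length; i++) {
            int a = tour[i], b = tour[(i + 1) % tour.length];
            s += d[a][b] + lambda * penalty[a][b];
        }
        return s;
    }

    // When the local search stalls, penalize the edges of maximal utility:
    // utility(e) = cost(e) / (1 + penalty(e)), so expensive, rarely
    // penalized edges are discouraged first.
    static void updatePenalties(int[] tour, double[][] d, int[][] penalty) {
        double maxUtil = -1;
        for (int i = 0; i < tour.length; i++) {
            int a = tour[i], b = tour[(i + 1) % tour.length];
            maxUtil = Math.max(maxUtil, d[a][b] / (1.0 + penalty[a][b]));
        }
        for (int i = 0; i < tour.length; i++) {
            int a = tour[i], b = tour[(i + 1) % tour.length];
            if (d[a][b] / (1.0 + penalty[a][b]) == maxUtil) {
                penalty[a][b]++;
                penalty[b][a]++;
            }
        }
    }

    public static void main(String[] args) {
        double[][] d = { {0, 1, 9}, {1, 0, 1}, {9, 1, 0} };
        int[][] p = new int[3][3];
        updatePenalties(new int[]{0, 1, 2}, d, p);
        System.out.println(p[2][0]); // → 1: edge 2-0 (cost 9) had the highest utility
    }
}
```

Note that the true cost function is still used when comparing candidates for the overall best; only the local search steers by the augmented cost.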
Tabu Search
Tabu Search maintains a tabu list of the edges that have been rewired by the stochastic 2-opt move. This list prevents the generation of candidate solutions from revisiting already-visited paths (called cycling). By preventing cycles we search a broader candidate space, thereby converging towards the global optimum faster.
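A deliberately simplified sketch of that idea (mine, not the article's listing; it accepts any non-tabu random move rather than scanning the whole neighborhood): each 2-opt move deletes two edges, those edges go on a fixed-length tabu list, and a move is rejected if it would re-create a tabu edge.

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;
import java.util.Random;

public class TabuSearch {
    static final Random RNG = new Random(3);

    // Undirected edge encoded as a single key for the tabu list.
    static long edgeKey(int a, int b, int n) {
        return (long) Math.min(a, b) * n + Math.max(a, b);
    }

    static double cost(int[] t, double[][] d) {
        double s = 0;
        for (int i = 0; i < t.length; i++) s += d[t[i]][t[(i + 1) % t.length]];
        return s;
    }

    static int[] search(double[][] d, int iterations, int tabuSize) {
        int n = d.length;
        int[] current = new int[n];
        for (int i = 0; i < n; i++) current[i] = i;
        int[] best = current.clone();
        Deque<Long> tabu = new ArrayDeque<>();
        for (int it = 0; it < iterations; it++) {
            // Random 2-opt move: reverse the segment between positions i+1 and j.
            int i = RNG.nextInt(n - 1), j = i + 1 + RNG.nextInt(n - i - 1);
            int a = current[i], b = current[i + 1];
            int c = current[j], e = current[(j + 1) % n];
            // The move would create edges (a,c) and (b,e); reject if either is tabu.
            if (tabu.contains(edgeKey(a, c, n)) || tabu.contains(edgeKey(b, e, n))) continue;
            int[] cand = current.clone();
            for (int lo = i + 1, hi = j; lo < hi; lo++, hi--) {
                int tmp = cand[lo]; cand[lo] = cand[hi]; cand[hi] = tmp;
            }
            // The two deleted edges become tabu; oldest entries age out.
            tabu.addLast(edgeKey(a, b, n));
            tabu.addLast(edgeKey(c, e, n));
            while (tabu.size() > tabuSize) tabu.removeFirst();
            current = cand;
            if (cost(current, d) < cost(best, d)) best = current.clone();
        }
        return best;
    }

    public static void main(String[] args) {
        double[][] d = {
            {0, 2, 9, 10},
            {2, 0, 6, 4},
            {9, 6, 0, 3},
            {10, 4, 3, 0}
        };
        int[] best = search(d, 200, 6);
        System.out.println(Arrays.toString(best) + " cost=" + cost(best, d));
    }
}
```

Reactive Tabu Search builds on this by adapting the tabu tenure (`tabuSize` here) in response to how often the search revisits configurations.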
Reactive Tabu Search
Note: The descriptions are purposely kept short to encourage delving into the code itself. I would recommend reading the pseudo code provided in Clever Algorithms and following my code alongside it. I went out of my way to keep my code similar to the pseudo code.
Visualization of Stochastic 2-opt and Double Bridge Moves
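As a textual stand-in for the figures, here is a small demo (my own helper names) that applies each move to the identity tour with explicit cut points, so the effect of each move is easy to see:

```java
import java.util.Arrays;

public class MovesDemo {
    // Stochastic 2-opt: reverse the segment between positions i and j
    // (the indices would normally be chosen at random).
    static int[] reverseSegment(int[] tour, int i, int j) {
        int[] t = tour.clone();
        while (i < j) { int tmp = t[i]; t[i] = t[j]; t[j] = tmp; i++; j--; }
        return t;
    }

    // Double bridge: split the tour into A|B|C|D at p1 < p2 < p3 and
    // reconnect as A|D|C|B -- a move a single 2-opt cannot reverse.
    static int[] doubleBridge(int[] tour, int p1, int p2, int p3) {
        int n = tour.length;
        int[] out = new int[n];
        int k = 0;
        for (int i = 0; i < p1; i++) out[k++] = tour[i];
        for (int i = p3; i < n; i++) out[k++] = tour[i];
        for (int i = p2; i < p3; i++) out[k++] = tour[i];
        for (int i = p1; i < p2; i++) out[k++] = tour[i];
        return out;
    }

    public static void main(String[] args) {
        int[] tour = {0, 1, 2, 3, 4, 5, 6, 7};
        // → [0, 1, 5, 4, 3, 2, 6, 7]
        System.out.println("2-opt:         " + Arrays.toString(reverseSegment(tour, 2, 5)));
        // → [0, 1, 6, 7, 4, 5, 2, 3]
        System.out.println("double bridge: " + Arrays.toString(doubleBridge(tour, 2, 4, 6)));
    }
}
```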