A Heuristic Search Approach to Multidimensional Scaling

This research effort presents an approach to accomplish Multidimensional Scaling (MDS) via the heuristic approach of Simulated Annealing. Multidimensional scaling is an approach used to convert matrix-based similarity (or dissimilarity) data into spatial form, usually in two or three dimensions. Performing MDS has several important applications: Geographic Information Systems, DNA sequencing, and marketing research are just a few. Traditionally, classical MDS decomposes the similarity or dissimilarity matrix into its eigensystem and uses the eigensystem to calculate spatial coordinates. Here, a heuristic search-based approach is used to find coordinates from a dissimilarity matrix that minimize a cost function. The proposed methodology is applied to a variety of problems. Experimentation shows that it consistently outperforms solutions obtained via the classical MDS approach, and that the approach can be used for other important applications.

One such general alternative is a search heuristic. A search heuristic usually starts with a feasible, randomly-generated solution, with the intent of eventually finding an optimal (or near-optimal) solution to an optimization problem. The optimality of a solution is determined via an objective function measure. During the search process, feasible solutions are modified to a minor degree in the hope of improvement, ultimately approaching global optimality. While this search process is ongoing, newly obtained solutions replace incumbent solutions. Sometimes, these incumbent solutions are replaced by relatively inferior newly obtained solutions. This might seem odd, but research has shown that, when properly implemented, search heuristics that occasionally replace incumbent solutions with relatively inferior ones can avoid being trapped at local optima [2].
Tabu Search [3] [4], Simulated Annealing [5] and Genetic Algorithms [6] are three types of search heuristics that fit the description above. When properly constructed and implemented, search heuristics can provide near-optimal results with less computational effort than other solution approaches.
For this research effort, a Simulated Annealing approach is used to address an MDS problem, taking a distance matrix as input and outputting a set of Cartesian coordinates. Specifically, the following sections: describe classical multidimensional scaling; describe a simulated annealing-based alternative for the MDS problem; outline an experiment for implementation of the proposed methodology; detail experimental results; and draw conclusions.

Classical Approach to Multidimensional Scaling
Classical Multidimensional Scaling is a technique used to convert similarity (or dissimilarity) data into m dimensions, so that a spatial representation of the data can be pursued. The given data is typically provided in n × n matrix form, where each element is some pairwise measure of the similarity or dissimilarity between data points. The conversion of the data from matrix to spatial form is done for a variety of reasons: market segmentation, graphing, matching, sequencing, etc. Any attempt to visualize observations when given similarity data can be thought of as an application of MDS.
In classical MDS, an n × n dissimilarity matrix is provided, which is referred to as D. This matrix contains elements d_ij, the Euclidean distance between points i and j. For further analysis, the matrix D² is constructed, whose elements are the squares of the d_ij values. The centering matrix C = I − (1/n)J_n is then used, where I is the identity matrix (values of "1" on the main diagonal, "0" elsewhere), and J_n is an n × n matrix of "1" values. These elements form the double-centering matrix B, which is constructed so that the means of all row and column vectors are zero [7]. This matrix is as follows:

B = −(1/2) C D² C

The B matrix is then decomposed into its eigensystem. Specifically, the m largest eigenvalues and eigenvectors are captured. A spatial representation of the data is provided by the n × m matrix X, which is determined as follows:

X = E Λ^(1/2)

where E is an n × m matrix of the m largest eigenvectors, and Λ^(1/2) is the corresponding m × m matrix with the square roots of the m largest eigenvalues on the main diagonal, and values of "0" elsewhere [8] [9].
This eigenpair approach exploits the dissimilarity matrix D to obtain spatial coordinates (X).
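As an illustration, the classical procedure can be sketched with NumPy (a sketch for illustration only; the double-centering and eigendecomposition follow the standard formulation above):

```python
import numpy as np

def classical_mds(D, m=2):
    # Double-center the squared distances: B = -(1/2) C D^2 C, where the
    # centering matrix C = I - (1/n) J_n.
    n = D.shape[0]
    C = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * C @ (D ** 2) @ C
    # Keep the m largest eigenpairs and form X = E Lambda^(1/2).
    w, V = np.linalg.eigh(B)              # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:m]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Three collinear points spaced 1 unit apart: the recovered coordinates
# reproduce the given distances exactly.
D = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 1.0],
              [2.0, 1.0, 0.0]])
X = classical_mds(D)
```

Because the example points are collinear, the second coordinate column is zero and the pairwise distances of X match D exactly.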

The objective function to be minimized is as follows:

Z = sqrt( Σ_i Σ_j (d_ij − d̂_ij)² )    (3)

In Equation (3), d_ij is the given distance between points i and j, and d̂_ij is the distance implied by the spatial coordinates. When the implied distances equal the given distances, the objective function value has been minimized to a value of "0".
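This objective function, described later in the text as the square root of the sum of squared differences between the given distance matrix and the distance matrix implied by the coordinates, can be sketched as follows (whether each pair is counted once or twice is an assumption that changes Z only by a constant factor):

```python
import numpy as np

def objective(D, X):
    # Z: square root of the sum of squared differences between the given
    # distance matrix D and the distance matrix implied by coordinates X.
    # (Summing over all ordered pairs is assumed; counting each pair once
    # would change Z only by a factor of sqrt(2).)
    Dx = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    return float(np.sqrt(((D - Dx) ** 2).sum()))

# A 3-4-5 right triangle placed exactly reproduces its distance matrix,
# so the objective function value is 0.
X = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 4.0]])
D = np.array([[0.0, 3.0, 4.0],
              [3.0, 0.0, 5.0],
              [4.0, 5.0, 0.0]])
Z = objective(D, X)
```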
Classical MDS decomposes an n × n matrix and outputs an n × 2 (or n × 3) matrix of coordinates. This means that n − 2 eigenpairs are lost for a two-dimensional spatial solution, or n − 3 eigenpairs for a three-dimensional spatial solution. These "lost" eigenpairs limit the ability to explain the variation in the distance data, as each eigenpair assists in explaining that variation. This, of course, is unfortunate. Additionally, decomposing a matrix into its eigenpairs is computationally expensive. As such, a viable alternative is worth exploration.

Heuristic Approach to Multidimensional Scaling
Regardless of the approach used to perform MDS, the objective is the same: minimization of the objective function shown in Equation (3). In order to present the methodology used to solve this problem, the following parameters are defined in Table 1.
As stated before, Simulated Annealing is the chosen search heuristic for the MDS problem at hand. Simulated Annealing gets its name from "annealing": the heating of a solid (usually a metal) to a high temperature, followed by slow, subsequent cooling. In "simulated" annealing, the objective function is analogous to the desired physical property of the solid: we start with a randomly-generated solution (analogous to a solid at a high temperature) and modify the solution until a desired condition is obtained. Early in the search, there is more liberality in replacing the incumbent solution than later in the search; this is analogous to the "chaotic" state early in the annealing process compared to the more "ordered" state later in the process.
The following subsections detail the Simulated Annealing heuristic search process used to find a solution to the problem at hand.
Step 1: Initialization
An initial solution is constructed by assigning random values to the x_ab values. Specifically, this is done as follows:

x_ab = U(0, 500)

where U(0, 500) is a uniformly-distributed random variable on the (0, 500) interval. It should be noted that the solution boundary is confined to the (0, 500) interval for both the horizontal and vertical axes. The "test" solution and "best" solution values are set equal to this initial solution:

(x_ab)_t = x_ab, (x_ab)_b = x_ab

The initial solution is also used to determine the objective function value, Z, via Equation (3). The objective function values for the "test" and "best" solutions are assigned this same value:

Z_t = Z, Z_b = Z

Step 2: Modification
In order to improve the current solution such that it results in an optimal (or at least "near-optimal") condition, it must be "modified". Specifically, the "test" solution is modified as the first step toward improving the current solution. This modification is done to one of the n data points, and it is a "minor" modification: large modifications are avoided so that a controlled improvement process is followed.
First, the cost vector is obtained via the following:

c_i = Σ_j (d_ij − d̂_ij)²

The cost vector shows the amount of "error" each of the n locations contributes to the objective function value. Higher cost values contribute more to sub-optimality.
Each potential target data point (q) for the modification has the following probability of being selected:

p_q = c_q / (c_1 + c_2 + … + c_n)

Monte-Carlo simulation is used to select the target value q; cities with higher cost values are more likely to be selected as targets for modification. The selected data point (q) is then perturbed in all m dimensions (m = 2 is used here). The current temperature value (T) is used in this calculation, so that the aggressiveness of each modification decreases along with the value of T.
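The cost-weighted targeting and temperature-scaled move can be sketched as follows. The exact per-city cost and perturbation formulas are not reproduced above, so the row-sum cost and the uniform (−T, T) move below are labeled assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def cost_vector(D, X):
    # Per-city "error" contribution. The row-sum form below is an assumed
    # decomposition: its entries add up to the square of the objective value.
    Dx = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    return ((D - Dx) ** 2).sum(axis=1)

def select_target(c):
    # Monte-Carlo selection: city q is chosen with probability c_q / sum(c).
    return int(rng.choice(len(c), p=c / c.sum()))

def modify(X, q, T):
    # Minor move of city q in all m dimensions. The uniform (-T, T)
    # perturbation is an assumed form whose aggressiveness shrinks with T.
    Xt = X.copy()
    Xt[q] = np.clip(Xt[q] + rng.uniform(-T, T, size=X.shape[1]), 0.0, 500.0)
    return Xt
```

Clipping to the (0, 500) box keeps the perturbed city inside the stated solution boundary.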
Step 3: Objective Function
The modified test solution is then used to determine the objective function value (Z_t), via Equation (3).

Step 4: Solution Comparison
Test 1: If the objective function value associated with the test solution is less than the objective function value associated with the current solution, the test solution replaces the current solution. Specifically, the following applies:

x_ab = (x_ab)_t    (12)
Z = Z_t    (13)

Otherwise, the difference between the current solution and the test solution is determined in relative form, with the value represented by δE:

δE = (Z_t − Z) / Z

This value is always positive, since Z_t > Z. It is used to determine whether-or-not the inferior test solution should replace the current solution, via a probabilistic condition. Specifically, the value P_A represents the probability of replacing the current solution with the test solution:

P_A = exp(−δE / (k_B T))

The value k_B is referred to as the Boltzmann Constant [10], and is user-determined such that a certain degree of relative inferiority (δE) results in the test solution having a specific probability of replacing the current solution.
A uniformly-distributed random number on the (0, 1) interval is then generated. If this random number is less than P_A, the test solution replaces the current solution, in accordance with Equations (12) and (13).
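Test 1's probabilistic acceptance can be sketched as follows, assuming the standard Metropolis form for P_A (an assumption; the exact expression is not reproduced in this text):

```python
import numpy as np

rng = np.random.default_rng(7)

def accept_inferior(Z, Zt, T, kB):
    # Probabilistic acceptance of an inferior test solution. The
    # Metropolis-style form P_A = exp(-dE / (kB * T)), with relative
    # inferiority dE = (Zt - Z) / Z, is an assumed reading of the rule.
    dE = (Zt - Z) / Z
    PA = np.exp(-dE / (kB * T))
    return bool(rng.uniform() < PA)
```

Early in the search (large T), a given δE yields a higher P_A than late in the search, which produces the liberality-then-strictness behavior described above.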

Test 2:
If the objective function value associated with the test solution is less than the objective function associated with the best solution, the test solution replaces the best solution.
Otherwise, no action is taken.
Step 5: Incrementation
Steps 2, 3 and 4 are repeated Iter times. The value of T is then updated via the cooling rate:

T = CR × T

This continues while T > 1. When T ≤ 1, the simulation has concluded. At that point, the current solution is replaced by the best solution, and the value of T is set to T_1:

x_ab = (x_ab)_b, Z = Z_b, T = T_1

This is repeated Sim times, and the best solution found is reported.
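Putting Steps 1 through 5 together, the loop structure can be sketched as follows. This is a simplified, self-contained sketch: the geometric update T = CR × T is assumed from the reported cooling rate, moves are applied to a uniformly chosen city rather than the cost-weighted target, and only improving moves are accepted (the probabilistic acceptance of Step 4 is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

def dist_matrix(X):
    # Pairwise Euclidean distances implied by coordinates X.
    return np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))

def objective(D, X):
    # Z: square root of the summed squared differences between matrices.
    return float(np.sqrt(((D - dist_matrix(X)) ** 2).sum()))

def anneal(D, T1=25.0, CR=0.99, Iter=75, Sim=3):
    # Step 1: random initial solution on the (0, 500) box.
    n = D.shape[0]
    X = rng.uniform(0.0, 500.0, size=(n, 2))
    Z = objective(D, X)
    Xb, Zb = X.copy(), Z                  # "best" solution so far
    for _ in range(Sim):                  # Sim chained simulations
        T = T1
        while T > 1.0:                    # one simulation: cool until T <= 1
            for _ in range(Iter):         # Iter moves per temperature level
                # Steps 2-3: perturb one city (scale shrinks with T).
                q = int(rng.integers(n))
                Xt = X.copy()
                Xt[q] = np.clip(Xt[q] + rng.uniform(-T, T, 2), 0.0, 500.0)
                Zt = objective(D, Xt)
                if Zt < Z:                # Step 4 (greedy variant only)
                    X, Z = Xt, Zt
                    if Z < Zb:
                        Xb, Zb = X.copy(), Z
            T *= CR                       # assumed geometric cooling
        X, Z = Xb.copy(), Zb              # restart next simulation from best
    return Xb, Zb

# Small demo: a 4-city distance matrix from a known square layout.
D = dist_matrix(np.array([[0.0, 0.0], [100.0, 0.0],
                          [0.0, 100.0], [100.0, 100.0]]))
Xb, Zb = anneal(D, Sim=1)
```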
To gain a better understanding of the presented methodology, it is presented in pseudocode via Figure 1.
As a point of clarification, the use of repeated simulations in an effort to obtain an optimal solution may present some confusing terminology. As such, Figure 2 shows the hierarchy of terms used here. A single instance of executing the presented methodology is classified as a "solution"; a "simulation" in this context is only a subset of the solution process. Specifically, there are Sim simulations performed for each solution. At the conclusion of each simulation during the search process, the "current" solution is replaced by the "best" solution. This provides the next simulation with the best possible starting point in the attempt to find the optimal solution, and is one of the defenses against being "trapped" at local optima. If this were not done, the search process would likely be trapped at a sub-optimal solution.

Experimentation
An experiment is designed to evaluate the effectiveness of the presented methodology. A distance matrix was obtained for 36 metropolitan areas in the continental United States. These are "air-distances". These values were obtained via the website https://www.airmilescalculator.com/. These distances can be thought of as "links" in the parlance of networks. The number of links in a network of n locations is n(n − 1)/2. These links can also be thought of as pairwise distances.
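As a quick check of the link count formula, the 36-city matrix used here contains n(n − 1)/2 pairwise distances:

```python
# Number of pairwise "links" (distances) among n locations: n(n - 1) / 2.
def link_count(n: int) -> int:
    return n * (n - 1) // 2

links = link_count(36)  # 630 pairwise distances for the 36-city matrix
```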

Implementation
The methodology presented is coded via the NetLogo software package (https://ccl.northwestern.edu). NetLogo is a Java-based programming environment, particularly suited to agent-based simulation [11] [12]. Agent-based simulation is desired here, as the n cities can be thought of as agents.
The distance matrix is used to address problems ranging from four cities (n = 4) to 36 cities (n = 36). All values of n between 4 and 36 are used for assessment.
As such, 33 (36 − 4 + 1) unique problems are addressed. Each problem is solved five times so that reliable estimates of performance are available. An example solution is detailed in Figure 3, which shows a solution for a problem with n = 26 cities. The "best" solution found for this example has an objective function value of Z_b = 24.13. This value quantifies the square root of the sum of squared differences between the actual (given) distance matrix and the distance matrix associated with the "best" solution, (x_ab)_b. In this case, there is a total difference of 24.13 US miles between the two matrices. A value of Z_b = 0 is considered optimal, which would imply no difference between the actual distance matrix and the distance matrix associated with the "best" solution.

Computational Experience
The program was run on a Windows machine with an Intel Core 8565U processor at 4.6 GHz. The user-chosen parameters were selected via trial and error, so that the "best" solutions could be found repeatedly. This is detailed in Table 2.
For all solutions, a cooling rate (CR) of 0.99 was used, along with 75 iterations per temperature level (Iter). The starting temperature (T_1) was 25 and the final temperature (T_F) was 17.5. The k_B value was set such that a solution with 1% relative inferiority would replace the incumbent solution with a 10% probability.
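As a hypothetical back-calculation (the acceptance formula is not reproduced in this text, so the Metropolis form P_A = exp(−δE/(k_B T)) anchored at the starting temperature T_1 is assumed), the stated "1% inferiority accepted with 10% probability" requirement would imply a k_B value of roughly 1.74 × 10⁻⁴:

```python
import math

# Hypothetical back-calculation of k_B. Assuming the acceptance rule
# P_A = exp(-dE / (kB * T)) evaluated at the starting temperature T1,
# the requirement "dE = 1% is accepted with probability 10%" gives:
T1 = 25.0
dE = 0.01          # 1% relative inferiority
PA = 0.10          # desired acceptance probability
kB = -dE / (T1 * math.log(PA))
```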

Stagnation
During the development of the search process, it was noticed that the solution was getting trapped at local optima, despite efforts to prevent this. This local-optima trapping phenomenon is referred to as "stagnation" hereafter. Specifically, improvement was impeded by cities being out of place: cities located in positions where minor modifications are not aggressive enough to improve the objective function value, resulting in solutions becoming stagnant. There are two types of stagnation requiring treatment: single-city stagnation and two-city stagnation.

One-City Stagnation
Figure 4 shows an example of one-city stagnation: one city (MIA) is clearly out of place, while all of the other cities seem relatively properly placed. Unfortunately, the modification process described above will not correct the problem, as it can only accomplish minor modifications; MIA will not be moved anywhere that is helpful. Instead, the MIA location must be moved in a more aggressive fashion. The general procedure for this lies with the cost vector (c): the maximum value in the cost vector identifies the most "offensive" city in terms of contribution to the objective function value, and the city associated with that value is selected as the target, referred to as "q". The modification to city q is similar to the one performed under normal circumstances, with two key differences: it is not sensitive to the value of T, and it is much more aggressive, so that the "offending" city can be moved further than normal. Figure 5 shows yet another solution. If the image is rotated about 45 degrees in the counter-clockwise direction, proper perspective might be realized.
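An aggressive one-city escape of this kind can be sketched as follows (the argmax targeting follows the text; re-drawing the city uniformly over the whole (0, 500) box, independent of T, is an assumed form of the aggressive move):

```python
import numpy as np

rng = np.random.default_rng(3)

def escape_one_city(X, c):
    # Aggressive relocation of the most "offensive" city: q is the argmax
    # of the cost vector c. Re-drawing q's coordinates uniformly over the
    # whole (0, 500) solution box, independent of T, is an assumed form
    # of the aggressive move described in the text.
    q = int(np.argmax(c))
    Xt = X.copy()
    Xt[q] = rng.uniform(0.0, 500.0, size=X.shape[1])
    return Xt, q
```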

Two-City Stagnation
This solution, at first, might appear to be "reasonable". Upon further inspection, however, one might notice a "disconnect" between the eastern cities and the western cities. Specifically, the western cities seem to be inverted when compared to the eastern cities; Seattle and Portland (SEA and POR) are shown very much out of place. Unfortunately, this condition cannot be remedied via the single-city stagnation treatment just described. The two "offensive" cities are properly oriented with respect to each other and other nearby cities. This prevents us from moving, for example, SEA to its proper general location, because the result would still be a suboptimal condition: a large distance between SEA and POR would be harmful. Moving both offending cities simultaneously by the same amount is found to alleviate the problem. This two-city movement is done by re-examining the cost vector c. Specifically, a value Y is calculated for each row and column combination; the row and column resulting in the maximum value of Y are the two cities contributing most to sub-optimality. Here "r" is used as row index "i" and "s" is used as column index "j". These two targets are moved in an aggressive fashion.
The dx and dy terms denote the horizontal and vertical movements, respectively. The same (dx, dy) movement is then made for both cities r and s.
Cities r and s are moved the exact same amount and in the same direction. The distance between them will not change. It is desired that their simultaneous movement will at some point induce a condition that ends stagnation and move towards an optimal solution.
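The shared-displacement move can be sketched as follows (the displacement magnitude is an assumption; the text specifies only that both cities move by the same amount, preserving the distance between them):

```python
import numpy as np

rng = np.random.default_rng(9)

def two_city_move(X, r, s, step=250.0):
    # Both target cities receive the same (dx, dy) displacement, so the
    # distance between them is preserved. The magnitude (uniform on
    # (-step, step)) is an assumption; the text specifies only that the
    # move is aggressive and identical for both cities.
    dx, dy = rng.uniform(-step, step, size=2)
    Xt = X.copy()
    Xt[[r, s]] += np.array([dx, dy])
    return Xt
```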
It should be noted that the search heuristic performed modification to address both forms of stagnation on a periodic basis during the search.

Convergence
With stagnation addressed, attention is turned to how well the solutions approach their desired objective function values. It is, of course, desired that the objective function find its minimum as efficiently as possible. Figure 6 shows three types of convergence noticed during the development process. The horizontal axis represents time (the "age" of the search), while the vertical axis shows the "best" objective function value found at the associated time. At the start of the search, the objective function value is high (undesirable), because the randomly-generated initial solution is not informed by the objective function. Improvement then commences, because the search process is motivated to find minimal values of the objective function. The blue line in Figure 6 shows the first type of convergence, classified here as "quick convergence": the objective function achieves a near-optimal value without incident. The orange line shows what is classified as "stagnation remedied": stagnation occurs during the search process, but is remedied via the approaches described above, and a near-optimal condition is eventually obtained. Note the rapid improvement for the orange line when stagnation is remedied. The grey line is classified as "stagnation not remedied": stagnation occurs and is not remedied. Fortunately, this research effort only experienced "stagnation not remedied" during the developmental phase; once this condition was observed during development, the search heuristic was modified such that stagnation was always remedied.

Experimental Results
Table 3 shows the results for each value of n. For each value of n, a solution was obtained five times. The mean objective function value is reported along with its standard deviation. The ratio of standard deviation to mean is less than 1% for each value of n; this small variation provides a large degree of confidence in the estimates. Also reported is the objective function value when the classical MDS approach ("Eigen") is used. As one can see, the search approach provides vastly superior results as compared to the classical approach. Figure 7 details these findings: the blue line shows the mean objective function value associated with the presented search approach as a function of the number of cities (n), while the orange line shows the objective function value via the classical MDS approach as a function of n.

Concluding Comments
A search heuristic has been presented to convert a distance matrix into spatial coordinates. Potential pitfalls, such as "stagnation", were encountered and remedied. The final results show objective function values that are near-optimal. Additionally, these results are superior to those obtained via classical MDS, which is unable to compete with the direct search heuristic presented here.
As stated earlier, MDS can be used for a variety of applications. It so happens the application used here was to convert distance data into spatial coordinates.
The intent of this research is not just to find spatial coordinates, but to illustrate an approach to obtaining a good-quality solution to an MDS problem. If one only wanted spatial coordinates for locations, they could simply be looked up on the internet. The example here is considered reasonable because the end-user should have a pre-conceived belief as to where the locations should be, which assists with understanding the search process. This search process could be used for nearly any situation where it is desired to convert similarity (or dissimilarity) data into spatial data.
There are, of course, opportunities for research beyond this effort. More locations would be an interesting pursuit, as would consideration of multiple geographic regions (i.e., continents). Of course, the search parameters could also be further explored. Other possible follow-up efforts could be related to other applications of MDS, such as consumer segmentation, information storage, and DNA sequence comparison.