Central Force Optimization with Gravity < 0, Elitism, and Dynamic Threshold Optimization: An Antenna Application, 6-Element Yagi-Uda Arrays

This paper investigates the effect of adding three extensions to Central Force Optimization (CFO) when it is used as the Global Search and Optimization (GSO) method for the design and optimization of 6-element Yagi-Uda arrays. Those extensions are Negative Gravity, Elitism, and Dynamic Threshold Optimization (DTO). The basic CFO heuristic includes none of these, but adding them substantially improves the algorithm's performance. This paper extends the work reported in a previous paper that considered only negative gravity and showed a significant performance improvement over a range of optimized arrays. Still better results are obtained by adding Elitism and DTO to the mix: an overall improvement in best fitness of 19.16% is achieved by doing so. While the work reported here was limited to the design/optimization of 6-element Yagis, the reasonable inference from these data is that any antenna design/optimization problem, indeed any Global Search and Optimization problem, antenna or not, that utilizes CFO as its search engine will benefit, probably substantially, by including all three extensions.


Introduction
In a previous paper, [1], six-element Yagi-Uda arrays (Yagis) were optimally designed using Central Force Optimization (CFO) with a small measure of pseudo randomly injected negative gravity (see [1] for details and array geometry). CFO searches for the maxima of an objective function, not its minima. The effect of negative gravity is to improve CFO's exploration of the decision space (DS) by causing probes that otherwise would coalesce around discovered maxima to disperse away from those maxima and sample under-sampled regions, or perhaps regions that were not sampled at all. The results reported in [1] strongly suggest that all CFO implementations and applications will benefit from some measure of negative gravity. At a minimum, the algorithm's performance should be investigated with this extension added.
Negative gravity, however, is not the only improvement that can be made. This note reports the results of adding two further extensions to CFO with negative gravity: Elitism [2] and Dynamic Threshold Optimization (DTO) [3], each of which improves CFO's performance still further. CFO is an evolutionary algorithm (EA) that explores DS in a series of "time steps" using "probes" to sample the objective function. Elitism comes in two flavours: 1) Step-by-step Elitism, which, within a given run, stores the DS coordinates of a specified percentage of the best probes at each step and inserts those data into the algorithm's next step, thereby preserving best-fitness information at each step. For the work reported here, various percentages of the best fitnesses' data from each step were pseudo randomly inserted into the next step. 2) Run-to-Run Elitism, also referred to as Seed Probe Elitism, which passes the coordinates of the best-fitness probe(s) from a previous run to the current CFO run. For the work reported here the coordinates of the best-fitness probe in the previous run are saved in an external "seed probe" data file and inserted as the last of CFO's probes in the current run.
Dynamic Threshold Optimization is a purely geometrical method that modifies the objective function's topology on successive runs of any global search and optimization program (GSO, referring to either a program or the heuristic, context permitting). DTO compresses the objective function's "landscape" from below by filtering out all local maxima that fall below a threshold value established at the run's start. DTO can be applied to any GSO algorithm, even different algorithms on successive runs. As the DTO threshold rises, more and more local maxima are removed, so that the landscape becomes "flatter" and flatter. This, in turn, may (likely will) make DS harder to explore because there is less topological information to guide the search. One way to address this issue is to increase the number of sample points on successive applications of the algorithm (in CFO's case, the number of probes). Besides step-by-step elitism within a CFO run, in this paper DTO is implemented with run-to-run elitism as well. That is, the best global maximum returned by a CFO run is passed on to the next CFO run within the DTO shell by pseudo randomly inserting it into the next run's initial probe distribution (IPD). As with step-by-step elitism, run-to-run elitism prevents loss of information that may be, indeed almost certainly will be, useful in searching for maxima.
It is important to note that neither elitism nor DTO preserves the best fitness step-to-step or run-to-run. The previous best-fitness information, specifically its coordinates in DS, is used only to guide the search. At any given step, or at the end of any run, the best fitness may actually be lower than the value inserted using elitism or DTO. Preservation of the global best fitness is of course possible, but that approach was not taken for the results reported here because doing so may actually impede the GSO's exploration.

CFO-GED Algorithm
CFO is a deterministic metaheuristic. It returns the same results every time it is run with the same setup. This characteristic is a major advantage when trying to develop either a truly useful objective function, or appropriate runtime parameters for real-world problems like Yagi array design. By contrast, if a stochastic optimizer is used, it is difficult, maybe impossible, to determine for sure whether or not a change in the objective function or a change in runtime parameters accounts for the different, sometimes wildly different, results on successive runs [4].
However, even though CFO is deterministic, it does benefit from the addition of a pseudo random component because doing so improves its exploration. Both in [1] and in the work reported here, pseudo randomness was injected using BBP-generated π-fractions. Calculation and use of the π-fractions are discussed in [5], and details of CFO theory and its implementation are contained in [6] (note the important discussion of the equations of motion in §VI-B of [6]). The essential characteristics of pseudo random variables (prv's) are that they are uncorrelated with the fitness function's topology, they are uniformly distributed, and they are uncorrelated among themselves. π-fractions meet these requirements. A prv is specified by enumeration or by calculation, and thus always remains deterministic. By contrast, a true random variable (rv) is a priori unknowable because it must be drawn from a probability distribution. This difference is fundamental and important, especially when it comes to solving real-world problems with metaheuristic methods. Thus, by using prv's CFO remains deterministic at every step while including some "randomness" that improves its exploration.
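The exact π-fraction construction is defined in [5]. As a hedged illustration of the underlying idea only (deterministic values that look uniformly random), the BBP formula permits computing the fractional part of 16^n·π directly, and grouping the resulting hexadecimal digits yields reproducible fractions in [0, 1). The function names below are this sketch's, not the paper's:

```python
# Illustrative only: deterministic "random-looking" fractions from the
# BBP (Bailey-Borwein-Plouffe) formula for pi. The exact pi-fraction
# scheme used in the paper is the one defined in [5]; this is a generic sketch.

def _bbp_series(j, n):
    # Fractional part of sum_k 16^(n-k) / (8k + j), using modular
    # exponentiation for the integer-part terms (k <= n).
    s = 0.0
    for k in range(n + 1):
        s = (s + pow(16, n - k, 8 * k + j) / (8 * k + j)) % 1.0
    k = n + 1
    while True:
        term = 16.0 ** (n - k) / (8 * k + j)
        if term < 1e-17:
            return s
        s = (s + term) % 1.0
        k += 1

def pi_fraction(n):
    """Fractional part of 16**n * pi -- a deterministic value in [0, 1)."""
    return (4 * _bbp_series(1, n) - 2 * _bbp_series(4, n)
            - _bbp_series(5, n) - _bbp_series(6, n)) % 1.0

def pi_hex_digits(n, num=8):
    """Hex digits of pi's fractional part starting at position n."""
    x, digits = pi_fraction(n), []
    for _ in range(num):
        x *= 16.0
        d = int(x)
        digits.append(d)
        x -= d
    return digits
```

Because every value is computed, not drawn, re-running the generator always reproduces the same sequence, which is precisely the property that keeps a prv-injected CFO deterministic.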
A major advantage that CFO provides is its repeatability. Stochastic GSO's generate results that vary from one run to the next, and there is no way of knowing why they are different. Was it a change in the run parameters? Was it a change in the objective function? Or was it the program's inherent randomness? Real-world problems do not have built-in objective functions. A suitable function must be developed for each new problem, and doing so can be quite difficult if the GSO's results are completely unpredictable. CFO, by contrast, permits rapid evaluation of changes in run parameters or in the objective function itself because every run with the same setup returns the same results. Any change in CFO's output is a result of changes in the program setup or in the fitness function.
The GSO algorithm used throughout this paper is basic CFO augmented with the three extensions not contained in the basic algorithm: 1) Negative Gravity; 2) Elitism; and 3) Dynamic Threshold Optimization. The new algorithm is called CFO-GED, and its pseudo code appears in Figure 1. The effect of negative gravity, denoted G < 0 where G is CFO's "gravitational constant," is discussed extensively in [1]. Elitism involves storing the best fitness(es) from a previous GSO step or run and inserting those data into the next step or run. As discussed above, the first is referred to as step-by-step and the second as run-to-run, and it is important to note their differences because they produce different results. Both are used in CFO-GED. In run-to-run elitism the single best fitness/coordinate data are passed on to the next run, whereas in step-by-step elitism a user-specified number of best fitnesses and locations are passed from one step to the next (details in Table 1). Table 1 notes: # pr = number of elitist probes; Fmax = best fitness; gmax = maximum gain (dBi); VZ0 = Variable Z0 feed impedance (Ω); S = VSWR//VZ0 (min/max).
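The following sketch shows one way negative gravity and step-by-step elitism can be wired into a gravitational-search loop. It is an illustrative reconstruction, not the authors' exact CFO-GED of Figure 1: the acceleration expression follows the general CFO form described in [6], the parameter names and values (alpha, beta, neg_gravity_frac, n_elite) are assumptions, and an ordinary seeded RNG stands in for the π-fraction prv's:

```python
import numpy as np

def cfo_ged_sketch(f, bounds, n_probes=20, n_steps=60, G=2.0,
                   alpha=1.0, beta=2.0, neg_gravity_frac=0.06,
                   n_elite=2, seed=7):
    """Illustrative CFO-style maximizer with negative gravity and
    step-by-step elitism (a sketch, not the paper's exact CFO-GED)."""
    rng = np.random.default_rng(seed)      # stand-in for pi-fraction prv's
    lo = np.array([b[0] for b in bounds], float)
    hi = np.array([b[1] for b in bounds], float)
    # simple "probe line" style IPD along DS's main diagonal
    R = lo + (hi - lo) * np.linspace(0.05, 0.95, n_probes)[:, None]
    M = np.array([f(r) for r in R])
    best_x, best_f = R[M.argmax()].copy(), M.max()
    history = []
    for _ in range(n_steps):
        # occasional pseudo-randomly injected negative gravity (G < 0)
        g = -G if rng.random() < neg_gravity_frac else G
        A = np.zeros_like(R)
        for p in range(n_probes):
            for k in range(n_probes):
                dm = M[k] - M[p]
                if k == p or dm <= 0.0:    # unit step: only fitter probes pull
                    continue
                d = R[k] - R[p]
                dist = np.linalg.norm(d)
                if dist > 1e-12:
                    A[p] += g * dm**alpha * d / dist**beta
        R = np.clip(R + 0.5 * A, lo, hi)   # unit time step, clamp to DS
        M = np.array([f(r) for r in R])
        # step-by-step elitism: re-insert the best-known coordinates into
        # pseudo-randomly chosen probes of the next step
        for idx in rng.choice(n_probes, size=n_elite, replace=False):
            R[idx], M[idx] = best_x, best_f
        if M.max() > best_f:
            best_f, best_x = M.max(), R[M.argmax()].copy()
        history.append(best_f)
    return best_x, best_f, history
```

With negative gravity, the sign flip turns attraction into repulsion for that step, scattering probes into under-sampled DS regions; the elitism re-injection ensures the best-known coordinates keep influencing the gravitational field without forcing the returned fitness to be monotone at the probe level.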

Dynamic Threshold Optimization
While Elitism has been extensively used in, for example, most multiobjective algorithms, ant colony algorithms, scatter search, (μ + λ) algorithms, and many others [2], DTO has not. Dynamic Threshold Optimization is not a GSO heuristic per se. Instead it is a geometric approach to modifying the objective function's topology to remove local maxima. DTO can be used with any GSO algorithm. In this paper that GSO program happens to be CFO, but any other search/optimization program(s) could have been used instead, alone or in combination.
DTO operates by compressing the fitness function's landscape from below. It comprises a series of passes that successively raise the objective function's "floor," or threshold, so as to filter out any local maxima below the threshold value. This is accomplished with the auxiliary function g(x) = max{f(x), Tk}, where Tk is the threshold for pass k. On each successive pass, DTO changes the topology of the objective function by redefining it using g(x). A more detailed discussion of DTO with examples appears in the Appendix.
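The compression is easy to see numerically on a toy 1-D landscape; the function and threshold values below are illustrative choices of this sketch, not taken from the paper:

```python
import numpy as np

def dto_floor(f_vals, T):
    """DTO auxiliary function applied pointwise: g = max(f, T)."""
    return np.maximum(f_vals, T)

x = np.linspace(0.0, 10.0, 2001)
f = x * np.sin(3.0 * x)            # toy multimodal landscape: 5 local maxima
surviving = []
for T in (-np.inf, 2.0, 5.0, 8.0): # rising floor
    g = dto_floor(f, T)
    # strict interior local maxima still above the floor
    peaks = int(np.sum((g[1:-1] > g[:-2]) & (g[1:-1] > g[2:]) & (g[1:-1] > T)))
    surviving.append(peaks)
print(surviving)   # fewer and fewer maxima survive as the floor rises
```

Peaks whose summits lie below the floor are flattened into it and disappear as candidate maxima, which is exactly the filtering behavior described above.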

Yagi Fitness Function
There are many ways to measure an antenna's performance, for example, as representative parameters: maximum directive gain, gain bandwidth, radiation pattern, pattern bandwidth, polarization, radiation efficiency, input impedance, impedance bandwidth, directionality, beamwidth, maximum sidelobe levels/directions, specific absorption rate, physical size, and fabrication cost/time. In a real-world problem the antenna engineer must decide which of these parameters will be used and how they will be combined in an objective function to be maximized. Should they be added/subtracted? Multiplied? Made the arguments of some other mathematical manipulation, say, exponentiation? The possibilities are endless, yet some objective functions work much better than others, both in reflecting the desired balance between antenna parameters and in being "searchable," that is, amenable to investigation by GSO. Some, unfortunately, are pathological, but not obviously so, and as a result they are difficult or impossible to deal with, especially when the pathology is hidden [4] [5] [6]. The work reported here uses the same (very simple) fitness function as [1], built on the maximum gain and the VSWR computed relative to the feed point impedance Z0, denoted VSWR//Z0. While Z0 usually is a fixed, user-supplied parameter, typically the industry-standard value of 50 Ω resistive, Variable Z0 technology (VZ0), which was used here, treats it the same as any other decision space variable. This approach embraces array current distributions that otherwise would be excluded because they fail to adequately match the predetermined value of Z0. Whether or not the CFO-returned value is feasible and desirable is an engineering and economic judgment, but more often than not impedance-matching the "non-standard" Z0 is worthwhile because the antenna's performance is better, often much better [7]. Variable Z0 technology is now available for public use without limitation [8].
At each of the three optimization frequencies L, M, and U, the Method of Moments code NEC-4 [9] computed the Yagi's maximum gain and feedpoint impedance, from which VSWR//Z0 was computed for use in the fitness function.
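VSWR relative to an arbitrary reference impedance follows directly from the reflection coefficient, and the Variable Z0 idea can be illustrated in miniature by treating Z0 as a search variable. The helper names below are this sketch's own, and NEC itself is of course not invoked:

```python
def vswr(z_feed, z0):
    """Voltage standing wave ratio of a feedpoint impedance z_feed
    (complex ohms) relative to a real reference impedance z0."""
    gamma = abs((z_feed - z0) / (z_feed + z0))   # reflection coefficient magnitude
    return (1.0 + gamma) / (1.0 - gamma)

def best_variable_z0(z_feeds, z0_candidates):
    """Variable Z0 in miniature: choose the reference impedance that
    minimizes the worst-case VSWR over the band samples z_feeds."""
    return min(z0_candidates, key=lambda z0: max(vswr(z, z0) for z in z_feeds))
```

For a perfectly matched load vswr(50+0j, 50.0) is 1, and a 2:1 resistive mismatch such as vswr(100+0j, 50.0) gives 2. Letting the optimizer pick Z0 (here by brute force over candidates) is what admits the current distributions that a fixed 50 Ω reference would reject.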

Yagi Array Prior Results
In [1] the best CFO-returned fitness occurs with 6% negative gravity (6% G < 0, where G is CFO's "gravitational constant"). Its value of 49.1892 corresponds to a maximum array gain of 11.92 dBi and feedpoint impedance Z0 = 59.8 Ω, with VSWR//59.8 ranging from 1.49 to 2.01 over the optimization frequency range 294.8 to 304.8 MHz. The reference array in that paper, that is, the CFO-optimized array with no negative gravity, has a best fitness of 47.8932 with a gain of 11.28 dBi and VSWR//65.56 ranging from 1.25 to 1.61. Injecting a small amount of negative gravity into CFO resulted in discovering a range of array designs whose fitnesses were better than or very similar to the reference array's. This improvement is attributable to enhanced DS exploration, because the effect of negative gravity is to cause CFO's probes to fly away from each other, apparently into regions of DS that have been under-sampled or perhaps not sampled at all. In this paper CFO is further enhanced by including Elitism and Dynamic Threshold Optimization. Table 1 shows the fitnesses and corresponding Yagi parameters for CFO with Negative Gravity ("G < 0") and with Elitism, but not with DTO. The results from [1] are included for comparison (the CFO runs in [1] used G < 0 but not Elitism). While the run in [1] was made with 550 steps, all the entries in Table 1 were made with 1000, using the same CFO setup parameters shown in Table 1 and Table 2 of [1]. The best CFO-GED results with only G < 0 and Elitism are highlighted in bold red text. Table 1 shows the number of probes, Np, the target and actual values of G < 0, and the amount of Elitism as a percentage of Np with the corresponding number of elitist probes, # pr.

Negative Gravity & Elitism
Step-by-step elitism was employed, so the number of elitist probes shown in the table was pseudo randomly injected into the following step. Because CFO-GED is completely deterministic, even with π-fraction prv's, every run in Table 1 can be precisely recovered simply by running it again. If any change is made in the setup, then a different fitness result is a consequence of that change; likewise for changing the objective function. Deterministic GSO programs permit the (relatively) quick comparison of how well different run setups or fitness functions perform. This flexibility is essential for solving real-world problems, but it is not provided by stochastic GSO's (see [6] for some specific examples). (Table note: * Re-run of Yagi #14 without Elitism, no seed probe, but with G < 0 (5% target, 3.6% actual) and DTO only.)
Turning to the effect of adding Elitism to CFO along with G < 0, the data in Table 1 provide convincing evidence that supplementing G < 0 with Elitism improves the discovered fitnesses by quite a bit. The best fitness of 54.466 is nearly 11% better than the best fitness in [1] which used 6% G < 0 without Elitism. Regardless of the specific problem being worked, the obvious conclusion is that all CFO runs should be augmented with both G < 0 and Elitism because in all likelihood the results will be better.

Negative Gravity, Elitism & DTO
The next question is whether or not including DTO improves CFO's performance even more. Recall from Figure 1 that the DTO shell comprises a series of passes with progressively increasing thresholds (objective function "floors"). Setting the sequence of thresholds can be tricky, quite tricky in some cases, because the problem's landscape becomes sparser and sparser as the floor rises, and consequently adequate exploration may be impeded. DTO appears to work quite well with highly multimodal objective functions such as Schwefel's Problem 2.26 (see Appendix) because there is sufficient topology at every threshold to guide the search. When the landscape is too flat, however, DTO may miss the remaining maxima and return the threshold value itself as its best fitness. For Schwefel 2.26 the very high level of multimodality is evident from the 2-D plot in Figure A3, pass #1, so DTO is expected to, and in fact does, perform very well.
But no such plot is available to assess how multimodal the Yagi problem is. In the problem's twelve dimensions its landscape might resemble the Schwefel 2-D's, but then again it might not. This conundrum is inherent in DTO, and it requires consideration in setting up the DTO shell. Section A.3 of this paper discusses this issue in greater detail and provides some examples of setting thresholds by calculation.
For the Yagi problem, however, the first approach is setting the thresholds by enumeration, because the fitness function is readily bounded, at least approximately, even though its maximum value is unknown. The best VSWR value of course is 1, and a reasonable maximum gain is, say, 15 dBi midband and 10 dBi at the band edges, these values being based on experience with this type of array. Inserting them into Equation (1) yields a maximum fitness of Fmax = 60. The CFO runs with G < 0 and Elitism suggest that reasonable best fitnesses probably lie in the range 55 to 60, so the highest threshold should be placed far enough below this range that CFO can still effectively explore DS. Again, because CFO is deterministic it is straightforward to run test cases to determine a suitable threshold sequence, whereas a stochastic GSO might make this impossible from a practical point of view. Table 2 shows results for runs with all three added extensions: G < 0, Elitism, and DTO. Elitism was introduced in two ways, step-by-step and run-to-run. For step-by-step elitism the four best probes at each step were pseudo randomly injected into the next step, whereas for run-to-run elitism the single best probe in each run was injected as the last probe in the next run. Five DTO passes were made. The table shows the enumerated thresholds and the number of probes and time steps at each threshold. These run parameters are followed by the best CFO-returned fitness, and the Yagi's maximum gain, Variable Z0 feedpoint impedance, and corresponding VSWR range.
The data in Table 2 show that adding DTO for the most part substantially increased the best fitness, from 54.466 to 56.641, a change of 2.175 or about 4%.
The data also show the pitfalls of choosing thresholds poorly, specifically Yagis #13 and #18. Yagi #13's performance is quite poor because CFO was unable to locate any maxima that were not on the threshold itself (value of 50). In this case the last DTO threshold of 50 was simply too high, which depressed CFO's exploration to the point of its not discovering any other maxima. Yagi #18 was run to investigate the effect of removing Elitism entirely. In this case the threshold values and numbers of probes and time steps used for Yagi #14 were replicated, but the DTO passes were made only with G < 0, no Elitism. As in the previous case, CFO was unable to locate any maxima above the threshold value of 45, and the result was an array with extremely poor performance. Table 3 summarizes the best fitness data and run statistics for the eighteen Yagi designs that used negative gravity, elitism, and DTO, but no seed probe. DTO was not included for the first twelve runs (through Yagi #10), but it was included for Yagis #11 through 18. The average fitness for the first set was 49.710; including DTO improved that statistic substantially, to 55.476 for the second.
The total number of NEC runs is included as a measure of computational effort; it ranges from a low of just over twenty thousand to a high slightly above two and one half million. While the longest run did provide the best overall fitness, 56.641, its length raises the important question of how much fitness improvement is worth the additional computational effort. NEC runs averaged 0.037129 sec for the work reported here.
Adding a seed probe improves the best fitness even more as shown in Table 4.
Each run was seeded with a probe having the coordinates of the best-fitness probe from the previous run. For example, the seed probe for Yagi #25 was the best probe located by Run #24. Yagi #19 was seeded with a probe from a run that did not use negative gravity, elitism, or DTO. Except for the shortest run, Yagi #21, the best fitness increased monotonically using seeded runs. The benefit of starting a search in the vicinity of a previous global maximum is apparent from these data, and seeding would be a recommended approach for any GSO.
Wireless Engineering and Technology
All the DTO results presented for Yagis #11 through 18 were computed using specified thresholds, as shown in Table 2. But that approach may not be the best.
The thresholds for runs 19 through 27 were calculated, rather than enumerated. Table 5 shows threshold and best fitness data for Yagi #19 and for Yagi #27, the data for the other runs in Table 4 all being similar. Each run started with five probes, and the number of probes was doubled on each successive threshold.
The thresholds were computed as Tk = Fmin + C (Fmax − Fmin), where Fmax and Fmin are the maximum and minimum fitnesses, and C = 0.2 for k = 1, C = 0.8 for k > 1. As the data in Table 4 show, by this run the best fitness has essentially saturated. It is again apparent that a deterministic GSO allows the user to quickly converge on a run setup that minimizes computer usage while homing in on an optimized design. In this case, as soon as saturation of the best fitness is clear, there is no point in doing additional calculations because they will not provide significantly better results. Table 6 provides a summary of how the best fitness is improved by extending CFO with Negative Gravity, Elitism, and DTO. The overall improvement compared to the reference run in [1], in which no extension was used, is quite dramatic, just under 20%. It is clear that CFO performs better as a global search and optimization algorithm when these extensions are included.
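The calculated-threshold schedule can be sketched compactly. The formula assumed here is Tk = Fmin + C (Fmax − Fmin) with C = 0.2 on the first pass and 0.8 thereafter, together with the probe count doubling from five on each successive threshold, as described for runs 19 through 27:

```python
def dto_threshold(f_max, f_min, k):
    """Calculated DTO threshold for pass k (assumed form:
    T_k = F_min + C*(F_max - F_min); C = 0.2 for k = 1, else 0.8)."""
    c = 0.2 if k == 1 else 0.8
    return f_min + c * (f_max - f_min)

def probes_for_pass(k, n0=5):
    """Probe count doubled on each successive threshold, starting from n0."""
    return n0 * 2 ** (k - 1)
```

The small first-pass C keeps the initial floor conservative while the landscape is still rich; the larger C on later passes compresses aggressively once the fitness range has narrowed, while the doubling probe count compensates for the flatter topology.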

Design or Optimization?
These observations raise a very important practical question about how best to design the "best" antenna, whether it be a Yagi array or any other antenna. Should it be "optimized," as was the sequence of Yagi designs discussed thus far, or should it be "designed" by specifying minimum performance criteria which, if met, terminate the design process? This activity is the "D" in global search "D/O," design and optimization. The fact is, again as a practical matter, it almost always will be quicker, and will utilize fewer resources, to set up the antenna problem as a design problem instead of as an optimization problem. And to that end, a deterministic design algorithm makes all the difference in the world because the antenna engineer has complete control of every aspect of the process, nothing being "left to chance."
As an example, a design run was made with the objectives of midband gain of at least 13 dBi and midband VSWR less than 1.5 relative to the Variable Z0-computed feedpoint impedance. Rather than use the fitness function in Equation (1), the fitness in Equation (2) was used instead, where the variables have the same meanings as before and the new variable gt is the target midband gain. Using the same run parameters that were used for Yagi #21, this design run achieved its midband gain and VSWR objectives. A very important difference between Yagi #21 and this example design run is the computational effort. The number of NEC runs for Yagi #21 was 6820, whereas only 700 NEC runs were required for the design run. Of course, the Yagi #21 run was made at three frequencies, not one, and an entirely different fitness function was used. Nevertheless it is unlikely that these differences account for requiring nearly ten times as many NEC runs. Rather, it is far more likely that the explanation lies in performing optimization instead of design.
As a general proposition, all things being equal, design is likely to be much quicker than optimization, and this design run is an example of that effect.
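The design-mode workflow amounts to a loop that terminates at the first candidate meeting the minimum specifications. In the sketch below, evaluate() stands in for the NEC-based evaluation, and all names and the toy numbers are illustrative, not the paper's:

```python
def design_run(evaluate, candidates, g_t=13.0, vswr_limit=1.5):
    """Design ("D") rather than optimization ("O"): stop as soon as the
    minimum performance criteria are met instead of exhausting the search.
    evaluate(x) -> (midband_gain_dBi, midband_vswr); a stand-in for NEC."""
    n_evals = 0
    for x in candidates:
        n_evals += 1
        gain, vswr = evaluate(x)
        if gain >= g_t and vswr <= vswr_limit:
            return x, n_evals          # design objectives met; terminate
    return None, n_evals               # no candidate met the specs
```

Because the loop stops at the first acceptable design rather than ranking every candidate, its evaluation count is bounded by how early an acceptable design appears, which is the effect reflected in the 700-versus-6820 NEC-run comparison above.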
This example also serves to highlight the importance of a deterministic GSO. Suppose the fitness function of Equation (2) were modified by adding two exponents, m and n, to its terms. How should values be assigned to these parameters? There is no obvious theoretical guidance for assigning any specific values, so the inevitable approach is trial and error. The problem with trial and error is that there are dozens of combinations that might be tried, and evaluating them on a comparative basis, that is, determining which values work better than others, would take hundreds of runs of a stochastic GSO and very likely tens of thousands of NEC runs. And this is only one change that might be made. It might be desirable or necessary to try many altogether different fitness functions, which raises the same question each time: which approach is better, a stochastic GSO or a deterministic one? The answer, again, is obviously the deterministic one.
In order to compare antenna performances, Figures 2-6 show the NEC-computed antenna performance data for Yagi #21 and for the Design Run Yagi. Both Yagis are quite good antennas, but depending on the application one may be preferred over the other. The figures are self-explanatory as to their meaning, and they permit a head-to-head comparison of these two arrays.

Conclusion
This paper has investigated the effect of adding three extensions to the GSO Central Force Optimization as applied to Yagi-Uda array design and optimization (D/O), those extensions being: 1) a small measure of pseudo randomly injected Negative Gravity (G < 0); 2) two types of Elitism, step-by-step and run-to-run; and 3) Dynamic Threshold Optimization. The basic CFO algorithm does not include any of these extensions. This paper extends the work reported in a previous paper that considered only G < 0 and which showed a significant performance improvement over a range of optimized arrays. Still better optimization results are obtained by adding Elitism and DTO to the mix. While this work was limited to the D/O of 6-element Yagis, the reasonable conclusion based on these data is that any antenna D/O, indeed any GSO problem, antenna or not, utilizing CFO as the GSO engine will benefit by adding all three extensions, probably substantially. Adding Elitism to CFO with negative gravity alone improved the best fitness by nearly 11%. Adding DTO with enumerated thresholds and no seed probe increased the best fitness by approximately another 4%. Still further improvement is possible by including a seed probe (the coordinates of the best-fitness location from a previous run) and by using calculated instead of enumerated DTO thresholds. These modifications increased the best returned fitness from 56.641 to 57.067. For comparison, from [1] the best fitness without Negative Gravity, Elitism, or DTO is 47.8932, whereas when all three are added the best fitness increases to 57.0670, an overall increase of 19.16%.

Appendix: Dynamic Threshold Optimization
The following material is adapted from the author's arXiv post: http://arXiv.org/abs/1206.0414.

A.1. Problem Statement
In a bounded hyperspace ("decision space," DS) defined by x_i_min ≤ x_i ≤ x_i_max, i = 1, ..., Nd, locate the global maxima of an objective function f(x), x = (x_1, ..., x_Nd). The value f(x) is a point's "fitness," and the problem's "landscape" (topology over DS) is the hypersurface defined by f(x) over DS.

A.2. Dynamic Threshold Optimization: Theory
DTO conceptually is quite simple. Figure A1 is a schematic illustration of how it works in a one-dimensional (1-D) DS. The objective function f(x) is multimodal with many local maxima and a single global maximum, and the problem is to locate that maximum (coordinates and value). DTO bounds f(x) from below using a series of successively increasing "thresholds," in effect compressing DS in the direction of the dependent variable (from "below") instead of, as is sometimes done, shrinking DS by reducing an independent variable's domain (from the "sides"). Locating the global maximum is easier in the compressed DS because unwanted local maxima are progressively filtered out as the "floor" (threshold) rises. Because DTO is a general geometric technique, it is algorithm-independent and can be used with any global search and optimization algorithm. Although DTO is described in the context of maximization, it can be applied to minimization as well with obvious modifications. Procedure OPT[·] is a global search and optimization (GSO) routine that returns 1) the Nd coordinates x* of a maximum of a function q(x), 2) its value q*, and 3) a minimum value q_min (no coordinates).

OPT[·] may comprise any search and optimization algorithm (singly or in combination). On each successive pass, DTO changes the topology of the objective function by redefining it using the auxiliary function g(x). The DTO run continues until a user-specified termination criterion is met (often a maximum number of passes or fitness saturation). Its pseudocode appears in Figure A2.
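A compact sketch of the DTO shell follows. The OPT interface opt(g, n) -> (x_best, g_best, g_min) mirrors the description above; the uniform sampler used as OPT here is a deliberately simple stand-in (not CFO), and the threshold-update constant c is this sketch's assumption:

```python
import numpy as np

def opt_sampler(g, n, rng):
    """Stand-in OPT[.]: uniform sampling of a 2-D DS, returning the best
    point, its value, and the minimum sampled value (per the DTO interface)."""
    xs = rng.uniform(-500.0, 500.0, size=(n, 2))
    vals = np.array([g(x) for x in xs])
    i = int(vals.argmax())
    return xs[i], float(vals[i]), float(vals.min())

def schwefel_226(x):
    """Schwefel Problem 2.26 written for maximization."""
    return float(np.sum(x * np.sin(np.sqrt(np.abs(x)))))

def dto(f, n_passes=6, n0=8, c=0.5, seed=3):
    """DTO shell: repeatedly floor the landscape and re-run OPT[.]."""
    rng = np.random.default_rng(seed)
    T, n = -np.inf, n0
    best_x, best_val, thresholds = None, -np.inf, []
    for _ in range(n_passes):
        g = lambda x, T=T: max(f(x), T)   # auxiliary (floored) function
        x_star, g_star, g_min = opt_sampler(g, n, rng)
        if g_star > best_val:
            best_x, best_val = x_star, g_star
        thresholds.append(T)
        # raise the floor part-way toward the best value found so far
        T = g_min + c * (best_val - g_min)
        n *= 2                            # more samples as DS flattens
    return best_x, best_val, thresholds
```

Because each new floor is set between the current pass's minimum and the running best, the threshold sequence is non-decreasing by construction, and doubling n at each pass compensates for the shrinking amount of topology left to sample.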

A.3. Setting DTO Thresholds
How to set DTO's starting threshold, and how it is updated, are determined by the algorithm designer. It can be done by enumeration, by calculation, or by some combination of the two. One obvious starting value is the minimum fitness returned by OPT[·], that is, T_0 = f_min (threshold by calculation). This appears to be a good default choice when there is no other information about the objective function that permits specifying specific threshold values (enumeration). But updating the thresholds T_k as DTO progresses is more problematic because of the floor's profound impact on f(x)'s landscape. More and more local maxima are removed from the landscape as the threshold rises, so that effectively sampling DS becomes progressively more difficult (the topology becomes flatter and flatter). In the limit of the floor rising to a global maximum, DS collapses to a plane, and there is no information available for performing a search. How well DS can be explored thus becomes more and more of an issue as the threshold rises, and the search algorithm's exploration characteristics become very important. One approach to setting T_k is shown in Figure A1, in which each successive threshold is set to the best returned fitness, T_k = g*_k, but this approach has not worked well in numerical tests because a good GSO often sets the threshold too high too early in the run. The 2-D example that follows employs a different approach, and it clearly illustrates the effect of flattening the landscape too much.
The constant C in this case is included to keep the threshold far enough below the global maximum that the landscape is not compressed into a plane. This formula for setting the threshold was chosen as much for its ability to illustrate the DTO concept (see the plots below) as for its ability to produce good results, and no doubt there are countless other approaches to setting the threshold that will work as well or better.
The number of CFO probes was initialized to Np = 4, and it was doubled on each successive pass in order to enhance CFO's exploration. Each run comprised Nt = 25 time steps. While CFO is an inherently deterministic search and optimization metaheuristic, in this case it was implemented with a random initial probe distribution (IPD) instead of the usual "Probe Line" IPD [11]. The reason for this change, again, was to enhance CFO's exploration in the progressively flatter landscape.
The best overall fitness returned by the DTO/CFO algorithm, the pass-by-pass evolution of the DTO threshold, and CFO's best fitness at each pass are summarized in Table A1. Figure A3 shows how DTO compresses the landscape as its threshold increases.
The objective function is plotted at each of the 10 passes. The first pass (no threshold) shows the Schwefel Problem 2.26's complex landscape. It is highly multimodal with many similar amplitude local maxima. As DTO progresses more and more of these maxima are filtered out because the floor is higher and higher relative to the single global maximum. At pass #8, for example, 16 local maxima are visible, whereas at thresholds #9 and 10, respectively, the number of maxima falls to 8 and to 3. On the last pass the global maximum is clearly visible on the right side of the plot.
DTO also was tested against Schwefel 2.26 in 30D. Six passes were made using the linear threshold scheme described above, but with 0.

One possible approach might be to implement DTO as a group of quasirandom (QR) samplings of DS at each DTO threshold (any sampling scheme can be used, but QR is attractive because these sequences are deterministic). This approach is especially attractive because of its simplicity. The data in each group could be used to develop statistics characterizing DS's topology at that threshold. Those statistics, in turn, could provide a measure of the likelihood of locating maxima. As DTO's threshold moves up, any peak at or below the floor cannot be a global maximum (unless the landscape is compressed into a plane). As the problem's topology is progressively compressed, QR sampling will return more and more sample points on the floor, that is, points at which there is no maximum of any kind. Repeatedly sampling a given threshold develops a picture of where the current maxima (local and global) might be located.
In the limit, every point on the floor would be visited, and the global maxima located precisely. Of course, only a finite number of runs can be made, but it seems likely that very good statistics could be developed fairly quickly as DTO's threshold increases. At a minimum, this approach should be able to provide a reliable estimate of the likelihood of locating global maxima. All of these remarks are, of course, pure speculation at this point. Whether implementing some of these ideas will lead to a new and effective optimization methodology is an open question, but DTO appears to hold enough promise to be investigated further. One approach might be to initialize DTO with a deterministic algorithm such as CFO with a Probe Line IPD, because it tends to converge quickly to the vicinity of global maxima, followed by QR-based exploration as described above (or possibly a stochastic algorithm) because of potentially improved exploration.
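A minimal version of the QR-sampling idea is sketched below, using a 2-D Halton sequence (bases 2 and 3) to estimate what fraction of DS lies on the floor at a given threshold. The objective, bounds, and sample count are illustrative assumptions, not values from the runs reported here:

```python
import math

def radical_inverse(i, base):
    """Van der Corput radical inverse of integer i in the given base."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def halton_2d(n, skip=1):
    """First n points of the 2-D Halton sequence (bases 2 and 3) in [0,1)^2."""
    return [(radical_inverse(i, 2), radical_inverse(i, 3))
            for i in range(skip, skip + n)]

def floor_fraction(f, T, bounds, n_samples=2000):
    """Estimate the fraction of DS at or below the floor T via QR sampling."""
    (x0, x1), (y0, y1) = bounds
    on_floor = 0
    for u, v in halton_2d(n_samples):
        x = x0 + u * (x1 - x0)   # map the unit square onto DS
        y = y0 + v * (y1 - y0)
        if f((x, y)) <= T:
            on_floor += 1
    return on_floor / n_samples

# As the threshold rises, more and more QR samples land on the floor.
f = lambda x: x[0] * math.sin(math.sqrt(abs(x[0]))) + x[1] * math.sin(math.sqrt(abs(x[1])))
bounds = ((-500.0, 500.0), (-500.0, 500.0))
for T in (0.0, 400.0, 700.0):
    print(f"T = {T:5.1f}: {floor_fraction(f, T, bounds):.3f} of samples on the floor")
```

Because the Halton points are deterministic, repeated samplings at a given threshold are repeatable, which is the property that makes QR attractive for building the topology statistics described above.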

A.6. Final Remarks
DTO appears to be an effective technique for adaptively changing the topology of the decision space in a multidimensional search and optimization problem. DTO should be useful with any search and optimization algorithm. Bounding the objective function from below removes local maxima, and as the threshold or "floor" is increased, more and more local maxima are eliminated. In the limit, the problem's landscape collapses to a plane whose value ("height") corresponds to the value of the global maximum. In that case, DS contains no information as to the global maximum's location, but the maximum's value is known precisely.
In order to preserve location information, the DTO threshold should not be set too high, thereby retaining enough structure for efficient DS exploration. There are many unanswered questions concerning how DTO should be implemented. For example, there almost certainly are better ways to set the threshold than the simple linear scheme used here. Thresholds that are progressively closer together probably will work better.

Another question arises in connection with what optimization algorithm should be used. Even though DTO is algorithm-independent, it may work best when different algorithms are combined to take advantage of their different strengths and weaknesses. For example, CFO, which is inherently deterministic, often converges very quickly to the vicinity of a global maximum (good exploitation). But its very determinism inhibits exploration in decision spaces with "sparse" structure (mostly planar, few local maxima). By contrast, stochastic algorithms (for example, Particle Swarm, Ant Colony, or Differential Evolution) exhibit better exploration, but they completely lack repeatability when implemented using the true random variables in their underlying equations (computed from probability distributions). Combining a deterministic algorithm used first with a stochastic one used later may provide better results by emphasizing exploitation early in the run and exploration later in the run. Or, in the case of CFO, it might be started deterministically and then switched to stochastic mode (recall that the CFO used here was stochastic for the first 2D Schwefel 2.26 run and deterministic for the subsequent 30D/2D cases).

Another improvement might utilize "lateral" DS compression at one of DTO's thresholds. It may be possible in the DTO-compressed landscape to reliably determine the global maximum's approximate location and, based on that information, to shrink DS "from the sides" or "laterally" (reduce the domain of definition), making it easier to search the smaller DS.
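The "lateral" compression idea might look like the following sketch: once the floored landscape suggests an approximate location for the global maximum, each coordinate range of DS is shrunk around that point. The shrink factor and the way the center estimate is obtained are illustrative assumptions, not part of any implementation reported here:

```python
def compress_laterally(bounds, center, shrink=0.5):
    """Shrink each decision-space coordinate range around a center point.

    Each axis keeps `shrink` of its original width, clipped so the new
    interval stays inside the original bounds.
    """
    new_bounds = []
    for (lo, hi), c in zip(bounds, center):
        half = 0.5 * shrink * (hi - lo)
        new_lo = max(lo, c - half)
        new_hi = min(hi, c + half)
        new_bounds.append((new_lo, new_hi))
    return new_bounds

# Example: shrink [-500, 500]^2 around an estimated peak near (421, 421).
print(compress_laterally([(-500.0, 500.0), (-500.0, 500.0)], (421.0, 421.0)))
# → [(171.0, 500.0), (171.0, 500.0)]
```

The smaller DS can then be searched with the same (or a different) algorithm, with every probe concentrated where the evidence says the global maximum lies.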
If DTO is a novel approach to optimization, as the author believes it is, then all of these possibilities merit consideration as fruitful areas of research, and the author hopes that these remarks will encourage such work.