Suggestions for a global optimization contest

In May 1996, the First International Contest on Evolutionary Optimization made a first attempt at a coordinated assessment of different global optimization algorithms, at least for the class of bound-constrained problems (and traveling salesman problems) and the class of evolutionary optimization algorithms. Its sequel is the Second International Contest on Evolutionary Optimization.

The following are some suggestions that would serve to make a global optimization contest most useful to the scientific community.


How should the test problems be selected?

Apart from well-known standard problems, a good test set should also contain

- problems with a penalty-type structure,
- problems with a single minimum only, and
- problems with very wide bounds on the variables.

The results from the various categories (standard, penalty, single minimum, wide bounds) could be weighted according to some scheme made public with the contest announcement.
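As a purely illustrative sketch of such a weighting scheme (the category weights, the per-category scores, and their scale are invented for illustration and are not part of this proposal), a total score could be formed as a weighted sum:

# Hypothetical weights, published with the contest announcement.
weights = {"standard": 0.4, "penalty": 0.2, "single minimum": 0.2, "wide bounds": 0.2}

# Hypothetical per-category scores of one entry (lower = better), e.g. average
# normalized evaluation counts; the numbers are made up for illustration.
scores = {"standard": 1.8, "penalty": 2.5, "single minimum": 1.1, "wide bounds": 3.0}

total = sum(weights[c] * scores[c] for c in weights)
print("weighted total score:", total)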

I think that for a contest, each test problem should contain a parameter that may be varied, and the basis of evaluation should be the average count for solving, say, five problem instances generated from a fixed set of parameter values. The contest announcement should specify the parametric form and a range for the parameter in the problem description, but the actual parameter set to be used should be kept secret until all submissions are in. Then tuning to the specific instances would be impossible, tuning the algorithm to the problem would be much harder, and performance could be expected to be closer to that on an unknown future problem.
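The following is a minimal sketch of this evaluation protocol in Python; the shifted Rastrigin family, the random-search stand-in for a contest entry, the solution tolerance, and the evaluation budget are all assumptions chosen for illustration, not part of the proposal.

import numpy as np

def shifted_rastrigin(x, shift):
    """Rastrigin test function whose global minimizer is shifted by 'shift'."""
    z = np.asarray(x) - shift
    return 10.0 * len(z) + np.sum(z**2 - 10.0 * np.cos(2.0 * np.pi * z))

def random_search(f, bounds, target, max_evals=100000, seed=None):
    """Toy stand-in for a contest entry: returns the number of function
    evaluations needed to reach 'target', or max_evals if the budget runs out."""
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    best = np.inf
    for k in range(1, max_evals + 1):
        x = lo + rng.random(len(bounds)) * (hi - lo)
        best = min(best, f(x))
        if best <= target:
            return k
    return max_evals

# The parametric form (a shift of the minimizer) and its range, say [-2, 2],
# would be published; the actual shifts used for scoring would stay secret.
bounds = [(-5.12, 5.12), (-5.12, 5.12)]
secret_shifts = [-1.3, 0.4, 1.7, -0.8, 2.0]   # five secret problem instances
counts = [random_search(lambda x: shifted_rastrigin(x, s), bounds,
                        target=0.5, seed=i)
          for i, s in enumerate(secret_shifts)]
print("average evaluation count over the five instances:", np.mean(counts))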


Two relevant submission criteria

An important problem one faces when applying an algorithm to an expensive practical problem is that one generally does not have the resources to experiment with the parameters and needs success at the first attempt. Thus each algorithm should come with an easy-to-use recipe for picking all tuning parameters, and the methods should be compared on the basis of this default setting, at least as one of the criteria for quality.
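As a minimal sketch of what such a default recipe might look like in code (the parameter names and the dimension-based rules are hypothetical, not taken from any actual contest entry):

from dataclasses import dataclass

@dataclass
class SolverSettings:
    """Tuning parameters of a hypothetical contest entry."""
    population_size: int
    mutation_rate: float
    max_evaluations: int

def default_settings(dimension: int) -> SolverSettings:
    """Published default recipe: every tuning parameter is derived from the
    problem dimension alone, so no tuning experiments are needed for a first run."""
    return SolverSettings(
        population_size=10 * dimension,      # illustrative rule of thumb
        mutation_rate=1.0 / dimension,
        max_evaluations=20000 * dimension,
    )

# First-shot use on a 10-dimensional problem: no hand tuning required.
print(default_settings(10))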

Another important point, especially for methods whose details are easy to vary, is that a comparison always compares specific implementations, not `methods'. Sometimes changing what seem to be minor details can have a large positive or negative effect on the quality of a method. Therefore, part of the submission requirements should be a specific implementation that is made available online. Then people can verify the results for themselves, and fast dissemination of the fruits of the best investigations is guaranteed.

This also provides a basis for selecting a new test set for a subsequent contest. Indeed, one can run a candidate test set with the old algorithms and pick those problems where the old algorithms deviate in performance from the general trend, or where all of them perform disappointingly, since these are the areas where the need for improvement is most pressing.
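A small sketch of such a selection rule (the results matrix, the logarithmic comparison scale, and the thresholds are all hypothetical):

import numpy as np

# Hypothetical results: rows = previously submitted algorithms, columns = candidate
# problems; entries = evaluation counts needed to solve (large = poor performance).
results = np.array([
    [1200,  90000, 3500, 100000,  800],
    [1500, 100000, 3300, 100000,  750],
    [ 900,  20000, 3600, 100000,  900],
])
budget = 100000                       # a count equal to the budget means "not solved"

log_counts = np.log10(results)        # compare effort on a logarithmic scale
spread = log_counts.max(axis=0) - log_counts.min(axis=0)   # disagreement per problem
all_poor = (results >= budget).all(axis=0)                 # no algorithm solved it

# Keep problems where the algorithms disagree strongly or where all of them fail:
# these are the areas where improvement is most needed.
selected = np.where((spread > 0.5) | all_poor)[0]
print("problems selected for the next contest:", selected)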



Arnold Neumaier (Arnold.Neumaier@univie.ac.at)