Date: 12 May 1997 16:14:11 GMT
From: Lester Ingber
Subject: Global Optimization Software Evaluation

In article <337728E3.41C6@cma.univie.ac.at>, Arnold Neumaier wrote:
:%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
:
:Global Optimization Software Evaluation
:---------------------------------------
:
:We are planning a comparative evaluation of a number of currently
:available public domain global optimization programs for optimizing
:smooth functions
: a) without constraints,
: b) with bound constraints, and
: c) with general smooth constraints,
:provided the software is
: - publicly available in Fortran, C, C++, or Matlab source code,
: - running on UNIX platforms, and
: - not dependent on commercial software (except standard compilers).
:Codes, taken in the form as available via the WWW, will be evaluated
:on our DEC Alpha machines, later perhaps also on other machines.

While the intent is well meaning, the results of such comparisons most
likely will be very misleading.  Nonlinear systems typically are
non-typical, and the ability of many, if not most, global optimization
codes to deal robustly with many classes of problems requires that they
support tuning for such systems.  Yes, this indeed means that global
optimization in general cannot be applied in a "black-box" fashion as
can many quasi-linear algorithms.

Not only does tuning make an important difference in efficiency (number
of function calls) and robustness (ability to get the global minimum)
for many systems that might superficially seem similar, but tuning
often makes a tremendous difference across complex systems.  Some
examples of the importance of tuning when using Adaptive Simulated
Annealing (ASA), even on "toy" problems, are at
   http://www.ingber.com/MISC.DIR/asa_examples
   ftp://ftp.ingber.com/MISC.DIR/asa_examples
and some papers that illustrate the even greater importance of tuning
on complex systems are at
   http://www.ingber.com/asa_papers.html
   ftp://ftp.ingber.com/asa_papers
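To make concrete what "tuning" means here, below is a minimal
simulated-annealing sketch.  This is NOT the ASA code itself; the cost
function, parameter names, and values are all illustrative assumptions.
Even in this toy, the initial temperature, cooling rate, and step scale
jointly determine both the number of cost-function calls and whether
the global minimum is reached at all.

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define DIM 2

/* Toy multimodal cost function (Rastrigin-like, global minimum 0 at
 * the origin); a real application would substitute its own. */
static double cost(const double *x) {
    double f = 0.0;
    int i;
    for (i = 0; i < DIM; i++)
        f += x[i] * x[i] - 10.0 * cos(2.0 * M_PI * x[i]) + 10.0;
    return f;
}

static double urand(void) { return rand() / (double) RAND_MAX; }

int main(void) {
    /* Tuning knobs -- problem-dependent, found by experimentation:  */
    double T0 = 10.0;        /* initial temperature: too low and the
                              * search freezes into a local minimum  */
    double anneal = 0.999;   /* geometric cooling rate: too fast and
                              * the search freezes prematurely (ASA
                              * itself uses the faster schedule
                              * T(k) = T0 exp(-c k^(1/DIM)))         */
    double step = 1.0;       /* trial-move scale, shrunk with T      */
    long max_calls = 100000; /* budget of cost-function evaluations  */

    double x[DIM] = { 3.0, -4.0 }, best[DIM], trial[DIM];
    double fx, fbest, ft, T = T0;
    long calls;
    int i;

    srand(12345);
    fx = fbest = cost(x);
    for (i = 0; i < DIM; i++) best[i] = x[i];

    for (calls = 1; calls < max_calls; calls++) {
        for (i = 0; i < DIM; i++)
            trial[i] = x[i] + step * (T / T0) * (2.0 * urand() - 1.0);
        ft = cost(trial);
        /* Metropolis acceptance: uphill moves allowed while T is high */
        if (ft < fx || urand() < exp((fx - ft) / T)) {
            for (i = 0; i < DIM; i++) x[i] = trial[i];
            fx = ft;
            if (fx < fbest) {
                fbest = fx;
                for (i = 0; i < DIM; i++) best[i] = x[i];
            }
        }
        T *= anneal;
    }
    printf("best cost %g at (%g, %g)\n", fbest, best[0], best[1]);
    return 0;
}

With too low a T0 or too fast an anneal, the same code on the same
function will typically freeze into a local minimum; the real ASA code
exposes many more such options precisely so that users can tune them
per problem.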
Does this mean that comparisons are worthless?  No, but I think
comparisons are really only worthwhile for a given problem.
Unfortunately, this means that a given researcher/group may have to
learn and try several algorithms to determine which is best for the
given problem.  Indeed, for difficult nonlinear problems, "there is no
free lunch!"  I certainly don't have time to optimize a full suite of
problems, each of which may present varying degrees of difficulty in
tuning.  (I'd rather spend the time on fewer, harder problems that
have some physical significance -- but that's just my own choice.)

I think the evaluation you will carry out will give the most support
to those codes whose authors take the most time preparing them, or it
will give the most support to sub-classes of problems that yield most
easily to particular algorithms (which might be helpful, if it is or
can be made clear just what these sub-classes are).  In such cases,
unfortunately, previous success will not be a good indicator of future
performance on new problems.

I think the most useful public archive would be one with references to
many problems solved with various algorithms, so that a given
researcher might see whether his/her problem is at all similar to one
already solved (although the above caveats still hold true).
Unfortunately, sometimes for understandable reasons (e.g., not wanting
to invite many inquiries), most researchers do not make their codes
publicly available so that all can see exactly how they solved their
problem.  Even in published papers where the algorithm is well
specified, the construction of the cost function and/or how faithful
it is to a given physical system often is not specified at all, and
the latter can be much more important than the former in determining
success, e.g., as is the case for Travelling Salesman Problems (TSPs).

Lester

-- 
/* RESEARCH                          ingber@ingber.com *
 * INGBER                         ftp://ftp.ingber.com *
 * LESTER                        http://www.ingber.com/ *
 * Prof. Lester Ingber __ PO Box 857 __ McLean, VA 22101-0857 __ USA */