Slides and/or Videos of Lectures (usually unpublished)
It is God's privilege to conceal things,
but the kings' pride is to research them.
(Proverbs 25:2)
A= Analysis, C = Combinatorics, F = Foundations,
L = Linear Algebra, N = Numerical Analysis, O = Optimization,
P = Physics/Chemistry, S = Statistics
* = book
For preprints of the papers listed below,
abstracts and ps and/or pdf files are available online;
for the published version see the references given.
For my remaining publications,
scanned copies of the published version are at
list of publications.
For manuscripts with an e-print number, you can also get the latex
source (of some version of the paper)
from an e-print archive such as
http://xxx.uni-augsburg.de
I do not send out paper copies of my manuscripts.
If you have problems downloading, decoding, or printing a file, look at
downloading/printing problems?
Manuscripts submitted for publication
O000.
M. Kimiaei, A. Neumaier and P. Faramarzi,
New subspace method for unconstrained black box optimization,
Manuscript (2021).
pdf file (K),
C000.
S. Penjic and A. Neumaier,
A unified view of inequalities for distance-regular graphs, Part II,
Manuscript (2018).
pdf file (702K),
In this paper we continue to study the language of t-point counts. In
particular, we consider the case where Part I did not lead to a good
diameter bound. We also study induced polygons of even and odd length,
where we prove Koolen's inequalities under assumptions different from
those of Koolen. We obtain new inequalities for graphs of even girth.
O000.
M. Kimiaei and A. Neumaier,
Efficient global unconstrained black box optimization,
Manuscript (2020).
pdf file (665K),
For the unconstrained global optimization of black box functions, this
paper presents a new stochastic algorithm called VSBBO.
In practice, VSBBO matches the quality of other state-of-the-art
algorithms at finding, with reasonable accuracy, a global minimizer in
both small and large dimensions, or at least, in the majority of cases,
a point as good as that found by competing algorithms.
For smooth, everywhere defined functions, it is proved that, with
probability arbitrarily close to 1, one finds with
O(n ε^-2)
function evaluations a point with gradient 2-norm at most ε.
In the smooth convex case, this number improves to
O(n ε^-1)
and in the smooth strongly convex case to
O(n log ε^-1).
This matches known recent complexity results for reaching a slightly
different goal, namely the expected gradient 2-norm at most ε.
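The setting of the abstract, minimizing a black box function using only function values, can be illustrated by a generic random-direction descent loop. This is only a hedged sketch with an ad-hoc step-size rule of my own choosing; it is not the VSBBO algorithm and carries none of its complexity guarantees:

```python
import random
import math

def random_direction_descent(f, x, n_evals=2000, step=1.0):
    """Generic stochastic black-box minimizer: try random unit
    directions, keep improving points, shrink the step on failure.
    Illustrative only."""
    fx = f(x)
    evals = 1
    while evals < n_evals and step > 1e-12:
        d = [random.gauss(0.0, 1.0) for _ in x]
        norm = math.sqrt(sum(di * di for di in d))
        trial = [xi + step * di / norm for xi, di in zip(x, d)]
        ftrial = f(trial)
        evals += 1
        if ftrial < fx:          # accept the improving point
            x, fx = trial, ftrial
        else:                    # otherwise shrink the step size
            step *= 0.9
    return x, fx

# usage: minimize a smooth convex quadratic in 5 dimensions
random.seed(0)
f = lambda x: sum(xi * xi for xi in x)
xbest, fbest = random_direction_descent(f, [1.0] * 5)
```

Such a loop needs no gradients at all, which is the defining constraint of the black box setting.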
O000.
A. Neumaier and B. Azmi,
Line search and convergence in bound-constrained optimization,
Manuscript (2019).
pdf file (384K),
The first part of this paper discusses convergence properties of a new
line search method for the optimization of continuously differentiable
functions with Lipschitz continuous gradient.
The line search uses (apart from the gradient at the current best point)
function values only.
After deriving properties of the new, in general curved,
line search, global convergence conditions for an unconstrained
optimization algorithm are derived and applied to prove the global
convergence of a new nonlinear conjugate gradient (CG) method.
This method works with the new, gradient-free line search -- unlike
traditional nonlinear CG methods that require line searches satisfying
the Wolfe condition.
In the second part, a class of algorithms is developed for bound
constrained optimization. The new scheme uses the gradient-free line
search along bent search paths. Unlike traditional algorithms for bound
constrained optimization, our algorithm ensures that the reduced
gradient becomes arbitrarily small.
It is also proved that all strongly active variables are found and
fixed after finitely many iterations.
A Matlab implementation of a bound constrained solver based on the new
theory is discussed in the companion paper
''LMBOPT - a limited memory program for bound-constrained
optimization'' by M. Kimiaei and the present authors.
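A line search that (apart from one gradient) works with function values only can be sketched as a backtracking scheme with a quadratic sufficient-decrease test. The acceptance rule below is my own illustrative choice, not the CLS line search of the paper or its companion:

```python
def gradient_free_line_search(f, x, d, fx, alpha0=1.0, q=0.5,
                              gamma=1e-4, max_iter=30):
    """Backtracking line search using function values only:
    accept the step alpha if it yields at least a quadratic decrease
    f(x + alpha*d) <= f(x) - gamma * alpha**2 * ||d||**2.
    Illustrative sketch only."""
    dd = sum(di * di for di in d)        # squared norm of the direction
    alpha = alpha0
    for _ in range(max_iter):
        trial = [xi + alpha * di for xi, di in zip(x, d)]
        ftrial = f(trial)
        if ftrial <= fx - gamma * alpha * alpha * dd:
            return alpha, trial, ftrial  # sufficient decrease achieved
        alpha *= q                       # otherwise backtrack
    return 0.0, x, fx                    # no acceptable step found

# usage on a quadratic, searching along the first coordinate direction
f = lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2
x0 = [0.0, 0.0]
alpha, xnew, fnew = gradient_free_line_search(f, x0, [1.0, 0.0], f(x0))
```

The quadratic decrease test replaces the Wolfe condition, which would require a gradient evaluation at every trial point.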
LP000.
A. Neumaier and A. Ghaani Farashahi,
Introduction to coherent quantization,
Manuscript (2018).
pdf file (347K),
arXiv:1804.01400
This paper is the second in a series of papers on coherent spaces and
their applications. It begins the study of coherent quantization - the
way operators in a quantum space can be studied in terms of objects
defined directly on the coherent space. The results may be viewed as a
generalization of geometric quantization to the non-unitary case.
Care has been taken to work with the weakest meaningful topology and
to assume as little as possible about the spaces and groups involved.
Unlike in geometric quantization, the groups are not assumed to be
compact, locally compact, or finite-dimensional. This implies that the
setting can be successfully applied to quantum field theory, where the
groups involved satisfy none of these properties.
The paper characterizes linear operators acting on the quantum space
of a coherent space in terms of their coherent matrix elements.
Coherent maps and associated symmetry groups for coherent spaces are
introduced, and formulas are derived for the quantization of coherent
maps.
The importance of coherent maps for quantum mechanics is due to the
fact that there is a quantization operator that associates
homomorphically with every coherent map a linear operator from the
quantum space into itself. This operator generalizes to general
symmetry groups of coherent spaces the second quantization procedure
for free classical fields. The latter is obtained by specialization
to Klauder spaces, whose quantum spaces are the bosonic Fock spaces.
A coordinate-free derivation is given of the basic properties of
creation and annihilation operators in Fock spaces.
Papers accepted for publication
Published papers
O199.
M. Kimiaei, A. Neumaier and B. Azmi,
LMBOPT - a limited memory method for bound-constrained optimization,
Math. Programming Comput., published online 10 January 2022.
pdf file (972K),
Recently, Neumaier and Azmi gave a comprehensive convergence theory
for a generic algorithm for bound constrained optimization problems
with a continuously differentiable objective function. It combines an
active set strategy with a gradient-free line search CLS along a
piecewise linear search path defined by directions guarded against
zigzagging.
This paper describes LMBOPT, an efficient implementation of this scheme.
It employs new limited memory techniques for computing the search
directions, improves CLS by various safeguards relevant when finite
precision arithmetic is used, and adds many practical enhancements in
other details.
The paper compares LMBOPT and many other solvers on the unconstrained
and bound constrained problems from the CUTEst collection and makes
recommendations as to which solver should be used when. Depending on
the problem class, the problem dimension, and the precise goal, the
best solvers are found to be LMBOPT, ASACG, and LMBFG-EIG-MS.
O198.
M. Kimiaei and A. Neumaier,
A new limited memory method for unconstrained nonlinear least
squares,
Soft Computing, published online 13 December 2021.
pdf file (K),
This paper suggests a new limited memory trust region algorithm for
large unconstrained black box least squares problems, called LMLS.
Main features of LMLS are a new non-monotone technique, a new adaptive
radius strategy, a new Broyden-like algorithm based on the previous
good points, and a heuristic estimation for the Jacobian matrix in a
subspace with random basis indices. Our numerical results show that
LMLS is robust and efficient, especially in comparison with solvers
using traditional limited memory and standard quasi-Newton
approximations.
C197.
A. Neumaier and S. Penjic,
On bounding the diameter of a distance-regular graph,
Combinatorica (2021), published online 25 November 2021.
pdf file (335K),
view-only final version
In this note we investigate how to use an initial portion of the
intersection array of a distance-regular graph to give an
upper bound for the diameter of the graph. We prove three new diameter
bounds. Our bounds are tight for the Hamming d-cube,
doubled Odd graphs, the Heawood graph, Tutte's 8-Cage and 12-Cage,
the generalized dodecagons of order (1,3) and (1,4),
the Biggs-Smith graph, the Pappus graph, the Wells graph, and the
dodecahedron.
NO196.
A. Baharev, H. Schichl, A. Neumaier, and T. Achterberg,
An exact method for the minimum feedback arc set problem,
ACM J. Exper. Alg. 26 (2021), 1.4.
pdf file (883K),
A feedback arc set of a directed graph G is a subset of its arcs
containing at least one arc of every cycle in G. Finding a feedback arc
set of minimum cardinality is an NP-hard problem called the minimum
feedback arc set problem. Numerically, the minimum set cover formulation
of the minimum feedback arc set problem is appropriate as long as all
simple cycles in G can be enumerated. Unfortunately, even those sparse
graphs that are important for practical applications often have
Omega(2^n) simple cycles. Here we address precisely such
situations: An exact method is proposed for sparse graphs that
enumerates simple cycles in a lazy fashion and iteratively extends an
incomplete cycle matrix. In all cases encountered so far, only a
tractable number of cycles has to be enumerated until a minimum
feedback arc set is found. The practical limits of the new method are
evaluated on a test set containing computationally challenging sparse
graphs, relevant for industrial applications. The 4,468 test graphs are
of varying size and density and suitable for testing the scalability of
exact algorithms over a wide range.
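The lazy strategy described above, enumerating cycles only as needed while extending an incomplete cycle matrix, can be sketched as follows. The brute-force set cover below is only suitable for toy graphs and stands in for the integer programming machinery of an exact solver:

```python
from itertools import combinations

def find_cycle(n, arcs):
    """Return one simple cycle (as a list of arcs) of the digraph
    on vertices 0..n-1, or None if the digraph is acyclic."""
    adj = {v: [] for v in range(n)}
    for u, v in arcs:
        adj[u].append(v)
    color = {v: 0 for v in range(n)}   # 0 = white, 1 = on stack, 2 = done
    stack = []
    def dfs(u):
        color[u] = 1
        stack.append(u)
        for v in adj[u]:
            if color[v] == 1:          # back arc closes a cycle
                i = stack.index(v)
                cyc = stack[i:] + [v]
                return [(cyc[k], cyc[k + 1]) for k in range(len(cyc) - 1)]
            if color[v] == 0:
                c = dfs(v)
                if c:
                    return c
        color[u] = 2
        stack.pop()
        return None
    for s in range(n):
        if color[s] == 0:
            c = dfs(s)
            if c:
                return c
    return None

def min_feedback_arc_set(n, arcs):
    """Lazy cycle enumeration: grow the cycle matrix and re-solve the
    minimum set cover by brute force (toy-sized graphs only)."""
    cycles = []
    while True:
        best = None
        for k in range(len(arcs) + 1):     # smallest hitting set first
            for cand in combinations(arcs, k):
                if all(any(a in cand for a in cyc) for cyc in cycles):
                    best = set(cand)
                    break
            if best is not None:
                break
        remaining = [a for a in arcs if a not in best]
        cyc = find_cycle(n, remaining)
        if cyc is None:
            return best                    # removal makes G acyclic
        cycles.append(cyc)                 # extend the cycle matrix

# usage: two arc-disjoint cycles 0->1->0 and 2->3->2 need two removals
arcs = [(0, 1), (1, 0), (2, 3), (3, 2)]
fas = min_feedback_arc_set(4, arcs)
```

The key point of the lazy approach survives even in this sketch: the loop terminates as soon as the cover for the cycles found so far already breaks all cycles, without ever enumerating the full cycle set.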
C195.
S. Penjic and A. Neumaier,
A unified view of inequalities for distance-regular graphs, Part I,
J. Combinatorial Theory, Series B, published online 5 October 2020.
pdf file (588K),
In this paper we introduce the language of a configuration and t-point counts
for distance-regular graphs (DRGs). Every t-point count can be written
as a sum of (t-1)-point counts. This leads to a system of linear
equations and inequalities for the t-point counts in terms of the
intersection parameters, i.e., a linear constraint satisfaction problem
(CSP). This language is a very useful tool for a better understanding
of the combinatorial structure of distance-regular graphs. We also
obtain various old and new inequalities for the parameters of DRGs,
including the diameter bounds by Terwilliger.
O194.
F. Domes, T. Montanher, H. Schichl and A. Neumaier,
Rigorous global filtering methods with interval unions,
pp.249--267 in:
Beyond Traditional Probabilistic Data Processing Techniques:
Interval, Fuzzy, etc. Methods and Their Applications
(O. Kosheleva, et al., eds.),
Springer, New York 2020.
pdf file (745K)
This paper presents rigorous filtering methods for constraint
satisfaction problems based on the interval union arithmetic. Interval
unions are finite sets of closed and disjoint intervals that generalize
interval arithmetic. They allow a natural representation of the
solution set of interval powers, trigonometric functions and the
division by intervals containing zero. We show that interval unions are
useful when applied to the forward-backward constraint propagation on
directed acyclic graphs (DAGs) and can also replace the interval
arithmetic in the Newton operator. Empirical observations support the
conclusion that interval unions reduce the search domain even when more
expensive state-of-the-art methods fail. Interval unions methods tend
to produce a large number of boxes at each iteration. We address this
problem by taking a suitable gap-filling strategy. Numerical
experiments on constraint satisfaction problems from the COCONUT benchmark show
the capabilities of the new approach.
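The basic idea of interval unions, e.g. that the reciprocal of an interval containing zero splits into two disjoint unbounded intervals, can be sketched in a few lines. The representation below (a plain list of pairs) is a minimal illustration of the concept, not the data structure of the paper:

```python
import math

def reciprocal(lo, hi):
    """Reciprocal of the interval [lo, hi] as an interval union:
    a list of disjoint closed intervals, possibly unbounded.
    Minimal sketch of the concept only."""
    if lo > 0 or hi < 0:                     # zero not contained: one piece
        return [(1.0 / hi, 1.0 / lo)]
    parts = []
    if lo < 0:
        parts.append((-math.inf, 1.0 / lo))  # negative branch
    if hi > 0:
        parts.append((1.0 / hi, math.inf))   # positive branch
    return parts

def intersect(union, lo, hi):
    """Intersect an interval union with a single interval [lo, hi],
    as done when propagating a constraint against a variable's box."""
    out = []
    for a, b in union:
        a2, b2 = max(a, lo), min(b, hi)
        if a2 <= b2:
            out.append((a2, b2))
    return out

# usage: 1/[-2, 4] = (-inf, -1/2] and [1/4, inf), clipped to [-10, 10]
u = reciprocal(-2.0, 4.0)
u = intersect(u, -10.0, 10.0)
```

Note that plain interval arithmetic would have to return the whole real line here; keeping the gap around zero is exactly the extra pruning power that interval unions provide.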
NO193.
A. Baharev, A. Neumaier, H. Schichl.
A manifold-based approach to sparse global constraint satisfaction
problems,
Journal of Global Optimization 75 (2019), 949-971.
pdf file (1449K)
We consider square, sparse nonlinear systems of equations whose
Jacobian is structurally nonsingular, with reasonable bound
constraints on all variables. We propose an algorithm for finding good
approximations to all well-separated solutions of such systems. We
assume that the input system is ordered such that its Jacobian is in
bordered block lower triangular form with small diagonal blocks and
with small border width; this can be performed fully automatically
with off-the-shelf decomposition methods. Five decades of numerical
experience show that models of technical systems tend to decompose
favorably in practice. Once the block decomposition is available, we
reduce the task of solving the large nonlinear system of equations to
that of solving a sequence of low-dimensional ones. The most serious
weakness of this approach is well-known: It may suffer from severe
numerical instability. The proposed method resolves this issue with
the novel backsolve step. We study the effect of the decomposition on
a sequence of challenging problems. Beyond a certain problem size, the
computational effort of multistart (no decomposition) grows
exponentially. In contrast, thanks to the decomposition, for the
proposed method the computational effort grows only linearly with the
problem size. It depends on the problem size and on the hyperparameter
settings whether the decomposition and the more sophisticated
algorithm pay off. Although there is no theoretical guarantee that all
solutions will be found in the general case, increasing the so-called
sample size hyperparameter improves the robustness of the proposed
method.
O191.
M. Ahookhosh and A. Neumaier,
An optimal subgradient algorithm with subspace search for costly convex
optimization problems,
Bull. Iranian Math. Soc. 45 (2019), 883-910.
(published online 10 October 2018)
pdf file (1041K)
This paper presents an acceleration of the optimal subgradient
algorithm OSGA for solving convex optimization problems, where the
objective function involves costly affine and cheap nonlinear terms.
We combine OSGA with a multidimensional subspace search technique,
which leads to a low-dimensional problem that can be solved efficiently.
Numerical results concerning some applications are reported.
A software package implementing the new method is available.
OC190.
T. Montanher, A. Neumaier, M.C. Markot, F. Domes and H. Schichl,
Rigorous packing of unit squares into a circle,
J. Global Optimization 73 (2019), 547-565.
(published online October 6, 2018)
pdf file (724K),
This paper considers the task of finding the smallest circle into which
one can pack a fixed number of non-overlapping unit squares that are
free to rotate. Due to the rotation angles, the packing of unit squares
into a container is considerably harder to solve than their circle
packing counterparts. Therefore, optimal arrangements were so far
proved to be optimal only for one or two unit squares.
By a computer-assisted method based on interval arithmetic techniques,
we solve the case of 3 squares and find rigorous enclosures for every
optimal arrangement of this problem. We model the relation between the
squares and the circle as a constraint satisfaction problem (CSP) and
found every box that may contain a solution inside a given upper bound
of the radius. Due to symmetries in the search domain, general purpose
interval methods are far too slow to solve the CSP directly.
To overcome this difficulty, we split the problem into a set of
subproblems by systematically adding constraints to the center of each
square. Our proof requires the solution of 6, 43 and 12 subproblems
with 1, 2 and 3 unit squares respectively. In principle, the method
proposed in this paper generalizes to any number of squares.
O189.
M. Ahookhosh and A. Neumaier,
Solving structured nonsmooth convex optimization with complexity
O(ε^(-1/2)),
Top 26 (2018), 110-145.
(published online 14 August 2017)
pdf file (425K)
This paper describes an algorithm for solving structured nonsmooth
convex optimization problems using the optimal subgradient algorithm
(OSGA), a first-order method with optimal complexity both for Lipschitz
continuous nonsmooth problems and for smooth problems with Lipschitz
continuous gradient. If the nonsmoothness of the problem is manifested
in a structured way, we reformulate the problem in a form that can be
solved efficiently by OSGA with the complexity for the smooth case. To
solve the reformulated problem, we equip OSGA by an appropriate
prox-function for which the OSGA subproblem can be solved either in a
closed form or by a simple iterative scheme, which decreases the
computational cost of applying the algorithm, especially for
large-scale problems. We show that applying the new scheme is feasible
for many problems arising in applications. Some numerical results are
reported.
OS188.
E. Groeneveld and A. Neumaier,
BLUP without (inverse) relationship matrix,
Proc. World Congress Genetics Livestock Production,
vol. Theory to Application 3 (2018), 21.
pdf file (192K)
Mixed model methodology provides the machinery for genetic evaluation
in animal breeding; indeed, it is the basis of all modern methods
for estimating the unknowns of such models. One central component of
Henderson's Mixed Model Equations (MME) is the numerator relationship
matrix (NRM) and, more precisely, its inverse. Only after Henderson
found a way to directly set up the inverse, the animal model became
computationally feasible.
We show that there is a way to totally avoid the modelling of the NRM
or its inverse, yet arriving at exactly the same BLUPs and BLUEs.
This reduces the program complexity and the computational complexity.
In practical modelling, it is useful to split the general model into
blocks of uncorrelated model equations, here called element equations.
Each such element equation consists of a small number of (possibly only
one) correlated equations linked by their covariance matrix, which are
block diagonal and thus trivial to invert; multiple trait models being
an example. In their NRM-free form discussed in the present paper,
standard animal models have three types of model equations:
phenotype based, trivial for simple random effects, and pedigree model
equations. The coefficients derived for the pedigree model equation are,
indeed, the same as those given by Westell et al. (1987) without having
to deal with the relationship matrix separately.
BLUPs can be computed solely on the basis of these element equations:
once the model equations have been set up in terms of coefficients and
mixed model addresses and their covariance matrix association, further
processing is oblivious to the statistical model and becomes a very
simple sweep. Only a few lines of code are required for either setting
up the complete MME in sparse format for a direct solution and possibly
computing its inverse, or in conjugate gradients to iteratively solve
for BLUEs and BLUPs.
A small numerical example is given to describe the process.
The model equations are well suited for coarse-grained parallelization.
As implemented in PEST2, they scale well with little computing overhead,
therefore making good use of multi core computers.
An outlook is given for the use of the method for the handling of
genomic (SNP) data in genetic evaluation. Also here, explicit treatment
of the NRM and GRM is not required, leading to a joint uniform one step
evaluation with pedigree and genomic data with no additional assumptions
except the model and covariances.
A187.
T. Montanher, A. Neumaier and F. Domes,
A computational study of global optimization solvers on two trust
region subproblems,
J. Global Optimization 71 (2018), 915-934.
pdf file (1310K)
This paper considers one of the simplest hard cases of quadratically
constrained quadratic programming (QCQP), the two trust region
subproblem (TTRS). In this case, one needs to minimize a quadratic
function constrained by the intersection of two ellipsoids.
The Lagrangian dual of the TTRS is a semidefinite program (SDP) and
this result has been extensively used to solve the problem efficiently.
We focus on numerical aspects of branch-and-bound solvers with three
goals in mind.
We provide
(i) a detailed analysis of the ability of state-of-the-art
solvers to complete the global search for a solution,
(ii) a quantitative
approach for measuring the cluster effect on each solver and
(iii) a comparison between the branch-and-bound and the SDP approaches.
We perform the numerical experiments on a set of 212 challenging
problems provided by Kurt Anstreicher. Our findings indicate that SDP
relaxations and branch-and-bound have orthogonal difficulties, thus
pointing to a possible benefit of a combined method.
A186.
B. Banhelyi, T. Csendes, T. Krisztin, and A. Neumaier,
A Bounding Scheme for Proving the Wright Conjecture on Delay
Differential Equations,
Acta Polytechnica Hungarica 15 (2018), 163-175.
pdf file (193K)
We provide here an elementary derivation of the bounding scheme applied
for proving the Wright conjecture on delay differential equations.
We also report a minor extension of the parameter range where the
conjecture was proven, to α in [1.5,1.57065].
O185.
W. Huyer and A. Neumaier,
MINQ8 - general definite and bound constrained indefinite quadratic
programming,
Computational Optimization and Applications 69 (2018), 351-381.
(Online First 03 October 2017.)
pdf file (435K)
We propose new algorithms for
(i) the local optimization of bound constrained quadratic programs,
(ii) the solution of general definite quadratic programs, and
(iii) finding a point satisfying given linear equations and
inequalities or a certificate of infeasibility.
The algorithms are implemented in Matlab and tested against
state-of-the-art quadratic programming software.
S184.
V. Kreinovich and A. Neumaier,
For Piecewise Smooth Signals, the l^1 Method Is the Best Among
l^p: An Interval-Based Justification of an Empirical
Fact,
J. Computational Technologies 22 (2017), 50-58.
pdf file (97K)
Traditional engineering techniques use the least squares method
(i.e., in mathematical terms, the l^2-norm) to process data.
It is known that in many practical situations, l^p-methods with
other values of p lead to better results.
In different practical situations, different values of p are optimal.
It is known that in several situations when we need to reconstruct a
piecewise smooth signal, the empirically optimal value of p is close
to 1. In this paper, we provide a new theoretical explanation for this
empirical fact based on ideas and results from interval analysis.
O183.
A. Baharev, A. Neumaier, H. Schichl. Failure modes of tearing and a
novel robust approach,
pp 353-362 in: Proc. 12th Int. Modelica Conf. Prague, May 15-17, 2017.
DOI 10.3384/ecp17132353
pdf file (272K)
State-of-the-art Modelica implementations may fail in various ways when
tearing is turned on: Completely incorrect results are returned without
a warning, or the software fails with an obscure error message, or it
hangs for several minutes although the problem is solvable in
milliseconds without tearing. We give three detailed examples and an
in-depth discussion why such failures are inherent in tearing and
cannot be fixed within the traditional approach.
Without compromising the advantages of tearing, these issues are
resolved for the first time with staircase sampling. This is a
non-tearing method capable of robustly finding all well-separated
solutions of sparse systems of nonlinear equations without any initial
guesses. Its robustness is demonstrated on the steady-state simulation
of a particularly challenging distillation column. This column has
three solutions, one of which is missed by most methods, including
problem-specific tearing methods. All three solutions are found with
staircase sampling.
N182.
T. Montanher, F. Domes, H. Schichl and A. Neumaier,
Using interval unions to solve linear systems of equations with
uncertainties,
BIT Numerical Mathematics 57 (2017), 901-926.
(published online 22 April 2017).
pdf file (1392K)
An interval union is a finite set of closed and disjoint intervals.
In this paper, the interval union Gauss-Seidel procedure is introduced
to rigorously enclose the solution set of linear systems with
uncertainties given by intervals or interval unions. Also presented are
the interval union midpoint and Gauss-Jordan preconditioners.
The Gauss-Jordan preconditioner is used in a mixed strategy to improve
the quality and efficiency of the algorithm. Numerical experiments on
interval linear systems generated at random show the capabilities of
our approach.
O181.
M. Ahookhosh and A. Neumaier,
An optimal subgradient algorithm for large-scale bound-constrained
convex optimization,
Mathematical Methods of Operations Research 86 (2017), 123-147.
(published online 12 April 2017)
pdf file (1252K)
This paper shows that the OSGA algorithm - which uses first-order
information to solve convex optimization problems with optimal
complexity - can be used to efficiently solve arbitrary
bound-constrained convex optimization problems. This is done by
constructing an explicit method as well as an inexact scheme for
solving the bound-constrained rational subproblem required by OSGA.
This leads to an efficient implementation of OSGA on large-scale
problems in applications arising in signal and image processing, machine
learning and statistics.
Numerical experiments demonstrate the promising performance of OSGA on
such problems. A software package implementing OSGA for
bound-constrained convex problems is available.
O180.
M. Ahookhosh and A. Neumaier,
An optimal subgradient algorithm for large-scale convex optimization in
simple domains,
Numerical Algorithms 76 (2017), 1071-1097.
(published online 17 March 2017)
pdf file (1855K)
published, full-text view-only version
This paper shows that the optimal subgradient algorithm, OSGA,
can be used for solving structured large-scale convex constrained
optimization problems. Only first-order information is required, and
the optimal complexity bounds for both smooth and nonsmooth problems
are attained. More specifically, we consider two classes of problems:
(i) a convex objective with a simple closed convex domain, where the
orthogonal projection on this feasible domain is efficiently available;
(ii) a convex objective with a simple convex functional constraint.
If we equip OSGA with an appropriate prox-function, the OSGA subproblem
can be solved either in a closed form or by a simple iterative scheme,
which is especially important for large-scale problems.
We report numerical results for some applications to show the
efficiency of the proposed scheme. A software package implementing
OSGA for above domains is available.
O179.
A. Baharev, F. Domes and A. Neumaier,
A robust approach for finding all well-separated solutions of sparse
systems of nonlinear equations,
Numerical Algorithms 76 (2017), 163-189.
(published online: 17 December 2016)
pdf file (1966K)
supplementary material, including source code
Tearing is a long-established decomposition technique, widely adapted
across many engineering fields. It reduces the task of solving a large
and sparse nonlinear system of equations to that of solving a sequence
of low-dimensional ones. The most serious weakness of this approach is
well-known: It may suffer from severe numerical instability. The present
paper resolves this flaw for the first time.
The new approach requires reasonable bound constraints on the variables.
The worst-case time complexity of the algorithm is exponential in the
size of the largest subproblem of the decomposed system. Although there
is no theoretical guarantee that all solutions will be found in the
general case, increasing the so-called sample size parameter of the
method improves robustness.
This is demonstrated on two particularly challenging problems.
Our first example is the steady-state simulation of a challenging
distillation column, belonging to an infamous class of problems where
tearing often fails due to numerical instability. This column has three
solutions, one of which is missed not only by tearing but even by
problem-specific methods that are not based on tearing. The other
example is the Stewart-Gough platform with 40 real solutions, an
extensively studied benchmark in the field of numerical algebraic
geometry. For both examples, all solutions are found with a fairly small
amount of sampling.
O178.
H. Fendl, A. Neumaier and H. Schichl,
Certificates of infeasibility via nonsmooth optimization,
J. Global Optimization 69 (2017), 157-182.
(published online 28 October 2016)
pdf file (2336K)
An important aspect in the solution process of constraint satisfaction
problems is to identify exclusion boxes which are boxes that do not
contain feasible points. This paper presents a certificate of
infeasibility for finding such boxes by solving a linearly constrained
nonsmooth optimization problem. Furthermore, the constructed
certificate can be used to enlarge an exclusion box by solving a
nonlinearly constrained nonsmooth optimization problem.
P177.
A. Neumaier and D. Stehlé,
Faster LLL-type reduction of lattice bases,
pp. 373-380 in:
Proceedings ACM ISSAC '16, ACM, New York 2016.
pdf file (363K)
We describe an asymptotically fast variant of the LLL lattice reduction
algorithm. It returns a reduced basis of the Euclidean lattice spanned
by an input basis, whose first vector satisfies stronger bounds than
previous polynomial algorithms. It terminates within
O(n^(4+ε) β^(1+ε)) bit operations
for any ε>0, where β is the maximal bit size of input
basis vectors. The algorithm relies on fast integer arithmetic but does
not make use of fast matrix multiplication.
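For contrast, the textbook LLL algorithm can be stated compactly in exact rational arithmetic. The paper's contribution is an asymptotically faster variant; the plain reference version sketched below is only meant to fix the notions (size reduction, Lovász condition) the abstract refers to:

```python
from fractions import Fraction

def lll(basis, delta=Fraction(3, 4)):
    """Textbook LLL lattice basis reduction over exact rationals.
    Reference sketch for illustration; not the fast variant of the
    paper, and far from its asymptotic running time."""
    b = [[Fraction(x) for x in v] for v in basis]
    n = len(b)

    def dot(u, v):
        return sum(ui * vi for ui, vi in zip(u, v))

    def gram_schmidt():
        gs, mu = [], [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            w = b[i][:]
            for j in range(i):
                mu[i][j] = dot(b[i], gs[j]) / dot(gs[j], gs[j])
                w = [wi - mu[i][j] * gi for wi, gi in zip(w, gs[j])]
            gs.append(w)
        return gs, mu

    gs, mu = gram_schmidt()
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):        # size reduction
            q = round(mu[k][j])
            if q:
                b[k] = [bk - q * bj for bk, bj in zip(b[k], b[j])]
                gs, mu = gram_schmidt()
        # Lovász condition: accept or swap and step back
        if dot(gs[k], gs[k]) >= (delta - mu[k][k-1] ** 2) * dot(gs[k-1], gs[k-1]):
            k += 1
        else:
            b[k], b[k - 1] = b[k - 1], b[k]
            gs, mu = gram_schmidt()
            k = max(k - 1, 1)
    return [[int(x) for x in v] for v in b]

# usage: reduce a skewed basis of a 2-dimensional integer lattice
reduced = lll([[12, 2], [13, 4]])
```

Recomputing the full Gram-Schmidt data after every update, as done here for clarity, is exactly the kind of cost that fast LLL variants avoid.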
P176.
A. Neumaier,
Bounding basis reduction properties,
Designs, Codes and Cryptography 84 (2017), 237-259.
(published online 15 September 2016)
pdf file (263K)
The paper describes improved analysis techniques for basis reduction
that allow one to prove strong complexity bounds and reduced basis
guarantees for traditional reduction algorithms and some of their
variants. This is achieved by a careful exploitation of the linear
equations and inequalities relating various bit sizes before and after
one or more reduction steps.
P175.
A. Neumaier and U.K. Deiters,
The characteristic curves of water,
Int. J. Thermophysics 37 (2016), 96.
(published online July 23, 2016)
pdf file (365K)
In 1960, E.H. Brown defined a set of characteristic curves (also known
as ideal curves) of pure fluids, along which some thermodynamic
properties match those of an ideal gas. These curves are used for
testing the extrapolation behaviour of equations of state. This work
is revisited, and an elegant representation of the first-order
characteristic curves as level curves of a master function is proposed.
It is shown that Brown's postulate - that these curves are unique and
dome-shaped in a double-logarithmic p,T representation - may fail for
fluids exhibiting a density anomaly. A careful study of the Amagat
curve (Joule inversion curve) generated from the IAPWS-95 reference
equation of state for water reveals the existence of an additional
branch.
P174.
U.K. Deiters and A. Neumaier,
Computer simulation of the characteristic curves of pure fluids,
J. Chem. Eng. Data 61 (2016), 2720-2728.
(published online March 24, 2016)
pdf file (313K)
Brown's characteristic curves (also known as ideal curves) describe
states at which one thermodynamic property of a real pure fluid matches
that of an ideal gas; these curves can be used for testing the
extrapolation behaviour of equations of state. In this work, some
characteristic curves are computed directly - without recourse to an
equation of state - for some pair potentials by Monte Carlo computer
simulation. The potentials used are an ab-initio potential for argon,
the 1-center Lennard-Jones potential, and a softer pair potential whose
short-range part is in accordance with quantum mechanical predictions.
The influence of the short-distance repulsion on the characteristic
curves is found to be significant even in the 10-100 MPa pressure
range.
N173.
T. Iqbal and A. Neumaier,
Worst case error bounds for the solution of uncertain Poisson
equations with mixed boundary conditions,
J. Comput. Appl. Math. 303 (2016), 40-55.
pdf file (275K)
Given linear elliptic partial differential equations with mixed boundary
conditions, with uncertain parameters constrained by inequalities, it is
shown how to use finite element approximations to compute worst case a
posteriori error bounds for linear response functionals determined by
the solution. All discretization errors are taken into account.
The bounds are based on the dual weighted residual (DWR) method of
Becker & Rannacher and treat the uncertainties with an optimization
approach.
The method was implemented for Poisson-like equations with an uncertain
mass distribution and mixed Dirichlet/Neumann boundary conditions on
arbitrary polygonal domains. No knowledge of domain-dependent a priori
constants is necessary.
To get the error bounds, a first order formulation is used whose
solution with linear finite elements produces compatible piecewise
linear approximations of the solution and its gradient. The bounds are
then obtained from the solutions of nine related boundary value
problems.
O172.
F. Domes and A. Neumaier,
Linear and parabolic relaxations for quadratic constraints,
J. Global Optimization 65 (2016), 457-486.
(published online: 06 November 2015)
pdf file (971K)
This paper presents new techniques for pruning boxes in the presence
of an additional quadratic constraint, a problem relevant for branch
and bound methods for global optimization and constraint satisfaction.
This is done by generating powerful linear and parabolic relaxations
from a quadratic constraint and bound constraints, which are then
subject to standard constraint propagation techniques. The techniques
are often applicable even if the original box is unbounded in some but
not all variables.
As an auxiliary tool - needed to make our theoretical results
implementable in floating-point arithmetic without sacrificing
mathematical rigor - we extend the directed Cholesky factorization
from Domes and Neumaier (SIAM J. Matrix Anal. Appl. 32 (2011), 262-285)
to a partial directed Cholesky factorization with pivoting. If the
quadratic constraint is convex and the initial bounds are sufficiently
wide, the final relaxation and the enclosure are optimal up to
rounding errors.
Numerical tests show the usefulness of the new factorization methods
in the context of pruning.
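The basic idea behind a linear relaxation of a quadratic term can be sketched with the standard secant/tangent construction (the paper's relaxations, including the parabolic ones and the rigorous rounding-error control, are more elaborate):

```python
# On a box [lo, hi], x**2 lies between any tangent line and the secant
# through the endpoints, so a quadratic constraint can be replaced by
# linear ones that standard propagation can exploit.

def secant_over(lo, hi):
    """Coefficients (c1, c0) with x**2 <= c1*x + c0 on [lo, hi]."""
    return lo + hi, -lo * hi          # from (x-lo)*(x-hi) <= 0 on the box

def tangent_under(x0):
    """Coefficients (c1, c0) with x**2 >= c1*x + c0 everywhere."""
    return 2.0 * x0, -x0 * x0         # tangent at x0, since (x-x0)**2 >= 0

lo, hi = -1.0, 3.0
o1, o0 = secant_over(lo, hi)
u1, u0 = tangent_under(0.5 * (lo + hi))
x = 1.0
print(u1 * x + u0, x * x, o1 * x + o0)   # under <= x^2 <= over
```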
O171.
A. Neumaier,
OSGA: A fast subgradient algorithm with optimal complexity,
Math. Programming A 158 (2016), 1-21.
(published online: 16 May 2015)
pdf file (186K)
This paper presents an algorithm for approximately minimizing a
convex function in simple, not necessarily bounded convex domains,
assuming only that function values and subgradients are available.
No global information about the objective function is needed apart
from a strong convexity parameter (which can be put to zero if only
convexity is known).
The worst case number of iterations needed to achieve a given accuracy
is independent of the dimension and -- apart from a constant factor --
best possible under a variety of smoothness assumptions on the
objective function.
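For contrast, the classical subgradient method that OSGA improves upon (this sketch is not OSGA itself) takes only a few lines; it likewise uses only function values and subgradients:

```python
import math

# Classical subgradient method for a nonsmooth convex function.
# The step size t_k = 1/sqrt(k+1) gives best-iterate convergence
# for Lipschitz convex f; no smoothness information is needed.

def subgradient_min(f, g, x0, iters=2000):
    x, best_x, best_f = x0, x0, f(x0)
    for k in range(iters):
        x = x - g(x) / math.sqrt(k + 1.0)
        fx = f(x)
        if fx < best_f:
            best_f, best_x = fx, x
    return best_x, best_f

f = lambda x: abs(x - 3.0)           # minimizer at x = 3
g = lambda x: 1.0 if x > 3.0 else (-1.0 if x < 3.0 else 0.0)
x_best, f_best = subgradient_min(f, g, 0.0)
print(x_best, f_best)
```

This plain scheme needs O(1/eps^2) iterations for nonsmooth problems; OSGA attains the optimal worst-case rates under various smoothness assumptions without needing to know them.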
O170.
F. Domes and A. Neumaier,
Constraint aggregation for rigorous global optimization,
Math. Programming A 155 (2016), 375-401.
(published online: 16. December 2014).
pdf file (1198K)
In rigorous constrained global optimization, upper bounds on the
objective function help to reduce the search space. Their construction
requires finding a narrow box around an approximately feasible solution,
verified to contain a feasible point. Approximations are easily found
by local optimization, but the verification often fails.
In this paper we show that even if the verification of an approximate
feasible point fails, the information extracted from the local
optimization can still be used in many cases to reduce the search space.
This is done by a rigorous filtering technique called constraint
aggregation. It forms an aggregated redundant constraint, based on
approximate Lagrange multipliers or on a vector valued measure of
constraint violation. Using the optimality conditions, two sided linear
relaxations, the Gauss-Jordan algorithm and a directed Cholesky
factorization, the information in the redundant constraint is turned
into powerful bounds on the feasible set. Constraint aggregation is
especially useful since it also works in a tiny neighborhood of the
global optimizer, thereby reducing the cluster effect.
A simple example demonstrates how our new method works.
Extensive tests show the performance on a large benchmark.
O169.
F. Domes and A. Neumaier,
Rigorous verification of feasibility,
J. Global Optimization 61 (2015), 255-278.
pdf file (705K)
This paper considers the problem of finding and verifying feasible
points for constraint satisfaction problems, including those with
uncertain coefficients. The main part is devoted to the problem of finding
a narrow box around an approximately feasible solution for which it can
be rigorously and automatically proved that it contains a feasible
solution. Some examples demonstrate difficulties when attempting
verification.
We review some existing methods and present a new method for verifying
the existence of feasible points of constraint satisfaction problems
in an automatically determined narrow box. Numerical tests within
GloptLab, a solver developed by the authors, show how the different
methods perform.
Also discussed are the search for approximately feasible points and the
use of approximately feasible points within a branch and bound scheme
for constraint satisfaction.
N168.
A. Goldsztejn and A. Neumaier,
On the Exponentiation of Interval Matrices,
Reliable Computing 20 (2014), 53-72.
pdf file (1012K)
The numerical computation of the exponentiation of a real
matrix has been studied intensively. The main objective of a good
numerical method is to deal with round-off errors and computational
cost. The situation is more complicated when dealing with interval
matrix exponentiation: Indeed, the main problem will now be the
dependence loss in the interval evaluation due to multiple occurrences
of some variables, which may lead to enclosures that are too wide to be
useful. In this paper, the problem of computing a sharp enclosure of
the interval matrix exponential is proved to be NP-hard. Then the
scaling and squaring method is adapted to interval matrices and shown
to drastically reduce the dependence loss compared with the interval
evaluation of the Taylor series.
Although most of what is presented in this paper seems to be
known to the experts, no coherent source for these results exists.
The present paper fills the gap, and adds numerical examples
and new insights.
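A one-dimensional toy version of the dependence-loss phenomenon (ignoring directed rounding and the truncation remainder) already shows why scaling and squaring helps:

```python
# Evaluating the Taylor series of exp on an interval suffers dependence
# loss from the repeated occurrence of the interval variable; scaling
# and squaring evaluates the series on the halved interval and squares
# the result, giving a visibly tighter enclosure.

def imul(a, b):
    p = (a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1])
    return (min(p), max(p))

def iadd(a, b):
    return (a[0] + b[0], a[1] + b[1])

def exp_taylor(x, terms=20):
    s, p = (1.0, 1.0), (1.0, 1.0)    # partial sum and power term
    for k in range(1, terms + 1):
        p = imul(p, (x[0] / k, x[1] / k))
        s = iadd(s, p)
    return s

x = (-1.0, 1.0)
naive = exp_taylor(x)
half = exp_taylor((x[0] / 2.0, x[1] / 2.0))
scaled = imul(half, half)            # interval square via multiplication
print(naive, scaled)
```

The naive enclosure even loses positivity of the exponential, while the scaled one does not, and its width is clearly smaller.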
S167.
A. Bichler, A. Neumaier and T. Hofmann,
A tree-based statistical classification algorithm (CHAID) for
identifying variables responsible for the occurrence of faecal indicator
bacteria during waterworks operations,
J. Hydrology 519 A (2014), 909-917.
pdf file (2341K)
Microbial contamination of groundwater used for drinking water
production is of major concern to public health, local water
authorities and suppliers. A preventive approach to protect raw water
sources emphasizes the identification of potential hazards.
We propose a non-parametric data mining technique to explore the
concentrations of total coliforms (TC) in a groundwater abstraction
well and their relationships to continuous time series of readily
available hydrometric monitoring parameters (a seven-year record
of precipitation, river water levels, and groundwater heads).
The initial monitoring parameters were used to create an extensive
data set of explanatory variables by considering different
accumulation or averaging periods, as well as temporal offsets of the
predictors. The recursive partitioning algorithm Chi-Squared Automatic
Interaction Detection (CHAID) revealed statistically significant
relationships between heavy precipitation and the occurrence of TC in
the production well and a nearby monitoring well. Additionally,
different predictors for the two well systems were identified:
elevated water levels of the river and short-term fluctuations were
indicative of TC in the observation well, suggesting infiltration of
contaminated river water into the aquifer. TC in the production well
were related to elevated groundwater heads and fluctuations.
The creation of generic variables proved to be useful in increasing
the significance of the predictors. Also, the prediction of TC based
on hydrometric variables worked fairly well with an overall correct
prediction rate of 90% and about 50% of positive test results being
correctly classified.
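The chi-squared split selection at the core of CHAID can be sketched as follows (real CHAID also merges categories, applies Bonferroni corrections and recurses; the data here are synthetic):

```python
# For a binary outcome and one continuous predictor, each candidate
# threshold induces a 2x2 contingency table; the split with the largest
# Pearson chi-squared statistic is chosen.

def chi2_stat(table):
    """Pearson chi-squared statistic of a 2x2 contingency table."""
    (a, b), (c, d) = table
    n = a + b + c + d
    stat = 0.0
    for i, row in enumerate((a + b, c + d)):
        for j, col in enumerate((a + c, b + d)):
            obs = table[i][j]
            exp = row * col / n
            stat += (obs - exp) ** 2 / exp
    return stat

def best_split(xs, ys):
    best, best_t = -1.0, None
    for t in sorted(set(xs))[1:]:             # candidate thresholds
        tab = [[0, 0], [0, 0]]
        for x, y in zip(xs, ys):
            tab[0 if x < t else 1][y] += 1
        if min(sum(tab[0]), sum(tab[1])) == 0:
            continue                          # skip empty groups
        s = chi2_stat(tab)
        if s > best:
            best, best_t = s, t
    return best_t

# synthetic rainfall/contamination data: TC occurs for rainfall >= 20 mm
rain = list(range(40))
tc = [1 if r >= 20 else 0 for r in rain]
print(best_split(rain, tc))   # -> 20
```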
AN166.
B. Banhelyi, T. Csendes, T. Krisztin and A. Neumaier,
Global attractivity of the zero solution for Wright's equation,
SIAM J. Applied Dynamical Systems 13 (2014), 537-563.
pdf file (403K)
In a paper published in 1955, E.M. Wright proved that all solutions of
the delay differential equation u'(t) = - alpha u(t-1)(1 + u(t))
converge to zero for alpha in (0, 1.5], and conjectured
that this is even true for alpha in (0, pi/2).
The present paper provides a computer-assisted proof of the
conjecture for alpha in [1.5, 1.5706] (compare with
pi/2 = 1.570796...).
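A quick (non-rigorous) forward-Euler simulation illustrates the attractivity of zero for alpha = 1.5; the paper's proof, in contrast, is a verified interval computation:

```python
# Forward-Euler simulation of Wright's delay differential equation
# u'(t) = -alpha * u(t-1) * (1 + u(t)), with constant history on [-1, 0].

def simulate_wright(alpha, u0=0.5, dt=0.01, t_end=300.0):
    delay = round(1.0 / dt)              # number of steps in one delay
    u = [u0] * (delay + 1)               # constant history u = u0 on [-1, 0]
    for _ in range(round(t_end / dt)):
        cur, lagged = u[-1], u[-1 - delay]
        u.append(cur + dt * (-alpha * lagged * (1.0 + cur)))
    return u[-1]

print(simulate_wright(1.5))   # decays towards zero for alpha <= 1.5
```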
PN165.
A. Neumaier,
Analytic representation of critical equations of state,
J. Statist. Phys. 155 (2014), 603-624.
arXiv:1401.0291
pdf file (460K)
We propose a new form for equations of state (EOS) of
thermodynamic systems in the Ising universality class.
The new EOS guarantees the correct universality and scaling behavior
close to critical points and is formulated in terms of the scaling
fields only -- unlike the traditional Schofield representation, which
uses a parametric form.
Close to a critical point, the new EOS expresses the square of the
strong scaling field as an explicit function of the thermal
scaling field and the dependent scaling field.
A numerical expression is derived, valid close to critical points.
As a consequence of the construction it is shown that the dependent
scaling field can be written as an explicit function of the relevant
scaling fields without causing strongly singular behavior of the
thermodynamic potential in the one-phase region.
Augmented by additional scaling correction fields, the new EOS also
describes the state space further away from critical points.
It is indicated how to use the new EOS to model multiphase fluid
mixtures, in particular for vapor-liquid-liquid equilibrium (VLLE)
where the traditional revised scaling approach fails.
O164.
H. Schichl, M.C. Markot and A. Neumaier,
Exclusion regions for optimization problems,
J. Global Optimization 59 (2014), 569-595.
pdf file (346K)
Branch and bound methods for finding all solutions of a global
optimization problem in a box frequently have the difficulty that
subboxes containing no solution cannot be easily eliminated if they
are close to the global minimum. This has the effect that near each
global minimum, and in the process of solving the problem also near
the currently best found local minimum, many small boxes are created
by repeated splitting, whose processing often dominates the total work
spent on the global search.
This paper discusses the reasons for the occurrence of this so-called
cluster effect, and how to reduce the cluster effect by defining
exclusion regions around each local minimum found, that are guaranteed
to contain no other local minimum and hence can safely be discarded.
In addition, we will introduce a method for verifying the existence of
a feasible point close to an approximate local minimum.
These exclusion regions are constructed using uniqueness tests based
on the Krawczyk operator and make use of first, second and third order
information on the objective and constraint functions.
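A textbook one-dimensional Krawczyk test conveys the flavor of such uniqueness tests (the paper's construction uses higher-order information and outward rounding, which this sketch omits):

```python
# If the Krawczyk image K of a box lies strictly inside the box,
# the box contains exactly one zero of f.

def imul(a, b):
    p = (a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1])
    return (min(p), max(p))

def krawczyk(f, df_box, lo, hi):
    """Krawczyk operator for f on [lo, hi]; df_box encloses f' there."""
    m = 0.5 * (lo + hi)
    C = 1.0 / (0.5 * (df_box[0] + df_box[1]))       # preconditioner, C > 0 assumed
    one_minus = (1.0 - C * df_box[1], 1.0 - C * df_box[0])
    prod = imul(one_minus, (lo - m, hi - m))
    center = m - C * f(m)
    return (center + prod[0], center + prod[1])

f = lambda x: x * x - 2.0                # unique zero sqrt(2) in [1.3, 1.5]
lo, hi = 1.3, 1.5
K = krawczyk(f, (2 * lo, 2 * hi), lo, hi)
unique = lo < K[0] and K[1] < hi         # strict inclusion => unique zero
print(K, unique)
```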
NO163.
A. Baharev and A. Neumaier,
A globally convergent method for finding all steady-state solutions of
distillation columns,
AIChE Journal 60 (2014), 410-414.
DOI,
pdf file (345K)
supplementary material, including source code
A globally convergent method is proposed that either returns all
solutions to steady-state models of distillation columns or proves their
infeasibility. Initial estimates are not required. The method requires a
specific but fairly general block-sparsity pattern; in return, the
computational efforts grow linearly with the number of stages in the
column. The well-known stage-by-stage (and the sequential modular)
approach also reduces the task of solving high-dimensional steady-state
models to that of solving a sequence of low-dimensional ones.
Unfortunately, these low-dimensional systems are extremely sensitive to
the initial estimates, so that solving them can be notoriously difficult
or even impossible. The proposed algorithm overcomes these numerical
difficulties by a new reparameterization technique. The successful
solution of a numerically challenging reactive distillation column with
seven steady states shows the robustness of the method. No published
software known to the authors could compute all solutions to this
difficult model without expert tuning.
O162.
F. Domes, M. Fuchs, H. Schichl and A. Neumaier,
The optimization test environment,
Optimization and Engineering 15 (2014), 443-468.
pdf file (331K)
The OPTIMIZATION TEST ENVIRONMENT is an interface to efficiently
test different optimization solvers. It is designed as a tool for both
developers of solver software and practitioners who just look for the
best solver for their specific problem class. It enables users, among
other things, to perform a statistical analysis of the results,
automatically produced as LaTeX, PDF, and JPG output.
The OPTIMIZATION TEST ENVIRONMENT is free to use for research purposes.
O161.
M. Ahookhosh and A. Neumaier,
High-dimensional convex optimization via optimal affine subgradient
algorithms,
Proc. International Workshop on Advances in Regularization,
Optimization, Kernel Methods and Support Vector Machines: Theory and
Applications (ROKS 2013), Leuven, July 2013.
pdf file (540K)
This study discusses some algorithms for solving high-dimensional convex
optimization problems appearing in applied sciences like signal and
image processing, machine learning and statistics. We improve an
optimal first-order approach for a class of objective functions including
costly affine terms by employing a special multidimensional subspace
search. We report some numerical results for some imaging problems
including nonsmooth regularization terms.
S160.
M. Obernosterer, P. Panek, P. Mayer, A. Neumaier, W.L. Zagler,
Aktionsklassen für Handlungsabläufe im
computer-gestützten Wohnen
(Action classes for activity sequences in computer-assisted living),
pp. 99-105 in: Proc.
eHealth 2013,
Vienna, Austria (2013).
pdf file (662K)
The eHome assistance system was developed for supporting older people
living alone. The complete activity data resulting from 18 months in
11 flats was subjected in anonymised form to a nearly flat-independent
statistical analysis. The data analysis resulted in action classes,
from which typical behaviour patterns can be deduced. On their basis,
the expert system of eHome will be complemented by a self-learning
Bayesian network.
N159.
S. Das and A. Neumaier,
Solving overdetermined eigenvalue problems,
SIAM J. Sci. Comput. 35 (2013), A541-A560.
pdf file (662K)
We propose a new interpretation of the generalized overdetermined
eigenvalue problem (A-lambda B)v=0 for two rectangular matrices A and B,
its stability analysis, and an efficient algorithm for solving it.
Usually, the matrix pencil does not have any rank deficient member.
Therefore we aim at computing a lambda for which A-lambda B is as close
as possible to rank deficient; i.e., we search for a lambda that locally
minimizes the smallest singular value of the matrix pencil A-lambda B.
Practically, the proposed algorithm requires O(mn^2) operations for
computing all the eigenpairs. We also describe a method to compute
practical starting eigenpairs.
The effectiveness of the new approach is demonstrated with numerical
experiments.
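The objective can be illustrated by a brute-force scan (far less efficient than the paper's algorithm): construct a rectangular pencil with a known eigenvalue and locate the minimizer of the smallest singular value:

```python
import numpy as np

def smallest_sv(A, B, lam):
    """Smallest singular value of A - lam*B."""
    return np.linalg.svd(A - lam * B, compute_uv=False)[-1]

# Build a 4x2 pencil with known eigenvalue lam0 = 2.0:
# A = lam0*B + c*w^T with w orthogonal to the eigenvector v = (1, 1),
# so (A - lam0*B) v = 0 and A - lam0*B is rank deficient.
B = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, -0.5]])
c = np.array([1.0, -2.0, 0.5, 1.0])
w = np.array([1.0, -1.0])
A = 2.0 * B + np.outer(c, w)

grid = np.linspace(0.0, 4.0, 401)
best = grid[np.argmin([smallest_sv(A, B, t) for t in grid])]
print(best)   # close to 2.0
```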
O158.
H. Schichl, A. Neumaier, M.C. Markot and F. Domes,
On solving mixed-integer constraint satisfaction
problems with unbounded variables,
pp. 216-233 in:
Integration of AI and OR Techniques in Constraint Programming for
Combinatorial Optimization Problems,
Lecture Notes in Computer Science, Vol. 7874 (2013).
pdf file (282K)
Many mixed-integer constraint satisfaction problems and global
optimization problems contain some variables with unbounded domains.
Their solution by branch and bound methods to global optimality
poses special challenges as the search region is infinitely extended.
Many usually strong bounding methods lose their efficiency or fail
altogether when infinite domains are involved.
Most implemented branch and bound solvers add artificial bounds to make
the problem bounded, or require the user to add these. However, if
these bounds are too small, they may exclude a solution, while when
they are too large, the search in the resulting huge but bounded region
may be very inefficient. Moreover, if the global solver must provide a
rigorous guarantee (as for the use in computer-assisted proofs), such
artificial bounds are not permitted without justification by proof.
We developed methods based on compactification and projective geometry
as well as asymptotic analysis to cope with the unboundedness in a
rigorous manner. Based on projective geometry we implemented two
different versions of the basic idea, namely
(i) projective constraint propagation, and
(ii) projective transformation of the variables,
in the rigorous global solvers COCONUT and GloptLab.
Numerical tests demonstrate the capability of the new technique,
combined with standard pruning methods, to rigorously solve unbounded
global problems. In addition, we present a generalization of projective
transformation based on asymptotic analysis.
Compactification and projective transformation, as well as asymptotic
analysis, are useless in discrete situations. But they can very well be
applied to compute bounded relaxations, and we present methods
for doing that in an efficient manner.
N157.
Q. Fazal and A. Neumaier,
Error bounds for initial value problems by optimization,
Soft Computing 17 (2013), 1345-1356.
pdf file (295K)
We use preconditioned defect estimates and optimization techniques to
compute error bounds for approximate solutions of initial value problems
for ordinary differential equations with uncertain initial conditions.
The bounds are based on new results about conditional differential
inequalities, which are developed, too.
The scheme is implemented in MATLAB and AMPL, and the resulting
enclosures are compared with the packages VALENCIA-IVP, VNODE-LP and
VSPODE for bounding solutions of ODEs. The current prototype
implementation uses heuristics to solve the global optimization
subproblems, hence the bounds obtained in the numerical experiments are
not fully rigorous. They could be made so by using rigorous global
optimization and rounding error control, but the effect on the bounds
is likely to be marginal only.
P156.
L. Pal, T. Csendes, M.C. Markot and A. Neumaier,
Black-box optimization benchmarking of the GLOBAL method,
Evolutionary Computation 20 (2012), 609-639.
pdf file (1333K)
GLOBAL is a multistart type stochastic method for bound constrained
global optimization problems. Its goal is to find the best local minima
that are potentially global. For this reason it involves a combination
of sampling, clustering, and local search. The role of clustering is to
reduce the number of local searches by forming groups of points around
the local minimizers from a uniformly sampled domain and starting a few
local searches in each of those groups. We evaluate the performance of
the GLOBAL algorithm on the BBOB 2009 noiseless testbed, containing
problems which reflect the typical difficulties arising in real-world
applications. The obtained results are also compared with those
obtained from the simple multistart procedure in order to analyze the
effects of the applied clustering rule. An improved parametrization is
introduced in the GLOBAL method and the performance of the new
procedure is compared with the performance of the MATLAB GlobalSearch
solver by using the BBOB 2010 test environment.
AC155.
A. Baharev and A. Neumaier,
Chemical Process Modeling in Modelica,
pp. 955-962 in:
Proceedings of the 9th International Modelica Conference, Munich,
Germany; Sep 3-5, 2012.
DOI
pdf file (245K)
One-page abstract
supplementary material, including source code
Chemical process models are highly structured. Information on how the
hierarchical components are connected helps to solve the model
efficiently. The structural information retrieved from the JModelica
environment will play an important role in the development of our novel
optimization methods.
Foundations of a Modelica library for
general-purpose chemical process modeling have been built. Multiple
steady-states in ideal two-product distillation were computed as a proof
of concept. The Modelica source code is available at the project
homepage. The issues encountered during modeling may be valuable to the
Modelica language designers.
F154.
K. Kofler and A. Neumaier,
DynGenPar - A Dynamic Generalized Parser for Common Mathematical
Language,
pp. 386-401 in:
Intelligent Computer Mathematics (J. Jeuring et al., eds.),
Lecture Notes in Computer Science Volume 7362 (2012)
pdf file (580K)
software
This paper introduces a dynamic generalized parser aimed primarily at
natural mathematical language. Our algorithm combines the efficiency of
GLR parsing, the dynamic extensibility of tableless approaches
and the expressiveness of extended context-free grammars such
as parallel multiple context-free grammars (PMCFGs). In particular, it
supports efficient dynamic rule additions to the grammar at any moment.
The algorithm is designed in a fully incremental way, allowing parsing
to be resumed with additional tokens without restarting the parse
process, and can predict possible next tokens. Additionally, we handle
constraints on the token following a rule. This allows for grammatically
correct English indefinite articles when working with word tokens. It
can also represent typical operations for scannerless parsing such as
maximal matches when working with character tokens. Our long-term
goal is to computerize a large library of existing mathematical
knowledge using the new parser. In this paper, we present the
algorithmic ideas behind our approach, give a short overview of the
implementation, and present some efficiency results.
A153.
P. Schodl and A. Neumaier,
Continuity notions for multi-valued mappings,
Reliable Computing 16 (2012), 84-101.
pdf file (245K)
We discuss properties of continuity notions for multi-valued mappings
which allow disconnected images, but still have a useful zero property.
The starting point is the notion of c-continuity introduced by the
second author in the book Interval Methods for Systems of Equations to
study enclosure properties of interval Newton operators. It was claimed
in that book that c-continuity possesses a zero property. However, we
provide a counterexample. Two other continuity notions are introduced
and examined, and applied in a logical context.
FO152.
P. Schodl, A. Neumaier, K. Kofler, F. Domes, and H. Schichl,
Towards a Self-reflective, Context-aware Semantic
Representation of Mathematical Specifications,
Chapter 3 in: Modeling Languages in Mathematical Optimization
(J. Kallrath, ed.), Springer, 2012.
pdf file (135K)
We discuss a framework for the representation and processing of
mathematics developed within and for the MoSMath project. The MoSMath
project aims to create a software system that is able to translate
optimization problems from an almost natural language to the algebraic
modeling language AMPL.
As part of a greater vision (the FMathL project), this framework is
designed both to serve the optimization-oriented MoSMath project, and
to provide a basis for the much more general FMathL project.
We introduce the semantic memory, a data structure to represent
semantic information, a type system to define and assign types to data,
and the semantic virtual machine (SVM), a low level, Turing-complete
programming system that processes data represented in the semantic
memory.
Two features that set our approach apart from other frameworks are the
possibility to reflect every major part of the system within the system
itself, and the emphasis on the context-awareness of mathematics.
Arguments are given why this framework appears to be well suited for
the representation and processing of arbitrary mathematics.
It is discussed which mathematical content the framework is currently
able to represent and interface.
O151.
F. Domes and A. Neumaier,
Rigorous filtering using linear relaxations,
J. Global Optimization 53 (2012), 441-473.
pdf file (368K)
This paper presents rigorous filtering methods for continuous
constraint satisfaction problems based on linear relaxations.
Filtering or pruning stands for reducing the search space of constraint
satisfaction problems. We discuss partially improved and existing
methods as well as new approaches for rigorously enclosing the solution
set of linear systems of inequalities. We also discuss different
methods for computing linear relaxations. This allows one to customize
combinations of relaxation and filtering. Care is taken to ensure that
all methods correctly account for rounding errors in the computations.
Although most of the results apply more generally, strong emphasis is
given to relaxing and filtering quadratic constraints, as implemented
in the GloptLab environment, which internally exploits a quadratic
structure. Demonstrative examples and tests comparing the different
linear relaxation methods are also presented.
FO150.
A. Neumaier and P. Schodl,
A framework for representing and processing arbitrary mathematics,
pp. 476-479 in: Proc. Int. Conf. Knowledge Engineering and Ontology
Development (J. Filipe and J.L.G. Dietz, eds.), SciTe Press 2010.
pdf file (1469K)
While mathematicians already benefit from the computer as regards
numerical problems, visualization, symbolic manipulation, typesetting,
etc., there is no common facility to store and process information,
and mathematicians usually have to communicate the same mathematical
content multiple times to the computer.
We are in the process of creating and implementing a framework that is
capable of representing and interfacing optimization problems, and we
argue that this framework can be used to represent arbitrary
mathematics and contribute towards a universal mathematical database.
N149.
F. Domes and A. Neumaier,
Rigorous enclosures of ellipsoids and directed Cholesky factorizations,
SIAM J. Matrix Anal. Appl. 32 (2011), 262-285.
pdf file (379K)
This paper discusses the rigorous enclosure of an ellipsoid by a
rectangular box, its interval hull, providing a convenient
preprocessing step for constrained optimization problems.
A quadratic inequality constraint with a strictly convex Hessian
matrix defines an ellipsoid. The Cholesky factorization can be used to
transform a strictly convex quadratic constraint into a norm inequality,
for which the interval hull is easy to compute analytically. In exact
arithmetic, the Cholesky factorization of a nonsingular symmetric
matrix exists iff the matrix is positive definite. However, to
cope efficiently with rounding errors in inexact arithmetic is
nontrivial.
Numerical tests show that even nearly singular problems can be handled
successfully by our techniques. To rigorously account for the rounding
errors involved in the computation of the interval hull and to handle
quadratic inequality constraints having uncertain coefficients, we
define the concept of a directed Cholesky factorization and give two
algorithms for computing one.
We also discuss how a directed Cholesky factorization can be used for
testing positive definiteness.
Some numerical tests are given in order to explore the features and
limitations of the directed Cholesky factorization methods.
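The transformation into a norm inequality can be sketched with a plain floating-point Cholesky factorization (the directed factorization of the paper additionally controls rounding errors):

```python
import numpy as np

# For the ellipsoid x^T A x <= r^2 with A symmetric positive definite,
# A = L L^T turns the constraint into the norm inequality
# ||L^T x|| <= r, and the interval hull of the ellipsoid is
# |x_i| <= r * sqrt((A^{-1})_{ii}).

A = np.array([[2.0, 1.0], [1.0, 2.0]])
r = 1.0
L = np.linalg.cholesky(A)                    # A = L @ L.T
Linv = np.linalg.solve(L, np.eye(2))
hull = r * np.sqrt((Linv**2).sum(axis=0))    # = r * sqrt(diag(A^{-1}))
print(hull)
```

Here diag(A^{-1})_i = ||L^{-1} e_i||^2, so the hull follows directly from the triangular factor without forming A^{-1} explicitly.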
O148.
A. Neumaier, H. Fendl, H. Schilly, T. Leitner,
Derivative-free unconstrained optimization based on QR
factorizations,
Soft Computing 15 (2011), 2287-2298.
pdf file (186K)
vxqr1 software
This paper presents basic features of a new family of algorithms for
unconstrained derivative-free optimization, based on line searches
along directions generated from QR factorizations of past
direction matrices. Emphasis is on fast descent with a low number
of function values, so that the algorithm can be used for fairly
expensive functions. The theoretical total time overhead needed per
function evaluation is of order O(n^2), where n is the problem
dimension, but the observed overhead is much smaller.
Numerical results are given for a particular algorithm VXQR1 from this
family, implemented in Matlab, and evaluated on the scalability test
set of Herrera et al. for problems in dimensions 50, 100, 200, 500,
and 1000.
Performance depends a lot on the graph of the function along line
segments.
The algorithm is typically very fast on smooth problems with not too
rugged graphs, and on problems with a roughly separable structure.
It typically performs poorly on problems where the graph along many
directions is highly multimodal without pronounced overall slope
(e.g., for smooth functions with superimposed
oscillations of significant size), where the graphs along many
directions are piecewise constant (e.g., for problems minimizing a
maximum norm), or where the function overflows on the major part of
the search region and no starting point with finite function value is
known.
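A toy sketch in the spirit of this idea (not VXQR1 itself): orthonormalize past directions by a QR factorization and run derivative-free line searches along the resulting columns; on a smooth quadratic a single sweep already reaches the minimizer:

```python
import numpy as np

def line_search(f, x, d, lo=-10.0, hi=10.0, iters=100):
    """Ternary search for min over t of f(x + t*d), assuming unimodality."""
    for _ in range(iters):
        m1, m2 = lo + (hi - lo) / 3.0, hi - (hi - lo) / 3.0
        if f(x + m1 * d) < f(x + m2 * d):
            hi = m2
        else:
            lo = m1
    return x + 0.5 * (lo + hi) * d

rng = np.random.default_rng(0)
n = 5
target = rng.standard_normal(n)
f = lambda x: float(np.sum((x - target) ** 2))

Q, _ = np.linalg.qr(rng.standard_normal((n, n)))  # orthonormal directions
x = np.zeros(n)
for i in range(n):                # one sweep through the QR directions
    x = line_search(f, x, Q[:, i])
print(f(x))                       # near zero after a single sweep
```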
O147.
M. Fuchs and A. Neumaier,
A splitting technique for discrete search based on convex
relaxation,
J. Uncertain Systems 4 (2010), 14-21.
pdf file (175K)
In mixed integer programming, branching methods are a powerful and
frequently employed tool. This paper presents a branching strategy for
the case that the integer constraints are associated with a finite set
of points in a possibly multidimensional space. We use the knowledge
about this discrete set represented by its minimum spanning tree and
find a splitting based on convex relaxation.
Typical applications include design optimization problems where design
points specifying several discrete choices can be considered as
such discrete sets.
O146.
F. Domes and A. Neumaier,
Constraint propagation on quadratic constraints,
Constraints 15 (2010), 404-429.
pdf file (185K)
This paper considers constraint propagation methods for continuous
constraint satisfaction problems consisting of linear and quadratic
constraints. All methods can be applied after suitable preprocessing
to arbitrary algebraic constraints. The basic new techniques consist
in eliminating bilinear entries from a quadratic constraint, and
solving the resulting separable quadratic constraints by means of a
sequence of univariate quadratic problems. Care is taken to ensure
that all methods correctly account for rounding errors in the
computations.
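A minimal instance of the univariate step (in exact real arithmetic; the paper additionally accounts for rounding errors): intersect a box with the solution set of a univariate quadratic constraint:

```python
import math

# Given a separable constraint a*x**2 + b*x <= c with a > 0 and a box
# [lo, hi] for x, the feasible x lie between the roots of
# a*x**2 + b*x - c = 0, so the box can be tightened accordingly.

def prune(a, b, c, lo, hi):
    """Tighten [lo, hi] under a*x^2 + b*x <= c, assuming a > 0."""
    disc = b * b + 4.0 * a * c
    if disc < 0.0:
        return None                      # constraint infeasible
    root = math.sqrt(disc)
    x1, x2 = (-b - root) / (2 * a), (-b + root) / (2 * a)
    lo2, hi2 = max(lo, x1), min(hi, x2)
    return (lo2, hi2) if lo2 <= hi2 else None

print(prune(1.0, 0.0, 4.0, 0.0, 10.0))   # -> (0.0, 2.0)
```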
O145.
R. Fourer, C. Maheshwari, A. Neumaier, D. Orban, and H. Schichl,
Convexity and Concavity Detection in Computational Graphs.
Tree Walks for Convexity Assessment,
INFORMS J. Computing 22 (2010), 26-43.
pdf file (313K)
In this paper, we examine sets of symbolic tools associated to
modeling systems for mathematical programming which can be used to
automatically detect the presence or lack of convexity and concavity
in the objective and constraint functions. As a consequence, convexity
of the feasible set may be assessed to some extent. The COCONUT solver
system focuses on nonlinear global continuous optimization and possesses
its own modeling language and data structures. The Dr.AMPL meta-solver
aims to analyze nonlinear differentiable optimization models and hooks
into the AMPL Solver Library [Gay02]. The symbolic analysis may be
supplemented with a numerical disproving phase when the former returns
inconclusive results. We report numerical results using these tools on
sets of test problems for both global and local optimization.
O144.
M. Fuchs and A. Neumaier,
Potential based clouds in robust design optimization,
J. Stat. Theory Practice 3 (2009), 225-238.
pdf file (970K)
downloading/printing problems?
Robust design optimization methods applied to real-life problems
face some major difficulties: how to deal with the estimation of
probability densities when data are sparse, how to cope with
high-dimensional problems, and how to use valuable information
in the form of unformalized expert knowledge.
In this paper we introduce in detail the clouds formalism as a means
to process available uncertainty information reliably, even if
limited in amount and possibly lacking a formal description,
and to provide a worst-case analysis with confidence regions of
relevant scenarios which can be involved in an optimization problem
formulation for robust design.
O143.
M. Fuchs and A. Neumaier,
Autonomous robust design optimization with potential clouds,
Int. J. Reliability Safety 3 (2009), 23-34.
pdf file (230K)
downloading/printing problems?
The task of autonomous and robust design cannot be regarded as a single
task, but consists of two tasks that have to be accomplished
concurrently. First, the design should be found autonomously; this
indicates the existence of a method which is able to find the optimal
design choice automatically. Second, the design should be robust;
in other words: the design should be safeguarded against uncertain
perturbations. Traditional modeling of uncertainties faces several
problems. The lack of knowledge about distributions of uncertain
variables, or about correlations between uncertain data,
typically leads to underestimation of error probabilities. Moreover,
in higher dimensions the numerical computation
of the error probabilities is very expensive, if not impossible, even
given knowledge of the multivariate probability distributions.
Based on the clouds formalism we have developed new methodologies to
gather all available uncertainty information from expert engineers,
process it into a reliable worst-case analysis, and finally optimize
the design, seeking the optimal robust design.
SO142.
V. Kreinovich, A. Neumaier, and Gang Xiang,
Towards a Combination of Interval and Ellipsoid Uncertainty,
Vych. Techn. (Computational Technologies) 13 (2008), 5-16.
pdf file (217K)
downloading/printing problems?
In many real-life situations, we do not know the probability
distribution of measurement errors but only upper bounds on these
errors. In such situations, once we know the measurement results,
we can only conclude that the actual (unknown) value of a
quantity belongs to some interval. Based on this interval uncertainty,
we want to find the range of possible values of a desired function
of the uncertain quantities. In general, computing this range is an
NP-hard problem, but in a linear approximation, valid for small
uncertainties, there is a linear time algorithm for computing the range.
In other situations, we know an ellipsoid that contains the actual
values. In this case, we also have a linear time algorithm for
computing the range of a linear function.
In some cases, however, we have a combination of interval and
ellipsoid uncertainty. In this case, the actual values belong to the
intersection of a box and an ellipsoid. In
general, estimating the range over the intersection enables us to
get a narrower range for the function. In this paper, we provide two
algorithms for estimating the range of a linear function over such an
intersection: a simpler O(n log(n)) algorithm and
a (somewhat more complex) linear-time algorithm. Both algorithms can
be extended to the l^p-case, when instead of an ellipsoid we have
a set defined by a p-norm.
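For the two pure cases, the linear-time range computations are elementary; the sketch below illustrates them for a box and for an axis-aligned ellipsoid (an assumption made here for simplicity). The box-ellipsoid intersection treated in the paper requires the more careful algorithms described there.

```python
import math

def range_over_box(a, lo, hi):
    """Exact range of f(x) = sum_i a_i*x_i over the box lo <= x <= hi,
    in linear time: each term attains its extremes at an endpoint."""
    fmin = sum(ai * (l if ai >= 0 else h) for ai, l, h in zip(a, lo, hi))
    fmax = sum(ai * (h if ai >= 0 else l) for ai, l, h in zip(a, lo, hi))
    return fmin, fmax

def range_over_ellipsoid(a, center, radii):
    """Exact range of f over the axis-aligned ellipsoid
    sum_i ((x_i - c_i) / r_i)**2 <= 1, by the Cauchy-Schwarz inequality."""
    mid = sum(ai * ci for ai, ci in zip(a, center))
    rad = math.sqrt(sum((ai * ri) ** 2 for ai, ri in zip(a, radii)))
    return mid - rad, mid + rad
```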
O141.
M. Fuchs and A. Neumaier,
Handling uncertainty in higher dimensions with potential clouds
towards robust design optimization,
pp. 376-382 in: Soft Methods for Handling Variability and Imprecision
(D. Dubois et al., eds.),
Advances in Soft Computing, Vol. 48, Springer 2008.
pdf file (265K)
downloading/printing problems?
Robust design optimization methods applied to real life problems
face some major difficulties: how to deal with the estimation of
probability densities when data are sparse, how to cope with high
dimensional problems and how to use valuable information in the
form of unformalized expert knowledge.
We introduce the clouds formalism as a means to process available
uncertainty information reliably, even if limited in amount and
possibly lacking a formal description. We provide a worst-case
analysis with confidence regions of relevant scenarios which
can be involved in an optimization problem formulation
for robust design.
O140.
M. Fuchs, D. Girimonte, D. Izzo, and A. Neumaier,
Robust and automated space system design,
Chapter (pp. 251-272) in:
Robust intelligent systems (A. Schuster, ed.), Springer, 2008.
pdf file (357K)
downloading/printing problems?
Over the last few years, much research has been dedicated to
the creation of decision support systems for space system engineers
or even to completely automated design methods capturing
the reasoning of system experts. However, the problem of taking
into account the uncertainties of variables and models defining
an optimal and robust spacecraft design has not been tackled
effectively yet. This chapter proposes a novel, simple approach
based on the clouds formalism to elicit and process the uncertainty
information provided by expert designers and to incorporate this
information into the automated search for a robust, optimal design.
O139.
M. Fuchs and A. Neumaier,
Uncertainty modeling with clouds in autonomous robust design
optimization,
pp. 1-22 in: Proc. 3rd Int. Workshop Reliable Engineering Computing,
Savannah, Georgia, USA, 2008.
pdf file (399K)
downloading/printing problems?
The task of autonomous and robust design cannot be regarded as a
single task, but consists of two tasks that have to be accomplished
concurrently.
First, the design should be found autonomously;
this indicates the existence of a method which is able to find
the optimal design choice automatically.
Second, the design should be robust; in other words: the design
should be safeguarded against uncertain perturbations.
Traditional modeling of uncertainties faces several problems.
The lack of knowledge about distributions of uncertain variables,
or about correlations between uncertain data,
typically leads to underestimation of error probabilities.
Moreover, in higher dimensions the numerical computation
of the error probabilities is very expensive, if not impossible,
even given knowledge of the multivariate probability
distributions.
Based on the clouds formalism we have developed new methodologies
to gather all available uncertainty information from expert
engineers, process it into a reliable worst-case analysis, and
finally optimize the design, seeking the optimal robust design.
The new methods are applied to problems for autonomous optimization
in robust spacecraft system design at the European Space Agency (ESA).
O138.
W. Huyer and A. Neumaier,
Snobfit - Stable Noisy Optimization by Branch and Fit,
ACM Trans. Math. Software 35 (2008), Article 9.
pdf file (323K)
Matlab files
downloading/printing problems?
The software package Snobfit for bound constrained (and soft
constrained) noisy optimization of an expensive objective function is
described. It combines global and local search by branching and local
quadratic fits. The program is made robust and flexible for practical
use by allowing for hidden constraints, batch function evaluations,
change of search regions, etc.
O137.
F. Domes and A. Neumaier,
A scaling algorithm for polynomial constraint satisfaction problems,
J. Global Optimization 42 (2008), 327-345.
pdf file (223K)
downloading/printing problems?
Good scaling is an essential requirement for the good behavior of
many numerical algorithms. In particular, for problems involving
multivariate polynomials, a change of scale in one or more variables
may have drastic effects on the robustness of subsequent calculations.
This paper surveys scaling algorithms for systems of polynomials
from the literature, and discusses some new ones, applicable to
arbitrary polynomial constraint satisfaction problems.
AN136.
A. Neumaier,
Certified error bounds for uncertain elliptic equations,
J. Comput. Appl. Math. 218 (2008), 125-136.
pdf file (382K)
ps.gz file (190K)
downloading/printing problems?
In many applications, partial differential equations depend on
parameters which are only approximately known.
Using tools from functional analysis and global optimization,
methods are presented for obtaining certificates for rigorous and
realistic error bounds on the solution of linear elliptic partial
differential equations in arbitrary domains, either in an energy norm,
or of key functionals of the solutions, given an approximate solution.
Uncertainty in the parameters specifying the partial differential
equations can be taken into account, either in a worst case setting,
or given limited probabilistic information in terms of clouds.
O135.
M. Fuchs, A. Neumaier, and D. Girimonte,
Uncertainty modeling in autonomous robust spacecraft system design,
Proc. Appl. Math. Mech. 7 (2007), 2060041-2060042.
pdf file (179K)
downloading/printing problems?
In the last few years, much research has been dedicated to the
development of decision support systems for the space system
engineers or even of completely automated design methods capturing
the reasoning of the system experts. However, the
problem of taking into account the uncertainties of the variables
and models to determine an optimal and robust spacecraft
design has not been tackled effectively yet. Based on the
clouds formalism we propose a novel approach to process the
uncertainty information provided by expert designers and
incorporate it into the automated search for a robust optimal design.
AO134.
A. Neumaier,
Computer-assisted proofs,
in: (W. Luther and W. Otten, eds.)
Proc. 12th GAMM-IMACS Symp. Sci. Comp., (SCAN 2006).
IEEE Computer Society, 2007.
ps.gz file (102K),
pdf file (111K)
downloading/printing problems?
This paper discusses what makes a computer-assisted proof
trustworthy, the quest for an algorithmic support system for
computer-assisted proof, relations to global optimization, an
analysis of some recent proofs, and some current challenges which
appear to be amenable to a computer-assisted treatment.
C133.
R. Woo and A. Neumaier,
On graphs whose spectral radius is bounded by 3/2 sqrt(2),
Graphs and Combinatorics 23 (2007), 1-14.
ps.gz file (135K),
pdf file (144K)
slides (99K)
downloading/printing problems?
The structure of graphs whose largest eigenvalue is bounded by
3/2 sqrt(2) (approx. 2.1312) is investigated.
In particular, such a graph can have at most one circuit, and has a
natural quipu structure.
OP132.
T. Vinko and A. Neumaier,
New bounds for Morse clusters,
J. Global Optimization 39 (2007), 483-494.
ps.gz file (166K),
pdf file (197K)
downloading/printing problems?
This paper presents new, simple arguments improving the lower bounds
for the total energy and the minimal inter-particle distance in
minimal energy atom cluster problems with interactions given by a
Morse potential, where the atom separation problem is difficult due
to the finite energy at zero atom separation. Apart from being sharper
than previously known bounds, they also apply for a wider range
rho>4.967 of the parameter in the Morse potential.
Most results also hold for more general pair potentials.
NO131.
A. Neumaier and A. Pownuk,
Linear systems with large uncertainties, with applications to truss
structures,
Reliable Computing 13 (2007), 149-172.
ps.gz file (466K),
pdf file (621K)
downloading/printing problems?
Linear systems whose coefficients have large uncertainties
arise routinely in finite element calculations for
structures with uncertain geometry, material properties, or loads.
However, a true worst case analysis of the influence of such
uncertainties was previously possible only for very small systems
and uncertainties, or in special cases where the coefficients do
not exhibit dependence.
This paper presents a method for computing rigorous bounds on the
solution of such systems, with a computable overestimation factor that
is frequently quite small. The merits of the new approach are
demonstrated by computing realistic bounds for some large, uncertain
truss structures, some leading to linear systems with over
5000 variables and over 10000 interval parameters, with excellent
bounds for up to about 10% input uncertainty.
Also discussed are some counterexamples for the performance of
traditional approximate methods for worst case uncertainty analysis.
O130.
H. Schichl and A. Neumaier,
Transposition theorems and qualification-free optimality conditions,
Siam J. Optimization 17 (2006), 1035-1055.
ps.gz file (175K),
pdf file (211K)
downloading/printing problems?
New theorems of the alternative for polynomial constraints (based on
the Positivstellensatz from real algebraic geometry) and for linear
constraints (generalizing the transposition theorems of Motzkin and
Tucker) are proved.
Based on these, two Karush-John optimality conditions - holding
without any constraint qualification - are proved for single- or
multi-objective constrained optimization problems. The first condition
applies to polynomial optimization problems only, and gives for the
first time necessary and sufficient global optimality conditions for
polynomial problems. The second condition applies to smooth
local optimization problems and strengthens known local conditions.
If some linear or concave constraints are present, the new version
reduces the number of constraints for which a constraint qualification
is needed to get the Kuhn-Tucker conditions.
N129.
R.B. Kearfott, M.T. Nakao, A. Neumaier, S.M. Rump, S.P. Shary,
and P. van Hentenryck,
Standardized notation in interval analysis,
pp. 106-113 in:
Proc. XIII Baikal International School-seminar
"Optimization methods and their applications",
Irkutsk, Baikal, July 2-8, 2005.
Vol. 4 "Interval analysis". - Irkutsk: Institute of Energy Systems
SB RAS, 2005.
dvi.gz file (16K),
ps.gz file (81K),
pdf file (137K),
downloading/printing problems?
A standard for the notation of the most used quantities and operators
in interval analysis is proposed.
NO128.
H. Schichl and A. Neumaier,
Interval Analysis on Directed Acyclic Graphs for Global
Optimization,
J. Global Optimization 33 (2005), 541-562.
dvi.gz file (29K),
ps.gz file (94K),
pdf file (184K),
Slides, Part 1,
Slides, Part 2
downloading/printing problems?
A directed acyclic graph (DAG) representation of optimization
problems represents each variable, each operation, and each constraint
in the problem formulation by a node of the DAG,
with edges representing the flow of the computation.
Using bounds on ranges of intermediate results, represented as weights
on the nodes and a suitable mix of forward and backward evaluation, it
is possible to give efficient implementations of interval evaluation
and automatic differentiation. It is shown how to combine this with
constraint propagation techniques to produce narrower interval
derivatives and slopes than those provided by using only interval
automatic differentiation preceded by constraint propagation.
The implementation is based on earlier work by Kolev
on optimal slopes and by Bliek on backward slope evaluation.
Care is taken to ensure that rounding errors are treated correctly.
Interval techniques are presented for computing from the DAG useful
redundant constraints, in particular linear underestimators for the
objective function, a constraint, or a Lagrangian.
The linear underestimators can be found either by slope computations,
or by recursive backward underestimation.
For sufficiently sparse problems the work is proportional to the
number of operations in the calculation of the objective function
(resp. the Lagrangian).
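The forward/backward idea at a single DAG node can be sketched as follows (toy intervals as pairs, exact arithmetic, none of the outward rounding the paper requires): forward evaluation tightens the node's range, and backward propagation then tightens its operands.

```python
def ia_add(x, y):
    """Interval addition [x] + [y]."""
    return (x[0] + y[0], x[1] + y[1])

def ia_sub(x, y):
    """Interval subtraction [x] - [y]."""
    return (x[0] - y[1], x[1] - y[0])

def intersect(x, y):
    """Intersection of two intervals, or None if empty."""
    lo, hi = max(x[0], y[0]), min(x[1], y[1])
    return (lo, hi) if lo <= hi else None

def forward_backward_add(x, y, z):
    """One forward/backward pass through a node z = x + y.
    Forward: tighten z from x, y; backward: tighten x, y from z."""
    z = intersect(z, ia_add(x, y))   # forward evaluation
    if z is None:
        return None                  # node infeasible
    x = intersect(x, ia_sub(z, y))   # backward propagation to x
    y = intersect(y, ia_sub(z, x))   # ... and to y
    return x, y, z
```

For example, x, y in [0, 10] with the constraint x + y >= 12 narrows both operands to [2, 10].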
P208.
A. Neumaier,
Mathematik, Physik und Ewigkeit (mit einem Augenzwinkern betrachtet),
Professorenforum-Journal 6 (2005), No. 3, 37-43.
pdf file (116K)
O126.
A. Neumaier, O. Shcherbina, W. Huyer and T. Vinko,
A comparison of complete global optimization solvers,
Math. Programming B 103 (2005), 335-356.
ps.gz file (232K),
pdf file (406K)
detailed results
downloading/printing problems?
Results are reported of testing a number of existing
state of the art solvers for global constrained optimization
and constraint satisfaction on a set of over 1000 test problems
in up to 1000 variables, collected from the literature.
The test problems are available online in AMPL and were translated
into the input formats of the various solvers using routines
from the COCONUT environment.
These translators are available online, too.
N125.
D. Daney, Y. Papegay and A. Neumaier,
Interval methods for certification of the kinematic calibration of
parallel robots,
pp. 191-198 in:
Proc. 2004 IEEE Int. Conf. Robotics Automation,
New Orleans, LA, April 2004.
pdf file (261K)
downloading/printing problems?
In this paper, we demonstrate how methods based on interval arithmetic
and interval analysis can be used to achieve numerical certification
of the kinematic calibration of parallel robots.
After describing the usual calibration methods and the motivations for
a numerical certification, we briefly present the interval methods
we used and the kinematic calibration problem. In the main part, we
develop our certified approach to this problem in the case of a
Gough platform, and we show with numerical examples how this approach
avoids wrong solutions produced by the classical approach.
Details on implementation and performance are also given.
NO124.
A. Neumaier,
Complete Search in Continuous Global Optimization and Constraint
Satisfaction,
pp. 271-369 in: Acta Numerica 2004 (A. Iserles, ed.),
Cambridge University Press 2004.
dvi.gz file (157K),
ps.gz file (262K),
pdf file (581K)
downloading/printing problems?
This survey covers the state of the art of techniques for solving
general purpose constrained global optimization problems and
continuous constraint satisfaction problems, with emphasis on
complete techniques that provably find all solutions (if there are
finitely many).
The core of the material is presented in sufficient detail that
the survey may serve as a text for teaching constrained global
optimization.
After giving motivations for and important examples of applications of
global optimization, a precise problem definition is given, and a
general form of the traditional first order necessary conditions for a
solution. Then more than a dozen software packages for complete global
search are described.
A quick review of incomplete methods for bound constrained problems
and recipes for their use in the constrained case follows, an explicit
example is discussed, introducing the main techniques used
within branch and bound techniques. Sections on interval arithmetic,
constraint propagation and local optimization are followed by
a discussion of how to avoid the cluster problem. Then a discussion of
important problem transformations follows, in particular of linear,
convex, and semilinear (= mixed integer linear) relaxations that are
important for handling larger problems.
Next, reliability issues - centering around rounding error handling and
testing methodology - are discussed, and the COCONUT framework for the
integration of the different techniques is introduced.
A list of challenges facing the field in the near future
concludes the survey.
NO123.
H. Schichl and A. Neumaier,
Exclusion regions for systems of equations,
SIAM J. Numer. Anal. 42 (2004), 383-408.
ps.gz file (784K),
pdf file (826K)
downloading/printing problems?
Branch and bound methods for finding all zeros of a nonlinear
system of equations in a box frequently have the difficulty
that subboxes containing no solution cannot be easily eliminated
if there is a nearby zero outside the box. This has the effect
that near each zero, many small boxes are created by repeated
splitting, whose processing may dominate the total work spent
on the global search.
This paper discusses the reasons for the occurrence of this so-called
cluster effect, and how to reduce the cluster effect by defining
exclusion regions around each zero found, that are guaranteed
to contain no other zero and hence can safely be discarded.
Such exclusion regions are traditionally constructed using
uniqueness tests based on the Krawczyk operator or the Kantorovich
theorem. These results are reviewed; moreover, refinements are
proved that significantly enlarge the size of the exclusion region.
Existence and uniqueness tests are also given.
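A one-dimensional uniqueness test based on the Krawczyk operator can be sketched as follows (plain floating point, a user-supplied interval enclosure of f', and f' > 0 on the box assumed for simplicity; the paper's refined exclusion regions go well beyond this):

```python
def krawczyk_test(f, df_range, lo, hi):
    """Krawczyk uniqueness test for f(x) = 0 on X = [lo, hi]:
        K(X) = x0 - f(x0)/c + (1 - f'(X)/c) * (X - x0),  c ~ f'(x0).
    If K(X) lies in the interior of X, X contains exactly one zero.
    df_range(lo, hi) must return an enclosure (a, b) of f' on [lo, hi];
    c > 0 is assumed, and no outward rounding is done (sketch only)."""
    x0 = 0.5 * (lo + hi)
    r = 0.5 * (hi - lo)
    c = sum(df_range(x0, x0)) / 2        # point value f'(x0)
    a, b = df_range(lo, hi)              # enclosure of f' on X
    sa, sb = 1 - b / c, 1 - a / c        # interval 1 - f'(X)/c
    m = max(abs(sa), abs(sb))            # |1 - f'(X)/c| * r below
    center = x0 - f(x0) / c              # Newton step from the midpoint
    k_lo, k_hi = center - m * r, center + m * r
    unique = lo < k_lo and k_hi < hi
    return unique, (k_lo, k_hi)
```

For f(x) = x^2 - 2 on [1.3, 1.5], the test succeeds and K(X) encloses sqrt(2).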
SO122.
A. Neumaier,
Clouds, fuzzy sets and probability intervals,
Reliable Computing 10 (2004), 249-272.
dvi.gz file (36K),
ps.gz file (112K),
pdf file (233K)
downloading/printing problems?
Clouds are a concept for uncertainty mediating between
the concept of a fuzzy set and that of a probability distribution.
A cloud is to a random variable more or less what an interval is
to a number. We discuss the basic theoretical and numerical
properties of clouds, and relate them to histograms, cumulative
distribution functions, and likelihood ratios.
We show how to compute nonlinear transformations of clouds,
using global optimization and constraint satisfaction techniques.
We also show how to compute rigorous enclosures for the expectation
of arbitrary functions of random variables, and for probabilities
of arbitrary statements involving random variables,
even for problems involving more than a few variables.
Finally, we relate clouds to concepts from fuzzy set theory,
in particular to the consistent possibility and necessity measures
of Jamison and Lodwick.
C121.
A. Neumaier,
Dual polar spaces as extremal distance-regular graphs,
Europ. J. Combinatorics 25 (2004), 269-274.
dvi file (24K),
ps.gz file (54K),
pdf file (146K),
downloading/printing problems?
A new inequality for the parameters of distance-regular graphs is
proved. It implies that if there are infinitely many distance-regular
graphs with fixed lambda, mu, a_i and c_i containing an induced
quadrangle then necessarily c_{i+1} >= 1+(mu -1)c_i.
As the dual polar graphs show, this inequality is best possible.
Some related results are also discussed.
O120.
W. Huyer and A. Neumaier,
Integral approximation of rays and verification of feasibility,
Reliable Computing 10 (2004), 195-207.
ps.gz file (134K),
pdf file (423K)
downloading/printing problems?
An algorithm is presented that produces an integer vector nearly
parallel to a given vector. The algorithm can be used to discover
exact rational solutions of homogeneous or inhomogeneous
linear systems of equations, given a sufficiently accurate
approximate solution.
As an application, we show how to verify rigorously the feasibility
of degenerate vertices of a linear program with integer coefficients,
and how to recognize rigorously certain redundant linear constraints
in a given system of linear equations and inequalities. This is
a first step towards the handling of degeneracies and redundancies
within rigorous global optimization codes.
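A naive version of the first step can be sketched with continued-fraction rationalization via Python's Fraction.limit_denominator; this is only an illustration, not the paper's algorithm, and it ignores the paper's accuracy analysis.

```python
from fractions import Fraction
from math import gcd, lcm  # multi-argument gcd/lcm need Python 3.9+

def nearly_parallel_integer_vector(v, max_den=10**6):
    """Return a small integer vector nearly parallel to the float
    vector v, by rationalizing the ratios v_i / v_k against the
    entry v_k of largest magnitude."""
    k = max(range(len(v)), key=lambda i: abs(v[i]))
    fracs = [Fraction(vi / v[k]).limit_denominator(max_den) for vi in v]
    d = lcm(*(f.denominator for f in fracs))  # common denominator
    ints = [int(f * d) for f in fracs]        # exact: each f*d is integral
    g = gcd(*ints)                            # strip any common factor
    return [n // g for n in ints] if g else ints
```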
NO119.
A. Neumaier and O. Shcherbina,
Safe bounds in linear and mixed-integer programming,
Math. Programming A 99 (2004), 283-296.
dvi.gz file (38K),
ps.gz file (88K),
pdf file (165K),
AMPL file for counterexample (210K),
downloading/printing problems?
Current mixed-integer linear programming solvers are based on linear
programming routines that use floating point arithmetic. Occasionally,
this leads to wrong solutions, even for problems where all coefficients
and all solution components are small integers.
An example is given where many state-of-the-art MIP solvers fail.
It is then shown how, using directed rounding and interval arithmetic,
cheap pre- and postprocessing of the linear programs arising in a
branch-and-cut framework can guarantee that no solution is lost, at
least for mixed-integer programs in which all variables can be bounded
rigorously by bounds of reasonable size.
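The flavor of such safe pre- and postprocessing can be illustrated in Python, which cannot switch the FPU rounding mode directly; nudging every intermediate result down by one ulp with math.nextafter is a crude but valid substitute, since each IEEE operation errs by at most half an ulp. This is only a sketch of the idea, not the paper's procedure.

```python
import math

def down(x):
    """Largest float strictly below x (one ulp downward)."""
    return math.nextafter(x, -math.inf)

def safe_dot_lower(a, b):
    """Rigorous lower bound on sum(a_i * b_i): every product and every
    partial sum is pushed one ulp downward, so the result can never
    exceed the exact value."""
    s = 0.0
    for ai, bi in zip(a, b):
        s = down(s + down(ai * bi))
    return s
```

An upper bound is obtained symmetrically with math.nextafter(x, math.inf).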
P118.
U. Leonhardt and A. Neumaier,
Explicit effective Hamiltonians for linear quantum-optical networks,
J. Optics B: Quantum Semiclass. Opt. 6 (2004), L1-L4.
quant-ph/0306123
download
NO117.
H. Schichl and A. Neumaier,
The NOP-2 Modeling Language,
Chapter 15 in: Modeling Languages in Mathematical Optimization
(J. Kallrath, ed.),
Applied Optimization, Vol. 88,
Kluwer, Boston 2004.
html version
ps.gz file (197K),
pdf.gz file (179K),
downloading/printing problems?
We present a short overview over the modeling language NOP-2 for
specifying general optimization problems, including constrained local
or global nonlinear programs and constrained single and multistage
stochastic programs. The proposed language is specifically designed
to represent the internal (separable and repetitive) structure of
the problem.
N116.
A. Neumaier,
Mathematical Model Building,
Chapter 3 in: Modeling Languages in Mathematical Optimization
(J. Kallrath, ed.),
Kluwer, Boston 2004.
html version
dvi.gz file (20K),
ps.gz file (47K),
pdf file (74K),
downloading/printing problems?
Some notes on mathematical modeling, listing motivations, applications,
a numerical toolkit, general modeling rules, modeling conflicts,
useful attitudes, and structuring the modeling work into 16 related
activities by means of a novel modeling diagram.
O115.
O. Shcherbina, A. Neumaier,
Djamila Sam-Haroud, Xuan-Ha Vu and Tuan-Viet Nguyen,
Benchmarking global optimization and constraint satisfaction codes,
pp. 211-222 in:
Ch. Bliek, Ch. Jermann and A. Neumaier (eds.),
Global Optimization and Constraint Satisfaction,
Springer, Berlin 2003.
dvi.gz file (17K),
ps.gz file (60K),
pdf file (117K),
downloading/printing problems?
A benchmarking suite describing over 1000 optimization problems
and constraint satisfaction problems covering problems from different
traditions is described, annotated with best known solutions, and
accompanied by recommended benchmarking protocols for comparing test
results.
PN113.
P. Frantsuzov, A. Neumaier and V.A. Mandelshtam,
Gaussian resolutions for equilibrium density matrices,
Chem. Phys. Letters 381 (2003), 117-122.
quant-ph/0306124
download
PS112.
A. Neumaier,
Ensembles and experiments in classical and quantum physics,
quant-ph/0303047,
Int. J. Mod. Phys. B 17 (2003), 2937-2980.
download
N111.
A. Neumaier,
Enclosing clusters of zeros of polynomials,
J. Comput. Appl. Math. 156 (2003), 389-401.
dvi.gz file (53K),
ps.gz file (79K),
pdf file (207K)
downloading/printing problems?
Lagrange interpolation and partial fraction expansion can be used to
derive a Gerschgorin-type theorem that gives simple and powerful a
posteriori error bounds for the zeros of a polynomial if approximations
to all zeros are available. Compared to bounds from a corresponding
eigenvalue problem, a factor of at least two is gained.
The accuracy of the bounds is analyzed, and special attention is given
to ensure that the bounds work well not only for
single zeros but also for multiple zeros and clusters of close zeros.
A Rouché-type theorem is also given, which in many cases reduces
the bound even further.
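A classical Gerschgorin-type inclusion of this kind (a simple textbook variant, not the paper's sharper bounds) states that for a monic polynomial p of degree n with pairwise distinct approximations z_1, ..., z_n to its zeros, all zeros lie in the union of the disks |z - z_i| <= n * |p(z_i) / prod_{j != i} (z_i - z_j)|:

```python
def inclusion_disks(coeffs, approx):
    """Disks (center, radius) whose union contains all zeros of the
    monic polynomial with coefficient list `coeffs` (leading
    coefficient 1 first), given pairwise distinct approximate zeros."""
    n = len(approx)

    def p(z):  # Horner evaluation of the polynomial
        r = 0.0
        for c in coeffs:
            r = r * z + c
        return r

    disks = []
    for i, zi in enumerate(approx):
        denom = 1.0
        for j, zj in enumerate(approx):
            if j != i:
                denom *= zi - zj
        disks.append((zi, n * abs(p(zi) / denom)))
    return disks
```

Note how the radii are a posteriori quantities: they only need p evaluated at the approximations, so accurate approximations yield small disks.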
O110.
W. Huyer and A. Neumaier,
A new exact penalty function,
SIAM J. Optimization 13 (2003), 1141-1158.
dvi.gz file (35K),
ps.gz file (99K),
pdf file (272K),
downloading/printing problems?
For constrained smooth or nonsmooth optimization problems,
new continuously differentiable penalty functions are derived.
They are proved exact in the sense that under some nondegeneracy
assumption, local optimizers of a nonlinear program
are precisely the optimizers of the associated penalty function.
This is achieved by augmenting the dimension of the program by a
variable that controls both the weight of the penalty terms
and the regularization of the nonsmooth terms.
NO207.
A. Neumaier,
Book review of
L. Jaulin, M. Kieffer, O. Didrit and E. Walter,
Applied Interval Analysis, 2001,
SIAM Rev. 44 (2002), 736-739.
ps.gz file (31K),
pdf file (86K),
downloading/printing problems?
NOS109.
A. Neumaier,
Fuzzy modeling in terms of surprise,
Fuzzy Sets and Systems 135 (2003), 21-38.
dvi.gz file (33K),
ps.gz file (84K),
pdf file (216K)
downloading/printing problems?
This paper presents a new approach to fuzzy modeling based on the
concept of surprise. The new concept is related to the traditional
membership function by an antitone transformation.
Advantages of the surprise approach include:
1. It has a consistent semantic interpretation.
2. It allows the joint use of quantitative and qualitative knowledge,
using simple rules of logic.
3. It is a direct extension of (and allows combination with)
the least squares approach to
reconciling conflicting approximate numerical data.
4. It is ideally suited to optimization under imprecise or conflicting
goals, specified by a combination of soft and hard interval constraints.
5. It gives a straightforward approach to constructing families of
functions consistent with fuzzy associative memories as used in fuzzy
control, with tuning parameters (reflecting linguistic ambiguity) that
can be adapted to available performance data.
O108.
A. Neumaier,
Rational functions with prescribed global and local minimizers,
J. Global Optimization 25 (2003), 175-181.
ps.gz file (142K),
pdf file (309K)
downloading/printing problems?
A family of multivariate rational functions is constructed. It has
strong local minimizers with prescribed function values at prescribed
positions. While there might be additional local minima, such minima
cannot be global. Another family of multivariate rational functions
is given, having prescribed global minimizers and prescribed
interpolating data.
NS107.
J. Santos, W.A. Lodwick and A. Neumaier,
A New Approach to Incorporate Uncertainty in Terrain Modeling,
pp. 291-299 in:
Geographic Information Science: Second International Conference,
GIScience 2002, Proceedings (M. Egenhofer and D. Mark, eds.),
Lecture Notes in Computer Science Vol. 2478,
Springer, New York 2002.
pdf file (392K),
downloading/printing problems?
A method for incorporating uncertainty in terrain modelling by
expressing elevations as fuzzy numbers is proposed. Given a finite set
of fuzzy elevations representative of the topographic surface in a
certain region, we develop methods to construct surfaces that
incorporate the uncertainty. The smoothness and continuity conditions
of the surface generating method are maintained. Using this approach,
we generalize some classic interpolators and compare them
qualitatively. Extensions to wider classes of interpolators follow
naturally from our approach. A numerical example is presented to
illustrate this idea.
OP106.
C.A. Meyer, C.A. Floudas, and A. Neumaier,
Global optimization with non-factorable constraints,
Industrial and Engineering Chemistry Research,
41 (2002), 6413-6424.
ps.gz file (163K),
pdf file (407K),
downloading/printing problems?
N105.
A. Neumaier,
Taylor forms - use and limits,
Reliable Computing 9 (2002), 43-79.
ps.gz file (150K),
pdf file (489K),
downloading/printing problems?
This review is a response to recent discussions on the reliable
computing mailing list, and to continuing uncertainties about the
properties and merits of Taylor forms, multivariate higher degree
generalizations of centered forms. They were invented around
1980 by Lanford, documented in detail in 1984
by Eckmann, Koch and Wittwer, and independently studied and
popularized since 1996 by Berz, Makino and Hoefkens.
A highlight is their application to the verified integration of
asteroid dynamics in the solar system in 2001.
Apart from summarizing what Taylor forms are and do,
this review puts them into the perspective of more traditional methods,
in particular centered forms, discusses the major applications,
and analyzes some of their elementary properties.
Particular emphasis is given to overestimation properties and the
wrapping effect. A deliberate attempt has been made to offer value
statements with appropriate justifications; but all opinions given
are my own and might be controversial.
P104.
V.A. Mandelshtam and A. Neumaier,
Further generalization and numerical implementation of pseudo-time
Schrödinger equations for quantum scattering calculations,
physics/0204049,
J. Theor. Comput. Chemistry 1 (2002), 1-15.
download
N103.
A. Neumaier,
Grand challenges and scientific standards in interval analysis,
Reliable Computing 8 (2002), 313-320.
dvi.gz file (14K),
ps.gz file (52K),
pdf file (148K),
downloading/printing problems?
This paper contains a list of 'grand challenge'
problems in interval analysis, together with some remarks on improved
interaction with mainstream mathematics and on raising scientific
standards in interval analysis.
NO102.
W.A. Lodwick, A. Neumaier and F. Newman,
Optimization under uncertainty: methods and applications in radiation
therapy,
Proc. 10th IEEE Int. Conf. Fuzzy Systems,
December 2-5, 2001, Melbourne, Australia,
pp.1219-1222.
ps.gz file (187K),
rtf.gz file (144K),
downloading/printing problems?
This research focuses on the methods and application of optimization
under uncertainty to radiation therapy planning, where it is natural
and useful to model the uncertainty of the problem directly.
In particular, we present methods of optimization under uncertainty in
radiation therapy of tumors and compare their results.
Two themes are developed in this study:
(1) the modeling of inherent uncertainty of the problems and
(2) the application of uncertainty optimization.
O101.
H. Schichl, A. Neumaier and S. Dallwig,
The NOP-2 modeling language,
Ann. Oper. Research 104 (2001), 281-312.
dvi.gz file (32K),
ps.gz file (97K),
downloading/printing problems?
An enhanced version NOP-2 of the NOP language
for specifying global optimization problems is described.
Because of its additional features,
NOP-2 is comparable to other modeling languages like AMPL and GAMS,
and allows the user to define a wide range of problems arising in real
life applications such as global constrained (and even stochastic)
optimization programs.
NOP-2 permits named variables, parameters, indexing, loops, relational
operators, extensive set operations, matrices and tensors, and
parameter arithmetic.
The main advantage that makes NOP-2 look and feel considerably
different from other modeling languages is the display of the internal
analytic structure of the problem. It is fully flexible for interfacing
with solvers requiring special features such as automatic
differentiation or interval arithmetic.
NS99.
A. Neumaier and T. Schneider,
Estimation of parameters and eigenmodes of multivariate autoregressive
models,
ACM Trans. Math. Softw. 27 (2001), 27-57.
ps.gz file (728K),
pdf file (319K),
downloading/printing problems?
Dynamical characteristics of a complex system can often be inferred
from analyses of a stochastic time series model fitted to
observations of the system. Oscillations in geophysical systems, for
example, are sometimes characterized by principal oscillation
patterns, eigenmodes of estimated autoregressive (AR) models of
first order. This paper describes the estimation of eigenmodes of AR
models of arbitrary order. AR processes of any order can be
decomposed into eigenmodes with characteristic oscillation periods,
damping times, and excitations. Estimated eigenmodes and confidence
intervals for the eigenmodes and their oscillation periods and
damping times can be computed from estimated model parameters. As a
computationally efficient method of estimating the parameters of AR
models from high-dimensional data, a stepwise least squares
algorithm is proposed. This algorithm computes model coefficients
and evaluates criteria for the selection of the model order stepwise
for AR models of successively decreasing order. Numerical
simulations indicate that, with the least squares algorithm, the AR
model coefficients and the eigenmodes derived from the coefficients
are estimated reliably and that the approximate 95% confidence
intervals for the coefficients and eigenmodes are rough
approximations of the confidence intervals inferred from the
simulations.
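As a toy illustration of the eigenmode idea (not the paper's stepwise multivariate algorithm), the sketch below fits a univariate AR(2) model by ordinary least squares and reads off the damping time and oscillation period of its eigenmode from the roots of the characteristic polynomial; all names and data are invented for the example.

```python
import cmath
import math
import random

def fit_ar2(x):
    """Least-squares fit of x[t] = a1*x[t-1] + a2*x[t-2] + noise,
    via the 2x2 normal equations (Cramer's rule)."""
    s11 = s12 = s22 = b1 = b2 = 0.0
    for t in range(2, len(x)):
        s11 += x[t-1] * x[t-1]
        s12 += x[t-1] * x[t-2]
        s22 += x[t-2] * x[t-2]
        b1 += x[t] * x[t-1]
        b2 += x[t] * x[t-2]
    det = s11 * s22 - s12 * s12
    return (s22 * b1 - s12 * b2) / det, (s11 * b2 - s12 * b1) / det

def eigenmodes(a1, a2):
    """Each root z of z^2 - a1*z - a2 is a mode with damping time
    -1/ln|z| and oscillation period 2*pi/|arg z|."""
    disc = cmath.sqrt(a1 * a1 + 4 * a2)
    modes = []
    for z in ((a1 + disc) / 2, (a1 - disc) / 2):
        damping = -1.0 / math.log(abs(z))
        arg = abs(cmath.phase(z))
        period = 2 * math.pi / arg if arg > 1e-12 else float('inf')
        modes.append((damping, period))
    return modes

# simulate a stable oscillatory AR(2) process whose complex root pair
# r*exp(+-i*w) has damping r = 0.95 and period 2*pi/w = 10 steps
random.seed(0)
r, w = 0.95, 2 * math.pi / 10
a1_true, a2_true = 2 * r * math.cos(w), -r * r
x = [0.0, 1.0]
for _ in range(5000):
    x.append(a1_true * x[-1] + a2_true * x[-2] + random.gauss(0.0, 1.0))

a1_hat, a2_hat = fit_ar2(x)
damping, period = eigenmodes(a1_hat, a2_hat)[0]
```

With 5000 samples the estimated period comes out close to the true 10-step oscillation, and the damping time close to -1/ln(0.95).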
NS98.
T. Schneider and A. Neumaier,
Algorithm 808: ARfit - A Matlab package for the estimation of
parameters and eigenmodes of multivariate autoregressive models,
ACM Trans. Math. Softw. 27 (2001), 58-65.
ps.gz file (203K),
pdf file (195K),
Matlab code,
downloading/printing problems?
ARfit is a collection of Matlab modules for modeling and
analyzing multivariate time series with autoregressive (AR) models.
ARfit contains modules for fitting AR models to given time
series data, for analyzing eigenmodes of a fitted model, and for
simulating AR processes. ARfit estimates the parameters of
AR models from given time series data with a stepwise least squares
algorithm that is computationally efficient, in particular when the
data are high-dimensional. ARfit modules construct
approximate confidence intervals for the estimated parameters and
compute statistics with which the adequacy of a fitted model can be
assessed. Dynamical characteristics of the modeled time series can
be examined by means of a decomposition of a fitted AR model into
eigenmodes and associated oscillation periods, damping times, and
excitations. The ARfit module that performs the
eigendecomposition of a fitted model also constructs approximate
confidence intervals for the eigenmodes and their oscillation
periods and damping times.
PN97.
A. Neumaier and V.A. Mandelshtam,
Pseudo-time Schrödinger equation with absorbing potential for
quantum scattering calculations,
Phys. Rev. Lett. 86 (2001), 5031-5034.
physics/0101032
download
AN96.
A. Neumaier,
Generalized Lyapunov-Schmidt reduction for parametrized equations
at near singular points,
Linear Algebra Appl. 324 (2001), 119-131.
dvi.gz file (19K),
ps.gz file (74K),
downloading/printing problems?
The Lyapunov-Schmidt reduction is generalized to the
case of imperfect singularities. The results presented neither need
very precise information about the location of the (near)
singularities nor a precise knowledge of (near) null spaces.
NS95.
G.W. Weber, J. Kim, A. Neumaier, C.C. Magori, C.B. Saanane,
W. Recheis and H. Seidler,
Thickness mapping of the occipital bone on CT-data - a new approach
applied on OH9,
Acta Anthrop. Sin. Suppl. 19 (2000), 37-46.
pdf file (275K),
downloading/printing problems?
A new approach for the analysis of cranial bone thickness is
introduced. A semiautomatic algorithm detects a multitude of
thicknesses from CT-data of the investigated bones. Every bone is
characterized by its own distribution pattern of cranial
thickness, which is then analyzed statistically.
LN94.
R.B. Kearfott, J. Dian and A. Neumaier,
Existence verification for singular zeros of nonlinear systems,
SIAM J. Numer. Anal. 38 (2000), 360-379.
ps.gz file (127K),
downloading/printing problems?
Computational fixed point theorems can be used to automatically verify
existence and uniqueness of a solution to a nonlinear system of n
equations in n variables ranging within a given region of n-space.
But such computations succeed only when the Jacobi matrix is
nonsingular everywhere in this region. However, in many practical
problems, the Jacobi matrix is singular, or nearly so, at the
solution. In such cases, tiny perturbations of the problem result in
problems either with no solution in the region, or with more than
one; thus no general computational technique can prove existence
and uniqueness. However, for systems in n complex variables, the
multiplicity of such a solution can be verified.
Such verification is possible by computation
of the topological degree, but such computations previously required
a global search on the (n-1)-dimensional boundary of an n-dimensional
region. Here, it is observed that preconditioning leads to a system
of equations whose topological degree can be computed with a much
lower-dimensional search. Formulas are given for this computation,
and the special case of rank-defect one is studied, both theoretically
and empirically.
LN93.
A. Neumaier,
On Shary's algebraic approach for linear interval equations,
SIAM J. Matrix Anal. Appl. 21 (2000), 1156-1162.
dvi.gz file (10K),
ps.gz file (58K),
pdf file (97K),
downloading/printing problems?
A recent method by Shary for enclosing the solution set of a system of
linear interval equations is derived in a new way.
It is shown that the method converges to the fixed-point inverse,
and that it has finite termination with probability 1.
O92.
W. Huyer and A. Neumaier,
Global optimization by multilevel coordinate search,
J. Global Optimization 14 (1999), 331-355.
ps.gz file (146K),
pdf file (361K),
Matlab code (30K),
downloading/printing problems?
Implementation in the
NAG library
(local pdf file, 404K),
Inspired by the DIRECT method by Jones, Perttunen and Stuckman,
we present a global optimization algorithm based on multilevel
coordinate search. It is guaranteed to converge if the
function is continuous in the neighborhood of a global minimizer.
By starting a local search from certain good points, an improved
convergence result is obtained. We discuss implementation details and
give some numerical results.
LN91.
A. Neumaier,
A simple derivation of the Hansen-Bliek-Rohn-Ning-Kearfott
enclosure for linear interval equations,
Reliable Computing 5 (1999), 131-136.
Erratum, Reliable Computing 6 (2000), 227.
dvi.gz file (10K),
Erratum, dvi.gz file,
ps.gz file (69K),
Erratum, ps.gz file,
downloading/printing problems?
Recently, Ning and Kearfott derived a formula for the interval
enclosure of the solution set of linear systems of interval equations
in the case when the coefficient matrix is an H-matrix. The enclosure
is optimal when the midpoint matrix is diagonal, and when the midpoint
is the identity, it reduces to the Hansen-Bliek-Rohn method for
enclosing preconditioned systems.
An elementary proof of this formula is given using only simple
properties of H-matrices and Schur complements.
The new proof gives additional insight into why the theorem is true.
It is also shown how to preserve rigor in the enclosure when finite
precision arithmetic is used.
OPS90.
A. Neumaier, S. Dallwig, W. Huyer and H. Schichl,
New techniques for the construction of residue potentials for protein
folding,
pp. 212-224 in:
Algorithms for Macromolecular Modelling (P. Deuflhard et al., eds.),
Lecture Notes Comput. Sci. Eng. 4, Springer, Berlin 1999.
download
O89.
C.S. Adjiman, S. Dallwig, C.A. Floudas and A. Neumaier,
A global optimization method, alphaBB, for general
twice-differentiable constrained NLPs - I. Theoretical advances,
Computers and Chemical Engineering 22 (1998), 1137-1158.
ps.gz file (193K),
downloading/printing problems?
C.S. Adjiman, I.P. Androulakis and C.A. Floudas,
A global optimization method, alphaBB, for general
twice-differentiable constrained NLPs -
II. Implementation and computational results,
Computers and Chemical Engineering 22 (1998), 1159-1179.
ps.gz file of part II (163K)
containing implementation details and computational studies
A deterministic global optimization method called alphaBB is presented.
It finds the global minimizer of constrained nonlinear programs with
twice-differentiable objective and constraint functions and finite
bounds on the variables. Based on a branch and bound strategy and
convex relaxed subproblems, the minimizer is found with a guarantee,
apart from influences of roundoff errors and the possibility of
overflow of work space if too many subproblems need to be considered.
The convex underestimators are generated optimally for bilinear and
trilinear terms, whereas for general functions, convexity is achieved
by adding a suitable separable quadratic function. The coefficients of
this function are calculated using interval arithmetic; a number of
different schemes for finding these coefficients are discussed and
compared.
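The quadratic shift mentioned above can be sketched in one dimension: the underestimator L(x) = f(x) - alpha*(x - l)*(u - x) lies below f on [l, u] and is convex once 2*alpha >= -min f''. In the paper alpha is obtained rigorously via interval arithmetic on the Hessian; in this illustrative snippet it is simply supplied by hand.

```python
import math

def alpha_underestimator(f, alpha, lo, hi):
    """alphaBB-style convex underestimator in one dimension:
    L(x) = f(x) - alpha*(x - lo)*(hi - x).
    Since (x - lo)*(hi - x) >= 0 on [lo, hi], L <= f there, and
    L'' = f'' + 2*alpha, so L is convex once 2*alpha >= -min f''."""
    return lambda x: f(x) - alpha * (x - lo) * (hi - x)

# cos is nonconvex on [0, 2*pi]; its second derivative -cos is >= -1,
# so alpha = 0.5 already convexifies the underestimator
lo, hi = 0.0, 2 * math.pi
L = alpha_underestimator(math.cos, 0.5, lo, hi)
```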
LNS88.
A. Neumaier,
Solving ill-conditioned and singular linear systems:
A tutorial on regularization,
SIAM Review 40 (1998), 636-666.
dvi.gz file (57K),
ps.gz file (142K),
pdf file (309K),
downloading/printing problems?
It is shown that the basic regularization procedures
for finding meaningful approximate solutions of ill-conditioned or
singular linear systems can be phrased and analyzed in terms of
classical linear algebra that can be taught in any numerical analysis
course. Apart from rewriting many known results in a more elegant form,
we also derive a new two-parameter family of merit functions for the
determination of the regularization parameter. The traditional merit
functions from generalized cross validation (GCV) and generalized
maximum likelihood (GML) are recovered as special cases.
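As a minimal concrete instance of regularization (not the paper's merit-function machinery), the sketch below solves a nearly singular 2x2 system via the Tikhonov normal equations (A^T A + lam*I) x = A^T b, written out by hand; the matrix data and the value of lam are invented for the example.

```python
def solve2(M, r):
    """Cramer's rule for a 2x2 linear system M x = r."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return ((r[0] * M[1][1] - r[1] * M[0][1]) / det,
            (M[0][0] * r[1] - M[1][0] * r[0]) / det)

def tikhonov(A, b, lam):
    """Regularized solution of A x = b via the normal equations
    (A^T A + lam*I) x = A^T b; lam = 0 gives the plain solution."""
    AtA = [[sum(A[k][i] * A[k][j] for k in range(2))
            + (lam if i == j else 0.0)
            for j in range(2)] for i in range(2)]
    Atb = [sum(A[k][i] * b[k] for k in range(2)) for i in range(2)]
    return solve2(AtA, Atb)

A = [[1.0, 1.0],
     [1.0, 1.0001]]        # nearly singular: columns almost parallel
b = [2.0, 2.0002]          # tiny perturbation of the data for x = (1, 1)
x_plain = tikhonov(A, b, 0.0)   # wildly perturbed: close to (0, 2)
x_reg = tikhonov(A, b, 1e-3)    # stays near the meaningful answer (1, 1)
```

A data perturbation of size 1e-4 throws the unregularized solution from (1, 1) to roughly (0, 2), while the regularized solution barely moves.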
O205.
A. Neumaier,
Book review of
Janos D. Pinter, Global Optimization in Action,
J. Global Optimization 12 (1998), 319-321.
html file
NOS87.
A. Neumaier and E. Groeneveld,
Restricted maximum likelihood estimation
of covariances in sparse linear models,
Genet. Sel. Evol. 30 (1998), 3-26.
shortened published version of the following manuscript:
pdf file (246K),
ps.gz file (134K),
software,
downloading/printing problems?
This paper surveys the theoretical and computational
development of the restricted maximum likelihood (REML) approach for
the estimation of covariance matrices in linear stochastic models.
A new derivation of this approach is given, valid under very weak
conditions on the noise.
Then the calculation of the gradient of restricted loglikelihood
functions is discussed, with special emphasis on the case of large
and sparse model equations with a large number of unknown covariance
components and possibly incomplete data. It turns out that the
gradient calculations require hardly any extra storage, and only
a small multiple of the number of operations needed to calculate
the function values alone.
The analytic gradient procedure was integrated into the VCE package for
covariance component estimation in large animal breeding models. It
resulted in dramatic improvements of performance over the previous
implementation with finite difference gradients. An example with
more than 250 000 normal equations and 55 covariance components took
hours instead of days of CPU time, and this was not an atypical case.
NOP85.
A. Neumaier,
Molecular modeling of proteins and mathematical prediction of
protein structure,
SIAM Rev. 39 (1997), 407-460.
download
LN84.
A. Neumaier,
Scaling and structural condition numbers,
Linear Algebra Appl. 263 (1997), 157-165.
dvi.gz file (10K),
ps.gz file (59K),
downloading/printing problems?
We introduce structural condition numbers and discuss
their significance for the proper scaling of nonsymmetric and
symmetric matrices.
O83.
S. Dallwig, A. Neumaier and H. Schichl,
GLOPT - A Program for Constrained Global Optimization,
pp. 19-36 in:
I. Bomze et al., eds.,
Developments in Global Optimization,
Kluwer, Dordrecht 1997.
dvi.gz file (20K, no figures),
ps.gz file (80K),
downloading/printing problems?
GLOPT is a Fortran77 program for global minimization of a
block-separable objective function subject to bound constraints and
block-separable constraints. It finds a nearly globally optimal point
that is near a true local minimizer. Unless there are several local
minimizers that are nearly global, we thus find a good approximation
to the global minimizer.
GLOPT uses a branch and bound technique to split the problem recursively
into subproblems that are either eliminated or reduced in their size.
This is done by an extensive use of the block separable structure of
the optimization problem.
In this paper we discuss a new reduction technique for boxes and
new ways for generating feasible points of constrained nonlinear
programs. These are implemented as the first stage of our GLOPT project.
The current implementation of GLOPT uses neither derivatives nor
simultaneous information about several constraints. Numerical results
are already encouraging. Work on an extension using curvature
information and quadratic programming techniques is in progress.
O82.
A. Neumaier,
NOP - a Compact Input Format for Nonlinear Optimization Problems,
pp. 1-18 in:
I. Bomze et al., eds.,
Developments in Global Optimization,
Kluwer, Dordrecht 1997.
dvi.gz file (22K),
ps.gz file (93K),
downloading/printing problems?
This paper defines a compact format for specifying general
constrained nonlinear optimization problems. The proposed
format is a nonlinear analogue of an explicit representation
of sparse matrices by means of index lists and values of the
corresponding matrix entries. Thus the format abstracts from
the meaning of the problem and hence does not allow names for variables
or parameters, but it explicitly displays the internal structure of
the problem. This is a very useful feature for global or large scale
local optimization.
O80.
A. Neumaier,
Second-order sufficient optimality conditions for local and global
nonlinear programming,
J. Global Optim. 9 (1996), 141-151.
dvi.gz file (18K),
ps.gz file (83K),
downloading/printing problems?
This paper presents a new approach to the sufficient
conditions of nonlinear programming. The main result is a sufficient
condition for the global optimality of a Kuhn-Tucker point. This
condition can be verified constructively, using a novel convexity
test based on interval analysis, and is guaranteed to prove global
optimality of strong local minimizers for sufficiently narrow bounds.
Hence it is expected to be a useful tool within branch and bound
algorithms for global optimization.
NO79/86.
L.C. Kaufman and A. Neumaier,
Image reconstruction through regularization by envelope
guided conjugate gradients
ps.gz file (441K),
pdf file (320K),
downloading/printing problems?
For publication, the paper was split into two parts:
- PET regularization by envelope guided conjugate gradients,
IEEE Trans. Medical Imag. 15 (1996), 385-389.
- Regularization of ill-posed problems by envelope
guided conjugate gradients,
J. Comput. Graph. Stat. 6 (1997), 451-463.
In this paper we propose a new way to iteratively solve large scale
ill-posed problems, and in particular the
image reconstruction problem from noisy images or noisy data
linearly related to the pixel intensities.
This is done by exploiting the relation between Tikhonov regularization
and multiobjective optimization to obtain iteratively approximations
to the Tikhonov L-curve and its corner. Monitoring the change of the
approximate L-curves allows us to adjust the regularization
parameter adaptively during a preconditioned conjugate gradient
iteration, so that the desired image can be reconstructed with a low
number of iterations. Nonnegativity constraints are taken into
account automatically. We present test results on image reconstruction
in positron emission tomography (PET).
LN78.
A. Neumaier and M. Olschowka,
A new pivoting strategy for Gaussian elimination,
Linear Algebra Appl. 240 (1996), 131-151.
pdf file (171K),
ps.gz file (83K),
downloading/printing problems?
This paper discusses a method for determining a good pivoting sequence
for Gaussian elimination, based on an algorithm for solving assignment
problems. The worst case complexity is O(n^3), while in practice
O(n^{2.25}) operations are sufficient.
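The assignment-problem view can be illustrated in miniature: choose a row for each column so that the product of the absolute pivot values is maximal. The sketch below does this by brute-force enumeration (O(n!)), whereas the paper solves the same assignment problem in O(n^3); the function name and matrix are invented for the example.

```python
from itertools import permutations

def best_pivot_rows(A):
    """Assign one row to each column so that the product of the absolute
    diagonal (pivot) entries is maximal.  Brute force over all n!
    assignments, feasible only for tiny matrices."""
    n = len(A)
    best, best_prod = None, -1.0
    for perm in permutations(range(n)):
        prod = 1.0
        for col, row in enumerate(perm):
            prod *= abs(A[row][col])
        if prod > best_prod:
            best, best_prod = list(perm), prod
    return best

A = [[0.0, 2.0, 1.0],
     [3.0, 0.0, 0.0],
     [1.0, 1.0, 4.0]]
perm = best_pivot_rows(A)  # rows to move onto the diagonal before elimination
```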
COS77.
C. Elster and A. Neumaier,
Screening by conference designs,
Biometrika 82 (1995), 589-602.
dvi.gz file (28K),
ps.gz file (92K),
pdf file (215K),
downloading/printing problems?
Screening experiments aim at identifying the relevant variables
within some process or experimental outcome that potentially depends
on a large number of variables.
In this paper we introduce a new class of experimental
designs called edge designs. These designs are very useful for
screening experiments
since they allow a model-independent estimate of the set of relevant
variables, thus providing more robustness than traditional designs.
We give a bound on the determinant of the information matrix of certain
edge designs, and show that a large class of edge designs meeting
this bound can be constructed from conference matrices. We also
show that the resulting conference designs have an optimal space
exploration property which is important as a guard against unexpected
nonlinearities. We survey the existence of and constructions
for conference matrices; for n&lt;50 variables we give such matrices
explicitly when n is a prime, and references to explicit constructions
otherwise.
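One standard construction for the prime case (not necessarily the one used in the paper) is Paley's: for a prime p with p = 1 (mod 4), bordering the quadratic-residue (Jacobsthal) core with ones gives a symmetric conference matrix C of order p+1 with C*C^T = p*I. A minimal sketch, with the function name invented here:

```python
def conference_matrix(p):
    """Paley construction: for a prime p with p % 4 == 1, border the
    quadratic-residue core S[i][j] = chi(i - j) with a row and column
    of ones; the resulting C of order p+1 is symmetric, has a zero
    diagonal, and satisfies C * C^T = p * I."""
    residues = {q * q % p for q in range(1, p)}  # nonzero quadratic residues
    def chi(a):
        a %= p
        return 0 if a == 0 else (1 if a in residues else -1)
    n = p + 1
    C = [[0] * n for _ in range(n)]
    for j in range(1, n):
        C[0][j] = C[j][0] = 1
    for i in range(1, n):
        for j in range(1, n):
            C[i][j] = chi(i - j)
    return C

C = conference_matrix(5)  # order-6 symmetric conference matrix
```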
O81.
C. Elster and A. Neumaier,
A trust region method for the optimization of noisy functions,
Computing 58 (1997), 31-46.
dvi.gz file (21K),
ps.gz file (73K),
pdf file (130K),
downloading/printing problems?
The optimization of noisy or not exactly known functions is a common
problem occurring in various applications, for instance in
experimental optimization.
The traditional tool for the treatment of such problems is
the method of Nelder-Mead (NM). In this paper an alternative
method based on a trust region approach (TR) is offered and compared
to Nelder-Mead. On the standard collection of test functions for
unconstrained
optimization by Moré et al., TR proves substantially more
robust than NM.
If performance is measured by the number of function evaluations,
TR is on the average twice as fast as NM.
O76.
C. Elster and A. Neumaier,
A grid algorithm for bound constrained optimization of noisy functions,
IMA J. Numer. Anal. 15 (1995), 585-608.
dvi.gz file (39K),
ps.gz file (108K),
pdf file (228K),
downloading/printing problems?
The optimization of noisy functions is a common problem occurring in
various applications.
In this paper, a new approach is presented for
low-dimensional bound constrained problems,
based on the use of quadratic models and a restriction
of the evaluation points to successively refined grids. In the noiseless
case,
global convergence of the algorithm to a stationary point is
proved; in the noisy case a bound for the limit accuracy is derived.
An extensive numerical comparison with two widely used
methods - a quasi-Newton method and the simplex method of Nelder and
Mead - performed on a standard collection of test problems, shows
that the new algorithm is comparable with quasi-Newton
in the noiseless case, and is much more robust than Nelder-Mead in the
noisy case. If performance is measured solely by the number of
function evaluations needed to achieve
a prescribed reduction of the difference to the minimal function
value (as for instance in the optimization of experiments),
the new algorithm is also significantly faster than Nelder-Mead.
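The successive-refinement skeleton behind the grid restriction can be sketched in one dimension: evaluate on a uniform grid, keep the best point, halve the grid width around it, and repeat. The paper builds quadratic models on such grids and proves convergence; this sketch, with invented names, shows only the refinement loop.

```python
def grid_minimize(f, lo, hi, levels=6, points=9):
    """Successive grid refinement: evaluate f on a uniform grid centered
    at the current best point, keep the minimizer over the grid, halve
    the grid width, and repeat for a fixed number of levels."""
    width = hi - lo
    x_best = (lo + hi) / 2
    for _ in range(levels):
        grid = [x_best + width * (i / (points - 1) - 0.5)
                for i in range(points)]
        x_best = min(grid, key=f)
        width /= 2
    return x_best

x_opt = grid_minimize(lambda x: (x - 1.3) ** 2, 0.0, 4.0)
```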
C75.
R. Woo and A. Neumaier,
On graphs whose smallest eigenvalue is at least -1-sqrt{2},
Linear Algebra Appl. 226-228 (1995), 577-592.
ps.gz file (87K),
downloading/printing problems?
Main result: If the smallest eigenvalue of a graph H exceeds
a fixed number larger than the smallest root (approx. -2.4812)
of the polynomial x^3+2x^2-2x-2,
and if every vertex of H has sufficiently large valency,
then the smallest eigenvalue of H is at least -1-sqrt{2}
and the structure of H is completely characterized through a
new generalization of line graphs.
NP73.
T. Rage, A. Neumaier and C. Schlier,
Rigorous verification of chaos in a molecular model,
Phys. Rev. E. 50 (1994), 2682-2688.
download
N72.
A. Neumaier,
Global, rigorous and realistic bounds for the solution of
dissipative differential equations. Part I: Theory,
Computing 52 (1994), 315-336.
dvi.gz file (37K),
pdf file (215K),
ps.gz file (115K),
downloading/printing problems?
It is shown how interval analysis can be used to calculate
rigorously valid enclosures of solutions to initial value problems
for ordinary differential equations. In contrast to previously known
methods, the enclosures obtained are valid over larger time intervals,
and for uniformly dissipative systems even globally.
This paper discusses the underlying theory; main tools are
logarithmic norms and differential inequalities.
N70.
A. Neumaier,
The wrapping effect, ellipsoid arithmetic, stability and
confidence regions,
Computing Supplementum 9 (1993), 175-190.
dvi.gz file (55K),
ps.gz file (109K),
pdf file (169K),
downloading/printing problems?
The wrapping effect is one of the main reasons that the application
of interval arithmetic to the enclosure of dynamical systems is
difficult. In this paper the source of wrapping is analyzed
algebraically and geometrically.
A new method for reducing the wrapping effect is proposed,
based on an interval ellipsoid arithmetic.
Applications are given to the verification of stability regions for
nonlinear discrete dynamical systems and to the computation of rigorous
confidence regions for nonlinear functions of normally distributed
random vectors.
N54.
A. Neumaier,
The enclosure of solutions of parameter-dependent systems of
equations.
In: Reliability in Computing
(ed. by R.E. Moore).
Acad. Press, San Diego 1988, pp. 269-286.
pdf file (scanned copy) (2909K)
Unrefereed or unpublished papers
LP000.
A. Neumaier,
Introduction to coherent spaces,
Manuscript (2018).
pdf file (399K),
arXiv:1804.01402
The notion of a coherent space is a nonlinear version of the notion of
a complex Euclidean space: The vector space axioms are dropped while
the notion of inner product is kept.
Coherent spaces provide a setting for the study of geometry in a
different direction than traditional metric, topological, and
differential geometry. Just as it pays to study the properties of
manifolds independently of their embedding into a Euclidean space,
so it appears fruitful to study the properties of coherent spaces
independent of their embedding into a Hilbert space.
Coherent spaces have close relations to reproducing kernel Hilbert
spaces, Fock spaces, and unitary group representations, and to many
other fields of mathematics, statistics, and physics.
This paper is the first of a series of papers and defines concepts
and basic theorems about coherent spaces, associated vector spaces,
and their topology. Later papers in the series discuss symmetries of
coherent spaces, relations to homogeneous spaces, the theory of group
representations, C*-algebras, hypergroups, finite geometry,
and applications to quantum physics. While the applications to quantum
physics were the main motivation for developing the theory, many more
applications exist in complex analysis, group theory, probability
theory, statistics, physics, and engineering.
NO000.
A. Baharev, A. Neumaier, and H. Schichl,
Tearing systems of nonlinear equations II. A practical exact
algorithm,
Manuscript (2016).
pdf file (435K)
The objective of optimal tearing is to maximize the number of variables
eliminated by solving univariate equations. First, a simple algorithm is
presented for automatically identifying feasible assignments: If an
equation can be solved symbolically for one of its variables, and the
solution is unique, explicit, and numerically stable, then, and only
then, this equation-variable pair represents a feasible assignment.
Tearing searches for the optimal elimination order over these
numerically safe feasible assignments. We give a novel integer
programming formulation of optimal tearing. Unfortunately, a large
amount of permutation symmetry can cause performance problems; therefore, we
developed a custom branch and bound algorithm. Based on the performance
tests on the COCONUT Benchmark, we consider the proposed branch and
bound algorithm practical (a) for the purposes of global optimization,
and (b) in cases where systems with the same sparsity patterns are
solved repeatedly and the time spent on tearing pays off.
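The core mechanic of eliminating variables by univariate solves can be sketched greedily: while some equation has exactly one still-unknown variable, accept that (equation, variable) pair as an assignment and mark the variable known. The paper finds an *optimal* order via integer programming and branch and bound; the greedy sketch below, with invented names, illustrates only the elimination itself.

```python
def tear(equations):
    """Greedy elimination pass: repeatedly pick an unsolved equation
    whose set of still-unknown variables has size one, record the
    (equation, variable) assignment, and mark the variable known.
    Returns the elimination order found."""
    known, order, solved = set(), [], set()
    progress = True
    while progress:
        progress = False
        for eq, variables in equations.items():
            unknown = variables - known
            if eq not in solved and len(unknown) == 1:
                v = unknown.pop()
                order.append((eq, v))
                known.add(v)
                solved.add(eq)
                progress = True
    return order

# e1 fixes x; then e2 has only y unknown; then e3 has only z unknown
eqs = {'e1': {'x'}, 'e2': {'x', 'y'}, 'e3': {'y', 'z'}}
order = tear(eqs)
```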
NO000.
A. Baharev, A. Neumaier, and H. Schichl,
Tearing systems of nonlinear equations I. A survey,
Manuscript (2016).
pdf file (566K)
The goal of tearing is to reduce the computation time needed to solve a
given system of equations by exploiting its sparsity pattern: A sequence
of small subproblems is solved according to an ordering. The ultimate
objective of tearing is to find the sequence that results in the most
savings with respect to the solution time including the time to find the
sequence. Since this objective would be too complicated to handle
rigorously, compromises have to be made. The choice of the objective
function is a matter of judgment and highly context dependent. In the
context of nonlinear systems of equations, a popular objective is to
maximize the number of variables eliminated by solving univariate
equations. This is significantly different from the objective of the
so-called fill-reducing orderings. The present paper is the first of two
reports on tearing; here, we review the published exact and heuristic
methods.
NO000.
A. Baharev, A. Neumaier, and H. Schichl,
An exact method for the minimum feedback arc set problem,
Manuscript (2016).
pdf file (594K)
Given a directed graph G, a feedback arc set of G is a subset of
its edges containing at least one edge of every cycle in G. Finding a
feedback arc set of minimum cardinality is the minimum feedback arc set
problem. The present paper focuses on large and sparse graphs. The
minimum set cover formulation of the minimum feedback arc set problem is
practical as long as all the simple cycles in G can be enumerated.
Unfortunately, even sparse graphs can have Omega(2^n) simple cycles,
and such graphs appear in practice. An exact method is proposed that
enumerates simple cycles in a lazy fashion, and extends an incomplete
cycle matrix iteratively in the hope that only a tractable number of
cycles has to be enumerated until a minimum feedback arc set is found.
Numerical results are given on a test set containing large and sparse
test graphs relevant for industrial applications.
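A toy version of the lazy scheme (tiny graph, brute-force hitting set in place of the paper's integer programming machinery) might look like this; all names are invented for the illustration.

```python
from itertools import combinations

def find_cycle(graph, removed):
    """Return the edge list of some directed cycle avoiding the edge set
    `removed`, or None if the remaining graph is acyclic (DFS colors)."""
    color = {v: 0 for v in graph}  # 0 = new, 1 = on stack, 2 = finished
    parent_edge = {}
    def dfs(u):
        color[u] = 1
        for v in graph[u]:
            if (u, v) in removed:
                continue
            if color[v] == 0:
                parent_edge[v] = (u, v)
                cycle = dfs(v)
                if cycle:
                    return cycle
            elif color[v] == 1:  # back edge closes a cycle
                cycle, w = [(u, v)], u
                while w != v:
                    e = parent_edge[w]
                    cycle.append(e)
                    w = e[0]
                return cycle
        color[u] = 2
        return None
    for s in graph:
        if color[s] == 0:
            cycle = dfs(s)
            if cycle:
                return cycle
    return None

def min_feedback_arc_set(graph):
    """Lazy scheme in miniature: grow a list of known cycles (the cycle
    matrix) and re-solve the hitting-set problem by brute force until
    the chosen edge set leaves the graph acyclic."""
    edges = [(u, v) for u in graph for v in graph[u]]
    cycles = []
    while True:
        # smallest edge set hitting every cycle collected so far
        S = next(set(c) for k in range(len(edges) + 1)
                 for c in combinations(edges, k)
                 if all(set(c) & set(cyc) for cyc in cycles))
        cycle = find_cycle(graph, S)
        if cycle is None:
            return S
        cycles.append(cycle)

# two directed cycles 1->2->3->1 and 2->3->2 share the edge (2, 3)
fas = min_feedback_arc_set({1: [2], 2: [3], 3: [1, 2]})
```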
O000.
A. Neumaier,
Phenomenological thermodynamics in a nutshell,
Manuscript (2014).
arXiv:1404.5273
pdf file (172K)
This paper gives a concise, mathematically rigorous description of
phenomenological equilibrium thermodynamics for single-phase systems
in the absence of chemical reactions and external forces.
The present approach is similar to that of Callen, who introduces in his
well-known thermodynamics book the basic concepts by means of a few
postulates from which everything else follows. His setting is modified
to match the more fundamental approach based on statistical mechanics.
Thermodynamic stability is derived from kinematical properties of
states outside equilibrium by rigorous mathematical arguments,
superseding Callen's informal arguments that depend on a dynamical
assumption close to equilibrium.
From the formulas provided, it is an easy step to go to various
examples and applications discussed in standard textbooks such as
Callen or Reichl. A full discussion of global equilibrium would also
involve the equilibrium treatment of multiple phases and chemical
reactions. Since their discussion offers no new aspects compared with
traditional textbook treatments, they are not treated here.
P000.
A. Neumaier and D. Westra,
Classical and Quantum Mechanics via Lie algebras.
Manuscript (2008, 2011)
arXiv:0810.1019
pdf file (3165K)
The goal of this book is to present classical mechanics, quantum
mechanics, and statistical mechanics in an almost completely algebraic
setting, thereby introducing mathematicians, physicists, and
engineers to the ideas relating classical and quantum mechanics with
Lie algebras and Lie groups. The book emphasizes the
closeness of classical and quantum mechanics, and the material is
selected in a way to make this closeness as apparent as possible.
Much of the material covered here is not part of standard
textbook treatments of classical or quantum mechanics (or is only
superficially treated there). For physics students who want to
get a broader view of the subject, this book may therefore serve
as a useful complement to standard treatments of quantum mechanics.
Almost without exception, this book is about precise concepts and
exact results in classical mechanics, quantum mechanics, and
statistical mechanics. The structural properties of
mechanics are discussed independently of computational techniques for
obtaining quantitatively correct numbers from the assumptions made.
The standard approximation machinery for calculating from first
principles explicit thermodynamic properties of materials, or
explicit cross sections for high energy experiments, can be found in
many textbooks and is not repeated here.
P966.
A. Neumaier,
The 7 Basic Rules of Quantum Mechanics,
PhysicsForums Insights (May 2019).
Insight Article
For reference purposes, and to help focus discussions of interpretation
questions on the real issues, there is a need to fix the common
ground.
The basic rules reflect what is almost generally taught as the basics
in quantum physics courses around the world. Often they are stated in
terms of axioms or postulates, but this is not essential for their
practical validity. In some interpretations, some of these rules are
not considered fundamental rules but only valid as empirical or
effective rules for practical purposes.
P967.
A. Neumaier,
Clarifying Common Issues with Indistinguishable Particles,
PhysicsForums Insights (May 2019).
Insight Article
Commonly there is a lot of imprecision in talking about
''indistinguishable'' (or ''identical'') particles, even in serious
work. This Insight article clarifies the issues involved in a
conceptually precise way.
P968.
A. Neumaier,
How to Create a Universe - Instructions for an Apprentice God,
PhysicsForums Insights (March 2019).
Insight Article
A fantasy to be read at leisure.
Let us begin with a universal Turing machine....
P969.
A. Neumaier,
A Classical View of the Qubit,
PhysicsForums Insights (March 2019).
Insight Article
It is commonly said that quantum mechanics originated in 1900 with Max
Planck. It is little known that much earlier, in 1852, at a time when
Planck was not even born, George Stokes described all the modern
quantum phenomena of a single qubit, explaining them in classical
terms.
P970.
A. Neumaier,
Vacuum Fluctuations in Experimental Practice,
PhysicsForums Insights (January 2017).
Insight Article
This article is a sequel to several earlier ones that make
precise what a virtual particle is, what being real means, document
some of the liberties taken in physics textbooks in the use of this
concept, mention the most prominent misuses, and document the origin
of some of the associated myths. In short, the concept of virtual
particles is well-defined and useful when restricted to its use in
Feynman diagrams and associated technical discussions. But it is
highly misleading when used to argue about vacuum fluctuations, as if
these were processes happening in space and time. The latter is a
frequent misunderstanding, a myth that has not the slightest basis in
particle physics.
However, one occasionally meets mythical claims even in the scientific
literature. Therefore we look at a representative recent paper in
which vacuum fluctuations play a seemingly prominent role, and answer
the question: What do vacuum fluctuations look like in practice?
P971.
A. Neumaier,
The Vacuum Fluctuation Myth,
PhysicsForums Insights (2016).
Insight Article
This article explains how the widespread but misleading informal
practice of treating virtual particles - when explaining subatomic
particles to the general public - as real objects popping in and out
of existence for a tiny time could arise.
This is done using the example of Steve Carlip's page on Hawking
radiation, where Carlip, a well-known theoretical physicist working on
quantum gravity, gave a lucid but completely mythical narrative about
how vacuum fluctuations create Hawking radiation.
P972.
A. Neumaier,
Misconceptions about Virtual Particles,
PhysicsForums Insights (2016).
Insight Article
This article goes through a number of Wikipedia pages and comments on
their misleading statements about virtual particles and Feynman
diagrams. In addition, the article discusses some textbooks on
quantum field theory and the extent to which they contain problematic
formulations about virtual particles.
P973.
A. Neumaier,
The Physics of Virtual Particles,
PhysicsForums Insights (2016).
Insight Article
In discussions on the internet (including a number of Wikipedia pages)
and in books and articles for non-experts in particle physics, there
is considerable confusion about various notions around the concept of
particles of subatomic size, and in particular about the notion of a
virtual particle. This is partly due to misunderstandings in the
terminology used, and partly due to the fact that subatomic particles
manifest themselves only indirectly, thus leaving a lot of leeway for
the imagination to equip these invisible particles with properties,
some of which sound very magical.
The aim of this Insight article is a definition of physical terms
essential for an informed discussion of which of these properties have
a solid basis in physics, and which of these are gross misconceptions
or exaggerations that shouldn't be taken seriously.
P974.
A. Neumaier,
Causal Perturbation Theory,
PhysicsForums Insights (2015).
Insight Article
Relativistic quantum field theory is notorious for the occurrence of
divergent expressions that must be renormalized by recipes that on
first sight sound very arbitrary and counterintuitive. This Insight
article shows that it doesn't have to be this way!
P975.
A. Neumaier,
Renormalization without infinities - an elementary tutorial,
Manuscript (2015).
pdf file (385K)
Renormalization is an indispensable tool for modern theoretical
physics. At the same time, it is one of the least appealing techniques,
especially in cases where naive formulations result in divergences
that must be cured - a step that is often done in a mathematically
dubious way.
In this paper, it is shown how the renormalization procedure works
both in singular cases where it removes naive divergences and in
regular cases where a naive approach is possible but renormalization
improves the quality of perturbation theory. In fact, one can see
immediately that the singular situation is simply a limiting case of the
regular situation.
The paper introduces three families of toy examples, defined by special
perturbations of an arbitrary Hamiltonian with a discrete spectrum.
The examples show explicitly many of the renormalization effects
arising in realistic quantum field theories such as quantum
chromodynamics: logarithmic divergences, running couplings,
asymptotic freedom, dimensional transmutation, the renormalization
group, and renormalization scheme dependent results at any order of
perturbation theory.
Unlike in more realistic theories, everything is derived rigorously
and nonperturbatively in terms of simple explicit formulas. Thus one
can understand in detail how the infinities arise (whenever they arise)
- namely as an unphysical, infinitely sensitive dependence on the bare
coupling constants. One also sees that all spurious infinities are
cured automatically by the same renormalization process that gives
robust physical results in the case where no infinities occur.
PS976.
A. Neumaier,
A multi-phase, multi-component critical equation of state,
Manuscript (2013).
arXiv:1307.8391
pdf file (193K)
Realistic equations of state valid in the whole state space of a
multi-component mixture should satisfy at least three important
constraints:
(i) The Gibbs phase rule holds.
(ii) At low densities, one can deduce a virial equation of state with
the correct multicomponent structure.
(iii) Close to critical points, plait points, and consolute points,
the correct universality and scaling behavior is guaranteed.
This paper discusses semiempirical equations of state for mixtures that
express the pressure as an explicit function of temperature and the
chemical potentials. In the first part, expressions are derived for
the most important thermodynamic quantities. The main result of the
second part is the construction of a large family of equations of
state with the properties (i)--(iii).
N977.
A. Neumaier,
Improving interval enclosures,
Manuscript (2011).
pdf file (244K)
This paper serves as background information for the Vienna proposal
for interval standardization, explaining what is needed in practice to
make competent use of the interval arithmetic provided by an
implementation of the standard to be.
Discussed are methods to improve the quality of interval enclosures
of the range of a function over a box,
considerations of possible hardware support facilitating the
implementation of such methods,
and the results of a simple interval challenge that I had posed to
the reliable computing mailing list on November 26, 2008.
Also given is an example of a bound constrained global optimization
problem in 4 variables that has a 2-dimensional continuum of global
minimizers. This makes standard branch and bound codes extremely slow,
and therefore may serve as a useful degenerate test problem.
N978.
S. Das and A. Neumaier,
Regularized low rank approximation of weighted data sets,
Manuscript (2011).
pdf file (1193K)
In this paper we propose a fast and accurate method for computing a
regularized low rank approximation of a weighted data set.
Weighted data sets arise whenever the data points are collected with
different levels of accuracy, or from different sources. For example,
consider the case where each data point is obtained as the sample mean
of a certain number of observations, and the number of observations
per data point is unequal.
Even data sets with missing data points are a special case of weighted
data sets.
Unlike in the non-weighted case, the optimization problem posed to
obtain a low rank approximation of weighted data has local minima.
To alleviate the problem of local minima, and consequently to obtain
a meaningful solution, we use a priori information about the data set.
Thereby we obtain a regularized solution. In particular, we use the a
priori information that the low rank approximants are smooth.
We use an iterative method to estimate left and right approximants.
Within each iteration we estimate the right (left) approximants by
posing a constrained weighted least squares problem. Unfortunately
the variables of the quadratic optimization problem are not separable.
We exploit the sparsity of the associated matrices to design a tractable
algorithm.
The proposed method has a potential use in various applied problems,
e.g., 3D image reconstruction, background correction in 2D
chromatography, background correction of astronomical images.
We illustrate the proposed method by reconstructing background of an
astronomical image observed in the optical wavelength range.
The algorithm is fast, i.e. the number of flops per iteration
is of the order of the number of data points, and the memory
requirement is negligible compared with the input data storage.
We provide extensive numerical results to highlight different features
of the method.
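The alternating iteration at the core of such methods can be sketched as follows. This is a minimal, unregularized variant for illustration only; the paper's actual algorithm adds the smoothness regularization and sparsity exploitation described above, and all names here are my own:

```python
import numpy as np

def weighted_low_rank(X, W, r, iters=50, seed=0):
    """Rank-r factors A (m x r), B (r x n) approximately minimizing
    sum(W * (X - A @ B)**2) by alternating weighted least squares.
    Illustrative sketch: no smoothness regularization and no sparsity
    exploitation, unlike the method of the paper."""
    m, n = X.shape
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((m, r))
    B = rng.standard_normal((r, n))
    ridge = 1e-12 * np.eye(r)  # tiny ridge keeps the solves well-posed
    for _ in range(iters):
        for j in range(n):   # each column of B has its own weights
            w = W[:, j]
            G = A.T @ (w[:, None] * A) + ridge
            B[:, j] = np.linalg.solve(G, A.T @ (w * X[:, j]))
        for i in range(m):   # each row of A has its own weights
            w = W[i, :]
            G = B @ (w[:, None] * B.T) + ridge
            A[i, :] = np.linalg.solve(G, B @ (w * X[i, :]))
    return A, B
```

Because the weights differ per column and per row, the least squares subproblems do not separate into one global solve, which is exactly why sparsity must be exploited in a serious implementation.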
F979.
A. Neumaier and P. Schodl,
A Semantic Turing Machine,
Manuscript (2011).
pdf file (396K)
A semantic Turing machine (STM) is a variant of a programable register
machine that combines the transparency and simplicity of the action of
a Turing machine with a clearly arranged assembler-style programming
language and a user-friendly representation of semantic information.
This paper describes the concept of the STM, its memory management and
flow control, and shows how a semantic Turing machine can simulate any
ordinary Turing machine.
Analogous to a universal Turing machine, we give a universal semantic
Turing machine (USTM), which is a special STM that can simulate every
STM. The USTM serves both as a self-contained semantic explanation of
many aspects of the STM, and as a check that an STM implementation
works correctly.
Three appendices give the grammar of the STM programming language,
tables of character codes, and the essential parts of a MATLAB
implementation of the STM.
S980.
M. Fuchs and A. Neumaier,
Optimization in latent class analysis,
Manuscript (2011).
pdf file (201K)
In latent class analysis (LCA) one seeks a clustering of categorical
data, such as patterns of symptoms of a patient, in terms of locally
independent stochastic models. This leads to practical definitions of
criteria, e.g., whether to include patients in further diagnostic
examinations. The clustering is often determined by parameters that
are estimated by the maximum likelihood method. The likelihood function
in LCA has in many cases - especially for sparse data sets - a
complicated shape with many local extrema, even for small-scale
problems. Hence a global optimization must be attempted.
This paper describes an algorithm implemented to optimize the
likelihood function constrained by the requirement of a good fit of
the data with a minimal number of classes. The problem is formulated
in the algebraic modeling language AMPL and solved via state-of-the-art
optimization solvers.
The approach is successfully applied to three real-life problems.
Remarkably, the goodness-of-fit constraint makes one of the three
problems identifiable by eliminating all but one of the local
minimizers.
F981.
A. Neumaier,
The FMathL mathematical framework,
Manuscript (2009).
pdf file (497K)
Several frameworks for mathematics have been constructed in the
literature. To avoid the paradoxes of naive set theory, Zermelo
and Fraenkel constructed a type-free system of sets, based on first
order predicate logic, while von Neumann, Bernays and Goedel
constructed a system with two types, sets and classes. These
systems suffice for the common needs of a working mathematician.
But they are inconvenient to use without pervasive abuse of language
and notation, which makes human-machine communication of mathematics
difficult.
In this document, we construct a new framework for mathematics,
designed as part of the specification system FMathL for the formalized
communication of mathematics between humans and machines, in a way
close to the actual practice of mathematics. The framework is
described in such a way that this description can become part of the
FMathL system itself.
All axioms are given in the form of familiar existence requirements
and properties that are used by mathematicians on an everyday basis.
Thus mathematicians trusting the axiom of choice and hence the law
of excluded middle can readily convince themselves of their truth
in their private (but publicly educated) view of mathematical concepts.
The exposition is such that students exposed to the typical
introductory mathematics courses should have no serious difficulty
understanding the material.
N982.
A. Neumaier,
Computer graphics, linear interpolation, and nonstandard intervals,
Manuscript (2009).
pdf file (340K)
This document is an assessment of the value of optimal linear
interpolation enclosures and of nonstandard intervals,
especially with respect to applications in computer graphics, and of
the extent a future IEEE interval standard should support these.
It turns out that essentially all present applications of nonstandard
intervals to practical problems can be matched by similarly efficient
approaches based on standard intervals only. On the other hand,
a number of applications were inspired by the use of nonstandard
arithmetic.
This suggests requiring minimal support for nonstandard intervals,
allowing implementations of nonstandard interval arithmetic
to be compatible with the standard, while full support, which would
make one of the conflicting variants mandatory, seems inappropriate.
P983.
L. Pal, T. Csendes, M.C. Markot and A. Neumaier,
BBOB-benchmarking of the GLOBAL method for the noisy function testbed,
pdf file (880K)
downloading/printing problems?
GLOBAL is a multistart type stochastic method for bound
constrained global optimization problems. Its goal is to find
all the local minima that are potentially global. For this
reason it involves a combination of sampling, clustering, and
local search. We report its results on the noisy problems
given.
P984.
W. Huyer and A. Neumaier,
Benchmarking of MCS on the Noiseless Function Testbed,
Manuscript (2009).
pdf file (6815K)
Benchmarking results with the MCS algorithm for bound-constrained
global optimization on the noiseless BBOB 2009 testbed are described.
P985.
W. Huyer and A. Neumaier,
Benchmarking of SNOBFIT on the Noisy Function Testbed,
Manuscript (2009).
pdf file (7373K)
Benchmarking results with the SNOBFIT algorithm for noisy
bound-constrained global optimization on the noisy BBOB
2009 testbed are described.
P986.
W. Huyer and A. Neumaier,
Benchmarking of MCS on the Noisy Function Testbed,
Manuscript (2009).
pdf file (7582K)
Benchmarking results with the MCS algorithm for bound-constrained
global optimization on the noisy BBOB 2009 testbed are described.
P987.
W. Huyer and A. Neumaier,
Benchmarking of MCS on the Noiseless Function Testbed,
Manuscript (2009).
pdf file (6815K)
Benchmarking results with the MCS algorithm for bound-constrained
global optimization on the noiseless BBOB 2009 testbed are described.
N988.
A. Neumaier,
Vienna proposal for interval standardization,
Manuscript (2008).
pdf file (179K)
This document contains a detailed proposal for a future IEEE 1788
standard on interval arithmetic. It is written in a form that should
not be too difficult to transform into a formal, complete and fully
precise document specifying the standard to be.
Part 1 contains a concise summary of the basic assumptions (some of
which may be controversial) upon which the remaining document is
based. The items in Part 1 are grouped such that separate voting on
each issue is meaningful.
Parts 2-5 specify the internal representation of intervals and the
operations defined on them. Part 6 specifies the external
representation of intervals and the conversion between internal and
external representations.
Part 7 is optional and discusses useful modifications of the directed
rounding behavior specified in the IEEE 754-2008 standard that would
simplify an implementation of the proposed standard.
This final version incorporates many suggestions and corrections by the
people mentioned above. In particular, comprehensive discussions with
Michel Hack, who read and commented in detail many intermediate
versions, had a strong influence on this proposal.
P989.
A. Neumaier,
A simple hidden variable experiment,
arXiv:0706.0155
ps.gz file (170K),
pdf file (97K)
An experiment is described which proves, using single photons only,
that the standard hidden variables assumptions (commonly used to derive
Bell inequalities) are inconsistent with quantum mechanics.
The analysis is very simple and transparent.
In particular, it demonstrates that a classical wave model
for quantum mechanics is not ruled out by experiments demonstrating the
violation of the traditional hidden variable assumptions.
P990.
A. Neumaier,
On the foundations of thermodynamics,
arXiv:0705.3790
ps.gz file (324K),
pdf file (587K)
On the basis of new, concise foundations, this paper establishes
the four laws of thermodynamics, the Maxwell relations, and the
stability requirements for response functions, in a form applicable to
global (homogeneous), local (hydrodynamic) and microlocal (kinetic)
equilibrium.
The present, self-contained treatment needs very little formal
machinery and stays very close to the formulas as they are applied
by the practicing physicist, chemist, or engineer.
From a few basic assumptions, the full structure of phenomenological
thermodynamics and of classical and quantum
statistical mechanics is recovered.
Care has been taken to keep the foundations free of subjective aspects
(which traditionally creep in through information or probability).
One might describe the paper as a uniform treatment of the nondynamical
part of classical and quantum statistical mechanics
``without statistics'' (i.e., suitable for the definite descriptions
of single objects) and
``without mechanics'' (i.e., independent of microscopic assumptions).
When enriched by the traditional examples and applications, this paper
may serve as the basis for a course on thermal physics.
P991.
A. Neumaier, M. Fuchs, E. Dolejsi, T. Csendes,
J. Dombi, B. Bánhelyi, Z. Gera,
Application of clouds for modeling uncertainties in robust space system
design, Final Report,
ARIADNA Study 05/5201, European Space Agency (ESA), 2007.
pdf file (587K)
Ariadna Completed Studies
This report discusses a case study for modeling uncertainties in a
Matlab model for space system design, with the goal of determining
robust feasible designs for the model.
The problem structure involves all difficulties an optimization problem
can have: discrete variables, strong nonlinearities, discontinuities
due to branching decisions, multiple objectives, multiple local minima.
The formal modeling of uncertainty by means of clouds, and
the robust optimization of a resulting uncertain model was considered.
A heuristic approach using surrogate function modeling,
corner searches, and discrete line searches was used to solve the
robust optimization problem in case of interval uncertainties.
This approach works satisfactorily for the model problem for the case
of interval uncertainty.
Solving the model problem with uncertainty revealed significant
robustness advantages of the approach using uncertainty. The influence
of assumed knowledge about additional uncertainty information was
illustrated for a few model choices.
P993.
A. Neumaier,
Collapse challenge for interpretations of quantum mechanics,
Manuscript (2005). quant-ph/0505172
dvi.gz file (7K),
ps.gz file (61K),
pdf file (62K)
NO994.
A. Neumaier,
Constraint propagation for univariate quadratic constraints,
Manuscript (January 5, 2005).
dvi.gz file (6K),
ps.gz file (67K),
pdf file (58K),
We present formulas for rigorous constraint propagation of
quadratic equality or inequality constraints involving a single
nonlinear variable. Since the analysis is very elementary,
probably everything in here has been known for a long time.
The present approach, based on directed rounding only, provides
efficient alternatives to the procedures discussed by
Hansen and Walster, which employ interval arithmetic.
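In exact arithmetic the underlying idea reduces to intersecting an interval with the solution set of a quadratic. The following case analysis is my own illustration and omits the directed rounding that makes the paper's formulas rigorous:

```python
import math

def propagate_quadratic(a, b, c, lo, hi):
    """Return the interval hull of {x in [lo, hi] : a*x**2 + b*x + c <= 0},
    or None if no point of [lo, hi] satisfies the constraint.
    Exact-arithmetic sketch; a rigorous version would replace each
    operation by a suitably directed-rounded one."""
    if lo > hi:
        return None
    if a == 0:  # linear constraint b*x + c <= 0
        if b > 0:
            hi = min(hi, -c / b)
        elif b < 0:
            lo = max(lo, -c / b)
        elif c > 0:
            return None  # constraint c <= 0 violated everywhere
        return (lo, hi) if lo <= hi else None
    disc = b * b - 4 * a * c
    if a > 0:  # feasible set is the interval between the two roots
        if disc < 0:
            return None  # parabola opens upward and stays positive
        r = math.sqrt(disc)
        lo = max(lo, (-b - r) / (2 * a))
        hi = min(hi, (-b + r) / (2 * a))
        return (lo, hi) if lo <= hi else None
    # a < 0: feasible set is (-inf, x1] union [x2, +inf) with x1 <= x2
    if disc < 0:
        return (lo, hi)  # opens downward with no real roots: always satisfied
    r = math.sqrt(disc)
    x1 = (-b + r) / (2 * a)  # note a < 0, so x1 <= x2
    x2 = (-b - r) / (2 * a)
    left_ok = lo <= x1    # the piece [lo, min(hi, x1)] is nonempty
    right_ok = x2 <= hi   # the piece [max(lo, x2), hi] is nonempty
    if left_ok and right_ok:
        return (lo, hi)   # hull of both pieces spans the gap
    if left_ok:
        return (lo, min(hi, x1))
    if right_ok:
        return (max(lo, x2), hi)
    return None
```

The concave case (a < 0) shows why returning a hull can lose information: the true feasible set may consist of two disjoint pieces.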
S995.
A. Neumaier,
On the structure of clouds,
Manuscript (2003).
dvi.gz file (15K),
ps.gz file (60K),
pdf file (156K)
Clouds, recently introduced by the author, give - within the
traditional probabilistic framework - a concept for imprecise
probability, with which quantitative conclusions can be derived
from uncertain probabilistic information. A cloud is to a random
variable more or less what an interval is to a number.
In this paper, some structural results about clouds are derived.
In particular, it is shown that every cloud contains some random
variable.
O996.
C. Bliek, P. Spellucci, L.N. Vicente, A. Neumaier,
L. Granvilliers, E. Monfroy, F. Benhamou, E. Huens, P. Van Hentenryck,
D. Sam-Haroud and B. Faltings,
Algorithms for Solving Nonlinear Constrained and Optimization Problems:
The State of the Art.
A progress report of the COCONUT project, 2001.
html file of abstract,
with links to the 222 page report in ps and pdf.
The goal of this document is to summarize the state of the art of
algorithms for solving nonlinear constrained and optimization problems.
PS997.
A. Neumaier, W. Huyer and E. Bornberg-Bauer,
Hydrophobicity Analysis of Amino Acids,
WWW-Document (1999).
html file,
Based on a principal component analysis of 47 published attempts to
quantify hydrophobicity in terms of a single scale,
we define a representation of the 20 amino acids as
points in a 3-dimensional hydrophobicity space and display it by means
of a minimal spanning tree. The dominant scale is found to be close to
two scales derived from contact potentials.
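The kind of computation involved can be sketched as follows; the matrix shapes and the standardization of each scale are my assumptions, not taken from the paper:

```python
import numpy as np

def pca_embed(X, k=3):
    """Embed items (rows of X) into k dimensions by PCA.
    X: (n_items, n_scales), e.g. 20 amino acids x 47 hydrophobicity
    scales; each scale is standardized so all scales get equal weight."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U[:, :k] * s[:k]   # principal component scores

def minimum_spanning_tree(P):
    """Prim's algorithm on pairwise Euclidean distances of the rows
    of P; returns a list of (i, j) tree edges."""
    n = len(P)
    D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=2)
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        i, j = min(((i, j) for i in in_tree for j in range(n)
                    if j not in in_tree), key=lambda e: D[e])
        edges.append((i, j))
        in_tree.add(j)
    return edges
```

The minimal spanning tree then links each embedded point to its structurally nearest neighbors, giving a compact display of the 3-dimensional hydrophobicity space.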
O998.
A. Neumaier,
On convergence and restart conditions for a nonlinear
conjugate gradient method,
Manuscript (1997).
dvi.gz file (15K),
ps.gz file (71K),
A global convergence theorem for unconstrained minimization
algorithms with an efficient line search is derived. The theorem
applies to a new version of the conjugate gradient method derived here
in terms of minimizing the effect of zigzagging.
The global convergence condition makes much weaker
demands on the line search than previous methods.
Local Q-linear convergence in a neighborhood of a strong local
minimizer follows under additional conditions that define natural
restart conditions based on line search accuracy and loss of
conjugacy of successive gradients.
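The flavor of such a method can be illustrated by a textbook Polak-Ribiere variant with a crude restart test. This sketch is my own; it does not reproduce the paper's line search conditions or its derived restart criteria:

```python
import numpy as np

def cg_minimize(f, grad, x0, tol=1e-8, max_iter=1000):
    """Nonlinear conjugate gradient (Polak-Ribiere+) with backtracking
    Armijo line search and a simple restart heuristic. Illustrative
    only; the paper derives weaker, more refined conditions."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:       # not a descent direction: restart
            d = -g
        t = 1.0              # backtracking Armijo line search
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))  # PR+ formula
        if abs(g_new @ g) > 0.2 * (g_new @ g_new):
            beta = 0.0       # conjugacy lost: restart with steepest descent
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x
```

The test `|g_new.g| > 0.2 |g_new|^2` is one common way to detect loss of conjugacy of successive gradients; zigzagging shows up as search directions that fail it repeatedly.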
LS999.
B. Graf and A. Neumaier,
A greedy ball algorithm for classification,
Manuscript (1997).
ps.gz file (132K),
We propose a classification scheme that generates a set of balls
separating points belonging to different point sets in
n-dimensional space such that all points of a given
training set are classified correctly. The method is very fast in
training. Applications discussed include breast cancer prognosis and
character recognition.
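One possible greedy scheme of this flavor is sketched below; it is my own minimal illustration, and the paper's actual ball-selection rule may differ:

```python
import numpy as np

def fit_balls(X, y, shrink=0.99):
    """Greedily cover the training points with class-pure balls.
    Each ball is centered at a still-uncovered point and gets the
    largest radius (times a safety factor) that keeps every point of
    the other classes outside. Assumes at least two classes.
    Sketch only; not the paper's exact algorithm."""
    balls = []
    covered = np.zeros(len(X), dtype=bool)
    while not covered.all():
        i = int(np.flatnonzero(~covered)[0])
        c, label = X[i], y[i]
        other = X[y != label]
        r = shrink * float(np.min(np.linalg.norm(other - c, axis=1)))
        inside = np.linalg.norm(X - c, axis=1) <= r
        covered |= inside & (y == label)
        covered[i] = True
        balls.append((c, r, label))
    return balls

def classify(balls, x):
    """Label of the ball whose surface the point is deepest inside
    (or, if outside all balls, closest to)."""
    scores = [np.linalg.norm(x - c) - r for c, r, _ in balls]
    return balls[int(np.argmin(scores))][2]
```

By construction every training point lies inside a ball of its own class and outside all balls of other classes, so the training set is classified correctly, and training needs only a few distance computations per ball.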
I do not send out paper copies of my manuscripts;
but here are some hints of how to make your own copy.
To uncompress the files obtained you need the GNU program gunzip.
Some browsers seem to save *.dvi.gz files as *.dvi, so that one thinks
one has a dvi-file while one actually may still have to do a gunzip.
And some other browsers appear to do an automatic gunzip, while
leaving the file name with the suffix .gz! In both cases, the file
needs to be renamed appropriately for further processing.
See The gzip homepage
or
Compression for gunzip,
dvi.exe or
TeX Facilities or
LaTeX Home Page for a dvi-viewer,
Ghostscript for the popular free postscript viewer, and
Help for printing PostScript Files.
If you still have difficulties obtaining or printing one of the papers
above, please
tell me
details about the difficulties you encountered.
my
home page (http://arnold-neumaier.at)
Arnold Neumaier (Arnold.Neumaier@univie.ac.at)