## About

122 Publications · 15,222 Reads · 4,658 Citations

Additional affiliations

- September 2008 - present
- August 2001 - August 2004

## Publications

Publications (122)

We consider the solution of finite-sum minimization problems, such as those appearing in nonlinear least-squares or general empirical risk minimization problems. We are motivated by problems in which the summand functions are computationally expensive and evaluating all summands on every iteration of an optimization method may be undesirable. We pr...
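The subsampling idea this abstract alludes to can be sketched with a toy quadratic finite sum. Everything below (the summands, batch size, and step size) is an illustrative assumption, not the paper's method: the point is only that one iteration evaluates a small batch of summand gradients rather than all of them.

```python
import numpy as np

N = 100  # number of summands in f(x) = (1/N) * sum_i f_i(x)

def grad_f_i(x, i):
    # Toy summand: f_i(x) = 0.5*||x - i/N||^2, so its gradient is x - i/N.
    return x - i / N

rng = np.random.default_rng(0)
x = np.ones(3)
batch = rng.choice(N, size=10, replace=False)  # evaluate only 10 of the 100 summands
g = np.mean([grad_f_i(x, i) for i in batch], axis=0)
x_new = x - 0.5 * g  # gradient step using the subsampled estimate
```

With expensive summands, the savings of evaluating 10 rather than 100 per iteration is the whole motivation; the cost is that `g` is only an estimate of the full gradient.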

We develop and solve a constrained optimization model to identify an integrable optics rapid-cycling synchrotron lattice design that performs well in several capacities. Our model encodes the design criteria into 78 linear and nonlinear constraints, as well as a single nonsmooth objective, where the objective and some constraints are defined from t...

We explore novel approaches for solving nonlinear optimization problems with unrelaxable bound constraints, which must be satisfied before the objective function can be evaluated. Our method reformulates the unrelaxable bound-constrained problem as an unconstrained optimization problem that is amenable to existing unconstrained optimization methods...

We consider unconstrained stochastic optimization problems with no available gradient information. Such problems arise in settings from derivative-free simulation optimization to reinforcement learning. We propose an adaptive sampling quasi-Newton method where we estimate the gradients of a stochastic function using finite differences within a comm...
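The core primitive here, estimating gradients of a stochastic function by finite differences, can be sketched as follows. This is a simplified illustration that averages central differences; it omits the common-random-number and adaptive-sampling machinery the abstract describes, and the noise model and sample counts are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_f(x, sigma=1e-3):
    # Stochastic oracle: a quadratic plus additive Gaussian noise.
    return np.sum(x ** 2) + sigma * rng.standard_normal()

def fd_gradient(f, x, h=1e-2, samples=30):
    g = np.zeros_like(x)
    for k in range(len(x)):
        e = np.zeros_like(x)
        e[k] = h
        # Average several central differences to damp the stochastic noise.
        diffs = [(f(x + e) - f(x - e)) / (2 * h) for _ in range(samples)]
        g[k] = np.mean(diffs)
    return g

x = np.array([1.0, -2.0])
g = fd_gradient(noisy_f, x)  # true gradient of sum(x^2) is 2*x
```

An adaptive method would choose `samples` per iteration based on estimated noise rather than fixing it in advance.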

We study the problem of minimizing a convex function on a nonempty, finite subset of the integer lattice when the function cannot be evaluated at noninteger points. We propose a new underestimator that does not require access to (sub)gradients of the objective; such information is unavailable when the objective is a blackbox function. Rather, our u...

Almost all applications stop scaling at some point; those that don't are seldom performant when considering time to solution on anything but aspirational/unicorn resources. Recognizing these tradeoffs as well as greater user functionality in a near-term exascale computing era, we present libEnsemble, a library aimed at particular scalability- and ca...

We propose a novel Bayesian method to solve the maximization of a time-dependent expensive-to-evaluate stochastic oracle. We are interested in the decision that maximizes the oracle at a finite time horizon, given a limited budget of noisy evaluations of the oracle that can be performed before the horizon. Our recursive two-step lookahead acquisiti...

A microscopic description of the interaction of atomic nuclei with external electroweak probes is required for elucidating aspects of short-range nuclear dynamics and for the correct interpretation of neutrino oscillation experiments. Nuclear quantum Monte Carlo methods infer the nuclear electroweak response functions from their Laplace transforms....

We address the calibration of a computationally expensive nuclear physics model for which derivative information with respect to the fit parameters is not readily available. Of particular interest is the performance of optimization-based training algorithms when dozens, rather than millions or more, of training data are available and when the expen...

We propose a novel Bayesian method to solve the maximization of a time-dependent expensive-to-evaluate oracle. We are interested in the decision that maximizes the oracle at a finite time horizon, when relatively few noisy evaluations can be performed before the horizon. Our recursive, two-step lookahead expected pay-off (r2LEY) acquisition functio...

Robust optimization (RO) has attracted much attention from the optimization community over the past decade. RO is dedicated to solving optimization problems subject to uncertainty: design constraints must be satisfied for all the values of the uncertain parameters within a given uncertainty set. Uncertainty sets may be modeled as deterministic sets...

Local Fourier analysis is a useful tool for predicting and analyzing the performance of many efficient algorithms for the solution of discretized PDEs, such as multigrid and domain decomposition methods. The crucial aspect of local Fourier analysis is that it can be used to minimize an estimate of the spectral radius of a stationary iteration, or t...

As x-ray microscopy is pushed into the nanoscale with the advent of more bright and coherent x-ray sources, associated improvement in spatial resolution becomes highly vulnerable to geometrical errors and uncertainties during data collection. We address a form of error in tomography experiments, namely, the drift between projections during the tomo...

Supervised learning is a promising approach for modeling the performance of applications running on large HPC systems. A key assumption in supervised learning is that the training and testing data are obtained under the same conditions. However, in production HPC systems these conditions might not hold because the conditions of the platform can cha...

In many optimization problems arising from scientific, engineering and artificial intelligence applications, objective and constraint functions are available only as the output of a black-box or simulation oracle that does not provide derivative information. Such settings necessitate the use of methods for derivative-free, or zeroth-order, optimiza...

We study the problem of minimizing a convex function on the integer lattice when the function cannot be evaluated at noninteger points. We propose a new underestimator that does not require access to (sub)gradients of the objective but, rather, uses secant linear functions that interpolate the objective function at previously evaluated points. Thes...
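A one-dimensional illustration of the secant idea (an assumption for exposition, not the paper's construction): for a convex f, the secant line through two evaluated points (a, f(a)) and (b, f(b)) lies on or below f everywhere outside the interval [a, b], so it yields lower bounds without any (sub)gradient information.

```python
def f(x):
    return (x - 2) ** 2  # convex toy objective on the integers

a, b = 0, 1  # two previously evaluated integer points
slope = (f(b) - f(a)) / (b - a)

def secant(x):
    # Linear function interpolating f at a and b.
    return f(a) + slope * (x - a)

# By convexity, the secant under-estimates f at integer points outside [a, b].
bounds = [(x, secant(x), f(x)) for x in range(2, 6)]
```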

Tomography can be used to reveal internal properties of a 3D object using any penetrating wave. Advanced tomographic imaging techniques, however, are vulnerable to both systematic and random errors associated with the experimental conditions, which are often beyond the capabilities of the state-of-the-art reconstruction techniques such as regulariz...

We develop an algorithm for minimax problems that arise in robust optimization in the absence of objective function derivatives. The algorithm utilizes an extension of methods for inexact outer approximation in sampling a potentially infinite-cardinality uncertainty set. Clarke stationarity of the algorithm output is established alongside desirable...

X-ray ptychography is becoming the standard method for sub-30 nm imaging of thick extended samples. Available algorithms and computing power have traditionally restricted sample reconstruction to 2D slices. We build on recent progress in optimization algorithms and high-performance computing to solve the ptychographic phase retrieval problem direct...

We adapt a manifold sampling algorithm for the nonsmooth, nonconvex formulations of learning that arise when imposing robustness to outliers present in the training data. We demonstrate the approach on objectives based on trimmed loss. Empirical results show that the method has favorable scaling properties. Although savings in time come at the expe...

Mixed-integer derivative-free optimization.

With ever-increasing execution scale of parallel scientific simulations, potential unnoticed corruptions to scientific data during simulation make users more suspicious about the correctness of floating-point calculations than ever before. In this paper, we analyze the issue of the trust in results of numerical simulations and scientific data analy...

We propose and analyze an asynchronously parallel optimization algorithm for finding multiple, high-quality minima of nonlinear optimization problems. Our multistart algorithm considers all previously evaluated points when determining where to start or continue a local optimization run. Theoretical results show that when there are finitely many min...

We develop a manifold sampling algorithm for the minimization of a nonsmooth composite function f ≜ Ψ + h ∘ F when Ψ is smooth with known derivatives, h is a known, nonsmooth, piecewise linear function, and F is smooth but expensive to evaluate. The trust-region algorithm classifies points in the domain of h as belonging to different manifolds and u...
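The composite structure f ≜ Ψ + h ∘ F can be made concrete with a minimal sketch. The choice of h as the ℓ1-norm and the toy Ψ and F below are illustrative assumptions; the "manifold" a point belongs to is identified here by the sign pattern of F(x), on which the piecewise-linear h is linear.

```python
import numpy as np

def psi(x):   # smooth part with known derivatives
    return 0.5 * np.dot(x, x)

def F(x):     # smooth but (in applications) expensive mapping
    return np.array([x[0] - 1.0, x[1] + 2.0])

def h(z):     # known, nonsmooth, piecewise-linear function (l1-norm here)
    return np.sum(np.abs(z))

x = np.array([0.5, 0.5])
manifold = tuple(np.sign(F(x)).astype(int))  # sign pattern: h is linear on it
f_val = psi(x) + h(F(x))
```

Knowing which manifold the current iterate sits on lets a trust-region method build a locally smooth model of f despite the nonsmoothness of h.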

A growing disparity between supercomputer computation speeds and I/O rates makes it increasingly infeasible for applications to save all results for offline analysis. Instead, applications must analyze and reduce data online so as to output only those results needed to answer target scientific question(s). This change in focus complicates applicati...

X-ray fluorescence tomography is based on the detection of fluorescence x-ray photons produced following x-ray absorption while a specimen is rotated; it provides information on the 3D distribution of selected elements within a sample. One limitation in the quality of sample recovery is the separation of elemental signals due to the finite energy r...

This paper demonstrates a new process that has been specifically designed for the support of the U.S. Department of Transportation’s (DOT’s) Corporate Average Fuel Economy (CAFE) standards. In developing the standards, DOT’s National Highway Traffic Safety Administration made use of the CAFE Compliance and Effects Modeling System (the “Volpe model”...

Although uncertainty quantification has been making its way into nuclear theory, these methods have yet to be explored in the context of reaction theory. For example, it is well known that different parameterizations of the optical potential can result in different cross sections, but these differences have not been systematically studied and quant...

We propose a new approach to robustly retrieve the exit wave of an extended sample from its coherent diffraction pattern by exploiting sparsity of the sample's edges. This approach enables imaging of an extended sample with a single view, without ptychography. We introduce nonlinear optimization methods that promote sparsity, and we derive update r...

Energy and power consumption are major limitations to continued scaling of computing systems. Inexactness, where the quality of the solution can be traded for energy savings, has been proposed as an approach to overcoming those limitations. In the past, however, inexactness required highly customized or specialized hardware. The cu...

Scientific user facilities — particle accelerators, telescopes, colliders, supercomputers, light sources, sequencing facilities, and more — operated by the U.S. Department of Energy (DOE) Office of Science (SC) generate ever increasing volumes of data at unprecedented rates from experiments, observations, and simulations. At the same time there is...

This paper presents CONORBIT (CONstrained Optimization by Radial Basis function Interpolation in Trust regions), a derivative-free algorithm for constrained black-box optimization where the objective and constraint functions are computationally expensive. CONORBIT employs a trust-region framework that uses interpolating radial basis function (RBF)...

In recent years, automatic data-driven modeling with machine learning (ML) has received considerable attention as an alternative to analytical modeling for many modeling tasks. While ad hoc adoption of ML approaches has obtained success, the real potential for automation in data-driven modeling has yet to be achieved. We propose AutoMOMML, an end-t...

A system of two or more quantum dots interacting with a dissipative plasmonic nanostructure is investigated in detail by using a cavity quantum electrodynamics approach with a model Hamiltonian. We focus on determining and understanding system configurations that generate multiple bipartite quantum entanglements between the occupation states of the...

An augmented Lagrangian (AL) can convert a constrained optimization problem into a sequence of simpler (e.g., unconstrained) problems, which are then usually solved with local solvers. Recently, surrogate-based Bayesian optimization (BO) sub-solvers have been successfully deployed in the AL framework for a more global search in the presence of ineq...
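The augmented Lagrangian conversion the abstract refers to can be sketched on a toy problem: minimize f(x) = x² subject to c(x) = x − 1 = 0 via a sequence of unconstrained subproblems L(x) = f(x) + λ·c(x) + (ρ/2)·c(x)². The problem, the fixed penalty ρ, and the closed-form inner solve are assumptions for illustration; in practice the inner problem goes to a local or Bayesian sub-solver.

```python
lam, rho = 0.0, 10.0
for _ in range(20):
    # The inner subproblem is quadratic here, so its minimizer is closed-form:
    # dL/dx = 2x + lam + rho*(x - 1) = 0
    x = (rho - lam) / (2.0 + rho)
    lam = lam + rho * (x - 1.0)  # multiplier update drives c(x) -> 0

# x converges to the constrained minimizer x* = 1 and lam to -2.
```

Each outer iteration only ever solves an unconstrained problem; feasibility is recovered through the multiplier updates rather than enforced directly.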

We propose a derivative-free algorithm for finding high-quality local minima for functions that require significant computational resources to evaluate. Our algorithm efficiently utilizes the computational resources allocated to it and also has strong theoretical results, almost surely starting a finite number of local optimization runs and identif...

In x-ray spectromicroscopy, a set of images can be acquired across an absorption edge to reveal chemical speciation. We previously described the use of non-negative matrix approximation methods for improved classification and analysis of these types of data. We present here an approach to find appropriate values of regularization parameters for thi...

Fluorescence tomographic reconstruction, based on the detection of photons coming from fluorescent emission, can be used for revealing the internal elemental composition of a sample. On the other hand, conventional X-ray transmission tomography can be used for reconstructing the spatial distribution of the absorption coefficient inside a sample. In...

We present a new algorithm, called manifold sampling, for the unconstrained minimization of a nonsmooth composite function h ∘ F when h has known structure. In particular, by classifying points in the domain of the nonsmooth function h into manifolds, we adapt search directions within a trust-region framework based on knowledge of manifolds intersect...

Coherent x-ray diffractive imaging is a novel imaging technique that utilizes phase retrieval and nonlinear optimization methods to image matter at nanometer scales. We explore how the convergence properties of a popular phase retrieval algorithm, Fienup's HIO, behave by introducing a reduced dimensionality problem allowing us to visualize and quan...

We provide a summary of new developments in the area of direct reaction theory with a particular focus on one-nucleon transfer reactions. We provide a status of the methods available for describing (d,p) reactions. We discuss the effects of nonlocality in the optical potential in transfer reactions. The results of a purely phenomenological potentia...

Many factors affect the performance and power characteristics of FPGA designs. Among them are the optimization parameters for synthesis, map, and place-and-route design tools. Choosing the right combination of these parameters can substantially lower power requirements, while still satisfying timing constraints. Finding such an improvement, however...

Analysis and optimization of simulation-generated data have myriads of scientific and industrial applications. Fuel consumption and emissions over the entire drive cycle of a large fleet of vehicles is an example of such an application and the focus of this study. Temporal variation of fuel consumption and emissions in an automotive engine are func...

This paper shows how a multiobjective problem is formulated and solved in order to size the components of a vehicle with a split hybrid transmission, such as a Toyota Prius. The goal is to explore feasible design options and the trade-offs between fuel economy and vehicle cost. Eight input variables are provided for this optimization, including pla...

The types of constraints encountered in black-box and simulation-based optimization problems differ significantly from those treated in nonlinear programming. We introduce a characterization of constraints to address this situation. We provide formal definitions for several constraint classes and present illustrative examples in the context of the...

Nuclear density functional theory (DFT) is one of the main theoretical tools used to study the properties of heavy and superheavy elements, or to describe the structure of nuclei far from stability. While on-going efforts seek to better root nuclear DFT in the theory of nuclear forces [see Duguet et al., this issue], energy functionals remain semi-...

Statistical tools of uncertainty quantification can be used to assess the information content of measured observables with respect to present-day theoretical models; to estimate model errors and thereby improve predictive capability; to extrapolate beyond the regions reached by experiment; and to provide meaningful input to applications and planned...

Increased complexity of computer architectures, consideration of power constraints, and expected failure rates of hardware components make the design and analysis of energy-efficient fault-tolerance schemes an increasingly challenging and important task. We develop run-time and study FTI, a multilevel checkpoint library, on an IBM Blue Gene/Q. We s...

In high-performance computing, there is a perpetual hunt for performance and scalability. Supercomputers grow larger offering improved computational science throughput. Nevertheless, with an increase in the number of systems’ components and their interactions, the number of failures and the power consumption will increase rapidly. Energy and reliab...

We analyze the relationship between the noise level of a function and the accuracy and reliability of derivatives and difference estimates. We derive and empirically validate measures of quality for both derivatives and difference estimates. Using these measures, we quantify the accuracy of derivatives and differences in terms of the noise level of...
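The trade-off this abstract studies can be sketched with the classic forward-difference bound: with noise level eps_f in each function evaluation, the estimate (f(x+h) − f(x))/h has error roughly (h/2)·|f''| + 2·eps_f/h, minimized near h* = 2·sqrt(eps_f/|f''|). The constants below follow the standard textbook bound, not necessarily the paper's exact quality measures.

```python
import numpy as np

def fd_error_bound(h, eps_f, f2):
    # Truncation error grows with h; noise amplification grows as h shrinks.
    return 0.5 * h * abs(f2) + 2.0 * eps_f / h

eps_f, f2 = 1e-8, 1.0
h_star = 2.0 * np.sqrt(eps_f / abs(f2))  # step balancing the two error terms
```

Steps much larger or smaller than `h_star` do strictly worse, which is why an estimate of the noise level is needed before difference estimates can be trusted.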

X-Ray absorption spectromicroscopy provides rich information on the chemical organization of materials down to the nanoscale. However, interpretation of this information in studies of “natural” materials such as biological or environmental science specimens can be complicated by the complex mixtures of spectroscopically complicated materials presen...

Bayesian methods have been very successful in quantifying uncertainty in physics-based problems in parameter estimation and prediction. In these cases, physical measurements y are modeled as the best fit of a physics-based model $\eta(\theta)$ where $\theta$ denotes the uncertain, best input setting. Hence the statistical model is of the form $y =...

We have quantified the statistical uncertainties of the low-energy coupling constants (LECs) of an optimized nucleon-nucleon (NN) interaction from chiral effective field theory ($\chi$EFT) at next-to-next-to-leading order (NNLO). In addition, we have propagated the impact of the uncertainties of the LECs to two-nucleon scattering phase shifts, effe...