## No full-text available

To read the full-text of this research, you can request a copy directly from the authors.

... These methods (ShapeVolc and MINUIT) are based on the simplex algorithm (Nelder and Mead, 1965), which searches for the optimal solution by adjusting the local modelling parameters so that they minimize the cost function. Since this study aims at modelling the edifices to second order, the ShapeVolc method was preferred. ...

... - It creates a surface that can envelop the constraint points (or not, at the analyst's discretion) and is therefore not biased if the constraint-point population still contains locations that have in fact undergone erosion and should have been excluded beforehand (Fig. III-32). This interpolation method consists in searching, via the simplex algorithm (Nelder and Mead, 1965), for the parameter set that minimizes a cost function, which is parametrized by the geometric elements of the volcanic edifice, among them the parameters (1-4) of the mean profile, called the generatrix, which defines the variation of elevation as a function of distance to the centre (Fig. III-32) (Lahitte et al., in prep.). ...

... Consequently, volume calculations derived from such an interpolation will not yield realistic volumes in the strict sense of the term, but rather minimum volumes of the post-eruptive morphology of the volcanic edifice in question. Relying essentially on the simplex algorithm (Nelder and Mead, 1965), ShapeVolc searches for the parameter set that minimizes a cost function, which is parametrized by the geometric elements of the volcanic edifice, namely (1) the location of the axis of revolution (X, Y), the summit elevation (Z), the ellipticity of the edifice in a horizontal plane, and the eccentricity of the ellipses. Moreover, in order to take the irregularities of volcanoes into account, ShapeVolc also allows second-order modelling. ...
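
The simplex-driven surface fitting that ShapeVolc performs can be illustrated with a deliberately simplified toy version: fitting the axis location, summit elevation and a single linear generatrix slope of an ideal cone to synthetic constraint points, by Nelder-Mead minimization of a misfit cost. All names and numbers below are invented for illustration (this is not ShapeVolc code), and `scipy` supplies the simplex search:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic constraint points sampled from an ideal cone-shaped edifice:
# elevation falls linearly with distance to the (unknown) summit axis.
x0_t, y0_t, z_top, slope = 800.0, -400.0, 1500.0, 0.12
xy = rng.uniform(-5000.0, 5000.0, size=(300, 2))
z = z_top - slope * np.hypot(xy[:, 0] - x0_t, xy[:, 1] - y0_t)
z = z + rng.normal(0.0, 5.0, size=300)          # measurement noise

def cost(p):
    """Mean squared misfit between the model surface and the constraint points."""
    cx, cy, cz, cs = p
    z_model = cz - cs * np.hypot(xy[:, 0] - cx, xy[:, 1] - cy)
    return float(np.mean((z_model - z) ** 2))

# Downhill-simplex search (Nelder & Mead, 1965) for the best-fit geometry.
res = minimize(cost, x0=[500.0, -200.0, 1200.0, 0.10], method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-6, "fatol": 1e-6})
fit_x, fit_y, fit_z, fit_slope = res.x
```

A real generatrix has several profile parameters and the ellipticity terms mentioned above; the mechanics of the fit, however, are the same.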

In this thesis, 47 new ages were obtained with the Cassignol-Gillot technique. The very good reproducibility of these ages, together with the strict consistency observed between volcanic edifices, supports the use of the K-Ar method for dating the lavas of the Eastern Carpathians (Călimani-Gurghiu-Harghita; CGH) and the Miocene ignimbrite deposits of the Western Carpathians (the Mátra and Bükk mountains). In the eastern part of the range, the geochronological data were combined with geomorphological analyses to constrain the volcanic history and to compute construction and erosion rates. In parallel, in the western part of the range, the new geochronological data were combined with the available palaeomagnetic data to refine the stratigraphy. The Eastern Carpathian range experienced a migration of its eruptive activity along the arc from the Miocene to the Quaternary. Here, an innovative and comprehensive methodology provides new geochronological and geomorphological constraints on the evolution of the 20 volcanoes of the range. The new ages constrain their periods of activity, for example Seaca-Tătarca (6.79-6.47 Ma) and Vârghiş (5.47-4.61 Ma), and date the most recent volcanic activity of Călimani. For the Ciomadul volcanic complex, composed of a dozen lava domes, volcanic activity was constrained between 704±18 and 28±1 ka (<1 Ma), interrupted by periods of repose. In parallel, numerical reconstructions of volcanic palaeo-topographies were carried out in order to quantify their shape at the end of their construction. The results deduced from our reconstructions yield a total volume of erupted material of 2300 km³ over the whole range with, at the scale of individual volcanoes, a wide range of sizes (3±3 to 592±115 km³).
These volumes show a clear decrease from north to south along the range, with values of 910, 880, 279 and 165 km³ for the geographic sectors of Călimani, Gurghiu, North Harghita and South Harghita, respectively. Combined with the ages, these volumes yield a mean construction rate of 200 km³/Ma for the whole range, represented by two distinct groups: a group with construction rates of 137 km³/Ma, characteristic of the older volcanoes (11-3.6 Ma), followed by a group with construction rates of 28 km³/Ma for the Plio-Quaternary volcanoes. Comparing the reconstructed volcanoes with the present-day ones yields a total eroded volume of 524±125 km³, corresponding to a mean denudation of 22% and a mean erosion rate of 20 m/Ma for the CGH range. Following the climatic fluctuations recorded along this range, the erosion rates characteristic of the major climatic periods were computed in order to show the role played by climate in erosion rates. The highest erosion rate, 38 m/Ma, was obtained for the period governed by a transitional moderate subtropical continental climate (9.5-8.2 Ma). For the moderate continental climatic period (8.2-6.8 Ma), characterized by much less humid conditions, an erosion rate of 14 m/Ma is proposed. For the period corresponding to a continental climate with identified semi-arid episodes (6.8-5.8 Ma), an erosion rate of 7 m/Ma was computed. For the Plio-Quaternary volcanoes, which experienced interglacial/glacial cycles, an erosion rate of 28 m/Ma was obtained. Such a quantitative morphometric and geochronological approach demonstrates its effectiveness for studying volcanic dynamics, including construction and erosion processes through time. In the western part of the Carpathians, the ages obtained on the Börzsöny lava flows constrain its period of activity to between 14.3 and 15.1 Ma.
For the Bükk ignimbrite deposits, the K-Ar results range between 12.7 and 16.5 Ma.

... The hMWOA-NM joins two ideas. The first is MWOA's exploration capability, based on the social behaviour of whales, and the second is NM's derivative-free mechanism for finding local minima [32]. The results show that the hMWOA-NM method improves efficiency and avoids getting stuck in local optima. ...

... A novel optimization technique for finding a local minimum of a function of more than one variable was proposed by Nelder and Mead in [32]. This technique provides a simple method for non-linear optimization models. ...

This paper investigates a novel cascaded fractional order PI-PD structure for power system stabilizers and flexible AC transmission system-based damping controllers to enhance the stability of the power system. The proposed controller parameters are optimized by a hybrid Modified Whale Optimization Algorithm (MWOA) with the Nelder-Mead algorithm. The novel hybrid algorithm accomplishes a proper balance between the exploration and exploitation phases of the modified whale optimization algorithm. This capability of the hybrid technique is certified on the benchmark test functions against MWOA, WOA, the gravitational search algorithm, fast evolutionary programming, particle swarm optimization, the genetic algorithm, and differential evolution. The proposed controller is optimized and verified under various loading conditions using the hybrid technique. Furthermore, to demonstrate its superiority, the results of the proposed hybrid algorithm are compared with recently published MWOA, WOA, and well-established genetic algorithms.
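
The exploration/exploitation split that such hybrids exploit can be sketched generically: a cheap global sampler (standing in for MWOA, which is not reproduced here) locates a promising basin of a multimodal benchmark, and Nelder-Mead then refines the incumbent. A minimal sketch with `scipy`:

```python
import numpy as np
from scipy.optimize import minimize

def rastrigin(x):
    """Multimodal benchmark; global minimum 0 at the origin."""
    x = np.asarray(x)
    return float(10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x)))

rng = np.random.default_rng(1)

# Exploration phase: plain random search stands in for the population-based
# metaheuristic; it only has to land in a promising basin.
candidates = rng.uniform(-5.12, 5.12, size=(4000, 2))
best = min(candidates, key=rastrigin)

# Exploitation phase: derivative-free Nelder-Mead polishes the incumbent.
res = minimize(rastrigin, best, method="Nelder-Mead")
```

The polish can only improve on the incumbent, since the start point is a vertex of the initial simplex and the best vertex never worsens.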

... Thereby, it exploits the special structure of MO landscapes (for details, see Sect. 2). Tracing the search path from multiple starting points, we were able to show in Steinhoff et al. (2020) that we pass better local optima for f1 in that process than standard local search mechanisms (e.g., Nelder and Mead 1965) could possibly reach for many starting points. Depending on the structure of f1, those trajectories oftentimes even crossed the global optimum. ...

... The (default) parameters mentioned in Algorithm 1 were set to t∠ = 170 as the angle denoting sufficient proximity to a potentially local efficient set, σMO = 0.05 as the step size for multi-objective descent, and σSO = 0.1 as the step size for fast traversal by following the approximated gradient of f2 for reaching the next basin of attraction. In Aspar et al. (2021) Gradient Search (GS) and Nelder-Mead (NM) (Nelder and Mead 1965) were used as local search mechanisms inside the SOMOGSA framework, resulting in two variants of SOMOGSA: SOMOGSA+GS and SOMOGSA+NM. As a baseline, we also evaluated GS and NM as stand-alone methods on each considered SO problem. ...

Single-objective continuous optimization can be challenging, especially when dealing with multimodal problems. This work sheds light on the effects that multi-objective optimization may have in the single-objective space. For this purpose, we examine the inner mechanisms of the recently developed sophisticated local search procedure SOMOGSA. This method solves multimodal single-objective continuous optimization problems based on first expanding the problem with an additional objective (e.g., a sphere function) to the bi-objective domain and subsequently exploiting local structures of the resulting landscapes. Our study particularly focuses on the sensitivity of this multiobjectivization approach w.r.t. (1) the parametrization of the artificial second objective, as well as (2) the position of the initial starting points in the search space. As SOMOGSA is a modular framework for encapsulating local search, we integrate Nelder–Mead local search as optimizer in the respective module and compare the performance of the resulting hybrid local search to its original single-objective counterpart. We show that the SOMOGSA framework can significantly boost local search by multiobjectivization. Hence, combined with more sophisticated local search and metaheuristics, this may help solve highly multimodal optimization problems in the future.
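
The multiobjectivization idea, expanding a multimodal f1 with an artificial sphere objective f2, can be sketched without the full SOMOGSA machinery. The continuation loop below, which traces weighted scalarizations from the easy sphere toward the hard target, is a much simpler stand-in for SOMOGSA's landscape exploitation and is only meant to show the problem construction:

```python
import numpy as np
from scipy.optimize import minimize

def f1(x):
    """Multimodal single-objective target (Rastrigin)."""
    x = np.asarray(x)
    return float(10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x)))

def make_f2(center):
    """Artificial second objective: a sphere function centred at `center`."""
    c = np.asarray(center, dtype=float)
    return lambda x: float(np.sum((np.asarray(x) - c) ** 2))

# Multiobjectivized view: F(x) = (f1(x), f2(x)).  A crude way to exploit it
# with an off-the-shelf local searcher is a continuation over weighted
# scalarizations, shifting weight from the easy sphere to the hard target.
f2 = make_f2([0.25, 0.25])
x = np.array([4.5, -4.5])
for w in (0.9, 0.5, 0.1, 0.0):
    obj = lambda v, w=w: w * f2(v) + (1.0 - w) * f1(v)
    x = minimize(obj, x, method="Nelder-Mead").x
final = f1(x)
```

Note the sphere's centre, the weight schedule and the start point are all arbitrary choices here; SOMOGSA itself follows multi-objective descent structure rather than a fixed scalarization schedule.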

... Simplex search is a well-known, very powerful local search (local descent) proposed by Nelder and Mead [65]; it can be utilized for optimization purposes and does not need gradient information about the feature landscape [66]. A simplex can be explained as a geometrical concept (polytope) with (n + 1) points z1, ...

..., zn in an n-dimensional space. The procedure re-scales the simplex using local information about the objective by means of four operators: reflection, expansion, contraction, and shrinkage [65,66]. The stages of the search can be summarized as follows (see Figure 2): ...
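
Those four operators are compact enough to implement directly. The following minimal downhill-simplex minimizer is a didactic sketch (not a reference implementation; it omits convergence tests and uses the textbook coefficients), with each operator labelled in place:

```python
import numpy as np

def nelder_mead(f, x0, iters=200, alpha=1.0, gamma=2.0, rho=0.5, sigma=0.5):
    """Minimal downhill simplex illustrating the four operators:
    reflection (alpha), expansion (gamma), contraction (rho), shrinkage (sigma)."""
    n = len(x0)
    # Initial simplex: x0 plus a small step along each coordinate axis.
    simplex = [np.asarray(x0, dtype=float)]
    for i in range(n):
        v = np.array(x0, dtype=float)
        v[i] += 0.5
        simplex.append(v)
    for _ in range(iters):
        simplex.sort(key=f)                       # order vertices best..worst
        best, worst = simplex[0], simplex[-1]
        centroid = np.mean(simplex[:-1], axis=0)  # centroid of all but worst
        xr = centroid + alpha * (centroid - worst)        # reflection
        if f(xr) < f(best):
            xe = centroid + gamma * (xr - centroid)       # expansion
            simplex[-1] = xe if f(xe) < f(xr) else xr
        elif f(xr) < f(simplex[-2]):
            simplex[-1] = xr                              # accept reflection
        else:
            xc = centroid + rho * (worst - centroid)      # contraction
            if f(xc) < f(worst):
                simplex[-1] = xc
            else:                                         # shrink toward best
                simplex = [best] + [best + sigma * (v - best)
                                    for v in simplex[1:]]
    return min(simplex, key=f)

sphere = lambda x: float(np.sum(np.asarray(x) ** 2))
xmin = nelder_mead(sphere, [2.0, -3.0])
```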

The fine particulate matter (PM2.5) concentration has been a vital source of information and an essential indicator for measuring and studying the concentration of other air pollutants. It is crucial to realize more accurate predictions of PM2.5 and establish a high-accuracy PM2.5 prediction model due to their social impacts and cross-field applications in geospatial engineering. To further boost the accuracy of PM2.5 prediction results, this paper proposes a new wavelet PM2.5 prediction system (called WD-OSMSSA-KELM model) based on a new, improved variant of the salp swarm algorithm (OSMSSA), kernel extreme learning machine (KELM), wavelet decomposition, and Boruta-XGBoost (B-XGB) feature selection. First, we applied the B-XGB feature selection to realize the best features for predicting hourly PM2.5 concentrations. Then, we applied the wavelet decomposition (WD) algorithm to reach the multi-scale decomposition results and single-branch reconstruction of PM2.5 concentrations to mitigate the prediction error produced by time series data. In the next stage, we optimized the parameters of the KELM model under each reconstructed component. An improved version of the SSA is proposed to reach higher performance for the basic SSA optimizer and avoid local stagnation problems. In this work, we propose new operators based on oppositional-based learning and simplex-based search to mitigate the core problems of the conventional SSA. In addition, we utilized a time-varying parameter instead of the main parameter of the SSA. To further boost the exploration trends of the SSA, we propose using random leaders to guide the swarm towards new regions of the feature space based on a conditional structure. After optimizing the model, the optimized model was utilized to predict the PM2.5 concentrations, and different error metrics were applied to evaluate the model’s performance and accuracy.
The proposed model was evaluated based on an hourly database, six air pollutants, and six meteorological features collected from the Beijing Municipal Environmental Monitoring Center. The experimental results show that the proposed WD-OSMSSA-KELM model can predict the PM2.5 concentration with superior performance (R: 0.995, RMSE: 11.906, MdAE: 2.424, MAPE: 9.768, KGE: 0.963, R2: 0.990) compared to the WD-CatBoost, WD-LightGBM, WD-XGBoost, and WD-Ridge methods.

... The temperature parameter determines the accept-reject criterion. Currently supported minimization methods are the nonlinear conjugate gradient (CG) [38], simplex (Nelder-Mead) [37], conjugate direction (Powell) [46], L-BFGS-B [4], Constrained Optimization BY Linear Approximation (COBYLA) [47], and Sequential Least Squares Programming (SLSQP) [22] methods. Hyperparameters: minimizer method, temperature. ...
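
The setup described in that snippet, a temperature-driven accept/reject criterion wrapped around a pluggable local minimizer, matches the interface of SciPy's `basinhopping`; assuming that API, a small example with Nelder-Mead as the inner minimizer might look like:

```python
import numpy as np
from scipy.optimize import basinhopping

def bumpy(x):
    """1-D multimodal test function with several well-separated minima."""
    return float(x[0] ** 2 + 10.0 * np.sin(3.0 * x[0]))

# T is the Metropolis temperature of the accept/reject criterion between hops;
# the inner local minimizer is selectable, here the Nelder-Mead simplex.
res = basinhopping(bumpy, x0=[4.0], niter=200, T=2.0, stepsize=2.0, seed=7,
                   minimizer_kwargs={"method": "Nelder-Mead"})
```

Swapping `"Nelder-Mead"` for `"CG"`, `"Powell"`, `"L-BFGS-B"`, `"COBYLA"` or `"SLSQP"` exercises the other minimizers the snippet lists.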

Recent years have witnessed phenomenal growth in the application and capabilities of Graphical Processing Units (GPUs) due to their high parallel computation power at relatively low cost. However, writing a computationally efficient GPU program (kernel) is challenging, and generally only certain specific kernel configurations lead to significant increases in performance. Auto-tuning is the process of automatically optimizing software for highly-efficient execution on a target hardware platform. Auto-tuning is particularly useful for GPU programming, as a single kernel requires re-tuning after code changes, for different input data, and for different architectures. However, the discrete and non-convex nature of the search space creates a challenging optimization problem. In this work, we investigate which algorithm produces the fastest kernels if the time-budget for the tuning task is varied. We conduct a survey by performing experiments on 26 different kernel spaces, from 9 different GPUs, for 16 different evolutionary black-box optimization algorithms. We then analyze these results and introduce a novel metric based on the PageRank centrality concept as a tool for gaining insight into the difficulty of the optimization problem. We demonstrate that our metric correlates strongly with observed tuning performance.
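
The abstract does not specify the PageRank-based metric; one plausible (hypothetical) construction, in the spirit of local-optima networks, links each configuration to its better-performing neighbours and reads stationary mass off a power-iteration PageRank:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy discrete "kernel configuration" space: a 1-D chain of 16 configurations
# with a synthetic runtime each; edges point from a configuration to its
# faster neighbours.  This construction is illustrative, not the paper's.
runtime = rng.uniform(1.0, 10.0, size=16)
n = runtime.size
A = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i + 1):
        if 0 <= j < n and runtime[j] < runtime[i]:
            A[i, j] = 1.0

def pagerank(adj, d=0.85, iters=100):
    """Power-iteration PageRank; dangling nodes redistribute uniformly."""
    m = adj.shape[0]
    out = adj.sum(axis=1)
    P = np.where(out[:, None] > 0,
                 adj / np.maximum(out, 1.0)[:, None],   # row-stochastic rows
                 1.0 / m)                               # dangling -> uniform
    r = np.full(m, 1.0 / m)
    for _ in range(iters):
        r = (1.0 - d) / m + d * (P.T @ r)
    return r

scores = pagerank(A)
```

Under this construction, mass accumulates on locally optimal configurations, which is the kind of concentration a landscape-difficulty metric can summarize.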

... This method generally applies two types of algorithms: one is a global optimization algorithm, such as artificial neural network algorithm [12], genetic algorithm [13], simulated annealing method [14,15], simulated atomic transition inversion method [16], homotopy inversion method [17], etc. Another is a local optimization algorithm, such as simplex algorithm [18], particle swarm algorithm [19], multi-peak particle swarm algorithm [20] and so on. ...

The use of geodetic observation data for seismic fault parameter inversion is a research hotspot of geodetic inversion, and it is also central to studying the mechanism of earthquake occurrence. Seismic fault parameter inversion is nonlinear, and the gradient-based optimizer (GBO) converges quickly and rarely falls into local optima. This paper applies the GBO algorithm to simulated earthquakes and to the real LuShan earthquake in the nonlinear inversion of the Okada model to obtain the source parameters. The simulated-earthquake experiments show that the algorithm is stable, and the source parameters obtained by GBO are slightly closer to the true values than those of the multi-peak particle swarm optimization (MPSO). In the 2013 LuShan earthquake experiment, the root mean square error between the deformation forward-modelled from the fault parameters obtained by the introduced GBO algorithm and the observed surface deformation was 3.703 mm, slightly better than the 3.708 mm calculated with the MPSO. Moreover, the inversion result of the GBO algorithm is more stable than that of the MPSO algorithm. The above results show that the introduced GBO algorithm has practical application value in the inversion of seismic fault source parameters.

... Prakash et al. [22] developed the CMOST model, an open-source framework for the microsimulation of CRC screening strategies also used in our study, facilitating automated parameter calibration against epidemiological adenoma prevalence and CRC incidence data. The authors used a heuristic greedy algorithm followed by Nelder-Mead optimization [23] to minimize the squared error between the benchmark values and the corresponding model predictions. ...

Background
Medical evidence from more recent observational studies may significantly alter our understanding of disease incidence and progression, and would require recalibration of existing computational and predictive disease models. However, it is often challenging to perform recalibration when there are a large number of model parameters to be estimated. Moreover, comparing the fitting performances of candidate parameter designs can be difficult due to significant variation in simulated outcomes under limited computational budget and long runtime, even for one simulation replication.
Methods
We developed a two-phase recalibration procedure. As a proof-of-concept study, we verified the procedure in the context of sex-specific colorectal neoplasia development. We considered two individual-based state-transition stochastic simulation models, estimating model parameters that govern colorectal adenoma occurrence and its growth through three preclinical states: non-advanced precancerous polyp, advanced precancerous polyp, and cancerous polyp. For the calibration, we used a weighted-sum-squared error between three prevalence values reported in the literature and the corresponding simulation outcomes. In phase 1 of the calibration procedure, we first extracted the baseline parameter design from relevant studies on the same model. We then performed sampling-based searches within a proper range around the baseline design to identify the initial set of good candidate designs. In phase 2, we performed local search (e.g., the Nelder-Mead algorithm), starting from the candidate designs identified at the end of phase 1. Further, we investigated the efficiency of exploring dimensions of the parameter space sequentially based on our prior knowledge of the system dynamics.
Results
The efficiency of our two-phase re-calibration procedure was first investigated with CMOST, a relatively inexpensive computational model. It was then further verified with the V/NCS model, which is much more expensive. Overall, our two-phase procedure showed a better goodness-of-fit than the straightforward employment of the Nelder-Mead algorithm, when only a limited number of simulation replications were allowed. In addition, in phase 2, performing local search along parameter space dimensions sequentially was more efficient than performing the search over all dimensions concurrently.
Conclusion
The proposed two-phase re-calibration procedure is efficient at estimating parameters of computationally expensive stochastic dynamic disease models.
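
The two-phase shape of the procedure can be sketched end-to-end on a toy stochastic simulator (all targets, weights and parameter meanings below are invented, and the real models are far more expensive): phase 1 samples broadly around a baseline design and keeps the best few candidates; phase 2 runs Nelder-Mead from each survivor.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
target = np.array([0.30, 0.08, 0.015])    # benchmark prevalences (made up)
weights = np.array([1.0, 4.0, 25.0])      # weighted-sum-squared error weights

def simulate(params, reps=20):
    """Stand-in for a stochastic disease simulator returning noisy prevalences."""
    p = np.asarray(params)
    mean = np.array([p[0], p[0] * p[1], p[0] * p[1] * p[2]])
    draws = mean + rng.normal(0.0, 0.005, size=(reps, 3))
    return draws.mean(axis=0)             # average over replications

def loss(params):
    """Weighted sum-squared error against the benchmark targets."""
    return float(np.sum(weights * (simulate(params) - target) ** 2))

# Phase 1: sample broadly around a baseline design, keep the best candidates.
baseline = np.array([0.25, 0.30, 0.20])
samples = baseline * rng.uniform(0.5, 1.5, size=(200, 3))
best = sorted(samples, key=loss)[:3]

# Phase 2: local Nelder-Mead refinement from each surviving candidate.
results = [minimize(loss, s, method="Nelder-Mead",
                    options={"maxiter": 300}) for s in best]
final = min(results, key=lambda r: r.fun)
```

Because the objective is noisy, ranking candidates on averaged replications (the `reps` argument) before spending local-search budget is what makes the two-phase split pay off.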

... Hence, the FA is called just once, and its best solution is furnished to the NMA to refine the search. For further details about the FA and the NMA, the reader is referred to [109,110] and [111]. The time-consuming non-linear response fragilities of Section 3 need to be computed iteratively for every firefly of the FA. ...

Recent Probabilistic Seismic Hazard Analysis (PSHA) for Chile has indicated a significant increase in the PGA values currently adopted in the country. Thus, assessing the reliability of existing buildings and developing tools for seismic retrofitting purposes have become relevant. Adding external passive energy dissipation systems to attenuate seismic oscillations is usually cost-effective. Among the options, pendulum-tuned mass dampers (PTMDs) are attractive due to their performance and simplicity for installation and maintenance issues. Even so, a comprehensive literature survey reveals that reliability-based design optimization (RBDO) studies of single and multiple PTMDs cannot be found, to the best of the authors' knowledge. Hence, this paper aims at performing the RBDO of single and multiple PTMDs of a steel building based on Chilean seismicity. For this purpose, uncertainties in both ground motion and parameters of the mechanical model are considered through seismic vulnerability analysis, which is integrated into an optimization problem. Because strong ground motion must be included, leading the oscillators to sway at large angles, the PTMDs' non-linear behavior is considered. The results show that for single PTMDs, the classical closed-form expressions can also be successfully employed, but only for higher mass ratios. Furthermore, the optimization for the double-PTMD scenario leads to similar performance (as for the single PTMD), but with the associated advantages in installation and maintenance processes, due to their lower masses.

... We implemented the simplex minimization procedure as an alternative route to the Newton-Raphson algorithm (from (Press et al., 2007)). Although the downhill simplex method, as described by Nelder and Mead (Nelder and Mead, 1965), is one of the slowest minimization algorithms, it has several advantages. First, it does not require derivative evaluation, a step that often poses a challenge for convoluted functions. ...

The charged mineral/electrolyte interfaces are ubiquitous in the surface and subsurface, including the surroundings of geological disposal sites for radioactive waste. Therefore, understanding how ions interact with charged surfaces is critically important for predicting radionuclide mobility in the case of waste leakage. At present, Surface Complexation Models (SCMs) are the most successful thermodynamic frameworks to describe ion retention by mineral surfaces. SCMs are interfacial speciation models that account for the effect of the electric field generated by charged surfaces on sorption equilibria. These models have been successfully used to analyze and interpret a broad range of experimental observations including potentiometric and electrokinetic titrations or spectroscopy. Unfortunately, many of the current procedures to solve and fit SCMs to experimental data are not optimal, which leads to a non-transferable or non-unique description of interfacial electrostatics and consequently of the strength and extent of ion retention by mineral surfaces. Recent developments in Artificial Intelligence (AI) offer a new avenue to replace SCM solvers and fitting algorithms with trained AI surrogates. Unfortunately, there is a lack of a standardized dataset covering a wide range of SCM parameter values available for AI exploration and training, a gap filled by this study. Here, we described the computational pipeline to generate synthetic SCM data and discussed approaches to transform this dataset into AI-learnable input. First, we used this pipeline to generate a synthetic dataset of electrostatic properties for a broad range of the prototypical oxide/electrolyte interfaces. The next step is to extend this dataset to include complex radionuclide sorption and complexation, and finally, to provide trained AI architectures able to infer SCM parameter values rapidly from experimental data.
Here, we illustrated the AI-surrogate development using the ensemble learning algorithms, such as Random Forest and Gradient Boosting. These surrogate models allow a rapid prediction of the SCM model parameters, do not rely on an initial guess, and guarantee convergence in all cases.

... We chose to employ the Nelder-Mead simplex algorithm [36] to minimize the measured rate of coincidence detection, by adjusting the piezoelectric polarization controller at Alice. The algorithm creates a four-vertex tetrahedral simplex in the 3D space defined by the voltages on three actuators. ...

Quantum measurements that use the entangled photons' polarization to encode quantum information require calibration and alignment of the measurement bases between spatially separate observers. Because of the changing birefringence in optical fibers arising from temperature fluctuations or external mechanical vibrations, the polarization state at the end of a fiber channel is unpredictable and time-varying. Polarization tracking and stabilization methods originally developed for classical optical communications cannot be applied to polarization-entangled photons, where the separately detected photons are statistically unpolarized, yet quantum mechanically correlated. We report here a fast method for automatic alignment and dynamic tracking of the polarization measurement bases between spatially separated detectors. The system uses the Nelder-Mead simplex method to minimize the observed coincidence rate between non-locally measured entangled photon pairs, without relying on classical wavelength-multiplexed pilot tones or temporally interleaved polarized photons. Alignment and control are demonstrated in a 7.1 km deployed fiber loop as well as in a controlled drifting scenario.
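
A stripped-down stand-in for that control loop: Nelder-Mead searching the three actuator voltages for the setting that minimizes a coincidence-rate signal. The rate model below is entirely synthetic (in the real system each evaluation is a photon-counting measurement, and the optimum drifts in time):

```python
import numpy as np
from scipy.optimize import minimize

# Unknown alignment setting of the three piezo actuators (volts, invented).
v_opt = np.array([1.3, -0.7, 2.1])

def coincidence_rate(v):
    """Synthetic detector signal: background plus a misalignment penalty."""
    mis = np.asarray(v) - v_opt
    return float(50.0 + 400.0 * np.sum(np.sin(0.4 * mis) ** 2))

# Nelder-Mead builds a 4-vertex tetrahedral simplex in the 3-D voltage space
# and walks it downhill on the measured rate -- no derivatives required.
res = minimize(coincidence_rate, x0=[0.5, 0.5, 0.5], method="Nelder-Mead",
               options={"maxiter": 2000, "xatol": 1e-4, "fatol": 1e-4})
```

For tracking a drifting channel, the same search is simply restarted from the previous optimum whenever the rate climbs above threshold.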

... The optimization algorithm Sbplx implemented in NLopt was used [11]. It is based on the Subplex method by Rowan [8] which is a gradient-free method for unconstrained problems based on the Nelder-Mead simplex algorithm [15]. The algorithm uses Nelder-Mead on a sequence of subspaces, promises to be more robust than the original and can handle bounds. ...

A sailplane with a morphing forward wing section allows a promising increase in performance. As a consequence, the morphing forward section leads to a smaller primary structure and reduced torsional stiffness. As the shear center is moved aft of the aerodynamic center, an aeroelastic twisting moment is induced on the high-aspect-ratio wing. The layup and fiber angles of the wing shells are optimized to counteract the adverse wing twist by modifying stiffness and applying bending-torsion coupling. An efficient parametrization and optimization method for the wing skin layup is developed for a finite element shell model of the wing structure. The aerodynamic model utilizes a doublet lattice model, based on an optimized aerodynamic wing design for a morphing wing sailplane. Structural masses and masses for controls, flaps and water ballast are included with discrete mass elements. To solve the static aeroelastic problem and to determine the deflection, NASTRAN SOL144 is used. Load cases for low- and high-speed conditions as well as for pull-up manoeuvres are analyzed. Results show that the bending-torsion coupling effect can have a beneficial or adverse effect depending on the specific load case and aerodynamic configuration.
https://www.icas.org/ICAS_ARCHIVE/ICAS2020/data/preview/ICAS2020_0575.htm

... We used a downhill simplex method [NM65] to solve the optimization problem. This method is effective in a variety of practical non-linear optimization problems with multiple local minima [BIJ96,LRWW98]. ...

Transfer function (TF) plays a key role for the generation of direct volume rendering (DVR), by enabling accurate identification of structures of interest (SOIs) interactively as well as ensuring appropriate visibility of them. Attempts at mitigating the repetitive manual process of TF design have led to approaches that make use of a knowledge database consisting of pre-designed TFs by domain experts. In these approaches, a user navigates the knowledge database to find the most suitable pre-designed TF for their input volume to visualize the SOIs. Although these approaches potentially reduce the workload to generate the TFs, they, however, require manual TF navigation of the knowledge database, as well as the likely fine tuning of the selected TF to suit the input. In this work, we propose a TF design approach where we introduce a new content-based retrieval (CBR) to automatically navigate the knowledge database. Instead of pre-designed TFs, our knowledge database contains image volumes with SOI labels. Given an input image volume, our CBR approach retrieves relevant image volumes (with SOI labels) from the knowledge database; the retrieved labels are then used to generate and optimize TFs of the input. This approach does not need any manual TF navigation and fine tuning. For improving SOI retrieval performance, we propose a two-stage CBR scheme to enable the use of local intensity and regional deep image feature representations in a complementary manner. We demonstrate the capabilities of our approach with comparison to a conventional CBR approach in visualization, where an intensity profile matching algorithm is used, and also with potential use-cases in medical image volume visualization where DVR plays an indispensable role for different clinical usages.

... Using Eq. 14-16, we found the parameters x, y, and z that maximized the likelihood of observing the given set of branch lengths. The optimization was carried out using the built-in R function, optim, with the "L-BFGS-B" method (Byrd et al. 1995) and multiple starting points for x, y, z, followed by local optimization using the "Nelder-Mead" method (Nelder and Mead 1965). ...
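
The two-stage recipe in that snippet (multi-start gradient-based search followed by a derivative-free Nelder-Mead polish) translates directly to `scipy`; the exponential branch-length likelihood below is a toy stand-in for the paper's actual model:

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, data):
    """Toy likelihood: branch lengths ~ Exponential(rate), rate = exp(theta)
    (log parametrization keeps the rate positive and unconstrained)."""
    rate = np.exp(params[0])
    return float(rate * data.sum() - data.size * np.log(rate))

rng = np.random.default_rng(11)
data = rng.exponential(scale=1.0 / 2.5, size=500)   # true rate = 2.5

# Stage 1: gradient-based L-BFGS-B from several starting points...
starts = [[-2.0], [0.0], [2.0]]
stage1 = min((minimize(neg_log_lik, s, args=(data,), method="L-BFGS-B")
              for s in starts), key=lambda r: r.fun)

# ...Stage 2: derivative-free Nelder-Mead polish from the stage-1 optimum.
stage2 = minimize(neg_log_lik, stage1.x, args=(data,), method="Nelder-Mead")
mle_rate = float(np.exp(stage2.x[0]))
```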

Phylogenetic trees describe relationships between extant species, but beyond that their shape and their relative branch lengths can provide information on broader evolutionary processes of speciation and extinction. However, currently many of the most widely used macro-evolutionary models make predictions about the shapes of phylogenetic trees that differ considerably from what is observed in empirical phylogenies. Here, we propose a flexible and biologically plausible macroevolutionary model for phylogenetic trees where times to speciation or extinction events are drawn from a Coxian phase-type (PH) distribution. First, we show that different choices of parameters in our model lead to a range of tree balances as measured by Aldous' β statistic. In particular, we demonstrate that it is possible to find parameters that correspond well to empirical tree balance. Next, we provide a natural extension of the β statistic to sets of trees. This extension produces less biased estimates of β compared to using the median β values from individual trees. Furthermore, we derive a likelihood expression for the probability of observing an edge-weighted tree under a model with speciation but no extinction. Finally, we illustrate the application of our model by performing both absolute and relative goodness-of-fit tests for two large empirical phylogenies (squamates and angiosperms) that compare models with Coxian PH distributed times to speciation with models that assume exponential or Weibull distributed waiting times. In our numerical analysis, we found that, in most cases, models assuming a Coxian PH distribution provided the best fit.

... A smaller Bhattacharyya distance indicates a greater overlap between the two distributions. We optimised γ so as to minimise the Bhattacharyya distance between the distribution of disease and information spread times to reach the 75% threshold using R's "optimise" function (Nelder and Mead 1965). Parameter optimisation was run five times so as to avoid being trapped in local optima based on potential combinations of starting nodes and γ. ...
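
The Bhattacharyya distance itself is straightforward to compute from two samples via shared histograms; the helper below is a generic sketch (bin count and range are arbitrary choices, and the optimization over γ from the snippet is omitted):

```python
import numpy as np

def bhattacharyya_distance(p, q, bins=30, range_=(0.0, 1.0)):
    """Bhattacharyya distance between two samples via shared histograms:
    D_B = -ln(sum_i sqrt(p_i * q_i)); smaller D_B means more overlap."""
    hp, _ = np.histogram(p, bins=bins, range=range_)
    hq, _ = np.histogram(q, bins=bins, range=range_)
    hp = hp / hp.sum()
    hq = hq / hq.sum()
    bc = np.sum(np.sqrt(hp * hq))       # Bhattacharyya coefficient in [0, 1]
    return float(-np.log(max(bc, 1e-300)))

rng = np.random.default_rng(2)
same = bhattacharyya_distance(rng.uniform(0, 1, 5000), rng.uniform(0, 1, 5000))
diff = bhattacharyya_distance(rng.beta(2, 8, 5000), rng.beta(8, 2, 5000))
```

Matching distributions give a distance near zero; well-separated ones give a large distance, which is the quantity minimized over γ in the study.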

Social interactions between animals can provide many benefits, including the ability to gain useful environmental information through social learning. However, these social contacts can also facilitate the transmission of infectious diseases through a population. Animals engaging in social interactions must therefore face a trade-off between the potential informational benefits and the risk of acquiring disease. In order to understand how this trade-off can influence animal sociality, it is necessary to quantify the effects of different social structures on individuals’ likelihood of acquiring information versus infection. Theoretical models have suggested that modular social networks, associated with the formation of groups or sub-groups, can slow the spread of infection by trapping it within particular groups. However, these social structures will not necessarily impact the spread of information in the same way if its transmission is considered as a “complex contagion”, e.g. through individuals copying the majority (conformist learning). Here we use simulation models to demonstrate that modular networks can promote the spread of information relative to the spread of infection, but only when the network is fragmented and group sizes are small. We show that the difference in transmission between information and disease is maximised for more well-connected social networks when the likelihood of transmission is intermediate. Our results have important implications for understanding the selective pressures operating on the social structure of animal societies, revealing that highly fragmented networks such as those formed in fission-fusion social groups and multilevel societies can be effective in modulating the infection-information trade-off for individuals within them.
Significance statement
Risk of infection is commonly regarded as one of the costs of animal social behaviours, while the potential for acquiring useful information is seen as a benefit. Balancing this risk of infection with the potential to gain useful information is one of the key trade-offs facing animals that engage in social interactions. In order to better understand this trade-off, it is necessary to quantify how different social structures can promote access to useful information while minimising risk of infection. We used simulations of disease and information spread to examine how group sizes and social network fragmentation influence both these transmission processes. Our models find that more subdivided networks slow the spread of disease far more than that of information, but only when group sizes are small. Our results demonstrate that fragmented social structures can be more effective in balancing the infection-information trade-off for individuals within them.

... In particular, the parameter estimation has been carried out by maximizing the likelihood function of the model shown in Equation (23), with the equivalent stress s_eq,MAX computed by considering the stress distribution. The "fminsearch" algorithm, based on the Nelder-Mead simplex algorithm [54], has been employed for the application of the Maximum Likelihood principle. With this approach, both failures and runout data have been considered. ...
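A minimal sketch of likelihood maximization that treats failures and runouts (right-censored observations) differently, assuming a simple Weibull life model and synthetic data rather than the paper's actual equivalent-stress model; scipy's Nelder-Mead method plays the role of Matlab's fminsearch:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

rng = np.random.default_rng(0)
# Hypothetical fatigue-life sample: failures plus right-censored runouts.
shape_true, scale_true = 2.0, 1000.0
lives = weibull_min.rvs(shape_true, scale=scale_true, size=200, random_state=rng)
censor = 1500.0                       # runout threshold (test stopped here)
failed = lives[lives <= censor]
n_runout = int(np.sum(lives > censor))

def neg_log_lik(theta):
    k, lam = theta
    if k <= 0 or lam <= 0:
        return np.inf                 # keep the simplex in the valid region
    ll = np.sum(weibull_min.logpdf(failed, k, scale=lam))
    # Runouts contribute through the survival function, not the density.
    ll += n_runout * weibull_min.logsf(censor, k, scale=lam)
    return -ll

res = minimize(neg_log_lik, [1.0, 500.0], method="Nelder-Mead",
               options={"maxiter": 2000})
k_hat, lam_hat = res.x
```

The censored terms are what allow the runout specimens to inform the fit without being mistaken for failures.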

Size effects concern the anomalous scaling of relevant mechanical properties of materials and structures over a sufficiently wide dimensional range. In the last few years, thanks to technological advances, such effects have been experimentally detected also in the very high cycle fatigue (VHCF) tests. Research groups at Politecnico di Torino are very active in this field, observing size effects on fatigue strength, fatigue life and fatigue limit up to the VHCF regime for different metal alloys. In addition, different theoretical models have been put forward to explain these effects. In the present paper, two of them are introduced, respectively based on fractal geometry and statistical concepts. Furthermore, a comparison between the models and experimental results is provided. Both models are able to predict the decrement in the fatigue life and in the conventional fatigue limit.

... Specifically, for each round, we randomly divided the dataset into 10 parts. For each of the 10 ways of selecting 9 parts from the 10, we computed the maximum likelihood estimate of the model's parameters based on those 9 parts, using the Nelder-Mead simplex algorithm (Nelder and Mead 1965). We then determined the log likelihood of the remaining part given the prediction. ...
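The cross-validation loop described above can be sketched as follows, with a hypothetical Gaussian model standing in for the behavioral model whose parameters are fitted by maximum likelihood via Nelder-Mead on 9 of the 10 parts:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
data = rng.normal(2.0, 1.5, size=300)          # hypothetical stand-in dataset
folds = np.array_split(rng.permutation(data), 10)

held_out_ll = []
for i in range(10):
    train = np.concatenate([f for j, f in enumerate(folds) if j != i])
    test = folds[i]
    # Maximum likelihood estimate on the 9 training parts (Nelder-Mead).
    nll = lambda th: -np.sum(norm.logpdf(train, th[0], abs(th[1]) + 1e-12))
    res = minimize(nll, [0.0, 1.0], method="Nelder-Mead")
    mu, sigma = res.x[0], abs(res.x[1])
    # Log likelihood of the held-out part under the fitted model.
    held_out_ll.append(np.sum(norm.logpdf(test, mu, sigma)))

total_ll = sum(held_out_ll)
```

Summing the held-out log likelihoods over the 10 folds gives the cross-validated score used to compare models.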

It is standard in multiagent settings to assume that agents will adopt Nash equilibrium strategies. However, studies in experimental economics demonstrate that Nash equilibrium is a poor description of human players' initial behavior in normal-form games. In this paper, we consider a wide range of widely-studied models from behavioral game theory. For what we believe is the first time, we evaluate each of these models in a meta-analysis, taking as our data set large-scale and publicly-available experimental data from the literature. We then propose modifications to the best-performing model that we believe make it more suitable for practical prediction of initial play by humans in normal-form games.

... Asmussen et al. (1996) carried out the maximisation of the likelihood by a fitting procedure based on the Expectation-Maximisation (EM) algorithm. In Faddy (1998) the optimisation algorithm proposed by Nelder and Mead (1965) was utilised to maximise the log-likelihood function. Penalising the likelihood facilitates the convergence of the algorithm; see Faddy (2002). ...

We develop an efficient algorithm to compute the likelihood of the phase-type ageing model. The proposed algorithm uses the uniformisation method to stabilise the numerical calculation. It also utilises a vectorised formula to calculate only the necessary elements of the probability distribution. Our algorithm, which comes with an upper bound on the error, can easily be adjusted to tackle the likelihood calculation of the Coxian models. Furthermore, we compare the speed and the accuracy of the proposed algorithm with those of the traditional method using the matrix exponential. Our algorithm is faster and more accurate than the traditional method in calculating the likelihood. Based on our experiments, we recommend using 20 sets of randomly-generated initial values for the optimisation to get a reliable estimate for which the evaluated likelihood is close to the maximum likelihood.
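The multi-start strategy recommended above (20 randomly-generated initial value sets) can be sketched generically; the Himmelblau function below is a hypothetical surrogate for a multimodal likelihood surface, not the phase-type likelihood itself:

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(theta):
    # Hypothetical multimodal surrogate (Himmelblau's function, 4 local minima),
    # standing in for a negative log-likelihood with several basins.
    x, y = theta
    return (x**2 + y - 11) ** 2 + (x + y**2 - 7) ** 2

rng = np.random.default_rng(42)
starts = rng.uniform(-5, 5, size=(20, 2))     # 20 randomly-generated initial values
results = [minimize(neg_log_lik, s, method="Nelder-Mead") for s in starts]
best = min(results, key=lambda r: r.fun)      # keep the run with the best likelihood
```

Taking the best of the 20 runs guards against any single Nelder-Mead run terminating in a poor local optimum.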

... were identified using the Nelder-Mead optimization algorithm (Nelder and Mead, 1965). To increase the likelihood that the global optimum was identified, 1000 initial best guesses of the model parameters were obtained by using a normal distribution with standard deviation one to jitter the estimates produced with the methods described in section 2.3. ...
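The jittered-restart scheme described above can be sketched as follows; the objective and the preliminary estimate are hypothetical, and 100 jittered guesses are used here instead of the paper's 1000 to keep the example quick:

```python
import numpy as np
from scipy.optimize import minimize

def objective(p):
    # Hypothetical model-fit objective with a rippled landscape around its minimum.
    return (p[0] - 3.0) ** 2 * (1 + np.sin(3 * p[0]) ** 2) + (p[1] + 1.0) ** 2

prelim = np.array([2.5, -0.5])                # estimate from a simpler preliminary method
rng = np.random.default_rng(7)
# Jitter the preliminary estimate with standard-deviation-one normal noise.
guesses = prelim + rng.normal(0.0, 1.0, size=(100, 2))
fits = [minimize(objective, g, method="Nelder-Mead") for g in guesses]
best = min(fits, key=lambda r: r.fun)
```

Jittering centers the restarts on a plausible region while still spreading them widely enough to escape nearby local optima.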

Statistical analyses of wildfire growth are rarely undertaken, particularly in South America. In this study, we describe a simple and intuitive difference equation model of wildfire growth that uses a spread parameter to control the radial speed of the modeled fire and an extinguish parameter to control the rate at which the burning perimeter becomes inactive. Using data from the GlobFire project, we estimate these two parameters for 1003 large, multi-day fires in Peru between 2001 and 2020. For four fire-prone ecoregions within Peru, a set of 18 generalized linear models are fit for each parameter that use fire danger indexes and land cover covariates. Akaike weights are used to identify the best-approximating model and quantify model uncertainty. We find that, in most cases, increased spread rates and extinguish rates are positively associated with fire danger indexes. When fire danger indexes are included in the models, the spread component is usually the best choice. We also find that forest cover is negatively associated with spread rates and extinguish rates in tropical forests, and that anthropogenic cover is negatively associated with spread rates in xeric ecoregions. We explore potential applications of this model to wildfire risk assessment and burned area forecasting.

... Optimization using the comparator oracle was explored with directional direct search methods (Audet and Dennis Jr 2006) and the Nelder-Mead method (Nelder and Mead 1965). Directional direct search is guaranteed to converge to an optimal solution in the limit for smooth, convex functions. ...

We consider the problem of minimizing a smooth, Lipschitz, convex function over a compact, convex set using sub-zeroth-order oracles: an oracle that outputs the sign of the directional derivative for a given point and a given direction, an oracle that compares the function values for a given pair of points, and an oracle that outputs a noisy function value for a given point. We show that the sample complexity of optimization using these oracles is polynomial in the relevant parameters. The optimization algorithm that we provide for the comparator oracle is the first algorithm with a known rate of convergence that is polynomial in the number of dimensions. We also give an algorithm for the noisy-value oracle that incurs sublinear regret in the number of queries and polynomial regret in the number of dimensions.

... Afterwards, time picks were automatically generated using the Akaike information criterion [78]. Locations were then estimated via an iterative process that minimized travel-time residuals using the downhill simplex algorithm [79]. At every iteration, the inconsistent time picks with the largest discrepancies were systematically removed to improve the final overall location residuals. ...
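The residual-minimization step of such a location procedure can be sketched with a toy example: hypothetical station coordinates, a homogeneous velocity, and noise-free synthetic arrival times, with scipy's Nelder-Mead as the downhill simplex:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical setup: five stations (km), a constant velocity, and a true source.
stations = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.], [5., 12.]])
v = 3.5                                        # km/s, assumed homogeneous medium
true_src = np.array([4.0, 6.0])
t0_true = 1.0                                  # origin time
arrivals = t0_true + np.linalg.norm(stations - true_src, axis=1) / v

def residual_norm(params):
    # Sum of squared travel-time residuals for a trial (x, y, t0).
    x, y, t0 = params
    pred = t0 + np.linalg.norm(stations - np.array([x, y]), axis=1) / v
    return np.sum((arrivals - pred) ** 2)

loc = minimize(residual_norm, [5.0, 5.0, 0.5], method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-12, "maxiter": 5000})
```

With noise-free picks the simplex recovers the true epicenter and origin time; in practice, the iterative pick-rejection step described above handles the noisy case.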

Fast detection and characterization of seismic sources is crucial for decision-making and warning systems that monitor natural and induced seismicity. However, besides the laying out of ever denser monitoring networks of seismic instruments, the incorporation of new sensor technologies such as Distributed Acoustic Sensing (DAS) further challenges our processing capabilities to deliver short turnaround answers from seismic monitoring. In response, this work describes a methodology for the learning of the seismological parameters: location and moment tensor from compressed seismic records. In this method, data dimensionality is reduced by applying a general encoding protocol derived from the principles of compressive sensing. The data in compressed form is then fed directly to a convolutional neural network that outputs fast predictions of the seismic source parameters. Thus, the proposed methodology can not only expedite data transmission from the field to the processing center, but also remove the decompression overhead that would be required for the application of traditional processing methods. An autoencoder is also explored as an equivalent alternative to perform the same job. We observe that the CS-based compression requires only a fraction of the computing power, time, data and expertise required to design and train an autoencoder to perform the same task. Implementation of the CS-method with a continuous flow of data together with generalization of the principles to other applications such as classification are also discussed.

... This suggests that our problem may be sensitive to small changes in the data when we are interested in exact parameter estimates, and if we insist on exact parameter calculations, we may need multiple iterations or an algorithm with large step sizes. Therefore, as a local search strategy, we use the direct search method of Nelder and Mead [19], implemented in Matlab's fminsearch. This algorithm can vary its step size, and it is also robust in the presence of noisy objective functions. ...
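The robustness to noisy objectives mentioned here can be illustrated with a hypothetical noisy bowl; scipy's Nelder-Mead stands in for fminsearch, and a loosened `fatol` keeps the simplex from chasing the noise:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

def noisy_objective(x):
    # Smooth quadratic bowl plus small evaluation noise, as a stand-in
    # for an objective whose values are only known approximately.
    return np.sum(x**2) + 0.01 * rng.normal()

res = minimize(noisy_objective, [2.0, -1.5], method="Nelder-Mead",
               options={"fatol": 0.05, "xatol": 1e-3})
```

Because Nelder-Mead compares only the ordering of function values across the simplex, small evaluation noise perturbs individual steps without derailing the overall descent.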

... We estimate β_0(u) for quantiles u ∈ {0.3, 0.5, 0.7} and sample sizes n ∈ {500, 1000}. We use the algorithm of Nelder and Mead (1965) for the minimization of the objective function in (4.2). The minimum is searched in the compact set B = [0, 1]^3, and the algorithm starts from a random point taken in B (the initial value follows a uniform distribution on B). ...
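A simplified version of this setup can be sketched with an ordinary quantile-regression check loss in place of the paper's objective (4.2); the coefficients, noise level, and data are hypothetical, and the random start is drawn uniformly from B = [0, 1]^3 as described:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n, u = 1000, 0.5
X = np.column_stack([np.ones(n), rng.uniform(size=(n, 2))])
beta0 = np.array([0.4, 0.7, 0.2])              # true coefficients, inside B = [0, 1]^3
y = X @ beta0 + rng.normal(0, 0.1, n)          # symmetric noise: median = mean

def check_loss(beta):
    # Quantile "check" objective for quantile u.
    r = y - X @ beta
    return np.mean(np.maximum(u * r, (u - 1) * r))

start = rng.uniform(0, 1, 3)                   # random initial point in B
fit = minimize(check_loss, start, method="Nelder-Mead",
               options={"xatol": 1e-6, "fatol": 1e-8, "maxiter": 5000})
```

Nelder-Mead is a natural choice here because the check loss is not differentiable at zero residuals.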

This paper studies a semiparametric quantile regression model with endogenous variables and random right censoring. The endogeneity issue is solved using instrumental variables. It is assumed that the structural quantile of the logarithm of the outcome variable is linear in the covariates and censoring is independent. The regressors and instruments can be either continuous or discrete. The specification generates a continuum of equations of which the quantile regression coefficients are a solution. Identification is obtained when this system of equations has a unique solution. Our estimation procedure solves an empirical analogue of the system of equations. We derive conditions under which the estimator is asymptotically normal and prove the validity of a bootstrap procedure for inference. The finite sample performance of the approach is evaluated through numerical simulations. The method is illustrated by an application to the national Job Training Partnership Act study.

... This variation of the template includes the sketch geometry, topology and also the sketch parameters which gave rise to the specific variation of the geometry with embedding . We then use the simplex algorithm [Nelder and Mead 1965] to fine-tune the parameters of the sketch so that the extracted profile has the highest IoU with the approximate shape decoded from ( ). This results in the CAD profiles shown in black in Figure 9. ...

Reverse Engineering a CAD shape from other representations is an important geometric processing step for many downstream applications. In this work, we introduce a novel neural network architecture to solve this challenging task and approximate a smoothed signed distance function with an editable, constrained, prismatic CAD model. During training, our method reconstructs the input geometry in the voxel space by decomposing the shape into a series of 2D profile images and 1D envelope functions. These can then be recombined in a differentiable way allowing a geometric loss function to be defined. During inference, we obtain the CAD data by first searching a database of 2D constrained sketches to find curves which approximate the profile images, then extrude them and use Boolean operations to build the final CAD model. Our method approximates the target shape more closely than other methods and outputs highly editable constrained parametric sketches which are compatible with existing CAD software.

... Considering that the SWMM is used for simulation in this study, in which the smallest simulation unit is the sub-catchment, Eq. (5) needs to be discretized by sub-catchments, as given by Eq. (6) below. Equation (6) is an implicit equation and is solved by the Nelder-Mead method in this study (Nelder and Mead 1965): where n is the number of sub-catchments in the focused urban catchment; (x_i, y_i) are the coordinates of the ith sub-catchment; and a_i is the area of the ith sub-catchment. ...
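Solving an implicit equation with Nelder-Mead amounts to minimizing the squared residual of the equation. The sketch below uses a hypothetical area-weighted implicit equation (not the paper's Eq. (6)) with made-up sub-catchment areas a_i and distances:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical sub-catchment areas and centroid distances from a storm centre.
a = np.array([2.0, 1.5, 3.0, 0.5])             # a_i: sub-catchment areas
d = np.array([0.3, 1.0, 1.8, 2.5])             # distances of the (x_i, y_i) centroids
target = 4.0

def implicit_residual(s):
    # Implicit equation f(s) = 0 recast as 1-D minimisation of f(s)^2.
    return (np.sum(a * np.exp(-s[0] * d)) - target) ** 2

sol = minimize(implicit_residual, [1.0], method="Nelder-Mead")
s_hat = sol.x[0]
```

At the minimiser the residual is driven to zero, so s_hat satisfies the implicit equation to within the simplex tolerance.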

An effective urban drainage system (UDS) is crucial for solving urban flooding problems, motivating plenty of studies to design, build and rehabilitate UDSs. However, the existing design and analysis methods usually assume a uniformly spatial distribution of rainfall intensity throughout an urban catchment, while there is an observably spatial variation of rainfall intensity (SVRI) in most practical systems, especially for short-duration storms and/or large-scale catchments. The assumption ignoring SVRI might fully or partially underestimate the runoffs locally and thus increase the partial flooding risks for the UDS designed under uniformly spatial rainfall distribution. To address this issue, this paper proposes an improved framework with two spatially variable rainfall models (SVRMs) to evaluate the impacts of SVRI on urban flooding. In this proposed framework, four aspects of improvements have been implemented: (i) both SVRMs are derived from the spatially uniform hyetographs to ensure the same total precipitation volume; (ii) both SVRMs utilize the density function of truncated two-dimensional Gaussian distribution to approximate the pattern of SVRI; (iii) different characteristics of SVRI are quantified in these two SVRMs respectively, and (iv) the Monte Carlo method is adopted to implement the uncertainty of rainfall intensity in SVRMs. Besides, two real-world UDSs of different configurations and scales are used to demonstrate the effectiveness of the developed framework. The application results show that the SVRI could significantly aggravate urban flooding risk including flooding duration and volume, and the impact patterns may vary with the characteristics of UDSs. The results and findings of this study also indicate the importance of taking SVRI into consideration in UDS design and flooding assessment practice.

... This stems from the fact that the linear quadratic (LQ) output-feedback control gain found provides good robustness properties for the closed-loop system. A classical way to compute the LQ output-feedback gain for linear time-invariant (LTI) systems involves solving a Riccati equation in terms of an optimization routine given by Nelder and Mead through a simplex algorithm [10]. However, the standard formulation of this technique does not cope with uncertain linear systems, and it requires the state initial condition to obtain feasible solutions, an assumption that may be unrealistic for practical applications. ...

This paper presents novel linear quadratic (LQ) output-feedback synthesis conditions for stability augmentation systems for flexible aircraft. The proposed conditions have been formulated using linear matrix inequalities (LMIs) to provide two different approaches: centralized and decentralized LQ with output-feedback framework. The main highlights of the present design conditions are the ability to obtain static LQ controllers for stability augmentation systems resorting only to weighting matrices as tuning parameters, as well as a straightforward condition for uncertain systems. In addition, such existence conditions may be a useful strategy due to the low complexity for practical controller implementations in flexible aircraft and demonstrators, such that improving aircraft handling qualities should be achieved. In order to demonstrate the effectiveness of the proposed strategies, a dynamic model of an unmanned remotely-piloted experimental airplane called X-HALE is adopted. Linear simulations are performed and the main procedure steps to obtain a realistic stability augmentation system are addressed.

... To obtain the best-fit parameters of the Keplerian model, we maximized the likelihood function using a truncated Newton (TNC) algorithm and a Nelder-Mead algorithm (Nelder & Mead 1965). For parameter distribution and uncertainty analysis, we performed Markov chain Monte Carlo (MCMC) sampling with the emcee package (Foreman-Mackey, Hogg, Lang, & Goodman 2013) from the maximum likelihood results. ...
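A two-stage pipeline of this kind (gradient-based TNC followed by a derivative-free Nelder-Mead polish) can be sketched on a standard test function; the Rosenbrock function is a stand-in for the negative log-likelihood of the Keplerian model:

```python
import numpy as np
from scipy.optimize import minimize, rosen

x0 = np.array([-1.2, 1.0])
# Stage 1: truncated Newton (TNC) for a fast gradient-based approach.
stage1 = minimize(rosen, x0, method="TNC")
# Stage 2: derivative-free Nelder-Mead refinement from the TNC solution.
stage2 = minimize(rosen, stage1.x, method="Nelder-Mead",
                  options={"xatol": 1e-8, "fatol": 1e-10, "maxiter": 2000})
```

The TNC stage supplies a good starting simplex, and the Nelder-Mead stage refines the optimum without requiring gradients of the likelihood.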

We report the discovery of a triple-giant-planet system around an evolved star HD 184010 (HR 7421, HIP 96016). This discovery is based on observations from Okayama Planet Search Program, a precise radial velocity survey, undertaken at Okayama Astrophysical Observatory between 2004 April and 2021 June. The star is K0 type and located at beginning of the red-giant branch. It has a mass of $1.35_{-0.21}^{+0.19} M_{\odot}$, a radius of $4.86_{-0.49}^{+0.55} R_{\odot}$, and a surface gravity $\log g$ of $3.18_{-0.07}^{+0.08}$. The planetary system is composed of three giant planets in a compact configuration: The planets have minimum masses of $M_{\rm{b}}\sin i = 0.31_{-0.04}^{+0.03} M_{\rm{J}}$, $M_{\rm{c}}\sin i = 0.30_{-0.05}^{+0.04} M_{\rm{J}}$, and $M_{\rm{d}}\sin i = 0.45_{-0.06}^{+0.04} M_{\rm{J}}$, and orbital periods of $P_{\rm{b}}=286.6_{-0.7}^{+2.4}\ \rm{d}$, $P_{\rm{c}}=484.3_{-3.5}^{+5.5}\ \rm{d}$, and $P_{\rm{d}}=836.4_{-8.4}^{+8.4}\ \rm{d}$, respectively, which are derived from a triple Keplerian orbital fit to three sets of radial velocity data. The ratio of orbital periods are close to $P_{\rm{d}}:P_{\rm{c}}:P_{\rm{b}} \sim 21:12:7$, which means the period ratios between neighboring planets are both lower than $2:1$. The dynamical stability analysis reveals that the planets should have near-circular orbits. The system could remain stable over 1 Gyr, initialized from co-planar orbits, low eccentricities ($e=0.05$), and planet masses equal to the minimum mass derived from the best-fit circular orbit fitting. Besides, the planets are not likely in mean motion resonance. HD 184010 system is unique: it is the first system discovered to have a highly evolved star ($\log g < 3.5$ cgs) and more than two giant planets all with intermediate orbital periods ($10^2\ \rm{d} < P < 10^3\ \rm{d}$).

... We use a selection of different gradient-based as well as gradient-free optimizers for our studies. This includes the optimizers Nelder-Mead [34,22] and SPSA [35,22], which are frequently used for VQE problems, e.g. in the works by [6,36,19]. Based on the discussions in section 2, we further take into account NFT [23] and the Bayesian optimizer [37,38], as these are expected to be highly resilient to noise. ...

Quantum computers are expected to be highly beneficial for chemistry simulations, promising significant improvements in accuracy and speed. The most prominent algorithm for chemistry simulations on NISQ devices is the Variational Quantum Eigensolver (VQE). It is a hybrid quantum-classical algorithm which calculates the ground state energy of a Hamiltonian based on parametrized quantum circuits, while a classical optimizer is used to find optimal parameter values. However, quantum hardware is affected by noise, and it needs to be understood to which extent it can degrade the performance of the VQE algorithm. In this paper, we study the impact of noise on the example of the hydrogen molecule. First, we compare the VQE performance for a set of various optimizers, from which we find NFT to be the most suitable one. Next, we quantify the effect of different noise sources by systematically increasing their strength. The noise intensity is varied around values common to superconducting devices of IBM Q, and curve fitting is used to model the relationship between the obtained energy values and the noise magnitude. Since the amount of noise in a circuit highly depends on its architecture, we perform our studies for different ansatzes, including both hardware-efficient and chemistry-inspired ones.

... For verification of our custom-made optimization tool, we compared it with general-purpose methods of Mathematica: Levenberg-Marquardt method, which is common in nonlinear regression (Dennis & Schnabel, 1963), an interior point method (Potra & Wright, 2000), a differential evolution algorithm (Price et al., 2005), and the Nelder & Mead (1965) downhill simplex method. To ensure positive parameter values, in (1) we replaced parameter p by exp(pin+p1) for some initial value (e.g., pin = 0) and optimized for p1; the same for q and k0. ...
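The exp-reparameterization for positivity described here can be sketched with a hypothetical decay model (not the Solow-Swan equation), using scipy's Nelder-Mead in place of Mathematica's downhill simplex; pin = 0 as in the example in the text:

```python
import numpy as np
from scipy.optimize import minimize

# Data from a hypothetical decay model y = p * exp(-q * t) with p, q > 0.
t = np.linspace(0, 5, 50)
rng = np.random.default_rng(11)
y = 2.0 * np.exp(-0.8 * t) + rng.normal(0, 0.01, t.size)

def sse(u):
    # Optimise over unconstrained u = (p1, q1); p = exp(pin + p1) with
    # pin = 0 guarantees positivity without explicit constraints.
    p = np.exp(0.0 + u[0])
    q = np.exp(0.0 + u[1])
    return np.sum((y - p * np.exp(-q * t)) ** 2)

fit = minimize(sse, [0.0, 0.0], method="Nelder-Mead")
p_hat, q_hat = np.exp(fit.x)
```

Mapping back through the exponential recovers the positive parameter estimates.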

Background
The Solow–Swan model describes the long-term growth of the capital to labor ratio by the fundamental differential equation of Solow–Swan theory. In conventional approaches, this equation was fitted to data using additional information, such as the rates of population growth, capital depreciation, or saving. However, this was not the best possible fit.
Objectives
Using the method of least squares, what is the best possible fit of the fundamental equation to the time-series of the capital to labor ratios? Are the best-fit parameters economically sound?
Method
For the data, we used the Penn-World Table in its 2021 version and compared six countries and three definitions of the capital to labor ratio. For optimization, we used a custom-made variant of the method of simulated annealing. We also compared different optimization methods and calibrations.
Results
When comparing different methods of optimization, our custom-made tool provided reliable parameter estimates. In terms of R-squared they improved upon the parameter estimates of the conventional approach. Except for the USA, the best-fit values of the exponent were implausible, as they suggested too large an elasticity of output. However, using a different calibration resulted in more plausible values of the best-fit exponent also for France and Pakistan, but not for Argentina and Japan.
Conclusion
Our results have shown a discrepancy between the best-fit parameters obtained from optimization and the parameter values that are deemed plausible in economy. We propose a research program to resolve this issue by investigating if suitable calibrations may generate economically plausible best-fit parameter values.

... The range parameters α_k of the gaussians are often chosen stochastically using random [6] or quasirandom [10] sequences. However, for the current one-dimensional problem we simply perform global optimization in the space of α_k using the Nelder-Mead downhill simplex method [9,11]. ...

We apply the nuclear model with explicit mesons to photoproduction of neutral pions off protons at the threshold. In this model the nucleons do not interact with each other via a potential but rather emit and absorb mesons that are treated explicitly on equal footing with the nucleons. We calculate the total cross section of the reaction for energies close to threshold and compare the calculations with available experimental data. We show that the model is able to reproduce the experimental data and determine the range of the parameters where the model is compatible with the experiment.

... We fitted the resulting food web's degree histogram, p(k), with power-law P_PL(x) and gaussian P_G(x) curves. The best-fitting parameters were calculated applying the Nelder-Mead method (Nelder & Mead, 1965), which performs unconstrained nonlinear minimisation of the sum of squared residuals with respect to its parameters. The correlation coefficient between the sampled dataset p_j and the fitted dataset P_j is defined as (Weisstein, 2021): ...
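The power-law branch of this fitting procedure can be sketched with a synthetic degree histogram (the data below are generated from a known power law, not taken from the empirical food webs):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical degree histogram p(k) generated from a known power law.
k = np.arange(1, 21)
p_k = 0.5 * k ** -2.0

def ssr(params):
    # Sum of squared residuals between the histogram and c * k^(-alpha).
    c, alpha = params
    return np.sum((p_k - c * k ** -alpha) ** 2)

fit = minimize(ssr, [1.0, 1.5], method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-12, "maxiter": 2000})
c_hat, alpha_hat = fit.x

# Correlation coefficient between the sampled and fitted values.
P = c_hat * k ** -alpha_hat
r = np.corrcoef(p_k, P)[0, 1]
```

On noise-free data the simplex recovers the generating exponent, and the correlation coefficient between p_j and P_j approaches one.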

Networks describe nodes connected by links, with numbers of links per node, the degree, forming a range of distributions including random and scale‐free. How network topologies emerge in natural systems still puzzles scientists. Based on previous theoretical simulations, we predict that scale‐free food webs are favourably selected by random disturbances while random food webs are selected by targeted disturbances. We assume that lower human pressures are more likely associated with random disturbances, whereas higher pressures are associated with targeted ones. We examine these predictions using 351 empirical food webs, generally confirming our predictions. Should the topology of food webs respond to changes in the magnitude of disturbances in a predictable fashion, consistently across ecosystems and scales of organisation, it would provide a baseline expectation to understand and predict the consequences of human pressures on ecosystem dynamics.

... To solve the minimization problem we use the Nelder-Mead algorithm [24]. ...

The localization task in sensor networks is particularly critical whenever the sensor measurements are position-related, as in the case of thermal and electromagnetic quantities. The deployment of a sensor network often requires the usage of low-cost devices able to achieve acceptable measurement accuracy while retrieving fast and accurate positioning information. In such networks, the localization task is generally performed by a special node coordinating the network. Nevertheless, its computing power is often limited. To this aim, in this paper we compare two different positioning techniques (least-squares minimization and grid search), to be applied in an Ultra-Wide-Band positioning scheme, from the point of view of accuracy and of the computing time required to accomplish the task. They differ in working principle, required a priori information, localization resolution and time to completion. According to the available resources, the adoption of one of them may be preferable to the other. The obtained results prove the effectiveness of both methods and rank them by application purpose. The paper is intended to give designers an extensive analysis of the pros and cons of adopting a completely blind positioning technique, namely least-squares minimization, versus a more informed and constrained system, such as grid search.

... The calculated convergence rates for the two Broyden method variants [7], for Powell's method [8], for the adaptive coordinate descent method [9] and for the Nelder-Mead simplex method [10] were compared with the calculated values for the T-secant method in Table 7. Rows 1-5 are data from the referenced papers, rows 6-8 are T-secant results with the referenced initial trials, and rows 9-15 are calculated data for N > 2. Results show that the mean convergence rate L (Equation 11.1) for N = 2 is much higher for the T-secant method (≃ 5.5-6.9) than for the other listed methods (≃ 0.1-0.6); however, it is obvious that the convergence rate values decrease rapidly with increasing N (more unknowns need more function evaluations). ...

A new secant-method based numerical procedure (T-secant method) with super-quadratic convergence (convergence rate 2.618) has been developed for least-squares solving of simultaneous multi-variable nonlinear equations. The basic idea of the procedure is that the original secant equations are modified by a non-uniform scaling transformation “T”, an additional new estimate is determined in each iteration, and a completely new set of trial solutions is constructed for the next iteration. The suggested method seems to eliminate the unstable behavior of the traditional method, and its efficiency is an order of magnitude higher than that of Broyden's methods in the numerical example. The vector-based interpretation and its geometrical representation help to explain the basic operations. The performance of the new method is demonstrated by the results of numerical tests with a Rosenbrock-type test function with up to 1000 unknown variables.

... Minimization of the OFV was then performed with EC values forming the combination scenarios as design variables, using the Nelder-Mead [12] algorithm for pre-minimization and the L-BFGS-B [13] algorithm for the final minimization. ...
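This pre-minimization-then-polish pattern can be sketched as follows; the quadratic OFV and the unit-box bounds below are hypothetical, standing in for the actual objective over EC design variables:

```python
import numpy as np
from scipy.optimize import minimize

def ofv(x):
    # Hypothetical objective function value (OFV) over two EC-like design variables.
    return (x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2

x0 = np.array([0.5, 0.5])
# Derivative-free Nelder-Mead pre-minimization to approach the optimum.
pre = minimize(ofv, x0, method="Nelder-Mead")
# Gradient-based L-BFGS-B final minimization, with box bounds on the variables.
final = minimize(ofv, pre.x, method="L-BFGS-B",
                 bounds=[(0.0, 1.0), (0.0, 1.0)])
```

The derivative-free first stage is forgiving of a rough starting point, while the bounded quasi-Newton stage sharpens the solution efficiently.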

Purpose
Quantification of pharmacodynamic interactions is key in combination therapies, yet conventional checkerboard experiments with up to 10 by 10 combinations are labor-intensive. Therefore, this study provides optimized experimental rhombic checkerboard designs to enable an efficient interaction screening with significantly reduced experimental workload.
Methods
Based on the general pharmacodynamic interaction (GPDI) model implemented in Bliss Independence, a novel rhombic ‘dynamic’ checkerboard design with quantification of bacteria instead of turbidity as endpoint was developed. In stochastic simulations and estimations (SSE), the precision and accuracy of interaction parameter estimations and classification rates of conventional reference designs and the newly proposed rhombic designs based on effective concentrations (EC) were compared.
Results
Although a conventional rich design with 20 times as many combination scenarios provided estimates of interaction parameters with higher accuracy, precision and classification rates, the optimized rhombic designs with one natural growth scenario, three monotherapy scenarios per combination partner and only four combination scenarios were still superior to conventional reduced designs with twice as many combination scenarios. Additionally, the rhombic designs were able to identify whether an interaction occurred as a shift in maximum effect or EC50 with > 98% accuracy. Overall, effective concentration-based designs were found to be superior to traditional standard concentrations, but were more challenged by strong interaction sizes exceeding their adaptive concentration ranges.
Conclusion
The rhombic designs proposed in this study enable a reduction of resources and labor and can be a tool to streamline higher throughput in drug interaction screening.

... In the current study, the optimum constants (the coefficients of equation (2)), leading to the predicted values being close to the experimental ones, are optimized through the Nelder-Mead simplex (NMS) algorithm [52] with the objective function, F_O, as follows: ...

Keywords: hydrate formation temperature (HFT); wide range of natural gas mixtures; unified correlation; group method of data handling (GMDH); outlier detection.
There are numerous correlations and thermodynamic models for predicting the natural gas hydrate formation condition, but there is still no simple, unifying general model that addresses a broad range of gas mixtures. This study aimed to develop a user-friendly universal correlation based on the hybrid group method of data handling (GMDH) for prediction of the hydrate formation temperature of a wide range of natural gas mixtures, including sweet and sour gas. To establish the hybrid GMDH, a total of 343 experimental data points were collected from the open literature. The selection of input variables was based on the hydrate structure formed by each gas species. The modeling resulted in a strong algorithm, with a squared correlation coefficient (R2) of 0.9721 and a root mean square error (RMSE) of 1.2152. In comparison with conventional correlations, this model showed not only outstanding statistical parameters but also clear superiority over the others. In particular, the results were encouraging for sour gases rich in H2S, to the extent that the model outperforms all available thermodynamic models and correlations. The leverage statistical approach was applied to the datasets to detect defective and doubtful experimental data and to assess the suitability of the model. According to this analysis, nearly all the data points were in the proper range of the model and the proposed hybrid GMDH model was statistically reliable.

... This updating continues until convergence is reached or the maximum number of iterations is exceeded. This algorithm requires a function evaluation at each step, and it is typically not as fast as derivative-based algorithms [29,30]. ...

Experimental evidence in both human and animal studies demonstrated that deep brain stimulation (DBS) can induce short-term synaptic plasticity (STP) in the stimulated nucleus. Given that DBS-induced STP may be connected to the therapeutic effects of DBS, we sought to develop a computational predictive model that infers the dynamics of STP in response to DBS at different frequencies. Existing methods for estimating STP, either model-based or model-free, require access to pre-synaptic spiking activity. However, in the context of DBS, extracellular stimulation (e.g. DBS) can be used to elicit pre-synaptic activations directly. We present a model-based approach that integrates multiple individual frequencies of DBS-like electrical stimulation as pre-synaptic spikes and infers parameters of the Tsodyks-Markram (TM) model from post-synaptic currents of the stimulated nucleus. By distinguishing between the steady-state and transient responses of the TM model, we develop a novel dual optimization algorithm that infers the model parameters in two steps. First, the TM model parameters are calculated by integrating multiple frequencies of stimulation to estimate the steady-state response of post-synaptic current through a closed-form analytical solution. The results of this step are utilized as the initial values for the second step, in which a non-derivative optimization algorithm is used to track the transient response of the post-synaptic potential across different individual frequencies of stimulation. Moreover, to confirm the applicability of the method, we applied our algorithm, as a proof of concept, to empirical data recorded from acute rodent brain slices of the subthalamic nucleus (STN) during DBS-like stimulation to infer dynamics of STP for inhibitory synaptic inputs.

... On the other hand, it is not differentiable in general. This means that in order to maximise the profit we need to use derivative-free optimisation algorithms, such as Nelder-Mead [26,29]. ...
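A minimal illustration of why derivative-free search helps in this setting: the sample-average newsvendor cost below is piecewise linear (it has a kink at every demand realisation, so it is not differentiable), yet Nelder-Mead minimises it directly from function values. The unit costs and the simulated demand distribution are assumptions for the sketch, not values from the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
demand = rng.normal(loc=100.0, scale=20.0, size=5000)  # simulated demand scenarios

# Assumed unit costs: c_o per unsold unit (overage), c_u per lost sale (underage)
c_o, c_u = 1.0, 3.0

def expected_cost(q):
    """Sample-average newsvendor cost: piecewise linear in q, hence
    non-differentiable at every demand realisation."""
    over = np.maximum(q[0] - demand, 0.0)
    under = np.maximum(demand - q[0], 0.0)
    return float(np.mean(c_o * over + c_u * under))

# Nelder-Mead needs only function values, never gradients
res = minimize(expected_cost, x0=[80.0], method="Nelder-Mead")
q_star = res.x[0]
# The minimiser sits near the critical fractile c_u / (c_u + c_o) = 0.75,
# i.e. roughly the 75th percentile of the simulated demand (~113 here)
```

Gradient-based solvers can stall or misbehave on the kinks of such sample-average objectives, whereas the simplex search only compares cost values at candidate order quantities.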

Newsvendor problems are an important and much-studied topic in stochastic inventory control. One strand of the literature on newsvendor problems is concerned with the fact that practitioners often make judgemental adjustments to the theoretically "optimal" order quantities. Although judgemental adjustment is sometimes beneficial, two specific kinds of adjustment are normally considered to be particularly naive: demand chasing and pull-to-centre. We discuss how these adjustments work in practice and what they imply in a variety of settings. We argue that even such naive adjustments can be useful under certain conditions. This is confirmed by experiments on simulated data. Finally, we propose a heuristic algorithm for "tuning" the adjustment parameters in practice.

... Existing black-box optimization techniques that can solve F1 can be grouped into two classes: (a) sampling-based and (b) surrogate-model-based, each of which has advantages and disadvantages, as discussed in the previously cited review articles. Examples of local sampling-based methods include the Nelder-Mead simplex algorithm (Nelder and Mead 1965), pattern search (Torczon 1997; Lewis and Torczon 1999), and mesh adaptive direct search (Audet and Dennis 2006). Global sampling-based methods progressively partition the search space, aiming to find the global optimum by searching the entire space. ...

Black-box surrogate-based optimization has received increasing attention due to the growing interest in solving optimization problems with embedded simulation data. The main challenge in surrogate-based optimization is the lack of consistently convergent behavior, due to the variability introduced by initialization, sampling, surrogate model selection, and training procedures. In this work, we build on our previously proposed data-driven branch-and-bound algorithm, which is driven by adaptive sampling and the bounding of not-entirely-accurate surrogate models. This work incorporates Kriging and support vector regression surrogates, for which different bounding strategies are proposed. A variety of data-driven branching heuristics are also proposed and compared. The key finding of this work is that by bounding fitted, approximate surrogate models, one can employ a branch-and-bound structure that converges to the same optimum despite different initialization of samples and selection and training of a surrogate model. The performance of the algorithm is tested using box-constrained nonlinear benchmark problems with up to ten variables.

... Along this line, the Nelder-Mead (NM) meta-heuristic is leveraged here to solve the objective function in a few steps, since it provides reduced computational complexity and run-time, as first proposed by Nelder and Mead [26]. The NM algorithm first constructs a simplex with Δ = Γ + 1 vertices, where Γ denotes the dimension of the search space. Further, a vertex represents a single fog node in Layer II. ...
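The simplex construction the excerpt describes (Γ + 1 vertices in a Γ-dimensional search space) can be sketched in pure Python. This is a textbook Nelder-Mead with the standard reflection, expansion, contraction, and shrink moves, not the paper's fog-node placement scheme itself.

```python
import numpy as np

def nelder_mead(f, x0, step=0.5, max_iter=500, tol=1e-8):
    """Textbook Nelder-Mead: maintain a simplex of dim + 1 vertices and
    replace its worst vertex by reflection, expansion, or contraction,
    shrinking the whole simplex when no move improves."""
    dim = len(x0)
    # Initial simplex: x0 plus one vertex perturbed along each dimension,
    # giving the dim + 1 vertices the excerpt refers to
    simplex = [np.asarray(x0, dtype=float)]
    for i in range(dim):
        vertex = simplex[0].copy()
        vertex[i] += step
        simplex.append(vertex)
    for _ in range(max_iter):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        if abs(f(worst) - f(best)) < tol:
            break
        centroid = np.mean(simplex[:-1], axis=0)  # centroid excluding worst
        reflected = centroid + (centroid - worst)
        if f(reflected) < f(best):
            expanded = centroid + 2.0 * (centroid - worst)
            simplex[-1] = expanded if f(expanded) < f(reflected) else reflected
        elif f(reflected) < f(simplex[-2]):
            simplex[-1] = reflected
        else:
            contracted = centroid + 0.5 * (worst - centroid)
            if f(contracted) < f(worst):
                simplex[-1] = contracted
            else:
                # Shrink every vertex halfway toward the best one
                simplex = [best + 0.5 * (v - best) for v in simplex]
    return min(simplex, key=f)

# Minimise a simple quadratic with minimum at (3, -1)
x_min = nelder_mead(lambda x: (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2, [0.0, 0.0])
```

With a 2-dimensional search space the simplex is a triangle (3 vertices); in the paper's setting each vertex would instead encode a candidate fog node.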

Fog computing allows for local data processing at the edge of the network. This allows for reduced link delays between mobile users and access points. Here, fog nodes are collocated with radio remote heads (RRHs) to provide local processing capabilities, that is, creating a fog radio access network (F‐RAN) that can support ultralow latency for cellular networks that operate on millimeter wave (mmWave) bands. Here, mobile stations (MS) demand various network functions (NFs) that correspond to different service requests. Hence, it is critical to study function popularity and allow content caching at the F‐RAN. Given the limited resources at the fog nodes, it is important to efficiently manage the resources to improve network operations and enhance the capacity at reduced delays and cost. Therefore, caching in mmWave F‐RAN requires node allocation that can accommodate the highest number of cached functions. Hence, this paper proposes a novel node placement scheme that leverages the Nelder–Mead meta‐heuristic. Results show that the proposed scheme yields reduced delay, cost, and power and energy consumption.

... where k_M(ω_n) = ω_n/v_M(ω_n) − jα_M(ω_n) and k_M(ω_n, θ_M) are the measured and modeled complex wave numbers at the n-th discrete frequency, respectively, while N is the total number of discrete frequencies of the useful bandwidth on which the optimization is performed (recall the gray area in Fig. 4). The minimization was performed using the unconstrained Simplex algorithm (Nelder and Mead, 1965), which was implemented in Python 3.8 (using ...
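A hedged sketch of such a wave-number fit: synthetic "measured" complex wave numbers are generated from a power-law attenuation model with a constant phase velocity, and SciPy's `fmin` (an unconstrained Nelder-Mead simplex) recovers the parameters by minimizing the summed squared residuals over the discrete frequencies. All numerical values here are illustrative, not the paper's, and the paper's dispersion model is more elaborate.

```python
import numpy as np
from scipy.optimize import fmin  # scipy's fmin is the Nelder-Mead simplex

# Synthetic "measured" data; the model form mirrors a power-law
# attenuation fit, but every numerical value below is illustrative.
f_mhz = np.linspace(1.0, 5.0, 50)             # N discrete frequencies (MHz)
omega = 2.0 * np.pi * f_mhz * 1e6             # angular frequencies (rad/s)
v_true, a0_true, n_true = 2500.0, 10.0, 1.2   # m/s, Np/m/MHz^n, exponent
k_meas = omega / v_true - 1j * a0_true * f_mhz**n_true

def cost(theta):
    """Summed squared moduli of the complex wave-number residuals over
    the useful bandwidth, in the spirit of the excerpt's objective."""
    v, a0, n = theta
    k_mod = omega / v - 1j * a0 * f_mhz**n
    return float(np.sum(np.abs(k_meas - k_mod) ** 2))

theta_opt = fmin(cost, x0=[2400.0, 8.0, 1.0],
                 xtol=1e-8, ftol=1e-8, maxiter=5000, maxfun=5000, disp=False)
v_opt, a0_opt, n_opt = theta_opt
```

Because the simplex method is derivative-free, it tolerates the complex-modulus objective without requiring analytic gradients of the wave-number model.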

Photopolymer-based additive manufacturing has received increasing attention in the field of acoustics over the past decade, specifically towards the design of tissue-mimicking phantoms and passive components for ultrasound imaging and therapy. While these applications rely on an accurate characterization of the longitudinal bulk properties of the materials, emerging applications involving periodic micro-architectured media also require the knowledge of the transverse bulk properties to achieve the desired acoustic behavior. However, a robust knowledge of these properties is still lacking for such attenuating materials. Here, we report on the longitudinal and transverse bulk properties, i.e., frequency-dependent phase velocities and attenuations, of photopolymer materials, which were characterized in the MHz regime using a double through-transmission method in oblique incidence. Samples were fabricated using two different printing technologies (stereolithography and polyjet) to assess the impact of two important factors of the manufacturing process: curing and material mixing. Overall, the experimentally observed dispersion and attenuation could be satisfactorily modeled using a power law attenuation to identify a reduced number of intrinsic ultrasound parameters. As a result, these parameters, and especially those reflecting transverse bulk properties, were shown to be very sensitive to slight variations of the manufacturing process.

Global river systems are experiencing rapid changes in sediment transport under growing anthropogenic and climatic stresses. However, the response of sediment discharge to the coupled influence of anthropogenic and natural factors and the associated impacts on the fluvial geomorphology in the Yangtze and Mekong rivers are not comprehensively assessed. Here, we recalibrated a seamless retrieval algorithm of the total suspended sediment (TSS) concentrations using in situ data and concurrent satellite data sets to analyze spatiotemporal patterns of the TSS concentrations in the lower Yangtze and Mekong rivers. Combined with soil erosion rates estimated by the Revised Universal Soil Loss Equation for the past 20 years, we examined the contributions of different factors to TSS trends. The results show that TSS concentrations in the Yangtze River decreased from 0.47 g L⁻¹ in 2000 to 0.23 g L⁻¹ in 2018 due to the construction of the Three Gorges Dam (TGD), especially in the Jingjiang reach, with a declining magnitude of 0.3 g L⁻¹ (∼56%) since the TGD began operating. The Mekong River experienced increasing TSS concentration trends upstream and decreasing trends downstream from 2000 to 2018, possibly attributed to increased upstream soil erosion and decreased downstream water discharge. Declining TSS concentrations in both rivers have driven varying degrees of river channel erosion over the past two decades. This study investigated long‐term changes in the TSS concentrations and soil erosion in the Yangtze and Mekong rivers, and the results provide baseline information for the sustainable development of river sediment delivery.

Recent studies suggest that transcranial electrical stimulation (tES) can be performed during functional magnetic resonance imaging (fMRI). The novel approach of using concurrent tES‐fMRI to modulate and measure targeted brain activity/connectivity may provide unique insights into the causal interactions between brain neural responses and psychiatric/neurologic signs and symptoms, and importantly, guide the development of new treatments. However, the tES stimulation parameters that optimally influence the underlying brain activity may vary with respect to phase difference, frequency, intensity, and electrode montage across individuals. Here, we propose a protocol for closed‐loop tES‐fMRI to optimize the frequency and phase difference of transcranial alternating current stimulation (tACS) for two nodes (frontal and parietal regions) in individual participants. We carefully considered the challenges of online optimization of tES parameters with concurrent fMRI, specifically regarding safety, artifacts in fMRI image quality, online evaluation of the tES effect, and the parameter optimization method, and we designed the protocol to run an effective study to enhance frontoparietal connectivity and working memory performance with the optimized tACS using closed‐loop tES‐fMRI. We provide technical details of the protocol, including electrode types, electrolytes, electrode montages, concurrent tES‐fMRI hardware, online fMRI processing pipelines, and the optimization algorithm. We confirmed that the implementation of this protocol worked successfully with a pilot experiment.

This chapter treats the solution of the neuroelectromagnetic inverse problem (NIP), that is, the reconstruction of the primary current density underlying EEG/MEG measurements, given a particular source model (► Chap. 4) and forward model (► Chap. 5). We will present a theoretical framework of the problem, centered on Bayes’ theorem. Different approaches to the NIP will be classified according to their underlying source model as well as the applied priors and described in greater detail. Specifically, you will learn about focal source reconstruction (i.e., dipole fit methods), distributed source reconstruction (i.e., minimum norm methods), spatial filters and scanning methods, and dynamic source reconstruction.
