## No full-text available

To read the full-text of this research, you can request a copy directly from the authors.


... In this direction, pioneering work in synthetic biology inserted known aptamer domains into the 5' untranslated regions (UTRs) of messenger RNAs (mRNAs) to sense small molecules (10), and also exploited riboregulation in combination with small-molecule-responsive promoters to control gene networks and metabolic pathways (8,9). More recently, important steps towards RNA-based sensing have been carried out by engineering aptazymes in the 5' or 3' UTRs to sense both small molecules (11,12) and small RNAs (sRNAs) (13). ...

... We developed an optimization algorithm to design regazymes given the sequences of an aptazyme and a riboregulator. On the one hand, the aptazyme responds to its ligand to cleave the RNA sequence at a given point 10. On the other hand, the riboregulator is able to activate protein expression by inducing a conformational change in the 5' UTR of the mRNA. ...

... Starting from random sequences for the prefix and suffix, the algorithm implements a heuristic optimization based on Monte Carlo Simulated Annealing (Supplementary Figure 6b) 10. At each step, random mutations consisting of replacements, additions, or deletions are applied to evolve the sequence, which is then selected with an objective function (ΔG score). ...
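The optimization loop described in this snippet can be sketched in a few lines of Python. This is a minimal illustration only: the mutation set is generic, and `score` is a placeholder for the authors' actual ΔG-based objective, which is not reproduced here.

```python
import math
import random

BASES = "ACGU"

def mutate(seq):
    """Apply one random replacement, addition, or deletion."""
    i = random.randrange(len(seq))
    op = random.choice(("replace", "add", "delete"))
    if op == "replace":
        return seq[:i] + random.choice(BASES) + seq[i + 1:]
    if op == "add":
        return seq[:i] + random.choice(BASES) + seq[i:]
    return seq[:i] + seq[i + 1:] if len(seq) > 1 else seq

def anneal(seq, score, steps=10000, t0=1.0, alpha=0.999):
    """Minimize `score` over sequences by Monte Carlo Simulated Annealing
    with a geometric cooling schedule."""
    best, best_s = seq, score(seq)
    cur, cur_s, t = seq, best_s, t0
    for _ in range(steps):
        cand = mutate(cur)
        cand_s = score(cand)
        # Metropolis rule: always accept improvements,
        # accept worse sequences with probability exp(-Δ/T)
        if cand_s <= cur_s or random.random() < math.exp((cur_s - cand_s) / t):
            cur, cur_s = cand, cand_s
            if cur_s < best_s:
                best, best_s = cur, cur_s
        t *= alpha
    return best, best_s
```

For instance, `anneal("ACGU", lambda q: abs(len(q) - 8))` evolves a sequence toward a target length of 8; a real objective would instead score predicted secondary structure.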

Organisms have different circuitries that allow the conversion of signal-molecule levels into changes in gene expression. An important challenge in synthetic biology involves the de novo design of RNA modules enabling dynamic signal processing in live cells. This requires a scalable methodology for sensing, transmission, and actuation, which could be assembled into larger signaling networks. Here, we present a biochemical strategy to design RNA-mediated signal transduction cascades able to sense small molecules and small RNAs. We design switchable functional RNA domains by using strand-displacement techniques. We experimentally characterize the molecular mechanism underlying our synthetic RNA signaling cascades, show the ability to regulate gene expression with transduced RNA signals, and describe the signal processing response of our systems to periodic forcing in single live cells. The engineered systems integrate RNA-RNA interactions with available ribozyme and aptamer elements, providing new ways to engineer arbitrarily complex gene circuits.

... Evolution algorithms (Ev-based) are: Genetic algorithm (GA) [11], Differential evolution (DE) [12], Biogeography-based optimizer (BBO) [13], and Genetic programming (GP) [14]. Algorithms based on physical characteristics (Phy-based) are: Simulated annealing (SA) [15], Gravitational search algorithm (GSA) [16], Harmony search (HS) [17], Sine cosine algorithm (SCA) [18], Equilibrium optimizer (EO) [19], and Gradient-based optimizer (GBO) [20]. Algorithms based on human social behavior (Hu-based) are: Teaching-learning-based optimization (TLBO) [21], Tabu search (TS) [22], and socio-evolution-based methods. According to the biological characteristics of natural animals (such as oviparous animals, mammals, and insects) and plants, swarm intelligence optimization algorithms can be divided into three categories. ...

...

```
    Update the position p_b of the best individual agent;
8   end
9   if g_b > F_new then
10      Update the position g_b of the best agents;
11  end
12  Obtain the new velocity of the new agent using Equation (13);
13  Update the position of the current search agent using Equation (14);
14  Set a random number r in (0, 1);
15  if r < SP then
16      Move towards the best position by Equation (15);
        Update the parameter c using Equation (16);
```

...

Complex optimization (CO) problems have been solved using swarm intelligence (SI) methods. One of these CO problems is the Wireless Sensor Network (WSN) coverage optimization problem, which plays an important role in the Internet of Things (IoT). A novel hybrid algorithm, named the hybrid particle swarm butterfly algorithm (HPSBA), is proposed for solving this problem by combining the strengths of particle swarm optimization (PSO) and the butterfly optimization algorithm (BOA). Notably, the individual scent intensity should be non-negative, which the basic BOA does not take into account; the proposed HPSBA therefore computes it with an absolute value. Moreover, the performance of HPSBA is comprehensively compared with the fundamental BOA, numerous potential BOA variants, and tried-and-true algorithms on twenty-six commonly used benchmark functions. The results show that HPSBA has competitive overall performance. Finally, HPSBA is compared to PSO, BOA, and MBOA in solving the node coverage optimization problem in WSN. The experimental results demonstrate that HPSBA-optimized coverage achieves a higher coverage rate, which effectively reduces node redundancy and extends WSN survival time.

... However, higher computational cost, resulting from evaluating many more objective functions, is a substantial disadvantage this category suffers from compared to individual-based algorithms. Some of the popular individual-based algorithms include Hill climbing [2], Iterated local search [3], Simulated Annealing (SA) [4], and Tabu Search (TS) [5,6]. ...

... These potential merits are quantitatively evaluated by conducting numerous comparisons between the GMO and a vast range of other newly proposed and widely used meta-heuristic algorithms. A schematic of the optimization process in the GMO is illustrated in Fig. 2, in which the average fitness of the four separate regions differs. ...

This paper introduces a new meta-heuristic technique, named the Geometric Mean Optimizer (GMO), that emulates the unique properties of the geometric mean operator in mathematics. This operator can simultaneously evaluate the fitness and diversity of the search agents in the search space. In GMO, the geometric mean of the scaled objective values of a certain agent's opposites is assigned to that agent as its weight, representing its overall eligibility to guide the other agents in the search process when solving an optimization problem. Furthermore, the GMO has no parameters to tune, which makes its results highly reliable. The competence of the GMO in solving optimization problems is verified via implementation on 52 standard benchmark test problems, including 23 classical test functions, 29 CEC2017 test functions, as well as nine constrained engineering problems. The results presented by the GMO are then compared with those offered by several newly proposed and popular meta-heuristic algorithms. The results demonstrate that the GMO significantly outperforms its competitors on a vast range of the problems.

... To solve the performance problems incurred by ILP1, Ref. [8] introduced a heuristic algorithm capable of generating quality solutions at low cost, referred to as Algorithm 1 (A1). This algorithm is based on the simulated annealing technique introduced by [12]. It starts from an initial solution and explores the solution space by executing two different types of moves. ...

... The objective function, (12), is similar to the objective function of the original model. The only difference is that in this case the group fragmentation among the different strips should be minimized. ...

This paper deals with new methods capable of solving the optimization problem concerning the allocation of DNA samples in plates in order to carry out the DNA sequencing with the Sanger technique. These methods make it possible to work with independent subproblems of lower complexity, obtaining solutions of good quality while maintaining a competitive time cost. They are compared with the ones introduced in the literature, obtaining interesting results. All the comparisons among the methods in the literature and the laboratory results have been made with real data.

... Evolutionary methods are techniques inspired by natural or biological phenomena that can be described as methods for selecting the best results in a population. Examples of evolutionary techniques are Simulated Annealing [Kirkpatrick et al. 1983], Particle Swarm Optimization [Eberhart and Kennedy 1995], CMA-ES [Hansen and Ostermeier 2001], and genetic algorithms. ...

... Model tuning is performed via cross-validation with 5 folds. The performance metric adopted is the classification error rate, which is the most widely used in classification tuning [Koch et al. 2018]. The algorithms tested were the classics: Random Search, Simulated Annealing [Kirkpatrick et al. 1983], CMA-ES [Hansen and Ostermeier 2001], and TPE from HyperOpt [Bergstra et al. 2015], as well as the state-of-the-art TPOT [Le et al. 2020] and Hyperband [Li et al. 2017], all compared against BarySearch. All algorithms were tested with 25 tuning iterations, and each algorithm's tuning process was repeated 30 times with different initializations. ...

In many Machine Learning applications, it is desirable to obtain the best set of hyperparameters to optimize the application's performance. The problem of optimizing hyperparameters is known as tuning of Machine Learning models. Although it is an optimization problem, tuning faces complex difficulties, since the models are treated as black boxes without a well-defined mathematical formulation. In addition, there are problems with regions of oscillation and regions of large plateaus. In this work, we present BarySearch, an algorithm that uses the barycenter equation without the need to compute derivatives of the objective function. The BarySearch technique showed promising results in practical model-tuning tests.

... It is therefore critical to find the optimal parameter estimators that minimize or maximize a chosen objective function, such as the squared error of the fit. In practice, iterative local methods (e.g., Levenberg-Marquardt method [9,10] and Nelder-Mead method [11]) and heuristic methods (e.g., genetic algorithms [12] and simulated annealing [13]) are commonly employed for parameter estimation in nonlinear models to minimize the squared error of the fit. However, practitioners are not able to know whether the parameter estimators derived through these extant methods are the optimal parameter estimators that correspond to the global minimum of the squared error of the fit. ...

... Besides these iterative local methods, several heuristic methods, also called guided random search techniques [24], such as genetic algorithms [12], simulated annealing [13], particle swarm optimization [25], and tabu search [26,27], have also been applied for parameter estimation in nonlinear models to minimize the squared error of the fit. These heuristic methods are usually initialized with multiple initial guesses, known as the initial population, as the initial estimators for each parameter in the nonlinear model. ...
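As a minimal illustration of how such a heuristic method works in this setting (this is a generic simulated-annealing sketch, not the focused-regions method of the paper), the code below fits the Michaelis-Menten model to a synthetic data set by minimizing the squared error of the fit; the step size, cooling schedule, and starting point are arbitrary choices.

```python
import math
import random

def michaelis_menten(s, vmax, km):
    """Reaction rate v = Vmax * s / (Km + s)."""
    return vmax * s / (km + s)

def sse(params, data):
    """Squared error of the fit over (substrate, rate) pairs."""
    vmax, km = params
    return sum((v - michaelis_menten(s, vmax, km)) ** 2 for s, v in data)

def anneal_fit(data, init=(1.0, 1.0), steps=20000, t0=1.0, alpha=0.9995, sigma=0.1):
    """Estimate (Vmax, Km) by simulated annealing with Gaussian perturbations."""
    cur, cur_e = init, sse(init, data)
    best, best_e = cur, cur_e
    t = t0
    for _ in range(steps):
        # perturb each parameter, keeping it strictly positive
        cand = tuple(max(1e-9, p + random.gauss(0.0, sigma)) for p in cur)
        cand_e = sse(cand, data)
        if cand_e <= cur_e or random.random() < math.exp((cur_e - cand_e) / t):
            cur, cur_e = cand, cand_e
            if cur_e < best_e:
                best, best_e = cur, cur_e
        t *= alpha
    return best, best_e
```

As the paper notes, nothing in this procedure certifies that the returned estimators correspond to the global minimum; the run merely returns the best point visited.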

Important for many science and engineering fields, meaningful nonlinear models result from fitting such models to data by estimating the value of each parameter in the model. Since parameters in nonlinear models often characterize a substance or a system (e.g., mass diffusivity), it is critical to find the optimal parameter estimators that minimize or maximize a chosen objective function. In practice, iterative local methods (e.g., Levenberg-Marquardt method) and heuristic methods (e.g., genetic algorithms) are commonly employed for least squares parameter estimation in nonlinear models. However, practitioners are not able to know whether the parameter estimators derived through these methods are the optimal parameter estimators that correspond to the global minimum of the squared error of the fit. In this paper, a focused regions identification method is introduced for least squares parameter estimation in nonlinear models. Using expected fitting accuracy and derivatives of the squared error of the fit, this method rules out the regions in parameter space where the optimal parameter estimators cannot exist. Practitioners are guaranteed to find the optimal parameter estimators through an exhaustive search in the remaining regions (i.e., focused regions). The focused regions identification method is validated through a case study in which the Michaelis-Menten model is fitted to an experimental data set. The case study shows that the focused regions identification method can find the optimal parameter estimators and the corresponding global minimum effectively and efficiently.

... Our main idea can be well implemented with the simulated annealing algorithm [5]. The objective function value is the "energy", the number of iterations is the "temperature", a model update is a "solution", and we reject a solution with a certain probability concerning the energy change and the temperature. ...

... In the context of optimization, Kirkpatrick et al. [5] regarded the objective function value as the energy, and finding a better solution as finding a lower-energy particle. Assuming the objective function of the problem is f(x), the current temperature is T_i, the current feasible solution is x_i, and the new solution is x′, then according to the simulated annealing process, the probability that x′ is accepted as the next feasible solution x_{i+1} is 1 if f(x′) < f(x_i), and exp(−(f(x′) − f(x_i)) / T_i) otherwise. ...
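In code, this standard Metropolis acceptance rule reads (function names are illustrative):

```python
import math
import random

def accept_probability(f_current, f_new, temperature):
    """Metropolis rule: always accept improvements; accept a worse
    solution with probability exp(-(f(x') - f(x_i)) / T_i)."""
    if f_new < f_current:
        return 1.0
    return math.exp(-(f_new - f_current) / temperature)

def next_solution(x_i, x_new, f, temperature):
    """Return x_{i+1} according to the simulated-annealing acceptance test."""
    if random.random() < accept_probability(f(x_i), f(x_new), temperature):
        return x_new
    return x_i
```

At high temperature almost any candidate is accepted; as the temperature falls, the rule degenerates into greedy descent.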

Differential privacy (DP) provides a formal privacy guarantee that prevents adversaries with access to machine learning models from extracting information about individual training points. Differentially private stochastic gradient descent (DPSGD) is the most popular training method with differential privacy in image recognition. However, existing DPSGD schemes lead to significant performance degradation, which prevents the application of differential privacy. In this paper, we propose a simulated annealing-based differentially private stochastic gradient descent scheme (SA-DPSGD), which accepts a candidate update with a probability that depends both on the update quality and on the number of iterations. Through this random update screening, we make the differentially private gradient descent proceed in the right direction in each iteration, finally resulting in a more accurate model. In our experiments, under the same hyperparameters, our scheme achieves test accuracies of 98.35%, 87.41% and 60.92% on the datasets MNIST, FashionMNIST and CIFAR10, respectively, compared to the state-of-the-art results of 98.12%, 86.33% and 59.34%. With freely adjusted hyperparameters, our scheme achieves even higher accuracies: 98.89%, 88.50% and 64.17%. We believe that our method makes a substantial contribution toward closing the accuracy gap between private and non-private image classification.

... The simulated annealing (SA) method is an optimization method that has been applied to numerous systems (Kirkpatrick et al., 1983). This method, based on the Metropolis algorithm (cf. Figure 1.10), uses statistical methods to generate a sequence of configurations that tend towards thermodynamic equilibrium. ...

Two different methods based on classical molecular dynamics were developed in this thesis in order to determine the relative abundances of the different isomers of neutral carbon clusters of sizes ranging from 2 to 54 atoms, as well as the most stable isomer of each size, and to characterize the nature and mechanisms of sticking and compute the associated probabilities. We first developed a combined condensation-annealing method based on a thermal cycle comprising three phases: a condensation and heating phase, a constant-temperature phase, and a cooling phase. First, we identified the statistical abundances of the different types of isomers as a function of the thermal-cycle parameters. We then identified the most stable isomer for each size studied. In a second step, we implemented a method for projecting carbon atoms onto C36 (fullerene and graphene) and C80 (fullerene) target clusters. This method allowed us to study C-Cn sticking mechanisms and the associated probabilities under different conditions. The originality of this study is that we differentiated the nature of the sticking obtained in each case.

... [1][2][3]. For these problems, physics has contributed many concepts and methods to the field of optimization, from the idea of thermal simulated annealing [4] to the applications of replica and cavity methods [3]. Quantum Annealing was initially formulated in the 1990s [5] as a quantum alternative to classical simulated annealing, in which quantum tunneling replaces thermal hopping so that the system can avoid being trapped in local minima and reach the ground state (i.e., the solution of the optimization problem). ...

Quantum Annealing has proven to be a powerful tool for tackling several optimization problems. However, its performance is severely limited by the number of qubits we can fit on a single chip and their local connectivity. In order to address these problems, in this work we propose a protocol to perform distributed quantum annealing. Our approach relies on Trotterization to slice the adiabatic evolution into local and non-local steps, the latter of which are distributed using entanglement-assisted local operations and classical communication (eLOCC). Theoretical bounds on the Trotter step size and the successful distribution probability of the process have been established, even in the presence of noise. These bounds have been validated by numerically simulating the evolution of the system for a range of annealing problems of increasing complexity.

... Finally, it is worth noting that generating samples from exponential distributions is of independent interest for a variety of applications. These include understanding the thermodynamic properties of materials [42], optimization with simulated annealing [44], and machine learning with "Boltzmann machines" [45]. It is possible that new quantum-classical algorithms might be developed, which harness QAOA as a generator of exponential samples. ...

The quantum approximate optimization algorithm (QAOA) is a quantum algorithm for approximately solving combinatorial optimization problems with near-term quantum computers. We analyze structure that is present in optimized QAOA instances solving the MaxCut problem, on random graph ensembles with $n=14-23$ qubits and depth parameters $p\leq12$. Remarkably, we find that the average basis state probabilities in the output distribution of optimized QAOA circuits can be described through a Boltzmann distribution: The average basis state probabilities scale exponentially with their energy (cut value), with a peak at the optimal solution. This generalizes previous results from $p=1$. We further find a unified empirical relation that describes the rate of exponential scaling or "effective temperature" across instances at varying depths and sizes, and use this to generate approximate QAOA output distributions. Compared to the true simulation results, these approximate distributions produce median approximation ratio errors of $\leq0.6\%$ at each $n$ and $p$ and worst-case error of $2.9\%$ across the 7,200 instances we simulated exactly. The approximate distributions also account for the probability for the optimal solution and cumulative distribution functions, but with larger errors. We conclude that exponential patterns capture important and prevalent aspects of QAOA measurement statistics on random graph ensembles, and make predictions for QAOA performance metric scalings on random graphs with up to 38 qubits and depth parameters $p\leq 12$.
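The Boltzmann form described in this abstract is straightforward to reproduce for illustration. The sketch below normalizes exponential weights over a list of basis-state energies; the temperature value in the usage example is an arbitrary placeholder, not the paper's fitted "effective temperature".

```python
import math

def boltzmann_distribution(energies, temperature):
    """Return probabilities p_i ∝ exp(-E_i / T), normalized to sum to one.

    For MaxCut-style problems where larger cut values are better, the
    energy can be taken as the negative cut value so that the optimal
    solution carries the largest probability.
    """
    weights = [math.exp(-e / temperature) for e in energies]
    z = sum(weights)  # partition function
    return [w / z for w in weights]
```

With `boltzmann_distribution([0.0, 1.0, 2.0], 1.0)`, probabilities decay by a factor of e per unit of energy, peaking at the lowest-energy state.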

... Although general MAs can find globally optimal solutions after enough iterations, an increasing number of MAs with higher efficiency and improved performance are being proposed. For example, the multiverse optimizer (MVO) [5] establishes mathematical models by simulating white holes, black holes, and wormholes in the universe to conduct global search and local exploitation; simulated annealing (SA) [6] comes from the principle of annealing in solids; the golden ratio optimization method (GROM) [7] is inspired by the golden ratio of plant growth; the artificial chemical reaction optimization algorithm (ACROA) [8] is inspired by different types of chemical reactions; the whale optimization algorithm (WOA) [9] imitates the bubble-net attack behavior whales use to hunt prey and obtain food; the slime mold algorithm (SMA) [10] is based on the spread and foraging behavior of slime molds; the aquila optimizer (AO) [11] establishes a mathematical model based on the aquila's hunting behavior to solve optimization problems; teaching- and learning-based optimization (TLBO) [12] is inspired by the teaching behavior between teachers and students and the self-learning behavior among students; and harmony search (HS) [13] simulates the improvisation of musicians. ...

Recently, a new swarm intelligence optimization algorithm called the remora optimization algorithm (ROA) was proposed. ROA simulates the remora's behavior of adsorbing to a host and uses some formulas of the sailfish optimization (SFO) algorithm and whale optimization algorithm (WOA) to update the solutions. However, the performance of ROA is still unsatisfactory: when solving complex problems, its convergence ability requires further improvement, and it easily falls into local optima. Since the remora depends on its host to obtain food, this paper introduces a mutualistic strategy to strengthen the symbiotic relationship between the remora and the host and thereby improve ROA's performance. Meanwhile, chaotic tent mapping and roulette wheel selection are added to further improve the algorithm's performance. By incorporating the above improvements, this paper proposes an improved remora optimization algorithm with a mutualistic strategy (IROA) and uses 23 benchmark functions in different dimensions and CEC2020 functions to validate the performance of the proposed IROA. Experimental studies on six classical engineering problems demonstrate that the proposed IROA has excellent advantages in solving practical optimization problems.

... Subsequently, an acceptance decision is made to determine whether or not to update the current solution. We apply a simulated annealing (SA) acceptance criterion [48] to accept the candidate solution. It is controlled by three parameters: the initial temperature, the cooling rate, and the number of iterations after which the initial temperature is reset. ...
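A minimal sketch of such an SA acceptance criterion with periodic reheating follows; the class name, parameter names, and default values are illustrative, not those of the cited paper.

```python
import math
import random

class SAAcceptance:
    """Simulated-annealing acceptance test with periodic temperature resets.

    t0:    initial temperature
    rho:   cooling rate (0 < rho < 1), applied on every call
    reset: number of iterations after which the temperature
           is reset to t0 (a 'reheat')
    """

    def __init__(self, t0=100.0, rho=0.99, reset=1000):
        self.t0, self.rho, self.reset = t0, rho, reset
        self.t = t0
        self.it = 0

    def accept(self, f_current, f_candidate):
        """Return True if the candidate (minimization) should replace
        the current solution under the Metropolis criterion."""
        self.it += 1
        if self.it % self.reset == 0:
            self.t = self.t0          # reheat: escape deep local optima
        else:
            self.t *= self.rho        # cool: become increasingly greedy
        delta = f_candidate - f_current
        return delta <= 0 or random.random() < math.exp(-delta / self.t)
```

Resetting the temperature lets a large-neighborhood search alternate between diversification (hot phases) and intensification (cold phases).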

The bike rebalancing problem is one of the major operational challenges in urban bike-sharing systems, which involves the redistribution of bikes among stations to prevent stations from being empty or overloaded. This paper investigates a new bike rebalancing problem, which considers the collection of broken bikes in a multi-depot system. The proposed problem can be classified as a two-commodity vehicle routing problem with pick-up and delivery. An integer programming model is formulated to find the optimal vehicle assignment and visiting sequences with the minimum total working time and fixed cost of vehicles. A hybrid heuristic algorithm integrating variable neighborhood search and dynamic programming is proposed to solve the problem. The computational results show that the proposed method finds 26 best solutions out of 36 instances, while CPLEX obtains 16 best solutions. The impact of broken-bike collection and the distribution of depots is examined. Comparison of different practical strategies indicates that the number of vehicles can be significantly reduced by allowing multiple visits to depots. Allowing vehicles to return to different depots can help reduce the total working time.

... The possible adsorption configurations were determined by MC searches of the configurational space of the Mt-water system as the temperature was decreased slowly according to the simulated annealing schedule. Simulated annealing is a metaheuristic algorithm for locating a suitable approximation to the global minimum of a given function in a large search space [37]. The candidate Mt-water configurations were sampled using a μVT canonical ensemble (the chemical potential μ, the volume V, and the temperature T are constant). ...

Expansive soil is blamed for many engineering problems such as foundation damage, subgrade heave, and road surface bulking. Lime is one of the most widely utilized materials in the stabilization of expansive soil. However, the stabilization mechanism of lime-treated expansive soil has not been thoroughly studied at the nanoscale. This paper employed montmorillonite (Mt) and portlandite (Po) to represent expansive soil and the hydration product of lime. Four types of portlandite-montmorillonite (Po-Mt) molecular models with different surface charges and interlayer cations revealed the nanoscale stabilization mechanism of Po-Mt. The results show that the volume change of Po-Mt samples is not only related to the adsorption energy of Mt, but is also controlled by the competitive adsorption of Po and the interaction between lime and Mt. The interface energy between Po and Mt, generated by Ca-ion migration from Po to the Mt surface, plays the most significant role in governing the swelling behavior of Po-Mt by providing a strong repulsive force that confines the swelling of Mt layers.

... In the literature, most search algorithms for comparative experiments focus on variants of simulated annealing (Kirkpatrick et al. 1983) and Tabu search (Glover 1989), see a review in Vo-Thanh and Piepho (2022). Recently, Vo-Thanh and Piepho (2022) considered a variant of hill climbing (Appleby et al. 1961) to obtain optimal row-column designs with a complex blocking structure, and equal or unequal treatment replications. ...

Two-phase experiments are widely used in many areas of science (e.g., agriculture, industrial engineering, food processing, etc.). For example, consider a two-phase experiment in plant breeding. Often, the first phase of this experiment is run in a field involving several blocks. The samples obtained from the first phase are then analyzed in several machines (or days, etc.) in a laboratory in the second phase. There might be field-block-to-field-block and machine-to-machine (or day-to-day, etc.) variation. Thus, it is practical to consider these sources of variation as blocking factors. Clearly, there are two possible strategies to analyze this kind of two-phase experiment, i.e., blocks are treated as fixed or random. While there are a few studies regarding fixed block effects, there are still a limited number of studies with random block effects and when information of block effects is uncertain. Hence, it is beneficial to consider a Bayesian approach to design for such an experiment, which is the main goal of this work. In this paper, we construct a design for a two-phase experiment that has a single treatment factor, a single blocking factor in each phase, and a response that can only be observed in the second phase.

... As representatives of the meta-heuristic algorithms, we have used Simulated Annealing (SA) (Kirkpatrick, Gelatt, & Vecchi, 1983) and a Genetic Algorithm (GA) (Tan, Fu, Zhang, & Bourgeois, 2008). ...
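For illustration, wrapper-style feature subset selection with simulated annealing can be sketched as below. The neighborhood move flips one feature in or out of the subset; in practice `score` would be a classifier's cross-validated accuracy, but here it is an arbitrary placeholder, and this sketch is not the cited papers' exact procedure.

```python
import math
import random

def anneal_feature_selection(n_features, score, steps=2000, t0=1.0, alpha=0.995):
    """Maximize `score(mask)` over binary feature masks by flipping one
    bit per iteration and applying the Metropolis acceptance rule."""
    cur = [random.random() < 0.5 for _ in range(n_features)]
    cur_s = score(cur)
    best, best_s = cur[:], cur_s
    t = t0
    for _ in range(steps):
        cand = cur[:]
        i = random.randrange(n_features)
        cand[i] = not cand[i]                  # flip one feature in/out
        cand_s = score(cand)
        # maximization: accept if better, else with Boltzmann probability
        if cand_s >= cur_s or random.random() < math.exp((cand_s - cur_s) / t):
            cur, cur_s = cand, cand_s
            if cur_s > best_s:
                best, best_s = cur[:], cur_s
        t *= alpha
    return best, best_s
```

The initialization line (`random.random() < 0.5` per feature) is exactly the kind of starting point that the individual-feature-evaluation initialization proposed in the abstract below would replace with a more informed mask.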

Selecting the best features in a dataset improves the accuracy and efficiency of classifiers in a learning process. Datasets generally have more features than necessary, some of them being irrelevant or redundant to others. For this reason, numerous feature selection methods have been developed, in which different evaluation functions and measures are applied. This paper proposes the systematic application of individual feature evaluation methods to initialize search-based feature subset selection methods. An exhaustive review of the starting methods used by genetic algorithms from 2014 to 2020 has been carried out. Subsequently, an in-depth empirical study has been carried out evaluating the proposal for different search-based feature selection methods (sequential forward and backward selection, Las Vegas filter and wrapper, Simulated Annealing, and Genetic Algorithms). Since the computation time is reduced and the classification accuracy with the selected features is improved, the feature selection initialization proposed in this work proves worth considering when designing any feature selection algorithm.

... We also see that the conventional optimization-based approaches may fail when the circuit benchmark has high chip area usage, such as "bigblue3" and "ariane". Also, MaskPlace gets the lowest wirelength compared with Graph Placement and simulated annealing [35] in the IBM benchmark, which is shown in Appendix A.7. This project website visualizes and compares different placements. ...

Placement is an essential task in modern chip design, aiming at placing millions of circuit modules on a 2D chip canvas. Unlike the human-centric solution, which requires months of intense effort by hardware engineers to produce a layout to minimize delay and energy consumption, deep reinforcement learning has become an emerging autonomous tool. However, the learning-centric method is still in its early stage, impeded by a massive design space of size ten to the order of a few thousand. This work presents MaskPlace to automatically generate a valid chip layout design within a few hours, whose performance can be superior or comparable to recent advanced approaches. It has several appealing benefits that prior arts do not have. Firstly, MaskPlace recasts placement as a problem of learning pixel-level visual representation to comprehensively describe millions of modules on a chip, enabling placement in a high-resolution canvas and a large action space. It outperforms recent methods that represent a chip as a hypergraph. Secondly, it enables training the policy network by an intuitive reward function with dense reward, rather than a complicated reward function with sparse reward from previous methods. Thirdly, extensive experiments on many public benchmarks show that MaskPlace outperforms existing RL approaches in all key performance metrics, including wirelength, congestion, and density. For example, it achieves 60%-90% wirelength reduction and guarantees zero overlaps. We believe MaskPlace can improve AI-assisted chip layout design. The deliverables are released at https://laiyao1.github.io/maskplace.

... However, we can roughly say the first one is through Machine Learning and Deep Learning algorithms, and the second one is through heuristic and metaheuristic methods. Some of the most popular general heuristics and meta-heuristics are simulated annealing [12], variable neighborhood search [13], and large neighborhood search [14], amongst others that allow escaping local optima in local search [15]. In the specific case of the CVRP, two of the leading state-of-the-art heuristics are LKH3 [16] and HGS [3]. ...

The Capacitated Vehicle Routing Problem is a well-known NP-hard problem that poses the challenge of finding the optimal route of a vehicle delivering products to multiple locations. Recently, new efforts have emerged to create constructive and perturbative heuristics to tackle this problem using Deep Learning. In this paper, we join these efforts to develop the Combined Deep Constructor and Perturbator, which combines two powerful constructive and perturbative Deep Learning-based heuristics, using attention mechanisms at their core. Furthermore, we improve the Attention Model-Dynamic for the Capacitated Vehicle Routing Problem by proposing a memory-efficient algorithm that reduces its memory complexity by a factor of the number of nodes. Our method shows promising results. It demonstrates a cost improvement on common datasets when compared against multiple other Deep Learning methods. It also obtains results close to the state-of-the-art heuristics from the Operations Research field. Additionally, the proposed memory-efficient algorithm for the Attention Model-Dynamic enables its use in problem instances with more than 100 nodes.

... The process is shown in Figure 5 ... (Figure 5b). The solution of the TSP is found using the simulated annealing algorithm [27], as shown in Figure 5c. ...
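The snippet above pairs a TSP formulation with simulated annealing. A minimal, self-contained sketch of that combination (2-opt segment-reversal moves with geometric cooling; the parameter values and function names are illustrative, not taken from the cited paper) might look like:

```python
import math
import random

def tour_length(tour, dist):
    """Total length of a closed tour over a distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def simulated_annealing_tsp(dist, t0=10.0, alpha=0.995, steps=20000, seed=0):
    """SA over tours: propose a 2-opt segment reversal, accept by the
    Metropolis rule, and cool the temperature geometrically."""
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    rng.shuffle(tour)
    cur_len = tour_length(tour, dist)
    best, best_len = tour[:], cur_len
    t = t0
    for _ in range(steps):
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # reverse a segment
        cand_len = tour_length(cand, dist)
        delta = cand_len - cur_len
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            tour, cur_len = cand, cand_len
            if cur_len < best_len:
                best, best_len = tour[:], cur_len
        t *= alpha  # geometric cooling
    return best, best_len
```

Because acceptance of worsening moves decays with the temperature, the search behaves like a random walk early on and like greedy local search near the end of the schedule.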

This paper proposes the design of the communications, control systems, and navigation algorithms of a multi-UAV system focused on remote-sensing operations. A new controller based on a compensator and a nominal controller is designed to dynamically regulate the UAVs' attitude. The navigation system addresses the multi-region coverage trajectory-planning task using a new approach to solve the TSP-CPP problem. The navigation algorithms were tested theoretically, and the combination of the proposed navigation techniques and control strategy was simulated on the MATLAB Simscape platform to optimize the controller's parameters over several iterations. The results reveal the robustness of the controller and the optimal performance of the route planner.

... We desire to direct our future work in this direction. Similar to how ISCDM can be improved in terms of negative conditional log-likelihood (−CLL) and RRSE by using our current versions of DE-AWISCDM, other optimization methods, such as ant colony optimization, particle swarm optimization, and simulated annealing (Dorigo et al. 2006; Kennedy and Eberhart 1995; Kirkpatrick et al. 1983), could replace the DE algorithm in our future research. In addition, we also wish to apply the proposed distance measures to improve distance-related algorithms, such as the k-nearest neighbor algorithm and its variants (Hastie and Tibshirani 1996; Domeniconi et al. 2000; Domeniconi and Gunopulos 2001), in our future work. ...

Distance metrics are central to many machine learning algorithms, and improving their measurement performance can greatly affect the classification results of these algorithms. The inverted specific-class distance measure (ISCDM) is effective in handling nominal attributes rather than numeric ones, especially when a training set contains missing values and non-class attribute noise. However, like many other distance metrics, this method is still based on the attribute independence assumption, which is obviously infeasible for many real-world datasets. In this study, we focus on establishing an improved ISCDM that uses an attribute weighting scheme to address its attribute independence assumption. We use a differential evolution (DE) algorithm to determine better attribute weights for our improved ISCDM, which is thus denoted DE-AWISCDM. We experimentally tested our DE-AWISCDM on 29 UCI datasets and found that it significantly outperforms the original ISCDM and other state-of-the-art methods with respect to negative conditional log-likelihood and root relative squared error.
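The abstract does not spell out the DE variant used to search attribute weights; a generic DE/rand/1/bin sketch, with a stand-in sphere objective instead of the paper's −CLL criterion (all parameter values here are illustrative), could look like:

```python
import random

def differential_evolution(obj, dim, bounds=(0.0, 1.0), pop_size=20,
                           f=0.5, cr=0.9, gens=100, seed=0):
    """DE/rand/1/bin: mutate with a scaled difference of two random
    individuals, recombine binomially, keep the trial if it is no worse."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fit = [obj(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)  # guarantees at least one mutated gene
            trial = [
                min(hi, max(lo, pop[a][d] + f * (pop[b][d] - pop[c][d])))
                if (rng.random() < cr or d == j_rand) else pop[i][d]
                for d in range(dim)
            ]
            trial_fit = obj(trial)
            if trial_fit <= fit[i]:  # greedy one-to-one selection
                pop[i], fit[i] = trial, trial_fit
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]
```

In the paper's setting, `obj` would score an attribute-weight vector by the resulting classifier's −CLL on training data; here it is left generic.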

... work, a hybridization of the Simulated Annealing (SA) and Squirrel Search Algorithm (SSA) methods was performed to accomplish the optimization of space trusses. The choice of methods aimed to merge an already established neighborhood-shifting algorithm with a recently developed, promising population algorithm. Simulated Annealing, developed by Kirkpatrick et al. (1983), is analogous to the metal annealing procedure, in which an alloy is heated to a high temperature and then cooled. If the process is done in too much haste, the metal becomes brittle. To obtain good strength results, the cooling process must have a slow decline in temperature so that ...

One of the biggest problems in structural steel design is producing structures with the lowest possible material weight, which makes design a slow and costly process. To achieve this objective, several optimization methods have been developed and tested. Nevertheless, a method that performs very efficiently when applied to different problems is not yet available. Based on this assumption, this work proposes a hybrid metaheuristic algorithm for the geometric and dimensional optimization of space trusses, called the Simulated Squirrel Search Algorithm (SSSA), which associates a well-established neighborhood-shifting algorithm (Simulated Annealing) with a recently developed, promising population algorithm (the Squirrel Search Algorithm, SSA). In this study, two models are analyzed: a classical model from the literature (a 25-bar space truss) and a roof system composed of space trusses. The structures are subjected to resistance and displacement constraints. A penalty function using Fuzzy Logic (FL) is investigated. Comparative analyses are performed between the SSSA and other optimization methods present in the literature. The results obtained indicate that the proposed method can be competitive with other heuristics.

... Our goal in this section is to develop a physics-based mathematical description of a set of rules governing honeycomb construction at the local scale that can explain the global patterns we observe in our experiments. To that end, we establish a computational model solved via simulated annealing, a technique for approximating the global optimum of a function (21,22) that is based on Monte Carlo methods and was originally developed to generate sample states of a thermodynamic system (23). It receives its name from the similarity to the process of annealing in materials science and is often used to study the formation of crystals resulting from the minimization of a potential energy (24)(25)(26) or to reconstruct the microstructure of dispersions and heterogeneous solids (27)(28)(29). ...

As honeybees build their nests in preexisting tree cavities, they must deal with the presence of geometric constraints, resulting in nonregular hexagons and topological defects in the comb. In this work, we study how bees adapt to their environment in order to regulate the comb structure. Specifically, we identify the irregularities in honeycomb structure in the presence of various geometric frustrations. We 3D-print experimental frames with a variety of constraints imposed on the imprinted foundations. The combs constructed by the bees show clear evidence of recurring patterns in response to specific geometric frustrations on these starter frames. Furthermore, using an experimental-modeling framework, we demonstrate that these patterns can be successfully modeled and replicated through a simulated annealing process, in which the minimized potential is a variation of the Lennard-Jones potential that considers only first-neighbor interactions according to a Delaunay triangulation. Our simulation results not only confirm the connection between honeycomb structures and other crystal systems such as graphene, but also show that irregularities in the honeycomb structure can be explained as the result of analogous interactions between cells and their immediate surroundings, leading to emergent global order. Additionally, our computational model can be used as a first step to describe specific strategies that bees use to effectively solve geometric mismatches while minimizing cost of comb building.

... The choice of sampling distribution and evaluation criterion depend on the theoretical foundation behind each algorithm. Simulated annealing (Kirkpatrick et al., 1983) is an algorithm inspired by thermodynamics and samples one candidate solution around the current iterate through random perturbation. The probability of transition, which is a function of the energy/cost of the candidate solution, determines whether to use the candidate solution for the next iteration. ...
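The transition probability described above is the Metropolis criterion; a small sketch (function name and defaults are illustrative):

```python
import math
import random

def metropolis_accept(delta_e, temperature, rng=random):
    """Accept an improving move outright; accept a worsening move
    with probability exp(-ΔE / T), which shrinks as T cools."""
    if delta_e <= 0:
        return True
    if temperature <= 0:
        return False
    return rng.random() < math.exp(-delta_e / temperature)
```

At high temperature nearly every candidate is accepted (broad exploration); as the temperature approaches zero the rule degenerates to greedy descent.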

We systematically review the Variational Optimization, Variational Inference, and Stochastic Search perspectives on sampling-based dynamic optimization and discuss their connections to state-of-the-art optimizers and Stochastic Optimal Control (SOC) theory. A general convergence and sample complexity analysis of the three perspectives is provided through the unifying Stochastic Search perspective. We then extend these frameworks to their distributed versions for multi-agent control by combining them with consensus Alternating Direction Method of Multipliers (ADMM) to decouple the full problem into local neighborhood-level problems that can be solved in parallel. Model Predictive Control (MPC) algorithms are then developed based on these frameworks, leading to fully decentralized sampling-based dynamic optimizers. The capabilities of the proposed algorithmic framework are demonstrated on multiple complex multi-agent tasks for vehicle and quadcopter systems in simulation. The results compare different distributed sampling-based optimizers and their centralized counterparts using unimodal Gaussian, mixture-of-Gaussians, and Stein variational policies. The scalability of the proposed distributed algorithms is demonstrated on a 196-vehicle scenario where a direct application of centralized sampling-based methods is shown to be prohibitive.

... Based on this processing technique, the optimization problem is decomposed into univariate function optimization problems. The graph-cut optimization method [29] or a simple simulated annealing approach [30] can be applied to solve the optimization problem. The simulated annealing procedure is shown in Algorithm 1: generate x₀ with a slight disturbance from X, then cool the temperature geometrically, Tₙ ← α · Tₙ₋₁. ...

Nowadays, with the increasing number of video cameras, the amount of recorded video is growing. Efficient video browsing and retrieval are critical issues considering the amount of raw video data to be condensed. Activity-based video synopsis is a popular approach to the video condensation problem. However, conventional synopsis methods usually involve complicated pairwise energy terms that lead to a time-consuming optimization problem. In this paper, we propose a simple online video synopsis framework in which the number of collisions of objects is classified first. Different optimization strategies are applied according to different collision situations to maintain a balance among computational cost, condensation ratio, and collision cost. Secondly, tube-resizing coefficients that vary across frames are adaptively assigned to each newly generated tube, so that a suitable mapping result can be obtained to represent the proper size of the activity in each frame of the synopsis video. The maximum number of activities can thus be displayed in one frame with minimal collisions. Finally, in order to remove motion artifacts and improve the visual quality of the condensed video, a smoothness term is introduced to constrain the resizing coefficients. Experimental results on extensive videos validate the efficiency of the proposed method.

... Nature-inspired optimization algorithms are intelligent optimization algorithms that simulate various natural phenomena and laws of physics. The most typical representative is the simulated annealing algorithm, which builds on the Metropolis criterion and models the annealing process of solids [52]. Subsequently, Erol proposed the Big Bang–Big Crunch algorithm in 2006 [53], Rashedi proposed the Gravitational Search Algorithm in 2009 [54], Shareef proposed the Lightning Search Algorithm in 2015 [55], and Biyanto proposed the Rain Water Algorithm in 2019 [56]. ...

In recent years, with the rapid development of distributed photovoltaic systems (DPVS), the shortage of data monitoring devices and the difficulty of achieving comprehensive coverage with measurement equipment have become more significant, bringing great challenges to the efficient management and maintenance of DPVS. Virtual collection is a new DPVS data collection scheme, cost-effective and computationally efficient, that meets the needs of distributed energy management but has received little attention and research. To fill this gap, this paper provides a comprehensive and systematic review of DPVS virtual collection. We give a detailed introduction to the process of DPVS virtual collection and identify the challenges faced by virtual collection through problem analogy. Furthermore, in response to these challenges, this paper summarizes the main methods applicable to virtual collection, including similarity analysis, reference station selection, and PV data inference. Finally, this paper thoroughly discusses the diversified application scenarios of virtual collection, hoping to provide helpful information for the development of the DPVS industry.

... Simulated annealing (SA) is an optimization method proposed by Kirkpatrick et al. [22] in 1983, based on the previous work of Metropolis et al. [23]. The idea behind SA is to simulate small movements of atoms and the change in energy. ...

The green vehicle routing problem (GVRP) is a relatively new topic, which aims to minimize greenhouse gas (GHG) emissions produced by a fleet of vehicles. Both internal combustion vehicles (ICV) and alternative fuel vehicles (AFV) are considered, dividing GVRP into two separate subclasses: ICV-based GVRP and AFV-based GVRP. In the ICV-based subclass, the environmental aspect comes from the objective function, which aims to minimize GHG emissions or fuel usage of ICVs. On the other hand, the environmental aspect of AFV-based GVRP is implicit and comes from using AFVs in transport. Since GVRP is NP-hard, finding the exact solution in a reasonable amount of time is often impossible for larger instances, which is why metaheuristic approaches are predominantly used. The purpose of this study is to detect gaps in the literature and present suggestions for future research in the field. For that purpose, we review recent papers in which GVRP was tackled by metaheuristic methods and describe the algorithm specifics, VRP attributes, and objectives used in them.

... This algorithm, by imitating the process of social, economic, and political development of countries and by mathematically modeling parts of this process, presents operators in a regular form as an algorithm that can help solve complex optimization problems. The simulated annealing algorithm is a popular algorithm that has been widely used to solve optimization problems since its introduction by Kirkpatrick et al. (1983). This method is a stochastic local search inspired by nature: it models the way the atoms of a cooling metal settle into low-energy states, with moves accepted according to a temperature-dependent probability. ...

Reducing the cost of concrete, the most expensive building material, reduces the overall cost of constructing high-strength concrete (HSC) structures. In the present study, to achieve this goal, the mix design of HSC is optimized in terms of strength and price using a meta-heuristic genetic algorithm (HGA). To do this, in the first step, a series of experimental data was taken as basic information, and a mix design function determining the 28-day compressive strength and a slump calculation function were obtained using the HGA in MATLAB software. In the next step, the strength-price function was optimized using the HGA by changing the material ratios in the mix design and the price of the materials in each mix design, and by applying the required HSC conditions, including the slump obtained from the slump calculation function. A comparison between the results of this algorithm and the regression method showed that the algorithm gave better responses than regression for both strength and price, by 10.2% and 6.5%, respectively. In addition, producing the mix designs obtained from the algorithm and from regression in the laboratory showed that more than 97% of the strength was achieved at 28 days of age.

... Physics-based algorithms are designed with inspiration from phenomena, concepts, and laws in physics. The Simulated Annealing (SA) [28] algorithm is one of the most famous physics-based approaches. The modeling of the metal annealing phenomenon in metallurgy has been the main idea in its design. ...

This article introduces a new metaheuristic algorithm called the Serval Optimization Algorithm (SOA), which imitates the natural behavior of the serval in nature. The fundamental inspiration of SOA is the serval's hunting strategy, which attacks the selected prey and then hunts the prey in a chasing process. The steps of SOA implementation in the two phases of exploration and exploitation are mathematically modeled. The capability of SOA in solving optimization problems is challenged in the optimization of thirty-nine standard benchmark functions from the CEC 2017 and CEC 2019 test suites. For further evaluation, the proposed SOA approach is compared with twelve well-known metaheuristic algorithms. The optimization results show that the proposed SOA, thanks to an appropriate balance of exploration and exploitation, provides better solutions for most of the mentioned benchmark functions and has superior performance compared to competing algorithms. SOA implementation on the CEC 2011 test suite and four engineering design challenges shows the high efficiency of the proposed approach in handling real-world optimization applications.

... Simulated Annealing is a generic probabilistic meta-algorithm; it provides a good approximation of the global optimum of an objective function over a large search space. It was developed in parallel by Kirkpatrick et al. (1983) and by Černý (1985). ...

For a given piece of equipment, knowledge of its modular architecture has the advantage of facilitating its maintenance. Indeed, when a maintenance problem arises, one does not act on the whole equipment but only on the faulty module, and the observed fault is also detected more quickly. However, defining a good architecture depends on a judicious choice of design parameters, more specifically disassembly parameters, since disassembly constraints are increasingly integrated from the initial design phase to guarantee good disassemblability of equipment for maintenance purposes. Given the major maintenance difficulties of systems with complex topologies, modular design is therefore an asset for companies. On the other hand, although modular design offers advantages of standardization and reconfigurability, it is very costly, requiring more resources, more time, and more work from designers. This has consequences for equipment maintenance, because the more complex or dense the equipment's modules, the more expensive the maintenance. This thesis therefore addresses improving maintenance through modular design. This objective is pursued through a methodology that first evaluates the best design solution to assess the relevance of the chosen design constraints, then develops a tool based on an existing clustering algorithm, after having presented the limits of that algorithm, and finally validates the proposed tool by observing a reduction in module coupling costs, so as to guarantee good maintenance and derive a reconfiguration of the equipment.

... Simulated Annealing, introduced in 1983 by Kirkpatrick et al. [28], is a single-solution-based metaheuristic used for solving optimization problems. It is inspired by annealing theory, simulating the cooling process of metal atoms. ...

Unmanned Aerial Vehicle path planning is one of the critical issues for guaranteeing good performance in real-world applications. It is responsible for determining a short, smooth, and collision-free path from a source to a destination point. This paper presents a hybrid optimization scheme based on the hybridization of Chaotic Aquila Optimization with Simulated Annealing for solving the Unmanned Aerial Vehicle path planning problem. The main purpose of using Simulated Annealing is to provide a more suitable balance between exploitation and exploration. The performance of the proposed algorithm is assessed on three benchmark maps with various numbers of threats. Compared to nine well-known metaheuristics, the results of simulated experiments demonstrate that the proposed algorithm performs better in the majority of scenarios, reducing the fitness value and the path cost by up to 36% and 19%, respectively.

... This ensures that every key-value pair in Δ is mapped to a cell of type 1. An approximate solution to this optimization problem, whose objective involves min_i η⋆_i, was obtained by relying on simulated annealing [28], an optimization algorithm for approximating the global optimum of a given function. ...

We consider a set reconciliation setting in which two parties hold similar sets which they would like to reconcile. In particular, we focus on set reconciliation based on invertible Bloom lookup tables (IBLTs), a probabilistic data structure inspired by Bloom filters but allowing for more complex operations. IBLT-based set reconciliation schemes have the advantage of exhibiting a low complexity; however, the schemes available in the literature are known to be far from optimal in terms of communication complexity (overhead). The inefficiency of IBLT-based set reconciliation can be attributed to two facts. First, it requires an estimate of the cardinality of the set difference between the sets, which implies an increase in overhead. Second, in order to cope with errors in the aforementioned estimate, IBLT schemes in the literature make a worst-case assumption and oversize the data structures, further increasing the overhead. In this work, we present a novel IBLT-based set reconciliation protocol that does not require estimating the cardinality of the set difference. The scheme we propose relies on what we term multi-edge-type (MET) IBLTs. The simulation results in this paper show that the novel scheme outperforms previous IBLT-based approaches to set reconciliation.
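As a rough illustration of the underlying data structure (a plain toy IBLT for integer keys, not the proposed MET-IBLT, with illustrative sizing), set reconciliation by cell-wise subtraction and peeling can be sketched as:

```python
import hashlib

def _h(x, salt):
    """Deterministic 64-bit hash of x under a salt."""
    return int.from_bytes(hashlib.sha256(f"{salt}:{x}".encode()).digest()[:8], "big")

class IBLT:
    """Toy IBLT: each key is XORed into k cells; cells hold a signed
    count, an XOR of keys, and an XOR of key checksums."""
    def __init__(self, m=50, k=3):
        self.m, self.k = m, k
        self.count = [0] * m
        self.key_sum = [0] * m
        self.chk_sum = [0] * m

    def _cells(self, key):
        cells, salt = [], 0
        while len(cells) < self.k:  # k distinct cell indices
            c = _h(key, salt) % self.m
            if c not in cells:
                cells.append(c)
            salt += 1
        return cells

    def insert(self, key, sign=1):
        for c in self._cells(key):
            self.count[c] += sign
            self.key_sum[c] ^= key
            self.chk_sum[c] ^= _h(key, "chk")

    def subtract(self, other):
        """Cell-wise difference of two tables built with the same m, k."""
        diff = IBLT(self.m, self.k)
        for c in range(self.m):
            diff.count[c] = self.count[c] - other.count[c]
            diff.key_sum[c] = self.key_sum[c] ^ other.key_sum[c]
            diff.chk_sum[c] = self.chk_sum[c] ^ other.chk_sum[c]
        return diff

    def peel(self):
        """Decode a subtracted table: returns (keys only in A, keys only in B)."""
        only_a, only_b = set(), set()
        progress = True
        while progress:
            progress = False
            for c in range(self.m):
                pure = (self.count[c] in (1, -1)
                        and self.chk_sum[c] == _h(self.key_sum[c], "chk"))
                if pure:
                    key, s = self.key_sum[c], self.count[c]
                    (only_a if s == 1 else only_b).add(key)
                    self.insert(key, sign=-s)  # remove recovered key
                    progress = True
        return only_a, only_b
```

Party A subtracts B's table from its own and peels the result; decoding succeeds only with high probability, and only if the table is sized generously relative to the set difference, which is precisely the overhead issue the MET-IBLT scheme targets.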

... In this paper a simulated annealing algorithm is also proposed for solving the MOSC design model. Kirkpatrick et al. (1983) developed the first version of simulated annealing for solving complex problems. In the simulated annealing algorithm, the temperature and cooling rate play an important role in determining the properties of the material structure. ...

In recent years, interest in designing multi-echelon, multi-product supply chains using multi-objective optimization has surged, as exemplified by the number of studies published in this field. The resulting models are complex multi-objective network optimization models of a combinatorial nature. Exact algorithms can at best provide a Pareto-optimal solution for medium-size problems. In such situations, metaheuristic algorithms become a viable option for solving these kinds of problems. Therefore, the purpose of this paper is to develop three metaheuristic algorithms to solve large multi-objective supply chain network design problems. The algorithms are based on tabu search, the genetic algorithm, and simulated annealing, and seek near-optimal global solutions. The three algorithms are designed, coded, and tested, and their parameters are fine-tuned. The exact ε-constraint algorithm embedded in the General Algebraic Modeling System (GAMS) is used to validate their results. A well-designed study compares the performance of the three algorithms on several measures using sound statistical tests, with a typical multi-objective supply chain model as the test case. The results show that the tabu search algorithm outperformed the other two algorithms in terms of percent of domination and computation time. On the other hand, the simulated annealing solutions are the best in terms of their diversity.

... DIRECT is a pattern search algorithm from the class of Lipschitz optimization methods that relies on a dividing-rectangles principle, hence the name DIRECT [34]. SA, a heuristic-based gradient-free algorithm [35], probabilistically accepts candidate solutions while exploring the neighborhood of a search point. The probability of acceptance depends on a control parameter called temperature, which is driven from its initial value toward zero in a controlled manner by a reduction factor applied at each iteration. ...

The paper proposes a novel adaptive search space decomposition method and a novel gradient-free optimization-based formulation for the pre- and post-buckling analyses of space truss structures. Space trusses are often employed in structural engineering to build large steel constructions, such as bridges and domes, whose structural response is characterized by large displacements. Therefore, these structures are vulnerable to progressive collapses due to local or global buckling effects, leading to sudden failures. The method proposed in this paper allows the analysis of the load-equilibrium path of truss structures to permanent and variable loading, including stable and unstable equilibrium stages and explicitly considering geometric nonlinearities. The goal of this work is to determine these equilibrium stages via optimization of the Lagrangian kinematic parameters of the system, determining the global equilibrium. However, this optimization problem is non-trivial due to the undefined parameter domain and the sensitivity and interaction among the Lagrangian parameters. Therefore, we propose formulating this problem as a nonlinear, multimodal, unconstrained, continuous optimization problem and develop a novel adaptive search space decomposition method, which progressively and adaptively re-defines the search domain (hypersphere) to evaluate the equilibrium of the system using a gradient-free optimization algorithm. We tackle three benchmark problems and evaluate a medium-sized test representing a real structural problem in this paper. The results are compared to those available in the literature regarding displacement-load curves and deformed configurations. The accuracy and robustness of the adopted methodology show a high potential of gradient-free algorithms in analyzing space truss structures.

An improved variant of the Jaya optimization algorithm, called Jaya2, is proposed to enhance the performance of the original Jaya without sacrificing its simple algorithmic design. The proposed approach arranges the solutions in a ring topology to reduce the likelihood of premature convergence. In addition, population size reduction is used to automatically adjust the population size during the optimization process. Moreover, the translation dependency problem of the original Jaya is discussed, and an alternative solution update operation is proposed. To test Jaya2, we compare it with nine different optimization methods on the CEC 2020 benchmark functions and the CEC 2011 real-world optimization problems. The results show that Jaya2 is highly competitive on the tested problems, where it generally outperforms most approaches. Having an easy-to-implement approach with little parameter tuning is highly desirable, since researchers from different disciplines with basic programming skills can use it to solve their optimization problems.

Polymer bioconjugation, primarily with the gold standard PEG, improves pharmacokinetics but also affects the conformational stability of proteins. Previous mutation studies, which mainly examined (Asn)PEG4 conjugates of the beta-sheet-rich human Pin1 WW domain, postulate stabilization mechanisms based on protein desolvation: a strengthening of intramolecular salt bridges and NH-pi bonds, as well as entropically favorable displacement of water around apolar amino acids and hydroxyl groups. The aim of this work is to characterize protein-polymer dynamics at the molecular level in order to advance rational approaches to the design of new bioconjugates and to establish possible PEG alternatives. To this end, a large number of descriptors were obtained from molecular dynamics simulations of the WW conjugates and correlated with published stability data in multivariate regression and logistic classification models. Compared with a previously published crystal-structure-based guideline, the resulting QSPR models cover a larger and structurally more diverse set of conjugates while showing markedly better performance, including for a conjugate of the Src SH3 domain. The model descriptors capture both a modulation of solvation and protein-polymer interactions. Metadynamics simulations additionally revealed the polymer dynamics during partial protein unfolding. Further simulations of conjugates of the alpha-helical Her2 affibody were used to systematically study the dynamics of PEG and several alternatives (LPG, PEtOx, PMeOx). PEG interacted with positively charged lysines and arginines near hydrophobic amino acids. LPG showed additional interactions of its hydroxyl groups with aspartates and glutamates. POx polymers interacted with phenylalanines and tyrosines and, via carbonyl groups, with hydrogen-bond donors.
Larger conjugates (10-50 kDa PEG/LPG/PEtOx) of the antiviral biologic interferon-alpha2a were analyzed using Gaussian-accelerated MD and a coarse-grained simulation. Characteristic interaction partners agreed with the observations for the oligomer conjugates. Consistent with the collaborators' experimental data on the 10 kDa variants, additional constrained-network analyses, which evaluate protein flexibility, indicated thermal destabilization. The bioactivity of the investigated conjugates was furthermore successfully correlated with the radii of gyration of the modeled structures.

Generative adversarial networks (GANs), which have powerful fitting ability and thus generate diverse samples, are efficient deep neural networks for generation tasks. GANs are rarely used to solve combinatorial optimization problems, though they have great potential in the field of operations research. This paper presents a hybrid evolutionary algorithm (DCG-EA/I) driven by deep convolutional generative adversarial networks (DCGAN). First, the evolutionary individuals are encoded to fit the training of GANs. Then, an escaping strategy driven by DCGAN is proposed to expand the space of evolutionary individuals and enhance population diversity. Moreover, we use an iterative 2-opt local search method to improve the quality of the solutions. Finally, a negative-as-positive mechanism is constructed to stabilize the training process of DCGAN and reduce the harm caused by mode collapse. The algorithm is tested on TSP standard library instances and real-world instances. Experimental results show that the proposed algorithm can mitigate the problem of local convergence and achieves competitive performance compared with other GAN-based algorithms.
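The 2-opt local search mentioned above can be sketched as a generic first-improvement variant (the paper's exact iteration scheme may differ):

```python
def two_opt(tour, dist):
    """Repeatedly reverse tour segments while a reversal shortens the tour.
    Replacing edges (a,b) and (c,d) by (a,c) and (b,d) changes the length
    by dist[a][c] + dist[b][d] - dist[a][b] - dist[c][d]."""
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            # skip j = n-1 when i = 0: edges (n-1,0) and (0,1) share city 0
            for j in range(i + 2, n if i > 0 else n - 1):
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                delta = dist[a][c] + dist[b][d] - dist[a][b] - dist[c][d]
                if delta < -1e-12:
                    tour[i + 1:j + 1] = tour[i + 1:j + 1][::-1]
                    improved = True
    return tour
```

On Euclidean instances a 2-opt move removes a pair of crossing edges, which is why it is a common polishing step for tours produced by learned constructors.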

One of the important types of lightweight concrete is expanded polystyrene (EPS) concrete, which can be produced by substituting part of the coarse aggregate of the concrete with EPS beads. The characteristics of EPS concrete depend greatly on the materials used in its manufacture. The present study develops two soft-computing approaches, a radial basis function neural network (RBFNN) and a coupled simulated annealing-least squares support vector machine (CSA-LSSVM), together with a regression model, for estimating the compressive strength of various types of concrete, with a main focus on EPS concrete. The regression model was considered the benchmark, and the outcomes of the RBFNN and CSA-LSSVM models were compared with it. The results revealed that the developed CSA-LSSVM and RBFNN approaches can be efficiently utilized to estimate the compressive strength of various kinds of concrete, whereas the outcomes of the regression model were not as accurate as those of the machine learning approaches. In addition, the CSA-LSSVM model built with a radial basis kernel function was found to be the better model. The R², AARD%, RMSE, and MAPE for the benchmark model were 0.7555, 8.87%, 25.49, and 0.00887; these values were 0.9970, 3.01%, 1.08, and 0.0301 for CSA-LSSVM and 0.9953, 4.84%, 1.23, and 0.0484 for the RBFNN model, respectively.

Gravity data inversion is important for determining subsurface models of the Earth from gravity field anomaly data. Gravity data inversion can be carried out with a global approach; one global inversion method is simulated annealing (SA). The SA method is based on an analogy with the thermodynamic process of crystal formation in a substance. Its development was inspired by the cooling of metals, in which an ordered, minimum-energy crystal structure forms as the metal is cooled slowly from a hot state. The data used are synthetic data produced by forward modeling, used to test the accuracy of the SA method. The SA inversion can be applied to solve the gravity inversion problem and yields inversion models that agree with the synthetic models. Tests with two types of synthetic models, a rectangular model and a fault model, give misfits in the estimated gravity field anomaly of 2.8 × 10⁻⁴ mGal for the rectangular model and 1.7 × 10⁻⁴ mGal for the fault model. The SA inversion solution is influenced by two factors: the restriction of the search space and the choice of the initial guess model.
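The Metropolis-style acceptance rule at the heart of SA can be sketched as follows; this is a generic one-dimensional illustration, not the gravity-inversion code, and the geometric cooling schedule and neighbor move are assumptions:

```python
import math
import random

def simulated_annealing(energy, x0, neighbor, t0=1.0, cooling=0.95, steps=2000, seed=0):
    """Metropolis acceptance: always take downhill moves; take uphill moves
    with probability exp(-dE/T), where the temperature T decays geometrically."""
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        ey = energy(y)
        if ey < e or rng.random() < math.exp((e - ey) / t):
            x, e = y, ey
            if e < best_e:
                best_x, best_e = x, e
        t *= cooling
    return best_x, best_e
```

For instance, `simulated_annealing(lambda x: (x - 3) ** 2, 0.0, lambda x, r: x + r.uniform(-0.5, 0.5))` drives the state toward the minimum at x = 3.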

We use interferometric synthetic aperture radar observations to investigate the fault geometry and afterslip evolution within 3 years after a mainshock. The postseismic observations favor a ramp‐flat structure in which the flat angle should be lower than 10°. The postseismic deformation is dominated by afterslip, while the viscoelastic response is negligible. A multisegment, stress‐driven afterslip model (hereafter called the SA‐2 model) with depth‐varying frictional properties better explains the spatiotemporal evolution of the postseismic deformation than a two‐segment, stress‐driven afterslip model (hereafter called the SA‐1 model). Although the SA‐2 model does not improve the misfit significantly, this multisegment fault with depth‐varying friction is more physically plausible given the depth‐varying mechanical stratigraphy in the region. Compared to the kinematic afterslip model, the mechanical afterslip models with friction variation tend to underestimate early postseismic deformation to the west, which may indicate more complex fault friction than we expected. Both the kinematic and stress‐driven models can resolve downdip afterslip, although it could be affected by data noise and model resolution. The transition depth of the sedimentary cover basement interface inferred by afterslip models is ∼12 km in the seismogenic zone, which coincides with the regional stratigraphic profile. Because the coseismic rupture propagated along a basement‐involved fault while the postseismic slip may activate the frontal structures and/or shallower detachments in the sedimentary cover, the 2017 Sarpol‐e Zahab earthquake may have acted as a typical event that contributed to both thick‐ and thin‐skinned shortening of the Zagros in both seismic and aseismic ways.

Alternative computing such as stochastic computing and bio-inspired computing holds promise for overcoming the limitations of von Neumann computers. However, one difficulty in the implementation of such alternative computing is the need for a large number of random bits at the same time. To address this issue, we propose a scalable true-random-number generating scheme that we refer to as XORing shift registers (XSR). XSR generates multiple uncorrelated true random bitstreams using only two true random number generators as entropy sources and can thus be implemented by a variety of logic devices. Toward superconducting alternative computing, we implement XSR using an energy-efficient superconductor logic family, adiabatic quantum-flux-parametron (AQFP) logic. Furthermore, to demonstrate its performance, we design and observe an AQFP-based XSR circuit that generates four random bitstreams in parallel. The results of the experiment confirm that the bitstreams generated by the XSR circuit exhibit no autocorrelation and that there is no correlation between the bitstreams.
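The core idea of XSR, combining two entropy sources to obtain several decorrelated streams, can be sketched in software by XORing one source with progressively delayed copies of the other; this delay-line layout is an assumption for illustration only, since the actual circuit is AQFP hardware:

```python
def xsr_streams(bits_a, bits_b, n_streams):
    """Form stream i by XORing source A with source B delayed by i steps.
    With independent random sources, the resulting streams are pairwise
    uncorrelated (the XOR of independent unbiased bits is unbiased)."""
    streams = [[] for _ in range(n_streams)]
    for t in range(len(bits_a)):
        for i in range(n_streams):
            if t - i >= 0:  # delayed sample exists only after i steps
                streams[i].append(bits_a[t] ^ bits_b[t - i])
    return streams
```

Stream 0 is simply the bitwise XOR of the two sources; stream i starts i steps later because its delayed input is not yet defined before then.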

Recent advances in continuous-flow microfluidics have enabled highly integrated lab-on-a-chip biochips. These chips can execute complex biochemical applications precisely and efficiently within a tiny area, but they require a large number of control ports and the corresponding control logic to generate required pressure patterns for flow control, which, consequently, offset their advantages and prevent their wide adoption. In this article, we propose the first flow-control layer co-synthesis flow called MiniControl, for continuous-flow microfluidic biochips under strict constraints for control ports, incorporating high-level synthesis, physical design, and control system design simultaneously, which has never been considered in previous work. With the maximum number of allowed control ports specified in advance, this synthesis flow aims to generate biochip architectures with high execution efficiency and the corresponding control systems with optimized timing performance. Besides, the overall cost of a biochip can be reduced and the tradeoff between a control system and execution efficiency of biochemical applications can be evaluated for the first time. The experimental results demonstrate that MiniControl leads to high execution efficiency, low platform cost, as well as excellent timing performance, while strictly satisfying the given control-port constraints.

The sand cat swarm optimization (SCSO) algorithm is a recently proposed metaheuristic optimization algorithm. It simulates the hunting behavior of the sand cat, which attacks or searches for prey according to sound frequency; each sand cat aims to catch better prey and therefore searches for a better location. In the SCSO algorithm, each sand cat gradually approaches its prey, which gives the algorithm strong exploitation ability. However, in the later stages of the SCSO algorithm, each sand cat is prone to falling into a local optimum, making it unable to find a better position. To improve the mobility of the sand cat and the exploration ability of the algorithm, this paper proposes a modified sand cat swarm optimization (MSCSO) algorithm. The MSCSO algorithm adds a wandering strategy: when attacking or searching for prey, the sand cat wanders to find a better position. The wandering strategy enhances the mobility of the sand cat and gives the algorithm stronger global exploration ability. In addition, a lens opposition-based learning strategy is added to enhance the global character of the algorithm so that it converges faster. To evaluate the optimization effect of the MSCSO algorithm, we used 23 standard benchmark functions and the CEC2014 benchmark functions. In the experiments, we analyzed descriptive statistics, convergence curves, the Wilcoxon rank sum test, and box plots. The experiments show that the MSCSO algorithm with the wandering and lens opposition-based learning strategies has stronger exploration ability. Finally, the MSCSO algorithm was applied to seven engineering problems, which also verified its engineering practicability.
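Lens opposition-based learning maps a candidate through the midpoint of the search range, which helps a trapped individual jump to the opposite region. A minimal sketch follows; the scaling factor k and the keep-the-better wrapper are common choices assumed here, not necessarily the paper's exact formulation:

```python
def lens_opposition(x, lb, ub, k=2.0):
    """Lens-imaging opposite point: with k = 1 this reduces to standard
    opposition-based learning, lb + ub - x; k > 1 pulls toward the midpoint."""
    mid = (lb + ub) / 2.0
    return mid + mid / k - x / k

def keep_better(x, lb, ub, fitness, k=2.0):
    """Keep whichever of x and its lens-opposite has the lower fitness."""
    x_opp = lens_opposition(x, lb, ub, k)
    return x if fitness(x) <= fitness(x_opp) else x_opp
```

For example, on [0, 10] with k = 1 the opposite of 2 is 8; if the optimum lies near 8, the opposite point is kept and the search escapes the poor region.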

The choice of heuristic operators is strongly related to the performance of a (meta-)heuristic algorithm. Hence, applying an automated selection approach can increase the robustness of an optimization system. In this work, we investigate the use of a reinforcement learning technique as the selection mechanism of a hyper-heuristic algorithm. Specifically, we use approximate Q-learning with an artificial neural network as the function approximator. Moreover, we evaluate different sets of metrics for representing the state of the environment, which, in this scenario, must indicate the search stage of the optimization algorithm. The experiments conducted on six combinatorial problem domains indicate that, with simple state measures (combining the last action vector and the fitness improvement rate), our approach yields better results than a state-of-the-art Multi-Armed Bandit approach, which has no state representation. Keywords: Hyper-heuristic, Reinforcement learning, Combinatorial optimization
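The selection mechanism can be sketched with a linear Q approximation standing in for the paper's neural network; the state layout (last-action one-hot plus improvement rate), reward, and hyperparameters below are assumptions:

```python
import random

class QSelector:
    """Epsilon-greedy operator selector with linear Q approximation."""
    def __init__(self, n_ops, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
        self.n_ops = n_ops
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        self.rng = random.Random(seed)
        # one weight vector per operator; state = last-action one-hot + improvement rate
        self.w = [[0.0] * (n_ops + 1) for _ in range(n_ops)]

    def q(self, state, a):
        return sum(wi * si for wi, si in zip(self.w[a], state))

    def select(self, state):
        if self.rng.random() < self.eps:           # explore
            return self.rng.randrange(self.n_ops)
        return max(range(self.n_ops), key=lambda a: self.q(state, a))

    def update(self, state, a, reward, next_state):
        """One Q-learning step on the chosen operator's weights."""
        target = reward + self.gamma * max(self.q(next_state, b) for b in range(self.n_ops))
        err = target - self.q(state, a)
        self.w[a] = [wi + self.alpha * err * si for wi, si in zip(self.w[a], state)]
```

In a hyper-heuristic loop, `select` picks the next low-level operator and `update` rewards it with, e.g., the fitness improvement it produced.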

Although mmCIF is the current official format for depositing protein and nucleic acid structures to the Protein Data Bank (PDB) database, the legacy PDB format is still the primary supported format for many structural bioinformatics tools. Therefore, reliable software to convert mmCIF structure files to PDB files is needed. Unfortunately, existing conversion programs fail to correctly convert many mmCIF files, especially those with many atoms and/or long chain identifiers. This study proposes BeEM, which converts any mmCIF-format structure file to PDB format. BeEM conversion faithfully retains all atomic and chain information, including chain IDs with more than 2 characters, which are not supported by any existing mmCIF-to-PDB converters. The conversion speed of BeEM is at least ten times faster than that of existing converters such as MAXIT and Phenix. BeEM is available under the BSD licence at https://github.com/kad-ecoli/BeEM/ .
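The core of any such conversion is mapping whitespace-delimited `_atom_site` rows onto fixed-column PDB `ATOM` records. The sketch below assumes a fixed field order and omits altLoc, iCode, occupancy, and B-factor; real mmCIF requires parsing the `loop_` header, and this is an illustration, not BeEM's implementation:

```python
def atom_site_to_pdb(mmcif_rows):
    """Map whitespace-delimited _atom_site rows onto fixed-column PDB-style
    ATOM records. Assumed field order (illustrative only):
    group, id, type_symbol, atom_id, comp_id, asym_id, seq_id, x, y, z."""
    records = []
    for row in mmcif_rows:
        f = row.split()
        group, serial = f[0], int(f[1])              # f[2] (element) unused here
        name, res, chain, seq = f[3], f[4], f[5], int(f[6])
        x, y, z = float(f[7]), float(f[8]), float(f[9])
        # PDB columns: record 1-6, serial 7-11, name 13-16, resName 18-20,
        # chain 22, resSeq 23-26, x/y/z 31-54 (8.3f each)
        records.append(f"{group:<6}{serial:>5} {name:<4} {res:>3} "
                       f"{chain:1}{seq:>4}    {x:8.3f}{y:8.3f}{z:8.3f}")
    return records
```

Note that this naive scheme breaks down exactly where BeEM is needed: chain IDs longer than the single column 22 cannot be represented without a renaming scheme.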

S-wave velocity (Vs), as a crucial parameter in the study of site effects, has been investigated extensively. In the last decade, the use of surface wave analysis to create Vs profiles has attracted considerable attention due to its success in many case studies. Nevertheless, because of the high nonlinearity of the inversion of surface wave data, surface wave analysis may yield erroneous results. Therefore, it is important to develop an appropriate inversion algorithm to obtain a reasonable Vs profile. To this end, the current study develops a novel inversion algorithm for surface wave dispersion curves based on a meta-heuristic, the artificial jellyfish search (AJS) algorithm. The algorithm is tested on synthetic data and a real data set, and is compared with a particle swarm optimization (PSO)-based inversion of the dispersion curve. The results show that the AJS-based inversion of synthetic data sets under different conditions (for instance, in the presence of noise, a broad search space, and a low-velocity layer) is fast, stable, and powerful. In addition, the Vs profile estimated from a real dispersion curve by the proposed algorithm is consistent with the borehole stratigraphic data and the geological information of the study area. While the performance of the AJS-based inversion is as good as that of the PSO-based method, the AJS algorithm requires fewer internal parameter settings and achieves sufficient accuracy in fewer iterations; thus, it is more time-efficient and easier to implement.
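The AJS time-control mechanism, which alternates between following the ocean current and in-swarm motion, can be sketched in one dimension; the coefficients and motion rules below are a hedged simplification of AJS, not the inversion code:

```python
import random

def jellyfish_search(f, lb, ub, n=20, iters=200, seed=0):
    """Simplified 1-D artificial jellyfish search: a decaying time-control
    value switches between ocean-current motion (toward the best solution)
    and in-swarm motion (relative to a random neighbor)."""
    rng = random.Random(seed)
    pop = [lb + rng.random() * (ub - lb) for _ in range(n)]
    best = min(pop, key=f)
    for t in range(1, iters + 1):
        c = abs((1 - t / iters) * (2 * rng.random() - 1))  # time control
        mean_pop = sum(pop) / n
        for i in range(n):
            if c >= 0.5:  # ocean current toward the best solution
                new = pop[i] + rng.random() * (best - 3 * rng.random() * mean_pop)
            else:         # move toward a better neighbor, away from a worse one
                j = rng.randrange(n)
                step = (pop[j] - pop[i]) if f(pop[j]) < f(pop[i]) else (pop[i] - pop[j])
                new = pop[i] + rng.random() * step
            new = min(max(new, lb), ub)  # clamp to the search space
            if f(new) < f(pop[i]):       # greedy replacement
                pop[i] = new
                if f(new) < f(best):
                    best = new
    return best
```

In the dispersion-curve setting, `f` would be the misfit between observed and modeled phase velocities, and each individual would be a layered Vs model rather than a scalar.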

Within the context of the changing design requirements of digital systems spanning the semiconductor era, this paper describes the significant steps in the development of Design Automation technology in IBM. We cover the design tools which support the design of the electronic portion of such systems. The paper emphasizes the systems approaches taken and the topics of design verification, test generation, and physical design. Descriptions of the technical contributions and interactions which have led to the unique characteristics of IBM's Design Automation systems are included.