Book

Genetic Algorithms + Data Structures = Evolution Programs

Authors: Zbigniew Michalewicz
... Using the evolutionary approach recently developed by us, the shapes of odd 2s1d-shell 23Na, 25Mg, and 25Al nuclei in the ground and single-particle excited states have been extracted from the experimental data on the energies, spins, and parities of these states, as well as the measured probabilities of electromagnetic transitions between them. We have found that the single-particle spectra of the nuclei studied contain single states and continuous sets of states with abnormally weak deformation. ...
... Klepikov, I.S. Timchenko. Using the new evolutionary approach we have developed, the shapes of odd 2s1d-shell 23Na, 25Mg, and 25Al nuclei in the ground and single-particle excited states have been determined from experimental data on the energies, spins, and parities of these states, as well as from the measured probabilities of electromagnetic transitions between them. Individual states and sequences of states with abnormally small deformation have been found in the single-particle spectra of the studied nuclei. ...
... Evolutionary algorithms are, in general, a global optimization technique that nevertheless cannot guarantee that the optimum found is the global one (see, e.g., Refs. [23][24][25] or any textbook on evolutionary computation). Therefore, it is necessary to run the procedure several times. ...
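Since no single run is guaranteed to reach the global optimum, the remedy the excerpt mentions is simply restarting the search. A minimal Python sketch of such a multi-start wrapper; `run_ga` is an assumed user-supplied callable returning a (solution, fitness) pair, and maximization is assumed:

```python
import random

def multi_start(run_ga, n_restarts=10, seed=0):
    """Run a stochastic optimizer several times and keep the best outcome.

    Restarting mitigates, but cannot eliminate, convergence to local optima.
    """
    random.seed(seed)
    best_sol, best_fit = None, float("-inf")
    for _ in range(n_restarts):
        sol, fit = run_ga()      # any zero-argument GA run
        if fit > best_fit:       # maximization assumed
            best_sol, best_fit = sol, fit
    return best_sol, best_fit
```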
Article
Full-text available
Using the evolutionary approach recently developed by us, the shapes of odd 2s1d-shell 23Na, 25Mg and 25Al nuclei in the ground and single-particle excited states have been extracted from the experimental data on the energies, spins, and parities of these states, as well as the measured probabilities of electromagnetic transitions between them. We have found that the single-particle spectra of the nuclei studied contain single states and continuous sets of states with abnormally weak deformation. This indicates the existence of shape phase transitions from the spherical state of the nucleus into a deformed state.
... The process preserves the best chromosomes for subsequent generations, continuing until a termination condition is met. For an introduction and review of genetic algorithms, see Goldberg [38], Michalewicz [39], and Haupt and Haupt [40]. Genetic algorithms are particularly effective in handling optimization problems in a large or complex design space. ...
... Rather than using binary or other encoding methods, we employ real-value encoding with a precision of four decimal places for three reasons: (1) it is well-suited for optimization in a continuous search space, (2) it permits a wider range of possible values within smaller chromosomes, and (3) it is advantageous when addressing problems involving more complex values. As presented by Michalewicz [39], compared to binary encoding, real-valued encoding provides greater efficiency in terms of CPU time and offers superior precision for replications. A chromosome C with n × q dimension represents a possible design where n denotes the number of the design points and q represents the number of mixture components. ...
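To make the encoding concrete, the sketch below builds a random real-valued chromosome as an n × q matrix with four-decimal precision, as described in the excerpt. The bounds and dimensions are illustrative placeholders, not values from the paper; for a mixture design each row would additionally be constrained to sum to one:

```python
import random

def random_chromosome(n_points, n_components, low=0.0, high=1.0):
    """Real-valued chromosome: an n x q matrix (design points x mixture
    components), stored with four-decimal precision."""
    return [[round(random.uniform(low, high), 4)
             for _ in range(n_components)]
            for _ in range(n_points)]

chromosome = random_chromosome(6, 3)  # e.g., 6 design points, 3 components
```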
Article
Full-text available
Missing observations are a common problem in scientific and industrial experiments, particularly in small-scale experiments. They often present significant challenges when experiment repetition is infeasible. In this research, we propose a multi-objective genetic algorithm as a practical alternative for generating optimal mixture designs that remain robust in the face of missing observations. Our algorithm prioritizes designs that exhibit superior D-efficiency while maintaining a high minimum D-efficiency under missing observations. The focus on D-efficiency stems from its ability to minimize the impact of missing observations on parameter estimates, ensure reliability across the experimental space, and maximize the utility of available data. We study problems with three mixture components where the experimental region is an irregularly shaped polyhedron within the simplex. Our designs have proven to be D-optimal designs, demonstrating exceptional performance in terms of D-efficiency and robustness to missing observations. We provide a well-distributed set of optimal designs derived from the Pareto front, enabling experimenters to select the most suitable design based on their priorities using the desirability function.
... Test-data generation in software testing identifies a set of program input data that satisfy a given test-coverage criterion. This phase uses a GA [17] to generate test cases that cover all def-use pairs of the given program or its mutant. Then, the tested program is executed with the test cases generated by the GA. ...
... where x_i represents the decimal value of the binary string string_i [17]. Note that the above method can be applied to represent values of integer input variables by setting d_i to 0 and using the following formula instead of the formula in Equation (3): ...
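Although Equation (3) itself is elided above, the mapping being described is the classical binary-to-real decoding used throughout the GA literature. A hedged sketch; the paper's separate integer formula is not reproduced, so the integer case is approximated here by rounding:

```python
def decode(bits, a, b, d):
    """Decode a binary string into [a, b] with d decimal places via
    x = a + int(bits, 2) * (b - a) / (2**m - 1), where m = len(bits).
    With d = 0 the value is rounded to an integer."""
    m = len(bits)
    x = a + int(bits, 2) * (b - a) / (2 ** m - 1)
    return round(x, d) if d > 0 else int(round(x))

print(decode("1100100100", -5.0, 5.0, 2))  # e.g., 10 bits into [-5, 5]
```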
Article
Full-text available
Data-Flow and Higher-Order Mutation are white-box testing techniques. To our knowledge, no work has been proposed to compare Data-Flow and Higher-Order Mutation. This paper compares the all def-uses Data-Flow and second-order mutation criteria. The comparison will support testing decision-making, especially when choosing a suitable criterion. This comparison investigates the subsumption relation between these two criteria and evaluates the effectiveness of test data developed for each. To compare the two criteria, a set of test data satisfying each criterion is generated using genetic algorithms; the set is then used to explore whether one criterion subsumes the other and to assess the effectiveness of the test set developed for one methodology in terms of the other. The results showed that the mean mutation coverage ratio of the all du-pairs adequate test cover is 80.9%, and the mean data flow coverage ratio of the second-order mutant adequate test cover is 98.7%. Consequently, second-order mutation “ProbSubsumes” the all du-pairs data flow. The failure detection efficiency of mutation (98%) is significantly better than the failure detection efficiency of data flow (86%). Consequently, second-order mutation testing is “ProbBetter” than all du-pairs data flow testing. In contrast, the test suite of second-order mutation is larger than the test suite of all du-pairs.
... This operator is crucial in preventing the creation of a uniform population incapable of further evolution. For an introduction and review of genetic algorithms, see Goldberg [38], Michalewicz [39], and Haupt and Haupt [40]. ...
... Rather than using binary or other encoding methods, we employ real-value encoding with a precision of four decimal places for three reasons: (1) it is well-suited for optimization in a continuous search space, (2) it permits a wider range of possible values within smaller chromosomes, and (3) it is advantageous when addressing problems involving more complex values. As presented by Michalewicz [39], compared to binary encoding, real-valued encoding provides greater efficiency in terms of CPU time and offers superior precision for replications. A chromosome C with n × q dimension represents a possible design, where n denotes the number of design points and q represents the number of mixture components. ...
Preprint
Full-text available
Missing observations are a common problem in scientific and industrial experiments, particularly in small-scale experiments. They often present significant challenges when experiment repetition is infeasible. In this research, we propose a multi-objective genetic algorithm as a practical alternative for generating optimal mixture designs that remain robust in the face of missing observations. Our algorithm prioritizes designs that exhibit superior D-efficiency while maintaining a high minimum D-efficiency under missing observations. The focus on D-efficiency stems from its ability to minimize the impact of missing observations on parameter estimates, ensure reliability across the experimental space, and maximize the utility of available data. We study problems with three mixture components where the experimental region is an irregularly shaped polyhedron within the simplex. Our designs have proven to be D-optimal designs, demonstrating exceptional performance in terms of D-efficiency and robustness against missing observations. We provide a well-distributed set of optimal designs derived from the Pareto front, enabling experimenters to select the most suitable design based on their priorities using the desirability function.
... In particular, these include problems of linear, nonlinear, and convex programming, among others. Professor Z. Michalewicz, together with his colleagues and students, developed a number of techniques for solving such problems on the basis of the genetic algorithm [112,113,123,124]. He divides them into four categories [124]: methods based on preserving the feasibility of solutions; methods that introduce penalty functions; methods based on a strict separation of feasible and infeasible solutions; and other hybrid methods. ...
... Professor Z. Michalewicz, together with his colleagues and students, developed a number of techniques for solving such problems on the basis of the genetic algorithm [112,113,123,124]. He divides them into four categories [124]: methods based on preserving the feasibility of solutions; methods that introduce penalty functions; methods based on a strict separation of feasible and infeasible solutions; and other hybrid methods. ...
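Of the four categories, the penalty-function approach is the simplest to illustrate: infeasible individuals are kept but their fitness is degraded in proportion to the constraint violation. A minimal sketch assuming maximization and inequality constraints of the form g(x) <= 0; the penalty coefficient is an arbitrary illustrative choice:

```python
def penalized_fitness(objective, constraints, x, penalty=1000.0):
    """Static penalty method: subtract a cost proportional to the total
    violation, so infeasible solutions can still guide the search.
    `constraints` is a list of callables g with the convention g(x) <= 0."""
    violation = sum(max(0.0, g(x)) for g in constraints)
    return objective(x) - penalty * violation

# e.g., maximize f(x) = -(x - 2)**2 subject to x <= 1
f = lambda x: -(x - 2.0) ** 2
g = [lambda x: x - 1.0]
print(penalized_fitness(f, g, 0.9), penalized_fitness(f, g, 3.0))
```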
Book
Full-text available
The monograph is devoted to the problem of increasing the efficiency of decision-making processes in firefighting at facilities that may hold a large number of people and material assets. One way to solve it is to improve fire monitoring systems so as to minimize the time to fire detection. Reliable and fast triggering of fire detectors will save human lives and reduce material losses. It is proposed that fire alarm systems be designed taking into account the scale of the possible negative consequences of a fire. Models are built that make it possible to determine the response time of detectors for premises with uniform, non-uniform, and variable fire loads and with sources of increased danger. To optimize detector placement, evolutionary methods based on genetic algorithms and evolution strategies are developed. Intended for fire-safety professionals, researchers, and anyone interested in decision-making processes under uncertainty and in intelligent information technologies.
... The use of predictive maintenance is discussed in several publications. These often consider the use of artificial neural networks [4][5][6][7][8][9][10][11], fuzzy logic [12][13][14][15], genetic algorithms [16][17][18], or expert systems [19][20][21][22]. ...
Article
Full-text available
In the field of transport, and more precisely in supply chains, if any of the vehicle components are damaged, it may cause delays in the delivery of goods. Eliminating undesirable damage to the means of transport by predicting technical conditions and impending failures may increase the reliability of the entire supply chain. From the aspect of sustainability, reducing the number of failures also makes it possible to reduce supply chain disturbances, the costs associated with delays, and the materials needed for the repair of the means of transport, since, in this case, the costs relate only to elements replaced before their failure. In this way, more serious damage is prevented. Often, failure of one item causes damage to others, which generates unnecessary costs and increases the amount of waste due to the number of damaged items. This article provides the authors' method of technical-condition prediction; by applying the method, it would be possible to develop recommended maintenance activities for key elements related to the safety and reliability of transport. The combination of at least two artificial intelligence methods allows us to achieve very good prediction results thanks to the possibility of individually adjusting the weights between the methods used. Such predictive maintenance methods can be successfully used to ensure sustainable development in supply chains.
... Evolutionary Algorithms (EAs) are popular and powerful population-based metaheuristic optimization methods. Their effectiveness strongly depends on the designed evolutionary operators: mutation, crossover, and selection [1]. One potential improvement is to add local optimization techniques, either as a separate algorithm step or as part of the evolutionary operators [2]. ...
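One common realization of the local-optimization idea above is a short hill-climbing refinement applied to selected individuals (the memetic-algorithm pattern). A sketch under the assumptions of real-valued genes and a maximized fitness; the step size and trial count are illustrative:

```python
import random

def local_refine(x, fitness, step=0.05, tries=20):
    """Hill-climbing step embeddable in an EA, either as a separate stage
    or inside an evolutionary operator."""
    best, best_fit = list(x), fitness(x)
    for _ in range(tries):
        cand = [xi + random.uniform(-step, step) for xi in best]
        cand_fit = fitness(cand)
        if cand_fit > best_fit:
            best, best_fit = cand, cand_fit
    return best
```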
Chapter
Full-text available
In this paper, we introduce several mutation modifications in an Evolutionary Algorithm for finding Strong Stackelberg Equilibrium in sequential Security Games. The mutation operator used in the state-of-the-art evolutionary method is extended with several greedy optimization techniques. The proposed mutation operators are comprehensively tested on three types of games with different characteristics (in total, over 300 test games). The experimental results show that applying some of the proposed mutations yields Defender's strategies with higher payoffs. A trade-off between result quality and computation time is also discussed.
... Genetic algorithms construct a succession of search-space points moving towards an ideal outcome by mimicking biological evolution. A better set of coded variables is created by the GA by choosing members of the coded-variable population with the best fitness levels and performing a combination of crossover, mating, and mutation actions on them [37]. ...
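The cycle sketched in that excerpt (evaluate, select the fittest, recombine, mutate) can be condensed into a short skeleton. All four callables are problem-specific placeholders, crossover is assumed to return a single offspring, and truncation selection stands in for whatever scheme a given paper uses:

```python
import random

def ga(fitness, new_individual, crossover, mutate,
       pop_size=50, generations=100):
    """Skeleton of a generational GA: score, select, recombine, mutate."""
    pop = [new_individual() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=fitness, reverse=True)
        parents = ranked[: pop_size // 2]      # truncation selection
        pop = [mutate(crossover(*random.sample(parents, 2)))
               for _ in range(pop_size)]
    return max(pop, key=fitness)
```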
Article
Full-text available
This paper introduces an innovative approach that significantly enhances the reliability indices of the radial distribution system, thereby effectively upgrading its overall performance. The optimal location and number of isolators in the electrical distribution network are determined using metaheuristic algorithms, namely the genetic algorithm and particle swarm optimization. The product of the system's annual outage duration and the total number of consumers affected is used to frame the problem within the constraints of the ideal number of switches and the need for a power supply that can meet consumer demand. The cost of energy not served (CENS) and the system average interruption duration index (SAIDI) are the reliability indicators used to characterize the problem. The method has been tested on 59 and 34 load-point systems, and the results demonstrate that the suggested method is acceptable. GA and PSO are compared on the 34- and 59-bus radial networks, and GA is found to provide better results. The ENS and profit have been compared, and GA provides superior results in both cases. It is also observed that the optimal number of isolators required in the 59-bus system is 31 and 42 for GA and PSO, respectively. Applying the suggested technique for isolator placement in the 59-bus radial network provides a reduction in SAIDI of 45.6% and 43.97% for GA and PSO, respectively; the reduction in ENS is 59.9% and 58.45% for GA and PSO, respectively. The reduction in ENS indicates that customers are better satisfied with the quality of the service rendered in their location.
... The arithmetic crossover method is used to combine chromosomes [38]. According to the authors, the descendants are formed from linear combinations of the original individuals. ...
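Arithmetic crossover, as described there, forms offspring from convex (linear) combinations of the parents. A minimal sketch for real-valued chromosomes of equal length; the mixing coefficient is drawn uniformly at random:

```python
import random

def arithmetic_crossover(p1, p2):
    """Each offspring gene is a convex combination of the parents' genes."""
    a = random.random()
    c1 = [a * x + (1 - a) * y for x, y in zip(p1, p2)]
    c2 = [(1 - a) * x + a * y for x, y in zip(p1, p2)]
    return c1, c2
```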
Article
Full-text available
This research aims to employ and qualify the bio-inspired algorithms Particle Swarm Optimization (PSO), Genetic Algorithm (GA), and Differential Evolution (DE) in extracting the parameters of the equivalent circuit of a photovoltaic module in the single-diode, five-parameter (1D5P) and two-diode, seven-parameter (2D7P) models, in order to simulate the I-V characteristic curves for any irradiation and temperature scenario. The peculiarity of this study lies in the exclusive use of information present in the module's datasheet to carry out the full extraction and simulation process, without depending on external sources of data or experimental data. To validate the methods, the data obtained by the simulations were compared with data from the module manufacturer in different irradiation and temperature scenarios. The algorithm and model with the highest accuracy was DE 1D5P, with a maximum relative error of 0.4% in conditions close to the reference and 3.61% for scenarios far from the reference. On the other hand, the algorithm that obtained the worst result in extracting parameters was the GA in the 2D7P model, which presented a maximum relative error of 9.59% in conditions far from the reference.
... The core algorithmic tool employed to perform antenna structure development using the parameterization of Section 2.2 is a floating-point evolutionary algorithm with elitism and adaptive adjustment of the mutation rate. The algorithm is similar to standard evolutionary procedures (see, e.g., [61]). The main components of the algorithm include the following: Random mutation with non-uniform probability distribution. ...
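The excerpt does not spell out the adaptive mutation-rate rule, so the following is only one plausible scheme, not the paper's: tighten the rate while the best fitness keeps improving, relax it on stagnation. All constants are illustrative:

```python
def adapt_mutation_rate(rate, improved, factor=2.0, low=1e-4, high=0.5):
    """Halve the rate after an improving generation (exploitation);
    double it after a stagnant one (exploration); clamp to [low, high]."""
    rate = rate / factor if improved else rate * factor
    return min(high, max(low, rate))
```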
Article
Full-text available
Designing modern antenna structures is a challenging endeavor. It is laborious and heavily reliant on engineering insight and experience, especially at the initial stages oriented towards the development of a suitable antenna architecture. Due to its interactive nature and hands-on procedures (mainly parametric studies) for validating the suitability of particular geometric setups, typical antenna development requires many weeks and significant involvement of a human expert. The same reasons only allow the designer to try out a very limited number of options in terms of antenna geometry arrangements. Automated topology development and dimension sizing is therefore of high interest, especially from an industry perspective where time-to-market and expert-related expenses are of paramount importance. This paper discusses a novel approach to unsupervised specification-driven design of planar antennas. The presented methodology capitalizes on a flexible and scalable antenna parameterization, which enables the realization of complex geometries while maintaining reasonably small parameter space dimensionality. A customized nature-inspired algorithm is employed to carry out space exploration and identification of a quasi-optimum antenna topology in a global sense. A fast gradient-based procedure is then incorporated to fine-tune antenna dimensions. The design framework works entirely in a black-box fashion with the only input being design specifications, and optional constraints, e.g., concerning the structure size. Numerous illustration case studies demonstrate the capability of the presented technique to generate unconventional antenna topologies of satisfactory performance using reasonable computational budgets, and with no human expert interaction necessary whatsoever.
... In this paper, the smaller the interference, the higher the fitness value. Some core operators [19], [42], [43] in our slow but thorough GASch are described below. Selection: based on the fitness value, the selection process selects a finite number of solutions or chromosomes for the mating pool from a randomized set of solutions. The number of copies of a particular chromosome transferred to the new mating pool depends on its fitness value. ...
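The selection step described there is fitness-proportional (roulette-wheel) selection: the expected number of copies a chromosome contributes to the mating pool grows with its fitness. A sketch assuming maximization and non-negative fitness values:

```python
import random

def roulette_wheel(population, fitnesses, k):
    """Draw k chromosomes for the mating pool with probability
    proportional to fitness (duplicates are expected and intended)."""
    return random.choices(population, weights=fitnesses, k=k)
```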
Article
Full-text available
The Uplink/Downlink transmission mode, or Transmission Order (TO), optimization has recently appeared as a new optimization domain in radio resource management. Such optimization is a combinatorial problem and requires a good heuristic algorithm to be solved approximately within a short time in a dynamic radio environment. This paper shows how the TO optimization problem in Time Division Duplex (TDD) indoor femtocells can be formulated and solved by Hopfield Neural Network (HNN) based TO schedulers. Both centralized and distributed versions are analyzed in the context of indoor femtocells. We also examine the proposed TO schedulers' system performance in a TDD indoor femtocell environment through extensive simulation campaigns. Our simulation results for a large 3-story building including 120 femtocells show that (i) the indoor femtocell system performance is improved by 13 to 20 percent by the proposed HNN schedulers, depending on the number of femtocells; (ii) the proposed TO schedulers converge within the first few epochs; and (iii) the performance of the proposed schedulers is justified by a time-consuming but thorough Genetic Algorithm Scheduler.
... First, we check whether a mutation occurs according to the mutation probability; in the work done here, we use the non-uniform mutation of Michalewicz (1992). The process of selection, reproduction, and mutation is then repeated. ...
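Michalewicz's non-uniform mutation shrinks the perturbation of a gene as the generation counter t approaches its maximum T, shifting the search from coarse exploration to fine tuning. A sketch of the standard operator; the shape parameter b = 5 is a typical, illustrative value:

```python
import random

def nonuniform_mutation(x, lb, ub, t, T, b=5.0):
    """Perturb gene x within [lb, ub]; the step size decays with t/T."""
    def delta(y):
        r = random.random()
        return y * (1.0 - r ** ((1.0 - t / T) ** b))
    if random.random() < 0.5:
        return x + delta(ub - x)
    return x - delta(x - lb)
```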
Article
Full-text available
Sorghum seedling transplanting is an essential agricultural activity in Sub-Saharan Africa. However, conventional manual transplanting of sorghum is a time-consuming, labour-intensive, costly activity with a low transplanting rate, uneven plant distribution, and low accuracy. In order to realize rapid and precise sorghum seedling transplanting, a duckbill-type mechanism has been designed. This mechanism is a five-bar linkage consisting of two crankshafts, two connecting rods, and a duckbill-shaped planter, intended to improve the quality of transplanting operations. The study includes kinematic and synthesis analysis in MATLAB, parts design, and motion analysis in SolidWorks. After synthesis analysis using a genetic algorithm, the optimal length between the two cranks is 300 mm, the length of the upper crankshaft is 106 mm, the length of connecting rod I is 169 mm, the length of connecting rod II is 222 mm, and the length of the lower crankshaft is 67 mm. Furthermore, the speed and acceleration analysis shows that the seedlings are planted in a zero-speed operation to obtain high perpendicularity. The results show that the proposed planting mechanism meets the agronomic requirements of transplanted sorghum with a good transplanting rate.
... GA consists of five basic components, as summarized by Michalewicz (1996): (1) a genetic representation of the problem ...
Preprint
Full-text available
The loess tableland area, located in the south of the Ordos Basin, shows complicated landforms and geological conditions. These lead to a mass of noise on seismic profiles, especially surface waves. Conventional methods have difficulty suppressing 3D surface waves since they are hyperbolic in the far-array. This paper proposes a new model-based, data-driven noise attenuation method, which uses dispersion-curve analysis of surface waves and joint inversion based on a genetic algorithm (GA) and a conjugate gradient algorithm to build an accurate surface wave model, and then subtracts the model directly from the seismic data to improve the signal-to-noise ratio (SNR). Numerical tests show that the joint inversion of dispersion waves is stable, accurate, and efficient for multiple model types, including models with a low-velocity layer and with a high-velocity layer. In practical applications in the south of the Ordos Basin, the 3-D surface wave attenuation method is effective in surface wave suppression. The low-frequency information and the reflected signal are efficiently preserved. Seismic facies are more apparent for reservoir characterization. This method provides credible data for the exploration of the loess tableland area. Keywords: Surface Wave, Model-Based, Genetic Algorithm, Dispersion Curves
... The solutions are subjected to genetic operations such as selection, crossover, and mutation that mimic the process of biological evolution. In this section, genetic operators and strategies are explained according to the following works [36][37][38][39]. The algorithms used in this work have been adapted to suit the requirements of the investigated problem. ...
Article
Full-text available
The aim of this paper is to optimize the thickness variation function of simply supported and cantilever beams, in terms of maximizing gaps between chosen neighboring frequencies, and to analyze the obtained results. The optimization results are examined in terms of achieving the objective function (related to eigenvalue problems), but also in terms of their dynamic stiffness (forced vibrations excited by a point harmonic load). In the optimization process, a genetic algorithm was used. Problems related to structural dynamics were solved by FEM implementation into the algorithm. Sample results were presented, and the developed algorithm was analyzed in terms of the results convergence by examining several variable parameters. The authors demonstrated the validity of applying the described optimization tool to the presented problems. Conclusions were drawn regarding the correlation between stiffness and mass distribution in the optimized beams and the natural frequency modes in terms of which they were optimized.
... The developed microwave DA synthesis technique is GA-based [48,49]. GA is a widespread and efficient optimization method for tasks with large search spaces. In the GA-based synthesis technique, binary chromosomes are used. ...
Article
Full-text available
This article presents a genetic algorithm-based technique for microwave integrated distributed amplifier synthesis. To speed up synthesis, only linear characteristics are used in the goal function. The technique uses MMIC foundry process design kit models, providing manufacturing-ready schematics. The models from the process design kit are represented as S-parameters, again to enhance algorithm performance. The technique allows searching among most of the existing schematics of the microwave distributed amplifier but is not limited to typical topologies. The section number is theoretically unlimited, making it possible to find a principally new circuit topology. The synthesis shortens and simplifies distributed amplifier design. Experimental validation of the technique is carried out by synthesizing the broadband driver stage for a 20-30 GHz buffer amplifier. The technique produces several amplifier topologies meeting the requirements. The most appropriate circuit was manufactured using a commercial 0.25 μm GaAs pHEMT process. The manufactured buffer amplifier demonstrates a gain of 10 dB and 20 dBm output power at the 1 dB compression point.
... The genetic algorithm, based on natural selection, the mechanism that propels biological evolution, is a technique for solving constrained and unconstrained optimization problems (Michalewicz and Schoenauer 1996; Michalewicz 1996). Chromosome representation, fitness selection, and biologically inspired operators make up the core elements of GA. ...
Article
Full-text available
The present research employs models based on the relevance vector machine (RVM) approach to predict the unconfined compressive strength (UCS) of cohesive virgin (fine-grained) soil. For this purpose, the Linear, Polynomial, Gaussian, and Laplacian kernel functions have been implemented in the RVM models. Two types of RVM models have been developed: (i) single kernel function-based (denoted SRVM) and (ii) dual kernel function-based (denoted DRVM). Each model has been optimized by both the genetic algorithm (GA) and particle swarm optimization (PSO). Eighty-five data points (75 training + 10 testing) have been collected from the literature to train and test the SRVM and DRVM models. The data proportionality method has been used to create six training databases, i.e., 50%, 60%, 70%, 80%, 90%, and 100%, to determine the effect of the quality and quantity of the training database on the performance, accuracy, and overfitting of the soft computing models. Ten conventional and three new performance parameters, i.e., the a20 index, index of agreement (IOA), and index of scatter (IOS), have been used to measure the performance of the models. The present research concludes that (i) a strongly correlated pair of data points affects the performance and accuracy of the model; (ii) the GA-optimized SRVM model MD119 has outperformed the other SRVM and DRVM models with a20 = 100, IOA = 0.9947, and IOS = 0.0272; (iii) a k-fold cross-validation test (k = 10) validates the capabilities of the SRVM and DRVM models; (iv) model MD119 has predicted UCS better than the GPR model MD11 developed in part 1 of this research; (v) highly correlated data points increase the overfitting of the model; (vi) model MD119 has predicted the UCS of lab-tested soil with a confidence interval of ± 4.0%.
... David Goldberg [20] was the first to introduce the genetic algorithm cycle, in which the search evolves by selecting the best individuals. Zbigniew Michalewicz [21] later developed a genetic algorithm cycle that changes the order of operations, carrying out selection after mutation and crossover, and adds an elitism operator. The associated coding process represents individuals by their constituent genes, encoded as binary values, real numbers, or integers, so that optimization proceeds over gene combinations. ...
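One generation under that reordered cycle might look as follows: crossover and mutation are applied first, selection afterwards, while an elitism operator copies the best individuals into the next generation unchanged. The operator callables are problem-specific placeholders, and the offspring-pool size is an illustrative choice:

```python
import random

def evolve_one_generation(pop, fitness, crossover, mutate, elite=1):
    """Elitism plus selection performed after mutation and crossover."""
    elites = sorted(pop, key=fitness, reverse=True)[:elite]
    pool = [mutate(crossover(*random.sample(pop, 2)))
            for _ in range(2 * (len(pop) - elite))]
    pool.sort(key=fitness, reverse=True)   # selection after variation
    return elites + pool[: len(pop) - elite]
```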
Article
Full-text available
Sea transportation, such as that by container ships, has an essential role in the economy both locally and internationally. Ships are a major commodity in distributing goods over long distances due to their relatively low price compared to air shipping. The study implemented an optimization method using heuristic algorithms for ship route selection to minimize operational costs based on the distances between 12 ports in the Asia-Pacific region. The ship speed, engine power, and fuel prices at each port are processed using asymmetric traveling salesman problem (ATSP) modeling. The research compares three different algorithms on the traveling salesman problem, namely the nearest neighbor algorithm, simulated annealing, and a genetic algorithm, with the objective of minimizing the fuel costs that ships will incur. The results show that the genetic algorithm provides the route with the lowest fuel cost.
... In order to achieve effective results in solving multi-objective problems, researchers have developed methods that cover the entire solution space and adapted them to solution algorithms. The fixed-weight objective function combines multiple objectives into a single objective, turning the task into a single-objective problem [2,44]. The variable-weight objective function was first created to eliminate the deficiencies of the fixed-weight objective function method [2,3]. ...
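The fixed-weight approach amounts to scalarization: a weighted sum of the objectives handed to any single-objective optimizer. A minimal sketch; a variable-weight variant would re-draw or adapt `weights` during the run:

```python
def weighted_sum(objectives, weights):
    """Combine several objective callables into a single cost function."""
    def cost(x):
        return sum(w * f(x) for w, f in zip(weights, objectives))
    return cost

# e.g., two competing objectives with fixed weights 0.7 / 0.3
f1 = lambda x: (x - 1.0) ** 2
f2 = lambda x: (x + 1.0) ** 2
combined = weighted_sum([f1, f2], [0.7, 0.3])
```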
Article
Full-text available
Here, the optimization of a quadrifilar helical antenna is presented to compare the performance of objective-function pairs adapted from mathematical models, using two different competitive multi-objective algorithms: DEA with a fixed-weight objective function structure and SPEA2 with a variable-weight objective function structure. The most important goal in optimization problems is to find the result with the lowest cost, so the selection of an appropriate objective-function pair is very important. The main aim of this study is to determine the optimum objective-function-pair model. For this purpose, five different objective-function models were derived using nonlinear mathematical models. These objective functions are adapted from polynomial, power, exponential, Gaussian, and Fourier mathematical models. In order to determine the most successful model conclusively, the objective functions adapted from the mathematical models are compared separately in both evolutionary algorithms, using different algorithm parameters and different weight coefficients. According to the results obtained, the objective function adapted from the power mathematical model has the lowest cost. This proposed adaptation technique, which is the novelty of the study, is an efficient and reliable method for finding the most appropriate objective function and the lowest-cost result in optimization problems. It can also be quickly adapted to any optimization problem.
... Evolutionary metaheuristics have proven to be universal global optimization algorithms. This claim is supported not only by textbooks and experimental research (e.g., [1,2]), but also by extensive theoretical works (e.g., [3,4]). On the one hand, they are easily understandable thanks to influences from the theory of evolution, introduced by Darwin and enriched over the centuries by evolutionists like Baldwin (who added the context of social learning) and Dawkins (with his notion of the meme) [5]. ...
... GA is a population-based optimization algorithm that adapts the concept of survival of the fittest [25]. It can be used to optimize controller parameters; in this paper, GA is used to optimize Verr, Xerr, and Vx in the ACC controller. ...
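ISE and IAE, the two objective functions compared in the article below, are integrals of the tracking error over time; a GA minimizes one of them over simulated controller responses. A sketch approximating the integrals from sampled data; the error trace and sampling step dt are assumed to come from the simulation:

```python
def ise(errors, dt):
    """Integral of squared error, approximated from samples."""
    return sum(e * e for e in errors) * dt

def iae(errors, dt):
    """Integral of absolute error, approximated from samples."""
    return sum(abs(e) for e in errors) * dt
```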
Article
Full-text available
The importance of safety in transportation has driven developments in automated vehicles. Many control methods have been introduced to ensure a safe distance between an automated vehicle and another car. This article presents an adaptive cruise control whose parameters are tuned by a genetic algorithm for the case study of safe distance in an automated vehicle. Two genetic algorithms with two different objective functions, i.e., integral square error (ISE) and integral absolute error (IAE), were evaluated. The results show that genetic algorithms work well with both functions, as indicated by the best fitness values obtained through the generations. The optimal controller parameters obtained by them also appear similar to each other, including the distance, velocity, and acceleration response of the automated vehicle. Only the standard deviation differentiates them. After 20 test runs, IAE produced standard deviations of 0.0288, 0.0439, and 0.0134 for verr_gain, xerr_gain, and vx_gain, respectively, while ISE yielded 0.0169, 0.0755, and 0.0101. From the total standard deviation values, we can conclude that the GA with IAE performs slightly better, since its smaller value indicates better repeatability.
... The genetic algorithm (Davis, 1991; Goldberg, 1989; Michalewicz, 1999), which is based on the principles of natural genetic systems, is a robust, adaptive, and efficient optimization tool. GA has a wide range of applications in scientific domains such as image processing, pattern recognition, and machine learning. ...
... The solutions are subjected to genetic operations such as selection, crossover, and mutation that mimic the process of biological evolution. In this section, genetic operators and strategies are explained according to the following works [32][33][34][35]. The algorithms used in this work have been adapted to suit the requirements of the investigated problem. ...
Preprint
Full-text available
The aim of this paper is to optimize the thickness variation function of simply supported and cantilever beams, in terms of maximizing gaps between chosen neighboring frequencies, and to analyze the obtained results. The optimization results are examined in terms of achieving the objective function (related to the eigenvalue problem), but also in terms of their dynamic stiffness (forced vibrations excited by a point harmonic load). In the optimization process, a genetic algorithm was used. Problems related to structural dynamics were solved by FEM implementation into the algorithm. Sample results were presented, and the developed algorithm was analyzed in terms of result convergence by examining several variable parameters. The authors demonstrated the validity of applying the described optimization tool to the presented problems. Conclusions were drawn regarding the correlation between stiffness and mass distribution in the optimized beams and the natural frequency modes in terms of which they were optimized.
... The corresponding dynamics is described by a multidimensional Boltzmann equation and can be simulated with the help of Monte Carlo algorithms [2,24]. With this work, we aim at extending the KBO algorithm, reaching out towards genetic algorithms (GAs) [20,27], a very popular class of metaheuristics that is widely used in engineering. A GA models a natural selection process based on biological evolution [13,21]. ...
Preprint
Full-text available
We propose and analyse a variant of the recently introduced kinetic-based optimization method that incorporates ideas like survival of the fittest and mutation strategies well known from genetic algorithms. Thus, we provide a first attempt to reach out from the class of consensus/kinetic-based algorithms towards genetic metaheuristics. Different generations of genetic algorithms are represented via two species identified with different labels, binary interactions are prescribed on the particle level, and we then derive a mean-field approximation in order to analyse the method in terms of convergence. Numerical results underline the feasibility of the approach and show in particular that the genetic dynamics allows improving the efficiency of this class of global optimization methods in terms of computational cost.
... The decoder uses an elaborate LM comprising 3 models: a word model using 3-grams, a lemma model using 5-grams, and a grammar class model using 7-grams. The three models are combined using linear interpolation, optimized with an Evolutionary Strategy [113] that minimizes perplexity. The feature front-end implements a dynamic normalization scheme and a VAD module. ...
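Interpolating several language models and tuning the weights to minimize held-out perplexity can be sketched as follows. The (1+1)-ES below is a generic stand-in, not the thesis's exact optimizer; all constants and data shapes are illustrative assumptions:

```python
import math
import random

def perplexity(probs_per_token, w):
    """probs_per_token: (p_word, p_lemma, p_class) triples on held-out
    text; w: non-negative interpolation weights summing to one."""
    ll = sum(math.log(w[0] * pw + w[1] * pl + w[2] * pc)
             for pw, pl, pc in probs_per_token)
    return math.exp(-ll / len(probs_per_token))

def es_optimize(probs_per_token, steps=200, sigma=0.05):
    """Minimal (1+1)-ES over the weight simplex: perturb, renormalize,
    keep the candidate if perplexity improves."""
    w = [1 / 3] * 3
    best = perplexity(probs_per_token, w)
    for _ in range(steps):
        cand = [max(1e-6, wi + random.gauss(0, sigma)) for wi in w]
        s = sum(cand)
        cand = [c / s for c in cand]
        p = perplexity(probs_per_token, cand)
        if p < best:
            w, best = cand, p
    return w, best
```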
Thesis
Full-text available
This thesis describes several unconventional methods of signal analysis for the purpose of modeling and recognizing speech and music. This process is commonly referred to as feature extraction and is an important step in any machine learning task. Most of the current research on this topic involves Fourier transform derived features. These are usually formulated as a set of spectral features arranged according to a perceptual scale, like the mel scale, and possibly transformed into the cepstrum domain. The basis for this thesis lies in the use of alternative signal representation techniques derived from two signal processing methodologies. One involves sparse coding mechanisms and the matching pursuit (MP) algorithm. The other is a novel wavelet derived feature set known as the scattering wavelet transform (SWT). These methods have already been applied to various signal processing tasks, involving both audio and image processing. On the other hand, they have not been utilized in many practical settings, like the modern large-vocabulary continuous speech recognition (LVCSR) systems. The sparse coding mechanisms are often used in computer vision research but rarely are they applied to analyzing audio and even less so to speech recognition. The SWT is a fairly novel technique and while it has been used for solving some speech related problems it was never utilized in an actual LVCSR system. Within the thesis, sparse coding mechanisms are studied in detail in order to verify their capacity for modeling speech signals. Several coding mechanisms and dictionary adaptation methods are discussed and the technique that yields the highest quality of reconstruction is chosen. Similarly, the SWT is chosen in a configuration best fitting its intended use. Next, both of these feature sets are tested on the problem of framewise phoneme classification, representative of the issues behind the acoustic modeling used in most speech recognition systems. The SWT is additionally tested on two more problems: musical genre recognition and LVCSR. All these methods are compared to the most commonly used signal processing methods. Various topics related to the above experiments were also discussed, like the construction of LVCSR systems and various usability concerns related to exploiting such systems in real-life situations, with an example of a dialog system operating in a telephony environment. This dissertation postulates four main theses. It is shown that sparse coding can be effectively used to encode speech signals and that this form of representation can be used to improve the performance of speech recognition. The second thesis shows that SWT also enhances speech recognition accuracy, which is proven using the same problem that was utilized in the first thesis. In addition to that, the third thesis demonstrates that the SWT derived feature set also improves the performance of LVCSR. The final thesis shows that IA has substantial significance in voice user interface (VUI) design. The author's contribution to this field of science is primarily in the novel application of the methods described above, in order to make them usable in practical speech recognition tasks. The author's contribution also includes a novel approach to the conversion of sparse coding into a form which can be applied to speech recognition and an innovative concept of exploiting IA in the domain of VUIs.
... This generates, often randomly, a population from which new generations are formed. At this point, the terminating condition also needs to be defined, so that the algorithm stops running once an acceptable solution is found [30]. The second step is the crossover. ...
Article
Full-text available
This paper is focused on structural scalability studies of a new generation of civil tiltrotor wingbox structures. Starting from a reference wingbox, developed under the H2020 Clean Sky 2 NGCTR-TD T-WING project, a geometric scaling was performed to upscale the concept to a larger-class tiltrotor named "NGCTR". Given the wing and wingbox geometry, a multi-objective optimization based on genetic algorithms was performed to find for the NGCTR, among different materials and layups, the best composite wing in terms of weight that satisfies the stiffness and crash requirements. The crash requirement plays an important role with regard to wing weight performance. It was found that not all materials investigated in this study succeeded in satisfying both the stiffness and crash requirements. The results in terms of minimum structural mass as the target of the optimization process show that the mass ratio of the optimized up-scaled wing is near the geometrical scale factor: 1.58 vs. 1.29. Furthermore, the solution found by the optimizer for the NGCTR up-scaled wing is comparable with other tiltrotor data from a literature study. The difference in terms of the ratio between wing structural weight and tiltrotor MTOW is Δ% = +1.4: a small, acceptable overestimation of weight for a design, optimization, and scalability method that is easily adaptable and effective. The study presented in this work is, in fact, part of a broader activity on scalability and constitutes its first phase, based on low-fidelity models. The scalability study will continue with a further phase (indicated as "phase 2"), in which more reliable models will be set up, allowing a better estimation of the wing's structural weight and further optimization. The results shown in this manuscript concern phase 1 only and can be considered a starting point at the System Requirements Review level of the up-scaled wing. This phase allowed for a fast exploration of the available solutions by making a first assessment of the main requirements and by aiding in the material choice at the very beginning of the design.
Preprint
Full-text available
Essential in pandemic management is accurate information on infection dynamics to plan for the timely installation of control measures and vaccination campaigns. Despite huge efforts in the clinical testing of individuals, the underestimation of the actual number of SARS-CoV-2 infections remains significant due to the large number of undocumented cases. In this paper we demonstrate and compare three methods to estimate the dynamics of true infections based on secondary data, i.e., (a) test positivity, (b) infection fatality, and (c) wastewater monitoring. The concept is tested with Austrian data on a national basis for the period of April 2020 to December 2022. Further, we use the results of prevalence studies from the same period to generate (upper and lower bounds of) credible intervals for true infections for four data points. Model parameters are subsequently estimated by applying Approximate Bayesian Computation – rejection sampling and Genetic Algorithms. The method is then validated for the case study of Vienna. We find that all three methods yield fairly similar results for estimating the true number of infections, which supports the idea that all three datasets contain similar baseline information. None of them is considered superior, as their advantages and shortcomings depend on the specific case study at hand.
Preprint
Full-text available
Purpose: To predict foot soft tissue stiffness based on plantar pressure characteristics during walking using a neural network model, and to examine the association between plantar pressure features and foot soft tissue stiffness using mean impact value analysis. Methods: 30 male subjects were recruited. A foot pressure measurement system was used to collect average pressure data from different foot regions during 5 trials of walking for both feet. Foot soft tissue stiffness was recorded using a MyotonPRO biological soft tissue stiffness meter before each walking trial. The intraclass correlation coefficient was used to evaluate within-session reliability for each measurement. A backpropagation neural network, optimized by integrating particle swarm optimization and a genetic algorithm, was constructed to predict foot soft tissue stiffness using plantar pressure data collected during walking. Mean impact value analysis was conducted in parallel to investigate the relative importance of different plantar pressure features. Results: All parameters except the average pressure in the 4th metatarsal region demonstrated moderate to high within-session reliability. For the training set, the maximum relative error percentage between predicted and actual data was 7.82%, the average relative error percentage was 1.98%, the mean absolute error was 9.42 N/m, the mean bias error was 0.77 N/m, and the root mean square error was 11.89 N/m. For the test set, the maximum relative error percentage was 7.35% and the average relative error percentage was 2.55%. The mean absolute error, mean bias error, and root mean square error were 12.28 N/m, -4.43 N/m, and 14.73 N/m, respectively. The regions with the highest contribution rates to foot soft tissue stiffness prediction were the 3rd metatarsal (13.58%), 4th metatarsal (14.71%), midfoot (12.43%), and medial heel (12.58%) regions, which accounted for 53.3% of the total contribution. Conclusions: The optimized algorithm developed in this study can effectively predict foot soft tissue stiffness from regional plantar pressures during walking. Pressures in the medial heel, midfoot, and 3rd and 4th metatarsal regions during walking best reflect foot soft tissue stiffness. Future studies are suggested to develop subject-specific prediction models for different foot types and foot conditions based on biomechanical characteristics during actual movements.
Article
Full-text available
Plain Language Summary: Understanding the evolution of fracture permeability during hydraulic stimulation of subsurface reservoirs is the key to characterizing fluid transport and formulating strategies to limit induced seismicity. Accordingly, there is a significant interest in deciphering how the fluid pressurization rate, a constitutive operational parameter during injection, influences the transient permeability change during fracture slip. We conducted a series of experiments in the laboratory using different fluid pressurization rates on a natural rough fracture in granite under a pre‐stressed state. The fracture had a high initial permeability. Our findings show that when fluid is injected into a fracture with a slight velocity‐strengthening frictional behavior, it causes slow slipping with significant permeability enhancement. The change in hydraulic aperture caused by slip velocity is the main reason for the temporary change in permeability, and this effect is modulated by fluid pressurization rate and fracture surface irregularities. Our results suggest that we can modulate the permeability of subsurface geoenergy reservoirs by controlling the fluid pressurization rate on slowly slipping fractures.
Article
Full-text available
This paper uses an effective method, the CODEQ method, with integer programming for solving capacitor placement problems in distribution systems. Differing from the original differential evolution (DE), the CODEQ method uses the concepts of chaotic search, opposition-based learning, and quantum mechanics to overcome the drawback of selecting the crossover and scaling factors used in the original DE method. One benchmark function and one 9-bus system from the literature are used to compare the performance of the CODEQ method with DE and simulated annealing (SA). Numerical results show that the performance of the CODEQ method is better than that of the other methods. Also, the CODEQ method applied to the 9-bus system is superior to several other methods in terms of power loss and costs.
Chapter
A lot of studies have been made on the new product development process to make it an ideal procedure, and many researchers have contributed significantly to this by studying various factors associated with it. In this study, an attempt has been made to predict the optimal numbers of new products produced by the electronics and metal & machinery industries by considering various factors that significantly affect the production patterns of these industries. For prediction purposes, a functional link artificial neural network (FLANN) with and without nature-inspired techniques has been used, and the performance of both models has been compared using mean square error (MSE) and mean absolute percentage error (MAPE) as measurement indices.
Conference Paper
Random Forest (RF) is one of the most popular and effective machine learning algorithms. It is known for its superior predictive performance, versatility, and stability, among other things. However, an ensemble of decision trees (DTs) represents a black-box classifier. On the other hand, interpretability and explainability are among the top artificial intelligence trends, aiming to make predictors more trustworthy and reliable. In this paper, we propose an evolutionary algorithm to extract a single DT that mimics the original RF model in terms of predictive power. The initial population is composed of trees from the RF. During evolution, the genetic operators modify individuals (DTs) and exploit the initial (genetic) material, e.g., splits/tests in the tree nodes or larger expanded parts of the DTs. The results show that the classification accuracy of a single DT predictor is not worse than that of the original RF. At the same time, and probably most importantly, the resulting classifier is a single, smaller-size DT that is almost self-explainable.
Book
This book is a reprint of the special issue of the journal Algorithms with the same title and can be downloaded in full as a PDF from https://www.mdpi.com/books/book/7632. It consists of IX + 265 pages and is also available as a hardcover book.
Book
Full-text available
Computational intelligence-based optimization methods, also known as metaheuristic optimization algorithms, are a popular topic in mathematical programming. These methods have bridged the gap between various approaches and created a new school of thought to solve real-world optimization problems. In this book, we have selected some of the most effective and renowned algorithms in the literature. These algorithms are not only practical but also provide thought-provoking theoretical ideas to help readers understand how they solve optimization problems. Each chapter includes a brief review of the algorithm’s background and the fields it has been used in. Additionally, Python code is provided for all algorithms at the end of each chapter, making this book a valuable resource for beginner and intermediate programmers looking to understand these algorithms.
Chapter
This article reviews the major improvements in efficiency and quality of evolutionary multi-objective and multi-disciplinary design optimization techniques achieved during 1994-2021. First, we briefly introduce Evolutionary Algorithms (EAs) of increasing complexity as accelerated optimizers. After that, we introduce the hybridization of EAs with game strategies to gain higher efficiency. We review a series of papers where this technique is considered an accelerator of multi-objective optimizers and benchmarked on simple mathematical functions and simple aeronautical model optimization problems using friendly design frameworks. Results from numerical examples from real-life design applications related to aeronautics and civil engineering, with the chronologically improved EA models and hybridized game EAs, are listed, briefly summarized, and discussed. This article aims to provide young scientists and engineers with a review of the development of numerical optimization methods and results in the field of EA-based design optimization, which can be further improved by, e.g., tools of artificial intelligence and machine learning. Keywords: single/multi-disciplinary design optimization, evolutionary algorithms, game strategies, hybridized games, aeronautics, civil engineering
Chapter
The article presents the popularity of nature-based optimization methods in the context of issues related to digital filters. An analysis was made concerning the number of publications in popular scientific databases. Attention was paid to publications related to nature-inspired optimization methods and their application to digital filter design. The results obtained from the scientific databases make it possible to identify the most popular optimization methods in the context of their applications to digital filters. In this paper, we also point out the thematic areas in which articles related to nature-inspired optimization methods and digital filters are published more often.
Chapter
Full-text available
Web servers are a fundamental part of the operation of web applications, and all the information of organizations and individuals flows through them. They are constantly threatened by computer attacks, which exploit vulnerabilities or security problems to achieve their goals. Web administrators and computer security specialists must, by means of security controls, guarantee the preservation of the confidentiality, integrity, and availability of information. In this research, security controls for web servers are developed that help increase the protection of information, reduce software vulnerabilities, and therefore reduce the chances of a successful computer attack. The results obtained confirm their effectiveness in mitigating vulnerabilities and increasing the protection of information.
Article
Full-text available
This paper reflects on how some Artificial Intelligence techniques may positively affect the operation of a distance education platform. It addresses a particular distance education platform into which specific characteristics are incorporated, such as adaptation to different user profiles, both to diagnose and to determine a plan of the most adequate teaching strategies. The use of Bayesian networks and neural networks in the evaluation process is mentioned, and a summarized example of the use of the platform is included.
Article
Full-text available
This paper proposes a parametric study of a modified rectangular microstrip antenna in the frequency range of 1.4-2.65 GHz for wireless communication applications, incorporating the optimization methods Particle Swarm Optimization (PSO) and Fruit Fly Optimization Algorithm (FOA). To design an antenna using optimization methods, a fitness function of the required parameters is needed. The resonance frequency of Microstrip Patch Antennas (MPAs) depends on various parameters, and a standard frequency function does not exist for MPAs. In this study, a rectangular patch antenna is designed for the required resonance frequency and modified with circular quarter slots. The frequency shift with the change of the design variables, which are the substrate thickness and the radius of the slots, is observed. The resonance frequency is obtained as a function of the design variables, and it is used in the optimization process to minimize the difference between the target frequency and the calculated one. The original FOA and PSO algorithms have been adapted for application to the modified rectangular patch antenna design problem: resonance frequency and antenna design. The design parameter values obtained via optimization and the performance of the optimization methods are presented. The results show that both PSO and FOA find the dimensions correctly. It is also observed that the sensitivity of the FOA increases with the fruit fly population and the convergence gets faster. The outcomes of this paper show that the PSO algorithm gives better results than the FOA for the proposed antenna.
Chapter
Full-text available
Tourism has a positive effect on economic growth in large and small countries, and the combination of technology and applications is referred to as Tourism 3.0. In this chapter, we propose a tourism app that produces personalized trip itineraries. The app embeds a genetic algorithm that searches for the best choices of places to maximize the tourist's satisfaction. However, running a genetic algorithm on a standard smartphone poses a challenge, since a standard smartphone is a constrained device, and this may affect tourists' behavior towards using our proposed tourism app. To evaluate tourists' attitudes toward using our genetic-algorithm-based tourism app, we apply the technology acceptance model. Results show that most test participants found our tourism app easy to use. They also show a strong positive correlation between attitude towards use and intention to use. Keywords: Technology acceptance model, Genetic algorithm, Itinerary, Tourism app
Chapter
In general, the genetic algorithm (GA) is well suited to optimization (maximization and minimization) problems over numerical and random sequences, often reaching much better solutions in a shorter time than many other mathematical modeling alternatives. The first historical appearance of the decimal number system is explained in detail. GA mutation and crossover operations depend on random variability, so the different random selection schemes used in GA procedures are explained. The major components of a GA procedure are optimization, error minimization, fitness, the target function, the initial population, and the mutation and crossover operators. Although the method itself contains random elements, GA can reach the absolute optimum solution in the shortest time. The GA method is easy for everyone to understand, since it reaches its result with only arithmetic calculations, without requiring detailed and heavy mathematics. For this, however, it is necessary to explain the verbal aspects of the subject within the framework of the rules of philosophy, logic, and rationality. In this chapter, the principles and logic of the GA philosophy and its similarities with other classical methods are explained, and the reader's ambition for the subject and self-development are encouraged by giving the necessary clues. Different application examples are given with numerical applications. Keywords: Algorithm, Crossover, Decimal, Fitness, Genetic, Mutation, Optimization, Population, Random selections, Target
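The components listed above (initial population, fitness, crossover, mutation, and elitist selection) can be illustrated with arithmetic only, in line with the chapter's emphasis; the one-dimensional objective below is an arbitrary illustrative choice.

```python
# Minimal GA sketch: population, fitness, arithmetic crossover, mutation,
# and elitism, using nothing beyond basic arithmetic.
import random

def fitness(x):
    return -(x - 3.0) ** 2          # toy objective, maximized at x = 3

pop = [random.uniform(-10, 10) for _ in range(30)]   # initial population
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                               # keep the best chromosomes
    children = []
    while len(children) < 20:
        a, b = random.sample(parents, 2)
        child = 0.5 * (a + b)                        # arithmetic crossover
        if random.random() < 0.1:                    # occasional mutation
            child += random.gauss(0, 1)
        children.append(child)
    pop = parents + children                         # elitist replacement

print(max(pop, key=fitness))
```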
Article
Full-text available
Numerical optimization has been ubiquitous in antenna design for over a decade. It is indispensable for handling multiple geometry/material parameters, performance goals, and constraints. It is also challenging, as it incurs significant CPU expense, especially when the underlying computational model involves full-wave electromagnetic (EM) analysis; in most practical cases the latter is imperative to ensure evaluation reliability. The numerical challenges are even more pronounced when global search is required, which is most often carried out using nature-inspired algorithms. Population-based procedures are known for their ability to escape from local optima, yet their computational efficiency is poor, which makes them impractical when applied directly to EM models. A common workaround is the use of surrogate modeling techniques, typically in the form of iterative prediction-correction schemes, where the accumulated EM simulation data is used to identify the promising regions of the parameter space and to refine the surrogate model's predictive power at the same time. Notwithstanding, the implementation of surrogate-assisted procedures is often intricate, and their efficacy may be hampered by dimensionality issues and the considerable nonlinearity of antenna characteristics. This work investigates the benefits of incorporating variable-resolution EM simulation models into nature-inspired algorithms for the optimization of antenna structures, where the model resolution refers to the discretization density of the antenna structure in the full-wave simulation model. The considered framework uses EM simulation models that share the same physical background and are selected from a continuous spectrum of allowable resolutions. The early stages of the search are carried out with the lowest-fidelity model, whose resolution is then automatically increased until the high-fidelity antenna representation (i.e., one considered sufficiently accurate for design purposes) is reached. Numerical validation is carried out using several antenna structures with distinct types of characteristics and a particle swarm optimizer as the optimization engine. The results demonstrate that appropriate resolution adjustment profiles permit considerable computational savings (reaching up to eighty percent in comparison to high-fidelity-based optimization) without noticeable degradation of the search process reliability. The most appealing features of the presented approach, apart from its computational efficiency, are straightforward implementation and versatility.
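A minimal sketch of the fidelity-ramp idea follows: a PSO run whose objective is evaluated at a resolution that increases linearly from coarse to fine over the iterations. The simulate() stub, the linear ramp, and all constants are assumptions standing in for a real EM solver and the paper's adjustment profiles.

```python
# PSO with a variable-resolution objective: early iterations use a cheap
# coarse model, later iterations the full-resolution one.
import random

R_MIN, R_MAX, N_ITER, DIM = 0.4, 1.0, 50, 3

def simulate(x, resolution):
    # Stand-in for a full-wave EM solve: in practice, cost and accuracy
    # grow with discretization density; a crude noise term mimics the
    # low-fidelity modeling error here.
    exact = sum((xi - 1.0) ** 2 for xi in x)
    return exact + (1.0 - resolution) * random.uniform(-0.1, 0.1)

swarm = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(15)]
vel = [[0.0] * DIM for _ in swarm]
pbest = [p[:] for p in swarm]
gbest = min(swarm, key=lambda x: simulate(x, R_MIN))[:]

for it in range(N_ITER):
    res = R_MIN + (R_MAX - R_MIN) * it / (N_ITER - 1)   # linear ramp-up
    for i, p in enumerate(swarm):
        for d in range(DIM):
            vel[i][d] = (0.7 * vel[i][d]
                         + 1.5 * random.random() * (pbest[i][d] - p[d])
                         + 1.5 * random.random() * (gbest[d] - p[d]))
            p[d] += vel[i][d]
        if simulate(p, res) < simulate(pbest[i], res):
            pbest[i] = p[:]
    gbest = min(pbest, key=lambda x: simulate(x, res))[:]

print(gbest, simulate(gbest, R_MAX))   # final check at full resolution
```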
Thesis
Full-text available
This thesis presents an evolutionary approach for learning the Adaptive Network-Based Fuzzy Inference System (ANFIS). Previous works are based on gradient descent (GD), an algorithm that converges very slowly and gets stuck in poor local minima. In this thesis, we apply genetic algorithms (GA) and particle swarm optimization (PSO) to optimize the antecedent and consequent parameters of the ANFIS fuzzy rules. First, the subtractive clustering algorithm is used to determine the optimal structure of the ANFIS network, i.e., the best partitioning of the input space; then the antecedent and consequent parameters of the fuzzy rules are adjusted so that a specified objective function is minimized. The evolutionary process begins by randomly generating an initial population, in which each candidate solution is represented by a vector whose length depends on the number of antecedent and consequent parameters in the ANFIS model. The entire population then improves gradually until the maximum number of iterations is reached. The proposed approach was applied to phoneme recognition on the TIMIT database and speaker recognition on the CHAINS database. The results obtained by the hybrid GA-ANFIS and PSO-ANFIS models showed an improvement in accuracy compared to a similar classical ANFIS based on gradient back-propagation.
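The candidate-vector encoding just described can be sketched as follows for a first-order Sugeno ANFIS with Gaussian membership functions: each rule contributes one (center, width) pair per input plus a linear consequent. The rule count, the input dimension, and the Gaussian choice are illustrative assumptions; the thesis fixes the structure with subtractive clustering.

```python
# Flattening ANFIS antecedent and consequent parameters into one candidate
# vector, as an evolutionary algorithm would evolve it.
import math
import random

N_INPUTS, N_RULES = 2, 4
# Each rule: 2 * N_INPUTS premise values (center, width per input)
# plus N_INPUTS + 1 linear consequent coefficients.
VEC_LEN = N_RULES * (2 * N_INPUTS) + N_RULES * (N_INPUTS + 1)

def decode(v):
    k = 2 * N_INPUTS
    premises = [v[i * k:(i + 1) * k] for i in range(N_RULES)]
    rest = v[N_RULES * k:]
    consequents = [rest[i * (N_INPUTS + 1):(i + 1) * (N_INPUTS + 1)]
                   for i in range(N_RULES)]
    return premises, consequents

def anfis_output(v, x):
    premises, consequents = decode(v)
    w, y = [], []
    for prem, cons in zip(premises, consequents):
        # Firing strength: product of Gaussian memberships over the inputs.
        mu = 1.0
        for j in range(N_INPUTS):
            c, s = prem[2 * j], abs(prem[2 * j + 1]) + 1e-6
            mu *= math.exp(-((x[j] - c) / s) ** 2)
        w.append(mu)
        # First-order Sugeno consequent: linear function of inputs + bias.
        y.append(sum(a * xi for a, xi in zip(cons, x)) + cons[-1])
    total = sum(w) or 1e-12
    return sum(wi * yi for wi, yi in zip(w, y)) / total   # weighted average

v = [random.uniform(-1, 1) for _ in range(VEC_LEN)]       # one candidate
print(anfis_output(v, [0.3, -0.2]))
```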
Preprint
Full-text available
The Internet of Things (IoT) establishes a communication bridge between an enormous number of uniquely addressable objects, or "things", by transferring data drawn from different embedded sensors via the internet or compatible network protocols. IoT can be combined with numerous objects such as thermostats, cars, lights, refrigerators, and many other appliances that can connect via the internet. While objects of our daily life can establish a network connection and get smarter with IoT, robotics is another field that benefits from being brought under the IoT concept, adding a new perspective to robotics, a "mechanical smart intelligence" generally called the "Internet of Robotic Things" (IoRT). A robotic arm is a programmable mechanical arm with functionalities similar to those of a human arm. In this paper, IoRT is demonstrated by a 5-DoF (degrees of freedom) robotic arm that communicates as an IoRT device and is controlled by heterogeneous devices using IoT and "Cloud Robotics".
Chapter
Full-text available
Ternary Quantum-Dot Cellular Automata (TQCA) is an emerging nanotechnology that promises lower power consumption and smaller size, with higher speed compared with conventional transistor technology. In this article, we propose a novel architecture of level-sensitive scan design (LSSD) in TQCA. These circuits are helpful for the design of many logical and functional circuits. Simulation results for the proposed TQCA circuits are obtained using the QCADesigner tool. To realize a particular specification, the parameter values are found using the Schrödinger equation; here, we have optimized the different parameters in the Schrödinger equation. Keywords: TQCA, LSSD, Quantum phenomenon for combinational as well as sequential logic, J-K flip-flop, Schrödinger equation, Energy, Power
Chapter
A fuzzy system can perform well on uncertain data, but it is inefficient at defining appropriate membership function parameter values. Selecting the fuzzy membership function parameters poses challenges that can ultimately affect the performance of a fuzzy-based classification system. This paper proposes a particle swarm optimization-based fuzzy rule-based system for classification problems. The proposed model focuses on optimizing the membership function parameters using particle swarm optimization, which tunes the parameters of the triangular membership function to improve the performance of the fuzzy rule-based system. Two datasets, iris and appendicitis, with various combinations of partition type and rule induction algorithm, are used to evaluate the proposed model. For comparison, a simple fuzzy rule-based system without particle swarm optimization is developed with the same combination of partition type and rule induction algorithm. On the IRIS dataset, using the combination of hierarchical fuzzy partition, prototyping, and the Wang–Mendel rule induction algorithm, the developed fuzzy rule-based system attained accuracies of 96.86%, 96.86%, and 97.77% for swarm sizes of 10, 20, and 50 particles, respectively. The fuzzy rule-based system using k-means fuzzy partition, prototyping, and the fuzzy decision tree rule induction algorithm achieved accuracies of 91.67%, 91.89%, and 92.14% on the APPENDICITIS dataset with 10, 20, and 50 particles, respectively. The experimental results show that a particle swarm optimization-based fuzzy rule-based system can significantly improve accuracy compared to a simple fuzzy rule-based system. Keywords: Fuzzy rule-based system, Particle swarm optimization, Tuning of membership function
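As a concrete illustration of the tuning target described above, the sketch below encodes the breakpoints (a, b, c) of a single triangular membership function as a particle position and scores it by classification accuracy on a toy one-dimensional, two-class dataset. The dataset and the one-rule classifier are illustrative assumptions, and a standard PSO loop (like the one sketched earlier for the antenna problem) would drive this fitness.

```python
# Triangular membership function plus an accuracy fitness that a PSO loop
# can maximize by moving the breakpoints (a, b, c).
import random

def tri(x, a, b, c):
    # Triangular membership: 0 outside [a, c], rising on [a, b],
    # peaking at 1 in b, falling on [b, c].
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Toy 1-D dataset of (feature, class) pairs; purely illustrative.
data = [(0.2, 0), (0.3, 0), (0.4, 0), (0.9, 1), (1.1, 1), (1.4, 1)]

def accuracy(particle):
    # Decode a particle into ordered breakpoints and score the single
    # rule "membership > 0.5 => class 1" by classification accuracy.
    a, b, c = sorted(particle)
    if b - a < 1e-9 or c - b < 1e-9:
        return 0.0                         # degenerate triangle
    hits = sum((1 if tri(x, a, b, c) > 0.5 else 0) == y for x, y in data)
    return hits / len(data)

print(accuracy([0.6, 1.0, 2.0]))           # evaluate one candidate particle
```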
Article
It is often computationally intensive to solve combinatorial optimisation problems due to their inherently large solution spaces. Such problems are commonly observed in the fields of engineering system design and operations. Traditional techniques are limited in handling the growing complexity and size of these problems efficiently. This paper presents a twofold-update quantum-inspired genetic algorithm for combinatorial optimisation problems, generalised as an improved version of the quantum-inspired evolutionary algorithm. The paper proposes a new problem formulation and solution procedure for quantum-inspired evolutionary algorithms. An improved quantum-inspired genetic algorithm with a twofold update mechanism and various operators is proposed. The method is applied to a real-life engineering system optimisation problem of modular design. The results are compared against a classical genetic algorithm and a quantum-inspired evolutionary algorithm, indicating that the proposed method outperforms the traditional methods and is more robust and efficient.
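A sketch of the Q-bit machinery underlying quantum-inspired evolutionary algorithms is given below: each gene stores an angle whose amplitudes set the probability of observing a 0 or 1, and a rotation step nudges genes toward the best observed solution. The OneMax objective and the fixed rotation step are illustrative; the paper's twofold update mechanism and operators are not reproduced here.

```python
# Basic quantum-inspired evolutionary loop with Q-bit genes and a
# rotation-gate style update.
import math
import random

N_BITS, POP, DELTA = 16, 10, 0.05 * math.pi

def observe(theta):
    # Collapse each gene to a classical bit: P(bit = 1) = sin(theta)^2.
    return [1 if random.random() < math.sin(t) ** 2 else 0 for t in theta]

def fitness(bits):
    return sum(bits)                 # OneMax: count of ones

# One Q-individual per slot; all genes start at pi/4 (50/50 superposition).
qpop = [[math.pi / 4] * N_BITS for _ in range(POP)]
best = max((observe(q) for q in qpop), key=fitness)

for _ in range(200):
    for q in qpop:
        bits = observe(q)
        if fitness(bits) > fitness(best):
            best = bits
        for j in range(N_BITS):
            # Rotate each gene's angle toward the best solution's bit,
            # keeping it inside [0, pi/2].
            q[j] += DELTA if best[j] == 1 else -DELTA
            q[j] = min(max(q[j], 0.0), math.pi / 2)

print(best, fitness(best))
```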
Conference Paper
Full-text available
This paper describes how a user modeling knowledge base for personalized TV servers can be generated starting from an analysis of lifestyle surveys. The aim of this research is the construction of well-designed stereotypes for generating adaptive electronic program guides (EPGs), which filter the information about TV events depending on the user's interests.