Science topic

Applied Optimization - Science topic

Explore the latest questions and answers in Applied Optimization, and find Applied Optimization experts.
Questions related to Applied Optimization
  • asked a question related to Applied Optimization
Question
1 answer
Hello fellow researchers,
I'm writing to suggest a mutual citation exchange to encourage collaboration and support within our academic community. I have recently published the following papers and would greatly appreciate citations from fellow researchers in related fields.
In return, I am more than willing to reciprocate by citing your work in my future publications. Below I have provided a list of articles for your consideration:
Camargo, F. G. (2021b). Survey and calculation of the energy potential and solar, wind and biomass EROI: application to a case study in Argentina. DYNA, 88(219), 50-58. https://doi.org/10.15446/dyna.v88n219.95569
Camargo, F. G. (2022c). Dynamic Modeling Of The Energy Returned On Invested. DYNA, 89(221), 50–59. https://doi.org/10.15446/dyna.v89n221.97965
Camargo, F. G. (2022d). Fuzzy multi-objective optimization of the energy transition towards renewable energies with a mixed methodology. Production, 32, e20210132. https://doi.org/10.1590/0103-6513.20210132
Camargo, F. G. (2023e). A hybrid novel method to economically evaluate the carbon dioxide emissions in the productive chain of Argentina. Production, 33. http://dx.doi.org/10.1590/0103-6513.20220053
Camargo, F. G., Schweickardt, G. A., & Casanova, C. A. (2018). Maps of Intrinsic Cost (IC) in reliability problems of medium voltage power distribution systems through a Fuzzy multi-objective model. Dyna, 85(204), 334-343. https://doi.org/10.15446/dyna.v85n204.65836
Please feel free to reach out if you're interested in this collaboration or have any questions. Looking forward to connecting and exchanging citations!
Best regards,
PhD Camargo Federico Gabriel
Technology Activities and Renewable Energies Group
La Rioja Regional Faculty of the National Technological University, Argentina.
Relevant answer
Answer
Camargo, F. G., Rossomando, F. G., Gandolfo, D. C., Sarroca, E. A., Faure, O. R., & Andrés Pérez, E. (2024). A novel methodology to obtain optimal economic indicators based on the Argentinean production chain under uncertainty. Production, 34, e20230091. https://doi.org/10.1590/0103-6513.20230091
  • asked a question related to Applied Optimization
Question
2 answers
If you want to do any collaborations, feel free to contact my team.
Relevant answer
Answer
Well 😕, the work looks very academic if not "school-like". Every professional C++ programmer should know all the optimization techniques given and use them practically in everyday coding.
It would be useful to at least compare how much time each technique saves when the techniques are used together, rather than only comparing against the obvious worst case of naive recursion.
The function calculating the Fibonacci sequence does not give too many possibilities to show real differences, especially in terms of memory usage. These few necessary variables easily fit in the processor registers, which in itself means that any non-recursive algorithm will be many orders of magnitude faster than a recursive one.
I would suggest using some more demanding and really useful algorithm, for example from the field of neural networks.
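To make the comparison discussed above more concrete, here is a minimal sketch (in Python rather than the C++ of the work being discussed, and with an illustrative n) that times naive recursion against memoized and iterative variants:

import time
from functools import lru_cache

def fib_naive(n):
    # Exponential-time recursion: recomputes the same subproblems repeatedly.
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Same recursion, but each subproblem is computed only once (memoization).
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

def fib_iter(n):
    # Iterative version: O(n) time, O(1) extra memory (a few local variables).
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for f in (fib_naive, fib_memo, fib_iter):
    t0 = time.perf_counter()
    result = f(30)
    print(f"{f.__name__}: fib(30) = {result}, {time.perf_counter() - t0:.6f} s")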
  • asked a question related to Applied Optimization
Question
3 answers
Latin hypercube design
global and local optimization
4D color maps
How do we assess the global behavior of a model, obtain the equilibrium points, and analyze their stability through a series of simulations?
Relevant answer
Answer
Assessing the global behavior of a model and obtaining equilibrium points while analyzing their stability through simulation series typically involves the following steps:
  1. Define the Model: Clearly define the mathematical model that represents the system you want to study. This model could be based on differential equations, difference equations, agent-based models, etc.
  2. Identify Parameters and Variables: Identify the parameters and variables involved in the model. Parameters are constants that influence the behavior of the system, while variables are quantities that change over time.
  3. Determine Equilibrium Points: Equilibrium points are where the system's state variables remain constant over time. To find these points, set the derivatives of the state variables to zero and solve the resulting system of equations.
  4. Linearize the Model: Linearize the model around each equilibrium point. This involves approximating the behavior of the system near the equilibrium points using linear differential equations or difference equations.
  5. Stability Analysis: Analyze the stability of each equilibrium point by examining the eigenvalues of the linearized system. If all eigenvalues have negative real parts, the equilibrium point is stable; if any eigenvalue has a positive real part, it is unstable. If some eigenvalues have zero real parts, the linearization is inconclusive and nonlinear analysis is needed; limit cycles or chaotic behavior may arise in such cases. (Steps 3-6 are illustrated in the numerical sketch after this list.)
  6. Simulation Series: Perform simulation series by numerically integrating the model equations over a range of parameter values or initial conditions. This allows you to observe the dynamic behavior of the system and how it changes with different inputs.
  7. Visualize Results: Visualize the simulation results to gain insights into the system's behavior. This could involve plotting time series, phase portraits, bifurcation diagrams, or other relevant visualizations.
  8. Sensitivity Analysis: Conduct sensitivity analysis to understand how changes in model parameters or initial conditions affect the system's behavior. This helps identify which factors have the most significant impact on the system's dynamics.
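As a minimal numerical illustration of steps 3-6, here is a sketch for a hypothetical 2-D system (a damped pendulum) using NumPy/SciPy; the model, parameters and initial guesses are illustrative assumptions, not a prescription:

import numpy as np
from scipy.optimize import fsolve
from scipy.integrate import solve_ivp

def rhs(t, s, c=0.5):
    x, v = s
    return [v, -np.sin(x) - c * v]          # dx/dt = v, dv/dt = -sin(x) - c*v

def jacobian(s, c=0.5, h=1e-6):
    # Numerical linearization around a state s (step 4).
    s = np.asarray(s, dtype=float)
    J = np.zeros((2, 2))
    for j in range(2):
        e = np.zeros(2); e[j] = h
        J[:, j] = (np.array(rhs(0, s + e)) - np.array(rhs(0, s - e))) / (2 * h)
    return J

# Step 3: equilibrium points (dx/dt = dv/dt = 0), found from two initial guesses.
equilibria = [fsolve(lambda s: rhs(0, s), guess) for guess in ([0.1, 0.0], [3.0, 0.0])]

# Step 5: stability from the eigenvalues of the linearized system.
for eq in equilibria:
    eigvals = np.linalg.eigvals(jacobian(eq))
    stable = np.all(eigvals.real < 0)
    print(f"equilibrium {np.round(eq, 3)}: eigenvalues {np.round(eigvals, 3)}, stable={stable}")

# Step 6: a short simulation series from a perturbed initial condition.
sol = solve_ivp(rhs, (0, 30), [0.5, 0.0], dense_output=True)
print("final state after t=30:", np.round(sol.y[:, -1], 4))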
  • asked a question related to Applied Optimization
Question
3 answers
No one has the mental capacity to know all languages. Additionally, the more languages one is fluent in, the more likely that individual will mix up words. Thus, knowing enough languages for survival is optimal while artificial intelligence could and potentially will bridge language barriers. Of course knowing three languages or more is somewhat of an advantage.
Relevant answer
Answer
Sure, a focused study helps to find many particular points of strength in the language.
  • asked a question related to Applied Optimization
Question
4 answers
The set of optimal solutions obtained in the form of a Pareto front includes all equally good trade-off solutions. But I was wondering whether these solutions are global optima, local optima, or a mix of both. In other words, does an evolutionary algorithm like NSGA-II guarantee globally optimal solutions?
Thank you in anticipation.
Relevant answer
Answer
No, a Pareto front produced by an evolutionary algorithm does not necessarily include both global and local optima. The Pareto front represents the set of non-dominated solutions in multi-objective optimization problems. These solutions are not dominated by any other solution in terms of all the objective functions simultaneously.
In a multi-objective optimization problem, there can be multiple optimal solutions, known as Pareto optimal solutions, that represent trade-offs between conflicting objectives. These solutions lie on the Pareto front and are considered efficient solutions because improving one objective would require sacrificing performance in another objective.
The Pareto front typically contains a mixture of global and local optima. Global optima are solutions that provide the best performance across all objectives in the entire search space. Local optima, on the other hand, are solutions that are optimal within a specific region of the search space but may not be globally optimal.
The evolutionary algorithm aims to explore the search space and find a diverse set of Pareto optimal solutions across the entire front, which may include both global and local optima. However, the algorithm's ability to discover global optima depends on its exploration and exploitation capabilities, the problem complexity, and the specific settings and parameters of the algorithm.
It's important to note that the distribution and representation of global and local optima on the Pareto front can vary depending on the problem and algorithm used. Analyzing the Pareto front and its solutions can provide valuable insights into the trade-offs and optimal solutions available in multi-objective optimization problems.
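For concreteness, here is a small sketch of what "non-dominated" means computationally: filtering a set of candidate objective vectors down to the Pareto set (pure NumPy, illustrative numbers; this is only the filtering step, not the evolutionary algorithm itself):

import numpy as np

def non_dominated(F):
    """Return a boolean mask of the non-dominated rows of F (all objectives minimized)."""
    F = np.asarray(F, dtype=float)
    n = len(F)
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            # j dominates i if it is no worse in every objective and better in at least one
            if i != j and np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                mask[i] = False
                break
    return mask

# Example: objective vectors produced by some algorithm (illustrative numbers).
F = np.array([[1.0, 9.0], [2.0, 7.0], [3.0, 8.0], [4.0, 4.0], [9.0, 1.0]])
print(F[non_dominated(F)])   # [3, 8] is dominated by [2, 7]; the rest form the front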
  • asked a question related to Applied Optimization
Question
3 answers
Hello!
Multiple Criteria Decision-Making (MCDM) methods are applied in many fields of science, as a result, many scientific publications related to the application of these methods have been prepared.
Some of the most popular MCDM methods or MADM (Multiple Attribute Decision-Making) are TOPSIS, SAW, AHP, etc. In describing these methods, some authors use the term "criteria", and others use the term "attribute". I would like to know your opinion on which term should be used.
Some references:
Yoon, K. P., & Hwang, C. L. (1995). Multiple attribute decision making: an introduction. Sage publications.
Triantaphyllou, E. (2000). Introduction to Multi-Criteria Decision Making. In: Multi-criteria Decision Making Methods: A Comparative Study. Applied Optimization, vol 44. Springer, Boston, MA. https://doi.org/10.1007/978-1-4757-3157-6_1
Thank you!
Relevant answer
Answer
Dear Ruta
In my opinion, the correct term is 'criteria'. Attributes define the elements that make up a criterion, and performance values are the numbers with which each alternative contributes to each criterion.
Since an attribute defines a criterion, attributes tell us about the characteristics of each one, such as the type of performance factors, i.e., negative, positive, integer, decimal, or their dispersion. The action of maximizing or minimizing a criterion defines its purpose, normally benefit, cost, or equality.
In addition, each criterion is an objective with which each alternative must comply, and in this sense it needs a goal. For instance, for the criterion 'CO2 contamination', which of course must be minimized, we need to set a numeric value that indicates the maximum amount of contamination allowed.
  • asked a question related to Applied Optimization
Question
5 answers
Over the last few decades, there have been numerous metaheuristic optimization algorithms developed with varying inspiration sources. However, most of these metaheuristics have one or more weaknesses that affect their performances, for example:
  1. Trapped in a local optimum and are not able to escape.
  2. No trade-off between the exploration and exploitation potentials
  3. Poor exploitation.
  4. Poor exploration.
  5. Premature convergence.
  6. Slow convergence rate
  7. Computationally demanding
  8. Highly sensitive to the choice of control parameters
Metaheuristics are frequently improved by adding efficient mechanisms aimed at increasing their performance like opposition-based learning, chaotic function, etc. What are the best efficient mechanisms you suggest?
Relevant answer
Answer
In this article, a highly complex optimization problem, the optimal power flow, is solved with various metaheuristics of varied performance.
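As a concrete illustration of one of the mechanisms mentioned in the question, here is a minimal sketch of opposition-based learning used for population initialization; the bounds, population size and test function are illustrative assumptions:

import numpy as np

def obl_initialization(objective, lower, upper, pop_size, rng=np.random.default_rng(0)):
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    pop = rng.uniform(lower, upper, size=(pop_size, len(lower)))
    opposite = lower + upper - pop                  # opposite point of each candidate
    union = np.vstack([pop, opposite])
    fitness = np.apply_along_axis(objective, 1, union)
    best = np.argsort(fitness)[:pop_size]           # keep the better half of the union
    return union[best], fitness[best]

sphere = lambda x: float(np.sum(x ** 2))
pop, fit = obl_initialization(sphere, lower=[-5] * 10, upper=[5] * 10, pop_size=20)
print("best initial fitness with OBL:", fit.min())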
  • asked a question related to Applied Optimization
Question
3 answers
A collection of solved examples in Pyomo environment (Python package)
The solved problems are mainly related to supply chain management and power systems.
Feel free to follow / branch / contribute
Relevant answer
Answer
udemy.com/course/optimization-in-python/?couponCode=36C6F6B228A087695AD9
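For readers new to Pyomo, a minimal model in the same spirit as the collection might look like the sketch below; the numbers are illustrative and availability of the GLPK solver is assumed:

from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                           NonNegativeReals, maximize, SolverFactory, value)

m = ConcreteModel()
m.x = Var(within=NonNegativeReals)        # units of product 1
m.y = Var(within=NonNegativeReals)        # units of product 2
m.profit = Objective(expr=40 * m.x + 30 * m.y, sense=maximize)
m.machine_hours = Constraint(expr=2 * m.x + 1 * m.y <= 100)
m.labour_hours = Constraint(expr=1 * m.x + 1 * m.y <= 80)

SolverFactory("glpk").solve(m)
print("x =", value(m.x), "y =", value(m.y), "profit =", value(m.profit))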
  • asked a question related to Applied Optimization
Question
2 answers
Hello,
I am trying to run an optimal power flow (OPF) study with Matpower. In the standard OPF formulation, only the P and Q outputs of the generators are part of the objective function (or cost function to be minimized).
However, I would like to have the bus voltages in the objective function, so that for example the total voltage deviation of all voltages can be minimized. Does anyone have experience with this?
Thanks.
Relevant answer
Answer
Baraa Mohandes Hi Baraa! How can we change the objective function? I don't want to use the existing objective of the generators' generation cost.
For example, my objective functions is
obj. min(QP CC ), max(QP CC )
s.t. sumPinji = 0
sumQinji = 0
Umini < Ui < Umaxi ,
Qminj < Qj < Qmaxj ,
∀i ∈ Bus, ∀j ∈ Gen
How can I modify the existing objective function?
  • asked a question related to Applied Optimization
Question
6 answers
Could any expert try to examine our novel approach for multi-objective optimization?
The brand new approach, entitled "Probability-based multi-objective optimization for material selection", was published by Springer and is available at https://link.springer.com/book/9789811933509,
DOI: 10.1007/978-981-19-3351-6.
Relevant answer
Answer
  • asked a question related to Applied Optimization
Question
5 answers
I am trying to convert a vector into an image using the code below:
clear variables
load('Exe4_2022.mat')
n = length(b);
figure,
imagesc(reshape(b,sqrt(n),sqrt(n))),
colormap(gray),
axis off;
But I am getting the error below. Could anybody tell me how to resolve this issue?
Error using reshape
Size arguments must be real integers.
I have attached the "Exe4_2022.mat" file with this post.
Thanks
Relevant answer
Answer
The numbers of rows and columns of the matrix representing the image you want to obtain must be integers, and their product must equal the length of the vector you want to convert. In your example the length of the vector is n = 55929. You want to obtain a square matrix with sqrt(n) rows and columns, but in your example m = sqrt(n) = 2.364931288642442e+02, which is not an integer. If we choose 3 columns and 18643 rows (3 x 18643 = 55929), we obtain the following MATLAB code, which works.
clear variables
load('Exe4_2022.mat');
n = length(b);           % n = 55929
m = sqrt(n);             % not an integer, so a square reshape is impossible
c = reshape(b,18643,3);  % 18643*3 = 55929
figure,
imagesc(c),
colormap(gray),
axis off;
Please try this code.
  • asked a question related to Applied Optimization
Question
4 answers
This is in context of the objective function of a multivariate optimization problem say, f(a,b,c).
I am looking for a "measure" for the degree of bias of f(a,b,c) towards any of the input variables.
Relevant answer
Answer
Siddhartha Pany, then, perhaps, you may be willing to look for something like
"marginal contribution to risk ... in portfolio theory"
"... the rate of change in risk (objective function) ... with respect to a small percentage change in the size of a portfolio allocation weight",
however, this is a very specific example from portfolio theory.
Similarly, the notion of elasticity might be another example (of a bias measure) from economics; see the attachment. The magnitude of the elasticity (ratio) could be of any size, so a bound on the elasticity should be imposed as a constraint, I guess.
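If a purely numerical measure is acceptable, a finite-difference, elasticity-style sensitivity of f(a, b, c) around a reference point could serve as a rough "degree of bias"; the objective and reference point below are illustrative assumptions (the components of x0 should be nonzero for the ratio to make sense):

import numpy as np

def elasticities(f, x0, rel_step=1e-4):
    x0 = np.asarray(x0, dtype=float)
    f0 = f(x0)
    out = []
    for i in range(len(x0)):
        x = x0.copy()
        h = rel_step * abs(x0[i])
        x[i] += h
        # elasticity ~ (% change in f) / (% change in x_i)
        out.append(((f(x) - f0) / f0) / (h / x0[i]))
    return np.array(out)

f = lambda x: x[0] ** 2 * x[1] + 3.0 * x[2]          # example objective f(a, b, c)
print(elasticities(f, x0=[2.0, 1.0, 1.0]))           # larger magnitude = stronger "bias"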
Good luck.
  • asked a question related to Applied Optimization
Question
2 answers
What is the main disadvantage of a global optimization algorithm for the Backpropagation Process?
Under what conditions can we still use a local optimization algorithm for the Backpropagation Process?
Relevant answer
Answer
Armin Hajighasem Kashani Non-linear data may be simply handled and processed using a neural network that is otherwise difficult in perceptron and sigmoid neurons. In neural networks, the agonizing decision boundary problem is reduced.
However, the downsides include the loss of neighborhood knowledge, the addition of more parameters to optimize, and the lack of translation invariance.
  • asked a question related to Applied Optimization
Question
10 answers
I have 2 functions f(x,y) and g(x,y) that depend on two variables (x,y), and I want to find a solution that minimizes f(x,y) while maximizing g(x,y) simultaneously.
P.S: These functions are linearly independent.
  • asked a question related to Applied Optimization
Question
4 answers
Can anyone provide me with PSO MATLAB code to optimize the weights of multi types of Neural Networks?
Relevant answer
Answer
Dear Murana Awad,
Application of PSO-BP Neural Network in GPS Height Fitting
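Not the requested MATLAB code, but a minimal PSO sketch in Python that shows the loop structure, which ports directly to MATLAB; it minimizes a simple test function here, and a neural-network loss evaluated over the flattened weight vector would plug in as the objective:

import numpy as np

def pso(objective, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0),
        w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))            # positions
    v = np.zeros_like(x)                                   # velocities
    pbest, pbest_f = x.copy(), np.apply_along_axis(objective, 1, x)
    g = pbest[np.argmin(pbest_f)]                          # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(objective, 1, x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[np.argmin(pbest_f)]
    return g, pbest_f.min()

best_x, best_f = pso(lambda z: float(np.sum(z ** 2)), dim=10)
print("best objective found:", best_f)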
  • asked a question related to Applied Optimization
Question
3 answers
I would like to optimise the process model of a thermal energy supply system, which was developed in the software environment IPSEpro, with regard to the economic and energetic constraints. Since the options in the software itself are limited in terms of optimisation algorithms, I would like to optimise the process model via the COM interface with Matlab with the help of the optimisation algorithms available in Matlab. What is the best way to link the Matlab code for optimisation with the code that controls the call to the external process model? How can the objective function for the algorithm be formulated in Matlab when there is no functional relationship, but a parameter of the external model should be used?
  • asked a question related to Applied Optimization
Question
2 answers
For an Integer Linear Programming problem (ILP), an irreducible infeasible set (IIS) is an infeasible subset of constraints, variable bounds, and integer restrictions that becomes feasible if any single constraint, variable bound, or integer restriction is removed. It is possible to have more than one IIS in an infeasible ILP.
Is it possible to identify all possible Irreducible Infeasible Sets (IIS) for an infeasible Integer Linear Programming problem (ILP)?
Ideally, I aim to find the MIN IIS COVER, which is the smallest cardinality subset of constraints to remove such that at least one constraint is removed from every IIS.
Thanks for your time and consideration.
Regards
Ramy
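As an illustration only (not a scalable answer), all IISs of a tiny system of linear inequalities can be enumerated by brute force, testing the feasibility of constraint subsets with an LP solver; integrality and variable bounds are ignored in this sketch and the instance is hypothetical:

from itertools import combinations
import numpy as np
from scipy.optimize import linprog

def feasible(A, b, rows):
    if not rows:
        return True
    res = linprog(c=np.zeros(A.shape[1]), A_ub=A[list(rows)], b_ub=b[list(rows)],
                  bounds=[(None, None)] * A.shape[1], method="highs")
    return res.status == 0          # 0 = optimal, i.e. the subsystem is feasible

def all_iis(A, b):
    m = A.shape[0]
    iis_list = []
    for size in range(1, m + 1):
        for subset in combinations(range(m), size):
            if feasible(A, b, subset):
                continue
            # irreducible: dropping any single constraint restores feasibility
            if all(feasible(A, b, tuple(r for r in subset if r != k)) for k in subset):
                iis_list.append(subset)
    return iis_list

# Tiny example in one variable x:  x <= 1,  -x <= -3 (x >= 3),  -x <= -2 (x >= 2)
A = np.array([[1.0], [-1.0], [-1.0]])
b = np.array([1.0, -3.0, -2.0])
print(all_iis(A, b))    # expected: [(0, 1), (0, 2)]  ->  MIN IIS COVER = {constraint 0}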
  • asked a question related to Applied Optimization
Question
4 answers
I am coding a multi-objective genetic algorithm. It predicts the Pareto front accurately for multi-objective functions with convex Pareto fronts, but for non-convex Pareto fronts it is not accurate, and the predicted Pareto points are clustered at the ends of the Pareto front obtained from the MATLAB genetic algorithm. Can anybody suggest some techniques to solve this problem? Thanks in advance.
The attached pdf file shows the results from different problems
Relevant answer
Answer
  • asked a question related to Applied Optimization
Question
7 answers
I am solving a bi-objective integer programming problem using the scalarization function (F1 + epsilon*F2). I have obtained all my results correctly, but it is said that CPLEX cannot guarantee an exact result with this objective function; it may give approximate rather than exact non-dominated solutions. As I said before, I am very sure that my results are right because I have already checked them. Do I need to prove that CPLEX gives the right result in my algorithm, even though it sometimes makes mistakes on large instances?
Thanks in advance.
Relevant answer
Answer
Did you code the epsilon-constraint method using OPL? May I ask how you coded it? I tried but could not get the right results.
Thanks a lot.
  • asked a question related to Applied Optimization
Question
6 answers
Hi all,
I have a large mixed-integer programming (MIP) optimization problem, which has a high risk of infeasibility. The branch-and-cut algorithm in GLPK spends hours looking for an optimal solution, and the problem may turn out to be infeasible at the end. I want to do a pre-screening before starting the actual optimization to make sure there is a good chance of a feasible solution. I accept that the only certain way to check feasibility is to run the optimization, but any heuristic with potential false infeasibility alerts (false positives) could be helpful. My focus is on feasibility rather than optimality. Do you have any suggestions for an algorithm, software, or a library in Python to do this pre-screening?
Thanks for your time and kind reply.
  • asked a question related to Applied Optimization
Question
9 answers
I optimized two structures containing an NO2 group with Gaussian at the 6-311G(d,p) level. In the output file I observed that the NO2 is not connected to the structure and appears as O=N=O.
Q1: When I use this output as a starting structure for a TS search, should I reconnect the NO2 group to the structure by a single bond, or should I keep the output structure as it is?
I tried many keywords such as opt=(calcfc,ts,noeigen), opt=(calcall,ts,noeigen), # opt=(calcfc,tight,ts,noeigentest)... and many guesses, but the IRC showed that the TSs do not connect the reagents and products!
Q2: Is there any other option to find the right TS for this pathway?
Any help is much appreciated
Relevant answer
Answer
In general, you should combine two methods: imaginary-frequency analysis of the transition state (TS) structure and calculation of the IRC. In addition, you could double-check with QST2 and QST3 calculations. Best regards.
  • asked a question related to Applied Optimization
Question
3 answers
Hi
I have a project on optimizing a groundwater monitoring network through coding in MATLAB (with the NSGA-II algorithm). I have read the complete research background and I am completely familiar with the subject in theory, but I have no background to start coding. Does anyone have a related or educational code file on this subject?
Thank you for your help
  • asked a question related to Applied Optimization
Question
7 answers
Dear all, I want to solve three or four objective functions with four or five decision variables (number of decision variables > number of objective functions). There may be some game-theoretic approach to the problem, but I am searching for MATHEMATICA code (like the direct search method or another computational technique) to solve it. If such code exists, please suggest it (other than GA).
Thank you.
  • asked a question related to Applied Optimization
Question
14 answers
Hello scientific community
Have you noticed the following?
[I note that when a new algorithm is proposed, most researchers rush to improve it and apply it to the same and other problems. So I ask: why keep the original algorithm if it suffers from weaknesses, and why do we need a new algorithm if an existing one already solves the same problems? I understand it being welcome if the new algorithm solves a previously unsolved problem; otherwise, why?]
Therefore, I ask: does the scientific community need novel metaheuristic algorithms (MHs) rather than the existing ones?
I think we need to organize the existing metaheuristic algorithms and document the pros and cons of each one, along with the problems each one has solved.
Duplicated algorithms should disappear, as should overly complex ones.
Dependent algorithms should also disappear.
We need to benchmark the MHs in a manner similar to a benchmark test suite.
Also, we need to identify the unsolved problems; if you would like to propose a novel algorithm, try to solve an unsolved problem, otherwise please stop.
Thanks, and I look forward to a reputable discussion.
Relevant answer
Answer
The last few decades have seen the introduction of a large number of "novel" metaheuristics inspired by different natural and social phenomena. While metaphors have been useful inspirations, I believe this development has taken the field a step backwards, rather than forwards. When the metaphors are stripped away, are these algorithms different in their behaviour? Instead of more new methods, we need more critical evaluation of established methods to reveal their underlying mechanics.
  • asked a question related to Applied Optimization
Question
3 answers
Dear all,
I want to start learning discrete choice-based optimization so that I can use it later for my research works. I want to know about free courses, books, study materials available on this topic. Any suggestions will be appreciated.
Thanks,
Soumen Atta
Relevant answer
Answer
You must begin by studying discrete optimization methods in general. After that, you could study models and methods for choosing among options. I am the author of the Selection of Proposals and the Integration of Variables methods devoted to option selection, which you can find in my ResearchGate profile, including applications.
  • asked a question related to Applied Optimization
Question
3 answers
For example, I know that variables x and y have nonlinear relationships with the score, but the model mapping x and y to the score is unknown. We want to know which values of x and y lead to the highest possible score. Could you kindly recommend related methods or research?
Relevant answer
Answer
You can solve this as an optimization problem using a genetic algorithm, ant colony optimization, simulated annealing, PSO, etc.
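One possible sketch of that idea, assuming observed (x, y, score) data are available: fit a surrogate model to the data and then search the surrogate for the inputs with the highest predicted score (the data below are synthetic and the model choice is an assumption, not a recommendation):

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(400, 2))                              # observed (x, y) pairs
score = np.sin(X[:, 0]) * np.cos(X[:, 1]) - 0.1 * X[:, 0] ** 2     # stands in for the unknown relation

surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, score)

# Maximize the surrogate's prediction (minimize its negative) over the input box.
result = differential_evolution(
    lambda z: -surrogate.predict(z.reshape(1, -1))[0],
    bounds=[(-3, 3), (-3, 3)], seed=0)
print("suggested (x, y):", result.x, "predicted score:", -result.fun)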
  • asked a question related to Applied Optimization
Question
19 answers
Hi,
I'm interested in solving a nonconvex optimization problem that contains continuous variables and categorical variables (e.g. materials) available from a catalog.
What are the classical approaches? I've read about:
- metaheuristics: random trial and error ;
Are you aware of other systematic approaches?
Thank you,
Charlie
Relevant answer
Answer
Z. Nedelková, C. Cromvik, P. Lindroth, M. Patriksson, and A.-B. Strömberg, "A splitting algorithm for simulation-based optimization problems with categorical variables", Engineering Optimization, vol. 51 (2019), pp. 815–831.
It might help!
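A simple systematic baseline, feasible when the catalog is small, is to enumerate the categorical choices and run a continuous solver for each; the sketch below assumes a hypothetical material catalog and an illustrative objective:

import numpy as np
from scipy.optimize import minimize

catalog = {"steel": 1.0, "aluminium": 2.5, "composite": 4.0}   # hypothetical materials -> property

def objective(x, stiffness):
    # x: continuous design variables; stiffness: property implied by the chosen material
    return (x[0] - 1.0) ** 2 + stiffness * (x[1] - 0.5) ** 2 + 1.0 / stiffness

best = None
for material, stiffness in catalog.items():
    res = minimize(objective, x0=np.array([0.0, 0.0]), args=(stiffness,),
                   bounds=[(-2, 2), (-2, 2)])
    if best is None or res.fun < best[2]:
        best = (material, res.x, res.fun)

print("best material:", best[0], "continuous variables:", best[1], "objective:", best[2])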
  • asked a question related to Applied Optimization
Question
8 answers
I am working on a problem for design optimisation. I would like to ask if for an uncertain problem, should design optimisation under uncertainty techniques be used for the design optimisation?
  • asked a question related to Applied Optimization
Question
1 answer
Hi,
I'm running BARON on an AMPL instance that I uploaded on the NEOS server. Unfortunately, it times out after a few minutes. It is possible to pass an option file to set the time limit (maxtime), but I struggle to find the right syntax.
I've tried:
option maxtime 10000
maxtime 10000
MAXTIME=10000
Either it is not recognized, or it has no effect.
Can you help me out?
Thanks,
Charlie
  • asked a question related to Applied Optimization
Question
2 answers
I am using stochastic dynamic dual programming for decision-making under uncertainty. Can we use stochastic dynamic programming to solve a min-max problem? for example, the max function is used in the objective function. Any good library for stochastic dynamic dual programming?
Relevant answer
Answer
Dear Dr. Lafifi, Mohamed-Mourad Lafifi
Your help is appreciated!
Best wishes,
Hussein
  • asked a question related to Applied Optimization
Question
17 answers
I am using an ANN for a product reliability assurance application, i.e. picking some samples within the production process and then estimating the overall quality of the production line output. What kind of optimization algorithm do you think works best for training the ANN in such a problem?
Relevant answer
Answer
Optimization algorithm in neural network
The process of minimizing (or maximizing) any mathematical expression is called optimization. Optimizers are algorithms or methods used to change the attributes of the neural network, such as the weights and learning rate, to reduce the losses. Optimizers are used to solve optimization problems by minimizing the function.
Regards,
Shafagat
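A minimal sketch of what such an optimizer does during training, reduced to plain gradient descent on the weights of a one-neuron linear model (the data, learning rate and iteration count are illustrative):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + 0.1 * rng.normal(size=200)   # synthetic targets

w = np.zeros(3)
lr = 0.1                                      # learning rate, one of the optimizer's attributes
for step in range(200):
    grad = 2.0 / len(y) * X.T @ (X @ w - y)   # gradient of the mean squared error loss
    w -= lr * grad                            # the optimizer update: move against the gradient
print("learned weights:", np.round(w, 3))     # close to [1.5, -2.0, 0.5]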
  • asked a question related to Applied Optimization
Question
5 answers
I am working on ECG arrhythmia classification using SVM. I implemented some kernel tricks
and used different kernels on the MIT-BIH dataset (the features form a 44187-row, 18-column matrix).
It is now difficult to plot the support vectors for such a large dataset. How can I plot them, and can you please suggest any other plots or methods to show a comparison between different kernels? I already have a comparison chart of accuracy, efficiency, etc.
Relevant answer
Answer
It might interest you that there is a possibility to use complexity measures to assess the state of the observed complex system and make decision about arrhythmias.
An example of how to do it can be found in our paper on the prediction of TdP arrhythmias from ECG recordings. Everything is explained in the paper in detail. The final version will contain a rewritten entropy section and substantially improved methods, introduction, etc.
Back to your question. Complexity measures when applied wisely enable us to substantially reduce the complexity of complex systems under the observation. This includes biosignals along with ECGs, EEGs, etc.
Hopefully this will enable you to orientate yourself in this exciting, yet quite complicated area of research.
  • asked a question related to Applied Optimization
Question
14 answers
Any decision-making problem when precisely formulated within the framework of mathematics is posed as an optimization problem. There are so many ways, in fact, I think infinitely many ways one can partition the set of all possible optimization problems into classes of problems.
1. I often hear people label meta-heuristic and heuristic algorithms as general algorithms (I understand what they mean), but I keep wondering: can we apply these algorithms to arbitrary optimization problems from any class, or, more precisely, can we adjust/re-model any optimization problem in a way that permits us to attack it with the algorithms in question?
2. Then I thought that if the answer to 1 is yes, then by extending the argument we could re-formulate any given problem to be attacked by any algorithm we desire (of course at a cost), in which case the label is just a useless tautology.
I'm looking for different insights :)
Thanks.
Relevant answer
Answer
The change propagation models may give a great idea
  • asked a question related to Applied Optimization
Question
7 answers
I want to use optimization in classification tree tasks, but I am not sure how I can do that.
Relevant answer
Answer
Are you working with a specific dataset?
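One concrete way to bring optimization into classification-tree tasks is to optimize the tree's hyper-parameters by cross-validated grid search; the Iris dataset and the parameter grid below are illustrative stand-ins for your own data:

from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
param_grid = {"max_depth": [2, 3, 4, 5, None],
              "min_samples_leaf": [1, 2, 5, 10],
              "criterion": ["gini", "entropy"]}
search = GridSearchCV(DecisionTreeClassifier(random_state=0), param_grid, cv=5)
search.fit(X, y)
print("best parameters:", search.best_params_)
print("best cross-validated accuracy:", round(search.best_score_, 3))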
  • asked a question related to Applied Optimization
Question
23 answers
Bat-inspired algorithm is a metaheuristic optimization algorithm developed by Xin-She Yang in 2010. This bat algorithm is based on the echolocation behaviour of microbats with varying pulse rates of emission and loudness.
The idealization of the echolocation of microbats can be summarized as follows: Each virtual bat flies randomly with a velocity vi at position (solution) xi with a varying frequency or wavelength and loudness Ai. As it searches and finds its prey, it changes frequency, loudness and pulse emission rate r. Search is intensified by a local random walk. Selection of the best continues until certain stop criteria are met. This essentially uses a frequency-tuning technique to control the dynamic behaviour of a swarm of bats, and the balance between exploration and exploitation can be controlled by tuning algorithm-dependent parameters in bat algorithm. (Wikipedia)
What are the applications of bat algorithm? Any good optimization papers using bat algorithm? Your views are welcome! - Sundar
Relevant answer
Answer
The bat algorithm (BA) is a bio-inspired algorithm developed by Xin-She Yang in 2010, and it has been found to be very efficient.
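For readers who want to see the frequency-tuning loop described in the question in code, here is a minimal sketch of the standard bat algorithm applied to a simple test function; the parameter values are common textbook defaults, not tuned recommendations:

import numpy as np

def bat_algorithm(obj, dim, n_bats=30, iters=300, lo=-5.0, hi=5.0,
                  fmin=0.0, fmax=2.0, alpha=0.9, gamma=0.9, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_bats, dim))          # positions (candidate solutions)
    v = np.zeros_like(x)                            # velocities
    A = np.full(n_bats, 1.0)                        # loudness
    r0 = rng.uniform(0, 1, n_bats); r = r0.copy()   # pulse emission rate
    fit = np.apply_along_axis(obj, 1, x)
    best = x[np.argmin(fit)].copy()
    for t in range(1, iters + 1):
        freq = fmin + (fmax - fmin) * rng.random(n_bats)        # frequency tuning
        v += (x - best) * freq[:, None]
        x_new = np.clip(x + v, lo, hi)
        # local random walk around the current best for some bats
        walk = rng.random(n_bats) > r
        x_new[walk] = np.clip(best + 0.01 * A.mean() * rng.normal(size=(walk.sum(), dim)), lo, hi)
        f_new = np.apply_along_axis(obj, 1, x_new)
        accept = (f_new < fit) & (rng.random(n_bats) < A)       # accept improvements from loud bats
        x[accept], fit[accept] = x_new[accept], f_new[accept]
        A[accept] *= alpha                                      # loudness decreases
        r[accept] = r0[accept] * (1 - np.exp(-gamma * t))       # pulse rate increases
        best = x[np.argmin(fit)].copy()
    return best, fit.min()

best_x, best_f = bat_algorithm(lambda z: float(np.sum(z ** 2)), dim=5)
print("best objective found:", best_f)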
  • asked a question related to Applied Optimization
Question
12 answers
Hello everyone,
We have the following integer programming problem with two integer decision variables, namely x and y:
Min F(f(x), g(y))
subject to the constraints
x <= xb,
y <= yb,
x, y non-negative integers.
Here, the objective function F is a function of f(x) and g(y). Both the functions f and g can be computed in linear time. Moreover, the function F can be calculated in linear time. Here, xb and yb are the upper bounds of the decision variables x and y, respectively.
How do we solve this kind of problem efficiently? We are not looking for any metaheuristic approaches.
I appreciate any help you can provide. Particularly, it would be helpful for us if you can provide any materials related to this type of problem.
Regards,
Soumen Atta
Relevant answer
Answer
The method for solving this problem depends on the properties of functions F, f, g (convex, concave, other properties).
If the properties are not known, the only method is to look through all values of the variables.
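A sketch of that exhaustive fallback: since f, g and F are all cheap to evaluate, simply enumerate x = 0..xb and y = 0..yb; the functions and bounds below are illustrative placeholders:

def f(x):  return (x - 7) ** 2
def g(y):  return abs(y - 3)
def F(fx, gy):  return fx + 2 * gy

xb, yb = 50, 50
best = min(((F(f(x), g(y)), x, y) for x in range(xb + 1) for y in range(yb + 1)))
print("minimum value:", best[0], "at x =", best[1], "y =", best[2])
# Cost is (xb+1)*(yb+1) evaluations; exploiting structure of F, f, g (e.g. convexity
# or separability) would allow something much faster than full enumeration.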
  • asked a question related to Applied Optimization
Question
4 answers
Is there a Python project where a commercial FEA (finite element analysis) package is used to generate input data for a freely available optimizer, such as scipy.optimize, pymoo, pyopt, pyoptsparse?
  • asked a question related to Applied Optimization
Question
16 answers
There is much research on metaheuristic optimization, e.g. Particle Swarm Optimization, Genetic Algorithm, etc. Some studies show that they are good for clustering tasks, but I cannot find any comparison of them.
Which one is the best to be applied for optimizing the clustering process?
Relevant answer
The following current-state-of-the-art optimization algorithms give you the answer:
N. K. T. El-Omari, "Sea Lion Optimization Algorithm for Solving the Maximum Flow Problem", International Journal of Computer Science and Network Security (IJCSNS), e-ISSN: 1738-7906, DOI: 10.22937/IJCSNS.2020.20.08.5, 20(8):30-68, 2020.
It has a complete discussion about your question.
Or refer to the same paper at the following link:
  • asked a question related to Applied Optimization
Question
4 answers
Hi,
I'm a researcher in optimization and a hobbyist photographer, and I'd like to get acquainted with lens design via the use of optimization methods. I found for example the paper "Human-competitive lens system design with evolution strategies" (2007).
Are you aware of more recent techniques to design camera lenses? Are there optimization models or benchmarks available?
Thank you,
Charlie
Relevant answer
Answer
I have many advanced books and can send them to you by e-mail if you need them.
  • asked a question related to Applied Optimization
Question
4 answers
What are some well-written references that discuss why the penalty parameter is hard to determine, from a computational perspective, when solving a nonlinear constrained optimization problem? What computational methods are used to find this parameter, given that, as I understand it, finding such a parameter is problem-dependent?
Any insights also would be very helpful.
Thanks
Relevant answer
Answer
Some penalty methods are exact, in the sense that a finite penalty is enough to identify an optimum. Typically the penalty function then needs to be non-differentiable, which might be a bummer. With differentiable penalty functions, such as the augmented Lagrangian, you typically need the penalty to grow indefinitely, but there are those that can be exact without a growing penalty parameter. It however has a cost, in the sense that some constraint qualification holds - which is hard to check.
Here is a paper about just that:
Exact augmented Lagrangian functions for nonlinear semidefinite programming, by Ellen H. Fukuda & Bruno F. Lourenço
Computational Optimization and Applications volume 71, pages 457–482 (2018)
Hope it helps! :-)
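To make the "growing penalty" point concrete, here is a small numerical sketch of a quadratic (inexact) penalty method on an illustrative problem; the iterates only approach feasibility as the penalty parameter grows:

import numpy as np
from scipy.optimize import minimize

def constraint_violation(x):
    return max(0.0, x[0] ** 2 + x[1] ** 2 - 1.0)        # amount by which g(x) <= 0 is violated

def penalized(x, mu):
    # min (x0 + x1 - 2)^2  s.t.  x0^2 + x1^2 <= 1, penalized with parameter mu
    return (x[0] + x[1] - 2.0) ** 2 + mu * constraint_violation(x) ** 2

x = np.array([0.0, 0.0])
for mu in [1.0, 10.0, 100.0, 1000.0]:
    x = minimize(penalized, x, args=(mu,), method="Nelder-Mead").x
    print(f"mu = {mu:7.1f}  x = {np.round(x, 4)}  violation = {constraint_violation(x):.5f}")
# The iterates only become (nearly) feasible as mu grows, which is exactly why choosing
# or having to grow the penalty parameter is delicate; exact penalties avoid the unbounded
# growth but are typically non-smooth.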
  • asked a question related to Applied Optimization
Question
6 answers
Hello everyone,
My issue concerns a water distribution system. I am working on a zone of the system for which no plan of the pipe locations exists. However, we have the locations of actuators such as different types of valves, pressure relief valves, pressure meters, flow meters and tanks. We also have the locations of the demand points that suffer from pressure loss. The question is how we can perform pressure management using the actuators to maximize the water pressure for all demand points of the zone, based on previously recorded data, while also minimizing water loss and pipe damage. Please let me know if you have any ideas or know of any suitable papers on this issue.
Thanks.
Relevant answer
Answer
It is an interesting project. The first step should be collecting data: open taps and valves more or less systematically and record angles, numbers of rotations, pressures and flows. With that data you can train a neural network; in the end the neural network is a model of your pipe system. You can then run simulations by varying the inputs and try to achieve the desired outputs.
In hydrology, fluorescein sodium is used to find out the actual waterways. I would be surprised if there weren't any microbiology algorithms to infer pathways. Network optimization is often done with Ford Fulkerson.
Regards,
Joachim
  • asked a question related to Applied Optimization
Question
6 answers
Dear all
I am working on an inventory model for a closed-loop supply chain system to optimize the cost of the system. There are many models for optimizing the cost of such a system, but I am looking to incorporate analytics concepts to handle real-time inventory.
Looking forward to hearing from you.
with regards
Sumit Maheshwari
Relevant answer
Answer
This is a challenging problem, especially for manufacturing companies. Needless to say, this problem is undergoing a lot of research, and practically there are no viable examples of companies that have achieved full success; the best cases have hit a 90% mark (though the metrics of such proclaimed achievements are highly debatable). As inventory management becomes driven by near real-time demand and supply data (analytics) plugged into AI and machine learning tools, the potential to reach close to 99% efficiency in CLSCM might become a reality -- but how do we identify and embed external disruptions like COVID-19 into this model, and to what extent will these external disruptions affect CLSCM-based inventory dynamics?
  • asked a question related to Applied Optimization
Question
2 answers
Dear esteem researchers,
Greetings!
I applied Markowitz risk theory to my power system operation objective function. In order to calculate the risk cost, I need to obtain the variance of the objective function, which results in a non-linear term (the objective function is raised to the power of 2).
I know I can linearise the problem using a piecewise linearization approach; however, my challenge is how to determine the segment/grid-point intervals, since the objective function involves several decision variables with different bounds.
Please, your support and any recommendation will be highly appreciated.
Thanks
Relevant answer
Answer
Rahul Dewani Thank you very much for your contribution
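Returning to the question of choosing the segment/grid points: for a squared term, a hedged rule of thumb is that a secant over a segment of width h has worst-case error h^2/4, so a uniform grid with h = 2*sqrt(tol) meets an error tolerance tol; the variable bounds below are illustrative assumptions:

import numpy as np

def pwl_breakpoints(lb, ub, tol):
    h = 2.0 * np.sqrt(tol)                  # max segment width keeping the error of z^2 below tol
    n_seg = int(np.ceil((ub - lb) / h))
    return np.linspace(lb, ub, n_seg + 1)

variables = {"P_gen": (0.0, 100.0), "P_exchange": (-20.0, 20.0)}   # hypothetical bounds
for name, (lb, ub) in variables.items():
    bp = pwl_breakpoints(lb, ub, tol=0.5)
    width = bp[1] - bp[0]
    print(f"{name}: {len(bp)} breakpoints, segment width {width:.3f}, "
          f"worst-case error {width ** 2 / 4:.3f}")
# Variables with wider ranges simply get more breakpoints for the same tolerance; a
# non-uniform grid (denser where the variable is expected to lie) is also possible.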
  • asked a question related to Applied Optimization
Question
7 answers
Four years ago I was working on a genetic algorithm for vectorization and color reduction. It is an open-source project available on GitHub under the name EllipsesImageApproximator. The result of the algorithm is a list of simple shapes (in this case ellipses). Now I need this list to be translated into G-code instructions. I am working with 16 base colors. Each color will be plotted separately by the plotter (color-by-color plotting). For 16 colors there will be 16 CNC files with G-code instructions.
Please suggest the most efficient way to generate the G-code instructions.
Relevant answer
Answer
The color reduction is very high. I am going from 16M colors to 12 basic colors, so the information loss is expected to be very high.
Are you using colors, or are you plotting black and white?
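A heavily simplified sketch of the per-colour file generation: approximate each ellipse by a polyline and emit basic G-code moves; the pen-up/pen-down convention (Z moves here), the feed rate and the shape list are assumptions that depend on the specific plotter:

import math

def ellipse_points(cx, cy, rx, ry, angle_deg=0.0, n=60):
    a = math.radians(angle_deg)
    pts = []
    for i in range(n + 1):
        t = 2 * math.pi * i / n
        x, y = rx * math.cos(t), ry * math.sin(t)
        pts.append((cx + x * math.cos(a) - y * math.sin(a),
                    cy + x * math.sin(a) + y * math.cos(a)))
    return pts

def write_gcode(filename, ellipses, feed=1500, z_up=2.0, z_down=0.0):
    lines = ["G21 ; millimetres", "G90 ; absolute coordinates", f"G0 Z{z_up:.2f}"]
    for e in ellipses:
        pts = ellipse_points(*e)
        lines.append(f"G0 X{pts[0][0]:.3f} Y{pts[0][1]:.3f}")   # travel to start, pen up
        lines.append(f"G1 Z{z_down:.2f} F{feed}")               # pen down
        lines += [f"G1 X{x:.3f} Y{y:.3f} F{feed}" for x, y in pts[1:]]
        lines.append(f"G0 Z{z_up:.2f}")                         # pen up
    with open(filename, "w") as fh:
        fh.write("\n".join(lines) + "\n")

# Hypothetical per-colour shape lists (cx, cy, rx, ry, rotation); one output file per colour.
shapes_by_colour = {"color_01": [(10, 10, 5, 3, 0), (30, 20, 4, 4, 45)]}
for colour, ellipses in shapes_by_colour.items():
    write_gcode(f"{colour}.gcode", ellipses)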
  • asked a question related to Applied Optimization
Question
4 answers
I am working on the optimization problem of a multi-energy system using the Benders decomposition algorithm approach. I have written the subproblem and the master problem separately as functions in MATLAB, but the Benders algorithm I developed is not working as expected. Your support will be appreciated; I need a Benders algorithm template in MATLAB.
Thanks,
Michael
Relevant answer
Answer
And if the model is not stochastic?
Check out Leon Lasdon's book on large-scale optimisation, called
Optimization theory for large systems
  • asked a question related to Applied Optimization
Question
6 answers
We have a stochastic dynamic model: Xk+1 =f(Xk,uk,wk ). We can design a cost function to be optimized using dynamic programming algorithm. How do we design a cost function for this dynamic system to ensure stability?
In Chapter 4 of Ref. [a], for a quadratic cost function and a linear system (Xk+1 = AXk + Buk + wk), a proposition shows that under a few assumptions the quadratic cost function results in a stabilizing fixed state feedback. However, I am thinking about how we can account for the stability issue in the design of the cost function as a whole when defining the optimal control problem for a general nonlinear system. Can we use the meaning of stability to design the cost function? Please share your ideas.
[a] Bertsekas, Dimitri P., et al. Dynamic programming and optimal control. Vol. 1. No. 2. Belmont, MA: Athena scientific, 1995.
Relevant answer
Answer
Unfortunately, the attached article
" [a] Bertsekas, Dimitri P., et al. Dynamic programming and optimal control. Vol. 1. No. 2. Belmont, MA: Athena scientific, 1995."
is full of typing errors.
Generally speaking, we consider the linearization of the nonlinear system. Next, we study the stability of the equilibrium state of the resulting linear system, which indicates the nature of the stability of the nonlinear system in some neighborhood.
The signs of the real parts of the eigenvalues of the Jacobian matrix decide which approach we should follow. We have the direct and indirect Lyapunov methods to study the stability based on the eigenvalues.
Best regards
  • asked a question related to Applied Optimization
Question
5 answers
I would be grateful if anyone could tell me how the McCormick error can be reduced systematically. In fact, I would like to know how we can efficiently recognize and obtain a tighter relaxation for bi-linear terms when we use McCormick envelopes.
For instance, consider the simple optimization problem below. The results show a big McCormick error! Its MATLAB code is attached.
Min Z = x^2 - x
s.t. -1 <= x <= 3
(optimal: x* = 0.5, Z* = -0.25; McCormick relaxation: x* = 2.6!)
Relevant answer
Answer
Hi Morteza,
I am a new researcher in this field and I am working on similar problems. I hope you will find the following paper useful:
Cheers,
Zaid
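To make the gap in the question's example concrete, here is a numerical sketch of the McCormick relaxation of w = x*x on [xL, xU], solved as an LP with SciPy, and of how splitting the domain (spatial branching) tightens the bound:

import numpy as np
from scipy.optimize import linprog

def mccormick_bound(xL, xU):
    # Variables [x, w]; McCormick envelopes of w = x*x on [xL, xU]:
    #   w >= 2*xL*x - xL^2,   w >= 2*xU*x - xU^2,   w <= (xL+xU)*x - xL*xU
    A_ub = np.array([[ 2 * xL, -1.0],
                     [ 2 * xU, -1.0],
                     [-(xL + xU), 1.0]])
    b_ub = np.array([xL ** 2, xU ** 2, -xL * xU])
    # Relaxed objective: min w - x   (stands in for min x^2 - x)
    res = linprog(c=[-1.0, 1.0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(xL, xU), (None, None)], method="highs")
    return res.fun, res.x[0]

print("one interval [-1, 3]:", mccormick_bound(-1.0, 3.0))       # loose bound, about -4 at x = 1
print("split at 1:", min(mccormick_bound(-1.0, 1.0)[0],
                         mccormick_bound(1.0, 3.0)[0]))          # tighter bound, about -1
# The true optimum is -0.25 at x = 0.5: the relaxation error shrinks as the variable
# domain is partitioned, which is the usual way to tighten McCormick envelopes
# besides adding valid cuts.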
  • asked a question related to Applied Optimization
Question
4 answers
I am working on a multi-criteria optimization problem, but I am facing problems to define proper fitness function.
Please, can you advise me on how you are choosing such fitness functions?
Relevant answer
Answer
On a trial-and-error principle, I have decided to use a fitness function like this:
  • asked a question related to Applied Optimization
Question
8 answers
For an industrial application (layout planning) I am currently trying to globally optimize a discontinuous function f. The objective function is defined on a bounded parameter space in R^N, where the dimension N of this space depends on an initialization parameter. N typically lies between 30 and 100.
The goal is to run this optimization a number of times (each time for a slightly different layout) and afterwards choose the best one.
Currently I use the MLSL-algorithm provided by the NLOPT-library to compute the global minimum of the objective. Especially when N goes up the time needed for each run to obtain a good result increases a lot. This is why I am looking for a way to speed up my computations.
From the structure of the objective function I know that it is oscillating, which slows down the convergence of typical global optimization algorithms. On the other hand, the function f is the sum of a differentiable function and an upper semi-continuous step function, so in particular f is upper semi-continuous and almost everywhere differentiable. The objective function is bounded as well, and as it is defined on a bounded set it is integrable.
My question now is: Does anyone here have experiences with optimization of such functions (or more generally noisy or black-box functions) and has experiences which algorithms work best?
Especially as I have more details about my function is it maybe possible to use a subgradient-method or first smooth out my function by let's say a smoothing kernel phi, i.e. g = f * phi, and then optimize g to obtain a result for f?
Relevant answer
Answer
What can be done is to create a representation of the objective space, by determining several (very many!!) points on the function surface, and from that create an explicit function that interpolates the points you have found. That function can hopefully be optimised by global optimisation software, and by evaluating new points in regions that may be interesting, you can refine the search towards the global optimum - at least if the function is not too weird. :-)
You need of course be aware that it may take some time, as the interpolated function needs to be re-optimized many times, perhaps, at several vectors.
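A minimal sketch of that surrogate idea: sample the (noisy, discontinuous) objective, fit a smooth radial-basis-function interpolant, and optimize the interpolant globally; the 2-D objective below is an illustrative stand-in for the real 30-100-dimensional one:

import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import differential_evolution

def f(x):                                             # discontinuous, oscillating example
    return np.sin(3 * x[0]) * np.cos(3 * x[1]) + np.floor(x[0] + x[1]) * 0.5

rng = np.random.default_rng(0)
samples = rng.uniform(-2, 2, size=(500, 2))           # "very many" evaluation points
values = np.array([f(s) for s in samples])

surrogate = RBFInterpolator(samples, values, smoothing=1.0)   # smoothing acts like a kernel

result = differential_evolution(lambda z: float(surrogate(z.reshape(1, -1))[0]),
                                bounds=[(-2, 2), (-2, 2)], seed=0)
print("surrogate minimiser:", result.x, "true objective there:", f(result.x))
# New samples can then be added near promising regions and the surrogate refit,
# iterating towards the true optimum (the usual surrogate / response-surface loop).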
  • asked a question related to Applied Optimization
Question
5 answers
I have programmed a method for solving quadratic optimization problems under linear constraints. This method depends on the projection of a point onto a convex polyhedron in R^n, so I programmed Dykstra's successive projection method and adapted it to my method. However, Dykstra's successive projection algorithm does not work well: it spends a lot of time, even days, to find a projection in 3-dimensional real space! I don't know whether the algorithm is slow or I have not programmed it properly. I have spent a lot of time on this method, so I would be very pleased if someone could guide me to another projection method for which ready-made algorithm code is available.
Relevant answer
Answer
It is my PhD project to build a new method that can solve those problems faster than the existing methods.
  • asked a question related to Applied Optimization
Question
10 answers
Hello ... the objective here is maximizing Z for each product (i)
function [ Zi ] = myfitness( X )
P=X(1);
C=X(2);
Q=X(3);
% Zi= fitness value
% C,P,Q = variables vectors
for i=1:10;
Zi = P(i).*Q(i)-C(i).*Q(i);
end
end
The output should be a 1*n matrix.
When I run the function it works, but I get only one value, and it doesn't work with the GA toolbox; I keep getting the same error (index exceeds matrix dimensions). How can I fix this error?
Any help would be appreciated. Thank you.
Relevant answer
Answer
Aicha Ghedamsi , you need to call myfitness with X as the input.
X=rand(3,10)
myfitness(X)
  • asked a question related to Applied Optimization
Question
5 answers
Research pointed me to reinforcement learning. However, I will not be able to obtain a realistic simulation of the machine.
Currently I have acquired about 5 months' worth of data on target values, measurement values and the quality of the produced parts, as tabular data.
Therefore I had the following idea for a simulated environment:
- The RL agent chooses a set of target values based on the table.
- The agent receives a random observation (measurement values) that match the selected target values.
- The reward depends on the quality of the produced part that matches the selected target values.
--> The agent should then proceed to learn the optimal target values.
My questions:
- Are there better ways to simulate the environment?
- Are there better ways than reinforcement learning to determine the best target values?
Thank you for reading this!
Relevant answer
Answer
It may be more helpful to follow https://web.stanford.edu/~boyd/cvxbook/.
  • asked a question related to Applied Optimization
Question
4 answers
Hello,
I'm looking for the SLSQP algorithm for optimization. I have to apply it to a nonlinear constrained minimization problem in Python, and I'd like to know whether it is appropriate for my problem.
Thanks in advance
Relevant answer
Answer
You can use the scipy.optimize.minimize function with method='SLSQP'.
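A small sanity-check example of SLSQP on a smooth constrained problem (the kind of problem it is designed for); your own objective and constraints would replace these:

import numpy as np
from scipy.optimize import minimize

objective = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2
constraints = [
    {"type": "ineq", "fun": lambda x: x[0] - 2 * x[1] + 2},      # each expression must stay >= 0
    {"type": "ineq", "fun": lambda x: -x[0] - 2 * x[1] + 6},
    {"type": "ineq", "fun": lambda x: -x[0] + 2 * x[1] + 2},
]
res = minimize(objective, x0=[2.0, 0.0], method="SLSQP",
               bounds=[(0, None), (0, None)], constraints=constraints)
print(res.x, res.fun)     # roughly [1.4, 1.7]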
  • asked a question related to Applied Optimization
Question
10 answers
Can higher order partial derivatives be derived or approximated from lower order partial derivatives?
There is no specific equation to state the partial derivatives, but they can be measured empirically.
Can higher order partial derivatives be derived from lower order partial derivatives, like 3rd (4th) order from 2nd (3rd) order? And how long can you continue this approximation of an order from the preceding order?
Empirically measuring the higher order partial derivatives is computationally too expensive in this case.
Relevant answer
Answer
To obtain an exact mathematical solution, it is necessary and sufficient to do the following.
1. Firstly, it is necessary to establish what quality of movement is being studied: translational, parabolic, elliptic, hyperbolic, stochastic.
2. Secondly - determine: how much data you want to use:
- so, for a stochastic movement one point is enough;
- for translational motion, two points are sufficient;
- for parabolic, elliptical and hyperbolic movements, three points are sufficient.
3. Thirdly, the generalized finite difference method is used to equation the model of established motion. The method gives a function of the dependence of the amplitude on a given interval from frequency. Each new frequency is a new derivative. Choose the right one for the direct task. Select the amplitude and find the frequency - for the inverse problem.
To obtain physical information from an exact mathematical solution, it is necessary and sufficient:
1. Choose the general quality of movement - stochastic.
2. Use the function of determinate probability.
3. Find the desired derivative and its value. You solve the direct or inverse problem by initially determining the total, kinetic and potential energies of the process at the measurement point.
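Returning to the original question, the purely numerical route is to take finite differences of the measured lower-order derivative; below is a minimal sketch with an illustrative 1-D example (each extra differentiation amplifies noise and truncation error, so only a couple of additional orders are usually trustworthy):

import numpy as np

x = np.linspace(0.0, 2.0 * np.pi, 201)
d2f = -np.sin(x)                          # "measured" second derivative of f(x) = sin(x)

d3f_approx = np.gradient(d2f, x)          # third derivative estimated from the second
d4f_approx = np.gradient(d3f_approx, x)   # fourth estimated from the (approximated) third

print("max error, 3rd derivative:", np.max(np.abs(d3f_approx - (-np.cos(x)))))
print("max error, 4th derivative:", np.max(np.abs(d4f_approx - np.sin(x))))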
  • asked a question related to Applied Optimization
Question
150 answers
I have found that some mathematicians disagree with meta-heuristic and heuristic algorithms. However, from a pragmatic point of view, such algorithms often can find high-quality solutions (better than traditional algorithms) when tackling an optimization problem, although their success usually depends on tuning the algorithms' parameters. Why are some mathematicians against these algorithms? Is it because they don't possess a convergence theory?
I am looking for different arguments on the situation.
Thank You!
Relevant answer
Answer
This question is similar to another one that I have seen. My response to that one was basically this:
1. I don't know any mathematicians who are prejudiced against heuristics per se. Many of them (myself included) use them regularly.
2. I do know a lot of mathematicians who are fed up with people claiming to invent dozens of "new" meta-heuristics every year (like harmony search or the bat algorithm), when really they are just old ideas expressed in fancy new words. Actually, many people in the heuristics community are unhappy with it as well.
3. I also know people working in combinatorial and/or global optimisation who are fed up with people saying "problem X is NP-hard, and therefore one must use a heuristic". This shows a breathtaking ignorance of the (vast) literature on exact methods (and approximation algorithms) for NP-hard problems.
(By the way, I agree with Michael's comment about "matheuristics" being a very interesting research direction.)
  • asked a question related to Applied Optimization
Question
52 answers
  • Any scientific or empirical evidence/reasons why, out of 196 countries in the world, only 25 of them are very rich?
Relevant answer
Answer
Because some countries have a management system for their resources, and others do not have a management system
  • asked a question related to Applied Optimization
Question
4 answers
I am looking for a simple method to design a 1x16 microstrip power divider for wideband operation (impedance bandwidth 5 GHz) at 30 GHz. Please suggest an easy method to design a power divider for wideband applications.
Thank you very much
Kanhaiya Sharma
Relevant answer
You can use an active divider composed of emitter-follower or source-follower transistors connected in parallel. Every emitter follower has a matching function to transform the 50 ohm output resistance to 50 x 16 = 800 ohm.
This will be a very wide band divider consisting of 16 emitter followers connected in parallel. The input will be capacitively coupled.
Best wishes
  • asked a question related to Applied Optimization
Question
20 answers
As you know, the null space of a matrix A is the set of vectors that satisfy the homogeneous equation Ax=0.
To find x (as the null space of A), I wrote two optimization models, given below. I know they are simple and straightforward and the solution may not be easily achievable, but this is just my first basic idea.
--------------------------------------------------------------------------------------------------
1) Min Z=1
s.t.
sum(j , A(i,j) * x_null (j,m)) = zero(i,m);
where, Z is a dummy variable,
i*j is the dimension of A,
and m is assumed as a known column number of x.
But, the result is always x_null (j,m) = 0.
--------------------------------------------------------------------------------------------------
To deal with this problem, I modified (1) as below.
2) Max Z = sum((j,m) , x_null (j,m))
s.t.
sum(j , A(i,j) * x_null (j,m)) = zero(i,m);
Here, Z is the objective function.
In this model, the solver reports 'unbounded or infeasible'!
--------------------------------------------------------------------------------------------------
Note that I let i < j to make the number of equations less than the number of unknowns, and thus the system is under-determined.
Any help would be highly appreciated!
Relevant answer
Answer
You can also use Hossein Karimi's formulation, although I wouldn't prefer a MIP, as this would grow as O(e^N), where N is the row dimension of your matrix A. Here is the CPLEX OPL formulation:
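For comparison with the two LP formulations above, the standard direct route is the SVD, exposed in SciPy as null_space; model (1) only ever finds the trivial solution and model (2) is unbounded because any null vector can be scaled arbitrarily:

import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])        # rank 1, so the null space has dimension 2

N = null_space(A)                      # columns form an orthonormal basis of {x : A x = 0}
print("null-space basis:\n", np.round(N, 4))
print("check A @ N ~ 0:", np.allclose(A @ N, 0.0))
# If an optimization formulation is still wanted, one option is to maximize ||x|| subject to
# A x = 0 and a normalization such as ||x||_inf <= 1, or to fix one component to 1, which
# removes both the trivial solution and the unboundedness.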
  • asked a question related to Applied Optimization
Question
14 answers
I have been reading about performing sensitivity analysis on the solution of a Linear Programming problem (calculating shadow prices, reduced costs, and the intervals within which the basic solution remains valid). It is clearly described for academic problems with 2 or 3 variables, but when I tried to apply the same logic to a real-life, large-scale problem, I didn't get promising results. This is because only a few of the variables' values matter to me, while the others are there for other purposes (like changing hard constraints to soft ones, etc.). But all of them are taken into account when checking whether the basic solution has changed, hence the interval returned by the solver is much narrower than I want it to be.
Where can I find an example of real applied sensitivity analysis, if there is any?
Relevant answer
Answer
Thank you Katarzyna
I believe that we have a misunderstanding.
I agree with your definition of variables, but I DON'T CALL an objective function a 'criterion'. NOT in LP, although YES in SIMUS, because in that method criteria and objective functions are interchangeable.
But in general, in MCDM, my definition of criterion or constraint is exactly the same as yours, as is your definition of performance values. However, while it is true that criteria 'constrain' the alternatives, I prefer to use the term 'constraint' for the values which are the limits or thresholds of the criteria, that is, the RHSs, because they constrain the range of validity of the criteria. In reality, criteria 'force' the alternatives to respect some conditions, which in turn may be high or low, and are bounded by the RHSs.
You mentioned before that you had 2500 variables or alternatives, and you say that there are 3 production units; does it mean that each production unit may have many alternatives?
Can a particular alternative, be replicated in any number of production units? For instance, alternative 789 can be in units A and C, or even in A, B and C, while other alternatives may only be in one of them?
When you say that other variables are placed, are you referring to artificial and slack variables? If this is so, it is strange, since the solver adds them automatically, or maybe I did not understand your statement.
I assume that you want to know how the production of unit B changes when you modify the RHS of some constraints, for instance #1.
What do you mean by 'I display' constraint #1 up, constraint #1, and constraint #1 down? I understand that what you want to say is that you can put low and high limits on constraint #1. Is this correct?
I have two questions for this:
1. Why do you consider a constraint 1 when, if you have low and high limits defined by the ≥ and ≤ operators respectively, constraint 1 is included in that range? In my opinion that arrangement may cause you to get an infeasible solution, because if the model selects a value between the two ends, how does it manage to also comply with constraint 1, which I imagine has the '=' operator?
2. How do you know that constraint 1, or any other, is the one you have to work with? It could very well be that said constraint does not have any influence on the alternatives selected. That is, you have to work with the criteria that are responsible for the selection, maybe one or several. Once they are determined, you will have the certainty that they will change the solution found. If you work with constraints that are not relevant, you can increase and decrease their RHSs, and you will see that there is no change in the selection.
This information, as you know, is in the dual, or you can see directly in the Solver screen for sensitivity analysis (SA). Of course, you know all of this, but my comment derives because you don’t explain why criterion 1 is selected for SA.
No, I was not referring to the objective function Z coefficients. I am referring to changes in the RHSs.
To your question, my answer is yes, you can. Each time you modify the RHS of a binding constraint and run the Simplex, you can immediately see the new values of the alternatives.
It is extremely useful to work at the same time with the dual, and once you get the new values for the alternatives, go to 'sensitivity' (in Solver you have to run the solver twice for each RHS change: the first run gives you the new alternative values, and the second gives you the sensitivity analysis screen). That screen gives a lot of information, such as the shadow prices, the relevant criteria, the validity range of each shadow price, as well as the reduced costs.
For my software, don’t worry, you can have it any time you want.
I believe there is a mistake when you say that the slope is determined by the reduced cost value.
The slope is given by the shadow prices. The reduced cost tells you how much you have to modify a coefficient in the objective function so that the corresponding alternative enters the solution. Observe that the reduced cost of the selected variables is zero.
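To make this concrete outside Excel Solver, here is a minimal sketch in Python using PuLP; the model and its coefficients are purely illustrative and are not the data from the attached worksheet. After solving, PuLP (with the CBC solver) exposes the shadow price of each constraint via .pi and the reduced cost of each variable via .dj:

import pulp

# Illustrative LP: two alternatives, two criteria (all coefficients invented)
prob = pulp.LpProblem("sa_demo", pulp.LpMaximize)
a3 = pulp.LpVariable("A3", lowBound=0)
a6 = pulp.LpVariable("A6", lowBound=0)
prob += 20000 * a3 + 40000 * a6                # objective Z
prob += 300 * a3 + 400 * a6 <= 559, "C2"       # criterion C2
prob += 100 * a3 + 250 * a6 <= 335, "C3"       # criterion C3
prob.solve(pulp.PULP_CBC_CMD(msg=0))

for name, c in prob.constraints.items():
    print(name, "shadow price:", c.pi, "slack:", c.slack)
for v in prob.variables():
    print(v.name, "value:", v.varValue, "reduced cost:", v.dj)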
Outside the range nothing happens, because the criterion you were using is no longer binding, and the alternative keeps its last value.
In your next paragraph, you say: ‘But if I change RHS to some value outside of the range, I can see that indeed, basis have been changed as well, but the change is related to, for example, a variable indicating the production in unit A jumped from 50 to 0’.
Yes, you are right: if the basis changes, there will be another variable instead of A, and this one will be zero.
The jump from 50 to 0 means that when the upper/lower limit of the range is exceeded, that variable no longer belongs to the solution; since Z equals the sum of the products of the aij's and the solution found, and A is no longer in the solution, its value is zero, while the remaining variables keep their last values.
I am attaching a worksheet for an example that I did years ago, with two alternatives A1 and A2, subject to five criteria, whose objective was to maximize production.
The first optimal result indicates that A3 = 0.16 and A6 = 1.22. This result shows Z = 52,000, and the corresponding criteria are C2 and C3.
Observe that:
· C2 has a shadow price λ2 = 0.07 and C3 a shadow price λ3 = 17.95.
· Solver also shows that the allowable increase of the RHS associated with λ3 is 429, while the allowable increase associated with λ2 is 48.
Consequently, the upper limit for C3 = 335 (its RHS) + 429 = 764, and for C2 = 559 (its RHS) + 48 = 607.
· If we consider only C3 and increase its RHS, as shown in the Excel spreadsheet (we can increase it by any appropriate amount; the steps need not be equal), observe how the value of A3 progressively rises to 2.74, which corresponds to the upper limit of C3, that is, 764.
· Both λ3 and λ2 remain constant throughout the whole procedure.
· Simultaneously, the value or score of A6 progressively decreases from 1.22 down to 0 at RHS3 = 764.
· The allowable increase of RHS3 decreases from 420 to 0 at RHS3 = 764.
· Correspondingly, the allowable increase of RHS2 increases from 48 to 825 at RHS3 = 764.
· If the value of 764 is surpassed, say we put 765, alternative A3 keeps its value, since C3 no longer has any influence on it; however, A6 = 0, meaning that this alternative is no longer in the solution. Notice also that λ3 = 0 while λ2 = 24.61 once the 764 limit is exceeded.
In fact, we should change RHS3 and RHS2 simultaneously, each with its own increments or decrements, and then the performance curve will be the result of both acting at the same time, as in real-world scenarios.
The graphic shows the Z performance curve together with λ3. It appears as a broken straight line because I used different increment sizes for RHS3.
I believe this elementary example shows you how the range diminishes for the selected alternative.
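If you want to reproduce this kind of RHS ranging programmatically instead of re-running Solver by hand, a sketch along these lines (continuing the invented model from the earlier snippet, so the numbers will not match the worksheet) re-solves the LP while stepping RHS3 and prints Z, λ3 and the alternative values at each step; you can see the shadow price stay constant inside its validity range and drop to zero beyond it:

import pulp

def solve_with_rhs3(rhs3):
    # Rebuild and solve the illustrative LP for a given RHS of criterion C3
    prob = pulp.LpProblem("rhs_ranging", pulp.LpMaximize)
    a3 = pulp.LpVariable("A3", lowBound=0)
    a6 = pulp.LpVariable("A6", lowBound=0)
    prob += 20000 * a3 + 40000 * a6
    prob += 300 * a3 + 400 * a6 <= 559, "C2"
    prob += 100 * a3 + 250 * a6 <= rhs3, "C3"
    prob.solve(pulp.PULP_CBC_CMD(msg=0))
    return pulp.value(prob.objective), prob.constraints["C3"].pi, a3.varValue, a6.varValue

for rhs3 in range(335, 436, 10):               # step the RHS of C3 upward
    z, lam3, v3, v6 = solve_with_rhs3(rhs3)
    print(rhs3, round(z, 1), round(lam3, 3), round(v3, 3), round(v6, 3))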
Now, if you want to keep the original production of B unchanged, why don't you express it as a new constraint, using the binary format, that is, a criterion in which all alternatives have a coefficient of zero except B, which is 1.
If you have a value in mind for B, say for instance 820, you put it as the RHS, and use the ≥, ≤, or = operator to indicate whether you want a result for B larger than 820, lower than 820, or equal to 820, respectively.
However, I would stay away from the equality, because it imposes very hard restrictions on the problem.
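In the same PuLP style as the earlier snippets, and purely as a placeholder (the variable name b and the value 820 are hypothetical), such a criterion would be added as one extra row:

# 'b' is the variable representing the production of unit B in your own model
prob += 1 * b >= 820, "keep_B_production"      # switch to <= or == as needed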
I sincerely hope that my comments help you, and indeed I would be very interested if you could keep me informed about whether it works.
  • asked a question related to Applied Optimization
Question
6 answers
Please give their appropriate cases.
Relevant answer
Answer
We can choose linear or dual depending on the conditional constraints.
  • asked a question related to Applied Optimization
Question
3 answers
Dear community,
I would like to request some references related to results on parameter-dependent Pareto Front, if there are any. I am interested in studying the behavior of the Pareto Front with respect to an external parameter, for multi-objective problems.
Thanks for any recommendation!
Best,
Nathalie
Relevant answer
Answer
  • asked a question related to Applied Optimization
Question
10 answers
The attached picture is the result of a bi-objective optimization problem; a genetic algorithm was used. The "missed-out" band (approximately 41 to 43.5 on the vertical axis) is within the range of the respective objective function (in other words, there are values of the design variables which yield values between 41 and 43.5 of that objective function).
The question is, is there any explanation for this discontinuity (physical or mathematical)? Or should I see this as a fault in my solution procedure?
It is worth adding that I've carried out the procedure several times with different optimization parameters (population size, mutation fraction, etc.), but the discontinuity always seems to be there...
Relevant answer
Answer
The discontinuity looks like it arises from a concave section of the objective front, leading to regions of the objective front that are dominated. Concavities in the objective front are quite common in many real engineering problems, leading to discontinuities in the Pareto front. The other possible reason is that the objective front forms two distinct feasible regions with a gap in between; from the way the top section is curving just before the break, two separated regions could be the cause.
For clarity: the objective front is the outer boundary of the objective-space region that contains feasible solutions, while the Pareto front is the subset of the objective front consisting of non-dominated solutions.
There has been work over the years on developing optimisers that locate the objective front so that the reason for Pareto discontinuities can be explored more easily, but designing algorithms to find the objective front is really not easy (especially for problems with more than two objectives).
Evan
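To see the first mechanism at work, one can sample a continuous bi-objective front with a concave bulge and filter out the dominated points; the surviving Pareto front then has a gap even though the underlying objective front is continuous. A minimal Python sketch with an invented front shape (both objectives minimised):

import numpy as np

# Invented continuous objective front with a concave bulge in the middle
t = np.linspace(0.0, 1.0, 400)
f1 = t
f2 = 1.0 - t + 0.6 * np.sin(np.pi * t)
pts = np.column_stack([f1, f2])

def non_dominated(points):
    # Keep points not dominated by any other point (minimisation of both objectives)
    keep = []
    for i, p in enumerate(points):
        dominated = np.any(np.all(points <= p, axis=1) & np.any(points < p, axis=1))
        if not dominated:
            keep.append(i)
    return points[keep]

front = non_dominated(pts)
front = front[np.argsort(front[:, 0])]
print("sampled points:", len(pts), "non-dominated:", len(front))
print("largest f1 gap in the Pareto front:", np.diff(front[:, 0]).max())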
  • asked a question related to Applied Optimization
Question
5 answers
Hi everyone,
I want to find a solution manual for this book:
"Solution Manual For Applied Optimization with Matlab By P. Venkataraman"
Can anyone help me with that, please?
I know that this book has a solution manual from Wiley, but it is an instructor's manual and I don't have access to it, so.....
This book has so many examples and I want to learn from them, but without a proper manual it's impossible to learn and code them, and I don't have enough time for that.
Thank you so much in advance
Relevant answer
Answer
You could review a sample of another solution manual to see the general approach.
Good luck!
  • asked a question related to Applied Optimization
Question
7 answers
Hi All,
I have the stress output of a structural analysis plotted against x (the x range is constant in all cases); it is a curve with minima and maxima.
Changing the model characteristics (stiffness, etc.) and doing a batch run, how could I code the optimization?
Preferably in Python.
Relevant answer
Answer
Hi Farzad Torabi, I think you first need a clear idea of your optimization problem. What do you want to optimize (only mass, or multiple objectives?), what are your variables and how many (thickness, topology, ..., continuous or discrete), and what constraints (how many, linear or nonlinear) do you want to consider? Then roughly select an optimization method (gradient-based, evolutionary, ...), and then find an optimization toolbox providing a method you want (nlopt, Pyopt, pyoptsparse, scipy, midaco, ...).
In general you have to provide a function returning your objective and the constraints (e.g. max stresses, ...) for a given variable vector. Sometimes objective and constraints are split into two functions, depending on the toolbox. In the function providing the objective and constraints, you have to set up the FE run with the new variables (e.g. change the input file) and then solve the system by running the FEM. After the FEM run has finished, you have to read the results back into Python (e.g. by reading an ASCII file) and return the desired data from your function.
Starting FEM from Python is possible via the built-in "subprocess" module: with e.g. subprocess.run(["path/to/executable", "input_file.xyz"]) you run your executable with a given input file, depending on your FEM solver.
I hope this answers your question. More details depend on your concrete optimization problem.
Kind regards
Sascha
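Following that outline, a minimal sketch of the coupling might look like the one below. The executable path, the input/output file names and the two design variables are hypothetical placeholders for your own FEM setup; scipy's gradient-free Nelder-Mead is used since derivatives of the FEM output are not available:

import subprocess
import numpy as np
from scipy.optimize import minimize

def run_fem(thickness, stiffness):
    # Write a (hypothetical) solver input file, run the external FEM executable,
    # and read back the stress curve sigma(x) from its (hypothetical) text output.
    with open("model_input.txt", "w") as f:
        f.write(f"thickness {thickness}\nstiffness {stiffness}\n")
    subprocess.run(["path/to/fem_executable", "model_input.txt"], check=True)
    return np.loadtxt("stress_output.txt")        # one stress value per x station

def objective(params):
    sigma = run_fem(*params)
    return sigma.max()                            # e.g. minimise the peak stress over x

res = minimize(objective, x0=[2.0, 210e3], method="Nelder-Mead")
print("best parameters:", res.x, "objective:", res.fun)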
  • asked a question related to Applied Optimization
Question
17 answers
What are the links in their definitions? How do you interconnect them? What are their similarities or differences? ...
I would be grateful if you could reply by referring to valid scientific literature sources.
Relevant answer
Answer
All of them are approaches that exploit the computational intelligence paradigm. Machine learning is oriented toward data analytics, while evolutionary computation deals with optimization problems.
  • asked a question related to Applied Optimization
Question
11 answers
I'm trying to identify which approach would work best to select a set of elements, each with different features, so as to minimise a certain value. To be more specific, I might have a group of elements with Features 1, 2, 3, 4 and another group with Features 2, 3, 4, 5.
I'm trying to minimise the overall value of Features 2 and 3, and I also need to pick a certain number of elements from each group (for instance, 3 from the first group and 1 from the second).
From the research I did it seems that combinatorial optimization and integer programming are the best suited for the job. Is there any other option I should consider? How should I set up the problem in terms of cost function, constraints, etc.?
Many thanks,
Marco
Relevant answer
Answer
Differential evolution is a good method; you could try it.
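For the integer-programming route mentioned in the question, a minimal sketch with PuLP is shown below; the feature values and group sizes are invented for illustration. Each element gets a binary selection variable, each group gets a cardinality constraint, and the cost is the summed Feature 2 + Feature 3 value of the chosen elements:

import pulp

# Hypothetical cost of each element = its Feature 2 value + Feature 3 value
group1 = {"e1": 4.0, "e2": 7.5, "e3": 3.2, "e4": 6.1}   # elements with Features 1-4
group2 = {"e5": 2.9, "e6": 5.4, "e7": 4.8}              # elements with Features 2-5
costs = {**group1, **group2}

prob = pulp.LpProblem("selection", pulp.LpMinimize)
x = {e: pulp.LpVariable(e, cat="Binary") for e in costs}

prob += pulp.lpSum(costs[e] * x[e] for e in costs)       # minimise total Feature 2 + 3
prob += pulp.lpSum(x[e] for e in group1) == 3            # pick exactly 3 from group 1
prob += pulp.lpSum(x[e] for e in group2) == 1            # pick exactly 1 from group 2

prob.solve(pulp.PULP_CBC_CMD(msg=0))
print("selected:", [e for e in x if x[e].varValue == 1])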
  • asked a question related to Applied Optimization
Question
3 answers
I have 3 objectives in an ILP model. The first has to be maximized, and the second and third should be minimized.
I would like to compute the knee point of the generated Pareto front.
Do you have an idea about the formula?
Thanks.
Relevant answer
Answer
Greetings,
The easiest way is to calculate the hypervolume of each solution; the one with the highest value is usually at the knee. But first you have to flip the first objective so that all objectives are minimized: just multiply the values of the first objective by -1. Before you calculate the hypervolume, do not forget to normalize all the objectives.
As a reference, you can check out "A Knee Point Driven Evolutionary Algorithm for Many-Objective Optimization" and "Finding Knees in Multi-objective Optimization".
Regards,
Miha
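As a rough illustration of that recipe (the Pareto points below are invented, and the 'hypervolume' of a single solution is taken as the volume of the box it dominates with respect to a reference point), the sign flip, normalization, and knee selection could look like this in Python:

import numpy as np

# Hypothetical Pareto front with 3 objectives: f1 was maximized, f2 and f3 minimized
front = np.array([[10.0, 4.0, 7.0],
                  [ 8.0, 2.5, 6.0],
                  [ 6.0, 2.0, 3.0],
                  [ 3.0, 1.0, 2.5]])

F = front.copy()
F[:, 0] *= -1                                              # flip the maximized objective
F = (F - F.min(axis=0)) / (F.max(axis=0) - F.min(axis=0))  # normalize each objective to [0, 1]

ref = np.full(F.shape[1], 1.1)       # reference point slightly worse than the worst values
hv = np.prod(ref - F, axis=1)        # dominated box volume of each solution
knee = int(np.argmax(hv))
print("knee point (original scale):", front[knee])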