# Optimization Methods - Science topic

Explore the latest questions and answers in Optimization Methods, and find Optimization Methods experts.
Questions related to Optimization Methods
Question
Can anyone suggest, or is anyone interested in having a conversation about, AI/ML and PLS-based mathematical tools? I would like to learn more.
Explainable ML using OLAP data for verification - that could be an interesting topic.
Question
Let's assume that I have solved a particular optimisation problem using several different algorithms. If I want to compare them with each other, which characteristics should I consider? For example, one of them may find the global optimum better than the others, or one of them may converge faster than the others. Apart from finding the global optimum and speed of convergence, are there any other characteristics I should consider?
The NFLT is valid only on a set of problems closed under permutations (c.u.p.), which never happens in practice.
And for a set of problems that is _not_ c.u.p., it can be proved that a best algorithm exists.
Unfortunately, the proof is not constructive.
So it is worth trying to improve algorithms, and indeed we need to know how to compare them.
Here is what I usually do for two stochastic algorithms A1 and A2 on a given problem:
- run 100 times A1, with a given search effort (usually a number of evaluations), plot the CDF_A1 (cumulative distribution function) of the 100 final best results.
- do the same for A2 => CDF_A2
If, on the figure, CDF_A1 is completely "above" CDF_A2, then A1 can safely be said "better", for this function.
And vice versa, of course.
If the two curves cross at some value r, the conclusion is not that clear, unless you consider only final best values smaller than r. Then you have to be more precise, saying something like: "If I accept only final results smaller than r, then _this_ algorithm is better".
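The recipe above can be sketched in code. Below is a minimal Python illustration: the hill-climber and the sphere function are only stand-ins for A1, A2, and the real problem, and plotting the two CDFs is left out. The dominance test compares matching order statistics, which for equal run counts is equivalent to one empirical CDF lying entirely at or above the other.

```python
import random

def hill_climb(f, dim, bounds, evals, step, rng):
    """Stand-in stochastic optimiser (plays the role of A1 or A2)."""
    lo, hi = bounds
    x = [rng.uniform(lo, hi) for _ in range(dim)]
    best = f(x)
    for _ in range(evals - 1):
        cand = [min(hi, max(lo, xi + rng.gauss(0, step))) for xi in x]
        fc = f(cand)
        if fc < best:
            x, best = cand, fc
    return best

def dominates(a_runs, b_runs):
    """True if CDF_A lies entirely at or above CDF_B for minimisation,
    i.e. every order statistic of A is <= the matching one of B
    (assumes equal numbers of runs)."""
    return all(a <= b for a, b in zip(sorted(a_runs), sorted(b_runs)))

sphere = lambda x: sum(xi * xi for xi in x)
rng = random.Random(0)
runs_a1 = [hill_climb(sphere, 5, (-5, 5), 2000, 0.5, rng) for _ in range(100)]
runs_a2 = [hill_climb(sphere, 5, (-5, 5), 2000, 2.0, rng) for _ in range(100)]
print("A1 dominates A2:", dominates(runs_a1, runs_a2))
print("A2 dominates A1:", dominates(runs_a2, runs_a1))
```

If neither call returns True, the CDFs cross and the more careful comparison described above (restricting to final values below the crossing point r) is needed.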
Question
What is the main disadvantage of a global optimization algorithm for the Backpropagation Process?
Under what conditions can we still use a local optimization algorithm for the Backpropagation Process?
Armin Hajighasem Kashani Non-linear data can be handled and processed simply with a neural network, which is otherwise difficult with perceptrons and sigmoid neurons. In neural networks, the troublesome decision-boundary problem is reduced.
However, the downsides include the loss of neighborhood information, the addition of more parameters to optimize, and the lack of translation invariance.
Question
May I have the MATLAB code of some well-known multi-objective benchmark functions like Schaffer, Fonseca, ZDT1, ZDT6, Srinivas, DTLZ5, DTLZ6, LZ09_F1, LZ09_F2, LZ09_F6, LZ09_F7, LZ09_F9, WFG4, CF1, CF2, CF4, CF5, CF6, CF9, and CF10?
Can I get MATLAB code for Fonseca, ZDT1, ZDT6, Srinivas, DTLZ5, DTLZ6, LZ09_F1, LZ09_F2, LZ09_F6, LZ09_F7, LZ09_F9, WFG4, CF1, CF2, CF4, CF5, CF6, CF9, and CF10
Thank you
Question
I have already seen some examples of GA (genetic algorithm) applications to tune PID parameters, but so far I don't know a way to define the bounds. The bounds are always presented in manuscripts, but they appear without much explanation. I suspect they are obtained empirically.
Could anyone recommend relevant research?
Dear Italo,
In general, the bounds are selected empirically because the "suitable range" of a PID controller is problem-dependent. The way I select the bounds is: 1. I tune a PID controller that produces a stable closed-loop response. Then, 2. I choose a range around this "nominal" value large enough that the GA still has some freedom to search the optimization space. Finally, 3. if the GA converges, I decrease/increase this range until I get reasonably good behavior from the GA, i.e., the GA doesn't get stuck in a sub-optimal minimum.
If you want a more rigorous approach, I would suggest computing the set of all stabilizing PID controllers for the particular system, and then setting the bounds of the GA search space to that computed set. That way, you search for the optimal controller only among those producing a stable closed-loop response.
Best,
Jorge
Question
I have designed the optimization experiment using Box-Behnken approach.
What should I do if any factor combination fails, for example because aggregation occurs?
Should I revise the whole optimization, or is there a method to skip that particular factor combination?
And if I need to redo the whole experiment, what method should I use to evaluate the boundary factor values? The screening methods I have seen require at least 6 factors.
Any help is appreciated.
Greetings.
If particular factor combinations lead to "loss of data" because you cannot evaluate them, then you can set constraints in your future DOE. As a result, you will work with D-optimal (in screening) or I-optimal (in optimization) designs. You need these computer-generated designs in your case because the constrained design region is not orthogonal, so classical factorial designs are not appropriate.
Question
I am looking for a scientific field or real-life subject where I can use some convex analysis tools like Stochastic Gradient Descent or unidimensional optimization methods. Any suggestions?
Convex optimization theory has an important aspect: the duality gap. This gap occurs for badly behaved constraints. To find such problems, we analyze the orders of smallness of infinitesimal quantities.
Question
Dear all,
I have a set of results obtained from FEA software using a particular DOE. I imported the data from an Excel file into MATLAB and used the Curve Fitting Toolbox to obtain the response surface.
I know the optimization has to be done using the Optimization Toolbox, but I do not understand how to pass the response-surface data (design variables vs. objective functions) to the solver. Any help would be highly appreciated!
Best Regards,
Rashiga
Question
I want to solve a nonlinear optimization problem. I can find its solution using Brent's method in the case where there is no regularization. Since this method does not use derivatives, can anyone guide me on how to solve the problem when regularization is included? Is there any way?
Orthogonal functions are often used for image processing (e.g., FFT, Radon transform, sinogram, SVD used in transmitting images from deep space probes). There are lots of examples in my book, which will be free on 5/10. I have attached a zip file containing all of the examples. https://www.amazon.com/dp/B07GT8TLDV
Question
I want to know the shortcomings of the two techniques and how they differ from RSM.
Also, are they artificial intelligence algorithms or not?
Thank you
Very important topic.
Question
I have two functions f(x,y) and g(x,y) that depend on two variables (x,y). I want to find a solution that minimizes f(x,y) while simultaneously maximizing g(x,y).
P.S: These functions are linearly independent.
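One common approach is to scalarize: maximizing g is the same as minimizing -g, so minimizing f - lambda*g for a sweep of weights lambda > 0 traces out Pareto-optimal trade-offs (at least for convex problems). A small sketch, where f and g are illustrative placeholders and not the asker's actual functions:

```python
# Weighted-sum scalarisation: min f - lam * g over several weights lam.
from scipy.optimize import minimize

f = lambda v: v[0]**2 + v[1]**2   # placeholder objective to minimise
g = lambda v: v[0] + v[1]         # placeholder objective to maximise

pareto = []
for lam in [0.1, 0.5, 1.0, 2.0]:
    # each weight yields one trade-off point between f and g
    res = minimize(lambda v: f(v) - lam * g(v), x0=[0.0, 0.0])
    pareto.append((f(res.x), g(res.x)))
print(pareto)
```

For non-convex fronts, or when the trade-off itself is of interest, a Pareto-based method (e.g. NSGA-II) is the usual alternative to a weight sweep.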
Question
Can anyone provide me with PSO MATLAB code to optimize the weights of multi types of Neural Networks?
Application of PSO-BP Neural Network in GPS Height Fitting
Question
I am a beginner at the optimization of trusses with metaheuristic algorithms. Could anyone please help me find MATLAB code for optimizing a 10-bar truss?
Do you have a mathematical model of your problem? Have you implemented it?
I would run a classical optimization solver before resorting to a metaheuristic, though.
Question
I need code for Consensus + Innovations and OCD in any programming language, preferably MATLAB or R.
Aamir Nawaz, Can you provide code for Consensus + Innovations and Optimality Conditions Decomposition?
I would appreciate it if you help me.
Question
For a multiobjective optimization problem with linear equality, linear inequality, and nonlinear constraints, which type of algorithm works well?
by using multiple objective and mixed model
Question
I am starting to learn how to model an energy storage system for a wind farm. Would anybody know where to look for existing optimization models in MATLAB, Gurobi, GAMS, etc.? This is basically just so I can get a general feel for how it is going to work. Any suggestions will be appreciated.
Regards,
Giovanni Ponce
Looks like there are many resources available to you, Ian William Gibson; these are just a few examples:
Dugan, R.C., Taylor, J.A. and Montenegro, D., 2016, May. Energy storage modeling for distribution planning. In 2016 IEEE Rural Electric Power Conference (REPC) (pp. 12-20). IEEE.
Sparacino, A.R., Reed, G.F., Kerestes, R.J., Grainger, B.M. and Smith, Z.T., 2012, July. Survey of battery energy storage systems and modeling techniques. In 2012 IEEE Power and Energy Society General Meeting (pp. 1-8). IEEE.
Jiang, Z. and Yu, X., 2009, July. Modeling and control of an integrated wind power generation and energy storage system. In 2009 IEEE Power & Energy Society General Meeting (pp. 1-8). IEEE.
Ma, Z., Pesaran, A., Gevorgian, V., Gwinner, D. and Kramer, W., 2015. Energy storage, renewable power generation, and the grid: NREL capabilities help to develop and test energy-storage technologies. IEEE Electrification Magazine, 3(3), pp.30-40.
Question
In most AI research the goal is to achieve higher-than-human performance on a single objective.
I believe that in many cases we oversimplify the complexity of human objectives, and therefore I think we should perhaps step back from improving on human performance
and instead focus on understanding human objectives first, by observing humans in the form of imitation learning while still exploring.
In the attachment I added a description of the approach I believe could enforce more human-like behavior.
However, I would like advice on how to formulate a simple imitation-learning environment to show a proof of concept.
One idea of mine was to build a gridworld simulating a traffic-light scenario: while the agent is only rewarded for crossing the street, we still want it to respect the traffic rules.
Kind regards
Jasper Busschers master student AI
Interesting
Question
Currently, I am looking for a passionate teammate to do collaborative research in System Identification. We will try to combine an optimization technique to get the most optimal model. bachelor students, master students, no problem. You may also ask your friends to join this group. Sure, I will be the last author, no worries. My target is SCI Journal.
Best regards,
Yeza
Hi, your question sounds very close to my work, but more detail is needed for further consideration. For example: what do you want to identify a model for? What kind of SI method will you use: grey-box, black-box, or white-box modeling? ...
regards, Man
Question
Which optimization technique is best to find the minimum number of experiments to be performed with four factors and five levels?
Thank you, Reza Shahin and Gedefaye Achamu for your valuable responses.
Question
I have a multi-objective optimization with the following properties:
Objective functions: three minimization objectives (two non-linear functions and one linear function)
Decision variables: two real variables (bounded)
Constraints: three linear constraints (two bound constraints and one relationship constraint)
Problem type: non-convex
Solution required: Global optimum
I have used two heuristic algorithms to solve the problem NSGA-II and NSGA-III.
I have run NSGA-II and NSGA-III for the following instances (population size, number of generations, maximum number of function evaluations, i.e. pop size x no. of gen): (100, 10, 1000), (100, 50, 5000), (100, 100, 10000), (500, 10, 5000), (500, 50, 25000), and (500, 100, 50000).
My observations:
The hypervolume increases with the number of function evaluations. However, for a given population size, the hypervolume decreases as the number of generations increases, when I would expect it to increase. Why am I getting such a result?
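One thing worth checking is how the hypervolume is computed: it is only comparable across runs if the reference point is fixed once for all runs. If the reference point is recomputed from each run's own worst values, the indicator can move non-monotonically even when the front improves. A minimal two-objective (minimisation) hypervolume sketch with a fixed reference point:

```python
def hypervolume_2d(points, ref):
    """Hypervolume (area) dominated by a 2-objective minimisation front
    relative to a FIXED reference point ref = (r1, r2). Assumes every
    point weakly dominates ref."""
    # keep only the non-dominated points, sorted by the first objective
    pts = sorted(set(points))
    front, best_f2 = [], float("inf")
    for f1, f2 in pts:
        if f2 < best_f2:            # not dominated by an earlier point
            front.append((f1, f2))
            best_f2 = f2
    # sweep left to right, adding one rectangular slab per front point
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in front:
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

front = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)]
print(hypervolume_2d(front, ref=(4.0, 4.0)))  # → 6.0
```

The asker's problem has three objectives, where the same principle applies but the computation is more involved (most MOEA libraries, e.g. pymoo, provide it); the two-objective version above is only to make the fixed-reference-point issue concrete.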
Greetings to you all.
How can I find MATLAB code for the Accelerated Particle Swarm Optimization algorithm for tuning a PID controller?
Question
For an Integer Linear Programming problem (ILP), an irreducible infeasible set (IIS) is an infeasible subset of constraints, variable bounds, and integer restrictions that becomes feasible if any single constraint, variable bound, or integer restriction is removed. It is possible to have more than one IIS in an infeasible ILP.
Is it possible to identify all possible Irreducible Infeasible Sets (IIS) for an infeasible Integer Linear Programming problem (ILP)?
Ideally, I aim to find the MIN IIS COVER, which is the smallest cardinality subset of constraints to remove such that at least one constraint is removed from every IIS.
Thanks for your time and consideration.
Regards
Ramy
Question
Can anyone provide a PSO optimization MATLAB code for optimal sizing of a solar-wind hybrid system coupled with a heat pump and battery storage?
Question
In the maintenance-optimization context, researchers use a structure that leads to the renewal reward theorem, and they use this theorem to minimize the long-run cost rate in the maintenance optimization problem.
However, in the real world, structures that satisfy the renewal reward theorem may not arise. I am looking for ways to deal with these problems.
Bests
Hasan
If you want to, yes. Why not?
Question
I'm trying to calculate the effectiveness of my implementation of the D2Q9 solver using ArrayFire, which uses JIT compilation and other optimization techniques behind the scenes to output near-optimal CUDA/OpenCL code.
On lid-driven cavity test with 3000x3000 domain, I'm getting 3500 MLUPs. For the MLUPs calculation I'm using this formula:
float mlups = (total_nodes * iter * 1e-6) / total_time;  // 1e-6 converts node updates to mega-updates (note: 10e-6 would be 1e-5)
This is how the first 1000 iterations look like:
100 iterations completed, 2s elapsed (4645.152 MLUPS).
200 iterations completed, 5s elapsed (3716.1216 MLUPS).
300 iterations completed, 8s elapsed (3483.864 MLUPS).
400 iterations completed, 10s elapsed (3716.1216 MLUPS).
500 iterations completed, 13s elapsed (3573.1936 MLUPS).
600 iterations completed, 16s elapsed (3483.864 MLUPS).
700 iterations completed, 19s elapsed (3422.7437 MLUPS).
800 iterations completed, 21s elapsed (3539.1633 MLUPS).
900 iterations completed, 24s elapsed (3483.864 MLUPS).
1000 iterations completed, 27s elapsed (3440.853 MLUPS).
I want to calculate how far this is from the theoretical maximum and what is the effectiveness of the memory layout.
I usually don't prefer to test performance this way.
Every computing resource has a threshold at which its throughput flattens out.
What I suggest is to increase the number of lattice nodes and measure the MLUPS.
This will show you where the performance flattens, and that is the maximum the computing resource can reach.
As for the theoretical maximum, I think it is basically given by the hardware specs; e.g., an A100 delivers around 13-14 TFLOPS.
In my view, when you talk about "memory effectiveness", you should analyse memory bandwidth rather than FLOPS.
If it is CUDA, then I think you can use Nsight Compute to inspect the achieved memory bandwidth (the "memory %" metric).
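Since D2Q9 is usually memory-bandwidth-bound rather than FLOP-bound, a quick roofline-style upper bound on MLUPS can be derived from the card's peak bandwidth. The sketch below assumes single precision and a simple scheme that streams the 9 distribution values in and out once per node update (72 bytes per node per iteration, ignoring macroscopic-field traffic); the 900 GB/s figure is only an illustrative number, so substitute your own device's specs.

```python
def peak_mlups(bandwidth_gb_s, q=9, bytes_per_value=4, reads_writes=2):
    """Roofline-style MLUPS bound for a bandwidth-limited LBM kernel:
    each node update streams q distribution values in and q out."""
    bytes_per_update = q * bytes_per_value * reads_writes  # 72 B for D2Q9/f32
    updates_per_second = bandwidth_gb_s * 1e9 / bytes_per_update
    return updates_per_second / 1e6                        # in mega-updates/s

# e.g. a GPU with ~900 GB/s peak memory bandwidth (hypothetical figure)
print(round(peak_mlups(900.0)))  # → 12500
```

Comparing measured MLUPS against this bound gives the effective fraction of peak bandwidth the memory layout achieves, which is exactly what Nsight Compute's memory metrics report directly.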
Question
Hi
I intend to define a frequency domain objective function to design an optimal controller for Load-Frequency Control (LFC) of a power system. My purpose is to optimize this objective function (finding the proper location of zeros and poles) using meta-heuristic methods.
I will be happy if you share your valuable relevant and informative experiences, references and articles in this field including how to define and how to code.
Thanks
Question
AI and machine learning are increasing in importance in every area. How significant are they for transistor design?
Optimization in device design means determining the most appropriate set of physical and technological parameters that results in the specified performance parameters.
It could also be formulated as finding the ranges of physical and technological parameters that result in the specified performance parameters.
I think such problems can be solved with machine learning as a solution-selection problem:
one uses the device simulator to generate all the performances over the possible sets of input parameters, and then one can use neural networks for the best selection.
What I introduced here is just a thought.
Best wishes
Question
I want to compare metaheuristics on the optimization of Lennard-Jones clusters. Many papers optimize Lennard-Jones clusters, but unfortunately none of them provide the upper and lower bounds of the search space. For a fair comparison, all metaheuristics should search within the same bounds. I found the global minima here: http://doye.chem.ox.ac.uk/jon/structures/LJ/tables.150.html but the search space is not defined.
Can anyone please tell me what are the recommended upper and lower bounds of the search space?
Miha Ravber : for me, [-2, 2] was enough because I fixed the first atom at (0, 0, 0), the second at (x >= 0, 0, 0), etc. If you don't fix atoms like this, you get free coordinates between your bounds.
You can definitely start with [-10, 10], see what the results are, and then adjust.
Question
I have a bilinear term xy, with bounded continuous variables, in an optimization problem, and I need to linearize it.
Previously, I used this following approach:
z=xy=(1/4)*((x+y)^2-(x-y)^2)
X= (1/2)*(x+y) , Y= (1/2)*(x-y)
z=X^2-Y^2
Then I used piecewise-linear (PWL) functions to linearize X^2 and Y^2. In this case, the MILP tries to maximize Y^2 because of its minus sign, which does not give a correct answer for me.
Next, I tried the iterative McCormick envelope as explained in:
When using this approach, the value of z does not equal the product of x and y. From the results obtained, I realized that this approach is not suitable either.
So, could you suggest an approach for linearizing the term xy?
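For reference, the standard McCormick envelope for z = xy over box bounds x in [x^L, x^U] and y in [y^L, y^U] consists of four linear inequalities:

```latex
\begin{aligned}
z &\ge x^L y + x y^L - x^L y^L \\
z &\ge x^U y + x y^U - x^U y^U \\
z &\le x^U y + x y^L - x^U y^L \\
z &\le x^L y + x y^U - x^L y^U
\end{aligned}
```

Note that this is a relaxation, not an exact reformulation: it is tight only when x or y sits at a bound, so observing z differing from xy in the relaxed solution is expected behaviour rather than a bug. Iterative bound tightening (as in the iterative scheme the question mentions) shrinks the gap, and an exact alternative is to discretize one variable and use the PWL/SOS2 machinery on it.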
Xiaoyu Jin : Let y1 be a new binary variable, taking the value 1 if x1 is positive. Let y2 be defined similarly for x2. Add the constraints x1 <= u1 y1 and x2 <= u2 y2, where u1 and u2 are upper bounds on the values that x1 and x2 can take in a feasible solution. Then add the constraint y1 + y2 <= 1.
If you want to see more formulation tricks like this, I recommend the book "Model Building in Mathematical Programming" by H.P. Williams.
Question
The Plackett-Burman design mainly aims to screen the important factors that could have significant effects on the response. However, as we performed the experiments, we noticed that the significance of the factors, and especially the order of their effects, changed according to how we set the highest and lowest levels of each factor. As a result, we faced difficulty in deciding which factors to screen out. Can anyone suggest reasonable ways to set the levels of each factor in a Plackett-Burman design?
Dear Yun,
the level of each factor is, in most cases, fixed based on the literature. Whatever the significance and the order of the significant factors, once you set the levels, the results obtained (for example, from a Student's t-test based on the ratio of your model's coefficients to their standard errors) describe the importance of your factors; the results become more reliable as the confidence level increases.
Question
Suppose we compare two metaheuristics X and Y: on a given real problem, X returns a better solution than Y, while on global optimization benchmark problems, Y returns a better solution than X. Does this make sense? What is the reason?
The No Free Lunch Theorem.
Question
I am using optimization techniques in my research work on demand response. Stochastic and robust optimization are used to deal with uncertain parameters and variables. I want to learn these techniques and subsequently implement them in my research. What are good books and optimization software to start with?
See Chapter 12 in:
Question
There are many works that search for subgraphs of a given graph. What is an efficient method to check whether a given subgraph is connected?
The most common technique is to use DFS or BFS.
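Either traversal runs in O(|V| + |E|) time: start from any vertex of the subgraph and check that the traversal reaches every vertex. A minimal BFS version in Python (edges are assumed to join only vertices of the subgraph):

```python
from collections import deque

def is_connected(nodes, edges):
    """BFS from an arbitrary node; the (sub)graph is connected iff
    every node is reached. nodes: iterable of vertices; edges: pairs."""
    nodes = set(nodes)
    if not nodes:
        return True                 # the empty graph is trivially connected
    adj = {v: [] for v in nodes}
    for u, v in edges:              # build an undirected adjacency list
        adj[u].append(v)
        adj[v].append(u)
    start = next(iter(nodes))
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return seen == nodes

print(is_connected([1, 2, 3], [(1, 2), (2, 3)]))  # True
print(is_connected([1, 2, 3], [(1, 2)]))          # False
```

When many subgraphs of the same graph must be tested, a union-find (disjoint-set) structure over each subgraph's edges is a common alternative with near-linear total cost.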
Question
If you build a parameter learning algorithm based on the Lyapunov stability theorem for updating the parameters of an adaptive fuzzy controller, how to determine the cost function and Lyapunov function? Is there a physical connection between them?
A Lyapunov function is a point-wise measure of energy, whereas a cost functional is an interval-based measure of energy. In this sense, you may connect them by taking the Lyapunov function as an explicit function of time with a negative decay rate: assuming V = x^2, impose the closed-form dissipation V_dot = -K*V(t) to stabilize the system; see my pre-print at the URL:
Meanwhile, the optimal quadratic cost functional minimizes energy as the integral of the squared state and control signal over a time interval, J = int(x^2 + u^2) dt from zero to T. Please see the paper:
Nonlinear Optimal Control: A Control Lyapunov Function and Receding Horizon Perspective (1999)
The control performance could be different through the two scopes, and the energy consumption could be also different.
Question
I am working on a project to assist an experimental team in optimizing reaction conditions. The problem involves a large number of dimensions, i.e., 30+ reactants for which we are trying different concentrations to achieve the highest yield of a certain product.
I am familiar with stochastic optimization methods such as simulated annealing, genetic algorithms, which seemed like a good approach to this problem. The experimental team proposes using design of experiments (DoE), which I'm not too familiar with.
So my question is, what are the advantages/disadvantages of DoE (namely fractional factorial I believe) versus stochastic optimization methods, and are there use cases where one is preferred over the other?
When there are 30+ reactants, I would first build a network of the relations between the reactants, with input from the experimenters: you really have to understand part of the chemical reactions. Modeling without understanding the basics of what you are trying to model is never a good idea. And, given my knowledge of chemistry, I fail to see the use of stochastic optimization in this context. Maybe systems and control theory could give you insights as well; maybe you can view the whole as a system with inputs and outputs.
Question
How to use optimization techniques like Genetic Algorithm and Particle Swarm Optimization in reliability analysis? Please give an idea about it
Your research approach is problematic. Before you ask a research question or ponder the answer to a problem, you are starting with a method and trying to fit the method to a field, not even to a specific problem. You should first ask a research question, formulate the problem, build the model and then find a suitable optimization method to solve it.
Question
I am trying to minimize the following objective function:
(A-f(B)).^2 + d*TV(B)
A: known 2D image
B: unknown 2D image
f: a non-injective function (e.g. sine function)
d: constant
TV: total variation: sum of the image gradient magnitudes
I am not an expert on these sorts of problems and looking for some hints to start from. Thank you.
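As a starting point, one generic approach is to smooth the TV term slightly so the objective is differentiable and hand it to a quasi-Newton solver. Because f is non-injective the problem is non-convex, so the result depends on the initial guess, and multi-start or a good initialisation is advisable. A sketch under those assumptions, with illustrative sizes and f = sin:

```python
# Smoothed-TV formulation of  min_B ||A - f(B)||^2 + d * TV(B).
import numpy as np
from scipy.optimize import minimize

def tv_smooth(B, eps=1e-6):
    """Smoothed total variation so the objective stays differentiable."""
    gx = np.diff(B, axis=0)
    gy = np.diff(B, axis=1)
    return np.sum(np.sqrt(gx[:, :-1]**2 + gy[:-1, :]**2 + eps))

def objective(b_flat, A, d):
    B = b_flat.reshape(A.shape)
    return np.sum((A - np.sin(B))**2) + d * tv_smooth(B)

rng = np.random.default_rng(0)
B_true = rng.uniform(0, 1, size=(8, 8))
A = np.sin(B_true)                       # synthetic data with f = sin
res = minimize(objective, x0=np.zeros(A.size), args=(A, 0.01),
               method="L-BFGS-B")
print("final objective:", res.fun)
```

For serious use, dedicated TV solvers (primal-dual / proximal methods such as Chambolle-Pock or ADMM) handle the non-smooth TV term without the smoothing trick, but the sketch above is often enough to explore the problem.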
Om Prakash Yadav Thank you, will take a look.
Question
I'm running a geometry optimization with DMol3 (Materials Studio) on 2D MXene (V2C). I doped the V2C structure with oxygen atoms and optimized it to the V2CO2 structure. Then I created an oxygen vacancy, but upon optimizing this new Ov/V2CO2 structure the jobs fail with an error saying that the SCF is not converging.
Simulation Details:
Smearing: 0.005 Hartree with 1*e-6 SCF tolerance.
Grimme Method for DFT-D correction was employed.
BFGS algorithm is used with the 4.4 DNP basis set. (attachments available)
Error Details:
Message: SCF not converging. Choose larger smearing value in DMol3 SCF panel
or modify/delete "Occupation Thermal" in the input file.
You may also need to change spin or use symmetry.
Note: Tried changing the values but the error keeps occurring
Files attached: Ov-V2CO2.input (input file), Ov-V2CO2.outmol(output file)
Most probably your system is not converged or relaxed under the supplied conditions.
One thing you can do is take the output structure from the failed job and run the optimization again.
I faced this issue earlier with a different system, and that is how it was solved.
Question
Kindly help me by clarifying my query: how does the prediction interval work when comparing the performance of a set of meta-heuristic algorithms?
It is certain that the prediction interval influences the quality of the results given by the metaheuristics; however, it depends on the optimization procedures used, such as the Monte Carlo method or descent methods such as gradient descent...
Best regards
Question
For example, a model has 5 nodes, where A is the dependent node and B, C, D, and E are the independent nodes (as shown in Figure 1). Suppose I know the probabilities of all independent nodes when A is 20%, 25%, 30%, ... up to 50% (as shown in Figure 2).
Can I find the probabilities of all independent nodes when A is 70%, 80%, or more?
If it is possible, which algorithm or procedure should I follow to reach the targeted value of node A?
Thank you.
It seems Bayesian network fits for your subject.
Question
I have to solve an optimization problem. I already found the PuLP library for Python, but I want to solve the problem with a metaheuristic algorithm. My problem involves discrete decision variables, constraints, and a minimization objective function. I can't decide which algorithm best fits my problem. I also need similar code for it.
You can check the CCSA algorithm implemented by a Conscious Neighborhood-based approach which is an effective mechanism to improve other metaheuristic algorithms as well. The CCSA and its full source code are available here:
Question
I need a heuristic for an assignment problem, to be used for allocating tasks to 2 or more vehicles operating on the same network. The heuristic should be easy to implement, so not, for example, a GA.
NOTE: a task allocation can be, for example, vehicle 1 picks up goods at node A and delivers to B while vehicle 2 picks up at C and delivers to D.
I can't understand the problem, either. At first, it sounded like a vehicle routing problem. But then, when you mention shelves, goods placed on them and a corresponding coordinate system, it sounds like optimizing warehouse operations. Are you trying to schedule the movements of forklifts in a warehouse? Optimizing an automated material handling system? You need to provide more information so that the problem is understood by everyone here.
Question
What should the optimum number of maxcycles be for a transition-state optimization in Gaussian? I know the default is 20. If we can't get proper convergence or a stationary point, should we increase the value, and by how much? Will a higher number of maxcycles, say 400, increase the time and cost of the calculation for a typical TS with 20 atoms?
Although the default number of cycles is 20, there is no single optimum number of cycles to reach convergence. When the structure is very far from the optimum, it may take many cycles to reach the minimum. Hence, it is advisable to keep an eye on your submitted job from time to time and check how it converges by inspecting the convergence parameters in your log file.
You can use the following command to check the progress of your convergence:
grep Done name.log
It will show the SCF energies. Moreover, you can check the convergence parameters in your log file, where you will find a table of maximum force, RMS force, and displacement, together with a convergence indicator.
Question
My study involves establishing chronic nicotine addiction in mice, and I would like to check the cotinine level in mouse urine using HPLC, but the protocols I have found so far use human urine. Bioassays and GC are expensive, and I would like to use urine as the sample. How do I modify the HPLC protocol for mouse urine?
I have no practical experience, however, I hope the provided link will be very much helpful for you. Please have a look on the following link:
Question
Why do we need optimization techniques in feature-subset selection? Is it necessary to use an optimizer to perform feature selection when there are a large number of features?
Meta-heuristics are not guaranteed to find optima.
Question
I am new to multi-objective optimization techniques. I have seen many papers using assessment metrics to find the best solution among the obtained non-dominated solutions in the Pareto front. I have no idea about these metrics or how to calculate them. Also, how do these metrics ensure that the obtained solution is better?
The following links may be useful to you.
Question
I have some equations attached as a file to the question.
I want to find the variables S, delta, V, Vc, Te, and beta_c that give the smallest overall error in the equations, using a genetic algorithm.
The problem is that I don't know what to do next.
Does it mean that I must find the minimum or maximum of these equations? How should the fitness function be defined? As a first attempt, I moved all the terms to one side of each equation, leaving zero on the other side, then added the three equations together and defined the sum as a single fitness function,
but I'm not sure this is correct.
Hi Neda,
It would look something like this if you use the l2 norm:
min (T_inf + 8*a*S3/(3*pi*k) - Tc)^2 + ... (write here the rest)
Be aware that you can get stuck in a local minimum (a point that is the best in its neighborhood but still doesn't satisfy your equations).
Best,
Charlie
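Charlie's suggestion (sum of squared residuals as the fitness) can be sketched as follows. The two equations here are simple placeholders for the ones attached to the question, and differential evolution stands in for any GA-style population optimiser:

```python
# Sum-of-squared-residuals fitness for solving a system of equations
# with a population-based optimiser.
from scipy.optimize import differential_evolution

def residuals(v):
    x, y = v
    return [x**2 + y - 3.0,       # placeholder equation 1, rearranged to ... = 0
            x + y**2 - 5.0]       # placeholder equation 2, rearranged to ... = 0

def fitness(v):
    # zero exactly when every equation is satisfied simultaneously
    return sum(r * r for r in residuals(v))

res = differential_evolution(fitness, bounds=[(-5, 5), (-5, 5)],
                             seed=1, tol=1e-10)
print(res.x, res.fun)
```

Squaring each residual before summing is important: simply adding the raw residuals (as in the first attempt described above) lets positive and negative errors cancel, so the sum can be zero even when no individual equation is satisfied.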
Question
I am involved in a small project with a house load and renewables: PV, an energy storage system, and an electric vehicle connected to the house. The challenge is to optimize the load and find the right time to charge the EV using MATLAB simulation. What are the best optimization algorithms for such problems involving load scheduling? Any reference papers suggesting the same would be appreciated.
Thanks !
Generally, a well-developed meta-heuristic algorithm can be useful. You can start with classical numerical optimisation methods,
and then test the performance of the methods below:
Question
I am using MATLAB's 'fmincon' to solve a nonlinear constrained optimisation problem, but it is very slow. What are the possible ways to speed up the solution?
What is the best alternative to 'fmincon' for faster optimisation that I can still use with MATLAB?
The MATLAB documentation page for fmincon mentions a parameter that can speed it up:
UseParallel
"When true, fmincon estimates gradients in parallel. Disable by setting to the default, false. trust-region-reflective requires a gradient in the objective, so UseParallel does not apply. See Parallel Computing. "
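A related, and usually larger, win than parallelising finite differences is supplying an analytic gradient (fmincon's SpecifyObjectiveGradient option), which removes the finite-difference function evaluations entirely. The Python/scipy sketch below illustrates the same idea on the Rosenbrock function; the principle carries over directly to fmincon:

```python
# Supplying an analytic gradient (jac=...) instead of letting the solver
# estimate it by finite differences.
import numpy as np
from scipy.optimize import minimize

def rosen(x):
    return np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (1.0 - x[:-1])**2)

def rosen_grad(x):
    g = np.zeros_like(x)
    g[:-1] = -400.0 * x[:-1] * (x[1:] - x[:-1]**2) - 2.0 * (1.0 - x[:-1])
    g[1:] += 200.0 * (x[1:] - x[:-1]**2)
    return g

x0 = np.full(10, 1.2)          # start inside the global basin
res = minimize(rosen, x0, jac=rosen_grad, method="L-BFGS-B")
print(res.fun)                 # near zero at the optimum x = (1, ..., 1)
```

In MATLAB the equivalent is returning [f, g] from the objective and setting optimoptions('fmincon', 'SpecifyObjectiveGradient', true); checking the gradient with CheckGradients once is a cheap safeguard against sign or indexing errors.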
Question
What is the best optimization technique for my deep learning model (a convolutional neural network) that uses a facial dataset of 2000 images, to solve facial occlusion challenges caused by nose masks due to COVID-19, lighting variations and ageing? The CNN will be implemented in Python.
I have applied some optimisation techniques for deep-learning hyper-parameter tuning. It might be useful to see:
Question
I am working on a design optimisation problem. I would like to ask: for a problem involving uncertainty, should design-optimisation-under-uncertainty techniques be used?
Question
In daily life, single-period and multi-period inventory systems are very important. When the selling period is fixed, that is, we cannot sell items outside that fixed time, it may be called a single-period system. Let's discuss the actual definition.
Single-period inventory models are used typically for determining the optimal order quantity for a perishable product. The most famous example is the Newsboy (Newsvendor) problem, in which the demand for newspapers on a given day is random, and there is a cost of overstocking (unsold papers) and understocking (lost profits). The objective is to minimize the expected daily cost, and the decision variable is the number of papers to buy (for reselling). This model is useful for any other item that cannot be stocked to be sold in another period: you have to sell it during the period or it is wasted. Many products fit this category: Christmas trees, Halloween costumes, or any other special-day or event-themed item. Fresh produce and meat can be thought of in this category as well.
On the other hand, multi-period models are used for items that can be stocked for long periods of time, and demand in subsequent periods can be satisfied from the inventory. The basic EOQ model is the simplest of these models.
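The newsvendor logic above can be sketched with a discrete demand distribution: order the smallest quantity whose demand CDF reaches the critical ratio cu / (cu + co), where cu is the understocking (lost profit) cost and co the overstocking cost per unit. The pmf and costs below are illustrative.

```python
# Illustrative demand distribution for one selling period.
demand_pmf = {10: 0.1, 20: 0.2, 30: 0.4, 40: 0.2, 50: 0.1}

def newsvendor_quantity(pmf, cu, co):
    # Critical ratio: fraction of demand we want to cover.
    ratio = cu / (cu + co)
    cdf = 0.0
    for q in sorted(pmf):
        cdf += pmf[q]
        if cdf >= ratio:
            return q          # smallest q with CDF(q) >= critical ratio
    return max(pmf)

# cu = 0.5 lost profit per missed sale, co = 0.25 loss per leftover unit
q = newsvendor_quantity(demand_pmf, cu=0.5, co=0.25)
```

Here the critical ratio is 0.5 / 0.75 ≈ 0.667, and the CDF first reaches it at q = 30.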
Question
The problem is this: there is a model of a robot, created in Simscape, whose knee is made of a pin-slot joint that allows one translational and one rotational degree of freedom. A stiffness and a damping factor are imposed on the translational motion, which also influences the rotational torque. My aim is to write an optimization or control algorithm that provides a stiffness (in the linear direction) that reduces the rotational torque. The rotational reference motion of the knee is provided in advance (as input), and the corresponding torque is computed inside the joint by inverse dynamics. But to create such an algorithm, I have no detailed information about the block dynamics, because the block is provided by Simscape and its source code and other internals are hidden. Having only the signals of input stiffness, input motion and output torque, I need to optimize the torque. (I tried to derive the equations using my knowledge of mechanics, but many details are needed, such as the mass of the joint actuator, its radius, the length of the spring, etc., and I do not have this information.) Any suggestions would be truly appreciated.
very interesting
Question
In design of experiments using Response Surface Methodology, suppose I conducted experiments with variables at different levels, for example 3 factors at 3 levels (15 runs), under open atmospheric conditions. How can I validate the model, given that the 15 runs were performed under different atmospheric conditions? Are there any possible solutions?
Thanking you.
Question
Hello everyone, I would like your help and perhaps expertise. I am from Brazil and we bought a single-axis solar tracker which follows the sun E-W to improve energy harvesting during morning and afternoon hours. The thing is, after the selling company installed the tracker for us, we did not like the fact that the tilt angle is zero, i.e. not optimized for our latitude.
Is it ok if we dismount the tracker and install it again elevating one of the pillars in order to get the optimized tilt for our location?
Can this affect motor's life span?
May I propose a simple solution.
You can leave the structure as it is and fix a new tilted, lightweight frame on top of it to rest the panels on.
This modification is the easiest to make and gives the tracker an additional tilted plane to support the panels.
Best wishes
Question
Most optimization methods use upper- and lower-bound constraints to handle this issue. At the same time, each variable can significantly affect the direction the optimization method takes towards its optimal value. Thus, resetting out-of-range variables to the lower and/or upper bounds delays finding the optimal values in each iteration.
Dear Drs. Cenk and Ghosh,
Thank you so much for your responses. In fact, I have developed a new strategy which can handle this issue practically and return optimal solutions within the feasible ranges rather than at the upper and/or lower bounds. The results showed excellent performance of the proposed algorithm. The MATLAB code will be released in the near future.
Best regards,
Hussein
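For concreteness, here is a hedged sketch of two common boundary-handling rules for out-of-range variables: plain clipping to the bound (the behaviour the question describes as slowing the search, since values pile up on the bound itself) and reflection, which mirrors the overshoot back into the feasible range.

```python
def clip(x, lo, hi):
    # Plain truncation: an out-of-range value saturates at the bound.
    return max(lo, min(hi, x))

def reflect(x, lo, hi):
    # Mirror the overshoot back inside the box instead of saturating.
    while x < lo or x > hi:
        if x < lo:
            x = lo + (lo - x)
        else:
            x = hi - (x - hi)
    return x

examples = [(12.0, 0.0, 10.0), (-3.0, 0.0, 10.0)]
clipped  = [clip(x, lo, hi) for x, lo, hi in examples]      # [10.0, 0.0]
mirrored = [reflect(x, lo, hi) for x, lo, hi in examples]   # [8.0, 3.0]
```

Reflection (and random re-initialisation inside the box) keeps diversity near the bounds, which is one practical way to avoid the delay the question describes.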
Question
I am using stochastic dual dynamic programming (SDDP) for decision-making under uncertainty. Can stochastic dynamic programming be used to solve a min-max problem, for example one where the max function appears in the objective? Is there any good library for stochastic dual dynamic programming?
Best wishes,
Hussein
Question
I am using an ANN for a product reliability assurance application, i.e. picking some samples within the production process and then estimating the overall quality of the production line output. What kind of optimization algorithm do you think works best for training the ANN in such a problem?
Optimization algorithm in neural network
The process of minimizing (or maximizing) any mathematical expression is called optimization. Optimizers are algorithms or methods used to change the attributes of the neural network, such as weights and learning rate, in order to reduce the losses; they solve the training problem by minimizing the loss function.
Regards,
Shafagat
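As a minimal illustration of what such an optimizer does, here is gradient descent with momentum driving a single "weight" to the minimum of a toy quadratic loss; the learning rate and momentum values are illustrative, not recommendations for any particular network.

```python
# The "loss" is just (w - 3)^2, so the optimal weight is w = 3.
def grad(w):
    # Gradient of the loss with respect to the weight.
    return 2.0 * (w - 3.0)

def sgd_momentum(w0, lr=0.1, beta=0.9, steps=200):
    w, velocity = w0, 0.0
    for _ in range(steps):
        velocity = beta * velocity + grad(w)  # accumulate a velocity term
        w -= lr * velocity                    # parameter update
    return w

w_final = sgd_momentum(0.0)
```

Real optimizers such as Adam add per-parameter adaptive scaling on top of this same update loop.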
Question
Is there a Python project where a commercial FEA (finite element analysis) package is used to generate input data for a freely available optimizer, such as scipy.optimize, pymoo, pyopt, pyoptsparse?
You can find one implementation of Python/ABAQUS optimization in the following paper:
However, this is not black-box optimization, since analytical derivatives are used. You can implement black-box optimization by writing Python code that trains an artificial neural network (ANN) surrogate to predict the derivatives (the ANN is implemented in Python). You can also predict the objective itself with the ANN. Then you can perform the mathematical optimization quite easily (for example, an implementation of MMA is available in Python). You can find such a project in the following link (a Ph.D. thesis, though the codes are not available):
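A rough sketch of the black-box setup, with a quadratic stub standing in for the commercial FEA call (in practice a function like the hypothetical run_fea below would launch a solver job, e.g. via subprocess, and parse the output file). A derivative-free method is used since the solver provides no gradients.

```python
from scipy.optimize import minimize

def run_fea(design):
    # Placeholder for a call into a commercial FEA package.  A simple
    # quadratic stands in for the solver's compliance/objective value.
    return (design[0] - 2.0) ** 2 + (design[1] + 1.0) ** 2

# Derivative-free simplex search, suitable when the FEA is a black box.
result = minimize(run_fea, x0=[0.0, 0.0], method="Nelder-Mead")
```

Because each FEA call is expensive in practice, the surrogate approach described above (fit an ANN to cached run_fea evaluations, then optimize the surrogate) is the usual way to cut the number of solver invocations.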
Question
Which critical global, constrained, complex, combinatorial and engineering optimization problems have been solved, and which remain unsolved?
Metaheuristic algorithms can be classified in many ways.
Many metaheuristic algorithms have been proposed to solve different optimization problems, yet the solved problems remain solved and the unsolved ones remain unsolved.
Please list the unsolved problems, or the critical solved problems that still need better solutions, to help researchers tackle the unsolved ones.
Thanks for your contributions.
Every year mathematical programmers devise optimal solutions for ever larger and more complex problems, while metaheuristics enthusiasts still haven't caught up with the times. I find it puzzling, as if it were a virtue not to find an optimal solution. For several of us it is quite puzzling, actually. I would be interested to hear why optimal solutions are not considered useful, or interesting.
Question
In this link, shifted functions are defined as r*(x - o)/100 (where r is the original range) to keep the range within 100. But when optimizing the functions, should I generate the values of x in [-100, 100] or in [o-100, o+100]? If the function is shifted by the vector o, then the respective ranges should also shift, because for o < -100 or o > 100 the global optimum would not fall within the range. And even if I generate the o values within [-100, 100], the function would merely be shifted within the range rather than lying in the region where it is well defined.
Yes, you must shift everything: the whole problem has to be changed, including the bounds and the constraints. Finally, you must map the solution back to the original problem, with its original constraints and bounds; to interpret the results, you need to transform back.
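A small sketch of this point, assuming a shifted sphere benchmark: the shift vector o moves the optimum, so the sampling box must move with it. The shift values are illustrative.

```python
import random

def shifted_sphere(x, o):
    # Shifted benchmark: the optimum moves from the origin to the vector o.
    return sum((xi - oi) ** 2 for xi, oi in zip(x, o))

o = [37.0, -64.0]                  # illustrative shift vector
lo = [oi - 100.0 for oi in o]      # the search box follows the shift:
hi = [oi + 100.0 for oi in o]      # sample x in [o - 100, o + 100]

rng = random.Random(1)
x = [rng.uniform(l, h) for l, h in zip(lo, hi)]
```

Sampling in the fixed box [-100, 100] instead would exclude the optimum whenever any component of o lies outside that interval.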
Question
Dear all,
I am currently writing my master's thesis, in which I am analyzing my data using path analysis and SEM in R (the lavaan package). Now that I would like to write up my results, I am struggling with the R output. Since the normality assumption is violated, I decided to use the Yuan-Bentler correction.
Consequently, R gives me an output containing a column of robust values (which I thought were the corrected ones) as well as extra robust fit indices.
Did I make a mistake? Otherwise, I would appreciate a hint on which values to use (or rather, what the difference between them is)!
Thanking you in advance and best regards,
Alina
Hello A. Berger,
Yes, report the values given under the "Robust" column heading. You'll note that the Y-B "correction" factor, if multiplied by the robust estimate, yields the "standard" estimate (e.g., 1.201 x 11.154 = 13.400).
Question
I am working on solving an optimization problem with two objectives by using neuroevolution. I use NEAT to evolve solutions which need to satisfy objective A and objective B. I tried different configurations, changed mutation values etc., however I always run into the same problem.
The algorithm reaches 100% fitness for objective A quite effortlessly; however, the fitness of objective B mostly gets stuck at the same value (ca. 85%). Through heavy mutation I sometimes manage to get objective B's fitness above 90%, but then the fitness for objective A decreases significantly. I would not mind worsening A in favor of B here. However, I only reach a higher fitness on objective B in very rare cases. Most or all individuals converge to a fitness of (100%, 85%).
I extended my NEAT implementation to support Pareto fronts and crowding-distance sorting (NSGA-II). After some iterations this leads to an average population fitness of (100%, 85%), meaning every candidate approaches the same spot.
My desired fitness landscape would be much more diverse, especially I would like the algorithm to evolve solutions with fitnesses like (90%, 90%), (80%, 95%) etc.
My main problem seems to be that every individual arrives at the same fitness tuple sooner or later and I can only prevent that through lots of mutation (randomness). Even then only a few candidates break the 85% barrier of objective B.
I am wondering if anyone has had a similar scenario yet and/or can think of some extension of the evolving procedure to prevent stagnation in this particular point.
Thank you in advance, I am looking forward to any suggestions.
Hello everyone,
thank you all very much for your suggestions, I will look into the recommended publications/methods.
Regarding scaling objectives: I tried that even before considering Pareto fronts and NSGA-II. I think it helped to mitigate the stagnation effect a bit, but not enough.
However, after reading more and thinking about my problem, I believe that while my fitness function is appropriate, it can sometimes be deceptive.
The problem/solution might be discontinuous, such that e.g. a fitness tuple of (100%, 85%) might be further away from the (100%, 100%) solution than a candidate with a fitness of (50%, 30%). This could be because I am dealing with a highly quantized search space.
I am now looking into techniques like Novelty Search and similar approaches, since these might be more applicable to my discrete problem, which uses boolean neurons realizing simple gate functions and non-floating-point weights.
If anyone has suggestions in that direction, I would be happy to hear from you. Thanks again for the helpful input.
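Since the discussion involves NSGA-II, a short sketch of the crowding-distance computation may be useful: boundary solutions of each front receive infinite distance, which is exactly the mechanism meant to keep the population spread across fitness tuples rather than collapsing onto one point.

```python
def crowding_distance(front):
    # front: list of objective tuples on one Pareto front.
    n = len(front)
    if n <= 2:
        return [float("inf")] * n
    dist = [0.0] * n
    n_obj = len(front[0])
    for m in range(n_obj):
        order = sorted(range(n), key=lambda i: front[i][m])
        dist[order[0]] = dist[order[-1]] = float("inf")  # boundary points
        span = front[order[-1]][m] - front[order[0]][m]
        if span == 0:
            continue
        for k in range(1, n - 1):
            # Normalised gap between each point's two neighbours.
            dist[order[k]] += (front[order[k + 1]][m] -
                               front[order[k - 1]][m]) / span
    return dist

d = crowding_distance([(1.0, 0.9), (0.9, 0.95), (0.8, 1.0)])
```

If every individual carries the same objective tuple, all spans are zero and crowding distance cannot differentiate them, which is consistent with the stagnation described above; novelty-style criteria operate in behaviour space instead and sidestep that degeneracy.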
Question
I'm currently working on my undergraduate thesis where I develop a genetic algorithm that finds suboptimal 2D positions for a set of buildings. The solution representation is a vector of real numbers where every three elements represents the position and angle of one building. In that every three elements, the first element represents the x position, the second represents the y position, and the third represents the angle. A typical solution representation would look like:
[ building 0 x position, building 0 y position, building 0 angle, building 1 x position, ... ]
I have already managed to create a genetic algorithm that produces suboptimal solutions; it uses uniform crossover and discards infeasible solutions. However, it is only fast for small problems (e.g. 4 buildings), and adding more buildings makes it so slow that I think it devolves into brute force, which is definitely not what we want. I tried keeping infeasible solutions in the population with a poorer fitness, but that only resulted in best solutions worse than when I discarded the infeasible ones.
Now, I am looking for a crossover operator that can help me speed up the genetic algorithm and allow it to scale to more buildings. I have already experimented with arithmetic crossover and box crossover, but to no avail. So I am hoping that the community can suggest crossovers I could try. I would also appreciate any other suggestions to improve my genetic algorithm (not just the crossover operator).
Thanks!
Hi,
Since your representation is in R^d, I strongly advise you to look into Evolution Strategies (ES) instead of GAs.
ES are evolutionary algorithms that are naturally suited to evolve real-valued solutions and are state-of-the-art. They operate by updating a distribution and sampling new real-valued solutions from it.
The most famous approach is Hansen's CMA-ES:
A modern approach that can operate very efficiently if linkage structure is known (e.g., what variables should be sampled at the same time) is Bouter's RV-GOMEA:
If your problem has many local minima and you want to explore them, you can look into Maree's Hill valley clustering-based ES:
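To make the distribution-sampling idea concrete, here is a minimal (1+1)-ES with the classic 1/5th-success-rule style of step-size adaptation, far simpler than CMA-ES but illustrating the same principle of sampling candidates from an adapted Gaussian. All parameters are illustrative.

```python
import random

def sphere(x):
    # Toy objective; replace with the building-layout fitness.
    return sum(xi * xi for xi in x)

def one_plus_one_es(f, x0, sigma=1.0, iters=2000, seed=0):
    # (1+1)-ES: mutate the parent with Gaussian noise, keep the better one,
    # and adapt sigma (grow on success, shrink on failure).
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        y = [xi + rng.gauss(0.0, sigma) for xi in x]
        fy = f(y)
        if fy < fx:
            x, fx = y, fy
            sigma *= 1.1     # success: take bolder steps
        else:
            sigma *= 0.98    # failure: shrink the step size
    return x, fx

best, best_f = one_plus_one_es(sphere, [5.0, -3.0])
```

CMA-ES generalises this by adapting a full covariance matrix rather than a single sigma, which is what makes it state-of-the-art on correlated real-valued problems such as (x, y, angle) triples.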
Question
Hello everybody,
I am looking to study the optimal technique to initialize weights in a large neural network with multiple hidden layers.
A comparative study will be more than appreciated.
In addition to the answers above, this article might also contribute to an understanding on how to improve results by using weight initialization with special regard to ReLU and output layers:
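Following the ReLU remark above, here is a sketch of He (Kaiming) initialisation, the usual choice for ReLU layers: weights are drawn from a zero-mean Gaussian with standard deviation sqrt(2 / fan_in). The layer sizes are illustrative.

```python
import math
import random

def he_normal(fan_in, fan_out, seed=0):
    # He/Kaiming initialisation for ReLU layers:
    # weights ~ N(0, sqrt(2 / fan_in)) keeps activation variance stable.
    rng = random.Random(seed)
    std = math.sqrt(2.0 / fan_in)
    return [[rng.gauss(0.0, std) for _ in range(fan_out)]
            for _ in range(fan_in)]

W = he_normal(fan_in=512, fan_out=256)
```

For tanh/sigmoid layers, Xavier/Glorot initialisation (variance scaled by both fan_in and fan_out) is the standard counterpart.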
Question
I am trying to conduct a CFA analysis using RStudio. However, instead of giving me all the fit indices I am supposed to get, even with the summary function I don't get the GFI | AGFI | NFI | NNFI | CFI | RMSEA. Can anyone please help me with this issue?
Estimator ML
Optimization method NLMINB
Number of free parameters 25
Number of observations 275
Model Test User Model:
Test statistic 228.937
Degrees of freedom 41
P-value (Chi-square) 0.000
By default, lavaan will always fix the factor loading of the first indicator to 1. In order to fix a parameter in a lavaan formula, you need to pre-multiply the corresponding variable in the formula by a numerical value. This is called the pre-multiplication mechanism.
# fit the model
fit <- cfa(HS.model, data=HolzingerSwineford1939)
# display summary output
summary(fit, fit.measures=TRUE)
Question
Multi-objective optimization using the chicken swarm optimization technique, for both maximization and minimization problems.
I wonder if the Chicken Swarm Optimization (CSO) is as effective as the Coronavirus Swarm Optimization (another CSO contender). Almost every year we see a few enthusiastic researchers come up with some meta-heuristic optimization algorithms inspired by the animal behaviors. Technically, the coronavirus is not really a living organism, but it still can harm humans as any other viruses.
Question
Bat-inspired algorithm is a metaheuristic optimization algorithm developed by Xin-She Yang in 2010. This bat algorithm is based on the echolocation behaviour of microbats with varying pulse rates of emission and loudness.
The idealization of the echolocation of microbats can be summarized as follows: Each virtual bat flies randomly with a velocity vi at position (solution) xi with a varying frequency or wavelength and loudness Ai. As it searches and finds its prey, it changes frequency, loudness and pulse emission rate r. Search is intensified by a local random walk. Selection of the best continues until certain stop criteria are met. This essentially uses a frequency-tuning technique to control the dynamic behaviour of a swarm of bats, and the balance between exploration and exploitation can be controlled by tuning algorithm-dependent parameters in bat algorithm. (Wikipedia)
What are the applications of bat algorithm? Any good optimization papers using bat algorithm? Your views are welcome! - Sundar
The bat algorithm (BA) is a bio-inspired algorithm developed by Xin-She Yang in 2010, and it has been found to be very efficient.
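The idealisation described in the question can be sketched in a few lines. This is a simplified reading of Yang's update equations (frequency tuning, a velocity update relative to the current best, and a local random walk gated by the pulse rate), with all parameter values illustrative rather than tuned.

```python
import random

def sphere(x):
    # Toy objective standing in for the problem to be optimised.
    return sum(xi * xi for xi in x)

def bat_algorithm(f, dim=2, n_bats=20, iters=500, freq_min=0.0, freq_max=2.0,
                  loudness=0.5, pulse_rate=0.5, seed=0):
    rng = random.Random(seed)
    X = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n_bats)]
    V = [[0.0] * dim for _ in range(n_bats)]
    best = list(min(X, key=f))
    for _ in range(iters):
        for i in range(n_bats):
            freq = freq_min + (freq_max - freq_min) * rng.random()  # frequency tuning
            V[i] = [vi + (xi - bi) * freq
                    for vi, xi, bi in zip(V[i], X[i], best)]
            cand = [xi + vi for xi, vi in zip(X[i], V[i])]
            if rng.random() > pulse_rate:
                # Local random walk around the current best solution.
                cand = [bi + 0.01 * rng.gauss(0.0, 1.0) for bi in best]
            # Accept an improving candidate with probability = loudness.
            if f(cand) < f(X[i]) and rng.random() < loudness:
                X[i] = cand
            if f(X[i]) < f(best):
                best = list(X[i])
    return best, f(best)

best, best_f = bat_algorithm(sphere)
```

In the full algorithm, loudness decreases and pulse rate increases over time, shifting the swarm from exploration to exploitation; the constants are frozen here for brevity.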
Question
Hello everyone,
We have the following integer programming problem with two integer decision variables, namely x and y:
Min F(f(x), g(y))
subject to the constraints
x <= xb,
y <= yb,
x, y non-negative integers.
Here, the objective function F is a function of f(x) and g(y). Both the functions f and g can be computed in linear time. Moreover, the function F can be calculated in linear time. Here, xb and yb are the upper bounds of the decision variables x and y, respectively.
How do we solve this kind of problem efficiently? We are not looking for any metaheuristic approaches.
I appreciate any help you can provide. Particularly, it would be helpful for us if you can provide any materials related to this type of problem.
Regards,
Soumen Atta
The method for solving this problem depends on the properties of the functions F, f and g (convexity, concavity, and other structure).
If no properties are known, the only general method is enumerating all values of the variables.
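A sketch of that fallback: since f, g and F are all cheap (linear-time) to evaluate, scanning the (xb+1)(yb+1) integer grid is straightforward. The instance at the bottom is illustrative.

```python
# Exhaustive enumeration over the (xb+1) * (yb+1) integer grid.
def solve_by_enumeration(F, f, g, xb, yb):
    best_val, best_xy = None, None
    for x in range(xb + 1):
        fx = f(x)                     # f depends on x only: hoist it
        for y in range(yb + 1):
            val = F(fx, g(y))
            if best_val is None or val < best_val:
                best_val, best_xy = val, (x, y)
    return best_xy, best_val

# Illustrative instance: F(u, v) = u + v, f(x) = (x - 3)^2, g(y) = |y - 5|.
sol, val = solve_by_enumeration(lambda u, v: u + v,
                                lambda x: (x - 3) ** 2,
                                lambda y: abs(y - 5), xb=10, yb=10)
```

If F happens to be separable and monotone in f(x) and g(y), as in this instance, the two loops decouple and x and y can each be found by a one-dimensional scan, which is much faster; without such structure the full grid scan is the safe default.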
Question
Hi everyone, I am optimizing the energy management system of an EV charging station. I established the mathematical model (objective function: minimization of electricity cost) and constraints, then built a Python simulation. In the results I got the charging station load profile (power and energy profiles), but the energy charged into a car is discharged again later at the station, and in the end zero cars are charged. Can anyone help with ideas on how to limit the energy an EV discharges to the amount it initially had, so that not every charged EV is discharged again later? Thank you.
The energy stored in a capacitor equals half its capacitance times the square of its voltage: E = CV²/2.
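Without seeing the actual model, one common fix is per-EV state-of-charge bookkeeping that forbids discharging below the level the car arrived with, so the station can only claw back energy it delivered. All names and numbers below are hypothetical.

```python
class EVSession:
    """Tracks one EV's state of charge (SOC) at the charging station."""

    def __init__(self, arrival_soc_kwh, capacity_kwh):
        self.arrival_soc = arrival_soc_kwh
        self.soc = arrival_soc_kwh
        self.capacity = capacity_kwh

    def charge(self, kwh):
        # Never exceed the battery capacity.
        delivered = min(kwh, self.capacity - self.soc)
        self.soc += delivered
        return delivered

    def discharge(self, kwh):
        # V2G discharge may only return energy the station delivered,
        # never dipping below the level the car arrived with.
        available = max(0.0, self.soc - self.arrival_soc)
        drawn = min(kwh, available)
        self.soc -= drawn
        return drawn

ev = EVSession(arrival_soc_kwh=10.0, capacity_kwh=40.0)
ev.charge(35.0)                  # only 30 kWh fits into the pack
released = ev.discharge(50.0)    # capped at the 30 kWh the station delivered
```

In an optimization formulation, the same idea becomes a per-EV constraint: cumulative discharge ≤ cumulative charge (or SOC at departure ≥ a target level), which prevents the solver from emptying every car to minimise cost.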
Question
hi
I have designed a meta-heuristic algorithm and used the Taguchi method on a small example. Should I repeat these experiments for each problem, or is the small example enough? For my small example I can only create 38 neighbor solutions, but for my bigger problem I can create 77, and I think both how many neighbor solutions I can make and how many I want to create are important.
PS: the only difference between the two problems is their size.
Question
I am looking for an analytical, probabilistic, statistical or other way to compare the results of a number of different approaches applied to the same test model. These approaches could be different optimization techniques applied to a similar problem, or different types of sensitivity analysis applied to a design. I am looking for a metric (generic or application-specific) that can be used to compare the results in a more constructive and structured way.
I would like to hear about any technique you use in your field, as I might be able to derive something for my problem.
Thank you very much.