Science topic
Optimization Methods - Science topic
Explore the latest questions and answers in Optimization Methods, and find Optimization Methods experts.
Questions related to Optimization Methods
Can anyone suggest resources, or would anyone be interested in a conversation, on AI/ML and PLS-based mathematical tools? I need more information.
Let's assume that I have solved a particular optimisation problem using several different algorithms. If I want to compare them with each other, which characteristics should I consider? For example, one of them may find the global optimum better than the others, or one of them may converge faster than the others. Apart from finding the global optimum and speed of convergence, are there any other characteristics I should consider?
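Besides quality of the final solution and convergence speed, other characteristics worth reporting are robustness (the spread of the final objective value over repeated independent runs), the success rate in reaching a known optimum within a tolerance, the number of function evaluations, wall-clock time, scalability with problem dimension, and constraint-handling behaviour. As an illustration only, a minimal Python sketch that aggregates such statistics over repeated runs and applies a nonparametric significance test is shown below; the result arrays are placeholders for your own recorded runs.
import numpy as np
from scipy.stats import mannwhitneyu

# Placeholder final objective values from 30 independent runs of two algorithms.
results_a = np.random.default_rng(1).normal(loc=1.00, scale=0.05, size=30)
results_b = np.random.default_rng(2).normal(loc=1.02, scale=0.20, size=30)

def summarize(name, vals, target=1.0, tol=1e-2):
    # Robustness: spread over runs; reliability: how often the known optimum is reached.
    print(f"{name}: best={vals.min():.4f} mean={vals.mean():.4f} "
          f"std={vals.std(ddof=1):.4f} success_rate={(np.abs(vals - target) < tol).mean():.2f}")

summarize("Algorithm A", results_a)
summarize("Algorithm B", results_b)

# Nonparametric test: is the difference between the two samples significant?
stat, p = mannwhitneyu(results_a, results_b, alternative="two-sided")
print(f"Mann-Whitney U p-value: {p:.4f}")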
What is the main disadvantage of a global optimization algorithm for the Backpropagation Process?
Under what conditions can we still use a local optimization algorithm for the Backpropagation Process?
May I have the MATLAB code of some well-known multi-objective benchmark functions like Schaffer, Fonseca, ZDT1, ZDT6, Srinivas, DTLZ5, DTLZ6, LZ09_F1, LZ09_F2, LZ09_F6, LZ09_F7, LZ09_F9, WFG4, CF1, CF2, CF4, CF5, CF6, CF9, and CF10?
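Many of these benchmarks are short enough to implement directly from their published definitions, and libraries such as pymoo or PlatEMO also ship reference implementations. Purely as an illustration (please verify against the original benchmark papers), a Python sketch of ZDT1 is shown below; the other ZDT, DTLZ, and CF functions follow the same pattern.
import numpy as np

def zdt1(x):
    # ZDT1: x in [0, 1]^n, two objectives, both minimized.
    x = np.asarray(x, dtype=float)
    f1 = x[0]
    g = 1.0 + 9.0 * np.sum(x[1:]) / (len(x) - 1)
    f2 = g * (1.0 - np.sqrt(f1 / g))
    return np.array([f1, f2])

print(zdt1(np.random.rand(30)))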
I have already seen some examples of GA (genetic algorithm) applications to tune PID parameters, but so far I don't know a way to define the bounds. The bounds are always presented in manuscripts, but they appear without much explanation. I suspect that they are obtained empirically.
Could anyone recommend relevant research?
I have designed the optimization experiment using Box-Behnken approach.
What should I do if any factor combination fails, for example because aggregation occurs?
Should I revise the whole optimization, or is there a method to skip that particular factor combination?
And if I need to revise the whole experiment, what method should I use to evaluate the boundary factor values? The screening methods I have seen require at least six factors to be screened.
Any help is appreciated.
Greetings.
I am looking for a scientific field or real-life subject where I can use some convex analysis tools like Stochastic Gradient Descent or unidimensional optimization methods. Any suggestions?
Dear all,
I have a set of results obtained from FEA software using a particular DOE. I imported the data from an Excel file into MATLAB and used the Curve Fitting Tool to obtain the response surface.
I know the optimization has to be done using the Optimization Toolbox, but I do not understand how to pass the response surface data (design variables vs. objective functions) to the solver. Any help would be highly appreciated!
Thanks in advance.
Best Regards,
Rashiga
I want to solve a nonlinear optimization problem. I find its solution using Brent's method in the case where there is no regularization. Since this method does not use derivatives, can anyone guide me on how to solve the problem when a regularization term is included? Is there any way?
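If the regularization term depends on the same variable, one option is simply to add it to the objective and keep using a derivative-free method such as Brent's, since Brent's method only needs function values. A minimal SciPy sketch with a placeholder objective and a quadratic (Tikhonov-style) penalty:
import numpy as np
from scipy.optimize import minimize_scalar

def data_misfit(t):
    # Placeholder for the original derivative-free objective.
    return (np.sin(3.0 * t) + 0.1 * t) ** 2

lam = 0.5  # regularization weight; to be tuned, e.g. by cross-validation or an L-curve

def regularized(t):
    # Brent only needs function values, so the penalty can be added directly.
    return data_misfit(t) + lam * t ** 2

res = minimize_scalar(regularized, bracket=(-2.0, 0.0, 2.0), method="brent")
print(res.x, res.fun)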
I want to know the shortcomings of the two techniques and how they differ from RSM.
Also, are they artificial intelligence algorithms or not?
Thank you
I have two functions f(x,y) and g(x,y) that depend on two variables (x,y). I want to find a solution that minimizes f(x,y) while simultaneously maximizing g(x,y). How can I do this?
P.S: These functions are linearly independent.
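One common (though not the only) approach is scalarization: maximizing g is the same as minimizing -g, so you can minimize f(x,y) - w*g(x,y) for several weights w and trace out trade-off solutions, or use a Pareto-based method such as NSGA-II to obtain the whole front. A minimal SciPy sketch with placeholder functions f and g:
import numpy as np
from scipy.optimize import minimize

def f(v):
    x, y = v
    return (x - 1.0) ** 2 + (y - 2.0) ** 2      # placeholder objective to minimize

def g(v):
    x, y = v
    return 4.0 - x ** 2 - (y - 1.0) ** 2        # placeholder objective to maximize

for w in (0.1, 1.0, 10.0):
    # Combined scalar objective: minimize f while rewarding large g.
    res = minimize(lambda v: f(v) - w * g(v), x0=np.zeros(2), method="Nelder-Mead")
    print(f"w={w}: (x, y)={res.x.round(3)}, f={f(res.x):.3f}, g={g(res.x):.3f}")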
Can anyone provide me with PSO MATLAB code to optimize the weights of multiple types of neural networks?
I am a beginner at the optimization of trusses with metaheuristic algorithms. Could anyone please help me find MATLAB code for optimizing the 10-bar truss?
I need code for Consensus + Innovations and OCD in any programming language, preferably MATLAB or R.
For a multi-objective optimization problem with linear equality, linear inequality, and nonlinear constraints, which type of algorithm works well?
I am starting to learn how to model an energy storage system for a wind farm. Would anybody know where to look for existing optimization models in MATLAB, Gurobi, GAMS, etc.? This is basically just so I can get a general feel for how it is going to work. Any suggestions will be appreciated.
Regards,
Giovanni Ponce
In most AI research the goal is to achieve higher-than-human performance on a single objective.
I believe that in many cases we oversimplify the complexity of human objectives, and therefore I think we should perhaps step back from improving on human performance
and rather focus on understanding human objectives first, by observing humans in the form of imitation learning while still exploring.
In the attachment I added a description of the approach that I believe could encourage more human-like behavior.
However, I would like advice on how I could formulate a simple imitation learning environment to show a proof of concept.
One idea of mine was to build a gridworld simulating a traffic-light scenario: while the agent is only rewarded for crossing the street, we still want it to respect the traffic rules.
Kind regards
Jasper Busschers, master's student in AI
Currently, I am looking for a passionate teammate to do collaborative research in system identification. We will try to combine optimization techniques to obtain the best possible model. Bachelor students and master students are both welcome. You may also ask your friends to join this group. Sure, I will be the last author, no worries. My target is an SCI journal.
Best regards,
Yeza
Which optimization technique is best to find the minimum number of experiments to be performed with four factors and five levels?
I have a multi-objective optimization with the following properties:
Objective functions: three minimization objectives (two non-linear functions and one linear function)
Decision variables: two real variables (bounded)
Constraints: three linear constraints (two bound constraints and one relationship constraint)
Problem type: non-convex
Solution required: global optimum
I have used two heuristic algorithms to solve the problem: NSGA-II and NSGA-III.
I have performed NSGA-II and NSGA-III for the following instances (population size, number of generations, maximum number of function evaluations, i.e. population size x number of generations): (100, 10, 1000), (100, 50, 5000), (100, 100, 10000), (500, 10, 5000), (500, 50, 25000), and (500, 100, 50000).
My observations:
The hypervolume increases as the number of function evaluations increases. However, for a given population size, as the number of generations increases the hypervolume decreases, whereas I think it should increase. Why am I getting such a result?
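One thing to check is how the hypervolume is computed: it is only comparable across runs and generation counts if the same, fixed reference point is used everywhere. If the reference point (or any normalization) is recomputed from each run's own front, the indicator can appear to shrink even though the front improves; averaging over several random seeds also helps separate a real trend from run-to-run noise. A hedged sketch using pymoo (assuming a recent pymoo version) with placeholder fronts:
import numpy as np
from pymoo.indicators.hv import HV

# Placeholder non-dominated fronts from two runs (all three objectives minimized).
F_run1 = np.random.rand(50, 3)
F_run2 = np.random.rand(80, 3)

# Choose ONE reference point, slightly worse than any point you expect,
# and reuse it for every run; otherwise hypervolume values are not comparable.
ref_point = np.array([1.1, 1.1, 1.1])
hv = HV(ref_point=ref_point)

print("HV run 1:", hv(F_run1))
print("HV run 2:", hv(F_run2))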
For an Integer Linear Programming problem (ILP), an irreducible infeasible set (IIS) is an infeasible subset of constraints, variable bounds, and integer restrictions that becomes feasible if any single constraint, variable bound, or integer restriction is removed. It is possible to have more than one IIS in an infeasible ILP.
Is it possible to identify all possible Irreducible Infeasible Sets (IIS) for an infeasible Integer Linear Programming problem (ILP)?
Ideally, I aim to find the MIN IIS COVER, which is the smallest cardinality subset of constraints to remove such that at least one constraint is removed from every IIS.
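Solvers such as Gurobi and CPLEX can compute one IIS; enumerating all IISs is much harder in general (their number can grow exponentially), and a common practical workaround is to compute an IIS, record it, relax or remove one of its members, and repeat. MIN IIS COVER itself is NP-hard and is usually attacked via the complementary maximum feasible subsystem problem or solver feasibility-relaxation features. A hedged gurobipy sketch for extracting one IIS (the model file name is a placeholder):
import gurobipy as gp

m = gp.read("infeasible_model.lp")   # placeholder file name
m.optimize()

if m.Status == gp.GRB.INFEASIBLE:
    m.computeIIS()
    # Constraints and variable bounds flagged as members of the computed IIS.
    iis_constrs = [c.ConstrName for c in m.getConstrs() if c.IISConstr]
    iis_bounds = [(v.VarName, v.IISLB, v.IISUB) for v in m.getVars() if v.IISLB or v.IISUB]
    print("IIS constraints:", iis_constrs)
    print("IIS variable bounds:", iis_bounds)
    m.write("model.ilp")             # writes the IIS to a file in ILP format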
Thanks for your time and consideration.
Regards
Ramy
Can anyone provide PSO MATLAB code for optimal sizing of a solar-wind hybrid system coupled with a heat pump and battery storage?
In the maintenance optimization context, researchers often structure the problem so that the renewal reward theorem applies, and they use this theorem to minimize the long-run cost rate criterion in the maintenance optimization problem.
However, in the real world, structures to which the renewal reward theorem applies may not arise. How should such problems be dealt with?
Bests
Hasan
I'm trying to calculate the effectiveness of my implementation of the D2Q9 solver using ArrayFire, which uses JIT compilation and other optimization techniques behind the scenes to output near-optimal CUDA/OpenCL code.
On a lid-driven cavity test with a 3000x3000 domain, I'm getting about 3500 MLUPS. For the MLUPS calculation I'm using this formula:
// MLUPS = lattice nodes * iterations / (seconds * 1e6)
float mlups = (total_nodes * iter * 1e-6) / total_time;
This is what the first 1000 iterations look like:
100 iterations completed, 2s elapsed (4645.152 MLUPS).
200 iterations completed, 5s elapsed (3716.1216 MLUPS).
300 iterations completed, 8s elapsed (3483.864 MLUPS).
400 iterations completed, 10s elapsed (3716.1216 MLUPS).
500 iterations completed, 13s elapsed (3573.1936 MLUPS).
600 iterations completed, 16s elapsed (3483.864 MLUPS).
700 iterations completed, 19s elapsed (3422.7437 MLUPS).
800 iterations completed, 21s elapsed (3539.1633 MLUPS).
900 iterations completed, 24s elapsed (3483.864 MLUPS).
1000 iterations completed, 27s elapsed (3440.853 MLUPS).
I want to calculate how far this is from the theoretical maximum and what is the effectiveness of the memory layout.
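For a memory-bound D2Q9 kernel, a common back-of-the-envelope bound is the peak (or measured) memory bandwidth divided by the bytes moved per lattice node per time step; for a simple two-lattice scheme that is roughly 9 loads plus 9 stores of the distribution functions, ignoring extra fields. A hedged sketch of that estimate, with the bandwidth and measured values as placeholders for your own hardware numbers:
bytes_per_value = 4                  # float32; use 8 for double precision
populations = 9                      # D2Q9
bytes_per_node = 2 * populations * bytes_per_value   # ~9 loads + 9 stores per node per step

peak_bandwidth = 900e9               # placeholder: device bandwidth in bytes/s (spec sheet or STREAM-like test)

max_lups = peak_bandwidth / bytes_per_node
print(f"Bandwidth-limited maximum: {max_lups * 1e-6:.0f} MLUPS")

measured_mlups = 3500.0              # placeholder: your measured value
print(f"Efficiency: {100 * measured_mlups / (max_lups * 1e-6):.1f} % of the bandwidth bound")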
Hi
I intend to define a frequency domain objective function to design an optimal controller for Load-Frequency Control (LFC) of a power system. My purpose is to optimize this objective function (finding the proper location of zeros and poles) using meta-heuristic methods.
I would be happy if you could share your relevant and informative experiences, references, and articles in this field, including how to define and how to code such an objective.
Thanks
AI and machine learning are increasing in importance in every area. How significant are they for transistor design?
I want to compare metaheuristics on the optimization of Lennard-Jones clusters. There are many papers available that optimize Lennard-Jones clusters. Unfortunately, none of them provide the upper and lower bounds of the search space. In order to conduct a fair comparison, all metaheuristics should search within the same bounds of the search space. I found the global minima here: http://doye.chem.ox.ac.uk/jon/structures/LJ/tables.150.html but the search space is not defined.
Can anyone please tell me what are the recommended upper and lower bounds of the search space?
I have a bilinear term xy of bounded continuous variables in an optimization problem that I need to linearize.
Previously, I used this following approach:
z=xy=(1/4)*((x+y)^2-(x-y)^2)
X= (1/2)*(x+y) , Y= (1/2)*(x-y)
z=X^2-Y^2
Then I used piecewise linear (PWL) functions to linearize X^2 and Y^2. In this case, the MILP tries to maximize Y because of its minus sign, which is not an optimal answer for me.
In my next attempt, I used the iterative McCormick envelope as explained in:
When using this approach, the value of z is not equal to the product of x and y. From the obtained results, I realized that this approach is not suitable either.
So, could you suggest an approach for linearizing the term xy?
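If x and y are both bounded, the standard single-pass McCormick relaxation introduces a variable z together with four linear inequalities; it is exact when one of the variables sits at a bound and is otherwise only a relaxation, so z = xy holds only approximately unless the bounds are tightened (by spatial branching or the iterative scheme you mention). A hedged PuLP sketch of the four envelope constraints, with placeholder bounds and objective:
from pulp import LpProblem, LpVariable, LpMinimize, LpStatus

xL, xU = 0.0, 4.0          # placeholder bounds on x
yL, yU = 1.0, 3.0          # placeholder bounds on y

prob = LpProblem("mccormick_demo", LpMinimize)
x = LpVariable("x", xL, xU)
y = LpVariable("y", yL, yU)
z = LpVariable("z")        # stands in for the bilinear term x*y

# McCormick envelopes for z = x*y on [xL, xU] x [yL, yU]
prob += z >= xL * y + x * yL - xL * yL
prob += z >= xU * y + x * yU - xU * yU
prob += z <= xU * y + x * yL - xU * yL
prob += z <= xL * y + x * yU - xL * yU

prob += z + 0.5 * x - y    # placeholder linear objective that uses z instead of x*y
prob.solve()
print(LpStatus[prob.status], x.value(), y.value(), z.value())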
The Plackett-Burman design mainly aims to screen the important factors that could have significant effects on the response (dependent) variable. However, as we performed the experiments, we noticed that the significance of the factors, and especially the order of the effects of the factors, changed according to how we set the highest and lowest levels of each factor. As a result, we faced the difficulty of deciding which factors to screen out. Can anyone suggest reasonable ways to set the levels of each factor in a Plackett-Burman design?
Suppose that we compare two metaheuristics X and Y on a given real-world problem and X returns a better solution than Y, while when we use the same metaheuristics to solve global optimization problems, Y returns a better solution than X. Does this make sense? What is the reason?
I am using optimization techniques for my research work on demand response. To deal with uncertain parameters and variables, stochastic and robust optimization are used. I want to learn these techniques and subsequently implement them in my research work. What are good books and optimization software packages to start with?
There are many works that search for subgraphs of a given graph. What is an efficient method to check whether a given subgraph is connected?
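Connectivity can be checked in O(V + E) time with a single BFS or DFS from any vertex of the subgraph (a union-find structure is convenient if many subgraphs are checked incrementally); networkx.is_connected does the same thing. A minimal sketch on an edge-list representation:
from collections import deque

def is_connected(nodes, edges):
    # BFS connectivity check for the subgraph given by `nodes` and `edges`.
    nodes = set(nodes)
    if not nodes:
        return True
    adj = {u: [] for u in nodes}
    for u, v in edges:
        if u in nodes and v in nodes:   # keep only edges inside the subgraph
            adj[u].append(v)
            adj[v].append(u)
    start = next(iter(nodes))
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == len(nodes)

print(is_connected({1, 2, 3, 4}, [(1, 2), (2, 3)]))   # False: node 4 is isolated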
If you build a parameter learning algorithm based on the Lyapunov stability theorem for updating the parameters of an adaptive fuzzy controller, how do you determine the cost function and the Lyapunov function? Is there a physical connection between them?
I am working on a project to assist an experimental team in optimizing reaction conditions. The problem involves a large number of dimensions, i.e. 30+ reactants whose concentrations we are varying to achieve the highest yield of a certain product.
I am familiar with stochastic optimization methods such as simulated annealing, genetic algorithms, which seemed like a good approach to this problem. The experimental team proposes using design of experiments (DoE), which I'm not too familiar with.
So my question is, what are the advantages/disadvantages of DoE (namely fractional factorial I believe) versus stochastic optimization methods, and are there use cases where one is preferred over the other?
How can optimization techniques like genetic algorithms and particle swarm optimization be used in reliability analysis? Please give me an idea about it.
I am trying to minimize the following objective function:
(A-f(B)).^2 + d*TV(B)
A: known 2D image
B: unknown 2D image
f: a non-injective function (e.g. sine function)
d: constant
TV: total variation: sum of the image gradient magnitudes
I am not an expert on these sorts of problems and am looking for some hints to start from. Thank you.
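One way to get started is to flatten B, write the data term plus a smoothed total variation penalty, and hand everything to a generic quasi-Newton solver; because f is non-injective (e.g. sine), the problem is non-convex, so the result depends strongly on the initial guess and a multi-start strategy or a good initialization of B matters. A small, hedged NumPy/SciPy sketch on a tiny image, using a smoothed TV so the objective stays differentiable:
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
shape = (16, 16)
B_true = rng.uniform(0.0, np.pi, shape)       # placeholder ground truth
A = np.sin(B_true)                            # observed image; f = sin in this sketch
d, eps = 0.05, 1e-6                           # TV weight and smoothing parameter

def tv(B):
    gx = np.diff(B, axis=0)
    gy = np.diff(B, axis=1)
    # Smoothed isotropic TV over interior pixels: sum of sqrt(gx^2 + gy^2 + eps)
    return np.sum(np.sqrt(gx[:, :-1] ** 2 + gy[:-1, :] ** 2 + eps))

def objective(b_flat):
    B = b_flat.reshape(shape)
    return np.sum((A - np.sin(B)) ** 2) + d * tv(B)

B0 = np.full(shape, 0.5 * np.pi)              # initial guess (crucial because f is non-injective)
res = minimize(objective, B0.ravel(), method="L-BFGS-B",
               bounds=[(0.0, np.pi)] * B0.size)
print("final objective value:", res.fun)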
I'm running a geometry optimization with DMol3 (Materials Studio) on 2D MXene (V2C). Starting from the V2C structure, I doped oxygen atoms and optimized it to the V2CO2 structure. Then I created an oxygen vacancy, but upon optimizing this new Ov/V2CO2 structure the job fails with an error showing that the SCF is not converging.
Simulation Details:
Smearing: 0.005 Hartree with 1e-6 SCF tolerance.
The Grimme method for DFT-D correction was employed.
The BFGS algorithm is used with the DNP basis set (version 4.4). (Attachments available.)
Error Details:
Message: SCF not converging. Choose larger smearing value in DMol3 SCF panel
or modify/delete "Occupation Thermal" in the input file.
You may also need to change spin or use symmetry.
Note: I tried changing the values but the error keeps occurring.
Files attached: Ov-V2CO2.input (input file), Ov-V2CO2.outmol(output file)
Kindly help me clarify my query: how does a prediction interval work when comparing the performance of a set of meta-heuristic algorithms?
For example, a model has 5 nodes, where A is the dependent node and B, C, D, and E are the independent nodes (as shown in Figure 1). Now, suppose I know the probabilities of all independent nodes when A is 20%, 25%, 30%, ... up to 50% (as shown in Figure 2).
Can I find the probabilities of all independent nodes when A is 70%, 80%, or more?
If it is possible, which algorithm or procedure should I follow to reach the targeted value of node A?
Thank you.
I have to solve an optimization problem. I already found the PuLP library for Python, but I want to solve it with a metaheuristic algorithm. My problem involves discrete decision variables and constraints, with a minimization objective function. I can't decide which algorithm suits my problem best. Also, I need example code for it.
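For discrete decision variables with constraints, simulated annealing with a penalty for constraint violations is one of the easiest metaheuristics to implement and adapt; genetic algorithms and tabu search are common alternatives. A generic, hedged sketch in which the objective, the constraint, and the neighborhood move are placeholders to be replaced by your own model:
import math
import random

random.seed(1)

def objective(x):                 # placeholder cost to minimize
    return sum(i * xi for i, xi in enumerate(x, start=1))

def violation(x):                 # placeholder constraint: select at least 3 items
    return max(0, 3 - sum(x))

def neighbor(x):                  # flip one binary decision variable
    y = x[:]
    i = random.randrange(len(y))
    y[i] = 1 - y[i]
    return y

def penalized(x, weight=100.0):
    return objective(x) + weight * violation(x)

x = [random.randint(0, 1) for _ in range(10)]
best, temp = x[:], 10.0
for it in range(5000):
    cand = neighbor(x)
    delta = penalized(cand) - penalized(x)
    if delta < 0 or random.random() < math.exp(-delta / temp):
        x = cand
        if penalized(x) < penalized(best):
            best = x[:]
    temp *= 0.999                 # geometric cooling schedule

print(best, penalized(best))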
I need a heuristic for an assignment problem, to be used for allocating tasks to 2 or more vehicles operating on the same network. The heuristic should be easy to implement, so, for example, not a GA.
NOTE: the allocation of tasks can be, for example, vehicle 1 picks up goods at node A and delivers to B, while vehicle 2 picks up at C and delivers to D.
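One simple option is a greedy insertion heuristic: track each vehicle's current end location and give each task to the vehicle with the smallest marginal cost (empty travel to the task's pickup node plus the loaded leg). If the matching is one task per vehicle, the Hungarian method (scipy.optimize.linear_sum_assignment) is another easy, exact drop-in. A hedged sketch with a placeholder distance function:
def dist(a, b):
    # Placeholder network distance; replace with shortest-path distances on your network.
    return abs(ord(a) - ord(b))

tasks = [("A", "B"), ("C", "D"), ("B", "E")]      # (pickup node, delivery node)
vehicles = {"veh1": "A", "veh2": "C"}             # current position of each vehicle

routes = {v: [] for v in vehicles}
for pickup, delivery in tasks:
    # Marginal cost = empty travel to the pickup + the loaded leg itself.
    best = min(vehicles, key=lambda v: dist(vehicles[v], pickup) + dist(pickup, delivery))
    routes[best].append((pickup, delivery))
    vehicles[best] = delivery                     # the vehicle now ends at the delivery node

print(routes)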
What should be the optimum number of maxcycles for a transition state optimization in Gaussian? I know the default is 20. If we can't get proper convergence or a stationary point, should we increase the value, and by how much? Will a higher number of maxcycles, say 400, increase the time and cost of calculations for a typical TS with 20 atoms?
My study involves establishing chronic nicotine addiction in mice, and I would like to check the cotinine level in mouse urine using HPLC, but the protocols I have found so far use human urine. Bioassays and GC are expensive, and I would like to use urine as the sample. So, how do I modify the HPLC protocol for mouse urine?
Why do we need optimization techniques in feature subset selection? Is it necessary to use an optimizer to perform feature selection for a large number of features?
I am new to multi-objective optimization techniques. I have seen many papers using assessment metrics to find the best solution among the obtained non-dominated solutions in the Pareto front. I have no idea about these metrics and how to calculate them. Also, how do these metrics ensure that the obtained solution is better?
I have some equations attached as a file to the question.
I want to find the variables S, delta, V, Vc, Te and beta_c resulting in the smallest overall error in the equations using a genetic algorithm.
The problem is that I don't know what I have to do now.
Does it mean that I must find the minimum or maximum of these equations? How should the fitness function be defined? As a first attempt, I moved all the terms of each equation to one side, leaving zero on the other side, then added the three equations together and defined them as a single fitness function,
but I'm not sure about the correctness.
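That is essentially the right idea: rewrite each equation as residual_i(variables) = 0 and minimize the sum of squared residuals. Squaring is important because it prevents residuals of opposite sign from cancelling, which a plain sum would allow; a perfect solution then has fitness 0. A hedged sketch with placeholder residuals and bounds, using SciPy's differential evolution (an evolutionary optimizer in the same spirit as a GA):
import numpy as np
from scipy.optimize import differential_evolution

def residuals(v):
    S, delta, V, Vc, Te, beta_c = v
    # Placeholders: replace with your three equations rewritten as expression = 0.
    r1 = S * np.cos(delta) - V
    r2 = Vc - V * np.sin(beta_c)
    r3 = Te - S * Vc
    return np.array([r1, r2, r3])

def fitness(v):
    # Sum of squared residuals: zero only when all equations hold simultaneously.
    return float(np.sum(residuals(v) ** 2))

bounds = [(0, 10), (-np.pi, np.pi), (0, 10), (0, 10), (0, 10), (-np.pi, np.pi)]  # placeholders
result = differential_evolution(fitness, bounds, seed=1, tol=1e-10)
print(result.x, result.fun)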
I am involved in a small project involving a household load and renewables such as PV, an energy storage system, and an electric vehicle connected to the house. The challenge is to optimize the load and find the right time for the EV to charge using a MATLAB simulation. What are the best optimization algorithms for such load-scheduling problems? Any reference papers suggesting the same would be appreciated.
Thanks!
I am using MATLAB's 'fmincon' to solve a nonlinear constrained optimisation problem, but it is very slow. What are the possible ways to speed up the simulation?
What is the best alternative to 'fmincon' that I can use within MATLAB to speed up the optimisation process?
What is the best optimization technique for my deep learning model (a convolutional neural network) that uses a facial dataset of 2000 images to address facial occlusion challenges caused by nose masks worn due to COVID-19, lighting variations, and ageing? The CNN will be implemented in Python.
I am working on a design optimisation problem. I would like to ask: for a problem with uncertainty, should design-optimisation-under-uncertainty techniques be used for the design optimisation?
In daily life, single-period and multi-period inventory systems are very important. When the selling period is fixed, that is, we cannot sell items outside that fixed time window, it may be called a single-period system. Let's discuss what the actual definition is.
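The classic single-period case is the newsvendor model: there is one ordering opportunity, demand is realized once, leftovers are salvaged and shortages are lost, and the optimal order quantity is the critical-ratio quantile of the demand distribution, Q* = F^-1(cu / (cu + co)), where cu is the unit underage cost and co the unit overage cost. A small sketch for normally distributed demand with placeholder cost figures:
from scipy.stats import norm

price, cost, salvage = 10.0, 6.0, 2.0      # placeholder selling price, unit cost, salvage value
cu = price - cost                          # underage cost: margin lost per unit of unmet demand
co = cost - salvage                        # overage cost: loss per unsold unit
critical_ratio = cu / (cu + co)

mu, sigma = 100.0, 20.0                    # placeholder normal demand parameters
q_star = norm.ppf(critical_ratio, loc=mu, scale=sigma)
print(f"critical ratio = {critical_ratio:.2f}, optimal order quantity = {q_star:.1f}")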
The problem is this: there is a model of a robot, created in Simscape.
The knee of this robot is made of a "pin-slot joint", which allows one translational and one rotational degree of freedom.
On the translational motion, a stiffness and damping factor is imposed, which also influences the rotational torque.
My aim is to write an optimization or control algorithm that provides a stiffness (in the linear direction) that reduces the rotational torque.
By the way, the rotational reference motion of the knee is provided in advance (as input), and the corresponding torque is computed inside the joint by inverse dynamics.
But to create such an algorithm, I have no detailed information about the block dynamics, because the block is provided by Simscape and its source code and other information are hidden.
Using only the signals of input stiffness, input motion, and output torque, I need to optimize the torque.
(I tried to derive the equations using my knowledge of mechanics, but many details are needed, such as the mass of the joint actuator, its radius, the length of the spring, etc., and I do not have this information.)
If you can suggest something, I will be truly grateful.
In design of experiments with response surface methodology, I conducted experiments with variables at different levels, for example 3 factors and 3 levels, 15 runs, under open atmospheric conditions.
How can I validate the model, given that the 15 runs were performed under different atmospheric conditions?
Are there any possible solutions? Please suggest.
Thank you.
Hello to everyone, I would like your help and maybe expertise. I am from Brazil and we bought a single axis solar tracker which follows the sun E-W to improve energy harvesting during morning and afternoon hours. The thing is, after the selling company installed the tracker for us, we didn't like the fact that the tilt angle is zero, not being optimized for our latitude.
Is it ok if we dismount the tracker and install it again elevating one of the pillars in order to get the optimized tilt for our location?
Can this affect the motor's life span?
Thank you in advance
Most optimization methods use upper and lower bound constraints to handle variables that leave the feasible range. At the same time, each variable can significantly affect the search direction of the optimization method towards its optimal value. Thus, resetting variables to the lower and/or upper bounds delays finding the optimal values at each iteration.
I am using stochastic dual dynamic programming (SDDP) for decision-making under uncertainty. Can we use stochastic dynamic programming to solve a min-max problem, for example when the max function is used in the objective function? Is there any good library for stochastic dual dynamic programming?
I am using an ANN for a product reliability assurance application, i.e. picking some samples within the production process and then estimating the overall quality of the production line output. What kind of optimization algorithm do you think works best for training the ANN in such a problem?
Is there a Python project where a commercial FEA (finite element analysis) package is used to generate input data for a freely available optimizer, such as scipy.optimize, pymoo, pyopt, pyoptsparse?
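Most such couplings are thin wrappers rather than dedicated projects: the objective function writes the design variables into the FEA code's input deck, launches the solver in batch mode, parses the result file, and returns a scalar to the optimizer. A heavily hedged skeleton is shown below; the solver command, file names, and parsing are placeholders that depend entirely on the FEA package being used:
import subprocess
import numpy as np
from scipy.optimize import minimize

def run_fea(design):
    # 1) Write the design variables into the solver's input deck (placeholder format).
    with open("design.txt", "w") as fh:
        fh.write(" ".join(f"{v:.6e}" for v in design))

    # 2) Call the commercial solver in batch mode (placeholder command line).
    subprocess.run(["my_fea_solver", "--input", "design.txt", "--output", "result.txt"],
                   check=True)

    # 3) Parse the quantity of interest back out (placeholder: a single number in result.txt).
    with open("result.txt") as fh:
        return float(fh.read().strip())

x0 = np.array([1.0, 2.0, 0.5])   # placeholder initial design
res = minimize(run_fea, x0, method="Nelder-Mead")   # derivative-free, since FEA gives no gradients
print(res.x, res.fun)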
Can you list the critical solved and unsolved global, constrained, and complex optimization problems, as well as combinatorial and engineering problems?
Metaheuristic algorithms can be classified in many ways.
Many metaheuristic algorithms have been proposed to solve different optimization problems, but the solved problems keep being re-solved while the unsolved ones remain unsolved.
Please list the unsolved problems, or the critical solved problems that still need an optimal solution, to help researchers tackle the unsolved problems.
Thanks for your contributions.
In this link, shifted functions are defined as r*(x-o)/100 (where r is the original range) to keep the range within [-100, 100]. But when optimizing the functions, should I generate the values of x in [-100, 100] or in [o-100, o+100]? If the function is shifted by the vector o, then the respective ranges should also change, because if o < -100 or o > 100, the global optimum won't fall within the range. And even if I generate the o values within [-100, 100], the function would merely be shifted within the range rather than staying in the range where it is well defined.
Dear all,
I am currently writing my master's thesis, in which I am analyzing my data using path analysis and SEM in R (lavaan package). Now that I would like to write up my results, I am struggling with the R output. Since the normality assumption is violated, I decided to use the Yuan-Bentler correction.
Consequently, R gives me an output containing a column with robust values (which I thought were the corrected ones) as well as extra robust fit indices.
Did I make a mistake? Otherwise, I would appreciate it if you could give me a hint on which values to use (or rather, what the difference is between them)!
Thanking you in advance and best regards,
Alina
I am working on solving an optimization problem with two objectives using neuroevolution. I use NEAT to evolve solutions which need to satisfy objective A and objective B. I have tried different configurations, changed mutation values, etc.; however, I always run into the same problem.
The algorithm reaches 100% fitness for objective A quite effortlessly; however, the fitness of objective B mostly gets stuck at the same value (ca. 85%). Through heavy mutation I sometimes manage to get objective B's fitness to >90%, but then the fitness for objective A decreases significantly. I would not mind worsening A in favor of B here. However, I only reach a higher fitness on objective B in very rare cases. Most/all individuals converge to a fitness of (100%, 85%).
I extended my NEAT implementation to support Pareto fronts and crowding-distance sorting (NSGA-II). After some iterations this leads to an average population fitness of (100%, 85%), meaning every candidate approaches the same spot.
My desired fitness landscape would be much more diverse, especially I would like the algorithm to evolve solutions with fitnesses like (90%, 90%), (80%, 95%) etc.
My main problem seems to be that every individual arrives at the same fitness tuple sooner or later and I can only prevent that through lots of mutation (randomness). Even then only a few candidates break the 85% barrier of objective B.
I am wondering if anyone has had a similar scenario yet and/or can think of some extension of the evolving procedure to prevent stagnation in this particular point.
Thank you in advance, I am looking forward to any suggestions.
I'm currently working on my undergraduate thesis, in which I develop a genetic algorithm that finds suboptimal 2D positions for a set of buildings. The solution representation is a vector of real numbers where each group of three elements represents the position and angle of one building. Within each group of three, the first element represents the x position, the second represents the y position, and the third represents the angle. A typical solution representation looks like:
[ building 0 x position, building 0 y position, building 0 angle, building 1 x position, ... ]
I have already managed to create a genetic algorithm that produces suboptimal solutions; it uses uniform crossover and discards infeasible solutions. However, it is only fast for small problems (e.g. 4 buildings), and adding more buildings makes it too slow, to the point that I think it devolves into brute force, which is definitely not what we want. I previously tried keeping infeasible solutions in the population with a poorer fitness, but that only results in best solutions that are worse than when I discarded the infeasible ones.
Now, I am looking for a crossover operator that can help me speed up the genetic algorithm and allow it to scale to more buildings. I have already experimented with arithmetic crossover and box crossover, but to no avail. So, I am hoping that the community can suggest crossovers that I could try. I would also appreciate any suggestions to improve my genetic algorithm (and not just the crossover operator).
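For real-coded GAs, two operators often used on layout-type problems are blend crossover (BLX-alpha) and simulated binary crossover (SBX); both let children explore slightly outside the parents' hyper-rectangle, which helps against premature convergence. A hedged sketch of BLX-alpha with per-gene bounds (site size and angle range are placeholders) is below; a geometric repair step for overlapping buildings may also scale better than discarding infeasible offspring.
import numpy as np

rng = np.random.default_rng(0)

def blx_alpha(parent1, parent2, low, high, alpha=0.5):
    # BLX-alpha crossover for real-valued chromosomes [x0, y0, angle0, x1, y1, angle1, ...].
    p1, p2 = np.asarray(parent1, float), np.asarray(parent2, float)
    lo, hi = np.minimum(p1, p2), np.maximum(p1, p2)
    span = hi - lo
    # Sample each child gene uniformly from an interval alpha*span wider than the parents' range.
    child = rng.uniform(lo - alpha * span, hi + alpha * span)
    return np.clip(child, low, high)          # respect per-gene bounds

n_buildings = 2
low = np.tile([0.0, 0.0, 0.0], n_buildings)        # placeholder site bounds and angle range
high = np.tile([100.0, 100.0, 360.0], n_buildings)

p1 = np.array([10.0, 20.0, 30.0, 50.0, 60.0, 90.0])
p2 = np.array([12.0, 18.0, 350.0, 55.0, 58.0, 45.0])
print(blx_alpha(p1, p2, low, high))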
Thanks!
Hello everybody,
I am looking to study the optimal technique to initialize weights in a large neural network with multiple hidden layers.
A comparative study will be more than appreciated.
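Two widely used schemes are Glorot/Xavier initialization (suited to tanh or sigmoid activations) and He initialization (suited to ReLU); both scale the random weights by the layer fan-in/fan-out so that activation and gradient variances stay comparable across many hidden layers. A compact NumPy sketch of both, with a placeholder architecture, that could serve as a starting point for the comparison:
import numpy as np

rng = np.random.default_rng(0)

def xavier_init(fan_in, fan_out):
    # Glorot & Bengio (2010): uniform in [-limit, limit], limit = sqrt(6 / (fan_in + fan_out))
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

def he_init(fan_in, fan_out):
    # He et al. (2015), for ReLU: zero-mean normal with std = sqrt(2 / fan_in)
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))

layer_sizes = [784, 512, 512, 256, 10]     # placeholder architecture
weights = [he_init(a, b) for a, b in zip(layer_sizes[:-1], layer_sizes[1:])]
for W in weights:
    print(W.shape, f"std = {W.std():.4f}")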
Many thanks for your help.
I am trying to conduct a CFA analysis using RStudio. However, instead of giving me all the fit indices I am supposed to get, even with the summary function I don't get the GFI | AGFI | NFI | NNFI | CFI | RMSEA. Can anyone please help me with this issue?
Estimator ML
Optimization method NLMINB
Number of free parameters 25
Number of observations 275
Model Test User Model:
Test statistic 228.937
Degrees of freedom 41
P-value (Chi-square) 0.000
Multi-objective optimization through the chicken swarm optimization technique, for both maximization and minimization problems.
Bat-inspired algorithm is a metaheuristic optimization algorithm developed by Xin-She Yang in 2010. This bat algorithm is based on the echolocation behaviour of microbats with varying pulse rates of emission and loudness.
The idealization of the echolocation of microbats can be summarized as follows: Each virtual bat flies randomly with a velocity vi at position (solution) xi with a varying frequency or wavelength and loudness Ai. As it searches and finds its prey, it changes frequency, loudness and pulse emission rate r. Search is intensified by a local random walk. Selection of the best continues until certain stop criteria are met. This essentially uses a frequency-tuning technique to control the dynamic behaviour of a swarm of bats, and the balance between exploration and exploitation can be controlled by tuning algorithm-dependent parameters in bat algorithm. (Wikipedia)
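Purely as an illustration of those update rules (not a reference implementation), a compact Python sketch of the standard bat algorithm on a simple continuous test function might look like the following; the parameter values are commonly quoted defaults and should be tuned per problem.
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                              # placeholder objective to minimize
    return np.sum(x ** 2, axis=-1)

n_bats, dim, n_iter = 30, 10, 500
f_min, f_max = 0.0, 2.0                     # frequency range
alpha, gamma = 0.9, 0.9                     # loudness decay, pulse-rate growth
lb, ub = -10.0, 10.0

x = rng.uniform(lb, ub, (n_bats, dim))      # positions
v = np.zeros((n_bats, dim))                 # velocities
A = np.ones(n_bats)                         # loudness
r0 = rng.uniform(0.0, 1.0, n_bats)          # initial pulse rates
fit = sphere(x)
best = x[np.argmin(fit)].copy()

for t in range(1, n_iter + 1):
    r = r0 * (1.0 - np.exp(-gamma * t))     # pulse rates increase over time
    for i in range(n_bats):
        freq = f_min + (f_max - f_min) * rng.random()
        v[i] += (x[i] - best) * freq
        x_new = np.clip(x[i] + v[i], lb, ub)
        if rng.random() > r[i]:             # local random walk around the current best
            x_new = np.clip(best + 0.01 * A.mean() * rng.normal(size=dim), lb, ub)
        f_new = sphere(x_new)
        if f_new < fit[i] and rng.random() < A[i]:   # accept improving moves, gated by loudness
            x[i], fit[i] = x_new, f_new
            A[i] *= alpha                   # loudness decreases as the bat closes in on prey
        if f_new < sphere(best):
            best = x_new.copy()

print("best value found:", sphere(best))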
What are the applications of bat algorithm? Any good optimization papers using bat algorithm? Your views are welcome! - Sundar
Hello everyone,
We have the following integer programming problem with two integer decision variables, namely x and y:
Min F(f(x), g(y))
subject to the constraints
x <= xb,
y <= yb,
x, y non-negative integers.
Here, the objective function F is a function of f(x) and g(y). Both the functions f and g can be computed in linear time. Moreover, the function F can be calculated in linear time. Here, xb and yb are the upper bounds of the decision variables x and y, respectively.
How do we solve this kind of problem efficiently? We are not looking for any metaheuristic approaches.
I appreciate any help you can provide. Particularly, it would be helpful for us if you can provide any materials related to this type of problem.
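If xb and yb are moderate, note that the feasible set has only (xb+1)*(yb+1) points, so with linear-time evaluations a plain enumeration is already exact and polynomial: precompute f(x) for every x and g(y) for every y, then scan all pairs (and if F is monotone in each argument the scan can be pruned further). A hedged sketch with placeholder f, g, and F:
def f(x):            # placeholder, assumed cheap to evaluate
    return (x - 7) ** 2

def g(y):            # placeholder
    return abs(y - 3)

def F(a, b):         # placeholder combining function
    return a + 2 * b

xb, yb = 100, 100

f_vals = [f(x) for x in range(xb + 1)]
g_vals = [g(y) for y in range(yb + 1)]

best = min((F(f_vals[x], g_vals[y]), x, y)
           for x in range(xb + 1) for y in range(yb + 1))
print("optimal value", best[0], "at x =", best[1], "y =", best[2])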
Regards,
Soumen Atta
Hi everyone, I am working on optimization for the energy management system of an EV charging station. I established the mathematical model, with an objective function (minimization of electricity cost) and constraints, and then built a Python simulation. In the simulation results I obtained the charging station load profile (power profile and energy profile), but the energy charged into each car is later discharged back into the charging station, and in the end I get 0 cars charged. Can anyone help with ideas on how to limit the energy discharged to only the car's initial amount, so that the EVs do not discharge the energy they have just been charged with? Thank you.
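One way to enforce that in an LP/MILP formulation is to track each vehicle's state of charge over time and cap the cumulative discharge by the energy that is allowed to leave the battery, e.g. only the vehicle's initial energy if you do not want energy charged at the station to be pumped straight back out. A hedged PuLP sketch of such per-vehicle energy-balance constraints, with placeholder prices, 100% efficiencies, and placeholder limits:
from pulp import LpProblem, LpVariable, LpMinimize, lpSum

T = range(24)                                         # hourly time steps (placeholder horizon)
price = [0.10] * 8 + [0.30] * 8 + [0.15] * 8          # placeholder electricity prices per kWh
soc_init, soc_max, p_max, dt = 10.0, 40.0, 7.0, 1.0   # kWh, kWh, kW, h (placeholders)

prob = LpProblem("ev_station", LpMinimize)
ch = LpVariable.dicts("charge", T, 0, p_max)          # charging power
dis = LpVariable.dicts("discharge", T, 0, p_max)      # discharging power (V2G)
soc = LpVariable.dicts("soc", T, 0, soc_max)          # state of charge

for t in T:
    prev = soc_init if t == 0 else soc[t - 1]
    prob += soc[t] == prev + dt * (ch[t] - dis[t])    # energy balance of the battery
    # Cumulative discharge may never exceed the vehicle's INITIAL energy,
    # so energy charged at the station cannot simply be discharged back again.
    prob += dt * lpSum(dis[k] for k in T if k <= t) <= soc_init

prob += lpSum(price[t] * dt * (ch[t] - dis[t]) for t in T)   # electricity cost to minimize
prob.solve()
print("cost:", prob.objective.value(), "final SoC:", soc[23].value())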
Hi,
I have designed a meta-heuristic algorithm and used the Taguchi method to tune it on a small example. Should I repeat these experiments for each problem, or is that enough? For my small example I can only create 38 neighbor solutions, but for my bigger problem I can create 77 neighbor solutions, and I think it matters how many neighbor solutions I can create and how many neighbor solutions I want to create.
PS: the only difference between the two problems is their size.
I am looking for an analytical, probabilistic, statistical, or any other way to compare the results of a number of different approaches implemented on the same test model. These approaches can be different optimization techniques applied to a similar problem, or different types of sensitivity analysis applied to a design. I am looking for a metric (generic or application-specific) that can be used to compare the results in a more constructive and structured way.
I would like to hear whether there is a technique that you use in your field, as I might be able to derive something for my problem.
Thank you very much.
I am an undergraduate majoring in public finance, and I am also enthusiastic about data science and analysis. I would like to do research with a researcher at my institution who works in the areas of optimization, simulation, and modelling. Here are some of her projects:
- Optimal production planning utilizing leftovers for an all-you-care-to-eat food service operation.
- Achieving Sustainability beyond Zero Waste: A Case Study from a College Football Stadium
My question is how to come up with or find a topic to study together, something both of us can enjoy. Thank you in advance. I really appreciate any help you can provide.
I want well-written and easy-to-understand study material on multi-objective optimization techniques. I searched Google but couldn't find a good one. Can anyone suggest good references?