Science topic

# Multiobjective Optimization - Science topic

Explore the latest questions and answers in Multiobjective Optimization, and find Multiobjective Optimization experts.
Questions related to Multiobjective Optimization
• asked a question related to Multiobjective Optimization
Question
The steepest descent method proposed by Fliege et al. motivated the research on descent methods for multiobjective optimization, which has received increasing attention in recent years. In the convex case, all Pareto critical points can be obtained by the weighted-scalarization method with cheap iterates. Is it therefore necessary to develop descent methods for convex multiobjective optimization problems?
I found that by proposing solutions in every iteration of your optimization algorithm that maximize the hypervolume between a reference point and the Pareto frontier, you can deal with both convex and concave multi-objective problems. You might want to look into this if you are specifically interested in convex problems.
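To experiment with this idea, here is a minimal sketch (mine, not from the thread) of the exact 2-D hypervolume for a minimization problem, computed by a sweep over the points that dominate the reference point:

```python
import numpy as np

def hypervolume_2d(points, ref):
    """Hypervolume dominated by a set of 2-D points (minimization)
    with respect to a reference point that the points dominate."""
    pts = np.asarray(points, dtype=float)
    # Keep only points that strictly dominate the reference point.
    pts = pts[(pts < ref).all(axis=1)]
    if len(pts) == 0:
        return 0.0
    # Sort by the first objective and sweep, adding one rectangle
    # per nondominated point; dominated points are skipped.
    pts = pts[np.argsort(pts[:, 0])]
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

# Example: a single point against reference (1, 1)
print(hypervolume_2d([[0.5, 0.5]], ref=(1.0, 1.0)))  # 0.25
```

Beyond two objectives, exact hypervolume grows expensive quickly, which is why WFG-style implementations and Monte Carlo approximations exist.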
• asked a question related to Multiobjective Optimization
Question
In his name is the judge
Hi
There is a fuzzy logic control system in Python. The system has 2 inputs and 18 outputs.
The inference is Mamdani and the membership functions are Gaussian.
To refine the controller's performance, I need to optimize the parameters of the membership functions of both the inputs and the output. For that I need multi-objective optimization.
In this problem we have 2 inputs and 1 output. I have defined 3 membership functions for each input and 3 for the output; since the membership functions are Gaussian, we have 18 parameters in total.
I defined my problem as a function in Python. But note that there is no clear analytical relationship between the function's inputs and outputs; it is simply a very complicated function with 2 inputs and 18 outputs.
I decided to use the NSGA-II algorithm and I really don't want to change the algorithm.
I have tried every way to optimize my function without success. Searching for a Python library that can do multi-objective optimization, I found pymoo to be the best option, but I have failed to optimize my complicated custom function with it.
So it would be my pleasure if you could introduce me to a Python library, or suggest a way to use pymoo, for this aim.
Wish you the best.
Take refuge in the right.
You can stick with a Mamdani fuzzy system if you are comfortable designing a "workable" FIS. You probably don't need optimization if your FIS works satisfactorily; to get from a "workable" FIS to a "satisfactory" one, you probably just need to make minor adjustments to the design parameters. That's the spirit of engineering design.
• asked a question related to Multiobjective Optimization
Question
May I have the MATLAB code of some well-known multi-objective benchmark functions like Schaffer, Fonseca, ZDT1, ZDT6, Srinivas, DTLZ5, DTLZ6, LZ09_F1, LZ09_F2, LZ09_F6, LZ09_F7, LZ09_F9, WFG4, CF1, CF2, CF4, CF5, CF6, CF9, and CF10?
See PlatEMO:
• asked a question related to Multiobjective Optimization
Question
The NFLT is valid only on a set of problems closed under permutation (c.u.p.), which never happens in practice.
And for a set of problems that is _not_ c.u.p., it can be proved that there exists a best algorithm.
Unfortunately, the proof is not constructive.
So it is worth trying to improve algorithms, and indeed we need to know how to compare them.
Here is what I usually do for two stochastic algorithms A1 and A2 on a given problem:
- run A1 100 times with a given search effort (usually a number of evaluations), and plot CDF_A1, the cumulative distribution function of the 100 final best results.
- do the same for A2 => CDF_A2
If, on the figure, CDF_A1 is completely "above" CDF_A2, then A1 can safely be said "better", for this function.
And vice versa, of course.
If the two curves cross at, say, a value r, the conclusion is not that clear unless you consider only best final values smaller than r. Then you have to be more precise, with something like: "If I accept only final results smaller than r, then _this_ algorithm is better".
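The run-and-compare recipe above can be sketched in Python; the two sample arrays below are stand-ins for the 100 final best values of A1 and A2 (minimization), not real algorithm output:

```python
import numpy as np

def ecdf(samples):
    """Empirical CDF: returns (sorted values, cumulative probabilities)."""
    x = np.sort(np.asarray(samples, dtype=float))
    p = np.arange(1, len(x) + 1) / len(x)
    return x, p

def fraction_below(samples, threshold):
    """P(final best result <= threshold), estimated from the runs."""
    s = np.asarray(samples, dtype=float)
    return float((s <= threshold).mean())

# Mock final best values from 100 runs of two stochastic algorithms:
rng = np.random.default_rng(0)
a1 = rng.normal(1.0, 0.2, 100)   # stand-in for algorithm A1
a2 = rng.normal(1.3, 0.2, 100)   # stand-in for algorithm A2

# Plot ecdf(a1) and ecdf(a2) on the same axes; if one curve lies
# entirely above the other, that algorithm is "safely better" here.
print(fraction_below(a1, 1.1), fraction_below(a2, 1.1))
```

When the curves cross at some r, `fraction_below(..., r)` makes the "accept only results smaller than r" comparison explicit.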
• asked a question related to Multiobjective Optimization
Question
In his name is the judge
Hi
I want to learn multi-objective optimization with NSGAII in python for my research.
Please recommend a good source for learning NSGAII in python.
Wish you the best.
Take refuge in the right.
• asked a question related to Multiobjective Optimization
Question
To solve multi-objective problems using the lexicographic method:
Can we use different algorithms for each step? For instance, when we have two objectives to be minimized, can we first use a genetic algorithm to optimize the first objective, then use a particle swarm optimization algorithm to solve the second objective while taking the genetic algorithm's result as a constraint?
You most certainly can. In fact, there may be an advantage in employing such a method, since we know that some algorithms are better suited to some problems than others (NFL theorem). Just be sure to properly select the correct metaheuristic for any given objective function. This may require some experimentation on your part, but whatever you do, don't let the algorithm selection process be randomized; a methodical approach is needed here.
Personally, I would advise using Pareto-based techniques instead of conventional non-Pareto-based approaches like lexicographic or weighted-sum optimization. Nonetheless, good luck!
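As an illustration of the two-stage lexicographic template (a sketch only: scipy's SLSQP stands in for both the GA and the PSO, and the toy objectives f1, f2 are mine):

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

# Hypothetical bi-objective: f1 = (x0-1)^2 + x1^2, f2 = x0^2 + (x1-1)^2
f1 = lambda x: (x[0] - 1) ** 2 + x[1] ** 2
f2 = lambda x: x[0] ** 2 + (x[1] - 1) ** 2

# Stage 1: optimize the first objective alone (a GA would go here).
s1 = minimize(f1, x0=[0.0, 0.0], method="SLSQP")

# Stage 2: optimize the second objective subject to keeping the first
# within a tolerance of its stage-1 optimum (a PSO with a penalty term
# would fit the same template).
con = NonlinearConstraint(f1, -np.inf, s1.fun + 1e-6)
s2 = minimize(f2, x0=s1.x, method="SLSQP", constraints=[con])
print(s2.x, f1(s2.x), f2(s2.x))
```

The key design point is that stage 2 only sees stage 1's optimum as a constraint bound, so the two stages can use entirely different solvers.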
• asked a question related to Multiobjective Optimization
Question
Dear Members
I recently went through a sample program code (in MATLAB) corresponding to the Pareto GA in Appendix II (Program 4, page 219) of the book Practical Genetic Algorithms by Haupt. However, I failed to understand the niching strategy employed in the program code. Kindly also explain the Pareto dominance tournaments used for mate selection in the example provided in the book. The flow diagram for the Pareto GA in Figure 5.2 of the book mentions an increase in the cost of closely located chromosomes, but the code apparently contains no such step. Kindly explain the anomaly.
If there is no layout available, nexttile creates one. Create a Pareto chart by passing the axes to the pareto function as the first argument. Call the nexttile function to add a second axes object to the layout. Then create a second Pareto chart.
• asked a question related to Multiobjective Optimization
Question
We have proposed an algorithm for multiobjective multimodal optimization problems and tested it on CEC 2019 benchmark dataset. We need to show the results on some real problem also. Kindly help.
Try implementing your methodology in structural optimization tasks. A quick literature search will get you up-to-date on all the details required.
Good Luck
• asked a question related to Multiobjective Optimization
Question
I am using an evolutionary GA-based optimisation tool that divides the space of possibilities into subsets, performs hypercube sampling within each subset, and thereafter generates multiple generations of results, the best of which form the Pareto front and, through each iteration, move closer to the global optimum. However, is it possible that through hypercube sampling (and hence disregarding some options) this tool might miss the true global optimum?
This is natural in mathematics: there is no perfect solution, especially for nonlinear problems, and most metaheuristics are applied to nonlinear problems. So you always need to expect and accept that fact, and keep improving the study of the metaheuristic.
• asked a question related to Multiobjective Optimization
Question
I am doing NSGA-II optimization. For this I have developed some equations through RSM in Minitab. The problem I am facing is setting the constraints: 4 variables, 3 objectives. I am getting optimized values that are beyond the experimental response domain. I think a proper constraint may solve this.
You did not specify the type of problem you are dealing with and why you have to resort to heuristics instead of exact optimization algorithms. That's why nobody will be able to give valuable advice.
Best regards
• asked a question related to Multiobjective Optimization
Question
I am coding a multi-objective genetic algorithm. It predicts the Pareto front accurately for multi-objective functions with convex Pareto fronts, but for non-convex Pareto fronts it is not accurate, and the predicted Pareto points cluster at the ends of the Pareto front obtained from the MATLAB genetic algorithm. Can anybody suggest some techniques to solve this problem? Thanks in advance.
The attached pdf file shows the results from different problems
• asked a question related to Multiobjective Optimization
Question
Hello RG Community,
As we have ZDT, DTLZ, WFG test functions for continuous multiobjective optimization problems, are there any test functions for mixed integer multiobjective optimization problems?
Thanks!
1. Burachik, R. S., Kaya, C. Y., & Rizvi, M. M. (2021). Algorithms for generating Pareto fronts of multi-objective integer and mixed-integer programming problems. Engineering Optimization, 1–13. doi:10.1080/0305215X.2021.1939695
2. Singh, S. K., & Goh, M. (2018). Multi-objective mixed integer programming and an application in a pharmaceutical supply chain. International Journal of Production Research. doi:10.1080/00207543.2018.1504172
3. Tušar, T., Brockhoff, D., & Hansen, N. (2019). Mixed-integer benchmark problems for single- and bi-objective optimization. Proceedings of the Genetic and Evolutionary Computation Conference (GECCO '19). doi:10.1145/3321707.3321868
• asked a question related to Multiobjective Optimization
Question
When we do multi-objective optimization using the MOGA algorithm in Ansys Workbench, the algorithm stops automatically after a few generations, saying that it has converged, while the objective function has still not converged and shows random behavior. The maximum allowable Pareto percentage is 70%. Has true convergence been achieved?
I have the same problem. I've already tried changing the optimization parameters (Pareto percentage, population stability, and others), but the objective function still does not converge.
• asked a question related to Multiobjective Optimization
Question
I have been trying to find a way to fit two functions simultaneously using nonlinear least squares (I have to find the optimum 3 variables, common for both models, that fits best both of them). I typically use Python's scipy.optimize.least_squares module for NLLS work, which uses the Levenberg–Marquardt algorithm.
I tried some specialised multi-objective optimization packages (like pymoo), but they don't seem suitable for my problem as they rely on evolutionary algorithms that output a set of solutions (I only need one optimum solution per variable) and they are made to work for conflicting objectives.
I also tried taking the sum of the norms of the residuals of the two functions (making it a single-objective problem) and minimizing that with various gradient-based and gradient-free algorithms from Python's scipy.optimize.minimize, but I found this norm becomes so huge (even with parameter bounds!) that I get an overflow error (34, result too large), crashing the program sooner or later. It didn't crash with the truncated Newton method, but the results produced were rubbish (and from running an optimization on this same data with a simpler model, I know they shouldn't be!)
I have to perform this fit for a few thousand data sets per experiment, so it has to be quite robust.
Surprisingly, I cannot find a way to do multiobjective NLLS (only for linear regression). I have found some papers on this, but I'm not a mathematician, so it's quite beyond my depth to understand them and apply them in Python.
Has anyone had a similar problem to solve?
Many thanks!
Two functions fitted simultaneously to one dataset? Do I understand correctly? What is the objective function?
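If the two models share the same parameter vector, one common approach is to stack both residual vectors into a single call to scipy.optimize.least_squares, so the solver fits both curves at once. A sketch with two hypothetical models (model1, model2 and the synthetic data are mine, not the asker's):

```python
import numpy as np
from scipy.optimize import least_squares

# Two hypothetical models sharing one parameter vector p = (a, b, c):
def model1(p, t):
    a, b, c = p
    return a * np.exp(-b * t) + c

def model2(p, t):
    a, b, c = p
    return a / (1.0 + b * t) - c

def residuals(p, t1, y1, t2, y2, w1=1.0, w2=1.0):
    # Stack both residual vectors; the weights let you balance
    # objectives that live on different scales.
    return np.concatenate([w1 * (model1(p, t1) - y1),
                           w2 * (model2(p, t2) - y2)])

# Synthetic, noise-free data from known parameters, to check recovery:
true_p = (2.0, 0.5, 0.3)
t = np.linspace(0.0, 5.0, 50)
y1, y2 = model1(true_p, t), model2(true_p, t)

fit = least_squares(residuals, x0=[1.5, 0.8, 0.1], args=(t, y1, t, y2))
print(fit.x)  # should recover roughly (2.0, 0.5, 0.3)
```

This keeps the problem a genuine least-squares problem (single optimum per parameter, no Pareto set), which matches the non-conflicting-objectives situation described above.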
• asked a question related to Multiobjective Optimization
Question
What is the best way to solve multi-objective optimization problems with distance-based approaches when there are no clear referent points (ideal and anti-ideal points) to compare with the current solution of the applied model (metaheuristic)?
I excluded the possibility of using lexicography and ε-constraint based methods because they seem to neglect the multi-criterial nature of the problem.
Dear Jose
As I see it, to do what you say, you need to get a physical experiment done, and that is impossible in most MCDM scenarios
• asked a question related to Multiobjective Optimization
Question
Hi,
I just want to make sure that I understand the mechanics of NSGA-II (the non-dominated sorting genetic algorithm) for a multiobjective optimization problem, since I am not satisfied with the resources I have. I would be grateful if anyone could recommend a good paper or source to read more about NSGA-II.
Here is what I got so far:
1- Start with a random population, call it P_0, of N individuals.
2- Generate an offspring population Q_0 of size N from P_0 by using binary tournament selection, crossover, and mutation.
3- Let R_0 = P_0 U Q_0.
While (itr <= maxitr) do
5- Identify the non-dominated fronts (F_1, ..., F_j) in R_0.
6- Create P_1 (of size N) as follows:
for i = 1:j
if |P_1| + |F_i| <= N
set P_1 = P_1 U F_i
else
add the least crowded N - |P_1| solutions from F_i to P_1
end
end
7- Set P_0 = P_1.
8- Generate an offspring population Q_0 from P_0 and set R_0 = Q_0 U P_0.
9- itr = itr + 1
end (do)
My question (assuming the previous algorithm is correct): how do I generate Q_0 from P_0 in step 8?
Do I choose any 2 solutions from P_0 at random and mate them, or is it better to select the parents according to some condition, for example that those with the highest rank should mate?
Also, if you can leave me some well-written papers on NSGA-II I would be grateful.
Thanks
I am a bit muddled about tournament selection.
Should the population size be the same as the offspring size?
If that is the case, why do we need a tournament selection strategy to form a mating pool? We might as well use the entire population directly.
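On how Q is generated from P: NSGA-II uses a crowded binary tournament, preferring lower nondomination rank and breaking ties by larger crowding distance, so offspring are not drawn uniformly at random. A minimal sketch (the names crowded_tournament and make_offspring are mine):

```python
import random

def crowded_tournament(pop, rank, crowd, rng=random):
    """Binary tournament: lower nondomination rank wins; ties are
    broken by larger crowding distance (the NSGA-II rule)."""
    i, j = rng.sample(range(len(pop)), 2)
    if rank[i] != rank[j]:
        return pop[i] if rank[i] < rank[j] else pop[j]
    return pop[i] if crowd[i] >= crowd[j] else pop[j]

def make_offspring(pop, rank, crowd, crossover, mutate, rng=random):
    """Build |pop| children: tournament -> crossover -> mutation."""
    children = []
    while len(children) < len(pop):
        p1 = crowded_tournament(pop, rank, crowd, rng)
        p2 = crowded_tournament(pop, rank, crowd, rng)
        children.append(mutate(crossover(p1, p2)))
    return children
```

This also answers the mating-pool question above: the tournament is what injects selection pressure (rank first, diversity second); simply mating the whole population would apply none.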
• asked a question related to Multiobjective Optimization
Question
Is there any recent literature on the epsilon-constraint handling method in multiobjective optimization? Please state the advantages of using this method.
The disadvantages of the epsilon-constraint method are the difficulty of choosing a suitable value for the epsilon vector, non-equally-spaced Pareto solutions, and the need for a large number of iterations, as with the weighting method.
• asked a question related to Multiobjective Optimization
Question
In most MOEA publications, researchers will use IGD or hyper-volume as their evaluation metrics. However, in real-world problems, both nadir points and ideal points are unknown. What's worse, in many cases, the domain knowledge is unavailable. Thus, in this case, how do I compare the performance between different algorithms? For example, how do I compare the performance of different MOEAs on a real-world TSP problem without any domain knowledge?
Iago Augusto Carvalho sorry for the late response. I saw this approach in some papers. I can't remember all of them, but I found at least one where they used this method:
• asked a question related to Multiobjective Optimization
Question
Hi,
I have a bi-objective minimization problem. The first objective is in seconds and the second has no unit (it counts the number of some product). How can I scale this objective function properly?
Can I use the monetary value of both of them? If yes, what is the monetary value of time?
• asked a question related to Multiobjective Optimization
Question
Hello,
What is the interpretation of the pareto front graph when using three-objectives such that, we would like to maximize the first objective while minimizing the second and the third objectives.
attached, an example of 3D Pareto-Front that I would like to interpret.
VM:Max
resource wastage:Min
PMs:Min
Thanks
Thank you, Mr. Rasel Ahmed, for your response. I agree with you. The best way to read a Pareto front is to examine the objectives' curves two at a time.
• asked a question related to Multiobjective Optimization
Question
R2 is in fact a good option for evaluating populations of solutions in many-objective optimization, since hypervolume (at least the WFG implementation) is really heavy to compute. I have tried a Quick Hypervolume implementation. However, I found R2 to be a better option when you don't have prior information about the Pareto front.
Do you know anything better than R2 for this purpose?
Based on our research, I have come to the conclusion that GD, IGD, and HVI are truly distinct performance indicators.
As HVI is known to be unsuitable for many-objective optimization, I recommend the IGD indicator for many-objective optimization.
For further details, you can check the article:
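For reference, IGD itself is simple to compute once a reference front is fixed; a minimal sketch (minimization, Euclidean distance, names are mine):

```python
import numpy as np

def igd(reference_front, obtained_front):
    """Inverted Generational Distance: mean Euclidean distance from
    each reference point to its nearest obtained point (lower is better)."""
    R = np.asarray(reference_front, dtype=float)
    A = np.asarray(obtained_front, dtype=float)
    # Pairwise distance matrix of shape |R| x |A|, then row-wise minima.
    d = np.linalg.norm(R[:, None, :] - A[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

# A perfect match gives 0:
front = [[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]]
print(igd(front, front))  # 0.0
```

Note that IGD still needs a reference front; when the true front is unknown, a common workaround is to use the nondominated union of all algorithms' results as the reference set.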
• asked a question related to Multiobjective Optimization
Question
I have seen many researchers using these three algorithms for solving several different benchmarks. Sometimes I also see SPEA2-SDE as an option. These four algorithms are available in jMetal, which makes comparison really easy to perform.
Do you know any better ones? Are they available in Java? Do the implementations use jMetal?
I suggest reading my papers. For example, the code of MOSEO (Multi-objective Social Engineering Optimizer) is one of the best and outperforms the aforementioned metaheuristics. See the code:
I also suggest reading my other papers, as I have applied several multi-objective optimization algorithms in my studies.
Regards,
Amir
• asked a question related to Multiobjective Optimization
Question
For a multi-objective optimization problem, is it possible to apply the concept of the S/N ratio to individual outputs obtained through RSM or a full factorial design of experiments? Also, can a full factorial design of experiments match a Taguchi orthogonal array, specifically for a 2-factor, 3-level design problem, where the full factorial gives 9 experiments?
Thank you....
No.
• asked a question related to Multiobjective Optimization
Question
Hello,
I have some more questions about multi-objective optimization using the exhaustive search (brute-force) method; help would be much appreciated. An answer to one or all of the questions is very welcome.
I have 3 different multi-objective optimization problems with 3 discrete variables.
Optimization Problem 1) 2 objective functions
Optimization Problem 2) 2 objective functions + 1 constraint
Optimization Problem 3)  3 objective functions
The ranges for the variables are 15~286, 2~15, 2~6, respectively.
I have been told that the search space is small enough that exhaustive search is my best bet, and that it is easy to implement, so I wanted to try it.
My questions are
1) Is it possible to apply the exhaustive search method to all three optimization problems?
2) How would I go about doing this using a computer software?
I was thinking that for
Optimization Problem 1 and 3
I would find all the function values first and then move on to finding and plotting the Pareto fronts. Is this the right approach?
Also, is there any example code I could follow (especially for optimization with three objectives)?
For Optimization Problem 2 with a constraint
How would I incorporate this constraint?
Should I just make the program give me the solution values that do not violate the constraint (ignoring the solutions that do) and then use them to plot the Pareto front?
Is there example codes/programs for doing this?
I would in particular find any Matlab and R codes helpful.
If there is a specific program/software or what not that does this, I would be very grateful as well.
Thank you to everyone in advance!
I don't think brute-force or exhaustive search is convenient for solving a complicated problem such as multi-objective optimisation. You can use brute-force search on a small scale to examine the design variables and explore the search domain.
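If one does go the exhaustive route, the grid in the question (roughly 19,000 points) is small enough to enumerate. A sketch (the toy objectives and the constraint callback are mine); a constraint is handled by simply discarding infeasible points before the Pareto filter:

```python
import itertools

def dominates(a, b):
    """True if objective vector a dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_filter(evaluated):
    """Keep only nondominated (x, f(x)) pairs from an iterable."""
    front = []
    for x, fx in evaluated:
        if any(dominates(gy, fx) for _, gy in front):
            continue                      # fx is dominated: skip it
        front = [(y, gy) for y, gy in front if not dominates(fx, gy)]
        front.append((x, fx))
    return front

def exhaustive_pareto(objectives, ranges, feasible=lambda x: True):
    """Enumerate the whole discrete grid, drop infeasible points
    (this is how a constraint is handled), then Pareto-filter."""
    evaluated = ((x, tuple(f(x) for f in objectives))
                 for x in itertools.product(*ranges)
                 if feasible(x))
    return pareto_filter(evaluated)

# Toy 2-objective example; the ranges in the question (15-286, 2-15,
# 2-6) are enumerable the same way, with 2 or 3 objective functions.
f1 = lambda x: x[0] ** 2 + x[1] ** 2
f2 = lambda x: (x[0] - 4) ** 2 + (x[1] - 4) ** 2
front = exhaustive_pareto([f1, f2], (range(5), range(5)))
```

The same call works for three objectives (pass three functions) and for the constrained problem (pass a `feasible` callback); plotting is then just a scatter of the stored objective tuples.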
• asked a question related to Multiobjective Optimization
Question
Which performance indicators (PIs) are recommended for an optimization model with many (>= 4) conflicting objectives?
How can they be used if the true Pareto front is not known in practice?
Is there any methodology for generating the TRUE PF?
@Miha, thank you.
• asked a question related to Multiobjective Optimization
Question
I am researching on the optimization of road alignment between two given points using NSGA-II with two objectives: 1.cost, 2.safety
I implemented the optimization on a case study and got reasonable results. However, I couldn't find any similar research with the same criteria against which to compare and validate my results. It is actually normal to have different results, since the specified criteria are different.
Is it sufficient to have reasonable results, or do I have to validate them anyway? If so, do you know any method or technique?
There are different ways to check the correctness and accuracy of the implemented code. The following approaches to check the optimization can be addressed briefly as follows:
A) Comparison with classical approaches: In some problems, it can be possible to run both the classical approach and multi-variate GA to make sure the results are correct and accurate. However, in most cases, because of mathematical complexity using a classical approach for comparison is not possible.
B) Using well-known benchmarks: The correctness of the code can also be proven through existing benchmarks. The algorithm used can be validated via well-known benchmarks [1-5].
C) Using Expert Idea: An expert of the system can check the results and their rationality. They should all agree that the final results are logical.
D) Convergence: The convergence of the implemented multi-objective genetic algorithm can be easily observed through the plot of cost vs. iterations. There are also other factors such as roundoff errors and local extrema that have the same perspective.
E) Domination factor: The NSGA-II optimization algorithm and other recent versions include a function to separate non-dominated results from dominated ones. So all final results should be non-dominated, and with a proper initial population it is very rare to be trapped in a local minimum. For more information regarding domination, please check [6].
[1] Yao, X., Liu, Y., & Lin, G. (1999). Evolutionary programming made faster. IEEE Transactions on Evolutionary Computation, 3(2), 82-102.
[2] Digalakis, J. G., & Margaritis, K. G. (2001). On benchmarking functions for genetic algorithms. International journal of computer mathematics, 77(4), 481-506.
[3] Molga, M., & Smutnicki, C. (2005). Test functions for optimization needs. Test functions for optimization needs, 101.
[4] Yang, X. S. (2010). Test problems in optimization. arXiv preprint arXiv:1008.0549.
[5] Mirjalili, S., & Lewis, A. (2013). S-shaped versus V-shaped transfer functions for binary particle swarm optimization. Swarm and Evolutionary Computation, 9, 1-14.
[6] Elarbi, M., Bechikh, S., Gupta, A., Said, L. B., & Ong, Y. S. (2017). A new decomposition-based NSGA-II for many-objective optimization. IEEE transactions on systems, man, and cybernetics: systems, 48(7), 1191-1210.
• asked a question related to Multiobjective Optimization
Question
Dear altruists,
I used a GA with the objective function F_obj = w1*f1 + w2*f2, where w1 = 0 to 1 and w2 = 1 - w1.
I vary w1 from 0 to 1 and get the corresponding F_obj.
To plot the Pareto front, I put the values w1*f1 on the X axis and w2*f2 on the Y axis, with the corresponding F_obj.
Is this a correct way? I am not getting the expected figure; I have attached the result.
May I ask for help with plotting the Pareto front?
Have you incorporated your problem into pre-existing GA code?
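A note on the plotting itself: with a weighted-sum sweep, the usual practice is to record the raw objective values (f1, f2) at each optimum and plot those, not w1*f1 versus w2*f2. A sketch with a toy convex problem (scipy's Nelder-Mead stands in for the GA; the objectives are mine):

```python
import numpy as np
from scipy.optimize import minimize

# Toy convex bi-objective (a stand-in for the asker's GA problem):
f1 = lambda x: (x[0] - 1) ** 2 + x[1] ** 2
f2 = lambda x: x[0] ** 2 + (x[1] - 1) ** 2

pareto = []
for w1 in np.linspace(0.0, 1.0, 21):
    w2 = 1.0 - w1
    res = minimize(lambda x: w1 * f1(x) + w2 * f2(x),
                   x0=[0.5, 0.5], method="Nelder-Mead")
    # Record the raw objective values at the optimum -- these (f1, f2)
    # pairs are what gets plotted as the Pareto front.
    pareto.append((f1(res.x), f2(res.x)))
```

Plotting w1*f1 against w2*f2 distorts the axes by a different factor at each weight, which is one likely reason the attached figure does not look like a Pareto front.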
• asked a question related to Multiobjective Optimization
Question
I have rewritten an MPC problem as a QP formulation.
The QP solvers quadprog and Gurobi, which use interior-point algorithms, give me the same objective function value and optimized values x, but GPAD, a first-order solver, gives me the same optimized values x with an objective function value a factor of 10 bigger than quadprog's and Gurobi's. Does anyone know a possible explanation?
Good!
• asked a question related to Multiobjective Optimization
Question
I am trying to solve a QP problem.
Does anybody know the differences between the interior-point-convex algorithm of quadprog and the barrier method of Gurobi in terms of which kinds of matrices the solvers handle best (sparse or dense) and which kinds of problems (large or small)? Furthermore, which solver needs more iterations at a lower cost per iteration, and which needs fewer but more expensive iterations?
From my results, I think Gurobi takes more iterations and quadprog fewer, but I do not know why.
And how does GPAD differ in the respects described above?
Dear Mohamed,
quadprog and Gurobi give me the same objective function value and optimized values x, but GPAD gives me the same optimized values x with an objective function value a factor of 10 bigger than quadprog's and Gurobi's. How is this possible?
• asked a question related to Multiobjective Optimization
Question
Based on your expertise, what is the better optimizer tool between GAMS ( https://www.gams.com/ ) and Gurobi ( https://www.gurobi.com/ )?
GAMS itself is no solver, it is a modeling system with interfaces to a number of solvers - Gurobi being one of them.
So there is no point in trying to compare GAMS with any solver.
Gurobi is a solver for linear and (convex) quadratic mixed-integer problems.
GAMS has interfaces to solvers for different problem classes, too (like more general nonlinear optimization problems).
But if your problem is a MILP, you might compare the modeling via GAMS with the modeling via the Gurobi Python API.
I have worked with both (and with other modeling systems like AMPL, OPL and other solvers, especially CPLEX) in research on discrete stochastic optimization. The choice of the modeling interface depends simply on your preferences and the tasks to be performed (like modifying the problem according to the solution and solving a series of problems). If you are used to the GAMS language you might prefer it to others. GAMS and AMPL are not free software and you have to pay for many of the solvers for them, too.
Best regards
Ralf
• asked a question related to Multiobjective Optimization
Question
Does anybody have experience visualizing the Pareto-optimal front of a bi-objective optimization with an R package such as ggplot2, smoof, or something else?
I've been using the emoa package. You can get the nondominated points then use the results in the plot function directly.
• asked a question related to Multiobjective Optimization
Question
Could anyone suggest how to obtain the ideal Pareto front distribution of the DTLZ3 test problem? I went through some research papers, but it is still not clear to me.
Ideal Cone method is an appropriate way to find complete Pareto front.
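For DTLZ3 specifically, the ideal Pareto front coincides with DTLZ2's: the portion of the unit sphere sum_i f_i^2 = 1 with all f_i >= 0, reached when the distance function g is zero. It can be sampled directly (a sketch; the function name is mine):

```python
import numpy as np

def dtlz3_front_samples(n_points, n_obj=3, seed=0):
    """Sample the ideal Pareto front of DTLZ3 (same as DTLZ2's):
    the nonnegative part of the unit sphere sum_i f_i^2 = 1."""
    rng = np.random.default_rng(seed)
    # |Gaussian| directions, normalized, give uniform points on the
    # positive orthant of the sphere.
    f = np.abs(rng.normal(size=(n_points, n_obj)))
    return f / np.linalg.norm(f, axis=1, keepdims=True)

F = dtlz3_front_samples(1000)   # 1000 points on the 3-objective front
```

Such a sampled set is exactly what IGD-style indicators use as the reference front for DTLZ2/DTLZ3.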
• asked a question related to Multiobjective Optimization
Question
I would like to know whether there is a direct link between the computational effort of a multi-objective problem and how heterogeneous its objective functions are. For example: 1) one objective is cost and a second is some kind of security index with range [0, 1]; 2) one objective is cost and we convert the second into some kind of security-related cost (or we normalize the first so that both objective functions have some homogeneity). Is there any research evidence on whether such strategies can improve the computational time?
Sub-question: is there a difference between heuristic and non-heuristic algorithms in this regard?
Thank you!
Of course, there is a marked difference between the one-objective, two-objective, and three-objective problems, and so on, in general. And then, there is an additional complication associated with how heterogeneous the objectives are. To cut it short - it's a mess. :-)
• asked a question related to Multiobjective Optimization
Question
Dear community,
I would like to request some references related to results on parameter-dependent Pareto Front, if there are any. I am interested in studying the behavior of the Pareto Front with respect to an external parameter, for multi-objective problems.
Thanks for any recommendation!
Best,
Nathalie
See the book:
It contains many useful examples.
• asked a question related to Multiobjective Optimization
Question
I would like to test the performance of a modified algorithm developed to solve a real-world problem with these characteristics: (1) discrete, (2) multi-objective, (3) black-box, (4) large-scale.
How can we do this? And if there are no such test problems, is it sufficient to show its performance on the real-world problem alone (where the true Pareto front is unknown)?
Best regards,
I suggest searching the MathWorks homepage, specifically the File Exchange.
• asked a question related to Multiobjective Optimization
Question
The attached picture is the result of a bi-objective optimization problem. Genetic algorithm was used. The "missed-out" band (i.e. approximately 41 to 43.5 in the vertical axis) is within the range of the respective objective function (in other words, there are values of design variables which result in values between 41 and 43.5 of the objective function).
The question is, is there any explanation for this discontinuity (physical or mathematical)? Or should I see this as a fault in my solution procedure?
It is worthy to add that I've carried out the procedure several times and with different optimization parameters (population, mutation fractions, etc), but the discontinuity seems to be always there...
The discontinuity looks like it arises from a concave section of the objective front, leading to regions of the objective front that are dominated. Concavities in the objective front are quite common in many real engineering problems, leading to discontinuities in the Pareto front. The other possible reason is that the objective front forms two distinct feasible regions with a gap in between; from the way the top section is curving just before the break, two separated regions could be the cause.
For clarity: the objective front is the outer boundary of the objective-space region that contains feasible solutions, while the Pareto front is the subset of the objective front with non-dominated solutions.
There has been work over the years on developing optimisers that locate the objective front, so that the reason for Pareto discontinuities can be explored more easily, but designing algorithms to find the objective front is really not easy (especially for problems with more than two objectives).
Evan
• asked a question related to Multiobjective Optimization
Question
How can we validate our mathematical model (a multi-objective model)? Are there any techniques?
Dear David:
You could replace the multiobjective function in your model with a penalty function for the non-fulfillment of constraints, then generate a very large population of solutions, calculating for each solution every one of your real optimization criteria. In this way you will obtain probabilistic distributions for each criterion, and you can compare the results of your optimization problem with the best results of your auxiliary model (the one using the penalty function). Does that make sense?
• asked a question related to Multiobjective Optimization
Question
For instance, if I formulate mathematical equations to solve a multi-objective optimization problem (one value should decrease while the other increases at the same time), which algorithm is best to deal with it?
Exact methods: those of mathematical programming, depending on the nature of the equations and variables, e.g. Simplex, Branch & Bound, non-linear methods, etc.
The multiobjective problem is transformed into a single-objective problem using various methods; the most widely used are the weighted-sum method and the epsilon-constraint method. I recommend you read:
Ehrgott, M. (2005). Multicriteria optimization (Vol. 491). Springer Science & Business Media.
• asked a question related to Multiobjective Optimization
Question
Hello.
Is there a fully functional NSGA-III implementation?
There are implementations in Java and C++; jMetal bases its implementation on http://web.ntnu.edu.tw/~tcchiang/publications/nsga3cpp/nsga3cpp-validation.htm, whose author says the algorithm does not scale well beyond 8 objectives.
I'm looking to use NSGA-III on many-objective problems (which it is designed to handle), e.g. 15 objectives.
Thank you
I am one of Dr. Deb's PhD students. Our Python framework contains working, benchmarked NSGA-II and NSGA-III implementations.
Please have a look if you are still interested.
• asked a question related to Multiobjective Optimization
Question
For a multi-objective problem with uncertainty in demand, consider the scenario tree (attached herewith) for a finite planning horizon consisting of three time periods. It's a two objective minimization problem in which the augmented e-constraint method is utilized to obtain Pareto optimal solutions (POS).
In time period T1, only the mean demand is considered. Then in T2, demand follows a certain growth rate, with an expected probability of growth for each scenario. A similar pattern is used for T3.
The deterministic counterpart envisaged for the problem is a set of time periods with a specific growth-rate pattern for mean demand - say 15% in T1, 10% in T2, and 10% in T3.
I want to draw out a comparison of the POS obtained from the stochastic and deterministic analysis. What is the best way to proceed in order to give the decision maker a whole picture of the POS with the scenario and time period considered in both type of analyses?
Do I obtain POS sets for all 13 scenarios from T1 to T3, or just the 9 scenarios in T3? That would mean 13 or 9 Pareto fronts for the stochastic analysis alone - in other words, a Pareto front of POS for each time period and scenario! And how do I compare whatever I obtain from the stochastic analysis with the deterministic one?
Once again, the aim is to analyze the stochastic analysis and draw out a comparison of the POS obtained from the stochastic and deterministic analysis for the time periods and scenarios considered.
Comments on the aforementioned approach and recommendations for alternatives are appreciated.
The problem seems to be small. Under such conditions, classical approaches are appropriate.
• asked a question related to Multiobjective Optimization
Question
Hi all,
I would like to know the difference between MOGA and NSGA-II; some of the literature classifies them as two separate methods, while other papers classify NSGA-II as a MOGA-type method.
Thank you.
Best Regards,
Rashiga
The MOEA approximated the set of efficient thinning regimes (with a discontinuous Pareto front) by employing a ranking scheme developed by Fonseca and Fleming (1993), a Pareto-based ranking (a.k.a. Multiobjective Genetic Algorithm, MOGA). In this paper we solve the same problem using NSGA-II, an improved version of the fitness-sharing Pareto ranking algorithm (the Nondominated Sorting Genetic Algorithm, NSGA) originally developed by Srinivas and Deb (1994), and examine the results. Our findings indicate that NSGA-II approximates the entire Pareto front, whereas MOGA only determines a subdomain of the Pareto points.
• asked a question related to Multiobjective Optimization
Question
I have a multi-objective optimization problem whose analysis is sought using weighted sum approach. The objectives are: minimizing transportation costs and minimizing transportation risks.
Consider i sources which supply Ai and j destinations which demand Bj quantities of a product. Cij and Rij represent the costs and risks of transportation. Qij is the unit of products to be shipped from i to j. I am trying to find a set of Pareto-optimal solutions that gives an overview of the trade-off between the two objectives. MPL for Windows 4.2 is used as the modelling language with CPLEX 10.0 as the solver.
Here's a simplified mathematical formulation of the problem:
Min z1 = Σi Σj Cij Qij
Min z2 = Σi Σj Rij Qij
Subject to:
Σj Qij <= Ai   for all i
Σi Qij >= Bj   for all j
Qij >= 0       for all i, j
How do I mathematically treat one objective as a constraint while the other remains the objective function? The goal is to develop a Pareto front of non-dominated solutions.
Comments, feedback and alternative recommendations are appreciated.
If you have only two criteria, then the trade-off curve approach is fairly practical. You simply construct a curve, the so-called efficient frontier, that shows how you can trade off one criterion for the other. You want both the risk and the cost to be low. Each point on the risk-cost curve is Pareto optimal, i.e. for any point on the curve there is no other point with both lower cost and lower risk.
You can start with some total cost limit C, and solve the single-criterion risk problem for that cost limit. Then change the cost limit to various values, solve the single-criterion risk problem again for each new cost-limit constraint, and plot the cost-risk curve. You will get a trade-off curve, or efficient frontier. From that curve you then pick the cost for an acceptable level of risk, or the risk that gives you the best possible cost.
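The sweep just described is the epsilon-constraint treatment the question asks about: minimise risk subject to a cost cap, then vary the cap. A minimal sketch with a tiny hypothetical instance (2 sources, 2 destinations; all numbers invented), using scipy's `linprog` only as a stand-in for MPL/CPLEX to show the mechanics:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 2x2 instance: supplies A, demands B, unit costs C, unit risks R.
A = [10, 10]
B = [8, 8]
C = np.array([[1, 2], [3, 1]], float)
R = np.array([[2, 1], [1, 3]], float)

# Flatten Q_ij into x = [Q11, Q12, Q21, Q22].
c_cost, c_risk = C.ravel(), R.ravel()

# Supply rows: Q_i1 + Q_i2 <= A_i ; demand rows: -(Q_1j + Q_2j) <= -B_j.
A_ub = np.array([[ 1,  1,  0,  0],
                 [ 0,  0,  1,  1],
                 [-1,  0, -1,  0],
                 [ 0, -1,  0, -1]], float)
b_ub = np.array([A[0], A[1], -B[0], -B[1]], float)

def min_risk_given_cost(eps):
    """Epsilon-constraint step: min z2 = R.Q  s.t.  z1 = C.Q <= eps."""
    A_eps = np.vstack([A_ub, c_cost])
    b_eps = np.append(b_ub, eps)
    res = linprog(c_risk, A_ub=A_eps, b_ub=b_eps, bounds=(0, None),
                  method="highs")
    return res.fun if res.success else None

# Sweep the cost limit to trace the (cost limit, minimal risk) trade-off curve.
front = [(eps, min_risk_given_cost(eps)) for eps in np.linspace(16, 40, 7)]
```

Each pair in `front` is one non-dominated (cost, risk) point; tightening the cost cap forces the risk up, loosening it lets the risk fall toward its unconstrained minimum.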
• asked a question related to Multiobjective Optimization
Question
What are the requirements for carrying out multi-objective optimization? Is it still possible if all the criteria in my problem are non-beneficial?
The simplest way to deal with it is to flip the signs of the criteria values. Example: a given solution scores (4, 5, 6) on three objectives, with the third a non-beneficial one (i.e. a cost). Just change it to (4, 5, -6).
• asked a question related to Multiobjective Optimization
Question
What is the best method to solve multiobjective optimization problems - weighted-sum, bounded objective, goal programming, etc.?
There are well over 50-60 algorithms. It looks like everyone changes a few things in an existing algorithm and comes up with a new one, and of course everyone promotes their own. I am checking papers that compare the performance of these algorithms, yet every paper chooses a different set of algorithms to compare; at least a few algorithms appear more often in these comparison papers. I just need multiobjective optimization for an FEM-updating problem that I am working on. From what I have seen, the multiobjective optimization research community needs to sit down and do more surveys on which algorithms are most suitable for which kinds of problems. In a few weeks I can tell you which algorithms I will choose and why; I am working on it too.
• asked a question related to Multiobjective Optimization
Question
I have a question regarding the Hypervolume indicator in multi-objective optimization:
As far as I know, the hypervolume is the volume of the trade-off space covered by the Pareto front, i.e., finding the best possible front implies maximizing the hypervolume.
Does this definition imply that we want to minimize our multiple objectives? What happens to the sign of the Hypervolume when we are maximizing one or more of the objectives? In other words, are we always trying to maximize the Hypervolume, or are there cases where we want to minimize?
Dear Thomas,
If any objective is of maximization type, convert it into minimization type by multiplying it by -1 and then optimize the problem as usual. This does not conflict with the aim of maximizing the hypervolume.
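For two objectives, the hypervolume and the sign-flip trick just described can be illustrated with a short self-contained sketch (the fronts and reference points below are invented for the example):

```python
def hypervolume_2d(points, ref):
    """Exact 2-D hypervolume (both objectives minimised): the area of the
    region dominated by the point set and bounded by the reference point
    ref. Larger is better."""
    # Keep only points strictly better than the reference in both objectives.
    pts = sorted(p for p in points if p[0] < ref[0] and p[1] < ref[1])
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:              # sweep left to right over f1
        if y < prev_y:            # each new best-f2 adds a rectangular slab
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv

front = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
hv = hypervolume_2d(front, ref=(1.1, 1.1))          # 0.46

# A maximisation objective is handled by negating its values (and the
# matching reference coordinate):
front_max = [(0.0, 0.0), (0.5, 0.5), (1.0, 1.0)]    # f1 minimised, f2 maximised
hv_max = hypervolume_2d([(x, -y) for x, y in front_max], ref=(1.1, 0.1))
```

The sign of the hypervolume never changes: the indicator is always maximised, whatever mix of minimised and maximised objectives the original problem has.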
• asked a question related to Multiobjective Optimization
Question
In reference point-based multi-objective optimization, the reference points are determined by DM.
What are the methods that can be used to calculate or determine reference points for a multi-objective optimization problem?
Dear Mohamed-Mourad Lafifi , thank you for your answer. The references provide information about methods that work from a reference point, and about sampling reference points for use with performance indicators in the evaluation stage. However, there is no information about how the DM determines the reference points for a particular problem, to be used with reference-point-based (or preference-based) optimization algorithms.
• asked a question related to Multiobjective Optimization
Question
I want to get optimized values using Optimization Trials in HEC-HMS, but it does not work: the optimized values are the same as the initial values, and the objective function is zero. Please give me some advice on how to make it work.
Curve number values can never be above 100. That may be the cause.
• asked a question related to Multiobjective Optimization
Question
- Using mixed-integer linear programming.
- I need to determine the optimal timing to invest (maximizing net present value) in district heating power plants while at the same time minimizing carbon emissions.
- Main constraint: coverage of a given heat demand.
- Investment decision through mixed-integer linear programming.
- Investment optimization as an extension of unit commitment (schedule optimization).
- Deterministic approach.
Thanks a lot Mr. Temitayo Bankole !
• asked a question related to Multiobjective Optimization
Question
I want to calculate the inverted generational distance (IGD) to evaluate the performance of a multi-objective optimization algorithm. I have the approximate Pareto fronts, but I could not find the true Pareto fronts for structural engineering problems such as the welded beam, spring, and gear design problems. Does anyone have the data of the true PF for them?
You presumably use some heuristic to find an approximate Pareto front; you will not be able to find the true Pareto front exactly. To get closer to it, assuming you generate a population of solutions, you should simulate the real behaviour of your process, recalculate the multiple objective function values for each solution in the population, and reorder the solutions found. You will not reach the Pareto front exactly, but you will get close enough and obtain very good solutions near it.
• asked a question related to Multiobjective Optimization
Question
I am working on a project in which I need to optimize the scheduling of heat generating power plants.
I want to maximize net present value and at the same time minimize carbon emissions.
Mixed-integer linear programming will be used throughout the modelling.
I think you can use a global algorithm that observes all of the equations.
Well, first you should find an equation or a function that gives the values of the two variables;
then you can use a GA or a PSO algorithm.
• asked a question related to Multiobjective Optimization
Question
I'm using the Optimization Toolbox in Matlab to solve a multi-objective optimization problem with linear and nonlinear constraints. After running the optimization, I got a Pareto front (see the file attached to this message); increasing the population size gives me the same result.
What do you think?
My graph also came out the same way as Achour Hadjar's. Do we have to add constraints in the genetic algorithm (gamultiobj in MATLAB)? If the non-linear constraints are ignored, the Pareto front is plotted correctly, but on adding constraints it shows only one point. What might be the problem?
• asked a question related to Multiobjective Optimization
Question
In multiobjective optimization, some algorithms are designed to obtain a set of solutions that approximates the whole Pareto front (PF), while others are designed to approximate a partial, preferred PF; the latter are known as preference-based algorithms.
Performance indicators such as HV and IGD are widely used to compare their performance. However, IGD is designed to measure the distance between the whole true PF and the approximate PF, rather than a partial PF, and is therefore not suitable for evaluating preference-based algorithms.
Some studies have proposed performance indicators for evaluating preference-based algorithms, but these indicators cannot be used with non-preference-based algorithms.
In this case, how can one compare the performance of multiobjective optimization algorithms that use different approaches (non-preference-based and preference-based)?
"The IGD measure is calculated as the average distance from each reference point to the nearest solution in the solution set, which can be viewed as an approximate distance from the Pareto front to the solution set in the objective space."
Would you please elaborate what do you mean by replace mean function with another one. Thank you
• asked a question related to Multiobjective Optimization
Question
The figures below are the result of R-NSGA-II algorithm in solving DTLZ2 problem with 3 objectives. The green rhomboids represent the reference points, (0.2 0.5 0.6) and (0.7 0.8 0.5).
the result in Figure 1 looks good. However, I do not know what is wrong with the Inverted generational distance (IGD) curve.
To calculate the IGD values I have used this Matlab code.
maxGen = 200;
for gen =1:maxGen
Distance = min(pdist2(PF,popObj),[],2);
IGD_values(gen) = mean(Distance);
end
I think you cannot use the original IGD to evaluate the performance of a preference-based EMO algorithm. We have developed systematic way for this purpose. Please find more details from the following reference:
K. Li, K. Deb, X. Yao, “R-Metric: Evaluating the Performance of Preference-Based Evolutionary Multi-Objective Optimization Using Reference Points”, IEEE Trans. Evolutionary Computation (TEVC), in press.
• asked a question related to Multiobjective Optimization
Question
Hi Everyone,
I am applying constrained NSGA-2 multi objective optimization method. In my problem, there are 2108 variables and there are three objective functions which need to be minimized. Can you please tell me what is the suitable population size and number of generations for applying constrained NSGA-2?
Thanks,
Karan
You should test your algorithm on different and independent runs. Parameters such as population size, number of generations, mutation probability start with lower values. Crossover starts from higher values.
Example:
- population size 50, 100, 200, ...
- number of generations 100, 200, 500, 1000, ...
- mutation probability 0.1, 0.2, ..., 1.0
- crossover rate 0.9, 0.8, ...
• asked a question related to Multiobjective Optimization
Question
In multiobjective optimization, what does the distance exactly means, is it:
1) The distance from reference point (V) to an individual (Xi) (candidate solution) in the population (decision space).
Euclidean Distance = d(Xi,V)
OR
2) The distance from reference point (V) to the objective vector f(X) in the objective space, where f(Xi) = f(f1(Xi),...fm(Xi)). m is the number of objectives.
Euclidean Distance = d(f(Xi),V)
• asked a question related to Multiobjective Optimization
Question
I have two objective functions. i want to minimize it using GA in MATLAB. One objective function is more important than other.
One way is to use weighted objective function and use different weights for finding the solution.
Other method is to use multiobjective GA function in matlab and find the solution from Pareto graph.
Among these two method, which one is advantageous?
Hi, if you do a weighted sum of the objectives, then you will only find a single solution to the optimisation problem for each run of the optimiser; a multi-objective GA will return many results that all are (hopefully) on the Pareto front. The Pareto front is the trade-off surface of the optimisation problem, so if one objective is more important than the other, the solutions that lie more to one end of the Pareto front will be the ones that interest you.
For many optimisation problems, the weighted sum approach is fine, however there are limitations; in particular if the shape of the Pareto front is concave (i.e. bends backwards on itself). Solutions in the concavity region are still optimal choices, but there are no combinations of weights for the weighted sum that will allow you to find them. Therefore if your ideal solution lies within a Pareto concavity, the weighted sum method will never find it, and the optimiser will report a result that is from the closest non-concave Pareto section it can reach.
If you have a situation where with one set of weights you find a solution, and then changing the weights slightly keeps giving you the same solution, up to a point when the optimiser output suddenly jumps to another solution; you have a concavity in that situation and should not be using a weighted sum.
Sometimes a good approach is to use a quick run (i.e. small population or a few generations only) of a multi-objective optimiser to give you an approximate Pareto front, allowing you to see the rough shape of the front (i.e. seeing if there is a concavity in the front shape), and also then allowing you to gauge what weightings may work to give you the solutions you need. You can then do a much more detailed run with a weighted sum optimiser (not necessarily even a GA if the problem allows it) to get a high precision single optimum.
• asked a question related to Multiobjective Optimization
Question
Hello everyone,
i'm applying gamultiobj (Multiobjective optimization using genetic algorithm) from optimization tool of matlab
* how do i export the results frome and to command window?
** is it possible to apply gamultiobj for fitness function who has 3 objective functions?
plz it's urgent .. i need answers ^^
• asked a question related to Multiobjective Optimization
Question
In the multi-objective optimization problems we often say that the objective functions are conflicting in nature. In what sense the objective functions are said to be conflicting with each other? Also, how it could be proved numerically that the objective functions in our multi-objective problem are conflicting in nature?
If you are, for example, designing automobile you might have many objectives. E.g. you want for automobile to be big (spacious) from inside, but also to have smaller dimensions from the outside so it is easier to park the automobile. Obviously inside size and outside size are very dependent in a way that you might can not considerably increase inside space without increasing outer size. So there you have conflicting objectives.
So if improvement in one objective lead to worsening of the other, then this two objectives are conflicted.
• asked a question related to Multiobjective Optimization
Question
Recently I've been studying dCOEA (https://ieeexplore.ieee.org/abstract/document/4553723/) and trying to implement it for a project. However I got stuck in the EA process. While the parent individuals of each subpopulation have already been evaluated in the cooperative/competitive process and their respective Pareto rank and niche count calculated, after the crossover and mutation operations on each subpopulation, how are the child individuals evaluated? Since an individual needs to be combined with representatives from other subpopulations to form a valid solution that can be evaluated. I've been thinking on simply combining the child individuals with those representatives to evaluate them. However, I don't know if that is the correct form to perform this process.
Also, since there is a possibility that the individual defined as the current subpopulation's representative might not be selected from the mating individual pool for the next generation, how is the new representative defined? I've been thinking on using a random selection but that will definitely affect the quality of the solutions obtained and I know that answering the first question might answer this one as well.
Hi Teodoro,
Recently, I have developed an improvement version of MOEA/D algorithm for bi-objectives, and its Matlab codes are available for users to download. If you are interested in it, you can download at the link below. For some other studies and their Matlab codes, you can refer to my researchgate.
Vinh
• asked a question related to Multiobjective Optimization
Question
I am using multi-objectve GA toolbox in Matlab to optimize 3 objective function. I can plot pareto two objective each time, but I am unable to plot the pareto fronts of 3 objectives together.
Yes, it's suitable for whatever number of objectives as long as you have a way to visualise it. In case of 3 objectives you can use scatter3!
• asked a question related to Multiobjective Optimization
Question
I would like to calculate Spread , HyperVolume and Generational Distance values to measure the performance of multi -objective optimization problems. I know how to calculate the performance in case true pareto front is available. However, for the test data available I don't have the true pareto front but I have the minimum and maximum values for each objective function. I would like to know the methodology or some one can share the code to find out the spread, GD and HV values of multi-objective optimization problem. Really appreciate the help.
Dear Pavitdeep,
You can find all mathematical equation in book Multi-objective evolutionary optimization for product design and manufacturing, written by Lihui Wang • Amos H. C. Ng • Kalyanmoy Deb
Regards,
Robert
• asked a question related to Multiobjective Optimization
Question
Pareto solutions are found below true PF in case of 3D DTLZ1 problem. Whereas they are found above true PF in case of 2D DTLZ1 problem.
I have checked and confirmed problem formulation and variable bound. ( variables used are 3).
Algorithm converges on DTLZ-2, 3, 4.
What might be the cause or area of improvement in MO algorithm.
Reference: Deb K., Thiele L., Laumanns M., Zitzler E. (2005) Scalable Test Problems for Evolutionary Multiobjective Optimization. In: Abraham A., Jain L., Goldberg R. (eds) Evolutionary Multiobjective Optimization. Advanced Information and Knowledge Processing. Springer, London
Hi Leonardo,
I understand that you might be busy.
Waiting for favourable response.
• asked a question related to Multiobjective Optimization
Question
I am looking for examples of the combination of ABM, MO optimization, and game theory, preferably the ones that have been used for practical purposes.
Dear Ali,
• asked a question related to Multiobjective Optimization
Question
My solution set in the Cartesian system placed in range x=0.5 and y=0.3 as diagonal line, but pareto set  is in range placed in range x=0.5 and y=0.5  as diagonal line.
Is it possible the solution set pass from the pareto set ?
or Is it possible the solution set being bether than  the pareto set ?
following
• asked a question related to Multiobjective Optimization
Question
Given a number of objectives, is there any quantitative method through which I can determine what objectives are more crucial than others
You may recognize some objective as "more important" and some as "less important" but it doesn't mean that there's any objective method to determine this importance. Instead, it may depend on a real-world application standing behind the optimization problem. You can assign weights to particular objectives or solve the problem in many stages.
Or, if you mean by "crucial" large influence of a particular objective on the "overall performance", you may consider normalizing objectives to [0,1] range of output values and maximizing the sum of them subject to the constraints of the problem. The larger is the value of a particular objective for the optimal solution, the more "crucial" this objective can be considered.
Just a quick idea, not based on any literature. :)
• asked a question related to Multiobjective Optimization
Question
How exactly can i choose the best solution among so many values ?
There is an efficient multi-objective decision making approach known as “technique for order of preference by similarity to ideal solution” (TOPSIS) which is presented in the following paper.
• asked a question related to Multiobjective Optimization
Question
How to find the pareto front for maximizing a fxn. and minimizing another fxn. at the same time ?
I am using this toolbox :
Hello
If I understand correctly, you are worried that MOPSO won't work because it requires that all objectives have to be minimized. One simple solution is to multiply the result of the maximizing objective function with -1. This will make it a minimization objective function. When the algorithm outputs the Pareto you take the absolute value of that objective to switch it back to normal.
• asked a question related to Multiobjective Optimization
Question
Can a search method that optimizes one criteria and uses other as a tie-breaker be called multi-objective? If yes, is there a subcategory in which it falls? If not, what is it called?
Thank you.
Dear Victor,
Judging by your question, you possibly have a situation with ordered criteria: you are looking for a solution that is optimal with respect to the first criterion and amongst these solutions, you are looking for a solution that is optimal with respect to the second criterion.
If my description is valid, then the terminology question has the following answer. The above approach to solving multicriteria problems is called an ordered criteria approach or a lexicographic approach.
Please, be careful: you should guarantee that the found solution is really optimal with respect to the second criterion amongst all solutions that are optimal with respect to the first criterion. To guarantee this, you should prove, for example, that IN EACH STEP, your ALGORITHM GENERATES ALL POSSIBLE ALTERNATIVES for the first criterion. Otherwise, the final solution may not be optimal with respect to the second criterion.
More than that, even if you generate all alternatives, you should guarantee, that the local choice in each step gives a global optimum with respect to the second criterion. If you have no such guarantees, you cannot speak about solving a multicriteria problem and your approach has no any special name.
• asked a question related to Multiobjective Optimization
Question
here, CMOEAs is an extension of EAs. However, for example DE is time consuming when deals with more than three objectives.
Thank u so much Mr. Mahesh.
my regards,
• asked a question related to Multiobjective Optimization
Question
Please guide for selection of many objective optimization algorithm.
Hi . We recently published new optimization algorithm entitled "WCC" in informatics in medicine unlocked journal. also we have introduced some recent optimization algorithms in this category.
• asked a question related to Multiobjective Optimization
Question
Suppose I am optimizing ZDT-1 2 objective test function. I want to stop the algorithm when there is no significant improvement in Pareto front. how can I achieve this?
Supposing that you can write/change the programming code of your computer program or computer program that you are using:
You can firstly test, improve your algorithm and adapt its parameters for a little example where you can find deterministically the optimal Pareto Front, for example by an enumeration model (probably exponential) if the space domain is discrete. For larger problems you can end the run in a similar way to what is normally done for a conventional optimization algorithm with a single objective function, for example when there is no improvement (punctual or in average) of the objective function value, with the necessary adaptations given that all objective functions must be considered simultaneously now.
• asked a question related to Multiobjective Optimization
Question
If anyone know about NSGA|| evolutionary algorithm then please let me know why crowded distance is not used in initial condition for the generation of the offspring.?
Hello, the reason is that it is a secondary criterion that is used if there is a surplus of solutions based on the non-domination rank, starting from the first/best front, and the last front considered has more solutions than are required in the next generation. More solutions tend to belong to the same non-domination rank as optimization proceeds.
• asked a question related to Multiobjective Optimization
Question
My goal is to minimize an unimodal function using direct optimization method. According to the problem characteristics, I decide apply a single-based optimization rather than a population-based one. However, I am a newbie in this field and don't know which is the best feasible method. Hence, who can help me point out the fastest single-based optimization?
Thank you very much.