# Discrete Optimization - Science topic

Explore the latest questions and answers in Discrete Optimization, and find Discrete Optimization experts.
Questions related to Discrete Optimization
• asked a question related to Discrete Optimization
Question
I have a new idea (combining a well-known SDP formulation with a randomized procedure) for an approximation algorithm for the vertex cover problem (VCP) with a performance ratio of $2 - \epsilon$.
You can see the abstract of the idea in the attached file and the latest version of the paper at https://vixra.org/abs/2107.0045
I would be grateful for any informative suggestions.
Dear Majid Zohrehbandian,
I suggest you see the links on the topic.
Best regards
• asked a question related to Discrete Optimization
Question
I have two types of resources, A and B. The resources are to be distributed (discretely) over k nodes.
The number of resources of type A is a.
The number of resources of type B is b.
Resources B should be completely distributed (the sum of resources taken by the nodes should be b).
Resources A need not be completely distributed over the nodes; in fact, we want to reduce the usage of resources A.
Giving resources (A or B) to a node enhances the quality of that node, where the relation is non-linear.
All nodes should achieve a minimum quality.
What type of problem is this, and how can I find the optimal value?
Genetic algorithms provide no guarantee of finding an optimum. I hope you will use better tools than those. You should know that this forum has largely been hijacked by metaheuristics enthusiasts; an inflated number of posts on RG rest on the fact that RG "scholars" do not know that better tools exist. So beware!
• asked a question related to Discrete Optimization
Question
Dear all,
I want to start learning discrete choice-based optimization so that I can use it later for my research works. I want to know about free courses, books, study materials available on this topic. Any suggestions will be appreciated.
Thanks,
Soumen Atta
You should begin by studying discrete optimization methods in general. After that, you could study models and methods for choosing among options. I am the author of the Selection of Proposals and the Integration of Variables methods, devoted to option selection, which you can find in my ResearchGate profile, including applications.
• asked a question related to Discrete Optimization
Question
What are stochastic and combinatorial optimization problems?
Also, how can I identify whether the problem I am working on is a continuous or a discrete optimization problem?
Discrete optimization would be something like the classic Traveling Salesman problem - you are finding a sequence of discrete points that satisfies some optimization criteria. Continuous optimization involves finding a set of extreme points on a continuous hypersurface that is defined by a continuous cost function. Hamiltonian and Lagrangian approaches from physics are very old forms of this type of optimization.
Online search engines provide a treasure-trove of hits on stochastic and deterministic approaches. Give it a whirl!
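To make the discrete nature of the TSP concrete, here is a brute-force enumeration over permutations, a toy sketch that is only feasible for a handful of cities (all names here are illustrative):

```python
import itertools
import math

def tour_length(order, coords):
    """Total length of the closed tour visiting coords in the given order."""
    return sum(math.dist(coords[order[i]], coords[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def brute_force_tsp(coords):
    """Enumerate all tours starting at city 0 and return the shortest one.
    This is O(n!) and only meant to illustrate the discrete search space."""
    cities = range(1, len(coords))
    best = min(((0,) + p for p in itertools.permutations(cities)),
               key=lambda order: tour_length(order, coords))
    return best, tour_length(best, coords)

# Four corners of a unit square: the optimal tour walks the perimeter, length 4.
order, length = brute_force_tsp([(0, 0), (1, 0), (1, 1), (0, 1)])
```

The decision variables here are a permutation (a discrete object), in contrast to the continuous case where the variables live in R^n and calculus-based conditions apply.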
• asked a question related to Discrete Optimization
Question
Hi,
I'm interested in solving a nonconvex optimization problem that contains continuous variables and categorical variables (e.g. materials) available from a catalog.
- metaheuristics: essentially guided random trial and error;
Are you aware of other systematic approaches?
Thank you,
Charlie
Z. Nedelková, C. Cromvik, P. Lindroth, M. Patriksson, and A.-B. Strömberg, "A splitting algorithm for simulation-based optimization problems with categorical variables", Engineering Optimization, vol. 51 (2019), pp. 815-831.
It might help!
• asked a question related to Discrete Optimization
Question
Some metaheuristics demonstrate superior performance on certain kinds of problems; some of these are continuous optimization problems, while others are discrete or binary optimization problems.
Simply look at this research:
N. K. T. El-Omari, "Sea Lion Optimization Algorithm for Solving the Maximum Flow Problem", International Journal of Computer Science and Network Security (IJCSNS), e-ISSN: 1738-7906, DOI: 10.22937/IJCSNS.2020.20.08.5, 20(8):30-68, 2020.
Or refer to the same paper at the following address:
• asked a question related to Discrete Optimization
Question
It seems that MATLAB's quadprog function (a conventional interior-point algorithm) is not fully exploiting the sparsity and structure of the sparse QP formulation, based on my results.
In Model Predictive Control, the computational complexity should scale linearly with the prediction horizon N. However, my results show that the complexity scales quadratically with N.
What can be possible explanations?
Correction: quadprog is not exploiting the sparsity and structure of the sparse QP formulation at all, so the computational complexity scales cubically with N.
The barrier algorithm of Gurobi was also not fully exploiting the sparsity and structure of the sparse QP formulation, so its computational complexity scales cubically with N as well. Do you have any documentation about this? I could not find anything relevant on the Gurobi website.
Could you perhaps also react to my last topic, about why the interior-point algorithms of Gurobi and quadprog give the same optimized values x and the same objective function value, while the first-order solver GPAD gives the same optimized values x but an objective function value that is a factor of 10 larger?
Regards
• asked a question related to Discrete Optimization
Question
I would like to test the performance of a modified algorithm developed to solve a real-world problem with these characteristics: (1) discrete, (2) multi-objective, (3) black-box, (4) large-scale.
How can we do this? And if no such test problems exist, is it sufficient to show its performance on the real-world problem only (where the true Pareto front is unknown)?
Best regards,
I suggest searching the MathWorks homepage, specifically the File Exchange.
• asked a question related to Discrete Optimization
Question
In the mixed-variable heuristic optimization domain, what is done when a categorical variable determines the existence of continuous or ordered discrete variables in each possible solution?
To illustrate, imagine an optimization problem to determine the best tool to cut paper.
In this problem, a variable tool can have the values "knife" or "scissors".
• If its value is "scissors", there's the continuous-valued blade_size variable.
• If it's "knife", there is the same blade_size continuous variable and also a num_of_teeth discrete variable
How can I deal with these problems using some metaheuristic designed to handle categorical, continuous, and ordered discrete variables?
My first thought was to set the problem to the maximum possible dimensionality and, after choosing the value of the categorical variable, select (with if statements) which other variables are going to be optimized and used to evaluate the solution.
This will probably work, but it seems naive to me. Do more sophisticated methods to deal with this kind of problem exist? If so, what are they?
• asked a question related to Discrete Optimization
Question
Dear scientists,
Hi. I am working on some dynamic network flow problems with flow-dependent transit times in system-optimal flow patterns (such as the maximum flow problem and the quickest flow problem). The aim is to know how well existing algorithms handle actual network flow problems. To this end, I am in search of realistic benchmark problems. Could you please guide me to access such benchmark problems?
Thank you very much in advance.
Yes, I see. Transaction processing has also a constraint on response time. Optimization then takes more of its canonical form: Your goal "as fast as possible" (this refers to network traversal or RTT) becomes the objective, subject to constraints on benchmark performance which are typically transaction response time and an acceptable range of resource utilization, including link utilization. Actual benchmarks known to me that accomplish such optimization are company-proprietary (I have developed some but under non-disclosure contract). I do not know of very similar standard benchmarks but do have a look at TPC to see how close or how well a TPC standard benchmark would fit your application. I look forward to seeing other respondents who might know actual public-domain sample algorithms.
• asked a question related to Discrete Optimization
Question
A trajectory is obtained at discrete points; what is the procedure for measuring the smoothness of this trajectory? The answer to this question will help me get a clear picture of the convergence rate of the Legendre pseudospectral method, where the rate of convergence is given as 1/(N^(2m/3 - 1)). Here m is the smoothness of the optimal trajectory and N is the number of nodes or points. This rate-of-convergence formula and further discussion can be found in the paper "Rate of convergence for the Legendre pseudospectral optimal control of feedback linearizable systems" by Wei Kang.
Respected Dr. Xinwei Wang, first of all thank you for suggesting the paper. It has cleared up certain doubts of mine, but my initial question about what is meant by the smoothness of a trajectory generated by joining discrete points still remains unanswered. Is there a way to quantify the smoothness of the trajectory generated by joining the values of the state or control at the Legendre-Gauss-Lobatto (LGL) points? More specifically, is there a measure of it?
• asked a question related to Discrete Optimization
Question
Given a graph, I need to find a vertex (or set of vertices) whose removal from the graph reduces its chromatic number.
Finding "critical" nodes or edges is hard for both NP and co-NP. So any exact algorithm for your problem is going to take exponential time in the worst case. But there might exist an algorithm that works reasonably well in practice... depending on the structure of your instances.
• asked a question related to Discrete Optimization
Question
I have started programming the binary bat algorithm (BBA) to solve the knapsack problem. I have a misunderstanding of the position concept in binary space:
V_new = V_old + (X_current - X_best) * f;
S = 1 / (1 + exp(-V_new));
X(t+1) = 1 if S > rnd, else 0
The velocity-updating equation uses both the position from the previous iteration (X_current) and the global best position (X_best). In the continuous version of BA, a position is a real number, but in the binary version a bat's position is represented by a binary number; in the knapsack problem it encodes whether an item is selected or not. In the binary version, a transfer function is used to transform the real-valued velocity into a binary position. I am confused whether the position in BBA is a binary or a real number. If binary, then (X_current - X_best) can only be 1 - 0, 0 - 1, 1 - 1, etc.; if real, then how do we get the continuous representation when there is no continuous equation to update the position (in the original BA, the position update is X(t+1) = X(t) + V_new)?
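A minimal sketch of the update described above for a single bit, assuming positions are kept binary and only the velocity is real-valued (function and variable names are illustrative):

```python
import math
import random

def bba_position_update(x_cur, x_best, v_old, f, rng=random):
    """One binary-bat update for a single bit.
    The velocity stays real-valued; the position stays binary."""
    # The difference of two binary positions is one of -1, 0, +1,
    # but the accumulated velocity is a real number.
    v_new = v_old + (x_cur - x_best) * f
    # Sigmoid transfer function maps the real velocity to a probability in (0, 1).
    s = 1.0 / (1.0 + math.exp(-v_new))
    # The new position is drawn as a binary value from that probability.
    x_new = 1 if s > rng.random() else 0
    return x_new, v_new
```

So the position itself stays binary; only the velocity lives in continuous space, and the sigmoid transfer function bridges the two. There is no continuous position-update equation in BBA.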
Unless you are doing this just as an exercise, I discourage you from trying "new" metaheuristics for knapsack. Besides being a widely studied problem, there are very good knapsack-specific algorithms. Check David Pisinger's webpage for codes and test-instance generators.
• asked a question related to Discrete Optimization
Question
Dear experts,
Hi. I appreciate any information (ideas, models, algorithms, references, etc.) you can provide to handle the following special problem or the more general problem mentioned in the title.
Consider a directed network G including a source s, a terminal t, and two paths (from s to t) with a common link e^c. Each link has a capacity c_e and a transit time t_e. This transit time depends on the amount of flow f_e (inflow, load, or outflow) traversing e, that is, t_e = g_e(f_e), where the function g_e determines the relation between t_e and f_e. Moreover, g_e is a positive, non-decreasing function. Hence, the greater the amount of flow on a link, the longer the transit time for this flow (and thus the lower the speed of the flow). Notice that, since links may have different capacities, they could have dissimilar functions g_e.
The question is that:
How could we send D units of flow from s to t through these paths in the quickest time?
Notice: a few works [D. K. Merchant, et al.; M. Carey; J. E. Aronson; H. S. Mahmassani, et al.; W. B. Powell, et al.; B. Ran, et al.; E. Köhler, et al.] are relevant to dynamic networks with flow-dependent transit times. Among them, the works by E. Köhler, et al. are the most appealing (at least to me), as they introduce models and algorithms based on network flow theory. Although they have presented good models and algorithms ((2+epsilon)-approximation algorithms) for the associated problems, I am looking for better results.
• asked a question related to Discrete Optimization
Question
Almost all optimization studies use function evaluations to compare performance among algorithms.
Is the number of function evaluations the most important criterion? Why or why not?
Function evaluations are one of the most important criteria for comparison, along with statistical analyses such as standard deviation, Friedman's test, and the Wilcoxon test (due to the stochastic nature of metaheuristics), as well as minimum and average fitness. Function evaluations matter because several metaheuristics have two or more stages in their optimization process (for example, TLBO has teacher and student phases, GWO has one phase (searching and hunting), ABC has two loops, Jaya has one, BFO has five, etc.). For all of them the number of iterations may be the same, but that does not reflect the true computational requirement of the algorithm in terms of how much effort it expends. By counting function evaluations, a more fruitful comparison can be drawn, since we are comparing on common ground: the number of times the fitness function was called.
• asked a question related to Discrete Optimization
Question
Hello,
Is it possible to mathematically model a binary variable using continuous variables in the optimization problems?
For example, assume that 'X' is {0,1}. Can I define it as 0<=X<=1 in my problem and impose some additional constraints instead to force X to become only '0' or '1'?
Regards,
Morteza
Yes, x is binary if and only if x - x^2 = 0.  So the binary condition can be replaced by a quadratic equation.
(This is a folklore result, noted by, e.g., Shor; Koerner; Poljak, Rendl, & Wolkowicz; Lovasz & Schrijver; Lemarechal & Oustry, etc.)
Unfortunately, the quadratic equation is non-convex, so the resulting continuous quadratic optimisation problem is just as difficult as the original 0-1 optimisation problem.  But this is to be expected, since the binary condition is inherently non-convex.
• asked a question related to Discrete Optimization
Question
As there is no single topology that is best for all engineering applications, I would like to study different network topologies applied in different engineering fields. I would like to open a discussion analysing different topologies with their advantages and disadvantages. Can anyone suggest a good book on different network topologies and their applications in different branches of engineering?
There is a huge literature on k-node connected and k-edge connected networks, network loading, network design, networks with bounded rings, etc. Try looking at past issues of the journal "Networks".
• asked a question related to Discrete Optimization
Question
Dear researchers,
I'm learning about data clustering, it presents a new area of research for me.
My questions are the following.
1. How can we formulate a data clustering problem as an optimization problem? In other words, how do we construct good objective functions for data clustering?
2. What is the best way to deal with a data clustering problem: as a combinatorial problem or as a continuous problem?
There is no single best way to do clustering; the objective function or paradigm you choose will differ according to the kind of data you are processing and the application. Some datasets contain clusters with irregular shapes, while others contain spherical clusters.
You can build an objective function taking into account the distances between objects belonging to the same cluster (you want to minimize those distances), while also maximizing the distances between objects belonging to different clusters.
I hope this helps with your problem. I also apologize for my English; it is not my native language.
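As one concrete instance of that objective, the k-means within-cluster sum of squared errors can be written in a few lines (a sketch; the dataset, labels, and names are illustrative):

```python
import math

def within_cluster_sse(points, labels, centroids):
    """Sum of squared distances from each point to its assigned centroid.
    k-means seeks the labeling and centroids minimizing this value."""
    return sum(math.dist(p, centroids[c]) ** 2 for p, c in zip(points, labels))

# Two well-separated groups of two points each.
points = [(0.0, 0.0), (0.0, 1.0), (10.0, 10.0), (10.0, 11.0)]
labels = [0, 0, 1, 1]
centroids = [(0.0, 0.5), (10.0, 10.5)]
sse = within_cluster_sse(points, labels, centroids)
```

Swapping labels between the two groups makes the objective much larger, which is exactly what lets an optimizer distinguish good clusterings from bad ones.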
• asked a question related to Discrete Optimization
Question
Does any one know about the meaning of offset and the formula of this link: http://oeis.org/A002898/internal ?
Perhaps you could clarify? I don't see a mention of the word "offset" on the page as it exists today (2016/05/15). You probably already know the definition available at http://oeis.org/wiki/Offsets
• asked a question related to Discrete Optimization
Question
After testing many instances, I found that when r = V / Vtotal <= ϕ (the golden ratio), the algorithm takes a long time to print out the result.
When the ratio r is very close to ϕ, I noticed that V / Vtotal = (V + Vtotal) / V (which is the geometric relationship between the two quantities V and Vtotal in the golden ratio).
However, a few of the instances with a ratio r > ϕ can also take too long to print the results.
So can this problem be related to ϕ or not?
PS: I got the idea of comparing it to ϕ after checking this answer Lower bound on running time for solving 3-SAT if P = NP
@Fabrizio Marinelli: I am just using a simple combinatorial branch-and-bound that generates every permutation of the list of items by swapping, then applies the Next Fit heuristic.
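For reference, the Next Fit heuristic mentioned here takes only a few lines: keep filling the current bin and open a new one whenever an item does not fit (a sketch assuming item sizes do not exceed the capacity; names are illustrative):

```python
def next_fit(items, capacity):
    """Pack item sizes into bins of the given capacity with the Next Fit rule:
    keep filling the current bin; when an item does not fit, open a new bin.
    Returns the list of bin loads."""
    bins = []
    remaining = capacity  # room left in the currently open bin
    for size in items:
        if size > remaining:          # item does not fit: close bin, open a new one
            bins.append(capacity - remaining)
            remaining = capacity
        remaining -= size
    bins.append(capacity - remaining)  # close the last bin
    return bins

loads = next_fit([0.5, 0.6, 0.3, 0.4], 1.0)  # packs into three bins
```

Next Fit never revisits a closed bin, which makes it fast (one pass) but at most a factor of 2 from the optimal bin count.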
• asked a question related to Discrete Optimization
Question
I have an all-node routing problem with nonlinear constraints.
I would include some local search in your evolutionary algorithm. Local search has been very useful in solving TSP instances.
Pat
• asked a question related to Discrete Optimization
Question
Hi, I am trying to implement Particle Swarm Optimization (or a Genetic Algorithm). However, I am already stuck on the first step...
I am getting confused about how to initialise the particles, and about what these particles (in terms of code) actually are.
Thanks.
Andrea.
The simplest way to represent a particle is a vector. For example, in an optimization problem with 3 design variables, each design (particle) is represented by a vector [x1, x2, x3]. So each particle is such a vector and represents a point in the 3D space (that we all know and can imagine). For problems with dimension > 3 it is difficult to imagine the particle and its position, but with N=3 you get the idea.
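Concretely, initialisation then just means sampling such vectors (plus matching velocities) uniformly inside the search bounds. A minimal sketch, with illustrative names and a dictionary per particle:

```python
import random

def init_swarm(n_particles, bounds, rng=random):
    """Create particles as position vectors sampled uniformly within
    per-dimension bounds; 'bounds' is a list of (low, high) pairs.
    Velocities start small relative to the box size."""
    swarm = []
    for _ in range(n_particles):
        position = [rng.uniform(lo, hi) for lo, hi in bounds]
        velocity = [0.1 * rng.uniform(-(hi - lo), hi - lo) for lo, hi in bounds]
        swarm.append({
            "position": position,
            "velocity": velocity,
            "best_position": position[:],      # personal best starts at the initial point
            "best_value": float("inf"),        # to be overwritten by the first evaluation
        })
    return swarm

swarm = init_swarm(20, [(-5.0, 5.0)] * 3)  # 20 particles in a 3-D box
```

After this, the main PSO loop evaluates each position, updates personal and global bests, and moves the particles with the usual velocity update.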
• asked a question related to Discrete Optimization
Question
It is an efficient new hybrid metaheuristic
(named, in another context, ANGEL) for the discrete size optimization of truss structures. ANGEL combines ant colony optimization (ACO), a genetic algorithm (GA), and a local search (LS)
strategy. The procedure of ANGEL attempts to solve an optimization problem by repeating the following steps. First, ACO searches the solution space and generates structural designs to provide the initial population for the GA. After that, the GA is executed, and the pheromone set in ACO is updated whenever the GA obtains a better solution. When the GA terminates, ACO searches again using the new pheromone set. ACO and the GA search alternately and cooperatively in the solution space.
Dear Aymen Ammari,
According to the ANGEL procedure, the GA and ACO algorithms should share their information at each generation, while the local search does so only at the last generation. I recommend implementing each algorithm separately and then coupling them as described. For example, you can find an implementation of the GA at the following link; that code can be extended for both of the other algorithms. If you have any problem with MATLAB programming, please see the Ariadne Tsambani attachments.
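Under those recommendations, the ACO/GA alternation can be sketched on a toy bit-string objective. Everything here (the OneMax objective, the pheromone-update rule, the GA operators, all names) is illustrative and not the original ANGEL implementation:

```python
import random

def angel_sketch(n_bits=12, pop_size=20, rounds=5, gens=10, seed=0):
    """Toy ACO/GA alternation in the spirit of ANGEL on a OneMax objective.
    tau[i] is the pheromone: the probability that bit i is set when ACO samples."""
    rng = random.Random(seed)
    fitness = sum  # OneMax: maximise the number of 1-bits
    tau = [0.5] * n_bits
    best, best_fit = None, -1

    for _ in range(rounds):
        # ACO phase: sample the initial GA population from the pheromone.
        pop = [[1 if rng.random() < tau[i] else 0 for i in range(n_bits)]
               for _ in range(pop_size)]
        for _ in range(gens):
            # GA phase: tournament selection, uniform crossover, bit-flip mutation.
            new_pop = []
            for _ in range(pop_size):
                a = max(rng.sample(pop, 2), key=fitness)
                b = max(rng.sample(pop, 2), key=fitness)
                child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
                child = [1 - x if rng.random() < 1.0 / n_bits else x for x in child]
                new_pop.append(child)
            pop = new_pop
            leader = max(pop, key=fitness)
            if fitness(leader) > best_fit:
                best, best_fit = leader[:], fitness(leader)
                # Pheromone update on improvement: nudge tau toward the new best.
                tau = [0.9 * t + 0.1 * x for t, x in zip(tau, best)]
    return best, best_fit
```

The key structural point matches the description above: ACO re-seeds the GA population from the pheromone each round, and the pheromone is updated only when the GA finds an improvement.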
• asked a question related to Discrete Optimization
Question
I am trying to solve discrete and mixed-variable optimization problems, and I want to know the best constraint-handling techniques for them, i.e. ones that help solve the problem in minimum time.
I do not appear to be allowed to. Don't you have an online library?
• asked a question related to Discrete Optimization
Question
ABSOLUTE VALUE OPERATOR LINEARIZATION
I have a nonlinear term in the objective function of my optimization problem as an Absolute Value function like |x-a|.
As far as I know, an Absolute Value operator makes the optimization problems nonlinear (i.e. NLP). How can I make it linear (LP or MILP)?
Max  f(x)=g(x) + b*|x-a|
s.t.   some linear constraints
Regards,
max f(x) = g(x) + b*p + b*q
s.t.
other linear constraints
x - a + p - q = 0;
p, q >= 0;
Note: this substitution is exact only when the optimization drives p + q down to |x - a|, i.e. when at most one of p and q is positive at the optimum. That happens automatically when the absolute-value term is being minimized (here, when b < 0). With b > 0 in a maximization, the solver can increase p and q together while keeping q - p fixed, so b*(p + q) grows without bound and the linear model is not equivalent; in that case you need a binary variable with big-M constraints to model |x - a| exactly.
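A quick sanity check of the p, q substitution in the direction where it is valid (when p + q is being driven down): for fixed x, the smallest feasible p + q with x - a + p - q = 0 and p, q >= 0 is exactly |x - a|. A minimal sketch (the function name is illustrative):

```python
def min_p_plus_q(x, a):
    """Smallest p + q subject to x - a + p - q = 0 and p, q >= 0.
    The closed-form minimiser sets at most one of p, q positive."""
    q = max(x - a, 0.0)  # the constraint requires q - p = x - a
    p = max(a - x, 0.0)
    return p + q
```

For every x this equals |x - a|, confirming that a minimised p + q reproduces the absolute value. Nothing in the linear constraints alone prevents inflating p and q together, which is why the substitution relies on the objective pressing them down.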
• asked a question related to Discrete Optimization
Question
Binary Variable * Real Variable = ?
1) Does it lead to an equivalent 'nonlinear' variable (and thus an MINLP), or
2) does it lead to an equivalent 'integer' ('discrete') variable (and thus an MILP)?
Which one is correct, and why?
What is your idea for dealing with this product by adding constraints so that the resulting problem becomes an MILP (if it is not one already)?
Regards,
The product just expresses that the continuous variable should be zero if the binary variable is zero.
Within an MILP framework you can avoid the product by bounding the continuous variable with the binary one, provided the continuous variable has natural finite bounds AND it does not occur elsewhere without that product.
In that case just add the two inequalities
binary * lower_bound <= continuous <= binary * upper_bound
to the problem, and the continuous variable is forced to zero whenever the binary variable is zero.
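When you do need the product itself as a variable, say z = y*x with y binary and x in [L, U], the standard exact linearization uses four inequalities (the McCormick envelope with a binary factor). A small sketch, with illustrative names, showing that the constraints pin z down correctly:

```python
def z_interval(y, x, L, U):
    """Range of z allowed by the four linear constraints
        z <= U*y,  z >= L*y,  z <= x - L*(1 - y),  z >= x - U*(1 - y),
    which together linearize z = y * x for binary y and x in [L, U]."""
    z_lo = max(L * y, x - U * (1 - y))
    z_hi = min(U * y, x - L * (1 - y))
    return z_lo, z_hi

# With y = 1 the constraints force z = x; with y = 0 they force z = 0.
demo_on = z_interval(1, 3.0, -5.0, 5.0)
demo_off = z_interval(0, 3.0, -5.0, 5.0)
```

Because both cases collapse the interval to a single point, no integrality is needed for z itself; the binary y alone carries the combinatorics.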
• asked a question related to Discrete Optimization
Question
I would be grateful if anyone could suggest a reference where I can find a formal definition of "binary discrete optimization".
BLP, MPEC and EPEC are usually regarded as nonlinear optimisation problems (i.e., problems with continuous variables and nonlinear constraints).  They are however NP-hard in general, since the complementarity constraints are non-convex.
See for example here:
• asked a question related to Discrete Optimization
Question
Hello,
I would like to know that how I can find the number of variables (especially the integer ones) in GAMS (General Algebraic Modeling System) codes.
Does GAMS platform have any options to show the number of variables?
Any help would be appreciated.
Regards,
Morteza
Hi,
If you are working with GAMS IDE (the Integrated development Environment of GAMS that runs in Windows), then you can find the number of variables (and their type) in the log file. The log file is the window that pops up when you run your code. You can look for the number of columns (i.e. the variables) and the number of integer-columns (i.e. the number of integer variables)
That is the easiest way for me to identify the number of variables, but maybe someone else has a better way to do it.
Regards,
Laura
• asked a question related to Discrete Optimization
Question
Hello,
As far as I know, meta-heuristic algorithms such as GA, PSO, GSA, etc. generally find the optimal solution of 'unconstrained' optimization problems. If I have some constraints (equality and/or inequality equations), how can I incorporate and model them in these kinds of algorithms?
I would greatly appreciate it if you kindly help me in this matter.
Regards,
Hi Morteza, I am new to participating in the Q&A. I hope I have understood your question correctly: you are asking about "good" methods of constraint handling? If I have got the gist of your question, then this is perfect, because I am very interested in this topic myself.
I put "good" in quotation marks above because I am convinced that what is judged to be good is, unfortunately, problem dependent. To me, this is just a manifestation of Wolpert and Macready's No Free Lunch theorem.
For a recent overview see: Mallipeddi et al (2010): Ensemble of Constraint HandlingTechniques in IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, 14 (4).
There are of course a number of other reviews of constrained optimisation techniques available in the literature. Prof  Carlos Coello-Coello's repository found on the web at:http://www.cs.cinvestav.mx/~constraint/index.html is a useful starting point.
Recognising that the definition of "good" method might be problem specific, I give a brief summary of what I have, for my problem setting investigated, found to be efficient. Note that these algorithms can be used with population based stochastic heuristics (like GA/DE/EP/ES/PSO), I am not so sure about using them with Simulated Annealing (SA) as canonical SA operates with a single trial point rather than a population.
The epsilon constraint-handling technique proposed by T. Takahama and S. Sakai in their paper "Constrained Optimization by the ε Constrained Differential Evolution with an Archive and Gradient-Based Mutation", pp. 1680-1688 (winner of the CEC 2010 competition on constrained single-level optimisation).
The Stochastic Ranking method of Runarsson and Yao, with code in MATLAB and C: https://notendur.hi.is/tpr/index.php?page=software/sres/sres It is easy to get the wrong idea that it only works with evolution strategies (Runarsson and Yao's EA of choice), but I have managed to use it as a constraint-handling method with Differential Evolution.
Interestingly one of the simplest and most effective method which I have tried is the method given in Kim et al (2010): T-H Kim, I. Maruta and T. Sugie. A simple and efficient constrained particle swarm optimization and its application to engineering design problems, Proceedings of the Institution of Mechanical Engineers Part C-Journal of Mechanical Engineering Science, Vol. 224, No. C2, pp. 389-400, 2010.
See particularly Equation 2 in this paper. It should be easy to program these in any language of your choice.
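Alongside the specific techniques cited above, the simplest baseline is a static penalty function that folds constraint violations into the fitness, so that any unconstrained metaheuristic (GA, PSO, DE, ...) can be applied directly. A generic sketch, not one of the published methods mentioned (names and the penalty coefficient are illustrative):

```python
def penalized(objective, ineq_constraints, eq_constraints, r=1e6, tol=1e-6):
    """Return an unconstrained fitness for minimisation: the objective plus
    quadratic penalties for violated g(x) <= 0 inequalities and for
    equalities h(x) = 0 violated beyond a small tolerance."""
    def fitness(x):
        value = objective(x)
        value += r * sum(max(0.0, g(x)) ** 2 for g in ineq_constraints)
        value += r * sum(max(0.0, abs(h(x)) - tol) ** 2 for h in eq_constraints)
        return value
    return fitness

# Toy problem: minimise x^2 subject to x >= 1, written as g(x) = 1 - x <= 0.
fit = penalized(lambda x: x * x, [lambda x: 1.0 - x], [])
```

The drawback, which motivates the adaptive and ranking-based methods above, is that a single fixed coefficient r must be tuned: too small and infeasible points win, too large and the search landscape becomes badly conditioned near the boundary.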
• asked a question related to Discrete Optimization
Question
i.e., in methods like the window method (using different windows), the FFT, the Fourier series method, etc.
What is the total number of mathematical operations involved?
• asked a question related to Discrete Optimization
Question
I am looking for new methods and new trends in optimization that have attracted much interest. I will be grateful if you can give me some information.
There are diverse categories of heuristic and metaheuristic methodologies that have been actively studied. Among metaheuristic methods, Harmony Search and the Firefly Algorithm might be considered somewhat more recent approaches; there are also various population-based approaches that have shown promising ability on optimization problems, such as Ant Colony Optimization, the Artificial Bee Colony, and Particle Swarm Optimization (PSO, one of the most prominent), while Simulated Annealing (often demonstrated on the Travelling Salesman Problem) might also be constructive. Applied Soft Computing, a leading journal in this field, can further clarify this subject with detailed information.
• asked a question related to Discrete Optimization
Question
Suppose you are given a set of linear inequalities that define a polytope, and suppose for simplicity that each of the inequalities defines a facet. Now, suppose you wish to find a "small" collection of facets such that every vertex of the polytope lies on at least one of the facets in the collection. (As examples: the eight vertices of the cube can be covered by just two square facets, and the four vertices of the tetrahedron can be covered by two triangular facets.) This must be done without actually enumerating the vertices, which can be exponentially many. Are there any good exact or heuristic algorithms for this problem? (It can of course be viewed as a special kind of set covering problem, in which the elements to be covered are defined implicitly rather than given as an explicit list.)
I gave the problem another thought, and here is what I understand (I hope it is not too simplistic).
Let's assume F[i] for i = {1, 2, 3, ..., N} are our "N" facets. Now assume for "i, j" we know that F[i] does not intersect with F[j]. I claim that, as long as the union of all facets is not included in the union of F[i] and F[j] I can remove both F[i] and F[j] and still have all the original vertices on the remainder of the facets.
I can repeat the process until I am left with a subset of original facets where each two facets intersect in a non-empty subspace.
Here I assumed that intersection of any two facets (to determine whether or not they intersect) can be done efficiently, I believe this is the case, at least when your facets are planes.
For the case of the cube, this indeed leads to optimal solution. In general, it will be suboptimal, but you can get the smallest subset by branch and bound and brute-force.
To see how this greedy method can fail to work well, just imagine I tilt the x = 0 plane very slightly, so that it intersects x = 1 at some large y and z. Then the method says that x = 0 and x = 1 cannot be removed. If I do the same with y = 0, y = 1, z = 0, and z = 1, then the greedy method fails to reduce the original set at all.
There might be work arounds. I thought of finding a way to verify if the intersection of F[i] and F[j] intersects with the union of F[k] for k not equal to i and j, i.e., the rest of facets.
• asked a question related to Discrete Optimization
Question
Convex programming is a wide-ranging field that uses many methods to solve each particular convex problem. After so many years of development, do you really believe that any unsolved problems still exist?
A large set of open problems is offered by the theory of quasiconvexity, that can be viewed as a continuous limit of the mathematical programming problem with infinitely many linear constraints. In other words, the minimizers - a gradient e=grad u or a y=curl v - satisfy differential constraints everywhere, curl y=0 and div y=0, respectively; these are the limits of difference constraints.
• asked a question related to Discrete Optimization
Question
This is a discrete multi-criteria optimization problem. I am looking for methods to determine the weights in a weighted objective function. I would like to set the weights automatically, taking into account an expert database of optimal cases.
It is a problem of tuning the objective function. I think it is close to the topics of supervised learning and case-based reasoning, but I have found only a few publications on weight determination. Does anyone know of previous studies on this topic?
Dear Remigiusz,
Subjective methods of weight determination are based on expert evaluation. Expert's experience and knowledge allows for providing the most valuable information about the compared objects.
There are several methods to determine each criterion’s relative weight.
- AHP,
-ANP,
-Decision maker's preferences,
- Entropy, (Filar, et al. 1999. Environmental Assessment Based on Multiple Indicators. Technical Report, Department of Applied Mathematics, University of South Australia)
-Fuzzy weighted average, (Zamri et al. 2013. Novel Hybrid Fuzzy Weighted Average for MCDM with Interval Triangular Type-2 Fuzzy Sets)
- Delphi,
The Delphi process is applied in various forms to develop consensus among a group of participants. In one version, questionnaires (containing criteria tables) are distributed by email, or in a meeting setting, to a group of participants/experts, seeking their estimates or weights for a group of indices/criteria (suppose you have 4 criteria whose weights must sum to 100).
Weights are then summarized and sent to each expert for review. With all experts' responses known, individuals are asked to weigh the preference scores of the various weights that have been submitted. The criterion weights are again summarized and returned to the experts. Those whose weights do not conform to the majority may be asked to rearrange their assigned weights. A final round of questionnaires may be circulated, asking for each participant's final weights.
If you want to assess the sensitivity of the results to the changing weight, or if the presentation of the possible outcomes of different weights is your main concern, you can use the following approach.
-random weights
-rank order weights
-response distribution weights
I think it is better to present the resulted consequences of different weighting schemes. Allow your users to select the desired option. That is the recommended approach in most environment/natural resources related studies and management activities.
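Among the listed schemes, the entropy method is the easiest to automate, since it needs only the decision matrix: a criterion whose values barely vary across alternatives receives near-zero weight. A sketch of the standard formulation (assumes positive entries and at least one varying criterion; names are illustrative):

```python
import math

def entropy_weights(matrix):
    """Entropy weights for an m x n decision matrix of positive values:
    normalise each column, compute its Shannon entropy e_j, and weight
    each criterion by its degree of divergence d_j = 1 - e_j."""
    m, n = len(matrix), len(matrix[0])
    divergences = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [v / total for v in col]
        # Entropy normalised to [0, 1] by log(m); a constant column gives e_j = 1.
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(m)
        divergences.append(1.0 - e)
    s = sum(divergences)  # assumed nonzero: at least one criterion varies
    return [d / s for d in divergences]

# The first criterion is constant across alternatives, so it gets no weight.
w = entropy_weights([[1.0, 2.0], [1.0, 4.0], [1.0, 6.0]])
```

This makes entropy weighting a useful objective counterpart to the subjective Delphi/AHP weights above: the two can be combined or compared as part of the sensitivity analysis you describe.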
Raoof