Science topic

Combinatorial Optimization - Science topic

Explore the latest questions and answers in Combinatorial Optimization, and find Combinatorial Optimization experts.
Questions related to Combinatorial Optimization
  • asked a question related to Combinatorial Optimization
Question
6 answers
I have designed an optimization experiment using the Box-Behnken approach.
What should I do if one of the factor combinations fails, for example because aggregation occurs?
Should I revise the whole design, or is there a method to skip that particular factor combination?
And if I need to redesign the whole experiment, what method should I use to evaluate the boundary factor values? The screening methods I have seen require at least 6 factors.
Any help is appreciated.
Greetings.
  • asked a question related to Combinatorial Optimization
Question
4 answers
Can anyone suggest an application of combinatorial optimization in real life? I am considering the TSP (Travelling Salesman Problem), Minimum Spanning Tree, etc.
Relevant answer
Answer
There are indeed many. As far as (spatial) networks are concerned, you can find for example optimal hub location problems (see the papers by M O'Kelly), or problems of optimal spatial network design such as subways (what is the optimal shape of a subway?). I guess that new mobility services such as bike sharing also give rise to combinatorial optimization problems...
  • asked a question related to Combinatorial Optimization
Question
2 answers
I have a new idea (by a combination of a well-known SDP formulation and a randomized procedure) to introduce an approximation algorithm for the vertex cover problem (VCP) with a performance ratio of $\rho < 2 - \epsilon$ on arbitrary graphs.
Here, I am going to outline the main points of the idea. I would be grateful if anyone could identify potential issues or offer informative suggestions.
You can see the latest versions of my paper on this open-access site:
https://vixra.org/abs/2107.0045 with a performance ratio of $1.999999$
https://vixra.org/abs/2202.0143 with a performance ratio of $1.885903$
It can be natural to reject new ideas right away. Yet, instead of immediate judgments and using negative words, it is better to use positive language. Even ideas that seem implausible can turn into outstanding innovations upon further exploration and development.
The Idea:
First of all, we prove that,
I. If the optimal value of the VCP is greater than $(n/2)+(n/k)$ then $\rho < (2k)/(k+2)$, and
II. If we can produce a feasible solution with objective value smaller than $(kn)/(k+1)$ then $\rho < (2k)/(k+1)$.
Hence, to introduce a performance ratio of $2 - \epsilon$ on arbitrary graphs, it is sufficient to produce a feasible solution with a suitable fixed objective value, or to prove that the optimal value is greater than a suitable fixed value.
Therefore, we solve the well-known SDP relaxation proposed by Kleinberg and Goemans (1998). Note that I am aware that just by solving any SDP formulation we cannot approximate the VCP with a performance ratio better than $2 - o(1)$.
Then, let $V_{-1}=\{j: V_0V_j < 0\}$ and $V_1=V-V_{-1}$, which is a feasible solution for the VCP.
If $|V_{-1}| > 0.0625n$ then $|V_1| < 0.9375n= 15n/16$ and we have (based on II) $\rho < (2\times 15)/16 < 1.885903$.
Else, let $A=\{j: V_0V_j > 0.4\}$.
If $|A| > 0.3075n$, then, we can show that the optimal value of the VCP is greater than $(n/2)+(0.03025n)$ and we have (based on I) $\rho < (2k)/(k+2) < 1.885903$, where $k=1/0.03025$.
Else, let $G_{0.4}=\{j: 0 \le V_0V_j \le 0.4\}$, where, based on the above results, we know that $|G_{0.4}| > (1-0.0625-0.3075)n = 0.63n$.
Now, it is sufficient to introduce a suitable feasible solution based on $G_{0.4}$.
To do this, we prove that for any normalized vector $w$, the induced subgraph on $H_w=\{j: |wV_j| > 0.700001\}$ is a bipartite graph and as a result,
if $|H_w| > 0.118472n$ then we can produce a feasible solution with objective value smaller than $(1-0.118472/2)n= 0.940764n < 16n/17$, and a performance ratio of $\rho < (2\times 16)/17 < 1.885903$.
Finally, to produce such a normalized vector $w$, we show that, by introducing two random vectors $u$ and $w$, one of the sets $H_u$ or $H_w$ has more than $0.118472n$ members, and as a result we can produce a suitable feasible solution based on $G_{0.4}$.
Therefore, we would obtain an approximation ratio of $\rho < 1.885903$ on arbitrary graphs, and, based on the proposed $1.885903$-approximation algorithm for the VCP, the unique games conjecture would be false.
#Combinatorial Optimization
#Computational Complexity Theory
#Unique Games Conjecture
  • asked a question related to Combinatorial Optimization
Question
2 answers
I am interested in the use of Extreme Value Theory (EVT) to estimate the global optima of optimization problems (using heuristic and metaheuristic algorithms); however, such articles are a bit difficult to find, since the use of EVT is not usually the main objective of the studies. Could you help me by sharing articles where this procedure is used? Thank you in advance.
Relevant answer
Answer
Bettinger, P., J. Sessions, and K. Boston. 2009. A review of the status and use of validation procedures for heuristics used in forest planning. Mathematical and Computational Forestry & Natural-Resource Sciences. 1(1): 26-37.
Bettinger, P., J. Sessions, and K.N. Johnson. 1998. Ensuring the compatibility of aquatic habitat and commodity production goals in eastern Oregon with a Tabu search procedure. Forest Science. 44(1): 96-112.
Boston, K. and P. Bettinger. 1999. An analysis of Monte Carlo integer programming, simulated annealing, and tabu search heuristics for solving spatial harvest scheduling problems. Forest Science. 45(2): 292-301.
  • asked a question related to Combinatorial Optimization
Question
8 answers
I proposed an algorithm for multicast in a smart grid and I want to compare it with the optimal tree. I tried to write a model myself, but I ended up with a shortest-path tree instead of a Steiner tree. Any suggestions will be appreciated.
Relevant answer
Answer
Slightly delayed answer:
If you want to solve a geometric Steiner tree problem,
I would recommend GeoSteiner: http://geosteiner.com/
If you want to solve a Steiner tree problem in graphs,
I would recommend SCIP-Jack: http://scipjack.zib.de/
  • asked a question related to Combinatorial Optimization
Question
2 answers
For an Integer Linear Programming problem (ILP), an irreducible infeasible set (IIS) is an infeasible subset of constraints, variable bounds, and integer restrictions that becomes feasible if any single constraint, variable bound, or integer restriction is removed. It is possible to have more than one IIS in an infeasible ILP.
Is it possible to identify all possible Irreducible Infeasible Sets (IIS) for an infeasible Integer Linear Programming problem (ILP)?
Ideally, I aim to find the MIN IIS COVER, which is the smallest cardinality subset of constraints to remove such that at least one constraint is removed from every IIS.
Thanks for your time and consideration.
Regards
Ramy
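For illustration, below is a minimal Python sketch of how a solver such as Gurobi can be used to build a heuristic IIS cover: compute one IIS, drop one of its constraints, and repeat until the model becomes feasible. The file name "model.lp" and the greedy removal rule are placeholders, and enumerating all IISs exactly can require exponentially many such solves, so this only gives an upper bound on the MIN IIS COVER.

import gurobipy as gp

model = gp.read("model.lp")            # the infeasible ILP (placeholder file name)
removed = []

while True:
    model.optimize()
    if model.Status != gp.GRB.INFEASIBLE:
        break                          # feasible now; 'removed' is an IIS cover
    model.computeIIS()                 # find one irreducible infeasible set
    iis = [c for c in model.getConstrs() if c.IISConstr]
    victim = iis[0]                    # greedy choice; any selection rule could be used
    removed.append(victim.ConstrName)
    model.remove(victim)

print("Constraints removed (heuristic IIS cover):", removed)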
  • asked a question related to Combinatorial Optimization
Question
7 answers
I am solving a bi-objective integer programming problem using the scalarization function (F1 + epsilon * F2). All my results are correct, but I was told that CPLEX cannot guarantee an exact result with this objective function, i.e., it may return approximate rather than exact non-dominated solutions. As I said before, I am very sure my results are right because I have already checked them. Do I need to prove that CPLEX gives the right result in my algorithm, even though it sometimes makes mistakes on large instances?
Thanks in advance.
Relevant answer
Answer
Did you code the epsilon-constraint method using OPL? May I ask how you coded this? I tried but could not get the right results.
Thanks a lot.
  • asked a question related to Combinatorial Optimization
Question
18 answers
Suppose that when we compare two metaheuristics X and Y on a given real problem, X returns a better solution than Y, while when we use the same metaheuristics to solve global optimization problems, Y returns a better solution than X. Does this make sense? What is the reason?
Relevant answer
Answer
The No Free Lunch Theorem.
  • asked a question related to Combinatorial Optimization
Question
7 answers
I'm working on some optimal strategies for an environmental surveillance network. My solution is mostly based on meta-heuristics. I would like to know the advantages and disadvantages of heuristic versus meta-heuristic optimization.
Relevant answer
  • asked a question related to Combinatorial Optimization
Question
12 answers
Hello scientific community
Have you noticed the following?
[I note that when a new algorithm is proposed, most researchers rush to improve it and apply it to the same and other problems. So I ask: why improve the original algorithm if it suffers from weaknesses, and why do we need a new algorithm if an existing one already solves the same problems? I can understand it if the new algorithm solves a previously unsolved problem; otherwise, why?]
Therefore, I ask: does the scientific community need novel metaheuristic algorithms (MHs) beyond the existing ones?
I think we need to organize the existing metaheuristic algorithms and document the pros and cons of each one, along with the problems each has solved.
Duplicated algorithms should be retired, as should overly complex ones.
Derivative algorithms should be retired as well.
We need to benchmark the MHs, much like a benchmark test suite.
Also, we need to identify the unsolved problems; if you would like to propose a novel algorithm, please try to solve an unsolved problem, otherwise please stop.
Thanks, and I look forward to a reputable discussion.
Relevant answer
Answer
The last few decades have seen the introduction of a large number of "novel" metaheuristics inspired by different natural and social phenomena. While metaphors have been useful inspirations, I believe this development has taken the field a step backwards, rather than forwards. When the metaphors are stripped away, are these algorithms different in their behaviour? Instead of more new methods, we need more critical evaluation of established methods to reveal their underlying mechanics.
  • asked a question related to Combinatorial Optimization
Question
3 answers
Let's say we have an undirected graph with only weighted nodes/vertices (representing an attribute/measure) and unweighted edges (where all nodes are fully connected).
Are there any theorems for representing and computing the shortest path that traverses at least 2 nodes?
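One standard trick, sketched below in Python with networkx, is to push the node weights onto the edges (each edge takes half the weight of its two endpoints) and then run ordinary Dijkstra; the node weights, graph and endpoints here are made up for illustration, and half of each endpoint's weight has to be added back at the end.

import networkx as nx

node_weights = {"a": 3.0, "b": 1.0, "c": 4.0, "d": 2.0}        # assumed node weights

G = nx.complete_graph(list(node_weights))                      # fully connected, unweighted edges
for u, v in G.edges():
    G[u][v]["weight"] = 0.5 * (node_weights[u] + node_weights[v])

path = nx.dijkstra_path(G, "a", "c")                           # cheapest node-weighted route a -> c
length = nx.dijkstra_path_length(G, "a", "c")
total = length + 0.5 * (node_weights["a"] + node_weights["c"]) # add back the endpoint halves
print(path, total)

Note that if all node weights are positive and the graph is complete, the cheapest path visiting at least two nodes is simply the edge between the two lightest nodes; the edge-reweighting trick above is what generalizes to sparser graphs.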
  • asked a question related to Combinatorial Optimization
Question
18 answers
Hi,
I'm interested in solving a nonconvex optimization problem that contains continuous variables and categorical variables (e.g. materials) available from a catalog.
What are the classical approaches? I've read about:
- metaheuristics: random trial and error ;
Are you aware of other systematic approaches?
Thank you,
Charlie
Relevant answer
Answer
Z. Nedelková, C. Cromvik, P. Lindroth, M. Patriksson, and A.-B. Strömberg, "A splitting algorithm for simulation-based optimization problems with categorical variables", Engineering Optimization, vol. 51 (2019), pp. 815-831.
It might help!
  • asked a question related to Combinatorial Optimization
Question
51 answers
We don't have a result yet, but what is your opinion on what it may be? For example, P = NP, P != NP, or P vs. NP is undecidable? Or, if you are not sure, feel free to simply state: I don't know.
Relevant answer
Answer
The answer is P=NP
  • asked a question related to Combinatorial Optimization
Question
9 answers
What are the standard parameter values of the commonly used classifiers such as Support-vector machine, k-nearest neighbors, Decision tree, Random forest?
Relevant answer
  • asked a question related to Combinatorial Optimization
Question
4 answers
The choice of something to ruin can be an implicit choice as to what should be preserved.  A heuristic for preservation can thus lead to a heuristic for ruin.  I've had what I think is a very interesting result for what to preserve (common solution components) in the context of genetic crossover operators that use constructive (as opposed to iterative) heuristics.  I tried to share it with the Ruin and Recreate community with no success.
I guess my real question is -- How should I Ruin and Recreate this research to make it more relevant to Ruin and Recreate researchers?
Relevant answer
Answer
In general, my impression of a ruin and recreate process would be to change assignment(s) to decision variables (randomly or otherwise) in a feasible solution, effectively ruining it (in value) and perhaps making the solution infeasible. Then, some sort of repair operator(s) are applied to place the solution back in the feasible region of the solution space.
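To make that loop concrete, here is a minimal, self-contained Python sketch of the ruin-and-recreate cycle just described, on a toy makespan (load-balancing) problem; the job data, the ruin fraction and the greedy repair rule are all invented for illustration.

import random

jobs = [random.randint(1, 20) for _ in range(30)]   # toy processing times
machines = 4

def cost(assign):
    """Makespan: load of the most loaded machine."""
    loads = [0] * machines
    for j, m in enumerate(assign):
        loads[m] += jobs[j]
    return max(loads)

def ruin(assign, fraction=0.3):
    """Ruin: un-assign a random fraction of the jobs (mark them with None)."""
    ruined = assign.copy()
    for j in random.sample(range(len(jobs)), int(fraction * len(jobs))):
        ruined[j] = None
    return ruined

def recreate(assign):
    """Recreate: greedily put each unassigned job on the currently least loaded machine."""
    loads = [0] * machines
    for j, m in enumerate(assign):
        if m is not None:
            loads[m] += jobs[j]
    repaired = assign.copy()
    for j, m in enumerate(repaired):
        if m is None:
            k = min(range(machines), key=lambda i: loads[i])
            repaired[j] = k
            loads[k] += jobs[j]
    return repaired

best = current = [random.randrange(machines) for _ in jobs]
for _ in range(2000):
    candidate = recreate(ruin(current))
    if cost(candidate) <= cost(current):   # simple acceptance rule (could be SA-style instead)
        current = candidate
    if cost(current) < cost(best):
        best = current
print("best makespan found:", cost(best))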
  • asked a question related to Combinatorial Optimization
Question
14 answers
Any decision-making problem when precisely formulated within the framework of mathematics is posed as an optimization problem. There are so many ways, in fact, I think infinitely many ways one can partition the set of all possible optimization problems into classes of problems.
1. I often hear people label meta-heuristic and heuristic algorithms as general algorithms (I understand what they mean), but I'm wondering: can we apply these algorithms to arbitrary optimization problems from any class? More precisely, can we adjust/re-model any optimization problem in a way that permits us to attack it with the algorithms in question?
2. Then I thought, well, if we assume that the answer to 1 is yes, then by extending the argument we can also re-formulate any given problem to be attacked by any algorithm we desire (of course, at a cost), and then it is just a useless tautology.
I'm looking for different insights :)
Thanks.
Relevant answer
Answer
The change propagation models may give a great idea
  • asked a question related to Combinatorial Optimization
Question
12 answers
Hello everyone,
We have the following integer programming problem with two integer decision variables, namely x and y:
Min F(f(x), g(y))
subject to the constraints
x <= xb,
y <= yb,
x, y non-negative integers.
Here, the objective function F is a function of f(x) and g(y). Both the functions f and g can be computed in linear time. Moreover, the function F can be calculated in linear time. Here, xb and yb are the upper bounds of the decision variables x and y, respectively.
How do we solve this kind of problem efficiently? We are not looking for any metaheuristic approaches.
I appreciate any help you can provide. Particularly, it would be helpful for us if you can provide any materials related to this type of problem.
Regards,
Soumen Atta
Relevant answer
Answer
The method for solving this problem depends on the properties of the functions F, f, g (convex, concave, or otherwise).
If the properties are not known, the only method is to enumerate all values of the variables.
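A minimal sketch of that exhaustive search in Python: since x and y are bounded integers and F, f, g are cheap to evaluate, one can simply scan the (xb+1)*(yb+1) grid. The functions and bounds below are placeholders; if F happens to be nondecreasing in both arguments, x and y can even be optimized independently.

def f(x):
    return (x - 7) ** 2          # placeholder for the real f

def g(y):
    return abs(y - 3)            # placeholder for the real g

def F(a, b):
    return a + 2 * b             # placeholder for the real F

xb, yb = 100, 50                 # assumed upper bounds on x and y

best_val, best_xy = float("inf"), None
for x in range(xb + 1):
    fx = f(x)                    # reuse f(x) across all y
    for y in range(yb + 1):
        val = F(fx, g(y))
        if val < best_val:
            best_val, best_xy = val, (x, y)

print(best_xy, best_val)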
  • asked a question related to Combinatorial Optimization
Question
33 answers
Assume we found an approximate solution A(D),
where A is a metaheuristic algorithm and D is the concrete data of your problem.
How close is the approximate solution A(D) to an optimal solution OPT(D)?
Relevant answer
  • asked a question related to Combinatorial Optimization
Question
1 answer
Hi,
I've recently read that the use of random keys in RKGA (Encoding phase) is useful for problems that require permutations of the integers and for which traditional one- or two-point crossover presents feasibility problems.
For example: Consider a 5-node TSP instance. Traditional GA encodings of TSP solutions consist of a stream of integers representing the order in which nodes are to be visited by the tour. But one-point crossover, for example, may result in children with some nodes visited more than once and others not visited at all.
My question is: if we don't have feasibility problems and all our solutions are feasible, is it still appropriate to apply an RKGA?
Relevant answer
Answer
Hi.
Random-key-based (RK) approaches are used when a swarm or evolutionary algorithm encodes its individuals with real-valued vectors and is applied to solve some permutation problem (solutions are sequences of integers, and RK maps a real-valued vector to one sequence of integers). If the algorithm uses integer-based individuals, RK is not used, but you should guarantee that the disturbing operators (crossover, mutation, or others) generate only feasible solutions.
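For illustration, the random-key decoding itself is essentially one line of NumPy: the permutation is the argsort of the real-valued key vector, so any real-valued crossover or mutation always decodes to a feasible permutation.

import numpy as np

def decode_random_keys(keys):
    """Nodes are visited in increasing order of their keys."""
    return list(np.argsort(keys))

rng = np.random.default_rng(0)
keys = rng.random(5)                       # one individual, e.g. for a 5-node TSP
print(keys, "->", decode_random_keys(keys))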
Commonly, 1-point crossover (and other crossover operators) creates infeasible integer-based offspring, and a repair mechanism is needed.
Please check the paper by Puljić and Manger, "Comparison of eight evolutionary crossover operators for the vehicle routing problem", for a detailed description of genetic operators used to generate feasible integer-based offspring.
Furthermore, RK is also used when integer-based vectors are used as individuals but the disturbing operators (for example, the mutation operator employed by the differential evolution algorithm) create real-valued offspring, and these new individuals need to be repaired.
In my opinion, if your algorithm uses integer-based individuals and your crossover and mutation operators generate only feasible solutions, neither RK nor any repair mechanism needs to be applied.
Best regards!
  • asked a question related to Combinatorial Optimization
Question
14 answers
What is the effect of increasing or decreasing population size and the number of iterations on the quality of solutions and the computational effort required by the Swarm Intelligence algorithms?
Relevant answer
Answer
Increasing the population size leads (at least initially) to increased diversity of the population, but if you increase it too much, the population may converge only slowly to the global optimum. If the population is too small, it will lead to entrapment in local optima. It is widely suggested to increase the population in order to avoid local optima, especially if the objective function has many parameters. As for the number of iterations, an increased number will increase the chance of convergence to the global optimum, but you may start doing useless computations. In that case, you might need a good termination criterion that prevents useless computations after the global optimum has been reached.
For the latter, read the following paper.
Spanakis, C., Mathioudakis, E., Kampanis, N., Tsiknakis, M., & Marias, K. (2016). A Proposed Method for Improving Rigid Registration Robustness. International Journal of Computer Science and Information Security, 14(5), 1.
  • asked a question related to Combinatorial Optimization
Question
5 answers
I would be grateful if anyone could tell me how the McCormick error can be reduced systematically. In fact, I would like to know how we can efficiently recognize and obtain a tighter relaxation for bi-linear terms when we use McCormick envelopes.
For instance, consider the simple optimization problem below; the results show a big McCormick error! Its MATLAB code is attached. Min Z = x^2 - x, s.t. -1 <= x <= 3 (optimal: x* = 0.5, Z* = -0.25; McCormick: x* = 2.6!)
Relevant answer
Answer
Hi Morteza,
I am a new researcher in this field and I am working on similar problems. I hope you will find the following paper useful:
Cheers,
Zaid
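For reference, the standard McCormick envelope for a bilinear term w = x*y, with x in [xL, xU] and y in [yL, yU], consists of the four linear inequalities
w >= xL*y + x*yL - xL*yL
w >= xU*y + x*yU - xU*yU
w <= xU*y + x*yL - xU*yL
w <= xL*y + x*yU - xL*yU
and its worst-case gap over the box is (xU - xL)(yU - yL)/4, so the usual systematic way to tighten it is to partition the variable ranges into smaller pieces (piecewise McCormick, with a binary variable selecting the active piece). For a square term such as x^2 in the example above, one common additional tightening is the convex inequality w >= x^2 (or, in an LP, its tangent cuts w >= 2*x0*x - x0^2 at trial points x0), which removes most of the error.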
  • asked a question related to Combinatorial Optimization
Question
6 answers
My genetic algorithm converges to the optimal solution (the global minimum is known beforehand) after a very small number of iterations (4 to 5). Is this considered premature convergence?
Relevant answer
Answer
Since you know the global minimum beforehand the implementation is correct. I guess you are using local optimization along with global optimization.
  • asked a question related to Combinatorial Optimization
Question
9 answers
I am presenting a comparison of 10 metaheuristics coded in Java for solving large instances of the Variable Sized Bin Packing Problem, also with independent costs, but I need published best results to compare against. None of the reviewed articles, nor Monacci's PhD thesis, publishes the optimal or at least the best-known solution for this particular problem for every combination of item sets and bin types.
Thanks in advance!
Relevant answer
Answer
All the answers are great
  • asked a question related to Combinatorial Optimization
Question
7 answers
Hi All
I have the stress output of a structural analysis plotted against x (the x range is constant in all cases), which is a curve with minima and maxima.
Changing the model characteristics (stiffness, etc.) and doing a batch run, how could I code the optimization?
preferably in Python
Relevant answer
Answer
Hi Farzad Torabi, I think you first need a clear idea of your optimization problem. What do you want to optimize (only mass, or multiple objectives?), what are your variables and how many (thickness, topology, ..., continuous or discrete), and what are the constraints (how many, linear or nonlinear) you want to consider? Then roughly select an optimization method (gradient-based, evolutionary, ...). Then find an optimization toolbox providing a method you want (nlopt, pyOpt, pyOptSparse, SciPy, MIDACO, ...).
In general you have to provide a function returning your objective and the constraints (e.g. max stresses, ...) based on a given variable vector. Sometimes the objective and the constraints are split into two functions, depending on the toolbox.
Now, in the function providing the objective and constraints, you have to set up the FE run with the new variables (e.g. change the input file) and then solve the system by running the FEM. After the FEM has finished, you read the results back into Python (e.g. reading an ASCII file) and return the desired data as the return values of your function.
Starting the FEM from Python is possible via the built-in "subprocess" module. Via e.g. subprocess.run(["path/to/executable", "input_file.xyz"]) you run your executable with a given input file, depending on your FEM solver.
I hope this answers your question. More details depend on your concrete optimization problem.
Kind regards, Sascha
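As a rough illustration of that workflow (the file names, the FE executable, the input-file template and the stress limit below are placeholders for your own solver, not a working setup), the batch run can be wrapped in a single Python function and handed to a SciPy optimizer:

import subprocess
import numpy as np
from scipy.optimize import minimize

def run_fem(x):
    with open("input_template.txt") as f:              # template with a {thickness} field
        text = f.read().format(thickness=x[0])
    with open("run.inp", "w") as f:
        f.write(text)
    subprocess.run(["path/to/fem_solver", "run.inp"], check=True)   # your FE executable
    return np.loadtxt("stresses.out")                  # stresses written out by the solver

def objective(x):
    return x[0]                                        # e.g. minimize thickness as a mass proxy

def stress_constraint(x):
    return 250.0 - run_fem(x).max()                    # max stress <= 250 (assumed limit)

res = minimize(objective, x0=[5.0], method="COBYLA",
               constraints=[{"type": "ineq", "fun": stress_constraint}])
print(res.x, res.fun)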
  • asked a question related to Combinatorial Optimization
Question
11 answers
I'm trying to identify which approach would work best to select a set of elements that have different features that minimise a certain value. To be more specific, I might have a group of elements with Feature 1, 2, 3, 4 and another group with Feature 2, 3, 4, 5.
I'm trying to minimise the overall value of Feature 2 and 3, and I also need to pick a certain number of elements of each group (for instance 3 from the first group and 1 from the second).
From the research I did it seems that combinatorial optimization and integer programming are the best suited for the job. Is there any other option I should consider? How should I set up the problem in terms of cost function, constraints, etc.?
Many thanks,
Marco
Relevant answer
Answer
Differential evolution is a good method; you could try it for this problem.
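For the question above, a minimal integer-programming sketch (with invented data and the PuLP modelling library; the group sizes and feature values are placeholders) could look as follows: one binary variable per element, a cardinality constraint per group, and the sum of features 2 and 3 of the selected elements as the objective.

import pulp

# element -> (group, feature2, feature3); values invented for illustration
elements = {
    "e1": (1, 4.0, 2.0), "e2": (1, 1.0, 3.0), "e3": (1, 2.5, 2.5), "e4": (1, 5.0, 1.0),
    "e5": (2, 3.0, 0.5), "e6": (2, 1.5, 4.0),
}
pick_per_group = {1: 3, 2: 1}      # pick 3 from group 1 and 1 from group 2

prob = pulp.LpProblem("selection", pulp.LpMinimize)
x = {e: pulp.LpVariable(f"x_{e}", cat="Binary") for e in elements}

# objective: total of features 2 and 3 over the selected elements
prob += pulp.lpSum((f2 + f3) * x[e] for e, (g, f2, f3) in elements.items())

# exact number of picks per group
for g, k in pick_per_group.items():
    prob += pulp.lpSum(x[e] for e, (eg, _, _) in elements.items() if eg == g) == k

prob.solve()
print([e for e in elements if x[e].value() == 1], pulp.value(prob.objective))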
  • asked a question related to Combinatorial Optimization
Question
3 answers
I have 3 objectives in an ILP model. The first has to be maximized, and the second and third should be minimized.
I would like to compute the knee point of the generated Pareto front.
Do you have an idea about the formula?
thanks
Relevant answer
Answer
The "knee point" is a useful point on the non-inferior set (NIS) to scrutinize, because it represents a point of most rapid change between the objective functions, and can give the decision maker a sense of where there may be a useful "compromise solution", but it has been vastly over-emphasized as the "ideal point" solution to a MO problem. The preferred solution from the pareto front will reflect the values and preference of the decision maker, and there is absolutely no a priori reason why that would correspond to the knee of the curve.
That said, a rather direct mechanical method to zero in on the knee of the curve for 2-objective problems is the Non-inferior Set Estimation (NISE) method of Cohon et al. (1979), Water Resources Research, https://doi.org/10.1029/WR015i005p01001, which has been extended and generalized to 3 or more objectives in subsequent work.
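One common operational definition, sketched below in Python for two objectives with made-up data, takes the knee as the normalized Pareto point farthest (in perpendicular distance) from the straight line through the two extreme points of the front; for three objectives the same idea uses the plane through the three single-objective optima, and a maximized objective is first negated so that everything is minimized. This is only one of several knee definitions in the literature.

import numpy as np

front = np.array([[1.0, 9.0], [2.0, 6.0], [3.0, 4.0], [5.0, 3.0], [9.0, 2.5]])  # both minimized

f = (front - front.min(axis=0)) / (front.max(axis=0) - front.min(axis=0))  # normalize to [0, 1]
a = f[np.argmin(f[:, 0])]          # extreme point: best first objective
b = f[np.argmin(f[:, 1])]          # extreme point: best second objective
ab = b - a
# perpendicular distance of every front point to the line through a and b
d = np.abs(ab[0] * (f - a)[:, 1] - ab[1] * (f - a)[:, 0]) / np.linalg.norm(ab)
print("knee point:", front[np.argmax(d)])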
  • asked a question related to Combinatorial Optimization
Question
3 answers
As far as I know, the conventional cutting stock problem can easily be solved by column generation.
Now I want to transport these cut pieces by truck, and this time we want to minimize the number of trucks used (of course, less stock waste leads to fewer trucks).
How can I formulate this in a single ILP that meets the orders for cuts and also minimizes the number of truck carriers used?
Are there any papers or other resources that could help me with this problem?
Relevant answer
Answer
Dear Amir,
I suggest you see the links and attached files on this topic.
-Solving Two-stage Robust Optimization ... - Optimization Online
-1 Column Generation and the Cutting Stock Problem
-Chapter 3: Discrete Optimization – Integer Programming - Polimi
-Column generation strategies and decomposition approaches to the ...
Best regards
  • asked a question related to Combinatorial Optimization
Question
2 answers
Please, can anyone contribute ideas on how I can use DEA to solve graph-algorithm problems such as network flow, project management, scheduling, routing, etc.?
Mainly, I need information on how to identify the input and output variables in this kind of problem (where there is no complete knowledge of the I/O).
I think I can identify my DMUs.
I would be glad to receive contributions on an appropriate general DEA model for solving combinatorial optimization problems of this kind.
Thanks
Relevant answer
Answer
DEA is generally applied to assess the relative performance of a set of decision-making units (DMUs) that consume inputs to produce outputs under a similar production technology. This is valid whenever the systems under consideration fit within such a structure. In graph-related problems such as scheduling, routing, etc., the objectives are completely different. Although there is a flow of material over the network, which may suggest that a node can be assimilated to a DMU, here we are more concerned with finding an optimal route that satisfies constraints that may be as complex as the practical problem under study. In large-scale problems, one may think of DEA for building clusters of nodes so as to reduce the problem size and, hence, the related computational cost. I think this aspect is worth investigating.
  • asked a question related to Combinatorial Optimization
Question
15 answers
As we know, any MILP/MINLP problem is feasible only at some points of its search space. Consequently, I think it is not possible to obtain its Jacobian and Hessian matrices. As a result, for MILP/MINLP problems it is not important to know their convexity. Further, since MILP/MINLP problems have a feasible search space consisting of a set of discrete points, these problems are NON-CONVEX.
Can you comment on these observations? Am I right, or am I missing something very important?
You comments about the above observations are highly appreciable.
With sincere regards,
M. N. Alam
Relevant answer
Answer
Hi everyone. I want just to make some comments regarding this issue, hoping it is still relevant for those involved in the conversation and for those who will look for a reference in it. As noted above, the fact that at least one of the variables is constrained to have discrete values makes any Mixed-integer problem by definition non-convex. Usually the way to deal with these kind of non-convex problems is through enumeration, which requires us to explore all the different possible values of those discrete variables. Since the number of combinations of the possible discrete solutions grows exponentially (e.g. with n binary variables you have 2^n possible combinations) one needs to rely on tools to avoid exploring all these combinations. Among these tools, we can solve the continuous relaxation of the original problem (defining the discrete variables as continuous variables bounded by the original discrete bounds) which can inform us on bounds for the original problem. Each time we find a solution that satisfies the original integrality constraints, i.e. that all the variables originally discrete have a discrete value and satisfy all the constraints, we find a feasible solution to the original problem which also provides a primal bound (upper bound in case you are minimizing). Every other solution of the continuous relaxation, that is solving a problem in a larger feasible region compared to the original, provides a dual bound (lower bound if minimizing) to the original problem. Using these bounds we can reduce the search space for our enumeration algorithms resulting in the famous Branch and Bound methods.
Soon after developing these methods for Mixed Integer Linear Programs (which by definition have a convex continuous relaxation), it was identified that the actual boundary between polynomially solvable and non-polynomially solvable problems was not the linear/nonlinear boundary but the convex/non-convex boundary, meaning that there are polynomial algorithms to solve nonlinear convex programs. This meant that one could solve convex MINLP problems using the same Branch and Bound techniques with each node being polynomially solvable. Finally, other methods relying on the decomposition of the MINLP into MILP and NLP subproblems were developed. The decomposition of the original problem was done through the gradient-based approximation of the nonlinear functions (outer approximation), which produces a supporting hyperplane, i.e. an inequality that does not cut off any part of the feasible region, provided the constraints define a convex feasible region.
So, to the original question, whether if it is necessary to test convexity of the MILP/MINLP problems we have:
  • By definition the problem is nonconvex.
  • If we use the convention of naming these problems convex based on their continuous relaxation: for MILP a convexity test is not necessary, since they are convex under this definition; for MINLP it depends on which solver/algorithm you are using.
  • In case you know it is convex, you can apply an algorithm which is guaranteed to return you the global optimal solution (e.g. Outer approximation). These algorithms are not even guaranteed to return a feasible solution in case you have a nonconvex MINLP, but are considerably efficient.
  • There are algorithms available to return the global optimal solution of nonconvex MINLP problems. These rely on convexification of the nonconvex terms in your problem, and you end up paying a cost in terms of computational time for these kind of guarantees.
Together with some colleagues we have recently published a paper comparing solvers for convex MINLP. It is open access and you can find it here . Please let us know if you have any other question regarding this topic.
Cheers!
  • asked a question related to Combinatorial Optimization
Question
5 answers
In what ways can one provide good initialization points for optimization problems that are NP-hard? Are there heuristics out there for good initialization strategies which may lead to good solutions quickly?
Relevant answer
Answer
Thanks Michael Patriksson, I would definitely try that on some of my problems. Thanks Iago Augusto Carvalho, I will look into approximation algorithms; from experience I know heuristics are usually good as well, but I haven't tried approximation algorithms. Maybe I can try random search first, since it is the simplest, according to Todor Balabanov.
  • asked a question related to Combinatorial Optimization
Question
4 answers
Hi Dear colleagues
When dealing with some optimization problems such as the Timetabling Problem (TTP), it seems the problem can be modelled either as a CSP or as a MOCOP.
What can be the consequences of one choice or the other?
Sincerly, Djamel
Relevant answer
Answer
Dear Djamel,
each CSP is just like a puzzle game with puzzle pieces (objects) and different domains (see the first link below), whereas MOCOPs are decision problems which have multiple objective functions, not objects (see the second article).
Hope it helps,
  • asked a question related to Combinatorial Optimization
Question
7 answers
I have started programming the binary bat algorithm (BBA) to solve the knapsack problem. I have a misunderstanding of the position concept in binary space:
Vnew = Vold + (Current - Best) * f;
S = 1 / (1 + Math.exp(-Vnew));
X(t+1) = 1 if S > Rnd, else 0
The velocity-updating equation uses both the position from the previous iteration (Current) and the global best position (Best). In the continuous version of BA the position is a real number, but in the binary version the position of a bat is represented by a binary number; in the knapsack problem it indicates whether the item is selected or not. In the binary version, a transfer function is used to transform the real-valued velocity into a binary decision. I'm confused whether the position in BBA is binary or real. If binary, then (Current - Best) can only be 1 - 0, 0 - 1, 1 - 1, etc.; if real, then how do we get the continuous representation when there is no continuous equation to update the position (in the original BA, the position-updating equation is X(t+1) = X(t) + Vnew)?
Relevant answer
Answer
Unless you are doing just an "exercise", I discourage you from trying "new" metaheuristics for knapsack. Besides being a widely studied problem, there are very good knapsack-specific algorithms. Check David Pisinger's webpage for codes and test instance generators.
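On the question's actual confusion (independent of whether a metaheuristic is the right tool here): in the binary variant the velocities stay real-valued while the positions stay strictly binary, and the sigmoid of the velocity is used only as a flip probability. A stripped-down Python sketch of one position update, with frequencies, loudness and pulse rate of the full bat algorithm omitted, could look like this:

import numpy as np

rng = np.random.default_rng(1)
n_items = 8

position = rng.integers(0, 2, n_items)        # binary position: item selected or not
best = rng.integers(0, 2, n_items)            # global best position (also binary)
velocity = np.zeros(n_items)                  # real-valued velocity

f = rng.uniform(0.0, 1.0)                     # frequency, drawn as in the standard BA
velocity = velocity + (position - best) * f   # velocity update: stays a real vector
s = 1.0 / (1.0 + np.exp(-velocity))           # sigmoid transfer function
position = (rng.random(n_items) < s).astype(int)   # new position is binary again
print(position)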
  • asked a question related to Combinatorial Optimization
Question
4 answers
Please share your experience or literature on the performance of harmony search for solving NP-hard problems, scheduling problems, and optimization problems.
Relevant answer
Answer
This particular algorithm has been strongly criticised by other researchers (alongside many other so-called "nature-inspired algorithms").
The first paper I linked to proposes some convincing arguments that harmony search is a special case of evolutionary strategy. The second is a general criticism of metaphor based metaheuristics.
  • asked a question related to Combinatorial Optimization
Question
13 answers
Dear All,
I have obtained the individual level PISA data.
Existing works use individual-level math, reading and science scores for estimation.
However, I do not know how to calculate these scores. In the PISA codebook, to take the case of math, there are 5 plausible values, namely PV1MATH, PV2MATH, PV3MATH, PV4MATH and PV5MATH. Do researchers take the mean of these as an individual's math score?
However, the country-level mean mathematics score is not the same as the mean calculated from the individual scores obtained in this way.
Relevant answer
Answer
Hi, you can use the R library called Intsvy or BIFIsurvey. Also it's possible to work with IDB analyzer.
  • asked a question related to Combinatorial Optimization
Question
12 answers
Given a set of m (>0) trucks and a set of k (>=0) parcels. Each parcel has a fixed payment for the trucks (it may be the same for all or different for each). The problem is to pick up the maximum number of parcels such that the profit of each truck is maximized. There may be 0 to k parcels in the service region of a particular truck. Likewise, a parcel can be located in the service region of 0 to m trucks. There are certain constraints, as follows.
1. Each truck can pick up exactly one parcel.
2. A parcel can be loaded onto a truck if and only if it is located within the service region of the truck.
The possible cases are as follows
Case 1. m > k
Case 2. m = k
Case 3. m < k
As far as I know, to prove a given problem H NP-hard, we need to give a polynomial-time reduction from a known NP-hard problem L to H. Therefore, I am in search of a similar NP-hard problem.
Kindly suggest some NP-hard problem which is similar to the stated problem. Thank you in advance.
Relevant answer
Answer
Let p_{ij} denote the profit gained if parcel j is loaded onto truck i. If the parcel cannot be loaded onto that particular truck, we just set p_{ij} to zero. It looks like we just need to solve the following 0-1 linear program:
maximise
\sum_{i=1}^m \sum_{j=1}^k p_{ij} x_{ij}
subject to
\sum_{i=1}^m x_{ij} \le 1 (for all j)
\sum_{j=1}^k x_{ij} \le 1 (for all i)
x_{ij} \in \{0,1\} (for all i and all j).
If that's right, the problem is very easy. As stated by Przemysław and Helmut, it is equivalent to the linear assignment problem, which in turn is equivalent to maximum-weight bipartite matching.
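Since the model above is a linear assignment / bipartite matching problem, it can also be solved directly with the Hungarian algorithm; below is a small Python sketch with an invented profit matrix, where p[i][j] = 0 encodes that parcel j is outside truck i's service region (assigning it adds nothing, which is the same as leaving that truck empty).

import numpy as np
from scipy.optimize import linear_sum_assignment

p = np.array([
    [5, 0, 2, 0],    # truck 0 can reach parcels 0 and 2
    [0, 3, 0, 4],    # truck 1 can reach parcels 1 and 3
    [1, 0, 0, 6],    # truck 2 can reach parcels 0 and 3
])

rows, cols = linear_sum_assignment(p, maximize=True)
for i, j in zip(rows, cols):
    if p[i, j] > 0:
        print(f"truck {i} -> parcel {j} (profit {p[i, j]})")
print("total profit:", p[rows, cols].sum())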
  • asked a question related to Combinatorial Optimization
Question
13 answers
I would like to change the following linear programming model to restrict the decision variables to two integers, namely a and b (a<b):
minimize (1,1,...,1)' e
(Y-Zx) > -e
-(Y-Zx) > -e
where Y is an n-dimensional vector, Z is an n \times k matrix and x is a k-dimensional vector; e represents an n-dimensional vector of errors which need to be minimized. In order to make sure that the x's can only take values equal to "a" or "b", I have added the following constraints, keeping the original LP formulation:
-a/(b-a) - (1/2)' + I/(b-a) x > -(E/(b-a) +(1/2)')
-(-a/(b-a) - (1/2)' + I/(b-a) x ) > -(E/(b-a) +(1/2)')
where I stands for a k \times k identity matrix and E is a k-dimensional vector of deviations which needs to be minimized (subsequently, the objective would be minimize (1,1...,1)' (e; E)).
But there is still no guarantee that the resulting optimal vector consists only of a and b. Is there any way to fix this? Is there any way to give a higher level of importance to the two latter constraints than to the two former ones?
Relevant answer
Answer
Dear Fatemeh,
Maybe I am getting too late into this discussion. What I would do is to solve the following problem P:
minimize (1,1,...,1)' e
(Y-Zx) > -e
-(Y-Zx) > -e
a <= x_i <= b for each vector variable component
and programming myself a simple branch and bound algorithm:
1. Solve P
2. check whether some variable x_i has a value that is different from a or b.
3. (branching) Say you find that x_t is a < x_t < b. Then solve the following two problems:
P1
minimize (1,1,...,1)' e
(Y-Zx) > -e
-(Y-Zx) > -e
x_t = a
P2
minimize (1,1,...,1)' e
(Y-Zx) > -e
-(Y-Zx) > -e
x_t = b
4. Perform step 3 on the solutions of all problems you have.
5. (bounding) . Stop branching if:
- the solution of a problem is infeasible
- all variables in the solution have values either a or b ("integer" solution)
- the solution of the problem still contains variables with values different to a or b, but the objective is worse than a previously found "integer" solution.
I know it is brute force, but it will keep the structure of your problem and will guarantee what you want. And it is very easy to program.
Hope it helps.
Vlad
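A rough, self-contained Python sketch of this branch-and-bound (the data Y, Z and the values a, b are randomly generated placeholders, and the LP at each node is solved with SciPy) might look like the following; it follows Vlad's steps 1-5 literally, so it is meant only to show the mechanics, not to be efficient.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, k = 12, 3
Z = rng.normal(size=(n, k))
Y = Z @ np.array([2.0, 5.0, 2.0]) + 0.1 * rng.normal(size=n)   # made-up data
a, b = 2.0, 5.0                                                # the two allowed values

c = np.concatenate([np.zeros(k), np.ones(n)])                  # minimize sum of errors e
A_ub = np.block([[Z, -np.eye(n)], [-Z, -np.eye(n)]])           # Zx - e <= Y  and  -Zx - e <= -Y
b_ub = np.concatenate([Y, -Y])

best_val, best_x = np.inf, None

def solve_node(fixed):
    """Solve the LP relaxation with some x_i fixed to a or b, then branch on the rest."""
    global best_val, best_x
    bounds = [(fixed.get(i, a), fixed.get(i, b)) for i in range(k)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    if not res.success or res.fun >= best_val:     # infeasible node or pruned by bound
        return
    x = res.x[:k]
    frac = [i for i in range(k) if min(abs(x[i] - a), abs(x[i] - b)) > 1e-6]
    if not frac:                                    # every x_i equals a or b: new incumbent
        best_val, best_x = res.fun, x
        return
    i = frac[0]                                     # branch: fix x_i to a, then to b
    solve_node({**fixed, i: a})
    solve_node({**fixed, i: b})

solve_node({})
print("best x:", best_x, "objective:", best_val)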
  • asked a question related to Combinatorial Optimization
Question
3 answers
Does anybody know of an optimization tool which has a built in spatial branch and bound solver?
Relevant answer
Answer
Many nonconvex MINLPs can be solved easily nowadays with spatial branch-and-bound solvers.  For a recent review of optimization solvers that can solve this class of problems, see Kılınç, M. and N. V. Sahinidis, State-of-the-art in mixed-integer nonlinear programming, in T. Terlaky, M. Anjos and S. Ahmed (eds.), Advances and Trends in Optimization with Engineering Applications, MOS-SIAM Book Series on Optimization, SIAM, Philadelphia, 2017, pp. 273-292.
For freely available tools through the NEOS server, see https://neos-server.org/neos/solvers/index.html#minco
For some recent comparisons of MINLP solvers, see http://plato.asu.edu/ftp/minlp.html
  • asked a question related to Combinatorial Optimization
Question
4 answers
I am looking for MATLAB code for ant colony optimization or simulated annealing that can handle mixed-integer variables.
Thanks.
Relevant answer
Answer
Dear MARUTI PATIL, please check the following link  http://yarpiz.com/
You will find what you want, or check this link
  • asked a question related to Combinatorial Optimization
Question
6 answers
Can anyone suggest some references (preferably papers or articles) that discuss the sensitivity of computational intelligence optimization algorithms, more specifically soft computing techniques, to the initial solution?
It seems that, regardless of the type of technique, e.g. evolutionary, swarm, network-based, etc., the quality of the final solution of some techniques is affected by the initial solutions, while others show less sensitivity. Please let me know if you have any comments, suggestions or information on this topic.
Relevant answer
Answer
you can check the following paper:
Amini, M. M., Racer, M., & Ghandforoush, P. (1998). Heuristic sensitivity analysis in a combinatoric environment: An exposition and case study. European journal of operational research, 108(3), 604-617.
  • asked a question related to Combinatorial Optimization
Question
5 answers
Dear peers,
I have encountered a difficulty and I am eagerly seeking your instruction and advice.
When resources are sufficient, i.e., the resource constraints in the master problem can easily be satisfied, the algorithm converges to the linear relaxation upper bound (see Fig. 3);
When resources are scarcer, the algorithm never converges (see Figs. 1 and 2);
Can anybody tell me whether this is the "tailing-off" effect or degeneracy?
Thank you!
Relevant answer
Answer
Dear Zhu,
Several problems solved by column generation are discussed in the papers below,
which describe a stable Primal Dual Column Generation Method (PDCGM):
With best regards,
Jacek Gondzio
  • asked a question related to Combinatorial Optimization
Question
3 answers
I'm trying to identify which approach would work best to make an optimal decision among three layers in a multilayer network.
From the research I did, it seems that combinatorial optimization and integer programming are best suited for the job. Is there any other option I should consider? How should I set up the problem, considering parameter indices and performance metrics, to make an optimal decision?
Many thanks in Advance
Rashmi
Relevant answer
Answer
There is no single optimization technique that can be called the best, so you have to choose according to what your application requires. Most optimization techniques involve two processes: one is exploitation and the other is exploration. It then depends on your application and what you want.
  • asked a question related to Combinatorial Optimization
Question
23 answers
I've faced a problem that may need a special formulation before using the MATLAB Optimization Toolbox. If we have a problem with the following expressions:
OBJ1 = min f(X)
OBJ2 = max f(X)
and I want the optimized value of X for both objective functions at the same time, is this possible with the toolbox?
If yes, then how?
Relevant answer
Answer
Dear Abdelmoumen,
As already pointed out by @Marek, @Przemysław and myself, the issue of the unit appears in the case of a linear combination of objectives.
>> "I don't understand why you are talking about this."
I am talking about this because you wrote
>>> "In optimization problems we don't care about the unit of the objective function.
Hence, I wanted to point out that we actually do need to care for the units if we should decide to convert the problem using a linear combination of objectives, even though such a conversion may not be the best option available.
  • asked a question related to Combinatorial Optimization
Question
6 answers
So that this could help with understanding theorems and their mathematical treatment.
In most books and articles, it is assumed that the reader has prior knowledge, and the difficulty is felt while reading the book or article. Sometimes these theorems and proofs are skipped while reading.
Relevant answer
Answer
A favourite - in its 3rd edition - is Nonlinear Programming: Theory and Algorithms by Bazaraa, Sherali, and Shetty, published by Wiley. It's very clear, and while not being up to date on the newest and most efficient methods, it provides very good basic material for developing methods, because it is quite thorough in explaining what can go wrong if you do not comply with the "rules", that is, what goes wrong if theory does not support your method. A - to you - natural-looking method may get stuck at the initial point, simply because you have not understood the basics of what it means to be optimal, or rather, what non-optimality means. I repeat - it is very good.
  • asked a question related to Combinatorial Optimization
Question
6 answers
Assume there is a function f(x), where x is the vector [n1, n2, ..., nm], ni is the number of balls in box i = {1, ..., m}, and sum(ni) = n, with f(x) being a nonlinear, non-convex function.
What is the complexity of the problem of finding the distribution of balls that maximizes f(x)?
Also, what is a good algorithm for solving this kind of problem? GA, PSO, etc.?
Relevant answer
Answer
Dear Victor,
First way: if the function is nonlinear, non-quadratic and non-convex (for linear and quadratic functions, much better search policies are available).
Second way: support vector machine learning via quadratic programming, or a genetic algorithm.
Here are some references on the subject.
- Hansen, Ostermeier, Gawelczyk (1995). On the adaptation of arbitrary normal mutation distributions in evolution strategies: The generating set adaptation. Sixth ICGA, pp. 57-64, Morgan Kaufmann
- Salomon (1996). "Reevaluating Genetic Algorithm Performance under Coordinate Rotation of Benchmark Functions; A survey of some theoretical and practical aspects of genetic algorithms." BioSystems, 39(3):263-278
Best regards
  • asked a question related to Combinatorial Optimization
Question
3 answers
Below I have attached two graphs of issues over time for a site in Chattanooga (one is all issues over time, and the other is all issues restricted to the date of the software update that causes the spikes). I am wondering what time-series method I could use to fit these data. I do not want to get rid of the "outliers"/spikes, because this is what I am trying to fit so that I can "lay" the graph over another site location and make a prediction about its future spike (therefore, I think smoothing the data or differencing defeats the purpose of what I am trying to do, but maybe I am misunderstanding). There are factors that come into play at Chattanooga and other sites, such as experience and size. Is it possible to incorporate these and fit them?
Any help is greatly appreciated! 
Relevant answer
Answer
Hi Charleigh,
if you want to do smoothing that prevents the spikes (at least to a parameterizable degree), maybe a Savitzky-Golay-Filter could be a suitable means for you. 
Best regards!
  • asked a question related to Combinatorial Optimization
Question
8 answers
After testing many instances, I found that when r = V / Vtotal <= ϕ (the golden ratio) the algorithm takes a lot of time to print out the result.
When the ratio r is very close to ϕ, I noticed that V / Vtotal = (V + Vtotal) / V (which represents the geometric relationship of the two quantities V and Vtotal in the golden ratio).
However, a few of the instances having a ratio r > ϕ can also take too long to print the results.
So can this problem be related to ϕ or not?
PS: I got the idea of comparing it to ϕ after checking this answer Lower bound on running time for solving 3-SAT if P = NP
Relevant answer
Answer
@Fabrizio Marinelli: I am just using a simple combinatorial branch-and-bound that generates every permutation of the list of items by swapping, and then applies the Next Fit heuristic.
  • asked a question related to Combinatorial Optimization
Question
4 answers
VRP is a combinatorial optimization problem. I hope to begin with symmetric distances, but I am new to programming in AMPL and need help.
Relevant answer
Answer
It's an old question, but did you have any luck with this one? Because I am now doing the same and am new to AMPL.
  • asked a question related to Combinatorial Optimization
Question
3 answers
Hi, 
I need some help with applying combinatorial optimization techniques to requirements prioritization and multi-criteria decision making.
Any articles, advice, or other useful material are all welcome.
Best regards
Relevant answer
Answer
I advise you to read the articles of the founder of the main approach for solving these problems, Thomas Saaty.
  • asked a question related to Combinatorial Optimization
Question
7 answers
Considering:
  • test problems
  • quality indicators used in the evaluation
Relevant answer
Answer
Simulated annealing is good, but the cooling schedule needs to be properly monitored because it determines the quality of the final output. Since it is problem-dependent, most of the time it must be studied carefully for accuracy.
  • asked a question related to Combinatorial Optimization
Question
5 answers
What are the classes of posets that are closed under taking the ordinal sum of posets?
Relevant answer
Answer
Let X and Y be two posets on the disjoint sets P and Q. The disjoint union P+Q is defined as:
(1) if (x 'less than, or equal to' y) in P, for x and y in P, then x <= y in P+Q, and
(2) if (x 'less than, or equal to' y) in Q, for x and y in Q, then x <= y in P+Q.
The ordinal sum P*Q is defined as the partial order on P U Q that satisfies (1) and (2), plus the additional condition: (3) (x 'less than, or equal to' y) in P*Q for x in P and y in Q.
The disjoint union '+' is commutative, but the ordinal sum '*' is not.
  • asked a question related to Combinatorial Optimization
Question
4 answers
and how ?
Thank you
Relevant answer
Answer
OK I see now. Thank you.
But I am more interested in using game theory for finding such solutions not for sorting them. So I still have no answer.
  • asked a question related to Combinatorial Optimization
Question
6 answers
In ant colony optimisation, at each decision point an ant makes a selection from from a set of options. Heuristic information is available at each decision point to bias selection of options suspected to be favourable without precluding selection of less favourable options. If the favourability of subsequent decisions is known to be influenced by options selected at previous decision points, can the local heuristic information be updated to reflect this? Is there a known technique or category of techniques to handle this?
Relevant answer
Answer
Although the heuristic value is often precalculated as a priori knowledge about a problem and remains static, it is possible to calculate the heuristic value dynamically.
One such approach was used by Maniezzo (Vittorio Maniezzo, Exact and approximate nondeterministic tree-search procedures for the quadratic assignment problem, INFORMS Journal on Computing (1999), 11(4), 358-369).
He was using ANTS, a special version of ACO, to solve the quadratic assignment problem, which is a minimization problem. While constructing a partial solution, he calculated a component's heuristic value by assessing what the lower bound for the solution would be if this component were added to the solution.
If you decide to read this paper, you might be confused by the rule for calculating the probabilities of choosing a particular component in the solution-construction phase. This is specific to the ANTS version of ACO, but the principle of dynamically calculating the heuristic value can be used for any other ACO version, including those using the random proportional rule (e.g. MAX-MIN Ant System, Three-bound Ant System, Best-Worst Ant System, etc.)
  • asked a question related to Combinatorial Optimization
Question
4 answers
I am working on the Inverse Protein Folding Problem.
I have created a database of protein sequences in which every sequence is up to 65% identical to its native sequence; I need to select the sequences that can fold into the native structure.
Is there an algorithm that can solve this optimization problem, and which programming language is suitable for implementing it? Any pointers will be appreciated.
Relevant answer
Answer
It cannot be done; we do not yet understand the exact rules of protein folding. We only know some basic principles, and sometimes that is enough to fold some small proteins if we are lucky and have a lot of computational power.
If I understood correctly what you have done:
You had sequences (e.g. from the PDB) and then randomly changed up to 35% of the residues, and now you would like to know which ones will fold into a structure similar to the "seed"?
Then I would say: almost none. Even small changes (if not done with caution), e.g. 3%, can destroy the fold (and function) of the protein (most genetic disorders are point mutations). We even know of examples where this 3% can completely reverse the fold (from an alpha-helical to a beta-sheet protein).
Of course, you can use a homology-based approach to filter your sequences, to find those which should in principle resemble your "seed", but that is far from saying that they fold into the same structure; they will be similar (e.g. 3-4 A). This approach is useful if you want to analyse an individual protein, but if you want to derive some general rules from it, then it will not help much. Moreover, people have already introduced alignment information into most protein folding analyses, so to be better you need to add something new.
  • asked a question related to Combinatorial Optimization
Question
7 answers
I've already completed the algorithms for vehicle routing problems, specifically the last-mile problem. But I wonder, are there any dos and don'ts in developing my own system to solve the vehicle routing problem with an algorithm I developed myself?
Thank you for the response in advance.
Relevant answer
Answer
Have a look at these implementations, which are all combinatorial, permutation-type optimization problems solved using various algorithms:
Despite the differences between these problems, mathematically they are almost equivalent in terms of solution representation: the solutions of all the problems mentioned above can be represented as permutations.
  • asked a question related to Combinatorial Optimization
Question
14 answers
It would be helpful if anybody could point me to techniques (deterministic and non-deterministic) that have solved the vehicle routing problem well.
Many thanks.
Patricia 
Relevant answer
Answer
Hello Patricia,
The recent book by Toth and Vigo (2014) is a definitely recommended option for a description of the most recent state-of-the-art  heuristic and exact algorithms for deterministic or stochastic problems.
From the heuristic standpoint, you may also find this survey helpful: "Vidal, et al. Heuristics for multi-attribute vehicle routing problems: a survey and synthesis. European Journal of Operational Research. 231(1), 1–21, 2013". We focused on analyzing the successful strategies of recent state-of-the-art methods, rather than trying to put all these algorithms into boxes with name-tags such as SA, Tabu, VNS, ILS, GA, among others. Looking at the recent literature, you will indeed observe that the wide majority of state-of-the-art methods are hybridizations of several concepts rather than one pure implementation of a classical metaheuristic. As a simple rule of thumbs, a classical and successful recipe is to combine 1) efficient “intensification” procedures via local searches with 2) “diversifications” methods such as crossovers, shaking, restarts, or decomposition phases. For small and medium problem instances (50 to 1000 deliveries), any efficient method built on these concepts should be able to produce solutions within 1% of the best known ones.
@Alfonso, I do not agree with your comment about “Genetic algorithms are not really advisable, since their spirit is to simulate biological processes more than to optimize functions”.  As demonstrated by many experiments on the VRP in the past decade, the ability to 1) cross solutions together and inherit good solution fragments, and 2) maintain a population of candidates has greatly helped to achieve solutions of extremely high quality. Whether this follows accurately or not the concept of biological processes is not really relevant to our goal, which is to produce good solutions to an optimization problem.
  • asked a question related to Combinatorial Optimization
Question
16 answers
Binary Variable * Real Variable = ?
1) Does it lead to an equivalent 'nonlinear' term (and thus => MINLP), or
2) does it lead to an equivalent 'integer' (i.e., 'discrete') variable (and thus => MILP)?
Which one is correct, and why?
What is your idea for dealing with this product by adding constraints so that the resulting problem becomes a MILP (if it is not one already)?
Regards,
Morteza Shabanzadeh
Relevant answer
Answer
Your product just tries to express that the continuous variable should be zero if the binary variable is zero.
Within the framework of an MILP, you could just as well avoid the product by means of generalized upper bounds on the continuous variable - if it has a natural upper bound - AND if the variable does not occur elsewhere without that product.
In that case just add the two inequalities
binary * lower_bound <= continuous <= binary * upper_bound
to the problem and your variable is forced to zero if binary is zero.
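If you want to try this out, here is a small sketch using the PuLP modelling library in Python (assuming it is installed); the bounds, variable names, and toy objective are made up, but the two constraints are exactly the inequalities above and force the continuous variable to zero whenever the binary variable is zero.
```python
from pulp import LpProblem, LpVariable, LpMaximize, value

LOWER, UPPER = 0, 10          # assumed natural bounds on the continuous variable

prob = LpProblem("binary_times_continuous", LpMaximize)
x = LpVariable("x", lowBound=LOWER, upBound=UPPER)   # continuous variable
b = LpVariable("b", cat="Binary")                    # binary switch

# The two inequalities from the answer: b*LOWER <= x <= b*UPPER,
# so x is forced to 0 when b = 0 and ranges over [LOWER, UPPER] when b = 1.
prob += x >= LOWER * b
prob += x <= UPPER * b

# Toy objective: reward x but penalize switching b on.
prob += x - 3 * b

prob.solve()
print(value(x), value(b))
```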
  • asked a question related to Combinatorial Optimization
Question
12 answers
Hello everyone!
I have a non-linear scalar function that depends on a binary sequence of a fixed number N of elements:
e.g.:
010011001010010110 --> 1.5
110111010010000101 --> 0.8
010101101110101011 --> 1.9
How can I find the combination that yields the maximum value of the scalar function?
I've never dealt with combinatorial optimization; could you suggest some books that might help me solve this problem?
Thank you,
Alessandro.
Relevant answer
Answer
If you do not know anything about the function, you should simply enumerate all possible combinations of zeros and ones. In the case you describe there are only 2^18 combinations, that is, 262,144. So compute the value for each combination, then keep the best one. If you know nothing about the function, there is in fact very little you can do that is better than that. I hope your calculations are relatively fast! :-) 
Now, if you need to do this repeatedly in some fashion for many such strings of zeroes and ones, then we need to come up with something better. But then YOU need to tell us how those combinations of 1 and 0 are generated - perhaps there is a pattern that can be utilized somehow.
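A minimal brute-force sketch of that suggestion in Python, assuming the black-box function is exposed as `f(bits)` taking a tuple of 0/1 values; the body of `f` below is only a placeholder:
```python
from itertools import product

N = 18

def f(bits):
    """Placeholder for the real black-box scalar function."""
    return sum(bits) / (1 + bits[0])   # illustrative only

best_value, best_bits = float("-inf"), None
for bits in product((0, 1), repeat=N):      # 2**18 = 262,144 combinations
    v = f(bits)
    if v > best_value:
        best_value, best_bits = v, bits

print(best_value, "".join(map(str, best_bits)))
```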
  • asked a question related to Combinatorial Optimization
Question
8 answers
I am trying to calculate the most compact way of grouping a set of pixels together. Does anyone have a readable guide on how to do this?
My initial results are given below for clusters of up to 10 pixels. Results are expressed in terms of the sum of unique interpixel distances for a given cluster (e.g. for a 3-pixel cluster it is the sum of the distances ab, ac, bc).
1 = 0
2 = 1
3 = 3.4
4 = 6.8
5 = 13.5
6 = 21.1
7 = 31.4
8 = 44.1
9 = 58.9
10 = 78.5
Relevant answer
Answer
I looked at the paper. So at least for L1 distances we do not have uniqueness, the incremental approach partially fails (some optimal solutions do not extend to optimal solutions with one more pixel, while some optimal solutions are not extensions of optimal solutions with one fewer pixel), and a 4x4 square would yield an L1 sum of 320, while the optimal result of 318 is displayed in the paper. Good luck.
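For reference, the quantity tabulated in the question - the sum of distances over all unordered pixel pairs - can be computed as in the following sketch; the coordinates are just an example 2x2 block, and the `metric` switch (my own addition) lets you use the Manhattan distance for the L1 variant discussed above.
```python
from itertools import combinations
import math

def compactness(pixels, metric="euclidean"):
    """Sum of distances over all unordered pairs of pixel coordinates."""
    def d(p, q):
        if metric == "manhattan":   # L1 distance
            return abs(p[0] - q[0]) + abs(p[1] - q[1])
        return math.hypot(p[0] - q[0], p[1] - q[1])
    return sum(d(p, q) for p, q in combinations(pixels, 2))

# Example: a 2x2 block of pixels gives 4*1 + 2*sqrt(2) ~= 6.83,
# matching the value reported for 4 pixels in the question.
print(compactness([(0, 0), (0, 1), (1, 0), (1, 1)]))
```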
  • asked a question related to Combinatorial Optimization
Question
14 answers
I would be grateful if anyone could suggest a reference where I can find a formal definition of "binary discrete optimization".
Relevant answer
Answer
BLP, MPEC and EPEC are usually regarded as nonlinear optimisation problems (i.e., problems with continuous variables and nonlinear constraints).  They are however NP-hard in general, since the complementarity constraints are non-convex.
See for example here:
  • asked a question related to Combinatorial Optimization
Question
2 answers
This ordinary differential equation should depend on a parameter to be varied, and it is of second order, as are the majority of dynamical equations. After some infinitesimal change of the parameter, the period of the solution drops by a factor of 1.5, 2, 2.5, ... It would also be good if the anti-periodicity disappeared as a result.
Relevant answer
Answer
Such examples exist. For example, consider a dynamical system defined on the surface of a torus. Under a small change of the system's parameter, a periodic solution can turn into an almost periodic solution (an irrational winding of the torus) or into another periodic solution.
  • asked a question related to Combinatorial Optimization
Question
2 answers
I adopted a branch-and-price (B&P) algorithm to solve an integer programming model, and the branching strategy is to branch on the original variables. However, branching on an original variable does not force the corresponding master-problem variable to become integral - it remains fractional - so branching continues on the same original variable, which results in an infinite loop. Has anybody encountered this problem, and how can it be solved?
Relevant answer
Answer
  • asked a question related to Combinatorial Optimization
Question
32 answers
Normally, to enhance the performance of meta-heuristic algorithms, local-search-based techniques are integrated with them. Instead of using local search techniques, if multiple meta-heuristic algorithms (like GA, ACO, or PSO) are integrated with each other, is that a better approach?
Relevant answer
Answer
Combining different (meta-)heuristics results in something that is known as hybrid (meta-)heuristic. There is a lot of literature on how to properly hybridize two (or more) classical (meta-)heuristics, and also an annual workshop on this topic, see http://iwi.econ.uni-hamburg.de/hm14/ for last year's workshop.
  • asked a question related to Combinatorial Optimization
Question
3 answers
I am aware that the minimum (cardinality) vertex cover problem on cubic (i.e., 3-regular) graphs is NP-hard. Say a positive integer k>2 is fixed. Has there been any computational complexity proof (aside from the 3-regular result, which corresponds to k=3) that shows the minimum (cardinality) vertex cover problem on k-regular graphs is NP-hard (e.g., 4-regular)? Since k is fixed, you aren't guaranteed the cubic graph instances needed to show the classic result I mentioned above.
Note that it would be straightforward to see that this problem is NP-hard from the result I mentioned at the start if the statement were for arbitrary regular graphs (since 3-regular is a special case), but we don't get that when k is fixed.
Does anybody know of any papers that address the computational complexity of minimum (cardinality) vertex cover on a k-regular graph, when k is fixed/constant? I have been having difficulties trying to find papers that address this (in the slew of documents that cover the classic result of minimum (cardinality) vertex cover on cubic graphs being NP-hard.)
My goal is to locate a paper that addresses the problem for any k>2 (if it exists), but any details would be helpful.
Thank you so much!
Relevant answer
Answer
I see that there is a reduction proof on the CSTheory StackExchange website already. But if it is a reference that you need, here it is:
Fricke, G. H., Hedetniemi, S. T., Jacobs, D. P.,
Independence and irredundance in k-regular graphs.
Ars Combin. 49 (1998), 271–279.
Summary from MathSciNet: "We show that for each fixed k≥3, the INDEPENDENT SET problem is NP-complete for the class of k-regular graphs. Several other decision problems, including IRREDUNDANT SET, are also NP-complete for each class of k-regular graphs, for k≥6.''
Now, if the summary is correct, the authors prove that the decision version of the independent set problem is NP-complete for the class of k-regular graphs. Therefore, the optimization problem of finding a maximum independent set is NP-hard for the same class. And of course, the minimum vertex cover is the complement of the maximum independent set. Hope this helps.
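To see the complement relationship concretely, here is a tiny sanity check in plain Python on K4 (a 3-regular graph chosen purely as an example): a vertex set is independent exactly when its complement touches every edge.
```python
# Edges of K4, a 3-regular graph on vertices {0, 1, 2, 3}.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
vertices = {0, 1, 2, 3}

def is_independent_set(s):
    """No edge has both endpoints inside s."""
    return all(not (u in s and v in s) for u, v in edges)

def is_vertex_cover(c):
    """Every edge has at least one endpoint inside c."""
    return all(u in c or v in c for u, v in edges)

independent = {0}                       # a maximum independent set in K4
cover = vertices - independent          # its complement
print(is_independent_set(independent), is_vertex_cover(cover))  # True True
```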
  • asked a question related to Combinatorial Optimization
Question
4 answers
I have a VRP model that considers stochastic simultaneous pickup and delivery services in a public bus transport system. How can I solve it using a GA? How do I define the population, the crossover, and the mutation using the Matlab toolbox, or using a piece of Matlab code?
I have attached the detail description of the model.
Relevant answer
Answer
In Matlab, a GA is already provided and is practically "user-friendly": you can modify the population size, the crossover/mutation functions, etc. by simply defining them in the options part of the syntax, as long as you can model the fitness function properly. I am not an expert in OR, but judging from your description you might also try a multi-objective GA (which is also available in Matlab).
Go to the Help documentation in the software; it is very good and provides detailed explanations of how to do this.
  • asked a question related to Combinatorial Optimization
Question
11 answers
To MAXIMISE certain specified input to P1 from P2?
In this context, a production system comprises a set of IF-THEN rules, a working memory, and an execution process including a conflict resolution convention
A working memory is a set of variable/value combinations with each variable appearing at most once
Here an IF-THEN rule is of the form {IF A/x & B/y & C/z … THEN write K/t to <specified wm>}
Execution process for a PS – REPEATEDLY, each rule in the PS  is simultaneously tested -- if the IF part of a rule fully matches in the working memory then the variable/value combination of its THEN part is written to the specified working memory.
Two production systems interact if the rules of each sometimes write to the other. NB rules always match only on the working memory of their own PS.
A conflict resolution convention is needed when contradictory variable/value combinations (i.e. same variable, different values) can be simultaneously written to the same working memory. In this context an equi-probable random choice is to be made between the alternatives
To “MAXIMISE the input to P1 from P2” means to maximise the frequency with which any rule in P2 writes some particular variable/value combination (e.g. K8/0) to P1
Note that the algorithm to be designed (call it MECE) can read P2 but NOT alter it. All MECE can do is to add additional rules to P1 (possibly with new variables and values in consequence)
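For what it is worth, here is one way the execution process described above could be prototyped in Python; the data structures (a rule as a condition dictionary plus a target working memory and a variable/value pair to write) and the function names are my own assumptions rather than part of the question's specification.
```python
import random

# A working memory is a dict of variable -> value (each variable at most once).
# A rule is (conditions, target_wm_name, (variable, value)):
# IF all variable/value pairs in `conditions` hold in the rule's own WM,
# THEN write (variable, value) to the working memory named `target_wm_name`.

def step(systems):
    """One synchronous execution step over all production systems.

    `systems` maps a PS name to a pair (rules, working_memory)."""
    pending = {}   # target WM name -> {variable: set of proposed values}
    for name, (rules, wm) in systems.items():
        for conditions, target, (var, val) in rules:
            # Rules match only on the working memory of their own PS.
            if all(wm.get(v) == x for v, x in conditions.items()):
                pending.setdefault(target, {}).setdefault(var, set()).add(val)
    # Conflict resolution: equi-probable random choice among contradictory writes.
    for target, writes in pending.items():
        for var, vals in writes.items():
            systems[target][1][var] = random.choice(list(vals))

# Tiny illustration: P2 writes K8/0 to P1 whenever A/1 holds in P2's memory.
systems = {
    "P1": ([], {}),
    "P2": ([({"A": 1}, "P1", ("K8", 0))], {"A": 1}),
}
step(systems)
print(systems["P1"][1])   # {'K8': 0}
```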
Relevant answer
Answer
I was probably assuming some constraint that you hadn't stated. Maybe you can make an improvement there.
  • asked a question related to Combinatorial Optimization
Question
1 answer
I know that the FAP has already been solved using graph coloring, but there is another variant of the FAP, often called the real frequency assignment problem.
I would like to know what the main distinguishing features of this variant are compared to graph coloring. Best regards.
Relevant answer
Answer
You could try asking Fabrizio Rossi, who works at the University of L'Aquila in Italy:
  • asked a question related to Combinatorial Optimization
Question
6 answers
I am using scenred (GAMS) to apply scenario reduction to a stochastic LP model. I have 10800 scenarios. I have three uncertain parameters; two of them are in the objective function and one of them is a right-hand-side value. The two-stage stochastic programming solution should be worse than the wait-and-see result, which is the expected value of the solutions obtained from all 10800 scenarios. But in my case, I found a stochastic programming solution better than the wait-and-see one. I know this is impossible, but I couldn't figure out the problem in my model or code. I checked all probabilities, scenarios, and the tree. They all seem right. Also, the GAMS code doesn't give any error. Has anyone used the scenred tool of GAMS and encountered a similar problem before?
Relevant answer
Answer
Thank you for the answers. I found the problem. My model is a MIP model. The reason the wait-and-see solution is worse than the stochastic programming solution is the scenario reduction: I had compared the result of the reduced scenario set (300 scenarios) with that of 10000 scenarios. That is why the comparison doesn't make sense.
  • asked a question related to Combinatorial Optimization
Question
7 answers
I am wondering if anybody can provide any handy resources (for a theoretical computer scientist) in relation to the convex cost flow problem?  I have found texts (mostly my combinatorial optimization texts on my shelf), but they sparingly discuss the problem and its algorithmic properties, and the ones I've found so far take a very deep dive into it without explaining a whole lot or providing any examples.  I get the formulation of the problem, but a bit more would be helpful.
 I'm new to the problem, and wondering if anybody can suggest some good texts, or papers that do a good job of covering the problem, and some of the major algorithmic results for this problem (computational complexity, and algorithms primarily), or maybe applications of it being used to see how researchers have applied it to solve other problems in theoretical computer science, combinatorial optimization, or operations research.
If you have resources or suggestions, that would be helpful!  Thank you so much, and have a beautiful day :).  
Relevant answer
Answer
Ah yes, I forgot earlier. Dimitri Bertsekas has made his network flows book available for free. It has good stuff on single-commodity network flows - dual methods in particular. Here is a link to download the book from: 
Again - good luck!
  • asked a question related to Combinatorial Optimization
Question
6 answers
Hello,
I would like to know how I can find the number of variables (especially the integer ones) in GAMS (General Algebraic Modeling System) code.
Does the GAMS platform have any option to show the number of variables?
Any help would be appreciated.
Regards,
Morteza
Relevant answer
Answer
Hi,
If you are working with the GAMS IDE (the Integrated Development Environment of GAMS that runs in Windows), then you can find the number of variables (and their type) in the log file. The log file is the window that pops up when you run your code. You can look for the number of columns (i.e. the variables) and the number of integer columns (i.e. the number of integer variables).
That is the easiest way for me to identify the number of variables, but maybe someone else has a better way to do it.
Regards,
Laura
  • asked a question related to Combinatorial Optimization
Question
8 answers
We are currently working on ways of assessing the quality of heuristic solutions to combinatorial optimization problems. SOET (see the question) is one way of doing so. There is a recent review in the Journal of Heuristics by Giddings, Rardin, and Uzsoy (2014). But can someone suggest ongoing work on this topic?
Relevant answer
Answer
I prefer "to be certain that b0 is somewhere between b1 and a1" with confidence level = 1 and even don't consider "almost certain that b0 is between b1 and s1" with any confidence level < 1 . This is not a case where the theory of probability. As example, you can image the following statement: "This theorem was proven with confidence level = 0,999" . As to me, my answer would be NO. Here we can consider only
a single case: "This theorem was proven with confidence level = 1". For all other confidence level < 1 theorem not be considered to be proven.  Well, you can say a phrase: "I found solution with confidence level = 0.999". I think that answer would be NO. Thus, my resume is the following: we must consider seriously only reliable events (a1) with confidence level = 1. To as s1 > a1, this metric can be consedered as any "recomendation". To as me, in case of b1 = s1 I will not interrupt the search of finding b0 by claiming a phrase: "I found an optimal solution b0".
In conclusion, I again want to remind about great importance of finding quality a1 to evaluate a heuristic solution b1 at the moment.
  • asked a question related to Combinatorial Optimization
Question
7 answers
I recall once seeing a paper or talk in which it is shown that it is NP-complete to decide whether the clique number of a graph is equal to its chromatic number. 
Does anyone know the correct reference?
Relevant answer
Answer
I found it eventually, after a long search:
S. Busygin & D.V. Pasechnik (2006) On NP-hardness of the clique partition - independence number gap recognition and related problems. Discrete Mathematics, 306, 460-463.
An alternative NP-hardness proof is given in this recent paper:
D. Cornaz & P. Meurdesoif (2014) Chromatic Gallai identities operating on Lovász number. Mathematical Programming, 144, 347-368.
  • asked a question related to Combinatorial Optimization
Question
6 answers
Someone told me that a tournament with 16 players is the best, but I'm not sure if this is correct and why. Does anybody know more about it?
Relevant answer