# Combinatorial Optimization - Science topic

Explore the latest questions and answers in Combinatorial Optimization, and find Combinatorial Optimization experts.

Questions related to Combinatorial Optimization

I am working on the flexible job shop scheduling problem, which I want to solve using a hybrid algorithm such as VNS-based NSGA-II or TS/SA + NSGA-II. Can I use pymoo to implement this? pymoo has a specific structure for calling its built-in algorithms, but in the case of customization, combining two algorithms, how can I use the pymoo framework?

Over the last few decades, numerous metaheuristic optimization algorithms have been developed with varying sources of inspiration. However, most of these metaheuristics have one or more weaknesses that affect their performance, for example:

- Getting trapped in a local optimum and being unable to escape.
- No trade-off between exploration and exploitation.
- Poor exploitation.
- Poor exploration.
- Premature convergence.
- Slow convergence rate.
- Computationally demanding.
- High sensitivity to the choice of control parameters.

Metaheuristics are frequently improved by adding efficient mechanisms aimed at increasing their performance, like opposition-based learning, chaotic maps, etc. Which efficient mechanisms would you suggest?
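Opposition-based learning, one of the mechanisms mentioned, is simple enough to sketch: for each candidate x in [lb, ub], also evaluate the "opposite" point lb + ub − x (per dimension) and keep the better of the two. A minimal sketch, where the sphere function is just a placeholder objective:

```python
def opposition_based_step(population, lb, ub, fitness):
    """For each candidate, also evaluate its opposite point
    x_opp = lb + ub - x (per dimension) and keep the better of the two."""
    improved = []
    for x in population:
        x_opp = [lo + hi - xi for xi, lo, hi in zip(x, lb, ub)]
        improved.append(min(x, x_opp, key=fitness))
    return improved

# Example: minimize the sphere function on the box [-5, 3]^2
sphere = lambda v: sum(c * c for c in v)
new_pop = opposition_based_step([[2.0, 1.0], [-4.0, -4.0]],
                                [-5, -5], [3, 3], sphere)
print(new_pop)  # [[2.0, 1.0], [2.0, 2.0]]
```

The step costs one extra fitness evaluation per candidate, which is usually cheap compared to the diversity it buys in early iterations.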

Hello! I am going to conduct a study on the application of the Vehicle Routing Problem in the real world. However, I am struggling with how to construct my networks. I would like to ask how to define the edges/arcs/links between the vertices.

For example, what should the edge between city A and city B represent? Most of the literature uses travel times based on road distances from city A to city B. However, there are many paths from city A to city B. One way to address this is to use the shortest path from city A to city B. Are there any alternatives to address this issue? What should the edge between two vertices represent?
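The shortest-path convention described above can be sketched as follows (the road network here is hypothetical): model the roads as a weighted graph including junctions, then take the VRP edge weight between two customer cities as their shortest road distance, e.g. via Dijkstra's algorithm:

```python
import heapq

def dijkstra(road_graph, source):
    """Shortest road distance from `source` to every reachable node.
    road_graph: {node: [(neighbor, distance), ...]}"""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in road_graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical road network: customers A and B plus junctions J1, J2.
roads = {
    "A":  [("J1", 2.0), ("J2", 5.0)],
    "J1": [("A", 2.0), ("B", 4.0)],
    "J2": [("A", 5.0), ("B", 2.0)],
    "B":  [("J1", 4.0), ("J2", 2.0)],
}
# VRP edge weight A-B = shortest of the road paths (via J1: 6, via J2: 7)
print(dijkstra(roads, "A")["B"])  # 6.0
```

Running Dijkstra once from every customer yields the full customer-to-customer distance matrix, which then becomes the complete graph most VRP formulations assume.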

In robust optimization, random variables are modeled as uncertain parameters belonging to a convex uncertainty set and the decision-maker protects the system against the worst case within that set.

In the context of nonlinear multi-stage max-min robust optimization problems:

What are the most suitable robustness models: strict robustness, cardinality-constrained robustness, adjustable robustness, light robustness, regret robustness, or recoverable robustness?

How can max-min robust optimization problems be solved efficiently without linearization/approximations? Which algorithms?

How to approach nested robust optimization problems?

For example, the problem can be security-constrained AC optimal power flow.

I have designed the optimization experiment using Box-Behnken approach.

What should I do if any of the factor combinations fails, for example because aggregation occurs?

Should I redo the whole optimization, or is there a method to skip that particular factor combination?

And if I need to redo the whole experiment, what method should I use to evaluate boundary factor values? The screening methods I have seen require at least 6 factors to be screened.

Any help is appreciated.

Greetings.

Can anyone suggest an application of combinatorial optimization in real life? I am considering the TSP (Travelling Salesman Problem), Minimum Spanning Tree, etc.

I have a new idea (by a combination of a well-known SDP formulation and a randomized procedure) to introduce an approximation algorithm for the vertex cover problem (VCP) with a performance ratio of $2 - \epsilon$.

You can see the abstract of the idea in the attached file and the latest version of the paper at https://vixra.org/abs/2107.0045

I would be grateful if anyone could give me informative suggestions.

I am interested in the use of Extreme Value Theory (EVT) to estimate the global optima of optimization problems (using heuristic and metaheuristic algorithms); however, such studies are a bit difficult to find, since the use of EVT is not usually their main objective. Could you help me by sharing articles where this procedure is used? Thank you in advance.

Suppose that when we compare two metaheuristics X and Y on a given real problem, X returns a better solution than Y, while when we use the same metaheuristics to solve global optimization benchmark problems, Y returns a better solution than X. Does this make sense? What is the reason?

I proposed an algorithm for multicast in smart grids and I want to compare it with the optimal tree. I tried to write a model by myself, but I ended up with a shortest path tree instead of a Steiner tree. Any suggestions will be appreciated.

For an Integer Linear Programming problem (ILP), an irreducible infeasible set (IIS) is an infeasible subset of constraints, variable bounds, and integer restrictions that becomes feasible if any single constraint, variable bound, or integer restriction is removed. It is possible to have more than one IIS in an infeasible ILP.

Is it possible to identify all possible Irreducible Infeasible Sets (IIS) for an infeasible Integer Linear Programming problem (ILP)?

Ideally, I aim to find the MIN IIS COVER, which is the smallest cardinality subset of constraints to remove such that at least one constraint is removed from every IIS.

Thanks for your time and consideration.

Regards

Ramy

I am solving a bi-objective integer programming problem using the scalarization function (F1 + epsilon * F2). I have gotten all my results correct, but CPLEX warns that it cannot guarantee an exact result with this objective function: it may return approximate rather than exact non-dominated solutions. As I said, I am very sure that my results are right because I have already checked them. Do I need to prove that CPLEX gives the right result in my algorithm, even though it sometimes makes mistakes on large instances?

Thanks in advance.

I'm working on some optimal strategies for an environmental surveillance network. My solution is mostly based on metaheuristics. I need to know the advantages and disadvantages of heuristic and metaheuristic optimization.

Hello scientific community

Have you noticed the following?

[I note that when a new algorithm is proposed, most researchers rush to improve it and to apply it to the same and to other problems. So I ask: why keep the original algorithm if it suffers from weaknesses, and why propose a new algorithm if an existing one already solves the same problems? If the new algorithm solves a previously unsolved problem, it is welcome; otherwise, why?]

Therefore, I ask: does the scientific community need novel metaheuristic algorithms (MHs), rather than the existing ones?

I think we need to organize the existing metaheuristic algorithms, stating the pros and cons of each one and the problems each one has solved.

Duplicated algorithms should disappear, as should overly complex ones.

Derivative algorithms should also disappear.

We need to benchmark the MHs, much like a benchmark test suite.

Also, we need to identify the unsolved problems; if you would like to propose a novel algorithm, please try to solve an unsolved problem, otherwise please stop.

Thanks, and I look forward to a reputable discussion.

Let's say we have an undirected graph with only weighted nodes/vertices (representing an attribute/measure) and unweighted edges (where all nodes are fully connected).

Are there any theorems for representing and computing the shortest path that traverses at least 2 nodes?

Hi,

I'm interested in solving a nonconvex optimization problem that contains continuous variables and categorical variables (e.g. materials) available from a catalog.

What are the classical approaches? I've read about:

- metaheuristics: random trial and error;
- dimensionality reduction: https://www.researchgate.net/publication/322292981 ;
- branch and bound: https://www.researchgate.net/publication/321589074 .

Are you aware of other systematic approaches?

Thank you,

Charlie

We don't have a result yet, but what is your opinion on what it may be? For example, P = NP, P != NP, or P vs. NP is undecidable? Or, if you are not sure, it is fine to simply state: I don't know.

What are the standard parameter values of commonly used classifiers such as support vector machines, k-nearest neighbors, decision trees, and random forests?

The choice of something to ruin can be an implicit choice as to what should be preserved. A heuristic for preservation can thus lead to a heuristic for ruin. I've had what I think is a very interesting result for what to preserve (common solution components) in the context of genetic crossover operators that use constructive (as opposed to iterative) heuristics. I tried to share it with the Ruin and Recreate community with no success.

I guess my real question is -- How should I Ruin and Recreate this research to make it more relevant to Ruin and Recreate researchers?

Conference Paper The GENIE is out! (Who needs fitness to evolve?)

Any decision-making problem, when precisely formulated within the framework of mathematics, is posed as an optimization problem. There are many ways, in fact infinitely many ways, one can partition the set of all possible optimization problems into classes of problems.

1. I often hear people label metaheuristic and heuristic algorithms as general algorithms (I understand what they mean), but I wonder: can we apply these algorithms to arbitrary optimization problems from any class? More precisely, can we adjust/re-model any optimization problem in a way that permits us to attack it with the algorithms in question?

2. Then I thought: well, **if we assume that the answer to 1 is yes**, then by extending the argument we can re-formulate any given problem to be attacked by any algorithm we desire (of course with a cost), and then it is just a useless tautology.

I'm looking for different insights :)

Thanks.

Hello everyone,

We have the following integer programming problem with two integer decision variables, namely x and y:

Min F(f(x), g(y))

subject to the constraints

x <= x_b, y <= y_b,

x, y non-negative integers.

Here, the objective function F is a function of f(x) and g(y). Both f and g can be computed in linear time, and so can F. x_b and y_b are the upper bounds of the decision variables x and y, respectively. How do we solve this kind of problem efficiently? We are not looking for any metaheuristic approaches.
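Not an answer to the complexity question, but as a baseline: since x and y are bounded integers and f, g, F are all cheap to evaluate, exhaustive enumeration over the (x_b + 1)(y_b + 1) grid is always available when the bounds are modest. The instance below (F, f, g) is purely hypothetical:

```python
def solve_by_enumeration(F, f, g, x_b, y_b):
    """Exhaustively minimize F(f(x), g(y)) over 0 <= x <= x_b, 0 <= y <= y_b."""
    fx = [f(x) for x in range(x_b + 1)]   # precompute f once
    gy = [g(y) for y in range(y_b + 1)]   # precompute g once
    best = min((F(fx[x], gy[y]), x, y)
               for x in range(x_b + 1)
               for y in range(y_b + 1))
    return best  # (objective, x*, y*)

# Hypothetical instance: F(a, b) = |a - b|, f(x) = 2x + 1, g(y) = 3y
obj, x_star, y_star = solve_by_enumeration(
    lambda a, b: abs(a - b), lambda x: 2 * x + 1, lambda y: 3 * y, 10, 10)
print(obj, x_star, y_star)  # 0 1 1
```

If F is monotone in each argument, or separable, much better decompositions exist; this is only the O(x_b · y_b) fallback against which anything smarter can be checked.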

I appreciate any help you can provide. Particularly, it would be helpful for us if you can provide any materials related to this type of problem.

Regards,

Soumen Atta

Assume we have found an approximate solution A(D),

where A is a metaheuristic algorithm and D is the concrete data of your problem.

How close is the approximate solution A(D) to an optimal solution OPT(D)?

Hi,

I've recently read that the use of random keys in an RKGA (encoding phase) is useful for problems that require permutations of the integers and for which traditional one- or two-point crossover presents feasibility problems.

For example, consider a 5-node TSP instance. Traditional GA encodings of TSP solutions consist of a stream of integers representing the order in which nodes are to be visited by the tour. But one-point crossover, for example, may result in children with some nodes visited more than once and others not visited at all.

My question is: if we don't have feasibility problems and all our solutions are feasible, is it still correct to apply an RKGA?
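The random-key mechanism described above can be sketched in a few lines: each chromosome is a vector of reals in [0, 1), decoded into a permutation by ranking, so any crossover of two valid chromosomes still decodes to a valid tour:

```python
def decode(keys):
    """Decode a random-key chromosome into a permutation (tour order):
    node i is visited at the position given by the rank of keys[i]."""
    return sorted(range(len(keys)), key=lambda i: keys[i])

# 5-node TSP: two parents and a one-point crossover after position 2
p1 = [0.46, 0.91, 0.33, 0.75, 0.51]
p2 = [0.84, 0.32, 0.64, 0.20, 0.54]
child = p1[:2] + p2[2:]          # [0.46, 0.91, 0.64, 0.20, 0.54]
print(decode(child))             # [3, 0, 4, 2, 1] -- always a valid permutation
```

This decoding step is exactly what the feasibility guarantee rests on; if your encoding is already feasibility-preserving under crossover, the random keys add an extra decoding layer without that benefit.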

What is the effect of increasing or decreasing the population size and the number of iterations on the quality of solutions and the computational effort required by swarm intelligence algorithms?

I would be grateful if anyone could tell me how the McCormick error can be reduced systematically. In fact, I would like to know how we can efficiently recognize and obtain a tighter relaxation for bi-linear terms when we use McCormick envelopes.

For instance, consider the simple optimization problem below. The results show a big McCormick error! Its MATLAB code is attached.

Min Z = x^2 - x
s.t. -1 <= x <= 3
(optimal: x* = 0.5, Z* = -0.25; McCormick: x* = 2.6!)
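To make the gap concrete, here is a small numeric check, assuming the standard McCormick under-estimators for w = x·x on [-1, 3] (only the two lower envelopes matter when minimizing w − x, so the upper envelope is omitted):

```python
def mccormick_relaxed_obj(x, xL=-1.0, xU=3.0):
    """Relaxed objective w - x, where w sits on the McCormick
    under-estimators of the bilinear term w = x*x on [xL, xU]:
        w >= 2*xL*x - xL**2   and   w >= 2*xU*x - xU**2."""
    w = max(2 * xL * x - xL ** 2, 2 * xU * x - xU ** 2)
    return w - x

# Scan the interval: the relaxation bottoms out at x = 1 with value -4,
# far below the true optimum Z* = -0.25 at x* = 0.5 -- that gap is the error.
grid = [-1 + 4 * i / 400 for i in range(401)]
best_x = min(grid, key=mccormick_relaxed_obj)
print(best_x, mccormick_relaxed_obj(best_x))  # 1.0 -4.0
```

One systematic remedy is piecewise (partitioned) McCormick: splitting [xL, xU] into segments and applying the envelopes per segment tightens the relaxation, at the cost of extra binary variables.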

My genetic algorithm converges to the optimal solution (the global minimum is known beforehand) after a very small number of iterations (4 to 5). Is this considered premature convergence?

I'm introducing a comparison between 10 metaheuristics coded in Java for solving large instances of the Variable Sized Bin Packing Problem, also with independent costs, but I need published best results to compare against. None of the reviewed articles, nor Monacci's PhD thesis, publishes the optimal or at least the best known solution for this particular problem for every combination of item sets and bin types.

Thanx in advance!

Hi all,

I have the stress output of a structural analysis plotted against x (the x range is constant in all cases), which is a curve with minima and maxima.

Changing the model characteristics (stiffness, etc.) and doing a batch run, how could I code the optimization, preferably in Python?

I'm trying to identify which approach would work best to select a set of elements, with different features, that minimises a certain value. To be more specific, I might have a group of elements with features 1, 2, 3, 4 and another group with features 2, 3, 4, 5.

I'm trying to minimise the overall value of features 2 and 3, and I also need to pick a certain number of elements from each group (for instance, 3 from the first group and 1 from the second).

From the research I did, it seems that combinatorial optimization and integer programming are best suited for the job. Is there any other option I should consider? How should I set up the problem in terms of cost function, constraints, etc.?

Many thanks,

Marco

I have 3 objectives in an ILP model. The first has to be maximized, and the second and third should be minimized.

I would like to compute the knee point of the generated Pareto front.

Do you have an idea about the formula?

Thanks.

As I know, the conventional cutting stock problem can be easily solved by column generation.

Now I want to carry these cuts by truck, and this time we want to minimize the number of trucks used for transport (of course, less stock waste leads to fewer trucks).

How can this be formulated as a single ILP that meets the orders for cuts and also minimizes the number of truck carriers used?

Any papers or other resources to help me with this problem?

Please, can anyone contribute on how I can use DEA to solve graph algorithm problems such as network flow, project management, scheduling, routing, etc.?

Mainly, I need information on how to identify the input and output variables in these kinds of problems (where there is no complete knowledge of the I/O).

I think I can identify my DMUs.

I shall be glad to receive contributions on the appropriate general DEA model approach for solving combinatorial optimization problems of this kind.

Thanks

As we know, any MILP/MINLP problem is feasible only at some points of its search space. Consequently, it is not possible to obtain its Jacobian or Hessian matrices, as I understand it. As a result, for MILP/MINLP problems it is not important to know their convexity. Further, since MILP/MINLP problems have a feasible search space in the form of a set of discrete points, these problems are NON-CONVEX.

How can you justify my observations? Am I right? Or am I missing something very important?

Your comments about the above observations are highly appreciated.

With sincere regards,

M. N. Alam

In what ways can one provide good initialization points to optimization problems that are NP-hard? Are there heuristics for good initialization strategies which may lead to good solutions quickly?

Hi dear colleagues,

When dealing with some optimization problems, such as the Timetabling Problem (**TTP**), it can be seen either as a **CSP** or as a **MOCOP**. What can be the consequences of one or the other choice?

Sincerely, Djamel

I have started programming the binary bat algorithm (BBA) to solve the knapsack problem, and I have a misunderstanding of the position concept in binary space:

Vnew = Vold + (Current - Best) * f

S = 1 / (1 + exp(-Vnew))

X(t+1) = 1 if Rnd < S, else 0

The velocity-updating equation uses both the position from the previous iteration (Current) and the global best position (Best). In the continuous version of BA, the position is a real number, but in the binary version, the position of a bat is represented by a binary number. In the knapsack problem, it means whether the item is selected or not. In the binary version, a transfer function is used to turn the real-valued velocity into a binary decision. I'm confused whether the position in BBA is a binary or a real number. If binary, then (Current - Best) can only be 1 - 0, 0 - 1, 1 - 1, etc.; and if real, then how do we get the continuous representation, given there is no continuous equation to update the position (in the original BA, the position update is X(t+1) = X(t) + Vnew)?
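As far as I understand the usual BBA variant, the position stays binary and only the velocity is real-valued; the sigmoid merely converts the velocity into a probability of the bit being 1. A sketch of one update step (the frequency f would normally be redrawn each iteration; it is fixed here for simplicity):

```python
import math, random

def bba_position_update(x, v, x_best, f):
    """One binary-bat update for a single bat.
    x, x_best: binary lists (0/1); v: real-valued velocity list; f: frequency."""
    new_x, new_v = [], []
    for xi, vi, bi in zip(x, v, x_best):
        vi = vi + (xi - bi) * f               # velocity stays a real number
        s = 1.0 / (1.0 + math.exp(-vi))       # sigmoid transfer function
        xi = 1 if random.random() < s else 0  # position stays binary
        new_x.append(xi)
        new_v.append(vi)
    return new_x, new_v

random.seed(0)
x, v = bba_position_update([1, 0, 1], [0.0, 0.0, 0.0], [1, 1, 0], f=0.5)
print(x, v)  # x = [0, 0, 1] with this seed; v = [0.0, -0.5, 0.5]
```

So the per-bit difference (Current − Best) is indeed always in {−1, 0, 1}; that is fine, because it only nudges the accumulated real-valued velocity, which in turn shifts the flip probability.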

Please share your experience or literature on the performance of harmony search for solving NP-hard problems, scheduling problems, and optimization problems.

Dear All,

I have obtained the individual-level PISA data.

Existing works used the individual-level math, reading, and science scores for estimation.

However, I do not know how to calculate these scores. In the PISA codebook, taking math as an example, there are 5 plausible values: PV1MATH, PV2MATH, PV3MATH, PV4MATH, PV5MATH. Do researchers calculate the mean of them for individual math scores?

However, the country-level mean score in mathematics is not the same as the mean calculated from the scores obtained as above.

Given a set of *m* (> 0) trucks and a set of *k* (>= 0) parcels. Each parcel has a fixed payment for the trucks (which may be the same for all or different for all). The problem is to pick up the maximum number of parcels such that the profit of each truck is maximized. There may be 0 to *k* parcels in the service region of a particular truck. Likewise, a parcel can be located in the service region of 0 to *m* trucks. There are certain constraints, as follows.

1. Each truck can pick up exactly one parcel.

2. A parcel can be loaded onto a truck if and only if it is located within the service region of the truck.

The possible cases are as follows.

Case 1. *m* > *k*

Case 2. *m* = *k*

Case 3. *m* < *k*

As far as I know, to prove a given problem H NP-hard, we need to give a polynomial-time reduction from a known NP-hard problem L to H. Therefore, I am in search of a similar NP-hard problem.

Kindly suggest some NP-hard problem which is similar to the stated problem. Thank you in advance.

I would like to change the following linear programming model to restrict the decision variables to two integers, namely a and b (a<b):

minimize (1,1,...,1)' e

(Y-Zx) > -e

-(Y-Zx) > -e

where Y is an n-dimensional vector, Z is an n \times k matrix, and x is a k-dimensional vector; e represents an n-dimensional vector of errors which needs to be minimized. In order to make sure that the components of x can only take the values "a" or "b", I have added the following constraints, keeping the original LP formulation:

-a/(b-a) - (1/2)' + I/(b-a) x > -(E/(b-a) +(1/2)')

-(-a/(b-a) - (1/2)' + I/(b-a) x ) > -(E/(b-a) +(1/2)')

where I stands for a k \times k identity matrix and E is a k-dimensional vector of deviations which needs to be minimized (subsequently, the objective would be to minimize (1,1,...,1)' (e; E)).

But there is still no guarantee that the resulting optimal vector consists only of a's and b's. Is there any way to fix this problem? Is there any way to give a higher level of importance to the two latter constraints than to the two former ones?

Does anybody know of an optimization tool which has a built in spatial branch and bound solver?

I am looking for Matlab code for Ant colony optmization or Simulated annealing which can handle mixed integer variables.

Thanks.

Can anyone suggest some references (preferably papers or articles) that discuss the sensitivity of computational intelligence optimization algorithms, more specifically soft computing techniques, to the initial solution?

It seems that, regardless of the type of technique, e.g. evolutionary, swarm, network-based, etc., the quality of the ultimate solution of some techniques is affected by the initial solutions, while others show less sensitivity. Please let me know if you have any comments, suggestions, or information on this topic.

Dear peers,

I have encountered a difficulty and I am eagerly seeking your instructions and advice.

When resources are sufficient, i.e., the resource constraints in the master problem can easily be satisfied, the algorithm converges to the linear relaxation upper bound; see Fig. 3.

When resources are scarcer, the algorithm never converges; see Figs. 1 and 2.

Can anybody tell me whether this is the "tailing-off" effect or degeneracy?

Thank you!

I'm trying to identify which approach would work best to make an optimal decision among three layers in a multilayer network.

From the research I did, it seems that combinatorial optimization and integer programming are best suited for the job. Is there any other option I should consider? How should I set up the problem, considering parameter indices and performance metrics, to make the optimal decision?

Many thanks in Advance

Rashmi

I've faced a problem that may need a special formulation before using the MATLAB Optimization Toolbox. We have a problem with the following expression:

OBJ1 = min f(X)

OBJ2 = max f(X)

and I want the optimized value of X with both objective functions at the same time. Is this possible with the toolbox?

If yes, then how?

So that this could help in understanding theorems and their mathematical treatment.

In most books and articles, it is assumed that the reader has prior knowledge, and the difficulty is felt while reading the book or article. Sometimes these theorems and proofs are skipped while reading.

Assume that there is a function f(x), where x is the vector [n1, n2, ..., nm], ni is the number of balls in box i = {1, ..., m}, and sum(ni) = n; f(x) is a nonlinear, nonconvex function.

What is the complexity of the problem of finding the distribution of balls that maximizes f(x)?

Also, what is a good algorithm for solving this kind of problem? GA, PSO, etc.?
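For modest n and m, the maximization can be done exactly by enumerating all C(n + m − 1, m − 1) distributions (stars and bars); the objective below is a hypothetical placeholder:

```python
from itertools import combinations

def all_distributions(n, m):
    """Yield every vector (n1, ..., nm) of non-negative integers summing to n
    (stars and bars: choose m-1 divider positions among n+m-1 slots)."""
    for dividers in combinations(range(n + m - 1), m - 1):
        prev, dist = -1, []
        for d in dividers:
            dist.append(d - prev - 1)  # stars between consecutive dividers
            prev = d
        dist.append(n + m - 2 - prev)  # stars after the last divider
        yield tuple(dist)

def maximize(f, n, m):
    return max(all_distributions(n, m), key=f)

# Hypothetical nonconvex objective: rewards putting balls in few boxes
f = lambda x: sum(ni ** 2 for ni in x)
print(maximize(f, 5, 3))  # (0, 0, 5) -- all balls in one box
```

The count grows combinatorially (e.g. C(29, 9) for n = 20, m = 10), so beyond small instances this serves only as a ground-truth check for whatever GA/PSO variant is used.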

Below I have attached two graphs of issues over time for a site in Chattanooga (one is all issues over time, and the other is all issues restricted to the date of the software update that causes the spikes). I am wondering what time series method I could use to fit these data. I do not want to remove the "outliers"/spikes, because they are exactly what I am trying to fit, so that I can "lay" the graph over another site location and make a prediction about its future spike (therefore, I think smoothing or differencing the data defeats the purpose of what I am trying to do, but maybe I am misunderstanding). There are factors that come into play at Chattanooga and other sites, such as experience and size. Is it possible to incorporate and fit these?

Any help is greatly appreciated!

After testing many instances, I found that when r = V / Vtotal <= ϕ (the golden ratio), the algorithm takes a long time to print out the result.

When the ratio r is very close to ϕ, I noticed that V / Vtotal = (V + Vtotal) / V (which represents the geometric relationship of the two quantities V and Vtotal in the golden ratio).

However, a few of the instances with a ratio r > ϕ can take too long to print the results as well.

So, can this problem be related to ϕ or not?

PS: I got the idea of comparing it to ϕ after reading this answer: Lower bound on running time for solving 3-SAT if P = NP

VRP is a combinatorial optimization problem. I hope to begin with symmetric distances, but I am a new AMPL programmer and need help.

Hi,

I need some help with applying techniques of combinatorial optimization to requirements prioritization and multi-criteria decision making.

Any articles, advice, or other useful material are all welcome.

Best regards

Considering:

- test problems
- quality indicators used in the evaluation

what are the classes of posets closed under taking ordinal sum of posets?

In ant colony optimisation, at each decision point an ant makes a selection from a set of options. Heuristic information is available at each decision point to bias selection towards options suspected to be favourable, without precluding selection of less favourable options. If the favourability of subsequent decisions is known to be influenced by options selected at previous decision points, can the local heuristic information be updated to reflect this? Is there a known technique or category of techniques to handle this?
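For reference, the selection rule in question is usually roulette-wheel selection over tau^alpha · eta^beta, and nothing prevents recomputing eta at each decision point from the current partial solution, which is one way to implement the state-dependent bias asked about. A sketch with made-up values:

```python
import random

def choose_option(pheromone, heuristic, alpha=1.0, beta=2.0):
    """Roulette-wheel selection: option j is picked with probability
    proportional to pheromone[j]**alpha * heuristic[j]**beta."""
    weights = [t ** alpha * h ** beta for t, h in zip(pheromone, heuristic)]
    r = random.uniform(0.0, sum(weights))
    acc = 0.0
    for j, w in enumerate(weights):
        acc += w
        if r <= acc:
            return j
    return len(weights) - 1  # guard against float round-off

# `heuristic` can be recomputed after every decision (e.g. from remaining
# capacity or earlier choices) before the next call, making eta state-dependent.
random.seed(1)
print(choose_option([1.0, 1.0, 1.0], [0.1, 0.1, 5.0]))  # 2 (dominant eta)
```

Whether updating eta mid-construction helps in practice depends on how expensive the recomputation is relative to the tour evaluation; some ACO variants fold such state into the feasibility mask instead.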

I am working on the Inverse Protein Folding Problem. I have created a database of protein sequences in which every sequence is identical to its native sequence up to 65%, and I need to select the sequences that can fold into the native structure.

Is there an algorithm that can solve this optimization problem, and which computer language is suitable to implement it? Any pointers will be appreciated.

I've already completed the algorithms for vehicle routing problems, specifically the last-mile problem. But I wonder, are there any dos and don'ts in developing my own system to solve the vehicle routing problem with an algorithm I developed myself?

Thank you for the response in advance.

It would be helpful if anybody could point me to techniques (deterministic and non-deterministic) that have solved the vehicle routing problem well.

Many thanks.

Patricia

Binary Variable * Real Variable = ?

1) Does it lead to an equivalent 'nonlinear' variable (and thus => MINLP), or

2) does it lead to an equivalent 'integer' variable, 'discrete' I mean (and thus => MILP)?

Which one is correct, and why?

What is your idea for dealing with this product by adding constraints so that the resulting problem is a MILP (if it is not already)?
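For what it is worth, the product z = b·x of a binary b and a bounded continuous x ∈ [x^L, x^U] is commonly replaced by a new continuous variable z together with the standard linearization below, which keeps the model a MILP (and is exact whenever x is bounded):

```latex
x^{L}\,b \;\le\; z \;\le\; x^{U}\,b,
\qquad
x - x^{U}(1-b) \;\le\; z \;\le\; x - x^{L}(1-b)
```

When b = 0 the first pair forces z = 0 (and the second pair is slack, since x ∈ [x^L, x^U]); when b = 1 the second pair forces z = x.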

Regards,

Morteza Shabanzadeh

Hello everyone!

I have a non-linear scalar function that depends on a binary sequence of a fixed number N of elements:

i.e. :

010011001010010110 --> 1.5

110111010010000101 --> 0.8

010101101110101011 --> 1.9

How can I find the combination that yields the maximum value of the scalar function?

I've never dealt with combinatorial optimization; could you suggest some books that may help me to solve this problem?
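One sizing note on the search described above: with N elements there are 2^N sequences, so exhaustive search is only viable for small N (roughly N ≤ 25). A sketch, with a hypothetical stand-in for the real scalar function:

```python
from itertools import product

def brute_force_max(f, n):
    """Try all 2**n binary sequences and return the one maximizing f."""
    return max(product((0, 1), repeat=n), key=f)

# Hypothetical objective standing in for the real (unknown) function
f = lambda bits: sum(bits) - 2 * bits[0]
best = brute_force_max(f, 4)
print(best)  # (0, 1, 1, 1)
```

For the fixed but larger N in the question, black-box methods over bitstrings (genetic algorithms, simulated annealing, tabu search) are the usual fallback, with the brute-force version as a sanity check on tiny instances.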

Thank you,

Alessandro.

I am trying to calculate the most compact way of grouping a set of pixels together. Does anyone have a readable guide on how to do this?

My initial results are given below for clusters of up to 10 pixels. Results are expressed as the sum of unique inter-pixel distances for a given cluster (e.g. for a 3-pixel cluster it is the sum of the distances ab, ac, and bc).

1 = 0

2 = 1

3 = 3.4

4 = 6.8

5 = 13.5

6 = 21.1

7 = 31.4

8 = 44.1

9 = 58.9

10 = 78.5

I will be grateful if anyone could suggest a reference where I can find a formal definition of “binary discrete optimization”

This ordinary differential equation should depend on a parameter to be varied, and it is of second order, like the majority of dynamical equations. After some infinitesimal change of the parameter, the period of the solution drops by a factor of 1.5, 2, 2.5, ... It would be good if the anti-periodicity also disappeared at that point.

I adopted the B&P algorithm to solve an integer programming model, and the branching strategy is to branch on the original variables. However, branching on an original variable does not force the relevant master-problem variables to become integral; they remain fractional, so branching continues on the same original variable, which results in an infinite loop. Has anybody run into this trouble, and how can it be solved?

Normally, to enhance the performance of metaheuristic algorithms, local-search-based techniques are integrated with them. Instead of using local search techniques, if multiple metaheuristic algorithms (like GA, ACO, or PSO) are integrated with each other, is that a better approach?

I am aware that the minimum (cardinality) vertex cover problem on cubic (i.e., 3-regular) graphs is NP-hard. Say a positive integer k > 2 is fixed. Has there been any computational complexity proof (aside from the 3-regular result; note this would be k = 3) showing that the minimum (cardinality) vertex cover problem on k-regular graphs is NP-hard (e.g., 4-regular)? Since k is fixed, you aren't guaranteed the cubic graph instances needed for the classic result I mentioned above.

Note that this problem would be straightforward to see as NP-hard from the result I mentioned at the start if we were to state it for arbitrary regular graphs (since 3-regular is a special case); we don't get that when k is fixed.

Does anybody know of any papers that address the computational complexity of minimum (cardinality) vertex cover on a k-regular graph, when k is fixed/constant? I have been having difficulties trying to find papers that address this (in the slew of documents that cover the classic result of minimum (cardinality) vertex cover on cubic graphs being NP-hard.)

**My goal is to locate a paper that addresses the problem for any k>2 (if it exists), but any details would be helpful.**

**Note: I also asked this question on CSTheory StackExchange. http://cstheory.stackexchange.com/questions/29175/minimum-vertex-cover-on-k-regular-graphs-for-fixed-k2-np-hard-proof**

Thank you so much!

I have a VRP model that considers stochastic simultaneous pickup and delivery services in public bus transport. How can I solve it using a GA? How do I define the population, the crossover, and the mutation using the MATLAB toolbox or a piece of MATLAB code?

I have attached the detail description of the model.

To MAXIMISE certain specified input to P1 from P2?

In this context, a *production system* comprises a *set of IF-THEN rules*, a *working memory*, and an *execution process* including a *conflict resolution convention*.

A *working memory* is a set of variable/value combinations, with each variable appearing at most once.

Here an *IF-THEN rule* is of the form {IF A/x & B/y & C/z … THEN write K/t to <specified wm>}.

*Execution process* for a PS: REPEATEDLY, each rule in the PS is simultaneously tested; if the IF part of a rule fully matches in the working memory, then the variable/value combination of its THEN part is written to the specified working memory.

Two production systems *interact* if the rules of each sometimes write to the other. NB: rules always match only on the working memory of their own PS.

A *conflict resolution convention* is needed when contradictory variable/value combinations (i.e. same variable, different values) can be simultaneously written to the same working memory. In this context, an equi-probable random choice is to be made between the alternatives.

To "*MAXIMISE the input to P1 from P2*" means to maximise the frequency with which any rule in P2 writes some particular variable/value combination (e.g. K8/0) to P1.

Note that the algorithm to be designed (call it **MECE**) can read P2 but NOT alter it. All MECE can do is add additional rules to P1 (possibly with new variables and values in consequence).

I know that the FAP has already been solved using graph coloring, but there is another variant of the FAP, often called the **real frequency assignment problem**. I would like to know what the main headlines of this variant are when compared to graph coloring. Best regards.

I am using SCENRED (GAMS) to apply scenario reduction for a stochastic LP model. I have 10800 scenarios and three uncertain parameters: two of them are in the objective function and one is a right-hand-side value. The two-stage stochastic programming solution should be worse than the wait-and-see result, which is the expected value of the solutions obtained from all 10800 scenarios. But in my case, I found a stochastic programming solution better than wait-and-see. I know this is impossible, but I couldn't figure out the problem in my model or code. I checked all probabilities, scenarios, and the tree; they all seem right. Also, the GAMS code doesn't give any error. Is there anyone using the SCENRED tool of GAMS who has encountered a similar problem before?

I am wondering if anybody can provide any handy resources (for a theoretical computer scientist) on the convex cost flow problem. I have found texts (mostly the combinatorial optimization texts on my shelf), but they discuss the problem and its algorithmic properties only sparingly, and the ones I've found so far take a very deep dive without explaining a whole lot or providing any examples. I get the formulation of the problem, but a bit more would be helpful.

I'm new to the problem, and wondering if anybody can suggest some good texts or papers that cover the problem well: the major algorithmic results (computational complexity and algorithms primarily), or applications showing how researchers have used it to solve other problems in theoretical computer science, combinatorial optimization, or operations research.

If you have resources or suggestions, that would be helpful! Thank you so much, and have a beautiful day :).